Commit

merge and squash dev into master
willpatera committed Jan 12, 2017
1 parent 85aa152 commit b8ee361
Showing 7 changed files with 14 additions and 16 deletions.
5 changes: 3 additions & 2 deletions contents/articles/2016-04_hmd-eyes/index.md
@@ -9,8 +9,9 @@ featured_img_thumb: "../../../../media/images/blog/thumb/plopski_itoh_corneal-re

After receiving many requests from the community, we have taken the first steps towards supporting eye tracking in Virtual Reality and Augmented Reality (VR/AR) head mounted displays (HMDs) with the release of eye tracking add-ons for Oculus DK2 and Epson Moverio BT-200. We are committed to bringing eye tracking to VR/AR HMDs, and plan to create new hardware for the latest VR and AR hardware when it hits the market.

<img src="../../../../media/images/blog/plopski_itoh_corneal-reflection.png" class='Feature-image' alt="Plopski, Itoh, et al. Corneal Imaging">
Corneal reflection of an HMD screen. Image by Alexander Plopski, Yuta Itoh, et al. See their paper: [Corneal-Imaging Calibration for Optical See-Through Head-Mounted Displays](http://campar.in.tum.de/pub/itoh2015vr2/itoh2015vr2.pdf)
<img src="../../../../media/images/blog/plopski_itoh_corneal-reflection.png" class='Feature-image u-padTop--1' alt="Plopski, Itoh, et al. Corneal Imaging">

<div class="small u-padBottom--2" >Corneal reflection of an HMD screen. Image by Alexander Plopski, Yuta Itoh, et al. See their paper: [Corneal-Imaging Calibration for Optical See-Through Head-Mounted Displays](http://campar.in.tum.de/pub/itoh2015vr2/itoh2015vr2.pdf)</div>

## Blackbox vs Open Source Building Blocks
Now that we have the hardware, the next step is to develop software for eye tracking in HMDs. Based on what we have learned from our community and our experience in developing Pupil, we believe that eye tracking in HMDs will not be a “one size fits all” solution. The various applications for eye tracking with AR and VR are extremely diverse and vastly unexplored.
4 changes: 2 additions & 2 deletions contents/articles/2016-11_facevr/index.md
@@ -9,9 +9,9 @@ featured_img_thumb: "../../../../media/images/blog/thumb/facevr_fig9.png"

[Justus Thies](http://lgdv.cs.fau.de/people/card/justus/thies/) et al. have developed a novel real-time, gaze-aware facial capture system to drive a photo-realistically reconstructed digital face in virtual reality. Their approach enables facial reenactment that can transfer facial expressions and realistic eye appearance between a source and a target actor video.

<img class="Feature-image u-padBottom--2 u-padTop--2" src="../../../../media/images/blog/facevr_fig1.png" alt="Real-Time Facial Reenactment and Eye Gaze Control in VR">
<img class="Feature-image u-padTop--1" src="../../../../media/images/blog/facevr_fig1.png" alt="Real-Time Facial Reenactment and Eye Gaze Control in VR">

Source: [FaceVR: Real-Time Facial Reenactment and Eye Gaze Control in VR](https://arxiv.org/abs/1610.03151)
<div class="small u-padBottom--2">Source: [FaceVR: Real-Time Facial Reenactment and Eye Gaze Control in VR](https://arxiv.org/abs/1610.03151)</div>

Head mounted displays (HMDs) provide immersive renderings of virtual environments. But in order to do so, HMDs block the majority of the actor/participant's face. In order to reconstruct the actor/participant's face, Thies et al. use an RGB camera to capture the facial performance of the participant and Pupil Labs' [Oculus Rift DK2 add-on cup](https://pupil-labs.com/store/#vr-ar) to capture eye movements within the HMD. The source actor's facial and eye movement data is then used to drive the photo-realistic facial animations of the target video, thereby enabling gaze-aware facial reenactment.

5 changes: 2 additions & 3 deletions contents/articles/2016-11_focal-musical-expression/index.md
@@ -14,11 +14,10 @@ Focal enables musicians to manage and control electronic effects while both hand
A foot pedal can be used to adjust the selected effects, enabling a musician to maintain posture and balance for minimal disturbance during performance. Check out Stewart Greenhill and Cathie Travers' full NIME 2016 paper [here](http://stewartgreenhill.com/documents/FocalEyeTrackingMusicalExpressionController-NIME2016.pdf).

<div class="Grid Grid--center Grid--justifyCenter">
<img class=".Feature-image--capturePlayerIcons
" src="../../../../media/images/blog/focal-system.jpg" class='Feature-image u-padBottom--1' alt="Focal System">
<img class=".Feature-image--capturePlayerIcons u-padTop--1 u-padBottom--1" src="../../../../media/images/blog/focal-system.jpg" alt="Focal System">
</div>

-<small>Image source: [Focal](http://stewartgreenhill.com/articles/focal/)</small>
+<div class="small u-padBottom--2">Image source: [Focal](http://stewartgreenhill.com/articles/focal/)</div>

The Focal system consists of four main technical components:

5 changes: 3 additions & 2 deletions contents/articles/2016-11_multiplayer_gameplay/index.md
@@ -9,8 +9,9 @@ featured_img_thumb: "../../../../media/images/blog/thumb/gaze-gameplay.png"

[Joshua Newn et al.](http://www.socialnui.unimelb.edu.au/research/social-play/#team) explore the invisible visual signals between players in multiplayer gameplay. The gameplay simulates different gaze conditions to test players' interactions, varying the level of gaze information from invisible to visible.

<img src="../../../../media/images/blog/gaze-gameplay.png" style="width: 70%; margin: auto;" class='Feature-image u-padBottom--1' alt="Exploring the Effects of Gaze Awareness on Multiplayer Gameplay">
Image Source: [Exploring the Effects of Gaze Awareness on Multiplayer Gameplay PDF](http://www.socialnui.unimelb.edu.au/publications/2016-SocialNUI-Newn-3.pdf)
<img src="../../../../media/images/blog/gaze-gameplay.png" style="width: 70%; margin: auto;" class='Feature-image u-padTop--1' alt="Exploring the Effects of Gaze Awareness on Multiplayer Gameplay">

<div class="small u-padBottom--2">Image Source: [Exploring the Effects of Gaze Awareness on Multiplayer Gameplay PDF](http://www.socialnui.unimelb.edu.au/publications/2016-SocialNUI-Newn-3.pdf)</div>

Gaze can provide visual information about a player's intentions. During gameplay, players monitor each other's interactions to gauge and evaluate opponents' intentions and formulate strategies based upon that visual information.

3 changes: 1 addition & 2 deletions contents/articles/2016-11_pupil-lsl-plugin/index.md
@@ -11,9 +11,8 @@ featured_img_thumb: "../../../../media/images/blog/thumb/Lab-Streaming-Layer.jpg
<img src="../../../../media/images/blog/Lab-Streaming-Layer.jpg" style="width:50%" class='Feature-image u-padBottom--1' alt="Lab Streaming Layer">
</div>

<div class="small">Image Source: [Qusp Product Portfolio](https://qusp.io/projects)</div>
<div class="small u-padBottom--2">Image Source: [Qusp Product Portfolio](https://qusp.io/projects)</div>

-<br>
We are excited to introduce the [Pupil + Lab Streaming Layer relay plugin](https://github.com/sccn/labstreaminglayer/tree/master/Apps/PupilLabs). The plugin works with [Pupil Capture](https://github.com/pupil-labs/pupil/wiki/Pupil-Capture) to relay pupil data, gaze data, and notifications to LSL. Users can link the data to other inlets in the network.

The [Lab Streaming Layer](https://github.com/sccn/labstreaminglayer) (LSL) is a system for the unified collection of measurement time series across programs, computers, and devices, handling distributed signal transport, time synchronization, and data collection over a network. LSL supports an extensive range of measurement modalities, including eye tracking.
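
As a rough illustration of the consumer side of such a relay, here is a minimal pylsl sketch for receiving a stream published to LSL; the stream type "Gaze" and the channel layout are assumptions for illustration, not necessarily what the Pupil plugin publishes.

```python
# Minimal sketch: receive an LSL stream with pylsl (assumed stream type "Gaze").
from pylsl import StreamInlet, resolve_stream

# Discover streams of the assumed type on the local network and open an inlet.
streams = resolve_stream('type', 'Gaze')
inlet = StreamInlet(streams[0])

while True:
    # pull_sample() returns the channel values plus an LSL timestamp that is
    # kept synchronized across devices on the network.
    sample, timestamp = inlet.pull_sample()
    print(timestamp, sample)
```
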
5 changes: 2 additions & 3 deletions contents/articles/2016-11_van-gogh-museum-project/index.md
@@ -7,11 +7,10 @@ featured_img: "../../../../media/images/blog/van-gogh-museum.jpg"
featured_img_thumb: "../../../../media/images/blog/thumb/van-gogh-museum.jpg"
---

<img src="../../../../media/images/blog/van-gogh-museum.jpg" class='Feature-image u-padBottom--1' alt="The Van Gogh Museum Eye-tracking Project">
Image Source: [The Van Gogh Museum Eye-tracking Project](http://www.vupsy.nl/van-gogh-museum-eye-tracking-project/)
<img src="../../../../media/images/blog/van-gogh-museum.jpg" class='Feature-image' alt="The Van Gogh Museum Eye-tracking Project">

<div class="small u-padBottom--2">Image Source: [The Van Gogh Museum Eye-tracking Project](http://www.vupsy.nl/van-gogh-museum-eye-tracking-project/)</div>

-<br>
We are really excited to see Pupil used in the [Van Gogh Museum](https://www.vangoghmuseum.nl/en) by researchers from the [Department of Experimental and Applied Psychology](http://www.vupsy.nl/) at the [VU Amsterdam](http://www.vu.nl/en/) to study how we perceive and appreciate art in a real-life environment.

[Francesco Walker](http://www.vupsy.nl/staff-members/francesco-walker/) (assisted by Berno Bucker, [Daniel Schreij](http://www.vupsy.nl/staff-members/daniel-schreij/) and Nicola Anderson, and supervised by prof. Jan Theeuwes) used Pupil to record the gaze patterns of adults and children as they viewed paintings in the Van Gogh museum. The team used Pupil to gain insight into how people look at paintings.
3 changes: 1 addition & 2 deletions contents/articles/2017-01_saliency-in-vr/index.md
@@ -11,9 +11,8 @@ In their recent research paper [Saliency in VR: How do people explore virtual en

<img src="../../../../media/images/blog/vr_saliency.png" class='Feature-image u-padTop--1' alt="Ground Truth Saliency Map">

<div class="small">Saliency map generated using ground truth data collected using Pupil Labs Oculus DK2 eye tracking add-on overlay on top of one of the stimulus panorama images shown to the participants. Image Source: [Fig 5. Page 7.](https://arxiv.org/pdf/1612.04335.pdf)</div>
<div class="small u-padBottom--2">Saliency map generated using ground truth data collected using Pupil Labs Oculus DK2 eye tracking add-on overlay on top of one of the stimulus panorama images shown to the participants. Image Source: [Fig 5. Page 7.](https://arxiv.org/pdf/1612.04335.pdf)</div>

-<br>
To further understand viewing behavior and saliency in VR, Vincent Sitzmann et al. collected a dataset that records gaze data and head orientation from users observing omni-directional stereo panoramas using an Oculus Rift DK2 VR headset with Pupil Labs' [Oculus Rift DK2 add-on cup](https://pupil-labs.com/store/#vr-ar).

The dataset shows that gaze and head orientation can be used to build more accurate saliency maps for VR environments. Based on the data, Sitzmann and his colleagues propose new methods to learn and predict time-dependent saliency in VR. The collected data is a first step towards building saliency models specifically tailored to VR environments. If successful, these VR saliency models could serve as a method to approximate and predict gaze movements using movement data and image information.
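
As a rough sketch of how a ground-truth saliency map like the one in the figure above can be computed from recorded gaze samples, the fragment below accumulates normalized gaze positions into a 2-D histogram over the stimulus image and smooths it with a Gaussian kernel; the function name, parameters, and blur width are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_to_saliency(gaze_points, width, height, sigma_px=30):
    """Turn normalized (x, y) gaze samples into a [0, 1] saliency heat map.

    gaze_points: iterable of (x, y) pairs in [0, 1] image coordinates
    width, height: resolution of the stimulus panorama in pixels
    sigma_px: Gaussian spread in pixels (assumed value, roughly foveal extent)
    """
    heat = np.zeros((height, width), dtype=np.float64)
    for x, y in gaze_points:
        col = min(int(x * width), width - 1)   # clamp to image bounds
        row = min(int(y * height), height - 1)
        heat[row, col] += 1.0                  # count gaze hits per pixel
    heat = gaussian_filter(heat, sigma=sigma_px)  # smooth into a continuous map
    if heat.max() > 0:
        heat /= heat.max()                     # normalize for overlay/display
    return heat
```
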
