From b8ee361e9392d65a7bfa0efb766f6df12d2210da Mon Sep 17 00:00:00 2001
From: Will Patera
Date: Thu, 12 Jan 2017 11:19:05 +0700
Subject: [PATCH] merge and squash dev into master

---
 contents/articles/2016-04_hmd-eyes/index.md                 | 5 +++--
 contents/articles/2016-11_facevr/index.md                   | 4 ++--
 contents/articles/2016-11_focal-musical-expression/index.md | 5 ++---
 contents/articles/2016-11_multiplayer_gameplay/index.md     | 5 +++--
 contents/articles/2016-11_pupil-lsl-plugin/index.md         | 3 +--
 contents/articles/2016-11_van-gogh-museum-project/index.md  | 5 ++---
 contents/articles/2017-01_saliency-in-vr/index.md           | 3 +--
 7 files changed, 14 insertions(+), 16 deletions(-)

diff --git a/contents/articles/2016-04_hmd-eyes/index.md b/contents/articles/2016-04_hmd-eyes/index.md
index b6a3bbd7..516a35e7 100644
--- a/contents/articles/2016-04_hmd-eyes/index.md
+++ b/contents/articles/2016-04_hmd-eyes/index.md
@@ -9,8 +9,9 @@ featured_img_thumb: "../../../../media/images/blog/thumb/plopski_itoh_corneal-re

After receiving many requests from the community, we have taken the first steps towards supporting eye tracking in Virtual Reality and Augmented Reality (VR/AR) head mounted displays (HMDs) with the release of eye tracking add-ons for Oculus DK2 and Epson Moverio BT-200. We are committed to bringing eye tracking to VR/AR HMDs, and plan to create new eye tracking hardware for the latest VR and AR devices as they hit the market.

-Plopski, Itoh, et al. Corneal Imaging
-Corneal reflection of an HMD screen. Image by Alexander Plopski, Yuta Itoh, et al. See their paper: [Corneal-Imaging Calibration for Optical See-Through Head-Mounted Displays](http://campar.in.tum.de/pub/itoh2015vr2/itoh2015vr2.pdf)
+Plopski, Itoh, et al. Corneal Imaging
+
+
Corneal reflection of an HMD screen. Image by Alexander Plopski, Yuta Itoh, et al. See their paper: [Corneal-Imaging Calibration for Optical See-Through Head-Mounted Displays](http://campar.in.tum.de/pub/itoh2015vr2/itoh2015vr2.pdf)
## Blackbox vs Open Source Building Blocks

Now that we have the hardware, the next step is to develop software for eye tracking in HMDs. Based on what we have learned from our community and our experience developing Pupil, we believe that eye tracking in HMDs will not be a “one size fits all” solution. The various applications for eye tracking with AR and VR are extremely diverse and vastly unexplored.

diff --git a/contents/articles/2016-11_facevr/index.md b/contents/articles/2016-11_facevr/index.md
index 0fa48776..c5a6a0bd 100644
--- a/contents/articles/2016-11_facevr/index.md
+++ b/contents/articles/2016-11_facevr/index.md
@@ -9,9 +9,9 @@ featured_img_thumb: "../../../../media/images/blog/thumb/facevr_fig9.png"

[Justus Thies](http://lgdv.cs.fau.de/people/card/justus/thies/) et al. have developed a novel approach for real-time, gaze-aware facial capture that drives a photo-realistically reconstructed digital face in virtual reality. Their approach enables facial reenactment that can transfer facial expressions and realistic eye appearance between a source and a target actor video.

-Real-Time Facial Reenactment and Eye Gaze Control in VR
+Real-Time Facial Reenactment and Eye Gaze Control in VR
-Source: [FaceVR: Real-Time Facial Reenactment and Eye Gaze Control in VR](https://arxiv.org/abs/1610.03151)
+
Source: [FaceVR: Real-Time Facial Reenactment and Eye Gaze Control in VR](https://arxiv.org/abs/1610.03151)
Head mounted displays (HMDs) provide immersive renderings of virtual environments, but in doing so they block the majority of the actor/participant's face. To reconstruct the actor/participant's face, Thies et al. use an RGB camera to capture the participant's facial performance and Pupil Labs' [Oculus Rift DK2 add-on cup](https://pupil-labs.com/store/#vr-ar) to capture eye movements within the HMD. The source actor's facial and eye movement data is then used to drive the photo-realistic facial animations of the target video, thereby enabling gaze-aware facial reenactment.

diff --git a/contents/articles/2016-11_focal-musical-expression/index.md b/contents/articles/2016-11_focal-musical-expression/index.md
index a89ba1a1..cf3136aa 100644
--- a/contents/articles/2016-11_focal-musical-expression/index.md
+++ b/contents/articles/2016-11_focal-musical-expression/index.md
@@ -14,11 +14,10 @@ Focal enables musicians to manage and control electronic effects while both hand

A foot pedal can be used to adjust the selected effects, enabling a musician to maintain posture and balance for minimal disturbance during performance. Check out Stewart Greenhill and Cathie Travers' full paper for NIME 2016 [here](http://stewartgreenhill.com/documents/FocalEyeTrackingMusicalExpressionController-NIME2016.pdf).
- Focal System + Focal System
-Image source: [Focal](http://stewartgreenhill.com/articles/focal/) +
Image source: [Focal](http://stewartgreenhill.com/articles/focal/)
The Focal system consists of four main technical components:

diff --git a/contents/articles/2016-11_multiplayer_gameplay/index.md b/contents/articles/2016-11_multiplayer_gameplay/index.md
index 7b3baf65..cdaf5794 100644
--- a/contents/articles/2016-11_multiplayer_gameplay/index.md
+++ b/contents/articles/2016-11_multiplayer_gameplay/index.md
@@ -9,8 +9,9 @@ featured_img_thumb: "../../../../media/images/blog/thumb/gaze-gameplay.png"

[Joshua Newn et al.](http://www.socialnui.unimelb.edu.au/research/social-play/#team) explore the invisible visual signals exchanged between players in multiplayer gameplay. The gameplay simulated different gaze conditions to test players' interactions, varying the level of gaze information from invisible to visible.

-Exploring the Effects of Gaze Awareness on Multiplayer Gameplay
-Image Source: [Exploring the Effects of Gaze Awareness on Multiplayer Gameplay PDF](http://www.socialnui.unimelb.edu.au/publications/2016-SocialNUI-Newn-3.pdf)
+Exploring the Effects of Gaze Awareness on Multiplayer Gameplay
+
+
Image Source: [Exploring the Effects of Gaze Awareness on Multiplayer Gameplay PDF](http://www.socialnui.unimelb.edu.au/publications/2016-SocialNUI-Newn-3.pdf)
Gaze can provide visual information about a player's intentions. During gameplay, players monitor each other's interactions to infer and evaluate opponents' intentions, and formulate strategies based upon this visual information.

diff --git a/contents/articles/2016-11_pupil-lsl-plugin/index.md b/contents/articles/2016-11_pupil-lsl-plugin/index.md
index 670cd654..ce2379d2 100644
--- a/contents/articles/2016-11_pupil-lsl-plugin/index.md
+++ b/contents/articles/2016-11_pupil-lsl-plugin/index.md
@@ -11,9 +11,8 @@ featured_img_thumb: "../../../../media/images/blog/thumb/Lab-Streaming-Layer.jpg"

Lab Streaming Layer
-
Image Source: [Qusp Product Portfolio](https://qusp.io/projects)
+
Image Source: [Qusp Product Portfolio](https://qusp.io/projects)
-
We are excited to introduce the [Pupil + Lab Streaming Layer relay plugin](https://github.com/sccn/labstreaminglayer/tree/master/Apps/PupilLabs). The plugin works with [Pupil Capture](https://github.com/pupil-labs/pupil/wiki/Pupil-Capture) to relay pupil data, gaze data, and notifications to LSL, so that other applications on the network can receive the data through LSL inlets. The [Lab Streaming Layer](https://github.com/sccn/labstreaminglayer) (LSL) is a system that provides unified collection of measurement time series between programs, computers, and devices over a network, handling distributed signal transport, time synchronization, and data collection. LSL supports an extensive range of measurement modalities, including eye tracking.
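To give a feel for the consumer side, here is a minimal sketch of an LSL inlet in Python using `pylsl`. The stream type `"Gaze"` and the channel layout are assumptions made for illustration only; consult the relay plugin's documentation for the stream names, types, and formats it actually publishes.

```python
# Minimal LSL consumer sketch (`pip install pylsl`). The stream type "Gaze"
# is an assumption for illustration; adjust it to match what the relay
# plugin actually publishes.
from pylsl import StreamInlet, resolve_byprop

# Look for a gaze stream on the local network (e.g. one relayed from Pupil Capture).
streams = resolve_byprop("type", "Gaze", timeout=10.0)
if not streams:
    raise RuntimeError("No gaze stream found on the network.")

inlet = StreamInlet(streams[0])

# Pull samples together with their LSL timestamps.
while True:
    sample, timestamp = inlet.pull_sample(timeout=1.0)
    if sample is not None:
        print(timestamp, sample)
```

The same pattern should apply to the relayed pupil data and notifications, with only the resolver query changing to match how the plugin names those streams.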
diff --git a/contents/articles/2016-11_van-gogh-museum-project/index.md b/contents/articles/2016-11_van-gogh-museum-project/index.md
index 4c758946..4534d84e 100644
--- a/contents/articles/2016-11_van-gogh-museum-project/index.md
+++ b/contents/articles/2016-11_van-gogh-museum-project/index.md
@@ -7,11 +7,10 @@ featured_img: "../../../../media/images/blog/van-gogh-museum.jpg"
featured_img_thumb: "../../../../media/images/blog/thumb/van-gogh-museum.jpg"
---

-The Van Gogh Museum Eye-tracking Project
-Image Source: [The Van Gogh Museum Eye-tracking Project](http://www.vupsy.nl/van-gogh-museum-eye-tracking-project/)
+The Van Gogh Museum Eye-tracking Project
+
Image Source: [The Van Gogh Museum Eye-tracking Project](http://www.vupsy.nl/van-gogh-museum-eye-tracking-project/)
-
We are really excited to see Pupil used in the [Van Gogh Museum](https://www.vangoghmuseum.nl/en) by researchers from the [Department of Experimental and Applied Psychology](http://www.vupsy.nl/) at the [VU Amsterdam](http://www.vu.nl/en/) to study how we perceive and appreciate art in a real-life environment. [Francesco Walker](http://www.vupsy.nl/staff-members/francesco-walker/) (assisted by Berno Bucker, [Daniel Schreij](http://www.vupsy.nl/staff-members/daniel-schreij/) and Nicola Anderson, and supervised by Prof. Jan Theeuwes) used Pupil to record the gaze patterns of adults and children as they viewed paintings in the Van Gogh Museum, with the goal of gaining insight into how people look at paintings.

diff --git a/contents/articles/2017-01_saliency-in-vr/index.md b/contents/articles/2017-01_saliency-in-vr/index.md
index ea21dd7e..8e4bee70 100644
--- a/contents/articles/2017-01_saliency-in-vr/index.md
+++ b/contents/articles/2017-01_saliency-in-vr/index.md
@@ -11,9 +11,8 @@ In their recent research paper [Saliency in VR: How do people explore virtual en

Ground Truth Saliency Map
-
Saliency map generated using ground truth data collected using Pupil Labs Oculus DK2 eye tracking add-on overlay on top of one of the stimulus panorama images shown to the participants. Image Source: [Fig 5. Page 7.](https://arxiv.org/pdf/1612.04335.pdf)
+
Saliency map generated from ground-truth data collected with the Pupil Labs Oculus DK2 eye tracking add-on, overlaid on one of the stimulus panorama images shown to the participants. Image Source: [Fig. 5, page 7](https://arxiv.org/pdf/1612.04335.pdf)
-
To further understand viewing behavior and saliency in VR, Vincent Sitzmann et al. collected a dataset that records gaze data and head orientation from users observing omni-directional stereo panoramas using an Oculus Rift DK2 VR headset with Pupil Labs' [Oculus Rift DK2 add-on cup](https://pupil-labs.com/store/#vr-ar). The dataset shows that gaze and head orientation can be used to build more accurate saliency maps for VR environments. Based on the data, Sitzmann and his colleagues propose new methods to learn and predict time-dependent saliency in VR. The collected data is a first step towards building saliency models specifically tailored to VR environments. If successful, these VR saliency models could serve as a method to approximate and predict gaze movements using movement data and image information.
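As a rough illustration of how a ground-truth saliency map can be built from recorded gaze samples, the sketch below bins gaze positions over the panorama and smooths them with a Gaussian kernel. This is a minimal example under simplifying assumptions (normalized 2D gaze positions on an equirectangular panorama), not the authors' implementation.

```python
# Minimal sketch: turn gaze samples into a fixation-density "saliency" map
# over an equirectangular panorama. Not the paper's pipeline.
import numpy as np
from scipy.ndimage import gaussian_filter


def saliency_map(gaze_xy, width=2048, height=1024, sigma_px=20.0):
    """gaze_xy: (N, 2) array of gaze points normalized to [0, 1] x [0, 1],
    where x maps to panorama longitude and y to latitude."""
    cols = np.clip((gaze_xy[:, 0] * width).astype(int), 0, width - 1)
    rows = np.clip((gaze_xy[:, 1] * height).astype(int), 0, height - 1)

    # Accumulate gaze hits into a 2D histogram over the panorama.
    counts = np.zeros((height, width), dtype=np.float64)
    np.add.at(counts, (rows, cols), 1.0)

    # Smooth to approximate foveal extent and tracker noise, then normalize.
    density = gaussian_filter(counts, sigma=sigma_px)
    return density / density.max()
```

In practice, the smoothing width would be chosen to roughly match the foveal angle and the tracker's accuracy at the panorama resolution, and the latitude-dependent distortion of the equirectangular projection would also need to be taken into account.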