Commit 62f0bb8 (parent b42ee06): 2 changed files with 22 additions and 1 deletion.
@@ -0,0 +1,21 @@
---
title: "Saliency in VR: How do people explore virtual environments?"
date: Mon Jan 09 2017 11:45:18 GMT+0700 (ICT)
author: Pupil Dev Team
subtitle: "Vincent Sitzmann et al. use the Pupil Labs Oculus DK2 add-on to research and develop new methods to learn and predict saliency in VR environments..."
featured_img: "../../../../media/images/blog/vr_saliency.png"
featured_img_thumb: "../../../../media/images/blog/thumb/vr_saliency.png"
---
In their recent research paper [Saliency in VR: How do people explore virtual environments?](https://arxiv.org/pdf/1612.04335.pdf), Vincent Sitzmann et al. argue that viewing behavior in VR environments is much more complex than on conventional displays because of the kinds of interaction and kinematics that VR technology makes possible.

<img src="../../../../media/images/blog/vr_saliency.png" class='Feature-image u-padTop--1' alt="Ground Truth Saliency Map">
<div class="small">Saliency map generated from ground truth data collected with the Pupil Labs Oculus DK2 eye tracking add-on, overlaid on one of the stimulus panorama images shown to participants. Image source: [Fig. 5, page 7.](https://arxiv.org/pdf/1612.04335.pdf)</div>

<br>
To further understand viewing behavior and saliency in VR, Vincent Sitzmann et al. collected a dataset that records gaze data and head orientation from users observing omni-directional stereo panoramas in an Oculus Rift DK2 VR headset fitted with Pupil Labs' [Oculus Rift DK2 add-on cup](https://pupil-labs.com/store/#vr-ar).
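If you would like to record similar gaze data with your own DK2 add-on, one way to access gaze estimates in real time is Pupil Capture's ZeroMQ/msgpack network interface. The snippet below is only a minimal sketch, assuming Pupil Remote is running on its default local port; field names such as `norm_pos` and `confidence` should be verified against the current network API documentation, and head orientation would come separately from the headset's own tracking.

```python
import zmq
import msgpack

# Connect to Pupil Remote (assumed to be running on its default local port).
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# Ask Pupil Remote for the port of the data subscription socket.
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to all gaze topics.
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.setsockopt_string(zmq.SUBSCRIBE, "gaze.")

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.unpackb(payload, raw=False)
    # norm_pos is the estimated gaze point in normalized scene coordinates.
    print(topic.decode(), gaze["timestamp"], gaze["norm_pos"], gaze["confidence"])
```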
The dataset shows that gaze and head orientation can be used to build more accurate saliency maps for VR environments. Based on the data, Sitzmann and his colleagues propose new methods to learn and predict time-dependent saliency in VR. The collected data is a first step towards building saliency models specifically tailored to VR environments. If successful, these VR saliency models could serve as a method to approximate and predict gaze movements using movement data and image information.
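As a rough illustration of the general idea (not the exact method from the paper), an empirical saliency map like the one shown above can be built by accumulating recorded gaze samples into a fixation histogram and smoothing it with a Gaussian kernel. The sketch below assumes gaze points in normalized image coordinates and uses a hypothetical helper name; a proper treatment of omni-directional panoramas would additionally need to account for the distortion of the equirectangular projection.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def empirical_saliency_map(gaze_points, width, height, sigma_px=30.0):
    """Accumulate normalized gaze samples (x, y in [0, 1]) into a fixation
    histogram and blur it into a saliency map of shape (height, width)."""
    hist = np.zeros((height, width), dtype=np.float64)
    for x_norm, y_norm in gaze_points:
        col = min(int(x_norm * width), width - 1)
        # Flip y if the gaze coordinates use a bottom-left origin,
        # since the image array's origin is at the top-left.
        row = min(int((1.0 - y_norm) * height), height - 1)
        hist[row, col] += 1.0
    saliency = gaussian_filter(hist, sigma=sigma_px)
    peak = saliency.max()
    return saliency / peak if peak > 0 else saliency
```

The resulting map can then be normalized and overlaid on the stimulus image for inspection, as in the figure above.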
If you use Pupil in your research and have published work, please send us a note. We would love to include your work here on the blog and in a list of [work that cites Pupil](https://docs.google.com/spreadsheets/d/1ZD6HDbjzrtRNB4VB0b7GFMaXVGKZYeI0zBOBEEPwvBI/).
Submodule media updated: 4 files
- images/blog/thumb/eyefield.jpg
- images/blog/thumb/vr_saliency.png
- images/blog/visfield.png
- images/blog/vr_saliency.png