If you missed SIGGRAPH 2017, watch a selection of recorded Live Streaming Sessions.
The accessibility, visual comfort, and quality of current VR devices are limited. This study of optocomputational display modes shows their potential to improve the viewing experience for users across ages and with common refractive errors, laying the foundation for next-generation computational near-eye displays that everyone can use.
Nitish Padmanaban
Stanford University
Robert Konrad
Stanford University
Emily Cooper
Dartmouth College
Gordon Wetzstein
Stanford University
This talk explores a simple rendering-in-the-loop optimization model for an SLM-based head-mounted display architecture, enabling color-image decompositions at resolutions that would otherwise be computationally infeasible.
Nathan Matsuda
Oculus Research
Alexander Fix
Oculus Research
Douglas Lanman
Oculus Research
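To make the rendering-in-the-loop idea concrete, here is a minimal sketch assuming a toy forward model in which a few time-multiplexed grayscale SLM frames, each lit by one LED color, sum to the perceived image over a refresh cycle. The frame count, illuminants, resolution, and gradient step are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: optimize SLM patterns so the rendered (forward-modeled)
# image matches a target. All parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
H, W = 64, 64                       # toy resolution
target = rng.random((H, W, 3))      # target color image in [0, 1]

n_frames = 3                        # time-multiplexed SLM frames
led_colors = np.eye(3)              # assumed: one pure R, G, B illuminant per frame
slm = rng.random((n_frames, H, W))  # SLM transmission patterns to optimize

lr = 0.5
for step in range(200):
    # Render: sum of each frame's pattern tinted by its illuminant.
    rendered = np.einsum('fhw,fc->hwc', slm, led_colors)
    err = rendered - target
    # Gradient of 0.5 * ||rendered - target||^2 w.r.t. the SLM patterns.
    grad = np.einsum('hwc,fc->fhw', err, led_colors)
    slm = np.clip(slm - lr * grad, 0.0, 1.0)  # SLM values are physical, in [0, 1]

rendered = np.einsum('fhw,fc->hwc', slm, led_colors)
print('final L2 error:', np.linalg.norm(rendered - target))
```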
This talk presents a novel end-to-end framework that uses real-time eye-gaze information to significantly reduce the bandwidth required for content delivery, a key prerequisite for truly mobile VR services.
Konrad Tollmar
KTH Royal Institute of Technology
Pietro Lungaro
KTH Royal Institute of Technology
Ashutosh Mittal
KTH Royal Institute of Technology
Alfredo Fanghella Valero
KTH Royal Institute of Technology
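As a rough illustration of the gaze-contingent idea above, the sketch below assigns high-bitrate tiles near the gaze point and progressively cheaper tiles toward the periphery. The tile grid, eccentricity thresholds, and bitrate tiers are assumptions for illustration, not the framework's actual parameters.

```python
# Hedged sketch of gaze-contingent tile streaming: tiles near the gaze
# point get high-bitrate variants, the periphery gets low-bitrate ones.
import math

TILE_GRID = (8, 8)                 # tiles across the frame (cols, rows); assumed
BITRATES_KBPS = [8000, 2000, 400]  # foveal / mid / peripheral tiers; assumed

def tile_quality(gaze_uv):
    """Map a normalized gaze point (u, v in [0, 1]) to a per-tile bitrate plan."""
    cols, rows = TILE_GRID
    gx, gy = gaze_uv[0] * cols, gaze_uv[1] * rows
    plan = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # Distance from gaze in tile units, measured at tile centers.
            d = math.hypot(c + 0.5 - gx, r + 0.5 - gy)
            tier = 0 if d < 1.5 else (1 if d < 3.0 else 2)
            row.append(BITRATES_KBPS[tier])
        plan.append(row)
    return plan

plan = tile_quality((0.5, 0.5))    # user looking at the frame center
total = sum(sum(row) for row in plan)
print(f'total bitrate: {total / 1000:.1f} Mbps')
```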
Experiencing VR requires users to wear a headset, which occludes the face and blocks eye contact. This talk presents a technique that virtually removes the headset and reveals the face underneath it, using a combination of 3D vision, machine learning, and graphics techniques, and demonstrates its application to mixed reality.
Christian Frueh
Google Inc., Google Research
Avneesh Sud
Google Inc., Google Research
Vivek Kwatra
Google Inc., Google Research
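The compositing step of such a headset-removal pipeline might look like the sketch below, which alpha-blends a gaze-matched face render over the tracked headset region of the live frame. The inputs and the soft matte construction are illustrative assumptions rather than the authors' pipeline.

```python
# Illustrative sketch of the "headset removal" compositing step.
import numpy as np

def remove_headset(frame, face_render, mask):
    """Blend a synthesized face into the live frame.

    frame       -- HxWx3 live camera image, float in [0, 1]
    face_render -- HxWx3 face image re-rendered for the current eye gaze,
                   already warped into the camera's viewpoint (assumed given)
    mask        -- HxW soft matte: 1 inside the headset region, feathered
                   at the edges to hide the seam
    """
    alpha = mask[..., None]
    return alpha * face_render + (1.0 - alpha) * frame

# Toy data standing in for tracked inputs.
H, W = 120, 160
frame = np.full((H, W, 3), 0.3)
face_render = np.full((H, W, 3), 0.7)
mask = np.zeros((H, W))
mask[30:80, 50:110] = 1.0          # headset region from 3D pose tracking
out = remove_headset(frame, face_render, mask)
print(out.shape, out.min(), out.max())
```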