If you missed SIGGRAPH 2017 watch a selection of recorded Live Streaming Sessions.
This work presents a systematic analysis of viewer behavior and perceived continuity in edited VR video content. The results have direct implications for VR filmmaking, informing content creators about the responses that particular edit configurations may elicit in the audience.
Ana Serrano
Universidad de Zaragoza
Vincent Sitzmann
Stanford University
Jaime Ruiz-Borau
Universidad de Zaragoza
Gordon Wetzstein
Stanford University
Diego Gutierrez
Universidad de Zaragoza
Belen Masia
Universidad de Zaragoza
This method adjusts visual design parameters using crowdsourced human computation: it decomposes a high-dimensional parameter-search task into a sequence of one-dimensional line-search microtasks that crowd workers can perform, and it formulates the procedure using Bayesian optimization techniques. The paper applies the method to photo enhancement and material BRDF design.
Yuki Koyama
The University of Tokyo
Issei Sato
The University of Tokyo
Daisuke Sakamoto
The University of Tokyo
Takeo Igarashi
The University of Tokyo
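The decomposition described above can be illustrated with a small sketch. Here a synthetic objective function stands in for crowdsourced human judgement, and a dense grid of candidates along each line replaces the paper's Bayesian-optimization-based slider handling; the function names and constants are illustrative, not from the paper.

```python
import random

def line_search_optimize(f, x0, n_iters=30, t_grid=21, seed=0):
    """Decompose a high-dimensional search into 1-D line searches.

    Each iteration corresponds to one microtask: a worker is shown a
    slider parameterizing a line through the current best point and
    picks the best-looking position. Here the objective f simulates
    that judgement, and a grid search along the line stands in for
    the paper's Bayesian-optimization machinery.
    """
    rng = random.Random(seed)
    x = list(x0)
    d = len(x)
    for _ in range(n_iters):
        # Random unit direction defining this microtask's slider.
        direction = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = sum(c * c for c in direction) ** 0.5
        direction = [c / norm for c in direction]
        # Evaluate candidate points along the line x + t * direction.
        best_t, best_val = 0.0, f(x)
        for i in range(t_grid):
            t = -1.0 + 2.0 * i / (t_grid - 1)
            cand = [xi + t * di for xi, di in zip(x, direction)]
            val = f(cand)
            if val < best_val:
                best_t, best_val = t, val
        x = [xi + best_t * di for xi, di in zip(x, direction)]
    return x

# Toy "design quality" objective: distance to a hidden target setting.
target = [0.3, -0.7, 0.5, 0.1]
f = lambda x: sum((a - b) ** 2 for a, b in zip(x, target))
result = line_search_optimize(f, [0.0] * 4)
```

Each line search only needs a single one-dimensional judgement, which is what makes the microtasks feasible for untrained crowd workers.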
This work applies a formal, broadly applicable, procedural, and empirically grounded association between personality and body motion, using Laban Movement Analysis as an intermediate representation, to modify any given virtual human-body animation.
Norman Badler
University of Pennsylvania
Funda Durupinar
Oregon Health & Science University
Mubbasir Kapadia
Rutgers University
Susan Deutsch
Drexel University
Michael Neff
University of California Davis
This paper presents two experiments that further our understanding of how virtual-agent personality is perceived. It identifies the aspects of gesture performance relevant to each trait in the Big Five personality model, generates four distinctly perceived personalities by adjusting gesture performance alone, and identifies potential limitations of the approach.
Harrison Jesse Smith
University of California, Davis
Michael Neff
University of California, Davis
This paper presents an end-to-end system that uses saccade-landing-position prediction to combat system latency in gaze-contingent systems. The authors created a measurement-driven model for predicting saccade landing positions and validated its application in rendering.
Elena Arabadzhiyska
Universität des Saarlandes, Max Planck Institut für Informatik
Okan Tursun
Max Planck Institut für Informatik
Karol Myszkowski
Max Planck Institut für Informatik
Hans-Peter Seidel
Max Planck Institut für Informatik
Piotr Didyk
Universität des Saarlandes
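The idea of predicting where a saccade will land from its early samples can be sketched with a toy 1-D model. This is not the paper's measurement-driven model: it simply assumes a roughly symmetric saccade velocity profile, so that the eye is near the midpoint of its trajectory at peak velocity, and extrapolates the landing position from there. All names and constants below are illustrative.

```python
import math

def predict_landing(samples, dt):
    """Toy 1-D saccade-landing predictor (illustrative only).

    Assumes the velocity profile is roughly symmetric, so the eye is
    near the trajectory midpoint at peak velocity; the landing is then
    extrapolated as start + 2 * (pos_at_peak_velocity - start).
    `samples` are gaze positions (degrees) sampled every `dt` seconds,
    covering the saccade onset up to and slightly past the velocity peak.
    """
    # Finite-difference velocities between consecutive samples.
    velocities = [(b - a) / dt for a, b in zip(samples, samples[1:])]
    peak_i = max(range(len(velocities)), key=lambda i: abs(velocities[i]))
    pos_at_peak = 0.5 * (samples[peak_i] + samples[peak_i + 1])
    return samples[0] + 2.0 * (pos_at_peak - samples[0])

# Synthetic 12-degree saccade with a logistic (symmetric) profile.
amp, dur, dt = 12.0, 0.06, 0.001
ts = [i * dt for i in range(int(dur / dt) + 1)]
pos = [amp / (1.0 + math.exp(-12.0 * (t - dur / 2) / dur)) for t in ts]

# Predict from samples up to just past the velocity peak (t <= 40 ms).
landing = predict_landing(pos[:41], dt)
```

A renderer can use such a prediction to start updating the gaze-contingent region toward the expected landing point before the saccade finishes, hiding part of the system latency.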