We are rapidly moving toward a world where personal networked video cameras will be ubiquitous. Already, camera-equipped cell phones are becoming commonplace. Imagine being able to tap into all of these real-time video feeds to remotely explore the world live. We introduce RealityFlythrough, a tele-reality/telepresence system that makes this vision possible. By situating live 2D images in a 3D model of the world, RealityFlythrough allows any space to be explored remotely. No special cameras, tripods, rigs, scaffolding, or lighting are required to create the model, and no lengthy preprocessing of images is necessary. Rather than trying to achieve photorealism at every point in space, we focus on giving the user a sense of how the images spatially relate to one another. By providing spatial cues in the form of dynamic transitions, we can approximate tele-reality and harness cameras in the wild. This paper focuses on the sensibility of these imperfect dynamic transitions from camera to camera. We present early experimental results suggesting that imperfect transitions are more sensible, and provide a more pleasant user experience, than no transitions at all.
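The dynamic transitions described above can be pictured as a virtual camera that glides from one real camera's pose to another, giving the viewer a spatial cue about how the two images relate. The sketch below is purely illustrative and is not the RealityFlythrough implementation; the `CameraPose` type, the interpolation scheme, and the step count are all assumptions made for this example.

```python
from dataclasses import dataclass

@dataclass
class CameraPose:
    # Position in world coordinates plus a yaw angle in degrees.
    x: float
    y: float
    z: float
    yaw: float

def interpolate_pose(a: CameraPose, b: CameraPose, t: float) -> CameraPose:
    """Blend two camera poses; t runs from 0 (pose a) to 1 (pose b)."""
    lerp = lambda p, q: p + (q - p) * t
    # Take the shortest angular path so the view never spins the long way round.
    d = ((b.yaw - a.yaw + 180.0) % 360.0) - 180.0
    return CameraPose(lerp(a.x, b.x), lerp(a.y, b.y), lerp(a.z, b.z),
                      (a.yaw + d * t) % 360.0)

def transition_frames(a: CameraPose, b: CameraPose, steps: int = 10):
    """Pose sequence for one camera-to-camera transition."""
    return [interpolate_pose(a, b, i / steps) for i in range(steps + 1)]
```

Rendering each intermediate pose with the source and destination images projected into the shared 3D model is what produces the "imperfect but sensible" transition the abstract refers to.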
The authors of these documents have submitted their reports to this technical report series for the purpose of non-commercial dissemination of scientific work. The reports are copyrighted by the authors, and their existence in electronic format does not imply that the authors have relinquished any rights. You may copy a report for scholarly, non-commercial purposes, such as research or instruction, provided that you agree to respect the author's copyright. For information concerning the use of this document for other than research or instructional purposes, contact the authors. Other information concerning this technical report series can be obtained from the Computer Science and Engineering Department at the University of California at San Diego, email@example.com.