Project R-2546


Scalable Algorithms for the Acquisition, Rendering, Compression and Re-lighting of Video-based 3D Objects


This research proposal focuses on 3D-video-based modeling and rendering (3D-VBR). The basic idea of 3D-VBR is to acquire dynamic real-world scenery by means of synchronized video sequences from multiple viewpoints. 3D-VBR will make it possible to view or replay the recorded scenery from novel viewpoints, and will enable 3D editing and re-lighting operations in post-processing. This is not possible with conventional video and film recordings, and it complements traditional 3D computer graphics techniques. We will improve on the state of the art in 3D-VBR in the following respects:

- Acquisition using a large array of inexpensive cameras (LAIC) driven by a commodity PC cluster. A large number of cameras allows higher 3D spatial and directional resolution, as well as higher dynamic range. The construction of such a LAIC poses several challenges to be overcome.
- Real-time rendering of 3D-video from a LAIC, by improving and parallelizing a recently proposed view interpolation algorithm that requires neither prior 3D geometry reconstruction nor model fitting, and that is well suited for implementation on commodity, yet powerful, programmable graphics processing units.
- Real-time compression of 3D-video from a LAIC using motion- and disparity-compensated second-generation wavelets, again avoiding a-priori 3D geometry reconstruction or model fitting.
- Acquisition of the light reflection properties of moving real-world scenery, by means of a LAIC. Light reflection properties are required in order to perform re-lighting in post-production.

Work in this area is relevant to our other research in the fields of computer animation, networked virtual environments, and human-computer interfaces. Building upon the results of this project, we also plan to realize a number of prototype 3D-VBR applications in the future, including an immersive tele-classroom or meeting room, and a broadcast-quality 3D-video studio.
This studio will allow improved video production, and will be used in multidisciplinary research into new, immersive forms of theater performance.
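To make the view interpolation idea concrete: algorithms of this family render a novel viewpoint directly from the camera images, typically by blending nearby cameras with weights that favor cameras whose viewing rays are angularly close to the desired ray. The sketch below is only a generic illustration of such angle-based blending, not the specific algorithm referenced in the proposal; the function name, the falloff exponent `k`, and the camera layout are all hypothetical.

```python
import math

def blend_weights(virtual_cam, cameras, point, k=2.0):
    """Angle-based blending weights for free-viewpoint rendering.

    Each camera's weight falls off with the angle between its ray to a
    scene point and the virtual camera's ray to the same point, so
    cameras that see the point from nearly the desired direction
    dominate the blend. Illustrative sketch only; `k` (falloff
    exponent) is an assumed parameter, not from the proposal.
    """
    def ray(src):
        # Unit direction from a camera position toward the scene point.
        d = [p - s for p, s in zip(point, src)]
        n = math.sqrt(sum(c * c for c in d))
        return [c / n for c in d]

    v = ray(virtual_cam)
    raw = []
    for cam in cameras:
        c = ray(cam)
        cosang = max(-1.0, min(1.0, sum(a * b for a, b in zip(v, c))))
        ang = math.acos(cosang)
        raw.append(1.0 / (ang ** k + 1e-6))  # smaller angle -> larger weight
    total = sum(raw)
    return [w / total for w in raw]  # normalized: weights sum to 1
```

In a full renderer these weights would be evaluated per pixel (or per fragment on the GPU) and used to blend the colors sampled from each camera image; the proposal's point is that no explicit 3D model needs to be reconstructed first.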

Period of project

01 January 2005 - 31 December 2008