Project R-4343


Advanced re-lighting algorithms for post-production


Advanced post-production techniques are often used in the film industry to make movies at a much lower cost than would be possible with additional recordings or by hiring real actors and extras [1]. A major problem during the post-production phase is editing or re-lighting objects or actors in a scene in order to reach a photo-realistic result. The difficulty of this problem lies in the simultaneous estimation of the materials and the lighting in a scene under arbitrary circumstances, which is still considered an open problem [2]. Although some progress has been made for outdoor lighting [3, 4, 5, 6], many film scenes are recorded in studios, where part of the lighting may also be located inside the scene itself. Examples are scenes that appear to take place in a home or office, but are actually recorded on a set with a decor. It is hard to find useful re-lighting solutions for the film industry that support arbitrary light sources. Current re-lighting algorithms are also computationally intensive, so one must wait hours to see the result. Real-time algorithms that generate an approximation of the re-lit result would yield significant time savings; such algorithms could also give directors a good idea of the final result during the recordings.

Traditionally, digital video is treated as a series of planar raster images consisting of pixels with color values. This video data is unstructured and disconnected from the recorded scene. That it is very difficult to perform certain tasks without additional information about the content of an image is shown by the numerous publications on depth estimation [7], object recognition [8], object tracking [9], face recognition [10], re-lighting [11, 12, 13, 14, 15, 16], etc. For fundamental tasks such as edge detection an acceptable performance level has been achieved, but for all complex image processing tasks a human still performs much better than a computer [17, 18].
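To make the re-lighting problem concrete, the following is a minimal sketch of the simplest case: if the scene structure (per-pixel surface normals and diffuse albedo) were already known, re-rendering under a new directional light reduces to Lambert's cosine law. The function name, array layout, and the Lambertian assumption are illustrative choices for this sketch, not part of any cited system; the hard part the text describes is precisely that normals and albedo must first be estimated from the footage.

```python
import numpy as np

def relight_lambertian(albedo, normals, light_dir):
    """Re-render a scene under a new directional light, assuming a
    Lambertian (perfectly diffuse) reflectance model.

    albedo    : (H, W, 3) per-pixel diffuse colour
    normals   : (H, W, 3) per-pixel unit surface normals
    light_dir : (3,) direction pointing towards the light source
    """
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    # Lambert's cosine law: shading = max(n . l, 0); the clip keeps
    # surfaces facing away from the light black instead of negative.
    shading = np.clip(normals @ l, 0.0, None)
    return albedo * shading[..., np.newaxis]

# Toy example: a flat grey surface facing the camera (+z normals)
albedo = np.full((2, 2, 3), 0.5)
normals = np.zeros((2, 2, 3))
normals[..., 2] = 1.0
frontal = relight_lambertian(albedo, normals, [0.0, 0.0, 1.0])
```

Under frontal light the surface keeps its full albedo; lighting it from behind would turn every pixel black. Real studio footage adds in-scene light sources, shadows, and non-diffuse materials, which is why the general problem remains open.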
A large part of current image processing techniques bear little resemblance to the processes by which images are processed and interpreted in the human brain [18]. Digital video is well suited to capturing photo-realistic images, but is limited to what can be filmed from a fixed or single camera position. Simple editing operations are typically available, but after recording the camera position and the illumination remain fixed. Computer graphics (CG) images, on the other hand, allow the creation of fully animated, freely navigable, modifiable and relightable scenes. Standard CG techniques are often used in movies for special effects, as evidenced by the many companies that specialize in them, such as Industrial Light & Magic and Pixar. The big disadvantage is that the mostly manual construction of detailed environments is very labour-intensive. Moreover, manually modeled scenes often suffer from the uncanny valley, where something closely approaches reality but is nevertheless not perceived as real [19]. There is a need for a solution that preserves the structure of the scene, so that the benefits of digital video and CG can be combined; a good compromise must be found on the spectrum from image-based to geometric methods [20]. A problem for games that are based on movies is that little imagery can be reused from the original production: often, all in-game models of the environments and characters have to be re-modeled. New algorithms to correctly relight and exchange characters and scenes could blur the production boundary between interactive and passive media. It becomes really interesting if we do not restrict such technology to use in major film studios, but bring it into the living room or even make it mobile; there have already been quite a few developments in this direction using the Microsoft Kinect [21]. Plain augmented reality applications could also make use of better algorithms to integrate synthetic objects in a realistic way with real images.
An example is a GPS system that projects the route onto the windshield of a car.

Period of project

01 January 2013 - 31 December 2016