Project R-2214


Co-exploration of Free Viewpoint Video for Sport Scenes with Occlusion Handling using Motion Detection and Ad Hoc Calibration on Parallel Architectures


Nowadays, a lot of attention is given to high-quality sports broadcasting. However, the current images are restricted to the positions of the cameras; important events and actions can be missed or filmed from an unfavourable angle. It is therefore desirable that camera positions can be chosen freely, by both the provider and the viewer. Such a virtual position is not limited by the placement of the camera equipment and can be used to generate new and unusual views; for example, a player can be followed from close behind. This kind of immersion will enrich the viewer's experience. To accomplish such effects, algorithms must be available that can generate new viewpoints for virtual cameras. Existing algorithms use 3D reconstruction or image-based rendering, but they are not suited for sport scenes.

This thesis will focus on this class of algorithms, considering the following aspects. The inter-camera distance will be very large and the exact camera locations cannot be chosen. Moreover, the majority of the cameras will be directed towards the action; consequently, the cameras will not be stationary and their properties will not be constant. Calibrating beforehand is therefore not possible, and ad-hoc, automatic calibration is necessary. In sports footage, occlusion is a common problem and an obstacle for current free-viewpoint algorithms: the camera distances are too large to acquire coherent occlusion information. In this thesis, this problem will be solved in the spatiotemporal domain using optical-flow-based algorithms. All algorithms will be developed on parallel architectures to enable real-time processing and make the results usable in existing setups.
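As a rough illustration of the motion-detection building block mentioned above, the sketch below flags moving regions by thresholded frame differencing. This is only a crude stand-in for the optical-flow-based spatiotemporal methods the project targets; the function name, threshold value, and synthetic frames are hypothetical and chosen purely for illustration.

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, thresh=25):
    """Flag pixels whose intensity changed by more than `thresh`
    between two consecutive grayscale frames. A very simple proxy
    for motion detection; real pipelines would use dense optical flow."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > thresh

# Synthetic example: a bright 4x4 "player" patch moves 3 pixels to the right.
prev_frame = np.zeros((32, 32), dtype=np.uint8)
curr_frame = np.zeros((32, 32), dtype=np.uint8)
prev_frame[10:14, 10:14] = 200
curr_frame[10:14, 13:17] = 200

mask = motion_mask(prev_frame, curr_frame)
print(mask.sum())  # number of pixels flagged as moving
```

Regions flagged this way could serve as candidate areas where occlusion reasoning across cameras and across time is required.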

Period of project

01 January 2010 - 31 December 2013