Published In

2019 IEEE Winter Conference on Applications of Computer Vision (WACV)

Document Type

Post-Print

Publication Date

3-7-2019

Subjects

Imaging systems -- Technological innovations, Markov random fields, Image processing -- Algorithms

Abstract

This paper presents a method for capturing high-speed video using an asynchronous camera array. Our method sequentially fires each sensor in a camera array with a small time offset and assembles the captured frames into a high-speed video according to their time stamps. The resulting video, however, suffers from parallax jitter caused by the viewpoint differences among the sensors in the camera array. To address this problem, we develop a dedicated novel view synthesis algorithm that transforms the video frames as if they had been captured by a single reference sensor. Specifically, for any frame from a non-reference sensor, we find the two temporally neighboring frames captured by the reference sensor. Using these three frames, we render a new frame with the same time stamp as the non-reference frame but from the viewpoint of the reference sensor: we segment the frames into super-pixels, apply local content-preserving warping to map them into the new frame, and blend the warped frames with a multi-label Markov Random Field method. Our experiments show that our method produces high-quality, high-speed video across a wide variety of scenes with large parallax, scene dynamics, and camera motion, and that it outperforms several baseline and state-of-the-art approaches.
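The capture scheme the abstract describes (staggered sensor firing, timestamp-ordered assembly, and finding the two reference-sensor frames that bracket a non-reference frame) can be sketched in a few lines of Python. This is an illustrative sketch only: the sensor count, frame rate, and helper names below are assumptions, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    sensor_id: int
    timestamp: float  # seconds

def capture_schedule(num_sensors: int, base_fps: float, num_cycles: int) -> list[Frame]:
    """Fire each sensor at base_fps, staggered by 1/(num_sensors * base_fps)."""
    period = 1.0 / base_fps           # each individual sensor's frame period
    offset = period / num_sensors     # time offset between adjacent sensors
    return [
        Frame(sensor_id=s, timestamp=cycle * period + s * offset)
        for cycle in range(num_cycles)
        for s in range(num_sensors)
    ]

def assemble_high_speed(frames: list[Frame]) -> list[Frame]:
    """Interleave all sensors' frames by time stamp into one high-speed stream."""
    return sorted(frames, key=lambda f: f.timestamp)

def reference_neighbors(frames: list[Frame], ref_id: int, t: float):
    """Find the reference-sensor frames temporally bracketing time t."""
    ref = [f for f in frames if f.sensor_id == ref_id]
    prev = max((f for f in ref if f.timestamp <= t),
               key=lambda f: f.timestamp, default=None)
    nxt = min((f for f in ref if f.timestamp > t),
              key=lambda f: f.timestamp, default=None)
    return prev, nxt

# Four 30 fps sensors interleave into an effective 120 fps stream.
seq = assemble_high_speed(capture_schedule(num_sensors=4, base_fps=30.0, num_cycles=2))
```

For a non-reference frame at time `t`, `reference_neighbors(seq, 0, t)` returns the two reference-sensor frames used to synthesize the new view; the view-synthesis step itself (super-pixel segmentation, content-preserving warping, MRF blending) is the substantive contribution of the paper and is not reproduced here.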

Description

© Copyright 2019 IEEE - All rights reserved.

DOI

10.1109/WACV.2019.00237

Persistent Identifier

https://archives.pdx.edu/ds/psu/29230
