Published In
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Document Type
Article
Publication Date
9-2022
Subjects
Interpolation -- Applications to image processing, Image processing -- Digital techniques, Digital video
Abstract
Implicit neural representation (INR) has been successful in representing static images. Contemporary image-based INR, with the use of Fourier-based positional encoding, can be viewed as a mapping from sinusoidal patterns with different frequencies to image content. Inspired by that view, we hypothesize that it is possible to generate temporally varying content with a single image-based INR model by displacing its input sinusoidal patterns over time. By exploiting the relation between the phase information in sinusoidal functions and their displacements, we incorporate into the conventional image-based INR model a phase-varying positional encoding module and couple it with a phase-shift generation module that determines the phase-shift values at each frame. The model is trained end-to-end on a video to jointly learn the phase-shift values at each frame and the mapping from the phase-shifted sinusoidal functions to the corresponding frame, enabling an implicit video representation. Experiments on a wide range of videos suggest that such a model is capable of learning to translate phase-varying positional embeddings into the corresponding time-varying content. More importantly, we find that the learned phase-shift vectors tend to capture meaningful temporal and motion information from the video. In particular, manipulating the phase-shift vectors induces meaningful changes in the temporal dynamics of the resulting video, enabling non-trivial temporal and motion editing effects such as temporal interpolation, motion magnification, motion smoothing, and video loop detection.
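The core idea in the abstract (displacing the input sinusoidal patterns of a Fourier positional encoding by per-frequency phase shifts) can be sketched as follows. This is a minimal illustrative NumPy sketch, not the authors' implementation; the function name, frequency schedule, and phase-shift values are assumptions for illustration, and in the paper the phase shifts would come from a learned phase-shift generation module.

```python
import numpy as np

def phase_shifted_encoding(coords, phase_shifts, num_freqs=4):
    """Fourier positional encoding whose sinusoids are displaced by
    per-frequency phase shifts (illustrative sketch only).

    coords:       (N, 2) array of pixel coordinates in [0, 1].
    phase_shifts: (num_freqs,) phase-shift vector for one frame; in the
                  paper this would be predicted by a learned module.
    Returns an (N, 2 * 2 * num_freqs) embedding (sin and cos per
    frequency, per coordinate dimension).
    """
    feats = []
    for k in range(num_freqs):
        freq = (2.0 ** k) * np.pi
        # Adding a phase shift displaces the sinusoidal pattern,
        # which is what lets one image-based INR produce motion.
        arg = freq * coords + phase_shifts[k]
        feats.append(np.sin(arg))
        feats.append(np.cos(arg))
    return np.concatenate(feats, axis=-1)

coords = np.random.rand(8, 2)
# Zero phase shift recovers the standard static-image encoding.
emb_static = phase_shifted_encoding(coords, np.zeros(4))
# A different (hypothetical) phase-shift vector yields the embedding
# that the shared INR decoder would map to a different frame.
emb_frame_t = phase_shifted_encoding(coords, np.array([0.1, 0.3, 0.2, 0.05]))
```

Because the encoding is a smooth function of the phase-shift vector, interpolating or scaling that vector between frames is what enables the temporal interpolation and motion-magnification effects described above.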
Rights
Copyright (c) 2022 The Authors
This work is licensed under a Creative Commons Attribution 4.0 International License.
Persistent Identifier
https://archives.pdx.edu/ds/psu/38634
Citation Details
Mai, L., & Liu, F. (2022). Motion-Adjustable Neural Implicit Video Representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 10738-10747).