Results 281 to 290 of about 1,483,995
Some of the following articles may not be open access.
Repurposing Pre-trained Video Diffusion Models for Event-based Video Interpolation
Computer Vision and Pattern Recognition
Video Frame Interpolation aims to recover realistic missing frames between observed frames, generating a high-frame-rate video from a low-frame-rate video. However, without additional guidance, the large motion between frames makes this problem ill-posed.
Jingxi Chen +8 more
semanticscholar +1 more source
Self-Reproducing Video Frame Interpolation
2019 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), 2019
Frame interpolation has recently seen success with convolutional neural networks that are trained end to end to minimize the reconstruction loss of dropped frames. This paper introduces a novel self-reproducing mechanism, in which the real (given) frames can in turn be interpolated from the interpolated ones, to further substantially improve ...
Jiajun Deng +4 more
openaire +1 more source
EDEN: Enhanced Diffusion for High-quality Large-motion Video Frame Interpolation
Computer Vision and Pattern Recognition
Handling complex or nonlinear motion patterns has long posed challenges for video frame interpolation. Although recent advances in diffusion-based methods offer improvements over traditional optical flow-based approaches, they still struggle to generate ...
Zihao Zhang +6 more
semanticscholar +1 more source
Event-based Video Frame Interpolation with Cross-Modal Asymmetric Bidirectional Motion Fields
Computer Vision and Pattern Recognition, 2023
Video Frame Interpolation (VFI) aims to generate intermediate video frames between consecutive input frames. Since event cameras are bio-inspired sensors that only encode brightness changes with microsecond temporal resolution, several works ...
Taewoo Kim +3 more
semanticscholar +1 more source
RichSpace: Enriching Text-to-Video Prompt Space via Text Embedding Interpolation
arXiv.org
Text-to-video generation models have made impressive progress, but they still struggle with generating videos with complex features. This limitation often arises from the inability of the text encoder to produce accurate embeddings, which hinders the ...
Yuefan Cao +6 more
semanticscholar +1 more source
LAP-Based Video Frame Interpolation
2019 IEEE International Conference on Image Processing (ICIP), 2019
High-quality video frame interpolation often necessitates accurate motion estimation, which can be obtained using modern optical flow methods. In this paper, we use the recently proposed Local All-Pass (LAP) algorithm to compute the optical flow between two consecutive frames.
Tejas Jayashankar +3 more
openaire +1 more source
Invertibility-Driven Interpolation Filter for Video Coding
IEEE Transactions on Image Processing, 2019
Motion compensation with fractional motion vectors has been widely utilized in video coding standards. The fractional samples are usually generated by fractional interpolation filters. Traditional interpolation filters are usually designed based on signal processing theory under the assumption of a band-limited signal, which cannot effectively ...
Ning Yan +5 more
openaire +2 more sources
Generative Inbetweening: Adapting Image-to-Video Models for Keyframe Interpolation
International Conference on Learning Representations
We present a method for generating video sequences with coherent motion between a pair of input key frames. We adapt a pretrained large-scale image-to-video diffusion model (originally trained to generate videos moving forward in time from a single input
Xiaojuan Wang +5 more
semanticscholar +1 more source
Quadratic Video Interpolation for VTSR Challenge
2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), 2019
Video interpolation is an important problem in image manipulation that has drawn increasing interest from the vision and graphics communities. In this work, we apply the quadratic video interpolation algorithm to the VTSR challenge of Advances in Image
Siyao Li, Xiangyu Xu, Ze Pan, Wenxiu Sun
semanticscholar +1 more source
International Conference on Learning Representations
We present TANGO, a framework for generating co-speech body-gesture videos. Given a few-minute, single-speaker reference video and target speech audio, TANGO produces high-fidelity videos with synchronized body gestures.
Haiyang Liu +6 more
semanticscholar +1 more source