In-2-4D: Inbetweening from Two Single-View Images to 4D Generation

Simon Fraser University · Tel Aviv University
We propose a new problem, In-2-4D, for generative 4D (i.e., 3D + motion) inbetweening from a minimalistic input setting: two single-view images capturing an object in two distinct motion states. Given two images representing the start and end states of an object in motion, our goal is to generate and reconstruct the motion in 4D. We utilize a video interpolation model to predict the motion, but large frame-to-frame motions can lead to ambiguous interpretations. To overcome this, we employ a hierarchical approach to identify keyframes that are visually close to the input states and show significant motion, then generate smooth fragments between them. For each fragment, we construct the 3D representation of the keyframe using Gaussian Splatting. The temporal frames within the fragment guide the motion, enabling their transformation into dynamic Gaussians through a deformation field. To improve temporal consistency and refine 3D motion, we expand the self-attention of multi-view diffusion across timesteps and apply rigid transformation regularization. Finally, we merge the independently generated 3D motion segments by interpolating boundary deformation fields and optimizing them to align with the guiding video, ensuring smooth and flicker-free transitions.
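The per-fragment 3D motion described above comes from deforming the static Gaussians of a keyframe over time, and neighbouring fragments are merged by blending their boundary deformation fields. The snippet below is a minimal sketch of what such a deformation field and boundary blend could look like in PyTorch; the module layout, output sizes, and function names are illustrative assumptions, not the released implementation.

```python
# Minimal sketch (assumed shapes and names, not the authors' released code) of
# turning static Gaussians into dynamic ones with a small deformation field,
# and blending two fragments' deformations at a shared boundary frame.
import torch
import torch.nn as nn


class DeformationField(nn.Module):
    """Tiny MLP mapping a Gaussian center and a timestep to position/rotation/scale offsets."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 + 4 + 3),  # delta position (3), rotation quaternion (4), scale (3)
        )

    def forward(self, xyz: torch.Tensor, t: float) -> torch.Tensor:
        # xyz: (N, 3) static Gaussian centers; t: normalized timestep in [0, 1]
        t_col = torch.full((xyz.shape[0], 1), float(t), device=xyz.device, dtype=xyz.dtype)
        return self.mlp(torch.cat([xyz, t_col], dim=-1))


def deform_positions(xyz: torch.Tensor, field: DeformationField, t: float) -> torch.Tensor:
    """Apply only the positional part of the predicted deformation."""
    return xyz + field(xyz, t)[:, :3]


def blend_boundary(xyz, field_a, field_b, t: float, alpha: float) -> torch.Tensor:
    """Linearly interpolate two fragments' deformations at a shared boundary frame;
    in the paper this blend is further optimized to match the guiding video."""
    return (1.0 - alpha) * deform_positions(xyz, field_a, t) + alpha * deform_positions(xyz, field_b, t)
```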

Why In-2-4D?

  • In-2-4D requires no camera information and can turn casually captured images of an object in motion into 4D.
  • Its "divide-and-conquer" strategy of breaking complex motions into simple fragments lets In-2-4D handle considerably complex motions (see the keyframe sketch below this list).
  • Rigid constraints within the smaller fragments keep the 3D generation smooth and the textures accurate across the global scene.
  • Industrial applications include VFX animation and 3D product generation with motion from differently posed product images.
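As a rough illustration of the divide-and-conquer keyframing mentioned above, the sketch below recursively inserts intermediate keyframes until the motion between neighbours is small enough for the video interpolation model to fill in smoothly. `interpolate_midframe`, `motion_magnitude`, and the threshold are hypothetical placeholders, not the paper's actual selection criteria.

```python
# Minimal sketch (assumptions only) of a divide-and-conquer keyframe schedule.
from typing import Callable, List


def hierarchical_keyframes(
    start,                            # image at the start state
    end,                              # image at the end state
    interpolate_midframe: Callable,   # e.g. a video interpolation model queried at t = 0.5
    motion_magnitude: Callable,       # e.g. mean optical-flow magnitude between two frames
    threshold: float = 0.1,
    max_depth: int = 4,
) -> List:
    """Return an ordered list of keyframes from start to end (inclusive)."""
    if max_depth == 0 or motion_magnitude(start, end) < threshold:
        return [start, end]
    mid = interpolate_midframe(start, end)
    left = hierarchical_keyframes(start, mid, interpolate_midframe,
                                  motion_magnitude, threshold, max_depth - 1)
    right = hierarchical_keyframes(mid, end, interpolate_midframe,
                                   motion_magnitude, threshold, max_depth - 1)
    # Drop the duplicated midpoint where the two halves meet.
    return left + right[1:]
```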

Code will be released soon! Stay tuned.

About

[arXiv'25] PyTorch implementation of "Inbetweening from Two Single-View Images to 4D Generation"
