VividHairEdit is an advanced StyleGAN2 inversion system for high‐fidelity hair transfer and editing. Our system features improved structural integration, vibrant appearance representation, and optimized latent code selection, achieving superior generation quality and usability. A user‐friendly sketch interface enables precise modifications that reflect ...
Eunyeong Choi, Sihun Jin, Dongjoon Kim
wiley +1 more source
Style Brush: Guided Style Transfer for 3D Objects
We introduce Style Brush, a guided 3D style‐transfer method for textured meshes that provides precise creative control. It supports the use of multiple style images, smooth transitions and intuitive guidance, producing visually appealing textures that follow user intent, as we demonstrate in our user study and results.
Áron Samuel Kovács +2 more
wiley +1 more source
Statistical Denoising of Transient Rendering
Abstract Transient rendering simulates light in motion, measuring the time of flight from the light source to the camera. However, the stochastic nature of Monte Carlo is aggravated in transient rendering, since samples are now spread along the temporal domain.
Oscar Pueyo‐Ciutad +2 more
wiley +1 more source
See4D: Pose‐Free 4D Generation via Auto‐Regressive Video Inpainting
Abstract Immersive applications call for synthesizing spatiotemporal 4D content from casual videos without costly 3D supervision. Existing video‐to‐4D methods typically rely on manually annotated camera poses, which are labor‐intensive and brittle for in‐the‐wild footage.
Dongyue Lu +10 more
wiley +1 more source
High‐Gloss SVBRDF Capture Using Bounce Light
Abstract Reflectance capture aims at the visual reproduction of an object under varying illumination. Past works differ substantially in their experimental overhead, from single‐ or few‐image approaches, that employ significant (often learned) priors at the expense of biased reconstructions, to more accurate approaches that tend to be time‐consuming ...
Tomáš Iser +2 more
wiley +1 more source
MultiCOIN: Multi‐Modal COntrollable INbetweening
Abstract Video inbetweening creates smooth transitions between two frames, making it an indispensable tool for video editing and long‐form video synthesis. Existing methods struggle with large or complex motion and offer limited control over intermediate frames, often misaligning with user intent.
M. Tanveer +6 more
wiley +1 more source
Reduced susceptibility to experimentally-induced complex visual hallucinations with age
Shenyan O +6 more
europepmc +1 more source
Lesion network guided neuromodulation to the extrastriate visual cortex in Charles Bonnet syndrome reduces visual hallucinations: A case study. [PDF]
Raymond N, Trotti R, Oss E, Lizano P.
europepmc +1 more source
SAGE: Structure‐Aware Generative Video Transitions between Diverse Clips
Abstract Video transitions aim to synthesize intermediate frames between two clips, but naïve approaches such as linear blending introduce artifacts that limit professional use or break temporal coherence. Traditional techniques (cross‐fades, morphing, frame interpolation) and recent generative inbetweening methods can produce high‐quality plausible ...
Mia Kan, Yilin Liu, Niloy J. Mitra
wiley +1 more source
Occipital Lobe Cavernoma Presenting With Headaches and Visual Hallucinations: A Case Report. [PDF]
Abdulaal MA +7 more
europepmc +1 more source