InstructPix2Pix: Learning to Follow Image Editing Instructions [PDF]
We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image.
Tim Brooks et al. (Semantic Scholar)
Prompt-to-Prompt Image Editing with Cross Attention Control [PDF]
Recent large-scale text-driven synthesis models have attracted much attention thanks to their remarkable capabilities of generating highly diverse images that follow given text prompts.
Amir Hertz et al. (Semantic Scholar)
Imagic: Text-Based Real Image Editing with Diffusion Models [PDF]
Text-conditioned image editing has recently attracted considerable interest. However, most methods are currently limited to one of the following: specific editing types (e.g., object overlay, style transfer), synthetically generated images, or requiring ...
Bahjat Kawar et al. (Semantic Scholar)
MagicBrush: A Manually Annotated Dataset for Instruction-Guided Image Editing [PDF]
Text-guided image editing is widely needed in daily life, ranging from personal use to professional applications such as Photoshop. However, existing methods are either zero-shot or trained on an automatically synthesized dataset, which contains a high ...
Kai Zhang et al. (Semantic Scholar)
Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions [PDF]
We propose a method for editing NeRF scenes with text-instructions. Given a NeRF of a scene and the collection of images used to reconstruct it, our method uses an image-conditioned diffusion model (InstructPix2Pix) to iteratively edit the input images ...
Ayaan Haque et al. (Semantic Scholar)
TokenFlow: Consistent Diffusion Features for Consistent Video Editing [PDF]
The generative AI revolution has recently expanded to videos. Nevertheless, current state-of-the-art video models are still lagging behind image models in terms of visual quality and user control over the generated content.
Michal Geyer et al. (Semantic Scholar)
Unified Concept Editing in Diffusion Models [PDF]
Text-to-image models suffer from various safety issues that may limit their suitability for deployment. Previous methods have separately addressed individual issues of bias, copyright, and offensive content in text-to-image models.
Rohit Gandikota et al. (Semantic Scholar)
DragDiffusion: Harnessing Diffusion Models for Interactive Point-Based Image Editing [PDF]
Accurate and controllable image editing is a challenging task that has attracted significant attention recently. Notably, DragGAN, developed by Pan et al., ...
Yujun Shi et al. (Semantic Scholar)
MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions [PDF]
The information stored in large language models (LLMs) falls out of date quickly, and retraining from scratch is often not an option. This has recently given rise to a range of techniques for injecting new facts through updating model weights.
Zexuan Zhong et al. (Semantic Scholar)
Editing Models with Task Arithmetic [PDF]
Changing how pre-trained models behave -- e.g., improving their performance on a downstream task or mitigating biases learned during pre-training -- is a common practice when developing machine learning systems.
Gabriel Ilharco et al. (Semantic Scholar)