
The Power of Scale for Parameter-Efficient Prompt Tuning [PDF]

open access: yes. Conference on Empirical Methods in Natural Language Processing, 2021
In this work, we explore “prompt tuning,” a simple yet effective mechanism for learning “soft prompts” to condition frozen language models to perform specific downstream tasks.
Brian Lester   +2 more
semanticscholar   +1 more source
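The mechanism described above, prepending learnable "soft prompt" vectors to the input embeddings of an otherwise frozen model, can be illustrated with a minimal NumPy sketch. The names (`PROMPT_LEN`, `embed`, the toy embedding table) are illustrative assumptions, not from the paper; a real implementation would train `soft_prompt` by backpropagation through the frozen model.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM, PROMPT_LEN = 100, 8, 5

frozen_embeddings = rng.normal(size=(VOCAB, DIM))  # frozen model weights, never updated
soft_prompt = rng.normal(size=(PROMPT_LEN, DIM))   # the only trainable tensor

def embed(token_ids):
    """Look up frozen token embeddings and prepend the learnable soft prompt."""
    tokens = frozen_embeddings[token_ids]          # (seq_len, DIM)
    return np.concatenate([soft_prompt, tokens], axis=0)

x = embed([1, 2, 3])
print(x.shape)  # -> (8, 8): PROMPT_LEN + 3 positions, DIM features
```

Only `soft_prompt` receives gradient updates during tuning, which is what makes the method parameter-efficient: the per-task footprint is `PROMPT_LEN * DIM` values rather than a full model copy.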

Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing [PDF]

open access: yes. ACM Computing Surveys, 2021
This article surveys and organizes research works in a new paradigm in natural language processing, which we dub “prompt-based learning.” Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y|x),
Pengfei Liu   +5 more
semanticscholar   +1 more source
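The reformulation the survey describes, recasting prediction as filling a slot in a natural-language template, can be sketched as follows. The template, verbalizer, and caller-supplied `fill_scores` are hypothetical stand-ins (a real system would score the fill words with a masked language model).

```python
# Hypothetical template and verbalizer mapping labels to fill words.
TEMPLATE = "{x} Overall, it was [MASK]."
VERBALIZER = {"positive": "great", "negative": "terrible"}

def to_prompt(x):
    """Wrap the raw input x in the cloze-style prompt template."""
    return TEMPLATE.format(x=x)

def classify(x, fill_scores):
    """Pick the label whose verbalizer word best fills [MASK].

    fill_scores: word -> score, a stand-in for a masked language
    model scoring candidate fills of to_prompt(x).
    """
    return max(VERBALIZER, key=lambda label: fill_scores[VERBALIZER[label]])

label = classify("The movie was fun.", {"great": 0.9, "terrible": 0.1})
print(label)  # -> positive
```

The point of the paradigm is that the pre-trained model's fill-in-the-blank ability is reused directly, instead of training a new head to model P(y|x).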

Visual Prompt Tuning [PDF]

open access: yes. European Conference on Computer Vision, 2022
The current modus operandi in adapting pre-trained models involves updating all the backbone parameters, i.e., full fine-tuning. This paper introduces Visual Prompt Tuning (VPT) as an efficient and effective alternative to full fine-tuning for large-scale ...
Menglin Jia   +6 more
semanticscholar   +1 more source

Prompt-to-Prompt Image Editing with Cross Attention Control [PDF]

open access: yes. International Conference on Learning Representations, 2022
Recent large-scale text-driven synthesis models have attracted much attention thanks to their remarkable capabilities of generating highly diverse images that follow given text prompts.
Amir Hertz   +5 more
semanticscholar   +1 more source

Conditional Prompt Learning for Vision-Language Models [PDF]

open access: yes. Computer Vision and Pattern Recognition, 2022
With the rise of powerful pre-trained vision-language models like CLIP, it becomes essential to investigate ways to adapt these models to downstream datasets.
Kaiyang Zhou   +3 more
semanticscholar   +1 more source

IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models [PDF]

open access: yes. arXiv.org, 2023
Recent years have witnessed the strong power of large text-to-image diffusion models, with impressive generative capability to create high-fidelity images.
Hu Ye   +4 more
semanticscholar   +1 more source

Large Language Models Are Human-Level Prompt Engineers [PDF]

open access: yes. International Conference on Learning Representations, 2022
By conditioning on natural language instructions, large language models (LLMs) have displayed impressive capabilities as general-purpose computers. However, task performance depends significantly on the quality of the prompt used to steer the model, and ...
Yongchao Zhou   +6 more
semanticscholar   +1 more source

Learning to Prompt for Continual Learning [PDF]

open access: yes. Computer Vision and Pattern Recognition, 2021
The mainstream paradigm behind continual learning has been to adapt the model parameters to non-stationary data distributions, where catastrophic forgetting is the central challenge.
Zifeng Wang   +9 more
semanticscholar   +1 more source

Not What You've Signed Up For: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection [PDF]

open access: yes. AISec@CCS, 2023
Large Language Models (LLMs) are increasingly being integrated into applications, with versatile functionalities that can be easily modulated via natural language prompts. So far, it has been assumed that the user is directly prompting the LLM.
Kai Greshake   +5 more
semanticscholar   +1 more source

MaPLe: Multi-modal Prompt Learning [PDF]

open access: yes. Computer Vision and Pattern Recognition, 2022
Pre-trained vision-language (V-L) models such as CLIP have shown excellent generalization ability to downstream tasks. However, they are sensitive to the choice of input text prompts and require careful selection of prompt templates to perform well ...
Muhammad Uzair Khattak   +4 more
semanticscholar   +1 more source
