Automatic Prompt Optimization with "Gradient Descent" and Beam Search [PDF]
Large Language Models (LLMs) have shown impressive performance as general purpose agents, but their abilities remain highly dependent on prompts which are hand written with onerous trial-and-error effort. We propose a simple and nonparametric solution to
Reid Pryzant +5 more
semanticscholar +1 more source
Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study [PDF]
Large Language Models (LLMs), like ChatGPT, have demonstrated vast potential but also introduce challenges related to content constraints and potential misuse.
Yi Liu +8 more
semanticscholar +1 more source
Prompt Injection attack against LLM-integrated Applications [PDF]
Large Language Models (LLMs), renowned for their superior proficiency in language comprehension and generation, stimulate a vibrant ecosystem of applications around them.
Yi Liu +8 more
semanticscholar +1 more source
Visual-Language Prompt Tuning with Knowledge-Guided Context Optimization [PDF]
Prompt tuning is an effective way to adapt the pretrained visual-language model (VLM) to the downstream task using task-related textual tokens. Representative CoOp-based work combines the learnable textual tokens with the class tokens to obtain specific ...
Hantao Yao, Rui Zhang, Changsheng Xu
semanticscholar +1 more source
Visual Prompt Multi-Modal Tracking [PDF]
Visible-modal object tracking gives rise to a series of downstream multi-modal tracking tributaries. To inherit the powerful representations of the foundation model, a natural modus operandi for multi-modal tracking is full fine-tuning on the RGB-based ...
Jiawen Zhu +4 more
semanticscholar +1 more source
LongLLMLingua: Accelerating and Enhancing LLMs in Long Context Scenarios via Prompt Compression [PDF]
In long context scenarios, large language models (LLMs) face three main challenges: higher computational cost, performance reduction, and position bias. Research indicates that LLM performance hinges on the density and position of key information in the ...
Huiqiang Jiang +6 more
semanticscholar +1 more source
Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery [PDF]
The strength of modern generative models lies in their ability to be controlled through text-based prompts. Typical"hard"prompts are made from interpretable words and tokens, and must be hand-crafted by humans.
Yuxin Wen +5 more
semanticscholar +1 more source
Why Johnny Can’t Prompt: How Non-AI Experts Try (and Fail) to Design LLM Prompts
Pre-trained large language models (“LLMs”) like GPT-3 can engage in fluent, multi-turn instruction-taking out-of-the-box, making them attractive materials for designing natural language interactions.
J.D. Zamfirescu-Pereira +3 more
semanticscholar +1 more source
CODA-Prompt: COntinual Decomposed Attention-Based Prompting for Rehearsal-Free Continual Learning [PDF]
Computer vision models suffer from a phenomenon known as catastrophic forgetting when learning novel concepts from continuously shifting training data. Typical solutions for this continual learning problem require extensive rehearsal of previously seen ...
James Smith +8 more
semanticscholar +1 more source
Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm [PDF]
Prevailing methods for mapping large generative language models to supervised tasks may fail to sufficiently probe models’ novel capabilities. Using GPT-3 as a case study, we show that 0-shot prompts can significantly outperform few-shot prompts.
Laria Reynolds, Kyle McDonell
semanticscholar +1 more source

