Results 21 to 30 of about 1,137,884

Automatic Prompt Optimization with "Gradient Descent" and Beam Search [PDF]

open access: yes · Conference on Empirical Methods in Natural Language Processing, 2023
Large Language Models (LLMs) have shown impressive performance as general-purpose agents, but their abilities remain highly dependent on prompts, which are hand-written with onerous trial-and-error effort. We propose a simple and nonparametric solution to ...
Reid Pryzant   +5 more
semanticscholar   +1 more source

Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study [PDF]

open access: yes · arXiv.org, 2023
Large Language Models (LLMs), like ChatGPT, have demonstrated vast potential but also introduce challenges related to content constraints and potential misuse.
Yi Liu   +8 more
semanticscholar   +1 more source

Prompt Injection attack against LLM-integrated Applications [PDF]

open access: yes · arXiv.org, 2023
Large Language Models (LLMs), renowned for their superior proficiency in language comprehension and generation, stimulate a vibrant ecosystem of applications built around them.
Yi Liu   +8 more
semanticscholar   +1 more source

Visual-Language Prompt Tuning with Knowledge-Guided Context Optimization [PDF]

open access: yes · Computer Vision and Pattern Recognition, 2023
Prompt tuning is an effective way to adapt a pretrained visual-language model (VLM) to downstream tasks using task-related textual tokens. Representative CoOp-based work combines the learnable textual tokens with the class tokens to obtain specific ...
Hantao Yao, Rui Zhang, Changsheng Xu
semanticscholar   +1 more source

Visual Prompt Multi-Modal Tracking [PDF]

open access: yes · Computer Vision and Pattern Recognition, 2023
Visible-modal object tracking gives rise to a series of downstream multi-modal tracking tributaries. To inherit the powerful representations of the foundation model, a natural modus operandi for multi-modal tracking is full fine-tuning on the RGB-based ...
Jiawen Zhu   +4 more
semanticscholar   +1 more source

LongLLMLingua: Accelerating and Enhancing LLMs in Long Context Scenarios via Prompt Compression [PDF]

open access: yes · Annual Meeting of the Association for Computational Linguistics, 2023
In long context scenarios, large language models (LLMs) face three main challenges: higher computational cost, performance reduction, and position bias. Research indicates that LLM performance hinges on the density and position of key information in the ...
Huiqiang Jiang   +6 more
semanticscholar   +1 more source

Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery [PDF]

open access: yes · Neural Information Processing Systems, 2023
The strength of modern generative models lies in their ability to be controlled through text-based prompts. Typical "hard" prompts are made from interpretable words and tokens, and must be hand-crafted by humans.
Yuxin Wen   +5 more
semanticscholar   +1 more source

Why Johnny Can’t Prompt: How Non-AI Experts Try (and Fail) to Design LLM Prompts

open access: yes · International Conference on Human Factors in Computing Systems, 2023
Pre-trained large language models ("LLMs") like GPT-3 can engage in fluent, multi-turn instruction-taking out of the box, making them attractive materials for designing natural language interactions.
J.D. Zamfirescu-Pereira   +3 more
semanticscholar   +1 more source

CODA-Prompt: COntinual Decomposed Attention-Based Prompting for Rehearsal-Free Continual Learning [PDF]

open access: yes · Computer Vision and Pattern Recognition, 2022
Computer vision models suffer from a phenomenon known as catastrophic forgetting when learning novel concepts from continuously shifting training data. Typical solutions for this continual learning problem require extensive rehearsal of previously seen ...
James Smith   +8 more
semanticscholar   +1 more source

Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm [PDF]

open access: yes · CHI Extended Abstracts, 2021
Prevailing methods for mapping large generative language models to supervised tasks may fail to sufficiently probe models’ novel capabilities. Using GPT-3 as a case study, we show that 0-shot prompts can significantly outperform few-shot prompts.
Laria Reynolds, Kyle McDonell
semanticscholar   +1 more source