Results 241 to 250 of about 1,137,884
Some of the following articles may not be open access.
LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression
Annual Meeting of the Association for Computational Linguistics
This paper focuses on task-agnostic prompt compression for better generalizability and efficiency. Considering the redundancy in natural language, existing approaches compress prompts by removing tokens or lexical units according to their information ...
Zhuoshi Pan +22 more
semanticscholar +1 more source
International Conference on Learning Representations, 2023
As large language models (LLMs) are adopted as a fundamental component of language technologies, it is crucial to accurately characterize their performance.
Melanie Sclar +3 more
semanticscholar +1 more source
InjecAgent: Benchmarking Indirect Prompt Injections in Tool-Integrated Large Language Model Agents
Annual Meeting of the Association for Computational Linguistics
Recent work has embodied LLMs as agents, allowing them to access tools, perform actions, and interact with external content (e.g., emails or websites).
Qiusi Zhan +3 more
semanticscholar +1 more source
Defeating Prompt Injections by Design
arXiv.org
Large Language Models (LLMs) are increasingly deployed in agentic systems that interact with an untrusted environment. However, LLM agents are vulnerable to prompt injection attacks when handling untrusted data.
Edoardo Debenedetti +9 more
semanticscholar +1 more source
A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications
arXiv.org
Prompt engineering has emerged as an indispensable technique for extending the capabilities of large language models (LLMs) and vision-language models (VLMs). This approach leverages task-specific instructions, known as prompts, to enhance model efficacy ...
Pranab Sahoo +5 more
semanticscholar +1 more source
DataSentinel: A Game-Theoretic Detection of Prompt Injection Attacks
IEEE Symposium on Security and Privacy
LLM-integrated applications and agents are vulnerable to prompt injection attacks, where an attacker injects prompts into their inputs to induce attacker-desired outputs.
Yupei Liu +4 more
semanticscholar +1 more source
ACM Conference on Recommender Systems, 2022
For a long time, different recommendation tasks have required designing task-specific architectures and training objectives. As a result, it is hard to transfer the knowledge and representations from one task to another, thus restricting the generalization ...
Shijie Geng +4 more
semanticscholar +1 more source
StruQ: Defending Against Prompt Injection with Structured Queries
USENIX Security Symposium
Recent advances in Large Language Models (LLMs) enable exciting LLM-integrated applications, which perform text-based tasks by utilizing their advanced language understanding capabilities. However, as LLMs have improved, so have the attacks against them.
Sizhe Chen +3 more
semanticscholar +1 more source
TopicGPT: A Prompt-based Topic Modeling Framework
North American Chapter of the Association for Computational Linguistics
Topic modeling is a well-established technique for exploring text corpora. Conventional topic models (e.g., LDA) represent topics as bags of words that often require “reading the tea leaves” to interpret; additionally, they offer users minimal control ...
Chau Minh Pham +3 more
semanticscholar +1 more source
Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks
Neural Information Processing Systems
Despite advances in AI alignment, large language models (LLMs) remain vulnerable to adversarial attacks or jailbreaking, in which adversaries can modify prompts to induce unwanted behavior.
Andy Zhou, Bo Li, Haohan Wang
semanticscholar +1 more source

