
MedAgents: Large Language Models as Collaborators for Zero-shot Medical Reasoning

Annual Meeting of the Association for Computational Linguistics, 2023
Large language models (LLMs), despite their remarkable progress across various general domains, encounter significant barriers in medicine and healthcare.
Xiangru Tang   +6 more
semanticscholar   +1 more source

Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels

North American Chapter of the Association for Computational Linguistics, 2023
Zero-shot text rankers powered by recent LLMs achieve remarkable ranking performance by simply prompting. Existing prompts for pointwise LLM rankers mostly ask the model to choose from binary relevance labels like “Yes” and “No”.
Honglei Zhuang   +6 more
semanticscholar   +1 more source

LLM Comparative Assessment: Zero-shot NLG Evaluation through Pairwise Comparisons using Large Language Models

Conference of the European Chapter of the Association for Computational Linguistics, 2023
Current developments in large language models (LLMs) have enabled impressive zero-shot capabilities across various natural language tasks. An interesting application of these systems is in the automated assessment of natural language generation (NLG), a ...
Adian Liusie, Potsawee Manakul, M. Gales
semanticscholar   +1 more source

LM-Infinite: Zero-Shot Extreme Length Generalization for Large Language Models

North American Chapter of the Association for Computational Linguistics, 2023
Today’s large language models (LLMs) typically train on short text segments (e.g., ...
Chi Han   +6 more
semanticscholar   +1 more source

Self-Improving for Zero-Shot Named Entity Recognition with Large Language Models

North American Chapter of the Association for Computational Linguistics, 2023
Exploring the application of powerful large language models (LLMs) on the named entity recognition (NER) task has drawn much attention recently. This work pushes the performance boundary of zero-shot NER with LLMs by proposing a training-free self ...
Tingyu Xie   +4 more
semanticscholar   +1 more source

VoiceCraft: Zero-Shot Speech Editing and Text-to-Speech in the Wild

Annual Meeting of the Association for Computational Linguistics
We introduce VoiceCraft, a token infilling neural codec language model, that achieves state-of-the-art performance on both speech editing and zero-shot text-to-speech (TTS) on audiobooks, internet videos, and podcasts.
Puyuan Peng   +4 more
semanticscholar   +1 more source

LSTPrompt: Large Language Models as Zero-Shot Time Series Forecasters by Long-Short-Term Prompting

Annual Meeting of the Association for Computational Linguistics
Time-series forecasting (TSF) finds broad applications in real-world scenarios. Prompting off-the-shelf Large Language Models (LLMs) demonstrates strong zero-shot TSF capabilities while preserving computational efficiency.
Haoxin Liu   +4 more
semanticscholar   +1 more source

Large Language Models as Zero-shot Dialogue State Tracker through Function Calling

Annual Meeting of the Association for Computational Linguistics
Large language models (LLMs) are increasingly prevalent in conversational systems due to their advanced understanding and generative capabilities in general contexts.
Zekun Li   +9 more
semanticscholar   +1 more source

Rethinking Task-Oriented Dialogue Systems: From Complex Modularity to Zero-Shot Autonomous Agent

Annual Meeting of the Association for Computational Linguistics
Task-oriented dialogue (TOD) systems are predominantly designed to be composed of several functional modules (e.g. dialogue state tracker, dialogue policy, natural language generation), whether they are pipeline or end-to-end archi ...
Heng-Da Xu   +4 more
semanticscholar   +1 more source
