Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models [PDF]
Large language models (LLMs) have recently been shown to deliver impressive performance in various NLP tasks. To tackle multi-step reasoning tasks, few-shot chain-of-thought (CoT) prompting includes a few manually crafted step-by-step reasoning ...
Lei Wang et al.
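A minimal sketch of the plan-then-solve prompting pattern this entry describes, assuming a placeholder `call_llm` client; the trigger wording below is a paraphrase of the idea (devise a plan, then carry it out step by step), not a verbatim quote from the paper:

```python
# Minimal sketch of zero-shot plan-and-solve prompting.
# `call_llm` is a placeholder for any text-completion API; the trigger
# phrase approximates the plan-first, then-solve instruction the paper
# proposes in place of "Let's think step by step".

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its completion."""
    raise NotImplementedError("wire up your own LLM client here")

PLAN_AND_SOLVE_TRIGGER = (
    "Let's first understand the problem and devise a plan to solve it. "
    "Then, let's carry out the plan and solve the problem step by step."
)

def plan_and_solve(question: str) -> str:
    # Stage 1: elicit a plan plus step-by-step reasoning (no demonstrations).
    reasoning = call_llm(f"Q: {question}\nA: {PLAN_AND_SOLVE_TRIGGER}\n")
    # Stage 2: extract the final answer from the generated reasoning,
    # mirroring the answer-extraction step used in zero-shot CoT pipelines.
    answer = call_llm(
        f"Q: {question}\nA: {PLAN_AND_SOLVE_TRIGGER}\n{reasoning}\n"
        "Therefore, the answer is"
    )
    return answer.strip()
```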
Better Zero-Shot Reasoning with Role-Play Prompting [PDF]
Modern large language models (LLMs) exhibit a remarkable capacity for role-playing, enabling them to embody not only human characters but also non-human entities.
Aobo Kong et al.
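A minimal sketch of role-play prompting as described above: a role-setting turn is prepended before the actual question. The `chat_llm` helper, the stubbed acknowledgement turn, and the specific role text are illustrative assumptions, not the paper's exact prompts:

```python
# Minimal sketch of role-play prompting for zero-shot reasoning.
# The role text and the `chat_llm` client are illustrative assumptions.

def chat_llm(messages: list[dict]) -> str:
    """Placeholder: send chat messages to an LLM and return the reply."""
    raise NotImplementedError("wire up your own chat client here")

def role_play_answer(question: str,
                     role: str = "an experienced math teacher") -> str:
    messages = [
        # Immersion turn: ask the model to adopt the role.
        {"role": "user", "content": f"From now on, you are {role}. "
                                    "Stay in character while you answer."},
        # Acknowledgement the model would normally produce; stubbed here
        # so the sketch stays self-contained and deterministic.
        {"role": "assistant", "content": f"Understood. I will answer as {role}."},
        # The actual task is asked only after the role has been set.
        {"role": "user", "content": question},
    ]
    return chat_llm(messages)
```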
Precise Zero-Shot Dense Retrieval without Relevance Labels [PDF]
While dense retrieval has been shown to be effective and efficient across tasks and languages, it remains difficult to create effective fully zero-shot dense retrieval systems when no relevance labels are available.
Luyu Gao et al.
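A minimal sketch of the hypothetical-document idea this entry points to: generate a fake passage that would answer the query, embed it, and retrieve real documents nearest to that embedding instead of the raw query's. `call_llm` and `embed` are placeholders for an instruction-following LLM and a dense encoder:

```python
import numpy as np

# Minimal sketch of zero-shot dense retrieval via a hypothetical document.
# `call_llm` and `embed` are placeholders; the paper's system pairs an
# instruction-following LLM with an unsupervised dense encoder.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client")

def embed(text: str) -> np.ndarray:
    raise NotImplementedError("plug in a dense text encoder")

def hyde_search(query: str, corpus: list[str], top_k: int = 5) -> list[str]:
    # 1. Write a hypothetical passage that *would* answer the query.
    hypothetical = call_llm(
        f"Write a short passage that answers the question:\n{query}"
    )
    # 2. Embed the hypothetical passage and every corpus document.
    q_vec = embed(hypothetical)
    doc_vecs = np.stack([embed(d) for d in corpus])
    # 3. Rank documents by cosine similarity to the hypothetical passage.
    q_vec = q_vec / np.linalg.norm(q_vec)
    doc_vecs = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = doc_vecs @ q_vec
    top = np.argsort(-scores)[:top_k]
    return [corpus[i] for i in top]
```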
Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors [PDF]
Recent work has shown that fine-tuning large language models (LLMs) on large-scale instruction-following datasets substantially improves their performance on a wide range of NLP tasks, especially in the zero-shot setting.
Kai Zhang et al.
Better Zero-Shot Reasoning with Self-Adaptive Prompting [PDF]
Modern large language models (LLMs) have demonstrated impressive capabilities at sophisticated tasks, often through step-by-step reasoning similar to humans.
Xingchen Wan et al.
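A minimal sketch of the self-adaptive idea of reusing the model's own zero-shot outputs as pseudo-demonstrations, keeping the questions where repeated sampling agrees most. `zero_shot_cot` is a placeholder, and the selection criterion is simplified to majority-vote confidence (the paper's scoring is richer than this):

```python
from collections import Counter

# Minimal sketch: build in-context demonstrations from the model's own
# zero-shot CoT outputs, preferring questions where sampled answers agree.
# `zero_shot_cot` is a placeholder returning (reasoning, answer).

def zero_shot_cot(question: str) -> tuple[str, str]:
    raise NotImplementedError("sample one zero-shot CoT reasoning + answer")

def build_pseudo_demos(questions: list[str], samples: int = 8, keep: int = 4):
    demos = []
    for q in questions:
        outputs = [zero_shot_cot(q) for _ in range(samples)]
        votes = Counter(ans for _, ans in outputs)
        best_answer, count = votes.most_common(1)[0]
        confidence = count / samples  # self-consistency as a confidence proxy
        reasoning = next(r for r, ans in outputs if ans == best_answer)
        demos.append((confidence, q, reasoning, best_answer))
    # Keep the most self-consistent examples as pseudo-demonstrations
    # for a subsequent few-shot prompt.
    demos.sort(key=lambda d: d[0], reverse=True)
    return [(q, r, a) for _, q, r, a in demos[:keep]]
```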
On Second Thought, Let’s Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning [PDF]
Generating a Chain of Thought (CoT) has been shown to consistently improve large language model (LLM) performance on a wide range of NLP tasks. However, prior work has mainly focused on logical reasoning tasks (e.g. ...
Omar Shaikh et al.
ChatGPT for Zero-shot Dialogue State Tracking: A Solution or an Opportunity? [PDF]
Recent research on dialog state tracking (DST) focuses on methods that allow few- and zero-shot transfer to new domains or schemas. However, performance gains heavily depend on aggressive data augmentation and fine-tuning of ever larger language model ...
Michael Heck et al.
Tab-CoT: Zero-shot Tabular Chain of Thought [PDF]
The chain-of-thought (CoT) prompting methods were successful in various natural language processing (NLP) tasks thanks to their ability to unveil the underlying complex reasoning processes.
Ziqi Jin, Wei Lu
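A minimal sketch of tabular chain-of-thought prompting: the model is asked to reason by filling a small table row by row rather than writing free-form steps. `call_llm` is a placeholder, and the column scheme and answer-extraction step are simplified approximations in the spirit of the paper:

```python
# Minimal sketch of tabular chain-of-thought (Tab-CoT) prompting.
# `call_llm` is a placeholder; the table header is one plausible column
# scheme, not necessarily the paper's exact wording.

TABLE_HEADER = "|step|subquestion|process|result|"

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your own LLM client here")

def tab_cot(question: str) -> str:
    # Stage 1: the table header invites the model to reason row by row.
    table = call_llm(f"{question}\n{TABLE_HEADER}\n")
    # Stage 2: extract the final answer from the completed table.
    answer = call_llm(f"{question}\n{TABLE_HEADER}\n{table}\nThe answer is")
    return answer.strip()
```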
ReGen: Zero-Shot Text Classification via Training Data Generation with Progressive Dense Retrieval [PDF]
With the development of large language models (LLMs), zero-shot learning has attracted much attention for various NLP tasks. Different from prior works that generate training data with billion-scale natural language generation (NLG) models, we propose a ...
Yue Yu et al.
Zero-shot Faithful Factual Error Correction [PDF]
Faithfully correcting factual errors is critical for maintaining the integrity of textual knowledge bases and preventing hallucinations in sequence-to-sequence models.
Kung-Hsiang Huang et al.