Results 1 to 10 of about 1,369,839
Combining computational linguistics with sentence embedding to create a zero-shot NLIDB
Accessing relational databases using natural language is a challenging task, with existing methods often suffering from poor domain generalization and high computational costs. In this study, we propose a novel approach that eliminates the training phase while offering high adaptability across domains. Our method combines structured linguistic rules, a ...
Yuriy Perezhohin+2 more
doaj +4 more sources
Based on Halliday's systemic functional grammar, especially the ideational function, this research aims at disclosing the hidden ideologies and values of the seemingly objective news reports on China's COVID-19 policies in The Economist. Transitivity, voice, and nominalization are the major analytical subjects. After China lifted the zero-COVID policy, ...
Chuyan Wang
openalex +3 more sources
Corpus linguistics is the study of language as expressed in a body of texts or documents. The relative frequency of a word within a text and the dispersion of the word across the collection of texts provide information about the word's prominence and diffusion, respectively.
Brent D. Burch, Jesse Egbert
openalex +5 more sources
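The frequency-and-dispersion idea in the snippet above can be sketched in a few lines. This is an illustrative example, not code from the indexed paper: the toy corpus is made up, and Gries's DP is used here as one common dispersion measure among several the literature offers.

```python
# Relative frequency of a word in one text, and its dispersion across a
# small corpus. Dispersion uses Gries's DP: 0 means the word is spread
# exactly in proportion to text sizes; values near 1 mean it is
# concentrated in few texts. Toy corpus for illustration only.

corpus = [
    "the cat sat on the mat".split(),
    "the dog barked at the cat".split(),
    "birds sing in the morning".split(),
]

def relative_frequency(word, text):
    """Occurrences of `word` per token within a single text."""
    return text.count(word) / len(text)

def dispersion_dp(word, texts):
    """Gries's DP across `texts`; None if the word never occurs."""
    total_tokens = sum(len(t) for t in texts)
    total_hits = sum(t.count(word) for t in texts)
    if total_hits == 0:
        return None
    dp = 0.0
    for t in texts:
        expected = len(t) / total_tokens        # this text's share of the corpus
        observed = t.count(word) / total_hits   # this text's share of the word's hits
        dp += abs(expected - observed)
    return dp / 2

print(relative_frequency("the", corpus[0]))  # 2 of 6 tokens
print(dispersion_dp("the", corpus))          # low: "the" appears in every text
```

A high relative frequency with a high DP would flag a word that is prominent in one text but not diffused across the collection, which is exactly the distinction the abstract draws.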
Ontology extension by online clustering with large language model agents. [PDF]
An ontology is a structured framework that categorizes entities, concepts, and relationships within a domain to facilitate shared understanding, and it is important in computational linguistics and knowledge representation.
Wu G, Ling C, Graetz I, Zhao L.
europepmc +2 more sources
Linguistic parameters for zero anaphora resolution
This dissertation describes and proposes a set of linguistically motivated rules for zero anaphora resolution in the context of a natural language processing chain developed for Portuguese. Some languages, like Portuguese, allow noun phrase (NP) deletion (or zeroing) in several syntactic contexts in order to avoid the redundancy that would result from ...
Simone Cristina Pereira
openalex +2 more sources
Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models [PDF]
Large language models (LLMs) have recently been shown to deliver impressive performance in various NLP tasks. To tackle multi-step reasoning tasks, few-shot chain-of-thought (CoT) prompting includes a few manually crafted step-by-step reasoning ...
Lei Wang+6 more
semanticscholar +1 more source
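The zero-shot variant this paper proposes swaps the usual "Let's think step by step" trigger for a plan-then-execute instruction. A minimal sketch of assembling such a prompt is below; the exact trigger wording is paraphrased from the paper, and `call_llm` is a hypothetical stand-in for whatever model API is actually used.

```python
# Plan-and-Solve zero-shot prompting: instead of appending
# "Let's think step by step", the prompt asks the model to first
# devise a plan and then carry it out. Only the string assembly is
# shown; no model is called here.

PS_TRIGGER = (
    "Let's first understand the problem and devise a plan to solve it. "
    "Then, let's carry out the plan and solve the problem step by step."
)

def build_ps_prompt(question: str) -> str:
    """Wrap a question in the Plan-and-Solve zero-shot template."""
    return f"Q: {question}\nA: {PS_TRIGGER}"

prompt = build_ps_prompt("A store had 120 apples and sold 45. How many remain?")
print(prompt)
# answer = call_llm(prompt)  # hypothetical model call
```

Because the trigger is fixed text rather than hand-written exemplars, the same template transfers across tasks with no per-task demonstrations, which is what makes it zero-shot.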
Better Zero-Shot Reasoning with Role-Play Prompting [PDF]
Modern large language models (LLMs) exhibit a remarkable capacity for role-playing, enabling them to embody not only human characters but also non-human entities.
Aobo Kong+6 more
semanticscholar +1 more source
Precise Zero-Shot Dense Retrieval without Relevance Labels [PDF]
While dense retrieval has been shown to be effective and efficient across tasks and languages, it remains difficult to create effective fully zero-shot dense retrieval systems when no relevance labels are available.
Luyu Gao+3 more
semanticscholar +1 more source
Zero-shot Cross-Linguistic Learning of Event Semantics
Typologically diverse languages offer systems of lexical and grammatical aspect that allow speakers to focus on facets of event structure in ways that comport with the specific communicative setting and discourse constraints they face. In this paper, we look specifically at captions of images across Arabic, Chinese, Farsi, German, Russian, and Turkish ...
Alikhani, Malihe+8 more
openaire +4 more sources
Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors [PDF]
Recent work has shown that fine-tuning large language models (LLMs) on large-scale instruction-following datasets substantially improves their performance on a wide range of NLP tasks, especially in the zero-shot setting.
Kai Zhang+2 more
semanticscholar +1 more source