Is ChatGPT a General-Purpose Natural Language Processing Task Solver? [PDF]
Spurred by advancements in scale, large language models (LLMs) have demonstrated the ability to perform a variety of natural language processing (NLP) tasks zero-shot -- i.e., without adaptation on downstream data.
Chengwei Qin +5 more
openalex +3 more sources
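A minimal sketch of the zero-shot setting the entry above describes: the task is stated entirely in the prompt, with no adaptation on downstream data. The template, instruction wording, and example input are illustrative assumptions, not the paper's protocol.

```python
# Zero-shot prompting sketch: the task specification lives entirely in the
# prompt string; no downstream training data is used. The resulting string
# would be sent verbatim to an LLM.
def zero_shot_prompt(task_instruction: str, text: str) -> str:
    return f"{task_instruction}\n\nInput: {text}\nAnswer:"

prompt = zero_shot_prompt(
    "Classify the sentiment of the input as positive or negative.",
    "I would happily watch this film again.",
)
print(prompt)
```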
Connectionist natural language parsing [PDF]
The key developments of two decades of connectionist parsing are reviewed. Connectionist parsers are assessed according to their ability to learn to represent syntactic structures from examples automatically, without being presented with symbolic grammar ...
Berg +52 more
core +3 more sources
Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing [PDF]
This article surveys and organizes research works in a new paradigm in natural language processing, which we dub “prompt-based learning.” Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y|x), prompt-based learning is based on language models that model the probability of text directly.
Pengfei Liu +5 more
semanticscholar +1 more source
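A minimal sketch of the prompt-based paradigm this survey describes, assuming the Hugging Face transformers library and bert-base-uncased: instead of training a classifier for P(y|x), a cloze template plus a verbalizer reuses a pretrained masked language model. The template and label words here are illustrative choices, not ones taken from the survey.

```python
# Prompt-based classification sketch: wrap the input in a cloze template and
# let a pretrained masked LM score verbalizer words that stand in for labels.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def classify_sentiment(review: str) -> str:
    prompt = f"{review} Overall, it was [MASK]."            # cloze template
    candidates = fill_mask(prompt, targets=["great", "terrible"])
    best = max(candidates, key=lambda c: c["score"])        # highest-scoring label word
    return "positive" if best["token_str"] == "great" else "negative"

print(classify_sentiment("The plot was dull and the acting was worse."))
```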
Survey of Hallucination in Natural Language Generation [PDF]
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG ...
Ziwei Ji +11 more
semanticscholar +1 more source
Organizing an in-class hackathon to correct PDF-to-text conversion errors of Genomics & Informatics 1.0 [PDF]
This paper describes a community effort to improve earlier versions of the full-text corpus of Genomics & Informatics by semi-automatically detecting and correcting PDF-to-text conversion errors and optical character recognition errors during the first ...
Sunho Kim +44 more
doaj +1 more source
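As a small illustration of the kind of correction such an effort might semi-automate, the sketch below applies a few rule-based fixes for common PDF-to-text artefacts; the specific rules are assumptions, not the pipeline used in the paper.

```python
# Rule-based fixes for common PDF-to-text artefacts: ligatures, hyphenated
# line breaks, and stray whitespace. Illustrative only.
import re

def clean_pdf_text(text: str) -> str:
    text = text.replace("\ufb01", "fi").replace("\ufb02", "fl")  # fi / fl ligatures
    text = re.sub(r"(\w+)-\n(\w+)", r"\1\2", text)  # rejoin words hyphenated at line breaks
    text = re.sub(r"[ \t]+", " ", text)             # collapse runs of spaces
    return text

print(clean_pdf_text("The  signi\ufb01cance of de-\nnoising work\ufb02ows"))
```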
Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation [PDF]
We introduce a method to measure uncertainty in large language models. For tasks like question answering, it is essential to know when we can trust the natural language outputs of foundation models.
Lorenz Kuhn, Y. Gal, Sebastian Farquhar
semanticscholar +1 more source
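The snippet above only states the goal; below is a rough sketch of the clustering-then-entropy idea behind semantic uncertainty. Sampled answers are grouped by meaning and entropy is computed over the groups; the toy string normalisation stands in for the semantic-equivalence test the paper uses, so treat it purely as an illustration.

```python
# Semantic-entropy sketch: cluster sampled answers by meaning, then compute
# entropy over clusters instead of over raw strings. The normalisation below
# is a toy stand-in for a real semantic-equivalence check.
import math
from collections import Counter

def semantic_entropy(samples: list[str]) -> float:
    def normalise(answer: str) -> str:
        return answer.lower().strip().rstrip(".")   # hypothetical equivalence test
    clusters = Counter(normalise(s) for s in samples)
    total = sum(clusters.values())
    return -sum((n / total) * math.log(n / total) for n in clusters.values())

samples = ["Paris.", "paris", "Paris", "Lyon"]
print(semantic_entropy(samples))  # most samples agree, so entropy is low
```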
Can Language Models Solve Graph Problems in Natural Language? [PDF]
Large language models (LLMs) are increasingly adopted for a variety of tasks with implicit graphical structures, such as planning in robotics, multi-hop question answering or knowledge probing, structured commonsense reasoning, and more.
Heng Wang +5 more
semanticscholar +1 more source
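A small sketch of the setting this entry describes: a graph is serialised into natural-language sentences and wrapped in a question that could be posed to an LLM. The edge list and wording are made-up examples, not the benchmark's format.

```python
# Serialise a weighted graph into sentences and append a question, producing
# a natural-language prompt for a graph problem. Purely illustrative.
edges = [("A", "B", 1), ("B", "C", 2), ("A", "C", 5)]

def graph_to_prompt(edges, source, target):
    facts = [f"Node {u} is connected to node {v} with weight {w}."
             for u, v, w in edges]
    question = (f"What is the total weight of the shortest path "
                f"from node {source} to node {target}?")
    return " ".join(facts) + " " + question

print(graph_to_prompt(edges, "A", "C"))
```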
Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing [PDF]
Pretraining large neural language models, such as BERT, has led to impressive gains on many natural language processing (NLP) tasks. However, most pretraining efforts focus on general domain corpora, such as newswire and Web.
Yu Gu +8 more
semanticscholar +1 more source
RYANSQL: Recursively Applying Sketch-based Slot Fillings for Complex Text-to-SQL in Cross-Domain Databases
Text-to-SQL is the problem of converting a user question into an SQL query, when the question and database are given. In this article, we present a neural network approach called RYANSQL (Recursively Yielding Annotation Network for SQL) to solve complex ...
DongHyun Choi +3 more
doaj +1 more source
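A toy illustration of the text-to-SQL setting defined above: the input is a user question plus a database schema, and the output is an SQL query. The schema, question, sample rows, and hand-written query are invented for illustration; RYANSQL itself is a neural model, not a lookup like this.

```python
# Text-to-SQL illustration: the (question, schema) pair and the SQL a system
# would be expected to produce, checked against an in-memory SQLite database.
import sqlite3

question = "Which employees in the Sales department earn more than 50000?"
expected_sql = ("SELECT name FROM employees "
                "WHERE department = 'Sales' AND salary > 50000;")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, name TEXT, salary REAL, department TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?, ?)",
                 [(1, "Ann", 62000, "Sales"),
                  (2, "Bo", 48000, "Sales"),
                  (3, "Cy", 70000, "HR")])
print(conn.execute(expected_sql).fetchall())  # [('Ann',)]
```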
Recent Advances in Natural Language Processing via Large Pre-trained Language Models: A Survey [PDF]
Large, pre-trained language models (PLMs) such as BERT and GPT have drastically changed the Natural Language Processing (NLP) field. For numerous NLP tasks, approaches leveraging PLMs have achieved state-of-the-art performance. The key idea is to learn a ...
Bonan Min +8 more
semanticscholar +1 more source

