Results 1 to 10 of about 22,713,326

Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? [PDF]

open access: yes · Conference on Empirical Methods in Natural Language Processing, 2022
Large language models (LMs) are able to in-context learn—perform a new task via inference alone by conditioning on a few input-label pairs (demonstrations) and making predictions for new inputs.
Sewon Min   +6 more
semanticscholar   +1 more source

LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding [PDF]

open access: yes · Annual Meeting of the Association for Computational Linguistics, 2023
Although large language models (LLMs) demonstrate impressive performance on many language tasks, most of them can only handle texts a few thousand tokens long, limiting their application to longer sequence inputs such as books, reports, and codebases.
Yushi Bai   +12 more
semanticscholar   +1 more source

What Makes Good In-Context Examples for GPT-3? [PDF]

open access: yes · Workshop on Knowledge Extraction and Integration for Deep Learning Architectures; Deep Learning Inside Out, 2021
GPT-3 has attracted considerable attention due to its superior performance across a wide range of NLP tasks, especially its in-context learning abilities.
Jiachang Liu   +5 more
semanticscholar   +1 more source

In-Context Retrieval-Augmented Language Models [PDF]

open access: yes · Transactions of the Association for Computational Linguistics, 2023
Retrieval-Augmented Language Modeling (RALM) methods, which condition a language model (LM) on relevant documents from a grounding corpus during generation, were shown to significantly improve language modeling performance. In addition, they can mitigate ...
Ori Ram   +6 more
semanticscholar   +1 more source

Identifying Opportunities for Collective Curation During Archaeological Excavations

open access: yes · International Journal of Digital Curation, 2021
Archaeological excavations are carried out by interdisciplinary teams that create, manage, and share data as they unearth and analyse material culture. These team-based settings are ripe for collective curation during these data lifecycle stages.
Ixchel Faniel   +5 more
doaj   +1 more source

Large Language Models Can Be Easily Distracted by Irrelevant Context [PDF]

open access: yes · International Conference on Machine Learning, 2023
Large language models have achieved impressive performance on various natural language processing tasks. However, so far they have been evaluated primarily on benchmarks where all information in the input context is relevant for solving the task ...
Freda Shi   +7 more
semanticscholar   +1 more source

Transformer-XL: Attentive Language Models beyond a Fixed-Length Context [PDF]

open access: yes · Annual Meeting of the Association for Computational Linguistics, 2019
Transformers have the potential to learn longer-term dependencies, but are limited by a fixed-length context in the setting of language modeling. We propose a novel neural architecture, Transformer-XL, that enables learning dependency beyond a fixed length ...
Zihang Dai   +5 more
semanticscholar   +1 more source

Context matters [PDF]

open access: yes · Experimental Economics, 2018
Eliciting the level of risk aversion of experimental subjects is of crucial concern to experimenters. The literature offers a variety of methods for such elicitation; the experiment reported in this paper compares them.
Wenting Zhou, John Denis Hey
openaire   +5 more sources

Context Encoders: Feature Learning by Inpainting [PDF]

open access: yes · Computer Vision and Pattern Recognition, 2016
We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders - a convolutional neural network trained to generate the contents of an arbitrary image ...
Deepak Pathak   +4 more
semanticscholar   +1 more source

Microsoft COCO: Common Objects in Context [PDF]

open access: yes · European Conference on Computer Vision, 2014
We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding.
Tsung-Yi Lin   +7 more
semanticscholar   +1 more source
