
Human tests for machine models: What lies “Beyond the Imitation Game”?

open access: yes. Journal of Linguistic Anthropology, Volume 36, Issue 1, May 2026.
Abstract: Benchmarking large language models (LLMs) is a key practice for evaluating their capabilities and risks. This paper considers the development of “BIG Bench,” a crowdsourced benchmark designed to test LLMs “Beyond the Imitation Game.” Drawing on linguistic anthropological and ethnographic analysis of the project's GitHub repository, we examine ...
Noya Kohavi, Anna Weichselbraun
wiley

Co‐textual dopes: How LLMs produce contextually appropriate text in chat interactions with humans without access to context

open access: yes. Journal of Linguistic Anthropology, Volume 36, Issue 1, May 2026.
Abstract: This paper asks how LLM‐based systems can produce text that humans take as contextually appropriate without the systems having seen text in its broader context. To understand how this is possible, context and co‐text must be distinguished. Co‐text is input to LLMs during training and at inference as well as the primary resource of sense‐making ...
Ole Pütz
wiley

Nonhuman situational enmeshments—How participants build temporal infrastructures for ChatGPT

open access: yes. Journal of Linguistic Anthropology, Volume 36, Issue 1, May 2026.
Abstract: This paper investigates how participants recruit Large Language Models (LLMs) like ChatGPT as interactional co‐participants depending on their temporal enmeshment within an interactional flow. Using Charles Goodwin's co‐operative action framework, we analyze video data of human–AI interaction to trace the temporal structures established by ...
Nils Klowait, Maria Erofeeva
wiley

Budget deficit mythology

open access: yes
Preston J. Miller
core  
