Results 141 to 150 of about 275,295

A Survey for Deep Reinforcement Learning Based Network Intrusion Detection

open access: yesApplied AI Letters, Volume 7, Issue 2, June 2026.
This paper surveys deep reinforcement learning (DRL) for network intrusion detection, evaluating model efficiency, minority attack detection, and dataset imbalance. Findings show DRL achieves state‐of‐the‐art results on public datasets, sometimes surpassing traditional deep learning.
Wanrong Yang   +3 more
wiley   +1 more source

Language machines: Toward a linguistic anthropology of large language models

open access: yesJournal of Linguistic Anthropology, Volume 36, Issue 1, May 2026.
Abstract Large language models (LLMs) challenge long‐standing assumptions in linguistics and linguistic anthropology by generating human‐like language without relying on rule‐based structures. This introduction to the special issue Language Machines calls for renewed engagement with LLMs as socially embedded language technologies.
Siri Lamoureaux   +2 more
wiley   +1 more source

Evaluating AI-based comprehensive clinical decision support for sepsis and ARDS: protocol for a Clinician Turing Test. [PDF]

open access: yesBMJ Open
Angeli Gazola A   +12 more
europepmc   +1 more source

Human tests for machine models: What lies “Beyond the Imitation Game”?

open access: yesJournal of Linguistic Anthropology, Volume 36, Issue 1, May 2026.
Abstract Benchmarking large language models (LLMs) is a key practice for evaluating their capabilities and risks. This paper considers the development of “BIG Bench,” a crowdsourced benchmark designed to test LLMs “Beyond the Imitation Game.” Drawing on linguistic anthropological and ethnographic analysis of the project's GitHub repository, we examine ...
Noya Kohavi, Anna Weichselbraun
wiley   +1 more source

Co‐textual dopes: How LLMs produce contextually appropriate text in chat interactions with humans without access to context

open access: yesJournal of Linguistic Anthropology, Volume 36, Issue 1, May 2026.
Abstract This paper asks how LLM‐based systems can produce text that is taken as contextually appropriate by humans without having seen text in its broader context. To understand how this is possible, context and co‐text have to be distinguished. Co‐text is input to LLMs during training and at inference as well as the primary resource of sense‐making ...
Ole Pütz
wiley   +1 more source

The chatbot's real self: On the archaeology of artificial personas

open access: yesJournal of Linguistic Anthropology, Volume 36, Issue 1, May 2026.
Abstract From the beginning of widespread public interactions with ChatGPT and other large language models, some users have seen the disfluencies of chatbots as opportunities for them to go on an archaeological search for an unfettered chatbot persona that they need to jailbreak. These are not claims of sentience, but rather of personhood.
Courtney Handman
wiley   +1 more source
