Results 101 to 110 of about 132,919
A Universal Reversible Turing Machine that Directly Simulates Reversible Counter Machines [PDF]
We construct a 1-tape 98-state 10-symbol universal reversible Turing machine (URTM(98,10)) that directly simulates reversible counter machines (RCMs). The objective of this construction is not to minimize the numbers of states and tape symbols, but ...
Kenichi Morita
doaj +1 more source
A Survey for Deep Reinforcement Learning Based Network Intrusion Detection
This paper surveys deep reinforcement learning (DRL) for network intrusion detection, evaluating model efficiency, minority attack detection, and dataset imbalance. Findings show DRL achieves state‐of‐the‐art results on public datasets, sometimes surpassing traditional deep learning.
Wanrong Yang +3 more
wiley +1 more source
Language machines: Toward a linguistic anthropology of large language models
Abstract Large language models (LLMs) challenge long‐standing assumptions in linguistics and linguistic anthropology by generating human‐like language without relying on rule‐based structures. This introduction to the special issue Language Machines calls for renewed engagement with LLMs as socially embedded language technologies.
Siri Lamoureaux +2 more
wiley +1 more source
Human tests for machine models: What lies “Beyond the Imitation Game”?
Abstract Benchmarking large language models (LLMs) is a key practice for evaluating their capabilities and risks. This paper considers the development of “BIG Bench,” a crowdsourced benchmark designed to test LLMs “Beyond the Imitation Game.” Drawing on linguistic anthropological and ethnographic analysis of the project's GitHub repository, we examine ...
Noya Kohavi, Anna Weichselbraun
wiley +1 more source
Statistical Complexity Analysis of Turing Machine tapes with Fixed Algorithmic Complexity Using the Best-Order Markov Model. [PDF]
Silva JM, Pinho E, Matos S, Pratas D.
europepmc +1 more source
Abstract This paper asks how LLM‐based systems can produce text that is taken as contextually appropriate by humans without having seen text in its broader context. To understand how this is possible, context and co‐text have to be distinguished. Co‐text is input to LLMs during training and at inference as well as the primary resource of sense‐making ...
Ole Pütz
wiley +1 more source
Newcomb's Paradox: a Subversive Interpretation [PDF]
A re-interpretation of the asymmetric roles assigned to the two agents in the genesis of Newcomb’s Paradox is suggested. The re-interpretation assigns a more active role to the 'rational' agent and a possible Turing Machine interpretation for the ...
K. Vela Velupillai
core
Abstract From the beginning of widespread public interactions with ChatGPT and other large language models, some users have seen the disfluencies of chatbots as opportunities for them to go on an archaeological search for an unfettered chatbot persona that they need to jailbreak. These are not claims of sentience, but rather of personhood.
Courtney Handman
wiley +1 more source
Risk management in the era of data-centric engineering
Novel methods of data collection and analysis can enhance traditional risk management practices that rely on expert engineering judgment and established safety records, specifically when key conditions are met: Analysis is linked to the decisions it is ...
Domenic Di Francesco
doaj +1 more source
Abstract The current melanoma staging system predicts 74% of the variance in survival, with prognostic biomarkers subject to high levels of inter‐observer variation. This work assesses whether a previously developed convolutional neural network (CNN) for invasive melanoma segmentation in whole slide images (WSIs) may reveal new insights into melanoma ...
Emily L Clarke +15 more
wiley +1 more source