Results 51 to 60 of about 186,001
Scalable Task Planning via Large Language Models and Structured World Representations
This work efficiently combines graph‐based world representations with the commonsense knowledge in Large Language Models to enhance planning techniques for the large‐scale environments that modern robots will need to face. Planning methods often struggle with computational intractability when solving task‐level problems in large‐scale environments ...
Rodrigo Pérez‐Dattari +4 more
wiley +1 more source
Unsafe AI for Education: A Conversation on Stochastic Parrots and Other Learning Metaphors
This interview article discusses the impact of the "stochastic parrot" metaphor for Large Language Models on popular and educational discourses. The metaphor comes from the title of an influential paper on the harms of large language models from ...
Emily M. Bender +4 more
doaj +1 more source
LLM Critics Help Catch LLM Bugs
Reinforcement learning from human feedback (RLHF) is fundamentally limited by the capacity of humans to correctly evaluate model output. To improve human evaluation ability and overcome that limitation, this work trains "critic" models that help humans evaluate model-written code more accurately. These critics are themselves LLMs trained with RLHF to ...
McAleese, Nat +5 more
openaire +2 more sources
Data Cube Approximation and Mining using Probabilistic Modeling [PDF]
On-line Analytical Processing (OLAP) techniques commonly used in data warehouses allow the exploration of data cubes according to different analysis axes (dimensions) and under different abstraction levels in a dimension hierarchy.
Boujenoui, Ameur +2 more
core
Exact Holography of the Mass-deformed M2-brane Theory
We test the holographic relation between the vacuum expectation values of gauge invariant operators in ${\cal N} = 6$ ${\rm U}_k(N)\times {\rm U}_{-k}(N)$ mass-deformed ABJM theory and the LLM geometries with $\mathbb{Z}_k$ orbifold in 11-dimensional ...
Jang, Dongmin +3 more
core +1 more source
Rule-restricted Automaton-grammar transducers: Power and Linguistic Applications [PDF]
This paper introduces the notion of a new transducer as a two-component system, which consists of a finite automaton and a context-free grammar. In essence, while the automaton reads its input string, the grammar produces its output string, and their ...
Horáček, Petr +2 more
core +1 more source
The Future of Research in Cognitive Robotics: Foundation Models or Developmental Cognitive Models?
Research in cognitive robotics founded on principles of developmental psychology and enactive cognitive science would yield what we seek in autonomous robots: the ability to perceive their environment, learn from experience, anticipate the outcomes of events, act to pursue goals, and adapt to changing circumstances without resorting to training with ...
David Vernon
wiley +1 more source
A model of ensuring LLM cybersecurity
The subject of study is a model for ensuring cybersecurity of Large Language Models (LLM). The goal of this study is to develop and analyze the components of the LLM cybersecurity model to improve its assessment accuracy and ensure the required security ...
Oleksii Neretin, Vyacheslav Kharchenko
doaj +1 more source
Localized antibody responses in influenza virus-infected mice [PDF]
The Abstracts of the Conference are located at: http://optionsviii.controlinfluenza.com/optionsviii/assets/File/Options_VIII_Abstracts_2013.pdf. Poster Session: Innate and Adaptive Immunity. Background: Traditionally, vaccine-mediated protective responses ...
Li, OTW, Poon, LLM
core

