Results 61 to 70 of about 213,157 (286)
What Do Large Language Models Know About Materials?
If large language models (LLMs) are to be used inside the materials discovery and engineering process, they must be benchmarked for the accuracy of their intrinsic materials knowledge. The current work introduces 1) a reasoning process through the processing–structure–property–performance chain and 2) a tool for benchmarking the knowledge of LLMs concerning ...
Adrian Ehrenhofer +2 more
wiley +1 more source
Fostering Innovation: Streamlining Magnetocaloric Materials Research by Digitalization
Cooling based on the magnetocaloric effect (MCE) is an environmentally friendly refrigeration method with great potential. Optimizing MCE materials involves the preparation and screening of large quantities of samples, which in turn generates a large amount of data. A digitalization approach is presented that uses ontologies, knowledge graphs, and digital workflows to ...
Simon Bekemeier +17 more
wiley +1 more source
How large should a dense corpus be for reliable studies in early language acquisition?
Dense corpora have been put forward as necessary tools for corpus studies of language acquisition. Despite their great interest, they are not yet frequently used, probably because of the high cost involved in their creation. The goal of the present study ...
Christophe Parisse
doaj +1 more source
Marrying Universal Dependencies and Universal Morphology
The Universal Dependencies (UD) and Universal Morphology (UniMorph) projects each present schemata for annotating the morphosyntactic details of language.
Cotterell, Ryan +4 more
core +1 more source
Magnetic tunnel junctions (MTJs) using MgO tunnel barriers face challenges of high resistance‐area product and low tunnel magnetoresistance (TMR). To discover alternative materials, Literature Enhanced Ab initio Discovery (LEAD) is developed. The LEAD‐predicted materials are theoretically evaluated, showing that MTJs with dusting of ScN or TiN on ...
Sabiq Islam +6 more
wiley +1 more source
Grounding Large Language Models for Robot Task Planning Using Closed‐Loop State Feedback
BrainBody‐Large Language Model (LLM) introduces a hierarchical, feedback‐driven planning framework where two LLMs coordinate high‐level reasoning and low‐level control for robotic tasks. By grounding decisions in real‐time state feedback, it reduces hallucinations and improves task reliability.
Vineet Bhat +4 more
wiley +1 more source
Multimodal Human–Robot Interaction Using Human Pose Estimation and Local Large Language Models
A multimodal human–robot interaction framework integrates human pose estimation (HPE) and a large language model (LLM) for gesture‐ and voice‐based robot control. Speech‐to‐text (STT) enables voice command interpretation, while a safety‐aware arbitration mechanism prioritizes gesture input for rapid intervention.
Nasiru Aboki +2 more
wiley +1 more source
This review comprehensively summarizes the atomic defects in TMDs for their applications in sustainable energy storage devices, along with the latest progress in ML methodologies for high‐throughput TEM data analysis, offering insights on how ML‐empowered microscopy facilitates bridging structure–property correlation and inspires knowledge for precise ...
Zheng Luo +6 more
wiley +1 more source
Hand in hand: automatic sign Language to English translation [PDF]
In this paper, we describe the first data-driven automatic sign-language-to-speech translation system. While both sign language (SL) recognition and translation techniques exist, both use an intermediate notation system not directly intelligible for ...
Dreuw, Philippe +4 more
core +2 more sources
Multimodal Wearable Biosensing Meets Multidomain AI: A Pathway to Decentralized Healthcare
Multimodal biosensing meets multidomain AI. Wearable biosensors capture complementary biochemical and physiological signals, while cross‐device, population‐aware learning aligns noisy, heterogeneous streams. This Review distills key sensing modalities, fusion and calibration strategies, and privacy‐preserving deployment pathways that transform ...
Chenshu Liu +10 more
wiley +1 more source

