Identifying postpartum depression subtypes using natural language processing and clinical notes.
Adekkanattu P et al.
europepmc
Grounding Large Language Models for Robot Task Planning Using Closed‐Loop State Feedback
BrainBody-LLM introduces a hierarchical, feedback-driven planning framework in which two large language models (LLMs) coordinate high-level reasoning and low-level control for robotic tasks. By grounding decisions in real-time state feedback, it reduces hallucinations and improves task reliability.
Vineet Bhat et al.
wiley
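The closed-loop grounding idea in the BrainBody-LLM entry above can be sketched in a few lines. Every name here (the planner and controller functions standing in for the two coordinated LLMs) is an illustrative assumption, not the paper's actual interface; the point is only that the planner re-reads the executed state on each iteration instead of trusting its own prior plan.

```python
# Minimal sketch of closed-loop, feedback-grounded planning.
# The two functions below are stand-ins for the reasoning LLM and the
# control LLM, NOT the paper's API.

def high_level_planner(goal, state):
    """Reasoning 'LLM' stub: pick the next subtask from the observed state."""
    remaining = [step for step in goal if step not in state["done"]]
    return remaining[0] if remaining else None

def low_level_controller(subtask, state):
    """Control 'LLM' stub: execute one subtask and report real feedback."""
    state["done"].append(subtask)  # pretend execution succeeded
    return {"ok": True, "executed": subtask}

def closed_loop_plan(goal):
    """Re-plan from executed state each step; state feedback closes the loop."""
    state = {"done": []}
    trace = []
    while (subtask := high_level_planner(goal, state)) is not None:
        feedback = low_level_controller(subtask, state)
        trace.append(feedback["executed"])
    return trace

print(closed_loop_plan(["grasp", "lift", "place"]))  # ['grasp', 'lift', 'place']
```

Because the planner consults the executed state rather than an internally imagined one, a hallucinated "already done" step cannot survive a feedback cycle.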
A natural language processing-driven map of the aging research landscape.
Perez-Maletzki J, Sanz-Ros J.
europepmc
Continual Learning for Multimodal Data Fusion of a Soft Gripper
Models trained on a single data modality often struggle to generalize when exposed to a different modality. This work introduces a continual learning algorithm capable of incrementally learning different data modalities by leveraging both class‐incremental and domain‐incremental learning scenarios in an artificial environment where labeled data is ...
Nilay Kushawaha, Egidio Falotico
wiley
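As background for the continual-learning entry above: in the class-incremental scenario, new class labels arrive over time and the model must absorb them without revisiting old data. The toy below is a generic nearest-class-mean learner used only to illustrate that scenario; it is not the paper's algorithm.

```python
# Toy class-incremental learner (generic nearest-class-mean classifier,
# NOT the paper's method): new classes can be added at any time without
# retraining on earlier data.

class NearestMeanIncremental:
    def __init__(self):
        self.sums = {}  # label -> (componentwise sum of samples, count)

    def update(self, x, label):
        """Fold one labeled sample into the running mean for its class."""
        s, n = self.sums.get(label, ([0.0] * len(x), 0))
        self.sums[label] = ([a + b for a, b in zip(s, x)], n + 1)

    def predict(self, x):
        """Return the label whose class mean is closest to x."""
        def sq_dist(label):
            s, n = self.sums[label]
            return sum((a / n - b) ** 2 for a, b in zip(s, x))
        return min(self.sums, key=sq_dist)

clf = NearestMeanIncremental()
clf.update([0.0, 0.0], "rigid")
clf.update([1.0, 1.0], "soft")
clf.update([5.0, 5.0], "deformable")  # a class added incrementally
print(clf.predict([4.8, 5.1]))  # deformable
```

The domain-incremental case is the complement: the label set stays fixed while the input distribution (e.g. a new sensing modality of the gripper) shifts underneath it.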
Natural Language Processing and Computational Linguistics
Junichi Tsujii
doaj
Natural language processing for geriatric syndromes: a systematic review of methods, applications, and challenges.
Rahman F et al.
europepmc
Here, we present a textile, wearable capacitive interface enabling multidirectional remote control by dynamically modulating electrode overlap and spacing via a freely gliding upper electrode. A forearm‐mounted prototype drives robotic and media tasks with 12–15 ms latency, maintains < 0.8% drift after 500 cycles, and remains stably functional at 90 ...
Cagatay Gumus et al.
wiley
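The sensing principle behind the textile interface above rests on standard parallel-plate physics: capacitance grows with electrode overlap area and shrinks with spacing, which is what lets a gliding upper electrode encode position. A back-of-envelope model (our own illustration with assumed dimensions, not the paper's calibration):

```python
# Parallel-plate capacitance C = eps0 * eps_r * A / d, the physical basis
# for overlap/spacing modulation in a sliding capacitive sensor.
# Dimensions below are assumed for illustration.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(overlap_area_m2, gap_m, eps_r=1.0):
    """Capacitance of two plates with given overlap area and spacing."""
    return EPS0 * eps_r * overlap_area_m2 / gap_m

c_full = plate_capacitance(4e-4, 1e-3)  # 2 cm x 2 cm, 1 mm gap: ~3.5 pF
c_half = plate_capacitance(2e-4, 1e-3)  # half the overlap -> half the C
print(c_full, c_half)
```

Halving the overlap halves the capacitance, so a readout circuit tracking C recovers the slider position directly.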
Natural Language Processing Algorithm Accurately Classifies Diverticulitis-Related Complications and Predicts Long-Term Outcomes.
Ma W et al.
europepmc
Multimodal Human–Robot Interaction Using Human Pose Estimation and Local Large Language Models
A multimodal human–robot interaction framework integrates human pose estimation (HPE) and a large language model (LLM) for gesture‐ and voice‐based robot control. Speech‐to‐text (STT) enables voice command interpretation, while a safety‐aware arbitration mechanism prioritizes gesture input for rapid intervention.
Nasiru Aboki et al.
wiley
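The "safety-aware arbitration" described in the entry above, where gesture input pre-empts voice for rapid intervention, amounts to a priority rule. A trivial sketch (hypothetical function; the paper's actual mechanism may differ):

```python
# Illustrative arbitration: a gesture command (fast, from pose estimation)
# pre-empts a voice command (slower, via speech-to-text).
# Hypothetical names, not the paper's API.

def arbitrate(gesture_cmd, voice_cmd):
    """Return (channel, command); gestures win so a stop gesture acts first."""
    if gesture_cmd is not None:
        return ("gesture", gesture_cmd)
    if voice_cmd is not None:
        return ("voice", voice_cmd)
    return ("idle", None)

print(arbitrate("stop", "move to the table"))  # ('gesture', 'stop')
```

Voice remains available for rich instructions, but a detected stop gesture always wins the tie, which is the safety property the framework targets.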
Natural Language Processing and Linguistic Fieldwork
Steven Bird
doaj

