Results 121 to 130 of about 78,230 (294)
An AI‐Enabled All‐In‐One Visual, Proximity, and Tactile Perception Multimodal Sensor
Targeting integrated multimodal perception for robots, an AI‐enabled all‐in‐one multimodal sensor is proposed. The sensor perceives three modalities: vision, proximity, and touch. By toggling an ultraviolet light and adjusting the camera focus, it switches smoothly between perceptual modalities, enabling ...
Menghao Pu +7 more
wiley +1 more source
Autonomous navigation in unknown environments demands policies that can jointly perceive semantic context and geometric safety. Existing Transformer-enabled deep reinforcement learning (DRL) frameworks, such as the Goal-guided Transformer Soft Actor ...
Alpaslan Burak İnner +1 more
doaj +1 more source
Grounding Large Language Models for Robot Task Planning Using Closed‐Loop State Feedback
BrainBody‐Large Language Model (LLM) introduces a hierarchical, feedback‐driven planning framework where two LLMs coordinate high‐level reasoning and low‐level control for robotic tasks. By grounding decisions in real‐time state feedback, it reduces hallucinations and improves task reliability.
Vineet Bhat +4 more
wiley +1 more source
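The closed-loop pattern this entry describes, re-planning from fresh state feedback at every step rather than executing a fixed open-loop plan, can be sketched as follows. The planner and controller here are hypothetical rule-based stubs standing in for the paper's two LLMs; only the feedback structure is illustrated, not BrainBody-LLM's actual interfaces:

```python
def plan_step(goal, state):
    """Hypothetical stand-in for the high-level planner: a real system
    would query an LLM with the goal and the current observed state."""
    return "move" if state < goal else "stop"

def execute(state, action):
    """Hypothetical stand-in for the low-level controller: advances the
    state by one unit per 'move' command."""
    return state + 1 if action == "move" else state

def closed_loop(goal, state, max_steps=20):
    """Re-plan from the latest state each iteration; grounding every
    decision in observed state is what limits compounding errors."""
    for _ in range(max_steps):
        action = plan_step(goal, state)
        if action == "stop":
            break
        state = execute(state, action)
    return state
```

Because the planner is re-invoked with the observed state, a failed or drifted action is corrected on the next step instead of invalidating the rest of a precomputed plan.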
Visual teach‐and‐repeat (VTR) navigation allows robots to learn and follow routes without building a full metric map. We show that navigation accuracy for VTR can be improved by integrating a topological map with error‐drift correction based on stereo vision.
Fuhai Ling, Ze Huang, Tony J. Prescott
wiley +1 more source
A new benchmark for camouflaged object detection: RGB-D camouflaged object detection dataset
This article aims to provide a novel image paradigm for camouflaged object detection, i.e., RGB-D images. To promote the development of camouflaged object detection tasks based on RGB-D images, we construct an RGB-D camouflaged object detection dataset ...
Zhang Dongdong, Wang Chunping, Fu Qiang
doaj +1 more source
Stitching of depth and color images from multiple RGB-D sensors for extended field of view
Commodity RGB-D sensors have a limited field of view compared with the nearly 360° horizontal field of view of lidar. This article presents a method to extend the field of view by stitching depth and color images from multiple RGB-D sensors to ...
Changquan Ding, Hang Liu, Hengyu Li
doaj +1 more source
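One common way to combine depth images from several RGB-D sensors, as this entry describes, is to back-project each depth map into 3-D with the camera intrinsics and then transform the points into a shared reference frame with known extrinsics. The following is a minimal sketch of that step under assumed intrinsics `K` and extrinsics `(R, t)`; it is not the article's specific stitching pipeline:

```python
import numpy as np

def backproject(depth, K):
    """Back-project a depth image (meters) to 3-D points in the camera frame,
    using the pinhole intrinsic matrix K."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def transform_points(pts, R, t):
    """Apply the rigid transform (R, t) taking sensor-frame points into the
    shared reference frame; stitching then merges point sets per sensor."""
    return pts @ R.T + t
```

With calibrated extrinsics for each sensor, the merged point cloud can be re-projected into a single wide-field depth and color image.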
TacScope: A Miniaturized Vision‐Based Tactile Sensor for Surgical Applications
TacScope is a compact, vision‐based tactile sensor designed for robot‐assisted surgery. By leveraging a curved elastomer surface with pressure‐sensitive particle redistribution, it captures high‐resolution 3D tactile feedback. TacScope enables accurate tumor detection and shape classification beneath soft tissue phantoms, offering a scalable, low‐cost ...
Md Rakibul Islam Prince +3 more
wiley +1 more source
Continual Learning for Multimodal Data Fusion of a Soft Gripper
Models trained on a single data modality often struggle to generalize when exposed to a different modality. This work introduces a continual learning algorithm capable of incrementally learning different data modalities by leveraging both class‐incremental and domain‐incremental learning scenarios in an artificial environment where labeled data is ...
Nilay Kushawaha, Egidio Falotico
wiley +1 more source
This project implements a SLAM technique called GraphSLAM, which applies graph theory to build an online optimization system that allows a robot to map its environment and localize itself, using a Time-of-Flight camera as the input source.
openaire +2 more sources
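The graph-optimization idea behind GraphSLAM can be shown in a minimal 1-D form: poses are nodes, odometry and loop-closure measurements are edges, and the maximum-likelihood trajectory is the least-squares solution of the resulting linear system. This toy solver illustrates the structure only; the project's actual implementation works on full camera poses:

```python
import numpy as np

def solve_pose_graph_1d(n_poses, edges, anchor=0.0):
    """Solve a linear 1-D pose graph by least squares.

    edges: list of (i, j, z) where z is the measured displacement x_j - x_i.
    Each edge contributes a term (x_j - x_i - z)^2 to the objective; the
    normal equations H x = b are accumulated edge by edge.
    """
    H = np.zeros((n_poses, n_poses))
    b = np.zeros(n_poses)
    for i, j, z in edges:
        H[i, i] += 1; H[j, j] += 1
        H[i, j] -= 1; H[j, i] -= 1
        b[i] -= z
        b[j] += z
    # Anchor the first pose with a strong prior to remove gauge freedom.
    H[0, 0] += 1e6
    b[0] += 1e6 * anchor
    return np.linalg.solve(H, b)

# Three odometry edges plus one loop closure that disagrees slightly;
# the optimizer spreads the 0.3 discrepancy across all edges.
x = solve_pose_graph_1d(4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 3.3)])
# x ≈ [0, 1.075, 2.15, 3.225]
```

The same accumulate-and-solve pattern generalizes to SE(3) poses with iterative relinearization, which is what graph-based SLAM back ends do in practice.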
This work presents a state‐adaptive Koopman linear quadratic regulator framework for real‐time manipulation of a deformable swab tool in robotic environmental sampling. By combining Koopman linearization, tactile sensing, and centroid‐based force regulation, the system maintains stable contact forces and high coverage across flat and inclined surfaces.
Siavash Mahmoudi +2 more
wiley +1 more source
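The regulator at the core of this entry, an LQR applied to a (Koopman-lifted) linear model, can be sketched with a plain discrete-time Riccati fixed-point iteration. The matrices below are an assumed toy system, not the paper's lifted swab dynamics:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain K via fixed-point iteration of the
    Riccati equation; the control law is u = -K x."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Scalar toy system x_{k+1} = x_k + u_k with unit state/input costs.
K = dlqr(np.array([[1.0]]), np.array([[1.0]]),
         np.array([[1.0]]), np.array([[1.0]]))
```

In a Koopman setting, `A` and `B` would come from a regression on lifted observables, so the same linear-quadratic machinery regulates an otherwise nonlinear contact-force process.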