
AI‐Powered Framework for Evaluating Drug Efficacy for Three‐Dimensional In Vitro Cancer Models in Robot‐Assisted Production

open access: yes. Advanced Robotics Research, EarlyView.
An AI‐powered, robot‐assisted framework automatically produces, images, and analyzes 3D tumor spheroids to evaluate drug efficacy. Integrated modules handle spheroid formation, live/dead staining, brightfield imaging, and automated image analysis, including spheroid segmentation and viability metrics to assess drug treatment efficacy. The workflow ...
Dalia Mahdy   +13 more
wiley   +1 more source

Improving the Robustness of Visual Teach‐and‐Repeat Navigation Using Drift Error Correction and Event‐Based Vision for Low‐Light Environments

open access: yes. Advanced Robotics Research, EarlyView.
Visual teach‐and‐repeat (VTR) navigation allows robots to learn and follow routes without building a full metric map. We show that navigation accuracy for VTR can be improved by integrating a topological map with error‐drift correction based on stereo vision.
Fuhai Ling, Ze Huang, Tony J. Prescott
wiley   +1 more source

Continual Learning for Multimodal Data Fusion of a Soft Gripper

open access: yes. Advanced Robotics Research, EarlyView.
Models trained on a single data modality often struggle to generalize when exposed to a different modality. This work introduces a continual learning algorithm capable of incrementally learning different data modalities by leveraging both class‐incremental and domain‐incremental learning scenarios in an artificial environment where labeled data are ...
Nilay Kushawaha, Egidio Falotico
wiley   +1 more source

A Comparative Study of Cast-Tape, Freeze, and Oven Drying on the Physicochemical and Bioactive Properties of Red Cabbage Microgreen (Brassica oleracea var. capitata f. rubra) Foam Powders. [PDF]

open access: yes. J Food Sci.
De Jesus Silva I   +9 more
europepmc   +1 more source

Multimodal Human–Robot Interaction Using Human Pose Estimation and Local Large Language Models

open access: yes. Advanced Robotics Research, EarlyView.
A multimodal human–robot interaction framework integrates human pose estimation (HPE) and a large language model (LLM) for gesture‐ and voice‐based robot control. Speech‐to‐text (STT) enables voice command interpretation, while a safety‐aware arbitration mechanism prioritizes gesture input for rapid intervention.
Nasiru Aboki   +2 more
wiley   +1 more source
