Results 251 to 260 of about 5,155,222 (297)
Some of the following articles may not be open access.
Representation and Experience-Based Learning of Explainable Models for Robot Action Execution
IEEE/RSJ International Conference on Intelligent Robots and Systems, 2020
For robots acting in human-centered environments, the ability to improve based on experience is essential for reliable and adaptive operation; however, particularly in the context of robot failure analysis, experience-based improvement is practically ...
Alex Mitrevski, P. Plöger, G. Lakemeyer
semanticscholar +1 more source
International Journal of Neuroscience, 2020
Background Human motor imagery (MI), action execution, and action observation (AO) are functionally considered equivalent. MI during AO can extensively induce activation of the motor-related brain network in the absence of overt movement.
Jiu Chen +7 more
semanticscholar +1 more source
Real-Time Execution of Action Chunking Flow Policies
arXiv.org
Modern AI systems, especially those interacting with the physical world, increasingly require real-time performance. However, the high latency of state-of-the-art generalist models, including recent vision-language action models (VLAs), poses a ...
Kevin Black +2 more
semanticscholar +1 more source
Simultaneous action execution and observation optimise grasping actions
Experimental Brain Research, 2013
Action observation and execution share overlapping neural resonating mechanisms. In the present study, we sought to examine the effect of the activation of this system during concurrent movement observation and execution in a prehension task, when no a priori information about the requirements of grasping action was available. Although it is known that ...
Ménoret, Mathilde +4 more
openaire +3 more sources
Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success
Robotics
Recent vision-language-action models (VLAs) build upon pretrained vision-language models and leverage diverse robot datasets to demonstrate strong task execution, language following ability, and semantic generalization.
Moo Jin Kim, Chelsea Finn, Percy Liang
semanticscholar +1 more source
Hi Robot: Open-Ended Instruction Following with Hierarchical Vision-Language-Action Models
International Conference on Machine Learning
Generalist robots that can perform a range of different tasks in open-world settings must be able to not only reason about the steps needed to accomplish their goals, but also process complex instructions, prompts, and even feedback during task execution.
L. Shi +14 more
semanticscholar +1 more source
Concurrent execution system for action languages
Proceedings of the 15th ACM-IEEE International Conference on Formal Methods and Models for System Design, 2017
Traditional methods of managing concurrent processes are difficult and prone to errors. We propose that actions can provide a much simpler approach to the problem. In this paper, we use Temporal Logic of Actions to define an execution system that can be used to concurrently execute programs created with action languages.
Antti Jääskeläinen +2 more
openaire +1 more source