Results 81 to 90 of about 18,124

Points of Attention When Evaluating Political Reforms: Report from the Meta-Evaluation of the Evaluation of the NAV Reform in Norway [PDF]

open access: yes, 2014
Karen Nielsen Breidahl   +3 more
core   +4 more sources

Nav-R1: Reasoning and Navigation in Embodied Scenes

arXiv.org
Embodied navigation requires agents to integrate perception, reasoning, and action for robust interaction in complex 3D environments. Existing approaches often suffer from incoherent and unstable reasoning traces that hinder generalization across diverse
Qingxiang Liu   +3 more
semanticscholar   +1 more source

SG-Nav: Online 3D Scene Graph Prompting for LLM-based Zero-shot Object Navigation

Neural Information Processing Systems
In this paper, we propose a new framework for zero-shot object navigation. Existing zero-shot object navigation methods prompt LLM with the text of spatially close objects, which lacks enough scene context for in-depth reasoning.
Hang Yin   +4 more
semanticscholar   +1 more source

VL-Nav: Real-time Vision-Language Navigation with Spatial Reasoning

arXiv.org
Vision-language navigation in unknown environments is crucial for mobile robots. In scenarios such as household assistance and rescue, mobile robots need to understand a human command, such as "find a person wearing black".
Yi Du   +6 more
semanticscholar   +1 more source

Open-Nav: Exploring Zero-Shot Vision-and-Language Navigation in Continuous Environment with Open-Source LLMs

IEEE International Conference on Robotics and Automation
Vision-and-Language Navigation (VLN) tasks require an agent to follow textual instructions to navigate through 3D environments. Traditional approaches use supervised learning methods, relying heavily on domain-specific datasets to train VLN models ...
Yanyuan Qiao   +7 more
semanticscholar   +1 more source

VLM-Social-Nav: Socially Aware Robot Navigation Through Scoring Using Vision-Language Models

IEEE Robotics and Automation Letters
We propose VLM-Social-Nav, a novel Vision-Language Model (VLM) based navigation approach to compute a robot's motion in human-centered environments.
Daeun Song   +5 more
semanticscholar   +1 more source

RATE-Nav: Region-Aware Termination Enhancement for Zero-shot Object Navigation with Vision-Language Models

Annual Meeting of the Association for Computational Linguistics
Object Navigation (ObjectNav) is a fundamental task in embodied artificial intelligence. Although significant progress has been made in semantic map construction and target direction prediction in current research, redundant exploration and exploration ...
Junjie Li   +6 more
semanticscholar   +1 more source

X-Nav: Learning End-to-End Cross-Embodiment Navigation for Mobile Robots

IEEE Robotics and Automation Letters
Existing navigation methods are primarily designed for specific robot embodiments, limiting their generalizability across diverse robot platforms. In this letter, we introduce X-Nav, a novel framework for end-to-end cross-embodiment navigation where a ...
Haitong Wang   +3 more
semanticscholar   +1 more source

IGL-Nav: Incremental 3D Gaussian Localization for Image-goal Navigation

arXiv.org
Visual navigation with an image as goal is a fundamental and challenging problem. Conventional methods either rely on end-to-end RL learning or modular-based policy with topological graph or BEV map as memory, which cannot fully model the geometric ...
Wenxuan Guo   +6 more
semanticscholar   +1 more source

FiLM-Nav: Efficient and Generalizable Navigation via VLM Fine-tuning

arXiv.org
Enabling robotic assistants to navigate complex environments and locate objects described in free-form language is a critical capability for real-world deployment.
Naoki Yokoyama, Sehoon Ha
semanticscholar   +1 more source
