This appendix bridges the gap between simulation and real-world robotics by providing the IVLMap framework's practical implementation and validation details. It first presents a complete high-level function library (Table III) that adds instance-aware and attribute-aware searches to the capabilities of earlier maps (such as VLMap), forming the fundamental API the LLM uses to build navigation code.

IVLMap Bridges the Sim-to-Real Gap in Robot Navigation

2025/11/10 19:41


TABLE III: LIST OF HIGH-LEVEL FUNCTIONS USED

VI. APPENDIX


A. Full List of Navigation High-Level Functions

Building on VLMap, we have developed a new high-level function library based on the distinctive features of IVLMap. The full list of navigation high-level functions is given in Table III.
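Since Table III itself is not reproduced in this version, the following is a minimal, hypothetical sketch of what instance-aware and attribute-aware navigation primitives of this kind might look like; the function names, signatures, and the example instruction are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical navigation primitives in the spirit of Table III.
# All names and signatures here are assumptions for illustration only.

action_log = []  # records the primitive calls a generated program makes


def move_to_object(name, instance=None, color=None):
    """Navigate to an object, optionally selecting a specific instance
    (e.g. the 2nd chair) or an attribute such as color."""
    action_log.append(("move_to", name, instance, color))


def turn(degrees):
    """Rotate the robot in place by the given number of degrees."""
    action_log.append(("turn", degrees))


# The kind of Python program an LLM might generate from the instruction
# "go to the second chair, then face the red sofa":
move_to_object("chair", instance=2)
turn(90)
move_to_object("sofa", color="red")
```

Here the primitives only log their calls; on a real robot each would wrap a planner and controller.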

B. Full Prompts for LLM Generating Python Code

system_prompt

attributes_prompt

function_prompt

The omitted function prompt can be found in Table III.

example_prompt

gencodeprompt
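The prompt texts themselves are omitted above, so as a hedged sketch only, the named segments might be assembled into a single LLM request along these lines; the segment contents are placeholders and the `build_prompt` helper is an assumption, not the paper's code.

```python
# Hypothetical assembly of the prompt segments named above into one LLM
# request. Segment contents are placeholders; only the ordering
# (system -> attributes -> functions -> examples -> generation request)
# is sketched, and all variable/function names are assumptions.

system_prompt = "You are a robot navigation assistant..."            # placeholder
attributes_prompt = "Objects may carry attributes such as color..."  # placeholder
function_prompt = "Available functions: move_to_object(...), ..."    # placeholder (see Table III)
example_prompt = "Instruction: ... -> Code: ..."                     # placeholder
gencodeprompt = "Now write Python code for the instruction: {instr}"


def build_prompt(instruction: str) -> str:
    """Concatenate the segments and fill in the user's instruction."""
    parts = [system_prompt, attributes_prompt, function_prompt,
             example_prompt, gencodeprompt.format(instr=instruction)]
    return "\n\n".join(parts)


prompt = build_prompt("go to the second chair")
```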

C. Experiment on Real Robot

ROS-based Smart Car Real-World Data Collection Scheme: In current research on visual language navigation, the majority of work is implemented using simulators such as Habitat and AI2THOR, achieving notable results in virtual environments. To validate the effectiveness of the algorithm in real-world scenarios, we conducted corresponding experiments in actual environments. Our real-world data collection platform is illustrated in Fig. 8. Before initiating the data collection process, we performed camera calibration to establish the transformation between the robot base coordinate system and the camera coordinate system. During data collection, it is crucial that the robot and the host are on the same local network. The robot's movement is controlled from the laptop keyboard through the ROS communication mechanism. RGB and depth information is captured by the Astra Pro Plus camera, while pose information is obtained from the IMU and the robot's velocity encoder. These sensor data are published as ROS topics. ROS message_filters[5] is then used to synchronize the three types of sensor data, ensuring they share approximately the same timestamp.
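To make the synchronization step concrete without requiring a ROS installation, here is a minimal pure-Python sketch of the approximate-time matching that ROS `message_filters` (e.g. its `ApproximateTimeSynchronizer`) performs: grouping an RGB frame, a depth frame, and a pose sample whose timestamps all fall within a small tolerance. The data and the `synchronize` helper are illustrative assumptions; this mirrors the library's behavior in spirit, not its implementation.

```python
# Toy approximate-time synchronization: each input is a list of
# (timestamp, payload) tuples sorted by time; a triple is emitted when
# the three timestamps lie within `slop` seconds of one another.


def synchronize(rgb, depth, pose, slop=0.05):
    """Match each RGB frame with the nearest-in-time depth and pose
    samples, keeping only triples whose timestamps span <= slop."""
    matched = []
    for t_rgb, img in rgb:
        # pick the depth/pose sample closest in time to this RGB frame
        t_d, d = min(depth, key=lambda s: abs(s[0] - t_rgb))
        t_p, p = min(pose, key=lambda s: abs(s[0] - t_rgb))
        if max(t_rgb, t_d, t_p) - min(t_rgb, t_d, t_p) <= slop:
            matched.append((img, d, p))
    return matched


rgb = [(0.00, "rgb0"), (0.10, "rgb1")]
depth = [(0.01, "d0"), (0.12, "d1")]
pose = [(0.02, "p0"), (0.30, "p1")]
pairs = synchronize(rgb, depth, pose)  # only the first frame matches
```

With the default 50 ms tolerance, only the first RGB frame finds depth and pose samples close enough in time; the second frame's nearest pose sample is 80 ms away and is rejected.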

\ Fig. 8. We built an intelligent robotic car with ROS, leveraging Jetson Nano for deep learning. It includes a high-precision Orbbec Astra Pro Plus monocular depth camera and Slamtec E300 LiDAR for RGB-D and LiDAR data capture.

In the real-world environment, where only odometry information reflecting the pose changes of the robot car is available, a coordinate transformation between the ROSRobot base coordinate system and the camera coordinate system is necessary (Fig. 9). The rotational relationship between the robot coordinate system and the camera coordinate system can be expressed by the rotation matrix in Equation 3.
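Equation 3 is not reproduced in this version. As a hedged sketch only, the rotation below assumes the standard ROS conventions (REP 103: base frame x forward / y left / z up; camera optical frame z forward / x right / y down); the matrix on the actual robot depends on the calibration and may differ.

```python
import numpy as np

# Assumed base<-camera rotation under standard ROS frame conventions.
# Columns are the camera axes expressed in the robot base frame:
#   camera x (right)   -> -base y
#   camera y (down)    -> -base z
#   camera z (forward) ->  base x
R_base_cam = np.array([
    [0.0, 0.0, 1.0],
    [-1.0, 0.0, 0.0],
    [0.0, -1.0, 0.0],
])

# Sanity check: a point 1 m straight ahead of the camera (optical z)
# lies 1 m ahead of the robot base (base x).
p_cam = np.array([0.0, 0.0, 1.0])
p_base = R_base_cam @ p_cam
```

Because the matrix is a pure rotation, its inverse is its transpose, which gives the camera-from-base direction when needed.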


Fig. 9. Habitat coordinate system, camera coordinate system, and ROSRobot coordinate system.

Fig. 10. 3D reconstruction results in a real environment: examples of RGB images captured by the camera are shown on the left, with the corresponding 3D reconstruction results on the right.

D. 3D Reconstruction in Bird's-Eye View

Bird's-eye views of the 3D reconstructions of different scenes in the dataset collected with cmu-exploration are shown in Fig. 11.

E. IVLMap Segmentation Results

A comparison of the semantic maps produced by IVLMap (ours, orange) and VLMap (ground truth, green) is shown in Fig. 12; the visualization highlights the differences and similarities between the two maps.

Fig. 11. 3D Reconstruction Bird's-Eye View of Different Scenes

Fig. 12. Semantic Map Comparison between IVLMap (Ours, Orange) and VLMap (GT, Green)

:::info Authors:

(1) Jiacui Huang, Senior, IEEE;

(2) Hongtao Zhang, Senior, IEEE;

(3) Mingbo Zhao, Senior, IEEE;

(4) Wu Zhou, Senior, IEEE.

:::


:::info This paper is available on arxiv under the CC BY 4.0 Deed (Attribution 4.0 International) license.

:::

[5] ROS message_filters is a library in the Robot Operating System (ROS) that provides tools for synchronizing and filtering messages from multiple topics in a flexible and efficient manner.
