🛠V1 Pro Visual Mapping Deployment Instructions
Overview
This article introduces the visual positioning module of the V1 Pro camera, covering its key features, how the module is used, and its application scenarios. It describes the basic mapping and localization workflow and the precautions to observe, provides short example tutorials for operating the host software and using the SDK, and also covers the UDP communication mode.
Key Features
- Overhead-view positioning, with repeatable positioning accuracy of ±1 cm at a given point
- Two focal-length versions, 6 m and 12 m, for different ceiling heights
- Built-in computing; no external industrial PC required
- In positioning mode, the module takes robot odometry as input and directly outputs the robot's pose
- A supporting LiDAR is provided by default to assist mapping; laser data can also be imported through the SDK
Coordinate system of the camera
The camera coordinate system is a right-handed Cartesian coordinate system, as shown in the figure:

Method of use
- Windows host software. Visualize and operate the camera's mapping and localization processes through the host software.
- SDK. In actual deployments, call the camera SDK in your own programs to implement the various functions.
Application scenario
- Overhead-view navigation; the camera must be mounted horizontally
- Choose the 6-meter or 12-meter camera version according to the actual ceiling height for the best positioning results
- Suitable for overhead scenes with rich visual features. Large areas with few visual features or solid colors cannot be used directly, but can be improved by applying reflective stickers
- Mount the camera rigidly with an unobstructed view, and operate in a relatively smooth environment free of visible dust
- Move at a low, steady speed when mapping, no more than 0.2 m/s; keep the speed below 1 m/s when localizing. The frame rate of the returned localization results is about 10 Hz
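The speed limits above are easy to enforce on the robot side before data reaches the camera. A minimal sketch, assuming planar velocities; only the 0.2 m/s and 1 m/s figures come from this document, the helper itself is illustrative:

```python
import math

# Speed limits from the document: <= 0.2 m/s while mapping, <= 1 m/s while localizing.
MAPPING_MAX_SPEED = 0.2      # m/s
LOCALIZING_MAX_SPEED = 1.0   # m/s

def speed_ok(vx: float, vy: float, mapping: bool) -> bool:
    """Return True if the robot's planar speed respects the current mode's limit."""
    speed = math.hypot(vx, vy)
    limit = MAPPING_MAX_SPEED if mapping else LOCALIZING_MAX_SPEED
    return speed <= limit
```

A motion controller can call this check in its command loop and throttle the robot whenever it returns False.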
Mapping process

The mapping process is shown in the figure, with the following main points to note:
- Odometry is required for both mapping and localization, and the odometry data must be in absolute coordinates;
- Single-line laser data is needed to assist localization during mapping. In some favorable scenes, mapping does not rely on laser data; in that case virtual laser data can be input instead. Please contact the relevant technical support staff;
- If you need to unify the coordinate systems of the laser map and the visual map (the localization results of the two maps cannot overlap exactly; a deviation of a few centimeters is normal), you must pass in the robot's laser-map pose during mapping, and you may then input virtual laser data;
- You can use the mapping tool to build maps offline. If mapping cannot be completed in some scenes, or problems occur, please contact the relevant technical support staff;
- "Virtual data" means data that arrives in real time but need not come from the actual scene, for example virtual laser data in which all range values are set to zero;
- If no real laser data is input, no laser grid map is generated and none is displayed in the host software interface;
- After mapping is enabled, data recording starts only when the robot is in motion; no data is recorded while it remains stationary;
- Before exporting data, turn off the mapping enable in advance and wait patiently for the export to finish;
- Plan the approximate recording path in advance, keeping the robot moving at a low and steady speed (< 0.2 m/s), with straight-line motion and right-angle turns. Avoid overlapping paths or random shaking. Paths in the same direction should keep about one-third field-of-view overlap, typically a parallel spacing of around 2 meters, so as to cover the mapping area or the robot's travel path as fully as possible. The ideal pattern is illustrated in the figure below:
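Virtual laser data, as described above, is real-time data whose contents need not reflect the actual scene. A minimal sketch of constructing one such scan in plain Python; the field names mimic a typical single-line scan message, but the exact message format expected by the SDK/ROS example is not specified here, so treat the names as assumptions:

```python
import math

def make_virtual_scan(num_beams: int = 360) -> dict:
    """Build a virtual single-line laser scan with all range values set to zero,
    as the document describes for scenes that do not rely on real laser data.
    Field names are illustrative, not the SDK's actual message definition."""
    return {
        "angle_min": -math.pi,
        "angle_max": math.pi,
        "angle_increment": 2 * math.pi / num_beams,
        "ranges": [0.0] * num_beams,  # all ranges zero -> virtual data
    }
```

Such a scan would be published at the same rate as a real sensor would be, so the camera still sees a live data stream.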

Localization process
The positioning process is shown in the figure, with the following main points to note:
- Import the built visual map before localizing, and switch to the corresponding map when localizing;
- Initial localization requires sending a relocalization request; correct localization results are received only after relocalization succeeds;
- Localization must be enabled when sending a relocalization request;
- The localization module records the pose while in the normal localization state; if the camera is powered off and restarted, it automatically relocalizes using the last recorded pose;
- Relocalization requires knowing the robot's approximate coordinates (x, y, yaw) in the visual map; the starting point of map recording is (0, 0, 0). In general, the x and y deviation should be less than 30 cm and the yaw deviation less than 20°;
- While in the normal localization state, record the robot's coordinates at a fixed station; relocalization can later be performed at that station using the recorded coordinates;
- If relocalization fails, move the robot's actual position or fine-tune the relocalization coordinates, or move the robot to an area with rich visual features and try again. If relocalization still fails, check whether the SDK program, data communication, and camera output are working properly;
- During normal localization, the robot must stay within the area covered by the camera's built-in map; running in an unmapped area will output incorrect position information;
- Factors such as poor visual features in some areas, or a mismatch between the ceiling height and the camera's focal length, can reduce localization accuracy
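The relocalization tolerances above (x/y deviation under 30 cm, yaw deviation under 20°) can be pre-checked before sending a request. A minimal sketch; only the thresholds come from this document, the helper itself is illustrative:

```python
import math

def reloc_guess_ok(guess, truth, xy_tol=0.30, yaw_tol_deg=20.0) -> bool:
    """Check whether a relocalization guess (x, y, yaw_deg) is within the
    tolerances the module typically needs: < 30 cm in x/y, < 20 deg in yaw."""
    dx = guess[0] - truth[0]
    dy = guess[1] - truth[1]
    # wrap the yaw difference into [-180, 180) degrees before comparing
    dyaw = (guess[2] - truth[2] + 180.0) % 360.0 - 180.0
    return abs(dx) < xy_tol and abs(dy) < xy_tol and abs(dyaw) < yaw_tol_deg
```

In practice `truth` would be the recorded station coordinates or the mapping start point (0, 0, 0).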
Preparation work
- Check that the camera is properly connected and that its output and communication are normal
- If you are using a LiDAR kit, connect it to the corresponding network port
- Mount the camera horizontally, and determine its extrinsic parameters relative to the robot center from the mounting position and orientation:
[x, y, z, yaw, pitch, roll]
- Mount the single-line LiDAR horizontally, and provide its extrinsic parameters relative to the robot center:
[x, y, yaw]
- If you use both the Windows host software and the SDK on an industrial PC, connect the camera, the industrial PC, and the Windows computer through a switch for communication.
Note: the camera supports at most two simultaneous connections, e.g. one host-software connection plus one SDK connection, or two host-software connections
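The planar LiDAR extrinsics [x, y, yaw] describe where the sensor sits relative to the robot center. As a hedged sketch of what those numbers mean, here is the standard 2D rigid transform that maps a sensor-frame point into the robot frame (the sign/rotation convention is an assumption; confirm it against the SDK documentation):

```python
import math

def sensor_point_to_robot(px: float, py: float, extrinsic):
    """Transform a point (px, py) from the sensor frame into the robot frame,
    given the sensor's extrinsic [x, y, yaw_deg] relative to the robot center.
    Convention assumed: rotate by yaw, then translate by the mounting offset."""
    tx, ty, yaw_deg = extrinsic
    c = math.cos(math.radians(yaw_deg))
    s = math.sin(math.radians(yaw_deg))
    return (c * px - s * py + tx, s * px + c * py + ty)
```

The camera's 6-DoF extrinsics [x, y, z, yaw, pitch, roll] generalize the same idea to 3D with three rotation angles.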
Example of Use
This part is carried out with the SDK sample program, which uses ROS topic communication to pass data to the SDK and receive the results returned by the algorithm module. Users can feed data through the SDK sample program on the Linux industrial PC and visualize the operation in the host software on the Windows side; for later on-site deployment, the SDK can be used on its own to implement mapping and localization.
In addition to the SDK, UDP communication can also be used, but its functionality is limited: mapping must be completed with the host software first, and the resulting map is then deployed for localization. We recommend using the SDK for both mapping and localization.
Build a map
Preparation work
Confirm that the camera and the communication are working. You can test the communication with virtual data; refer to the ROS example document provided with the SDK for configuration details. Connect the camera through the switch, then open the host software and connect to the camera, as shown in the following figure:
- Open the host computer program
- Connect the camera
- Select the application algorithm
- Select visual localization among the application algorithms
- You can now check whether each sensor's data status is normal in the visual localization view; if anything is abnormal, check the communication configuration.
- This is the state returned by the algorithm. In the "Normal Positioning" state, the robot pose is returned and can be used for localization.

User input data
Once communication between the external data source and the camera's internal positioning module has been established as in the previous step, you can follow the SDK sample program to feed real sensor data into the camera for mapping. The data include:
- Robot odometry data in an absolute coordinate system, > 30 Hz
- LiDAR data (or virtual data if the LiDAR kit is not used), >= 10 Hz
- Robot pose in the laser map (may be omitted if the coordinate system need not be unified with the laser map), >= 10 Hz
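The input rates above (> 30 Hz odometry, >= 10 Hz laser and pose) are worth verifying from message timestamps before suspecting the camera. A minimal sketch, assuming arrival times in seconds; the helper names are illustrative:

```python
def measured_rate_hz(timestamps) -> float:
    """Estimate the message rate in Hz from a list of arrival times (seconds)."""
    if len(timestamps) < 2:
        return 0.0
    elapsed = timestamps[-1] - timestamps[0]
    return (len(timestamps) - 1) / elapsed if elapsed > 0 else 0.0

def rates_ok(odom_ts, laser_ts) -> bool:
    # Thresholds from the document: odometry > 30 Hz, laser >= 10 Hz.
    return measured_rate_hz(odom_ts) > 30.0 and measured_rate_hz(laser_ts) >= 10.0
```

Collecting a second or two of timestamps per topic is enough for a stable estimate.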
Recording data
- Configure the parameters
  - Map name (if not set, the default "example_map1" is used)
  - Camera extrinsics relative to the robot, in meters and degrees
  - LiDAR extrinsics relative to the robot, in meters and degrees
- Plan the recording path
  Plan the mapping route in advance according to the actual scene, following the requirements above.
- Enable mapping
  Set the mapping enable, i.e. enter mapping mode; the robot starts recording once it moves.
- Export data
  When recording is complete, click "export data" (if you use the SDK, turn off the mapping enable first). The exported data is saved to params\f13141140000\vision_map.zip; just send it back to the appropriate technical support person. Note that f13141140000 is the camera ID, which differs between cameras and can be confirmed in the host software.
Offline building maps
This step can be completed by deploying the offline mapping tool to generate the map package file, or by MRDVS technical support staff, who will provide the map package file.
Importing visual maps
Import the map file as shown in the figure below (a zip file ending in ".zip"). If the newly imported map appears in the map-name drop-down list, the import succeeded.
Localization
Preparation work
- Confirm that the camera and the communication are working
- Confirm that the map of your area has been imported into the camera
User input data
- After map recording, offline mapping, and map importing have completed successfully, real sensor data can be fed into the camera for localization following the SDK sample program. The data include:
- Robot odometry data in an absolute coordinate system, > 30 Hz
Enabling positioning
- Configuration parameters
- Map name (use the already built map for the corresponding area)
- Camera extrinsics relative to the robot, in meters and degrees
- LiDAR extrinsics relative to the robot, in meters and degrees
- Enabling positioning
- Set the localization enable, i.e. enter localization mode

Repositioning request
Move the robot to the relocalization point, i.e. the robot's coordinate position in the map; note that the heading angle should also be approximately correct. The approximate location can be determined with laser assistance (grid maps are not available without real laser data). A relocalization request can be sent via the host software, ROS, or the SDK: drag the robot with the left mouse button to the relocalization point, rotate it to the correct heading, and click "Send Relocation Request".

If relocalization fails, push the robot to fine-tune its position and resend the relocalization request.
Normal positioning
After a relocalization request sent from the host software succeeds, the state changes to normal positioning and localization runs normally.

UDP communication
The whole visual mapping and localization workflow can be completed with the camera SDK, but the camera also supports UDP communication. In UDP mode, the robot side sends robot odometry, the relocalization pose, and (optionally) the robot pose to the camera's visual localization module, and the module returns the robot pose and the relocalization result to the robot side, as described in the UDP communication protocol document.
Vision Module->Robot
- Localization value
- Relocalization result
Robot->Vision Module
- Odometry
- Relocalization request
- Robot pose (pass it in if the visual-map and laser-map coordinate systems need to be unified; otherwise it can be omitted)
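The actual packet layout is defined in the UDP communication protocol document and is not reproduced here. As a hedged illustration of the transport only, a sketch of sending one odometry sample over UDP; the port, field order, and 4-double packing below are placeholders, not the real protocol:

```python
import socket
import struct

def send_odometry(sock, camera_addr, x, y, yaw, stamp):
    """Send one odometry sample to the camera over UDP.
    NOTE: this little-endian 4-double layout is a placeholder; use the field
    layout from the official UDP communication protocol document instead."""
    packet = struct.pack("<4d", x, y, yaw, stamp)
    sock.sendto(packet, camera_addr)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The camera IP and port below are placeholders:
# send_odometry(sock, ("192.168.1.100", 9000), 1.0, 2.0, 0.5, 0.0)
```

The robot side would call this at the odometry rate, while a second socket listens for the pose and relocalization-result packets coming back.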
Parameter Configuration
The visual localization module's configuration file is stored inside the camera by default at
/home/user/visloc_config/visloc_user.json. The login username is user and the default password is also user. The configuration parameters are described as follows (note: the JSON format does not support comments; the comment text below is for illustration only):
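Since JSON does not support comments, the deployed file must stay comment-free. A minimal sketch of reading it; the path comes from this document, but the parameter names inside are not listed here, so the sketch only loads the file and exposes its keys:

```python
import json

# Default path inside the camera, per the documentation above.
CONFIG_PATH = "/home/user/visloc_config/visloc_user.json"

def load_visloc_config(path: str = CONFIG_PATH) -> dict:
    """Load the visual localization configuration (plain JSON, no comments)."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

# Example (run on the camera, or on a file copied off it):
# cfg = load_visloc_config()
# print(sorted(cfg.keys()))
```

If editing fails with a parse error, check for stray comment text left in the file.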