
Package Summary

Adaptive Monte Carlo Localization (AMCL) in 3D.

amcl3d is a probabilistic algorithm to localize a robot moving in 3D. It uses Monte Carlo Localization, i.e. a particle filter. This package uses a laser sensor and radio-range sensors to localize a UAV within a known map.

  • Maintainer status: maintained
  • Maintainer: Francisco Cuesta Rodriguez <fcuesta AT catec DOT aero>, Paloma Carrasco Fernandez <pcarrasco AT catec DOT aero>
  • Author: Francisco J.Perez-Grau <fjperez AT catec DOT aero>
  • License: Apache 2.0
  • Source: git https://github.com/fada-catec/amcl3d.git (branch: master)

Overview

The Adaptive Monte Carlo Localization in 3D is a particle filter that allows you to estimate where a robot is within a 3D environment without using GPS, for example in indoor environments. This package extends previous Monte Carlo Localization solutions, which were meant for ground robots only, making it possible to use the approach with aerial robots as well.

The algorithm uses radio-range sensors, laser sensors, and a known map of the environment to calculate the particle weights. It takes as inputs the robot odometry, the point clouds perceived by the robot, and the distances measured to radio-range sensors placed in the environment. It compares the robot point clouds with the map point cloud and takes the radio-range measurements into account to compute, for each particle, the probability that the robot is at that particle's pose. The particles are weighted according to these probabilities, and the weighted average of all particles yields the estimated robot pose (position and orientation).
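
As an illustration of this last step, the pose estimate can be obtained as the weighted mean of the particle set. The sketch below assumes a hypothetical Particle struct with position, yaw and a normalized weight; it is not the package's actual code:

#include <cmath>
#include <vector>

// Hypothetical particle type: position, yaw and importance weight.
struct Particle
{
  double x, y, z;  // position in the map frame
  double a;        // yaw angle
  double w;        // importance weight (weights sum to 1)
};

// Pose estimate (sketch): weighted mean of the particle set.
Particle weightedMean(const std::vector<Particle>& particles)
{
  Particle mean{0.0, 0.0, 0.0, 0.0, 0.0};
  double sin_a = 0.0, cos_a = 0.0;
  for (const Particle& p : particles)
  {
    mean.x += p.w * p.x;
    mean.y += p.w * p.y;
    mean.z += p.w * p.z;
    // Average the yaw on the unit circle to handle angle wrap-around.
    sin_a += p.w * std::sin(p.a);
    cos_a += p.w * std::cos(p.a);
    mean.w += p.w;
  }
  mean.a = std::atan2(sin_a, cos_a);
  return mean;
}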

The generic Monte Carlo algorithm has three steps: prediction, update, and resampling. This package adds a preliminary step: init. It makes it possible to initialize the robot at a given pose before running the algorithm. Even though the filter is able to solve the kidnapped-robot problem, we propose this initialization step in order to allow autonomous navigation from the very beginning of the filter execution (otherwise the estimated pose would jump around until it converged to the actual pose, making the robot uncontrollable).
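
For instance, the init step could draw the initial particle set around a user-supplied pose with Gaussian noise, roughly as in the following sketch (the struct and parameter names are illustrative, not the package's API):

#include <random>
#include <vector>

struct Particle { double x, y, z, a, w; };

// Init step (sketch): spread num_particles around a known starting pose,
// with Gaussian noise given by the initial standard deviations.
std::vector<Particle> initParticles(const Particle& init_pose,
                                    double xyz_dev, double a_dev,
                                    int num_particles)
{
  std::mt19937 gen(std::random_device{}());
  std::normal_distribution<double> noise_xyz(0.0, xyz_dev);
  std::normal_distribution<double> noise_a(0.0, a_dev);

  std::vector<Particle> particles(num_particles);
  for (Particle& p : particles)
  {
    p.x = init_pose.x + noise_xyz(gen);
    p.y = init_pose.y + noise_xyz(gen);
    p.z = init_pose.z + noise_xyz(gen);
    p.a = init_pose.a + noise_a(gen);
    p.w = 1.0 / num_particles;  // uniform weights until the first update
  }
  return particles;
}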

Prediction

Input: Odometry

Description: This step propagates the particles: using the input odometry, it proposes a new set of particles that potentially contains the pose of the robot.
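
A minimal sketch of such a prediction step, assuming an odometry increment expressed in the robot frame and noise that grows with the amount of motion (the names and the noise model are illustrative):

#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

struct Particle { double x, y, z, a, w; };

// Prediction step (sketch): apply the odometry increment (dx, dy, dz, da),
// expressed in the robot frame, to every particle, adding Gaussian noise
// whose deviation grows with the amount of motion.
void predict(std::vector<Particle>& particles,
             double dx, double dy, double dz, double da,
             double xyz_mod, double a_mod)
{
  static std::mt19937 gen(std::random_device{}());
  const double lin_dev = std::max(1e-3, xyz_mod * std::sqrt(dx * dx + dy * dy + dz * dz));
  const double ang_dev = std::max(1e-3, a_mod * std::fabs(da));
  std::normal_distribution<double> noise_xyz(0.0, lin_dev);
  std::normal_distribution<double> noise_a(0.0, ang_dev);

  for (Particle& p : particles)
  {
    // Rotate the planar increment into each particle's own heading.
    p.x += dx * std::cos(p.a) - dy * std::sin(p.a) + noise_xyz(gen);
    p.y += dx * std::sin(p.a) + dy * std::cos(p.a) + noise_xyz(gen);
    p.z += dz + noise_xyz(gen);
    p.a += da + noise_a(gen);
  }
}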

Update

Input: Map (known map of the environment), Radio-Range Sensor Measurements, 3D Laser Sensor Measurements

Description: This step evaluates the weights of the particles. It checks how plausible the pose of each particle is by using the point cloud from the 3D laser sensor and the measurements of the radio-range sensors. Depending on how likely each particle would be to receive such sensor readings at its current pose, it is assigned a weight that combines the weights computed from the two sensor technologies (laser and radio).
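
A sketch of how the two weights could be combined per particle: each modality yields a likelihood, and the two are mixed with a factor alpha in [0, 1] before normalization. The Gaussian likelihood, the function and parameter names, and the mixing scheme here are illustrative; the package's actual weighting is configured through its own parameters listed below.

#include <cmath>
#include <cstddef>
#include <vector>

struct Particle { double x, y, z, a, w; };

// Placeholder likelihood (stand-in for the real map-matching and range
// models): a Gaussian on the error between predicted and measured values.
double gaussianLikelihood(double error, double dev)
{
  return std::exp(-0.5 * (error * error) / (dev * dev));
}

// Update step (sketch): weight each particle by combining the laser-based
// and radio-based likelihoods, then normalize so the weights sum to 1.
void update(std::vector<Particle>& particles,
            const std::vector<double>& laser_error,  // per-particle map-matching error
            const std::vector<double>& range_error,  // per-particle radio-range error
            double laser_dev, double range_dev, double alpha)
{
  double total = 0.0;
  for (std::size_t i = 0; i < particles.size(); ++i)
  {
    const double wl = gaussianLikelihood(laser_error[i], laser_dev);
    const double wr = gaussianLikelihood(range_error[i], range_dev);
    particles[i].w = alpha * wl + (1.0 - alpha) * wr;
    total += particles[i].w;
  }
  if (total > 0.0)
    for (Particle& p : particles)
      p.w /= total;
}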

Resampling

Description: This step suppresses the particles that have the smallest weights. Without it, the algorithm would not converge to the correct position.
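
One common way to implement this step is multinomial resampling, where particles are redrawn with probability proportional to their weights (a generic sketch, not the package's implementation):

#include <cstddef>
#include <random>
#include <vector>

struct Particle { double x, y, z, a, w; };

// Resampling step (sketch): draw a new particle set in which each particle
// is selected with probability proportional to its weight, so low-weight
// particles tend to disappear and high-weight ones get duplicated.
void resample(std::vector<Particle>& particles)
{
  std::vector<double> weights;
  weights.reserve(particles.size());
  for (const Particle& p : particles)
    weights.push_back(p.w);

  static std::mt19937 gen(std::random_device{}());
  std::discrete_distribution<std::size_t> pick(weights.begin(), weights.end());

  std::vector<Particle> resampled;
  resampled.reserve(particles.size());
  for (std::size_t i = 0; i < particles.size(); ++i)
  {
    Particle p = particles[pick(gen)];
    p.w = 1.0 / particles.size();  // reset to uniform weights
    resampled.push_back(p);
  }
  particles.swap(resampled);
}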

Original Paper

More information about the algorithm can be found in the following paper:

If you use amcl3d in an academic context, please cite this publication:

@article{doi:10.1177/1729881417732757,
  author  = {Francisco J Perez-Grau and Fernando Caballero and Antidio Viguria and Anibal Ollero},
  title   = {Multi-sensor three-dimensional Monte Carlo localization for long-term aerial robot navigation},
  journal = {International Journal of Advanced Robotic Systems},
  volume  = {14},
  number  = {5},
  year    = {2017},
  doi     = {10.1177/1729881417732757},
  url     = {https://doi.org/10.1177/1729881417732757}
}

Package files

The package is structured as follows:

[figure: package directory layout]

Source file

Test source file

For more information, see the Test section below.

RViz

Launch

[figure: launch files]

Msg

For more information, see the Radio-Range Messages section below.

Node

Amcl3d

The first thing the algorithm does is wait for odometry messages. Once it receives them, it waits until the robot has taken off above the height given by the takeOffHeight_ parameter. The filter then begins to perform the prediction/update/resampling steps, using the point clouds from the laser sensor and the radio-range measurements to calculate the particle weights. It also updates a TF for the robot.
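
The take-off check can be pictured roughly like this (a sketch only; the function name is hypothetical, while takeOffHeight_ refers to the parameter mentioned above):

#include <geometry_msgs/TransformStamped.h>

// Take-off gate (sketch): the filter stays idle until the odometry reports
// that the robot has climbed above the configured take-off height.
bool hasTakenOff(const geometry_msgs::TransformStamped& odom, double take_off_height)
{
  return odom.transform.translation.z > take_off_height;
}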

Subscribed Topics

  • laser_sensor (sensor_msgs/PointCloud2)
  • odometry (geometry_msgs/TransformStamped)
  • radiorange_sensor (rosinrange_msg/range_pose)

Published Topics

  • base_transform (geometry_msgs/TransformStamped)
  • map_point_cloud (sensor_msgs/PointCloud2)
  • particle_cloud (geometry_msgs/PoseArray)
  • pointcloud_passfiltered (sensor_msgs/PointCloud2)
  • pointcloud_voxelfiltered (sensor_msgs/PointCloud2)
  • range (visualization_msgs/Marker)
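
For example, another node could consume the estimated pose published on base_transform roughly as follows (an illustrative roscpp sketch; depending on your launch configuration the topic may be namespaced under the amcl3d node):

#include <geometry_msgs/TransformStamped.h>
#include <ros/ros.h>

// Print the pose estimated by amcl3d each time it is published.
void poseCallback(const geometry_msgs::TransformStamped::ConstPtr& msg)
{
  ROS_INFO("amcl3d pose: x=%.2f y=%.2f z=%.2f",
           msg->transform.translation.x,
           msg->transform.translation.y,
           msg->transform.translation.z);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "amcl3d_pose_listener");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("base_transform", 10, poseCallback);
  ros::spin();
  return 0;
}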

Parameters

  • ~base_frame_id (string)
  • ~odom_frame_id (string)
  • ~global_frame_id (string)
  • ~map_path (string)
  • ~set_initial_pose (bool)
  • ~init_x (double)
  • ~init_y (double)
  • ~init_z (double)
  • ~init_a (double)
  • ~init_x_dev (double)
  • ~init_y_dev (double)
  • ~init_z_dev (double)
  • ~init_a_dev (double)
  • ~publish_point_cloud_rate (double)
  • ~grid_slice (float)
  • ~publish_grid_slice_rate (double)
  • ~publish_grid_tf_rate (double)
  • ~sensor_dev (double)
  • ~sensor_range (double)
  • ~voxel_size (double)
  • ~num_particles (int)
  • ~odom_x_mod (double)
  • ~odom_y_mod (double)
  • ~odom_z_mod (double)
  • ~odom_a_mod (double)
  • ~resample_interval (int)
  • ~update_rate (double)
  • ~d_th (double)
  • ~aTh (double)
  • ~a_th (double)
  • ~alpha (double)
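
These are private (~) parameters, so they are read from the node's private namespace. Below is a generic roscpp sketch of how such parameters are typically read; the default values shown are placeholders, not the package's defaults.

#include <ros/ros.h>
#include <string>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "amcl3d_params_example");
  ros::NodeHandle pnh("~");  // private namespace, where ~parameters live

  std::string base_frame_id;
  int num_particles;
  double sensor_dev;

  // The defaults below are illustrative; use values that match your setup.
  pnh.param<std::string>("base_frame_id", base_frame_id, "base_link");
  pnh.param<int>("num_particles", num_particles, 600);
  pnh.param<double>("sensor_dev", sensor_dev, 0.05);

  ROS_INFO("base_frame_id=%s num_particles=%d sensor_dev=%.3f",
           base_frame_id.c_str(), num_particles, sensor_dev);
  return 0;
}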

Test

In the package, you can run a demo to see the behavior of the algorithm. The demo uses a rosbag and a map that you can download here. We recommend that you put these files in the test folder, like this:

         - test
         | - map
         | | - mapfile_complete.bt
         | | - mapfile_complete.grid
         | - rosbag
         | | - RosbagRosin.bag
         | - main_test.cpp
         | - Test.cpp
         | - Test.h

This is because the test launch file is prepared for this layout, but you can modify it if you want.

Run

To run the demo:

roslaunch ouster_ros os1.launch os1_hostname:=10.5.5.94 replay:=true
roslaunch amcl3d amcl3d_test.launch

Rosbag

The rosbag contains several topics.

It is mandatory to start the ouster_ros launch file before running the amcl3d node.

Radio-Range Messages

In this case, the demo uses three sensors. One of them was mounted onboard the aerial robot and measured distances with respect to the other two, which were statically installed around the flight volume. The sensors used in this demo are "swarm bee ER" devices from Nanotron. It should be noted that the algorithm needs both the distance measurements and the position of each sensor, which is why we created this message with the following structure:

Header header

uint64 source_id
uint64 destination_id
float64 range
geometry_msgs/Point position
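
A node feeding the filter could publish this message roughly as follows (a sketch: the field names come from the definition above, while the generated header name, the IDs and the numeric values are assumptions for illustration):

#include <ros/ros.h>
#include <rosinrange_msg/range_pose.h>  // assumed name of the generated header

int main(int argc, char** argv)
{
  ros::init(argc, argv, "range_sensor_example");
  ros::NodeHandle nh;
  ros::Publisher pub = nh.advertise<rosinrange_msg::range_pose>("radiorange_sensor", 10);

  ros::Rate rate(10);
  while (ros::ok())
  {
    rosinrange_msg::range_pose msg;
    msg.header.stamp = ros::Time::now();
    msg.source_id = 1;       // example ID of the anchor in the environment
    msg.destination_id = 0;  // example ID of the sensor mounted on the robot
    msg.range = 3.25;        // measured distance in meters (example value)
    msg.position.x = 2.0;    // known position of the anchor in the map frame
    msg.position.y = -1.5;
    msg.position.z = 1.0;
    pub.publish(msg);
    rate.sleep();
  }
  return 0;
}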

Map

It is necessary to have a map of the environment. In this case, we used octomap_server from Octomap to create it from a rosbag with VICON odometry.

To create the .grid file from the .bt file, run the following command:

roslaunch amcl3d amcl3d.launch

This will show you the following output:

[ERROR] [1568280031.986394659]: [/amcl3d_node] Error opening file /home/.../mapfile_complete.grid for reading
[ INFO] [1568280031.986435846]: [/amcl3d_node] Computing 3D occupancy grid. This will take some time...
[ INFO] [1568280329.208192793]: [/amcl3d_node] Computing 3D occupancy grid done!
[ INFO] [1568280329.417065336]: [/amcl3d_node] Grid map successfully saved on /home/.../mapfile_complete.grid

TF

This section shows the TF tree used by the algorithm and the relationships among its frames.

Acknowledgement


Supported by ROSIN - ROS-Industrial Focused Technical Projects (FTP).

More information: rosin-project.eu

