
Package Summary

Find moving objects based on a laser scan or point cloud data stream.

Overview

This package provides two main nodes: one for finding moving objects in a stream of sensor_msgs/LaserScan messages, and one for finding moving objects in a stream of sensor_msgs/PointCloud2 messages. Two corresponding nodelets are also provided.

There are also two nodes and two nodelets for handling two special message types: find_moving_objects/LaserScanArray and find_moving_objects/PointCloud2Array. These message types contain a header and, respectively, an array of sensor_msgs/LaserScan and sensor_msgs/PointCloud2. The intended use for these message types is for multi-diode Lidars and similar sensors.

Note that a moving-object-finding node/nodelet should take input from a single sensor only!

Finding Moving Objects

The above-mentioned nodes and nodelets output a stream of messages (see Defined Message Types) containing an array of found moving objects. For each found object, the provided information includes its position and velocity in the given sensor frame, as well as in a map, a fixed, and a base frame. Typically, these frames should be set to map, odom, and base_link, respectively (see REP 105). A confidence value and information about each object's distance, closest point, and width (relative to the sensor) are also provided.

Note that the nodes rely on message filtering from tf2, so the map, fixed, and base frames must be specified. For testing purposes, these can of course all be set to the frame of the sensor itself.

The nodes/nodelets store a history of received range scans in an internal data structure referred to as the bank. The velocity of an object is calculated from how it has moved between the oldest and the newest range scan stored in the bank, so the size of the bank (that is, the number of range scans it stores) is a critical property of the system. The bank size can also be calculated automatically, by measuring the publish rate of the sensor and specifying a desired time period that the bank should cover.
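As a rough illustration of these two ideas (a sketch, not the package's actual implementation; all names are hypothetical), the bank size can be derived from the measured publish rate and the desired covered period, and an object's velocity from its displacement between the oldest and newest scan in the bank:

```python
# Illustrative sketch: sizing the bank and estimating velocity from it.

def bank_size(publish_rate_hz, desired_period_s):
    """Number of scans the bank must hold to cover desired_period_s."""
    # At least two scans are needed to compute a displacement.
    return max(2, round(publish_rate_hz * desired_period_s) + 1)

def estimated_velocity(oldest_pos, newest_pos, covered_period_s):
    """Per-axis velocity from the object's displacement across the bank."""
    return tuple((n - o) / covered_period_s
                 for o, n in zip(oldest_pos, newest_pos))

size = bank_size(publish_rate_hz=40.0, desired_period_s=0.5)  # -> 21 scans
vx, vy = estimated_velocity((1.0, 0.0), (1.2, 0.1), 0.5)      # approx. (0.4, 0.2)
```

A larger bank averages the velocity over a longer window (smoother, but slower to react); a smaller bank reacts faster but is noisier.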

There is also a confidence-enhancing node that can be used for cases where several sensors are used. This node can take the output of the moving-object-finding nodes connected to the sensor data streams and increase the confidence of objects that it believes are seen by several sensors.

The overall structure of using the provided nodes is given in overall_structure.pdf.

Visualization Using RViz

All the nodes and nodelets can be told to publish visualization messages so that the properties of the found objects can be presented in RViz. Each node/nodelet can publish a subset of visualization message types (for example, velocity arrows, delta position lines, width lines, and closest-point markers; see the publish_* parameters below).

Please refer to the "Parameters" section for each node below for more details.

Paper

A paper explaining the algorithm and data structure used by find_moving_objects was presented at CVC 2019 and published by Springer. A pre-print full-text version of the paper is also available.

Example

The provided launch file uses the nodelet for interpreting sensor_msgs/PointCloud2 data streams and the node for interpreting sensor_msgs/LaserScan data streams. It can be used to run the interpreters on live or recorded sensor data (an example bag file is provided, but it must be extracted before use). Note that other ROS packages and software libraries are needed to run the supported sensors live.

For the nodelets, there is a launch argument named manager which specifies the nodelet manager that the nodelet should connect to. For more information about the available arguments for launching the nodes and nodelets, please refer to the respective files under launch/includes.

Defined Message Types

The message types defined by this package include find_moving_objects/LaserScanArray and find_moving_objects/PointCloud2Array (see the Overview above), as well as the message type carrying the array of found moving objects.

Please refer to the message type sources for their documentation.

Nodes and Nodelets

The node and nodelet used for finding objects in sensor_msgs/LaserScan message streams have the same parameters and publish/subscribe to the same topics. The same is true for the other node and nodelet pairs.

laserscan_interpreter_node and LaserScanInterpreterNodelet

The node/nodelet which finds moving objects in a sensor_msgs/LaserScan message stream. Many parameters can be specified; the most commonly needed ones are probably the following:

Parameters

subscribe_topic (string, default: "laserscan")
subscribe_buffer_size (int, default: 1)
ema_alpha (double, default: 1.0)
map_frame (string, default: "map")
fixed_frame (string, default: "odom")
base_frame (string, default: "base_link")
nr_scans_in_bank (int, default: 0)
optimize_nr_scans_in_bank (double, default: 0.5)
publish_objects (bool, default: true)
publish_ema (bool, default: true)
publish_objects_closest_points_markers (bool, default: true)
publish_objects_velocity_arrows (bool, default: true)
publish_objects_delta_position_lines (bool, default: true)
publish_objects_width_lines (bool, default: true)
publish_buffer_size (int, default: 1)
topic_objects (string, default: "moving_objects")
topic_ema (string, default: "ema")
topic_objects_closest_points_markers (string, default: "objects_closest_point_markers")
topic_objects_velocity_arrows (string, default: "objects_velocity_arrows")
topic_objects_delta_position_lines (string, default: "objects_delta_position_lines")
topic_objects_width_lines (string, default: "objects_width_lines")
ns_velocity_arrows (string, default: "velocity_arrows")
ns_delta_position_lines (string, default: "delta_position_lines")
ns_width_lines (string, default: "width_lines")
velocity_arrows_use_full_gray_scale (bool, default: false)
velocity_arrows_use_sensor_frame (bool, default: false)
velocity_arrows_use_base_frame (bool, default: false)
velocity_arrows_use_fixed_frame (bool, default: false)
object_threshold_edge_max_delta_range (double, default: 0.15)
object_threshold_min_nr_points (int, default: 3)
object_threshold_max_distance (double, default: 6.5)
object_threshold_min_speed (double, default: 0.1)
object_threshold_max_delta_width_in_points (int, default: 15)
object_threshold_bank_tracking_max_delta_distance (double, default: 0.4)
object_threshold_min_confidence (double, default: 0.7)
base_confidence (double, default: 0.3)
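To make two of these thresholds concrete, here is a hedged sketch of how a scan might be segmented into candidate objects: an object boundary is assumed where adjacent ranges jump by more than object_threshold_edge_max_delta_range, and segments with fewer than object_threshold_min_nr_points points are discarded. This illustrates the role of the parameters, not the package's exact algorithm:

```python
# Illustrative scan segmentation using two of the thresholds above.

def segment_scan(ranges, edge_max_delta_range=0.15, min_nr_points=3):
    """Return (start, end) index pairs of candidate objects (end exclusive)."""
    segments, start = [], 0
    for i in range(1, len(ranges)):
        # A large jump between adjacent range readings marks an object edge.
        if abs(ranges[i] - ranges[i - 1]) > edge_max_delta_range:
            segments.append((start, i))
            start = i
    segments.append((start, len(ranges)))
    # Discard segments that are too small to be trusted as objects.
    return [(s, e) for s, e in segments if e - s >= min_nr_points]

ranges = [2.0, 2.05, 2.1, 4.0, 4.02, 4.05, 4.1, 9.0]
print(segment_scan(ranges))  # [(0, 3), (3, 7)]; the lone last point is dropped
```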

laserscanarray_interpreter_node and LaserScanArrayInterpreterNodelet

The node/nodelet which finds moving objects in a find_moving_objects/LaserScanArray message stream. The same parameters as specified for the laserscan_interpreter_node and LaserScanInterpreterNodelet above are valid here as well.

pointcloud2_interpreter_node and PointCloud2InterpreterNodelet

The node/nodelet which finds moving objects in a sensor_msgs/PointCloud2 message stream. The parameters specified for laserscan_interpreter_node and LaserScanInterpreterNodelet above are valid here as well; the most commonly needed PointCloud2-specific additions are probably the following:

Parameters

subscribe_topic (string, default: "pointcloud")
sensor_frame_has_z_axis_forward (bool, default: true)
bank_view_angle (double, default: 3.141592654)
nr_points_per_scan_in_bank (int, default: 360)
message_x_coordinate_field_name (string, default: "x")
message_y_coordinate_field_name (string, default: "y")
message_z_coordinate_field_name (string, default: "z")
voxel_leaf_size (double, default: 0.01)
threshold_z_min (double, default: 0.0)
threshold_z_max (double, default: 1.0)
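As a hedged sketch of what several of these parameters control (an assumption-laden illustration, not the package's implementation), a point cloud can be reduced to a scan-like row for the bank: points outside [threshold_z_min, threshold_z_max] are dropped, the rest are binned by bearing over bank_view_angle into nr_points_per_scan_in_bank bins, and each bin keeps its closest range. The sketch assumes a frame with x forward and y left (see sensor_frame_has_z_axis_forward for the real node's frame handling):

```python
# Illustrative point-cloud-to-scan reduction.
import math

def cloud_to_scan(points, view_angle=math.pi, nr_bins=360,
                  z_min=0.0, z_max=1.0):
    """points: iterable of (x, y, z) tuples; returns nr_bins ranges."""
    scan = [float("inf")] * nr_bins
    half = view_angle / 2.0
    for x, y, z in points:
        if not (z_min <= z <= z_max):
            continue  # outside the accepted height band
        bearing = math.atan2(y, x)
        if abs(bearing) > half:
            continue  # outside the bank's view angle
        idx = min(int((bearing + half) / view_angle * nr_bins), nr_bins - 1)
        scan[idx] = min(scan[idx], math.hypot(x, y))  # keep closest range
    return scan

scan = cloud_to_scan([(1.0, 0.0, 0.5), (0.0, 1.0, 2.0)])  # second point: z too high
```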

pointcloud2array_interpreter_node and PointCloud2ArrayInterpreterNodelet

The node/nodelet which finds moving objects in a find_moving_objects/PointCloud2Array message stream. The same parameters as specified for the pointcloud2_interpreter_node and PointCloud2InterpreterNodelet above are valid here as well.

moving_objects_confidence_enhancer_node

This node takes moving-object information reported by several sensors and increases the confidence of objects that it believes are seen by more than one sensor. Many parameters can be specified; the most commonly needed ones are probably the following:

Parameters

subscribe_topic (string, default: "moving_objects")
subscribe_buffer_size (int, default: 10)
verbose (bool, default: false)
print_received_objects (bool, default: false)
publish_objects (bool, default: true)
publish_objects_velocity_arrows (bool, default: true)
publish_objects_closest_points_markers (bool, default: true)
publish_buffer_size (int, default: 2)
topic_moving_objects_enhanced (string, default: "moving_objects_enhanced")
topic_objects_velocity_arrows (string, default: "objects_velocity_arrows")
topic_objects_closest_points_markers (string, default: "objects_closest_points_markers")
velocity_arrows_use_full_gray_scale (bool, default: false)
velocity_arrows_use_sensor_frame (bool, default: false)
velocity_arrows_use_base_frame (bool, default: false)
velocity_arrows_use_fixed_frame (bool, default: false)
threshold_min_confidence (double, default: 0.0)
threshold_max_delta_time_for_different_sources (double, default: 0.2)
threshold_max_delta_position (double, default: 0.1)
threshold_max_delta_velocity (double, default: 0.1)
ignore_z_map_coordinate_for_position (bool, default: true)
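The matching idea behind this node can be sketched as follows: objects reported by different sources are treated as the same physical object when their positions and velocities agree within threshold_max_delta_position and threshold_max_delta_velocity, and a matched object's confidence is raised (capped at 1.0). This is an illustration only; the dictionary fields and the boosting rule here are made up, not the node's actual message layout or formula:

```python
# Illustrative confidence enhancement across two object sources.
import math

def dist(a, b):
    """Planar distance between two 2-D tuples."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def enhance(objs_a, objs_b, max_dpos=0.1, max_dvel=0.1, boost=0.2):
    """objs_*: lists of dicts with 'pos', 'vel', 'confidence' (hypothetical)."""
    out = []
    for a in objs_a:
        matched = any(dist(a["pos"], b["pos"]) <= max_dpos and
                      dist(a["vel"], b["vel"]) <= max_dvel
                      for b in objs_b)
        conf = min(1.0, a["confidence"] + boost) if matched else a["confidence"]
        out.append({**a, "confidence": conf})
    return out

seen_by_lidar = [{"pos": (1.0, 1.0), "vel": (0.2, 0.0), "confidence": 0.7}]
seen_by_camera = [{"pos": (1.05, 1.0), "vel": (0.22, 0.0), "confidence": 0.6}]
boosted = enhance(seen_by_lidar, seen_by_camera)  # confidence rises to about 0.9
```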

Bug Reports and Feature Requests

Use GitHub to report bugs or submit feature requests, and to view the active issues.

