
These tutorials require packages that are not open to the public. If you want to run the packages, please write to jsf@ipa.fhg.de.

Hardware Requirements

To use this package you need a stereo camera system or an RGBD device that delivers point clouds with color information. Alternatively, you can use a simulated version without any hardware; see cob_gazebo.

ROS API

The cob_object_detection package provides a configurable node for detecting textured objects based on local feature point detection and 3D model matching.

cob_object_detection

The cob_object_detection node takes in sensor_msgs/PointCloud2 messages, generated e.g. by a stereo camera pair or an RGBD camera device, and performs 6 DOF object detection based on previously trained object models. Object detection and training are triggered via services or actions. Visualization data and detection results are published on topics.

Action Goal

/object_detection/acquire_object_image/goal (cob_object_detection_msgs/AcquireObjectImageActionGoal)
/object_detection/detect_object/goal (cob_object_detection_msgs/DetectObjectsActionGoal)
/object_detection/train_object/goal (cob_object_detection_msgs/TrainObjectActionGoal)

Action Result

/object_detection/acquire_object_image/result (cob_object_detection_msgs/AcquireObjectImageActionResult)
/object_detection/detect_object/result (cob_object_detection_msgs/DetectObjectsActionResult)
/object_detection/train_object/result (cob_object_detection_msgs/TrainObjectActionResult)

Action Feedback

/object_detection/acquire_object_image/feedback (cob_object_detection_msgs/AcquireObjectImageActionFeedback)
/object_detection/detect_object/feedback (cob_object_detection_msgs/DetectObjectsActionFeedback)
/object_detection/train_object/feedback (cob_object_detection_msgs/TrainObjectActionFeedback)
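
The following minimal Python sketch shows how a detection could be triggered via the action interface. The goal field name (object_name) and the example object name are assumptions, not taken from this page; check the cob_object_detection_msgs action definitions of your version.

#!/usr/bin/env python
import rospy
import actionlib
from std_msgs.msg import String
from cob_object_detection_msgs.msg import DetectObjectsAction, DetectObjectsGoal

rospy.init_node('detect_objects_action_client')

# Connect to the detect_object action server provided by cob_object_detection
client = actionlib.SimpleActionClient('/object_detection/detect_object', DetectObjectsAction)
client.wait_for_server()

# 'object_name' is an assumed goal field; 'milk_box' is a hypothetical trained object
goal = DetectObjectsGoal()
goal.object_name = String(data='milk_box')

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo('Detection result: %s', client.get_result())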

Subscribed Topics

/sensor_fusion/stereo/reprojection_matrix (cob_object_perception_msgs/ReprojectionMatrix)

Published Topics

/object_detection/point_cloud_2 (sensor_msgs/PointCloud2)
/object_detection/image (sensor_msgs/Image)
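
The visualization image can be consumed like any other sensor_msgs/Image topic. A minimal subscriber sketch in Python:

#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Image

def image_callback(msg):
    # msg carries the detection visualization rendered by the node
    rospy.loginfo('Received visualization image: %dx%d', msg.width, msg.height)

rospy.init_node('object_detection_viewer')
rospy.Subscriber('/object_detection/image', Image, image_callback)
rospy.spin()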

Services

/object_detection/detect_object (cob_object_detection_msgs/DetectObjects)
/object_detection/train_object (cob_object_detection_msgs/TrainObject)
/object_detection/acquire_object_image (cob_object_detection_msgs/AcquireObjectImage)
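
A minimal sketch of calling the detection service from Python. The request field (object_name) and the response layout (object_list.detections with label and pose) are assumptions based on typical cob_object_detection_msgs definitions and should be checked against your installed version.

#!/usr/bin/env python
import rospy
from std_msgs.msg import String
from cob_object_detection_msgs.srv import DetectObjects

rospy.init_node('detect_objects_client')
rospy.wait_for_service('/object_detection/detect_object')
detect = rospy.ServiceProxy('/object_detection/detect_object', DetectObjects)

# 'milk_box' is a hypothetical trained object name; 'object_name' is an assumed request field
response = detect(object_name=String(data='milk_box'))

# 'object_list.detections', 'label' and 'pose' are assumed response fields
for detection in response.object_list.detections:
    rospy.loginfo('Detected %s at %s', detection.label, detection.pose)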

Services Called

/sensor_fusion/stereo/get_colored_pc (cob_srvs/GetPointCloud2)

Parameters

~object_model_type (string)
~visualization (bool)
~logging (bool)
~use_STAR_detector (bool)
~load_model_from_srs_database (bool)
~insert_generated_model_to_srs_database (bool)
~train_learning_pos_x (float)
~train_learning_pos_y (float)
~train_learning_pos_z (float)
~train_learning_radius (float)
~train_affine_fp_transforms (bool)
~train_min_cluster_size (integer)
~train_max_feature_match_dist_in_meter (float)
~robot_building_models (bool)
~detect_ransac_norm_inlier_dist_thresh (float)
~detect_min_inlier_size (integer)

Usage/Examples

The package can be used via a launch file which loads all parameters from a .yaml file and starts the object_detection node.

roslaunch cob_object_detection object_detection.launch

For including the object_detection in your overall launch file use

<include file="$(find cob_object_detection)/ros/launch/object_detection.launch" />

A sample parameter file could look like this:

##########################################
# Object model type
##########################################
# MODEL_BAYES or MODEL_6D_SLAM
object_model_type: MODEL_BAYES
# Visualize detection results in 3-D
visualization: true
# Write log files
logging: true
# Feature point detector
use_STAR_detector: false
##########################################
# Parameters for srs database interface
##########################################
# load the models from the srs database or from local storage
load_model_from_srs_database: true
# if model building ends successfully, the object is automatically added to the database
insert_generated_model_to_srs_database: false
##########################################
# Parameters for object model training
##########################################
# Use u, v parameters to limit the training region:
# the training area of the image is cut down to the relevant region
# Unit: [px]
train_learning_segm_min_u: 500
train_learning_segm_max_u: 1300
train_learning_segm_min_v: 400
train_learning_segm_max_v: 1000

# XYZ position and radius are DEPRECATED and no longer in use.
# X Position of object during training in camera coordinates
# Unit: [m]
train_learning_pos_x: 0.0776975028338
# Y Position of object during training in camera coordinates
# Unit: [m]
train_learning_pos_y: 0.188573
# Z Position of object during training in camera coordinates
# Unit: [m]
train_learning_pos_z: 0.837447
# Radius of training area for object segmentation during training
# Unit: [m]
train_learning_radius: 0.1
# Apply affine transformation to source images before feature extraction
train_affine_fp_transforms: false
# Minimal cluster size for feature points to be valid
train_min_cluster_size: 3
# After applying the odometry, matching features should ideally be at the same position.
# This value specifies the maximal difference in m that is allowed.
train_max_feature_match_dist_in_meter: 0.005
# Enable PreBuildSegmentation to eliminate the gripper (sdh) from images and improve transformation data
robot_building_models: true
##########################################
# Parameters for object model detection
##########################################
# Maximal distance for a point to count as a RANSAC inlier
# Unit: [m]
detect_ransac_norm_inlier_dist_thresh: 0.014
# Minimal number of inliers for a valid detection
detect_min_inlier_size: 10

Definition of object coordinate system

The object coordinate frame is defined as shown in the following picture.

bounding_box_conventions.png

The front side of the object is the natural object front, as when you place the object on a shelf. The coordinate system is oriented with respect to the specific object. Normally, the front side points in the negative Y-direction and the Z-axis points vertically upwards.

Tutorials

Tutorials can be found on the Tutorials page.

Models

You can get model files either from https://github.com/ipa320/srs_data or from an internal server at IPA:

smb://saturn20.ipa.fhg.de/austauschipa/J0SF

To insert models into cob_object_detection, place the models you want to use in the folder

roscd cob_object_detection/common/files/models/ObjectModels/

Either delete the Info.txt or correct the model name inside the file.

rm Info.txt

The object model files must be named "Model_<object_name>.txt".
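
For example, a model file for a hypothetical object called milk_box would be placed as follows (the file name and source path are illustrative):

roscd cob_object_detection/common/files/models/ObjectModels/
cp ~/Downloads/Model_milk_box.txt .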

