
Note: This tutorial assumes that you have completed the previous tutorials: First Steps, Parameters.
Please ask about problems and questions regarding this tutorial on answers.ros.org. Don't forget to include in your question the link to this page, the versions of your OS & ROS, and also add appropriate tags.

Camera Frames

Description: This tutorial explains the different frames defined in the ensenso node or ensenso mono node and how the 3D data is transformed.

Tutorial Level: INTERMEDIATE

Next Tutorials: Calibration Patterns, Workspace Calibration, Hand-Eye Calibration

We introduce different kinds of frames: camera_frame, target_frame and link_frame, which are explained in the following. In general, the camera node automatically manages the transformation of the 3D data between these frames.

The camera frame (specified by the camera node's camera_frame parameter) is the TF frame in which the data is captured by the camera. By default, this frame is set to <serial>_optical_frame, where <serial> is a placeholder for the camera's serial number.
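For example, the default name can be overridden via the camera_frame parameter. A minimal sketch; the serial number and frame name below are placeholders only:

<launch>
  <node pkg="ensenso_camera" type="ensenso_camera_node" name="ensenso_camera_node">
    <!-- Placeholder serial number; use your camera's actual serial. -->
    <param name="serial" value="160606"/>
    <!-- Optional: overrides the default <serial>_optical_frame name. -->
    <param name="camera_frame" value="ensenso_optical_frame"/>
  </node>
</launch>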

The link_frame is a helper frame. Internally, the NxLib stores a camera link for each camera. This camera link stores, for example, the transformation from the camera to the robot base obtained by a fixed hand-eye calibration. It can also store a transformation to another camera or to the workspace, obtained by a workspace calibration. This internal camera link is static and is published via TF, with the link_frame as parent frame and the camera_frame as child frame. If the link_frame parameter is not given, it defaults to the target_frame (if the target_frame is given); otherwise the link_frame is the same as the camera_frame.

The target_frame is the TF frame in which you want to receive data from the camera. All data is automatically transformed using the current TF transformation between the link_frame and the target_frame (this includes the point cloud as well as the poses of calibration patterns). If the target_frame is not defined, it defaults to the camera_frame.

If neither the link_frame nor the target_frame is given, no transformation is applied to the data: it stays in the frame in which the camera captured it and is published in the camera's camera_frame.

If a camera link is stored internally in the camera's EEPROM (after a workspace calibration, hand-eye calibration or link calibration), this transformation is used in any case. If you do not want any transformation to be applied to the 3D data, make sure the transformation in the camera's Link node is empty or at least a unit transform.

Camera Frame (data as it is seen from the camera)
      |
      |  Static internal Camera Link (Hand-Eye/Workspace Calibration/Link Calibration)
      |  Gets published with TF from NxLib to ROS.
      v
Link Frame
      |
      |  Dynamic Global Link (defined from outside and fetched from current TF Tree)
      |  Is Updated with current TF Transformations from ROS to NxLib
      v
Target Frame (data from camera published in this frame)

Currently, link calibration is not yet available within ROS, but it is available within the NxLib. The NxLib documentation describes how to calibrate a link between two cameras and which command you have to use.

Camera Frames Usage Examples

Here are some launch files that illustrate which frames have to be given in order to get the 3D data with the right transform in the right frame.

Camera workspace calibrated, you want to receive data in the workspace frame

If the camera has been workspace calibrated before, the internal camera link stores the transformation from the camera to the workspace. That transform is published as soon as the target_frame or the link_frame is given. Because the link_frame defaults to the target_frame (if the target_frame is given), we only need to set the target_frame to the workspace frame.

<launch>
  <arg name="target_frame" doc="The workspace frame you want to receive data in."/>
  <arg name="serial" doc="The serial number of the camera."/>
  <node pkg="ensenso_camera" type="ensenso_camera_node" name="ensenso_camera_node">
    <param name="serial" value="$(arg serial)"/>
    <param name="target_frame" value="$(arg target_frame)"/>
    <!-- The link_frame is not needed here, but can be given: -->
    <!-- <param name="link_frame" value="$(arg target_frame)"/> -->
  </node>
</launch>
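As a usage sketch, this launch file could be included from a higher-level launch file with concrete values; the package name, file path, serial number and frame name below are placeholders:

<launch>
  <!-- Sketch: include the launch file above with concrete argument values.       -->
  <!-- "my_setup", the file path, the serial and the frame name are placeholders. -->
  <include file="$(find my_setup)/launch/ensenso_workspace.launch">
    <arg name="serial" value="160606"/>
    <arg name="target_frame" value="Workspace"/>
  </include>
</launch>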

Camera workspace calibrated, you want to receive data in another frame

If you want to receive the 3D data in another frame that is not known to the NxLib (i.e. not stored in the internal camera link), the missing transform has to be provided via TF. In that case you have to give both frames, target_frame and link_frame, as shown below:

<launch>
  <arg name="target_frame" doc="The frame you want to receive data in."/>
  <arg name="link_frame" doc="The workspace frame the camera was calibrated to."/>
  <arg name="serial" doc="The serial number of the camera."/>
  <node pkg="ensenso_camera" type="ensenso_camera_node" name="ensenso_camera_node">
    <param name="serial" value="$(arg serial)"/>
    <param name="target_frame" value="$(arg target_frame)"/>
    <!-- The link_frame IS needed here: -->
    <param name="link_frame" value="$(arg link_frame)"/>
  </node>
</launch>

The camera publishes its known transform from itself to the workspace (camera_frame -> link_frame) and fetches the remaining transform (link_frame -> target_frame) from TF. The 3D data is then transformed from the camera_frame to the target_frame and published in the target_frame.
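If no other component already publishes the transform between the workspace (link_frame) and the desired target_frame, one way to provide it is a static transform publisher. A minimal sketch, assuming the frame names Workspace and map and made-up offset values:

<launch>
  <!-- Sketch: connect link_frame and target_frame in the TF tree so the camera  -->
  <!-- node can look up the transform. The frame names ("map", "Workspace") and  -->
  <!-- the values (x y z yaw pitch roll) are placeholders for illustration only. -->
  <node pkg="tf2_ros" type="static_transform_publisher" name="workspace_in_map"
        args="0.5 0.0 0.2 0 0 0 map Workspace"/>
</launch>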

Camera is not calibrated to any extrinsic point or any other camera

In that case you usually want the 3D data in the camera_frame. No frame has to be given, as shown below, because the target_frame and the link_frame default to the camera_frame if neither of them is given.

<launch>
  <arg name="serial" doc="The serial number of the camera."/>
  <node pkg="ensenso_camera" type="ensenso_camera_node" name="ensenso_camera_node">
    <param name="serial" value="$(arg serial)"/>
    <!-- Both frames below are not needed: -->
    <!-- <param name="target_frame" value="..."/> -->
    <!-- <param name="link_frame" value="..."/> -->
  </node>
</launch>

The camera has been hand-eye calibrated

There are two different types of hand-eye calibration: moving and fixed. In a moving hand-eye calibration (the camera is mounted on the robot and moves with it), the transformation between the camera and the robot's wrist is stored in the camera's EEPROM. In the fixed variant, the transformation between the camera and the robot base is stored in the camera's EEPROM.

After a moving hand-eye calibration

Because the internal link only stores the transform between the camera and the place where it is mounted (e.g. the wrist of the robot), you have to provide the remaining transform on TF so that the camera node can fetch it.

<launch>
  <arg name="target_frame" doc="The frame you want to receive data in (e.g. the robot base)."/>
  <arg name="link_frame" doc="The frame the camera is mounted on (e.g. the wrist)."/>
  <arg name="serial" doc="The serial number of the camera."/>
  <node pkg="ensenso_camera" type="ensenso_camera_node" name="ensenso_camera_node">
    <param name="serial" value="$(arg serial)"/>
    <param name="target_frame" value="$(arg target_frame)"/>
    <param name="link_frame" value="$(arg link_frame)"/>
  </node>
</launch>

In this case, the camera publishes its known transform (as in the second example above) and fetches the TF transform between the wrist and the frame you want to receive the data in.
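In a moving setup the wrist pose changes while the robot moves, so the link_frame -> target_frame transform on TF is usually dynamic. A minimal sketch of where it typically comes from, assuming a URDF in a my_robot_description package and a robot driver that publishes joint states (both names are placeholders):

<launch>
  <!-- Sketch: the dynamic wrist -> base transform usually comes from the robot      -->
  <!-- driver together with robot_state_publisher. The package and file names are    -->
  <!-- placeholders, and a driver publishing /joint_states is assumed to be running. -->
  <param name="robot_description"
         command="$(find xacro)/xacro $(find my_robot_description)/urdf/my_robot.urdf.xacro"/>
  <node pkg="robot_state_publisher" type="robot_state_publisher" name="robot_state_publisher"/>
</launch>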

After a fixed hand-eye calibration

In this case the internal link stores the transform between the camera and the robot base, which is more or less the same as a workspace calibration. So if you want to receive data relative to your robot base, you only need to set the target_frame to the robot's base frame.

<launch>
  <arg name="target_frame" doc="The robot base frame, in which you want to receive data."/>
  <arg name="serial" doc="The serial number of the camera."/>
  <node pkg="ensenso_camera" type="ensenso_camera_node" name="ensenso_camera_node">
    <param name="serial" value="$(arg serial)"/>
    <param name="target_frame" value="$(arg target_frame)"/>
  </node>
</launch>

The camera also publishes its transform from itself to the target_frame, in this case the robot base frame. To be exact: the whole transform is given between the camera_frame and the link_frame (which is published to TF), and the transform between the link_frame and the target_frame is a unit transform.

