With-Respect-To

Fast and simple 3D pose manager to stop worrying about rigid transformations

🔗 GitHub Link


When working with robotics or 3D systems, keeping track of transformations between reference frames is essential, but the computations involved, while relatively simple, are error-prone. Ultimately, a roboticist wants to either 1) record a pose for later use, or 2) retrieve a specific pose relative to a specific reference frame. Ideally, this process should be fast, simple, and reliable, so that the roboticist can focus on higher-level tasks without worrying about the underlying transformations.
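Even chaining just two poses by hand requires getting the multiplication order and inversions exactly right, which is where mistakes creep in. A plain numpy sketch (the frame names here are purely illustrative):

```python
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Rotation of 90 degrees about z
Rz90 = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])

# Pose of the end-effector relative to the world
X_E_W = pose(Rz90, [0.5, 0.0, 0.3])
# Pose of a camera relative to the end-effector
X_C_E = pose(np.eye(3), [0.0, 0.1, 0.05])

# Pose of the camera relative to the world: the order matters,
# and inverting the wrong factor silently yields a wrong pose.
X_C_W = X_E_W @ X_C_E
# Pose of the world relative to the camera is the inverse
X_W_C = np.linalg.inv(X_C_W)
```

Doing this by hand for a handful of frames is manageable; doing it reliably across a whole application is the bookkeeping WRT takes over.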

With-Respect-To (WRT) is a lightweight, high-performance library for managing 3D transformations with explicit accessors and a centralized database approach. WRT does not require ROS (unlike tf2), making it ideal for standalone applications.
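The centralized-database idea can be illustrated with a toy pose store. This is not WRT's implementation (its actual API appears in the script below); it only sketches the underlying concept of answering a query by chaining stored parent-relative poses:

```python
import numpy as np

class PoseDb:
    """Toy centralized pose store: each frame is recorded relative to a parent
    frame, and queries are resolved by chaining transforms up to the root.
    Illustrative only; WRT's storage and API differ."""

    def __init__(self):
        # frame -> (parent frame, 4x4 pose of frame expressed in parent)
        self.edges = {}

    def set(self, frame, wrt, X):
        self.edges[frame] = (wrt, np.asarray(X, dtype=float))

    def _to_root(self, frame):
        # Pose of `frame` relative to the root of its chain.
        X = np.eye(4)
        while frame in self.edges:
            parent, X_f_p = self.edges[frame]
            X = X_f_p @ X
            frame = parent
        return frame, X

    def get(self, frame, wrt):
        # Pose of `frame` relative to `wrt`, provided both share a root.
        root_a, X_a = self._to_root(frame)
        root_b, X_b = self._to_root(wrt)
        assert root_a == root_b, "frames are not connected"
        return np.linalg.inv(X_b) @ X_a

db = PoseDb()
X_cam_ee = np.eye(4); X_cam_ee[:3, 3] = [0.0, 0.1, 0.05]
X_ee_world = np.eye(4); X_ee_world[:3, 3] = [0.5, 0.0, 0.3]
db.set('cam', 'ee', X_cam_ee)
db.set('ee', 'world', X_ee_world)
X = db.get('cam', 'world')  # camera pose expressed in the world frame
```

The appeal of centralizing the poses is that every process sees one consistent tree and asks for exactly the relative pose it needs, instead of each component re-deriving chains of transforms on its own.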

Why WRT?

  • Simple: Uses a single, explicitly defined 3D convention.
  • Fast: Handles hundreds of transformations per second with negligible overhead.
  • Minimalist: A clean, intuitive API without unnecessary complexity.
  • Accessible: Works with Python, C++, Bash, and Linux command-line tools.

Blazing Fast Performance

WRT is designed for speed and efficiency, ensuring that accessing transformations doesn’t slow down your application. In stress tests, WRT consistently sustained over 300 operations per second, even with multiple concurrent processes working on very deep pose trees. On any recent computer, a WRT operation should complete in about 2.5 milliseconds.

Even under heavy concurrent access, the library remains efficient, meaning you don’t have to worry about performance bottlenecks when querying transformations.
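For perspective, the matrix arithmetic itself is far cheaper than 2.5 ms per operation. A rough throughput check along these lines (numbers will vary by machine) suggests that most of the per-operation time in any pose manager goes to storage and synchronization rather than the math:

```python
import time
import numpy as np

# Simulate the arithmetic cost of resolving a pose through a deep chain:
# compose 50 rigid transforms per query, as a deep pose tree would require.
rng = np.random.default_rng(0)
chain = []
for _ in range(50):
    T = np.eye(4)
    T[:3, 3] = rng.normal(size=3)
    chain.append(T)

n_queries = 1000
start = time.perf_counter()
for _ in range(n_queries):
    X = np.eye(4)
    for T in chain:
        X = X @ T
elapsed = time.perf_counter() - start
print(f"{n_queries / elapsed:.0f} queries/s (pure matrix chaining)")
```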

Real-World Example: Kinematic Calibration From Camera Observations

A good example of WRT in action is from my kinematic calibration library. In this context, a Realsense D405 camera is mounted on the end-effector of a Franka Research 3 robot. As the robot moves through various poses, the camera captures images of a calibration target, allowing the robot to compute the pose of the camera relative to the target for each observation. These observations are then used to refine the robot’s kinematic parameters, improving its accuracy in Cartesian space.

This process involves many transformations between the camera, the robot’s end-effector, the calibration target, and the robot’s base frame. The following script from my calibration library shows how WRT simplifies working with these transformations and poses:

from spatialmath import SE3
import roboticstoolbox as rtb
import with_respect_to as WRT
import pickle

import sys
from pathlib import Path
sys.path.insert(0, Path(__file__).parents[1].as_posix())
from RobotKineCal import SerialRobotKineCal

db = WRT.DbConnector()

#Approximate pose of the camera relative to the link8 frame
REALSENSE_X_OFFSET_MM        = 94.5     #millimeters
REALSENSE_Y_OFFSET_MM        = -9       #millimeters
REALSENSE_Z_OFFSET_MM        = 40.55    #millimeters
REALSENSE_ROT_Z_OFFSET_DEG   = 90       #degrees

#Pose of Realsense relative to TCP expressed in TCP frame
X_C_T = SE3.Rz(REALSENSE_ROT_Z_OFFSET_DEG, t=[REALSENSE_X_OFFSET_MM/1000, REALSENSE_Y_OFFSET_MM/1000, REALSENSE_Z_OFFSET_MM/1000], unit="deg")

db.In('kine-cal').Set('ee').Wrt('cam').Ei('cam').As(X_C_T.inv().A)

#Pose of the board relative to robot's base frame (world)
# Assuming a board thickness of 3 mm, and a robot mounting plate thickness of 12.7 mm
# The origin of the board is close to the 0 marker.
board_centre = ((82.25+4*25+3+250)/1000, -(4.5*25+3+250)/1000)
board_dims = (0.5, 0.5)
#Top left corner of the checkerboard
# We need to add half the size of the board (including margin) and remove the margin.
top_left_corner = (board_centre[0] + board_dims[0]/2 - 0.025, board_centre[1] - board_dims[1]/2 + 0.025)
X_B_W = SE3.Ry(0,t=[top_left_corner[0], top_left_corner[1], -9.7/1000], unit="deg")
db.In('kine-cal').Set('board').Wrt('world').Ei('world').As(X_B_W.A)

#Load the data
with open(Path(__file__).parent / 'calib_data.pickle', 'rb') as fin:
    kine_cal = pickle.load(fin)
joint_positions = kine_cal['joint_positions']
camera_poses = kine_cal['camera_poses']

observed_ee_poses = []
for m in range(len(camera_poses)):
    #Camera pose relative to the board
    X_C_B = camera_poses[m]
    db.In('kine-cal').Set('cam').Wrt('board').Ei('board').As(X_C_B)
    #End-effector pose from camera observation
    obs_ee_pose = db.In('kine-cal').Get('ee').Wrt('world').Ei('world')
    #Record the observed end-effector pose as a separate frame
    db.In('kine-cal').Set(f'ee-{m}').Wrt('world').Ei('world').As(obs_ee_pose)
    observed_ee_poses.append(SE3(obs_ee_pose))

#Load the model of the robot
panda = rtb.models.URDF.Panda()
#Create the calibration object
cal = SerialRobotKineCal(panda, 'panda_link8', verbose=True)
#Set the data
cal.set_observations(joint_positions, observed_ee_poses)
#Solve the calibration problem
result = cal.solve()

#Print the URDF joint definitions
cal.print_urdf_joint_definitions(result)