Adding eyes to TurtleBot3 with OAK-D

Luis M. Bracamontes
Jun 4, 2022


In my previous publication I was trying to enhance my TurtleBot3 robot with a radar. However, I thought that having a camera, or even better a depth camera, on the robot would let me play with point clouds and depth algorithms for path planning, SLAM, autonomous navigation, etc. With that in mind I decided to integrate a simple depth camera. There are a lot of options for 3D cameras, but they can be very expensive (ZED, RealSense) or bulky (Kinect v2) for a small robot, and I wanted to keep things simple and cheap. I ended up getting an OAK-D due to its size, price and built-in processing capabilities.

Installing the hardware on the robot

The first thing to do was to figure out how to mount the camera on my TurtleBot3 Burger. Since I did not want to remove the lidar, I moved it 5 cm above its original location and placed the OAK-D at the front, so I get to keep both sensors at the same time! :)

OAK-D depth camera installed.

Running on ROS

While the installation took some time, it was not the most difficult part of the process. The next step was to modify the robot description (URDF) to include both the OAK-D and the relocated lidar. Here is a visualization of the boosted version of the TurtleBot3 Burger.

Rviz visualization of the robot.
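
For reference, here is a rough sketch of what that change to the robot description can look like. It is only illustrative, assuming the stock TurtleBot3 Burger link names (base_link, base_scan); the OAK-D frame name and all offsets are placeholders rather than the exact values from my setup.

```xml
<!-- Sketch only: names and offsets are placeholders, not my exact values. -->

<!-- Lidar joint, raised roughly 5 cm above its stock mounting height -->
<joint name="scan_joint" type="fixed">
  <parent link="base_link"/>
  <child link="base_scan"/>
  <!-- z = original Burger value plus ~0.05 m -->
  <origin xyz="-0.032 0 0.22" rpy="0 0 0"/>
</joint>

<!-- Extra frame for the OAK-D mounted at the front of the robot -->
<link name="oak_d_frame"/>
<joint name="oak_d_joint" type="fixed">
  <parent link="base_link"/>
  <child link="oak_d_frame"/>
  <origin xyz="0.05 0 0.10" rpy="0 0 0"/>
</joint>
```

The camera driver's TF frames then just need to be attached to this new link (via its frame parameters or a static transform) so the depth data shows up in the right place relative to the rest of the robot.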

Now let’s see the 3D point cloud generated by the OAK-D (published by the depthai-ros driver [1]) in rviz.

TurtleBot3 Burger with OAK-D doing SLAM.
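
Besides looking at the cloud in rviz, it can also be consumed directly in code, which is where things get interesting. Below is a minimal subscriber sketch, assuming ROS 2 / rclpy and the depthai-ros driver [1]; the topic name is an assumption, so check `ros2 topic list` for whatever your launch file actually publishes.

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2


class CloudChecker(Node):
    """Minimal node that reports the size of each incoming point cloud."""

    def __init__(self):
        super().__init__('cloud_checker')
        # Topic name is an assumption; adjust to the cloud topic your
        # camera driver / launch file actually publishes.
        self.create_subscription(PointCloud2, '/stereo/points', self.on_cloud, 10)

    def on_cloud(self, msg: PointCloud2):
        # width * height gives the number of points (organized cloud or not)
        self.get_logger().info(f'got a cloud with {msg.width * msg.height} points')


def main():
    rclpy.init()
    rclpy.spin(CloudChecker())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```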

The output looks great! However, I’m just overlaying the color image on the depth image to generate it. In spite of that, this opens the door for the TurtleBot3 Burger to start doing some interesting 3D point cloud experiments and deep learning with the on-board processing that the OAK-D adds!
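
To make “overlaying” concrete: with a depth image that is registered to the color image, every pixel can be back-projected through the pinhole intrinsics (fx, fy, cx, cy, taken from the camera’s CameraInfo) and tagged with its RGB value. A minimal NumPy sketch, assuming depth in meters and an already-aligned color image:

```python
import numpy as np


def depth_to_colored_cloud(depth_m, rgb, fx, fy, cx, cy):
    """Back-project a registered depth image (H x W, meters) into an N x 6
    array of XYZRGB points using pinhole intrinsics (fx, fy, cx, cy).
    Assumes the color image (H x W x 3) is already aligned to the depth image."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth_m
    valid = z > 0                       # drop pixels with no depth reading
    x = (u - cx) * z / fx               # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy               # optical frame: x right, y down, z forward
    xyz = np.stack([x[valid], y[valid], z[valid]], axis=1)
    colors = rgb[valid].astype(np.float32)  # matching RGB values, shape (N, 3)
    return np.hstack([xyz, colors])
```

This is roughly what the depth_image_proc-style point cloud nodes in ROS do under the hood, which is why a plain color-over-depth overlay is already enough to get the colored cloud shown above.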

References

[1]: https://github.com/luxonis/depthai-ros

Written by Luis M. Bracamontes

Senior computer vision engineer, passionate about robots.
