ROS Node Available for Nuitrack


I’ve written a ROS node for the Nuitrack SDK, which publishes 2D and 3D tracking info, and provides markers in RVIZ. Code posted on Github at


Sheldon Robot:



Simulation in RVIZ:

Following me around the house

Code for Nuitrack ROS Node:

Publish rgb/depth via Image Transport on ROS

Hello dshinsel

I am installing your ROS node for the Nuitrack skeletal tracking (trial version). I am getting an error finding `#include "nuitrack/Nuitrack.h"`. Is that header included in the Nuitrack SDK or in the Nuitrack Linux drivers? I have been looking for these two files all over the internet, and I have only found the Nuitrack SDK. Where are the Linux drivers located?

Thank you for your help.

Yours sincerely,



Hello there,
I am having the same issue and can’t find the “nuitrack/Nuitrack.h” file. @Daniel, did you ever get this to work?
Thanks in advance everyone!


Never mind, I got it to install. For those having trouble, make sure you install the SDK from Nuitrack’s main page under the API link, as well as install the drivers per the instructions at

Then change the “NUITRACK_SDK_PATH” parameter in the ROS package’s CMakeLists.txt to the location where you saved the SDK.
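For reference, that change is a one-line edit in the package’s CMakeLists.txt (the variable name comes from the post above; the path below is just an example to replace with your own install location):

```cmake
# Point the build at wherever you unpacked the Nuitrack SDK
# (example path -- substitute your own):
set(NUITRACK_SDK_PATH /home/user/NuitrackSDK)
```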


UPDATE: Added ability to share Depth and RGB images with other ROS Nodes!

Unfortunately, Nuitrack accesses the depth camera directly, so ROS can’t access the camera at the same time. (There may be a way to do this with OpenNI? I don’t know…)

I wanted other nodes to be able to access the color and depth streams, so I have added the ability to publish image messages for both Color and Depth.

I initially was going to use OpenCV, but ended up not using it: ROS Kinetic’s image transport uses OpenCV 3.0, while Nuitrack uses OpenCV 2.4, and you can’t mix OpenCV versions in the same process. So I publish the Image message directly, based on the sample from Nuitrack_gl_sample. Now all my ROS nodes have access to the color and depth frames!
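The idea of filling the message by hand (no cv_bridge, no OpenCV) can be sketched like this. The struct below is a stand-in for `sensor_msgs::Image` so the snippet is self-contained; the field names match the real message, but the helper function is hypothetical, not the node’s actual code:

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Stand-in for sensor_msgs::Image (same field names as the real message).
struct ImageMsg {
    uint32_t height = 0;
    uint32_t width = 0;
    std::string encoding;
    uint32_t step = 0;          // bytes per row
    std::vector<uint8_t> data;  // row-major pixel buffer
};

// Copy a raw RGB frame (as Nuitrack's color frame exposes it) straight
// into the message buffer, bypassing cv_bridge/OpenCV entirely.
ImageMsg makeRgbImageMsg(const uint8_t* pixels, uint32_t width, uint32_t height) {
    ImageMsg msg;
    msg.width = width;
    msg.height = height;
    msg.encoding = "rgb8";      // 3 bytes per pixel
    msg.step = width * 3;
    msg.data.resize(static_cast<size_t>(msg.step) * height);
    std::memcpy(msg.data.data(), pixels, msg.data.size());
    return msg;
}
```

A depth frame works the same way with encoding `"16UC1"` and `step = width * 2`.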

Code is checked in at


UPDATE: Added publisher for color pointclouds, so other nodes can now do point cloud processing with PCL, etc.
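For anyone curious how a color point cloud is laid out for PCL/RViz: each point in a `sensor_msgs::PointCloud2` is a fixed-size byte record. A common XYZRGB layout is `x`, `y`, `z` as FLOAT32 at offsets 0/4/8 and `rgb` as a packed FLOAT32 at offset 12, with `point_step = 16`. A self-contained sketch of packing one point (the helper is illustrative, not the node’s code):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Append one XYZRGB point to a PointCloud2-style data buffer.
// Layout assumed: x,y,z FLOAT32 at offsets 0/4/8, rgb FLOAT32 at 12,
// point_step = 16 bytes.
void packPoint(std::vector<uint8_t>& buf, float x, float y, float z,
               uint8_t r, uint8_t g, uint8_t b) {
    size_t base = buf.size();
    buf.resize(base + 16);
    std::memcpy(&buf[base + 0], &x, 4);
    std::memcpy(&buf[base + 4], &y, 4);
    std::memcpy(&buf[base + 8], &z, 4);
    // RGB is packed into a uint32 (0x00RRGGBB) and reinterpreted as float.
    uint32_t rgb = (uint32_t(r) << 16) | (uint32_t(g) << 8) | uint32_t(b);
    float rgbf;
    std::memcpy(&rgbf, &rgb, 4);
    std::memcpy(&buf[base + 12], &rgbf, 4);
}
```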



Thank you so much for keeping this updated. I really appreciate your work and that you share it.

I am trying to build a person following project as well.

If there are multiple people in the frame, how do you keep the following “locked on” to the person you want to follow? How do you designate this person from the outset? (I was thinking maybe via a gesture?)
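A minimal sketch of one plausible approach (not from the node’s code): the tracker assigns each visible person a user ID, so you can lock onto the ID of whoever performs a designation gesture (e.g. a wave) and keep following that ID until it leaves the frame. The types and the `waving` flag below are hypothetical stand-ins, not Nuitrack’s API:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical per-frame view of one tracked person.
struct TrackedUser {
    int id;       // skeleton/user ID from the tracker
    bool waving;  // true if this user performed the designation gesture
};

class FollowTarget {
public:
    // Call once per frame with the users currently visible.
    // Returns the ID to follow, or -1 if no one is locked on.
    int update(const std::vector<TrackedUser>& users) {
        bool locked_visible = false;
        for (const auto& u : users) {
            if (u.id == locked_id_) locked_visible = true;
        }
        if (locked_id_ != -1 && !locked_visible) {
            locked_id_ = -1;  // target left the frame; drop the lock
        }
        if (locked_id_ == -1) {
            for (const auto& u : users) {
                if (u.waving) { locked_id_ = u.id; break; }  // gesture designates
            }
        }
        return locked_id_;
    }

private:
    int locked_id_ = -1;
};
```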