I have three ideas for improving NuiTrack:
Intel RealSense cameras generate depth by analyzing the images from two stereoscopic cameras.
Maybe you could add depth-map generation from an arbitrary camera pair (OpenCV ships a GPU-accelerated stereo-matching algorithm), so NuiTrack would work with any two identical, properly aligned cameras?
A pseudo depth map can also be generated from a single camera based on pixel-motion analysis. MOCAP products built on this idea already exist (wrnch). Any chance of implementing single-camera MOCAP?
The last idea is to produce more precise MOCAP from multiple standard cameras.
Your product and SDK look good, so I believe it could be worth extending their compatibility to any camera hardware …