Hi, I studied the nuitrack_gl_sample example, and I noticed that the user's skeleton can be built from the depth and color data. I wondered if it is possible to do the opposite. That is, starting from the user's skeleton, can I get the depth and color data?
Ummm - what exactly would be the purpose of this?
The whole point of Nuitrack is that it works out where a person's skeleton probably is from the depth data it receives - technically it doesn't even need the color data to do this.
Ok, but technically, what are the steps to get a skeleton from the depth data? At the code level, do you have to write something, or just run commands?
Nuitrack is a Software Development Kit (SDK) - a set of code libraries used alongside other tools to make computer programs - so yes, it's up to you how you choose to make use of the information Nuitrack provides.
Nuitrack has no tools for making a visual representation of a skeleton - or, for that matter, for displaying any of the information it generates. It is up to you, the end user, to choose how you want to represent a skeleton, a cursor, a point cloud, or an image mask. This might be done using tools such as OpenGL, SFML, DirectX, Unity, Unreal Engine, or engines like Godot, for which you may choose to code your own interfaces.
The nuitrack_gl_sample shows in reasonably good detail the steps needed to get information from Nuitrack onto a display screen using OpenGL - and the Unity and Unreal samples do the same for those engines.
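As a rough sketch of the flow the samples follow - this is from memory of the Nuitrack C++ API, so treat the exact names and signatures as approximate and check them against the headers that ship with your SDK version:

```cpp
#include <cstdio>
#include <nuitrack/Nuitrack.h>

using namespace tdv::nuitrack;

// Called by Nuitrack each frame with the latest skeleton data.
void onSkeletonUpdate(SkeletonData::Ptr data)
{
    for (const Skeleton& skeleton : data->getSkeletons())
    {
        const Joint& head = skeleton.joints[JOINT_HEAD];
        if (head.confidence > 0.5f)  // skip joints Nuitrack isn't sure about
            std::printf("user %d head at (%.0f, %.0f, %.0f) mm\n",
                        skeleton.id, head.real.x, head.real.y, head.real.z);
    }
}

int main()
{
    Nuitrack::init();                              // load config, start the engine
    auto tracker = SkeletonTracker::create();      // the module that builds skeletons
    tracker->connectOnUpdate(onSkeletonUpdate);    // register our per-frame callback
    Nuitrack::run();                               // begin processing depth frames

    for (;;)
        Nuitrack::waitUpdate(tracker);             // block until the next frame (~30 fps)

    Nuitrack::release();
}
```

Error handling (Nuitrack throws exceptions on init/run failures) is omitted here for brevity; the real gl_sample wraps these calls in try/catch and then feeds the joint positions into OpenGL draw calls.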
What Nuitrack does give you is a set of data generated each frame (around a 30th of a second) describing where it believes users are standing in 3D space - and, from that, where parts of those users are positioned: arms, legs, head, hands, etc. It's just data - an xyz position and a rotation in space for each element.