# On the Recognition of Skeletons

I want to know if I can calculate measurements such as height and waist circumference by collecting joints through a 360-degree rotation. Is there an example?

Hello, @socket.

You can use the collar joint position plus the distance between the collar joint and the head joint as a simple way to approximate a person's height.
If you need more accurate results, we suggest referring to the following papers as a starting point:
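As a rough illustration of that approach, here is a minimal, self-contained sketch. The `Vec3` struct, the joint values, and the floor height are assumptions made for this example, not Nuitrack SDK types; Nuitrack reports real-world joint coordinates in millimeters.

```cpp
#include <cmath>

// Illustrative 3D point in millimeters; Vec3 is a stand-in for
// the SDK's real-coordinate vector, not an actual SDK type.
struct Vec3 { float x, y, z; };

// Euclidean distance between two points, in mm.
inline float dist(const Vec3& a, const Vec3& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Rough height estimate: the collar joint's elevation above the floor
// plus the collar-to-head distance. floorY is the floor plane's
// y-coordinate in the same frame (assumed known from calibration).
inline float approxHeightMm(const Vec3& collar, const Vec3& head, float floorY) {
    return (collar.y - floorY) + dist(collar, head);
}
```

For a collar joint 1400 mm above the floor with the head joint 250 mm above it, this yields roughly 1650 mm. Treat it only as a first approximation, since it ignores the extent of the head above the head joint.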

Also, could you please tell us more about your use case? How do you collect joints through a 360-degree rotation?

Hello, @socket.

How are you doing? Do you have any other questions?

Can you give me a case study, for example for waist circumference, height, hip circumference, and similar measurements?

Hello, @socket.

Sorry for the delayed response.

The body measuring feature is currently in development. Please keep an eye on our roadmap so you can be the first to know when it is complete.

Now I have a problem. When we use an RGB-D camera, we obtain the key points and their data. Is there any way to obtain the edge coordinates at a key point? For example, for the center point of the human body, I would like to obtain the leftmost and rightmost coordinates at that point. Also, does the SDK support capturing only a single frame? I don't need that much data.

Hello, @socket.

Could you please elaborate on what you mean by the left and right coordinates of a joint? A joint, or key point, as you mentioned, is simply a point in the sensor's 3D space. The joint struct contains both the real and the projective coordinates. Regarding your last question: Nuitrack is designed to operate on streams, but you can stop capturing frames as soon as the first one containing all the necessary information has been received.
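To make the distinction concrete: since a joint has no width, one workaround (a sketch under our own assumptions, not a dedicated Nuitrack API) is to scan the user-segmentation mask row at the joint's projective y-coordinate and take the outermost pixels carrying that user's ID:

```cpp
#include <utility>
#include <vector>

// Given one row of a user-segmentation mask (pixel value == user ID
// for pixels belonging to that user), return the column indices of the
// leftmost and rightmost pixels of that user, or {-1, -1} if the user
// does not appear in the row. The flat row-of-ints layout is an
// assumption for this sketch; adapt it to the SDK's user-frame data.
std::pair<int, int> bodyEdgesOnRow(const std::vector<int>& maskRow, int userId) {
    int left = -1, right = -1;
    for (int x = 0; x < static_cast<int>(maskRow.size()); ++x) {
        if (maskRow[x] == userId) {
            if (left < 0) left = x;  // first user pixel from the left
            right = x;               // keep updating: last user pixel
        }
    }
    return {left, right};
}
```

The resulting column indices can then be mapped back to real-world coordinates with the same depth-to-world conversion used for projective joint coordinates.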

Hello, @socket.

How are you doing? Do you have any other questions?

How can I enable face coordinate acquisition in C++? I modified the config file and encountered an error in the C++ example. I hope you can provide a C++ example that recognizes the coordinates of body and facial keypoints.

Hello, @socket.

Could you please provide us with the error message, the problematic code sample, your system information, and the Nuitrack version?
There is also a known issue with face tracking through the Python API; see this post.
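For reference, face tracking in Nuitrack is normally switched on by editing `nuitrack.config`. Based on the Nuitrack documentation, the relevant keys look roughly like this; please verify the exact names against the config file shipped with your version:

```json
{
    "Faces": {
        "ToUse": true
    },
    "DepthProvider": {
        "Depth2ColorRegistration": true
    }
}
```

With these enabled, recent SDK versions expose the face data as a JSON string via `tdv::nuitrack::Nuitrack::getInstancesJson()` in C++.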