I am currently working on a project where I need to export the skeleton joint data and the corresponding point cloud from the depth frame data in C++. I tried to follow the point cloud tutorial in Unity, but sadly I was not able to implement the required methods in C++. I can export the joint data without any problems, but when it comes to exporting the depth frame data as a point cloud, I am unable to find any functions that could help.
Has this been asked before, or is there an implementation of this function that I couldn't find?
Any help would be much appreciated.
There is no ready-to-use function that would return a point cloud, but you can implement your own point cloud generation using the depth information provided by Nuitrack:
- First, get the depth and RGB frames (for example, you can see how to get a depth frame in nuitrack_gl_sample).
- Second, convert each point of the depth frame to a point in real-world coordinates (XYZ). This can be done using the convertProjToRealCoords function.
- After that, you can use those coordinates to render the corresponding point of the RGB frame using the rendering engine of your choice.
Thank you for the fast reply. I tried your suggested method, but I have a problem with convertProjToRealCoords, as it accepts three arguments: x, y, and a depth map.
I have tried multiple methods that should output a depth frame, but none of them seem to work as the third input parameter to the function above. I also tried the depthSensor->getDepthFrame() method, and sadly that is also not the right data type.
As far as I can see, nuitrack_gl_sample does it the same way, apart from the texture-mapping step.
Am I missing something?
Could you please refer me to the function that delivers the right output?
convertProjToRealCoords does not take a DepthFrame object as its third parameter, but the actual depth data. So you have to call DepthFrame::getData() to get a pointer to the raw depth array and then pass individual values from that array to convertProjToRealCoords:
const uint16_t* depthPtr = depthFrame->getData();
Vector3 realCoords = _depthSensor->convertProjToRealCoords(x, y, depthPtr[y * depthFrame->getCols() + x]);
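To extend that one-pixel snippet to the whole frame, the same row-major indexing (y * cols + x) is applied in a double loop. A self-contained sketch, where the `convert` callable stands in for `_depthSensor->convertProjToRealCoords` (which is only available through the SDK):

```cpp
#include <cstdint>
#include <functional>
#include <vector>

struct Vec3 { float x, y, z; };

// Walk a row-major depth buffer and convert every valid pixel.
// `convert` stands in for DepthSensor::convertProjToRealCoords.
std::vector<Vec3> convertFrame(
        const uint16_t* data, int cols, int rows,
        const std::function<Vec3(int, int, uint16_t)>& convert) {
    std::vector<Vec3> points;
    points.reserve(static_cast<size_t>(cols) * rows);
    for (int y = 0; y < rows; ++y)
        for (int x = 0; x < cols; ++x) {
            uint16_t d = data[y * cols + x];  // same indexing as the snippet above
            if (d != 0)                        // skip pixels with no measurement
                points.push_back(convert(x, y, d));
        }
    return points;
}
```

With the real sensor you would pass a lambda wrapping convertProjToRealCoords; skipping zero-depth pixels keeps invalid measurements out of the cloud.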
Has your issue been solved? Do you have any other questions that I can help you with?
The problem I ran into after your suggestion was as follows:
Since I need to call convertProjToRealCoords inside the onNewDepthFrame callback, I am having trouble passing the DepthSensor object to the callback function. Is there another way to call this function without first instantiating a DepthSensor object and passing it in?
My main goal is just to get a point cloud and the corresponding skeletal joint data for use in my project.
For that, I need the points to be in the same coordinate system. The previously suggested implementation should theoretically work, but as stated above, I was unable to get it working as intended.
You can pass the DepthSensor object to the callback function using std::bind:
depthSensor->connectOnNewFrame(std::bind(depthCallback, std::placeholders::_1, depthSensor));
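The std::bind pattern can be illustrated without the SDK. Here FakeSensor and FakeFrame are stand-ins for the Nuitrack DepthSensor and DepthFrame types, and the callback returns a string only so the wiring is observable (the real callback returns void):

```cpp
#include <functional>
#include <memory>
#include <string>

struct FakeSensor { int id = 42; };   // stand-in for DepthSensor
struct FakeFrame  { int seq; };       // stand-in for DepthFrame

// Two-argument callback: the frame is supplied by the signal,
// the sensor is the extra argument bound in via std::bind.
std::string depthCallback(FakeFrame frame, std::shared_ptr<FakeSensor> sensor) {
    return "frame " + std::to_string(frame.seq) +
           " sensor " + std::to_string(sensor->id);
}

// Build the one-argument handler that a connectOnNewFrame-style signal
// expects; std::bind fixes the sensor as the trailing argument.
std::function<std::string(FakeFrame)>
makeHandler(std::shared_ptr<FakeSensor> sensor) {
    return std::bind(depthCallback, std::placeholders::_1, sensor);
}
```

A lambda that captures the sensor, e.g. `[depthSensor](DepthFrame::Ptr frame) { ... }`, achieves the same effect and is often considered more readable than std::bind.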
Has your issue been solved? Do you have any other questions I can help you with?