The DepthFrame comes from DepthSensor. We can retrieve a DepthFrame, but we don’t know how to use its data.
Currently we copy the data from DepthFrame.Data, and we know that every 2 bytes represent 1 pixel value, but what does that value mean? And if it isn’t millimeters, how can we transform those 2 bytes into an actual distance value that we can use?
Also, we’re using the Kinect for Xbox One camera, and the DepthFrame.GetAt() method behaves strangely: when we read the value of a single pixel, it is mostly 0, regardless of whether the camera is blocked by a nearby object. How do DepthFrame values work?
Yes, you are right: every two bytes (ushort) is the depth value in millimeters for the current pixel.
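As a minimal sketch of that conversion (assuming DepthFrame.Data is a little-endian byte buffer, which holds on typical x86/ARM platforms; the helper name DepthAt is ours, not part of the SDK):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical helper: read the depth (in mm) of one pixel from a raw
// byte buffer copied from DepthFrame.Data. Each pixel is 2 bytes,
// little-endian, and the value is already the distance in millimeters.
uint16_t DepthAt(const std::vector<uint8_t>& raw, std::size_t pixelIndex) {
    const std::size_t i = pixelIndex * 2;
    // low byte first, high byte shifted up by 8 bits
    return static_cast<uint16_t>(raw[i] | (raw[i + 1] << 8));
}
```

For example, the byte pair 0xE8 0x03 decodes to 1000, i.e. a distance of 1 m.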
In some undefined pixels, the depth value may be equal to 0.
If you use Unity, you can learn more about how to work with depth in the tutorials: 3D Point Cloud and AR Nuitrack.
Thanks for the initial answer, but there are still some more questions (as mentioned above, Kinect for Xbox One is used):
- For performance we convert the raw DepthFrame.Data to a byte array because we need that data. How exactly do we convert it to ushort? Is there anything we can refer to in order to calculate this mm distance correctly? Your sample doesn’t explain much.
- What do you mean by undefined pixels? Will every pixel close to the camera report 0?
- At around 10 cm from the camera, if an object moves in and out, the value of a pixel is higher than usual (8000) and then increases normally from 0. Is this intended, or is it a data conversion issue from byte to ushort?
To get the matrix in millimeters you can use:
cv::Mat depthMap = cv::Mat(depthFrame->getRows(), depthFrame->getCols(), CV_16UC1, (void*)depthFrame->getData());
At around 10 cm from the camera, if an object moves in and out, the value of a pixel is higher than usual (8000) and then increases normally from 0. Is this intended, or is it a data conversion issue from byte to ushort?
The working distance for the Kinect for Xbox One is 0.5–4.5 m; objects outside this range may have incorrect values on the depth map.
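A simple validity check under that spec could look like this (the 500/4500 mm bounds come from the 0.5–4.5 m range above; the function name is ours):

```cpp
#include <cstdint>

// Sketch: treat a reading as reliable only if it falls inside the
// sensor's stated working range (0.5-4.5 m, i.e. 500-4500 mm).
bool InWorkingRange(uint16_t depthMm) {
    return depthMm >= 500 && depthMm <= 4500;
}
```

This would explain both observations above: an object at ~10 cm is below the minimum range, so its pixels can read 0 or implausibly large values such as 8000.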
What do you mean by undefined pixels? Will every pixel close to the camera report 0?
The sensor does not return a perfect depth map; it contains noise. This noise usually appears as a zero distance at the pixel position (typically pixels at the edges of the image, or pixels belonging to objects outside the working distance range).
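One way to handle those undefined pixels is to skip zeros when scanning the frame, sketched here with a helper of our own (not an SDK function) that finds the nearest valid reading in a flat buffer of mm values:

```cpp
#include <cstdint>
#include <vector>

// Sketch: find the nearest non-zero depth in a buffer of millimeter
// values, treating 0 as "undefined" noise as described above.
// Returns 0 if no valid pixel exists.
uint16_t NearestValidDepth(const std::vector<uint16_t>& depthMm) {
    uint16_t best = 0;
    for (uint16_t d : depthMm) {
        if (d == 0) continue;              // undefined pixel, skip it
        if (best == 0 || d < best) best = d;
    }
    return best;
}
```

In practice you would apply the same "ignore zeros" rule to whatever statistic you compute over the frame (average, minimum, histogram), rather than treating 0 as a real distance.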
How are you? Has your issue been solved?
It would be great if you could provide some reply/feedback; we will be ready to help.