I am trying to map a ColorFrame point to a DepthFrame point, but I keep getting strange results.
The depth resolution is 640x480 and the color camera is 1280x720. However, when I look at the depth image in the Unity PointCloud sample, the depth resolution looks larger than 640x480.
I tried to write something anyway using a formula like this: depth.x = 0 corresponds to Camera.x = 320 (Camera.width * 0.5 - depth.width * 0.5), and depth.y = 0 corresponds to Camera.y = 120 (Camera.height * 0.5 - depth.height * 0.5). So if I fed in a point outside the depth resolution it would return 0; otherwise it would calculate the depth point from the formula above and then look up the depth. But the returned depth seems to be off.
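For concreteness, the centered-overlay approach described above could be sketched like this. This is only a sketch of the questioner's formula: it assumes the depth image sits centered inside the colour image with no scaling (exactly the assumption in question), and it assumes the Nuitrack C# `DepthFrame` exposes `Rows`/`Cols` properties and a `[row, col]` indexer:

```csharp
// Sketch of the centered-overlay mapping from the question. Assumes a
// 640x480 depth image centered in a 1280x720 colour image, 1:1 pixel
// scale. The Rows/Cols properties and [row, col] indexer are assumed
// from the Nuitrack C# API.
int DepthFromColorPoint(nuitrack.DepthFrame depthFrame, int colorX, int colorY,
                        int colorWidth, int colorHeight)
{
    int offsetX = colorWidth / 2 - depthFrame.Cols / 2;   // 1280/2 - 640/2 = 320
    int offsetY = colorHeight / 2 - depthFrame.Rows / 2;  // 720/2 - 480/2 = 120
    int depthX = colorX - offsetX;
    int depthY = colorY - offsetY;

    // Point falls outside the depth resolution: return 0, as in the question.
    if (depthX < 0 || depthX >= depthFrame.Cols ||
        depthY < 0 || depthY >= depthFrame.Rows)
        return 0;

    return depthFrame[depthY, depthX]; // depth value at that pixel
}
```

Note this model ignores the differing fields of view and any scaling between the two sensors, which is one likely reason the returned depth is off.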
What is the best way to map a point in the ColorFrame to the corresponding depth value in the DepthFrame?
First, you have to edit nuitrack.config.
Then, a possible solution is to set the resolution. To do this, you'll need to insert the code below in NuitrackManager just before the line
"depthSensor = nuitrack.DepthSensor.Create();":
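The original snippet did not survive in this thread; the calls below are an assumption for a RealSense camera (the `Realsense2Module.*` key names), so check them against the keys in your own nuitrack.config before relying on them:

```csharp
// Assumed RealSense key names; verify them against your nuitrack.config.
// Raise the depth processing resolution to match the colour stream:
nuitrack.Nuitrack.SetConfigValue("Realsense2Module.Depth.ProcessWidth", "1280");
nuitrack.Nuitrack.SetConfigValue("Realsense2Module.Depth.ProcessHeight", "720");

// Optionally align the depth map to the colour image pixel-to-pixel,
// which removes the need for manual offset math when mapping points:
nuitrack.Nuitrack.SetConfigValue("Realsense2Module.Depth2ColorRegistration", "true");
```

These calls must run after Nuitrack is initialized and before the depth sensor module is created, i.e. immediately above the `depthSensor = nuitrack.DepthSensor.Create();` line.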
On a slightly different topic: can you get the colour stream for the UserFrame? Or will I need to use logic similar to what I have written for depth-to-colour for the UserFrame?
It would be great to have these mappings in the library already, rather than writing them ourselves; I'm sure a few people will end up writing the same scripts.
Another note: since setting the config values via
nuitrack.Nuitrack.SetConfigValue, I have found Unity often crashes when running or stopping a scene. If I instead try to add the values to nuitrack.config, I get
BadConfigValueException: NuitrackException (BadConfigValueException) when running.
DepthFrame and UserFrame have a pixel-to-pixel correspondence. Therefore, you have to implement the same logic you wrote for depth to map the UserFrame to the
ColorFrame (or vice versa).
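Because the user map is pixel-aligned with the depth map, whatever depth-to-colour mapping you implemented carries over unchanged. A hypothetical helper, under the same centered-overlay assumption the question describes, and assuming the Nuitrack C# `UserFrame` exposes `Rows`/`Cols` and a `[row, col]` indexer returning the user ID:

```csharp
// Hypothetical helper: look up the user ID under a colour-image pixel.
// Same centered-overlay model as the depth mapping in the question;
// UserFrame's Rows/Cols and [row, col] indexer are assumed from the
// Nuitrack C# API.
int UserIdFromColorPoint(nuitrack.UserFrame userFrame, int colorX, int colorY,
                         int colorWidth, int colorHeight)
{
    int userX = colorX - (colorWidth / 2 - userFrame.Cols / 2);
    int userY = colorY - (colorHeight / 2 - userFrame.Rows / 2);

    if (userX < 0 || userX >= userFrame.Cols ||
        userY < 0 || userY >= userFrame.Rows)
        return 0; // 0 = no user at this pixel

    return userFrame[userY, userX]; // user ID at that pixel
}
```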
BadConfigValueException occurs in two cases:
- If there's a syntax error in nuitrack.config. You can check nuitrack.config with a JSON formatter & validator.
- If a user specifies the path to