Sensor intrinsics and correlation

Checking the C# API, I noticed that in order to project a point into the depth image, we can use the DepthSensor.ConvertRealToProjCoords method.
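For example (a minimal sketch; the Vector3 constructor and the exact init/run sequence are assumed from the C# samples, so double-check them against your SDK version):

    using nuitrack;

    class DepthProjectionExample
    {
        static void Main()
        {
            Nuitrack.Init("");
            DepthSensor depthSensor = DepthSensor.Create();
            Nuitrack.Run();

            // A point in world space, in millimeters (Nuitrack's real-world units).
            Vector3 realPoint = new Vector3(100.0f, 250.0f, 1500.0f);

            // Projective coordinates: X and Y are depth-image pixel coordinates,
            // Z keeps the original depth in millimeters.
            Vector3 projPoint = depthSensor.ConvertRealToProjCoords(realPoint);
            System.Console.WriteLine($"({projPoint.X}, {projPoint.Y}) at {projPoint.Z} mm");

            Nuitrack.Release();
        }
    }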

Now, how could I do the same for the color image? I noticed that ColorSensor is missing a corresponding ConvertRealToProjCoords method…

In fact, it would be very convenient if the API exposed the sensor intrinsics (frame size, camera FOV, radial distortion, etc.) and the sensor correlation matrices.

ColorSensor doesn’t include any information about depth, so converting real coordinates to projective coordinates is impossible without the depth data from DepthSensor. You can align the depth image to the color image and use the conversion methods from DepthSensor.
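In practice, that alignment can usually be enabled through the DepthProvider.Depth2ColorRegistration configuration key (a minimal sketch; availability of this key may depend on your Nuitrack version):

    using nuitrack;

    class DepthToColorAlignment
    {
        static void Main()
        {
            Nuitrack.Init("");

            // Ask Nuitrack to remap depth frames into the color camera's viewpoint.
            // After this, depth pixel (x, y) corresponds to color pixel (x, y), so
            // DepthSensor's conversion methods effectively yield color-image
            // coordinates as well.
            Nuitrack.SetConfigValue("DepthProvider.Depth2ColorRegistration", "true");

            DepthSensor depthSensor = DepthSensor.Create();
            ColorSensor colorSensor = ColorSensor.Create();
            Nuitrack.Run();

            // ... use depthSensor.ConvertRealToProjCoords as usual ...

            Nuitrack.Release();
        }
    }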

The projection functions provided by the depth sensor are fine for projecting a point from world space to depth space and back, but even if the color image has the same size in pixels as the depth image, the two don't necessarily align. This happens because the color image and the depth image are acquired by different physical sensors, each with its own physical properties.
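To make the point concrete: if the API exposed the color camera's intrinsics and its extrinsic transform relative to the depth sensor, projecting into the color image would be straightforward. Here is a generic pinhole sketch (every parameter below is a made-up placeholder, not a value from the SDK):

    class PinholeSketch
    {
        // Project a point from depth-camera space into color-image pixels.
        static (float u, float v) ProjectToColor(
            float x, float y, float z,   // point in depth-camera space (mm)
            float fx, float fy,          // color focal lengths, in pixels
            float cx, float cy,          // color principal point, in pixels
            float[,] r, float[] t)       // depth-to-color rotation and translation
        {
            // Rigid transform into the color camera's coordinate frame.
            float xc = r[0, 0] * x + r[0, 1] * y + r[0, 2] * z + t[0];
            float yc = r[1, 0] * x + r[1, 1] * y + r[1, 2] * z + t[1];
            float zc = r[2, 0] * x + r[2, 1] * y + r[2, 2] * z + t[2];

            // Pinhole projection (lens distortion omitted for brevity).
            return (fx * xc / zc + cx, fy * yc / zc + cy);
        }

        static void Main()
        {
            // Placeholder extrinsics: identity rotation, 25 mm baseline along X.
            float[,] r = { { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } };
            float[] t = { 25.0f, 0.0f, 0.0f };

            var (u, v) = ProjectToColor(100, 250, 1500, 580, 580, 320, 240, r, t);
            System.Console.WriteLine($"color pixel: ({u}, {v})");
        }
    }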

This is what we would like the API to expose:

Color Frame Intrinsics:

  • Horizontal FOV
  • Vertical FOV
  • Principal Point
  • Radial Distortion Coefficients
  • Tangential Distortion Coefficients
  • Transform Matrix to correlate with the physical depth sensor

Depth Frame Intrinsics:

  • Horizontal FOV
  • Vertical FOV
  • Principal Point
  • Radial Distortion Coefficients
  • Tangential Distortion Coefficients

We already have these values for some sensors, but given that NuiTrack can use a variety of sensors, AND it doesn’t tell you which sensor it is using at a given time, we can only guess the parameters.