DepthFrame Data Format

I am using the VicoVR sensor with an Android Tablet. I built the app on Unity.

I would love to be able to use the byte array from DepthFrame.Data as documented in the link below, but I need more information on its formatting. It appears to be 614400 bytes long for a 640x480 depth frame. Because the byte array is one-dimensional, I can't make much sense of what the values correspond to in the frame. If you could give some better documentation for what the bytes represent, it would be greatly helpful.

My best guess is that every pair of bytes corresponds to a single pixel. That is, DepthFrame.Data[0] and DepthFrame.Data[1] might together correspond to the depth at pixel (0,0).

Has anyone had any experience with this? It has the potential to really improve my application if only I can figure out how to interpret it.


Hi Ben,

Each item of depth data is 16 bits as opposed to 8 bits - this is the reason why the array is larger.
The value returned is the depth of the pixel - which can be numbers between 0 and 5000 - an 8-bit integer can only hold numbers between 0 and 255.

As such the array (read as 16-bit values) is a one-to-one correspondence - pixel for pixel as it relates to the depth buffer.

DepthFrame.Data[0] is the depth at (0,0)
DepthFrame.Data[1] is the depth at (1,0)
DepthFrame.Data[2] is the depth at (2,0)
and so on till the end of the first row
DepthFrame.Data[640] is the depth at (0,1)
DepthFrame.Data[641] is the depth at (1,1)
DepthFrame.Data[642] is the depth at (2,1)

Kind regards



Hi Everyone,

I also have a question regarding the DepthFrame data format, and data formats in general.
I use the INTEL D435 Sensor.
As I think the format is valid for all sensors, I stuck to what Westa posted, but the result is not what I expected (see pictures).

The Data arrays for the depth frame and user frame are twice as long as expected for 848x480 resolution:
814080 vs 407040
So if I take only the first 407040 entries, it seems I miss half of the information.
At least the user frame shouldn't look like that.
What am I missing?

Thanks for help.

Best regards,



What dimensions are the userframe and depthframe reporting in your code?

I would start by working through the nuitrack.config and making sure you are comparing apples with apples.

What settings have you declared in the Intel section for raw dimensions and processed dimensions?


Hi Westa,

I was travelling and had no time until now to look into it.
This is the RealSense section in nuitrack.config:

"Realsense2Module": {
    "Depth": {
        "ProcessMaxDepth": 5000,
		"RawWidth" : 848,
		"RawHeight" : 480,
        "ProcessWidth": 848, 
        "ProcessHeight": 480, 
		"FPS": 90,
        "Preset": 5, 
        "PostProcessing": {
            "SpatialFilter": {
                "spatial_iter": 0, 
                "spatial_alpha": 0.5, 
                "spatial_delta": 20
            "DownsampleFactor": 1
        "LaserPower": 1.0
    "FileRecord": "", 
    "Depth2ColorRegistration": true, 
    "RGB": {
		"RawWidth" : 1920,
		"RawHeight" : 1080,
        "ProcessWidth": 1920, 
        "ProcessHeight": 1080

so I assume that my raw and processed dimensions are the same at 848x480.
848 x 480 = 407040 points

Now in my code I copy the data from the depth frame into a new array, but the array is twice as long as expected -> 814080 entries (see screenshot).
If I take only the first half of this data array and save it as a picture, I get the results from the previous post.

Data Copy:

//create the target array - one 16-bit value per pixel
private ushort[] lDepthImg = new ushort[848 * 480];

private void onDepthSensorUpdate(DepthFrame _depthFrame)
{
    // DepthStream
    depthFrame = _depthFrame;

    if (depthFrame != null)
    {
        depthTime = (long)depthFrame.Timestamp;
        //Copy data to new array
        CopyFrameDataToArrayDepth(depthFrame);
        //Save a Picture
        saveDepthImage(lDepthImg);
    }
}

void CopyFrameDataToArrayDepth(DepthFrame frame)
{
    //fast and efficient way to copy the data
    var rangePartitioner = Partitioner.Create(0, frame.Rows);

    Parallel.ForEach(rangePartitioner, (range, loopState) =>
    {
        for (int i = range.Item1; i < range.Item2; i++)
        {
            for (int j = 0; j < frame.Cols; j++)
            {
                //row-major index: row i, column j
                lDepthImg[i * frame.Cols + j] = frame[i, j];
            }
        }
    });
}

save as Picture:

public static void saveDepthImage(ushort[] depthValues)
{
    byte[] transformedDepthImage = TransformDepthImageArray(depthValues, imageType.depthImage);
    saveImage(Environment.GetFolderPath(Environment.SpecialFolder.MyPictures)
        + @"\RealsenseTest\depthImage" + savedImagesCounter + ".png",
        transformedDepthImage, depthImageWidth, depthImageHeight, imageType.depthImage);
}

private static byte[] TransformDepthImageArray(ushort[] rawData, imageType imageType)
{
    byte[] result = new byte[depthImageHeight * depthImageWidth * 4];
    int currentIndex = 0;

    foreach (int depth in rawData)
    {
        //scale depth (0..5000 mm) down to a 0..255 grayscale value
        byte color = (byte)(depth * 255 / 5000);

        result[currentIndex++] = color;
        result[currentIndex++] = color;
        result[currentIndex++] = 0;
        result[currentIndex++] = 255;
    }

    return result;
}

I also tried another approach where I use Array.Copy

// Divided by 2 because of double length of depthFrame.Data array
Array.Copy(depthFrame.Data, lDepthImg, depthFrame.Data.Length / 2);

but then I get something like this:

Thanks for looking into this.


The depth data is 16 bits wide per location - so each depth value spans two bytes of the raw Data array, and copying it byte by byte into a per-pixel array is what doubles the length:

private ushort[] lDepthImg = new ushort[848 * 480];

Instead you need to combine every pair of bytes into a single 16-bit integer, e.g. into a short integer array:

private short[] lDepthImg = new short[848 * 480];
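One way to see the difference: copying the raw bytes element by element treats every byte as its own pixel, while pairing adjacent bytes recovers the intended 16-bit depths. A small sketch in Python (the pairing logic is the same in any language; little-endian byte order is assumed here - verify it against your sensor's documentation):

```python
import struct

# Two depth values, 1000 mm and 2500 mm, as they would sit in the raw byte array
raw = struct.pack("<2H", 1000, 2500)

# Wrong: element-by-element copy - each byte becomes its own "depth"
wrong = list(raw)

# Right: combine each little-endian byte pair into one 16-bit value
right = [raw[i] | (raw[i + 1] << 8) for i in range(0, len(raw), 2)]

print(wrong)  # [232, 3, 196, 9] - four small, meaningless values
print(right)  # [1000, 2500]
```

In C#, the equivalent bulk conversion is `Buffer.BlockCopy(depthFrame.Data, 0, lDepthImg, 0, depthFrame.Data.Length);` - unlike `Array.Copy`, `Buffer.BlockCopy` copies raw bytes rather than widening each byte into its own array element.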



Hi Westa,

thanks for your help.
After changing to short and playing around with it, I got it to work.

Best regards