Aligning a virtual character with the RGB stream (aka Mixed Reality Capture)

Hi, I have just bought Nuitrack and so far so good.

I'm displaying the RGB stream using the Color Frame Canvas prefab with Render Mode set to Screen Space - Camera. Now I would like an avatar driven by Nuitrack skeletal tracking to line up with this image, so that the character in the scene matches the person in the video. Basically, I would like to put clothes simulated in Unity onto the video image of people. How can I do such a thing?

Thanks!

Hi Przemysław,

To do this, you have to change some parameters of the camera in the scene:

  1. Set the camera position to (0, 0, 0) and rotate it by 180 degrees (0, 180, 0);
  2. If the model size is incorrect, change the FOV of the camera (a minimal camera-setup sketch is shown below);
  3. Please use a model with direct mapping of joints. Such a model is used in our RiggedModel2 scene. Make sure that your Canvas doesn't overlap the model.

You can find the sample project here.
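
As a rough illustration of steps 1 and 2, the camera pose and FOV can be set from a small script. This is a minimal sketch, not part of the official Nuitrack samples; the class name and the FOV value are placeholders to fine-tune per sensor:

```csharp
using UnityEngine;

// Minimal sketch: put the Unity camera at the sensor's viewpoint.
// Attach to the Camera object in the scene.
public class MRCaptureCameraSetup : MonoBehaviour
{
    // Unity's fieldOfView is the *vertical* FOV in degrees.
    // 42.6 is only an example starting value; fine-tune it until
    // the avatar matches the video.
    public float verticalFov = 42.6f;

    void Start()
    {
        transform.position = Vector3.zero;                    // step 1: camera at (0, 0, 0)
        transform.rotation = Quaternion.Euler(0f, 180f, 0f);  // step 1: rotated 180° around Y
        GetComponent<Camera>().fieldOfView = verticalFov;     // step 2: adjust if model size is off
    }
}
```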

Hi Olga, thanks. It seems that you first need to match the position and tilt of the real camera in Unity and then fine-tune the FOV.

Hi Olga,

I'm creating something similar, where I need joints in 3D space to line up with the person on the RGB feed in the background, but instead of controlling an avatar we'll be pinning things to them.

I've made sure I'm orienting the joints in the same way you do in your example, and I've followed the rest of the advice above, but I'm having a real struggle getting them to line up.

I've also noticed the head joint doesn't rotate when I turn my head, which is a problem since the app I'm making needs to pin armour and a helmet to the user. Is this correct behaviour? If so, I assume my best way around this would be to use the skeleton head joint for the position of the helmet and use your face tracking API to get the face angle to rotate it.

I'm jumping ahead there, though, because if I can't get the skeleton joints to line up with the user in the RGB feed, it's not going to work anyway.

I've run the demo project you posted above with the Unity Chan avatar and get the same problem, even after tweaking the camera position/rotation/FOV to try to get it to line up. Anything I do that gets the wrist joints placed wide enough apart to line up results in all the other joints being out of place.

I feel like the points are never going to match properly because the image is being stretched to fill the screen. Is that the case, or have you had this demo working perfectly? Maybe I need to scale the skeleton's parent object by the same factor the image is being stretched by?

If you have any advice you could give to solve this problem, I'd be very grateful.

I'm using a RealSense D415 with this, if that makes a difference.

Thanks for your help

Adam

Hi Adam,

It seems that you didn't turn on the registration of depth and RGB frames.
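
If editing nuitrack.config by hand is inconvenient, the same option can, as far as I can tell, also be set at runtime through the C# wrapper's Nuitrack.SetConfigValue; treat this as a sketch and check it against your SDK version:

```csharp
using UnityEngine;

// Sketch: enable depth-to-color registration from code instead of editing
// nuitrack.config by hand. Skip Nuitrack.Init() here if something else in
// your scene (e.g. the NuitrackManager prefab) already initializes Nuitrack.
public class EnableRegistration : MonoBehaviour
{
    void Awake()
    {
        nuitrack.Nuitrack.Init();
        // Same key as DepthProvider.Depth2ColorRegistration in nuitrack.config.
        nuitrack.Nuitrack.SetConfigValue("DepthProvider.Depth2ColorRegistration", "true");
    }
}
```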

Hi Olga,

Thanks very much, I'll give that a try.

Hi all,
I'm also trying to align the skeleton with the RGB image in Unity, but I'm not able to get it right.
I'm using a RealSense D435i depth-sensing camera.
I have 'Depth2ColorRegistration' in 'nuitrack.config' set to 'true'.
I set the camera horizontal FOV to 69.4 and I'm using a RawImage as described in the "Displaying Skeletons on an RGB Image" tutorial.
I'm using the 'Assets/NuitrackSDK/Tutorials/Avatar Animation/Scripts/RiggedAvatar.cs' script, which I think is the one used in 'RiggedModel2.scene'.
I'm using a model from MakeHuman with a CMU-compliant rig, rendered with a 'Skinned Mesh Renderer'.
The skeleton is working fine; it's just the alignment that I'm not getting right. Any more suggestions?

There can be several problems:

  1. Background stretching (for the Image UI component, you need to enable Preserve Aspect, see https://docs.unity3d.com/2018.4/Documentation/Manual/script-Image.html);
  2. Perhaps the issue is connected to positioning in RiggedAvatar.cs (try to use the calibration script from the Nuitrack SDK at startup of your project to align the skeleton);
  3. The FOV set on the camera does not match the FOV of the sensor (see the sketch below).

Also please take a look at our tutorial (see the section Direct Mapping).
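
To make points 1 and 3 concrete, here is a minimal sketch. It assumes the background is shown through a UI Image, and it uses 69.4° only as an example horizontal FOV (the commonly quoted value for the D435 color camera); note that Unity's Camera.fieldOfView is the vertical FOV, so a horizontal sensor FOV has to be converted:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch for points 1 and 3: keep the background image's aspect ratio
// and derive Unity's vertical FOV from a sensor's horizontal FOV.
public class BackgroundAndFovSetup : MonoBehaviour
{
    public Image background;                   // UI image showing the RGB stream
    public float sensorHorizontalFov = 69.4f;  // degrees; example value, check your sensor's datasheet
    public float aspect = 16f / 9f;            // aspect ratio of the color stream

    void Start()
    {
        background.preserveAspect = true;      // point 1: avoid stretching the background

        // point 3: Unity's Camera.fieldOfView is *vertical*, so convert:
        // vFov = 2 * atan(tan(hFov / 2) / aspect), which gives ~42.6° for 69.4° at 16:9.
        float hFovRad = sensorHorizontalFov * Mathf.Deg2Rad;
        float vFov = 2f * Mathf.Atan(Mathf.Tan(hFovRad / 2f) / aspect) * Mathf.Rad2Deg;
        Camera.main.fieldOfView = vFov;
    }
}
```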

Maybe it would be better if you guys could provide a working sample that is already calibrated, for example for RealSense cameras…

@iosif.semenyuk Thank you for the reply

  1. The RGB camera is 1920 × 1080 and the output window is 3840 × 2160. They have exactly the same ratio, 1.77(7).
  2. I'm using the "T Pose Calibration" and "Calibration Info" modules. Should I use others?
  3. I now noticed I'm using the RGB camera FOV (69.4), but I think I should be using the depth camera FOV (86). This still doesn't fix it.
  4. This is all based on the "Direct Mapping" tutorial. I've already looked at it several times.

RGB and depth FOV differ between sensor models, and Nuitrack doesn't have access to the FOV of a sensor. Unfortunately, we don't have such tutorials (for a 3D model).

Yeah, but the RGB-to-depth registration (an option for RealSense cameras) should solve this, right?

Yes, this should align depth frames and color frames; however, it's still necessary to fine-tune a sensor and a model to match it with the background. We'll consider making a tutorial on aligning RGB and depth.

Have you made a tutorial yet?

No, we haven't made this tutorial yet.

Updated the "Point Cloud" tutorial

  1. In order to improve the resulting point cloud, we recommend turning on depth-to-color registration, because a depth map doesn't exactly match an RGB image and the two should be aligned. To turn on depth-to-color registration, open nuitrack.config from the folder <nuitrack_home>\data and set DepthProvider.Depth2ColorRegistration to true.

https://github.com/3DiVi/nuitrack-sdk/blob/master/doc/Unity_Point_Cloud.md
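
For illustration, the relevant fragment of nuitrack.config looks roughly like the following; the surrounding keys and defaults vary between Nuitrack versions, so treat this as a sketch of the structure rather than the exact file contents:

```json
{
    "DepthProvider": {
        "Depth2ColorRegistration": "true"
    }
}
```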

Have you made a tutorial yet?

At the moment, we are preparing a tutorial, "Rendering a world object behind the user".
In this tutorial, you will see how to recreate a real-world environment in Unity with the correct depth map, as well as how to interact with objects in the scene (AR).

Scenes with examples from the tutorial will be available in the latest version of the Nuitrack SDK.

Subscribe to the YouTube channel so you don't miss the video.

You've been working on this tutorial for a year; when will it be done?

The video is ready and is in the final stage of editing.
Only the voice-over and the subtitle translation remain to be done.