Rendering world object behind user

I’m using Unity3D and an Intel RealSense D435 camera.
The Intel RealSense SDK includes an ARBackgroundImage sample where you can place world game objects in front of and behind a user; it uses the depth data to work out what to render in front and what behind.
Now I want to integrate this with Nuitrack skeleton tracking.

My issue is that when I tried getting both SDKs into the same project, both Nuitrack and RealSense want access to the camera, so I’m stuck.

Any thoughts?

Hi Mohammad,

What part of the RealSense SDK are you going to use? Does your project need the depth and RGB streams from the RealSense SDK?

Hi, I’m having the same issue.

I’m trying to render objects in the “real world”, using the “ArBackground” scene from the RealSense SDK to mix the user’s body with the 3D objects. But the RealSense SDK doesn’t provide skeletal tracking (only a third-party option that doesn’t work too well), so I’m trying to integrate user tracking so that the user can interact with the characters: for example, a character can move behind or in front of the user and touch their hand or shoulder. Is it possible to integrate both SDKs at the same time?

Hi Nekroraptor!

Using both SDKs in the same project is a bad idea, as it will cause problems with accessing the device.

Soon there will be a tutorial and an example showing how to draw the user with depth dependence.

In the meantime, you can try to cut out the user’s RGB image (nuitrack.ColorFrame) yourself, using the user segmentation (nuitrack.UserFrame) and the depth data (nuitrack.DepthFrame).

Maybe something like this will help: Creating a 3D Point Cloud
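A minimal sketch of that cutout, assuming the Nuitrack C# frame indexers (frame[row, col]) and that the color and user frames share a resolution (the user mask is produced in depth-sensor coordinates, so a real setup may need depth-to-color registration). The NuitrackManager frame accessors below come from the Nuitrack Unity package; adjust them to however your project receives frames:

```csharp
using UnityEngine;

// Sketch: cut the user's pixels out of the RGB frame using the segmentation mask.
public class UserCutout : MonoBehaviour
{
    public Texture2D CutoutTexture { get; private set; }

    void Update()
    {
        nuitrack.ColorFrame color = NuitrackManager.ColorFrame;
        nuitrack.UserFrame users = NuitrackManager.UserFrame;
        if (color == null || users == null)
            return;

        if (CutoutTexture == null)
            CutoutTexture = new Texture2D(color.Cols, color.Rows, TextureFormat.RGBA32, false);

        Color32[] pixels = new Color32[color.Cols * color.Rows];
        for (int row = 0; row < color.Rows; row++)
        {
            for (int col = 0; col < color.Cols; col++)
            {
                nuitrack.Color3 rgb = color[row, col];
                // UserFrame stores a per-pixel user ID; 0 means "no user here",
                // so those pixels become transparent.
                byte alpha = (users[row, col] != 0) ? (byte)255 : (byte)0;
                pixels[row * color.Cols + col] = new Color32(rgb.Red, rgb.Green, rgb.Blue, alpha);
            }
        }

        CutoutTexture.SetPixels32(pixels);
        CutoutTexture.Apply();
    }
}
```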

Thanks for your response. I synchronized the canvas with an “invisible” avatar that follows the user’s skeleton, using two cameras: one camera renders the canvas in the background, and the other renders the “invisible” character (the user). So there is a 3D representation of the user in the 3D world that is invisible but still hides the 3D objects behind it (a simplified sketch of this setup is below).

But this system has a problem: I’m using a 3D character with the skeleton tracker, and it is not accurate to the size of the user. When I use the 3D point cloud from the examples, it is too big, and I can’t adjust its size to match the canvas and the 3D skeleton tracker. How can I resize the 3D point cloud so that it aligns with the 3D user tracking and the canvas?
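For reference, the two-camera composition I mean looks roughly like this (a simplified sketch; the layer names are just illustrative):

```csharp
using UnityEngine;

// Simplified sketch of the two-camera composition described above.
public class LayeredComposition : MonoBehaviour
{
    public Camera backgroundCamera; // renders only the 2D canvas with the RGB image
    public Camera sceneCamera;      // renders the invisible occluder avatar and the 3D objects

    void Start()
    {
        backgroundCamera.cullingMask = LayerMask.GetMask("Background2D");
        backgroundCamera.depth = 0;                      // drawn first

        sceneCamera.cullingMask = LayerMask.GetMask("World3D", "UserOccluder");
        sceneCamera.clearFlags = CameraClearFlags.Depth; // keep the background color, reset depth
        sceneCamera.depth = 1;                           // drawn on top
    }
}
```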

Sorry for my English, it is not my native language; if you need more details, just ask me.

thanks!

Jojizaidi & Nekroraptor

At the moment, we are preparing a tutorial “Rendering world object behind user”.
In this tutorial, you will see how to recreate a real-world environment in Unity with the correct depth map, as well as how to interact with objects in the scene (AR).

Scenes with examples from the tutorial will be available in the latest version of the Nuitrack SDK.

Subscribe to the YouTube channel so you don’t miss the video.

thx!

I managed it by modifying the shader of the depth material from the tutorials. By adding a ColorMask 0 setting to the shader, the user is invisible to the camera and the objects behind are also invisible; then, with 2 cameras on different layers (one for the 2D background and the other for the 3D objects), I managed to create the composition.
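In case it helps anyone, the idea is roughly this (a minimal sketch for the built-in render pipeline; the shader name and queue offset are just what I would pick):

```shaderlab
Shader "Custom/UserDepthMask"
{
    SubShader
    {
        // Render before regular geometry so that objects behind the user
        // fail the depth test and stay hidden.
        Tags { "Queue" = "Geometry-10" }

        Pass
        {
            ColorMask 0 // write no color: the occluder itself stays invisible
            ZWrite On   // but still write depth, hiding whatever is behind it
        }
    }
}
```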

While the video is not yet available, you can go through the tutorial in the attached unitypackage.
Perhaps this will be useful for you.