How do I merge data from multiple sensors in Unity?

I want to cover a large area to obtain skeleton data.

So I want several RealSense sensors working on the same scene.

With the Multisensor case in Unity, I’ve been able to display two RealSense views at the same time, but how do I merge their data so that a person seen by both RealSenses is recognized as the same person?
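Conceptually, merging skeletons from two sensors means bringing both into one coordinate system via a calibrated rigid transform, then fusing matched joints. The sketch below is a hypothetical illustration of that idea, not Nuitrack's actual implementation; the rotation `R` and translation `t` are assumed to come from some calibration step.

```python
import numpy as np

# Hypothetical calibration result: rigid transform (R, t) mapping
# sensor B's coordinate frame into sensor A's. In practice these
# values come from an extrinsic calibration procedure.
R = np.eye(3)                      # rotation (identity, for illustration)
t = np.array([0.5, 0.0, 0.0])      # translation between the sensors

def to_common_frame(joints_b, R, t):
    """Map an (N, 3) array of joint positions from sensor B into sensor A's frame."""
    return joints_b @ R.T + t

def fuse(joints_a, joints_b_in_a, w_a=0.5):
    """Blend matched joints; the weight could be driven by per-joint confidence."""
    return w_a * joints_a + (1.0 - w_a) * joints_b_in_a

# One joint of the same person, as seen by each sensor:
joints_a = np.array([[0.0, 1.0, 2.0]])
joints_b = np.array([[-0.5, 1.0, 2.0]])

merged = fuse(joints_a, to_common_frame(joints_b, R, t))
# After the transform, both sensors report the same position,
# so the fused joint is [[0.0, 1.0, 2.0]].
```

Associating which skeleton in sensor B corresponds to which in sensor A (so they count as "the same person") is typically done by nearest-neighbour matching on the transformed joint positions.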

Hi @Logic_X

At the moment, we’re developing the module that will produce skeletons obtained from different sensors in the same coordinate system (Holistic Skeleton Tracking) along with the calibration tool.

You can try the beta version of this module in the Nuitrack.exe application. Add the NUITRACK_MULTISENSOR_BETA=1 environment variable to your system. After that, you’ll get access to holistic tracking from multiple sensors, as well as to the required calibration step inside the Nuitrack.exe application.
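For reference, setting that environment variable might look like this (the exact mechanism depends on your OS; `setx` persists it on Windows, `export` applies it to the current shell on Linux/macOS):

```shell
# Windows (persists for future sessions; restart the app afterwards)
setx NUITRACK_MULTISENSOR_BETA 1

# Linux/macOS (current shell only; add to your shell profile to persist)
export NUITRACK_MULTISENSOR_BETA=1
```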

We recommend playing around with this feature to see if it fits your use case. Don’t hesitate to ask if you have more questions.

I am building a similar system now for Unreal. Is holistic skeleton tracking available for the Unreal pipeline as well? Does the Femto camera have an advantage because of its onboard depth processing? Does the Femto’s multi-camera synchronization help this process? Can the Astra be used for holistic skeleton tracking? Thanks so much.

Hi @beelzebeau

Currently, we do not support Holistic Skeleton Tracking in UE.
At the moment, we do not see significant advantages in the Femto’s onboard processing and multi-sensor synchronization over the Astra+ cameras. Our sensor of choice is the Astra+ because it provides a good cost/features ratio.
In the near future, we plan to investigate the additional functions of the Femto sensors in detail and identify potential improvements they could offer.

Hi @beelzebeau

Let us know if you have any further questions.