How do I merge data from multiple sensors in unity?

I want to cover a large area to obtain skeleton data.

So I want several RealSense cameras working on the same scene.

With the multisensor sample in Unity, I've been able to display two RealSense streams at the same time, but how do I merge their data so that the person seen by both RealSense cameras is recognized as the same person?
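While waiting for an official solution, the usual approach is extrinsic calibration: estimate the rigid transform (rotation R, translation t) that maps one sensor's coordinate frame into the other's, transform the second skeleton into the shared frame, and fuse matching joints with a confidence-weighted average. The sketch below illustrates the geometry only; the transform values, function names, and confidence scores are all illustrative assumptions, not part of the Nuitrack or RealSense APIs.

```python
import numpy as np

# Hypothetical extrinsic calibration of sensor B relative to sensor A:
# a 3x3 rotation R and translation t such that p_A = R @ p_B + t.
# In practice these come from a calibration step, e.g. a checkerboard
# or shared markers observed by both sensors.
R = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0]])   # sensor B rotated 90 deg about Y
t = np.array([1.5, 0.0, 0.5])      # sensor B offset in metres

def to_sensor_a_frame(joints_b):
    """Map an (N, 3) array of joint positions from B's frame into A's."""
    return joints_b @ R.T + t

def fuse_skeletons(joints_a, joints_b_in_a, conf_a, conf_b):
    """Confidence-weighted average of matching joints from two sensors."""
    w_a = conf_a[:, None]
    w_b = conf_b[:, None]
    return (w_a * joints_a + w_b * joints_b_in_a) / (w_a + w_b)

# Toy example: two joints of the same person seen by both sensors.
joints_a = np.array([[0.0, 1.0, 2.0], [0.1, 1.5, 2.0]])
joints_b = np.array([[0.5, 1.0, 1.0], [0.5, 1.5, 1.1]])
conf_a = np.array([0.9, 0.2])   # sensor A barely sees the second joint
conf_b = np.array([0.8, 0.9])

fused = fuse_skeletons(joints_a, to_sensor_a_frame(joints_b), conf_a, conf_b)
print(fused)   # one merged skeleton in sensor A's frame
```

Matching which tracked person in sensor B corresponds to which in sensor A (when several people are in view) is a separate data-association problem, often handled by nearest-neighbour matching on transformed torso positions.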

Hi @Logic_X

At the moment, we’re developing the module that will produce skeletons obtained from different sensors in the same coordinate system (Holistic Skeleton Tracking) along with the calibration tool.

(UPD by @TAG) The Nuitrack Holistic beta can be provided on request through a feedback form.

I am building a similar system now for Unreal. Is holistic skeleton tracking available for the Unreal pipeline as well? Does the Femto camera have an advantage because of its onboard depth processing? Does the Femto's multi-camera synchronization help this process? Can the Astra be used for holistic skeleton tracking? Thanks so much.

Hi @beelzebeau

Currently we do not support Holistic Skeleton Tracking in UE.
At the moment, we do not see significant advantages in Femto's onboard processing and multisensor synchronization over Astra+ cameras. The sensor of our choice is Astra+ because it provides a good cost/features ratio.
In the near future, we plan to investigate the additional functions of Femto sensors in detail and identify potential improvements they could enable.

Hi @beelzebeau

Let us know if you have any further questions.