Multi-Camera Depth Fusion for Skeletal Tracking — Is it Possible with Nuitrack?

I’m currently exploring Nuitrack for a motion tracking setup and had a few questions about potential capabilities.
Specifically, I’m interested in whether it’s possible to combine data from multiple depth sensors. For my current testing I’m using several Intel RealSense D435 cameras. Would it be possible, for example, to calibrate them with OpenCV to align their views, and then feed the merged depth data into Nuitrack to perform skeletal tracking on the combined scene?
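For context, this is roughly the kind of merge I have in mind — a minimal numpy sketch that deprojects one sensor's depth image, transforms it with extrinsics (R, t) obtained from a calibration step such as `cv2.stereoCalibrate`, and renders it into the reference sensor's image plane. The function names and the pinhole intrinsics here are my own, not from any SDK:

```python
import numpy as np

def deproject(depth, fx, fy, cx, cy):
    """Convert a depth image (meters) into an N x 3 point cloud in camera coords."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0  # skip invalid (zero) depth pixels
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    return np.stack([x, y, z], axis=1)[valid]

def merge_into_reference(points_cam2, R, t, fx, fy, cx, cy, shape):
    """Transform camera-2 points into camera-1's frame (R, t from extrinsic
    calibration) and render them as a depth image in camera-1's view."""
    pts = points_cam2 @ R.T + t
    depth = np.zeros(shape)
    z = pts[:, 2]
    front = z > 0  # only points in front of the reference camera
    u = np.round(pts[front, 0] * fx / z[front] + cx).astype(int)
    v = np.round(pts[front, 1] * fy / z[front] + cy).astype(int)
    inside = (u >= 0) & (u < shape[1]) & (v >= 0) & (v < shape[0])
    u, v, zf = u[inside], v[inside], z[front][inside]
    # where several points land on one pixel, keep the nearest one
    order = np.argsort(-zf)
    depth[v[order], u[order]] = zf[order]
    return depth
```

The resulting image could then be combined with the reference sensor's own depth (e.g. per-pixel minimum) before handing a single composite frame to Nuitrack — assuming Nuitrack accepts an externally supplied depth frame at all, which is part of my question.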

My use case is tracking 1–3 people bouldering. The idea is to position the cameras at different angles and heights to maximize coverage and minimize occlusions, with intentional overlap between the sensors' fields of view.

I understand that Nuitrack currently performs skeletal tracking per sensor, but is there any built-in or recommended way to fuse multiple depth inputs for improved tracking robustness in overlapping views? Or alternatively, would I need to handle sensor fusion myself and send only one composite depth image to Nuitrack?
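If fusion isn't available, I imagine a fallback of running one Nuitrack instance per sensor and merging the resulting skeletons myself, something like this confidence-weighted per-joint average. This is purely a sketch of the idea — the joint-array layout and confidence vectors are my own assumptions, not Nuitrack's actual API:

```python
import numpy as np

def fuse_skeletons(skel_a, skel_b, R, t, conf_a, conf_b):
    """Fuse two skeletons (J x 3 joint-position arrays, one per sensor).

    skel_b is first transformed into sensor A's frame using extrinsics
    (R, t), then each joint is averaged, weighted by the per-joint
    tracking confidences (length-J vectors).
    """
    skel_b_in_a = skel_b @ R.T + t
    w_a = conf_a[:, None]
    w_b = conf_b[:, None]
    return (w_a * skel_a + w_b * skel_b_in_a) / (w_a + w_b)
```

A joint that one sensor sees occluded (low confidence) would then be dominated by the other sensor's estimate, which seems like it could handle the bouldering occlusion problem at the skeleton level instead of the depth level.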

I’m still in the testing phase, currently experimenting with the RealSense D435, and will be upgrading to a newer sensor soon.