Hi Prasanna,
Firstly, no - you can't just grab a depth frame from EITHER SDK, do some processing, and then hand it off after the fact to either the Astra SDK or the Nuitrack SDK for skeletal/hand tracking. Neither system supports this sort of workflow.
First, some explanation of how both SDKs work … both frameworks are best thought of as black boxes.
You initialise them - which performs a bunch of actions that find a hardware sensor and wire up the system in preparation for processing. Once you have an initialised system, you tell both SDKs to wait for a frame of data to be captured from a stream.
Each SDK then has a mechanism to call back into your software with the latest frame of data already processed.
The mechanisms each framework uses to do this are substantially different - but the net result is the same - the SDK does the processing and hands you back a set of data constructs containing fully calculated and processed information.
This may be - depending on the framework - a full frame of depth, a full frame of RGB, a skeletal array of all the currently tracked data points, an array of hand data, or a gesture array.
How the wiring up of these systems works varies for each framework - but in essence you tell the SDK everything you want it to PROCESS from the depth stream, and the framework takes over almost complete control of the process from there.
In terms of which is best … there are major differences between the Orbbec and the Nuitrack SDKs on many levels. But I would put it like this - if your tracking needs are very limited then Orbbec may be OK … but if you want accuracy or repeatability then the Nuitrack SDK beats the Orbbec offering hands down. Further, if you want functional hand tracking and gesture recognition that is in sync with the skeletal tracker, then Nuitrack is your only option.
SO a little further discussion of how the Nuitrack framework operates:
Currently you have no direct control over the init() process beyond setting up parameters in the nuitrack.config file and letting it do its job. init() does all the connection work - finding a sensor and turning it on - which allows it to work with a wide variety of different sensor hardware in a completely generic way.
Once you have the system initialised, you decide what sorts of data you want to receive from the SDK during each frame, and you then set up a set of callbacks - one for each type of tracking output you want to receive.
Then you effectively start a data pump loop that tells Nuitrack to wait until it gets a full frame of data from the connected sensor. During this process, Nuitrack will call each of the callback functions you declared, passing to each the calculations it has performed for that tracking type on the current frame.
SO - one function receives the Depth Frame, one function receives the RGB Frame, one function the User Frame, one function the Skeleton Frame, one function the Hand Frame, and one function a Gesture Frame if any has been recently captured.
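To make that concrete, here is a minimal sketch of that flow in C++ - the module names and callback signatures follow the Nuitrack headers as I know them, with error handling stripped out, so treat it as the shape of the thing rather than production code:

```cpp
#include <nuitrack/Nuitrack.h>
#include <iostream>

using namespace tdv::nuitrack;

// One callback per tracking output - each receives the fully processed
// data for the current frame.
void onSkeletonUpdate(SkeletonData::Ptr data)
{
    std::cout << "Skeletons tracked: " << data->getNumSkeletons() << "\n";
}

void onHandUpdate(HandTrackerData::Ptr data)
{
    std::cout << "Users with hand data: " << data->getUsersHands().size() << "\n";
}

int main()
{
    Nuitrack::init("");  // reads nuitrack.config, finds and starts a sensor

    // Declare the processing modules you want and wire a callback to each.
    auto skeletonTracker = SkeletonTracker::create();
    skeletonTracker->connectOnUpdate(onSkeletonUpdate);

    auto handTracker = HandTracker::create();
    handTracker->connectOnUpdate(onHandUpdate);

    Nuitrack::run();  // start the processing pipeline

    // The data pump loop: block until the next frame is fully processed.
    // Nuitrack fires every connected callback along the way.
    for (int frame = 0; frame < 1000; ++frame)
        Nuitrack::waitUpdate(skeletonTracker);

    Nuitrack::release();
    return 0;
}
```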
The Orbbec SDK works a little differently in that you define a set of STREAM readers which define what is to be processed … you then set up the same sort of data pump loop and define a single callback point - which is called AFTER the SDK has processed the current frame completely. In that callback you have access to all of the processed data (DEPTH, RGB, SKELETON).
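For comparison, a rough sketch of that same shape in the Astra SDK's C++ API - simplified from the pattern the Astra samples use, with the stream and frame type names per the Astra headers and error handling again omitted:

```cpp
#include <astra/astra.hpp>
#include <iostream>

// One single listener - called once per frame, AFTER all processing is done.
class Listener : public astra::FrameListener
{
    void on_frame_ready(astra::StreamReader& reader, astra::Frame& frame) override
    {
        // Everything that was processed is available from the one Frame object.
        auto depth = frame.get<astra::DepthFrame>();
        auto body  = frame.get<astra::BodyFrame>();

        if (depth.is_valid())
            std::cout << "Depth frame " << depth.frame_index() << "\n";

        if (body.is_valid())
            for (const auto& b : body.bodies())
                std::cout << "Tracking body id " << static_cast<int>(b.id()) << "\n";
    }
};

int main()
{
    astra::initialize();

    astra::StreamSet streamSet;
    astra::StreamReader reader = streamSet.create_reader();

    // Define the STREAMS you want processed ...
    reader.stream<astra::DepthStream>().start();
    reader.stream<astra::BodyStream>().start();

    // ... and one single callback point for the lot.
    Listener listener;
    reader.add_listener(listener);

    // The same sort of data pump loop.
    for (int i = 0; i < 1000; ++i)
        astra_update();

    reader.remove_listener(listener);
    astra::terminate();
    return 0;
}
```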
=========
Of the two - the Nuitrack system wins hands down in terms of tracking performance. And it has HAND tracking that is instant and responsive (within the limitations of the sensor).
The Orbbec SDK does have a hand tracker - but it is slow to lock on (up to 5 seconds) and very quick to lose tracking of the hands.
Also significantly, the Nuitrack hand and gesture tracking is intrinsically mapped to the SKELETON system, which means the same IDs can be used to reference a USER, SKELETON, HAND JOINT and HAND TRACKER. This is not the case with the Orbbec SDK, where the HAND tracker and Skeleton tracker don't have the same reference IDs.
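As a rough illustration of what that shared ID buys you - inside your Nuitrack callbacks you can correlate a skeleton with its hands directly, with no cross-referencing table (a sketch reusing the SkeletonData/HandTrackerData types from above; field names per the Nuitrack headers):

```cpp
#include <nuitrack/Nuitrack.h>
#include <iostream>

using namespace tdv::nuitrack;

// Match skeletons to hand data via the shared user ID - the same ID
// references the user, the skeleton, and the hand tracker entry.
void correlate(SkeletonData::Ptr skeletons, HandTrackerData::Ptr hands)
{
    for (const Skeleton& skel : skeletons->getSkeletons())
    {
        for (const UserHands& uh : hands->getUsersHands())
        {
            if (uh.userId != skel.id)
                continue;  // one shared ID space across the trackers

            if (uh.rightHand)  // Hand::Ptr may be null if not tracked
                std::cout << "User " << skel.id
                          << " right hand at (" << uh.rightHand->x
                          << ", " << uh.rightHand->y << ")\n";
        }
    }
}
```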
Westa