I am using Nuitrack for a project where the user's hands are tracked (using an Astra Pro camera). The user interacts with objects on the screen and can grab and drag them, but if the hands move faster than a few centimeters per second we lose tracking and the object gets dropped.
I've been reading other forums and online material and came across the nuitrack.config file. Is there a way to improve the speed of the hand tracking by making some changes in that file, or is there any other tip?
Orbbec Astra Pro
Intel Xeon 3.3GHz, 32GB RAM, Nvidia Quadro GPU
(We assume there are enough resources for Nuitrack to use)
There is not much in nuitrack.config that directly affects hand tracking - apart from the region of view and the spacing between the left and right hand regions of view.
The quality of data returned by Nuitrack is, for starters, a factor of the quality of the information being fed to Nuitrack by the sensor … and also the quality of the environment … and only to a smaller extent the performance limitations of platforms such as Unity.
The way that Nuitrack recognizes what is a GRAB is likely part of the issue - as the hand moves, the appearance and shape of the blob of depth points that Nuitrack believes is the hand tends to change.
With sensors like the Astra, this is further compounded by the inherent noise and halo (dead pixel) fringes around items in the front of the field of view. Basically - each frame Nuitrack makes a very intelligent guess about what it is seeing in the depth data, and converts that into estimates of what is likely a body / arm / hand etc. When the data coming from the sensor is clean, these frame-to-frame estimates can be quite reliable - but even then, never perfect. When the sensor is noisy - a fact of life for the type of sensor in the Orbbec - you get lots of jitter and noise appearing in the track, which results in drops and losses of track. This is further compounded by depth errors and the halo elements that tend to occur when hands are moved in front of the body.
No amount of processing power alone can resolve these sorts of issues - instead you need to start looking at methods of post-processing the tracking information to clean up the data - and at times even discard it, or at least intelligently filter it.
For example - instead of reading and using the raw grab data from Nuitrack, in C++ we process the grab through an intelligent smoothing algorithm, which is designed to filter out losses of grab over, say, a couple of frames.
In raw terms - if the grab message is only lost for a couple of frames, then it is likely an accidental loss of grab - so act as though it is still GRABBING.
Similar filtering algorithms can also be used on the tracked position itself to remove jitter and shake in the tracking data - and if the right types of algorithms are used, latency can be kept to a very low level.
Another area of consideration is the use of increased or more aggressive filtering when there is greater disparity between the skeletal hand joint track and the hand tracker. We currently run independent filtering algorithms on both of these tracked points - which, while often very close to each other, can at times show larger disparities when the tracker is struggling to guesstimate the location of a joint.
When there are larger deviations, it is often an indication of some element of occlusion or a drop in confidence in the quality of the track - by filtering on these results as well, you can generate information that helps with rejecting bad tracks or bad grab reports.
Just some thoughts to go on with … note however that all our current work is in C++, where filtering performance can be better optimised - in Unity you may hit the performance limits of bytecode compilers. But I've also seen some very good C# filtering code in Kinect 2 SDKs in the past, which would be worth looking into for your use case.