From what we have been able to observe, the hand algorithm and the joint algorithm use different methods to calculate the world-space location.
BUT they are pretty close to each other, for the most part.
HOWEVER, the projection math is entirely different:
the joint projection is a direct one-to-one projection onto the depth map,
while the hand projection is a projection onto a normalised point in space in front of the body detected by the system.
If you want the hand projection based on the same system as the joints, take the hand world position and use the worldToProjection function.
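If you want to see what such a world-to-projective conversion looks like, the underlying math is a standard pinhole-camera projection. The sketch below is illustrative only: the intrinsics (focal lengths and principal point) are made-up example values, and in practice you should prefer the SDK's own worldToProjection function, which knows the real sensor intrinsics.

```python
def world_to_projection(x, y, z, fx=580.0, fy=580.0, cx=320.0, cy=240.0):
    """Project a world-space point (mm, camera-centered) onto the depth map (pixels).

    fx, fy, cx, cy are hypothetical example intrinsics, not real sensor values.
    """
    u = cx + fx * x / z
    v = cy - fy * y / z  # the image v axis typically points down
    return u, v

# a point on the optical axis lands at the principal point (cx, cy)
u, v = world_to_projection(0.0, 0.0, 1000.0)
```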
As for the settings in the config, the only ones we have found useful are Width and Height: they determine the area in space, relative to the body, through which the hand is tracked.
We would love to know more about DistanceBetweenFrames.
Each hand is associated with a virtual frame of size Width (mm) x Height (mm). DistanceBetweenFrames is the distance between the centers of these frames.
I apologise, but I don't understand what "DistanceBetweenFrames" is. We tried modifying this parameter, but nothing special happened. Could you re-explain what it is, or link to documentation?
HandPointerFrame can be represented as a rectangular area (a virtual frame) in which a user's hand moves. There are two HandPointerFrames: one for the left hand and one for the right hand. Hand movements are projected onto the HandPointerFrames, and the position of a hand within a virtual frame is then converted to the position of a pointer on the screen. The bigger the frame is, the less sensitive the pointer is, and vice versa. Width and Height are the size of the rectangle (HandPointerFrame). DistanceBetweenFrames is the distance between the centers of these two rectangles (one for the right hand and one for the left hand). You can set your own values of Width, Height, and DistanceBetweenFrames to adjust the sensitivity of the cursor.
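To make this concrete, here is a rough sketch of how such a frame-to-screen mapping could work. This is not the SDK's actual implementation; the function, the frame sizes, and the screen resolution below are all hypothetical example values chosen to illustrate the geometry described above.

```python
def frame_to_screen(hand_x_mm, hand_y_mm, frame_center_mm,
                    width_mm=300.0, height_mm=200.0,
                    screen_w=1920, screen_h=1080):
    """Map a hand position (mm, in some body-anchored coordinate system)
    inside a Width x Height virtual frame to screen pixel coordinates.

    All sizes here are made-up example values, not SDK defaults.
    """
    cx, cy = frame_center_mm
    # normalise the hand position to [0, 1] within the frame, clamping at the edges;
    # a larger frame means a smaller normalised change per mm, i.e. a less sensitive pointer
    nx = min(max((hand_x_mm - cx) / width_mm + 0.5, 0.0), 1.0)
    ny = min(max((hand_y_mm - cy) / height_mm + 0.5, 0.0), 1.0)
    return nx * screen_w, ny * screen_h

# DistanceBetweenFrames separates the centers of the two frames (example value):
dist_between_frames = 400.0  # mm
left_center = (-dist_between_frames / 2, 0.0)
right_center = (dist_between_frames / 2, 0.0)

# a hand at the center of its own frame maps to the center of the screen
px, py = frame_to_screen(left_center[0], left_center[1], left_center)
```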
Hope this explanation helps.