Hello, I am trying to combine skeletal tracking with a live image and gesture tracking with interactions with the environment, like an AR experience. I would like to find out whether just combining the different tutorials will work, or whether I need to do some other form of coding in Unity to get the results I am looking for. Is there a tutorial I can follow for this? I am new to this, so any information you can direct me to would be much appreciated. Thank you.
- At the moment, we are preparing a video tutorial on AR. Until it is published, you can examine the attached package.
Don’t forget to enable depth-to-color registration.
To turn on depth-to-color registration, open nuitrack.config from the folder <nuitrack_home>\data and set DepthProvider.Depth2ColorRegistration to true (the same option can also be set from a script, as sketched below).
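The sketch below is only a rough illustration and is not part of the tutorial: it assumes the C# wrapper's nuitrack.Nuitrack.SetConfigValue call, and the class name is a placeholder. If you use the NuitrackManager prefab from the SDK, it performs initialization itself, so only the SetConfigValue line would be relevant.

```csharp
using UnityEngine;

// Rough sketch: enables depth-to-color registration from code instead of editing
// nuitrack.config by hand. Assumes the call happens after Nuitrack.Init() and
// before the depth/skeleton trackers are created.
public class RegistrationConfig : MonoBehaviour
{
    void Awake()
    {
        nuitrack.Nuitrack.Init();   // skip this line if NuitrackManager already initializes Nuitrack
        nuitrack.Nuitrack.SetConfigValue("DepthProvider.Depth2ColorRegistration", "true");
    }
}
```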
- See another example in the attached unitypackage.
In the scene, there is a LocalSkeleton (which displays just the joints and their directions) and a LocalAvatar (which overlays the character on top of the RGB image, scaled to the user's size).
Note that the skeletons are located in the local space of the SensorSpace object. If you change the orientation or position of SensorSpace, the result on the screen will remain the same (this is convenient if you want to place several skeletons in different places).
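For orientation, here is a rough sketch (not code from the attached package) of how joints can be driven in the local space of a parent container such as SensorSpace. It assumes the CurrentUserTracker helper from the Nuitrack Unity package and that the joint objects are children of that container; the class name, field, and joint list are placeholders.

```csharp
using UnityEngine;

// Sketch only: each joint object is assumed to be a child of the SensorSpace
// container, so moving or rotating the container relocates the whole skeleton
// without changing the picture on screen.
public class LocalSkeletonSketch : MonoBehaviour
{
    [SerializeField] Transform[] jointObjects;   // one object per tracked joint, children of SensorSpace

    readonly nuitrack.JointType[] jointTypes =
    {
        nuitrack.JointType.Head, nuitrack.JointType.Torso,
        nuitrack.JointType.LeftHand, nuitrack.JointType.RightHand
    };

    void Update()
    {
        if (CurrentUserTracker.CurrentUser == 0) return;
        nuitrack.Skeleton skeleton = CurrentUserTracker.CurrentSkeleton;

        for (int i = 0; i < jointTypes.Length && i < jointObjects.Length; i++)
        {
            nuitrack.Joint joint = skeleton.GetJoint(jointTypes[i]);
            // Real joint coordinates are in millimeters; convert to meters and
            // assign in local space so the parent's transform is respected.
            jointObjects[i].localPosition = 0.001f * new Vector3(joint.Real.X, joint.Real.Y, joint.Real.Z);
        }
    }
}
```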
- You can view gesture tracking (hand waving) in the tutorial Interactive Multi-Touch Gallery; a short sketch of subscribing to gesture events is shown at the end of this reply.
You can combine these tutorials to solve your problem.
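For the gesture side, a minimal sketch of subscribing to gesture events, in the spirit of the gallery tutorial, could look like the following. It assumes the NuitrackManager component from the Unity package is in the scene and exposes the onNewGesture event; the class and handler names are placeholders.

```csharp
using UnityEngine;

// Sketch only: reacts to a detected wave gesture, which could then trigger an
// interaction with the AR scene (e.g. toggling an object or switching pages).
public class GestureListener : MonoBehaviour
{
    void OnEnable()
    {
        NuitrackManager.onNewGesture += OnNewGesture;
    }

    void OnDisable()
    {
        NuitrackManager.onNewGesture -= OnNewGesture;
    }

    void OnNewGesture(nuitrack.Gesture gesture)
    {
        if (gesture.Type == nuitrack.GestureType.GestureWaving)
            Debug.Log("User " + gesture.UserID + " waved");
    }
}
```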