About RealSense’s usage environment

I'm writing because skeleton tracking is not working well with my RealSense setup.

I know that the environment I'm trying to use RealSense in is not ideal.

Even in the same environment, sometimes it works well and other times it is almost impossible to track.

However, I am curious whether the items below actually affect skeleton tracking.

  1. Tilted sensor → I'm not certain of the exact angle, but I'd estimate the sensor is tilted about 30 degrees.

  2. Long distance from people → This is also an estimate, but I want to track two people at a distance of about 2-3 meters.

  3. Detecting the back of a person → Under the above conditions, I want to detect people from behind. The interaction is driven mainly by the position of the arms.

  4. Dark environment → I was trying to use it outdoors at night. Tracking wasn't working well, so I added a light, and it seemed to work a little better.
    I then installed stronger lighting, but tracking still did not work well, perhaps because the lighting was too strong.

I'm curious whether lighting actually affects tracking, and whether strong lighting can actually hinder it.
I used LED lighting.

For each of the four questions above, I would appreciate a Yes or No (or additional explanation) on whether it can negatively affect tracking!

I am currently determining the arm position from the UI frame positions of the SimpleSkeletonAvatar created in the Skeletons Canvas prefab.
I wonder whether this script has lower tracking performance than other scripts.

I ask because I have seen the skeleton tracked normally in the Frame View shown by NuitrackManager, while the skeleton does not appear correctly in SimpleSkeletonAvatar.

This was written in English through a translator, so please understand if the sentences are not smooth.

Hi @sabin,

The answers to your questions may vary depending on the specific RealSense model and skeletal tracking algorithm selected. Which RealSense model are you using? Are you using AI Skeleton Tracking or Classic Skeleton Tracking?

  1. Tilted sensor

The optimal mounting height for the sensor is about 1.2 m, facing straight ahead and parallel to the floor. As the angle between the sensor and the floor increases, tracking quality may deteriorate (this primarily applies to Classic Skeletal Tracking).

Additional recommendations on setting up the environment can be found here.
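If the 30-degree tilt cannot be avoided physically, the tracked joint coordinates can at least be rotated back into a floor-aligned frame before interpreting arm positions (this compensates the geometry, not the tracking-quality loss itself). A minimal sketch in plain Python; the tilt angle, axis convention, and the `(x, y, z)` joint tuples are illustrative assumptions, not the actual Nuitrack API:

```python
import math

def untilt(points, tilt_deg):
    """Rotate 3D points about the x-axis to undo a downward sensor tilt.

    Assumes a right-handed camera frame (x right, y up, z forward)
    and a sensor pitched by tilt_deg degrees. Points are (x, y, z)
    tuples in meters.
    """
    t = math.radians(tilt_deg)
    cos_t, sin_t = math.cos(t), math.sin(t)
    out = []
    for x, y, z in points:
        # Standard rotation about the x-axis by +tilt_deg
        out.append((x, y * cos_t - z * sin_t, y * sin_t + z * cos_t))
    return out

# A joint 2 m straight ahead of a sensor tilted 30 degrees
print(untilt([(0.0, 0.0, 2.0)], 30.0))
```

In a Unity/Nuitrack setup the same correction is usually done by applying the sensor's mounting rotation to the joint transforms; the sketch above only shows the underlying math.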

  2. Long distance from people

The quality of the depth map decreases as the distance from the sensor increases, and skeleton tracking quality depends directly on depth map quality (regardless of which skeletal tracking algorithm is used).
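To get a feel for how quickly depth quality falls off, the standard stereo relation ΔZ ≈ Z²·Δd / (f·B) says the depth error grows with the square of the distance. A back-of-the-envelope sketch; the baseline, focal length, and sub-pixel disparity error below are illustrative assumptions, not official D455 specifications:

```python
def stereo_depth_error(z_m, baseline_m=0.095, focal_px=640.0, disparity_err_px=0.1):
    """Approximate depth error (meters) of a stereo camera at distance z_m.

    Uses the standard stereo relation dZ ~ Z^2 * d_err / (f * B).
    Baseline, focal length, and disparity error are assumed values.
    """
    return (z_m ** 2) * disparity_err_px / (focal_px * baseline_m)

for z in (1.0, 2.0, 3.0):
    print(f"{z:.0f} m -> ~{stereo_depth_error(z) * 100:.1f} cm depth error")
```

The quadratic growth is the point: under these assumptions, moving from 1 m to 3 m multiplies the depth error by nine, which is why tracking two people at 2-3 m is noticeably harder than at close range.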

  3. Detecting the back of a person

Classic Skeletal Tracking may have problems with side-on (lateral) poses, since it does not support a full 360-degree case, but there are no known problems with tracking from the back.

  4. Dark environment

A RealSense stereo sensor computes its depth map from a pair of images from its two imagers, so in complete darkness, or under bright lighting with strong highlights, the depth map is expected to be unacceptably noisy.

We could give more accurate answers if you included an example (screenshots, video, or a dump) for each situation where you expected Skeletal Tracking to work without problems but it did not.

I wonder whether this script has lower tracking performance than other scripts.

I ask because I have seen the skeleton tracked normally in the Frame View shown by NuitrackManager, while the skeleton does not appear correctly in SimpleSkeletonAvatar.

The quality of skeletal tracking should not depend on the end application. Could you attach an example of this problem, or more detailed information on how we could reproduce it?

Which RealSense model are you using?
→ I am using the D455 model.

Are you using AI Skeleton Tracking or Classic Skeleton Tracking?
→ If you are asking whether AI mode is turned on in Unity: no, it is turned off.

This is because when I tried AI mode, it didn't seem able to properly distinguish between the front and back of a person.
e.g., when I raised my right hand, code that should run only when the left hand is raised and code that should run only when the right hand is raised were sometimes called alternately.

Could you attach an example of this problem, or more detailed information on how we could reproduce it?
→ I don't currently have the RealSense with me, so it may be difficult to provide screenshots or videos.

Thank you for your reply.
In summary, the factors that degrade sensor performance in the environment where I currently use RealSense are:

  1. Tilted sensor
  2. Long distance from people
  3. Lighting too strong

Can I understand that the three things above could be a problem?

If I resolve the problems above and the issue occurs again, I will provide additional information.

Can I understand that the three things above could be a problem?

Do you mean how to find out at runtime whether these factors have started to affect quality? There is no such method at the moment, but I recommend minimizing these factors and following these recommendations to get stable quality.
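One indirect workaround is to watch the per-joint confidence values that skeleton-tracking SDKs typically expose and treat a sustained drop as a sign that the environment (tilt, distance, lighting) is degrading tracking. A sketch in plain Python, operating on confidences as plain numbers rather than the actual Nuitrack API; the 0.5 threshold is an assumed value:

```python
def tracking_quality(joint_confidences, threshold=0.5):
    """Return the fraction of joints whose confidence exceeds threshold.

    joint_confidences: iterable of per-joint confidence values in [0, 1].
    A persistently low fraction suggests tracking is degraded; the
    0.5 cutoff is an illustrative assumption, not an SDK default.
    """
    joints = list(joint_confidences)
    if not joints:
        return 0.0
    return sum(c > threshold for c in joints) / len(joints)

# Example: two of four joints are tracked confidently
print(tracking_quality([0.9, 0.8, 0.2, 0.1]))  # -> 0.5
```

In practice you would feed this from whatever per-joint confidence your tracking callback provides, and log or alert when the fraction stays low for several consecutive frames.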

Is there anything else I can help you with?

Hi @sabin,

How are you? Has your issue been solved?
It would be great if you could provide a reply or some feedback; we will be ready to help.
Thanks.

The topic was closed automatically after 14 days without a response.