Hi – New to this forum and to Nuitrack. FYI – my background is business and technology, but I am NOT a software guy yet.
I have worked with Kinect-based body scanners for measurements, but I am investigating the development of an exercise tracking system for fitness coaches.
I have started developing the logic of a sensor system that can evaluate body position/posture for a list of movements, for example the Squat, Lunge, or Pushup. I can identify the joints and criteria for performing a “SUCCESSFUL” version of each movement, or some percentage of success. For example, on the Squat, I would want the hip to be at the same level as the knee joint, and I would want to see the knee joint at the proper angle in relation to the ankle joint, etc. So for each exercise, I would list the joint criteria for a satisfactory body position.
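As a rough sketch of what such a criterion check might look like in code, here is one way to compute a joint angle and a squat-depth test from 3D joint positions. The axis convention (y = vertical, millimetres) and the tolerance value are my assumptions, not anything specific to a particular SDK:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by points a-b-c,
    each given as an (x, y, z) tuple."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(v1[i] * v2[i] for i in range(3))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

def squat_depth_ok(hip, knee, tolerance_mm=30):
    """True if the hip has descended to (or below) knee level, within a
    tolerance. Assumes y is the vertical axis, increasing upward."""
    return hip[1] <= knee[1] + tolerance_mm
```

A per-exercise rule set could then be a list of such checks (e.g. "knee angle between X and Y degrees at the bottom of the squat"), with the pass/fail results aggregated into the percentage-of-success score described above.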
Before I go down the road of major time investment, I am asking the forum whether this is a doable task via the Skeleton SDK and the current cameras out there. One scenario is that, on a monthly basis, each fitness client would perform 8-10 different movements in front of the cameras and would be assessed against the ideal. One client at a time would be the easiest to implement.
A fancier scenario would be to evaluate a small group of clients at the same time, during their normal workouts each week. Essentially, I am looking for visual feedback to the client, above and beyond simply taking photos and videos with a smartphone.
Just looking for some basic feedback from people who understand the system capabilities and perhaps advice on where to start with such a project. I am in Austin, TX.
Thanks in advance -
I believe this is possible. The joint data can be sufficient for such an analysis. But keep in mind you should work with only one client at a time, and only when the client is well positioned relative to the sensor. It will also take you a few months of R&D to develop the logic for examining the joint data and comparing it against a database of proper movements (some Principal Component Analysis will probably be required to determine what counts as a proper movement)…
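Before reaching for PCA, a much simpler baseline for "comparison with a proper move database" is a per-joint distance score between a captured pose and a reference pose, after translating both to a common origin. This is a minimal sketch; the joint names, the root joint, and the 200 mm fall-off scale are all illustrative assumptions:

```python
import math

def centre_on_root(pose, root="hip_center"):
    """Translate all joints so the chosen root joint sits at the origin.
    pose: dict of joint name -> (x, y, z) in millimetres."""
    rx, ry, rz = pose[root]
    return {name: (x - rx, y - ry, z - rz) for name, (x, y, z) in pose.items()}

def pose_score(candidate, reference, root="hip_center"):
    """0-to-1 similarity: 1.0 means an identical joint layout; the score
    decays exponentially as the mean per-joint error grows."""
    c = centre_on_root(candidate, root)
    r = centre_on_root(reference, root)
    # Euclidean distance per joint, over joints present in both poses
    dists = [math.dist(c[j], r[j]) for j in c if j in r]
    mean_mm = sum(dists) / len(dists)
    scale = 200.0  # assumed tolerance: ~200 mm mean error gives a score of ~0.37
    return math.exp(-mean_mm / scale)
```

A scheme like this ignores body size and orientation differences between clients; normalising by limb length or aligning orientations first (which is where PCA or similar techniques come in) would be the natural next step.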
The data provided by the Nuitrack SDK can be used for this sort of application, within certain levels of tolerance.
Nuitrack provides real-time estimates of joint positions based on AI algorithms that analyze depth information. What you do with that information depends on your own code.
The data includes estimates of key joint positions and rotations, 30 times per second, for up to 6 skeletons simultaneously, though single-person tracking is regarded as optimal: the more of the body that fills the scanning area, the better the tracking results tend to be.
The challenge area for your style of application would be occlusion. If you scan front-on, for example, the API can only see the closest points, so for a squat, if the knee blocks the hip point from view, Nuitrack can only make a guess… a side-on approach may be more suitable in this case.
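One practical way to cope with occlusion is to ignore joints the tracker is unsure about; skeleton trackers such as Nuitrack typically report a per-joint confidence value alongside the position. The sketch below assumes a simple dictionary layout for the joint data (the field names are illustrative, not the exact Nuitrack structure):

```python
def reliable_joints(joints, min_confidence=0.5):
    """Keep only joints the tracker is reasonably sure about.
    joints: dict of joint name -> {"pos": (x, y, z), "confidence": float},
    where confidence is in [0, 1]. An occluded hip during a front-on squat
    would typically come through with low confidence and be dropped here."""
    return {name: j for name, j in joints.items()
            if j["confidence"] >= min_confidence}
```

Any exercise criterion that depends on a dropped joint can then be skipped for that frame rather than scored against a guessed position.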
I would be happy to have a chat if you are looking for input - email@example.com
estimates are done 30 times PER second
Can the Nuitrack SDK track a lying figure? To track push-ups, for example?
The standard use case for Nuitrack is tracking a person standing in front of a sensor. Tracking a lying person is a non-standard case; you can test it using the Nuitrack Trial. Tracking results depend on the sensor used and its position.
I’ve tried with an Intel D435. Unfortunately, the joints go mad with a lying person.
Do you have any plans to develop the SDK in this direction? Maybe even on a paid basis?
Please send this request to firstname.lastname@example.org
Why is it a problem to measure a subject in the horizontal, as opposed to the vertical? Assuming the sensor is at the correct level and we view the subject from the side.
Is it because of the close proximity to the floor?
For a pushup, for example, the subject would be viewed from the side.