Draw joint orientations using OpenCV

I would like to draw the orientations of joints, as in the Nuitrack demo.

From the previous posts, I know that each column of the rotation matrix gives the coordinates of one basis vector in the camera's global coordinate system:
Regarding yaw pitch roll calculation
Local Joint Coordinate & Rotation Matrix problem
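(For instance, if the 3×3 matrix is stored row-major as a flat 9-element array, as in the Nuitrack C++ API, the axes can be read off like this; the matrix here is just a placeholder identity for illustration:)

import numpy as np

# Placeholder: flat row-major 3x3 rotation matrix (identity for illustration).
matrix = [1.0, 0.0, 0.0,
          0.0, 1.0, 0.0,
          0.0, 0.0, 1.0]
rot = np.asarray(matrix).reshape(3, 3)
xAxis, yAxis, zAxis = rot[:, 0], rot[:, 1], rot[:, 2]  # columns = local axes in camera coordinates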

However, I do not know how to draw the 3D coordinate axes from the rotation matrix using OpenCV. Could you please give me any advice on solving this problem?

Any help or suggestions would be greatly appreciated!

Hi @thanhlh

Here’s code for drawing joint orientations:

// Draws the local coordinate axes of each joint onto the image "show".
// "length" is the axis length in real-world units (mm), "width" is the line thickness.
void drawOrientations(DepthSensor::Ptr depthSensor,
					  const std::vector<Joint>& joints, cv::Mat& show,
					  float length, int width)
{
	if (!depthSensor || !depthSensor->getDepthFrame())
		return;

	// Scale factors from depth-map resolution to the output image resolution.
	float xScale = (float)show.cols / depthSensor->getDepthFrame()->getCols();
	float yScale = (float)show.rows / depthSensor->getDepthFrame()->getRows();

	for (size_t j = 0; j < joints.size(); ++j)
	{
		// Skip unused and low-confidence joints.
		if (joints[j].type == JOINT_NONE)
			continue;
		if (joints[j].confidence < 0.15f)
			continue;
		Orientation orient = joints[j].orient;

		// The 3x3 orientation matrix is stored row-major; its columns are
		// the joint's local x, y, z axes in camera coordinates.
		cv::Point3f position(joints[j].real.x, joints[j].real.y, joints[j].real.z);
		cv::Point3f x(orient.matrix[0], orient.matrix[3], orient.matrix[6]);
		cv::Point3f y(orient.matrix[1], orient.matrix[4], orient.matrix[7]);
		cv::Point3f z(orient.matrix[2], orient.matrix[5], orient.matrix[8]);

		// End points of the three axes in real-world coordinates.
		cv::Point3f positionX = position + length * x;
		cv::Point3f positionY = position + length * y;
		cv::Point3f positionZ = position + length * z;

		// Project the joint position and the three axis tips into depth-map coordinates.
		Vector3 proj = depthSensor->convertRealToProjCoords(position.x, position.y, position.z);
		Vector3 projX = depthSensor->convertRealToProjCoords(positionX.x, positionX.y, positionX.z);
		Vector3 projY = depthSensor->convertRealToProjCoords(positionY.x, positionY.y, positionY.z);
		Vector3 projZ = depthSensor->convertRealToProjCoords(positionZ.x, positionZ.y, positionZ.z);

		// Draw x in red, y in green, z in blue.
		cv::line(show, cv::Point(proj.x * xScale, proj.y * yScale),
				 cv::Point(projX.x * xScale, projX.y * yScale), CV_RGB(255, 0, 0), width);
		cv::line(show, cv::Point(proj.x * xScale, proj.y * yScale),
				 cv::Point(projY.x * xScale, projY.y * yScale), CV_RGB(0, 255, 0), width);
		cv::line(show, cv::Point(proj.x * xScale, proj.y * yScale),
				 cv::Point(projZ.x * xScale, projZ.y * yScale), CV_RGB(0, 0, 255), width);
	}
}

Hi @a.bragin,

Thank you so much for your code. It is really helpful.

Unfortunately, I must use PyNuitrack per the requirements of my project, so it would be greatly appreciated if you could show me the Python code instead. (In PyNuitrack, I cannot find such a function as “convertRealToProjCoords”.)

By the way, could you please clarify the parameters “show” and “length” in the function?

Once again, I really appreciate all your kind help.

Hi @thanhlh

Unfortunately, PyNuitrack doesn’t provide convertRealToProjCoords yet, but you can write your own conversion function like this:

from math import tan

def convertRealToProjCoord(x, y, z, width, height, hfov):
	# Principal point at the image center.
	cX = width / 2
	cY = height / 2
	# Focal length in pixels derived from the horizontal field of view;
	# assume square pixels, so fY == fX.
	fX = cX / tan(hfov / 2)
	fY = fX

	# Standard pinhole projection (y is flipped: real y points up,
	# image y points down). Depth z passes through unchanged.
	projX = cX + fX * x / z
	projY = cY - fY * y / z

	return projX, projY, z

Where:
width, height - resolution of the depth image provided by DepthSensor
hfov - horizontal field of view of your depth sensor, in radians
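
For example, with a 640×480 depth map and hfov = 1.0 rad (both values just illustrative), a point 100 mm to the right of the optical axis at 1000 mm depth projects to roughly pixel (378.6, 240):

x, y, z = convertRealToProjCoord(100.0, 0.0, 1000.0, 640, 480, 1.0)
print(x, y, z)  # ~378.6, 240.0, 1000.0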

Regarding “show” and “length”:
show - the image where you want to draw the joint orientations
length - just the length of the axes; you can experiment with this variable to choose an axis length that suits your needs
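
Putting it together, a Python equivalent of drawOrientations could look like the sketch below, reusing convertRealToProjCoord from above. Note that the joint field names (real, orientation, confidence) and the flat row-major orientation matrix are assumptions carried over from the C++ API; adjust them to whatever your PyNuitrack version actually exposes.

import cv2
import numpy as np

def drawOrientationsPy(show, joints, width, height, hfov, length=100.0, thickness=2):
	# Scale from depth-map resolution to the output image resolution.
	xScale = show.shape[1] / width
	yScale = show.shape[0] / height
	for joint in joints:
		# Skip low-confidence joints, as in the C++ version.
		if joint.confidence < 0.15:
			continue
		pos = np.asarray(joint.real, dtype=float)  # (x, y, z) in mm
		# Assumed: flat row-major 9-element orientation matrix.
		rot = np.asarray(joint.orientation, dtype=float).reshape(3, 3)
		origin = convertRealToProjCoord(pos[0], pos[1], pos[2], width, height, hfov)
		# Columns of the rotation matrix are the local x, y, z axes.
		# OpenCV uses BGR, so x is drawn red, y green, z blue.
		colors = [(0, 0, 255), (0, 255, 0), (255, 0, 0)]
		for axis, color in zip(rot.T, colors):
			tip = pos + length * axis
			proj = convertRealToProjCoord(tip[0], tip[1], tip[2], width, height, hfov)
			cv2.line(show,
					 (int(origin[0] * xScale), int(origin[1] * yScale)),
					 (int(proj[0] * xScale), int(proj[1] * yScale)),
					 color, thickness)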


Thanks heaps, @a.bragin.

You’re welcome, @thanhlh. Do you have any other questions I can help you with?

Your solution is very clear and helpful. I got the same results as the Nuitrack demo application and have no other questions. I really appreciate all your kind help, @a.bragin.