I would like to create a virtual touch layer that has a depth distance from the camera, plus a width and height. The corners should be calibrated by having the user hold a finger in space for a few seconds at each corner and then confirm the selection. For that, I need to retrieve the x, y, and z coordinates of each point in space, derive the width and height of the virtual screen, give the touch plane a certain depth, and map the virtual layer onto the actual computer screen to achieve a touchless application. I'd like some guidance on how to approach this code-wise.
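Here is a rough sketch of how I imagine the corner calibration working, independent of any particular SDK. The function get_fingertip_xyz() is just a placeholder for whatever tracking call actually returns the fingertip position in camera space; the hold time and stability threshold are arbitrary values I'd tune later:

```python
"""Corner calibration sketch: the user holds a fingertip at each corner of the
virtual layer; a corner is accepted once the 3D point has stayed nearly still
for a few seconds, then the samples are averaged."""
import time
import numpy as np

HOLD_SECONDS = 3.0      # how long the finger must stay still
STABILITY_MM = 20.0     # max wobble (mm) allowed while holding

def get_fingertip_xyz():
    """Hypothetical stand-in: return the fingertip position as (x, y, z) in
    millimetres, camera coordinates, or None if no hand is tracked."""
    raise NotImplementedError("replace with your depth/hand-tracking source")

def capture_corner(name):
    """Block until the fingertip has been stable for HOLD_SECONDS, then return
    the averaged 3D position of that corner."""
    print(f"Hold your finger at the {name} corner...")
    samples, start = [], None
    while True:
        p = get_fingertip_xyz()
        if p is None:
            samples, start = [], None
            continue
        p = np.asarray(p, dtype=float)
        if samples and np.linalg.norm(p - samples[0]) > STABILITY_MM:
            samples, start = [], None        # finger moved too much: restart
        if not samples:
            start = time.time()
        samples.append(p)
        if time.time() - start >= HOLD_SECONDS:
            corner = np.mean(samples, axis=0)
            print(f"{name} corner captured at {corner}")
            return corner
        time.sleep(0.03)                     # ~30 Hz polling

if __name__ == "__main__":
    corners = {name: capture_corner(name)
               for name in ("top-left", "top-right", "bottom-right", "bottom-left")}
```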
Since your script is more AR-based, you can study the Nuitrack AR tutorials (interaction with virtual objects) and the PointCloud example (how to get native depth data) in the Nuitrack SDK for Unity.
If your project doesn't use Unity, that's fine; the code from the examples can be reused with minor changes.
Could you describe your idea and usage scenario in more detail for a better understanding?
I have a holographic view of a screen, and I want to make it seem interactive by adjusting/calibrating a virtual layer at the same depth as the hologram.
The camera is mounted ~80 cm away from the hologram. This is outside of Unity, since I just want to register any object that passes through the virtual layer as a left mouse click on the system.
So basically: run the Windows PC, attach a hologram to the monitor, and then be able to click on the hologram as if it were a normal touch screen.
I'll convert the object's position on the virtual layer into ratio coordinates, scale them by the screen's width and height, and register a mouse click on the system whenever the object crosses a certain depth into the layer.
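A minimal sketch of the mapping and click logic I have in mind, assuming the corners dict produced by the calibration sketch above, pyautogui for the OS-level mouse events, and the same hypothetical get_fingertip_xyz() tracking placeholder:

```python
"""Mapping sketch: convert a tracked 3D point into screen pixels using the four
calibrated corners, and synthesise a left click when the point pushes past the
layer's depth threshold."""
import time
import numpy as np
import pyautogui

TOUCH_DEPTH_MM = 30.0   # how far past the layer counts as a "press"

def get_fingertip_xyz():
    """Same hypothetical tracking stand-in as in the calibration sketch."""
    raise NotImplementedError

def build_plane(corners):
    """Return origin, width axis, height axis and unit normal of the layer."""
    o = corners["top-left"]
    u = corners["top-right"] - o          # width direction
    v = corners["bottom-left"] - o        # height direction
    n = np.cross(u, v)
    return o, u, v, n / np.linalg.norm(n)

def point_to_screen(p, o, u, v, n):
    """Project a camera-space point onto the layer. Returns pixel coordinates
    (sx, sy), the signed distance from the layer along its normal, and whether
    the point lies inside the layer's bounds."""
    d = p - o
    rx = np.dot(d, u) / np.dot(u, u)      # 0..1 across the width
    ry = np.dot(d, v) / np.dot(v, v)      # 0..1 down the height
    depth = np.dot(d, n)
    w, h = pyautogui.size()
    return rx * w, ry * h, depth, (0.0 <= rx <= 1.0 and 0.0 <= ry <= 1.0)

def run(corners):
    o, u, v, n = build_plane(corners)
    pressed = False
    while True:
        p = get_fingertip_xyz()
        if p is not None:
            sx, sy, depth, inside = point_to_screen(np.asarray(p, float), o, u, v, n)
            if inside:
                pyautogui.moveTo(sx, sy)
                # Depending on corner order the normal may point away from the
                # screen; flip the comparison if presses never register.
                if depth > TOUCH_DEPTH_MM and not pressed:
                    pyautogui.mouseDown()   # finger pushed through the layer
                    pressed = True
                elif depth <= TOUCH_DEPTH_MM and pressed:
                    pyautogui.mouseUp()     # finger pulled back out
                    pressed = False
        time.sleep(0.01)
```

Using mouseDown/mouseUp instead of a single click() would also let me get drag gestures almost for free, but a plain pyautogui.click() at the crossing moment would be enough for a first test.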