I've seen examples of using Emgu CV on regular RGB images captured from the Kinect, but then you might as well use a webcam. I'm interested in getting a point cloud which I can later use for triangulation.
I've tried 'manually' converting a DepthFrame to a point-cloud file: the depth frame gives you an X, a Y and a depth value for each pixel, which I wrote out as XYZ points in a .ply file. The results are garbled and useless.
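For reference, the kind of conversion described above only produces a metrically sensible cloud if the pixel coordinates are back-projected through the depth camera's intrinsics; writing out raw (x, y, depth) triples gives a distorted result. A minimal sketch of the back-projection, with assumed (not official) Kinect v1 depth intrinsics and a hypothetical helper name:

```csharp
using System.Collections.Generic;

public static class DepthToCloud
{
    // Back-project a depth frame (millimetres, row-major) to XYZ points in metres.
    // fx, fy, cx, cy are assumed Kinect-v1-like intrinsics, purely illustrative.
    public static List<float[]> BackProject(ushort[] depthMm, int width, int height)
    {
        const float fx = 585.6f, fy = 585.6f, cx = 316.0f, cy = 247.6f;
        var points = new List<float[]>();
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                ushort d = depthMm[y * width + x];
                if (d == 0) continue;                  // 0 means "no reading"
                float z = d / 1000f;                   // millimetres -> metres
                points.Add(new[] { (x - cx) * z / fx,  // pinhole model:
                                   (y - cy) * z / fy,  // X = (u - cx) * Z / fx, etc.
                                   z });
            }
        }
        return points;
    }
}
```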
Now, I noticed that Emgu CV has a method that maps a point cloud into an Emgu CV Mat object. I just don't know what the syntax is supposed to look like, as I can't find any examples, either from people asking about it or from the Emgu CV developers themselves.
Here's what I tried; the Kinect doesn't even seem to turn on, and success always returns false:
public void test()
{
    KinectCapture kc = new KinectCapture(KinectCapture.DeviceType.Kinect, KinectCapture.ImageGeneratorOutputMode.Vga30Hz);
    Mat m = new Mat();
    bool success = kc.RetrievePointCloudMap(m);
}
If you're like me, you just want a point cloud that you can turn into a mesh, and you're not interested in doing all the fancy math yourself. Luckily, Kinect Fusion does this for you, provided you write about 500 lines of code. Microsoft's example had a lot of failure checking, events, booleans being set, the image being rendered in different ways, combining multiple point clouds, and so on. Lots of code that was unnecessary (for me); their example was about 3,800 lines. I've cut it down, and you can probably cut it down even further.
I had some trouble pasting it here since it's pretty long, but the link is here.
Try playing around with the values, and take a look at Microsoft's Kinect Fusion sample project; my code is 'stolen' almost entirely (99%) from there. This will open your sensor, take a depth+color image and turn it into a mesh. How you save it is up to you. I made a small .ply-outputter class, which they also provided in their example.
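In case it helps, a .ply outputter for a vertex-only cloud (no faces) can be very small. This is a sketch under my own assumptions: the class name `PlyWriter` and the `float[]{x, y, z}` vertex format are illustrative choices, not taken from Microsoft's sample.

```csharp
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using System.Text;

public static class PlyWriter
{
    // Build an ASCII .ply document for a vertex-only point cloud (no faces).
    public static string ToPly(IReadOnlyList<float[]> vertices)
    {
        var sb = new StringBuilder();
        sb.Append("ply\n")
          .Append("format ascii 1.0\n")
          .Append("element vertex ").Append(vertices.Count).Append('\n')
          .Append("property float x\nproperty float y\nproperty float z\n")
          .Append("end_header\n");
        foreach (var v in vertices)
            sb.Append(string.Format(CultureInfo.InvariantCulture,
                                    "{0} {1} {2}\n", v[0], v[1], v[2]));
        return sb.ToString();
    }

    public static void Save(string path, IReadOnlyList<float[]> vertices)
        => File.WriteAllText(path, ToPly(vertices));
}
```

A mesh with faces would additionally need an `element face` declaration in the header plus one vertex-index line per triangle.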
Here is the result I got. In this example I didn't show the faces between the vertices.