I've been using Kinect and OpenCV (in C++). I can get both the RGB and the depth image. With the RGB image I can "play" as usual, blurring it, running Canny (after converting it to greyscale), etc., but I can't do the same with the depth image. Each time I try to do something with the depth image I get exceptions.
I have the following code to get the depth image:
CvMat* depthMetersMat = cvCreateMat(480, 640, CV_16UC1);
CvMat* imageMetersMat = cvCreateMat(480, 640, CV_16UC1);
IplImage* kinectDepthImage = cvCreateImage(cvSize(640, 480), IPL_DEPTH_16U, 1);
const XnDepthPixel* pDepthMap = depth.GetDepthMap();
// copy the OpenNI depth map (millimetres) into the CvMat, scaled by 10
for (int y = 0; y < XN_VGA_Y_RES; y++) {
    for (int x = 0; x < XN_VGA_X_RES; x++) {
        depthMetersMat->data.s[y * XN_VGA_X_RES + x] = 10 * pDepthMap[y * XN_VGA_X_RES + x];
    }
}
cvGetImage(depthMetersMat, kinectDepthImage);
The problem is that I can't do anything with kinectDepthImage. I tried to convert it to greyscale and then run Canny on it, but I don't know how to do the conversion.
Basically I would like to apply Canny and a Laplacian to the depth image.
The problem was that the output from cvGetImage has 16-bit depth, while Canny requires an 8-bit image, so I need to convert it to 8 bits, something like:
cvConvertScale(depthMetersMat, kinectDepthImage8, 1.0/256.0, 0);
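Putting it together, a minimal sketch (the names kinectDepthImage8, edges and lap, and the Canny thresholds, are placeholders of my own, not from the original code):

IplImage* kinectDepthImage8 = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 1);   // 8-bit target
IplImage* edges             = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 1);   // Canny output
IplImage* lap               = cvCreateImage(cvSize(640, 480), IPL_DEPTH_16S, 1);  // Laplacian output (16-bit signed to avoid clipping)
cvConvertScale(depthMetersMat, kinectDepthImage8, 1.0/256.0, 0);  // 16-bit depth -> 8-bit greyscale
cvCanny(kinectDepthImage8, edges, 50, 150, 3);                    // thresholds chosen arbitrarily here
cvLaplace(kinectDepthImage8, lap, 3);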
The new OpenCV API encourages using Mat instead of the old image types. The current code for using the OpenNI depth meta data in OpenCV would be:
Mat depthMat16UC1(height, width, CV_16UC1, (void*) g_DepthMD.Data());  // note: rows (height) first, then cols (width)
Mat depthMat8UC1;
depthMat16UC1.convertTo(depthMat8UC1, CV_8U, 1.0/256.0);
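An end-to-end sketch with the C++ API (the Canny thresholds and the variable names other than g_DepthMD are illustrative, not from the original answer):

#include <opencv2/opencv.hpp>
// g_DepthMD is the xn::DepthMetaData already filled from the OpenNI depth generator
cv::Mat depthMat16UC1(XN_VGA_Y_RES, XN_VGA_X_RES, CV_16UC1, (void*) g_DepthMD.Data());
cv::Mat depthMat8UC1;
depthMat16UC1.convertTo(depthMat8UC1, CV_8U, 1.0/256.0);  // scale 16-bit depth down to 8 bits

cv::Mat edges, lap;
cv::Canny(depthMat8UC1, edges, 50, 150);                  // edges of the depth image
cv::Laplacian(depthMat8UC1, lap, CV_16S, 3);              // 16-bit signed output avoids clipping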
What is sizeof(XnDepthPixel)?
Try creating a header with cvCreateImageHeader and then calling cvSetData on it with the XnDepthPixel map.
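Something along these lines (a rough sketch; sizeof(XnDepthPixel) is 2, since it is a 16-bit unsigned value, and depth is the xn::DepthGenerator from the question):

IplImage* depthHeader = cvCreateImageHeader(cvSize(XN_VGA_X_RES, XN_VGA_Y_RES), IPL_DEPTH_16U, 1);
cvSetData(depthHeader, (void*) depth.GetDepthMap(), XN_VGA_X_RES * sizeof(XnDepthPixel));  // step = bytes per row
// depthHeader now wraps the OpenNI buffer without copying; convert it to 8 bits before running Canny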
The code at the link below could give you valuable information. NOTE: It's not my code, but it may produce the result you require. Comment out the line //cvCvtColor(rgbimg,rgbimg,CV_RGB2BGR);
http://pastebin.com/e5kHzs84
Regards, Nagaraju
If you are using OpenNI, have you created a context and production nodes, and started generating data? That's probably your problem.
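For reference, a minimal OpenNI 1.x initialization sketch (error checking omitted for brevity):

#include <XnCppWrapper.h>
xn::Context context;
xn::DepthGenerator depth;
context.Init();                    // create the OpenNI context
depth.Create(context);             // create the depth production node
context.StartGeneratingAll();      // start streaming data
context.WaitOneUpdateAll(depth);   // block until a new depth frame arrives
const XnDepthPixel* pDepthMap = depth.GetDepthMap();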