
What is the depth image received from Kinect

https://www.devze.com 2023-03-10 23:31 (source: web)

When I ran this Matlab code to get the depth image, the result I got is a 480x640 matrix. The minimum element value is 0 and the maximum element value is 2711. What does 2711 mean? Is it the distance from the camera to the farthest part of the image? And what is the unit of 2711: meters, feet, or something else?


I don't know exactly what the Matlab code does to the depth, but it probably does some processing on it, because the raw depth sent by the Kinect is an 11-bit value, so it shouldn't be higher than 2047. Try to find out what the code does, or get access to the raw data sent by the Kinect.

The data sent by the Kinect is not a proper distance (it's a "disparity"), so you have to do some math to convert it to useful units.

From the OpenKinect project wiki (which contains useful information about the Kinect):

From their data, a basic first order approximation for converting the raw 11-bit disparity value to a depth value in centimeters is: 100/(-0.00307 * rawDisparity + 3.33). This approximation is approximately 10 cm off at 4 m away, and less than 2 cm off within 2.5 m.
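This first-order conversion can be sketched as follows (a minimal Python illustration of the formula quoted above; the function name and sample disparity value are my own):

```python
def disparity_to_cm(raw_disparity):
    """First-order approximation: raw 11-bit Kinect disparity -> depth in centimeters.

    Only meaningful while the denominator stays positive (roughly raw_disparity < 1084);
    the linear model diverges as the denominator approaches zero.
    """
    return 100.0 / (-0.00307 * raw_disparity + 3.33)

# Example: a raw disparity of 1000 maps to roughly 385 cm.
print(disparity_to_cm(1000))
```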

A better approximation is given by Stéphane Magnenat in this post: distance = 0.1236 * tan(rawDisparity / 2842.5 + 1.1863) in meters. Adding a final offset term of -0.037 centers the original ROS data. The tan approximation has a sum squared difference of .33 cm while the 1/x approximation is about 1.7 cm.
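The tan-based approximation can be sketched the same way (again in Python; the -0.037 m offset is the optional centering term mentioned above, exposed here as a parameter of my own choosing):

```python
import math

def disparity_to_m(raw_disparity, offset=-0.037):
    """Stephane Magnenat's approximation: raw Kinect disparity -> depth in meters."""
    return 0.1236 * math.tan(raw_disparity / 2842.5 + 1.1863) + offset

# Example: a raw disparity of 1000 maps to roughly 3.74 m with the offset applied.
print(disparity_to_m(1000))
```

Note that for a raw disparity of 1000 this lands within about 7 cm of the 1/x approximation above, consistent with the error figures quoted.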

Once you have the distance using the measurement above, a good approximation for converting (i, j, z) to (x,y,z) is:

x = (i - w / 2) * (z + minDistance) * scaleFactor * (w / h)
y = (j - h / 2) * (z + minDistance) * scaleFactor
z = z

where

minDistance = -10
scaleFactor = 0.0021

These values were found by hand.
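The projection above can be put into code like this (a sketch assuming a 640x480 depth image and a z value already converted to a distance; the function name and defaults are illustrative):

```python
def depth_to_xyz(i, j, z, w=640, h=480, min_distance=-10, scale_factor=0.0021):
    """Convert pixel coordinates (i, j) plus depth z into real-world (x, y, z),
    using the hand-tuned constants quoted above."""
    x = (i - w / 2) * (z + min_distance) * scale_factor * (w / h)
    y = (j - h / 2) * (z + min_distance) * scale_factor
    return x, y, z

# A pixel at the image center projects onto the optical axis (x = y = 0).
print(depth_to_xyz(320, 240, 200))
```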

You can find more details about the Kinect's depth camera and its calibration on the ROS website (and many others!).


If you map the data to a meter scale, it compresses the depth image slightly. I found this was an issue when I was trying to look for planes in the mapped data.
