I am trying to teach my robot to walk in a confined space that it doesn't know. The robot has some sensors. It must go to a given point in the space and then find its way back to the start position.
This task is very similar to a robot exploration algorithm, but because of the physical limitations of its legs, as it walks through the space it comes to believe it is standing at one position (x1, y1) while in reality it is standing at another position (x2, y2).
So it can't return to the start position in the real world. I want to use an algorithm that compares the visible picture at position (x1, y1) with the visible picture at position (x2, y2) to correct the movement error, but I don't know how to implement this idea.
Before I dive into an attempt to solve this problem, could anybody give me some hints on how to implement this algorithm?
How is the image represented? Is it literally just a raw bitmap input from a camera sensor? If so, you may be in trouble, because that problem is very difficult. The name for this particular problem is simultaneous localization and mapping (or SLAM for short):
http://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping
Solving this is pretty difficult in general, and without knowing more about what kind of data and what sort of processing constraints you have, it is impossible to answer your question precisely.
There are two problems:
- You want to build a map of your environment, but to do so you need your exact position in that environment to incorporate what your sensors give you.
- You want to know exactly where you are (your sensors only give approximate positions such as (x2, y2)), but to do so you need a map of your environment!
Notice that this is a chicken-and-egg problem.
Fortunately for you, there is SLAM (simultaneous localization and mapping)!
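To make the correction idea concrete, here is a minimal, hypothetical sketch of the simplest possible version: the robot repeatedly observes one landmark whose map position is known, computes where it *would* have to be standing to produce that observation, and nudges its believed pose toward that implied position. All names here (`LANDMARK`, `observe`, `correct`, the `gain` parameter) are illustrative, not a real API, and a real SLAM system would use an EKF or particle filter over many uncertain landmarks instead of this crude gain-weighted update:

```python
import math

LANDMARK = (5.0, 5.0)        # known landmark position on the map (assumed given)

def observe(true_pose):
    """Simulated sensor: exact range and bearing measured from the TRUE pose."""
    dx, dy = LANDMARK[0] - true_pose[0], LANDMARK[1] - true_pose[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)

def correct(believed_pose, rng, bearing, gain=0.5):
    """Move the believed pose partway toward the position implied by the
    observation. The gain damps the update so noisy measurements (not
    simulated here) would not throw the estimate around."""
    # The position that would produce this observation exactly:
    implied = (LANDMARK[0] - rng * math.cos(bearing),
               LANDMARK[1] - rng * math.sin(bearing))
    return tuple(b + gain * (i - b) for b, i in zip(believed_pose, implied))

true_pose = (2.0, 1.0)       # where the robot actually is: (x2, y2)
believed = (2.4, 0.6)        # where dead reckoning says it is: (x1, y1)

for _ in range(5):           # repeated observations shrink the drift error
    believed = correct(believed, *observe(true_pose))

err = math.hypot(believed[0] - true_pose[0], believed[1] - true_pose[1])
print(round(err, 3))         # drift error after corrections, close to zero
```

With noise-free measurements each step halves the error, so five corrections cut the initial drift by a factor of 32; the real difficulty in SLAM is that the landmark positions themselves are uncertain, which is exactly the chicken-and-egg problem described above.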