Vision-Based Localisation
My PhD research is on a vision-based global localisation system for mobile robots. For those unfamiliar with academic buzzwords, this just describes a system that can estimate the pose (position and orientation) of a robot in a known environment using vision (as opposed to the radio signals used by your everyday GPS device).
The system uses an off-the-shelf webcam with a special panoramic mirror that provides a full 360-degree field of view. A rich 3D map (similar to a first-person shooter game) provides the system with a working knowledge of the environment. The system works by capturing an image from the webcam and determining the most likely place it was taken from, relative to the 3D map. This is something we as humans do on a daily basis with ease, but it is a rather challenging task for a robot.
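In its simplest form, the matching step is a lookup: given the signature of the current webcam image, find the stored pose whose signature scores best. A minimal sketch in Python (all names and the similarity measure are hypothetical, not the actual system's):

```python
import math

def similarity(sig_a, sig_b):
    """Score a match between two scene signatures (higher = better).

    A Gaussian of the squared distance is one common choice; the real
    system may use a different measure.
    """
    d = sum((a - b) ** 2 for a, b in zip(sig_a, sig_b))
    return math.exp(-d)

def best_pose(observed_sig, signature_db):
    """Return the database pose whose signature best matches the observation.

    signature_db maps (x, y, theta) poses to precomputed signatures
    extracted from views rendered out of the 3D model.
    """
    return max(signature_db,
               key=lambda pose: similarity(signature_db[pose], observed_sig))
```

A plain nearest-match lookup like this is brittle on its own (similar-looking places confuse it), which is why the full system feeds the scores through a particle filter, as described further down.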
Overall, the system achieves an accuracy of within 1 metre for position and 10 degrees for orientation, and can operate where GPS is ineffective, e.g. indoors. A moderate-sized environment, measuring roughly 50 x 50 m, was used for testing.
Below are some images and videos that have come about from my research. For some light technical details, scroll down to the bottom of the page.
System in action!
Other related videos
Robots used during my PhD.
The ER1 robot was the first robot I used. It is a commercial off-the-shelf robot produced by Evolution Robotics. This robot was terrible for outdoor usage due to its small wheelbase and short battery life.
The next robot was a hacked-up wheelchair, which turned out to be a great platform for outdoors. It had plenty of power and battery life to run external equipment.
The last robot is, err, well, not really a robot at all. It's just a walking frame with the localisation system bolted onto it.
How the system works (in a bit more detail)
An overview of the process of building the localisation system is presented in Figure 1. The first step, and the most time-consuming, was to build a 3D model of the environment. This was done using a Riegl Z420i laser range scanner, shown in Figure 2. High-resolution 3D scans were taken at various locations in the environment and stitched together to build a complete model. Using the 3D model, 2D images were then generated (from the point of view of the webcam) every 0.5 metres and 10 degrees, and a unique scene signature was extracted from each image. These signatures were stored in a database for later use.

To localise the robot, the same type of signature is extracted from the live webcam image and fed through a particle filter. The particle filter is a probabilistic algorithm that estimates the pose (position and orientation) of a robot given sensory and odometry information. In this case, the particle filter uses the signature from the webcam and odometry information from the robot to assign a match probability against the signatures in the database, and outputs the estimated pose of the robot (x, y, θ).
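One cycle of the particle filter described above (predict from odometry, weight by signature match, resample, estimate) can be sketched as follows. This is a generic textbook-style particle filter, not the actual thesis code; the signature extractor, similarity measure, and noise levels are all stand-in assumptions:

```python
import math
import random

def fake_signature(x, y, theta):
    """Stand-in for the real scene-signature extractor (assumption)."""
    return [math.sin(x) + math.cos(theta), math.cos(y) - math.sin(theta)]

def similarity(sig_a, sig_b):
    """Score a match between two signatures (higher = better)."""
    d = sum((a - b) ** 2 for a, b in zip(sig_a, sig_b))
    return math.exp(-d)

def particle_filter_step(particles, odometry, observed_sig):
    """One predict/update/resample cycle over (x, y, theta, weight) tuples."""
    dx, dy, dtheta = odometry
    moved = []
    for (x, y, th, _) in particles:
        # Predict: apply the odometry estimate, plus noise to model its error.
        x += dx + random.gauss(0, 0.05)
        y += dy + random.gauss(0, 0.05)
        th += dtheta + random.gauss(0, 0.02)
        # Update: weight by how well this pose's signature matches the camera's.
        w = similarity(fake_signature(x, y, th), observed_sig)
        moved.append((x, y, th, w))
    # Resample: draw particles proportionally to their weights.
    weights = [p[3] for p in moved]
    return random.choices(moved, weights=weights, k=len(moved))

def estimate_pose(particles):
    """Weighted mean pose (x, y, theta) of the particle set."""
    total = sum(p[3] for p in particles)
    return (sum(p[0] * p[3] for p in particles) / total,
            sum(p[1] * p[3] for p in particles) / total,
            sum(p[2] * p[3] for p in particles) / total)
```

In the real system the particle weights come from comparing the live signature against the precomputed database entries near each particle, rather than from a function evaluated on the fly, but the cycle is the same.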