Localisation
Our approach to localisation combines two parts.
The first is a distributed multi-modal Kalman filter, which is used for tracking the belief state of the robot's position on the field. Observations of landmarks, such as the centre circle, and of the ball are processed by the Kalman filter to track the robot's pose and the ball's position.
The second is an approach based on Iterative Closest Point (ICP) for matching observed field lines to the field model. This finds the locally best robot pose that aligns the observed field lines in the current frame with the field model. This update is then incorporated into the Kalman filter to refine our pose estimate.
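To illustrate the underlying idea (a minimal sketch, not the rUNSWift ICP module), a single 2D point-to-point ICP iteration can be written as follows; the brute-force nearest-point lookup stands in for the real field-line model, and Eigen is assumed to be available.

```cpp
// Minimal sketch of one 2D point-to-point ICP iteration (illustrative only).
#include <Eigen/Dense>
#include <vector>

// Find the closest model point to p (brute force; assumes model is non-empty).
static Eigen::Vector2d nearestModelPoint(const Eigen::Vector2d& p,
                                         const std::vector<Eigen::Vector2d>& model) {
    Eigen::Vector2d best = model.front();
    double bestDist = (model.front() - p).squaredNorm();
    for (const auto& m : model) {
        double d = (m - p).squaredNorm();
        if (d < bestDist) { bestDist = d; best = m; }
    }
    return best;
}

// One ICP iteration: pair each observed point with its closest model point,
// then solve for the rigid transform (R, t) that best aligns the pairs.
void icpIteration(std::vector<Eigen::Vector2d>& observed,
                  const std::vector<Eigen::Vector2d>& model,
                  Eigen::Matrix2d& R, Eigen::Vector2d& t) {
    std::vector<Eigen::Vector2d> matched;
    matched.reserve(observed.size());
    for (const auto& p : observed)
        matched.push_back(nearestModelPoint(p, model));

    // Centroids of both point sets.
    Eigen::Vector2d muObs = Eigen::Vector2d::Zero(), muMod = Eigen::Vector2d::Zero();
    for (size_t i = 0; i < observed.size(); ++i) { muObs += observed[i]; muMod += matched[i]; }
    muObs /= static_cast<double>(observed.size());
    muMod /= static_cast<double>(matched.size());

    // Cross-covariance and SVD give the optimal rotation (Kabsch algorithm).
    Eigen::Matrix2d W = Eigen::Matrix2d::Zero();
    for (size_t i = 0; i < observed.size(); ++i)
        W += (observed[i] - muObs) * (matched[i] - muMod).transpose();
    Eigen::JacobiSVD<Eigen::Matrix2d> svd(W, Eigen::ComputeFullU | Eigen::ComputeFullV);
    R = svd.matrixV() * svd.matrixU().transpose();
    if (R.determinant() < 0) {                       // guard against reflections
        Eigen::Matrix2d V = svd.matrixV();
        V.col(1) *= -1.0;
        R = V * svd.matrixU().transpose();
    }
    t = muMod - R * muObs;

    // Apply the transform so the next iteration starts from the improved pose.
    // In the real system the accumulated transform becomes a pose observation
    // that is fed into the Kalman filter.
    for (auto& p : observed) p = R * p + t;
}
```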
The overall Kalman filter algorithm used is heavily based on the 2006 rUNSWift localisation system, and a detailed description of this can be found here. The ICP field-line matching algorithm was developed in 2012 and a detailed report can be found here.
The baseline algorithm we use is a multi-modal extended Kalman filter. For now, let us say that we track a 7-dimensional state: the x, y, theta of the robot, and the x, y position and x', y' velocity of the ball. When the robot observes a landmark, this gives us a distance and heading to the landmark. We use a predetermined noise model to generate the variance of each observation, typically as a function of distance. We then perform the normal extended Kalman filter update on our state mean vector and covariance matrix.
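To make the update concrete, the following is a minimal sketch (not the actual rUNSWift code) of a single range/bearing EKF update on that 7-dimensional state, using Eigen; the constants in observationNoise are illustrative assumptions, not the real noise model.

```cpp
#include <Eigen/Dense>
#include <cmath>

using State = Eigen::Matrix<double, 7, 1>;   // x, y, theta, ballX, ballY, ballVx, ballVy
using Cov   = Eigen::Matrix<double, 7, 7>;

// Hypothetical distance-dependent noise model: variance grows with range.
Eigen::Matrix2d observationNoise(double distance) {
    Eigen::Matrix2d R = Eigen::Matrix2d::Zero();
    R(0, 0) = 0.05 * distance * distance + 100.0;   // range variance (illustrative)
    R(1, 1) = 0.0004 * distance + 0.002;            // heading variance (illustrative)
    return R;
}

// EKF update for one landmark at known field position (lx, ly), observed
// as (distance, heading) relative to the robot.
void landmarkUpdate(State& mean, Cov& cov,
                    double lx, double ly,
                    double obsDistance, double obsHeading) {
    const double dx = lx - mean(0);
    const double dy = ly - mean(1);
    const double q  = dx * dx + dy * dy;
    const double d  = std::sqrt(q);

    // Predicted observation h(x) = (range, bearing).
    Eigen::Vector2d predicted(d, std::atan2(dy, dx) - mean(2));

    // Jacobian of h w.r.t. the 7-dim state (only the robot pose columns are non-zero).
    Eigen::Matrix<double, 2, 7> H = Eigen::Matrix<double, 2, 7>::Zero();
    H(0, 0) = -dx / d;  H(0, 1) = -dy / d;
    H(1, 0) =  dy / q;  H(1, 1) = -dx / q;  H(1, 2) = -1.0;

    Eigen::Vector2d innovation(obsDistance - predicted(0), obsHeading - predicted(1));
    innovation(1) = std::atan2(std::sin(innovation(1)), std::cos(innovation(1)));  // normalise angle

    const Eigen::Matrix2d S = H * cov * H.transpose() + observationNoise(obsDistance);
    const Eigen::Matrix<double, 7, 2> K = cov * H.transpose() * S.inverse();

    mean += K * innovation;
    cov   = (Cov::Identity() - K * H) * cov;
}
```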
To better model the probability distribution function of the state estimate, we use a multi-modal filter. This means that we have a list of independent modes, each with a state mean vector and covariance matrix. Each mode is updated independently using the method described above. However, we also give each mode a weight variable. This is a value between 0 and 1, and indicates the confidence that that particular mode matches the true state of the robot. This weight is updated during the Kalman filter update step: it is scaled by how well the current observation matches that particular mode, taking into account the variance of both the observation and the mode.
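A rough sketch of this weight scaling, reusing the innovation and its covariance S from the EKF update above, might look like the following; the structure and names are assumptions for illustration, not the actual implementation.

```cpp
#include <Eigen/Dense>
#include <cmath>

// Likelihood of the innovation under a zero-mean Gaussian with covariance S.
// S already combines the observation noise and the mode's own uncertainty.
double innovationLikelihood(const Eigen::Vector2d& innovation, const Eigen::Matrix2d& S) {
    const double kTwoPi = 2.0 * 3.14159265358979323846;
    const double mahalanobis = innovation.dot(S.inverse() * innovation);
    const double norm = 1.0 / (kTwoPi * std::sqrt(S.determinant()));
    return norm * std::exp(-0.5 * mahalanobis);
}

// Inside the per-mode update:
//   mode.weight *= innovationLikelihood(innovation, S);
// After all modes are updated, the weights are re-normalised to sum to 1.
```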
We maintain a list of at most 8 modes. The distribution is re-normalised after each update step (the sum of weights must be 1). The natural progression is for the number of modes to grow, as many observations are multi-modal. We therefore also merge modes that have a similar mean and covariance.
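For illustration, a moment-matching merge of two similar modes and the weight re-normalisation might look like the following sketch; the Mode struct and its layout are assumptions rather than the real code.

```cpp
#include <Eigen/Dense>
#include <vector>

struct Mode {
    double weight;
    Eigen::VectorXd mean;
    Eigen::MatrixXd cov;
};

// Combine two modes into a single Gaussian with the same first two moments.
Mode mergeModes(const Mode& a, const Mode& b) {
    Mode m;
    m.weight = a.weight + b.weight;
    m.mean = (a.weight * a.mean + b.weight * b.mean) / m.weight;
    Eigen::VectorXd da = a.mean - m.mean, db = b.mean - m.mean;
    m.cov = (a.weight * (a.cov + da * da.transpose()) +
             b.weight * (b.cov + db * db.transpose())) / m.weight;
    return m;
}

// Re-normalise so the weights of all surviving modes sum to 1.
void normalise(std::vector<Mode>& modes) {
    double total = 0.0;
    for (const auto& m : modes) total += m.weight;
    for (auto& m : modes) m.weight /= total;
}
```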
One of the key features of our localisation system is the distributed state tracking. The aim of this feature is to have the robots combine their observations, via wireless communication, and track the full global state of the team. This is done in several steps:
- each robot tracks a 19-dimensional state in its primary multi-modal Kalman filter, corresponding to an x, y, theta for each of the 5 robots on the team, plus the x, y position and velocity of the ball
- each robot also has a "shared filter" with a 7-dimensional state, tracking the robot's x, y, theta and the ball position and velocity; this filter contains only a single mode
- when the robot makes an observation, it updates both its main 19-dim multi-modal filter and the shared filter
- approximately 5 times a second, each robot broadcasts its shared filter (mean and covariance matrix) to all of its teammates
- when a robot receives a shared broadcast from a teammate, it uses the received mean and covariance matrix as an observation, and updates its main 19-dim filter appropriately (taking into account the player index of the broadcaster)
- each time a robot broadcasts its shared filter, it resets it by setting the covariance to be very large; this is needed so as not to "double count" observations (a sketch of this broadcast/reset cycle follows this list)
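The sketch below shows roughly what the broadcast/reset cycle for the single-mode shared filter could look like; the packet layout, helper names, and the "very large" covariance constant are illustrative assumptions, not the actual rUNSWift code or network format.

```cpp
#include <Eigen/Dense>

struct SharedFilterPacket {
    int playerIndex;                       // which teammate this estimate belongs to
    Eigen::Matrix<double, 7, 1> mean;      // x, y, theta, ballX, ballY, ballVx, ballVy
    Eigen::Matrix<double, 7, 7> cov;
};

constexpr double LARGE_VARIANCE = 1.0e8;   // "forget" past observations after broadcasting

// Called roughly 5 times a second: package up the shared filter, then reset it
// by inflating the covariance so the same observations are not counted twice
// by teammates.
SharedFilterPacket broadcastAndReset(Eigen::Matrix<double, 7, 1>& sharedMean,
                                     Eigen::Matrix<double, 7, 7>& sharedCov,
                                     int myPlayerIndex) {
    SharedFilterPacket packet{myPlayerIndex, sharedMean, sharedCov};
    sharedCov = Eigen::Matrix<double, 7, 7>::Identity() * LARGE_VARIANCE;
    return packet;
}

// On receipt, the packet is treated as an observation of the 7 state variables
// belonging to that teammate (and the ball) inside the receiver's 19-dim filter.
```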
The main result of this distributed filter scheme is that the ball acts as a mobile landmark, allowing robots to localise their own pose from the ball. A typical scenario is one robot that knows its location very well and can see the ball, while a teammate can also see the ball but is not well localised. The distributed updates allow the second robot to localise off the ball and get a better fix on its own position.
Another potential outcome is that we can use robot observations (via visual robot detection) to localise off our teammates directly. This was implemented and tested very close to the competition, but was not turned on; we did not have enough time to tune it sufficiently for it to be worthwhile. In future, this is definitely a feature worth investigating and perfecting.
This year we essentially rewrote the entire localisation system, keeping only the ICP field-line matching module from the existing code. As a result, there is significant scope for improving the performance of the localisation simply by tweaking various parameters to find a better set of values. We did not have time to do much of this parameter tuning this year, and several aspects of the localisation suffered for it.
In the current system we only perform an ICP update on the top mode, due to the time complexity of the ICP matching code. This asymmetry between the top mode and the remaining modes leads to a number of problems that we have tried to work around in various ways. However, if the ICP matching speed could be improved enough that we could apply the update to all modes, the performance of the localisation would improve.
A potentially very useful improvement would be to treat our teammates as landmarks and to incorporate visual robot detection into the filter. We attempted to add this feature a week out from the competition, and it showed promising signs, but we did not have enough time to iron out the problems and it was disabled for the competition. Improving the visual robot detection performance would make this feature even more useful. As a related topic, it is worth investigating incorporating opponent robot positions into the filter. This has a potential double benefit: it would let us better filter the opponent robots' positions on the field (with all of the associated benefits for smarter behaviour), and it would also allow us to use opponent robots as mobile landmarks, improving the pose estimates of our own robots.
Another area to investigate is how to better deal with the non-Gaussian nature of the uncertainty of observations and pose estimation. The extended Kalman filter that we currently use handles this via linearisation. However, there may be better approaches. One is to use an unscented Kalman filter. Another is to use the distribution of modes to better model the uncertainty of observations. This was done to some extent in the 2006 localisation system used by rUNSWift: extra modes were generated when a single landmark was observed, offset from the current mode and with reduced weights. This better approximated the true uncertainty distribution of the observation.
Lastly, there are a number of bugs in our current system that we did not get to the bottom of and simply worked around. For example, we sometimes merge modes that should not be merged, producing an incorrect pose estimate. Time should be spent in the future tracking down and fixing these errors.
SimpleGaussian::uniModalVisionUpdate
SimpleGaussian::addBallMeasurement
At each vision update stage of the Kalman filter, the ball measurement is used only if exactly one ball is observed and the robot is not in Ready mode (I don't particularly agree with this: it seems to be an update with no correction, i.e. we don't know where the ball actually is, so I'm not sure this update is beneficial). Only the top 5 observations are used, and balls are ranked against posts, centre and penalty observations, so it is unlikely this update is actually applied. All measurements are taken at face value (certainties are not considered).
MultiGaussianDistribution::applyRemoteUpdate
SimpleGaussian::applyRemoteUpdate
- Ball is used in the SharedGaussian update
- Ball observations are used at face value
- Ball is completely responsible for flipping players
TeamBallTracker::addTeammateObservation
    // Exponential moving average: blend the old certainty with the new ballSeenFraction
    teammateBallState[index].ballCertainty =
        BALL_CERTAINTY_DAMPING * teammateBallState[index].ballCertainty +
        (1.0 - BALL_CERTAINTY_DAMPING) * ballSeenFraction;
NOTE: ballCertainty and ballSeenFraction are both defined and calculated within the localisation engine. This appears to be independent of any vision certainty.
TeamBallTracker::isModeFlipped
- Goalie counts triple
- "Flipping" takes the form of multiplying the weight of the mode by TEAMMATE_FLIP_WEIGHT (0.01)
Not implemented: SimpleGaussian::goalieUpdateDisagrees