Kalman Filters everywhere
Last week, I launched a private mailing list for those really interested in the self-driving car & AI industry.
These emails will give you access to :
▶ Daily tips on AI & Self-Driving Cars
▶ Stories from the inside on autonomous tech
▶ Promotional offers on courses
▶ Premium invites to online or offline events
If you’d like to join the mailing list, subscribe HERE.
Today’s email was all about Kalman Filters. Here is the transcript.
Kalman Filters everywhere.
If this is the first time you’re reading about Kalman Filters, you’re just getting started…
Among my articles on self-driving cars, the ones that worked the best were Sensor Fusion and Computer Vision for Tracking.
What do these articles have in common? Kalman Filters
I was surprised by how many people got interested in the topic; I hadn’t realized this tool could be used everywhere in the autonomous technology world.
Today, I’d like to tell you whether it is really important, whether you should start learning it, and how it is used in self-driving cars.
Do you know about Kalman Filters? I didn’t learn about them at school, and maybe you didn’t either!
In self-driving car online courses, they are taught briefly in sensor fusion modules. In some GitHub repositories, you might find them used for tracking obstacles.
When I arrived at the self-driving car startup, I heard the term Kalman Filter a lot. It was used by people working in totally different domains. When I first learned about it, I thought it could only be used for sensor fusion.
But imagine: people from Computer Vision use it. People from Localization use it. People from radio communication use it. People from safety understand it.
It seems like a big deal.
Every time you code something, you will need to make it robust. And that often means using something like a Kalman Filter.
A Kalman Filter is a tool that can predict a future measurement from a few past ones, or improve the measurement at time t. How? It uses math to estimate the state (what you are measuring) and its uncertainty. Over time, the state estimate changes and the uncertainty gets lower. If a sensor sends noisy data, you can make it more certain: a few noisy measurements become as good as one very accurate measurement.
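To make this concrete, here is a minimal one-dimensional sketch of the idea. The numbers are made up for illustration; real filters track multi-dimensional states with matrices.

```python
# A minimal 1D Kalman filter: estimate a scalar state from noisy measurements.
# Illustrative sketch only; real filters are multi-dimensional.

def kalman_update(state, uncertainty, measurement, meas_noise):
    """Blend the current estimate with a new measurement."""
    gain = uncertainty / (uncertainty + meas_noise)  # Kalman gain: how much to trust the measurement
    state = state + gain * (measurement - state)     # move the estimate toward the measurement
    uncertainty = (1 - gain) * uncertainty           # the uncertainty shrinks after every update
    return state, uncertainty

def kalman_predict(state, uncertainty, motion, motion_noise):
    """Project the estimate forward in time (prediction step)."""
    return state + motion, uncertainty + motion_noise

# A few noisy readings of a true position of 10.0 m
state, uncertainty = 0.0, 1000.0  # start with no idea (huge uncertainty)
for z in [9.8, 10.3, 9.9, 10.1]:
    state, uncertainty = kalman_update(state, uncertainty, z, meas_noise=1.0)

print(round(state, 2))    # close to 10.0
print(uncertainty < 1.0)  # True: each noisy reading made the estimate more certain
```

Notice that the filter never stores the raw measurements: the whole history is compressed into just the state and its uncertainty.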
In sensor fusion, a Kalman Filter is used because you have data coming from different sensors with different errors, and you need to produce one unique estimate.
If the LiDAR says the car is 10 m away and the radar says 12 m, who do you trust? How do you make a final decision, knowing the LiDAR is more precise? Should we get rid of the radar? Or should we still benefit from it?
Sensor fusion has to use Kalman Filters, or else pick a single sensor depending on the conditions, which is not a great option.
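As a sketch of the LiDAR/radar question above: a Kalman-style update fuses the two readings by weighting each one by its certainty, so the less precise radar still contributes instead of being thrown away. The variances here are made-up numbers for illustration.

```python
# Fusing the LiDAR (10 m) and radar (12 m) readings from the example above.
# Each sensor is weighted by its certainty (inverse variance);
# the variances are invented for illustration.

def fuse(mean_a, var_a, mean_b, var_b):
    """Kalman-style measurement fusion: a precision-weighted average."""
    mean = (mean_a * var_b + mean_b * var_a) / (var_a + var_b)
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)  # fused variance is always smaller
    return mean, var

lidar = (10.0, 0.1)  # precise: small variance
radar = (12.0, 1.0)  # noisy: large variance

distance, variance = fuse(*lidar, *radar)
print(round(distance, 2))  # 10.18 — pulled mostly toward the trusted LiDAR
print(variance < 0.1)      # True: the fused estimate beats either sensor alone
```

The key point: the fused variance is smaller than the best single sensor’s, so keeping the radar still pays off.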
But in Computer Vision, suppose you are detecting a lane line: why would you need one there?
The need comes from avoiding false detections and sudden changes.
In an autonomous robot, robustness is one of the most important criteria to consider.
Building something that isn’t robust is like selling a car that can at any time make a 90° turn.
It is not safe.
In Localization, the same principle applies. If your localization module suddenly jumps by 30 or 40 cm, you risk driving onto the sidewalk.
Kalman Filters work in multiple dimensions; you can even use them in drones. They are used in roadside speed detectors, and even in space rockets.
Should you learn it?
I would say yes. But keep in mind that this is a difficult topic and that your filter may be hard to design. With practice, it gets easier.
Where to learn?
Sensor Fusion and Computer Vision for Tracking can be a nice way to start. There are multiple courses online. Recently, I have seen the Sensor Fusion Nanodegree program from Udacity, which seems to cover it well.
If you are really comfortable with numbers, this book can help you.
If not, stick to this repository which is very good.
Hope that helps,
See you tomorrow,
Jeremy Cohen
If you’d like to receive daily emails like this one, subscribe here.