
A while ago, we wrote a generic blog post about state estimation in the Crazyflie, mostly discussing different ways the Crazyflie can determine its attitude and/or position. At that time, we only had the Complementary filter and Extended Kalman filter (EKF). Over the years, we’ve made some great additions like the M-estimation-based robust Kalman filter (an enhancement of the EKF, see this blog post) and the Unscented Kalman filter.

However, we have noticed that some of our beginner users struggle with understanding the concept of Kalman filtering, depending on whether it was covered in their curriculum. And for more experienced users, it might be nice to have a recap of the basics as well, since this is a very important part of the Crazyflie’s flight capabilities (and of robotics in general). So, in this blog post, we will explain the principles of Kalman filtering and how it is applied within the Crazyflie firmware, which hopefully will provide a good base for anyone starting to delve into state estimation within the Crazyflie.

We will also have a developer meeting about Kalman filtering on the Crazyflie, so we hope you can join if you have any questions about how it all works. We are also planning to go to FOSDEM this weekend, so we hope to see you there too.

Main Principles of the Kalman Filter

Anybody remotely working with autonomous systems must, at one point, have heard of the Kalman filter, as it has existed since the 60s and even played a role in the Apollo program. Understanding its main principles is also important for anyone working with drones or robotics. There are plenty of resources available, and its Wikipedia page is filled with examples, so here we will focus mostly on the concept and principles and leave the bulk of the mathematics as an exercise for those who like to delve into that :).

So basically, there are several principles that apply to a Kalman filter:

  • It estimates the state of a linear system driven by stochastic processes. The probability distribution of these stochastic processes should ideally be Gaussian.
  • It makes use of Bayes’ rule, a general concept in statistics that describes the probability of an event happening based on previous knowledge related to that event.
  • It assumes that the ‘to be estimated state’ can be described with a Markov model, which assumes that the next event (or scenario) can be predicted from the current event alone. In other words, it does not need the full history of events to predict the next step(s), only the information from one previous step.
  • A Kalman filter is described as a recursive filter, which means that it reuses (part of) its output as input for the next filtering step.

So the state estimate is usually a vector of different variables that the developer or user of the system wants to observe, for either control or prediction, such as position and velocity: [x, y, ẋ, ẏ, …]. One can describe a dynamics model that predicts the state at the next step using only the current time step’s state, for instance: xt+1 = xt + ẋt, yt+1 = yt + ẏt. This can also be nicely described in matrix form if you like linear algebra. To this model, you can also add predicted noise to make it more realistic, or the effect of the input commands to the system (like voltage to the motors). We will not go into the latter in this blogpost.
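
To make this concrete, here is a minimal sketch in Python/NumPy (our own illustration, not code from the Crazyflie firmware) of the constant-velocity model above in matrix form; the state layout and the time step of one are assumptions matching the simplified equations:

```python
import numpy as np

# State vector: [x, y, x_dot, y_dot]
x = np.array([0.0, 0.0, 1.0, 0.5])

# Constant-velocity state transition matrix F (with a time step of 1,
# as in the equations above): position advances by the velocity,
# and velocity is assumed to stay the same.
F = np.array([
    [1.0, 0.0, 1.0, 0.0],
    [0.0, 1.0, 0.0, 1.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

x_next = F @ x  # [1.0, 0.5, 1.0, 0.5]: the predicted next state
```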

The Concept of Kalman filters

Simplified block scheme of Kalman filtering

So, we will go through the process of explaining the steps of the Kalman filter now, which hopefully will be clear with the above picture. As mentioned before, we’d like to avoid formulas and are oversimplifying some parts to make it as clear as possible (hopefully…).

First, there is the predict phase, where the current state estimate and a dynamics model (also known as the state transition model) result in a predicted state. In the same phase, the predicted covariance is also calculated, which uses the dynamics model plus a process noise model that indicates how much the dynamics model deviates from reality when predicting that state. In an ideal world with an ideal model, this could be enough; however, no dynamics model is perfect, which is why the next phase is just as important.
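
In rough NumPy terms (a sketch with our own variable names, not the firmware’s), the predict phase boils down to two lines:

```python
import numpy as np

def predict(x, P, F, Q):
    """Predict phase of a linear Kalman filter.

    x: current state estimate          P: current estimate covariance
    F: dynamics (state transition)     Q: process noise covariance
    """
    x_pred = F @ x            # predicted state from the dynamics model
    P_pred = F @ P @ F.T + Q  # predicted covariance, grown by process noise
    return x_pred, P_pred
```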

Then it’s the update phase, where the filter estimate gets updated with a measurement of the real world through sensors. The measurement needs to go through a measurement model, which transforms it into measurement space so that it can properly be compared with the predicted state; the difference between the two is known as the innovation (or measurement pre-fit residual). Usually, a measurement is not a one-to-one depiction of a single state variable, which is why this transformation is needed. The measurement model, together with the measurement noise model (which indicates how much the measurement deviates from the real world) and the predicted covariance, is used to calculate the innovation and the Kalman gain.

The last part of the update phase is where the prediction is corrected with the innovation. The Kalman gain is used to update the predicted state into a new estimated state using the measured state. The same Kalman gain is also used to update the covariance for the next time step.
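
Continuing the same sketch, the update phase could look like this (H is the measurement model, R the measurement noise covariance; again our own illustrative naming):

```python
import numpy as np

def update(x_pred, P_pred, z, H, R):
    """Update phase: correct the predicted state with a measurement z."""
    y = z - H @ x_pred                   # innovation (pre-fit residual)
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y               # estimate pulled toward the measurement
    P_new = (np.eye(len(x_new)) - K @ H) @ P_pred  # reduced uncertainty
    return x_new, P_new
```

Note how the Kalman gain weighs the innovation: a small R (a trusted sensor) gives a large gain and pulls the estimate toward the measurement, while a large R leaves it closer to the prediction.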

A 1D example, height estimation

It’s always good to show the filter in some form of example, so let’s walk through a simple one, height estimation, to show what this means in practice.

1D example of height estimation

You see here a Crazyflie flying, with its height currently estimated at zt and its velocity at żt. In the predict phase, the next height is predicted to be zt+1,predict, using the simple model zt + żt. Then, for the innovation and update phase, a measurement rz from a range sensor is used, which is translated to zt+1,meas. In this case, when flying over a flat surface, the measurement model is very simple: probably only a translational offset from the sensor to the center of the Crazyflie, or perhaps a compensation for a roll or pitch rotation.

In the background, the covariances are updated and the Kalman gain is calculated, and based on zt+1,predict and zt+1,meas, the next state zt+1 is calculated. As you probably noticed, there was a discrepancy between the predicted and measured height, which could be because the dynamics model couldn’t correctly predict the height. Perhaps a PID gain was higher than expected, or the Crazyflie had upgraded motors that made it climb faster on takeoff. As you can see, the filter put the estimated height zt+1 closer to the measurement than to the predicted height. The measurement noise model incorporated into the covariances indicates that the height sensor is more accurate than the height coming from the dynamics model. This would very well be the case for an infrared height sensor like the one on the Flow Deck; however, if it were an ultrasound-based sensor or a barometer instead (which are much noisier), the estimated height would end up closer to the one predicted by the dynamics model.

Also, it’s good to note that the dynamics model does not currently include the motor input, but it could. In that case, it would have been better able to predict the jump it missed here.
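
To see these numbers in action, here is a hypothetical run of the 1D case, reusing the predict and update sketches from above (all values are made up for illustration):

```python
import numpy as np

# 1D height state: [z, z_dot], time step of 1 as in the text.
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])
Q = np.diag([0.01, 0.01])    # process noise: the dynamics model is imperfect
H = np.array([[1.0, 0.0]])   # the range sensor measures height directly
R = np.array([[0.001]])      # small noise: an accurate infrared sensor

x = np.array([0.5, 0.1])     # z_t = 0.5 m, climbing at 0.1 m/s
P = np.eye(2) * 0.1

x_pred, P_pred = predict(x, P, F, Q)  # predicted height: 0.6 m
z_meas = np.array([0.7])              # the sensor says we climbed faster
x_new, P_new = update(x_pred, P_pred, z_meas, H, R)
# With R much smaller than the predicted covariance, the estimated
# height lands close to 0.7 m. Increase R (a noisy barometer) and it
# stays closer to the predicted 0.6 m instead.
```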

A 2D example, horizontal position

A 2D example in x and y position

Let’s take it up a notch and add an extra dimension. You see here a 2D version of the Crazyflie moving horizontally. It is at position xt, yt with a velocity of ẋt, ẏt at that moment in time. The dynamics model predicts that the Crazyflie will end up in the general direction of the velocity vector, so it is a simple addition of the current position and the velocity vector. If the Crazyflie has a flow sensor (like on the Flow Deck), the flow fx, fy can be detected and translated by the measurement model into a measured velocity (part of the filter state) by combining it with a height measurement and the camera characteristics.

However, the measured flow fx, fy shows much more flow in the x-direction than in the y-direction. This could be due to a sudden wind gust in the y-direction that the dynamics model couldn’t predict, or because there weren’t as many features on the surface in the y-direction, making it harder for the flow sensor to measure the flow in that direction. Since neither model can fully account for this, the filter will, based on the Kalman gain and covariances, put the estimate somewhere in between. Exactly where depends, of course, on the estimated covariances of both the measurement and dynamics models.
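
As a rough idea of what such a measurement model could look like, here is a hypothetical pinhole-style flow-to-velocity conversion (the constants and the formula are illustrative assumptions, not the actual Flow Deck model, which also compensates for rotation):

```python
import numpy as np

def flow_to_velocity(flow_px, height_m, dt, focal_length_px=300.0):
    """Translate raw optical flow into a measured horizontal velocity.

    flow_px: measured flow [fx, fy] in pixels over one frame
    height_m: height above the surface, e.g. from a range sensor
    dt: time between two camera frames
    focal_length_px: camera focal length in pixels (illustrative value)
    """
    # Pinhole-style model: pixel motion scales with height over focal length.
    return np.asarray(flow_px) * height_m / (focal_length_px * dt)

v_meas = flow_to_velocity([12.0, 2.0], height_m=0.5, dt=0.01)
# Much more flow in x than in y, as in the example above; this measured
# velocity is what the update phase compares with the predicted state.
```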

In case of non-linearity

It would be much simpler if the world’s processes could be described with linear systems and Gaussian distributions. However, the world is complex, so that is rarely the case. We can abstract parts of the world away in simulation, and Kalman filters can handle that, but a real flying vehicle such as the Crazyflie is a highly nonlinear system and needs to be described by a nonlinear dynamics model. Additionally, sensor measurements in more complex, 3D situations usually don’t have a one-to-one linear relationship with the variables in the state. Can you still use the Kalman filter then, considering the principles mentioned earlier?

Luckily, certain approximations can be made that keep Kalman filters useful in the presence of non-linearity:

  • Extended Kalman Filter (EKF): If there is non-linearity in the dynamics model, the measurement models, or both, these models are linearized around the current state variables at each prediction and update step by calculating the Jacobian: a collection of first-order partial derivatives of the model with respect to the state variables (see the sketch after this list).
  • Unscented Kalman Filter (UKF): An unscented Kalman filter deals with non-linearities by selecting sigma points around the mean of the state estimate, which are propagated through the non-linear dynamics model.
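
To make the EKF’s linearization a bit more tangible, here is a small hypothetical example: a nonlinear measurement model (a downward range sensor on a rolled vehicle, as hinted at in the 1D example) linearized around the current state via its Jacobian:

```python
import numpy as np

def h(state):
    """Nonlinear measurement model: a downward-facing range sensor on a
    vehicle with roll angle phi measures z / cos(phi), not z itself."""
    z, phi = state
    return np.array([z / np.cos(phi)])

def jacobian_h(state):
    """Jacobian of h at the current state estimate: the first-order
    partial derivatives with respect to the state [z, phi]."""
    z, phi = state
    return np.array([[1.0 / np.cos(phi),                  # dh/dz
                      z * np.sin(phi) / np.cos(phi)**2]]) # dh/dphi

state = np.array([1.0, 0.2])  # 1 m height, 0.2 rad of roll
H_lin = jacobian_h(state)     # used in place of a constant H in the update
# The EKF recomputes this linearization around the new estimate every step.
```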

However, there is also the case of non-Gaussian processes in both dynamics and measurements, and in that case a complementary filter or particle filter would be best suited. The Crazyflie contains a complementary filter (which does not estimate x and y), an extended Kalman filter and an experimental unscented Kalman filter. Check out the state-estimation documentation for more information.

So… where is the code?

This is all fine and dandy, however… where can you find all of this in the code of the Crazyflie firmware? Here is an overview of where exactly you can find it for the most used filter of them all, namely the Extended Kalman Filter.

Several assumptions and adjustments were made to the regular EKF implementation to make it suitable for flight on the Crazyflie. For those details, I’d like to refer to the papers this implementation is based on, which can be found in the EKF documentation. For a more precise explanation of Kalman filtering, please check out Stanford University’s lecture slides on linear dynamical systems or Linköping University’s course slides on sensor fusion.

Update: From the comments, we were also notified of a nice EKF tutorial where you write the filter from scratch (github), by Prof. Simon D. Levy from Washington and Lee University. Practice makes perfect!

Next developer meeting and FOSDEM

As you would have guessed, our next developer meeting will be about the Kalman filters in the Crazyflie. Keep an eye on this Discussion thread for more details on the meeting.

Also, Kimberly and Arnaud will be attending FOSDEM this weekend in Brussels, Belgium. We are hoping to organize an open-source robotics BoF/meetup there, so please let us know if you are planning to go as well!

In our ROS Aerial community working group, we had a meeting a few weeks ago to discuss education and tutorials within aerial robotics (see the ROS discourse thread here). The general conclusion was that there should be more courses and tutorials, since the learning curve is too steep. But… is that actually the case? After a LinkedIn post by Kimberly asking for suggestions, we found out that might not be true: there are loads of tutorials out there! So in this blog post, we will provide an overview of the suggested tutorials, focusing on the ones with materials available online.

Stable diffusion with prompt ‘A drone flying in front of a school blackboard’

Online books

One of the first suggestions was to explore the online free book titled ‘Small Unmanned Aircraft: Theory and Practice.’ This book has been written by Randy Beard and Tim McLain of Brigham Young University, and it covers everything from the absolute basics of coordinate frames and quadrotor dynamics to path planning and cameras. It is a must-read for anybody starting in UAVs and Aerial robotics.

The physical book can be found here: http://press.princeton.edu/titles/9632.html

The available PDFs can be accessed on GitHub: https://github.com/randybeard/uavbook

Courses focused on Aerial Robotics

Here are some suggestions for courses specifically focused on Aerial Robotics. These received the most recommendations! Many universities have made their courses available online, accessible to anyone interested.

Coursera offers the ‘Robotics: Aerial Robotics’ course as part of the Robotics specialization. Taught by Prof. Vijay Kumar from the University of Pennsylvania, this 4-week course covers the mechanics and control of aerial vehicles using Matlab. It starts in one dimension and gradually progresses to three dimensions in simulation. The course is part of a paid educational program, but you can audit the lessons for free.

Link: https://www.coursera.org/learn/robotics-flight

Udacity has been offering a course on Aerial Vehicles for quite some time. The lessons are taught by top names in the industry and cover key aspects of Aerial Robotics, such as motion planning, controls, and estimation, with lab assignments involving a real drone. The course duration is 4 months, and access is available for a fee.

Link: https://www.udacity.com/course/flying-car-nanodegree–nd787

The University of Maryland offers a course on Autonomous Aerial Robotics, making all videos, slides, and assignments available. Taught by Nitin J. Sanket and Chahat Deep Singh, the course covers everything from basic control and dynamics to full autonomy. It’s a comprehensive resource for aerial robotics. The course utilizes the Parrot Bebop 2.0, and while a Mocap system is required, you may explore the possibility of adapting the course to a different platform.

Link: http://prg.cs.umd.edu/enae788m

Additionally, there’s the course ‘Applied Control System 3: UAV Drone (3D Dynamics & Control)’ which is part of a series by Mark Misin. This course delves deep into the dynamics, control, and modeling of quadrotors.

Link: https://www.udemy.com/course/applied-control-systems-for-engineers-2-uav-drone-control/

Courses on Robotics applied to UAVs

Here are some suggestions for courses that focus on robotics but utilize UAVs/drones to demonstrate the implementation of the studied materials.

‘Visual Navigation For Autonomous Vehicles’ is a course available on MIT Open Courseware, taught by Prof. Luca Carlone. As the name implies, the course primarily focuses on autonomous navigation for any autonomous vehicle. It includes exercises where students implement vision algorithms on both ground robots and drones. Additionally, the course covers working with ROS and applying the knowledge to a simulated drone in Unity.

Link: https://ocw.mit.edu/courses/16-485-visual-navigation-for-autonomous-vehicles-vnav-fall-2020/

The ‘Bio-inspired Robotics’ course at the University of Washington, led by Prof. Sawyer Fuller, explores the realm of drawing inspiration from nature rather than reinventing the wheel. It covers various robots inspired by creatures capable of swimming, walking, hopping, and of course, flying. Lab assignments in this course involve working with a Crazyflie drone.

Link: https://faculty.washington.edu/minster/bio_inspired_robotics/

Brown University offers a course called ‘Introduction to Robotics,’ taught by Prof. Stefanie Tellex. While the introduction covers generic robotics, the focus of the full course is on building and programming the Duckiedrone. The course dives straight into autonomy and also teaches students how to work with ROS.

Link: https://cs.brown.edu/courses/cs1951r/

Update (4th of July)

Princeton University (see this blogpost) has also decided to release its ‘Intro to Robotics’ lectures and materials to the public. Can’t believe I forgot this one!

Link: https://irom-lab.princeton.edu/intro-to-robotics/

Youtube tutorials

If you’d like to start hands-on right away, here are a couple of suggestions for YouTube tutorials or series about aerial robotics.

Drone Programming with Python: This popular tutorial/course teaches viewers how to program a real drone using Python with the DJI Tello. It offers a great opportunity for anyone looking for a short and enjoyable project to undertake, especially on a rainy day, while still working with a real platform.

Link: https://youtu.be/LmEcyQnfpDA

Intelligent Quads YouTube Channel: This channel is entirely dedicated to creating autonomous UAVs, covering topics from Ardupilot to MAVlink to ROS and Gazebo. It appears to be a valuable resource for beginners in the field of autonomous UAVs.

Link: https://www.youtube.com/@IntelligentQuads

But wait, there is more!

Here are some extra resources to take a look at as well.

  • Self-Driving Car Specialization: If you are interested in learning more about SLAM (Simultaneous Localization and Mapping) and sensors, this specialization is tailored for self-driving cars but the theory can be useful for drones as well. Link: https://www.coursera.org/specializations/self-driving-cars
  • Drone Dojo: For those looking to build their own drones, Drone Dojo provides useful instructions and courses to get started on DIY drone projects. Link: https://dojofordrones.com/

To conclude

Indeed, it appears that there are plenty of courses and tutorials available for people interested in getting started with aerial robotics. The range of resources is vast, and we might still be missing some, which could lead to a part 2 of this blog post in the future! Perhaps we would also need to delve into these resources to see why the learning curve is still considered steep. Then again, aerial robotics is not an easy subject, so it is probably good to start from the basics. Nevertheless, this compilation should provide a solid starting point for anyone eager to delve into the world of aerial robotics. A major thank you to everyone who has contributed so far (linked in the original LinkedIn post); your valuable input has made this possible!

If you have been following the ROS Discourse on a regular basis, you might have seen a bit more activity than usual in the Aerial Vehicles category. We very recently started an Aerial Robotics Working Group in collaboration with the Dronecode Foundation! It will be a community-driven working group initially, but we will hold biweekly meetings on Wednesdays at 2:00 PM UTC, build up a community of members, and gather information on the ROS Aerial community’s GitHub organization. This blogpost aims to explain how this working group came to light, what our current plans are, and how you can participate.

How did it all begin?

There are actually quite a few aerial enthusiasts dwelling in the ROS crowd, which became evident when 20-30 people showed up at the impromptu ROSCon 2022 aerial roboticists meetup. This was also our first experience with ROSCon as Bitcraze, and I (Kimberly) absolutely loved it. The idea popped up to become more active in the amazing ROS community, which we started doing by helping out more with the Crazyswarm2 project (see this blogpost) and giving a presentation about it as well. However, we did notice that there wasn’t as much online chatter about aerial vehicles on the ROS communication channels. Yes, the Embedded ROS working group led by eProsima (responsible for MicroROS) has done some really cool demos with Crazyflies! And the same goes for other aerial projects, which have probably contributed to some of the staple projects like NAV2. But there aren’t any working groups specific to aerial robotics.

PX4, led by the Dronecode Foundation, had similar ambitions to become more embedded in the ROS family, and since we had met in person at that very same ROSCon last year, we started talking about possibly starting up a working group. This began with us reaching out to the ROS community to gauge interest with this ROS discourse post, and after more than 25 replies, the obvious next step was to set up a first exploratory meeting. About 30 people showed up, so the message was clear: yes, there is a demand for guidance, structure, and information in the ROS community regarding aerial robotics. Thus, the aerial robotics working group was born!

Current state and plans

One of the issues we have noticed is that there are numerous projects and a huge amount of information about aerial robotics, perhaps too much. That is because aerial robotics covers a huge variety of robotic systems in different forms, like multicopters or even monocopters (like in the blogpost here), but also hybrid VTOL vehicles, mini blimps (for example this hack we did), and many more. As you probably know, aerial vehicles come with their own set of challenges that distinguish them from ground robots, like instability, aerodynamics, and limitations related to their lift capability. This makes them an interesting platform for control theory, autonomy, and swarming, and as a result several ROS-related projects have emerged, such as Crazyswarm2, Aerostack2, the Kumar Robotics Autonomy Stack, and Agilicious. Moreover, even though a standard ROS interface for aerial robotics was created some years ago, it has not been enforced or updated since. And although courses and tutorials to get started with aerial robotics in ROS can be found scattered across multiple projects and autopilot websites, many have found the learning curve to be quite steep and usually don’t know where to start.

Due to the vast amount of systems, software, projects, and information out there, we decided to gather all of it in one centralized location, an Aerial Robotics landscape, instead of scattering it across various aerial robotics resources. For this, we have created a simple repository with markdown files. The idea is to fill this in little by little with info from the working group discussions, other input from users, or research we do ourselves. To that end, we will facilitate biweekly meetings where users present their projects (like our last meeting about Aerostack2) or where we engage in discussions on various aerial robotics topics (like aerial autonomy stacks in the startup meeting).

Future ambitions

Currently, we don’t have a specific end goal or main project in mind, as we are right at the start of the first discussions and information gathering. That is also why, after some emails back and forth with the Open Robotics Foundation, it will be considered a ‘community-driven’ working group until we reach a stage where the landscape is adequately developed to establish specific development goals and set up various subprojects for communication, autonomy, platforms, and/or education. Additionally, incorporating direct communication protocols within swarms could be of interest, as these are a common use case within aerial robotics. Once we have established more specific development goals, we can apply to become an official ROS working group and collaborate with other working groups on overlapping projects. From our perspective, it would be more beneficial for the ROS ecosystem not to create a standalone aerial stack, but to enhance the integration of other stacks with aerial vehicles.

Join us!

Currently, I (Kimberly), representing Bitcraze, and Ramon Roche from the Dronecode Foundation will be in the ‘lead’ of the aerial working group, although we prefer to act as facilitators rather than imposing our own direction. We will try our best not to geek out too much on PX4 and/or Crazyflies alone, so anybody’s input will be crucial! So if you’d like to levitate ROS to new heights, come and join our meetings! Our next meeting is scheduled for Wednesday the 24th of May (2 PM UTC), and you can find the information in this ROS Discourse thread. We hope to see you there!