Author: Kimberly McGuire

Whenever we show the Crazyflie at our booth at various robotics conferences (like the recent ICRA Yokohama), we sometimes get comments like ‘ahh that’s cute’ or ‘that’s a fun toy!’. Those who have been working with it for their research know differently, but it seems that the general robotics crowd needs a little bit more… convincing! Disregarding its size, the Crazyflie is a great tool that enables users to do many awesome things in various areas of robotics, such as swarm robotics and autonomy, for both research and education.

We will be showing that off by giving a live tutorial and demonstration at the Robotics Developer Day 2024, which is organized by The Construct and will take place this Friday, 5th of July. We have a discount code for you to use if you want to get a ticket; scroll down for details. The code can be used until midnight (CEST) on the 2nd of July.

The Construct and Robotics Developer Day 2024

So a bit of background information: The Construct is an online platform that offers various courses and curriculums to teach robotics and ROS to their users. Along with that, they also organize all kinds of live training sessions and events like the Robotics Developer Day and the ROS Awards. Unfortunately, the deadline for voting in the latter has passed, but hopefully in the future, the Crazyflie might get an award of its own!

What stands out about the platform is its implementation of web-based virtual machines, called ‘ROSjects,’ where ROS and everything needed for it is already set up from the start. Anyone who has worked with ROS(2) before knows that it can be a pain to switch between different versions of ROS and Gazebo, so this feature allows users to keep those projects separate. For the Robotics Developer Day, there will be about five live skill-learning sessions where a ROSject is already preconfigured and set up for the attendees, enabling them to follow the tutorial along with the teacher or speaker as they explain the framework.

Skill learning session with the Crazyflie

One of the earlier mentioned skill-learning sessions is, of course, one with the Crazyflie! The title is “ROS 2 with a Tiny Quadcopter,” and it is currently planned to be the first skill-learning session of the event, scheduled at 15:15 (3:15 pm) CEST. The talk will emphasize the use of simulation in the development process for aerial robotics and iterating between the real platform and the simulated one. We will demonstrate this with a Crazyflie 2.1 equipped with a Lighthouse deck and a Multi-ranger deck. It will also rest on a Qi-charging deck on a charging platform while it patiently waits for its turn :D

What we will be showing is a simple implementation of a mapping algorithm made specifically for the Crazyflie’s Multi-ranger deck, which we have demonstrated before at ROSCon Kyoto and in the Crazyswarm2 tutorials. What is especially different this time is that we are using Gazebo for the simulation parts, which required some skill learning on our side, as we have been using Webots over the last couple of years (see our tutorial for that). You can find the files for the simulation part in this repository, but we do advise you to follow the session first.

You can, if you want, follow along with the tutorial using a Crazyflie yourself. If you have a Crazyflie, a Crazyradio, and a positioning deck (preferably Lighthouse positioning, but a Flow deck would work as well), you can try out the real-platform part of this tutorial. You will need to install Crazyswarm2 on a separate Ubuntu machine and add a robot in your ROSject as preparation. However, this is entirely optional, and it might distract you from the cool demos we are planning to show, so perhaps you can try this as a recap after the actual skill-learning session ;).

Here is a teaser of what the final stage of the tutorial will look like:

Win a Lighthouse Explorer bundle and a Hands-On Pass discount

We are also sponsors of the event and have agreed with The Construct to award a Crazyflie to one of the contest winners. Specifically, we will be awarding a Lighthouse Explorer bundle, with a Qi deck and a custom-made charging pad similar to the ones we show at fairs like ICRA this year. So make sure to participate in the contests during the day for a chance to win this or any of the other prizes they have!

It is possible to follow the event for free, but if you’d like to participate with the ROSjects, you’ll need to get a hands-on pass. If you haven’t yet gotten a hands-on ticket for the Robotics Developer Day, please use our 50% off discount code:

19ACC2C9

This code is valid until midnight (Central European Summer Time) on the 2nd of July! Buy your ticket on the event’s website: https://www.theconstruct.ai/robotics-developers-day/

RSS 2024 aerial swarm workshop

On a side note, we will be at the Robotics: Science and Systems Conference in Delft from July 15th to 19th, 2024—just about two weeks from now. We won’t have a booth as we usually do, but we will be co-organizing a half-day workshop titled Aerial Swarm Tools and Applications (more details on this website).

We will be organizing this workshop together with our collaborators at Crazyswarm2, as well as the developers of CrazyChoir and Aerostack2. We’re excited to showcase demos of these frameworks with a bunch of actual Crazyflies during the workshop, if the demo gods are on our side :D. We will also have great speakers, including SiQi Zhou (TU Munich), Martin Saska (Czech Technical University), Sabine Hauert (University of Bristol), and Gábor Vásárhelyi (Collmot/Eötvös University).

Hope to see you there!

It’s been a little over a year since we started the ROS Aerial Robotics community group together with the Drone Code Foundation, and it is still going strong (blogpost 1, blogpost 2). Since there is a nice mix of people joining the meetings from different backgrounds and drone operating systems, we have had quite a few discussions and overviews of various topics. For instance, we’ve explored courses in Aerial Robotics and other subjects in previous meetings. An important goal of the group has been to make it easier for people to get started with flying robotics, which we’ve achieved by collecting essential information in the ‘Aerial Robotics Landscape’.

Starting out in Aerial Robotics

Let’s cut to the chase: Aerial Robotics is a very challenging field to get started in. Not only do you need a comprehensive understanding of which hardware to acquire, but users also face multiple choices. These decisions include selecting the right autopilot, a simulator for testing ideas, and the sensors necessary to achieve autonomy. Unlike the well-established Turtlebot in other robotics domains, there isn’t a universally accepted, field-tested getting-started development drone in the aerial robotics world. While we at Bitcraze would love everyone to go for the Crazyflie, we recognize its limitations: for instance, it may not handle outdoor flights with GPS or carry heavy cameras effectively. Our goal, as the ‘Aerial Robotics Community group,’ is to make it easier for beginners by providing users with information about the hardware and software they truly need.

Drone Code Foundation and Bitcraze AB had a keynote speech together at ROSCon 2023 about getting started in Aerial Robotics called ‘Up, Up, and Away: Adventures in Aerial Robotics’. Please take a look at the talk here on Vimeo.

The Aerial Robotics Landscape website

The Aerial Robotics Landscape serves as a repository of information related to all things Aerial Robotics. It started out in the GitHub repository, and it grew thanks to the discussions held at the aerial robotics community group meetings. Additionally, contributions from both group members and external parties have played an important role (you can explore the merged PRs).

As the pages and tables expanded, it became clear that a better representation was necessary than mere README documentation in the GitHub repository. The group therefore experimented with MkDocs, creating a website in the ‘Read the Docs’ theme. This is a theme similar to the one used by important packages within the ROS ecosystem, such as the ROS documentation, as well as ROS 2 packages like Nav2 and Crazyswarm2.

Please take a look at the rendered website here: https://ros-aerial.github.io/aerial_robotic_landscape/

Please contribute!

The Aerial Robotics Landscape is a dynamic field, where development kits emerge while others are discontinued, new simulators rise while some lose support, and autopilot and autonomy features evolve monthly. This ever-changing landscape demands constant updates and additions. We try to do this to the best of our ability, but we can’t do it alone — we need your help.

If you believe that your favorite hardware platform is missing from the landscape, or if you’ve recently developed a new planning algorithm for fixed-wing vehicles or created a YouTube course on optical-based flight, please contribute by means of a pull request to the GitHub repository. We’ve put together a guide on how to contribute to the Aerial Robotics Landscape here. Let’s make the website useful together!

If you’d like to join the ROS Aerial Robotics meetings, please take a look at our community GitHub repository for joining information. The next meeting is on the 5th of June at 4 PM UTC, and it was announced on ROS Discourse.

“What? You are in Japan? Again!?” Yup, that is right! We loved IROS Kyoto 2022 so much that we just couldn’t wait to come back. Barbara, Arnaud, and Rik are setting up the booth as we speak to show some Bitcraze awesomeness to you! Come and say hi at booth IC085.

The gang before the rush starts!

Crazyflie Brushless and Camera expansion

Of all the prototypes, we are most excited to show you the Crazyflie Brushless and the ‘forward-facing expansion connector prototype’, aka the Camera deck. Here you can see them both in action during a tryout of our demo. We have also written blogposts about both, so make sure to read them as well (Brushless blogpost, Camera expansion blogpost).

The Crazyflie Brushless flying with a Camera deck.

We will also explain the contact-charging prototype (see the blogpost here) and will be showing all of our decks at the booth as well. And of course, our fully autonomous, decentralized, peer-to-peer swarm demo with onboard collision avoidance will be running as always. Make sure to read this blogpost from when we showed this demo at IROS 2022 to understand fully what is going on!

Also take a look at our event page of the ICRA 2024 demo.

Hand in your Crazyflie posters at our booth!

We will be providing a ‘special disposal service’ for your conference poster! We would love to see what you are working on and take your poster off your hands, because our updated office/flight space has plenty of room but a lot of empty walls.

If you hand in your poster at the booth, you’ll get a special, one-of-a-kind, button badge that you can wear proudly during the conference! So we will see you at booth IC085!

The ‘Bitcraze took my poster’ button!

Today, we’d like to take the opportunity to spotlight a feature that’s been in our code base for some time, yet hasn’t been the subject of a blog post: the Python bindings for our Crazyflie firmware. You may have noticed it mentioned in previous blog posts, and now we’ll delve into more detail about what it is, how we and others are utilizing it, and what its future holds.

Schematized visualization of code within the Crazyflie

What are the Python bindings?

Language bindings, in essence, are libraries that encapsulate chunks of code, enabling one programming language to interface with another. For instance, consider the project Zenoh. Its core library is crafted in Rust, but it offers bindings/wrappings for numerous other languages like Python, C/C++, and so on. This allows Zenoh’s API to be utilized in scripts or executables written in those languages. This approach significantly broadens the functionality without necessitating the rewriting of code across multiple programs. A case in point from the realm of robotics is ROS(1), which initially created all of their APIs for different languages from scratch—a maintenance nightmare. To address this, for ROS 2, they developed the primary functionality entirely in C and provided wrappers for all other programming languages. This strategy eliminates the need to ‘reinvent the wheel’ with each iteration.

Rather than redeveloping the firmware in Python, our esteemed collaborators Wolfgang Hönig and James Preiss took a pragmatic approach. They selected parts of the Crazyflie firmware and wrapped them for Python use. You can see the process in this ticket. This was a crucial step for the simulation of the original Crazyswarm (ROS1) project and was continued for its use in the Crazyswarm2 project, which is based on ROS 2. They opted for SWIG, a tool specifically designed to wrap C or C++ programs for use with higher-level target languages. This includes not only Python, but also C#, Go, JavaScript, and more, making it the clear choice for implementing those bindings at the time. We also strongly recommend checking out a previous blogpost by Simon D. Levy, who used Haskell to wrap the C-based Crazyflie firmware for C++.

Where are the Python bindings being used?

As previously mentioned, the Crazyswarm1 & 2 projects heavily utilize Python bindings for testing key components of the firmware (such as the high-level commander, planner, and controller) and for a (hybrid) software-in-the-loop simulation. During the project’s installation, these Python bindings must be compiled so they can be used during simulation. This approach allows users to first test their trajectories in a simulated environment before deploying them on actual Crazyflies. The advantage is that minimal or no modifications are required to achieve the same results. While simulations do not perfectly mirror real-world conditions, they are beneficial because they operate with the same controller as the one used on the Crazyflie itself. In our own Crazyflie simulation in Webots, it’s also possible to use these same bindings in the simulator by following these instructions.
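As a sketch of what driving the wrapped controller from Python can look like in such a simulation loop (assuming the bindings have been built as described in the repository; the struct and function names below are taken from the firmware’s own Python tests and may differ between firmware versions):

import cffirmware

# Initialize the wrapped PID controller once, like the firmware does at boot
cffirmware.controllerPidInit()

# The same C structs the firmware uses, exposed as Python objects
control = cffirmware.control_t()
setpoint = cffirmware.setpoint_t()
state = cffirmware.state_t()
sensors = cffirmware.sensorData_t()

# Ask for an absolute height of 0.5 m
setpoint.mode.z = cffirmware.modeAbs
setpoint.position.z = 0.5

for tick in range(1000):
    # In a real setup, fill `state` and `sensors` from the simulator here
    cffirmware.controllerPid(control, setpoint, sensors, state, tick)
    # `control` now holds the motor commands to feed back into the simulator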

Three controllers (PID, Mellinger, and Brescianini), inter-drone collision avoidance, and the high-level commander planner have all been converted into Python bindings. Recently, we’ve added a new component: the Extended Kalman Filter (EKF). This addition is ideal as it allows us to test the filter with recorded data from a real Crazyflie and experiment with different measurement models. As we discussed in a previous blogpost, estimators are complex due to their dependence on chance and environmental factors. It’s beneficial for developers to have more control over the inputs and expected outputs. However, the EKF is deeply integrated into the interconnected processes within the Crazyflie firmware. After a significant refactoring effort, it was added to the bindings by creating an EKF emulator (see this PR). This enabled Kristoffer to further enhance the TDOA outlier filter for the Crazyflie by emulating the full process of the EKF, including IMU data.

In addition to SITL simulation and EKF development, Python bindings are also invaluable for continuous integration. They enable comprehensive testing that encompasses not just isolated code snippets, but entire processes. For instance, if there’s a recording of a Crazyflie flight complete with sensor data (such as flow, height, and IMU data), and it’s supplemented with a recorded ground truth (from lighthouse/mocap), this sensor data can be fed into the EKF Python binding. We can then compare the outputted pose with the ground truth to verify accuracy. The same principle applies to the controllers. Consequently, if any changes are made to the firmware that affect these crucial aspects of Crazyflie flight, these tests can readily detect them.

If you’d like to try the Python binding tests for yourself, clone the crazyflie-firmware repository and build/install the Python bindings via these instructions. Make sure you are in the root of the repository and run:

python3 -m pytest test_python/

Mind that you might need to put the bindings on the Python path with:

export PYTHONPATH=<PATH_TO_>/crazyflie-firmware/build:$PYTHONPATH

(please see this open ticket)

The next steps for the Python bindings

We’ve seen how Python bindings have proven to be extremely useful, and we’re keen to further expand their application. At present, only the Loco positioning system has been incorporated into the EKF part of the Python bindings. Work is now underway to enable this for the Lighthouse system (see this draft PR). Incorporating the Lighthouse system will be somewhat more complex, but fortunately, much of the groundwork has already been laid, so we hope it won’t be too challenging. However, we have encountered issues when using the controller bindings with simulation (see this open ticket). It appears that some hardware-specific timing has been hardcoded throughout the PID controller in particular. Therefore, work needs to be done to separate the hardware abstraction from the code, necessitating additional refactoring work for the controller.

Recent projects like Sim_CF2 (see this blogpost) and Crazysim (see this discussion thread) have successfully compiled the Crazyflie firmware to run as a standalone node on a computer. This allows users to connect it to the Crazyflie Python library as if it were an actual Crazyflie. This full Software-In-The-Loop (SITL) functionality, already possible with autopilot suites like PX4 and Ardupilot, is something we at Bitcraze are eager to implement as well. However, considering the extensive work required by the aforementioned SITL projects to truly separate the hardware abstraction layer from the codebase, we anticipate that refactoring the entire firmware will be a substantial task. We’re excited to see what we can achieve in this area.

Indeed, even with a more comprehensive Software-In-The-Loop (SITL) solution, there’s no reason to completely abandon Python bindings. For developments requiring more input/output control—such as the creation of a new controller or an addition to the Extended Kalman Filter (EKF)—it’s beneficial to start with just that portion of the firmware code. Python bindings and a SITL build can coexist, each offering its own advantages and disadvantages for different stages of the development process. By leveraging the tools at our disposal, we can minimize the risk of damaging Crazyflies during development. Let’s continue to make the most of these valuable resources!

This week’s blogpost will be a bit different from what you are used to reading from us. Usually we talk about cool prototypes, explain bits and pieces of the Bitcraze ecosystem, or let external parties/researchers showcase the awesome work they’ve done with the Crazyflie. Today’s blogpost is about a societal topic that plays a big part in the robotics world: diversity! Bitcraze is helping out with the Diversity Scholarship of this year’s ROSCon, which we’d like to advertise here, complemented by some words about diversity in robotics and how this topic is reflected upon within Bitcraze itself.

Diversity & Robotics

It’s widely acknowledged that the field of robotics lacks diversity. While there have been improvements, several groups remain significantly underrepresented, including women, individuals in LGBTQIA+ communities, people with disabilities, and those from racial and/or ethnic minorities. There are some interesting communities to look into if you are part of these groups yourself, and if you know of any other ones that are interesting, of course, let us know.

Beyond pointing to these communities, we do not regard ourselves as absolute experts on diversity in robotics, but we do have a simple yet interesting statistic to share from our own experience. We usually receive requests for guest blog posts on our website from external researchers and engineers looking to showcase their work with the Crazyflie. We thought it would be interesting to graph the gender distribution of these guest bloggers:

Gender of our guest blogposters on bitcraze.io

As you may have noticed, before 2020, all of our guest bloggers were male, and only in recent years has that changed. It’s also worth mentioning that to our knowledge, none of the bloggers has openly identified as anything other than cis-gender male or female. While this shift represents progress, it’s important to acknowledge that there is still room for improvement. Additionally, it is essential to recognize that this tiny statistic does not fully reflect the diversity of the robotics community but rather (perhaps) pertains to a specific subset, such as aerial robotics.

Diversity & Bitcraze

So let me just cut to the chase: Bitcraze is a very small company with currently only 6 full-time employees, and we don’t have any formal policies on hiring and promoting diversity. However, we do have a very open culture within the company, where we can discuss these topics at our coffee breaks without restrictions or judgment. There is a genuine interest in sharing and discussing negative experiences related to the lack of diversity at previous workplaces, so we do talk about it a lot.

In terms of our impact internally and externally, for now, we don’t come across enough hiring opportunities to implement diversity policies. We can perhaps also invite more diverse guest bloggers to contribute to our website, or make our developer meetings more welcoming. However, there is only a limited influence that we can exert here with our small company. Therefore, the choice to support other communities we love to improve diversity is perhaps the most we can do to contribute to this cause.

We are already involved in the ROS community by helping out with the ROS aerial community working group (blogpost1, blogpost2), and we loved the atmosphere at ROSCon when we were in Kyoto. When the opportunity arose to co-chair the diversity committee of ROSCon 2024, together with Belén Torres from Wymaq, we gladly took it, hoping that this is where we can make more of a difference.

Diversity Scholarship at ROSCon 2024

This year’s ROSCon will be held in Odense, Denmark, between October 21st and 23rd. The ROSCon organizers have offered a diversity scholarship since 2016, and this year’s event is expected to be the biggest one yet. Individuals belonging to the underrepresented groups in robotics mentioned earlier are invited to apply for the scholarship. The deadline is April 5th, so please don’t wait too long to apply. Check here for the ROS Discourse post and here for the diversity scholarship application on the ROSCon website.

A while ago, we wrote a generic blog post about state estimation in the Crazyflie, mostly discussing different ways the Crazyflie can determine its attitude and/or position. At that time, we only had the Complementary filter and Extended Kalman filter (EKF). Over the years, we’ve made some great additions like the M-estimation-based robust Kalman filter (an enhancement of the EKF, see this blog post) and the Unscented Kalman filter.

However, we have noticed that some of our newer users struggle with understanding the concept of Kalman filtering, depending on whether it has been covered in their curriculum. And for some more experienced users, it might be nice to have a recap of the basics as well, since this is a very important part of the Crazyflie’s flight capabilities (and of robotics in general). So, in this blog post, we will explain the principles of Kalman filtering and how it is applied within the Crazyflie firmware, which hopefully will provide a good base for anyone starting to delve into state estimation within the Crazyflie.

We will also have a developer meeting about Kalman filtering on the Crazyflie, so we hope you can join if you have any questions about how it all works. We are also planning to go to FOSDEM this weekend, so we hope to see you there too.

Main Principles of the Kalman Filter

Anybody remotely working with autonomous systems must, at one point, have heard of the Kalman filter, as it has existed since the 60s and even played a role in the Apollo program. Understanding its main principles is also important for anyone working with drones or robotics. There are plenty of resources available, and its Wikipedia page is filled with examples, so here we will focus mostly on the concept and principles and leave the bulk of the mathematics as an exercise for those who like to delve into that :).

So basically, there are several principles that apply to a Kalman filter:

  • It estimates a linear system that is driven by stochastic processes. The probability distributions that drive these stochastic processes should ideally be Gaussian.
  • It makes use of Bayes’ rule, a general concept in statistics that describes the probability of an event happening based on previous knowledge related to that event.
  • It assumes that the ‘to be estimated state’ can be described with a Markov model, which assumes that a sequence of the next possible event (or scenario) can be predicted by the current event. In other words, it does not need a full history of events to predict the next step(s), only the information from the event of one previous step.
  • A Kalman filter is described as a recursive filter, which means that it reuses (part of) its output as input for the next filtering step.

So the state estimate is usually a vector of different variables that the developer or user of the system likes to observe, for either control or prediction, such as position and velocity: [x, y, ẋ, ẏ, …]. One can describe a dynamics model that predicts the state in the next step using only the current time step’s state, for instance: xt+1 = xt + ẋt, yt+1 = yt + ẏt. This can be nicely described in matrix form as well if you like linear algebra. To this model, you can also add predicted noise to make it more realistic, or the effect of the input commands to the system (like voltage to motors). We will not go into the latter in this blogpost.
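For illustration, with the state [x, y, ẋ, ẏ] and a timestep Δt (the example above effectively uses Δt = 1), this dynamics model can be sketched in matrix form as:

x_{t+1} = \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} x_t, \qquad x_t = \begin{bmatrix} x \\ y \\ \dot{x} \\ \dot{y} \end{bmatrix}

where the diagonal keeps the position and velocity as they are, and the Δt entries add the velocity contribution to the position.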

The Concept of Kalman filters

Simplified block scheme of Kalman filtering

So, we will go through the process of explaining the steps of the Kalman filter now, which hopefully will be clear with the above picture. As mentioned before, we’d like to avoid formulas and are oversimplifying some parts to make it as clear as possible (hopefully…).

First, there is the predict phase, where the current state (estimate) and a dynamics model (also known as the state transition model) result in a predicted state. In the same phase, the predicted covariance is calculated, using the dynamics model plus the process noise model, which indicates how much the dynamics model deviates from reality when predicting that state. In an ideal world with an ideal model, this could be enough; however, no dynamics model is perfect, which is why the next phase is also very important.

Then comes the update phase, where the filter estimate gets updated by a measurement of the real world through sensors. The measurement needs to go through a measurement model, which relates the measurement to the state. Usually, a measurement is not a one-to-one depiction of one variable of the state, so the measurement model ensures that the measurement can properly be compared to the predicted state; the difference between the two is known as the innovation or measurement pre-fit residual. This same measurement model, accompanied by the measurement noise model (which indicates how much the measurement deviates from the real world), together with the predicted covariance, is used to calculate the Kalman gain.

The last part of the update phase is where the prediction is corrected with the innovation. The Kalman gain is used to blend the predicted state and the measured state into the new estimated state. The same Kalman gain is also used to update the covariance for the next time step.

A 1D example: height estimation

It’s always good to show the filter in some form of example, so let’s walk through a simple one in terms of height estimation to demonstrate its implications.

1D example of height estimation

You see here a Crazyflie flying, with its height currently estimated at zt and its velocity at żt. It goes into the predict phase and predicts the next height to be zt+1,predict, using the simple model zt + żt. Then, for the innovation and update phase, a measurement rz from a range sensor is used by the filter, which is translated to zt+1,meas. In this case, the measurement model is very simple when flying over a flat surface, as it probably only adds the translation offset of the sensor relative to the center of the Crazyflie, or perhaps compensates for a roll or pitch rotation.

In the background, the covariances are updated and the Kalman gain is calculated, and based on zt+1,predict and zt+1,meas, the next state zt+1 is calculated. As you probably noticed, there was a discrepancy between the predicted and measured height, which could be because the dynamics model couldn’t correctly predict the height. Perhaps a PID gain was higher than expected, or the Crazyflie had upgraded motors that made it climb faster on takeoff. As you can see here, the filter put the estimated height zt+1 closer to the measurement than to the predicted height. The measurement noise model incorporated into the covariances indicates that the height sensor is more accurate than the height coming from the dynamics model. This would very well be the case for an infrared height sensor like the one on the Flow deck; however, if it were an ultrasound-based sensor or a barometer instead (which are much noisier), the estimated height would end up closer to the one predicted by the dynamics model.

Also, it’s good to note that the dynamics model does not currently include the motor input, but it could have done so as well. In that case, it would have been better able to predict the jump it just missed.
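To make the predict and update phases concrete, here is a minimal, self-contained 1D height filter in Python. It is only a sketch of the principle: all noise values are made up for illustration, and the firmware implementation is far more elaborate.

z_est = 0.0   # estimated height (m)
vz = 0.2      # assumed constant climb velocity (m/s)
p = 1.0       # estimate covariance (how uncertain z_est is)
q = 0.01      # process noise: distrust in the dynamics model
r = 0.001     # measurement noise: distrust in the range sensor
dt = 0.1      # timestep (s)

for z_meas in [0.03, 0.05, 0.08, 0.12, 0.15]:  # fake range readings (m)
    # Predict phase: propagate the state and grow the uncertainty
    z_pred = z_est + vz * dt
    p = p + q

    # Update phase: the Kalman gain weighs prediction against measurement
    k = p / (p + r)
    z_est = z_pred + k * (z_meas - z_pred)
    p = (1.0 - k) * p
    print(f"predicted {z_pred:.3f} m, measured {z_meas:.3f} m, estimated {z_est:.3f} m")

Because the measurement noise r is much smaller than the covariance p here, the Kalman gain ends up close to 1 and the estimate lands near the measurement, just like the infrared range sensor case described above. Increase r and the estimate will stick closer to the dynamics model instead.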

A 2D example: horizontal position

A 2D example in x and y position

Let’s take it up a notch and add an extra dimension. You see here a 2D situation with the Crazyflie moving horizontally. It is at position xt, yt and has a velocity of ẋt, ẏt at that moment in time. The dynamics model estimates the Crazyflie to end up in the general direction of its velocity vector, so it is a simple addition of the current position and the velocity vector. If the Crazyflie has a flow sensor (like on the Flow deck), flow fx, fy can be detected and translated by the measurement model to a measured velocity (part of the state) by combining it with a height measurement and the camera characteristics.

However, the measurement in the form of the measured flow fx, fy shows much more flow detected in the x-direction than in the y-direction. This can be due to a sudden wind gust in the y-direction, which the dynamics model couldn’t accurately predict, or to a lack of features on the surface in the y-direction, making it more difficult for the flow sensor to measure the flow in that direction. Since neither model can account for this, the filter will, based on the Kalman gain and covariances, put the estimate somewhere in between. Where exactly depends, of course, on the estimated covariances of both the measurement and the dynamics model.

In case of non-linearity

It would be much simpler if the world’s processes could be described with linear systems and have Gaussian distributions. However, the world is complex, so that is rarely the case. We can make parts of the world more abstract in simulation, and Kalman filters can handle that, but when dealing with real flying vehicles, such as the Crazyflie, which is considered a highly nonlinear system, it needs to be described by a nonlinear dynamics model. Additionally, the measurements of sensors in more complex and 3D situations usually don’t have a one-to-one linear relationship with the variables in the state. Can you still use the Kalman filter then, considering the earlier mentioned principles?

Luckily, certain assumptions can be made that still make Kalman filters useful in the presence of non-linearity.

  • Extended Kalman Filter (EKF): If there is non-linearity in the dynamics model, the measurement models, or both, then at each prediction and update step these models are linearized around the current state variables by calculating the Jacobian, which is the collection of first-order partial derivatives of the model with respect to the state variables (see the sketch after this list).
  • Unscented Kalman Filter (UKF): An unscented Kalman filter deals with non-linearities by selecting sigma points around the mean of the state estimate, which are propagated through the non-linear dynamics model.
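As a small illustration of that linearization step, here is a sketch in Python of a range measurement to an anchor, similar in spirit to what the Loco positioning system measures (the function names are made up for this example):

import numpy as np

# Hypothetical non-linear measurement model: the range from the Crazyflie
# at position (x, y) to an anchor at a known position
def h(state, anchor):
    return np.linalg.norm(state - anchor)

# The Jacobian of h: first-order partial derivatives with respect to x and y,
# evaluated at the current state estimate. This is the 'H' the EKF uses.
def jacobian_h(state, anchor):
    diff = state - anchor
    return diff / np.linalg.norm(diff)

state = np.array([1.0, 2.0])      # current position estimate
anchor = np.array([0.0, 0.0])     # known anchor position
print(h(state, anchor))           # predicted range measurement
print(jacobian_h(state, anchor))  # linearization used in the update phase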

However, there is also the case of non-Gaussian processes in both dynamics and measurements, and in that case a complementary filter or particle filter would be best suited. The Crazyflie contains a complementary filter (which does not estimate x and y), an extended Kalman filter and an experimental unscented Kalman filter. Check out the state-estimation documentation for more information.

So…. where is the code?

This is all fine and dandy, however… where can you find all of this in the code of the Crazyflie firmware? Here is an overview of exactly where to find it for the most-used filter of them all: the Extended Kalman Filter.

Several assumptions and adjustments have been made to the regular EKF implementation to make it suitable for flight on the Crazyflie. For those details, I’d like to refer to the papers this implementation is based on, which can be found in the EKF documentation. For a more precise explanation of Kalman filtering in general, please check out Stanford University’s lecture slides on linear dynamical systems or Linköping University’s course slides on sensor fusion.

Update: In the comments we were also notified of a nice EKF tutorial where you write the filter from scratch (github), by Prof. Simon D. Levy from Washington and Lee University. Practice makes perfect!

Next developer meeting and FOSDEM

As you might have guessed, our next developer meeting will be about the Kalman filters in the Crazyflie. Keep an eye on this discussion thread for more details on the meeting.

Also, Kimberly and Arnaud will be attending FOSDEM this weekend in Brussels, Belgium. We are hoping to organize an open-source robotics BOF/meetup there, so please let us know if you are planning to go as well!

A few years ago, we wrote a blogpost about the Commander framework, where we explained how the setpoint structure works and how it drives the Crazyflie’s controller, an essential part of the stabilization module. Basically, without these setpoints, there would not be any autonomy on the Crazyflie, let alone manual flight.

In the blogpost, we already shed some light on where different setpoints can come from in the commander framework: either from the Crazyflie Python library (externally via the Crazyradio), the high-level commander (onboard), or the app layer (onboard).

General framework of the stabilization structure of the Crazyflie with setpoint handling. * This part takes place on the computer through the CFlib for Python, so there is also a communication protocol in between. It is left out of this schematic for easier understanding.

However, we have noticed that there is sometimes confusion regarding these different functionalities and what exactly sends which setpoints, and how. These details might not be crucial when using just one Crazyflie, but they become more significant when managing multiple drones. In such scenarios, understanding how often your computer needs to send setpoints becomes crucial. Therefore, this blog post aims to explain this aspect more clearly.

Sending setpoints directly from the CFlib

Let’s start at the lower level from the computer. It is possible to send various types of setpoints directly from a Python script using the Crazyflie Python library (cflib for short). This capability extends to tasks such as manual control:

send_setpoint(roll, pitch, yawrate, thrust)

or for hover control (velocity control):

send_hover_setpoint(vx, vy, yawrate, zdistance)

You can check the automatically generated API documentation for more setpoint-sending options.

If you use these functions in a script, the principle is quite basic: the Crazyradio sends exactly 1 packet with this setpoint over the air to the Crazyflie, and it will act upon that. There are no secret threads opening in the background, and nothing magical happens on the Crazyflie either. However, the challenge here is that if your script doesn’t send an updated setpoint within a certain amount of time (default of 2 seconds), a timeout will occur, and the Crazyflie will drop out of the sky. Therefore, you need to send a setpoint at regular intervals, like in a for loop, to keep the Crazyflie flying. This is something you need to take care of in the script.
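As a minimal sketch of this pattern (the URI and thrust value are placeholders, and note that the commander must first be unlocked with a zero-thrust setpoint):

import time

import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = 'radio://0/80/2M/E7E7E7E7E7'  # adjust to your own Crazyflie

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    scf.cf.commander.send_setpoint(0.0, 0.0, 0.0, 0)  # unlock thrust protection
    for _ in range(50):
        # One packet per call; keep sending so the 2-second timeout never hits
        scf.cf.commander.send_setpoint(0.0, 0.0, 0.0, 20000)
        time.sleep(0.1)
    scf.cf.commander.send_stop_setpoint()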

You can find example scripts in the CFlib repository that send setpoints directly in this way.

Setpoint handling through the Motion Commander class

Another way to handle the regular sending of setpoints automatically in the CFlib is through the Motion Commander class. When you initialize a Motion Commander object (usually via a context manager), the Crazyflie takes off and a thread is started that continuously sends (velocity) setpoints at a fixed rate. These setpoints can then be updated by the following functions, for instance, moving forward (blocking):

forward(distance)

or giving body-fixed velocity setpoint updates (which return immediately):

start_linear_motion(vx, vy, vz, rate_yaw)

You can check the Motion Commander’s API-generated documentation for more functions that can be utilized. As there is a background thread consistently sending setpoints to the Crazyflie, no timeout will occur, and you only need to use one of these functions for the ‘behavior update’. This thread will be closed as soon as the Crazyflie lands again.
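A minimal sketch of this pattern could look as follows (the URI, height, and distances are placeholders):

import time

import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.positioning.motion_commander import MotionCommander

URI = 'radio://0/80/2M/E7E7E7E7E7'  # adjust to your own Crazyflie

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    # Entering the context manager takes off and starts the setpoint thread
    with MotionCommander(scf, default_height=0.5) as mc:
        mc.forward(0.3)  # blocking: returns when the motion is done
        mc.start_linear_motion(0.1, 0.0, 0.0)  # velocity update, returns immediately
        time.sleep(2)
    # Leaving the context manager lands and stops the setpoint thread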

There are also example scripts in the CFlib that use the Motion Commander class.

Setpoint handling through the high level commander

Prior to this, all logic and setpoint handling occurred on the PC side. Whether sending setpoints directly or using the Motion Commander class, there was a continuous stream of setpoint packets sent through the air for every movement the Crazyflie made. However, what if the Crazyflie misses one of these packets? And how does this stream scale to many Crazyflies, especially in swarms where bandwidth becomes a critical factor?

This challenge led the developers at the Crazyswarm project (now Crazyswarm2) to implement more planning autonomy directly on the Crazyflie itself, in the form of the high-level commander. With the High-Level Commander, you can simply send one higher-level command to the Crazyflie, and the intermediate substeps (setpoints) are generated on the Crazyflie itself. This can be achieved with a regular takeoff:

take_off(height)

or go to a certain position in space:

go_to(x, y)

This can be accomplished using either the PositionHLCommander, which can be used as a context manager similar to the Motion Commander (without the Python threading), or by directly employing the functions of the High-Level Commander. You can refer to the automated API documentation for the available functions of the PositionHLCommander class or the High-Level Commander class.
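A minimal sketch using the PositionHlCommander (the URI and coordinates are placeholders; this assumes an absolute positioning system such as Lighthouse or Loco is in place):

import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.positioning.position_hl_commander import PositionHlCommander

URI = 'radio://0/80/2M/E7E7E7E7E7'  # adjust to your own Crazyflie

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    # Take-off happens on entry; the trajectory substeps are generated
    # onboard, so no setpoint stream is needed from the PC while flying
    with PositionHlCommander(scf, default_height=0.5) as pc:
        pc.go_to(1.0, 0.0)  # x and y in meters; height stays at the default
    # Landing happens on exit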

Examples using either of these classes can also be found in the CFlib.

Notes on location of autonomy and discrepancies

Considering the various options available in the Crazyflie Python library, it’s essential to realize that these setpoint-setting choices, whether direct or through the High-Level Commander, can also be configured through the app layer onboard the Crazyflie itself. You can find examples of these app layer configurations in the Crazyflie firmware repository.

It’s important to note some discrepancies regarding the Motion Commander class, which was designed with the Flow deck (relative positioning) in mind. Consequently, it lacks a ‘go to this position’ equivalent. For such tasks, you may need to use the lower-level send_position_setpoint() function of the regular Commander class (see this ticket). The same applies to the High-Level Commander, which was primarily designed for absolute positioning systems and lacks a ‘go forward with x m/s’ equivalent. Currently, there isn’t a way to achieve these functionalities at a lower level from the Crazyflie Python library, as this functionality needs to be implemented in the Crazyflie firmware first (see this ticket). It would be beneficial to align these functionalities on both the CFlib and High-Level Commander sides at some point in the future.

Hopefully this helps to explain the commander framework in more detail and where the real autonomy of the Crazyflie lies when you use the different commander classes. If you have any questions on what the Crazyflie can do with these, we advise you to ask them on discussions.bitcraze.io, and we will try to point you in the right direction and give examples!

Before we start settling down and preparing for Christmas, it’s time for another release! The last one was before the summer in July, and we’ve had quite a few changes on the development master branch that we’d like to share. You can now download the latest CFclient through pip and update the Crazyflie firmware to 2023.11 via the CFclient.

Latest changes in CFclient and Cflib

The most significant change in the CFclient is that we have finally transitioned from QT5 to QT6 for the GUI graphics. Additionally, we have addressed some issues with the toolboxes. Finally, we have added an information box to indicate the state of the supervisor, such as whether the Crazyflie is considered tumbled, flying, or if a restart is required because it is locked.

CFclient when the Crazyflie is tumbled, showing the supervisor info

For the backend, namely the Crazyflie Python library, some important changes have been implemented. Along with fixes to the parameter and logging framework, full-state setpoints have been introduced. This feature has existed in the firmware for a while due to the Crazyswarm1 project (now Crazyswarm2), but it wasn’t implemented in the CFlib until now. Additionally, it’s now necessary to use `notify_setpoint_stop` when switching between high-level setpoints and regular position setpoints. There is also a generic motion capture example now, based on the libmotioncapture library.
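As a rough sketch of what that switch can look like (assuming `cf` is an already-connected Crazyflie object; the coordinates and duration are placeholders):

# After streaming low-level position setpoints, tell the Crazyflie that the
# stream ends on purpose before handing over to the high-level commander
cf.commander.send_notify_setpoint_stop()
cf.high_level_commander.go_to(0.0, 0.0, 0.5, 0.0, 2.0)  # x, y, z, yaw, duration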

Note that even though the CFclient has been converted to QT6, there are several examples in the Cflib folder that have not been updated yet. This will be fixed soon, and a ticket has been created for it. Additionally, in the Bitcraze-VM, there have been some reported issues with QT6 (see this ticket).

Latest changes in the firmware

The firmware has undergone some important changes too. On the STM side of things, the hybrid TDOA mode has been merged (check out this recent blog post). This feature is still considered experimental, so please refer to the documentation for the right settings. Additionally, support for the supervisor information box in the CFclient has been added; to use it, both the firmware and the CFclient need to be updated. There is also a new example demonstrating communication with the GAP8 through CPX. Last but not least, it is now possible to build Python bindings for portions of the Kalman filter, mainly for the Loco positioning system. The NRF firmware, on the other hand, has no added functionality except for some build changes and fixes.

Crazyradio2 + LPS tools

We’ve also made some improvements to other firmware and tools. Starting with the Crazyradio2, which includes fixes for broadcasting (important for you Crazyswarm2 folks!). We also aimed to make a new release of the LPS tools, since we heard that people were experiencing issues with USB devices. Unfortunately, there are some problems with the GitHub release actions, so that will likely be delayed. For anyone facing USB issues, you can install the LPS tools from source with Python by following the README’s instructions.

Release details and Remaining issues

So here are the details of all that is released:

Some things affected by this release still require attention, but we haven’t had the time to fix them yet:

  • Fix issues with LPS tools and release (see this ticket)
  • CFclient seems to be broken on the bitcraze-VM (see this ticket)
  • CFlib examples with QT-based GUI are still on QT5 (see this ticket)
  • The newest CFclient seems to need additional packages in some cases (see this and this ticket)

Please let us know at https://discussions.bitcraze.io if you are having more problems.

Developer meeting this Wednesday

As we already announced last week in the Monday blog post, we will be having a developer meeting this Wednesday (6th Dec, 3 pm CET) regarding the Flow deck (refer to this discussion thread for joining information). Since we usually don’t fill up the entire hour, the last part of the developer meeting is available for some generic support questions face-to-face (online), including questions about the release!

The Flow deck has been around for some time already, officially released in 2017 (see this blog post), and the Flow deck v2 was released in 2018 with an improved range sensor. Compared to MoCap positioning and the Loco Positioning System (based on Ultrawideband) that were already possible before, optical flow-based positioning for the Crazyflie opened up many more possibilities. Flight was no longer confined to lab environments with set-up external systems; people could bring the Crazyflie home and do their hacking there. Moreover, doing research for exploration techniques that cannot rely on external positioning systems was possible with it as well. For example, back in my day as a PhD student, I relied heavily on the Flow deck for multi-Crazyflie autonomous exploration. This would have been very difficult without it.

However, despite the numerous benefits that the Flow deck provides, there are also several limitations. These limitations may not be immediately familiar to many before purchasing a Crazyflie with a Flow deck. A while ago, we wrote a blog post about positioning systems in general and even delved into the Loco Positioning System in detail. In this blog post, we will explore the theory of how the Flow deck enables the Crazyflie to fly, share general tips and tricks for ensuring stable flight, and highlight what to avoid. Moreover, we aim to make the Flow deck the focus of next week’s Developer meeting, with the goal of improving or clarifying its performance further.

Theory of the Flow deck

I won’t delve into too much detail but will provide a generic indication of how the Flow deck works. As previously explained in the positioning system blog post, the Flow deck is a relative positioning system with onboard estimation. “Relative” means that wherever you start is the (0, 0, 0) position. The extended Kalman filter processes flow and height information to determine velocity, which is then integrated to estimate the position—essentially dead reckoning. This all runs in the onboard Kalman filter, enabling the Crazyflie to use the information for stable hovering.

Image from Positioning System Overview blogpost

The optical flow sensor (PMW3901) calculates pixel flow per frame (this old blog post explains it well), and the IR range sensor (VL53L1x) measures height up to 4 meters (under ideal conditions). The Kalman filter incorporates a measurement model that describes the relationship between these two values and the velocity of the Crazyflie. More detailed information can be found in the state estimation documentation. This capability allows the Crazyflie to hover, as explained in the getting started tutorial.

Image from state estimation repo documentation
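As a rough illustration of the relation between pixel flow, height, and velocity (this is not the firmware's actual measurement model, which among other things also compensates for the Crazyflie's own rotation; the focal length value is made up):

# Simplified pinhole-camera relation for a downward-looking flow sensor
FOCAL_LENGTH_PX = 100.0  # hypothetical focal length in pixels

def flow_to_velocity(flow_px_per_s, height_m):
    # A surface at height h producing a flow of u pixels/s corresponds
    # to a metric velocity of roughly u * h / f
    return flow_px_per_s * height_m / FOCAL_LENGTH_PX

print(flow_to_velocity(50.0, 0.5))  # -> 0.25 m/s

This also hints at why height matters: the same pixel flow corresponds to a larger velocity the higher you fly, so errors in the height measurement directly scale the velocity estimate.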

Tips & Tricks and Limitations

If you want to fly with the Crazyflie and the Flow deck, there are a couple of things to keep in mind:

  • Take off from a floor with texture. Natural texture like wood flooring is probably the best.
  • The floor shouldn’t be too shiny, and be aware of infrared scattering, which affects the height sensor.
  • The room should be well-lit, as the sensor needs to see the texture.

There are certain situations that the Flow deck has some issues with:

  • Low or no texture, e.g. flying above a surface that is only one plain color
  • Black areas, for a similar reason as low texture, but even trickier; especially at startup, the position estimate diverges
  • Low light conditions
  • Flying over its own shadow

We made a video that shows these types of behaviors, starting of course with the most ideal flying conditions:

Moreover, it is important to note that you shouldn’t fly too high or yaw too often. The latter will make the Crazyflie drift, as optical flow caused by the yaw movement cannot be distinguished from flow caused by translation.

Developer meeting about Flow deck

We believe that many of the issues people experience are primarily due to the invisibility of the positioning quality. In many of our examples, the Crazyflie will not take off until the position estimate is stable. However, we don’t have corresponding functionality in our CFclient, so it is up to the user to recognize when the position estimate is diverging. There is a lot of room for improvement in this regard.

This is the reason why the next developer meeting will specifically focus on the Flow deck, on Wednesday the 6th of December at 3 pm Central European Time. During the meeting, we will explain more about the Flow deck, discuss the issues we are facing, and explore ways to enhance the visibility of positioning quality. Check out this discussion thread for information on how to join.

It seems that many of you are very interested in simulation. We might have gotten the hint when we noticed that our July development meeting had our best attendance so far! Therefore, we will be planning a new developer meeting to discuss the upcoming plans for supporting simulation for the Crazyflie.

Getting Started with Simulation tutorial

Perhaps you are not aware, but there is actually a Getting Started tutorial for simulation that has been available for a little over 2 months now. Unfortunately, circumstances prevented us from writing a blog post about it, but we’ve noticed that not all of you are aware of it yet!

The getting-started tutorial demonstrates how to set up the Webots simulator, which already includes Crazyflie models and some cool examples:

  • An example where you control the Crazyflie with the keyboard
  • An example where the Crazyflie does wall following autonomously

The latter is based on the example app layer for wall-following in the crazyflie-firmware repository. Starting this year, there’s also a Python library equivalent available.

The tutorial concludes with instructions on how to edit these controllers. Alternatively, you can choose to run the files directly from the crazyflie-simulation repository. After completing the tutorial, you can explore the simulation repository documentation for more information and to access additional examples.
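To give a flavor of what such a Webots controller looks like, here is a minimal keyboard-reading sketch using Webots’ Python API. The motor device names and velocity values are assumptions for illustration; please refer to the controllers in the crazyflie-simulation repository for the real thing:

from controller import Robot, Keyboard

robot = Robot()
timestep = int(robot.getBasicTimeStep())

keyboard = robot.getKeyboard()
keyboard.enable(timestep)

# Device names are assumptions; check the actual Crazyflie PROTO/controllers
motors = [robot.getDevice(name) for name in
          ('m1_motor', 'm2_motor', 'm3_motor', 'm4_motor')]
for motor in motors:
    motor.setPosition(float('inf'))  # switch to velocity control
    motor.setVelocity(0.0)

while robot.step(timestep) != -1:
    key = keyboard.getKey()
    # Crude open-loop thrust change; a real controller closes the loop
    velocity = 55.0 if key == Keyboard.UP else 50.0
    for motor in motors:
        motor.setVelocity(velocity)  # real models alternate spin directions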

Upcoming plans

So many plans and so little time! This is a common phrase at Bitcraze, and it’s a symptom of being an overly ambitious but too small team. By the way, we are still looking for more people :). Nonetheless, we have big plans to take our Crazyflie simulation to the next level:

  • ROS 2 Crazyflie model for Webots: The Crazyflie has been a part of the Webots standard robots for 2 years now, but we still need to implement the Crazyflie into the Webots ROS 2 repository.
  • Better (new) Gazebo support: Currently, we only have a very simple example for Gazebo, which is limited to motors with no control input. Working with the C++ API can be a bit challenging, so it might be worth considering the use of ROS 2 in the loop here. Let’s see what comes out of it.
  • Integration into Crazyswarm2: Once the Webots ROS2 node has been released, integrating the Crazyflie simulation into Crazyswarm2 will become more straightforward.
  • Improvement to the Python bindings: We’ve had Python bindings for controllers and the high-level commander for a while. Recently, we also added Python bindings for the estimator (currently for loco positioning only). However, there are still some issues to address with the Python bindings for the controllers due to timing issues with the simulators.
  • Linking with our CFlib: Currently, Webots and the Crazyflie Python library use entirely different APIs. This means that scripts written for one are not compatible with the other, and you’ll need to be creative to reuse code between them. Wouldn’t it be nice to run a Python example from the Python library with a --sim flag and have it control the Crazyflie in the simulator instead?

Of course, there are probably more improvements that we haven’t thought of yet, but that’s why we have developer meetings!

Come and join us at the Developer meeting.

We will be hosting another developer meeting on November 1st at 15:00 Central European Time (accounting for the time-shift from summer to autumn). You can find details on how to join in the discussion thread here.

Just for your information, I (Kimberly) am the main driving force behind our simulation efforts. However, I’m currently on partial sick leave and will soon be on full leave for a while. I kindly ask for your patience with the pace of ongoing developments. Remember, it’s an open-source project, so if you’d like to contribute and help out, we would greatly appreciate it :)