Category: Guest-blogger

Today we have a guest blogpost by Simon D. Levy (Washington and Lee University) about using Haskell to rewrite parts of the Crazyflie firmware. We also have some announcements from Bitcraze itself at the bottom. Enjoy!

As Richard Feynman famously said, "What I cannot create, I do not understand." My less ambitious version of this motto is: "What I can translate into another language, I might better understand."

In order to better understand the Crazyflie firmware, I first undertook to translate the C-based firmware into C++. By separating out the general-purpose algorithmic code (state estimator, closed-loop control) from the Crazyflie-specific / Hardware Abstraction Layer (HAL) component of the firmware, I ended up with a nice, header-only C++ library of algorithms that works both with a simulator like Webots and on a Crazyflie 2.1 or Crazyflie Bolt flight controller (with the remaining Crazyflie firmware translated into the standard C++ .h/.cpp format).

As a fan of functional languages like OCaml and Haskell (and Rust, which was strongly influenced by their approach), I wondered whether I couldn’t push this idea further, to get an even higher-level implementation of the algorithms. Having played a bit with Rust and encountered issues getting it to work efficiently with C++ (something that is thankfully now being addressed), I thought it might be worth trying NASA Copilot, a Haskell extension that compiles Haskell into C with a fixed memory footprint (i.e., no malloc() or garbage collection).

My efforts paid off, resulting in a Haskell library of algorithms for closed-loop (PID) control and motor mixing that works with both my modified version of the Crazyflie firmware and a simulator like Webots. For anyone wondering “why Haskell”, I refer you to this excellent discussion of the language’s advantages (purity, elegance), as well as this classic presentation on why functional programming offers a better solution to complex tasks than the object-oriented approach. For example, the code in the LambdaFlight core module cleanly reflects the dataflow (input/output) diagram for a standard flight controller:

Here is the Haskell code corresponding to the component labeled Core in the diagram:

-- Clock rate is 500Hz for Crazyflie, 100Hz for sim
dt = rateToPeriod clock_rate

-- Make a list of PID controllers based on application order
pids = [positionPid resetPids inHoverMode dt,
        pitchRollAnglePid resetPids inHoverMode dt,
        pitchRollRatePid resetPids inHoverMode dt,
        altitudePid inHoverMode dt,
        climbRatePid inHoverMode dt,
        yawAnglePid dt,
        yawRatePid dt]

-- Run PID controllers on open-loop (stick) demands to get demands for motor mixer
demands' = foldl (\demand pid -> pid vehicleState demand) openLoopDemands pids

-- Adjust thrust component for hover mode
thrust'' = if inHoverMode then ((thrust demands') * tscale + tbase) else tmin

-- Run motor mixer on closed-loop demands to get motor spins, scaled for CF or sim
motors = quadCFMixer $ Demands thrust''
                       ((roll demands') * prscale)
                       ((pitch demands') * prscale)
                       ((yaw demands') * yscale)

By adjusting the values of the clock rate and scaling coefficients (tscale, tbase, prscale, …), the same PID controllers (with the same Kp, Ki, Kd constants) can be used for both the simulation and the actual Crazyflie.

Two pieces complete the picture.

First, how can a pure functional language like Haskell, lacking mutation/side-effects, support an algorithm like PID control, which requires keeping state variables (error integral, previous error) across iterations? The answer comes from an ingenious part of the NASA Copilot framework, namely, streams. A stream (similar to a lazy list in Haskell) can come from a module of the program written in C (for example, the vehicle state obtained by state estimation) or from a previous value passed into the function. This feature allows Copilot to have functions that possess state while still being “pure” in the sense of being amenable to formal verification (mathematical proof of correctness; see this thread for a discussion). Although I don’t have the mathematical background to do formal verification proofs, the ability to prove the code correct is an extremely powerful feature of languages like Haskell and is what led NASA to develop the Copilot framework for its space missions.
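If you have not seen this pattern before, a rough analogy (sketched here in Python rather than Copilot, purely for illustration) is a PID step written as a pure function that takes the previous state as an argument and returns the updated state along with its output, so no mutation is ever needed:

# Illustrative analogy only: the PID state (error integral, previous error)
# is threaded through the function instead of being stored in a mutable
# object. Copilot streams let the Haskell code do the equivalent.
def pid_step(state, error, dt, kp=2.0, ki=0.5, kd=0.1):
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return (integral, error), output        # new state, controller output

state = (0.0, 0.0)
for error in [0.5, 0.4, 0.2]:               # made-up error samples
    state, u = pid_step(state, error, dt=0.002)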

Second, how can Haskell be combined with C/C++ without running into the performance issues typically encountered when calling code in one language from code in another? As I alluded to before, NASA Copilot solves this issue nicely by compiling the Haskell code to C code with a fixed memory footprint. Declaring certain C/C++ variables to be global makes them accessible to the Haskell code as streams, and causes them to be compiled into the resulting C code, which can then be compiled and linked to the Crazyflie (or simulator) C/C++ code in the usual way (thanks to the Crazyflie Makefile’s use of the Kbuild system). Hence, compiling the whole project into a form suitable for flashing with make cload requires just the following chain of commands (where make links creates links to some external libraries I wrote for the sensors):

make cf2_defconfig && make links && make copilot && make -j 32

My current work focuses on two directions: first, I am translating the Crazyflie’s Kalman filter into Haskell, a more ambitious undertaking, but one I feel more confident about now that the core control algorithms are complete. Second, I am looking for ways of modifying the most popular robotics simulators (Webots, Gazebo) to work with dynamics (physics-engine) code custom-written for quadcopters and other aerial vehicles (like the dynamics code I wrote for this simulator), as well as faster control loops.

Announcements

There are some announcements that are not part of this guest blogpost that we’d like to share.

  • The 350 mAh batteries are out of stock and unfortunately have a very long lead time at our manufacturer, so you can expect them to be back in stock in early May. Please take a look at this blogpost, which compares this battery to the Tattu alternative, if you can’t wait and need a similar battery now.
  • We have a Bitcraze developer meeting this Wednesday to talk about the supervisor safety functionalities in the crazyflie-firmware. Please keep an eye on the GH discussion thread for information on how to join.

Today we have a guest blogpost by Thomas Izycki (Technische Hochschule Augsburg) and Klaus Kefferpütz (Ingolstadt Technical University of Applied Sciences, formerly Augsburg Technical University of Applied Sciences) talking about implementing Software-in-the-Loop for swarm simulation of the Crazyflie.

When our cooperative control lab at the Technical University of Applied Sciences Augsburg was founded a few years ago, our goal was to develop distributed algorithms for teams of UAVs. We quickly decided to use Crazyflies with the algorithms directly implemented in the firmware, thus having a platform for a truly decentralized system. Our ongoing projects focus on cooperative path planning, navigation, and communication.

Since working with several drones at once, which have to communicate and coordinate with each other, can quickly become confusing and very time-consuming, a simulation was needed. It should preferably offer the possibility to integrate the firmware directly in the simulation environment and ideally also offer an interface to the Crazyflie Python API. With the relocation of our laboratory from Augsburg to the Technical University of Applied Sciences Ingolstadt, which does not yet have a permanently established flying space, the need for a simulation environment increased further, to make us less dependent on hardware accessibility. A look at the community and the available simulations quickly led us to the sim_cf flight simulator for the Crazyflie: a fantastic project supporting the use of the actual Crazyflie firmware in software-in-the-loop (SITL) mode, and even in hardware-in-the-loop (HITL) mode on a real Crazyflie, using the FreeRTOS Linux port together with ROS and Gazebo. Unfortunately, the project has not been maintained for several years and also had no integration with the Crazyflie Python API. After a short chat with Franck Djeumou and his agreement to use the code of the original sim_cf simulation, the project was ready for an upgrade.

sim_cf2 Flight Simulator

With support for ROS coming to an end, we decided to migrate the project to ROS2 and additionally support the current version of the Crazyflie firmware, which at this time is release version 2023.11. With the addition of a driver to connect the cflib to the SITL process, the same Python cflib-based scripts can now be used both with real Crazyflies and in the simulation environment; only the corresponding driver needs to be loaded during initialization. Overall, the focus of the sim_cf2 simulation is now on using the Crazyflie Python API instead of commanding the Crazyflies via ROS.
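As a rough sketch of what that looks like from the user’s side (the simulation URI below is only a placeholder; check the sim_cf2 documentation for the exact scheme its driver registers), the very same script can fly either target:

# Minimal cflib sketch: the same script flies a real Crazyflie or a SITL
# instance; only the URI (and therefore the loaded link driver) changes.
import time
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.positioning.motion_commander import MotionCommander

URI = 'radio://0/80/2M/E7E7E7E7E7'   # real drone; swap for the sim_cf2 URI in simulation

cflib.crtp.init_drivers()            # registers all available link drivers
with SyncCrazyflie(URI, cf=Crazyflie()) as scf:
    with MotionCommander(scf, default_height=0.5) as mc:   # takes off on enter
        mc.forward(0.3)
        time.sleep(1.0)
    # MotionCommander lands the drone automatically when the block exits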

As for the Crazyflie firmware, the whole build process has been fully integrated into the firmware’s KBuild build system. This also allows the use of the same code base for simulation and execution on the real Crazyflie. Depending on the configuration, the firmware is compiled either for the STM32F405 on the Crazyflie or for the host system running the SITL.

Components of the sim_cf2 Flight Simulator

To run the simulation, the three modules must therefore be started separately. Gazebo is started using the main launch file. The number and initial pose of the simulated Crazyflies are also defined in this file. Once the firmware for the host system has been built, the desired number of SITL instances can be started using the attached script. Finally, a cflib-based script, the Crazyflie client, or the multi-agent client presented below is started. With no radio dongles attached to the computer, the simulation driver is initialized automatically and a connection to the simulated Crazyflie can be established.

There are still a few open issues, including the absence of implemented decks for positioning, such as LPS and Lighthouse. Currently, the absolute position is sent from Gazebo to the SITL instance and fused in the estimator. Moreover, Gazebo requires quite a lot of computing power. We were able to run a maximum of four Crazyflies simultaneously on a relatively old laptop. However, a modern desktop CPU with multiple cores allows for simulating a significantly larger number of Crazyflies.

Multi-Agent Client

Independent of the simulation we designed a GUI for controlling and monitoring our multi-agent teams. It currently supports up to eight Crazyflies but could be upgraded for bigger teams in the future. So far it has been enough for our requirements.

Multi-Agent Client with six connected Crazyflies

A central feature is the interactive map, which makes up about half of the GUI. This is a 2D representation of the flight area with a coordinate system drawn in. Connected Crazyflies are displayed as small circles on the map, and after selecting one with its corresponding button, a new target position can be assigned by clicking on the map. If required, obstacles or paths to be flown can also be drawn into the map.

Pseudo-decentralized communication

An important aspect of a truly decentralized system is peer-to-peer communication. It allows information to be exchanged directly between agents and ideally takes place without a central entity. Currently, peer-to-peer communication is not available in the Crazyflie ecosystem, but it is in development.

For this reason, we have implemented a workaround in our client, enabling a pseudo-decentralized communication system. This involves adding an additional layer to the Crazy RealTime Protocol (CRTP), which we named the Multi-Agent Communication Protocol (MACP). It consists of an additional packet header made of the destination ID, source ID and a port as endpoint identifier. Every ID is unique and directly derived from the Crazyflie’s address.

These MACP packets are sent via the unused CRTP port 0x9. The packet routing mechanism implemented in the client forwards the packets to their destination. It is also possible to send packets as a broadcast or to address the client directly.
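To make the idea concrete, here is a small client-side sketch of what building such a packet could look like with cflib. The one-byte field widths and their ordering are assumptions for illustration, not the project’s published layout:

# Hedged sketch: prepend a MACP-style header (destination ID, source ID,
# MACP port) to the payload and send it on the otherwise unused CRTP port 0x9.
import struct
from cflib.crtp.crtpstack import CRTPPacket

MACP_CRTP_PORT = 0x09

def make_macp_packet(dest_id, src_id, macp_port, payload=b''):
    pk = CRTPPacket()
    pk.set_header(MACP_CRTP_PORT, 0)     # CRTP port 0x9, channel 0
    pk.data = struct.pack('<BBB', dest_id, src_id, macp_port) + payload
    return pk

# e.g. cf.send_packet(make_macp_packet(dest_id=3, src_id=1, macp_port=0, payload=b'\x2a'))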

On the firmware side, we have added a corresponding interface to simplify the sending and receiving of MACP packets. It is analogous to the CRTP implementation and allows the registration of callback functions or queues for incoming packets on corresponding ports. It can be activated via KBuild.

At least for use in the laboratory and the development of distributed algorithms, this method has proven its worth for us.

Project

We hope that the sim_cf2 simulation can also be useful to others. The complete source code is available on GitHub. Further information concerning installation and configuration can be found in the readme files in the respective repositories.

We would also like to point out that other simulations have been created that are based on sim_cf and therefore offer similar functionality. One of these projects is CrazySim, also available on GitHub. Moreover, there are ongoing efforts to officially integrate such a software-in-the-loop simulation into the Crazyflie firmware and ecosystem.

sim_cf2:

https://github.com/CrazyflieTHI/sim_cf2

Multi-Agent Client:

https://github.com/CrazyflieTHI/crazyflie-client-multi-agent-thi

Show-and-tell post of CrazySim:

https://github.com/orgs/bitcraze/discussions/995

For the original sim_cf Flight Simulator for the Crazyflie visit:

https://github.com/wuwushrek/sim_cf

Today, we welcome Dimitrios Chaikalis from New York University to talk about their project of cooperative flight. Enjoy!

For our work in cooperative flight, we developed controllers for many tightly coupled drones to fly as a unit. The idea is that, either in a centralized or decentralized manner, it should be possible to treat drones as thrust force and yaw moment modules, in order to allow many small drones to carry objects too heavy for a single one to lift.

It quickly turned out that the Crazyflies, with their small size, open-source firmware, ROS compatibility, and, as we happily found out after hours upon hours of crashes, amazing durability, would be the perfect platform to test our controllers.

We designed and 3D-printed very lightweight, hollow connecting rods that could latch onto Crazyflies on one side, along with a number of lightweight polygons such as squares and hexagons with housings for the other side of the rods on all their faces. This allowed us to seamlessly change between geometric configurations and test our controllers.

We first tested with some symmetric triangle and quad formations.

The above is probably literally the first time our cooperative configuration achieved full position control
The tests on quad-copter configurations started as we transitioned to fully modular designs

Eventually, to make the controller generic, we developed a simple script that could deduce with some accuracy the placement of drones given a small lexicographic description submitted by the tester as a string, essentially denoting a sequence of rods and polygons utilized in the current configuration. Of course, some parameters such as rod lengths, or additional weights that we added to the system (such as a piece of foam attached to the structure), could not be known in advance, but the adaptive controller design ensured that the overall system could still achieve stable flight.

Strangely, the L shape has become a sort of ‘staple’ configuration in cooperative load transportation

We also proved that with more than 3 drones in a configuration, we could optimize the thrusts of the agents such that additional performance criteria could be met. For example, in an asymmetric configuration of 5 drones, one of them had a significantly more depleted battery. Crazyflies provide real-time battery voltage feedback, so we were able to use that in an optimization node running in Matlab on a ground computer, choosing thrust levels such that the depleted agent could be utilized less. This was a significant help, because in many of those experiments, the Crazyflies had to operate at more than 80% of their thrust capacity, so battery life optimization was of the essence.
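The sketch below illustrates the allocation idea in Python with made-up numbers (the actual optimization node ran in Matlab with the full adaptive controller in the loop, and the moment balance of the rigid structure is only crudely modeled here): agents with lower battery voltage are penalized more heavily, so they receive a smaller share of the required thrust.

# Toy sketch: distribute a required total thrust over five agents while
# penalizing low-voltage agents, subject to zero net roll/pitch moment.
# Positions, voltages and limits are made-up illustrative values.
import numpy as np
from scipy.optimize import minimize

pos = np.array([[0.3, 0.0], [-0.3, 0.0], [0.0, 0.3], [0.0, -0.3], [0.0, 0.0]])  # x, y [m]
volt = np.array([3.9, 3.8, 4.0, 3.7, 3.3])          # last agent has a depleted battery
T_TOTAL, F_MIN, F_MAX = 2.0, 0.2, 0.55              # required thrust, per-agent limits [N]

weights = 1.0 / (volt - 3.0)                        # lower voltage -> higher penalty
cost = lambda f: float(np.sum(weights * f ** 2))
constraints = [{'type': 'eq', 'fun': lambda f: np.sum(f) - T_TOTAL},   # total thrust
               {'type': 'eq', 'fun': lambda f: pos.T @ f}]             # zero net moment
result = minimize(cost, x0=np.full(5, T_TOTAL / 5), bounds=[(F_MIN, F_MAX)] * 5,
                  constraints=constraints, method='SLSQP')
print(result.x)                                     # depleted agent is commanded the least thrust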

We used ROS for all the code written for the above implementations, using the Crazyflie-ROS package in order to get battery and IMU readings from all drones and provide thrust and roll, pitch, and yaw rate commands at up to 100Hz.

The corresponding publication can be found here: https://link.springer.com/article/10.1007/s10846-023-01842-1

In case you want to build on our work, you can cite the above paper as such:

D. Chaikalis, N. Evangeliou, A. Tzes, F. Khorrami, ‘Modular Multi-Copter Structure Control for Cooperative Aerial Cargo Transportation‘, Journal of Intelligent & Robotic Systems, 108(2), 31.

YouTube Link: https://www.youtube.com/watch?v=nA41uJIehH8&t=1s

Today, Suryansh Sharma from TU Delft presents the open-source Gimbal they devised. Enjoy!

Crazyflies (and other drones in this weight class) are extremely fun to fly and prototype with! But if you are also a scientist or tinkerer and not a well-skilled drone pilot, then you might struggle with flying these platforms, especially when testing new control loops or experimental code. While crashing also teaches a lot about the behavior of the system, sometimes we are interested in seeing the system dynamics without breaking the drone.

Currently, doing this for such small drones is not easy. We need something lightweight and still accessible. To solve this, we made Open Gimbal: a specially designed 3 degrees of freedom (DoF) platform that caters to the unique requirements of these tiny drones. We make two versions: (a) a Tripod version, which can be mounted on a camera/light tripod with a screw thread of size 1/4-20 UNC or 3/8-16 UNC, and (b) a Desktop version, which can be placed on a tabletop.

Our approach focuses on simplicity and accessibility. We developed an open-source, 3-D printable electro-mechanical design that has minimal size and low complexity. This design facilitates easy replication and customization, making it widely accessible to researchers and developers. The platform allows for unrestricted and free rotational motion, enabling comprehensive experimentation and evaluation. You can see the movement from the CAD version below:

Degrees of Rotational freedom that Open Gimbal provides

You can also check out the interactive CAD model and see how the gimbal moves here. All of the 3D model files as well as the BOM and instructions for assembly can be found in our repository here.

In our publication, we also address the challenges of sensing flight dynamics at a small scale. To do so, we have devised an integrated wireless batteryless sensor subsystem. Our innovative solution eliminates the need for complex wiring and instead uses wireless power transfer for sensor data reception. You can read all about how we do this in our paper here.

If you do end up using the platform for research then you can cite us using the details below:

@ARTICLE{10225720, author={Sharma, Suryansh and Dijkstra, Tristan and Prasad, Ranga Venkatesha}, journal={IEEE Sensors Letters}, title={Open Gimbal: A 3 Degrees of Freedom Open Source Sensing and Testing Platform for Nano- and Micro-UAVs}, year={2023}, volume={7}, number={9}, pages={1-4}, doi={10.1109/LSENS.2023.3307121}}

I hope that you find the Open Gimbal useful! Feel free to reach out to me at Suryansh.Sharma@tudelft.nl if you have any ideas/questions or if you end up making an Open Gimbal yourself!

Today, Vivek Adajania from the Learning Systems and Robotics Lab writes about a project on safe motion planning for Crazyflie swarms that was published at ICRA 2023. Enjoy!

Motivation

Quadrotor swarms offer significant potential in applications like search and rescue, environmental mapping, and payload transport due to their flexibility and robustness compared to single quadrotors. The core challenge in these applications is collision-free and kinematically feasible trajectory planning. As the quadrotors share space, they must safely manoeuvre around each other and avoid collisions with static obstacles. Existing solutions [1] [2], while effective for generating collision-free trajectories, often struggle in densely cluttered scenarios due to simplifying approximations.

Background

There are two groups in the literature on optimization-based quadrotor swarm motion planning: centralized and distributed approaches. In a centralized setup, a central computer solves a joint optimization problem that computes trajectories for all quadrotors at once. These approaches have a broad solution space but quickly become computationally intractable as the number of quadrotors increases. On the other hand, the distributed approach involves each quadrotor independently solving its own optimization problem and incorporating trajectories shared by the neighbouring quadrotors. This strategy offers improved scalability, yet existing distributed approaches struggle in cluttered environments.

Fig. Centralized and distributed planning approach to quadrotor swarm motion planning. The arrows indicate the flow of communication.

In this work, we adopt a distributed planning strategy. The independent optimization problem that needs to be solved by each of the quadrotors in the distributed setup is a non-convex quadratically constrained quadratic program (QCQP). This nature of the problem stems from non-convex and quadratic collision avoidance constraints and kinematic constraints.

Existing distributed approaches rely on sequential convex programming (SCP), which performs conservative approximations to obtain a quadratic program (QP). First, the collision avoidance constraints are linearized to obtain affine hyperplane constraints. Second, the kinematic constraints are decoupled axis-wise to obtain affine box constraints. We obtain a QP, but with small feasible sets.

Fig. Conservative approximations made by Sequential Convex Programming (SCP) based approaches.

Proposed Approach

In contrast, our proposed approach obtains a QP without relying on the previously mentioned approximations. The first ingredient is the polar reformulation of collision avoidance and kinematic constraints. An example of the 2D polar reformulation of collision avoidance constraints is shown below:

Fig. Example illustration of polar reformulation of 2D collision avoidance constraints.

The second ingredient is to relax the reformulated constraints as l-2 penalties in the cost function and apply Alternating Minimization. Alternating Minimization results in subproblems that are convex QPs, some of which have closed-form solutions, thus obtaining a QP form without relying on linearization; further details can be found in our paper [3]. We can also use and reformulate alternative collision avoidance constraints, such as barrier function (BF) constraints.
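A typical barrier constraint of this kind (a generic sketch of the standard barrier condition; the exact formulation used in [3] may differ in detail) is

dhij/dt ≥ -γ hij,   or, in discrete time,   hij(k+1) - hij(k) ≥ -γ hij(k),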

where hij is the Euclidean distance between quadrotor i and quadrotor j, and the parameter γ controls how fast quadrotor i is allowed to approach the boundary of quadrotor j.

Results

We experimentally demonstrate our approach on a testbed of 12 Crazyflie 2.0 quadrotors in challenging scenes: obstacle-free, obstacle-rich, and a shared workspace with a human. The experimental video is provided below:

In the simulation, we compare our approach against two SCP approaches: SCP (Continuous) [2] enforces constraints across the entire horizon, while SCP (On-demand) [1] enforces them only at the first predicted collision. Ours (Axiswise) includes box kinematic constraints, while Ours (Quadratic) preserves the original quadratic constraints.

From our simulation results, we see that SCP (On-demand) has a lower compute time than SCP (Continuous), as SCP (On-demand) enforces fewer constraints. But this lower compute time comes at the expense of success rate. On the contrary, our approaches achieve a high success rate with low compute times. Ours (Quadratic) has a slightly higher success rate than Ours (Axiswise) as it has access to larger kinematic bounds.

Fig. Simulation results from 100 start-goal configurations with swarm sizes ranging from 10 to 50 in a cluttered environment with 16 cylindrical static obstacles.

Fig. Simulation results from 100 start-goal configurations with swarm sizes ranging from 10 to 50 and three different γ values in a cluttered environment with 16 cylindrical static obstacles.

On average, our approaches achieved a 72% success rate improvement, a 36% reduction in mission time, and 42x faster per-agent computation time. Our approach trades off mission time against inter-agent clearance and distance to obstacles via BF constraints.

Outlook

In this work, we presented an online and scalable trajectory planning algorithm for quadrotor swarms in cluttered environments that does not rely on linearizing the collision avoidance constraints or on axis-wise decoupling of the kinematic constraints. We do so by reformulating the quadratic constraints into a polar form and applying alternating minimization to the resulting problem. Consequently, our planner achieves higher scalability and lower computation times than existing approaches. We also show that we can reformulate barrier function constraints to introduce safety behaviours in the swarm. One direction for future work is to extend the approach to navigate a swarm in complex 3D environments.

References

[1] Luis, Carlos E., Marijan Vukosavljev, and Angela P. Schoellig. “Online trajectory generation with distributed model predictive control for multi-robot motion planning.” IEEE Robotics and Automation Letters 5.2 (2020): 604-611.

[2] E. Soria, F. Schiano and D. Floreano, “Distributed Predictive Drone Swarms in Cluttered Environments,” in IEEE Robotics and Automation Letters, vol. 7, no. 1, pp. 73-80, Jan. 2022, doi: 10.1109/LRA.2021.3118091.

[3] V. K. Adajania, S. Zhou, A. K. Singh and A. P. Schoellig, “AMSwarm: An Alternating Minimization Approach for Safe Motion Planning of Quadrotor Swarms in Cluttered Environments,” 2023 IEEE International Conference on Robotics and Automation (ICRA), London, United Kingdom, 2023, pp. 1421-1427, doi: 10.1109/ICRA48891.2023.10161063.

Links

The authors are with the Learning Systems and Robotics Lab at the University of Toronto and the Technical University of Munich. The authors are also affiliated with the Vector Institute for Artificial Intelligence and the University of Toronto Robotics Institute (RI) in Canada and the Munich Institute of Robotics and Machine Intelligence (MIRMI) in Germany.

Feel free to contact us with any questions or ideas: vivek.adajania@robotics.utias.utoronto.ca. Please cite this as:

@INPROCEEDINGS{
adajania2023amswarm, 
author={Adajania, Vivek K. and Zhou, Siqi and Singh, Arun Kumar and Schoellig, Angela P.}, 
booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)}, 
title={AMSwarm: An Alternating Minimization Approach for Safe Motion Planning of Quadrotor Swarms in Cluttered Environments}, 
year={2023}, 
pages={1421-1427}, 
doi={10.1109/ICRA48891.2023.10161063} 
}

This early summer my research group (Center for Project-Based Learning at ETH Zürich) was in charge of a special week – high school students from all over Switzerland (actually even the world, though they had to speak German) could apply for a study week at different departments of our university. The departments which joined this initiative were mathematics, physics, biology, environmental sciences, material sciences, and our department, electrical engineering and information technologies (ITET).

But how do you show teenagers between 15 and 19, in one week, as much as possible of electrical engineering while also having fun? And, ideally, inspire them to study at ITET? Our solution was: drones. More specifically, Crazyflies. With those we had many possibilities to learn about electrical engineering – from sensors, microcontrollers, timers, and motors to LEDs, batteries, embedded systems, FreeRTOS tasks, state estimation, and controllers – and all this with a high fun potential and a low risk of accidents, as with their weight of only 30 g they hardly ever do any damage. In this blog post, I will guide you through our week, in the hope of helping others who also want to use the Crazyflie to teach students about electrical engineering in a fun way.

Monday

We started in the afternoon (in the morning they had a welcoming tour) with a short introduction and splitting the 20 students into groups of two (everyone got a paper slip and had to find the matching one: accelerometer/gyroscope, pitch/roll, UART/SPI, and so on – this gives the lecturer a great opportunity for interaction with the students later on, once their word becomes relevant during the week). After a short introduction to programming and microcontrollers we moved on to the most classic beginner task: blink an LED! We chose to use the front left one, as this one is only used when communicating – so as long as we don’t connect to the drone we can observe exactly what we programmed. Most students got the LED turned on rather quickly – however, pulsing the LED to change the intensity took them some more time and forced them to learn how to write loops. They had also already learned how PWM works without realizing it – setting the intensity of an LED or the strength of a motor is about the same thing in the end, after all, and this gave us a great start for Tuesday.

Tuesday

On Tuesday we looked at hardware from different perspectives. As you might have guessed, we looked at motors and how to control them with PWM and timers. The students were a bit disappointed that we still didn’t fly, but as soon as they realized that they could play their favorite song on the motors the motivation was high again! We didn’t even have any stray drones, even though we let them mount the propellers (the songs sound much better with propellers).
We also looked at another aspect of electrical engineering: PCB design (this part was already done for them) and soldering. For this, we prepared a custom deck with four colored LEDs, which could be populated with solder paste, as in industry, and then soldered on a hot plate. To make it even more fun (and partly to show off our laser cutter), they also designed a small plastic diffuser that could be mounted on top. So in the end our setup resembles the LED ring – however, it can be mounted on top, which is essential if you don’t want to fly with a positioning system (and therefore need to mount a Flow deck).

Wednesday

Trying out the state estimation

Now that we knew how to blink LEDs and, even more importantly, how to control motors, we had to learn a bit about how the drone actually figures out which motor should be turned on and by how much. For this, we first looked at sensors and wired communication protocols, such as I2C (for the IMU and time-of-flight sensor), SPI (for the optical flow sensor), and UART (to the nRF) – due to limited time we didn’t go into details here, though. We also briefly touched on wireless communication, to explain how the commands they would later send to the drone (and the firmware) actually end up on the right drone.

Tuning PIDs

We moved on to what state estimation is in general – again skipping the details of the extended Kalman filter – but had a closer look at the logging and parameter system. We then spent a bit more time on the PID controller, which was also a bit hard to explain, as half the teenagers hadn’t learned how to integrate and differentiate yet. However, they learned fast and we could move on to the part they had been waiting for all week: flying (and tuning the PID).

Thursday

Transport drone

This was the day meant for creativity – the students could choose for themselves which project they wanted to tackle. We suggested some ideas, such as blinking LEDs depending on the height, flying through a gate (the challenge here is to filter the height measurements when the gate border is below the drone), steering the drone with the keyboard, soldering their own sensor onto a break-out board, …

In the end, we saw many cool projects: from a song played as a canon on multiple motors to a transporter drone, a successful flight through the gate, a successful loop (unfortunately with no successful landing yet…), races against each other (possibly with disco lights on the deck), and attempts at reaching max speed in the hallway.

The hallway was very popular to fly, as the distance to accelerate was longer…
Practice on the race parkour

The most popular base project was to steer the drone with the keyboard – unfortunately (or fortunately? they sure had fun with it once it was running and they might have learned enough in the remaining week…), this was very easy after we showed them where Marcus’ script lives (here) and which 8 lines (170-177) they have to remove for it to work without an AI-deck (and don’t forget to adapt the URI)…

Friday

Quite an audience!
Flying was forbidden – but playing music with motors was not!

On Friday it was presentation day – in the morning they could still work on their projects, but in the afternoon all 120 students (and most of their parents) came together in a huge lecture hall to present what they did during the week. And, as at a real conference, they had posters and their drones (which we, unfortunately, were not allowed to fly without a fireproof net… we will organize this next time) to show their projects to family, friends, and even random tourists (the entrance hall of the ETH main building is on many sightseeing tours).

At the end of the week I doubted the robustness of Crazyflies for a moment – however, on Monday morning, once I had peace and quiet again, it took me less than an hour to figure out what was wrong with all the hardware that had ended up on the “not working for unknown reasons” stack (and to fix almost all of it). Notes to all others and my future self for the next time I give 10 drones to 20 teenagers:

  • If you show them how to tune a PID, also explain that “persistent” means exactly what it says – if you mess up the PID values and persist them they will stay this way until you reset them, no matter how often you reflash the drone.
  • Explain how fragile connectors are and that you NEVER should pull at cables. Also, mention 10x more to be careful when plugging in decks. And radios.
  • Keep one “private” drone no one is allowed to mess with – it will help greatly to figure out if they only broke the flow deck connectors or something more serious (which actually never happened)
  • Doing only warm boots and setting individual addresses with CLOAD_CMDS while flashing saved us a lot of trouble; randomly connecting to drones only happened once they discovered the phone app…

The coding tasks (and at least some minimal solutions) can be found on my fork: Tasks and solutions. They are kept short on purpose – we at the Center for Project-Based Learning believe in our name – we believe the most learning (and fun) happens when you rather freely explore what you can do with the basic tools you just learned.

P.S. For completeness – I cut out all the parts which really had nothing to do with Crazyflies, we also did lab tours in the high-voltage laboratory and the laboratory for optical communication – and of course had some social events with actual university students. As much as we like the Crazyflie, even we have to admit that the field of electrical engineering is even bigger than what we can show with those tiny drones ;)

Today, Lennart Bult from Emergent Swarms presents their project of a 24/7 swarming demo. Enjoy!

Over the last few months our team has been working on creating a 24/7 swarming demo. Initially tasked by Guido de Croon and Chris Verhoeven from TU Delft MAVLab and the TU Delft Robotics Institute, we set out to find our way within the Crazyflie ecosystem to gradually increase the size and capabilities of the swarm. In this article we will first talk about some of the work and methods that we used. After that, we will introduce the TU Delft Science Centre Swarming Lab and talk about some applications of swarming drones.

Developing the 24/7 swarm

The project started in February with the goal of creating a physical swarm capable of real-time collision avoidance with drones and static obstacles. We started out with three drones equipped with the Flow Deck, and by setting them up in a clever way we could perform the first collision avoidance and landing tests. We were impressed with the performance we got out of the Flow Deck; eventually, however, it is mostly a battle against the drift of the position estimate: we could only increase the collision avoidance margins so far before we would either fly out of the test zone or collide with another drone. Luckily, with short test flights, we were able to see some of the flaws in our algorithms and correct them before testing with the new setup.

Setup after the first expansion to eight drones.

After a few weeks of testing we got approval for the first swarm expansion: five more drones and a Lighthouse positioning setup. This is when we could do our first real tests with the collision avoidance algorithm, which, much to our own surprise, worked on the first try. This is also when we first posted a project update on LinkedIn. There were, however, a lot of bugs that still needed to be worked out, and a lot of system experience still to be gained. After flying for a bit longer we noticed that some of the drones would flip quite often, which is when we discovered that we needed the thrust upgrade to handle the additional weight of the larger battery and charging deck.

For the charging setup we took inspiration from the Bitcraze IROS 2022 demo; we 3D printed sloped landing pads that we tape onto a wireless charger. After a few iterations we landed on a design that uses minimal printer resources and allows the Crazyflie to land a bit off-center. This last feature turned out to be quite useful considering the large amount of destabilizing airflow that is generated by 40 drones. After receiving the last order of drones we also expanded the charging setup, which at this point takes up quite a bit of floor space. There are some ideas to create a vertical landing pad stack, which would bring the additional challenge that missing the landing pad is not an option.

All 40 drones recharging before their next flight.

After prototyping the charging setup and building confidence with the initial setup, we were confident enough in our system’s capabilities to expand it to the point where a continuous demo of 5-8 drones is possible. Although the system integration of the previous expansion went without much trouble, we did encounter a few issues when expanding to 40 drones. The first issue was radio communication: we noticed that a delay in the radio communication would appear if we increased the update rate above a certain level for a given number of drones per radio. The second issue we encountered was performance drops related to the violation of certain bounds in the collision avoidance algorithm. These two issues were very difficult to debug since it was not immediately obvious where the source of the issue was.

The third and last major issue was the increase in destabilizing airflow of 40 drones compared to 8. With 40 drones there is a noticeable breeze when you stand next to the drone cage, which is nice for summertime, but not so nice when drones need to land in a tightly packed configuration. To combat this issue there is a limit on the number of drones that can land at the same time. There is also a minimum separation distance between two active landing pads, which reduces the severity of the induced turbulence. There are still ongoing efforts to increase the landing success rate, which is currently affected by drones running out of power during the landing procedure.

To control and monitor the swarm we designed a custom GUI, an impression of which you can see below. Although some of the buttons are still a work in progress, there are a lot of features that have already proven very useful, especially when testing a new feature.

V1 of the graphical user interface developed for the 24/7 swarm.

The code base that we created for the swarm will be largely open-sourced (only the collision avoidance will not be open-source) to provide researchers all around the world with the possibility to set up their own Crazyflie swarm for research. You can find the repository through this link. Note that the documentation and code base are still under development and might contain bugs/errors.

Human interaction

After creating all the functionality needed for a continuously operating swarm demo, it was time to work on some of our stretch goals: 1. walking through the swarm whilst it is operating, and 2. controlling the swarm using our arms. In the image below you can see an impression of precisely this functionality. The drones are following the operator’s gesture commands whilst performing live collision avoidance with the operator.

Team member Seppe directing the 40 drone swarm, see the full video here.

This demo requires multiple techniques and hardware elements working together to create a relatively low-latency, human-controlled swarm. We used a Kinect-like 3D sensor to perform human pose estimation, and we subsequently used this data to create a dynamic obstacle in our collision avoidance software. An important element to consider here is the synchronization of the Lighthouse and 3D sensor coordinate frames: without proper calibration, the human will not be correctly positioned with respect to the drones, and the drones will crash into the human. The interaction between the swarm control software and the human gesture commands also requires careful consideration; proper tuning is required to ensure a responsive system that is reliable and not too aggressive.
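A minimal sketch of that frame-alignment step (with a made-up calibration matrix and variable names; the real system obtains the transform from its own calibration procedure) could look like this:

# Sketch: express a human position measured in the 3D sensor frame in the
# Lighthouse frame via a calibrated homogeneous transform, then hand it to
# the collision-avoidance layer as a dynamic obstacle. T_lh_sensor is made up.
import numpy as np

T_lh_sensor = np.array([[0.0, -1.0, 0.0, 1.2],
                        [1.0,  0.0, 0.0, 0.5],
                        [0.0,  0.0, 1.0, 0.0],
                        [0.0,  0.0, 0.0, 1.0]])   # sensor frame -> Lighthouse frame

def to_lighthouse_frame(p_sensor):
    p = np.append(np.asarray(p_sensor, dtype=float), 1.0)   # homogeneous coordinates
    return (T_lh_sensor @ p)[:3]

human_in_lh = to_lighthouse_frame([0.4, 0.1, 1.7])
# ...treat human_in_lh as the centre of a dynamic obstacle in the avoidance code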

TUD Science Centre Swarming Lab

The next step in this project will be to set up the swarm at its new location, the TU Delft Science Centre. Here, the swarm will first and foremost be visible as a public demo, showcasing the capabilities of TU Delft’s state-of-the-art swarming research. There will also be a focus on developing the swarm as a research platform. This will allow TU Delft students and researchers to extend swarm functionalities and test their theory on a physical swarming system. Besides demos and academic research, work will also be done on developing educational applications across the full educational range (primary school, high school, and applied education). If you are interested in working on, or collaborating with, the swarming lab on any of the above-mentioned tasks, feel free to email the lab management at operations.swarminglab@tudelft.nl.

The TU Delft Swarming Lab setup with 40 drones and charging pads for continuous operations and research.

Applications of Swarming

There are a lot of potential use-cases for fully autonomous drone swarms, ranging from indoor applications such as warehouse monitoring and factory inspection to outdoor applications such as search and rescue and surveillance. In our opinion, the true potential of drone swarms lies in applications where there is a significant need for a scalable system with a lot of built-in redundancy. A lot of additional use cases open up when we consider fully onboard autonomous systems, where the full benefits of decentralized swarming can be utilized. Currently, the size of drones needed to achieve such feats is quite large, though maybe in a few years, we could see more and more being done on drone platforms such as the Crazyflie.

A swarm inspection of an F-16 Fighting Falcon at Deltion College in Zwolle, the Netherlands.

An interesting area of application for drone swarms could be in the inspection of aircraft. Drone swarms provide a scalable and flexible means to perform a fast inspection of aircraft across an entire airfield or military base. To showcase that this can be done with any size of drone, we went to Deltion College in Zwolle to perform a mock inspection of an F-16 fighter jet. Above you can see an impression of the inspection. Another area of application is search and rescue, where there is a need for systems that can find people or objects of interest in unknown and cluttered environments. Furthermore, the area that needs to be searched is usually very large and sometimes difficult to travel on foot. A drone swarm could provide fast and reliable coverage of the area of interest, whilst providing full data traceability. Seppe and Lennart will work on creating drone swarms for these use cases with the start-up Emergent Swarms.

Today, our guest Airi Lampinen from Stockholm University is presenting the second Drone Arena Challenge. Enjoy!

Welcome to the second Drone Arena Challenge, a one-of-a-kind interactive experience with Bitcraze’s Crazyflie! This year, the challenge is focused on moving together with drones in beautiful, curious, and provocative ways – without needing to write a single line of code!

Moving with drones. Image credit: Rachael Garrett.

What, when, where? The event takes place May 16-17, 2023 at KTH’s Reactor Hall in Stockholm – a dismantled nuclear reactor hall – which provides a unique setting for creative human–drone encounters. You don’t need your own drone or to be able to program a drone to participate! We will provide the drone equipment (a Crazyflie 2.1 equipped with the AI-deck) and take care of everything necessary to make them fly. What you need to do is be creative and move together with the drones to set up the best show you can deliver! There’ll be a jury judging the final performances and we have exciting prizes for the most successful teams!

Drone Arena in the Reactor Hall. Picture from the first challenge, held in June 2022. Picture credit: Fatemeh Bakhshoudeh

Who can join? Anyone irrespective of training, profession, and past experience with drones or performing arts is welcome to participate. Participants need to be at least 18 years old. If you are curious about how technology and humans may play together, enthusiastic about the Crazyflie, or eager to learn how to move with the Crazyflie, this event is for you. We welcome up to 10 pairs (teams of 2 people) to participate in the challenge.

Registration is already open, with only a few spots remaining. We encourage those interested to sign up as soon as possible to secure their spot!

Program & prizes? On the first day of the hackathon, we will host a keynote speaker and a short information session to explain what participants are expected to do and what support is available for them. The teams will then have access to the Reactor Hall to work on the challenge and explore moving with their drone – we offer long hours but each team is free to choose how much they want to work. (The goal here is to have a good time!) The competition itself takes place on the second day. We’ve got exciting prizes for the most successful teams!

Read more about the challenge, the prizes, and how to sign up on our website: http://www.dronearena.info/

The event is organized as a part of the Digital Futures demonstrator project Digital Futures Drone Arena led by Luca Mottola from RISE and Airi Lampinen from Stockholm University.

This week’s guest blogpost is from Matěj Karásek from Flapper Drones, about flying the Nimble + with a positioning system. Enjoy!

Flapper Drones are bioinspired robots flying by flapping their wings, similar to insects and hummingbirds. If you haven’t heard of Flappers yet, you can read more about their origins at TU Delft and about how they function in an earlier post and on our company website.

In this blogpost, I will write about how to fly the Flappers (namely the Flapper Nimble+) autonomously within a positioning system such as the Lighthouse, and will of course include some nice videos as well.

The Flapper Nimble+ is the first hover-capable flapping-wing drone on the market. It is a development platform powered by the Crazyflie Bolt and so it can enjoy most of the perks of the Crazyflie ecosystem, including the positioning systems as well as other sensors (check this overview). If you would like to get a Flapper yourself, just head to the Bitcraze webstore, where there are some units ready to be shipped! (At the time of writing at least…)

Minimal setup

The minimal setup for flying in a positioning system is nearly identical to that of a standard Crazyflie. Next to a Flapper with recent firmware, a Crazyradio dongle, a positioning system (in this post we will use the Lighthouse), and a compatible positioning deck (Lighthouse deck), you will also need: 1) a mount, such that the deck can be attached on top of the Flapper, and 2) a set of extension cables. You can 3D print the mounts yourself (models here); the extension cable prototypes can either be inquired about from Flapper Drones, or soldered by yourself (in that case, the battery holder deck, standard Crazyflie pin headers and some wires come in handy). Just pay attention to connect the cables in the correct way, as if the deck were mounted right on top of the Bolt. The complete setup with the Lighthouse deck will look like this:

Lighthouse deck installation on a Flapper Nimble+. Make sure the extension cables are well secured (e.g. by using the additional cable mount) such that they don’t get caught by the gears.

For the Lighthouse, as with regular Crazyflies, the minimum number of base stations (with some redundancy) is 2, but you will get a larger tracking volume with more base stations. 4 base stations mounted at 3 m height will give you about 5 meters by 5 meters of coverage, which is recommended especially if you want to fly more than 1 Flapper at a time (they are a bit larger than the Crazyflies, after all…).
From now on, it is exactly the same as with standard Crazyflies. After you calibrate the Lighthouse system using the standard wizard procedure via the cfclient, you can just go to the Flight Control Tab and use the “Command Based Flight Control” buttons to take off, command steps in xyz directions, and land. It is this easy!

Flapper Nimble+ in Lighthouse flown via Command Based Flight Control of cfclient
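If you prefer scripting, the same take-off / step / land sequence can also be done from a few lines of cflib-based Python (a minimal sketch; the URI is a placeholder for your own Flapper’s address):

# Minimal sketch: the same take-off / xyz-step / land sequence as the
# "Command Based Flight Control" buttons, using the high-level commander.
import time
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = 'radio://0/80/2M/E7E7E7E7E7'                        # placeholder address

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie()) as scf:
    scf.cf.param.set_value('commander.enHighLevel', '1')  # enable the high-level commander
    hlc = scf.cf.high_level_commander
    hlc.takeoff(1.0, 3.0)                                 # rise to 1 m over 3 s
    time.sleep(3.0)
    hlc.go_to(0.5, 0.0, 1.0, 0.0, 3.0, relative=True)     # 0.5 m step in x
    time.sleep(3.0)
    hlc.land(0.0, 3.0)
    time.sleep(3.0)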

Assisted flight demo

We used this setup in February for the demos we were giving at the Highlight Delft festival in the Netherlands. This allowed people with no drone piloting skills (from 3-year-olds to grandmas – true story) to fly and control the Flapper in a safe way (safe for the Flapper, as the Flapper itself is a very safe platform thanks to its soft wings and low weight). To make it more fun, and even safer for the Flapper, we used a gamepad instead of on-screen buttons, and we modified the cfclient slightly such that the flight space can be geofenced to stay within the tracking volume.

Flight demo at Highlight Delft festival, using the Lighthouse and position hold assistance

If you would like to try it yourself (it works also with standard crazyflies), the source code is here (just keep in mind it is experimental and has some known bugs…). To fly in the position-assisted mode, you need to press (and keep pressing) the Alt 1 button, and use the joysticks to move around (velocity commands, headless mode). Releasing the Alt 1 button will make the Flapper autoland. Autoland will also get triggered when the battery is low. You can still fly the Flapper in a direct way when pressing Alt 2 instead.

Flying more Flappers at a time

Again, this is something that works pretty much out of the box. As with a regular Crazyflie, you just need to assign a unique address to each of the Flappers and then use e.g. this example Python script to run a preprogrammed sequence.
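Stripped to its essentials (and with placeholder addresses), such a script can be as small as the sketch below, using cflib’s Swarm helper to run the same sequence on every Flapper in parallel:

# Minimal sketch of a preprogrammed multi-Flapper sequence with cflib's Swarm
# helper: every drone takes off, hovers, and lands in parallel.
import time
import cflib.crtp
from cflib.crazyflie.swarm import CachedCfFactory, Swarm

URIS = ['radio://0/80/2M/E7E7E7E701',     # placeholder addresses
        'radio://0/80/2M/E7E7E7E702',
        'radio://0/80/2M/E7E7E7E703']

def sequence(scf):
    scf.cf.param.set_value('commander.enHighLevel', '1')
    hlc = scf.cf.high_level_commander
    hlc.takeoff(1.0, 3.0)
    time.sleep(5.0)
    hlc.land(0.0, 3.0)
    time.sleep(3.0)

cflib.crtp.init_drivers()
with Swarm(URIS, factory=CachedCfFactory(rw_cache='./cache')) as swarm:
    swarm.parallel_safe(sequence)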

With a few extra lines of code, we pulled this quick demo at the end of the Highlight Delft festival, when we had 30 minutes left before packing everything (one of the Flappers decided to drop its landing gear, probably too tired after 3 evenings of almost continuous flying…):

Sequence with 3 Flappers within Lighthouse positioning system

Other positioning systems

Using other positioning systems is equally easy. In fact, for the Loco Positioning system, the deck can even be installed directly on the Flapper’s Bolt board (no extension cables or mounts are needed). As for optical motion tracking, we do not have experience with Qualisys and the active marker deck, but flying with retro-reflective markers within an OptiTrack system can be set up easily with just a few hacks.

When choosing and setting up the positioning system, just keep in mind that due to its wings, the Flapper needs to tilt much more to fly forward or sideways, compared to a quadcopter. This is not an issue with the Loco Positioning system (but there can be challenges with position estimation, as described further), but it can be a limitation for systems requiring direct line of sight, such as the Lighthouse or optical motion tracking.

Ongoing work

In terms of control and flight dynamics, the Flapper is very different from the Crazyflie. Thus, for autonomous flight, there remains room for improvement on the firmware side. We managed to include the “flapper” platform into the standard Crazyflie firmware (in master branch since November 2022, and in all releases since then), such that RC flying and other basic functionality works out of the box. However, as many things in the firmware were originally written only for a (specific) quadcopter platform, the Crazyflie 2.x, further contributions are needed to unlock the full potential of the Flapper.

With the introduction of “platforms” last year, many things can be defined per platform (e.g. the PID controller gains, sensor alignment, filter settings, etc.), but the extended Kalman filter, for example, and specifically the motion model inside, has been derived and tuned for the Crazyflie 2.x, and is thus not representative of the Flapper with its very different flight dynamics. This is what directly affects (and currently limits) autonomous flight within positioning systems – it works well enough at hover and in slow flight, but the agility and speed achievable in RC flight cannot be reached yet. We are planning to improve this in the future (hopefully with the help of the community). The recently introduced out-of-tree controllers and estimators might be the way to go… To be continued :)

Thanks Matej! And for those of you at home, don’t forget that we have our dev meeting next Wednesday (the 5th), where we’ll discuss the Loco positioning system, but also take some time for general discussions. We hope to see you there!

This week’s guest blogpost is from Florian Goralsky from Bok o Bok about their dance piece with multiple Crazyflies. Enjoy!

Flying bodies across the fields is a contemporary dance piece for four performers and a swarm of drones, exploring the phenomenon of the disappearance of bees and the use of pollinating drones to compensate for this loss. The piece attempts to answer this crucial question in a poetical way: can the machine create life and save us from ecological disaster?

Novembre Numérique à l’IFCI © M studio

We’re super excited to talk about a performance that we’ve been working on for the past two years in collaboration with Bitcraze. It premiered at the Environmental Forum, Centre Pompidou Paris, in 2021, and we’ve had the opportunity to showcase it at different venues since then. We are happy to share our thoughts about it!

Choreographic research

Beyond symbolizing current attempts to use drones to pollinate fields, the presence of the Crazyflie drones supports the back and forth between nature and technology. We integrate a swarm performing complex choreographies that refer to the functioning of a beehive, including the famous “bee dance” discovered by Karl von Frisch, which is used to transmit information about food sources. Far from having a spectacular performance as its only goal, the synchronization of autonomous drones highlights bio-inspired computing techniques focused on collective intelligence.

© bok o bok

Challenges within a dance performance

Making a dance performance with drones requires high accuracy and adaptability, both before and during the show. Usually, we only have a few hours, sometimes even a few minutes, to set up the system according to the space. We quickly realized we needed pre-recorded choreographies, and hybrid choreographies where the pilot could have a few degrees of freedom on top of pre-defined behaviors.

GUI Editor + Python Server

Taking this into account, we developed a web GUI editor that is able to send choreographies created on any device to a WebSocket Python server. The system supports any absolute positioning system (we use the Lighthouse), and then converts all the setpoints and actions into calls to the Crazyflie API’s HighLevelCommander class. This system allows us to create, update, and test complex choreographies in a few minutes on various devices.
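As a rough sketch of that conversion step (field names and the surrounding server code are hypothetical; the real editor and server handle many more actions), each decoded choreography step is turned into a high-level commander call:

# Sketch: convert decoded choreography steps (e.g. JSON received over the
# websocket) into HighLevelCommander calls. Field names are made up.
import time

def play_choreography(scf, steps):
    hlc = scf.cf.high_level_commander
    hlc.takeoff(steps[0]['z'], 2.0)
    time.sleep(2.0)
    for step in steps:
        hlc.go_to(step['x'], step['y'], step['z'],
                  step.get('yaw', 0.0), step['duration'])
        time.sleep(step['duration'])
    hlc.land(0.0, 2.0)

# play_choreography(scf, [{'x': 0.0, 'y': 0.0, 'z': 1.0, 'duration': 2.0},
#                         {'x': 0.5, 'y': 0.5, 'z': 1.2, 'duration': 3.0}])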

Preview position of six drones at a certain time.
Early support of the CompressedTrajectories format, with cubic Bézier curves.

What is next?

We are looking forward to developing more dancer-drone interactions in the future. This will involve, in addition to the Lighthouse system, other sensors, in order to open up new possibilities: real-time path-finding, obstacle avoidance even during a recorded choreography (to allow improvisation), etc.

Novembre Numérique à l’IFCI © M studio