Category: Community

Today, we welcome our first blog post by Maurice Zemp. Stay tuned for more of his adventures later!

When I started working on my Matura thesis (a mandatory project in Swiss high school), I wanted to create something that went beyond a purely theoretical exercise. I was fascinated by the idea of combining cutting-edge technology with a very tangible and exciting challenge: making a small drone fly through a racing course, composed of small gates, completely on its own and as fast as possible. Inspired by the work of the group led by Prof. Davide Scaramuzza, I set out to find new alternatives and improvements while developing my own approach from scratch.

What may sound simple at first quickly turns into a highly complex task. A drone needs to perceive its environment, process information in real time, and decide on precise actions within fractions of a second, all without human intervention. Professional drone racing pilots train for years to master this level of control. My goal was to see whether artificial intelligence could achieve something similar, using reinforcement learning as the core technique.

But there was another challenge: I wanted to design a system that wasn’t only powerful, but also affordable and reproducible. Many research institutions use equipment worth tens of thousands of francs for projects like these. I asked myself: Could I build something comparable with a fraction of the budget, and still push the boundaries of what’s possible?

That question became the driving force behind my project, which later brought me all the way to the finals of Schweizer Jugend forscht (SJf for short) and, as its main prize, earned me a place representing Switzerland at the world’s biggest youth science competition, ISEF 2026. Over the following sections, I’ll share how I built my system step by step, what it was like to present it at the competition, and how it felt when all the effort finally paid off.

Fig. 1: Me with the Crazyflie 2.1 Brushless in the halls of ETH Zurich, where the main event of SJf took place.

Motivation and Objectives

The project I presented at Schweizer Jugend forscht was the result of my Matura thesis, in which I set out to combine my interests in drones, programming, and artificial intelligence. My goal was to develop a complete system for autonomous drone racing, based on Reinforcement Learning (RL), that would not only work in simulation but could also be transferred to real-world conditions.

To achieve this, I focused on three key aspects:

  1. Building a highly efficient simulation environment for training a reinforcement learning agent.
  2. Developing a cost-effective motion-capture system (MoCap) capable of tracking a drone’s position and orientation in real time with high precision.
  3. Integrating both systems in a way that would allow seamless transfer from simulation to real-world experiments with minimal latency.

This combination made the project unique: instead of relying on expensive commercial hardware, I set out to create a solution that would be precise and affordable while still advancing the state of the art in drone racing.

Simulation Environment

The simulation was implemented in Python, using Stable Baselines, Gymnasium, and NumPy, accelerated with Numba for performance. At its core, the system employed Proximal Policy Optimization (PPO), a state-of-the-art reinforcement learning algorithm known for its stability and efficiency.

Unlike general-purpose simulators such as Gazebo, my environment was designed specifically for drone racing. It could process tens of thousands of interactions per second, enabling a training run of a few dozen minutes to correspond to nearly a year of simulated flight time.
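
Because the environment follows the Gymnasium API, training reduces to a few lines with Stable-Baselines3. Here is a minimal sketch of what such a setup can look like; the DroneRacingEnv class, its module, and the hyperparameters are placeholders, not the project’s actual code:

```python
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env

# Placeholder: a custom Gymnasium environment implementing the racing task.
# In the real system the observation contains the drone state plus information
# about the upcoming gates, and the action drives the motors.
from drone_racing_env import DroneRacingEnv

# Collect experience from several environment instances in parallel.
vec_env = make_vec_env(DroneRacingEnv, n_envs=8)

model = PPO("MlpPolicy", vec_env, verbose=1)
model.learn(total_timesteps=5_000_000)
model.save("ppo_drone_racing")
```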

Key features included:

  • A physics-based flight dynamics model accounting for thrust, drag and gyroscopic effects.
  • A carefully engineered reward function balancing speed and precision while discouraging shortcuts (see the sketch after Fig. 2 below).
  • A flexible design that allowed different gate sequences and drone parameters to be tested.
  • A multi-agent environment running on multiple threads, which substantially shortened training times.
Fig. 2: Path of the drone after two hours of training on a given track
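
To give an idea of the kind of reward shaping involved, here is an illustrative sketch; the terms and weights are made up for illustration and are not the exact function used in the project:

```python
import numpy as np

def reward(pos, prev_pos, gate_center, passed_gate, crashed,
           w_progress=1.0, w_gate=10.0, w_crash=-10.0):
    """Illustrative racing reward: progress toward the next gate,
    a bonus for passing it, and a penalty for crashing."""
    # Progress term: how much closer the drone got to the next gate this step.
    progress = (np.linalg.norm(prev_pos - gate_center)
                - np.linalg.norm(pos - gate_center))
    r = w_progress * progress
    if passed_gate:
        r += w_gate    # reward flying through the gate, not around it
    if crashed:
        r += w_crash   # discourage shortcuts that end in collisions
    return r
```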

With this setup, an RL agent could learn to complete arbitrary racing tracks in near-optimal time1 after only a few hours of training. In simulation, top speeds of up to 100 km/h were achieved, though these exceeded the physical limits of the real drone and were generated with modified drone parameters.

Motion-Capture-System

A second cornerstone of the project was the development of a low-cost motion-capture system. Instead of relying on high-end solutions such as VICON or OptiTrack (which can cost tens of thousands of Swiss francs), I built a custom setup. The drone – a Crazyflie 2.1 Nanocopter (later a Crazyflie 2.1 Brushless) – was fitted with infrared diodes. With four cameras capturing at 120 frames per second, the drone’s position was calculated in real time through triangulation. With three diodes arranged on the drone, I wanted to estimate not only the position but also the orientation. Unfortunately, the budget cameras lacked the resolution needed for this, so estimating the orientation from the markers was not feasible and the orientation was instead taken from the onboard IMU.
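
At its core, the tracking relies on standard multi-view triangulation: each camera observes the infrared marker as a 2D blob, and with known camera projection matrices the 3D position follows from a linear least-squares problem (the direct linear transform). A minimal sketch, assuming the 3x4 projection matrices have been calibrated beforehand:

```python
import numpy as np

def triangulate(projections, pixels):
    """Direct linear transform: recover a 3D point from 2D observations.

    projections: list of 3x4 camera projection matrices (from calibration)
    pixels:      list of (u, v) image coordinates of the IR marker, one per camera
    """
    rows = []
    for P, (u, v) in zip(projections, pixels):
        # Each camera contributes two linear constraints on the homogeneous point X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    # Solve A @ X = 0 in the least-squares sense via SVD.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize to get (x, y, z)
```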

Fig. 3: Motion-Capture-System Concept

System Integration

For integration, I relied on the ROS2 middleware and the Crazyswarm2 framework, which allowed the simulation and MoCap data to be processed together with minimal latency. This setup ensured that a policy trained in simulation could be executed on the real drone almost seamlessly.
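
As a rough illustration of what the command path looks like on the Crazyswarm2 side, here is a sketch modeled on the Crazyswarm2 Python examples. The waypoints are invented, the method names are assumed to match the library’s examples, and in the real system the RL policy, not a fixed list, produces the setpoints at a high rate:

```python
from crazyflie_py import Crazyswarm  # Crazyswarm2 Python bindings (ROS 2)

# Hypothetical gate positions in meters (x, y, z), purely for illustration.
GATES = [(0.5, 0.0, 1.0), (1.0, 0.5, 1.2), (0.5, 1.0, 1.0)]

def main():
    swarm = Crazyswarm()
    timeHelper = swarm.timeHelper
    cf = swarm.allcfs.crazyflies[0]

    cf.takeoff(targetHeight=1.0, duration=2.0)
    timeHelper.sleep(2.5)

    # In the actual system the trained policy generates the setpoints;
    # here we simply step through fixed waypoints to illustrate the command path.
    for gate in GATES:
        cf.goTo(gate, yaw=0.0, duration=2.0)
        timeHelper.sleep(2.0)

    cf.land(targetHeight=0.04, duration=2.0)
    timeHelper.sleep(2.5)

if __name__ == "__main__":
    main()
```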

Fig. 4: Sim2Real Integration Concept

Results & Discussion

The results demonstrated the effectiveness of the combined system:

  • In simulation, the RL agent completed tracks in nearly optimal time1, demonstrating robust generalization across different gate sequences.
  • The motion-capture system delivered millimeter-level accuracy and reliable tracking in real time despite its simplicity. (see Fig. 5)
  • In real-world tests, the Crazyflie drone successfully completed tight gate sequences, even at speeds of up to 25 km/h, with a positional deviation of only 5–12 cm compared to the simulated trajectories. (see Fig. 6)

Given that the gates were roughly A3-sized openings (38 × 29 cm), the precision was sufficient to consistently hit every gate, confirming the feasibility of transferring simulated training into real-world racing. (Video)

Fig. 5: Boxplot of the deviation for collected points on a plane in millimeters
Fig. 6: Tracked Real Life Path on a given track
Fig. 7: Realisation in my school’s physics laboratory

The system demonstrated that RL-based controllers can generalize effectively from simulation to reality, even with imperfect models. This robustness shows that RL is promising for real-world robotics applications where exact physical modeling is difficult or costly. To further investigate this robustness, I conducted ablation studies in simulation focusing on aerodynamic drag, as well as analyses of the MoCap system’s accuracy in measuring dynamic motion.

  • For the aerodynamic drag, I introduced randomness during evaluation so that conditions deviated strongly from what the algorithm had been trained on. To achieve this, I introduced a parameter k that was multiplied with the identified aerodynamic drag, to see how such differences would impact the flight (see the sketch after this list). Fig. 8 shows the deviation from the real path over the course of a flight. Since more drag obviously results in lower velocities, the two paths had to be aligned using the nearest corresponding points. The results are impressive: even with a change as drastic as k = 10, the flight still performed reasonably well, although the gates would probably no longer have been hit.
  • For the MoCap system, I used a motion that can be represented mathematically: a pendulum. I then fitted the parameters of this mathematical model and measured the deviation as a function of the current speed. There is a clear correlation between speed and deviation (see Fig. 9), most likely caused by the missing calibration of the cameras.
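
A minimal sketch of the drag scaling described above, assuming a simple quadratic drag model (the project’s environment uses its own dynamics code):

```python
import numpy as np

def drag_force(velocity, c_d, k=1.0):
    """Aerodynamic drag applied to the simulated drone.

    c_d is the identified drag coefficient; k scales it away from the value the
    policy was trained with (k = 1 reproduces training conditions, k = 10 is
    the extreme case shown in Fig. 8).
    """
    speed = np.linalg.norm(velocity)
    return -k * c_d * speed * velocity  # quadratic drag opposing the motion
```
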
Fig. 8: Results of ablation studies on aerodynamic drag
Fig. 9: Experiment to confirm correlation between deviation and speed

Conclusion and Reflection

This work demonstrates that RL, combined with accessible hardware, enables precise and robust autonomous flight in dynamic indoor environments. The results underscore the potential of low-cost robotics solutions to democratize drone research. Even though it was a large project and there were some hard times, I enjoyed working on it a lot and believe that the result and the memories made with it are even more rewarding than any prize money!

Acknowledgments

To end this guest post, I want to sincerely thank Bitcraze for their amazing work and support during the development of this project. They have genuinely built such an incredible testbed for research in autonomous drones, it’s amazing! Without them, this project wouldn’t have worked out the way it did!

  1. Near-optimal time here refers to the theoretical limit for a drone completing this track, given its parameters. To be clear, this is not the approach of fitting polynomials with constraints on their derivatives onto gate segments, as was done in earlier approaches in the 2010s. It was compared to several previous approaches by the UZH RPG, such as Optimal Control (OC). Nevertheless, this comparison could be done more extensively in the future, as time ran short in the end. ↩︎

FOSDEM is one of the largest open source conferences in Europe, happening every year in Brussels. It’s a special place where developers, researchers, and enthusiasts come together to share ideas, showcase projects, and explore the latest in open-source technology. Over the years, we at Bitcraze have been regular attendees, enjoying the chance to meet fellow open-source fans and see what’s happening across the community.

Last year, we decided to take a bigger step and helped organize a developer room. Devrooms are like mini-conferences: they are run by a committee that publishes a call for participation and manages the schedule for the room. FOSDEM allocates a time slot, a physical room, and video recording for the devroom, so that all talks are broadcast in real time and recorded. Until then, the conference lacked a dedicated devroom for robotics, which is a shame since a lot of robotics, at least in research, is open source, thanks in large part to ROS. That’s why we’re committed to helping organize a devroom focused on robotics and simulation. Being part of the developer room gives us the chance not only to attend talks but also to contribute to the schedule, help coordinate speakers, and support discussions among attendees.

The developer room is packed with exciting content for anyone interested in robotics, swarming, simulation, and more. There are presentations on new tools and libraries, and discussions about open-source projects pushing the boundaries of what small robots can do. Last year, talks ranged from advanced motion planning and multi-robot coordination to hands-on experiments with drones and embedded systems. For 2026, the schedule promises even more fascinating sessions, highlighting the latest developments in robotics software and hardware, and providing a space to exchange ideas with people working on similar challenges.

Whether you’re interested in ROS, sensor fusion, or just curious about what researchers and makers are building, the developer room is a great place to meet like-minded people and get inspired. We’ve found it to be an incredible platform for sharing knowledge, discovering new tools, and seeing how open-source contributions can accelerate innovation.

This is happening on Saturday 31st of January, from 10.30 to 18.30, in room UB2.147. You can check out the full FOSDEM 2026 Robotics and Simulation schedule here: FOSDEM 2026 schedule

We’re really looking forward to being part of the developer room again this year, and we hope to see some of you there. If you’ll be at FOSDEM, come by, say hi, and share what you’ve been working on — we can’t wait to see the amazing projects the community will bring this year!

Time flies! 2025 is already drawing to a close, and as has become a Bitcraze tradition, it’s time to pause for a moment and look back at what we’ve been up to during the year. As usual, it’s been a mix of new hardware, major software improvements, community adventures, and some changes within the team itself.

Hardware

2025 started with a bang, with the release of the Brushless! It was a big milestone for us, and it’s been incredibly exciting to see how quickly the community picked it up and started pushing it in all sorts of creative directions. From more demanding research setups to experimental control approaches, the Brushless has already proven to be a powerful new addition to the ecosystem. Alongside the Brushless, we also released a charging dock enabling the drone to charge autonomously between flights. This opened the door to longer-running experiments and led to the launch of our Infinite Flight bundle, making it easier than ever to explore continuous operation and autonomous behaviors.

Beyond new products, we spent much of the year working on two major hardware-related projects. During the first part of the year, our focus was on expanding the Lighthouse system to support up to 16 base stations. During the second part of the year, much of our effort shifted toward preparing the release of the Color LED, which we’re very much looking forward to seeing in the wild.

Fun Fridays also got some extra time in the spotlight this year. Aris produced a series of truly delightful projects that reminded us all why we love experimenting with flying robots in the first place. Have you seen the disco drone, or the claw?

Software

On the software side, 2025 brought some important structural improvements. The biggest change was the introduction of a new control deck architecture, which lays the groundwork for better identification and enumeration of decks going forward. This is one of those changes that may not look flashy on the surface, but it will make life easier for both users and developers in the long run.

We also made steady progress with the Rust Lib, moving it from a Fun Friday experiment to a fully fledged tool. It is now an integrated and supported part of our tooling, opening up new possibilities for users who want strong guarantees around safety and performance.

Another long-awaited update this year was an upgrade to the Crazyradio, improving its ability to handle large swarms of Crazyflie™ drones. This directly benefits anyone working with multi-drone setups and helps make swarm experiments more robust and scalable.

Community

It’s fair to say that 2025 was a busy year on the community and events front. We kicked things off, as usual, at FOSDEM, where we hosted a dev room for the first time! A big step for us and a great opportunity to connect with fellow open-source developers.

Later in the year, we made our first trip back to the US since the pandemic. This included sponsoring the ICUAS competition, and hosting a booth at ICRA Atlanta, both of which were fantastic opportunities to meet researchers and practitioners working with aerial robotics. We also presented a brand-new demo at HRI.

In September, KTH hosted a “Drone gymnasium“, giving students hands-on access to Crazyflies and encouraging them to explore new robotic experiences. Seeing what happens when students are given open tools and creative freedom is always inspiring, and this event was no exception.

2025 was also marked by continued and valuable community contributions. From improvements and bug fixes to features like battery compensation, these contributions play a huge role in shaping the platform and pushing it forward.

Team

Behind the scenes, Bitcraze itself continued to grow. This year brought both change and expansion within the team. Tove moved on to new adventures in Stockholm, and while we’ll miss her, we were also happy to welcome a record four new team members!

Aris transitioned from intern to full-time developer, Fredrik then joined as a growth and partnership guru, followed by Enya as an application engineer. Hampus was the last to join us, as an administrator. With these new faces, the team is larger — and stronger — than ever.

All in all, 2025 has once again been an exciting, intense, and rewarding year for Bitcraze. Thank you to everyone in the community who flew with us, built on our tools, reported issues, shared ideas, and showed us what’s possible with tiny flying robots. We can’t wait to see what 2026 brings.

For the second year, we are helping organize the Robotics and Simulation developer room (i.e. devroom) at Fosdem. This is a community effort and I, Arnaud, am part of the organization committee for this devroom. Tl;dr: last year’s edition was a great success, so propose a talk for this year!

Collage of the 2025 Robotics and Simulation Developer room

Fosdem is the biggest open source software conference in Europe. It is free to attend and happens the weekend between January and February in Brussels, Belgium.

The conference is organized with a main track as well as … devrooms. A devroom is very much like a small conference in and of itself: Fosdem allocates a room and the video recording. The dev room managers are responsible for publishing a Call for Participation to get people to propose talks, reviewing the talk proposals, and organizing the room so that all goes smoothly on the day.

Last year, we were allocated Sunday afternoon to host a Robotics and Simulation dev room. It went really well: we had so many talk proposals that we had to turn some down, and the room was so full that we had to refuse entrance to some people during talks (Fosdem rightfully takes fire safety very seriously, so we cannot pack rooms over capacity).

We are thrilled to announce that we got a full-day room this year! This will allow us to have many more talks and hopefully to reach even wider in the robotics landscape.

If you, or anyone you know, have anything interesting to talk about and present, please have a look at the Call for Participation. The deadline for proposals is the first of December, and the main requirement is that the talk covers an open source project. A talk should highlight both foundational capabilities and new AI-driven approaches, showcasing practical progress and key takeaways. The topics could include, but are not limited to, core robotics libraries and applications, frameworks for building robotics systems, simulation tools, or open-source-friendly hardware platforms. See the CfP for more details. You can also look at last year’s talks for some inspiration.

We look forward to meeting you in Brussels this year!

When flying the Crazyflie Brushless, you may have noticed something familiar: as the battery drains, the drone becomes less responsive and cannot generate the same amount of thrust it had at the start of the flight. This is because as the state of charge drops, the battery voltage decreases, and that directly affects the thrust output of the motors.

We wanted to fix this. In this post, we’ll explain why the old compensation method wasn’t ideal, how we used system identification to design a new battery compensation scheme, and how this improves thrust consistency across the entire battery range.

Motivation

The key problem is simple: a dropping battery voltage means dropping thrust for the same command. This leads to flights that start crisp and responsive but become sluggish as the battery drains.

Our goal was to make sure that, regardless of the state of charge, the actual thrust stays close to the commanded thrust. For manual flight this behavior is not always desired, though, so there will be an option to turn it off.

To illustrate the setup, here is a schematic of how the battery, PWM switching, and motor interact, effectively behaving like a simple buck converter:

This means the motor voltage can be computed by:
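
V_motor ≈ (pwm / pwm_max) · V_bat

(to a first approximation, assuming ideal buck-converter behavior, i.e. the average motor voltage is the battery voltage scaled by the PWM duty cycle)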

System Identification

To design a proper compensation, we first needed to understand how thrust relates to motor voltage. This meant running a series of experiments on the thrust stand introduced in this earlier blog post.

The first step was calibrating the load cell used to measure thrust:

Once calibrated, we measured the thrust at different applied motor voltages.

As expected, the thrust can be modeled well by a third-order polynomial in motor voltage.

Mathematically, the relationship comes from two simple facts:

  • A DC motor torque is proportional to motor voltage and inversely related to motor speed.
  • A propeller’s thrust scales approximately with the square of the rotational speed.

Combining these effects leads to a nonlinear (third-order) relation between motor voltage and thrust.
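
In practice, such a relation can be fitted directly from the thrust-stand measurements. A minimal NumPy sketch, with made-up sample values standing in for the measured data:

```python
import numpy as np

# Illustrative thrust-stand samples: applied motor voltage [V] vs. thrust [N].
# These numbers are placeholders, not the measured Crazyflie data.
voltage = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
thrust = np.array([0.02, 0.05, 0.10, 0.17, 0.26, 0.37, 0.50])

# Fit thrust as a third-order polynomial in motor voltage.
coefficients = np.polyfit(voltage, thrust, deg=3)
thrust_model = np.poly1d(coefficients)

print(thrust_model(3.2))  # predicted thrust at 3.2 V
```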

Battery Compensation

The main idea is straightforward: instead of assuming the battery voltage is constant, we explicitly account for it. We measure the battery voltage and low-pass filter it to reduce noise. Together with the necessary motor voltage from the curve above, we can then solve the equation above for the PWM value to apply:

This corrected motor voltage is then fed into our third-order model to compensate the thrust command. With this compensation, the commanded-to-actual thrust relation is now approximately linear, which is exactly what we want. We can verify this by applying thrust commands and comparing them to the actual thrust.
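
Conceptually, the per-motor compensation then boils down to a few lines. The sketch below is a Python illustration of the idea rather than the actual firmware code; the class, the filter constant, and the inverse thrust curve passed in are assumptions:

```python
import numpy as np

PWM_MAX = 65535  # 16-bit PWM range used for the Crazyflie motors

class BatteryCompensation:
    def __init__(self, thrust_to_voltage, alpha=0.1, v_bat_init=4.2):
        # thrust_to_voltage: inverse of the identified third-order thrust curve,
        # mapping desired thrust [N] to required average motor voltage [V].
        self.thrust_to_voltage = thrust_to_voltage
        self.alpha = alpha                # low-pass filter constant
        self.v_bat_filtered = v_bat_init  # filtered battery voltage [V]

    def pwm_for_thrust(self, thrust_desired, v_bat_measured):
        # Low-pass filter the noisy battery voltage measurement.
        self.v_bat_filtered = ((1 - self.alpha) * self.v_bat_filtered
                               + self.alpha * v_bat_measured)
        # Required average motor voltage for the desired thrust.
        v_motor = self.thrust_to_voltage(thrust_desired)
        # Solve V_motor = (pwm / PWM_MAX) * V_bat for pwm, then clamp.
        pwm = v_motor / self.v_bat_filtered * PWM_MAX
        return int(np.clip(pwm, 0, PWM_MAX))
```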

Dynamic Behavior

To obtain a complete parameter set of the motors and propellers, we also performed dynamic tests: commanding rapid increases and decreases in PWM and measuring the thrust response.

These dynamics are not required for the battery compensation itself, but they are very useful for accurate system identification and for simulation purposes.
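
As an example of what such a test yields, the step response is often summarized by a first-order time constant. A sketch of extracting it from logged data, assuming a first-order model and using made-up samples in place of the real measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder step-response data: time [s] and measured thrust [N] after a PWM step.
t = np.linspace(0.0, 0.5, 100)
thrust_measured = 0.3 * (1 - np.exp(-t / 0.06)) + 0.002 * np.random.randn(t.size)

def first_order_step(t, thrust_final, tau):
    """Thrust response of a first-order system to a step input."""
    return thrust_final * (1 - np.exp(-t / tau))

params, _ = curve_fit(first_order_step, t, thrust_measured, p0=[0.3, 0.05])
print(f"steady-state thrust: {params[0]:.3f} N, time constant: {params[1] * 1e3:.1f} ms")
```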

Discussion and Conclusion

The new compensation method (#PR1526) ensures that thrust is consistent across the full range of battery charge. Compared to the old approach, it is both simpler to understand and more faithful to the actual physics of the motor–propeller system. The result is flights that feel the same at the end of the battery as at the beginning.

Beyond just improving flight performance, the system identification work also provides us with a full parameter set for the Crazyflie. We are already using these parameters in Crazyflow, our new simulation tool that models the Crazyflie dynamics with high fidelity. If you’re interested in simulation or in testing new control strategies virtually, check it out!

We’re excited to hear feedback from the community and to see what you do with this new capability.

At Bitcraze, we’ve always believed in giving people the tools to explore the world of robotics, and especially flying robotics. We’re still surprised by just how many directions you’ve taken the Crazyflie platform in, and the research and innovation areas seem endless. Swarming, autonomy, edge AI, vision, navigation, mapping, coordination, and more are all areas you are interested in and are using the Crazyflie to unlock.

But what about the more human aspects of robots, and the relationships we build with these machines? What does it feel like to share space with a flying system, and how can we see drones not only as tools, but companions? And how can we help push social robots from academic theory into everyday life?

These are the kinds of questions we’ve been exploring at Bitcraze and with the Drone Gymnasium, we finally have a space designed to push those ideas further.

A Living Lab for Shared Spaces

Partially inspired by science fiction, the Drone Gymnasium is an experimental playground brought to life by our industrial postdoc Dr. Joseph La Delfa, together with Rachael Garrett, Kristina Höök, and Luca Mottola, in collaboration with KTH (Royal Institute of Technology), Digital Futures, and RISE.

It’s a living lab, where the boundaries between code, design, behavior, and imagination are blurred. A temporary, yet functional, “future lab” where people experiment with how flying robots might one day fit seamlessly into real-world environments. How they could share space with people, not just in theory, but in practice.

Students from the Physical Interaction Design course, together with our own drone experts, prototyped new robotic experiences using the Crazyflie platform, not just as flying hardware, but as social agents in motion.

From Lab to Real Life

One of the great unsolved challenges of social robotics is translation: moving from controlled lab setups into the beautiful, messy complexity of the real world.

That’s where many good ideas stumble. That’s also where the Crazyflie shines.

Open, modular, and programmable down to the bone, the Crazyflie gives researchers and innovators permission to try things. To test, break, rebuild, and then observe how it feels to share a room with a machine that moves and reacts in the same space as you.

The Drone Gymnasium is one of many ways we’re trying to support academia, not just in supplying hardware, but in co-creating learning environments where ideas around autonomy, behavior, and social interaction can be explored hands-on, in full view of the community.

Asking Better Questions

And the results are exciting. From emergent swarm behaviors to subtle gestures and sound cues, the participants in the Drone Gymnasium weren’t just building tech, they were testing social contracts. What makes a drone feel present instead of intrusive? Helpful instead of unsettling?

That’s not only an academic question. It’s a design question. And a human robotics question.

We believe spaces like this are interesting, not only to prepare the next generation of roboticists, but to ask better questions about what we’re actually building, and for whom.

We are hosting a side event at “The Drop” in our home town of Malmö, Sweden.

The Drop, brought to life by Pale Blue Dot and Domino Studio, is not just another climate tech festival. It’s a dynamic forum for scientists, investors, startups, and innovators who thrive on meaningful dialogue and next-generation problem solving.

Set inside a century-old, repurposed train workshop, The Drop combines historic ambiance with forward-looking discussions. Attendees often highlight how the event sparks collaborations, unlocks new funding opportunities, and reignites optimism for the future of climate innovation.

Side events at The Drop shift the focus from grand stages to gritty, solution-oriented collaboration. There’s a long list of pop-up gatherings in Malmö’s coffee shops, studios, and parks, where teams form around specific challenges to discuss, prototype, or model new ways of solving traditional problems.

At the side event we invite participants to a discussion about the emerging role of autonomous drones in society, not just as tools, but as extensions of our thinking, imagination, and responsibility. There will be good coffee, delicious croissants from our favourite French breakfast place Dame Ginette, real tech, hands-on experimentation, and an open conversation about how robotics can be both functional and poetic.

Although our attendee capacity has maxed out, you can still sign up and hope to secure a spot. See the side event page for more details. We hope to connect with fellow innovators eager to push the boundaries of robotics and climate tech later this week!

Today’s blogpost comes from Joseph La Delfa, who is currently doing his Industrial Post-Doc with Bitcraze.

The Qi deck and the Brushless charging dock allow you to start charging a Crazyflie quickly, without having to fiddle with a plug or a battery change. But when you need to charge 10 or more Crazyflie 2.x drones and don’t want the weight penalty of the Qi deck, some other solutions are needed.

This blog post is about a couple of chargers I made for the Crazyflie 2.x for my research prototypes. I research interaction design, which often means building something new, putting it in the hands of a user, and getting them to try it out. What is important in these scenarios is that when there is unexpected behavior, users don’t think the prototype is bugging out or broken. One way to prevent this is to build things with a higher perceived quality, which raises the expectations of the user. This can help them stay immersed in the interaction and not look over to me when there is unexpected behavior and say… “is this working properly?”

Wiring Harness for Drone Chi

This charger is essentially a pair of JST 2-pin extensions for a 1S battery charger that I soldered together and then wove through some fake hanging plants. With the drones already looking like flowers for the Drone Chi project, they blended well into the fake plants and all the wires were well hidden. When you wanted to fly, you would disconnect the battery from the wiring harness. Plus, it brings the nice experience of picking a flower from a bush before you start flying!

Magnetic Mantle Piece Charger for How To Train Your Drone

This charger allows 10 Crazyflies to charge from their USB ports, on a bit of a statement piece that lived in the lounge room of a group of friends who were participating in a month-long user study during the How to Train Your Drone project. The charger contained a powered USB hub with cables running through each of the Medusa-like tubes rising from the base. What made it special were the magnetic USB charging adapters (widely available on eBay, Amazon, Temu, etc.) that were plugged into the USB ports on the drones. These allowed you to securely place a drone on the charger in one action, as the magnetic cables integrated into the charger were always close enough to the drone when you set it down, giving you a satisfying *click* every time! They also gave off an eerie blue glow, which I think matched the Crazyflie well.

You might already be familiar with the Crazyflie’s presence in numerous publications across various research fields. However, in this blog post, we’ll return to the basics and showcase some robotics concepts that can be taught using our platform.
The Crazyflie has already found its way into several classrooms such as the “Introduction to Robotics” in the Mechanical & Aerospace Engineering Department at Princeton University, the “Introduction to Control of Unmanned Aerial Vehicles” at UC Berkeley and the “Embedded control systems” at Chalmers University of Technology.
Whether you’re designing a robotics course for undergraduates or developing advanced labs for graduate students, here are some fields where the Crazyflie can help your students grasp the fundamentals of modern robotics.

Basic Drone Principles

How does the quadcopter generate enough thrust? In which direction should the motors spin? How does the shape of a propeller affect performance?
As an introduction to drones and specifically quadcopters, students can explore these basic principles behind how they work. Then, by flying them, they can better understand the three axes of motion (roll, pitch, and yaw) and even discover limitations such as the ground effect.

Control Systems

What is the difference between controllers? How does a different controller tuning affect performance? How does an estimator work? What types of commands can be sent to a drone?
The Crazyflie platform offers a rich “playground” for exploring the stabilization process from sensor acquisition to motor control, which we often call the “stabilizer module”. This includes a variety of controllers, estimators, and commanders that can be modified and the results visualized in the real world. Also, since the firmware is open source and modular, it is relatively easy to build your own controller or estimator and integrate it into the platform.
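
For example, students can switch the active controller at runtime from Python through a firmware parameter. A minimal sketch using cflib (the URI and the controller index are placeholders to adapt to your setup):

```python
import time

import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = 'radio://0/80/2M/E7E7E7E7E7'  # adapt to your Crazyflie

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    # stabilizer.controller selects the active controller in the firmware
    # (for example 1 = PID, 2 = Mellinger); check the parameter documentation.
    scf.cf.param.set_value('stabilizer.controller', '2')
    time.sleep(1)
```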

Localization

How can a drone know its position and orientation in 3D space? What is the difference between a local and a global positioning system?
With a wide variety of deck sensors and positioning systems, students can find ways to control the Crazyflie through relative or absolute position/attitude commands. The different sensing methods used in these systems are also interesting to explore – for example, IR signals from the Lighthouse Positioning System, UWB radio from the Loco Positioning System, or optical flow data from the Flow deck v2.
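
As a concrete starting point, the onboard state estimate can be streamed to a PC with a few lines of cflib. A minimal sketch (the URI is a placeholder):

```python
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.log import LogConfig
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.crazyflie.syncLogger import SyncLogger

URI = 'radio://0/80/2M/E7E7E7E7E7'  # adapt to your Crazyflie

cflib.crtp.init_drivers()

log_conf = LogConfig(name='position', period_in_ms=100)
log_conf.add_variable('stateEstimate.x', 'float')
log_conf.add_variable('stateEstimate.y', 'float')
log_conf.add_variable('stateEstimate.z', 'float')

with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    with SyncLogger(scf, log_conf) as logger:
        for timestamp, data, _ in logger:
            print(timestamp, data)  # {'stateEstimate.x': ..., 'stateEstimate.y': ..., ...}
            break                   # a single sample is enough for this example
```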

Autonomous Navigation

How could the Crazyflie perform a collision-free trajectory? What is the most efficient way of flying from point A to point B?
In the field of autonomous navigation, the Crazyflie can be treated like any other moving robot by applying existing path planning algorithms or testing newly developed ones. Environment-aware problems are always exciting to work on and the Multi-ranger deck makes them feasible.

Swarm Robotics

What happens when you add another Crazyflie to your setup? How could multiple Crazyflies operate in a swarm? How could you make sure that they won’t collide? What is the difference between a centralized and a decentralized swarm?
Scaling a system up is always challenging but also fascinating. The examples provided in our Python library help you get a swarm in the air, but it’s up to you and your students to explore how the Crazyflies should coordinate and cooperate.
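
For reference, getting a couple of Crazyflies airborne together takes only a few lines with the Swarm helper in cflib. A minimal sketch modeled on the library’s swarm examples (the URIs are placeholders):

```python
import time

import cflib.crtp
from cflib.crazyflie.swarm import CachedCfFactory, Swarm

# Adapt these URIs to your own Crazyflies.
uris = {
    'radio://0/80/2M/E7E7E7E701',
    'radio://0/80/2M/E7E7E7E702',
}

def take_off_and_land(scf):
    hlc = scf.cf.high_level_commander
    hlc.takeoff(0.5, 2.0)  # hover at 0.5 m, reach it in 2 s
    time.sleep(3.0)
    hlc.land(0.0, 2.0)
    time.sleep(3.0)

cflib.crtp.init_drivers()
factory = CachedCfFactory(rw_cache='./cache')
with Swarm(uris, factory=factory) as swarm:
    swarm.reset_estimators()              # wait for the position estimates to settle
    swarm.parallel_safe(take_off_and_land)
```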

Small Drone, Big Educational Impact

The Crazyflie ecosystem is a fully capable robotics lab in the palm of your hand. Its flexibility, safety, and robust API support make it ideal for hands-on learning in a wide variety of robotics fields. Integrating the Crazyflie into a university robotics curriculum gives students the chance to explore, test, fail, and fly their way to mastery.

A couple of weeks ago, we were at ICRA 2025 in Atlanta. This year’s ICRA drew over 7,000 attendees, making it the biggest edition yet. We had a booth at the exhibition where we showed our decentralized swarm demo. The setup included a mix of Crazyflie 2.1+ units with Qi charging decks and Crazyflie 2.1 Brushless platforms with our new charging dock. The entire swarm operated onboard, with two Lighthouse base stations for positioning. More details about the setup can be found in the recent swarm demo blog post.

8 Crazyflies flying simultaneously in our decentralized swarm at ICRA 2025

Some of the brushless drones carried our high-powered LED deck prototype to make the swarm more visible and engaging. One of the drones also had a prototype camera streaming deck, which held up well despite the busy wireless environment.

A Different Perspective

This year we were also invited to participate in a workshop, 25 Years of Aerial Robotics: Challenges and Opportunities, where I (Rik) gave a short presentation about the evolution of positioning in the Crazyflie, from webcam-based ArUco marker tracking to the systems we use today.

Usually, we spend most of our time on the exhibition floor, so being part of a workshop like this was a different experience. It was interesting to hear researchers mention the Crazyflie in their work without needing to explain what it is. That kind of familiarity isn’t something we take for granted, and it was nice to see.

The workshop also gave us a chance to talk with both established researchers and newer faces in the field. What stood out most was hearing how people are using the Crazyflie in their work today. It’s very rewarding to see how what we do at the office connects with and supports real research.

Catching Up and Looking Around

One of the most rewarding parts of the conference was the chance to connect directly with people using the platform. We talked to many users, both current and past, and saw new research based on the platform. It was also great to reconnect with Flapper Drones, who build flapping-wing vehicles powered by the Crazyflie Bolt. And it was nice to see HopTo on the exhibition floor for the first time. The company is a spin-off from the Robotics and Intelligent Systems Lab at CityU Hong Kong, which published a Science Robotics paper on the hopcopter concept that used a Crazyflie. We also had the chance to catch up with a maintainer of CrazySim, an open-source simulator in the Crazyflie ecosystem. It’s always valuable to connect with people building on top of the platform, whether through research, hardware, or open-source tools.

Wrapping Up

ICRA 2025 was packed with activity. From demoing the swarm, to the workshop, to hallway conversations, it gave us a lot of valuable feedback and insight. Thanks to everyone who stopped by, joined a talk, or came to say hello.