crazyflie

We built a small drone for people who want to understand how things fly. The community took it considerably further than that. The citations keep arriving from directions we didn’t anticipate. Spacecraft dynamics. Tactile human-swarm interaction. Onboard deep learning. Mapping algorithms that fit inside a nano-drone’s compute budget. The platform’s combination of openness, known dynamics, repeatable behavior, and low replacement cost turns out to be useful for a wider set of problems than any single team could have imagined building for.

What follows is by no means a comprehensive survey, but rather a selection of research areas where the Crazyflie has found a home, each illustrated with recent work. We find it genuinely interesting that the same hardware can be useful across this range, and we hope it gives other researchers a sense of what is possible.

1. Decentralized Multi-Agent Coordination and Swarm Control

Multi-agent coordination is probably the research area most closely associated with the Crazyflie, and for good reason. The platform’s light weight, predictable dynamics, and relatively low cost per unit make it practical to run experiments with enough agents to observe emergent swarm behavior, rather than just simulating it. A lab can field a meaningful swarm without the capital outlay that larger platforms would require.

Recent work has pushed this in some interesting directions. Decentralized approaches, where each agent makes decisions based on local information rather than a central planner, are particularly well-served by a platform where individual failures don’t cascade into catastrophic system loss. Research on collision avoidance, formation control, and consensus algorithms benefits from hardware that can absorb the crashes that inevitably happen when you are testing novel coordination strategies.

See “Design and Implementation of EPM Based Modular Micro-UAVs for Autonomous Midair Docking” IEEE paper (11248819) for an example of custom hardware extending the Crazyflie.

The ROS 2 ecosystem around the Crazyflie has matured considerably, with frameworks like Crazyswarm2 enabling standardized multi-drone experiments that other labs can replicate. The reproducibility this enables is meaningful: a coordination result demonstrated on Crazyflies in one lab can be reproduced in another (see “CrazyChoir: Flying Swarms of Crazyflie Quadrotors in ROS 2” (arXiv)).

2. Onboard AI and Edge Inference at Nano Scale

What can you fit inside a couple of dozen grams and still have compute left over for intelligence? Quite a lot, it turns out, especially when researchers are motivated to find out. The AI Deck, which adds a GAP8 system-on-chip with a camera and Wi-Fi, opened a wave of work on fully onboard perception and inference pipelines on nano-UAVs.

Researchers at ETH Zurich demonstrated autonomous visual navigation along a 113-meter previously unseen indoor path using a convolutional neural network (CNN) running at 18 Hz on the GAP8, without any external computation (see “An Open Source and Open Hardware Deep Learning-powered Visual Navigation Engine for Autonomous Nano-UAVs” (arXiv)).

More recently, work using custom expansion decks with the GAP9 processor has enabled onboard SLAM and scan-matching at take-off weights around 46 grams, showing that the platform’s expansion architecture makes it a meaningful target even as compute capabilities grow. See “Ultra-Lightweight Collaborative Mapping for Robot Swarms” (arXiv).

The Crazyflie’s transparent hardware design is important here: researchers can build custom decks, verify the power budget, and integrate new silicon without waiting for a vendor to offer an approved configuration.

3. Spacecraft and Orbital Dynamics Simulation

Researchers at the University of Houston, in collaboration with the US Air Force Research Laboratory, used Crazyflie drones to simulate the relative motion dynamics of spacecraft in formation, specifically the Clohessy-Wiltshire equations that describe how objects move relative to each other in near-circular orbit.

The reasoning is practical: testing spacecraft autonomy on-orbit is expensive and high-risk. Ground-based testbeds using air bearings exist, but are complex and space-intensive. A small fleet of Crazyflies, running scaled versions of orbital trajectories in an indoor lab, offers a much cheaper and more accessible way to validate formation-flying control laws and neural network guidance systems before committing to hardware that will be launched into space.

The paper notes exactly why the Crazyflie was chosen: it is affordable, open source, readily available, and the expansion deck ecosystem provides positioning, sensing, and even AI capabilities that can be configured to match a specific experimental requirement (see “Testing Spacecraft Formation Flying with Crazyflie Drones as Satellite Surrogates” (IEEE)).
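For readers unfamiliar with them, the Clohessy-Wiltshire equations are simple enough to propagate numerically: in the chief's local frame, ẍ = 3n²x + 2nẏ, ÿ = −2nẋ, z̈ = −n²z, where n is the chief's mean motion. The sketch below is illustrative Python (not the paper's code); the 120-second "orbit" and initial conditions are made up to suggest a lab-scale trajectory.

```python
import math

def cw_accel(state, n):
    """Clohessy-Wiltshire relative-motion derivatives.

    state = (x, y, z, vx, vy, vz) in the chief's LVLH frame,
    n = mean motion of the chief's circular orbit (rad/s).
    """
    x, y, z, vx, vy, vz = state
    ax = 3.0 * n * n * x + 2.0 * n * vy   # radial
    ay = -2.0 * n * vx                    # along-track
    az = -n * n * z                       # cross-track (harmonic)
    return (vx, vy, vz, ax, ay, az)

def rk4_step(state, n, dt):
    """One fixed-step RK4 integration step of the CW dynamics."""
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = cw_accel(state, n)
    k2 = cw_accel(add(state, k1, dt / 2), n)
    k3 = cw_accel(add(state, k2, dt / 2), n)
    k4 = cw_accel(add(state, k3, dt), n)
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Lab-scale example: one "orbital period" compressed to 120 s.
# vy0 = -2*n*x0 gives a bounded (closed) relative orbit.
n = 2 * math.pi / 120.0
state = (0.5, 0.0, 0.3, 0.0, -2 * n * 0.5, 0.0)  # meters, m/s
dt = 0.05
for _ in range(int(120 / dt)):   # propagate one full period
    state = rk4_step(state, n, dt)
```

With the bounded initial condition, the relative orbit is periodic, so after one full period the state returns (numerically) to where it started, which is a handy self-check for any scaled trajectory generator.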

4. Reinforcement Learning: From Simulation to Real Hardware

Reinforcement learning (RL) for drone control has been a thriving research area for years, but the gap between simulation and physical hardware remains a hard problem to close. The Crazyflie’s well-documented dynamics, consistent manufacturing, and open firmware have made it a preferred target for sim-to-real transfer research, because the sim and the real thing can be brought into close agreement.

Work in this space spans a wide range of problem settings. Multi-agent RL for collision-free navigation, safe RL with control barrier functions, landing on moving targets, and agile trajectory following in cluttered environments have all been demonstrated on Crazyflie hardware. A common thread is that the platform’s low inertia and predictable response make it a fair test: there is nowhere to hide on a platform this light and responsive, and if the policy is sloppy, it falls (see “AttentionSwarm: Reinforcement Learning with Attention Control Barrier Function for Crazyflie Drones in Dynamic Environments” (arXiv)).

The “LEARN framework”, which claims to run a compact attention-based RL policy on six Crazyflies for multi-robot navigation through 0.2-meter gaps at 2 m/s, is a recent example of how far this line of work has come. The system runs fully onboard, using only time-of-flight sensors, and transfers directly from simulation to real hardware without fine-tuning. See “LEARN: Learning End-to-End Aerial Resource-Constrained Multi-Robot Navigation” (arXiv).

5. Human-Swarm Interaction and Expressive Robotics

An unexpected corner of the research community to adopt the Crazyflie is the human-robot interaction field. It turns out that a swarm of small, quiet, slow-moving drones is a better vehicle for studying how humans interpret and respond to group robot behavior than many ground robot alternatives.

Work in this space ranges from the technical to the almost philosophical. Researchers have studied whether humans perceive swarm motion as intentional and communicative; whether vibrotactile feedback can give operators an intuitive sense of swarm state during physical interaction; how flight formation shapes emotional perception; and how to design impedance-controlled swarms that respond naturally to human touch (see “SwarmTouch: Tactile Interaction of Human with Impedance Controlled Swarm of Nano-Quadrotors” (arXiv)).

The Crazyflie’s low injury risk in the event of a collision, its predictable behavior, and its ability to carry sensing and communication payloads make it well-suited to user studies where physical proximity and spontaneous human response are important variables. The fact that the platform is widely available also matters: HRI research benefits from results that can be reproduced in different lab environments with different participant populations.

See “Demonstrating How to Train Your Drone” IEEE paper (10973956) for an example of humans shaping interactions with drones.

A Note on What Makes This Possible

Looking across these five areas, a pattern emerges. In each case, the research is not about the Crazyflie itself. The platform is a means, not an end.

What the Crazyflie provides is a credible physical substrate that researchers can trust to behave consistently, modify freely, and replace cheaply when something goes wrong. The open source firmware means the dynamics are fully inspectable. The transparent hardware means the platform can be extended with custom decks. The stable software ecosystem means results from one year’s experiments can be compared against another year’s, and against results from other labs using the same platform.

If your research uses the Crazyflie in a direction not represented here, we’d like to hear about it. The research portal at bitcraze.io/portals/research lists some of what we know about, but the community is larger and more inventive than any curated list can capture.

It’s that time of year again! ICRA 2026 (IEEE International Conference on Robotics & Automation) is just around the corner, and this year we’re heading to Vienna. We couldn’t be more excited about this one: Vienna is an incredible city, and we’ve been working on some things we can’t wait to share.

June 1–5, 2026. Come find us!

A reproducible testbed for aerial robotics research

We will be running a live autonomous flight system based on the Crazyflie platform.

The focus is not the flight itself, but what it enables. The system provides a controlled indoor environment where experiments can be repeated, variables isolated, and results compared over time.

This is aligned with how aerial robotics research is actually conducted: iteration speed, reproducibility, and observability matter more than scale in early and mid-stage research. Our platform is designed around those constraints.

Autonomous indoor flight for controlled experimentation

The setup demonstrates autonomous flight under conditions that remain stable across runs.

This allows researchers to evaluate control strategies, perception pipelines, and multi-robot coordination without environmental noise dominating results. It also reduces costs and operational overhead compared to larger platforms, which changes how frequently experiments can be run.

In practice, this makes it feasible to move from idea to validated result faster and with clearer insight into failure modes.

Used in swarm robotics, control, and physical AI research

The Crazyflie platform is used across domains such as swarm robotics, learning-based control, SLAM, and human–robot interaction.

It has been referenced in hundreds of peer-reviewed publications and is often used as a bridge between simulation and larger systems. The value is not in representing the final deployment environment, but in enabling rigorous, comparable experimentation at low cost and risk.

If you are working in these areas, we are interested in how your setup is structured and where constraints appear.

Share your work with us

If you are presenting work that involves the Crazyflie, we would like to see it.

Even better, if you do not need your poster after your session, bring it by the booth! We collect and display these as part of the broader body of work built on the platform. We will make sure it is appreciated properly.

Meet us at ICRA 2026

One of our favourite things about ICRA is getting to meet the community in person, hearing about your research, seeing what you’ve built with the Crazyflie, and exchanging ideas with people who are just as excited about small flying robots as we are. Whether you want to chat about your research, see the demo up close, or just catch up, our booth is the place to be. We love hearing about all the cool projects you’re working on with the Crazyflie, so don’t be shy!

If you are working with the Crazyflie, evaluating platforms, or exploring new research directions, stop by booth 91. You can also reach out at contact@bitcraze.io to schedule time.

We’re happy to announce that release 2026.04 is now available. This update introduces a dedicated CRTP port for the supervisor subsystem and a radio startup gate for more reliable early connections, along with a number of smaller bug fixes and quality-of-life improvements. Alongside the release, we’re launching a new crazyflie-demos repository with self-contained examples demonstrating real-world use cases. Thanks to our community contributors for their valuable additions to this release.

Major changes

Demos repository
The new dedicated crazyflie-demos repository hosts more complex examples that combine features or subsystems to demonstrate best practices and real-world use cases. Every demo in the new repository is self-contained, with pinned software versions.

CRTP supervisor port
A new dedicated CRTP port provides direct access to the supervisor subsystem, consolidating supervisor-related commands that had historically ended up on unrelated ports. Arming, crash recovery, and emergency stop commands now go through this port instead of being spread across the platform and localization ports. The old ports still accept these commands for backward compatibility. The new port also exposes supervisor state (armed, flying, tumbled, crashed, etc.) through a direct query, so clients no longer need to set up a log block just to check supervisor status.

Radio startup gate
The STM32 now signals the nRF51 when it’s ready to receive radio packets. Previously, if a client connected during boot, packets could arrive before the firmware was ready to handle them, causing lockups.

Lighthouse calibration saving fix
Fixed a bug that made saving lighthouse calibrations unreliable: a signature mismatch in a memory-read failure callback could leave the memory subsystem locked, blocking further reads until reconnect.

Release overview

crazyflie-firmware release 2026.04 GitHub

crazyflie2-nrf-firmware release 2026.04 GitHub

cfclient (crazyflie-clients-python) release 2026.4 GitHub, PyPI

cflib (crazyflie-lib-python) release 0.1.32 GitHub, PyPI

AI coding agents have become increasingly useful lately. The main reason, as far as I can understand, is that agents like Claude Code can close the loop: they can produce code, test it, and iterate. This is critical because models will make mistakes, and the feedback loop allows them to iteratively correct problems and usually converge on a working solution.

When trying to use coding agents with embedded systems, I quickly found myself becoming a manual tester, copy-pasting logs and describing behavior back to the agent. I was the one closing the loop, which is both inefficient and frustrating. So I started looking for ways to improve that.

Control by CLI

One of the great strengths of coding agents is that they can close the loop through the command line. They can invoke CLI tools, and by assembling them together they can achieve far more than any single tool would allow; this is essentially the Unix philosophy applied to AI-assisted development.

The most effective way to extend an agent’s capabilities that I’ve found so far is to build dedicated command line tools and let the agent use them. I ran a couple of experiments with dev boards where I had the agent create a small Python tool to control the board. The minimum useful functionality was: flash firmware, observe the console output, and reset the board. With just those three capabilities, the agent gains the ability to iterate almost entirely on its own.
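To make the shape of such a tool concrete, here is a hypothetical Python sketch of the command surface described above. Everything in it (the `board-tool` name, the subcommand and flag names) is an assumption for illustration, not any real tool's interface.

```python
import argparse

def build_parser():
    """Command surface for a minimal, hypothetical dev-board tool.

    The three subcommands mirror the minimum useful feature set:
    flash firmware, observe console output, reset the board.
    """
    p = argparse.ArgumentParser(prog="board-tool")
    sub = p.add_subparsers(dest="command", required=True)

    flash = sub.add_parser("flash", help="flash a firmware image")
    flash.add_argument("image", help="path to the firmware binary")

    console = sub.add_parser("console", help="stream console output")
    console.add_argument("--timeout", type=float, default=10.0,
                        help="stop streaming after this many seconds")

    sub.add_parser("reset", help="reset the board")
    return p

# An agent invokes these as one-shot shell commands, e.g.:
args = build_parser().parse_args(["flash", "fw.bin"])
```

The point is that the whole interface fits in a screenful: an agent can discover it with `--help` and combine the three commands into a flash-observe-reset loop on its own.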

The crazyflie-agent-cli

This is where the idea came from for creating such a tool for the Crazyflie. I chose to write it in Rust, partly to exercise our newly developed Crazyflie Rust library.

The capabilities I gave it are:

  • Flash the Crazyflie using the bootloader
  • Reset the Crazyflie into bootloader or firmware mode
  • Console: stream the debug text output from the firmware
  • Parameters: read and write parameter values
  • Log variables: stream the value of log variables

This is roughly the minimum viable feature set for Crazyflie firmware development. Since AI coding agents already know how to write C code and compile projects, this is, in theory, enough to close the loop and let an agent implement new functionality, flash it, observe the behavior, find a bug, and iterate, just like in a normal development workflow.

Designing a CLI for agents, not humans

One design challenge worth mentioning: the Crazyflie communication model is inherently stateful. As a human, you would open an interactive client, connect to the drone, and then poke around, reading parameters, watching log variables, tweaking things live. That interactive, session-based workflow doesn’t translate well to agents, which can’t use interactive CLIs. Instead, the crazyflie-agent-cli uses a daemon/client architecture: the agent first launches a background daemon that establishes the radio connection, then uses separate one-shot commands to interact with the already-connected Crazyflie. It’s not the most ergonomic design for humans (you end up needing two terminals), but it turns out to work surprisingly well for an agent, which has no trouble managing background processes and firing off commands independently.
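The daemon/client pattern itself is easy to sketch. The toy below is purely illustrative Python: it holds a parameter dict instead of a radio link, and the wire protocol is invented for this example (it is not the crazyflie-agent-cli protocol). What it shows is why one-shot commands work once a long-lived daemon owns the session.

```python
import socket
import threading

def run_daemon(host="127.0.0.1", port=0):
    """Toy daemon holding session state (here, just a parameter store).

    A real tool would hold the open radio connection here; one-shot
    client commands then operate on the already-established session.
    """
    params = {}
    srv = socket.create_server((host, port))

    def serve():
        while True:
            conn, _ = srv.accept()
            with conn:
                cmd = conn.recv(1024).decode().split()
                if cmd[0] == "set":          # set <name> <value>
                    params[cmd[1]] = cmd[2]
                    conn.sendall(b"ok")
                elif cmd[0] == "get":        # get <name>
                    conn.sendall(params.get(cmd[1], "?").encode())

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]              # actual port bound

def one_shot(port, command):
    """A one-shot client call, as an agent would fire from the CLI."""
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(command.encode())
        return s.recv(1024).decode()

port = run_daemon()
one_shot(port, "set kalman.resetEstimation 1")
value = one_shot(port, "get kalman.resetEstimation")
```

Each client call is stateless from the agent's point of view: connect, send one command, read one reply, exit. That maps directly onto how coding agents invoke shell commands.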

Putting it all together

The CLI gives the agent the capability to interact with the Crazyflie, but it also needs to know how to use it. We could tell the agent at the start of every session “here is a tool you can use,” and it would figure things out by calling --help. But a much more efficient approach is to use skills.

Alongside the CLI, I created a skill that teaches the agent how to use the tool for Crazyflie firmware development: what the workflow looks like, how to flash, how to debug. This is what truly closes the loop: once the skill is in place, the agent knows what a Crazyflie is, how to flash it, and how to debug it, without needing much guidance.

The end result: Claude Code can implement simple firmware functionality largely in one shot, and even when it doesn’t get it right the first time, it will iterate and generally get there.

Here is an example prompt that works end-to-end:

I have a Crazyflie on channel 80, 2M, default address. Add a log variable that exposes
the free heap size so I can monitor it over time. Build, flash, and verify the new
variable appears in the log list.

After a little while, the Crazyflie has been flashed and the new functionality verified.

Conclusion

This tool is not an official Bitcraze product; it’s a Fun Friday project. But we think it’s a nice demonstration of what is becoming possible with AI coding agents. By closing the loop, we can start to accelerate firmware development the same way AI has already accelerated other kinds of software development. That said, this is a force multiplier, not a replacement for engineering judgment. The human still needs to be in the loop.

For instance, I believe this CLI is already capable enough to let an agent bring up a new deck with a new sensor, exactly the kind of scoped, iterative task where the available functionality is sufficient. The tool could certainly be improved with more features, and we’ll see how much that happens. But we expect it will likely find its way into some of our day-to-day Crazyflie work at Bitcraze.

For the time being, treat it as an experiment and an example, not a finished product. The code is on GitHub at ataffanel/crazyflie-agent-cli if you want to try it out.

At Bitcraze, we spend a lot of time making things fly. Even though flying robots are, and will always be, fascinating to watch, every now and then it’s refreshing to try something different. Last Friday, I started exploring an idea that has been lying around for a while now: having a ground robot with the same Crazyflie infrastructure – the radio communication, the deck ecosystem and the cfclient connection. This robot is also known as the Crazyrat.

Fair warning: this is a first prototype, and it definitely looks like one. Jumper wires going in every direction, plastic foam and rubber bands to keep everything in place. But it drives, and it drives fast! That’s enough to call it a success and write about it.

Hardware

The entire vehicle was built around the Crazyflie Bolt 1.1, using its motor connectors, and specifically the S pins to drive a small H-bridge motor driver, enabling proper bidirectional DC motor control. To make it possible to drive the H-bridge, the motors must be configured as brushed.

For the motor driver, I used a Pololu DRV8833 Dual Motor Driver Carrier. I experimented with two different motor sets: one with a 75:1 gear ratio and one with 15:1. Even though the 75:1 motors offered better precision and were easier to drive, the 15:1 setup was the clear winner due to its speed.

The motors, the chassis and the power supply were taken from a Pololu 3pi+ robot.

Firmware

For the firmware, I used the Out-Of-Tree functionality of the crazyflie-firmware, which makes it easy to integrate custom controllers. The controller itself is relatively simple. It maps pitch and yaw commands sent from the cfclient to throttle and steering, respectively. Then, it sends the calculated values directly to the motors as PWM signals.

float throttle = -(setpoint->attitude.pitch / maxPitch);
float steer = -(setpoint->attitudeRate.yaw / maxYaw) * steerGain;

float left = throttle - steer;
float right = throttle + steer;

control->controlMode         = controlModePWM;
control->normalizedForces[0] = (left  > 0.0f) ?  left  : 0.0f;  /* M1 AIN1 left  fwd */
control->normalizedForces[1] = (left  < 0.0f) ? -left  : 0.0f;  /* M2 AIN2 left  rev */
control->normalizedForces[2] = (right > 0.0f) ?  right : 0.0f;  /* M3 BIN1 right fwd */
control->normalizedForces[3] = (right < 0.0f) ? -right : 0.0f;  /* M4 BIN2 right rev */

The Crazyrat in action

To test the prototype, I used a gamepad and drove it through the cfclient – no modifications are required. Here’s a video showing the capabilities of the Crazyrat using the 15:1 motor setup.

What’s Next

This project was a really fun learning experience on the potential and the limitations of ground robots. These are some of the directions I plan to explore in the future:

  • Robust design – Design a proper chassis, clean up the wiring and make the whole vehicle smaller, to fit better in the Crazyflie ecosystem.
  • Deck integration – Use either the Lighthouse or the LPS deck for positioning and the Multi-Ranger deck for obstacle detection.
  • Experiment – Explore heterogeneous multi-agent swarming scenarios with the Crazyrat and the Crazyflie.

Booth #90 is running. Here’s what we’re showing and why we think it’s relevant to the conversations happening at the European Robotics Forum this week.

A Decentralized Brushless Swarm

The centerpiece is an evolution of the Decentralized Brushless Swarm demo we published last year. Multiple Crazyflie 2.1 Brushless drones share a volume with no central trajectory planner. Each agent handles its own state estimation, neighbor awareness, and collision avoidance independently. The swarm is fault-tolerant by design: individual failures don’t cascade.

What makes this relevant as a testbed is not the flight itself, but what it lets you study. Decentralized coordination, emergent behavior, and the gap between simulated and physical multi-agent dynamics are all things you can actually probe here, at a scale and cost that makes iteration realistic.

The Swarming Interface

We’re showing a new interface for the first time at ERF. It surfaces per-agent state in real time (position, velocity, battery, role), giving you visibility into what the swarm is doing and why, not just the flight envelope. We’ll write up the technical details separately, but if you want to see it running, the booth is the right place.

A Touch of Magic!

We have built a magic wand. It is a Lighthouse-based device that lets you grab a drone, or a group of them, and steer with your hand. It started as a side project and ended up being a surprisingly good way to demonstrate how the positioning system responds to real-time input. Worth a look if you’re nearby.

Come Find Us

We’re at booth #90 through Thursday March 27. The conversations we’re most interested in are about research infrastructure: how teams design testbeds, what the handoff from simulation to hardware looks like in practice, and where small-scale indoor platforms fit into larger development pipelines.

If you’d like to set aside time for a more focused discussion, reach out at contact@bitcraze.io or the https://www.b2match.com/e/erf2026/meetings app.

During my first Fun Friday as a Bitcraze intern in 2021, I discovered the musical note definitions in the Crazyflie firmware and thought about creating a musical performance using the Crazyflie’s motors, but never followed through.

A few weeks ago I decided to finally take it on as a Fun Friday Project with the slightly more ambitious goal of playing music across several Crazyflies at once.

crazyflie-jukebox takes a MIDI file, preprocesses it into motor frequency events, then uploads and plays the song by spinning the motors accordingly. Each Crazyflie contributes 4 voices (one per motor), so polyphony scales directly with your drone count.

I implemented this as a firmware app and a Python script using the work-in-progress cflib2 (running on the Rust library back-end). You can find the repository here; try it for yourself! Be aware that certain note combinations can cause the Crazyflie to move, flip, or take off unexpectedly.

Fitting music into 4 motors

The pipeline starts by parsing the MIDI file with mido. From there, an interactive track selection step shows you the instrument names, note counts, and ranges for each track so you can pick exactly which ones to include. The selected notes are then converted from MIDI note numbers into Hz frequencies that the motors can work with.

Each Crazyflie can only play 4 simultaneous voices (one per motor), so there’s some work involved in squeezing music into that constraint. I implemented a couple of different voice allocation strategies: melodic priority, which keeps the bass and melody prioritized; voice stealing, which works like an LRU synth; and a simple round robin, which just assigns each new note to the next motor in turn, cutting off whatever was playing there. There’s also a frequency range problem to deal with: motors only reliably produce pitches in roughly the C4–B7 range, so notes outside that window get octave-shifted to fit.
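The note conversion and octave folding can be sketched in a few lines. The MIDI-to-frequency formula is standard (A4 = MIDI 69 = 440 Hz); representing C4–B7 as MIDI numbers 60–107 and the folding loop are illustrative, not necessarily how the jukebox code does it.

```python
def midi_to_hz(note):
    """Standard MIDI-to-frequency conversion (A4 = MIDI 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

def fold_into_range(note, low=60, high=107):
    """Octave-shift a MIDI note into the playable window.

    Defaults to C4..B7 (MIDI 60..107), roughly the pitch range the
    motors reliably produce; the exact bounds are illustrative.
    """
    while note < low:
        note += 12   # too low: shift up an octave
    while note > high:
        note -= 12   # too high: shift down an octave
    return note
```

For example, `midi_to_hz(60)` gives middle C at about 261.63 Hz, and a bass note at MIDI 48 gets folded up one octave to 60 before conversion.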

Upload protocol

Events are packed into compact 6-byte structs containing a delta timestamp, motor index, an on/off flag, and the target frequency. These get streamed to the firmware app using the app channel in a simple START; events; END sequence. The Crazyflie app has a buffer limit of 5000 events, which effectively caps the length and complexity of what you can play. The 5000-event buffer was an arbitrary choice and you could probably get away with more, but it was enough for most songs I threw at it.
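A plausible packing for such a 6-byte event looks like the sketch below. Only the total size and the field list come from the description above; the exact field widths and byte order are assumptions for illustration.

```python
import struct

# Hypothetical layout adding up to the 6 bytes described above:
# uint16 delta (ms), uint8 motor index, uint8 on/off, uint16 frequency (Hz),
# little-endian.
EVENT_FMT = "<HBBH"

def pack_event(delta_ms, motor, on, freq_hz):
    """Pack one note event for streaming over the app channel."""
    return struct.pack(EVENT_FMT, delta_ms, motor, 1 if on else 0, freq_hz)

def unpack_event(data):
    """Firmware-side view: decode one event back into its fields."""
    delta, motor, on, freq = struct.unpack(EVENT_FMT, data)
    return delta, motor, bool(on), freq

evt = pack_event(125, 2, True, 440)   # note-on, motor 2, 440 Hz, +125 ms
```

With 6 bytes per event, the 5000-event buffer works out to about 30 KB of RAM on the Crazyflie side, which helps explain why the cap exists.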

Synchronization

One of the trickier elements of this project was keeping Crazyflies synchronized. For starting in sync, I didn’t do anything special: no broadcast, no t-minus countdown, just sending start commands to each drone in sequence and relying on cflib2 to do it fast enough that the delay is negligible. That said, I’ve only tested with a small number of Crazyflies. With a larger fleet you’d probably need to implement something for the initial sync.

The real challenge is drift over time. The STM32’s crystal is rated at around 0.1% tolerance. This sounds tiny, but in the worst case, over a 1-minute song that’s already ~120 ms of drift between two drones. In a musical context, humans start noticing timing offsets around 20-30 ms; less for percussive sounds, and less for trained musicians. So left uncorrected, drift would become very audible well before the song ends.

To fix this, all clocks are reset to zero at song start. The host then periodically sends resync packets containing its own timestamp in microseconds, and each Crazyflie applies an offset correction to stay aligned, which as a bonus also irons out any initial start latency.
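The drift arithmetic and the offset correction are simple enough to show directly. This is a sketch: the resync packet format and the numbers are illustrative, but the worst-case figure matches the estimate above (two crystals erring in opposite directions by the full tolerance).

```python
def worst_case_drift_ms(tolerance, seconds):
    """Worst-case divergence between two drones whose crystals err in
    opposite directions by the full tolerance (hence the factor 2)."""
    return 2 * tolerance * seconds * 1000.0

def resync_offset_us(host_us, local_us):
    """Offset such that corrected local time matches the host clock,
    recomputed each time a resync packet arrives."""
    return host_us - local_us

def corrected_time_us(local_us, offset_us):
    """Local clock plus the offset learned from the last resync packet."""
    return local_us + offset_us

drift = worst_case_drift_ms(0.001, 60)      # 0.1% crystals, 1-minute song
off = resync_offset_us(10_000_123, 10_000_003)   # host vs. local snapshot
t = corrected_time_us(10_000_003, off)      # now agrees with the host
```

The same offset mechanism absorbs any start-up latency for free: the first resync packet aligns every drone to the host, regardless of when its start command arrived.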

Rough edges

The biggest design constraint is that a single track can’t be split across Crazyflies, so if a track has more than 4 simultaneous voices, some get dropped. I thought of each Crazyflie as its own instrument, which made sense at the time, but it does mean a dense MIDI track can’t be spread over multiple drones, which feels limiting in hindsight.

The usable pitch range is about 4 octaves (C4–B7), and propellers need to be attached for accurate pitch since the motors need load to produce the right frequencies, which makes the whole thing a bit unsafe. Certain note combinations can cause a drone to move, flip, or behave unpredictably. Only brushed motors are supported, and there’s a hard 71-minute per-song limit on clock sync. But honestly, if you’re sitting there listening to a 71-minute song on your Crazyflie, the clock drift is the least of your problems.

Check out crazyflie-jukebox on GitHub

This week we wanted to reflect on the progress that has been made lately in the Crazyflie ecosystem, which will lead to bigger and better Crazyflie swarms.

Radio communication

As pointed out in the last blog post about Building a Crazyflie Flower Swarm with Rust, the new Rust Crazyflie library together with the new Crazyradio 2.0 has improved connection time and link efficiency by quite a bit.

It is now possible to connect swarms of multiple dozens of Crazyflies in seconds using a single radio and then make them fly while still getting position telemetry. So many Crazyflies on one radio does limit the maximum bandwidth per Crazyflie, but it now works in a stable way!

Color LED deck

The recently released Color LED deck is a great addition to the ecosystem for swarms. Its predecessor, the LED-ring deck, has been used a lot by researchers to indicate the state of individual Crazyflies in a swarm. The Color LED deck improves on that with a diffuser that makes the color visible from the side, which makes it much easier to mark the states of large groups of Crazyflies.

As a bonus, the Color LED deck is very usable in other fields like art and shows, since it is much more visible and can be used to fly Crazyflies as “Flying Pixels”.

Autonomous landing and charging

Last year, we released a Crazyflie 2.1 Brushless charging dock. This is a productized version of an idea we have been using with the Crazyflie 2.1 and the Qi deck for years at fairs and conferences. It allows Crazyflies to autonomously land and charge. It is not only great for autonomous drone demos and shows, but also a great waiting spot for swarms when doing research: the charging dock keeps the swarm charged so that when it is time to take off, all the individuals start with the same battery level.

Future endeavors

On the radio side there are still areas where work would greatly improve communication stability. For example, we are working on a channel-hopping communication protocol that should make the connection mostly immune to regular interference on 2.4 GHz.

We are also working on improving other parts of swarm management. This includes, for example, solving the problem of flashing a full swarm of Crazyflies with the same firmware: we may be able to use broadcast messages more in order to drastically speed up the process, instead of flashing the Crazyflies one by one.

Overall, working on bigger swarms allows us to work on the full stack and to make the Crazyflie a better drone for everybody.

Bitcraze will exhibit at the European Robotics Forum 2026 March 23-27 in booth #90, where we will demonstrate a live, autonomous indoor flight setup based on the Crazyflie™ platform. The demonstration features multiple nano-drones flying autonomously in a controlled environment and reflects how the platform is used in research and applied robotics development.

Why Indoor Aerial Testbeds Matter

The purpose of the demonstration is not the flight itself, but the role such setups play in validating aerial robotics concepts. Indoor, small-scale aerial systems allow researchers and R&D teams to study autonomy, perception, control, and multi-robot coordination under safe and repeatable conditions. This makes it possible to explore system behavior, test assumptions, and iterate rapidly before moving to larger platforms or less controlled environments.

Applicable in Both Academia and Industrial R&D

The Crazyflie platform is used both in academic research and in industrial R&D contexts. In academia, it supports experimental work in areas such as swarm robotics, learning-based control, and human–robot interaction, and has been referenced in hundreds of peer-reviewed research papers worldwide. In industry, similar setups are increasingly used as testbeds to de-risk development by validating ideas indoors before scaling to outdoor testing, larger drones, or other robotic systems that require higher investment and operational complexity.

Hands-on Discussions at the Booth

At the booth, the live flight cage will be complemented by hands-on access to additional drones, expansion decks, and software tools. This allows for technical discussions around hardware architecture, sensing and positioning options, software stacks, and how different configurations support different research or development goals.

The Conversations We Are at ERF to Have

At ERF, Bitcraze is there to engage in conversations about platforms, testbeds, and how ambitious aerial robotics ideas can be validated in a financially responsible, safe, and controlled manner. This includes discussions with academic groups, industrial R&D teams, and project partners working across the research-to-application spectrum.

Looking forward to the discussions in Stavanger in booth #90!

Send us a message to contact@bitcraze.io to book a meeting at the show!

The Crazyflie™ Color LED deck is a high-powered, fully programmable RGB(W) lighting expansion for the Crazyflie 2.x platform, and it is now available in the shop.

It delivers bright, diffused, and uniform light suitable for research, teaching, vision experiments, and indoor drone choreography. The deck mounts on the top or the bottom of the Crazyflie and integrates seamlessly through our open-source firmware and I²C-based deck interface.

Two Versions, Same Electronics

The Color LED deck is available in two distinct versions, each sold as a separate product.

Top-Mounted Color LED Deck

Designed to be placed on top of the Crazyflie. Ideal for scenarios where the drone is viewed from above, or when you need to mount positioning or sensor decks underneath the Crazyflie, such as the Flow deck or other bottom-mounted modules.

  • Works well with motion capture (MoCap) systems using ceiling-mounted cameras.
  • Not recommended with the Lighthouse positioning deck, due to optical occlusion and interference with the lighthouse sensors.

You can find the top-mounted version in the store here.

Bottom-Mounted Color LED Deck

This version is suitable for almost all use cases, offering maximum visibility from below and minimal interference with other decks.

  • Ideal for Lighthouse positioning (no optical obstruction).
  • Ideal for MoCap positioning, especially when cameras view from multiple angles.

You can find the bottom-mounted version in the store here.

Dual Mounting for Maximum Visibility

Additionally, two Color LED decks can be mounted both above and underneath the Crazyflie simultaneously, creating a strong, uniform light signature visible from all directions. This is best for MoCap environments where multi-angle visibility improves marker/camera performance.

You can see all three variants of the Color LED deck in action in our latest Christmas video, created in collaboration with the Learning Systems and Robotics Lab:

Each Color LED deck operates independently, allowing the top and bottom decks to be configured with different colors if desired. While both versions share the same electronics, diffuser, and firmware behavior, their physical mounting positions let you choose the setup that best fits your lab, show environment, or positioning needs.

Diffuser now available

While designing the Color LED deck, we also created a light diffuser, now available in the shop as a standalone product. It is designed to be compatible with the LED-ring deck, spreading and softening the light to extend visibility and improve appearance.

You can find the light diffuser in the store here.

The Color LED deck is available now in both versions. Head to the store to order, or contact us for a quote!