AI-deck

This week’s guest blog post is by Rik Bouwmeester from the Micro Air Vehicle Lab, Faculty of Aerospace Engineering at Delft University of Technology.

Tiny quadcopters like the Crazyflie can be operated in narrow, cluttered environments and in proximity to humans, making them perfect candidates for search-and-rescue operations, monitoring crops in a greenhouse, or performing inspections where other flying robots cannot reach. All these applications benefit from autonomy, which allows deployment without proximity to a base station or human operator and permits swarming behavior.

Achieving autonomous navigation on nano quadcopters is challenging given the highly constrained payload and computational power of the platform. Most attention has been given to monocular solutions; the camera is a lightweight and energy-efficient passive sensor that captures rich information about the environment. One of the most important monocular visual cues is optical flow, which has been exploited on MAVs with higher payload capacity for obstacle avoidance [1], depth estimation [2] and several bio-inspired methods for autonomous navigation [3–7].

Optical flow describes the apparent visual motion caused by relative movement between an observer and their surroundings. This rich visual cue contains entangled information about velocity and depth. However, calculating optical flow is expensive. The field of optical flow estimation has for several years been dominated by convolutional neural networks (CNNs). Despite efforts to find architectures of reduced size and latency [8–10], these methods remain highly computationally expensive, running at several to tens of FPS on modern desktop GPUs and requiring millions of parameters, rendering them incompatible with edge hardware.

To this end, we present “NanoFlowNet: Real-Time Dense Optical Flow on a Nano Quadcopter”, submitted to an international robotics conference, which introduces NanoFlowNet, a CNN architecture designed for real-time, fully on-board, dense optical flow estimation on the AI-deck.

CNN architecture

We adopt the semantic segmentation CNN STDC-Seg [11] and modify it for optical flow estimation. While the resulting CNN architecture may be considered “real-time” on desktop hardware, for deployment on edge devices such as a nano quadcopter the network must be shrunk significantly. We improve the latency of the architecture in three ways.

First, we redesign the key convolutional module of the architecture, the Short-Term Dense Concatenate (STDC) module. By reordering the operations within the strided variant of the module, we save, depending on the location of the module within the architecture, from over 10% to over 50% of the multiply-accumulate (MAC) operations per module, while increasing the number of output filters with a large receptive field. A large receptive field is desirable for optical flow estimation.

Second, inspired by MobileNets [12], we globally replace “regular” convolutions with depthwise separable convolutions. Depthwise separable convolutions factorize a convolution into a depthwise and a pointwise convolution, greatly reducing the computational expense at a cost in representational capacity.
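To see where the savings come from, compare the parameter counts of a regular 3×3 convolution and its depthwise separable counterpart. A minimal Keras sketch (the layer sizes here are illustrative, not NanoFlowNet’s actual layers):

```python
import tensorflow as tf

# A regular 3x3 convolution, 64 -> 128 channels...
regular = tf.keras.layers.Conv2D(128, 3, padding="same")

# ...and its depthwise separable factorization: one 3x3 filter per input
# channel, followed by a 1x1 pointwise convolution that mixes channels.
separable = tf.keras.Sequential([
    tf.keras.layers.DepthwiseConv2D(3, padding="same"),
    tf.keras.layers.Conv2D(128, 1),
])

x = tf.zeros((1, 112, 160, 64))  # dummy NHWC input at the reduced resolution
regular(x); separable(x)         # build the layers so weights are created

print(regular.count_params())    # 3*3*64*128 + 128 = 73,856
print(separable.count_params())  # (3*3*64 + 64) + (64*128 + 128) = 8,960
```

For this layer size, the factorization cuts the parameter count (and, analogously, the MAC count) by roughly a factor of eight.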

Third, we reduce the input dimensionality. We train and run inference on grayscale input images, reducing the on-board memory required for storing images by two-thirds. Any memory saved in the AI-deck’s L2 memory can be handed to the AutoTiler for storing the CNN architecture, speeding up on-board execution. To gain a further speed-up, we run the CNN on board at a reduced input resolution of 160×112 pixels. Besides the speed-up through saved L2 memory, reducing the input resolution makes all operations throughout the network cheaper. We downscale the training data to closely match the target resolution. Both changes come at a loss of input information: small objects and small displacements that are not captured at this resolution will be missed.

To give some intuition of the available memory: estimating optical flow requires two input images. Storing two color input images at full resolution requires 2 × 324 × 324 × 3 B ≈ 630 kB, while the AI-deck has only 512 kB of L2 memory available.
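The same arithmetic, spelled out for both the original and the reduced input (a back-of-the-envelope sketch using the numbers above):

```python
# Image buffer budget for two frames, in bytes.
full_color = 2 * 324 * 324 * 3    # full-resolution RGB: 629,856 B ~ 630 kB
reduced_gray = 2 * 160 * 112 * 1  # reduced grayscale:    35,840 B ~  36 kB

l2_budget = 512 * 1024            # AI-deck L2 memory
print(full_color > l2_budget)     # True  -> does not fit
print(reduced_gray / l2_budget)   # ~0.07 -> leaves most of L2 for the CNN
```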

Motion boundary detail guidance

Inspired by STDC-Seg, we guide the training of optical flow with a train-time-only auxiliary task that promotes the encoding of spatial information in the early layers. Specifically, we add a motion boundary prediction task to the network; motion boundary ground truth is available in the optical flow datasets. This improves performance by 0.5 EPE on the MPI Sintel clean (train) benchmark, at zero cost to inference latency.
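Conceptually, the auxiliary task simply adds a weighted boundary term to the training loss, computed from an extra head that is discarded at inference time. A hedged TensorFlow sketch (the loss form and the weight are illustrative assumptions, not the paper’s exact formulation):

```python
import tensorflow as tf

def total_loss(flow_pred, flow_gt, boundary_pred, boundary_gt,
               boundary_weight=1.0):  # the weight is an assumed hyperparameter
    # Supervise the flow head with the average end-point error...
    flow_loss = tf.reduce_mean(tf.norm(flow_pred - flow_gt, axis=-1))
    # ...and the train-time-only boundary head with a per-pixel
    # binary cross-entropy against the motion boundary mask.
    boundary_loss = tf.reduce_mean(
        tf.keras.losses.binary_crossentropy(boundary_gt, boundary_pred))
    return flow_loss + boundary_weight * boundary_loss
```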

Performance on MPI Sintel

Given the scaling and conversion to grayscale of the input data, our network is not directly comparable with results reported in other works. For comparison, we retrain one of the fastest networks in the literature, FlowNet2-s [13], on the same data. Given the reduction in resolution, we drop its deepest two layers to maintain a reasonable feature size, and name the resulting model FlowNet2-xs.

We benchmark the performance of the architecture on the MPI Sintel optical flow dataset. NanoFlowNet performs better than FlowNet2-xs, despite using less than 10% of the parameters. NanoFlowNet achieves 5.57 FPS on the AI-deck; FlowNet2-xs does not fit on the AI-deck at all due to its size. To put NanoFlowNet’s latency in perspective, we execute FlowNet2-xs’ first two convolutions and the final prediction layer on the GAP8. This three-layer architecture achieves 4.96 FPS, slower than running the entire NanoFlowNet. On a laptop GPU, the two architectures achieve similar latency.

| Method | MPI Sintel clean (train) [EPE] | MPI Sintel final (train) [EPE] | Frame rate GPU [FPS] | Frame rate GAP8 [FPS] | Parameters |
|---|---|---|---|---|---|
| FlowNet2-xs | 9.054 | 9.458 | 150 | – | 1,978,250 |
| NanoFlowNet | 7.122 | 7.979 | 141 | 5.57 | 170,881 |

Performance on the MPI Sintel (train) subset. Average End-Point Error (EPE) describes how far off the estimated flow vectors are on average; lower is better.
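For reference, the EPE metric is straightforward to compute from a dense flow field; a minimal NumPy sketch (illustrative, not the benchmark code):

```python
import numpy as np

def epe(flow_est, flow_gt):
    """Average end-point error between two H x W x 2 flow fields:
    the mean Euclidean distance between estimated and ground-truth
    flow vectors, in pixels."""
    return float(np.mean(np.linalg.norm(flow_est - flow_gt, axis=-1)))
```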

Obstacle avoidance implementation

We demonstrate the effectiveness of NanoFlowNet by implementing it in a simple, proof-of-concept obstacle avoidance application on an AI-deck-equipped Crazyflie. We let the quadcopter fly forward at constant velocity and implement the horizontal balance strategy [14], [15], where the quadcopter yaws to balance the optical flow between the left and right half-planes.
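A minimal sketch of what such a balance controller can look like (illustrative only; the gain, sign convention and any filtering are assumptions, not the exact controller flown on the Crazyflie):

```python
import numpy as np

def balance_yaw_rate(flow, k_yaw=1.0):
    """Yaw-rate command from the left/right optical flow balance.

    flow: H x W x 2 dense optical flow field (e.g. from NanoFlowNet).
    k_yaw: proportional gain (illustrative value, to be tuned).
    A positive output here means "yaw away from the left half", i.e.
    away from the side with more apparent motion and thus the nearer
    obstacles.
    """
    half = flow.shape[1] // 2
    left = np.mean(np.linalg.norm(flow[:, :half], axis=-1))
    right = np.mean(np.linalg.norm(flow[:, half:], axis=-1))
    return k_yaw * (left - right)
```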

We equip a Crazyflie with the Flow deck for positioning only. The total flight platform weighs 34 grams.

We augment the balance strategy with active oscillations (a cyclic up-down movement), which generate additional optical flow across the field of view. This is particularly helpful for avoiding obstacles in the direction of horizontal travel, since no optical flow is generated at the focus of expansion (FOE).

The obstacle avoidance implementation is demonstrated in an open and a cluttered environment in “the Cyber Zoo”, an indoor flight arena at the Faculty of Aerospace Engineering at Delft University of Technology. The control algorithm is most robust in the open environment, where the quadcopter manages to drain a full battery without crashing. In the cluttered environment, performance is more variable. Especially when obstacles are close to one another, the quadcopter tends to avoid the first obstacle successfully, only to turn straight into the second and crash. Adding head-on collision detection based on FOE detection and divergence estimation (e.g., [7]) should help avoid obstacles in these cases.
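As a rough illustration of the divergence cue (a sketch of the general idea only, not code from [7]):

```python
import numpy as np

def mean_flow_divergence(flow):
    """Mean divergence of an H x W x 2 optical flow field.

    A looming frontal surface produces expanding flow around the FOE,
    i.e. a large positive divergence; thresholding this value gives a
    simple head-on collision detector.
    """
    du_dx = np.gradient(flow[..., 0], axis=1)  # horizontal derivative of u
    dv_dy = np.gradient(flow[..., 1], axis=0)  # vertical derivative of v
    return float(np.mean(du_dx + dv_dy))
```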

Successful run in a cluttered environment in the Cyber Zoo. The Crazyflie manages to avoid collision until the battery is drained.

All in all, we consider the result a successful demonstration of the optical flow CNN. In future work, we expect to see applications that take more advantage of the resolution of the flow information.

Citation

Bouwmeester, R. J., Paredes-Vallés, F., De Croon, G. C. H. E. (2022). NanoFlowNet: Real-time Dense Optical Flow on a Nano Quadcopter. arXiv. https://doi.org/10.48550/arXiv.2209.06918

References

[1] Gao, P., Zhang, D., Fang, Q., & Jin, S. (2017). Obstacle avoidance for micro quadrotor based on optical flow. Proceedings of the 29th Chinese Control and Decision Conference, CCDC 2017, 4033–4037. https://doi.org/10.1109/CCDC.2017.7979206

[2] Sanket, N. J., Singh, C. D., Ganguly, K., Fermuller, C., & Aloimonos, Y. (2018). GapFlyt: Active vision based minimalist structure-less gap detection for quadrotor flight. IEEE Robotics and Automation Letters, 3(4), 2799–2806. https://doi.org/10.1109/LRA.2018.2843445

[3] Conroy, J., Gremillion, G., Ranganathan, B., & Humbert, J. S. (2009). Implementation of wide-field integration of optic flow for autonomous quadrotor navigation. Autonomous Robots, 27(3), 189–198. https://doi.org/10.1007/s10514-009-9140-0

[4] Zingg, S., Scaramuzza, D., Weiss, S., & Siegwart, R. (2010). MAV navigation through indoor corridors using optical flow. Proceedings – IEEE International Conference on Robotics and Automation, 3361–3368. https://doi.org/10.1109/ROBOT.2010.5509777

[5] De Croon, G. C. H. E. (2016). Monocular distance estimation with optical flow maneuvers and efference copies: A stability-based strategy. Bioinspiration and Biomimetics, 11(1). https://doi.org/10.1088/1748-3190/11/1/016004

[6] Serres, J. R., & Ruffier, F. (2017). Optic flow-based collision-free strategies: From insects to robots. Arthropod Structure and Development, 46(5), 703–717. https://doi.org/10.1016/j.asd.2017.06.003

[7] De Croon, G. C. H. E., De Wagter, C., & Seidl, T. (2021). Enhancing optical-flow-based control by learning visual appearance cues for flying robots. Nature Machine Intelligence, 3(1), 33–41. https://doi.org/10.1038/s42256-020-00279-7

[8] Ranjan, A., & Black, M. J. (2017). Optical flow estimation using a spatial pyramid network. Proceedings – 30th IEEE Conference on Computer Vision and Pattern Recognition, 2720–2729. https://doi.org/10.1109/CVPR.2017.291

[9] Hui, T. W., Tang, X., & Loy, C. C. (2018). LiteFlowNet: A Lightweight Convolutional Neural Network for Optical Flow Estimation. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 8981–8989. https://doi.org/10.1109/CVPR.2018.00936

[10] Sun, D., Yang, X., Liu, M. Y., & Kautz, J. (2018). PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 8934–8943. https://doi.org/10.1109/CVPR.2018.00931

[11] Fan, M., Lai, S., Huang, J., Wei, X., Chai, Z., Luo, J., & Wei, X. (2021). Rethinking BiSeNet For Real-time Semantic Segmentation. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 9711–9720. https://doi.org/10.1109/CVPR46437.2021.00959

[12] Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., & Adam, H. (2017). MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv. http://arxiv.org/abs/1704.04861

[13] Ilg, E., Mayer, N., Saikia, T., Keuper, M., Dosovitskiy, A., & Brox, T. (2017). FlowNet 2.0: Evolution of optical flow estimation with deep networks. Proceedings – 30th IEEE Conference on Computer Vision and Pattern Recognition, 1647–1655. https://doi.org/10.1109/CVPR.2017.179

[14] Souhila, K., & Karim, A. (2007). Optical flow based robot obstacle avoidance. International Journal of Advanced Robotic Systems, 4(1), 2. https://doi.org/10.5772/5715

[15] Cho, G., Kim, J., & Oh, H. (2019). Vision-based obstacle avoidance strategies for MAVs using optical flows in 3-D textured environments. Sensors, 19(11), 2523. https://doi.org/10.3390/s19112523

We’re now in the middle of summer, and even though we’re not much affected by the heat here in Sweden, we’re still at a slower pace than usual, since a lot of us are not at the office. Sales, packing, support and general maintenance take up a lot of the time of those left at the office. We also usually use the summer to clear out lingering issues and focus on some projects we can tackle alone.

This summer, though, will mostly be spent preparing for a very busy autumn. As the Covid situation seems to be normalizing around the world, on-site conferences are restarting, and we plan to take advantage of this! Here is what is planned:

IMAV – Delft, 12 to 16 September

The 13th edition of the International Micro Air Vehicle Conference will be held in Delft, in the Netherlands. We’ve been collaborating for a long time with the MAVLab in Delft, so we’re really happy to be one of the sponsors of this conference. For the occasion, there is a nano AI competition that we’re really excited to see: with the AI bundle, the goal is to fly as fast as possible through an obstacle course.

We’ve been working a lot with the AI deck this past year, so this competition is the perfect occasion for us to see it in action. Kimberly has also developed a simulator that will be used for this competition.

ROSCon – Kyoto, 19 to 21 October

ROSCon is a conference dedicated to the entire ROS community, traditionally held right before IROS. Kimberly will be our proud representative there, giving a talk about ROS2 and the Crazyflie. For the occasion, she will showcase the latest ROS2 integrations, developed in collaboration with the maintainers of Crazyswarm2.

The last time a Crazyflie was present at ROSCon was in 2015, when Wolfgang Hönig gave a lightning talk. A lot has changed since then, and we’re hoping to increase the presence of (tiny) aerial vehicles within the ROS community, especially nanocopters like the Crazyflie.

IROS – Kyoto, 23 to 27 October

IROS is one of the largest robotics conferences worldwide, and after an online edition last year, this 35th edition promises to be full of exciting things!

As it’s quite huge, and as a celebration of Bitcraze’s much-delayed 10th anniversary, the whole company plans to attend. Not only for the chance to discover Japan, which most of us haven’t visited, but also because it feels important to have a significant presence at a conference that promises a lot of opportunities. That means a week without anyone at the Swedish office, but you know where to find us if you would like to talk to us ;).

For the occasion, our intern Marios is working on revamping the autonomous swarm demo. Because of the pandemic, it’s been a while since we actually used it for a whole day of flying, and he’s actively working on making it completely autonomous by implementing the peer-to-peer protocol.

Logistics

As you can see, these three conferences almost back-to-back promise a busy autumn here at Bitcraze. There’s a lot to prepare ahead of time: marketing materials, demo setups, visa paperwork and hotel bookings. And there will be a lot to talk about, during and after. The pandemic has delayed a lot of our in-person meetings, and it will feel really good to finally meet up in the real world with users, old and new. If you have the opportunity, don’t hesitate to come by our booths at these conferences and say hello in person!

I know a lot of you will be too distracted by chocolate to read this post, so I will keep it short.

I am, too, a little distracted by sugar

As I mentioned earlier, we’re a little under-staffed right now. Jonas left us for new adventures, and Arnaud is enjoying some time with his baby (here in Sweden, parental leave is thankfully long for dads too). On top of that, Kimberly was away the last two weeks visiting various labs in Europe; I’m sure she will tell you about it once she’s back. But with just four people at the office, time is a valuable resource. So what are we doing with it?

Well, a lot of it has been dedicated to the AI deck, but that’s not the only thing we’ve been working on. Recently, we had a visit from an expert on dangerous goods shipments. Over two days, we learned how to properly ship the batteries we stock, which regulations are involved, and what we have to implement to comply. It may sound boring… and honestly, it was not the most interesting. But we got a certification out of it, which now allows us to ship as many batteries as we want with your order! The two-battery restriction on the shop should be lifted, but please be aware that if you exceed two batteries per Crazyflie, the shipping cost will be higher, because of the fee FedEx imposes on dangerous goods shipments.

And speaking of FedEx, there are some problems on their air routes right now. Avoiding Ukraine and dealing with strikes among air traffic operators in Europe has not been easy on their infrastructure, and unfortunately we have experienced some delivery delays. Things seem to be gradually returning to normal, so let’s hope their usual speediness resumes soon.

We’re also working on the Mini BAM, which takes place on the 18th of May and will be about drones for aerial shows. Our special guest speakers are from CollMot and Flapper Drones; make sure to answer this survey if you want to participate! You will get more information soon.

And if you want to play around with the AI deck, you will have an interesting occasion in September. IMAV has launched a competition where the goal is to have a Crazyflie equipped with the AI deck perform vision-based obstacle avoidance at increasing speeds. The deadline for registering is mid-May; you can find more information here.

We are now enjoying a long Easter weekend, recharging our batteries with family (and chocolate!), hoping that the Swedish spring finally settles in. I hope you’re enjoying it too!

We’ve had an exciting year in 2021, and we’re eager to see what 2022 will bring! Let’s see what’s in the pipeline and what we hope for in this new year.

Products

The AI deck and Bolt out of Early Access

We’ve put a lot of effort these last months into the AI deck’s firmware and infrastructure. With great help from our intern Rik, we have managed to make huge leaps, and hopefully sometime in the coming months we’ll be able to share what we have worked on. I can already tell you that the upcoming release will bring some needed improvements to flashing the GAP8 chip and improved image streaming! As the AI deck is one of our most challenging decks, we also hope to add an extensive tutorial (which we call the “mega tutorial”) to help you work with it.

We have also started to push some framework changes to make it easier for you to build bigger drones with the Crazyflie Bolt. One of those is the persistent parameter system that we recently implemented on the Crazyflie, and we will add more features of this type. The hope is that we will be able to provide some kind of assembly kit for a larger Bolt-based drone, for which we have already done some initial battery investigations.
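As a hedged sketch of what the persistent parameter system looks like from the Python client side (the method names follow the cflib param API at the time of writing, the parameter name is purely hypothetical, and exact signatures may differ between cflib versions):

```python
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = 'radio://0/80/2M/E7E7E7E7E7'

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    # Set a (hypothetical) parameter, then store it persistently so the
    # value survives a power cycle of the Crazyflie.
    scf.cf.param.set_value('example.gain', 42)
    scf.cf.param.persistent_store(
        'example.gain',
        callback=lambda name, success: print(name, 'stored:', success))
```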

Prototypes

Fun Fridays are usually our time to play around with new possibilities and prototypes. Marcus has already made great strides, and hopefully in 2022 we’ll be able to go even further with those. Arnaud has also been working on the much-awaited new iteration of the Crazyradio, with a new chip and an improved communication protocol. Tobias, our dedicated hardware man, also has ideas in the pipeline, in the form of a brushless Crazyflie, as we already showed in our future plans presentation at November’s BAMdays. We also hope to initiate the design process of a new and improved version of the Crazyflie with more power and processing capabilities.

People and Collaborations

Last year we continued our close collaboration with researchers at institutes and universities, helping them achieve their goals and contributing their work back to our open-source firmware and software. It has proven really fruitful, both for us and for the people we talk to, so we hope 2022 will again bring closer and newer connections.

We were really happy with our first own online conference, which helped us reconnect and talk to our community about all the awesomeness achieved with the Crazyflies. We hope to organize something similar on a more regular basis, to keep talking about collaborations and possibilities, and in general to keep sharing all the work being done on the platform. Those “lightweight” BAMs should arrive soon, so stay tuned if you want to join!

Component shortage and production issues

We expect to keep dealing with the component shortage, which is predicted to last for at least another year, maybe two. Production therefore remains a continuous challenge, with a lot of unpredictability, and we will keep looking for better ways to deal with it in 2022. Thankfully, we have good hopes of maintaining good stock levels throughout the crisis, as we’ve increased our stock. We’ll of course keep you updated on any big developments regarding the crisis and how it affects us.

Unfortunately, the component shortage also makes it harder to build prototypes. It’s difficult to find and/or buy just one chip, which delays our creative hardware developments. It is what it is… but we will surely find solutions, as we have throughout our 10-year history!

Anything else?

Of course our heads are always full of ideas, and we are passionate about working on all of them! We have ambitions to develop a simulation for our users and our CI, to do more measurements with the new thrust stand, and to further improve our documentation and tutorials. And we might also meet new interesting people (digitally or in person?) who give us enough inspiration to start something completely new! Soon we will have our quarterly meeting, where we try to herd and select our passions and ideas into conceivable plans and actions.

With all these projects in the works, we’re really excited to see what 2022 has in store for us! I hope you have an awesome 2022 too.

This is it. The end of my internship. It feels strange to leave this unique office in a place called Malmö. My time here was more than just doing an assignment as part of an MSc degree, with the objective of gaining working experience and contributing to a company.

My last day at the Bitcraze office; Arnaud was already on parental leave

My time here gave me so much more. I learned a healthy way of thinking and problem solving that is part of the unique Bitcraze company culture. Besides that, it felt more like working with friends than with colleagues. Going to the office was a delight, as there was always humor, openness and honesty. I got to know everyone and enjoyed the French, Swedish and Dutch-American hospitality and culture.

At this point you might think that all I did was drink coffee and make sure the office never ran out of it. Luckily, that was not the case. I had the privilege of being the first user of a new deck that has been in development for quite some time now and has been hinted at in some earlier blog posts: the yet-to-be-released AI-deck! At the moment the early-access AI-decks are delayed due to the COVID-19 virus; Bitcraze will update you on the blog when they know more.

My task within Bitcraze, in more detail, was to improve the user friendliness of the AI-deck by providing a framework for future users, and at the same time to explore the user friendliness of the whole ecosystem around the AI-deck from the perspective of an engineering student with beginner experience in embedded programming (i.e., me).

On the verge of giving the Crazyflie some AI capabilities, while being micromanaged.

So my mission began. A logical first step was to see whether the convolutional neural network from the PULP-DroNet project would run on the AI-deck and fly on the Crazyflie, as the AI-deck is an evolution of the PULP-Shield developed for that project. More information about this can be found here.

Unfortunately, this was not an easy feat, as the PULP-DroNet project uses the pure version of the PULP SDK and an outdated AutoTiler, while Greenwaves Technologies, the development partner for the AI-deck, uses the PULP SDK as a base with added functionality in their own SDK, making it diverge from the SDK used in the PULP-DroNet project.

Still, I was able to run the convolutional neural network in a simulated environment and compare it to the original DroNet, which was implemented in Python on a Bebop. It was interesting to find that the PULP-DroNet network behaved differently from the original DroNet in Python. There can be many explanations for this, but the main hypothesis is that it is caused by quantizing the PULP-DroNet network from 32-bit floating point to 16-bit fixed point. In addition, the aforementioned network was trained on a larger dataset that included data created with a Himax camera.

A single Crazyflie obtained self-awareness and spun up a swarm of Crazyflies to gain world domination

While porting PULP-DroNet to the AI-deck should be possible, the obstacles found along the way made it too troublesome and out of scope for my internship. So I moved on to the main objective: making a framework/example for the AI-deck using the SDK provided by Greenwaves Technologies, called the GAP8 SDK. It contains a set of tools that should make using the AI-deck easier, namely the NNTool and the AutoTiler. Together, these tools automate the conversion of a neural network designed and trained in Python (TensorFlow and Keras) into code that can exploit the GAP8’s capabilities.
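As a hedged sketch of the first step in that tool flow, here is how a trained Keras model can be exported to a TFLite file, which the NNTool can then ingest to generate AutoTiler model code. The exact NNTool invocation depends on the GAP8 SDK version, so only the standard TensorFlow part is shown, and the file names are placeholders:

```python
import tensorflow as tf

# Load a trained Keras model (placeholder path).
model = tf.keras.models.load_model('my_model.h5')

# Convert it to TFLite, with post-training quantization enabled, since
# the GAP8 runs quantized networks.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

with open('my_model.tflite', 'wb') as f:
    f.write(converter.convert())
```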

My internship came to an end before I could overcome the last hurdle for a working example. To still bring this example to you, I have committed the documentation and code I wrote, and handed over the knowledge I accumulated while working with the AI-deck and its environment to the capable minds of Kimberly and Tobias.

Along the way I learned a lot about embedded programming and about being a first product user. Embedded programming, and programming in general, also comes with a different mindset than the conventional, deadline-driven planning mindset you get from university. With these valuable lessons in mind, I will be heading back to TU Delft to start my master’s thesis on either reinforcement learning for aircraft or dense optical flow networks for quadcopters. Thank you, Bitcraze, for your time, experience and hospitality!

As pointed out in Daniele’s blog post about PULP-DroNet, we are collaborating on an AI-deck built around the new GAP8 RISC-V multi-core MCU. In that blog post you can find all the details about DroNet, while here we will talk a bit about the AI-deck hardware. The AI-deck is similar to the PULP-Shield but with some optimizations: one of the HyperFlash memory spots has been removed, the communication interface has been slimmed down, and an ESP32 (NINA module) has been added for WiFi connectivity.

Latest AI-deck prototype

All together, this is a pretty good platform for developing low-power AI on the edge for a drone.

Features:

  • GAP8 – ultra-low-power 9-core RISC-V MCU
  • Himax HM01B0 – ultra-low-power 320×320 grayscale camera
  • 512 Mbit HyperFlash and 64 Mbit HyperRAM
  • ESP32 for WiFi and more (NINA-W102)
  • 2 x JTAG, for the GAP8 and the ESP32

Currently we are doing the final testing of the hardware, and hopefully we will launch production at the end of October. If production goes according to plan, we hope to offer it as an early-access product just before Christmas. Make sure to come back and check the blog for more information about progress as well as pricing.

Only a week left until we stand in our ICRA booth in Montreal and give you a glimpse of what we do here at Bitcraze. As we have written about earlier, we are aiming to run a fully automated demo. We have been fine-tuning it over the last couple of days, and if something unpredictable doesn’t break it, we think it is going to be very enjoyable. For those interested in the juicy details, check out this informative ICRA 2019 page; but if you are going to visit, maybe wait a bit so you don’t get spoiled.

Apart from the demo we are also going to show our products as well as some new things we are working on. The brand new things include:

AI-deck, Active marker deck and Lighthouse-4 deck
  • AI-deck: This is a collaborative product between GreenWaves Technologies, ETH Zurich and Bitcraze. It is based on the PULP-Shield designed by the Integrated Systems Laboratory; you can read more about it in this blog post. The difference from the PULP-Shield is that we have added an ESP32, the NINA-W102 module, so that video can be streamed over WiFi. We hope this will ease development and enable more use cases.
  • Active marker deck: Another collaboration, this time with Qualisys. This will make tracking with their motion capture cameras easier and better. Some more details are in this blog post. Qualisys will have the booth just next to ours, where it will be possible to see it in a live demo!
  • Lighthouse-4 deck: Using the Vive lighthouse positioning system, this deck adds sub-millimeter precision to the Crazyflie. This is the deck used in the demo, and it could become the star of the show.

Adding to the above we will of course also display our recently released products:

  • Crazyflie 2.1: The Crazyflie 2.1 is an improvement of the Crazyflie 2.0 that keeps backward compatibility.
    • Better radio performance and external antenna support: With a new radio power amplifier we’ve improved the link quality and added support for dual antennas (on-board chip antenna and external antenna via u.FL connector)
    • Better power button: We’ve gotten feedback that the power button breaks too easily, so we’ve replaced it with a more solid alternative.
    • Improved battery cable fastening: To avoid weakening of the cables over time, they are now run through a strain relief.
    • Improved sensors: To improve flight performance we’ve switched out the IMU and pressure sensor. The new Crazyflie uses the drone-specialized sensor combo BMI088 and BMP388 by Bosch Sensortec.
  • Flow deck v2: The Flow deck v2 has been upgraded with the new ST VL53L1x which increases the range up to 4 meters
  • Z-ranger deck v2: The Z-ranger v2 deck has been upgraded with the new ST VL53L1x which increases the range up to 4 meters
  • Multi-ranger deck: The Multi-ranger deck adds VL53L1x sensors in all directions for mapping and obstacle avoidance.
  • MoCap marker deck: The motion capture deck, with support for easy attachment of passive markers for motion capture camera tracking.
  • Roadrunner: The Roadrunner is released as early access; the hardware is basically a Crazyflie 2.1 without motors, accepting up to 12V input power. This enables other robots or systems to use the Loco Positioning system.

You can find us in booth 101 at ICRA 2019 in Montreal, Canada, May 20–22. Drop by and say hi, check out the products and demo, and tell us what you are working on. We love to hear about all the interesting projects that are going on. See you there!