
It’s time for a new compilation video about how the Crazyflie is used in research! The last one already featured a lot of awesome work, but a lot has happened since then, both in research and at Bitcraze.

As usual, the hardest part about making these videos is choosing the work we want to feature – if every cool video of the Crazyflie were included, it would last for hours! So this is just a selection of the most videogenic projects we’ve seen. You can find a more extensive list of our products used in research here.

We’ve seen a lot of projects that used the modularity of the Crazyflie to create awesome new features, like a catenary robot, wall tracking, or landing upside down. The Crazyflie board was even made into a revolving-wing drone. New sensors were used to sniff out gas leaks (the Sniffy Bug, as seen in this blogpost) or to allow autonomous navigation. Swarms are also a research topic where we see a lot of the Crazyflie, this time for collision avoidance or path planning. We also see more and more simulators, which are used for huge swarms or physics tests.

Once again, we were surprised and awed by all the awesome things the community has done with the Crazyflie. Hopefully, this will inspire others to think of new things to do as well. We hope that we can keep helping you make your ideas fly, and don’t hesitate to share the awesome projects you’re working on with us!

Here is a list of all the research that has been included in the video:

And, without further ado, here it is:

As already announced in a previous blog post, we have been working on a replacement for the Crazyradio PA. The Crazyradio is the USB dongle used to communicate with the Crazyflie 2.1, Crazyflie Bolt and any other 2.4GHz radio board we make. We are also visiting FOSDEM in Brussels at the end of the week, and we will organize a community dev-meeting about Crazyradio and communication at the end of February: more on that at the end of this post.

Crazyradio 2.0 will have the following characteristics:

  • Based on the Nordic Semiconductor nRF52840
    • 64MHz Cortex-M4
    • 1024KB flash, 256KB RAM
    • Radio supporting the Nordic protocol, Bluetooth Low Energy as well as IEEE 802.15.4
    • 1Mbps and 2Mbps bitrate to the Crazyflie
    • USB full speed (12Mbps) device
  • Radio power amplifier providing up to +20dBm output power
  • ‘Drag and drop’ bootloader with physical button to start in bootloader mode
  • Same debug port as on the Crazyflie for ease of development

One of the main changes compared to the Crazyradio PA is the available CPU power and the ease of development: this will allow us to experiment with and implement much more advanced communication protocols, like channel hopping and peer-to-peer communication.

On the software side, there will be two modes available for the Crazyradio 2.0: a compatibility mode that emulates a Crazyradio PA and should work with all existing software, and a new Crazyradio mode with a much improved USB protocol that allows for more efficient communication when controlling multiple Crazyflies and makes it easy to support more protocols in the future.

These two modes will be available as two different firmware images, and the user can ‘drag and drop’ the firmware for the desired mode.

As for the Crazyradio PA (version 1), sourcing its components has been a bit challenging lately. We will sell the Crazyradio PA as long as we have stock, and the software will continue to support it for the foreseeable future.

Announcements

Kimberly and I, Arnaud, will be visiting the FOSDEM conference in Brussels at the end of the week. If you are there too and want to meet us, do not hesitate to drop a message in the comments here, in GitHub Discussions or by mail. It would be great to meet fellow Crazyflie users!

We are also planning an online dev-meeting about Crazyradio 2.0 and communication on the 22nd of February 2023. Information about how to join will be posted on GitHub Discussions. We are interested in talking and bouncing around ideas about the radio and communication protocols: with the new Crazyradio we have an opportunity to improve the communication protocols and make them more useful for modern use of the Crazyflie.

This week’s guest blogpost is from Frederike Dümbgen, presenting her latest work from her PhD project at the Laboratory of Audiovisual Communications (LCAV), EPFL. She is currently a Postdoc at the University of Toronto. Enjoy!

Bats navigate using sound. As a matter of fact, the ears of a bat are so much better developed than their eyes that bats cope better with being blindfolded than with having their ears covered. It was precisely this experiment that led to the discovery of echolocation, which is the principle bats use to navigate [1]. Broadly speaking, in echolocation, bats emit ultrasonic chirps and listen for their echoes to perceive their surroundings. Since its discovery in the 18th century, astonishing facts about this navigation system have been revealed — for instance, bats vary their chirps depending on the task at hand: a chirp that’s good for locating prey might not be good for detecting obstacles and vice versa [2]. Depending on the characteristics of the reflected echoes, bats can even classify certain objects — this ability helps them find, for instance, water sources [3]. Wouldn’t it be amazing to harness these findings in building novel navigation systems for autonomous agents such as drones or cars?

Figure 1: Meet “Crazybat”: the Crazyflie equipped with our custom audio deck including 4 microphones, a buzzer, and a microcontroller. Together, they can be used for bat-like echolocation. The design files and firmware of the audio extension deck are openly available, as is a ROS2-based software stack for audio-based navigation. We hope that fellow researchers can use this as a starting point for further pushing the limits of audio-based navigation in robotics. More details can be found in [4].

The quest for the answer to this question led us — a group of researchers from the École Polytechnique Fédérale de Lausanne (EPFL) — to design the first audio extension deck for the Crazyflie drone, effectively turning it into a “Crazybat” (Figure 1). The Crazybat has four microphones, a simple piezo buzzer, and an additional microprocessor used to extract relevant information from audio data, to be sent to the main processor. All of these additional capabilities are provided by the audio extension deck, for which both the firmware and hardware design files are openly available.1

Video 1: Proof of concept of distance/angle estimation in a semi-static setup. The drone is moved using a stepper motor. More details can be found in [4].

In our paper on the system [4], we show how to use chirps to detect nearby obstacles such as glass walls. Difficult to detect using a laser or cameras, glass walls are excellent sound reflectors and thus a good candidate for audio-based navigation. We show in a first semi-static feasibility study that we can locate the glass wall with centimeter accuracy, even in the presence of loud propeller noise (Video 1). When moving to a flying drone and different kinds of reflectors, the problem becomes significantly more challenging: motion jitter, varying propeller noise and tight real-time constraints make the problem much harder to solve. Nevertheless, first experiments suggest that sound-based wall detection and avoidance is possible (Figure and Video 2).

Video 2: The “Crazybat” drone actively avoiding obstacles based on sound.
Figure 2: Qualitative results of sound-based wall localization on the flying “Crazybat” drone. More details can be found in [4].

The principle we use to make this work is sound-based interference. The sound will “bounce off” the wall, and the reflected and direct sound will interfere either constructively or destructively, depending on the frequency and distance to the wall. Using this same principle for the four microphones, both the angle and the distance of the closest wall can be estimated. This is however not the only way to navigate using sound; in fact, our software stack, available as an open-source package for ROS2, also allows the Crazybat to extract the phase differences of incoming sound at the four microphones, which can be used to determine the location of an external sound source. We believe that a truly intelligent Crazybat would be able to switch between different operating modes depending on the conditions, just like bats that change their chirps depending on the task at hand.
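To make this principle a bit more tangible, here is a small, self-contained numerical sketch (not the code from the paper, just a toy illustration with made-up numbers) of how the distance to a wall can be read off the fringe spacing of the interference pattern across the chirp frequencies:

```python
import numpy as np

# Toy illustration of the interference principle: the direct sound and the wall
# echo interfere constructively or destructively depending on frequency, and the
# spacing of the resulting fringes in the magnitude response encodes the
# distance to the wall. All numbers are made up for illustration.
C = 343.0                              # speed of sound [m/s]
true_distance = 0.5                    # wall distance [m]; the echo travels ~2*d further
freqs = np.linspace(2000, 8000, 600)   # frequencies covered by the emitted chirp [Hz]

extra_path = 2.0 * true_distance
reflection_gain = 0.6                  # assumed attenuation of the echo
magnitude = np.abs(1.0 + reflection_gain * np.exp(-2j * np.pi * freqs * extra_path / C))

# The fringes repeat every delta_f = C / extra_path, so an FFT over the frequency
# axis peaks at the echo delay, which converts back into a distance estimate.
spectrum = np.abs(np.fft.rfft(magnitude - magnitude.mean()))
df = freqs[1] - freqs[0]
delays = np.fft.rfftfreq(len(magnitude), d=df)     # units of seconds (path delay)
delay = delays[np.argmax(spectrum[1:]) + 1]        # skip the DC bin
estimated_distance = delay * C / 2.0

print(f"true: {true_distance:.2f} m, estimated: {estimated_distance:.2f} m")
```

The distance resolution of such an estimate is limited by the bandwidth of the chirp, which is one reason why the choice of chirp matters so much, both for bats and for the Crazybat.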

Note that the ROS2 software stack is not limited to the Crazybat only — we have isolated the hardware-dependent components so that the audio-based navigation algorithms can be ported to any platform. As an example, we include results on the small wheeled e-puck2 robot in [4], which shows better performance than the Crazybat thanks to the absence of propeller noise and motion jitter.

This research project has taught us many things, above all an even greater admiration for the abilities of bats! Dealing with sound is pretty hard and very different from other prevalent sensing modalities such as cameras or lasers. Nevertheless, we believe it is an interesting alternative for scenarios with poor eyesight, limited computing power or memory. We hope that other researchers will join us in the quest of exploiting audio for navigation, and we hope that the tools that we make publicly available — both the hardware and software stack — lower the entry barrier for new researchers. 

1 The audio extension deck works in a “plug-and-play” fashion like all other extension decks of the Crazyflie. It has been tested in combination with the flow deck, for stable flight in the absence of a more advanced localization system. The deck performs frequency analysis on incoming raw audio data from the 4 microphones, and sends the relevant information over to the Crazyflie drone where it is converted to the CRTP protocol on a custom driver and sent to the base station for further processing in the ROS2 stack.

References

[1] Galambos, Robert. “The Avoidance of Obstacles by Flying Bats: Spallanzani’s Ideas (1794) and Later Theories.” Isis 34, no. 2 (1942): 132–40. https://doi.org/10.1086/347764.

[2] Fenton, M. Brock, Alan D. Grinnell, Arthur N. Popper, and Richard R. Fay, eds. “Bat Bioacoustics.” In Springer Handbook of Auditory Research, 1992. https://doi.org/10.1007/978-1-4939-3527-7.

[3] Greif, Stefan, and Björn M Siemers. “Innate Recognition of Water Bodies in Echolocating Bats.” Nature Communications 1, no. 106 (2010): 1–6. https://doi.org/10.1038/ncomms1110.

[4] F. Dümbgen, A. Hoffet, M. Kolundžija, A. Scholefield and M. Vetterli, “Blind as a Bat: Audible Echolocation on Small Robots,” in IEEE Robotics and Automation Letters (Early Access), 2022. https://doi.org/10.1109/LRA.2022.3194669.

2023 has already begun, and we have some ideas and hopes on what this new year will mean for Bitcraze. Of course, what 2022 has proven to us is that the world is unpredictable; but it doesn’t stop us from dreaming about our future. So here is what’s in our wishlist for 2023!

Products

We dedicated a good part of the winter to getting a new, updated and better Crazyradio ready, which we will present to you sometime this year. Rumor around the office is that it will solve all the communication-related problems you can think of!

And, even though it’s been a long run, we hope to soon get the Big Quad deck and Bolt out of early access. There are still some things to tweak and documentation to write.

The Nimble+, a drone with flapping wings powered by the Bolt and designed by our friends at Flapper Drones, should arrive in the store soon.

Prototypes

There’s always a drawer at Bitcraze that’s full of ideas and prototypes. What we lack to make them come true is time! We are constantly wondering which of those treasures will be our next product, and I can’t say anything is certain, but to give you some ideas: we’ve been playing around with the idea of a brushless Crazyflie and a Glow deck, and we are definitely updating some of our current decks.

Community

We really enjoyed meeting people at fairs once again after 2 years of staying put. We don’t know yet at which conferences you will be able to catch us, but we’ll most definitely attend at least 2.

And we will not lose track of our users: we hope to get as much feedback and input as possible during our dev meetings or even mini-BAMs.

Bitcraze

We’re still actively looking for teammates, and we hope there’s someone out there that will join us in 2023! Send us a CV if you’re interested.

External dependencies

The component crisis hit us hard in 2020, but it seems we’re gradually getting back to normal. While the world is still full of surprises, we’re happy to have enough stability to keep doing what we like, through pandemics or recessions. Of course, we would much prefer things to be a little less exciting! We’re cautiously optimistic about 2023, hoping that wars will end and that awareness about climate change will bring out the right habits.

Soon we will have our quarterly meeting, where we try to herd and select our passions and ideas into conceivable plans and actions.

We’re never sure if one year is enough to see all of our plans and hopes go through, but 2023 is still brand new and full of possibilities that we plan to grab with passion. May this new year bring you excitement and passion too!

This year, the traditional Christmas video was overtaken by a big project that we had at the end of November: creating a test show with the help of CollMot.

First, a little context: CollMot is a show company based in Hungary that we’ve partnered with on a regular basis, brainstorming about show drones and discussing possibilities for indoor drone shows in general. They developed Skybrush, an open-source software suite for controlling swarms. We have wanted to work with them for a long time.

So, when the opportunity came to rent an old train hall that we visit often (because it’s right next to our office and hosts good street food), we jumped on it. The place itself is huge, with massive pillars, pits for train maintenance, high ceiling with metal beams and a really funky industrial look. The idea was to do a technology test and try out if we could scale up the Loco positioning system to a larger space. This was also the perfect time to invite the guys at CollMot for some exploring and hacking.

The train hall

The Loco system

We recently added the TDoA3 Long Range mode, and experiments in our test lab indicated that the Loco Positioning system should work in a bigger space with up to 20 anchors, but we had not actually tested it in a larger space before.

The maximum radio range between anchors is probably up to around 40 meters in Long Range mode, but we decided to set up a system that was only around 25×25 meters, with 9 anchors in the ceiling and 9 anchors on the floor, placed in 3 by 3 matrices. The reason we did not go bigger is that the height of the space is around 7-8 meters, and we did not want to end up with a system that is too wide in relation to its height, as this would reduce Z accuracy. This setup gave us 4 cells of 12x12x7 meters, which should be OK.
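For readers who want to lay out something similar, here is a small sketch (our own illustration, not an official Bitcraze tool) that generates the kind of anchor layout described above, a 3 by 3 grid on the floor and a matching grid in the ceiling over a roughly 25×25 meter area:

```python
import itertools

# Generate example anchor coordinates: 9 anchors on the floor and 9 in the
# ceiling, in 3x3 matrices over a 25x25 m area. The resulting coordinates would
# then be written to the anchors with the usual Loco configuration tools.
AREA = 25.0           # side length of the covered area [m]
CEILING = 7.0         # approximate height of the ceiling anchors [m]

grid = [0.0, AREA / 2.0, AREA]            # 3 positions per axis -> 3x3 matrix
anchors = []
for i, (x, y) in enumerate(itertools.product(grid, grid)):
    anchors.append((i, x, y, 0.0))          # floor anchor
    anchors.append((i + 9, x, y, CEILING))  # ceiling anchor

for anchor_id, x, y, z in sorted(anchors):
    print(f"anchor {anchor_id:2d}: x={x:5.1f}  y={y:5.1f}  z={z:3.1f}")
```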

Finding a solution to get the anchors up to the 8-meter ceiling – and to get them down again easily – was also a headscratcher, but with some ingenuity (and meat hooks!) we managed to create a system. We only had the hall for 2 days before filming at night, and setting up the anchors in the ceiling took a big chunk out of the first day.

Drone hardware

We used 20 Crazyflie 2.1s equipped with the Loco deck, LED-ring, thrust upgrade kit and Tattu 350 mAh batteries. We soldered the pin-headers to the Loco decks for better rigidity, but also because it adds a bit more height adjustability for the 350 mAh battery, which is a bit thicker than the stock battery. To make the LED-ring more visible from the sides we created a diffuser that we 3D-printed in white PLA. The full assembly weighed in at 41 grams. With the LED-ring lit up almost all of the time, we concluded that the show flight should not be longer than 3-4 minutes (with some flight time margin).

The show

CollMot, on their end, designed the whole show using Skyscript and Skybrush Studio. The aim was to have relatively simple and easily changeable formations to be able to test a lot of different things, like the large area, speed, or synchronicity. They joined us on the second day to implement the choreography, and share their knowledge about drone shows.

We got some time afterwards to discuss a lot of things, and enjoy some nice beers and dinner after a job well done. We even had time on the third day, before dismantling everything, to experiment a lot more in this huge space and got some interesting data.

What did we learn?

Initially we had problems with positioning: we got outliers and sometimes lost tracking. Eventually we managed to trace the problems to the outlier filter. The filter was written a long time ago, and the current implementation was optimized for 8 anchors in a smaller space, which did not really work in this setup. After some tweaking the problem was solved, but we need to improve the filter for generic support of different system setups in the future.

Another problem we observed is that the Z-estimate tends to get an offset that “sticks” and is not corrected over time. We do not really understand this yet, and it will require more investigation.

The outlier filter was the only major problem we had to solve; otherwise the Loco system mainly performed as expected and we are very happy with the result! The changes to the firmware are available in this, slightly hackish, branch.

We also spent some time testing maximum velocities. For horizontal flight, the Crazyflies started losing positioning above 3 m/s. They could probably go much faster, but the outlier filter started having problems at higher speeds. The overshoot also became larger the faster we flew, which most likely could be solved with better controller tuning. For vertical flight, 3 m/s was also the maximum, limited by the deceleration when coming down. Some improvements can be made here.

The conclusion is that many things work really well, but there are still some optimizations and improvements that could be made to make the system even more robust and accurate.

The video

But enough talking, here is the never-seen-before New Year’s Eve video:

And if you’re curious to see behind the scenes:

Thanks to CollMot for their presence and valuable expertise, and InDiscourse for arranging the video!

And with the final blogpost of 2022 and this amazing video, it’s time to wish you a nice New Year’s Eve and a happy beginning of 2023!

Santa is soon to be knocking on the door, hopefully with one or two exciting toys (with blinking LEDs) for us geeky people! There will not be a Christmas video in the Bitcraze gift this year; instead we’re wrapping up a new release that we hope will add to the Christmas fun!

We have been working on a secret project though and there might be a video for next week’s blog post showing what we have been up to…

The 2022.12 release

We are happy to announce that a new official release is out, 2022.12! We have mainly fixed bugs and stability issues but also added some new features, please see details below.

Crazyflie STM firmware (2022.12)

One of the main events in this release is that the Flapper Nimble+ has got official support with the flapper platform: it can now be flashed through the client like any other member of the Crazyflie family. A new controller, based on work by Brescianini, has been added. The Kalman estimator and the Lighthouse system have been tweaked to work better with the increased data volumes generated with 2+ base stations. Some improvements for brushless motors have been added. Finally, there have been some general bug and stability fixes, including improvements to flashing of the AI-deck.

Please see the release notes for a list of all changes.

Crazyflie NRF firmware (2022.12)

The NRF firmware release mainly contains changes to support the new STM firmware.

Please see the release notes for a list of all changes.

Crazyflie lib python (0.1.21)

A blocking method has been added for uploading trajectories to the high-level commander, so the various Uploader classes in the examples are not needed anymore. There are also stability and bug fixes related to deck flashing.

Please see the release notes for a list of all changes.

Crazyflie python Client (2022.12)

A button has been added in the console log tab to get statistics about persistent storage in the Crazyflie. The final traces of the Windows and Mac builds have been cleaned out, and some stability and bug fixes have been applied.

Please see the release notes for a list of all changes.

Announcement: We will have a townhall meeting this Wednesday (7th of December) at 15:00 (3 pm) CET about Crazyradio 2.0 and the ideas for the new com-stack. Please follow the discussion here for more info.

As you are probably well aware if you have been reading the blog occasionally, we went to Japan with the entire company to attend the International Conference on Intelligent Robots and Systems (IROS) in Kyoto, Japan. Besides eating great food, singing karaoke, and herding our fully onboard autonomous swarm at our stand, we also had some time to check out what kind of work was done with the Crazyflie in the proceedings papers and talks!

So just some generic statistics first:

  • IROS had 1716 papers accepted
  • We found 14 Crazyflie papers/posters and 2 workshop papers
  • The three biggest topics we found among the papers were SLAM, multi-robot systems, and navigation & motion planning

At ICRA this year, we noticed that the Crazyflie/Bolt was used to build unconventional platforms, like a mono-copter or a Crazyflie transformed into a pogo stick. It was interesting to see that at IROS the focus seemed to be more on navigation, localization and even SLAM… also with unconventional sensors!

Navigation and SLAM with the Crazyflie

In the summer, I (Kim) worked on a project using ROS2 to try SLAM with the standard packages together with the Flow deck and Multi-ranger. This was also presented at ROSCon, showing that with the Crazyswarm2 project the Crazyflie can be used as an actual robotic platform too! I’m glad that some researchers had already figured this out, as there were quite some papers on SLAM! [6] and [12] made use of the Flow deck and Multi-ranger, but wrote their own custom algorithms for SLAM and mapping that were more tailored to the task than the standard SLAM packages out there, which are meant for 360-degree lidars.

Very interestingly, there were several papers that used unconventional sensors for this as well. [5] used a gas sensor to do both gas source localization and gas distribution mapping, and [10] made their own echolocation deck with a buzzer and microphones. Let’s see what other sensors will be explored in the future!

Safe Robot Learning Competition

A special mention goes to the Safe Robot Learning Competition, organized jointly by TU Munich and the University of Toronto’s Learning Systems & Robotics Lab (formerly known as the Dynamic Systems Lab). In this competition, teams first took part in an online phase where they had to finish an obstacle course in simulation. For those that were successful, the finals were run on a real Crazyflie at a remote testbed at the University of Toronto, where the algorithms were put to the ultimate test! The simulation was done in the safe-control-gym framework [14], and the communication with the real Crazyflie was done with the ROS1-based Crazyswarm. We sponsored the first three places with a couple of Crazyflie bundles, so congrats to the winners!

List of IROS 2022 Papers featuring the Crazyflie

  1. Using Simulation Optimization to Improve Zero-shot Policy Transfer of Quadrotors Sven Gronauer, Matthias Kissel, Luca Sacchetto, Mathias Korte and Klaus Diepold
  2. Downwash-aware Control Allocation for Over-actuated UAV Platforms Yao Su, Chi Chu, Meng Wang, Jiarui Li, Liu Yang, Yixin Zhu, Hangxin Liu
    • Beijing Institute for General Artificial Intelligence (BIGAI)
    • ArXiv
    • IEEE Xplore
  3. Towards Specialized Hardware for Learning-based Visual Odometry on the Edge Siyuan Chen and Ken Mai
    • Beijing Institute for General Artificial Intelligence (BIGAI)
    • IEEE Xplore
  4. Polynomial Time Near-Time-Optimal Multi-Robot Path Planning in Three Dimensions with Applications to Large-Scale UAV Coordination Teng Guo, Siwei Feng and Jingjin Yu
  5. GaSLAM: An Algorithm for Simultaneous Gas Source Localization and Gas Distribution Mapping in 3D Chiara Ercolani, Lixuan Tang and Alcherio Martinoli
    • Ecole Polytechnique Federale de Lausanne (EPFL),
    • IEEE Xplore
  6. Efficient 2D Graph SLAM for Sparse Sensing Hanzhi Zhou, Zichao Hu, Sihang Liu and Samira Khan
  7. Avoiding Dynamic Obstacles with Real-time Motion Planning using Quadratic Programming for Varied Locomotion Modes Jason White, David Jay, Tianze Wang, and Christian Hubicki
  8. Dynamic Compressed Sensing of Unsteady Flows with a Mobile Robot Sachin Shriwastav, Gregory Snyder and Zhuoyuan Song
  9. A Framework for Optimized Topology Design and Leader Selection in Affine Formation Control Fan Xiao, Qingkai Yang, Xinyue Zhao and Hao Fang
  10. Blind as a bat: audible echolocation on small robots Frederike Dümbgen, Adrien Hoffet, Mihailo Kolundžija, Adam Scholefield, Martin Vetterli
    • Ecole Polytechnique Federale de Lausanne (EPFL)
    • IEEE Xplore
  11. Safe Reinforcement Learning for Robot Control using Control Lyapunov Barrier Functions Desong Du, Shaohang Han, Naiming Qi and Wei Pan
    • Harbin Institute of Technology + TU Delft + University of Manchester
    • Late breaking result poster
  12. Parsing Indoor Manhattan Scenes Using Four-Point LiDAR on a Micro UAV Eunju Jeong, Suyoung Kang, Daekyeong Lee, and Pyojin Kim
    • Sookmyung Women’s University,
    • Late breaking result poster
  13. Interactive Multi-Robot Aerial Cinematography Through Hemispherical Manifold Coverage Xiaotian Xu, Guangyao Shi, Pratap Tokekar, and Yancy Diaz-Mercado
    • University of Maryland
    • Note: Only mention of Crazyflie experiments in presentation
  14. Safe-control-gym: a Unified Benchmark Suite for Safe Learning-based Control and Reinforcement Learning in Robotics Zhaocong Yuan, Adam W. Hall, Siqi Zhou, Lukas Brunke, Melissa Greeff, Jacopo Panerati, Angela P. Schoellig
  15. Distributed Geometric and Optimization-based Control of Multiple Quadrotors for Cable-Suspended Payload Transport Khaled Wahba and Wolfgang Hoenig
  16. Customizable-ModQuad: a Versatile Hardware-Software Platform to Develop Lightweight and Low-cost Aerial Vehicles Diego S. D’Antonio, Jiawei Xu, Gustavo A. Cardona, and David Saldaña

Let us know if we are missing any papers or any information about the papers! Once the IEEE Xplore IROS 2022 proceedings have been published, we will update these too and put them on our research page.

This week’s guest blogpost is from Rik Bouwmeester from the Micro Air Vehicle lab, Faculty of Aerospace Engineering at the Delft University of Technology.

Tiny quadcopters like the Crazyflie can be operated in narrow, cluttered environments and in proximity to humans, making them the perfect candidate for search-and-rescue operations, monitoring crops in a greenhouse, or performing inspections where other flying robots cannot reach. All these applications benefit from autonomy, allowing deployment without proximity to a base station or human operator and permitting swarming behavior.

Achieving autonomous navigation on nano quadcopters is challenging given the highly constrained payload and computational power of the platform. Most attention has been given to monocular solutions; the camera is a lightweight and energy-efficient passive sensor that captures rich information of the environment. One of the most important monocular visual cues is optical flow, which has been exploited on MAVs with higher payload for obstacle avoidance [1], depth estimation [2] and several bio-inspired methods for autonomous navigation [3–7].

Optical flow describes the apparent visual variations caused by relative motion between an observer and their surroundings. This rich visual cue contains tangled information about velocity and depth. However, calculating optical flow is expensive. The field of optical flow estimation is, and has been for a couple of years, dominated by convolutional neural networks (CNNs). Despite efforts to find architectures of reduced size and latency [8-10], these methods are still highly computationally expensive, running at several to tens of FPS on modern desktop GPUs and requiring millions of parameters, rendering them incompatible with edge hardware.

To this end, we present “NanoFlowNet: Real-Time Dense Optical Flow on a Nano Quadcopter”, submitted to an international robotics conference, which introduces NanoFlowNet, a CNN architecture designed for real-time, fully on-board, dense optical flow estimation on the AI-deck.

CNN architecture

We adopt the semantic segmentation CNN STDC-Seg [11] and modify it for optical flow estimation. The resulting CNN architecture may be considered “real-time” on desktop hardware, but for deployment on edge devices such as a nano quadcopter the network must be shrunk significantly. We improve the latency of the architecture in three ways.

First, we redesign the key convolutional module of the architecture, the Short-Term Dense Concatenate (STDC) module. By reordering the operations within the strided variant of the module, we save, depending on the location of the module within the architecture, from over 10% to over 50% of the MAC operations per module, while increasing the number of output filters with a large receptive field size. A large receptive field size is desirable for optical flow estimation.

Second, inspired by MobileNets [12], we globally replace ‘regular’ convolutions with depthwise separable convolutions. Depthwise separable convolutions factorize a convolution into a depthwise and a pointwise convolution, greatly reducing the computational cost at some cost in representational capacity.
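To make the factorization concrete, here is a minimal PyTorch sketch (ours, not the NanoFlowNet code) that compares the parameter count of a regular 3×3 convolution with that of a depthwise separable one:

```python
import torch.nn as nn

# Depthwise separable convolution: a depthwise convolution filters each input
# channel separately, and a 1x1 pointwise convolution then mixes the channels.
class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

regular = nn.Conv2d(64, 128, kernel_size=3, padding=1, bias=False)
separable = DepthwiseSeparableConv(64, 128)

n_params = lambda m: sum(p.numel() for p in m.parameters())
print(n_params(regular), n_params(separable))   # 73728 vs 8768 parameters for this layer
```

For this 64-to-128-channel example the layer shrinks from 73,728 to 8,768 parameters, roughly a factor of eight.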

Third, we reduce the input dimensionality. We train and run the network on grayscale input images, reducing the on-board memory required for storing images to a third. Any memory saved in the AI-deck’s L2 memory can be handed to AutoTiler for storing the CNN architecture, speeding up on-board execution. To gain a further speed-up, we run the CNN on board at a reduced input resolution of 160×112 pixels. Besides the speed-up through saved L2 memory, reducing the input resolution makes all operations throughout the network cheaper. We downscale the training data to closely match the target resolution. Both of these changes come at a loss of input information: we will miss out on small objects and small displacements that are not captured at this resolution.

To give some intuition of the available memory: estimating optical flow requires two input images, and storing two color input images at full resolution requires (2 x 324 x 324 x 3 =) 630 kB. The AI-deck has only 512 kB of L2 memory available.
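The same back-of-the-envelope arithmetic, extended with the reduced grayscale input described above (our own comparison, assuming the 160×112 resolution):

```python
# Image buffer sizes (1 kB = 1000 bytes); optical flow needs two input frames.
full_color = 2 * 324 * 324 * 3      # two full-resolution RGB frames
reduced_gray = 2 * 160 * 112 * 1    # two grayscale frames at the reduced resolution
print(full_color / 1000, "kB")      # ~629.9 kB, more than the 512 kB of L2 memory
print(reduced_gray / 1000, "kB")    # ~35.8 kB, leaving most of L2 for the network
```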

Motion boundary detail guidance

Inspired by STDC-Seg, we guide the training of optical flow with a train-time-only auxiliary task to promote the encoding of spatial information in the early layers. Specifically, we introduce a motion boundary prediction task to the net. The motion boundary ground truth can be found in the optical flow datasets. This improves performance by 0.5 EPE on the MPI Sintel clean (train) benchmark, at zero cost to inference latency.

Performance on MPI Sintel

Given the scaling and conversion to grayscale of the input data, our network is not directly comparable with results reported by other works. For comparison, we retrain one of the fastest networks in the literature, FlowNet2-s [13], on the same data. Given the reduction in resolution, we drop the deepest two layers to maintain a reasonable feature size. We name this model FlowNet2-xs.

We benchmark the performance of the architecture on the optical flow dataset MPI Sintel. NanoFlowNet performs better than FlowNet2-xs, despite using less than 10% of the parameters. NanoFlowNet achieves 5.57 FPS on the AI-deck. FlowNet2-xs does not fit on the AI-deck due to the network size. To put the achieved latency of NanoFlowNet in perspective, we execute FlowNet2-xs’ first two convolutions and the final prediction layer on the GAP8. The three-layer architecture achieves 4.96 FPS, which is slower than running the entire NanoFlowNet. On a laptop GPU, the two architectures accomplish similar latency.

Method        Clean EPE   Final EPE   GPU [FPS]   GAP8 [FPS]   Parameters
FlowNet2-xs   9.054       9.458       150         –            1,978,250
NanoFlowNet   7.122       7.979       141         5.57         170,881

Performance on the MPI Sintel (train subset): average end-point error (EPE) on the Clean and Final passes, frame rate on a desktop GPU and on the GAP8, and parameter count. EPE describes how far off the estimated flow vectors are on average; lower is better.

Obstacle avoidance implementation

We demonstrate the effectiveness of NanoFlowNet by implementing it in a simple, proof-of-concept obstacle avoidance application on an AI-deck-equipped Crazyflie. We let the quadcopter fly forward at constant velocity and implement the horizontal balance strategy [14], [15], where the quadcopter balances the optical flow in the left and right half-planes by yawing.
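As a rough sketch of what this strategy boils down to (our own toy example, not the authors’ controller; the flow-array layout and sign convention are assumptions), the yaw command can be derived from the difference between the mean flow magnitudes in the left and right halves of the flow field:

```python
import numpy as np

def yaw_rate_from_flow(flow: np.ndarray, gain: float = 20.0, max_rate: float = 60.0) -> float:
    """Toy horizontal-balance rule: yaw away from the image half with more optical flow.

    `flow` is assumed to be a dense flow field of shape (2, H, W) with x- and
    y-components per pixel; the returned yaw rate is in deg/s with positive
    meaning turn right (an assumed convention for illustration).
    """
    magnitude = np.linalg.norm(flow, axis=0)   # per-pixel flow magnitude, shape (H, W)
    half = magnitude.shape[1] // 2
    left = magnitude[:, :half].mean()
    right = magnitude[:, half:].mean()
    # More flow on the left suggests a nearby obstacle on the left -> yaw right.
    return float(np.clip(gain * (left - right), -max_rate, max_rate))

# Synthetic example: stronger flow in the left half of the image.
flow = np.zeros((2, 112, 160), dtype=np.float32)
flow[0, :, :80] = 2.0
flow[0, :, 80:] = 0.5
print(yaw_rate_from_flow(flow))   # positive value -> steer away from the left obstacle
```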

We equip a Crazyflie with the Flow deck for positioning only. The total flight platform weighs 34 grams.

We augment the balance strategy by implementing active oscillations (a cyclic up-down movement), resulting in additional optical flow generated across the field of view. This is particularly helpful for avoiding obstacles in the direction of horizontal travel, since no optical flow is generated at the focus of expansion.

The obstacle avoidance implementation is demonstrated in an open and a cluttered environment in ‘the Cyber Zoo’, an indoor flight arena at the faculty of Aerospace Engineering at the Delft University of Technology. The control algorithm is most robust in the open environment, with the quadcopter managing to drain a full battery without crashing. In the cluttered environment, performance is more variable. Especially on occasions where obstacles are close to one another, the quadcopter tends to avoid the first obstacle successfully, only to turn straight into the second and crash into it. Adding a head-on collision detection based on FOE detection and divergence estimation (e.g., [7]) should help avoid obstacles in these cases.

Successful run in a cluttered environment in the Cyber Zoo. The Crazyflie manages to avoid collision until the battery is drained.

All in all, we consider the result a successful demonstration of the optical flow CNN. In future work, we expect to see applications that take more advantage of the resolution of the flow information.

Citation

Bouwmeester, R. J., Paredes-Vallés, F., De Croon, G. C. H. E. (2022). NanoFlowNet: Real-time Dense Optical Flow on a Nano Quadcopter. arXiv. https://doi.org/10.48550/arXiv.2209.06918

References

[1] Gao, P., Zhang, D., Fang, Q., & Jin, S. (2017). Obstacle avoidance for micro quadrotor based on optical flow. Proceedings of the 29th Chinese Control and Decision Conference, CCDC 2017, 4033–4037. https://doi.org/10.1109/CCDC.2017.7979206

[2] Sanket, N. J., Singh, C. D., Ganguly, K., Fermuller, C., & Aloimonos, Y. (2018). GapFlyt: Active vision based minimalist structure-less gap detection for quadrotor flight. IEEE Robotics and Automation Letters, 3(4), 2799–2806. https://doi.org/10.1109/LRA.2018.2843445

[3] Conroy, J., Gremillion, G., Ranganathan, B., & Humbert, J. S. (2009). Implementation of wide-field integration of optic flow for autonomous quadrotor navigation. Autonomous Robots, 27(3), 189–198. https://doi.org/10.1007/s10514-009-9140-0

[4] Zingg, S., Scaramuzza, D., Weiss, S., & Siegwart, R. (2010). MAV navigation through indoor corridors using optical flow. Proceedings – IEEE International Conference on Robotics and Automation, 3361–3368. https://doi.org/10.1109/ROBOT.2010.5509777

[5] De Croon, G. C. H. E. (2016). Monocular distance estimation with optical flow maneuvers and efference copies: A stability-based strategy. Bioinspiration and Biomimetics, 11(1). https://doi.org/10.1088/1748-3190/11/1/016004

[6] Serres, J. R., & Ruffier, F. (2017). Optic flow-based collision-free strategies: From insects to robots. Arthropod Structure and Development, 46(5), 703–717. https://doi.org/10.1016/j.asd.2017.06.003

[7] De Croon, G. C. H. E., De Wagter, C., & Seidl, T. (2021). Enhancing optical-flow-based control by learning visual appearance cues for flying robots. Nature Machine Intelligence, 3(1), 33–41. https://doi.org/10.1038/s42256-020-00279-7

[8] Ranjan, A., & Black, M. J. (2017). Optical flow estimation using a spatial pyramid network. Proceedings – 30th IEEE Conference on Computer Vision and Pattern Recognition, 2720–2729. https://doi.org/10.1109/CVPR.2017.291

[9] Hui, T. W., Tang, X., & Loy, C. C. (2018). LiteFlowNet: A Lightweight Convolutional Neural Network for Optical Flow Estimation. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 8981–8989. https://doi.org/10.1109/CVPR.2018.00936

[10] Sun, D., Yang, X., Liu, M. Y., & Kautz, J. (2017). PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 8934–8943. https://doi.org/10.1109/CVPR.2018.00931

[11] Fan, M., Lai, S., Huang, J., Wei, X., Chai, Z., Luo, J., & Wei, X. (2021). Rethinking BiSeNet For Real-time Semantic Segmentation. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 9711–9720. https://doi.org/10.1109/CVPR46437.2021.00959

[12] Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., & Adam, H. (2017). MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. In arXiv. arXiv. http://arxiv.org/abs/1704.04861

[13] Ilg, E., Mayer, N., Saikia, T., Keuper, M., Dosovitskiy, A., & Brox, T. (2017). FlowNet 2.0: Evolution of optical flow estimation with deep networks. Proceedings – 30th IEEE Conference on Computer Vision and Pattern Recognition, 1647–1655. https://doi.org/10.1109/CVPR.2017.179

[14] Souhila, K., & Karim, A. (2007). Optical flow based robot obstacle avoidance. International Journal of Advanced Robotic Systems, 4(1), 2. https://doi.org/10.5772/5715

[15] Cho, G., Kim, J., & Oh, H. (2019). Vision-based obstacle avoidance strategies for MAVs using optical flows in 3-D textured environments. Sensors, 19(11), 2523. https://doi.org/10.3390/s19112523

I have already talked about it here and there, but the day finally came: the whole company is in Japan!
Kimberly travelled first, to account for jetlag, meet with some people, and attend ROSCon.

That was last week, and she got the opportunity to learn a lot, meet people from the ROS community, and give an exciting talk.

Kimberly’s talk at ROSCon (made by Ramón Roche)
Happy to be in Japan (made by Ramón Roche)

The rest of the company travelled last week with all the equipment needed divided into our suitcases.

Our suitcases at the office, to gather the materials before going

We chose to rent a traditional machiya for our stay, where we can all live together and enjoy life in the center of Kyoto.

Us chilling out in the Bitcraze mansion

Our first day here was meant to account for jetlag, but we managed to see some of the amazing sights of Kyoto – and enjoy the most praised Japanese food, much appreciated after a long walk among the torii gates of the Fushimi Inari shrine.

Us after climbing to the top of Mt Inari – with the beautiful path of torii gates

But it was soon time to start working, and yesterday we worked really hard on setting up everything to have a nice demo at IROS.

After some head scratching, emergency taping and hacking, we managed to get the autonomous demo that Marios implemented last summer flying – just before the event hall closed. We also got time to explore the Kyoto International Conference Center, a beautiful venue with a Japanese garden and a futuristic look – as imagined in the ’70s.

Some views from the Kyoto Conference Center

We invited those of you attending IROS to come and see us for a tech meet-up. It’s today, and it would be a really nice opportunity for us to finally chat in person with our users! Since there are a lot of aerial systems talks, we realize it may be difficult to come during the sessions, so the tech meet-up can begin during the break, at 15:40.

Next up this week is the safe nanocopter competition. Kimberly will actually deliver the prize for it, and we can’t wait to see what this competition will show – and how fun it is to remote-control the Crazyflies that are at the University of Toronto Institute for Aerospace Studies!

Of course, we will share some news on social media – and we will have a blogpost in a few weeks to debrief on the whole trip.

As you’ll understand, maintaining the day-to-day of the company is a little trickier this week, but we still monitor email and GitHub discussions, and we are shipping orders. You should just expect a longer processing time, as we’re quite busy – either at the booth or… at karaoke! (No, there will be no videos of us singing.)

This fall is full of exciting events for us, and none is more excitedly expected than our visit to Japan. Yes, the whole company (6 people) is travelling to Kyoto for at least a week – but not for sightseeing (well, not only). Here is what we have planned:

ROSCon

As per tradition, ROSCon is held shortly before IROS. So, from the 19th to the 21st of October, Kimberly will be there to represent us within the ROS community. She will even give a presentation about the latest ROS2 integrations, in collaboration with the maintainers of Crazyswarm2. It’s on October 21st at 16:50 local time, so if you’re there make sure to catch her talk!

IROS

From the 21st to the 27th of October, IROS will be held at the Kyoto International Conference Center. It’s one of the largest robotics conferences worldwide, with almost 1750 papers presented. As this is the first in-person edition of the conference since the beginning of the Covid pandemic, we had to be there. We will man the booth during the whole conference, with the demo our intern Marios worked on a lot. And since it’s been a long time since we’ve been able to gather and talk together, we thought it would be great to have an official meetup at IROS for those interested.

So, please note this official invitation to Bitcraze’s tech meetup at IROS! If you’re at IROS and want to meet us together with other Crazyflie users, then let’s get together on Monday the 24th of October at 16:00 at our booth, number 59. It’s the perfect occasion to (re)connect, get the latest news about Bitcraze, talk about development, share what you’ve been doing and even possibly hack together! Be sure to say hi if you’re there. We will try to make it something similar to a Swedish fika, with some sweets and coffee, but we can’t promise that there will be kanelbullar.

IROS Safe Robot Learning Competition

And this year, we’re happy to announce that there will be a Crazyflie competition during IROS. The goal is to develop safe learning-based algorithms that can cope with uncertainties not known at design time. Our friends at the Dynamic Systems Lab are organizing this competition, with two simulation phases and one experimental phase at IROS… The experimental phase gives remote access to the Flight Arena at the University of Toronto Institute for Aerospace Studies in Toronto, Canada, via high-speed internet connections. You don’t need to be present at IROS to participate, but if you wish to do so, beware: the registration for the competition ends on October 12th. We’re really curious and excited to see what this competition is going to show!

What about Bitcraze during that week?

But if everybody is in Japan, what about Bitcraze’s regular activities, you may be wondering? Well, no worries: even though we’re going to be half a world away, the business is going to follow us. Of course, some of us will take the opportunity to take some vacation and visit this beautiful country, so during IROS week and the week after, the company will run a little more slowly than usual. We won’t be as responsive as usual on emails and discussions, but we will still monitor our emails and ship some orders.

Are you planning to visit IROS or ROSCon? Is there anything in particular in the schedule that you don’t want to miss? Don’t hesitate to tell us if you want to join the meetup!