
Crazyflies are great for indoor applications, thanks to their maneuverability and ubiquity. Their small size, however, limits sensor quality and compute capability. In our recent work we present source seeking onboard a Crazyflie using deep reinforcement learning. We show a general methodology for deploying deep neural networks on heavily constrained nano drones, using full 8-bit quantization and input scaling.

Our fully autonomous light-seeking CrazyFlie

Problem definition

Source seeking can be interesting in a variety of contexts. We focus on light seeking, as seen in nature: many insects rely on light, either for survival or navigation. Light seeking in aerial robotics has many applications, such as finding the exit of a dark room.

Our goal is to fully autonomously find a light source, using only the onboard Micro Controller Unit (MCU) and deep reinforcement learning. 

Crazyflie configuration

Our fully autonomous nano drone uses several standard and custom sensors. We use the Multi-ranger deck and the Flow deck for position control and obstacle avoidance.

The Multi-ranger deck with our custom light sensor

We add a custom light sensor based on the Adafruit TSL2591 sensor. The custom light sensor fits nicely on the Multi-ranger deck, adding little mass and inertia (total vehicle mass is 33 grams).

Crazyflie 2.1 with Multi-ranger deck, Flow deck and light sensor

Algorithm

We use a deep reinforcement learning algorithm with a discrete action space. The neural network policy takes laser-ranger and light readings (current and past values) as input, and tells the drone to rotate left, rotate right or fly forward. We train a neural network with two hidden layers of 20 nodes each, with bias addition and ReLU activation functions. The input layer is a vector of length 20 (4 states), which, compared to images, greatly reduces the computational effort.

DQN policy architecture
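
As an illustration, here is a minimal sketch of this architecture in Keras (layer sizes are taken from the text above; all identifiers are ours, not the original code):

```python
import tensorflow as tf

# Minimal sketch of the policy network described above: 20 inputs
# (laser-ranger and light readings, current and past values), two hidden
# layers of 20 ReLU units with biases, and 3 outputs, one per discrete
# action (rotate left, rotate right, fly forward).
policy = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(20, activation="relu"),
    tf.keras.layers.Dense(3),  # Q-values for the three actions
])
```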

Simulation and conversion

We train our agent in simulation using the Air Learning simulation platform, after which we fully quantize the neural network to 8-bit integers.

To maintain accuracy after quantization, we came up with several quantization innovations. Both the input layer and all tensors in the network need a pre-defined [min, max] range in float32 in order to be converted to 8-bit integers.

Air Learning pipeline

In the input layer, not all inputs have the same range. For example, a laser ranger can return values from 0 to 5 meters, while our light sensor may return values between 0 and 300 lux. To avoid this issue, we scale all inputs to the same range.
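
As a sketch of this idea (the helper and names are ours; the ranges are the ones quoted above):

```python
import numpy as np

# Sketch of the input-scaling step: every sensor modality is mapped to a
# common [0, 1] range before quantization. Treat the names and ranges as
# illustrative.
SENSOR_RANGES = {
    "laser": (0.0, 5.0),    # meters
    "light": (0.0, 300.0),  # lux
}

def scale_input(value, sensor):
    lo, hi = SENSOR_RANGES[sensor]
    return (np.clip(value, lo, hi) - lo) / (hi - lo)
```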

Additionally, the tensors in the network need an assigned [min, max] range for quantization. To determine these ranges, we feed a set of representative inputs into the unquantized model and read out the values of the intermediate layers. With this strategy, we arrive at a 2.9x speed-up compared to float32 inference.
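
For reference, this is roughly how such a post-training full-integer quantization looks in TensorFlow Lite, where a representative dataset calibrates the [min, max] ranges (the model path and input distribution are placeholders, not the project's actual pipeline):

```python
import numpy as np
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("policy_model")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset():
    # Representative observations calibrate the [min, max] range of every
    # tensor; here random vectors in the scaled input range stand in for
    # logged sensor data.
    for _ in range(100):
        yield [np.random.uniform(0.0, 1.0, (1, 20)).astype(np.float32)]

converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()  # fully 8-bit quantized model
```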

Implementation

We use TensorFlow Lite to deploy our TensorFlow models in C on the Crazyflie. The TFMicro stack, together with the actual model, almost completely fills up the available RAM.

RAM utilization on the Crazyflie 2.1

The total amount of RAM available on the Crazyflie 2.1 is 196kB, of which only 131kB is available for static allocation at compile time. The Bitcraze software stack uses 98kB of RAM, leaving only 33kB for our purposes. The TFMicro stack takes up 24kB, thus leaving 9kB for the actual model (e.g., weights and bias terms).

We also analyzed CPU usage and noticed a high number of interrupts from the ‘stabilizer’ thread, i.e., the PID controllers. Because of these interrupts, inference of our model takes 46.4 times longer than it would without interruption.

Our quantized model is 3kB. As an FP32 model, it would have taken 12kB, which would not have fit in the available memory. We were able to run inference at 4Hz, compared to the estimated 1.4Hz of the same but unquantized model.

In practice, we noticed decreased stability when increasing model size: occasionally the drone would reboot randomly while flying. Possible causes for this behavior are RAM overflow and task-scheduling problems in the RTOS. We also observed variation in performance loss after quantization: some of our trained models would just keep rotating after quantization, while our final model demonstrates robust source-seeking behavior. This degree of uncertainty can possibly be avoided using quantization-aware training.

Finally, flying in a dark room without a position estimate is challenging. The PID controllers rely heavily on information provided by the Flow deck, and this information is limited when little light is present and the floor has few features. To fix this, we added textured mats on the ground, adding features and enabling stable flight in a dark room.

Flight tests

To validate our simulation results, we created a cluttered environment with a light source. We randomly initialized the drone in the room and observed a success rate of 80% over a total of 105 flight tests. By varying the environment and the initial drone position, we learned more about the inner workings of our algorithm.

Experiment testing environment

We learned that the algorithm performs better with more obstacles, and that a closer initial position improves performance. In general, source seeking far away from the source is hard: almost no variation in source strength exists between measurements, so the drone observes mostly noise.

Outlook

With our methodology, we were able to perform fully autonomous source seeking using deep reinforcement learning on a Cortex-M4 MCU. We hope our methodology will be applicable to other TinyML applications where resources are heavily constrained. Developing custom accelerators for a specific workload is time-consuming and expensive, while general-purpose MCUs are cheap and widely available. With our methodology, we unlock new applications for learning algorithms on heavily constrained platforms.

Direct path to source in empty room, blue = take-off

Links

Video: https://www.youtube.com/watch?v=wmVKbX7MOnU

Paper: https://arxiv.org/abs/1909.11236

Github: https://github.com/harvard-edge/source-seeking

Feel free to contact us should you have any questions or ideas: bduisterhof@g.harvard.edu

The High-level Commander has been part of the Crazyflie firmware since the 2018.10 release. In combination with a positioning system, it can fly the Crazyflie along a trajectory that is either defined in the firmware or uploaded through the python lib. It originates from the Crazyswarm project and we have used it in various demos, since it makes it possible to create trajectories that are very fluid and look really cool. The trajectories are defined as 7th-degree polynomials describing segments that are executed one after the other.

The controller gives full control of position, velocity, acceleration and jerk; the only problem is that it is non-trivial to generate the polynomials. We have wanted to simplify the creation of trajectories for a long time and have finally had some time to play with it. In this blog post we will describe how it can be done with Bezier curves and show some examples.

Each segment in a High-level Commander trajectory is defined by four 7th-degree polynomials, one each for x, y, z and yaw. There is also a scaling parameter that tells the controller the time scale to use when executing the segment. Using polynomials of degree 7 makes it possible to design trajectories that are continuous in position, velocity, acceleration and jerk when changing from one segment to the next, which is important for smooth and controlled flight.
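
To make the format concrete, here is a hedged sketch of evaluating one such segment (the storage layout and time normalization are our assumptions, not the firmware's exact representation):

```python
import numpy as np

# One segment is assumed to be a (4, 8) array of polynomial coefficients,
# rows for x, y, z and yaw, columns for t**0 .. t**7, plus a duration
# used as the time scale.
def eval_segment(coeffs, duration, t):
    s = t / duration                           # normalized time in [0, 1]
    powers = np.array([s**i for i in range(8)])
    return coeffs @ powers                     # [x, y, z, yaw] at time t
```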

Bezier curves are common in many graphics applications and are probably known to most users. They are parametric curves defined by control points, usually three or four. Bezier curves can also be expressed as polynomials, and this is what we will use here. To get a correct mapping to the desired 7th-degree polynomials we need more control points and will use 8 per segment. The basic idea is to define the trajectory as Bezier curves and make sure the control points are placed in such a way that the continuity requirements are satisfied.

Bezier curve with 8 control points
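
The conversion from the 8 control points of a degree-7 Bezier curve to ordinary polynomial coefficients is a standard expansion of the Bernstein basis. A small sketch (our own helper, not the example code):

```python
from math import comb
import numpy as np

# Expand a degree-7 Bezier curve, given its 8 control points, into
# ordinary polynomial coefficients: p(t) = sum_j coeffs[j] * t**j.
def bezier_to_poly(points):
    n = 7
    points = np.asarray(points, dtype=float)
    coeffs = []
    for j in range(n + 1):
        c = comb(n, j) * sum(
            (-1) ** (j - i) * comb(j, i) * points[i] for i in range(j + 1)
        )
        coeffs.append(c)
    return np.array(coeffs)
```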

On this page from the University of Cambridge, there is a good explanation of continuity across the joins between curves, with formulas for C0, C1 and C2 continuity. We also need C3 continuity, which can be calculated in the same manner.

With these formulas it is possible to set the handles of the Bezier curves to make sure we get a smooth ride.
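
As an illustration, here is what those conditions look like for two adjacent degree-7 segments with the same time scale, where Q0..Q7 are the control points of the previous segment and P0..P3 the first four of the next (a sketch based on our own derivation of the C0-C3 conditions):

```python
import numpy as np

# Given the last four control points of the previous segment, compute the
# first four control points of the next segment so that position,
# velocity, acceleration and jerk (C0-C3) are continuous at the join.
# Assumes both segments use the same time scale.
def continuous_start(q4, q5, q6, q7):
    q4, q5, q6, q7 = map(np.asarray, (q4, q5, q6, q7))
    p0 = q7                                # C0: positions coincide
    p1 = 2 * q7 - q6                       # C1: mirror q6 through q7
    p2 = q5 + 4 * (q7 - q6)                # C2
    p3 = 8 * q7 - 12 * q6 + 6 * q5 - q4    # C3
    return p0, p1, p2, p3
```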

We have added a python example that implements the ideas above. You can find it in crazyflie-lib-python/examples/positioning/bezier_trajectory.py. The design is based on Nodes that represent the connection points between Bezier curves (called Segments). Each Node has a set of handles that are shared between the Segments that use the Node. If not all handles are set, the implementation will set them to appropriate values; see the comments in the code for more details. The Node API only allows the user to set handles on one of the Segments; the handles for the other Segment are automatically set to generate a continuous trajectory.

The example uses nodes in the corners of a square and contains three parts:

  • No velocity in the nodes. The Crazyflie stops in the nodes. Similar to calling go-to in the HL commander.
  • Velocity in the nodes. A fluid motion all the way around.
  • A bit more aggressive settings to get a little action.

Finally, a video showing the full sequence; we used the Lighthouse for positioning.

Two weeks ago we posted about the demo we did for our new office move-in party. There have been multiple requests to share the script, but unfortunately it is a hacked old script that would not be useful at all as an example. So, last week, we made an example that can run a synchronized swarm sequence.

The example has been pushed to the examples folder of the crazyflie-lib-python project. It is called synchronizedSequence.py. Running this example unmodified with 3 Crazyflies in a positioning system will give you this result. (Like the previous demo, this was done in a Lighthouse system.)

One of the key design choices in the example is that it is based on a single control loop that can be synchronized with an outside system: in this example, there is a simple sleep of one second between each step of the sequence, but it could, for example, be changed into a MIDI-clock receiver to synchronize the sequence with music.
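
A hedged sketch of that loop structure (the sequence format and step time are placeholders; cf.commander.send_position_setpoint() is the cflib call for streaming position setpoints):

```python
import time

# One shared loop drives all Crazyflies: each tick sends the next
# setpoint to every drone, then waits for the next synchronization point.
# Swapping the sleep for e.g. a MIDI-clock wait would sync it to music.
def run_sequence(crazyflies, sequence, step_time=1.0):
    for step in sequence:                 # one entry per tick
        for cf, (x, y, z, yaw) in zip(crazyflies, step):
            cf.commander.send_position_setpoint(x, y, z, yaw)
        time.sleep(step_time)             # the synchronization point
```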

The example was developed with the help of Victor, a student we have hired to help out during the summer. He then played around a little bit to make a nine-Crazyflie sequence that is more impressive:

I uploaded Victor’s sequence to a GitHub gist as it can be good for inspiration. One word of warning though: as is, the sequence contains some vertical movements that are quite aggressive, and the part where the Crazyflies fly directly on top of each other should be considered more of a stress test.

We have recently moved to a new, bigger office. With the summer arriving in Sweden, it was time to organize a small move-in after-work party with friends and family. For the occasion we wanted to play around with a small swarm of Crazyflies and the new Lighthouse positioning. Time being a scarce resource, we set up the ICRA 2019 demo in the flight lab so that we would be able to fly during the party. We also started looking at the old swarm show that we ran with the LPS a year ago, to see if we could run it with the Lighthouse:

The show was essentially a sequence of setpoints sent from a python script, controlling 9 Crazyflie 2.1s equipped with a Lighthouse deck on top and a LED-ring deck on the bottom, synchronized to music. We set up the Crazyflies in the Lighthouse positioning system and converted the script to use the High-level Commander GOTO setpoints. We look forward to trying more advanced control problems, like trajectories, to make more impressive synchronized flight choreographies in the future, but for now it already looks quite good even with only GOTOs:

Three of us were at ICRA 2019 in Montreal last week, where we met a lot of interesting people and a lot of Crazyflie users. Thanks a lot to everyone that dropped by our booth, and for those that missed it, we are planning on being at IROS 2019 later this year, so we might see you there :-).

We have already described our demo in a previous post; now that we have run it, we can report on how it went. We are also updating the ICRA 2019 page with the latest source code and information so that anyone interested can reproduce the demo.

In its final state at the conference, the demo contained 8 Crazyflie 2.1s equipped with a Lighthouse deck and a Qi charger deck. There were 8 3D-printed charging pads on the floor with Ikea Qi wireless chargers, and two HTC Vive base stations (V1) on tripods. The full system was contained in a cage built from 50 cm-long aluminium tubes and nets.

The full setup of the booth took us about 4 hours: about 3 hours for the cage, 15 minutes for the demo (including calibration of the Lighthouse base-station geometry), and the rest for fine-tuning. This is by far our best setup time. We still need to prettify the cage a bit and make it easier to install, but we will most likely re-use this system for upcoming conferences.

In this demo we aimed at keeping a Crazyflie in the air at every moment. To do so, we had a computer connected to all 8 Crazyflies, sending one of them the signal to start flying whenever no other was in the air flying a trajectory. The flight was completely autonomous, as we explained in our previous blog post. We set up the Crazyflies to fly 2 cycles and then land, which increased the swap rate and so increased the ‘action’, though it also meant that during a swap two Crazyflies were flying. This drained the batteries a bit more than expected: after about an hour all the Crazyflies were below the take-off threshold and we had to wait ~30 seconds between flights. Here is a video of it in action:

The demo was very care-free: we had very few crashes, and we mostly restarted the Crazyflies manually to swap batteries and add a bit of power to the swarm. On the last day we decided to spice it up a little by adding a chair to the cage; by calibrating the chair position and the flight trajectory, we managed to have the Crazyflie partly fly under it. This worked quite well most of the time and showed that the Lighthouse positioning is repeatable and copes fairly well with short occlusions in the path. We also found out that even though a single Crazyflie always flies the same trajectory, two different Crazyflies will not. We think differences in propeller stiffness, and the fact that our Mellinger position controller has not been calibrated for changing yaw, are the main reasons.

If you want to know more about the demo, or if you want to reproduce it, do not hesitate to visit the ICRA 2019 page, which explains it in more detail and links to the source code of everything, including the 3D-printed parts for the cage and the landing pads.

Hi everyone, here at the Integrated Systems Laboratory of ETH Zürich, we have been working on an exciting project: PULP-DroNet.
Our vision is to enable artificial-intelligence-based autonomous navigation on small flying robots, such as the Crazyflie 2.0 (CF) nano-drone.
In this post, we will give you the basic ideas that make the CF able to fly fully autonomously, relying only on onboard computational resources: that means no human operator, no ad-hoc external signals, and no remote base-station!
Our prototype can follow a street or a corridor and at the same time avoid collisions with unexpected obstacles, even when flying at high speed.


PULP-DroNet is based on the Parallel Ultra Low Power (PULP) project envisioned by ETH Zürich and the University of Bologna.
In the PULP project, we aim to develop an open-source, scalable hardware and software platform to enable energy-efficient complex computation where the available power envelope is only a few milliwatts, such as advanced Internet-of-Things nodes, smart sensors — and of course, nano-UAVs. In particular, we address the computational demands of applications that require flexible and advanced processing of data streams generated by sensors such as cameras, which is beyond the capabilities of typical microcontrollers. The PULP project has its roots in the RISC-V instruction set architecture, an innovative academic and research open-source architecture and an alternative to ARM.

The first step to make the CF autonomous was the design and development of what we call the PULP-Shield, a small form-factor pluggable deck for the CF, featuring two off-chip memories (Flash and RAM), a QVGA ultra-low-power grey-scale camera and the PULP GAP8 System-on-Chip (SoC). The GAP8, produced by GreenWaves Technologies, is the first commercially available embodiment of our PULP vision. This SoC features nine general-purpose RISC-V-based cores organised in an on-chip microcontroller (1 core, called Fabric Ctrl) and a cluster accelerator of 8 cores, with 64 kB of local L1 memory accessible at high bandwidth from the cluster cores. The SoC also hosts 512 kB of L2 memory.

Then, as the algorithmic heart of our autonomous navigation engine, we selected an advanced artificial intelligence algorithm based on DroNet, a Convolutional Neural Network (CNN) originally developed by our friends at the Robotics and Perception Group (RPG) of the University of Zürich.
To enable the execution of DroNet on our resource-constrained system, we developed a complete methodology to map computationally intense deep neural networks onto the PULP-Shield and the GAP8 SoC.
The network outputs two pieces of information, a probability of collision and a steering angle, which are translated into the control information used by the drone: respectively, forward velocity and angular yaw rate. The layout of the network is the following:
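
For intuition, this is roughly how the two outputs can be turned into commands, following the control scheme of the original DroNet paper (constants and names are illustrative, not the PULP-DroNet code):

```python
V_MAX = 1.0      # assumed maximum forward velocity [m/s]
YAW_GAIN = 90.0  # assumed scaling from steering angle to yaw rate [deg/s]
ALPHA = 0.7      # low-pass factor to smooth the commands

def dronet_to_commands(p_collision, steering, prev_v, prev_yaw):
    # Slow down as the collision probability rises; steer toward the
    # predicted angle. Exponential smoothing keeps the flight stable.
    v = ALPHA * prev_v + (1 - ALPHA) * (1.0 - p_collision) * V_MAX
    yaw_rate = ALPHA * prev_yaw + (1 - ALPHA) * YAW_GAIN * steering
    return v, yaw_rate
```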

Therefore, our mission was to deploy all the required computation onboard the PULP-Shield mounted on the CF, enabling fully autonomous navigation. To put the problem into perspective, in the original work by the RPG, the DroNet CNN enabled autonomous navigation of larger drones (e.g., the Parrot Bebop). In that use case, computational power and memory were not a problem, thanks to the streaming of images to a remote base-station, typically a laptop consuming 30-100 W or more. So our mission required running a similar workload within 1/1000 of the original power budget.
To make this work, we combined fixed-point arithmetic (instead of “traditional” floating point), minimal modifications to the original topology, and optimised memory and computation usage. This allowed us to squeeze DroNet into the ultra-small power budget available onboard. Our most energy-efficient configuration delivers 6 frames per second (fps) within only 64 mW (including all the electronics on the PULP-Shield), and when we push the PULP platform to its limit, we achieve an impressive 18 fps within just 3.5% of the total CF power envelope — the original DroNet runs at 20 fps on an Intel i7.

Do you want to check it out for yourself? All our hardware and software designs, including our code, schematics, datasets, and trained networks, have been released and made available for everyone as open source and open hardware on GitHub. We look forward to other enthusiasts’ contributions, both in hardware enhancement and in software (e.g., smarter networks), to create a great community of people interested in working together on smart nano-drones.
Last but not least, the piece of information you have all been waiting for: yes, soon Bitcraze will allow you to enjoy our PULP-Shield. Actually, even better, you will get to play with its evolution! Stay tuned, as more information about the “code-name” AI-deck will be released in upcoming posts :-).

If you want to know more about our work:

Questions? Drop us an email (dpalossi at iis.ee.ethz.ch and fconti at iis.ee.ethz.ch)

Last week we blogged about the early-release version of the Lighthouse deck and showed a nice push-around demo of the Crazyflies using the Vive controller. Now we wanted to push the system even further, by making a Lighthouse painting!

We started by adding a LED-ring deck on the bottom of the Crazyflie 2.1, with the Lighthouse deck attached to the top. We were able to access the input of the Vive controller’s trackpad and link it to a specific color / hue value. The LED ring can display any color in the RGB range, so in theory you could paint in whatever color you like. For now, the brightness was fixed, but this could easily be added to the demo script as well. A sketch of the color mapping is shown below.
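
A hedged sketch of that mapping (assuming the LED-ring’s solid-color effect and its ring.solid* parameters; the helper name and scaling are ours):

```python
import colorsys

# Sketch: map the Vive trackpad x position (-1 .. 1) to a hue and set the
# LED ring to that color. Brightness is kept fixed, as in the text.
def set_color_from_trackpad(cf, pad_x):
    hue = (pad_x + 1.0) / 2.0                     # [-1, 1] -> [0, 1]
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)  # full saturation/value
    cf.param.set_value('ring.effect', '7')        # solid-color effect
    cf.param.set_value('ring.solidRed', str(int(r * 255)))
    cf.param.set_value('ring.solidGreen', str(int(g * 255)))
    cf.param.set_value('ring.solidBlue', str(int(b * 255)))
```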

To capture the light trace, we needed to take a long-exposure image, so the flight arena needed to stay completely dark. Luckily, this was easy for us, since we do not have any windows in our new testing arena. Our camera was a Nikon D5600 with a manually controlled shutter (press to open the shutter and press again to close it). The aperture was set at f/22. Nevertheless, these settings are very dependent on the environment, so we had to do some trial and error to get them right.

Aperture too wide… perfect!

Once we had the setup finished, we made several long-exposure photo paintings, with one person controlling the camera and another painting the picture into thin air. Of course, the artist had to imagine the creation, as we were not able to see the result until after the picture was taken. Also, big gestures were required to complete a painting, as the Crazyflie’s and the Vive controller’s movements were synced 1:1, so adding a multiplication factor would come in handy. Nonetheless, the results were amazing.

Some nice examples of a single Crazyflie flying based on the Vive’s position, changing color based on the trackpad

We took it even further by making the Crazyflie fly a predefined trajectory with a planned color scheme, without the Vive controller. First, it flew three concentric circles in green, red and blue using the High-level Commander with the PID controller setting (the circles would probably close more neatly with the Mellinger controller setting). We were also able to reproduce the Bitcraze logo in the same fashion. In both long-exposure photos it is still possible to trace the Crazyflie itself, thanks to its standard LEDs, so you can easily observe where it took off and where it flew between shapes.

The Crazyflie flying a predefined trajectory in several shapes

The demo python scripts of the above flights can be found here:

And we also took a video of the Bitcraze logo being drawn. The mobile phone camera had some problems focusing in the dark, but it gives a good idea of how things work:

We have just released the Crazyflie Lighthouse deck as Early Access! It is now available in our web store.

The Lighthouse deck allows the Crazyflie to estimate its position using the HTC Vive tracking base-stations normally used for virtual reality. The positioning is done by tracking the timing of rotating infra-red laser beams emitted from the base-stations. This system has the advantages of very good precision and of allowing the Crazyflie to acquire its position autonomously: once the Crazyflie knows the position and orientation of the base-stations, it can calculate its own position without the help of any external system.

The release as Early Access means that we have finished the hardware and are confident that it works properly. We have not yet finished all the software and firmware, but by releasing the hardware early we can get it into the hands of users quickly so they can try it out. In return, we hope to get some help making the software better.

Current state

  • The Crazyflie can calculate its position from the received Vive Base-Station V1 signals.
  • Direct line of sight should be kept to both base-stations. The Lighthouse deck has 4 receivers so in the future it will be possible to get a position from seeing only one base station.
  • Base-Station V2 support is still being worked on; it will only require a software update.
  • The base-station positions are hard-coded in the Crazyflie and found using SteamVR. Ideally, they should be sent from the ground, and the Crazyflie should calculate the base-station positions automatically.
  • The previous point means that a full VR system, or at least two base stations and a controller or tracker, is currently required to set up the system. In the future we hope to set up the system with only a Crazyflie and two base stations.
  • Since this version of the deck only has horizontal sensors, it is important that the base-stations are placed above the flight space, and the Crazyflies should fly ~40 cm below the base-stations.

As long as the deck is in Early Access, the main documentation will be the Lighthouse positioning page in the wiki. This page will be updated a lot in the near future and will track the development progress.

Demo

We have written a small demo script that allows setting the position of the Crazyflie using a Vive controller. It is a good demo for experimenting with the precision of the system, and for mixing VR and Crazyflies, since they share the same tracking space:

In this demo, a python script connects to two Crazyflies, acquires the controller position using OpenVR and makes the Crazyflies take off above the controller. Then, when the controller trigger is pushed, the setpoint of the closest Crazyflie is changed to follow the controller movement. The Crazyflies fly autonomously, receiving only position setpoints from the python script; position estimation and control are handled onboard. A minimal sketch of the OpenVR side is shown below.
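
This sketch uses the pyopenvr bindings; device lookup is simplified and the Crazyflie side is omitted:

```python
import openvr

# Read the position of the first tracked controller through OpenVR. The
# last column of the 3x4 pose matrix is the position in the Lighthouse
# tracking frame, the same frame the Crazyflie flies in.
vr = openvr.init(openvr.VRApplication_Other)
poses = vr.getDeviceToAbsoluteTrackingPose(
    openvr.TrackingUniverseStanding, 0, openvr.k_unMaxTrackedDeviceCount)

for i in range(openvr.k_unMaxTrackedDeviceCount):
    if (vr.getTrackedDeviceClass(i) == openvr.TrackedDeviceClass_Controller
            and poses[i].bPoseIsValid):
        m = poses[i].mDeviceToAbsoluteTracking
        x, y, z = m[0][3], m[1][3], m[2][3]
        print('Controller position:', x, y, z)
        break

openvr.shutdown()
```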

We are pretty excited by this release, since we think this positioning technology will be very useful for a lot of use-cases. Let us know what you think, and do not hesitate to contribute if you want to improve the system :).

A few weeks ago we wrote about the release of the Multi-ranging deck and the new STEM ranging bundle.

The STEM ranging bundle is a great addition in the classroom for a wide range of students. By combining the Flow deck v2’s time-of-flight distance sensor and optical flow sensor with the Multi-ranger deck’s ability to measure distance to objects, the Crazyflie gets position and spatial awareness.

We have shot a video that shows the bundle in action!

To get started with the STEM ranging bundle we have created a guide with step-by-step instructions. The code for the demos in the video is available in the examples directory of the crazyflie-lib-python project:

  • multiranger_push.py: When the application is launched, the Crazyflie will take off and hover. If anything gets close to the right/left/front/back sensors, the Crazyflie will move in the opposite direction.
  • multiranger_pointcloud.py: When the application is launched, the Crazyflie will take off and hover, and a 3D plot will show what is detected by the Multi-ranger deck sensors. By default the left/right/front/back/up sensors are plotted, but you can also add the Crazyflie position and the down sensor if you like. The Crazyflie can be moved around using the arrow keys on the keyboard, with w/s for up/down and a/d for rotating CCW/CW. For more info, see the documentation in the example.

We love feedback so please leave some comments in the field below!