
It has been about a month since the AI-deck became available in Early Access, and quite a few of you now own an AI-deck yourselves. We have a new development to share: we previously thought we had selected a gray-scale image sensor, but it came to our attention that the camera actually contains a color image sensor, which in hindsight is pretty obvious on a second viewing of the video presented in this blog post (thanks PULP project ETH Zurich for letting us know!).

A color image from the AI-deck

This came as a bit of a surprise, but a color camera also adds new possibilities, like making the Crazyflie follow an orange ball, or training CNNs to incorporate color in their classification as well. The only catch is that it requires an extra preprocessing step to retrieve the color image, which is explained in the next section.

Demosaicing

Essentially, all CMOS image sensors are gray-scale by nature. In order to retrieve color from a scene, manufacturers place a Bayer filter on top of the image sensor, so that each pixel only receives red, green or blue light. A color filter array does not have to be RGB, and many other patterns exist, but here we will only talk about the Bayer filter. If the pattern of the filter is known, the pixels belonging to each color can be interpolated with each other in order to fill in the gaps in between. This process is called demosaicing, and it creates the RGB channels that are combined into a color image.

Process of demosaicing with a Bayer filter

Currently we have only implemented a simple nearest-neighbor interpolation scheme for demosaicing, which is fine for demonstration purposes but is not the best technique out there. Such a simple interpolation is not very ‘edge and detail’ aware and can therefore cause artifacts, like the Moiré effects seen below. We are still experimenting with how to get a better image and how to translate that to all the examples in the AI-deck example repository (see this issue if you would like to follow or take part in the discussion).

Moiré effect
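
To make the idea concrete, here is a minimal nearest-neighbor demosaicing sketch in Python/NumPy. It assumes an RGGB Bayer pattern and even image dimensions; the actual pattern of the AI-deck sensor and the preprocessing used in the examples may differ.

<code>import numpy as np

def demosaic_nearest_neighbor(raw):
    """Nearest-neighbor demosaicing sketch for an RGGB Bayer pattern.

    `raw` is a 2D uint8 array with even dimensions. Each 2x2 Bayer cell is
    collapsed into one RGB value that is copied to all four pixels of the
    cell, i.e. the nearest sample is reused instead of doing any edge-aware
    interpolation (hence artifacts such as the Moire effect shown above).
    """
    h, w = raw.shape
    red = raw[0::2, 0::2]
    green = ((raw[0::2, 1::2].astype(np.uint16) +
              raw[1::2, 0::2]) // 2).astype(np.uint8)  # average the two greens
    blue = raw[1::2, 1::2]

    rgb = np.zeros((h, w, 3), dtype=np.uint8)
    for channel, plane in enumerate((red, green, blue)):
        rgb[0::2, 0::2, channel] = plane
        rgb[0::2, 1::2, channel] = plane
        rgb[1::2, 0::2, channel] = plane
        rgb[1::2, 1::2, channel] = plane
    return rgb

# Example with a random frame; the resolution here is just for the demo.
color = demosaic_nearest_neighbor(
    np.random.randint(0, 256, (244, 324), dtype=np.uint8))
print(color.shape)  # (244, 324, 3)
</code>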

So technically, once we have the color image, it can be converted to a gray-scale image, which can be used for the examples as-is. However, there is a reduction in quality, since the full pixel resolution is not used to obtain that gray-scale image. We are currently discussing whether it would be useful to get the gray-scale version of this camera and make that available as well, so let us know if you would be interested!

Feedback and Early Access

Like we said before, there are now quite a few of you out there who have an AI-deck in your possession. As it is in Early Access, the software part is still in full development. However, since we have not received any negative feedback from you, we assume that everything is fine and peachy!

Just kidding ;) we know that the AI-deck is quite a challenging deck to work with and we know for sure that many of you probably have questions or have something to say about working with it. Buying an Early Access product also comes with a little bit of responsibility. The more feedback we get from you guys, the more we can tailor the software and support to help you and others, thereby advancing the product forward and getting it out of the early access phase.

So please let us know if you are having any trouble starting up by posting a thread on the forum (we have a special AI-deck group!), or if there are any issues with the examples or the documentation of the AI-deck repo. We, and also our collaborators at Greenwaves Technologies (makers of the GAP8 chip), are more than happy to help out. That is what we are here for :)

Autonomous Robotics at UW Seattle

Our team’s Crazyflie quadcopter was equipped with Bitcraze’s Optic Flow deck and Multi-ranger deck. A BH-1750 light intensity sensor was soldered to a Bitcraze Prototype deck to complete our hardware.

The Crazyflie 2.1 was the perfect robotics platform for an introduction to autonomous robotics at the University of Washington winter quarter 2020. Our Bio-inspired Robotics graduate course completed a series of Crazyflie projects throughout the 10 weeks that built our skills in:

  • Python
  • Robot Operating System (ROS)
  • assembling custom sensors
  • writing new drivers
  • designing and testing control algorithms
  • troubleshooting and independent learning

The course was offered by UW Mechanical Engineering’s Autonomous Insect Robotics Laboratory, headed by Dr. Sawyer B. Fuller. The course was supported by PhD candidate Melanie Anderson, who has done fantastic research with her Crazyflie-based Smellicopter. The final project was an opportunity to turn a Crazyflie quadcopter into a bio-inspired autonomous robot. Our three person team of UW robotics grad students included Nishant Elkunchwar, Krishna Balasubramanian, and Jessica Noe.

Light Seeking Run-and-Tumble Algorithm Inspired by Bacterial Chemotaxis

The goal for our team’s Crazyflie was to seek and identify a light source. We chose a run-and-tumble algorithm inspired by bacterial chemotaxis. For a quick explanation of bacterial chemotaxis, please see Andrea Schmidt’s explanation of chemotaxis on Dr. Mehran Kardar’s MIT teaching page. She provides a helpful animation here.

In both bacterial chemotaxis and our run-and-tumble algorithm, there is a body (the bacteria or the robot) that can:

  • move under its own power.
  • detect the magnitude of something in the environment (e.g. chemical put off by a food source or light intensity).
  • determine whether the magnitude is greater or less than it was a short time before.

This method works best if the environment contains a strong gradient from low concentration to high concentration that the bacteria or robot can follow towards a high concentration source.

The details of the run-and-tumble algorithm are shown in a finite state machine diagram below. The simple summary is that the Crazyflie takes off, begins moving forward, and if the light intensity is getting larger it continues to “Run” in the same direction. If the light intensity is getting smaller, it will “Tumble” to a random direction. Additional layers of decision making are included to determine if the Crazyflie must “Avoid Obstacle”, or if the source has been reached and the Crazyflie quadcopter should “Stop”.

Run & Tumble Algorithm
The run-and-tumble algorithm represented as a finite state machine.
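
To make the state machine concrete, here is a small Python sketch of the decision logic (not the team’s actual code). The 0.5 m obstacle threshold and the 800 lux stop level are the values quoted later in this post; the function and variable names are purely illustrative.

<code># Minimal sketch of the run-and-tumble decision logic described above.
OBSTACLE_THRESHOLD_M = 0.5   # Multi-ranger distance that triggers "Avoid Obstacle"
STOP_THRESHOLD_LUX = 800.0   # light level that triggers "Stop" (land)

def next_action(light, previous_light, nearest_obstacle_m):
    """Choose the next action from the light readings and obstacle distance."""
    if light >= STOP_THRESHOLD_LUX:
        return "Stop"                  # source reached: land
    if nearest_obstacle_m < OBSTACLE_THRESHOLD_M:
        return "Avoid Obstacle"        # turn away from the obstacle first
    if light >= previous_light:
        return "Run"                   # intensity increasing: keep heading
    return "Tumble"                    # intensity decreasing: pick a random heading

# Example: intensity dropped and nothing is in the way -> "Tumble"
print(next_action(light=120.0, previous_light=150.0, nearest_obstacle_m=1.2))
</code>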

Crazyflie Hardware

To implement the run-and-tumble algorithm autonomously on the Crazyflie, we needed a Crazyflie quadcopter and the additional sensors described below.

The Optic Flow deck was a key sensor in achieving autonomous flight. This sensor package determines the Crazyflie’s height above the surface and tracks its horizontal motion from the starting position along the x-direction and y-direction coordinates. With the Optic Flow installed, the Crazyflie is capable of autonomously maintaining a constant height above the surface. It can also move forward, back, left, and right a set distance or at a set speed. Several other pre-programmed movement behaviors can also be chosen. This Bitcraze blog post has more information on how the Flow deck works and this post by Chuan-en Lin on Nanonets.com provides more in-depth information if you would like to read more.

The Bitcraze Multi-ranger deck provided the sensor data for obstacle avoidance. The Multi-ranger detects the distance from the Crazyflie to the nearest object in five directions: forward, backward, right, left, and above. Our threshold to trigger the “Avoid Obstacle” behavior was an obstacle detected within 0.5 meters of the Crazyflie quadcopter.
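
For reference, here is a hedged sketch of how such a distance check could look with the Multiranger helper in the Crazyflie python library (this is not the project’s ROS-based implementation, and the radio URI is only an example):

<code>import time

import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.utils.multiranger import Multiranger

URI = 'radio://0/80/2M/E7E7E7E7E7'   # example address
THRESHOLD_M = 0.5                    # "Avoid Obstacle" trigger distance

def too_close(distance_m):
    # The Multiranger reports None when nothing is detected in range.
    return distance_m is not None and distance_m < THRESHOLD_M

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie()) as scf:
    with Multiranger(scf) as ranger:
        for _ in range(50):
            readings = {'front': ranger.front, 'back': ranger.back,
                        'left': ranger.left, 'right': ranger.right,
                        'up': ranger.up}
            if any(too_close(d) for d in readings.values()):
                print('Obstacle within 0.5 m:', readings)
            time.sleep(0.1)
</code>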

The Prototype deck was a quick, simple way to connect the BH-1750 light intensity sensor to the pins of the Crazyflie to physically integrate the sensor with the quadcopter hardware. This diagram shows how the header positions connect to the rows of pads in the center of the deck. We soldered a header into the center of the deck, then soldered connections between the pads to form continuous connections from our header pin to the correct Crazyflie header pin on the left or right edges of the Prototype deck. The Bitcraze Wiki provides a pin map for the Crazyflie quadcopter and information about the power supply pins. A nice overview of the BH-1750 sensor is found on Components101.com, this shows the pin map and the 4.7 kOhm pull-up resistor that needs to be placed on the I2C line.

It was easy to connect the decks to the Crazyflie because Bitcraze clearly marks “Front”, “Up” and “Down” to help you orient each deck relative to the Crazyflie. See the Bitcraze documentation on expansion decks for more details. Once the decks are properly attached, the Crazyflie can automatically detect that the Flow and Multi-Ranger decks are installed, and all of the built-in functions related to these decks are immediately available for use without reflashing the Crazyflie with updated firmware. (We appreciated this awesome feature!)

Crazyflie Firmware and ROS Control Software

Bitcraze provides a downloadable virtual machine (VM) to help users quickly start developing their own code for the Crazyflie. Our team used a VM that was modified by UW graduate students Melanie Anderson and Joseph Sullivan to make it easier to write ROS control code in the Python coding language to control one or more Crazyflie quadcopters. This was helpful to our team because we were all familiar with Python from previous work. The standard Bitcraze VM is available on Bitcraze’s Github page. The Modified VM constructed by Joseph and Melanie is available through Melanie’s Github page. Available on Joseph’s Github page is the “rospy_crazyflie” code that can be combined with existing installs of ROS and Bitcraze’s Python API if users do not want to use the VM options.

  • “crazyflie-firmware” – a set of files written in C that can be uploaded to the Crazyflie quadcopter to overwrite the default firmware
    • In the Bitcraze VM, this folder is located at “/home/bitcraze/projects/crazyflie-firmware”
    • In the Modified VM, this folder is located at “Home/crazyflie-firmware”
  • “crazyflie-lib-python” (in the Bitcraze VM) or “rospy_crazyflie” (in the Modified VM) – a set of ROS files that allows high-level control of the quadcopter’s actions
    • In the Bitcraze VM, “crazyflie-lib-python” is located at “/home/bitcraze/projects/crazyflie-lib-python”
    • In the Modified VM, navigate to “Home/catkin_ws/src” which contains two main sets of files:
      • “Home/catkin_ws/src/crazyflie-lib-python” – a copy of the Bitcraze “crazyflie-lib-python”
      • “Home/catkin_ws/src/rospy_crazyflie” – the modified version of “crazyflie-lib-python” that includes additional ROS and Python functionality, and example scripts created by Joseph and Melanie

In the Modified VM, we edited the “crazyflie-firmware” files to include code for our light intensity sensor, and we edited “rospy_crazyflie” to add functions to the ROS software that controls the Crazyflie. Having the VM environment saved our team a huge amount of time and frustration – we did not have to download a basic virtual machine, then update software versions, find libraries, and track down fixes for incompatible software. We could just start writing new code for the Crazyflie.

The Modified VM for the Crazyflie takes advantage of the Robot Operating System (ROS) architecture. The example script provided within the Modified VM helped us quickly become familiar with basic ROS concepts like nodes, topics, message types, publishing, and subscribing. We were able to understand and write our own nodes that published information to different topics and write nodes that subscribed to the topics to receive and use the information to control the Crazyflie.
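
As a generic illustration of those concepts (not the project’s actual nodes or message definitions), a minimal rospy publisher/subscriber pair could look like this:

<code>#!/usr/bin/env python
import rospy
from std_msgs.msg import Float32

# One node that both publishes and subscribes to a hypothetical
# 'light_intensity' topic, just to show the basic ROS plumbing.

def on_light(msg):
    rospy.loginfo('light intensity: %.1f lux', msg.data)

def main():
    rospy.init_node('light_demo')
    pub = rospy.Publisher('light_intensity', Float32, queue_size=10)
    rospy.Subscriber('light_intensity', Float32, on_light)
    rate = rospy.Rate(10)                    # 10 Hz
    while not rospy.is_shutdown():
        pub.publish(Float32(data=123.4))     # a dummy sensor reading
        rate.sleep()

if __name__ == '__main__':
    main()
</code>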

For more information, see the Bitcraze Development overview.

Updating the crazyflie-firmware

A major challenge of our project was writing a new driver that could be added to the Crazyflie firmware to tell the Crazyflie system that we had connected an additional sensor to the Crazyflie’s I2C bus. Our team referenced open-source Arduino drivers to understand how the BH-1750 connects to an Arduino I2C bus. We also looked at the open-source drivers written by Bitcraze for the Multi-ranger deck to see how it connects to the Crazyflie I2C bus. By looking at all of these open-source examples and studying how to use I2C communication protocols, our team member Nishant Elkunchwar was able to write a driver that allowed the Crazyflie to recognize the BH-1750 signal and convert it to a sensor value that could be used within the Crazyflie firmware and our ROS-based control system. That driver is available on Nishant’s Github. The driver needed to be placed into the appropriate folder: “…/crazyflie-firmware/src/deck/drivers/src”.

The second change to the crazyflie-firmware is to add a “config.mk” file in the folder “…/crazyflie-firmware/tools/make”. Information about the “config.mk” file is available in the Bitcraze documentation on configuring the build.

The final change to the crazyflie-firmware is to update the makefile “Makefile” in the location “…/crazyflie-firmware”. The “Makefile” changes include adding one line to the section “# Deck API” and two lines to the section “# Decks”. Information about compiling the Makefile is available in the Bitcraze documentation about flashing the quadcopter.

Making additions to the ROS control architecture

The ROS control architecture includes messages. We needed to define three new message types for our new ROS control files. In the folder “…/catkin_ws/src/rospy_crazyflie/msg/msg” we added one file for each new message type. We also updated “CMakeLists.txt” to add the names of our message files in the section “add_message_files( )”.

The second part of our ROS control was a set of scripts written in Python. These included our run-and-tumble algorithm control code, publisher scripts, and a plotter script. These are all available in the project’s Github.

Characterizing the Light Sensor

At this point, the light intensity sensor was successfully integrated into the Crazyflie quadcopter. The new code was written and the Crazyflie quadcopter was reflashed with new firmware. We had completed our initial troubleshooting, and the next step was to characterize the light intensity in our experimental setup.

Experimental setup for light intensity characterization.

This characterization was done by flying the Crazyflie at a fixed distance above the floor in tightly spaced rows along the x and y horizontal directions. The resulting plot (below) shows that the light intensity increases exponentially as the Crazyflie moves towards the light source.

The light characterization allowed us to determine an intensity threshold that is only reached near the light source. If this threshold is met, the algorithm’s “Stop” action is triggered, and the Crazyflie lands.

Light intensity (units of lux) was experimentally characterized by piloting the Crazyflie in a linear pattern at a constant height above the ground. The resulting plot shows that light intensity is characterized by an exponential roll off in both the x and y directions.

Testing the Run-and-Tumble Algorithm

With the light intensity characterization complete, we were able to test and revise our run-and-tumble algorithm. At each loop of the algorithm, one of the four actions is chosen: “Run”, “Tumble”, “Avoid Obstacle”, or “Stop”. The plot below shows a typical path with the action that was taken at each loop iteration.

Flight Tests of the Run-and-Tumble Algorithm

In final testing, we performed four trial runs with 100% success locating the light source. Our test area was approximately 100 square feet and included one light source and two obstacles. The average search time was 1 minute 41 seconds.

The “Avoid Obstacle” and “Run” behavior are demonstrated in the above video clip (1.5x actual speed).


The “Run” and “Tumble” actions are demonstrated in the above video clip (2x actual speed). At the end, the “Stop” action is demonstrated when the light intensity reaches the threshold value of 800 lux, indicating that the Crazyflie has found the light source and should land.

Lessons Learned

This was one of the best courses I’ve taken at the University of Washington. It was one of the first classes where a robot could be incorporated, and playing with the Crazyflie was pure fun. Another positive aspect was that the course had the feel of a boot camp for learning how to build, control, test, and improve autonomous robots. This was only possible because Bitcraze’s small, indoor quadcopter with optic flow capability made it possible to safely operate several quadcopters simultaneously in our small classroom as we learned.

This development project was really interesting (aka difficult…) and we went down a few rabbit holes as we tried to level up our knowledge and skills. Our prior experience with Python helped us read the custom example scripts provided in our course for the ROS control program, but we had quite a bit to learn about the ROS architecture before we could write our own control scripts.

Nishant made an extensive study of I2C protocols as he wrote the new driver for the BH-1750 sensor. One of the biggest lessons I learned in this project was that writing drivers to integrate a sensor to a microcontroller is hard. By contrast, using the Bitcraze decks was so easy it almost felt like cheating. (In the nicest way!)

On the hardware side, the one big problem we encountered during development was accidentally breaking the 0.5 mm headers on the Crazyflie quadcopter and the decks. The male headers were not long enough to extend from the Flow deck all the way up through the Prototype deck at the top, so we tried to solder extensions onto the pins. Unfortunately, I did not check the Bitcraze pin width and I just soldered on the pins we all had in our tool kits: the 0.1 inch (2.54 mm) wide pins that we use with our Arduinos and BeagleBones. These too-large pins damaged the female headers on the decks, and we lost connectivity on those pins. Fortunately, we were able to repair our decks by soldering on replacement female headers from the Bitcraze store. I wish now that the long pin headers were available back then.

In summary, this course was an inspiring experience and helped our team learn a lot in a very short time. After ten weeks working with the Crazyflie, I can strongly recommend the Crazyflie for robotics classes and boot camps.

Links to Project Files

Team’s Research Poster: https://github.com/thecountoftuscany/crazyflie-run-and-tumble/blob/master/documents/Project-Final_Poster.pdf

Github Link courtesy of Nishant Elkunchwar: Crazyflie-Run-and-Tumble

YouTube Links courtesy of Nishant Elkunchwar: Crazyflie locates Light, Simulation of Run and Tumble Algorithm in PyGame

We’re happy to announce the availability of the 2020.06 release! The release includes the Crazyflie firmware, the Crazyflie NRF firmware and the python library (0.1.11). You can find the full package in the Crazyflie Release repository, to be used for flashing through the python client.

More RAM

The major event of this release is the use of the Core Coupled Memory (CCM) in the Crazyflie. The CCM is a 64k RAM memory bank, and by moving memory blocks from the standard RAM to the CCM, we have freed up 64k of RAM! The 128k of RAM was almost full, so an extra 50% is good news.

One might ask why this has not been done earlier, and the answer is that the CCM has some special properties that have to be taken into account. It is RAM, just like the “normal” RAM, but it is connected to a different internal bus in the STM MCU. The most notable difference is that it cannot be used in DMA operations, which are commonly used when accessing sensors, and if a pointer to CCM memory is passed to a sensor driver things will go bad. To make it clear where the memory is located, we have introduced a macro to be used when explicitly moving a memory area to the CCM; otherwise it will end up in normal RAM.

Hopefully the chosen design will have very little impact on the “normal” firmware programmer. We have moved a bunch of memory blocks to the CCM that are “safe”, and most programmers can happily forget about the CCM and just enjoy the new 64k of available RAM!

Battery temperature

In release 2020.02 we introduced a battery temperature check to avoid charging the battery if it is too warm. Lithium batteries like to be charged within 0-45 degrees Celsius. To do this we used the temperature sensor within the nRF51822, which is mounted just under the battery. It however turned out that the temperature measurement is way too biased and, as a result, charging stops too early. So in this release we did more measurements and increased the allowed charging range.

It has been a while since we have updated you all on the AI deck. The last full blog post was in October, with some small updates here and there. It is not that we have not focused on it at all; on the contrary… this has been a high-priority project for a while now. It is just quite a complex board with a lot of bells and whistles, which can be challenging to work with sometimes so early in development, something that our previous intern can definitely agree with. We therefore wanted to wait until we were able to make sufficient progress before giving you an update… and now we have!

A Crazyflie 2.1 with the AI deck

Together with Greenwaves Technologies we have been trying to get the SDK of the GAP8 chip on the AI deck stable enough for an early release. The latest release of the SDK (version 3.4) has proved itself to work with relative ease on the AI deck after extensive testing. Currently it is possible to use OpenOCD for flashing and debugging, and it supports most commonly available debuggers with a JTAG connector. In the upcoming weeks both Bitcraze and Greenwaves will test and try out all examples of the SDK on the AI deck to make sure that everything is still compatible. The documentation will be extended as well. As there is so much to document, it might be difficult to catch all of it, so if you notify us and Greenwaves about anything that is missing once the AI deck is out, that will help us fill the knowledge gaps.

The AI deck also contains the ESP-based NINA module for establishing a WiFi connection. This enables users to stream video from the AI deck to their computers, which will be quite an essential tool if they would like to generate their own image database for training the CNNs for the GAP8 (and it happens to be quite practical for debugging too!). Currently you need to set the credentials of your local WiFi network and reflash the AI-deck to be able to connect and stream images, but we are working on turning the NINA into an access point instead, so that no reflashing would be required. We hope that we will be able to implement this before the release.

Top view of the AI deck

We are also adapting applications to make them suitable for the AI deck. For instance, we have adapted Greenwaves’ face-detector example to use the image streamer instead of the display available on the GAPuino boards. You can see a video of the result here underneath. Beware that this face-detector is not based on a CNN but on HOG descriptors, so it only works in good conditions where the face is well lit. However, it is possible to train a CNN to detect faces in TensorFlow and flash this on the AI deck with the GAPflow framework developed by Greenwaves. At Bitcraze we haven’t managed to try that out ourselves (we are close though!), but at least this example is a nice demonstration of the AI deck’s abilities together with the WiFi streamer. This example and more testing code can be found in our experimental repo here. For examples of GAPflow, please check out the examples/NNtool section of the GAP8 SDK.

For some reason WordPress has difficulty embedding the video that was supposed to be here, so please check https://youtu.be/0sHh2V6Cq-Q

Seeing how the development has been progressing, we are comfortable saying that the AI deck could be ready for early release sometime in the next month, so please keep an eye on our website! We will continue to test the GAP SDK’s stability, and we are very thankful to Greenwaves Technologies for their help so far. We will also work on getting-started guides to help you get acquainted with the AI deck, supplementing the already existing documentation about the GAP8 chip.

Even though the AI deck will soon be ready for early release, this piece of hardware is not for the faint-hearted, and embedded programming experience is a must. But keep in mind that the possibilities with the AI deck are huge, as it will mean that super-edge-computing on a 30-gram flying platform is available to anyone. It will all be worth it when you have your Crazyflie flying autonomously while being able to recognize its surroundings :)

Here is another blog post where we try to explain parts of the stabilizer framework of the Crazyflie. Last time, we talked about the controllers and state estimators as part of the stabilizer.c module, which was introduced in this blog post back in 2016. Today we will go into the commander framework, which handles the setpoints of the desired states that the controllers will try to steer the estimated state towards.

The Commander module

The commander module handles the incoming setpoints from several sources (src/modules/src/commander.c in the firmware). A setpoint can be set directly, either through a python script using the cflib / cfclient or the app layer (blue pathways in the figure), or by the high-level commander module (purple pathway). The high-level commander, in turn, can be controlled remotely from the python library or from inside the Crazyflie.

General framework of the stabilization structure of the Crazyflie with setpoint handling. * This part takes place on the computer through the cflib for Python, so there is also a communication protocol in between. It is left out of this schematic for easier understanding.

It is important to realize that the commander module also checks how long ago a setpoint has been received. If it has been a little while (defined by the threshold COMMANDER_WDT_TIMEOUT_STABILIZE in commander.c), it will set the attitude angles to 0 in order to keep the Crazyflie stabilized. If this takes longer than COMMANDER_WDT_TIMEOUT_SHUTDOWN, a null setpoint will be given, which will result in the Crazyflie shutting down its motors and falling from the sky. This won’t happen if you are using the high-level commander.

Setpoint structure

In order to understand the commander module, you must be able to comprehend the setpoint structure. The specific implementation can be found in src/modules/interface/stabilizer_types.h as setpoint_t in the Crazyflie firmware.

There are two levels of control:

  • Position (X, Y, Z)
  • Attitude (pitch, roll, yaw or in quaternions)

These can be controlled in different modes, namely:

  • Absolute mode (modeAbs)
  • Velocity mode (modeVelocity)
  • Disabled (modeDisable)
Setpoint structures per controller level

So if absolute position control is desired (go to point (1,0,1) in x,y,z), the controller will obey the values given in setpoint.position.xyz if setpoint.mode.xyz is set to modeAbs. If you would rather control velocity (go 0.5 m/s in the x-direction), the controller will listen to the values given in setpoint.velocity.xyz if setpoint.mode.xyz is set to modeVelocity. All the attitude setpoint modes will then be set to disabled (modeDisable). If only the attitude should be controlled, then all the position modes are set to modeDisable. This happens for instance when you are controlling the Crazyflie with a controller through the cfclient in attitude mode.
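
From the python library side, the same two levels can be exercised with the Commander class; cflib fills in the corresponding setpoint fields and modes for you. A hedged sketch (the URI is an example, and setpoints must be re-sent continuously to keep the watchdog described above happy):

<code>import time

import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = 'radio://0/80/2M/E7E7E7E7E7'   # example address

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie()) as scf:
    commander = scf.cf.commander
    # Absolute position control: go to (1, 0, 1) m with yaw 0
    for _ in range(50):
        commander.send_position_setpoint(1.0, 0.0, 1.0, 0.0)
        time.sleep(0.1)
    # Velocity control: 0.5 m/s along x while holding 1 m height
    for _ in range(50):
        commander.send_hover_setpoint(0.5, 0.0, 0.0, 1.0)
        time.sleep(0.1)
    commander.send_stop_setpoint()   # null setpoint: motors off
</code>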

High level commander

Structure of the high level commander

As already explained before: the high-level commander handles the setpoints from within the firmware based on a predefined trajectory. This was merged as part of the Crazyswarm project of the USC ACT lab (see this blog post). The high-level commander uses a planner to generate smooth trajectories based on actions like ‘take off’, ‘go to’ or ‘land’, using 7th-order polynomials. The planner generates a group of setpoints, which will be handled by the high-level commander and sent one by one to the commander framework.

It is also possible to upload your own custom trajectory to the memory of the Crazyflie, which you can try out with the script examples/autonomous_sequence_high_level.py of the crazyflie python library repository. Please see this blog post to learn more.

Support in the python lib (CFLib)

There are four main ways to interact with the commander framework from the python library.

  1. Send setpoints directly using the Commander class from the Crazyflie object, this can be seen in the autonomousSequence.py example for instance.
  2. Use the MotionCommander class, as in motion_commander_demo.py. The MotionCommander class exposes a simplified API and sends velocity setpoints continuously based on the methods called.
  3. Use the high level commander directly using the HighLevelCommander class on the Crazyflie object, see autonomous_sequence_high_level.py.
  4. Use the PositionHlCommander class for a simplified API to send commands to the high level commander, see position_commander_demo.py (and the sketch below).
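
As a small illustration of option 4, here is a hedged sketch using the PositionHlCommander. It assumes a working positioning system (for example a Flow deck or Lighthouse), and the URI is only an example:

<code>import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.positioning.position_hl_commander import PositionHlCommander

URI = 'radio://0/80/2M/E7E7E7E7E7'   # example address

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie()) as scf:
    # Entering the context takes off, leaving it lands the Crazyflie.
    with PositionHlCommander(scf, default_height=0.5) as pc:
        pc.go_to(1.0, 0.0, 1.0)      # fly to (1, 0, 1) m
        pc.go_to(0.0, 0.0, 0.5)      # and back towards the origin
</code>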

Documentation

We are busy documenting the stabilizer framework in the Crazyflie firmware documentation, including the content of this blogpost. If you feel that anything is missing or not explaining clearly enough about the stabilizer framework, please drop a comment below or comment on the forum.

For the users that have subscribed to our Github repository this does not come as a surprise, but for the rest: we released a new version of our Crazyflie firmware (both STM and NRF) last week!

We know that it is quite close to our last release in February, but we had so many changes and contributions that we deemed it necessary to put a stamp on this current version. In this blog post, we will give an overview of which features to expect in this update.

UART communication

Courtesy of Saarland University, it is now possible to connect to the Crazyflie through its UART, either via a port on your Raspberry Pi or through an FTDI cable directly to your computer. This extra port for CRTP communication will open up new possibilities to interact with your Crazyflie.

This is compatible with CFlib version 0.1.10, however there was a fix implemented in the current master (see the ticket here). Please see the ticket for the UART communication here if you are interested in the implementation details.

Lighthouse

It is now possible to get the lighthouse geometry (the position and orientation of the base stations) without SteamVR. We made a script, based on the latest stable release of OpenCV, that calculates the base station geometry from the sweep angles received by the Lighthouse deck. Check the full instructions on how to use this new script. It is a very new and fresh implementation, so if you are experiencing any trouble, please leave an issue on this page or leave a comment on the forum.

Also, FPGA v4 is now integrated in 2020.04, which supports base station V2. This is still in a very early phase and not yet fully integrated in the firmware, so please keep an eye on this ticket for the implementation progress in the latest master of the crazyflie-firmware. There was also a blog post a few weeks ago about the current state of the Lighthouse V2 development.

Bluetooth management

We also provided an update to the Bluetooth management of the Crazyflie communication by the NRF chip. Before, it was (unintentionally) possible to connect to the Crazyflie over Bluetooth while it was also connected to the cfclient through the Crazyradio PA. This caused a lot of unwanted effects such as packet loss and unresponsiveness. Now, whenever a Crazyradio packet has been received, Bluetooth will automatically be disabled. The same goes for peer-to-peer packets, so the NRF firmware no longer needs to be flashed without Bluetooth support. The Crazyflie needs to be restarted after connecting through the CF dongle or P2P in order to connect to it again with the Crazyflie mobile app.

General fixes and improvements

Here are the general fixes and improvements in release v2020.04:

  • The BMI088 (IMU of the CF2.1) now has a self-test.
  • Fixed memory issue with the Micro SD card deck.
  • High-level commander improvements.
  • Documentation improvements.
  • LPS TDoA (2 and 3) improvements.

See the release notes of the crazyflie-firmware and crazyflie-nrf-firmware to see the full list of improvements and issues that were fixed in 2020.04. The zip files for the firmware for both the roadrunner (tag) and crazyflie (cf2) can be found here.

Accurate indoor localization is a crucial enabling technology for many robotic applications, from warehouse management to monitoring tasks. Ultra-wideband (UWB) localization technology, in particular, has been shown to provide robust, high-resolution, and obstacle-penetrating ranging measurements. Nonetheless, UWB measurements are still corrupted by non-line-of-sight (NLOS) communication and spatially-varying biases due to the doughnut-shaped antenna radiation pattern. In our recent work, we present a lightweight, two-step measurement correction method to improve the performance of both TWR- and TDoA-based UWB localization. We integrate our method into the Extended Kalman Filter (EKF) onboard a Crazyflie and demonstrate closed-loop position estimation performance with ~20 cm root-mean-square (RMS) error.

A stylized depiction of our UWB indoor localization system and the schematics of the proposed estimation framework.

Methodology

UWB measurement errors can be separated into two groups: (1) systematic bias caused by limitations in the UWB antenna pattern and (2) spurious measurements due to NLOS and multi-path propagation. We propose a two-step UWB bias correction approach exploiting machine learning (to address (1)) and statistical testing (to address (2)). The data-driven nature of our approach makes it agnostic to the origin of the measurement errors it corrects.

(1) Neural Network Bias Correction

The doughnut-shaped antenna radiation pattern causes the relative poses of anchors and tags to have a noticeable impact on the received signal power, which leads to systematic, predictable biases.  To empirically demonstrate the systematic measurement errors resulting from varying the relative pose between anchors and tags, we placed two DWM1000 UWB anchors at a distance of 4m and collected both TWR and TDoA UWB range measurements for the UWB tag mounted on top of a Crazyflie spinning around its own z-axis.

Left: schematics of the ranges (∆p’s), azimuth (α’s) and elevation angles (β’s) defining the relative poses of tag T and anchors A0, A1 when collecting the systematic bias measurements. Right: the neural network’s inferred bias (in red) with respect to the tag’s varying azimuth angle towards anchor T0, αT0, plotted against the UWB raw measurements.

We choose to leverage the nonlinear representation power of neural networks to learn the systematic bias which only depends on anchor-tag relative poses. Considering the limited onboard computation power, we select a fully connected neural network with 50 neurons in each of two layers with ReLU activation. To represent the relative pose between the UWB tag and anchors, we select the relative distance ∆p and roll, pitch, and yaw angles of the quadcopter as the input features x for the network. As we used fixed anchors, we do not include their poses as inputs (this level of generalization is left for future work). Given sufficient training data, the spatially-varying measurement bias can be described by a nonlinear function b=f(x) captured by the trained neural network.
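
A minimal PyTorch sketch of such a network is shown below. The two hidden layers of 50 ReLU units and the input features follow the description above; everything else (feature scaling, training loop, how the bias is applied) is simplified and should not be read as the exact implementation.

<code>import torch
import torch.nn as nn

class UwbBiasNet(nn.Module):
    """Fully connected net mapping [dp, roll, pitch, yaw] to a range bias."""
    def __init__(self, n_inputs=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 50), nn.ReLU(),
            nn.Linear(50, 50), nn.ReLU(),
            nn.Linear(50, 1),        # predicted bias b = f(x)
        )

    def forward(self, x):
        return self.net(x)

model = UwbBiasNet()
x = torch.tensor([[4.0, 0.02, -0.01, 1.57]])   # distance [m] and attitude [rad]
raw_range = 4.0
corrected_range = raw_range - model(x).item()  # subtract the predicted bias
</code>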

(2) Outlier (Spurious Measurements) Rejection

Besides our learning-based bias correction, we use the quadcopter’s dynamic model to filter out inconsistent UWB range measurements. Given the estimated velocity v and maximum acceleration amax, we can compute the maximum distance dmax a quadcopter can cover during time ∆t. Based on this information, we can reject unattainable measurements before fusing them into the EKF by comparing the measurement innovation with dmax.

Moreover, we use a statistical hypothesis test to further classify potential outlier measurements. Since the measurement innovation vector is assumed to be distributed according to a multivariate Gaussian distribution, the normalized sum of squares of its values should follow a Chi-square distribution. We use the Chi-square hypothesis test to determine whether a measurement innovation is likely coming from this distribution.
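
The two checks could be sketched as follows for a scalar range innovation. This is an illustrative Python version with assumed variable names and confidence level; the onboard version is implemented in plain C inside the EKF.

<code>import numpy as np
from scipy.stats import chi2

def passes_dynamics_check(innovation, v, a_max, dt):
    """Reject measurements the quadcopter could not physically account for."""
    d_max = np.linalg.norm(v) * dt + 0.5 * a_max * dt ** 2
    return abs(innovation) <= d_max

def passes_chi2_check(innovation, innovation_var, confidence=0.95):
    """Chi-square test on the normalized innovation (1 degree of freedom)."""
    normalized = innovation ** 2 / innovation_var
    return normalized <= chi2.ppf(confidence, df=1)

# Example: a 1.2 m jump between two 5 ms UWB updates is rejected
print(passes_dynamics_check(1.2, v=np.array([0.375, 0.0, 0.0]),
                            a_max=2.0, dt=0.005))   # False
print(passes_chi2_check(0.05, innovation_var=0.01)) # True
</code>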

UWB measurement bias f(x) prediction performance of the trained neural network (in red) compared to the actual measurement errors (blue dots), as well as the role of model-based filtering (purple dots) and statistical validation (orange dots) in rejecting outlier measurement innovations (teal dots) during a 60 s flight experiment.

Data Collection and Training

We use a Crazyflie 2.0 quadcopter and the Loco Positioning System (LPS)’s UWB DW1000 modules as our research platforms. Our calibration approach runs on the Crazyflie STM32 microcontroller within the FreeRTOS real-time operating system. We equipped a cuboid flying arena with 8 UWB anchors, one for each vertex. The anchor positions were measured using a Leica total station theodolite.

Left: three-dimensional plot of our flight arena showing the positions and poses of the eight UWB DW1000 anchors (each facing towards its own x-axis, i.e., the red versor). Right: two of the training trajectories we flew to collect the samples that we used to train our neural network-based bias estimator

For all experiments, the ground truth position of the Crazyflie was provided by 10 Vicon cameras. The neural network was trained using PyTorch. To perform inference on the Crazyflie’s microcontroller, we re-use PyTorch’s trained weights in a plain C re-implementation. Since the DW1000 modules in the LPS provide UWB measurements every 5ms, the neural network inference runs at 200Hz during flight as well. Our outlier rejection method is also implemented in plain C and merged with the onboard EKF.

Closed-loop Position Estimation Performance

We demonstrate the position estimation and closed-loop performance of the proposed methods by flying a Crazyflie quadcopter along planar and non-planar circular trajectories (which were not among the trajectories used for training). Our experiments compare, for both TWR and TDoA2 modes, the estimation error of (A) the UWB localization estimate enhanced with outlier rejection and (B) the estimate enhanced with both outlier rejection and neural network bias compensation. We repeated all of our experiments 10 times with a target velocity of 0.375 m/s. The quadcopter trajectories during these flight tests are displayed in the following plots.

Flight paths and the tracking performance of our approach with (in blue) and without (in orange) the neural network bias correction for two reference trajectories (planar and non-planar circular orbits) and both UWB modes (TWR and TDoA).

The distributions of the RMS estimation errors are summarized in a box plot. TWR-based ranging results in better localization performance than TDoA. However, we observe that, with our neural network bias compensation, the average RMS error of TDoA localization is around 0.21 m, which is comparable to that of TWR-based localization (~0.19 m). Thanks to the neural network bias compensation, the average reduction in the RMS error is ~18.5% and 48% for TWR and TDoA, respectively. Most notably, this result suggests that bias compensation might help close the performance gap between TWR- and TDoA-based localization.

Root mean square error (RMSE) of the quadcopter position estimate before (in orange) and after (in blue) the neural network calibration step for both TWR and TDoA ranging modes. Each pair of box plots refers to a planar reference trajectory (left of each pair) and a reference trajectory with varying z (right of each pair), showing a greater performance enhancement for the latter.

Outlook

In this work, we presented a two-step methodology to improve UWB localization—for both TWR- and TDoA-based measurements. We used a lightweight neural network to model and compensate for pose-dependent and spatially-varying biases and an outlier rejection mechanism to filter spurious measurements. Through several real world flight experiments tracking different trajectories, we showed that we are able to improve localization accuracy for both TWR and TDoA, granting safer indoor flight. In our future work, we will include the anchors’ pose information to allow our method to further generalize to previously unobserved indoor environments, with different anchor configurations.

Links

The authors are with the Dynamic Systems Lab, Institute for Aerospace Studies, University of Toronto, Canada, and affiliated with the Vector Institute for Artificial Intelligence in Toronto.

Feel free to contact us if you have any questions or ideas: wenda.zhao@robotics.utias.utoronto.ca. Please cite this as:

<code>@article{wenda2020learning,
  title={Learning-based Bias Correction for Ultra-wideband Localization of Resource-constrained Mobile Robots},
  author={Wenda Zhao and Abhishek Goudar and Jacopo Panerati and Angela P. Schoellig},
  journal={arXiv preprint arXiv:2003.09371},
  year={2020}
}</code>

The Lighthouse V2 implementation has been simmering away for a long time in the Bitcraze kitchen and in this blog post we will give you an update on the current status and what is remaining for a full release of this tasty dish.

Crazyflie 2.1 and Lighthouse V2 base station

We believe we have solved most of the major technical hurdles (famous last words) on the way to a working implementation that uses Lighthouse V2 base stations for positioning; now it is mostly work to implement the functionality that is remaining. As described in this post, we now have a new FPGA binary that has the ability to decode both V1 and V2 base stations, and this was a major step forward. This new binary is used in the Crazyflie firmware master branch, and if the Lighthouse deck is used with the latest Crazyflie firmware, the new FPGA binary will automatically be flashed to the deck.

What has changed?

The new FPGA binary uses a different UART protocol to communicate with the Crazyflie. This protocol has been implemented in the firmware and hopefully there is no functional difference compared to the previous FPGA binary when using Lighthouse V1 base stations.

We have added a first version of Lighthouse V2 base station decoding, but it is still a bit limited. As a start we decided to “emulate” V1 base stations to be able to reuse as much of the existing code as possible. For now we support only 2 base stations and they must use channel 1 and 2 (used to be called modes). The V2 angles are transformed into V1 angles and fed into the old positioning logic and are handled exactly the same way as before. Even though this works, it is not the optimal solution and we hope to be able to refine it later on.

We have also written a python script to estimate the base station geometry (positions and orientations) using the Lighthouse deck. This removes the requirement to use software from Steam, which should simplify the setup process. Please see the (still limited) documentation. Note that this calibration method only supports base station V1… for now!

A lot of code has been modified and the FPGA implementation is completely new, so it is not unlikely that there is functionality that is unstable or broken, or configurations that are not supported. If you happen to notice any bugs, please let us know!

What is remaining?

The functional areas that need to be implemented or cleaned up before we leave the Early Access stage are the following:

Calibration data

The calibration data is embedded in the modulated light from the base stations and describes imperfections from the manufacturing process for each individual unit. This data is not read yet for V2 and will increase the precision when available.

Support for more than 2 base stations

Lighthouse V2 base stations are designed for systems with more than 2 base stations. The Crazyflie firmware needs to be extended for this functionality to work, including handling of geometry data, logging, memory management and some other bits and pieces.

Native V2 positioning

The angles from the V2 base stations should be fed directly into the Kalman filter for positioning, instead of first being transformed into V1 angles. This will increase robustness and reduce data loss.

Client support

We want to add a tab in the python client where a Lighthouse system can be monitored, configured and managed. It should, for instance, enable the user to configure and visualize the base station geometry.

FPGA binary management

Currently the FPGA binary is included in the Crazyflie firmware and it is automatically uploaded to the deck when booted. This is not a viable long term solution and we hope to be able to find a more generic way of handling deck binaries.

Conclusions

As can be seen, there is still quite some work to be done before the Lighthouse V2 stew is ready to be served, but we are definitely starting to smell some nice flavours from the kitchen!

Finally a view from Kristoffer’s home lab, currently in the summer house. Three base stations are set up as a Fun Friday hack to see what it would take to use more than 2. Luckily it did not take too much time to get this to work :-)

3 Lighthouse V2 base stations

In this blog post we wanted to give you an overview of our running projects and a general update on the status of things! We have settled into our home labs and are working on many projects in parallel. There is a lot of development happening at the moment, but the general feeling is that we do miss working with each other at our office! With our daily Slack Bitcraze sync meetings and virtual fikapause (Swedish for coffee break), we try to substitute what we can. In the meantime, we are on a roll finishing all the goals we set at our latest quarterly meeting, so here you can read about those developments.

AI-deck

Crazyflie with AI-deck

The last time we gave an update about the AI-deck was in this blog post and in the final post of our intern Zhouxin. Building on his work, we are now refocusing on getting the AI-deck ready for early release. The last hurdle is mostly on the software side, for which we are considering several approaches together with Greenwaves Technologies, the manufacturer of the GAP8 chip. Currently we are preparing small test functions as examples of the different elements of the AI-deck in our repo, which are all still in a very preliminary phase.

Even though we still need some time to finalize the AI-deck’s early release, we will consider sending an early version of the AI-deck if you are willing to provide feedback while working with it. Please fill in the form and we will get back to you.

Lighthouse

We have made quite some progress on the development for the lighthouse V2. Kristoffer has been working hard from his homelab to get a seamless integration of both V1 and V2 in our firmware (check out this github issue for updates). Currently it is still very untested and very much in progress, however we do have a little preview for you to enjoy.

Crazyflie with LH basestation v2

Documentation

Right now, we are also doing a lot of revamping of the large web of documentation. Unfortunately this is a lot of work! As you may have noticed by now, we have added overview pages to guide the reader to the right information. We have also moved the tutorials to another part of the menu to avoid clutter on our website. In general we try to go through the repository docs to see if any information is missing or outdated; however, please let us know if you have encountered an error in any description or are missing crucial elements.

Our latest task is revamping the product pages as well, by putting all the necessary information about the hardware in just one place. Also, we are planning to make (video) tutorials soon about many elements of the Crazyflie and how to work with it. More about that later!

Production and Shipment

Production at our manufacturers in China is slowly starting up again. Although it is not yet back at full force, it does enable us to start ordering to replenish our stock and to get started with finishing our test rigs. Moreover, we are also negotiating to resolve the propeller issue we mentioned earlier, but there is no update on that so far.

As mentioned in this blogpost, we are still shipping orders about twice a week. Both DHL and Fedex are functioning as normal, but we do notice that there is a delay of a few extra days on some deliveries. Please keep that in mind when ordering at our webshop.

We have mentioned the Active Marker deck in an earlier blog post, and are now happy to announce that it has been released and is available in our store.

Crazyflie with Active marker deck

By changing the passive, reflective markers to active, IR-LEDs, it is possible to improve the detection of markers in the cameras. There are two main reasons: the area of the marker is smaller and easier to separate from other markers close by, and the LEDs are emitting light and can be detected further away.

The deck has been developed in collaboration with Qualisys and together with the QTM system, it utilizes their Active Marker technology. An ID is assigned to each marker, and since the identity of each marker can be detected by the MoCap system, it is possible to estimate the full body pose of the Crazyflie without unique marker positions or known starting positions. IDs are easily assigned using the parameter sub system of the Crazyflie.
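
For example, from the python library the IDs could be set through the parameter API roughly like this. The parameter names (‘activeMarker.front’ and so on) and the ID values are assumptions for illustration; check the parameter list in the client for the exact names on your firmware version.

<code>import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = 'radio://0/80/2M/E7E7E7E7E7'   # example address

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie()) as scf:
    # Assign one (assumed) marker ID per LED position.
    for name, marker_id in [('front', 1), ('right', 2),
                            ('back', 3), ('left', 4)]:
        scf.cf.param.set_value('activeMarker.' + name, str(marker_id))
</code>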

Even though the deck is mainly intended to be used with Qualisys MoCap systems, the LED markers can also be configured to be on or off, which we hope might be useful in other applications as well.