
Autonomous Robotics at UW Seattle

Our team’s Crazyflie quadcopter was equipped with Bitcraze’s Optic Flow deck and Multi-ranger deck. A BH-1750 light intensity sensor was soldered to a Bitcraze Prototype deck to complete our hardware.

The Crazyflie 2.1 was the perfect robotics platform for an introduction to autonomous robotics at the University of Washington during the winter quarter of 2020. Our Bio-inspired Robotics graduate course completed a series of Crazyflie projects throughout the 10 weeks that built our skills in:

  • Python
  • Robot Operating System (ROS)
  • assembling custom sensors
  • writing new drivers
  • designing and testing control algorithms
  • troubleshooting and independent learning

The course was offered by UW Mechanical Engineering’s Autonomous Insect Robotics Laboratory, headed by Dr. Sawyer B. Fuller. The course was supported by PhD candidate Melanie Anderson, who has done fantastic research with her Crazyflie-based Smellicopter. The final project was an opportunity to turn a Crazyflie quadcopter into a bio-inspired autonomous robot. Our three-person team of UW robotics grad students included Nishant Elkunchwar, Krishna Balasubramanian, and Jessica Noe.

Light Seeking Run-and-Tumble Algorithm Inspired by Bacterial Chemotaxis

The goal for our team’s Crazyflie was to seek and identify a light source. We chose a run-and-tumble algorithm inspired by bacterial chemotaxis. For a quick explanation of bacterial chemotaxis, please see Andrea Schmidt’s explanation of chemotaxis on Dr. Mehran Kardar’s MIT teaching page. She provides a helpful animation here.

In both bacterial chemotaxis and our run-and-tumble algorithm, there is a body (the bacteria or the robot) that can:

  • move under its own power.
  • detect the magnitude of something in the environment (e.g., a chemical emitted by a food source, or light intensity).
  • determine whether the magnitude is greater or less than it was a short time before.

This method works best if the environment contains a strong gradient from low concentration to high concentration that the bacteria or robot can follow towards a high concentration source.

The details of the run-and-tumble algorithm are shown in a finite state machine diagram below. The simple summary is that the Crazyflie takes off, begins moving forward, and if the light intensity is getting larger it continues to “Run” in the same direction. If the light intensity is getting smaller, it will “Tumble” to a random direction. Additional layers of decision making are included to determine if the Crazyflie must “Avoid Obstacle”, or if the source has been reached and the Crazyflie quadcopter should “Stop”.

Run & Tumble Algorithm
The run-and-tumble algorithm represented as a finite state machine.
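To make the state logic concrete, here is a rough Python sketch of a single decision loop (illustrative only, not our actual flight code; the 800 lux landing threshold and 0.5 m obstacle threshold are the values discussed later in this post):

<code>import random

LIGHT_STOP_THRESHOLD = 800.0   # lux, the landing trigger from our light characterization
OBSTACLE_THRESHOLD = 0.5       # metres, the Multi-ranger trigger for "Avoid Obstacle"

def run_and_tumble_step(light_now, light_prev, nearest_obstacle_m):
    """One decision-loop iteration of the run-and-tumble logic (sketch only)."""
    if light_now >= LIGHT_STOP_THRESHOLD:
        return ('STOP', None)                          # source reached: land
    if nearest_obstacle_m is not None and nearest_obstacle_m < OBSTACLE_THRESHOLD:
        return ('AVOID_OBSTACLE', None)                # back off / steer away
    if light_now >= light_prev:
        return ('RUN', 0.0)                            # intensity rising: keep heading
    return ('TUMBLE', random.uniform(-180.0, 180.0))   # intensity falling: random new heading</code>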

Crazyflie Hardware

To implement the run-and-tumble algorithm autonomously on the Crazyflie, we needed a Crazyflie quadcopter and these additional sensors:

The Optic Flow deck was a key sensor in achieving autonomous flight. This sensor package determines the Crazyflie’s height above the surface and tracks its horizontal motion from the starting position along the x-direction and y-direction coordinates. With the Optic Flow installed, the Crazyflie is capable of autonomously maintaining a constant height above the surface. It can also move forward, back, left, and right a set distance or at a set speed. Several other pre-programmed movement behaviors can also be chosen. This Bitcraze blog post has more information on how the Flow deck works and this post by Chuan-en Lin on Nanonets.com provides more in-depth information if you would like to read more.
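As an illustration of the kind of motion commands the Flow deck enables, here is a minimal sketch using Bitcraze’s Python API (cflib); our course code issued equivalent commands through the rospy_crazyflie wrapper instead, and the radio URI below is just a placeholder:

<code>import time
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.positioning.motion_commander import MotionCommander

URI = 'radio://0/80/2M/E7E7E7E7E7'  # placeholder radio address

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    # MotionCommander takes off on enter and lands on exit; the Flow deck
    # provides the height hold and horizontal position feedback.
    with MotionCommander(scf, default_height=0.5) as mc:
        mc.forward(0.5)                          # move 0.5 m forward
        mc.turn_left(90)                         # yaw 90 degrees in place
        mc.start_linear_motion(0.2, 0.0, 0.0)    # fly forward at 0.2 m/s
        time.sleep(2)
        mc.stop()</code>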

The Bitcraze Multi-ranger deck provided the sensor data for obstacle avoidance. The Multi-ranger detects the distance from the Crazyflie to the nearest object in five directions: forward, backward, right, left, and above. Our threshold to trigger the “Avoid Obstacle” behavior is detecting an obstacle within 0.5 meters of the Crazyflie quadcopter.
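A sketch of how such a threshold check might look with cflib’s Multiranger helper (our actual check lived in our ROS control script, and the URI is again a placeholder):

<code>import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.utils.multiranger import Multiranger

OBSTACLE_THRESHOLD = 0.5  # metres

def too_close(range_m):
    # The Multiranger helper returns None when nothing is detected in range
    return range_m is not None and range_m < OBSTACLE_THRESHOLD

cflib.crtp.init_drivers()
with SyncCrazyflie('radio://0/80/2M/E7E7E7E7E7', cf=Crazyflie()) as scf:
    with Multiranger(scf) as mr:
        obstacle_near = any(too_close(r)
                            for r in (mr.front, mr.back, mr.left, mr.right, mr.up))</code>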

The Prototype deck was a quick, simple way to connect the BH-1750 light intensity sensor to the pins of the Crazyflie to physically integrate the sensor with the quadcopter hardware. This diagram shows how the header positions connect to the rows of pads in the center of the deck. We soldered a header into the center of the deck, then soldered connections between the pads to form continuous connections from our header pin to the correct Crazyflie header pin on the left or right edges of the Prototype deck. The Bitcraze Wiki provides a pin map for the Crazyflie quadcopter and information about the power supply pins. A nice overview of the BH-1750 sensor is found on Components101.com, which shows the pin map and the 4.7 kOhm pull-up resistor that needs to be placed on the I2C line.

It was easy to connect the decks to the Crazyflie because Bitcraze clearly marks “Front”, “Up” and “Down” to help you orient each deck relative to the Crazyflie. See the Bitcraze documentation on expansion decks for more details. Once the decks are properly attached, the Crazyflie can automatically detect that the Flow and Multi-Ranger decks are installed, and all of the built-in functions related to these decks are immediately available for use without reflashing the Crazyflie with updated firmware. (We appreciated this awesome feature!)

Crazyflie Firmware and ROS Control Software

Bitcraze provides a downloadable virtual machine (VM) to help users quickly start developing their own code for the Crazyflie. Our team used a VM that was modified by UW graduate students Melanie Anderson and Joseph Sullivan to make it easier to write ROS control code in Python to control one or more Crazyflie quadcopters. This was helpful to our team because we were all familiar with Python from previous work. The standard Bitcraze VM is available on Bitcraze’s Github page. The Modified VM constructed by Joseph and Melanie is available through Melanie’s Github page. Available on Joseph’s Github page is the “rospy_crazyflie” code that can be combined with existing installs of ROS and Bitcraze’s Python API if users do not want to use the VM options. In either VM, two sets of files are central to development:

  • “crazyflie-firmware” – a set of files written in C that can be uploaded to the Crazyflie quadcopter to overwrite the default firmware
    • In the Bitcraze VM, this folder is located at “/home/bitcraze/projects/crazyflie-firmware”
    • In the Modified VM, this folder is located at “Home/crazyflie-firmware”
  • “crazyflie-lib-python” (in the Bitcraze VM) or “rospy_crazyflie” (in the Modified VM) – a set of ROS files that allows high-level control of the quadcopter’s actions
    • In the Bitcraze VM, “crazyflie-lib-python” is located at “/home/bitcraze/projects/crazyflie-lib-python”
    • In the Modified VM, navigate to “Home/catkin_ws/src” which contains two main sets of files:
      • “Home/catkin_ws/src/crazyflie-lib-python” – a copy of the Bitcraze “crazyflie-lib-python”
      • “Home/catkin_ws/src/rospy_crazyflie” – the modified version of “crazyflie-lib-python” that includes additional ROS and Python functionality, and example scripts created by Joseph and Melanie

In the Modified VM, we edited the “crazyflie-firmware” files to include code for our light intensity sensor, and we edited “rospy_crazyflie” to add functions to the ROS software that runs on the Crazyflie. Having the VM environment saved our team a huge amount of time and frustration – we did not have to download a basic virtual machine, then update software versions, find libraries, and track down fixes for incompatible software. We could just start writing new code for the Crazyflie.

The Modified VM for the Crazyflie takes advantage of the Robot Operating System (ROS) architecture. The example script provided within the Modified VM helped us quickly become familiar with basic ROS concepts like nodes, topics, message types, publishing, and subscribing. We were able to understand and write our own nodes that published information to different topics and write nodes that subscribed to the topics to receive and use the information to control the Crazyflie.
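For readers new to ROS, a stripped-down publisher/subscriber pair in rospy might look like the sketch below; the node and topic names are made up for illustration and do not match our actual scripts:

<code>import rospy
from std_msgs.msg import Float32

def read_bh1750():
    return 0.0  # placeholder for the light value logged from the Crazyflie

def light_publisher():
    rospy.init_node('light_publisher')
    pub = rospy.Publisher('light_intensity', Float32, queue_size=10)
    rate = rospy.Rate(10)  # publish at 10 Hz
    while not rospy.is_shutdown():
        pub.publish(Float32(read_bh1750()))
        rate.sleep()

def light_subscriber():
    rospy.init_node('light_subscriber')
    rospy.Subscriber('light_intensity', Float32,
                     lambda msg: rospy.loginfo('light = %.1f lux', msg.data))
    rospy.spin()</code>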

For more information, see the Bitcraze Development overview.

Updating the crazyflie-firmware

A major challenge of our project was writing a new driver that could be added to the Crazyflie firmware to tell the Crazyflie system that we had connected an additional sensor to the Crazyflie’s I2C bus. Our team referenced open-source Arduino drivers to understand how the BH-1750 connects to an Arduino I2C bus. We also looked at the open-source drivers written by Bitcraze for the Multi-ranger deck to see how it connects to the Crazyflie I2C bus. By looking at all of these open-source examples and studying how to use I2C communication protocols, our team member Nishant Elkunchwar was able to write a driver that allowed the Crazyflie to recognize the BH-1750 signal and convert it to a sensor value to be used within the Crazyflie’s ROS-based operating system. That driver is available on Nishant’s Github. The driver needed to be placed into the appropriate folder: “…/crazyflie-firmware/src/deck/drivers/src”.

The second change to the crazyflie-firmware is to add a “config.mk” file in the folder “…/crazyflie-firmware/tools/make”. Information about the “config.mk” file is available in the Bitcraze documentation on configuring the build.

The final change to the crazyflie-firmware is to update the Makefile in the location “…/crazyflie-firmware”. The Makefile changes include adding one line to the section “# Deck API” and two lines to the section “# Decks”. Information about compiling the firmware is available in the Bitcraze documentation about flashing the quadcopter.

Making additions to the ROS control architecture

The ROS control architecture includes messages. We needed to define three new message types for our new ROS control files. In the folder “…/catkin_ws/src/rospy_crazyflie/msg/msg” we added one file for each new message type. We also updated “CMakeLists.txt” to add the names of our message files in the “add_message_files( )” section.

The second part of our ROS control was a set of scripts written in Python. These included our run-and-tumble algorithm control code, publisher scripts, and a plotter script. These are all available in the project’s Github.

Characterizing the Light Sensor

At this point, the light intensity sensor was successfully integrated into the Crazyflie quadcopter. The new code was written and the Crazyflie quadcopter was reflashed with new firmware. We had completed our initial troubleshooting, and the next step was to characterize the light intensity in our experimental setup.

Experimental setup for light intensity characterization.

This characterization was done by flying the Crazyflie at a fixed distance above the floor in tightly spaced rows along the x and y horizontal directions. The resulting plot (below) shows that the light intensity increases exponentially as the Crazyflie moves towards the light source.

The light characterization allowed us to determine an intensity threshold that is only reached near the light source. If this threshold is met, the algorithm’s “Stop” action is triggered, and the Crazyflie lands.

Light intensity (units of lux) was experimentally characterized by piloting the Crazyflie in a linear pattern at a constant height above the ground. The resulting plot shows that light intensity is characterized by an exponential roll off in both the x and y directions.

Testing the Run-and-Tumble Algorithm

With the light intensity characterization complete, we were able to test and revise our run-and-tumble algorithm. At each loop of the algorithm, one of the four actions is chosen: “Run”, “Tumble”, “Avoid Obstacle”, or “Stop”. The plot below shows a typical path with the action that was taken at each loop iteration.

Flight Tests of the Run-and-Tumble Algorithm

In final testing, we performed four trial runs with a 100% success rate locating the light source. Our test area was approximately 100 square feet and included one light source and two obstacles. The average search time was 1 minute, 41 seconds.

The “Avoid Obstacle” and “Run” behavior are demonstrated in the above video clip (1.5x actual speed).


The “Run” and “Tumble” actions are demonstrated in the above video clip (2x actual speed). At the end, the “Stop” action is demonstrated when the light intensity reaches the threshold value of 800 lux, indicating that the Crazyflie has found the light source and should land.

Lessons Learned

This was one of the best courses I’ve taken at the University of Washington. It was one of the first classes in which we could incorporate a robot, and playing with the Crazyflie was pure fun. Another positive aspect was that the course had the feel of a boot camp for learning how to build, control, test, and improve autonomous robots. This was only possible because Bitcraze’s small, indoor quadcopter with optic flow capability made it possible to safely operate several quadcopters simultaneously in our small classroom as we learned.

This development project was really interesting (aka difficult…) and we went down a few rabbit holes as we tried to level up our knowledge and skills. Our prior experience with Python helped us read the custom example scripts provided in our course for the ROS control program, but we had quite a bit to learn about the ROS architecture before we could write our own control scripts.

Nishant made an extensive study of I2C protocols as he wrote the new driver for the BH-1750 sensor. One of the biggest lessons I learned in this project was that writing drivers to integrate a sensor to a microcontroller is hard. By contrast, using the Bitcraze decks was so easy it almost felt like cheating. (In the nicest way!)

On the hardware side, the one big problem we encountered during development was accidentally breaking the 0.5 mm headers on the Crazyflie quadcopter and the decks. The male headers were not long enough to extend from the Flow deck all the way up through the Prototype deck at the top, so we tried to solder extensions onto the pins. Unfortunately, I did not check the Bitcraze pin width and I just soldered on the pins we all had in our tool kits: the 0.1 inch (2.54 mm) pitch pins that we use with our Arduinos and BeagleBones. These too-large pins damaged the female headers on the decks, and we lost connectivity on those pins. Fortunately, we were able to repair our decks by soldering on replacement female headers from the Bitcraze store. I wish now that the long pin headers were available back then.

In summary, this course was an inspiring experience and helped our team learn a lot in a very short time. After ten weeks working with the Crazyflie, I can strongly recommend the Crazyflie for robotics classes and boot camps.

Links to Project Files

Team’s Research Poster: https://github.com/thecountoftuscany/crazyflie-run-and-tumble/blob/master/documents/Project-Final_Poster.pdf

Github Link courtesy of Nishant Elkunchwar: Crazyflie-Run-and-Tumble

YouTube Links courtesy of Nishant Elkunchwar: Crazyflie locates Light, Simulation of Run and Tumble Algorithm in PyGame

We have a guest blog post this week from Christopher Banks at Georgia Tech, where he tells us about their work with the Robotarium. Enjoy!

Multi-Agent Aerial Robotics

In the GRITS Lab we focus on autonomous control and coordination of multi-robot systems with applications in – but not limited to – optimal control, constraint-based control, and hardware development. We are home to the Robotarium [1], a remotely accessible swarm robotics testbed that is free for anyone around the world to use for academic and educational purposes. We have integrated Crazyflies into the Robotarium as the main vehicle for aerial robot swarms due to their small size, quiet operation, and high maneuverability. Also, due to their low inertia, they pose minimal harm to their surroundings if system failures occur. Their small size and robust nature are well suited for flying in an indoor testbed like the Robotarium. As we work towards extending the operation of the Crazyflies in the Robotarium to external users, we encountered some important research questions: How do we guarantee the quadcopters remain “safe” (undamaged) while minimizing modifications to user inputs? How do we develop an easy-to-use interface for external users, with experience ranging from novice to expert? What commands can be used by external users to control a swarm of robots? This post will briefly describe the ongoing research aimed at solving these questions.

Safety Guarantees

To ensure hardware safety while flying experimental algorithms, we have developed Control Barrier Functions (CBFs) for quadcopters, allowing users to give nominal control inputs while obeying some safety constraints for the system (e.g. collision-free trajectory following). In the video below, we give four Crazyflies the commands to fly in a circle. A fifth Crazyflie is then told to fly to waypoints that will intersect the circle and attempt to collide with the circling quadcopters. Using CBFs, a central controller can modify the inputs given to Crazyflies near collision to ensure safe velocity commands that are as close as possible to the user-intended control [2]. These CBFs can also be designed to ensure safety by bounding the quadcopters to a designated region of the testbed, giving additional safety constraints by protecting areas outside of the motion capture system during flights.

Quadcopters execute pre-planned flight trajectories designed to collide and use CBFs to avoid collisions.
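For readers unfamiliar with CBFs, the sketch below shows the core idea as a quadratic program for simple single-integrator agents; the Robotarium’s actual controller handles full quadcopter dynamics and differs in its details (the safety distance and gain here are arbitrary):

<code>import numpy as np
import cvxpy as cp

def cbf_filter(positions, u_nominal, d_safe=0.3, gamma=1.0):
    """Minimally modify nominal velocity commands so that the pairwise barriers
    h_ij = ||p_i - p_j||^2 - d_safe^2 stay nonnegative (single-integrator sketch)."""
    n, dim = positions.shape
    u = cp.Variable((n, dim))
    constraints = []
    for i in range(n):
        for j in range(i + 1, n):
            diff = positions[i] - positions[j]
            h = diff @ diff - d_safe ** 2
            # CBF condition: hdot + gamma * h >= 0, with hdot = 2 * diff^T (u_i - u_j)
            constraints.append(2 * diff @ (u[i] - u[j]) + gamma * h >= 0)
    # Stay as close as possible to what the user asked for
    cp.Problem(cp.Minimize(cp.sum_squares(u - u_nominal)), constraints).solve()
    return u.value

# Example: five agents with arbitrary positions and zero nominal commands
positions = np.random.rand(5, 2)
u_safe = cbf_filter(positions, np.zeros((5, 2)))</code>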

User Interfaces

We have also used the Crazyflies to understand how remote users can best interact with the Robotarium both at the interface level and in planning. One project involved studying the effectiveness of graphical user interfaces (GUIs) on swarm robotic control. Two GUIs were developed with different interaction modalities. The GUIs were designed to map user inputs to a set of hoops placed in the Robotarium. One GUI (shown in Fig. 1) provided users the ability to draw paths through a touchscreen interface on a two-dimensional map and then map those inputs to trajectories for a team of robots. The other GUI (illustrated in Fig. 2) allowed users to input a sequence of desired hoops for a team of robots and execute trajectories based on the input.

Figure 1: A GUI that maps hand-drawn paths to inputs for a group of Crazyflies
Figure 2: A GUI that maps the string of indexed hoops as inputs for a group of Crazyflies.

Multi-Agent Planning

In planning, we looked at how multi-agent planning can be approached using high-level specifications. These high-level specifications allow users to develop plans requiring groups of robots to visit regions of interest (see Fig. 3) and trajectories are generated automatically. To represent these specifications, we use a logic formalism known as temporal logic to encode a preferred sequence of plan execution. As an additional step, users could include constraints on the trajectory by minimizing a cost using stochastic sampling. For more details, see the attached video demonstrating task allocation in a fire-fighting scenario.

Figure 3: Using the multi-agent planning framework, users give high-level specifications that plan trajectories for quadcopters to visit regions of interest (hoops) in the Robotarium.
An optimizing task allocation framework that assigns quadcopters a set of tasks based on user specifications.

Future Directions

As we continue to expand the capabilities of the Robotarium, we are looking into how to develop long-term autonomy for the Crazyflies. This includes autonomous charging as well as remote access for the lab and other users. We hope to use the Lighthouse system as a method for long-term tracking, since the Crazyflie will know its position instead of relying on passive tracking from a Vicon system. Our plans also include a lab-based simulator for in-house projects related to the Crazyflies, as well as updating our system to incorporate Crazyswarm to make control of the Crazyflies easier to implement. In addition, in order to accommodate unknown users, we will have to figure out a control scheme that encourages use from a wide variety of users, ranging from novices in quadcopter control to experts. We’ll keep Bitcraze updated on the Robotarium’s progression towards fully autonomous aerial swarms!

Links

  1. Robotarium Article: https://ieeexplore.ieee.org/document/8960572
  2. CBFs for Quadcopters: https://ieeexplore.ieee.org/document/7989375

Accurate indoor localization is a crucial enabling technology for many robotic applications, from warehouse management to monitoring tasks. Ultra-wideband (UWB) localization technology, in particular, has been shown to provide robust, high-resolution, and obstacle-penetrating ranging measurements. Nonetheless, UWB measurements are still corrupted by non-line-of-sight (NLOS) communication and spatially-varying biases due to the doughnut-shaped antenna radiation pattern. In our recent work, we present a lightweight, two-step measurement correction method to improve the performance of both two-way ranging (TWR)- and time-difference-of-arrival (TDoA)-based UWB localization. We integrate our method into the Extended Kalman Filter (EKF) onboard a Crazyflie and demonstrate closed-loop position estimation performance with ~20 cm root-mean-square (RMS) error.

A stylized depiction of our UWB indoor localization system and the schematics of the proposed estimation framework.

Methodology

UWB measurement errors can be separated into two groups: (1) systematic bias caused by limitations in the UWB antenna pattern and (2) spurious measurements due to NLOS and multi-path propagation. We propose a two-step UWB bias correction approach exploiting machine learning (to address (1)) and statistical testing (to address (2)). The data-driven nature of our approach makes it agnostic to the origin of the measurement errors it corrects.

(1) Neural Network Bias Correction

The doughnut-shaped antenna radiation pattern causes the relative poses of anchors and tags to have a noticeable impact on the received signal power, which leads to systematic, predictable biases.  To empirically demonstrate the systematic measurement errors resulting from varying the relative pose between anchors and tags, we placed two DWM1000 UWB anchors at a distance of 4m and collected both TWR and TDoA UWB range measurements for the UWB tag mounted on top of a Crazyflie spinning around its own z-axis.

Left: schematics of the ranges (∆p’s), azimuth (α’s) and elevation angles (β’s) defining the relative poses of tag T and anchors A0, A1 when collecting the systematic bias measurements. Right: the neural network’s inferred bias (in red) with respect to the tag’s varying azimuth angle towards anchor T0, αT0, plotted against the UWB raw measurements.

We choose to leverage the nonlinear representation power of neural networks to learn the systematic bias which only depends on anchor-tag relative poses. Considering the limited onboard computation power, we select a fully connected neural network with 50 neurons in each of two layers with ReLU activation. To represent the relative pose between the UWB tag and anchors, we select the relative distance ∆p and roll, pitch, and yaw angles of the quadcopter as the input features x for the network. As we used fixed anchors, we do not include their poses as inputs (this level of generalization is left for future work). Given sufficient training data, the spatially-varying measurement bias can be described by a nonlinear function b=f(x) captured by the trained neural network.
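A minimal PyTorch sketch of such a bias model is shown below; the layer sizes follow the description above, but the input ordering, units, and sign convention of the correction are assumptions for illustration:

<code>import torch
import torch.nn as nn

class UwbBiasNet(nn.Module):
    """Two hidden layers of 50 ReLU units; input is the anchor-tag distance
    plus the quadcopter's roll, pitch and yaw (4 features), output is the bias."""
    def __init__(self, n_inputs=4, n_hidden=50):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

model = UwbBiasNet()
x = torch.tensor([[4.0, 0.02, -0.01, 1.57]])   # [dp (m), roll, pitch, yaw (rad)]
corrected = 4.0 - model(x).item()              # subtract the predicted bias (sign assumed)</code>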

(2) Outlier (Spurious Measurements) Rejection

Besides our learning-based bias correction, we use a quadcopter’s dynamic model to filter inconsistent UWB range measurements. Given the estimated velocity v and maximum acceleration amax, we can compute the maximum distance dmax a quadcopter can cover during time ∆t. Based on this information, we can reject unattainable measurements before fusing them into the EKF by comparing the measurement innovation with dmax.

Moreover, we use a statistical hypothesis test to further classify potential outlier measurements. Since the measurement innovation vector is assumed to be distributed according to a multivariate Gaussian distribution, the normalized sum of squares of its values should follow a Chi-square distribution. We use the Chi-square hypothesis test to determine whether a measurement innovation is likely coming from this distribution.

UWB measurement bias f(x) prediction performance of the trained neural network (in red) compared to the actual measurement errors (blue dots), as well as the role of model-based filtering (purple dots) and statistical validation (orange dots) in rejecting outlier measurement innovations (teal dots) during a 60 s flight experiment.
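A simplified sketch of the two-stage gating described above might look as follows (the acceleration bound, significance level, and the use of the full innovation vector are illustrative assumptions):

<code>import numpy as np
from scipy.stats import chi2

def accept_measurement(innovation, S, v_est, dt, a_max=10.0, alpha=0.01):
    """Two-stage gate: (1) a dynamics-based bound on how far the vehicle can
    move in dt, (2) a Chi-square test on the normalized innovation squared."""
    d_max = np.linalg.norm(v_est) * dt + 0.5 * a_max * dt ** 2
    if np.linalg.norm(innovation) > d_max:
        return False                                   # physically unattainable jump
    nis = innovation @ np.linalg.solve(S, innovation)  # normalized innovation squared
    return nis <= chi2.ppf(1.0 - alpha, df=innovation.size)

# Example: a 0.4 m range innovation arriving 5 ms after the last update is rejected
print(accept_measurement(np.array([0.4]), np.array([[0.05]]),
                         v_est=np.array([0.3, 0.0, 0.0]), dt=0.005))</code>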

Data Collection and Training

We use a Crazyflie 2.0 quadcopter and the Loco Positioning System (LPS)’s UWB DW1000 modules as our research platforms. Our calibration approach runs on the Crazyflie STM32 microcontroller within the FreeRTOS real-time operating system. We equipped a cuboid flying arena with 8 UWB anchors, one for each vertex. The anchor positions were measured using a Leica total station theodolite.

Left: three-dimensional plot of our flight arena showing the positions and poses of the eight UWB DW1000 anchors (each facing towards its own x-axis, i.e., the red versor). Right: two of the training trajectories we flew to collect the samples that we used to train our neural network-based bias estimator

For all experiments, the ground truth position of the Crazyflie was provided by 10 Vicon cameras. The neural network was trained using PyTorch. To perform inference on the Crazyflie’s microcontroller, we re-use PyTorch’s trained weights in a plain C re-implementation. Since the DW1000 modules in the LPS provide UWB measurements every 5ms, the neural network inference runs at 200Hz during flight as well. Our outlier rejection method is also implemented in plain C and merged with the onboard EKF.

Closed-loop Position Estimation Performance

We demonstrate the position estimation and closed-loop performance of the proposed methods by flying a Crazyflie quadcopter along planar and non-planar circular trajectories (which were not among the trajectories used for training). In our experiments, we compare the estimation error of (A) the UWB localization estimate enhanced with outlier rejection and (B) the estimate enhanced with both outlier rejection and neural network bias compensation, for both TWR and TDoA2 modes. We repeated all of our experiments 10 times with a target velocity of 0.375 m/s. The quadcopter trajectories during these flight tests are displayed in the following plots.

Flight paths and the tracking performance of our approach with (in blue) and without (in orange) the neural network bias correction for two reference trajectories (planar and non-planar circular orbits) and both UWB modes (TWR and TDoA).

The distributions of the RMS estimation errors are summarized in a box plot. TWR-based ranging results in better localization performance than TDoA. However, we observe that, with our neural network bias compensation, the average RMS error of TDoA localization is around 0.21 m, which is comparable to that of TWR-based localization (~0.19 m). Thanks to the neural network bias compensation, the average reduction in the RMS error is ~18.5% and 48% for TWR and TDoA, respectively. Most notably, this result suggests that bias compensation might help close the performance gap between TWR- and TDoA-based localization.

Root mean square error (RMSE) of the quadcopter position estimate before (in orange) and after (in blue) the neural network calibration step for both TWR and TDoA ranging modes. Each pair of box plots refers to a planar reference trajectory (left of each pair) and a reference trajectory with varying z (right of each pair), showing a greater performance enhancement for the latter.

Outlook

In this work, we presented a two-step methodology to improve UWB localization—for both TWR- and TDoA-based measurements. We used a lightweight neural network to model and compensate for pose-dependent and spatially-varying biases and an outlier rejection mechanism to filter spurious measurements. Through several real-world flight experiments tracking different trajectories, we showed that we are able to improve localization accuracy for both TWR and TDoA, granting safer indoor flight. In our future work, we will include the anchors’ pose information to allow our method to further generalize to previously unobserved indoor environments, with different anchor configurations.

Links

The authors are with the Dynamic Systems Lab, Institute for Aerospace Studies, University of Toronto, Canada, and affiliated with the Vector Institute for Artificial Intelligence in Toronto.

Feel free to contact us if you have any questions or ideas: wenda.zhao@robotics.utias.utoronto.ca. Please cite this as:

<code>@article{wenda2020learning,
  title={Learning-based Bias Correction for Ultra-wideband Localization of Resource-constrained Mobile Robots},
  author={Wenda Zhao and Abhishek Goudar and Jacopo Panerati and Angela P. Schoellig},
  journal={arXiv preprint arXiv:2003.09371},
  year={2020}
}</code>

 

I started working with the Crazyflie 2.0 in 2015. I was interested in learning how to program a quadcopter, and the open-source nature of the Crazyflie’s hardware and software was the perfect starting point.

Shortly after, I discovered the world of FPV and the thrill of flying with a bird’s eye view. My journey progressed from rubber-banding an all-in-one camera/VTX to my Crazyflie, to building a 250mm racing quad (via the BigQuad deck), and into the world of Betaflight (including bringing Betaflight support to the Crazyflie hardware).

 

Naturally, the announcement of the Bolt (then known as the RZR) piqued my interest, and the folks at Bitcraze graciously allowed me early hands-on with the product.

This post details my progress towards building out an FPV-style drone on top of the Crazyflie Bolt.

Component List

The FPV community has come a long way since 2015. What once was a very complicated process is now well documented and similar to building a PC (well, with some soldering). For the latest details on the specifics of building FPV drones, I recommend resources such as Joshua Bardwell or the r/Multicopter subreddit.

Turns out I had enough components lying around for a 4-inch (propeller diameter) build based on 3S (3 cell) LiPo batteries. Again, there’s nothing special about these parts (in fact they’re all out of date). Take this list as a guide, and do your own research.

  • PDB (Power Distribution Board): This is a circuit board that produces regulated voltages from an unregulated LiPo battery. The Bolt has built-in regulators but is only rated up to an 8A current draw per motor. My 4 inch propellers will certainly draw more than 8A, and so an external PDB is required (plus having dedicated 12V and 5V supplies is nice for peripherals).
  • 4x DYS 1806 Brushless Motors: Brushless motors use magnetic pulses to rotate a motor bell (distinct from brushed motors found on the regular Crazyflie).
  • 4x DYS 20A BLHeli_S ESCs (Electronic Speed Controller): This is a piece of circuitry that accepts a logic-level control signal and applies direct battery power to motor coils to make a brushless motor spin. They have to be rated for the current draw expected by the battery+propeller combination.
  • Tweaker (by Shendrones) Frame: I’ve been wanting to build a quad around this frame, and the large square hole is interesting for the Bolt (more on that later). One thing to keep in mind is this is an ‘H’ style frame. That is, it’s longer than it is wide, so flight will not be perfectly symmetrical. If you’re interested in building a larger Crazyflie and not so interested in FPV, you’ll definitely want a symmetrical ‘X’ style frame.
  • WS2812B addressable LEDs: LEDs are proven to make things better. It’s science.
  • Camera + VTX: For a full FPV setup, you’ll need a camera and a video transmitter. For the most part these run completely independently of the flight controller and so I’ll omit them from this article — what I’ve shown in the picture above is horribly out of date anyway.
  • RX: Radio receiver. For longer range flights and reduced latency it may be a good idea to use an external radio and UART-based receiver with diversity antennas. However, some specific work went in to the Bolt’s antenna design, so I’ll be sticking with the on-board NRF51 and external antenna.
  • Flight Controller: The Crazyflie Bolt!

The Build

Again, there are hundreds of fantastic guides on the web that detail how to build an FPV quadcopter. Instead of trying to create another, here are some notes specific to my Bolt build.

Expansion Decks

Since the Bolt is pin compatible with the Crazyflie, I thought it would be interesting to try and take advantage of a couple existing Crazyflie expansion decks in my build: The LED Ring Deck, the Flow Deck v2, and the Micro SD Card Deck.

The LED Ring Deck

The LEDs were the most hands-on feature to enable. Rather than simply attaching the LED ring inside the frame, I mounted a series of WS2812B lights to the underside of my frame’s arms. The LED Ring Deck consists of 12 LEDs connected in series — so I put three LEDs on each arm of the frame and wired them up in a daisy-chain.

Finally, I soldered the lead to IO_2 (the same that’s used by the LED Ring Deck) on a Breakout Deck.

Since this isn’t the official LED Ring Deck, there’s no OW memory ID. The deck must be force-enabled by specifying a compile flag in your tools/build/make/config.mk file:

CFLAGS += -DDECK_FORCE=bcLedRing

With the custom firmware, the under-arm LEDs work just like the LED Ring Deck (other than the lack of front-facing LEDs).

Micro SD Card Deck

Most popular flight controllers feature flash storage or SD card slots for data logging. The FPV community uses storage to log sensor data for PID tuning and debugging. Naturally, this deck is a good fit on my Bolt build, and requires no additional modification.

Flow Deck (v2)

Remember my interest in the square cutout on my frame of choice? That, and my unorthodox choice to mount the Bolt board below my PDB, means I can theoretically use the bottom-attached Flow Deck to achieve some lateral stabilization while close to the ground. In theory, the VL53L1x ranger should work outdoors thanks to its usage of 940nm light as opposed to 850nm.

Note: This photo also shows the daisy chain wire connecting banks of LEDs in series

Other Build Tips

  • It’s good practice to soft mount flight controllers to minimize transferring motor/prop vibrations into the IMU. I used these to isolate the flight controller from the frame — not perfect, but better than a rigid mount.
  • The receiver antenna must be mounted clear of the carbon fiber frame and electronics. I like to use a heavy duty zip tie and attach the antenna with heat shrink.
  • The Bolt can be powered from a 5V regulator on your PDB, but if you want to take advantage of the VBat sensor it should be powered from the raw battery leads instead. However, most ESCs support active braking (the ability to slow/stop the propellers on demand). Active braking is known to produce a lot of back-voltage, which can damage some circuits. To be safe, since I’m using a 3S battery (12.6V when fully charged, 11.1V nominal), I chose to power the Bolt off a regulated 12V supply from my PDB. This way, the PDB’s regulator will filter out voltage spikes and help protect the Bolt. Readings won’t be accurate at the higher range, but what really matters for a voltage sensor is to know when to land.

Results

It works! There is work needed to improve flight, though:

  • Control tuning is required. The powerful brushless motors respond much quicker than brushed motors, and so many of the PID and/or Kalman parameters are too aggressive or just non-optimal.
  • Stabilization with the Flow deck does not work — I haven’t spent much time debugging, but my guess is it’s either due to the Kalman tuning or problems with the VL53L1x ranging outdoors (which also impacts the flow measurements).
  • Betaflight Support: Betaflight has no driver for the BMI088 IMU used on the Crazyflie Bolt or the Crazyflie 2.1.
  • Safety Features: Brushless quads are very dangerous and can cause serious injuries. It’d be good to implement a kill-switch and a more aggressive failsafe in the firmware to prevent flyaways.

All in all, this was an enjoyable project and I’m excited to see some autonomous brushless quads coming out of the Crazyflie community!

This week we have a guest blog post from Joseph La Delfa.

DroneChi is a human-drone interaction experience that uses the Qualisys motion capture system to enable the Crazyflie to react to the movements of your body. At the Exertion Games Lab in Melbourne, Australia, we like to design new experiences with technology where the whole body can be the controller and is involved in the experience.

When we first put these two technologies together we realised two things. 

  1. It was super easy to keep your attention on the drone as it flew around the room reacting to your movements.
  2. As a result, it was also really easy to reflect on and refine one’s own movements.

We thought this was like meditation mediated by a drone, and wanted to investigate how to further enhance this experience through design. We thought the smooth movements were especially mesmerising, and so I decided to take beginner Tai Chi lessons to get an appreciation of what it felt like to move like a Tai Chi student.

We undertook an 8 month design program where we simultaneously designed the form and the interaction of the Crazyflie. The initial design brief was pretty simple: make it look and feel light, graceful and from nature. In Tai Chi you are asked all the time to imagine a flower, the sea or a bird as you embody its movements; we wanted to emulate these experiences but without verbal instruction. Could a drone facilitate these sorts of experiences through its design?

We will present a summarised version of how the form and the interaction came about. Starting with a mood board, we collated radially symmetrical forms from nature to match a drone’s natural weight distribution.

We initially went with a jellyfish, hoping to emulate their “push gliiide” movement by articulating laser cut silhouettes (see fig c). This proved incredibly difficult; after searching high and low for a foam that was light enough for the Crazyflie to lift, we just could not get it to fly stably.

However, we serendipitously fell into the flower shape by trying to improve how we joined the carbon rods together in a loop (fig b below). By joining them to the main hull we realised it looked like a petal! This set us down the path of the flower; we even flipped the chassis so that the LED ring faced upwards (cheers to Tobias for that firmware hack).

Whilst this was going on, we were experimenting with how to actually interact with the drone. Considering the experience was to be demonstrated at a major conference, we decided to keep the tracking only to the hands, which allowed quick changeovers. We started with cardboard pads, experimented with gloves, but settled on some floral inspired 3D printed pads. We were so tempted to include the articulation of the fingers but decided against it to avoid scope creep! Further to this, we curved the final hand pads (fig d) to promote the idea of holding the drone, inspired by a move in Tai Chi called “holding the ball”.

As a beginner practicing Tai Chi, I was sometimes overwhelmed by the number of aspects of my movement that constantly needed monitoring: palms out, heel out, elbow slightly bent, step forward, etc. However, in brief moments it all came together and I was able to appreciate the feelings of these movements as opposed to consciously monitoring them. We wanted this kind of experience when learning DroneChi, so we devised a way of mapping the drone to the body to emulate this. After a few iterations we settled on the “mid point” method as seen below.

The drone only followed the midpoint (blue dot above) if it was within 0.2 m of it. If it was outside of this range, it would float away slowly from the participant. This may seem like a lot, but with little in the way of visual guidance (e.g. a laser pointer or an augmented display), a person can only rely on the proprioceptive feedback from their own body. We used the onboard LED ring on the drone to let the person know at least when they are close, but that is all the help they got. As a result, this takes a lot of concentration to get right!
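A minimal sketch of this midpoint mapping is shown below; the 0.2 m radius comes from the text, while the drift speed and the exact update structure are assumptions for illustration:

<code>import numpy as np

FOLLOW_RADIUS = 0.2   # metres: the drone only tracks the midpoint inside this radius
DRIFT_SPEED = 0.05    # m/s: assumed speed of the slow float away

def dronechi_setpoint(left_hand, right_hand, drone_pos, dt):
    """Map the two tracked hand pads to a drone position setpoint (sketch).
    All positions are 3D numpy arrays from the motion capture system."""
    midpoint = 0.5 * (left_hand + right_hand)
    offset = midpoint - drone_pos
    if np.linalg.norm(offset) < FOLLOW_RADIUS:
        return midpoint                               # follow the hands' midpoint
    away = drone_pos - midpoint
    return drone_pos + DRIFT_SPEED * dt * away / np.linalg.norm(away)  # drift away slowly</code>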

In the end we were super happy with the final experience; in the study, participants reported tuning into their bodies when using the drone, as well as experiencing a unique sort of relationship to the drone: not entirely like a pet, but also like an extension of the body. We will be investigating both findings from the study through the design and testing of a new system on the Crazyflie. We see this work contributing to more intimate designs for human-drone interactions as well as being applicable to health contexts such as rehabilitation.

Crazyflies are great for indoor applications, thanks to their maneuverability and ubiquitous character. Their small size, however, limits sensor quality and compute capability. In our recent work we present source seeking onboard a Crazyflie by deep reinforcement learning. We show a general methodology for deploying deep neural networks on heavily constrained nano drones, using full 8-bit quantization and input scaling.

Our fully autonomous light-seeking CrazyFlie

Problem definition

Source seeking can be interesting in a variety of contexts. We focus on light seeking, as seen in nature. Many insects rely on light, either for survival or navigation. Light seeking in aerial robotics has many applications, such as finding the exit out of a dark room. 

Our goal is to fully autonomously find a light source, using only the onboard Micro Controller Unit (MCU) and deep reinforcement learning. 

Crazyflie configuration

Our fully autonomous nano drone uses several standard and custom sensors. We use the Multi-ranger deck and the Flow deck for position control and obstacle avoidance.

The Multiranger deck with our custom light sensor

We add a custom light sensor, based on the Adafruit TSL2591 sensor. The custom light sensor nicely fits in the multiranger deck, adding little mass and inertia (total vehicle mass is 33 grams).

CrazyFlie 2.1 with multiranger, flowdeck and light sensor

Algorithm

We use a deep reinforcement learning algorithm with a discrete action space. The neural network policy has laser rangers and light readings (current and past values) as input. The neural network tells the drone to rotate left, rotate right, or fly forward. We train a neural network with two hidden layers of 20 nodes each, with bias terms and ReLU activation functions. The input layer is a vector with a length of 20 (4 states), which, compared to images, greatly reduces computational effort.

DQN policy architecture
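A Keras sketch of a policy with this shape is shown below (illustrative only; our actual network was trained in the Air Learning platform and the output interpretation is simplified here):

<code>import tensorflow as tf

def build_policy():
    """20-element observation in, one Q-value per discrete action out."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(20,)),
        tf.keras.layers.Dense(20, activation='relu'),
        tf.keras.layers.Dense(20, activation='relu'),
        tf.keras.layers.Dense(3),        # rotate left, rotate right, fly forward
    ])

policy = build_policy()
q_values = policy(tf.zeros((1, 20)))            # dummy observation
action = int(tf.argmax(q_values, axis=-1)[0])   # greedy action index</code>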

Simulation and conversion

We train our agent in simulation using the Air Learning simulation platform, after which we fully quantize the neural network to 8-bit integers.

To maintain accuracy after quantization, we came up with several quantization innovations. Both the input layer and all tensors in the network need to have a pre-defined [min, max] range in float32 in order to be converted to 8-bit integers.

Air Learning pipeline

In the input layer, not all inputs have the same range. That is, a laser ranger can have values from 0 to 5 meters while our light sensor may return a value between 0 and 300 lux. To avoid this issue, we scale all inputs to the same range.

Additionally, the tensors in the network need to have an assigned [min,max] range for quantization. To achieve this, we input a range of representative input into the unquantized model, and read out the values of intermediate layers. With this strategy, we arrive at a 2.9x speed-up compared to float32 inference.
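Putting the two ideas together, a post-training full-integer quantization pass with TensorFlow Lite might look like the sketch below; the per-channel ranges and the ‘policy’ model are carried over from the earlier sketch and are assumptions, not our exact pipeline:

<code>import numpy as np
import tensorflow as tf

N_OBS = 20
# Per-channel ranges are illustrative: e.g. laser rangers span 0-5 m and the
# light channels 0-300 lux (the real split follows our state definition).
OBS_MIN = np.zeros(N_OBS, dtype=np.float32)
OBS_MAX = np.full(N_OBS, 5.0, dtype=np.float32)
OBS_MAX[-4:] = 300.0

def scale_obs(obs):
    # Scale every input channel to a common [0, 1] range before inference
    return (obs - OBS_MIN) / (OBS_MAX - OBS_MIN)

def representative_data():
    # Representative (already scaled) observations let the converter record a
    # [min, max] range for every tensor, enabling full 8-bit quantization
    for _ in range(200):
        obs = np.random.uniform(OBS_MIN, OBS_MAX).astype(np.float32)
        yield [scale_obs(obs).reshape(1, N_OBS)]

converter = tf.lite.TFLiteConverter.from_keras_model(policy)  # 'policy' from the sketch above
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
with open('policy_int8.tflite', 'wb') as f:
    f.write(converter.convert())</code>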

Implementation

We use TensorFlow Lite to deploy our TensorFlow models in C on the Crazyflie. The TFMicro stack, together with the actual model, almost completely fills up the available RAM.

RAM utilization on the CrazyFlie 2.1

The total amount of RAM available on the CrazyFlie 2.1 is 196kB, of which only 131kB is available for static allocation at compile time. The Bitcraze software stack uses 98kB of RAM, leaving only 33kB available for our purposes. The TFMicro stack takes up 24kB, thus leaving 9kB for the actual model (e.g., weights, bias terms). 

We also analyzed CPU usage and noticed a high number of interrupts caused by the ‘stabilizer’ thread, i.e., the PID controllers. Because of these interrupts, inference of our model takes 46.4 times longer than it would without interruption.

Our quantized model is 3kB. If it were an FP32 model, it would have taken 12kB, which would not have fitted in the available memory. We were able to run inference at 4Hz, compared to the estimated 1.4Hz of the same but unquantized model. 

In a practical sense, we noticed a decreased level of stability when increasing model size. Occasionally the drone would reboot randomly while flying. Possible causes for this behavior are RAM overflow and task scheduling problems in RTOS. Besides, we observed variation in performance loss after quantization. Some of our trained models would just keep rotating after quantization, while our final model demonstrates robust source seeking behavior. This degree of uncertainty can possibly be avoided using quantization aware training. 

Finally, flying in a dark room without a position estimate can be challenging. The PID controllers heavily rely on information provided by the Flow Deck. This information is limited when little light is present while flying over a floor containing little features. To fix this, we added mats with texture on the ground, adding features and enabling stable flight in a dark room.

Flight tests

To validate our simulation results, we created a cluttered environment with a light source. We randomly initialized the drone in the room and observed a success rate of 80% over a total of 105 flight tests. By varying the environment and initial drone position, we learned more about the inner workings of our algorithm.

Experiment testing environment

We learned that the algorithm performs better with more obstacles, and that a closer initial position improves performance. Generally, source seeking far away from the source seems really hard. Almost no variation in source strength exists between different measurements, and the drone observes mostly noise. 

Outlook

With our methodology, we were able to perform fully autonomous source seeking using deep reinforcement learning on a Cortex-M4 MCU. We hope our methodology will be applicable to other TinyML applications where resources are heavily constrained. Developing custom accelerators for a specific workload is time-consuming and expensive, while general purpose MCU’s are cheap and widely available. With our methodology, we unlock new applications for learning algorithms on heavily constrained platforms.

Direct path to source in empty room, blue = take-off

Links

Video: https://www.youtube.com/watch?v=wmVKbX7MOnU

Paper: https://arxiv.org/abs/1909.11236

Github: https://github.com/harvard-edge/source-seeking

Feel free to contact us if you have any questions or ideas: bduisterhof@g.harvard.edu

Hi everyone, here at the Integrated Systems Laboratory of ETH Zürich, we have been working on an exciting project: PULP-DroNet.
Our vision is to enable artificial-intelligence-based autonomous navigation on small-size flying robots, like the Crazyflie 2.0 (CF) nano-drone.
In this post, we will give you the basic ideas to make the CF able to fly fully autonomously, relying only on onboard computational resources, that means no human operator, no ad-hoc external signals, and no remote base-station!
Our prototype can follow a street or a corridor and at the same time avoid collisions with unexpected obstacles even when flying at high speed.


PULP-DroNet is based on the Parallel Ultra Low Power (PULP) project envisioned by the ETH Zürich and the University of Bologna.
In the PULP project, we aim to develop an open-source, scalable hardware and software platform to enable energy-efficient complex computation where the available power envelope is of only a few milliwatts, such as advanced Internet-of-Things nodes, smart sensors — and of course, nano-UAVs. In particular, we address the computational demands of applications that require flexible and advanced processing of data streams generated by sensors such as cameras, which is beyond the capabilities of typical microcontrollers. The PULP project has its roots in the RISC-V instruction set architecture, an innovative academic and research open-source architecture alternative to ARM.

The first step to make the CF autonomous was the design and development of what we called the PULP-Shield, a small form factor pluggable deck for the CF, featuring two off-chip memories (Flash and RAM), a QVGA ultra-low-power grey-scale camera and the PULP GAP8 System-on-Chip (SoC). The GAP8, produced by GreenWaves Technologies, is the first commercially available embodiment of our PULP vision. This SoC features nine general purpose RISC-V-based cores organised in an on-chip microcontroller (1 core, called Fabric Ctrl) and a cluster accelerator of 8 cores, with 64 kB of local L1 memory accessible at high bandwidth from the cluster cores. The SoC also hosts 512kB of L2 memory.

Then, we selected as the algorithmic heart of our autonomous navigation engine an advanced artificial intelligence algorithm based on DroNet, a Convolutional Neural Network (CNN) that was originally developed by our friends at the Robotics and Perception Group (RPG) of the University of Zürich.
To enable the execution of DroNet on our resource-constrained system, we developed a complete methodology to map computationally-intense deep neural networks on the PULP-Shield and the GAP8 SoC.
The network outputs two pieces of information, a probability of collision and a steering angle, which are translated into the dynamic information used to control the drone: forward velocity and angular yaw rate, respectively. The layout of the network is the following:
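As a rough illustration of that mapping (following the original DroNet idea of slowing down as the collision probability rises and low-pass filtering both commands; all constants here are placeholders):

<code>import numpy as np

V_MAX = 1.0     # m/s, placeholder forward-speed cap
YAW_MAX = 90.0  # deg/s, placeholder yaw-rate cap
ALPHA = 0.7     # low-pass smoothing factor (assumed)

def dronet_to_commands(p_collision, steering, v_prev=0.0, yaw_prev=0.0):
    """Convert the CNN's collision probability and steering output into a
    forward velocity and a yaw rate, low-pass filtered for smooth flight."""
    v = ALPHA * v_prev + (1.0 - ALPHA) * (1.0 - p_collision) * V_MAX
    yaw = ALPHA * yaw_prev + (1.0 - ALPHA) * float(np.clip(steering, -1.0, 1.0)) * YAW_MAX
    return v, yaw

print(dronet_to_commands(p_collision=0.1, steering=0.3))  # gentle right turn at speed</code>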

Therefore, our mission was to deploy all the required computation onboard our PULP-Shield mounted on the CF, enabling fully autonomous navigation. To put the problem into perspective, in the original work by the RPG, the DroNet CNN enabled autonomous navigation of big-size drones (e.g., the Bebop Parrot). In the original use case, the computational power and memory was not a problem thanks to the streaming of images to a remote base-station, typically a laptop consuming 30-100 Watt or more. So our mission required running a similar workload within 1/1000 of the original power.
To make this work, we combined fixed-point arithmetic (instead of “traditional” floating point), some minimal modification to the original topology, and optimised memory and computation usage. This allowed us to squeeze DroNet in the ultra-small power budget available onboard. Our most energy-efficient configuration delivers 6 frames-per-second (fps) within only 64 mW (including all the electronics on the PULP-Shield), and when we push the PULP platform to its limit, we achieve an impressive 18 fps within just 3.5% of the total CF’s power envelope — the original DroNet was running at 20 fps on an Intel i7.

Do you want to check for yourself? All our hardware and software designs, including our code, schematics, datasets, and trained networks, have been released and made available for everyone as open source and open hardware on Github. We look forward to other enthusiasts’ contributions, both in hardware enhancements and in software (e.g., smarter networks), to create a great community of people interested in working together on smart nano-drones.
Last but not least, the piece of information you were all waiting for. Yes, soon Bitcraze will allow you to enjoy our PULP-Shield; actually, even better, you will play with its evolution! Stay tuned as more information about the “code-name” AI-deck will be released in upcoming posts :-).

If you want to know more about our work:

Questions? Drop us an email (dpalossi at iis.ee.ethz.ch and fconti at iis.ee.ethz.ch)

This week we have a guest blog post from Javier Burgués. Enjoy!

I would like to introduce you to a rather unknown application of the Crazyflie 2.0 (CF2): chemical sensing. Due to its small form-factor, the CF2 is an ideal platform for carrying out gas sensing missions in hazardous environments inaccessible to terrestrial robots and bigger drones. For example, searching for victims and hazardous gas leaks inside pockets that form within the wreckage of collapsed buildings in the aftermath of an earthquake or explosion.

To evaluate the suitability of the CF2 for these tasks, I developed a custom deck, named the MOX deck, to interface two metal oxide semiconductor (MOX) gas sensors to the CF2. Then, I performed experiments in a large indoor environment (160 m2) with a gas source placed in challenging positions for the drone, for example hidden in the ceiling of the room or inside a power outlet box. From the measurements collected in motion (i.e. without stopping) along a predefined 3D sweeping path that takes around 3 minutes, the CF2 builds a map of the gas distribution and identifies the most likely source location with high accuracy.

1. MOX deck

The MOX deck (Fig. 1a) contains two sockets for 4-pin Taguchi-type (TGS) gas sensors, a temperature/humidity sensor (SHT25, Sensirion AG), a dual-channel digital potentiometer (AD5242BRUZ1M, Analog Devices), and two p-type MOSFET transistors (NX2301P, Nexperia). I used TGS 8100 sensors (Figaro Engineering) due to their compatibility with 3.0 V logic, their power consumption of only 15 mW (the lowest on the market as of June 2016) and their miniaturized (MEMS) form factor. Since the sensor heater uses 1.8 V, two transistors (one per sensor) reduce the applied power by means of pulse width modulation (PWM). The MOX read-out circuit (Fig. 1b) is a voltage divider connected to the μC’s analog-to-digital converter (ADC). The voltage divider is powered at 3.0 V and the load resistor (RL) can be set dynamically by the potentiometer (from 60 Ω to 1 MΩ in steps of 3.9 kΩ). Dynamic configuration of the load resistor is important in MOX gas sensors due to the large dynamic range of the sensor resistance (several orders of magnitude) when exposed to different gas concentrations. The sensors were calibrated (by exposing them to several known concentrations) to convert the raw output into parts-per-million (ppm) concentration units.
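From the divider output, the sensor resistance can be recovered before calibration; a small sketch, assuming the load resistor sits on the low side of the divider and a 12-bit ADC:

<code>VCC = 3.0        # V, divider supply
ADC_MAX = 4095   # 12-bit ADC assumed

def sensor_resistance(adc_raw, r_load):
    """Recover the MOX sensor resistance (ohms) from the raw ADC count and the
    currently selected load resistor, assuming a low-side load resistor."""
    v_out = VCC * adc_raw / ADC_MAX
    return r_load * (VCC - v_out) / v_out

# Example: a mid-scale reading with RL = 10 kOhm implies Rs of about 10 kOhm
print(sensor_resistance(2048, 10e3))</code>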

The initialization task of the deck driver configures the PWM, initializes the SHT25 sensor, sets the wiper position of both channels of the potentiometer and adds the MOX readout registers to the list of variables that are continuously logged and transmitted to the base station. The main task of the deck driver reads the MOX sensor output voltage and the temperature/humidity values from the SHT25 and sends them to the ground station at 10 Hz.

2. Experimental Arena, External Localization System and Gas Source

Experiments were performed in a large robotics laboratory (160 m2 × 2.7 m height) at Örebro University (Sweden). The laboratory is divided into three connected areas (R1–R3) of 132 m2 and a contiguous room (R4) of 28 m2 (Fig. 2). To obtain the 3D position of the drone, I used the Loco positioning system (LPS) from Bitcraze, based on ultra-wide band (UWB) radio transmitters. Six LPS anchors were positioned in known locations of the experimental arena and one LPS tag was fixed to the drone. The six LPS anchors were placed in the central area of the laboratory, shaped in two inverted triangles (below and above the flight area).

A gas leak was emulated by placing a small beaker filled with 200 mL of ethanol 96% in different locations of the arena (Fig. 4). Ethanol was used because it is non-toxic and easily detectable by MOX sensors. Two experiments were carried out to check the viability of the proposed system for gas source localization and mapping in complex environments. In the first experiment, the gas source was placed on top of a table (height = 1 m) in the small room (R4). In the second experiment, the source was placed inside the suspended ceiling (height = 2.7 m) near the entrance to the lab (R1). Since the piping system of the lab runs through the suspended ceiling, the gas source could represent a leak in one of the pipes. A 12 V DC fan (Model: AD0612HB-A70GL, ADDA Corp., Taiwan) was placed behind the beaker to facilitate dispersion of the chemicals in the environment, creating a plume. The experiments started five minutes after setting up the source and turning on the DC fan.

3. Navigation strategy

The drone was sent to fly along a predefined sweeping path consisting of two 2D rectangular sweeps at different heights (0.9 m and 1.8 m), collecting measurements in motion (Fig. 5). These two heights divide the vertical space of the lab into three parts of equal size. Flying first at the lower altitude minimizes the impact of the propellers' downwash on the gas distribution. For safety reasons, the trajectory was designed to ensure enough clearance around obstacles and walls, and people working inside the laboratory were told to remain in their seats during the experiments. The ground station communicates the flight path to the drone as a sequence of (x,y,z) waypoints, with a target flight speed of 1.0 m/s. The CF2 reports the measured concentration and its location to the ground station every 100 ms.
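For readers who want to reproduce a similar sweep, something along these lines can be done with cflib's high-level position commander. The waypoint list below is a stand-in, not the actual trajectory flown in the experiments.

```python
# Illustrative waypoint sweep at two heights using cflib (placeholder waypoints).
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.positioning.position_hl_commander import PositionHlCommander

URI = 'radio://0/80/2M/E7E7E7E7E7'
waypoints = [(0.0, 0.0, 0.9), (4.0, 0.0, 0.9), (4.0, 1.0, 0.9), (0.0, 1.0, 0.9),
             (0.0, 0.0, 1.8), (4.0, 0.0, 1.8), (4.0, 1.0, 1.8), (0.0, 1.0, 1.8)]

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    with PositionHlCommander(scf, default_velocity=1.0, default_height=0.9) as pc:
        for x, y, z in waypoints:
            pc.go_to(x, y, z)   # blocks until the waypoint is reached
        # the drone lands automatically when the commander context exits
```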

At the end of the exploration, the ground station uses all the received information to compute a 3D map of the instantaneous concentration and the ’bouts’. A ’bout’ is declared when the derivative of the sensor response exceeds a certain threshold. Bouts are produced by contact with individual gas patches and some authors use them instead of the instantaneous response (which is more affected by the slow response time of chemical sensors). For gas source localization, we compare two approaches: using the cell with maximum value in the concentration map or using the cell with maximum bout frequency. The bout frequency (bouts/min) is computed as the bout count in a 5 second sliding window multiplied by 12 (to convert it to bouts/min).
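To make the bout definition concrete, here is a minimal sketch of how bouts and bout frequency could be computed from the logged concentration signal. The derivative threshold and window settings are placeholders, not the values used in this work.

```python
# Minimal bout-counting sketch (threshold and window are placeholders).
import numpy as np

def count_bouts(concentration, dt=0.1, threshold=0.5):
    """Count rising-edge threshold crossings of the signal derivative."""
    deriv = np.gradient(np.asarray(concentration, dtype=float), dt)
    above = deriv > threshold
    return int(np.sum(above[1:] & ~above[:-1]))

def bout_frequency(concentration, dt=0.1, window_s=5.0, threshold=0.5):
    """Bouts per minute over the last window_s seconds (5 s window * 12)."""
    n = int(window_s / dt)
    return count_bouts(concentration[-n:], dt, threshold) * 12
```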

4. Results

In the first experiment, the drone took off near the entrance of the lab (R1), 17 meters downwind of a gas source located at the other end of the laboratory (R4). From the gas distribution map (Fig. 6a) it is evident that the gas source must be in R4, because the maximum concentration (35 ppm) was found there while concentrations below 5 ppm were measured in the rest of the lab. The gas plume can be outlined from the location of odor hits. The highest odor hit density (25 hits/min) was also found in R4. The cells corresponding to the maximum concentration (green star) and maximum odor hit frequency (blue triangle) were found 0.94 m and 1.16 m from the true source location, respectively.

In the second experiment, the gas source was located just above the starting point of the exploration, hidden in the suspended ceiling (Fig. 7). The resulting maximum concentration in the test room was measured when the drone flew at h=1.8 m, highlighting the importance of sampling in 3D for localization and mapping of elevated gas sources. However, since the source is presumably not directly exposed to the environment, concentrations below 3 ppm were found in most locations of the room, which complicates the gas source localization task. The concentration and odor hit maps suggest that the gas source is located in the division between R1 and R2, which represents a localization error of 4.0 and 3.31 m, respectively.

5. Conclusions

These results suggest that the CF2 can be used for gas source localization and mapping in large indoor environments. In contrast to previous works, in which long measurement times were taken at predefined or adaptively chosen sampling locations, a rough approximation of such maps can be obtained in a very short time from concentration measurements acquired in motion. The obtained gas distribution maps seem coherent with respect to the true source location and wind direction, and they not only enable detection of the source with relatively small localization errors but also provide a rich visual interpretation of the gas distribution.

If you are interested in more details about this work, take a look at the journal paper or drop me an email at <jburgues8 at gmail dot com> or leave a comment on the blog!

This week we have a guest blog post from Percy Jaiswal about quad rotor dynamics. Enjoy!

  1. Components
    Although most of us are aware of how a quadcopter / drone looks, a generic picture (it's of the Crazyflie drone from Bitcraze) is shown above. It consists of 4 motors, control circuitry in the middle, and propellers mounted on its rotors. For reasons described in the section below, 2 of the rotors rotate in the clockwise (CW) direction and the remaining 2 counterclockwise (CCW). CW and CCW motors are placed next to each other to cancel the moment (described in the next section) generated by them. Propellers come in different configurations: CW or CCW rotating, pusher or tractor, with different radii, pitches, etc.
  2. Force and Moments

    Each rotating propeller produces two kinds of loads. When a rotor rotates, its propeller produces an upward thrust given by F = K_f * ω² (shown by forces F1, F2, F3 and F4 in Figure 2), where ω (omega) is the rotation rate of the rotor measured in radians per second. The constant K_f depends upon many factors like the torque proportionality constant, back-EMF, density of the surrounding air, area swept by the propeller, etc. The values for K_f and K_m (mentioned below) are generally found empirically: we mount the motor and propeller on a load cell and measure the force and moment for different motor speeds. Refer to “System Identification of the Crazyflie 2.0 Nano Quadrocopter” by Julian Forster for details regarding the measurement of K_f and K_m.
    The total upward thrust generated by all 4 propellers is the sum of the individual thrusts:
    F_total = F_1 + F_2 + F_3 + F_4 = K_f * (ω_1² + ω_2² + ω_3² + ω_4²)
    Apart from the upward force, a rotating propeller also generates an opposing torque, or moment (shown by moments M1, M2, M3 and M4 in Figure 2). For example, a rotor spinning in the CW direction will produce a torque which causes the body of the drone to spin in the CCW direction. A demonstration of this effect can be seen here. This torque is given by M = K_m * ω².
    The moment generated by a motor is opposite to its direction of spin, so CW and CCW spinning motors generate opposite moments. This is why we mix CW and CCW rotating motors: in a steady hover, the moments from the 2 CW and 2 CCW rotors cancel each other out and the drone doesn't keep spinning about its vertical body axis (a rotation called yaw).
    Moments / torques M1, M2, M3 and M4 are the moments generated by the individual motors. The overall moment generated around the drone's z axis (Z_b in Figure 2) is the sum of all 4 moments, keeping in mind that CW and CCW moments have opposite signs:
    moment_z = M1 + M2 + M3 + M4. In an ideal hover (or whenever we don't want any yaw, i.e. rotation around the z axis), moment_z will be close to 0.
    Contrary to moment_z, the overall moments / torques generated around the x and y axes are calculated a little differently. Looking at Figure 2, we can see that motors 1 and 3 lie on the x axis of the drone, so they won't contribute any moment / torque around the x axis. However, the difference in the forces generated by motors 2 and 4 will cause the drone's body to tilt around its x axis, and this is what constitutes the overall moment / torque around the x axis, given by
    moment_x = (F2 - F4) * L, where L is the distance from the axis of rotation of the rotors to the center of the quadrotor. By the same logic,
    moment_y = (F3 - F1) * L.
    Summing it up, the moments around all 3 axes can be denoted by the vector below
    moment = [moment_x, moment_y, moment_z]^T (^T for Transpose)
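    As a small sanity check of the equations above, here is a Python sketch that maps four motor speeds to the total thrust and the body moments. K_F, K_M and L below are placeholder values, not identified Crazyflie parameters.

```python
# Sketch of the thrust / moment model above (placeholder constants).
import numpy as np

K_F = 1.8e-8    # thrust coefficient  [N / (rad/s)^2]   (placeholder)
K_M = 2.2e-10   # moment coefficient  [N*m / (rad/s)^2] (placeholder)
L   = 0.046     # rotor-to-center distance [m]          (placeholder)

def forces_and_moments(omega, spin_sign=(1, -1, 1, -1)):
    """omega: motor speeds 1..4 [rad/s]; spin_sign: sign of each reaction moment."""
    omega = np.asarray(omega, dtype=float)
    F = K_F * omega**2                            # per-motor thrusts F1..F4
    M = K_M * omega**2 * np.asarray(spin_sign)    # signed yaw moments M1..M4
    f_total  = F.sum()
    moment_x = (F[1] - F[3]) * L                  # motors 2 and 4
    moment_y = (F[2] - F[0]) * L                  # motors 3 and 1
    moment_z = M.sum()
    return f_total, np.array([moment_x, moment_y, moment_z])
```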

  3. Orientation and position

    A drone has positional as well as orientational attributes, meaning it can be at any position (x, y, z coordinates) and can make certain angles (theta (θ), phi (φ) and psi (ψ)) with respect to the world / inertial frame. The figure above shows theta (θ), phi (φ) and psi (ψ) more clearly.
  4. Moving in z and x & y direction

    Whenever a drone is stationary, it is aligned with the world frame, meaning its z axis is in the same direction as the world's gravitational field. In such a case, if the drone wants to move upwards, it just needs to set the proper propeller rotation speed and it will start moving in the z direction according to the net force: total generated thrust minus gravity. However, if it wants to move in the x or y direction, it first needs to orient itself (making the required theta or phi angle). When that happens, the total thrust generated by the four propellers (F_thrust) has a component in the z direction and one in the x / y direction, as shown in the 2D figure above. For the example shown, using basic trigonometry, we can find the z and y directional forces from the following equations, where phi is the angle made by the drone's body z axis with the world frame.
    F_y ​= F_thrust * ​sinϕ
    F_z ​= F_thrust * ​cosϕ
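    A quick numeric example of this decomposition, with made-up numbers:

```python
# Example: 0.30 N of total thrust tilted by 10 degrees (made-up numbers).
import math

f_thrust = 0.30               # total thrust [N]
phi = math.radians(10)        # tilt angle

f_y = f_thrust * math.sin(phi)   # sideways component
f_z = f_thrust * math.cos(phi)   # vertical component
print(f"F_y = {f_y:.3f} N, F_z = {f_z:.3f} N")
```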
  5. World and Body frame

    To measure the above stated theta, phi and psi angles, the drone's onboard IMU sensor is usually used. This sensor measures how fast the drone's body is rotating around its body frame axes and provides those angular velocities as its output. When processing the IMU outputs, we need to be careful and understand that the angular velocities it reports are not with respect to the world frame, but with respect to the body frame. The diagram above shows both of these frames for reference.
  6. Rotation Matrix

    To convert coordinates from Body Frame to World Frame and vice versa, we use a 3×3 matrix called Rotation Matrix. That is, if V is a vector in the world coordinates and V’ is the same vector expressed in the body-fixed coordinates, then the following relations hold:
    V’ = R * V and
    V = R^T * V’ where R is Rotation Matrix and R^T is its transpose.
    To understand this relation completely, let’s begin by understanding rotation in 2D. Let the vector V be rotated by an angle β to get the new vector V′. Let r=|V|. Then, we have the below relations:
    vx = r*cosα and vy = r*sinα
    v’x = r*cos(α+β) and v’y = r*sin(α+β). Expanding this, we get
    v’x = r * (cosα * cosβ - sinα * sinβ) and v’y = r * (sinα * cosβ + cosα * sinβ)
    v’x = vx * cosβ - vy * sinβ and v’y = vy * cosβ + vx * sinβ
    This is exactly what we want, because the desired point V’ is described in terms of the original point V and the rotation angle β. To conclude, we can write this in matrix notation as
    [v’x]   [cosβ  -sinβ] [vx]
    [v’y] = [sinβ   cosβ] [vy]

    Going from 2D to 3D is relatively simple in the Rotation Matrix's case. In fact, the 2D matrix we just derived can actually be thought of as a 3D rotation matrix for a rotation around the z axis. Hence, for a rotation around the z-axis the Rotation Matrix would be
    Rz(β) = [ cosβ  -sinβ   0 ]
            [ sinβ   cosβ   0 ]
            [  0      0     1 ]

    The 0, 0, 1 values in the last row and column indicate that the z coordinate of the rotated point (v’z) is the same as the original point's z coordinate (vz). We will call this z axis Rotation Matrix Rz(β). Extrapolating the same logic to rotations around the x and y axes, we get Rx(β) and Ry(β) as
    Rx(β) = [ 1    0      0   ]          Ry(β) = [  cosβ   0   sinβ ]
            [ 0   cosβ  -sinβ ]                  [   0     1    0   ]
            [ 0   sinβ   cosβ ]                  [ -sinβ   0   cosβ ]

    And the final 3D Rotation Matrix is simply the matrix product of the above three Rotation Matrices:
    R = Rz(ψ) x Ry(θ) x Rx(φ), where psi (ψ), theta (θ) and phi (φ) are the rotations around the z, y and x axes, respectively.
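    A small numpy sketch of this composition, following the same Z-Y-X order:

```python
# Elementary rotation matrices and their composition R = Rz(psi) Ry(theta) Rx(phi).
import numpy as np

def Rx(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def Ry(a):
    return np.array([[ np.cos(a), 0, np.sin(a)],
                     [ 0,         1, 0        ],
                     [-np.sin(a), 0, np.cos(a)]])

def Rz(a):
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0,          0,         1]])

def rotation_matrix(phi, theta, psi):
    return Rz(psi) @ Ry(theta) @ Rx(phi)
```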

  7. State Vector and its Derivative
    As our drone has 6 degrees of freedom, we usually track it by monitoring these six parameters along with their derivatives (how they change with time) to get an accurate estimate of the drone's position and velocity. We do this by maintaining what is often called a state vector X = [x, y, z, φ, θ, ψ, x_dot, y_dot, z_dot, p, q, r] and its derivative X_dot = [x_dot, y_dot, z_dot, φ_dot, θ_dot, ψ_dot, x_doubledot, y_doubledot, z_doubledot, p_dot, q_dot, r_dot], where x, y and z are the position of the drone in the world frame and x_dot, y_dot and z_dot are the linear velocities in the world frame. φ, θ and ψ represent the drone's attitude / orientation in the world frame, whereas φ_dot, θ_dot and ψ_dot represent the rates of change of these (Euler) angles. p, q and r are the angular velocities in the body frame, whereas p_dot, q_dot and r_dot are their derivatives (rates of change), i.e. the angular accelerations in the body frame. x_doubledot, y_doubledot and z_doubledot represent the linear accelerations in the world frame.
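    One convenient (but by no means the only) way to hold this state vector in code is a flat numpy array with named indices:

```python
# Flat 12-element state vector with named indices (one possible layout).
import numpy as np

STATE_NAMES = ['x', 'y', 'z', 'phi', 'theta', 'psi',
               'x_dot', 'y_dot', 'z_dot', 'p', 'q', 'r']
IDX = {name: i for i, name in enumerate(STATE_NAMES)}

X = np.zeros(12)
X[IDX['z']] = 0.5   # e.g. hovering at 0.5 m
```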
  8. Linear Acceleration
    As briefed before, whenever the propellers are spinning, the drone will start accelerating in the x, y and z directions depending upon the total thrust generated by its 4 propellers (represented by F_total in the equation below) and the drone's orientation (represented by the Rotation Matrix R). We know force = mass * acceleration. Ignoring the Rotation Matrix R for a moment, if we just consider the acceleration in the z direction, it is given by
    Z_force = (mass * gravitational acceleration) - (mass * Z_acceleration)
    mass * Z_acceleration = mass * gravitational acceleration - Z_force
    and therefore Z_acceleration = gravitational acceleration - (Z_force / mass)
    Extrapolating this to the x and y directions and including the Rotation Matrix (for the reasons described in sections 4 and 6), the linear acceleration of the drone is given by the equation below, where m is the mass of the drone and g is the gravitational acceleration. The negative sign in front of F indicates that we are taking the gravitational force to act in the positive z direction.
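    A sketch of that equation in code, keeping the text's convention that gravity points along positive z (the default mass is only a rough Crazyflie-like placeholder):

```python
# Linear acceleration in the world frame: a = g*e_z - (1/m) * R * [0, 0, F_total].
import numpy as np

def linear_acceleration(f_total, R, m=0.027, g=9.81):
    """f_total: summed propeller thrust [N]; R: rotation matrix from section 6."""
    gravity = np.array([0.0, 0.0, g])
    thrust_body = np.array([0.0, 0.0, f_total])
    return gravity - (R @ thrust_body) / m    # [x_ddot, y_ddot, z_ddot]
```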
  9. Angular acceleration
    Along with linear motion, owing to its rotating propellers and its orientation, the drone will also have some rotational motion. While it is convenient to have the linear equations of motion in the inertial / world frame, the rotational equations of motion are useful to us in the body frame, so that we can express rotations about the center of the quadcopter instead of about our inertial center. As mentioned in section 5, we will use the drone's IMU to get its angular velocities. Let the outputs from the IMU be p, q and r, representing the rotational velocities around the drone's x, y and z body axes.

    We derive the rotational equations of motion from Euler's equations for rigid body dynamics. Expressed in vector form, Euler's equations are written as
    I * ω_dot + ω × (I * ω) = moment

    where ω = [p, q, r]^T is the angular velocity vector, I is the inertia matrix, and moment is the vector of external moments / torques developed in section 2. Please don't confuse the usage of ω as the angular velocity in this section with its usage as the propeller's rotation rate elsewhere; after this section we will go back to using ω as the rotation rate. We can rewrite the above equation as
    ω_dot = I^-1 * (moment - ω × (I * ω))

    Replacing ω with [p, q, r]^T, expanding the moment vector and rearranging the above equation, we get the angular accelerations in the body frame, as sketched below.
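    A code sketch of that last step, assuming a diagonal inertia matrix with placeholder values:

```python
# Body-frame angular acceleration: omega_dot = I^-1 (moment - omega x (I omega)).
import numpy as np

INERTIA = np.diag([1.4e-5, 1.4e-5, 2.2e-5])   # Ixx, Iyy, Izz [kg*m^2] (placeholders)

def angular_acceleration(moment, omega):
    """moment = [mx, my, mz] body torques; omega = [p, q, r] body rates."""
    omega = np.asarray(omega, dtype=float)
    moment = np.asarray(moment, dtype=float)
    return np.linalg.solve(INERTIA, moment - np.cross(omega, INERTIA @ omega))
```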

  10. Rate of change of the Euler angles
    Although the drone's angular velocities are originally observed in the body frame, we need to convert them to Euler angle rates in the world frame. Again, we use a conversion matrix for this purpose, as per the formula below. The derivation of this formula is a little lengthy and is provided in Reference [6].
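    For reference, the standard body-rate to Euler-rate conversion (the formula derived in Reference [6]) looks like this in code:

```python
# Euler-angle rates [phi_dot, theta_dot, psi_dot] from body rates [p, q, r].
import numpy as np

def euler_rates(phi, theta, p, q, r):
    T = np.array([
        [1, np.sin(phi) * np.tan(theta), np.cos(phi) * np.tan(theta)],
        [0, np.cos(phi),                 -np.sin(phi)],
        [0, np.sin(phi) / np.cos(theta),  np.cos(phi) / np.cos(theta)],
    ])
    return T @ np.array([p, q, r])
```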
  11. Recap
    So to recap what we have learnt so far
    1. A quadcopter has 4 (2 CW and 2 CCW) rotating propellers
    2. Each propeller creates a force F = K_f * ω² in the direction perpendicular to its plane and a moment M = K_m * ω² around its axis of rotation.
    3. A drone can be at any x, y, z position and theta (θ), phi (φ) and psi (ψ) orientation.
    4. When a drone wants to move in the z direction (in the world frame) it needs to generate the appropriate force on each propeller (total thrust divided by 4). When it wants to move in either the x or y direction (again in the world frame), it tilts to the required theta / phi angle along with generating the required force
    5. When tracking drone’s motion, we need to handle data in World and Body Frames
    6. To convert angular data from Body Frame to World Frame, a Rotational Matrix is used
    7. To track drone’s movements, we keep track of its state vector X and its derivative X_dot
    8. Rotating propellers generate linear accelerations in x, y and z direction as per equation shown in section 8
    9. Rotating propellers generate angular accelerations around the x, y and z axes in the body frame as per the equation shown in section 9
    10. We convert angular velocities in Body Frame to World Frame Euler angle velocities as per equation shown in section 10.
  12. References
    I don’t want to just list down references, but instead would like to sincerely thank individual authors for their work, without which this article and the understanding which I have gained for drone dynamics would have been almost impossible.
    1. System Identification of the Crazyflie 2.0 Nano Quadrocopter by Julian Forster — http://mikehamer.info/assets/papers/Crazyflie%20Modelling.pdf
    2. Trajectory Generation and Control for Quadrotors by Daniel Warren Mellinger — https://repository.upenn.edu/cgi/viewcontent.cgi?article=1705&context=edissertations
    3. Quadcopter Dynamics, Simulation, and Control by Andrew Gibiansky — http://andrew.gibiansky.com/downloads/pdf/Quadcopter%20Dynamics,%20Simulation,%20and%20Control.pdf
    4. A short derivation to basic rotation around the x-, y- or z-axis — http://www.sunshine2k.de/articles/RotationDerivation.pdf
    5. How do you derive the rotation matrices? — Quora — https://www.quora.com/How-do-you-derive-the-rotation-matrices
    6. Representing Attitude: Euler Angles, Unit Quaternions, and Rotation Vectors by James Diebel — https://www.astro.rug.nl/software/kapteyn/_downloads/attitude.pdf

I would like to sincerely thank the Bitcraze team for allowing me to express myself on their platform. If you liked this post, follow, like or retweet it on Twitter; it will act as encouragement for writing new posts as I continue my journey towards becoming a complete drone engineer.

Till next time….cheers!!

 

Malmö is known to be very beautiful in June. At least that's what I've heard from the Bitcraze team when we talked about having a meet-and-geek at their offices in Sweden.

I've been working with Crazyflies and developing an artist-friendly interface to control them for a year now, and I have always been impressed with their philosophy of work and communication. Over the past year, they've helped me and my companies a lot to create the shows we want, with our specific needs and constraints that researchers don't have, like fast installation time, or confidence in drone take-off (you want to make sure that a drone won't hit the theater director's head at the very beginning of the show!).

 

For that I created LaMoucheFolle, an open-source software with a nice UI to connect, monitor and control multiple drones. LaMoucheFolle is not made to be a flight controller, but acts more as a server that any software, like Unity or Chataigne, can use. That way, users don't need to handle all the connection, feedback, warning and calibration processes and can concentrate on the flight and interaction.
While the first version of LaMoucheFolle got me through most of the demos and workshops, I knew that it could be vastly improved if I understood the drones better, so I decided to ask the creators if we could meet, and here I am!

As I hoped, being physically there allowed me to better understand their practices, and allowed them to better understand mine, so we could find ways to improve both their Crazyflie ecosystem and my contribution to it.

So I refactored, redesigned and improved LaMoucheFolle into a (soon to be released) V2, featuring:
– A new drone manager interface with more intuitive feedback on the drones
– A new multi-threaded drone communication mechanism
– New state-safe, sequenced initialization and flight control of the drones, with a progressive take-off that vastly increases stability
– A Unity demo app and a Chataigne demo session showing how to control the drones from other software via OSC
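To give an idea of what controlling the drones from another application could look like, here is a purely hypothetical Python snippet using the python-osc package. The OSC addresses are invented for illustration and will not match LaMoucheFolle's actual address scheme; check its documentation for the real ones.

```python
# Hypothetical OSC control sketch (addresses are made up for illustration).
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient('127.0.0.1', 9000)                 # host/port assumed
client.send_message('/drone/1/takeoff', [])                 # hypothetical address
client.send_message('/drone/1/position', [0.0, 1.0, 1.2])   # hypothetical address
```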

 

 

 

While developing the new version and talking to the team, some key features and improvements for the use of Crazyflies in shows were discussed, and some of them are now in research / development:
– Health analysis: using the accelerometer's data and Tobias' magic brain, this allows testing the motors while the drone is on the ground to find out if one or more are problematic (too much or not enough vibration, meaning either the propeller or the motor needs to be repaired or replaced)
– Battery analysis: when the battery is fully charged, this allows an automated motor sequence to find out if the battery has an abnormal discharge behavior
– Stealth mode: shut down all the system LEDs (the 4 built-in LEDs on the drone) so it can be invisible, ninja-style
– Normalized battery level and low-battery log values: this allows for safe and consistent feedback of the battery level in percent, and an indicator of whether the drone should land soon. It will also be possible to use this value to monitor the charging progress of a drone.
– LED-ring fade pattern: this mode allows easy fading between solid colors on the LED ring, so it's not necessary to stream all the intermediate colors from one color to another; instead you get a very smooth interpolation using only 2 parameters: the target color and the fade time.

I hope you are as excited as I am about these new features, and if you're not, please tell us what would make you "vibrate"!

I've been spending the last week at their office, and it was a great week: I initially came to improve my software and discuss future development of the Crazyflies (which was great), but what I'll remember the most from this trip is by far the human aspect of the Bitcraze team.
Marcus, Kristoffer, Tobias, Björn and Arnaud are amazing and I'm really happy to have had the chance to see them working, and to collaborate with them. I admire their choice to be fully transparent in their work and amongst themselves, and there is a natural kindness mixed with the passion for their project that makes working there feel like every day is special. Also, Malmö is very beautiful indeed :)

Thank you very much and I hope to reiterate the experience soon,

Skål