Category: Research

This week we have a guest blog post from Joseph La Delfa.

DroneChi is a Human Drone interaction experience that uses the Qualisys motion capture system to enable the Crazyflie to react to the movements of your body. At the Exertion Games Lab in Melbourne, Australia, we like to design new experiences with technology where the whole body can be the controller and is involved in the experience.

When we first put these two technologies together we realised two things. 

  1. It was super easy to keep your attention on the drone as it flew around the room reacting to your movements. 
  2. As a result it was also really easy to reflect on and refine one's own movements. 

We thought this was like meditation mediated by a drone, and wanted to investigate how to further enhance this experience through design. We found the smooth movements especially mesmerising, so I decided to take beginner Tai Chi lessons to get an appreciation of what it felt like to move like a Tai Chi student.

We undertook an 8-month design program where we simultaneously designed the form and the interaction of the Crazyflie. The initial design brief was pretty simple: make it look and feel light, graceful and from nature. In Tai Chi you are constantly asked to imagine a flower, the sea or a bird as you embody its movements; we wanted to emulate these experiences but without verbal instruction. Could a drone facilitate these sorts of experiences through its design?

We will present a summarised version of how the form and the interaction came about. Starting with a mood board, we collated radially symmetrical forms from nature to match a drone’s natural weight distribution.

We initially went with a jellyfish, hoping to emulate their “push gliiide” movement by articulating laser-cut silhouettes (see fig c). This proved incredibly difficult: after searching high and low for a foam that was light enough for the Crazyflie to lift, we just could not get it to fly stably. 

However, we serendipitously fell into the flower shape by trying to improve how we joined the carbon rods together in a loop (fig b below). By joining them to the main hull we realised it looked like a petal! This set us down the path of the flower; we even flipped the chassis so that the LED ring faced upwards (cheers to Tobias for that firmware hack). 

Whilst this was going on we were experimenting with how to actually interact with the drone. Considering the experience was to be demonstrated at a major conference, we decided to track only the hands, which allowed quick changeovers. We started with cardboard pads and experimented with gloves, but settled on some floral-inspired 3D printed pads. We were so tempted to include the articulation of the fingers but decided against it to avoid scope creep! Further to this, we curved the final hand pads (fig d) to promote the idea of holding the drone, inspired by a move in Tai Chi called “holding the ball”.

As a beginner practicing Tai Chi I was sometimes overwhelmed by the number of aspects of my movement that constantly needed monitoring: palms out, heel out, elbow slightly bent, step forward, etc. However, in brief moments it all came together and I was able to appreciate the feeling of these movements as opposed to consciously monitoring them. We wanted this kind of experience when learning DroneChi, so we devised a way of mapping the drone to the body to emulate it. After a few iterations we settled on the “mid point” method, as seen below.

The drone only followed the midpoint (blue dot above) if it was within 0.2 m of it. If it was outside of this range, the drone would float away slowly from the participant. This may seem like a generous margin, but with little in the way of visual guidance (e.g. a laser pointer or an augmented display), a person can only rely on the proprioceptive feedback from their own body. We used the onboard LED ring to let the person know when they are close, but that is all the help they got. As a result this takes a lot of concentration to get right!
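To make the mapping concrete, here is a minimal sketch of how such a midpoint rule could work, assuming 3D hand and drone positions from the motion capture system; the function, the drift step size and the overall structure are illustrative assumptions, not the actual DroneChi code.

```python
import numpy as np

def drone_setpoint(left_hand, right_hand, drone_pos,
                   capture_radius=0.2, drift_step=0.05):
    """Sketch of the 'mid point' mapping: track the midpoint of the hands
    when close to it, otherwise drift slowly away from the participant."""
    left_hand = np.asarray(left_hand, dtype=float)
    right_hand = np.asarray(right_hand, dtype=float)
    drone_pos = np.asarray(drone_pos, dtype=float)

    midpoint = (left_hand + right_hand) / 2.0
    if np.linalg.norm(midpoint - drone_pos) <= capture_radius:
        return midpoint                      # within 0.2 m: follow the midpoint
    away = drone_pos - midpoint
    away /= np.linalg.norm(away)
    return drone_pos + drift_step * away     # otherwise float away slowly
```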

In the end we were super happy with the final experience. In the study, participants reported tuning into their bodies when using the drone, as well as experiencing a unique sort of relationship to the drone: not entirely like a pet, and also like an extension of the body. We will be investigating both findings from the study through the design and testing of a new system on the Crazyflie. We see this work contributing to more intimate designs for human drone interactions, as well as being applicable to health contexts such as rehabilitation.

Crazyflies are great for indoor applications thanks to their maneuverability and ubiquitous character. Their small size, however, limits sensor quality and compute capability. In our recent work we present source seeking onboard a Crazyflie using deep reinforcement learning. We show a general methodology for deploying deep neural networks on heavily constrained nano drones, using full 8-bit quantization and input scaling. 

Our fully autonomous light-seeking Crazyflie

Problem definition

Source seeking can be interesting in a variety of contexts. We focus on light seeking, as seen in nature. Many insects rely on light, either for survival or navigation. Light seeking in aerial robotics has many applications, such as finding the exit out of a dark room. 

Our goal is to fully autonomously find a light source, using only the onboard microcontroller unit (MCU) and deep reinforcement learning. 

Crazyflie configuration

Our fully autonomous nano drone uses several standard and custom sensors. We use the Multiranger and Flow deck for position control and obstacle avoidance.

The Multiranger deck with our custom light sensor

We add a custom light sensor, based on the Adafruit TSL2591 sensor. The custom light sensor fits nicely on the Multiranger deck, adding little mass and inertia (total vehicle mass is 33 grams).

Crazyflie 2.1 with the Multiranger, Flow deck and light sensor

Algorithm

We use a deep reinforcement learning algorithm with a discrete action space. The neural network policy takes laser ranger and light readings (current and past values) as input and tells the drone to rotate left, rotate right or fly forward. We train a neural network with two hidden layers of 20 nodes each, with bias and ReLU activation functions. The input layer is a vector of length 20 (4 states), which, compared to images, greatly reduces computational effort. 

DQN policy architecture
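As a reference for the shape of this policy, a minimal sketch in Keras could look as follows; the layer sizes follow the description above, while the framework choice and everything else here are assumptions rather than the exact training setup.

```python
import tensorflow as tf

def build_policy(num_inputs=20, num_actions=3):
    # Two hidden layers of 20 nodes each with bias and ReLU, as described
    # above; the output layer gives one Q-value per discrete action
    # (rotate left, rotate right, fly forward).
    return tf.keras.Sequential([
        tf.keras.layers.Dense(20, activation="relu", input_shape=(num_inputs,)),
        tf.keras.layers.Dense(20, activation="relu"),
        tf.keras.layers.Dense(num_actions),
    ])
```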

Simulation and conversion

We train our agent in simulation using the Air Learning simulation platform, after which we fully quantize the neural network to 8-bit integers.

To maintain accuracy after quantization, we came up with several quantization innovations. Both the input layer and all tensors in the network need a pre-defined [min, max] range in float32 in order to be converted to 8-bit integers. 

Air Learning pipeline

In the input layer, not all inputs have the same range: a laser ranger can return values from 0 to 5 meters, while our light sensor may return values between 0 and 300 lux. Quantizing such disparate inputs with a single range would waste precision, so we scale all inputs to the same range.

Additionally, the tensors in the network need an assigned [min, max] range for quantization. To determine these ranges, we feed a set of representative inputs through the unquantized model and read out the values of the intermediate layers. With this strategy, we arrive at a 2.9x speed-up compared to float32 inference.
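As an illustration of what such a conversion can look like, below is a hedged sketch using TensorFlow Lite's post-training full-integer quantization with a representative dataset; the helper names and the Keras-model workflow are assumptions, not the exact Air Learning pipeline.

```python
import numpy as np
import tensorflow as tf

def quantize_to_int8(model, representative_inputs):
    """Fully quantize a Keras model to 8-bit integers.

    representative_inputs: iterable of float32 input vectors, already scaled
    to a common range as discussed above; they are used to calibrate the
    [min, max] range of every tensor in the network.
    """
    def rep_dataset():
        for x in representative_inputs:
            yield [np.asarray(x, dtype=np.float32)[np.newaxis, :]]

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = rep_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    return converter.convert()   # flatbuffer bytes, to be deployed with TFMicro
```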

Implementation

We use TensorFlow Lite to deploy our TensorFlow models in C on the Crazyflie. The TFMicro stack, together with the actual model, almost completely fills up the available RAM. 

RAM utilization on the Crazyflie 2.1

The total amount of RAM available on the CrazyFlie 2.1 is 196kB, of which only 131kB is available for static allocation at compile time. The Bitcraze software stack uses 98kB of RAM, leaving only 33kB available for our purposes. The TFMicro stack takes up 24kB, thus leaving 9kB for the actual model (e.g., weights, bias terms). 

We also analyzed CPU usage and noticed a high number of interrupts from the ‘stabilizer’ thread, i.e., the PID controllers. Because of these interrupts, inference of our model takes 46.4 times longer than it would without interruption. 

Our quantized model is 3 kB. As an FP32 model it would have taken 12 kB, which would not have fit in the available memory. We were able to run inference at 4 Hz, compared to an estimated 1.4 Hz for the same model unquantized. 

In a practical sense, we noticed a decreased level of stability when increasing model size. Occasionally the drone would reboot randomly while flying; possible causes for this behavior are RAM overflow and task scheduling problems in the RTOS. We also observed variation in performance loss after quantization: some of our trained models would just keep rotating after quantization, while our final model demonstrates robust source seeking behavior. This degree of uncertainty could possibly be avoided using quantization-aware training. 

Finally, flying in a dark room without a position estimate can be challenging. The PID controllers rely heavily on information provided by the Flow deck, and this information is limited when little light is present and the floor has few features. To fix this, we added textured mats on the ground, adding features and enabling stable flight in a dark room.

Flight tests

To validate our simulation results, we created a cluttered environment with a light source. We randomly initialized the drone in the room and observed a success rate of 80% over a total of 105 flight tests. By varying the environment and the initial drone position, we learned more about the inner workings of our algorithm.

Experiment testing environment

We learned that the algorithm performs better with more obstacles, and that a closer initial position improves performance. Generally, source seeking far away from the source is hard: almost no variation in source strength exists between measurements, and the drone observes mostly noise. 

Outlook

With our methodology, we were able to perform fully autonomous source seeking using deep reinforcement learning on a Cortex-M4 MCU. We hope our methodology will be applicable to other TinyML applications where resources are heavily constrained. Developing custom accelerators for a specific workload is time-consuming and expensive, while general-purpose MCUs are cheap and widely available. With our methodology, we unlock new applications for learning algorithms on heavily constrained platforms.

Direct path to source in empty room, blue = take-off

Links

Video: https://www.youtube.com/watch?v=wmVKbX7MOnU

Paper: https://arxiv.org/abs/1909.11236

Github: https://github.com/harvard-edge/source-seeking

Feel free to contact us should you have any questions or ideas: bduisterhof@g.harvard.edu

Hi everyone, here at the Integrated Systems Laboratory of ETH Zürich, we have been working on an exciting project: PULP-DroNet.
Our vision is to enable artificial intelligence-based autonomous navigation on small size flying robots, like the Crazyflie 2.0 (CF) nano-drone.
In this post, we will give you the basic ideas of how we make the CF able to fly fully autonomously, relying only on onboard computational resources: that means no human operator, no ad-hoc external signals, and no remote base-station!
Our prototype can follow a street or a corridor and at the same time avoid collisions with unexpected obstacles even when flying at high speed.


PULP-DroNet is based on the Parallel Ultra Low Power (PULP) project envisioned by the ETH Zürich and the University of Bologna.
In the PULP project, we aim to develop an open-source, scalable hardware and software platform to enable energy-efficient complex computation where the available power envelope is only a few milliwatts, such as advanced Internet-of-Things nodes, smart sensors — and of course, nano-UAVs. In particular, we address the computational demands of applications that require flexible and advanced processing of data streams generated by sensors such as cameras, which is beyond the capabilities of typical microcontrollers. The PULP project has its roots in the RISC-V instruction set architecture, an innovative academic and research open-source architecture alternative to ARM.

The first step to make the CF autonomous was the design and development of what we called the PULP-Shield, a small form factor pluggable deck for the CF, featuring two off-chip memories (Flash and RAM), a QVGA ultra-low-power grey-scale camera and the PULP GAP8 System-on-Chip (SoC). The GAP8, produced by GreenWaves Technologies, is the first commercially available embodiment of our PULP vision. This SoC features nine general purpose RISC-V-based cores organised in an on-chip microcontroller (1 core, called Fabric Ctrl) and a cluster accelerator of 8 cores, with 64 kB of local L1 memory accessible at high bandwidth from the cluster cores. The SoC also hosts 512kB of L2 memory.

Then, we selected as the algorithmic heart of our autonomous navigation engine an advanced artificial intelligence algorithm based on DroNet, a Convolutional Neural Network (CNN) originally developed by our friends at the Robotics and Perception Group (RPG) of the University of Zürich.
To enable the execution of DroNet on our resource-constrained system, we developed a complete methodology to map computationally-intense deep neural networks on the PULP-Shield and the GAP8 SoC.
The network outputs two pieces of information, a probability of collision and a steering angle, which are translated into the dynamic information used to control the drone: forward velocity and angular yaw rate, respectively. The layout of the network is the following:

Therefore, our mission was to deploy all the required computation onboard our PULP-Shield mounted on the CF, enabling fully autonomous navigation. To put the problem into perspective, in the original work by the RPG, the DroNet CNN enabled autonomous navigation of larger drones (e.g., the Parrot Bebop). In the original use case, computational power and memory were not a problem thanks to the streaming of images to a remote base-station, typically a laptop consuming 30-100 W or more. So our mission required running a similar workload within 1/1000 of the original power.
To make this work, we combined fixed-point arithmetic (instead of “traditional” floating point), minimal modifications to the original topology, and optimised memory and computation usage. This allowed us to squeeze DroNet into the ultra-small power budget available onboard. Our most energy-efficient configuration delivers 6 frames per second (fps) within only 64 mW (including all the electronics on the PULP-Shield), and when we push the PULP platform to its limit, we achieve an impressive 18 fps within just 3.5% of the total CF power envelope — the original DroNet ran at 20 fps on an Intel i7.
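For readers curious how the two network outputs become flight commands, the sketch below follows the spirit of the original DroNet control law: throttle down as the collision probability rises and low-pass filter the steering output into a yaw rate. The gains and filter constant are illustrative assumptions, not the values used in PULP-DroNet.

```python
def dronet_to_commands(steering, p_collision,
                       prev_v=0.0, prev_yaw_rate=0.0,
                       v_max=0.3, alpha=0.7):
    """Map (steering angle, collision probability) to (forward velocity,
    yaw rate) with a simple low-pass filter; all constants are illustrative."""
    v = (1.0 - alpha) * prev_v + alpha * (1.0 - p_collision) * v_max
    yaw_rate = (1.0 - alpha) * prev_yaw_rate + alpha * steering
    return v, yaw_rate
```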

Do you want to check for yourself? All our hardware and software designs, including our code, schematics, datasets, and trained networks, have been released and made available for everyone as open source and open hardware on GitHub. We look forward to other enthusiasts' contributions, both in hardware enhancements and in software (e.g., smarter networks), to create a great community of people interested in working together on smart nano-drones.
Last but not least, the piece of information you have all been waiting for. Yes, soon Bitcraze will allow you to enjoy our PULP-Shield; actually, even better, you will play with its evolution! Stay tuned as more information about the “code-name” AI-deck will be released in upcoming posts :-).

If you want to know more about our work:

Questions? Drop us an email (dpalossi at iis.ee.ethz.ch and fconti at iis.ee.ethz.ch)

This week we have a guest blog post from Javier Burgués. Enjoy!

I would like to introduce you to a rather unknown application of the Crazyflie 2.0 (CF2): chemical sensing. Due to its small form factor, the CF2 is an ideal platform for carrying out gas sensing missions in hazardous environments inaccessible to terrestrial robots and bigger drones, for example searching for victims and hazardous gas leaks inside pockets that form within the wreckage of collapsed buildings in the aftermath of an earthquake or explosion.

To evaluate the suitability of the CF2 for these tasks, I developed a custom deck, named the MOX deck, to interface two metal oxide semiconductor (MOX) gas sensors to the CF2. Then, I performed experiments in a large indoor environment (160 m2) with a gas source placed in challenging positions for the drone, for example hidden in the ceiling of the room or inside a power outlet box. From the measurements collected in motion (i.e. without stopping) along a predefined 3D sweeping path that takes around 3 minutes, the CF2 builds a map of the gas distribution and identifies the most likely source location with high accuracy.

1. MOX deck

The MOX deck (Fig. 1a) contains two sockets for 4-pin Taguchi-type (TGS) gas sensors, a temperature/humidity sensor (SHT25, Sensirion AG), a dual-channel digital potentiometer (AD5242BRUZ1M, Analog Devices), and two MOSFET p-type transistors (NX2301P, NEXPERIA). I used TGS 8100 sensors (Figaro Engineering) due to their compatibility with 3.0 V logic, power consumption of only 15 mW (the lowest in the market as of June 2016) and miniaturized form factor (MEMS). Since the sensor heater uses 1.8 V, two transistors (one per sensor) reduce the applied power by means of pulse width modulation (PWM). The MOX read-out circuit (Fig. 1b) is a voltage divider connected to the μC’s analog-to-digital converter (ADC). The voltage divider is powered at 3.0 V and the load resistor (RL) can be set dynamically by the potentiometer (from 60 Ω to 1 MΩ in steps of 3.9 kΩ). Dynamic configuration of the load resistor is important in MOX gas sensors due to the large dynamic range of the sensor resistance (several orders of magnitude) when exposed to different gas concentrations. The sensors were calibrated (by exposing them to several known concentrations) to convert the raw output into parts-per-million (ppm) concentration units.
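As a rough illustration of this read-out chain, the sketch below converts the divider output voltage into a sensor resistance and then into a concentration estimate. The divider equation follows directly from the circuit described above; the power-law calibration constants are hypothetical placeholders for the values obtained during calibration.

```python
def mox_resistance(v_out, r_load, v_supply=3.0):
    # Voltage divider: Vout = Vsupply * RL / (RL + Rs)
    #              =>  Rs   = RL * (Vsupply - Vout) / Vout
    return r_load * (v_supply - v_out) / v_out

def mox_ppm(r_s, r_0, a=1.0, b=-1.5):
    # Power-law calibration curve ppm = a * (Rs/R0)^b; a, b and the clean-air
    # baseline resistance r_0 are sensor-specific and purely illustrative here.
    return a * (r_s / r_0) ** b
```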

The initialization task of the deck driver configures the PWM, initializes the SHT25 sensor, sets the wiper position of both channels of the potentiometer and adds the MOX readout registers to the list of variables that are continuously logged and transmitted to the base station. The main task of the deck driver reads the MOX sensor output voltage and the temperature/humidity values from the SHT25 and sends them to the ground station at 10 Hz.

2. Experimental Arena, External Localization System and Gas Source

Experiments were performed in a large robotics laboratory (160 m2 × 2.7 m height) at Örebro University (Sweden). The laboratory is divided into three connected areas (R1–R3) of 132 m2 and a contiguous room (R4) of 28 m2 (Fig. 2). To obtain the 3D position of the drone, I used the Loco positioning system (LPS) from Bitcraze, based on ultra-wide band (UWB) radio transmitters. Six LPS anchors were positioned at known locations in the experimental arena and one LPS tag was fixed to the drone. The six anchors were placed in the central area of the laboratory, arranged in two inverted triangles (below and above the flight area).

A gas leak was emulated by placing a small beaker filled with 200 mL of ethanol 96% in different locations of the arena (Fig. 4). Ethanol was used because it is non-toxic and easily detectable by MOX sensors. Two experiments were carried out to check the viability of the proposed system for gas source localization and mapping in complex environments. In the first experiment, the gas source was placed on top of a table (height = 1 m) in the small room (R4). In the second experiment, the source was placed inside the suspended ceiling (height = 2.7 m) near the entrance to the lab (R1). Since the piping system of the lab runs through the suspended ceiling, the gas source could represent a leak in one of the pipes. A 12 V DC fan (Model: AD0612HB-A70GL, ADDA Corp., Taiwan) was placed behind the beaker to facilitate dispersion of the chemicals in the environment, creating a plume. The experiments started five minutes after setting up the source and turning on the DC fan.

3. Navigation strategy

The drone was sent to fly along a predefined sweeping path consisting of two 2D rectangular sweepings at different heights (0.9 m and 1.8 m), collecting measurements in motion (Fig. 5). These two heights divide the vertical space of the lab in three parts of equal size. Flying first at a lower altitude minimizes the impact of the propellers’ downwash in the gas distribution. For safety reasons, the trajectory was designed to ensure enough clearance around obstacles and walls, and people working inside the laboratory were told to remain in their seats during the experiments. The ground station communicates the flight path to the drone as a sequence of (x,y,z) waypoints, with a target flight speed of 1.0 m/s. The CF2 reports the measured concentration and its location to the ground station every 100 ms.

At the end of the exploration, the ground station uses all the received information to compute a 3D map of the instantaneous concentration and the ’bouts’. A ’bout’ is declared when the derivative of the sensor response exceeds a certain threshold. Bouts are produced by contact with individual gas patches and some authors use them instead of the instantaneous response (which is more affected by the slow response time of chemical sensors). For gas source localization, we compare two approaches: using the cell with maximum value in the concentration map or using the cell with maximum bout frequency. The bout frequency (bouts/min) is computed as the bout count in a 5 second sliding window multiplied by 12 (to convert it to bouts/min).
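A minimal sketch of that bout computation is shown below, assuming a concentration signal sampled at 10 Hz; the derivative threshold and the windowing details are illustrative assumptions.

```python
import numpy as np

def bout_frequency(concentration, dt=0.1, threshold=0.5, window_s=5.0):
    """Count upward threshold crossings of the response derivative ('bouts')
    in a sliding window and convert the count to bouts/min."""
    derivative = np.gradient(np.asarray(concentration, dtype=float), dt)
    # A bout starts where the derivative crosses the threshold upwards.
    bouts = (derivative[1:] > threshold) & (derivative[:-1] <= threshold)
    window = int(window_s / dt)
    counts = np.convolve(bouts.astype(int), np.ones(window, dtype=int), mode="same")
    return counts * (60.0 / window_s)   # 5-second counts -> bouts per minute
```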

4. Results

In the first experiment, the drone took off near the entrance of the lab (R1), 17 meters downwind of a gas source located at the other end of the laboratory (R4). From the gas distribution map (Fig. 6a) it is evident that the gas source must be in R4, because the maximum concentration (35 ppm) was found there while concentrations below 5 ppm were measured in the rest of the lab. The gas plume can be outlined from the location of odor hits. The highest odor hit density (25 hits/min) was also found in R4. The cells corresponding to the maximum concentration (green star) and maximum odor hit frequency (blue triangle) were found 0.94 m and 1.16 m from the true source location, respectively.

In the second experiment, the gas source was located just above the starting point of the exploration, hidden in the suspended ceiling (Fig. 7). The maximum concentration in the test room was measured when the drone flew at h = 1.8 m, highlighting the importance of sampling in 3D for localization and mapping of elevated gas sources. However, since the source is presumably not directly exposed to the environment, concentrations below 3 ppm were found in most locations of the room, which complicates the gas source localization task. The concentration and odor hit maps suggest that the gas source is located at the division between R1 and R2, which represents localization errors of 4.0 and 3.31 m, respectively.

5. Conclusions

These results suggest that the CF2 can be used for gas source localization and mapping in large indoor environments. In contrast to previous works, in which long measurement times were taken at predefined or adaptively chosen sampling locations, a rough approximation of such maps can be obtained in a very short time with concentration measurements acquired in motion. The obtained gas distribution maps are coherent with the true source location and wind direction, and not only enable detection of the source with relatively small localization errors but also provide a rich visual interpretation of the gas distribution.

If you are interested in more details about this work, take a look at the journal paper or drop me an email at <jburgues8 at gmail dot com> or leave a comment on the blog!

Here at the USC ACT Lab we conduct research on coordinated multi-robot systems. One topic we are particularly interested in is coordinating teams consisting of multiple types of robots with different physical capabilities.

A team of three quadrotors all controlled with Crazyflie 2.0 and a Clearpath Turtlebot

Applications such as search and rescue or mapping could benefit from such heterogeneous teams because they allow for more flexibility in the choice of sensors and locomotive capability. A core challenge for any multi-robot application is motion planning – all of the robots in the team need to make it to their target locations efficiently while avoiding collisions with each other and the environment. We have recently demonstrated a scalable method for trajectory planning for heterogeneous robot teams utilizing the Crazyflie 2.0 as the flight controller for our aerial robots.

A Crazier Swarm

To test our trajectory planning research we wanted to assemble a team with both ground robots and multiple sizes of aerial robots. We additionally wanted to leverage our existing Crazyswarm software and experience with Crazyflie firmware to avoid some of the challenges of working with new hardware. Luckily for us the BigQuad deck offered a straightforward way to super-size the Crazyflie 2.0 and gave us the utility we needed.

With the BigQuad deck and off-the-shelf components from the hobbyist drone community we built three super-sized Crazyflie 2.0s. Two of them weigh 120g (incl. battery) with a motor-to-motor size of 130mm, and the other is 490g (incl. battery and camera) with a size of 210mm.

120g, 130mm

490g, 210mm

We wanted to pick components that would be resistant to crashing while still offering high performance. To meet these requirements we ended up picking components inspired by the FPV drone racing community, where both reliable performance and high-impact crashes are expected. Full parts lists for both platforms are available here.

Integrating the new platforms into the Crazyswarm was fairly easy. We first had to re-tune the PID controller gains to account for the different dynamics of the larger platforms. This didn’t take too long, but we did crash a few times — luckily the components we chose were able to handle the crashes without any breakages. After tuning the platforms behave very well and are just as easy to work with as the original Crazyflie 2.0. We additionally updated the Crazyswarm package to be able to differentiate between BigQuad and regular Crazyflie types and those updates are now available for use by anyone!

In future work, we are excited to do hands-on experiments with a prototype of the CF-RZR. This new board seems like a promising upgrade to the CF 2.0 + BQD combination as it has upgraded components, an external antenna, and a standardized form factor. Hopefully we will see the CF-RZR as part of the Crazyswarm in the near future!

Mark Debord
Master’s Student
Automatic Coordination of Teams Laboratory
University of Southern California
Wolfgang Hönig
PhD Student
Automatic Coordination of Teams Laboratory
University of Southern California

 

Modular robots can adapt and offer solutions in emergency scenarios, but self-assembling on the ground is a slow process. What about self-assembling in midair?

In recent work at the GRASP Laboratory at the University of Pennsylvania, we introduce ModQuad, a novel flying modular robotic structure that is able to self-assemble in midair and cooperatively fly. This work is directed by Professor Vijay Kumar and Professor Mark Yim. We are focused on developing bio-inspired techniques for flying modular robots. Our main interest is to develop algorithms and controllers for self-assembling modular robots that can dock in midair.

In biological systems such as ant or bee colonies, collective effort can solve problems that individuals cannot tackle efficiently, such as exploring, transporting food and building massive structures. Some ant species are able to build living bridges by clinging to one another to span the gaps in the foraging trail. This capability allows them to rapidly connect disjoint areas in order to transport food and resources to their colonies. Recent works in robotics have focused on using swarm behaviors to solve collective tasks such as construction and transportation. Docking modules in midair offers an advantage in terms of speed during the assembly process. For example, in a building on fire, the individual modules can rapidly navigate from a base-station to the target through cluttered environments. Then, they can assemble bridges or platforms near windows in the building to offer alternative exits to save lives.

ModQuad Design

The ModQuad is propelled by a quadrotor platform; we use the Crazyflie 2.0. The vehicle was chosen because of its agility and scalability: its low cost and payload give us an acceptable scenario for a large number of modules.

Very lightweight carbon fiber rods connected by eight 3D-printed ABS connectors form the frame. The frame weight is important due to the tight payload constraints of the quadrotor; our current frame design weighs 7 g, about half the payload capability. The module geometry has a cuboid shape, as seen in the figure below. To enable rigid attachments between modules, we include a docking mechanism in the modular frame. In our case, we used Neodymium Iron Boron (NdFeB) magnets as passive actuators.

Self-Assembling and Cooperative Flying

ModQuad is the first modular system that is able to self-assemble in midair. We developed a docking method that accurately aligns and attaches pairs of flying structures in midair. We also designed a stable decentralized modular attitude controller to allow a set of attached modules to cooperatively fly. Our controller maximizes the use of the rotor forces by generating the maximum possible moment.

In order to allow the flying structure to navigate in a three dimensional environment, we control thrust and attitude to generate vertical and horizontal translations, and rotation in the yaw angle. In our approach, we control the attitude of the structure in a decentralized manner. A modular attitude controller allows multiple connected robots to stably and cooperatively fly. The gain constants in our controller do not need to be re-tuned as the configurations change.

In order to dock pairs of flying structures in midair, we propose to have two flying structures: the first one hovers and waits, while the second one performs a docking action. Both the hovering and the docking actions are based on a velocity controller. Using a velocity controller, we are able to dock multiple robots in midair. We highlight that docking robots in midair is one of the fastest ways to assemble structures, because the building units can rapidly move and dock in three-dimensional environments. The docking system and control have been validated through multiple experiments.

Our system takes advantage of robot swarms because a large number of robots can construct massive structures.

This work was developed by:

David Saldaña, Bruno Gabrich, Guanrui Li, Mark Yim, and Vijay Kumar.

Additional resources at:

http://kumarrobotics.org/

http://www.modlabupenn.org/

http://davidsaldana.co/

Grasping objects is a hard task that usually requires adding a dedicated mechanism (e.g. an arm or gripper) to the robot. Instead of adding extra components, have you thought about embedding the grasping capability into the robot itself? Have you also thought about whether we could do it while flying?

In the GRASP Laboratory at the University of Pennsylvania, we are concerned with controlling robots to perform useful tasks. In this work, we present the Flying Gripper! It is a novel flying modular platform capable of grasping and transporting objects with the help of multiple quadrotors (Crazyflies) working together. This research project is coordinated by Professor Mark Yim and Professor Vijay Kumar, and led by Bruno Gabrich (PhD candidate) and David Saldaña (postdoctoral researcher).

 

 

Inspiration in Nature

In nature, cooperative work allows small insects to manipulate and transport objects often heavier than the individuals themselves. Unlike collaboration on the ground, collaboration in the air is more complex, especially considering flight stability. With this inspiration, we developed a platform composed of four cooperative identical modules, where each is based on a quadrotor (Crazyflie) within a cuboid frame with a docking mechanism. Pairs of modules are able to fly independently and physically connect by matching their vertical edges, forming a hinge. Four one-degree-of-freedom (DOF) connections result in a one-DOF four-bar linkage that can be used to grasp external objects. With this platform we are able to change the shape of the flying vehicle during flight and use its own body to constrain and grasp an object.

Flying Gripper Design and Motion

In the proposed modular platform, we use the Crazyflie 2.0. Its battery lasts around seven minutes, though in our case flight time is reduced due to the extra weight. The motor mounting was modified from the standard design: we tilted the rotors 15 degrees. This was necessary as more yaw authority was required to enable grasping as a four-bar; however, adding this tilt reduces the lifting thrust by 3%. Axially aligned cylindrical Neodymium Iron Boron (NdFeB) magnets, with 1/8″ diameter and 1/4″ thickness, are mounted on each corner, enabling edge-to-edge connections. The cylindrical magnets have a mass of 0.377 g and are able to generate a holding force of 0.4 kg in a tangential connection between two of the same magnets. This forms a strong bond when two modules connect in flight. Note that the connections are not rigid – each forms a one-DOF hinge.

The four attached modules result in a one-DOF four-bar linkage, in addition to the combined position and attitude of the conglomerate. The four-bar internal angles are controlled by controlling the yaw attitude of each module: for example, two modules rotate clockwise and the other two rotate counter-clockwise in a coordinated manner.
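As a toy illustration of that coordination, the sketch below spreads a desired change of the four-bar internal angle over the four modules' yaw setpoints; the module ordering and the even split are assumptions, not the actual ModQuad controller.

```python
def four_bar_yaw_setpoints(structure_yaw, opening_angle):
    """Yaw setpoints (radians) for modules 0..3 so that alternating modules
    rotate in opposite directions, opening or closing the four-bar by
    `opening_angle` while keeping the overall heading at `structure_yaw`."""
    delta = opening_angle / 2.0
    return [structure_yaw + delta, structure_yaw - delta,
            structure_yaw + delta, structure_yaw - delta]
```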

 

 

Grasping Objects

Collaborative manipulation in the air is an alternative that reduces the complexity of adding manipulator arms to flying vehicles. In the following video you can see the Flying Gripper changing its shape in midair to accomplish the complex task of grasping a desired coffee cup. Would you like some coffee delivery?

 

 

 

This work was developed by:

Bruno Gabrich, David Saldaña, Vijay Kumar and Mark Yim

Additional resources at:

http://kumarrobotics.org/

http://www.modlabupenn.org/

This week’s Monday post is a guest post written by members of the Computer Science and Artificial Intelligence Lab at MIT.

One of the focuses of the Distributed Robotics Lab, which is run by Daniela Rus and is part of the Computer Science and Artificial Intelligence Lab at MIT, is to study the coordination of multiple robots. Our lab has tested a diverse array of robots, from jumping cubes to Kuka youBots to quadcopters. In one of our recent projects, presented at ICRA 2017, Multi-robot Path Planning for a Swarm of Robots that Can Both Fly and Drive, we tested collision-free path planning for flying-and-driving robots in a small town.

Robots that can both fly and drive – in particular wheeled drones – are actually somewhat of a rarity in robotics research. Although there are several interesting examples in the literature, most of them involve creative ways of repurposing the wings or propellers of a flying robot to get it to move on the ground. Since we wanted to test multi-robot algorithms, we needed a robot that would be robust, safe, and easy to control – not necessarily advanced or clever. We decided to put an independent driving mechanism on the bottom of a quadcopter, and it turns out that the Crazyflie 2.0 was the perfect platform for us. The Crazyflie is easily obtainable, safe, and (as we can personally attest) very robust. Moreover, since it is open-source and fully programmable, we were able to easily modify the Crazyflie to fit our needs. Our final design with the wheel deck is shown below.

A photo of the Crazyflie 2.0 with the wheel deck.

A model of the Crazyflie 2.0 with the wheel deck from the bottom

The wheel deck consists of a PCB with a motor driver; two small motors mounted in a carbon fiber tube epoxied onto the PCB; and a passive ball caster in the back. We were able to interface our PCB with the pins on the Crazyflie so that we could use the Crazyflie to control the motors (the code is available at https://github.com/braraki/crazyflie-firmware). We added new parameters to the Crazyflie to control wheel speed, which, in retrospect, was not a good decision, since we found that it was difficult to update the parameters at a high enough rate to control the wheels well. We should have used the Crazyflie RealTime Protocol (CRTP) to send custom data packages to the Crazyflie, but that will have to be a project for another day.
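For context, setting such firmware parameters from the ground station with cflib looks roughly like the sketch below; the parameter group and names ('wheels.left', 'wheels.right') are hypothetical stand-ins for whatever the modified firmware exposes, and the radio URI is only an example.

```python
import cflib.crtp
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

cflib.crtp.init_drivers()
with SyncCrazyflie('radio://0/80/2M') as scf:
    # Hypothetical parameter names for the left/right wheel speeds; each
    # set_value() call goes over the same parameter framework mentioned
    # above, which is why the update rate is limited.
    scf.cf.param.set_value('wheels.left', 0.5)
    scf.cf.param.set_value('wheels.right', 0.5)
```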
The table below shows the mass balance of our miniature ‘flying car.’ The wheels added 8.3g and the motion capture markers (we used a Vicon system to track the quads) added 4.2g. So overall the Crazyflie was able to carry 12.5g, or ~44% of its body weight, and still fly pretty well.


Next we measured the power consumption of the Crazyflie and the ‘Flying Car.’ As you can see in the graph below, the additional mass of the wheels reduced total flight time from 5.7 minutes to 5.0 minutes, a 42-second or 12.3% reduction.

Power consumption of the Crazyflie vs. the ‘Flying Car’

 

The table below shows more comparisons between flying without wheels, flying with wheels, and driving. The main takeaways are that driving is much more efficient than flying (in the case of quadcopter flight) and that adding wheels to the Crazyflie does not actually reduce flying performance very much (and in fact increases efficiency when measured using the ‘cost of transport’ metric, which factors in mass). These facts were very important for our planning algorithms, since the tradeoff between energy and speed is the main factor in deciding when to fly (fast but energy-inefficient) versus drive (slow but energy-efficient).

Controlling 8 Crazyflies at once was a challenge. The great work by the USC ACT Lab (J. A. Preiss, W. Hönig, G. S. Sukhatme, and N. Ayanian. “Crazyswarm: A Large Nano-Quadcopter Swarm,” ICRA 2017. https://github.com/USC-ACTLab/crazyswarm) has made our minor effort in this field obsolete, but I will describe our work briefly. We used the crazyflie_ros package, maintained by Wolfgang Hönig from the USC ACT Lab, to interface with the Crazyflies. Unfortunately, we found that a single Crazyradio could communicate with only 2 Crazyflies at a time using our methods, so we had to use 4 Crazyradios, and we had to make a ROS node that switched between the 4 radios rapidly to send commands. It was not ideal at all – moreover, we had to design 8 unique Vicon marker configurations, which was a challenge given the small size of the Crazyflies. In the end, we got our system to work, but the new Crazyswarm framework from the ACT Lab should enable much more impressive demos in the future (as has already been done in their ICRA paper and by the Robust Adaptive Systems Lab at Carnegie Mellon, which they described in their blog post here).

We used two controllers, an air and a ground controller. The ground controller was a simple pure pursuit controller that followed waypoints on ground paths. The differentially steered driving mechanism made ground control blissfully simple. The main challenge we faced was maximizing the rate at which we could send commands to the wheels via the parameter framework. For aerial control, we used simple PID controllers to make the quads follow waypoints. Although the wheel deck shifted the center of mass of the Crazyflie, giving it a tendency to slowly spin in midair, overall the system worked well given its simplicity.
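To give a flavour of the ground controller, here is a minimal pure-pursuit step for a differentially steered vehicle; the speed, wheel base and the mapping to left/right wheel commands are illustrative assumptions rather than the exact values from our implementation.

```python
import math

def pure_pursuit_step(pose, lookahead_point, v=0.2, wheel_base=0.05):
    """One pure-pursuit update: pose = (x, y, heading) in the world frame,
    lookahead_point = (x, y) on the path ahead; returns left/right wheel speeds."""
    x, y, theta = pose
    dx, dy = lookahead_point[0] - x, lookahead_point[1] - y
    # Express the lookahead point in the vehicle frame.
    local_x = math.cos(-theta) * dx - math.sin(-theta) * dy
    local_y = math.sin(-theta) * dx + math.cos(-theta) * dy
    dist = math.hypot(local_x, local_y)
    curvature = 2.0 * local_y / (dist ** 2)     # standard pure-pursuit law
    omega = v * curvature                       # turn rate for this arc
    left = v - omega * wheel_base / 2.0         # differential-drive mapping
    right = v + omega * wheel_base / 2.0
    return left, right
```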

Once we had the design and control of the flying cars figured out, we were able to test our path planning algorithms on them. You can see in the video below that our vehicles were able to faithfully follow the simulation and that they transitioned from flying to driving when necessary.

Link to video

Our work had two goals. One was to show that multi-robot path planning algorithms can be adapted to work for vehicles that can both fly and drive to minimize energy consumption and time. The second goal was to showcase the utility of flying-and-driving vehicles. We were able to achieve these goals in our paper thanks in part to the ease of use and versatility of the Crazyflie 2.0.

This week’s Monday post is a guest post written by members of the Robust Adaptive Systems Lab at Carnegie Mellon University.

As researchers in the Robust Adaptive Systems Lab (Director: Prof. Nathan Michael) at Carnegie Mellon University (http://rasl.ri.cmu.edu), we're interested in developing algorithms for control, planning, and coordination that enable robots to adapt to their environments, and we use the Crazyflie platforms to help us test our theory on real-world systems.

We employ Crazyflie quadrotors to evaluate several research objectives in our lab. One research thrust looks at enabling a platform to learn new controllers as it flies, allowing the quadrotor to automatically switch between controllers to handle specific flight conditions. Another objective looks at adapting flight plans for multiple robots in real time, based on what a user tells the robots to do. We use the Crazyflie platform to evaluate our algorithms because the hardware is robust and the user community has helped make firmware available on which we can base our own systems (J. A. Preiss, W. Hönig, G. S. Sukhatme, and N. Ayanian. “Crazyswarm: A Large Nano-Quadcopter Swarm,” ICRA 2017. https://github.com/USC-ACTLab/crazyswarm). At the same time, the power and memory constraints of the Crazyflie force us to pursue efficient algorithms that can handle real world considerations such as limited battery life and computing restrictions.

We’re bringing these research ideas together to make an adaptive multi-robot system that a person can direct online and operate over long periods of time–longer than the lifespan of a single battery. In this blog post, we wanted to share a bit of information about some of the research we’re working on that uses the Crazyflie platforms. We look at single-robot controllers, multi-robot planning systems, and hardware designs to move us towards a persistent, adaptive swarm of Crazyflies.

Control Approaches

We have looked at implementing advanced control strategies onboard the Crazyflie for improved safety and flight performance. This includes an extremely efficient formulation of explicit Model Predictive Control (MPC) developed specifically for systems like the Crazyflie that have severe computational constraints due to size, weight, and power limitations. MPC optimizes the control commands based on current and predicted motion, allowing us to enforce speed limits and avoid motor saturation. The controller is run in a separate task on the STM32F405 MCU, allowing it to maintain a 100 Hz position control rate without blocking other components, such as the state estimator and attitude controller.

In the image above, the vehicle uses MPC control to rapidly and accurately move between two destinations. [V. R. Desaraju and N. Michael, “Fast Nonlinear Model Predictive Control via Partial Enumeration,” ICRA 2016.]

Multi-robot Systems

Another element of our research emphasizes adapting plans for multi-robot systems online, in order to translate a human operator’s intent into dynamically feasible and safe motion plans. We are using a swarm of Crazyflies for hardware validation of the online motion planning algorithms.

Our experimental testbed consists of a Vicon motion capture system that provides the state information (position and orientation) of the robots. The position coordinates and the heading angle are broadcast to the individual Crazyflies over the radios at a rate of 100 Hz. A centralized planner, running on a separate thread, parses user instructions into dynamically feasible and safe motion plans for each individual robot. The user instructions consist of high-level information such as robot ids, behavior type (go to a target destination, rotate in a circle), and formation shapes (line, polygon, 3D). Once the feasible and safe motion plans are computed, the desired position setpoints are broadcast to the Crazyflies at a rate of 30 Hz.
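For a single vehicle, streaming position setpoints with cflib at roughly that rate can be sketched as below; this is a simplified one-robot illustration with an example radio URI, not the multi-robot broadcast code running on our testbed.

```python
import time
import cflib.crtp
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

def stream_setpoints(uri, waypoints, rate_hz=30):
    """Send (x, y, z, yaw) setpoints to one Crazyflie at about `rate_hz` Hz."""
    cflib.crtp.init_drivers()
    with SyncCrazyflie(uri) as scf:
        for x, y, z, yaw in waypoints:
            scf.cf.commander.send_position_setpoint(x, y, z, yaw)
            time.sleep(1.0 / rate_hz)
        scf.cf.commander.send_stop_setpoint()
```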

These images show a formation of 30 Crazyflies in a 3D shape and five groups of three Crazyflies, flying around a motion capture arena in response to a user’s direction. The multi-robot planner computes these flight plans online in response to the user’s commands, without any prior planning. [Ellen A. Cappo, Arjav Desai and Nathan Michael. “Robust Coordinated Aerial Deployments for Theatrical Applications Given Online User Interaction via Behavior Composition,” DARS 2016.]

Hardware Considerations 

As we move towards operating for long periods of time, we needed to incorporate the Qi inductive charger to enable the robots to recharge. We also want to retain the LEDs, to keep an aesthetic visual element and act as status indicators. There were two concerns with using both the standard LED and Qi decks: they both mount to the bottom of the Crazyflie, and both decks add weight. We created a combined deck based on the schematics from the Bitcraze wiki, and the final deck is shown below and weighs in at 6-7g.

Even with the combined Qi + LED deck, we observed shorter flight times and motor saturation during Crazyflie flights, so we increased the battery capacity and experimented with different motor and propeller combinations. Pictured below is a Crazyflie with 7×20 motors, 55 mm propellers and a 500 mAh battery, carrying a foam mount for the motion capture markers. No frame modifications are required as the motors fit in the mounts and the battery is chosen to fit within the header pins. These changes resulted in an increase in flight time, greater control margin and aggressiveness, and larger payload capacity.

Toward Persistent Operation

The capabilities we mentioned above allow us to work towards long duration operation. We are currently integrating our adaptive control, online planning, and efficient hardware design to enable persistent operation of a swarm of Crazyflies. To achieve this goal, we are developing scheduling algorithms that update flight plans based on battery capacities across a Crazyflie fleet. The flight planner calculates trajectories that allow robots with full batteries to exchange places with robots with expired batteries, landing Crazyflies that need to recharge onto charging stations. In the first image below, 10 robots rotate in a circle formation. The time-lapsed image shows three Crazyflies with full batteries (with blue LEDs) exchange places with three Crazyflies with depleted batteries (with red LEDs). The second photo shows a close-up of our Crazyflie charging station.
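A toy version of that exchange logic is sketched below: robots whose batteries drop below a threshold are paired with fully charged spares on the chargers. The voltage thresholds are illustrative, and the real planner additionally computes collision-free exchange trajectories.

```python
def plan_exchanges(flying, charging, low_v=3.2, full_v=4.1):
    """flying/charging: dicts of robot id -> battery voltage.
    Returns (landing_id, replacement_id) pairs, most depleted robots first."""
    spares = sorted((v, rid) for rid, v in charging.items() if v >= full_v)
    swaps = []
    for rid, v in sorted(flying.items(), key=lambda kv: kv[1]):
        if v <= low_v and spares:
            _, fresh = spares.pop()      # best-charged spare takes over
            swaps.append((rid, fresh))
    return swaps
```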

In closing, we show a few video clips of Crazyflie flight illustrating the research we discuss here. We show a group of fifteen Crazyflies following online-issued commands to split and merge between formations; a formation of 30 Crazyflies; and an example of our exchange planner, substituting robots into a multi-robot formation and returning robots to ground station locations.

Contributors: Ellen Cappo, Arjav Desai, Matt Collins, Derek Mitchell, and Vishnu Desaraju, students and researchers at the Robust Adaptive Systems Lab (Director: Prof. Nathan Michael), Carnegie Mellon University.

This week we got an email from David Gómez, a research scientist at the Multi-Agent Autonomous Systems Lab, Intel Labs. He and his team have done some cool work on trajectory planning in cluttered environments that we think needs to be shared. For their research they used the Crazyflie 2.0, which we think is even cooler :-). Watch the video to see how the Crazyflie 2.0 finds its way through narrow corridors and obstacle-dense environments.
 

 

If you are interested in the full paper, “A Hybrid Method for Online Trajectory Planning of Mobile Robots in Cluttered Environments”, you can find it under Crazyflie 2.0 publications in the research section.