
Many people in the world have now settled into the reality of working from home. We have also taken precautions ourselves by not going to our office as normal and only shipping out packages a few times per week instead of every day (see this blogpost). This also means that we do not have full access to all our equipment and positioning systems in our big 10 x 10 meter flight lab at the office. In this blogpost we will show how we manage to keep on developing and flying, even in the current situation.

Crazyflie flying in a kitchen with the lighthouse deck

In(light)house positioning

We have started to use the Lighthouse positioning system to set up remote home labs at our houses. Thanks to recent additions to the Crazyflie firmware, it has become easy to get the geometry data from the base stations. Now the only items we need for indoor flight are two (or even just one) Lighthouse V1 base stations and a Crazyflie, and that is it! There is no need for an HTC Vive headset or hub, or third-party software like SteamVR, and the setup is finished in 2 minutes! Check out the new documentation here if you want to know more about the new setup of the lighthouse positioning system.

Also, we recently got a very preliminary version of Lighthouse V2 working (see here) and we of course want to keep the momentum going! We will be working on full compatibility from our homes, so stay tuned. For now, see this video of the Crazyflie flying with just a single base station, taken in one of our team members' home labs.

Remote Lecture Hall and Practicals

We were invited by Dario Floreano and Fabrizio Schiano from the EPFL-LIS laboratory to give a lecture for the 'Aerial Robotics' course, part of EPFL's Master's program in Robotics. Due to the virus, we had to cancel our trip to go there physically… but luckily we were able to do the lecture remotely anyway!

Screenshots of the lectures

The lecture consisted of two parts. In the first hour we mostly explained the Crazyflie ecosystem, hardware and sensors. In the second hour we focused on how the stabilization module works, including the controllers and the state estimation. During both sessions we alternated between theory slides and actual hands-on demos. The lighthouse positioning system was set up in a kitchen, so that we were able to show full flights and practicals with the Crazyflie. At the end there was also the push demo with just the Flow deck and Multiranger, which didn't use any external positioning at all.

The lectures can be found below and the documentation has been updated as well with the covered material (see here). Be sure to check out the controller tuning presented in part 2 of the lecture (25:00 – Cascaded PID controller).

Other Home labs

Home lab with Crazyflie

We know that there are currently users that are moving their flight lab from their university or company to their homes to be able to continue their work. We would love to hear about your experience and your home lab! Send us an email with your story to contact@bitcraze.io, drop us a message on forum.bitcraze.io, or mention us in your Twitter, LinkedIn, Facebook or Reddit post. Also, if you want to set up your own home lab and you need any advice or help, please let us know!

We are happy to announce that we have gotten the Crazyflie 2 to fly autonomously using the Lighthouse deck and Lighthouse V2 base stations. This was a much requested feature, and while it is not stable and ready to use yet, it is a great milestone toward Lighthouse V2 support.

There exist two incompatible versions of the Lighthouse positioning system. Version 1 was released with the original HTC Vive VR system. In this system each base station uses two rotating laser beams that sweep the room, one horizontal and one vertical, plus an omnidirectional synchronization flash that allows IR light receivers to locate themselves in the room. One limitation of this version is that no more than two base stations can be used, mainly because beam identification is done using a TDMA scheme: base stations switch on their lasers in dedicated time slots, one after the other, and adding more time slots for more base stations would greatly reduce the update rate of the system.

Lighthouse V2 was released with the HTC Vive Pro headset and is also used by the Valve Index. The big change is that the laser sweeps now carry modulated data and that there is only one rotor with two angled slits instead of the two rotors of V1. The V2 sweep data is described as 'sync on beam' and contains timing information about how long it has been since the synchronization event (i.e. when the rotor crossed 0 degrees). The sweep data also makes it possible to identify the base station that transmitted the sweep. This removes the need for an omnidirectional synchronization pulse and allows more than two base stations to operate at the same time in the same space, since their sweeps can now be identified and timed.
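To make the 'sync on beam' idea a bit more concrete, here is a minimal sketch (illustrative only, not the deck firmware, and the tick values are hypothetical) of how the timing carried in a sweep can be turned into a rotor angle:

```python
import math

def sweep_angle(ticks_since_sync: int, ticks_per_revolution: int) -> float:
    """Convert 'sync on beam' timing into a rotor angle in radians.

    ticks_since_sync: time elapsed since the rotor crossed 0 degrees,
                      as carried in the modulated sweep data.
    ticks_per_revolution: duration of one full rotor revolution, in the
                          same timer ticks (hypothetical value here).
    """
    fraction = (ticks_since_sync % ticks_per_revolution) / ticks_per_revolution
    return 2.0 * math.pi * fraction

# Example: a sweep hitting the sensor a quarter of a revolution after sync
angle = sweep_angle(ticks_since_sync=250_000, ticks_per_revolution=1_000_000)
print(math.degrees(angle))  # 90.0
```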

The Lighthouse V2 system is very elegant and scalable. However, actually decoding the signal from the sweeps has taken a lot of time since it is not documented and we needed to find out what the encoding actually was. There have been efforts on the internet to understand how the system works; the most useful one is this GitHub ticket that goes from raw data acquisition to fully unlocking the beam encoding.

I have been working on and off for a long time on an FPGA design for the lighthouse deck to acquire and decode Lighthouse V2. The main blocking point until now was that I had not been able to reliably acquire a useful signal from the system to allow real-time decoding on the Crazyflie. Added to that, there was some inconsistency between what we thought the system was doing and what we could gather from the base stations' debug console. The last piece of the puzzle was to discover that the beam encoding was not Manchester, as we thought, but Bi-phase mark code FM1 (BMC). Once this decoding was used, everything made sense and worked.
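To give an idea of what Bi-phase mark coding means in practice, here is a small sketch (in Python, for illustration only; the real decoder runs in the FPGA) that decodes a BMC stream sampled at two samples per bit cell: every cell starts with a transition, a '1' adds an extra transition at mid-cell so the two halves differ, while a '0' has none so the two halves are equal.

```python
def decode_bmc(samples):
    """Decode a Bi-phase mark coded (BMC/FM1) signal.

    'samples' holds the signal level sampled twice per bit cell
    (so len(samples) must be even). A bit is 1 when the two half-cell
    samples differ (mid-cell transition), 0 when they are equal.
    """
    assert len(samples) % 2 == 0
    bits = []
    for i in range(0, len(samples), 2):
        first_half, second_half = samples[i], samples[i + 1]
        bits.append(1 if first_half != second_half else 0)
    return bits

# Levels for the bit sequence 1, 0, 1, 1 (starting from level 0):
# every cell boundary toggles the level, a '1' also toggles at mid-cell.
samples = [1, 0,  1, 1,  0, 1,  0, 1]
print(decode_bmc(samples))  # [1, 0, 1, 1]
```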

Added to that, I started using SpinalHDL instead of raw Verilog to write the FPGA design, which allows for much quicker iteration and much less frustration. It also allowed me to easily make the design multi-clock, which is required to decode the BMC signal: the beam decoder runs at 48 MHz while the rest of the system runs at 24 MHz. This split is required since the FPGA we use in the lighthouse deck is not fast enough to run everything at 48 MHz.

The result is a new FPGA firmware for the lighthouse deck that receives, identifies and decodes Lighthouse V2 sweep signals and sends them over to the Crazyflie. The Crazyflie still has a little pulse packing to do (putting together pulses from a single sweep received on multiple sensors) and can then use the pulse timing information to calculate the azimuth and elevation at which the base station sees the Crazyflie. This information is the same as what we get from Lighthouse V1, so the same algorithm can be used to calculate the Crazyflie position.

I hacked together a proof of concept this last fun Friday and it flies!

If anyone is curious, the code for this demo has been uploaded as an out-of-tree driver and the code for the FPGA part is already in the lighthouse-fpga project. The current Crazyflie code is too incomplete to be usable as-is, but it is a nice starting point if anyone wants to play with Lighthouse V2 and the Crazyflie right away ;-).

As a side note, the Bitcraze team will shrink temporarily as I, Arnaud, will be on parental leave until mid-August. I look forward to this new adventure and I trust that the Lighthouse V2 development and the forum will be in good hands in my absence.

Happy holidays to all our users, community members and friends! We are happy to announce our 2019 Christmas video which we have made in collaboration with Ben Kuper! It is starring 7 Crazyflies, the lighthouse positioning system, our office Christmas tree and a whole lot of holiday spirit, so go ahead and take a look!

Here are some words from Ben about how it was to work on this year's Christmas video at our office:

Coming to Bitcraze’s HQ and working with them has been once more a wonderful experience, technically and humanly! The main goal of this session was to test and implement the new lighthouse tracking system in the tool suite I’m creating, and it was an amazing surprise to witness for real the uncanny stability of the drones when they’re on lighthouse tracking!

Of course, my first reaction was to push the limit and see what can be done with this new power. This is why I created this choreography: to see what can be done in a limited amount of time (a day and a half to create the full choreography; the official video shows the first part only), and to try to go to the limit of the current possibilities. As the team was working on occlusion recovery, we decided to have the drones fly around the tree as a fun test, and it works!

In the new year we will have a followup blog-post going into detail on how exactly we made this video. Until then, happy holidays and have an awesome new year!

This week we are exhibiting at IROS in Macau. We are running our fully autonomous demo based on the Lighthouse positioning technology and charging pads. We have also brought some prototypes to show, for instance the Crazyflie Bolt, the AI deck and the Active marker deck. You can read more about the demo at the IROS 2019 page.

We’d love to hear what you are working on, discuss issues, possibilities or new products. If you are at IROS, drop by our booth (B34) and say hi!

Lighthouse yaw

We have not only prepared for IROS, we have also been working on improving the lighthouse positioning system. Recently we added a (slightly hackish) solution for updating the yaw with data from the Lighthouse deck. This means that it is not necessary to start the Crazyflie facing the positive X direction when using the Lighthouse deck. The Crazyflie will understand its heading and act accordingly.

Two Crazyflies, facing random directions, take off and rotate to yaw=0.

We are also working on integrating the Lighthouse deck in a better way in the Kalman filter. If everything goes according to plan, it will enable a Crazyflie to fly with only one base station, and be more robust when using two base stations.

For the last four years of my PhD at TU Delft and the MAVlab, we were determined to figure out how to make a swarm of tiny quadcopters fly through and explore an unknown indoor environment. This was not easy, as there were many sub-challenges that needed to be solved first. However, we are happy to say that we were able to show a proof of concept in the latest Science Robotics issue! Here you can see the press release from TU Delft for general information about the project.

Since we used the Crazyflie 2.0 to achieve this result, in this blog post we want to highlight the technical side of the research: the achievements and the challenges we had to face. Moreover, we will also explain the updated code, which uses the new features of the Crazyflie firmware as explained in the previous blogpost.

A swarm of drones exploring the environment, avoiding obstacles and each other. (Guus Schoonewille, TU Delft)

Hardware

In the paper, we presented a technique called the Swarm Gradient Bug Algorithm (SGBA), which borrows (as the name suggests) navigational elements from the path planning techniques known as 'bug algorithms' (see this paper for an overview). The basic principle is that SGBA is a state machine with several simple behavior presets such as 'going to the goal', 'wall-following' and 'avoiding other Crazyflies'. Below you can see all the modules that were used. For the main experiments (on the left), the Crazyflie 2.0's were equipped with the Multiranger and the Flow deck (here we used the Flow deck v1). On the right you see the Crazyflies used for the application experiment, where we made a custom Multiranger deck (with four VL53L0x's) and a Hubsan camera module. For both we used the Turnigy nanotech 300 mAh (1S 45-90C) LiPo battery to increase the flight time to 7.5 min.

Hardware used in the experiments. Adapted from the Science Robotics paper.

Experiments

With this, we were able to have 6 Crazyflies explore an empty office floor in the faculty building of Aerospace Engineering. They started out in the middle of the test environment and all flew off in different preferred directions, which they maintained based on their internally estimated yaw angle. With the Multirangers, they detected walls in their path and followed the wall's border until the way was clear again to continue in their preferred direction. Based on their local odometry measurements with the Flow deck, the Crazyflies detected if they were flying in a loop, in order to get out of rooms or similar situations.

A little before halfway through their battery life, they would try to get back to their initial position, which they did by measuring the Received Signal Strength Indicator (RSSI) of the Crazyradio PA home beacon, which was located at their starting position. During wall-following, they measured the gradient of the RSSI to determine in which direction it increased or decreased, and used that to estimate the angle back to the goal.
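As an illustration of the RSSI-gradient idea, here is a simplified sketch (not the code used in the paper; the sample values are made up) of how successive RSSI readings taken along a wall can tell the Crazyflie whether it is moving towards the home beacon:

```python
def rssi_gradient(rssi_samples):
    """Estimate whether the home-beacon signal is getting stronger.

    rssi_samples: RSSI values (dBm) collected at successive points while
    wall-following. Returns the average change per step; a positive value
    means the Crazyflie is moving towards the beacon.
    """
    diffs = [b - a for a, b in zip(rssi_samples, rssi_samples[1:])]
    return sum(diffs) / len(diffs) if diffs else 0.0

samples = [-72.0, -70.5, -69.0, -68.0]  # hypothetical readings along a wall
if rssi_gradient(samples) > 0:
    print("keep this wall-following direction, the beacon is getting closer")
else:
    print("turn around, the beacon is behind us")
```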

While they were navigating, they were also communicating with each other by means of broadcast messages. Based on the RSSI of those messages, they could sense other Crazyflies approaching, which they first of all used for collision avoidance (by letting the low-priority CFs move out of the way of the high-priority CFs). Second of all, during the initial exploration phase they also communicated their preferred direction, so that one of them could change its exploration behavior to not conflict with the other. This way, we tried to maximize the area explored by the Crazyflies.

One of those experiments with 6 Crazyflies can be seen in this video for better understanding:

We also showed an application experiment where 4 Crazyflies with the camera modules searched for 2 dummies in the same environment.

Challenges

In order to get the results presented above, there were many challenges to overcome during the development phase. Here is a list that explains a couple of the elements that needed to work flawlessly:

  • Single CF robustness: We used the Flow deck v1 for the 'deadlock' detection and the basic velocity control, which was challenging in the test environment because of the low lighting conditions and lack of texture. Therefore the Crazyflies flew at 0.5 meters altitude in order to ensure robustness. The wall-following was performed solely with the Multiranger. This was tested in many situations and was able to handle many types of obstacles without any problem. However, the limited FOV of the laser rangers cannot detect all types of obstacles, for instance thin ones or irregular ones such as plants. Luckily these were not encountered in the environment the Crazyflies flew in, but to increase robustness we will need to consider adding a camera to the navigational drive as well.
  • Communication base station: In essence, SGBA only needs one Crazyradio PA base station, since all the behavior runs completely on board. However, in order to show results in the paper, it was necessary for the CFs to communicate information back, like odometry, state and such. As this was a two-way communication (the CFs needed the RSSI to get back), each Crazyflie needed its own base station. Also, they all needed to be on different channels to avoid packet collisions and RSSI accumulation.
  • Communication peer to peer: At development time, P2P didn't exist yet, so we had to implement broadcast communication between the Crazyflies. Since the previous point required them to listen on different channels, the NRF had to be configured to send separate broadcast messages on all those channels as well. In order to time this properly, the home beacon had to sync the Crazyflies by sending out a timer. Even so, the avoidance maneuvers were done very conservatively to try to prevent inter-drone collisions.

Many of the issues, especially the communication challenges, will be solved with the updated code implementation as explained in the next section.

Updated code

The firmware that the Crazyflies used to fly in the experiments shown in the paper can all be found in this public repository. However, the code is based on quite an old version of the Crazyflie firmware, as it was forked almost a year ago. The implementation of the SGBA state machine and the P2P broadcasting was not generic enough to integrate back into the development cycle; therefore that code is only suitable for the old Crazyflie 2.0.

Therefore, we developed two major changes in the latest firmware which will make it much easier for me (and, we hope, for others as well!) to implement SGBA and the P2P communication in a way that should be compatible with any version of the firmware (and hardware) from here on. We implemented SGBA as an app-layer application and also handled all the broadcast messaging directly from this layer. Please check out this GitHub repository with the new app-layer implementation of SGBA.
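To give a rough feel for how SGBA's behavior presets fit together, here is a much simplified Python sketch of the state-machine idea (illustrative only; the actual implementation is the app-layer C code in the repository above, and the thresholds here are made up):

```python
from enum import Enum, auto

class State(Enum):
    GO_TO_GOAL = auto()       # fly the preferred direction
    WALL_FOLLOWING = auto()   # follow an obstacle border
    AVOID_OTHER = auto()      # give way to another Crazyflie
    RETURN_HOME = auto()      # head back towards the home beacon

def sgba_step(front_range_m, other_cf_close, battery_low):
    """One decision step of a simplified SGBA-like state machine.

    front_range_m: Multiranger front distance in meters
    other_cf_close: True if another Crazyflie is sensed nearby via RSSI
    battery_low: True once roughly half the battery has been used
    """
    if other_cf_close:
        return State.AVOID_OTHER        # lower-priority CF moves out of the way
    if battery_low:
        return State.RETURN_HOME        # follow the increasing beacon RSSI back
    if front_range_m < 0.5:             # hypothetical obstacle threshold
        return State.WALL_FOLLOWING
    return State.GO_TO_GOAL

print(sgba_step(front_range_m=2.0, other_cf_close=False, battery_low=False))
```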

Crazyflies are great for indoor applications, thanks to their maneuverability and ubiquitous character. Their small size, however, limits sensor quality and compute capability. In our recent work we present source seeking on board a Crazyflie using deep reinforcement learning. We show a general methodology for deploying deep neural networks on heavily constrained nano drones, using full 8-bit quantization and input scaling.

Our fully autonomous light-seeking Crazyflie

Problem definition

Source seeking can be interesting in a variety of contexts. We focus on light seeking, as seen in nature. Many insects rely on light, either for survival or navigation. Light seeking in aerial robotics has many applications, such as finding the exit out of a dark room. 

Our goal is to fully autonomously find a light source, using only the onboard Micro Controller Unit (MCU) and deep reinforcement learning. 

Crazyflie configuration

Our fully autonomous nano drone uses several standard and custom sensors. We use the Multiranger and Flow deck for position control and obstacle avoidance.

The Multiranger deck with our custom light sensor

We add a custom light sensor, based on the Adafruit TSL2591 sensor. The custom light sensor fits nicely on the Multiranger deck, adding little mass and inertia (total vehicle mass is 33 grams).

Crazyflie 2.1 with Multiranger, Flow deck and light sensor

Algorithm

We use a deep reinforcement learning algorithm with a discrete action space. The neural network policy has laser ranger and light readings (current and past values) as input. The network tells the drone to rotate left, rotate right or fly forward. We train a neural network with 2 hidden layers of 20 nodes each, featuring bias add and ReLU activation functions. The input layer is a vector of length 20 (4 stacked states), which, compared to images, greatly reduces computational effort.

DQN policy architecture
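For reference, a policy of this size can be written down in a few lines. Below is a hedged Keras sketch matching the description above (20 inputs, two hidden layers of 20 units with bias and ReLU, three discrete actions); it is not necessarily the exact model definition used with Air Learning:

```python
import tensorflow as tf

def build_policy():
    """Policy network: 20 inputs (stacked laser-ranger and light readings),
    two hidden layers of 20 units with bias and ReLU, and 3 outputs
    (Q-values for rotate left, rotate right, fly forward)."""
    return tf.keras.Sequential([
        tf.keras.layers.Dense(20, activation="relu", use_bias=True,
                              input_shape=(20,)),
        tf.keras.layers.Dense(20, activation="relu", use_bias=True),
        tf.keras.layers.Dense(3),  # one Q-value per discrete action
    ])

model = build_policy()
model.summary()
```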

Simulation and conversion

We train our agent in simulation using the Air Learning simulation platform, after which we fully quantize the neural network to 8-bit integers.

To maintain accuracy after quantization, we came up with some quantization innovations. Both the input layer and all tensors in the network need to have a pre-defined [min, max] range in float32 in order to be converted to 8-bit integers.

Air Learning pipeline

In the input layer, not all inputs have the same range: a laser ranger can return values from 0 to 5 meters while our light sensor may return a value between 0 and 300 lux. Mapping such different ranges onto a single 8-bit scale would waste resolution, so we scale all inputs to the same range.

Additionally, the tensors in the network need an assigned [min, max] range for quantization. To achieve this, we feed a set of representative inputs into the unquantized model and read out the values of the intermediate layers. With this strategy, we arrive at a 2.9x speed-up compared to float32 inference.
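This range assignment maps naturally onto TensorFlow Lite's post-training full-integer quantization with a representative dataset. The sketch below is only an illustration of that mechanism (the sensor ranges, the `scale_inputs` helper and the random representative data are assumptions, and the actual conversion pipeline used in the paper may differ); it assumes the `model` from the earlier sketch:

```python
import numpy as np
import tensorflow as tf

RANGER_MAX_M = 5.0     # laser rangers: 0..5 m (assumed)
LIGHT_MAX_LUX = 300.0  # light sensor: 0..300 lux (assumed)

def scale_inputs(rangers_m, light_lux):
    """Scale heterogeneous sensor readings to a common 0..1 range
    before they are fed to the (quantized) network."""
    return np.concatenate([np.asarray(rangers_m) / RANGER_MAX_M,
                           np.asarray(light_lux) / LIGHT_MAX_LUX]).astype(np.float32)

def representative_dataset():
    """Yield representative (already scaled) inputs so the converter can
    assign a [min, max] range to every tensor in the network."""
    for _ in range(200):
        yield [np.random.uniform(0.0, 1.0, size=(1, 20)).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()  # fully int8-quantized model for TFMicro
```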

Implementation

We use TensorFlow Lite to deploy our TensorFlow models in C on the Crazyflie. The TFMicro stack, together with the actual model, almost completely fills up the available RAM.

RAM utilization on the Crazyflie 2.1

The total amount of RAM available on the Crazyflie 2.1 is 196kB, of which only 131kB is available for static allocation at compile time. The Bitcraze software stack uses 98kB of RAM, leaving only 33kB available for our purposes. The TFMicro stack takes up 24kB, thus leaving 9kB for the actual model (e.g., weights, bias terms).

We also analyzed CPU usage and noticed a large number of interruptions by the 'stabilizer' thread, i.e., the PID controllers. Because of these interruptions, inference of our model takes 46.4 times longer than it would without them.

Our quantized model is 3kB. If it were an FP32 model, it would have taken 12kB, which would not have fitted in the available memory. We were able to run inference at 4Hz, compared to the estimated 1.4Hz of the same but unquantized model. 

In a practical sense, we noticed a decreased level of stability when increasing model size. Occasionally the drone would reboot randomly while flying. Possible causes for this behavior are RAM overflow and task scheduling problems in the RTOS. In addition, we observed variation in performance loss after quantization. Some of our trained models would just keep rotating after quantization, while our final model demonstrates robust source seeking behavior. This degree of uncertainty can possibly be avoided using quantization-aware training.

Finally, flying in a dark room without a position estimate can be challenging. The PID controllers heavily rely on information provided by the Flow deck. This information is limited when little light is present and the floor contains few features. To fix this, we placed textured mats on the ground, adding features and enabling stable flight in a dark room.

Flight tests

To validate our results from simulation, we created a cluttered environment with a light source. We randomly initialized the drone in the room and observed a success rate of 80% in a total of 105 flight tests. By varying the environment and initial drone position, we learned more about the inner workings of our algorithm.

Experiment testing environment

We learned that the algorithm performs better with more obstacles, and that a closer initial position improves performance. Generally, source seeking far away from the source seems really hard. Almost no variation in source strength exists between different measurements, and the drone observes mostly noise. 

Outlook

With our methodology, we were able to perform fully autonomous source seeking using deep reinforcement learning on a Cortex-M4 MCU. We hope our methodology will be applicable to other TinyML applications where resources are heavily constrained. Developing custom accelerators for a specific workload is time-consuming and expensive, while general-purpose MCUs are cheap and widely available. With our methodology, we unlock new applications for learning algorithms on heavily constrained platforms.

Direct path to source in empty room, blue = take-off

Links

Video: https://www.youtube.com/watch?v=wmVKbX7MOnU

Paper: https://arxiv.org/abs/1909.11236

Github: https://github.com/harvard-edge/source-seeking

Feel free to contact us should you have any questions or ideas: bduisterhof@g.harvard.edu

The High-level Commander has been part of the Crazyflie firmware since the 2018.10 release. In combination with a positioning system, it can fly the Crazyflie along a trajectory that is either defined in the firmware or uploaded through the Python lib. It originates from the Crazyswarm project and we have used it in various demos, since it makes it possible to create trajectories that are very fluid and look really cool. The trajectories are defined as 7th-degree polynomials describing segments that are executed one after the other.

The controller gives full control of position, velocity, acceleration and jerk; the only problem is that it is non-trivial to generate the polynomials. We have wanted to simplify the creation of trajectories for a long time and have finally had some time to play with it. In this blog post we will describe how it can be done with Bezier curves and show some examples.

Each segment in a High-level commander trajectory is defined by four 7th-degree polynomials, one each for x, y, z and yaw. There is also a scaling parameter that tells the controller the time scale to use when executing the segment. Using polynomials of degree 7 makes it possible to design trajectories that are continuous in position, velocity, acceleration and jerk when changing from one segment to the next, which is important for a smooth and controlled flight.

Bezier curves are common in many graphics applications and are probably known to most users. They are parametric curves defined by control points, usually three or four. Bezier curves can also be expressed as polynomials, and this is what we will use in this case. To get a correct mapping to the desired 7th-degree polynomials we need more control points and will use 8 per segment. The basic idea is to define the trajectory as Bezier curves and make sure the control points are placed in such a way that the continuity requirements are satisfied.

Bezier curve with 8 control points
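To see how an 8-control-point Bezier curve maps onto the 7th-degree polynomials the High-level commander expects, here is a small sketch that expands the Bernstein form into ordinary polynomial coefficients. It is illustrative only; the conversion in the example script mentioned below may be organized differently:

```python
from math import comb

def bezier_to_poly(control_points):
    """Convert an 8-control-point (degree 7) Bezier curve into the
    coefficients c_0..c_7 of c_0 + c_1*t + ... + c_7*t^7 for t in [0, 1].
    Works per axis (x, y, z or yaw)."""
    n = len(control_points) - 1  # degree, 7 for 8 control points
    coeffs = []
    for j in range(n + 1):
        c = sum(comb(n, i) * comb(n - i, j - i) * (-1) ** (j - i) * control_points[i]
                for i in range(j + 1))
        coeffs.append(c)
    return coeffs

# Example: one axis of a segment going from 0 to 1 m
points = [0.0, 0.1, 0.25, 0.4, 0.6, 0.75, 0.9, 1.0]
print(bezier_to_poly(points))
```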

On this page from the University of Cambridge, there is a good explanation of continuity across the joins between curves, with formulas for c0, c1 and c2 continuity. We also need c3 continuity, which can be calculated in the same manner.

With these formulas it is possible to set the handles of the Bezier curves to make sure we get a smooth ride.
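As a concrete example of those join conditions, the sketch below derives the first four control points of the next segment from the last four control points of the previous one. It is a hedged illustration that assumes both segments are degree 7 and use the same time scale; with different segment durations the differences have to be scaled by the duration ratio:

```python
def continuity_handles(p4, p5, p6, p7):
    """Given the last four control points of the previous segment (per axis),
    return the first four control points of the next segment so that the
    join is continuous in position, velocity, acceleration and jerk
    (c0..c3), assuming equal segment durations."""
    q0 = p7                               # c0: same position
    q1 = 2 * p7 - p6                      # c1: same velocity
    q2 = 4 * p7 - 4 * p6 + p5             # c2: same acceleration
    q3 = 8 * p7 - 12 * p6 + 6 * p5 - p4   # c3: same jerk
    return q0, q1, q2, q3

print(continuity_handles(0.4, 0.6, 0.8, 1.0))
```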

We have added a Python example that implements the ideas above. You can find it in crazyflie-lib-python/examples/positioning/bezier_trajectory.py. The design is based on Nodes that represent the connection points between Bezier curves (called Segments). A Node has a set of handles that are shared between the Segments that use the Node. If not all handles are set, the implementation will set them to appropriate values; see the comments in the code for more details. The Node API only allows the user to set handles on one of the Segments; the handles for the other Segment are automatically set to generate a continuous trajectory.

The example uses nodes in the corners of a square and contains three parts:

1. No velocity in the nodes. The Crazyflie stops in the nodes, similar to calling go-to in the HL commander.
2. Velocity in the nodes. A fluid motion all the way around.
3. A bit more aggressive settings to get a little action.

Finally, a video showing the full sequence; we use the Lighthouse system for positioning.

Two weeks ago we posted about the demo we did for our new office move-in party. There have been multiple requests to share the script, but unfortunately it is a hacked old script that is not going to be useful at all as an example. So, last week, we made an example that can run a synchronized swarm sequence.

The example has been pushed to the examples folder of the crazyflie-lib-python project. It is called synchronizedSequence.py. Running this example unmodified with 3 Crazyflies in a positioning system will give you this result. (Like the previous demo, this was done in a lighthouse system.)

One of the key design ideas of the example is that it is based on a single control loop that can be synchronized with an outside system: in this example there is a simple sleep of one second between each step of the sequence, but it could, for example, be changed into a MIDI clock receiver to synchronize the sequence with music.
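The skeleton of such a loop is simple. Here is a hedged sketch of the idea (the `send_setpoints` helper is hypothetical and stands in for the actual cflib calls made in synchronizedSequence.py); the step clock can be swapped for any external synchronization source:

```python
import time

def sleep_clock(period_s=1.0):
    """Default step source: one step per second. Replace this generator
    with e.g. a MIDI-clock listener to synchronize the show to music."""
    while True:
        time.sleep(period_s)
        yield

def run_sequence(sequence, send_setpoints, step_clock):
    """Single control loop: on every clock tick, send the next set of
    setpoints (one per Crazyflie) from the sequence."""
    for setpoints, _ in zip(sequence, step_clock):
        send_setpoints(setpoints)  # hypothetical helper wrapping cflib calls

# Example: three Crazyflies stepping through (x, y, z) positions
sequence = [
    [(0.0, 0.0, 0.5), (0.5, 0.0, 0.5), (1.0, 0.0, 0.5)],
    [(0.0, 0.5, 0.5), (0.5, 0.5, 0.5), (1.0, 0.5, 0.5)],
]
run_sequence(sequence, send_setpoints=print, step_clock=sleep_clock(1.0))
```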

The example was developed with the help of Victor, a student we have hired to help out during the summer. He then played around a little bit to make a more impressive 9-Crazyflie sequence:

I uploaded Victor’s sequence as a GitHub gist as it can be good for inspiration. One word of warning though: as is, the sequence contains some vertical movements that are quite aggressive, and the part where Crazyflies fly directly on top of each other is more to be considered a stress test.

We have recently moved to a new, bigger office. With the summer arriving in Sweden, it was time to organize a small move-in after-work party with friends and family. For the occasion we wanted to play around with a small swarm of Crazyflies and the new Lighthouse positioning. Time being a scarce resource, we set up the ICRA 2019 demo in the flight lab so that we would be able to fly during the party. We also started looking at the old swarm show that we ran with the LPS a year ago to see if we could run it with the Lighthouse:

The show was essentially a sequence of setpoints sent from a Python script, controlling 9 Crazyflie 2.1s equipped with a Lighthouse deck on the top and a LED-ring deck on the bottom, synchronized to music. We set up the Crazyflies in the Lighthouse positioning system and converted the script to use the high-level commander GOTO setpoints. We look forward to trying more advanced control approaches like trajectories to make more impressive synchronized flight choreographies in the future, but for now it already looks quite good even with only GOTOs:
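For reference, sending GOTO setpoints like these from the Python lib roughly looks like the sketch below. The URI, heights and timings are made up, and the param write enabling the high-level commander may only be needed on older firmware; treat it as a starting point rather than the exact script used for the show:

```python
import time

import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = 'radio://0/80/2M/E7E7E7E7E7'  # hypothetical address

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    commander = scf.cf.high_level_commander
    scf.cf.param.set_value('commander.enHighLevel', '1')  # needed on older firmware

    commander.takeoff(0.5, 2.0)                 # climb to 0.5 m in 2 s
    time.sleep(2.0)
    commander.go_to(0.5, 0.0, 0.5, 0.0, 3.0)    # x, y, z, yaw, duration
    time.sleep(3.0)
    commander.go_to(0.0, 0.0, 0.5, 0.0, 3.0)
    time.sleep(3.0)
    commander.land(0.0, 2.0)
    time.sleep(2.0)
```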

Three of us were at ICRA 2019 in Montreal last week, where we met a lot of interesting people and a lot of Crazyflie users. Thanks a lot to everyone that dropped by our booth, and for those that missed it, we are planning on being at IROS 2019 later this year, so we might see you there :-).

We have already described our demo in a previous post; now that we have run it we can report on how it went. We are also updating the ICRA 2019 page with the latest source code and information so that anyone interested can reproduce the demo.

In its final state at the conference, the demo contained 8 Crazyflie 2.1s equipped with the Lighthouse deck and the Qi charger deck. There were 8 3D-printed charging pads on the floor with Ikea Qi wireless chargers, and two HTC Vive base stations (V1) on tripods. The full system was contained in a cage built from 50 cm-long aluminium tubes and nets.

The full setup of the booth took us about 4 hours: about 3 hours for the cage, 15 minutes for the demo including calibration of the Lighthouse base station geometry, and the rest to fine-tune things. This is by far our best setup time. We still need to prettify the cage a bit and make it easier to install, but we will most likely re-use this system for upcoming conferences.

In this demo we aimed at keeping a Crazyflie in the air at every moment. To do so, we had a computer connected to all 8 Crazyflies, sending one of them the signal to start flying if no other was in the air flying a trajectory. The flight was completely autonomous, as we explained in our previous blog post. We set up the Crazyflies to fly 2 cycles and then land, which increased the rate of swaps and so increased the 'action', though it also meant that during a swap two Crazyflies were flying. This drained the batteries a bit more than expected and meant that after about an hour all the Crazyflies were below the take-off threshold and we had to wait ~30 seconds between flights. Here is a video of it in action:

The demo was very carefree: we had very few crashes, and we mostly restarted Crazyflies manually to swap batteries and add a bit of power to the swarm. On the last day we decided to spice it up a little bit by adding a chair to the cage; by calibrating the chair position and the flight trajectory, we managed to have the Crazyflies partly fly under it. This worked quite well most of the time and showed that the Lighthouse positioning is repeatable and works fairly well with short occlusions in the path. Though we also found out that even though a single Crazyflie will always fly the same trajectory, two different Crazyflies will not. We think differences in propeller stiffness, and the fact that our Mellinger position controller has not been calibrated for changing yaw, are the main reasons.

If you want to know more about the demo or you want to reproduce it, do not hesitate to visit the ICRA 2019 page, which explains it in more detail and links to the source code of everything, including the 3D-printed parts for the cage and the landing pads.