Category: Software

This is it: the end of my internship. It feels strange to leave this unique office in Malmö. My time spent here was more than just doing an assignment as part of an MSc degree, with the objective of gaining work experience and contributing to a company.

My last day at the Bitcraze office; Arnaud was already on parental leave

My time here gave me so much more. I learned a healthy way of thinking and problem solving that is part of the unique Bitcraze company culture. Besides that, it felt more like working with friends than just working with colleagues. Going to the office was a delight, as there was always humor, openness and honesty. I got to know everyone and enjoyed the French, Swedish and Dutch-American hospitality and culture.

At this point you might think that all I did was drink coffee and make sure the office coffee level never ran low. Luckily that was not the case. I had the privilege of being the first user of a new deck. This deck has been in development for quite some time now and has been touched upon in some earlier blog posts. It is the yet to-be-released AI-Deck! At the moment the early-access AI-Decks are delayed due to the COVID-19 virus. Bitcraze will update you on the blog when they know more.

My task within Bitcraze, in more detail, was to improve the user friendliness of the AI-Deck by providing a framework for future users, and at the same time to explore the user friendliness of the whole ecosystem around the AI-Deck from the perspective of an engineering student with beginner experience in embedded programming (i.e. me).

On the verge of giving the Crazyflie some AI capabilities, while being micromanaged.

So my mission began. A logical first step was to see if the convolutional neural network from the PULP-DroNet project would run on the AI-Deck and fly with the Crazyflie, as the AI-Deck is an evolution of the PULP-Shield developed for that project. More information about this can be found here.

Unfortunately, this was no easy feat, as the PULP-DroNet project uses the pure version of the PULP SDK and an outdated Autotiler, while Greenwaves Technologies, the development partner for the AI-Deck, uses the PULP SDK as a base with added functionality in their own SDK, which makes it diverge from the SDK used in the PULP-DroNet project.

Still, I was able to run the convolutional neural network in a simulated environment and compare it to the original DroNet, which was implemented in Python and flown on a Bebop. It was interesting to find out that the convolutional neural network of PULP-DroNet behaved differently from the original DroNet in Python. There can be many explanations for this, but the main hypothesis is that it is caused by quantizing the PULP-DroNet network from 32-bit floating point to 16-bit fixed point. In addition, the aforementioned network was trained on a larger dataset, which included data captured by a Himax camera.

A single Crazyflie obtained self-awareness and spun up a swarm of Crazyflies to gain world domination

While porting PULP-DroNet to the AI-Deck should be possible, the obstacles found along the way made it too troublesome and out of scope for my internship. So I moved on to the main objective: making a framework/example for the AI-Deck using the SDK provided by Greenwaves Technologies, which is called the GAP8 SDK. It contains a set of tools that should make using the AI-Deck easier, namely the NNTool and the Autotiler. These tools automate the conversion of a neural network that is designed and trained in Python (TensorFlow and Keras) into neural network code that can utilize the GAP8's functionality.
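To give a feel for the starting point of that pipeline, here is a minimal sketch of the kind of Keras model that NNTool takes as input after training. This is in no way the actual PULP-DroNet network; the input size and layers are made-up placeholders:

```python
# Hypothetical toy network: train in Python, save it, then let NNTool
# quantize it and the Autotiler map it to GAP8 code.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(96, 96, 1)),        # grayscale input; size is assumed
    layers.Conv2D(8, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(16, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(2),                        # e.g. steering angle + collision probability
])
model.compile(optimizer='adam', loss='mse')
# model.fit(...) on your dataset, then save the graph for the conversion tools:
model.save('model.h5')
```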

My internship came to an end before I could overcome the last hurdle for a working example. To still bring this example to you, I have committed the documentation/code I wrote and handed over the knowledge I accumulated throughout my internship while working with the AI-Deck and its environment to the capable minds of Kimberly and Tobias.

Along the way I learned a lot about embedded programming and about being a first product user. In addition, embedded programming, and programming in general, comes with a different mindset than the conventional planning- and deadline-driven mindset you get from university. With these valuable lessons in mind, I will be heading back to TU Delft to start my master thesis on either reinforcement learning for aircraft or dense optical flow networks for quadcopters. Thank you Bitcraze for your time, experience and hospitality!

There has been some work done earlier on using the Crazyflie to generate images, for instance the dot-drawing by Paul Kry and light painting. I wanted to see if it is possible to put a brush or pen on a Crazyflie and use it to draw lines on paper. I decided to use a fun Friday to try it out. The idea is simple: mount a pen on the Crazyflie, put a paper on a wall, write a script to draw a figure, fly!

The setup

The first thing I looked into was whether a Crazyflie can fly with a brush or pen mounted on it. I wanted to keep the weight down, and my initial approach was to use a cotton swab (0.6 g) dipped in paint. I found one that was long enough to extend in front of the propellers and mounted it by squeezing it between the battery and the PCB. Flying was no problem with such low extra weight.

For positioning I decided to use the Lighthouse system. It is very accurate, simple to use and the easiest way to get started. I mounted a piece of cardboard in the YZ-plane of our lighthouse coordinate system, where I could attach a drawing paper. The idea of setting up the drawing surface parallel to the YZ-plane was to make the scripting easier. I (of course) used the Crazyflie and the lighthouse system to measure that the cardboard was mounted at the right position.

Finally I wrote a simple Python script that utilized the high level commander to move towards the drawing surface and yaw at the right position to draw a stroke on the paper. It sort of worked, but the cotton swab had to be “refilled” before each stroke, which took a lot of time, and the results were a bit random.
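For the curious, here is a minimal sketch of what such a script can look like using the PositionHlCommander from the Python lib. The URI, the X position of the drawing surface and the stroke coordinates are made up for the example, and the real script was a bit more involved:

```python
# Draw one stroke on a paper mounted in the YZ-plane (illustrative values).
import cflib.crtp
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.positioning.position_hl_commander import PositionHlCommander

URI = 'radio://0/80/2M'
X_PAPER = 0.30             # X coordinate of the drawing surface (assumed)
X_PUSH = X_PAPER - 0.005   # set point a few mm "into" the paper to press the pen

cflib.crtp.init_drivers()
with SyncCrazyflie(URI) as scf:
    with PositionHlCommander(scf, default_height=0.7) as pc:
        pc.go_to(X_PAPER + 0.10, 0.0, 0.7)   # approach the surface
        pc.go_to(X_PUSH, 0.0, 0.7)           # press the pen against the paper
        pc.go_to(X_PUSH, 0.2, 0.9)           # one diagonal stroke
        pc.go_to(X_PAPER + 0.10, 0.2, 0.9)   # retract before landing
```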

I decided to try a pen instead. The upside is that it does not require refills; on the other hand it is much heavier, which makes the Crazyflie a bit sluggish when flying. I mounted the pen on the top side of the PCB, squeezed under the Lighthouse deck, and moved the battery to the underside of the Crazyflie to distribute the weight.

Initial tests – both cotton swab and pen

The script was updated to draw the outline of the Bitcraze logo. I had a couple of variations where I tried to draw the full outline in one long stroke, as separate strokes, going up or going down and some other flavours.

So was it successful? Currently the Crazyflie is not a new Picasso, but the painting skills could maybe be improved with some more work. I think the main problems were:

  1. The pen is too heavy and requires too much force on the paper.
  2. The controller cannot handle the situation in a good way. In essence I set the set point a few millimeters “into” the paper to push the pen against the surface, which seems to confuse the controller since it can never reach the set point.
  3. Flying that close to the drawing surface creates an air flow that disturbs the flight.

Video showing the Crazyflie drawing the logo

The Bitcraze logo (17×17 cm), drawn by the Crazyflie

We are happy to announce that we have gotten the Crazyflie 2 to fly autonomously using the Lighthouse deck and Lighthouse V2 base stations. This was a much-requested feature, and while it is not stable and ready to use yet, it is a great milestone towards Lighthouse V2 support.

There exist two incompatible versions of the Lighthouse positioning system. Version 1 was released with the original HTC Vive VR system. In this system, base stations use two rotating laser beams that sweep the room, one horizontal and one vertical, plus an omnidirectional synchronization flash, to allow IR light receivers to locate themselves in the room. One limitation of this version is that no more than two base stations can be used. This is mainly because beam identification is done using a TDMA scheme: base stations switch on their lasers in dedicated time slots, one after the other, and adding more time slots for more base stations would greatly reduce the update rate of the system.

Lighthouse V2 was released with the HTC Vive Pro headset and is also used by the Valve Index. The big change is that the laser sweeps now carry modulated data, and that there is only one rotor with two angled slits instead of the two rotors of V1. The V2 sweep data is described as ‘sync on beam’ and contains timing information about how long it has been since the synchronization event (i.e. when the rotor crossed 0 degrees). The sweep data also identifies the base station that transmitted the sweep. This removes the need for an omnidirectional synchronization pulse and allows more than two base stations to operate at the same time in the same space, since their sweeps can now be identified and timed.

The Lighthouse V2 system is very elegant and scalable. However, actually decoding the signal from the sweeps has taken a lot of time, since it is not documented and we needed to find out what the encoding actually was. There have been efforts on the internet to understand how the system works; the most useful one is this GitHub ticket that goes from raw data acquisition to fully unlocking the beam encoding.

I have been working on-and-off for a long time on an FPGA design for the Lighthouse deck to acquire and decode Lighthouse V2. The main blocking point until now was that I had not been able to reliably acquire a useful signal from the system to allow real-time decoding on the Crazyflie. Added to that, there was some inconsistency between what we thought the system was doing and what we could gather from the base stations’ debug console. The last piece of the puzzle, found recently, was discovering that the beam encoding was not Manchester, as we thought, but Bi-phase Mark Code (BMC, also known as FM1). Once this decoding was used, everything made sense and worked.
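To illustrate the difference: in BMC the line toggles at every bit boundary, and a ‘1’ adds an extra toggle in the middle of the bit cell, so the bits can be recovered by classifying the time between edges as half or full bit periods. Here is a minimal Python sketch of the idea, not the FPGA implementation, and with made-up timing:

```python
# Decode a Bi-phase mark coded bit stream from a list of edge-to-edge
# intervals: a full-period interval is a '0', two half-period intervals
# make a '1'.
def bmc_decode(intervals, bit_period):
    bits = []
    i = 0
    while i < len(intervals):
        if intervals[i] > 0.75 * bit_period:  # full cell, no mid toggle -> 0
            bits.append(0)
            i += 1
        else:                                 # two half cells -> 1
            bits.append(1)
            i += 2
    return bits

# With a 1.0 us bit period, the edge intervals [1.0, 0.5, 0.5, 1.0]
# decode to [0, 1, 0].
print(bmc_decode([1.0, 0.5, 0.5, 1.0], 1.0))
```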

Added to that, I started using SpinalHDL instead of raw Verilog to write the FPGA design, which allows for much quicker iteration and much less frustration. It also allowed me to easily make the design multi-clock, which is required to decode the BMC signal: the beam decoder runs at 48 MHz while the rest of the system runs at 24 MHz. This split is required because the FPGA we use in the Lighthouse deck is not fast enough to run everything at 48 MHz.

The result is new FPGA firmware for the Lighthouse deck that receives, identifies and decodes Lighthouse V2 sweep signals and sends them over to the Crazyflie. The Crazyflie still has a little pulse packing to do (putting together pulses from a single sweep received on multiple sensors) and can then use the pulse timing information to calculate the azimuth and elevation at which the base station sees the Crazyflie. This is the same information we get from Lighthouse V1, so the same algorithm can be used to calculate the Crazyflie's position.
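As a rough illustration of that last step, here is a sketch of the geometry in Python. The rotor rate and slit tilt are assumed values and the real code also has to apply per-base-station calibration, so treat this as the idea rather than the implementation:

```python
# From 'sync on beam' timing to approximate azimuth/elevation: the rotor
# spins at a known rate, so time-since-zero maps to a rotor angle, and the
# spread between the two tilted-slit sweeps encodes elevation.
import math

ROTOR_RATE = 48.0        # rotations per second (assumed; depends on the base station)
SLIT_TILT = math.pi / 6  # tilt of the two slits (assumed +/- 30 degrees)

def rotor_angle(t_since_zero):
    return 2 * math.pi * t_since_zero * ROTOR_RATE

def azimuth_elevation(t_sweep1, t_sweep2):
    a1, a2 = rotor_angle(t_sweep1), rotor_angle(t_sweep2)
    azimuth = (a1 + a2) / 2                                  # midpoint of the sweeps
    elevation = math.atan((a2 - a1) / (2 * math.tan(SLIT_TILT)))
    return azimuth, elevation
```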

I hacked together a proof of concept this last fun Friday, and it flies!

If anyone is curious, the code for this demo has been uploaded as an out-of-tree driver, and the code for the FPGA part is already in the lighthouse-fpga project. The current Crazyflie code is too incomplete to be usable, but it is a nice starting point if anyone wants to play with Lighthouse V2 and the Crazyflie right away ;-).

As a side note, the Bitcraze team will shrink temporarily as I, Arnaud, go on parental leave until mid-August. I look forward to this new adventure, and I trust that the Lighthouse V2 development and the forum will be in good hands in my absence.

The Crazyflie supports wireless communication using both the Crazyradio PA and BLE (Bluetooth Low Energy, https://en.wikipedia.org/wiki/Bluetooth_Low_Energy). BLE is used with the mobile phone apps, while the Crazyradio PA is usually used together with a PC.

The lower levels of the radio communication in the Crazyflie are handled by the nRF51, which is capable of handling both types of communication. When using the Crazyradio we use the manufacturer's (Nordic Semiconductor) proprietary Enhanced ShockBurst (ESB) protocol, which makes it simple to send packets of up to 32 bytes back and forth. When communicating over BLE we use Nordic Semiconductor's S110 SoftDevice, a BLE stack developed by Nordic to simplify implementation.
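As an illustration of how simple ESB is to use from the PC side, here is a minimal sketch using the Crazyradio driver from the Python library. The channel and data rate must of course match your Crazyflie; the single 0xFF byte is the null CRTP header that the client also uses when pinging/scanning:

```python
# Send one ESB packet (up to 32 bytes) and read the acknowledgment.
from cflib.drivers.crazyradio import Crazyradio

cr = Crazyradio(devid=0)
cr.set_channel(80)                   # must match the Crazyflie's channel
cr.set_data_rate(Crazyradio.DR_2MPS)
res = cr.send_packet((0xFF,))        # null CRTP packet, works as a ping
if res is not None and res.ack:
    print('ACK received, payload:', res.data)
cr.close()
```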

When we designed the first Crazyflie, the Crazyflie 1.0/Nano, we chose the nRF24L01+, which uses the ESB protocol, because of its simplicity, good range and low latency. Then came the Crazyflie 2.0, and we wanted BLE for mobile client support. Luckily Nordic released the nRF51, which could handle both. However, there is a small drawback: the two protocols can't run concurrently and have to be interleaved. For BLE this has never been a problem, as this protocol has priority, but for ESB it means that when BLE is running there will be a small amount of packet loss.

The CRTP protocol we developed, which runs on top of ESB, handles the packet loss fairly well, but as more and more Crazyflies are added we have been seeing communication issues. So last week we dove into this problem, and after some digging we understood that BLE was one of the causes. Therefore we added a switch that disables BLE as soon as an ESB packet is received. This improved the ESB connection, and it now seems more stable. If you have the possibility, we suggest you get the latest from the crazyflie2-nrf-firmware master branch, try it out and give us feedback.

This change will hopefully provide more stable communication between the Crazyradio PA and the Crazyflie. From a functionality point of view, most users will not see any difference, but we would like to point out that if you have communicated with your Crazyflie using the Crazyradio PA, it will not be possible to connect with a mobile phone until the Crazyflie has been rebooted. Note that a simple radio scan with the Python client has the same effect and disables BLE.

When there is a possibility to name a release with only twos and zeros, one has to take that opportunity, right? Adding to that, it was about time to make a new release, and there is actually another reason. As we wrote in the “What's up 2020” blog post, it's time to look back, finish up and make things more stable. This includes improving documentation, adding more examples/tutorials, etc. With this release we create a good baseline to start this work from.

The release changes are outlined below.

Crazyflie/Bolt/Roadrunner firmware

Python client and library

  • Bug fixes
  • More examples
  • Full external pose support

Two weeks ago, we had a blog post about the state estimators that are available in the Crazyflie. Once the Crazyflie knows where it is, the next step is to determine where it wants to go, by means of the high level commander (implemented as part of the Crazyswarm project) or set-points given by the CFclient or directly from scripts using the Crazyflie Python lib. But how exactly does the Crazyflie get to those desired positions in the first place? The difference between the current state estimate and the desired state needs to be transformed into inputs for the motors. Unfortunately, quadrotors like the Crazyflie do not have easy dynamics to handle, so read this blog post if you want to learn more about it!

Controlling the Crazyflie

So in order to use the thrust of the motors in a useful way to get the Crazyflie to do what you want, there are several controllers to consider, which you can see in the quick overview below. It shows the different control paths that can be taken, from the high level commander all the way to the power distribution to the motors. Bear in mind that these are simplified representations and that the actual implementation is of course a bit more complicated, but it should at least give you a rough idea of which paths can be pursued.

Possible controller pathways

PID Controller

The default setting in the Crazyflie firmware is proportional-integral-derivative (PID) control for all desired state aspects. The High Level Commander (HLC) sends desired position set-points to the PID position controller (which used to be done off-board, i.e. outside of the Crazyflie firmware, before this blog post). These result in desired pitch and roll angles, which are sent directly to the attitude PID controller. That controller determines the desired angle rates, which are sent to the angle rate controller (which is… you guessed it… also a PID controller). This structure is called a cascaded PID controller. The result is the desired thrust for roll, pitch, yaw and height, which is handled by the power distribution to the motors. (Note that height is mostly handled by the position controller.)
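To make the cascade concrete, here is a minimal single-axis sketch in Python. The gains are illustrative, not the firmware defaults, and the real implementation also deals with saturation, filtering and integrator limits:

```python
# Cascaded PID: each stage's output becomes the next stage's set-point.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pos_pid = PID(2.0, 0.0, 0.0)       # position error -> desired angle
att_pid = PID(6.0, 3.0, 0.0)       # angle error    -> desired angle rate
rate_pid = PID(250.0, 500.0, 2.5)  # rate error     -> motor output correction

def cascade_step(pos_sp, pos, angle, rate, dt):
    angle_sp = pos_pid.update(pos_sp, pos, dt)
    rate_sp = att_pid.update(angle_sp, angle, dt)
    return rate_pid.update(rate_sp, rate, dt)
```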

INDI Controller

The Incremental Nonlinear Dynamic Inversion (INDI) controller is a controller that deals directly with the angle rates to determine the thrust. It is a very recent addition to the Crazyflie firmware, contributed by one of our community members, and is based on the implementation in this paper. Currently the position control is still handled by the same PID controller mentioned in the previous paragraph; nevertheless, for handling the angles it should be faster than the attitude and rate PID controllers combined. We have not yet fully tested this, but if you do, let us know how you like it on the Bitcraze forum!

Mellinger Controller

As part of the Crazyswarm project, the controller designed by Daniel Mellinger has been implemented in the Crazyflie firmware as well. Please see this paper for the details of the Mellinger controller. It is a sort of “all in one”: based on the desired position and the velocity vector towards that position, it calculates right away which thrusts need to be distributed to all the motors. This results in a much smoother controlled trajectory from the high level commander, and it is therefore advised when the Crazyflie has a precise position estimate (Lighthouse or mocap). However, as it is quite aggressive, a position estimate of lesser quality (Flow deck or LPS) will not be sufficient for this controller. See some examples of Mellinger-controlled flights here and here.

Let us know what you think!

If you have experience working with these controllers or want to know more about them, please drop us a message on the forum! We are currently working on stabilizing and documenting multiple aspects of the Crazyflie, and the controllers are one of them, so we are really interested in your experiences!

How does a Crazyflie manage to fly and stay in the air in the first place? Many of us tend to take this for granted, as much research tends to happen on the application level. Although we try to make the low-level elements of flight as stable as possible, it might happen that whatever you are trying to implement on the application level actually affects the Crazyflie's low-level control and estimation. We would therefore like to focus a little on the inner workings of the autopilot of the Crazyflie, starting with state estimation. State estimation is part of the stabilizer loop in the Crazyflie; an overview of it was given in a previous blog post.

State estimation is really important in quadrotors (and in robotics in general). The Crazyflie first of all needs to know at which angles it is flying (roll, pitch, yaw). If it were flying slanted a few degrees in roll, it would accelerate in that direction. The controller therefore needs a good estimate of the current angles to compensate for them. For a step up in autonomy, a good position estimate becomes important too, since you would like it to move reliably from A to B.

There are two types of state estimators in the Crazyflie firmware, namely a complementary filter and an extended Kalman filter.

Complementary Filter

The complementary filter is considered a very lightweight and efficient filter which in general only uses the IMU inputs of the gyroscope (angle rate) and the accelerometer. The estimator has been extended to also include the ToF distance measurement of the Z-ranger deck. The estimated output is the Crazyflie's attitude (roll, pitch, yaw) and its altitude (in the z direction). These values can be used by the controller and are meant for manual control. If you are curious how this is implemented exactly, we encourage you to check out the firmware in estimator_complementary.c and sensfusion6.c. The complementary filter is the default state estimator in the Crazyflie firmware.

Schematic overview of inputs and outputs of the Complementary filter.
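The core idea can be sketched in a few lines for a single axis. This is illustrative Python, not the quaternion-based implementation in sensfusion6.c, and the filter coefficient is an assumed value:

```python
# Complementary filter for one attitude angle: trust the integrated gyro on
# short time scales and the accelerometer's gravity direction on long ones.
import math

ALPHA = 0.98  # assumed blend factor; close to 1 means "mostly gyro"

def complementary_update(angle, gyro_rate, acc_x, acc_z, dt):
    accel_angle = math.atan2(acc_x, acc_z)  # angle implied by gravity
    return ALPHA * (angle + gyro_rate * dt) + (1 - ALPHA) * accel_angle
```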

Extended Kalman Filter

The (extended) Kalman filter is a step up in complexity compared to the complementary filter, as it accepts more sensor inputs from both internal and external sensors. It is a recursive filter that estimates the current state of the Crazyflie based on incoming measurements (in combination with a predicted standard deviation of the noise), the measurement model and the model of the system itself. We will not go into detail here, but we encourage people to learn more about (extended) Kalman filters by reading up on material like this.

Schematic overview of inputs and outputs of the Extended Kalman Filter

In short, because of its richer state estimation possibilities, we prefer the Kalman filter in combination with several decks: the Flow deck, the Loco positioning deck and the Lighthouse deck. If you look in a deck driver (like for instance this one), you will see that we set the required estimator to the Kalman filter, of course because we want position/velocity estimates :). Important to note is that each measurement input affects the quality of the position estimate: the positioning of the Lighthouse deck (mm precision) is much more accurate than that of the Loco positioning deck (cm precision), which comes down to the standard deviation of those measurements. Please check out estimator_kalman.c and kalman_core.c to learn more about the implementation. Also good to know: the Kalman filter has a supervisor, which resets the filter if the position or velocity estimate gets out of hand.
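To make the role of those standard deviations concrete, here is a minimal one-dimensional predict/update sketch (illustrative Python, not the firmware's EKF). A small measurement noise r, as for a mm-precision Lighthouse measurement, pulls the estimate strongly towards the measurement, while a large r, as for a cm-precision LPS measurement, makes the filter lean on its prediction instead:

```python
# One-dimensional Kalman filter cycle: x is the state estimate, p its variance.
def kalman_predict(x, p, u, q, dt):
    x = x + u * dt   # propagate the state with the model (e.g. a velocity input)
    p = p + q        # process noise q grows the uncertainty
    return x, p

def kalman_update(x, p, z, r):
    k = p / (p + r)        # Kalman gain: large when the measurement noise r is small
    x = x + k * (z - x)    # correct the state with the innovation
    p = (1 - k) * p
    return x, p
```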

Of course this blog post does not give a fully detailed explanation of state estimation, but we do hope it gives some kind of overview, so you know where to look if you would like to improve anything. The Kalman filter can easily be extended to accept more inputs, and the models on which the estimates are based can be improved. If you would like to implement your own filter, that is perfectly possible too.

It would be great if you could share your thoughts and questions about state estimation on the Crazyflie on the forum!

As we talked about in a previous post, it is more than time to implement a higher abstraction layer in the Crazyflie firmware to make it easy to implement custom automation and programs on top of the flying platform. In this post we will try to explain the state of the art and where we are thinking of heading. This is mostly a request for comments, and we are creating a GitHub ticket to discuss it.

DELFT – Drone swarm at TU Delft. – PHOTO GUUS SCHOONEWILLE

The out-of-tree build and P2P API presented in the previous post are a great start: they make it possible to build projects on top of the Crazyflie firmware that can easily be maintained over time, and to communicate directly between Crazyflies without having a PC in the loop. However, we have not yet fully solved or documented the API that can be called by programs written on top of the Crazyflie; this is what the app layer is supposed to provide.

The current plan for the app layer is to make the same functionality that is available in the Python Crazyflie lib API accessible from within the Crazyflie firmware, using similar API calls. This way we get the possibility of prototyping functionality in Python code on a remote machine and, when it is working, easily converting it to an app running onboard. This is already implemented, in part, for the log and param APIs as well as for the low-level parts of the commander. It has enabled us to write programs like the multiranger push demo and SGBA from Kimberly's paper. The API is not yet documented properly and the function calls do not yet look like the ones on the Python lib side, but our intention is to converge the APIs over time.
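To give an idea of the Python-side workflow that the app layer mirrors, here is a minimal cflib sketch that subscribes to a log variable and sets a parameter. The URI is made up; the log and param names are the standard stabilizer ones:

```python
# Prototype with log and param from a PC before moving the logic onboard.
import time
import cflib.crtp
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.crazyflie.log import LogConfig

URI = 'radio://0/80/2M'

cflib.crtp.init_drivers()
with SyncCrazyflie(URI) as scf:
    scf.cf.param.set_value('stabilizer.estimator', '2')  # select the Kalman filter
    lg = LogConfig(name='Stabilizer', period_in_ms=100)
    lg.add_variable('stabilizer.roll', 'float')
    lg.data_received_cb.add_callback(
        lambda ts, data, conf: print(ts, data['stabilizer.roll']))
    scf.cf.log.add_config(lg)
    lg.start()
    time.sleep(2)
    lg.stop()
```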

We think that having the same level of functionality for log, param and commander within a Crazyflie app as in the Python API will already make it possible to implement a lot of onboard programs much more easily than before. If there is anything else you think would be interesting to develop in this area, do not hesitate to drop a comment on this post or in the GitHub issue.

This week we are exhibiting at IROS in Macau. We are running our fully autonomous demo based on the Lighthouse positioning technology and charging pads. We have also brought some prototypes to show, for instance the Crazyflie Bolt, the AI deck and the Active marker deck. You can read more about the demo at the IROS 2019 page.

We’d love to hear what you are working on, and to discuss issues, possibilities or new products. If you are at IROS, drop by our booth (B34) and say hi!

Lighthouse yaw

We have not only prepared for IROS, we have also been working on improving the lighthouse positioning system. Recently we added a (slightly hackish) solution for updating the yaw with data from the Lighthouse deck. This means that it is not necessary to start the Crazyflie facing the positive X direction when using the Lighthouse deck. The Crazyflie will understand its heading and act accordingly.

Two Crazyflies facing a random direction, take off and rotate to yaw=0.

We are also working on integrating the Lighthouse deck in a better way into the Kalman filter. If everything goes according to plan, it will enable a Crazyflie to fly with only one base station, and be more robust when using two base stations.

For the last four years of my PhD at TU Delft and the MAVlab, we were determined to figure out how to make a swarm/group of tiny quadcopters fly through and explore an unknown indoor environment. This was not easy, as there were many sub-challenges that needed to be solved first. However, we are happy to say that we were able to show a proof of concept in the latest Science Robotics issue! Here you can see the press release from TU Delft with general information about the project.

Since we used the Crazyflie 2.0 to achieve this result, in this blog post we want to highlight the technical side of the research: the achievements and the challenges we had to face. Moreover, we will also explain the updated code, which uses the new features of the Crazyflie firmware as explained in the previous blog post.

A swarm of drones exploring the environment, avoiding obstacles and each other. (Guus Schoonewille, TU Delft)

Hardware

In the paper, we presented a technique called the Swarm Gradient Bug Algorithm (SGBA), which borrows (as the name suggests) navigational elements from the path planning technique called ‘bug algorithms’ (see this paper for an overview). The basic principle is that SGBA is a state machine with several simple behavior presets, such as ‘going to the goal’, ‘wall-following’ and ‘avoiding other Crazyflies’. Below you can see all the modules that were used. For the main experiments (on the left), the Crazyflie 2.0s were equipped with the Multiranger and the Flow deck (here we used the Flow deck v1). On the right you see the Crazyflies used for the application experiment, where we made a custom Multiranger deck (with four VL53L0x‘s) and added a Hubsan camera module. For both we used the Turnigy nanotech 300 mAh (1S 45-90C) LiPo battery to increase the flight time to 7.5 min.

Hardware used in the experiments. Adapted from the Science Robotics paper.

Experiments

With this, we were able to have 6 Crazyflies explore an empty office floor in the faculty building of Aerospace Engineering. They started out in the middle of the test environment and all flew off in different preferred directions, which they maintained using their internally estimated yaw angle. With the Multirangers, they could detect walls in their path and follow a wall's contour until the way was clear again to continue in their preferred direction. Based on their local odometry measurements with the Flow deck, the Crazyflies detected if they were flying in a loop, in order to get out of rooms or other situations.

A little before halfway through their battery life, they would try to get back to their initial position. They did this by measuring the received signal strength intensity (RSSI) of the Crazyradio PA home beacon, which was located at their initial starting position. During wall-following, they measured the gradient of the RSSI to determine in which direction it increased or decreased, in order to estimate the angle back to the goal.
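The gradient idea can be sketched in a few lines. This is illustrative Python rather than the onboard implementation; the window size is arbitrary and we assume that a larger RSSI value means a stronger signal:

```python
# Compare a recent RSSI average against an older one while wall-following:
# a rising trend suggests the Crazyflie is heading roughly towards the beacon.
from collections import deque

WINDOW = 10
rssi_hist = deque(maxlen=2 * WINDOW)

def heading_towards_beacon(new_rssi):
    rssi_hist.append(new_rssi)
    if len(rssi_hist) < 2 * WINDOW:
        return None  # not enough samples yet
    samples = list(rssi_hist)
    old = sum(samples[:WINDOW]) / WINDOW
    new = sum(samples[WINDOW:]) / WINDOW
    return new > old  # RSSI rising -> getting closer to the home beacon
```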

While they were navigating, they were also communicating with each other by broadcasting messages. Based on the RSSI of those messages, they could sense other Crazyflies approaching, which they used first of all for collision avoidance (by letting the low-priority CFs move out of the way of the high-priority CFs). Second of all, during the initial exploration phase they also communicated their preferred directions, so that one of them could change its exploration behavior to avoid conflicting with another. This way, we tried to maximize the area explored by the Crazyflies.

One of those experiments with 6 Crazyflies can be seen in this video for better understanding:

We also showed an application experiment where 4 Crazyflies with the camera modules searched for 2 dummies in the same environment.

Challenges

In order to get the results presented above, there were many challenges to overcome during the development phase. Here is a list that explains a couple of the elements that needed to work flawlessly:

  • Single CF robustness: We used the Flow deck v1 for the ‘deadlock’ detection and the basic velocity control, which was challenging in the testing environment because of low lighting conditions and sparse texture. The Crazyflies therefore flew at 0.5 m height to ensure robustness. The wall-following was performed solely with the Multiranger. It was tested in many situations and was able to handle many types of obstacles without any problem; however, the limited FOV of the laser range finders cannot detect all types of obstacles, for instance thin ones or irregular ones such as plants. Luckily these were not encountered in the environment the Crazyflies flew in, but to increase robustness we will need to consider adding a camera to drive the navigation as well.
  • Communication with the base station: SGBA in essence only needs one Crazyradio PA base station, since all the behavior runs completely on board. However, in order to show results in the paper, it was necessary for the CFs to communicate information back, like odometry and state. As this was two-way communication (the CFs needed the RSSI of packets coming back), each Crazyflie needed its own base station. They also all needed to be on different channels to avoid packet collisions and RSSI accumulation.
  • Communication peer to peer: At development time, P2P didn't exist yet, so we had to implement broadcast communication between the Crazyflies. Since the previous point required them to listen on different channels, the nRF had to be configured to send separate broadcast messages on all those channels as well. To time this properly, the home beacon had to sync the Crazyflies by sending out a timer. Even so, the avoidance maneuvers were done very conservatively to try to prevent inter-drone collisions.

Many of the issues, especially the communication challenges, will be solved with the updated code implementation as explained in the next section.

Updated code

The firmware that the Crazyflies used to fly in the experiments shown in the paper can all be found in this public repository. However, the code is based on quite an old version of the Crazyflie firmware, as it was forked almost a year ago. The implementations of the SGBA state machine and the P2P broadcasting were not generic enough to integrate back into the development cycle, so the current code is only suitable for the old Crazyflie 2.0.

Therefore, we developed two major changes in the latest firmware which will make it much easier for me (and, we hope, others with new ideas as well!) to implement SGBA and the P2P communication in a way that should be compatible with any version of the firmware (and hardware) from here on. We implemented SGBA as an app-layer application and also handle all the broadcast messaging directly from this layer. Please check out this GitHub repository with the new app-layer implementation of SGBA.