It’s that time of the year again! As the days get darker and darker here in Sweden, we’re happy to be getting some time off to share some warmth with our families.

And to kick off the holiday season, we prepared a little treat for you! We had fun making a Christmas video that explores how the Crazyflie can be used at home. Since we’re not at the office anymore, we decided to fly in our homes, and this video shows the different ways to do so. First, take a look at what we’ve done:

Now let’s dig into the different techniques we used.

  • Tobias decided to fly the Bolt manually. His first choice was to land in the Christmas sock, but that was too hard, hence the hard landing on top of the tree. We were not sure who would survive: the tree or the Bolt!
  • Kimberly installed two Lighthouse V2 base stations and, after setting up, determined some waypoints by holding the Crazyflie in her hand. Then she generated a trajectory with the uav_trajectories project (like in the hyperdemo). Finally she used the cflib to upload this trajectory and make the Crazyflie fly all the way to the basket. Her two cats could have looked more impressed, though!
  • Using trial and error, Barbara used the Flow deck, the motion commander, and a broken measuring tape to calibrate the Crazyflie’s path next to the tree.
  • Arnaud realized that, with all the autonomous work, we hardly fly the Crazyflie manually anymore. So he flew the Crazyflie manually. It required a bit more training than expected, but the Crazyflie is really a fun (and safe!) quad to fly.
  • Marcus used two Lighthouse V2 base stations together with the Lighthouse deck and LED-ring deck. For the flying, he used the high level commander. The original plan was to fly around his gingerbread house, but unfortunately it was demolished before he got the chance (by some hungry elves, surely!).
  • Kristoffer made his own tree ornament with the drone, which turned out to be a nice addition to a Christmas tree!

It was a fun way to use our own product, and to show off our decorated houses.

I hope you enjoy watching this video as much as we enjoyed making it.

We are staying open during the holiday season, but at limited capacity: we will still ship your orders and keep an eye on our emails and the forum, but things will get a bit slower here.

We wish you happy holidays and safe moments together with your loved ones.

This autumn, when we had our quarterly planning meeting, it was obvious that there would not be any conferences this year like in other years. This meant we would not meet you, our users, and hear about your interesting projects, but also that we would not be forced to create a demo. We sometimes joke that we are working with Demo Driven Development and that it is what pushes us forward; even though that is not completely true, it is a strong driver. So we decided to create a demo in our office and share it online instead. We hope you enjoy it!

The wish list for the demo was long, but we decided that we wanted to use multiple positioning technologies, multiple platforms and multiple drones in a swarm. The idea was also to let the needs of the demo drive development of other technologies, as well as stabilize existing functionality by “eating our own dogfood”. As a result of the work we have, for instance:

  • improved the app layer in the Crazyflie
  • added Lighthouse V2 support, including basic support for 2+ base stations
  • improved support for mixed positioning systems

First of all, let’s check out the video:

We are using our office for the demo, and the Crazyflies are essentially flying a fixed trajectory from our meeting room, through the office and kitchen, to finally land in the Arena. The Crazyflies are autonomous from the moment they take off and there is no communication with any external computer after that; all positioning is done on-board.

Implementation

The demo is mainly implemented in the Crazyflie as an app, with a simple python script on an external machine to start it all. The app is identical in all the Crazyflies, so the script tells them where to land and checks that all Crazyflies have found their position before they are started. Finally it tells them to take off one by one with a fixed delay in between.

The Crazyflie app

When the Crazyflie boots up, the app is started and the first thing it does is prepare by defining a trajectory in the High Level Commander, as well as setting the data for the Lighthouse base stations in the system. The app uses a couple of parameters for communication, and at this point it waits for one of the parameters to be set by the python script.

When the parameter is set, the app uses the High Level Commander to take off and fly to the start point of the trajectory. At the starting point, it kicks off the trajectory, and while the High Level Commander handles the flying, the app goes to sleep. When reaching the end of the trajectory, the app once more goes into action and directs the Crazyflie to land at a position set through parameters during the initialization phase.

We used a feature of the High Level Commander that is maybe not that well known but can be very useful for making the motion fluid. When the High Level Commander does a go_to, for instance, it plans a trajectory from its current position/velocity/acceleration to the target position in one smooth motion. This can be used when transitioning from a go_to into a trajectory (or from go_to to go_to) by starting the trajectory a little bit early, thus never stopping at the end of the go_to but “sliding” directly into the trajectory. The same technique is used at the end of the trajectory to get out of the way faster and avoid being hit by the next Crazyflie in the swarm.
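Putting these pieces together, here is a rough sketch of the app side (not the exact demo code). It assumes the firmware app API and the High Level Commander functions in crtp_commander_high_level.h; the parameter group demo/start, the trajectory id and all timings are hypothetical. Note how the trajectory is started slightly before the go_to duration has elapsed, to “slide” into it:

#include <stdint.h>
#include <stdbool.h>

#include "app.h"
#include "FreeRTOS.h"
#include "task.h"
#include "param.h"
#include "crtp_commander_high_level.h"

static uint8_t start = 0;  // flipped to 1 by the external python script

void appMain() {
  // Trajectory upload and Lighthouse geometry/calibration setup would
  // happen here (omitted in this sketch).

  while (start == 0) {
    vTaskDelay(M2T(100));  // wait for the start parameter to be set
  }

  crtpCommanderHighLevelTakeoff(1.0f, 2.0f);             // to 1.0 m in 2 s
  vTaskDelay(M2T(2000));
  crtpCommanderHighLevelGoTo(0.5f, 0.0f, 1.0f, 0.0f, 2.0f, false);
  vTaskDelay(M2T(1800));  // start the trajectory a bit early to "slide" into it
  crtpCommanderHighLevelStartTrajectory(1, 1.0f, false, false);
  while (!crtpCommanderHighLevelIsTrajectoryFinished()) {
    vTaskDelay(M2T(100));
  }
  crtpCommanderHighLevelLand(0.0f, 2.0f);                // land at the end point
}

PARAM_GROUP_START(demo)
PARAM_ADD(PARAM_UINT8, start, &start)
PARAM_GROUP_STOP(demo)

With something like this in place, the external python script only needs to set the start parameter on each Crazyflie, with a fixed delay in between.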

The trajectory

The main part of the flight is one trajectory handled by the High Level Commander. It is generated using the uav_trajectories project from whoenig. We defined a number of points we wanted the trajectory to pass through, and the software generates a list of polynomials that can be used by the High Level Commander. The generated trajectory passes through the points, but as a part of the optimization process it also chooses some (unexpected) curves; that could be fixed with some tweaking.

The trajectory is defined using absolute positions in a global coordinate system that spans the office.

Positioning

We used three different positioning systems for the demo: the Lighthouse (V2), the Loco Positioning system (TDoA3) and the Flow deck. Different areas of the flight space are covered by different systems, either individually or overlapping. All decks are active all the time and pick up data when it is available, pushing it into the extended Kalman estimator.

In the meeting room, where we started, we used two Lighthouse V2 base stations, which gave us a very precise position estimate (including yaw) and a good start. When the Crazyflies moved out into the office, they relied only on the Flow deck, and that worked fine even though the errors potentially build up over time.

When the Crazyflies turned around the corner into the hallway towards the kitchen, we saw that the errors sometimes were too large; either the position or the yaw was off, which caused the Crazyflies to hit a wall. To fix that, we added 4 LPS nodes in the hallway, and this solved the problem. Note that all 4 anchors are on the ground, which by itself is not enough to give the Crazyflie a good 3D position, but the distance sensor on the Flow deck provides Z information and the overall result is good.

The corner when going from the kitchen into the Arena is pretty tight, and again the build-up of errors made it problematic to rely on the Flow deck only, so we added a Lighthouse base station for extra help.

Finally, in the first part of the Arena, the LPS system has full 3D coverage, and together with the Flow deck it is smooth sailing. About halfway through, the Crazyflies also start to pick up the Lighthouse system, and from that point data from all three systems is used at the same time.

Obviously we were using more than 2 base stations with the Lighthouse system, and even though it is not officially supported, it worked with some care and manual labor. The geometry data was, for instance, manually tweaked to fit the global coordinate system.

The wall between the kitchen and the Arena is very thick and it is unlikely that UWB can go through it, but we still got LPS data from the Arena anchors occasionally. Our interpretation is that it must have been packets bouncing off the walls into the kitchen. The stray packets were picked up by the Crazyflies, but since the Lighthouse base station provided a strong information source, the LPS packets did not cause any problems.

Firmware modifications

The firmware is essentially the stock crazyflie-firmware from GitHub; we did, however, make a few alterations:

  • The maximum velocity of the PID controller was increased to make it possible to fly a bit faster and create a nicer demo.
  • The number of Lighthouse base stations was increased.
  • The PID controller was tweaked for the Bolt.

You can find the source code for the demo on GitHub. The important stuff is in examples/demos/hyperdemo.

The hardware

In the demo we used 5 x Crazyflie 2.1 and 1 x Bolt, very similar to the Li-Ion Bolt we built recently. The difference is that this version uses a 2-cell Li-Po and lower KV motors, but the Li-Ion Bolt would have worked just as well.

Hyperdemo drones and their configurations

To make all positioning systems work at the same time we needed to add 3 decks: Lighthouse, Flow v2 and Loco deck. On the Crazyflie 2.1 this fits if the extra-long pin headers are used, with the Lighthouse deck mounted on top and the Loco deck underneath the Crazyflie 2.1, with the Flow v2 at the bottom. The same goes for the Bolt, but here we had to solder the extra-long pin header and the long pin header together to make them long enough.

There is one catch though… the pin resources for the decks collide. With some patching of the Loco deck this can be mitigated by moving its IRQ to IO_2 using the solder jumper. The RST needs to be moved to IO_4, which requires a small patch wire.

Also, some firmware configuration is needed, which is added to the hyperdemo makefile:

CFLAGS += -DLOCODECK_USE_ALT_PINS
CFLAGS += -DLOCODECK_ALT_PIN_RESET=DECK_GPIO_IO4

The final weight for the Crazyflie 2.1 is on the heavy side, and we quickly discovered that fully charged batteries should be used, otherwise the crash probability increases a lot.

Conclusions

We’re happy we were able to set this demo up and that it was fairly straightforward; the whole setup was done in one or two days. The app layer is quite useful and we tend to use it quite often when trying out ideas, which we interpret as a good sign :-)

We are satisfied with the results and hope it will inspire some of you out there to push the limits even further!

This week we have a guest blog post from CollMot about their work to integrate the Crazyflie with Skybrush. We are happy that they have used the app API that we wrote about a couple of weeks ago, to implement the required firmware extensions!


Bitcraze and CollMot have joined forces to release an indoor drone show management solution using CollMot’s new Skybrush software and Crazyflie firmware and hardware.

CollMot is a drone show provider company from Hungary, founded by a team of researchers with decade-long expertise in drone swarm science. CollMot has been offering outdoor drone shows since 2015. Our new product, Skybrush, allows users to handle their own fleet-level drone missions, and specifically drone shows, as smoothly as possible. In joint development with the Bitcraze team we are very excited to extend Skybrush to support indoor drone shows and other fleet missions using the Crazyflie system.

The basic swarm-induced mindset with which we are targeting the integration process is scalability. This includes scalability of communication, error handling, reliability and logistics. Each of these aspects is detailed below through some examples of the challenges we needed to solve together. We hope that besides having an application-specific extension of Crazyflie for entertainment purposes, the base system has also gained many new features during this great cooperative process. But let’s dig into the tech details a bit more…

UWB in large spaces with many drones

We have set up a relatively large area (10x20x6 m) with the Loco Positioning System, using 8 anchors in a more or less cubic arrangement. Using TWR mode for swarms was out of the question, as it needs each tag (drone) to communicate with the anchors individually, which does not scale with fleet size. Initial tests with the UWB system in TDoA2 mode were not very satisfying in terms of accuracy and reliability, but as we went deeper into the details we found the two main causes of the inaccuracies:

  1. Two of the anchors had been positioned on the vertical flat faces of some stairs, with a solid material connection between them that caused many reflections, so the relative distance measurements between these two anchors were bi-stable. When we realized that, we raised them a bit and attached them to columns with an air gap in between, which solved the reflection issue.
  2. The outlier filter of the TDoA2 mode was not optimal; a single bad packet generated consecutive outliers that opened up the filter too fast. This issue has since been solved in the Crazyflie firmware, after a long and painful investigation that ended with changing a single number from 2 to 3. This is how a reward system works in software development :)

In the end, UWB was doing its job quite nicely in both TDoA2 and TDoA3 modes, with stable accuracy at the 10-20 cm level in such a large area, so we could move on to tune the controller of the Crazyflie 2.1 a bit.

Crazyflies with Loco and LED decks

As we prepared the Crazyflie drones for shows, we had the Loco deck attached on top and the LED deck attached to the bottom of the drones, with an extra light bulb to spread the light smoothly. This setup resulted in a total weight of 37g. The basic challenge with the controller was that this weight turned out to be too much for the Crazyflie 2.1 system. Hover was at around 60-70% throttle on average; furthermore, there was a substantial difference in the throttle levels needed for individual motors (some in the 70-80% range). The tiny drones did a great job in horizontal motion, but as soon as they needed to go up or down with a vertical speed above around 0.5 m/s, one of their ESCs saturated and the system became unstable and crashed. Interestingly enough, the crash always started with a wobble exactly along the X axis, leading us to think that there was an issue with the positioning system instead of the ESCs. There are two possible solutions for this major problem:

  1. use less payload, i.e. lighter drones
  2. use stronger motors

Partially as a consequence of these experiments the Bitcraze team is now experimenting with new stronger models that will be optimized for show use cases as well. We can’t wait to test them!

Optimal controller for high speeds and accurate trajectory following

In general we are not yet very satisfied with any of the implemented controllers when using the UWB system for a show use-case. This use-case is special, as trajectory following needs to be as accurate as possible both in space and time to avoid collisions and to produce nice synchronized formations, while the maximal speed, both horizontal and vertical, has to be as high as possible to increase the wow effect for the audience.

  • The PID controller has no cutoffs in its outputs, and with the large positioning errors that are sometimes present in the UWB system, the controller outputs get way too large. If the gains are reduced, motion becomes sluggish and the path is not followed accurately in time.
  • The Mellinger and INDI controllers work well only with positioning systems of much better accuracy.

We have stuck with the PID controller so far and added velocity feed-forward terms, cutoffs in the output and some nonlinearity in case of large errors, which helped a bit, but the solution is not fully satisfying. Hopefully, these modifications might be included in the main firmware soon. However, a controller that works well with UWB is still an open question; any suggestions are welcome!
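To make those modifications concrete, here is a purely illustrative sketch of the velocity feed-forward and output cutoff; the function and gain names are ours, not CollMot’s actual code:

// Illustrative sketch of a PID position loop with velocity feed-forward
// and an output cutoff; all names and gains are hypothetical.
static float clampf(float v, float lo, float hi) {
  return v < lo ? lo : (v > hi ? hi : v);
}

// posError: position error [m], velRef: reference velocity [m/s].
// Returns a velocity command, clamped so that an occasional large
// UWB position error cannot produce an arbitrarily large output.
float positionToVelocityCmd(float posError, float velRef,
                            float kp, float kff, float maxOut) {
  float cmd = kp * posError + kff * velRef;  // P term + feed-forward term
  return clampf(cmd, -maxOut, maxOut);       // output cutoff
}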

Show specific improvements in the firmware

We implemented code that smoothly uploads the show content to the drones, performs automatic preflight checks, displays status with the LED deck to give visual feedback on many drones simultaneously, starts the show on time in synchrony with all swarm members, and handles the execution of the light program and show trajectory.

These modifications are now in our own fork of the Crazyflie firmware and will soon be rewritten into a show app, thanks to this new promising possibility in the code framework. As soon as the Skybrush and Crazyflie systems are stable enough to be released together, we will publish the related app code that helps automate show logistics for every user.

Summary

To sum it up, we are very enthusiastic about the Crazyflie system and the great team behind the scenes with very friendly, open and cooperative support. The current stage of Crazyflie + Skybrush integration is as follows:

  • New hardware iterations based on the Bolt system that support longer and more dynamic flights are coming;
  • a very stable, UWB-compatible controller is still an open question, but the current possibilities are satisfactory for initial tests with light flight dynamics;
  • a new Crazyflie app for the drone show use-case is basically ready to be launched together with the release of Skybrush in the near future.

If you are interested in Skybrush or have any questions related to this integration process, drop us an email or comment below.

This week we have a guest blog post from Bárbara Barros Carlos, PhD candidate at DIAG Robotics Lab. Enjoy!

Quadrotors are characterized by their underactuation, nonlinearities, bounded inputs, and, in some cases, communication time-delays. The development of their maneuvering capability poses some challenges that cover dynamics modeling, state estimation, trajectory generation, and control. The latter, in particular, must be able to exploit the system’s nonlinear dynamics to generate complex motions. However, the presence of communication time-delay is known to highly degrade control performance.

A composite image showing our real-time NMPC with time-delay compensation being used on the Crazyflie during the tracking of a helical trajectory.

In our recent work, we present an efficient position control architecture based on real-time nonlinear model predictive control (NMPC) with time-delay compensation for quadrotors. Given the current measurement, the state is predicted over the delay time interval using an integrator and then passed to the NMPC, which takes into account the input bounds. We demonstrate the capabilities of our architecture using the Crazyflie 2.1 nano-quadrotor.

Time-Delay Compensation

In our aerial system, because of the radio communication latency, we have delays both in receiving measurements and sending control inputs. Likewise, since we intend to use NMPC, the potentially high computational burden associated with its solution becomes an element that must also be taken into account to minimize the error in the state prediction.

Crazyflie NMPC response without considering the time-delay compensation.

To tackle this issue, we use a state predictor based on the round-trip time (RTT) associated with the sum of network latencies as a delay compensator. The prediction is computed by performing forward iterations of the system dynamic model, starting from the current measured state and over the RTT, through an explicit Runge-Kutta 4th-order (ERK4) integrator. Due to the independent nature of this operation, perfect delay compensation can be achieved by adjusting the integration step to be equal to the RTT. Thus, it is assumed that there is a fixed RTT, defined by τr, to be compensated.
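For concreteness, a minimal sketch of such a predictor in C: a single ERK4 step of length τr, propagating the measured state through the model dynamics under the held input. The state size NX and the dynamics function are placeholders for the quadrotor model described below:

#define NX 13  // placeholder state size: p (3), q (4), vb (3), omega (3)

typedef void (*dynamics_fn)(const double x[NX], const double u[4], double dx[NX]);

// One explicit Runge-Kutta 4 step over the round-trip time tau_r:
// x_pred = x0 integrated through f under the (held) input u.
void erk4_predict(dynamics_fn f, const double x0[NX], const double u[4],
                  double tau_r, double x_pred[NX]) {
  double k1[NX], k2[NX], k3[NX], k4[NX], tmp[NX];
  f(x0, u, k1);
  for (int i = 0; i < NX; i++) tmp[i] = x0[i] + 0.5 * tau_r * k1[i];
  f(tmp, u, k2);
  for (int i = 0; i < NX; i++) tmp[i] = x0[i] + 0.5 * tau_r * k2[i];
  f(tmp, u, k3);
  for (int i = 0; i < NX; i++) tmp[i] = x0[i] + tau_r * k3[i];
  f(tmp, u, k4);
  for (int i = 0; i < NX; i++)
    x_pred[i] = x0[i] + tau_r / 6.0 * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i]);
}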

Nonlinear Model Predictive Control

The NMPC controller is defined as the following constrained nonlinear program (NLP):
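In its generic form (a sketch of the structure; the exact cost and constraints are given in the paper), a tracking NMPC of this kind reads:

\[
\begin{aligned}
\min_{x(\cdot),\,u(\cdot)} \quad & \int_{t_0}^{t_0+T} \big\lVert h(x(t),u(t)) - \bar{y}(t) \big\rVert_{W}^{2}\,\mathrm{d}t + \big\lVert h_N(x(t_0{+}T)) - \bar{y}_N \big\rVert_{W_N}^{2} \\
\text{s.t.} \quad & x(t_0) = \hat{x}_0, \\
& \dot{x}(t) = f(x(t),u(t)), \quad t \in [t_0,\,t_0+T], \\
& \underline{u} \le u(t) \le \bar{u},
\end{aligned}
\]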

Therein, p denotes the inertial position, q the attitude in unit quaternions, vb the linear velocity expressed in the body frame, ω the angular rate, and Ωi the rotational speed of the ith propeller. The NLP is tailored to the Crazyflie 2.1 and is implemented using the high-performance software package acados, which solves optimal control problems and implements a real-time iteration (RTI) variant of a sequential quadratic programming (SQP) scheme with Gauss-Newton Hessian approximation. The quadratic subproblems (QP) arising in the SQP scheme are solved with HPIPM, an interior-point method solver, built on top of the linear algebra library BLASFEO, finely tuned for multiple CPU architectures. We use a recently proposed Hessian condensing algorithm particularly suitable for partial condensing to further speed up solution times.

When designing an NMPC, the choice of horizon length has profound implications for computational burden and tracking performance. For the former, the longer the horizon, the higher the computational burden. As for the latter, in principle, a long prediction horizon tends to improve the overall performance of the controller. To select this parameter and achieve a trade-off between performance and computational burden, we implemented the NLP in acados considering five horizon lengths (N = {10,20,30,40,50}) and input bounds on the rotational speed of the propellers (lower bound = 0, upper bound = 22 krpm), discretizing the dynamics using an ERK4 integration scheme. Likewise, we compared the condensing approach with the state-of-the-art solver qpOASES against the partial condensing approach with HPIPM over the same set of horizons.

Left: closed-loop trajectories comparing different horizon lengths. Right: average runtimes per SQP-iteration for different horizon lengths considering two distinct QP solvers.

As qpOASES is a solver based on an active-set method, it requires condensing to be computationally efficient. In line with the observations in the literature that condensing is effective for short to medium horizon lengths, we note that qpOASES is competitive for horizons up to approximately N = 30 when compared to HPIPM. The break-even point moves higher on the scale for longer horizons, mainly due to efficient software implementations that cover: (a) a Hessian condensing procedure tailored for partial condensing, (b) a structure-exploiting QP solver based on a novel Riccati recursion, and (c) a hardware-tailored linear algebra library. Therefore, we chose horizon N = 50, as it offers a reasonable trade-off between deviation from the reference trajectory and computational burden.

Onboard Controller Considerations

How the onboard controllers (PIDs) use the setpoints of the offboard controller (NMPC) in our architecture is not entirely conventional and thereby deserves some consideration. First, the reference signals that the PID loops track do not fully correspond to the control inputs considered in the NMPC formulation. Instead, part of the state solution is used in conjunction with the control inputs to reconstruct the actual input commands passed as setpoints to the Crazyflie. Second, one part of the reconstructed input commands is sent as a setpoint to the outer loop (attitude controller), and the other part is sent to the inner loop (rate controller). Furthermore, as the NMPC model does not include the PID loops, it does not truly represent the real system, even in the case of perfect knowledge of the physical parameters. As a consequence, the optimal feedback policy is distorted in the real system by the PIDs.

Closed-loop Position Control Performance

Our control architecture hinges upon a ROS Kinetic framework and runs at 66.67 Hz. The Crazy RealTime Protocol (CRTP) is used in combination with our crazyflie_nmpc stack to stream, at runtime, custom packets containing the required data to reconstruct the part of the measurement vector that depends on the IMU data. Likewise, the cortex_ros bridge streams the 3D global position of the Crazyflie, which is then passed through a second-order, discrete-time Butterworth filter to estimate the linear velocities.

To validate the effectiveness of our control architecture, we ran two experiments. For each experiment, we generate a reference trajectory on a base computer and pass it to our NMPC ROS node every τs = 15 ms. When generating the trajectories, we explicitly address the feasibility issue in the design process, creating two references: one feasible and one infeasible. In addressing this issue, we show through experiments that the performance of the proposed NMPC is not degraded even when the nano-quadrotor attempts to track an infeasible trajectory, which could, in principle, make it deviate significantly or even crash.

Overall, we observe that the most challenging setpoints to be tracked are the positions in which, given a change in the motion, the Crazyflie has to pitch/roll in the opposite direction quickly. These are the setpoints where the distortion has the greatest influence on the system, causing small overshoots in position. The average solution time of the tailored RTI scheme using acados was obtained on an Intel Core i5-8250U @ 3.4 GHz running Ubuntu and is about 7.4 ms. This result shows the efficiency of the proposed scheme.

Outlook

In this work, we presented the design and implementation of a novel position controller based on nonlinear model predictive control for quadrotors. The control architecture incorporates a predictor as a delay compensator for granting a delay-free model in the NMPC formulation, which in turn enforces bounds on the actuators. To validate our architecture, we implemented it on the Crazyflie 2.1 nano-quadrotor. The experiments demonstrate that the efficient RTI-based scheme, exploiting the full nonlinear model, achieves a high-accuracy tracking performance and is fast enough for real-time deployment.

Related Links

This research project was developed by:

Bárbara Barros Carlos1, Tommaso Sartor2, Andrea Zanelli3, and Gianluca Frison3, under the supervision of professors Wolfram Burgard4, Moritz Diehl3 and Giuseppe Oriolo1.

1 B. B. Carlos and G. Oriolo are with the DIAG Robotics Lab, Sapienza University of Rome, Italy.
2 T. Sartor is with the MECO Group, KU Leuven, Belgium.
3 A. Zanelli, G. Frison, and M. Diehl are with the syscop Lab, University of Freiburg, Germany.
4 W. Burgard is with the AIS Lab, University of Freiburg, Germany.

Last Wednesday we had our first live tutorial event, explaining the Spiral Swarm demo that we usually show at conferences. About 60 people signed up, and it seems that about 40-50 people were able to join from all parts of the world. There were even several Crazyflie users from Asia that stayed up late especially for this, so we definitely appreciated the dedication!

For those who missed it, you can find the recordings and slides on this event page.

The Tutorial

In the first hour we mostly talked about the Lighthouse positioning system, focusing in particular on the base station V2. In real time, we had hands-on sessions where we showed how we set up the system, how to retrieve the calibration data and how to estimate the geometry. The hour ended with a Crazyflie flying in the lighthouse system itself.

After the break, we focused on how to achieve more autonomy in the swarm, where we talked about the limitations of communication, the high level commander and the app layer. This was also shown hands-on, with multiple flying Crazyflies and the fully automatic demo at the end. We were able to keep the demo running for another 30 minutes afterwards while we rested up with a drink :)

We used Discord and Mozilla Hubs simultaneously to stream the tutorial. Discord worked out nicely since we could have one channel for the stream and one channel for the chat, which one of us watched continuously. Mozilla Hubs was a nice add-on; however, it definitely had some hiccups and streaming-quality issues, which is not ideal for following a tutorial. Also, we heard from headset-using participants that being in Virtual Reality for 2 hours is very exhausting.

What next?

We very much liked doing the tutorial and speaking one-on-one with our users, so we are likely to organize one again. We are not sure at what frequency, but of course we will announce it first. We already have some requests for topics, so we will look into those first. Next time it will probably be a shorter tutorial on Discord only. Mozilla Hubs might still be used, but as a virtual gallery where we put 3D visualizations of what we are working on (like how the base station sweeps work, for instance), so that people can get a better understanding. If you have any requests for topics, please leave a comment below.

We will also try out our new Discord server as a digital ‘watering hole’ for our users. Here everybody will have the opportunity to chat with each other, to share awesome projects and maybe to help each other out with certain questions. However, we will not be on Discord ourselves all the time and still advise using forum.bitcraze.io as the main place to ask questions and seek support.

Click here to join our Discord server

As mentioned in this blog post, we added the possibility to write apps for the Crazyflie firmware a while ago. Now we have added more functions to the firmware to make it possible to use apps for an even wider range of tasks.

The overall idea of the app API is to mirror the functionality of the python lib. This enables a user to prototype an application in python with quick iterations; when everything is working, the app can easily be ported to C to run in the Crazyflie instead. The functions in the firmware are not identical to the python flavour, but we have tried to keep them as close as possible to make the translation simple.

An app is also a much better way to contain custom functionality, as the underlying firmware can be updated without merging any code. The intention is that the app API will be stable over time, so apps that work with one version of the firmware should also work with the next version.
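For reference, an app is not much more than an appMain() entry point that the firmware starts in its own task; this sketch is modeled on the hello-world example shipped in the firmware repository:

#include "app.h"
#include "FreeRTOS.h"
#include "task.h"
#include "debug.h"

// Called by the firmware in a dedicated task once the system has started
void appMain() {
  while (1) {
    DEBUG_PRINT("Hello from an app!\n");
    vTaskDelay(M2T(2000));  // sleep 2 s between prints
  }
}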

Improvements

We used our demo from IROS and ICRA (among others), with a fairly autonomous swarm, as a driver for the development. The demo used to be implemented in a branch of the firmware, with various modifications of the code base to make it possible to do what we wanted. The goal of the exercise was to convert the demo into an app and add the required API to the firmware to enable the app to do its thing. The new app is available here.

The main areas where we have extended the API are:

Log and parameters framework

The log framework is the preferred way for an app to read data from the firmware, and this has been working from the start. Similarly, the parameter framework is the way to set parameters. Even though this has worked, it broke a basic assumption in the setup with the client: that only the client can change a parameter. Changing a parameter from an app could lead to the client and the Crazyflie having different views of the state in the Crazyflie, but this has now been fixed and the client is updated when needed.
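As a brief sketch (assuming the log.h and param.h APIs; the exact parameter setter name may differ between firmware versions), reading a log variable and setting a parameter from an app looks roughly like this:

#include "log.h"
#include "param.h"

void logAndParamExample(void) {
  // Log framework: look up a variable by group/name and read it
  logVarId_t idX = logGetVarId("stateEstimate", "x");
  float x = logGetFloat(idX);  // current estimated X position [m]
  (void)x;

  // Param framework: set a parameter from the app; the client is
  // notified so that both sides keep the same view of the state
  paramVarId_t idEffect = paramGetVarId("ring", "effect");
  paramSetInt(idEffect, 7);  // e.g. switch the LED-ring effect
}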

High level commander

The high level commander was not accessible from an app earlier; functions matching the python lib have now been added to make it easy to handle autonomous flight.

Custom LED sequences

It is now possible to register custom LED sequences to control the four LEDs on the Crazyflie to signal events or state.

Lighthouse functionality

Functions for setting base station geometry data as well as calibration data have been added. These functions are also very useful for those who are using the lighthouse system, as this can now be done from an app instead of by modifying lighthouse_position_est.c.

Remaining work

We have taken a step forward with these changes, but there is more to be done! The two main areas are support for custom CRTP packets and memory mapping through the memory subsystem. There might be more; let us know if there is something you are missing. The work will continue and there might even be some documentation at some point :-)

Tutorial

One reason for doing this API work now was to prepare for the tutorial about the lighthouse 2 positioning system, swarm autonomy and the demo app that we will run online this Wednesday, so don’t miss out! You can read more about the event here.

Li-Ion batteries have long packed more energy per gram than Li-Po batteries. The problem for UAV applications has been that Li-Ion can’t deliver enough current, but that is starting to change. There are now cells in the 18650 series that are supposed to be able to deliver 30-35A continuously, at least according to the specs. Therefore we thought it was time to do some testing and decided to build a 1-cell Li-Ion drone using the Crazyflie Bolt as the base.

Since an 18650 battery is 18mm in diameter and 65mm long, the size would affect the design, but we still wanted to keep the drone small and lightweight. The battery is below 20mm wide, which means we can run the deck connectors around it; that is nice. We chose to use our 3D printer to build the frame and to use off-the-shelf ESCs, motors and props. After a couple of hours of research we selected 3″ propellers, 1202.5 11500Kv motors and tiny 1-2S single ESCs for our first prototype.

Parts list:

  • 1 x Custom designed 130mm 3D printed frame
  • 1 x Crazyflie Bolt flight controller
  • 4 x Eachine 3020 propeller (2xCW + 2xCCW)
  • 4 x Flywoo ROBO RB 1202.5 11500 Kv motors
  • 4 x Flash hobby 7A 1-2S ESC
  • 1 x Li-Ion Sony 18650 VTC6 3000mAh 30A
  • Screws, anti vib. spacers, zipties, etc.

The custom designed frame was developed in iterations, and can still be much improved, but at this stage it is small, lightweight and rigid enough. We wanted the battery to be as central as possible while keeping it all compact.

Prototype frame designed in FreeCAD.

Assembly and tuning

The 3D printed frame came out quite well and weighed in at 13g. After soldering the Bolt connectors to the ESCs, attaching motors and props, adjusting the battery cable and soldering an XT30 connector to the Li-Ion battery, it all weighed ~103g, of which the battery accounts for 45g. It feels quite heavy compared to the Crazyflie 2.1 and we had a lot of respect for it when we test flew it the first time. Before we took off we reduced the pitch and roll PID gains to roughly half, and luckily it flew without problems and quite nicely. It is quite loud, but that is kind of expected. After increasing the gains a bit we felt quite pleased with:

#define PID_ROLL_RATE_KP  70.0
#define PID_ROLL_RATE_KI  200.0
#define PID_ROLL_RATE_KD  2
#define PID_ROLL_RATE_INTEGRATION_LIMIT    33.3

#define PID_PITCH_RATE_KP  70.0
#define PID_PITCH_RATE_KI  200.0
#define PID_PITCH_RATE_KD  2
#define PID_PITCH_RATE_INTEGRATION_LIMIT   33.3

#define PID_ROLL_KP  7.0
#define PID_ROLL_KI  3.0
#define PID_ROLL_KD  0.0
#define PID_ROLL_INTEGRATION_LIMIT    20.0

#define PID_PITCH_KP  7.0
#define PID_PITCH_KI  3.0
#define PID_PITCH_KD  0.0
#define PID_PITCH_INTEGRATION_LIMIT   20.0

This would be good enough for what we really wanted to try: the endurance with a Li-Ion battery. A quick measurement of the current consumption at hover gave 5.8A, from which we estimated up to ~30 min of flight time on a 3000mAh Li-Ion battery. Wow! But first, a real test…
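The back-of-the-envelope estimate behind that number:

\[
t \approx \frac{3000\ \mathrm{mAh}}{5800\ \mathrm{mA}} \approx 0.52\ \mathrm{h} \approx 31\ \mathrm{min}
\]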

Hover test

For the hover test we used lighthouse 2, which is starting to work quite well. We had to change the weight and thrust constants in estimator_kalman.c for the autonomous flight to work:

#define CRAZYFLIE_WEIGHT_grams (100.0f)

//thrust is thrust mapped for 65536 <==> 250 GRAMS!
#define CONTROL_TO_ACC (GRAVITY_MAGNITUDE*250.0f/CRAZYFLIE_WEIGHT_grams/65536.0f)

After doing that, we created a hover script that hovers at 0.5m height and lands when the battery voltage reaches 3.0V. We leaned back with excitement, behind a safety net, and started the script… after 19 min it landed. Good, but not what we had hoped for, and quite far from the calculated 30 min. Maybe Li-Ion isn’t that good when it needs to provide more current…? A quick internet search showed that Li-Ion can run all the way down to 2.5V, but we have to stop at 3.0V because of the electronics and loss of thrust, so we are missing out on quite a bit of energy… Further investigation is needed.

Lighthouse 2 flight test

As a final test we launched some flight scripts to fly in a square and in a spiral, so we would get a feel for the Lighthouse 2 + Bolt + PID controller combination. We think it turned out quite nicely, and this with almost no optimization effort:

Summary

Li-Ion felt like it could be a game changer when it comes to flight time, but the result was not as good as we had hoped for. That doesn’t mean we can’t get there, though. More research and development is required.

We’re happy to announce that we have taken an important step forward in the development of the lighthouse positioning system: we have improved the calibration compensation. The changes improve the correctness of the coordinate system, especially for lighthouse V2 base stations.

As mentioned in this blog post, one of the remaining areas to solve was the handling of calibration data, and this is what we have addressed lately. In the manufacturing process, mechanical elements are mounted within some tolerances, but since the precision of the system is so good, even very fine tolerances make a big difference in the end result. Each base station is measured in the factory and the calibration data describing these imperfections is stored in the base station. The calibration data is transmitted in the light sweeps to enable a receiver to use it to correct the errors in the measured angles.

As with everything else related to lighthouse, there is no official information on how to interpret the calibration data, so we (and the community) have had to make educated guesses.

Lighthouse 1

The compensation model for lighthouse 1 has been known for quite a while, see the Astrobee project by NASA and Libsurvive. The most important parameter is the phase, and until now this is the only part of the calibration data that we have used in the firmware. In the new implementation we use all the parameters.

The parameters of the lighthouse 1 calibration model are phase, tilt, gib mag, gib phase and curve.

Lighthouse 2

The compensation data for lighthouse 2 is similar to lighthouse 1, but there are two new parameters: ogee mag and ogee phase. It also seems as if some parameters that share names between lighthouse 1 and 2 have different meanings, for instance curve.

Libsurvive has implemented compensation for lighthouse 2, but we have unfortunately not managed to use their work with good results; instead we have tried to figure out what the model might look like and match it to measurements. We have managed to get good results for phase, tilt, gib mag and gib phase, while we don’t know how to use curve, ogee mag and ogee phase. The solution seems to be pretty good with this subset of the parameters, and we have decided to leave it at that for now.

Use of calibration data

The way we have used the calibration data so far has been to apply it to the measured angles to get (more) correct sweep angles, which have then been fed into the position estimation algorithms. The problem is that the compensation model is designed the other way around, i.e. it goes from correct angles to measured angles, and an iterative approach is required to apply it to the measured angles. A better way (most likely by design) is to apply it in the kalman estimator instead, where it simply becomes part of the measurement model.
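To illustrate the iterative approach (a sketch, not the firmware code): let comp() be the calibration model that maps a true angle to the angle the base station would report. The corrected angle can then be found with a fixed-point iteration, which converges quickly because the compensation terms are small:

// Hypothetical compensation model: true angle -> measured angle
extern float comp(float trueAngle);

// Invert comp() iteratively: find 'corrected' such that
// comp(corrected) == measured.
static float invertCompensation(float measured) {
  float corrected = measured;  // initial guess: no compensation
  for (int i = 0; i < 5; i++) {
    corrected += measured - comp(corrected);
  }
  return corrected;
}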

Currently we do calculate the corrected angles as well and expose them as log data, but it is not required for the standard functionality of the lighthouse system. We may make it possible to turn it on/off via a parameter in the future to save some CPU power.

Functional improvements

So what kind of improvements will the calibration add?

The first improvement is the base station geometry estimation. With more correct angles, the estimated base station position and orientation will be better. This is important for getting a good estimate of the Crazyflie position, since poor geometry data will give the position estimator conflicting data.

Secondly, more correct angles will straighten the coordinate system. With angular distortion, the position estimator cannot estimate the correct position and the coordinate system will be warped, bent or stretched. The improvement can be seen when flying parallel to the floor at constant height, for instance.

Thirdly, the stability will hopefully be improved. When the angles from two base stations match better, the estimated position will change less when one base station is occluded, which generally makes life easier for the position estimator. We will take a look at the outlier filter to see if it can be improved as well.

Remaining problems

The calibration data is transmitted as a part of the sweeping light planes at a low bitrate. For lighthouse 1, the decoding process works well and all calibration data is usually received within 20-30 seconds. For lighthouse 2, it does not work as well; in our current implementation it takes (much) longer before all data has been received correctly from both base stations.

It is possible to get the calibration data via the USB port on lighthouse 2, and we are considering storing the calibration data in the Crazyflie somehow instead. This will be even more important when we support larger systems (2+ base stations), where all base stations are not within range at startup.

During the summer we discussed at the office what would be a good substitute for not being able to go to conferences or fairs anymore (see this blog post). We tossed around a few ideas, ranging from organizing an online competition to a seminar. Although we were initially quite enthusiastic about organizing the competition, the user questionnaire from the previous blog post showed us that many of you are more interested in online tutorials. Based on that, we actually started to make more step-by-step guides; however, we definitely agree that it is not the same as meeting each other face-to-face!

So now we are organizing one for real this time! Our first online live tutorial will be on:

Wednesday 4th of November, 18:00 (CET, Malmö Sweden)

Register for the first session here to indicate your interest and to receive up-to-date information. There is of course no cost involved!

First topic: Spiraling Swarm Demo (Live!)

For the last couple of years we have been showing our demo at many robotics conferences and fairs, such as ICRA, IMAV and IROS. Since we do not have an opportunity to do that anymore (at least for the foreseeable future), we thought that a suitable first topic for the online tutorial would be the Spiraling Swarm demo! We will go through the different elements of the demo, including the implementation details on the Crazyflie and the Lighthouse positioning system. We hope to explain it all in about 20-30 minutes, which should enable you to set the demo up yourself if you want.

We have been thinking about just doing a prerecorded tutorial; however, we also really like talking with our users about their needs and research topics. That is why we think it is important to do it live, where we can answer your questions on the go or after the tutorial. This also means that we will be demonstrating the demo live as well! Afterwards we will have a social gathering where we can have a friendly chat :)

Mozilla Hubs and Discord

There are so many options for how exactly to host this event, as there are a gazillion alternatives for video conferencing. Currently we are looking at Mozilla Hubs, which fits nicely with our interest in the lighthouse positioning system with the HTC Vive base stations. The nice aspect of Hubs is that you don’t need a fancy headset to join, since it is possible to join via your browser or your phone. I (Kimberly) joined a Virtual Reality seminar at the beginning of the pandemic, organized by Roland Meertens of pinchofintelligence.com, and it was definitely a very interesting and fun experience. When giving a presentation, it really felt like people were paying attention and engaged. So, we recently recreated our own flight lab in VR (using Hubs’ environment creator Spoke) and tested it out ourselves. This way you will be able to see our workplace as well!

Of course, we can imagine not everybody is eager to go full VR. That is why we will combine the online tutorial with Discord, where we will make a video channel to stream the live demo and tutorial. It will also be possible to send messages that are visible in both the VR space and the Discord chat channel using Hubs’ Discord bot. You can choose how to follow the tutorial: fully in VR, or first Discord and afterwards socializing in VR. That is totally up to you.

We still need to figure out the specifics, but if you register with your email we will send all the necessary information for the first session to you directly.

IOT conference Malmö

Now for something else: tomorrow, namely Tuesday the 5th of October, we will also present at the IOT conference 2020 in Malmö. It is free for participants and it is still possible to register! Come and join if you cannot wait until the 4th of November to see us.

For a long time, issue #270 has been bugging us. It caused the µSD-card logging to fail when used in combination with either the Flow or Loco deck, or actually any deck that uses the deck SPI bus. Several attempts have been made to fix this issue over time, and recently we decided to really dig into it. There was a workaround that moved the µSD-card to a different SPI bus, but that was tedious and required patching the deck. So it was time to fix this for good, or at least know why it doesn’t work. An SPI bus is designed to be a multi-device bus, so it should be possible… Timing problems are still tricky, but that is another story.

The problem

The SPI driver protects the bus with a mutex to prevent several clients from accessing it at the same time. After some digging we found that the FatFs integration layer was buggy and that the SPI bus handling wasn’t well done. After comparing it to some other open implementations, we concluded that it needed to be rewritten.

The solution

After rewriting part of the integration layer to have a clear path for when the SPI bus is taken and when it is released, we immediately got some good results: µSD-card logging with the Flow and Loco decks worked, hooray! There is of course a limit to this; as we mentioned earlier, the bus is a shared resource, and if it is too congested, things will slow down or stop working. This is currently the case when the LPS is put in TWR mode. TWR is very chatty and causes around 15k transactions per second on the SPI bus, and since it has higher priority than the µSD-card logging, the µSD-card write task starves, causing the logging to fail.
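The access pattern the rewritten glue follows is roughly the one below; a sketch assuming the deck SPI API in deck_spi.h (the sdWriteBlock function and the chip-select handling are illustrative):

#include <stddef.h>
#include <stdint.h>
#include "deck.h"

// Claim the shared deck SPI bus for exactly one transaction, then
// release it so other deck drivers (e.g. Flow, Loco) can run.
static void sdWriteBlock(const uint8_t *tx, uint8_t *rx, size_t len) {
  spiBeginTransaction(SPI_BAUDRATE_21MHZ);  // takes the bus mutex, sets clock
  // ...assert the µSD chip-select line here (GPIO, omitted)...
  spiExchange(len, tx, rx);                 // one complete SPI exchange
  // ...de-assert the chip-select line...
  spiEndTransaction();                      // releases the bus mutex
}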

µSD and LPS SPI bus captured with a logic analyzer, over 50ms
µSD and LPS SPI bus captured with a logic analyzer, over 6ms

So if you stay away from the LPS in TWR mode, µSD-card logging should now work fine. I’m pretty sure there is a workaround for TWR mode as well; a first guess is that you would need to slow down the TWR update rate, which is now at its maximum.

Happy logging!