Author: Kristoffer Richardsson

Our Ultra Wide Band (UWB) based positioning system, the Loco Positioning System, has been around for a long time and is still going strong! In this post we will tell you a bit about how it works (for those that don’t know about it yet), what research is ongoing in the field and what is new in development.

Crazyflie with Loco deck

Basics

UWB uses high frequency, low power, wide band radio where one of the most important properties is that it is possible to detect when a packet is received with very high accuracy. Combining this with very high frequency clocks opens up the possibility to measure the time it takes for a radio packet to travel from a transmitter to a receiver. Since radio waves propagate at the speed of light in air, we can convert the time into distance, and this is the basic idea in UWB positioning.
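To get a feel for the numbers, here is a back-of-the-envelope sketch in python (illustration only, not the actual firmware code) of the time-to-distance conversion:

# Time of flight to distance, back-of-the-envelope style
SPEED_OF_LIGHT = 299792458.0  # m/s (the speed in air is marginally lower)

def tof_to_distance(tof_seconds):
    return SPEED_OF_LIGHT * tof_seconds

# One timestamp tick of the UWB chip is roughly 15.65 ps,
# which corresponds to a distance resolution of about 4.7 mm
print(tof_to_distance(15.65e-12))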

Not only is it possible to measure the timing of transmissions, the packets can also contain data, like in other radio standards. This property is extensively used to include time stamps of when a packet is sent, and also for instance the time stamp of when the transmitter received other packets or the position of an anchor.

This sounds pretty straightforward, but there are (of course) some complications. We will mention some of them but not go into the details.

  • Reflections – radio waves bounce around on walls and objects. Luckily, the nature of UWB actually turns this into an advantage, and the system works better indoors than outside.
  • The clocks in the transmitter and receiver are not synchronized – the Time Of Flight can unfortunately not simply be measured by subtracting reception time from transmission time as the time stamps originate from two different clocks. The problem can be solved by sending some more packets back and forth though.
  • Packet collisions – two transmitters cannot send at the same time; one or both packets will be lost. Transmissions must be scheduled or packet loss must be handled.
  • Obstacles – obstacles between the transmitter and receiver change the transmission time.
  • Antennas – the propagation time through the antenna is substantial and changes depending on the angle to the transmitter/receiver.
  • Radio interference – other radio sources may interfere with the UWB radio signals and add noise or packet loss.

Modes

The Loco Positioning System can run in two fundamentally different modes: Two Way Ranging (TWR) and Time Difference of Arrival (TDoA).

Two Way Ranging (TWR)

In TWR the Crazyflie measures the distance to one anchor at a time, over and over again. Each measurement is initiated by the Crazyflie and requires 4 messages to be sent between the Crazyflie and the anchor, two request-response pairs. The position is estimated by pushing the measured distances into the Kalman estimator.
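The unsynchronized clocks cancel out because only locally measured durations are used on each side. A sketch of the classic double-sided two-way ranging math that schemes like this boil down to (illustration only, not a copy of the firmware code):

# Double-sided two-way ranging: t_round1/t_reply1 are measured by one side,
# t_round2/t_reply2 by the other, so no common clock is needed.
SPEED_OF_LIGHT = 299792458.0  # m/s

def twr_time_of_flight(t_round1, t_reply1, t_round2, t_reply2):
    return ((t_round1 * t_round2) - (t_reply1 * t_reply2)) / \
           (t_round1 + t_round2 + t_reply1 + t_reply2)

def twr_distance(t_round1, t_reply1, t_round2, t_reply2):
    return SPEED_OF_LIGHT * twr_time_of_flight(t_round1, t_reply1, t_round2, t_reply2)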

This mode only supports one Crazyflie, but has the advantage of being very robust and also works pretty well some distance outside the system.

Time Difference of Arrival (TDoA)

In TDoA the setup is different: the anchors are transmitting packets while the Crazyflie is passively listening to the traffic. From the received information it is unfortunately not possible to measure the distance to the anchors, but what we can get is the difference in distance to two anchors. For example, we might know that we are 0.54 meters closer to anchor 3 than to anchor 6. It is possible to calculate the position from this information and, similarly to TWR, the measurements are pushed to the Kalman estimator for further processing.
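For the estimator, a TDoA measurement is compared against the difference in distance that a candidate position would produce. A small sketch of that measurement model (illustration only, not the on-board code):

import numpy as np

def predicted_tdoa(position, anchor_a, anchor_b):
    # Distance to anchor_b minus distance to anchor_a, in meters
    return np.linalg.norm(position - anchor_b) - np.linalg.norm(position - anchor_a)

# Being 0.54 m closer to anchor 3 than to anchor 6 means that
# predicted_tdoa(p, anchor_3, anchor_6) is about 0.54 for the true position p.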

This mode supports an unlimited number of Crazyflies (swarms) but is less robust than TWR, especially outside the system. TDoA is similar to how GPS works.

Research

Many researchers use the Loco System; some use it as a positioning system and investigate topics like path planning, while others look at questions related to the UWB positioning itself. We will not try to mention everyone, as we probably only know of a small fraction of what is going on (please tell us!), but we would like to point out two areas of research.

The first is related to improving the estimated position by handling measurement errors and the environment in a better way. Examples of this are compensating for differences in reception angle or handling obstacles in the space. We would like to mention Wenda Zhao’s work at the Dynamic Systems Lab, University of Toronto. He has contributed the robust TDoA implementation in the Kalman estimator (blog post) as well as a public TDoA data set.

The second is inter-drone ranging, that is measuring the distance between drones in addition to, or instead of, drone-to-anchor measurements. Examples here are the work by Dr Feng Shan at the School of Computer Science and Engineering, Southeast University, China (blog post) and Professor Klaus Kefferpütz’s work at Hochschule Augsburg on “Crazyflie quadcopter in decentralized swarming”, as presented at the BAM days last year.

Experimental functionality

Even though there has not been a lot of code committed lately in our repositories related to the Loco Positioning System, it has been simmering in the background. We would like to mention what is cooking in the pots and some of the stuff that has been discussed or tested.

System size

An 8 anchor Loco Positioning System can cover a flight space of around 8×8 meters, but from time to time we get questions about larger systems. TDoA3 was designed with this in mind and supports up to 255 anchors, which in theory would make it possible to build larger systems. This functionality was implemented 4 years ago but we never really tested it(!). Finally we collected all the anchors in the lab and set up 20 anchors in the same system, and it worked! This should make it possible to extend systems to at least 15×15 meters, and maybe even more with some clever radio cell planning.

Another possibility to enlarge a system is to tweak the radio settings to make them reach further. There is a “Longer range” mode in TDoA3 that lowers the bit rate, but again it had not really been verified. This was also tested in the latest Loco frenzy and with some minor modifications it worked the way we hoped, with 20 anchors! The tests mainly verified that the anchors play nicely together, so we are not sure about the maximum range (to be tested), but we believe distances of up to 40 meters between anchors are possible. To use this feature you should make sure to use the latest firmware for the Loco Nodes as well as the Crazyflie.

The two features mentioned above should hopefully make it possible to go big and we hope it could be used for shows for instance.

TDoA3 hybrid mode

If one looks at the messages sent in a TDoA system, the anchors are actually doing TWR with each other, while the Crazyflie(s) are just listening to the traffic; the possibility to extract the position is a nice “side effect”. Now imagine if the Crazyflies were to send some messages from time to time: then they could act as “dynamic” anchors, or do inter-drone ranging with each other. This is something we call TDoA3 hybrid mode.

Currently there is no official implementation of the Hybrid mode, only some experimental hacks. Some researchers have done their own implementations, but we hope, at some point, to generalize the functionality and integrate it into the firmware.

Read more

If you are interested in reading more about positioning and the Loco system, you can take a look at the following link list.

Summer time!

Summer is coming and with that vacations, yay! There will always be someone at the office to help you if you need it, and we will handle shipping throughout the summer, but it might take a bit longer than usual.

We hope you all have some great summer months!

We recently added improved support for assert information in the client and wanted to take this opportunity to describe some of the features in the console tab of the client that are useful for debugging and profiling.

Example of the console tab

The console tab in the python client is where you can get real time logs from the Crazyflie when connected. Any DEBUG_PRINT() statements in the Crazyflie firmware will pop up here, which is a simple way of adding debug information to your firmware. The console logs are buffered in the Crazyflie and dumped to the client when you connect, which is why you will see the start up information when connecting to a Crazyflie. If too much information is logged and the buffer fills up, you will unfortunately lose some of it, but you will be notified by a “<F>” marker in the console window.

On the right side of the console tab window you will find some useful buttons, the first being the “Clear” button that simply clears the console window.

Task dump

The “Task dump” button will print a list with information about the FreeRTOS tasks running in the system, for instance something like this.

SYSLOAD: Task dump
SYSLOAD: Load	Stack left	Name
SYSLOAD: 0.19 	205 		Tmr Svc
SYSLOAD: 83.70 	127 		IDLE
SYSLOAD: 0.01 	213 		CRTP-RX
SYSLOAD: 0.0 	54 		PWRMGNT
SYSLOAD: 0.70 	131 		LH
SYSLOAD: 0.0 	117 		CRTP-SRV
...

The “Load” column shows how much of the total time was spent in each task since the previous measurement (or boot). To get useful values when profiling a specific operation, you probably want to make one dump at the start of your measurement and a second one at the end to get the average during that specific time.

The “Stack left” column shows how many bytes of stack are left for each task; this is the worst recorded number in the period. Stack usage is recorded at task switch time, which means that more stack may actually have been used at some point, but it should give a good indication if a task is running out of stack.

Assert info

Next up is the new “Assert info” button, which dumps assert or crash info to the console. When the STM CPU encounters a hard fault or some other condition that resets the CPU, it records some basic crash information in a specific part of the RAM. This special RAM is not reset when the STM reboots, and the information is automatically dumped to the console log for investigation during the start up sequence. The “Assert info” button simply dumps the same information again, which may not sound very useful. But in some cases a client may auto-reconnect to a crashed Crazyflie, consume the console log and dispose of it before a human has had the opportunity to look at it. In this case you can simply connect the client to the Crazyflie and click the “Assert info” button to get the information again.

Propeller test

The “Propeller test” button runs an automated test of the propellers and measures vibrations in the platform to determine whether they are well balanced or not. The result is printed in the console window, like this (looks like it is time to change one of my propellers!):

HEALTH: Acc noise floor variance X+Y:0.004469, (Z:0.002136)
HEALTH: Motor M1 variance X+Y: 4.17 (Z:0.55), voltage sag:0.35
HEALTH: Motor M2 variance X+Y: 0.22 (Z:0.42), voltage sag:0.37
HEALTH: Motor M3 variance X+Y: 1.23 (Z:0.21), voltage sag:0.35
HEALTH: Motor M4 variance X+Y: 1.09 (Z:0.17), voltage sag:0.31
HEALTH: Propeller test on M1 [FAIL]. low: 0.0, high: 2.50, measured: 4.17
ESTKALMAN: WARNING: Kalman prediction rate low (82)

Battery test

The final button is the “Battery test”. It tests if the battery is worn out by spinning the motors and measuring the drop in voltage. A large voltage drop indicates that the battery probably is bad, but it can also be caused by other sources of extra resistance in the power path, for instance oxide on the battery connector. Use it as an indication only!

Note: Only use this test for the Crazyflie 2.x, not the Bolt or BigQuad.

The result of this test is printed in the console log:

HEALTH: Idle:4.15V sag: 0.67V (< 0.95V) [OK]

The console side-by-side other tabs

It is possible to add the console log as a tool box at the bottom or one of the sides of the client. In the “View” menu, choose “toolboxes” and click “Console”. A toolbox window with the console log will appear at the bottom of the screen which can be handy as it will be visible even if you switch to another tab.

The Plotter tab with the console as a toolbox

Other debug tools

This post has focused on the console tab, but there is of course other functionality that is useful when debugging your system. We will end by quickly mentioning some of it:

There has been some background work going on related to the Lighthouse system, as mentioned in a previous blog post. The solution has been improved since that blog post and we believe the functionality is now at a level where it works pretty well and can add value to most Lighthouse users.

How to use it?

We have added brief documentation to get you started. Though the solution has been stabilized, it is still a bit experimental and it has not been fully integrated into the client yet. The base station geometry estimator still has to be run as a python script from the command line, and a reconfigured version of the Crazyflie firmware has to be built and flashed.

We have added some improvements to the client though, to enable it to display base station status for 2+ base stations. This was the final part of the client UI that did not support 2+ base stations; now only the possibility to run the new geometry estimation from the client remains.

Benefits

What kind of improvements does it bring?

First of all, the functionality to use more than 2 base stations and the possibility to cover a larger flight space. It also makes it possible to set up multi-room systems to support flight from one room to another.

Secondly, an improved estimation of the base station geometry (also when using 2 base stations) that generally reduces the errors and improves the position estimation of the Crazyflie when flying. “Jumping” of the estimated position when one base station is occluded should be reduced. When following a trajectory that is a straight line through space, the Crazyflie should now actually fly on a fairly straight line; previously the flown path might have been a bit curved.

The new solution is a better match to the physical world, and hopefully the estimated Z will be closer to zero when the Crazyflie is on the floor. With the “old” method, the solution sometimes is slightly tilted, with Z != 0 in some areas.

Problems

Most of the Lighthouse system works just like before; the new functionality is related to base station geometry estimation. The “standard” geometry estimation is still available in the client and if you continue to use it nothing has changed. The following list applies to the new estimation method:

  • The new geometry estimation is a bit clunky to use and the user still has to rebuild the firmware and run a python script.
  • Lighthouse 1 is not fully supported
  • The new geometry estimation does not work with one base station.

We hope to address the above problems in future releases.

Release

Talking about releases, we are working on a new official release. If no unforeseen obstacles are found, we plan to make a new release within a week or two.

The functionality discussed in this blog post is still only available in source code, on master or possibly in some pull requests. If you wait for the release, all repositories will be synchronized, which should make it a bit easier to try out.

Feedback

As the environment of the system has an impact on this type of functionality, we would love to get feedback from you if you try it out. We’d love to hear how it works for you!

The Toolbelt has been around for a pretty long time but we have not been that good at promoting it and documentation is unfortunately a bit sparse. In this blog post we will talk a bit about the Toolbelt and how to use it.

The basic idea behind the Toolbelt is to provide an easy-to-access tool that helps the user do common tasks in Bitcraze projects, without installing a lot of special tool-chains, libs or programs. The intention is also to harmonize the use in all our projects to make them as similar as possible and reduce the cognitive load when switching between repositories.

A functional view

After a standard installation (see below), the Toolbelt is available using the tb command and it is intended to be executed from the root of (almost) any Bitcraze repository file tree. You can run tools (commands) from the toolbelt with the extra spice that they run in the required environment, for instance when building the firmware the correct compiler is automatically available.

Without any arguments the Toolbelt will display a brief help. For instance if I run it in the root of the crazyflie-firmware repository I get this:

kristoffer@kristoffer-XPS-13-9310:~/code/bitcraze/crazyflie-firmware$ tb
Usage:  tb [-d] tool [arguments]
The toolbelt is used to develop, test and build Bitcraze modules. When the toolbelt is called, it will first try to find the tool in the belt, after that it will try the tools in the module if the working directory is the root of a module. Module tools are executed in the context of a docker container based on the module requirements configured in the module.json config file.

-d:  print the docker call that executes the tool

Tools in the belt:
  help, -h, --help - Help
  update - Update tool belt to latest version
  version, -V, --version - Display version of the tool belt
  ghrn - Generate release notes from github milestone
  docs - Serve docs locally

Tools in the current module:
  build
  test
  compile
  check_elf
  make
  test_python
  clean
  build-docs

We can see that there are two groups of tools: “Tools in the belt”, which are available in all repositories, and “Tools in the current module”, which are specific to the current repository.

To run a tool, simply use tb and the command. To build the firmware for instance, you can use make

kristoffer@kristoffer-XPS-13-9310:~/code/bitcraze/crazyflie-firmware$ tb make
Running script tools/build/make in a container based on the bitcraze/builder docker image as uid 1000
Using default tag: latest
latest: Pulling from bitcraze/builder
Digest: sha256:bee591d94db757465b88338c69be847cdf527698f0270ea1a86a2ccaa3c9845d
Status: Image is up to date for bitcraze/builder:latest
docker.io/bitcraze/builder:latest
make: Entering directory '/module'
  CLEAN_VERSION
  CC    stm32f4xx_dma.o
  CC    stm32f4xx_exti.o
  CC    stm32f4xx_flash.o
  CC    stm32f4xx_gpio.o
...

We can see that the firmware is built, but we did not have to set up our environment beforehand as instructed on the build/install page. The Toolbelt handled that by providing a precompiled development environment that does the firmware compiling for us.

I will not go through all the tools but I’d like to mention the docs command. The docs command starts a web server and renders a simplified version of the documentation for a repository (in the docs directory). It is useful when browsing or editing the documentation.

Implementation

The Toolbelt is based on Docker and it runs in a container. When executing a tool, the Toolbelt starts a second container where the tool runs. This second container is called a builder and it contains all the software required to execute the tool. There are a few different builders with tool-chains appropriate for various languages, CPUs and so on; luckily the Toolbelt picks the correct one automatically.

The directory that the tool is executed from is mapped into the docker containers and this is how the tools access the files, for instance when compiling.

In the example above we can see that the Toolbelt is pulling the latest version of the bitcraze/builder image from docker hub to have the latest and greatest builder when running make. It takes a while to download the builder image the first time (or when it has been updated), but most of the time no download is needed and starting a tool takes only around 1 second.

The builder images are also used by our build servers for CI and release builds, which means that building with the Toolbelt replicates the exact same environment as on our build servers.

The tools that are specific to a repository can be found in the tools/build and tools/build-docs directories. They are usually bash or python scripts and often they can also be executed without the toolbelt if you have the appropriate software installed on your system.

The source code for the Toolbelt is available on GitHub. You can also find the source code for the builders on GitHub, search for “builder”.

Installation

The Toolbelt is mainly designed for Linux-like environments and works on MacOS and in WSL (Windows Subsystem for Linux). Some operations are slowish on Mac as file access is a bit slower from docker containers.

To run the Toolbelt you need to have Docker installed on your system, after that installation is as simple as adding an alias to your .bashrc (or similar). For instructions run:

docker run --rm -it bitcraze/toolbelt

Native installation VS Toolbelt VS Virtual machine

There are three paths for building and working with Bitcraze source code; native install, the Toolbelt and the VM (Virtual machine). They all have their pros and cons.

Native installation

All build tools installed on the machine.

Pros: fast, access to USB and Crazyradio which enables flashing of firmware, can use your standard development environment

Cons: Possible compatibility issues with other software on the system. Must maintain installation and upgrade from time to time.

Toolbelt

Pros: highly separated from the OS, automatically updated with the appropriate tools and versions

Cons: cannot access USB and the Crazyradio – flashing not possible. No access to GUIs – cannot run the client

VM

Pros: Everything ready in one place, also supports USB, Crazyradio and flashing. Client works. Highly separated from the OS.

Cons: A bit bulky

Conclusions

The Toolbelt is an option for users that are interested in working with the source code for the Bitcraze ecosystem, but do not want to put too much time into installing tool-chains and setting up environments. It does not solve all problems but hopefully simplifies some tasks.

Any feedback is welcome!

Base station geometry estimation is a function in the python client (in the lighthouse tab), where the system estimates the position and orientation of the base stations. The user places the Crazyflie on the floor (in the desired origin) and clicks a button to measure the angles to the base stations, which are used to estimate the geometry. The current implementation is fairly basic and has some issues associated with it:

  1. All base stations must be received from the point where the Crazyflie is located
  2. Only 2 base stations are supported
  3. The coordinate system is not properly aligned with the room
  4. The generated geometry is not as good as it could be, that is, the position/orientation is sub-optimal
  5. The code has a dependency on OpenCV which causes problems for ROS users

I have been working on a solution for these problems as my fun Friday project and in this blog post I will tell you a bit more about the problems and a possible solution.

Screenshot from the client of a geometry with 4 base stations.

What are the problems to be solved?

In the current implementation, the user places the Crazyflie in the origin, with the front of the Crazyflie pointing in the direction of the positive X-axis. When the user hits the “Estimate Geometry” button, the angles to the visible base stations are recorded and the solvePnP() function in OpenCV is used to estimate their poses (position and orientation). This is all fine but it also has its limitations, and in the following sections we will outline what the limitations are and how to solve them.
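Conceptually, the base station can be thought of as a camera and the four deck sensors as known 3D points, with the measured sweep angles converted to normalized image coordinates. A simplified sketch of that idea in python (the sensor coordinates are placeholders and this is not the exact client code):

# Simplified sketch of the solvePnP based idea (not the exact client code)
import numpy as np
import cv2

# Positions of the four Lighthouse deck sensors in the Crazyflie frame (meters).
# These coordinates are illustrative placeholders.
sensor_positions = np.float32([
    [-0.015, 0.0075, 0.0],
    [-0.015, -0.0075, 0.0],
    [0.015, 0.0075, 0.0],
    [0.015, -0.0075, 0.0],
])

def estimate_deck_pose(horizontal_angles, vertical_angles):
    # Treat the base station as an ideal pinhole camera: the measured sweep
    # angles become normalized image coordinates via the tangent
    image_points = np.float32([
        [np.tan(h), np.tan(v)] for h, v in zip(horizontal_angles, vertical_angles)
    ])
    camera_matrix = np.eye(3, dtype=np.float32)  # identity intrinsics
    ok, rvec, tvec = cv2.solvePnP(sensor_positions, image_points, camera_matrix, None)
    return ok, rvec, tvec  # pose of the deck in the base station frame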

All base stations must be received in the origin and only 2 base stations are supported

Currently the Crazyflie ecosystem supports up to 2 base stations and this works fine for a flight space of around 4×4 meters. With more base stations it would be possible to cover larger areas or multiple rooms, which is a feature that many users have been asking for. In these scenarios it will no longer be possible to receive all base stations from one position though, and a new method for geometry estimation using multiple measurements will be required. Suppose base stations 1 and 2 are received in one position and 2 and 3 in another; then we can map the measurements together since we know base station 2 must have the same pose in both measurements. This way it is possible to relate all base station poses to each other, provided there are measurements that link them together.

The coordinate system is not properly aligned with the room

When generating the geometry in the current implementation, the orientation of the Lighthouse deck is used to define the coordinate system: forward of the deck defines the X-axis, left defines the Y-axis and up the Z-axis. The problem is that the deck might not be perfectly aligned with the Crazyflie, the floor might not be completely flat or the Crazyflie might not point exactly in the desired direction. A pretty small misalignment will result in fairly large errors a couple of meters away, resulting in unexpected behavior, for instance not flying at constant height. Expanding to more base stations and larger systems, the problem will become even bigger and a better solution is clearly needed.

If we measured the position of the Crazyflie when placed at multiple known positions, we could use this information to rotate the coordinate system to be better aligned. For instance, suppose we measured some points on the floor of the flight space; then we could make sure the XY-plane of the coordinate system goes through those points, or at least as close as possible. Similarly, one or more measurements along the X-axis would help to define the rotation around the Z-axis.

The generated geometry is not as good as it could be

The lighthouse positioning system is based on measuring the angles between the sensors on the Lighthouse deck and the base stations. One can think of it as four beams or rays, going from each base station to the sensors on the deck, for which we measure the direction very precisely seen from the base station’s point of view. The purpose of the geometry estimation is to figure out the position and orientation of the base stations so that we can calculate how the beams are oriented in the flight space instead. By looking at where the beams from two base stations intersect we know where the sensors are located and can calculate the position of the Crazyflie. This is a somewhat simplified picture of how it works but it is sufficient for the following discussion.

So what happens if the geometry is not completely correct? If the estimated positions or orientations of the base stations are slightly off, the beams will not intersect and we have to use some method to find the point closest to the two beams to use as the sensor position instead. In a real world system there will always be errors and the implementation must be able to handle them, but we want to keep them as small as possible. Furthermore, we want to make sure the errors are uniformly distributed in the flight space so that we get equally good results everywhere.

In the current estimation process, where we take a measurement in one position, we are able to generate a geometry that is good at that point, but due to noise in the measurements and other subtleties the error at the edges of the flight space might be several centimeters.

The solution to this problem is to measure the angles in multiple positions and try to find a geometry where the error is equally small for all of them. It does not guarantee that the error will be equal everywhere, but if we make measurements in the volume we plan to fly in, we know it will be OK where we need it to be. It should also be a much better geometry, for the full covered volume, than what can be achieved by measuring in one point only.

One bonus problem that hopefully will be solved by this approach is the moving back and forth that sometimes can be seen in a Lighthouse 2 system. What happens is that the base stations interfere with each other from time to time (by design) and most of the time the Crazyflie gets positioning information from both base stations, but every couple of seconds only from one of them. When both are available the “average” position is used, but when only one is received, the Crazyflie will “jump” to the position indicated by that base station (the simplified model from above with crossing beams does not hold in this case, sorry!). If the difference between the suggested positions of the two base stations (the error in the geometry) is large there will be a noticeable motion in the Crazyflie.

The code has a dependency on OpenCV

In the current solution we use the solvePnP() function in OpenCV to estimate the geometry. OpenCV is an awesome library, but unfortunately it has turned out that this dependency interferes with ROS, and since a fair amount of our users also use ROS, we would like to get rid of it if possible.

Luckily I found an open source implementation of IPPE, an algorithm that finds the pose of an object based on points seen by a camera, that we can use instead. There is actually an option to use IPPE in OpenCV’s solvePnP(), but we used another flavor.

The solution

The core idea is to first collect measurements of beams in many positions in the flight space by moving a Crazyflie around and recording the lighthouse angles. Secondly, an equation system is created that takes the poses of the base stations and all the recorded Crazyflie poses as input, and as output calculates the lighthouse angles those poses would correspond to for all the sensors. Finally, the output is compared to the recorded values and the poses are adjusted using the least-squares solver in scipy to find the poses that minimize the difference between the measurements and the output of the equation system.
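Heavily simplified, the optimization step could look something like this sketch, where unpack() and predict_angles() are hypothetical helpers that rebuild poses from the parameter vector and compute the angles those poses would produce:

import numpy as np
from scipy.optimize import least_squares

def residuals(params, samples, unpack, predict_angles):
    bs_poses, cf_poses = unpack(params)
    errors = []
    for sample in samples:
        for bs_id, measured_angles in sample.angles.items():
            expected = predict_angles(bs_poses[bs_id], cf_poses[sample.index])
            errors.append(expected - measured_angles)
    return np.concatenate(errors)

# initial_guess comes from the IPPE based procedure described below
# result = least_squares(residuals, initial_guess,
#                        args=(samples, unpack, predict_angles))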

Before we can solve the equation system we have to record the angles from the base stations. There is a handy function in the Crazyflie that pushes measured lighthouse angles to the PC via the radio, and by letting the user move the Crazyflie around in space we get the angles along that path. What we are looking for though are angles collected in discrete positions and as an approximation I group measurements together based on time. The assumption is that if two angle measurements are closer than 10 ms in time, the Crazyflie did not move very far and they can be considered to be taken in the same position. The output of this process is a list of samples where each sample contains the measured lighthouse angles of one or more base stations for one specific Crazyflie pose. After this has been done, the list is filtered to only contain samples with two or more base stations.
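A sketch of the time based grouping (simplified; it assumes each measurement carries a timestamp in seconds and the id of the base station it came from):

def group_measurements(measurements, max_gap=0.010):
    samples = []
    current = []
    for m in sorted(measurements, key=lambda m: m.timestamp):
        if current and (m.timestamp - current[-1].timestamp) > max_gap:
            samples.append(current)
            current = []
        current.append(m)
    if current:
        samples.append(current)
    # Only keep samples where two or more base stations were seen
    return [s for s in samples if len({m.base_station for m in s}) >= 2]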

We also need an initial guess of the base station and Crazyflie poses for the least-squares solver to make the solution converge. I use IPPE for this and use the first sample as the reference to define a temporary global reference frame. Suppose the first sample contains angles for base stations 2 and 3; we can then use IPPE to calculate an estimate of the pose of the two base stations in the Crazyflie reference frame of this sample. Since we use the first sample as the reference for the global reference frame (that is, the pose of the Crazyflie in this sample is the origin by definition), those poses are also equal to the base station poses in the global reference frame.

Suppose the next sample contains lighthouse angles for base stations 1 and 2; using IPPE we can estimate the base station poses for base stations 1 and 2 in the reference frame of the Crazyflie in this sample. Since the relative positions of the base stations are the same regardless of reference frame, we can rotate/translate the poses of the base stations so that the base station 2 pose matches the pose of base station 2 in the first sample. We now have an estimate of the poses of base stations 1, 2 and 3, and furthermore the transformation used represents the pose of the Crazyflie in sample 2. Repeating the process for all samples gives us a pretty good idea of where all the base stations are located, as well as the pose of the Crazyflie in all the samples.
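Expressed with 4x4 homogeneous transforms, linking a new sample into the global frame through a shared base station could look like this (a sketch of the idea, not the actual implementation; T_a_b denotes the pose of b expressed in frame a):

import numpy as np

def link_sample(T_global_bs_shared, T_sample_bs_shared, T_sample_bs_new):
    # The shared base station is known both in the global frame and in the
    # new sample's Crazyflie frame, which gives us the Crazyflie pose...
    T_global_sample = T_global_bs_shared @ np.linalg.inv(T_sample_bs_shared)
    # ...and with that, the new base station can be expressed in the global frame
    T_global_bs_new = T_global_sample @ T_sample_bs_new
    return T_global_sample, T_global_bs_new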

We can now feed the initial guess and the equation system into scipy and hopefully get a refined solution back. From the estimated poses of the base stations and Crazyflie samples it is possible to calculate the distance between sensors and beams which gives us an approximation of how good the solution is.

The final step is to align the coordinate system with the room; as mentioned earlier, the solution we have this far is based on the pose of the Crazyflie in the first sample. The way it is done in the suggested implementation is to ask the user to place the Crazyflie at some points in the desired origin, on the positive X-axis and in the XY-plane, and measure the angles in these positions. The measurements are included as samples in the above process, which means we will get the estimated positions as a part of the overall solution, in the temporary global reference frame. The task at hand is then to find the rotation/translation from the temporary global reference frame to the one indicated by the positions sampled by the user. Again we do a least-squares optimization to find the transformation that minimizes the error in the sampled points. We can now calculate the final solution by applying the transformation to the base station poses we got earlier.

Does it work?

Yes, it seems to work pretty well. I have not had the time to do extensive testing yet, but the results look promising. In our flight arena with 4 base stations, the solution generally seems to be acceptable. We don’t know the exact poses of our base stations since they are very hard to measure, but they are mounted in the same truss and should be at similar heights.

Base stations at:
1: (-3.7104629351065146, -0.27330674065567867, 2.960720481536423)
2: (-0.9233909349006646, -2.9651389799486356, 2.9781503155699176)
3: (-0.12450705551081238, 3.430497907723026, 3.011201684709142)
4: (2.74012584124908, 0.5856524795079388, 3.023133069381165)
Solution match per base station:
1: {'mean_error': 0.0026020322270174697, 'max_error': 0.013310934923630531, 'std_error': 0.0028768923969783836}
2: {'mean_error': 0.0015240237742724164, 'max_error': 0.005526851773945277, 'std_error': 0.0011560341160273498}
3: {'mean_error': 0.002193101969828834, 'max_error': 0.006778096051979129, 'std_error': 0.0015768914826109067}
4: {'mean_error': 0.0033752667182490796, 'max_error': 0.014997173956894249, 'std_error': 0.00354931189334688}

The above snippet is part of the output from one run and as can be seen the estimated height is between 2.96 and 3.02 m. You can also see that the estimated average error for sensor positions is in the order of 2-3 mm while the maximum error is 1.5 cm.

Below is a graph of the recorded Crazyflie positions in the final solution. Note the three single points at the bottom that are from the origin, the X-axis and the XY-plane.

Estimated positions of the Crazyflie

I did some testing on larger systems with 6-8 base stations this Friday and it seemed to be harder to get a solution that converges which indicates that there might be something to look into here.

Try it out

This is still work in progress, but if you want to try it out, you can find the code in this pull request. Run the examples/lighthouse/bs_geometry_estimation.py script, you will get instructions on the screen as you go.

Officially the firmware supports 2 base stations, but most of the code is designed to handle up to 16, and if you want to test the functionality with more than two base stations you have to update PULSE_PROCESSOR_N_BASE_STATIONS and re-flash your Crazyflie.

Any feedback is welcome, please use the pull request.

Sometimes we get the question of where to modify or add code to change some behavior of the Crazyflie. There is no quick answer to this question but we thought that we should write a post to clear up some question marks and give a better idea of how to approach the problem.

There are quite a few repositories on the Bitcraze GitHub page, but two of them are the main focal point for almost any Crazyflie work: the crazyflie-firmware and the crazyflie-lib-python. The crazyflie-firmware repository contains the source code (written in C) for the firmware that runs in the Crazyflie, that is the code responsible for flying, blinking LEDs, communicating with the radio, scanning sensors and so on. The crazyflie-lib-python (often called the python lib) on the other hand runs on the PC side and is the API to use to communicate with the Crazyflie from a script. The crazyflie-lib-python is also used by the python client, which means that anything you see in the client can be done by a script using the python lib.

Let’s assume we have a system of one Crazyflie connected to a computer using a Crazyradio. Now we want to control the Crazyflie and make it take off for instance, how should this be done?

The easiest way would be to use the python lib. The python lib is used to communicate with the Crazyflie and we can use it to send instructions to the Crazyflie, for instance to take off or fly a trajectory. It is also possible to use the parameter framework to change values in the Crazyflie. The main way of monitoring what is going on in the Crazyflie is to use the log framework to read variables from the Crazyflie. The python lib is perfect for controlling the Crazyflie or prototyping ideas as it is very fast to make changes and try things out. The best way to get started with the python lib is to start from an example that already uses the functionality you want to use.
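As a minimal sketch, a take off and a short flight with the python lib could look like this (it assumes a positioning source such as a Flow deck, and the radio URI may need to be adjusted for your setup):

import time
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.positioning.motion_commander import MotionCommander

URI = 'radio://0/80/2M/E7E7E7E7E7'

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    # Entering the MotionCommander context takes off, leaving it lands
    with MotionCommander(scf, default_height=0.5) as mc:
        time.sleep(2)
        mc.forward(0.3)
        time.sleep(1)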

Another option is to add code in the firmware. Originally this was quite hard since the firmware was not initially designed to accept user code. Unless you want to modify already existing code, it is quite hard to find where to add your code so that it runs in the Crazyflie firmware, and you would have to make a fork of the firmware, which can be hard to maintain and keep up to date in the long run. This is one of the things the out-of-tree build and the app layer solve: it is now quite easy to add and run your own C files in the firmware, in your own project, without having to fork the Crazyflie firmware. There is a bunch of examples in the firmware that show how to implement autonomous behavior as an app; the easiest is to start with the hello world example. When it comes to modifying existing functionality in the firmware code, most of the time forking and modifying the official firmware unfortunately is the only solution. We are however working our way towards making more and more of the firmware modular so that it can be expanded out of tree. For example, there has been work to make it possible to implement an out-of-tree estimator.

A prototype written as a python script is often pretty easy to move to the Crazyflie firmware. This is a good pattern when writing an application: rapid prototyping in python and then finalizing in firmware if needed. The best example of that is the push demo. It is a demo where the Crazyflie can be pushed around with the help of the Flow deck for autonomous flight and the Multi-ranger deck for detecting obstacles/hands. We have a python cflib push demo as well as a Crazyflie firmware push demo app.

There is some support in the python lib for interacting with multiple Crazyflies and it is probably a good starting point for simple swarms. For more advanced swarm work, Crazyswarm may be a better option.

If you would like to see some of the process in action, we have made a workshop during our BAM days about implementing functionality both using the python lib and in the firmware as an out of tree app:

We’re happy to announce the availability of the 2021.01 release! The release includes the Crazyflie firmware (2021.01), the python library (0.1.13.1) and the python client (2021.1). The firmware package can be downloaded from the Crazyflie release repository (2021.01) or can be flashed directly using the client bootloader window.

Most of the improvements have been done in the Crazyflie firmware and include:

The App API in the Crazyflie firmware has been extended and improved to be able to handle a wider range of applications. The goal is to enable a majority of users to implement the functionality they need in an app instead of hacking into the firmware itself.

We have improved the Lighthouse support in the firmware and both V1 and V2 are now working well. Even though everything is not finished yet, we have taken a good step towards official Lighthouse positioning.

A collision avoidance module has kindly been contributed by the Crazyswarm team.

A persistent storage module has been added to enable data to be persisted and available after the Crazyflie is power cycled. It will initially be used to store Lighthouse system information, but will be useful for many other tasks in the future.

Basic arming functionality has been added for platforms using brushless motors.

In the client the LPS tab now has a 3D visualization of the positioning system and a new tab has been added to show the python log output.

Unfortunately we have run into some problems with the Windows client build, so it is not available for this release.

Finally we have fixed bugs and worked to improve the general stability.

We hope you enjoy it!

With the pandemic raging in the world, 2021 will most likely not be an ordinary year. Not that any year in the Bitcraze universe has been boring and without excitement so far, but it is unusually hard to make predictions about 2021. Anyhow, we will try to outline what we see in the crystal ball for the coming year.

Products

What products are cooking in the Bitcraze pot and what tasty new gadgets can we look forward to this year?

Lighthouse

We did hope that we would be able to release the first official version of the Lighthouse system in 2020, but unfortunately we did not make it. It has turned out to be more complex than anticipated but we do think we are fairly close now and that it will be finished soonish, including support for lighthouse V2.

Once the official version has been achieved, we are planning to assemble a full Lighthouse bundle, which includes everything you need to start flying with the Lighthouse positioning system. This will also include the base station V2 developed by Valve Corporation, so stay tuned!

New platforms and improvements

We released the AI-deck in early release last year, and it will soon be upgraded with the latest version of the GAP8 chip. For most users this will not change much, but those that really push deep learning to the edge will be quite happy with this improvement. Moreover, we are planning to equip it with the gray-scale camera as standard instead of the RGB Bayer filter version, due to feedback from the community. We are still planning to offer the color camera on the side as a separate product for those that value the color information for their application.

We have also noticed the release of several upgraded versions of sensors for the decks that we already offer today. Pixart and ST have released a new ToF and motion sensor, so we will start experimenting with those soon, which will hopefully lead to a new Multi-ranger or Flow deck. We are also aware of the new DWM3000 chip, which would be a nice upgrade to the LPS system, so we will start exploring that as well; however we are not sure if we will be able to release the new version of the LPS already in 2021.

One of the fields that we have wanted to improve for a while, but have not gotten to so far, is the communication with the Crazyflie. The Crazyradio is using a quite old chip and the communication protocol has hardly been touched in years. There now exists a much more powerful nRF52 radio chip with a USB port, which gives us the opportunity to make a new Crazyradio and, at the same time, rework the communication protocols to make them more reliable, easier to use and easier to expand.

People and Collaborations

Last year we started several collaborations with show-drone oriented businesses, which we are definitely moving forward with in 2021. For shows, stability and performance are very important, so the feedback of those that work with this on a regular basis will be crucial for the further development and reliability of our products.

Moreover, we would like to continue our close collaboration with researchers at institutes and universities, to help them achieve their goals and contribute their work to our open-source firmware and software. Here we want to encourage the community to make their contributions easy to use by others, thereby increasing the reproducibility of the implementations, which is a crucial aspect of research. We are also planning to have more of our online tutorials, like the one we had in November.

We will also be working closely together with one of our very active community members, Wolfgang Hönig! He has done a lot of great work for the Crazyswarm project from his time at the University of Southern California (USC) and has spent the last few years at Caltech. He will be working together with us for a couple of months in the spring, so we are very happy to have him. Moreover, we will also have 2 master students from LTH working with us on the topic of hardware simulation in the spring. We are making sure that we can all work together in the current situation, either sparsely at the office or fully online.

In 2021, we will also keep our eyes open for new potential Bitcrazers! We believe that everybody can add their own unique contribution to the team, and therefore it is important for us to keep growing and get new, fresh ideas and approaches to our problems. Usually we would meet new people at conferences, but we will try new virtual ways to get to know our community and hopefully meet somebody who can enhance our crazy group.

Working from home

Due to the pandemic we are currently mainly working from home and from the looks of it, this will continue for a while. Even though we think we have found a way to work remotely that is fairly efficient, it is still not at the same level as meeting in real life, so there is always room for improvement. Furthermore, the lack of access to the electronics lab, flight lab and other facilities when working from home does not speed work up. We will try to do our best under the current circumstances though, and we are looking forward to an awesome 2021!

This autumn when we had our quarterly planning meeting, it was obvious that there would not be any conferences this year like in other years. This meant we would not meet you, our users, and hear about your interesting projects, but also that we would not be forced to create a demo. Sometimes we joke that we are working with Demo Driven Development and that this is what is pushing us forward; even though it is not completely true, it is a strong driver. We decided to create a demo in our office and share it online instead, we hope you enjoy it!

The wish list for the demo was long but we decided that we wanted to use multiple positioning technologies, multiple platforms and multiple drones in a swarm. The idea was also to let the needs of the demo drive development of other technologies as well as stabilize existing functionality by “eating our own dogfood”. As a result of the work we have for instance:

  • improved the app layer in the Crazyflie
  • added Lighthouse V2 support, including basic support for 2+ base stations
  • added better support for mixed positioning systems

First of all, let’s check out the video

We are using our office for the demo and the Crazyflies are essentially flying a fixed trajectory from our meeting room, through the office and kitchen to finally land in the Arena. The Crazyflies are autonomous from the moment they take off and there is no communication with any external computer after that, all positioning is done on-board.

Implementation

The demo is mainly implemented in the Crazyflie as an app, with a simple python script on an external machine to start it all. The app is identical in all the Crazyflies, so the script tells them where to land and checks that all Crazyflies have found their position before they are started. Finally it tells them to take off one by one with a fixed delay in between.
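A sketch of what such a start script could look like (the parameter names and URIs here are made up for illustration; the real ones are defined by the demo app):

import time
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

# URIs and landing spots are examples only
URIS_AND_LANDING_SPOTS = [
    ('radio://0/30/2M/E7E7E7E701', (0.5, 1.0)),
    ('radio://0/30/2M/E7E7E7E702', (1.0, 1.0)),
]

cflib.crtp.init_drivers()
for uri, (x, y) in URIS_AND_LANDING_SPOTS:
    with SyncCrazyflie(uri, cf=Crazyflie()) as scf:
        # Hypothetical parameter names, for illustration only
        scf.cf.param.set_value('app.landX', x)
        scf.cf.param.set_value('app.landY', y)
        scf.cf.param.set_value('app.start', 1)
    time.sleep(3)  # fixed delay between take offs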

The Crazyflie app

When the Crazyflie boots up, the app is started and the first thing it does is to prepare by defining a trajectory in the High Level Commander as well as setting data for the Lighthouse base stations in the system. The app uses a couple of parameters for communication and at this point it is waiting for one of the parameters to be set by the python script.

When the parameter is set, the app uses the High Level Commander to take off and fly to the start point of the trajectory. At the starting point, it kicks off the trajectory and while the High Level Commander handles the flying, the app goes to sleep. When reaching the end of the trajectory, the app once more goes into action and directs the Crazyflie to land at a position set through parameters during the initialization phase.

We used a feature of the High Level Commander that is maybe not that well known but can be very useful to make the motion fluid. When the High Level Commander does a go_to for instance, it plans a trajectory from its current position/velocity/acceleration to the target position in one smooth motion. This can be used when transitioning from a go_to into a trajectory (or from go_to to go_to) by starting the trajectory a little bit too early, thus never stopping at the end of the go_to but “sliding” directly into the trajectory. The same technique is used at the end of the trajectory to get out of the way faster and avoid being hit by the next Crazyflie in the swarm.
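Since the app API mirrors the python lib, the same trick can be illustrated with the lib’s high level commander; a rough sketch (the demo app does the equivalent in firmware, and the trajectory is assumed to already be defined and uploaded as id 1):

import time

def fly(cf):
    hlc = cf.high_level_commander
    hlc.takeoff(1.0, 2.0)
    time.sleep(2.0)
    # Smooth 3 second motion to the start point of the trajectory...
    hlc.go_to(0.0, 0.0, 1.0, 0.0, 3.0)
    # ...but start the trajectory slightly early so that the Crazyflie never
    # stops at the end of the go_to and instead "slides" into the trajectory
    time.sleep(2.7)
    hlc.start_trajectory(1, time_scale=1.0)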

The trajectory

The main part of the flight is one trajectory handled by the High Level Commander. It is generated using the uav_trajectories project from whoenig. We defined a number of points we wanted the trajectory to pass through and the software generates a list of polynomials that can be used by the High Level Commander. The generated trajectory passes through the points, but as a part of the optimization process it also chooses some (unexpected) curves; that could be fixed with some tweaking though.

The trajectory is defined using absolute positions in a global coordinate system that spans the office.

Positioning

We used three different positioning systems for the demo: the Lighthouse (V2), the Loco Positioning System (TDoA3) and the Flow deck. Different areas of the flight space are covered by different systems, either individually or overlapping. All decks are active all the time and pick up data when it is available, pushing it into the extended Kalman estimator.

In the meeting room, where we started, we used two Lighthouse V2 base stations which gave us a very precise position estimate (including yaw) and a good start. When the Crazyflies moved out into the office, they only relied on the Flow deck, and that worked fine even though the errors potentially build up over time.

When the Crazyflies turned around the corner into the hallway towards the kitchen, we saw that the errors sometimes were too large; either the position or the yaw was off, which caused the Crazyflies to hit a wall. To fix that, we added 4 LPS nodes in the hallway and this solved the problem. Note that all 4 anchors are on the ground, which is not enough to give the Crazyflie a good 3D position, but the distance sensor on the Flow deck provides the Z-information and the overall result is good.

The corner when going from the kitchen into the Arena is pretty tight and again the build up of errors made it problematic to rely on the Flow deck only, so we added a lighthouse base station for extra help.

Finally, in the first part of the Arena, the LPS system has full 3D coverage and together with the Flow deck it is smooth sailing. About halfway in, the Crazyflies started to pick up the Lighthouse system as well, and from there data from all three systems is used at the same time.

Obviously we were using more than 2 base stations with the Lighthouse system, and even though it is not officially supported, it worked with some care and manual labor. The geometry data was for instance manually tweaked to fit the global coordinate system.

The wall between the kitchen and the Arena is very thick and it is unlikely that UWB can go through it, but we still got LPS data from the Arena anchors occasionally. Our interpretation is that it must have been packets bouncing off the walls into the kitchen. The stray packets were picked up by the Crazyflies, but since the Lighthouse base station provided a strong information source, the LPS packets did not cause any problems.

Firmware modifications

The firmware is essentially the stock crazyflie-firmware from GitHub, but we did make a few alterations:

  • The maximum velocity of the PID controller was increased to make it possible to fly a bit faster and create a nicer demo.
  • The number of lighthouse base stations was increased
  • The PID controller was tweaked for the Bolt

You can find the source code for the demo on github. The important stuff is in examples/demos/hyperdemo

The hardware

In the demo we used 5 x Crazyflie 2.1 and 1 x Bolt very similar to the Li-Ion Bolt we built recently. The difference is that this version used a 2-cell Li-Po and lower KV motors but the Li-Ion Bolt would have worked just as well.

Hyperdemo drones and their configurations

To make all the positioning systems work at the same time we needed to add 3 decks: Lighthouse, Flow v2 and Loco deck. On the Crazyflie 2.1 this fits if the extra long pin-headers are used, with the Lighthouse deck mounted on top and the Loco deck underneath the Crazyflie 2.1, with the Flow v2 on the bottom. The same goes for the Bolt, but here we had to solder the extra long pin-header and the long pin-header together to make them long enough.

There is one catch though… the pin resources for the decks collide. With some patching of the loco-deck this can be mitigated by moving its IRQ to IO_2 using the solder-jumper. The RST needs to be moved to IO_4 which requires a small patch wire.

Also some FW configuration is needed which is added to the hyperdemo makefile:

CFLAGS += -DLOCODECK_USE_ALT_PINS
CFLAGS += -DLOCODECK_ALT_PIN_RESET=DECK_GPIO_IO4

The final weight of the Crazyflie 2.1 is on the heavy side and we quickly discovered that fully charged batteries should be used, or else the crash probability increases a lot.

Conclusions

We’re happy we were able to set this demo up and that it was fairly straightforward; the whole setup was done in one or two days. The App layer is quite useful and we tend to use it quite often when trying out ideas, which we interpret as a good sign :-)

We are satisfied with the results and hope it will inspire some of you out there to push the limits even further!

As mentioned in this blog post, we added the possibility to write apps for the Crazyflie firmware a while ago. Now we have added more functions in the Firmware to make it possible to use apps for an even wider range of tasks.

The overall idea of the app API is to mirror the functionality of the python lib. This will enable a user to prototype an application in python with quick iterations, when everything is working the app can easily be ported to C to run in the Crazyflie instead. The functions in the firmware are not identical to the python flavour but we have tried to keep them as close as possible to make the translation simple.

An app is also a much better way to contain custom functionality, as the underlying firmware can be updated without merging any code. The intention is that the app API will be stable over time and that apps that work with one version of the firmware should also work with the next version.

Improvements

We used our demo from IROS and ICRA (among others) with a fairly autonomous swarm as a driver for the development. The demo used to be implemented in a branch of the firmware with various modifications of the code base to make it possible to do what we wanted. The goal of the exercise was to convert the demo into an app and add the required API to the firmware to enable the app to do its thing. The new app is available here.

The main areas where we have extended the API are:

Log and parameters framework

The log framework is the preferred way for an app to read data from the firmware, and this has been working from the start. Similarly, the parameter framework is the way to set parameters. Even though this has worked, it broke a basic assumption in the setup with the client: that only the client can change a parameter. Changing a parameter from an app could lead to the client and the Crazyflie having different views of the state in the Crazyflie, but this has now been fixed and the client is updated when needed.

High level commander

The high level commander was not accessible from an app earlier; functions matching those in the python lib have now been added to make it easy to handle autonomous flight.

Custom LED sequences

It is now possible to register custom LED sequences to control the four LEDs on the Crazyflie to signal events or state.

Lighthouse functionality

Functions for setting base station geometry data as well as calibration data have been added. These functions are also very useful for those who are using the lighthouse system as it now can be done from an app instead of modifying lighthouse_position_est.c.

Remaining work

We have taken a step forward with these changes but there is more to be done! The two main areas are support for custom CRTP packets and memory mapping through the memory sub system. There might be more, let us know if there is something you are missing. The work will continue and there might even be some documentation at some point :-)

Tutorial

One reason for doing this API work now was to prepare for the tutorial about the Lighthouse 2 positioning system, swarm autonomy and the demo app that we will run online this Wednesday, don’t miss out! You can read more about the event here.