Category: Crazyflie

A while ago we bought an HTC Vive for the Bitcraze office. This was partly for having fun with VR, but it was mostly because we hoped to use the Vive tracking system with the Crazyflie. We are making progress with the idea and we just received our latest prototype:

The Lighthouse tracking system is the hardware component of SteamVR tracking; it is used by the HTC Vive to get the full position and orientation of the Vive VR head-mounted display and game controllers. It has sub-millimeter precision and low latency, which is key to achieving an immersive VR experience. The system works by having base stations installed in the room. Each base station sweeps two rotating infrared laser planes through the room. A receiver is basically a photodiode: by detecting when the photodiode is hit by the sweeping lasers, the receiver can measure at which angle it is seen by the base station. With enough receivers and/or base stations, it is possible to calculate the receiver's position and orientation. If you want to read more about how Lighthouse works, there is awesome reverse-engineering and documentation work done by the open-source community.
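The core of the measurement can be summarized in a few lines. The sketch below is a simplification: the 60 Hz rotor rate and the centering offset come from publicly available reverse-engineering documentation, not from our prototype firmware.

```python
# Converting the time between a base-station sync pulse and the moment a
# sweeping laser plane hits the photodiode into an angle (Lighthouse v1 style).
import math

SWEEP_PERIOD = 1.0 / 60.0  # one full rotor revolution (assumed 60 Hz)

def sweep_angle(t_sync: float, t_hit: float) -> float:
    """Angle (radians) at which the base station 'sees' the photodiode.

    t_sync: timestamp of the sync pulse marking the start of the sweep
    t_hit:  timestamp when the laser plane crossed the photodiode
    """
    fraction = (t_hit - t_sync) / SWEEP_PERIOD        # fraction of a revolution
    return fraction * 2.0 * math.pi - math.pi / 2.0   # center around the middle of the sweep

# Two such angles (horizontal and vertical sweep) define a ray from the base
# station towards the receiver; several rays from two base stations, or several
# photodiodes with known geometry, pin down position and orientation.
```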

As far as the Crazyflie is concerned, the Lighthouse system has one major advantage: the position and orientation can be calculated on the tracked object itself, which means that the Crazyflie can be completely autonomous and there is no limit to the number of Crazyflies that can be tracked at the same time.

Lighthouse has been my fun-Friday project for a couple of months and the early results are very encouraging. This is still very much work in progress, so stay tuned for future blog posts about the subject :-).


Malmö is known to be very beautiful in June. At least that’s what I’ve heard from the Bitcraze team when we talked about having a meet-n-geek at their offices in Sweden.

I’ve been working on the Crazyflies and developing an artist-friendly interface to control them for a year now, and I was always impressed with their philosophy of work and communication. Over the past year, they’ve helped me and my companies a lot in creating the shows we want, with our specific needs and constraints that researchers don’t have, like fast installation time or confidence in drone take-off (you want to make sure that a drone won’t hit the theater director’s head at the very beginning of the show!).


For that I created LaMoucheFolle, an open-source software with a nice UI to connect, monitor and control multiple drones. LaMoucheFolle is not made to be a flight controller, but acts more as a server that any software can use, like Unity or Chataigne. That way, users don’t need to handle the whole connection, feedback, warning and calibration process and can concentrate on the flight and interaction.
While the first version of LaMoucheFolle got me through most of the demos and workshops, I knew that it could be vastly improved if I understood the drones better, so I decided to ask the creators to meet them, and here I am!

As I hoped, being physically there allowed me to better understand their practices, and allowed them to better understand mine, so we could find a way to improve both their Crazyflie ecosystem and my contribution to it.

So I refactored, redesigned and improved LaMoucheFolle into a (soon to be released) V2, featuring:
– New drone manager interface with more intuitive feedback on the drones
– New multi-threaded drone communication mechanism
– New state-safe sequenced initialization and flight control of the drone, with a progressive take-off that vastly increases its stability
– Unity demo app and Chataigne demo session to show how to control the drones from other software via OSC
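To give an idea of what controlling the drones from other software looks like, here is a minimal Python sketch using python-osc. The host, port and OSC addresses below are invented for illustration; the real ones are defined by LaMoucheFolle and the demo sessions.

```python
# Sending hypothetical OSC messages to a drone server such as LaMoucheFolle.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # host/port are assumptions

# Hypothetical message: ask drone 0 to move to x, y, z (meters)
client.send_message("/drone/0/targetPosition", [1.0, 0.5, 1.2])

# Hypothetical message: set the LED ring of drone 0 to a solid color (r, g, b)
client.send_message("/drone/0/color", [1.0, 0.0, 0.5])
```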


While developing the new version and talking to the team, some key features and improvements for the use of Crazyflies in shows were discussed, and some of them are now in research/development:
– Health analysis: using the accelerometer’s data and Tobias’ magic brain, this makes it possible to test the motors while the drone is on the ground and find out if one or more of them are problematic (too much or not enough vibration, meaning either the propeller or the motor needs to be repaired or replaced)
– Battery analysis: when the battery is fully charged, an automated motor sequence can find out if the battery has an abnormal discharge behavior
– Stealth mode: shut down all the system LEDs (the 4 built-in LEDs on the drone) so it can be invisible, ninja-style
– Normalized battery level and low-battery log values: this allows for safe and consistent feedback of the battery level in percent, and an indicator of whether the drone should land soon. It will also be possible to use this value to monitor the charging progress of a drone.
– LED-ring fade pattern: this mode allows for easy fading between solid colors on the LED ring, so it is not necessary to stream all the intermediate colors from one color to another; instead you get a very beautiful smooth interpolation using only 2 parameters: the target color and the fade time.
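As an illustration, driving such a fade pattern from the Python cflib could look roughly like the sketch below. The parameter names (ring.effect, ring.fadeColor, ring.fadeTime) and the effect id are assumptions about how the feature might be exposed, not a released API.

```python
# Hypothetical use of the proposed LED-ring fade pattern via cflib parameters.
import time
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

cflib.crtp.init_drivers()

with SyncCrazyflie("radio://0/80/2M", cf=Crazyflie()) as scf:
    cf = scf.cf
    cf.param.set_value("ring.effect", "14")              # assumed id of the fade effect
    cf.param.set_value("ring.fadeTime", "2.0")           # seconds to reach the target color
    cf.param.set_value("ring.fadeColor", str(0xFF0000))  # fade to red
    time.sleep(2.5)
    cf.param.set_value("ring.fadeColor", str(0x0000FF))  # then fade to blue
    time.sleep(2.5)
```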

I hope you are as excited as I am about those new features, and if you’re not, please tell us what would make you “vibrate” !

I’ve been spending the last week at their office, and it was a great week: I initially came to improve my software and discuss future development of the Crazyflies (which was great), but what I’ll remember the most from this trip is by far the human aspect of the Bitcraze team.
Marcus, Kristoffer, Tobias, Björn and Arnaud are amazing and I’m really happy to have had the chance to see them working, and to collaborate with them. I admire their choice of being fully transparent in their work and amongst themselves, and there is a natural kindness mixed with the passion for their project that makes working there feel like every day is special. Also, Malmö is very beautiful indeed :)

Thank you very much and I hope to reiterate the experience soon,

Skål


The Crazyflie 2.0 has been flyable in MoCap systems such as Qualisys, Vicon or OptiTrack for quite a while thanks to the Crazyswarm project. For the MoCap systems to be able to track the Crazyflie it needs to be fitted with reflective markers. These can be attached on e.g. the motor mounts, which in some cases might be the best solution; however, we also liked the idea of creating a deck where the markers can easily be attached in many repeatable configurations, and that is why we created the MoCap deck. This deck will be released soon, but before we start manufacturing it would be great to get some feedback.

The deck has M3-sized holes spaced on a 5 mm grid. The deck also has footprints for two optional push buttons that can be used to e.g. trigger a take-off or start a demo. Like the battery holder deck it has no electronics, so it can be mounted upside down for a better fit of the markers, if the buttons aren’t mounted of course.

We are collaborating with Qualisys, also based in Sweden (in Gothenburg), to make the Crazyflie more MoCap-friendly and easy to use together with the Qualisys system as well as other MoCap systems. Qualisys will provide the markers for the MoCap deck.

Qualisys and Bitcraze are exhibiting together at IROS in Madrid, where we will show some awesome demos with a MoCap system as well as other positioning technologies, and we are also hoping to fly a Crazyswarm. We will publish more information when we get closer to the conference.

We have now worked a few weeks on the new TDoA 3 mode for the Loco Positioning System. We are happy with the results so far and think we managed to do what we aimed for: removing the single point of failure in anchor 0 and supporting many anchors as well as larger spaces.


We finished off last week by setting up a system with 20 anchors covering two rooms down in the lunch area of the office. We managed to fly a scripted autonomous flight between two rooms.

Work so far on the anchors

Messages from the anchors are now transmitted at random times, which removes the dependency on anchor 0 that used to act as a master to which all other anchors were synchronized. The drawback is that we get collisions when two anchors happen to transmit at the same time. Experiments showed that at 400 packets/s (system rate) we ended up with a packet loss of around 15% and 340 TDoA measurements/s sent to the Kalman filter for position estimation. We figured that this was an acceptable level and added an algorithm in the anchors that reduces the transmission rate based on the number of anchors around them. If more anchors are added to a room, they all reduce their transmission rate to target a total system rate of 400 packets/s.
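In pseudo-Python, the rate-adaptation idea looks roughly like the sketch below; the numbers and names are ours, not the anchor firmware's (which is written in C).

```python
# Each anchor divides a target total system rate by the number of anchors it
# currently hears, and randomizes its transmission times around that average.
import random

TARGET_SYSTEM_RATE = 400.0  # packets per second for the whole system

def next_tx_delay(anchors_heard: int) -> float:
    """Seconds to wait before this anchor's next transmission."""
    my_rate = TARGET_SYSTEM_RATE / max(anchors_heard, 1)
    average_delay = 1.0 / my_rate
    # Randomize +/-50% around the average so that two anchors that happen to
    # collide are unlikely to keep colliding on the following packets.
    return average_delay * random.uniform(0.5, 1.5)
```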

The anchors continuously keep track of the clock drift of all other anchors by listening to the messages that are transmitted. We know that clocks do not change frequency suddenly and can use this fact to filter the clock correction and reduce noise in the data. Outliers are detected and removed, and the resulting correction is low-pass filtered. We have done some experiments on using this information, comparing it to the time stamp of a received message to detect whether the time stamp is corrupt, but this idea requires more work.
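Conceptually, the filtering works like the sketch below; the threshold and filter gain are illustrative values, not the ones used in the firmware.

```python
# Clock-correction filtering: reject obviously wrong corrections (outliers),
# then low-pass filter the rest.
class ClockCorrectionFilter:
    def __init__(self, alpha=0.1, max_step=10e-6):
        self.alpha = alpha          # low-pass gain
        self.max_step = max_step    # max believable change between samples
        self.correction = 1.0       # ratio between remote and local clock rate

    def update(self, raw_correction: float) -> float:
        # Clocks do not change frequency suddenly: treat large jumps as outliers.
        if abs(raw_correction - self.correction) > self.max_step:
            return self.correction
        self.correction += self.alpha * (raw_correction - self.correction)
        return self.correction
```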

One interesting property of the anchors is the limited CPU power that is available. The strategy we have chosen to handle this has been to create an algorithm that is efficient when handling messages. A timer-based maintenance algorithm (at 1 Hz) examines the received data, makes decisions on which anchors to include in future messages and purges old data.

The Crazyflie

The implementation in the Crazyflie is fairly straightforward. The biggest change compared to TDoA 2 is that we now can handle a dynamic number of anchors and have to choose what data to store and what to discard. We have also extracted the actual TDoA algorithm into a module to separate it from the TDoA 3 protocol. The clock-correction filtering algorithm from the anchors has also been implemented in the Crazyflie.

An experimental module test has been added where the TDoA module is built and run on a PC using data recorded from a sniffer. This gives us repeatability as well as better tools for debugging, and it is something we should explore further.

Work remaining 

The estimated position in the Crazyflie is still noisier than in TDoA 2 and we would like to improve it to at least the same level. We see outliers in the TDoA measurements that make the Crazyflie go off in a random direction from time to time; we believe it should be possible to get rid of most of these.

The code is fairly hackish and there are no structured unit or module tests to verify functionality. So far the work has been in an exploratory phase, but we are getting closer to a set of algorithms that we are happy with and that are worth testing.

We have not done any work on the client side, that is, support for visualizing and configuring the system. This is a substantial amount of work and we will not officially release TDoA 3 until it is finished.

How to try it out

If you are interested in trying TDoA 3 out yourself, it is all available on GitHub. There are no hardware changes, so if you have a Loco Positioning system it should work just fine. There is a short description on the wiki of how to compile and configure the system. The anchor supports both TDoA 2 and TDoA 3 through configuration, while the Crazyflie has to be recompiled to switch between the two. The support in the client is limited but will basically handle anchors 0–7.

Have fun!

First of all, we are happy to announce that (almost) all products have been stocked in the new warehouse and are now shipping! The last orders that were on hold are on their way out, and new orders placed in the store will once again be shipped within a few days.

We released the TDoA mode, a.k.a. swarm mode of the Loco Positioning System back in January. TDoA supports positioning of many Crazyflies simultaneously which makes it possible to fly a swarm of Crazyflies with the LPS system. The release in January was actually the second iteration of the TDoA implementation (the first iteration was never publicly released) and it is also known as TDoA 2.

TDoA 2 works well but there are a couple of snags that we would like to fix and we have now started the work on the next iteration, TDoA 3. 

Single point of failure

TDoA 2 is based on a fixed transmission schedule with time slots in which each anchor transmits its ranging packet. All anchors listen to anchor 0 and use the reception of a packet from anchor 0 to figure out when to transmit. The problem with this solution is that if anchor 0 stops transmitting for some reason, the full system stops transmitting positioning information. This is clearly a property that would be nice to get rid of.

Limited number of anchors

The packets in the TDoA 2 protocol have 8 slots for anchor data that are implicitly addressed through their position in the packet: the first slot is anchor 0, the second slot anchor 1, and so on. This setup is easy to use but creates an upper limit of 8 anchors in the system.

The maximum radio reach of an anchor depends mainly on the transmitted power and the environment. This distance, in combination with the maximum of 8 anchors and the fact that all anchors must be in range of anchor 0, sets an upper limit on the volume that an LPS system can cover, basically one large room. When we designed TDoA 2 we were happy to be able to support a swarm of Crazyflies and did not really bother too much about the covered volume. We get more and more questions about larger areas and more anchors though, and it would be nice to have a positioning system that can be expanded.

The solution – maybe…

What we want to do in TDoA 3 is to transmit packets at random times and add functionality to handle the collisions and packet loss that will happen in such a system. The idea is that even if some data is lost, the receiving side will get enough packets to be able to calculate the distance to other anchors or a position as needed. By removing the time slots and the synchronization to anchor 0, we get rid of the single point of failure.

In the TDoA 3 protocol we have added explicit ids to the anchor data, and thus removed the implicit addressing of anchors. We have 8 bits for anchor ids, so the system will handle 256 anchors for sure. We think it will be possible to design even larger systems though, by reusing ids and making sure that the radio ranges of anchors with the same id do not overlap.
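To illustrate the difference between implicit and explicit addressing, here is a small Python sketch of packing remote-anchor data with an explicit 8-bit id per entry. This is not the actual TDoA 3 packet layout (see the draft spec for that), only the general idea.

```python
# Each remote-anchor entry carries its own id instead of being identified by
# its slot position in the packet. Field sizes here are illustrative.
import struct

# One entry: id (uint8), rx timestamp (uint32), distance (uint16)
REMOTE_ENTRY = struct.Struct("<BIH")

def pack_remote_data(entries):
    """entries: list of (anchor_id, rx_timestamp, distance) tuples."""
    return b"".join(REMOTE_ENTRY.pack(*e) for e in entries)

def unpack_remote_data(data):
    return [REMOTE_ENTRY.unpack_from(data, i * REMOTE_ENTRY.size)
            for i in range(len(data) // REMOTE_ENTRY.size)]
```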

The UWB radios have a nice property that makes collisions a bit easier to handle than one might first think: if they receive two packets at the same time, they will most likely “pick” one of the packets and discard the other. The drawback is that the receive time of the packet is likely to be less accurate. We are not completely sure it will be possible to detect and handle the added noise in the time stamps, but we have good hope!

The current state of the project

Last week we did a proof-of-concept hack where we modified the old TDoA 2 implementation to transmit at random times, with minor modifications to handle the random receive order of packets. It all worked out beautifully and we could fly a short sequence in the office with the new mode. The estimated position was a bit more shaky, which is not surprising considering that the receive times are noisier.

We have just started with the real deal. We have designed a draft spec of the protocol and have also started to implement the new protocol on top of the old TDoA 2 algorithms in the anchors and the Crazyflie to get going. The next steps will be to introduce random transmission times, dynamic anchor management and better error handling. The TDoA 3 implementation will exist in parallel with the current TDoA 2 implementation and should not interfere with it.

If you want to contribute, are interested in what we do or have some input, please comment on this blog post or contact us in any other way.


It’s time for an update on the Multi-ranger deck (see previous updates here: 1, 2, 3). Back in February/March we were just about to start production of the Multi-ranger deck when the new VL53L1 ToF sensor from ST became available. Among the interesting features of the new sensor are increased range and ROI (region of interest) settings. We felt that the improvement was enough to consider using the new sensor for the Multi-ranger, so we got some sensors and started testing.

Point cloud

We’ve made a little example where you can control the Crazyflie with a keyboard (using velocity mode) and record the estimated position, body attitude and all the distances (down/up, left/right, front/back) from the ToF sensors. We then did some post-processing of the log data and plotted it using pyntcloud; you can see the results in the point cloud. There are still lots of possible improvements to be made to the script (like taking body attitude into consideration), but once we’ve cleaned it up a bit we’ll publish it on GitHub so others can play around with it. Note that in the plot the blue points are from the up/down sensors (i.e. the Crazyflie’s movement) and the red points are from the side sensors (front/back/left/right).
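As a rough idea of what that post-processing looks like, here is a stripped-down sketch. The log-file column names and units are assumptions, and like our current script it ignores body attitude (ranges are treated as axis-aligned).

```python
# Turn logged position + ranger distances into points and plot with pyntcloud.
import pandas as pd
from pyntcloud import PyntCloud

log = pd.read_csv("flight_log.csv")  # x, y, z position [m] + six ranges [m]

points = []
for _, row in log.iterrows():
    x, y, z = row["x"], row["y"], row["z"]
    points.append((x + row["front"], y, z))
    points.append((x - row["back"], y, z))
    points.append((x, y + row["left"], z))
    points.append((x, y - row["right"], z))
    points.append((x, y, z + row["up"]))
    points.append((x, y, z - row["down"]))

cloud = PyntCloud(pd.DataFrame(points, columns=["x", "y", "z"]))
cloud.plot()
```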

The room that was mapped

So far we’re happy with the results. We feel that the increased range and new features enable users to work on more interesting problems with the deck, so we’ve decided to switch out the sensors and go to production with the new one. Right now we’re running a 0-series of 10 units using the real production manufacturing fixture (for the standing PCBs with sensors) as well as the production test rig. Our best estimate for when the deck will become available for purchase is some time during the summer. Below is a picture of the latest prototype. We’ll make sure to keep you updated on the progress!

Warehouse issues

On an additional note, we’re having some issues with our warehouse provider, which ships out orders from our e-store. At the beginning of the month the provider had scheduled a move of the warehouse to a new physical location, which would delay handling of orders by at most a week. Unfortunately the move, which should have been finished in the middle of last week, is still in progress, which means we can’t ship out any orders at the moment. We’re working hard to sort this out with the provider.

I’ve spent the last 5 years of my career at Microsoft on the team responsible for HoloLens and Windows Mixed Reality VR headsets. Typically, augmented reality applications deal with creating and manipulating digital content in the context of real-world surroundings. I thought it’d be interesting to explore some applications of using an augmented reality device to manipulate and control physical objects and have them interact with the real world and/or digital content.

Phase 1: Gesture Input

The HoloLens SDK has APIs for consuming hand gestures as input. For the first phase of this project, I modified the existing Windows UAP/UWP client to handle these gestures and convert them to CRTP setpoints. I used the “manipulation gesture”, which provides offsets in three dimensions for a tap-and-drag gesture, measured from the point in space where the initial tap occurred. These three degrees of freedom are mapped to thrust, pitch and roll.
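The actual client is written in C# for UWP, but the mapping itself is simple. Below is a Python sketch of the idea, with made-up scale factors and clamping; the setpoint ordering matches how cflib exposes it.

```python
# Map manipulation-gesture offsets (since the initial tap) to a roll/pitch/thrust setpoint.
def gesture_to_setpoint(dx, dy, dz, hover_thrust=38000):
    """dx, dy, dz: meters of hand offset. Returns (roll, pitch, yawrate, thrust)."""
    roll = max(-20.0, min(20.0, dx * 40.0))        # degrees, hand right -> roll right
    pitch = max(-20.0, min(20.0, -dz * 40.0))      # degrees, push away -> pitch forward
    thrust = int(max(0, min(60000, hover_thrust + dy * 20000)))  # hand up -> more thrust
    return roll, pitch, 0.0, thrust                # no yaw control in phase 1

# e.g. cf.commander.send_setpoint(*gesture_to_setpoint(0.05, 0.1, -0.2))
```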

For the curious, there’s an article on my website with details about the implementation and source code. Here’s a YouTube video where I explain the concept and show a couple of quick demos.

As you can see in the first demo in the video, this works but isn’t entirely useful or practical. The HoloLens accounts for head movements (otherwise moving the head to the left would produce the same offset as moving the hand to the right, requiring the user to keep his or her head very still) but the user must still take care to keep the hand in the field of view of the device’s cameras. Once the gesture is released (or the hand goes out of view) the failsafe engages and the Crazyflie drops to the ground. And of course, lack of yaw control cripples the ability to control the Crazyflie.

Phase 2: Position Hold

Adding a flow deck makes for a more compelling user experience, as seen in the second demo in the video above. The Crazyflie uses the sensors on the flow deck to hold its position. With this functionality, the user is free to move about the room and make shorter “adjustment” hand gestures, instead of needing to hold very still. In this mode, the gesture’s degrees of freedom map to an x/y velocity and a vertical offset from the current z-depth.
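In cflib terms, that mapping would correspond to sending hover setpoints; again, this is a Python sketch with illustrative scaling, not the C# client code.

```python
# Map gesture offsets to x/y velocities plus a height offset (flow deck position hold).
def gesture_to_hover_setpoint(dx, dy, dz, current_height):
    vx = -dz * 1.0                                # m/s, push away -> fly forward
    vy = -dx * 1.0                                # m/s
    z = max(0.2, min(2.0, current_height + dy))   # flow deck is reliable up to ~2 m
    return vx, vy, 0.0, z                         # (vx, vy, yawrate, zdistance)

# e.g. cf.commander.send_hover_setpoint(*gesture_to_hover_setpoint(0.0, 0.1, -0.2, 0.5))
```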

This is a step in the right direction, but still has limitations. The HoloLens doesn’t know where it is in space relative to the Crazyflie. A gesture in the y axis relative to the device will always result in a movement in the y direction of the Crazyflie, which begins to feel unnatural if the user moves around. Ideally, gestures would cause the Crazyflie to move in the same direction relative to the user, not relative to the ‘front’ of the Crazyflie. Also, there’s still no control over yaw.

The flow deck has some limitations as well: the z-range only goes to 2 meters with any accuracy, and the flow sensor (for lateral stabilization) has a strong dependency on the pattern of the floor below. A flow sensor is a camera that relies on measuring pixel deltas from frame to frame, so if the floor is blank or has a repeating pattern, it can be difficult to hold position properly.

Despite these limitations, using hand gestures to control the Crazyflie with a flow deck installed is actually quite fun and surprisingly easy.

Phase 3 and Beyond: Future Work & Ideas

I’m currently working on some new features that I hope will open the door for more interesting applications. All of what follows is a work in progress, and not yet implemented or functional. Dream with me!

Shared Coordinate System

The next phase (currently a work in progress) is to get the HoloLens and the Crazyflie into a shared coordinate system. Having spatial awareness between the HoloLens and the Crazyflie opens up some very exciting scenarios:

  • The orientation problem could be improved: transforms could be applied to gestures to cause the Crazyflie to respond to commands in the user’s frame of reference (so ‘pushing’ away from one’s self would cause the Crazyflie to fly away from the user, instead of whatever direction is ‘forward’ to the Crazyflie’s perspective).
  • A ‘follow me’ mode, where the Crazyflie autonomously follows behind a user as he or she moves through the space.
  • Ability to walk around and manually set waypoints by selecting points of interest in the environment.

The Loco Positioning System is a natural fit here. A setup step (where a spatial anchor or similar is established at the same physical position and orientation as the LPS origin) and a simple transform for scale and orientation (the HoloLens and the Crazyflie define X, Y, Z differently) would allow the HoloLens and the Crazyflie to operate in a shared coordinate system. One could also use the webcam on the HoloLens along with computer vision techniques to track the Crazyflie, but that would require constant line of sight from the HoloLens to the Crazyflie.
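As a sketch of what such a transform could look like, assuming a Unity-style left-handed frame with Y up and Z forward on the HoloLens side and a right-handed Crazyflie/LPS frame with X forward, Y left and Z up (your setup may define these differently):

```python
# Map a point from the HoloLens anchor frame into the LPS frame.
def hololens_to_lps(p):
    """p = (x, y, z) in the HoloLens anchor frame -> (x, y, z) in the LPS frame."""
    hx, hy, hz = p
    return (hz,    # HoloLens forward (+Z) -> LPS X
            -hx,   # HoloLens right  (+X)  -> LPS -Y (also flips handedness)
            hy)    # HoloLens up     (+Y)  -> LPS Z
```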

Obstacle Detection/Avoidance

Example surface map produced by HoloLens

The next step after establishing a shared coordinate system is to use the HoloLens for obstacle detection and avoidance. The HoloLens can map surfaces in real time and position itself within that map (SLAM). Logic could be added to the HoloLens to consume this surface map and adjust pathing/setpoints to avoid obstacles, without adding to the compute/power budget of the Crazyflie itself.

Swarm Control and Manipulation

As a simple extension of the shared coordinate system (and what Bitcraze has been doing with TDoA and swarming lately) the HoloLens could be used to manipulate individual Crazyflies within a swarm through raycasting (the same technique used to gaze at, select and move specific holograms in the digital domain). Or perhaps a swarm could be controlled to move out of the way as a user passes through the swarm, and return to formation afterward.

Augmenting with Digital Content

All scenarios discussed thus far have dealt with using the HoloLens as an input and localization device, but its primary job is to project digital content into the real world. I can think of applications such as:

  • Games
    • Flying around through a digital obstacle course
    • First person shooter or space invaders type game (Crazyflie moves around to avoid user or fire rendered laser pulses at user, etc)
  • Diagnostic/development tools
    • Overlaying some diagnostic information (such as battery life) above the Crazyflie (or each Crazyflie in a swarm)
    • Set or visualize/verify the position of the LPS nodes in space
    • Visualize the position of the Crazyflie as reported by LPS, to observe error or drift in real time

Conclusion

There’s no shortage of interesting applications related to blending augmented reality with the Crazyflie, but there’s quite a bit of work ahead to get there. Keep an eye on the Bitcraze blog or the forums for updates and news on this effort.

I’d love to hear what ideas you have for combining augmented reality devices with physical devices like the Crazyflie. Leave a comment with thoughts, suggestions, or any other relevant work!

Here at the USC ACT Lab we conduct research on coordinated multi-robot systems. One topic we are particularly interested in is coordinating teams consisting of multiple types of robots with different physical capabilities.

A team of three quadrotors all controlled with Crazyflie 2.0 and a Clearpath Turtlebot

Applications such as search and rescue or mapping could benefit from such heterogeneous teams because they allow for more flexibility in the choice of sensors and locomotive capabilities. A core challenge for any multi-robot application is motion planning: all of the robots in the team need to reach their target locations efficiently while avoiding collisions with each other and the environment. We have recently demonstrated a scalable method for trajectory planning for heterogeneous robot teams, utilizing the Crazyflie 2.0 as the flight controller for our aerial robots.

A Crazier Swarm

To test our trajectory planning research we wanted to assemble a team with both ground robots and multiple sizes of aerial robots. We additionally wanted to leverage our existing Crazyswarm software and experience with Crazyflie firmware to avoid some of the challenges of working with new hardware. Luckily for us the BigQuad deck offered a straightforward way to super-size the Crazyflie 2.0 and gave us the utility we needed.

With the BigQuad deck and off-the-shelf components from the hobbyist drone community we built three super-sized Crazyflie 2.0s. Two of them weigh 120g (incl. battery) with a motor-to-motor size of 130mm, and the other is 490g (incl. battery and camera) with a size of 210mm.

120g, 130mm

490g, 210mm

We wanted to pick components that would be resistant to crashing while still offering high performance. To meet these requirements we ended up picking components inspired by the FPV drone-racing community, where both reliable performance and high-impact crashes are expected. Full parts lists for both platforms are available here.

Integrating the new platforms into the Crazyswarm was fairly easy. We first had to re-tune the PID controller gains to account for the different dynamics of the larger platforms. This didn’t take too long, but we did crash a few times — luckily the components we chose were able to handle the crashes without any breakages. After tuning the platforms behave very well and are just as easy to work with as the original Crazyflie 2.0. We additionally updated the Crazyswarm package to be able to differentiate between BigQuad and regular Crazyflie types and those updates are now available for use by anyone!

In future work, we are excited to do hands-on experiments with a prototype of the CF-RZR. This new board seems like a promising upgrade to the CF 2.0 + BQD combination as it has upgraded components, an external antenna, and a standardized form factor. Hopefully we will see the CF-RZR as part of the Crazyswarm in the near future!

Mark Debord
Master’s Student
Automatic Coordination of Teams Laboratory
University of Southern California
Wolfgang Hönig
PhD Student
Automatic Coordination of Teams Laboratory
University of Southern California


We have been flying swarms in our office plenty of times. There is a limitation to this though: our flying space is only around 4 x 4 meters. Flying 8–10 Crazyflies in this space is challenging and it is hard to make it look good; slight position inaccuracy makes it look a bit sloppy. To mitigate this we decided to put on a small swarm show in a bigger flying space and to invite families and friends, just to raise the stakes a bit.

As usual we had limited time to accomplish this, and this time the result should be worth looking at. Well, we have managed to pull off hard things in one day before, so why not this time… The setup is basically a swarm bundle with added LED-rings. Kristoffer took care of the choreography, Tobias set up the drones and Arnaud configured the Loco Positioning system.

Choreography

Kristoffer’s pre-Bitcraze history involves some dancing, and he had already played a bit with the idea of creating choreographies with Crazyflies. One part of this was a weekend hack a few months back when he tried to write a swarm sequencer that is a bit more dance-oriented. The goal was to be able to run a sequence synchronized to music and define the movements in terms of bars and beats rather than seconds. He also wanted to be able to define a motion to end at a specific position on a beat, as opposed to starting on the beat. As Kristoffer did not have access to a swarm when he wrote the code, he also added a simple simulator to visualize the swarm. The hack was not a complete success at the time but turned out to be useful in this case.

The sequences are defined in a YML file as a list of time stamps, positions and, if needed, the color of the LED-ring. After a few hours of work he had at least some sort of choreography with 9 Crazyflies moving around, maybe not a masterpiece from a dance point of view, but time was running out.
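To give an idea of the format, here is what such a sequence definition and the bars/beats conversion could look like. The YAML keys and the conversion API are invented for this sketch; the actual file format is internal to the weekend-hack sequencer.

```python
# Load a small choreography sequence and convert musical time to seconds.
import yaml

BPM = 120.0
BEATS_PER_BAR = 4

def beat_to_seconds(bar, beat):
    """Convert a (bar, beat) position in the music to seconds from the start."""
    return (bar * BEATS_PER_BAR + beat) * 60.0 / BPM

sequence = yaml.safe_load("""
- {bar: 0, beat: 0, position: [0.0, 0.0, 1.0], color: [255, 0, 0]}
- {bar: 2, beat: 0, position: [1.0, 1.0, 1.5], color: [0, 0, 255]}
""")

for step in sequence:
    t = beat_to_seconds(step["bar"], step["beat"])
    print(f"t={t:.2f}s  goto {step['position']}  color {step['color']}")
```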

The simulator is super basic but turned out to be very useful anyway (the color of the crosses indicates the color of the LED ring). We actually never flew the full sequence with all drones before the performance, but trusted the simulation to be accurate enough! We did fly most of the sequence with one Crazyflie, to at least make it plausible that we had got it all right.

Short snippet from the simulation

Setting up drones

Handling swarms can be tedious and time-consuming. Just making sure all drones are assembled, fully operational and charged is a challenge when the number increases. Tobias decided to do a manual flight test of every drone: if it flies well manually, it will most likely fly well autonomously. The testing resulted in switching out some motors and props, as vibrations are a crippling factor, especially for Z accuracy. The takeaway from this exercise is to implement better self-testing so this can be detected automatically and fixed much quicker.

Loco Positioning System

We ran the positioning system with standard firmware in TDoA mode to support multiple Crazyflies simultaneously. The mapped space was around 7 x 5 x 2.5 meters and the anchors were placed more or less in the corners of the flying space box.

The result

The audience (families and friends) was enthusiastic and expectations were high! Even though not all drones made it all the way through the show, the spectators seemed to be duly impressed and requested a re-run.

This is a guest blog post written by Fred, the maintainer of the Android Crazyflie client and Java Crazyflie lib.

As a follow-up to last week’s blog post about the different Crazyflie clients, I would like to describe the current status of the Android client in a bit more detail.

After more than a year, version 0.7.0 of the Android client was released last Friday (March 16). Most importantly, this release fixes a very annoying UI bug that appeared on newer Android versions, where the on-screen joystick did not show up when the app was started for the first time. It also adds support for height-hold mode when using the Z-ranger or Flow deck, adds a console view (can be enabled in Settings -> App settings -> Show console) and also shows the link quality for BLE connections. You can read the full changelog on GitHub. You can find the release in the Google Play Store and as an APK on GitHub.

Connection quality and console messages now work on a BLE connection

Apart from the obvious/visible new features and bug fixes, quite a lot has happened “under the hood”. Some parts of the code were cleaned up, refactored, decoupled, etc. This is still a work in progress though.

There is still plenty of stuff to do for future releases, especially in the realm of Bluetooth support. On the short list are:

  • Param/Log support for BLE connections
  • Bootloading over BLE
  • Support for Flow deck sequences

Admittedly, there was almost no documentation for the Android client and some features are hidden (too well). In an effort to change that, I’ve started to document some features on the project’s GitHub wiki.

If you find bugs in the app, want to request a feature or see errors in the documentation, please open a GitHub issue.

If you are interested in the development of the Crazyflie Android client and want to get involved, let me know. The fastest way to get new features added or bugs fixed is to contribute a pull request.

Last but not least, I’d like to thank the Bitcraze team for creating and developing the Crazyflie and amazing new decks. Maintaining the Crazyflie Android client is still one of my favorite past times. :)