
In this blog post we will describe one of the demos we were running at IROS and how it was implemented. Conceptually it is based on the same ideas as the ICRA 2017 demo, but the implementation is completely new and much cleaner.

The demo is fully autonomous (no computer in the loop) but it requires an external positioning system. We flew it using either the Loco Positioning System or the prototype Lighthouse system.
A button has been added to the LPS deck to start the demo. When the button is pressed the Crazyflie waits for position lock, takes off and repeats a predefined spiral trajectory until the battery runs low, at which point it flies back to the door of the cage and lands.
For some reason we forgot to shoot a video at IROS, so a re-created version from the (messy) office will have to do instead; imagine a 2×2 m net cage around the Crazyflie.

Implementation

As mentioned in an earlier blog post, the demo uses the high level commander originally developed by Wolfgang Hoenig and James Alan Preiss for Crazyswarm. We prototyped everything in python (sending commands to the Crazyflie via Crazyradio) to quickly get started and design the demo. Designing trajectories for the high level commander is not trivial and it took some time to get it right. What we wanted was a downwards spiral motion followed by a climb back up along the Z-axis in the centre of the spiral. The high level commander is a bit picky about discontinuities, so we used sines for height and radius to generate a smooth trajectory.

Trajectories in the high level commander are defined as a number of pieces, each describing x, y, z and yaw for a short part of the full trajectory. When flying, the pieces are traversed one after the other. Each piece is described by 4 polynomials with 8 terms, one polynomial each for x, y, z and yaw. The tricky part is to find the polynomials, and we decided to do it by cutting our trajectory up into segments (4 per revolution), generating coordinates for a number of points along each segment and finally using numpy.polyfit() to fit polynomials to the points.
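
A minimal sketch of the approach, assuming a spiral parametrized by sines for radius and height (the number of revolutions, sample points per segment and amplitudes below are illustrative, not the exact values we used):

```python
import numpy as np

def fit_segment(t, x, y, z, yaw):
    """Fit one 7th-degree polynomial (8 terms) per dimension for a segment."""
    ts = t - t[0]  # segment time starts at zero
    duration = ts[-1]
    # Note: np.polyfit returns the highest-order coefficient first; check the
    # coefficient order expected by the firmware before uploading.
    polys = [np.polyfit(ts, dim, 7) for dim in (x, y, z, yaw)]
    return duration, polys

# Sample a downwards spiral, 4 segments per revolution
revolutions, segments_per_rev, pts = 3, 4, 20
t = np.linspace(0, 12, revolutions * segments_per_rev * pts)
angle = 2 * np.pi * revolutions * t / t[-1]
radius = 0.5 * np.sin(np.pi * t / t[-1])        # radius ramps up and back down smoothly
height = 1.0 + 0.5 * np.cos(np.pi * t / t[-1])  # smooth descent from 1.5 m to 0.5 m
x, y, z = radius * np.cos(angle), radius * np.sin(angle), height
yaw = np.zeros_like(t)

pieces = []
for i in range(revolutions * segments_per_rev):
    s = slice(i * pts, (i + 1) * pts + 1)  # overlap one point for continuity between segments
    pieces.append(fit_segment(t[s], x[s], y[s], z[s], yaw[s]))
```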

When we were happy with the trajectory it was time to move it to the Crazyflie. Everything is implemented in the app.c file and is essentially a timer loop with a state machine issuing the same commands that we did from python (such as take off, go to and start trajectory). A number of functions in the firmware had to be exposed globally for this to work, maybe not correct from an architectural point of view, but one has to do what one has to do to get a demo running :-) The full source code is available on github. Note that the makefile is hardcoded for the Crazyflie 2.1; if you want to play with the code on a CF 2.0 you have to update the sensor setting.
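
The state machine itself is small; a python-flavoured sketch of its structure is shown below (the real implementation is in C in app.c, and the cf helper object, its methods, the state names and the coordinates are hypothetical stand-ins for the firmware calls, not an actual API):

```python
from enum import Enum, auto

class State(Enum):
    WAIT_FOR_BUTTON = auto()
    WAIT_FOR_POSITION_LOCK = auto()
    TAKING_OFF = auto()
    FLYING_TRAJECTORY = auto()
    GOING_TO_DOOR = auto()
    LANDING = auto()

def step(state, cf):
    """One iteration of the timer loop; returns the next state."""
    if state == State.WAIT_FOR_BUTTON and cf.button_pressed():
        return State.WAIT_FOR_POSITION_LOCK
    if state == State.WAIT_FOR_POSITION_LOCK and cf.has_position_lock():
        cf.take_off(height=1.0, duration=2.0)
        return State.TAKING_OFF
    if state == State.TAKING_OFF and cf.take_off_done():
        cf.start_trajectory(trajectory_id=1)
        return State.FLYING_TRAJECTORY
    if state == State.FLYING_TRAJECTORY:
        if cf.battery_low():
            cf.go_to(x=0.0, y=-1.0, z=0.5, yaw=0.0, duration=3.0)  # door of the cage
            return State.GOING_TO_DOOR
        if cf.trajectory_done():
            cf.start_trajectory(trajectory_id=1)  # repeat the spiral
    if state == State.GOING_TO_DOOR and cf.go_to_done():
        cf.land(duration=2.0)
        return State.LANDING
    return state

# The firmware runs the equivalent of this step function from a timer at a fixed rate.
```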

This approach led to an idea for a possible future app API (for apps running in the Crazyflie) with functionality similar to that of the python lib. This would make it easy to prototype an app in python and then port it to firmware.

Controllers

The standard PID controller is very forgiving and usually handles noise and outliers from the positioning system fairly well. We used it with the LPS system, since there is some noise in the position estimated by an Ultra Wide Band system. The Lighthouse system on the other hand is much more precise, so we switched to the Mellinger controller when using it. The Mellinger controller is more agile but also more sensitive to position errors and tends to flip when something unexpected happens. It is possible to use the Mellinger with the LPS as well, but the probability of a crash was higher and we prioritised a carefree demo over agility. An extra bonus with the Mellinger controller is that it also handles yaw (as opposed to the PID controller), and we added yaw to the demo when flying with the Lighthouse.
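
Switching controller can be done at runtime through the parameter framework; a sketch using the python lib (the parameter name stabilizer.controller and the value mapping 1 = PID, 2 = Mellinger reflect the firmware at the time of writing, so verify them against your firmware version):

```python
import time
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = 'radio://0/80/2M'

cflib.crtp.init_drivers(enable_debug_driver=False)

with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    # 1 = PID (forgiving, good with noisy LPS positions)
    # 2 = Mellinger (more agile, works best with precise positioning such as Lighthouse)
    scf.cf.param.set_value('stabilizer.controller', '2')
    time.sleep(0.5)  # give the parameter time to propagate
```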

Going faster

Since the precision of the Lighthouse positioning system is so much better, we increased the speed to add some extra excitement. It turned out to be so good that the Crazyflie came very close to the panels at the back, over and over again, without any problems!

One of the reasons we designed the trajectory the way we did was actually to make it possible to fly multiple copters at the same time: the trajectories never cross. As long as a Crazyflie is not hit by downwash from another copter too close above it, all is good. Since the demo is fully autonomous and the copters have no knowledge of each other, we simply started them at appropriate intervals to separate them in space. We managed to fly three Crazyflies simultaneously with a fairly high degree of stability this way.

Last week half of Bitcraze, Kristoffer, Tobias and Arnaud, were at IROS 2018 where we had an exhibitor booth. We have had a great week and met so many interesting and inspiring people, both users of the Crazyflie and people curious about what we do. Thanks to everyone that passed by the booth; it is awesome to hear how the Crazyflie is used and how we can improve it even more.

This year we invited Qualisys to share the booth with us. They kindly provided a motion capture system, and we had the pleasure of being joined by Martin, who helped us out and presented Qualisys.

Demo-wise we had prepared a bunch of demos, which you can read about in our previous post about IROS. It won’t surprise anyone to hear that not everything worked as planned. The Lighthouse demo did not work when we set it up in the booth (it did in the office!) but some live hacking solved the problem on Tuesday. We also had unexpected issues with the Crazyswarm demo: our landing pad design and flight trajectory worked very well in the office, but in the booth we experienced much more instability, which prevented us from successfully flying and landing all 6 Crazyflies in the Crazyswarm. We still need to investigate what happened. The autonomous demos, both using the UWB Loco Positioning System and Lighthouse (once fixed), were surprisingly robust: they do not require a connection to a computer and they worked almost all the time, and when they failed they failed without drama and could be reset very quickly.

Overall we have been able to accumulate flight time and experience much more quickly in this last week than in the preceding months. Now we have a lot of things to test and improve, and also a lot of things we can be much more confident about. We have been fixing and improving the demos during the event and we will write more blog posts in the coming weeks about things we have developed and improved for and during IROS.

To conclude, thanks again to everyone that dropped by the booth. This kind of event always makes us come back with a boost of motivation and fresh new ideas, and it is all thanks to you!

The last couple of weeks have been really intense since we’ve been busy preparing for IROS. Finally it’s here, and with it we’re releasing a few new products!

We’re excited to announce that during the fall we will be releasing the following new products:

  • Crazyflie 2.1: The Crazyflie 2.0 was released almost 4 years ago now. Over the years there have been thousands of users and lots of feedback on the product. Most of it great, but there have been a few things we’ve wanted to fix. Now with the updated 2.1 version we finally have the chance to do so. Here’s a quick list of the updates:
    • Better radio performance and external antenna support: With a new radio power amplifier we’ve improved the link quality and added support for dual antennas (on-board chip antenna and external antenna via u.FL connector)
    • Better power button: We’ve gotten feedback that the power button breaks too easily, so we’ve replaced it with a more solid alternative.
    • Improved battery cable fastening: To avoid weakening of the cables over time, they are now routed through a strain relief.
    • Improved sensors: To improve flight performance we’ve switched out the IMU and pressure sensor. The new Crazyflie uses the drone-specialized sensor combo BMI088 and BMP388 from Bosch Sensortec.
  • Flow deck v2: The Flow deck has been upgraded with the new ST VL53L1x which increases the range up to 4 meters
  • Z-ranger deck v2: The Z-ranger deck has been upgraded with the new ST VL53L1x which increases the range up to 4 meters
  • Multi-ranger deck: Finally the Multi-ranger deck is currently in production and will be available during the fall!
  • Mocap deck: The motion capture deck with support for easily attaching markers
  • “Roadrunner” (alpha): With TDoA3 to be included in the next firmware release, we’re happy to release one of our LPS tags, code-named “Roadrunner”. The hardware is basically a Crazyflie 2.1 without motors and with support for up to 12 V input power.

In the upcoming weeks we’ll post more details about the products and when they will be available, so stay tuned!

We should also mention that we will be showing off some awesome prototypes of products that are planned for release next year, among them:

  • “RZR”: The long awaited Crazyflie + BigQuad stand-alone combo code-named “RZR” is making its way into production and we are aiming to release it in the beginning of 2019. Basically it’s a Crazyflie 2.1 where, instead of motors, you can directly connect ESCs to build bigger quads up to around 0.5 kg.
  • Lighthouse deck: Our current prototype is now flying with both Lighthouse 1.0 and 2.0 and the performance is awesome! This is definitely the next product out the door after the list above and we’re aiming at having it available during the spring.
  • Raspberry Pi Zero power deck: This deck allows you to add a Raspberry Pi Zero to the Crazyflie 2.x and the “RZR”.
  • LPS tag: We’ve shown this tag before but now we’ve updated it to use the Crazyflie 2.1 IMU and to have proper mounting holes. We’re getting closer to release and this will hopefully be available during the spring.

During IROS this week we will be showing off all the products above (including the prototypes). So if you want to be one of the first to check them out, drop by our booth, no. 91.

We are working hard in the Bitcraze team to prepare and get ready for IROS 2018 in Madrid next week. As usual, preparing for fairs and exhibitions makes us add useful features and functionality that we might not have planned to implement but that we find useful or need. Even though some of it might be a bit hackish, most of it will add value to the project and will hopefully be useful to the community. Notable functionality that we are working on this time:

  • design for a 3D-printable charging pad
  • basic support for the experimental Light House deck
  • support for the high level commander in the python lib
  • “app” for autonomous flying running in the Crazyflie

Charging pads

The plan is to fly a small Crazyswarm with 6 Crazyflies using a motion capture system from Qualisys. Since we want to spend as much time as possible talking to people and minimize setup time, we were looking for a solution to automatically recharge the batteries between flights. We are planning to use Qi-charger decks for contactless charging, with 3D-printed landing pads with slopes that make the Crazyflies slide into the correct charging position even if they land a few millimetres off.

The Light House deck

Even though the Light House deck hardware is still very much experimental, we have started to add support for it in the Crazyflie firmware. Hopefully we will be able to run our demos using either the LPS or the Lighthouse to show the difference in performance.

Support for the high level commander in the python lib

The high level commander was contributed by Wolfgang Hoenig and James Alan Preiss (thanks!) and has been available in the Crazyflie firmware for a while. In an environment with positioning support it provides high level commands such as “take off” and “go to” as well as flying user-defined trajectories, and it is used by Crazyswarm. We wanted to use the same functionality in our demo but running it standalone in the firmware. The easiest way to get acquainted with the functionality was to play with it from python, and as a side effect we implemented the API in the python lib for anyone to use. There is also an example script called autonomous_sequence_high_level.py in the examples directory.
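
A minimal usage sketch of the new python API (the method names below follow the high level commander API in cflib at the time of writing; trajectory upload is omitted here, see autonomous_sequence_high_level.py for the full example, and the URI is of course just an example):

```python
import time
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = 'radio://0/80/2M'

cflib.crtp.init_drivers(enable_debug_driver=False)

with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    cf = scf.cf
    commander = cf.high_level_commander

    # The high level commander must be enabled before it accepts commands
    cf.param.set_value('commander.enHighLevel', '1')
    time.sleep(0.5)

    commander.takeoff(1.0, 2.0)               # climb to 1.0 m over 2 s
    time.sleep(3.0)
    commander.go_to(0.5, 0.0, 1.0, 0.0, 3.0)  # absolute (x, y, z, yaw) over 3 s
    time.sleep(4.0)
    # commander.start_trajectory(trajectory_id, 1.0, relative=True)  # fly an uploaded trajectory
    commander.land(0.0, 2.0)
    time.sleep(3.0)
    commander.stop()
```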

App for autonomous flight

For ICRA last year we wrote code in the Crazyflie firmware to fly trajectories autonomously. At that point we simply fed setpoints to the PID controller to make the Crazyflie follow a preprogrammed path. Now we have more tools in the Crazyflie toolbox (the high level commander and the Mellinger controller), and by using them we have reduced the amount of code and the complexity of the solution while improving performance (code on github).

E-store

Like we’ve mentioned a few times before, it’s not always easy shipping batteries. Due to this we’ve unfortunately had to switch off checkouts containing batteries to some countries (like Canada, Australia and India). We’ve finally found a workaround for this, so today we’ve switched from DHL to FedEx in our E-store. As a positive side effect, most customers will also benefit from lower shipping rates on their orders. As always, if there are any issues with shipping or ordering please let us know and we’ll do our best to sort it out.

Loco node Rev.E

After receiving feedback from some customers that the micro-USB connector on the Loco nodes broke, we’ve decided to update the design. So in the coming weeks we will start phasing in the new revision (Rev.E) of the Loco node and phasing out the old one (Rev.D). Aside from the updated micro-USB connector we’ve also connected more spare pins to the expansion connector on the board. For full details on the schematic changes, have a look at the Rev.E schematics over on the wiki. As a side note, it’s worth mentioning that the first batch of Rev.E Loco nodes has a dark blue silkscreen instead of the standard Bitcraze black silkscreen; this will be updated in future batches.

We started the work on TDoA 3 in May and it has been functional for a few months, but it is a bit cumbersome to use since it requires compiling firmware with special flags and running scripts to configure anchors. To rectify this and make it more accessible, we are now working on integrating it just like the other positioning modes: TWR and TDoA 2.

Changes

The anchors already contained most of the required functionality. We have added support for switching to TDoA 3 mode via LPP, that is, using the Crazyflie as a bridge between the client and the anchors and transmitting data to the anchors over UWB.
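
Configuration reaches an anchor as an LPP short packet sent through the Crazyflie; a hedged sketch from the python lib side (the payload bytes, a "set mode" type followed by a mode id, are assumptions for illustration only, so check the LPP protocol documentation for the actual values):

```python
import struct
import time
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = 'radio://0/80/2M'
ANCHOR_ID = 3

cflib.crtp.init_drivers(enable_debug_driver=False)

with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    # Assumed values: LPP short packet type for "set anchor mode" and the
    # mode id for TDoA 3. Verify both against the LPP protocol documentation.
    LPP_SHORT_MODE = 0x03
    MODE_TDOA3 = 0x03
    payload = struct.pack('<BB', LPP_SHORT_MODE, MODE_TDOA3)
    # The Crazyflie forwards the packet to the anchor over UWB
    scf.cf.loc.send_short_lpp_packet(ANCHOR_ID, payload)
    time.sleep(0.1)
```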

In the Crazyflie, TDoA 3 has been added as a third mode. This means that it is now auto-detected when the Crazyflie is switched on and can be selected from a client – no need for compile flags any more! We have also added a new mapping to the memory sub system to transfer anchor information for a dynamic number of anchors to a client. This means that instead of being exposed to the client as a long list of log variables and parameters, most of the TDoA 3 information and configuration is available in a memory map, using the same protocol we use to access real memory like the configuration EEPROM or the deck memories. This way we have much more freedom to define and transfer the data structures to and from the Crazyflie.

The python client/lib is the piece of software that requires the most changes. The UI (and implementation) was designed to handle 8 anchors, but with TDoA 3 it must support a dynamic and larger number. The new memory mapping of course has to be implemented in the lib as well. The anchor position configuration part of the LPS tab will be moved into a separate dialog box to get more space for the controls. We also have some ideas for improvements to anchor position configuration (saving to file and sanity checking of configurations, for instance) that will be easier to implement in the future as well.

Feedback

The driver for this work is of course to make the TDoA 3 technology available to anyone that wants to try it out. It is important to remember that it still is experimental and that we have mainly tested it in single room setups with a few anchors. Our hope is that more users will use it in various settings and that we will get feedback and contributions to iron out any remaining problems. We currently lack easy access to larger spaces which makes it hard for us to verify the functionality in a system with many anchors.

The code in the firmware for the anchors and the Crazyflie is mostly ready, while some work remains in the lib/client; hopefully it can be committed and pushed during the week (see issue bitcraze/crazyflie-clients-python#349). If you want to try it out when the client is fixed, remember to upgrade the anchor firmware (including git submodules), the Crazyflie firmware (including git submodules), the python lib and the python client. Since this is still work in progress, APIs and protocols may change until the first official release.

Last week we were visited by Wolfgang from USC, the creator of the Crazyswarm project. It was great to have him here at the office. One of the subjects of discussion was preparing a demo for IROS 2018, held October 1-5 2018 in Madrid.

We will be in booth 91; if you are attending IROS 2018, feel free to pass by and say hello. We are planning a couple of demos:

  • Crazyswarm with at least 6 Crazyflies flying in a Qualisys mocap system.
  • Running a fully autonomous Crazyflie with the Loco Positioning System.
  • Hopefully, some demo of autonomous flight using the lighthouse positioning. This is still not fully working but I have at least 2 full months to get something flying :-).

If you would like to see us demo anything more or different, tell us in the comments and we will see if we can set something up.

We used Wolfgang’s visit to finalise the Qualisys support for Crazyswarm. It is now pushed and documented; this means that if you have a Qualisys system and a couple of Crazyflies you can now fly them autonomously using the Crazyswarm framework. It also means that we now have Crazyswarm up and running flawlessly at the office, which will help us test related pull requests and support advanced functionality like the high level commander in the Crazyflie python lib.

As a side note, Bitcraze is spread very thin these weeks since most of us are on vacation (I am basically alone). We usually miss one Monday post per year; last week was it, and Wolfgang’s visit is my excuse :-). Sorry in advance if there is any delay in answering mail, forum posts or other requests. From next week, the rest of the team will slowly start to come back.

We have now worked a few weeks on the new TDoA 3 mode for the Loco Positioning System. We are happy with the results so far and think we managed to do what we aimed for: removing the single point of failure in anchor 0 and supporting many anchors as well as larger spaces.

We finished off last week by setting up a system with 20 anchors covering two rooms down in the lunch area of the office, and we managed to fly a scripted autonomous flight between the two rooms.

Work so far on the anchors

Messages from the anchors are now transmitted at random times, which removes the dependency on anchor 0 that used to act as a master that all other anchors synchronized to. The drawback is that we get collisions when two anchors happen to transmit at the same time. Experiments showed that at 400 packets/s (system rate) we ended up with a packet loss of around 15% and 340 TDoA measurements/s sent to the kalman filter for position estimation. We figured that this was an acceptable level and added an algorithm in the anchors that reduces the transmission rate based on the number of anchors around them. If more anchors are added to a room, they all reduce their transmission rate to target 400 packets/s in total system rate.
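
A sketch of the idea (the exact algorithm in the anchor firmware may differ; the jitter range and target rate below are illustrative):

```python
import random

SYSTEM_RATE = 400.0  # target total packets/s for the whole system

def next_tx_delay(num_anchors_heard):
    """Delay in seconds until this anchor's next transmission.

    Each anchor divides the system rate by the number of anchors it hears
    (plus itself) and adds jitter so that transmissions do not synchronise.
    """
    my_rate = SYSTEM_RATE / max(1, num_anchors_heard + 1)
    mean_interval = 1.0 / my_rate
    # Randomise around the mean interval to avoid repeated collisions
    return random.uniform(0.5 * mean_interval, 1.5 * mean_interval)

# Example: an anchor that hears 19 others targets ~20 packets/s on average
print(next_tx_delay(19))
```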

The anchors continuously keep track of the clock drift of all other anchors by listening to the messages that are transmitted. We know that clocks do not change frequency suddenly and can use this fact to filter the clock correction to reduce noise in the data. Outliers are detected and removed, and the resulting correction is low pass filtered. We have done some experiments on using this information and comparing it to the time stamp of a received message to detect whether the time stamp is corrupt, but this idea requires more work.
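
A sketch of the filtering idea (outlier rejection followed by a first-order low pass filter; the threshold and filter constant are illustrative, not the values used in the firmware):

```python
class ClockCorrectionFilter:
    """Track the clock drift of a remote anchor and reject outliers."""

    def __init__(self, alpha=0.02, max_deviation=20e-6):
        self.alpha = alpha                  # low pass filter constant
        self.max_deviation = max_deviation  # maximum believable change in drift
        self.correction = None

    def update(self, raw_correction):
        """raw_correction is the ratio of remote to local clock ticks over the
        interval between the two latest received packets (close to 1.0)."""
        if self.correction is None:
            self.correction = raw_correction
            return self.correction

        # Clocks do not change frequency suddenly: reject samples that deviate
        # too much from the filtered estimate.
        if abs(raw_correction - self.correction) > self.max_deviation:
            return self.correction  # outlier, keep the old estimate

        # Low pass filter the accepted samples
        self.correction += self.alpha * (raw_correction - self.correction)
        return self.correction
```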

One interesting constraint in the anchors is the limited CPU power that is available. The strategy we have chosen to handle this is to make the message handling as efficient as possible. A timer based maintenance algorithm (running at 1 Hz) examines the received data, makes decisions on which anchors to include in future messages and purges old data.

The Crazyflie

The implementation in the Crazyflie is fairly straightforward. The biggest change compared to TDoA 2 is that we now can handle a dynamic number of anchors and have to choose what data to store and what to discard. We have also extracted the actual TDoA algorithm into a module to separate it from the TDoA 3 protocol. The clock correction filtering algorithm from the anchors has also been implemented in the Crazyflie.

An experimental module test has been added where the TDoA module is built and run on a PC using data recorded from a sniffer. We get repeatability as well as better tools for debugging and this is something that we should explore further.

Work remaining 

The estimated position in the Crazyflie is still noisier than in TDoA 2 and we would like to improve it to at least the same level. We see outliers in the TDoA measurements that make the Crazyflie go off in a random direction from time to time; we believe it should be possible to get rid of most of these.

The code is fairly hackish and there are no structured unit or module tests to verify functionality. So far the work has been in an exploratory phase, but we are getting closer to a set of algorithms that we are happy with and that are worth testing.

We have not done any work on the client side yet, that is, support for visualizing and configuring the system. This is a substantial amount of work and we will not officially release TDoA 3 until it is finished.

How to try it out

If you are interested in trying TDoA 3 out yourself, it is all available on github. There are no hardware changes, so if you have a Loco Positioning system it should work just fine. There is a short description on the wiki of how to compile and configure the system. The anchor supports both TDoA 2 and TDoA 3 through configuration, while the Crazyflie has to be recompiled to switch between the two. The support in the client is limited and will basically only handle anchors 0-7.

Have fun!

First of all we are happy to announce that (almost) all products have been stocked in the new warehouse and are now shipping! The last orders that were on hold are on their way out and new orders placed in the store will now be shipped again within a few days.

We released the TDoA mode, a.k.a. swarm mode of the Loco Positioning System back in January. TDoA supports positioning of many Crazyflies simultaneously which makes it possible to fly a swarm of Crazyflies with the LPS system. The release in January was actually the second iteration of the TDoA implementation (the first iteration was never publicly released) and it is also known as TDoA 2.

TDoA 2 works well but there are a couple of snags that we would like to fix and we have now started the work on the next iteration, TDoA 3. 

Single point of failure

TDoA 2 is based on a fixed transmission schedule with time slots when each anchor transmits its ranging packet. All anchors listen to anchor 0 and use the reception of a packet from anchor 0 to figure out when to transmit. The problem with this solution is that if anchor 0 stops transmitting for some reason the full system will stop transmitting positioning information. This is clearly a property that would be nice to get rid of.

Limited number of anchors

The packets in the TDoA 2 protocol have 8 slots for anchor data that are implicitly addressed through their position in the packet: the first slot is anchor 0, the second slot anchor 1, and so on. This setup is easy to use but creates an upper limit of 8 anchors in the system.

The maximum radio reach of an anchor depends mainly on the transmitted power and the environment. This distance, in combination with the maximum of 8 anchors and the requirement that all anchors must be in range of anchor 0, sets an upper limit on the volume that an LPS system can cover: basically one large room. When we designed TDoA 2 we were happy to be able to support a swarm of Crazyflies and did not really bother too much about the covered volume. We get more and more questions about larger areas and more anchors though, and it would be nice to have a positioning system that can be expanded.

The solution – maybe…

What we want to do in TDoA 3 is to transmit packets at random times and add functionality to handle the collisions and packet loss that will happen in a system like this. The idea is that even if some data is lost, the receiving side will get enough packets to be able to calculate the distance to other anchors or a position, as needed. By removing the time slots and the synchronization to anchor 0, we get rid of the single point of failure.

In the TDoA 3 protocol we have added explicit ids to the anchor data, and thus removed the implicit addressing of anchors. We have 8 bits for anchor ids, so the system will handle 256 anchors for sure. We do think it will be possible to design larger systems though, by reusing ids and making sure that the radio ranges of anchors with the same id do not overlap.

The UWB radios have a nice property that makes collisions a bit easier to handle than one might first think: if they receive two packets at the same time, they will most likely “pick” one of the packets and discard the other. The drawback is that the receive time of the picked packet will likely be less accurate. We are not completely sure it will be possible to detect and handle the added noise in the time stamps, but we have good hope!

The current state of the project

Last week we did a proof of concept hack where we modified the old TDoA 2 implementation to transmit at random times and made minor modifications to handle the random receive order of packets. It all worked out beautifully and we could fly a short sequence in the office with the new mode. The estimated position was a bit shakier, which is not surprising considering that the receive times are noisier.

We have just started on the real deal. We have designed a draft spec of the protocol and have also started to implement the new protocol on top of the old TDoA 2 algorithms in the anchors and the Crazyflie to get going. Next steps will be to introduce random transmission times, dynamic anchor management and better error handling. The TDoA 3 implementation will exist in parallel with the current TDoA 2 implementation and should not interfere with it.

If you want to contribute, are interested in what we do or have some input, please comment on this blog post or contact us in any other way.

I’ve spent the last 5 years of my career at Microsoft on the team responsible for HoloLens and Windows Mixed Reality VR headsets. Typically, augmented reality applications deal with creating and manipulating digital content in the context of real-world surroundings. I thought it’d be interesting to explore some applications of using an augmented reality device to manipulate and control physical objects and have them interact with the real world and/or digital content.

Phase 1: Gesture Input

The HoloLens SDK has APIs for consuming hand gestures as input. For the first phase of this project, I modified the existing Windows UAP/UWP client to handle these gestures and convert them to CRTP setpoints. I used the “manipulation gesture” which provides offsets in three dimensions for a tap-and-drag gesture, from the point in space where the initial tap occurred. These three degrees of freedom are mapped to thrust, pitch and roll.
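
The actual implementation lives in the Windows UWP client, but the mapping itself is simple; a hedged illustration in python using the cflib commander API (the gains, clamping limits and axis conventions below are assumptions for illustration, not the values used in the HoloLens client):

```python
def gesture_to_setpoint(dx, dy, dz):
    """Map manipulation-gesture offsets (metres from the initial tap point)
    to a roll/pitch/thrust setpoint. Gains and limits are illustrative."""
    roll = max(-20.0, min(20.0, dx * 40.0))    # degrees, hand right = roll right
    pitch = max(-20.0, min(20.0, -dz * 40.0))  # degrees, push away = pitch forward
    thrust = int(max(10001, min(60000, 38000 + dy * 30000)))  # raise hand = more thrust
    yaw_rate = 0.0                             # no yaw control in phase 1
    return roll, pitch, yaw_rate, thrust

# With a connected cflib Crazyflie instance cf, send at ~50 Hz while the gesture
# is active (a zero setpoint must be sent first to unlock the thrust protection):
# cf.commander.send_setpoint(0, 0, 0, 0)
# cf.commander.send_setpoint(*gesture_to_setpoint(dx, dy, dz))
```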

For the curious, there’s an article on my website with details about the implementation and source code. Here’s a YouTube video where I explain the concept and show a couple of quick demos.

As you can see in the first demo in the video, this works but isn’t entirely useful or practical. The HoloLens accounts for head movements (otherwise moving the head to the left would produce the same offset as moving the hand to the right, requiring the user to keep his or her head very still) but the user must still take care to keep the hand in the field of view of the device’s cameras. Once the gesture is released (or the hand goes out of view) the failsafe engages and the Crazyflie drops to the ground. And of course, lack of yaw control cripples the ability to control the Crazyflie.

Phase 2: Position Hold

Adding a flow deck makes for a more compelling user experience, as seen in the second demo in the video above. The Crazyflie uses the sensors on the flow deck to hold its position. With this functionality, the user is free to move about the room and make shorter “adjustment” hand gestures, instead of needing to hold very still. In this mode, the gesture’s degrees of freedom map to an x/y velocity and a vertical offset from the current z-depth.
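
With the flow deck, the same idea maps naturally onto the hover setpoint; another hedged python illustration (again, the gains, limits and axis conventions are assumptions):

```python
def gesture_to_hover_setpoint(dx, dy, dz, current_z):
    """Map gesture offsets to an x/y velocity and an absolute height."""
    vx = max(-0.5, min(0.5, -dz * 1.0))      # m/s, push away = forward
    vy = max(-0.5, min(0.5, -dx * 1.0))      # m/s, hand left/right = sideways
    z = max(0.2, min(2.0, current_z + dy))   # m, clamp to the flow deck's useful range
    yaw_rate = 0.0
    return vx, vy, yaw_rate, z

# cf.commander.send_hover_setpoint(*gesture_to_hover_setpoint(dx, dy, dz, z)) at ~50 Hz
```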

This is a step in the right direction, but it still has limitations. The HoloLens doesn’t know where it is in space relative to the Crazyflie. A gesture in the y axis relative to the device will always result in a movement in the y direction of the Crazyflie, which begins to feel unnatural if the user moves around. Ideally, gestures would cause the Crazyflie to move in the same direction relative to the user, not relative to the ‘front’ of the Crazyflie. Also, there’s still no control over yaw.

The flow deck has some limitations as well: the z-range only goes up to 2 meters with any accuracy, and the flow sensor (used for lateral stabilization) has a strong dependency on the pattern of the floor below. A flow sensor is a camera that relies on measuring pixel deltas from frame to frame, so if the floor is blank or has a repeating pattern, it can be difficult to hold position properly.

Despite these limitations, using hand gestures to control the Crazyflie with a flow deck installed is actually quite fun and surprisingly easy.

Phase 3 and Beyond: Future Work & Ideas

I’m currently working on some new features that I hope will open the door for more interesting applications. All of what follows is a work in progress, and not yet implemented or functional. Dream with me!

Shared Coordinate System

The next phase (currently a work in progress) is to get the HoloLens and the Crazyflie into a shared coordinate system. Having spatial awareness between the HoloLens and the Crazyflie opens up some very exciting scenarios:

  • The orientation problem could be improved: transforms could be applied to gestures to make the Crazyflie respond to commands in the user’s frame of reference (so ‘pushing’ away from oneself would cause the Crazyflie to fly away from the user, instead of in whatever direction is ‘forward’ from the Crazyflie’s perspective).
  • A ‘follow me’ mode, where the Crazyflie autonomously follows behind a user as he or she moves through the space.
  • Ability to walk around and manually set waypoints by selecting points of interest in the environment.

The Loco Positioning System is a natural fit here. A setup step (where a spatial anchor or similar is established at the same physical position and orientation as the LPS origin) and a simple transform for scale and orientation (the HoloLens and the Crazyflie define X, Y and Z differently) would allow the HoloLens and Crazyflie to operate in a shared coordinate system. One could also use the webcam on the HoloLens along with computer vision techniques to track the Crazyflie, but that would require constant line of sight from the HoloLens to the Crazyflie.
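
A sketch of such a transform, assuming the HoloLens frame is right-handed with Y up and Z towards the viewer and the LPS frame is right-handed with Z up (the exact axis mapping, any scale factor and the anchor placement must be verified against the real setups):

```python
import numpy as np

# Assumed rotation mapping HoloLens axes (X right, Y up, Z backwards)
# to LPS axes (X forward, Y left, Z up). Verify against your actual setup.
R_HOLO_TO_LPS = np.array([
    [0.0, 0.0, -1.0],   # LPS x = -holo z
    [-1.0, 0.0, 0.0],   # LPS y = -holo x
    [0.0, 1.0, 0.0],    # LPS z =  holo y
])

def holo_to_lps(p_holo, anchor_origin_lps):
    """Transform a point from the HoloLens spatial-anchor frame to the LPS frame.

    p_holo: 3-vector expressed relative to the shared spatial anchor.
    anchor_origin_lps: position of that anchor expressed in the LPS frame.
    """
    return R_HOLO_TO_LPS @ np.asarray(p_holo) + np.asarray(anchor_origin_lps)

print(holo_to_lps([1.0, 0.0, 0.0], [0.0, 0.0, 0.0]))
```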

Obstacle Detection/Avoidance

(Image: example surface map produced by the HoloLens)

The next step after establishing a shared coordinate system is to use the HoloLens for obstacle detection and avoidance. The HoloLens has the ability to map surfaces in real time and position itself in that map (SLAM). Logic could be added to the HoloLens to consume this surface map and adjust pathing/setpoints to avoid obstacles, without adding to the limited compute/power budget of the Crazyflie itself.

Swarm Control and Manipulation

As a simple extension of the shared coordinate system (and what Bitcraze has been doing with TDoA and swarming lately) the HoloLens could be used to manipulate individual Crazyflies within a swarm through raycasting (the same technique used to gaze at, select and move specific holograms in the digital domain). Or perhaps a swarm could be controlled to move out of the way as a user passes through the swarm, and return to formation afterward.

Augmenting with Digital Content

All scenarios discussed thus far have dealt with using the HoloLens as an input and localization device, but its primary job is to project digital content into the real world. I can think of applications such as:

  • Games
    • Flying around through a digital obstacle course
    • First person shooter or space invaders type game (the Crazyflie moves around to avoid the user, or fires rendered laser pulses at the user, etc.)
  • Diagnostic/development tools
    • Overlaying some diagnostic information (such as battery life) above the Crazyflie (or each Crazyflie in a swarm)
    • Set or visualize/verify the position of the LPS nodes in space
    • Visualize the position of the Crazyflie as reported by LPS, to observe error or drift in real time

Conclusion

There’s no shortage of interesting applications related to blending augmented reality with the Crazyflie, but there’s quite a bit of work ahead to get there. Keep an eye on the Bitcraze blog or the forums for updates and news on this effort.

I’d love to hear what ideas you have for combining augmented reality devices with physical devices like the Crazyflie. Leave a comment with thoughts, suggestions, or any other relevant work!