Deep learning is coordinating drones so they can flock together without colliding.
What’s new: Caltech researchers Soon-Jo Chung and Yisong Yue developed a pair of models that enables swarms of networked drones to navigate autonomously through cluttered environments.
How it works: Sensors on each drone collect real-time data that are shared across the swarm. A neural network called GLAS plans each drone’s actions, while another called Neural-Swarm helps compensate for wind caused by nearby fliers.
- The authors trained GLAS via imitation learning on synthetic maps randomly populated with obstacles and drones. A global planner computed an optimal route for each synthetic drone based on the relative positions of other objects, yielding an expert action for each timestep (see the first sketch after this list).
- At flight time, each drone computes an action for each timestep using only information from its immediate surroundings.
- The authors trained Neural-Swarm using curriculum learning, which starts with easy examples and gradually progresses to harder ones. Starting with two quadcopters, then three and four, Neural-Swarm learned to predict the aerodynamic forces generated by neighboring propellers (see the second sketch after this list).
- In operation, the drones use these predictions to counteract turbulence generated by nearby rotors.
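To make the imitation-learning recipe concrete, here is a minimal PyTorch sketch of the GLAS-style training and flight-time loops. The observation layout, network shape, and the `expert_action` labeler are assumptions for illustration, not the paper’s actual design.

```python
import torch
import torch.nn as nn

OBS_DIM = 16   # assumed: relative positions of nearby drones/obstacles plus the goal
ACT_DIM = 2    # assumed: a planar velocity command

# Small feed-forward policy standing in for GLAS; the real model is more
# involved (it must handle a variable number of neighbors).
policy = nn.Sequential(
    nn.Linear(OBS_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, ACT_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def expert_action(obs):
    """Hypothetical stand-in for the global planner that labels every
    timestep on the synthetic maps with an optimal action."""
    return -obs[:, :ACT_DIM]

# Imitation learning: regress the policy's outputs onto the expert's labels.
for step in range(1000):
    obs = torch.randn(256, OBS_DIM)              # synthetic local observations
    loss = nn.functional.mse_loss(policy(obs), expert_action(obs))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Decentralized execution: at flight time each drone feeds only its own
# local observation through the shared policy, one action per timestep.
with torch.no_grad():
    local_obs = torch.randn(16, OBS_DIM)         # 16 drones, local sensing only
    actions = policy(local_obs)                  # no central planner in the loop
```

The key design point is that the expert (the global planner) sees the whole map during training, while the learned policy only ever receives local observations, which is what lets it run decentralized at flight time.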
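And a companion sketch of the curriculum-learning recipe for Neural-Swarm, again under stated assumptions: `simulate_formation` is a hypothetical data source standing in for real flight measurements, and the state layout and network are illustrative.

```python
import torch
import torch.nn as nn

STATE_DIM = 6       # assumed: a neighbor's relative position and velocity
MAX_NEIGHBORS = 3   # enough for the four-quadcopter stage of the curriculum

# Predicts the residual aerodynamic force on a drone from neighbor states.
residual_net = nn.Sequential(
    nn.Linear(STATE_DIM * MAX_NEIGHBORS, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),   # predicted interaction force (x, y, z)
)
optimizer = torch.optim.Adam(residual_net.parameters(), lr=1e-3)

def simulate_formation(n_drones, batch=256):
    """Hypothetical data source: neighbor states (zero-padded up to
    MAX_NEIGHBORS) and a measured residual force for each sample."""
    neighbors = torch.zeros(batch, MAX_NEIGHBORS, STATE_DIM)
    neighbors[:, : n_drones - 1] = torch.randn(batch, n_drones - 1, STATE_DIM)
    force = torch.randn(batch, 3)   # placeholder for flight measurements
    return neighbors.flatten(1), force

# Curriculum: start with two quadcopters, then add a third and a fourth,
# so the network masters pairwise downwash before denser interactions.
for n_drones in (2, 3, 4):
    for step in range(500):
        x, f_measured = simulate_formation(n_drones)
        loss = nn.functional.mse_loss(residual_net(x), f_measured)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# In operation, the controller subtracts the predicted interaction force
# from its command so each drone holds course despite neighbors' downwash.
with torch.no_grad():
    x, _ = simulate_formation(4, batch=1)
    nominal_thrust = torch.tensor([0.0, 0.0, 9.81])   # assumed hover command
    corrected = nominal_thrust - residual_net(x).squeeze(0)
```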
Results: The authors tested GLAS and Neural-Swarm separately. Sixteen drones piloted by GLAS navigated a variety of obstacle courses 20 percent more effectively than a state-of-the-art motion-planning algorithm. Drones controlled by Neural-Swarm stayed on course four times better than those guided by a baseline linear tracking controller.
Why it matters: Drones capable of maneuvering safely in swarms could aid urban search and rescue operations, accelerate industrial inspections, and provide comprehensive aerial mapping.
We’re thinking: Is anyone else excited to see drone shows even more spectacular than the one that lit up the 2018 Olympics?