Drones as the New “Flying IoT”: They’ll Track People and Deliver Goods Using a New Low-Power Architecture to Juice the Apps While Staying Aloft

By Lori Cameron
Published 01/10/2018


Soon, drones that think like humans will automatically detect and film athletes in motion, track criminals, and deliver packages right to your door.

As with any intelligent system, however, such machine learning can drain energy, so a new study shows how to keep power usage down and drones aloft much longer by shifting a drone’s processing workload to a sensor-cloud architecture.

“Drones are an emerging form of new IoT devices, flying in the sky with full network connectivity capabilities. Intelligent drones with cognitive computing skills need the capability to automatically recognize and track objects to free users from the tedious task of controlling them, all of which must be performed within the power-constrained environment of a Li-Po battery,” write Hasan Genc, Yazhou Zu, Ting-Wu Chin, Matthew Halpern, and Vijay Janapa Reddi, authors of “Flying IoT: Toward Low-Power Vision in the Sky,” which appears in the November/December 2017 issue of IEEE Micro.

Researchers from the University of Texas at Austin and Carnegie Mellon University have devised a sensor-cloud architecture that partitions data collection and processing between the edge and the cloud, so that drones like Amazon Prime Air consume less power and can remain aloft longer.

Drones: the new Flying IoT

The University of Texas at Austin researchers borrowed the object detection feature of Follow the Leader—an app that helps you keep track of someone you are following if you lose sight of them—and modified it for other intelligent apps.

“We study an application called Follow the Leader, which automatically detects, tracks, and follows a moving human target. The application is centered on a machine learning task called object detection, which is its most computationally intensive kernel. The ability to perform basic computer vision tasks like object detection is a necessary step toward new and intelligent applications such as sports photography and package delivery,” the authors said.

The researchers refer to this cognitive drone platform as the domain of “Flying IoT,” or the “Flying Internet of Things.”

Reducing drones’ cognitive workload

The heavy computation a drone needs to detect and track a person on its own demands larger, heavier hardware, which significantly reduces flight time. If the goal is to track someone over long distances, this is a problem.

Performance breakdown when running Follow the Leader on a high-performance computer (TX1).

“Cognitive applications are often very computationally intensive, which makes them difficult to run on embedded computers that have low-power, lightweight, small-size design requirements. For example, state-of-the-art machine-learning models for image classification, such as ResNet, require gigaflops to process a single image. Computing these models in real time requires many ALUs in a processor, increasing chip size and consequently leading to larger heatsinks and larger, heavier devices, which increases the drones’ take-off weight and decreases their flight time,” added the authors.
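
To get a feel for those numbers, here is a rough, illustrative calculation; the per-image cost is an assumption in the ballpark of ResNet-class models, not a figure from the paper:

    # Back-of-the-envelope compute budget (illustrative values, not from the paper).
    flops_per_image = 4e9          # assume ~4 GFLOPs per frame for a ResNet-class model
    target_fps = 10                # the real-time goal the authors set (see below)
    sustained = flops_per_image * target_fps
    print(f"~{sustained / 1e9:.0f} GFLOPS sustained, before tracking or flight control")
    # -> ~40 GFLOPS of continuous throughput just for object detection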

Devising an alternative architecture for drones

The solution is to offload the computationally heavy work, such as object detection and analysis, to the cloud, while data collection stays at the edge on the drone itself.

Sensor-cloud architecture reduces the energy consumed per frame for the Follow the Leader application: (a) i.MX6 and (b) TX1. Images are unscaled and compressed using PNG. Note that the y-axis of (a) is scaled to three times the y-axis of (b).

“We propose a sensor-cloud system to bring server-level computational capability to low-power IoT devices such as drones. In a sensor-cloud system, computationally intensive tasks are offloaded to the cloud while data collection tasks are done at the edge,” say the authors.
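
As a rough illustration of that split (a hypothetical sketch, not the authors' implementation), the drone-side code below captures a frame and ships it to a cloud detection service; the CLOUD_URL endpoint, the capture_frame helper, and the JSON response format are all assumptions:

    import io
    import requests
    from PIL import Image

    CLOUD_URL = "https://example.com/detect"   # hypothetical detection endpoint

    def capture_frame():
        """Stand-in for the drone's camera read; returns a PIL image."""
        return Image.open("frame.jpg")

    def detect_in_cloud(frame):
        """Edge side: serialize the frame; the cloud side runs the heavy detector."""
        buf = io.BytesIO()
        frame.save(buf, format="PNG")              # data collection stays at the edge
        resp = requests.post(CLOUD_URL,            # compute-intensive detection is offloaded
                             data=buf.getvalue(),
                             headers={"Content-Type": "image/png"},
                             timeout=1.0)
        return resp.json()                         # assumed: a list of labeled bounding boxes

    boxes = detect_in_cloud(capture_frame())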

With greater energy efficiency, the drone can stay aloft longer while keeping pace with its target, which helps when chasing down the guy who stole your car or following the New York Giants' Odell Beckham into the end zone.

“We create a Follow the Leader application where our drone detects and follows a moving human target in real time. Many smart drones, in domains from security to sports photography, must be capable of following human targets, whether to record video of a quickly moving athlete or to monitor a suspicious individual in a crowd. To detect targets, our application runs object-detection algorithms on images taken from the drone’s cameras. The drone then flies along the horizontal plane, centering its target in the middle of the drone’s field of view. An autonomous drone application has strict real-time requirements, because it must fetch, analyze, and react to sensory data quickly enough to avoid its moving targets from exiting its field of view. In this work, we set a real-time performance goal of 10 frames per second (fps),” the authors say.
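
The steering logic described in that passage, keeping the detected person centered horizontally in the field of view, can be sketched as a simple proportional controller; the detection format, the gain, and the send_velocity_command flight-control call are illustrative assumptions, not the paper's implementation:

    FRAME_WIDTH = 640          # assumed camera resolution
    KP = 0.005                 # proportional gain (illustrative)

    def follow_step(detection, send_velocity_command):
        """One control step: steer so the person's bounding box stays centered horizontally.

        `detection` is assumed to be a dict like {"box": [x, y, w, h]} for one person;
        `send_velocity_command(lateral, forward)` stands in for the flight-controller API.
        """
        if detection is None:
            send_velocity_command(lateral=0.0, forward=0.0)   # hover if the target is lost
            return
        x, y, w, h = detection["box"]
        error = (x + w / 2.0) - FRAME_WIDTH / 2.0             # pixels off-center
        send_velocity_command(lateral=KP * error, forward=0.2)

At 10 fps, a step like this would run every 100 milliseconds, which is why the detection stage has to keep up.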

A dataset of people

The trick is to design a drone intelligent enough to recognize its target in a crowd and powerful enough to track its target for long distances.

“The challenge of performing object detection on drones is to balance performance and power efficiency. To operate successfully, Follow the Leader must detect a person multiple times per second, or it could lose the person it is attempting to track. This real-time performance requirement is difficult to satisfy on extremely low-power CPUs, or even GPUs, even for simple shallow machine learning models. For complex multiclass deep models like convolutional neural networks (CNNs), desktop or server-level processors are required. So, extremely low-power, low-performance processors alone are not sufficient without hardware specializations, but hardware specialization introduces design complexity and nonrecurring engineering costs,” say the authors.

To fine-tune their design, they tested the drone using a large, standard dataset of images of people.

“To ensure a controlled test environment, we evaluate the drone application indoors, with the drone stationary, replacing its camera input with ‘positive test’ images from the INRIA Person dataset. The drone reads images from the dataset and treats them as inputs from its own camera, attempting to fly toward the people in those images,” the authors say.
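
A controlled test of that kind can be approximated by feeding dataset images through the same pipeline in place of live camera frames; the directory path and file pattern below are assumptions for illustration:

    import glob
    from PIL import Image

    def dataset_frames(pattern="INRIAPerson/Test/pos/*.png"):
        """Yield 'positive test' images as if they were frames from the drone's camera."""
        for filename in sorted(glob.glob(pattern)):
            yield Image.open(filename)

    # The drone stays stationary indoors; each dataset image replaces a live camera
    # frame and runs through the same detect-and-follow pipeline used in flight.
    for frame in dataset_frames():
        print(frame.size)   # here the frame would be handed to the detection/offload step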

They found that downscaling and compressing images increased efficiency while sacrificing only minimal accuracy.

“We demonstrate how various common software-level optimizations, such as image downsampling and lossy compression, can trade small accuracy loss for significant performance and energy efficiency improvements,” added the authors.
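
In code, that kind of optimization amounts to downscaling each frame and re-encoding it with a lossy format before it is processed or transmitted; the scale factor and JPEG quality below are illustrative, and the right settings depend on how much accuracy loss the detector can tolerate:

    import io
    from PIL import Image

    def shrink_for_uplink(frame, scale=0.5, jpeg_quality=70):
        """Downsample and lossily re-encode a frame to cut per-frame transfer and compute cost."""
        small = frame.resize((int(frame.width * scale), int(frame.height * scale)))
        buf = io.BytesIO()
        small.save(buf, format="JPEG", quality=jpeg_quality)
        return buf.getvalue()

    frame = Image.open("frame.jpg")                     # stand-in for a captured camera frame
    payload = shrink_for_uplink(frame)
    print(f"compressed frame: {len(payload) / 1024:.1f} KB")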

The researchers see tremendous potential for their sensor-cloud architecture as integral to the new Internet of Things.

“The Internet of Things is entering a new paradigm where devices on the edge need both cognitive capability and the ability to interact directly with their environments in real time. Currently, compression on drone processors is typically done by CPUs. However, by developing specialized compression accelerators, researchers can alleviate the pressure put on CPUs, dramatically improving performance,” they say.

About Lori Cameron

Lori Cameron is a Senior Writer for the IEEE Computer Society and currently writes regular features for Computer magazine, Computing Edge, and the Computing Now and Magazine Roundup websites. Contact her at l.cameron@computer.org. Follow her on LinkedIn.