Bruce Schneier knows we all have a lot to worry about these days, but the security researcher for the Harvard Kennedy School has one more thing that may keep you up at night: AI hackers.
Schneier’s eye-opening talk at the all-virtual RSAC 2021 conference examined the consequences, positive and negative, of artificial intelligence learning to hack all kinds of systems. The ongoing COVID-19 pandemic forced him and other RSAC participants to present via video this year, but that comfortable setting didn’t blunt Schneier’s concerns.
“Any good AI system will naturally find hacks,” said Schneier. Because these systems lack human context, they find novel solutions, and some of those solutions will break the expectations humans built into the system. In other words, a hack.
This is especially true for computers. “We never close off all the avenues for hacking,” he said, positing that once AI systems start looking for hacks, vulnerabilities will be found at a scale humans are simply unprepared to handle.
Schneier believes that, initially, AI analysis will favor hackers. “When AIs are able to discover vulnerabilities in computer code, it will be a boon to hackers everywhere,” said Schneier.
Over time, however, he believes the advantage will shift to the defenders, the good guys. The same technology used to discover and exploit vulnerabilities can also be used to find and fix them before attackers strike. “We can imagine a future where software vulnerabilities are a thing of the past,” Schneier argued.
Schneier was careful to position his talk in a real-world context. A “hack” can be something a system permits but its designers never intended, like finding loopholes in tax codes, or AlphaGo, the AI that defeated a champion Go player four games to one, making a move no human would ever choose. Thus, AI hacks may not require hyper-intelligent androids or even evil intent. They could materialize in the near term, without any significant breakthroughs in the field of machine learning, and get the job done without anyone even realizing it.
In his talk, Schneier drew attention to longstanding criticisms of AI and machine learning. For one, many of these systems are “black boxes”: inputs go in and solutions come out, but humans don’t understand how the solutions are produced. For another, AI decision-making can have unintended consequences, like recommendation engines that push racist or extremist content because that’s what their human overlords feed them.
Schneier’s concerns may sound far-fetched, but machine learning and malicious applications of AI came up several times at RSAC 2021. It seems this sci-fi concept is far from fiction.