A new feature of Google Assistant debuted this week at Google’s I/O 2018 conference: a shockingly human-like phone call was placed by what CEO Sundar Pichai dubbed “Google Duplex.” If you haven’t seen it yet, here’s the video:
Stunning, right? The realistic “ummms” and “ahhhs,” the ability to understand complex sentences and accents. Perhaps most impressive was Google Duplex’s ability to keep its cool in the face of difficult conversational moments, such as with the restaurant, in which a human may have become frustrated or hung up.
Spear Phishing 3.0
Now imagine a DDoS attack made of realistic phone calls that overwhelm your employees, networks, and resources. Also stunning, right? Or maybe terrifying is a better word.
Or imagine your employees being cyberbullied into complying with ransomware demands, made even scarier by a human-like voice on the other end of the line.
Imagine a complex AI calling your company as part of a social engineering campaign that includes specific information sprinkled in about the target, just like today’s spear phishing attacks, in which the names of family members, co-workers, or known events are referenced.
“I’m calling to get the updated W2s; you wouldn’t want me to have to call your boss for them, would you? Jenny in accounting said you could provide them for me,” a cybercriminal’s “assistant” might say in the not-so-distant future. At worst, it could be configured to sound close enough to the voice of Jenny in accounting to skip the name drop altogether.
It’s like something out of an episode of Black Mirror. What happens when there’s enough information on an employee for blackmail to occur? What percent of employees will risk their well-kept secrets going public rather than release company IP to an undisclosed digital agent?
AI and The Social Threat
According to the 2018 Verizon Data Breach Investigations Report, companies are nearly three times more likely to be breached via social attacks than via actual vulnerabilities. It’s no secret why.
Humans are by nature social creatures, which means we’re susceptible to the influence of other humans, whether in response to alarmism (“My boss will kill me if I don’t get this information to him immediately”), threats (“Give me the data or I’ll tell your wife about that call you made at 3am last Tuesday”), or anger (“Give me the data or else I’ll do everything in my power to make sure you don’t have a job tomorrow!”). Multiply this to the level of a bot army? Things could go wrong very, very quickly.
And the scary question is – who’s to say it’s not happening already? In the book Future Crimes: Inside the Digital Underground and the Battle for Our Connected World, author Marc Goodman repeatedly points out that cybercriminals are far more advanced than our hooded, basement-dwelling stock images give them credit for. They tend to be at the forefront of technology. In fact, we’re often the ones catching up to them, from both a consumer and a law enforcement perspective.
And Goodman would know: he’s spent his life in law enforcement, working with the FBI and Interpol, and is the founder of the Future Crimes Institute and chair for Policy, Law & Ethics at Singularity University.
Not a Question of “Will They?” but of “When Will They?”
Luckily for us, Google Duplex’s capabilities are built on vast amounts of data from anonymized phone calls. High-tech criminals likely don’t have the sheer volume of data needed to create this type of AI – yet. But once Google Duplex is released, you can be sure they’ll be working hard to reverse engineer it, corrupt it, and use it for personal gain. Just like they always do.