LULU GARCIA-NAVARRO, HOST:
Information warfare may be as old as war itself. And lawmakers are now addressing its newest weapon. Top lawyers from Google, Facebook and Twitter testified this week about Russian interference in the 2016 presidential election. One of the terms on the agenda - social bots, computer programs that mimic real people on social media. NPR's Eric Westervelt has our story.
ERIC WESTERVELT, BYLINE: Experts who hunt bots for a living aren't convinced the tech giants are doing enough to combat the malicious use of social bots, programs that imitate human behavior on the web or your digital device.
DAN KAMINSKY: We have this joke in computer security. It's a game. It's called nation-state or teenager because any particular attack really could be both.
WESTERVELT: That's Dan Kaminsky, co-founder of the cybersecurity firm White Ops. You could call him one of the nation's foremost bot hunters. Bots are everywhere. For example, they help you with virtual customer service. And on your smartphone, chatbots help dictate messages. But each bot can be programmed with its own unique identity, Kaminsky says, and used easily for ad fraud or to spread misinformation and propaganda as we saw during last year's presidential race.
KAMINSKY: There are absolutely clever bots out there. But it's not like these are, you know, wild, artificial intelligences. They're no more artificially intelligent than the printing press.
WESTERVELT: But in the last few years, Kaminsky says, he began to see exceptions to that. He helped take down what's been dubbed Methbot. Russian hackers using so-called bot farms created thousands of fake websites to dupe web marketers into buying millions of dollars' worth of video ads. Companies thought thousands of real people had viewed these ads. In fact, they were bots designed to imitate real web surfers. It was one of the largest ad fraud attacks in the history of the web. Kaminsky says Methbot was new - a very scalable, sophisticated criminal operation and, in retrospect, something of a Russian-bot shot across the bow.
KAMINSKY: Custom coding, custom engineering, custom operations - those were certainly well beyond, you know, your average 12-year-old in Poughkeepsie. And it tells you things about other similar attacks.
WESTERVELT: Methbot was ad fraud. But Russian hackers used social bots in new, effective ways in the election. At the hearings on Capitol Hill, Clint Watts of the Foreign Policy Research Institute warned senators that the social bot problem will only get worse if tech giants don't take stronger collective action now.
(SOUNDBITE OF ARCHIVED RECORDING)
CLINT WATTS: They can create accounts that look like you and talk like you, which makes you more likely to believe it. The other thing is it can replicate a message so many times - the more times you see it, the more likely you are to believe it. So it can actually create false worlds in the social media space.
WESTERVELT: Research shows that many of the Russian social bots were programmed to direct tweets at users with lots of followers and influence. That helped some false claims spread farther, faster. Bot expert Sam Woolley is research director of the Digital Intelligence Lab. He says he heard too much in the hearings about Russian political ads and not enough about how tech companies might work together to combat social bots.
SAM WOOLLEY: There are so many other mechanisms, from group pages on Facebook to private rooms on Twitter, that can be used to spread propaganda and misinformation. And we know they were used by the Russian government to do that. So we need to move the focus beyond just the political advertisements and toward the larger-scale attacks that were going on.
WESTERVELT: The tech firms insist they're taking adequate action. But the hackers and their bot armies on the march, Woolley says, are often one step ahead. Eric Westervelt, NPR News, San Francisco. Transcript provided by NPR, Copyright NPR.