Not all Twitter bots have bad intentions
TWITTER BOTS DON'T have a good reputation. If they're not indiscriminately firing out links to private cam shows, they're shilling for political causes with dimwitted partisanship and unfathomable hashtags.
But bots have a bad rap - or at least that's the opinion of Mike Cook, an engineer from Queen Mary University of London and 'father' of many quirky, unusual Twitter bots. He and Tony Veale from University College Dublin have written a book about the whimsical creativity of Twitter bots and will be taking to the stage at New Scientist Live next month to make the case on their behalf.
In fact, Cook tells me he's made a new bot especially for the show: one that automatically generates headlines for New Scientist, with several versions to show how human choices can affect robotic output.
"We think of building Twitter bots as a very technical exercise - which it is - but there's a lot of artistic decision making that can affect the process as well," he explains. The NSBot isn't live yet, as it needs to absorb more headlines, but he shares an early one with me anyway: 'Corporations are fueling increasingly attractive dinosaurs.' Tell me you wouldn't click on that story.
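The article doesn't say how the NSBot works under the hood, but bots that 'absorb' a corpus of headlines and emit new ones are often built on something like a Markov chain. Here's a minimal, purely illustrative sketch of that idea - the toy corpus and every function name are invented, not taken from Cook's bot:

```python
import random
from collections import defaultdict

# Toy stand-in for the headlines a bot might absorb;
# the real NSBot's corpus and method aren't described in the article.
CORPUS = [
    "Corporations are fueling climate change",
    "Dinosaurs are increasingly attractive to researchers",
    "Researchers are fueling debate about dinosaurs",
]

def build_bigrams(headlines):
    """Map each word to the list of words that followed it in the corpus."""
    chain = defaultdict(list)
    for h in headlines:
        words = h.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
    return chain

def generate(chain, start, max_words=8):
    """Walk the chain from a start word until it dead-ends or hits the cap."""
    out = [start]
    while len(out) < max_words and chain.get(out[-1]):
        out.append(random.choice(chain[out[-1]]))
    return " ".join(out)

random.seed(0)
print(generate(build_bigrams(CORPUS), "Corporations"))
```

Because each word's successors are pooled across all headlines, the walk can hop mid-sentence from one headline into another - which is exactly where the 'attractive dinosaurs' flavour of output comes from.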
The bot feels like a homage to one of Cook's favourites: "Two Headlines", which mashes up two Washington Post headlines into one unholy combination. Its most popular post is a marvellous mix of surreal and haunting: 'This town has resisted pelicans for 18 months, but food is running low.'
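The mash-up mechanic itself is simple enough to sketch. This is a hypothetical, deliberately naive version - splicing the front of one headline onto the back of another at the midpoint word - and not the actual Two Headlines code:

```python
def mash_headlines(a: str, b: str) -> str:
    """Naive mash-up: the first half of headline `a` followed by
    the second half of headline `b`, splitting each at its midpoint word."""
    wa, wb = a.split(), b.split()
    return " ".join(wa[: len(wa) // 2] + wb[len(wb) // 2 :])

# Two invented headlines, for illustration only.
print(mash_headlines(
    "This town has resisted flooding for 18 months",
    "Pelicans return to the coast but food is running low",
))  # → This town has resisted but food is running low
```

Even a cheap splice like this occasionally lands on something grammatical and strange, which is the whole appeal: the bot fires constantly and the humans curate.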
"It's a really good surrealist headline, but then you find out what the original headlines were and it's really depressing," Cook says, noting that, given the nature of the bot, somewhere there must be a town genuinely running low on supplies for it to have made its Pulitzer nominee.
Of course, that's a hand-picked example, and not every one will be a winner. "Some of them will produce rubbish 99 per cent of the time, but people really like following these bots because they're interesting," Cook explains. "They're waiting for that one-per-cent moment that they can share with everyone." The bot, of course, has no idea it's made a hit: it's up to Twitter to decide through the democratic medium of likes and retweets.
"This is why Twitter is such an interesting medium for this kind of work. If this was an app you'd downloaded on your phone, you would expect it to do really good work every single time. Whereas with Twitter bots there's kind of room to experiment."
It's that chaos of watching them work that Cook particularly enjoys, although he does have a soft spot for weird anarchic bots that interact with humans too: the one that used to spoil the end of Twin Peaks when someone said they were going to watch it, for example, or Stealth Mountain: the pedantic bot that used to lie in wait for people writing "sneak peak" rather than "sneak peek". I'm ashamed to say that in my pre-writing days, I fell foul of that bastard about 10 years ago...
Of course, if you think of Twitter bots, you may immediately think of something trying to subvert democracy, to the degree that it's now often used as a shorthand insult. "You see people calling other human beings bots even when they know they're human," Cook says. "And in that context, I think that they mean bots in the sense that they think this person is being paid to present their views, or it's a way to dehumanise that person."
In fact, Cook has been known to attract ire from clueless Twitterfolk when he mentions he builds bots, as people assume he's a political operative building automated weapons to undermine democracy, rather than someone who, say, has built a bot that comes up with burger of the day puns for Bob's Burgers.
For what it's worth, Cook thinks most people are worse at spotting the bad kind of bots than they think. "In general, I've realised that there is no limit to the weirdness of what a real human will post on Twitter earnestly and actually believe in," he says. "A lot of people are like, 'oh this person has phrased things oddly' or 'they're up at 4am on the internet - they must be in Russia.' No, they're just up at 4am."
In fact, sometimes the most partisan political bot may have a wholly different purpose, he reckons. "I noticed very early on in the Trump presidency that if you looked at the replies to Donald Trump's tweets, there was this consistent pattern where someone would post something supportive… and then they would link to a product they were selling," he explains. This would inevitably attract arguments in the comments underneath, but then he spotted something strange: the same arguments were being fought every time.
"They were actually getting bots to have a scripted argument with them," Cook concluded. Presumably this was to game Twitter into putting the e-commerce link higher up to get MAGA types to buy. The links don't surface as much any more, but Twitter's popularity algorithm, Cook says, "is still definitely based around people arguing."
Despite Cook's belief that we've become overzealous in our bot witch hunt, he does think that it could be a bigger problem in theory. "I actually think it would be pretty easy to hide an army of bots on Twitter if you wanted to," he says. "I know for a fact that there are researchers who work in computational linguistics and their job is to research techniques for making the same phrase look like it's being expressed in different ways."
That isn't as sinister as it sounds - it's to make non-player characters in games sound less monotonous. All the same, it could easily be repurposed. "It's quite easy to see how this research could be applied to - for instance - getting 5,000 Twitter bot accounts to express a political opinion in a slightly different way," he posits.
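The variation technique Cook describes can be pictured as template filling: one underlying message, many surface forms. The templates, slot names and phrasings below are all invented for illustration - they come from no particular research system:

```python
import itertools

# Hypothetical templates and slot fillers for expressing one opinion many ways.
TEMPLATES = [
    "I really think {subject} {verb} {object}.",
    "Honestly, {subject} just {verb} {object}.",
    "{subject} {verb} {object}, if you ask me.",
]
FILLERS = {
    "subject": ["this policy", "the new plan"],
    "verb": ["helps", "supports"],
    "object": ["ordinary people", "working families"],
}

def variants():
    """Yield every surface form of the same underlying message."""
    keys = list(FILLERS)
    for combo in itertools.product(*(FILLERS[k] for k in keys)):
        slots = dict(zip(keys, combo))
        for t in TEMPLATES:
            yield t.format(**slots)

forms = list(variants())
print(len(forms))  # 2 * 2 * 2 * 3 = 24 distinct phrasings
```

Even this toy version turns one opinion into two dozen distinct tweets, which is the worry Cook is gesturing at: scale the slot lists up and 5,000 accounts never need to repeat themselves.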
"Again, I'm not saying that these things do happen, and I think in general people get very excited about the idea that Russia is secretly controlling the entirety of Twitter and I actually think that that's a little bit silly."
On a related note, one of the reasons Cook is so passionate about bot building is that he believes that by teaching people to build them, they can better establish what is possible and what's entirely fantastical.
"Maybe this is wishful thinking but I do feel like giving people knowledge about these technological concepts is useful in helping them sort things out for themselves and figure out what's going on," he says. "You see this a lot with machine learning and other aspects of AI now, where concepts like neural networks are seen as too complex a topic, and as a result people are unable to distinguish fact from fiction."
You see this pretty frequently in those implausibly viral fake scripts supposedly written by AI, that clearly rely on a heavy human editing hand. "Small things like this are part of the bigger environment of AI where companies are making claims about what their products can do, governments are making decisions based on the things that companies claim their products can do and we're changing the shape of society in the future," he says. "Even though these jokes are minor and harmless in many ways, they are contributing to this general background buzz of misinformation about AI and I think there is something problematic about that."
That's one function of Cook and Veale's book. Another, as it turns out, is history, because Twitter bots, like other forms of digital art, often have very limited lifespans. A single API change, and a bot can be forever silenced - to say nothing of the ones where creators have actively disconnected them and gone elsewhere.
"On reflection, I am even happier that we wrote it now because I've gone back and seen so many of the bots have stopped tweeting or become unrecoverable," he says. The bot that built friendships via automated games of Boggle, the one that generated sexts from out-of-context WikiHow texts, even the pedantic Stealth Mountain: gone forever.
"It's nice to have frozen that stuff in time. In 10 years, it will be very hard to find those people."
Mike Cook will be speaking at New Scientist Live, which runs from 10 to 13 October 2019. You can buy tickets here.