A joint team of researchers from the UK and the US claims to have trained an artificial intelligence (AI) programme to identify bots on Twitter by examining the patterns of their activity.
In the study, the researchers used a large Twitter database to analyse changes in the behaviour of human users and bots over the course of an activity session.
The team reviewed two different datasets of Twitter users. The first, labelled French Elections (FE), consisted of more than 16 million tweets posted by more than 2 million different users between 25 April 2017 and 7 May 2017.
The second dataset, called Hand-Labelled (HL) by the researchers, consisted of "three groups of tweets produced by bot accounts active in as many viral spamming campaigns at different times, plus a group of human tweets."
In their analysis, the researchers examined multiple factors, including the amount of content produced by the user and their inclination to engage in social interactions on the platform.
The results showed that real users responded to tweets from other users four to five times more frequently than bots did.
Moreover, accounts operated by real users tended to become more interactive over the course of an hour-long session, although the length of their messages decreased as the session progressed. Emilio Ferrara, a professor at the University of Southern California's Information Sciences Institute, believes this behaviour could result from cognitive depletion: over time, humans become less willing to spend mental effort creating original content.
Bots, on the other hand, showed no major variations in interaction behaviour or the length of tweets posted over time on the platform.
The researchers also examined the amount of time between consecutive messages from a user and found that bots tended to post at fixed intervals, for example every 30 minutes or every hour.
These results were used to train Botometer, a pre-existing bot-detection algorithm, which then performed better at detecting bots than it did without taking activity patterns and the time intervals between tweets into account.
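The study does not publish its feature code, but the interval-regularity signal described above can be sketched as a simple heuristic. This is an illustrative example only, not the authors' implementation; the function name and the sample timestamps are invented for demonstration:

```python
from statistics import mean, stdev

def interval_regularity(timestamps):
    """Coefficient of variation of the gaps between consecutive posts.

    Values near 0 indicate clockwork-regular (bot-like) posting;
    human posting gaps tend to vary far more.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return stdev(gaps) / mean(gaps)

# Hypothetical bot posting roughly every 30 minutes (timestamps in seconds):
bot_times = [0, 1800, 3601, 5399, 7200, 9002]
# Hypothetical human with irregular gaps:
human_times = [0, 120, 4500, 4700, 11000, 12000]

print(interval_regularity(bot_times) < interval_regularity(human_times))  # True
```

In a real classifier such as Botometer, a score like this would be one feature among many rather than a decision rule on its own.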
The detailed findings of the study are published in the journal Frontiers in Physics.