Social media: what happens when AI takes over?

Image: Social media: towards better recommenders

AI is about to make recommender algorithms a whole lot more effective, and potentially more dangerous, but it doesn't have to be that way, say researchers

Researchers and entrepreneurs are working on social media content suggestion algorithms optimised for trust rather than conflict.

In his recent BBC Reith Lectures, Stuart Russell, professor of computer science at the University of California, Berkeley and renowned thinker on human-centric AI, claimed that social media algorithms are already more successful than any dictator in history at changing people's cognitive intake to achieve their goals.

"Like any rational entity, the algorithm learns how to modify the state of its environment — in this case, the user's mind — in order to maximise its own reward," he said.

The algorithm's objective is for the user to interact with the platform for as long as possible. To this end it feeds them more of what engages them. Optimising these feeds means making the human more predictable, and the best way to do that is to push meme-worthy content that binds them tighter into their filter bubbles and entices them to share. If the content offends another group, so much the better, because that creates more engagement, more entrenchment and pulls others into the fray. The truth, accuracy or helpfulness of the content is of little or no importance, as far as the algorithm is concerned. Attention is all.
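
In the abstract, that loop can be reduced to a few lines. The sketch below is purely illustrative, not any platform's actual code; the data fields and the toy predict_engagement function are hypothetical stand-ins for the learned models and behavioural signals a real recommender would use.

    def predict_engagement(post, user_history):
        # Stand-in for a learned model that estimates how likely this user
        # is to click, share or comment; here it just counts topic overlap.
        return len(set(post["topics"]) & set(user_history["liked_topics"]))

    def rank_feed(candidate_posts, user_history):
        # Rank every candidate by predicted engagement, highest first.
        # Truth, accuracy and helpfulness appear nowhere in the objective.
        return sorted(candidate_posts,
                      key=lambda post: predict_engagement(post, user_history),
                      reverse=True)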

"These algorithms are not very intelligent," said Russell in his talk. "They don't even know that humans exist or have minds. More sophisticated algorithms could be far more effective in their manipulations. Unlike the magic brooms [in the Sorcerer's Apprentice], these simple algorithms cannot even protect themselves — but fortunately they have corporations for that."

So what happens when ChatGPT and other large language models join the attention-optimisation party, as surely they soon will? It's a troubling prospect.

The grind

The effects of algorithmic attention optimisation have been much in the news. There's the spread of misinformation and disinformation (disastrous in Myanmar and Ethiopia); the reinforcement of biases, stereotypes and prejudices; the stories of people disappearing down rabbit-holes of ever more extreme content; and the threat to privacy. Algorithmic amplification can change the discourse in ways that may be highly destabilising to the social order. Then there's the lack of transparency: recommenders are the closely guarded secrets of the social media, search and ad-serving giants that dominate the web.

Algorithms now mediate almost all online speech, a situation that's profoundly unhealthy, according to Mikko Alasaarela, an entrepreneur and a veteran of the Silicon Valley gaming industry at a time when it was perfecting the art of keeping people online. Alasaarela learned first-hand about designing for addiction, offering virtual rewards at key stages to deliver the all-important feel-good factor.

"Year 2011, the year I quit, was the inflection point, when the share of free-to-play games of the industry went from 30% to 70% during a single year," he says.

"By that time, major mobile and social games publishers had perfected the ‘grind', a never-ending loop of algorithmic dopamine triggers that got people back to the game multiple times a day for years."

Translated to social media, the side effects of the grind include polarisation, knee-jerk emotion triggering, radicalisation and loneliness. "These are already well known. What is less understood is all the subtle ways the constant presence of these algorithms is changing our societies and our culture."

Since quitting the games business a decade ago, Alasaarela has been studying the way recommender algorithms work, testing "viral hooks, emotion triggering, timing, sparking conversations, being controversial and inserting familiar language to specific filter bubbles to make people feel I'm one of them." The goal is to create automation that works in a very different way: optimising for trust rather than attention.

Alasaarela's latest venture is a soon-to-launch social platform called Equel. This, he says, will feature WhatsApp-style groups for professionals who must use their real identities. Unlike WhatsApp, it will be open and extensible via APIs.

The hook? Trust. Once ChatGPT et al are integrated into social platforms, content will be increasingly AI-generated. Alasaarela is betting the appetite for meaningful online contact with bona fide human beings will increase.

"It is quite possible that the flood of content will be so massive that our networking moves mostly to private groups where all members are verified humans," he says.


Algorithmic upsides

One thing newcomers to Mastodon - a federated social media platform supported by donations rather than ads - notice immediately is that conversation is more civil. This is due partly to human moderation of individual instances, but also, almost certainly, to the lack of dopamine-dealing recommenders. And if they've imported their follow list from Twitter, as millions have since Musk took over, they are delighted to be reacquainted with people they'd long ago forgotten, because Twitter's algorithm tends to ignore all but a few high-engagement contacts.

Sooner or later, though, the downsides of the platform's simple reverse-chronological feed become apparent. Unless you happen to catch a post as it's rolling past, or spend time and effort curating lists, the chances are you will never see it. Thus, prolific posters are favoured over those who might post more thoughtful content rather less often. There's a bias, just a different one.

"Reverse chronological or alphabetical sorting are still algorithms," points out Luke Thorburn, PhD researcher at King's College London and co-author of the Understanding Recommenders project at Berkely's Center for Human-Compatible AI, which was co-founded by Stuart Russell.

"I don't think you can have lists of content on social media without algorithms. Any method for determining which content to display in an online platform, when conducted at scale, will be algorithmic."

Indeed, despite the many problematic aspects of current social recommender algorithms, they are invaluable when it comes to sifting through millions of posts so we don't have to.

"There are certain communities, for example academic Twitter, where they can be incredibly helpful," Thorburn says. "In such contexts recommender systems help curate which information is most important. Many people, though not all, prefer engagement-based algorithmic feeds to reverse chronological feeds in certain contexts."

Thorburn and his colleague Aviv Ovadya, an affiliate at the Berkman Klein Center for Internet & Society at Harvard, are also interested in algorithms that increase trust and lower the temperature, and recently released a paper on bridging systems.

Bridging systems aim to retain the helpful aspects of recommenders while minimising the harmful effects. They do this by upweighting posts that span an ideological divide.

"A personal post by someone talking frankly about the impact of economic disadvantage in their community might be liked or shared by people on both the left and the right of politics, so would be considered bridging," Thorburn explains.

As well as optimising for interactions that span divides, they can also seek to minimise some measure of destructive conflict. They are still an emerging topic of research, and the best metrics to optimise are being worked out, but far from being lab-bound, bridging algorithms are already being used in the real world.

Twitter's Community Notes feature is a fact-checking forum that only promotes a user-derived fact check once enough contributors with different points of view rate it as helpful. Meta is increasingly experimenting with community forums made up of ordinary Facebook users to inform content moderation, which also use bridging systems.

Unlike previous approaches to online conflict resolution, evaluation of the success of the bridging algorithm is integral to the algorithm itself, Thorburn explains. "In the same way that an online platform can A/B test its way to increasing engagement, they should be able to A/B test their way towards bridging."
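
In practice that might look like an ordinary A/B test with the metric swapped out; everything in the sketch below, from the metric to the field name, is a hypothetical illustration rather than any platform's real evaluation.

    def cross_group_approval_rate(surfaced_posts):
        # Fraction of surfaced posts approved by at least two distinct groups.
        bridged = sum(1 for post in surfaced_posts
                      if len(post["approving_groups"]) >= 2)
        return bridged / len(surfaced_posts) if surfaced_posts else 0.0

    # Serve ranking A to one bucket of users and ranking B to another, then
    # compare this metric (alongside engagement) rather than engagement alone.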

People being people, polarisation and pile-ons will always be with us, and technology can only do so much to change that. But bridging systems offer a way to stop recommender algorithms making social problems worse, by introducing some breathing space and much-needed transparency.

The big ask

"The real goal should be to use algorithms to create an interesting feed that is useful and beneficial to you," says Alasaarela. "This will require the business model of the social media platform to be such that addiction is not the goal and your attention is not sold to advertisers."

That's a big ask. The great unanswered question is: what's in it for the social media giants? These companies never deliberately set out to send people down extremist rabbit holes; that is a side effect of maximising attention, and thus profits. With their mountains of data, vast compute and freshly acquired AI start-ups, they have built a fortress, and they aren't going anywhere (with the possible exception of Twitter). They may be toying with bridging systems, but why make fundamental changes to what has been a very lucrative business model? Even when faced with clear evidence of harm, they have proved very resistant to change.

But things do change, and perhaps with the right mix of regulation and a shift of public opinion the landscape will open up. Russell alludes to, but does not go into detail about, "developing research relationships" with social media companies over "how they can be actually beneficial to people".


Can't avoid the algorithms

So could a social media platform like Mastodon, which offers no algorithmic dopamine hit whatsoever (nor, for historical reasons, full-text search), ever achieve the network effect needed to keep users coming back and challenge the incumbents, or are we all too hooked on conflict?

Mastodon's developers will have to be open to algorithms if the platform is to be any kind of player, Alasaarela believes, but it needs to introduce them on its own terms.

"If Mastodon wants to compete against the mainstream social media platforms for users, it has to allow comprehensive search, feed algorithms and many other things that are currently being avoided. In my opinion, Mastodon should allow entrepreneurs to innovate and provide those to their server/app members, with the goals of full transparency of the algorithms, healthy usage patterns and constructive conversations on the platform."

Or perhaps someone could ask ChatGPT to write better, more human-centric recommender algorithms.