Peter Cochrane: Escaping the comfortable 'rabbit hole' of social-media agreement

Has social media killed reasoned debate? And, if so, how do we inject civility back into public discourse, asks Peter Cochrane

We now live in societies where the art of debate seems to have died and social-media bashing has become a cause célèbre.

Apparently, this new technology has stimulated neuroses, depression, isolation, suicide, images of child abuse, broken marriages, tsunamis, earthquakes and all manner of unexplained deaths!

I exaggerate slightly, of course. But this, and the associated media frenzy, has got out of hand, with no reference to the upside of social media: fact-checking, detailed cost-benefit and threat-benefit analysis, clear(er) thinking or, indeed, recognition of the very real problems and dangers.

The reality is that all social media, not just Facebook or Twitter, exhibit ‘strange attractor' qualities that pull us, as individuals, into vortices of like-minded people with similar views, experiences, needs and aspirations.

And how very comfortable these ‘rabbit holes' of agreement are and how pleasing it is when we can all look out at the rest of humanity in the full knowledge that they are all wrong and certainly going to hell on the day of digital reckoning.

But hey, we've been here numerous times before over many centuries. Isn't this what all religions do? And haven't we always gone to war to champion such beliefs and ‘the one true god'? So which is it this time: Snapchat, Instagram, WhatsApp?

Our civilisation has profited greatly by mechanisms of open and creative debate, diversity of opinion, critical analysis and challenge.

Unfortunately, social media are now almost wholly devoid of the positive and creative mechanisms they originally furnished, except, that is, in the most agreeable mode of ‘personal rabbit holes', where people seek out only those people and groups that bolster their confirmation bias.

In the open, on the social plane of vulnerability, to challenge anyone, group, doctrine, or belief is to risk a lambasting of the most rude, profane, and threatening kind. And yet, as we sit in our comfortable silo of agreeability, we become ever more isolated, disconnected, and rendered more ignorant by every blog and tweet.

So how do we get back to a world of fewer singularities and reasoned, reasonable debate? One solution is to befriend people and join groups that we really do not like or agree with, just to see what is being said, without homing in purely on whatever confirms our biases and preconceptions.

This, I can assure you, can be deeply unsettling. Living simultaneously in one comfortable rabbit hole and in a collection of ‘hell holes' of various depths is not easy, and it takes fortitude.

But how did we get to this state? In short, people, technology and algorithms.

We can't change the preferences of people, and our devices and platforms evolve faster than we can adopt and adapt. But algorithms are a different matter. Originally designed, written and ‘tuned' by people, algorithms now have a life of their own as they insist on linking the like-minded with ever more of the like-minded. Some are even steered by AI that tracks our every change in thinking and choice.

So, could we change the algorithm design, or disturb and perturb their strange attractor behaviours? Can we get the algorithms to re-inject serendipity? Might they be made to flatten the landscape, or at least reduce the depth and number of the rabbit holes? I think we can, through a process of randomisation realised by injecting digital noise.
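
Purely as an illustration (the names, numbers and scoring function below are mine, not any platform's actual code), that noise could be as simple as reserving a slice of every recommendation list for items picked at random from outside the user's usual cluster:

import random

def recommend(candidates, similarity, profile, k=10, noise=0.2):
    """Return k recommendations: mostly the closest matches to the user's
    profile, plus a 'noise' fraction drawn at random from everything else.
    A sketch only; not any real platform's recommender."""
    ranked = sorted(candidates, key=lambda c: similarity(c, profile), reverse=True)
    n_random = int(round(k * noise))        # slots reserved for serendipity
    picks = ranked[:k - n_random]           # the usual like-with-like picks
    rest = ranked[k - n_random:]            # everything outside the bubble
    picks += random.sample(rest, min(n_random, len(rest)))
    random.shuffle(picks)                   # don't flag which picks are 'noise'
    return picks

Even at a noise level of 0.2, one recommendation in five would come from outside the rabbit hole.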

But the bigger question is: can we get people to give up their life of opinionated comfort and get them back to some level of civility?

That may be harder. I think we may have to introduce some level of curation, machine curation, with artificial intelligence steering people towards more acceptable behaviour, free of the abuse, profanity, threats and other unnecessary, uncivilised conduct.
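
What might that curation look like? Only as a sketch (the word list and threshold here are crude placeholders for a properly trained toxicity model), the principle is a gentle gate between drafting and posting:

ABUSIVE = {"idiot", "moron", "scum"}    # placeholder list; a real curator would
                                        # score text with a trained model

def toxicity_score(text):
    """Crude stand-in: the fraction of words that appear on the abuse list."""
    words = text.lower().split()
    return sum(w.strip(".,!?") in ABUSIVE for w in words) / max(len(words), 1)

def curate(draft, threshold=0.05):
    """Hold back a draft that crosses the threshold and ask for a rephrase,
    rather than publishing or silently deleting it."""
    if toxicity_score(draft) > threshold:
        return None, "This reads as abusive. Would you like to rephrase it?"
    return draft, None

Returning a prompt rather than a block is the important design choice: the machine coaches rather than silences.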

But how ironic that this looks like a role reversal, with the AI teaching the humans how to be civilised.

Peter Cochrane OBE is the former CTO of BT and now works as a consultant focused on solving problems and improving the world through the application of technology.