Interview: Image Analyzer's artificial intelligence is saving workers from PTSD

Tom Allen
3 min read

Content moderation is stressful and even harmful. Supporting humans with AI offers a new approach

Data creation has exploded in the 21st century. There's a camera and recording device in every pocket, making it increasingly difficult for human moderators to stay on top of the flood of user-generated content - especially when some people purposefully post illegal or harmful images and video.

"When we consider that it's been estimated that it would take someone 950 years to check all of the Snaps uploaded to Snapchat every 24 hours, it's obvious that companies cannot moderate this volume of images using human power alone," says Cris Pikes, CEO and co-founder of Image Analyzer.

Even massive social media firms like Facebook, which outsource content moderation, struggle to keep up with the growth in harmful, extremist and false content. It has reached the point where individuals working in moderation are starting to sue over burnout, and even post-traumatic stress.

Organisations also face increasing pressure from draft legislation, like the UK's Online Safety Bill and the EU's Digital Services Act. These require firms to take down illegal content quickly, with large financial penalties for failure - up to 10 per cent of global annual turnover.

However, there may be a solution, in the form of artificial intelligence. Pikes - whose company won the Best Emerging Technology in AI Award at this year's AI & Machine Learning Awards - says, "Artificial intelligence can be used at the right point in a digital platform's workstream to remove the majority of harmful content before it reaches the platform. This aids compliance with impending legislation and leaves only the more nuanced content for human moderators to review."

Cris Pikes, CEO and co-founder, Image Analyzer

Pikes explains, "IAVIS [Image Analyzer Visual Intelligence System] gives each piece of content a risk probability score, speeds the review of posts, and reduces the moderation queue by 90 per cent or more… IAVIS can scale to moderate increasing volumes of visual content, without impacting performance or user experience."
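The mechanics of how a risk probability score shrinks a moderation queue can be sketched in a few lines. The thresholds and function names below are illustrative assumptions - IAVIS's actual scoring model and cut-offs are not public - but they show the general pattern: content scored as near-certainly harmful is removed automatically, near-certainly safe content is published, and only the nuanced middle band reaches human moderators.

```python
# Hypothetical sketch of score-based content routing. The AUTO_REMOVE and
# AUTO_ALLOW thresholds are assumed values for illustration, not IAVIS's.

AUTO_REMOVE = 0.95   # assumed: near-certain violations are blocked outright
AUTO_ALLOW = 0.10    # assumed: near-certain safe content is published

def route(risk_score: float) -> str:
    """Route one piece of content by its risk probability score (0.0-1.0)."""
    if risk_score >= AUTO_REMOVE:
        return "remove"        # stopped before it reaches the platform
    if risk_score <= AUTO_ALLOW:
        return "publish"       # passed without human review
    return "human_review"      # only nuanced cases join the moderation queue

# Example: of three uploads, only the ambiguous one lands in the human queue
uploads = {"clip_a": 0.99, "clip_b": 0.02, "clip_c": 0.55}
decisions = {name: route(score) for name, score in uploads.items()}
```

Under this kind of scheme, the share of content diverted away from human review depends entirely on where the two thresholds sit - which is how a vendor can tune a claimed 90 per cent queue reduction against its tolerance for automated errors.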

Social media moderation is one key area for the technology, but the Awards judges called IAVIS "A great use of AI to resolve a problem that affects all sectors and all organisations," which shows in its adoption. As well as social media, Image Analyzer's partners are using the solution in online communities and gaming platforms to protect children; in corporations to maintain safe workplaces; and in digital forensics teams, to identify digital evidence hidden inside thousands of images, messages and unstructured data stored on networks and electronic devices.

The team was "absolutely delighted" when they heard they had won, and Pikes describes it as "a huge endorsement" of the technology.

He adds, "Being described as an award-winner is a real morale booster for our team, as it shows that their work is recognised within their industry. Being known as a Computing AI and Machine Learning Award winner provides our customers, partners and prospects with solid third-party endorsement of our technology, because they know that this has been judged by a team of experienced IT professionals who know what they're talking about."

While Image Analyzer is very happy with its recent performance, it realises that the work isn't finished, especially as new legislation looms. Many organisations outside the UK and EU don't realise these laws will apply to them, and Pikes says education is the company's upcoming focus. Image Analyzer has commissioned two whitepapers, written by lawyers at Bird & Bird, which clarify which organisations will need to comply with the impending laws, and which tier of compliance they will fall into, so they can understand the steps they may need to take before the new laws come into force.

"Our priorities will be assisting existing and future customers in understanding the impending legal landscape in the UK and Europe and working with our customers and OEM partners to help them to put the systems in place to be able to maintain a safe online environment for their users and employees."
