AI is now 'Should we?' not 'Can we?'

Ready access to AI is forcing companies to address ethical questions

Tom Allen
2 min read

AI is so ubiquitous today that the real challenge is not practicalities, but ethics.

2023 has been the year of AI, especially generative AI like ChatGPT. The technology has become so commoditised that it's used in industries from travel to consultancy.

"AI is becoming much more pervasive and less limited," said Dan McMahon, head of innovation management at Hymans Robertson, speaking on a panel discussing AI at the IT Leaders Summit this week. 

"Copilot being released into the entirety of the Microsoft 365 suite puts [AI] into many more people's hands." 

Companies' use of AI differs heavily. Hymans Robertson is using it in its core systems. Bank of America, on the other hand, focuses on heavy data analysis, said director Rahul Mundke.

"We capture pretty much all data coming into the trading platform," which makes it "very difficult" for a person to manage, he said. It's a perfect use case for AI; "finding bottlenecks and new workflows, looking at how to improve the customer experience." 

Trust, transparency and ethics 

Because AI tools are now so easy to access, companies have to ask and answer important ethical questions when implementing the systems. 

"AI needs to explain itself: how it makes a decision, and if that decision is transparent, ethical and trustworthy," said Huseyin Seker, professor of computing sciences at Birmingham City University. "You need to consider how you're developing and deploying it, whether it's rule-based or data-driven, and mostly if you trust it." 

That's especially difficult when a company has to buy a system in, rather than build it.

"We're a Microsoft house, so we're trialling the OpenAI service," said Dan. "It's more challenging than self-building or getting it off the shelf, as you can't just open the lid and see how it works. 

"OpenAI is getting itself into hot water over training data now, so should we use it? If we don't, someone else could beat us to the punch. 

"It's the should we, not the can we."

There are still questions to answer about AI and data. For example, what happens if a user withdraws their consent after their data has already been used to train an LLM? Who owns an AI model trained on public data - the company or the data owners? 

"Regulation and legislation are massive issues and have not caught up [to AI] yet," said Huseyin. "Who is going to regulate this sector? It's a big question." 

And we're still waiting on an answer.
