BBC to explore generative AI in journalism

But it will prevent data scraping by OpenAI

The BBC, the UK's largest news organisation, has announced its intention to explore the application of generative AI in journalism, but it will not allow OpenAI to scrape its content.

In a blog post published on Thursday, Rhodri Talfan Davies, director of nations at the BBC, unveiled the BBC's guiding principles for exploring the potential of generative AI across various domains, including journalism, archiving and personalised experiences.

According to Davies, the technology offers opportunities to enhance the value delivered to both its audiences and society as a whole.

"Innovation has always been at the heart of the BBC. From the very first radio broadcasts in 1922 to colour television in the 1960s and the rapid development of our online and mobile services over the last 25 years - innovation has driven the evolution of the BBC at every step," Davies said.

"We believe Gen AI could provide a significant opportunity for the BBC to deepen and amplify our mission, enabling us to deliver more value to our audiences and to society. It also has the potential to help our teams to work more effectively and efficiently across a broad range of areas, including production workflows and our back-office."

Over the next few months, the BBC intends to experiment with generative AI in diverse areas, including "journalism research and production, content discovery and archive, and personalised experiences."

The BBC has also committed to collaborating with technology firms, fellow media entities, and regulatory bodies to ensure the responsible and secure development of generative AI, with a primary emphasis on upholding trust in the media.

The blog post also outlines three principles that it says will guide the BBC's approach to working with generative AI.

  • The BBC will consistently act in the best interests of the public.
  • The organisation will prioritise talent and creativity while respecting the rights of artists.
  • The BBC will maintain a commitment to openness and transparency regarding AI-generated content.

Beeb's AI crawler ban

While the BBC explores applications of generative AI, it has taken measures to prevent web crawlers from organisations like OpenAI and Common Crawl from accessing its websites.

This decision aligns the BBC with other prominent news organisations such as CNN, The New York Times and Reuters, which have also implemented measures to block web crawlers from accessing their copyrighted content.

The BBC says that the unauthorised scraping of its data for training generative AI models does not serve the public interest.

It says it is seeking to establish a more organised and sustainable approach through collaborative discussions with technology companies to address this issue.

"That's why we have taken steps to prevent web crawlers like those from Open AI and Common Crawl from accessing BBC websites," Davies stated.

The BBC is also examining the potential impact of generative AI on the broader media industry.

"For example, how the inclusion of Gen AI in search engines could impact how traffic flows to websites, or how the use of Gen AI by others could lead to greater disinformation," Davies explained.

"Throughout this work, we will harness the world class expertise and experience we have across the organisation, particularly in BBC R&D and our Product teams who are already exploring the opportunities for public media."
