Microsoft has released a set of new tools to help combat deepfakes, the manipulated media that could be used to spread false information on the internet ahead of important events such as the upcoming US election.
According to the company, its first tool - dubbed 'Video Authenticator' - can analyse an image or video clip to determine whether it has been edited using AI. The tool will then provide a confidence score, indicating the chance that the media has been manipulated.
In the case of a video, the tool shows a confidence score in real-time on each frame as the video plays.
Deepfakes are photos, videos or audio files that have been manipulated to show someone doing or saying something they never actually did or said. With AI, it has become easy to manipulate individual faces based on previous footage and create realistic-looking new images. These deepfakes are then used to spread misinformation across online platforms, including social media.
Amsterdam-based cyber security firm Deeptrace said last year that it had found nearly 14,700 deepfake videos on the internet in June and July 2019, up 84 per cent from 7,964 found in December 2018.
Experts say the figures are worrying, as fake video and audio could be used to influence public opinion during elections, sow discord among political parties or incriminate someone for a crime they didn't commit.
According to Microsoft, its new tool works by 'detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye.'
The company said that its Video Authenticator tool was created using a public dataset from FaceForensics++, and has been tested on the DeepFake Detection Challenge Dataset.
Microsoft has also released a second, proprietary tool that allows video creators to certify that their content is authentic.
This technology has two components. The first - built into Microsoft Azure - allows content creators to add certificates and digital hashes to their content. Those authentication methods then 'live with the content as metadata wherever it travels online.'
The second component - a reader - checks content metadata (certificates and hashes) to determine authenticity. Viewers can access this feature through a browser extension.
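Microsoft has not published implementation details of this system, but the general idea of hash-and-certificate metadata can be illustrated with a minimal sketch. The function names and the HMAC signing key below are hypothetical stand-ins (a real deployment would use proper certificates and public-key signatures), using only Python's standard library:

```python
# Minimal sketch of hash-based content authentication. This is an
# illustration of the general technique, not Microsoft's implementation;
# the signing key and function names here are hypothetical.
import hashlib
import hmac

SIGNING_KEY = b"creator-secret-key"  # stand-in for a real certificate/private key

def create_metadata(content: bytes) -> dict:
    """Hash the content and sign the hash, as a creator would at publish time."""
    digest = hashlib.sha256(content).hexdigest()
    signature = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": signature}

def verify_metadata(content: bytes, metadata: dict) -> bool:
    """Re-hash the content and check the hash and its signature,
    as a reader (e.g. a browser extension) would at view time."""
    digest = hashlib.sha256(content).hexdigest()
    if digest != metadata["sha256"]:
        return False  # content was altered after signing
    expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, metadata["signature"])

video_bytes = b"original video bytes"
meta = create_metadata(video_bytes)
print(verify_metadata(video_bytes, meta))        # True: content untouched
print(verify_metadata(b"tampered bytes", meta))  # False: hash no longer matches
```

Because the metadata travels with the content, any edit made after signing changes the hash and causes verification to fail at the reader's end.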
According to Tom Burt, a Microsoft vice president, these new tools will initially be available to political campaigns and media organisations.
"This will be a long-term effort, but we hope to have an impact in the lead-up to November," Burt said.