Facebook's next-gen AI will take a human view of the world - by thinking of itself

Google Glass failed, but Facebook is developing a new type of AI system to make its own foray into augmented reality a success

AI today is very bad at understanding a first-person point of view. Facebook wants to change that, and hang the privacy concerns

AI systems are incredibly useful, but they have no real sense of self. Even robots like those built by Boston Dynamics typically learn about the world from pictures and video shot by a third party. Facebook hopes its future AI projects will take a more 'egocentric' view of the world.

The idea is that future AI will take a more 'human' view by learning from first-person videos that show the camera as the centre of the action. Facebook believes this will unlock new possibilities and enhance the functionality of tools such as augmented reality.

For example, such a system could help you remember where you put your keys, or warn you that you've already added salt to a recipe.

Image: image recognition by an AI from third- and first-person perspectives

Facebook is working with a consortium of 13 universities and labs across nine countries (including one British institution, the University of Bristol) on Ego4D: a long-term project funded entirely by the social media giant. The project has so far gathered more than 2,200 hours of first-person video from around 700 participants as they go about their daily lives. Facebook claims the dataset is "20 times greater than any other in terms of hours of footage", and it will be made available from November to researchers who sign a data use agreement.

As part of the project, the company has developed five benchmark challenges for developing smarter AI: episodic memory (answering questions like "where did I leave my keys?"), forecasting what the wearer will do next, hand and object manipulation, audio-visual 'diarisation' (recording who said what, and when) and social interaction.

It's no surprise to see the company that owns Oculus VR working with augmented and virtual reality, especially given rumours that it is planning to launch its own smart glasses soon.

That said, Facebook has a contentious past when it comes to working with researchers, and has been repeatedly criticised for its approach to user privacy. Developing a technology with such a close-up view of our personal lives is sure to raise even more concerns - especially as, based on the information currently available, there appear to be no safeguards in place to protect user privacy in any future data collection.

A spokesperson for the social media giant told The Verge, which raised the issue, that it expected privacy safeguards to be introduced in the future, and that the onus would be on companies that use the tech, rather than on Facebook:

"We expect that to the extent companies use this dataset and benchmark to develop commercial applications, they will develop safeguards for such applications. For example, before AR glasses can enhance someone's voice, there could be a protocol in place that they follow to ask someone else's glasses for permission, or they could limit the range of the device so it can only pick up sounds from the people with whom I am already having a conversation or who are in my immediate vicinity."