Facebook uses first-person videos to train future AIs


One of the obvious goals of almost all computer vision projects is to enable a machine to see and perceive the world the way a human does. Today, Facebook began talking about Ego4D, its own effort in this space, for which it has created a vast new dataset to train future models. In a statement, the company said it had recruited 13 universities across nine countries, which together collected 2,200 hours of footage from 700 participants. That footage was captured from the wearer's first-person perspective and can be used to train these future AI models. Facebook principal researcher Kristen Grauman says it is the largest collection of data explicitly created for this purpose.

The footage centers on a number of common experiences in human life, including social interaction, hand and object manipulation, and predicting what will happen next. As far as the social network is concerned, this is a big step toward better computing experiences, since work until now has largely relied on data captured from a third-person, spectator's point of view. Facebook said the dataset would be released in November "for researchers who sign Ego4D's data use agreement." And next year, researchers beyond this community will be challenged to train machines to better understand what exactly humans are doing in their lives.

Naturally, there's the angle that Facebook, which now has a camera-glasses partnership with Ray-Ban, is looking to improve its own capabilities in this area going forward. You can probably already imagine the privacy perils this sort of surveillance might involve, and why some people might feel a little suspicious about the announcement.
