Meta is building a giant AI model to power its ‘entire video ecosystem,’ exec says



Meta’s heavy investment in artificial intelligence includes the development of an AI system designed to power Facebook’s entire video recommendation engine across all of its platforms, a company executive said Wednesday.

Tom Alison, the head of Facebook, said part of Meta’s “technology roadmap that goes to 2026” involves developing an AI recommendation model that can power both the company’s TikTok-like Reels short-video service and more traditional, longer videos.

To date, Meta has typically used a separate model for each of its products, such as Reels, Groups and the core Facebook Feed, Alison said onstage at Morgan Stanley’s tech conference in San Francisco.

As part of Meta’s ambitious foray into AI, the company has been spending billions of dollars on Nvidia graphics processing units, or GPUs. These have become the primary chips used by AI researchers for training the kinds of large language models that power OpenAI’s popular ChatGPT chatbot and other generative AI models.

Alison said “phase 1” of Meta’s tech roadmap involved switching the company’s current recommendation systems to GPUs from more traditional computer chips, helping to improve the overall performance of its products.

As interest in LLMs exploded last year, Meta executives were struck by how those giant AI models could “handle lots of data and all kinds of very general-purpose types of activities like chatting,” Alison said. Meta came to see the possibility of a giant recommendation model that could be used across products, and by last year it had built “this kind of new model architecture,” Alison said, adding that the company tested it on Reels.

The new model architecture helped Facebook achieve “an 8% to 10% gain in Reels watch time” on the core Facebook app, which Alison said showed that the model was “learning from the data much more efficiently than the previous generation.”

“We’ve really focused on kind of investing more in making sure that we can scale these models up with the right kind of hardware,” he said.

Meta is now in “phase 3” of the system’s re-architecture, which involves trying to validate the technology and roll it out across multiple products.

“Instead of just powering Reels, we’re working on a project to power our entire video ecosystem with this single model, and then can we add our Feed recommendation product to also be served by this model,” Alison said. “If we get this right, not only will the recommendations be kind of more engaging and more relevant, but we think the responsiveness of them can improve as well.”

Illustrating how it would work if successful, Alison said, “If you see something that you’re into in Reels, and then you go back to the Feed, we can kind of show you more similar content.”

Alison said Meta has amassed a large stockpile of GPUs that will be used to support its broader generative AI efforts, such as the development of digital assistants.

Some generative AI projects Meta is considering include incorporating more sophisticated chatting tools into its core Feed, so a person who sees a “recommended post about Taylor Swift” could perhaps “easily just click a button and say, ‘Hey Meta AI, tell me more about what I’m seeing with Taylor Swift right now.'”

Meta is also experimenting with integrating its AI chatting tool within Groups, so a member of a Facebook baking group could potentially ask a question about desserts and get an answer from a digital assistant.

“I think we have the opportunity to put generative AI in kind of a multiplayer kind of consumer environment,” Alison said.
