Meta’s Artificial Intelligence Rebuild: Zuckerberg Consolidates Generative AI Research and Pushes It Into Products

According to Meta’s chief executive officer, the firm intends to have nearly 350,000 H100 GPUs from chip designer Nvidia by the end of 2024.

On January 18, Mark Zuckerberg, Meta’s chief executive officer, said the firm intends to bring its business-focused generative artificial intelligence (AI) research group and its Fundamental AI Research (FAIR) team ‘closer together’ and to redouble its push to build the technology into its products.

Zuckerberg Announces Meta’s AI Rebuild

In a video posted to Threads, Zuckerberg revealed the changes to the firm’s artificial intelligence efforts, which include investments in custom-designed computer chips to develop and deliver new generative AI models and products. He also noted that Meta has begun training its Llama 3 large language model (LLM).

Specifically, he said that developing full general intelligence is vital for the next generation of services. Building the best artificial intelligence assistants will require advances in every area of AI, including planning, reasoning, memory, coding, and other cognitive abilities.

Zuckerberg said that the push to incorporate generative AI into Meta’s products will require a major expansion of the firm’s technological infrastructure. By the end of 2024, the company intends to have nearly 350,000 H100 graphics processing units (GPUs) from Nvidia, a chip designer.

Meta is merging its two main artificial intelligence research divisions in a move that resembles Alphabet’s 2023 reorganization, in which that firm combined its two advanced AI research labs, DeepMind and Google Brain.

Alphabet was attempting to catch up with OpenAI and Microsoft. OpenAI had beaten Google to market with ChatGPT, a highly capable artificial intelligence chatbot, and had made GPT-4, then the world’s most powerful LLM, available to customers.

Meta Has Conducted Sizeable AI Research, Including Unsupervised Learning

Meta has long carried out considerable artificial intelligence research, covering areas such as unsupervised learning, in which an AI learns patterns from data that carries no labels. It has also developed AI software that outplays top human players in the strategy game Diplomacy.
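
For readers unfamiliar with the term, the short sketch below illustrates unsupervised learning in the generic sense described above: a k-means loop that groups unlabelled 2-D points purely from their structure. It is a toy example for illustration, not code from Meta’s research; the synthetic data, the cluster count, and the iteration budget are all arbitrary assumptions.

```python
# Toy illustration of unsupervised learning: k-means clustering groups
# unlabelled points using only their geometry, with no labels involved.
# Generic sketch for illustration, not code from Meta's research.
import numpy as np

rng = np.random.default_rng(0)

# Unlabelled data: two well-separated blobs of 2-D points.
points = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.5, size=(100, 2)),
    rng.normal(loc=(3.0, 3.0), scale=0.5, size=(100, 2)),
])

k = 2  # assumed number of clusters
centroids = points[rng.choice(len(points), size=k, replace=False)]

for _ in range(20):
    # Assign each point to its nearest centroid ...
    distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    labels = distances.argmin(axis=1)
    # ... then move each centroid to the mean of its assigned points.
    centroids = np.array([points[labels == i].mean(axis=0) for i in range(k)])

print("Learned cluster centres:", centroids)
```

With no labels provided, the loop still recovers centres near (0, 0) and (3, 3), which is the essence of learning patterns from unlabelled information.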

Advances in computer vision algorithms and machine translation are other examples of Meta’s achievements.

The company’s GenAI team developed a capable open-source language model called Llama 2. While it is less advanced than Google’s Gemini or OpenAI’s GPT-4, many developers favour it over proprietary models for building cost-effective, customizable chatbots.
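
To give a sense of why developers find Llama 2 convenient for such work, the sketch below shows one common way to load and prompt the model through the Hugging Face transformers library. It is a minimal example rather than an official recipe: it assumes approved access to the gated "meta-llama/Llama-2-7b-chat-hf" checkpoint, the accelerate package installed for device placement, and a GPU with enough memory; exact arguments can vary between library versions.

```python
# Minimal sketch: running Llama 2 locally via Hugging Face transformers.
# Assumes approved access to the gated checkpoint, `accelerate` installed,
# and sufficient GPU memory; not an official Meta example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory use
    device_map="auto",          # place weights on the available GPU(s)
)

prompt = "Explain in one sentence what a large language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights are openly available, developers can fine-tune or quantize the model for their own chatbots, which is the cost and customization advantage the article refers to.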


By Michael Scott

Michael Scott is a skilled and seasoned news writer with a talent for crafting compelling stories. He is known for his attention to detail, clarity of expression, and ability to engage his readers with his writing.