Social media giant Meta announced on July 1 that Alexandr Wang, former CEO of Scale AI, has been appointed as Meta’s chief AI officer and will co-lead Meta Superintelligence Labs (MSL) with Nat Friedman, former CEO of GitHub. The move follows Meta’s $14.3 billion investment in Scale AI in June.
The company has been on an AI hiring blitz, recruiting top talent from OpenAI, Anthropic, Google, and DeepMind, with some signing bonuses reportedly as high as $100 million.
The following individuals make up the superintelligence team.
Trapit Bansal
Trapit Bansal is recognised for his pioneering work applying reinforcement learning to chain-of-thought reasoning in large language models. A co-creator of OpenAI’s o-series models, Bansal has played a pivotal role in advancing model interpretability and robustness. His research focuses on training methodologies that improve both the reasoning capabilities and the efficiency of modern AI systems.
Shuchao Bi
Shuchao Bi contributed significantly to GPT-4o’s voice mode and the o4-mini model. At OpenAI, he led multimodal post-training efforts, refining how models process and generate outputs across text, audio, and visual inputs.
Huiwen Chang
Huiwen Chang was instrumental in designing GPT-4o’s image generation features and has a strong background in generative AI. Previously at Google Research, she invented the MaskGIT and Muse architectures, both of which have become foundational in the field of text-to-image synthesis.
Ji Lin
Ji Lin played a key role in building a suite of influential models, including o3/o4-mini, GPT-4o, GPT-4.1, GPT-4.5, 4o-ImageGen, and the Operator reasoning stack. His contributions span advanced reasoning mechanisms and architectural improvements that enable these models to handle a wide range of complex tasks more effectively.
Joel Pobar
At Anthropic, Joel Pobar led work on inference optimisation. He previously spent over a decade at Meta, contributing to core infrastructure projects including HHVM, Hack, Flow, Redex, and various performance and machine learning tools. His deep experience in software engineering has been critical to improving the speed and scalability of AI inference systems.
Jack Rae
Jack Rae serves as the pre-training technical lead for Gemini and leads reasoning for Gemini 2.5. At DeepMind, he spearheaded early large language model projects including Gopher and Chinchilla. His expertise lies in large-scale pre-training strategies and improving the reasoning capabilities of cutting-edge AI models.
Hongyu Ren
Hongyu Ren is a co-creator of several OpenAI models, including GPT-4o, 4o-mini, o1-mini, o3-mini, o3, and o4-mini. He previously led a post-training group at OpenAI, focusing on refining and optimising large language models for greater accuracy and efficiency through advanced post-training techniques.
Johan Schalkwyk
Johan Schalkwyk, a former Google Fellow, was an early contributor to the Sesame project and served as the technical lead for Maya. His extensive background in AI research and development has shaped foundational advances in machine learning frameworks and speech technologies.
Pei Sun
Pei Sun worked on post-training, coding, and reasoning for the Gemini project at Google DeepMind. Before that, he built the last two generations of perception models at Waymo, reflecting his expertise in AI for autonomous vehicles. His current focus is on strengthening reasoning and real-world application capabilities in advanced AI systems.
Jiahui Yu
Jiahui Yu is a co-creator of o3, o4-mini, GPT-4.1, and GPT-4o. He previously led OpenAI’s perception team and co-led multimodal research for Gemini. His work is centred on advancing AI perception and integrating multimodal understanding, enabling models to process and generate information across diverse data types.
Shengjia Zhao
Shengjia Zhao is a co-creator of ChatGPT, GPT-4, the mini model series, GPT-4.1, and o3. At OpenAI, he led synthetic data initiatives focused on improving the diversity and quality of training data. His innovations in data synthesis have been crucial to enhancing model generalisation and overall performance.
The post Meet Meta’s New Superintelligence Dream Team appeared first on Analytics India Magazine.