Expanding Scale, Widening Performance Gap: Questions Surround Meta’s Bet on ‘Superintelligence’ AI
Record User Growth, but Plunging Trust in Technology
Talent War Ends, Sudden Strategic Shift
Doubts Over Feasibility of Superintelligence Quest

Meta’s ambitious artificial intelligence (AI) services are posting steep user growth, yet the reliability of the technology is declining just as sharply amid unstable responses and privacy controversies. Having realized the widening performance gap with rivals OpenAI and Google, Meta has halted its aggressive hiring spree and embarked on an organizational shake-up. CEO Mark Zuckerberg has sought to highlight the long-term vision by unveiling a “superintelligence” project, but with even AGI (Artificial General Intelligence), the precursor technology, still unrealized, most experts see little feasibility in the initiative.
Mismatch Between Launch Timing and Product Maturity
At its annual shareholders’ meeting late last month, Meta announced that its AI has surpassed 1 billion monthly active users (MAUs) across Facebook, Instagram, WhatsApp and other platforms. That figure has doubled from 500 million last September, driven largely by the launch of a dedicated AI app in April. Zuckerberg’s ambition to transform Meta from a social media company into an AI powerhouse appears, at least in terms of scale, to have partly materialized.
Yet consumers have been far less generous in their assessments. Persistent concerns over response consistency, conversational memory, and system stability—the core elements of user experience—combined with privacy fears around default settings and data handling practices, have severely eroded trust. Tech outlet OpenTools criticized the effort as “little more than renaming an existing app originally developed for smart glasses,” calling the move “a product of impatience to launch a standalone app.”
This gap between timing and readiness underpins much of the criticism. Meta’s AI arrived as a standalone app more than two and a half years after ChatGPT’s debut, during which time both user adoption of AI and consumer expectations had soared. Complaints have centered on misinterpretations, inaccurate responses, and memory lapses—failings at the heart of conversational services. But Meta has yet to present a clear timeline for addressing these fundamental issues, compounding the trust deficit. While the company bet on the viral impact of its “open source plus mass distribution” strategy through its Llama large language model (LLM), product stability has lagged far behind its growing user base.
Still, the potential benefits of scale cannot be ignored. Meta’s built-in distribution network across Facebook, Instagram, and WhatsApp ensures constant expansion of user touchpoints and provides a steady stream of interaction data critical for model improvement. Analysts note that if network effects enable “learning at scale,” the current momentum in user growth could eventually be converted into meaningful quality upgrades.
But scale brings risks as well. If rapid user growth is accompanied by recurring glitches such as conversation bugs, the same momentum could serve as an amplifier of risk. With services currently limited to 22 countries and widening performance gaps with OpenAI and Google, the speed of Meta’s global expansion is also being curtailed. The larger the user base grows, the more society demands safety and reliability. If Meta fails to meet these expectations, negative perceptions may deepen. Ultimately, unless Meta can translate popularity into trust in product quality, analysts warn that market patience will quickly wear thin.

After Aggressive Talent Raids, Only ‘Cost Burdens’ Remain
Meta had waged an aggressive talent war, but earlier this month it froze all new hiring for its AI division and temporarily barred internal transfers. External recruitment now requires explicit approval from the company’s top AI executive. At the same time, Meta reorganized its AI operations under a new “Meta Superintelligence Lab,” aligning functions into commercial product teams, infrastructure teams, basic research teams (FAIR), and a unit dedicated to superintelligence. The “AGI Foundations” team, which had overseen the LLM line, was disbanded, while defections have accelerated amid frustrations over Llama’s performance, leaving morale badly damaged.
Until this sharp reversal, Meta had been at the center of an overheated recruitment battle. The company courted leading researchers from rivals directly, offering compensation packages reportedly worth around $100 million a year, with even larger offers rumored for select star scientists. As a result, Meta poached more than 50 key figures, including over 20 from OpenAI, 13 from Google, and others from Apple, xAI, and Anthropic.
But the influx of new hires came with surging one-time bonuses and stock-based awards, raising concerns that Meta’s ability to return value to shareholders was being undermined. Investors increasingly criticized management for “overspending relative to results.” Despite securing talent through massive investment and generous compensation, the performance and polish of Meta’s products fell short of expectations, calling into question the justification for the recruitment spree.
Against this backdrop, the hiring freeze and reorganization appear aimed at reining in spiraling costs in the short term, while clarifying accountability and goals by separating research, product, and infrastructure functions. A Meta executive explained, “If the commercial product teams focus on quality improvements and user concerns while the basic research teams support the long-term roadmap, the parallel structure could rapidly align scattered R&D momentum.”

Can ‘Overcoming Biological Limits’ Be Achieved?
The shift also reflects Meta’s weakness in the LLM race. Acknowledging its lag behind OpenAI and Google, the company has sought to pivot away from pushing its existing LLMs as flagship offerings and instead crafted a narrative around longer-term differentiation. Earlier this year, Zuckerberg declared a strategic shift from LLM competition to the pursuit of “superintelligence,” vowing to build “the most powerful AI research lab in the world.” Market observers, however, remain skeptical, noting that the current Llama series is plagued by flaws and low reliability, making the promise of superintelligence seem more aspirational than realistic.
Superintelligence, which refers to AI exceeding the general intelligence of humans, is conceptually framed as a step beyond AGI. Whereas AGI aims to replicate human-like reasoning and learning capabilities, superintelligence envisions solving problems at a scale beyond human comprehension. But many in academia argue that “with AGI itself still unrealized, investing on the premise of superintelligence is premature.”
Experts highlight the formidable challenges of superintelligence research. Decoding neuroscientific mechanisms and implementing them in software and hardware is seen as well beyond the scope of any tech company. The underlying premise—simulating and overcoming biological limits—remains at a laboratory stage, while constraints around computing resources, energy efficiency, and data quality are substantial. Nonetheless, analysts interpret Meta’s embrace of superintelligence as an attempt to carve out a distinct identity in the crowded AI market and secure investor confidence for the long term.
Industry observers worry that Meta’s strategy could distort the ecosystem. If massive funds are funneled into a project with little near-term feasibility, startups and academia could be deprived of the resources and talent they need, upsetting the balance of the sector. Meta’s prior talent raids and heavy spending have already disrupted compensation norms across the industry. With the prospects for its superintelligence project widely seen as remote, the broader AI ecosystem risks being skewed toward unrealistic objectives—an outcome the industry as a whole can ill afford.