OpenAI misses targets 🎯, OpenAI Microsoft renegotiate 🤝, Manus deal blocked 🚫

TLDR AI <dan@tldrnewsletter.com>
Received: Tuesday, April 28, 2026, 13:48
Source: TLDR AI
Message-ID: 0100019dd45994e1-4d035e34-46b1-4023-ad2b-ffeb564b1b07-000000@email.amazonses.com

 Sign Up [1] | Advertise [2] | View Online [3]

TLDR

TOGETHER WITH [Atlassian] [4]

TLDR AI 2026-04-28

 UNLEASH HUMAN+AI COLLABORATION AT ATLASSIAN TEAM '26 (SPONSOR) [4]

 On May 6-7, stream Atlassian's flagship event [4] to learn how
AI-native teams are using the power of agents combined with
organizational knowledge to supercharge their work.

The full online agenda is live [4], featuring these sessions and more:

DAY 1: Get front-row access to the latest announcements around Agents,
automations, and how to transform organizations from AI novice to AI
native.

DAY 2: Hear from customers including Ford Motor Company, Lendi Group,
and CHG Healthcare on how they use agentic AI with Rovo.

Register now to save your seat for the free livestream [4].

🚀

HEADLINES & LAUNCHES

 OPENAI MISSES KEY REVENUE, USER TARGETS IN HIGH-STAKES SPRINT TOWARD
IPO (6 MINUTE READ) [5]

 OpenAI missed its own targets for new users and revenue, raising
concern among company leaders about whether it will be able to support
its massive spending on data centers. The company's Chief Financial
Officer has said that she is worried that OpenAI may not be able to
pay for future computing contracts if revenue doesn't grow fast
enough. Board directors have been questioning CEO Sam Altman's efforts
to secure even more computing power despite the business slowdown.
Company executives are now seeking to control costs and instill more
discipline in the business.

 OPENAI SMARTPHONE RUMORS (3 MINUTE READ) [6]

 Analyst Ming-Chi Kuo reported that OpenAI explored building a
smartphone with partners like MediaTek and Qualcomm, potentially
replacing app-centric interfaces with AI agents and hybrid
on-device/cloud models.

 CHINA BLOCKS META MANUS ACQUISITION (2 MINUTE READ) [7]

 China halted Meta's $2B acquisition of agentic AI startup Manus,
ordering the deal unwound amid regulatory scrutiny, complicating
Meta's push into AI agents and cross-border expansion.

 MICROSOFT OPENAI PARTNERSHIP UPDATE (2 MINUTE READ) [8]

 OpenAI and Microsoft revised their agreement to increase flexibility,
including non-exclusive IP licensing, multi-cloud support for OpenAI
products, and capped revenue-sharing terms through 2030.

🧠

DEEP DIVES & ANALYSIS

 TO TRAIN OR NOT TO TRAIN (10 MINUTE READ) [9]

 The companies integrating down into the model layer are doing it
because, at their scale, the economics and differentiation arguments
work out. Almost all of them are doing post-training, not pre-training
from scratch. Companies should start collecting data and build small,
specialized models. The more data companies collect, the better models
they can produce.

 BATCH API IS TERRIBLE FOR ONE AGENT. IT MIGHT BE GREAT FOR A FLEET (6
MINUTE READ) [10]

 The Batch API offers a 50% discount but adds latency, making it a poor
fit for a single interactive agent. For fleets of agents whose requests
can be pooled, batching becomes economically viable. Optimal usage
routes slower, costlier models through batches while sending faster
models down synchronous paths, potentially managed by an intelligent
proxy such as the in-development LunaRoute.
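The pooling logic the article describes can be sketched as a toy router. Everything here (the `tier` field, the threshold of 8) is illustrative, not the actual design of any real proxy or batch endpoint:

```python
def route(request, queue, batch_threshold=8):
    """Toy router: latency-tolerant calls to heavy models accumulate in
    a queue and are flushed as one discounted batch job; light
    interactive calls bypass the queue entirely."""
    if request["tier"] == "heavy":          # costly model, can wait
        queue.append(request)
        if len(queue) >= batch_threshold:   # enough pooled work to batch
            batch = queue[:]
            queue.clear()
            return ("batch", batch)         # submit as one batch job
        return ("queued", None)             # parked until the flush
    return ("sync", request)                # fast model, low-latency path

queue = []
decisions = [route({"tier": t}, queue)
             for t in ["light"] + ["heavy"] * 8]
```

A single agent sending heavy requests would sit at `("queued", None)` most of the time, which is exactly the latency penalty the article flags; only a fleet fills the queue fast enough for the discount to pay off.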

 GPT 5.5: THE SYSTEM CARD (20 MINUTE READ) [11]

 GPT-5.5 is a solid improvement and is competitive with Claude Opus. It
seems better for factual queries, web searches, and straightforward,
well-specified requests, while Claude Opus excels at more open-ended or
interpretive tasks. The model is unlikely to pose significant new
risks, and its alignment appears similar to that of previous models.
This post examines the system card for GPT-5.5.

🧑‍💻

ENGINEERING & RESEARCH

 STOP STITCHING DATABASES FOR AI AGENTS (SPONSOR) [12]

 Oracle AI Database acts as a unified memory core for agents [13].
Vector search, relational, JSON, and graph data live together so
agents can reason over live enterprise data without extra vector
stores, pipelines, or synchronization jobs.
SEE HOW DEVELOPERS BUILD AGENT MEMORY → [14]

 COMPRESSING AI VECTORS TO 2–4 BITS PER NUMBER WITHOUT LOSING
ACCURACY (54 MINUTE READ) [15]

 TurboQuant compresses each coordinate in large tables of
high-dimensional vectors to 2-4 bits with provably near-optimal
distortion, no memory overhead for scale factors, and no training or
calibration. It is between four and six orders of magnitude faster
than the alternatives at 4-bit indexing, with higher recall as well.
This page explains how TurboQuant works.
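For intuition, here is plain uniform scalar quantization of a vector's coordinates to a few bits, a deliberately naive baseline: TurboQuant's contribution is precisely avoiding the stored scale factors and calibration this sketch requires.

```python
def quantize(vec, bits=2):
    """Map each coordinate to one of 2**bits levels spanning the
    vector's min..max range. Naive baseline: note it must store
    `lo` and `scale` per vector, overhead TurboQuant eliminates."""
    lo, hi = min(vec), max(vec)
    levels = (1 << bits) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = [round((x - lo) / scale) for x in vec]
    return codes, lo, scale

def dequantize(codes, lo, scale):
    """Reconstruct approximate coordinates from the integer codes."""
    return [lo + c * scale for c in codes]

codes, lo, scale = quantize([0.0, 0.3, 0.7, 1.0], bits=2)
```

At 2 bits each coordinate collapses to one of four levels, which makes the distortion question concrete: the interesting part is doing this with near-optimal error and no per-vector metadata, which is what the linked page explains.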

 CODEX SYMPHONY AGENT ORCHESTRATION (28 MINUTE READ) [16]

 OpenAI's Symphony is an open-source specification that turns issue
trackers into control planes for coding agents, reducing context
switching and increasing pull request throughput by up to 5x.

 AMAZON'S RISK EVALUATION FRAMEWORK (18 MINUTE READ) [17]

 Amazon researchers introduced ESRRSim, an agentic evaluation
framework with a structured taxonomy to benchmark risks like deception
and reward hacking, revealing wide variation in model behavior across
11 LLMs.

🎁

MISCELLANEOUS

 RECURSIVE LANGUAGE MODELS, CLEARLY EXPLAINED (3 MINUTE READ) [18]

 MIT researchers have introduced Recursive Language Models (RLMs) to
solve "context rot," a phenomenon where large language models
experience reasoning degradation when processing massive context
windows, even if they excel at basic retrieval tasks. Instead of
forcing a model to ingest an entire document at once, an RLM loads the
context into a Python REPL runtime memory slot.
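The REPL idea can be caricatured in a few lines: the long context lives in an ordinary variable, and the model only ever calls cheap accessors against it instead of reading it whole. Class and method names here are hypothetical, not the researchers' actual interface:

```python
class ContextSlot:
    """Toy stand-in for an RLM's REPL memory slot: the full text is
    held out-of-prompt, and the model issues small targeted queries."""

    def __init__(self, text):
        self.text = text

    def peek(self, start, length=200):
        # Return a small window of the stored context.
        return self.text[start:start + length]

    def search(self, needle):
        # Return every character offset of a query string -- the kind
        # of cheap lookup that replaces re-reading the whole document.
        out, i = [], self.text.find(needle)
        while i != -1:
            out.append(i)
            i = self.text.find(needle, i + 1)
        return out

slot = ContextSlot("alpha beta gamma beta")
hits = slot.search("beta")
```

Because each call returns only a small window or a list of offsets, the model's working context stays short no matter how large the stored document grows, which is the mechanism aimed at "context rot."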

 THE MOAT OR THE COMMONS (10 MINUTE READ) [19]

 American AI was financed on the bet that frontier models would be the
next great monopoly business. That assumption is now breaking as
open-weight models commoditize the very capability that the moat
American capital paid for was supposed to protect. The gap between the
open frontier and the closed frontier is closing. The question will be
whether countries choose to subsidize the private moat or the open
commons.

⚡

QUICK LINKS

 [WEBINAR] 73% OF ENTERPRISES SAY THEY CAN'T SCALE AI (SPONSOR) [20]

 Not because of the model, but the infrastructure. On May 13th, see
how CData and Microsoft built the architecture blueprint for scalable
AI agents.

 DEEPSEEK CUTS V4-PRO PRICES BY 75% (5 MINUTE READ) [21]

 DeepSeek is offering a 75% discount on its V4-Pro model pricing and
cutting input cache-hit prices by 90%, challenging US AI giants against
a tense geopolitical backdrop.

 GPU SPOT PRICES SURGE 114% IN SIX WEEKS (2 MINUTE READ) [22]

 The rental price of NVIDIA's B200 GPU soared 114% to $4.95 per hour,
driven by demand from AI models like GPT-5.5.

 BECOME A CURATOR FOR TLDR AI (3-5 HRS/WEEK) [23]

 TLDR is looking for an engineer/researcher at a major AI lab or
startup to help write for 1M+ subscribers. Our curators have been
invited to Google I/O and OpenAI DevDay, scouted for Tier 1 VCs, and
get early access to unreleased TLDR products. Learn more [24].

 MIMO-V2.5-PRO (6 MINUTE READ) [25]

 Xiaomi's open-sourced MiMo-V2.5-Pro, a 1.02T-parameter
Mixture-of-Experts model, shows significant advancements in agentic
tasks, software engineering, and long-horizon coherence.

 FORMER GOOGLE DEEPMIND RESEARCHER'S AI STARTUP RAISES RECORD $1.1
BILLION SEED FUNDING TO PURSUE SUPERINTELLIGENCE (3 MINUTE READ) [26]

 David Silver is a UCL professor and the former lead of DeepMind's
reinforcement learning team.

Love TLDR? Tell your friends and get rewards!

 Share your referral link below with friends to get free TLDR swag!

 https://refer.tldr.tech/5a192f5c/2 [27]

Track your referrals here. [28]

Want to advertise in TLDR? 📰

 If your company is interested in reaching an audience of AI
professionals and decision makers, you may want to ADVERTISE WITH US
[29].

Want to work at TLDR? 💼

 APPLY HERE [30], CREATE YOUR OWN ROLE [31] or send a friend's resume
to jobs@tldr.tech and get $1k if we hire them! TLDR is one of INC.'S
BEST BOOTSTRAPPED BUSINESSES [32] of 2025.

 If you have any comments or feedback, just respond to this email!

Thanks for reading,
Andrew Tan [33], Ali Aminian [34], & Jacob Turner [35]

 Manage your subscriptions [36] to our other newsletters on tech,
startups, and programming. Or if TLDR AI isn't for you, please
unsubscribe [37].

Links:
------
[1] https://tldr.tech/ai
[2] https://advertise.tldr.tech/
[3] https://a.tldrnewsletter.com/web-version?ep=1&lc=97c90d72-3e48-11f1-ab0b-7fc89b18583c&p=a496a340-42f6-11f1-a62b-ddb71cf8abb2&pt=campaign&t=1777384133&s=edb3d536ac28545719bd46354046de2a87b6d494d87aaae17fa8732aa9dbfeef
[4] https://events.atlassian.com/team?utm_context=701qb00000fu2szya0&event_id=326027
[5] https://links.tldrnewsletter.com/4qvzVa
[6] https://links.tldrnewsletter.com/7xogqq
[7] https://techcrunch.com/2026/04/27/china-vetoes-metas-2b-manus-deal-after-months-long-probe/
[8] https://links.tldrnewsletter.com/SizjbP
[9] https://links.tldrnewsletter.com/W2KapZ
[10] https://eran.sandler.co.il/post/2026-04-27-batch-api-is-terrible-for-one-agent/
[11] https://thezvi.wordpress.com/2026/04/27/gpt-5-5-the-system-card/
[12] https://www.oracle.com/database/?source=%3Aad%3Anw%3Aop%3Aeng%3Aa_nas%3A%3ARC_WWMK260212P00002%3A&SC=%3Aad%3Anw%3Aop%3Aeng%3Aa_nas%3A%3ARC_WWMK260212P00002%3A
[13] https://www.oracle.com/database/?source=:ad:nw:op:eng:a_nas::RC_WWMK260212P00002:&SC=:ad:nw:op:eng:a_nas::RC_WWMK260212P00002:
[14] https://www.oracle.com/developer/resources/?source=:ad:nw:op:eng:a_nas::RC_WWMK260212P00002:&SC=:ad:nw:op:eng:a_nas::RC_WWMK260212P00002:
[15] https://arkaung.github.io/interactive-turboquant/
[16] https://links.tldrnewsletter.com/0PtHJG
[17] https://arxiv.org/abs/2604.22119
[18] https://links.tldrnewsletter.com/CT3sB5
[19] https://links.tldrnewsletter.com/TGeuCw
[20] https://www.cdata.com/resources/ai-agents-future-digital-work-microsoft/
[21] https://thenextweb.com/news/deepseek-v4-pro-price-cut-75-percent
[22] https://tomtunguz.com/b200-gpu-pricing-spot-market-model-releases/
[23] https://jobs.ashbyhq.com/tldr.tech/038c4419-5b48-4279-a75e-6f7a0afdb240
[24] https://jobs.ashbyhq.com/tldr.tech/038c4419-5b48-4279-a75e-6f7a0afdb240
[25] https://mimo.xiaomi.com/mimo-v2-5-pro
[26] https://www.cnbc.com/2026/04/27/deepmind-ineffable-intelligence-record-seed-funding-nvidia-google.html
[27] https://refer.tldr.tech/5a192f5c/2
[28] https://hub.sparklp.co/sub_8f6b153ee058/2
[29] https://advertise.tldr.tech/
[30] https://jobs.ashbyhq.com/tldr.tech
[31] https://jobs.ashbyhq.com/tldr.tech/c227b917-a6a4-40ce-8950-d3e165357871
[32] https://www.linkedin.com/feed/update/urn:li:activity:7401699691039830016/
[33] https://twitter.com/andrewztan
[34] https://www.linkedin.com/in/aliiaminian/
[35] https://www.linkedin.com/in/jacob-turner-7521a8198/
[36] https://tldr.tech/ai/manage?email=breviat%40fastmail.com
[37] https://a.tldrnewsletter.com/unsubscribe?ep=1&l=eedf6b14-3de3-11ed-9a32-0241b9615763&lc=97c90d72-3e48-11f1-ab0b-7fc89b18583c&p=a496a340-42f6-11f1-a62b-ddb71cf8abb2&pt=campaign&pv=4&spa=1777381334&t=1777384133&s=26b65027d287ea21102defb7c445ea9eb87f3f999c6a3dcf7a9f9f0790752c8e