
TLDR

Together With Atlassian

TLDR AI 2026-04-28

Unleash human+AI collaboration at Atlassian Team '26 (Sponsor)

On May 6-7, stream Atlassian's flagship event to learn how AI-native teams are using the power of agents combined with organizational knowledge to supercharge their work.

The full online agenda is live, featuring these sessions and more:

Day 1: Get front-row access to the latest announcements around Agents, automations, and how to transform organizations from AI novice to AI native.

Day 2: Hear from customers including Ford Motor Company, Lendi Group, and CHG Healthcare on how they use agentic AI with Rovo.

Register now to save your seat for the free livestream.

🚀

Headlines & Launches

OpenAI Misses Key Revenue, User Targets in High-Stakes Sprint Toward IPO (6 minute read)

OpenAI missed its own targets for new users and revenue, raising concern among company leaders about whether it will be able to support its massive spending on data centers. The company's Chief Financial Officer said she is worried that OpenAI may not be able to pay for future computing contracts if revenue doesn't grow fast enough. Board directors have been questioning CEO Sam Altman's efforts to secure even more computing power despite the business slowdown. Company executives are now seeking to control costs and instill more discipline in the business.
OpenAI Smartphone Rumors (3 minute read)

Analyst Ming-Chi Kuo reported that OpenAI explored building a smartphone with partners like MediaTek and Qualcomm, potentially replacing app-centric interfaces with AI agents and hybrid on-device/cloud models.
China Blocks Meta Manus Acquisition (2 minute read)

China halted Meta's $2B acquisition of agentic AI startup Manus, ordering the deal unwound amid regulatory scrutiny, complicating Meta's push into AI agents and cross-border expansion.
Microsoft OpenAI Partnership Update (2 minute read)

OpenAI and Microsoft revised their agreement to increase flexibility, including non-exclusive IP licensing, multi-cloud support for OpenAI products, and capped revenue-sharing terms through 2030.
🧠

Deep Dives & Analysis

To Train or Not to Train (10 minute read)

The companies integrating down into the model layer are doing it because, at their scale, the economics and differentiation arguments work out. Almost all of them are doing post-training, not pre-training from scratch. Companies should start collecting data now and building small, specialized models: the more data they collect, the better the models they can produce.
Batch API is terrible for one agent. It might be great for a fleet (6 minute read)

Batch API offers a 50% discount but adds latency, making it a poor fit for a single agent. For fleets of agents whose requests can be pooled, though, batching becomes economically attractive. The optimal setup routes slower, costlier models through batches while keeping faster models on synchronous paths, potentially managed by an intelligent proxy such as the in-development LunaRoute.
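The split described above can be sketched as a tiny router. This is a hypothetical illustration, not LunaRoute's implementation: the model names and tier sets are assumptions, and a real proxy would submit the drained pool as an actual Batch API job.

```python
from collections import defaultdict

# Assumed tiers (illustrative names, not a real pricing table):
# slow/expensive models tolerate batch latency; fast models stay synchronous.
BATCH_MODELS = {"gpt-5.5", "claude-opus"}
SYNC_MODELS = {"gpt-5.5-mini"}

class FleetRouter:
    def __init__(self):
        self.batch_queue = defaultdict(list)  # model -> pooled prompts

    def route(self, model: str, prompt: str) -> str:
        """Queue expensive models for batching; pass fast models through."""
        if model in BATCH_MODELS:
            self.batch_queue[model].append(prompt)
            return "batched"   # flushed later as one discounted batch job
        return "sync"          # low-latency synchronous call

    def flush(self, model: str) -> list:
        """Drain the pooled prompts for one batch submission."""
        prompts, self.batch_queue[model] = self.batch_queue[model], []
        return prompts
```

The economics only work when `flush` regularly drains many pooled prompts; a lone agent would just pay the latency cost for no pooling benefit.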
GPT 5.5: The System Card (20 minute read)

GPT-5.5 is a solid improvement and is competitive with Claude Opus. It seems to be better for factual queries, web searches, and straightforward, well-specified requests, while Claude Opus excels at more open-ended or interpretive tasks. The model is unlikely to pose major new risks, and its alignment seems similar to that of previous models. This post examines the system card for GPT-5.5.
🧑‍💻

Engineering & Research

Stop stitching databases for AI agents (Sponsor)

Oracle AI Database acts as a unified memory core for agents. Vector search, relational, JSON, and graph data live together so agents can reason over live enterprise data without extra vector stores, pipelines, or synchronization jobs.
See how developers build agent memory →
Compressing AI vectors to 2–4 bits per number without losing accuracy (54 minute read)

TurboQuant compresses each coordinate in large tables of high-dimensional vectors to 2-4 bits with provably near-optimal distortion, no memory overhead for scale factors, and no training or calibration. It is between four and six orders of magnitude faster than the alternatives at 4-bit indexing, with higher recall as well. This page explains how TurboQuant works.
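For intuition about what "2–4 bits per number" means, here is a plain uniform quantizer in pure Python. This is deliberately not TurboQuant's scheme (TurboQuant stores no scale factors and needs no calibration, unlike the `lo`/`scale` values kept here); it only shows the baseline idea of mapping each float to one of 2^bits levels.

```python
def quantize(vec, bits=4):
    """Map each coordinate of vec to an integer code in 0..2**bits - 1."""
    levels = 2 ** bits
    lo, hi = min(vec), max(vec)
    scale = (hi - lo) / (levels - 1) if hi > lo else 1.0
    codes = [round((x - lo) / scale) for x in vec]
    return codes, lo, scale  # note: lo/scale are per-vector overhead

def dequantize(codes, lo, scale):
    """Reconstruct approximate floats from the integer codes."""
    return [c * scale + lo for c in codes]
```

At 4 bits, each coordinate needs only 16 distinct values, which is why distortion control (TurboQuant's provable near-optimality) matters so much.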
Codex Symphony Agent Orchestration (28 minute read)

OpenAI's Symphony is an open-source specification that turns issue trackers into control planes for coding agents, reducing context switching and increasing pull request throughput by up to 5x.
Amazon's Risk Evaluation Framework (18 minute read)

Amazon researchers introduced ESRRSim, an agentic evaluation framework with a structured taxonomy to benchmark risks like deception and reward hacking, revealing wide variation in model behavior across 11 LLMs.
🎁

Miscellaneous

Recursive Language Models, clearly explained (3 minute read)

MIT researchers have introduced Recursive Language Models (RLMs) to address "context rot," the reasoning degradation large language models suffer when processing massive context windows, even when they excel at basic retrieval. Instead of forcing a model to ingest an entire document at once, an RLM loads the context into a Python REPL as a variable that the model can query programmatically.
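A toy sketch of that interaction pattern, assuming nothing about the MIT implementation: the context lives in a variable, the "model" probes it with code instead of reading it all, and only a small relevant window would be handed to a recursive sub-call.

```python
def rlm_style_lookup(context: str, needle: str) -> str:
    """Probe a long context held in a variable, returning only a small
    window around the first match (the chunk a sub-model would then see)."""
    pos = context.find(needle)      # code-level search, not attention
    if pos == -1:
        return "not found"
    start = max(0, pos - 40)
    return context[start:pos + len(needle) + 40]
```

The point is that the full `context` never enters a prompt; the recursive call only sees the short window, sidestepping the long-context degradation.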
The Moat or the Commons (10 minute read)

American AI was financed on the bet that frontier models would be the next great monopoly business. That assumption is now breaking as open-weight models commoditize the very capability that the moat, paid for with American capital, was supposed to protect. The gap between the open frontier and the closed frontier is closing. The question is whether countries will choose to subsidize the private moat or the open commons.

Quick Links

[Webinar] 73% of enterprises say they can't scale AI (Sponsor)

Not because of the model, but the infrastructure. On May 13th, see how CData and Microsoft built the architecture blueprint for scalable AI agents.
DeepSeek cuts V4-Pro prices by 75% (5 minute read)

DeepSeek is offering a 75% discount on its V4-Pro model pricing and cutting the price of cached input hits by 90%, challenging US AI giants against a tense geopolitical backdrop.
GPU Spot Prices Surge 114% in Six Weeks (2 minute read)

The rental price of NVIDIA's B200 GPU soared 114% to $4.95 per hour due to demand from AI models like GPT-5.5.
Become a curator for TLDR AI (3-5 hrs/week)

TLDR is looking for an engineer/researcher at a major AI lab or startup to help write for 1M+ subscribers. Our curators have been invited to Google I/O and OpenAI DevDay, scouted for Tier 1 VCs, and get early access to unreleased TLDR products. Learn more.
MiMo-V2.5-Pro (6 minute read)

Xiaomi open-sourced MiMo-V2.5-Pro, a 1.02T-parameter Mixture-of-Experts model that shows significant advancements in agentic tasks, software engineering, and long-horizon coherence.
Former Google DeepMind researcher's AI startup raises record $1.1 billion seed funding to pursue superintelligence (3 minute read)

David Silver is a UCL professor and the former lead of DeepMind's reinforcement learning team.

Love TLDR? Tell your friends and get rewards!

Share your referral link below with friends to get free TLDR swag!
Track your referrals here.

Want to advertise in TLDR? 📰

If your company is interested in reaching an audience of AI professionals and decision makers, you may want to advertise with us.

Want to work at TLDR? 💼

Apply here, create your own role or send a friend's resume to jobs@tldr.tech and get $1k if we hire them! TLDR is one of Inc.'s Best Bootstrapped businesses of 2025.

If you have any comments or feedback, just respond to this email!

Thanks for reading,
Andrew Tan, Ali Aminian, & Jacob Turner


Manage your subscriptions to our other newsletters on tech, startups, and programming. Or if TLDR AI isn't for you, please unsubscribe.