Google sells TPUs ⚡, Mistral Vibe agents 🤖, AI eval bottlenecks 📉

TLDR AI <dan@tldrnewsletter.com>
Received: Thursday, April 30, 2026, 13:44
Source: TLDR AI
Message-ID: 0100019ddea22971-628837c5-4be8-4e0b-ae86-57be05b1a24c-000000@email.amazonses.com

Sign Up [1] | Advertise [2] | View Online [3]

		TLDR

		TOGETHER WITH [AWS] [4]

TLDR AI 2026-04-30

 MANY ENTERPRISES WANT TO DEPLOY INTELLIGENT AGENTS, BUT STRUGGLE TO
BUILD STRONG DATA FOUNDATIONS TO SUPPORT THEM (SPONSOR) [4]

 Get advice from 15+ leaders on how to build the right data
foundations for agentic analytics and intelligent agents in this book
from Amazon Web Services (AWS). [4]

Explore each chapter written by a different enterprise leader who
brings a unique perspective. See their advice on topics such as data
strategy, data products, classical machine learning (ML), and agentic
AI.

→ Get your digital copy [4]

🚀

HEADLINES & LAUNCHES

 GOOGLE TO SELL TPU CHIPS TO 'SELECT' CUSTOMERS IN LATEST SHOT AT
NVIDIA (2 MINUTE READ) [5]

 Alphabet plans to sell its custom Tensor Processing Units (TPUs) to
select customers to install in their own data centers. The company
recently announced two new TPUs for training and inference and has
already struck chip deals with Anthropic and Meta. Its TPU maneuvers
put it in ever-closer competition with Nvidia.

 MISTRAL MEDIUM 3.5 POWERS REMOTE VIBE AGENTS (6 MINUTE READ) [6]

 Mistral Medium 3.5, a 128B dense model, powers Vibe remote agents to
run long asynchronous coding tasks in the cloud, starting from the CLI
or Le Chat. The model combines instruction-following, reasoning, and
coding capabilities, operating efficiently on four GPUs and scoring
high on SWE-Bench Verified. Le Chat's new Work mode uses this model
for executing complex, multi-step tasks across diverse tools and
functions.

 OPENAI HAS EFFECTIVELY ABANDONED FIRST-PARTY STARGATE DATA CENTERS IN
FAVOR OF MORE FLEXIBLE DEALS (5 MINUTE READ) [7]

 Stargate's initial goal was to build 20 data centers. However, the
partners in the project reportedly could not agree on who would have
ultimate control of the planned data centers. OpenAI has started
leasing compute instead. The startup has not made a profit since it
was founded, and while many institutions believe in its potential,
some analysts estimate that it could run out of cash by mid-2027.

🧠

DEEP DIVES & ANALYSIS

 AI EVALS ARE BECOMING THE NEW COMPUTE BOTTLENECK (19 MINUTE READ) [8]

 AI evaluation costs have escalated into a significant compute
bottleneck, comparable to or exceeding training costs, with some runs
costing tens of thousands of dollars. The field faces uneven cost
distributions across models and tasks, highlighting inefficiencies and
the need for cost-effective approaches like standardized documentation
and data reuse. Without addressing these issues, evaluation remains
expensive, undermining equal access and hindering external validation
in AI research.
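The tens-of-thousands-of-dollars figure is easy to reproduce with back-of-envelope arithmetic. The sketch below is illustrative only; the task counts, sample counts, token counts, and per-token price are assumptions, not numbers from the post.

```python
def eval_cost_usd(num_tasks, samples_per_task, tokens_per_sample,
                  usd_per_million_tokens):
    """Cost of one evaluation run at a flat per-token price."""
    total_tokens = num_tasks * samples_per_task * tokens_per_sample
    return total_tokens / 1_000_000 * usd_per_million_tokens

# A hypothetical agentic benchmark: 1,000 tasks, 16 samples per task,
# ~100k tokens per rollout, at $15 per million tokens.
cost = eval_cost_usd(1000, 16, 100_000, 15.0)
print(f"${cost:,.0f}")  # $24,000
```

Multiply that by a leaderboard's worth of models and checkpoints, and evaluation spend quickly rivals a small training run.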

 INTRODUCING AUTOSP (6 MINUTE READ) [9]

 AutoSP automates converting standard transformer training code into
sequence-parallel code for long-context LLM training, integrated with
DeepSpeed. It enables longer sequence training on multiple GPUs
without significant runtime overhead, eliminating the need for complex
manual code changes. AutoSP also offers an advanced
activation-checkpointing strategy for better memory management,
enhancing performance with minimal cost.
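The transformation AutoSP automates can be pictured as partitioning the sequence dimension across GPUs, so each rank holds only a slice of a long context. The sketch below shows only that partitioning idea; it is not the AutoSP or DeepSpeed API.

```python
# Conceptual sketch of sequence parallelism (the idea AutoSP automates):
# each rank gets a contiguous slice of the sequence, so activation memory
# per GPU scales with seq_len / world_size rather than seq_len.
def shard_sequence(tokens, world_size, rank):
    """Return rank's contiguous slice of the token sequence."""
    n = len(tokens)
    chunk = (n + world_size - 1) // world_size  # ceiling division
    return tokens[rank * chunk : (rank + 1) * chunk]

seq = list(range(8192))          # stand-in for a "long context"
shards = [shard_sequence(seq, 4, r) for r in range(4)]
assert sum(len(s) for s in shards) == len(seq)
print([len(s) for s in shards])  # [2048, 2048, 2048, 2048]
```

The hard part, which a tool like AutoSP handles, is rewriting attention and checkpointing so computation over these shards still matches the single-GPU result.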

 GRANITE 4.1 LLMS: HOW THEY'RE BUILT (13 MINUTE READ) [10]

 Granite 4.1 LLMs use a dense, decoder-only architecture with models of
3B, 8B, and 30B parameters, trained on 15 trillion tokens using a
five-phase pre-training approach. The 8B model matches the performance
of the previous 32B Mixture-of-Experts model through a multi-stage
reinforcement learning pipeline focused on data quality. Designed for
efficient, reliable enterprise use, these models demonstrate
competitive instruction-following and tool-use performance while
maintaining cost efficiency and stable behavior.

🧑‍💻

ENGINEERING & RESEARCH

 LESSONS ON BUILDING MCP SERVERS (5 MINUTE READ) [13]

 This post distills lessons on making MCP toolchains work: the MCP
servers do most of the work while models follow breadcrumbs. Models
don't plan; they look at the conversation, scan the tool list, and grab
whatever looks most probable. Building effective chains means ensuring
the server makes the next call blindingly obvious at every step.
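The breadcrumb pattern can be sketched as tool results that name the obvious next call. The tool names and fields below are hypothetical, invented for illustration; they do not come from the post or any real MCP server.

```python
# Sketch of the "breadcrumb" pattern: each tool result spells out the
# next step, so the model never has to plan ahead on its own.
def list_invoices():
    return {
        "invoices": [{"id": "inv_42", "status": "unpaid"}],
        # Breadcrumb: the obvious next call, embedded in the result.
        "next_step": "Call get_invoice(id) for details on any invoice.",
    }

def get_invoice(invoice_id):
    return {
        "id": invoice_id,
        "amount_usd": 120,
        "next_step": "Call send_reminder(id) if status is 'unpaid'.",
    }

result = list_invoices()
print(result["next_step"])
```

A model scanning this conversation sees exactly one probable continuation at each turn, which is the whole point.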

 LADIR: LATENT DIFFUSION ENHANCES LLMS FOR TEXT REASONING (2 MINUTE
READ) [14]

 LaDiR (Latent Diffusion Reasoner) is a reasoning framework that
unifies the expressiveness of continuous latent representations with
the iterative refinement of latent diffusion models, built on top of an
existing LLM. The design enables efficient parallel generation of
diverse reasoning trajectories, letting models plan and revise the
reasoning process holistically. LaDiR consistently improves accuracy,
diversity, and interpretability over existing autoregressive,
diffusion-based, and latent reasoning methods, suggesting a new
paradigm for text reasoning with latent diffusion.

 MICROSOFT WORLD-R1 FOR 3D-CONSISTENT VIDEO GENERATION (4 MINUTE READ)
[15]

 World-R1 is a reinforcement learning framework that improves 3D
consistency in video generation by leveraging feedback from 3D and
vision-language models without modifying the base architecture.

 RELIABLE DATA ANALYSIS AGENTS (16 MINUTE READ) [16]

 DataPRM is an environment-aware process reward model that detects
silent errors and better supervises data analysis agents, improving
downstream performance and generalization across benchmarks.

🎁

MISCELLANEOUS

 ELON MUSK TESTIFIES HE WAS A 'FOOL' TO FUND OPENAI (4 MINUTE READ)
[17]

 Elon Musk says he was a fool to back OpenAI when it was a nonprofit.
Musk gave the startup $38 million of essentially free funding. OpenAI
is now worth $800 billion. Musk has asked a court to unwind OpenAI's
recent conversion to a for-profit entity and is seeking damages of
more than $180 billion.

 DARWINIAN SPECIALIZATION IN AI (3 MINUTE READ) [18]

 The inference market is fragmenting because workloads differ. The
model ecosystem has split into latency tiers, multimodal models, and
edge models. Each model type has different serving requirements, which
fragments the infrastructure in turn. That fragmentation creates room
for several winners.

⚡

QUICK LINKS

 REVERSE ENGINEERING WITH AI UNEARTHS HIGH-SEVERITY GITHUB BUG (4
MINUTE READ) [21]

 GitHub disclosed a high-severity vulnerability, CVE-2026-3854,
affecting GitHub Enterprise Server and other products. The flaw allows
remote code execution through manipulated git push options.

 OPENAI CODEX SYSTEM PROMPT INCLUDES EXPLICIT DIRECTIVE TO “NEVER
TALK ABOUT GOBLINS” (3 MINUTE READ) [22]

 OpenAI appears to be fighting a new problem in its latest model where
the model focuses on goblins in completely unrelated conversations.

 BECOME A CURATOR FOR TLDR AI (3-5 HRS/WEEK) [23]

 TLDR is looking for an engineer/researcher at a major AI lab or
startup to help write for 1M+ subscribers. Our curators have been
invited to Google I/O and OpenAI DevDay, scouted for Tier 1 VCs, and
get early access to unreleased TLDR products. Learn more.

 AI AGENTS THAT BUILD THEMSELVES (4 MINUTE READ) [24]

 CrewAI built Iris, a Slack-native internal AI employee that writes
code, files PRs, reviews teammates' work, and modifies its own
codebase across CrewAI's engineering org.

 DEEPMIND PROEVAL FOR GENAI EVALUATION (GITHUB REPO) [25]

 ProEval is a framework that reduces generative AI evaluation costs
while identifying failure modes using surrogate models and transfer
learning across benchmarks.

Love TLDR? Tell your friends and get rewards!

 Share your referral link below with friends to get free TLDR swag!

 https://refer.tldr.tech/5a192f5c/2 [26]

		 Track your referrals here. [27]

Want to advertise in TLDR? 📰

 If your company is interested in reaching an audience of AI
professionals and decision makers, you may want to ADVERTISE WITH US
[28].

Want to work at TLDR? 💼

 APPLY HERE [29], CREATE YOUR OWN ROLE [30] or send a friend's resume
to jobs@tldr.tech and get $1k if we hire them! TLDR is one of INC.'S
BEST BOOTSTRAPPED BUSINESSES [31] of 2025.

 If you have any comments or feedback, just respond to this email!

Thanks for reading,
Andrew Tan [32], Ali Aminian [33], & Jacob Turner [34]

 Manage your subscriptions [35] to our other newsletters on tech,
startups, and programming. Or if TLDR AI isn't for you, please
unsubscribe [36].

Links:
------
[1] https://tldr.tech/ai
[2] https://advertise.tldr.tech/
[3] https://a.tldrnewsletter.com/web-version?ep=1&lc=97c90d72-3e48-11f1-ab0b-7fc89b18583c&p=ef346610-446a-11f1-8c31-f787867ecd52&pt=campaign&t=1777556662&s=f51854cd38b511fe50104012663b57007bd85244dd0089e6ab3d3f36d5f34f1e
[4] https://pages.awscloud.com/awsmp-gim-jqup-adhoc-aim-ent-ai-data-leader-book-1-ent.html?trk=6333d07f-199f-4830-9373-f90ddf7afb01&sc_channel=el
[5] https://finance.yahoo.com/markets/stocks/article/google-to-sell-tpu-chips-to-select-customers-in-latest-shot-at-nvidia-214900221.html
[6] https://mistral.ai/news/vibe-remote-agents-mistral-medium-3-5
[7] https://www.tomshardware.com/tech-industry/artificial-intelligence/openai-has-effectively-abandoned-first-party-stargate-data-centers-in-favor-of-more-flexible-deals-company-now-prefers-to-lease-compute-and-says-stargate-is-an-umbrella-term
[8] https://huggingface.co/blog/evaleval/eval-costs-bottleneck
[9] https://pytorch.org/blog/introducing-autosp/
[10] https://huggingface.co/blog/ibm-granite/granite-4-1
[11] https://pages.awscloud.com/awsmp-gim-jqup-adhoc-aim-ent-ai-data-leader-book-1-ent.html?trk=be9085af-6b74-4f1b-b462-d9aa1156a789&sc_channel=el
[12] https://pages.awscloud.com/awsmp-gim-jqup-adhoc-aim-ent-ai-data-leader-book-1-ent.html?trk=be9085af-6b74-4f1b-b462-d9aa1156a789&sc_channel=el
[13] https://taoofmac.com/space/blog/2026/04/29/2341
[14] https://machinelearning.apple.com/research/ladir
[15] https://microsoft.github.io/World-R1/
[16] https://arxiv.org/abs/2604.24198
[17] https://links.tldrnewsletter.com/tlASky
[18] https://tomtunguz.com/inference-market-segmentation/
[19] https://pages.awscloud.com/awsmp-gim-jqup-adhoc-aim-ent-ai-data-leader-book-1-ent.html?trk=417df18a-33da-4fbb-a647-7b4707cb57d5&sc_channel=el
[20] https://pages.awscloud.com/awsmp-gim-jqup-adhoc-aim-ent-ai-data-leader-book-1-ent.html?trk=417df18a-33da-4fbb-a647-7b4707cb57d5&sc_channel=el
[21] https://links.tldrnewsletter.com/qTJw3v
[22] https://arstechnica.com/ai/2026/04/openai-codex-system-prompt-includes-explicit-directive-to-never-talk-about-goblins/
[23] https://jobs.ashbyhq.com/tldr.tech/038c4419-5b48-4279-a75e-6f7a0afdb240
[24] https://links.tldrnewsletter.com/8r8rai
[25] https://github.com/google-deepmind/proeval
[26] https://refer.tldr.tech/5a192f5c/2
[27] https://hub.sparklp.co/sub_8f6b153ee058/2
[28] https://advertise.tldr.tech/
[29] https://jobs.ashbyhq.com/tldr.tech
[30] https://jobs.ashbyhq.com/tldr.tech/c227b917-a6a4-40ce-8950-d3e165357871
[31] https://www.linkedin.com/feed/update/urn:li:activity:7401699691039830016/
[32] https://twitter.com/andrewztan
[33] https://www.linkedin.com/in/aliiaminian/
[34] https://www.linkedin.com/in/jacob-turner-7521a8198/
[35] https://tldr.tech/ai/manage?email=breviat%40fastmail.com
[36] https://a.tldrnewsletter.com/unsubscribe?ep=1&l=eedf6b14-3de3-11ed-9a32-0241b9615763&lc=97c90d72-3e48-11f1-ab0b-7fc89b18583c&p=ef346610-446a-11f1-8c31-f787867ecd52&pt=campaign&pv=4&spa=1777554195&t=1777556662&s=c10e22a6aa1cd35961e4ee43ce059308e4d25ede795072a7035d067fecc3827e