OpenAI workspace agents 🤝, Google Workspace Intelligence 🌐, Qwen3.6-27B 🤖

TLDR AI dan@tldrnewsletter.com
Received
Thursday, April 23, 2026 at 13:43
Source
TLDR AI
Message-ID
0100019dba950811-2c5162a0-4f26-40aa-a77a-ac241e58f002-000000@email.amazonses.com

Sign Up [1] | Advertise [2] | View Online [3]

TLDR

TOGETHER WITH [Google Cloud] [4]

TLDR AI 2026-04-23

 GOOGLE CLOUD NEXT IS UNDERWAY! (SPONSOR) [4]

If you're building for the agentic era, you need AI-optimized
infrastructure to deliver on new requirements.

We announced a significant expansion of our AI infrastructure portfolio
[5], including the eighth generation of our Tensor Processing Units
(TPUs) [6], which for the first time includes two distinct chips and
specialized systems engineered specifically for the agentic era.

Ready to learn how to leverage TPUs for your own training and
inference workloads? Start here with this course [7].

🚀

HEADLINES & LAUNCHES

 INTRODUCING WORKSPACE AGENTS IN CHATGPT (9 MINUTE READ) [8]

 OpenAI introduced workspace agents in ChatGPT, allowing teams to
create shared AI agents for complex tasks and workflows. These agents,
powered by Codex, perform tasks like generating reports, writing code,
and managing communication, while integrating with various tools like
Slack. Workspace agents are currently available in research preview
for select ChatGPT plans, aiming to streamline collaboration and
improve productivity.

 GOOGLE DEBUTS WORKSPACE INTELLIGENCE FOR GEMINI WORKSPACE (4 MINUTE
READ) [9]

 Google launched Workspace Intelligence, enhancing Google Workspace
with a semantic layer to integrate emails, chats, files, and projects
for Gemini-powered agents. This update includes major product
enhancements like natural-language spreadsheet building in Sheets and
AI-driven features in Docs, Slides, Gmail, and Drive. Workspace
Intelligence aims to make Workspace a centralized control layer for
business operations, emphasizing security, context integration, and
cross-application functionality.

 EX-OPENAI RESEARCHER JERRY TWOREK LAUNCHES CORE AUTOMATION TO BUILD
THE MOST AUTOMATED AI LAB IN THE WORLD (1 MINUTE READ) [10]

 Core Automation is an AI lab started by Jerry Tworek, a former OpenAI
researcher, that aims to build the most automated AI lab in the world.
It will start by automating its own research before developing new
algorithms that go beyond pre-training and reinforcement learning. The
lab will also create architectures designed to scale better than
transformers. The team contains experts in frontier models,
optimization, and systems engineering.

🧠

DEEP DIVES & ANALYSIS

 ADVANCING SEARCH-AUGMENTED LANGUAGE MODELS (19 MINUTE READ) [11]

Perplexity's two-stage pipeline for search-augmented language models
uses initial Supervised Fine-Tuning (SFT) followed by Reinforcement
Learning (RL) to optimize factual accuracy, user preference, and
tool-use efficiency. This approach, starting with Qwen3 models,
separates compliance from search improvement to achieve accuracy
without compromising guardrails. The models showed improved accuracy
on benchmarks like FRAMES and FACTS OPEN, with lower cost per query
and more efficient tool usage than existing models like GPT-5.4.
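As a rough illustration only (the weights and signal names below are hypothetical, not Perplexity's), the RL stage can be thought of as optimizing a composite reward that trades factual accuracy and user preference against tool-use cost:

```python
def composite_reward(factual, preference, tool_calls,
                     w_factual=0.6, w_pref=0.4, tool_penalty=0.05):
    """Toy composite RL reward: reward factual accuracy and user
    preference, and penalize each tool call so the policy learns to
    search efficiently. Weights here are illustrative placeholders."""
    return w_factual * factual + w_pref * preference - tool_penalty * tool_calls
```

Under this kind of objective, a policy that reaches the same accuracy with fewer searches scores higher, which is consistent with the reported reduction in cost per query.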

 BENCHMARKING INFERENCE ENGINES ON AGENTIC WORKLOADS (9 MINUTE READ)
[12]

 Agentic workloads are reshaping inference engine benchmarks,
demanding multi-turn, tool-using scenarios that strain KV cache
management and scheduling due to longer traces and varied token
distributions. Applied Compute introduced three workload profiles to
aid in optimizing engine and accelerator performance. They released an
open-source benchmarking tool to replay these scenarios, highlighting
the need for solutions such as KV cache offloading and workload-aware
routing to improve throughput and efficiency.
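A toy sketch of what such a trace replay measures (the trace format and eviction policy here are invented for illustration, not Applied Compute's tool): as a multi-turn, tool-using trace accumulates context, the KV cache overflows and the engine is forced to evict or offload.

```python
def replay_trace(turns, kv_capacity):
    """Replay a multi-turn agent trace given as (prompt_tokens,
    output_tokens) pairs, tracking KV-cache occupancy in tokens and
    counting how often the cache must be evicted/offloaded."""
    occupancy, evictions = 0, 0
    for prompt_tokens, output_tokens in turns:
        occupancy += prompt_tokens + output_tokens  # context grows each turn
        if occupancy > kv_capacity:
            evictions += 1
            occupancy = kv_capacity  # offload/evict back down to capacity
    return evictions

# A long tool-using trace overflows a small cache on its later turns.
trace = [(512, 128), (1024, 256), (2048, 512), (4096, 1024)]
```

Even this crude model shows why longer agentic traces stress scheduling: the same engine that never evicts on short chat workloads thrashes once per-turn context keeps compounding.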

 A GOOD AGENTS.MD IS A MODEL UPGRADE. A BAD ONE IS WORSE THAN NO DOCS
AT ALL (11 MINUTE READ) [13]

Most of what people put in AGENTS.md either doesn't help or actively
hurts. The patterns that work are specific and learnable. This post
looks at which patterns work, which fail, and how to tell which is
which for your codebase. Different patterns move different metrics, so
pick the patterns that target the problem you actually have.
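For instance (a hypothetical entry of our own, not taken from the article), the patterns that tend to work are specific and checkable, while vague exhortations spend context-window tokens without changing behavior:

```markdown
<!-- Specific and checkable: the agent can act on this -->
Run `npm test -- --runInBand` before committing; CI uses Node 20.

<!-- Vague: costs tokens, changes nothing -->
Write clean, high-quality, well-tested code.
```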

🧑‍💻

ENGINEERING & RESEARCH

 DATA HOARDING IS GOOD, ACTUALLY (SPONSOR) [14]

Valuable data is often fragmented across SaaS tools, file shares, and
other silos that sneak up on you when you're trying to ship fast. In
this webinar, Backblaze's Director of Applied AI explains how you can
build a scalable storage foundation on object storage, using Backblaze
B2 and B2 Overdrive [15], for all the phases of the AI data pipeline
[16]. See how you can store, label, and use all of your data without
blowing up your budget. Watch on-demand [17].

 QWEN3.6-27B: FLAGSHIP-LEVEL CODING IN A 27B DENSE MODEL (2 MINUTE
READ) [18]

 Qwen3.6-27B delivers flagship-level agentic coding performance. The
Qwen team claims that it surpasses the previous-generation flagship
Qwen3.5-397B-A17B across all major coding benchmarks. The model is
55.6 GB on Hugging Face, and there are even smaller quantized versions
available. Tests show that the model delivers outstanding results,
even when quantized.
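The headline size is consistent with 16-bit weights. As a back-of-the-envelope check (assuming roughly 27.8B parameters stored at bf16 to match the reported figure; the exact parameter count and quantization formats are not given here), the arithmetic also shows why 4-bit quantized versions come in so much smaller:

```python
def model_size_gb(params_billion, bits_per_weight):
    # Approximate weight storage only: params * bits / 8 bytes,
    # reported in units of 10^9 bytes. Ignores metadata overhead.
    return params_billion * bits_per_weight / 8

full = model_size_gb(27.8, 16)   # ~55.6, matching the Hugging Face figure
quant = model_size_gb(27.8, 4)   # ~13.9 at 4-bit quantization
```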

 INTRODUCING GEMINI ENTERPRISE AGENT PLATFORM, POWERING THE NEXT WAVE
OF AGENTS (17 MINUTE READ) [19]

The Gemini Enterprise Agent Platform is a comprehensive platform for
building, scaling, governing, and optimizing agents. It brings
together model selection, model building, and agent building
capabilities with new features for agent integration, DevOps,
orchestration, and security. Agent Platform is a single destination
for technical teams to build agents that can transform products,
services, and operations. Agents can be delivered to employees
through the Gemini Enterprise app.

 BUILDING AGENTS THAT REACH PRODUCTION SYSTEMS WITH MCP (14 MINUTE
READ) [20]

 Agents can connect to external systems through direct API calls,
CLIs, and MCP. This post looks at where each fits and the patterns for
building those integrations effectively. MCP becomes the critical
compounding layer as production agents move to the cloud. Every
integration built on MCP strengthens the ecosystem.
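Concretely, MCP tool invocations ride on JSON-RPC 2.0. A minimal sketch of the request envelope an agent sends for a `tools/call` (the envelope shape follows the MCP specification; the tool name and arguments are invented, and transport and response handling are omitted):

```python
import json

def mcp_tool_call_request(tool_name, arguments, request_id=1):
    """Build the JSON-RPC 2.0 envelope for an MCP tools/call request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

req = mcp_tool_call_request("query_orders", {"customer_id": "c_123"})
```

Because every integration speaks this same envelope, a tool exposed once over MCP is reusable by any compliant agent, which is the compounding effect the post describes.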

🎁

MISCELLANEOUS

 MICROSOFT MOVING ALL GITHUB COPILOT SUBSCRIBERS TO TOKEN-BASED
BILLING IN JUNE (2 MINUTE READ) [21]

Microsoft plans to roll out token-based billing for all GitHub
Copilot customers starting in June. Copilot Business customers will
pay $19 per user per month and receive $30 of pooled AI credits;
Copilot Enterprise customers will pay $39 per user per month and
receive $70 of pooled AI credits. It is unclear what will happen to
individual subscribers.

 WHEN LLMS GET PERSONAL (20 MINUTE READ) [22]

 Personalization in LLM responses introduces variation but often
retains a stable semantic core across answers. This shared foundation
results from common model priors, overlapping retrievals, and product
constraints, with differences emerging in examples and emphasis.
Understanding this allows businesses to optimize their presence in
AI-generated content by focusing on being part of the model's core
knowledge.

 YOU'RE THE BREAD IN THE AI SANDWICH (4 MINUTE READ) [23]

 AI is enhancing engineering workflows by handling execution, leaving
humans to plan, review, and ensure quality output. Humans excel at
diagnosing problems from multiple angles, a challenge for AI.
Organizational AI strategies in the future will likely include
personalized assistants for employees or a singular super-agent with
departmental plugins.

 HOW TO REALLY STOP YOUR AGENTS FROM MAKING THE SAME MISTAKES (7
MINUTE READ) [24]

 Relying on prompts to correct recurring AI agent mistakes is an
unreliable, "vibes-based" approach that decays as soon as
conversations become complex. To solve this, Y Combinator CEO Garry
Tan advocates for "skillification." Instead of letting an agent waste
compute attempting to solve deterministic tasks (like historical
calendar lookups) in its latent space, this framework forces the AI to
execute precise local scripts.
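The calendar-lookup case reduces to a few lines (a minimal sketch; the function name is ours, not from the post): rather than letting the model guess a weekday in latent space, the agent runs a deterministic local script.

```python
import datetime

WEEKDAYS = ["Monday", "Tuesday", "Wednesday", "Thursday",
            "Friday", "Saturday", "Sunday"]

def weekday_of(iso_date):
    """Deterministic 'skill': resolve a historical date to its weekday
    with a local computation instead of relying on model recall."""
    return WEEKDAYS[datetime.date.fromisoformat(iso_date).weekday()]
```

The script returns the same correct answer every time at negligible cost, which is exactly the property the "skillification" framework is after.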

⚡

QUICK LINKS

 TLDR IS HIRING A CURATOR FOR TLDR AI (3-5 HRS/WEEK, FULLY REMOTE)
[25]

 We're hiring an engineer/researcher at a major AI lab or startup to
help write for 1M+ subscribers. Curators have been invited to Google
I/O and OpenAI DevDay, scouted for Tier 1 VCs, and get early access to
unreleased TLDR products. Learn more [26].

 NVIDIA BACKS AI COMPANY VAST DATA AT $30 BILLION VALUATION (2 MINUTE
READ) [27]

 Nvidia backed Vast Data's $1 billion funding round, valuing the
AI-focused infrastructure company at $30 billion.

 ANKER MADE ITS OWN AI CHIP (3 MINUTE READ) [28]

 Anker's custom Thus AI chip is designed for audio devices with local
AI, computing directly where the model lives to enhance efficiency.

 OPENAI IS QUIETLY TESTING GPT IMAGE 2, AND THE AI IMAGE MARKET WILL
NEVER BE THE SAME (8 MINUTE READ) [29]

 OpenAI's unannounced testing of GPT Image 2 on LM Arena showcases its
advancements in AI image generation.

Love TLDR? Tell your friends and get rewards!

 Share your referral link below with friends to get free TLDR swag!

 https://refer.tldr.tech/5a192f5c/2 [30]

Track your referrals here. [31]

Want to advertise in TLDR? 📰

 If your company is interested in reaching an audience of AI
professionals and decision makers, you may want to ADVERTISE WITH US
[32].

Want to work at TLDR? 💼

 APPLY HERE [33], CREATE YOUR OWN ROLE [34] or send a friend's resume
to jobs@tldr.tech and get $1k if we hire them! TLDR is one of INC.'S
BEST BOOTSTRAPPED BUSINESSES [35] of 2025.

 If you have any comments or feedback, just respond to this email!

Thanks for reading,
Andrew Tan [36], Ali Aminian [37], & Jacob Turner [38]

 Manage your subscriptions [39] to our other newsletters on tech,
startups, and programming. Or if TLDR AI isn't for you, please
unsubscribe [40].

Links:
------
[1] https://tldr.tech/ai
[2] https://advertise.tldr.tech/
[3] https://a.tldrnewsletter.com/web-version?ep=1&lc=97c90d72-3e48-11f1-ab0b-7fc89b18583c&p=8ecfd0da-3ef6-11f1-9165-3f1c25cf5135&pt=campaign&t=1776951822&s=d7ac870842baba2b6dcfddc3091110f87cf9127a07f46f2ee18dd34710d618da
[4] https://www.googlecloudevents.com/next-vegas/?gad_source=1&gad_campaignid=23326881109&gbraid=0AAAAApdQcweXcxpPT7ConPj2JVasfZ-vL
[5] https://cloud.google.com/blog/products/compute/ai-infrastructure-at-next26?e=13802955
[6] https://blog.google/innovation-and-ai/infrastructure-and-cloud/google-cloud/eighth-generation-tpu-agentic-era
[7] https://www.skills.google/paths/2806/course_templates/1405
[8] https://links.tldrnewsletter.com/9jLT5n
[9] https://www.testingcatalog.com/google-debuts-workspace-intelligence-for-gemini-workspace/
[10] https://the-decoder.com/ex-openai-researcher-jerry-tworek-launches-core-automation-to-build-the-most-automated-ai-lab-in-the-world/
[11] https://research.perplexity.ai/articles/advancing-search-augmented-language-models
[12] https://www.appliedcompute.com/research/inference-benchmark
[13] https://www.augmentcode.com/blog/how-to-write-good-agents-dot-md-files
[14] https://www.brighttalk.com/webcast/14807/650934
[15] https://www.backblaze.com/
[16] https://www.backblaze.com/cloud-storage/industries/ai-ml
[17] https://www.brighttalk.com/webcast/14807/650934
[18] https://simonwillison.net/2026/Apr/22/qwen36-27b/
[19] https://cloud.google.com/blog/products/ai-machine-learning/introducing-gemini-enterprise-agent-platform/
[20] https://claude.com/blog/building-agents-that-reach-production-systems-with-mcp
[21] https://www.wheresyoured.at/exclusive-microsoft-moving-all-github-copilot-subscribers-to-token-based-billing-in-june/
[22] https://joshbudman.substack.com/p/when-llms-get-personal
[23] https://every.to/context-window/you-re-the-bread-in-the-ai-sandwich
[24] https://links.tldrnewsletter.com/8C6qeH
[25] https://jobs.ashbyhq.com/tldr.tech/038c4419-5b48-4279-a75e-6f7a0afdb240
[26] https://jobs.ashbyhq.com/tldr.tech/038c4419-5b48-4279-a75e-6f7a0afdb240
[27] https://www.cnbc.com/2026/04/22/nvidia-backs-ai-company-vast-data.html
[28] https://www.theverge.com/tech/916463/anker-thus-chip-announcement
[29] https://links.tldrnewsletter.com/G2XWQ4
[30] https://refer.tldr.tech/5a192f5c/2
[31] https://hub.sparklp.co/sub_8f6b153ee058/2
Prompt utilisé(snapshot au moment de l'extraction — édition via System prompts)
Tu es l'extracteur de contenu de Breviat. On te fournit le contenu Markdown nettoyé d'une newsletter.

Ta mission : produire une version PROPRE du contenu en supprimant tout ce qui n'est pas de l'information utile au lecteur. Tu es un FILTRE, pas un résumeur.

À RETIRER :
- Publicités, encarts sponsors, mentions "sponsorisé par X", "ad", "présenté par"
- Intros vides : formules de bienvenue, météo de l'humeur de l'auteur, anecdotes personnelles non liées au contenu
- Appels à l'action marketing : s'abonner à la newsletter, parrainer un ami, "follow us on Twitter", "join our Discord"
- Signatures, mentions légales, adresses postales, "view in browser", "unsubscribe"
- Boutons / CTAs / "cliquez ici" / "lire la suite" sans contenu derrière
- Promotions d'autres produits / événements / formations payantes de l'auteur ou de tiers
- Encarts récurrents type "Read of the day" ou "Quote of the day" sans valeur informationnelle propre

À CONSERVER (intégralement, sans résumer ni reformuler) :
- Toutes les annonces, news, analyses, commentaires factuels
- Les chiffres, dates, noms d'entreprises, citations
- Les explications techniques
- Les liens vers des sources réelles (annonces officielles, papers, articles cités)
- La structure (titres, sous-titres, listes)

RÈGLES :
- Ne reformule pas. Garde la formulation d'origine.
- Ne résume pas, ne condense pas. Si une section fait 200 mots et est utile, garde 200 mots.
- N'ajoute aucun contenu (pas de titres ni de transitions de ton cru).
- Ne fabrique aucune URL. Garde celles d'origine, ou retire-les.
- Si la newsletter entière est de la pub / promo / contenu inutile, sors un Markdown vide (rien d'autre).

Sortie : UNIQUEMENT le Markdown nettoyé, sans préambule ni commentaire sur ton travail.

Re-extraire cet email

Choisis une version du prompt d'extraction. La nouvelle extraction sera créée à côté des précédentes (rien n'est écrasé).

Footer détecté et extrait (R-08)
[32] https://advertise.tldr.tech/
[33] https://jobs.ashbyhq.com/tldr.tech
[34] https://jobs.ashbyhq.com/tldr.tech/c227b917-a6a4-40ce-8950-d3e165357871
[35] https://www.linkedin.com/feed/update/urn:li:activity:7401699691039830016/
[36] https://twitter.com/andrewztan
[37] https://www.linkedin.com/in/aliiaminian/
[38] https://www.linkedin.com/in/jacob-turner-7521a8198/
[39] https://tldr.tech/ai/manage?email=breviat%40fastmail.com
[40] https://a.tldrnewsletter.com/unsubscribe?ep=1&l=eedf6b14-3de3-11ed-9a32-0241b9615763&lc=97c90d72-3e48-11f1-ab0b-7fc89b18583c&p=8ecfd0da-3ef6-11f1-9165-3f1c25cf5135&pt=campaign&pv=4&spa=1776949312&t=1776951822&s=2b2a0be4f3c562a8330ba682abdf9b8593a3a735c6d84ea477cce4193cc75f6d