Thursday, April 23, 2026

Good Thursday, NOLA. Today's vibe: agents are becoming real. OpenAI just shipped Workspace Agents for ChatGPT—AI that actually does work inside your business tools. Meanwhile, Google dropped 8th-gen TPUs built for the agentic era, and the AI design patterns everyone's debating are getting a serious critique. It's infrastructure week meets product week.

Agents Are Here (For Real This Time)

OpenAI Launches Workspace Agents in ChatGPT

Workspace Agents let ChatGPT take real actions inside your apps—Gmail, Slack, Salesforce, etc. Not just talking about sending an email; actually logging in and sending it. This is the jump from "AI assistant" to "AI coworker." Early access for enterprise. Discussion on HN.
OpenAI

Google's 8th-Gen TPUs Built for the Agentic Era

Two new chips designed specifically for agents—long-running AI tasks that need fast reasoning, tool use, and multi-step planning. Google's framing is clear: the next wave isn't just inference speed, it's enabling AI to actually do work. Popular on HN.
Google AI

Design Patterns & The Quality Question

Scoring Show HN Submissions for AI Design Patterns

Adrian Krebs analyzed what makes AI-generated UX/design actually good vs. what's just lazy "slop." The insight: good AI design feels intentional, not defaulty. Bad AI design copies itself. If you're building with AI, this cuts through the hype about what people actually want. HN thread.
Hacker News

Things People Built

Broccoli: One-Shot Coding Agent on the Cloud

Open-source project: deploy an AI coding agent that can scaffold full projects in a single run. Useful if you want to spin up infrastructure or boilerplate without touching the CLI. Early-stage but worth exploring. Show HN thread.
Hacker News

Anker Made Its Own Chip for AI-Powered Devices

The gadget company built custom silicon (called THUS) to run AI on its power banks, speakers, and cameras without cloud calls. One concrete example of companies moving from "AI as a service" to "AI as a feature embedded in hardware." HN discussion.
The Verge

Security & Data

OpenAI's Response to the Axios Developer Tool Compromise

An Axios developer tool (unrelated to OpenAI) was compromised, with malicious code attempting to steal ChatGPT API keys. OpenAI's response: transparent, practical guidance on key rotation and monitoring. Good case study in how to handle supply-chain incidents involving AI tools. HN thread.
OpenAI

OpenAI Launches Privacy Filter for Personally Identifiable Information

New model specifically trained to mask PII (emails, phone numbers, SSNs, etc.) in text before you send it to APIs. Useful for teams dealing with sensitive customer or internal data. Available in the API.
OpenAI
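To make the pattern concrete: here's a minimal, regex-based sketch of masking PII before text leaves your infrastructure. This is not OpenAI's filter (their version is a trained model), just an illustration of the idea with a few common PII shapes; the patterns and placeholder labels are assumptions for the example.

```python
import re

# Hypothetical regex-based PII masker -- a stand-in for a trained model.
# Each match is replaced with a typed placeholder like [EMAIL] or [SSN].
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace each PII match with its label before the text hits an API."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_pii("Reach Jane at jane.doe@example.com or 555-867-5309."))
# → Reach Jane at [EMAIL] or [PHONE].
```

A trained model catches what regexes miss (names, addresses, context-dependent identifiers), which is presumably the point of shipping this as a model rather than a filter list.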

Models & Open Source

Qwen 3.6-27B Released on Hugging Face

Alibaba's latest open-weight model—27B parameters, multimodal (text + image), runs on consumer hardware. If you're looking for a capable alternative to proprietary models for local or edge deployment, this is worth a benchmark.
Hugging Face

Infrastructure & DevOps Moves

Brex Open-Sources CrabTrap: LLM-as-a-Judge Security Proxy for Agents

Brex released a tool that sits between your AI agent and the outside world, using another LLM to decide whether each action is safe before it executes. Pattern: "Use a smaller, cheaper model as a bouncer for your expensive agent." Related HN discussion.
Brex
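The "bouncer" pattern is simple enough to sketch. This is not CrabTrap's actual API—all names below are hypothetical, and the judge here is a keyword stub standing in for what would be a cheap small-model call in practice:

```python
# Sketch of the LLM-as-a-judge proxy pattern (not CrabTrap's real interface).
# A cheap judge screens each proposed action before the agent executes it.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    tool: str       # e.g. "shell", "http", "email"
    payload: str    # the command, URL, or message body

def stub_judge(action: Action) -> bool:
    """Placeholder for a small-model call returning allow/deny."""
    blocked = ("rm -rf", "DROP TABLE", "curl http://")
    return not any(b in action.payload for b in blocked)

def guarded_execute(action: Action,
                    judge: Callable[[Action], bool],
                    execute: Callable[[Action], str]) -> str:
    """Run the action only if the judge approves; otherwise refuse."""
    if not judge(action):
        return f"BLOCKED: judge rejected {action.tool} action"
    return execute(action)

# Demo with a no-op executor:
print(guarded_execute(Action("shell", "rm -rf /"), stub_judge,
                      execute=lambda a: f"ran {a.payload}"))
# → BLOCKED: judge rejected shell action
```

The economics are the point: the judge model only has to answer "is this one action safe?", so it can be an order of magnitude smaller and cheaper than the agent it's guarding.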

The Mythos / Anthropic Situation (Update)

Ars Technica Shares Its Newsroom AI Policy

In light of all the Anthropic/Mythos confusion from this week, Ars laid out their newsroom guidelines: where AI is allowed, where it's not, what the vibe actually is. A useful template if your organization is figuring out the same thing.
Ars Technica

Today’s Sources