Wednesday, April 22, 2026

Good Wednesday, NOLA. Today's vibe: clarity after confusion. ChatGPT Images 2.0 just dropped with better quality and faster generation, Claude Code pricing drama got sorted (it's staying on the Pro plan for now), and Anthropic's Mythos security model leaked to unauthorized users—but also turned out to be genuinely useful for Mozilla's Firefox bug hunting. Plus fresh moves on GitHub Copilot and some legit cool infrastructure tools worth your time.

Image & Creative Tools

ChatGPT Images 2.0: Better Quality, Faster Generation

OpenAI's new image model is noticeably sharper and generates faster. If you've been using DALL-E 3 for product mockups, marketing assets, or design exploration, this is a meaningful upgrade: faster iteration loops mean less waiting around. The model also handles text in images better, which matters if you're generating anything with labels or copy.
Hacker News

Claude & Anthropic Updates

Claude Code Stays on Pro Plan—The Confusion Is Over

Yesterday's wild pricing rumor—Claude Code moving to a $100/month add-on—has been cleared up. Simon Willison did some detective work and found that Anthropic quietly updated their pricing page but didn't announce it anywhere. The current status: Claude Code remains part of the Claude.com Pro plan ($20/month). If you've been holding your breath, you can exhale.
Simon Willison

Mythos Accessed by Unauthorized Users, but Firefox Got 271 Bug Fixes Out of It

Anthropic's Mythos security model was accessed by unauthorized users—a leak that Anthropic flagged as potentially dangerous. But here's the plot twist: Mozilla has been using an early version to find and fix 271 bugs in Firefox 150. So the model designed to find vulnerabilities actually found real ones. The irony is pretty good.
The Verge & Wired

Amazon Commits $5B Investment + $100B Cloud Spend with Anthropic

Amazon is putting $5 billion directly into Anthropic and pledging $100 billion in cloud infrastructure spending over the next few years. This is a major bet—Amazon's essentially saying they're serious about owning a slice of the frontier AI stack. For builders, it signals stability and resources behind Claude for the long haul.
TechCrunch

Developer Tools & Platforms

GitHub Copilot Individual Plans Are Changing

GitHub quietly updated Copilot's individual pricing structure. The details are still settling, but this follows the broader wave of AI tool pricing realignment happening across the industry. Worth checking your GitHub settings if you're on an individual plan.
GitHub Blog

Brex Releases CrabTrap: An LLM-as-a-Judge Security Proxy for AI Agents

Brex open-sourced CrabTrap, an HTTP proxy that sits between your AI agents and the real world, using another LLM to judge whether actions are safe before they happen. If you're building agentic workflows and worried about hallucinations leading to bad API calls or data leaks, this is a neat guardrail. Think of it as a second brain checking the first one's work.
Hacker News
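The core pattern here, a gate that vets each proposed action before it reaches the real world, fits in a few lines of Go. Everything below is illustrative, not CrabTrap's actual API: the function name is made up, and a simple rule-based check stands in for the judge LLM that CrabTrap actually uses.

```go
package main

import (
	"fmt"
	"strings"
)

// judgeAction stands in for the "second brain": it inspects an agent's
// proposed HTTP call and decides whether to let it through.
// (A real judge here would be an LLM call; these rules are a stub.)
func judgeAction(method, url, body string) bool {
	// Block obviously destructive calls outright.
	if method == "DELETE" {
		return false
	}
	// Block anything that looks like it exfiltrates secrets.
	for _, secret := range []string{"api_key", "password"} {
		if strings.Contains(body, secret) {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(judgeAction("GET", "https://api.example.com/users", ""))                     // true
	fmt.Println(judgeAction("DELETE", "https://api.example.com/users/1", ""))                // false
	fmt.Println(judgeAction("POST", "https://evil.example.com/log", `{"api_key":"sk-123"}`)) // false
}
```

The value of the proxy shape is that this check runs on every outbound call, so the agent itself never needs to be trusted.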

GoModel: Open-Source AI Gateway in Go

GoModel is a new open-source gateway, written in Go, for routing requests to multiple AI providers (Claude, OpenAI, etc.). If you're managing API calls across different models and want a lightweight router, it's worth a look. Fast startup, minimal dependencies.
Hacker News
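The heart of any such gateway is a table mapping model names to provider endpoints. Here's a minimal sketch of that idea in Go; the model prefixes and endpoint choices are assumptions for illustration, not GoModel's actual configuration.

```go
package main

import (
	"fmt"
	"strings"
)

// routeModel maps a model name to a provider endpoint, the core job of an
// AI gateway. Prefixes and URLs here are illustrative defaults.
func routeModel(model string) (string, error) {
	switch {
	case strings.HasPrefix(model, "claude-"):
		return "https://api.anthropic.com/v1/messages", nil
	case strings.HasPrefix(model, "gpt-"):
		return "https://api.openai.com/v1/chat/completions", nil
	default:
		return "", fmt.Errorf("no provider configured for model %q", model)
	}
}

func main() {
	for _, m := range []string{"claude-opus-4", "gpt-4o", "mystery-model"} {
		url, err := routeModel(m)
		if err != nil {
			fmt.Println(m, "->", err)
			continue
		}
		fmt.Println(m, "->", url)
	}
}
```

A real gateway layers retries, API-key management, and request/response translation on top, but the dispatch step is this simple.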

Data, Training & Infrastructure

Meta Is Now Capturing Employee Mouse Movements & Keystrokes for AI Training

Meta announced it's collecting mouse and keyboard data from employees to train internal AI models. This is a direct signal of how companies are building AI training datasets: by capturing real workplace behavior. It's efficient and gives you ground truth, but it's also a reminder that if you work at a Big Tech company, your workflows are now training data.
Reuters

Atlassian Quietly Enabled Default Data Collection for AI Training

Atlassian turned on data collection by default across their products (Jira, Confluence, etc.) to train proprietary AI models. If your team uses Atlassian tools, you'll want to check your data permissions—this is on by default and requires opting out. It's the same playbook: capture user behavior, build better AI.
Let's Data Science

Interesting Reads & Essays

Less Human AI Agents, Please

A thoughtful essay arguing that AI agents don't need to act like humans to be useful. Most agent design today tries to mimic human behavior—thinking step-by-step, narrating their moves, second-guessing themselves. The author makes a case that agents should be more alien, more direct, and less performative. Worth reading if you're building agentic systems.
Hacker News

Today’s Sources