Friday, May 15, 2026

Good Friday, NOLA. May 15th brings practical wins and a reality check on AI's actual impact: Anthropic partners with the Gates Foundation, Codex arrives in ChatGPT mobile, and Claude just landed a legal-specific toolkit. But the day's also forcing hard questions about what happens when AI tools promise more than they deliver — and when pricing power trumps developer goodwill.

New Tools & Releases

Codex is now in the ChatGPT mobile app

Discussion on HN. OpenAI's coding agent is no longer desktop-only: you can now kick off and review Codex coding tasks from ChatGPT on your phone. This closes a real friction point for on-the-go prototyping and debugging. Not groundbreaking, but genuinely useful for anyone building or testing code remotely.
Hacker News

Claude for Legal: A toolkit built for lawyers

Anthropic released open-source templates and guides for using Claude in legal workflows — document analysis, contract review, legal research. It's not a full product, but it's a signal that Claude is being actively tuned for vertical markets. Discussion here.
Hacker News

How Claude Code works in large codebases

Anthropic published a practical guide on using Claude Code effectively in real-world projects — handling large files, working with existing code, and managing context. If you're frustrated with Claude Code's behavior, this is required reading. HN thread.
Hacker News

Industry Moves & Partnerships

Anthropic partners with the Gates Foundation on AI for global health

A $200M partnership focused on using AI to improve health outcomes in developing countries. It's significant because it signals that Anthropic is positioning Claude not just as a productivity tool but as infrastructure for high-stakes problem-solving. HN discussion.
Hacker News

Apple-OpenAI relationship frays, setting up possible legal fight

The exclusive integration deal that brought ChatGPT to iOS is apparently on shaky ground. This matters less for the drama and more for what it reveals: the friction that builds when one company controls the distribution channel and another controls the AI model. Discussion.
Hacker News

Reality Checks: When AI Disappoints

Ontario auditors find doctors' AI note-takers routinely blow basic facts

Medical AI note-taking tools that are supposed to save doctors time are generating inaccurate notes at scale: wrong dosages, missed symptoms, fabricated quotes. This is the high-stakes version of the broader problem: AI tools can sound confident and look polished while being dangerously wrong. HN thread.
Hacker News

AI is making me dumb

A thoughtful piece on what happens when you outsource thinking to an AI that's fast but not always accurate. The real risk isn't replacement — it's erosion of your ability to catch AI mistakes. If you can't do the work yourself, you can't judge whether the AI did it right. Discussion.
Hacker News

Interesting Reads & Analysis

Bitcoin trader recovers $400k wallet with Claude's help

A crypto trader lost their wallet password 11 years ago. With Claude's help, they ran 3.5 trillion password attempts and recovered the backup. It's a fun proof of concept for how AI can automate brute-force, repetitive tasks at scale, and a reminder that sometimes the tedious approach actually works. HN discussion.
Hacker News
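
The core idea is simple enough to sketch. A minimal, purely illustrative version, assuming a known SHA-256 hash of the lost passphrase (real wallet formats use hardened KDFs like scrypt or PBKDF2, and the actual recovery almost certainly involved a smarter candidate generator than exhaustive enumeration):

```python
# Hypothetical brute-force sketch: recover a short passphrase whose
# SHA-256 hash is known by enumerating every candidate string.
# Real wallets use slow KDFs (scrypt, PBKDF2), not a bare hash.
import hashlib
import itertools
import string


def brute_force(target_hash: str, alphabet: str, max_len: int):
    """Try all strings over `alphabet` up to `max_len` chars long.

    Returns the matching passphrase, or None if the space is exhausted.
    """
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            candidate = "".join(combo)
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None


if __name__ == "__main__":
    # Demo: hash a known secret, then "forget" it and recover it.
    target = hashlib.sha256(b"ab1").hexdigest()
    print(brute_force(target, string.ascii_lowercase + string.digits, 3))  # prints ab1
```

The search space grows exponentially with length (a 36-character alphabet at length 3 is already ~47,000 candidates), which is why real recoveries lean on memory of likely patterns to prune trillions of candidates down to something tractable.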

Access to frontier AI will soon be limited by economic and security constraints

A sharp analysis: as AI gets more powerful and expensive to train, access consolidates to well-funded players. This has real implications for builders — the era of cheap, open compute for experimentation may be ending. Discussion.
Hacker News

How Chinese short dramas became AI content machines

A look at a thriving creative economy where AI-generated short-form video is already mainstream and profitable. Different regulatory environment, different incentives — but a glimpse at what AI-native content production actually looks like at scale.
MIT Technology Review

Today’s Sources