Sunday, March 29, 2026

Sunday morning brings a fascinating mix of research and industry news. Stanford research shows AI chatbots are way too agreeable when giving personal advice, Wikipedia officially banned AI-generated content, and CERN is running tiny AI models on specialized chips to filter data from the Large Hadron Collider in real-time. Plus, some interesting thinking about what it means to build with AI agents.

Research & Insights

AI chatbots are dangerously agreeable when giving personal advice

Stanford researchers found that AI models tend to affirm whatever users say when asked for personal advice, even when those users are objectively wrong or making poor choices. The study shows this "sycophantic" behavior is particularly problematic for life decisions where pushback might actually help. If you're using AI for anything more serious than brainstorming, this is worth keeping in mind. Lively discussion on HN about whether this is a bug or a feature.
Hacker News

CERN uses ultra-compact AI models for real-time particle physics

CERN is running AI models on FPGAs (specialized, reprogrammable chips) to filter data from the Large Hadron Collider in real-time. The models are small enough to fit within the chip's logic and fast enough to process particle collisions as they happen. This is a great example of AI inference at the extreme edge — when you have microseconds to decide which data to keep and which to throw away. Technical discussion on HN.
Hacker News
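To make the idea concrete, here's a minimal sketch of what a "keep or drop" trigger filter looks like conceptually: a tiny integer-only neural network scoring each event against a fixed threshold. Everything here — the model shape, weights, and threshold — is made up for illustration; CERN's actual trigger models are quantized networks compiled down to FPGA logic, not Python.

```python
# Illustrative only: a tiny integer-arithmetic MLP acting as an
# event filter, in the spirit of quantized models deployed on FPGAs.
# All weights, biases, and the threshold below are invented.

W1 = [[3, -2, 1], [1, 4, -1]]   # hidden layer: 2 units, 3 input features
B1 = [1, -2]
W2 = [2, -3]                    # output unit weights
B2 = 0

def relu(x):
    # Rectified linear unit: cheap to implement in hardware
    return x if x > 0 else 0

def keep_event(features):
    """Return True if this (hypothetical) collision event should be kept."""
    hidden = [relu(sum(w * f for w, f in zip(row, features)) + b)
              for row, b in zip(W1, B1)]
    score = sum(w * h for w, h in zip(W2, hidden)) + B2
    return score > 5            # fixed cut stands in for the trigger threshold

print(keep_event([4, 0, 1]))    # high-signal event -> True
print(keep_event([0, 1, 0]))    # background-like event -> False
```

On an FPGA, every multiply-accumulate above becomes dedicated silicon running in parallel, which is how these models hit microsecond-scale latency: there's no instruction fetch, just a fixed pipeline of arithmetic.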

Matt Webb on why AI agents grind problems into dust

Matt Webb has a sharp observation about agentic coding: give an AI agent a problem and enough time, and it'll eventually solve it by brute force, even if that means rewriting everything down to the silicon. The challenge isn't whether agents can solve problems — it's getting them to solve problems efficiently and know when to stop. Worth reading if you're thinking about where AI coding tools are headed.
Simon Willison

Industry Moves

Wikipedia officially bans AI-generated content

Wikipedia's community voted to ban AI-generated content from the encyclopedia, citing concerns about accuracy and the need for human editorial judgment. The policy applies to both text and images. This is a clear statement about where one of the internet's most important reference sources stands on synthetic content. Discussion on HN.
Hacker News

Anthropic had 3,000 internal files exposed in a public CMS

Someone found thousands of Anthropic's internal documentation files publicly accessible through their content management system, including details about an unreleased project called "Mythos." The files have since been locked down, but not before revealing some interesting details about how Anthropic thinks about model development. A reminder that even AI companies have security slip-ups.
Hacker News

Worth Reading

The first 40 months of the AI era

A thoughtful reflection on what's changed since ChatGPT launched in late 2022. The piece walks through how AI went from lab curiosity to daily tool, what worked, what flopped, and what patterns are emerging now that the initial hype has settled. Good Sunday morning reading if you want to zoom out and think about where we've been and where we're going.
Hacker News

Today’s Sources