Thursday, March 26, 2026

Google Research just dropped TurboQuant, a compression technique aimed at running massive AI models on regular hardware through extreme quantization. Meanwhile, Ensu brings local LLMs to your desktop with a privacy-first approach, and GitHub updated Copilot's data policy with clearer opt-out controls. Plus: Google's Lyria 3 music generation model is now available through the Gemini API, and the community's sharing solid podcasts for staying current.

💥 Big Moves

TurboQuant: Redefining AI efficiency with extreme compression

Google Research released TurboQuant, a compression technique that lets you run huge AI models on normal hardware without crushing performance. The key innovation is extreme quantization (shrinking a model by storing its numbers at much lower precision) that maintains quality while slashing memory requirements. This could be the breakthrough that makes powerful models practical for everyday machines. Popular discussion on HN.
Hacker News
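
TurboQuant's actual method lives in the paper, but if quantization is new to you, here's a toy int8 round-trip in TypeScript showing the basic trade: store numbers at lower precision, accept a small rounding error, use a fraction of the memory. This is a generic sketch of the idea, not TurboQuant's algorithm.

```typescript
// Toy symmetric int8 quantization: a generic illustration of the idea,
// not TurboQuant's algorithm.

// Map float32 weights to int8 with a single per-tensor scale factor.
function quantize(weights: Float32Array): { q: Int8Array; scale: number } {
  let maxAbs = 0;
  for (const w of weights) maxAbs = Math.max(maxAbs, Math.abs(w));
  const scale = maxAbs / 127 || 1; // guard against an all-zero tensor
  const q = new Int8Array(weights.length);
  for (let i = 0; i < weights.length; i++) {
    q[i] = Math.max(-127, Math.min(127, Math.round(weights[i] / scale)));
  }
  return { q, scale };
}

// Recover approximate floats; the rounding error is the quality cost.
function dequantize(q: Int8Array, scale: number): Float32Array {
  const out = new Float32Array(q.length);
  for (let i = 0; i < q.length; i++) out[i] = q[i] * scale;
  return out;
}

const { q, scale } = quantize(new Float32Array([0.12, -0.98, 0.5, 0.031]));
console.log(dequantize(q, scale)); // ≈ the originals, at 1/4 the memory
```

Int8 alone cuts float32 memory 4x; "extreme" schemes push toward 4 bits per weight and below, which is where holding onto quality gets hard.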

GitHub Copilot gets clearer data usage controls

GitHub updated how Copilot handles your interaction data, giving users more transparent opt-out controls for data used to improve the product. The changes clarify what gets collected, how it's used, and how to turn it off completely. If you've been wary about what Copilot does with your code sessions, this is worth reviewing. Discussion on HN.
Hacker News

Lyria 3: Google's new music generation model now in paid preview

Lyria 3, Google's latest music generation model, is now available through the Gemini API and for testing in AI Studio. You can start building with it today if you're on a paid plan. The release coincides with Lyria 3 Pro rolling out to professional creative tools for longer track creation.
Google AI Blog
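
The documented request shape for Lyria 3 is in the Gemini API docs; the sketch below is hypothetical, using the generic generateContent REST pattern with an assumed model name and an assumed response shape, just to show where such a call would slot into your code.

```typescript
// Hypothetical sketch: the model name, endpoint, and response handling
// below are assumptions, not the documented Lyria 3 API. Check the
// official Gemini API docs before building on this.
const API_KEY = process.env.GEMINI_API_KEY;
const MODEL = "lyria-3"; // assumed model identifier

async function generateTrack(prompt: string): Promise<void> {
  const res = await fetch(
    `https://generativelanguage.googleapis.com/v1beta/models/${MODEL}:generateContent?key=${API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
    },
  );
  if (!res.ok) throw new Error(`Gemini API error: ${res.status}`);
  const data = await res.json();
  // Where the audio bytes land in the response depends on the documented
  // schema (assumption: somewhere in data.candidates).
  console.log(JSON.stringify(data).slice(0, 200));
}

generateTrack("an upbeat lo-fi track with warm piano chords");
```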

🛠️ Tools & Releases

Ensu: Local LLM app from Ente

The team behind Ente (the privacy-focused photo app) built Ensu, a desktop app for running LLMs entirely on your machine. No cloud, no data leaving your computer. It's designed for people who want AI assistance without sacrificing privacy. Clean UI, works with popular open models, and it's free to try. Active discussion on HN.
Hacker News
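
The HN thread doesn't get deep into Ensu's internals, but the pattern underneath any local-LLM app is the same: inference runs on localhost, so prompts and outputs never leave the machine. Here's a generic illustration in TypeScript, assuming an Ollama server on its default port rather than anything Ensu-specific.

```typescript
// The local-inference pattern privacy-first apps build on: the model
// server lives on localhost, so nothing crosses the network boundary.
// This targets Ollama's default endpoint as an illustrative stand-in;
// Ensu's own internals may differ.
async function localComplete(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2", // any open model you've pulled locally
      prompt,
      stream: false, // one JSON response instead of a token stream
    }),
  });
  if (!res.ok) throw new Error(`local model error: ${res.status}`);
  const data = await res.json();
  return data.response;
}

localComplete("In one sentence, why does local inference help privacy?")
  .then(console.log);
```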

Show HN: A plain-text cognitive architecture for Claude

A clever plain-text framework for giving Claude more structured reasoning capabilities. The approach uses simple text files to define memory, goals, and decision-making patterns—no complex setup required. It's surprisingly effective for building more reliable agents. Check the HN thread for examples.
Hacker News
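
The framework's actual file layout is in the HN thread; the sketch below just illustrates the pattern with invented file names. The core move is disarmingly simple: each piece of the agent's "cognition" is an ordinary text file, and the harness concatenates them into a system prompt.

```typescript
import { readFileSync } from "node:fs";

// Minimal sketch of the plain-text pattern. File names are illustrative,
// not the framework's actual layout.
const sections: Record<string, string> = {
  memory: "memory.txt",       // durable facts the agent should retain
  goals: "goals.txt",         // what it's currently trying to achieve
  decisions: "decisions.txt", // rules for choosing among actions
};

// Stitch the files into one system prompt, section by section.
function buildSystemPrompt(): string {
  return Object.entries(sections)
    .map(([name, path]) => `## ${name}\n${readFileSync(path, "utf8").trim()}`)
    .join("\n\n");
}

// Send the result as the system message of any Claude call; tweaking the
// agent then means editing text files, not redeploying code.
console.log(buildSystemPrompt());
```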

Robust LLM extractor for websites in TypeScript

A TypeScript library that uses LLMs to reliably extract structured data from messy websites. Handles dynamic content, navigation, and pagination automatically. Good for anyone scraping data at scale who's tired of brittle selectors breaking. Show HN thread.
Hacker News
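
This isn't the library's actual API, but the pattern it implements is easy to sketch: declare the shape you want, hand the model the raw HTML, and validate the JSON that comes back, so markup changes break prompts far less often than they break selectors. A minimal, model-agnostic version:

```typescript
// Generic LLM-extraction sketch (not this library's API): the target
// shape lives in the prompt, and validation happens on the way out.
interface Product {
  name: string;
  priceUsd: number;
}

async function extractProducts(
  html: string,
  complete: (prompt: string) => Promise<string>, // any LLM completion fn
): Promise<Product[]> {
  const prompt = [
    "Extract every product from the HTML below.",
    'Reply with ONLY a JSON array of objects: {"name": string, "priceUsd": number}.',
    "HTML:",
    html,
  ].join("\n");

  const parsed: unknown = JSON.parse(await complete(prompt));
  if (!Array.isArray(parsed)) throw new Error("model did not return an array");
  // Keep only items that actually match the declared shape.
  return parsed.filter(
    (p): p is Product =>
      typeof (p as Product).name === "string" &&
      typeof (p as Product).priceUsd === "number",
  );
}
```

Real libraries layer retries, chunking for long pages, and pagination handling on top of this core loop.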

📚 From the Community

AI Daily Brief episode: How to use Claude's massive new upgrades

The AI Daily Brief podcast breaks down Claude's recent feature updates with practical examples and a feature checklist. If you've been overwhelmed by all the new Claude capabilities, this episode is a solid 20-minute explainer. The show consistently delivers useful, builder-focused content.
dunn in AI Friday Slack

Podcast recommendations: Dwarkesh Patel and Marketing Against the Grain

Two podcast recommendations from the community: Dwarkesh Patel does deep technical interviews with AI leaders (Anthropic CEO, etc.) every few weeks, and Marketing Against the Grain from HubSpot covers AI in knowledge work with demos and practical guides. Both worth adding to your rotation. MATG channel here.
dunn in AI Friday Slack

📊 Weird Data

90% of Claude-linked code goes to repos with under 2 stars

Someone scraped GitHub and found that 90% of repositories with Claude attribution messages have fewer than 2 stars. It's a fascinating dataset that shows how most AI-generated code lives in personal or low-visibility projects, not the showcase repos we see on Twitter. Interesting discussion on HN about what this means.
Hacker News
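
The original methodology is in the thread; one plausible way to reproduce the measurement (assuming "Claude attribution" means the Co-Authored-By trailer Claude Code appends to commit messages, and a token in GITHUB_TOKEN) uses GitHub's search API:

```typescript
// Plausible replication sketch, not the original scraper. Assumes the
// attribution marker is Claude Code's Co-Authored-By commit trailer.
const GH = "https://api.github.com";
const headers = {
  Accept: "application/vnd.github+json",
  Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
};

async function sampleClaudeRepos(): Promise<void> {
  // Find commits whose messages carry the attribution trailer.
  const q = encodeURIComponent('"Co-Authored-By: Claude"');
  const search = await fetch(`${GH}/search/commits?q=${q}&per_page=100`, { headers });
  const { items } = await search.json();

  // Deduplicate repos, then look up each one's star count.
  const names: string[] = Array.from(
    new Set(items.map((i: any) => i.repository.full_name as string)),
  );
  let lowStar = 0;
  for (const name of names) {
    const repo = await (await fetch(`${GH}/repos/${name}`, { headers })).json();
    if (repo.stargazers_count < 2) lowStar++;
  }
  console.log(`${lowStar}/${names.length} sampled repos have <2 stars`);
}

sampleClaudeRepos();
```

Search results are capped, so a real replication would paginate, sample, and mind rate limits; treat this as a starting point.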
