Anthropic released a new research piece on how Claude learns to explain its own reasoning: why it chose a particular answer, not just what the answer is. This matters for trust and debugging. When Claude can tell you *why* it did something, you can debug its mistakes and make a more informed call about trusting it with high-stakes work.
HN discussion.