A sobering research finding: fine-tuning an LLM can reactivate recall of copyrighted material that the base model had been trained to suppress. It's a reminder that alignment isn't a solved problem—it's a game of constant adjustment.
Discussion here. If you're working with fine-tuned models, it's worth understanding the tradeoffs.