When does AI make stale context harder to spot?

I keep running into this with AI-assisted docs and tickets: the output reads polished enough that nothing signals it was built on outdated assumptions. That polish feels useful right up until someone treats the result as fresh context.

Has anyone else noticed AI tools making it easier to miss when a draft, spec, or summary is quietly out of date?