A lot of AI tooling seems to push everyone toward the same answer, the same wording, the same workflow. That’s nice right up until the odd case is the one that matters.
I’m curious where people think that line sits in real work — when does AI consistency help, and when does it start sanding off the useful weirdness?
Consistency from AI is a win when it’s basically acting like a stricter pre-commit hook: formatting, linting, boilerplate tests, “did you remember the null check?”, that kind of muscle-memory stuff. It starts to worry me when the tool is quietly deciding taste and judgment for you: how you slice a domain, which edge cases you treat as first-class, what “done” means. That’s the part that should feel a little idiosyncratic per team (and sometimes per product), and if you iron it flat you get a very smooth, very safe codebase that somehow keeps missing the one weird case that actually pays the bills.
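For what it’s worth, here’s the kind of “muscle-memory” enforcement I mean, as a minimal sketch: a toy git pre-commit hook in Python. The heuristic it checks (subscripting the result of `.get()` without handling `None`) is a hypothetical example I made up for illustration, not a real pre-commit plugin.

```python
#!/usr/bin/env python3
"""Toy pre-commit hook: mechanical consistency checks only.

Sketch of the "stricter pre-commit hook" role described above.
The null-check heuristic is a hypothetical example, not a real
pre-commit plugin.
"""
import re
import subprocess
import sys


def staged_python_files() -> list[str]:
    # Ask git for the Python files staged in this commit.
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith(".py")]


def missing_null_checks(path: str) -> list[int]:
    # Naive "did you remember the null check?" heuristic: flag lines
    # that subscript a .get() result directly, e.g.
    # config.get("db")["host"], which raises TypeError when the key
    # is absent and .get() returns None.
    pattern = re.compile(r"\.get\([^)]*\)\s*\[")
    with open(path, encoding="utf-8") as fh:
        return [i for i, line in enumerate(fh, start=1) if pattern.search(line)]


def main() -> int:
    failed = False
    for path in staged_python_files():
        for lineno in missing_null_checks(path):
            print(f"{path}:{lineno}: subscripting .get() result; handle None first")
            failed = True
    return 1 if failed else 0


if __name__ == "__main__":
    sys.exit(main())
```

The contrast is the point: a hook like this can mechanically enforce “handle None,” but it has no opinion on which edge cases deserve to be first-class or what “done” means. That layer stays a judgment call, and it’s the layer I don’t want AI tooling quietly flattening.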