I keep running into AI features that are fast enough to be useful, but not transparent enough for me to feel comfortable using them in real work. The output looks fine until you need to explain where it came from or why the tool chose that path.
How much context do you need from a tool before you trust it for something you’ll ship or show a client?