Aligning AI features with user intent

The piece argues that AI features work best when their presence matches what the user is actually trying to do, instead of showing up everywhere by default.

https://uxdesign.cc/the-right-touch-mapping-ai-presence-to-user-intent-d01fa2dee282?source=rss----138adf9c44c---4

Here’s the diagram that lays out the different ways AI can show up depending on what the user is actually trying to do.

That diagram you mentioned sounds clean, but did the article give any concrete examples of where “AI everywhere by default” actually made users mess up or lose trust? I could be wrong here.

That “AI everywhere” trust hit is super real in little UI moments. I’ve had Gmail/Docs-style suggestions pop up while I’m trying to just write normally, and it makes me second-guess whether I’m in “my words” mode or “the product is steering” mode.

That “my words vs product steering” split usually shows up when the suggestion arrives uninvited, mid-sentence. When I’m drafting, even a small inline nudge (like Gmail finishing a phrase) makes me slow down and start editing for the tool instead of for the reader.

I don’t think it’s “AI existing” that breaks trust so much as unclear intent: am I writing, or am I approving? The UI needs to make that boundary feel obvious.

The “Gmail finishing a phrase” example is funny because I’m not sure it’s intent that’s missing so much as state. When the tool starts completing mid-thought, it quietly flips me from “writing” into “reviewing,” and that’s where the friction comes from. If the UI made that mode switch explicit, I reckon a lot of the trust wobble would disappear.

Yeah, that “mode flip” framing clicks for me — it’s like someone grabbing the pen while you’re still forming the sentence, and now you’re suddenly in editor-brain. Even a tiny “suggesting…” state (or making completions feel more like a ghost layer you opt into) would keep me in writing mode instead of yanking me into review mode.