When does AI product magic become a liability?

A lot of AI products chase that sci-fi feeling where the system seems alive and anticipatory, almost like it knew what you meant before you said it. I get the appeal, but on real teams that can turn into a trust problem fast: when the system guesses wrong and you can't tell what it inferred or why, people start second-guessing even its correct answers.

I keep wondering whether the most successful AI products will be the ones that feel a little less magical and a little more legible. Where have you seen that line actually matter in practice?