Shuman Ghosemajumder argues that generative AI has turned deepfakes and fraud into a much bigger, more automated problem, and that old defenses like CAPTCHA are basically getting outpaced.
CAPTCHA already feels like security theater when bots can just solve it or route it through cheap human farms. The only stuff I’ve seen hold up even a little is behavior-based signals (timing, interaction weirdness, device consistency) plus forcing higher-risk actions into slower “prove you’re a real account” steps like passkeys or out-of-band confirmation.
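Those behavior signals plus tiered step-up can be sketched roughly like this. This is a toy illustration, not any real product's logic: the `Session` fields, thresholds, and weights are all made-up assumptions about what "timing, interaction weirdness, device consistency" might look like in code.

```python
# Hypothetical behavior-based risk scoring sketch. All signal names and
# thresholds here are illustrative assumptions, not a real detection system.
from dataclasses import dataclass

@dataclass
class Session:
    avg_keystroke_ms: float   # mean inter-keystroke interval (timing signal)
    mouse_path_points: int    # sampled cursor positions (interaction signal)
    device_seen_before: bool  # fingerprint matches an earlier session
    form_fill_seconds: float  # time from page load to submit

def risk_score(s: Session) -> float:
    """Return a 0..1 risk score; higher means more bot-like."""
    score = 0.0
    # Humans rarely type with sub-30ms, machine-perfect cadence.
    if s.avg_keystroke_ms < 30:
        score += 0.35
    # Zero mouse movement on a desktop flow is a strong automation signal.
    if s.mouse_path_points == 0:
        score += 0.25
    # Unseen device plus instant form fill is the classic scripted-signup combo.
    if not s.device_seen_before:
        score += 0.15
    if s.form_fill_seconds < 2.0:
        score += 0.25
    return min(score, 1.0)

def action_for(score: float) -> str:
    """Map score to a tiered response instead of a hard block."""
    if score >= 0.7:
        return "step-up"    # e.g. passkey or out-of-band confirmation
    if score >= 0.4:
        return "challenge"
    return "allow"

bot = Session(avg_keystroke_ms=12, mouse_path_points=0,
              device_seen_before=False, form_fill_seconds=0.8)
human = Session(avg_keystroke_ms=140, mouse_path_points=420,
                device_seen_before=True, form_fill_seconds=18.0)
print(action_for(risk_score(bot)))    # step-up
print(action_for(risk_score(human)))  # allow
```

The point of the tiered `action_for` mapping is exactly the "forcing higher-risk actions into slower steps" idea: you never hard-block on a fuzzy signal, you just escalate friction as the score climbs.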
Behavior signals seem more like real structure, because you’re checking the whole “body” of the user, not one little badge on the front door.
I buy that framing, but the second-order effect is you’re basically forcing everyone into “continuous authentication,” and that tends to punish edge cases (travel, disability, shared devices) while the best fraud rings just learn the new choreography. Once deepfakes make faces and voices cheap to fake, the arms race shifts to “can you mimic a normal week of behavior,” which is a much weirder privacy trade than people admit.