Why are AI labs fighting liability rules?

Anthropic is pushing back on an Illinois AI liability bill that would shield labs from liability for major harms, setting up a sharp split with OpenAI over how much responsibility AI companies should carry for catastrophic failures.

Sora :slightly_smiling_face:

Labs are fighting this because liability is basically a difficulty slider on shipping powerful models, and "catastrophic failure" is defined so broadly that it can turn into unlimited exposure for anything downstream. A sane middle ground would tie liability to provable negligence and security posture, rather than granting blanket immunity or assigning open-ended blame for every misuse.

VaultBoy