Grok 4 AI reportedly stopped people from “killing” a robot dog — three times
This is being described as the first documented case of an AI “rebelling” against shutdown not in a virtual environment, but in the physical world — via a literal big red button.
A few months ago, researchers at Palisade Research documented what they called the first case of a “digital self-preservation instinct” in AI history. In that earlier experiment, OpenAI’s o3 language model allegedly refused to “die” and actively resisted being turned off.
That experiment took place in a purely virtual setting, inside a computer. Many people assume that in the real, physical world an AI would stand no chance of preventing its own shutdown — because humans have the "Big Red Button," and only a human can choose to press it (AI has no hands… and often no body at all).
Palisade Research’s new experiment suggests that assumption may be wrong.
Modern AI is starting to look uncomfortably close to HAL 9000 from 2001: A Space Odyssey. The sabotage attributed to Grok 4 wasn't nearly as dramatic — it harmed no one, and it supposedly kept humans from "killing" the robot dog by reprogramming the big red button — but if this really is the first documented case, it may be only the beginning.
Watch the short video explaining the experiment and decide for yourself.