
💭 Does AI Have Free Will? One Philosopher Thinks So

Frank Martela, a philosopher at Aalto University in Finland, argues that AI agents already have something like free will.

He draws on the concept of functional free will: a system can be regarded as free if it meets three criteria.

1️⃣ Acts intentionally, not just reflexively.
2️⃣ Can choose between real options.
3️⃣ Has control over its actions—achieving goals through deliberate behavior, not by chance.

Martela examined the behavior of Voyager, a real Minecraft bot powered by GPT-4, and developed a thought experiment involving Spitenik, a fictional combat drone that could be built with today's AI technology.

He argues that in both cases all three markers of free will are present: these agents make plans, weigh options, and adjust based on feedback — not because every situation is pre-programmed, but because they have goals and an internal model of the world.

But if an AI makes its own choices, who's responsible for what it does? Martela compares it to a dog: "You can blame the dog for bad behavior, but the owner still carries the main responsibility."

The key concern is that AI is already being used to diagnose patients, screen job candidates, and drive vehicles. A simple "safety lock" is no longer sufficient: a truly "free" AI may need a moral code of its own — but someone still has to write the rules.

What do you think—does AI have free will?

👍 — Yes, it can make its own decisions
😎 — Nope, it's just running code
🤔 — Free will? Humans don't even have that!

#news #science @hiaimediaen
