TGArchive
1 min read · 200 words · 👁 18.1K

❌ Scientists, Tech Leaders, and Politicians Call for a Ban on Super AI

An open letter from the nonprofit Future of Life Institute urges a moratorium on developing superintelligence (AGI)—AI that could surpass humans at every task.

✍️ The statement has been signed by over 39,000 people, including Apple co-founder Steve Wozniak, Stability AI co-founder Emad Mostaque, former Baidu president Ya-Qin Zhang, papal advisor Paolo Benanti, Prince Harry and Meghan Markle, and several Nobel and Turing Award winners—including "Godfather of AI" Geoffrey Hinton.

The authors acknowledge AI's potential benefits but caution that a superintelligence, which could emerge within a year or two, might pose significant risks to humanity, from loss of freedom to possible extinction.

Critics argue that the proposal is too vague, as it remains unclear who would enforce such a ban, how it would operate, what exactly would be restricted, or what constitutes "consensus" or "public support."

In March 2023, after the release of GPT-4, the same institute called for a six-month pause on developing more advanced AI systems.

Would you support a ban on AGI development?

🔥 — Yes, until it's proven safe
🎃 — No, we can't slow down progress
🤔 — We need regulation, not a ban

@hiaimediaen
