🌐 Artificial General Intelligence (AGI) might emerge by 2030
On The Joe Rogan Experience podcast, Sam Altman said that the key problems on the path to Artificial General Intelligence could be solved within the next four years.
AGI, in turn, is a step towards superintelligence, which would surpass the smartest humans in any field, from science to art. Reaching these goals could take until 2030-31.
According to a McKinsey forecast, by the end of the decade artificial intelligence will be able to perform tasks at a human level. In some areas this may happen up to 40 years earlier than the forecast made in 2017, at the dawn of generative AI.
Levels of AI development are often classified based on their ability to perform tasks and interact with the environment:
👉 Artificial Narrow Intelligence (ANI) — specializes in one task, for instance, ChatGPT, Siri, or Face ID (face recognition).
👉 Artificial General Intelligence (AGI) — can learn, adapt, and perform any task a human can. Now expected by 2030; just a couple of years ago, estimates pointed to 2050-2060.
👉 Artificial Super Intelligence (ASI) — surpasses humans in everything, continually improves, and can perform tasks that are hard for us to imagine now.
The possible implications of ASI for society are actively discussed by scientists and philosophers — this is known as the alignment problem. OpenAI has created the Superalignment team to address it, referred to on Twitter as "AI Notkilleveryoneism".
❔ What is the alignment problem?
It's the question of how to keep AI actions aligned with human interests rather than turned against us. Problems can arise because AI may struggle to understand human goals, because human instructions can be ambiguous, and because AI actions can have unforeseen consequences.
Sources:
— Planning for AGI and beyond
— The Joe Rogan Experience
#OpenAI @hiaimedia
