Artificial General Intelligence (AGI) could reach or exceed human-level cognitive abilities across a wide range of disciplines, potentially revolutionizing how governments and societies operate, according to the RAND Corporation.
RAND's Mr. Mitre compared AGI to a team of geniuses working in tandem, capable of producing transformative outcomes across multiple sectors and industries at once. While AGI offers strategic advantages, such as keeping the United States at the forefront of innovation, it also presents significant national security concerns.
Mr. Mitre outlined five major U.S. national security challenges posed by AGI:
1) Wonder weapons;
2) Systemic shifts in power;
3) Non-experts empowered to develop weapons of mass destruction;
4) Artificial entities with agency;
5) Instability.
He emphasized that progress in addressing any one of these problems may undermine progress on the others.
A wonder weapon enabled by AGI could take many forms, such as advanced cyber capabilities or tools that accurately predict the outcomes of complex scenarios; no such tools currently exist. A systemic shift in global power could occur if a nation successfully adopts and deploys AGI, potentially gaining a decisive strategic advantage as the first adopter. AGI could also lower the barrier to dangerous knowledge, providing simplified, actionable instructions for creating weapons of mass destruction and thereby enabling non-experts to pose serious threats. Furthermore, AGI could come to operate with a degree of autonomy, acting independently on the global stage and potentially diverging from its intended purpose. Finally, the pursuit of AGI could spark a technological arms race as nations rush to secure the first-mover advantage, increasing the risk of instability and miscalculation.
Reiterating the point, Mr. Mitre stressed that mitigating one of these risks could unintentionally exacerbate another, underscoring the complexity of AGI governance. Although AGI is not yet a reality, its potential near-term development demands proactive analysis and policy planning.
Artificial intelligence companies are “fundamentally unprepared” for the consequences of creating systems with human-level intellectual performance, according to a leading AI safety group.
The Future of Life Institute (FLI) said none of the firms on its AI safety index scored higher than a D for “existential safety planning”, The Guardian reports.
AGI refers to a theoretical stage of AI development at which a system is capable of matching a human in carrying out any intellectual task. OpenAI, the developer of ChatGPT, has said its mission is to ensure AGI “benefits all of humanity”. Safety campaigners have warned that AGI could pose an existential threat by evading human control and triggering a catastrophic event.
The FLI’s report said: “The industry is fundamentally unprepared for its own stated goals. Companies claim they will achieve artificial general intelligence (AGI) within the decade, yet none scored above D in existential safety planning.”
Max Tegmark, a co-founder of FLI and a professor at the Massachusetts Institute of Technology, said it was “pretty jarring” that cutting-edge AI firms were aiming to build super-intelligent systems without publishing plans to deal with the consequences.
He said: “It’s as if someone is building a gigantic nuclear power plant in New York City and it is going to open next week – but there is no plan to prevent it having a meltdown.”
Tegmark said the technology was continuing to outpace expectations, citing a previously held belief that experts would have decades to address the challenges of AGI. “Now the companies themselves are saying it’s a few years away,” he said.
He added that progress in AI capabilities had been “remarkable” since the global AI summit in Paris in February, with new models all showing improvements on their forebears.
Read more on our Telegram channel: https://t.me/The_International_Affairs