Leading AI scientists warn of the significant risks associated with the rapid development of AI technologies in a Policy Forum. They propose that major technology firms and public funders dedicate at least one-third of their budgets to risk assessment and mitigation. They also advocate for stringent global standards to prevent AI misuse and emphasize the importance of proactive governance to steer AI development towards beneficial outcomes and avoid potential disasters. Credit: SciTechDaily.com
AI experts recommend significant investment in AI risk mitigation and stricter global regulations to prevent misuse and guide AI development safely.
Researchers have warned about the extreme risks associated with rapidly developing artificial intelligence (AI) technologies, but there is no consensus on how to manage these dangers. In a Policy Forum, world-leading AI experts Yoshua Bengio and colleagues analyze the risks of advancing AI technologies.
These include the social and economic impacts, malicious uses, and the potential loss of human control over autonomous AI systems. They propose proactive and adaptive governance measures to mitigate these risks.
The authors urge major technology companies and public funders to invest more, allocating at least one-third of their budgets to assessing and mitigating these risks. They also call for global legal institutions and governments to enforce standards that prevent AI misuse.
“To steer AI toward positive outcomes and away from catastrophe, we need to reorient. There is a responsible path – if we have the wisdom to take it,” write the authors.
They highlight the race among technology companies worldwide to develop generalist AI systems that may match or exceed human capabilities in many critical domains. However, this rapid advancement also brings about societal-scale risks that could exacerbate social injustices, undermine social stability, and enable large-scale cybercrime, automated warfare, customized mass manipulation, and pervasive surveillance.
Among the highlighted concerns is the potential loss of control over autonomous AI systems, which would render human intervention ineffective.
The AI experts argue that humanity is not adequately prepared to handle these potential AI risks. They note that, compared to the efforts to enhance AI capabilities, very few resources are invested in ensuring the safe and ethical development and deployment of these technologies. To address this gap, the authors outline urgent priorities for AI research, development, and governance.
For more on this research, see AI Scientists Warn of Unleashing Risks Beyond Human Control.
Reference: “Managing extreme AI risks amid rapid progress” by Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Dawn Song, Pieter Abbeel, Trevor Darrell, Yuval Noah Harari, Ya-Qin Zhang, Lan Xue, Shai Shalev-Shwartz, Gillian Hadfield, Jeff Clune, Tegan Maharaj, Frank Hutter, Atılım Güneş Baydin, Sheila McIlraith, Qiqi Gao, Ashwin Acharya, David Krueger, Anca Dragan, Philip Torr, Stuart Russell, Daniel Kahneman, Jan Brauner and Sören Mindermann, 20 May 2024, Science.
DOI: 10.1126/science.adn0117