On Monday, a group of more than 200 former heads of state, Nobel laureates, AI pioneers, scientists, and diplomats issued a joint statement urging an international agreement on actions that artificial intelligence should never be permitted to take. Among the examples they cited: using AI to impersonate humans, or giving AI systems the ability to self-replicate.
The statement launches the Global Call for AI Red Lines initiative, which calls on governments worldwide to reach a political agreement on AI red lines by the end of 2026. Signatories include AI and policy heavyweights such as deep-learning pioneer Geoffrey Hinton; OpenAI cofounder Wojciech Zaremba; Anthropic CISO Jason Clinton; and Ian Goodfellow of Google DeepMind.
Charbel-Raphaël Segerie, executive director of the French Center for AI Safety (CeSIA), stressed the need to act before a major crisis occurs rather than after. In a press briefing, he said that even if nations disagree on how AI should be used, they must at least agree on what AI should never do. The initiative, led by CeSIA, The Future Society, and UC Berkeley's Center for Human-Compatible Artificial Intelligence, was timed to coincide with the 80th United Nations General Assembly high-level week in New York. Nobel Peace Prize laureate Maria Ressa mentioned it in her opening address at the UN, calling for globally inclusive accountability and an end to "Big Tech impunity."
Some regional steps toward AI safety already exist: the European Union's AI Act bans certain "unacceptable" uses of AI, and the US and China have agreed that nuclear weapons should remain under human control regardless of how AI evolves. But a universally agreed-upon position remains elusive. Niki Iliadis, director for global governance of AI at The Future Society, said voluntary pledges from AI companies fall short of what is needed; she argued instead for an independent global institution with the power to define, monitor, and enforce AI red lines.
Stuart Russell, a celebrated AI researcher and professor at UC Berkeley, drew a parallel with nuclear power, where safety protocols were worked out before power plants were built; the AI industry, he argued, should likewise build in safety from the start. He also pushed back on concerns that regulation would impair innovation, calling the supposed trade-off a fallacy: AI can continue to advance without producing an uncontrollable, potentially devastating artificial general intelligence.
For further details, refer to the original article on The Verge.