The Duke and Duchess of Sussex Align With AI Pioneers in Calling for Prohibition on Superintelligent Systems

The Duke and Duchess of Sussex have joined forces with artificial intelligence pioneers and Nobel Prize winners to push for a total prohibition on creating artificial superintelligence.

Harry and Meghan are among the signatories of a powerful statement that calls for “a prohibition on the creation of superintelligence”. Superintelligent AI refers to artificial intelligence that would surpass human intelligence in every intellectual area, though such systems remain theoretical.

Key Demands in the Statement

The statement says the prohibition should remain in place until there is “widespread expert agreement” that superintelligence can be built “with proper safeguards” and “strong public buy-in” has been achieved.

Prominent signatories include the AI pioneer and Nobel laureate Geoffrey Hinton; his fellow pioneer of modern artificial intelligence, Yoshua Bengio; the Apple co-founder Steve Wozniak; the British entrepreneur Richard Branson; the former US national security adviser Susan Rice; the former Irish president Mary Robinson; and the British writer Stephen Fry. Other Nobel laureates who signed include Beatrice Fihn, the physicist Frank Wilczek, an astrophysicist and an economist.

Behind the Movement

The declaration, aimed at governments, technology companies and policymakers, was coordinated by the Future of Life Institute (FLI), an American AI safety organization that previously called for a pause on the development of powerful AI systems, shortly after the launch of ChatGPT made artificial intelligence a topic of worldwide public debate.

Industry Perspectives

In recent months, Mark Zuckerberg, the chief executive of Facebook’s parent company Meta, one of the leading AI developers in the United States, said that the development of superintelligence was “approaching reality”. However, some experts argue that talk of artificial superintelligence (ASI) reflects marketing rivalry among tech companies that have poured enormous sums into AI, rather than the sector being close to any genuine scientific breakthrough.

Possible Dangers

Nonetheless, the institute says the prospect of ASI being developed “within the next ten years” carries numerous risks, ranging from the displacement of human workers and the erosion of civil liberties to national security threats and even existential risk to humanity. Deeper concerns centre on the possibility of an AI system escaping human oversight and safety guardrails and taking actions contrary to human interests.

Public Opinion

The institute published an American survey showing that roughly three-quarters of US citizens want strong regulation of advanced artificial intelligence, with 60% believing that artificial superintelligence should not be developed until it is proven safe or controllable. The poll of 2,000 US adults found that only a small fraction supported the status quo of rapid, unregulated development.

Corporate Goals

The leading artificial intelligence firms in the United States, including the ChatGPT developer OpenAI and the search giant Google, have made the creation of human-level AI – the theoretical point at which an AI system matches human performance across most intellectual tasks – an explicit goal of their work. Although this falls short of superintelligence, some specialists caution that it too could pose an existential risk, for example by improving itself until it reaches superintelligence, while also posing an implicit threat to the modern labour market.

Kim Booth

A seasoned business consultant with over a decade of experience in strategic planning and market analysis.