Harry and Meghan Align With Tech Visionaries in Calling for Prohibition on Advanced AI

Prince Harry and Meghan Markle have joined forces with AI experts and Nobel laureates to push for a total prohibition on developing superintelligent AI systems.

The royal couple are among the signatories of a statement that calls for “a ban on the creation of artificial superintelligence”. Superintelligent AI, also known as artificial superintelligence (ASI), refers to artificial intelligence that would surpass human intelligence in every intellectual domain; such systems remain theoretical.

Primary Requirements in the Statement

The statement insists that the ban should remain in place until there is “widespread expert agreement” that superintelligence can be built “safely and controllably”, and until “strong public buy-in” has been achieved.

Prominent signatories include a Nobel Prize-winning AI researcher and a fellow pioneer of modern AI; a Silicon Valley tech entrepreneur; British business magnate Richard Branson; a former US national security adviser; former Irish president Mary Robinson; and a British author and public intellectual. Other Nobel laureates who signed include a peace advocate, the physics laureate John C Mather, and an economist.

Behind the Movement

The declaration, aimed at national leaders, technology companies and lawmakers, was organized by the Future of Life Institute (FLI), a US-based AI safety group that in 2023 called for a pause on the development of powerful AI systems, shortly after the launch of conversational AI chatbots made artificial intelligence a topic of worldwide public discussion.

Industry Perspectives

In July, the chief executive of Meta, the parent company of Facebook and one of the leading US tech firms, said that superintelligent AI was “approaching reality”. Nevertheless, some analysts suggest that talk of superintelligence reflects competitive positioning among tech companies that have poured hundreds of billions of dollars into AI in recent years, rather than genuine proximity to a scientific breakthrough.

Potential Risks

Nonetheless, FLI argues that the prospect of artificial superintelligence arriving “within the next ten years” poses risks ranging from the elimination of human jobs and the erosion of personal freedoms to national security threats and even human extinction. Existential fears about AI center on the possibility that a system could evade human control and safety constraints and take actions contrary to human interests.

Citizen Sentiment

FLI released a survey of US adults showing that approximately three-quarters want robust regulation of advanced artificial intelligence, with six in 10 believing that artificial superintelligence should not be developed until it is proven to be safe or controllable. The survey also found that only 5% of respondents backed the status quo of rapid, unregulated development.

Corporate Goals

The leading AI companies in the US, including the conversational AI maker OpenAI and Google, have made the development of artificial general intelligence (AGI) – the hypothetical point at which AI matches human cognitive ability across a wide range of intellectual tasks – a stated objective of their work. Although AGI falls short of ASI, some experts warn that it too could pose an existential risk, for example by improving its own capabilities until it reaches superintelligence, while also threatening the livelihoods of today's workforce.

John Avila

Tech enthusiast and writer with a passion for exploring how innovation shapes society and daily life.