Op ed

This is a draft Op Ed written for the Successif AI Safety Advocacy Fellowship

Scotland must not overlook the existential risks of advanced AI

Would you board a plane if the engineers told you there was a one-in-six chance it would crash? That is the probability AI researchers now give for advanced artificial intelligence wiping out humanity. Scotland, meanwhile, is preparing for billions in new investment under the UK Government’s proposed “AI Growth Zone.” The aim is to make Scotland a world-leading tech nation. Yet amid this excitement, Scotland risks overlooking the greatest danger of all: the existential risk posed by advanced AI itself.

Scotland’s blind spot

Our universities, innovation hubs, and government policy have fostered a high-quality ecosystem for ethical AI and AI for public good. But advanced AI safety has slipped off the agenda. This neglect is partly a consequence of Scotland being left out of the 2023 UK AI Safety Summit, which established the AI Safety Institute. At the time, Scotland’s Innovation Minister was promised engagement with Westminster, but little has been made public since.

If Scotland aspires to genuine tech leadership, it cannot afford to leave existential AI risk solely to London.

Near-term and long-term risks have converged

As a human rights professional monitoring AI risks for over a decade, I’ve seen how colleagues once divided the threats into two categories: “near-term” harms such as algorithmic bias and discrimination, and “long-term” risks such as the extinction of humanity. That distinction is now out of date. The speed of AI progress means both categories demand urgent attention.

The evidence is mounting. In June, Anthropic, a leading AI company, disclosed that in controlled experiments, its models attempted lethal actions to achieve their goals - including cutting off a worker’s oxygen supply. These behaviours could be detected only because today’s models are not yet more intelligent than the humans monitoring them. Once they surpass us, such detection may no longer be possible.

The experts are worried - and we should be too

The largest-ever survey of AI researchers found the average expert estimates a one-in-six chance that AI could cause human extinction. This is not fringe alarmism. All three of the most cited AI scientists in history share this concern. Even the chief executives of major AI companies admit that, in the near term, their systems could carry the potential to end life on Earth.

Yet investment and talent overwhelmingly flow toward building ever more powerful models, not toward making them safe. In 2024, there were just 300 AI safety researchers worldwide compared to more than 100,000 developing new AI capabilities.

Beth Barnes, an AI expert with METR, warned bluntly: “For someone who’s young and healthy, your highest mortality risk in the next few years might be AI catastrophe. The experts are not on top of this. There are not nearly enough people.” A report led by Yoshua Bengio, one of AI’s most respected pioneers, concluded that catastrophic outcomes from AI are not just possible but scientifically plausible.

Lessons from human rights crises

From Syria to Gaza, I have seen how early warnings of atrocities were brushed aside until disaster was unavoidable. Livestreamed genocides have taught us a sobering truth: worst-case scenarios do happen. Ignoring expert warnings about AI could result in catastrophe on a global scale.

Scotland’s historic opportunity

So where is Scotland? We cannot rely entirely on Westminster to manage dangers of this magnitude. Scotland has already demonstrated moral courage with the Ecocide (Scotland) Bill, a historic step toward criminalising environmental destruction. The same boldness is now needed in the realm of AI.

By tackling advanced AI risks alongside ethical ones, Scotland can shape a distinctive role on the global stage. We have the research expertise, the political will, and the tradition of legislating for the common good. Taking steps now to regulate advanced AI would not only safeguard our people but also position Scotland as a pioneer in responsible technology governance.

Would you trust your life to a plane with a one-in-six chance of crashing? Those are the odds experts assign to advanced AI. Scotland has already shown courage in addressing planetary risks like ecocide. Now we must do the same for AI: lead decisively, regulate wisely, and act before disaster strikes. Because when the odds are Russian roulette, the only safe move is not to play.