Copywriting
I was tasked with writing accessible copy for the Fund for AI Safety.
"Right now, most money and effort going into Artificial Intelligence (AI) is to make it more smart and capable. Governments and companies want to make AI as intelligent and powerful as possible so that it can solve problems like climate change and curing diseases.
But there are dangers in inventing technology that is poised to outsmart humans. Very little work goes into making sure advanced AI is safe and aligned with human values. That work is called AI safety research.
By one estimate, thousands of times more resources go into making AI more powerful than into making it safe. In 2022, for example, over 40,000 AI researchers were working on making AI more powerful, compared with at most 895 working on making it safe.
This means AI is getting smarter very quickly. But comparatively little progress has been made on preventing risks and disasters.
Experts say this imbalance is very concerning. It's like designing an extremely powerful new machine without any safety features. Much more investment needs to go into AI safety to balance out this lopsided effort, and there isn’t a moment to lose.
The UN Secretary-General, the UK Prime Minister, White House staff and the President of the European Commission have all acknowledged that advanced AI poses an existential threat to humanity.
Ian Hogarth is a technology entrepreneur. One night, he was confronted by these risks at a dinner party at the home of an AI researcher.
The researcher lived in a nice apartment, with floor-to-ceiling windows overlooking London’s skyscrapers.
During dinner, the group talked about recent advances in AI, like ChatGPT, and about how billions of dollars are now going into making AI smarter.
Ian asked the researcher how long it would take to build AI as smart as humans. This is known as AGI, or artificial general intelligence. The goal of AGI research is to eventually create AI at or above human-level intelligence across all domains.
The researcher thought about it briefly. Then he said AGI could arrive "from now onwards."
Ian was shocked by this short timeline. He knew AGI could be very dangerous if not made safely.
“If you think we could be close to something potentially so dangerous, shouldn’t you warn people about what’s happening?” he asked the researcher. His question echoed the late Stephen Hawking’s warning that AI could be the worst event in the history of our civilization if its risks are not addressed.
The researcher at the dinner was grappling with the responsibility he carried. But for him, as for many other AI researchers, the rapid pace of progress makes it hard to slow down and think about safety. That pace leaves less time to make sure AI is aligned with human values before it becomes too powerful.
After the dinner, Ian thought about his young son and felt angry that a small group of companies is racing to build AGI. “It felt deeply wrong that consequential decisions potentially affecting every life on Earth could be made by a small group of private companies without democratic oversight,” he said.
“Did the people racing to build the first AGI have a plan to slow down and let the rest of the world have a say in what they were doing? And when I say they, I really mean we, because I am part of this community.”
The CEOs of the world’s leading AI companies, along with hundreds of other AI scientists and experts, agree that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
To ensure this technology develops in a safe, responsible manner, we urgently need to prioritise safety over unchecked technological advancement. The fate of our species depends on it.
Aligning advanced Artificial Intelligence with human values is the most complex challenge that humanity has ever faced. AI safety researchers are the first responders in this emergency. Please donate now to the Fund for AI Safety to enable more AI safety research to protect all of us."