Fears over Skynet-style AI that ‘controls our NUKES’ as Oxford prof warns rogue robots pose ‘greater risk than Covid-19’

OXFORD researchers have issued a chilling warning over future “extreme” risks that pose a greater threat to humanity than Covid-19.

In a new report, they caution that bioweapons and rogue Skynet-style sentient computers could end our species within decades.

Terminator-style robots are one of humanity’s greatest threats, according to a report. Credit: Alamy

The team from Oxford University’s Future of Humanity Institute is urging the government to prepare for these serious threats now, before it’s too late.

They argue that, with the end of the coronavirus pandemic in sight, now is the time to act on emerging dangers that we have ignored for too long.

That’s because public support for taking action to protect us from the next big thing will be high for the next couple of years.

“While the scale of national tragedy is alive in our minds, the government must seize this opportunity,” researchers write in the Future Proof study.

AI should not be put in control of the UK’s nuclear submarines, one researcher says. Credit: Getty

Westminster must “ensure we are better prepared for the next extreme risk event that will devastate lives and economies on a global scale,” they add.

Future threats highlighted in the paper are those that have the potential to “lead to the premature extinction of humanity” but are unlikely to happen.

Some risks, such as climate change and nuclear attack, are well prepared for, while others, such as biosecurity lab leaks, are not.

Other threats we aren’t ready for include artificial intelligence gone wrong, cyberattacks on UK infrastructure and solar storms, according to the team.

Such threats have the potential for huge loss of life globally.

There is a one in six chance of an “existential catastrophe over the next one hundred years”, according to one of the paper’s lead authors.

Artificial Intelligence explained

Here’s what you need to know

  • Artificial intelligence, also known as AI, is a type of computer software
  • Typically, a computer will do what you tell it to do
  • But artificial intelligence simulates the human mind, and can make its own deductions, inferences or decisions
  • A simple computer might let you set an alarm to wake you up
  • But an AI system might scan your emails, work out that you’ve got a meeting tomorrow, and then set an alarm and plan a journey for you
  • AI tech is often “trained” – which means it observes something (potentially even a human) then learns about a task over time
  • For instance, an AI system can be fed thousands of photos of human faces, then generate photos of human faces all on its own
  • Some experts have raised concerns that humans will eventually lose control of super-intelligent AI
  • But the tech world is still divided over whether or not AI tech will eventually kill us all in a Terminator-style apocalypse

Biological weapons and laboratory leaks are highlighted as the most urgent risks.

The report says they could bring “even worse consequences than naturally occurring pandemics like COVID-19”.

Speaking to Sky News, author Dr Toby Ord said that rogue artificial intelligence was one of the threats he feared most.

“There’s dramatic progress in AI at the moment and it’s increasing as we speak,” he said.

“Many scientists within artificial intelligence believe that within the next few decades we’ll reach the point where AI abilities surpass human level.

“But there are problems that we have to be concerned about before that point and different kinds of problems after.”


He said that the government should think carefully before integrating AI into, for example, its nuclear weapons systems.

“It would be very dangerous to include AI systems inside our nuclear command and control and communications systems,” he said.

“We need to take seriously the possibility that things could go wrong.”

Scientists have warned against the potential dangers of artificial intelligence for decades.

There are fears that the technology could become smarter than humans and rise up against its fleshy creators.

The concept has made its way into science fiction, perhaps most famously in the Terminator film franchise.

In it, an AI system called Skynet turns against its masters, wiping out most of humanity in a brutal battle between man and machine.


In other news, criminal gangs and terrorists could one day buy killer robot assassins on the black market.

Russia says it’s building a ground force of killer robots including swarms of cat-sized “bomber drones” and self-driving tanks.

Super-strong robots that “make the Terminator look puny” are already on the way.

What do you think of killer robots? Let us know in the comments!

We pay for your stories! Do you have a story for The Sun Online Tech & Science team? Email us at [email protected]
