A FORMER Google boss has warned of the dangers of AI – claiming humans “will not be able to police” the technology.
Ex-CEO of the tech giant Eric Schmidt said when a computer system reaches a point where it can self-improve “we seriously need to think about unplugging it”.
Pictured: Eric Schmidt, former Google CEO and founder of Schmidt Futures
The race to improve artificial intelligence has seen huge strides made in recent years, with Schmidt describing the progress as a cause for celebration.
“I’ve never seen innovation like this,” he told George Stephanopoulos for ABC’s This Week.
Schmidt added: “We’re soon going to be able to have computers running on their own, deciding what they want to do.”
He went on to say: “The power of this intelligence… means that each and every person is going to have the equivalent of a polymath in their pocket.
“We just don’t know what it means to give that kind of power to every individual.”
It comes after Schmidt told Axios last year that computers making their own decisions might be less than four years away.
And other experts have said the most powerful systems could operate at the intelligence of a PhD student by 2026.
Schmidt said that while the US continues to lead the AI race, China’s tech is developing quickly, making it crucial that “the West wins”.
He also advised that the “worst possible cases” be identified, and that a parallel AI system be developed to monitor the first.
“Humans will not be able to police AI, but AI systems should be able to police AI,” he added.
It comes as an AI technology analyst warned in recent weeks we’re just steps away from cracking the “neural code” that allows machines to consciously learn like humans.
Eitan Michael Azoff makes the case in his new book, Towards Human-Level Artificial Intelligence: How Neuroscience can Inform the Pursuit of Artificial General Intelligence.
According to Azoff, one of the key steps towards unlocking “human-level AI” is understanding the “neural code.”
The term describes the way our brains encode sensory information and perform cognitive tasks like thinking and problem solving.
DOOMSDAY SCENARIO
Talks at a federal level are taking place to ensure regulations and protocols can keep the tech at bay.
The emergence and success of ChatGPT since its mainstream introduction in November 2022 have surprised many in the AI field, accelerating the development of the technology immeasurably.
AI expert Rishabh Misra, who has worked on machine learning for X for the past four years, insists he’s “never seen any kind of tech move so fast” and believes that once AI systems begin to surpass human-level intelligence, super-powered robots could begin to wreak havoc in society “within the decade”.
Misra told The U.S. Sun: “In the future, as more such capabilities are added, some misconfiguration, irresponsible usage by giving wrong instructions, or involvement of malicious actors could have disastrous consequences, akin to the scenarios where it may seem bots have gone rogue.
“If these bots get hacked or used for harmful purposes, they can spread misinformation or hate speech, launch spam campaigns, manipulate financial markets to crash the economy, or even carry out physical attacks by controlling vehicles or operating weapons.
“They may create deepfakes that show scenarios that never happened to damage someone’s reputation or cause wars.”
With AI bots potentially having the ability in the future to carry out instructions and demands faster than humans, the scope for disrupting economies or inciting hate as part of a political ploy, for example, is huge.
“The frequent fear that comes up is that bots may become self-aware and decide serving humans is not worthwhile,” adds Misra.
“Maybe they will take harmful actions towards humans in an attempt to reach an ultimate goal, ironically supplied by humans themselves.
“Based solely on the current trends of technology advancements, I think the chances of realization of the latter fear might be much more as compared to the former in the future.”
HOW TO INTERACT WITH CHATBOTS
Here’s some advice from The Sun’s tech expert Sean Keach…
The best way to interact with chatbots is to treat them like a total stranger.
You (hopefully) wouldn’t dish out sensitive details about your life to a random person on the internet.
Chatbots are no different – they talk like a human, and you don’t know where the info you share will end up.
Don’t be fooled by the fact that they can come across like a trusted friend or colleague.
In fact – and sorry to say – chatbots don’t care about you at all. So they don’t have your best interests at heart. They don’t have a heart!
It’s just lines of code simulating a human, so remember that if you’re tempted to pour your heart out to what is little more than a smart app.
Chatbots can be immensely powerful and help you with difficult problems – even personal ones – but keep everything anonymous.
Don’t share specifics about your life, and try to sign up to chatbots with info that doesn’t give away exactly who you are.
It’s especially important not to share info about your job with a chatbot, as you don’t want to land yourself in hot water professionally.
Above all, don’t allow chatbots to build up a picture of who you are, because that could eventually be used against you.