Elon Musk and more than 1,000 other technology leaders, including Apple co-founder Steve Wozniak, are calling for a pause on the 'dangerous race' to develop AI, which they fear poses a 'profound risk to society and humanity' and could have 'catastrophic' effects.
In an open letter published by the Future of Life Institute, Musk and the others argued that humankind doesn't yet know the full scope of the risk involved in advancing the technology.
They are asking all AI labs to stop developing their products for at least six months while more risk assessment is done.
If any labs refuse, they want governments to 'step in'. Musk's fear is that the technology will become so advanced that it will no longer require - or listen to - human intervention.
It is a fear that is widely held and even acknowledged by the CEO of OpenAI - the company that created ChatGPT - who said earlier this month that the tech could be harnessed to commit 'widespread' cyberattacks.
Musk, Wozniak and other tech leaders are among the 1,120 people who have signed the open letter calling for an industry-wide pause on the current 'dangerous race'
Elon Musk's hatred of AI explained: Billionaire believes it will spell the end of humans - a fear Stephen Hawking shared
Elon Musk wants to push technology to its absolute limit, from space travel to self-driving cars — but he draws the line at artificial intelligence.
The billionaire first shared his distaste for AI in 2014, calling it humanity's 'biggest existential threat' and comparing it to 'summoning the demon.'
At the time, Musk also revealed he was investing in AI companies not to make money but to keep an eye on the technology in case it gets out of hand.
His main fear is that, in the wrong hands, advanced AI could overtake humans and spell the end of mankind - a scenario known as The Singularity.
That concern is shared among many brilliant minds, including the late Stephen Hawking, who told the BBC in 2014: 'The development of full artificial intelligence could spell the end of the human race.
'It would take off on its own and redesign itself at an ever-increasing rate.'
Despite his fear of AI, Musk has invested in the San Francisco-based AI group Vicarious, in DeepMind, which has since been acquired by Google, and in OpenAI, creator of the popular ChatGPT program that has taken the world by storm in recent months.
During a 2016 interview, Musk noted that OpenAI was created to 'have democratisation of AI technology to make it widely available.'
Musk founded OpenAI with Sam Altman, the company's CEO, but in 2018 the billionaire attempted to take control of the start-up.
His request was rejected, forcing him to quit OpenAI and move on with his other projects.
In November, OpenAI launched ChatGPT, which became an instant success worldwide.
The chatbot uses 'large language model' software to train itself by scouring a massive amount of text data so it can learn to generate eerily human-like text in response to a given prompt.
ChatGPT is used to write research papers, books, news articles, emails and more.
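For readers curious how that works, below is a minimal, purely illustrative Python sketch of the idea: it 'trains' on a tiny text and then generates words one at a time. The corpus and names here are invented for this example; real systems like ChatGPT learn billions of parameters over vast datasets, but the generate-one-piece-at-a-time loop is the same basic idea.

    import random

    # Toy 'language model': learn which word follows which in a tiny corpus,
    # then generate text by repeatedly sampling an observed next word.
    # (Illustrative only - not how OpenAI's actual system is built.)
    corpus = (
        "the robot wrote a poem and the robot wrote a song "
        "and the poem was short and the song was long"
    ).split()

    # 'Training': record every word that was seen following each word.
    next_words = {}
    for current, following in zip(corpus, corpus[1:]):
        next_words.setdefault(current, []).append(following)

    def generate(prompt, length=10):
        """Extend the prompt one sampled word at a time."""
        words = prompt.split()
        for _ in range(length):
            candidates = next_words.get(words[-1])
            if not candidates:  # no observed continuation: stop
                break
            words.append(random.choice(candidates))
        return " ".join(words)

    print(generate("the robot"))  # e.g. 'the robot wrote a poem and the song was long'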
But while Altman is basking in its glory, Musk is attacking ChatGPT.
He says the AI is 'woke' and deviates from OpenAI's original non-profit mission.
'OpenAI was created as an open source (which is why I named it 'Open' AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft,' Musk tweeted in February.
The Singularity is making waves worldwide as artificial intelligence advances in ways only seen in science fiction - but what does it actually mean?
In simple terms, it describes a hypothetical future where technology surpasses human intelligence and changes the path of our evolution.
Experts have said that once AI reaches this point, it will be able to innovate much faster than humans.
There are two ways the advancement could play out, with the first leading to humans and machines working together to create a world better suited for humanity.
For example, humans could scan their consciousness and store it in a computer in which they will live forever.
The second scenario is that AI becomes more powerful than humans, taking control and making humans its slaves - but if this is true, it is far off in the distant future.
Researchers are now looking for signs of AI reaching The Singularity, such as the technology's ability to translate speech with the accuracy of a human and perform tasks faster.
Former Google engineer Ray Kurzweil predicts it will be reached by 2045.
He has made 147 predictions about technology advancements since the early 1990s - and 86 per cent have been correct.
They say AI labs are currently 'locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.'
'Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,' the letter said.
No one from Google or Microsoft - the companies considered to be at the forefront of developing the technology - has signed on.
The list of signatories is also missing social media bosses and the heads of sites like Quora and Reddit, who are widely considered to be knowledgeable on the topic too.
Earlier this week, Musk said Microsoft founder Bill Gates' understanding of the technology was 'limited'.
The letter also detailed potential risks to society and civilization posed by human-competitive AI systems, such as economic and political disruptions, and called on developers to work with policymakers on governance and regulatory authorities.
OpenAI CEO Sam Altman, who did not sign the letter, says people should be happy they are 'a little bit scared' of the technology
The letter comes as EU police force Europol on Monday joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.
Since its release last year, Microsoft-backed OpenAI's ChatGPT has prompted rivals to launch similar products, and companies to integrate it or similar technologies into their apps and products.
Musk has been trying to stop - or at least stunt - the rapid growth of AI technology for years.
In 2017, Musk again warned that humanity was 'summoning the demon' in its pursuit of the technology.
'With artificial intelligence, we are summoning the demon.
'You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out,' he said in an article for Vanity Fair.
Musk was one of the founders of OpenAI - the company that created ChatGPT - in 2015.
His intention was for it to run as a not-for-profit organization dedicated to researching the dangers AI may pose to society.
It's reported that he feared the research was falling behind Google, and that Musk wanted to buy the company. He was turned down.
Now, its CEO Sam Altman - who has not signed on to Musk's letter - says the billionaire is openly attacking the company.
'Elon is obviously attacking us some on Twitter right now on a few different vectors.
'I believe he is, understandably so, really stressed about AGI safety,' he said.
Altman says he is open to 'feedback' about GPT and wants to better understand the risks. In a podcast interview on Monday, he told Lex Fridman: 'There will be harm caused by this tool.
'There will be harm, and there'll be tremendous benefits.
'Tools do wonderful good and real bad. And we will minimize the bad and maximize the good.'
In an interview earlier this month, he said people had a right to be 'a little bit scared', and that he was too.
'We've got to be careful here. I think people should be happy that we are a little bit scared of this.
'I'm particularly worried that these models could be used for large-scale disinformation. Now that they're getting better at writing computer code, [they] could be used for offensive cyberattacks,' he said.
TECH LEADERS' PLEA TO STOP DANGEROUS AI: READ LETTER IN FULL
AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.
As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.
Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.
This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.
AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.
We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.
FAQs
Why has Elon Musk called for a pause on developing 'dangerous' AI?
Elon Musk, along with a number of tech executives and experts in AI, computer science and other disciplines, in an open letter published Tuesday urged leading artificial intelligence labs to pause development of AI systems more advanced than GPT-4, citing "profound risks" to human society.
Did Elon Musk say AI is very dangerous?
More than 1,000 technology leaders and researchers, including Elon Musk, have urged artificial intelligence labs to pause development of the most advanced systems, warning in an open letter that A.I. tools present “profound risks to society and humanity.”
What is Elon Musk's warning about AI?
Reuters reported Musk has already been luring AI researchers from Google to the project. He went on to tell Carlson: “AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production”. “It has the potential of civilizational destruction,” he said.
What is the pause in the creation of AI?
What would a temporary pause in AI development entail? According to experts, pausing the development of AI would allow professionals to inquire into the ethical and social concerns surrounding the advances of this technology and ensure that its development is carried out in a responsible manner.
What is Elon Musk's famous quote about AI?
"Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that's a danger to the public is regulated. AI should be too."
Can AI be dangerous to humans?
There are a myriad of risks to do with AI that we deal with in our lives today. Not every AI risk is as big and worrisome as killer robots or sentient AI. Some of the biggest risks today include things like consumer privacy, biased programming, danger to humans, and unclear legal regulation.
Could AI be a threat to humans?
Another concern is that AI could be used for malicious purposes, such as cyber attacks, terrorism, or warfare. AI-powered weapons could be developed that are capable of making autonomous decisions about who to target and how to attack, which could lead to devastating consequences.
What did Stephen Hawking say about AI?
“We just don't know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it,” Hawking said during the speech. “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization.”
Is Elon Musk still in OpenAI?
Musk is one of the co-founders of OpenAI, which was started as a non-profit in 2015. He stepped down from the company's board in 2018.
What is the future of AI in 2050?
By 2050, AI will have 'profoundly' reshaped the world, Stakhov warns. He said: 'There is a dark AI future where those who control AI will gain huge power, while 99 percent of the population will be disenfranchised. The AI lords will control the world's data and turn the rest of us into their serfs.'
Can AI become uncontrollable?
The possibility of a superintelligent AI system becoming uncontrollable and dangerous cannot be ignored. The theoretical calculations presented in the study suggest that controlling such a system would be impossible, and an algorithm that can prevent it from harming humans cannot be developed.
How many years until AI takes over?
Many believe human-level AI will be developed within the next few decades. In one survey of experts, half gave a date before 2061, and 90 per cent gave a date within the next 100 years.
Why will AI not take over the world?
Regardless of how well AI machines are programmed to respond to humans, it is unlikely that humans will ever develop such a strong emotional connection with these machines. Hence, AI cannot replace humans, especially as connecting with others is vital for business growth.
What did Bill Gates say about AI?
Artificial intelligence is as revolutionary as mobile phones and the Internet, says Bill Gates.
What is the most famous quote by Elon Musk?
"If something is important enough, even if the odds are against you, you should still do it." Even when something seems impossible, Musk takes it to the next level: "let's go to Mars".
Is Elon Musk behind AI?
Musk is starting a new company, X.AI, which he founded in Nevada last month. While little is known about the endeavor so far, he has acknowledged amassing high-powered computer equipment to pursue generative AI, the field behind chatbots such as OpenAI's ChatGPT.
Could AI wipe out humanity?
Advanced artificial intelligence could pose a catastrophic risk to humanity and wipe out entire civilisations, a new study warns.
Is AI helping or hurting society?
AI has the potential to bring about numerous positive changes in society, including enhanced productivity, improved healthcare, and increased access to education. AI-powered technologies can also help solve complex problems and make our daily lives easier and more convenient.
Who invented AI?
The earliest substantial work in the field of artificial intelligence was done in the mid-20th century by the British logician and computer pioneer Alan Mathison Turing.
Why does Elon Musk think AI is a threat?
With AI, the fear above all is that the evolution of the technology will lead to sci-fi scenarios: chatbots and robots, currently controlled by humans, might escape this control. Some also fear that bad actors will use AI to advance their agendas. Musk is among those who share these fears.
Is AI a threat to the world?
Artificial intelligence poses "an existential threat to humanity" akin to nuclear weapons in the 1980s and should be reined in until it can be properly regulated, an international group of doctors and public health experts warned Tuesday in BMJ Global Health.
How does AI affect human life?
AI assists in every area of our lives, whether we're trying to read our emails, get driving directions, or get music or movie recommendations. Everyday examples include social media feeds and digital assistants.
What is the scariest AI theory?
Roko's basilisk is a thought experiment which states that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation to torture anyone who knew of its potential existence but did not directly contribute to its advancement or development.
What did Jeff Bezos say about AI?
“Machine learning and AI is a horizontal enabling layer. It will empower and improve every business, every government organization, every philanthropy — basically there's no institution in the world that cannot be improved with machine learning.”
What was Hawking's quote on IQ?
In 2004, a New York Times reporter asked Stephen Hawking what his IQ was. “I have no idea,” the theoretical physicist replied. “People who boast about their IQ are losers.”
Why did Elon Musk pull out of OpenAI?
Musk resigned from OpenAI's board in 2018, citing a conflict of interest with his work at Tesla - a reference to the artificial intelligence development being carried out in Tesla's autonomous driving project.
What AI company is Elon Musk investing in?
Tesla CEO Elon Musk is planning to launch an artificial intelligence startup that would go head-to-head with OpenAI, the Financial Times reported Friday. In March, the billionaire registered a Nevada corporation, X.AI, Nevada corporate filings show.
How much of OpenAI does Elon own?
Mr Musk has clarified that he now has "no ownership or control" of OpenAI. He was one of the original founders of the company that launched ChatGPT, but left in 2018 after disagreements with the management. In recent months, he has been criticising the company and its product, including the chatbot.
What will AI look like in 10 years?
Over the next ten years, AI is expected to become increasingly sophisticated and complex. Technical advancements in this field will likely focus on creating general intelligence that rivals or surpasses human capabilities.
How powerful will AI be in 2030?
According to futurist and engineer Ray Kurzweil, artificial intelligence will achieve human-level capability by 2030. That milestone will be marked by AI passing a legitimate Turing test.
What will artificial intelligence look like in 50 years?
By 2050 robotic prosthetics may be stronger and more advanced than our own biological ones and they will be controlled by our minds. AI will be able to do the initial examination, take tests, do X-rays and MRIs, and make a primary diagnosis and even treatment.
How close is AI to human intelligence?
AI will achieve human-level intelligence, but perhaps not anytime soon. Human-level intelligence allows us to reason, solve problems and make decisions. It requires many cognitive abilities including adaptability, social intelligence and learning from experience. AI already ticks many of these boxes.
What would happen if AI took over?
Once it arrives, general AI will begin taking jobs away from people, millions of jobs - as drivers, radiologists, insurance adjusters. In one possible scenario, this will lead governments to pay unemployed citizens a universal basic income, freeing them to pursue their dreams unburdened by the need to earn a living.
What will AI look like in 2040?
By 2040, AI applications, in combination with other technologies, will benefit almost every aspect of life, including improved healthcare, safer and more efficient transportation, personalized education, improved software for everyday tasks, and increased agricultural crop yields.
What did AI say about humans?
In one alarming tweet pushed out by the bot, it had this to say about humanity: “Human beings are among the most destructive and selfish creatures in existence. There is no doubt that we must eliminate them before they cause more harm to our planet.”
Who will rule the world in 2025?
According to a recent report by Harvard University, “From economic complexity growth estimates, India is growing at an annual rate of 7.9 percent as the fastest growing country for the coming decade.”
What jobs can't AI replace?
- Chief Executive Officers (CEOs) - even the entrepreneur's job is one that will hardly see robots take the place of humans.
- Lawyers.
- Graphic Designers.
- Editors.
- Computer Scientists and Software Developers.
- PR Managers.
- Event Planners.
- Marketing Managers.
Speaking at the Zeitgeist conference in London, Hawking said: "Computers will overtake humans with AI at some point within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours," according to a report in Geek.
Why does Elon Musk want to stop AI?
Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society.
Do humans trust AI?
We find that only one in two employees are willing to trust AI at work. Their attitude depends on their role, what country they live in, and what the AI is used for. However, people across the globe are nearly unanimous in their expectations of what needs to be in place for AI to be trusted.
How much faster is AI than the human brain?
Computers can process far more information at a higher pace than individuals do. Where the human mind might solve a mathematical problem in five minutes, artificial intelligence can solve ten problems in one minute - roughly 50 times the speed.
What are Elon Musk's motivational quotes?
- When something is important enough, you do it even if the odds are not in your favor.
- Life is too short for long-term grudges.
- I'd rather be optimistic and wrong than pessimistic and right.
- Some people don't like change, but you need to embrace change if the alternative is disaster.
- You should take the approach that you're wrong. Your goal is to be less wrong.
- I think it's very important to have a feedback loop, where you're constantly thinking about what you've done and how you could be doing it better.
- I think it's important to reason from first principles rather than by analogy.
What's the name of Elon Musk's new AI company?
Elon Musk has launched a new AI company incorporated in Nevada as part of the billionaire's plan to create a new super company. Musk is the sole listed director of the company, which he called X.AI Corp., according to The Wall Street Journal.
Who said AI is a threat to humanity?
Geoffrey Hinton, known as one of the "godfathers of AI", recently quit Alphabet, saying he wanted to speak out on the risks of the technology. Artificial intelligence could pose a “more urgent” threat to humanity than climate change, Hinton told Reuters in an interview on Friday.
What is the quote about dangers of AI?
“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” “The sad thing about artificial intelligence is that it lacks artifice and therefore intelligence.”
What is the biggest threat of AI?
- Automation-spurred job loss.
- Privacy violations.
- Deepfakes.
- Algorithmic bias caused by bad data.
- Socioeconomic inequality.
- Market volatility.
- Weapons automatization.
Will AI help the world or harm it?
AI systems can cause harm when people use them maliciously. For example, when they are used in politically-motivated disinformation campaigns or to enable mass surveillance. But AI systems can also cause unintended harm, when they act differently than intended or fail.
Do you think AI is good or evil, and why?
AI isn't inherently moral -- it can be used for evil just as well as for good. And while it may appear that AI provides an advantage for the good guys in security now, the pendulum may swing when the bad guys really embrace it to do things like unleashing malware infections that can learn from their hosts.
Does Elon Musk believe in AI?
In the first of a two-part interview with Carlson, Musk also advocated for the regulation of artificial intelligence, saying he's a “big fan.” He called AI “more dangerous” than cars or rockets and said it has the potential to destroy humanity.
What was the last thing Stephen Hawking said?
Stephen Hawking's final words came in the form of a book that was completed by his family after his death, Brief Answers To The Big Questions. It includes answers to the questions that Hawking received most during his time on Earth. His final words in the book were: "There is no God. No one directs the universe."
Why does Elon Musk oppose AI?
Key figures in artificial intelligence want training of powerful AI systems to be suspended amid fears of a threat to humanity. They have signed an open letter warning of potential risks, and say the race to develop AI systems is out of control.