Is the further development of artificial intelligence (AI) worth the trouble? On 29 March 2023, in an open letter published on the Future of Life Institute’s website, about 1,800 scientists, historians, philosophers, billionaires and others – let us call them the Tech Nobility – called for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4 […]. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
In reaction to this letter, decision theorist Eliezer Yudkowsky wrote that the call in the open letter does not go far enough, insisting that governments should:
“Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs… Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue data centre by airstrike.”
Calls for such extreme measures against AI are based on the fear that AI poses an existential risk to humanity. Following the release of large language models (LLMs) by OpenAI (GPT-4) and Microsoft (Bing), there is growing concern that further versions could move us towards an AI “singularity” – the point at which AI becomes as smart as humans and can self-improve. The result is runaway intelligence. An intelligence explosion.
Hypotheses for catastrophes
There are many ways in which this could spell doom for humanity. All of these are argued to be unavoidable by proponents of AI doom because we do not know how to align AI and human interests (the “alignment problem”) or how to control how AI is used (the “control problem”).
A 2020 paper lists 25 ways in which AI poses an existential risk. These can be summarised into four main hypothetical consequences that would be catastrophic.
One is that such a superintelligence causes an accident or does something with the unintended side-effect of curtailing humanity’s potential. An example is the thought experiment of the paper clip maximiser, in which an AI instructed to make paper clips converts all available resources – humanity included – into paper clips.
A second is that a superintelligent AI may pre-emptively strike against humanity because it may see humanity as its biggest threat.
A third is that a superintelligent AI takes over world government, merges all corporations into one “ascended” corporation, and rules forever as a singleton – locking humanity into a potential North Korean dystopia until the end of time.
A fourth is that a superintelligent AI may wire-head humans (as we wire-head mice) – somewhat akin to Aldous Huxley’s Brave New World, where humans are kept in a pacified condition, accepting their tech-ruled existence through a drug called Soma.
Read more in Daily Maverick: Artificial intelligence has a dirty little secret
Issuing highly publicised open letters on AI – like that of 29 March – is nothing new in the tech industry, the main beneficiary of AI. On 28 October 2015 we saw a similar grand public signing by much the same Tech Nobility – also published as an open letter on the Future of Life Institute’s website – in which they did not, however, call for a pause in AI research, but instead stated that “we recommend expanded research” and that the “potential benefits are huge, since everything that civilisation has to offer is a product of human intelligence”.
In eight short years the tech industry seems to have moved from hype to hysteria – calling not for further research to advance AI, but instead for airstrikes to destroy “rogue” data centres.
What is happening?
First, the hysteria surrounding AI has steadily risen to exceed the hype. This was to be expected, given humans’ cognitive bias towards bad news. After all, the fear that AI will pose an existential threat to humanity is deep-seated. Samuel Butler wrote an essay in 1863, titled “Darwin Among The Machines”, in which he predicted that intelligent machines would come to dominate:
“The machines are gaining ground upon us; day by day we are becoming more subservient to them… that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question.”
Not much different from Eliezer Yudkowsky writing in 2023. That the hysteria surrounding AI has steadily risen to exceed the hype is, however, not only due to human bias and a deep-seated fear of “The Machine”, but also because public distrust in AI has grown between 2015 and 2023.
None of the benefits touted in the 2015 open letter have materialised. Instead, we have seen AI prove of little value during the global Covid-19 crisis, a select few rich corporations grow richer and gain more monopoly power on the back of harvesting people’s private data, and the rise of the surveillance state.
At the same time, productivity, research efficiency, tech progress and science have all declined in the most advanced economies. People are more likely to believe the worst about AI, and the establishment of several institutes that earn their living from peddling existential risks just further feeds the number of newspaper articles that drive the hysteria.
Not only is the tech industry facing growing public distrust and increasing scrutiny by governments, but it has also taken serious knocks in recent months. These include more than 100,000 industry job cuts, the collapse of Silicon Valley Bank – the second-largest bank failure in US history – declining stock prices, and growing fears that the tech bubble is about to burst.
Underlying these cutbacks and declines is a growing realisation that new technologies have failed to meet expectations.
Read more in Daily Maverick: Why is everyone so angry at artificial intelligence?
The job cuts, bank failures and tech bubble problems reinforce the market’s assessment of an AI industry whose costs increasingly exceed its benefits.
AI is expensive – developing and rolling out LLMs such as GPT-4 and Bing requires investment: infrastructure costs in the billions of dollars and training costs in the millions. GPT-4 has been rumoured to have as many as 100 trillion parameters (OpenAI has not disclosed the figure), and its total training compute has been estimated at about 18 billion petaflops – in comparison, the famous AlphaGo, which beat the best human Go player, needed less than a million petaflops in compute.
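To make that scale gap concrete, a back-of-the-envelope calculation using the figures cited above (both are third-party estimates, not official disclosures) puts GPT-4’s training compute at more than four orders of magnitude above AlphaGo’s:

```python
# Back-of-the-envelope comparison of the training-compute estimates above.
# Neither figure is an official disclosure; both are third-party estimates.
gpt4_training_pflop = 18e9     # ~18 billion petaFLOP (estimate)
alphago_training_pflop = 1e6   # < 1 million petaFLOP (estimated ceiling)

ratio = gpt4_training_pflop / alphago_training_pflop
print(f"GPT-4's estimated training compute is ~{ratio:,.0f}x AlphaGo's")
```

On these estimates the ratio comes out at roughly 18,000 to one – the kind of gap that puts frontier-scale training beyond the budgets of most firms and governments.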
The point is that these recent LLMs are pushing against the boundaries of what can be thrown at deep learning methods, and they put sophisticated AI systems out of reach for most firms – and even most governments. Not surprisingly, then, the adoption of AI systems by firms in the US, arguably the country most advanced in terms of AI, has been very low: a US Census Bureau survey of 800,000 firms found that only 2.9% were using machine learning as recently as 2018.
AI’s existential risk is at present only in the philosophical and literary realms. This does not mean that the narrow AI we have cannot cause serious harm – there are many examples of Awful AI – so we should continue to be vigilant.
It also does not mean that some day in the future the existential risk will not be real – but we are still too far from this to know how to do anything sensible about it. The open letter’s call to “pause” AI for six months is more likely a response born of desperation in an industry that is running out of steam.
It is a perfect example of a virtue signal and an advertisement for GPT-4 (called a tool of hi-tech plagiarism by Noam Chomsky and a failure by Gary Marcus) – all rolled into one grand publicity stunt. DM
Wim Naudé is Visiting Professor in Technology, Innovation, Marketing and Entrepreneurship at RWTH Aachen University, Germany; Distinguished Visiting Professor at the University of Johannesburg; a Fellow of the African Studies Centre, Leiden University, the Netherlands; and an AI Expert at the OECD’s AI Policy Observatory, Paris, France.