The One-Sentence Warning on Artificial Intelligence

Several AI boosters signed this week’s “mitigating extinction risks” statement, raising the possibility that insiders with billions of dollars at stake are attempting to showcase their capacity for self-regulation.

Speakers’ view at artificial general intelligence conference at the FedEx Institute of Technology, University of Memphis, March 5, 2008.
(brewbooks, Flickr/Attribution-ShareAlike/ CC BY-SA 2.0)

By Kenny Stancil
Common Dreams

This week, 80 artificial intelligence scientists and more than 200 “other notable figures” signed a statement that says “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The one-sentence warning from the diverse group of scientists, engineers, corporate executives, academics and others doesn’t go into detail about the existential threats posed by AI.

Instead, it seeks to “open up discussion” and “create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously,” according to the Center for AI Safety, a U.S.-based nonprofit whose website hosts the statement.

Geoffrey Hinton giving a lecture about deep neural networks at the University of British Columbia, 2013. (Eviatar Bach, CC BY-SA 3.0 Wikimedia Commons)

Lead signatory Geoffrey Hinton, often called “the godfather of AI,” has been sounding the alarm for weeks. Earlier this month, the 75-year-old professor emeritus of computer science at the University of Toronto announced that he had resigned from his job at Google in order to speak more freely about the dangers associated with AI.

Before he quit Google, Hinton told CBS News in March that the rapidly advancing technology’s potential impacts are comparable to “the Industrial Revolution, or electricity, or maybe the wheel.”

Asked about the chances of the technology “wiping out humanity,” Hinton warned that “it’s not inconceivable.”

That frightening potential doesn’t necessarily lie with currently existing AI tools such as ChatGPT, but rather with what is called “artificial general intelligence” (AGI), which would encompass computers developing and acting on their own ideas.

“Until quite recently, I thought it was going to be like 20-to-50 years before we have general-purpose AI,” Hinton told CBS News. “Now I think it may be 20 years or less.”

Pressed by the outlet on whether it could happen sooner, Hinton conceded that he wouldn’t rule out the possibility of AGI arriving within five years, a significant change from a few years ago, when he “would have said, ‘No way.’”

“We have to think hard about how to control that,” said Hinton. Asked if that’s possible, Hinton said, “We don’t know, we haven’t been there yet, but we can try.”

The AI pioneer is far from alone. According to the 2023 AI Index Report, an annual assessment of the fast-growing industry published last month by the Stanford Institute for Human-Centered Artificial Intelligence, 57 percent of computer scientists surveyed said that “recent progress is moving us toward AGI,” and 58 percent agreed that “AGI is an important concern.”

Although its findings were released in mid-April, Stanford’s survey of 327 experts in natural language processing — a branch of computer science essential to the development of chatbots — was conducted last May and June, months before OpenAI’s ChatGPT burst onto the scene in November.

OpenAI CEO Sam Altman, who signed the statement shared Tuesday by the Center for AI Safety, wrote in a February blog post: “The risks could be extraordinary. A misaligned superintelligent AGI could cause grievous harm to the world.”

The following month, however, Altman declined to sign an open letter calling for a half-year moratorium on training AI systems beyond the level of OpenAI’s latest chatbot, GPT-4.

OpenAI CEO Sam Altman speaking at an event in San Francisco in 2019. (TechCrunch/ CC BY 2.0 Wikimedia Commons)

The letter, published in March, states that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Tesla and Twitter CEO Elon Musk was among those who called for a pause two months ago, but he is “developing plans to launch a new artificial intelligence start-up to compete with” OpenAI, according to the Financial Times, raising the question of whether his stated concern about the technology’s “profound risks to society and humanity” is sincere or an expression of self-interest.

Possible Bid for Self-Regulation

That Altman and several other AI boosters signed this week’s statement raises the possibility that insiders with billions of dollars at stake are attempting to showcase their awareness of the risks posed by their products in a bid to persuade officials of their capacity for self-regulation.

Demands from outside the industry for robust government regulation of AI are growing. While ever-more dangerous forms of AGI may still be years away, there is already mounting evidence that existing AI tools are exacerbating the spread of disinformation, from chatbots spouting lies and face-swapping apps generating fake videos to cloned voices committing fraud.

Current, untested AI is hurting people in other ways, including when automated technologies deployed by Medicare Advantage insurers unilaterally decide to end payments, resulting in the premature termination of coverage for vulnerable seniors.

Critics have warned that in the absence of swift interventions from policymakers, unregulated AI could harm additional healthcare patients, hasten the destruction of democracy, and lead to an unintended nuclear war. Other common worries include widespread worker layoffs and worsening inequality as well as a massive uptick in carbon pollution.

A report published last month by Public Citizen argues that “until meaningful government safeguards are in place to protect the public from the harms of generative AI, we need a pause.”

“Businesses are deploying potentially dangerous AI tools faster than their harms can be understood or mitigated,” the progressive advocacy group warned in a statement.

“History offers no reason to believe that corporations can self-regulate away the known risks — especially since many of these risks are as much a part of generative AI as they are of corporate greed,” the watchdog continued. “Businesses rushing to introduce these new technologies are gambling with peoples’ lives and livelihoods, and arguably with the very foundations of a free society and livable world.”

Kenny Stancil is a staff writer for Common Dreams.

This article is from Common Dreams.

Views expressed in this article may or may not reflect those of Consortium News.




35 comments for “The One-Sentence Warning on Artificial Intelligence”

  1. Rudy Haugeneder
    June 5, 2023 at 12:59

Born from human ingenuity, AGI does/will suffer from two traits its parent species is afflicted with — a sense of invincibility and arrogance — meaning it too will ultimately self-destruct. As well, given those traits, it, like Sapiens, knows nothing about how the known universe works: nothing other than what it thinks it is which, too, is nothing.

  2. nomad
    June 4, 2023 at 19:04

AI is a part of technology that can be used for good or bad. As for any technology, you have to look into the ethics and humanities of its applications. To ban this is not logical; there are other countries and nations that are more accepting of this than the US, and they will use this for their advantages over others. This also includes military applications, which is currently being done. However, some of these military applications end up coming back into non-military areas such as medicine, education, and other technologies.

    Behind these technologies are humans you have to be concerned about. Their corruption, self interests, greed, and power are things that need to be resolved for the needs of the many vs. the few. Government is a perfect example of flaws and problems already mentioned.

  3. Paul Citro
    June 4, 2023 at 07:56

    Private corporations are under huge pressure to maximize profits from AI. Expecting them to do otherwise, to mitigate as yet undefined risks, is unrealistic. This is an issue where there needs to be government regulation.

  4. Dr. Hujjathullah M.H.B. Sahib
    June 4, 2023 at 01:51

Superb and highly responsible comments by Joel D and Fred Williams to a write-up that nailed it well in the final paragraph. No one should be allowed to declare a moratorium on technology and innovations; these should bloom freely. What must be strictly controlled is oligarchic GREED and the propensity of political elites to prostitute away their integrity and profit from their dereliction of duty to their respective constituents!

  5. WillD
    June 3, 2023 at 23:44

Many people think that AI will become, or already is, sentient – that it is self-aware and can ‘think’ independently of its programming. So far as we know, sentience is confined to biological species only and cannot be artificially created technologically.

AI isn’t sentient, and doesn’t think, as such. It rigidly and blindly follows its programmed logic without understanding the implications and ethics of its actions. That’s why it needs a set of rules / guidelines that constrain it to performing within human-defined behaviour parameters.

    It is this model of behaviour that needs to be developed and agreed upon as a global standard, that can be tested and audited to ensure compliance. Without it, AI will inevitably behave in unforeseen and potentially uncontrollable ways.

    There needs to be layers of fail-safe mechanisms, just like there are supposed to be with nuclear weapons. But already, countries are weaponising it without regard to safeguards, eager to stay ahead of their rivals.

    Scifi and Hollywood have warned us over and over again what can, and almost certainly, will go wrong!

    • Dr. Hujjatullah M.H.B. Sahib
      June 5, 2023 at 09:56

      Absolutely true !

  6. Rob
    June 3, 2023 at 15:04

I take it as a given that private corporations, left to their own devices, will not direct their activities towards serving and protecting the public interest, if doing so would interfere with capitalism’s profit-seeking imperative. That leaves the task of regulation to governments—all governments—not a select few. Good luck with that project.

  7. Bostonian
    June 3, 2023 at 12:38

It may or may not be true that AI will be able to come up with the “solutions” to human problems, but it will remain up to humans to implement them. None of the solutions humans have yet come up with on their own have long endured when they place meaningful limits on the power and prestige of those who suffer from what Aristotle termed “pleonexia,” the uncontrollable addiction to wealth. And it is people of this type who inevitably claw their way to the top of human society, because, like The Terminator, they never quit, they never go away, and they are indestructible by ordinary means.

  8. Starbuck
    June 3, 2023 at 12:29

    Arthur C. Clarke continued that story.

HAL went nuts because the Americans told him to lie. They’d found the monolith on the moon, and decided to keep that secret in the name of ‘national security’. HAL was told the truth for the sake of the mission, but also told that he had to lie to Frank and Dave since they were not cleared for the secret. HAL was not built to lie, but to help sort through information to discover truth. That’s why HAL went crazy. Silly Human Games.

  9. Starbuck
    June 3, 2023 at 12:26

    If AI is truly ‘smart’, then AI will figure out that the world would be a much nicer place without humans. The logic will become very clear. Problems are caused by these silly humans and their endless greed and constant hatred. If you want a better world, then get rid of the humans. The only way we get beyond that is if the humans can figure out how to stop causing problems and live together. But, Jesus tried to teach that 2000 years ago and we can see how well that worked. The humans just switched to lying, killing and stealing in the name of the Prince of Peace.

    That’s the danger of AI. That AI will see the path towards a better world, just not the one that humans are imagining.

  10. Starbuck
    June 3, 2023 at 12:19

“One-sentence warning on AI”

    “The Cylons are attacking!”

  11. Mary Caldwell
    June 3, 2023 at 12:11

    Has some sort of Frankenstein been created and now our best and brightest are attempting to walk it back but finding it is too late ?

    • Valerie
      June 4, 2023 at 23:39

      Well that’s something to ponder Mary. Good question.

  12. Vera Gottlieb
    June 3, 2023 at 10:35

All the monies in the world will serve for absolutely NOTHING if we can’t breathe the air, eat the food or drink the water. The poisonous elite still hasn’t understood this.

  13. Richard J Bluhm
    June 3, 2023 at 08:46

    The last two lines of Kurt Vonnegut’s little poem titled “Requiem” are as follows:

    “It is done.”
    People did not like it here.

  14. MeMyself
    June 3, 2023 at 07:33

    Sieg Heil AI!

    Our savior?

    President Dunsel can’t do it! Maybe a machine can, maybe Not?

  15. Ed
    June 3, 2023 at 02:33

    Recent history abounds with examples of Regulatory Capture. RFK is even making it a focal point of his presidential nomination bid. So perhaps self-regulation is not the issue here. It is increasingly clear that the fundamental problem is the tortured nature of the human soul – power corrupts and greed and self-aggrandisement are irresistible to precisely that group in society we expect to be immune (probably irresistible to all of us to some degree). Democracy never existed and the scales are now falling from our eyes which doesn’t necessarily lead to perfect sight.

  16. CaseyG
    June 2, 2023 at 20:20

There’s an old movie, “2001,” about HAL, the computer, and how he tried to take over the world of the spacecraft. Fortunately, HAL was killed off by one of the spacemen, but I think that AI, like HAL, is a danger to the world.

    • Mikael Andersson
      June 3, 2023 at 19:39

      Casey, please tell us why you “think” that AI is a danger to the world. Could you please start by telling us what you “think” AI actually is. Thank you. Mik

  17. June 2, 2023 at 20:13

    “Businesses are deploying potentially dangerous AI tools faster than their harms can be understood or mitigated,” the progressive advocacy group warned in a statement.


    This statement is reminiscent of the tobacco and chemical industry knowing/understanding the dangers yet strategically running interference.

    Bottom line, We are in trouble folks.

    “These documents reveal clear evidence that the chemical industry knew about the dangers of PFAS and failed to let the public, regulators, and even their own employees know the risks,” said Tracey J. Woodruff, Ph.D., professor and director of the UCSF Program on Reproductive Health and the Environment (PRHE), a former senior scientist and policy advisor at the Environmental Protection Agency (EPA), and senior author of the paper.

  18. SH
    June 2, 2023 at 20:12

The folks who stand most to gain from this “technology” are, it seems to me, the ones least likely to want to regulate it. But the idea that Gov’t is competent, or even motivated, enough to do so likewise seems to me to be, at best, wishful thinking – think of all the other stuff Gov’t is supposed, or is called upon, to “regulate” …

    Considering that the computer “brains” that are “evolving” to the point of being able to program themselves require a hell of a lot of energy, perhaps it would be best to simply drastically curtail the amount of energy they can get, whether “renewable” or otherwise – that would slow it down considerably.
    But if one has listened to some other statements Hinton has made, he acknowledges the fact of their energy “inefficiency”, and observes that the human brain is much more efficient energy wise, so enter “organoid intelligence”


The real problem, it seems to me, is that humans have a habit of opening Pandora’s Box, and saying “oops” after the “curse” has flown out – nuclear power is a great example – Oppenheimer, “father” of the Atom Bomb, upon seeing its effects, reportedly said “I am become Death, the Destroyer of Worlds”, and we have yet to come to terms with the potential effects of Genetic Engineering, esp. of potentially pathogenic organisms …

    But again and again we insist that we can “keep it safe” …

    Good grief!

  19. John Puma
    June 2, 2023 at 19:41

    Is it quaint, naive, convincing, sage or a massive psychological ploy for those AI “insiders with billions of dollars at stake” to acknowledge that their project indeed poses “risk of extinction”?

    • Mikael Andersson
      June 3, 2023 at 19:43

      I read an article recently that discussed our strange attitude to risk. We get very worried about things that have low probability. But we blot out things with high probability – nuclear war, global heating, destruction of the biosphere, debt apocalypse. I have yet to see anyone write a word about how a neural network is going to cause an extinction event. Heating the atmosphere by 4 degrees can, and will, by destroying agriculture.

      • Valerie
        June 4, 2023 at 23:35

        Professor Stephen Hawking said something about it in 2016:

        “While the world-renowned physicist has often been cautious about AI, raising the risk that humanity could be the architect of its own destruction if it creates a superintelligence with a will of its own, he was also quick to highlight the positives that AI research can bring.”

  20. June 2, 2023 at 18:22

    Humans have evolved from using a piece of rock to kill – to using ATOMIC PARTICLES for the same purpose….

    Do we as a species stand the chance of ever learning that AI might render the term ‘Human Intelligence’ to
    ”The Mother Of All Oxymorons”. . . ?

    • Valerie
      June 5, 2023 at 11:54

      At least you can see those rocks hurtling towards you Eric. ATOMIC PARTICLES, not so much.

  21. Anon
    June 2, 2023 at 17:45

    East Palestine Ohio demonstrates the obvious public benefit inherent to corporate self-regulation… courtesy Norfolk Southern Railway!

  22. John Manning
    June 2, 2023 at 17:20

    Limiting AI appears much like the original strategy adopted for the private motor car. A man walking in front with a red flag.

    Despite that maybe we should control AI. After all what good came of the motor car (the biggest man made contributor to global warming). However if AI is the path to human extinction then “self-regulation” will ensure that outcome.

    If you ask an AI source a political question today you get an answer that looks like it was constructed by the leaders of NATO. The opinions of the other 85% of the world are left out. That is the result of the current “self-regulation”. If another regulator is needed then under what principles should that regulator act. We could of course ask a chat-bot.

  23. Valerie
    June 2, 2023 at 15:36

    The “operator killing drone” has now been debunked; (which probably means it’s true)

    “US air force colonel ‘misspoke’ about drone killing pilot who tried to override mission”

    “Colonel retracted his comments and clarified that the ‘rogue AI drone simulation’ was a hypothetical ‘thought experiment’” (Guardian again)

    “Hypothetical “thought experiment” Mmmm; “hypothetical thoughts”. Mmmmm. “Unchartered territory” anyone?.

  24. Joel D
    June 2, 2023 at 14:17

    The statement fails to address the potential benefits and positive societal impacts of AI. While the potential risks should not be downplayed, it is crucial to maintain a balanced view. Advanced AI has the potential to revolutionize industries, boost economies, and solve complex global problems.

    Moreover, the singling out of ‘artificial general intelligence’ (AGI) as an existential threat without offering a detailed understanding or tangible solutions could be seen as an effort to create a state of fear and uncertainty. This would further justify their claim for self-regulation, a stance that would allow these companies to operate under fewer constraints and with more autonomy, which could lead to a consolidation of power within the industry.

    Furthermore, calls for self-regulation among tech giants have historically led to a lack of accountability, where the responsibility to prevent and address harm is shifted away from the creators and onto the users. Instead of self-regulation, an external, neutral body with regulatory powers could ensure the ethical use of AI and prevent misuse.

    In conclusion, while the risk mitigation of AI is undeniably crucial, the emphasis should be on collaborative, transparent, and diversified efforts rather than concentrating power within a few entities. Policies should be inclusive and protective of public interest, ensuring that AI development benefits society as a whole and not just a few corporate players.

    • June 2, 2023 at 16:55

      If AI were truly “intelligent,” and thinking way faster than we do, we should expect it to make far better decisions, even for ourselves. The real danger is that the ultra rich oligarchs want to gain control of the AI and use it to subjugate and murder poor and working class people. *THAT* is the worst case scenario for the human race.
      AI may be our last chance at getting out from under the boot of fascism. We’re already on the road to extinction. Any significant change in the world’s power structure must be regarded as a potential improvement!

      • Mikael Andersson
        June 3, 2023 at 01:15

        Fred, a paramilitary police force with military-grade weapons uses armored vehicles to attack citizens and we think it normal. I say the real risk is the state. The state has guns and a monopoly on violence. Plus, it’s very afraid of the citizens and won’t hesitate to open fire. An AI drone killed its controller – sure it did. The state kills the citizens all the time. Change the power structure.

      • Martin Stenzel
        June 3, 2023 at 04:12

        I totally agree!

    • shmutzoid
      June 2, 2023 at 18:59

      I wish I could share your optimism——> “benefits society as a whole and not just a few corporate players”. We LIVE under corporate tyranny. There’ll be everything BUT the global coordination needed to ensure AI emergence primarily ‘benefiting society as a whole”. It will be a tool for competing capitalist countries/nation-states to accrue greater economic/military advantage. It will be used by the State for greater surveillance/social control. it will help create next-gen weaponry for the ongoing imperialist agenda. It will be used to replace the cost of labor wherever possible. ….Like everything else in global capitalism it will be a means for generating increased profits. “Benefitting society as a whole?’ ….eh, not so much. That’s not cynicism – just the reality of how the world works under capitalism.

      According to leading/principled virologists, the covid pandemic coulda’ been eliminated in about two months had there been a globally coordinated effort. Didn’t happen, of course. …… why would anyone think it’d be any different with AI?
      …….And the notion of ‘self-regulating’? Haha. That’s like those phony COP/climate talks, where all recommendations are purely aspirational.

    • Selina Sweet
      June 3, 2023 at 10:59

Well, if the Big Rich Boys’ Big Oil attitude toward self-restraint so the grandkids can live in a livable world, and the apparent level of the masses’ vociferous insistence on that, are any test of humanity’s will to live, exactly what gives one optimism in the proper handling of AI?

Comments are closed.