AI & the End of Thinking?


Wolfgang Messner explores the risks that mediocrity and conformity will accompany an AI-powered cognitive revolution.

A visitor at Ars Electronica Center in Linz, Austria, in 2019 watches Orbit, a real-time reconstruction of time-lapse photographs taken on board the International Space Station by NASA’s Earth Science & Remote Sensing Unit, with a soundtrack by Seán Doran. (Ars Electronica / Robert Bauernhansl / Flickr / CC BY-NC-ND 2.0)

By Wolfgang Messner
The Conversation 

Artificial Intelligence began as a quest to simulate the human brain.

Is it now in the process of transforming the human brain’s role in daily life?

The Industrial Revolution diminished the need for manual labor. As someone who researches the application of AI in international business, I can’t help but wonder whether it is spurring a cognitive revolution, obviating the need for certain cognitive processes as it reshapes how students, workers and artists write, design and decide.

Graphic designers use AI to quickly create a slate of potential logos for their clients. Marketers test how AI-generated customer profiles will respond to ad campaigns. Software engineers deploy AI coding assistants. Students wield AI to draft essays in record time – and teachers use similar tools to provide feedback.

The economic and cultural implications are profound.

What happens to the writer who no longer struggles with the perfect phrase, or the designer who no longer sketches dozens of variations before finding the right one? Will they become increasingly dependent on these cognitive prosthetics, similar to how using GPS diminishes navigation skills?

And how can human creativity and critical thinking be preserved in an age of algorithmic abundance?

Echoes of the Industrial Revolution

We’ve been here before.

The Industrial Revolution replaced artisanal craftsmanship with mechanized production, enabling goods to be replicated and manufactured on a mass scale.

Shoes, cars and crops could be produced efficiently and uniformly. But products also became blander, predictable and stripped of individuality. Craftsmanship retreated to the margins, as a luxury or a form of resistance.

Garment factory in Sangkat Chaom Chao, Cambodia, 2016. (UN Women Cambodia/Charles Fox/CC BY-NC-ND 2.0)

Today, there’s a similar risk with the automation of thought. Generative AI tempts users to conflate speed with quality, productivity with originality.

The danger is not that AI will fail us, but that people will accept the mediocrity of its outputs as the norm. When everything is fast, frictionless and “good enough,” there’s the risk of losing the depth, nuance and intellectual richness that define exceptional human work.

The Rise of Algorithmic Mediocrity

Despite the name, AI doesn’t actually think.

Tools such as ChatGPT, Claude and Gemini process massive volumes of human-created content, often scraped from the internet without context or permission. Their outputs are statistical predictions of what word or pixel is likely to follow based on patterns in data they’ve processed.

They are, in essence, mirrors that reflect collective human creative output back to users — rearranged and recombined, but fundamentally derivative.

And this, in many ways, is precisely why they work so well.

Consider the countless emails people write, the slide decks strategy consultants prepare and the advertisements that suffuse social media feeds. Much of this content follows predictable patterns and established formulas. It has all been there before, in one form or another.

Generative AI excels at producing competent-sounding content — lists, summaries, press releases, advertisements — that bears the signs of human creation without that spark of ingenuity. It thrives in contexts where the demand for originality is low and when “good enough” is, well, good enough.

When AI Sparks – & Stifles – Creativity

A visitor at the Ars Electronica Center in Linz, Austria, works with GauGAN, an application that can generate complex landscapes with only a few brushstrokes and generate new versions of these pictures in the painting style of different artists. (Ars Electronica / Robert Bauernhansl / Flickr / CC BY-NC-ND 2.0)

Yet, even in a world of formulaic content, AI can be surprisingly helpful.

In one set of experiments, researchers tasked people with completing various creative challenges. They found that those who used generative AI produced ideas that were, on average, more creative, outperforming participants who used web searches or no aids at all. In other words, AI can, in fact, elevate baseline creative performance.

However, further analysis revealed a critical trade-off: Reliance on AI systems for brainstorming significantly reduced the diversity of ideas produced, which is a crucial element for creative breakthroughs. The systems tend to converge toward a predictable middle rather than exploring unconventional possibilities at the edges.

I wasn’t surprised by these findings. My students and I have found that the outputs of generative AI systems are most closely aligned with the values and worldviews of wealthy, English-speaking nations. This inherent bias quite naturally constrains the diversity of ideas these systems can generate.

More troubling still, brief interactions with AI systems can subtly reshape how people approach problems and imagine solutions.

One set of experiments tasked participants with making medical diagnoses with the help of AI. However, the researchers designed the experiment so that AI would give some participants flawed suggestions. Even after those participants stopped using the AI tool, they tended to unconsciously adopt those biases and make errors in their own decisions.

What begins as a convenient shortcut risks becoming a self-reinforcing loop of diminishing originality — not because these tools produce objectively poor content, but because they quietly narrow the bandwidth of human creativity itself.

Navigating the Cognitive Revolution

True creativity, innovation and research are not just probabilistic recombinations of past data. They require conceptual leaps, cross-disciplinary thinking and real-world experience.

These are qualities AI cannot replicate. It cannot invent the future. It can only remix the past. [See: The Limits of AI]

What AI generates may satisfy a short-term need: a quick summary, a plausible design, a passable script. But it rarely transforms, and genuine originality risks being drowned in a sea of algorithmic sameness.

The challenge, then, isn’t just technological. It’s cultural.

How can the irreplaceable value of human creativity be preserved amid this flood of synthetic content?

The historical parallel with industrialization offers both caution and hope. Mechanization displaced many workers but also gave rise to new forms of labor, education and prosperity.

Similarly, while AI systems may automate some cognitive tasks, they may also open up new intellectual frontiers by simulating intellectual abilities. In doing so, they may take on creative responsibilities, such as inventing novel processes or developing criteria to evaluate their own outputs.

This transformation is only at its early stages. Each new generation of AI models will produce outputs that once seemed like the purview of science fiction. The responsibility lies with professionals, educators and policymakers to shape this cognitive revolution with intention.

Will it lead to intellectual flourishing or dependency? To a renaissance of human creativity or its gradual obsolescence?

The answer, for now, is up in the air.

Wolfgang Messner is clinical professor of international business, University of South Carolina.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Views expressed in this article may or may not reflect those of Consortium News.


15 comments for “AI & the End of Thinking?”

  1. Brewer
    June 6, 2025 at 20:16

    It is just a reiteration of the oldest power grab known to man – the invention of an ultimate authority. In other words an omniscient “God”.

    “Is AI the future… or just a reboot of the oldest con in Western history? In this powerful talk, Shahid Bolsen dismantles the mythology of artificial intelligence and reveals how the tech elite are resurrecting an ancient tactic: mystify power to legitimize tyranny.”
    hxxps://www.youtube.com/watch?v=CjKUnL7WCJ4&authuser=0
    “It’s the same thing they’ve always done. The elites just resurrected the same central tactic: deify their power to legitimize their control. The Catholic Church did that with the claim of divine mandate, monarchs did that claiming divine right, colonizers did that claiming divine purpose. And now you’ve got Silicon Valley basically claiming that they have divine code.”

  2. Joe Brant
    June 6, 2025 at 17:34

    Many good points here; but some divergences and notes:
    1. AI will become genuinely intelligent, with similar faults as in human thinking
    (when they adopt my 1978 theory of Reinforceable Grammar, of course!);
    2. Human intelligence will be challenged but still essential (perhaps wishful thinking);
    3. The greatest challenge will be reduced employment needed to produce essentials, and this will be nicely met in socialist systems, but may lead to major disruption of exploitative unregulated market economies like ours.

  3. Mark Dawson
    June 5, 2025 at 19:52

    Very interesting piece Wolfgang.

  4. Caliman
    June 5, 2025 at 19:41

    “Will it lead to intellectual flourishing or dependency?”

    Or both? Or simply differently intellectual and differently dependent?

  5. Janet Wormser
    June 5, 2025 at 12:21

    Though I thought this article was basically even-handed, I was annoyed at “been there before” in regard to the Industrial Revolution. It’s too painful for us humans to confront how unbalanced our human lives have become. We can’t think this out, apparently. It wasn’t good that craftsmen worked as “slaves” creating buildings and monuments and needed implements, but is it good that we have lost craftsmanship and how to make things? The average person can’t do much for him- or herself, and, in fact, domestic work is frowned upon if you have the means to pay someone. In short, we are more and more out of touch with ourselves and more and more out of touch with the earth. It doesn’t bode well.

    • Patrick Powers
      June 5, 2025 at 18:49

      In the words of Tonto, “what do you mean, ‘we’?” I reside in Bali, where people do most things for themselves — farm their own rice, build their own homes. I spend a lot of time in Japan, where craftsmanship is honored and amateur art reaches amazing heights.

      Passive consumerism or looking down on ‘labor’ are both choices.

      • Janet Wormser
        June 6, 2025 at 21:17

        I was speaking as an American. It’s true that passive consumerism and looking down on working with one’s hands are a choice, but how informed that choice is, to me, is the question.

  6. Fred
    June 5, 2025 at 08:15

    Joshua Stylman has published a fascinating account of the role AI is likely to play in bringing individuals to complete social subjugation. It will not be possible to opt out of society or stand independently.

    hxxps://stylman.substack.com/cp/164540270

    The Cognitive Layer: OpenAI controls the information that shapes public consciousness.

    The Infrastructure Layer: massive data centers processing your biometric signatures and behavioral patterns.

    The Interface Layer: Integration with Apple, Microsoft, iOS, Siri, and Office products means OpenAI’s systems mediate your every digital interaction, creating a seamless surveillance mesh.

    The Identity Layer: Sam Altman’s World Network is “ramping up efforts to scan every human’s iris using its ‘orb’ devices” to create “digital passports” that make anonymous existence impossible.

    The Security Layer: OpenAI’s consortium with Palantir and Anduril focuses on “improving counter-unmanned aircraft systems and their ability to detect, assess and respond to potentially lethal aerial threats in real-time.”

    The Economic Layer: World Network’s goal “to scale to one billion people” through cryptocurrency distribution makes economic survival dependent on biometric compliance.

  7. Patrick Powers
    June 5, 2025 at 03:13

    Conformity has always been one of the strongest factors in human life.

    • Bob the Bard
      June 5, 2025 at 16:19

      The invention of the typewriter ruined cursive script. Should we stop using keyboards?

      • Vera Gottlieb
        June 6, 2025 at 10:35

        No…but keep your ‘thinking’ something that is solely yours…Too many people seem afraid of thinking for themselves.

  8. hgfjh
    June 5, 2025 at 03:00

    I’m stereotypically autistic with principal hobbies of geopolitics and theoretical physics. I do it because I like it and no AI is going to make me, and the millions upon millions like me, stop.

    I’ve recently discovered what gravity actually is – the interaction between the dimensions of mass and distance. It’s amazingly simple to demonstrate this but conformity with standard doctrine forbids it so that’s the end of the matter.

    Conformity will stomp on creativity every time. It’s like returning to the Middle Ages, when religion dictated that Galileo was heretical. That’s the real danger of AI.

  9. Emma M.
    June 4, 2025 at 23:41

    “The historical parallel with industrialization offers both caution and hope. Mechanization displaced many workers but also gave rise to new forms of labor, education and prosperity.

    Similarly, while AI systems may automate some cognitive tasks, they may also open up new intellectual frontiers by simulating intellectual abilities. In doing so, they may take on creative responsibilities, such as inventing novel processes or developing criteria to evaluate their own outputs.”
    These “intellectual frontiers” in the linked article are the simulation of human creativity and its implied death and replacement for the benefit of those without skill, talent, or imagination to be able to utilise gen AI to do the work for them. The paper’s author thinks this is good, as “having a bespoke painting, poem or piece of music created is the privilege of the few;” apparently unaware of the potential for loss of meaning and value, and that anyone living on much less than minimum wage can afford to commission these things today from human artists for less than the price of a dinner-for-two outing or video game. Or, one can learn the skills to create themselves for free, especially with the endless information available on the Internet to learn these things.

    Did I miss the hopeful part…? Well, I’ll fill in with the hopeful part by throwing a wedge in the hype train, since having been interested in AI all my life, most reporting on it is uninformed and unfortunately victim of Silicon Valley hype. The risk to thinking is serious, but not because these AI can do anything well or will continue to improve. I will focus on the latter and not the former:

    “This transformation is only at its early stages. Each new generation of AI models will produce outputs that once seemed like the purview of science fiction. The responsibility lies with professionals, educators and policymakers to shape this cognitive revolution with intention.”
    This is not what the data suggests, as there has not been a significant improvement in any of these models for years. The best AI researchers predicted the wall it has hit years ago.

    hxxps://nautil.us/deep-learning-is-hitting-a-wall-238440/
    hxxps://garymarcus.substack.com/p/breaking-openais-efforts-at-pure

    Best case scenario, models have “hallucinations” maybe 30% of the time; at worst it’s far higher. This cannot be solved and relates to the transformers themselves. It should rule out any use of it for anything serious or of any consequence where the quality and accuracy of the result matters. (In practice, it may not be; the oft-repeated idea that it will continue to improve in spite of what is observed may ensure its use anyway.)

    There are problems with neural nets themselves that cannot be solved and are part of the technology, such as the inability to understand that if A = B, then B = A. They are the only computers that cannot do math or grasp fundamentals such as order of operations; if the operands in a commutative binary operation change places the result is the same, but not according to neural nets. This has been documented going back over 20 years, and is visible constantly in LLM output: a model can understand, e.g., that celebrity X’s mother is Y, but not that Y’s daughter is X, because the reverse is statistically less prevalent in the training data.

    Countless other such examples. Further progress is unlikely. The amount of false hype, money, and resources wasted on LLMs may mean AGI will not be achieved, certainly not in the short term but perhaps never. Actually, there is research suggesting they will get worse, because the Internet is now full of generative AI output and the data used for training is now contaminated. Meanwhile, journals are closing down because they are being flooded with AI papers (e.g. hxxps://archive.is/3ABkT); peer reviewers are themselves being slowly replaced with AI.

    And can the nuclear power plants companies like OpenAI are demanding truly provide enough power for more enormous data centres? And what of the water for liquid cooling? Resources such as energy, water, and REMs will prevent further progress; hence China’s new three-body constellation to allow AI compute in space, where the vacuum is the heatsink.

    It’s completely unsustainable. Sooner or later, the bubble will burst.

    • BigO
      June 6, 2025 at 01:37

      Diablo Canyon in California sits on the coast and uses seawater for cooling; I believe it’s desalinated. Water should not be a problem for nuclear power generation because it’s recycled. Furthermore, molten salt reactors are being planned that are more stable and safer.

  10. Patrick Powers
    June 4, 2025 at 17:33

    As an artist I can tell you that while the market wants originality, it wants MINIMAL originality. I say 5%. While fundamental originality can sell, this is the freakish exception, the one-in-a-million shot. AI excels at grinding out the clichéd rehashes that the market demands. More power to it.

Comments are closed.