
AI is Not Eating Your Brain

Dillon S. Brennan, MBA

November 3, 2025

More Technology, More Problems

Chances are you know someone who is afraid of the Pandora's box we may be opening with the increased use of artificial intelligence around the world. They're not wrong to feel some level of anxiety as we aggressively construct more resource-intensive data centers, think about our children's learning and development prospects, or consider its broad societal impacts.

While there are promising use cases, such as expediting customer support, automating coding, or even identifying cancer earlier than before, there is still a lot of justified uneasiness afoot.

Mainstream media is no help here, publishing sensational fear-mongering content at every opportunity to generate ad revenue. They say things like "AI is already taking white-collar jobs," and parrot the grandiose assertions made by CEOs like Jim Farley and Marc Benioff that knowledge work will somehow become largely unnecessary in the near future.

Survey data may paint this picture better. Americans, almost overwhelmingly, think that AI will worsen people's ability to think creatively, form meaningful relationships with others, make hard calls, and solve problems. What's worse, Americans think that there needs to be more regulation, but they trust neither the US Government nor tech giants to regulate successfully.

How Americans Think AI will Affect our Ability to...

Figure 1: AI's Effects on People's Ability to Perform Certain Tasks per Pew Research Center

Is AI Eating Your Brain?

So, Americans think that AI is going to degrade our ability to form meaningful relationships and there's fear over its impact on the job market—but is it really going to impact critical cognitive functions?

In June of 2025, MIT published a 209-page study called Your Brain on ChatGPT: Accumulation of Cognitive Debt. In short, this study measured the brain activity of a small group of participants who wrote essays with ChatGPT's large language model (LLM) and showed that their final products had less variety, that they reported less satisfaction with their work and claimed less ownership of it, and that they displayed less brain activity associated with ideation, memory, and semantics.

Ownership Claim for Essays Written with or without AI Assistance

Figure 2: Reported Ownership Claims for Essays Written with Assistance from LLMs vs No Assistance at All

This is what mainstream media has focused on—the negative impacts of AI on our cognitive abilities.

Never mind that essay composers using search engines reported less ownership than LLM users. Never mind the study's comparisons of AI-facilitated, search-engine-aided, and "brain only" essay composition. Never mind the study's advocacy for hybrid AI implementation in schools. AI is eating your brain: that's the sensationalist claim being propagated in the skeptic's sphere.

Healthy Skepticism or Cynicism?

In one engaging, well-written, but illustratively skeptical example, The Telegraph published an opinion piece called Why AI Heralds a New Age of Stupidity, which combines anecdotes, expert interviews, and cultural commentary to argue that artificial intelligence is eroding human intellect.

The author cites Finnish accountants who abandoned an automation tool for fear of "lost competence," references falling global IQ scores, and invokes philosophers like Jacques Ellul to warn that convenience breeds complacency. Ultimately, the assertion is that AI is a tool that's detrimental to our society and is being pushed on us to make tech CEOs richer while we all get dumber and lose our jobs at the same time.

This is not a foolish or totally invalid assertion. It's healthy to be skeptical about promises surrounding new technology when money is involved, and it's noble to protect the well-being of the common human. Since 1979, US private sector workers have experienced only 33% wage growth while productivity has increased by 87%. Furthermore, realized CEO pay increased by 1,085% from 1978 to 2023.

Worker Productivity and Pay Indices Over Time

Figure 3: US Worker Productivity and Pay, Indexed, from 1948 to 2025

However, if we are too skeptical (or too accepting), how can we expect progress? As Carl Sagan wrote, "what is called for is an exquisite balance between two conflicting needs: the most skeptical scrutiny of all hypotheses that are served up to us and at the same time a great openness to new ideas."

Cognitive Offloading or Shifting?

One research article published by Carnegie Mellon and Microsoft is commonly cited by media outlets like The Telegraph in the "AI is eating your brain" argument. The researchers introduced ~300 knowledge workers to a framework for critical thinking and administered a survey to gauge their perceived levels of critical thinking, self-confidence, and confidence in the AI tool(s) being used.

Over the course of the study, the knowledge workers reported on ~900 examples of AI use in work tasks. The survey gauged the perceived increase or decrease in cognitive effort when conducting tasks with AI tools and showed that, overall, workers rarely perceived more effort and overwhelmingly perceived less effort with AI tools.

Reported Change in Task Effort with AI Use, by Cognition Class*

*Cognitive activity groupings taken from critical thinking framework in Lee et al., 2025

Figure 4: Perceived Change in Effort with Implementation of AI Tools in Knowledge Work

Unfortunately, this convenient chart, while popular with AI skeptics, is not a fair representation of the study's findings. The survey not only measured whether workers perceived greater or lesser cognitive effort across different domains, but also sought to understand why those perceptions arose—examining how confidence, task type, and reflective behavior influenced the degree of critical engagement with AI tools.

Rather than solely exhibiting "cognitive offloading" (using less of your brain because you rely on a tool to do the work for you), the article found that a shift in cognitive strain from one set of activities to another explained much of the lower perceived cognitive load. For example, information gathering becomes information verification, problem-solving becomes forming inferences by integrating AI responses, and task execution shifts to task supervision.

Another interesting relationship pointed out in the Microsoft/Carnegie article was the inverse relationship between task confidence and tool confidence. When knowledge workers approached tasks outside their domain of expertise, they relied more on AI tools. Conversely, when knowledge workers were within their area of subject matter expertise, they relied less on AI tools.

The Dichotomy of Self-Confidence and AI Reliance

Figure 5: Graphing the Inverse Relationship between Task Confidence and Tool Confidence

Anecdotal Comparisons

So, is AI making us dumber? Did commuter cars make us fatter? Did college degrees make us smarter? The answer, as usual, is a pragmatic one: it's what you make of it. You get out of it what you put into it.

In my first years as a chemical engineering student, I manually wrote proofs and derived fundamental equations that took hours to complete. As I learned more, I implemented those equations in practical contexts to create and optimize chemical processes. How much of a given chemical could I produce with a reactor of a certain size? What purity would this chemical be produced at?

In our junior and senior years, we learned how to solve these problems with process simulation software. Optimization efforts were streamlined through computer-fueled iterations. Scenario analysis, once a mental exercise or pipe dream, became a convenient risk management and mitigation tool. The technology made us better.

Sure, if you're using ChatGPT as your therapist or seeking an AI girlfriend, maybe you're getting dumber. AI can be bad, it can make you dumber, but if used correctly, it can make you better.

Keeping Sharp

The truth is, regardless of what technologies are introduced to us, we intellectual beings have an obligation to exercise our brains. To avoid the negative cognitive effects of AI use, MIT recommended starting tasks without AI tool assistance to provoke cognitive strain, or prioritizing learning over task completion. Microsoft/Carnegie suggested using AI as a copilot to check your reasoning, and offloading tedious tasks rather than conceptually challenging ones.

At Suleiman Solutions, we employ our core practices to ensure that AI use not only bolsters expediency in task completion, but also knowledge worker engagement and enlightenment. Below, we discuss these core practices:

1. Stay Humble is not only a core practice for AI use at Suleiman Solutions, but it is part of our operating philosophy. We assume that the client knows more than us in every case so that we don't provide oversimplified solutions or suggestions you've already implemented. Likewise, we assume a low task confidence when employing AI tools even when we have subject matter expertise in the area. The wisest person is the one who claims to know the least.

2. Stay Skeptical means assuming a low tool confidence when employing AI tools even if task confidence is low. Check sources, calculation methodologies, and arguments provided by AI tools. Seek answers in search engines, research articles, or different AI tool providers to refute or confirm assertions.

3. Stay Curious is our way of ensuring continual engagement and growth. As a consulting firm, our capital is principally located in our knowledge base and skill repertoire. While the customer's goals are our primary concern, and helping them achieve those goals is our primary measure of success, our second-most important measure of success is: did we learn something? Don't just solve the problem at hand with AI tools, but use them to explore different scenarios, teach new concepts, or get excited about new skills.

Into the Future

Regardless of whether we like AI or not, it's here to stay in some capacity. Our research shows that it provides efficiency gains for knowledge workers, life-saving capabilities for the healthcare industry, and convenience for consumers, among other things.

Like other technologies, though, the early stages of its commercialization and adoption can be scary. As humans, we must consider its environmental impacts as well as its impacts on society and the economy. It also remains uncertain whether the AI economy is a bubble or whether robots will take over the world, but we must remain hopeful.

So, let us not employ a jaded, hindsight-biased approach to considering our fate, assuming that because new technologies turned out fine in the past, this one will too. Likewise, let us not be overly optimistic and say that AI is going to be the overhead reduction savior of the century or will provide us with unlimited riches. Let us be reasonable.

With discretionary use and hybrid application, human and machine, we can assure you that for now, AI is not eating your brain.

About the Author

Dillon S. Brennan, MBA

Founder | Principal Business Consultant

Specialist in analytics, modeling, and operations

Like what you read?

Contact us now for your free business consultation session!

Get in Touch