Are you super-grateful to AI chatgpt/claude for making up for so much of your reduced function/memory with age?

[for those who feel some aging/reasoning phenotypes can be reversed through technology again]

have you noticed it yet?


LLMs are the research partner I now wish I had all my life. Every waking moment, it seems that I generate questions out of curiosity or problem solving. Prior to LLMs, I would guess that I was able to run down 2% of those questions. Thanks to LLMs, that proportion might now be 15% or more. I am delighted!


Yes and no. Well, I don’t have any sort of memory deficit. And it’s amazing to have these tools available. So for improving general knowledge, yes, they’re great.

But, I definitely feel it’s made me intellectually lazier. Like, when I’m writing a grant proposal, it would take me a really long time to put everything together as coherently as possible, and in the process of doing so, I’d improve my own knowledge, my own communication and writing skills, and it would help me to organise my thoughts. The act of iteration, re-reading, tweaking etc was informative and useful brain exercise.

But now, the LLMs make it so easy to generate polished text and spot errors that using the very slow approach of 2023 feels like a waste of time. However, I can already feel the “inertia” of my brain when faced with some big task. When I’m asked to peer review a high-end manuscript (and I wouldn’t upload those to cloud LLMs, because it’s unethical), it feels like more of a struggle without having Claude to summarise it and find errors for me.

I can only imagine that it will get worse. My students (MSc and PhD students) are definitely highly-dependent on these tools now. I don’t think many of them could write an essay without them, and that must be having consequences for their overall understanding of a topic. Their essays are better than ever, but I have no idea whether whatever is in their brain is better. I would guess not.

Glad you started this topic. I am not too sure about being grateful to AI (maybe thankful; I don’t want to anthropomorphise AI), but I am certainly grateful to you for having started this thread… :slight_smile:
And I agree, this thread forces me to pause and reflect. Maybe time to update my Bayesian beliefs… hahahaa.
Even though I have become a heavy user myself, I begin to wonder if I am becoming dumber every month, I am so heavily reliant on LLMs.
There have got to be some trade-offs neurobiologically (such as neuroplasticity etc.) which we haven’t realized yet. Lateral thinking being one of them. Then, I am sure there is neurocognitive remodeling that’s happening already, especially in younger adults and adolescents whose cortical structures haven’t been completely myelinated yet.
Here is some data:
Regressive remodelling: EEG studies show a decline in N400 response amplitude, a brain wave associated with processing semantic incongruity.
The consequence: the brain essentially becomes “lazy” at detecting nuance, irony, or subtle misinformation. We are effectively training our neurocircuitry to expect, and eventually only be able to process, the “highly probable” rather than the “highly profound.”

https://www.nature.com/articles/s41562-025-02105-9

NeuroAI-driven cognitive off-loading and synaptic effort recalibration - PMC.

Then you have epigenetic changes:

NeuroAI-driven cognitive off-loading and synaptic effort recalibration - PMC.

Then there is the Stanford/Wharton work:

The Flexibility-Originality Paradox: it stems from a cluster of longitudinal studies published between late 2024 and 2026, most notably spearheaded by researchers at the Stanford Institute for Human-Centered AI (HAI) and the Wharton School’s Mack Institute.
The term was popularized in the meta-analysis “The Generative Bottleneck: Divergent Thinking in the Age of LLMs” (Hofmann et al., 2025), which examined how high-frequency AI users performed on the Torrance Tests of Creative Thinking (TTCT).
The core findings:
1. Fluency: significantly INCREASES
2. Originality: significantly DECREASES

The “Skill Formation Gap”: A 2026 study on programming and complex logic found that while AI-assisted groups completed tasks faster, they scored 17% lower on subsequent unassisted “surprise” quizzes. This indicates a failure to move information from working memory to long-term cortical storage—a breakdown in the “encoding-through-effort” pipeline.

It does sound concerning. Perhaps it is time for us to update/recalibrate our protocols!

It is a great set of questions we have going here. Focusing on intellectual laziness and reduced skill, my suspicion is that the data will run in both directions, depending on other factors.

Cognitive offloading to tools is an adaptive feature of human cognition. Whether or not it has resulted in a specific reduced skill set in the past, it has broadly resulted in increased total capabilities. My history with statistical tools supports that view.

Around 50 years ago, I picked up a few extra dollars as a grad student tutoring in the stat lab. It was a transition period as the lab had mechanical calculators that could theoretically do more than I could coax out of them and two terminals that accessed SAS 72. Some around there could remember working out problems on the board. Later, I became enamored of SPSS and relied upon it through 18 versions. Did my reliance over decades diminish my skills? “Yes and no” seems like the best answer. The use definitely diminished my skills in working out problems by hand but it also increased my ability to visualize and apply a much broader range of statistical tools. More importantly, I could develop more robust research designs. On balance, I think that tradeoff was mostly positive.

In a different context the tradeoff has been mostly negative. I earned my pilot certificate when GPS was prevalent but not universal and calculation-intensive forms of navigation skills had to be mastered to pass the written and the flight review. In the practical, I had to navigate in day and night time conditions, using VORs and specialized slide-rule calculators. My first and subsequent aircraft have had GPS and GPS controlled autopilots installed. In a matter of months my non-GPS navigation skills declined to the point that I could not rely upon them. Pilots commonly joke about this issue and we all recall the studies on the posterior hippocampus of London Black Cab drivers.

I am certain that my newfound reliance on LLMs to analyze the threats to validity of a research paper has fostered an intellectual laziness. In 90 seconds, an LLM can unpack a complex research paper as well as or better than I can with 30 minutes of work. I find that efficiency too attractive to ignore.

On the other hand, some of my skills might be maintained or even enhanced when I find that the LLM has overlooked or misconstrued an element of the research paper and I end up coaching it to the correct analysis and conclusions.

One clear positive for the use of the LLM might be the increased exposure to more literature. I feel like I can become conversant with a topic much more quickly than I could before LLMs.

I have also heard from colleagues that the reduced burden on routine synthesis can free cognitive resources for higher-order creative and critical thinking. I’m skeptical about this claim but open to evidence.

Overall, I suppose meaningful empirical illumination of these questions is a few years away. By that time, the functional ground may have shifted so much that the questions have lost much of their meaning.


Public Claude Opus 4.6 is the closest to military-grade AI a consumer can get.
Claude has been my go-to AI for a while. I have not seen it hallucinate or give false references for quite some time. FWIW: I think the paid version is worthwhile.

Claude 4.6: Released in February 2026, the Opus 4.6 and Sonnet 4.6 models currently top many benchmarks. Sonnet 4.6 features a 1-million-token context window and has shown a “pivot to profitability” strategy in business-simulated evaluations.

Anthropic is currently top dog with the military and 3-letter agencies.

Anthropic launched the powerful Mythos model earlier this week but stopped short of a broad release, citing concerns it could expose previously unknown cybersecurity vulnerabilities.

April 9 (Reuters) - U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an urgent meeting with bank CEOs this week to warn of cyber risks posed by Anthropic’s latest AI model, two sources familiar with the matter said on Thursday.