
Curbside Consult with Dr. Jayne 6/16/25

June 16, 2025 Dr. Jayne

Healthcare isn’t the only industry grappling with how AI should, or should not, fit into our daily work.

Some friends who are teachers sent me the transcript of a recent discussion about how AI is affecting human thinking and whether it will alter our capacity for critical thought. The discussion linked to an article, "AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking," that was a great read. The author set out to examine how AI tool use relates to critical thinking skills and focused on the concept of cognitive offloading as a potential mediating factor. Cognitive offloading happens when thought processes are outsourced to technology instead of being performed independently.

The study found that higher AI tool use was associated with lower critical thinking abilities. Younger participants (ages 17-25) were more dependent on AI tools and had lower critical thinking scores than participants older than 46. It also noted that, regardless of AI usage, better critical thinking skills were associated with higher educational attainment, which should matter to anyone who has a stake in ensuring a well-educated population. More highly educated people maintained those critical thinking skills even when using AI, which supports the idea that how we use AI matters more than whether we use it. The study also found that AI use encourages passive learning, where students consume information rather than create it.

The study had multiple hypotheses about the role of cognitive offloading, including one suggesting that moving thinking tasks to external tools would reduce the cognitive burden on individuals. Instead, the author found that this reduced cognitive load can lead to less critical engagement and analysis. According to the author, the phenomenon has been described as the "Google effect," where the ease of finding information online leads to reduced memory retention and weaker problem-solving skills.

That seems to go along with what many of us already think, which is that the internet is making us dumber. To truly explore that claim, though, you would also have to look at the proliferation of TikTok videos and the nonsense seen all over social media on a daily basis.

I had the chance to speak with a couple of teachers who were blissfully enjoying their summer vacation, so I figured I would ask for their thoughts on AI and how it is impacting education, beyond the obvious concerns about AI-generated work.

One said that plagiarism has always been an issue and that taking from AI sources isn't much different from taking from other authors, although AI might be easier to catch because of stilted language that the editors of more traditional sources would have caught. She also noted that she is applying some of her existing "how to spot fake news" lesson plan content to AI, encouraging students to be skeptical about what AI tells them, to ask about bias, and to consult multiple sources to ensure accuracy. She recommends that students do their best to answer questions in more traditional ways first, then use AI to validate their findings.

The other teacher felt that better education is needed on how AI works and the risks of using it. He likened it to when GPS units first came out and there were reports of people driving off the ends of closed roads because they were blindly following the GPS instead of paying attention to their surroundings. He also noted that although there are certainly concerns about AI use interfering with academic rigor, he is more worried about his teenage students being emotionally harmed by AI-generated content, such as deepfake photos or videos.

He noted, "When I was in school, people spread rumors, but now you can have altered videos going around that are a lot more difficult to combat." As a proud member of Generation X, I don't envy the students growing up in this environment. Still, I'm grateful for teachers who recognize these challenges and work to prepare students not only to be ready for the future, but also to protect their own mental health.

The use of AI by medical students and residents has been a hot topic for my colleagues who are working in academic settings. There are concerns that students have become used to looking up facts and aren’t memorizing information the way they used to, which places them at risk when resources aren’t readily available. Whether it’s a downtime event or a rapidly evolving clinical situation, I know I’m glad that I have certain pathways memorized to the point where they just happen naturally in my thought process.

Of course, I’ve allowed some things to go by the wayside and I would have to look them up if I ever needed them. (Cockcroft-Gault equation, I salute you.) One faculty member said his school is using AI within its case-based learning modules for medical students in hopes that the approach will build diagnostic reasoning skills rather than sabotage their development.
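
For anyone who, like me, has let the details fade, here is a minimal sketch of the Cockcroft-Gault estimate of creatinine clearance. The formula is the standard one, with the usual 0.85 adjustment for female patients; the function name and the worked example are my own framing, not anything from my conversations.

```python
# Cockcroft-Gault estimate of creatinine clearance (mL/min):
# CrCl = (140 - age) * weight_kg / (72 * serum_creatinine_mg_dl),
# multiplied by 0.85 for female patients.

def cockcroft_gault(age_years: float, weight_kg: float,
                    serum_creatinine_mg_dl: float,
                    female: bool = False) -> float:
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    if female:
        crcl *= 0.85  # standard adjustment for female patients
    return crcl

# Example: a 70-year-old, 80 kg man with a serum creatinine of 1.2 mg/dL.
print(round(cockcroft_gault(70, 80, 1.2), 1))  # about 64.8 mL/min
```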

The faculty physicians I spoke with had differing thoughts about the use of AI by resident physicians, who have graduated from medical school, have the MD or DO after their names, and are therefore able to treat patients with some degree of independence even if they are not yet fully licensed. Universally, they had concerns about non-medical AI solutions due to the risk of hallucinations and the resulting safety risks to patients. They were also concerned about trainees using those resources to learn procedures and algorithms, since they wouldn't know whether what they were reading was incorrect compared to what they might learn from a more authoritative resource such as a medical textbook or a journal article.

All but one said they conduct their teaching rounds in an AI-free environment where participants are expected to contribute to the discussion without the benefit of external resources.

That conversation was limited to faculty in my immediate area. I suspect that attitudes might be different in parts of the country that adopt new technologies more aggressively. I would be interested to hear from informaticists who work with medical schools or graduate medical education programs about how your institutions are approaching AI and what best practices are being developed.

Is AI really going to make healthcare better, or is it another shiny object that will eventually lose our admiration? Leave a comment or email me.

Email Dr. Jayne.


