Curbside Consult with Dr. Jayne 2/23/26
It’s clear that AI is here to stay. I’ve spent quite a bit of time looking at studies that seem to be either proving its value or dismissing it on the basis of inaccuracy and risk.
Healthcare people tend to view it through a specific lens. I reached out to contacts in other industries to better understand how they are approaching it, and whether their professional organizations have produced policies or recommendations around its use.
The first person who responded to my query is in the field of law. The initial portion of his response addressed the high-profile problems with AI that have surfaced in the legal world. A number of cases involved attorneys who used AI to construct briefs, but failed to catch that the AI fabricated citations for cases that didn’t exist.
Similar to what we encounter in healthcare, issues exist with the content on which AI systems are trained. Attorney-client confidentiality must not be compromised by client information becoming part of a training data set. Related risks involve algorithmic bias and discrimination. Attorneys have been sanctioned for misusing AI, with some being fined for fictitious citations.
The legal community is discussing accountability for the use of AI. Ethics experts agree that attorneys are ultimately responsible for the accuracy of matters that are being handled in their name.
My attorney friend shared his opinion that even the best AI isn’t as good as some of his most seasoned paralegals and researchers. His firm tends to proceed with caution, although it does not have a formal policy on the use of the technology. He thinks about using AI to create documents similarly to having a summer legal intern do it. He reads everything with a critical eye in case it misses the mark, just like interns sometimes do.
We chatted a bit about the idea that AI probably isn’t as good as a law student at the top of their class, but might be better than a student at the bottom of their class. This has parallels with medical education. It is one thing to ask a fourth-year sub-intern to present a case and quite another to ask a third-year student who is on their first clinical rotation to do the same.
We agreed that the idea of blind trust in AI is risky, especially when professional licensure is on the line.
The American Bar Association issued its first guidance on the ethics of AI use in 2024. It specifically noted the need to ensure that legal billings are appropriate for tasks that are conducted using generative AI tools.
The attorney in question is also a commercial pilot. He had a few things to say about the use of AI in the aviation space. Airlines have been using it for operations functions, including maintenance optimization and the modeling of passenger behaviors such as their likelihood to check bags or buy additional services and amenities. Consumer-facing AI includes support chatbots and booking and ticketing systems.
On the maintenance side, AI can help with troubleshooting complex airframes that generate sensor data. Mechanics also use it for maintenance documentation.
He mentioned incorporating AI into flight simulator systems. It uses real-world cases and events to create realistic emergency scenarios that might go beyond the experience of a human simulator operator or operational handbooks.
I must have posed my question at just the right time, because he mentioned a recent announcement about the US Air Force’s Flying Training Center of Excellence. It is developing an AI-based “Instructor Pilot GPT” that is designed to interact with students who are undergoing pilot training. The tool will be trained on flight manuals and aviation documentation. It will help student pilots assess their performance and will provide rapid access to reference procedures. Similar to the commercial side, they hope to use the technology in flight simulators.
The Air Force uses a closed training environment that contains documents such as military protocols, federal guidance, and flight-related publications. I chuckled when I read a quote from one of the people who is involved with the project, who referred to the subset of information as a “data pond.”
Another comment in the article sounded a lot like the conversations that we are having regularly in medical education. Students are on their phones using LLMs every day, so they will expect it as they move forward in training.
The article also notes important concerns that I hadn’t considered in healthcare, such as cybersecurity risks. What happens when your fighter jet GPT gets hacked and harmful information is injected? The same thing could happen to a healthcare system, which would provide the ultimate example of medical misinformation.
As far as professional organizations or regulations, the Federal Aviation Administration issued a formal notice on the use of generative AI tools and services in March 2025. The first page of the document highlights the need to ensure that generative AI use “is conducted in an ethical and responsible manner.”
The notice applies only to the FAA’s employees and contractors, but it includes policy elements that are similar to what I see in hospitals and care delivery organizations. These include a requirement to request approval for using generative AI software, the ability to request support for specific use cases that have already been identified, and the need to ensure that AI tools that are found on the internet have been approved by the organization.
The FAA also cautions about the risks of AI infringing on intellectual property, the need to review AI-generated content for accuracy, the need to be transparent about where AI tools are being used, and the principle that it shouldn’t be used to “perform or facilitate illegal or malicious activities.”
I am waiting to hear back from contacts in other industries and will share if I receive compelling insights. If you or your organization does crossover work in areas other than healthcare, how are those industries tackling the use of generative AI? Leave a comment or email me.
Email Dr. Jayne.