
EPtalk by Dr. Jayne 7/27/23

July 27, 2023 · Dr. Jayne

Big tech companies — including Amazon, Google, Meta, and Microsoft — have signed on to a White House initiative to create AI safeguards. The companies have agreed to ensure safety in AI products prior to release, along with a yet-to-be-determined level of third-party review. In announcing the agreement, President Biden noted that “Social media has shown us the harm that powerful technology can do without the right safeguards in place.”

Although I agree with that statement, the number of people who believe social media has hurt society is about even with the number who say it makes their lives better. Having spent a good portion of my medical career caring for teenagers and treating plenty of individuals for anxiety and depression, I would bet that the average family physician doesn’t think social media is helping.

When I talk about generative AI with my non-informatics acquaintances, most of them think the features are “cool” and are impressed that they can get AI-generated content delivered to them for a fraction of the cost of human-produced content. These are folks who are generally upper middle class and can afford luxury items, so it’s not as if they’re choosing AI-generated content because it’s all they can afford. Everyone likes a bargain, apparently, not to mention the novelty of the technology. As an aside, many of these acquaintances are also consumers of so-called “fast fashion,” so I don’t think they’re paying much attention to sustainability or to the broader economic impact on working artists, writers, photographers, and journalists.

The agreement includes provisions to test AI systems for the potential to cause harm, including identifying situations where systems might try to access physical systems or copy themselves. The signers also agreed to create pathways for reporting vulnerabilities and to use digital watermarking to differentiate AI-generated content from human-created content. The agreements were drafted in closed-door sessions, leading critics to argue that voluntary safeguards aren’t sufficient and that there needs to be more open public debate.

Members of Congress are working on legislation to regulate AI solutions, and other industry players are calling for standards that go beyond this week’s agreement. Various countries and the United Nations are also looking at regulations and standards to address AI. It will be interesting to follow the discussion in the coming months and to see where we land.

Meanwhile, the focus on AI has been decidedly greater than the focus on the potential for mind-reading machines, which I wasn’t even aware of until I came across this article in Nature. Earlier this month, a group of neuroscientists, ethicists, and government representatives met in Paris to discuss the potential for regulating brain-reading techniques and other neurotechnologies. The scope of such solutions is broad, ranging from medical devices (such as brain implants designed to treat a medical condition) to consumer products such as virtual reality wearables that might collect users’ brain data. Investment in the field is growing at a rapid pace, with neurotechnology now a $33 billion industry. Ethics professionals at the meeting discussed risks such as manipulating an individual’s thoughts or altering their behavior for financial or political gain.

Privacy advocates called out companies whose terms and conditions require users to cede ownership of their brain data. Columbia University neuroscientist Rafael Yuste and his colleagues proposed a slate of neuro rights that includes “the right to mental privacy; protection against personality-changing manipulations; protected free will and decision-making; fair access to mental augmentation; and protection from biases in the algorithms that are central to neurotechnology.” Nations including Spain, Slovenia, Saudi Arabia, and Chile are already addressing the issue, with the latter becoming the first to address neurotechnology in its constitution. More to come, I’m sure.

It was gratifying to see that Cigna is being sued over the algorithm it uses to deny coverage to patients. The system allows claims to be rejected in seconds without human oversight. The lawsuit describes the PxDx digital claims system as an “improper scheme designed to systematically, wrongfully, and automatically deny its insureds medical payments owed to them under Cigna’s insurance policies.” Cigna fired back with a statement that the system “is a simple tool to accelerate physician payments that has been grossly mischaracterized in the press.” The issue isn’t entirely with the system itself, which spends an average of 1.2 seconds processing each claim, but rather that Cigna physicians are signing off on denials without reviewing medical records. I’ll definitely be following this one with my bowl of popcorn at the ready.

Gallup’s 2023 State of the Global Workplace report finds that a majority (59%) of the workforce is “quiet quitting” by subtly disengaging at work, which isn’t surprising in this post-pandemic environment. The striking finding, though, is that 18% of the workforce is “loud quitting,” or actively disengaging from work. Loud quitters may spread their feelings throughout the workplace and on social media. Together, these two groups may cost the global economy more than $8 trillion in lost productivity. The report also indicates that employee stress is increasing, which impacts productivity. Workers in the US and Canada reported stress at a 52% rate, compared to 39% for European workers. More than 122,000 employed people contributed data to the report.

CMS recently released its proposed rule for the 2024 Medicare physician fee schedule and Quality Payment Program. It’s a mixed bag, but it will require technology updates, so here’s the highlight reel for behind-the-scenes IT folks:

  • An overall decrease in physician payments of more than 3%, so prepare for grumpy physicians.
  • A supplemental billing code for office/outpatient evaluation and management (E/M) complexity.
  • Changes to telehealth reimbursement based on place of service (POS) codes (see the sketch after this list).
  • New billing codes for behavioral health crisis services delivered in certain places of service.
  • Addition of an optional Social Determinants of Health risk assessment to the Medicare Annual Wellness Visit. The assessment must be performed on the same day as the visit, using a standardized instrument that takes the patient’s education, development, health literacy, culture, and language into account. A separate billing code will be created to account for this effort.
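For the analysts who will need to turn that telehealth POS change into system configuration, here is a minimal sketch of the kind of lookup table I could imagine them maintaining. POS 02 and POS 10 are the standard telehealth place-of-service codes, but the facility/non-facility flags, names, and function below are purely illustrative placeholders and not anything specified in the proposed rule.

```python
# Hypothetical sketch only: POS 02 and 10 are the standard telehealth
# place-of-service codes, but the payment treatment flagged below is a
# placeholder until the final 2024 fee schedule is published.
from dataclasses import dataclass


@dataclass
class TelehealthPosRule:
    pos_code: str             # place-of-service code reported on the claim
    description: str
    pays_facility_rate: bool  # placeholder flag: facility vs. non-facility rate


# Placeholder configuration table that an analyst might maintain and adjust
# once the final rule is published.
TELEHEALTH_POS_RULES = {
    "02": TelehealthPosRule("02", "Telehealth provided other than in patient's home", True),
    "10": TelehealthPosRule("10", "Telehealth provided in patient's home", False),
}


def pays_facility_rate(pos_code: str) -> bool:
    """Look up whether a claim's POS code maps to the facility payment rate."""
    rule = TELEHEALTH_POS_RULES.get(pos_code)
    if rule is None:
        raise ValueError(f"No telehealth rule configured for POS {pos_code}")
    return rule.pays_facility_rate


if __name__ == "__main__":
    print(pays_facility_rate("10"))  # False under this placeholder table
```

Keeping that mapping in a configuration table rather than hard-coding it means the inevitable adjustments after the final rule drops become a data update instead of a code change.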

That’s the update, folks. Get your business analysts and requirements writers ready.

How does your organization incorporate CMS changes to your EHR, and how long does it typically take? Leave a comment or email me.

Email Dr. Jayne.


