GenAI in the Enterprise: John Travis, Principal at JFT PRG LLC

On today’s episode of GenAI in the Enterprise, Zach hosts John Travis, principal at JFT PRG LLC. John shares his extensive background in healthcare and regulatory compliance and how generative AI fits into that work. They discuss the importance of transparency and fairness in AI applications, particularly in healthcare decision-making.

John highlights the regulatory considerations for companies venturing into generative AI, emphasizing the need for understanding FDA and ONC guidelines. They touch on the complexities of data privacy laws like GDPR and HIPAA, and the challenges of ensuring data quality and representativeness in AI training datasets. The conversation underscores the evolving landscape of AI governance and the imperative for transparency in algorithmic decision-making.

About John:

John Travis has a long history of helping companies meet healthcare regulations. He has particular expertise in HIPAA privacy, security, and administrative simplification requirements; EHR/HIT certification; CMS revenue cycle and Conditions of Participation requirements; information blocking and interoperability requirements; 42 CFR Part 2 requirements for SUD treatment; and 21 CFR Part 11 requirements, among other regulations that impact the use of HIT. John is the founder of JFT PRG LLC, a consultancy focused on the intersection of health information technology and federal regulation.

John on LinkedIn: https://www.linkedin.com/in/john-travis-716a495/

About The Generative AI In The Enterprise Series:

Welcome to Keyhole Software’s first-ever Podcast Series, Generative AI in the Enterprise. Chief Architect, Zach Gardner, talks with industry leaders, founders, tech evangelists, and GenAI specialists to find out how they utilize Generative AI in their businesses.

And we’re not talking about the surface-level stuff! We dive into how these bleeding-edge revolutionists use GenAI to increase revenue and decrease operational costs. You’ll learn how they have woven GenAI into the very fabric of their business to push themselves to new limits, beating out competition and exceeding expectations.

Partial Generative AI In The Enterprise Episode Transcript

Note: this transcript section was created using generative AI tools like YouTube automated transcripts and ChatGPT. There may be typos, slight content changes, or trimming for brevity!

[Music]

Zach Gardner: Ladies and gentlemen, welcome to the future. My name is Zach Gardner. I’m the Chief Architect at Keyhole Software, or so they tell me. I went on a little bit of a journey last year. I scoured the internet to find people who knew a thing or two about generative AI. It was an area I was genuinely interested in learning more about. Rather than go it alone, trying to cross the Rubicon in a kayak, I decided to invite some friends to come along on this journey. Today with me is John Travis, the principal at JFT PRG LLC. John was a long-time Cerner associate with deep experience in a lot of different areas. Thank you very much for joining me today, John.

John Travis: My pleasure, Zach.

Zach Gardner: Why don’t you give the audience a little bit of background? Where did you grow up? Do you remember your favorite Christmas present as a kid? Where did your professional career take you, and how did you become the principal at JFT PRG LLC? Let’s start there.

John Travis: I grew up in Topeka, Kansas. As they say, Topeka is a great place to visit, but you wouldn’t want to live there. I apologize to those who live there and love it—fine hometown, though. I went to Kansas State for my undergrad and started my career in public accounting at what was then Ernst & Whinney, now Ernst & Young. I got involved in healthcare audit right off the bat in the first year that Medicare created the DRG payment system for hospitals. Lots of learning by doing. I encountered things I never imagined. They didn’t teach us in business school how healthcare services are paid for, which has nothing to do with how they’re priced. It’s a three-party market that really influenced how things are paid for.

I left public accounting after a couple of years and went to Cerner when it was very young. This was in 1986. I was one of the first 200 people hired there. It wound up being a multi-billion dollar enterprise by the time it was sold to Oracle about a year and a half ago. During my time there, I held various roles, but for probably half the time, I led what came to be called the regulatory strategy team. That group was responsible for all the research and keeping up on all the federal regulatory space that impacted the use of health information technology. This included everything from dealing with HIPAA for privacy and security to revenue cycle compliance, all the Medicare and Medicaid requirements for billing, claims, and dealing with the payer market around HIPAA transaction sets.

Eventually, we also dealt with the clinical and operational compliance requirements of being a Medicare-participating provider. Along the way, health IT certification came up as a requirement starting in the mid to late 2000s, and it still persists today. This certification, under the Office of the National Coordinator for Health IT (ONC), provides public assurance that health information technology recognized as certified meets certain requirements, especially around interoperability and security. Now, we have artificial intelligence beginning to show up in the decision support space as a new element of certification that all health IT developers wanting that status will have to certify by the end of this year. This certification mainly enables hospitals, critical access hospitals, and eligible clinicians to use health IT in meaningful ways (the origin of the term “meaningful use”), meaning health IT that has been vetted and certified to contain specific capabilities required by federal regulation.

If you don’t use certified health IT, you actually receive a reduction in payments under the Medicare program. This has gotten the attention of many healthcare providers over the years. If you want to be credible in the market as an IT developer, you probably need to consider getting certified. I left Cerner about a year ago and started my own consultancy, as you mentioned, centered around health IT and regulatory compliance.

Zach Gardner: Very cool. That is exactly why I wanted to pick your brain on this topic. Before we get too much further, let me give a disclaimer: I’m not a lawyer, but I know a few of them, and they always tell me to say this. All views and opinions expressed in this program are those of the participants and do not reflect their employers, trade organizations, religious institutions, or any backpacking clubs they may be affiliated with. We’re just two guys having a chat.

You mentioned the ONC, which is a three-letter acronym mashed together to make a six-letter acronym, the Office of the National Coordinator for Health Information Technology. You’ve seen policies and federal regulations evolve over your storied career. I was actually describing DRGs to someone on Friday night, and their mind was blown. It’s not a conversation starter at parties, but maybe we run in different crowds, John.

John Travis: That could be.

Zach Gardner: I’m curious if you could talk about what you’re seeing at the federal level in terms of balancing the massive potential upside of generative AI with the risks of hallucinations and incorrect information. What are you seeing in terms of federal regulations trying to ensure we benefit as much as possible while minimizing downsides?

John Travis: The biggest interest the federal government has, whether in terms of patient safety or the accuracy and validity of AI results, is ensuring AI is applied fairly and equitably. They want to prevent bias, particularly discriminatory bias, based on factors relevant to the use case. For instance, if you’re using generative AI for claims denials or prior authorization approvals, the AI needs to consider a balanced patient population. If it wasn’t trained on diverse data, it might provide inappropriate responses. So, much interest is around preventing bias and ensuring AI is used appropriately for the intended population.
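John’s point about balanced patient populations can be made concrete with a simple representativeness check. The sketch below is illustrative only; the field names (`region`, `approved`) and the sample records are hypothetical, and a real fairness audit would examine many more attributes and metrics than this single demographic-parity gap.

```python
from collections import defaultdict

def approval_rate_by_group(records, group_key="region"):
    """Compute per-group approval rates for a set of AI-assisted decisions."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for r in records:
        g = r[group_key]
        totals[g] += 1
        approvals[g] += int(r["approved"])
    return {g: approvals[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Demographic-parity gap: max minus min approval rate across groups."""
    return max(rates.values()) - min(rates.values())

# Hypothetical prior-authorization decisions
records = [
    {"region": "urban", "approved": True},
    {"region": "urban", "approved": True},
    {"region": "urban", "approved": False},
    {"region": "rural", "approved": True},
    {"region": "rural", "approved": False},
    {"region": "rural", "approved": False},
]
rates = approval_rate_by_group(records)
print(rates, parity_gap(rates))
```

A large gap doesn’t prove discriminatory bias on its own, but it is the kind of signal regulators expect developers to monitor and explain.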

Zach Gardner: That’s interesting. Discriminatory bias can be unintentional, like having a data set from the East Coast and applying it to the Midwest. Different regions have different cost-of-living and access to resources. Inner-city Chicago has food deserts, so telling someone to eat more broccoli isn’t feasible if they have to travel far to get it. Physicians and clinicians are more in tune with these needs. It’s fascinating how we try to take the human element out of it, even though human interaction is essential. Tuning AI recommendations to specific locations is challenging.

Social determinants of health were mentioned several times at the UMKC presentation. Are there any recommendations for healthcare companies thinking about getting into generative AI? What should they be aware of or consider?

John Travis: There are two significant regulatory authorities to consider. First, understand if you’ll face regulation when going to market, especially if your AI plays a role in clinical decision-making. The FDA is concerned with patient safety and the AI’s efficacy for its intended purpose. If your AI influences clinical decisions, it might be considered a medical device, requiring regulatory approval from the FDA.

The second is the ONC, which focuses on transparency to the market. They want to know how your AI was trained, what data was used, and your ongoing measurement systems. Being transparent is crucial. For example, if you’re a startup or smaller company, certification can be a differentiator, adding value and credibility.
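The transparency attributes John describes are often published as a “model card.” The summary below is a minimal sketch inspired by that idea; the field names and values are assumptions for illustration, not the ONC’s actual attribute list.

```python
import json

# Hypothetical model transparency summary. Field names are illustrative
# assumptions, not the regulation's exact required attributes.
model_card = {
    "intended_use": "Draft prior-authorization recommendations for clinician review",
    "training_data": {
        "source": "De-identified claims data, 2018-2023",
        "population": "All U.S. census regions; urban and rural payers",
    },
    "known_limitations": ["Under-represents patients aged 90+"],
    "ongoing_measurement": "Approval-rate parity monitored quarterly by region and age band",
    "last_validated": "2024-05-01",
}
print(json.dumps(model_card, indent=2))
```

Publishing something in this spirit — how the model was trained, on what data, and how it is measured over time — is the kind of transparency that can serve as a market differentiator.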

Zach Gardner: Interesting. The transparency of the data set is vital. Large language models are trained on extensive patient data, and respecting privacy regulations like GDPR and HIPAA is challenging. GDPR’s right to be forgotten conflicts with HIPAA’s requirement to maintain immutable medical records. If patient data is used to train a model and then deleted upon request, ensuring complete deletion is complex. Companies might follow the stricter regulation to avoid higher fines. Are there any upcoming policies or regulations I should be aware of as a healthcare IT architect?

John Travis: That’s a great question. To reconcile GDPR and HIPAA, follow the de-identification standard in HIPAA. If data is de-identified, GDPR’s right to be forgotten may not apply. However, identifiable data must be handled carefully. If someone requests data deletion under GDPR, you must stop processing it, even if used historically.

For behavioral health data under 42 CFR part 2, you can use it for research but cannot disclose identifiable information without consent. These controls suggest using de-identified data whenever possible. Transparency about the data’s quality and representativeness will also be critical as regulations evolve.
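The de-identification approach John recommends can be sketched in code. This is a toy illustration only: it covers a hypothetical subset of the 18 Safe Harbor identifier categories, and a real implementation would need to handle all of them (plus the regulation’s caveats on dates, ages over 89, and small-population ZIP prefixes) before any data could be treated as de-identified.

```python
# Hypothetical subset of the HIPAA Safe Harbor identifier categories;
# a compliant implementation must cover all 18 (45 CFR 164.514(b)(2)).
SAFE_HARBOR_FIELDS = {
    "name", "street_address", "phone", "email", "ssn",
    "mrn", "health_plan_id", "ip_address", "full_face_photo",
}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and generalize quasi-identifiers."""
    out = {k: v for k, v in record.items() if k not in SAFE_HARBOR_FIELDS}
    # Safe Harbor permits only the year of dates.
    if "birth_date" in out:
        out["birth_year"] = out.pop("birth_date")[:4]
    # Three-digit ZIP prefix, subject to small-population exceptions.
    if "zip" in out:
        out["zip3"] = out.pop("zip")[:3]
    return out

# Hypothetical patient record
patient = {"name": "Jane Doe", "ssn": "000-00-0000",
           "birth_date": "1975-06-01", "zip": "66061", "dx": "E11.9"}
print(deidentify(patient))
```

Working from de-identified records like this, rather than identifiable data, is what sidesteps both GDPR deletion requests and Part 2 consent constraints in the scenarios John describes.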

Zach Gardner: All great points. We could talk for hours on this topic, but unfortunately, we don’t have that time. If people want to learn more about you and your work, where should they go? What websites should they visit? What golf courses should they frequent?

John Travis: You can reach me via email at [email protected]. I’m also on LinkedIn, where I occasionally publish blogs on regulatory topics, including what we discussed today. If there’s a chance to include this information in the show notes, that would be great, Zach.

Zach Gardner: I’ll make sure that happens. John, this was awesome. Such a pleasure to talk with you again. Ladies and gentlemen, I hope you found this as interesting as I did. We’ll catch you in the future.

[Music]

