Using an AI Agent or Specialized GPT in healthcare utilization review processes


AI uses in Healthcare

  • Healthcare utilization review involves examining a patient's medical records and determining whether the patient's health condition actually warrants the surgery, tests, and levels of care (scarce resources) that have been requested by the patient's physician or a consulting physician.
  • I was attending a meeting recently where we were discussing making changes to some templates we were using, and a few people expressed concern about leaking private healthcare information onto the worldwide web. In this day of server hacks and data breaches, this concern was certainly valid.
  • I was still formulating an answer to this concern when the meeting dismissed the possibility of using an AI agent like ChatGPT for a repetitive task, and therefore by extension ruled out using AI for this task at all.
  • Why was I slow to answer?
  • It is a difficult question to answer without explaining both the question's premise and the answer's basis. But I will "take a stab at it," an American idiom which means I will try to explain or answer.

Privacy, Two-Way Streets, and Walled Gardens

Privacy

  • In healthcare, all patient information is private and can't be exposed to anyone who doesn't have a need to know.
  • So any connection between a patient information database and the internet presents a huge security risk, with data breaches being possible.
  • These risks of data breaches make using commercially available AI agents like ChatGPT and others unwise.
  • Why? It's a two-way street.

Two-Way Street

  • First, most AI agents like ChatGPT search the worldwide web, or internet, for answers and retrieve what they think is the answer to the question.
  • Second, the fact that ChatGPT has the ability to search the web or internet means that the internet has the ability to reach ChatGPT. While the marketers may assure you that ChatGPT isn't a security risk for your computer, the ability to search the internet is a door to the internet, and like all doors, traffic can move both ways. Even when the door is constructed to only open outward, the chatbot has the ability to bring information in, which means other software can also come in.
  • Third, because ChatGPT has to search the internet, it takes time, and it may bring back different answers to the same or similar queries, because rapidly searching an enormous, constantly changing body of data makes the results effectively unpredictable.
  • Fourth, if you wall ChatGPT off from the internet (that is, you don't create a door), then not only can ChatGPT not go out to the internet, but the internet also can't come in. While on the surface that would seem self-defeating, it is actually a much more efficient model.

Why?

  • If the ChatGPT doesn't have to search the internet, but instead searches a closed database, it retrieves answers faster and much more consistently.

  • The answers are retrieved more consistently because there are fewer choices in a closed database, and because the queries will be less diverse.

  • For example, suppose the queries are limited to the colors of t-shirts and there are only four choices.

  • The chatbot then has an initial probability of success of 1 out of 4, which is much better than the odds in a typical Google search, which can return 10,000,000 possible answers to your query.

  • And if the chatbot gets feedback that its answer was correct, it is more likely to return the same answer next time, because it builds a memory of answers ranked by how often they have been confirmed correct. So in a closed system the chatbot gets more feedback that its answer was correct, it retrieves that right answer more often, and over time it will return that right answer essentially 100% of the time, because it keeps getting feedback that it is the right answer. A rough sketch of this feedback loop is shown below.
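
To make that idea concrete, here is a minimal Python sketch of my own (purely an illustration, not any vendor's actual product or API): a closed set of four t-shirt colors plus a feedback tally that makes the most-often-confirmed answer increasingly likely to be returned.

```python
from collections import defaultdict
import random

ANSWERS = ["red", "blue", "green", "black"]  # the closed set of four choices

class ClosedAnswerBot:
    def __init__(self, answers):
        self.answers = list(answers)
        self.correct_counts = defaultdict(int)  # feedback tally per answer

    def answer(self, query):
        # With no feedback yet, every choice is equally likely: 1 in 4.
        if not self.correct_counts:
            return random.choice(self.answers)
        # Otherwise weight each answer by how often it has been confirmed correct.
        weights = [1 + self.correct_counts[a] for a in self.answers]
        return random.choices(self.answers, weights=weights, k=1)[0]

    def feedback(self, answer, was_correct):
        if was_correct:
            self.correct_counts[answer] += 1

bot = ClosedAnswerBot(ANSWERS)
for _ in range(50):
    reply = bot.answer("What color t-shirts do you stock?")
    bot.feedback(reply, was_correct=(reply == "blue"))  # pretend "blue" is the right answer

print(bot.answer("What color t-shirts do you stock?"))  # now almost always "blue"
```

The more often an answer is confirmed, the more its weight grows relative to the other three, which is the "hierarchy of correctness" idea in miniature.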

Walled Garden GPT

  • So a walled-garden ChatGPT is perfect for answering phones and answering questions about a menu, because the questions will all be similar, so the GPT will accumulate a large database of menu questions with similar correct answers and learn how to answer people's menu questions accurately.
    An extrapolation of this would be that if we ask the GPT to review a single template and correct it according to a template within its database of answers, it can quickly correct those templates.
  • An additional use case for the GPT would be to ask whether a healthcare case description contains certain characteristics that make it fulfill certain criteria for payment. In this case the GPT could return accurate answers if the information in its searchable database contained all the parameters for approval, and if the query and the language of the healthcare case description matched the criteria in its database. The issue for the GPT is that it doesn't actually think, so it can only deal with synonyms if all of the synonyms are already in its database. A rough sketch of that kind of matching follows this list.
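
Here is a minimal Python sketch of what I mean (the criteria and synonyms below are made up for illustration and do not come from any real payment policy): a case description only satisfies a criterion if one of the phrases already listed for that criterion appears in the text.

```python
# Purely illustrative criteria and synonym table; nothing here reflects a real policy.
SYNONYMS = {
    "mri": {"mri", "magnetic resonance imaging"},
    "knee pain": {"knee pain", "pain in the knee"},
    "failed conservative therapy": {
        "failed conservative therapy",
        "failed physical therapy",
        "no improvement with physical therapy",
    },
}

# Hypothetical approval criteria: every listed concept must appear in the case text.
CRITERIA = ["mri", "knee pain", "failed conservative therapy"]

def meets_criteria(case_description: str, criteria=CRITERIA) -> bool:
    text = case_description.lower()
    # A criterion is met only if one of its pre-listed phrases appears verbatim.
    return all(
        any(phrase in text for phrase in SYNONYMS[c])
        for c in criteria
    )

case = ("Patient reports pain in the knee for 6 months, "
        "no improvement with physical therapy; MRI requested.")
print(meets_criteria(case))  # True: every criterion matched a known phrase
```

If the note had instead said "unsuccessful PT," the match would fail, because that wording isn't in the synonym table. That is the limitation I mean: the system only handles the synonyms someone has already put in its database.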

This is my current understanding of GPTs and AI agents. What is your take?



title picture source


AI is a useful tool in many areas these days. Though I don't know what the outcome will be in a few years, when humans won't be needed anymore.

I agree that AI is a useful tool.
I am hoping humans adapt to AI so that AI improves our performance more than it replaces our performance!
🙏

I see you're in a new community called GPT Development.
I will go there and look around.
Join me gang.
#smallbites
#defigeek

Thanks, check it out and let me know what you think.

I checked on it and you are the only poster.