Tattoos and Telehealth

When Dr. Google Meets ChatGPT: What You Need to Know Before Self-Diagnosing

Nik and Kelli Season 1 Episode 26


Remember when "Dr. Google" changed healthcare forever? Those days of patients arriving with self-diagnoses have evolved into something new – now it's "ChatGPT told me I should be doing this." As healthcare providers navigating this rapidly changing landscape, we're seeing both the incredible potential and concerning pitfalls of AI in health decision-making.

Artificial intelligence offers unprecedented access to medical information, but comes with a critical caveat: AI is only as good as the information it's fed. While we both use AI tools daily in our practice, we've learned to distinguish between those pulling from evidence-based, peer-reviewed sources versus those potentially drawing from subjective opinions or limited studies. The difference can significantly impact your healthcare outcomes.

Beyond source verification lies an even more fundamental challenge – healthcare simply doesn't fit neatly into algorithmic boxes. Your body represents a unique constellation of genetics, medical history, lifestyle factors, and countless other variables that even the most sophisticated AI can't fully comprehend. This is why treatments that work perfectly for one person may fail entirely for another, even with identical symptoms. Good healthcare requires nuanced human judgment that considers everything from family history to ethnic background to medication interactions.

When researching your health concerns, we absolutely encourage using AI as one tool in your arsenal. But approach your provider with "here's what I found, what do you think?" rather than "this is what I have and what I need." This collaborative approach creates space for meaningful discussion rather than confrontation. Most providers welcome informed patients who take active roles in their care, as long as there's mutual respect for both technology's capabilities and human expertise's irreplaceable value.

Have you used AI for health research? What was your experience? We'd love to hear your thoughts in the comments. And if you're looking for telehealth care from providers who embrace technology while maintaining human-centered approaches, visit us at HamiltonTelehealth.com.

Thanks for tuning in to today’s episode!
Ready to take the next step in your health journey? Visit HamiltonTelehealth.com — your healthcare oasis.
Get care when you need it, where you need it. Don't forget to subscribe!

Speaker 1:

Hi everyone, welcome back to Tattoos and Telehealth. Today we're going to talk about AI and how to vet your AI, especially with regard to your health, because AI is the new way of getting great information, and it can be great. But there are a few things that we want to talk about that you just need to look out for and be aware of as we evolve into this new culture of information. So my name is Nicole Baldwin. I'm a board certified nurse practitioner. This is my good friend and colleague, Kelli White, also a board certified nurse practitioner. Kelli's also board certified in functional medicine, which is absolutely just amazing. And so we are providers at Hamilton Telehealth, Hamilton Health and Wellness, and our attorneys make us say that this is not to be construed as medical advice and this podcast does not constitute a patient-provider relationship. So, Kelli, let's get started.

Speaker 1:

And as the culture is changing and as we are all utilizing AI, because no one, no matter how much school we go to, no matter how many degrees or letters or whatever behind our name, we're never going to be able to fit everything in our brain. Like I read studies all the time, but can I recall them at any given time? No, we can't. We just have to keep putting the knowledge in, whereas AI has access to all that knowledge, right? To every study, to all the statistics. But let's get into today. Something that I know was important for you to talk about was how to just be careful, especially with relation to healthcare. So I'll let you take it.

Speaker 2:

So I think that one of the things we need to think about is, you know, remember back in the days when the internet became a big thing? I know, especially in my brick-and-mortar setting, patients would come in and say, "I Googled my symptoms, I know I have this, I need this, this and this," and it became a big crutch to healthcare. So what should have been an open door to provide great information and to aid in the process of providing care to patients actually became a hindrance in the healthcare process, because patients came in thinking that they already had themselves diagnosed. They already knew what medications they needed, they already knew exactly what was going to be happening in steps one, two and three, without taking into consideration the fact that maybe they had this going on or that going on, or a family history of this. And so these different factors negated what they thought they already knew once they talked it out with their provider. And so, you know, I coined the phrase "Dr. Google didn't go to school." Not that Google was wrong, or that the internet searches they were looking at were false. It just didn't necessarily 100% apply to that person's situation. And so now fast forward to the world of AI, where I'm seeing patients that are messaging in saying, "You know, I ChatGPT'd it, or I asked AI, and it says that actually I should be doing this."

Speaker 2:

And I've had to call patients a few times and say, "Well, you know, this is where it got it wrong." And I think that it's important to understand that in the world of artificial intelligence, it is only as accurate as the information that it is fed. It's just like you and I. The intelligence that you and I hold in our brains, like you said, is only as good as the information that we gave our brains. It's only as accurate as the articles we read or the information that the lecture provided to us. And it's the same thing with artificial intelligence and the information that it gathers, even though it can spread its fingers a whole lot wider than we can, pull all that information in, and then give you a synopsis.

Speaker 2:

It sometimes is pulling information from human sources, and those could be sources that are based on human experiences, based on someone's subjective opinion, not fact, or something that is not necessarily evidence-based. And so whenever we're thinking about artificial intelligence, we have to really be sure that the AI source we're using is pulling from validated resources. So I only caution people in this sense: absolutely use AI. Please use AI. Nicole and I use AI. We use it for lots of things, whether it is rewording verbiage so we don't sound as country as we can be sometimes, or when we want to type something up a little bit more professionally. We use it to help us with flyers and handouts and all kinds of great things. It does great stuff for us. But be sure that the AI source you're using is pulling from reputable sources. So, Nicole, I know that you use one called Open Evidence, right?

Speaker 1:

Yes, I do use that. I probably don't use it as often as I should, but I do use it. There are a couple of different ones, some medical ones that I use, and ChatGPT. But even at the bottom of ChatGPT it always says, in little gray writing, that for important information you should verify, because it can get it wrong.

Speaker 1:

Yeah, and what is important is that, even though you may have the exact same symptoms as your sibling, you're still different. You know, there are things that are different, where a medication that's suitable for them may not be suitable for you. If it's a family member, you have a little bit more in common, but there are so many other things to consider. It's not just "I have a headache, what could it be?" There's so much more to you as a human, as a body.

Speaker 1:

If we could put everybody in a box and say, if you have depression, this is what you need; if you have anxiety, this is what you need; if you have high blood pressure, this is what you need, then that would be easy, right? But that doesn't work for everybody. There are so many variables that go into us choosing a medication. You know, it's family history, it's your habits, it's genetics, it's your past medical history. Do you have a history of this or that? Are you at risk for this or that? I mean, even as far as your ethnic background: for hypertension, we start African Americans on a different medication than we do for other ethnicities because they're more prone to specific things. And so it boils down to so many more factors than you could ever really enter into AI, per se.

Speaker 2:

Yeah, and I think that that's an important thing to keep in mind when you're using those things. So, like Nicole was saying, at the very bottom of your AI response there's going to be a disclaimer, and at the bottom of a lot of the medical ones we use, it'll list the articles that the AI used to pull its information. So one of the ones that I use on a really regular basis is Open Evidence. I use it daily while seeing patients. I do. It stays open over on the side, and I use it when I'm researching stuff for different patients, because I see some pretty complex patients and they have a lot going on, and, like Nicole said, I can't keep all that stuff in my brain. So there are times when I'm in the middle of talking to someone and I'll just reach over to Open Evidence and type something in, and it'll pull those articles for me, and I love that. Those are very good, well-vetted articles from the New England Journal of Medicine, from PubMed, from JAMA, and I can click on an article and pull it up, and I literally have the research right in front of me. And I know those articles and those sources, you know, like from the National Institutes of Health. These are very well-accredited sources, so I know I'm giving my patient up-to-date, accurate, very well-vetted resources and information that I feel sound and good about. And so I think that that's the most important thing to keep in mind when you're going through it.

Speaker 2:

Not that AI isn't wonderful, because it really is. But again, guys, it is only as good as the information that it is fed. And so if it is being fed information that is biased by opinion or by other people's personal experiences only, with no facts to come behind it (and by facts I mean good, vetted, large retrospective studies with a good number of people, not 100, not 500, but tens of thousands of people in these studies), then that information may not be the best. And so that's the point that I want to hit home here. Like Nicole was saying, we can't put everyone in a box. If you guys have been following us (and if you haven't, you need to like and subscribe and follow us), then you know that Nicole did this whole talk about genetic testing that's available to see what kinds of antidepressants and anxiolytic medications are best for you based on your DNA. So we don't all fit in that box, and I really think it's important to keep that stuff in mind and find what's best for you guys.

Speaker 2:

We want to be sure that the information you're being provided is accurate and up to date. I personally don't mind if you come to me and say, "Hey, Kelli, I Googled my symptoms and this is what I think." I'm happy to have that talk with you. I'm glad that you're being a proponent of your healthcare. Your body, your rules, and I can't help you if you can't help yourself. So I love that you do that. Just be sure the information is accurate and informative, and that you're getting it from sources that are fact-based.

Speaker 1:

Yeah, absolutely. And so that's just something to keep in mind as we get more advanced and more into AI, and people are utilizing it for things with regard to their health: AI doesn't know everything about you. So it is important that, yes, you look into things. We want you to look up what it could be, but when you do see your provider, whether it's us or your regular provider, I encourage you to say, "Here's what I found when I dug into it myself. What do you think?" versus "This is what I have. This is what I need."

Speaker 1:

That kind of sets us a little off, because we don't really know where you got your information, and it may be correct, but it may not be. Most providers are okay with that first approach. So, yes, absolutely, I want you to research whatever condition you know you have, whether it's hypertension, diabetes or whatever, but it is important to go into it with a little bit of understanding that it can get it wrong. It can get it very wrong. And so we just wanted to make sure we touched on that today, for sure. All right, you guys.

Speaker 2:

So I hope this information was helpful. I hope you found it informative. And, as always, please like, subscribe, follow, and share. Let us know that you love us. Leave us a message down in the show notes; we will always reach back out to you. And if you want to come see us, you can find us at HamiltonTelehealth.com. Again, this is my great colleague Nicole Baldwin, and I'm Kelli White. We do hope to hear from you soon.

Speaker 1:

All right, have a good day, guys.

Speaker 2:

Bye.
