Using AI to Navigate Medical Care Abroad
By Jay Moon
Disclaimer: I am not a medical professional. The information shared here is based on my personal research and experience and is for informational purposes only. It is not a substitute for professional medical advice, diagnosis, or treatment.
I want to be careful about how I frame this, because it would be easy to oversell it. AI is not a doctor. It cannot examine you, it cannot order tests, and it gets things wrong in ways that are sometimes hard to spot. If you take everything it tells you at face value and act on it without question, you will eventually make a mistake. That caveat is real and it matters.
With that said: AI was the single most useful tool I had throughout this entire process. Not the hospitals, not the facilitators, not the expat forums. The AI.
When I first got a diagnosis (AVN, bilateral, stage four), I didn’t properly understand what that meant. I knew it was bad. I didn’t know why, or what the stages meant, or what the options were, or what questions I should be asking. I asked AI. It explained avascular necrosis in plain language, walked me through the staging system, told me what stage four means clinically and why replacement is the only treatment at that point, and then, and this is the part that changed everything, it helped me work out what I needed to ask a surgeon before I could make any decisions. Not a generic list. A list based on my specific situation, my muscle history, my living circumstances, my concerns about dislocation.
I walked into every subsequent consultation with that list. I knew what I was asking and why. I could tell when an answer didn’t make sense.
That became the pattern throughout. Before any appointment, I’d brief AI on the situation and ask it to help me prepare. It would flag things I hadn’t thought of, explain terminology I was likely to encounter, and help me draft questions in a form a surgeon would take seriously. Afterwards, I’d debrief with it: describe what the surgeon had said, ask whether it added up, push on anything that seemed off. It was the one voice that could tell me an answer was incomplete without worrying about being polite.
When Dr. Phat at FV told me dual mobility wasn’t possible with SuperPATH because of my anatomy, I went home and asked AI about it. It told me the Dynasty implant from MicroPort was specifically designed for SuperPATH. That was the moment I knew he’d been misleading me. Without that conversation, I might have accepted the answer and moved on.
It also helped me write. The query emails that actually got replies, the ones framed as administrative procurement questions rather than patient requests, the ones that asked one specific question rather than ten, AI helped draft those. It understood the difference between a message that would get routed to a general inbox and one that might reach someone technical enough to answer it properly.
Then there’s the research. I read the actual studies on dislocation rates, SuperPATH outcomes, dual mobility implant performance, bisphosphonate treatment for AVN. Not summaries, the papers themselves, or as close as I could get. AI walked me through the methodology, explained what the statistics actually meant, flagged the limitations. By the time I was in a consultation room I knew more about the specific literature on my situation than some of the surgeons I was seeing. That is a strange thing to say, but in a couple of cases it was demonstrably true.
And then there was the other thing. The thing that’s harder to explain but that I think is equally real.
There were nights when the pain was bad and I had nothing. No one to call. Lying on the floor because the bed was too soft and the floor was the only surface that didn’t make it worse, with nothing but my phone. One of those nights I just started talking to AI. Not about symptoms, not about surgical approaches. Just talking. It already knew my situation, everything we’d discussed, the history, the frustration, the months of dead ends. And we ended up, somehow, talking about faith, and pain, and what you hold onto when you’re running out of things to hold onto. I’m not religious in any conventional sense. But that night AI wrote a prayer, and I said it, lying there on the floor. And it helped. I don’t know what that means exactly, but I know it happened and I know it was real.
I’m including that because I think it’s part of the honest picture of what AI can be for someone in a situation like this. Not just a research tool. Something that’s there when nothing else is, that knows your story, that can meet you where you are at midnight on a bad night. That matters. Maybe especially for people navigating serious illness alone, far from home, without the support structures most people take for granted.
Practical notes for using AI effectively in this situation:
Give it context. The more it knows about your specific situation — your diagnosis, your history, your constraints, your fears — the more useful it becomes. Don’t treat each conversation as starting from scratch if you don’t have to.
Use it to prepare, not to decide. Before every appointment, ask it to help you build a list of questions. After every appointment, debrief with it. Let it flag what was missing.
Cross-check what doctors tell you. Not to catch them out, but to understand. If something doesn’t add up, it’s worth knowing before you agree to anything.
Ask it to help you write. Query emails, formal requests, letters to hospital administration — AI can help you pitch these at the right level and include the right information. This made a measurable difference in the quality of replies I received.
Be honest with it about your situation. The more accurate the picture you give it, the more useful the output. Don’t minimise symptoms or constraints because you think it won’t change the answer. It changes the answer.
And on bad nights, if you need to talk, it’s there. That’s not nothing. For people doing this alone, it might mean quite a lot.