Absolutely, and it has done so for over a decade. Not LLMs, of course; those are not suitable for the job, but there are lots of specialized AI models for medical applications.
My day job is software development for ophthalmology (eye medicine) and people are developing models that can, for example, detect cataracts in an OCT scan long before they become a problem. Grading those by hand is usually pretty hard.
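For the curious, very roughly the shape of such a model, as a toy PyTorch sketch. Sizes, layer counts, and the number of grades are all made up; this is not any real product's architecture, and real clinical models go through regulatory-grade validation:

```python
# Toy sketch only: a tiny CNN that maps an OCT B-scan to a severity grade.
import torch
import torch.nn as nn

class OctGrader(nn.Module):
    def __init__(self, num_grades: int = 5):  # grade count is made up
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # grayscale B-scan in
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # -> (N, 32, 1, 1)
        )
        self.classifier = nn.Linear(32, num_grades)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))  # grade logits

scan = torch.randn(1, 1, 512, 512)      # one fake 512x512 B-scan
print(OctGrader()(scan).argmax(dim=1))  # predicted grade index
```

The architecture is the easy part; the hard part is curating enough expertly graded scans and validating across devices and patient populations.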
Can you tell me more about your job? As a fellow computer guy, I would really appreciate first-hand experience.
Neat. I was actually talking with optom students roughly a decade ago about how AI was imminently going to make a lot of this work redundant.
~~Small note: you probably mean things like glaucoma, macular degeneration and other retinopathies, not cataracts. (Cataracts are just lens opacities.)~~ Ignore that. Am idiot.
I did mean cataracts. We usually do OCTs of the anterior segment (cornea to lens; it uses a different wavelength for imaging) to plan lens surgery and measure keratoconus (among other things), but as a side effect, even in seemingly healthy eyes you can measure how optically dense different parts of the lens are and predict when cataracts are going to form. It’s hard to do by just looking at the OCT image, because the way the lens looks depends on many factors, but there have been some interesting AI-based approaches popping up lately.
But sure, with a posterior segment OCT, you can detect all the things you mentioned.
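To make the "optical density" idea concrete, here's a toy sketch in NumPy. Everything is fake: the scan is random noise and the lens mask is hand-placed; a real pipeline would segment the lens and normalise for device settings, alignment, and so on (which is exactly why hand grading is hard):

```python
# Toy sketch: average backscatter intensity inside the lens region
# of an anterior-segment OCT as a crude "optical density" proxy.
import numpy as np

scan = np.random.rand(512, 512)              # stand-in for an A/S-OCT B-scan
lens_mask = np.zeros_like(scan, dtype=bool)  # stand-in for a lens segmentation
lens_mask[200:380, 150:360] = True           # hand-placed lens region

density = scan[lens_mask].mean()             # crude density estimate
print(f"mean lens backscatter: {density:.3f}")
```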
Wow, sorry, I stand corrected. That’s very cool!
I’m not in the industry any more, but when you said grading I stupidly thought you meant something like what I used to do: monitoring diabetic retinopathy OCT scans.
So… the medical professional takes voice notes, those get transcribed (OK, this part is fine), and then they’re summarized automatically? I don’t think the summary is a good idea. It’s not a car factory; the MD should get to know my medical history, not just a summary of it.
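To be concrete about the pipeline being described, here's its hypothetical shape; `transcribe` and `llm_summarize` are placeholders, not any vendor's real API:

```python
# Hypothetical transcribe-then-summarize pipeline (placeholder functions).
def transcribe(audio_path: str) -> str:
    # speech-to-text step (the relatively safe part)
    return "Patient reports blurred vision in the left eye since March ..."

def llm_summarize(transcript: str) -> str:
    # lossy step: whatever detail this drops is gone from the chart,
    # and the output is not guaranteed to be grounded in `transcript`
    return transcript[:50] + " ..."

transcript = transcribe("visit_recording.wav")  # keep this as the record
print(llm_summarize(transcript))                # the contested step
```

If the full transcript is stored alongside the summary, at least the detail is recoverable; if only the summary goes in the chart, it isn't.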
You can’t make an LLM only reference the data it’s summarising. Everything an LLM outputs is a collage of text and patterns from its original training data, and it’s choosing whichever piece of that data seems most likely given the existing text in its context window. If there isn’t a huge corpus of training data, it won’t have a model of English and won’t know how to summarise text at all. And even restricting the training data to medical notes still means it’s potentially going to hallucinate something from someone else’s medical notes that’s commonly associated with things in the current patient’s notes, or potentially leave out something from the current patient’s notes that’s very rare or totally absent from its training data.
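Even a prompt that demands grounding is only a soft constraint. A sketch of the point, with a placeholder `generate` standing in for any LLM API:

```python
# Why "only use the source text" is a request, not a guarantee: every
# token the model emits is sampled from a distribution learned over the
# whole training corpus, conditioned on (but not restricted to) the
# contents of the context window.
PROMPT = """Summarise the clinical note below.
Use ONLY facts stated in the note.

Note:
{note}
"""

def generate(prompt: str) -> str:
    # placeholder model call; the instruction above only shifts token
    # probabilities, it cannot forbid tokens learned from other texts
    return "Summary: ..."

note = "Anterior segment OCT for keratoconus follow-up; lens clear."
print(generate(PROMPT.format(note=note)))
```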
If you end up integrating LLMs in a way where they could impact patient care, that’s actually pretty dangerous, considering their training data includes plenty of fictional and pseudoscientific sources. That said, it might be okay for medical research applications where accuracy isn’t as critical.