I'm not very familiar with the metrics for evaluating progress in medical fields, so I'm asking in a general sense.

  • dfyxA · 3 months ago

    Absolutely and it has done so for over a decade. Not LLMs of course, those are not suitable for the job but there are lots of specialized AI models for medical applications.

    My day job is software development for ophthalmology (eye medicine) and people are developing models that can, for example, detect cataracts in an OCT scan long before they become a problem. Grading those by hand is usually pretty hard.

    • Adcott@lemmy.world · 3 months ago

      Neat. I was actually talking with optom students roughly a decade ago about how AI was imminently going to make a lot of this work redundant.

      ~~ Small note: you probably mean things like glaucoma, macular degeneration and other retinopathies, not cataracts. (Cataracts are just lens opacities) ~~ ignore that. Am idiot.

      • dfyxA · 3 months ago

        I did mean cataracts. We usually do OCTs of the anterior segment (cornea to lens; it uses a different wavelength for imaging) to plan lens surgery and to measure keratoconus, among other things. As a side effect, even in seemingly healthy eyes you can measure how optically dense different parts of the lens are and predict when cataracts are going to form. That's hard to do just by looking at the OCT image, because how the lens looks depends on many factors, but some interesting AI-based approaches have been popping up lately.

        But sure, with a posterior segment OCT, you can detect all the things you mentioned.

        • Adcott@lemmy.world · 3 months ago

          Wow, sorry, I stand corrected. That's very cool!

          I'm not in the industry any more, but when you said "grading" I stupidly assumed you meant something like what I used to do: monitoring diabetic retinopathy OCT scans.

      • ThirdConsul@lemmy.ml · 3 months ago

        So… the medical professional takes voice notes, which get transcribed (okay, that part is fine), and then summarized automatically? I don't think the summary is a good idea. This isn't a car factory: the MD should get to know my medical history, not just a summary of it.

          • AnyOldName3@lemmy.world · 3 months ago

            You can't make an LLM reference only the data it's summarising. Everything an LLM outputs is a collage of text and patterns from its original training data, and it chooses whatever piece of that data seems most likely given the existing text in its context window. If there's not a huge corpus of training data, it won't have a model of English and won't know how to summarise text at all. And even if you restrict the training data to medical notes, it's still potentially going to hallucinate something from someone else's medical notes that's commonly associated with things in the current patient's notes, or leave out something from the current patient's notes that's very rare or entirely absent from its training data.

              • cecinestpasunbot@lemmy.ml · 3 months ago

                If you end up integrating LLMs in a way that could impact patient care, that's actually pretty dangerous, considering their training data includes plenty of fictional and pseudoscientific sources. That said, they might be okay for medical research applications where accuracy isn't as critical.