• cm0002@lemmy.world · 5 months ago

    I don’t think that will have the impact people expect. Maybe at first, but eventually it’ll just start treating “wrong” code as a negative example and referencing it as “how NOT to do things” lmao

      • FaceDeer@kbin.social · 5 months ago

        That’s just a matter of properly tagging the training data, which AI trainers need to do regardless.
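
        A minimal sketch of what that tagging could look like, assuming a simple JSONL training format; the field names and the control-prefix syntax are hypothetical, not any real trainer’s schema:

        ```python
        # Hypothetical sketch: attach quality labels to training examples so the
        # model can learn what "bad" looks like instead of imitating everything
        # in the corpus equally. Field names and the <|quality:...|> prefix are
        # made up for illustration.
        import json

        samples = [
            {"code": "def add(a, b):\n    return a + b", "quality": "good"},
            {"code": "def add(a,b): return eval(f'{a}+{b}')", "quality": "bad"},
        ]

        with open("train.jsonl", "w") as f:
            for s in samples:
                # Prepend the label as a control prefix so the model conditions on it.
                text = f"<|quality:{s['quality']}|>\n{s['code']}"
                f.write(json.dumps({"text": text}) + "\n")
        ```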

      • cm0002@lemmy.world · edited · 5 months ago

        For sure, but just like with that whole “poison our pictures” push from artists, the people building these models (be it companies, researchers, or even hobbyists) are going to start modifying the training process so that the model can recognize bad code. And that’s assuming it can’t already; without that capability from the get-go, I think the current models would generate far worse output than they do as is lmao
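
        As a crude stand-in for that kind of bad-code detection, an automated labeling pass might look like the sketch below, using nothing fancier than Python’s own compiler as the judge; real pipelines would presumably lean on linters, test suites, or learned classifiers instead:

        ```python
        # Hypothetical sketch: auto-label snippets at data-prep time. A snippet
        # that does not even parse gets tagged "bad"; anything else passes as
        # "good". Real pipelines would use far stronger signals than a syntax
        # check.
        def label_snippet(source: str) -> str:
            try:
                compile(source, "<snippet>", "exec")  # does it even parse?
            except SyntaxError:
                return "bad"
            return "good"

        print(label_snippet("def f(x): return x * 2"))  # good
        print(label_snippet("def f(x) return x * 2"))   # bad (missing colon)
        ```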

  • FaceDeer@kbin.social · 5 months ago

    When you ask an LLM to write some prose, you could ask it “I’d like a Pulitzer-prize-winning description of two snails mating,” or you could ask it “I want the trashiest piece of garbage smut you can write about two snails mating.” Or even “rewrite this description of two snails mating to be less trashy and smutty.” For the LLM to give the user what they want, it needs to know what “trashy piece of garbage smut” is. Negative examples are still very useful for LLM training.
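
    As a sketch of how those negative examples pay off at inference time, assuming training data carried register control tokens like the tagging above (the <|register:...|> syntax and the commented generate() call are illustrative, not any real API):

    ```python
    # Hypothetical sketch: if the model was trained on examples prefixed with
    # register/quality control tokens, inference can steer between registers
    # just by choosing the token.
    def build_prompt(request: str, register: str) -> str:
        # "register" is the control token the model learned to condition on
        return f"<|register:{register}|>\n{request}"

    high = build_prompt("Describe two snails mating.", "prize-winning")
    low = build_prompt("Describe two snails mating.", "trashy-smut")
    # model.generate(high) -> literary prose; model.generate(low) -> garbage smut.
    # Without the "bad" examples in training, the second request (and the
    # "make this less trashy" rewrite) would have nothing to draw on.
    ```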