I agree that, with the current state of tools around LLMs, this is very inadvisable. But I think we can develop the right ones.
- We can have tools that generate the context and info submitters need to understand what has been done, explain the choices they are making, discuss edge cases, and so on. This includes taking screenshots as the submitter uses the app, and a testing period (requiring X amount of time of the submitter actually using their feature and smoothing out the experience).
- We can have tools at the repo level that scan and analyze the effect of submitted changes. They can also isolate each submitted feature so that others can toggle it off or modify it if it's not to their liking. Similarly, you can have lots of LLMs impersonate typical users and exercise the modifications to make sure they work, with humans in the loop at appropriate points.
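One way to make submitted features toggleable is to wrap each contribution behind a flag. A minimal sketch of the idea, assuming a hypothetical per-contribution flag registry (all names here are made up for illustration):

```python
# Sketch: each submitted change is registered behind a flag, so a
# maintainer or user can switch it off without reverting the code.

class FeatureFlags:
    def __init__(self):
        self._flags = {}

    def register(self, name, enabled=True):
        # Called when a contribution is merged in isolated form.
        self._flags[name] = enabled

    def toggle(self, name, enabled):
        if name not in self._flags:
            raise KeyError(f"unknown feature: {name}")
        self._flags[name] = enabled

    def is_enabled(self, name):
        # Unknown features default to off.
        return self._flags.get(name, False)


flags = FeatureFlags()
flags.register("llm-submitted-dark-mode")       # hypothetical contribution
flags.toggle("llm-submitted-dark-mode", False)  # a user opts out
```

The point is not this particular class, but that isolating contributions behind explicit switches gives users a cheap way to reject or modify a feature after merge.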
People are submitting LLM-generated code they don't understand right now. How do we protect repos? How do we welcome these contributions while lowering risk? I think that with the right engineering effort, this can be done.
I sympathize; I also feel like the fight against the corporations is hopeless. The loss of leverage against employers for tech workers is huge in the face of LLMs. I'm a tech worker myself and am facing those same problems. But I'm not sure this means that FOSS is useless. The corps have a huge incentive to create these tools, whether they're open source or not. But at least when they're open source, we the people can also use them.

I'm not suggesting that we can do this with LLMs today; we just don't have the right contributor and maintainer tools yet. Right now we have to develop maintainer tools to filter out the huge amount of crap that badly designed LLM systems are putting out. That gives us the opportunity to build a contribution model that doesn't care about human vs LLM provenance, as long as the contribution meets certain quantifiable standards. In 5-10 years, we're going to have LLMs that can infer at very high speed, meaning we can do a lot of error correction by generating many candidates and looking for consistency among them. The engineering effort for LLM systems has barely started; these systems are going to get way more robust. Wouldn't it be better if they were built in the open so that we can all share, understand, and leverage these tools for ourselves?
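The error-correction-by-consistency idea can be sketched as a simple majority vote over repeated generations. This is a toy illustration, not a real LLM pipeline: `flaky_model` is a hypothetical stand-in for any model call that returns a normalized answer string.

```python
import random
from collections import Counter


def self_consistent_answer(generate, prompt, n=25):
    """Sample n generations and keep the most common answer.

    Returns the majority answer and the fraction of samples that
    agreed with it. Cheap, fast inference is what makes large n viable.
    """
    answers = [generate(prompt) for _ in range(n)]
    winner, votes = Counter(answers).most_common(1)[0]
    return winner, votes / n


# Hypothetical stand-in for an LLM: right most of the time, flaky otherwise.
def flaky_model(prompt):
    return "42" if random.random() < 0.8 else "41"


random.seed(0)
answer, agreement = self_consistent_answer(flaky_model, "some question")
```

Any single sample has a 20% chance of being wrong here, but the majority over 25 samples is wrong far less often; the agreement ratio also gives a rough confidence signal a maintainer tool could threshold on.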
As for the gatekeeping/democratizing of art and tech, I agree that anyone can learn that stuff if they put enough effort into it. But the simple fact that people need to put time and sweat into it excludes a large swath of the population, from children to neurodivergent people to low-wage workers who don't have the breathing room to rest, let alone take up programming. It's really not about a 'soldier at the gate'; no person or group is preventing anyone from learning how to code. The social order and biology sometimes make it so. Wouldn't it be better for everyone if anyone could modify their software without having to invest a shitload of time learning how to code? Maybe a person only wants one specific change in one specific app; the ROI just isn't there if they have to learn a whole new field.
I am not trying to say that AI and LLMs are the best thing since sliced bread. I think there are huge problems with them, but I also think they can be powerful tools if we wield them properly. The tech has big limitations, and there are huge ethical implications in the way these models are built and their cost to the planet. I'm hoping we can fix these in the long run, but I sure as fuck don't count on the current AI industry leaders to do it. They're going to use this tech to supercharge surveillance capitalism, imo. It's gonna be fucking horrible. What I hope is that we can carve out a space for personal computing with the help of FLOSS.