

I think the biggest issue is that you’ve assumed everyone is the same and wants to be treated the same.
The world isn’t black and white. People are telling you their personal preferences, and you’re telling them that they’re wrong.
You’re fighting other people’s battles for them even when they’re telling you they’d prefer you not to - you’re literally acting like the guy in the last panel.
If there’s anything we’ve learned over the last horrible year, it’s that getting all of your information from social media is a recipe for disaster.


With respect, it sounds like you have no idea about the range of nonsense human students are capable of submitting even without AI.
I used to teach Software Dev at a university, and even at MSc level some of the submissions would have paled in comparison to even GPT-3’s output. That said, I didn’t have to deal with the AI problem myself: I taught just before LLMs came into their own. Textsynth had just come out, and I used it as an example of how unintentional bias in training data shapes a model’s outputs.
While I no longer teach, I still work in that space. Ironically, the best way to catch AI-written papers these days is with another AI: that kind of detection is now built into the plagiarism-checking software, and its reports break down where it has found suspicious passages and why it considers them suspicious.
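
For anyone wondering what “why it considers them suspicious” can mean in practice: one common signal is perplexity, i.e. how predictable the text is to a language model, since machine-generated prose tends to be unusually predictable. Here’s a rough toy sketch of that idea using GPT-2 via Hugging Face’s transformers library - to be clear, this is my own illustration of the general technique, not the actual checker’s code, and the threshold is completely made up:

```python
# Toy illustration of perplexity-based AI-text flagging.
# NOT a real checker's code - just one common signal:
# machine-generated text tends to score unusually low
# perplexity (i.e. be very predictable) under a language model.

import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2.
    Very short strings give unreliable numbers."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the
        # mean cross-entropy loss over the sequence.
        out = model(enc.input_ids, labels=enc.input_ids)
    return math.exp(out.loss.item())

THRESHOLD = 40.0  # arbitrary cut-off, purely for this toy example

def flag_suspicious(sentences: list[str]) -> list[tuple[str, float]]:
    """Return (sentence, perplexity) pairs that fall below the threshold."""
    flagged = []
    for s in sentences:
        ppl = perplexity(s)
        if ppl < THRESHOLD:
            flagged.append((s, ppl))
    return flagged

report = flag_suspicious([
    "The mitochondria is the powerhouse of the cell.",
    "In this essay I will discuss several key considerations.",
])
for sentence, ppl in report:
    print(f"ppl={ppl:.1f}  {sentence}")
```

The real products presumably combine several signals on top of this, since perplexity alone is easy to fool and unfairly flags formulaic human writing - which is exactly why the per-passage explanations in those reports matter.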