What the fuck? The only task that didn’t degrade across most models was Python. Very basic things like JSON, Makefiles, and schemas got screwed. Fiction, emails, and food menus got screwed. Did you even bother to read the legend? If you consider a single pass to be “producing and manipulating language,” then you didn’t bother to read the idiotic article you started this thread in support of. Good luck.
Edit: Why do you lie?
> Catastrophic corruption (scores of 80 and below) occurs in more than 80% of (model, domain) combinations.

> The only task that didn’t degrade across most models was Python.
Yeah, after 20 cycles of unsupervised iteration on the task. Gemini 3.1 Pro doing as well as it did under that experimental setup is actually quite remarkable.
The paper does not show what you are arguing.