
What happens when language models are so pervasive that subsequent models are trained on language data largely produced by earlier models' outputs? The snake eats its own tail, and a degenerative, self-collapsing feedback loop ensues.
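To make the feedback loop concrete, here is a minimal toy sketch (not from the original text, and purely illustrative): a one-dimensional Gaussian is repeatedly refit to samples drawn from the previous generation's fitted model, so each "model" trains only on the outputs of its predecessor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data drawn from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=200)

for generation in range(1, 31):
    # "Train" a model: estimate mean and standard deviation from the current data.
    mu, sigma = data.mean(), data.std()
    # The next generation's training data is sampled from this fitted model,
    # standing in for model-generated text flooding the training corpus.
    data = rng.normal(loc=mu, scale=sigma, size=200)
    print(f"generation {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")
```

Each refit introduces estimation noise, and information lost from the tails is never recovered; over many generations the fitted variance tends to wander and shrink toward zero. This is only a crude analogue of the effect described above, but it shows how training on a predecessor's outputs can compound small errors into collapse.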