Luca comment 1


What happens when language models become so pervasive that subsequent models are trained largely on data produced by earlier models' outputs? The snake eats its own tail, and a self-reinforcing feedback loop pushes the models toward collapse.
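
A minimal sketch of this feedback loop, under a deliberately simplified assumption: each "model" is just a Gaussian fitted to a finite sample drawn from the previous generation's fit (a toy stand-in, not anything described in the comment above). Over many generations the estimated spread tends to drift toward zero, which is the collapse dynamic in miniature; exact numbers vary with the random seed.

<pre>
# Toy illustration only: recursive fitting of a Gaussian to samples from
# the previous generation's fit. The fitted spread (sigma) tends to drift
# downward over many generations, a minimal analogue of training each new
# model on the previous model's outputs.
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 0.0, 1.0      # generation 0: the original "human" data distribution
n_samples = 100           # finite sample used to "train" each generation
n_generations = 500

for gen in range(1, n_generations + 1):
    data = rng.normal(mu, sigma, n_samples)   # sample from the previous model
    mu, sigma = data.mean(), data.std()       # fit the next-generation model
    if gen % 100 == 0:
        print(f"generation {gen:4d}: mu={mu:+.3f}, sigma={sigma:.3f}")
</pre>

The point of the sketch is only that each generation learns from a finite, slightly distorted sample of its predecessor, so estimation error compounds instead of averaging out.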