Luca comment 1


What happens when language models become so pervasive that subsequent models are trained on language data largely produced by earlier models? The snake eats its own tail: each generation inherits and amplifies the errors of the last, and a self-collapsing feedback loop ensues.
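The feedback loop can be made concrete with a small simulation. The sketch below is illustrative only and not from this page; the Gaussian model, sample size, and generation count are arbitrary assumptions. Each generation is fit solely to the previous generation's outputs, so finite-sample estimation error compounds and the learned distribution drifts, a one-dimensional analogue of the effect described above.

```python
# Toy sketch (assumption, not part of the original comment): each
# "generation" is fit only to samples produced by the generation before it.
# With a finite training set, estimation error compounds, so the learned
# distribution drifts and its spread tends to degrade over generations.
import numpy as np

rng = np.random.default_rng(0)

n_samples = 200        # finite training set per generation
n_generations = 50

# Generation 0 trains on "human" data: a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=n_samples)

for gen in range(1, n_generations + 1):
    # "Train" the next model: fit a Gaussian to the previous outputs.
    mu, sigma = data.mean(), data.std()
    # Its samples become the only training data for the next generation.
    data = rng.normal(loc=mu, scale=sigma, size=n_samples)
    if gen % 10 == 0:
        print(f"generation {gen:2d}: mean = {mu:+.3f}, std = {sigma:.3f}")
```

Swapping the Gaussian fit for an actual language model does not change the structure of the loop: whatever one generation fails to capture is unavailable to every generation trained after it.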





