Luca comment 1

What happens when language models are so pervasive that subsequent models are trained on language data that was largely produced by other models’ previous outputs? The snake eats its own tail, and a self-collapsing feedback effect ensues.
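A minimal toy sketch of that feedback loop (purely illustrative; none of the names or parameters below come from the comment above): fit a Gaussian to some data, sample the next generation's "training set" from the fit, and repeat. Under these assumptions the estimated spread drifts toward zero over generations, a simple analogue of the self-collapsing effect described.

<pre>
import numpy as np

# Hypothetical illustration: each "model" is a Gaussian fitted to the
# previous model's outputs, so no fresh real data ever enters the loop.
rng = np.random.default_rng(0)

n_samples = 20        # size of each generation's training set
n_generations = 200   # how many times a model is trained on model output

data = rng.normal(loc=0.0, scale=1.0, size=n_samples)  # real data, generation 0

for gen in range(n_generations):
    mu, sigma = data.mean(), data.std()           # "train" the model (Gaussian MLE fit)
    data = rng.normal(mu, sigma, size=n_samples)  # next model sees only generated data
    if gen % 25 == 0:
        print(f"generation {gen:3d}: estimated std = {sigma:.3f}")

# The estimated std tends to shrink generation after generation: the
# distribution loses its spread, a toy analogue of the feedback collapse.
</pre>

This is only a sketch of the mechanism, not a claim about any particular language model; the point is that sampling error compounds when every generation learns exclusively from the previous generation's outputs.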