Talk:Luca - The Autophagic mode of production


Pierre (22.01): Sounds to me like this Ouroboros pattern is not completely unrelated to hallucinations? It would also be interesting to consider the place of compression/recompression/data loss in this process (e.g. Ted Chiang's "AI as blurry JPEG" or "I Am Sitting in YouTube").
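
For illustration, that recompression loop can be made concrete. The sketch below is a toy analogue only (it assumes Pillow and numpy are installed and uses a hypothetical input.jpg): it re-encodes an image as JPEG generation after generation and prints how far it has drifted from the original. With a fixed quality setting the drift accumulates quickly over the first passes and then largely stabilises; YouTube-style re-uploads keep degrading because each pass also rescales and re-encodes with different parameters.

```python
# Toy sketch of iterative lossy recompression ("I Am Sitting in a Room" for images).
# Assumptions: Pillow and numpy are available; "input.jpg" is a hypothetical file.
from io import BytesIO

import numpy as np
from PIL import Image


def recompress(img: Image.Image, quality: int = 60) -> Image.Image:
    """Encode the image to JPEG in memory and decode it again (one lossy pass)."""
    buffer = BytesIO()
    img.convert("RGB").save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return Image.open(buffer).convert("RGB")


original = Image.open("input.jpg").convert("RGB")
current = original
for generation in range(101):
    # Mean absolute pixel difference from the original: the accumulated "data loss".
    drift = np.mean(np.abs(
        np.asarray(current, dtype=np.float64) - np.asarray(original, dtype=np.float64)
    ))
    if generation % 10 == 0:
        print(f"generation {generation:3d}: mean pixel drift = {drift:.2f}")
    current = recompress(current, quality=60)
```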


---

(Asker 24.01): I agree that the Ouroboros-symbolism is really suitable for genAI - it was also used in the weird collaboration between Benjamin Bratton and Blaise Aguera y Arcas to designate the problem of synthetic data in a textpocalypse - the ultimate modelling of the-world-as-hallucination:

"What happens when language models are so pervasive that subsequent models are trained on language data that was largely produced by other models’ previous outputs? The snake eats its own tail, and a self-collapsing feedback effect ensues.

The resulting models may be narrow, entropic or homogeneous; biases may become progressively amplified; or the outcome may be something altogether harder to anticipate. What to do? Is it possible to simply tag synthetic outputs so that they can be excluded from future model training, or at least differentiated? Might it become necessary, conversely, to tag human-produced language as a special case, in the same spirit that cryptographic watermarking has been proposed for proving that genuine photos and videos are not deepfakes? Will it remain possible to cleanly differentiate synthetic from human-generated media at all, given their likely hybridity in the future?"
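
To make the "self-collapsing feedback effect" in the quote a bit more tangible, here is a deliberately toy sketch (plain numpy, arbitrary numbers, nothing to do with an actual language model): each generation, a Gaussian "model" is refitted purely to samples drawn from the previous generation's model. Because the fit slightly underestimates the spread every round and the errors compound, the distribution tends to drift and narrow over the generations - a miniature version of the "narrow, entropic or homogeneous" outcome.

```python
# Toy sketch of a model trained only on its predecessor's synthetic outputs.
# All numbers (sample size, number of generations) are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_samples = 200          # synthetic data available per generation (hypothetical)
mean, std = 0.0, 1.0     # generation 0: the "human-produced" distribution

for generation in range(51):
    if generation % 10 == 0:
        print(f"generation {generation:2d}: mean={mean:+.3f}, std={std:.3f}")
    # Sample only from the current model (purely synthetic data)...
    synthetic = rng.normal(mean, std, size=n_samples)
    # ...and fit the next model to it. Maximum-likelihood fitting slightly
    # underestimates the spread each round, so diversity tends to shrink.
    mean, std = synthetic.mean(), synthetic.std()
```

The tagging question the quote raises would, in this toy picture, amount to keeping some identifiable fraction of fresh, non-synthetic data in each round rather than feeding the loop only its own outputs.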