Kendal - post // scrape // glitch


_,.-'~'-.,__,.-'~'-.,__,.-'~'-.,__,.-'~'-.,__,.-'~'-.,__,.-'~'-.,__,.-'~'-.,__,.-'~'-.,__,.-'~'-.,__,.-'~'-.,__,.-'~'-.,__,.-'~'-.,__,.-'~'-.,__,.-'~'-.,__,.-'~'-.,_


Across the web, hordes of self-organised internet communities find their sacred homes, tucked away from traditional online platforms. Hidden and occupying covert corners, these users reject the smooth corporate spaces in favour of porous, DIY, self-sustaining ones, where they are free to express themselves creatively with agency. Amongst their peers, users develop online languages, visual aesthetics, and digital folktales to connect and communicate their expression of self. Layered within low-tech tools, creative content is posted across forums, self-hosted servers, and open-source websites, as a way for users to distance themselves from the traditional stages of social media communication. Anonymity is favoured through the use of avatars, thus expressing an alternative identity, one that cannot be achieved in AFK situations. The return of personal 'cosy' websites stands in place of plastic standardised design templates and harks back to the homemade aspects of Web 1.0.

On the other side of the proverbial coin, generative AI (genAI) has firmly placed itself at the forefront of our digital landscape. Our feeds are inundated with machinic imagery and content made with DALL-E, Stable Diffusion, and countless others. Gathering content derived from the training data of large language models (LLMs), these image generators use data scraped from across the web, stealing from average users and artists alike. However, this often results in visuals more akin to Frankenstein’s monster, or else something so relentlessly bland it becomes devoid of any meaning whatsoever. What we are left with are 'mean images' (Steyerl, 2023), "renderings which represent averaged versions of mass online booty, hijacked by dragnets". These so-called 'averaged versions' reinforce the plastic smoothness of the digital landscape, removing any nuance or aspects of humanity (Boer, 2023) found in man-made creative expressions.

But how can we construct alternative narratives within AI? Could we '''glitch''' the system? Amongst the smooth, porousness can be found, and users will always uncover affordances within the software on offer. When websites could only handle text, and images were out of the realm of possibility, internet users created elaborate ASCII art to add a layer of visual elements to their personal pages. So what methods could be used to stretch the limits of genAI? Artists have already flocked to AI [https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/ poisoning tools] that obscure their artwork within training data, making renderings completely unpredictable. Other efforts include open-source [https://haveibeentrained.com/ websites] that allow users to see whether their personal data has ended up in the training sets of these LLMs, a resource the American artist [https://www.rhaberstroh.com/we-are-already-gathered Beck Haberstroh] has used to produce creative projects around identity and agency. We could train our own image generators as a form of critique, much like [https://fakeittillyoumakeit.lol/ Maya Man], and reclaim autonomy among the smooth. What if we could gather all these hacks and turn the bland outputs of AI image generators into a circular feed of reciprocity? This is the time to reintroduce some subversive spikes into our virtual lives.


_,.-'~'-.,__,.-'~'-.,__,.-'~'-.,__,.-'~'-.,__,.-'~'-.,__,.-'~'-.,__,.-'~'-.,__,.-'~'-.,__,.-'~'-.,__,.-'~'-.,__,.-'~'-.,__,.-'~'-.,__,.-'~'-.,__,.-'~'-.,__,.-'~'-.,_


       ___________
      |.---------.|
      ||         ||
      ||         ||
      ||         ||
      |'---------'|
       `)__ ____('
       [=== -- o ]--.
     __'---------'__ \
    [::::::::::: :::] )
     `""'"""""'""""`/T\ 
Bibliography
  • Berlant, L.G. (2023) On the inconvenience of other people. Durham: Duke University Press.
  • Boer, R. (2023) Smooth city: Against urban perfection, towards collective alternatives. Amsterdam: Valiz.
  • Campbell, H. (2005) ‘Considering spiritual dimensions within computer-mediated communication studies’, New Media & Society, 7(1), pp. 110–134. doi:10.1177/1461444805049147.
  • Lialina, O., Espenschied, D. and Buerger, M. (2009) Digital Folklore: To computer users, with love and respect. Stuttgart: Merz & Solitude.
  • Steyerl, H. (2023) ‘Mean images’, New Left Review, 140/141 (March–June). Available at: https://newleftreview.org/issues/ii140/articles/hito-steyerl-mean-images (Accessed: 31 October 2023).