Martyna - fuzzy fences, shiny coats, trees on fire

This page was last edited on 30 January 2024, at 15:11.

In early 2023, an unprecedented number of AI-generated images began appearing on social media feeds, ranging from fairly harmless and entertaining ones, like the image of the Pope wearing Balenciaga, to ones with much higher stakes, like the photo of a dense plume of smoke at the Pentagon.

Although visual fakes are not new, AI (and even game engines and simulators[1]) accelerates fake image generation far beyond the speed of manual photoshopping, resulting in a greater volume of convincing fakes than ever before.

I am interested in approaching the problem of image-perpetuated disinformation from the point of view of stages and modes of perceiving.

In response to this phenomenon, I argue that these AI events reflect an already lingering crisis of imagination and visual sensitivity.

For example, generative AI’s imagination seems limited by its core dataset. It has already been reported that AI is entering an autophagic stage, in which already generated images are fed back into the generative algorithm, making its biases more prominent and lowering the quality and precision of its outputs [2].
Our expectation of reality is similarly influenced by the images we see. This is true not only in a strictly Baudrillardian sense, where the hyperreal begins to shape the expectation and perception of the real, but also because our interactions with otherwise extraordinary events, like explosions and fires, are shaped almost exclusively by images, fictional and documentary alike. This ordinary-extraordinary becomes part of daily experience, dulling our attention or anaesthetising us [3] to events we should otherwise be very sensitive to.
Therefore, the outlandishness of the Pope wearing Balenciaga produced a sensory reaction, triggering a perception of the uncanny and capturing attention, but the ‘Pentagon event’ did not. Instead, multiple steps of verification and analysis by OSINT researchers, a form of communal sense-making, had to be implemented to filter the fake out.

This observation brings me to the second aspect of the new aesthetics of fact: in the absence of a perceptible uncanny, one must look more closely at the verification and investigation processes themselves in search of a new way of sensing.

The verification and investigation of online images involves either automated processes (e.g. reverse image search, detection algorithms) or manual comparative ones, in which the source image is inevitably contextualised and networked by other images.
In some cases, a digital forensic analysis will further organise the perceptual field [4] of the image. It will be sliced, extruded, warped and segmented; the segments outlined, counted, scaled and overlaid. The process will not only transform the image, but also generate digital artefacts echoing these gestures of verification and investigation: from lines and other simple geometries signifying the act of counting, connecting or measuring, to entire 3D reconstructions, projections and translations.

Perhaps, in the current aesthetics of images that pose as evidence, the network of connections and gestures that extends from the source image, bypassing the initial anaesthesia, has the potential to form new kinds of aesthetics, akin to a spider using its web as an extension of its sensory field. Looking more closely at these processes and artefacts can shift the discourse around AI fakes away from techno-doom and techno-optimism, and instead strengthen the agency of the viewer.


  1. I am referring here in particular to misinformation using screenshots of the war-simulation game Arma 3, in the context of Russia’s war in Ukraine and Israel’s offensive in Gaza. France 24 (2023). War-themed video game fuels wave of misinformation. [online] Available at: https://www.france24.com/en/live-news/20230102-war-themed-video-game-fuels-wave-of-misinformation.
  2. Alemohammad, S., Casco-Rodriguez, J., Luzi, L., Humayun, A.I., Babaei, H., LeJeune, D., Siahkoohi, A. and Baraniuk, R.G. (2023). Self-Consuming Generative Models Go MAD. [online] arXiv.org. doi:https://doi.org/10.48550/arXiv.2307.01850.
  3. Fuller, M. and Weizman, E. (2021). Investigative Aesthetics. Verso Books.
  4. Goodwin, C. (1994). Professional Vision. American Anthropologist, 96(3), pp. 606–633. doi:10.1525/aa.1994.96.3.02a00100.
