Martyna - fuzzy fences, shiny coats, trees on fire

== Sensing the uncanny & the digital aesthetics of fact. ==
'''Martyna Marciniak'''
In early 2023 an unprecedented number of AI-generated images began appearing on social media, ranging from fairly harmless and entertaining ones, like the image of the Pope wearing Balenciaga, to ones with much higher stakes, like the photo of an ‘explosion at the Pentagon’.
[[File:Balenciaga and pentagon2.png|thumb|775x775px]]
Although visual fakes aren't new, the speed at which AI (or even game engines and simulators [1]) has accelerated fake image generation far exceeds that of manual photoshopping, resulting in a greater volume of convincing fakes than ever before.


I am interested in approaching the problem of image-perpetuated disinformation from the point of view of the stages and modes of perceiving.


In response to this phenomenon, I argue that these AI events can serve as a reflection of the already lingering crisis of imagination and visual sensitivity.
<small>For example, generative AI’s imagination seems limited by its core dataset. It has already been reported that AI is entering an autophagic stage, whereby already generated images are fed back into the generative algorithm, making its biases more prominent and lowering the quality and precision of its outputs [2].</small>
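A toy numerical sketch of this feedback loop (an illustration only, not the experiment reported in [2], and assuming nothing beyond NumPy): even a ‘model’ as simple as a Gaussian, repeatedly re-fitted on its own samples, shows how the spread of what it can generate tends to collapse over generations.
<syntaxhighlight lang="python">
import numpy as np

# 'Autophagic' loop: a generative model is repeatedly re-fitted on its own
# outputs. The 'model' here is a Gaussian fitted to 50 samples; because each
# re-fit sees only a finite sample of the previous generation, the spread of
# the generated data tends to drift downwards - a crude analogue of the loss
# of quality and diversity described above.
rng = np.random.default_rng(seed=0)

real_data = rng.normal(loc=0.0, scale=1.0, size=50)  # stand-in for real images
mu, sigma = real_data.mean(), real_data.std()        # generation 0: fit on real data

for generation in range(1, 201):
    synthetic = rng.normal(mu, sigma, size=50)     # sample from the current model
    mu, sigma = synthetic.mean(), synthetic.std()  # re-fit on the model's own outputs
    if generation % 50 == 0:
        print(f"generation {generation:3d}: spread of generated data = {sigma:.3f}")
</syntaxhighlight>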


<small>Our expectation of reality is similarly influenced by the images we see. Not only in a strictly Baudrillardian sense, where the hyperreal begins to affect the expectation and perception of the real, but also because our interactions with otherwise extraordinary events, like explosions and fires, are shaped almost exclusively by images (fictional and documentary). This ordinary-extraordinary becomes part of the daily experience, dulling our attention or anaesthetising us [3] to events we should otherwise be very sensitive to.</small>


<small>Therefore, the outlandishness of the Pope wearing Balenciaga produced a sensory reaction, triggering the perception of the uncanny, but the ‘Pentagon event’ didn’t. Instead, the fake had to be filtered out through multiple steps of verification and analysis by OSINT researchers - a process of communal sense-making.</small>
This observation brings me to the second aspect of the new aesthetics of fact: in the absence of the perceptible uncanny, one must look more closely at the processes of verification and investigation in search of a new way of sensing.
<small>The verification and investigation of online images involve either automated processes (e.g. reverse image search, detection algorithms) or manual comparative ones, where the source image is inevitably contextualised and networked by other images.</small>
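A minimal sketch of the automated, comparative side of this process: a simple ‘average hash’ reduces each image to a small fingerprint so that near-duplicates can be found by comparison. It assumes Pillow and NumPy are available, the file names are placeholders, and real reverse-image-search and detection systems are considerably more sophisticated.
<syntaxhighlight lang="python">
import numpy as np
from PIL import Image

def average_hash(path, hash_size=8):
    """Reduce an image to a small binary fingerprint (average hash)."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = np.asarray(img, dtype=np.float32)
    return (pixels > pixels.mean()).flatten()  # 64 bits: brighter/darker than average

def hamming_distance(fingerprint_a, fingerprint_b):
    """Count differing bits; near-duplicate images give a small distance."""
    return int(np.count_nonzero(fingerprint_a != fingerprint_b))

# Placeholder file names, for illustration only.
suspect = average_hash("suspect_image.jpg")
reference = average_hash("reference_photo.jpg")
print("bits differing:", hamming_distance(suspect, reference))
</syntaxhighlight>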


<small>In some cases, a digital forensic analysis will further organise the perceptual field [4] of the image. It will become sliced, extruded, warped, segmented; the segments outlined, counted, scaled and overlaid. The process will not only transform the image, but also generate digital artefacts echoing these gestures of verification and investigation - from lines and other simple geometries signifying the act of counting, connecting or measuring, to entire 3D reconstructions, projections and translations.</small>
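To make these gestures concrete, a few lines of Pillow are enough to reproduce the most basic of them: outlining a segment of the image with a brightly coloured rectangle and labelling it. This is an illustration of the convention, not a tool used by any particular researcher; the file name and coordinates are placeholders.
<syntaxhighlight lang="python">
from PIL import Image, ImageDraw

# Placeholder file name and region coordinates, purely for illustration.
image = Image.open("suspect_image.jpg").convert("RGB")
draw = ImageDraw.Draw(image)

# Each box records a gesture of attention: 'look here, this geometry is impossible'.
suspect_regions = {
    "fuzzy fence": (40, 300, 220, 420),     # (left, top, right, bottom)
    "warped facade": (260, 120, 520, 280),
}
for label, box in suspect_regions.items():
    draw.rectangle(box, outline=(255, 0, 255), width=4)  # bright coloured outline
    draw.text((box[0], box[1] - 14), label, fill=(255, 0, 255))

image.save("suspect_image_annotated.jpg")
</syntaxhighlight>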
Perhaps, in the current aesthetics of images that pose as evidence, the network of connections and gestures that extends from the source image, bypassing the initial anaesthesia, has the potential to form new kinds of aesthetics, akin to a spider using its web as an extension of its sensory field. Looking more closely at these processes and artefacts can shift the discourse around AI fakes away from techno-doom and techno-optimism, and instead strengthen the agency of the viewer.




While the uncanniness of the Pope wearing Balenciaga produced a sensory reaction, immediately awakening attention to the high likelihood of its fakeness, the ‘Pentagon event’ didn’t. The image got reshared and reposted, sparking panic and a brief dip in the stock market.
One of the reasons for this sensory absence is perhaps the repetitive and passive interaction of ‘eyes that do not see’ (Buck-Morss, 1992) with images of catastrophic and extraordinary events in traditional and social media. The lack of an acute sensation or ‘gut feeling’ response to images of catastrophes makes the truth more vulnerable.
Once the image was confirmed to be fake, articles and essays focusing on the dangers of AI-generated images began appearing online - despite the fact that in the case of this Pentagon fake, it was not the image itself but the context of the claim attached to it that drew the attention of Twitter users. Arguably, the documentary visual is there not to inform, but to help generate a state of panic (Steyerl, 2023).
[IMAGE]
I would like to propose a different reading of this ‘AI event’: away from techno-doom and towards a definition of a new aesthetics of digital facts. In the process I would like to highlight the role of modes of perceiving, investigative gestures and notations as important aspects of collective sensing and sense-making, decentering the authority over truth (Fuller, Weizman 2022).
The object of the controversy (the explosion) is impossible to disprove or confirm based on the image alone. Instead, further analysis of the materiality of the image, the reality it portrays, and the interactions of the image as an online artefact needs to be taken into account.
'''Materiality of the image'''
Most of the analyses published on Twitter investigated the images through reverse image searching, logging duplicates, and comparing and collaging the original with confirmed photos of the Pentagon and of other known explosions. They focused on the glitches within the image: the researchers zoomed into high-detail fragments and outlined the boundaries of the impossible geometries they perceived (the phantasmagoria of the bending fence with its fuzzy borders, the delirious architecture of the facade of the supposed Pentagon building) with brightly coloured rectangles - a visual investigative convention of recording the gesture of organising the perceptual field [Goodwin].
[IMAGE]
Having one's attention drawn to the uncanny artefacts within the image allows one to notice the incoherent reality it portrays: the agency and circumstance of the camera that took the photo, the strangely ordered, frontal framing, the lack of movement in what one would expect to be a chaotic scene. The detached, disembodied and neutral perspective and composition could be considered yet another echo of AI’s persistent erasure of bias (Steyerl, 2023).
In order to regain the ability to sense the uncanniness of fake images (whether they are generated by AI, photoshopped, staged or screenshotted from war-game simulators [1]), we need to reconsider the structure of digital aesthetics. By seeking out an ever-expanding network of connections and gestures that extend from the source image we can enable collective sense-making, akin to a spider casting its web as an extension of its sensory field.


# <small>I am referring here in particular to misinformation using screenshots of the war-simulation game Arma 3, in the context of Russia’s war in Ukraine and Israel’s offensive in Gaza. France 24 (2023). ''War-themed video game fuels wave of misinformation''. [online] Available at: <nowiki>https://www.france24.com/en/live-news/20230102-war-themed-video-game-fuels-wave-of-misinformation</nowiki>.</small>
# <small>Alemohammad, S., Casco-Rodriguez, J., Luzi, L., Humayun, A.I., Babaei, H., LeJeune, D., Siahkoohi, A. and Baraniuk, R.G. (2023). ''Self-Consuming Generative Models Go MAD''. [online] arXiv.org. doi:<nowiki>https://doi.org/10.48550/arXiv.2307.01850</nowiki>.</small>
# <small>Fuller, M. and Weizman, E. (2021). ''Investigative Aesthetics''. Verso Books.</small>
# <small>Goodwin, C. (1994). ‘Professional Vision’. ''American Anthropologist'', 96(3), pp. 606–633. doi:<nowiki>https://doi.org/10.1525/aa.1994.96.3.02a00100</nowiki>.</small>
