Martyna - fuzzy fences, shiny coats, trees on fire

This page was last edited on 7 February 2024, at 11:19.
<div class="metadata">
<span id="Martyna - fuzzy fences, shiny coats, trees on fire"></span>
== fuzzy fences, shiny coats, trees on fire ==
'''Martyna Marciniak'''
</div>In early 2023 an unprecedented number of AI-generated images began appearing on social media, ranging from harmless and entertaining ones, like the image of the Pope wearing Balenciaga, to ones with much higher stakes, like the photo of an ‘explosion at the Pentagon’.
Although visual fakes aren't new, the speed at which AI (or even game engines and simulators [1]) can generate fake images far exceeds that of manual photoshopping, resulting in a higher volume of convincing fakes than ever before.


I am interested in approaching the problem of image-perpetuated disinformation from the point of view of stages and modes of perceiving.
[[File:pope.png|thumb|400x400px|center]]
[[File:pentagon.png|thumb|775x775px|center]]


In response to this phenomenon, I argue that these AI events can serve as a reflection of the already lingering crisis of imagination and visual sensitivity.
<small>For example, generative AI’s imagination seems limited by its core dataset. It has already been reported that AI is entering an autophagic stage, whereby already generated images are re-fed into the generative algorithm, making its biases more prominent and lowering the quality and precision of its outputs [2].</small>


<small>Our expectation of reality is similarly influenced by the images we see. Not only in a strictly Baudrillardian sense, where the hyperreal begins to affect the expectation and perception of the real, but also because our interactions with otherwise extraordinary events, like explosions and fires, are shaped almost exclusively by images (fictional and documentary). This ordinary-extraordinary becomes part of the daily experience, dulling our attention or anaesthetising us [3] to events we should otherwise be very sensitive to.</small>
While the uncanniness of the pope wearing Balenciaga produced a sensory reaction, immediately awakening attention to the high likelihood of its fakeness, the ‘Pentagon event’ didn’t. The image got shared and reposted, sparking panic and causing stocks to plummet. Once the image was confirmed fake, endless articles and essays focusing on the dangers of AI-generated images began appearing online. However, it was not the image itself, but the context of the claim attached to it that drew the attention of Twitter users. Arguably the documentary visual is there not to inform, but to help generate a state of panic (Steyerl, 2015).


<small>Instead of triggering the perception of the uncanny, the Pentagon image required multiple steps of communal verification and analysis by OSINT researchers to filter the fake out.</small>
One of the reasons for this absence of uncanny feeling when interacting with the 'Pentagon fake' is perhaps the repetitive and passive interactions with the images of catastrophic and extraordinary events in traditional and social media, of ''eyes that see too much -- and register nothing'' (Buck-Morss, 1992). The panic overtakes the senses, resulting in a lack of an acute uncanny sensation in response to images of catastrophes (real and faux), which makes the truth more vulnerable.
This observation brings me to the second aspect of the new aesthetics of fact: in the absence of the perceptible uncanny, one must look more closely at the verification and investigation processes in search of a new way of sensing.
<small>The verification and investigation of online images involves either automated (e.g. reverse image search, detection algorithms) or manual comparative processes, where the source image is inevitably contextualised and networked by other images.</small>


<small>In some cases, a digital forensic analysis will further organise the perceptual field [4] of the image. It will become sliced, extruded, warped, segmented; the segments outlined, counted, scaled and overlaid. The process will not only transform the image, but also generate digital artefacts echoing these gestures of verification and investigation -- from lines and other simple geometries signifying the act of counting, connecting or measuring, to entire 3D reconstructions, projections and translations.</small>
I would like to propose a different reading of this ‘AI event’ -- away from the techno-doom and towards a definition of new aesthetics of digital facts. In the process, I would like to highlight the role of the modes of perceiving, investigative gestures and notations as important aspects of collective ''sensing and sense-making'' (Fuller and Weizman, 2021).


The object of the controversy (the explosion) is impossible to disprove or confirm based on the image alone. Instead, further analysis of the materiality of the image, the reality it portrays, and the interactions of the image as an online artefact need to be considered.


Most of the analyses published on Twitter investigated the images through reverse image searching, logging duplicates, and comparing and collaging the original with confirmed photos of the Pentagon, before focusing on the glitches within the image. The researchers zoomed into high-detail fragments and outlined the boundaries of the impossible geometries they perceived (the phantasmagorias of the bending fence with its fuzzy borders, the delirious architecture of the facade of the supposed Pentagon building) with brightly coloured rectangles -- a visual record of the researcher ''organising the perceptual field'' (Goodwin, 1994).


Having the attention drawn to the framed uncanny artefacts within the original image allows one to notice the incoherent reality portrayed in the whole image: the agency and circumstance of the camera that took the photo, the strangely ordered frontal framing, the lack of movement in what one would only expect to be a chaotic scene. This detached, disembodied and neutral perspective and composition could be considered yet another way of echoing AI’s persistent erasure of bias (Steyerl, 2023).

To regain agency beyond relying on authorities of truth or resigning our trust in our ability to perceive, we can seek out a network of connections and gestures that extend from the material analysis and investigate the realities of the source image. This way we can enable a new aesthetics of fact characterised by collective sense-making, akin to a spider casting its web as an extension of its sensory field.

# <small>I am referring here in particular to misinformation using screenshots of the war-simulation game Arma 3, in the context of Russia’s war in Ukraine and Israel’s offensive in Gaza. France 24 (2023). ''War-themed video game fuels wave of misinformation''. [online] Available at: <nowiki>https://www.france24.com/en/live-news/20230102-war-themed-video-game-fuels-wave-of-misinformation</nowiki>.</small>
# <small>Alemohammad, S., Casco-Rodriguez, J., Luzi, L., Humayun, A.I., Babaei, H., LeJeune, D., Siahkoohi, A. and Baraniuk, R.G. (2023). ''Self-Consuming Generative Models Go MAD''. [online] arXiv.org. doi:<nowiki>https://doi.org/10.48550/arXiv.2307.01850</nowiki>.</small>
# <small>Fuller, M. and Weizman, E. (2021). ''Investigative Aesthetics''. Verso Books.</small>
# <small>Goodwin, C. (1994). ‘Professional Vision’. ''American Anthropologist'', 96(3), pp. 606–633. doi:<nowiki>https://doi.org/10.1525/aa.1994.96.3.02a00100</nowiki>.</small>
 
[[File:collage.png|thumb|900x900px|center]]
 
<noinclude>
[[Category:Content form]]
</noinclude>
