This sentence, "As we retrieve information stored in vectors, we therefore navigate semantic spaces", reminds me of the experiment described in this paper, which is about reproducing mental images. The scientists are able to decode, from brain activity, both images observed by the human eye and images merely imagined. Previous techniques relied on brain scanners and computer vision. This method proposes to semantise the visual information and then use the semantic meaning (words) as prompts for a text-to-image model guided by CLIP, which generates the image.
As I see it, these scientists transform mental images into standardised sentences whose meaning cannot capture the sensations or aesthetics of a dream, which are surely the most important thing.
Do you think that the construction of meaning through the local similarity of vectors could help in this case? Don't you think that by transforming unconscious or computational information into semantics we are limiting communication to its textual and rational part?
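(To make the "local similarity of vectors" idea concrete: a minimal sketch of how meaning emerges from neighbourhoods in an embedding space. The three-dimensional word vectors below are invented for illustration; real embedding models like CLIP's text encoder use hundreds of dimensions learned from data.)

```python
import math

# Toy embedding space: hand-made vectors, NOT real learned embeddings.
# The geometry is chosen so that "dog" and "cat" point in similar directions.
embeddings = {
    "dog": (1.0, 0.9, 0.1),
    "cat": (0.9, 1.0, 0.2),
    "car": (0.1, 0.2, 1.0),
}

def cosine_similarity(a, b):
    """Similarity of direction between two vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest_neighbour(word):
    """The word whose vector lies closest in direction — its 'local meaning'."""
    others = (w for w in embeddings if w != word)
    return max(others, key=lambda w: cosine_similarity(embeddings[word], embeddings[w]))

print(nearest_neighbour("dog"))  # → cat
print(nearest_neighbour("car"))
```

The point of the sketch: "meaning" here is nothing but relative position — "dog" means something like "cat" only because their vectors are neighbours, which is exactly the local, relational notion of semantics the question is probing.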
---
Hi! Curious how you would differentiate the semantic space from the latent space? I think most artists working with the vectorial conception of machine learning would rather view the latent space in spatial terms, as a form of sculpting - e.g. https://a-desk.org/en/tema-del-mes/latent-space-ai-art/ (asker)
---
Hey Pierre,
it is a very interesting piece with well-connected sentences. This is also a question I want to ask about vector space and latent space in machine learning.
You mentioned spatial proximity, and I wonder how this shifts our understanding of perception, and why this way of seeing matters?
Many thanks (Winnie)