Talk:Pierre - Shaping vectors


This sentence: "As we retrieve information stored in vectors, we therefore navigate semantic spaces" reminds me of the experiment described in this paper. It is about reproducing mental images. The scientists are able to decode, from brain activity, both images observed by the human eye and images merely imagined. Previous techniques relied on brain scanners and computer vision. This method instead proposes to semantise the visual information and then use a text2image GAN, taking the semantic meaning (words) as prompts for the model (CLIP) that will generate the image.
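
A minimal sketch of that decode-then-generate pipeline, only to fix the stages (brain activity → words → prompt → image); every function name below is a hypothetical stand-in, not from the paper:

    # Hypothetical sketch of the decode-then-generate pipeline described above.
    # None of these functions come from the paper; they only mark the stages.
    import numpy as np

    def decode_to_words(brain_activity: np.ndarray) -> list[str]:
        """Stand-in for a decoder mapping brain signals to words (semantics)."""
        # A real decoder would be a trained model; here we just pretend.
        return ["a", "red", "bicycle", "by", "the", "sea"]

    def generate_image(prompt: str) -> np.ndarray:
        """Stand-in for the CLIP-guided text-to-image generator."""
        # A real generator would return pixels conditioned on the prompt;
        # this placeholder just derives a deterministic random "image".
        rng = np.random.default_rng(abs(hash(prompt)) % 2**32)
        return rng.random((64, 64, 3))

    brain_activity = np.random.default_rng(0).random(1024)  # fake signal
    prompt = " ".join(decode_to_words(brain_activity))
    image = generate_image(prompt)
    print(prompt, image.shape)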

As I see it, these scientists transform mental images into standardised sentences whose meaning cannot capture the sensations or aesthetics of a dream, which is surely the most important thing.

Do you think that the construction of meaning through the local similarity of vectors could help in this case? Don't you think that by transforming unconscious or computational information into semantics we limit communication to its textual and rational part?

---

Hi! Curious how you would differentiate the semantic space from the latent space? I think most artists working with the vectorial conception of machine learning would rather view the latent space in spatial terms, as a form of sculpting - e.g. https://a-desk.org/en/tema-del-mes/latent-space-ai-art/ (asker)
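
One concrete version of that spatial/sculpting intuition is walking between two latent points. A minimal sketch under the assumption of Gaussian latents; the slerp formula and the 512 dimensions are conventional choices, not anything from the linked article:

    # Sketch of "sculpting" as movement through latent space: interpolating
    # between two latent points. The latents are random stand-ins for the
    # inputs of some generative model.
    import numpy as np

    def slerp(z0: np.ndarray, z1: np.ndarray, t: float) -> np.ndarray:
        """Spherical interpolation, a common choice for Gaussian latents."""
        cos_omega = z0 @ z1 / (np.linalg.norm(z0) * np.linalg.norm(z1))
        omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
        return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

    rng = np.random.default_rng(42)
    z_a, z_b = rng.standard_normal(512), rng.standard_normal(512)
    # A walk between two "images": 8 evenly spaced points along the arc.
    path = [slerp(z_a, z_b, t) for t in np.linspace(0, 1, 8)]
    print(len(path), path[0].shape)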

---

Hey Pierre,

it is a very interesting piece with well-connected sentences. I also have a question about the vector space and the latent space in machine learning.

You mentioned spatial proximity, and I wonder how this shifts our understanding of perception, and why this way of seeing matters?

Many thanks Siusoon (talk) 11:19, 26 January 2024 (UTC)

Pierre response

To the first point: the example you propose is super interesting. Specifically about this brain-to-image experiment, it depends on who uses such technology. If it's people with, e.g., a speech impairment or locked-in syndrome, then not expressing the exact sensual subtleties of what is being thought pales in comparison with having the ability to communicate at all.

And I also don't think we can ever capture the feeling of a dream, whether by speech, pen and paper, or computer :)

I would also say that meaning happens _thanks to_ the local similarity of vectors, as some sort of Gestalt phenomenon. A vector only makes sense in relation to the other vectors around it.
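
A tiny illustration of that Gestalt point, with toy 3-dimensional vectors invented for the example; the meaning of "dog" is carried by which vectors sit nearest to it, not by the numbers themselves:

    # "Meaning through local similarity": a vector is characterized by its
    # neighbourhood. Vectors here are made up for illustration.
    import numpy as np

    words = ["dog", "cat", "wolf", "car", "truck"]
    vecs = np.array([
        [0.90, 0.80, 0.10],  # dog
        [0.85, 0.75, 0.15],  # cat
        [0.80, 0.90, 0.05],  # wolf
        [0.10, 0.20, 0.95],  # car
        [0.15, 0.10, 0.90],  # truck
    ])

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # "dog" only makes sense relative to what sits near it:
    query = vecs[0]
    neighbours = sorted(words[1:], key=lambda w: -cosine(query, vecs[words.index(w)]))
    print(neighbours)  # ['cat', 'wolf', 'car', 'truck'] - animals rank above vehicles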

@asker: I would not differentiate the semantic space and the latent space :) Rather, I would consider the latent space a subset/kind of semantic space. But because vectors get rid of the exactness we used to have with binary, the latent space is this space of approximation, of not-quite-exactly-what-one-wants (the kind of meaning that can resonate with artists, I guess ;) ).

Expanding this text, I precisely want to investigate this aspect of sculpting the space: I think artists do it in one way, but large corporations (Microsoft, OpenAI) also do it in a very radical way, sculpting the semantic space before the product even reaches the users!

(Thanks for the A Desk reference!)

@winnie Yes, good point. I think the main shift is from the preciseness of the binary (truth tables, booleans, 0/1, etc.) to the approximation of the vector (similarity rather than equality, good enough rather than exact). So that's one shift in encoding/communication practices. I think the other part is about its fittingness to human thought: computation as logical and procedural operations might feel very alien to humans (let alone to all the other animals), but maybe a vector-based conception makes it a bit less alien (with mixed consequences to be determined).
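
To make that shift concrete: a binary comparison asks whether two things are equal, a vectorial one asks whether they are close enough. The embeddings and the 0.9 threshold below are toy values, not from any real model:

    # The shift from equality to similarity, in miniature.
    import numpy as np

    def exact_match(a: str, b: str) -> bool:
        return a == b                      # boolean: 0/1, no middle ground

    def similar_enough(u: np.ndarray, v: np.ndarray, threshold: float = 0.9) -> bool:
        cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
        return cos >= threshold            # graded: "good enough" beats "exact"

    print(exact_match("couch", "sofa"))    # False: the strings differ
    couch = np.array([0.70, 0.30, 0.10])   # toy embeddings; a real model would
    sofa  = np.array([0.68, 0.33, 0.12])   # place the two words close together
    print(similar_enough(couch, sofa))     # True: near-identical directions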