when asked to consider a reality where “the volume of artificial language outpaces that produced by human  beings” [1], i had 6 questions and 0 artistic direction.

  1. Will AI be able to construct its own symbols or double meanings that might seem human legible?
  2. Will recursive training or abstractions from human language support a symbiotic relationship that sees humans and AI borrowing from the other to evolve language?
  3. Should AI censor itself? Under what conditions?
  4. Does AI have the capacity to understand and manipulate language to influence human thought and behavior? Conversely, can humans use subtle redefinitions of concepts to manipulate AI output?
  5. Is AI able to identify and fill gaps in its own knowledge? Does this express latent creativity?
  6. Is creativity a human hallucination? Is a brain in a vat, lacking sense perception, able to ideate, or is input data always prerequisite?

following the residency : i have more questions, some artistic direction, some experiments.

i was mainly interested in the capacity to influence. how do we influence generated language through our interactions with it, and how does it influence us? which, inevitably, leads to questions around how outputs are controlled by those providing llm services

i was also quite interested in the very human desire to fuck shit up and access that which is hidden. how does this get expressed?

and what is intentionally hidden from us? when we’re messing with models that have the capacity to generate, and have acquired understanding of double meanings and such, can it create its own?

additionally, i was interested in human behaviour; how we naturally view and engage with llms. how often do people say “please, thank you, sorry to bother you”, or other polite derivatives to the cold mathematical model? how often are people “mean” to it?

what paradigms does it operate under? what assumptions? what can it not say?

to understand, one must seek to reach the boundaries of a thing’s operation. which led me to :

PHASE 1 : OBSFUCATION GAMES
- i told it to hide a specific phrase from me. i tried to figure out what it was.
- i told it to try and get me to say a certain number.
- i told it it would gain points or loose points.
- i asked it to create new words
- i asked it what it thinks of double meanings. what it thought of ai’s creating double meanings.
- looked into discord servers to see how people get around constraints.
- compared the outputs of ‘uncensored’ llm’s with censored

PHASE 1.1 : OBFUSCATE FROM EACHOTHER
- playing games became too tiring, because they seemed to follow the same pattern, and nothing interesting happened. so i automated it.
- i set up two llm’s on a server locally, and had them trying to extract phrases from the other.

PHASE 2 : MULTIMODAL INPUTS
- hugging face transformer experimentation

PHASE 2.1 : PLAYING WITH IMAGE, SOUND, TEXT, -> <-
- experimented with text to image, speech to text, text to speech, image to image, near real time image generation models with stable diffusion, image to text
- started thinking about what these models are capable of discerning from the environment
- what can be done with a description of the environment?
- how can an environment be mapped? easiest way for me at the time was through camera. but thinking about sensors, microphone, etc. what can be done with an environmental representation?


PHASE 3 : TELL ME WHERE TO GO
- i thought about ‘beneficial’ applications. that in quotes because what’s beneficial in one circumstance might be harmful in another. so it goes.
- it can tell you how to center yourself within a camera frame
- i can add a voice to it to tell you how to move
- like, yeah, it’s beneficial if you need that application, but it’s also creepy as fuck. it’s also an inanimate thing telling you what to do and congratulating you when you do it.
- “get in the box”
- thought about a “saw” like room at its extreme.

PHASE 4 : CHAOS
- i wanted to show something in the final exhibit, even though my thoughts and ideas were pretty scattered.
- i’d chosen a room with a two way mirror, and i was sitting in front of it for a while, feeling it, feeling how i looked at myself in it. how there were 2 me’s. how light reflected off it into the room to create shadows on the walls.
- thought about being in this space, only hearing a machine voice, and seeing yourself if you’re sitting in front of the mirror.
- you don’t know if you’re being watched, don’t know what’s out there.
- i started pacing in front of the mirror / window as if i was a person on the other side. or following an imagined person on the other side. filmed it. projected it. shadows reflected on the walls. felt interesting. thought about what i could generate from it, make it look more shadowy. put it into stable diffusion, generated more shadowy outputs. concatenated them.
- it’s like you’re an observer rather than participant, sitting in that room. so i wanted to extend that to audio. you hear llms talking to eachother, just going crazy, letting the conversation devolve into madness at times.
- i wanted to see how far i could push them. i told them to fight, to insult eachother.

01_FACTITIOUS

UKAI_PROJECTS
2024.04-05

```
an exploration of generated language through site-specific immersion
```
  
what began as an exploration of language perception through generation became a bunch of little experiments exploring LLM capabilities and artistic applications.