2020
video installation
video, 6′ 00”
object: Alexa (riverstone)
In our project we aim to map and present relations within and between systems: living and ‘artificial’, organic and organized, relational and recursive ones. Since ecology is the terrain of relations and interactions between organisms and systems, living and inorganic alike, we focus on the ecologies of artificial intelligences, in multiple senses:
First and foremost, in the vein of William Gibson’s famous doubt as to whether our grandchildren will even understand the distinction between things that compute and things that do not, and in line with the Kittlerian insight into the role of medial determinants (or technical a prioris) in any relation to ourselves and to nature, we pursue how AIs are shaping the very notion of ecology and opening up possibilities of discourse with media/art.
Relying on Kittler’s vision that we are heading towards a future in which the difference between the ‘artificial’ and the ‘natural’ will itself be artificial (because nature, potentially all ecosystems, will become interfaces), AIcology tries to articulate by means of media/art our need to reflect on the “schematism of perceivability” brought about by different AIs – from ‘basic’ ones such as sensors and DSP chains to much more ‘advanced’ ones like Google’s Deep Dream.
Our work examines AIs’ ecology in this schematic sense by audiovisually and conceptually relating and (re)connecting different DSPs (movie camera, field recorder) and AIs (Deep Dream, Alexa) to themselves and to each other via different mechanisms: mixed-media video loops, and recursive feedback in pattern recognition and generation.
Put bluntly, we are asking some basic questions of the media-art/AI/ecology triangle: How does an AI see the world (and thus, since every medium is now permeated by DSPs, how are we able to see it), itself, other AIs – and us, who are perhaps still able to see them seeing? What sort of ecologies do these strange loops invite us towards?
***
Thematically, AIcology is most prominently influenced by the seminal thoughts of Friedrich Kittler and Douglas Hofstadter, which we plan to translate into the media/art realm by relying and reflecting on some of their main concepts: technical a prioris, the schematism of perceivability, strange loops, and self-reference and recursion in signal-processing systems.
If, according to Kittler’s Gramophone, Film, Typewriter, “only that amount exists” of human beings (and, we might add, of their broader ecosystems as well) which media can store and transmit, and ultimately “numbers and blueprints” become the key to all creation, then we find it quite evident that the connection between media/art and ecology is indeed pivotal and intrinsic.
What is more, once these “numbers and blueprints” are organized into complex systems of information processing, storage and computation – especially in the field of pattern recognition – they not only function as the necessary (technical) conditions of possibility for any perceptual and processual relation of human beings to their ecosystems, but they also create and ‘inhabit’ ecosystems of their own.
That is, they are not mere prostheses by means of which any ecology becomes possible and accessible – they are themselves also worthy of our ecological examination via meta/medial and art techniques. How do they relate to ‘themselves’, to each ‘other’, to other organisms and systems – and to us? The actual means by which we will store, process and transmit these (self-referential) questions are the following:
***
We use a digital movie camera capable of 4K RAW recording (allowing for different color gradings of the ‘same’ material, and thus leading even at this ‘basic’ level to fundamental questions about what ‘we’ see depending on different LUTs and gammas), together with a high-resolution field-recording audio rig, to capture forms of ‘organic nature’ (leaves, stones, etc.).
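As a minimal illustration of this ‘basic’-level question, the following Python sketch renders the same normalized linear sensor values through two different transfer curves (a Rec.709-style gamma and a toy log curve); the curves and values are illustrative, not the camera’s actual color science.

```python
import numpy as np

linear = np.linspace(0.0, 1.0, 5)   # normalized linear sensor values

# Rec.709-style gamma: dark values get a linear ramp, the rest a power curve
rec709 = np.where(linear < 0.018,
                  4.5 * linear,
                  1.099 * linear ** 0.45 - 0.099)

# a toy 'log' curve standing in for a flat LUT
flat_log = np.log1p(8.0 * linear) / np.log1p(8.0)

print(np.round(linear, 3))     # the 'same' material...
print(np.round(rec709, 3))     # ...seen through one curve...
print(np.round(flat_log, 3))   # ...and through another
```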
Then, as a second step, we initiate our first “strange loop”, generating positive feedback loops by recording the main HDMI output of the camera. This way we delve into the question not only of how DSPs ‘see’, but of how they see their own seeing; and of how we, human computational machines, perceive such self-referentiality – all the more so since the forms resulting from such infinitely regressive processes are inherently interesting from an artistic perspective.
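A minimal software analogue of this first loop, assuming only a default webcam and OpenCV (the slight zoom and blend stand in for a monitor being re-filmed; all parameters are illustrative):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)      # any live source stands in for the movie camera
ok, frame = cap.read()
loop = np.zeros_like(frame)    # the 'memory' of the system's own past output

while ok:
    h, w = loop.shape[:2]
    # zoom/rotate the previous output slightly, as a re-filmed monitor would drift
    M = cv2.getRotationMatrix2D((w / 2, h / 2), 1.0, 1.03)
    recycled = cv2.warpAffine(loop, M, (w, h))
    # positive feedback: blend the live frame with its own re-seen past
    loop = cv2.addWeighted(frame, 0.3, recycled, 0.7, 0)
    cv2.imshow("feedback", loop)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
    ok, frame = cap.read()

cap.release()
cv2.destroyAllWindows()
```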
Then we take this ‘basic’ AI’s (the camera’s and the recorder’s own DSP chains’) perception/creation – first a still image taken from this process, later videos as well – as the starting point of the second relational/self-referential media/art journey: we ask pattern-recognition algorithms to describe to us what they see, and feed this output back into a ‘smarter’ AI’s input: Google’s Deep Dream engine.
This can function in two ways: either by ‘prescribing’ patterns for it to look for in the image (Deep Style), or by letting it ‘just look for anything’ (Deep Dream). Once the new images have been created in either of these ways, we go back to the beginning of the first loop, use these images (and later videos) for the subsequent loop(s), and then feed the resulting stills (and videos) back to the Google engine again…
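A minimal sketch of such a recursive ‘dreaming’ loop, assuming TensorFlow’s InceptionV3 as a stand-in for Google’s Deep Dream engine (the layer names, step sizes and loop counts are our assumptions, not the Deep Dream Generator’s actual settings):

```python
import tensorflow as tf

# InceptionV3 as the 'dreaming' pattern recognizer run in reverse
base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet")
layers = [base.get_layer(n).output for n in ("mixed3", "mixed5")]
dream_model = tf.keras.Model(inputs=base.input, outputs=layers)

def dream_step(img, step_size=0.01):
    """One gradient-ascent step: push the image towards whatever
    patterns the chosen layers already 'see' in it."""
    with tf.GradientTape() as tape:
        tape.watch(img)
        activations = dream_model(tf.expand_dims(img, 0))
        loss = tf.add_n([tf.reduce_mean(a) for a in activations])
    grad = tape.gradient(loss, img)
    grad /= tf.math.reduce_std(grad) + 1e-8
    return tf.clip_by_value(img + step_size * grad, -1.0, 1.0)

def recursive_dream(img, loops=5, steps=50):
    """The strange loop itself: each dreamed output becomes the next input."""
    img = tf.convert_to_tensor(img, dtype=tf.float32)  # expects values in [-1, 1]
    for _ in range(loops):
        for _ in range(steps):
            img = dream_step(img)
    return img
```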
Down (or up) this spiral of recursion we will discover not only how ‘simple’ DSP chains perceive their own perception, but also how other systems (image-recognition engines such as Alexa’s, or Google’s Deep Dream, which serves as a ‘reversed’ image-recognition engine) perceive images that other computational ecosystems have stored, transmitted and/or created.
This holds whether those images are their own creations (as when Deep Dream pictures are fed back in a recursive manner) or originate from other systems (as when we ask Alexa to recognize pictures and use its answer as a pattern of departure for Deep Dream, or when we input Alexa’s recognition into other image-search engines and feed those pictures into Deep Dream).
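The glue between these systems can be pictured as follows; the three stubs below are purely hypothetical placeholders (Alexa, for instance, exposes no image-recognition call of this shape), marking only where each service would sit in the chain:

```python
def recognize_image(path):
    """Stand-in for Alexa-style image recognition (hypothetical)."""
    return ["stone", "leaf"]

def search_images(labels):
    """Stand-in for an image-search engine (hypothetical)."""
    return ["found_0.jpg"]

def deep_dream(path, guide):
    """Stand-in for the Deep Dream Generator (hypothetical)."""
    return "dreamed_" + path

def cross_system_loop(seed, rounds=3):
    image = seed
    for _ in range(rounds):
        labels = recognize_image(image)    # what does one AI say it sees?
        image = search_images(labels)[0]   # images other ecosystems have stored
        image = deep_dream(image, labels)  # 'over-dream' the borrowed picture
    return image

print(cross_system_loop("feedback_still.jpg"))
```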
Lastly, on yet another meta-layer: the more such generated pictures are at hand, the easier it becomes to create videos from them – which, recursively again, can be used to generate positive-feedback video loops, or be ‘over-dreamed’ by the Deep Dream Generator’s video-maker function.
Positive feedback and recursion will be the basis of the audio realm as well: both the sounds generated during the video looping and the sounds of Alexa and other picture-to-sound ‘translation’ algorithms will be recorded, remixed and used in a similarly recursive manner.
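In the audio realm the same principle can be sketched as a simple feedback delay, assuming a mono recording (the filename, delay time, gain and pass count below are all illustrative):

```python
import numpy as np
from scipy.io import wavfile

rate, dry = wavfile.read("alexa_take.wav")     # hypothetical source recording
dry = dry.astype(np.float32) / 32768.0

def feedback_delay(signal, delay_s=0.25, gain=0.6, passes=4):
    """Re-feed the signal into its own delayed copy, pass after pass -
    the audio analogue of the video feedback loop."""
    d = int(delay_s * rate)
    out = signal.copy()
    for _ in range(passes):
        wet = np.zeros(len(out) + d, dtype=np.float32)
        wet[:len(out)] += out
        wet[d:] += gain * out                    # delayed, attenuated copy of itself
        out = wet / max(1.0, np.abs(wet).max())  # normalize to avoid clipping
    return out

wavfile.write("alexa_recursed.wav", rate,
              (feedback_delay(dry) * 32767).astype(np.int16))
```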