Uncanny dialogues with the machine
In early September, we had the pleasure of opening the dialogues seminar series with a discussion with artist Albena Baeva and composer Sam Salem. In the works of both artists, the creative explorations take place in the terrain of the not-quite, the ambiguous and the uncanny. In Baeva’s works, experimental, combinatory creatures that fail to coherently resemble any existing life-form of either human or animal origin become instead mythical and divine beings, more than their (hypothetical) realistic counterparts. In Salem’s Bury Me Deep (2020), the timbre of a solo trombone is synthesised into sounds that are neither realistic nor synthetic, but something in between, or perhaps outside such a spectrum of realism altogether. It is precisely this ambiguity and imperfection that allows the audience to construct meaning and makes the artworks interesting.
For both, the artistic collaboration with AI models is based on the exploration of limits, glitches and the unknown. But what happens to the aesthetics of the ambiguous when AI models gradually (or very quickly) develop and start to make fewer and fewer mistakes? Within just the past few months, visual AI models seem to have improved their performance by leaps and bounds. Salem interestingly pointed out that in video games, in contrast to the fine arts, the aesthetic development has moved from the abstract to the representational. Will this be the direction of development for AI models as well? Baeva noted that this is already happening with models such as Midjourney, which seem to generate images of such high quality that they become at once fascinating and boring.
We also discussed how text-to-image models have changed the presentation of digital images to include not only the visual element but also the prompt used to create it. When all that is left for the human counterpart to do is to create a context for the machine to fill with content, will that continue to be a satisfying role for the artist? What if, as Salem speculated, there were text-to-music models that generated musical content from a prompt indicating the style, structure and instrumentation of the work? Would we still think of this as composing? Furthermore, could an artist resist this development by keeping their work process and artworks strictly offline, where they can neither become an element in an endless machine-driven art factory nor a part of someone else’s dataset? Or is the automated future inevitable?
Another interesting take on machine-artist collaboration was Baeva’s Dangerous female creatures living in the deep (2020), in which StyleGAN-generated, stereotypically sexist depictions of women are painted in minute detail by the artist on canvas. In this work process, the artist becomes a human printer for the algorithm, humorously inverting the concepts of The Artist and the tool. The laborious, tactile effort of creating oil paintings resonated with Salem’s reflection on what the function of composing is. Maybe the process of making art is itself the core, and the works – ever imperfect and replaced by new inspirations – simply a by-product. To the extent that this is the case, artists will have very little to fear from creative AI.
The recording of the dialogue seminar can be accessed [here].
– Anna-Kaisa Kaila