dialogues4: probing the future of creative technology
Guests: Paola Torres Núñez del Prado (PE/SE) and Laura Devendorf (USA)
Part of the seminar series dialogues: probing the future of creative technology
Paola Torres Núñez del Prado (PE/SE) is a transdisciplinary artist and researcher working with textile assemblages and embroideries, painting, sound, text, digital media, interactive art, A.I. and video. She explores the boundaries and connections between tactility, the visual, and the aural, relating the human voice and the sounds of nature to synthetic ones often considered less harmonious, such as machine or digital noises. Exploring the limits of the senses, she examines the concepts of interpretation, translation, and misrepresentation to reflect on mediated sensorial experiences, while questioning the cultural hegemony within the history of Technology and the Arts.
She is the recipient of the Stockholms stads kulturstipendium 2022 and of an Honorary Mention in the Prix Ars Electronica 2021. She has also been awarded the Artists + Machine Intelligence Grant from Google Arts and Culture and Google AI in 2020, and was the winner of the “Local Media: Amazon Ecoregion” contest of Vivo Arte.mov in Brazil in 2013. Her works are in the collections of the Swedish Public Art Agency and the Malmö City Museum.
Laura Devendorf, assistant professor of information science with the ATLAS Institute, is an artist and technologist working predominantly in human-computer interaction and design research. She designs and develops systems that embody alternative visions for human-machine relations within creative practice. Her recent work focuses on smart textiles—a project that interweaves the production of computational design tools with cultural reflections on gendered forms of labor and visions for how wearable technology could shape how we perceive lived environments. Laura directs the Unstable Design Lab. She earned bachelor’s degrees in studio art and computer science from the University of California, Santa Barbara before earning her PhD at the UC Berkeley School of Information. She has worked in the fields of sustainable fashion, design, and engineering. Her research has been funded by the National Science Foundation, has been featured on National Public Radio, and has received multiple best paper awards at top conferences in the field of human-computer interaction.
28 March 2023, 17:00 – 18:00 CEST
Artistic and legal-philosophical perspectives on deep fakes
Part of the seminar series dialogues: probing the future of creative technology
Ania Catherine and Dejha Ti are an award-winning experiential artist duo who founded their collaborative art practice, known as Operator, in 2016. Referred to as “the two critical contemporary voices on digital art’s international stages” (Clot Magazine), their expertise collides in large-scale conceptual works recognizable for their poetic approach to technology. Ti’s background as an immersive artist and HCI technologist, and Catherine’s as a choreographer, performance artist and gender scholar, make for a uniquely medium-fluent output, bringing together environments, technology and the body.
Operator has been awarded a Lumen Prize (Immersive Environments), ADC Award (Gold Cube), S+T+ARTS Prize (Honorary Mention), and MediaFutures (a European Commission funded programme). They’ve been speakers at Christie’s Art+Tech Summit, Art Basel, MIT Open Doc Lab, BBC Click, Bloomberg ART+TECHNOLOGY, Ars Electronica, Contemporary Istanbul, and CADAF. Ti and Catherine are originally from Los Angeles and currently based in Berlin.
Title: Soft Evidence – Synthetic cinema as contemporary art
Abstract:
Art has always explored notions of truth and fiction, and the relationship between image and reality. Synthetic media’s capability to depict events that never happened makes that relationship more complex than ever. How can artists use synthetic media/deepfakes creatively, and start conversations about ethics and the social implications of unreliable realities? In this presentation, artist duo Ania Catherine and Dejha Ti of Operator discuss their work Soft Evidence, a slow synthetic cinema series created as part of MediaFutures in 2021. They will detail how research and interviews with experts on media manipulation in law, education, and activism informed their creative and technical processes. As experiential artists, Ti and Catherine plan to exhibit Soft Evidence as an installation, a site for the public to learn about and process a rapidly changing media landscape through immersion and feeling states.
Katja de Vries is an assistant professor in public law at Uppsala University. Her work operates at the intersection of IT law and philosophy of technology. Her current research focuses on the challenges that AI-generated content (‘deepfakes’ or ‘synthetic data’) poses to data protection, intellectual property and other fields of law.
Title: How can law deal with the counterfactual metaphysics of synthetic media?
Abstract:
How can law deal with deep fakes and synthetic media? Law is influenced by the politics, norms and ontologies of the society in which it operates, but is never exhausted by them. Law first and foremost obeys an already existing system of parameters, rules, concepts and ontologies, to which new elements can only be incrementally added. This contributes to legal certainty and foreseeability, as well as to law’s slowness to adapt.
The EU legislator is trying to adapt to new digital challenges and opportunities by creating a true avalanche of legislation. In the case of deep fakes and other synthetic media, however, the question is whether operative concepts such as transparency and informed consent, and dichotomies such as fact v. fiction, human v. machine, etc., work well with the counterfactual metaphysics of synthetic media, namely the articulation of what is possible into digital mathematical spaces of seemingly endless alternative realities, and extensions in time and space. More concretely: is it important to simply flag that we are interacting with a synthetic work? Can we consent to living on forever in disseminating digital alter-egos?
2 February 2023, 15:00 – 16:00 CET
dialogues2: probing the future of creative technology
Guests: Albena Baeva and Sam Salem
Part of the seminar series dialogues: probing the future of creative technology
Albena Baeva (https://albenabaeva.com)
Abstract: I will talk about the relationship between feminism, biases and algorithms, a topic that plays a central role in my recent work. Algorithms are not only automating many production processes but are also already shaping our perception of reality. AI is becoming a curator and creator of content, while humans are left to engage in poorly paid mechanical activities in content control factories or in the preparation of training databases. Speculating on the imagery of this new reality is what inspires me to keep collaborating with different neural networks to create different artworks. My presentation will show how I, as an artist, simultaneously use and critique new technologies in my works.
Bio: Albena Baeva works at the intersection of art, technology, and social science. In her interactive installations for urban spaces and galleries, she uses ML and AI, augmented reality, physical computing, creative coding, and DIY practices. Albena has two MAs, in Restoration and in Digital Arts, from the National Academy of Art in Sofia. She received an Everything is Just Fine commission from the Bulgarian Fund for Women (2019), won the international Essl Art Award for contemporary art (2011) and the VIG Special Invitation (2011). Albena is a co-founder of Symbiomatter: experimental arts lab, the studio for interactive design Reaktiv, the first Bulgarian gallery for digital art, gallery Gallery, and the AR sculpture park Ploshtadka. Her work has been shown internationally in museums for contemporary art, including Essl (Austria, 2011), EMMA (Finland, 2013) and MCV Vojvodina (Serbia, 2015 and 2019), and in galleries and festivals for video and performance in Austria, Bulgaria, the Czech Republic, Cyprus, Denmark, France, Finland, Germany, Hungary, Italy, Lithuania, Switzerland, Serbia, Turkey, Ukraine and the USA.
Sam Salem (https://www.osamahsalem.co.uk)
Abstract: I will discuss approaches to, and reflections on, the use of Neural Synthesis in my recent works, Midlands (2019) and THIS IS FINE (2021), and my forthcoming work for solo trombone (+), Bury Me Deep (2022).
Bio: Sam Salem is a British/Jordanian composer who creates works for performers, electronics and video. His compositional process begins with a set of locations, a line on a map connected by a particular theme, history, or set of constraints. He captures moments, surprises, and ultimately, like prominent London-based psychogeographer Iain Sinclair, he offers a reading of his chosen locations, a divination made through an “act of ambulatory sign-making”. The layers of myth and history that he uncovers form his building blocks. His first works for live performers, London Triptych, were recorded and released as Salem’s debut portrait album in November 2021 via dFolds. He is a founding member and co-artistic director of Distractfold Ensemble, recipients of the Kranichstein Music Prize for Interpretation from the Internationales Musikinstitut Darmstadt (IMD) in 2014. Sam is also co-founder and co-director of Unsupervised / The Machine Learning for Music Working Group, a collaboration between the RNCM and the University of Manchester that explores the creative applications of ML and AI. He is currently PRiSM Lecturer in Composition at the Royal Northern College of Music and was once described by the New York Times as “young”.
dialogues1: probing the future of creative technology
Subject: “Interaction with generative music frameworks”
Guests: Dorien Herremans and Kıvanç Tatar
The seminar recording is available online.
Dorien Herremans: Controllable deep music generation with emotion
Abstract: In their more than 60-year history, music generation systems have never been more popular than today. While the number of music AI startups is rising, there are still a few issues with generated music. Firstly, it is notoriously hard to enforce long-term structure (e.g. earworms) in the music. Secondly, systems need to be controllable in terms of meta-attributes like emotion before they can become practically useful for music producers. In this talk, I will discuss several deep learning-based controllable music generation systems that have been developed over the last few years in our lab. These include TensionVAE, a music generation system guided by tonal tension; MusicFaderNets, a variational autoencoder model that allows for controllable arousal; and seq2seq, a controllable lead sheet generator with Transformers. Finally, I will discuss some more recent projects by our AMAAI lab, including generating music that matches a video.
Bio: Dorien Herremans is an Assistant Professor at the Singapore University of Technology and Design, where she is also Director of Game Lab. At SUTD she teaches Computational Data Science, AI, and Applied Deep Learning. Before joining SUTD, she was a Marie Skłodowska-Curie Postdoctoral Fellow at the Centre for Digital Music at Queen Mary University of London. She received her Ph.D. in Applied Economics on the topic of Computer Generation and Classification of Music through Operations Research Methods, and graduated as a business engineer in management information systems at the University of Antwerp in 2005. After that, she worked as a Drupal consultant and was an IT lecturer at Les Roches University in Bluche, Switzerland. Dr. Herremans’ research interests focus on AI for novel applications such as music and audio.
Kıvanç Tatar: Musical Artificial Intelligence Architectures with Unsupervised Learning in Improvisation, Audio-Visual Performance, Interactive Arts, Dance, and Live Coding
Abstract: A generalized conceptualization of music suggests that music is “nothing but organised sound”, involving multiple layers where any sound can be used to produce music, and strong connections exist between pitch, noise, timbre, and rhythm. This conceptualization indicates two kinds of organization of sound: (1) organization in latent space, to relate one sound to another; (2) organization in time, to model musical actions and form. This talk covers different Artificial Intelligence architectures that were developed from this perspective of a generalized understanding of music. These architectures train on a dataset of audio recordings using unsupervised learning, which allows these technologies to cover a wide range of aesthetic possibilities and enables them to be incorporated into various musical practices. The example projects will span musical agents in live performances of musical improvisation and audiovisual performance, interactive arts and virtual reality installations, music-dance experiments, and live coding approaches.
Bio: Kıvanç Tatar works in the field of advanced Artificial Intelligence in Arts and Music, active both as a researcher, with important theoretical and technical contributions, and as an artistic practitioner, an experimental musician and audiovisual artist often working in artistic collaborations. His research has expanded to multimodal applications that combine music with movement computation and visual arts, and his computational approaches have been integrated into musical performances, interactive artworks, and immersive environments including virtual reality. Tatar has a dual educational background in music and technology, with a PhD from Simon Fraser University in Canada (2019). He started as Assistant Professor in Interactive AI in Music and Art at Chalmers in 2021, funded by a WASP-HS grant until 2026.
Many Hinges and other Problematic Metaphors: programmatic data mining towards Fluid Corpus Manipulation
In this presentation, Prof. Pierre Alexandre Tremblay will present the ecosystem of software extensions and knowledge exchange around the Fluid Corpus Manipulation project, whose agenda is to empower techno-fluent musicians with the tools and concepts of machine listening and machine learning, enabling musicking and musicking-based research around sound bank data mining. The design agenda will be discussed and its implementation demonstrated, followed by a Q&A session.