Unlearning Language

A project with Kyle McDonald
In collaboration with Yamaguchi Center for Arts and Media

As our every move is tracked and analyzed, we often have the sensation that the Internet knows more about us than we know about ourselves. Unlearning Language is an interactive installation and performance that explores a future beyond persistent monitoring. What would it look like to communicate in a way that is undetectable to algorithms of the present and future? This project uses machine learning to provoke us into discovering new modes of communication.

A group of participants enters a small living room of the future. They are guided in conversation by an AI that wishes to train humans to be less machine-like. As the participants communicate, they are monitored (using speech detection, gesture recognition, and expression detection), and the AI intervenes in various ways (light, sound, vibration). Together, the group must find new ways to communicate that are undetectable to an algorithm. This might involve clapping or humming, or modifying the rate, pitch, or pronunciation of speech. As they do, the various impediments subside. Through this playful experimentation, the participants find themselves revealing the most human qualities that distinguish them from machines. They begin to imagine a future where human communication is prioritized. A separate opening performance, with four people performing as AI, introduces the backstory for the installation.

Photography by Kyle McDonald. Full credit list coming soon.