‘(Un)Making AI Worlds’ Explores Artificial Intelligence Ethics with Artistic Performance

Artificial intelligence is shaping our world, influencing social interactions, large-scale decision-making processes, and the art world.
The Emerson Contemporary exhibition, (Un)Making AI Worlds, engaged with AI systems and their societal impacts through critical inquiry and artistic exploration. The exhibit was on view in the Huret & Spector Gallery from March 17-22.
A March 18 panel discussion connected to the exhibition included a performance of (Machine) Learning to Be, a multimedia work in development that emerged from the Data Fluencies Theatre Project team’s critical and creative interrogations of artificial intelligence systems. The participatory, devised, hybrid performance explored AI as both a technology and a character, reflecting its complex role in human society.
“The goal was to bring together a team of interdisciplinary artists to engage critically and creatively with artificial intelligence,” said Marlboro Institute Assistant Professor Ioana B. Jucan, co-curator of the exhibition.
The exhibition opening began with a presentation of Secret Hyena, a live performance that explores the intersection of AI, surveillance, and human vulnerability. Created and performed by Performing Arts Assistant Professor and co-curator Tushar Mathew, the piece engages real-time audience prompts to bring to life an anthropomorphic AI-generated hyena who serves as a collector of human secrets.

“The exploration has been: Can I partner with an AI chatbot to create something dramatic, to create language out of vision that is actionable?” said Mathew.
While AI can create movement and text, Mathew believes it still takes human interpretation and emotion to make it artistically meaningful.
“If you ask me tomorrow, I might say this is the future,” said Mathew. “But for now, so much of the human heart is required to translate AI material into something that is groundbreaking. That’s the essence of the project, figuring out how we work together.”
For Gavan Cheema, panelist and co-creator of (Machine) Learning to Be, the conversation around AI in theatre parallels past debates over emerging technologies, such as the use of projections in stage design. Initially met with skepticism, projections were once seen as a potential threat to traditional theater practices, much like the concerns raised about AI today.
“When projections first emerged as a technology, a lot of directors were resistant… They were resistant in the sense of ‘What is this going to replace? What does this mean in the theatrical context?’…it actually enhanced it,” said Cheema.
“We know theater won’t be replaced, because there is a human-to-human connection that is invaluable, and thinking about AI as a tool that can enhance that human-to-human interaction is really exciting. AI feels unavoidable, so how can we engage with this in a way where we still have creative control?”

Enongo Lumumba-Kasongo, panelist and co-creator of (Machine) Learning to Be, is still adjusting to a significant shift in her creative process. Unlike her usual work, where she has complete control over writing songs, producing beats, and performing live, this project required her to design an interactive experience where she couldn’t fully dictate how audiences would engage with it.
“A lot of this process involved thinking about how to craft an experience, one that I wouldn’t necessarily control,” said Lumumba-Kasongo. “That challenge pushed me into new creative spaces. There’s an openness around how others might interact with the thing you’ve created.”
She acknowledged that those with a background in theatre are more accustomed to this kind of collaborative creation, where a performance is built collectively and shaped by the engagement of multiple contributors.

Beyond aesthetics, the panel also delved into the ethical implications of AI, both for the environment and humans. Jucan framed the discussion as a necessary cost-benefit analysis.
“Clearly there is some value, which is why we are engaging with these tools, so what’s that value compared to what the cost is?” Jucan asked. “So it’s a cost-benefit analysis that hopefully all of us are going to engage in.”
Aidan Nelson, panelist and co-creator of (Machine) Learning to Be, emphasized the challenge of assessing AI’s environmental footprint.
“The companies and commercial providers for various AI systems do not make it apparent to you that when you query a machine learning model in the cloud that it has costs associated with it,” said Nelson. “So I applaud and try to contribute to efforts that make it more apparent how much each of these systems uses as far as energy.”
Mathew raised another pressing question: the human cost of AI dependence.
“A lot of people are in bad mental health places that depend solely on AI chatbots for companionship, for love, for friendship,” Mathew said. “We often focus on the environmental impact, but what about the human cost of continued dependence on AI?”