Description

Glow Box is a hybrid object that appears to come alive, displaying a real-time visualization of a neural network as it works to solve problems. The installation exploits the organic quality of the neural network algorithm and combines it with the almost magical ability of the physical object to appear illuminated without any apparent electricity. The result blurs the distinction between real and virtual, inviting the viewer to question that distinction altogether.

Conceptual Background

Nature has inspired humankind for as long as history has been recorded. Our tools, techniques, and even aesthetics derive much of their form and function from solutions arrived at through eons of evolution. In recent years, nature has become a direct source of strategy for designers and researchers tackling some of the world’s most challenging problems. The neural network, a core component of deep learning, is no exception to this grand tradition. In its simplest form, a neural network models the high-level behavior of biological neurons; essentially, it loosely mimics the basic functioning of a living brain. This technique has become invaluable for solving problems that were once extremely difficult to quantify and compute.

But what, exactly, is a neural network? What does one look like? Does it think the way we do? The ubiquity of these AI techniques calls for a more critical and creative understanding of the algorithms themselves. This project strives for that understanding by presenting viewers with a real-time window into the mind of a neural network as it repeatedly attempts to solve a simple equation. In particular, Glow Box evokes curiosity and, potentially, a questioning of the role of analog-digital hybrids as they inch ever closer to invalidating accepted definitions of sentience and free will.

Process

This project fuses two projects created individually by the two artists: Light Path Studies by Yeseul Song and Solution Space Studies by Michael Simpson. The artists’ impetus for combining these projects was to evoke the questions described above and, in essence, bring a sense of living, breathing life into an inanimate object.

To achieve this, the cube was algorithmically designed and then fabricated using state-of-the-art 3D printing technology. The cube measures 6” x 6” x 6” and is printed in an optically clear material that exposes the object’s internal structure. Inside the walls of the object, a matrix of conduits curves to connect the cube’s bottom surface to its front face. These conduits hold thousands of strands of optical fiber that redirect light from beneath the cube to be emitted from the front. This allows the analog object to serve as a kind of display when an image or animation is projected onto the cube’s bottom.

Custom software drives the installation. The software was created in Processing and implements a multi-layer feedforward neural network from scratch. The real-time visualization offers insight into the varied process undertaken by the algorithm, and it also calls attention to the fallibility of this relatively simple implementation compared with recurrent or deeper “deep learning” approaches.
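For readers curious what a from-scratch feedforward network entails, the following is a minimal sketch, written in Python for brevity rather than Processing, of the general technique the software uses. The layer sizes, the XOR training target, and all names here are illustrative assumptions, not the installation’s actual code.

```python
import math
import random

# Illustrative sketch only: a tiny 2-4-1 feedforward network trained by
# backpropagation to learn XOR. The installation's real Processing
# implementation differs in size, task, and structure.

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class Net:
    def __init__(self, n_in, n_hid, n_out):
        self.w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
        self.b1 = [0.0] * n_hid
        self.w2 = [[random.uniform(-1, 1) for _ in range(n_hid)] for _ in range(n_out)]
        self.b2 = [0.0] * n_out

    def forward(self, x):
        # Feed the input through the hidden layer, then the output layer.
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        self.o = [sigmoid(sum(w * hi for w, hi in zip(row, self.h)) + b)
                  for row, b in zip(self.w2, self.b2)]
        return self.o

    def train(self, x, target, lr=0.5):
        o = self.forward(x)
        # Output-layer error terms (sigmoid derivative is o * (1 - o)).
        d_o = [(o[k] - target[k]) * o[k] * (1 - o[k]) for k in range(len(o))]
        # Hidden-layer error terms, backpropagated through the output weights.
        d_h = [self.h[j] * (1 - self.h[j]) *
               sum(d_o[k] * self.w2[k][j] for k in range(len(d_o)))
               for j in range(len(self.h))]
        # Gradient-descent updates for both layers.
        for k in range(len(self.w2)):
            for j in range(len(self.h)):
                self.w2[k][j] -= lr * d_o[k] * self.h[j]
            self.b2[k] -= lr * d_o[k]
        for j in range(len(self.w1)):
            for i in range(len(x)):
                self.w1[j][i] -= lr * d_h[j] * x[i]
            self.b1[j] -= lr * d_h[j]

data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
net = Net(2, 4, 1)
for _ in range(10000):
    for x, t in data:
        net.train(x, t)
```

Rendering the hidden activations (`net.h`) and weights on each training step is the kind of state the installation visualizes; watching those values shift as the error shrinks is what gives the algorithm its organic feel.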

Video

Images

Exhibitions

Artists

Yeseul Song

A South Korean-born artist and researcher based in New York, Yeseul uses code and electronics to explore perception, embodied experience, and poetic representations of data. Her works span sculptures, physical installations, digital sketches, and audiovisual performances.

She is currently an artist in residence at Mana Contemporary’s New Media Program and a research fellow at NYU’s Interactive Telecommunications Program (ITP). She is a recipient of a Communication Arts Interactive Annual award and an iF Design Award, and a Lumen Prize nominee. Her creations have been shown at the Fort Mason Art and Culture Center in San Francisco, the IAC Building in New York, Mana Contemporary in New Jersey, the Independent Filmmaker Project in Brooklyn, and Gallery Max in New York. She contributed to the SFPC Recode project, which has engaged large audiences at the Day for Night Festival in Austin, Sónar+D 2017 in Barcelona, and Google I/O in San Francisco.

She is an alumna of NYU ITP and the School for Poetic Computation in New York, USA.

Michael Simpson

An interdisciplinary researcher, musician, and media artist based in New York City, Michael uses real-time data, algorithmic analysis, and machine learning as fundamental aspects of his creative work. He is currently focused on machine listening and the ways this area of study can be applied to enhance audiovisual works and performances through tighter semantic mappings between audio and visual reactions.

Michael’s work has been shown at a number of venues, including Mana Contemporary in New Jersey, the Fort Mason Art and Culture Center in San Francisco, and the IFP (Independent Filmmaker Project) Made in NY Media Center. His project Indigo was awarded the Communication Arts Interactive Award in 2018. Michael also made significant contributions to the SFPC Recode project, which was presented at the Day for Night Festival (Austin, Texas, USA), the Google I/O Developer Conference, and the Sónar+D festival.

Michael is an alumnus of Boston University, the School for Poetic Computation, and NYU’s Interactive Telecommunications Program (ITP), where he received his master’s degree. He is currently a research fellow at NYU's ITP as well as a resident of Mana Contemporary’s New Media Program.

Contact

E-mail pyeseul@gmail.com for any press or art inquiries.