
Here's How You Turn Sounds Into 3D Sculptures

"Mother," from artists Inmi Lee and Kyle McDonald, has transformed the synesthetic experience into a series of 3D printed sculptures.

While exploring language and its political, social, and cultural roots in many of her artworks, artist Inmi Lee became fascinated with a question: what does the pure form of sound look like? Her piece Mother, a collaboration with code artist Kyle McDonald, attempts an answer by translating sounds into objects. "It's one thing to hear a sound: it goes into your ear and dissipates," Lee says. Representations of sound, however, "reveal certain relationships between sound and shape."


At first glance, Mother is a series of beautiful and intricate 3D-printed sculptures that could be the bones of some strange creature, and that's just the surface layer. The proof is in the process: the duo asked 20 people to describe two different pairs of words they didn't know (Korean adverbs, to be specific), then to restate those verbal descriptions as hand gestures. Using an Xbox Kinect and custom software McDonald built in openFrameworks, they captured the participants' hand gestures and extruded them over time on a 3D coordinate plane. After cleaning up the shapes and making aesthetic decisions, such as how tightly the data points should sit, the final models were 3D-printed into sculptures. "Ultimately, these people were our translators in understanding these words as a form," Lee explains.
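McDonald's actual openFrameworks tool isn't published in this article, but the core idea, sweeping a tracked hand position through space while time advances along one axis, can be sketched in plain C++. Everything below (the function names, the toy gesture data, the PLY output) is illustrative rather than the artists' code, and it assumes the per-frame hand coordinates have already been extracted from the Kinect skeleton stream.

```cpp
// Minimal sketch (not the artists' actual openFrameworks code): take hand
// positions sampled over time, "extrude" the gesture by spreading successive
// frames along one axis, and write the result as an ASCII PLY point cloud
// that a mesh-cleanup tool could turn into a printable model.
#include <cstddef>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

struct Point3 { float x, y, z; };

// Offset each sampled hand position along the z-axis in proportion to its
// frame index, so the time dimension of the gesture becomes depth in space.
std::vector<Point3> extrudeOverTime(const std::vector<Point3>& samples,
                                    float timeStep) {
    std::vector<Point3> out;
    out.reserve(samples.size());
    for (std::size_t i = 0; i < samples.size(); ++i) {
        Point3 p = samples[i];
        p.z += timeStep * static_cast<float>(i);  // time becomes depth
        out.push_back(p);
    }
    return out;
}

// Write the points as an ASCII PLY file, a common interchange format for
// point clouds and meshes.
void writePly(const std::vector<Point3>& pts, const std::string& path) {
    std::ofstream f(path);
    f << "ply\nformat ascii 1.0\n"
      << "element vertex " << pts.size() << "\n"
      << "property float x\nproperty float y\nproperty float z\n"
      << "end_header\n";
    for (const Point3& p : pts)
        f << p.x << " " << p.y << " " << p.z << "\n";
}

int main() {
    // Stand-in for a captured gesture: a short arc traced by one hand.
    std::vector<Point3> gesture = {
        {0.00f, 0.00f, 0.0f}, {0.05f, 0.10f, 0.0f},
        {0.12f, 0.18f, 0.0f}, {0.20f, 0.22f, 0.0f}, {0.30f, 0.20f, 0.0f}};

    std::vector<Point3> extruded = extrudeOverTime(gesture, 0.02f);
    writePly(extruded, "gesture.ply");
    std::cout << "Wrote " << extruded.size() << " points to gesture.ply\n";
    return 0;
}
```

In practice, a point cloud like this would still need to be thickened and cleaned into a watertight mesh before it could be 3D-printed, which is where the shape cleanup and aesthetic decisions Lee describes come in.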

Sounds and gestures become 3D-printed renderings in Inmi Lee & Kyle McDonald's Mother.

Even beyond the actual creation process, the objects hold the weight of years of conversation between Lee and McDonald: in 2009, the duo began a dialogue about the relationship between sound and language, and the possibility of mapping sound onto or into new forms. McDonald then sent Lee a quote from designer Bruce Mau's Incomplete Manifesto: "Process is more important than outcome. When the outcome drives the process we will only ever go to where we've already been. If process drives outcome we may not know where we're going, but we will know we want to be there." That idea became the underlying impetus for the project; both collaborators agreed to submit to the process without fleshed-out end goals in mind.

This rendering shows a 3D graphical representation of a sound.

While poring over linguistics and anthropology texts, they uncovered research on the bouba/kiki effect, a sound-to-shape phenomenon in which jagged shapes are typically labeled "kiki" while curvy, rounded shapes are labeled "bouba," along with further readings on sound symbolism. Lee explains, "Mapping between sound and shape wasn't new. But as artists we were interested in finding our own way of representing it. How could we transform the pure form of sound into an object so it no longer carried the weight of spoken words in political terms?"

Words from the Korean language, visualized on a 3D coordinate plane.

To create source material for the project, the artists needed a language with a strong correlation between how it's written and how it sounds. After settling on Korean, a phonetic language, as the best fit, Lee, who is Korean, compiled a list of onomatopoeic words that each describe the state of an action. In Korean, substituting a vowel or repeating a character can add or remove a word's emphasis; the way one says a word often changes the scale of the thing that's happening or being described. Lee was curious how people would interpret sounds whose meaning they didn't know, and whether they would innately grasp that minor phonetic changes amplify or diminish the actions being described.

Four of the 20 subjects Inmi Lee and Kyle McDonald interviewed to produce the sound visualizations for Mother.

Lee and McDonald interviewed 20 people who did not speak Korean, first prompting them to describe each word verbally. "If you don't speak the language, these words come to you as a sound," said Lee. "[Subjects] were using their imaginations, their history, their background, to create meaning." Answers ranged from associations of sounds with colors to analogous sounds, like the noise of a piece of jewelry falling to the floor. The artists then asked participants to compare two very similar words, suggesting that one was rounder and the other more abrupt. Finally, participants were asked to make a synesthetic connection: to use hand gestures to describe what they had just said verbally. "I think it was more meaningful that we went through these people to talk about these words," Lee says. "Language is used by people, not by machines."

An early 3D model of Inmi Lee & Kyle McDonald's sound visualization project, Mother.

It was important to both Lee and McDonald that the results mirror established linguistics practices as much as possible: "We are not scientists. But we are using technology; it's the medium we work with in our own artistic practice," Lee admits. In her view, tracing hand gestures, creating accurate forms, and having them 3D-printed was truest to participants' expressivity, and thus the most representative approach.

Sound and gesture, visualized as a set of 3D points.

As for the title, Mother is intricately linked to Lee's personal life: growing up, she and her family moved to many different locations, and English was not her first language. She remembers her own mother struggling with English and jumping through multiple hoops every day just to get things done. That was when Lee became interested in the formation of language, and in how it came to be loaded with politics; she began exploring languages from multiple perspectives and diving into linguistics. "My interest in language started with seeing my mother and now it's becoming something completely different. But the fact that it started from there, it will always drive me to look into that subject matter," says Lee.

The hand gestures used to describe certain words, as visualized on a 3D coordinate plane.

One particular challenge of Mother was synthesizing years of conversations, processes, and theories into a single gallery experience. Lee admits that, for the fleeting gallery viewer, the idea is difficult to access without context; there is an inherent gap between the sonic concept and its physical manifestation. To tackle this, Lee and McDonald have experimented with how the final piece is presented: at the SIGGRAPH conference art exhibition "Acting in Translation," the final 3D prints were arranged in pairs, one for each person in attendance, on pedestals lit from beneath, accompanied by a short explanatory video on the wall. According to Lee, "The aesthetic experience is the time period between when you first encounter the work and you don't understand what the work is, to the time that you realize its meaning." She believes the process of understanding Mother could be faster, but that the delay is necessary: "If it's immediate, you're only experiencing it the way the artist wants you to experience it, not the way you actually do," she says.

Inmi Lee and Kyle McDonald have created a way to craft 3D sculptures from gestural sound data.

To learn more about Mother, visit its official page on the SIGGRAPH website. SIGGRAPH ran August 10-14 at the Vancouver Convention Centre.

Related:

Visualizing Sound Waves With Bubbles And Light Projections

User Preferences: Tech Q&A With Kyle McDonald

You Can Finally Be An Artist With This Self-Portrait Machine