Experimental Film Clouds Combines Kinects And DSLRs To Imagine The Future Of Filmmaking

The RGB+D team releases an excerpt of their film Clouds, and it will completely change the way you think about 3D film and CGI.

Ever since the Kinect emerged on the scene, its depth-sensing camera has fascinated legions of creative coders, but the team behind the RGB+D Toolkit is one of the few attempting to transform the gaming peripheral into a real filmmaking tool. Using a Kinect and a standard DSLR camera, like your Canon 5D, these avant-garde image-makers have created a technique that allows you to map video from the DSLR onto the Kinect’s 3D data to generate a true CGI and video hybrid.
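
Under the hood, that mapping is standard two-camera calibration math: back-project each Kinect depth pixel into 3D, carry it through the rigid transform between the two lenses, and project it into the DSLR frame to pick up a color. Here is a minimal sketch of that projection chain in Python with NumPy; the toolkit itself is built on openFrameworks/C++, and the intrinsics, rotation, and offset below are made-up placeholders rather than real calibration values.

```python
import numpy as np

# Illustrative calibration only: intrinsics for the two lenses and the
# rigid transform from the Kinect's depth camera to the DSLR. A real
# pipeline would recover these with a checkerboard calibration.
K_depth = np.array([[575.0, 0.0, 320.0],
                    [0.0, 575.0, 240.0],
                    [0.0,   0.0,   1.0]])
K_dslr = np.array([[2000.0, 0.0, 960.0],
                   [0.0, 2000.0, 540.0],
                   [0.0,    0.0,   1.0]])
R = np.eye(3)                       # rotation, depth camera -> DSLR
t = np.array([0.05, 0.0, 0.0])      # ~5 cm baseline between the lenses

def colorize_pointcloud(depth_m, dslr_frame):
    """Return (points, colors): each valid depth pixel back-projected
    to 3D and colored by the DSLR video frame it lands on."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m.ravel()
    ok = z > 0
    u, v, z = u.ravel()[ok], v.ravel()[ok], z[ok]

    # Back-project depth pixels into the Kinect's 3D camera frame
    x = (u - K_depth[0, 2]) * z / K_depth[0, 0]
    y = (v - K_depth[1, 2]) * z / K_depth[1, 1]
    pts = np.stack([x, y, z], axis=1)

    # Move into the DSLR's frame, then project with its intrinsics
    cam = pts @ R.T + t
    proj = cam @ K_dslr.T
    px = (proj[:, 0] / proj[:, 2]).astype(int)
    py = (proj[:, 1] / proj[:, 2]).astype(int)

    # Keep only points whose projection lands inside the DSLR image
    H, W = dslr_frame.shape[:2]
    inside = (px >= 0) & (px < W) & (py >= 0) & (py < H)
    return pts[inside], dslr_frame[py[inside], px[inside]]
```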

Why is this exciting? Well, for one thing, convincing CGI is incredibly difficult to do—it took the team behind Rockstar’s L.A. Noire a full 32 cameras and god knows how many man-hours to record and digitally reconstruct their characters in 360 degrees. And while the experimental output from the RGB+D team is a far cry from those painstakingly constructed game visuals, that’s kind of not the point. The point is the implications—this has the potential to change the way we think about 3D filmmaking and to significantly lower the barrier to entry, using commercially available hardware and open source software.

Today, members of the RGB+D team—James George and Jonathan Minard—released the culmination of their research to date: an excerpt of an ongoing documentary project called Clouds that they’ve been developing alongside the RGB+D Toolkit, their open source video editing application (which looks like a cross between Final Cut Pro and a video game engine). Clouds features interviews with prominent computer hackers, media artists, and critics discussing the creative use of code, the future of data, interfaces, and computational visuals, presented as a series of conversational vignettes.

An in-edit view of the Clouds documentary as seen in the RGB+D Toolkit edit timeline.

“It is an infinite conversation, a networked portrait, an experiment in virtual cinema,” explains Minard, who works as a new media documentarian and is a fellow at Carnegie Mellon’s Studio for Creative Inquiry, where he first joined forces with developer James George and recorded the first series of interviews for Clouds during the Art && Code conference last year.

“The goal behind the documentary is to capture the creative hacker ethos in a medium that suits the subject,” explains George. “Clouds is a window into the mentality of the scene responsible for inventing the format that was used to create the film.” Which is probably why anyone who is familiar with the aesthetics of data visualization or has seen one of the countless Kinect hack demos from the past year will recognize influences of both in the film’s style. “The subjects float in a black void, their figures composed of tiny points connected by lines that flicker and break apart at the edges,” continues George. “They’re made out of pure computational matter—the same material the artists depicted work with on a daily basis.”

Though they used a stationary, single-camera setup, the 3D Kinect data they capture allows them to render out the images any way they can dream up (and code up) and to rephotograph their subjects from any angle in post-production.
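
That ability to rephotograph is, at bottom, a virtual camera: rotate the captured cloud, project it through chosen intrinsics, and splat the points into a fresh image. A toy illustration follows, with a far-to-near sort standing in for a proper depth buffer; the function name and parameters are illustrative, not part of the RGB+D Toolkit.

```python
import numpy as np

def render_from_angle(points, colors, yaw_deg, K, size=(720, 1280)):
    """Splat a colored point cloud into an image seen from a virtual
    camera rotated yaw_deg around the vertical axis."""
    a = np.radians(yaw_deg)
    R = np.array([[np.cos(a),  0.0, np.sin(a)],
                  [0.0,        1.0, 0.0      ],
                  [-np.sin(a), 0.0, np.cos(a)]])
    cam = points @ R.T
    front = cam[:, 2] > 0.1          # drop points behind the virtual lens
    cam, col = cam[front], colors[front]

    order = np.argsort(-cam[:, 2])   # paint far points first, near last
    cam, col = cam[order], col[order]

    proj = cam @ K.T
    px = (proj[:, 0] / proj[:, 2]).astype(int)
    py = (proj[:, 1] / proj[:, 2]).astype(int)

    h, w = size
    img = np.zeros((h, w, 3), dtype=col.dtype)
    inside = (px >= 0) & (px < w) & (py >= 0) & (py < h)
    img[py[inside], px[inside]] = col[inside]
    return img
```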

“It starts to break down the notion that CGI and live action are different approaches to filmmaking, and suggests the two will become so deeply entangled that we won't be able to tell the difference anymore,” says George. “This documentary and the toolkit put the potential of that merger front and center and show where it's headed. But it's always more interesting to create something from our imagination than to try to recreate reality. The true surprises are going to come from how we can reinterpret captured reality in ways that have never been seen before.”

And that re-imagination of reality is precisely what the RGB+D team has started to play with in Clouds. Anyone who has ever worked with a Kinect knows that the capture is imperfect and incomplete. Accordingly, the holographic-looking visages presented in Clouds have an eerie quality to them—spotty, shaky, slightly glitched out and blurred, as if they’re being broadcast from far, far away through a weak transmission signal.

Nevertheless, it’s precisely this imperfection that makes the footage so arresting. There’s something about it (perhaps the black void in the background, perhaps the sound design) that has the feel of a deep-space or deep-sea exploration film. The viewer has the sensation of voyaging into the unknown and discovering strange but beautiful creatures there. The interspersed elements of visual abstraction, where a talking head suddenly breaks apart into a cloud of glittering dots or undulating wireframes, serve not only as representations of things being discussed in the dialogue but as experimental filmic gestures that hint at an emerging new aesthetic that the RGB+D team is only beginning to cultivate.

A sampling of the visual treatments being explored by the RGB+D team in the Clouds documentary.

The project is continuing to evolve as the team travels to creative coding conferences to conduct more interviews. In addition to Art && Code, they captured a large portion of the interviews at the Resonate Festival in Belgrade this March and are preparing for another round of fresh scans at the upcoming Eyeo Festival. The editing software is constantly getting spruced up too, with support from the Eyebeam Art + Technology Center in NYC and the Studio for Creative Inquiry in Pittsburgh.

We caught up with the filmmakers via email to learn more about their plans for the film and the RGB+D Toolkit.

What was the goal in developing the RGB+D toolkit and the documentary itself?
James: The goal behind the toolkit is to capture our aesthetic research and make discoveries repeatable. We've been working with the Kinect+SLR technique since Alexander Porter and I first experimented in the NYC subway with a sensor rubber-banded to an SLR. Every time we do a new project the tool grows. Because everyone involved has different backgrounds, it's super important that the tool can be used by people who aren't programmers, so everything is controlled through a graphical user interface.

Jonathan: As a documentary filmmaker with a background in new media art and anthropology, I am interested in stories of invention and discovery: how people develop new tools and how those tools shape the collective imagination. This project has all those elements. It's no coincidence that this documentary, realized in a computational format, explores the subject of code and culture. Form should reflect content. A principal feature of this project is that it's a documentary about media art in which the movie is itself a new media experiment.

Why did you choose to release it open source as opposed to developing a proprietary software/technique?
James: If everyone can do it, no one can own it and it becomes part of the world of visual culture rather than any individual artist's personal aesthetic. We could try to be protective of our techniques but we'd end up becoming a one-trick pony. It's much more interesting to release the app and watch it run wild.

Jonathan: What's most exciting about this new technique is that it's so young and formless and malleable as a medium. As it evolves we can include our friends in the process of collectively imagining what it might become. This, I have learned, is one of the benefits of open source coding: software is never finished; it can always be improved, extended, and reinvented. As long as this medium stays alive, there's no limit to where it might go.

Tell me a little about the mood and feel you were hoping to establish with the aesthetic treatment you’ve given the data.
Jonathan: In the opening scene of Clouds, Philip (@underdoeg) emerges from a chaos of particles in a digital void and says "With coding, you can do whatever you want… it's like there's nothing in the beginning, and then you start throwing things in there." It felt appropriate to open the film with a metaphorical cosmogenesis. We play with the idea that coders become gods by creating digital universes from nothing. The virtual actors reside in a slippery zone between form and formlessness. We wanted to establish their immateriality, acknowledging the fact that they are not solid beings but information objects assembled from nothing more than clouds of 3D pixels in a frameless black box.

James: We wanted the tone to land somewhere between quirky and cosmic—an approximation of what it's like to hang out with a creative programmer for a day. The treatment we've given the data is reflexively raw and humble. We are working with points, lines, and triangles—the basic building blocks of real-time 3D graphics. The effects we use to augment the narration are simple code tricks like Perlin particle systems, basic noise and sine wave generators. We are exposing the format's raw elements to educate the audience in our visual language. At the same time, we are relating the cultural mentality. This is the start of the process; I think the flashy stuff will come next.
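
To make those "simple code tricks" concrete, here is roughly what such a displacement effect looks like: a smooth field nudges every point so the figure flickers and dissolves. A cheap sum-of-sines field stands in below for true Perlin noise; none of this is the film's actual code.

```python
import numpy as np

def noise_field(p, t):
    """A smooth pseudo-random 3D displacement built from layered sines,
    standing in for a Perlin noise field."""
    x, y, z = p[:, 0], p[:, 1], p[:, 2]
    return np.stack([
        np.sin(3.1 * y + t) * np.cos(1.7 * z + 0.5 * t),
        np.sin(2.3 * z + 1.3 * t) * np.cos(2.9 * x),
        np.sin(1.9 * x + 0.7 * t) * np.cos(3.7 * y + t),
    ], axis=1)

def break_apart(points, t, amount=0.02):
    """Push each point along the field; raising `amount` over time
    dissolves the figure from portrait into abstraction."""
    return points + amount * noise_field(points, t)
```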

Creative coders J.T. Nimoy (top) and Marcus Wendt (bottom) rendered as abstract, wireframed figures.

How do you hope to continue developing this project? What are the next steps for you?
Jonathan: The project is a journey, an exploration, to see how far we can go toward developing a new aesthetic of cinema with the tools at our disposal. What began as a design fiction—imagining the camera of the future—has come closer and closer to a real thing. And yet, it remains an intermediate format aspiring toward something so amazing that we can only vaguely describe what it looks and feels like. To understand this imagined future of cinema and its ontological impact, watch David Cronenberg's film eXistenZ or read Philip K. Dick.

James: The big-picture idea is to create an infinite conversation: an interactive application for navigating clouds of related thoughts taken from many conversations. Sending a query to our cloud will result in a stream of figurative point clouds speaking to the subject. We imagine this working in a real-time environment where the viewer can control the camera and choose who to listen to by flying through the space virtually. Something like a WebGL app or an iPad app seems like a suitable platform to execute this idea. We are exploring both.
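
One way to picture the data structure behind that idea: interview clips tagged with topics, and a query that streams back whatever speaks to the subject. The sketch below is pure illustration; the Clip fields and tags are invented, not the project's real schema. (The Philip quote is the one from the film's opening scene, discussed below.)

```python
from dataclasses import dataclass, field

@dataclass
class Clip:
    """One interview excerpt, tagged with the topics it touches."""
    speaker: str
    quote: str
    topics: set = field(default_factory=set)

LIBRARY = [
    Clip("Philip", "It's like there's nothing in the beginning, and then "
                   "you start throwing things in there.", {"code", "creation"}),
    Clip("Marcus", "(another interview excerpt)", {"simulation", "form"}),
]

def query(topic):
    """Yield every clip that touches the topic. A real version would rank
    by relatedness and chain clips into a continuous conversation."""
    for clip in LIBRARY:
        if topic in clip.topics:
            yield clip

# Example: stream everything tagged "code"
# for clip in query("code"):
#     print(clip.speaker, "-", clip.quote)
```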

What’s most exciting about this new technique for you?
James: I've always felt torn between making visuals with pure code and working from video sources. When I first saw the sizzling point clouds the Kinect generated, I knew that dichotomy had ended. The technique lets us capture figurative imagery, that of people and gestures, and break it down into rich, purely generative material like a particle system. There is intrinsic poetry in the idea that we are composed of a collection of swarming points, and having a technique that lets the viewer swim between figuration and abstraction is an imagination playground.

Jonathan: What resulted has exceeded our expectations: the emergence of a new cinematic medium obliterating the divide between computer graphics and video. It looks like nothing we have seen before, and it forecasts previously unimagined possibilities.

How do you hope to see this technique used in filmmaking down the line? What are the potential applications of it that have yet to be explored?
Jonathan: We have not yet attempted to compose scenes with multiple, independent objects that were recorded separately. It would be possible to construct an entire scene virtually—to stage a conversation between multiple actors recorded in different spaces at different times—but the current software doesn't enable real-time compositing of more than one point cloud. This is achievable; we just haven't worked on anything yet where it became necessary.
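
For a sense of what that compositing would involve, a sketch: each separately recorded take gets its own rigid transform onto a shared virtual stage, after which the clouds can simply be concatenated and rendered together. The helper below is hypothetical, not a toolkit feature.

```python
import numpy as np

def place(points, yaw_deg=0.0, offset=(0.0, 0.0, 0.0)):
    """Rotate a recorded take around the vertical axis and translate it
    onto a shared virtual stage."""
    a = np.radians(yaw_deg)
    R = np.array([[np.cos(a),  0.0, np.sin(a)],
                  [0.0,        1.0, 0.0      ],
                  [-np.sin(a), 0.0, np.cos(a)]])
    return points @ R.T + np.asarray(offset)

# Two actors captured in different rooms, staged facing one another:
# scene = np.vstack([place(actor_a, yaw_deg=20, offset=(-0.8, 0.0, 2.5)),
#                    place(actor_b, yaw_deg=-20, offset=(0.8, 0.0, 2.5))])
```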

James: We've been experimenting with giving the data back to the artists and having them reinterpret it as a self-portrait. We hope to publish the results of some of that experimentation soon. When we introduce someone to the technique, the unanimous first reaction is to add more sensors to create a full reconstruction of the scene. The data suggests its own potential to be filled out completely. Given a 3D scene with no missing parts, we could shoot an entire live-action film virtually and no one would know it was all computer generated. This isn't very far off in the future.