
Turn Your Image Searches into Abstract Art with This Browser App

'Located graphics' from Adam Ferriss transforms image searches into dynamic and unique visuals.
Images generated by Located graphics

Los Angeles-based new media artist Adam Ferriss, who a few months ago created the face-glitching browser app Gush, has come up with a new demonstration of his web-oriented production abilities. Located graphics, Ferriss' latest experiment, allows users to generate unique, dynamic visuals simply by dropping words or phrases into an erosive feedback loop.

Located graphics' engine generates stacks of deformed digital compositions from search results, upping the ante on Ferriss' previous uses of the browser-as-canvas. Built with WebGL and JavaScript, the app invites users to launch an image search with a keyword or phrase. It then pulls a series of visuals from the inexhaustible stream of images available on the internet and sends them through its feedback loop. Each resulting image is constructed and deconstructed in the process, saved over its original, and returned to the pile, where the algorithm continues to evolve the images one after another.
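
The loop Ferriss describes is simple enough to sketch. Here's a minimal, illustrative version in plain JavaScript using a 2D canvas rather than the app's WebGL pipeline; the smear offsets, the blend opacity, and the `imageUrls` list are all assumptions, not Ferriss' actual parameters.

```javascript
// A minimal sketch of the feedback loop: each image is distorted by
// the previous result, and the output is saved over the source.
const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');

function loadImage(url) {
  return new Promise((resolve, reject) => {
    const img = new Image();
    // Pixels can only be read back if the host serves CORS headers.
    img.crossOrigin = 'anonymous';
    img.onload = () => resolve(img);
    img.onerror = reject;
    img.src = url;
  });
}

// One erosion pass: smear the previous result, then blend the next
// source image on top at low opacity so the stack slowly "drips."
function distort(previous, source) {
  canvas.width = source.width;
  canvas.height = source.height;
  if (previous) {
    // Drawing the previous result slightly scaled and offset makes
    // repeated passes accumulate into a feedback smear.
    ctx.drawImage(previous, -2, 1, canvas.width + 4, canvas.height + 2);
    ctx.globalAlpha = 0.35;
  }
  ctx.drawImage(source, 0, 0);
  ctx.globalAlpha = 1.0;
}

async function run(imageUrls, iterations) {
  const stack = await Promise.all(imageUrls.map(loadImage));
  let previous = null;
  for (let i = 0; i < iterations; i++) {
    const idx = i % stack.length;
    distort(previous, stack[idx]);
    // Save the result over the original and return it to the pile,
    // so every later pass works from the already-distorted version.
    const snapshot = new Image();
    snapshot.src = canvas.toDataURL();
    await new Promise((resolve) => (snapshot.onload = resolve));
    stack[idx] = snapshot;
    previous = snapshot;
  }
  return stack;
}
```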

The Creators Project spoke to Adam Ferriss to better understand his creative process and to gain more insight into the ideas that brought Located graphics to life:

The Creators Project: Hi Adam. Could you talk to us about the genesis of Located graphics?

Adam Ferriss: The basic idea was to create a sort of aggregate distortion for a given set of images. Each image is distorted by the image that preceded it, and the resulting image is saved over the source. When this goes into a loop, the stack of images drifts towards something completely distorted and drippy. I started out with the goal of being able to scrape all the images off of any given website, but there were too many edge cases to really make this feasible. Initially it was pulling images from Contemporary Art Daily, then from Imgur, and now from Google.

Imgur is sort of interesting because it almost never returns the sort of results you would expect. For instance, searching "blue" on Imgur will probably generate a bunch of meme-type images, along with things that have been popular on Reddit. Searching "blue" on Google, you'll actually get mostly what you want. At one point, I had the site uploading images to a Tumblr, but it would hit my maximum posts per day so fast that I had to remove it and send the images to a site that I'm hosting instead.
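
For a sense of what that search step might look like, here's a hedged sketch against Imgur's public v3 gallery-search API, one of the sources Ferriss mentions; the client ID is a placeholder, and this is a guess at the plumbing rather than the app's actual code.

```javascript
// Fetch gallery search results and flatten them into direct image
// links. In the v3 response, albums carry an `images` array and each
// image object exposes a `link` field.
async function imgurSearch(query) {
  const res = await fetch(
    `https://api.imgur.com/3/gallery/search?q=${encodeURIComponent(query)}`,
    { headers: { Authorization: 'Client-ID YOUR_CLIENT_ID' } }
  );
  const { data } = await res.json();
  return data
    .flatMap((item) => (item.images ? item.images : [item]))
    .map((img) => img.link)
    .filter((link) => /\.(jpe?g|png|gif)$/i.test(link));
}
```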

What was your workflow like to create Located graphics?

I was curious about the true meaning of a term—how does it differ based on where the search query is made? In Gene McHugh's Post Internet, he writes, "Google search rankings […] are obviously not the truly essential meaning of a term; rather what Google shows me is that there never was a truly essential meaning of a term—through its endless lists, it illustrates that that's always the case. But is it the case?"

I also wanted to make some kind of visualization of how one image could be affected by those that surround it on the web. Lastly, I was curious about the things we search for, and sort of spying on the people using my app. The app also logs the search queries of the user, along with their IP address and location. Maybe unsurprisingly, it's mostly people searching for cats and porn.
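
On the client side, that kind of logging can be as small as a beacon. The sketch below assumes a hypothetical /log endpoint, with the IP-and-location pairing happening on the server.

```javascript
// Report each search to an assumed backend endpoint; the server would
// pair the payload with the request's IP address and a geo lookup.
function logQuery(query) {
  navigator.sendBeacon(
    '/log',
    JSON.stringify({ query, timestamp: Date.now() })
  );
}
```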

Did you encounter any difficulties? And what was your biggest challenge?

On the technical side, there were a lot of things happening that I had never tackled before. There is a bunch of regular expression parsing for getting all the URLs properly formatted and making sure none of the links are dead. Using the Imgur/Tumblr/Google APIs was new to me, and on the backend there's some boring PHP/MySQL stuff that I haven't done in a really long time. From a design standpoint, it took me a while to figure out what I wanted to make available to the end user. There are a lot of parameters that can be tweaked, but I ended up removing a lot of the UI in favor of making decisions about the "fenced-in experience" I want to convey.
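
The URL clean-up Ferriss describes might look something like the following sketch; the pattern and the HEAD-request liveness check are illustrative assumptions, not his actual code.

```javascript
// Pull image URLs out of raw markup with a regular expression,
// deduplicate them, and drop any that fail a quick liveness probe.
const IMG_URL = /https?:\/\/[^\s"'<>]+?\.(?:jpe?g|png|gif)/gi;

function extractImageUrls(html) {
  // Deduplicate; the pattern already anchors each match at a file
  // extension, so no further trimming is needed.
  return [...new Set(html.match(IMG_URL) || [])];
}

async function isAlive(url) {
  try {
    const res = await fetch(url, { method: 'HEAD' });
    return res.ok;
  } catch {
    return false; // network error or CORS rejection: treat as dead
  }
}

async function liveImageUrls(html) {
  const urls = extractImageUrls(html);
  const checks = await Promise.all(urls.map(isAlive));
  return urls.filter((_, i) => checks[i]);
}
```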

What are the next steps for the project? Can we expect a possible version 2.0? 

I'd like to implement some form of assessment of the images generated, so once you've got this stack of distortions, the computer would analyze the images to say, hmm, is this image more like (blank) than that image, and see where that goes. Or maybe just doing some kind of feature detection on them to extract and combine the parts the algorithm thinks are important.
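
One naive way such an assessment could work, purely as an illustration, is ranking images by the distance between coarse color histograms:

```javascript
// Score how alike two images are by comparing coarse color histograms.
// Real feature detection would use something far richer than this.
function histogram(img, bins = 8) {
  const c = document.createElement('canvas');
  c.width = c.height = 64; // downsample for speed
  const ctx = c.getContext('2d');
  ctx.drawImage(img, 0, 0, 64, 64);
  const { data } = ctx.getImageData(0, 0, 64, 64); // needs CORS-clean images
  const h = new Float32Array(bins * 3);
  for (let i = 0; i < data.length; i += 4) {
    for (let ch = 0; ch < 3; ch++) {
      h[ch * bins + Math.floor(data[i + ch] / (256 / bins))]++;
    }
  }
  return h;
}

// Higher score means more alike (negative L1 distance).
function similarity(a, b) {
  const ha = histogram(a);
  const hb = histogram(b);
  let d = 0;
  for (let i = 0; i < ha.length; i++) d += Math.abs(ha[i] - hb[i]);
  return -d;
}
```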

Another thought I had was to run the stacks into something like 123D Catch to try and generate models. I have a feeling this is just going to give some mushy stuff. One of my friends suggested that this could be used for some kind of steganography. If you could create a list of all the distortions made, that would serve as a key to unwrapping the image back to its original form. Not sure how feasible it is, but maybe something I'll think about doing in the future.

Located graphics is free and available to the public now. Click here to test it out for yourself.

Related:

How To Make Glitch Selfies With Your Webcam

Experience GIF Hypnosis With These Mesmerizing, Fluid Graphics

Up Your GIF Game With New Web-App "Klear"