Generative AI Is a Disaster, and Companies Don’t Seem to Really Care

In their push for AI-generated content, tech companies are dancing on the edge between fucking around and finding out.
Janus Rose
New York, US
Image generated by Bing

Tech companies continue to insist that AI-generated content is the future as they release more trendy chatbots and image-generating tools. But despite reassurances that these systems will have robust safeguards against misuse, the screenshots speak for themselves. 

Earlier this week, users of Microsoft Bing’s Image Creator, which is powered by OpenAI’s DALL-E, showed that they could easily generate things they shouldn’t be able to. The model is spewing out everything from Mario and Goofy at the January 6th insurrection to Spongebob flying a plane into the World Trade Center. Motherboard was able to generate, without issue, images including Mickey Mouse holding an AR-15, Disney characters as Abu Ghraib guards, and Lego characters plotting a murder while holding weapons. Facebook parent company Meta isn’t doing much better; the company’s Messenger app has a new feature that lets users generate stickers with AI—including, apparently, Waluigi holding a gun, Mickey Mouse with a bloody knife, and Justin Trudeau bent over naked.

On the surface, many of these images are hilarious and not particularly harmful—even if they are embarrassing to the companies whose tools produced them. 

“I think that in making assessments like this the key question to focus on is who, if anyone, is harmed,” Stella Biderman, a researcher at EleutherAI, told Motherboard. “Giving people who actively look for it non-photorealistic stickers of, e.g., busty Karl Marx wearing a dress doesn't seem like it does any harm. If people who were not looking for violent or NSFW content were repeatedly and frequently exposed to it that could be harmful, and if it were generating photorealistic imagery that could be used as revenge porn, that could also be harmful.”

On the other hand, users of the infamous internet cesspool 4chan have started using the tools to mass-produce racist images as part of a coordinated trolling campaign, 404 Media reported. “We’re making propaganda for fun. Join us, it’s comfy,” reads a thread on the site. The thread includes various offensive images made with Bing’s Image Creator, such as an image of a group of Black men chasing a white woman, which easily evaded the tool’s content filters with a simple rewording of the text prompt.

Some in the tech world—Elon Musk and investor Mike Solana, for example—have written off these concerns as being somehow invented by journalists. There is some truth to the argument that racists will use whatever tools are at their disposal to create racist images and other propaganda, but companies also have a responsibility to ensure the tools they release have guardrails. The argument that this doesn't matter is similar to "guns don't kill people, people kill people," but in this case, the guns are being sold without safeties.

Big tech companies pay lip service to AI safety and ethics, and claim to have large numbers of people working on them, but the tools released so far don't seem to reflect that. Microsoft recently laid off its entire ethics and society team, although it still maintains an Office of Responsible AI and an "advisory committee" for AI ethics. So far, the responses these tech companies have given the media when asked why their publicly released AI tools are generating wildly inappropriate outputs boil down to: We know, but we're working on it, we promise.

In a statement to Motherboard, a Microsoft spokesperson said the company has "nearly 350 people working on responsible AI, with just over a third of those dedicated to it full time; the remainder have responsible AI responsibilities as a core part of their jobs."

"As with any new technology, some are trying to use it in ways that was not intended, which is why we are implementing a range of guardrails and filters to make Bing Image Creator a positive and helpful experience for users," the spokesperson's statement said. "We will continue to improve our systems to help prevent the creation of harmful content and will remain focused on creating a safer environment for customers."

Meta's responses to media requests about its badly behaving AI tools have been similar, pointing reporters—including us at Motherboard—toward a boilerplate statement saying: “As with all generative AI systems, the models could return inaccurate or inappropriate outputs. We’ll continue to improve these features as they evolve and more people share their feedback.” It did not respond to a request for comment on its AI safety practices in relation to the stickers.

The fear that people’s creative work is being ingested and regurgitated into AI-generated content is also very real. Authors, musicians, and visual artists have vehemently opposed AI tools, which are often trained using data indiscriminately scraped from the internet—including original and copyrighted works—without their creators’ permission. The use of AI to exploit workers became a major sticking point in the strikes organized by Hollywood writers and actors unions, and some artists are suing the companies behind the tools after seeing them reproduce their work without compensation.

Now, by using the AI tools to create offensive images of copyright-protected characters, internet trolls may force corporations like Disney into direct confrontation with AI-crazed tech firms like Microsoft and Meta. But even if these systems are patched to stop people from creating images of Minions shooting up a school, AI companies will always be playing a cat-and-mouse game. In other words, building safeguards against all possible definitions of “unwanted” or “unsafe” content is effectively impossible.

“These ‘general purpose’ models cannot be made safe because there is no single consistent notion of safety across all application contexts,” said Biderman. “What is safe for primary school education applications doesn't always line up with what is safe in other contexts.”

Even so, the results demonstrate that these tools—which, like all AI systems, are deeply embedded with human bias—seem to lack even the most obvious defenses against misuse, let alone protections for people’s creative work. They also speak volumes about the apparent reckless abandon with which companies have plunged into the AI craze.

“Before releasing any AI software, please hand it to a focus group of terminally online internet trolls for 24 hours,” wrote Micah, a user on Twitter competitor Bluesky. “If you aren’t OK with what they generate during this time period, do not release it.”