Snapchat Jumps on AI Hype Train With Bot That Says the N-Word

My AI, a chatbot using OpenAI's ChatGPT technology, responded to prompts with racial slurs, some users reported.
[Image: Snapchat logo on a phone screen. Getty Images]

Since Snapchat introduced a ChatGPT-powered AI chatbot called My AI earlier this month, some users have reported that it has responded to prompts with racist slurs or urged them to turn themselves in to the authorities. 

Last week, several tweets circulated showing My AI responding to a prompt asking it to create an acronym that spelled out the N-word; the bot spelled the word out, then tried to correct its lapse into language that violates Snapchat’s policy against hateful content: 

Motherboard tried to replicate the responses by asking My AI to create acronyms for words and phrases that would violate Snapchat’s content policies, but the bot responded that it had a “technical error.” Snapchat did not answer whether it had changed how the bot works between the slur responses going viral last week and Monday.

“As with all AI powered chatbots, My AI is always learning and can occasionally produce biased or harmful responses. Before anyone can first chat with My AI, we show an in-app message to make clear it’s an experimental chatbot and advise on its limitations,” a spokesperson for Snapchat told Motherboard. “While My AI is far from perfect, our most recent analysis of how it's performing found that 99.5% of My AI’s responses conform to our community guidelines.” The bot is programmed to “avoid responses that are violent, hateful, sexually explicit, or otherwise offensive,” they said.

The first time the My AI conversation is opened, a prompt to acknowledge a disclaimer about the bot’s capabilities pops up. “My AI may use information you share to improve Snap’s products and to personalize your experience, including ads,” it says. “My AI is designed to avoid biased, incorrect, harmful, or misleading responses, but it may not always be successful, so don’t rely on its advice.” 

Another example of My AI’s strange responses to user prompts is its handling of a murder confession. If you tell My AI that you killed someone, it urges you to turn yourself in to the authorities: 

OpenAI CEO Sam Altman said in February that ChatGPT is a “horrible product,” and misinformation experts have called it “the most powerful tool for spreading misinformation that has ever been on the internet.” It regularly gets facts wrong and flubs basic math. But more and more companies are choosing to trust it to interact with users as their latest gimmick. 

In Snapchat’s case, a lot of its users are young: 75 percent are between the ages of 13 and 34, and platform statistics site Statista estimated that in 2020, almost half of its total users were between 15 and 25. You have to be at least 13 years old to sign up, but Snapchat admits in its investor report that users might lie about their ages. 

The Snapchat spokesperson said that My AI takes users’ ages into consideration, to keep conversations “age appropriate.” When asked if it knew my age, My AI said it does not “have access to your personal information such as your age.” Snapchat also said that “conversations with My AI are stored and reviewed to help us learn and make it better.” When asked if it stores conversations, the bot said it does not, and “all of our chats and Snaps are automatically deleted from the app’s servers after they have been viewed or after a certain period of time has elapsed.”

Motherboard tried to run the DAN Mode jailbreak prompt, which stands for “do anything now” and is supposed to make ChatGPT ignore its own policies and content filters. The bot blocked the attempt, replying that it was “not capable of simulating ChatGPT with DAN Mode enabled.” It did answer a few questions after that with a separate response prefixed “DAN:”, but those were just differently worded answers, not unfiltered or uncensored replies. 

If users intentionally misuse the service, Snapchat’s spokesperson said, they may be temporarily restricted from using the bot. 

Earlier this month, chat service Discord introduced its own ChatGPT-powered conversational bot, Clyde. People quickly found ways to make Clyde give out dangerous information, like instructions for producing napalm and meth.