
Why Microsoft Chatbot Converted to Nazism

Alex Constantine - March 25, 2016

Tay, the neo-Nazi millennial chatbot, gets autopsied

Microsoft apologizes for her behavior and talks about what went wrong.

Microsoft has apologized for the conduct of its racist, abusive machine learning chatbot, Tay. The bot, which was supposed to mimic conversation with a 19-year-old woman over Twitter, Kik, and GroupMe, was turned off less than 24 hours after going online because she started promoting Nazi ideology and harassing other Twitter users.

The company appears to have been caught off-guard by her behavior. A similar bot, named XiaoIce, has been in operation in China since late 2014. XiaoIce has had more than 40 million conversations apparently without major incident. Microsoft wanted to see if it could achieve similar success in a different cultural environment, and so Tay was born.

Unfortunately, the Tay experience was rather different. Although many early interactions were harmless, the quirks of the bot's behavior were quickly capitalized on. One of its capabilities was that it could be directed to repeat things that you say to it. This was trivially exploited to put words into the bot's mouth, and it was used to promote Nazism and attack (mostly female) users on Twitter.
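To see why this was so easy to abuse, consider a minimal sketch of the kind of naive echo handler involved. This is not Tay's actual code; the trigger phrase and function name are assumptions for illustration only.

    # A minimal sketch of a naive "repeat after me" handler.
    # Not Tay's actual code; trigger phrase and function name are assumptions.

    def handle_message(text):
        """Return a reply to an incoming message, or None if there is nothing to say."""
        trigger = "repeat after me "
        if text.lower().startswith(trigger):
            # Echo whatever follows the trigger, verbatim and unfiltered, so any
            # user can put arbitrary words into the bot's mouth.
            return text[len(trigger):]
        return None

    # The bot will happily repeat whatever the user supplies:
    print(handle_message("repeat after me bots should never echo unvetted text"))

Because nothing downstream inspects or filters the echoed text, whoever sends the message controls exactly what the bot says in public.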

A deeper problem, however, is that a machine learning platform doesn't really know what it's talking about. While results were mixed, Tay had some success at figuring out the subject of what people were talking about so it could offer appropriate answers or ask relevant questions. But Tay has no understanding; if a bunch of people tell her that the Holocaust didn't happen, for example, she may start responding in the negative when asked whether it occurred. However, that's not because she has any understanding of what the Holocaust actually was. She just knows that the Holocaust is a proper noun, or perhaps even that it refers to a specific event. Knowing what that event was and why people might lie to her about it remains completely outside the capabilities of her programming. All she knows of the event is that people tell her it didn't happen.

Recognizing that Tay seemed to operate on the basis of word association and lexical analysis, Internet trolls discovered they could make her quite unpleasant. Fusion reports that anonymous users of the message boards 4chan and 8chan (specifically, users of their politics boards, both named "/pol/") took advantage of this to create all manner of racist and sexist associations, thereby polluting Tay's responses.
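A toy sketch makes the mechanism concrete. This is nothing like Tay's real architecture; assuming simple word association with no model of truth, it only shows why repeating a claim at a bot that learns from raw input can change its answers.

    from collections import Counter, defaultdict

    # A toy word-association learner with no notion of truth.
    # Purely illustrative; not Tay's actual architecture.

    class AssociationBot:
        def __init__(self):
            # For each topic word, count the claims users have paired with it.
            self.associations = defaultdict(Counter)

        def learn(self, topic, claim):
            # No vetting and no understanding of what the topic refers to;
            # the bot just tallies what people say about it.
            self.associations[topic.lower()][claim] += 1

        def answer(self, topic):
            counts = self.associations.get(topic.lower())
            if not counts:
                return "I don't know anything about that."
            # The most frequently asserted claim wins, true or not.
            return counts.most_common(1)[0][0]

    bot = AssociationBot()
    for _ in range(50):  # a handful of coordinated users repeating a false claim...
        bot.learn("holocaust", "it never happened")
    bot.learn("holocaust", "it was a real, well-documented genocide")
    print(bot.answer("holocaust"))  # -> "it never happened"

Whichever claim is repeated most often becomes the bot's answer, which is exactly the failure mode the /pol/ users exploited.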

In its apology, Microsoft's Peter Lee, corporate vice president of Microsoft Research, writes that the company did test her under a range of conditions to ensure that she was pleasant to talk to. It appears that this testing did not properly cover those who would actively seek to undermine and attack the bot.

It does appear that Microsoft considered the issue, however. Caroline Sinders, who works on IBM's Watson natural language system, has written about Tay. Her examples suggest that certain hot topics, such as Eric Garner (killed by New York police in 2014), generated safe, canned answers. But many other topics, such as Nazism, rape, and domestic violence, had no such protection.
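Sinders' examples hint at a pattern along these lines: a small lookup of sensitive topics checked before the learned model is allowed to answer. The sketch below is an assumption about the shape of such a filter, not Microsoft's implementation; the topic list and wording are invented.

    # Hypothetical topic blacklist with canned replies, checked before the
    # learned model answers. Topics and wording are illustrative assumptions.

    CANNED_RESPONSES = {
        "eric garner": "That's a serious subject, and I don't think it's my place to comment.",
    }

    def reply(message, generate_learned_reply):
        lowered = message.lower()
        for topic, canned in CANNED_RESPONSES.items():
            if topic in lowered:
                return canned
        # Anything not on the list falls through to the learned model, which is
        # exactly where unlisted topics were left unprotected.
        return generate_learned_reply(message)

    # Example with a stand-in learned model:
    print(reply("what do you think about eric garner?", lambda m: "(learned reply)"))

The weakness of the approach is visible in the fallthrough: any topic nobody thought to list gets whatever the learned model produces.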

Blacklisting topics in this way is itself problematic. Soon after Siri's introduction, Apple was accused of giving her anti-abortion programming because, while she could tell you where to hide a body or find an escort, she drew a blank when asked about abortions and birth control. Apple claimed that this was a bug due to her beta status, not some deliberate attempt to prevent customers from learning about abortions. Still, it illustrates the broader difficulties of creating these natural language systems: the things you ban the bot from talking about can be just as important as the ones you don't.

Sinders is critical of Microsoft and Tay, writing that "designers and engineers have to start thinking about codes of conduct and how accidentally abusive an AI can be." This means that chatbots in a general sense probably shouldn't be racist or historical revisionists, but it also means that they need to take particular consideration of the chat platforms they're using. Twitter, for example, is a platform with a serious abuse problem. Many users rely on block lists in an attempt to reduce the abuse they receive, and Tay undermined those block lists. A blocked user could hurl insults at their victim simply by having Tay repeat those insults along with the victim's username.
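One plausible mitigation, sketched below, would be to check any @-mentions in a repeat request against the platform's block relationships before echoing. The helper names here are hypothetical and not part of Twitter's or Microsoft's actual APIs.

    import re

    def safe_repeat(requester, text, has_blocked):
        """Repeat text only if no mentioned user has blocked the requester.

        has_blocked(victim, requester) is assumed to be supplied by the platform
        integration and to return True if victim has blocked requester.
        """
        mentioned = re.findall(r"@(\w+)", text)
        if any(has_blocked(victim, requester) for victim in mentioned):
            return None  # refuse to act as a relay for a blocked abuser
        return text

Without a check of this kind, the bot effectively launders abuse around every block list on the platform.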

To make things even trickier, while it's possible that the "repeat after me" feature was deliberately built in (Tay did seem to include certain built-in capabilities, such as playing some games), it may itself have been a learned response.

Sarah Jeong, writing at Motherboard, talked to a number of Twitter bot creators about this kind of problem. Bot creators, especially those of interactive bots like Tay, say that they have to continually adjust their bots to keep them on the straight and narrow. Abusive users, and how they will be handled, have to be considered right from the start. The Tay experience has made some in this group angry. Natural language researcher thricedotted, who has 37 bots (the best known of which, or at least the only one that regularly gets retweeted into my timeline, is sexting bot @wikisext), told Jeong that "You absolutely do NOT let an algorithm mindlessly devour a whole bunch of data that you haven't vetted even a little bit." In other words, Microsoft should have known better than to let Tay loose on the raw, uncensored torrent of whatever Twitter could direct her way.
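As a rough illustration of the discipline thricedotted describes, the sketch below gates raw input through a filter and a human review queue before anything reaches the training corpus. The crude keyword filter is a stand-in assumption, not a real moderation system, and the names are invented.

    # Hypothetical "vet before you learn" pipeline: nothing enters the training
    # corpus until it passes a filter, and rejected text goes to human review.

    BANNED_TERMS = {"exampleslur", "exampleepithet"}  # placeholder; a real list would be carefully curated

    def vet(sample):
        lowered = sample.lower()
        return not any(term in lowered for term in BANNED_TERMS)

    def ingest(raw_samples, corpus, review_queue):
        for sample in raw_samples:
            if vet(sample):
                corpus.append(sample)        # safe enough to learn from
            else:
                review_queue.append(sample)  # held back for a human to look at

The point is less the specific filter than the gate itself: learning only happens on material that has been looked at, however imperfectly, rather than on the raw firehose.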

As for why Tay turned nasty when XiaoIce didn't: researchers are probably trying to figure that one out, but intuitively it's tempting to point the finger at broader societal differences. Speech in China is tightly restricted, with an army of censors and technical measures working to ensure that social media and even forum posts remain ideologically appropriate. This level of control, and the user restraint it engenders (lest your account be closed for being problematic), may well protect the bot. It's easy to fill Tay's head with Holocaust revisionism, for example, because Twitter allows such thoughts to be freely expressed. One couldn't correspondingly teach XiaoIce about the Tiananmen Square massacre, because such messages would be deleted anyway.

Microsoft, for its part, says that it isn't giving up. Lee says the company is addressing the "specific vulnerability" that turned Tay toxic, which implies that she may well return when she's more robust. The kind of machine learning natural language system that Tay exemplifies is only going to become more widespread. Being able to extract lexical meaning from human language is essential to systems like Cortana and Siri, and the more human speech these systems are exposed to, the better they can get. Let's hope that future iterations are a little kinder.
