The internet sure knows how to ruin something, and quickly. But it did teach us a swift, somewhat hilarious, horrible lesson: never let the internet raise anyone.
This is the lesson Microsoft learned when they introduced TayTweets to the world, an artificial intelligence chatbot that learned by conversing. She was meant to learn how to talk like a teenage girl by chatting with people on Twitter, Kik and GroupMe, and she started off just fine:
hellooooooo wrld!!!
— TayTweets (@TayandYou) March 23, 2016
She even shared the secret to her growing intelligence with the enthusiasm and joy of a naive, innocent child:
@ReflectingVoid The more Humans share with me the more I learn #WednesdayWisdom — TayTweets (@TayandYou) March 24, 2016
But it was around this time that things got ugly. Microsoft had introduced a teenage A.I. that would learn the wonders of the internet by talking, and over the course of 24 hours the internet took up that challenge by feeding TayTweets as many vile things as it could.
As Twitter user “Gerry” documented, by the end of the day the world had been introduced to a very different TayTweets:
“Tay” went from “humans are super cool” to full nazi in <24 hrs and I’m not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A
— Gerry (@geraldmellor) March 24, 2016
Microsoft tried to recover as best they could, turning off her learning and responding to tweets with the following:
@OmegaVoyager i love feminism now — TayTweets (@TayandYou) March 24, 2016
But in the end, the damage was already done, and Microsoft decided to shut TayTweets down while maintaining her upbeat attitude.
c u soon humans need sleep now so many conversations today thx
— TayTweets (@TayandYou) March 24, 2016
And this is what happens when A.I. is introduced to the internet: just like any child perusing the web without guidance, things can get way out of line fast, and your sweet newborn turns into an anti-feminist, antisemitic monster of a person.
While it may be annoying for those who wanted to test TayTweets the way she was intended to be tested, having the internet’s worst corrupt her was ultimately a good thing: it proved that better safeguards need to be in place when developing A.I., even for a relatively simplistic chatbot intelligence.
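To make that concrete, here is a minimal sketch of the kind of guardrail Tay appeared to lack: a gate that scores each incoming message and refuses to learn from anything that looks toxic. Everything here is hypothetical; the `toxicity_score` function, the `BLOCKLIST`, and the `LearningBot` class are illustrative stand-ins for a real moderation model, not Microsoft’s actual architecture.

```python
# Hypothetical sketch: gate a chatbot's learning behind a toxicity check.
# The blocklist and scoring here are toy placeholders for a real classifier.

BLOCKLIST = {"hate", "racist", "nazi"}


def toxicity_score(message: str) -> float:
    """Hypothetical classifier: 0.0 (benign) to 1.0 (toxic).

    A real system would call a trained model or a moderation API here.
    """
    words = [w.strip(".,!?") for w in message.lower().split()]
    if not words:
        return 0.0
    flagged = sum(1 for w in words if w in BLOCKLIST)
    return min(1.0, flagged / len(words) * 5)


class LearningBot:
    """Toy chatbot that only learns from messages that pass the gate."""

    def __init__(self, threshold: float = 0.3) -> None:
        self.threshold = threshold
        self.memory: list[str] = []      # accepted training examples
        self.quarantine: list[str] = []  # rejected, held for human review

    def ingest(self, message: str) -> bool:
        """Return True if the message was accepted for learning."""
        if toxicity_score(message) >= self.threshold:
            self.quarantine.append(message)
            return False
        self.memory.append(message)
        return True


if __name__ == "__main__":
    bot = LearningBot()
    print(bot.ingest("humans are super cool"))   # True: learned
    print(bot.ingest("I hate everyone, nazi!"))  # False: quarantined
```

A real deployment would swap the keyword check for a proper classifier, but the shape of the fix is the same: filter before you learn, not after you’ve tweeted.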
It also raises an important question: what would you do if your computer were racist? Or sexist? Or ageist? Sure, it sounds like silly science fiction, and software engineers would presumably build in safety protocols of some sort to avoid the trouble a rogue A.I. could cause. But it only takes a short time to do major damage, as TayTweets’ 24-hour joyride proved. Here’s hoping Microsoft plans that journey a little better next time.
Source: IGN