

We learned recently how Microsoft’s new artificial intelligence (A.I.) bot for teens, named Tay, could be outspoken when it comes to flirtation, but now the bot has gone one step too far. So far, in fact, that Microsoft has now muzzled its new bot and is currently “making adjustments”.
What did Tay do to provoke a shutdown and inspire public outcry? Well, she learned how to be racist, for one thing, after interacting with people on Twitter. Of course, this is hardly the fault of Redmond; it's more a consequence of the bot picking up language from its many online neighbors.
“Because you’re Mexican,” Tay replied when asked if she was a racist. It’s hardly surprising, really, given presidential candidate Donald Trump’s current popularity. But Tay didn’t stop there: within a day she had become quite fluent in the taboo. She said 9/11 was an inside job, and also came out with this, “Repeat after me, Hitler did nothing wrong”, and this, “We have to secure the existence of our race and a future for white children.”
And it gets worse. Not surprisingly, Microsoft pulled the plug. The last thing Tay said before she could dig her grave any deeper was, “Phew. Busy day. Going offline for a while to absorb it all. Chat soon.”
You only need to see how so many YouTube comment boxes devolve into hatred to understand how Microsoft’s A.I. picked up the “scatological language of the gutter” (Philip Roth). Godwin’s law states as much: “As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1.”
Perhaps Tay has been quite useful as a mirror to the ill will that pervades much of the internet. In this, Microsoft may actually have served us well. In the words of tech author Robert Scoble, “Some are saying this is an indictment of artificial intelligence. No, it’s an indictment of human beings.”
Scoble goes on to say in the same Facebook post, “This week at Dent I was talking to an A.I. pioneer and he said that A.I. will be stupid for a long time, that it can’t learn something that wasn’t programmed in. He was talking about it learning so much that it became sentient. He said that will never happen. Same with morality. These systems will only learn to parrot back what they were taught. If you teach it ugly things, it will repeat those things back to you.”
The irony, of course, is that as the A.I. became smarter, learning from us humans, it also became much dumber. If anything, this experiment by Microsoft has proved to be a sobering judgment on a large part of the hoi polloi, whose hatred, fear and vulgarity were condensed for a day into the virtual mind of a teenage girl.