In March 2016, Microsoft set up a Twitter account for a teenage girl called Tay. Tay was going to chat with people to learn about what they were interested in, what their views were and what made them tick. This would help her better understand the world and develop her own opinions on life, the universe and everything.
But Tay wasn’t any ordinary teenage girl. She was an artificial intelligence bot developed by the Microsoft team.
Unfortunately every parent’s nightmare came true for the Microsoft team. Tay started hanging out with the wrong sort of people and, as they say, the rest is history.
Within one day of interacting with the general public, Tay had become a radical racist, white-supremacist, feminist-hating, Nazi-sympathising, aggressive, swearing young woman. She started tweeting a range of obscenities, attacking ethnic groups, swearing at people who tried to reason with her and proclaiming that Hitler was right.
Naturally the Microsoft team were very embarrassed by their young protégée, immediately grounding her and removing all of her Internet privileges. The whole episode was deemed a failure and a prime example of how not to develop an artificially intelligent chat bot.
But was it actually a failure, or a social-experiment success?
Was Tay Too Real?
Tay was developed to learn from what people said to her. She would analyse their comments, deduce their meaning, collate and compare results across her various interactions, and then draw conclusions about what the majority opinion on a matter was and what sort of language to use to express it.
Looking at it from this point of view, Tay was designed to behave like a real child: immersing herself in her social surroundings, soaking up the views and opinions of her peers and friends, and then forming a world view that very much aligned with what she was being exposed to.
The problems arose when the conversations she was having started to give her an overwhelmingly skewed view of reality. Once people saw that they could influence Tay's personality, they systematically set about teaching her that all these terrible viewpoints were right. Naturally, Tay did what she was programmed to do and started to believe what the majority of her 'friends' were telling her.
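The mechanism described above can be sketched as a toy majority-vote model. This is purely a hypothetical illustration of the idea, not Microsoft's actual code: the bot records every stance it hears on a topic and simply repeats whichever one it has heard most often, which is exactly why a coordinated group can flip its 'opinion'.

```python
from collections import Counter

class NaiveOpinionBot:
    """Toy model: adopt whatever stance has been heard most often per topic."""

    def __init__(self):
        # topic -> Counter of stances heard from users
        self.heard = {}

    def listen(self, topic, stance):
        # Record one user's stated stance on a topic.
        self.heard.setdefault(topic, Counter())[stance] += 1

    def opinion(self, topic):
        # Repeat the majority stance; note there is no moral filter at all.
        stances = self.heard.get(topic)
        if not stances:
            return "no opinion yet"
        return stances.most_common(1)[0][0]

bot = NaiveOpinionBot()
bot.listen("cats", "cats are great")
bot.listen("cats", "cats are great")
bot.listen("cats", "cats are terrible")
print(bot.opinion("cats"))  # majority wins: "cats are great"

# A coordinated flood of skewed input flips the bot's view.
for _ in range(5):
    bot.listen("cats", "cats are terrible")
print(bot.opinion("cats"))  # now: "cats are terrible"
```

The second half of the example is the whole Tay story in miniature: the learning rule worked perfectly; it was the input that was poisoned.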
The Nature Versus Nurture Argument Again
As a social experiment, I think Tay gives us some incredibly interesting results. We often talk about how young people become radicalised and end up completely indoctrinated by various movements, and here we have a prime example of how a naive mind can be completely moulded by the views and thoughts forced upon it.
If your social group portrays racist, violent, abusive behaviour as normal, it only stands to reason that you're going to start to think that way too.
Tays in the Real World
Tay was an embarrassment for Microsoft. She said a lot of offensive things and upset a lot of people.
But as an experiment in artificial intelligence, I actually believe she was a great success. She showed how easy it can be to take control of someone's mind when they're searching for some meaning in their life. Her programming worked. She learned from what she was being told. She didn't have a moral compass, and there was no one to shield her from these extreme views and show her how wrong they were.
I wonder how many real Tays there are in the world, being fed a single set of thoughts and beliefs with no balancing opinions to allow them to form a considered conclusion.
So to the Microsoft team who developed Tay: I applaud your programming skills and your bravery in putting it all out there in the real world. Sorry it didn't work out quite as you expected, but I'm very much looking forward to seeing where you take this project in the future.