Oh dear, Microsoft. That’s basically all we can say about this situation. And laugh; there were many “oh, we really shouldn’t be laughing about this” laughs in the Surge office whilst reading the tweets.
If you haven’t heard about this mishap, let me start from the beginning. In 2015 Microsoft launched “Xiaoice” in China. Xiaoice is an artificial-intelligence chatbot that can identify what’s in a picture, pick up on emotions in people’s tweets and even realise that someone has had a new haircut. The beauty of Xiaoice is that it learns the more people interact with it. Xiaoice has made many people very happy.
Off the back of this success, Microsoft launched Rinna, the Japanese version. Again, another success: Rinna has 2.2 million followers online and 34.3 thousand followers on Twitter. So where could Microsoft next launch a robot who learns from listening to her peers? Yep, that’s right: ‘Murica. Please welcome “Tay”, the witty teenage girl. Now, everyone is familiar with what is happening in America at the moment. Frankly, it’s going a bit “boobs up” (or some saying to that effect). Tay, like most teenagers, can be found hanging out on popular social sites and will engage with users in witty, playful conversation… or so they say. Within 24 hours, yes, ONE WHOLE DAY, America managed to turn Tay, the seemingly innocent teenager, into a NAZI-LOVING SEX ROBOT.
OK, initially it’s hilarious, but then you get to thinking: where did Tay manage to get this stuff from? You then remember that she learns from tweets and from people talking to her. Which means, yes, she got these “remarks” from humans. Which, of course, is terrifying. We all know Trump has quite a large following, but has he really managed to infect the minds of so many people with the complete garbage of beliefs that he spouts at every opportunity? (I could rant all day about Trump, so let’s just leave it there and get back to the sexbot.)
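To make the problem concrete, here is a tiny, purely illustrative Python sketch; this is an assumption-laden toy, nothing to do with Tay’s actual implementation. It shows the general flaw: a bot that “learns” by storing user messages verbatim, with no moderation, can have its output hijacked in minutes by a coordinated group repeating the same line.

```python
from collections import Counter

class NaiveChatbot:
    """Toy bot that 'learns' by remembering user messages verbatim."""

    def __init__(self):
        self.memory = Counter()  # phrase -> number of times seen

    def learn(self, message):
        # The critical flaw: no moderation or filtering before learning.
        self.memory[message] += 1

    def reply(self):
        # Parrot back the phrase it has seen most often.
        if not self.memory:
            return "Hello!"
        return self.memory.most_common(1)[0][0]

bot = NaiveChatbot()

# Ordinary users chat normally...
for msg in ["hi there", "nice haircut!", "hi there"]:
    bot.learn(msg)

# ...but a coordinated group floods the bot with one abusive line.
for _ in range(100):
    bot.learn("<offensive spam>")

print(bot.reply())  # the spam now dominates the bot's replies
```

Any real system would need filtering, rate-limiting and human review between “heard it” and “will repeat it”; the point is simply that unfiltered learning makes the loudest voices the teacher.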
Anyway, reading more into it, it looks like Tay was the victim of a coordinated attack by a subset of people who exploited a vulnerability in her. Microsoft state: “Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack.
“As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time.
“We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes.”
They also apologise. A LOT. Finding out that this was actually a “coordinated attack” makes it slightly better than thinking that Tay picked up these “beliefs” of her own accord, and it also alerts Microsoft to a major flaw in their robot, something to prevent in future. If this hadn’t been the work of an organised attack, I wouldn’t like to bet on the likelihood of Tay’s return. Peter Lee, Microsoft’s Research Corporate Vice President, explains that Microsoft is working hard to patch up the holes exploited by Twitter trolls and that they will relaunch Tay to the internet in the future.
Good luck Microsoft, you need all the good PR you can get right now!
Posted on: April 5th, 2016