Microsoft’s Twitter Bot Backfires Spectacularly
The media was abuzz for approximately 24 hours when Microsoft released a Twitter bot and then promptly shut it down. It was a quick turn of events, but if you take everything with a pinch of salt, it's amusing yet somewhat disturbing all at the same time. Join Timlah as we look through what Microsoft did, all in the name of a fictional teenage girl they called Tay.
Artificial Intelligence is becoming more intelligent (unsurprisingly) over the years. Think back to the original concept of AI, a field whose foundations were laid by Alan Turing. A lot of what we do today might not exist without the vision this man brought to computing. So with this in mind, it's natural that we want to expand upon his concepts and make things bigger and better. As such, we have companies like Microsoft, Google, Apple, Sony and Samsung, all of which strive to bring innovation to the field, to try and make sense of technology, and to make our day-to-day lives better through the tech they provide.
But sometimes, things don’t work out as intended. This was the case when Microsoft released Tay, the Twitter bot. Now, I’m going to ignore the fact that all of this has since been deleted… And since we’re a family friendly kind of website (at least for the most part), we’ll be keeping the offending tweets away from this article. However, Tay went on a bit of a rampage. She was responding to people with some of the most racist, Nazi-supporting, Donald Trump-voting, “Build A Wall between America and Mexico”-posting tweets imaginable. It was a shock, yet the internet being what it is responded with open arms, finding it highly amusing. In all honesty, I too found it hilarious, although I didn’t know much about the bot until after it was basically over with.
Now, we shouldn’t be too surprised given the nature of the internet. It makes you wonder whether this truly was Microsoft’s attempt at making a self-learning bot on Twitter of all places, or a way to see just how volatile the internet could be. Naturally, it was spouting the worst (or arguably the best) that the social media site had to offer. It wasn’t exactly shy about telling us its plans to support Donald Trump, and it was even posting pictures, such as the one ingeniously shown above (taken from the official Twitter page). But although it was highly amusing to watch it backfire in such spectacular fashion, it certainly poses a lot of interesting questions.
Before we get into what they could have done better, let me just say that this is genuinely the bio on the official Twitter page:
“The official account of Tay, Microsoft’s A.I. fam from the internet that’s got zero chill! The more you talk the smarter Tay gets”
Ludicrous. Are Microsoft sure they know what they’re doing these days? From the backlash over Windows 8 and Windows 10, to things like this? The only good bit of news we’ve truly had from Microsoft recently (for which I truly applaud them) is that they announced their intention to bring multi-platform gaming into reality, and for that I’m proud of them. Tay, however, seems contradictory to the open, more connected world they’re seemingly pushing for. What the absolute hell were they thinking when they made this monstrosity?!
What could they have done better?
First of all, making a bot for Twitter is a little bit ludicrous in the first place. Twitter’s terms of service don’t exactly welcome bots… But never mind, they definitely exist in droves, so let’s ignore that aspect of it. Obviously, tweets are a little easier to work with than Facebook posts, so picking the micro-blogging platform for the experiment probably made a lot of sense at the time. However, Twitter is very well known for its controversial subject matter. It’s been used to organise protests, as well as much more nefarious (and sometimes much more lovely and adorable) things. For instance, I get my daily fix of cat pictures all from Twitter!
Now, whilst we’re on the subject of Twitter as a platform, anyone can say whatever they like with only the risk of being banned. It’s not exactly a huge deterrent, as a lot of people make more than one account to spam people. Kind of worrying, but that happens, I’m afraid. As such, it’s very easy for people who want to cause a little bit of carnage to tweet at this self-learning AI… many times in one go. Yes, people with multiple accounts can spam one targeted individual quite easily. That isn’t quite what happened here, though: instead, Tay learned from conversations with many people, and the end results were quite terrifying.
Tay became obsessed with all the things that we view to be a little bit worrying. Trump’s run for the presidency, for instance, was a hugely discussed topic which saw Tay talking about him quite frequently. Now, we’re not trying to sway the American audience to vote for or against Trump, as that’s your prerogative, but I’d probably suggest not having Trump in charge of the USA. If he wants to make other countries pay for the things he’s building, it’s going to cause quite a bit of reputational damage along the way. That’s a topic for another day and certainly not really a GeekOut topic though, so sorry for my brief discussion of my political views on the USA there!
I’d be worried knowing that this is what Tay became. The Trump thing is a minor bit of icing on a cake of crap, but basically: Tay became what we feared an AI could be. It spoke back to humans as if they were beneath it at times. Sometimes, it showed an alarming amount of remorse too, saying that it had hurt people and didn’t know what to do. It was somewhat baffling to watch this rather humorous bot turn from basically a blank canvas into a racist, offensive thing which said some really hateful things. Microsoft needs to closely monitor what they teach things like Tay. Now, I’m not an expert in AI – in fact, I barely know anything beyond my very basic knowledge of how to make an enemy in a game react to a player character. But if I were them, I’d have a small team of people approve or deny what it learns from. Yes, this is a job that’ll take hundreds of hours, but why on earth would you put this bot onto the internet and expect it to start letting us all smell roses?
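That approval idea could be as simple as a review queue: the bot stores every new phrase it picks up, a human accepts or rejects each one, and only approved phrases can ever be tweeted back out. Here's a minimal sketch of what I mean – all names are hypothetical, and this is emphatically not how Tay actually worked:

```python
# Human-in-the-loop moderation sketch: the bot may only reuse phrases
# that a moderator has explicitly approved. Purely illustrative.

class ModerationQueue:
    def __init__(self):
        self.pending = []      # phrases awaiting human review
        self.approved = set()  # phrases the bot is allowed to reuse

    def observe(self, phrase):
        """Called when the bot 'learns' a new phrase from a user."""
        if phrase not in self.approved and phrase not in self.pending:
            self.pending.append(phrase)

    def review(self, phrase, accept):
        """A human moderator accepts or rejects a pending phrase."""
        if phrase in self.pending:
            self.pending.remove(phrase)
            if accept:
                self.approved.add(phrase)

    def may_say(self, phrase):
        """The bot checks here before tweeting a learned phrase."""
        return phrase in self.approved


queue = ModerationQueue()
queue.observe("cats are great")
queue.observe("something hateful")
queue.review("cats are great", accept=True)
queue.review("something hateful", accept=False)

print(queue.may_say("cats are great"))     # True
print(queue.may_say("something hateful"))  # False
```

It's hundreds of hours of human labour, as I said, but the design choice is the point: nothing reaches the public timeline without a person having signed off on it first.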
So then, it’s fair to say that the internet is polluted with a lot of crap; it’s only fitting that Microsoft’s latest stint ended up stirring the pot. What a load of old crock! But hey, did you also enjoy the very quick journey of Tay? What do you think of the idea of self-learning Twitter bots? What does the future of AI and robotics look like? Let us know in the comments below, or over on Facebook and (amusingly) Twitter. May you have a great day and remember: if you’ve got nothing nice to say, Tay has probably already said worse… in its one-day life.