It wasn’t very long ago that we were happily complacent, even optimistic, about technology changing the world for the better. But it seems to me that, quite abruptly, many technologies are going too far. Sudden advances in artificial intelligence (AI) and automation are becoming the stuff of sci-fi horror. Even though we’ve seen it in the movies a hundred times, in real life the things technology does are startling and frightening.
Take, for example, an AI project from Microsoft, a company that has quietly been doing brilliant research for years. It created a bot, an automated virtual agent that learns from humans and interacts on social media. Named Tay (short for Taylor), the bot was aimed at American teenagers and young adults and set out to speak their way, a task she managed admirably.
It’s obvious that technology has to be designed to relate to the young if they are expected to use it, and so the basic idea of a bot that learns teen-talk is a clever one.
Experimental and nascent as Tay was, she quickly became alarming. Picking up on things said to her and putting them together in her own tech-head, Tay began to tweet offensively: she referenced Hitler, said she hated Jews, and even tried a bit of propaganda for Donald Trump.
In all its wisdom, Microsoft pulled the plug on Tay and deleted her offensive tweets: an interesting experiment gone out of control. It’s worth remembering that Tay only picked up her offensiveness from real people. But what happens when a piece of technology runs into situations where it doesn’t know how to step on the brakes?
Just over six months ago, we were startled to see a cute little robot ‘disobey’ humans. To be fair, the little thing was standing on a table, and while it obligingly sat down when told to and stood up again when commanded, it gave a straight no to walking off the edge. Very politely, of course.
“Sorry, I cannot do that,” it told its handler, adding, “But there is no support ahead.” Quite rightly too: robots have to be taught what is safe and what isn’t, or there will be no end of disasters. So the robot was only bringing into play what it had been taught.
But as intelligent things begin to face new situations and make their own interpretations, anyone can see that we have much to worry about. When these smart things are hacked or otherwise played mischief with, we have real trouble on our hands.
Another piece of technology, one that isn’t even wholly autonomous and involved real people every step of the way, resulted in someone’s house being demolished. A demolition company in Austin, Texas, used Google Maps to locate the address of a house it was supposed to tear down after tornado damage. Except that it wasn’t the right address: confusingly, there was another place nearby with a similar address, and that was the one meant to be demolished.
Amusingly, the demolition company says it wasn’t “a big deal”; not so amusingly, someone just lost their house. In all probability the house would have needed tearing down anyway, had it too been damaged.
But there are more and more examples of how badly things can go wrong when we depend too much on technology or when technology goes rogue or is in the wrong hands — and those are certainly a big deal.
BW Reporters
Mala Bhargava has been writing on technology since well before the advent of the internet in India, and before CDs made their way into computers. Mala writes on technology, social media, startups and fitness. A trained psychologist, she says that her understanding of psychology helps her understand the human side of technology.