<div><strong>By Mala Bhargava</strong><br><br>Human values. Like how it isn't a good thing to hurt someone else. Or how wasting food isn't right. It's a form of thinking that human beings have always felt was limited to them, not shared with animals, who obviously feel and express an array of emotions but seem to be guided more by instinct than values.</div><div> </div><div>And the last thing one would have expected is that inanimate objects could have values. But that's exactly what could be the next step in technology for robots.</div><div> </div><div>Already, people have been pretty successful at making robots seem to experience emotions. Or at least, the robots are able to express feelings and act on them. They're also able to identify emotions in humans and respond to them. Pepper isn't the only robot that can read a person's body language and expressions and respond accordingly, even attempting to cheer up someone who looks sad by going up and chatting, entirely on his own initiative.</div><div> </div><div><table align="right" border="1" cellpadding="5" cellspacing="1" style="width: 200px"><tbody><tr><td><img alt="" src="http://bw-image.s3.amazonaws.com/malabhargava.jpg"></td></tr><tr><td><strong>Mala Bhargava</strong></td></tr></tbody></table>But as we get better at almost replicating ourselves in robot form, what is going to happen when they start thinking on their own: getting angry, attacking someone, doing physical damage, and so on? In fact, robots are being created for warfare as well, and it isn't far-fetched to think robots could let aggression fly just the way people do.</div><div> </div><div>One way to put a check on the obvious threat to humans from feeling, acting robots and other pieces of technology is to program them to act on certain values. At least that's what Stuart Russell, Professor of Computer Science at the University of California, Berkeley, told The California Report. 
He said that with robots doing so much of what humans do, it will be imperative to program them to recognise and work with values.</div><div> </div><div>The possible threat to humans has been obvious for a long time, and scientists have been trying to develop new technology while minimising the threat. Now and again, nasty accidents have happened, such as a robot causing the death of a man at work, but those were the result of misinterpretation, or of not enough programming to recognise unforeseen situations. In future, one will just have to make sure there aren't that many unforeseen situations and that a robot will think before it acts.</div><div> </div><div>For simple instructions like "don't throw the dog into the dustbin while cleaning the house", this sort of programming is entirely within the realm of possibility. It's when tasks and interactions get more complex that we will have to seriously worry. Human beings haven't fully made sense of their own values, which in any case change from time to time, so expecting a machine to follow them will be interesting.</div><div> </div><div>But it's a minefield coming up.</div>