Professor Stephen Hawking famously said, “The development of full artificial intelligence could spell the end of the human race; it would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”
But the good professor also accepted that AI could be beneficial to humankind. An open letter on AI research priorities that he co-signed noted: “The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems. As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase... Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”
What is artificial intelligence?
Few people today have not heard of AI, and most of us have been touched by it in some form or the other, the simplest example being the smartphone, with its voice assistants and fingerprint and facial recognition. Though AI has been in existence for years, its real potential is being harnessed only now. Scientists have classified AI into four categories: reactive machines, limited memory, theory of mind, and self-aware AI.
AI can also be classified on the basis of usage, into artificial narrow intelligence (ANI), artificial general intelligence (AGI) and artificial super intelligence (ASI).
In its 2022 Hype Cycle for emerging technologies, Gartner listed three distinct themes, one of which is accelerated AI automation. This theme calls out the need to expand AI adoption in order to evolve products, services and solutions, and emphasises the accelerated creation and deployment of AI models, with outcomes that include more accurate predictions and decision-making.
Currently, the most evolved form of AI is narrow AI: systems that are designed to perform specific, repetitive tasks independently and accurately. Some examples are voice assistants like Siri and Alexa, recommendation engines on e-commerce and streaming platforms, email spam filters, chatbots, and the facial recognition systems in smartphones.
From the above non-exhaustive list, it is obvious that AI today is deployed in all walks of life and in most sectors of industry, helping workers be more productive. It is in our day-to-day lives in the form of smartphones, home automation systems, robot vacuum cleaners, maps for driving directions, and so on.
Have you ever wondered what underlying technologies these applications use?
Speech Recognition is used by the voice assistants that are prevalent in our smartphones, as well as by standalone devices like Alexa. It works in conjunction with Natural Language Processing (NLP), which breaks down the structure (morphology) of sentences and then uses syntactic parsing, semantic analysis, sentiment analysis and other linguistic techniques to make systems intelligent enough to interact with humans.
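To make this concrete, here is a minimal sketch of such a pipeline using the open source NLTK library. The sample sentence, the choice of tagger and the VADER sentiment lexicon are illustrative assumptions, not a description of how any particular voice assistant works.

```python
# A minimal NLP pipeline sketch with NLTK: tokenise, tag parts of
# speech, and score sentiment. Purely illustrative.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

# One-time model/lexicon downloads (resource names can vary slightly
# across NLTK versions).
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)
nltk.download("vader_lexicon", quiet=True)

sentence = "Alexa, play some upbeat music for my morning run."

# Morphology/structure: break the sentence into tokens.
tokens = nltk.word_tokenize(sentence)

# Syntactic analysis: tag each token with its part of speech.
tagged = nltk.pos_tag(tokens)

# Sentiment analysis: score the overall tone of the utterance.
scores = SentimentIntensityAnalyzer().polarity_scores(sentence)

print(tokens)
print(tagged)
print(scores)  # e.g. {'neg': 0.0, 'neu': ..., 'pos': ..., 'compound': ...}
```

A real assistant puts speech-to-text in front of this pipeline and an intent/dialogue layer behind it, but the break-down-then-analyse idea is the same.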
Machine Learning and Deep Learning algorithms are used to train models for data classification; which of the two to use depends on the kind of problem being solved. They are also used in data analysis, processing and cleansing, especially where the data is unstructured and consists of pictures, video and voice. This has found critical application in cyber security in today's digital age: based on their ability to decipher patterns in humongous data volumes, these models help identify and prevent threats.
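To illustrate the pattern-spotting idea behind such threat detection, here is a toy sketch using scikit-learn's IsolationForest, a common anomaly detection algorithm. The 'network traffic' features and numbers are entirely fabricated for the example.

```python
# Toy anomaly detection sketch in the spirit of the cyber security
# use case: learn what 'normal' looks like, flag what doesn't fit.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Fabricated features per network session: [bytes sent, duration in seconds].
normal_traffic = rng.normal(loc=[500.0, 30.0], scale=[50.0, 5.0], size=(500, 2))
suspicious = np.array([[5000.0, 2.0], [4500.0, 1.0]])  # large, fast bursts

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)

# predict() returns 1 for inliers and -1 for outliers (potential threats).
print(model.predict(suspicious))          # expected: [-1 -1]
print(model.predict(normal_traffic[:3]))  # mostly: [1 1 1]
```

Real security products combine many more signals and far larger data volumes, but the principle of learning normal patterns and flagging deviations is the same.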
Computer Vision makes use of Machine Learning algorithms in simple applications and Deep Learning in complex ones to detect and then identify objects. This is put to use in the facial recognition feature of smartphones and computers on the one hand, and in autonomous vehicles on the other. Some other examples of Computer Vision are bar code scanning, self-checkout kiosks and medical imaging.
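The snippet below sketches the simpler, classical end of that spectrum: detecting faces in a photo with the Haar cascade model that ships with OpenCV. The file name photo.jpg is a placeholder; smartphone face recognition relies on far more sophisticated deep learning models, but the detect-then-identify idea starts here.

```python
# Classical face detection with OpenCV's bundled Haar cascade.
import cv2

# Load the pre-trained frontal-face detector that ships with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("photo.jpg")  # placeholder input image
if image is None:
    raise SystemExit("Could not read photo.jpg")

# Haar cascades work on grayscale intensity patterns.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Returns one (x, y, width, height) rectangle per detected face.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Found {len(faces)} face(s): {faces}")
```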
That said, today's AI has its limitations and is an evolving science. It lacks characteristics that we humans have, such as common sense, empathy, creativity and ethical judgement.
Like it or not, AI is all-pervasive in our lives today, at both the personal and the enterprise level. It is influencing our decisions and making things easier. However, this is just the beginning; with the kind of money being pumped into AI research and development, this field will keep producing new products and solutions, which may well disrupt the status quo, for better or for worse.
The key fear is that AI will negatively impact the workforce and take away routine, repetitive jobs very soon. While earlier the idea was to reduce the workforce incrementally, the talk now is about how small the workforce can be. Then there are massive concerns about privacy. In 2018, Article 19, a human rights organisation, expressed serious reservations about the direction AI is taking: “If implemented responsibly, AI can benefit society. However, as is the case with most emerging technology, there is a real risk that commercial and state use has a detrimental impact on human rights. In particular, applications of these technologies frequently rely on the generation, collection, processing, and sharing of large amounts of data, both about individual and collective behaviour.” It also highlighted the fact that automated decision making can lead to biased outcomes.
What is augmented intelligence?
Gartner defines augmented intelligence as a design pattern for a human-centered partnership model of people and artificial intelligence (AI) working together to enhance cognitive performance, including learning, decision making, and new experiences. As the name suggests, augmented intelligence is the use of machines to support humans, not replace them. The idea is to combine human cognitive faculties with the machine's speed and accuracy to improve outcomes.
(This is the first part of a two-part article that aims to throw light on the intriguing field of artificial intelligence.)