Silicon Valley giant Google announced an Artificial Intelligence (AI)-first strategy at its annual conference on Thursday (18 May). It is no longer mobile first; it is AI first, on mobile. AI moves into the consumer space, with almost all of Google's applications that come bundled with Android actively using it.
Everyone knows Google through its eponymous search bar, the Gmail icon, and Google Maps; it is part of every netizen's life. But these software applications, or even the algorithms behind them, are not Google. What makes Google is the millions of processors in its data centers that store information on every action of its users. Google invested more than $10.9 billion in its data centers in 2016, up 10 per cent from 2015, and this investment will rise again in 2017 with machine learning and a new set of servers. Google has invented, and patented, most of the hardware in its data centers: high-speed networking equipment, processors, the design of the servers, even the design of the cooling systems. Google holds the largest number of patents in this area.
It was therefore interesting that the first thing Sundar Pichai, CEO, Google, showed at the I/O conference was the second-generation Tensor Processing Unit (TPU), a chip aimed at powering machine learning. First unveiled in 2016, TPUs have vastly improved, and, like the Samaritan from the famous TV series 'Person of Interest', they are learning to design and improve themselves. It is a virtuous, and maybe endless, cycle in which a machine learns to improve its own design, moving away from the constraints of human intelligence and engineering. Pictured below is Google's TPU.
Each TPU board has four chips and delivers 180 teraflops, that is, 180 trillion floating-point operations per second. As if this were not enough, Google has combined 64 of these TPUs using its patented high-speed network to create a machine learning supercomputer called a TPU pod. If the name sounds minimal, the performance does not: a pod is capable of 11.5 petaflops (peta denotes 10^15, a one followed by 15 zeroes). And Google has gone further still, combining several of these pods in its data centers to create a TPU cloud; see the picture below.
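The pod figure follows directly from the per-board number. A back-of-the-envelope check (the numbers are from Google's announcement; the variable names are ours):

```python
# Verify the TPU pod arithmetic quoted above.
TERA = 10**12   # "tera" = 1 followed by 12 zeroes
PETA = 10**15   # "peta" = 1 followed by 15 zeroes

tpu_board_flops = 180 * TERA   # one second-generation TPU board
pod_boards = 64                # boards networked into one TPU pod

pod_flops = tpu_board_flops * pod_boards
print(pod_flops / PETA)        # → 11.52 petaflops, rounded to 11.5 in the text
```

Sixty-four boards at 180 teraflops each is 11,520 teraflops, which is the 11.5 petaflops Google quotes.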
Google is now making all this computing available on the cloud to anyone interested in using a pod, through Google Compute. Of course, Google is not the first company to open up its gigantic processing power; IBM does it through Watson, and so does Amazon. But given the speed at which Google is innovating and developing processors like the TPU, it might leave both IBM and Amazon behind. Remember, Google's real innovation has been in hardware patents for high-end cloud computing: chips, servers, and networking for its own data centers.
This is an opportunity for Indian startups in the AI space, and even for the government, to use Google to build the next generation of products. Entry to high-end processing has just been made available to every developer who can dream up a challenging enough problem.
AI In Your Mobile
All this computing power will, of course, be leveraged to make applications like Google Photos smarter. One of the most used features on Facebook and Instagram is sharing photos with friends. Google has been unsuccessful in the social media space, but it is now using machine learning to help users share photos, even suggesting whom to share them with. New features will scan photos, identify the individuals and friends in them, and select the best-quality shots to share with friends and family within Google Photos. While facial recognition technology has improved on Facebook, with Google Photos you get the additional benefit of search, and of sharing without spending data on uploads. This will make Google Photos a serious competitor in the social sharing world.
In a way, Google is leveraging what it has most of: personal data. It is applying AI on top of this data to make its products richer. Google has search data, complete email conversations, photos, and location data. With Android running on the vast majority of smartphones, Google has enough data on mobile users. Its AI can predict what you need to do even before you do it.
It is going to use AI to help you make decisions. It can pick up your upcoming flight from Google Calendar, check the traffic from your location to the airport, and suggest a departure time that gets you there in time for the flight. This is no longer science fiction; Google has done it, and the possibilities are endless: combining multiple data streams of past and future to trigger decisions.
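The departure-time decision above can be sketched in a few lines. This is a minimal illustration of the idea, not Google's pipeline; the function name, the traffic figure, and the airport buffer are all hypothetical:

```python
# Hypothetical sketch: combine a calendar entry (the flight) with a
# traffic estimate to suggest when to leave. The real system would pull
# these inputs from Calendar and Maps; here they are hard-coded.
from datetime import datetime, timedelta

def suggest_departure(flight_time: datetime,
                      drive_minutes: int,
                      airport_buffer_minutes: int = 90) -> datetime:
    """Leave early enough to drive to the airport and still keep a check-in buffer."""
    return flight_time - timedelta(minutes=drive_minutes + airport_buffer_minutes)

# A 6:30 pm flight with a 45-minute drive and a 90-minute buffer:
leave_at = suggest_departure(datetime(2017, 5, 20, 18, 30), drive_minutes=45)
print(leave_at)   # → 2017-05-20 16:15:00
```

The interesting part in practice is not this subtraction but the plumbing: reading the flight from one data stream and the live traffic from another, which is exactly what Google's scale of data makes possible.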
Google can see and hear
Google has built up an enormous capability for searching images and videos. The real jump comes from combining this with machine learning and AI. Google Lens will now decipher images for you. For instance, text in an image in Chinese can not only be translated on your mobile but also searched for in English. The supercomputers behind Google have made a quantum leap by churning through and learning from billions of text translations, extending that learning into voice and visual translation.
Google Home
By combining the Google Assistant app with its smart speaker, Google Home, Google finally pips Amazon's Alexa, using your personal data to make conversations and responses more personal. You can use the speaker to make free calls anywhere in the world, hold a conversation, play music and more. Google Home also leverages AI and the same huge supercomputing prowess.
Interestingly, with AI first Google is trying to become more inclusive too: almost every product had an Apple (iOS) version ready or soon to be launched. More importantly, Google for Jobs was an interesting initiative linking job seekers to employers. It searches the databases of job sites like LinkedIn, Monster, and Glassdoor, which in the past were closed to Google's search bots but have now opened up, at least in the US.
AI will now be in our lives every time we pick up the phone. Better still, it is now an omniscient part of our lives. Google will ensure that.
Guest Author
K Yatish Rajawat is a digital strategist and policy commentator based in New Delhi; he tweets @yatishrajawat.