In a reply to a question in the Rajya Sabha, Minister of State for Electronics and Information Technology Rajeev Chandrasekhar this week said that artificial intelligence (AI) is expected to contribute an additional USD 957 billion to India’s economy by 2035. With the Indian government getting behind AI through the announced establishment of multiple AI Centres of Excellence (CoEs) in FY2023-24, the importance of the technology is now evident.
While AI has been leveraged by the leading enterprises of the world for years now, OpenAI’s ChatGPT brought the technology to the common man. Suddenly, it seems like everyone is enthused by the idea of AI – beyond the tech sector itself. But are humans largely redundant in the upcoming world of AI? Or do they have a role to play as the technology continues to get stronger? BW Businessworld asked Dr Amith Singhee, Director, IBM Research India and CTO, IBM India/South Asia. Read on for excerpts from the interaction.
Edited Excerpts:
Does AI still need humans to stick around? Do we need to keep it in check?
Definitely. Let's look at it in a couple of ways. One can be around ‘bias’. So, if you throw data at AI that's out there on the internet and it falls under ‘hate, abuse and profane language’, the AI will see it and learn it. And if you don't have some guardrails around that, then it can generate anything.
Second is what we call ‘Content Grounded AI’. This means that you, as a business user, decide the content that is important. For instance, consider a bank that wants to deploy AI for its retail customers. It wants to ground the AI on content curated by the bank itself, but it does not want to keep testing the AI to get things right. We do research on training mechanisms to make sure that the AI takes the core concepts that are in the content and stays consistent with what the content is trying to say, rather than bringing in some kind of arbitrary concepts.
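To make the idea of content grounding concrete, here is a minimal sketch in Python. It is not IBM's implementation: the curated documents, the keyword-overlap retriever and the prompt wording are all hypothetical stand-ins, and a production system would add proper retrieval, evaluation and guardrails on top.

```python
# Minimal sketch of "content-grounded" generation: the model is only allowed to
# answer from passages that the business has curated, not from the open internet.
# The documents and the scoring below are illustrative placeholders.

CURATED_DOCS = {
    "savings_rates": "Savings accounts earn 3.5% interest, credited quarterly.",
    "card_block": "A lost debit card can be blocked instantly from the mobile app.",
    "kyc_update": "KYC details can be updated at any branch with a valid ID proof.",
}

def retrieve(query: str, docs: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank curated passages by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        docs.values(),
        key=lambda text: len(q_terms & set(text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(query: str) -> str:
    """Assemble a prompt that instructs the model to stay within the curated content."""
    context = "\n".join(f"- {p}" for p in retrieve(query, CURATED_DOCS))
    return (
        "Answer using only the passages below. "
        "If the answer is not in the passages, say you do not know.\n"
        f"Passages:\n{context}\n"
        f"Question: {query}\nAnswer:"
    )

if __name__ == "__main__":
    # The resulting prompt would be sent to whichever model the bank deploys.
    print(build_grounded_prompt("How do I block a lost debit card?"))
```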
So, do these measures come from IBM or from the entities that are applying AI models?
A company like IBM can provide a platform or a workbench to the bank. Today, the bank would have to do a tremendous amount of work to figure out how to set up GPUs, how to train these models, how to write the code and how to build the data sets. So, a company like IBM provides them a foundation model stack. If they deploy the stack, they have to procure the hardware, but the stack gives them all the tools they need for training these models, for tracking the data, for testing and for trustworthiness. So, ultimately the bank using it has full control over what it builds, what data it uses and how it uses it.
Being at the forefront of AI innovation, what kind of guardrails is IBM setting? Is there something in the works?
That’s been part of our journey from day one with AI. We have an AI Ethics Board. It includes leaders with a deep understanding of both AI and ethics, and they help us as a company navigate the strategic choices we make. We've had this for years. Additionally, we implement guardrails for ourselves. In terms of products and offerings that are part of our Watson portfolio, we have tools to help our clients put guardrails in place. IBM also has OpenScale, and we put out a lot of work in open source – for example, our AIX360 toolkit. These are open source projects that we donated to the community, which help anyone take an AI model and test it quantitatively for explainability, bias and fairness.
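As an illustration of what testing an AI model quantitatively for bias and fairness means, the sketch below computes two standard group-fairness metrics by hand. This is not the AIX360 or Watson OpenScale API – open-source toolkits such as AI Fairness 360 implement these and many more metrics – and the loan-approval predictions used here are made up.

```python
# Illustrative sketch of a quantitative bias check of the kind that open-source
# fairness toolkits automate. The data below is hypothetical; a real audit would
# run on a model's actual predictions and the protected attributes of its users.

def selection_rate(predictions: list[int], groups: list[str], group: str) -> float:
    """Share of favourable outcomes (prediction == 1) within one group."""
    outcomes = [p for p, g in zip(predictions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

def fairness_report(predictions: list[int], groups: list[str],
                    privileged: str, unprivileged: str) -> dict[str, float]:
    """Two standard metrics: statistical parity difference and disparate impact."""
    priv = selection_rate(predictions, groups, privileged)
    unpriv = selection_rate(predictions, groups, unprivileged)
    return {
        "statistical_parity_difference": unpriv - priv,  # ideally close to 0
        "disparate_impact": unpriv / priv,               # ideally close to 1
    }

if __name__ == "__main__":
    # Hypothetical loan-approval predictions for two demographic groups A and B.
    preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    print(fairness_report(preds, groups, privileged="A", unprivileged="B"))
```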
We also work a lot with industry bodies and governments. In India, we are engaged with NASSCOM. Just last year, NASSCOM released a responsible AI toolkit, which had an architecture guide and some best practices. Many colleagues from my team were involved in the process of creating and publishing it. This way, anybody looking to get into the AI domain has a reference for how to ensure that they are creating AI that is trustable and trustworthy from all angles. We've also been engaged with parts of the government, such as the telecom department, which are looking to create standards and best practices for their domains.
What kind of applications do you see Generative AI produce for enterprises? What would be the focus areas for a company like IBM?
Apart from Content Grounded AI, another area of generation is code. We are very excited about the work we are doing with Red Hat. Red Hat has a very vibrant developer community around its Ansible and OpenShift platforms. We also have a very vibrant community around our IBM Z mainframe, which is used by most of the large companies of the world. So, for these developer communities we are able to generate source code, design architectures, templates and documentation. Geospatial is another application we are looking at.