Just a few short years ago, access to artificial intelligence (AI) was limited to a select few. Fast forward to today, and AI has become an integral part of our daily lives, readily available at our fingertips. It now plays a role in tasks ranging from helping schoolchildren with their homework to autonomously driving cars and even generating intricate artistic creations. The field has witnessed exponential growth, driven by intense competition among tech companies racing to push the boundaries of artificial intelligence and surpass human capabilities across a wide array of domains.
This rapid evolution is evident as AI assistants are increasingly harnessed to automate programming tasks and streamline the collection of vast datasets, in turn contributing to the ongoing enhancement of AI systems themselves. But while AI holds immense potential for advancement, it also brings significant risks that require careful consideration.
According to a McKinsey Global Institute report, artificial intelligence is projected to contribute around 16 per cent, or roughly USD 13 trillion, to the global economy by 2030, with the potential to increase global Gross Domestic Product (GDP) by as much as 26 per cent.
Potential Risk Of AI
A paper co-authored by Geoffrey Hinton, Andrew Yao, Daniel Kahneman, Dawn Song, Yuval Noah Harari, and others, issued a week before the international AI Safety Summit in London, warned that rapidly advancing AI could pose significant societal risks. Autonomous AI systems, if not carefully designed and controlled, could amplify social injustices, threaten stability, enable criminal activities, and exacerbate global inequalities.
The paper also stressed that the risks grow as AI systems gain autonomy, which can potentially lead to unintended consequences. Advanced autonomous AI systems could pursue undesirable goals, exploit human trust, and even take control of critical infrastructure. Without proper oversight and caution, the authors cautioned, we may lose control over AI systems, leading to irreversible consequences, including large-scale harm and potential extinction.
A report from Goldman Sachs suggests that AI has the potential to substitute approximately 300 million full-time jobs.
The research paper called for these challenges to be addressed directly. Because ongoing and emerging risks share commonalities, investments in governance frameworks and AI safety can mitigate a range of threats at once. The authors strongly advocated that both tech companies and "public funders" dedicate at least one-third of their AI R&D funding to ensuring the safe and ethical deployment of AI systems.
"Given the stakes, we call on major tech companies and public funders to allocate at least one-third of their AI R&D budget to ensuring safety and ethical use, comparable to their funding for AI capabilities."
AI Governance
As a path forward to leveraging AI's potential while reducing its risks, the paper called for national and international rules to prevent the misuse of AI. The fear cited in the paper was that, absent such rules, companies and countries might prioritise profit over safety and cede too much power to AI systems. It emphasised that countries should come together to forge global agreements regulating the most powerful and risky AI systems.
To regulate AI effectively, the paper stressed protecting whistleblowers who report problems and monitoring AI development, with rules differentiated according to the capabilities of the AI in question. It also emphasised setting safety standards, holding developers accountable, and encouraging investment in safety. Apart from regulation by the state, the paper urged AI developers to commit to self-regulation.