At the Collision conference in Toronto, University of Toronto professor Geoffrey Hinton, known as the "Godfather of AI," raised alarming concerns regarding the rapid advancement of artificial intelligence (AI) and its potential consequences.
Having recently left Google so he could speak freely about the field he helped pioneer, Hinton expressed unease at the surge of generative AI systems like ChatGPT and Bing Chat, viewing them as signs of unchecked and potentially dangerous acceleration in development.
While AI was being touted as a solution for applications ranging from leasing to shipping, Hinton was sounding the alarm. He questioned whether good AI would triumph over its harmful counterparts and cautioned that ethical adoption of AI might come at a high cost. Hinton stressed that AI is only as good as the people who create it, and warned that inferior technology could still prevail. He highlighted the difficulty of stopping the military-industrial complex from building battle robots, envisioning a scenario in which companies and armies would "love" wars fought with easily replaceable machine casualties.
Furthermore, Hinton expressed concerns about the implications of large language models like OpenAI's GPT-4, which could lead to significant productivity increases. He worried that the ruling class might exploit this advancement to further enrich themselves, exacerbating the existing wealth gap and perpetuating societal inequalities.
Reiterating his previously stated views, Hinton underscored the existential risks posed by AI. He cautioned that if artificial intelligence were to surpass human intelligence, there would be no guarantee that humans would remain in control. Hinton believed that society should take these threats seriously, emphasising that they are not merely science fiction. He suggested that the potential dangers of AI might only be recognised after witnessing the catastrophic impact of unrestrained killer robots.
Highlighting existing problems, Hinton drew attention to issues of bias, discrimination, misinformation, and mental health concerns associated with AI. Biased training data can produce unfair outcomes, and recommendation algorithms can create echo chambers that reinforce misinformation. Hinton expressed doubt about the ability to catch every false claim but emphasised the importance of labelling misinformation as such.
Despite his concerns, Hinton was not despondent about AI's impact and acknowledged its potential benefits, though he cautioned that healthy and beneficial use of AI might come at a steep price. He proposed empirical work to understand the potential pitfalls of AI and prevent it from seizing control. He suggested that addressing bias is already feasible, and emphasised the need for changes in company policy to combat echo chambers and promote responsible AI usage.
When asked about job losses due to automation, Hinton suggested that addressing the resulting inequality would require a more socialist approach. He encouraged individuals to pursue careers resilient to changing times, citing plumbing as an example, and implied that society would need to make broad changes to adapt to the impact of AI.