The integration of generative AI tools is a strategic imperative for organisations in 2023. To explore this fast-moving space and its associated security challenges, BW Businessworld interviewed Nathan Wenzler, Chief Cybersecurity Strategist at Tenable. During the Q&A, Wenzler shared his insights on how organisations are embracing generative AI, the importance of controlled experimentation and the responsibilities involved in safeguarding end-users. Excerpts:
How are organisations strategising around leveraging generative AI tools such as ChatGPT?
Organisations today are shifting their strategies to incorporate generative AI tools like ChatGPT. Historically, technology adoption was driven by companies, but now end-users and customers play a more significant role, thanks to accessible technologies like cloud computing and SaaS platforms. Companies are embracing generative AI due to user demand, realising that resistance is futile, and organisations in both the public and private sectors have been quick to experiment with these technologies. However, the challenge lies in securing data and ensuring accuracy. While organisations are eager to adopt, they are struggling to find best practices for implementation, especially given the rapid pace of adoption of tools like ChatGPT.
Is it safe to experiment with strategies built on powerful technologies like generative AI, given the risks associated with hallucinations and privacy concerns?
Controlled experimentation is crucial. The challenge lies in educating employees, customers and users about the risks associated with generative AI systems. Establishing trust is vital, but skepticism should be encouraged, because people tend to trust generative AI results blindly. While experimentation is necessary to gauge accuracy, it should be controlled because of the risk of data breaches. Organisations should carefully select who conducts the experiments and emphasise skepticism and result validation. Controlling the dataset these tools draw on can further enhance trust and validation.
How can organisations ensure the protection of end-users who may not always verify outcomes from such technology, even if skepticism is encouraged and outcomes are verified at the organisational level?
In the technology and cybersecurity realms, experts often have a degree of paranoia due to their deep knowledge and constant vigilance. However, the average user does not share this expertise. While some experts may advocate intense vigilance, most users do not live in that world. It is crucial to encourage awareness, vigilance and skepticism without overwhelming users. It is a challenge to address the human and psychological aspects of cybersecurity, especially as AI systems become more integrated into natural language interfaces.
Does this shift the responsibility back to organisations to safeguard end-users?
The debate over the responsibilities of companies providing public-facing generative AI tools, like OpenAI's ChatGPT, is complex and philosophical. Organisations must make a business decision regarding the adoption of such tools. They need to assess how these tools can enhance efficiency and align with their risk tolerance. The responsibility lies in implementing the functionality in a way that aligns with the organisation's risk tolerance and objectives. Big companies like Microsoft and Google can choose their approaches, but organisations must decide what is right for their specific needs and data integrity, leading to different decisions and implementations.
Would Private LLMs be a good idea? Is it practical for small organisations to consider private LLMs despite their higher cost?
The cost of implementing private LLMs is falling rapidly, driven by growing demand and the increasing availability of open-source tools and models. Accessibility will become less of an obstacle over time. The real challenge lies in data management, integrity and protection. While smaller organisations might find data-related issues harder to handle, they need to make careful business decisions. Opting for public tools trained on large datasets, such as ChatGPT, can be easier but riskier. Building a safer, more trustworthy system on an organisation's own data and AI modelling is harder, but potentially the wiser choice for businesses.