As we enter 2025, the AI landscape is evolving dynamically, with a big shift toward AI utility. As organisations race to integrate AI into their operations, the focus shifts from merely adopting AI to the tangible value it brings. To fully capitalise on AI’s potential, enterprises must clarify their goals, whether streamlining access to information, accelerating strategic decisions, or boosting productivity. In the year ahead, groundbreaking advancements in AI are expected to unlock applications previously unimaginable. However, alongside technological progress, organisations must also address pressing legal and ethical challenges that could shape the future of AI innovation.
Leaders will be called upon to innovate and accelerate in an AI world. A CEO’s secret weapon? Curiosity.
What’s good for the goose is good for the gander. CEOs and leaders must get smart about adopting AI tools for their own use, as well as to empower their workers. More information is at our fingertips than ever before, and AI will be critical in helping CEOs make more effective use of the data and world of information around us. My secret weapon and advice for CEOs harnessing AI is simple: endless curiosity. CEOs and leaders must take the time to try different tools, learn their different capabilities — like the nuances between ChatGPT, Gemini, and others — and understand the long-term impact they can bring to their organisations. The more comfortable leaders feel with different kinds of AI models, and the more we understand AI’s strengths and weaknesses, the better we will collectively be at dealing with the world ahead of us.
2025 is the year CEOs and boards will approach AI with clear-eyed utility, not just awe or fear.
If 2024 was marked by an AI gold rush, 2025 will be defined by AI utility. Organisations have been rapidly adopting AI to stay competitive and seize new opportunities. However, to truly benefit from this technology, the conversation must shift from whether an organisation uses AI to the value it aims to achieve with it. Enterprises must start to identify their goals for AI adoption — whether that be getting the information they need faster, accelerating strategic decision-making, boosting productivity, or something else. While not every application will be AI-powered, those that incorporate language models, knowledge repositories, and human input will evolve and improve over time. We can expect to see practical AI applications in unexpected areas, making it crucial for CEOs and boards to identify where their investments will yield the highest return.
AI will force enterprise leaders to redefine employee incentives.
As enterprises set lofty goals to deploy AI across their business, CEOs and the wider leadership team must reevaluate existing performance evaluation criteria and ensure that their wider incentive systems align. For example, if a software engineer’s performance is measured by the amount of code they write themselves per day, they would be less inclined to lean on an AI copilot to do the work for them, because it wouldn’t count toward their daily total. Delegating coding work to AI could negatively impact their “performance,” even if it enables them to be more productive and strategic within their role.
As leaders continue to set new AI policies and increase employee adoption to accelerate efficiencies, they must create new incentives that align with these goals. Leaders must invest in AI upskilling from the top down, rewarding both adoption and strategic outputs to motivate employees. Fostering a culture that values innovation and collaborative problem-solving will be crucial for maximising the benefits of AI. In addition, by aligning performance metrics with the strategic use of AI, companies can drive sustainable growth and ensure that their workforce remains engaged and empowered.
Companies will begin to use their own massive data to get value from AI but will demand reliability.
For the most part, early applications of AI have just used foundation models trained on massive amounts of public data. With sophisticated RAG applications becoming mainstream and the rapid maturity of products to produce structured data, applications that leverage the massive troves of private enterprise data will begin to create true value for the enterprise. But the bar for these applications will be high: enterprises will demand reliability from AI applications, not just the whiz-bang demo.
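The RAG pattern described above can be sketched in a few lines. This is a minimal, illustrative sketch only: the retriever is a toy bag-of-words scorer standing in for vector embeddings, the language-model call is omitted, and all document text and function names are hypothetical, not any specific product’s API.

```python
# Toy sketch of retrieval-augmented generation (RAG) over private
# enterprise documents: retrieve the most relevant documents, then
# ground the model's prompt in them. Illustrative only.
from collections import Counter
import math

def score(query: str, doc: str) -> float:
    """Cosine similarity of bag-of-words vectors (a stand-in for embeddings)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    overlap = sum(q[t] * d[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return overlap / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the model in retrieved context; the actual LLM call is omitted."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical private enterprise data a foundation model never saw.
docs = [
    "Q3 revenue grew 12% driven by enterprise subscriptions.",
    "The cafeteria menu rotates weekly.",
    "Enterprise churn fell to 4% after the support revamp.",
]
print(build_prompt("What drove enterprise revenue growth?", docs))
```

Grounding answers in retrieved enterprise documents, rather than the model’s training data alone, is also what makes the reliability bar reachable: the sources behind each answer can be inspected and audited.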
AI advancements will increase regulatory scrutiny, demanding that industry players and governments join forces.
In the coming year, AI breakthroughs will unlock entirely new applications that were previously beyond our imagination. For example, the use of real people’s likenesses to generate deceptive images, audio, video, and more is becoming increasingly common. In situations like these, where existing laws and regulations did not foresee such scenarios, there needs to be industry self-regulation or government regulation. Technology itself doesn’t have good or bad intentions built into it. That's why we need to be open about the role that different people — not just the industry creating the tools, but also governments, including regulators and other public bodies — have in shaping the AI debate. There is absolutely a risk in regulating a field too early. However, it's very hard to take a blanket ‘no laws, no regulation’ approach when we have technologies that are so broadly applicable. We must collectively make sure that both people and our systems are safe through smart regulation and legislation.
AI companies must play nice with publishers and content providers to safeguard the future of AI development.
Technology aside, organisations must consider the legal and ethical challenges that could hinder the future of AI innovation. Intellectual property, and the rights associated with AI inputs and outputs, is the top concern. In the past year, we’ve seen publishers and content providers grow increasingly wary of AI companies that are scraping their data to train models without permission or compensation. As a result, publishers are beginning to place restrictions on their content to shield it from unauthorised AI use. This represents a serious problem for the countless organisations that rely on this data to train AI. Without ample stores of high-quality data, AI companies will be left unable to refine and develop their offerings. However, there is a win-win solution on the table: AI companies will need to enter licensing agreements with content providers to ensure the providers are compensated for the extremely valuable data they offer. This must happen soon, before it’s all a tangle of lawsuits and blocked AI crawlers. It’s time for AI companies to stop taking data providers for granted, and start investing in the resources that are critical to their own success.