The European Union Agency for Law Enforcement Cooperation (Europol) on Monday warned about the exploitation of AI systems, including OpenAI's wildly popular chatbot ChatGPT, by criminals.
Europol has also published its first Tech Watch Flash report, titled 'ChatGPT – The Impact of Large Language Models on Law Enforcement'.
“As the capabilities of LLMs such as ChatGPT are actively being improved, the potential exploitation of these types of AI systems by criminals provide a grim outlook,” Europol said in a web post.
The report provides an overview of the potential misuse of ChatGPT and offers an outlook on what may still be to come.
According to the agency, the crime areas of greatest concern identified by Europol's experts include fraud and social engineering, disinformation, and cybercrime.
In its report, Europol suggests that OpenAI's many safeguards – intended to protect users against sexual, hateful, violent or self-harm-promoting material – can be circumvented fairly easily through prompt engineering.
The agency noted that, given the complexity of these AI models, there is no shortage of workarounds discovered by researchers and threat actors.
Europol says the most common workarounds in the case of ChatGPT are: prompt creation (providing an answer and asking ChatGPT to supply the corresponding prompt); asking ChatGPT to give the answer as a piece of code, or pretending to be a fictional character talking about the subject; replacing trigger words and changing the context later; style/opinion transfers (prompting an objective response and subsequently changing the style or perspective it was written in); and creating fictitious examples that are easily transferrable to real events (e.g. by avoiding names, nationalities, etc.).
“Critically, the context of the phishing email can be adapted easily depending on the needs of the threat actor, ranging from fraudulent investment opportunities to business e-mail compromise and CEO fraud. ChatGPT may therefore offer criminals new opportunities, especially for crimes involving social engineering, given its abilities to respond to messages in context and adopt a specific writing style,” the Europol report noted.
The EU is currently finalising legislative efforts to regulate AI systems under the upcoming AI Act. While there have been some suggestions that general-purpose AI systems such as ChatGPT should be included as high-risk systems, and as a result meet higher regulatory requirements, uncertainty remains as to how this could practically be implemented.