A task force at the European Union’s privacy watchdog has determined that OpenAI’s efforts to reduce factual inaccuracies in its ChatGPT chatbot fall short of complying with EU data protection regulations.
Although OpenAI has taken measures to improve transparency and reduce the risk of misinterpretation, the task force asserts these steps do not fully satisfy the principle of data accuracy mandated by EU law.
The task force, composed of national privacy regulators from across Europe, was established following concerns raised by national authorities, with Italy's regulator taking a leading role. While various investigations by national privacy watchdogs are still in progress, the report released on Friday represents a preliminary consensus among these authorities.
The report stressed the limitations of ChatGPT's probabilistic model, which can produce biased or fabricated outputs. It cautioned that users may mistakenly accept these outputs as factually accurate, even when they are not, thereby increasing the risk of misinformation.
The task force underlined that although OpenAI's transparency measures are beneficial, they are insufficient to ensure compliance with the EU's stringent data accuracy requirements.