Britain's data watchdog, the Information Commissioner's Office (ICO), has expressed concerns about Snapchat's artificial intelligence chatbot, "My AI," and its potential privacy implications for children and other users. The ICO said the US social media company may have failed to adequately assess the privacy risks associated with the chatbot before its launch in April.
The ICO's preliminary investigation findings have raised red flags over Snapchat's handling of user data, particularly that of children. While this does not necessarily indicate a breach of British data protection law, it does signal that the ICO is closely monitoring the situation. The regulator has not ruled out issuing an enforcement notice against Snapchat, which could result in a ban on "My AI" in the UK.
In response to the ICO's concerns, Snapchat has affirmed that "My AI" underwent a rigorous legal and privacy review before being rolled out to the public. The company says it is committed to working with the ICO to ensure its risk assessment procedures meet the regulator's standards and expectations for user privacy.
The ICO's investigation centres on how "My AI" handles the personal data of Snapchat's approximately 21 million users in the UK, including children aged 13 to 17. The chatbot, powered by OpenAI's ChatGPT, has become a prominent example of generative AI and has prompted discussions among policymakers worldwide about the need for regulations addressing the privacy and safety concerns such technology raises.