A recent breach once again underscores the tension between age-verification requirements and data privacy in laws seeking to protect children online.
When I wrote a few months ago about the unintended consequences of the UK’s data-privacy law, I warned that the very effort to protect children online can compromise their data. A new example has now emerged: a third-party vendor used by Discord was breached, resulting in the leak of private user details, including government IDs. Discord is an online platform where individuals can message each other through text, voice, and video in different hosted communities. These can be publicly accessible or private, invite-only groups.
Ironically, age-verification requirements (particularly strict now with the UK's age-verification law in effect) were the reason government IDs were uploaded by Discord users in the first place. Users banned for age-related reasons submitted photos of IDs through an appeal process to verify they were of age to use the platform. Because a third party handled that process on the back end, Discord could claim it did not retain ID data longer than necessary, while the verification vendor had no such obligation. Thus, when this vendor's data was breached, government IDs were among the private details of users that were exposed.
This unfortunate debacle is timely, as Senators Josh Hawley and Richard Blumenthal are proposing national legislation to protect minors from AI chatbots. While harms from unmonitored use by minors are real, the bill would require ‘reasonable age verification’ by AI companies, including government IDs as one method of achieving it (this mirrors the language in Washington's SB 5708 from last session, which WPC's Mark Harmsworth pointed out had its own unintended consequences). Unfortunately, such requirements ultimately compel citizens to submit private data to entities that may not keep it secure. Once again, in the name of online safety, regulators risk the privacy of every online user.
Keeping children safe online is a laudable goal, particularly given the harm that has come to many who have used these technologies without guidance. Unfortunately, the age-verification requirements in laws intended to protect children not only prove ineffective in practice but also put users who were not previously at risk in a position where their personal data are more vulnerable than before. Rather than outsourcing disciplinary guidelines to government regulators or private companies, it is incumbent on parents to build strong relationships with their children, establish boundaries over how they consume online content, and educate them on the risks of engaging with AI and other digital platforms.
If we're giving the state a role to protect children, we should make sure the solution is effective, that the external effects aren't calamitous, and that we're not outsourcing parental roles. Age-verification regulations have not yet met that threshold.