Government must stop restricting website access with laws requiring age verification.

Some advocates of these censorship schemes argue we can nerd our way out of the many harms they cause to speech, equity, privacy, and infosec. Their silver bullet? “Age estimation” technology that scans our faces, applies an algorithm, and guesses how old we are – before letting us access online content and opportunities to communicate with others. But when confronted with age estimation face scans, many people will refrain from accessing restricted websites, even when they have a legal right to use them. Why?

Because quite simply, age estimation face scans are creepy AF – and harmful. First, age estimation is inaccurate and discriminatory. Second, its underlying technology can be used to try to estimate our other demographics, like ethnicity and gender, as well as our names. Third, law enforcement wants to use its underlying technology to guess our emotions and honesty, which in the hands of jumpy officers is likely to endanger innocent people. Fourth, age estimation face scans create privacy and infosec threats for the people scanned. In short, government should be restraining this hazardous technology, not normalizing it through age verification mandates.

Error and discrimination

Age estimation is often inaccurate. It’s in the name: age estimation. That means these face scans will regularly mistake adults for adolescents and wrongfully deny them access to restricted websites. Of course, they will also sometimes mistake adolescents for adults.
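
To see why even a modest error rate locks real adults out, here’s a rough back-of-the-envelope sketch in Python. The numbers are assumptions for illustration only – an estimator whose error is roughly normal with a 3-year standard deviation, in front of a site that blocks anyone estimated to be under 18 – not measurements of any particular product.

```python
# Back-of-the-envelope illustration (assumed error model, not a real product's stats):
# an age estimator with normally distributed error (standard deviation of 3 years),
# gating a website that blocks anyone estimated to be under 18.
from math import erf, sqrt

def p_estimated_under(cutoff: float, true_age: float, sigma: float = 3.0) -> float:
    """Probability the estimated age falls below `cutoff` for someone of `true_age`."""
    z = (cutoff - true_age) / sigma
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF

for age in (19, 21, 25):
    p = p_estimated_under(18, age)
    print(f"true age {age}: ~{p:.0%} chance of being flagged as under 18")

# true age 19: ~37% chance of being flagged as under 18
# true age 21: ~16% chance of being flagged as under 18
# true age 25: ~1% chance of being flagged as under 18
```

Under those assumed numbers, a large share of adults in their late teens and early twenties would be wrongly turned away at the checkpoint.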

Age estimation is also discriminatory. Studies show face scans are more likely to err when estimating the age of people of color and women. This means that, as a tool of age verification, these face scans will have an unfair disparate impact.

Estimating our identity and demographics

Age estimation is a tech sibling of face identification and of the estimation of other demographics. To users, all of these face scans look the same, and we shouldn’t allow any of them to become a normal part of the internet. When we submit to a face scan to estimate our age, a less scrupulous company could flip a switch and use that same face scan, plus a slightly different algorithm, to guess our name or other demographics.
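
Here’s a toy sketch of why that switch is so easy to flip. Every name in it is invented for illustration (the embedding function, the “heads,” the gallery), and no vendor’s real pipeline looks like this toy; the point is only the shared structure: the hard part is turning a face scan into a numeric embedding, and what gets bolted on afterward (an age guesser or an identity matcher) is a comparatively small, swappable piece.

```python
# Toy illustration of a shared face-scan pipeline; all names are invented.
import numpy as np

rng = np.random.default_rng(0)

def extract_embedding(face_image: np.ndarray) -> np.ndarray:
    """The shared, expensive step: turn a face scan into a fixed-length feature vector.
    (Real systems use a deep network; this toy just downsamples and normalizes.)"""
    vec = face_image[::8, ::8].astype(float).flatten()
    return vec / (np.linalg.norm(vec) + 1e-9)

class AgeHead:
    """A small head bolted onto the embedding to guess an age."""
    def __init__(self, dim: int):
        self.weights = rng.normal(size=dim)

    def predict(self, embedding: np.ndarray) -> float:
        return 18 + 10 * abs(float(self.weights @ embedding))

class IdentityMatcher:
    """A different head over the *same* embedding: nearest-neighbor face identification."""
    def __init__(self, gallery: dict[str, np.ndarray]):
        self.gallery = gallery  # name -> previously enrolled embedding

    def predict(self, embedding: np.ndarray) -> str:
        return min(self.gallery, key=lambda n: np.linalg.norm(self.gallery[n] - embedding))

# One face scan, submitted only to prove the visitor's age...
face_image = rng.integers(0, 256, size=(128, 128))
embedding = extract_embedding(face_image)
print("estimated age:", round(AgeHead(embedding.size).predict(embedding), 1))

# ...but the same embedding, fed to a different head, names the person --
# if their face was ever enrolled in someone's gallery.
gallery = {
    "stranger": extract_embedding(rng.integers(0, 256, size=(128, 128))),
    "the visitor": embedding,
}
print("matched identity:", IdentityMatcher(gallery).predict(embedding))
```

Nothing about the scan itself tells the user which of those heads is actually being run.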

Some companies are in both the age estimation business and the face identification business.

Other developers claim they can use age estimation’s underlying technology – application of an algorithm to a face scan – to estimate our gender (like these vendors) and our ethnicity (like these vendors). But these scans are likely to misidentify the many people whose faces do not conform to gender and ethnic averages (such as transgender people). Worse, powerful institutions can harm people with this technology. China uses face scans to identify ethnic Uyghurs. Transphobic legislators may try to use them to enforce bathroom bans. For this reason, advocates have sought to prohibit gender estimation face scans.

Estimating our emotions and honesty

Developers claim they can use age estimation’s underlying technology to estimate our emotions (like these vendors). But this will always have a high error rate, because people express emotions differently, based on culture, temperament, and neurodivergence. Worse, researchers are trying to use face scans to estimate deception, and even criminality. Mind-reading technologies have a long and dubious history, from phrenology to polygraphs.

Unfortunately, powerful institutions may believe the hype. In 2008, the U.S. Department of Homeland Security disclosed its efforts to use “image analysis” of “facial features” (among other biometrics) to identify “malintent” of people being screened. Other policing agencies are using algorithms to analyze emotions and deception.

When police technology erroneously identifies a civilian as a threat, many officers overreact. For example, automated license plate reader (ALPR) errors repeatedly prompt police officers to draw guns on innocent drivers. Some government agencies now advise drivers to keep their hands on the steering wheel during a traffic stop, to reduce the risk that the driver’s movements will frighten the officer. Soon such agencies may be advising drivers not to roll their eyes, because the officer’s smart glasses could misinterpret that facial expression as anger or deception.

Privacy and infosec

The government should not be forcing tech companies to collect even more personal data from users. Companies already collect too much data and have proved they cannot be trusted to protect it.

Age verification face scans create new threats to our privacy and information security. These systems collect a scan of our face and guess our age. A poorly designed system might store this personal data, and even correlate it to the online content that we look at. In the hands of an adversary, and cross-referenced to other readily available information, this information can expose intimate details about us. Our faces are unique, immutable, and constantly on display – creating risk of biometric tracking across innumerable virtual and IRL contexts. Last year, hackers breached an age verification company (among many other companies).

Of course, there are better and worse ways to design a technology. Some privacy and infosec risks might be reduced, for example, by conducting face scans on-device instead of in-cloud, or by deleting everything immediately after a visitor passes the age test. But lower-risk does not mean zero-risk. Clever hackers might find ways to breach even well-designed systems, companies might suddenly change their systems to make them less privacy-protective (perhaps at the urging of government), and employees and contractors might abuse their special access. Numerous states are mandating age verification with varying rules for how to do so; numerous websites are subject to these mandates; and numerous vendors are selling face scanning services. Inevitably, many of these websites and services will fail to maintain the most privacy-preserving systems, because of carelessness or greed.
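
As a concrete (and deliberately oversimplified) illustration of that lower-risk pattern, here is a short Python sketch. The function names are hypothetical stand-ins, not any vendor’s real API, and real on-device systems are far more complicated; the point is only the data flow: the scan stays in memory, and the only thing that leaves is a yes-or-no answer.

```python
# Minimal sketch of the lower-risk pattern: estimate on-device, return only a
# pass/fail, and retain nothing. All names are hypothetical stand-ins.

def estimate_age_from_scan(face_scan: bytes) -> float:
    """Placeholder for an on-device model; returns a fixed value so the sketch runs."""
    return 30.0

def check_age_on_device(face_scan: bytes, threshold: int = 18) -> bool:
    """Return only a yes/no answer; never store, log, or transmit the scan."""
    try:
        estimated_age = estimate_age_from_scan(face_scan)  # runs locally, in memory
        return estimated_age >= threshold
    finally:
        del face_scan  # the scan is never written to disk or sent to a server

print(check_age_on_device(b"\x00" * 1024))  # True under the placeholder model
```

And as noted above, even this pattern is not zero-risk: nothing stops a later update from quietly retaining or uploading the scan, and the person standing at the checkpoint has no way to verify which version they are facing.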

Also, face scanning algorithms are often trained on data that was collected through questionable privacy practices, whether from users who gave only murky consent or from non-users who gave none at all. The government data sets used to test biometric algorithms sometimes come from prisoners and immigrants.

Most significant here: when most people arrive at most age verification checkpoints, they will have no idea whether the face scan system has minimized the privacy and infosec risks. So many visitors will turn away, and forgo the content and conversations available on restricted websites.

Next steps

Algorithmic face scans are dangerous, whether used to estimate our age, our other demographics, our name, our emotions, or our honesty. Thus, EFF supports a ban on government use of this technology, and strict regulation (including consent and minimization) for corporate use.

At a minimum, government must stop coercing websites into using face scans as a means of complying with censorious age verification mandates. Age estimation does not eliminate the privacy and security issues that plague all age verification systems. And these face scans cause many people to refrain from accessing websites they have a legal right to access. Because face scans are creepy AF.