A New Study on Privacy and Bias in Facial Recognition Technologies

Privacy Plus+

Privacy, Technology and Perspective

This week, we highlight a new facial recognition study released by the RAND Corporation, entitled “Face Recognition Technologies: Designing Systems that Protect Privacy and Prevent Bias.” A link to the study follows:

https://www.rand.org/content/dam/rand/pubs/research_reports/RR4200/RR4226/RAND_RR4226.pdf

The study is a welcome effort to synthesize the mass of literature on Facial Recognition Technologies (FRT) that has emerged over the last few years, especially concerning FRT’s twin problems of bias (over-prediction of false minority “hits” due to suboptimal design, training, operation, and data) and privacy protection. Being largely a “study on studies,” it is not an easy read, and it might have benefited from interviews with law-enforcement users and vendor/FRT designers.

Nevertheless, there are some helpful insights:

  • At the moment, state and local organizations seem to have better and more heterogeneous standards than federal authorities do;

  • There is little or no consensus between industry and advocacy groups about how common standards should be developed (or who should be in charge of developing them);

  • Human beings are still better than machines at identifying people, especially people of their own race;

  • Privacy-Enhancing Technologies could be combined with FRT advantageously;

  • Setting the threshold for “matches” is critical, so that the operator receives neither too many false positives (nabbing innocent people left and right) nor too many false negatives (so that the security system is a sieve), as the sketch after this list illustrates;

  • One size does not fit all: different use cases have different requirements, calling for different databases, “thresholds,” and training (e.g., school campuses have surprisingly different requirements from airports);

  • Some personal-information databases are much more sensitive than others, even where the data subjects have knowingly released certain information.  For example, use of driver’s-license or passport-photo databases for many FRT purposes may be accepted, while pictures of concealed-carry permittees may trigger strong protests; and

  • Especially in “some-to-many” comparisons, as in airports, authorities need a fast and efficient way to resolve false positives.
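
To make the “threshold” point concrete, below is a minimal numerical sketch in Python. Everything in it is invented for illustration (the score distributions, thresholds, and resulting error rates are hypothetical and do not come from the RAND study); it simply shows why lowering a match threshold multiplies false positives while raising it multiplies false negatives.

```python
# Hypothetical sketch: how a face-match threshold trades false positives
# against false negatives. All scores and distributions are invented for
# illustration; nothing here comes from the RAND study.
import random

random.seed(0)

# Simulated similarity scores in roughly [0, 1]: impostor pairs (different
# people) tend to score low, while genuine pairs (same person) score high.
impostor_scores = [random.gauss(0.35, 0.10) for _ in range(10_000)]
genuine_scores = [random.gauss(0.70, 0.10) for _ in range(10_000)]

def error_rates(threshold):
    """Return (false-positive rate, false-negative rate) at a threshold."""
    fp = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    fn = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return fp, fn

for t in (0.40, 0.50, 0.60, 0.70):
    fp, fn = error_rates(t)
    print(f"threshold={t:.2f}  false positives={fp:6.2%}  false negatives={fn:6.2%}")
```

Consistent with the study’s point that one size does not fit all, the right operating point on this tradeoff depends on the setting: an airport checkpoint and a school campus will tolerate very different error mixes.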

We are encouraged that the study seems to have been funded by the Department of Homeland Security (DHS), at least indirectly.  Although the study was not explicitly contracted by DHS, it says the research “was conducted using internal funding generated from operations” of RAND’s Homeland Security Research Division (HSRD) and HSRD’s Acquisition and Development Program (HSRD is one of DHS’s two federally funded R&D centers), and it says it is specifically “intended to help improve [DHS’s] acquisition and oversight of [FRTs] by describing [FRTs’] opportunities and challenges,” especially with respect to bias and privacy protection.

The fact that DHS is funding a study on bias and privacy protection in FRT, even indirectly, seems to us a positive step.  

---

Hosch & Morris, PLLC is a Dallas-based boutique law firm dedicated to data protection, privacy, the Internet and technology. Open the Future℠.
