Draft
Statement of Principles on Facial Recognition Policy
I. Policymakers should avoid one-size-fits-all frameworks. Any framework should identify actual harms to consumers and be designed to protect against those harms. Prescriptive legislation should be avoided because it prevents the private sector from innovatively addressing public concerns about the technology. To the extent possible, policymakers should avoid delegating authority to regulatory agencies; when they do delegate, they should avoid vague standards and unfettered discretion and instead provide intelligible principles targeting actual harms to consumers. Safeguards against regulatory excess should include public records and other transparency measures, requiring that executive branch officials sign rules before they take effect, mandating cost-benefit analysis, and attaching sunset provisions to all new rules.
II. Policymakers should prefer voluntary codes of conduct, industry-driven standards, and individual empowerment to government regulation. The private sector is far more nimble than the government and can respond to the public’s concerns faster and more effectively than government regulators can.
III. Federal and state constitutions limit government use of facial recognition technology. Policymakers should ensure that government entities, especially law enforcement, only use facial recognition for legitimate, lawful and well-defined purposes, consistent with our constitutional framework, laws and regulations.
Law enforcement adoption of facial recognition technology poses unique challenges. Those challenges, however, should not result in an absolute bar on using the technology. Instead, policymakers need to carefully weigh concerns about constitutional and individual rights against the benefits of the technology. There are times when law enforcement agencies should be allowed to employ the technology for legitimate purposes, such as:
- When there is reason to believe the subject has committed or is committing a crime;
- To help identify an individual who may be a missing person, crime victim or witness to criminal activity;
- To help identify a deceased person;
- To help identify a person who is incapacitated or otherwise unable to identify themselves;
- To help identify an individual under arrest who does not have, or does not provide, valid identification;
- To help confirm the identity of individuals who are being released from correctional facilities to prevent accidental release;
- To help mitigate an imminent threat to public safety or significant threat to life, including acts of terrorism as defined by the Homeland Security Act of 2002.
Just as there are legitimate law enforcement uses of the technology, there are also circumstances in which law enforcement should never use it, for example:
- As positive identification of a suspect, or as the sole basis for an arrest;
- To conduct mass surveillance, which means the use of facial recognition tools to develop and store identities of persons in a public place when there is no reasonable suspicion to believe that they have engaged in criminal activity;
- In violation of an individual’s constitutional rights under the First, Fourth and Fourteenth Amendments, such as surveillance based solely on:
- Religious, political or social views or activities;
- Participation in lawful events;
- Race, ethnicity, citizenship, place of origin, age, disability, gender, gender identity, sexual orientation or any other classification protected by law against discrimination.
IV. Privacy protections are critical in deployment of facial recognition technology, as, in many cases, these tools create a link between a person’s facial appearance and personally identifiable information (PII). Innovators developing the technology respond favorably and swiftly to concerns regarding the collection and use of information. They should be allowed to do so. These innovators will craft tools and implement other changes designed to protect PII and respond to other privacy concerns in a far more effective and nimble way than the government ever could.
V. Transparency is the bedrock of any framework governing the use of facial recognition technology. Transparency is critical to security and privacy, as it helps build and maintain public trust. The government should be clear about when and for what purposes the technology will be used, and should establish standards governing the collection, processing, storage, and use of related data by government entities.
Every public-sector implementation of facial recognition technology should be accompanied by policies that are written in a clear and understandable manner, easily accessible to the public, and that provide a point of contact for inquiries. For search-based identification applications, each policy should describe who is authorized to use the system and under what circumstances, and should outline the role of human review, any privacy impact assessments, and the rules governing retention of files in the image repository and of search images.
VI. Human oversight and review are critical factors in identification processes aided by facial recognition technology. While facial recognition software automates image comparison and matching, it must not automate decision making without human oversight at a level appropriate to the application. Some applications require peer review of search results and conclusions.
VII. Facial recognition should only be used in ways and for purposes that are nondiscriminatory. There are legitimate concerns that some applications of facial recognition technology might negatively impact minorities. The purpose of using biometric technologies in safety and security applications is ultimately to better protect people from harm. Any significant bias in technology performance makes it harder to achieve this goal.
Given the unique needs of many public-sector applications, government entities should only purchase facial recognition technologies that perform well overall and across demographic groups and that are validated using sound, scientific methods.