In its new policy brief on biometric technologies, ARTICLE 19 seeks to contribute to discussions surrounding the development and deployment of biometric technologies and their impact on freedom of expression and human rights.
Due to the increasing availability of large datasets, lower costs and improvements in machine learning, the development and deployment of biometric technologies have increased rapidly. Their deployment has been further accelerated by the tech-solutionist response to the COVID-19 pandemic. Individuals’ human rights have been all but forgotten in this discussion, when they should be at the heart of it.
ARTICLE 19 is therefore calling for a moratorium on the development and deployment of all biometric technologies until vital human rights safeguards are in place. We also call for a complete ban on biometric mass surveillance in publicly accessible spaces, a total ban on emotion recognition technologies, and greater human rights protections in the design, development and use of biometric technologies.
What are biometric technologies?
Biometric technologies have a wide scope. A variety of these technologies are used to gather and analyse biometric data – personal data relating to the physical, physiological or behavioural characteristics of a person, which enable the unique identification of that person, such as DNA, fingerprints, voice patterns or cardiac signatures. One of the most commonly known forms of biometric technology is facial recognition.
Why is there an increase in the use of biometric technologies?
Generally the deployment of these technologies is justified by two narratives:
- The first is the protection of national security, for counter-terrorism measures, crime prevention or control and public safety.
- The second is that these technologies can be used by public authorities for the delivery of public services, or by private actors, for example in “smart cities” projects, public transport systems, and access to physical and online spaces.
It is claimed that the use of biometric technologies will bring significant benefits to the public, from increased security and reduced crime to lower costs and greater convenience. However, most of these claims are currently unproven, and they fail to weigh the cost to individuals’ human rights. The public should not be forced to forfeit their human rights in exchange for claims of improved security or convenience.
Equally, States must not abdicate their responsibility to abide by human rights standards and the principles of legitimacy, necessity and proportionality when they deploy a biometric system.
How do biometric technologies impact our human rights?
The use of biometric technologies considerably infringes on our human rights. Of particular concern is the fact that many of these technologies are being used without sufficient, up-to-date legal frameworks in place to protect rights and regulate their use, and without regard for the principles of necessity and proportionality, which are central to the protection of the rights to freedom of expression and privacy. To meet these principles, the technology must be the only way to achieve the purported legitimate aim, and the least invasive way to do so – often this is not the case.
In addition, the creation and storage of massive databases of deeply personal biometric data creates serious concerns for individuals’ privacy, and can entrench biases and discrimination, as well as allowing for potential ‘mission creep’, where data is used for a purpose different from that originally agreed.
ARTICLE 19 has three key concerns about the impact of biometric technologies on freedom of expression in particular:
- Biometric mass surveillance has a chilling effect on freedom of expression.
Mass surveillance is the indiscriminate surveillance of the public – for example the use of facial recognition technology through cameras in subways, shopping centres, and streets. If biometric technologies such as facial recognition technology are used to identify individuals in public or publicly accessible spaces, this affects the ability of individuals to remain anonymous and to communicate anonymously in those spaces. This has been shown to impact people’s behaviour, for example by deterring them from participating in public assemblies, or from expressing their ideas or religious beliefs in public.
- This chilling effect is more severe for journalists, activists, political opponents and minority groups.
The use of these technologies can be particularly damaging for journalists, human rights defenders, and those belonging to minority groups at risk of discrimination. The technologies can be used to target and monitor specific categories of people, track their behaviour and profile them based on their characteristics or behaviour. This can have a chilling effect, preventing journalists from carrying out their work freely, or discouraging activists or political opponents from organising or joining protests.
- The deployment and use of these technologies is shrouded in secrecy.
People have a right to know what information is being gathered about them and how it is being used, but there is currently no sufficiently accessible way of finding out who is developing these technologies, or how and why they are being deployed. Public-private partnerships and contracts with governments are often not disclosed, and efforts to obtain information on government use of these technologies often face barriers. This denies the public, including journalists and civil society, their right to information and their ability to report on and scrutinise the use of biometric technologies in society.
Case study: Facial recognition
Facial recognition technology involves processing digital images of people’s faces to identify them, verify their identity against existing data, or assess characteristics such as age, race and gender. Its ability to make some of these assessments accurately is unproven, but it raises significant human rights concerns regardless. Facial recognition is often used without individuals’ knowledge or consent.
Why should we worry about it?
Facial recognition technology is extremely invasive. People are often subjected to it without their consent, or even their knowledge, and there is little transparency about its deployment and accuracy, leaving individuals’ personal data open to abuse and misuse.
It can have significant impacts on free expression and other rights.
How can we protect human rights in the use of biometric technologies?
ARTICLE 19 believes that governments and companies should adopt a human rights-based approach to the design, development and use of biometric technologies.
We are calling for:
A ban on biometric mass surveillance in publicly accessible spaces
States should ban the indiscriminate and untargeted use of biometric technologies to process biometric data in public and publicly accessible spaces, both offline and online. States should also cease all funding for biometric processing programmes and systems that could contribute to mass surveillance in public spaces.
A ban on emotion recognition technologies
By design, emotion recognition technologies are fundamentally flawed: they are based on discriminatory methods that researchers within the fields of affective computing and psychology contest. They can never meet the narrowly defined tests of necessity, proportionality, legality, and legitimacy.
States should establish international norms that ban the conception, design, development, deployment, sale, export, and import of these technologies, in recognition of their fundamental inconsistency with human rights.
Respect for the principles of legitimacy, proportionality, and necessity in the design, development and use of biometric technologies
Both States and private actors should perform an adequate case-by-case assessment of the legitimacy, proportionality and necessity of the use of biometric technologies.
States should ensure that neither they nor private actors ever use biometric technologies to target those individuals or groups that play significant roles in promoting democratic values, for instance journalists and activists.
The adoption of adequate legislative frameworks for the design, development and use of biometric technologies
For the legitimate uses that meet the necessity and proportionality test, States should shape an adequate legislative framework for the development and deployment of biometric technologies, which should include, at minimum, rules protecting individuals’ data; requirements regarding the quality of data including tests for accuracy and racial bias; obligations for human rights and data protection impact assessments; obligations for developers and users to minimise risk; a binding code of practice for law enforcement; and specific provisions to avoid dual use or mission creep with such technologies.
As biometric technologies increasingly affect critical societal processes and democratic values, their design, development and deployment should only be allowed following a public and open debate that includes the voices of experts and civil society.
States should publicly disclose all existing and planned activities and deployments of biometric technologies. They must also ensure transparency in public procurement processes, and ensure the right of access to information, including proactive publication, on activities related to biometric technologies.
States and private actors should regularly publish their data protection impact assessments, human rights impact assessments and risk assessment reports, together with a description of the measures taken to mitigate risks and protect individuals’ human rights.
Legislative frameworks for the development and deployment of biometric technologies should provide for clear accountability structures and independent oversight measures. States should condition private sector participation in the biometric technologies used for surveillance purposes – from research and development to marketing, sale, transfer and maintenance – on human rights due diligence and a track record of compliance with human rights norms.
The legislative framework should also ensure access to effective remedies for individuals whose rights are violated by the use of biometric technologies.
Private sector to design, develop, and deploy biometric systems in accordance with human rights standards
Companies engaged in the design, development, sale, deployment, and implementation of biometric technologies should:
- Ensure the protection and respect of human rights standards, by adopting a human-centric approach and performing human rights impact assessments.
- Set adequate and ongoing risk assessment procedures to identify risks to individuals’ rights and freedoms, in particular their rights to privacy and freedom of expression, arising from the use of biometric technologies.
- Provide effective remedies in case of violation of individuals’ human rights.