As the Artificial Intelligence Safety Summit gets underway in the UK today, ARTICLE 19 calls on the UK government to halt and roll back the deployment of mass surveillance systems, which violate the rights of millions of people across the country.
On 1 and 2 November, the UK Prime Minister Rishi Sunak is hosting the AI Safety Summit at Bletchley Park. The aim of this major event is to discuss the risks posed by so-called ‘frontier’ artificial intelligence (AI), with a view to building a shared understanding of the challenges, identifying appropriate measures to increase AI safety, and exploring the potential for international collaboration.
International governments, leading AI companies, research experts, and a select few representatives from civil society are taking part in the key discussions. The guest list is far from inclusive and does not engage a broad representation of stakeholders, as recommended by international experts. ARTICLE 19, together with other organisations, has already expressed concerns that the Summit leaves very little space for participation by the communities and workers most affected by AI. In doing so, it misses the opportunity for a truly open dialogue.
The UK government expects the Summit to be an opportunity for the country to act as a role model and lead the discussion about how to shape the way ahead, including what governance systems are needed for AI.
Yet the UK government has already taken a clear position on this issue. And it’s not a human rights-oriented one.
Across the country, AI is increasingly used to surveil people, on a daily basis and in public spaces. This modus operandi has a huge impact on people’s human rights: it violates privacy and substantially chills free expression and freedom of association. Evidence clearly shows that it has a disproportionate impact on vulnerable groups and minorities.
In fact, in recent months, the UK government has been promoting a false and misleading causal claim that ‘more surveillance leads to more security’. Policing minister Chris Philp has been putting pressure on police leaders in England and Wales to double their use of retrospective facial recognition software to track down offenders, and encouraging the police to operate live facial recognition cameras more widely.
Not only is the narrative that improving security requires mass surveillance entirely unproven; it also constitutes a dangerous attempt to excuse systemic violations of people’s rights, which are unjustified and disproportionate under international human rights standards.
Respect for the rule of law and international human rights standards has been dangerously deprioritised in the UK. The Biometrics and Surveillance Camera Commissioner has called Britain an ‘omni-surveillance’ society, warning that the police are not complying with court requirements to delete millions of images of innocent people, more than a decade after being told to destroy them. According to the Commissioner, the relevant regulatory framework is ‘inconsistent, incomplete and in some areas incoherent’, and across the UK there ‘isn’t much not being watched by somebody’.
Against this backdrop, it is not difficult to understand why the UK Government has carefully selected attendees to the AI Summit.
The current situation must change. The UK government has the opportunity to acknowledge this need for reform now and take the first step towards it. If it is serious about leading the way on AI safety, the government must anchor its actions in respect for human rights standards. This must start with a rollback of the intrusive, disproportionate and unjustified deployment of AI systems to surveil people.