AI and the Public Sector: Digital Welfare Dystopia?

We have recently been exploring the acceleration of digital transformation initiatives in the public sector, looking at how the pandemic has challenged assumptions about what is possible.

However, one area that remains mired in controversy is how public bodies can harness the power of artificial intelligence (AI) in an ethical way.

The UN Regional Information Centre (UNRIC) recently called for the postponement of the deployment of AI tools in order to protect civil and human rights. Michelle Bachelet, the UN High Commissioner for Human Rights, is pushing for a moratorium on the sale and use of AI technology in all member states until stricter regulations have been put in place. The High Commissioner's office believes that most AI tools do not meet international human rights frameworks and that governments are falling short in their due diligence. In this context, major violations of human rights through the use of biometric data, such as facial recognition, are inevitable.

This view is widespread. Consumer advocates warn of potential abuses linked to AI technologies. The European Data Protection Board (EDPB) has called for a general ban on the use of AI for the automated recognition of human characteristics in public spaces, in any context. Additionally, it recommends a ban on AI systems that use biometrics to categorize individuals, warning that such technology would mean the end of anonymity in public spaces. The recent revelations about Pegasus spyware and the relationships between NSO Group and state and commercial sector organizations underline this issue.

This contrasts with the opinion of public security bodies, which are strongly interested in exploring the potential of AI. Crime prevention and detection could be improved by using AI to review camera footage and match images in real time. The potential efficiency gains are significant as the effort in terms of both personnel and time required for retrospective investigation of perpetrators would be reduced.

Generally speaking, legal applications can be considered through two channels, prevention and law enforcement, and there are already some regulatory approaches. In April 2021, the European Commission acknowledged the need for regulation and, building on its earlier AI strategy, proposed a harmonized legal framework for AI. Germany has adopted a similar approach: in the current coalition talks between Social Democrats, Greens, and Liberals, a new digital strategy is being discussed, including new responsibilities within the German government.

With respect to prevention, the interior ministers of some German states are trying to create the legal foundations for automated facial recognition in their security or police laws. So far, only one German state has implemented such a legal basis (Polizeivollzugsdienstgesetz Sachsen, the Saxony Police Service Act). A similar project failed in Bavaria due to legal concerns. In the draft bill for the new Federal Police Act, the Federal Ministry of the Interior also included the use of this technology at Germany's largest transport hubs, but under significant political pressure this was dropped from the agenda at the end of January 2020.

Besides Saxony, there is one case study regarding prevention. From mid-2017 to mid-2018, the German Federal Police conducted a facial recognition pilot at Berlin Südkreuz station. However, the Chaos Computer Club (Europe's largest association of hackers, advocating governmental transparency, freedom of information, and the human right to communication) was highly critical of the project's results, arguing that the recognition rate was distorted by a lack of diversity in the sample.

With respect to law enforcement, i.e. tracking down suspects, there has been one prominent example in Germany. In the aftermath of the G20 summit in Hamburg in 2017, the police used AI technology to search for suspects involved in riots, which resulted in a complaint from the city's data protection commissioner.

In summary, there is strong interest in harnessing AI among government stakeholders, but the lack of clear legal frameworks and persistent concerns about data privacy and ethical usage mean that this will remain a controversial topic for some time to come.