When can a decision be considered ‘automated’?

There are already numerous ways in which automated systems are being deployed by government and state authorities to support – and in some cases replace – decisions which until recently were made entirely by humans. Paying attention to the legal implications of these new uses of AI and automated systems within public administration, Francesca Palmiotto broaches the important question of when, and under what circumstances, a decision can be considered ‘automated’. 

This paper explores the consequences of the increased outsourcing of decision-making to machines and AI systems by public bodies. It sets out to investigate when a decision can be regarded as automated from a legal perspective, and how automated decision-making can and should be used in public administration without infringing upon the rights of citizens and applicants. In doing so, the paper addresses the uncertainties surrounding the future of AI governance, paying close attention to the legal protections currently available under the GDPR and the recently enacted AI Act in the context of automated decision-making. In response to the developing reality of AI in public administration, the author calls for a fundamental rights approach to AI governance in order to safeguard rights in the future.

Read the paper here.
