A Human in the Loop


A fundamental question runs through EU technology regulation, particularly in the age of AI: what role should humans play in automated decision-making processes?

The concept of "human in the loop" is central to this discussion. In law, it refers to a system or process where human involvement is an important, yet not exclusive, part of an operation. The Omnibus packages, a focus of Brussels' approach to technology regulation (Commission 2025), offer an opportunity to reconsider the foundations of human involvement regulation in EU legal instruments.

Human review is crucial in content moderation processes, particularly in the redress or complaint stage. However, it remains unclear whether such post hoc review is sufficient to correct errors or prevent harm. Similarly, human intervention can serve as a legal backup in case of automated decision-making failure, but the legitimating effect of human intervention is not always clear.
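The loop described above — automated action, escalation to a human, and post hoc complaint review — can be sketched as a minimal pipeline. This is an illustrative sketch only; all names, the confidence threshold, and the escalation logic are hypothetical assumptions, not taken from any legal instrument or real moderation system.

```python
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.9  # hypothetical cut-off for purely automated action


@dataclass
class Decision:
    item_id: str
    action: str            # "remove" or "keep"
    automated: bool
    complaints: list = field(default_factory=list)


def human_review(item_id: str) -> Decision:
    # Placeholder for a real moderation queue; a human decides here.
    return Decision(item_id, "keep", automated=False)


def automated_decision(item_id: str, score: float) -> Decision:
    """Act automatically only above the confidence threshold;
    otherwise escalate to a human in the loop."""
    if score >= CONFIDENCE_THRESHOLD:
        return Decision(item_id, "remove", automated=True)
    return human_review(item_id)


def file_complaint(decision: Decision, reason: str) -> Decision:
    """Post hoc redress: a complaint always triggers human re-review."""
    decision.complaints.append(reason)
    return human_review(decision.item_id)
```

The sketch makes the article's worry concrete: the human appears only below a threshold or after a complaint, so whether this review actually corrects errors depends entirely on how the queue behind `human_review` performs.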

Research shows that human oversight, a legal norm and principle in EU technology regulations, often falls short. Humans may become mere rubber stampers, rather than active participants in the decision-making process. This is a concern that extends to areas such as robotics, AI regulation, content moderation, automated decision-making, and governance of drones and financial markets.

Key participants in this discussion include EU institutions such as the European Commission (notably DG CONNECT and DG JUST), the European Parliament's committees on civil liberties and industry, various national regulatory agencies, technology experts, and stakeholder groups including industry representatives, civil society organizations, and academic researchers involved in AI and digital ethics policy development.

Schwemer (2021) suggests that a shared vocabulary and conceptual basis for human involvement in legal research is needed to establish clarity. Consolidating human involvement provisions in the AI Act, DSA, and GDPR could provide clearer definitions of human oversight, intervention, and review for use-case or sector-specific legislation.

The necessity and location of human agency in automated decision-making processes need to be addressed. Systematization alone may not answer the fundamental question of whether problems arising from the human-in-the-loop stem from a lack of a systematic approach or reflect a deeper issue of modern law: the difficulty of regulating non-human entities without human intermediaries.

Decision quality and error rates can serve as benchmarks for human involvement in automated decision-making processes. The quality, timing, and expertise of human input must be assessed empirically, rather than assumed effective by default. The AI Act, for instance, does not adequately recognise the limitations of human oversight, instead presuming that humans possess almost superhuman capacities.

Some legislation does not explicitly mandate human involvement but implies its importance, suggesting that certain tasks should not be fully automated. The assumption of human exceptionalism in EU tech regulation may create blind spots, entrenching flawed practices and ignoring potential improvements from well-designed automation.

In conclusion, the role of humans in automated decision-making processes is a critical issue in EU technology regulation. As we move forward, it is essential to ensure that humans are not merely rubber stampers but active participants in the decision-making process, with their involvement being systematic, effective, and well-defined.
