La supervisión humana de los sistemas de inteligencia artificial de alto riesgo. Aportaciones desde el Derecho Internacional Humanitario y el Derecho de la Unión Europea [Human oversight of high-risk artificial intelligence systems: contributions from International Humanitarian Law and European Union law]

  1. Aritz Obregón Fernández 1
  2. Guillermo Lazcoz Moratinos 1

  1 Universidad del País Vasco/Euskal Herriko Unibertsitatea, Lejona, Spain. ROR: https://ror.org/000xsnr85

Journal: Revista electrónica de estudios internacionales (REEI)

ISSN: 1697-5197

Year of publication: 2021

Issue: 42

Type: Article

DOI: 10.17103/REEI.42.08


Abstract

The automation of decision-making by artificial intelligence systems is a growing phenomenon affecting all areas of society. The European Commission, aware of the risks that the use of these technologies entails for fundamental rights and freedoms, proposes in its Artificial Intelligence Act to introduce human oversight as a mandatory requirement for the design and development of these technologies. However, human oversight remains underdeveloped in the European regulatory environment. For this reason, we propose to draw on the concept of Meaningful Human Control developed in the framework of International Humanitarian Law. In this article we analyse the state contributions and the doctrinal and policy proposals made within the Group of Governmental Experts on Lethal Autonomous Weapons Systems. Taken together, these contributions allow us to approach the concept of human oversight proposed by the European Commission for high-risk artificial intelligence systems from a novel perspective. We conclude the article by looking for universally applicable elements of human oversight for the automation of decision-making, irrespective of the field in question.

Funding information

This text was prepared with funding from the Predoctoral Programme for the Training of Non-Doctoral Research Personnel of the Department of Education of the Basque Government.
