Analysis of the proposed Regulation on ethical principles for the development, deployment and use of artificial intelligence, robotics and related technologies
Affiliation: Universidad del País Vasco (UPV/EHU)
ISSN: 2444-8478
Year of publication: 2020
Volume: 6
Issue: 2
Pages: 26-41
Type: Article
Published in: IUS ET SCIENTIA: Revista electrónica de Derecho y Ciencia
Abstract
On 20 October 2020, the European Parliament adopted a resolution (2020/2012(INL)) with recommendations to the Commission regarding artificial intelligence, robotics and related technologies, which included a legislative proposal for a Regulation on the ethical principles for the development, deployment and use of these technologies. The content of this proposal undoubtedly follows from the regulatory vision that the European Commission has maintained in documents such as the White Paper on Artificial Intelligence (COM(2020) 65 final) or the Ethics Guidelines for Trustworthy AI drawn up by the High-Level Expert Group on AI. Given this new legislative horizon, it is more necessary than ever to offer a constructive critique of the proposal, highlighting the possibility of reformulating its markedly soft-law character, despite its placement in a source of law that is of general application and directly applicable, such as a regulation, as well as the approach adopted for certain key principles such as human oversight or discrimination.
Bibliographic References
- ALMADA, M., “Human Intervention in Automated Decision-Making: Toward the Construction of Contestable Systems.”, Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law, 2019, pp. 2–11. https://doi.org/10.1145/3322640.3326699
- BOIX, A., “Los Algoritmos Son Reglamentos: La Necesidad de Extender Las Garantías Propias de Las Normas Reglamentarias a Los Programas Empleados Por La Administración Para La Adopción de Decisiones.”, Revista de Derecho Público: Teoría y Método, vol. 1, 2020, pp. 223–270. https://doi.org/10.37417/RPD/vol_1_2020_33
- COBBE, J. y SINGH, J., “Reviewable Automated Decision-Making.”, Computer Law & Security Review, vol. 39, 2020. https://doi.org/10.1016/j.clsr.2020.105475
- DANKS, D. y LONDON, A.J., “Algorithmic Bias in Autonomous Systems.” Proceedings of the 26th International Joint Conference on Artificial Intelligence, AAAI Press, 2017, pp. 4691–4697. https://dl.acm.org/doi/10.5555/3171837.3171944
- DEMETZOU, K., “Data Protection Impact Assessment: A Tool for Accountability and the Unclarified Concept of ‘High Risk’ in the General Data Protection Regulation.”, Computer Law & Security Review, vol. 35, núm. 6, 2019. https://doi.org/10.1016/j.clsr.2019.105342
- FISCHER, J.E., GREENHALGH, C., JIANG, W., RAMCHURN, S.D., WU, F. y RODDEN, T., “In-the-loop or on-the-loop? Interactional arrangements to support team coordination with a planning agent.”, Concurrency and Computation: Practice and Experience, 2017, pp. 1-16. https://doi.org/10.1002/cpe.4082
- HILDEBRANDT, M., “The issue of bias: the framing powers of ML”, draft version, en Marcello Pelillo, Teresa Scantamburlo (eds.), Machine Learning and Society: Impact, Trust, Transparency, MIT Press, forthcoming 2020.
- KISELEVA, A., “AI as a Medical Device: Is It Enough to Ensure Performance Transparency and Accountability?”, European Pharmaceutical Law Review, vol. 4, núm. 1, 2020, pp. 5-16. https://doi.org/10.21552/eplr/2020/1/4
- De LAAT, P.B., “Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability?”, Philosophy & Technology, vol. 31, núm. 4, 2018, pp. 525–541. https://doi.org/10.1007/s13347-017-0293-z
- MANN, M. y MATZNER, T., “Challenging Algorithmic Profiling: The Limits of Data Protection and Anti-Discrimination in Responding to Emergent Discrimination.”, Big Data & Society, vol. 6, núm. 2, 2019, pp. 1–11. https://doi.org/10.1177%2F2053951719895805
- MITTELSTADT, B., “Principles Alone Cannot Guarantee Ethical AI.”, Nature Machine Intelligence, vol. 1, núm. 11, 2019, pp. 501–507. https://doi.org/10.1038/s42256-019-0114-4
- MITTELSTADT, B., “From Individual to Group Privacy in Big Data Analytics.”, Philosophy & Technology, vol. 30, núm. 4, 2017, pp. 475–494. https://doi.org/10.1007/s13347-017-0253-7
- SANTONI DE SIO, F. y Van den HOVEN, J., “Meaningful Human Control over Autonomous Systems: A Philosophical Account.”, Frontiers in Robotics and AI, vol. 5, 2018, pp. 1-15. https://doi.org/10.3389/frobt.2018.00015
- VEALE, M. y BINNS, R., “Fairer Machine Learning in the Real World: Mitigating Discrimination without Collecting Sensitive Data.”, Big Data & Society, vol. 4, núm. 2, 2017, pp. 1-17. https://doi.org/10.1177%2F2053951717743530
- WACHTER, S. “Affinity Profiling and Discrimination by Association in Online Behavioural Advertising.” Berkeley Technology Law Journal, vol. 35, núm. 2, 2020, Forthcoming, pp. 1-74. https://dx.doi.org/10.2139/ssrn.3388639
- ZUIDERVEEN BORGESIUS, F.J., “Strengthening Legal Protection against Discrimination by Algorithms and Artificial Intelligence.”, The International Journal of Human Rights, vol. 24, núm. 10, 2020, pp. 1–22. https://doi.org/10.1080/13642987.2020.1743976