Artificial Intelligence and the Future of Human Rights: Legal Accountability for Algorithmic Decision-Making in Democratic Societies.
DOI: https://doi.org/10.65960/ijlss.2.1.2026.12

Keywords: Artificial Intelligence, Human Rights, Algorithmic Decision-Making, Legal Accountability, Democratic Governance

Abstract
The rapid development of artificial intelligence (AI) has significantly transformed decision-making processes across public and private sectors, raising complex legal and ethical challenges for the protection of human rights in democratic societies. Algorithmic decision-making systems are increasingly used in areas such as employment, financial services, healthcare, law enforcement, and public administration, where automated processes may influence individuals’ rights and opportunities. While AI technologies offer substantial benefits in efficiency, data analysis, and institutional decision-making, they also present risks related to algorithmic bias, lack of transparency, privacy violations, and weakened procedural accountability. These concerns have prompted growing legal debates over how democratic societies can regulate AI systems in ways that ensure accountability and protect fundamental rights.
This study examines the intersection between artificial intelligence and human rights by analyzing the legal accountability of algorithmic decision-making systems within democratic governance frameworks. Using a qualitative doctrinal and comparative legal methodology, the research evaluates the implications of AI technologies for core human rights principles, including equality, non-discrimination, privacy, and due process. The study further explores the legal challenges associated with algorithmic bias, opacity in automated decision-making, and the allocation of liability among governments, technology companies, and developers responsible for AI systems.
The findings indicate that traditional legal frameworks are often insufficient to address the complex accountability issues created by AI-driven decision-making. Effective governance of algorithmic systems requires the development of new regulatory approaches that emphasize transparency, explainability, and human oversight. The study also highlights the importance of integrating human rights principles into AI governance frameworks and strengthening institutional oversight mechanisms to ensure that algorithmic technologies operate within the rule of law.
The research concludes that a rights-based regulatory approach is essential for balancing technological innovation with the protection of fundamental freedoms. By developing clear accountability frameworks, strengthening regulatory institutions, and promoting international cooperation on AI governance, democratic societies can ensure that artificial intelligence technologies support rather than undermine human rights and democratic values in the evolving digital era.
License
Copyright (c) 2026 International Journal of Law and Social Sciences

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
The International Journal of Law and Social Sciences (IJLSS) publishes all articles under the terms of the Creative Commons Attribution 4.0 International License (CC BY 4.0).
The full legal code of the license is available at:
https://creativecommons.org/licenses/by/4.0/
