1981
Volume 24, Issue 3
  • ISSN: 1539-7785
  • E-ISSN: 2048-0717

Abstract

Artificial intelligence (AI) now exerts a marked influence on decision-making across many fields, from the personalized selection of content on social media to medical diagnostics and staff recruitment. Yet the widespread assumption that data-driven algorithms are objective raises profound philosophical and ethical questions. This article critically examines the notion of algorithmic objectivity and the phenomenon of filter bubbles through three philosophical lenses: George Edward Moore’s naturalistic fallacy, Theodor W. Adorno and Max Horkheimer’s critical theory of the culture industry, and Adela Cortina’s applied ethics. The analysis centres on algorithmic systems designed for classification, regression and clustering tasks; generative AI, which poses additional challenges of authorship and creativity, lies outside the present scope.
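The filter-bubble dynamic examined here can be illustrated with a toy sketch (hypothetical, not from the article): a recommender that ranks topics purely by past engagement and, fed its own output, collapses a user's initially diverse history onto a single topic. The catalogue, topics and ranking rule below are invented for illustration.

```python
# Toy sketch of a filter-bubble feedback loop (illustrative only; the
# topics, counts and ranking rule are hypothetical, not from the article).
from collections import Counter

# Hypothetical catalogue: topic -> number of items available.
CATALOGUE = {"politics": 40, "science": 30, "sports": 20, "culture": 10}

def recommend(click_history, k=5):
    """Rank topics by past clicks (engagement), breaking ties by
    catalogue size, and return the top-k topics."""
    counts = Counter(click_history)
    return sorted(CATALOGUE, key=lambda t: (-counts[t], -CATALOGUE[t]))[:k]

# Start with a diverse history, then always click the top recommendation:
# the feedback loop rapidly locks onto a single topic.
history = ["politics", "science", "sports", "culture"]
for _ in range(10):
    history.append(recommend(history, k=1)[0])

print(Counter(history))  # one topic comes to dominate the history
```

The point of the sketch is structural, not empirical: once engagement is the sole ranking signal, each recommendation reinforces the evidence for itself, which is the mechanism Pariser (2011) labels the filter bubble.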

DOI: https://doi.org/10.1386/eme_00258_1

References

  1. Adorno, T. W. and Horkheimer, M. ([1947] 2002), Dialectic of Enlightenment: Philosophical Fragments, Stanford, CA: Stanford University Press.
  2. Angwin, J., Larson, J., Mattu, S. and Kirchner, L. (2016), ‘Machine bias’, ProPublica, 23 May, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Accessed 22 August 2025.
  3. Burrell, J. (2016), ‘How the machine “thinks”: Understanding opacity in machine learning algorithms’, Big Data & Society, 3:1, pp. 1–12, https://doi.org/10.1177/2053951715622512.
  4. Buyl, M., Cociancig, C., Frattone, C. and Roekens, N. (2022), ‘Tackling algorithmic disability discrimination in the hiring process: An ethical, legal and technical analysis’, in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul: Association for Computing Machinery, pp. 1071–82, https://doi.org/10.1145/3531146.3533169.
  5. Cortina, A. (2019), ‘Ética de la inteligencia artificial’, Anales de la Real Academia de Ciencias Morales y Políticas, Fascículo 1, pp. 379–94, https://www.boe.es/biblioteca_juridica/anuarios_derecho/articulo.php?id=ANU-M-2019-10037900394. Accessed 22 August 2025.
  6. Crawford, K. (2021), The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, New Haven, CT: Yale University Press.
  7. DSA Observatory (2024), ‘The regulation of recommender systems under the DSA: A transition from default to multiple and dynamic controls’, 22 November, https://dsa-observatory.eu/2024/11/22/the-regulation-of-recommender-systems-under-the-dsa-a-transition-from-default-to-multiple-and-dynamic-controls/. Accessed 22 August 2025.
  8. Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., Duan, Y., Dwivedi, R., Edwards, J., Eirug, A., Galanos, V., Ilavarasan, P. V., Janssen, M., Jones, P., Kar, A. K., Kizgin, H., Kronemann, B., Lal, B., Lucini, B., Medaglia, R., Le Meunier-FitzHugh, K., Le Meunier-FitzHugh, L. C., Misra, S., Mogaji, E., Sharma, S. K., Singh, J. B., Raghavan, V., Raman, R., Rana, N. P., Samothrakis, S., Spencer, J., Tamilmani, K., Tubadji, A., Walton, P. and Williams, M. D. (2021), ‘Artificial intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy’, International Journal of Information Management, 57, p. 101994, https://doi.org/10.1016/j.ijinfomgt.2019.08.002.
  9. European Parliament and Council of the European Union (2024), ‘Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 on artificial intelligence (AI Act)’, Official Journal of the European Union, 12 July, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689. Accessed 22 August 2025.
  10. Executive Office of the President (2023), ‘Safe, secure and trustworthy development and use of artificial intelligence’, Federal Register, 1 November, https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence. Accessed 22 August 2025.
  11. Executive Office of the President (2025), ‘Removing barriers to American leadership in artificial intelligence’, The White House, 23 January, https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/. Accessed 22 August 2025.
  12. Helberger, N., Karppinen, K. and D’Acunto, L. (2018), ‘Exposure diversity as a design principle for recommender systems’, Information, Communication & Society, 21:2, pp. 191–207, https://doi.org/10.1080/1369118X.2016.1271900.
  13. Kelly, C. J., Karthikesalingam, A., Suleyman, M., Corrado, G. and King, D. (2019), ‘Key challenges for delivering clinical impact with artificial intelligence’, BMC Medicine, 17:1, p. 195, https://doi.org/10.1186/s12916-019-1426-2.
  14. Kurzweil, R. (2005), The Singularity Is Near: When Humans Transcend Biology, New York: Viking Press.
  15. Lagioia, F., Rovatti, R. and Sartor, G. (2023), ‘Algorithmic fairness through group parities? The case of COMPAS-SAPMOC’, AI & Society, 38:2, pp. 459–78, https://doi.org/10.1007/s00146-022-01441-y.
  16. Langenkamp, M., Costa, A. and Cheung, C. (2020), ‘Hiring fairly in the age of algorithms’, arXiv, https://doi.org/10.48550/arXiv.2004.07132.
  17. Lockwood, B. (2017), ‘Confirmation bias and electoral accountability’, Quarterly Journal of Political Science, 11:4, pp. 471–501, https://doi.org/10.1561/100.00016037.
  18. McLuhan, M. (1964), Understanding Media: The Extensions of Man, New York: McGraw-Hill.
  19. Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K. and Galstyan, A. (2021), ‘A survey on bias and fairness in machine learning’, ACM Computing Surveys, 54:6, pp. 1–35, https://doi.org/10.1145/3457607.
  20. Moore, G. E. (1903), Principia Ethica, Cambridge: Cambridge University Press.
  21. Napoli, P. M. (2019), Social Media and the Public Interest: Media Regulation in the Disinformation Age, New York: Columbia University Press.
  22. National Institute of Standards and Technology (2023), AI Risk Management Framework (AI RMF 1.0), Gaithersburg, MD: U.S. Department of Commerce.
  23. Orlowski, J. (2020), The Social Dilemma, USA: Exposure Labs, Argent Pictures and The Space Program.
  24. Pariser, E. (2011), The Filter Bubble: What the Internet Is Hiding from You, New York: Penguin Press.
  25. Peters, U. (2024), ‘Science based on artificial intelligence need not pose a social epistemological problem’, Social Epistemology Review and Reply Collective, 13:1, pp. 58–66, https://philarchive.org/rec/PETSBO-3. Accessed 22 August 2025.
  26. Peters, U. and Ojea Quintana, I. (2024), ‘Are generics and negativity about social groups common on social media? A comparative analysis of Twitter (X) data’, Synthese, 203:6, p. 213, https://doi.org/10.1007/s11229-024-04639-3.
  27. Postman, N. (1985), Amusing Ourselves to Death: Public Discourse in the Age of Show Business, New York: Viking Penguin.
  28. Postman, N. (1992), Technopoly: The Surrender of Culture to Technology, New York: Vintage.
  29. Pramanik, P. K. D., Pal, S. and Choudhury, P. (2018), ‘Beyond automation: The cognitive IoT. Artificial intelligence brings sense to the Internet of Things’, in A. Sangaiah, A. Thangavelu and V. Meenakshi Sundaram (eds), Cognitive Computing for Big Data Systems Over IoT: Frameworks, Tools and Applications, Lecture Notes on Data Engineering and Communications Technologies, vol. 14, Cham: Springer International Publishing, pp. 1–37, https://doi.org/10.1007/978-3-319-70688-7_1.
  30. Rajpurkar, P., Chen, E., Banerjee, O. and Topol, E. J. (2022), ‘AI in health and medicine’, Nature Medicine, 28:1, pp. 31–38, https://doi.org/10.1038/s41591-021-01614-0.
  31. Rong, Y., Leemann, T., Nguyen, T., Fiedler, L., Qian, P., Unhelkar, V., Seidel, T., Kasneci, G. and Kasneci, E. (2024), ‘Towards human-centered explainable AI: A survey of user studies for model explanations’, IEEE Transactions on Pattern Analysis and Machine Intelligence, 46:4, pp. 2104–22, https://doi.org/10.1109/TPAMI.2023.3331846.
  32. Rudin, C. (2019), ‘Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead’, Nature Machine Intelligence, 1:5, pp. 206–15, https://doi.org/10.1038/s42256-019-0048-x.
  33. Rutledge, P. (2020), ‘The social dilemma: Fact or manipulation?’, Fielding Graduate University, 8 October, https://www.fielding.edu/the-social-dilemma-fact-or-manipulation/. Accessed 22 August 2025.
  34. Sunstein, C. R. (2018), #Republic: Divided Democracy in the Age of Social Media, Princeton, NJ: Princeton University Press.
  35. Tang, T. and Dicker, A. (2025), ‘China and the U.S. – different approaches to regulating AI’, Digital Economy & AI, 3 April, https://www.chinalawvision.com/2025/04/digital-economy-ai/china-and-the-u-s-different-approaches-to-regulating-ai/. Accessed 22 August 2025.
  36. Tegmark, M. (2017), Life 3.0: Being Human in the Age of Artificial Intelligence, New York: Alfred A. Knopf.
  37. UNESCO (2021), Recommendation on the Ethics of Artificial Intelligence, 23 November, Paris: UNESCO.
  38. Wang, C., Han, B., Patel, B. and Rudin, C. (2023), ‘In pursuit of interpretable, fair and accurate machine learning for criminal recidivism prediction’, Journal of Quantitative Criminology, 39:2, pp. 519–81, https://doi.org/10.1007/s10940-022-09545-w.