Artificial Intelligence and Media Ecology
  • ISSN: 1539-7785
  • E-ISSN: 2048-0717

Abstract

This article explores the changing character of society in the age of generative artificial intelligence (AI). Adopting a media ecology perspective, it highlights how generative AI drives polarization, promotes deception, and fosters exclusion and bias. The article closes with a call to action to curb the worst excesses of this technology's deployment in order to preserve societal interests.

DOI: 10.1386/eme_00200_1
2024-07-31
2024-09-09
  • Article Type: Article
Keyword(s): bias; deception; emerging technology; exclusion; media ecology; polarization; truth