Weizenbaum’s nightmare: The decay of language in AI-generated communication
AI, Augmentation and Art
  • ISSN: 2633-8793
  • E-ISSN: 2633-8785

Abstract

Half a century on from the ELIZA program, with the rapid and widespread emergence of generative artificial intelligence (AI) programs, we are now on the cusp of realizing Joseph Weizenbaum’s nightmare: an absurd world that is not only populated by machines that can convincingly simulate human communication in its various forms (e.g. writing, visual art, performance and music) but in which these machines are readily accepted as authentic replacements. Drawing on the existentialist language philosophy of Vilém Flusser, this article argues that to defer cultural communication to generative AI programs is to step outside the ‘great conversation’ of human culture and to be condemned to an unutterable nothingness. To demonstrate this, I analyse several illustrative examples, including Google’s Language Model for Dialogue Applications (LaMDA), the text-based roleplaying game AI Dungeon (Latitude 2019) and the art installation UUmwelt (Huyghe 2018). I argue that generative AI programs only appear to ‘speak’ in a language we understand while continuing to ‘think’ in the formal language of mathematics, and that their communications are merely transliterations of numbers. As such, these programs are not bound by the same moral and syntactical rules that we observe and abide by in cultural communication, and in using generative AI programs, we may very well bypass these rules to express raw intention. Though mechanized in the form of technical images, such a mode of expression would be akin to the nonsensical cries of animals in that it fulfils a fundamental desire but reveals nothing of a consciousness within. By deferring the labour of communication to an AI program, the human retreats inward and disappears from culture as they ‘speak’ in one language but ‘think’ in another. Thus, the nightmarish future that Weizenbaum envisioned is one filled with illusions of artistic expression, projected by both machines and humans, and yet there is no evidence of humanity in such a culture.

Funding
This study was supported by:
  • UKRI Arts & Humanities Research Council (AHRC)
  • Scottish Graduate School for Arts and Humanities (SGSAH) (Award 2116337)
DOI: https://doi.org/10.1386/jpm_00002_1
Published online: 18 August 2023

References

  1. Agüera y Arcas, Blaise (2021), ‘Do large language models understand us?’, Medium, 16 December, https://medium.com/@blaisea/do-large-language-models-understand-us-6f881d6d8e75. Accessed 14 April 2023.
  2. Bender, Emily (2022), ‘No, large language models aren’t like disabled people (and it’s problematic to argue that they are)’, Medium, 21 January, https://medium.com/@emilymenonbender/no-llms-arent-like-people-with-disabilities-and-it-s-problematic-to-argue-that-they-are-a2ac0df0e435. Accessed 14 April 2023.
  3. Bender, Emily M., Gebru, Timnit, McMillan-Major, Angelina and Shmitchell, Shmargaret (2021), ‘On the dangers of stochastic parrots: Can language models be too big?’, Conference on Fairness, Accountability, and Transparency (FAccT ’21), New York, 3–10 March.
  4. Buchanan, Ben, Lohn, Andrew, Musser, Micah and Sedova, Katerina (2021), Truth, Lies and Automation: How Language Models Could Change Disinformation, Washington, DC: Center for Security and Emerging Technology.
  5. Chun, Wendy Hui Kyong (2021), Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition, London: MIT Press.
  6. Costandi, Mo (2012), ‘Scientists read dreams’, Nature, 19 October, https://doi.org/10.1038/nature.2012.11625.
  7. Crawford, Kate (2021), Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, London: Yale University Press.
  8. Flusser, Vilém (2018), Language and Reality (trans. R. M. Novaes), London: University of Minnesota Press.
  9. Flusser, Vilém (n.d.), The Novel Called ‘Science’, Berlin: Vilém Flusser Archive, reference number 2795.
  10. Holden, Constance (2007), ‘Rehabilitating Pluto’, Science, 315:5819, https://doi.org/10.1126/science.315.5819.1643c.
  11. Kittler, Friedrich (1999), Gramophone, Film, Typewriter, Stanford, CA: Stanford University Press.
  12. Kittler, Friedrich (2018), Optical Media: Berlin Lectures 1999 (trans. A. Enns), Cambridge: Polity Press.
  13. Latitude (2019), AI Dungeon, Provo, UT: Latitude.
  14. Lemoine, Blake (2022), ‘Is LaMDA sentient? An interview’, Medium, 11 June, https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917. Accessed 11 January 2023.
  15. Lin, Stephanie, Hilton, Jacob and Evans, Owain (2022), ‘TruthfulQA: Measuring how models mimic human falsehoods’, in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, 22–27 May, Dublin: Association for Computational Linguistics, pp. 3214–52.
  16. Macherey, Pierre (2006), A Theory of Literary Production, Abingdon: Routledge.
  17. McGuffie, Kris and Newhouse, Alex (2020), The Radicalization Risks of GPT-3 and Advanced Neural Language Models, Monterey, CA: Middlebury Institute of International Studies.
  18. Metz, Cade and Wakabayashi, Daisuke (2020), ‘Google researcher says she was fired over paper highlighting bias in A.I.’, New York Times, 3 December, https://www.nytimes.com/2020/12/03/technology/google-researcher-timnit-gebru.html. Accessed 11 January 2023.
  19. Nicola, Luca (2019), ‘Brain activity? A work of art interpreted by artificial intelligence’, IBSA Foundation, 7 February, https://www.ibsafoundation.org/en/blog/brain-activity-a-work-of-art-interpreted-by-artificial-intelligence. Accessed 11 January 2023.
  20. Shen, Guohua, Dwivedi, Kshitij, Majima, Kei, Horikawa, Tomoyasu and Kamitani, Yukiyasu (2019), ‘End-to-end deep image reconstruction from human brain activity’, Frontiers in Computational Neuroscience, 13:21, n.pag., https://doi.org/10.3389/fncom.2019.00021.
  21. Simonite, Tom (2021), ‘It began as an AI-fueled dungeon game: It got much darker’, Wired, 5 May, https://www.wired.com/story/ai-fueled-dungeon-game-got-much-darker/. Accessed 11 January 2023.
  22. Tiku, Nitasha (2022), ‘The Google engineer who thinks the company’s AI has come to life’, The Washington Post, 11 June, https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/. Accessed 11 January 2023.
  23. Vallance, Chris (2022), ‘Google engineer says LaMDA AI system may have its own feelings’, BBC News, 13 June, https://www.bbc.co.uk/news/technology-61784011. Accessed 11 January 2023.
  24. Vincent, James (2021), ‘Google showed off its next-generation AI by talking to Pluto and a paper airplane’, The Verge, 18 May, https://www.theverge.com/2021/5/18/22442328/google-io-2021-ai-language-model-lamda-pluto. Accessed 11 January 2023.
  25. Weizenbaum, Joseph (1972), ‘On the impact of the computer on society’, Science, 176:4035, pp. 609–14.
  26. Weizenbaum, Joseph (1976), Computer Power and Human Reason: From Judgment to Calculation, New York: W. H. Freeman.