1-2: Expanded Visualities: Photography and Emerging Technologies
  • ISSN: 2040-3682
  • E-ISSN: 2040-3690

Abstract

Dall-E 2 and Stable Diffusion promote their text-to-image models based on their level of (photo)realism. This use of photographic language is neither superficial nor accidental but indicative of a broader tendency in computer science and data practice. To nuance the general application of photorealism, I position the term alongside photographic realism and computational photorealism. To draw out the important distinctions between these three terms, contemporary examples from astrophotography are analysed and reconstructed using text-to-image models. From the comparative analysis, computational photorealism emerges as a modified term that recognizes the relationship between photography and text-to-image models without conflating their ontological and epistemological differences.

DOI: 10.1386/pop_00096_1
Published online: 28 June 2024
References

  1. Audry, Sofian (2021), Art in the Age of Machine Learning, Cambridge, MA and London: MIT Press.
  2. Beaumont, Romain (2022), ‘LAION-5B: A new era of open large-scale multi-modal datasets’, LAION-5B, 31 March, https://laion.ai/blog/laion-5b/. Accessed 16 October 2023.
  3. Common, Andrew Ainslie (1884), ‘Telescopes for astronomical photography’, Nature, 31:785, pp. 38–40, https://doi.org/10.1038/031038a0.
  4. Cox, Geoff, Dekker, Annet, Dewdney, Andrew and Sluis, Katrina (2021), ‘Affordances of the networked image’, The Nordic Journal of Aesthetics, 30:61–62, pp. 40–45, https://doi.org/10.7146/nja.v30i61-62.127857.
  5. Crawford, Kate and Paglen, Trevor (2019), ‘Excavating AI: The politics of images in machine learning training sets’, Excavating AI, 19 September, https://excavating.ai. Accessed 16 October 2023.
  6. Cubitt, Sean (2023), Truth: Aesthetic Politics, London: Goldsmiths Press.
  7. Cukier, Kenneth and Mayer-Schoenberger, Viktor (2013), ‘The rise of big data: How it’s changing the way we think about the world’, Foreign Affairs, 92:3, pp. 28–40, https://www.jstor.org/stable/23526834.
  8. Daston, Lorraine and Galison, Peter (1992), ‘The image of objectivity’, Representations, Special Issue: ‘Seeing Science’, 40, pp. 81–128, https://doi.org/10.2307/2928741.
  9. Daston, Lorraine and Galison, Peter (2010), Objectivity, New York: Zone Books.
  10. Event Horizon Telescope (2023), ‘Science: Imaging a black hole’, https://eventhorizontelescope.org/science. Accessed 16 October 2023.
  11. Fazi, M. Beatrice (2020), ‘Beyond human: Deep learning, explainability and representation’, Theory, Culture & Society, Special Section: ‘Algorithmic Thought’, 38:7–8, pp. 55–77, https://doi.org/10.1177/0263276420966386.
  12. Fazi, M. Beatrice and Fuller, Matthew (2017), ‘Computational aesthetics’, in M. Fuller (ed.), How to Be a Geek: Essays on the Culture of Software, Cambridge and Malden, MA: Polity Press, pp. 132–54.
  13. Galison, Peter (2006), ‘Images scatter into data, data gather into images’, in S. Manghani, A. Piper and J. Simons (eds), Images: A Reader, London: Sage Publications, pp. 236–41.
  14. Galison, Peter (2020), Black Holes: The Edge of All We Know, Submarine Entertainment.
  15. Gershgorn, Dave (2017), ‘The data that transformed AI research – and possibly the world’, Quartz Tech & Innovation, 26 July, https://qz.com/1034972/the-data-that-changed-the-direction-of-ai-research-and-possibly-the-world. Accessed 16 October 2023.
  16. Gray, Mary L. and Suri, Siddharth (2019), Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass, Boston, MA and New York: Houghton Mifflin Harcourt.
  17. Halpern, Orit (2021), ‘Planetary intelligence’, in J. Roberge and M. Castelle (eds), The Cultural Life of Machine Learning, Cham: Palgrave Macmillan, pp. 227–56, https://doi.org/10.1007/978-3-030-56286-1_8.
  18. Harvey, Adam and LaPlace, Jules (2022), ‘Researchers gone wild: Origins and endpoints of image training datasets created “in the wild”’, in B. Herlo, D. Irrgang, G. Joost and A. Unteidig (eds), Practicing Sovereignty: Digital Involvement in Times of Crises, Bielefeld: Transcript Verlag, pp. 289–310, https://doi.org/10.14361/9783839457603.
  19. Kember, Sarah (1996), ‘The shadow of the object: Photography and realism’, Textual Practice, 10:1, pp. 145–63, https://doi.org/10.1080/09502369608582242.
  20. Klein, Naomi (2023), ‘AI machines aren’t “hallucinating”: But their makers are’, The Guardian, 8 May, https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein. Accessed 16 October 2023.
  21. Malevé, Nicolas and Sluis, Katrina (2023), ‘The photographic pipeline of machine vision: Or, machine vision’s latent photographic theory’, Critical AI, 1:1–2.
  22. Milmo, Dan (2023), ‘Google AI Chatbot Bard sends shares plummeting after it gives wrong answer’, The Guardian, 9 February, https://www.theguardian.com/technology/2023/feb/09/google-ai-chatbot-bard-error-sends-shares-plummeting-in-battle-with-microsoft. Accessed 16 October 2023.
  23. NASA (2022), ‘NASA’s Webb delivers deepest infrared image of universe yet’, 12 July, https://www.nasa.gov/image-article/nasas-webb-delivers-deepest-infrared-image-of-universe-yet/. Accessed 16 October 2023.
  24. NASA (2023), ‘Mission: James Webb Space Telescope’, 3 April, https://science.nasa.gov/mission/webb. Accessed 8 April 2024.
  25. Nichol, Alex, Dhariwal, Prafulla, Ramesh, Aditya, Shyam, Pranav, Mishkin, Pamela, McGrew, Bob, Sutskever, Ilya and Chen, Mark (2022), ‘GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models’, arXiv, 8 March, http://arxiv.org/abs/2112.10741. Accessed 16 October 2023.
  26. Offert, Fabian (2021), ‘Latent deep space: Generative adversarial networks (GANs) in the sciences’, Media+Environment, 3:2, December, https://doi.org/10.1525/001c.29905.
  27. OpenAI (2023a), ‘Dall-E 2’, https://openai.com/dall-e-2/. Accessed 9 June 2023.
  28. OpenAI (2023b), ‘Dall-E 3’, 20 September, https://openai.com/dall-e-3. Accessed 8 April 2024.
  29. Parikka, Jussi (2023), Operational Images: From the Visual to the Invisual, Minneapolis, MN and London: University of Minnesota Press.
  30. Pichai, Sundar (2023), ‘An important next step on our AI journey’, Google, 6 February, https://blog.google/technology/ai/bard-google-ai-search-updates/. Accessed 16 October 2023.
  31. Ramesh, Aditya (2022), ‘How Dall-E 2 works’, http://adityaramesh.com/posts/dalle2/dalle2.html. Accessed 9 June 2023.
  32. Ramesh, Aditya, Dhariwal, Prafulla, Nichol, Alex, Chu, Casey and Chen, Mark (2022), ‘Hierarchical text-conditional image generation with CLIP latents’, arXiv, 12 April, http://arxiv.org/abs/2204.06125. Accessed 16 October 2023.
  33. Samman, Nadim (2023), Poetics of Encryption: Art and the Technocene, Berlin: Hatje Cantz.
  34. Sekula, Allan (1986), ‘The body and the archive’, October, 39, Winter, pp. 3–64, http://www.jstor.org/stable/778312. Accessed 16 October 2023.
  35. Somaini, Antonio (2022), ‘On the photographic status of images produced by generative adversarial networks (GANs)’, Philosophy of Photography, 13:1, pp. 153–64, https://doi.org/10.1386/pop_00044_1.
  36. Stable Diffusion (2023), ‘Stable diffusion online’, 2 April, https://stablediffusionweb.com. Accessed 8 April 2024.
  37. Steyerl, Hito (2023), ‘Mean images’, New Left Review, 140–141, https://newleftreview.org/issues/ii140/articles/hito-steyerl-mean-images. Accessed 16 October 2023.
  38. Swijtink, Zeno G. (1987), ‘The objectification of observation: Measurement and statistical methods in the nineteenth century’, in L. Krüger, L. J. Daston and M. Heidelberger (eds), The Probabilistic Revolution Volume 1: Ideas in History, Cambridge, MA and London: MIT Press, pp. 261–85.
  39. Webb Telescope (2022), ‘Webb’s first deep field (NIRCam Image)’, 12 July, https://webbtelescope.org/contents/media/images/2022/035/01G7DCWB7137MYJ05CSH1Q5Z1Z. Accessed 16 October 2023.