The Human and the Machine: AI in Creative Industries
  • ISSN: 1757-2681
  • E-ISSN: 1757-269X

Abstract

The advent of advanced artificial intelligence (AI) and machine learning technologies has opened new avenues for qualitative research, particularly in visual data analysis. This pilot study introduced computer-assisted qualitative visual analysis (CQVA), leveraging GPT-4 Turbo and Google Cloud Vision to automate the thematic analysis of visual datasets. Traditional methods, which rely on manual coding, are time-consuming and labour-intensive; CQVA addresses these challenges by providing an efficient, scalable and cost-effective alternative. This study had two objectives: developing the CQVA method and applying it to analyse the top 1000 advertisements from the ‘adPorn’ subreddit, offering insights into Reddit users’ advertising preferences. A clear preference was identified for ads utilizing visual metaphors, which were the most common. The importance of engaging visual communication was also underscored: Reddit users favoured themes employing visually striking and easily comprehensible imagery. Despite its promise, CQVA required human intervention to guide AI outputs and validate clusters and themes. Nevertheless, the findings demonstrated CQVA’s potential to revolutionize qualitative visual analysis by significantly reducing time and cost while maintaining the richness of insight typically achieved through manual methods, enabling more efficient and comprehensive analysis of large visual datasets and highlighting the method’s scalability and practicality for future research.
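The abstract describes a pipeline in which an image-annotation service (Google Cloud Vision) labels each advertisement and the labels are then grouped into candidate themes for human validation. The paper's own implementation is not reproduced on this page; the following is only a minimal stand-in for the grouping step, using stdlib Python and a simple Jaccard-overlap heuristic. The function names (`cluster_by_labels`, `name_theme`), the threshold value and the greedy strategy are all illustrative assumptions, not the authors' method.

```python
from collections import Counter


def jaccard(a, b):
    """Jaccard similarity of two label collections."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0


def cluster_by_labels(images, threshold=0.3):
    """Greedily group images whose label sets overlap.

    `images` maps an image id to the list of labels an annotator
    (e.g. Google Cloud Vision) returned for it. Each image joins the
    first cluster whose seed labels it resembles above `threshold`,
    otherwise it seeds a new cluster. (Illustrative heuristic only.)
    """
    clusters = []  # list of (seed_label_set, [image_ids])
    for img_id, labels in images.items():
        for seed, members in clusters:
            if jaccard(seed, labels) >= threshold:
                members.append(img_id)
                break
        else:
            clusters.append((set(labels), [img_id]))
    return clusters


def name_theme(images, members, top=3):
    """Summarize a cluster by its most frequent labels; in the CQVA
    workflow an LLM and a human reviewer would refine this into a
    validated theme."""
    counts = Counter(label for m in members for label in images[m])
    return [label for label, _ in counts.most_common(top)]
```

For example, two dog-park ads whose label lists share `dog` and `grass` fall into one cluster, while a night-driving ad seeds its own; `name_theme` then surfaces the shared labels as a rough theme candidate.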

/content/journals/10.1386/iscc_00058_1
2024-10-14
2025-01-19

  • Article Type: Article
Keyword(s): advertisements; advertising; AI; coding images; GPT-4; Reddit; thematizing images