Studying visual disinformation: A computational literature review of current issues, trends and research methods of a growing field

Abstract

The rise of visual mis/disinformation presents a significant challenge to democratic societies, as misleading visual content increasingly influences public opinion and decision-making. This study aims to comprehensively map the current state of research on visual mis/disinformation, providing insights into its key trends, disciplinary focus, methodologies and topical priorities. Using a computational literature review approach, this study identifies and synthesizes findings from a broad corpus of scholarly publications (N = 286), examining trends over time in visual disinformation research and highlighting the growing prominence of the field and its interdisciplinary nature. We explore the specific contributions of disciplines such as political science, communication studies and computer science. The review identifies a diverse array of research methods used to investigate visual disinformation, ranging from traditional content analysis and surveys to cutting-edge computational techniques such as machine learning and visual network analysis. It also delves into the thematic priorities of recent studies, including ‘media literacy’, ‘verification strategies of visual misinformation’, ‘visual literacy in misinformation’, ‘correction strategies and engagement’, ‘health-related disinformation’, ‘disinformation detection using machine learning’, and ‘misinformation and memory’. By integrating these perspectives, the study provides a comprehensive overview of the visual mis/disinformation field, offering valuable insights for future research. It emphasizes the need for multidisciplinary collaboration and methodological innovation to address the complexities of this pressing issue. This work contributes to the broader understanding of visual mis/disinformation’s impact and the strategies needed to mitigate its harmful effects.
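A computational literature review of the kind described here typically begins by converting each abstract in the corpus into a weighted bag of words before topic modelling (the reference list points to Gensim, Řehůřek and Sojka 2010, and to topic modelling, Blei 2012). Below is a minimal, dependency-free sketch of that first step only; the toy corpus and the `tokenize`/`tfidf` names are illustrative assumptions, not the study’s actual data or pipeline.

```python
from collections import Counter
import math
import re

# Toy stand-in for the N = 286 abstracts (invented titles, not the study's corpus).
docs = [
    "deepfake detection with machine learning on social media images",
    "media literacy interventions against visual misinformation",
    "machine learning models for multimodal fake news detection",
    "correcting health misinformation with visual fact checks",
]

def tokenize(text):
    """Lowercase and keep alphabetic tokens only."""
    return re.findall(r"[a-z]+", text.lower())

tokenized = [tokenize(d) for d in docs]

# Document frequency: in how many abstracts does each term appear?
df = Counter(t for doc in tokenized for t in set(doc))
N = len(docs)

def tfidf(doc):
    """TF-IDF weights for one tokenized document: frequent-here, rare-overall terms score highest."""
    tf = Counter(doc)
    return {t: tf[t] * math.log(N / df[t]) for t in tf}

scores = tfidf(tokenized[0])
top = sorted(scores, key=scores.get, reverse=True)[:3]
```

In a full pipeline, weighted vectors like these would be fed to a topic model whose output topics are then manually labelled, yielding theme names of the kind the abstract reports (e.g. ‘media literacy’, ‘disinformation detection using machine learning’).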

Funding
This study was supported by the Swedish Research Council (Award 2022-05414).
DOI: 10.1386/jvpc_00052_1

References

  1. Abuín-Penas, J., Corbacho-Valencia, J.-M. and Pérez-Seoane, J. (2023), ‘Análisis de los contenidos verificados por los fact-checkers españoles en Instagram’, Revista de Comunicación, 22:1, pp. 17–34, https://doi.org/10.26441/rc22.1-2023-3089.
  2. Ahuja, N. and Kumar, S. (2024), ‘Fusion of semantic, visual and network information for detection of misinformation on social media’, Cybernetics and Systems, 55:5, pp. 1063–85, https://doi.org/10.1080/01969722.2022.2130248.
  3. Aikin, K. J., Southwell, B. G., Paquin, R. S., Rupert, D. J., O’Donoghue, A. C., Betts, K. R. and Lee, P. K. (2017), ‘Correction of misleading information in prescription drug television advertising: The roles of advertisement similarity and time delay’, Research in Social & Administrative Pharmacy, 13:2, pp. 378–88, https://doi.org/10.1016/j.sapharm.2016.04.004.
  4. Al-alshaqi, M., Rawat, D. B. and Liu, C. (2024), ‘Ensemble techniques for robust fake news detection: Integrating transformers, natural language processing, and machine learning’, Sensors, 24:18, https://doi.org/10.3390/s24186062.
  5. Alonso-López, N., Sidorenko-Bautista, P. and Giacomelli, F. (2021), ‘Beyond challenges and viral dance moves: TikTok as a vehicle for disinformation and fact-checking in Spain, Portugal, Brazil, and the USA’, Anàlisi, 64, pp. 65–84, https://doi.org/10.5565/rev/analisi.3411.
  6. Amin, Z., Ali, N. M. and Smeaton, A. F. (2021a), ‘Attention-based design and user decisions on information sharing: A thematic literature review’, IEEE Access, 9, pp. 83285–97, https://doi.org/10.1109/access.2021.3087740.
  7. Amin, Z., Ali, N. M. and Smeaton, A. F. (2021b), ‘Visual selective attention system to intervene user attention in sharing COVID-19 misinformation’, International Journal of Advanced Computer Science and Applications, 12:10, pp. 36–41, https://doi.org/10.48550/ARXIV.2110.13489.
  8. Antons, D., Breidbach, C. F., Joshi, A. M. and Salge, T. O. (2023), ‘Computational literature reviews: Method, algorithms, and roadmap’, Organizational Research Methods, 26:1, pp. 107–38, https://doi.org/10.1177/1094428121991230.
  9. Apuke, O. D., Omar, B. and Tunca, E. A. (2023), ‘Effect of fake news awareness as an intervention strategy for motivating news verification behaviour among social media users in Nigeria: A quasi-experimental research’, Journal of Asian and African Studies, 58:6, pp. 888–903, https://doi.org/10.1177/00219096221079320.
  10. Bastos, M., Mercea, D. and Goveia, F. (2023), ‘Guy next door and implausibly attractive young women: The visual frames of social media propaganda’, New Media & Society, 25:8, pp. 2014–33, https://doi.org/10.1177/14614448211026580.
  11. Bennett, W. L. and Livingston, S. (eds) (2020), The Disinformation Age: Politics, Technology, and Disruptive Communication in the United States, Cambridge: Cambridge University Press.
  12. Blei, D. M. (2012), ‘Topic modeling and digital humanities’, Journal of Digital Humanities, 2:1, pp. 8–11, https://journalofdigitalhumanities.org/2-1/topic-modeling-and-digital-humanities-by-david-m-blei/. Accessed 21 October 2025.
  13. Brennen, J. S., Simon, F. M. and Nielsen, R. K. (2021), ‘Beyond (mis)representation: Visuals in COVID-19 misinformation’, The International Journal of Press/Politics, 26:1, pp. 277–99, https://doi.org/10.1177/1940161220964780.
  14. Butler, L. H., Lamont, P., Wan, D. L. Y., Prike, T., Nasim, M., Walker, B., Fay, N. and Ecker, U. K. H. (2023), ‘The (mis)information game: A social media simulator’, Behavior Research Methods, 56:3, pp. 2376–97, https://doi.org/10.3758/s13428-023-02153-x.
  15. Carmi, E., Yates, S. J., Lockley, E. and Pawluczuk, A. (2020), ‘Data citizenship: Rethinking data literacy in the age of disinformation, misinformation, and malinformation’, Internet Policy Review, 9:2, pp. 1–22, https://doi.org/10.14763/2020.2.1481.
  16. Cartella, G., Cuculo, V., Cornia, M. and Cucchiara, R. (2024), ‘Unveiling the truth: Exploring human gaze patterns in fake images’, IEEE Signal Processing Letters, 31, pp. 820–24, https://doi.org/10.1109/lsp.2024.3375288.
  17. Chadwick, A. and Stanyer, J. (2022), ‘Deception as a bridging concept in the study of disinformation, misinformation, and misperceptions: Toward a holistic framework’, Communication Theory, 32:1, pp. 1–24, https://doi.org/10.1093/ct/qtab019.
  18. Comito, C., Caroprese, L. and Zumpano, E. (2023), ‘Multimodal fake news detection on social media: A survey of deep learning techniques’, Social Network Analysis and Mining, 13:1, p. 101, https://doi.org/10.1007/s13278-023-01104-w.
  19. Dan, V. and Coleman, R. (2024), ‘“I’ll change my beliefs when I see it”: Video fact checks outperform text fact checks in correcting misperceptions among those holding false or uncertain pre-existing beliefs’, Communication Research, 52:6, pp. 778–802, https://doi.org/10.1177/00936502241287870.
  20. Dan, V., Paris, B., Donovan, J., Hameleers, M., Roozenbeek, J., Van Der Linden, S. and Von Sikorski, C. (2021), ‘Visual mis- and disinformation, social media, and democracy’, Journalism and Mass Communication Quarterly, 98:3, pp. 641–64, https://doi.org/10.1177/10776990211035395.
  21. Destun, L. M. and Kuiper, N. A. (1996), ‘Autobiographical memory and recovered memory therapy: Integrating cognitive, clinical, and individual difference perspectives’, Clinical Psychology Review, 16:5, pp. 421–50, https://doi.org/10.1016/0272-7358(96)00022-0.
  22. Di Bello, F., Collà Ruvolo, C., Cilio, S., La Rocca, R., Capece, M., Creta, M., Celentano, G., Califano, G., Morra, S., Iacovazzo, C., Coviello, A., Buonanno, P., Fusco, F., Imbimbo, C., Mirone, V. and Longo, N. (2022), ‘Testicular cancer and YouTube: What do you expect from a social media platform?’, International Journal of Urology, 29:7, pp. 685–91, https://doi.org/10.1111/iju.14871.
  23. Dijkstra, K. and Moerman, E. M. (2012), ‘Effects of modality on memory for original and misleading information’, Acta Psychologica, 140:1, pp. 58–63, https://doi.org/10.1016/j.actpsy.2012.02.003.
  24. Dixon, G. N., McKeever, B. W., Holton, A. E., Clarke, C. and Eosco, G. (2015), ‘The power of a picture: Overcoming scientific misinformation by communicating weight-of-evidence information with visual exemplars’, Journal of Communication, 65:4, pp. 639–59, https://doi.org/10.1111/jcom.12159.
  25. Douglas, G. C. C. (2022), ‘A sign in the window: Social norms and community resilience through handmade signage in the age of COVID-19’, Linguistic Landscape: An International Journal, 8:2–3, pp. 184–201, https://doi.org/10.1075/ll.21037.dou.
  26. Ecker, U. K. H., Sharkey, C. X. M. and Swire-Thompson, B. (2023), ‘Correcting vaccine misinformation: A failure to replicate familiarity or fear-driven backfire effects’, PLOS One, 18:4, https://doi.org/10.1371/journal.pone.0281140.
  27. Farkas, X. (2023), ‘Visual political communication research: A literature review from 2012 to 2022’, Journal of Visual Political Communication, 10:2, pp. 95–126, https://doi.org/10.1386/jvpc_00027_1.
  28. Feltrero, R., Hernando, S. and Ionescu, A. (2023), ‘E-learning strategies for media literacy: Engagement of interactive digital serious games for understanding visual online disinformation’, American Journal of Distance Education, 37:4, pp. 276–93, https://doi.org/10.1080/08923647.2023.2231814.
  29. Fernández-Castrillo, C. and Ramos, C. (2025), ‘Post-photojournalism: Post-truth challenges and threats for visual reporting in the Russo-Ukrainian war coverage’, Digital Journalism, 13:1, pp. 37–60, https://doi.org/10.1080/21670811.2023.2295424.
  30. Ferreira, R. R. (2022), ‘Liquid disinformation tactics: Overcoming social media countermeasures through misleading content’, Journalism Practice, 16:8, pp. 1537–58, https://doi.org/10.1080/17512786.2021.1914707.
  31. French, L., Garry, M. and Mori, K. (2011), ‘Relative – not absolute – judgments of credibility affect susceptibility to misinformation conveyed during discussion’, Acta Psychologica, 136:1, pp. 119–28, https://doi.org/10.1016/j.actpsy.2010.10.009.
  32. Geise, S., Heck, A. and Panke, D. (2021), ‘The effects of digital media images on political participation online: Results of an eye-tracking experiment integrating individual perceptions of “photo news factors”’, Policy and Internet, 13:1, pp. 54–85, https://doi.org/10.1002/poi3.235.
  33. Gordon, L. T., Bilolikar, V. K., Hodhod, T. and Thomas, A. K. (2020), ‘How prior testing impacts misinformation processing: A dual-task approach’, Memory and Cognition, 48:2, pp. 314–24, https://doi.org/10.3758/s13421-019-00970-0.
  34. Grabe, M. E. and Bucy, E. P. (2009), Image Bite Politics: News and the Visual Framing of Elections, Oxford: Oxford University Press.
  35. Graber, D. A. (1996), ‘Say it with pictures’, The Annals of the American Academy of Political and Social Science, 546:1, pp. 85–96, https://doi.org/10.1177/0002716296546001008.
  36. Groh, M., Epstein, Z., Firestone, C. and Picard, R. (2022), ‘Deepfake detection by human crowds, machines, and machine-informed crowds’, Proceedings of the National Academy of Sciences of the United States of America, 119:1, https://doi.org/10.1073/pnas.2110013119.
  37. Gruzd, A., Zhang, J. and Mai, P. (2025), ‘GraphOptima: A graph layout optimization framework for visualizing large networks’, SoftwareX, 29, https://doi.org/10.1016/j.softx.2025.102034.
  38. Gundermann, C. and Wright, A. (2024), ‘Public history as graphic history’, International Public History, 7:2, pp. 67–68, https://doi.org/10.1515/iph-2024-2014.
  39. Hameleers, M. (2025), ‘The nature of visual disinformation online: A qualitative content analysis of alternative and social media in the Netherlands’, Political Communication, 42:1, pp. 108–26, https://doi.org/10.1080/10584609.2024.2354389.
  40. Hameleers, M., Powell, T. E., Van Der Meer, T. G. L. A. and Bos, L. (2020), ‘A picture paints a thousand lies? The effects and mechanisms of multimodal disinformation and rebuttals disseminated via social media’, Political Communication, 37:2, pp. 281–301, https://doi.org/10.1080/10584609.2019.1674979.
  41. Hannah, M. N. (2021), ‘A conspiracy of data: QAnon, social media, and information visualization’, Social Media + Society, 7:3, https://doi.org/10.1177/20563051211036064.
  42. Hausken, L. (2024), ‘Photorealism versus photography: AI-generated depiction in the age of visual disinformation’, Journal of Aesthetics and Culture, 16:1, https://doi.org/10.1080/20004214.2024.2340787.
  43. Huff, M. and Maurer, A. E. (2014), ‘Post-learning verbal information changes visual and motor memory for hand-manipulative tasks’, Applied Cognitive Psychology, 28:5, pp. 772–79, https://doi.org/10.1002/acp.3047.
  44. Ilias, L., Kazelidis, I. M. and Askounis, D. (2023), ‘Multimodal detection of bots on X (Twitter) using transformers’, IEEE Transactions on Information Forensics and Security, 19, pp. 7320–34, https://doi.org/10.48550/ARXIV.2308.14484.
  45. Inwood, O. and Zappavigna, M. (2024), ‘The legitimation of screenshots as visual evidence in social media: YouTube videos spreading misinformation and disinformation’, Visual Communication, https://doi.org/10.1177/14703572241255664.
  46. Ittefaq, M., Abwao, M. and Rafique, S. (2021), ‘Polio vaccine misinformation on social media: Turning point in the fight against polio eradication in Pakistan’, Human Vaccines and Immunotherapeutics, 17:8, pp. 2575–77, https://doi.org/10.1080/21645515.2021.1894897.
  47. Jing, J., Wu, H., Sun, J., Fang, X. and Zhang, H. (2023), ‘Multimodal fake news detection via progressive fusion networks’, Information Processing and Management, 60:1, https://doi.org/10.1016/j.ipm.2022.103120.
  48. Juefei-Xu, F., Wang, R., Huang, Y., Guo, Q., Ma, L. and Liu, Y. (2022), ‘Countering malicious deepfakes: Survey, battleground, and horizon’, International Journal of Computer Vision, 130:7, pp. 1678–734, https://doi.org/10.1007/s11263-022-01606-8.
  49. Karanian, J. M., Rabb, N., Wulff, A. N., Torrance, M. G., Thomas, A. K. and Race, E. (2020), ‘Protecting memory from misinformation: Warnings modulate cortical reinstatement during memory retrieval’, Proceedings of the National Academy of Sciences of the United States of America, 117:37, pp. 22771–79, https://doi.org/10.1073/pnas.2008595117.
  50. Kasianenko, K. and Boichak, O. (2024), ‘Canonizing online activism: Memetic iconography in the North Atlantic Fella Organization’, Media, War and Conflict, 18:2, pp. 179–96, https://doi.org/10.1177/17506352241279957.
  51. Kilinc, D. D. and Sayar, G. (2019), ‘Assessment of reliability of YouTube videos on orthodontics’, Turkish Journal of Orthodontics, 32:3, pp. 145–50, https://doi.org/10.5152/TurkJOrthod.2019.18064.
  52. Langguth, J., Pogorelov, K., Brenner, S., Filkuková, P. and Schroeder, D. T. (2021), ‘Don’t trust your eyes: Image manipulation in the age of deepfakes’, Frontiers in Communication, 6, https://doi.org/10.3389/fcomm.2021.632317.
  53. Lauer, C. and O’Brien, S. (2020), ‘How people are influenced by deceptive tactics in everyday charts and graphs’, IEEE Transactions on Professional Communication, 63:4, pp. 327–40, https://doi.org/10.1109/TPC.2020.3032053.
  54. Leaha, M. A. and Canals, R. (2024), ‘Resisting alternative images: An ethnography of visual disinformation in Brazil’, Cultural Anthropology, 39:4, pp. 533–63, https://doi.org/10.14506/ca39.4.03.
  55. Lee, Y.-I., Mu, D., Hsu, Y.-C., Wojdynski, B. W. and Binford, M. (2024), ‘Misinformation or hard to tell? An eye-tracking study to investigate the effects of food crisis misinformation on social media engagement’, Public Relations Review, 50:4, https://doi.org/10.1016/j.pubrev.2024.102483.
  56. Lewandowsky, S., Cook, J. and Lombardi, D. (2020), Debunking Handbook 2020, New York: Databrary.
  57. Li, H. O.-Y., Bailey, A., Huynh, D. and Chan, J. (2020), ‘YouTube as a source of information on COVID-19: A pandemic of misinformation?’, BMJ Global Health, 5:5, https://doi.org/10.1136/bmjgh-2020-002604.
  58. Li, H., Li, X., Dunkin, F., Zhang, Z., Hu, C., Wu, G. and Sam Ge, S. (2024), ‘Trust measurement of visual data based on multigranularity belief fusion for UAV perception system’, IEEE Transactions on Instrumentation and Measurement, 73, pp. 1–13, https://doi.org/10.1109/tim.2024.3463018.
  59. Lilleker, D. G. (2019), ‘The power of visual political communication: Pictorial politics through the lens of communication psychology’, in A. Veneti, D. Jackson and D. G. Lilleker (eds), Visual Political Communication, Cham: Springer, pp. 37–51.
  60. Lilleker, D. G. and Veneti, A. (eds) (2023), Research Handbook on Visual Politics, Cheltenham: Edward Elgar Publishing.
  61. Lilleker, D. G., Veneti, A. and Jackson, D. (2019), ‘Introduction: Visual political communication’, in D. G. Lilleker, A. Veneti and D. Jackson (eds), Visual Political Communication, Cham: Springer, pp. 53–73.
  62. Lin, S.-Y., Chen, Y.-C., Chang, Y.-H., Lo, S.-H. and Chao, K.-M. (2024), ‘Text–image multimodal fusion model for enhanced fake news detection’, Science Progress, 107:4, https://doi.org/10.1177/00368504241292685.
  63. Liu, Z.-J., Chernov, S. and Mikhaylova, A. V. (2021), ‘Trust management and benefits of vehicular social networking: An approach to verification and safety’, Technological Forecasting and Social Change, 166, https://doi.org/10.1016/j.techfore.2021.120613.
  64. Liu, H., Tan, Z., Chen, Q., Wei, Y., Zhao, Y. and Wang, J. (2025), ‘Unified frequency-assisted transformer framework for detecting and grounding multi-modal manipulation’, International Journal of Computer Vision, 133:3, pp. 1392–409, https://doi.org/10.1007/s11263-024-02245-x.
  65. Ma, L., Yang, P., Xu, Y., Yang, Z., Li, P. and Huang, H. (2025), ‘Deep learning technology for face forgery detection: A survey’, Neurocomputing, 618, https://doi.org/10.1016/j.neucom.2024.129055.
  66. Marquart, F. (2023), ‘Eye-tracking methodology in research on visual politics’, in D. Lilleker and A. Veneti (eds), Research Handbook on Visual Politics, Cheltenham: Edward Elgar Publishing, pp. 30–41.
  67. Martín-Neira, J.-I., Trillo-Domínguez, M. and Olvera-Lobo, M.-D. (2023), ‘Ibero-American journalism in the face of scientific disinformation: Fact-checkers’ initiatives on the social network Instagram’, El Profesional de La Información, 32:5, https://doi.org/10.3145/epi.2023.sep.03.
  68. Milani, E., Weitkamp, E. and Webb, P. (2020), ‘The visual vaccine debate on Twitter: A social network analysis’, Media and Communication, 8:2, pp. 364–75, https://doi.org/10.17645/mac.v8i2.2847.
  69. Muenster, R. M., Gangi, K. and Margolin, D. (2024), ‘Alternative health and conventional medicine discourse about cancer on TikTok: Computer vision analysis of TikTok videos’, Journal of Medical Internet Research, 26, https://doi.org/10.2196/60283.
  70. Murillo-Ligorred, V., Ramos-Vallecillo, N., Covaleda, I. and Fayos, L. (2023), ‘Knowledge, integration and scope of deepfakes in arts education: The development of critical thinking in postgraduate students in primary education and master’s degree in secondary education’, Education Sciences, 13:11, https://doi.org/10.3390/educsci13111073.
  71. Nadeem, M. I., Ahmed, K., Li, D., Zheng, Z., Alkahtani, H. K., Mostafa, S. M., Mamyrbayev, O. and Abdel Hameed, H. (2022), ‘EFND: A semantic, visual, and socially augmented deep framework for extreme fake news detection’, Sustainability, 15:1, https://doi.org/10.3390/su15010133.
  72. Nault, K. and Ruhi, U. (2023), ‘User experience with disinformation-countering tools: Usability challenges and suggestions for improvement’, Frontiers in Computer Science, 5, https://doi.org/10.3389/fcomp.2023.1253166.
  73. Newman, E. J. and Schwarz, N. (2024), ‘Misinformed by images: How images influence perceptions of truth and what can be done about it’, Current Opinion in Psychology, 56, https://doi.org/10.1016/j.copsyc.2023.101778.
  74. Nguyen, V. T., Jung, K. and Gupta, V. (2021), ‘Examining data visualization pitfalls in scientific publications’, Visual Computing for Industry, Biomedicine, and Art, 4:1, pp. 27–42, https://doi.org/10.1186/s42492-021-00092-y.
  75. Nygren, T., Guath, M., Axelsson, C.-A. W. and Frau-Meigs, D. (2021), ‘Combatting visual fake news with a professional fact-checking tool in education in France, Romania, Spain and Sweden’, Information, 12:5, https://doi.org/10.3390/info12050201.
  76. O’Hagan, L. A. (2021), ‘Commercialising public health during the 1918-1919 Spanish flu pandemic in Britain’, Journal of Historical Research in Marketing, 13:3–4, pp. 161–87, https://doi.org/10.1108/jhrm-12-2020-0058.
  77. Pavlounis, D., Pashby, K. and Sanchez Morales, F. (2023), ‘Linking digital, visual, and civic literacy in an era of mis/disinformation: Canadian teachers reflect on using the questioning images tool’, Education Inquiry, pp. 1–18, https://doi.org/10.1080/20004508.2023.2292828.
  78. Ratzan, A., Siegel, M., Karanian, J. M., Thomas, A. K. and Race, E. (2024), ‘Intrinsic functional connectivity in medial temporal lobe networks is associated with susceptibility to misinformation’, Memory, 32:10, pp. 1358–70, https://doi.org/10.1080/09658211.2023.2298921.
  79. Řehůřek, R. and Sojka, P. (2010), ‘Software framework for topic modelling with large corpora’, in LREC 2010: New Challenges for NLP Frameworks Programme, Valletta, Malta, 22 May, Luxembourg: ELRA Language Resources Association, pp. 44–50, http://www.lrec-conf.org/proceedings/lrec2010/workshops/W10.pdf. Accessed 21 October 2025.
  80. Robinson, L., Trammel, J. M. and Moles, K. (2023), ‘[De]politicizing the pandemic: Visually communicating digital public sociology’, American Behavioral Scientist, 69:9, pp. 1177–91, https://doi.org/10.1177/00027642231156769.
  81. Rodríguez-Serrano, A., Soler-Campillo, M. and Marzal-Felici, J. (2021), ‘Fact checking audiovisual en la era de la posverdad. ¿Qué significa validar una imagen?’, Revista Latina de Comunicación Social, 79, pp. 19–42, https://doi.org/10.4185/rlcs-2021-1506.
  82. Russmann, U., Svensson, J. and Larsson, A. O. (2019), ‘Political parties and their pictures: Visual communication on Instagram in Swedish and Norwegian election campaigns’, in A. Veneti, D. Jackson and D. G. Lilleker (eds), Visual Political Communication, Cham: Springer, pp. 119–44.
  83. Saito, J. M., Bae, G.-Y. and Fukuda, K. (2024), ‘Judgments during perceptual comparisons predict distinct forms of memory updating’, Journal of Experimental Psychology: General, 153:1, pp. 38–55, https://doi.org/10.1037/xge0001469.
  84. Sanchez-Acedo, A., Carbonell-Alcocer, A., Gertrudix, M. and Rubio-Tamayo, J.-L. (2024), ‘The challenges of media and information literacy in the artificial intelligence ecology: Deepfakes and misinformation’, Communication and Society, 37:4, pp. 223–39, https://doi.org/10.15581/003.37.4.223-239.
  85. Schill, D. (2012), ‘The visual image and the political image: A review of visual communication research in the field of political communication’, Review of Communication, 12:2, pp. 118–42, https://doi.org/10.1080/15358593.2011.653504.
  86. Schultz, F. (2024), ‘Source matters: The impact of visual cues on perceived source credibility and belief in disinformation on short video platforms’, bachelor thesis, Enschede: University of Twente, http://essay.utwente.nl/100477/. Accessed 21 March 2025.
  87. Sharma, D. K. and Garg, S. (2023), ‘IFND: A benchmark dataset for fake news detection’, Complex and Intelligent Systems, 9:3, pp. 2843–63, https://doi.org/10.1007/s40747-021-00552-1.
  88. Singh, S., Gagal, S. and Chakladar, A. (2024), ‘A cross-sectional comparative study analyzing the quality of YouTube videos as a source of information for treatment of erectile dysfunction in English and Hindi language’, Sexuality and Culture, 28:4, pp. 1588–602, https://doi.org/10.1007/s12119-023-10194-9.
  89. Steinfeld, N. (2023), ‘How do users examine online messages to determine if they are credible? An eye-tracking study of digital literacy, visual attention to metadata, and success in misinformation identification’, Social Media + Society, 9:3, https://doi.org/10.1177/20563051231196871.
  90. Lindsay, D. S., Allen, B. P., Chan, J. C. K. and Dahl, L. C. (2004), ‘Eyewitness suggestibility and source similarity: Intrusions of details from one event into memory reports of another event’, Journal of Memory and Language, 50:1, pp. 96–111, https://doi.org/10.1016/j.jml.2003.08.007.
  91. Sui, M., Hawkins, I. and Wang, R. (2023), ‘When falsehood wins? Varied effects of sensational elements on users’ engagement with real and fake posts’, Computers in Human Behavior, 142, https://doi.org/10.1016/j.chb.2023.107654.
  92. Sultan, T., Rony, M. A. T., Islam, M. S., Aldosary, S. and El-Shafai, W. (2024), ‘MemesViTa: A novel multimodal fusion technique for troll memes identification’, IEEE Access, 12, pp. 177811–28, https://doi.org/10.1109/access.2024.3505614.
  93. Sultănescu, D. C. (2022), ‘War of the words: The online conversation about NATO in Romania: Communicators, content, communities’, Romanian Journal of Communication and Public Relations, 24:1, pp. 25–46, https://doi.org/10.21018/rjcpr.2022.1.338.
  94. Tan, D. H. and Jiang, Y. V. (2020), ‘Tell me what you saw: The usefulness of verbal descriptions for others’, Quarterly Journal of Experimental Psychology, 73:8, pp. 1227–41, https://doi.org/10.1177/1747021820915356.
  95. Tang, Z., Goh, D. H.-L., Lee, C. S. and Yang, Y. (2024), ‘Understanding strategies employed by seniors in identifying deepfakes’, Aslib Journal of Information Management, 76:5, https://doi.org/10.1108/AJIM-03-2024-0255.
  96. Thomson, T. J., Angus, D., Dootson, P., Hurcombe, E. and Smith, A. (2022), ‘Visual mis/disinformation in journalism and public communications: Current verification practices, challenges, and future opportunities’, Journalism Practice, 16:5, pp. 938–62, https://doi.org/10.1080/17512786.2020.1832139.
  97. Vaccari, C. and Chadwick, A. (2020), ‘Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news’, Social Media + Society, 6:1, https://doi.org/10.1177/2056305120903408.
  98. Vitale, S. G., Angioni, S., Saponara, S., Sicilia, G., Mignacca, A., Caiazzo, A., De Franciscis, P. and Riemma, G. (2025), ‘Hysteroscopic metroplasty and its reproductive impact among the social networks: A cross-sectional analysis on video quality, reliability and creators’ opinions on YouTube, TikTok and Instagram’, International Journal of Medical Informatics, 195, https://doi.org/10.1016/j.ijmedinf.2024.105776.
  99. Vraga, E. K., Kim, S. C., Cook, J. and Bode, L. (2020), ‘Testing the effectiveness of correction placement and type on Instagram’, The International Journal of Press/Politics, 25:4, pp. 632–52, https://doi.org/10.1177/1940161220919082.
  100. Wang, L., Yue, M. and Wang, G. (2023), ‘Too real to be questioned: Analysis of the factors influencing the spread of online scientific rumors in China’, Sage Open, 13:4, https://doi.org/10.1177/21582440231215586.
  101. Wardle, C. (2018), ‘The need for smarter definitions and practical, timely empirical research on information disorder’, Digital Journalism, 6:8, pp. 951–63, https://doi.org/10.1080/21670811.2018.1502047.
  102. Weikmann, T. and Lecheler, S. (2023), ‘Visual disinformation in a digital age: A literature synthesis and research agenda’, New Media and Society, 25:12, pp. 3696–713, https://doi.org/10.1177/14614448221141648.
  103. Weston, S. J., Shryock, I., Light, R. and Fisher, P. A. (2023), ‘Selecting the number and labels of topics in topic modeling: A tutorial’, Advances in Methods and Practices in Psychological Science, 6:2, https://doi.org/10.1177/25152459231160105.
  104. Xu, R., Nagothu, D. and Chen, Y. (2021), ‘Decentralized video input authentication as an edge service for smart cities’, IEEE Consumer Electronics Magazine, 10:6, pp. 76–82, https://doi.org/10.1109/mce.2021.3062564.
  105. Yamashita, M. (1996), ‘A re-examination of the misinformation effect by means of visual and verbal recognition tests’, Japanese Psychological Research, 38:1, pp. 47–52, https://doi.org/10.1111/j.1468-5884.1996.tb00007.x.
  106. Yan, C., Pu, K. and Luo, X. (2024), ‘Knowledge mapping of information cocoons: A bibliometric study using visual analysis’, Journal of Librarianship and Information Science, 57:2, pp. 402–17, https://doi.org/10.1177/09610006231222628.
  107. Yang, Y., Davis, T. and Hindman, M. (2023), ‘Visual misinformation on Facebook’, Journal of Communication, 73:4, pp. 316–28, https://doi.org/10.1093/joc/jqac051.
  108. Zhu, B., Chen, C., Shao, X., Liu, W., Ye, Z., Zhuang, L., Zheng, L., Loftus, E. F. and Xue, G. (2019), ‘Multiple interactive memory representations underlie the induction of false memory’, Proceedings of the National Academy of Sciences of the United States of America, 116:9, pp. 3466–75, https://doi.org/10.1073/pnas.1817925116.