Spatial audio production for immersive media experiences: Perspectives on practice-led approaches to designing immersive audio content

Daniel Turner, Damian Murphy, Chris Pike and Chris Baume

The Soundtrack, Volume 13, Issue 1, October 2021
  • ISSN: 1751-4193
  • E-ISSN: 1751-4207
  • DOI: 10.1386/ts_00017_1

Abstract

Sound design with the goal of immersion is not new. However, sound design for immersive media experiences (IMEs) utilizing spatial audio can still be considered a relatively new area of practice, with less well-defined methods that demand a new and still-emerging set of skills and tools. There is, at present, a lack of formal literature on the challenges introduced by this relatively new content form and the tools used to create it, and on how these may differ from audio production for traditional media. Through semi-structured interviews and an online questionnaire, this article explores what audio practitioners view as the defining features of IMEs, the challenges of creating audio content for IMEs and how current practices for traditional stereo production are being adapted for use within 360° interactive soundfields. It also highlights potential directions for future research and technological development, and the importance of practitioner involvement in research and development in ensuring that future tools and technologies satisfy current needs.

Funding
This study was supported by an EPSRC iCASE Ph.D. studentship (Award EP/S513945/1).
This article is Open Access under the terms of the Creative Commons Attribution 4.0 International licence (CC BY 4.0), which permits unrestricted use, distribution and reproduction in any medium, provided the original work is properly cited. The CC BY licence permits commercial and non-commercial reuse. To view a copy of the licence, visit https://creativecommons.org/licenses/by/4.0/.
