Volume 14, Issue 1
  • ISSN: 2042-7875
  • E-ISSN: 2042-7883

Abstract

This study examines the use of enhanced trackable markings in semi-automatic animation rotoscoping with EbSynth, a free, open-source application, to uncover new efficiencies and streamline the animation workflow. Unlike generative artificial intelligence (AI) applications such as Runway, Stable Diffusion and Sora, which rely on machine learning (ML) and neural network (NN) techniques, EbSynth uses example-based synthesis (EBS) algorithms to generate images for animation sequences from one or more keyframe inputs. We investigate how distinctive facial markings – dotted markers, contour lines and character-specific markings – affect the manual labour involved in frame-by-frame editing in semi-automatic rotoscoping, particularly where the animated character’s design deviates significantly from the live-action actor’s facial features. Through a series of controlled experiments with an actress performing various facial movements and expressions, we analysed the effectiveness of these markings in improving accuracy and reducing the need for subsequent adjustments in rotoscoped sequences. The research addresses a gap in empirical studies on the practical application of semi-automatic tools in animation, particularly in optimizing the rotoscoping process. By detailing the impact of character-specific trackable markings, this study contributes to the academic field while providing practical guidelines for animators seeking to integrate semi-automation technologies such as EbSynth into their creative processes.

DOI: 10.1386/ap3_00061_1
2026-02-25
2026-04-22


References

  1. Adobe (n.d.), ‘Animated lip-syncing powered by Adobe AI’, https://www.adobe.com/creativecloud/video/discover/animation-lip-sync.html. Accessed 21 November 2024.
  2. Appel, G., Neelbauer, J. and Schweidel, D. A. (2023), ‘Generative AI has an intellectual property problem’, Harvard Business Review, 7 April, https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem. Accessed 20 November 2024.
  3. Bakshi, R. (1983), Fire and Ice, USA: Producers Sales Organization, Polyc International BV and 20th Century Fox.
  4. Burgos Risco, A., Luis Tello, J. J., Gallardo, J., Baldassarri, S., Lacuesta, R., Reyes, A., Hernandez, S. and Albiol, S. (2022), ‘Approach to digital rotoscoping by means of painting in motion using EbSynth: Human–machine art creation experience’, XXII International Conference on Human Computer Interaction, Teruel, Spain, 7–9 September, New York: ACM, pp. 1–6.
  5. Crafton, D. (1993), Before Mickey: The Animated Film, Chicago, IL: University of Chicago Press.
  6. EbSynth (n.d.), ‘FAQ’, https://ebsynth.com/faq.html. Accessed 12 December 2023.
  7. Fišer, J., Jamriška, O., Simons, D., Shechtman, E., Lu, J., Asente, P., Lukáč, M. and Sýkora, D. (2017), ‘Example-based synthesis of stylized facial animations’, ACM Transactions on Graphics, 36:4, pp. 1–11, https://doi.org/10.1145/3072959.3073660.
  8. Garwood, I. (2012), ‘Roto-synchresis: Relationships between body and voice in Rotoshop animation’, Animation, 7:1, pp. 39–57, https://doi.org/10.1177/1746847711428850.
  9. Hand, D., Sharpsteen, B., Jackson, W., Pearce, P., Cottrell, W. and Morey, L. (1937), Snow White and the Seven Dwarfs, USA: Walt Disney Productions.
  10. Hazelton, J. (2024), ‘Seven years and 151,000 computers: The epic story behind Pixar’s Elemental’, Screen Daily, 10 January, https://www.screendaily.com/features/seven-years-and-151000-computers-the-epic-story-behind-pixars-elemental/5189240.article#:~:text=Now%20that%20the%20NST%20technique,on%20other%20upcoming%20Pixar%20projects. Accessed 5 October 2024.
  11. Insider (2020), ‘How Netflix’s Klaus made 2D animation look 3D’, YouTube, 30 January, https://www.youtube.com/watch?v=BlU49dJhfcw. Accessed 15 November 2024.
  12. Lang, J. (2024), ‘Artists share AI experiences and anxieties in new video essay: “AI vs artists: The biggest art heist in history”’, Cartoon Brew, 3 November, https://www.cartoonbrew.com/tools/artists-share-ai-experiences-and-anxieties-in-new-video-essay-ai-vs-artists-the-biggest-art-heist-in-history-238907.html. Accessed 18 November 2024.
  13. Linklater, R. (2001), Waking Life, USA: Fox Searchlight Pictures.
  14. Linklater, R. (2006), A Scanner Darkly, USA: Warner Independent Pictures.
  15. Matsumoto, D. and Ekman, P. (2008), ‘Facial expression analysis’, Scholarpedia, 3:5, p. 4237, https://doi.org/10.4249/scholarpedia.4237.
  16. McCarthy, J., Minsky, M. L., Rochester, N. and Shannon, C. E. (2006), ‘A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955’, AI Magazine, 27:4, p. 12, https://doi.org/10.1609/aimag.v27i4.1904.
  17. Meyer, F. (2023), ‘How AI technology animates the fire creatures in the latest Pixar movie’, Sciena: Swiss Science Today, 22 June, https://www.sciena.ch/tech-transfer/how-ai-technology-from-eth-animates-the-fire-creatures-in-the-latest-pixar-movie.html#:~:text=For%20the%20Pixar%20film%20Elemental,developed%20the%20technology%20and%20used. Accessed 5 October 2024.
  18. Pablos, S. (2019), Klaus, Spain: Netflix.
  19. Pointer, R. (2017), The Art and Inventions of Max Fleischer: American Animation Pioneer, Jefferson, NC: McFarland.
  20. Runway (2024), ‘Introducing act-one’, 22 October, https://runwayml.com/research/introducing-act-one. Accessed 14 October 2025.
  21. Sohn, P. (2023), Elemental, USA: Walt Disney Studios Motion Pictures.
  22. Synthetik (n.d.), ‘Auto rotoscoping & video effects: Create moving art and animation automatically’, https://synthetik.com/auto-rotoscoping-feature/. Accessed 21 November 2024.
  23. Thomas, F. and Johnston, O. (1981), The Illusion of Life: Disney Animation, New York: Abbeville Press.
  24. Valdersnes, L. (2022), ‘Proposing an improved workflow for independent animation-making in EbSynth through introducing enhanced trackable patterns and textures’, MA thesis, Volda: Volda University College.
  25. Ward, P. (2012), ‘Rotoshop and the work of Bob Sabiston’, Animation, 7:1, pp. 56, https://doi.org/10.1177/1746847712438955.
  26. Yixuan, L., Harun, A. and Dongyu, R. (2024), ‘AI-driven animation creation: Application exploration and potential risks’, 5th International Conference on Artificial Intelligence and Data Sciences (AiDAS), Bangkok, Thailand, 3–4 September, Piscataway, NJ: IEEE, pp. 1–6.