Another Kind of Magic: Méliès, Mercury, and Machine Learning

Queen’s “Heaven for Everyone” from their album Made in Heaven (1995).

The music video for Queen’s “Heaven for Everyone” from their then-final record Made in Heaven (1995) - a song that originally appeared on Shove It (1991), an album by drummer Roger Taylor’s side project The Cross, featuring Freddie Mercury as a guest vocalist - somewhat surprisingly includes footage from Georges Méliès’ early ‘trick’ films A Trip to the Moon (1902) and The Impossible Voyage (1904). With Mercury himself having died in November 1991 from an AIDS-related illness, the video, directed by David Mallet, comprises footage of colourful tributes left to the Queen frontman outside his home in Kensington, London, alongside shots of the band playing live (largely taken from their 1986 show at Wembley Stadium as part of their global Magic Tour), all superimposed onto sequences from Méliès’ most famous shorts (Fig. 1). The other-worldly adventure narratives displayed across Méliès’ foundational fantasy and effects-driven animated works appropriately match the theme and context of Taylor’s dance ballad, which looks back on “these days of cool reflection” and laments the “troubled days of cruel rejection” at a time when Queen had all but disbanded, with Made in Heaven marking the original line-up’s farewell (the remaining three members - Taylor, Brian May, and John Deacon - would play together live only once more, in 1997, with Elton John on vocals).

Fig. 1 - Mercury meets Méliès.

Fig. 2 - Mercury meets Metropolis.

Fig. 3 - May meets Méliès.

Throughout the new sound/image relations conjured in the animated fantasy video for “Heaven for Everyone,” Mercury’s vocals coupled with May’s guitar solo provide something of an unusual soundtrack to the stop-motion animation, fantastical imagery of flying steam trains, and in-camera effects for which Méliès is best known (Fig. 3). Yet Queen were certainly no strangers to big-screen histories of technological spectacle, and regularly took their inspiration from popular effects imagery as they, like the rest of the industry, fully embraced the music video format throughout the 1980s. The band’s video for “Radio Ga Ga” (also written by Taylor) from their album The Works (1984) intercuts the band’s performance with extended footage taken from Fritz Lang’s silent film Metropolis (1927) (Fig. 2), another film widely credited as a pioneer in the development of early animated and science-fiction effects (see Elsaesser 2000) (an image from Lang’s film also adorns the cover of Katharina Loew’s recent book Special Effects and German Silent Film: Techno-Romantic Cinema [2021]). As J.P. Telotte argues, despite its “images of a glittering and powerful technology,” Metropolis nonetheless “seems self-conscious about how these images can make us desire the very technological developments whose dangers it so clearly details” (1995: 59).

Queen - “A Kind of Magic” (1986)

Fig. 4 - Queen’s “The Invisible Man” (1989).

Other animated techniques abound across Queen’s back catalogue. For the band’s later video for “A Kind of Magic” (1986) (see left), Mercury shares the screen with a caricatured cel-animated crowd and colourful backing dancers at a time when such stylistic hybridity was about to come fully into the mainstream, first with Who Framed Roger Rabbit (Robert Zemeckis, 1988) and then with Ralph Bakshi’s Cool World (1992) a few years later. Taken from their 1989 album The Miracle, the synth-pop track “The Invisible Man” (1989) placed the band within the aesthetic style of colourful 8-bit computer graphics (Fig. 4), reflecting the growing 1980s videogame boom and the arrival of personal consoles (this was also the period when a number of Saturday morning cartoons were being adapted from popular videogame characters, including Pac-Man and Super Mario Bros).

However, as Mercury’s health waned in the latter part of the 1980s and into the 1990s, animation provided an effective counterpoint - even respite - to calls for the band to get back on the road and play live; they could instead recreate the kind of spectacle familiar from their sell-out stadium shows through the visual trickery afforded by the animated medium. The video for the title song on the album Innuendo (1991) combined stop-motion animation with sophisticated rotoscoping techniques to render both new and archival musical performances in an innovative graphic form inspired by French illustrator and caricaturist Jean-Jacques Grandville. As recently as September 2019, the posthumous re-release of Mercury’s solo hit “Love Me Like There’s No Tomorrow” (1985) featured a new animated music video that served to promote the singer’s Never Boring compilation album, which would arrive the following month. Directed by Esteban Bravo and Beth David, with animation from the Berlin-based studio Woodblock, the poignant video tells a love story between two men set against the backdrop of the global fight against AIDS in the 1980s. As the line-drawn images depicting the couple’s romance shift from the euphoria of dancing at a nightclub to the metaphorical visualisation of illness as it ravages their animated bodies, the video uses animation as a medium of catharsis to honour Mercury’s “life and legacy in a compelling short story about the power of love” (Peacock 2019).

Queen - “Radio Ga Ga” (1984).

Queen - “Innuendo” (1991)

The animated music video for Freddie Mercury’s “Love Me Like There’s No Tomorrow.”

Ctrl Shift Face - Freddie Mercury DeepFake VFX Breakdown.

Freddie sings “The Final Countdown” thanks to AI.

Yet fast forward to 2024, and the recent announcement that Mercury might just be about to return to the stage thanks to the latest advancements in digital holography appears to mark the fullest realisation of the band’s longstanding association with animated imagery, if not the monstrous and unruly culmination of the technology’s posthumous potential. Indeed, rumours of Mercury’s return follow in the familiar footsteps of a wider industrial and cultural fascination with using sophisticated computer graphics to create ‘live’ stage shows of long-deceased performers (think Tupac at the 2012 Coachella Music Festival, Michael Jackson’s Billboard Music Awards performance in 2014, or “An Evening with Whitney: The Whitney Houston Hologram Concert” from 2020), which in their own uncanny ways tread through the ethical minefield of using digital animation to magic up posthumous - and seemingly interactive - stage performances.

Mercury’s possible digitally-mediated reinvention for a series of ‘live’ concerts additionally speaks to the singer’s ongoing status as an intriguing testing ground for CG trickery. Online video artist Ctrl Shift Face’s breakdown of his Deepfake reskinning process (see right) shows how Mercury’s distinctive physiognomy can be grafted onto that of Rami Malek, in a nod to Malek’s impersonation of Mercury in the much-maligned musical biopic Bohemian Rhapsody (Bryan Singer, 2018). More recently, the acceleration of artificial or synthetic speech technology to create AI-generated vocality has focused particularly on re-conjuring Mercury’s famously powerful four-octave range. The result is that Mercury has ‘appeared’ singing everything from a-ha’s “Take on Me” and Journey’s “Don’t Stop Believin’” to “The Final Countdown” by Europe and (incredibly) Céline Dion’s “My Heart Will Go On.” More intriguing is that many of these unofficial cover versions, in which Mercury appears to offer his rendition of a number of global hits, actually use a database of vocal material sung by Marc Martel, a Canadian singer noted for his remarkable vocal likeness to the Queen frontman (and who provided partial vocal doubling for Mercury’s singing voice in the Bohemian Rhapsody film).

AI voicework, which allows the computer to create convincing vocalisations via machine learning, functions as the latest in a series of ‘noises’ made around new media culture and its creators, confirming the rising importance of vocality in framing the possibilities for art, craft, and creativity engendered by the machine as a viable and valuable co-author. The technological reconstitution of the singer as a sophisticated new digital asset via archival voice recordings and soundalikes (such as Martel) fully reflects the spectrum of remixed, restored, and digitally-mediated voices that have come to populate a wide range of mainstream media content, including both ‘live’ virtual holograms and the growing popularity of online AI remix videos. Given that the recent SAG-AFTRA industrial action concerning the scanning of stars’ images has only further intensified key issues around performer labour and embodiment, the question of who ‘owns’ Mercury’s image is one with a more intriguing answer now that he essentially exists as a valuable data asset.

The troubling sound of an AI-generated Mercury (and particularly his version of “Let It Go” from Frozen, which is as bizarre as it sounds) provides the latest example of how popular stars of stage and screen have been made to speak, sing, and splutter through a variety of technological encounters with computer processing, and how innovations in speech synthesis technology and machine learning have permitted digital voices to convincingly impersonate the vocalisations and speech patterns of recognisable star bodies. In an era where AI systems are being monitored closely for the computer’s ability to intervene into the celluloid image, Mercury’s ongoing and seemingly neverending digitisation fully reflects the power of the voice and vocality in crafting celebrities - even posthumously - as figures of trust, support, safety, and benevolence when it comes to rapid technological change. Not only does the potentially lucrative digital revival of Mercury further secure Queen’s playful connection to a culture of technological innovation thanks to their videos’ citations of animated fantasy history, but it equally complicates the presumed agency of celebrity avatars from the world of entertainment who are charged with managing our understanding of what happens when computers speak, and what they can – and should – be saying. With the verbal and the virtual becoming increasingly tangled within the contemporary media ecology, what next for posthumous performances that are convincingly crafted in sound and image? As Mercury himself sang, who wants to live forever, anyway?

**Article published: March 8, 2024**

References

Elsaesser, Thomas. 2000. Metropolis (BFI Film Classic). London: British Film Institute.

Loew, Katharina. 2021. Special Effects and German Silent Film: Techno-Romantic Cinema. Amsterdam: Amsterdam University Press.

Peacock, Tim. 2019. “Watch the new video for Freddie Mercury’s ‘Love Me Like There’s No Tomorrow.’” udiscovermusic (September 5, 2019), available at: https://www.udiscovermusic.com/news/freddie-mercury-video-no-tomorrow/.

Telotte, J.P. 1995. Replications: A Robotic History of the Science Fiction Film. Urbana: University of Illinois Press.

Biography

Christopher Holliday is Lecturer in Liberal Arts and Visual Cultures Education at King’s College London, where he teaches Film Studies and Liberal Arts and specializes in Hollywood cinema, animation history, and contemporary digital media. His research is largely concerned with digital technologies and forms of computer animation in contemporary visual culture, and he has published work on topics related to the computer-animated film, digital visual effects, Deepfakes, and digital de-aging for Animation Practice, Process & Production, animation: an interdisciplinary journal (where he is also Associate Editor), Convergence: The International Journal of Research into New Media Technologies, and the Journal of Cinema and Media Studies. His work on popular media has also appeared in several edited collections and in the Journal of British Cinema and Television, Journal of Popular Film and Television, The London Journal, and Early Popular Visual Culture. Christopher is the author of The Computer-Animated Film: Industry, Style and Genre (EUP, 2018) and co-editor of the anthologies Fantasy/Animation: Connections Between Media, Mediums and Genres (Routledge, 2018) and Snow White and the Seven Dwarfs: New Perspectives on Production, Reception, Legacy (Bloomsbury, 2021). He is currently co-editing two books: one on the multimedia performativity of animation (with Annabelle Honess Roe), and another (with David McGowan) on characters and aesthetics for the forthcoming Bloomsbury series The Encyclopedia of Animation Studies. He can also be found as the curator and creator of www.fantasy-animation.org.