Half a century after the band split, the remaining members of the Beatles reunited to release their final song “Now and Then” on Thursday. The song brought together the iconic former bandmates John Lennon, Paul McCartney and Ringo Starr, with one catch — Lennon died in 1980.
John Lennon recorded “Now and Then” as a home demo tape in the late ’70s. With the blessing of Lennon’s wife Yoko Ono, his bandmates McCartney, Starr and the since-deceased George Harrison reconvened in the 1990s to finish Lennon’s demos “Free as a Bird” and “Real Love” for their Beatles anthology collection, featuring grainy Lennon vocals sourced straight from the original tape. However, the low-fidelity recording of “Now and Then,” plagued by audio defects, led the band to scrap plans for its release — until now, thanks to artificial intelligence.
Thirty years later, AI is transforming music, including the way we listen to the Beatles. McCartney and Starr collaborated with producers and engineers, using AI-powered audio restoration tools to clean Lennon’s fuzzy vocal recording and create a remarkably crisp, new-sounding track. Lennon’s eerily clear voice hauntingly meshes with his former bandmates’ instrumentation to produce a peculiar and poignant posthumous record — at its best, a nostalgic epilogue to the Beatles’ discography, and at its worst, a Frankensteinian resurrection of John Lennon venturing deep into the uncanny valley.
The development of AI has opened a world of possibility for music — but it’s also opened up Pandora’s box, a foreboding omen of the musical metamorphosis to come. McCartney and Starr, along with filmmaker Peter Jackson, producer Giles Martin and others, have rejuvenated bygone eras of the Beatles using AI restoration techniques, making once-impossible projects like the “Get Back” documentary and remastered “Revolver” special edition a reality. But even today, while generative AI is in its infancy, its capabilities go far beyond restoring old audio — launching music and art itself into uncharted territory.
Since the groundbreaking emergence of ChatGPT, DALL·E and other AI tools in recent years, AI-generated content has flooded the internet. The latest trend in music communities is “AI covers” — the practice of using AI to analyze and mimic a singer’s voice and insert it into a song of the creator’s choice. Head over to YouTube to hear your favorite dead singer belt out a tune after their time — perhaps Frank Sinatra singing “Gangsta’s Paradise” or Freddie Mercury’s take on “All I Want For Christmas Is You.” It’s not just singers — anyone with a voice is vulnerable to AI appropriation, whether it’s Patrick Star nailing “Thriller” or Joe Biden and Donald Trump duetting “Wonderwall.”
AI covers may be entertaining, but their repercussions have already leaked into the music industry. Last April, the song “Heart On My Sleeve,” featuring vocals from Drake and The Weeknd, went viral, with a curious caveat — neither artist was involved with the song, and their vocals were AI-generated. The song had many listeners fooled with its sleek production and emulation of the duo’s auto-tuned vocals and lyrical flows.
The artists’ representatives promptly had the song removed from streaming platforms, dooming it to the void of pop culture obscurity, but “Heart On My Sleeve” represented a milestone in the story of AI — a moment when artists publicly faced their automated doppelgängers, and we could hardly tell the difference. Generative AI will improve at breakneck speed in the coming years, outpacing humans’ capacity to discern genuine from generated.
The proliferation of AI has handed humanity a Promethean power to make itself obsolete. We’re already witnessing the repercussions. Economists are speculating about the next industry to be swallowed by AI labor. Social media misinformation is becoming more sophisticated than ever, with AI impersonating politicians during election seasons. Perhaps the most dire target of generative AI is the arts, which face an existential threat as AI learns to imitate the inimitable — human creativity.
What becomes of a singer who no longer owns their own voice, or of a songwriter when a computer can compose a symphony in seconds? AI image generation drew backlash from artists who claimed that AI steals art by training models on existing artworks. Fears over generative AI partially prompted the screenwriters’ strike in Hollywood, with their new contract including provisions to limit its use in filmmaking. AI raises a million questions about intellectual property, the legal area governing ownership of ideas, creativity and innovation.
It’s difficult to conceptualize how AI will change the world, but it isn’t implausible to predict that AI will approach and exceed human capabilities, effectively erasing large swaths of the labor force. That isn’t the end of the world — from the invention of the wheel, we’ve built and harnessed technology to eliminate work for us humans. Societies adapt to developments in technology, and though automation renders many jobs obsolete, it opens the door for the inception of new ones.
The most important question in the coming decades will be how to build a society and economy in which computers have replaced the workforce and human labor is a thing of the past. If AI is the Wild West, governments are the sheriff. It’s imperative that lawmakers rein in AI to ensure that human interests remain protected. The idea of regulating AI is largely unprecedented, like the technology itself, but last week, the Biden administration unveiled a sweeping executive order creating the framework for federal AI regulations. The policy seeks to bolster transparency among AI developers, promote research and innovation in the AI sphere and defend workers and consumers from AI threats.
AI technology will evolve, but our policies must evolve with it. As AI ushers in a new, extraordinary era, it’s fundamental that we remember our humanity.
Patrick Swain begrudgingly removes Oxford commas as the culture editor of The Pitt News. Contact him at [email protected].