Software can now copy your voice and alter your facial features in a video. Is this the future of fake news?
Technology has been blurring conceptual lines for years, threatening human workforces, encroaching on privacy and altering our perceptions of society and social interaction. Artificial intelligence (AI) is even infringing on abilities many consider uniquely human, including creativity. An AI developed at Rutgers can now generate original (and beautiful) artworks, and Amper Music's AI can compose background music in any tempo, instrumentation or mood on demand.
But one common fear is the manipulation of content designed to entertain or carry a message to the masses. As with many discussions of media, the term "fake news" creeps in, and it is perfectly apt to describe many of the most influential forms of advertising and entertainment that have been with us for years.
1987 saw the first photoshopped image shared with the world, and it had honest beginnings. John Knoll, working at Lucasfilm's Industrial Light and Magic, took a photo of his girlfriend before he proposed, and found that he could enhance it using Pixar's resources. This work developed into the now-famous Photoshop, but that milestone has since been surpassed. Software engineering has reached the point where an on-screen figure or character can directly mimic the movements of a subject in another video.
A program presented at a 2018 computer graphics conference demonstrated this by mapping Obama's movements in a TV interview onto Putin, fooling around half of the test audience. That may not sound terribly impressive to readers well aware of the power of modern computer graphics; an audience with less awareness of the topic, however, and a motivation to believe doctored footage, would be far more likely to share such content.
This becomes even more concerning when combined with freely available software like Lyrebird.ai, which can, from mere minutes of sample audio, produce an eerily realistic replication of a person's voice. I have tested this myself: with good audio quality and over 10 minutes of data input, it produced fluent speech in my own voice. The inevitable next step is to combine these programmes and create full-length, entirely fake interviews with politicians or celebrities. The future of fake news is bright, and we will have to learn to watch ever more closely.