AI and Deep Fake Videos
The first time Adobe showed me Photoshop, I was fascinated by its potential. The ability to adjust a picture to make it better has become an essential tool for professionals, especially in graphics, entertainment, advertising, and many other fields.
When I needed some professional photos for my bio and the speaker brochures used when I am asked to speak, the photographer used Photoshop to remove a slight bit of under-the-chin fat and make my face more proportionally pleasing. In this case, I was happy for Photoshop.
However, when they showed me Photoshop before it was released, I pointed out that it could also be used to doctor photos and create false images out of real ones. Of course, that is what has happened over the years. Now a new type of tool in a similar vein will soon come to market that can do the same thing for videos.
Luke Dormehl of Digital Trends wrote about a presentation at SIGGRAPH 2018 in Vancouver, BC, a few weeks back, covering new research presented there by Germany’s Max Planck Institute for Informatics on what are called “deep fake” videos:
“They have created a deep-learning A.I. system which can edit the facial expression of actors to match dubbed voices accurately. Also, it can tweak gaze and head poses in videos, and even animate a person’s eyes and eyebrows to match up with their mouths — representing a step forward from previous work in this area.
“It works by using model-based 3-D face performance capture to record the detailed movements of the eyebrows, mouth, nose, and the head position of the dubbing actor in a video,” Hyeongwoo Kim, one of the researchers from the Max Planck Institute for Informatics, said in a statement. “It then transposes these movements onto the ‘target’ actor in the film to accurately sync the lips and facial movements with the new audio.”
The researchers suggest that one possible real-world application for this technology could be in the movie industry, where it could carry out tasks like making it easy and affordable to manipulate footage to match a dubbed foreign vocal track. This would have the effect of making movies play more seamlessly around the world, compared with today where dubbing frequently results in a (sometimes comedic) mismatch between an actor’s lips and the dubbed voice.
Still, it’s difficult to look at this research and not see the potential for the technology being misused. Along with other A.I. technologies that make it possible to synthesize words spoken in, say, the voice of Barack Obama, the opportunity for this to make the current fake news epidemic look paltry in comparison is unfortunately present. Let’s hope that proper precautions are somehow put in place for regulating the use of these tools.”
While I understand its value to the movie industry, as Mr. Dormehl points out, its potential to create fake videos and fake news could be staggering. In my work, I am quoted many times a week by the media on news stories. Over the years I have been lucky that most reporters quote me as stated, and only a few times have I been misquoted in print. I also do a great deal of commentary on national and local TV shows around tech topics, and these are usually taped. While a few of my comments have been taken out of context, the actual video of what I shared has never been altered. I would hate to have someone put words in my mouth that I did not say, but that is child’s play compared to how this technology could be used for nefarious purposes.
Imagine someone using this technology to post a video of a major world leader falsely declaring war on an enemy. Alternatively, someone could take a video of a person and insert their own words into it to push a political agenda, or even threaten someone in a way that ends up impacting that person’s life.
Although I was not aware of this particular research from the Max Planck Institute until recently, I saw this kind of technology years ago when I visited a tech lab in the Bay Area that was working on something similar, focused on a military application. At that time I got a glimpse of how this could work and observed to my hosts that it could be used for both good and evil.
This AI-based video-editing technology will come to market because there are legitimate applications for it, especially in the world of moviemaking. However, I sure hope that with it comes some form of checks and balances that will keep it out of the hands of non-professionals.
At this point, it appears that this is a technology demo and not yet a product from a specific company. I suspect we will find out relatively soon what type of company may license it and how it will be used for commercial purposes.
This type of technology is scary given the plethora of fake news and images already posted through all kinds of mediums. Imagine how convincing fake videos could become in the future, and the potential they hold for counterfeit footage created for evil purposes.