The Best (And Scariest) Examples Of AI-Enabled Deepfakes
There are positive uses for deepfake technology, such as creating digital voices for people who have lost theirs or updating film footage instead of reshooting it when actors trip over their lines. However, the potential for malicious use is of grave concern, particularly as the technology gets more sophisticated. The quality of deepfakes has improved enormously since the first products of the technology circulated just a few years ago. Since that time, some of the scariest examples of artificial intelligence (AI)-enabled deepfakes have technology leaders, governments, and the media talking about the perils they could create for communities. </p><figure class="image-embed embed-0" role="presentation"><div><img src="https://specials-images.forbesimg.com/imageserve/5d35439d95e0230008f6281f/960x0.jpg?fit=scale" alt="The Best (And Scariest) Examples Of AI-Enabled Deepfakes" data-height="4000" data-width="6000"></div><figcaption><p class="color-body light-text">The Best (And Scariest) Examples Of AI-Enabled Deepfakes</p><small>Adobe Stock</small></figcaption></figure><p>The first exposure to deepfakes for most of the general public happened in 2017, when an anonymous Reddit user posted videos that showed celebrities such as Scarlett Johansson in compromising sexual situations. It wasn’t real-life footage: the celebrity’s face and the body of a porn actor had been fused together using deepfake technology to make something that never happened appear real. Celebrities and public figures were originally the ones most susceptible to the charade, since the algorithms required ample video footage to create a deepfake, and that footage was readily available for celebrities and politicians.</p><p>When researchers at the University of Washington posted a deepfake of President Barack Obama and circulated it on the Internet, it became clear how such technology could be abused. The researchers were able to make the video of President Obama say whatever they wanted it to say. Imagine what could transpire if nefarious actors presented a deepfake of a world leader as a real communication. It could be a threat to world security. 
With cries of “fake news” commonplace, a deepfake could be created to support any agenda and fool others into believing it is an authentic representation of what someone wants to communicate.</p><p>Other high-profile examples of manipulated video include an altered video of House Speaker Nancy Pelosi, retweeted by President Trump as real, that made it look as if she was drunkenly stumbling over her words. In this case, the timing of the video was slowed to create the effect, but many believed it was a true depiction. <a href="https://fortune.com/2019/06/12/deepfake-mark-zuckerberg/" target="_blank" class="color-link">Two British artists created a deepfake of Facebook CEO Mark Zuckerberg</a> talking to CBS News about the "truth of Facebook and who really owns the future." The video was widely circulated on Instagram and ultimately went viral. </p><p><strong>Deepfake Technology Rapidly Improving</strong></p><p>Deepfake technology is improving faster than many believed it would. In fact, researchers have created a new software tool that allows users to <a href="https://www.theverge.com/2019/6/10/18659432/deepfake-ai-fakes-tech-edit-video-by-typing-new-words" target="_blank" class="color-link">edit the transcript of a video to alter the words</a> coming out of someone’s mouth: adding, changing, or deleting them. 
This technology isn’t available to consumers—yet—but examples of what has been done illustrate the ease with which the tool can be used to alter videos.</p><p><a href="https://web.stanford.edu/~zollhoef/papers/SG2018_DeepVideo/page.html" target="_blank" class="color-link">Deep Video Portraits</a>, a system developed at Stanford University, can manipulate not only facial expressions, like those seen in the President Obama deepfake, but also a myriad of movements, including full 3D head position, eye gaze and blinking, and head rotation, by using <a href="https://bernardmarr.com/default.asp?contentID=1901" target="_blank" class="color-link">generative neural networks</a>. Even though these videos aren’t perfect, they are incredibly photorealistic. This could be hugely beneficial for dubbing a film into another language and, as the researchers acknowledge, could be abused as well.</p><p>Samsung’s AI lab made <a href="https://www.wired.com/story/deepfakes-getting-better-theyre-easy-spot/" target="_blank" class="color-link">Mona Lisa smile and created a “living portrait</a>” of Salvador Dali, Marilyn Monroe, and others, using machine learning to create realistic videos from a single image. The system needs only a few photographs of a real face to create a living portrait, which should concern "ordinary people" who assumed they were immune to deepfakes because there isn’t enough video footage of them to train the algorithms. Samsung’s system shows that realistic videos can be made from general footage of a wide range of people rather than only from video specific to the “star” of the deepfake.</p><p>There are even more disturbing capabilities out there. 
A programmer launched a free, easy-to-use app called <a href="https://www.vox.com/2019/6/27/18761639/ai-deepfake-deepnude-app-nude-women-porn" target="_blank" class="color-link">DeepNude</a> that would take an image of a fully clothed woman and remove her clothes to create nonconsensual pornography. Just days after the app’s release, the anonymous programmer shut it down. It’s hard to imagine any use but misuse for this app.</p><p>So, now that we know the technology is out there and getting more realistic and easier to use, what do we need to do to protect ourselves and others from misuse? That’s a huge question with no easy answers.</p><p>Should social media companies be forced to remove deepfake videos from their networks? Does the intent of the video matter? Is there any way to separate entertainment from maliciousness?</p><p>Some researchers suggest that it’s better for ethical developers to continue to push the envelope with this technology so they can warn people about what’s possible and encourage more critical analysis of video content. Others argue that this work just makes it easier for unethical people to adapt the findings for their own misuse. </p><p>AI might be behind deepfakes, but it can also be instrumental in helping humans detect them. For example, software company Adobe has developed an AI-enabled tool that can spot manipulated images. </p><p>However, we can’t merely rely on software to do the job for us. Deepfake technology is here and getting better every day, so it would be prudent for us all to critically assess the authenticity of the videos we consume and understand their real intent. 
This means not just relying on the quality of the video as an indicator of authenticity but also assessing the <a href="http://nymag.com/intelligencer/2019/06/how-do-you-spot-a-deepfake-it-might-not-matter.html" target="_blank" class="color-link">social context</a> in which it was discovered: who shared it (people and institutions) and what they said about it.</p>
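To make the detection idea concrete: one cue identified in early deepfake-detection research is that first-generation face-swap videos rarely blinked at a natural human rate. The sketch below is a toy illustration of that heuristic only, not Adobe's tool or any production detector; the eye-aspect-ratio (EAR) values, the 0.2 closed-eye threshold, and the blink-rate cutoff are all illustrative assumptions.

```python
# Toy blink-rate heuristic inspired by early deepfake-detection research.
# Input: one eye-aspect-ratio (EAR) value per video frame, where a low EAR
# means the eyes are closed. All thresholds here are illustrative, not tuned.

def count_blinks(ear_values, closed_threshold=0.2):
    """Count blinks as open-to-closed transitions of the EAR signal."""
    blinks = 0
    eyes_open = True
    for ear in ear_values:
        if eyes_open and ear < closed_threshold:
            blinks += 1       # eyes just closed: one blink begins
            eyes_open = False
        elif ear >= closed_threshold:
            eyes_open = True  # eyes reopened; ready for the next blink
    return blinks

def looks_suspicious(ear_values, fps=30, min_blinks_per_minute=6):
    """Flag a clip whose blink rate falls far below a typical human rate."""
    minutes = len(ear_values) / fps / 60
    if minutes == 0:
        return False
    return count_blinks(ear_values) / minutes < min_blinks_per_minute

# A 10-second clip at 30 fps with no blinks at all:
no_blinks = [0.3] * 300
# The same length of clip with two quick blinks (EAR dips below 0.2):
with_blinks = [0.3] * 100 + [0.1] * 3 + [0.3] * 100 + [0.1] * 3 + [0.3] * 94

print(looks_suspicious(no_blinks))    # True: zero blinks in 10 seconds
print(looks_suspicious(with_blinks))  # False: roughly 12 blinks per minute
```

Modern deepfakes have largely learned to blink, which is exactly the article's point: any single software check ages quickly, so social context remains an essential part of the assessment.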