GAVIN: All right, welcome back to Way Too Interested, we are joined now by our other guests, and just to remind everybody this show is about somebody who's super interested in something, and then we bring on an expert to kind of help fill in information. Today, we're joined by Henry Ajder. Henry, why don't you introduce yourself and tell us a little bit about what your background is?
HENRY AJDER: Sure. I guess I can say that I'm an expert in deepfakes and synthetic media.
REX: Perfect!
[laughter]
GAVIN: Are you sure you're real, do we know that you're real?
HENRY: Yeah, that's the question I'll ask myself most days at the moment, with the whole COVID situation. But yeah, I'm an expert in deepfakes and synthetic media. I've been researching the topic since it first emerged in late 2017. And have done quite extensive research on the space, primarily through my role as head of research at the world's first deepfake detection company.
GAVIN: Great. And before I let Rex go, what is the first deepfake you remember seeing? And is that different from what we had seen before?
HENRY: It's quite unpleasant, to be honest, but the first deepfake I remember seeing was on the original subreddit, while I was doing research on this space. The first deepfakes, where the term was officially coined, came from Reddit, where there was an existing community of people who used traditional Photoshop tools to superimpose celebrities' faces onto pornographic images.
And then one day a new subreddit emerged called /r/deepfakes, run by a user called /u/deepfakes, who let everyone know that he'd created, using open-source software libraries, a new tool for synthetically swapping faces, not just in images but also in videos. So the first deepfakes I saw were, rather shockingly, on these subreddits, where female celebrities were being non-consensually swapped into pornographic footage.
But I think as you guys probably have discussed, the term was originally coined exclusively to refer to those pornographic uses. But since then, in the three years or so since that first use case emerged, deepfakes now refers to a much broader range of uses and applications of synthetic media.
GAVIN: All right Rex, go for it.
REX: Your experience mimics mine, I was an early consumer of that subreddit and I don't know if there's a canonical firstness to the whole thing, but I think the one I remember is the Daisy Ridley one and it was truly disturbing, from both the "Well, somebody definitely did not consent to this" aspect, but also, "Whoa, that's realistic, how did they do that?" aspect.
And so I guess my first question really is: is that right? Did everyone else experience it the same way, this moment of whoa, this is much more accurate and better? Was it simply that the technology leveled up? What was that moment exactly?
HENRY: Yeah, that's a really good question. I think you and I, as researchers on disinformation and malicious uses of artificial intelligence (which is how I first found out about it), probably found it in a similar way. As you mentioned, the technology behind synthetic media and deepfakes has been developing since about 2015, maybe even a bit earlier, and as happens with academic and industry research, it trickles down.
So what is first contained within academic computer science labs or industry labs starts to become open source, starts to become something that people can replicate: amateur hobbyists and people like that. Especially through these open-source code libraries that Google and other platforms offer, people can start cobbling together pieces of software in a way that previously they couldn't, and that's how deepfakes as we first saw them emerged.
REX: One of the interesting things about that subreddit early on is that it wasn't just a library of instances, it was also a how-to forum. There were GitHub libraries out there you could download, and a lot of Q&A-type stuff. That subreddit eventually got banned, understandably, but it re-emerged as safe-for-work deepfakes, it's now called?
HENRY: Sort of. On Reddit, deepfake creators recreated the subreddit in a safe-for-work capacity, but the original /r/deepfakes community went underground. It started re-emerging on non-Reddit forums, new dedicated hubs essentially, as we see with more conventional pornography. So a lot of the community moved into the darker spaces online.
REX: Ah, well you know a darker part of the internet that I don't, and that's good to know. There's always worse.
HENRY: There’s always worse, exactly. It can always get darker.
REX: So I guess that's my next question. I have some technical wherewithal, I write code and stuff, but I've never played around with this. I have this vague idea that there are these apps out there that I've never installed, but I see the outcome of them, I see these things being produced all the time. I'm kind of wondering how hard it is. How much time would I have to put in to insert Emma Stone into Lord of the Rings? You know what I mean? Like these outrageous things you see. Would it take me a few hours with one of these apps, DeepFaceLab, I think is one of them? Would it take me a small amount of time, or would I have to spend hundreds of hours to do this?
HENRY: Yeah. So in terms of creating deepfakes, that's one of the areas where we've seen the most dramatic change over the last three years. Because as you mentioned, the first release on Reddit was accompanied by a really complex guide on how to use it, and you had to have some background understanding of code to use it properly. Typically you had to piece parts together yourself.
Whereas if you fast-forward to today, you're absolutely right. There's DeepFaceLab, which is the most popular open-source creation tool, and professional deepfake creators and hobbyists now use that. People who are now working with VFX studios have really mastered the art of using that tool. And creating a really high-quality deepfake, like some of the ones you've probably seen on YouTube, does take a lot of time. Not just in terms of expertise, but in training the model right.
So for example, the really famous Tom Cruise deepfakes that recently went viral on TikTok, the guy who created those is a guy called Chris Ume, he trained that model for I think two months to get a face swap of that quality. And that was before he then did manual post-production work on the final outputs. So high-quality deepfakes still require a lot of expertise and a lot of time.
GAVIN: Can I ask a tech question, Henry? With the Tom Cruise one particularly, he famously used a Tom Cruise impersonator and then put his face over it. From a technical perspective, are we still pretty much limited to faces only, or is there a possibility that deepfakes can go further? Because if I think of Tom Cruise, there's a ton of movies out there that have him doing his thing. Can they do that yet, or are we still a ways away from full-body deepfakes?
HENRY: So full-body synthetic media is already here in a very crude form. That's another thing with deepfakes: certain forms of deepfakes and synthetic media are more advanced and accessible than others. With body transfer, as it's generally called, there are apps, I think one is called Jiggy, where you take a picture or video of Bruno Mars dancing, upload a picture of yourself, and it animates your photo dancing like Bruno Mars. Which for someone like me is very useful, because I can't dance to save my life.
GAVIN: Are you telling me that JibJab was the first deepfake, is that where this is going? JibJab gets the deepfake moniker.
HENRY: The Christmas cards my mom used to make with us as elves ... Yeah, so body pose transfer and synthetic body movements are coming, they're just at a much more nascent stage than, say, face swapping, which is by far the most predominant form. And face swapping in a cruder form is accessible now on smartphones with a single image.
So face swapping in particular is super-accessible now, but high-quality deepfakes of any kind are still very much the preserve of VFX experts or people who have spent years learning to use these open-source tools very well.
REX: I think my next question is, I'm looking for people who maybe had a similar path of understanding this as I did. As I understand it, a lot of people are kind of blown away the first time they see this, too. I was talking to Gavin about it, and I described the process as "WTF to meh," meaning in just a matter of weeks or months, I went from "oh my God, this is terrible for society" to "ah, this isn't really that bad."
Within a very short amount of time I was thinking, this isn't really that different than Photoshop. And if you know the history of creative media, you know that there were lots of freak-outs about Photoshop at the time, too, about the idea of verity and truth. I guess I'm wondering what your view of it is: are you still in WTF, or are you more in meh like me? Are you worried? Because there was a lot of scary stuff proposed at first, and now I find myself not worried.
HENRY: So it's a really interesting framework to look at it from. From my perspective, having been an expert on this for a while and talking to the media about it, every couple of months a really good deepfake will come out which scares everyone. They have that WTF moment, and people who perhaps haven't heard of deepfakes before will see that first Tom Cruise deepfake and be like, "Oh my God, this is so good, it's so easy, and this is going to be the new age of fake news."
I think the media really capitalized on that quite visceral, very personal experience of being fooled by deepfakes. It's the kind of thing a lot of people quite quickly panic about. And in my research, that's one thing I've tried to do: delve into what is actually happening now and dispel the hypotheticals of "well, this might be coming." What are we actually dealing with right now? What is the real problem? And on that score, we haven't seen a huge amount of change over the last three years in terms of threat vectors or different kinds of attacks.
Still, the vast majority is non-consensual pornography, which is awful and which we need to do more to combat. It's a problem that thousands of women are being harmed by. But the fears of deepfakes enhancing disinformation campaigns, or of deepfakes undermining evidential proceedings in courtrooms, things like this have yet to materialize in a meaningful sense.
Having said that, I still think we need to be worried about it, and very vigilant, because I think the Photoshop analogy is a good one, right? Photoshop scared a lot of people, rightly as you said, but now it's just ubiquitous. It's a part of our media diet in a way that I think most people don't realize. How many people don't know that the images they see are photoshopped in subtle ways? How much has filtering and that kind of culture impacted us?
And as with Photoshop, even really crude video edits, like slowing down audio or speeding up video, have contributed to disinformation that's gone viral in the US and around the world. It's caused deaths in India, it's caused massive problems in places like Brazil, and France has had some really big problems with it as well. So I think there's a middle ground we need to take: we need to be aware of the threats that are emerging, and aware that perhaps we're not quite as infallible as we like to think a lot of the time. But that doesn't mean we should be losing our heads and panicking. It means understanding where we're going and what we need to do before it's a problem, to make sure that it doesn't become one.
REX: That's interesting, because so much of the stuff I consume in this space is like, "here's a video where every person on the screen is Dr. Phil," or taking that SNL sketch with Trebek and Sean Connery and face-swapping in their actual real faces to make it absurd. It's all this joke stuff. And I've heard this stat that somehow the vast majority of deepfakes that are created are, effectively, revenge porn.
HENRY: That was my research, yeah, 96%.
REX: Yeah, and I know that that's out there, that it's young dudes doing something bad to an ex-girlfriend: gathering up their iPhone library of pictures and making these videos of them. But that's not what trickles over into the media world, and it's not what ends up on Reddit. It's probably in those dark corners of the internet that you're on, that I'm not a part of.
In the last election, there was this example where a Nancy Pelosi video appeared on Facebook. It was not an actual deepfake, but it sort of seemed like one; it was just an effect applied to the video. They slowed down the recording, which slurred her speech and made her sound drunk. It had a deepfake-like effect. And I think there's a lot of stuff out there like that. Gavin, didn't you have a question related to some of these?
GAVIN: Well, one of the things I wanted to ask you Henry was that I read in a couple of the articles that you had been quoted in about this idea called a "shallow fake." And I really wanted to know what the definition of shallow fake is. Is it just another term for manipulated media that removes the deep learning part of it?
HENRY: Yeah, that's a great question, and a really important part of the puzzle. People need to be aware that deepfakes fit under the manipulated-media heading, but there are other kinds of manipulated media that came before them.
Photoshop being one of them, but also shallow fakes. Shallow fake more or less refers to a crudely edited piece of audiovisual media using techniques like the ones I mentioned a minute ago, right? Like slowing down the audio to make Nancy Pelosi sound drunk in that video, which was then shared by the president at the time.
Or in the case of CNN's White House correspondent, Jim Acosta: speeding up the video to make it look like he violently wrenched his microphone away from a White House intern, which the White House then used to justify revoking his press pass. Those are classic examples of shallow fakes.
Then you've also got out-of-context media. There's that famous photo of a shark swimming next to a car on a flooded Florida street, which gets shared every time there's a hurricane or flooding. That's Photoshopped, a really crude Photoshop job, but it spreads again whenever there are disasters and people are panicking, when people aren't thinking critically. You have videos of bombings or disasters that are taken from completely different countries and times.
So shallow fakes is really a broad term for those cruder forms of media manipulation that don't, as you said, use any deep learning or sophisticated artificial intelligence. They are very crude, but they work really well and require much less effort right now. Which is why some of the fears that deepfakes are here right now causing really big problems are overblown. But it's also why we can see that if deepfakes become as sophisticated as it looks like they will, they will be an incredibly powerful tool for these same bad actors to misuse on a scale we haven't seen before. So that's the rough framework.
GAVIN: When I hear deepfakes now, I think of what Rex was saying before, this idea that deepfakes can be anything from behind-the-scenes work on an entertainment property, to revenge porn, or at a scarier level, a world leader saying something they never said. It sometimes feels like there's a bit of a bogeyman factor going on with the term deepfake. Do you like the fact that deepfake is now referred to ... the world at large thinks of it as a very negative term, I think? I would assume that's the case.
HENRY: Yeah again, that's a really good question. I think broadly speaking, my approach to deepfakes and synthetic media, which is a more neutral term to describe the broader uses of the technology... My attitude is trying to inject nuance into this conversation. So there are really awful uses of synthetic media or deepfakes, which we need to be worried about, we need to pay attention to. But also, there are lots of uses of synthetic media which promise to revolutionize our world in the commercial space and potentially even socially beneficial ways.
And this technology is not going away, it's becoming increasingly adopted by big players and is maturing. We need to prepare for a future where synthetic media is playing a really big role. Part of that may well be, as you alluded to, thinking about what we understand deepfake to mean. So some people will think deepfake kind of refers to AI-generated synthetic media that is created with the intent to deceive for malicious purposes.
So as you said, it has got a negative connotation primarily because it evolved exclusively from its use in non-consensual pornography right? That's how the story first broke, that's how people first came to know of it. But then again, there are also people who are like yeah, I'm a deepfake artist, this is what I do, and they've kind of embraced it. And they're trying to use the term in a more neutral sense.
The trend I see, again talking to the people who are looking to commercialize this technology for the entertainment sector, or to use it in advertising and things like this, is that they tend towards the term synthetic media precisely because it doesn't have that negative connotation attached. But you're absolutely right, deepfakes have become a bogeyman that a lot of people are really worried about without necessarily understanding the full nuance of what the term refers to. And while we do need to be concerned about certain use cases, there is a much broader change in the zeitgeist of media going on here, which needs a much more balanced and broader conversation to fully understand its significance.
REX: On that same theme, I'm curious if you have any theories about why we never had the dystopian moment. Instantly, you can think of the worst possible thing happening, especially around the 2020 election cycle. There was a lot of hype that something terrible was going to spread on Facebook and change the course of the election, that we were going to blame deepfakes for electing the wrong person, and that never happened.
And you've done a good job of realigning my thinking that there is a bunch of bad stuff that still happens, and you should still be aware of that. But at the same time, the worst possible stuff never happened; not even second-tier stuff, I'd say, really happened. And I'm curious if you have theories as to why. Is it that the videos aren't good enough? Is it that people aren't as malicious as we thought they were? Or is it some other reason?
HENRY: I think the key thing for me here is that it hasn't happened yet. The timeframe people continually point to is the next election, the next event, and people will keep saying that until it happens.
I remember in the midterms, people were saying deepfakes are going to be a big problem here and they weren't. And in general, we haven't seen at least a confirmed case of deepfakes in a political context around the world being used with devastating effect. As you said, kind of making someone appear that they've said something they haven't, or putting them in a place that they weren't. It hasn't materialized yet that I'm aware of.
We have seen hints of it, though. For example, before the midterms, Russian interference invoked the idea of deepfakes, using a fake Marco Rubio quote to say the Democrats were spreading deepfakes of Republican candidates. We also saw, in this election, the dossier about Hunter Biden apparently leaked by an intelligence officer: the supposed agent who leaked the documents was using a StyleGAN image, an entirely synthetically generated image of a person who doesn't exist.
So we're seeing hints of how synthetic media and deepfakes are entering the political sphere. But I think the biggest impact we've seen is how the idea alone is causing a massive destabilization of political discourse around the world. Specific to the US, increasingly on places like 4chan and 8kun you'll see QAnon people reacting to videos of Trump saying "I've conceded," or Trump saying the rioters at the Capitol were wrong to do what they did, and they'll say, "No, that can't be real, that must be a deepfake."
REX: I think that’s really interesting that in some ways the thing that it's done to society that's worse than anything else is that it's made real documents, real media seem more suspicious. The larger societal effect is something about a larger sense of distrust about things that are real, but people think could have been manipulated.
HENRY: Absolutely. What deepfakes have done, and it's a kind of weird, perverse relationship: the more awareness of deepfakes is raised, which is important, right, people should know the technology is out there, the more it poisons the well of what people think is now possible, and that impacts their perception of authentic media. Particularly if there are cognitive biases that predispose them to want to believe a certain piece of authentic media isn't real.
So you have deepfakes not just making fake things look real, but also providing a plausible way to dismiss real things as fake. Just one example; this has happened several times around the world, particularly in less economically developed countries. With the current situation in Myanmar, a confession was released of one of the ministers saying that he had colluded on crimes with the President, or the Prime Minister. And this video was seized on by protestors saying, "This is a deepfake, this isn't real, this is not him actually confessing. This was created by the military to try and incriminate him."
And I did an analysis on this, as did some of my friends and colleagues who were specialists and we found no evidence of manipulation. It's not impossible that it happened, but if it did, it must've been an incredibly good job. But the narrative had spread that it was fake and people had just assumed and believed deeply that this was not real, because he could not have said that, and he must have been faked in order to get that confession.
And that same narrative played out in Gabon, where there was an attempted coup because people didn't believe the president was alive anymore, and thought the video released to confirm his health was a deepfake. That's the biggest problem in politics we're seeing with deepfakes right now. And I think these uses will come. As for your other question, why hasn't it happened yet?
I think you touched on it yourself: the realism is not sufficiently good to pass convincingly without a lot of work and expertise. And also, if a state actor were to try this, they'd likely have to create their own model to do it well. And as soon as that model is released in the wild, people will understand how it works and be able to pick up on the fingerprints it leaves behind on the deepfakes it generates.
GAVIN: So one of the things I watched recently was that Netflix documentary about art fraud, you know what I'm talking about? Obviously, in that situation, telling whether something is real is the most important thing in the world. What are the fingerprints that get left behind by these models now, so that you can prove something is fake? Is there a specific thing you look for?
HENRY: Yeah, so there are some, and again, it depends on the kind of media you're looking at, right? Certain images made with certain generators will leave behind different things than, say, a moving face swap in a video. But some key ones you can look for are things like the mapping of the face when the head is moving in different dimensions. Typically, face-swapping models are really bad at rendering faces when you're looking down or looking up, or turning from side to side. Same with occlusion of the facial area: if someone is smoking a cigarette or taking off their glasses, typically that's where you'll get flickering of the face that was originally underneath. Even though there isn't really a face underneath; it's a composite.
Those are some you can get on videos. Blending of skin tones can also be quite a good telltale sign. And with the synthetic images, these StyleGAN images, you get visual artifacts, things that shouldn't be there, especially where the model is trying to recreate things like jewelry: earrings turn into these weird swirly shapes. Same with strands of hair, you get these weird swirly patterns.
So there are some telltale signs, but here's the problem: someone may listen to this podcast, hear these telltale signs, and come away saying, "I now know what to look for." And in six months, those flaws may have been trained out of the models, which may have improved to the point that they're no longer there. But there won't necessarily be a timely podcast reminding them that those things are suddenly no longer in these images or videos. So I'm really hesitant to say with confidence, if you don't see these, it's not a deepfake.
Because that gives false confidence. And indeed we've seen false confidence in deepfake detection tools with this Myanmar case I just mentioned: those tools, I'm pretty sure, got it wrong, and they said confidently, this is a deepfake, which is really dangerous in a political crisis like that. Having people falsely confident that something is real or fake could be the difference in whether there's an attempted coup, or whether an election is declared void. So it's a really difficult problem. There are some signs now, but there's no guarantee they'll be there in six months, in a year, or maybe even sooner.
REX: I'm curious, has anyone taken some of the best examples from Hollywood? The example I cited earlier was The Irishman, with De Niro and Pacino in it, the Netflix movie that won a bunch of awards, had a ton of de-aging in it ... and used deepfake-like technology. Has anyone on the research side taken things like that and found out whether they can detect the seams of manipulation in it? You know what I mean? That's an example of the best high tech applied to deepfakes, and the most deceptive, and I'm curious whether researchers are even able to identify something that good.
HENRY: As in, are researchers able to detect, in an automatic sense, the manipulation in The Irishman?
REX: Yeah, can they see the artifacts of manipulation within that?
HENRY: Within the original version that was released on Netflix?
REX: Yes.
HENRY: So that's an interesting case, The Irishman because a lot of the VFX community and a lot of the deepfake community thought they did a really bad job.
GAVIN: [laughs] Really?
HENRY: Yeah, there were actually a couple of videos out there from people who tried to recreate scenes from The Irishman using face swaps trained on footage of a younger De Niro, to do a better job than the synthetic de-aging that those manual, click-by-click VFX artists had done. That's been done with Star Wars as well, with Princess Leia, with Carrie Fisher, who was put in alongside Grand Moff Tarkin and … I can't remember the name of the actor. But it's been done quite a few times by deepfake artists who've said, "Actually, this isn't that good. I can do this better on a gaming PC in my mom's basement."
REX: Yeah, that's interesting, because Gavin shared a video with me. There's a whole sub-genre on YouTube of graphic artists, people who do this work in Hollywood, taking old movies and making the effects better. What's the one you shared with me, Gavin?
GAVIN: It was The Scorpion King, they redid The Rock's face as he came out. You know what I’m talking about?
HENRY: I know the scene you're talking about, and the original hasn’t aged well.
GAVIN: It's so bad.
[clip from Corridor Crew video]
GAVIN: But these guys did a great job. This channel, they're basically VFX artists, and they go in and take the original scene, which got panned hugely, and use deepfake technology to change it. It's dramatically better. And this is four or five guys with a reasonably powerful computer, but this is not ILM, this is a pretty small operation. And they're not working from the original files either; they're literally slapping this on top of the actual master.
REX: It's interesting that there's this whole genre out there of people saying, oh, I can do better than Hollywood can, I can improve this.
HENRY: I don't know specifically the channel you're referring to, I imagine it might be Corridor Digital who have done some amazing work with deepfakes. They are VFX specialists, and they kind of made some deepfakes of Keanu Reeves. They did an original one of Tom Cruise, which wasn't as good as the more recent versions, but yeah, absolutely. I know a lot of the people in the community who can make deepfakes professionally.
They started off doing this as kind of a fun hobby on YouTube, and they were like, oh, a hundred thousand views on this video, that's great. They were just doing it for fun. Now these people are consulting for or providing their work to massive studios. The South Park creators hired a load of deepfake artists for an entirely new studio, for an entirely new deepfake satire show they created, whose first video, released on YouTube, they said must have been the most expensive YouTube video ever made. These people are artists. And then there are a lot of people who are really good at using this technology who use it for awful purposes, with non-consensual deepfake pornography or image abuse.
But what I meant earlier when I said this technology is maturing is that there is a respected and growing number of people who recognize the creative potential of deepfakes and synthetic media, who, as you said, have de-aged people in old films or colorized old films using similar kinds of technology. A new film featuring a synthetic version of James Dean has been commissioned and is coming out. A French soap opera actress got COVID? No problem, you deepfake her into the scene. Snoop Dogg records an advert for the company Just Eat, which I guess is like the Grubhub you guys have in the States, a takeaway service. He records the advert once, but they want to repurpose it for another region of the world? No problem. You just hire a company to synthetically change his lip movements to match a new audio track.
This is truly going to revolutionize all of our content creation and content consumption. And there are some really interesting questions about the ethically ambiguous use cases, right? Like, not just the really explicitly malicious ones like disinformation or non-consensual deepfake pornography, but the use cases where you are synthetically resurrecting someone, and you have to ask questions like, do I need consent to do this? Can I meaningfully consent to my likeness being replicated? In what cases don't you need consent?
REX: You got into my third big question without even me getting to ask it. It really was about the future of this for entertainment. I had this term that I like to use in these conversations where I corner people at parties and show them this deepfake, which happens a lot. I call it the "eternal celebrity." It seems really interesting to me that this technology has emerged at the same time there's a huge transition going on within Hollywood, within entertainment, towards the franchising of everything. Like everything now is a reboot or cinematic universe. You went through a good list of all of the cases that are happening, and I think people overlook how much is actually happening right now.
And that James Dean thing is a really good example of, I think that this is coming, I think we're going to start taking dead people and recreating them. And I think within 20 years, we're going to have somebody win an Oscar who is dead, that we've digitally recreated. That's my theory, I'd take the over/under at 20. So I guess my thing is, do you think that this is accurate? And I'm curious if you know anything about some of the legal stuff.
I think this is true, I think Robin Williams put in his will that he cannot have his likeness reused by anyone for any kind of future work. He saw the deepfake stuff coming and said he didn't want it used. But I'm curious, is that legally binding in 20 years if his kids want to do it? It's like, I don't know how you could stop them, and so once you're dead, I think it's going to be fair game. I don't know. But even now, there are going to be a ton of actors who get too old to play the person they played 20 years ago, and they're just going to be deepfaked into entertainment. And we're just going to keep recycling the same characters over and over again. Does this sound like the future of creative media to you?
HENRY: This is something that I've been thinking about a lot, these kinds of gray areas with synthetic media. I think synthetic resurrection, or techromancy, which I quite like as another way of talking about it, is one of the most interesting areas, because as you've kind of identified, some of the legal tension is a really open question. And fundamentally, again, this is about film stars that we love and we know, or think we know anyway, people we have this kind of weird parasocial relationship with.
These are people who have formed part of our childhoods and films that we love and adore. And there are some really big questions, as you just alluded to, as to whether we want this to happen. Like, do we want Scarlett Johansson to feature in a film in 50 years? And it's not real, like how much of that is actually her, does she still contribute in some way? Is it her body, but with, like, a de-aged face? Is it a completely new actor? If it's a completely new actor, do they get some credit for that performance? How do we determine who is actually the person? For example, with deepfakes, one of the really impressive things about face swapping is that it maps the target's facial expressions perfectly.
So the person who would have to be the body double, or the face double even, would have to be doing a really good job of the expressions and all of the parts of acting. But at the same time, organizations like SAG, the Screen Actors Guild, are going to really want to protect performance rights, because if your face is used... You know, that's the performance that you're giving, and you deserve full payment for that performance.
GAVIN: Maybe there is a world where it's going to be Andy Serkis and Scarlett Johansson, do you know what I mean? Because Andy Serkis is famously Gollum. There may be an award that's given to two people: one, the person who's doing the actual motion capture, the actual acting, and the other, the posthumous person, like a James Dean Academy Award, but with, say, Brad Pitt playing that character. And obviously, Brad Pitt on his own, he's not necessarily going to let his face get swapped out for James Dean's. But that seems like a world where you could have a shared award in some ways, too.
I know we have to wrap up pretty soon here because Henry, I know you have to move on to something else, but hey, thank you so much to both Henry and Rex. Henry, I do want to ask you before you go, and I'm going to try to do this with all of our guests: what is something that you're way too interested in right now? Is there a specific thing that you can't get out of your brain?
HENRY: Yeah, so I'm only 27, but I'm having a late-in-life, for me, discovery of jazz, specifically in the UK here in South London. There's an exploding jazz scene of these really exciting new young jazz musicians who are doing some really interesting stuff. And I haven't listened to a huge amount of the classics — Miles Davis, Herbie Hancock — but these young guys are doing something that taps into a fascination that I didn't know I had with this music. I normally like music that's a bit more emotional for me, whereas jazz just technically blows my mind. So yeah, I'm getting deep into the jazz scene, and I hope I'll make my way back to the classics someday soon.
GAVIN: Do you have a specific artist that we can check out?
HENRY: I would really recommend checking out a guy called Yussef Dayes. I think the album is called Welcome to the Hills. He also did a collaboration last year with an artist called Tom Misch; the album is called What Kinda Music. Both just exceptional records, really good energy. When you need a bit of perking up in lockdown summer, that's what I've been using.
GAVIN: Amazing, that is fantastic. Well thanks again to both of you for being here for the original recording. Hopefully, Rex, we got most of your questions answered and I'm going to go learn a whole shitload more about this now because I can't believe the stuff I didn't know about it. So thanks again everybody, we really appreciate it, and come back for the next episode.
HENRY: My pleasure, thanks guys.
REX: Thanks, see ya!
GAVIN: Well, that's our show for this week, thank you so much for coming and listening. Way Too Interested is produced almost entirely by me, but I did have some good help along the way. Eric Johnson of LightningPod has helped me significantly in getting the show up and running, does the editing on my interviews, and overall is a great human being. You should definitely go and find LightningPod if you are trying to start a podcast or you need some help making one.
Other than that, thank you to the Gregory Brothers for making the theme song, this music you hear right now, for which I specifically asked for a very easy-listening sort of style for the credits. And of course, thank you to my guests this week, Rex Sorgatz and Henry Ajder. We had a great time talking deepfakes, and I will see you next week.
Also go to waytoointerested.com, our website. I want to hear from you if you have a problem with the show, if you think it's interesting, I hope you think it's interesting, or if you have subjects that you're fascinated by. I'm also toying around with the idea of starting a Discord, I would love to start a conversation with you.
I'm on Twitter at twitter.com/gavinpurcell; I'm pretty active there. I would love to talk more with anybody who listened and get some feedback on the show. It's my first time making one of these, so I'm still in the learning stages, but I'm having a good time and I hope to keep it up. Thank you again for listening and please come back again.