[ This commentary first appeared on the Digital Production Buzz. ]
I had an interesting conversation with a software developer earlier this week.
We got to talking about new technology that can invisibly remove elements from a video (edits, objects in the frame, even audio stutters) as if they never existed in the original.
As I mentioned during our discussion, I’m increasingly uncomfortable with this type of technology because it directly enables sites that specialize in fake news to create video which looks “real,” but isn’t.
They responded that the answer is for consumers of content to be aware of the credibility of the sites they visit and only believe videos from reputable sites.
While this is always good advice, it isn’t always practical. Given the ubiquity of links, we often don’t know what site we’re visiting, and even reputable sites link to less-than-reputable sponsors. It seems to me that developers are refusing to acknowledge their responsibility for the proliferation of fake news; the same fake news that is driving most of us nuts.
Who’s responsible for helping us determine what is “real?”
Developers, it seems to me, have a responsibility to make it possible for the rest of us to quickly tell whether a video has been altered using the tools they’ve created. This information is sometimes buried deep in a movie file’s invisible data, but it needs to be much more accessible.
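To make "buried in the invisible data" concrete: a QuickTime or MP4 movie is a sequence of typed containers (called atoms or boxes), and metadata about edits can sit in boxes the viewer never sees. The sketch below is purely illustrative, not any specific vendor's tool; it parses the standard size-plus-type box layout on a tiny synthetic byte string.

```python
import struct

def list_top_level_boxes(data: bytes):
    """Walk the top-level boxes (atoms) of an MP4/QuickTime byte
    stream and return their four-character type codes. Metadata
    typically lives inside boxes like 'moov', not in the picture."""
    boxes = []
    offset = 0
    while offset + 8 <= len(data):
        # Each box starts with a 4-byte big-endian size and a 4-byte type.
        size, = struct.unpack(">I", data[offset:offset + 4])
        boxes.append(data[offset + 4:offset + 8].decode("ascii"))
        if size < 8:
            # Sizes 0 and 1 have special meanings; stop in this sketch.
            break
        offset += size
    return boxes

# Tiny synthetic example: two empty boxes, 'ftyp' then 'moov'.
demo = struct.pack(">I4s", 8, b"ftyp") + struct.pack(">I4s", 8, b"moov")
print(list_top_level_boxes(demo))  # ['ftyp', 'moov']
```

A real file adds many nested boxes, which is exactly why this information is invisible to viewers: nothing in a standard player surfaces it.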
This is a double-edged sword. Clearly, as media creators, we often need to create images that look real yet don’t exist in real life, whether we are creating fantasy dragons, masking traffic signs in a period drama, or making actors fly who are actually suspended from wires for their safety.
Yet, using these same tools, it is all too easy to take video that has been extensively modified and pass it off as “real.” At what point is it important for us to determine (not just guess, but KNOW) whether a video is displayed as shot or whether it’s been “Photoshopped?”
Technology is only going to make image manipulation easier and more invisible. If we don’t start having these conversations NOW about what is real and what is fake AND how to tell the difference, we will quickly move into a future where we have to assume that everything we see is fake. And that will be a sad day indeed, when even what is real is assumed to be fake.
Just something I’m thinking about. As always, let me know your thoughts.
3 Responses to When Should We Care If an Image is Real… or Not?
Agreed, Larry. I first started thinking about this in 1996. I had shot a media event in Dallas and gone to a production house to edit. In the other room was a photographer working on some pictures in Photoshop. They were pictures of the recently completed Reform Party convention. The photographer had covered it for a national news magazine (yes, back then a few of them were still printed). Now he was working for the Reform Party candidate, Ross Perot. That day he was working on a picture of Perot that was obviously taken before the nomination was a fait accompli. He was replacing all of the Dick Lamm (another candidate) posters with Perot signs. It was clearly a better picture of Perot than the photographer had taken after the nomination. The crowd, which had been split in its allegiance to a candidate, was now all in for Perot, at least in the photograph. How would someone in the room feel about being pictured holding a sign that he/she never held for a candidate he/she never supported? It got me wondering, where does this all lead? I guess we’ve arrived.
Great example, Ed. Thanks for sharing.
Valid concerns, Larry. These tools can be invaluable when creating new content or even “fixing” existing content. I often use the Flow Dissolve in FCPX when I need to fix a jump cut in an interview segment. When working as intended, this effect can be a great solution to what would otherwise be an ugly cut or require a cut-away to an alternate camera angle, b-roll or a graphic (which may not be available). But this technology becomes an urgent problem when those who wish to “cover up” or otherwise falsify information put it to use for the reasons you outline.

I guess the most immediate solution is to NEVER lose your original source material. This way, even when falsified video is created, you’ll have the original source to dispute it. But this obviously isn’t foolproof. I think your idea of embedding metadata or some other code into the altered clip, indicating a “fix” has been applied, is a great solution. Even so, that can’t be enforced. Perhaps news agencies and editorial companies can commit to using only software that includes this metadata, and clips found without that code can be considered suspect.

As for digital actors, I think they can be a great solution to a storytelling problem. Real actors are best, of course, but, when lifelike and believable, digital actors can be a great asset. Having Princess Leia and Grand Moff Tarkin in Rogue One was amazing and an important part of the story. Not perfect, of course, as most of us who know about such things could tell they weren’t real. But it’s just a matter of time before digital actors look totally photorealistic, with mannerisms that match to a T.