The liar’s dividend, and other challenges of deep-fake news
Concocted Trump-Putin audio is just one prospect among many. Democracies need to prepare
Donald Trump and Vladimir Putin. ‘What mischief could be coming in this dawning era of astonishingly realistic “deep fakes”?’ Photograph: Pablo Martinez Monsivais/AP
Do the notes taken by the interpreters at the recent Helsinki summit include the words “Snowden” and “swap”? We could ask the Russians to check their (assumed) audio recording and let us all know whether Presidents Trump and Putin discussed such a prospect during their long private chat. Trump wrong-footing his own country’s intelligence community by delivering their most-wanted, Edward Snowden, seems precisely the trolling that Putin would enjoy.
What else might leak soon, in the form of audio of the authentic voices of two familiar public figures speaking to each other through the only other people in the room, the US interpreter and her Russian counterpart? What other mischief could be coming in this dawning era of astonishingly realistic “deep fakes”?
Artificial intelligence is becoming more proficient at using genuine audio and video to help create fake audio and video in which people appear to say or do what they have not said or done. Celebrities seeming to read their own tweets aloud and a fabricated video of Barack Obama are two examples. Some developers indicate awareness of the ethical implications.
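For readers curious about the mechanics: the face-swap systems behind early deep fakes typically trained one shared encoder alongside a separate decoder for each person. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; its network sizes, toy training loop and random stand-in “faces” are assumptions made for the example, not details drawn from this article or from any particular system.

```python
# A minimal sketch of the shared-encoder, per-identity-decoder autoencoder
# idea behind early face-swap deep fakes. Everything here (shapes, data,
# loop length) is an illustrative assumption, not a production recipe.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a small latent vector."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent vector; one decoder per identity."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-3,
)
loss_fn = nn.MSELoss()

faces_a = torch.rand(8, 3, 64, 64)  # stand-in batches; real training uses
faces_b = torch.rand(8, 3, 64, 64)  # thousands of aligned face crops

for _ in range(100):  # toy loop; real training runs far longer
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The swap is the whole trick: encode person A's face, decode it with
# person B's decoder, yielding B's likeness with A's pose and expression.
with torch.no_grad():
    fake_b = decoder_b(encoder(faces_a))
```

Because both identities pass through the same encoder, the latent vector captures pose and expression while each decoder supplies a particular face, which is why the output moves like one person but looks like another.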
The issues are analysed in a new draft paper, Deep Fakes: A Looming Challenge for Privacy, Democracy and National Security, by two US law professors. Robert Chesney and Danielle Citron unflinchingly yet constructively explain the potential harms to individuals and societies – for example to reputations, elections, commerce, security, diplomacy and journalism – and suggest ways the problem can be ameliorated, through technology, law, government action and market initiatives. The paper reflects and respects both experience and scholarship, a style familiar from the Lawfare blog that Chesney co-founded. The specifics in the paper are mostly American but its relevance is global. Deep fakes are aided by the quick, many-to-many spread of information, especially in social media, and by human traits such as biases, attraction to what’s novel and negative, and our comfort in our filter bubbles.
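Among the technological remedies that the paper’s framing points towards is provenance: authenticating media at the point of capture so that any later tampering is detectable. The fragment below is a deliberately simplified, standard-library-only Python sketch of that idea; the shared HMAC key is an assumption made to keep the example self-contained, whereas a real scheme would rely on public-key signatures and trusted capture hardware.

```python
# A hedged sketch of media provenance: bind an authentication tag to the
# exact bytes captured, so any subsequent edit makes verification fail.
# The shared key is a simplifying assumption for a stdlib-only example.
import hashlib
import hmac

SECRET_KEY = b"device-provisioned-key"  # hypothetical capture-device key

def sign_media(media_bytes: bytes) -> str:
    """Return a tag binding the key to this exact content."""
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """True only if the content is bit-for-bit what was originally signed."""
    expected = hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"...raw audio of the conversation..."
tag = sign_media(original)

assert verify_media(original, tag)                  # authentic copy passes
assert not verify_media(original + b"splice", tag)  # any edit fails
```

A scheme like this cannot prove what happened in a room, but it can prove that a given recording is, or is not, the one a trusted device produced, which directly narrows the space for both fabricated clips and false denials.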
The authors note that “not all lies involve affirmative claims that something occurred (that never did): some of the most dangerous lies take the form of denials”. They argue that deep fakes make it easier for liars to deny the truth in two ways. First, if accused of having said or done something that they did say or do, liars may generate and spread altered sound or images to create doubt. This is a risky approach, the authors say, when media organisations are involved or when others with technical proficiency can check. The second, “equally pernicious”, way is simply to denounce the authentic as fake, a technique that “becomes more plausible as the public becomes more educated about the threats posed by deep fakes”.
[Video, 0:50: Hear Google’s virtual assistant mimic a human voice to book an appointment by phone]
The “liar’s dividend” grows in proportion to public awareness of deep fakes and “runs with the grain” of larger trends in truth scepticism, the authors argue.
Citing what they call mounting distrust of traditional news sources, the professors write: “That distrust has been stoked relentlessly by President Trump and like-minded sources in television and radio; the mantra ‘fake news’ has thereby become an instantly recognised shorthand for a host of propositions about the supposed corruption and bias of a wide array of journalists, and a useful substitute for argument when confronted with factual assertions … [I]t is not difficult to see how ‘fake news’ will extend to ‘deep fake news’ in the future. As deep fakes become widespread, the public may have difficulty believing what their eyes and ears are telling them – even when the information is real. In turn, the spread of deep fakes threatens to erode the trust necessary for democracy to function effectively. The combination of truth decay and trust decay creates greater space for authoritarianism”.
Actions to build hope? Grow awareness. Be wary in proportion to the gravity of what’s being claimed, and verify with care. Abandon that specious catchcry cynically uttered in some newsrooms earlier in this digital era: never wrong for long. It was never right. Truth, always engaged in its less-than-free and less-than-open encounter with falsity, deserves better.
• Some hyperlinks were amended on 23 July 2018.
• Paul Chadwick is the Guardian’s readers’ editor