Is Seeing Still Believing? The Deepfake Challenge to Truth in Politics

Is Seeing Still Believing? The Deepfake Challenge to Truth in Politics. Brookings Institution. William A. Galston. January 8, 2020

On Nov. 25, an article headlined “Spot the deepfake. (It’s getting harder.)” appeared on the front page of The New York Times business section. The editors would not have placed this piece on the front page a year ago. If they had, few would have understood what its headline meant. Today, most do. This technology, one of the most worrying fruits of rapid advances in artificial intelligence (AI), allows those who wield it to create audio and video representations of real people saying and doing made-up things. As this technology develops, it becomes increasingly difficult to distinguish real audio and video recordings from fraudulent misrepresentations created by manipulating real sounds and images. “In the short term, detection will be reasonably effective,” says Subbarao Kambhampati, a professor of computer science at Arizona State University. “In the longer run, I think it will be impossible to distinguish between the real pictures and the fake pictures.” [Note: contains copyrighted material].

[HTML format, various paging].

Profiles of News Consumption: Platform Choices, Perceptions of Reliability, and Partisanship

Profiles of News Consumption: Platform Choices, Perceptions of Reliability, and Partisanship. RAND Corporation. Michael Pollard, Jennifer Kavanagh. December 10, 2019

In this report, the authors use survey data to explore how U.S. media consumers interact with news platforms, finding mixed perceptions about the reliability of news and that consumer partisanship broadly shapes news consumption behavior. [Note: contains copyrighted material].

[PDF format, 110 pages].

Fighting Deepfakes When Detection Fails

Fighting Deepfakes When Detection Fails. Brookings Institution. Alex Engler. November 14, 2019

Deepfakes intended to spread misinformation are already a threat to online discourse, and there is every reason to believe this problem will become more significant in the future. So far, most ongoing research and mitigation efforts have focused on automated deepfake detection, which will aid deepfake discovery for the next few years. However, unlike cybersecurity’s perpetual cat-and-mouse game, automated deepfake detection is likely to become impossible altogether in the relatively near future, as the approaches that generate fake digital content improve considerably. In addition to supporting the near-term creation and responsible dissemination of deepfake detection technology, policymakers should invest in discovering and developing longer-term solutions. Policymakers should take actions that:

  • Support ongoing deepfake detection efforts with continued funding through DARPA’s MediFor program, as well as new grants that support collaboration among detection efforts and train journalists and fact-checkers to use these tools.
  • Create an additional stream of funding awards for the development of new tools, such as reverse video search or blockchain-based verification systems, that may better persist in the face of undetectable deepfakes.
  • Encourage the release of large social media datasets for social science researchers to study and explore solutions to viral misinformation and disinformation campaigns. [Note: contains copyrighted material].

[HTML format, various paging].

The Democracy Playbook: Preventing and Reversing Democratic Backsliding

The Democracy Playbook: Preventing and Reversing Democratic Backsliding. Brookings Institution. Norman Eisen et al. November 2019

The Democracy Playbook sets forth strategies and actions that supporters of liberal democracy can implement to halt and reverse democratic backsliding and make democratic institutions work more effectively for citizens. The strategies are deeply rooted in the evidence: what the scholarship and practice of democracy teach us about what does and does not work. We hope that diverse groups and individuals will find the syntheses herein useful as they design tailored, context-specific strategies for contesting and resisting the illiberal toolkit. This playbook is organized into two principal sections: one dealing with actions that domestic actors can take within democracies, including retrenching ones, and a second addressing the role of international actors in supporting and empowering pro-democracy actors on the ground. [Note: contains copyrighted material].

[PDF format, 100 pages].

Fighting Disinformation Online: A Database of Web Tools

Fighting Disinformation Online: A Database of Web Tools. RAND Corporation. Jennifer Kavanagh, Hilary Reininger, Norah Griffin. November 12, 2019

The rise of the internet and the advent of social media have fundamentally changed the information ecosystem, giving the public direct access to more information than ever before. But it’s often nearly impossible to distinguish between accurate information and low-quality or false content. This means that disinformation — false or intentionally misleading information that aims to achieve an economic or political goal — can become rampant, spreading further and faster online than it ever could in another format.

As part of its Truth Decay initiative, RAND is responding to this urgent problem. Researchers identified and characterized the universe of online tools developed by nonprofits and civil society organizations to target online disinformation. The tools in this database are aimed at helping information consumers, researchers, and journalists navigate today’s challenging information environment. Researchers identified and characterized each tool on a number of dimensions, including the type of tool, the underlying technology, and the delivery format.

Hostile Social Manipulation: Present Realities and Emerging Trends

Hostile Social Manipulation: Present Realities and Emerging Trends. RAND Corporation. Michael J. Mazarr et al. September 4, 2019

The role of information warfare in global strategic competition has become much more apparent in recent years. Today’s practitioners of what this report’s authors term hostile social manipulation employ targeted social media campaigns, sophisticated forgeries, cyberbullying and harassment of individuals, distribution of rumors and conspiracy theories, and other tools and approaches to cause damage to the target state. These emerging tools and techniques represent a potentially significant threat to U.S. and allied national interests. This report represents an effort to better define and understand the challenge by focusing on the activities of the two leading practitioners of such techniques — Russia and China.

The authors conduct a detailed assessment of available evidence of Russian and Chinese social manipulation efforts, the doctrines and strategies behind such efforts, and evidence of their potential effectiveness. RAND analysts reviewed English-, Russian-, and Chinese-language sources; examined national security strategies, policies, and military doctrines; surveyed existing public-source evidence of Russian and Chinese activities; and assessed multiple categories of evidence of the effectiveness of Russian activities in Europe, including public opinion data, evidence on trends in support of political parties and movements sympathetic to Russia, and data from national defense policies. The authors find a growing commitment to tools of social manipulation by leading U.S. competitors. The findings in this report are sufficient to suggest that the U.S. government should take several immediate steps, including developing a more formal and concrete framework for understanding the issue and funding additional research to understand the scope of the challenge. [Note: contains copyrighted material].

[PDF format, 302 pages].