Advances in technology allow for innovation in the ways businesses and individuals perform financial activities. The development of financial technology—commonly referred to as fintech—is the subject of great interest for the public and policymakers. Fintech innovations could potentially improve the efficiency of the financial system and financial outcomes for businesses and consumers. However, the new technology could pose certain risks, potentially leading to unanticipated financial losses or other harmful outcomes. Many of the financial laws and regulations intended to foster innovation and mitigate risks were designed before the most recent technological changes. This raises questions concerning whether the existing legal and regulatory frameworks, when applied to fintech, effectively protect against harm without unduly hindering the development of beneficial technologies.
The objective of face recognition technologies (FRTs) is to efficiently detect and recognize people captured on camera. Although these technologies have many practical security-related purposes, advocacy groups and individuals have expressed apprehensions about their use. The research reported here was intended to highlight for policymakers the high-level privacy and bias implications of FRT systems. In the report, the authors describe privacy as a person’s ability to control information about themselves. Undesirable bias consists of the inaccurate representation of a group of people based on characteristics, such as demographic attributes. Informed by a literature review, the authors propose a heuristic with two dimensions: consent status (with or without consent) and comparison type (one-to-one or some-to-many). This heuristic can help determine a proposed FRT’s level of privacy and accuracy. The authors then use more in-depth case studies to identify “red flags” that could indicate privacy and bias concerns: complex FRTs with unexpected or secondary use of personal or identifying information; use cases in which the subject does not consent to image capture; lack of accessible redress when errors occur in image matching; the use of poor training data that can perpetuate human bias; and human interpretation of results that can introduce bias and require additional storage of full-face images or video. This report is based on an exploratory project and is not intended to comprehensively introduce privacy, bias, or FRTs. Future work in this area could include examinations of existing systems, reviews of their accuracy rates, and surveys of people’s expectations of privacy in government use of FRTs. [Note: contains copyrighted material].
Conversations around data science typically contain a lot of buzzwords and broad generalizations that make it difficult to understand its pertinence to governance and policy. Even when well-articulated, the private-sector applications of data science can sound quite alien to public servants. This is understandable, as the problems that Netflix and Google strive to solve are very different from those that government agencies, think tanks, and nonprofit service providers focus on. This does not mean, however, that there is no public-sector value in the modern field of data science. With qualifications, data science offers a powerful framework to expand our evidence-based understanding of policy choices, as well as to directly improve service delivery.
To better understand its importance to public policy, it’s useful to distinguish between two broad (though highly interdependent) trends that define data science. The first is a gradual expansion of the types of data and statistical methods that can be used to glean policy insights, such as predictive analytics, clustering, big data methods, and the analysis of networks, text, and images. The second trend is the emergence of a set of tools and the formalization of standards in the data analysis process. These tools include open-source programming languages, data visualization, cloud computing, reproducible research, and data collection and storage infrastructure. [Note: contains copyrighted material].
Citizen science — an approach whereby citizens actively contribute to the generation of knowledge about important research questions — is gaining increased attention in research and policy communities. Recent years have seen an expansion in the scale of citizen science activity globally, as well as an increase in the diversity of ways in which citizens can contribute to research endeavours. This report, informed by a literature review and interviews with selected experts, explores key areas of innovation and emerging and topical issues in citizen science, with a particular but not exclusive interest in healthcare related applications. More specifically, the report explores innovation related to new areas of applications of citizen science; novel methods of data gathering and analysis; innovative approaches to recruiting, retaining and enabling participation in citizen science projects; and building capacity for citizen science. The report also considers emerging themes and topical issues within the field and their implications. [Note: contains copyrighted material].
The past decade in the United States has seen technological advancements, demographic shifts and major changes in public opinion. Pew Research Center has tracked these developments through surveys, demographic analyses and other research. [Note: contains copyrighted material].
Advancing technologies are increasingly able to fully or partially automate job tasks. These technologies range from robotics to machine learning and other forms of artificial intelligence, and are being adopted across many sectors of the economy. Applications range from selecting job applicants for interviewing and picking orders in a warehouse to interpreting X-rays to diagnose disease and automating customer service. These developments have raised concern that workers are being displaced by advancing automation technology. Indeed, over 18 recent studies predict job losses from new automation technologies, including some predictions of massive job losses (Winick 2018). A large literature on worker displacement suggests that the effects of such developments could be dire: individual workers subject to plant closings and mass layoffs experience reduced employment probabilities and wage reductions, leading to long-term earnings losses, as well as reductions in consumption and worse health outcomes. Concerns about these effects of automation have led some commentators to call for policies to directly combat mass unemployment, such as a universal basic income.
But is this right? At a time when many firms are investing in automation, the unemployment rate is at historic lows. Low unemployment might seem hard to reconcile with apocalyptic predictions about mass unemployment. This paper reviews the evidence from recent studies and reports on a new paper we have written, “Automatic Reaction: What Happens to Workers at Firms that Automate” (Bessen et al. 2019), which is the first to examine what actually happens to those workers. We build on its findings to draw out the implications for policy. [Note: contains copyrighted material].
Advanced economies have experienced a significant drop in the fraction of the population employed in middle-wage, “routine task-intensive” occupations. Applying machine learning techniques, we identify characteristics of those who used to be employed in such occupations and show they are now less likely to work in routine occupations. Instead, they are either nonparticipants in the labor force or working in occupations that tend to occupy the bottom of the wage distribution. We then develop a quantitative, heterogeneous-agent, general equilibrium model of labor force participation, occupational choice, and capital investment. This allows us to quantify the role of advancement in automation technology in accounting for these labor market changes. We then use this framework as a laboratory to evaluate various public policies aimed at addressing the disappearance of routine employment and its consequent impacts on inequality. [Note: contains copyrighted material].
On Nov. 25, an article headlined “Spot the deepfake. (It’s getting harder.)” appeared on the front page of The New York Times business section. The editors would not have placed this piece on the front page a year ago. If they had, few would have understood what its headline meant. Today, most do. This technology, one of the most worrying fruits of rapid advances in artificial intelligence (AI), allows those who wield it to create audio and video representations of real people saying and doing made-up things. As this technology develops, it becomes increasingly difficult to distinguish real audio and video recordings from fraudulent misrepresentations created by manipulating real sounds and images. “In the short term, detection will be reasonably effective,” says Subbarao Kambhampati, a professor of computer science at Arizona State University. “In the longer run, I think it will be impossible to distinguish between the real pictures and the fake pictures.” [Note: contains copyrighted material].
The tech landscape has changed dramatically over the past decade, both in the United States and around the world. There have been notable increases in the use of social media and online platforms (including YouTube and Facebook) and technologies (like the internet, cellphones and smartphones), in some cases leading to near-saturation levels of use among major segments of the population. But digital tech also faced significant backlash in the 2010s. [Note: contains copyrighted material].
This brief is the second in a series of Leapfrogging in Education snapshots that provide analyses of our global catalog of education innovations. (Our first snapshot focused on playful learning.) The catalog and our corresponding research on leapfrogging are explained in depth in CUE’s book, “Leapfrogging inequality: Remaking education to help young people thrive.” Of the nearly 3,000 global innovations CUE cataloged, more than half involve the use of technology, which suggests strong interest in its application to aiding educators around the world. [Note: contains copyrighted material].