The Phenomenal World

What happens when you give people cash? How do they use the money, and how does it change their lives? Every cash study on this list is different: the studies vary in intervention type, research design, location, size, disbursement amount, and effects measured. The interventions listed here include basic income and its proxies: earned income tax credits, negative income taxes, conditional cash transfers, and unconditional cash transfers. This variety prevents us from making broad claims about the effects of universal basic income, but it also gives the review its value: it captures the scope of research in the field, the kinds of research designs that have been used, and the effects that have been estimated, measured, and reported. The review also allows us to draw some revealing distinctions across experimental designs.

If you’re interested in creating a UBI policy, there are roughly three levels of effects (after ODI) that you can examine.

⤷ Full Article

Introduction

Existing models for studying the returns to college often present a single-dimensional account of the associated costs and benefits. Such measures miss the web of interactions involved in the affordability of and returns to college. To take the clearest example, even calculating what an individual will pay for college involves capturing government, school, and private forms of financing, each of which carries different costs of capital to the student, distinct legal and regulatory constraints, and differing repayment structures. Similarly, imputing a baseline benefit of college from the net present value of lifetime earnings relative to non-graduates paints a misleadingly simple picture: a student’s perceived return to college, for instance, depends on how risk averse he or she is.
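To see what the single-dimensional account leaves out, here is a minimal sketch of the kind of NPV calculation criticized above. All figures (earnings premium, tuition, discount rate) are hypothetical, and the model deliberately reproduces the criticized simplifications: one source of financing, no risk aversion, and foregone earnings omitted.

```python
# A minimal sketch (hypothetical figures throughout) of the single-dimensional
# "returns to college" calculation criticized above: discount the
# graduate/non-graduate earnings gap and tuition back to the present.

def npv_of_college(earnings_premium, tuition_per_year, discount_rate,
                   career_years, school_years=4):
    """Net present value of attending college versus not attending."""
    # Cost side: tuition while enrolled (foregone earnings omitted for brevity).
    costs = sum(tuition_per_year / (1 + discount_rate) ** t
                for t in range(school_years))
    # Benefit side: the annual earnings premium, starting after graduation.
    benefits = sum(earnings_premium / (1 + discount_rate) ** t
                   for t in range(school_years, school_years + career_years))
    return benefits - costs

# Hypothetical: a $15,000 annual premium over a 40-year career,
# $20,000/year tuition, and a 5% discount rate.
print(round(npv_of_college(15_000, 20_000, 0.05, 40)))
```

A single scalar output like this is exactly what the models above report; none of the financing structure, regulatory constraints, or risk preferences discussed in this piece survives the calculation.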

Moreover, it is not only the financial cost but also the opportunity cost that is relevant to the decision of when, how, and whether to attend a post-secondary institution. In the generic case, students forgo four years of industry training and countless other avenues of personal and human capital development to attend school—a period of time that, for the majority, is in fact longer and less consistent than the formal duration of any given program.

⤷ Full Article

The past few years have made abundantly clear that the artificially intelligent systems that organizations increasingly rely on to make important decisions can exhibit morally problematic behavior if not properly designed. Facebook, for instance, uses artificial intelligence to screen targeted advertisements for violations of applicable laws or its community standards. While offloading the sales process to automated systems allows Facebook to cut costs dramatically, design flaws in these systems have facilitated the spread of political misinformation, malware, hate speech, and discriminatory housing and employment ads. How can the designers of artificially intelligent systems ensure that they behave in ways that are morally acceptable: ways that show appropriate respect for the rights and interests of the humans they interact with?

The nascent field of machine ethics seeks to answer this question by conducting interdisciplinary research at the intersection of ethics and artificial intelligence. This series of posts will provide a gentle introduction to this new field, beginning with an illustrative case study taken from research I conducted last year at the Center for Artificial Intelligence in Society (CAIS). CAIS is a joint effort between the Suzanne Dworak-Peck School of Social Work and the Viterbi School of Engineering at the University of Southern California, and is devoted to “conducting research in Artificial Intelligence to help solve the most difficult social problems facing our world.” This makes the center’s efforts part of a broader movement in applied artificial intelligence commonly known as “AI for Social Good,” the goal of which is to address pressing and hitherto intractable social problems through the application of cutting-edge techniques from the field of artificial intelligence.

⤷ Full Article

U.S. politics is beset by increasing polarization. Ideological clustering is common; partisan antipathy is increasing; extremity is becoming the norm (Dimock et al. 2014). This poses a serious collective problem. Why is it happening? There are two common strands of explanation.

The first is psychological: people exhibit a number of “reasoning biases” that predictably lead them to strengthen their initial opinions on a given subject matter (Kahneman et al. 1982; Fine 2005). They tend to interpret conflicting evidence as supporting their opinions (Lord et al. 1979); to seek out arguments that confirm their prior beliefs (Nickerson 1998); to become more confident of the opinions shared by their subgroups (Myers and Lamm 1976); and so on.

The second strand of explanation is sociological: the modern information age has made it easier for people to fall into informational traps. They are now able to use social media to curate their interlocutors and wind up in “echo chambers” (Sunstein 2017; Nguyen 2018); to customize their web browsers to construct a “Daily Me” (Sunstein 2009, 2017); to uncritically consume exciting (but often fake) news that supports their views (Vosoughi et al. 2018; Lazer et al. 2018; Robson 2018); and so on.

So we have two strands of explanation for the rise of American polarization. We need both. The psychological strand on its own is not enough: in its reliance on fully general reasoning tendencies, it cannot explain what has changed, leading to the recent rise of polarization. But neither is the sociological strand enough: informational traps are only dangerous for those susceptible to them. Imagine a group of people who were completely impartial in searching for new information, in weighing conflicting studies, in assessing the opinions of their peers, etc. The modern internet wouldn’t force them to end up in echo chambers or filter bubbles—in fact, with its unlimited access to information, it would free them to form opinions based on ever more diverse and impartial bodies of evidence. We should not expect impartial reasoners to polarize, even when placed in the modern information age.
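The interaction of the two strands can be made vivid with a toy simulation (my illustration; it appears in none of the sources cited above). A population of agents sees the same perfectly mixed evidence stream; the only parameter varied is whether agents weigh evidence impartially or discount disconfirming evidence, in the spirit of the biased assimilation findings (Lord et al. 1979).

```python
# A toy model (illustrative only) contrasting impartial updating with biased
# assimilation. All agents see the same perfectly mixed evidence stream;
# biased agents discount evidence that conflicts with their current opinion.

import random

def update(opinion, evidence, bias):
    """Move opinion toward the evidence; bias discounts disconfirming signals."""
    confirms = (evidence - 0.5) * (opinion - 0.5) > 0
    weight = 0.1 if confirms else 0.1 * (1 - bias)  # bias=0 means impartial
    return opinion + weight * (evidence - opinion)

def opinion_spread(bias, n_agents=100, steps=500, seed=0):
    rng = random.Random(seed)
    opinions = [rng.uniform(0.3, 0.7) for _ in range(n_agents)]
    for _ in range(steps):
        evidence = rng.choice([0.0, 1.0])  # mixed, truth-neutral evidence
        opinions = [update(o, evidence, bias) for o in opinions]
    return max(opinions) - min(opinions)

print("impartial spread:", round(opinion_spread(bias=0.0), 3))  # stays clustered
print("biased spread:   ", round(opinion_spread(bias=0.9), 3))  # drifts apart
```

Impartial agents are pulled together by the shared evidence; heavily biased agents, facing the very same stream, drift to opposite extremes. The cartoon matches the argument above: the informational environment only polarizes those whose reasoning is already biased.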

⤷ Full Article

CONTENT MODERATION | CARBON CAPTURE AND STORAGE | RURAL POLICIES

HOW TO HANDLE BAD CONTENT

Two articles illustrate the state of thought on moderating user-generated content

Ben Thompson of Stratechery rounds up recent news on content moderation on Twitter/Facebook/YouTube and makes a recommendation:

“Taking political sides always sounds good to those who presume the platforms will adopt positions consistent with their own views; it turns out, though, that while most of us may agree that child exploitation is wrong, a great many other questions are unsettled.

“That is why I think the line is clearer than it might otherwise appear: these platform companies should actively seek out and remove content that is widely considered objectionable, and they should take a strict hands-off policy to everything that isn’t (while — and I’m looking at you, Twitter — making it much easier to avoid unwanted abuse from people you don’t want to hear from). Moreover, this approach should be accompanied by far more transparency than currently exists: YouTube, Facebook, and Twitter should make explicitly clear what sort of content they are actively policing, and what they are not; I know this is complicated, and policies will change, but that is fine — those changes can be transparent too.”

Full blog post here.

The Social Capital newsletter responds:

“… If we want to really make progress towards solving these issues we need to recognize there’s not one single type of bad behavior that the internet has empowered, but rather a few dimensions of them.”

The piece goes on to describe four types of bad content. Link.

Michael comments: The discussion of content moderation (and digital curation more broadly) conspicuously ignores the possibility of algorithmic methods for analyzing and disseminating (ethically or evidentiarily) valid information. Thompson and Social Capital default to traditional and cumbersome forms of outright censorship, rather than methods to “push” better content.

We'll be sharing more thoughts on this research area in future letters.

⤷ Full Article

HIGHER EDUCATION | EXPLANATION, PART II

THE FUTURE OF UNDERGRADUATE EDUCATION

A new report argues that quality, not access, is the pivotal challenge for colleges and universities

From the American Academy of Arts and Sciences, a 112-page report with "practical and actionable recommendations to improve the undergraduate experience":

"Progress toward universal education has expanded most recently to colleges and universities. Today, almost 90 percent of high school graduates can expect to enroll in an undergraduate institution at some point during young adulthood and they are joined by millions of adults seeking to improve their lives. What was once a challenge of quantity in American undergraduate education, of enrolling as many students as possible, is now a challenge of quality—of making sure that all students receive the rigorous education they need to succeed, that they are able to complete the studies they begin, and that they can do this affordably, without mortgaging the very future they seek to improve."

Link to the full report. Co-authors include Gail Mellow, Sherry Lansing, Mitch Daniels, and Shirley Tilghman. ht Will, who highlights a few of the report's recommendations that stand out:

  • From page 40: "Both public and private colleges and universities as well as state policy-makers [should] work collaboratively to align learning programs and expectations across institutions and sectors, including implementing a transferable general education core, defined transfer pathway maps within popular disciplines, and transfer-focused advising systems that help students anticipate what it will take for them to transfer without losing momentum in their chosen field."
  • From page 65: "Many students, whether coming straight out of high school or adults returning later to college, face multiple social and personal challenges that can range from homelessness and food insecurity to childcare, psychological challenges, and even imprisonment. The best solutions can often emerge from building cooperation between a college and relevant social support agencies."
  • From page 72: "Experiment with and carefully assess alternatives for students to manage the financing of their college education. For example, income-share agreements allow college students to borrow from colleges or investors, which then receive a percentage of the student’s after-graduation income." (A toy comparison of the ISA mechanism follows this list.)
  • On a related note, see this 2016 paper from the Miller Center at the University of Virginia: "Although interest in the ISA as a concept has ebbed and flowed since Milton Friedman first proposed it in the 1950s, today it is experiencing a renaissance of sorts as new private sector partners and institutions look to make the ISA a feasible option for students. ISAs offer a novel way to inject private capital into higher education systems while striking a balance between consumer preferences and state needs for economic skill sets. The different ways ISAs can be structured make them highly suitable as potential solutions for many states’ education system financing problems." Link.
  • Meanwhile, Congress is working on the reauthorization of the Higher Education Act: "Much of the proposal that House Republicans released last week is controversial and likely won’t make it into the final law, but the plan provides an indication of Congressional Republicans’ priorities for the nation’s higher education system. Those priorities include limiting the federal government’s role in regulating colleges, capping graduate student borrowing, making it easier for schools to limit undergraduate borrowing — and overhauling the student loan repayment system. Many of those moves have the potential to create a larger role for private industry." Link.
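As promised above, a toy comparison of the income-share mechanism. All terms (the 5% income share, the loan rate, the income path) are invented for illustration; neither the report nor the Miller Center paper specifies them.

```python
# A toy comparison (hypothetical terms, not from the report) of an
# income-share agreement against a fixed-payment loan. An ISA takes a fixed
# percentage of post-graduation income for a set term, so total payments
# scale with realized earnings rather than with the amount borrowed.

def isa_total(incomes, share=0.05):
    """Total paid under an ISA: a fixed share of each year's income."""
    return sum(share * income for income in incomes)

def loan_total(principal, rate=0.06, years=10):
    """Total paid on a standard amortized loan (annual payments)."""
    annual_payment = principal * rate / (1 - (1 + rate) ** -years)
    return annual_payment * years

# Hypothetical earner: income starts at $40,000 and grows 4% a year
# over a 10-year ISA term, against a $30,000 loan at 6% over 10 years.
incomes = [40_000 * 1.04 ** t for t in range(10)]
print("ISA total paid: ", round(isa_total(incomes)))
print("Loan total paid:", round(loan_total(30_000)))
```

The point of the structure is visible in the code: a low-earning graduate pays little under the ISA but owes the same fixed amount under the loan, which is the risk transfer both documents describe.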
⤷ Full Article

EXPLAINING ARTIFICIAL INTELLIGENCE | NEW METRICS

ARTIFICIAL AGENCY AND EXPLANATION

The gray box of XAI

A recent longform piece in the New York Times examines the problem of explaining artificial intelligence. The stakes are high because of the European Union’s controversial and unclear “right-to-explanation” law, which takes effect in May 2018.

“Instead of certainty and cause, A.I. works off probability and correlation. And yet A.I. must nonetheless conform to the society we’ve built — one in which decisions require explanations, whether in a court of law, in the way a business is run or in the advice our doctors give us. The disconnect between how we make decisions and how machines make them, and the fact that machines are making more and more decisions for us, has birthed a new push for transparency and a field of research called explainable A.I., or X.A.I. Its goal is to make machines able to account for the things they learn, in ways that we can understand. But that goal, of course, raises the fundamental question of whether the world a machine sees can be made to match our own.”

Full article by CLIFF KUANG here. This page provides a short overview of DARPA's XAI (Explainable Artificial Intelligence) program.

An interdisciplinary group addresses the problem:

"Contrary to popular wisdom of AI systems as indecipherable black boxes, we find that this level of explanation should often be technically feasible but may sometimes be practically onerous—there are certain aspects of explanation that may be simple for humans to provide but challenging for AI systems, and vice versa. As an interdisciplinary team of legal scholars, computer scientists, and cognitive scientists, we recommend that for the present, AI systems can and should be held to a similar standard of explanation as humans currently are; in the future we may wish to hold an AI to a different standard."

Full article by FINALE DOSHI-VELEZ et al. here. ht Margarita. For the layperson, the most interesting part of the article may be its general overview of societal norms around explanation and of explanation in the law.

Michael comments: Human cognitive systems have generated similar questions in vastly different contexts. The problem of chick-sexing (see Part 3) gave rise to a mini-literature within epistemology.

From Michael S. Moore’s book Law and Psychiatry: Rethinking the Relationship: “A full explanation in terms of reasons for action requires two premises: the major premise, specifying the agent’s desires (goals, objectives, moral beliefs, purposes, aims, wants, etc.), and the minor premise, specifying the agent’s factual beliefs about the situation he is in and his ability to achieve, through some particular action, the object of his desires.” Link. ht Margarita

  • A Medium post with an illustrated summary of some XAI techniques. Link.
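In the same spirit, here is a minimal sketch of one widely used model-agnostic explanation technique, permutation importance. This is an illustration of the genre only, not a method from DARPA's program or the articles above: shuffle one input feature and measure how much the model's accuracy drops; large drops flag the features the model relies on.

```python
# A minimal sketch of permutation importance (illustrative; not a specific
# method from the sources above). Shuffle one feature's column and measure
# the drop in accuracy: large drops mark features the model depends on.

import random

def permutation_importance(model, X, y, feature, n_repeats=20, seed=0):
    """Mean accuracy drop when `feature`'s column is shuffled."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(model(r) == label for r, label in zip(rows, y)) / len(y)

    baseline = accuracy(X)
    drops = []
    for _ in range(n_repeats):
        column = [row[feature] for row in X]
        rng.shuffle(column)
        shuffled = [row[:feature] + [value] + row[feature + 1:]
                    for row, value in zip(X, column)]
        drops.append(baseline - accuracy(shuffled))
    return sum(drops) / n_repeats

# Toy model: predicts 1 when feature 0 exceeds 0.5; feature 1 is ignored.
model = lambda row: int(row[0] > 0.5)
X = [[0.1, 0.9], [0.2, 0.1], [0.7, 0.8], [0.9, 0.2]]
y = [0, 0, 1, 1]
print(permutation_importance(model, X, y, feature=0))  # substantial drop
print(permutation_importance(model, X, y, feature=1))  # zero drop
```

Explanations of this kind are the “gray box” of the section title: they summarize what the model is sensitive to without claiming to reproduce its internal reasoning.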
⤷ Full Article

PREDICTIVE JUSTICE | FACTORY TOWN, COLLEGE TOWN

PREDICTIVE JUSTICE

How to build justice into algorithmic actuarial tools

Key notions of fairness contradict each other—something of an Arrow’s Theorem for criminal justice applications of machine learning.

"Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Moreover, even satisfying all three conditions approximately requires that the data lie in an approximate version of one of the constrained special cases identified by our theorem. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them."

Full paper from JON KLEINBERG, SENDHIL MULLAINATHAN and MANISH RAGHAVAN here. h/t research fellow Sara, who recently presented on bias in humans, courts, and machine learning algorithms, and who was the source for all the papers in this section.

In a Twitter thread, ARVIND NARAYANAN describes the issue in more casual terms.

"Today in Fairness in Machine Learning class: a comparison of 21 (!) definitions of bias and fairness [...] In CS we're used to the idea that to make progress on a research problem as a community, we should first all agree on a definition. So 21 definitions feels like a sign of failure. Perhaps most of them are trivial variants? Surely there/s one that's 'better' than the rest? The answer is no! Each defn (stat. parity, FPR balance, contextual fairness in RL...) captures something about our fairness intuitions."

Link to Narayanan’s thread.
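To make a few of these definitions concrete, the following sketch (illustrative; not code from Kleinberg et al. or Narayanan) computes three commonly cited criteria for a binary risk score across two groups. The records and the 0.5 threshold are invented for the example.

```python
# An illustrative sketch of three fairness criteria for a classifier that
# outputs a risk score in [0, 1]. `records` holds (group, score, outcome)
# triples, with outcome 1 if the predicted event actually occurred.

def by_group(records, group):
    return [(s, y) for g, s, y in records if g == group]

def positive_rate(rows, threshold=0.5):
    """Share flagged as high risk; statistical parity compares this across groups."""
    return sum(s >= threshold for s, _ in rows) / len(rows)

def false_positive_rate(rows, threshold=0.5):
    """Share of true negatives flagged anyway; FPR balance compares this."""
    negatives = [(s, y) for s, y in rows if y == 0]
    return sum(s >= threshold for s, _ in negatives) / len(negatives)

def precision(rows, threshold=0.5):
    """Among the flagged, the share who had the outcome; a calibration-style check."""
    flagged = [(s, y) for s, y in rows if s >= threshold]
    return sum(y for _, y in flagged) / len(flagged)

records = [  # invented data for illustration
    ("A", 0.9, 1), ("A", 0.7, 0), ("A", 0.4, 0), ("A", 0.2, 0),
    ("B", 0.8, 1), ("B", 0.6, 1), ("B", 0.6, 0), ("B", 0.3, 0),
]
for group in ("A", "B"):
    rows = by_group(records, group)
    print(group, positive_rate(rows), false_positive_rate(rows), precision(rows))
```

Because the two invented groups have unequal base rates (one in four versus one in two), the three numbers cannot all be equalized at once, which is the practical face of the impossibility result quoted above.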

Jay comments: Kleinberg et al. describe their result as choosing between conceptions of fairness. It’s not obvious, though, that this is the correct description. The criteria (calibration and balance) discussed aren’t really conceptions of fairness; rather, they’re (putative) tests of fairness. Particular questions about these tests aside, we might have a broader worry: if fairness is not an extensional property that depends upon, and only upon, the eventual judgments rendered by a predictive process, exclusive of the procedures that led to those judgments, then no extensional test will capture fairness, even if this notion is entirely unambiguous and determinate. It’s worth considering Nozick’s objection to “pattern theories” of justice for comparison, and (procedural) due process requirements in US law.

⤷ Full Article

CHILDREN'S RECSYS | RECSYS NETWORKS AND EMOTION TRANSMISSION | NEWS PERIPHERY AND CORE

"A DOLL POSSESSED BY A DEMON"

Recommender systems power YouTube's controversial kids' videos

Familiar cartoon characters are placed in bizarre scenarios, sometimes by human content creators, sometimes by automated systems, for the purpose of attracting views and ad money. First, from the New York Times:

“But the app [YouTube Kids] contains dark corners, too, as videos that are disturbing for children slip past its filters, either by mistake or because bad actors have found ways to fool the YouTube Kids algorithms.

“In recent months, parents like Ms. Burns have complained that their children have been shown videos with well-known characters in violent or lewd situations and other clips with disturbing imagery, sometimes set to nursery rhymes. Many have taken to Facebook to warn others, and share video screenshots showing moments ranging from a Claymation Spider-Man urinating on Elsa of ‘Frozen’ to Nick Jr. characters in a strip club.”

Full piece by SAPNA MAHESHWARI in the Times here.

On Medium, JAMES BRIDLE expands on the topic, and criticizes the structure of YouTube itself for incentivizing these kinds of videos, many of which have millions of views.

“These videos, wherever they are made, however they come to be made, and whatever their conscious intention (i.e. to accumulate ad revenue) are feeding upon a system which was consciously intended to show videos to children for profit. The unconsciously-generated, emergent outcomes of that are all over the place.

“While it is tempting to dismiss the wilder examples as trolling, of which a significant number certainly are, that fails to account for the sheer volume of content weighted in a particularly grotesque direction. It presents many and complexly entangled dangers, including that, just as with the increasing focus on alleged Russian interference in social media, such events will be used as justification for increased control over the internet, increasing censorship, and so on.”

Link to Bridle’s piece here.

⤷ Full Article

FEEDBACK LOOPS AND SOCIAL MEDIA | MESO-LEVEL CAUSES | ETHNOGRAPHY OF BUREAUCRACY

FEED FEEDBACK

Sociologist Zeynep Tufekci engages with Adam Mosseri, who runs the Facebook News Feed

Tufekci: “…Facebook does not ask people what they want, in the moment or any other way. It sets up structures, incentives, metrics & runs with it.”

Mosseri: “We actually ask 10s of thousands of people a day how much they want to see specific stories in the News Feed, in addition to other things.”

Tufekci: “That’s not asking your users, that’s research on your product. Imagine a Facebook whose customers are users—you’d do so much differently. I mean asking all people, in deliberate fashion, with sensible defaults—there are always defaults—even giving them choices they can change…Think of the targeting offered to advertisers—with support to make them more effective—and flip the possibilities, with users as customers. The users are offered very little in comparison. The metrics are mostly momentary and implicit. That’s a recipe to play to impulse.”

The tweets are originally from Zeynep Tufekci in response to Benedict Evans (link), but the conversation is much easier to read in Hamza Shaban’s screenshots here.

See the end of this newsletter for an extended comment from Jay.

  • On looping effects (paywall): “This chapter argues that today's understanding of causal processes in human affairs relies crucially on concepts of ‘human kinds’ which are a product of the modern social sciences, with their concern for classification, quantification, and intervention. Child abuse, homosexuality, teenage pregnancy, and multiple personality are examples of such recently established human kinds. What distinguishes human kinds from ‘natural kinds’, is that they have specific ‘looping effects’. By coming into existence through social scientists' classifications, human kinds change the people thus classified.” Link. ht Jay

THE MESO-LEVEL

Mechanisms and causes between micro and macro

Daniel Little, the philosopher of social science behind Understanding Society, has written numerous posts on the topic. Begin with this one from 2014:

“It is fairly well accepted that there are social mechanisms underlying various patterns of the social world — free-rider problems, communications networks, etc. But the examples that come readily to mind are generally specified at the level of individuals. The new institutionalists, for example, describe numerous social mechanisms that explain social outcomes; but these mechanisms typically have to do with the actions that purposive individuals take within a given set of rules and incentives.

“The question here is whether we can also make sense of the notion of a mechanism that takes place at the social level. Are there meso-level social mechanisms? (As always, it is acknowledged that social stuff depends on the actions of the actors.)”

In the post, Little defines a causal mechanism and a meso-level mechanism, then offers example research.

“…It is possible to identify a raft of social explanations in sociology that represent causal assertions of social mechanisms linking one meso-level condition to another. Here are a few examples:

  • Al Young: decreasing social isolation causes rising inter-group hostility (link)
  • Michael Mann: the presence of paramilitary organizations makes fascist mobilization more likely (link)
  • Robert Sampson: features of neighborhoods influence crime rates (link)
  • Chuck Tilly: the availability of trust networks makes political mobilization more likely (link)
  • Robert Brenner: the divided sovereignty system of French feudalism impeded agricultural modernization (link)
  • Charles Perrow: legislative control of regulatory agencies causes poor enforcement performance (link)”

More of Little’s posts on the topic are here. ht Steve Randy Waldman

⤷ Full Article