[berkman] Yochai Benkler on his new book
Yochai Benkler is giving a talk about his new and wonderful book, The Penguin and the Leviathan. (I interviewed him about it here.)
NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.
Yochai begins by pointing to Occupy Wall Street as teaching us much about cooperation and collaboration.
On Oct. 23, 2008, Alan Greenspan acknowledged to Rep. Henry Waxman that his model of the world was wrong. “I made a mistake in presuming that the self-interest of organizations…was such that they were best capable of protecting their own shareholders.” We live in a world built around a mistaken model of human motivation, Yochai says. The basic error is not that we are sometimes self-interested, for we are. The mistake is thinking we could build our systems on the assumption that we are more or less uniformly self-interested. We’ve built systems that try to get incentives right, or that try to get punishment right. But the scientific case for uniform selfishness has now retreated, and we should model our systems on this new knowledge.
In 1968 Gary Becker said that we could model crime as a pay-off calculation: the benefits of the crime vs. the cost of the penalty. So, we get Three Strikes laws. In another domain, the Jensen and Murphy paper on incentive pay for top management assumes that every level of the enterprise will try to shirk and put more in their own pockets, so (the theory goes) you should increase the stock options at the top. But that hasn’t worked very well for companies in terms of returns to stockholders; you get misalignment from this model. This model is like Becker’s: it’s about getting the incentives and penalties right. Yochai tells of a mother trying to get her three-year-old into a car by threatening to take five cents off the child’s allowance. “This model penetrates everywhere,” he says.
This intellectual arc is everywhere. Evolutionary biology has moved from group selection to the selfish gene, through kin altruism and direct reciprocity. Economics also: strong assumptions of self-interest. Political theory, from Downs to Olson to Hardin: all assume the inability to come together around a shared set of goals. Management science and organizational sociology: from Taylor to Weber to Schumpeter through Williamson. Although there are counter-narratives in each of these fields, selfishness is the dominant model.
And yet online we see how easily we cooperate. “Things that shouldn’t have worked, have worked.” He draws a 2×2: market-based vs. non-market-based, and decentralized vs. centralized. In each quadrant, there have been huge successes of social production. This is in fact a new solution space.
In each of the aforementioned disciplines, there is now a development of more complex models that take account of cooperation. E.g., evolution: indirect reciprocity; cooperation emerges much more easily in the new models. Economics: a shift toward experiments and models that move away from self-interest, and the development of neuroeconomics. Political theory: Elinor Ostrom on the commons. Management science: work on team production and networks; high-commitment, high-performance organizations.
The core insight of all of these fields is that the model of uniform self-interest is inadequate. Beyond that, there is debate.
Yochai compares Dawkins in The Selfish Gene (1976) and Martin Nowak (2006). Dawkins says we are born selfish. Nowak says: “Perhaps the most remarkable aspect of evolution is its ability to generate cooperation in a competitive world.” It’s an old debate, Yochai says, citing Kropotkin vs. Spencer, and Boas and Margaret Mead. The debate is now swinging toward Kropotkin, e.g., neural research that shows empathy via brain scans: a partner’s brain lights up in the same way when s/he sees the other person undergoing pain. He points to the effect of oxytocin on trust, and for the first time in Berkman history makes a reference to monogamous voles.
Why does this matter, Yochai asks. He refers to an experiment by Lee Ross et al. Take a standard Prisoner’s Dilemma. All predictions say that everyone should defect. Give the same game to American students, Israeli fighter pilots, etc., telling them either “You’re going to play the Community Game” or “You’re going to play the Wall Street Game.” In the former, 70% opened cooperatively and kept cooperating through the seven rounds; in the latter, only 30% opened cooperatively. The 30% who defect even in the Community Game represent a significant segment that has to be dealt with in a cooperative system. But there’s a big middle that will go one way or the other depending on what they understand their context to be. So, concludes Yochai, it’s important to design systems that let the middle understand the system as cooperative.
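For readers who haven’t seen why “all predictions say that everyone should defect,” the claim follows from the standard Prisoner’s Dilemma payoff structure: defecting pays more no matter what the other player does. A minimal sketch (the payoff numbers here are illustrative of the standard T > R > P > S ordering, not figures from the talk):

```python
# One-shot Prisoner's Dilemma payoffs for "me", indexed by
# (my_move, their_move). C = cooperate, D = defect.
# Illustrative values satisfying the canonical ordering T > R > P > S.
PAYOFF = {
    ("C", "C"): 3,  # reward for mutual cooperation (R)
    ("C", "D"): 0,  # sucker's payoff (S)
    ("D", "C"): 5,  # temptation to defect (T)
    ("D", "D"): 1,  # punishment for mutual defection (P)
}

def best_response(their_move):
    """The move that maximizes my payoff against a fixed opponent move."""
    return max("CD", key=lambda my: PAYOFF[(my, their_move)])

# Defection is a dominant strategy: it is the best response to either move...
assert best_response("C") == "D"
assert best_response("D") == "D"
# ...even though mutual cooperation (3, 3) beats mutual defection (1, 1).
assert PAYOFF[("C", "C")] > PAYOFF[("D", "D")]
```

The self-interest model therefore predicts mutual defection every time; the Ross framing experiment is striking precisely because relabeling the game moved real behavior so far from that prediction.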
So, we move from tough-on-crime to community policing. That changes all sorts of systems, including technical, organizational, institutional, and social. Community policing has been widely adopted because it’s generally successful. We see that we have success with actual practices that depend not on reward, punishment, and monitoring, but on cooperation. We’re finding out about this online, but it’s not happening only online.
Yochai says that he’s just at the beginning of an investigation about this. There’s a limit to how much we can get out of evolution, he says. It’s hard to design systems on the basis of evolution. Instead, we see a lot of work across many different systems.
But we still want to know: Won’t money help? The answer lies in what’s called “crowding out.” We care about material interests, but we also care about fairness. We have emotional needs. We have social motivations. What if these interests don’t align? Consider the Titmuss-Arrow debate of 1970-71 about the motivations for donating blood. A 2008 study (Mellström and Johannesson) paid people money to give blood; the payment suppressed donations, but when people were allowed to give the money away, the number who gave blood went back up. Adding money can suppress an activity more than it increases it. That’s crowding out. It’s not uniform across the population. Designing systems is much harder than coming up with a material reward that appeals to people’s self-interest. We do not have full answers here.
Think of cooperative human systems along three vectors. 1. Conceptual: from rationality as universal self-interest to a diversity of motivations. 2. Design: cooperative human systems built on behaviorally realistic, evidence-based design. 3. Politics: we cannot separate incentives from fairness, ethics, empathy, and solidarity.
Yochai points to a number of factors, but focuses on fairness: of outcomes, of intentions, and of processes.
Outcomes: What counts as fair differs across cultures, especially when you move outside of market economies. In market societies, 50:50 is the norm for fairness. Once it gets to 30:70, people will walk away. But you can change that if you change the framing, e.g., “You got lucky.” But there is no single theory of justice. Yochai looks at a study of the cement-trucking industry. It turns out that there are large pay disparities. Companies also differ in what they say they pay for: performance, or equally for time. They don’t always do what they say, though. But when you look at real performance measures, you have fewer accidents and out-of-service events if the company is accurate about what it says, no matter what it says.
We don’t have an agreed-upon theory of justice, he says. This explains the 99% vs. 53% debate around Occupy Wall Street. This is a debate over basic moral commitments, without which a system cannot function. There is no way to resolve it either through neutral principles or by efficiency arguments.
Intentions also matter to fairness. Where bad intentions are excluded (e.g., the outcome was just a roll of the dice), there is much less negative reciprocity.
Processes: Tyler (2003) showed that procedural justice correlates with internalized compliance. Yochai points to the militarization of the police as they deal with OWS. The image projected to the crowd is one of a lack of regard for process. He compares this to a massive demonstration in Israel at which the police stood a good distance away, and a different relationship was fostered.
We can see a revival of the “sharing nicely” idea we teach our children. In science. In business. Science is beginning to push back against the assumption of selfishness. It turns out that we aren’t universally self-interested. Different people respond differently, and each person responds differently in different contexts.
We need a new field of cooperative human systems design that accounts for the diversity of motivation, and that takes seriously the issue of “crowding out”: adding incentives can result in worse outcomes.
And, Yochai concludes, we need a renewed view of our shared humanity.
Q: Fascinating. But: The passage from evolution to the social sciences has long been discredited. Also, it’s too simple to say that the solution to the banking problem is that we need more cooperation. The banks are supported by a set of interests bigger than that.
A: You say sociobiology has been discredited. That was true of the early to mid 1980s but is no longer a good description. The social sciences and anthropology have been moving to evolutionary models. Economics too. What seemed resolved in the 1980s is now, especially in the social sciences, unresolved. Second, sure, bankers self-select and control the system. The real answer is that it’s a lot of work. When you have a system optimized for money, and money is the social signal, it self-selects for people driven by that. We need long-term interventions to increase cooperation. E.g., the person who can work with Open Source at, say, IBM, is different from the person who can work her/his way up a hierarchy; the company therefore has to train itself to value those who cooperate.
Q: I just went through MIT’s tutorial that instructed me how my ideas would be licensed. I said that maybe there should be information in your office about how to contribute more openly. How do you systematize open, collaborative forms across the entire educational system?
A: Lots of people in this room are working on this problem in different ways. We fight, we argue, we persuade. Look at university open access publication. We use our power within the hierarchy of universities to raise a flag and to say we can do it a new way. That allows the next person to use us as an example. After I released Wealth of Networks for free on the Web, I got emails from all sorts of people wanting to know how to negotiate that deal for themselves. Universities should be easy.
Q: What are the burning policy implications of this shift in the way we rule the world? What would you change first?
A: I should note that I don’t address that in the book. We need an assessment of community policing and the big board [?] approach. The basic question is whether we continue to build a society based on maximizing total growth, or one that trades off some growth for a more equitable distribution of outcomes. The point is much broader than open access, patents, copyright, etc. The deregulatory governance model is based on an erroneous model of interests. But all of my work is done at the micro level, not the level of organizations. Still, we know that the idea that musicians need the payoffs afforded by infinite copyright is false; we have empirical data about that. So there are places where the relation between micro interests and institutional interventions is tight. But I don’t talk about that much in the book.
Q: I’ve looked at pay inequality in Japan and the US. The last thing that matters to the level of compliance with regulations is the gap between CEO and workers. The deterrents are very effective in the US, explaining [couldn’t hear it]. Compliance is much better in the US because the penalties are effective deterrents.
A: First, once you’re talking about the behavior of an organization, we don’t have the same kind of data on what happens within a corporate decision. When people see themselves as agents, there can be conflicts between the individual and the organization. For that you need external enforcement.
Q: Jail time makes a huge difference.
A: Then how do you explain the findings that the amount of stock options predicts the probability of tax fraud? Same baseline enforcement, but whether executives held stock options predicts tax fraud. Adding money and punishment certainly has an effect on behavior. But it depends on whether that intervention has better effects than other interventions. And we only have a little bit of data.
Q: If a high school principal came to you who serves many interests and types of people, how could your ideas influence her or him?
A: My mother founded two schools and a volunteer organization. The lessons are relatively straightforward: Higher degrees of authority and trust, structure with clearly set goals, teamwork, less hierarchical distance between students and teachers, less high-stress testing.
Categories: berkman, philosophy, policy, science, social media dw