This article is reprinted from CampusProgress.org, the youth-oriented magazine of the Center for American Progress.
Have you had your tonsils removed? Did you ever eat an order of freedom fries? Has your government ever invaded a country on the mistaken impression that it had weapons of mass destruction? If you answered yes, you may have fallen victim to one of the dozens of follies Cass Sunstein says can emerge in group decision-making. In his new book Infotopia: How Many Minds Produce Knowledge, Sunstein explores how people can gather accurate information from groups, censuring traditional means of gathering information for their failures and viewing new methods—wikis, open source methods, prediction markets, and blogs—with cautious optimism. In the end, he comes to a few questionable conclusions, especially about blogs, but it’s a valuable read for people interested in the future of politics and business.
For a lesson in the flaws of group decision-making, take tonsillectomy. Sunstein cites a New England Journal of Medicine article exploring “bandwagon diseases,” in which doctors become convinced that certain symptoms ought to be interpreted and treated in a particular way. The tonsillectomy (and the ensuing ice cream diet) seems “to have been adopted initially based on weak information,” the article notes. And Sunstein thinks he’s found the culprit: “information cascades”—the effect, something like a vicious rumor, of a fact or factoid that makes its rounds and gains outsized influence over groups. If only doctors had all of the information available when the tonsil procedure was catching on, they might not have subjected so many children to possibly unnecessary surgery.
It’s the folly of group decision-making with partial information that drives Sunstein to look for new solutions. You might think that simply taking a vote could get all the information in the right place. This might have worked for medical researchers and tonsillectomy, but there’s a catch. Sunstein, a law professor at the University of Chicago, turns to something called the Condorcet Jury Theorem, which states in part that in a jury, the probability that the right decision will be reached increases with the size of the jury, but only if the average juror is more likely than not to come up with the right decision on his or her own. If members of a jury are individually less than 50 percent likely to get the right answer, then their deliberation magnifies the problem. Groups like these are wrong, Sunstein says, because of prejudices (freedom fries, anyone?), confusion, and incompetence.
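The theorem’s double edge is easy to see with a quick simulation (a minimal sketch with made-up probabilities, not an example from the book): when each juror is right 60 percent of the time, majority votes get more reliable as the jury grows; when each juror is right only 40 percent of the time, bigger juries get things wrong more often.

```python
import random

def jury_accuracy(p, jury_size, trials=20000):
    """Estimate the probability that a simple majority vote is correct
    when each juror is independently right with probability p."""
    correct = 0
    for _ in range(trials):
        right_votes = sum(random.random() < p for _ in range(jury_size))
        if right_votes > jury_size / 2:
            correct += 1
    return correct / trials

random.seed(0)
for p in (0.6, 0.4):
    # Accuracy of juries of 1, 11, and 101 members.
    results = [round(jury_accuracy(p, n), 2) for n in (1, 11, 101)]
    print(f"p = {p}: {results}")
```

With p = 0.6, accuracy climbs toward certainty as the jury grows; with p = 0.4, it collapses toward zero, which is Sunstein’s point about groups of mistaken individuals.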
The book first examines what many of us would recognize as our most familiar method of group decision-making: deliberation. This is what happens in meetings, in pre-Google efforts to settle a bet, and in most bureaucratic situations. But group members withhold knowledge in deliberation all the time. Sometimes you don’t want to rock the boat or upstage your boss. Sometimes you can benefit from keeping the information to yourself. Among all of the potential pitfalls, Sunstein cites one finding that carries special relevance for the political sphere: “Groups are more likely than individuals to escalate their commitment to a course of action that is failing—and all the more so if members identify strongly with the groups of which they are a part.” (Sunstein leaves out his opinion of the Iraq War, but does point out the WMD delusion.)
Sunstein assembles an accessible summary of a large body of experimental social science on the problems of group decision-making when a clear answer is available, say in a discussion about whether someone committed a murder or not. But he goes too far. In applying the patterns found in experiments about questions with clear answers to value judgments and electoral voting behavior, he implies that there is a “right” moral value. “Skeptics about morality, politics, and law, rejecting the view that the underlying questions have correct answers, would insist that any shifts introduced by deliberation cannot be said to be right or wrong,” he acknowledges. But he offers only the most dismissive answer to this valid objection: “But genuine skepticism is extremely hard to defend. Without engaging the philosophical issues, we can simply note that many different views about the nature of morality acknowledge the possibility of individual error—and that if individual error does occur, group error will occur as well.” In a book with such a well-developed argument on the decision-making itself, readers could fairly expect more backing for this claim.
Moving on to a discussion of new technologies, Sunstein is generally stronger, if no less opinionated. His 2001 book Republic.com set off a storm of discussion among internet scholars after he questioned an emerging orthodoxy about the positive effects of the blogosphere. In an argument rehashed in Infotopia, Sunstein wrote that if people increasingly get their news online through ideologically homogeneous corners of the blogosphere, they may become isolated from the deliberative aspect of the “marketplace of ideas.” Ironically, Sunstein now turns to technology to improve upon traditional deliberation.
To do that, the book suggests an enthusiastic embrace of Friedrich Hayek, a prominent critic of socialism who wrote that the free market price system is an efficient way to collect information from all corners of society—the information of course being the price itself, an answer to the loaded question, “How much is this worth?” Sunstein paints some old-style decision-making as similar to people in a room somewhere setting prices with imperfect information. Market prices are simply more “right.” So why not use the internet to collect nearly perfect information about a variety of topics the same way the market decides prices?
The tools we have for this are prediction markets (in which people bet real money on the outcome of things like elections and the Oscars), wiki systems such as Wikipedia, free and open source software development like that behind the Firefox internet browser, and blogs.
Prediction markets are the clear analog to Hayek’s ideas about prices, and they are sometimes uncannily accurate. The University of Iowa’s Iowa Electronic Markets, for instance, did far better than polls in predicting the outcome of the 2004 U.S. presidential election, outperforming them 76 percent of the time. Some companies have been using prediction markets internally to anticipate product roll-out dates and other events. (Google’s internal market, in which employees bet fake money that can be exchanged for prizes, has hosted bets on such questions as “Will Google get the WiFi contract in SF?” and “Will Brad Pitt and Angelina Jolie get married during 2005?”)
But prediction markets run into problems if the necessary information to make a prediction is not out there, or if the people who have the information aren’t buying in. Consider a hypothetical prediction market in 2003 about whether Iraq had weapons of mass destruction. Unless the Iraqi intelligence service bought in big, the market would most likely have been quite wrong. And it’s possible that “information cascades,” market manipulation, or the same kind of “bubble” that inflated tech stocks later revealed to be worthless could emerge with widespread use of prediction markets. Indeed, Sunstein writes, Hayek’s “account of prices was too optimistic, even starry-eyed.”
On wikis and open source, Sunstein’s treatment is mostly a good introduction for those unfamiliar with these concepts. Wikis and open source software communities have emerged as very effective means to collect information and intellectual work when group members are committed to them. Neither system is anarchic, a key factor in the success of collaborative work; in open source software, as on Wikipedia, some people have more power than others. In the most successful example of open source development, the Linux operating system, there is even a “benevolent dictator” (creator Linus Torvalds) who has final say over the most important code.
In both wikis, which can be edited by anyone, and open source software, to which anyone can contribute, contributors are by definition disclosing knowledge, circumventing the problem of people failing to speak up in deliberation. But a classic problem emerges when individuals could profit from keeping their software or editing skills for themselves. Companies pay computer engineers for their work, so why should programmers work for free for the community? The same goes for editors and writers. Whatever their motives, so long as people keep contributing to these collective projects, open source and wikis appear to hold great promise.
Even though he’s on balance more optimistic about the internet five years after Republic.com, Sunstein doesn’t pull any punches about blogs. No longer focusing on the ideological “echo chamber” effect he feared in Republic.com, he notes instead that blogs do a particularly bad job of gathering dispersed information from different sides of political divides. Perhaps the obvious critique of this argument is that blogs aren’t designed to do anything of the sort. As Ethan Zuckerman, a prominent blogger and scholar of the internet based at Harvard Law School’s Berkman Center for Internet and Society, writes on WorldChanging.com, “Holding [bloggers] to the standard of Hayekian aggregation of accurate information is like criticizing a football team for failing to produce a grand ballet. Yes, some writers have described football as balletic, but your average offensive tackle is trying to pancake a defensive lineman into the ground, not create high art.”
True to the title of Infotopia, a blend of “information” and “utopia,” Sunstein offers utopian speculation about the possibilities of the internet, a far cry from his former skepticism. But he retains a clear vision of the apparent limitations of online society, making this book a highly digestible compendium of the potential goods and likely follies that come with the emergence of the online world.