
This article was downloaded by: [108.54.16.113] On: 02 December 2011, At: 11:25. Publisher: Routledge. Informa Ltd, Registered in England and Wales, Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.

Journal of Information Technology & Politics. Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/witp20

Open Source Political Community Development: A Five-Stage Adoption Process
David Karpf a b
a Rutgers University
b Eagleton Institute of Politics

Available online: 01 Jun 2011

To cite this article: David Karpf (2011): Open Source Political Community Development: A Five-Stage Adoption Process, Journal of Information Technology & Politics, 8:3, 323-345

To link to this article: http://dx.doi.org/10.1080/19331681.2011.575020


Full terms and conditions of use: http://www.tandfonline.com/page/terms-and-conditions

This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden.

The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.

Journal of Information Technology & Politics, 8:323–345, 2011. Copyright © Taylor & Francis Group, LLC. ISSN: 1933-1681 print / 1933-169X online. DOI: 10.1080/19331681.2011.575020

Open Source Political Community Development: A Five-Stage Adoption Process

David Karpf

ABSTRACT. This article considers the emergence of large-scale “commons-based peer production” projects such as Wikipedia.org from an institutional development perspective. The argument it makes is threefold. First, that the lowered transaction costs and information abundance found online transform a subset of public goods problems, essentially replacing free-ridership with mass coordination as the central challenge. Second, that the boundaries of this subset are defined by a “power-law topology” that leads to the emergence of online hub spaces and serves to resolve search problems endemic to the anti-geographic online landscape. These boundary conditions limit the overall impact of commons-based peer production for the political space. Third, that all such hubs move through a common five-stage institutional development process, directly related to standard models of the diffusion of innovation. Identification of the institutional development process behind Wikipedia leads in turn to the stipulation of seven hypotheses: the “Field of Dreams” fallacy, the “Interest Horizons” thesis, “Political Strategy Is Not Like Computer Code,” the “Location-based Wave” thesis, “Power Law Fragility Under Moore’s Law,” the “Punctuated Equilibrium” thesis, and “Code-Forking the Public Sphere.” Each thesis holds direct implications for the potential and limitations of “open source” applications in the political arena.

KEYWORDS. online communities, open source, Wikipedia

Open source is not a piece of software, and it is not unique to a group of hackers. Open source is a way of organizing production, of making things jointly.
—Steven Weber, The Success of Open Source

Unlike previous reference works that stand on library shelves distanced from the institutions, people, and discussions from which they arose, Wikipedia is both a community and an encyclopedia.
—Joseph Reagle, Good Faith Collaboration

Dave Karpf (PhD, Political Science, University of Pennsylvania) is an assistant professor of journalism and media studies at Rutgers University, as well as a faculty associate at the Eagleton Institute of Politics. His research concerns the impact of the Internet on American political associations, including community blogs and Internet-mediated advocacy groups.

The research presented here was conducted while Dr. Karpf was a postdoctoral research associate at Brown University’s Taubman Center for Public Policy. He would like to thank the participants at the JITP 2010 Politics of Open Source conference for helpful feedback, as well as Rogers Smith, Jack Nagel, John Lapinski, Lokman Tsui, Daniel Kreiss, and Rasmus Kleis Nielsen for substantial feedback on the piece.

Address correspondence to: David Karpf, Rutgers University, 4 Huntington St., New Brunswick, NJ 08901 (E-mail: [email protected]).

“Open Source” refers to a type of software, to the process through which that software is produced, and to the online community that is responsible for it. The software has attracted much attention thanks in particular to the successes of Linux and Apache—sophisticated, broadly adopted software programs with large, complex open source communities behind them. For 15 years, since the demonstrated success of Linux, many have asked, “What other elements of society could benefit from open source as a process or be transformed by open source–type communities? Can we have open source journalism, open source culture, open source politics?” Benkler (2006) has termed the process “commons-based peer production” (p. 59) and contrasts it to markets and firms. Shirky (2008) focuses more on the community, discussing how the lowered transaction costs of the Internet enable “organizing without organizations.” Bruns (2008) suggests “produsage” as a term-of-art for the new, Internet-mediated production processes. Kelty (2008) offers an ethnographic account of the open source software community, describing it as a “recursive public” that deliberates both through words and through the production of software code. All of these authors highlight the surprising success of open source as a product, a process, and a community, and help illuminate how and why it functions the way it does.

It is abundantly clear from these authors that, when a massive community devoted to commons-based peer production emerges, it is capable of producing sophisticated goods that manage to overcome the traditional collective action problem (Olson, 1965). But we continue to know relatively little about the conditions under which such communities manage to form. Open source solutions do not work for every problem, nor do robust open source communities form out of thin air. This article treats the development of online “collaborative communities” (Reagle, 2010, p. ix) through a diffusion and institutional development lens. It asks how online communities develop, and argues that a five-stage development process, tied to the standard diffusion curve, is likely a central feature of their development. Focusing on the sequence of institutional development challenges that face a developing collaborative community is particularly helpful in clarifying the boundaries of open source–style activity in the political arena.

Particularly after the Dean campaign of 2003–2004, a set of scholars, public intellectuals, and practitioners has shown strong interest in the potential emergence of “open source politics” (Bruns, 2008; Fine, Sifry, Rasiej, & Levy, 2008; Hara, 2008; Jenkins, 2006; Lessig, 2003; Ratcliffe & Lebkowsky, 2005; Rushkoff, 2003; Sifry, 2004; Trippi, 2004). Steven B. Johnson, for instance, suggests that, “Using open-source coding as a model, it’s not a stretch to believe the same process could make politics more representative and fair. Imagine, for example, how a grassroots network could take over some of the duties normally performed by high-priced consultants who try to shape a campaign message that’s appealing” (Sifry, 2004, p. 6). The use of the Internet to spur citizen participation has become a growth industry, fueling dozens of political consultancies and popular books. Lost underneath is any clear understanding of what, specifically, open source politics is likely to entail. Hindman (2007), for instance, cautions that we are indeed entering an era of open source politics, but reminds us that open source production features the replication of the same elite systems that have long dominated politics. We can define “open source politics” as the application of Internet-mediated collaborative tools by a motivated community-of-interest to collective action problems that previously were solved exclusively by markets or firms. The question that then follows is, under what conditions will these communities flourish, and under what conditions will they fail?

This article offers an analysis of the online community-formation process, treating Wikipedia as a guiding case example and drawing upon the robust literature that has emerged in recent years around Wikipedia as an “impossible good” (Ciffolilli, 2003; Kollock, 1999). By understanding the technological affordances that make Wikipedia possible, and the community formation pattern that Wikipedia and other online communities experience as they endure a diffusion process, we can postulate a five-stage institutional development process and derive seven hypotheses that clarify the potential and limitations of open source–type community participation in the political arena.


WHY STUDY WIKIPEDIA?

Wikipedia, “the encyclopedia that anyone can edit,” is a public good, both in definition and in spirit. In the success of Wikipedia, there lies a significant puzzle for social scientists. Much of the foundational literature on collective action and public goods problems suggests that Wikipedia should not exist. Free ridership should prevent it from forming. As a public good—being both nonrival (my use of Wikipedia does not reduce your ability to use it) and nonexclusive (all people can access Wikipedia, regardless of whether they contributed to it)—Wikipedia ought to suffer from underprovision. Yet, as the seventh-most-visited site on the entire World Wide Web,1 and with over 10 million articles across 260 languages (Reagle, 2010), the one critique that the site has never been subjected to is “there just isn’t enough of it.”

Though not a political endeavor itself, Wikipedia provides a mature example of the type of “open source–like” online communities-of-interest enabled by the structure and novel attributes of Web-based communication protocols. Early critics of Wikipedia noted the crucial difference between an online collaborative encyclopedia and online collaborative software: “Bad, incorrect code doesn’t compile. Bad, incorrect information on the ‘net lives on and non-experts hardly ever notice the mistake” (Reagle, 2010, p. 78). Stalder has described this as a key difference between “open source” and “open culture” (Stalder, 2006). Given that political ideas, strategies, and decisions also “don’t compile,” Wikipedia is a more apt comparison to the political arena than Linux, Apache, or other major software projects.

Careful study of such a case can help us to identify the parameters within which such communities function, in turn leading to clearer thinking about the potential of “open source politics.” Most importantly, Wikipedia demonstrates that the dramatic reduction in the costs of online communication produces a condition of information abundance in which the challenge to mass collaboration approximates a coordination game, rather than a free rider problem. The problem of mass coordination is solved through the development of a power-law topology, in which large hub spaces let communities-of-interest engage in collaborative efforts that would have been impossible under previous information regimes. Novel solutions to the collective action problem, and novel structures for collective action, become possible online because of these changes to the costs, abundance, and availability of information. The emerging landscape of collective action fundamentally departs from previous eras in direct relation to the salience of these attributes, and Wikipedia serves as a guiding example for understanding them.
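The hub formation invoked here follows from a simple rich-get-richer dynamic. A minimal sketch, using the standard preferential-attachment mechanism (a generic model, not one specified in this article; all parameters are illustrative): each new page links to an existing page with probability proportional to that page's current link count, and a small set of heavily linked hubs emerges while most pages stay at the periphery.

```python
import random

def grow_network(n_pages, seed=7):
    """Preferential attachment: each new page links to an existing page
    chosen with probability proportional to its current degree."""
    rng = random.Random(seed)
    endpoints = [0, 1]     # one entry per link endpoint; sampling this
    degree = {0: 1, 1: 1}  # list uniformly picks pages by degree
    for page in range(2, n_pages):
        target = rng.choice(endpoints)
        degree[page] = 1          # the new page's outbound link
        degree[target] += 1       # the chosen page gains a link
        endpoints += [page, target]
    return degree

degree = grow_network(5000)
ranked = sorted(degree.values(), reverse=True)
hub_share = sum(ranked[:50]) / sum(ranked)  # links held by top 1% of pages
```

With 5,000 pages, the top 1 percent typically hold on the order of a tenth of all links while most pages keep exactly one: a power-law topology in which a few hub spaces are vastly easier to find than everything else.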

Much of what makes Wikipedia worthy of analysis is the sheer scale of its success. The only Web sites more frequently visited than Wikipedia are the search engines/e-mail providers Google, Yahoo, and Windows Live/MSN; video sharing site YouTube.com; and social networking site Facebook.com.2 Unlike these for-profit Internet giants (YouTube being a subsidiary of Google), Wikipedia operates with a budget of less than $500,000 and a staff of fewer than a dozen employees (Lih, 2009, p. 4). A core of 75,000 active volunteer “Wikipedians,” along with 13.3 million registered users and an untold number of anonymous users, contribute the bulk of content, citations, and edits.3 Despite providing little opportunity for fame or recognition to the volunteers, this expanding volunteer corps has remained resilient against attempts to subvert the site through vandalism or thinly veiled advertising. A 2007 study by the Pew Internet and American Life Project found that 36 percent of American adult Internet users consult Wikipedia. It is especially popular among the well-educated, with 50 percent of all online Americans who hold a college degree using it as a reference. On a typical day, 8 percent of online Americans consult Wikipedia, making it more popular than online purchasing, dating Web sites, setting travel reservations, using chat rooms, and participating in online auctions (Rainie & Tancer, 2007). This raises the practical question of whether Wikipedia is now so distinct as to be something different, something greater, than an encyclopedia. Consider the following: In the pre-Internet era, what percent of Americans would we suspect consulted an encyclopedia on an average day? Is it likely that looking things up in the Encyclopedia Britannica has ever been more popular in daily life than purchasing goods or trying to find a date?

Political scientists have largely overlooked Wikipedia, though a small group of scholars has begun to look into the more generic impacts of the Internet’s lowered communication costs on online collective action. Lupia and Sin (2003), for instance, argue that Mancur Olson’s (1965) work is “built from historically uncontroversial assumptions about interpersonal communication. Today, evolving technologies are changing communication dynamics in ways that invalidate some of these once uncontroversial assumptions” (p. 315). They go on to present a formal model that suggests the organizational advantage held by small groups in Olson’s day is muted by online communication, while the selective benefits that many groups were once able to offer as an incentive for participation are occasionally undermined by the open access of the Web (disciplinary journals, as one example, now face open-access, free, Web-based competition). Bimber, Flanagin, and Stohl (2005) likewise attempt to reconceptualize collective action as a phenomenon of “boundary crossing” between public and private domains—an indication of how near-costless participation in online petitions and other Web-based pressure tactics has become. Lev-On and Hardin (2008) deal directly with the cases raised by Benkler (Wikipedia included) and offer a theoretical framework for analyzing the “logic of Internet-based collective action” (p. 6), arguing that the lowered transaction costs and formation of online hubs allow success in the face of widespread free ridership. They argue that phenomena such as Wikipedia can be conceptualized within the existing framework for collective action studies. My analysis is similar to theirs, but focuses more centrally on community formation than on production process.

The following section will use the Wikipedia example to synthesize several core concepts regarding Internet-mediated communication. Centrally, it will demonstrate that the structure of the Web supports the development of large-scale communities that, benefiting from strong “network effects,” can produce tremendous public goods on the basis of surplus labor contributions from hobbyists and partisans. When the costs of participation approach zero, a more complete demand curve for political engagement is revealed. In so doing, the section clarifies how online communication differs from previous communication regimes, and also places focus on when and where such differences are likely to be present. It is only when online power-law hubs successfully develop that these non-market, non-firm-based solutions become viable. The article then turns to the diffusion-of-innovations literature (Rogers, 2003; Von Hippel, 2005), arguing that online community formation follows a similar path, and that each stage of diffusion presents a distinct institutional development challenge that can prevent hub formation. A third section then derives a set of hypotheses regarding the stability of power-law hubs over time, and the conclusion then makes several points about the political implications of these hypotheses.
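Since the argument ties community formation to the standard diffusion curve, the shape of that curve is worth making concrete. A brief sketch using the usual logistic form and Rogers' (2003) adopter-category cut points; the curve's rate and midpoint are illustrative assumptions, not estimates from any actual community:

```python
import math

def adoption_share(t, rate=1.0, midpoint=10.0):
    """Cumulative share of eventual adopters at time t: the classic
    S-shaped diffusion curve."""
    return 1.0 / (1.0 + math.exp(-rate * (t - midpoint)))

# Rogers' adopter categories as cumulative cut points on that curve.
CATEGORIES = [
    ("innovators", 0.025),
    ("early adopters", 0.16),
    ("early majority", 0.50),
    ("late majority", 0.84),
    ("laggards", 1.00),
]

def adopter_category(t):
    """Which segment of the population is joining at time t."""
    share = adoption_share(t)
    return next(name for name, cutoff in CATEGORIES if share <= cutoff)

stages = [adopter_category(t) for t in (0, 8, 10, 11, 20)]
```

The five segments give the curve its stages; the claim developed below is that each stage confronts a developing community with a different institutional challenge.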

THE SUCCESS OF WIKIPEDIA: EASY, FUN, AND FULL OF NETWORK EFFECTS

Wikipedia was founded in 2001 after Nupedia, an attempt at developing an online encyclopedia based on traditional expert-produced and -reviewed contributions, failed to gather momentum (Reagle, 2010). Jimmy Wales had launched Nupedia as an open-access competitor to pricey encyclopedias like Britannica. His expectation was that the speed and ease of e-mail communication could lower the costs of producing a high-quality encyclopedia, making the information free for all visitors.4 Nupedia was to be expert-led, with a traditional (and daunting) seven-stage peer-review and editing process. What Wales and his collaborator Larry Sanger learned was that the increased speed of e-mail alone does little to transform production processes. The hefty editing process resulted in numerous bottlenecks, leading to an estimated 25 articles in its first three years. As academic journal editors have likewise learned, moving from the fax, phone, and mail systems to digital communication alleviates some elements of peer review and content production, but the overall savings prove marginal.

In attempting to radically simplify the production process, Wales and Sanger turned to the “wiki” (from the Hawaiian word “wikiwiki,” translating directly to “fast” or “speedy”) software platform. Wiki software code enables open content creation and peer editing. Any user with access (and on Wikipedia, most articles are accessible by all) can click an “edit this” button, make changes to the document, and have those changes instantly available to other users. Past versions are automatically archived and viewable, making such experimentation a low-risk affair.
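The edit-and-archive mechanic described above is simple enough to sketch directly. A toy model with hypothetical class and method names (real wiki engines such as MediaWiki layer diffs, metadata, and permissions on top of this):

```python
class WikiPage:
    """Toy sketch of the wiki mechanic: every save is archived, so any
    edit, including vandalism, can be reverted with one action."""

    def __init__(self, text=""):
        self.history = [text]  # every past version is kept

    @property
    def current(self):
        return self.history[-1]

    def edit(self, new_text):
        self.history.append(new_text)

    def revert(self, steps=1):
        # "revert to past draft": re-publish an archived version
        self.history.append(self.history[-1 - steps])

page = WikiPage("Wikis enable open content creation.")
page.edit("Wikis enable open content creation and peer editing.")
page.edit("BUY CHEAP WATCHES")  # graffiti
page.revert()                   # one click undoes it
```

Because reverting is itself just another append to the archive, undoing vandalism costs the community far less effort than the vandalism cost the vandal.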

Developer Larry Sanger wrote a memo to the 2,000-member Nupedia mailing list at the launch of the wiki-based site, saying, “Humor me. Go there and add a little article. It will take all of five or ten minutes” (Shirky, 2008, p. 113). With the bottlenecks eliminated, the community responded, producing over 1,000 articles within the first month, and 10,000 within nine months. Clay Shirky describes this as a general shift enabled by the Internet-based information regime: from “filter, then publish,” to “publish, then filter.” Print-based publication is costly and (thus) scarce. Firms are necessary to provide editorial and quality-control decisions at the front end, ensuring that the final product is well-written and attractive to a paying audience. Comparatively, Shirky notes that Web-based publication is “ridiculously easy.” Indeed, ever-expanding transistor capacity and server space render the Web an abundant information environment where point-to-point communication (e-mail) can happen near instantaneously and self-publication is free. Wikipedia could not exist without Internet-mediated communication, and moreover, it could only exist through the embrace of novel alternatives to traditional production practices. Faster and cheaper communications media alone produce little change, but they create the possibility for novel structures for mass collaboration and collective action.

The ease of publishing online may be self-evident, but that is a far cry from assuring high-quality encyclopedia entries. Indeed, Wikipedia’s quick rise in popularity was accompanied by an avalanche of skepticism regarding the quality of the new-entrant encyclopedia (“free, and worth every penny,” so to speak). Jim Giles published a 2005 study in Nature magazine challenging this claim through a comparison of Wikipedia and the Encyclopedia Britannica. Peer reviewers recruited by Nature found an average of four inaccuracies per Wikipedia article, and three per equivalent Britannica article (Giles, 2005). A back-and-forth firestorm ensued, with Britannica staff criticizing the study and demanding a retraction. Nature offered a clarification of its methodology, but stood by the study and refused to retract it. Physicists Huberman and Wilkinson have since conducted an additional study of Wikipedia articles, finding a strong correlation between the number of edits a Wikipedia article receives and the accuracy and writing quality of the article (Huberman & Wilkinson, 2007). Put another way, the more contributors a Wikipedia article receives, the higher its accuracy and the better the writing. This is not entirely intuitive—certainly, anonymous visitors can and do engage in “graffiti” attempts on Wikipedia pages, and motivated partisans attempt to distort pages to favor their point of view. The site has developed both a set of community norms and practical computer code that lead contributions to have a net-positive effect.

Several authors, including Zittrain (2008), Lih (2009), and Reagle (2010), discuss the details of why Wikipedia manages to succeed. One key attribute is the meager starting set of site rules—(a) articles should display a neutral point of view (NPOV), (b) no copyright infringement, and (c) ignore any rules if they get in the way of building a great encyclopedia—and reliance on the Wikipedia community to mindfully work them out, developing additional rules and protocols as needed.5 Additional rules have been included over time to manage controversies and improve site quality, but these principles remain at its core. The wiki software code and abundant server space are necessary conditions for this organizing structure. The code lets any community member or passerby offer positive contributions, small or large, while saving past versions for easy review. Graffiti attempts or biased contributions to an article can thus be removed from the page with a simple click of the “revert to past draft” button. “Bias” and “neutrality” are, of course, terms of art rather than an exact science, but the second attribute helps the community to approximate neutrality rather effectively (see Viegas, Wattenberg, Kriss, & Van Ham, 2007, for further discussion).

A second attribute is the inclusion of a “discussion page” alongside every main Wikipedia page. This is a space for Wikipedians to explain and justify their changes, discuss article quality, and engage in deliberation and disagreement over controversial topics without cluttering the main page. Major edits made without explanation and justification are likely to be reverted, providing an incentive for thoughtful, deliberative engagement. Given the participation of hobbyist communities, many heated “flame war” exchanges occur over topics that are obscure to the mainstream, but passionately debated within a community-of-interest. This is an example of what Lawrence Lessig (1999) terms “code-based governance.” Within cyberspace, many of the decisions about how people can and should interact are determined not through government regulation, but by the development of supportive code. Indeed, the original wiki platform did not feature such pages, and after substantial discussion and debate over Wikipedia’s listserv, community member Clifford Adams customized the software to create these pages (Lih, 2009, pp. 65–66). One challenge for scholars interested in studying the Web’s impact on society (a challenge reflected in hypothesis 5, below) is that new code is constantly being developed, and the seemingly impossible dilemmas of 2002 are rendered easily solvable by the new software architecture of 2009. Without discussion pages, Wikipedia would face steep challenges in supporting the NPOV norm. Rather than developing complex organizational bylaws and chains of command, Wikipedia and other online spaces incorporate new code-based solutions that support community norms by making positive contributions easier and negative contributions harder.

The third attribute of Wikipedia’s success is the core of initial editors—what I will refer to subsequently as an actively engaged set of “lead adopters.” Wikipedia needed this initial group of committed, substantively knowledgeable, and technically skilled contributors because the value of the site is almost entirely derived from its network externalities. Consider the value of Wikipedia to the fifth contributor to visit the site compared to its value to the 5,000,000th contributor. Early on, the site is error-prone, full of topical holes, and of questionable quality. Later, it benefits from a phenomenon first described by Internet ethnographer Eric Raymond (2001) when discussing the success of the open source software movement: “Given enough eyeballs, all bugs are shallow” (p. 30). Raymond had found that open source software is successful in direct proportion to the size of its community, because a software bug that seems tremendously difficult to one person is likely to be a simple fix for someone else (see Reed, 1999, for further discussion of network effects).
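The contrast between the fifth and the 5,000,000th contributor can be put in rough numbers. Under Metcalfe's law, one common and admittedly stylized model in which a network's value grows with the square of its membership (see Reed, 1999, for stronger variants), the gap is enormous:

```python
def metcalfe_value(members):
    """Metcalfe's law: network value grows with the square of membership.
    A stylized illustration, not a measurement of Wikipedia itself."""
    return members * members

ratio = metcalfe_value(5_000_000) // metcalfe_value(5)
# On this measure the mature site is a trillion times more valuable to a
# new contributor than the five-person site was.
```

Whatever the exact exponent, the point stands: the early community must be carried by committed lead adopters, because the site's value to everyone else has not yet materialized.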

Jimmy Wales explains the success of Wikipedia in similar terms:

The technology required for Wikipedia is essentially rather simple. You need a database, you need a Web server, you need a Web browser, and you need the wiki editing concept. While the wiki concept was invented in 1995 by Ward Cunningham, Wikipedia didn’t start until 2001. So all of the technology, including the idea of a wiki, which is a Web site that anyone can edit, has existed since 1995. . . . The answer is, Wikipedia isn’t a technological innovation at all; it’s a social innovation. What we figured out between 1995 and 2001 was not new technology. We had the Web already, but we discovered the basic idea of how to organize a community. (Lih, 2009, p. xvi, emphasis added)

This notion of Wikipedia as a community, rather than a technological innovation, is of central importance for generating hypotheses about “open source politics” more generally. As the site has grown, it has incorporated additional rules, and it has empowered a layer of “superusers” with additional editing privileges as a reward for their positive contributions and as a means of engaging in distributed community management. Wikipedia as a product relies on Wikipedia as a community, engaging in commons-based peer production as a process. The progression through which the community developed is a puzzle that has attracted far less attention than the product or the process, however.

It bears noting that, as suggested by Lev-On and Hardin (2008), the great majority of Wikipedia visitors do in fact free ride. Wikipedia globally has about 75,000 “active” members. These are registered users who provide five or more edits to the site per month. About 10 percent of these are “very active” Wikipedians, contributing 100 or more edits per month (Reagle, 2010, p. 6). Given the site’s overwhelming popularity, with 8 percent of all Internet users visiting daily,6 we can extrapolate that for every active content-producer, there are tens of thousands who free ride on the public good. Most users of Wikipedia do not take part in the editing or article-writing process, despite the tremendously low barriers to entry. So free ridership does indeed occur on Wikipedia, but it is not the problem that we would be led to expect. No one would likely say that the central issue for Wikipedia is that it is underprovided.
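The extrapolation above is back-of-envelope arithmetic, and can be made explicit. Note that the global Internet-user count below is an illustrative assumption (roughly the widely cited figure circa 2009), not a number given in this article:

```python
active_editors = 75_000             # >= 5 edits/month (Reagle, 2010)
very_active = active_editors // 10  # >= 100 edits/month
internet_users = 1_500_000_000      # assumed global figure, circa 2009
daily_visitors = internet_users * 8 // 100  # 8 percent visit daily

per_active = daily_visitors // active_editors
per_very_active = daily_visitors // very_active
# Thousands of daily visitors per active editor, and tens of thousands
# per very active editor: free riders vastly outnumber producers.
```

However the contributor tier is counted, the ratio confirms the paragraph's point: free ridership is pervasive, yet the good is still provided.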

The key transition in the online space is that, when the costs of participation in collective action approach zero, we face a condition of abundance rather than one of scarcity. People have limited time and limited money, but they have virtually unlimited opinions. What we see on Wikipedia is essentially a multifaceted version of what Olson termed a “privileged group.”7 When the resource in question is not money or time, but rather specialized information, we find that there are plenty of people who are “wealthy” in some form or another. Put another way, most everyone has a hobby. Hobbyists have always happily incurred the “costs” of discussing their topic of interest, often in excruciating detail. When they do so on Wikipedia, they provide exactly as much of the public good (information about video games, the history of knitting, etc.) as they themselves want, and this provides more than enough for inquiring minds.

This is not to say that mass collaboration, collective action, and the provision of online public goods are seamless and assured. Rather, it is to say that the shift from slower, costlier information regimes to an instantaneous, abundant online information regime creates a different dilemma for social engagement. Specifically, the geography-less, abundant online space creates tremendous challenges in search. How are we to identify good, verifiable information from bad? How are motivated partisans or hobbyists to find each other with no central square, and how are onlookers to take advantage of the fruits of these hobbyists’ labor? Wikipedia critically benefits from the network externalities of all these hobbyist communities gathering in the same, identifiable location. If five sites all competed for the same niche of “online organizational hub” (Lev-On & Hardin, 2008, p. 16), the sum of those parts would be far less than the whole found on Wikipedia. Indeed, initial developer Larry Sanger eventually left Wikipedia and started his own site, Citizendium.org, because he felt there should be a greater role for credentialed experts (Bruns, 2008; Reagle, 2010). In two and a half years, the site has built a small community of 800 contributors, authoring 10,700 articles in total and attracting a fraction of a percent of Wikipedia’s audience.8 For this reason, I depart from Lupia and Sin (2003) and Bimber et al. (2005). I would suggest that the critical challenge to online collective action is not public–private boundary-crossing or the declining value of selective incentives, but rather solving the search dilemma under conditions of abundance—a challenge that approximates a mass coordination game.

HYPERLINKS, HUBS, AND POWER LAWS: AN ITERATED SOLUTION TO THE SEARCH DILEMMA

Before there was the World Wide Web, there was the hyperlink. Hyperlinks provide the networked structure of the Internet, with clickable links embedded in text that direct a reader from one page of text to another. A solitary Web page with no inbound or outbound hyperlinks lies, in a very real sense, at the periphery of the World Wide Web. Though such a page is accessible through direct input of its uniform resource locator (URL: the text-based “address” appearing after http:// in the address line of a Web page), one would be unlikely to stumble upon it through everyday surfing.

The hyperlink calls attention to two dimensions of the Internet’s novel search puzzle. First is the anti-geographic nature of the medium itself. Search in the offline world is aided by landscape-imposed scarcity. Towns and cities have physical centers and peripheries, and this translates directly into the price system of the real estate market. There is a cost imposed by being out-of-the-way, either for residences (commuting) or for commercial zones (foot traffic and shopping districts). Thus restaurants tend to be grouped together, one can generally expect to find a pawn shop in close proximity to a race track, and proximity to desirable locations translates into higher rents. On the Internet, by contrast, there is no physical landscape to traverse. As one example, consider the hundreds of millions of blogs that have been created and then abandoned. These pose only the slightest inconvenience for Google, the company upon whose server farms most of these sites are hosted and whose search algorithm must handle them; the realities of increasing bandwidth and transistor capacity relegate them to a minor nuisance at most. From the user’s perspective, dead blogs and abandoned Web pages do not litter any landscape, because the Web is composed of hyperlinks and we are never forced to traverse their pages in our daily online pursuits. An abandoned blog goes unhyperlinked, and thus floats to the periphery of Web “space.” The lack of geography on the Web is a substantial component of the condition of information abundance found online. There is no such thing as “location, location, location.”

The second dimension is the challenge for like-minded hobbyists of finding each other. Internet communication is instantaneous, but also asynchronous. One can post a message to a discussion board or send an e-mail alert and it will be immediately viewable, but as opposed to a phone or face-to-face conversation, replies do not necessarily come in real time. Lacking town centers, where are hobbyists, partisans, or other communities-of-interest to gather? With no town center, what good is a self-publishing soapbox, anyway? This is closely related to the problem of identifying verifiable information on the Web. In essence, the Internet lowers the communication costs for all types of publication and online group interaction. Scarcity provides some baseline assurance that a group or information source is reliable; the very act of publication or gathering indicates an ability to surpass some minimal cost threshold. Under the condition of abundance, how are we to tell reliable information from speculation? How are we to find other like-minded participants when there literally is no “there” there?

Hyperlinks provide the kernel of the solution, with Google’s PageRank algorithm acting as pioneer. Prior to PageRank, Internet search was tremendously problematic. The two standard solutions were to provide a top-down directory of all Web pages or to offer a search mechanism based on the appearance of keywords on a Web page. The problem with directories was twofold. First, the scale and rapid growth of the Web meant that no directory could manage to be comprehensive. Second, directories are built around meta-level organizing assumptions about the categories a user will wish to search through. Thus AOL.com, for instance, could provide a list of topical headings such as “sports,” “news,” and “entertainment” and then further divide the categories into fine-grained subheadings. But a user interested in new banjo strings and information on an upcoming jamboree would have little idea where to begin. Keyword-based search could help with this, organizing results based on the combination of “banjo strings” and “jamboree,” but separating new information from old becomes problematic, and such keyword searches are easily gamed. Google’s ingenious solution was to include hyperlink data in the presentation of search results. Pages with numerous hyperlinks, particularly from other sites that are themselves highly linked, appear at the top of the results page. Thus Google lets Web users “vote with their feet,” in a sense, indicating the quality of an information source based on the number of Web users who have chosen to link to it. The simple inclusion of this network data in its search results is what led Google to rise from a tiny three-person startup to the largest company in the online space (Vise & Malseed, 2005).
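The intuition behind link-based ranking can be sketched in a few lines of code. This is a deliberately simplified toy, not Google’s production algorithm: the damping factor is the commonly cited 0.85, the four-page graph is hypothetical, and dangling pages are not handled. A page’s score is the damped sum of the scores of the pages linking to it, each divided by that page’s outbound link count.

```python
DAMPING = 0.85  # standard damping factor in the original PageRank formulation

def pagerank(links, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # every page gets a small baseline score, plus damped shares of
        # the scores of the pages that link to it
        new_rank = {p: (1 - DAMPING) / len(pages) for p in pages}
        for page, outbound in links.items():
            for target in outbound:
                new_rank[target] += DAMPING * rank[page] / len(outbound)
        rank = new_rank
    return rank

# Hypothetical graph: three blogs all link to a hub ("wiki"), so the hub
# outranks pages with identical keywords but fewer inbound links.
graph = {
    "wiki": ["blog_a"],
    "blog_a": ["wiki"],
    "blog_b": ["wiki", "blog_a"],
    "blog_c": ["wiki"],
}
scores = pagerank(graph)
assert scores["wiki"] == max(scores.values())
```

The example captures the “vote with their feet” logic described above: the hub’s rank comes entirely from the pattern of inbound links, not from anything on the page itself.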


Physicist Albert-Laszlo Barabasi offered an important early treatment of these link patterns on the Web in a 1999 article in Nature magazine. As he would later describe in his public-audience book, Linked, Barabasi was interested in the distribution of links among Web pages. His early assumption had been that link distribution would approximate a normal curve, indicating that the Web could be understood mathematically using the standard assumptions of random graph theory. Instead, Barabasi found that link patterns followed a heavily skewed distribution approximating a power law or Pareto distribution. Vilfredo Pareto initially observed these distributions in his study of wealth disparity in European societies, leading them often to be termed “rich get richer” or “80/20” distributions, since he found that 80 percent of a society’s wealth was held by the top 20 percent, and that the greater the level of income, the more stark the disparity. Power laws are based on a decaying function in which the Nth-largest node is 1/Nth the size of the largest node (Barabasi, 2003). Shirky and Hindman, Tsioutsiouliklis, and Johnson produced separate studies in 2003 demonstrating that the blogosphere in particular displays power law tendencies in its hyperlink distribution, leading to the emergence of an “A-list” or elite status among early political bloggers. Karpf has likewise found the same pattern evident in the right- and left-wing blogospheres, noting that each blog community displays its own power law distribution (Karpf, 2008). Though there has been some debate as to whether these link patterns are a power law or some other heavily skewed distribution (Drezner & Farrell, 2008), what is of particular interest here is the mechanism that Barabasi tells us produces power law distributions.

Barabasi demonstrates in his article that power law distributions emerge in a network simulation when two simple conditions are present: (a) growth and (b) preferential attachment. Simply put, if a network is growing and new links between nodes are determined based upon the preferences of their observable neighbors, then a set of “hubs” will develop over time, as the link-rich are more likely to gain additional links, further increasing link disparity over time and, critically, developing a power law distribution. Growth plus preferential attachment leads to the emergence of power-law hubs. In so doing, this type of hub formation also serves as an iterated solution to the mass coordination problem found online.
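The growth-plus-preferential-attachment mechanism is simple enough to simulate directly. The sketch below is a minimal illustration in the spirit of Barabasi’s argument rather than his published model (the node count and the one-link-per-newcomer rule are my simplifications): each arriving node attaches to an existing node with probability proportional to that node’s current link count.

```python
import random

def grow_network(n_nodes, seed=42):
    """Grow a network by preferential attachment: each new node attaches
    to one existing node chosen with probability proportional to degree."""
    rng = random.Random(seed)
    degrees = [1, 1]   # start from two nodes joined by one link
    endpoints = [0, 1] # each node listed once per link it holds, so a
                       # uniform draw from this list is degree-weighted
    for new_node in range(2, n_nodes):
        target = rng.choice(endpoints)
        degrees.append(1)
        degrees[target] += 1
        endpoints.extend([new_node, target])
    return degrees

degrees = grow_network(10_000)
# Heavily skewed outcome: the top 1 percent of nodes hold a large share
# of all links, while the typical node keeps its single original link.
top_share = sum(sorted(degrees, reverse=True)[:100]) / sum(degrees)
assert max(degrees) > 20 and top_share > 0.05
```

Growth alone, with attachment at random, produces no comparable hubs; it is the degree-weighted draw that generates the “rich get richer” disparity.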

Let’s say you are interested in discussing left-wing politics. Living in a conservative rural town, you would like to turn online in order to find other people with similar interests. Where do you go? Where are they? The previously mentioned lack of geography provides a dilemma. You have no strong preference regarding the location of the conversation, and neither do the other members of your nascent community-of-interest. Your interest is in finding the same “place” online (and, later, in that place providing a supportive environment for healthy, spam- and “troll”-free discussion and possibly tools for further collaboration). This is a classical example of a coordination game, in which actors have neutral preference rankings among options, but wish to arrive at the same solution as one another. In a single-iteration coordination game, this can be solved through sequential action: The first actor makes an arbitrary decision, and all others follow suit. If actors move simultaneously, or without knowledge of each other’s actions, the problem becomes far more challenging. But in an iterated coordination game, preferential attachment emerges as a viable and simple solution. In particular, a Google search will reveal the most popular spaces where like-minded people are already meeting. Rather than selecting an online forum, blog, wiki, etc. at random and hoping that a community-of-interest will show up, each additional latent community member can choose to rely on the actions of those who came before him or her.
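This iterated dynamic can be made concrete with a deliberately stylized simulation (the forum count, user count, and 90-percent search-following rate are arbitrary illustrative assumptions of mine): each arriving user either joins whichever forum is currently largest, as a search engine would surface it, or picks one at random.

```python
import random

def search_coordination(n_users=10_000, n_forums=5,
                        follow_search=0.9, seed=1):
    """Each arriving user joins the currently most popular forum with
    probability follow_search; otherwise picks a forum at random."""
    rng = random.Random(seed)
    members = [1] * n_forums  # five identical nascent forums
    for _ in range(n_users):
        if rng.random() < follow_search:
            choice = members.index(max(members))  # follow the visible hub
        else:
            choice = rng.randrange(n_forums)      # idiosyncratic pick
        members[choice] += 1
    return members

members = search_coordination()
# Although every forum started identical, one absorbs the overwhelming
# majority of the community: the coordination problem solves itself
# through iteration, and a single hub emerges.
assert max(members) / sum(members) > 0.8
```

Which forum becomes the hub is arbitrary, fixed by early chance events; that it is a single hub is the robust outcome.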

Preferential attachment leads directly to the emergence of power-law hubs, and to a general principle for Web science practitioners: Large hub spaces online are different from small spaces. The topology of the Web, as it has grown over time, is ruled by power-law hubs such as eBay, Wikipedia, DailyKos, YouTube, MoveOn, and Facebook. Each of these “Web 2.0” spaces offers value to its users in direct proportion to the network effects provided by large crowds of similar users. Online hub spaces move through identifiable phases of institutional development as they diffuse through the user population and face challenges related to scale and to the changing demographics and interests of different user classes.

The study of power law distributions in online politics is mostly attributable to Matthew Hindman’s work, particularly his 2008 book The Myth of Digital Democracy. Therein Hindman argues that the emergence of power laws in online traffic creates a “Googlearchy,” or Google-imposed hierarchy, leading to heavy elite stratification and limiting the transformative potential of the medium. Working with traffic data supplied by Hitwise.com, Hindman argues that the barriers-to-entry online are not substantially lowered by the new media environment. Though the costs of self-publication have been dramatically reduced, those costs have been offset by the new costs of building a massive online audience. To Hindman, these power laws are a problem; he argues that they represent the re-emergence of elite politics that in turn limits the transformative potential of online communication.

Without disputing Hindman’s empirics, the case of Wikipedia suggests that we should be circumspect about his interpretation. The Internet’s power-law topology means there can only be one hub site occupying Wikipedia’s niche. But Wikipedia’s users are not attempting to build Wikipedias of their very own. They are, instead, looking for a coordination point where they can access “the sum of all human knowledge.” The path to power-law hub status is a developmental process, and it yields a set of institutions that are substantially more open and participatory than those characterizing the previous information regime. The utility of power-law hubs in solving the mass coordination problem has been largely ignored in the research literature thus far.

INSTITUTIONAL DEVELOPMENT OF HUB COMMUNITIES: A FIVE-STAGE ADOPTER CLASS MODEL

Wikipedia benefits from the power-law topology of the Internet, developing a large community of participants, active and passive, and benefiting from the substantial network externalities that they provide. The rise from nascent startup to power-law hub did not occur in a smooth progression, though. Wikipedia was able to succeed because its leadership skillfully and artfully moved it through a predictable series of development challenges that occurred as the community grew and changed. All such Internet-mediated community spaces move through the same diffusion process as virtually any other new product or innovation: (a) beginning with a tiny group of lead adopters who co-create the good, (b) expanding to a larger early adopter class, which is highly motivated but less technically skilled, (c) launching into the much larger early majority class, whose motivation and skill level is more varied and whose size pressures the system to adapt, (d) adopting protections against spammers and malicious attacks as the site attracts the late majority class and becomes recognized as “valuable online real estate,” and (e) dealing with challenges to institutional power structures as growth slows in the laggard phase and questions regarding voice and equality rise to the fore. These stages are of particular interest because they accord both with Wikipedia’s experience and with the longstanding literature on diffusion of innovations (Rogers, 2003). If Hindman and others are correct about the stability of power-law hub sites online, then there can only be a small number of these online communities-of-interest, and their development pattern is itself an important topic for investigation.

S-CURVES AND ADOPTER CLASSES: A BRIEF OVERVIEW OF THE DIFFUSION LITERATURE

The definitive text regarding diffusion research is Diffusion of Innovations by Everett Rogers. First published in 1962, the book is now in its fifth edition and has been cited over 19,000 times,9 a testament to Rogers’s longstanding impact on the field. Rogers notes that ideas, farm products, viruses, and a whole range of other innovations fit a standard “S-curve” as they diffuse through a community over time.
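The S-curve is commonly formalized as a logistic function; the sketch below is a generic illustration of that shape, not Rogers’s own formalization, and the growth-rate and midpoint parameters are arbitrary:

```python
import math

def logistic_adoption(t, carrying_capacity=1.0, growth_rate=1.0, midpoint=0.0):
    """Cumulative share of adopters at time t under logistic growth."""
    return carrying_capacity / (1 + math.exp(-growth_rate * (t - midpoint)))

# Adoption is slow at first, accelerates through the majority phases,
# and flattens as the pool of potential adopters is exhausted.
curve = [logistic_adoption(t) for t in range(-6, 7)]
assert curve[0] < 0.01 and curve[12] > 0.99
```

The steep middle of the curve corresponds to the early- and late-majority phases discussed below, where most adoption is concentrated.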


FIGURE 1. Standard diffusion curve13 (color figure available online). [Number of adopters over time, forming an S-curve, with the innovation adoption rate divided into Innovators, Early Adopters, Early Majority, Late Majority, and Laggards.]

Figure 1 offers a graphical representation of the S-curve, along with the five traditional adopter classes. Eric Von Hippel (2005) re-labels the “innovators” as “lead adopters” in his book, Democratizing Innovation. He notes in that work that the first tiny group of adopters often helps to co-create the good, repurposing it and providing feedback to the original firms or labs that are releasing the new product. This is particularly true in the computer industry, with beta-testers providing feedback to proprietary software companies and open-source programmers actively participating in the software development process. Following Von Hippel, I use the term “lead adopters” rather than “innovators” here. Note the relative size of the five adopter classes, with lead adopters being the smallest group, the early and late majorities making up the bulk of the population, and early adopters and laggards representing 13.5 percent and 16 percent of the population, respectively. This is based on an assumption that time-of-adoption follows a normal curve, with the early and late majorities each covering one standard deviation from the mean, early adopters representing the second standard deviation to the left of the mean, lead adopters representing 2+ standard deviations to the left, and laggards representing all adoptions occurring more than one standard deviation to the right (Rogers, 2003, p. 281).
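Those adopter-class shares follow directly from the standard normal curve. As a quick check of the cut points just described (my illustration using Python’s statistics module, not Rogers’s own computation):

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal: adoption time in sd units from the mean

classes = {
    "lead adopters": nd.cdf(-2),                # more than 2 sd early
    "early adopters": nd.cdf(-1) - nd.cdf(-2),  # between 2 and 1 sd early
    "early majority": nd.cdf(0) - nd.cdf(-1),   # within 1 sd, before the mean
    "late majority": nd.cdf(1) - nd.cdf(0),     # within 1 sd, after the mean
    "laggards": 1 - nd.cdf(1),                  # more than 1 sd late
}

for name, share in classes.items():
    print(f"{name}: {share:.1%}")
# lead adopters: 2.3%, early adopters: 13.6%, early and late majority:
# 34.1% each, laggards: 15.9% -- matching Rogers's rounded 2.5/13.5/34/34/16 splits.
```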

One of the most robust findings from the diffusion literature is that these adopter classes are demographically distinct from one another. Survey research has routinely found that younger, wealthier, better educated, and more “cosmopolitan” members of society have a stronger taste for innovation than their neighbors (Rogers, 2003, pp. 272–282). Lead adopters and early adopters tend to have peer networks that span wide geographies, exposing them to new ideas and innovations long before their neighbors. Thomas Valente (1995) further advances this notion of separate adopter classes in Network Models of the Diffusion of Innovations. Valente unites the longstanding diffusion research tradition with the emerging field of social network analysis, treating actors in a community as nodes in a network with varying adoption thresholds. He goes on to identify three critical mass points: one at the shift from early adopters to early majority, a second at the pivot point between early and late majority, and the third at the shift from late majority to laggards.


This approach is particularly valuable because it suggests that not only are there differences between adopter classes, but there are also temporal differences between the various phases of adoption.
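Valente’s threshold framing lends itself to a simple sketch (a toy of my own construction, not Valente’s specification): each node adopts once the share of its network neighbors who have already adopted meets its personal threshold, so adoption spreads through the network in temporally distinct waves.

```python
def run_threshold_model(neighbors, thresholds, seeds):
    """Iterate adoption: a node adopts once the share of its neighbors
    who have adopted meets or exceeds its personal threshold."""
    adopted = set(seeds)
    waves = [set(seeds)]
    while True:
        newly = set()
        for node, nbrs in neighbors.items():
            if node in adopted:
                continue
            share = sum(n in adopted for n in nbrs) / len(nbrs)
            if share >= thresholds[node]:
                newly.add(node)
        if not newly:
            return waves
        adopted |= newly
        waves.append(newly)

# A hypothetical line network with thresholds rising from left to right:
# adoption cascades outward from the seed one node per wave, with the
# highest-threshold node (a laggard) adopting last.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
thresholds = {0: 0.0, 1: 0.5, 2: 0.5, 3: 0.5, 4: 1.0}
waves = run_threshold_model(neighbors, thresholds, seeds={0})
```

Each wave here is a distinct adoption phase: the same innovation reaches different nodes at different times purely because of their network position and threshold.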

It is worth noting at this point a methodological difficulty in the diffusion and networks literatures. As Wasserman and Faust (1994) note in their text, Social Network Analysis: Methods and Applications, population definition is a crucial and troubling issue. For early diffusion researchers studying farm implements, the population under study would be farmers in an identifiable community. For later research on drug development, the population would be medical doctors with a shared specialty and overlapping memberships in the American Medical Association. What is the population of potential Wikipedians, though? What about the population of potential Dean campaign participants, or Tea Party activists, or MoveOn members? Boundary definition can only be determined in retrospect for these groups, rendering social network analysis useful for theoretical exercises, but presenting substantial data hurdles for more quantitative work. For this reason, I use the diffusion and social networks literatures as a starting point for my descriptive model of institutional development in online communities-of-interest, but do not develop the model as a social network study per se.

INSTITUTIONAL DEVELOPMENT CHALLENGES PRESENT AT EACH ADOPTION STAGE

The important lesson from the diffusion of innovation literature is that the fifth Wikipedian is substantively different from the 5,000,000th Wikipedian. They have different backgrounds, different interests in the site, and different needs of the site architecture. The fifth Wikipedian is co-creating the online space. She is likely involved in writing software code or is particularly devoted to the creation of an open encyclopedia. The five-millionth Wikipedian is visiting an established online space, looking up information written by others, and eventually finding enough value in the space to add a few edits of his own. Effective launch of one of these communities must move through five distinct phases: (a) initial launch, (b) reaching critical mass, (c) managing the influx of a mass public, (d) defending norms against newcomers, and (e) institutionalizing authority. Each stage is discussed below.

Stage 1: Initial Launch

Recall again Jimmy Wales’s suggestion that the technology behind Wikipedia was both simple and available for years prior to the launch of the site. The success of Wikipedia was a story of community-building. If Wales and Sanger had announced Wikipedia with an aggressive television and newspaper advertising campaign, the site would have been an almost guaranteed failure. The mass audience would have visited an empty vessel populated by a few anonymous (and likely erroneous) entries, turned around, and never come back. But the initial Nupedia list gave them a small set of highly motivated participants who could choose to contribute to the site because they individually found it a fascinating and worthwhile project. Their “adoption threshold,” in the language of Valente, was tremendously low. The site also had the early blessing of Richard Stallman, founder of the Free Software Foundation and a legend within the open source software community, and received an influx of tech-savvy participants through early discussion on the Slashdot.org discussion forum, described by Lih as “a salon for the technical elite and a grand senate of the computing community” (Lih, 2009, p. 67).

The attention of this lead adopter community is itself a scarce resource: they are, as a whole, well-educated, urbane technology and academic professionals with time for a few interesting side projects and a dense network of social ties. Benkler and Weber both note that the personal incentive for these individuals lies in a combination of reputation-building incentives, socio-psychological incentives, and hedonic personal gratification at solving interesting puzzles (Benkler, 2006; Weber, 2004). Any online community-of-interest must attract a sizeable number of these lead-adopting co-creators, and that in turn means providing them with the freedom to make changes and provide input to the system. Internet communication may exist in an environment of information abundance, but the interest of these elites is a scarce and valuable resource, best attracted through technology conferences, highly technical listserv discussion groups, and other traditional networking events that feature high barriers-to-entry. Though the identity of the lead-adopter community will vary from online community to online community (the lead adopters who populated the early political blogosphere were not the same people who populated early Wikipedia), they are invariably drawn from existing networks of influence: the underdefined “policy networks” discussed in the policy agenda literature, for instance (Kingdon, 1984).

Stage 2: Reaching Critical Mass

“User-generated content,” like Web 2.0, is an Internet buzzword coming out of the marketing world that has taken on substantial meaning. Web 2.0 can be roughly defined as people finding each other online, whereas Web 1.0 consisted of people finding information online (“the information superhighway”). User-generated content refers to comments, information, conversation, or multimedia content that comes not from top-down management, but from bottom-up, voluntary production. Several of the major online spaces (circa 2010) serve to aggregate and sort such content, including Facebook (publicly articulated social network information), YouTube (video sharing), Flickr (photo sharing), the large community blogs, and of course Wikipedia. To the extent that Internet-mediated political associations rely on channeling the participation and interaction of their communities-of-interest, they likewise fit into this category. Critical mass refers to the point at which a site is receiving enough user-generated content that the network externalities produced exceed the interest threshold for the mass of less-motivated Web surfers. Put another way, at some point Wikipedia has enough content to maintain the interest of people who do not self-identify as “techie” or “encyclopedia junkie.” The addition of this larger swath of the public massively expands the community. As the nascent version of any of these sites expands into this early adopter phase, it must settle a series of technical and normative questions regarding how to handle growth and community contribution.

In Wikipedia’s case, this included some complicated server load issues (Lih, 2009, pp. 77–79) in 2004, as the number of total English-language articles surpassed 100,000 (about the size of the Encyclopedia Britannica) and increased traffic grew to the point where the site would often crash. The involvement of technical elites was critical to solving these problems, and all growing online communities must either attract the sustained interest of the open source community or maintain a large budget for proprietary software solutions to this aspect of scaling. Lih records that, in the same time period, “because the community was growing so quickly, the process of forming consensus by e-mail did not scale” (Lih, 2009, p. 95). The consensus and co-creation practices that were necessary to attract and keep the lead adopter community had to be modified in order to allow for the early adopters, who by and large displayed a keen interest in the system, but were less technically experienced and lacked deep existing network ties with one another. Wikipedia responded by creating a distributed moderation system of superuser “administrators” and by moving mailing list–based discussion to a separate section of the wiki dubbed the “village pump.” As Wikipedia attracted enough user-generated content to become self-sustaining, then, the system had to adopt new code-based solutions to the surge of traffic.

Stage 3: Managing the Influx of a Mass Public

As the site reaches Valente’s (1995) first critical mass point, it must deal both with a tremendous surge in traffic and participation and also adapt to a mass public that does not share the particular interests of the lead and early adopters. While lead adopters are contacted through existing social and professional network ties, and early adopters are contacted through niche media outlets (coverage in Wired magazine being particularly coveted by many social media ventures at this stage), the shift to early majority is often accompanied by coverage in traditional media venues. Wikipedia had attracted a few brief mentions in the mainstream media during its first few years, but its breakthrough moment occurred during a well-publicized controversy in December 2005. John Seigenthaler, former editor of The Tennessean newspaper, noticed some incorrect and libelous information posted in his Wikipedia entry. Seigenthaler contacted the editors, who immediately changed it and apologized, but Seigenthaler went on to write a scathing op-ed for USA Today on Wikipedia’s unreliability regardless. The op-ed produced ripple effects, with other television and newspaper outlets covering the story (Seelye, 2005). For millions of Americans, this coverage was their first introduction to the site’s existence, and the negative news served as free site publicity that significantly increased traffic and content-creation.

In the history of Wikipedia, this is referred to as “the Seigenthaler effect.” Figure 2 demonstrates the growth in Wikipedia page views pre- and post-Seigenthaler op-ed. The upward trend in views continued unabated, as Wikipedia grew to its present-day status as the sixth most-visited Web site in the world. This sustained growth would not have been possible prior to the normative and technical problem-solving occurring in stage 2: the site would have lacked a vibrant community and also lacked the capacity to deal with the sudden influx of users. As-is, the arrival of the early majority signaled a change in the character of the site, as the culture of “ignore any rules that get in the way” had to stand up to the rush of onlookers less sure of their co-creating skills and more interested in a simple set of guidelines for what can and cannot be done. It is generally during this third stage that many of the lead adopters, faced with changing community norms and an increasingly noisy environment, depart for some new project or create their own sublist, complaining about how the community has been degraded by the onrushing newcomers (Shirky, 2008).

FIGURE 2. Growth of Wikipedia page views pre- and post-Seigenthaler op-ed14 (color figure available online). [Line chart of wikipedia.org page views, August 2005 to December 2006, with the Seigenthaler op-ed marked at the inflection point.]

Stage 4: Defending Norms against Newcomers

As the online community passes Valente’s second inflection point, growth is at its highest rate and the network externalities have rendered the space a clear power-law hub. At this point, the site becomes known as “valuable online real estate.” A new wave of challenges comes with such a distinction, as malicious users attempt to subvert the network for their own gain. Wikipedia has remained surprisingly robust against these challenges, a credit both to the technical solutions it has created and to the participatory community it has enabled. But two examples of this challenge demonstrate the general point. On July 31, 2006, political humorist Stephen Colbert featured Wikipedia in a segment of his television show, The Colbert Report. Describing Wikipedia as a space where “any user can change any entry, and if enough users agree with them, it becomes true,” Colbert told his viewers to go onto Wikipedia and edit the article on elephants to say: “Elephant population in Africa has tripled over the past six months.” The flood of user-edits forced site administrators to temporarily lock the page. In a less congenial spirit, companies and political aides have gotten into the habit of anonymously grooming their entries. Zittrain (2008) elaborates the tension admirably: “If the Wikipedia entry on Wal-Mart is one of the first hits in a search for the store, it will be important to Wal-Mart to make sure the entry is fair—or even more than fair . . .” (p. 139). Likewise, August 2006 saw the launch of MyWikiBiz, a company aimed at creating and editing Wikipedia entries on a for-fee basis. Jimmy Wales responded by blocking the company’s user account and banning its IP address, and this led to a lengthy community discussion about how to deal with such new ventures (Zittrain, 2008, p. 140).

The “valuable real estate” issue has important implications for the growth of online communities in areas that have already been identified as valuable. When the Los Angeles Times attempted to embrace the wiki editing concept through the launch of “wikitorials,” the site was almost immediately overrun by porn advertisements and was quickly shut down. Clay Shirky (2008) writes, “In the absence of a functioning community, a wiki will suffer from the Tragedy of the Commons, as the Wikitorial did, as individuals use it as an attention-getting platform, and there is no community to defend it” (p. 137). Karpf (2009b) identifies a similar trend impeding nascent conservative online political communities in their efforts to build parallel infrastructure to the progressive “Netroots.”

Stage 5: Institutionalizing Authority

Throughout the first four growth phases, we see a continuous fraying of the principles of openness and co-creation that mark the earliest stages of a participatory community. As sites enter the laggard phase (which, I will again note, can only be methodologically defined with rigor retrospectively), the slowdown in site growth raises inevitable questions of power and authority among the now-stabilizing community. Within Wikipedia, one such controversy occurred when longtime site administrator “Essjay” was revealed to have falsified his credentials. Although Wikipedia is open to editing from anyone, Essjay had claimed on his personal page that he held various graduate degrees and a professorship in theology. He had made reference to this educational background when arguing on various “talk” pages over the years.

In 2007, after Jimmy Wales contacted him about joining a for-profit venture, it turned out that Essjay was a 24-year-old editor with no graduate degrees. This led to a long community discussion regarding the validity of his edits, the issues of identity-management in the online space, and the proper role of expertise in Wikipedia (see Zittrain, 2008, p. 141; Lih, 2009, pp. 194–200 for further discussion).

As growth slows in this final phase, when most potential community members have joined the site and the remainder of the online population consists mostly of non-adopters with a few laggard adopters still present, the disparity between hubs and niches comes into stark relief. While the periods of rapid growth provide a sense that the entire world is changing, the final phase raises questions about who controls the fruits of all this volunteer labor. These changes have been somewhat muted in Wikipedia because the site is a nonprofit, nonpolitical venture. But in other communities-of-interest, particularly ones where a company or political leadership is seen to profit from the voluntary output, the challenges to institutionalized authority can be particularly problematic. The differences of scale that have developed become differences-in-kind, with Larry Sanger’s attempt to start his own equivalent to Wikipedia, Citizendium.org, being an instructive case. As Internet publisher Tim O’Reilly has put it, “If there weren’t a network effect driving Wikipedia, [Google’s] Knol and Citizendium would be succeeding.”10

The powerful network effects that define these online spaces also prevent alternative ventures from successfully growing to scale. If you don’t like Wikipedia, DailyKos, or Facebook, you are free to start your own, but that in itself is problematic.

If the power-law topography creates these differences-in-scale among the sites that allow for novel solutions to the collective action problem, then we must wonder about the conditions under which a power-law hub can fall or be replaced. The next section will discuss how each of the five institutional development stages listed above produces a challenge that can lead to the failure or replacement of a network-enhanced good, leading to a set of hypotheses about the parameters within which potential network-enhanced goods can be developed.

STUMBLING ALONG THE PATH TO POWER-LAW HUB STATUS: THE LIMITS OF OPEN SOURCE POLITICS

Phase 1

The first challenge for developing a hub space lies in attracting a devoted set of lead adopters. This problem can come in at least two forms, depending on the availability of a pre-existing power-law hub. In the case of Wikipedia, for instance, the first wave of adopters came from the Nupedia list and from the Slashdot community. Likewise, the Howard Dean campaign featured the support of dozens of leaders in the field of social technology who were attracted by the new opportunity to apply the principles of open source to a political campaign and see what happened. Those are existing networks whose attention is a scarce commodity. Attempts at replicating these successes must find some other reason why an existing technological or topical elite network would choose to engage in peer production through that particular venue.

This point seems lost upon the hundreds of organizations and companies that have decided to enter the Web 2.0 age by launching their own social networking sites. A useful indicator is the existence of a McDonald’s social networking site. Millions of Americans eat at McDonald’s, but how many of them wish to self-identify as members of the “McDonald’s community”? Pushed forward by a consulting industry that has found lucrative contracts in supporting the growth of social media, the very real public goods produced by online communities-of-interest can be easily obscured if we look at the social media industry as a whole. Without a colonizing set of devoted, skilled volunteer participants, the best technology in the world will fail to deliver the valuable network externalities that make these spaces worth regularly visiting. In a similar vein, I argued in a recent Journal of Information Technology & Politics article, “Macaca Moments Reconsidered,” that the primary impact of new media on politics is only identifiable through the study of large-scale communities-of-interest, rather than the isolated study of specific new media tools such as YouTube (Karpf, 2010). The impact of the Internet on political associations and politics in general comes not through lowered communication costs alone, but through the communities-of-interest that these lowered costs enable. Open source as a process is only as effective as the attached community makes it. The first step in building such a community lies in attracting a set of active co-creators, and these co-creators are themselves a scarce commodity. This leads to the first testable hypothesis, also termed the “Field of Dreams” fallacy:

H1 (The “Field of Dreams” fallacy): Successful peer production requires the initial engagement of a lead adopter community, predominantly found within a pre-existing network.

If H1 is correct, it follows that optimism about “open source politics” must be tempered by a recognition of existing power relationships. It simply is not the case that “if you build it, they will come.” Rather, existing networks of elite actors must be courted and brought aboard. Whether these are members of the “technorati” or other social, political, or media elites, they represent a pre-existing set of “haves” whose participation is a necessary condition for the success of even such open, egalitarian architectures as the one found on Wikipedia.

Phase 2

The move from lead adopters to the larger set of early adopters represents a distinct bundle of challenges. Lead adopters are a valuable commodity, but they also have many interests that are quite different from the rest of the population. Reaching critical mass requires that a site not only solve a series of technical and normative challenges; it also requires the new community to exist in an area which is attractive to a substantial issue public. Shirky writes about a variant on this hurdle in his 1999 essay, “The Interest Horizons and the Limits of Software Love.” Responding to Eric Raymond’s then-recent summary of open source, that “every good work of software starts by scratching a developer’s personal itch . . . given enough eyeballs, all bugs are shallow,” Shirky notes, “What if you have a problem that doesn’t scratch some core developer’s personal itch?” (Shirky, 1999). Within the restricted universe of software development projects, some ideas will be more exciting and motivating than others. The least exciting ideas may still have a commercially viable market, but they are unlikely to attract a large enough community of motivated developers to be appropriate for commons-based peer production.

This critique holds for the formation of online communities-of-interest as well, which is not surprising, given that Wikipedia and other such communities took inspiration from the open source software movement. The lowered transaction costs of the Internet help to reveal the full demand curve for public participation, but part of what that means is that topics or areas that simply are not particularly attractive or interesting to any existent or nascent issue public will fail to reach critical mass. Revelation of a complete demand curve does not mean all issues will be equally in demand.

The first generation of social scientists to study the Internet was optimistic that, thanks to the falling costs of online engagement, we would see the rise of mass deliberative spaces, “online public squares,” and other venues for enhanced democratic participation. Many such sites have been launched with enthusiasm, only to fail to reach critical mass. There are several potential explanations for such failure, but one of them is that public interest in lengthy deliberative processes simply is not as high as social scientists would ideally like. (See Schudson, 1999, for a similar historical discussion.) One limit of peer production that will hamper communities-of-interest is the inability to attract a large enough community to pass the critical mass point where the user-generated content itself gives people a reason to regularly return to the online space. This leads to a second hypothesis, termed the “interest horizons” thesis:

H2 (The “interest horizons” thesis): Commons-based peer production will extend beyond the lead adopter community only to the degree that it appeals to the interests of a broader community.

The interest horizons thesis may sound nearly teleological in nature (“peer production communities will only form where they are interested in doing so”), but it highlights an important practical limitation on open source politics. The Obama campaign, for instance, featured many “open source”-type activities. Will the relative successes of a site like MyBarackObama.com be replicable for a local county council candidate in 2010, however? H2 would suggest not, for the simple reason that presidential campaigns attract much greater public attention than local races. Presidential campaigns, Obama’s in particular, sit much higher on the demand curve of the American public than any other electoral campaigns, and thus there are “open source” strategies that can only be successfully applied in such settings. The promulgation of “best practices” white papers and anecdotal accounts of success largely ignores the implications of the interest horizons thesis.
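The critical mass dynamic that H2 turns on can be illustrated with a logistic (S-curve) adoption model, a standard device in the diffusion-of-innovations literature (Rogers, 2003). The sketch below is illustrative only; the function, parameter names, and values are invented for exposition and are not drawn from the article’s data.

```python
from math import exp

def adopters(t, capacity=10_000, growth_rate=0.5, midpoint=10):
    """Cumulative adopters at time t under a logistic (S-curve) model.

    Below the midpoint the community has not yet reached critical mass
    and growth is slow; past it, network effects drive rapid adoption
    until the pool of potential members (capacity) is exhausted.
    """
    return capacity / (1.0 + exp(-growth_rate * (t - midpoint)))

print(round(adopters(2)))   # pre-critical-mass: only a trickle of adopters
print(round(adopters(10)))  # the inflection point: half of capacity
print(round(adopters(20)))  # the laggard phase: growth has flattened out
```

A community stuck below its interest horizon never approaches the midpoint of this curve, which is why "best practices" borrowed from a presidential campaign may do nothing for a county council race.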

Phase 3

Often launched by some event that exposes the hub space to the population through the mass media, the third phase is where substantial scaling and network effects begin to take hold. An important related challenge at this juncture is the availability of a distributed reputation system capable of managing this scaling process. As discussed by Benkler (2006), Bruns (2008), Resnick, Zeckhauser, Swanson, and Lockwood (2006), Karpf (2011), and others, online reputation systems are a necessary component of all hub spaces within the power-law topography of the Internet.

A “benevolent dictator” such as Jimmy Wales can play a guiding role in the first two phases of growth, but in phase three, communities of interest quickly learn that “Jimmy doesn’t scale” (Lih, 2009, p. 179). Slashdot’s “mojo” system and eBay’s “feedback forum” are the two best-known examples, but Google’s PageRank algorithm has similar functions, drawing upon a large set of distributed reputation assessments, then applying some form of algorithm that rewards good content or contributions while sanctioning bad content or contributions. Yochai Benkler (2006) notes in The Wealth of Networks that an effective reputation system is a necessary condition of large-scale peer production. He goes on to suggest that the components of peer-produced systems can be broken down into smaller components (“modularity”) and that these components themselves can then be reduced to tasks that require little time and effort (“granularity”) (Benkler, 2006, p. 100). Benkler illustrates these points by drawing upon the set of existing online reputation systems, but in so doing he overlooks an important caveat: Some types of mass collaboration are much more easily reduced to small actions taken in front of a computer monitor than others.
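As a toy illustration of the mechanism described above, consider a system that aggregates many small, granular rating acts into a single score that rewards good contributions and sanctions bad ones. The class, thresholds, and vote scheme below are invented for exposition; they are not the actual algorithms behind Slashdot’s mojo, eBay’s feedback forum, or PageRank.

```python
from collections import defaultdict

class ReputationSystem:
    """Aggregate distributed, granular assessments into reward/sanction."""

    def __init__(self, promote_at=3, bury_at=-3):
        self.scores = defaultdict(int)  # contribution id -> aggregate score
        self.promote_at = promote_at    # illustrative thresholds
        self.bury_at = bury_at

    def rate(self, contribution_id, vote):
        """One granular act of assessment: vote is +1 (good) or -1 (bad)."""
        self.scores[contribution_id] += vote

    def status(self, contribution_id):
        """The algorithmic reward/sanction step applied to the aggregate."""
        score = self.scores[contribution_id]
        if score >= self.promote_at:
            return "promoted"
        if score <= self.bury_at:
            return "buried"
        return "neutral"

rep = ReputationSystem()
for _ in range(4):
    rep.rate("useful-edit", +1)   # many small positive assessments
for _ in range(3):
    rep.rate("spam-link", -1)     # distributed sanctioning

print(rep.status("useful-edit"), rep.status("spam-link"))
```

The point of the sketch is the caveat that follows: each `rate` call is a few seconds of screen time, which is precisely why tasks that cannot be decomposed this way resist peer production.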

This represents a substantial limitation to the Internet’s impact on political associations. Wikipedia, DailyKos, MoveOn, and other large-scale communities-of-interest are capable of overwhelming growth with low overhead costs because they are asking their community to engage in distributed tasks that can occur effectively in front of a computer screen. One challenge that MoveOn, Democracy for America, and similar organizations have faced when they attempt to use “online tools for offline action” is that the slight increase in transaction costs (asking people to rate meetings after they return home from them, for instance) is accompanied by a steep drop-off in participation.

Karpf (2011) argues that these limits are changing thanks to the diffusion of the mobile Web (i.e., Internet-through-iPhone), but it is still too early to tell whether that hypothesis will be supported by the eventual data. For our purposes here, it bears noting that the impact of the Internet on offline collaborations is slim when compared with its impact on online collaboration. Potential power-law hubs can only radically scale up if they adopt a system to manage the influx of participation. Such systems of reputation and recommendation are not equally applicable to all forms of collaboration and engagement, and where they cannot yet be built, commons-based peer production will fail to displace traditional modes of association and production.

H3 (“Political strategy is not like computer code”): Commons-based peer production is limited to those tasks or activities that are amenable to online reputation systems.

H4 (“Location-based wave” thesis): The spread of the “Mobile Web” will expand the range of tasks or activities amenable to online reputation systems.

Phase 4

By the fourth phase, a site has managed to attract mass attention and benefits from substantial network effects. What is to stop it from continuing in this regard? The brief history of social network sites (SNS) offers a useful illustration. Friendster.com was the first SNS to pass critical mass and attract large-scale participation. danah boyd chronicles the demise of Friendster, eclipsed by Myspace.com because MySpace offered a more permissive culture, inviting bands to start their own pages and letting users create fake profiles for schools, organizations, and celebrities. Friendster had a network externality advantage, because more people were initially on its site, but low online transaction costs meant that people could add a MySpace account in minutes, and with greater freedom on MySpace, they eventually switched en masse. boyd treats the replacement of Friendster by MySpace as an indicator of “Internet culture” (boyd, 2006; boyd & Ellison, 2007).

MySpace indeed gained millions more users than Friendster, as SNSs gained further penetration among the public at large. Matthew Hindman notes that, prior to June 2007, MySpace was stably among the top five Web sites in the United States. In his research into the stability of power laws on the Web, he notes that MySpace precipitously dropped that June because the site “became uncool.”11

In the months leading up to that decline, MySpace had been barraged by spam solicitations, as pornography marketers took note of its status as “valuable online real estate” and began creating fake accounts. Viruses also became a problem around this time. Critically, Facebook.com replaced MySpace at this time, and it remains the SNS power-law hub today. Facebook included more limiting user registration permissions, and only allowed members of an individual’s school-based or geographic network to view his or her profile. Perhaps more importantly, in May 2007, Facebook unveiled a new feature: its open application programming interface (API). The open API allowed outside developers to write new programs, including games and information-sharing tools.

Facebook replaced MySpace as power-law hub, not because of culture, but because the open API gave users something new to do. Failure to respond to the pressures of being “valuable online real estate” rendered MySpace vulnerable, and when Facebook gave users new engagement opportunities, MySpace was left as a virtual ghost town, with over 100 million registered users, most of whom were suddenly spending the bulk of their time on another site.12 (See boyd, 2009, for a more detailed argument regarding the socioeconomic effects of this “MySpace flight.”)

The lesson we should draw from the history of social network sites is that, although power-law hubs benefit from substantial network effects that provide substantial stability in the short run, viewed over a long time horizon, the hubs appear more fragile. The Internet is a fast-changing environment, and lead adopter communities tend to desert an online space once it gets too noisy and crowded, moving on to experiment with the next wave of innovations. Just as CompuServe, AOL, and Geocities were once defining features of online “geography,” only to be relegated a few years later to the dustbin of history, the changing nature of the Internet creates room for a host of “disruptive innovations” (see Karpf, 2009a, for further discussion) that can lead to the displacement of existing hub communities. This is a particularly pronounced feature in light of “Moore’s Law,” which predicted in 1965 that transistor capacity would double approximately every two years (Moore, 1965). Unlike previous communications regimes, online communication continues to evolve at a tremendous pace, creating constant opportunities to introduce new features to a system that had been cost-impermissible in the previous election cycle. This leads to H5:

H5 (“Power-law fragility under Moore’s Law” thesis): The displacement of an existing power-law hub by a new entrant will be facilitated by the development of novel features, enabled by Moore’s Law.

H6 (“Punctuated equilibrium” thesis): Viewed over time, individual topic areas online will demonstrate “punctuated equilibrium” characteristics, with long periods of stability, followed by a power-law hub being replaced by a new upstart.
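The arithmetic behind H5 is worth making explicit. If capacity doubles roughly every two years, as Moore (1965) projected, a new entrant at time T builds on hardware several times more capable than what the incumbent hub launched with. The function and parameters below are a back-of-the-envelope illustration, not a claim from the article itself.

```python
def capacity_multiplier(years, doubling_period=2.0):
    """How many times more capacity is available after `years` years,
    assuming capacity doubles once every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# One U.S. presidential election cycle (four years): 4x the capacity.
print(capacity_multiplier(4))
# A decade under the same doubling assumption: 32x.
print(capacity_multiplier(10))
```

Even a single election cycle's worth of doubling is enough to make features that were "cost-impermissible" for the incumbent routine for the challenger.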

H5 suggests that, as the Internet continues to evolve as a medium, power laws may turn out to be less stable than they at first appeared. New startups invest in capacities of the Web at time T that were not present at time T – 1, and in the lowered transaction cost digital environment, this can lead to the replacement of one hub with another. The requirements of the mass coordination dilemma mean that the environment will continue to have only one central hub per area, but those hubs can be replaced in a predictable manner. Likewise, H6 posits that the apparent stability found in the online landscape may bear a strong resemblance to the stability found in the policy subsystems literature (Baumgartner & Jones, 1993). At a single point in time, policy subsystems appear stable and unchanging. Viewed over a 30-year timeframe, they instead reveal short periods of disruption, followed by the establishment of a new systemic hierarchy.

Phase 5

The governance challenges presented in the fifth and final stage are difficult to describe in great detail, particularly because of the data limitations present when applying social network analysis to online communities-of-interest. One cannot say with certainty whether Wikipedia or more political hub spaces have actually entered the laggard phase of adoption, because we do not know at present what percentage of the online population is “non-adopters” rather than laggard adopters. What should be clear, however, is that the slowdown of site growth creates pressures regarding who controls the fruits of the community’s labor. As one participant in the participatory ratings site (and power-law hub) Yelp.com explained regarding her superuser “elite” status: “It makes you feel special for about two weeks. Then you either realize you’re working for someone else without getting paid, you totally lose interest, or you get really into it” (Zittrain, 2008, p. 146).

Sites that fail to effectively manage these governance challenges are at serious risk of “code forking,” the term Stephen Weber uses to describe subsets of the open source software community who break off from a large project to start their own similar endeavor (Weber, 2004). Code forking is not inherently a negative attribute; for certain types of communities, particularly those whose collaborations are not particularly granular or modular, there exists a “crowding threshold” above which additional members detract from the community. (See Ciffolilli, 2003, for a related discussion of club goods theory.) Too much code forking can reduce the network externalities produced by the community, and if one of these forked communities successfully passes the critical mass point in phase 2, then it begins to present a viable alternative to community members who become disaffected over the governance controversies. Likewise, the community must deal with these governance challenges while also embracing new code-based innovations; otherwise it runs the risk of being displaced by a new entrant that suddenly offers community members an augmented set of opportunities.

H7 (“Code forking the public sphere”): Once the phases of rapid growth have concluded, communities associated with “open source politics” will engage in a series of “code forks.”

If H7 is correct, it provides a counterpoint of sorts to Hindman’s suggestion that power-law hubs reduce the democratizing effects of the Internet to “mythological” status. For while there can be only one central information hub, audience share is not the only goal of online citizens. If networked publics are able to successfully code fork, and in so doing enable a greater number of small- or medium-sized groups to efficaciously engage in collective action, then the limitations of power-law topology prove a good deal less limiting.

CONCLUSION

The central purpose of this article has been to derive a set of hypotheses about the currently ill-defined field of “open source politics” from the substantial commons-based peer production success represented by Wikipedia.org. In the process of developing these hypotheses, it has been necessary to elaborate several core concepts about Internet-mediated communication that frequently are misunderstood, and also to look at how the standard diffusion-of-innovations process influences the institutional development of online communities.

A key component of the argument lies in the reinterpretation of what power laws in online traffic actually represent. Whereas previous scholars have taken the existence of power laws to indicate a stark limitation on the democratizing impact of the medium, this article argues that “power-law topology” is of critical importance in solving the collective action problem under conditions of abundance. Whereas traditional research on collective action and public goods problems has focused on the free rider problem, it is self-evident that Wikipedia has no problem with free ridership per se. Rather, the problem with online communities of this type lies in coordinating activity in an anti-geographic landscape. Preferential attachment serves as a solution to that challenge, and when combined with the growth inherent in a diffusion process, preferential attachment produces power-law distributions. Power laws then serve to solve the anti-geographic challenge presented by the Internet’s abundance.
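The generative claim in the preceding paragraph, that growth plus preferential attachment yields power-law distributions (Barabasi, 2003), can be checked with a minimal simulation. The sketch below is a bare-bones version of the Barabasi-Albert mechanism with invented parameters: each new node links to one existing node chosen with probability proportional to that node’s current degree.

```python
import random

def preferential_attachment(n_nodes, seed=42):
    """Grow a network one node at a time; each newcomer links to an
    existing node with probability proportional to that node's degree."""
    random.seed(seed)
    degrees = [1, 1]   # start from two nodes joined by one edge
    stubs = [0, 1]     # one entry per edge endpoint, so a uniform draw
                       # from this list is a degree-proportional draw
    for new_node in range(2, n_nodes):
        target = random.choice(stubs)
        degrees.append(1)          # the newcomer's single link
        degrees[target] += 1
        stubs.extend([new_node, target])
    return degrees

degrees = preferential_attachment(5000)
print(max(degrees))                        # a few heavily linked hubs emerge
print(sorted(degrees)[len(degrees) // 2])  # while the median node stays niche
```

Plotting the degree counts on log-log axes would show the familiar heavy-tailed signature; the point here is simply that hubs and niches emerge from the growth process itself, with no central planner.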

The article concludes with a series of hypotheses specifically because it is my contention that the field of “open source politics” is still in its formative stages. Open source is a product, a process, and a community. Many have focused on the process and product, while community-formation has gone largely underexplored. This has led to the promulgation of several misperceptions, particularly within the practitioner community and, to a lesser extent, among academic observers. The hypotheses in this article are meant as a challenge of sorts, both to their author and to the research community as a whole. Web research faces the challenge and the opportunity of having massive amounts of data, and much of it noisy. The seven hypotheses are meant to inform our discussion of what open source as a product and process will look like in the political sphere, given the institutional challenges all online communities face as they move through the stages of diffusion. Commons-based peer production processes are not a “field of dreams,” capable of being assembled in any situation. They face the limits of existing interest horizons, as determined by early adopter communities rather than the smaller, more focused lead adopter communities. They can only be applied to joint outcomes that prove amenable to online reputation tracking, though the sphere of such outcomes is likely growing apace with the development of the mobile Web. Existing power-law hubs are far from immutable due to the constant innovation made possible by Moore’s Law, and they thus exhibit the properties of punctuated equilibrium, with long periods of stability and brief shakeups. As an online community concerned with matters of the public sphere matures and growth slows, this leads to a healthy equivalent to the “code forking” discussed by Weber, Kelty, and others. As a result, “open source politics” should not be expected to radically transform the public sphere, but it should render the elite system more porous.

NOTES

1. According to Alexa.com, November 3, 2010.

2. According to Alexa.com, November 3, 2010.

3. According to http://en.wikipedia.org/wiki/Wikipedia:About as of November 3, 2010.

4. Nupedia, unlike Wikipedia, was designed as a for-profit venture of Wales’s company, Bomis.com. While entries were to be free, the site was intended to generate revenue through ad sales. Wikipedia was eventually launched as a separate nonprofit after a controversy among volunteer “Wikipedians” over whether the company would one day profit from their free labor.

5. Today, the three rules have been expanded to “five pillars”: (1) Wikipedia is an encyclopedia. (2) Wikipedia has a neutral point of view. (3) Wikipedia is free content. (4) Wikipedia has a code of conduct. (5) Wikipedia does not have firm rules.

6. http://alexa.com/data/details/traffic_details/wikipedia.org. This accords with the findings from the 2007 Pew survey, but tracks traffic on the global, rather than national, level.

7. Olson suggests that two types of groups will face minimal free rider problems. Small groups will be able to identify noncontributors, creating reputation pressures and incentives to recognizably participate (Chong, 1991, develops this case further with regard to social movements). Privileged groups feature a single wealthy participant who will provide as much of the public good as he or she likes regardless. If the wealthy patron has a strong enough taste for the good, all will be satisfied regardless of free riding.

8. http://en.citizendium.org/wiki/CZ:About. Also see http://alexa.com/siteinfo/wikipedia.org+citizendium.org.

9. Obtained through Google Scholar: http://scholar.google.com/scholar?hl=en&lr=&cites=7511022991152445218.

10. See http://radar.oreilly.com/2008/10/network-effects-in-data.html.

11. Research presentation at the Oxford Internet Institute, March 2009. Available online at www.oii.ox.ac.uk.

12. This last point presents a host of measurement issues for social scientists interested in the Internet. User accounts, once created, are rarely destroyed. Competition between social networks, community blogs, or Internet-mediated political associations must be measured in activity, rather than list size. Unfortunately, activity measures are almost universally proprietary data, when they are available at all.

13. Image obtained through Google images: http://images.google.com/imgres?imgurl=http://www.cyfm.net/articles/images/S-CurveDetail.jpg&imgrefurl=http://www.cyfm.net/article.php%3Farticle%3DDont_Good_Ideas_Fly.html&h=900&w=900&sz=67&hl=en&start=4&sig2=aXHLBuRCvt8cFz6sdhO5Ag&tbnid=cAFVuhipoW-dfM:&tbnh=146&tbnw=146&ei=trjzR6K_FYyGeufesIgB&prev=/images%3Fq%3Ds-curve%2B%26hl%3Den%26lr%3D%26safe%3Doff%26sa%3DG.

14. Data compiled for “History of Wikipedia” by user “TakuyaMurata.” Image available at http://en.wikipedia.org/wiki/History_of_Wikipedia under Creative Commons license.

REFERENCES

Albert, R., Jeong, H., & Barabasi, A. L. (1999). Diameter of the World Wide Web. Nature, 401, 130–131.


Barabasi, A. L. (2003). Linked. New York, NY: Plume Books.

Baumgartner, F., & Jones, B. (1993). Agendas and instability in American politics. Chicago, IL: University of Chicago Press.

Benkler, Y. (2006). The wealth of networks. New Haven, CT: Yale University Press.

Bimber, B., Flanagin, A., & Stohl, C. (2005). Reconceptualizing collective action in the contemporary media environment. Communication Theory, 15(4), 365–388.

boyd, d. (2006, March 21). Friendster lost steam. Is MySpace just a fad? Blog post. Retrieved from http://www.danah.org/papers/FriendsterMySpaceEssay.html

boyd, d. (2009, June). The not-so-hidden politics of class online. Presented at the Personal Democracy Forum, New York, NY.

boyd, d., & Ellison, N. (2007). Social network sites: Definition, history, and scholarship. Journal of Computer-Mediated Communication, 13(1), article 11.

Bruns, A. (2008). Blogs, Wikipedia, Second Life, and beyond: From production to produsage. New York, NY: Peter Lang.

Chong, D. (1991). Collective action and the civil rights movement. Chicago, IL: University of Chicago Press.

Ciffolilli, A. (2003). Phantom authority, self-selective recruitment and retention of members in virtual communities: The case of Wikipedia. First Monday, 8(12). Retrieved from http://firstmonday.org/issues/issue8_12/ciffolilli

Drezner, D., & Farrell, H. (2008). The power and politics of blogs. Public Choice, 134(1–2), 15–30.

Fine, A., Sifry, M., Raseij, A., & Levy, J. (Eds.). (2008). Rebooting America: Ideas for redesigning American democracy for the Internet age. New York, NY: Personal Democracy Press.

Giles, J. (2005). Internet encyclopedias go head to head. Nature, 438, 900–901.

Hara, N. (2008). Internet use for political mobilization: Voices of participants. First Monday, 13(7).

Hindman, M. (2007). Open-source politics reconsidered: Emerging patterns in online political participation. In V. Mayer-Schonberger & D. Lazer (Eds.), Governance and information technology: From electronic government to information government (pp. 183–207). Cambridge, MA: MIT Press.

Hindman, M. (2008). The myth of digital democracy. Princeton, NJ: Princeton University Press.

Hindman, M., Tsioutsiouliklis, K., & Johnson, J. (2003, April). “Googlearchy”: How a few heavily-linked sites dominate politics on the Web. Paper presented at the Annual Meeting of the Midwest Political Science Association, Chicago, IL.

Huberman, B., & Wilkinson, D. (2007). Assessing the value of cooperation in Wikipedia. First Monday, 12(4). Retrieved from http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/1763/1643

Jenkins, H. (2006). Convergence culture: Where old and new media collide. New York, NY: NYU Press.

Karpf, D. (2008, March). Measuring influence in the political blogosphere: Who’s winning, and how can we tell? Politics and Technology Review, 33–41. Retrieved from http://www.asiaing.com/politics-and-technology-review-march-2008.html

Karpf, D. (2009a, September). The MoveOn effect: Disruptive innovation within the interest group ecology of American politics. Presented at the American Political Science Association Annual Meeting, Toronto, Ontario, Canada.

Karpf, D. (2009b, October). Don’t think of an online elephant: Explaining the dearth of conservative online political infrastructure in America. Presented at the Society for the Social Study of the Sciences Annual Meeting, Washington, DC.

Karpf, D. (2010). Macaca moments reconsidered: Electoral panopticon or Netroots mobilization? Journal of Information Technology & Politics, 7(2–3), 143–162.

Karpf, D. (2011). Why bowl alone when you can flashmob the bowling alley? IEEE Intelligent Systems, 26(1), 40–47.

Kelty, C. (2008). Two bits: The cultural significance of free software. Durham, NC: Duke University Press.

Kingdon, J. (1984). Agendas, alternatives, and public policy. New York, NY: Longman Publishers.

Kollock, P. (1999). The economies of online cooperation: Gifts and public goods in cyberspace. In M. A. Smith & P. Kollock (Eds.), Communities in cyberspace (pp. 220–239). New York, NY: Routledge.

Lessig, L. (1999). Code and other laws of cyberspace. New York, NY: Basic Books.

Lessig, L. (2003, August 19). Interview with Joe Trippi. Lessig.org, blog entry. Retrieved from http://www.lessig.org/blog/archives/001428.shtml

Lev-On, A., & Hardin, R. (2008). Internet-based collaborations and their political significance. Journal of Information Technology & Politics, 4(2), 5–27.

Lih, A. (2009). The Wikipedia revolution. New York, NY: Hyperion Books.

Lupia, A., & Sin, G. (2003). Which public goods are endangered? How evolving communication technologies affect the logic of collective action. Public Choice, 117(3–4), 315–331.

Moore, G. (1965). Cramming more components onto integrated circuits. Electronics Magazine, 4, 114–117.

Olson, M. (1965). The logic of collective action. Cambridge, MA: Harvard University Press.

Rainie, L., & Tancer, B. (2007). Wikipedia users. Research report. Washington, DC: Pew Internet & American Life Project. Retrieved from http://www.pewinternet.org/PPF/r/212/report_display.asp

Ratcliffe, M., & Lebkowsky, J. (Eds.). (2005). Extreme democracy. Stanford, CA: Creative Commons.

Raymond, E. S. (2001). The cathedral & the bazaar: Musings on Linux and open source by an accidental revolutionary. Sebastopol, CA: O’Reilly.

Reagle, J. (2010). Good faith collaboration: The culture of Wikipedia. Cambridge, MA: MIT Press.

Reed, D. (1999). That sneaky exponential—beyond Metcalfe’s Law to the power of community building. Context Magazine (online supplement). Retrieved from http://www.reed.com/dpr/locus/gfn/reedslaw.html

Resnick, P., Zeckhauser, R., Swanson, J., & Lockwood, K. (2006). The value of reputation on eBay: A controlled experiment. Experimental Economics, 9(2), 79–101.

Rogers, E. (2003). Diffusion of innovation (5th ed.). New York, NY: Free Press.

Rushkoff, D. (2003). Open source democracy: How online communication is changing offline politics. New York, NY: Demos.

Schudson, M. (1999). The good citizen: A history of American civic life. New York, NY: Free Press.

Seelye, K. (2005, December 4). Snared in the Web of a Wikipedia liar. The New York Times. Retrieved from http://www.nytimes.com/2005/12/04/weekinreview/04seelye.html

Shirky, C. (1999). The interest horizons and the limits of software love. Self-published. Retrieved from http://www.shirky.com/writings/interest.html

Shirky, C. (2005). Power laws, Weblogs, and inequality. In M. Ratcliffe & J. Lebkowsky (Eds.), Extreme democracy (pp. 49–57). Stanford, CA: Creative Commons.

Shirky, C. (2008). Here comes everybody. New York, NY: Penguin Press.

Sifry, M. (2004, November 4). The rise of open-source politics. The Nation. Retrieved from http://www.thenation.com/doc.mhtml?i=20041122&s=sifry

Stalder, F. (2006). Manuel Castells and the theory of the network society. New York, NY: Polity Press.

Trippi, J. (2004). The revolution will not be televised. New York, NY: Regan Books.

Valente, T. (1995). Network models of diffusion of innovation. New York, NY: Hampton Press.

Viegas, F., Wattenberg, M., Kriss, J., & Van Ham, F. (2007). Talk before you type: Coordination in Wikipedia. In Proceedings of the 40th Annual Hawaii International Conference on System Sciences. Retrieved from http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4076527

Vise, D., & Malseed, M. (2005). The Google story: Inside the hottest business and technology success of our time. New York, NY: Delacorte Press.

Von Hippel, E. (2005). Democratizing innovation. Cambridge, MA: MIT Press.

Wasserman, S., & Faust, K. (1994). Social network analysis: Methods and applications. New York, NY: Cambridge University Press.

Weber, S. (2004). The success of open source. Cambridge, MA: Harvard University Press.

Weinberger, D. (2007). Everything is miscellaneous. New York, NY: Holt Paperbacks.

Zittrain, J. (2008). The future of the Internet (and how to stop it). New Haven, CT: Yale University Press.
