
Envisioning Communities: A Participatory Approach Towards AI for Social Good

Elizabeth Bondi,∗1 Lily Xu,∗1 Diana Acosta-Navas,2 Jackson A. Killian1
{ebondi,lily_xu,jkillian}@g.harvard.edu, [email protected]

1John A. Paulson School of Engineering and Applied Sciences, Harvard University
2Department of Philosophy, Harvard University

ABSTRACT
Research in artificial intelligence (AI) for social good presupposes some definition of social good, but potential definitions have been seldom suggested and never agreed upon. The normative question of what AI for social good research should be “for” is not thoughtfully elaborated, or is frequently addressed with a utilitarian outlook that prioritizes the needs of the majority over those who have been historically marginalized, brushing aside realities of injustice and inequity. We argue that AI for social good ought to be assessed by the communities that the AI system will impact, using as a guide the capabilities approach, a framework to measure the ability of different policies to improve human welfare equity. Furthermore, we lay out how AI research has the potential to catalyze social progress by expanding and equalizing capabilities. We show how the capabilities approach aligns with a participatory approach for the design and implementation of AI for social good research in a framework we introduce called PACT, in which community members affected should be brought in as partners and their input prioritized throughout the project. We conclude by providing an incomplete set of guiding questions for carrying out such participatory AI research in a way that elicits and respects a community’s own definition of social good.

CCS CONCEPTS
• Human-centered computing → Participatory design; • Computing methodologies → Artificial intelligence; • General and reference → Cross-computing tools and techniques.

KEYWORDS
artificial intelligence for social good; capabilities approach; participatory design

ACM Reference Format:
Elizabeth Bondi,∗1 Lily Xu,∗1 Diana Acosta-Navas,2 Jackson A. Killian1. 2021. Envisioning Communities: A Participatory Approach Towards AI for Social Good. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’21), May 19–21, 2021, Virtual Event, USA. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3461702.3462612

*Equal contribution. Order determined by coin flip.



Figure 1: The framework we propose, Participatory Approach to enable Capabilities in communiTies (PACT), melds the capabilities approach (see Figure 3) with a participatory approach (see Figures 4 and 5) to center the needs of communities in AI research projects.

1 INTRODUCTION
Artificial intelligence (AI) for social good, hereafter AI4SG, has received growing attention across academia and industry. Countless research groups, workshops, initiatives, and industry efforts tout programs to advance computing and AI “for social good.” Work in domains from healthcare to conservation has been brought into this category [43, 73, 83]. We, the authors, ourselves have endeavored towards AI4SG work as computer science and philosophy researchers.

Despite the rapidly growing popularity of AI4SG, social good has a nebulous definition in the computing world and elsewhere [27], making it unclear at times what work ought to be considered social good. For example, can a COVID-19 contact tracing app be considered to fall within this lauded category, even with privacy risks? Recent work has begun to dive into this question of defining AI4SG [22, 23, 49]; we will explore these efforts in more depth in Section 2.

However, we point out that context is critical, so no single tractable set of rules can determine whether a project is “for social good.” Instead, whether a project may bring about social good must be determined by those who live within the context of the system itself; that is, the community that it will affect. This point echoes recent calls for decolonial and power-shifting approaches to AI that focus on elevating traditionally marginalized populations [5, 36, 45, 54, 55, 84].

The community-centered, context-specific conception of social good that we propose raises its own questions, such as how to reconcile multiple viewpoints.

We therefore address these concerns with an integrated framework, called the Participatory Approach to enable Capabilities in communiTies, or PACT, that allows researchers to assess “goodness” across different stakeholder groups and different projects. We illustrate PACT in Figure 1. As part of this framework, we first suggest ethical guidelines rooted in capability theory to guide such evaluations [59, 70]. We reject a view that favors courses of action solely for their aggregate net benefits, regardless of how they are distributed and what resulting injustices arise. Such an additive accounting may easily err toward favoring the values and interests of majorities, excluding traditionally underrepresented community members from the design process altogether.

Instead, we employ the capabilities approach, designed to measure human development by focusing on the substantive liberties that individuals and communities enjoy, to lead the kind of lives they have reason to value [70]. A capability-focused approach to social good is aimed at increasing opportunities for people to achieve combinations of “functionings”—that is, combinations of things they may find valuable doing or being. While common assessment methods might overlook new or existing social inequalities if some measure of “utility” is increased sufficiently in aggregate, our approach would only define an endeavor as contributing to social good if it takes concrete steps toward empowering all members of the affected community to each enjoy the substantive liberties to function in the ways they have reason to value.

We then propose to enact this conception of social good with a participatory approach that involves community members in “a process of investigating, understanding, reflecting upon, establishing, developing, and supporting mutual learning between multiple participants” [75], in order for the community itself to define what those substantive liberties and functionings should be. In other words, communities define social good in their context.

Our contributions are therefore (i) arguing that the capabilities approach is a worthy candidate for conceptualizing social good, especially in diverse-stakeholder settings (Section 3), (ii) highlighting the role that AI can play in expanding and equalizing capabilities (Section 4), (iii) explaining how a participatory approach is best suited to identify desired capabilities (Section 5), and (iv) presenting and discussing our proposed guiding principles of a participatory approach (Section 6). These contributions come together to form PACT.

2 GROWING CRITICISMS OF AI FOR SOCIAL GOOD

As a technical field whose interactions with social-facing problems are young but monumental in impact, the field of AI has yet to fully develop a moral compass. On the whole, the subfield of AI for social good is no exception.

We highlight criticisms that have arisen against AI4SG research, which serve as a call to action to reform the field. Later, we argue that a participatory approach rooted in enabling capabilities will provide needed direction to the field by letting the affected communities—particularly those who are most vulnerable—be the guide.

Recent years have seen calls for AI and computational researchers to more closely engage with the ethical implications of their work. Green [26] implores researchers to view themselves and their work through a political lens, asking not just how the systems they build will impact society, but also how even the problems and methods they choose to explore (and not explore) serve to normalize the types of research that ought to be done. Latonero [44] offers a critical view of technosolutionism as it has recently manifested in AI4SG efforts emerging from industry, such as Intel’s TrailGuard AI that detects poachers in camera trap images, which have the potential to individually identify a person. Latonero argues that while companies may have good intentions, they often lack the ability to gain the expertise and local context required to tackle complex social issues. In a related spirit, Blumenstock [7] urges researchers not to forget “the people behind the numbers” when developing data-driven solutions, especially in development contexts. De-Arteaga et al. [19] focus on a specific subset of AI4SG dubbed machine learning for development (ML4D) and similarly express the importance of considering local context to ensure that researcher and stakeholder goals are aligned.

Also manifesting recently are meta-critiques of AI4SG specifically, which contend that the subfield is vaguely defined, with troubling implications. Moore [56] focuses specifically on how the choice of the word “good” can serve to distract from potentially negative consequences of the development of certain technologies, retorting that AI4SG should be re-branded as AI for “not bad”. Malliaraki [51] argues that AI4SG’s imprecise definition hurts its ability to progress as a discipline, since the lack of clarity around what values are held or what progress is being made hinders the ability of the field to establish specific expertise. Green [27] points out that AI4SG is sufficiently vague to encompass projects aimed at police reform as well as predictive policing, and therefore simply lacks meaning. Green also argues that AI4SG’s orientation toward “good” biases researchers toward incremental technological improvements to existing systems and away from larger reformative efforts that could be better.

Others have gone further, calling for reforms in the field to say that “good” AI should seek to shift power to the traditionally disadvantaged. Mohamed et al. [55] put forth a decolonial view of AI that suggests that AI systems should be built specifically to dismantle traditional forms of colonial oppression. They provide examples of how AI can perpetuate colonialism in the digital age, such as through algorithmic exploitation via Mechanical Turk–style “ghost work” [25] or through algorithmic dispossession in which disadvantaged communities are designed for without being allowed a seat at the design table. They also offer three “tactics” for moving toward decolonial AI, namely: (1) a critical technical practice to analyze whether systems promote fairness, diversity, safety, and mechanisms of anticolonial resistance; (2) reciprocal engagements that engender co-design between affected communities and researchers; and (3) a change in attitude from benevolence to solidarity, which again necessitates active engagement with communities and grassroots organizations. In a similar spirit, Kalluri [36] calls for researchers to critically analyze the power structures of the systems they design for and consider pursuing projects that empower the people they are intended to help. For example, researchers may seek to empower those represented in the data that enables the system, rather than the decision-maker who is privileged with a wealth of data.

This could be accomplished by designing systems for users to audit or demand recourse based on an AI system’s decision [36].

In tandem with these criticisms, there have been corresponding efforts to define AI4SG. Floridi et al. [22] provide a report of an early initiative to develop guidelines in support of AI for good. Therein, they highlight risks and opportunities for such systems, outline core ethical principles to be considered, and offer several recommendations for how AI efforts can be given a “firm foundation” in support of social good. Floridi et al. [23] later expand on this work by proposing a three-part account, which includes a definition, a set of guiding principles, and a set of “essential factors for success.” They define AI4SG as “the design, development, and deployment of AI systems in ways that (i) prevent, mitigate, or resolve problems adversely affecting human life and/or the well-being of the natural world, and/or (ii) enable socially preferable and/or environmentally sustainable developments.” Note that this disjunctive definition captures a broad spectrum of projects with widely diverging outcomes, leaving open still the possible critiques discussed above. However, their principles and essential factors contribute to establishing rough guidelines for projects in the field, as others have done [49, 81] in lieu of seeking a definition.

3 THE CAPABILITIES APPROACH
Clearly, AI for social good is in need of a stricter set of ethical criteria and stronger guidance. To understand what steps take us in the direction of a more equitable, just, and fair society, we must find the right conceptual tools and frameworks. Utilitarian ethics, a widely referenced framework, adopts the aggregation of utility (broadly understood) as the sole standard to determine the moral value of an action [68, 80].

According to classic utilitarianism, the right action to take in any given context is whichever action maximizes utility for society. This brand of utilitarianism seems particularly attractive in science-oriented circles, due to its claim that a moral decision can be made by simply maximizing an objective function. For example, in the social choice literature, Bogomolnaia et al. [8] argue for the use of utilitarianism as “it is efficient, strategyproof and treats equally agents and outcomes.” However, the apparent transparency of the utilitarian argument obscures its major shortcomings. First, like the problem of class imbalance in machine learning, a maximizing strategy will bias its objective towards the majority class. Hence, effects on minority and marginalized groups may be easily overlooked. Second, which course of action can maximize utility for society overall cannot be determined by simple cost–benefit analyses. Any such procedure involves serious trade-offs, and a moral decision requires that we acknowledge tensions and navigate them responsibly.
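To make the first shortcoming concrete, consider a minimal sketch (the utilities, group sizes, and policy names are entirely hypothetical, chosen only to illustrate the arithmetic): a rule that maximizes population-weighted aggregate utility can select a policy that strictly harms the minority group, while a distribution-sensitive rule, such as maximizing the outcome of the worst-off group, selects differently.

```python
# Illustrative only: hypothetical per-group utilities for two candidate
# policies, with a 90% majority and a 10% minority.
policies = {
    "A": {"majority": +2.0, "minority": -5.0},  # harms the minority
    "B": {"majority": +1.0, "minority": +1.0},  # benefits everyone
}
weights = {"majority": 0.9, "minority": 0.1}

def aggregate_utility(effects):
    """Classic utilitarian score: population-weighted sum of utilities."""
    return sum(weights[g] * u for g, u in effects.items())

def worst_off(effects):
    """A distribution-sensitive score: utility of the worst-off group."""
    return min(effects.values())

print(max(policies, key=lambda p: aggregate_utility(policies[p])))  # -> "A" (1.3 vs. 1.0)
print(max(policies, key=lambda p: worst_off(policies[p])))          # -> "B" (+1.0 vs. -5.0)
```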

To take a recent example, the development of digital contact tracing apps in the wake of the COVID-19 pandemic represented a massive potential benefit for public health, yet posed a serious threat to privacy rights and all the values that privacy protects—including freedom of thought and expression, personal autonomy, and democratic legitimacy. Which course of action will maximize utility? The answer is not just controversial: what verdict is reached will necessarily depend on which fundamental choices and liberties individuals value most, and which they are willing to forgo. Furthermore, for some groups the losses may be more significant than for others, and this fact requires due consideration.

A related, though distinct, standard might put aside the ambition of maximizing good and adopt some threshold condition instead. As long as it generates a net improvement over the status quo, someone might say, an action can be said to promote social and public good. This strategy raises three additional problems: first, utility aggregates are insensitive to allocation and distribution; second, they are not sensitive to individual liberties and fundamental rights; and third, to the extent that they consider stakeholders’ preferences, they do not take into account the idea that individuals’ preferences depend on their circumstances, and that in better circumstances they could have different preferences [70]. Why should this matter? Because attempts to increase utility may be too easily signed off as progress while inflicting serious harm and contributing to the entrenchment of existing inequality.

For this reason, we believe that respect for liberty, fairness of distribution, and sensitivity to interpersonal variations in utility functions should serve as moral constraints, to channel utility increments towards more valuable social outcomes. Operating within these constraints suggests shifting the focal point from utility to a different standard. In the spirit of enabling capabilities, we suggest that such a standard should be based on an understanding of the kinds of lives individuals have reason to value and of how the distribution of resources, in combination with environmental factors, may create (or eliminate) opportunities for them to actualize these lives, and on designing projects that create fertile conditions for these opportunities to be secured.

This reorientation of the moral assessment framework is in line with a view of AI as a tool to shift power to individuals and groups in situations of disadvantage, as advocated by Kalluri [36]. What this shift means, in operational terms, is not yet fully elucidated. We therefore hope to contribute to this endeavor by exploring conceptual tools that may guide developers in the process, providing intermediate landmarks. We consider “capabilities” to be suitable candidates for this task, as they constitute a step in the direction of empowering disadvantaged individuals to decide what, in their view, that shift should consist of.

The capabilities approach, which stands at the intersection of ethics and development economics, was originally proposed by Amartya Sen and Martha Nussbaum [60, 70]. The approach focuses on the notion that all human beings should have a set of substantive liberties that allow them to function in society in the ways they choose [60]. These substantive liberties include both freedom from external obstruction and the external conditions conducive to such functioning. The capability set includes all of the foundational abilities a human being ought to have to flourish in society, such as freedom of speech, thought, and association; bodily health and integrity; and control over one’s political and material environment, among others. The capabilities approach has inspired the creation of the United Nations Human Development Index (Figure 2), marking a shift away from utilitarian evaluations such as gross domestic product to people-centered policies that prioritize substantive capabilities [82].

A capability-oriented AI does not rest content with increasing aggregate utility and respecting legal rights.


Figure 2: The United Nations Human Development Index (HDI) was inspired by the capabilities approach, which serves as the foundation of the participatory approach that we propose. The above map from Wikimedia Commons shows the HDI rankings of countries around the world, where darker color indicates a higher index.

It asks itself what it is doing, and how it interacts with existing social realities, to enhance or reduce the opportunities of the most vulnerable members of society to pursue the lives they have reason to value. It asks whether the social, political, and economic environment that it contributes to create deprives individuals of those capabilities or fosters the conditions in which they may enjoy equal substantive liberties to flourish. In this framework, a measure of social good is how much a given project contributes to bolster the enjoyment of substantive liberties, especially by members of marginalized groups. This kind of measure is respectful of individual liberties, sensitive to questions of distribution, and responsive to interpersonal variations in utility.

Before discussing the relation between AI and capabilities, we would like to dispel a potential objection to our framework. The capabilities approach has been criticized for not paying sufficient attention to groups, institutions, and social structures, thus rendering itself unable to account for power imbalances and dynamics [30, 40]. This characteristic would make the approach unappealing for a field that seeks to address injustices in the distribution of power. However, the approach does indeed place substantial emphasis on conceptualizing the factors that affect (particularly those which may enhance) individuals’ power. Its emphasis on substantial opportunities is itself a way of conceptualizing what individuals have the power to do or to be. These powers are determined in large part by the broader social environment, which includes group membership, inter-group relations, institutions, and social practices. Furthermore, the approach explicitly focuses on enhancing the power of the most vulnerable members of society. In fact, the creation and enhancement of capabilities constitute intermediate steps on the way towards shifting power, promoting the welfare and broadening the liberties of those who have been historically deprived. For this reason, it provides a fruitful framework to assess genuine moral progress and a promising tool for AI projects seeking to promote social good.

Light discussion of the capabilities approach has begun inside the AI community. Moore [56] references the capabilities approach to call for greater accountability and individual control of private data. Coeckelbergh [16] proposes taking a capabilities approach to health care, particularly when using AI assistance to replace human care, but focuses on Nussbaum’s proposed set of capabilities rather than eliciting desired capabilities from affected patients. We significantly build on these claims by discussing the potential of AI to enhance capabilities and argue that the capabilities approach goes hand-in-hand with a participatory approach. Two papers specifically ground their work in the capabilities approach; we highlight these as exemplars for future work in AI4SG. Thinyane and Bhat [79] invoke the capabilities approach, specifically to empower the agency of marginalized groups, to motivate their development of a mobile app to identify victims of human trafficking in Thai fisheries. Kumar et al. [42] conduct an empirical study of women’s health in India through the lens of Nussbaum’s central human capabilities.

4 AI AND CAPABILITIES
The capabilities approach may serve AI researchers and developers to assess the potential impact of projects and interventions along two dimensions of social progress: capability expansion and distribution. This section argues that, when they interact productively with other social factors, AI projects can contribute to equalizing and enhancing capabilities—and that therein lies their potential to bring about social good. For instance, an AI-based Visual Question Answering system can enhance the capabilities of the visually impaired to access relevant information about their environment [38].

When considering equalizing and enhancing capabilities, it is important to notice that whether an individual or group enjoys a set of capabilities is not solely a matter of having certain personal characteristics or being free from external obstruction. External conditions need also be conducive to enable individuals’ choices among the set of alternatives that constitutes the capability set [60]. This is why these liberties are described as substantial, as opposed to formal or negative. Hence, capabilities, or substantive liberties, are composed of (1) individuals’ personal characteristics (including skills, conditions, and traits), (2) external resources they have access to, and (3) the configuration of their wider material and social environment [34].
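As a purely schematic illustration of this three-part decomposition (the class and field names below are our own invention, not notation from the capabilities literature), a capability can be modeled as substantive only when all three components jointly support it:

```python
from dataclasses import dataclass, field

@dataclass
class CapabilityContext:
    """Schematic model of the three components that jointly create a
    capability set, following the decomposition attributed to [34]."""
    internal_characteristics: set = field(default_factory=set)  # skills, conditions, traits
    external_resources: set = field(default_factory=set)        # goods and tools available
    environment: set = field(default_factory=set)               # material & social conditions

def capability_enabled(ctx, needed_traits, needed_resources, needed_conditions):
    """A capability is substantive only if all three components support it."""
    return (needed_traits <= ctx.internal_characteristics
            and needed_resources <= ctx.external_resources
            and needed_conditions <= ctx.environment)

# Hypothetical example: the capability to access health information.
ctx = CapabilityContext(
    internal_characteristics={"literacy"},
    external_resources={"phone"},
    environment={"network_coverage"},
)
print(capability_enabled(ctx, {"literacy"}, {"phone"}, {"network_coverage"}))  # True
```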

Prior work at the intersection of technology and capabilities has addressed the potential of technology to empower users by enhancing their capabilities and choice sets. Johnstone [34] proposes the capabilities approach as a key framework for computer ethics, suggesting that technological objects may serve as tools or external resources that enhance individuals’ range of potential action and choice [17, 39]. This is an important role that AI-based technologies may play in enhancing capabilities: providing tools for geographic navigation, efficient communications, accurate health diagnoses, and climate forecasts. Technology may also foster the development of new skills, abilities, and traits that broaden an individual’s choice sets. We diagram in Figure 3 the relationship between an AI4SG project and an individual’s characteristics, resources, and environments, and thus the potential of AI to alter (both positively and negatively) capability sets.

Technological objects may also become part of social structures, forming networks of interdependencies with people, groups, and other artifacts [62, 85]. When focusing on AI-based technologies, it is crucial to also acknowledge their potential to affect the social and material environment, which may render it more or less conducive to securing capability sets for individuals and communities.


Figure 3: AI for social good research projects can expand and equalize capabilities by fostering individuals’ internal characteristics, providing external resources, or altering the wider material and social environment, thus creating capability sets to improve social welfare and fight injustice.

For example, AI-based predictive policing may increase the presence of law enforcement agents in specific areas, which may in turn affect the capability sets of community members residing in those areas. If this presence increases security from, say, armed robbery, community members may enjoy more liberties than they previously did. And yet, if this acts to reinforce the disparate impact of law enforcement on members of vulnerable communities, along with other inequalities, then it may not only diminish the substantive liberties of those individuals impacted, but also alter the way in which such liberties are distributed across the population.

Assessing this kind of trade-off ought to be a crucial step in evaluating the ability of a particular project to promote social good. This assessment must be aligned with the kinds of choices and opportunities community members have reasons to pursue [72]. A project should not be granted the moral standing of being “for social good” if it leads to localized utility increments at the expense of reducing the ability of members of the larger community to choose the lives they value. Most importantly, this assessment must be made by community members themselves, as the following section argues.

5 COMMUNITY PARTICIPATION
As Ismail and Kumar [32] contend, communities should ultimately be the ones to decide whether and how they would like to use AI. If the former condition is met and the community agrees that an AI solution may be relevant and useful, the latter requires the inclusive design of an AI system through a close and continued relationship between the AI researcher and those impacted.

This close partnership is particularly important as the gross effects of AI-based social interventions on communities’ capabilities are unlikely to be exclusively positive. Tradeoffs may be forced on designers and stakeholders; some are likely to be intergroup, and some, intragroup. For this reason, it is only consistent with the proposed approach that designers consult stakeholders from all groups on their preferred ways to navigate such tradeoffs. In other words, if PACT is focused on creating capabilities, it must enable impacted individuals to have a say on what alternatives are opened (or closed) to them.

Moreover, AI projects that incorporate stakeholders’ choices into their design process may contribute to create what Wolff and De-Shalit [86] refer to as “fertile functionings”: that is, functionings that, when secured, are likely to secure other functionings. Fertile functionings include, though are not limited to, the ability to work and the ability to have social affiliations. These are the kinds of functionings that either enable other functionings (e.g., control over one’s environment) or reduce the risk associated with them.

Projects in AI that create propitious environments and enable individuals to make decisions over the capability sets they value in turn give those individuals the capability to function in a way that leads to the creation of other capabilities. If this kind of participatory space is offered to those who are the most vulnerable, AI could plausibly act against disadvantage and contribute to shifting the distribution of power. In this way, the PACT framework is aligned with the principles of Design Justice in prioritizing the voices of affected communities and viewing positive change as a result of an accountable collaborative process [18].

The PACT framework is also committed to the notion that, when applying capabilities to AI, we must include all stakeholders, especially vulnerable members of society. But how do we accomplish this concretely, and how does this affect the different steps in an AI4SG research project? In the following section, we will first introduce suggested mechanisms of a participatory approach rooted in capabilities, and then discuss how these come into play in (1) determining which capability sets to pursue at the beginning of a project, and (2) evaluating the success of an AI4SG system in terms of its impact, particularly on groups’ sets of substantive liberties.

The approach we propose is diagrammed in Figure 4, which embeds community participation into each stage. By beginning first with defining capability sets, we resist the temptation to immediately apply established tools to address ingrained social issues, which would only restrict the possibilities for change [48]. These participatory approaches constitute bottom-up approaches for embedding value into AI systems, by learning values from human interaction and engagement rather than being decided through the centralized and non-representative lens of AI researchers [46].


Figure 4: Our proposed approach to AI4SG projects, where the key is that stakeholder participation is centered throughout. We do not explicitly comment on the design and development of the project other than calling for the inclusion of community participation and iterative evaluation of success, in terms of whether the project realized the desired capability sets.


6 GUIDING PRINCIPLES FOR A PARTICIPATORY APPROACH

To direct the participatory approach we propose, we provide the following set of guiding questions, outlined in Figure 5 and elaborated on below. These questions are specifically worded to avoid binary answers of yes or no. When vaguely stated requirements are put forth as list items to check off, there is a risk of offering a cheap seal of approval on projects that aren’t sufficiently assessed in terms of their ethical and societal implications. Instead, we expect that in nearly all cases, none of these questions will be fully resolved. Thus, these questions are meant to serve as a constant navigation guide to help researchers regularly and critically reassess their intended work.

We begin our discussion of how a participatory approach to AI4SG would look with our first guiding question:

How are impacted communities identified and how are they represented in this process? Who represents historically marginalized groups?

This question is one of the most important and potentially difficult. As such, we encourage readers to research their domain of interest and seek partners as an important first step, but offer some advice from our experience. We often consider seeking partners at non-profits, community-based organizations, and/or (non-)governmental organizations (NGOs), and have even had some experience with non-profits or NGOs finding us. Finding these groups as an AI researcher may be done with the help of an institution’s experts on partnerships (e.g., university or company officials focused on forming external partnerships), prior collaborators or acquaintances, discussions with peers within an institution in different departments (e.g., public health or ecology researchers), or “cold emails.” Young et al. [87] provide further guiding questions for finding these partners, particularly in their companion guide [50]. In any case, some vetting and research is an important step.

AI researchers may also co-organize workshops, such as a pair of “AI vs. tuberculosis” workshops held by some of the authors in Mumbai, India, which brought together domain experts, non-profits, state and local health officials, industry experts, and researchers, identified by Mumbai-based collaborators who specialize in building relationships across health-focused organizations in India. The varied backgrounds of the stakeholders at these workshops were conducive to quickly identifying areas of greatest need across the spectrum of tuberculosis care in India, many of which would not have been considered by AI researchers alone. Further, these group forums sparked conversations between stakeholders that often would not otherwise communicate, but that led to initial ideas for solutions. This highlights that such approaches are often most successful when AI researchers move out of their usual spheres and into the venues of the domains that their projects impact.

At the inception of a project, the designers and initial partners should together identify all relevant stakeholders that will be impacted by the proposed project, bringing them on as partners or members of a (possibly informal) advisory panel. Every effort should be made to include at least one member from each impacted group. If stakeholders were initially omitted and are later identified, they should then be added. Initial consultation with panel members and partners should be carried out in a way that facilitates open and candid discussion to ensure all voices are represented and accounted for during the project design phase [87]. We additionally propose that during the lifetime of the project, the panel and partners should, if possible, be available for ongoing consultation during mutually agreed upon checkpoints to ensure the project continues to align with stakeholder values. Similar practices have been suggested by Perrault et al. [66], who advocate for close, long-term collaborations with domain experts. Likewise, the Design Justice network pursues non-exploitative and community-led design processes, in which designers serve more as facilitators than experts. This framework is modeled on the idea of a horizontal relation in which affected communities are given the role of domain experts, given their lived experience and direct understanding of projects’ impact [18].

How are plans laid out for maintaining and sustaining this work in the long term, and how would the partnership be ended?

We believe partnership norms should be established at the beginning of the partnership, including ensuring that the community representatives and experiential experts have the power to end the project and partnership, in addition to the designers and other stakeholders. As noted by Madaio et al. [49], this should also necessitate an internal discussion amongst the research team to decide what, if any, criteria would be grounds for such a cessation, e.g., if it becomes clear that the problem at hand requires methods in which the research team does not have sufficient expertise. Discussions with the broader partners should also lay out the expected time scale and potential benefits and risks for all involved.

Once the relationship is underway, it should be maintained as discussed, with regular communication. When a project reaches the point where a potential deployment is possible, we advocate for an incremental deployment, such as the example of Germany’s teststrecken for self-driving cars, which sets constraints for autonomy, then expands these constraints once certain criteria are met [23].


Figure 5: Guiding principles for the participatory approach we propose. The key message is to center the community’s desired capabilities throughout, and to ensure that the goals of the project are properly aligned to do so without negatively impacting other capabilities.

What kind of compensation are stakeholders receiving for their time and input? Does that compensation respect them as partners?

Stakeholders must be respected as experiential or domain experts and be considered as partners. They must be compensated in some way, whether monetarily or otherwise. We advocate for compensation to avoid establishing an approach made under the guise of participatory AI that ends up being extractive [63]. One notable drawback to a participatory approach is the potential for these community interactions to become exploitative if not done with intention, compensation, or long-term interactions [76]. Collaborators who have significantly influenced the design or implementation of a project ought to be recognized as coauthors or otherwise acknowledged in resulting publications, presentations, media coverage, etc.

How can we understand and incorporate viewpoints from many groups of stakeholders?

Surveys or voting may seem a natural choice. However, a simple voting mechanism is risky, as it may find that a majority of the community favors, for example, building a heavily polluting factory near a river, while the impacted community living at the proposed site would object that this factory would severely degrade their quality of life. These concerns from the marginalized group must be given due weight. This emphasis on the welfare of marginalized groups is based on the premise that the evaluation of human capabilities must consider individual capabilities [59]. In short, all individual capabilities are valuable as ends in their own right; they should never be considered means to someone else’s rights or welfare. We must, therefore, guard against depriving individuals of basic entitlements as a means to enhancing overall welfare.
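The factory example can be made concrete with a small sketch (the vote counts and the per-group approval rule are hypothetical, offered only to show the failure mode): a raw majority approves the project even though the group that bears the harm overwhelmingly objects.

```python
# Illustrative only: hypothetical approval votes from two stakeholder groups.
votes = {
    "wider_community": {"yes": 800, "no": 200},
    "riverside_community": {"yes": 10, "no": 90},  # bears the pollution
}

def simple_majority(votes):
    """A raw tally across all groups: the burdened minority is outvoted."""
    yes = sum(v["yes"] for v in votes.values())
    no = sum(v["no"] for v in votes.values())
    return yes > no

def approved_in_every_group(votes, threshold=0.5):
    """One of many possible distribution-sensitive rules: each impacted
    group must itself approve, so a burdened minority cannot be outvoted."""
    return all(v["yes"] / (v["yes"] + v["no"]) > threshold for v in votes.values())

print(simple_majority(votes))           # True: the factory passes a raw vote
print(approved_in_every_group(votes))   # False: the riverside community objects
```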

Hence, we endorse a deliberative approach, which aims to uncover “overlapping consensus” [67]. This approach is based on the expectation that, in allowing diverse worldviews, value systems, and preference sets to engage in conversation, with appropriate moderation and technical tools, social groups may find a core set of decisions that all participants can reasonably agree with. Deliberative approaches to democracy have been operationalized by programs such as vTaiwan, which uses digital technology to inform policy by building consensus through civic engagement and dialogue [31]. vTaiwan uses the consensus-oriented voting platform Pol.is, which seeks to identify a set of common values upon which to shape legislation, rather than arbitrating between polarized sides [78]. Similarly, OPPi serves as a platform for consensus building through bottom-up crowd-sourcing [47]. This tool is tailored for opinion sharing and seeking, oriented towards finding common ground among stakeholders.

An alternative to fully public deliberative processes is targeted efforts such as Diverse Voices panels [87], which focus on including traditionally marginalized groups in existing policy-making processes. Specifically, they advocate for informal elicitation sessions with partners, asking questions such as, “What do you do currently? What would support your work?” [37]. They also suggest considering whether tech should be used in the first place and highlight that a key challenge of participatory design is to determine a course of action if multiple participants disagree. We add that it remains a challenge to assemble these panels.

Simonsen and Robertson [75] provide multiple strategies as well, particularly with the goal of coming to a common language and fostering communication between groups of experts from different backgrounds.


They suggest strategies to invite discussion, such as games, acting out design proposals, storytelling, group brainstorms of what a perfect utopian solution would look like, participatory prototyping, and probing. In the case of probing, for example, one strategy was to provide participants with a cultural probe kit, consisting of items such as a diary and a camera, in order to understand people’s reactions to their environments [24].

Further, several fields in artificial intelligence have devoted a great deal of thought to learning and aggregating preferences: preference elicitation [12], which learns agent utility functions; computational social choice [9], which deals with truthful agents; and mechanism design [74], which deals with strategic agents. As an example, Kahng et al. [35] form what they call a “virtual democracy” for a food rescue program. They collect data about preferences on which food pantry should receive food, then use these preferences to create and aggregate the preferences of virtual voters. The goal is to make ethical decisions without needing to reach out to stakeholders each time a food distribution decision needs to be made.
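The following sketch illustrates the virtual-voter idea in miniature (the stand-in utility functions and pantry attributes are hypothetical; Kahng et al. [35] learn their voters’ preference models from elicited data rather than writing them by hand):

```python
# A minimal sketch of a "virtual democracy": each stakeholder is represented
# by a stand-in utility function over options, and each new decision is made
# by tallying the virtual voters' top choices rather than re-polling people.
from collections import Counter

virtual_voters = [
    lambda pantry: pantry["need"],                                # prioritizes need
    lambda pantry: -pantry["distance_km"],                        # prioritizes proximity
    lambda pantry: pantry["need"] - 0.1 * pantry["distance_km"],  # a mix of both
]

pantries = [
    {"name": "North", "need": 0.9, "distance_km": 12.0},
    {"name": "South", "need": 0.6, "distance_km": 2.0},
]

def decide(voters, options):
    """Plurality over virtual voters: each votes for its highest-utility option."""
    ballots = Counter(max(options, key=v)["name"] for v in voters)
    return ballots.most_common(1)[0][0]

print(decide(virtual_voters, pantries))  # -> "South" (two of three virtual voters)
```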

These fields have highlighted theoretical limitations in preference aggregation, such as Arrow’s impossibility theorem [4], which reveals limitations in the ability to aggregate preferences with as few as three voters, but these results do not necessarily inhibit us from designing good systems in practice [53, 71]. Such areas provide rich directions for future research, particularly at the intersection of participatory methods and social science research.

What specific concerns are raised during the deliberative process, and how are these addressed?

Diverse Voices panels [87] with experiential experts (both from non-profits and from within the community) may also be used to identify potential issues by asking questions such as “What mistakes could be made by decision makers because of how this proposal is currently worded?” and, “What does the proposal not say that you wish it said?” Other strategies we discussed for understanding multiple viewpoints may also have a role to play here when tailored towards sharing concerns. However, we also stress the importance of ongoing partnerships with impacted community members beyond just an initial session. By ensuring that the community has a voice throughout the project’s lifetime, their values are always kept front-and-center. As others have argued, it is not sufficient to interview impacted communities once simply to “check the participatory box” [76].

6.1 Determining Capability Sets
Now, equipped with some guiding questions and concrete examples of a participatory approach, we will apply these principles directly to selecting capability sets at the outset of an AI4SG project. To identify what capabilities to work towards, some scholars such as Nussbaum [59] have proposed well-defined sets of capabilities and argued that society ought to focus on enabling those capabilities. However, we endorse the view that the selection of an optimal set of capabilities should be based on community consensus [70]. We argue that an AI project can only be for social good if it is responsive to the values of the communities affected by the AI system.

What functionings do members of the various stakeholder groups wish they could achieve through the implementation of the project?

In the fair machine learning literature, Martin Jr. et al. [52] propose a participatory method called community-based system dynamics (CBSD) to bring stakeholders in to help formulate a machine learning problem and mitigate bias. Their method is designed to understand causal relationships, specifically feedback loops, particularly those in high-stakes environments such as health care or criminal justice, that may disadvantage marginalized and vulnerable groups. This process is intended to bring in relevant stakeholders and recognize that their lived experience makes these participants more qualified to recognize the effects of these interventions. Using visual diagrams designed by impacted community members, the CBSD method can help identify levers that may help enable or inhibit functionings, particularly for those who are most vulnerable. Similarly, Simonsen and Robertson [75] suggest the use of mock-ups and prototypes to facilitate communication between experiential experts and developers. Other strategies discussed for understanding multiple viewpoints may also apply if tailored towards determining capabilities.

What functionings are the priority for those most vulnerable? Is there an overlap between their priorities and the goals of other stakeholders?

We need to pay attention to those who are most vulnerable. The capabilities approach may be leveraged to fight inequality by thinking in terms of capability equalizing. To identify capabilities that are not yet available to marginalized members of a community, we must listen to their concerns and ensure those concerns are prioritized, for example via strategies proposed in our discussions on finding those impacted by AI systems and including multiple viewpoints.

As an example of the consequences of failing to include those most vulnerable throughout the lifetime of an AI system, consider one of the initial steps: data collection. Women are often ignored in datasets and therefore their needs are underreported [20]. For example, crash-test dummies were designed to resemble the average male, and vehicles were evaluated to be safe based on these male dummies—leaving women 47% more likely to be seriously injured in a crash [65]. These imbalances are often also intersectional, as Buolamwini and Gebru [10] demonstrate by revealing stark racial and gender-based disparities in facial recognition algorithms.

Beyond the inclusion of all groups in datasets, data must be properly stratified to expose disparities. During the COVID-19 pandemic in December 2020, the Bureau of Labor Statistics reported a loss of 140,000 net jobs. The stratified data reveal that all losses were women’s: women lost 156,000 jobs while men gained 16,000, and unemployment was most severe for Black, Latinx, and Asian women [21]. Without accounting for the capabilities of all people affected by such systems, it is difficult to claim that these technologies were for social good.
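The arithmetic of this example shows why stratification matters: the two group-level figures cited above sum to the headline number, so the aggregate alone cannot reveal who lost. A few lines of Python (using only the figures as reported in [21]) make this explicit:

```python
# The December 2020 jobs figures cited above, stratified by gender.
# A toy illustration of why disaggregation matters, not a full analysis.
net_jobs_by_group = {"women": -156_000, "men": +16_000}

aggregate = sum(net_jobs_by_group.values())
print(f"aggregate net change: {aggregate:+,}")  # -140,000: hides the split

for group, change in net_jobs_by_group.items():
    print(f"{group}: {change:+,}")  # women: -156,000 / men: +16,000
```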

On the other hand, prioritizing the needs of the most marginalized groups may at times offer an accelerated path towards achieving collective goals. Project Drawdown identified and ranked 100 practical solutions for stopping climate change [29]. Number 6 on its list was the education of girls, recognizing that women with higher levels of education marry later and have fewer children, as well as manage agricultural plots with greater yield. Another solution advocates for securing land tenure of indigenous peoples, whose stewardship of the land fights deforestation, resource extraction, and monocropping.


Are any of these capability sets fertile, in the sense of securing other capabilities?

To maximize the capabilities of various communities, we may wish to focus on capabilities that produce fertile functionings, as discussed in Section 5. Specifically, many functionings are necessary inputs to produce others; for example, achieving physical fitness from playing sports requires as input good health and nourishment [15]. Some of Nussbaum’s 10 central capabilities—including bodily integrity (security against violence and freedom of mobility) and control over one’s environment (right of political participation, to hold property, and dignified work)—may be viewed as fertile [59]. Reading, for instance, may secure the capability to work, associate with others, and have control over one’s environment. AI has the potential to help achieve many of these capabilities. For example, accessible crowdwork, done thoughtfully, offers the opportunity for people with disabilities to find flexible work without the need for transit [88].

How closely do the values of the project match those of the community as opposed to the designers? How does the focus of the project respond to their expressed needs and concerns? Does the project have the capacity to respond to those needs?

Consider the scenario where a community finds it acceptable to hunt elephants, while the designers are trying to prevent poaching. There could be agreement on high-level values, such as public health, but disagreement on whether to prioritize specific interventions to promote public health. There could even be a complete lack of interest in the proposed AI system by the community.

At an early stage of a project, AI researchers need to facilitate a consultation method to understand communities’ values and choices. Note that this process could allow stark differences in priority between stakeholders to surface before the project starts, so that researchers do not over-invest in a project that will later be terminated because of differences in values. We may consider several of the strategies we discussed previously in this case, such as deliberative democratic processes, Diverse Voices panels, or computational social choice methods. It may subsequently be necessary to end the project if these strategies do not work.

6.2 Evaluating AI for Social Good
Once AI researchers and practitioners have a system tailored to these capabilities, we believe that communities should be the ones to judge the success of this new system. This stage of the process may pose additional challenges, given the difficulty of measuring capabilities [34]. Though we do not here endorse any measurement methodology, various attempts to operationalize the capabilities approach give us confidence that such methodologies are feasible and may be implemented in the course of evaluating AI projects [3].

First and foremost, we maintain that the evaluation of success should be done throughout the lifecycle of the AI4SG project (and beyond) as discussed above, especially via community feedback. However, we wish to emphasize that we as AI researchers need to keep capabilities in mind as we evaluate the success of AI4SG projects to avoid “unintended consequences” and a short-sighted focus on improved performance on metrics such as accuracy, precision, or recall.

How does the new AI4SG system affect all stakeholders' capabilities, particularly those selected at the start?

This question is related to the literature on measuring capabilities [34], and is therefore difficult to answer. We aim to provide a few examples, though they may not apply well to every AI4SG system, nor do they form an exhaustive list of valid techniques. First, based on the idea of AI as a diagnostic to measure social problems [1], we may be able to (partially) probe this question using data. Sweeney [77] shows through ad data that advertisements related to arrest records were more likely to be displayed when searching for "Black-sounding" names, which may affect a candidate's employment capability, for example. Obermeyer et al. [61] analyze predictions, model inputs and parameters, and true health outcome data to show that the capability of health is violated for Black communities, who are included in certain health programs less frequently than white communities due to a biased algorithm.
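
To make this kind of data-driven probe concrete, the sketch below compares, on synthetic records, the rate at which two groups with comparably high need are enrolled in a program by an algorithmic rule, loosely following the style of audit performed by Obermeyer et al. [61]. The groups, thresholds, and records are all invented; a gap in rates like the one below would be a prompt for community consultation, not a verdict on its own.

```python
# Illustrative sketch only: probing whether an algorithmic screening rule
# enrolls groups into a program at different rates given similar need.
# All records below are synthetic.
from statistics import mean

# Synthetic records: (group, need_score in [0, 1], enrolled by the algorithm?)
records = [
    ("A", 0.9, True), ("A", 0.8, True), ("A", 0.7, False), ("A", 0.9, True),
    ("B", 0.9, False), ("B", 0.8, True), ("B", 0.7, False), ("B", 0.9, False),
]

HIGH_NEED = 0.75  # audit only comparably high-need individuals

def enrollment_rate(group):
    """Fraction of high-need individuals in `group` enrolled by the rule."""
    matches = [enrolled for g, need, enrolled in records
               if g == group and need >= HIGH_NEED]
    return mean(matches)  # booleans average to a rate

for group in ("A", "B"):
    print(f"group {group}: high-need enrollment rate = {enrollment_rate(group):.2f}")
```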

There could additionally be feedback mechanisms when an intervention is deployed, whether via the AI system itself or possibly by collaborating with local non-profits and NGOs. This may be especially useful in cases where these organizations are already engaged in tracking and improving key capabilities such as health outcomes, e.g., World Health Partners [14] or CARE International [33]. Again, these examples will likely not apply to all cases, which opens the door for further interdisciplinary research. However, no matter what strategy is taken, it is imperative that we continue to center communities in all attempts to measure the effects on stakeholders' capabilities.

Are other valued capabilities or functionings negatively affected as a result of the project? Are stakeholders' values and priority rankings in line with such tradeoffs?

We have a responsibility to actively think about possible outcomes; it is negligent to dismiss negative possibilities as "unintended consequences" [64]. These participatory mechanisms should thus ensure that the perspective of the most vulnerable and most impacted stakeholders is given due consideration. We recognize that this can be especially challenging, as discussed in our first guiding question for identifying impacted communities. Therefore, we further suggest that the evaluation of an AI4SG project should employ consultation mechanisms that are open to all community members throughout the implementation process, such as the feedback mechanisms suggested previously.

What should my role be as an AI researcher? As a student?

We believe that AI researchers at all levels should participate in this work, which involves all of the above points: learning about the domain to understand who the stakeholders are, holding discussions with those stakeholders, and evaluating performance. We also acknowledge that we AI researchers may not always be best suited to lead participatory efforts, and so encourage interdisciplinary collaborations between computer science and other disciplines, such as the social sciences. A strong example is the Center for Analytical Approaches to Social Innovation (CAASI) at the University of Pittsburgh, which brings together interdisciplinary teams from policy, computing, and social work [11]. However, AI researchers should not completely offload these important considerations to social science or non-profit colleagues. It should be a team effort, which we believe will bring fruitful research in social science, computer science, and other disciplines besides.

AI researchers and students can also advocate for systematic change from within, which we discuss more in depth in the conclusion. Although student researchers are limited by constraints such as research opportunities and funding, they may establish a set of moral aspirations for their work and set aside time for people-centered activities, such as mentoring and community-building [13].

7 CONCLUSION: THOUGHTS ON AI FOR SOCIAL GOOD AS A FIELD

In this paper, we lay out a community-centered approach to defining AI for social good research that focuses on elevating the capabilities of those members who are most marginalized. This focus on capabilities, we argue, is best enacted through a participatory approach that includes those affected throughout the design, development, and deployment process, and gives them ground to choose their desired capability sets as well as influence how they wish to see those capabilities realized.

We recognize that the participatory approach we lay out requires a significant investment of time, energy, and resources beyond what is typical in AI or even much existing AI4SG research. We highlight this discrepancy to urge a reformation within the AI research community: existing incentives must be reconsidered so that researchers are encouraged to pursue more socially impactful work.

Institutions have the power to catalyze change by (1) establishing requirements for community engagement in research related to public-facing AI systems; and (2) increasing incentives for researchers to meaningfully engage impacted communities while simultaneously producing more impactful research [6].

While engaging in collaborative work with communities can give rise to some technical directions of independent interest to the AI community [19], such a shift to encourage community-focused work will in part require reconsidering the evaluation criteria used when reviewing papers at top AI conferences. Greater value must be placed on papers with positive social outcomes, including those with potential for impact, if the work has not yet been deployed. Such new criteria are necessary since long-term, successful AI4SG partnerships often also lead to non-technical contributions, as well as situated programs which do not necessarily focus on generalizability [66]. We encourage conferences to additionally consider rewarding socially beneficial work with analogies to Best Paper awards, such as the "New Horizons" award from the MD4SG 2020 Workshop, and institutions to recognize impactful work, as with the Social Impact Award at the Berkeley School of Information [58].

In the meantime, we suggest that researchers look to nontraditional or interdisciplinary venues for publishing their impactful community-focused work. These venues often gather researchers from a variety of disciplines outside computer science, opening the door for future collaborations. For example, researchers could consider COMPASS, IAAI, MD4SG/EAAMO [2], the LIMITS workshop [57], and special tracks at AAAI and IJCAI, among others. Venues such as the Computational Sustainability Doctoral Consortium and CRCS Rising Stars workshop bring students together from multiple disciplines to build relationships with each other. Researchers could also consider domain workshops and conferences, such as those in ecology or public health.

The incentive structure in AI research is often stacked against thoughtful deployment. Whereas a traditional experimental section may take as little as a week to prepare, a deployment in the field may take months or years, but is rarely afforded corresponding weight by reviewers and committee members. This extended timeline weighs most heavily on PhD students and untenured faculty, who are evaluated on shorter timescales. We should thus reward both incremental and long-term deployment, freeing researchers from the pressure to rush to deployment before an approach is validated and ready.

In addition to the need for bringing stakeholders into the design process of AI research, we must ensure that all communities are welcomed as AI researchers as well. Such an effort could counteract existing disparities and inequities within the field. For example, as is the case in other academic disciplines, systemic anti-Blackness is ingrained in the AI community, with racial discrepancies in physical resources such as access to a secure environment in which to focus, social resources such as access to project collaborators or referrals for internships, and measures such as the GRE or teacher evaluations [28]. Further, as of 2018, only around 20% of tenure-track computer science faculty were women [69]. To combat these inequities, people across the academic ladder may actively work to change who they hire, with whom they collaborate, including collaborations with minority-serving institutions [41], and how much time they spend on service activities to improve diversity efforts [28].

The above reformations could contribute greatly to making the use of participatory approaches the norm in AI4SG research, rather than the exception. PACT, we argue, is a meaningful new way to answer the question: "what is AI for social good?" We, as AI researchers dedicated to the advancement of social good, must make a PACT with communities to find our way forward together.

ACKNOWLEDGMENTS

We would like to thank Jamelle Watson-Daniels and Milind Tambe for valuable discussions. Thank you to all those we have collaborated with throughout our AI4SG journey for their partnership, wisdom, guidance, and insights. This work was supported by the Center for Research on Computation and Society (CRCS) at the Harvard John A. Paulson School of Engineering and Applied Sciences.

REFERENCES

[1] Rediet Abebe, Solon Barocas, Jon Kleinberg, Karen Levy, Manish Raghavan, and David G Robinson. 2020. Roles for computing in social change. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 252–260.
[2] Rediet Abebe and Kira Goldner. 2018. Mechanism design for social good. AI Matters 4, 3 (2018), 27–34.
[3] Paul Anand, Graham Hunter, Ian Carter, Keith Dowding, Francesco Guala, and Martin Van Hees. 2009. The development of capability indicators. Journal of Human Development and Capabilities 10, 1 (2009), 125–152.
[4] Kenneth J Arrow. 1950. A difficulty in the concept of social welfare. Journal of Political Economy 58, 4 (1950), 328–346.
[5] Abeba Birhane and Olivia Guest. 2020. Towards decolonising computational sciences. Women, Gender and Research (2020).
[6] Emily Black, Joshua Williams, Michael A. Madaio, and Priya L. Donti. 2020. A call for universities to develop requirements for community engagement in AI research. In The Fair and Responsible AI Workshop at the 2020 CHI Conference on Human Factors in Computing Systems.
[7] Joshua Blumenstock. 2018. Don't forget people in the use of big data for development. Nature 561 (2018), 170–172.
[8] Anna Bogomolnaia, Hervé Moulin, and Richard Stong. 2005. Collective choice under dichotomous preferences. Journal of Economic Theory 122, 2 (2005), 165–184.
[9] Felix Brandt, Vincent Conitzer, Ulle Endriss, Jérôme Lang, and Ariel D Procaccia. 2016. Handbook of Computational Social Choice. Cambridge University Press.
[10] Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency. PMLR, 77–91.
[11] CAASI. 2021. Center for Analytical Approaches to Social Innovation. http://caasi.pitt.edu/
[12] Urszula Chajewska, Daphne Koller, and Ronald Parr. 2000. Making rational decisions using adaptive utility elicitation. In Proceedings of the Seventeenth National Conference on Artificial Intelligence (AAAI-00). 363–369.
[13] Alan Chan. 2020. Approaching Ethical Impacts in AI Research: The Role of a Graduate Student. In Resistance AI Workshop at NeurIPS.
[14] Annapurna Chavali. 2011. World Health Partners. Hyderabad, India: ACCESS Health International Centre for Emerging Markets Solutions, Indian School of Business (2011).
[15] David A Clark. 2005. Sen's capability approach and the many spaces of human well-being. The Journal of Development Studies 41, 8 (2005), 1339–1368.
[16] Mark Coeckelbergh. 2010. Health care, capabilities, and AI assistive technologies. Ethical Theory and Moral Practice 13, 2 (2010), 181–190.
[17] Julie E Cohen. 2012. Configuring the Networked Self: Law, Code, and the Play of Everyday Practice. Yale University Press.
[18] Sasha Costanza-Chock. 2020. Introduction: #TravelingWhileTrans, Design Justice, and Escape from the Matrix of Domination. In Design Justice (1 ed.). https://design-justice.pubpub.org/pub/ap8rgw5e
[19] Maria De-Arteaga, William Herlands, Daniel B Neill, and Artur Dubrawski. 2018. Machine learning for the developing world. ACM Transactions on Management Information Systems (TMIS) 9, 2 (2018), 1–14.
[20] Catherine D'Ignazio and Lauren F Klein. 2020. Data Feminism. MIT Press.
[21] Claire Ewing-Nelson. 2021. All of the Jobs Lost in December Were Women's Jobs. National Women's Law Center Fact Sheet (2021).
[22] Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, et al. 2018. AI4People—an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines 28, 4 (2018), 689–707.
[23] Luciano Floridi, Josh Cowls, Thomas C King, and Mariarosaria Taddeo. 2020. How to design AI for social good: Seven essential factors. Science and Engineering Ethics 26, 3 (2020), 1771–1796.
[24] Bill Gaver, Tony Dunne, and Elena Pacenti. 1999. Design: Cultural probes. Interactions 6, 1 (1999), 21–29.
[25] Mary L Gray and Siddharth Suri. 2019. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Eamon Dolan Books.
[26] Ben Green. 2018. Data science as political action: Grounding data science in a politics of justice. Available at SSRN 3658431 (2018).
[27] Ben Green. 2019. "Good" isn't good enough. In AI for Social Good Workshop at NeurIPS.
[28] Devin Guillory. 2020. Combating anti-Blackness in the AI community. arXiv preprint arXiv:2006.16879 (2020).
[29] Paul Hawken. 2017. Drawdown: The Most Comprehensive Plan Ever Proposed to Reverse Global Warming. Penguin.
[30] Marianne Hill. 2003. Development as empowerment. Feminist Economics 9, 2-3 (2003), 117–135.
[31] Yu-Tang Hsiao, Shu-Yang Lin, Audrey Tang, Darshana Narayanan, and Claudina Sarahe. 2018. vTaiwan: An empirical study of open consultation process in Taiwan. SocArXiv (2018).
[32] Azra Ismail and Neha Kumar. 2021. AI in Global Health: The View from the Front Lines. In CHI Conference on Human Factors in Computing Systems.
[33] Barney Jeffries. 2017. CARE International Annual Report FY17. CARE International (2017).
[34] Justine Johnstone. 2007. Technology as empowerment: A capability approach to computer ethics. Ethics and Information Technology 9, 1 (2007), 73–87.
[35] Anson Kahng, Min Kyung Lee, Ritesh Noothigattu, Ariel Procaccia, and Christos-Alexandros Psomas. 2019. Statistical foundations of virtual democracy. In International Conference on Machine Learning. 3173–3182.
[36] Pratyusha Kalluri. 2020. Don't ask if artificial intelligence is good or fair, ask how it shifts power. Nature 583, 7815 (2020), 169.
[37] Michael Katell, Meg Young, Dharma Dailey, Bernease Herman, Vivian Guetler, Aaron Tam, Corinne Bintz, Daniella Raz, and PM Krafft. 2020. Toward situated interventions for algorithmic equity: Lessons from the field. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 45–55.
[38] Jin-Hwa Kim, Soohyun Lim, Jaesun Park, and Hansu Cho. 2019. Korean Localization of Visual Question Answering for Blind People. In AI for Social Good Workshop at NeurIPS.
[39] Dorothea Kleine. 2013. Technologies of Choice?: ICTs, Development, and the Capabilities Approach. MIT Press.
[40] Christine Koggel. 2003. Globalization and women's paid work: Expanding freedom? Feminist Economics 9, 2-3 (2003), 163–184.
[41] Caitlin Kuhlman, Latifa Jackson, and Rumi Chunara. 2020. No computation without representation: Avoiding data and algorithm biases through diversity. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.
[42] Neha Kumar, Naveena Karusala, Azra Ismail, and Anupriya Tuli. 2020. Taking the long, holistic, and intersectional view to women's wellbeing. ACM Transactions on Computer-Human Interaction (TOCHI) 27, 4 (2020), 1–32.
[43] Roberta Kwok. 2019. AI empowers conservation biology. Nature 567, 7746 (2019), 133–135.
[44] Mark Latonero. 2019. Opinion: AI For Good Is Often Bad. https://www.wired.com/story/opinion-ai-for-good-is-often-bad/
[45] Jason Edward Lewis, Angie Abdilla, Noelani Arista, Kaipulaumakaniolono Baker, Scott Benesiinaabandan, Michelle Brown, Melanie Cheung, Meredith Coleman, Ashley Cordes, Joel Davison, et al. 2020. Indigenous Protocol and Artificial Intelligence Position Paper. Concordia Spectrum (2020).
[46] Q Vera Liao and Michael Muller. 2019. Enabling Value Sensitive AI Systems through Participatory Design Fictions. arXiv preprint arXiv:1912.07381 (2019).
[47] Adrian Liew, Wong Seong Cheng, Santosh Kumar, Wayne Neo, Yinxue Tay, Kim Ng, and Jennifer Lewis. Accessed 2021. The pulse of the people. https://www.oppi.live/
[48] Audre Lorde. 1984. The master's tools will never dismantle the master's house. In Sister Outsider: Essays and Speeches. Ten Speed Press.
[49] Michael A Madaio, Luke Stark, Jennifer Wortman Vaughan, and Hanna Wallach. 2020. Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–14.
[50] Lassana Magassa, Meg Young, and Batya Friedman. 2017. Diverse Voices: A method for underrepresented voices to strengthen tech policy documents. https://techpolicylab.uw.edu/wp-content/uploads/2017/10/TPL_Diverse_Voices_How-To_Guide_2017.pdf
[51] Eirini Malliaraki. 2019. What is this "AI for Social Good"? https://eirinimalliaraki.medium.com/what-is-this-ai-for-social-good-f37ad7ad7e91
[52] Donald Martin Jr., Vinod Prabhakaran, Jill Kuhlberg, Andrew Smart, and William S Isaac. 2020. Participatory problem formulation for fairer machine learning through community based system dynamics. In Eighth International Conference on Learning Representations (ICLR).
[53] Eric Maskin and Amartya Sen. 2014. The Arrow Impossibility Theorem. Columbia University Press.
[54] Sabelo Mhlambi. 2020. From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance. Carr Center Discussion Paper Series (2020).
[55] Shakir Mohamed, Marie-Therese Png, and William Isaac. 2020. Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology (2020), 1–26.
[56] Jared Moore. 2019. AI for not bad. Frontiers in Big Data 2 (2019), 32.
[57] Bonnie Nardi, Bill Tomlinson, Donald J Patterson, Jay Chen, Daniel Pargman, Barath Raghavan, and Birgit Penzenstadler. 2018. Computing within limits. Commun. ACM 61, 10 (2018), 86–93.
[58] Berkeley ISchool News. 2021. New Award Supports Social Good. https://www.ischool.berkeley.edu/news/2021/new-award-supports-social-good
[59] Martha C. Nussbaum. 2000. Introduction: Feminism and International Development. In Women and Human Development: The Capabilities Approach. Cambridge University Press.
[60] Martha C Nussbaum. 2011. Creating Capabilities. Harvard University Press.
[61] Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. Dissecting racial bias in an algorithm used to manage the health of populations. Science 366, 6464 (2019), 447–453.
[62] Ilse Oosterlaken. 2015. Technology and Human Development. Routledge.
[63] Rachel Pain and Peter Francis. 2003. Reflections on participatory research. Area 35, 1 (2003), 46–54.
[64] Nassim Parvin and Anne Pollock. 2020. Unintended by Design: On the Political Uses of "Unintended Consequences". Engaging Science, Technology, and Society (2020).
[65] Caroline Criado Perez. 2019. Invisible Women: Exposing Data Bias in a World Designed for Men. Random House.
[66] Andrew Perrault, Fei Fang, Arunesh Sinha, and Milind Tambe. 2020. AI for social impact: Learning and planning in the data-to-deployment pipeline. AI Magazine 41, 4 (2020).
[67] John Rawls. 1971. A Theory of Justice. Harvard University Press.
[68] Christopher Robichaud. 2005. With great power comes great responsibility: On the moral duties of the super-powerful and super-heroic. Open Court.
[69] Joseph Roy. 2019. Engineering by the Numbers. American Society for Engineering Education, 1–40.
[70] Amartya Sen. 1999. Development as Freedom. Alfred A. Knopf.
[71] Amartya Sen. 1999. The possibility of social choice. American Economic Review 89, 3 (1999), 349–378.
[72] Amartya Sen. 2017. Collective Choice and Social Welfare. Harvard University Press.
[73] Zheyuan Ryan Shi, Claire Wang, and Fei Fang. 2020. Artificial intelligence for social good: A survey. arXiv preprint arXiv:2001.01818 (2020).
[74] Yoav Shoham and Kevin Leyton-Brown. 2008. Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. Cambridge University Press.
[75] Jesper Simonsen and Toni Robertson. 2012. Routledge International Handbook of Participatory Design. Routledge.
[76] Mona Sloane, Emanuel Moss, Olaitan Awomolo, and Laura Forlano. 2020. Participation is not a Design Fix for Machine Learning. arXiv preprint arXiv:2007.02423 (2020).
[77] Latanya Sweeney. 2013. Discrimination in online ad delivery. Commun. ACM 56, 5 (2013), 44–54.
[78] Audrey Tang. 2020. Inside Taiwan's new digital democracy. The Economist (2020).
[79] Hannah Thinyane and Karthik S Bhat. 2019. Apprise: Supporting the critical agency of victims of human trafficking in Thailand. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–14.
[80] Mark Timmons. 2013. Moral Theory: An Introduction. Rowman & Littlefield Publishers.
[81] Nenad Tomašev, Julien Cornebise, Frank Hutter, Shakir Mohamed, Angela Picciariello, Bec Connelly, Danielle CM Belgrave, Daphne Ezer, Fanny Cachat van der Haert, Frank Mugisha, et al. 2020. AI for social good: Unlocking the opportunity for positive impact. Nature Communications 11, 1 (2020), 1–6.
[82] Mahbub Ul Haq. 2003. The birth of the human development index. Readings in Human Development (2003), 127–137.
[83] Fei Wang and Anita Preininger. 2019. AI in health: State of the art, challenges, and future directions. Yearbook of Medical Informatics 28, 1 (2019), 16.
[84] Meredith Whittaker, Meryl Alper, Cynthia L Bennett, Sara Hendren, Liz Kaziunas, Mara Mills, Meredith Ringel Morris, Joy Rankin, Emily Rogers, Marcel Salas, et al. 2019. Disability, Bias, and AI. AI Now Institute (2019).
[85] Langdon Winner. 1980. Do artifacts have politics? Daedalus (1980), 121–136.
[86] Jonathan Wolff and Avner De-Shalit. 2007. Disadvantage. Oxford University Press on Demand.
[87] Meg Young, Lassana Magassa, and Batya Friedman. 2019. Toward inclusive tech policy design: A method for underrepresented voices to strengthen tech policy documents. Ethics and Information Technology 21, 2 (2019), 89–103.
[88] Kathryn Zyskowski, Meredith Ringel Morris, Jeffrey P Bigham, Mary L Gray, and Shaun K Kane. 2015. Accessible crowdwork? Understanding the value in and challenge of microtask employment for people with disabilities. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing. 1682–1693.