
Why Responsible AI Matters

Dr. Pinar Wennerberg

Accenture Switzerland

01.09.2021

About Me

❖ I was born and raised in Istanbul, Turkey, and left in 1998 after university

❖ Completed PhD in computational linguistics in Munich, Germany

❖ Single daughter

❖ My purpose is to enable AI responsibly to help humans have better lives every day.

Why Responsible AI?

AI is here and now

https://www.accenture.com/_acnmedia/PDF-86/Accenture-AI-Momentum-Final.pdf

Of global organizations:

▪ 32% have fully deployed AI on multiple use cases
▪ 14% have fully deployed AI for a single use case
▪ 28% have partially deployed AI as an experiment
▪ 14% have deployed AI as a prototype

And on the impact of AI technologies on their operations:

▪ 21% of global organizations rate it as highly successful
▪ 30% rate it as successful
▪ 29% report mixed results
▪ 13% rate it as slightly successful

And there are increasing societal expectations about AI. If these are not met, AI will not be adopted and therefore will not scale.

And things can go wrong

AI can have unintended consequences for which we as humans are accountable (no “moral outsourcing”).

▪ Inability to allow human intervention when required, and AI's misunderstanding of its contextual environment, can lead to serious consequences

▪ Unrepresentative datasets or human bias in design can lead to discrimination especially impacting vulnerable groups

▪ AI may never scale due to a lack of capabilities, external AI standards or slow adoption due to lack of trustworthiness

In March 2018 Elaine Herzberg was killed by a semiautonomous car while crossing a road in Arizona.

Two years after its release, the Austrian labor agency shut down its AI tool because it disadvantaged female job seekers.

So, what does this mean for us and for our businesses?

60% of deployed AI systems require a human override of the AI system once a month. Clients do not trust AI systems yet.

Loss of confidence

63% of leaders believe it is crucial to monitor AI systems, but most are unsure how to do so. Clients are cautious about investing in AI systems that they do not find trustworthy.

Untapped potential

Sources: 2020 Edelman Trust Barometer; Salesforce Ethical Leadership and Business report

Reputational Risk

Feeling safe with AI requires that we trust it

There are four types of risk areas about AI that we must know how to manage

Businesses, individuals, and society are all impacted.

Data Integrity & Accuracy

Lack of large, high-quality and representative data sets and solution maturity to drive accurate and fair outcomes.

Governance & Control

Lack of skills, governance, and data lineage to drive understanding, transparency and control.

Regulation & Culture

Lack of ethically driven AI legislation and guidance, and low cultural awareness of potential AI impacts to prevent misuse.

Security & Privacy

Lack of strong cybersecurity controls to prevent data breaches, exploitation of AI models, and misuse of customer data.

AI Risk Factors


Transparency: The ability to understand how and why decisions were made and what actions were taken as complex models become more deeply integrated into business processes.

Addressing Bias: Tendencies toward differential treatment of classes introduced through data, models, user interaction or external changes.

Data and Security: A transparent, accountable and user-centric development of the AI and the AI backbone. This includes knowledge data pipelines, architecture, governance and the security backbone for AI.

Accountability: A transparent and trusted process with clear ownership, governance and monitoring of AI, its implementation and its environment.

Human and Machine: Creation of an environment that facilitates human + machine collaboration, as well as development of the workforce skills necessary to create, guard and sustain AI responsibly.
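The bias risk named above, differential treatment of classes, can be made concrete with one simple and commonly used check: comparing selection rates across groups (demographic parity). The sketch below is illustrative only; the group data and the 0.2 tolerance are made-up assumptions, not part of this deck.

```python
# Illustrative sketch of one simple bias check: the demographic
# parity difference, i.e. the gap in positive-decision rates
# between two groups. All data and the threshold are hypothetical.

def selection_rate(decisions):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

group_a = [1, 0, 1, 1, 0, 1, 1, 0]   # decisions observed for group A
group_b = [0, 0, 1, 0, 0, 1, 0, 0]   # decisions observed for group B

parity_gap = abs(selection_rate(group_a) - selection_rate(group_b))
flagged = parity_gap > 0.2           # assumed tolerance; flags possible bias
```

In practice such a check would run over real decision logs and sit alongside other fairness metrics, since no single number establishes that a system is fair.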

TRUST

Accenture’s Responsible AI principles are designed to manage these risks and establish trust so that AI scales

We use our RAI principles to help our clients stand out by operating trustworthy AI systems at scale

About Responsible AI @ Accenture: Global Accenture locations and RAI centers

▪ Ray Eitel Porter, Global Lead, [email protected]
▪ Caryn Tan, Strategy and Operations, London, [email protected]
▪ Anika Mahajan, Technical Lead, [email protected]
▪ Brian Jore, NA Mid West Lead, Minneapolis, [email protected]
▪ Monark Vyas, NA West Lead, San Francisco, USA, [email protected]
▪ Rubina Ohanian, NA South East Lead, Atlanta, USA, [email protected]
▪ Marisa Tricarico, NA North East Lead, New York, [email protected]
▪ David Brunyeel, Gallia Lead, Brussels, [email protected]
▪ Marta Balbas Gambra, Iberia Lead, Madrid, [email protected]
▪ Mozhgan Tavakolifard, Nordics Lead, Oslo, [email protected]
▪ Fabio Bresciani, ICEG Lead, Milan, [email protected]
▪ Rudraksh Bhawalkar, ASGR Lead, Frankfurt, [email protected]
▪ Ray Eitel Porter, Europe and UKI Lead, London, [email protected]
▪ Pinar Wennerberg, Switzerland Lead, Zurich, [email protected]
▪ Anika Mahajan, ASEAN/South Asia Lead, Gurugram, [email protected]
▪ Tahney Keith, ANZ Lead, Canberra, [email protected]

Risk dimensions: Operational, Technical, Org. & Culture, Reputational

We help our clients in different problem areas depending on their needs:

▪ Compliance leaders want AI to flourish through governance
▪ Technology leaders want AI systems that are trustworthy and explainable by design
▪ HR leadership wants to ensure human + machine collaboration
▪ Executive & strategic leaders want Responsible AI anchored to a company's core values & brand

Kick-off: 'Design for Responsible AI Workshop' to identify focus

Related tools and assets:

▪ Governance guidebook
▪ Quantitative and qualitative algorithmic assessment
▪ AI fairness check
▪ Responsible AI in HR center of excellence
▪ Responsible AI for the Board (co-created with WEF)
▪ Brand-wheel & brand strategy

Transparent project staffing recommendations delivered for a global consumer goods client

Business challenge

The HR department of a global consumer goods client operates a project staffing engine delivered by our team. They needed to know how the AI engine was down-selecting candidates for projects, because they had to explain this to their employees. Tool adoption and its continued successful operation depended on how well employees could understand the AI's decision-making.

How we helped

▪ Collaborated with the client's HR and PMO to understand their "explainability needs" for tool adoption.
▪ Designed and implemented an "explainable AI" (xAI) feature that shows the data sources and the scores leading to the final recommendation in an easily understandable way.
▪ Conducted successful user acceptance tests (UAT) that resulted in the adoption and operation of the tool.

Delivered outcomes

1. A new feature that visually and dynamically explains the scoring and the data sources behind the recommendations
2. A training video for users of the tool (employees and HR)
3. Multiple user training workshops as well as UATs

Principle addressed: Transparency
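As a rough illustration of the kind of xAI feature described in this case, the sketch below scores a candidate as a weighted sum of features and reports each feature's contribution, which is the information a UI could surface to explain a recommendation. All feature names and weights are hypothetical; this is not the client's actual engine.

```python
# Hypothetical sketch: a staffing engine that scores candidates as a
# weighted sum of features and reports each feature's contribution,
# so a recommendation can be explained rather than presented as a
# black-box number. Feature names and weights are invented.

WEIGHTS = {"skill_match": 0.5, "availability": 0.3, "past_ratings": 0.2}

def score_candidate(features):
    """Return (total score, per-feature contributions) for one candidate."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

total, why = score_candidate({"skill_match": 0.9,
                              "availability": 0.5,
                              "past_ratings": 0.8})
# 'why' holds each feature's contribution to 'total' and can be
# rendered visually, much like the feature the case study describes.
```

The design point is that the explanation is computed from the same quantities as the decision itself, so what employees see cannot drift out of sync with what the engine actually did.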

Green chatbot for a German government agency

Business challenge

A government agency in Germany wanted to reduce its CO2 footprint:

▪ How can we contribute to a greener environment by managing our technology?
▪ What is the impact of AI on our environment?

How we helped

▪ Analyzed the energy consumption of the call center in terms of emails and calls per year.
▪ Compared the results with that of their chatbot (i.e. the language models underlying the chatbot) as their representative AI technology.
▪ Provided recommendations for optimizing the language models.
▪ Delivered a CO2 calculator to enable the agency to measure its CO2 footprint on demand and independently.

Delivered outcomes

1. Assessment of the call center's yearly CO2 output
2. Recommendations and techniques to reduce it
3. A CO2 calculator

Principles addressed: Transparency, Sustainability
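A CO2 calculator of the kind delivered in this case can be sketched as a per-channel emission estimate. The emission factors and interaction volumes below are illustrative placeholders, not measured values from the engagement.

```python
# Hypothetical sketch of a CO2 calculator comparing a call center's
# channels with a chatbot. All per-interaction emission factors and
# volumes are illustrative placeholders, not measured values.

FACTORS_KG_CO2 = {            # assumed kg CO2 per interaction
    "email": 0.004,
    "call": 0.05,
    "chatbot_query": 0.002,
}

def yearly_footprint(volumes):
    """Sum kg CO2 for a dict of {channel: interactions per year}."""
    return sum(FACTORS_KG_CO2[channel] * n for channel, n in volumes.items())

baseline = yearly_footprint({"email": 100_000, "call": 50_000})
with_bot = yearly_footprint({"email": 40_000, "call": 20_000,
                             "chatbot_query": 90_000})
savings = baseline - with_bot   # positive if the chatbot scenario emits less
```

A real calculator would also have to account for the energy cost of training and serving the underlying language models, which is exactly the comparison the engagement made.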

Some Lessons Learned

It is not about explainability; it is about understandability.

AI's explainability is only as good as it can actually be understood by a layperson.

No AI without Responsible AI

Do not erect a skyscraper and then wonder whether it withstands earthquakes. Erect it so that it does.

Company culture matters

Employees welcome AI in companies where they feel safe.

Our core values show us the limits. It is always the human.

What AI should do is defined by individual, company and societal values.

No “moral outsourcing”

Workforce training is crucial

People on the ground decide whether AI is embraced. The workforce must feel they have control over AI, not the other way around.

There are more questions than those about bias and transparency alone

There is a long and exciting journey ahead of us

How will our relationship be with AI?

Will your mother see it as her trusted friend?

What are the industry-specific AI problems?

Will you trust a doctor robot but not a lawyer robot?

How will the governments evolve?

Will you migrate to a country because of its better / ethical AI?

Thank You

[email protected]