
IDC ANALYST CONNECTION

Sponsored by: Accenture Federal Services

Responsible, Ethical Artificial Intelligence in Government Agencies

October 2018

Questions posed by: Accenture Federal Services

Answers by: Adelaide O'Brien, Research Director, IDC Government Insights

Q How does IDC define responsible or ethical AI, and what should government agencies know about this?

A As AI proliferates in government agencies and is deployed for mission-critical operations, it will make or assist in decisions that have significant impact on almost every aspect of individuals' lives. As the role of data-enabled automated decision making becomes more pervasive, transformative applications are demonstrating the potential that human and machine pairing can bring. At the same time, ethical challenges are being raised regarding the potential for errors due to inadvertent algorithmic or data bias in sensitive areas, particularly regarding gender, race, class, or age.

Responsible or ethical AI comprises the practices that government agencies can and should adopt to manage, monitor, and mitigate these risks. Responsible and ethical use of AI includes protecting individuals from harm caused by algorithmic or data bias, or by unintended correlation of personally identifiable information (PII), even when working with anonymized data. Agencies need to ensure that their AI platforms act as responsible members of society, adhering to the same rules and regulations that all employees are required to follow.

Q How can government agencies mitigate the potential for bias in their data?

A Humans have some level of bias that influences everyday decisions. And since machine learning is trained by humans, these biases may be inherently and inadvertently built into machine-based AI systems. We make decisions based on the information we know and often lack full comprehension of, or access to, other information that should be taken into consideration. Similarly, machines can entrench bias if learning is based on limited demographic information or on a preponderance of historical data that doesn't reflect today's reality. Understanding and addressing the ethical, legal, and societal implications of AI is identified as a priority in The National Artificial Intelligence Research and Development Strategic Plan.

To mitigate bias, agencies should ensure diversity in their data and follow basic data management and governance practices, such as putting in place an information access and analysis strategy built on a robust data foundation, data governance, and analytics. Establish processes to document sources, label and organize files, and address the issues that arise when integrating data from different sources to build a complete and true picture of constituents. Data scientists should verify the veracity and lineage of the data prior to training the model. Historical databases can be weighted so that training emphasizes more current data that reflects today's demographic realities.
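As an illustration of that weighting idea, the sketch below assigns higher sample weights to more recent records before training. It is a minimal, hypothetical example: the column names, decay rate, and model choice are assumptions for illustration, not part of IDC's guidance.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical training table; 'record_year' is an assumed column name.
records = pd.DataFrame({
    "record_year": [2005, 2010, 2015, 2017, 2018],
    "feature":     [0.2,  0.4,  0.6,  0.7,  0.9],
    "outcome":     [0,    0,    1,    1,    1],
})

# Exponentially down-weight older records so training emphasizes
# data that reflects current demographic realities.
current_year = 2018
decay = 0.8  # assumed decay rate; tune per dataset
weights = decay ** (current_year - records["record_year"])

model = LogisticRegression()
model.fit(records[["feature"]], records["outcome"], sample_weight=weights)
```

With this scheme, a 2018 record carries full weight while a 2005 record contributes only a fraction, so stale demographic patterns have less influence on the trained model.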

Agencies must be committed to training their AI platforms and to using training data that is as comprehensive and unbiased as possible. Bias can also be mitigated through human-centered design practices that create personas inclusive of various populations and through testing to ensure that AI decisions do not harm the populations those personas represent, the populations the agency's mission is to serve. It's important to realize that just as humans learn and adjust their thinking over time, mitigating bias is not a "once and done" effort for AI but a continuous learning process that involves quality data, rigorous data management practices, and active testing for algorithmic bias. The bottom line is that agencies are accountable for AI decisions and must ensure that their algorithms follow the law, do not treat demographic groups unfairly, and protect groups and individuals from algorithmic bias. Biases are not a new phenomenon, and properly trained AI systems offer a long-term ability to better address them.
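To make "active testing for algorithmic bias" concrete, here is a minimal, hypothetical check of demographic parity, comparing approval rates across groups. The field names and threshold are assumptions; real programs would use richer fairness metrics chosen with legal and policy review.

```python
import pandas as pd

# Hypothetical decision log; column names are illustrative.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per demographic group.
rates = decisions.groupby("group")["approved"].mean()

# Demographic parity gap: difference between best- and worst-treated groups.
parity_gap = rates.max() - rates.min()
THRESHOLD = 0.2  # assumed tolerance; set by policy, not by this sketch

if parity_gap > THRESHOLD:
    print(f"Review needed: approval-rate gap of {parity_gap:.2f} across groups")
```

Because this is a continuous learning process rather than a "once and done" effort, a check like this would run on every retraining cycle and on live decision logs, not only before initial deployment.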

Q In using AI to serve citizens, what steps should government agencies take to maintain public trust?

A First, start with a commitment to transparency, and communicate the steps used in the AI process. Transparency about how AI decisions are made is important, especially when data privacy is at stake. Techniques should be fully explainable, both to inform decision makers and to assuage constituents' concerns. Understand the data sets used (and the decisions not to use other data sets), the rationale for any weighting of data sets, the algorithmic function used to train the machine, and the decision-making process. Put in place mechanisms to protect citizens' interests, especially when sharing information with ecosystem partners. Create formal governance policies that address who owns, who analyzes, and who has access to each granularity of data to protect confidentiality and PII.
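One lightweight way to operationalize that documentation is an audit record kept alongside each deployed model. The fields below are illustrative assumptions, not a prescribed IDC or agency schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelAuditRecord:
    """Hypothetical transparency record for one deployed AI model."""
    model_name: str
    datasets_used: List[str]
    datasets_excluded: List[str]      # and why they were excluded
    weighting_rationale: str
    training_algorithm: str
    data_owner: str                   # who owns the underlying data
    access_roles: List[str] = field(default_factory=list)  # who may query it

record = ModelAuditRecord(
    model_name="benefits-triage-v1",
    datasets_used=["case_history_2015_2018"],
    datasets_excluded=["pre_2010_archive: outdated demographics"],
    weighting_rationale="recent records weighted higher",
    training_algorithm="gradient-boosted trees",
    data_owner="Office of the CDO",
    access_roles=["caseworker", "auditor"],
)
```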

AI systems that look for specific patterns or profile on specific characteristics may breach levels of privacy that were assumed to be intact. As agencies use machine learning to mine and analyze synthetic (i.e., anonymized) data and dark data (i.e., information that agencies store but historically haven't used for decisioning), policies must prevent drilling for individual data patterns that would disclose PII or profiling that would reveal individual information. Ensure that testing for bias is done and that PII can't be inadvertently disclosed. Prior to deploying nationwide, test AI recommendations in pilot projects to validate the model and identify unintended consequences. Educate citizens, provide explanations of how AI is being used, and ensure that access to responsible agency individuals is readily available for redress.
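As one concrete guardrail against re-identification, the hypothetical sketch below applies a k-anonymity check, releasing query results only for groups large enough that no individual stands out. The quasi-identifier columns and the value of k are assumptions for illustration; real thresholds are set by policy.

```python
import pandas as pd

K = 5  # assumed minimum group size

def k_anonymous_release(df: pd.DataFrame, quasi_identifiers: list) -> pd.DataFrame:
    """Suppress rows belonging to groups smaller than K individuals."""
    sizes = df.groupby(quasi_identifiers)["record_id"].transform("count")
    return df[sizes >= K]

# Hypothetical anonymized extract; column names are illustrative.
extract = pd.DataFrame({
    "record_id": range(8),
    "zip3":      ["021", "021", "021", "021", "021", "990", "990", "990"],
    "age_band":  ["30s"] * 5 + ["40s"] * 3,
})

safe = k_anonymous_release(extract, ["zip3", "age_band"])
# The three-person ("990", "40s") group is suppressed; only groups of >= 5 remain.
```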

Many of these practices come under the umbrella of understandable or explainable AI. Fostered by groups like DARPA, explainable AI is focused on eliminating "black box" decision making through systems that can explain their rationale and provide insight into the risk factors underlying their decisions. Initially, explainable AI is likely to focus on explanations suitable for data scientists rather than the general public. Understandable AI helps fill this void by focusing on the handoff from AI systems to humans, ensuring that frontline staff have the insight needed to act on ambiguous results.
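A minimal example of the data-scientist-facing explanations described above, assuming a scikit-learn model and synthetic stand-in data: permutation importance reports which inputs most drive a model's decisions by measuring how much accuracy drops when each feature is scrambled. The feature labels here are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in data; real deployments would use agency case data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)  # feature 0 dominates by design

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "tenure", "region"], result.importances_mean):
    print(f"{name}: importance {score:.3f}")
```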

Q What steps should government agencies take when integrating AI into their workforce?

A Transparency is also key in integrating AI into the workforce. In many cases, employees will rely increasingly on AI systems to make critical decisions and do their jobs. To have trust in the system's recommendations, they must understand the system's decision-making process. Agencies can also take advantage of human-centered design and similar approaches to improve how analytical output is integrated into human workflow.

Agencies should also explain how workers are impacted by AI and how the nature of work will change. AI can enhance their everyday work life in several ways. However, specific roles may change, and over time, some positions may be eliminated while new ones are created. In general, employees can expect their roles to continuously evolve toward handling more complex or idiosyncratic tasks and interactions that demand higher-order skills such as problem solving and independent thinking. For example, AI can improve quality of life for government workers by processing a vast number of documents and identifying those most relevant and required to solve a specific case. When AI identifies and recommends the most valuable criteria for each decision, caseworkers can become more strategic and productive, improving their ability to consistently and effectively resolve cases.

Human oversight is critical in deploying AI. Humans should periodically review AI decisions and subject them to "performance reviews," just as they would for human counterparts. This activity helps ensure that AI decisions remain compliant and serves as a useful check when decision criteria change, for example, via legislation.
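One hypothetical way to operationalize such "performance reviews" is a periodic statistical check that flags when the mix of AI decisions drifts from a human-reviewed baseline. The decision categories and significance level below are illustrative assumptions.

```python
from scipy.stats import chisquare

# Counts of decisions by outcome category.
baseline = {"approve": 700, "refer": 200, "deny": 100}   # human-reviewed baseline
current  = {"approve": 640, "refer": 180, "deny": 180}   # latest AI decisions

# Scale the baseline to the current total so expected frequencies match.
total = sum(current.values())
expected = [baseline[k] / sum(baseline.values()) * total for k in baseline]
observed = [current[k] for k in baseline]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
if p_value < 0.01:  # assumed significance level
    print(f"Decision mix has drifted (p={p_value:.4f}); escalate for human review")
```

A drift flag like this doesn't decide anything by itself; it simply queues the model for the kind of human review the text describes, including after legislative changes to decision criteria.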

Q What role can industry play in helping government agencies use AI?

A Keeping pace with the speed of technological innovation is challenging for many agencies, as industry leaders are constantly developing new AI tools and techniques. While it is important to stay abreast of relevant innovations and technological trends, agencies should leverage the investments made by industry leaders in this field. Agencies should expect their industry partners to have deep government, analytics, and technology expertise. Seek partners that have responsible and ethical policies governing the use of AI and that have developed tools and techniques to test for and detect unintended consequences such as gender, racial, and ethnic bias in AI software. Industry partners can help agencies prepare data so it's ready to be used for AI, train employees to be ready for and to work with AI, and build data labs for continuous analytics.


IDC also recommends that agencies seek industry experts with deep expertise in design thinking, since AI algorithms compute but don't think and are useful only when properly designed by humans to fit the mission purpose. An algorithm designed for the U.S. Postal Service to recognize handwritten zip codes is useless for facial recognition of bad actors at U.S. Customs and Border Protection. Industry partners can assist in meshing needs across agency functions to articulate the requirements of an AI system and ensure that analytic outputs are tailored to the unique needs of each agency. Many companies serving the government can share pitfalls to avoid and best practices to deploy, drawn from their government clients and from commercial customers in related regulated industries such as financial services and healthcare, where protection of PII and responsible, ethical AI are top priorities. Agencies should seek industry partners that understand the ethical, legal, cultural, and social implications of AI, as well as those developing methods for designing AI systems that are responsible and ethical.



MESSAGE FROM THE SPONSOR

Accenture Federal Services, a wholly owned subsidiary of Accenture LLP, is a U.S. company with offices in Arlington, Virginia. Accenture's federal business has served every cabinet-level department and 30 of the largest federal organizations. Accenture Federal Services transforms bold ideas into breakthrough outcomes for clients at defense, intelligence, public safety, civilian and military health organizations. Our end-to-end capabilities include leadership in AI and advanced analytics, digital services and platforms, enterprise applications and processes, and cloud computing and IT operations. Learn more at www.accenturefederal.com.

Accenture is a leading global professional services company, providing a broad range of services and solutions in strategy, consulting, digital, technology and operations. Combining unmatched experience and specialized skills across more than 40 industries and all business functions — underpinned by the world's largest delivery network — Accenture works at the intersection of business and technology to help clients improve their performance and create sustainable value for their stakeholders. With 449,000 people serving clients in more than 120 countries, Accenture drives innovation to improve the way the world works and lives. Visit us at www.accenture.com.

About the analyst:

Adelaide O'Brien, Research Director, Government Digital Transformation, IDC Government Insights

Adelaide O'Brien assists clients in understanding the full scope of efforts needed for digital transformation and focuses on technology such as Big Data, AI, cognitive, and cloud in the context of government use cases. She was named one of this year's Federal 100 by FCW. This award recognizes and celebrates government and industry leaders who play pivotal roles in the federal government IT community.

IDC Corporate USA

5 Speen Street Framingham, MA 01701, USA

T 508.872.8200

F 508.935.4015

Twitter @IDC

idc-insights-community.com

www.idc.com

This publication was produced by IDC Custom Solutions. The opinion, analysis, and research results presented herein are drawn from more detailed research and analysis independently conducted and published by IDC, unless specific vendor sponsorship is noted. IDC Custom Solutions makes IDC content available in a wide range of formats for distribution by various companies. A license to distribute IDC content does not imply endorsement of or opinion about the licensee.

External Publication of IDC Information and Data — Any IDC information that is to be used in advertising, press releases, or promotional materials requires prior written approval from the appropriate IDC Vice President or Country Manager. A draft of the proposed document should accompany any such request. IDC reserves the right to deny approval of external usage for any reason.

Copyright 2018 IDC. Reproduction without written permission is completely forbidden.