
2019 The Future of Artificial Intelligence

How to filter through applications of AI for banking business transformation

Expert Contributors:


The Future of Artificial Intelligence Report 2019

Contents


Introduction

Chapter 1: Artificial intelligence hype

Expert view: VocaLink

Chapter 2: Avoiding artificial stupidity

Expert view: Pelican

Chapter 3: Filtering the ethics of AI

Expert view: IBM

Chapter 4: Explainable and auditable AI

Expert view: Xceptor

Conclusion

Bibliography

About



00 | Introduction

While artificial intelligence established itself as a disruptive technology decades ago, it is arguably now at the peak of the hype cycle, and banks have started to implement the technology to transform traditional business models.


However, as AI is a multi-faceted technology, financial institutions must decipher whether it is machine learning, robotics, deep learning, business intelligence or natural language processing that is most beneficial for corporate banking.

Some banks have launched chatbot applications and virtual assistants, but others do not have the talent within the business to deploy innovative products that are personalised for their customers, and an even smaller number do not understand the value of AI.

This report on The Future of Artificial Intelligence analyses how, despite a lack of experience with the technology that has caused problems in identifying unusual trends, preventing fraud and avoiding bias on an ethical level, banks are now focused on developing AI as a business-critical capability.


01 | Artificial intelligence hype

While artificial intelligence is transforming several industries, the financial sector has a lot to learn from specific case studies in non-banking areas such as health, travel and retail. To start at the beginning, despite the term AI having been bandied about with a number of different definitions, the actual definition is simple: technology that appears intelligent.


1 MMC Ventures, ‘The State of AI: Divergence’ (2019).
2 Financial Times, ‘Europe’s AI start-ups often do not use AI, study finds’ (2019).

Artificial intelligence has been in existence since the 1950s; early forms of the technology were created and designed to mimic human nature, and this is where the controversy and misconceptions have emerged from. After a period of quiet development, artificial intelligence has undergone a modern revolution, with subsequent excitement as a result of techniques such as machine learning coming to the fore.

Abhijit Akerkar, head of AI business integration at Lloyds Banking Group, reveals that a bank’s journey into artificial intelligence starts with experimentation. For Lloyds, it was around 24 months ago. Experimentation provides a low-cost way to test what works and what doesn’t. This experience shapes the portfolio of use cases and hence, the trajectory of value creation.

“The last three to four years have seen the explosion of data, easy and low-cost access to powerful compute power, and availability of sophisticated machine learning algorithms. The stars are aligned for the breakthrough. No wonder, companies have stepped up their investments towards embracing AI,” Akerkar says.

Machine learning is another term that has been bandied about, and it must be remembered that this is a subset of AI: all machine learning is AI, but not all AI is machine learning. Machine learning enables programmes to learn through training, instead of being programmed with rules, and as a result they can improve with experience. This is why there has been excitement about machine learning in financial services, as MMC Ventures explored in their report ‘The State of AI: Divergence,’1 in partnership with Barclays. “Machine learning can be applied to a wide variety of prediction and optimisation challenges, from determining the probability of a credit card transaction being fraudulent to predicting when an industrial asset is likely to fail.”
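To make the idea of “learning through training rather than rules” concrete, here is a minimal sketch that fits a classifier to entirely synthetic card-transaction data and asks it for a fraud probability. The feature names, the data-generating formula and the data itself are invented for illustration only; this is not a description of any bank’s actual model.

```python
# Minimal sketch: "learning from examples" rather than hand-written rules.
# All data here is synthetic and the feature names are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical features: amount, whether the card was used abroad,
# and minutes since the previous transaction on the same card.
amount = rng.exponential(scale=80, size=n)
abroad = rng.integers(0, 2, size=n)
minutes_since_last = rng.exponential(scale=600, size=n)
X = np.column_stack([amount, abroad, minutes_since_last])

# Synthetic ground truth: fraud is rare and loosely related to the features.
fraud_rate = 1 / (1 + np.exp(-(0.004 * amount + 1.2 * abroad
                               - 0.002 * minutes_since_last - 3.5)))
y = rng.random(n) < fraud_rate

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# The model outputs a probability of fraud for each new transaction,
# which is the kind of prediction problem the MMC report describes.
print(model.predict_proba(X_test[:5])[:, 1])
```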

Deep learning, which is a subset of machine learning, has also become mainstream knowledge because of how it emulates the way animal brains learn tasks, but deep learning has not yet been used to its full potential in financial services.

The tipping point

MMC Ventures also state that after “seven false dawns since its inception in 1956, AI technology has come of age. The capabilities of AI systems have reached a tipping point due to the confluence of seven factors: new algorithms; the availability of training data; specialised hardware; cloud AI services; open source software resources; greater investment; and increased interest.”

In what is being described as the fastest paradigm shift in the history of technology, banks can now adopt AI technology because of the shift to cloud computing, offerings from vendors and software suppliers. According to Gartner, only 4% of enterprises had adopted AI in 2018. Today, this figure has jumped to 14% and a further 23% intend to deploy AI within the next 12 months.

“By the end of 2019, over a third of enterprises will have deployed AI. Adoption of AI has progressed extremely rapidly from innovators and early adopters to the early majority. By the end of 2019 AI will have ‘crossed the chasm’, from visionaries to pragmatists, at exceptional pace – with profound implications for companies, consumers and society,” the MMC Ventures report posited.

But is this interest all conflated hype? MMC Ventures also revealed in March 2019 that 40% of Europe’s AI startups do not use any AI programmes in their products, as reported in the Financial Times.2


Based on interviews and investigation into 2,830 AI startups in Europe, David Kelnar, MMC’s head of research, said that while many of these firms had plans to develop machine learning programmes, none were actually doing so at present. “A lot of venture capital groups in Europe are responsive to companies that are interested in raising money [for AI],” Kelnar said.

The FT went on to report that companies that are branded as “AI businesses” have historically raised larger funding rounds and secured higher valuations in comparison to other software businesses. In addition to this, politicians have also contributed to this hype by discussing so-called AI success stories.

AI FOMO

At Finextra’s annual NextGen Banking conference, keynote speaker Janet Adams, head of AI at TSB Bank, framed the debate, stating that “AI is the new electricity” with the potential to power everything we do in the future, helping banking customers through the wealth creation stage of their lives.

However, despite hype around uncovering the mysteries that surround the technology, Adams pointed out that business models cannot succeed without proper education of staff in financial services, and only then can strategic advantage be gained. “Data equals training equals insight.” Roshan Rohatgi, AI lead at RBS, agreed and added that “everyone is keen to use this stuff, but the system, the fabric, is not mature yet. It’s all well and good to go from POC to pilot, but it never really reaches the real world.”

The hype discussion continued in the keynote from Karan Jain, head of technology Europe and Americas at Westpac, in which he explained that a lot of the discussion about AI is around FOMO, the fear of missing out, and that this “FOMO generation” has different expectations and “want their banking services to be available in a couple of clicks.”

It was also argued in a later panel discussion that this FOMO also exists within the corporate banking infrastructure, where the board may ask executives if they are working with artificial intelligence - after having heard about the technology in the news - for the execs to reply, revealing that their bank has been using machine learning for a few years.

In conversation with Finextra, Prag Sharma, head of Emerging Technology, TTS Global Innovation Lab, Citibank, highlights that there has been a recent resurgence in artificial intelligence and this is because of the development in the overall capability of the technology driven by “data, processing power, cost and algorithms, products and services developed by the open source community.”

Annerie Vreugdenhil, chief innovation officer at ING Wholesale Bank, suggests that AI is already part of our everyday lives and is more prevalent than first thought. “The world is changing rapidly through technological developments and as a result, our expectations are changing. As we adapt, and these technologies become more intertwined into our lives, our expectations around what could be achieved also grow. We believe in stepping out of our comfort zone, even beyond banking, to explore the opportunities, and as we do this, our expectations extend further than we have ever imagined before.”

Paul Hollands, chief operating officer for data and analytics at NatWest, has a different view. After saying that he was “a terrible person to ask whether AI is a buzzword or not,” he said he has always thought that AI was “a massively overhyped term. It is a collection of capabilities, so you know, if you think about it in its simplest form, it’s machine learning, it’s robotics and it is to some extent, chatbots as well and I think a lot of what we’re trying to do is around how do we use advanced techniques to help get to smarter outcomes for customers.

“We’ve been using machine learning for a long time in terms of how we identify opportunities for customers to save money and do things differently. I sit there and think machine learning isn’t that new, but the technology that is available to put the data through, and the speed at which it is palatable enough to get an answer – that is new.”


Neal Lathia, machine learning lead at Monzo, adds that “machine learning is well beyond the peak of inflated expectations, but the broader usage of the phrase ‘artificial intelligence’ is hyped to a cringeworthy degree.” OakNorth’s chief operating officer Amir Nooralia had a similar view and said that while there is hype around AI, he believes that it is justified and not just part of a cycle.

“The hype is here to stay and if anything, will only continue to grow over time as more use cases develop and more propositions are proven. Personally, I think the tipping point will be commercialised AI: moving away from AI chatbots which make us more efficient to AI making commercial insights that lead to more profitable businesses. Once that is proven in an industry, it will permeate quickly and then be replicated across other sectors. We saw this with investment banking and algo-trading and how quickly it took off, once money was being made.”

Stephen Browning, challenge director – next generation services at Innovate UK, provides a concise outlook on artificial intelligence and explains that “it is not necessarily the technology that’s important, it’s the projects and programs in which AI is getting used. What we’re seeing right now is a surge in interest around a particular type of AI, that is machine learning and variants of that such as deep learning, and that’s driven substantially by two main things that have developed and come along.

“One is the computing power available at a reasonable price and the other is the availability of large quantities of data. When you bring those two together you have the ability to use machine learning models to do some things that are quite remarkable in terms of the ability to spot patterns, but it’s not intelligent in the normal sense of the word.

“These techniques come under the broad title of artificial intelligence, that’s really why there is a surge in interest at the moment. The opportunity to apply these techniques to new areas where there is access to data gives the ability to spot things that maybe you couldn’t spot before so when you’re talking about financial services, identifying fraudulent transactions far more readily or using machine intelligence to assess communication and potentially see where people aren’t being so honest and spot fraud.”


Expert view: VocaLink

David Divitt, Vice President, Financial Crime

James Hogan, Product Manager, Financial Crime Solutions

In a Q&A interview with Finextra, David Divitt, Vice President, Financial Crime and James Hogan, Product Manager, Financial Crime Solutions from Vocalink explore how financial institutions can approach innovation and combat financial crime at the same time.

After it emerged that people had started to see Facebook ads that were related to their Internet search history, consumers felt as if the line was now being crossed, especially when it comes to the unpermitted sharing of data.

50% believe that a GDPR-style regulation could be implemented by regulators in the US, with barely one-in-five believing it would never come to pass.


Why should innovation be embraced without scrutiny?

A level of scrutiny should be encouraged and is warranted as long as it doesn’t suffocate the process, since criminals tend to exploit weaknesses as soon as they emerge, and generally before the industry has time to fully investigate. That being said, applying unnecessary governance can be a barrier to true innovation, and diminish the opportunity of discovery and achievement by slowing the process down unnecessarily.

Data and algorithms are improving our supervisory approach, but what should financial institutions focus efforts on?

For financial crime, wider collaboration is the key and exactly where financial institutions should devote time, as data and algorithms in a silo can only go so far. Partnering and sharing intelligence will deliver new learning to ensure financial institutions keep pace. Regular pilots and investigative explorations should form a conveyor belt of that innovation, wherever possible focussing on collaboration. Of course, financial institutions have a long, competing list of priorities, but investment here, specifically in data science, aiming for tangible outcomes will reap rewards.

How can banks take a measured approach to keep pace with innovation and help combat financial crime?

The well-established banks have historically been less agile when reacting quickly to change or taking the lead when it comes to launching innovative products and solutions. It is true, of course, that they have many more customers and greater legacy technology challenges than, say, a challenger bank; however, the arrival of the so-called challenger bank and new initiatives such as Open Banking has shaken the industry into life. In order to keep pace, the industry as a whole needs to embrace agility and adopt a “start-up” mentality which encourages experimentation. Financial crime is already proving the ideal incubation environment for new ideas and technology to be tested: because bad actors move at an extremely rapid pace, it is essential that the industry is similarly agile and innovative to combat it. Innovations such as network-level money laundering detection, device fingerprinting, AI and machine learning have all had significant impacts in reducing fraud and money laundering, but more can always be done. We encourage financial institutions to dedicate teams focussed on innovation, who can work in a different way, but have the backing and the resources of the parent. However, ensuring that their mission is well communicated across the organisation and has the support from the various stakeholders is critical to success.


When it comes to combatting financial crime, why is there a spotlight on machine learning when hacks cannot be statistically analysed?

Machine learning is based on the principle that when given enough data, the machine can better detect and react to the subtleties of a problem. Where humans alone can generally interpret only the more obvious trends and patterns, a machine can comb through orders of magnitude more data to uncover the deeply hidden patterns that reveal differences in the data. For this reason, tackling financial crime lends itself very well to the technique. Financial crime involves identifying a relatively rare situation occurring amongst a huge pool of legitimate transactions – a true needle in a haystack. Fraudsters intentionally try to blend into the crowd and avoid obvious clues to their activity. Also, criminals experiment with new attacks and evolve existing ones rapidly, so reacting to them must also be done at speed. For these reasons, machine learning is a great tool in the arsenal of weapons to combat financial crime.

Can risk models be built using algorithms to monitor crimes such as money laundering?

Absolutely. Risk models are in operation today and are at the forefront of combating money laundering activity. The application of a rule-based strategy to detect money laundering is antiquated and is proving to be an efficiency overhead that is no longer useful. The key to truly exploiting algorithms to detect this type of criminal activity, however, is embracing a collaborative approach where the silos across entities are broken down. The act of laundering money is harder to detect at the single-transaction, single-bank level, and a wider network of activity, relationships and neighbourhoods is the best approach to tackle this global problem.
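The “needle in a haystack” framing above maps naturally onto unsupervised anomaly detection, where no labelled fraud examples are needed at all. The sketch below scores synthetic payment records with an isolation forest; the feature names and data are invented, and this is a generic illustration of the idea rather than a description of VocaLink’s products.

```python
# Minimal anomaly-detection sketch for the "needle in a haystack" problem:
# score each payment by how isolated it is from the bulk of normal activity.
# The data and feature names below are synthetic illustrations.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# 10,000 "normal" payments: modest amounts at typical business hours.
normal = np.column_stack([
    rng.normal(loc=250, scale=60, size=10_000),   # amount
    rng.normal(loc=13, scale=2.5, size=10_000),   # hour of day
])

# A handful of unusual payments hidden in the pool.
odd = np.array([[9_500, 3.0], [7_800, 2.5], [12_000, 4.0]])
payments = np.vstack([normal, odd])

model = IsolationForest(contamination=0.001, random_state=0).fit(payments)

# decision_function: lower scores mean "more anomalous".
scores = model.decision_function(payments)
most_suspicious = np.argsort(scores)[:5]
print(payments[most_suspicious])
```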


02 | Avoiding artificial stupidity

While most financial applications of artificial intelligence have been in the customer service space, there are other areas that banks are working to improve through the implementation of innovative technology.


3 PwC, ‘Getting real about AI and financial crime’ (2019).

Prag Sharma, head of emerging technology at Citibank’s TTS Global Innovation Lab, highlights in conversation with Finextra that there are a few main areas that financial institutions are working to improve with artificial intelligence technology, other than chatbots.

Sharma explains that at Citibank, they are also working to improve operational efficiency with artificial intelligence, because there are several predictions that the bank needs to make, for example predicting customers’ behaviours from a transactions or liquidity perspective, or detecting outliers in payments data. He adds that natural language processing is beginning to be used to handle the millions of documents that are usually processed manually.

“We’re a bank so compliance is a key area of focus. We’re looking at regtech and how that is going to affect us in the future, where we can make it easier for ourselves to have processes in place that continuously monitor various activities with natural language processing as the key technology enabler.”

Gulru Atak, global head of innovation at Citibank’s TTS Global Innovation Lab, says that the bank also has a platform called Citi Payments Outlier Detection that leverages machine learning to detect outliers in corporate customer payments.

On this point, Sharma says: “It’s a good example of Applied AI if you’re looking at what banks are interested in today, which is using AI to look at payments transactions and find anomalies. We could have bought something off the shelf and applied it, but we as an organisation looked at it and tried to figure out whether this would add serious value and truly understand the underlying algorithms, without having to rely on third parties because we understand our data better than others.”

Financial crime

As International Banker explained in an article last year, while chatbots appear to be the most visible use case of artificial intelligence and developments are being made in algorithmic trading, AI is also making considerable inroads in the compliance and security space. Money laundering continues to be a problem in global financial services and banks such as HSBC are exploring their options to combat this issue.

The article goes on to point out that in April 2018, HSBC partnered with big data startup Quantexa and piloted AI software to combat money laundering, following the bank’s earlier partnership with Ayasdi to automate anti-money laundering investigations that were being processed by thousands of human employees.

“The aim of the initiative is to improve efficiency in this area, especially given that the overwhelming majority of money-laundering investigations at banks do not find suspicious activity, which means that engaging in such tasks can be incredibly wasteful. In the pilot with Ayasdi, however, HSBC reportedly managed to reduce the number of investigations by 20 percent without reducing the number of cases referred for more scrutiny.”

But how then do companies make the most of the tools and techniques that AI offers now, while also preparing for what the future might look like? PwC highlighted that despite hundreds of millions being invested into technology that fights financial crime, many financial institutions are still struggling, as they continue to rely on what would be considered legacy infrastructure to keep up with new and evolving threats.3


4 KPMG, ‘The role of Artificial Intelligence in combating financial crime’ (2019).

“These models tend to be based on black and white rules and parameters; for example, if a transaction is over $10,000 or a person uses a credit card overseas, then it gets flagged. The problem with simplistic approaches like this is that they tend to throw up an enormous number of false positives. And in an environment of increased regulation, increasing competition and increased cost pressures, it doesn’t make sense to have your team trawling through thousands of alerts that don’t represent real financial crime tasks.”
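The “black and white rules” described in the quote above can be pictured as a couple of hard-coded checks. The sketch below borrows the thresholds from the quote ($10,000, overseas card use); the function itself is a hypothetical illustration of why such rules flood analysts with false positives, not any institution’s actual logic.

```python
# A caricature of the rule-based monitoring the quote criticises:
# fixed thresholds, no context, so every large or overseas payment is flagged.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount_usd: float
    country: str
    home_country: str = "US"

def rule_based_flag(tx: Transaction) -> bool:
    """Flag if the amount exceeds $10,000 or the card is used overseas."""
    return tx.amount_usd > 10_000 or tx.country != tx.home_country

txs = [
    Transaction(12_500, "US"),  # flagged: large, but may be a routine payroll run
    Transaction(45.00, "FR"),   # flagged: a coffee bought on a business trip
    Transaction(9_900, "US"),   # not flagged: structured just below the threshold
]
for tx in txs:
    print(tx, "->", "ALERT" if rule_based_flag(tx) else "ok")
```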

PwC explained that financial services companies are aware that AI is a faster, cheaper and smarter way of tackling financial crime, but there is a lot of confusion around how organisations should harness this technology – “just because a certain technique is feasible doesn’t mean that a company is in a position to apply it immediately.”

To remedy issues with financial crime, PwC suggested using AI to scan enormous amounts of data and identify patterns, behaviours and anomalies, because the technology can do so faster than humans can. “It can analyse voice records and detect changes in emotion and motivation that can give clues about fraudulent activities. It can investigate linkages between customers and employees and alert organisations to suspect dealings.”

KPMG delved deeper into this problem in its 2018 report, ‘The role of Artificial Intelligence in combating financial crime,’4 which explored how robotic process automation (RPA), machine learning, and cognitive AI can be adopted or combined to solve issues with financial crime today.

However, KPMG advised that to make “a reasoned decision as to what type, or mix of types, of intelligent automation a company should implement, financial crime stakeholders first need to design an intelligent automation strategy.

“This strategy depends on what investment the institution is willing to make and the benefits sought, including a weighting of the risk potentially involved, and the level of efficiency and agility desired. Therefore, the intelligent automation strategy should be aligned with the size and scope of the institution and its risk tolerance.”

KPMG also pointed to specific areas in financial crime compliance where intelligent automation could be used to reduce costs and increase efficiencies and effectiveness. For transaction monitoring, the first suggestion was for institutions to build on alerts and cases that have previously occurred, and on any existing machine learning models, to establish a domain knowledge base that the cognitive platform can rely on.

“It is the key to monitoring the risks the institution already knows and has identified. Instead, it looks at patterns that exist in the data to identify if those patterns have been seen previously.” The second suggestion from KPMG was to use machines to “automate aspects of the review process and deployed to build statistical models that incorporate gathered data and calculate a likelihood of occurrence (closure or escalation).”



5 FCA, ‘AI and financial crime: silver bullet or red herring?’ (2018).
6 International Banker, ‘How AI is disrupting the banking industry’ (2018).

The third point was to employ bots to scan the internet and public due diligence sites “to collect relevant data from internal and other acceptable sources,” which would save analysts valuable time. For Know Your Customer (KYC), the report identified areas such as applying judgement to these domain areas using RPA and machine learning. This allows financial crime officers to make KYC a priority because the information they obtain better reflects actual risks.

In addition to this, machine learning can also automate the extraction of data from unstructured documents, while RPA can enable institutions to be provided with a more reliable and more efficient customer-risk rating process, and in turn, more of a real-time risk assessment. RPA also has the potential of reducing, or even eliminating, the need to contact customers repeatedly.

Alongside this, in a speech given at Chatham House in November 2018,5 Rob Grupetta, head of the financial crime department at the Financial Conduct Authority, pointed to how “the spotlight is squarely on machine learning,” which has been “largely driven by the availability of ever larger datasets and benchmarks, cheaper and faster hardware, and advances in algorithms and their user-friendly interfaces being made available online.”

Grupetta continued: “But financial crime doesn’t lend itself easily to statistical analysis – the rules of the game aren’t fixed, the goal posts keep moving, perpetrators change, so do their motives and the methods they use to wreak havoc. Simply turning an algorithm loose without thinking isn’t a suitable approach to tackling highly complex, dynamic and uncertain problems in financial crime.

“That’s not to say we can’t use algorithms and models alongside our existing approach to help us be more consistent and effective in targeting financial crime risks. Consider building a risk model using algorithms: using a set of risk factors and outcomes, we could come up with a kind of mathematical caricature of how the outcomes might have been generated, so we can make future predictions about them in a systematic way.

“For example, in a money laundering context, the risk factors could be a firm’s products, types of customers and countries it deals with, and the outcomes could be detected instances of money laundering. Unfortunately, it’s quite difficult to acquire robust figures on money laundering as industry-wide data is hard to come by, and criminals aren’t exactly in the habit of publicising their successes. Crimes like money laundering – a secret activity that is designed to convert illicit funds into seemingly legitimate gains – is particularly hard to measure.”
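Grupetta’s “mathematical caricature” can be made concrete in a few lines: encode a firm’s risk factors (products, customer types, countries dealt with) and fit a model against detected money-laundering outcomes. Everything below, from the factor names to the numbers, is an invented illustration rather than the FCA’s actual model.

```python
# Illustrative "mathematical caricature" of money-laundering risk, in the
# spirit of the FCA example: risk factors in, predicted likelihood out.
# Firms, factor values and outcomes below are entirely made up.
import pandas as pd
from sklearn.linear_model import LogisticRegression

firms = pd.DataFrame({
    "offers_correspondent_banking": [1, 0, 1, 0, 1, 0, 0, 1],
    "share_high_risk_countries":    [0.4, 0.05, 0.3, 0.02, 0.5, 0.1, 0.03, 0.35],
    "share_cash_intensive_clients": [0.3, 0.1, 0.25, 0.05, 0.4, 0.15, 0.02, 0.3],
    # Outcome: whether money-laundering instances were detected at the firm.
    "ml_detected":                  [1, 0, 1, 0, 1, 0, 0, 1],
})

X = firms.drop(columns="ml_detected")
y = firms["ml_detected"]
model = LogisticRegression().fit(X, y)

# Score a new (hypothetical) firm to prioritise supervisory attention.
new_firm = pd.DataFrame([{
    "offers_correspondent_banking": 1,
    "share_high_risk_countries": 0.2,
    "share_cash_intensive_clients": 0.15,
}])
print("predicted likelihood of detected ML:", model.predict_proba(new_firm)[0, 1])
```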

To resolve this issue, he explained that the FCA had introduced a financial crime data return back in 2016, which would provide an industry-wide view on key risks that banks face and help target supervisory resources at the firms most exposed to inherent risk. “We are moving away from a rule-based, prescriptive world to a more data-driven, predictive place where we are using data to help us objectively assess the inherent financial crime risk posed by firms. And we have already started experimenting with supervised learning models to supervise the way we supervise firms – ‘supervised supervision’, as we call it.”

Substituting humans

However, while on one hand AI technology can reduce the number of times that a customer needs to be contacted, as in the example highlighted above, fears are also mounting around the substitutability of bank employees, as the International Banker article also discussed.6


7 Darktrace, ‘The Next Paradigm Shift: AI-Driven Cyber-Attacks’ (2018).

While there have been many statistics and news articles bandied about, Lex Sokolin, global director for fintech research firm Autonomous Next, revealed that AI adoption across financial services could save US companies up to $1 trillion in productivity gains and lower overall employment costs by 2030.

The article also pointed to ex-Citigroup head Vikram Pandit’s expectation that AI could render 30% of banking jobs obsolete in the next five years, asserting that AI and robotics “reduce the need for staff in roles such as back office functions”. Japan’s Mizuho Group plans to replace 19,000 employees with AI-related functionality by 2027, and recently departed Deutsche Bank CEO John Cryan once considered replacing almost 100,000 of the bank’s personnel with robots.

However, conflicting data suggested that AI may also result in a rise in banking jobs: a recent study from Accenture found that by 2022, a 14% net gain is likely in jobs that effectively use AI, in addition to a 34% increase in revenues. Accenture also finds that the most mundane human jobs will be replaced by robots, leaving banking employees to focus on more interesting and complex jobs, improving work-life balance and helping career prospects.

In conversation with Finextra Research, Paul Hollands, chief operating officer for data and analytics for NatWest, highlights that this could be a problem, because there is a skills gap and “there is a change in the skills required in all organisations as the ability to use machine learning, to use robotics and artificial intelligence increases.”

Hollands goes on to discuss how employers have a responsibility to ensure that the people within the organisation also have the core skills to help them grow. OakNorth’s Amir Nooralia has a similar attitude and says that it is not about “machine replacing man (or woman), but rather machine enhancing human. Think Iron Man suit boosting a human rather than an all-knowing robot.”

Like the healthcare sector, which will continue to require a human’s emotional response, “when it comes to finance, it is very personal and there are situations that will require empathy and emotional intelligence – e.g. a customer who might be experiencing anxiety or mental stress as a result of debt. It’s not like travel where the process involves getting from A to B, or retail which is purely transactional, so the human element is less important.”

Nooralia then goes on to reference a recent Darktrace whitepaper, ‘The Next Paradigm Shift: AI-Driven Cyber-Attacks’,7 in which the organisation believes that in the future, “malware bolstered through AI will be able to self-propagate and use every vulnerability on offer to compromise a network,” Nooralia says.

In the whitepaper, Darktrace state that “instead of guessing during which times normal business operations are conducted, [AI-driven malware] will learn it. Rather than guessing if an environment is using mostly Windows machines or Linux machines, or if Twitter or Instagram would be a better channel, it will be able to gain an understanding of what communication is dominant in the target’s network and blend in with it.”

Nooralia adds: “A human will always be guessing and will never be able to learn as quickly as a machine can, so it is inevitable that the machine will be better in comparison.”


Expert view: Rajiv Desai, SVP – US Operations, Pelican

In a Q&A interview with Finextra, Rajiv Desai, SVP – US Operations at Pelican, discusses AI and the potential for transformation, in addition to the challenges that the world of real-time payments presents and how compliance plays a part in this process.


How do you see AI transforming banking in the future?

Artificial Intelligence is already a ubiquitous part of our everyday lives, and banks have been deploying AI for several decades in task-specific ways. AI in transaction banking has been used to address key bottlenecks in payments and financial crime compliance. These are the areas where thousands of people are used in back offices worldwide to do repetitive tasks which require basic human intelligence. Application of AI to these areas will continue to grow, as these are some of the main causes of inefficiencies and last-mile problems that banks have to solve. However, we are now also at an inflection point in banking transformation. This will also transform AI from being a “nice to have” enhancement provider to a “must have” facilitator of an open banking and real-time digital banking environment.

Can you explain how you see the challenges of today’s real-time payments world being addressed by AI?

In today’s real-time environment, complex processing and compliance decisions are made within a few seconds. It is simply not possible in this increasingly digital and 24/7 instant payment world to throw more human resources at the problem. The human body and mind simply lack the ability to consistently and systematically assess, investigate and decide on matters 24x7 within seconds. AI is the only solution available to address this need to complete the existing processing tasks and the new challenges facing us, such as high value payment fraud. Real-time fraud detection in high value payments will gain increasing importance and AI will play a prominent role in addressing these issues.

Are there other compliance areas where you see AI playing a major role?

In addition to tackling the growing problem of payments fraud, sanctions screening obligations in a real-time environment can be incredibly challenging for banks, often resulting in very high false positives, or wrong hits, in financial crime compliance. We have noticed that with dozens of watchlists containing thousands of patterns of names, companies, ships and cities, many words trigger false alerts. However, most of the time, humans can quickly and easily decide that the hit is not real using context and common sense. For instant payments it is clearly not practical to have humans take these decisions, so using natural language processing, AI technology can easily figure out whether “Laura” is a ship or the first name of a person, or if “Iran” is a street name in Denver or a blacklisted country. In addition, auditability and examinability are particularly important in these regulated contexts – banks need to have full confidence in their ability to fully demonstrate and explain the decisions that AI processes have taken.
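The kind of context-based disambiguation Desai describes can be approximated with off-the-shelf named entity recognition. The sketch below uses spaCy’s small English model (which must be downloaded separately) purely as an illustration of the idea; it is not Pelican’s screening technology, and the example sentences are invented.

```python
# Sketch of context-based disambiguation for sanctions screening, using an
# off-the-shelf spaCy model (requires: python -m spacy download en_core_web_sm).
# A generic NER pass like this only illustrates the idea described above.
import spacy

nlp = spacy.load("en_core_web_sm")

texts = [
    "Wire USD 50,000 to the vessel Laura, berth 4, Port of Rotterdam.",
    "Payment to Laura Jensen, 1200 Iran Street, Denver, CO.",
]

for text in texts:
    doc = nlp(text)
    for ent in doc.ents:
        # Entity labels such as PERSON, GPE (countries/cities) or FAC give a
        # screening engine context for deciding whether a watchlist hit is real.
        print(f"{ent.text!r:25} -> {ent.label_}")
```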


03 | Filtering the ethics of AI

As AI-based decision-making becomes more accepted, pure economics might not align with the softer strategies of a bank. Many financial institutions are questioning how artificial intelligence should be governed within an organisation and how it can be taught to align with a bank’s brand and ethos, but without influence from human judgement.


8 BBC, ‘Google’s ethics board shut down’ (2019).
9 Bloomberg, ‘The Google AI Ethics Board With Actual Power Is Still Around’ (2019).
10 European Commission, ‘Ethics guidelines for trustworthy AI’ (2019).

While AI has dominated news headlines over the past year or so, the majority of announcements and research has been around the ethics of the technology and how to manage or avoid bias in data. In April 2019, a fortnight after it was launched, Google scrapped its independent group set up to oversee the technology corporation’s efforts in AI tools such as machine learning and facial recognition.

The Advanced Technology External Advisory Council (ATEAC) was shut down after one member resigned and there were calls for Kay Coles James, president of conservative thinktank The Heritage Foundation, to be removed after “anti-trans, anti-LGBTQ and anti-immigrant” comments, as reported by the BBC.8 Google told the publication that it had “become clear that in the current environment, ATEAC can’t function as we wanted.

“So we’re ending the council and going back to the drawing board. We’ll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics.”

The big tech example

Many industry experts expressed confusion at the decision and referred to Google as being naïve. However, as it was only the external board that was shut down, as Bloomberg reported, the “Google AI ethics board with actual power is still around.”9

The Advanced Technology Review Council was assembled last year as an attempt to “represent diverse, international, cross-functional points of view that can look beyond immediate commercial concerns.” Many technology giants have laid out ethical principles to guide their work on AI, so why haven’t financial services institutions?

Bloomberg referenced the AI Now Institute, which wrote in a report last year that “Ethical codes may deflect criticism by acknowledging that problems exist, without ceding any power to regulate or transform the way technology is developed and applied. We have not seen strong oversight and accountability to backstop these ethical commitments.”

Days after Google scrapped their external ethics board, the European Union published new guidelines10 on developing ethical AI and how companies should use the technology, following the release of draft ethics guidelines at the end of last year.

After the EU convened a group of 52 experts, seven requirements were established that future AI systems should meet:

1. Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.

2. Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.

3. Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.

4. Transparency: The traceability of AI systems should be ensured.

5. Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.

6. Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.

7. Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.


The EU also explained that in the summer of this year, the Commission will launch a pilot phase that would involve a number of stakeholders, but companies, public administrations and organisations are welcome to sign up to the European AI Alliance today.

Potential regulation

At NextGen Banking London, Maciej Janusz, head of cash management, Nordic region, at Citibank, brought up the subject of regulation and said that regulation “comes when something crashes. Banks will be reluctant to implement AI without human oversight.” Comments on regulatory frameworks were also made by Monica Monaco, founder of TrustEU Affairs, who revealed that governance, at the moment, only exists in the form of data protection, specifically Article 22 of GDPR, which could become a source for future principles to govern AI and the use of algorithms in financial services.

Monaco also made reference to the European Commission’s ‘AI for Europe’ report which was published on the 25th April, which she recommended everyone read. On GDPR, Monaco said that the right to be forgotten could become problematic, as it would also apply to institutions, not just individuals.

A question was raised as to whether AI could be a leveler, as the technology is shining a light on all issues, especially the non-diverse nature of the industry.

Ekene Uzoma, VP digital product development at State Street argued that the issue with data abuses is that they start to take on different forms, so predicting may be a little difficult. He also spoke about education and how there needs to be a recognition that we cannot look to the “altar of technology” to solve problems.

According to Terry Cordeiro, head of product management - applied science and intelligent products at Lloyds Bank, “AI will automate repeatable work, but where does that leave us [humans]? We could say that the workforce of the future will be more relationship-based. Banks need to look at how to foster new talent and how to develop existing teams.”

Cordeiro continued: “Even algorithms need parents. And the parents have the responsibility to train them, but where are these people? They don’t exist.” In conversation with Finextra, Monzo’s machine learning lead Neal Lathia highlights that “there is bias everywhere, and a lot of active research on measuring, detecting, and trying to remedy it. I don’t think it’s too late – it’s a problem that will have to be constantly revisited.”

Nooralia also has a view on this and says: “The challenge lies in AI’s ‘black box’ problem and our inability to see the inside of an algorithm and therefore understand how it arrives at a decision. Unfortunately, as we’ve seen in several circumstances, AI programmes will replicate the biases which are fed into them and these biases originate from humans. So, the first step in eliminating these biases is to open the ‘black box’, establish regulations and policies to ensure transparency, and then have a human examine what’s inside to evaluate if the data is fair and unbiased.”

Sara El-Hanfy, innovation technologist – machine learning & data at Innovate UK, explains that “an AI system is not in itself biased. These systems are being designed by humans and will therefore reflect the biases of the developers or the data that is selected to train the system. While there is absolutely a risk that AI systems could amplify bias at scale, there is also an opportunity for AI to improve transparency and tackle existing biases.” She provides recruitment as an example and says that it is “good that we are becoming more aware of the possible unintentional harms of using AI technologies, and by having these conversations, we can advance understanding of AI and establish best practices.”

Innovate UK’s Stephen Browning adds that there is a “need for humans to work in a way that doesn’t perpetuate bias into the data and on to the system. We are very conscious of that as something that would hold back the use of this type of technology or damage the benefits you could potentially obtain from AI,” he says – somewhat paraphrasing the concerns of the AI Now Institute.

Browning continues to say that “what really holds AI back and undermines it is the human aspect, and not the technical aspects, and that is what we’re working on. There are also activities across the UK government that are trying to address this, such as the Centre for Data Ethics and Innovation.”

Prag Sharma, head of Emerging Technology, TTS Global Innovation Lab at Citibank, also believes that this is a real concern in this day and age, especially with the emergence of explainable AI and more financial institutions wanting to know how certain decisions are being reached.

In order to solve the issues with ethical AI, Sharma suggests introducing “rules and regulations around an audit trail of the data, so we are aware of what is produced, what is consumed and how the result will reflect that.” But in reality, we are only just coming to terms with how this technology actually works and it is not a case of financial services staying a step ahead of big technology corporations either.

Annerie Vreugdenhil, chief innovation officer, ING Wholesale Bank says that it “is not about winning or losing, it is about making the most of partnerships and the technical expertise and capabilities from both sides. For example, we believe that collaborating with fintechs is key, because we can’t do it alone anymore. Partnerships can be beneficial for both parties: fintechs can bring agility, creativity and entrepreneurship, while financial institutions like ING bring a strong brand, a large client base with an international footprint, and breadth of industry expertise.”


Expert view: Michael Conway, Associate Partner, Global Business Services, IBM

In a Q&A interview with Finextra, Michael Conway, Associate Partner, Global Business Services from IBM, explains how banks can prevent bias to avoid a breakdown in trust in regard to the technology and the financial institution itself.

How can we ensure that the data we feed into AI systems are not racially, sexually or ideologically biased?

Bias in AI systems mainly occurs in the data or in the algorithmic model. As we work to develop AI systems we can trust, it’s critical to develop and train these systems with data that is unbiased and to develop algorithms that can be easily explained. As AI systems find, understand, and point out human inconsistencies in decision making, they could also reveal ways in which we are partial, parochial, and cognitively biased, leading us to adopt more impartial or egalitarian views.

Training the AI system is key and it must be quantitatively and qualitatively assessed. Whilst the orchestration and engineering of the system will be underpinned by devops and automation, we do not have the same level of process sophistication for training AI at the moment. As a result, we must be smart about how we approach training. Data science and machine learning have a very important role in delivering focused and appropriate training for the AI platform as it matures. However, if you only focus on the numbers there’s a chance you will deliver for the 80% and forget the 20%. Qualitative assessments and manual reviews are essential to understanding with evidence how AI is performing and therefore where bias may be entering the system, focusing upon questions like ‘did the AI system satisfy the question’, not simply ‘did it give the most appropriate response’. In the early days of AI evolution, we need to be overly critical to ensure that we provide a realistic baseline for the system to replicate.
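One small, concrete example of the quantitative assessment Conway describes is comparing outcomes across groups in the training data. The sketch below computes approval rates and a disparate-impact ratio on invented data; the attribute, the figures and the 0.8 rule-of-thumb threshold are illustrative only, not an IBM or regulatory prescription.

```python
# Minimal sketch of a quantitative bias check on training data: compare
# approval rates across a protected attribute and compute their ratio.
# The data, attribute and 0.8 threshold are illustrative only.
import pandas as pd

applications = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
    "approved": [1,    1,   0,   1,   1,   0,   0,   1,   0,   0],
})

rates = applications.groupby("group")["approved"].mean()
print(rates)

# Disparate impact ratio: selection rate of the worst-off group divided by
# that of the best-off group. Values well below ~0.8 are a common warning sign.
ratio = rates.min() / rates.max()
print(f"disparate impact ratio: {ratio:.2f}")
```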


How can bias be tamed and tackled so that AI systems are as successful as they can be in what they have been trained to do?

At this stage of AI training, and in the heavily regulated environment we operate in, “Assisted Learning” is critical to making sure that this type of bias is closely monitored. The application of wholescale automated testing and the growing discipline of deep learning to understand the performance of AI, as well as products (such as IBM’s OpenScale), help us better interpret the performance of the AI corpus. In parallel we must challenge ourselves to build diverse teams in thinking, in background and in approach to make sure we don’t suffer from “group thinking”. As mentioned above, recruitment in finance is more heavily focused on technical capabilities than ever before. Ensuring that we balance this with a diverse and rounded perspective will help mitigate the risk of bias from the outset.

At what stage in the process does discrimination need to be prevented?

Discrimination should be prevented at every stage of AI evolution, from inception to training. This is done by establishing a thorough and appropriate control framework, one that allows the AI to flourish but has appropriate measures and controls to ensure you know how the system evolves over time.

Following this, how can banks make sure that bias in AI does not break down trust that humans have in machines, but also the trust that customers have in their banks?

Transparency. Transparency in process, decisioning and training. Transparency in informing customers they are talking to a virtual assistant. By ensuring banks have an appropriate process for management and training of AI at scale, this will facilitate full end-to-end transparency in the operation. In delivering this, we can provide appropriate control points that allow these systems to grow and proliferate across the enterprise with confidence that we are treating customers fairly throughout their engagement with AI. What is more, these control points will provide the evidence that the regulator will require when reviewing how banks are using AI across their endeavours.

Once this bias has been mitigated, to what extent can the financial services industry be transformed?

There is a long way to go before we can speak with confidence that the financial services industry has been transformed; however, we are on the crest of an AI wave that can take the industry a long way. If we can strike the right balance of the use of AI technology with the transparency mentioned above, we will be able to ensure that we can maintain the trust of the customer, the enterprise and the regulator – all of which are essential to ensuring this technology is not simply the latest buzzword in financial services. Once this can be established, we will be able to answer our risk counterparts and begin changing the dial in the world of risk appetite for this technology. If we can evidence that it is safe to operate this technology at full enterprise scale and that the control points are in place at every stage of the customer engagement, there is no reason why the future of financial institutions cannot be centred around artificial intelligence.


04 | Explainable and auditable AI

At NextGen Banking London, Jason Maude, head of technology advocacy at Starling Bank, advised that explainable AI is necessary. “We cannot just say that it is because the computer has said so. People are not going to trust that answer when they are declined for a loan application or another product. The trust that people need to have for banks to function will not be there if we leave it up to the computer.”



Even if machine learning is not yet at the heart of these processes, that does not mean we should not interrogate them. Maude continued, saying that software engineering techniques like version control should be introduced: “Do testing where data sets are randomised before being put into the system to see how the output has changed. Provide an audit trail that regulators could start demanding.”
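A rough sketch of what such a test could look like in code is given below: randomise (here, shuffle) one input column, re-score the model, and write the observed shift to an append-only audit log. The fitted classifier, the DataFrame of features, the column names and the log location are assumptions made for the example rather than anything prescribed by Maude or Starling Bank.

# Sketch of a randomised-input test with an audit trail. Assumptions (not
# from the report): a fitted scikit-learn-style classifier `model` exposing
# predict_proba, and a pandas DataFrame `X` of the model's input features.
import json
import time

import numpy as np


def shuffled_column_shift(model, X, column, seed=0):
    """Shuffle one feature column and measure how far predictions move."""
    rng = np.random.default_rng(seed)
    baseline = model.predict_proba(X)[:, 1]
    perturbed = X.copy()
    perturbed[column] = rng.permutation(perturbed[column].to_numpy())
    shifted = model.predict_proba(perturbed)[:, 1]
    return float(np.abs(shifted - baseline).mean())


def record_test(column, mean_shift, path="model_audit_log.jsonl"):
    """Append a timestamped record that a reviewer or regulator can replay."""
    entry = {
        "timestamp": time.time(),
        "test": "shuffled_column",
        "column": column,
        "mean_prediction_shift": mean_shift,
    }
    with open(path, "a") as handle:
        handle.write(json.dumps(entry) + "\n")


# Example run across every input feature:
# for col in X.columns:
#     record_test(col, shuffled_column_shift(model, X, col))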

However, Jonathan Williams, principal consultant at Mk2 Consulting, pointed out that this is difficult because regulators are not from a technological background. “The other challenge is looking at the outcomes and checking they are in line with what we would expect. Humans bring their own biases, but we cannot automatically test those. Regulators have a steep learning curve to ascend.”

Maude then made a salient point that stayed with much of the audience for the rest of the conference: “I don’t think we will reach a point where humans will not be able to explain what is going on. We may get to a point where the cost of explainability outweighs the benefit.”

Explainable AI lets humans understand and articulate how an AI system made a decision, but questions are still being raised about the potential consequences of AI-based outcomes and whether this level of explanation is always needed: some low-stakes AI systems may be fine with a black box model, where the results are not fully understood.

Jane.ai head of artificial intelligence R&D Dave Costenaro explained: “If algorithm results are low-impact enough, like the songs recommended by a music service, society probably doesn’t need regulators plumbing the depths of how those recommendations are made.”11 While a person may be able to get by after being recommended a song they don’t like, AI systems asked to make decisions about medical treatments or mortgage loans are a different matter.

11 The Enterprisers Project, ‘Explainable AI: 4 industries where it will be critical’ (2019).

As the responsibility shifts from human to machine, the need for explainability increases.

“If an algorithm already has humans in the loop, the human decision-makers can continue to bear the responsibility of explaining the outcomes as previously done. This can help the radiologist work more accurately and efficiently, but ultimately, he or she will still provide the diagnosis and explanations,” Costenaro said.

However, as AI matures, we’re likely to see the growth of new applications that decreasingly rely on human decision-making and responsibility. Costenaro continued: “For a new class of AI decisions that are high-impact and that humans can no longer effectively participate in, either due to speed or volume of processing required, practitioners are scrambling to develop ways to explain the algorithms.” It is up to IT leaders to take the reins to ensure their company uses AI properly and incorporates explainability when necessary.


In a Q&A interview with Finextra, Dan Reid, CTO and founder of Xceptor, discusses how AI has progressed in the financial services industry and the obstacles banks have to overcome to leverage the technology in the same way that other sectors have.

Expert view:

Dan Reid, CTO and Founder, Xceptor


How is AI changing business across all industries?

Often the issue with AI is that it means something different to everyone you talk to, so no one is really sure what they should expect out of AI, what changes they are looking for and how best to go about it. It’s creating confusion rather than clarity. AI isn’t a single thing; rather, it is a series of building blocks that solve business problems by learning from vast amounts of structured and unstructured data. Typically you have to start by outlining what you mean by AI. For us, the main building blocks are machine learning and natural language processing, with a heavy focus on data transformation: being able to ingest all manner of data types, from spreadsheets to PDFs, right up to emails written in colloquial shorthand. With 80% of a firm’s data typically unstructured, this opens the door for a business to really get its arms around its vast data banks, automating the ingestion of emails, PDFs and contracts, and then being able to interrogate them and derive smart analytics.

What can financial services learn from successful case studies?

They can help identify some good places to start. We’ve been working with clients on areas such as using natural language processing to classify unstructured emails and to extract relevant data points from them. This process typically achieves a high level of automation. As with any process, AI or not, exceptions can occur, and these can be flagged by validation rules. Other areas include NAV validation, fraud detection and named entity recognition. These are just a few examples, all focused on data enrichment: building better data models to drive smarter analytics. That is where the business value is, and it is essential that this value can be demonstrated.
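As a rough illustration of that pattern, the sketch below classifies an incoming message, extracts a reference with a simple rule, and routes anything low-confidence or rule-failing to an exception queue. The sample emails, labels, regular expression and confidence threshold are invented for the example; this is not a description of Xceptor’s product.

# Minimal sketch of the email triage pattern described above: classify the
# message, extract a data point, and flag exceptions for human review.
# All sample data, labels, the regex and the threshold are illustrative only.
import re

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_emails = [
    "please confirm the settlement of trade TRD-1001",
    "update my postal address to 5 high street",
    "what is the status of payment TRD-2044?",
]
train_labels = ["settlement", "account_update", "payment_query"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_emails, train_labels)


def triage(email, threshold=0.5):
    """Return a routing decision plus any extracted trade reference."""
    probabilities = classifier.predict_proba([email])[0]
    label = classifier.classes_[probabilities.argmax()]
    reference = re.search(r"TRD-\d+", email)  # simple extraction rule
    # Validation rule: low classifier confidence, or a trade-related email
    # with no reference number, goes to the exception queue for review.
    if probabilities.max() < threshold or (label != "account_update" and not reference):
        return {"route": "exception_queue", "suggested_label": label}
    return {"route": label, "reference": reference.group(0) if reference else None}


# Example:
# print(triage("could you check whether payment TRD-9001 has settled?"))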

What are the challenges that hinder banks from implementing AI technology?

Part of it is cultural: people aren’t sure what AI means for them, their role, their jobs, but people are as important to a successful deployment as the technology or the analytics. Part of it is treating AI as a single category: there are so many building blocks in the AI repertoire, and it’s a matter of identifying the best fit for the task at hand. And a big part of it is data quality and maturity. Access to the right data, of a reasonable quality, is often the biggest hurdle.

Scaling up AI is one of the biggest challenges for firms. We see pockets of deployment, but rarely enterprise-wide. There is still a long way to go, and identifying the right part of AI for the right task is key to success.


05 | Conclusion

While early forms of artificial intelligence were created and designed to mimic human nature, controversy and misconceptions emerged. After a period of quiet development, AI has undergone a modern revolution and techniques such as machine learning have come to the fore.


Most financial applications of AI have been in the customer service space, but there are other areas, such as fraud prevention, that banks are working to improve. However, as AI-driven decision-making becomes more widely accepted, it may not always align with the softer strategies of a bank.

Many financial institutions are questioning how artificial intelligence should be governed within an organisation, and how it can be taught to align with a bank’s brand and ethos without the influence of human judgement.

The future rests on explainable AI, and banks should interrogate their processes; but to do this, regulators have a steep learning curve to ascend. A problem also emerges if humans cannot explain how AI works, or if the cost of that explanation comes to outweigh the benefit of the technology.


06 | Bibliography

BBC, ‘Google’s ethics board shut down’ (2019). Available at: https://www.bbc.co.uk/news/technology-47825833 [Accessed 5/4/2019].

Bloomberg, ‘The Google AI Ethics Board With Actual Power Is Still Around’ (2019). Available at: https://www.bloomberg.com/news/articles/2019-04-06/the-google-ai-ethics-board-with-actual-power-is-still-around [Accessed 6/4/2019].

Darktrace, ‘The Next Paradigm Shift: AI-Driven Cyber-Attacks’ (2018). Available at: https://www.darktrace.com/en/resources/wp-ai-driven-cyber-attacks.pdf [Accessed 4/3/2019].

European Commission, ‘Ethics guidelines for trustworthy AI’ (2019). Available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai [Accessed 8/4/2019].

FCA, ‘AI and financial crime: silver bullet or red herring?’ (2018). Available at: https://www.fca.org.uk/news/speeches/ai-and-financial-crime-silver-bullet-or-red-herring [Accessed 4/3/2019].

Financial Times, ‘Europe’s AI start-ups often do not use AI, study finds’ (2019). Available at: https://www.ft.com/content/21b19010-3e9f-11e9-b896-fe36ec32aece [Accessed 4/3/2019].

International Banker, ‘How AI is disrupting the banking industry’ (2018). Available at: https://internationalbanker.com/banking/how-ai-is-disrupting-the-banking-industry/ [Accessed 4/3/2019].

KPMG, ‘The role of Artificial Intelligence in combating financial crime’ (2019). Available at: https://assets.kpmg/content/dam/kpmg/ch/pdf/the-role-of-artificial-intelligence-in-combating-financial-crime.pdf [Accessed 1/3/2019].

MMC Ventures, ‘The State of AI: Divergence’ (2019). Available at: https://www.stateofai2019.com/ [Accessed 1/3/2019].

PwC, ‘Getting real about AI and financial crime’ (2019). Available at: https://www.pwc.com.au/consulting/assets/ai-financial-crime-article-07feb18.pdf [Accessed 1/3/2019].

The Enterprisers Project, ‘Explainable AI: 4 industries where it will be critical’ (2019). Available at: https://enterprisersproject.com/article/2019/5/explainable-ai-4-critical-industries [Accessed 29/5/2019].


About Finextra

This report is published by Finextra Research.

Finextra Research is the world’s leading specialist financial technology (fintech) news and information source. Finextra offers over 115,000 items of specialist fintech news, features and TV content items to 420,000 unique monthly visitors to www.finextra.com.

Founded in 1999, Finextra Research covers all aspects of financial technology innovation and operation involving banks, institutions and vendor organisations within the wholesale and retail banking, payments and cards sectors worldwide.

Finextra’s unique global community consists of over 30,000 fintech professionals working inside banks and financial institutions, specialist fintech application and service providers, consulting organisations and mainstream technology providers. The Finextra community actively participate in posting their opinions and comments on the evolution of fintech. In addition, they contribute information and data to Finextra surveys and reports.

Finextra reports coming in 2019:

The Future of Payments

The Future of Trade Finance

The Future of Core Banking

The Future of Cybersecurity

Contact [email protected] to get involved.