

Whitepaper

Deploying Observational Learning for Improved Transaction Data Quality

www.datamanagementinsight.com

commissioned by SmartStream



Introduction

Banks and other financial institutions are dependent on data quality to support automated processes that can streamline their operations. By automating key elements of the transaction workflow, institutions can reduce the number of exceptions and their associated costs, optimising operational processes and improving overall efficiency.

But data quality can be a barrier to realising the promise of automation. The complexity of today’s marketplace means that many banks are forced to deal with huge volumes of transaction data, across an array of inputs and formats. Legacy manual data validation techniques are withering under the data deluge, and the resulting data quality issues have implications for straight-through processing (STP) rates.

New AI techniques - most notably the concept of observational learning - can help address these data quality issues by building on reconciliation systems’ ‘knowledge’ of the firm’s data preferences, allowing financial institutions to reduce the number of data issues that need attention and speed up the mitigation process for those that remain.

This paper looks at the data challenges that are hindering firms’ attempts to improve STP rates, and explains how observational learning can help. It discusses specific use-cases for AI and observational learning in critical operational and regulatory processes, and describes how SmartStream has added the Affinity observational learning capability it developed in its Innovation Lab to its SmartStream Air cloud-native reconciliations offering.

“Data quality can be a barrier to realising the promise of automation.”


Current Landscape: Dealing with High Volumes of Transaction Data

Financial institutions of all scales and activity types are bombarded with vast quantities of data from external sources on a near real-time basis. Data updates from trading or payments counterparties, industry utilities, market information sources and client organisations together create a massive challenge for firms seeking to streamline their internal operational processes.

Incoming data often arrives in different formats and in high volumes, making it difficult to validate and verify to ensure accuracy and comprehensiveness. Delivering data to receiving applications in a timely manner is also a major challenge. Firms also frequently compare data sets across disparate internal databases - covering entities, client data and holdings information - which themselves use different formats and are in many cases unstructured.
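Comparing records across such sources at all requires first bringing them into a common shape. The Python sketch below illustrates the idea with two invented source formats and hypothetical field names; it is a simplified illustration of the normalisation step, not any particular vendor's implementation.

```python
# Minimal sketch (hypothetical field names): normalising records from two
# differently formatted sources before comparing them field by field.
from datetime import date

def normalise_internal(rec: dict) -> dict:
    """Internal system feed: ISO dates, numeric quantities, ISIN under 'isin'."""
    return {
        "trade_id": rec["trade_id"],
        "isin": rec["isin"],
        "trade_date": date.fromisoformat(rec["trade_date"]),
        "quantity": float(rec["quantity"]),
    }

def normalise_counterparty(rec: dict) -> dict:
    """Counterparty feed: DD/MM/YYYY dates, quantities as strings with commas."""
    day, month, year = (int(p) for p in rec["TradeDate"].split("/"))
    return {
        "trade_id": rec["Reference"],
        "isin": rec["InstrumentISIN"],
        "trade_date": date(year, month, day),
        "quantity": float(rec["Qty"].replace(",", "")),
    }

def mismatched_fields(a: dict, b: dict) -> list[str]:
    """Return the fields on which two normalised records disagree."""
    return [k for k in a if a[k] != b[k]]
```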

Regulation is adding to the complexity of the picture. New and emerging rules - the Basel Committee’s BCBS 239, the EU’s SFTR and MiFID II reporting requirements, and new regulations in Australia, Hong Kong, Singapore and elsewhere in the Asia-Pacific region - are highly data-centric, further raising the stakes.

From an implementation standpoint, firms are facing a resource crunch. This is compounded by the fact that any errors in the trade-/transaction-reporting area will require re-reporting. A firm that finds it has been misreporting trades is not only required to inform the regulator but must also re-submit the affected trades. This represents a major operational issue on top of the penalties, especially for firms that self-report. Firms also need to prove remediation, re-submit the reports and re-certify with the regulator.

A firm discovering 10,000 misreported trades, for example, would need to go back and fix those trades and re-report them to comply with the new reporting environment. This requires them to understand what changes need to be made to the data contained in the reports and how to rectify the reports themselves.

Firms failing to report correctly, meanwhile, run the risk of reputational damage, with obvious consequences for their relationships with counterparties and customers. Furthermore, transgressors face significant financial penalties in addition to the remediation costs outlined above.

Changing market and regulatory dynamics make the data management challenge tougher. The importance or relevance of any given data field or attribute may change according to the activity or changes to regulations.

Managing these challenges, and verifying data sets to ensure quality is high enough for the intended purpose, is difficult. While there are point solutions available in the marketplace, many make use of pre-defined functions and lack the flexibility and agility needed to deal with dynamic situations. As a result, many institutions remain reliant on manual spreadsheets to deal with the data verification process.

“Delivering data to receiving applications in a timely manner is also a major challenge.”


This kind of approach represents an unsustainable risk for financial institutions. Human error, key-man dependencies, audit trail issues, incomplete processes and other problems make ‘Excel work’ an unacceptably risky solution to the data quality issue. Quite apart from these operational problems, spreadsheet work is notoriously slow at identifying errors or suspicious activity, and introduces the risk of filing erroneous regulatory reports.

In short, for many financial institutions, data quality issues are leading to errors and exceptions, in turn disrupting the STP workflow. But the impact can extend beyond operational process disruptions, into clearing and settlement problems, which may require costly investigations and manual reprocessing, as well as financial losses and penalties, and reputational damage.

What’s Needed: Controls for Data Quality

As new regulatory initiatives across the globe come into force, firms have been seeking to identify commonalities in the hope of developing solutions to deal with multiple new rules at the same time.

Given the diversity of regulations currently on financial institutions’ radars, this approach can be a hit or miss affair. But trade and transaction reporting may be one area in which a set of procedures developed to satisfy one regulatory environment can be redeployed or recycled to satisfy others. In particular, adoption of a framework approach for trade and transaction reporting can yield benefits that can help with compliance across a range of regulatory jurisdictions.

Any end-to-end trade-/transaction-reporting solution needs to incorporate three main areas of functionality: data capture and consolidation; data validation; and data control, in the form of a framework.
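A minimal sketch of how these three areas might fit together is shown below, using hypothetical record fields and deliberately thin stub logic; the validation and control steps are illustrated in more detail later in this section.

```python
# Minimal sketch of the three functional areas named above, with hypothetical
# record fields and stub logic; each stage would be far richer in practice.
from typing import Iterable

def capture(feeds: Iterable[Iterable[dict]]) -> list[dict]:
    """Data capture and consolidation: pull records from multiple feeds into one set."""
    return [record for feed in feeds for record in feed]

def validate(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Data validation: split records into reportable items and exceptions."""
    reportable, exceptions = [], []
    for rec in records:
        target = reportable if rec.get("isin") and rec.get("quantity") else exceptions
        target.append(rec)
    return reportable, exceptions

def control(reported: list[dict], front_office: list[dict]) -> list[str]:
    """Data control: flag front-office trades that never made it into a report."""
    reported_ids = {r["trade_id"] for r in reported}
    return [r["trade_id"] for r in front_office if r["trade_id"] not in reported_ids]
```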

Identifying and pulling together the right data is the critical first step in this process. MiFID II, for example, introduced a wider and deeper data requirement for trade reporting, while at the same time expanding regulatory coverage from equities only to all asset classes. As a result, all steps in the pre- and post-trade reporting workflow need to embrace instrument and counterparty reference data from mandated sources, including GLEIF (for LEI data), the ANNA Derivatives Service Bureau (for swap instrument identification) and others.

MiFID II also introduced new factors like Traded on a Trading Venue (ToTV) and liquidity thresholds, with ESMA’s Financial Instrument Reference Data System (FIRDS) key to accessing the right data to make these determinations. Other reporting-heavy regulations like SFTR and EMIR introduce similar challenges.

Adding to this challenge is the regulatory stress on veracity. MiFID II’s RTS 22 Article 15, for example, is explicit in its requirement for accuracy and completeness of reporting data. It lays down specific guidelines with respect to testing and controls for data reported to Approved Reporting Mechanisms (ARMs) and National Competent Authorities (NCAs), providing a validation checklist across all 65 data attributes.
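By way of illustration, field-level accuracy and completeness checks of this kind might look like the Python sketch below. The fields and rules shown are simplified examples chosen for illustration, not the actual RTS 22 validation checklist.

```python
# Illustrative field-level checks in the spirit of an accuracy/completeness
# checklist; example fields and rules only, not the actual RTS 22 rules.
import re

LEI_PATTERN = re.compile(r"^[A-Z0-9]{18}[0-9]{2}$")       # 20-char ISO 17442 LEI
ISIN_PATTERN = re.compile(r"^[A-Z]{2}[A-Z0-9]{9}[0-9]$")  # 12-char ISO 6166 ISIN

def validate_report(report: dict) -> list[str]:
    """Return a list of data quality issues found in a single transaction report."""
    issues = []
    if not LEI_PATTERN.match(report.get("buyer_lei", "")):
        issues.append("buyer_lei: missing or malformed LEI")
    if not ISIN_PATTERN.match(report.get("instrument_isin", "")):
        issues.append("instrument_isin: missing or malformed ISIN")
    try:
        if float(report.get("quantity", 0)) <= 0:
            issues.append("quantity: must be greater than zero")
    except (TypeError, ValueError):
        issues.append("quantity: not a number")
    if not report.get("trading_datetime"):
        issues.append("trading_datetime: missing")
    return issues
```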

“Identifying and pulling together the right data is the critical first step in this process.”


Firms are required to identify errors and omissions, and avoid duplicate reports. Firms are also obliged to test their reporting processes and carry out regular reconciliations of reports against front-office records.

These requirements apply to all affected reporting firms, whether they are self-reporting or have outsourced Approved Publication Arrangement (APA) or ARM reporting to third parties. It’s important to remember that in the latter case, firms are unable to shift their fiduciary responsibilities to a reporting outsourcer. As a result, all firms need a kind of ‘assurance reconciliation’ that ensures the trade they report exactly matches their understanding of the trade that took place.

The final step to this end is creating a control framework that provides a check for accuracy and completeness by pairing and matching transaction details. This meets the need for ‘assurance rec’ outlined above by building checks into the overall reporting process.
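A minimal sketch of such a pairing check is shown below, assuming a hypothetical transaction reference as the matching key: it surfaces trades that were never reported, reports with no corresponding front-office record, and field-level breaks on paired records. It illustrates the principle rather than any particular product's matching logic.

```python
# Minimal 'assurance rec' sketch: pair front-office records with what was
# actually reported (e.g. trade repository feedback) and flag the differences.
def assurance_reconciliation(front_office: list[dict], reported: list[dict],
                             key: str = "transaction_ref") -> dict:
    fo_by_key = {rec[key]: rec for rec in front_office}
    rep_by_key = {rec[key]: rec for rec in reported}

    unreported = sorted(fo_by_key.keys() - rep_by_key.keys())     # never reached the regulator
    over_reported = sorted(rep_by_key.keys() - fo_by_key.keys())  # reported but unknown to the front office

    breaks = {}
    for ref in fo_by_key.keys() & rep_by_key.keys():
        diffs = {f: (fo_by_key[ref][f], rep_by_key[ref].get(f))
                 for f in fo_by_key[ref] if fo_by_key[ref][f] != rep_by_key[ref].get(f)}
        if diffs:
            breaks[ref] = diffs                                   # field-level mismatches to investigate
    return {"unreported": unreported, "over_reported": over_reported, "breaks": breaks}
```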

MiFID II and other new and emerging reporting regulations impose new processes on the transaction-reporting landscape, including scheduled testing of the reportable data and reconciliation of the reported transactions from the trade repository against front-office records.

Firms additionally need to show evidence of a scheduled testing programme of any adopted solutions. Testing itself will require its own systems and infrastructure over and above the chosen production system, potentially adding significant cost and resource overhead.

New reconciliations between front-office records and feedback from the firm’s chosen regulatory trade repository won’t fit with incumbent reconciliations systems, since they introduce new data columns and a different taxonomy. Meanwhile, these new reconciliations will need to be prioritised ahead of other outstanding reconciliations, introducing more complexity and increasing strain on existing processes.

The need to manage the resolution of errors in the transaction process is pushing firms toward new workflow processes, as regulators not only extend the scope of the transaction report but place new emphasis on the quality of the published data, raising the bar in terms of the resolution management processes that need to be in place.

Together, the emerging reporting requirements outlined above present practitioners with a complex set of challenges. Financial institutions are required to capture, normalise, validate and choreograph huge sets of data, often in multiple formats, in order to meet their reporting obligations across many jurisdictions.

In the face of this torrent of incoming transaction data, and under heightened regulatory scrutiny, firms have compressed the timeframes for responding to data quality issues so that they may rapidly address costly exceptions. Finally, ongoing changes to regulations and source data formats mean that any solution put in place now will need the flexibility to adapt to future needs.


How Observational Learning Can Help

Innovative financial institutions have begun to explore how new technologies, especially artificial intelligence, can help in their endeavours to address the need for data quality in transaction and other financial and regulatory reporting. Recent advances have focused on a form of AI known as observational learning. Early indications suggest this approach can greatly reduce exceptions, resulting in cost savings of up to 20% in laboratory-condition experiments.

Observational learning borrows from the techniques children use to learn how to do often unexplainable things, like tying a shoelace or using a knife and fork to eat. For these kinds of activities, it’s more effective for children to observe and mimic than to be told by an adult. This extends into adult life as well, perhaps best illustrated by the proliferation and popularity of the YouTube instructional video. How many of us have turned to the video-hosting site to watch an expert deal with an IKEA flatpack, fix a leaking tap or change a cylinder head gasket on a vintage car?

By applying observational learning disciplines to the regulatory reporting function, analysts at SmartStream Technologies’ Innovation Lab in Vienna have completed proofs of concept (POCs) with two major banking organisations that succeeded in accelerating the exceptions management process while rapidly and vastly improving data quality. The result was a sustained reduction in error rates and an accompanying drop in operational costs associated with reconciliations in trade and transaction reporting.

Observational Learning in Regulatory Reporting

Experimentation to date at the SmartStream Innovation Lab has focused on transaction reporting, but the principles used could be applied to many other processes where quality of data is integral to the success of the intended function.

For its analysis, SmartStream took the concept of observational learning and applied it to exceptions management algorithms as part of its Affinity AI offering. This allowed Affinity to observe human data verification processes, capture and ‘understand’ them, and ultimately make recommendations for future exceptions.

Within the reconciliations process, in the event of a mismatch in transaction data, business analysts will seek information from various systems and sources in an attempt to fix the problem. The algorithms in SmartStream Affinity observe the steps analysts take to resolve exceptions, specifically how they address these data quality issues. This process effectively ‘trains’ Affinity so that it is able to offer users information it considers to be useful, based on prior successful interventions by analysts.
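To make the principle concrete, the sketch below shows one deliberately simplified way such observation could be captured: analyst resolutions are logged against a few features of each exception, and the most frequent prior resolution for similar exceptions is suggested the next time one appears. This is an illustration only, not a description of SmartStream's Affinity algorithm, and the exception fields are hypothetical.

```python
# Toy observational-learning recommender for exception resolution:
# record how analysts resolve exceptions, keyed by simple exception features,
# then suggest the most frequent prior resolution for similar exceptions.
from collections import Counter, defaultdict

class ExceptionRecommender:
    def __init__(self):
        self._history = defaultdict(Counter)  # exception signature -> resolution counts

    @staticmethod
    def _signature(exception: dict) -> tuple:
        """Reduce an exception to the features that characterise it
        (hypothetical fields: which attribute broke, source system, asset class)."""
        return (exception["mismatched_field"], exception["source_system"], exception["asset_class"])

    def observe(self, exception: dict, resolution: str) -> None:
        """Record how an analyst resolved a given exception."""
        self._history[self._signature(exception)][resolution] += 1

    def recommend(self, exception: dict) -> str | None:
        """Suggest the resolution most often applied to similar exceptions, if any."""
        counts = self._history.get(self._signature(exception))
        return counts.most_common(1)[0][0] if counts else None
```

In this toy form, recommendation confidence grows simply as observed resolution counts accumulate; the virtuous circle described in the next paragraph is the same dynamic at production scale.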

Over time, as Affinity observes more instances of mismatches and analysts’ responses to them, the system’s success rate in providing useful recommendations rises. Much like recommendations found on social media networks and online marketplaces like Amazon and eBay, higher success rates result in increased confidence among users, and a greater propensity for analysts to turn to Affinity for help in addressing their data quality issues, thereby perpetuating a virtuous circle of observational learning.


Call to Action: Introducing SmartStream Air Version 2

The POC work with observational learning at the SmartStream Innovation Lab has focused on its application to data quality and improving STP rates. SmartStream’s team of analysts has been working with two Tier 1 banks whose scale gave them the greatest potential for efficiency gains.

For the POC, SmartStream deployed the Affinity AI capability within version 2 of its SmartStream Air cloud-native reconciliations platform. The banks were selected from SmartStream’s 2,000 customers as they were considered to be at the top end of the scale with respect to efficiency in reconciliations.

The POC used SmartStream Air to apply Affinity’s observational learning algorithms to the banks’ real transaction data, monitoring manual edits and modifications performed by business analysts and capturing them for use in dealing with other data issues.

Affinity AI observes users’ actions, establishes its own understanding of how records correlate, and assists the user in significantly reducing the time it takes to match complex data sets. Once the neural network is trained, Affinity acts as a virtual user to support businesses dealing with large amounts of data - the more it observes, the more efficient it becomes, boosting matching rates and delivering high-quality results to the end user. As a result of the Innovation Lab POC, the banks enjoyed cost savings of up to 20% in their reconciliations process, surpassing expectations for the project.

SmartStream has made Affinity available to clients of SmartStream Air version 2, which is deployed via the SmartStream Cloud managed services offering, and it will be embedded into SmartStream’s flagship reconciliations solutions. The solution provides rapid benefits without any lengthy IT projects: it is supported in the cloud as a fully managed service, or it can be deployed alongside clients’ existing on-premises solutions. SmartStream Air version 2 transforms traditional operating models as it behaves like a consumer app, requiring no training, configuration or IT projects.

SmartStream Air version 2 allows clients to transform data quality and reconciliation processes that would usually be measured in weeks and months into ones measured in just seconds. In addition, with cloud-based technology accessed via a new user interface, it manages large volumes of data, in any format, to achieve even higher match rates.

SmartStream Air version 2 allows firms to instantly compare all types of data sets, regardless of format and complexity. Its inbuilt observational learning capability automatically learns how records correlate to one another and can mimic and learn from actions made by a user. In just a few clicks, the AI establishes its own understanding of how records correlate, and Affinity assists the user in significantly reducing the time it takes to match complex data sets.

“As a result of the Innovation Lab POC, the banks enjoyed cost savings of up to 20% in their reconciliations process, surpassing expectations for the project.”


SmartStream Air handles data quality and verification processes in an elastic cloud-based deployment model. Version 2 is also PCI DSS certified, a recognised data security standard, and has been certified at the highest level of security standards for hosting digital payments data. SmartStream’s solutions also carry SOC 1, SOC 2 and SOC 3 attestations, and are certified to the ISO 27001 and ISO 27002 standards. This ensures robust security controls across the whole organisation, including physical security, personnel security, fraud control mechanisms, IT and data security, and data privacy.

SmartStream Air’s new observational learning capability has particular relevance for those seeking to shift away from manual Excel-based processes, especially those that use macros to compare data for regulatory reports or check the latest data against previous reports to identify potential changes.

The solution doesn’t aim to replace complex STP platforms that solve for the entirety of the transaction or client lifecycle, but rather is designed as a tool to mitigate the risk of manual processes or rapidly develop a process for addressing specific data quality issues within the transaction reporting process. The Affinity client POC yielded cost savings of at least 20%, with one participant recording savings of $20 million. SmartStream has since embarked on five new feasibility studies to confirm AI business cases for the capability.


About SmartStream

SmartStream is a recognised leader in financial transaction management solutions that enable firms to improve operational control, reduce costs, build new revenue streams, mitigate risk and comply accurately with regulation.

By helping its customers through their transformative digital strategies, SmartStream provides a range of solutions for the transaction lifecycle with AI and machine learning technologies embedded - which can also be deployed in the cloud or as managed services.

As a result, more than 2,000 clients, including 70 of the world’s top 100 banks, rely on SmartStream Transaction Lifecycle Management (TLM®) solutions to deliver greater efficiency to their operations.


About A-Team Group

A-Team Group helps financial technology vendors and consultants – large and small – to grow their businesses with content marketing. We leverage our deep industry knowledge, ability to generate high quality media across digital, print and live platforms, and our industry-leading database of contacts to deliver results for our clients. For more information visit www.a-teamgroup.com

A-Team Group’s content platform is A-Team Insight, encompassing our RegTech Insight, Data Management Insight and TradingTech Insight channels.

A-Team Insight is your single destination for in-depth knowledge and resources across all aspects of regulation, enterprise data management and trading technology in financial markets. It brings together our expertise across our well-established brands and includes:

RegTech Insight focuses on how data, technology and processes at financial institutions are impacted by regulations. www.regtechinsight.com

Data Management Insight delivers insight into how financial institutions are working to best manage data quality across the enterprise. www.datamanagementinsight.com

TradingTech Insight keeps you up to speed with the dynamic world of front office trading technology and market data. www.tradingtechinsight.com

You can tailor your experience by filtering our content based on the topics you are specifically interested in, across our range of blogs with expert opinions from our editors, in-depth white papers, supplements and handbooks, and interactive webinars. You can also join us in person at our range of A-Team Summits and briefings. Visit www.a-teaminsight.com

Become an A-Team Insight member – it’s free! Visit: www.a-teaminsight.com/membership.