Hadoop for Finance Essentials - Sample Chapter


Chapter No. 3: Hadoop in the Cloud. Harness big data to provide meaningful insights, analytics, and business intelligence for your financial institution. For more information: http://bit.ly/1KvY8VE


Community Experience Distilled

Harness big data to provide meaningful insights, analytics, and business intelligence for your financial institution

Hadoop for Finance Essentials

Rajiv Tiwari

Hadoop for Finance Essentials
With the exponential growth of data and many enterprises crunching more and more data every day, Hadoop as a data platform has gained a lot of popularity. Financial businesses want to minimize risks and maximize opportunities, and Hadoop, largely dominating the big data market, plays a major role.

This book will get you started with the fundamentals of big data and Hadoop, enabling you to get to grips with solutions to many top financial big data use cases, including regulatory projects and fraud detection. It is packed with industry references and code templates, and is designed to walk you through a wide range of Hadoop components.

By the end of the book, you'll understand a few industry-leading architecture patterns, big data governance, tips, best practices, and standards to successfully develop your own Hadoop-based solution.

Who this book is written for
This book is perfect for developers, analysts, architects, or managers who would like to perform big data analytics with Hadoop for the financial sector. It is also helpful for technology professionals from other industry sectors who have recently switched, or would like to switch, their business domain to the financial sector. Familiarity with big data, Java programming, databases and data warehouses, and business intelligence would be beneficial.

$29.99 US / £19.99 UK

Prices do not include local sales tax or VAT where applicable


What you will learn from this book:

• Learn about big data and Hadoop fundamentals, including practical finance use cases
• Walk through Hadoop-based finance projects with explanations of solutions, big data governance, and how to sustain Hadoop momentum
• Develop a range of solutions for small to large-scale data projects on the Hadoop platform
• Learn how to process big data in the cloud
• Present practical business cases to management to scale up existing platforms at enterprise level


Visit www.PacktPub.com for books, eBooks, code, downloads, and PacktLib.

Free Sample


In this package, you will find:

• The author biography
• A preview chapter from the book, Chapter 3, 'Hadoop in the Cloud'
• A synopsis of the book's content
• More information on Hadoop for Finance Essentials

About the Author
Rajiv Tiwari is a hands-on freelance big data architect with over 15 years of experience across big data, data analytics, data governance, data architecture, data cleansing/data integration, data warehousing, and business intelligence for banks and other financial organizations.

He is an electronics engineering graduate from IIT, Varanasi, and has been working in England for the past 10 years, mostly in London, the UK's financial center.

He has been using Hadoop since 2010, when Hadoop was still in its infancy in the banking sector.

He is currently helping a tier 1 investment bank implement a large risk analytics project on the Hadoop platform.

Rajiv can be contacted via his website or on Twitter.


Hadoop for Finance Essentials
Data has been increasing at an exponential rate, and organizations are either struggling to cope or rushing to take advantage by analyzing it. Hadoop is an excellent open source framework that addresses this big data problem.

I have used Hadoop within the financial sector for the last few years but could not find any resource or book that explains the usage of Hadoop for finance use cases. The best books I have found cover Hadoop, Hive, or MapReduce patterns in general, with examples that count words or Twitter messages in every possible way.

I have written this book with the objective of explaining the basic usage of Hadoop and other products to tackle big data for finance use cases. I have touched on the majority of use cases, taking a very practical approach.

What This Book Covers
Chapter 1, Big Data Overview, covers the overview of big data, its landscape, and technology evolution. It also touches on the Hadoop architecture, its components, and distributions. If you already know Hadoop, just skim through this chapter.

Chapter 2, Big Data in Financial Services, extends the big data overview from the perspective of a financial organization. It explains the story of the evolution of big data in the financial sector, typical implementation challenges, and different finance use cases with the help of relevant tools and technologies.

Chapter 3, Hadoop in the Cloud, covers the overview of big data in the cloud and a sample portfolio risk simulation project with end-to-end data processing.

Chapter 4, Data Migration Using Hadoop, talks about the most popular type of project: migrating historical trade data from traditional data sources to Hadoop.

Chapter 5, Getting Started, covers the implementation of a very large enterprise data platform to support various risk and regulatory requirements.

Chapter 6, Getting Experienced, gives an overview of real-time analytics and a sample project to detect fraudulent transactions.

Chapter 7, Scale It Up, covers topics for scaling up the usage of Hadoop within your organization, such as the enterprise data lake, lambda architecture, and data governance. It also touches on a few more financial use cases with brief solutions.

Chapter 8, Sustain the Momentum, talks about the Hadoop distribution upgrade cycle and wraps up the book with best practices and standards.


Hadoop in the Cloud
Hadoop in the cloud can be implemented with very low initial investment and is well suited for proofs of concept and data systems with variable IT resource requirements. In this chapter, I will discuss the story of Hadoop in the cloud and how Hadoop can be implemented in the cloud for banks.

I will cover the full data life cycle of a risk simulation project using Hadoop in the cloud:

• Data collection: ingesting the data into the cloud
• Data transformation: iterating simulations with the given algorithms
• Data analysis: analyzing the output results

I recommend you refer to your Hadoop cloud provider documentation if you need to dive deeper.

The big data cloud story
In the last few years, cloud computing has grown significantly within banks as they strive to improve the performance of their applications, increase agility, and most importantly reduce their IT costs. As moving applications into the cloud reduces the operational cost and IT complexity, it helps banks to focus on their core business instead of spending resources on technology support.

The Hadoop-based big data platform is just like any other cloud computing platform, and a few financial organizations have implemented projects with Hadoop in the cloud.


The why
As far as banks are concerned, especially investment banks, business fluctuates a lot and is driven by the market. Fluctuating business means fluctuating trade volume and variable IT resource requirements. As shown in the following figure, traditional on-premise implementations will have a fixed number of servers for peak IT capacity, but the actual IT capacity needs are variable:

[Figure: Fixed traditional IT capacity versus your actual IT needs over time]

As shown in the following figure, if a bank plans for more IT capacity than its maximum usage (a must for banks), there will be wastage, but if it plans for IT capacity equal to the average of the required fluctuations, this will lead to processing queues and customer dissatisfaction:

[Figure: Demand patterns (on and off, fast growth, variable peaks, predictable peaks) against fixed capacity, showing wasted capacity on one side and customer dissatisfaction on the other]


With cloud computing, financial organizations pay only for the IT capacity they use, and this is the number-one reason for using Hadoop in the cloud: elastic capacity and thus elastic pricing.

The second reason is proof of concept. For every financial institution, before the adoption of Hadoop technologies, the big dilemma was, "Is it really worth it?" or "Should I really spend on Hadoop hardware and software as it is still not completely mature?" You can simply create Hadoop clusters within minutes, do a small proof of concept, and validate the benefits. Then, either scale up your cloud with more use cases or go on-premise if that is what you prefer.

The when
Have a look at the following questions. If you answer yes to any of these for your big data problem, Hadoop in the cloud could be the way forward:

• Is your data operation very intensive but unpredictable?
• Do you want to do a small proof of concept without buying the hardware and software up front?
• Do you want your operational expense to be very low or managed by external vendors?

What's the catch?
If the cloud solves all big data problems, why isn't every bank implementing it?

• The biggest concern is—and will remain for the foreseeable future—the security of the data in the cloud, especially customers' private data. The moment senior managers think of security, they want to play safe and drop the idea of implementing it on the cloud.

• Performance is still not as good as that of an on-premise installation. Disk I/O is a bottleneck in virtual machine environments. Especially with mixed workloads such as MapReduce, Spark, and so on running on the same cluster with several concurrent users, you will feel a big performance impact.

• Once the data is in the cloud, vendors manage the day-to-day administrative tasks, including operations. The implementation of Hadoop in the cloud will therefore lead to the development and operations roles merging, which runs slightly against the norm for how banks separate departmental functions.

In the next section, I will pick up one of the most popular use cases: implementing Hadoop in the cloud for the risk division of a bank.


Project details – risk simulations in the cloud
Value at Risk (VaR) is a very effective method to calculate the financial risk of a portfolio. Monte Carlo simulation is one of the methods used to estimate that risk across a large number of computer-generated scenarios. The effectiveness of this method depends on running as many scenarios as possible.

Currently, a bank runs a credit-risk Monte Carlo simulation to calculate VaR, with complex algorithms simulating diverse risk scenarios in order to evaluate the risk metrics of its clients. The simulation requires high computational power for millions of computer-generated scenarios; even with high-end computers, the application takes 20–30 hours to run, which is both time consuming and expensive.

Solution
For our illustration, I will use Amazon Web Services (AWS) with Elastic MapReduce (EMR) and parallelize the Monte Carlo simulation using a MapReduce model. Note, however, that it can be implemented on any Hadoop cloud platform.

The bank will upload the client portfolio data into cloud storage (S3), develop MapReduce jobs using the existing algorithms, use on-demand EMR nodes to execute the MapReduce jobs in parallel, write the results back to S3, and then release the EMR resources.

HDFS is automatically spread over data nodes. If you decommission the nodes, the HDFS data on them will be lost. So always put your persistent data on S3, not HDFS.
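To make this concrete, the job driver can point its input and output locations straight at S3 URIs, so nothing that must survive the cluster ever lives only on HDFS. The following is a minimal sketch, not the book's actual driver; the bucket and folder names are hypothetical:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class S3PathsSketch {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "VaR simulation");
        // Read the portfolios from S3 and write the results back to S3 so that
        // the data survives after the EMR nodes (and their HDFS) are released.
        FileInputFormat.addInputPath(job, new Path("s3://mybank-risk-input/portfolios/"));
        FileOutputFormat.setOutputPath(job, new Path("s3://mybank-risk-results/simulations/"));
    }
}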

The current world
The bank loads the client portfolio data into the high-end risk data platform and applies programming iterations in parallel for the configured number of iterations. For each portfolio and iteration, they take the current asset price and apply the following function for a variety of random variables:


\Delta S_t = S_{t+1} - S_t = S_t \left( \mu \, \Delta t + \sigma \, \varepsilon \, \sqrt{\Delta t} \right)

Where: S_{t+1} is the asset price at time t+1; S_t is the asset price at time t; \mu is the mean of return on assets; \sigma is the volatility of the asset; \varepsilon is a random value drawn from a standard normal distribution; and \Delta t is the time step.

The asset price will fluctuate in each iteration. The following is an example with 15 iterations when the starting price is 10€:
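Such a sequence of iterations can be generated with a few lines of code. The following minimal Java sketch (not taken from the bank's code base; the drift, volatility, and time-step values are illustrative assumptions) applies the formula above 15 times, starting from a price of 10:

import java.util.Random;

public class AssetPathSketch {
    public static void main(String[] args) {
        Random random = new Random();
        double price = 10.0;      // starting asset price, as in the example above
        double mu = 0.05;         // assumed mean return on assets
        double sigma = 0.20;      // assumed volatility
        double dt = 1.0 / 300.0;  // one step of a 300-day horizon

        for (int i = 1; i <= 15; i++) {
            double epsilon = random.nextGaussian();  // random draw from N(0, 1)
            double delta = price * (mu * dt + sigma * epsilon * Math.sqrt(dt));
            price += delta;
            System.out.printf("Iteration %d: price = %.4f%n", i, price);
        }
    }
}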


For a large number of iterations, the asset price will follow a normal pattern. As shown in the following figure, the value at risk at 99 percent is 0.409€, which is defined as a 1 percent probability that the asset price will fall by more than 0.409€ after 300 days. So, if a client holds 100 units of the asset in his portfolio, the VaR for his portfolio is 40.9€.

The results are only an estimate, and their accuracy improves with the square root of the number of iterations, which means that running 100 times as many iterations makes the estimate roughly 10 times more accurate. The number of iterations could be anywhere from hundreds of thousands to millions, and even with powerful and expensive computers, the run could take more than 20 hours to complete.
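As a rough illustration of how the 99 percent figure is derived, the sketch below simulates many full 300-day paths, sorts the resulting losses, and reads off the loss that is exceeded in only 1 percent of scenarios. The iteration count and market parameters are assumptions, not the bank's actual settings:

import java.util.Arrays;
import java.util.Random;

public class VarPercentileSketch {
    public static void main(String[] args) {
        int iterations = 100_000;  // assumed number of simulated scenarios
        double start = 10.0, mu = 0.05, sigma = 0.20;  // assumed market parameters
        int days = 300;
        double dt = 1.0 / days;
        Random random = new Random();

        double[] losses = new double[iterations];
        for (int i = 0; i < iterations; i++) {
            double price = start;
            for (int d = 0; d < days; d++) {
                double epsilon = random.nextGaussian();
                price += price * (mu * dt + sigma * epsilon * Math.sqrt(dt));
            }
            losses[i] = start - price;  // a positive value is a loss over the horizon
        }

        Arrays.sort(losses);
        // The 99 percent VaR is the loss exceeded in only 1 percent of scenarios
        double var99 = losses[(int) (0.99 * iterations)];
        System.out.printf("99%% VaR per unit of the asset: %.4f%n", var99);
    }
}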

The target world
In summary, they will parallelize the processing using MapReduce and reduce the processing time to less than an hour.

First, they will have to upload the client portfolio data into Amazon S3. Then they will apply the same algorithm, but implemented as MapReduce programs running a very large number of parallel iterations on Amazon EMR, and write the results back to S3.


It is a classic example of elastic capacity—the customer data can be partitioned and each partition can be processed independently. The execution time will drop almost linearly with the number of parallel executions. They will spawn hundreds of nodes to accommodate hundreds of iterations in parallel and release resources as soon as the execution is complete.

The following diagram is courtesy of the AWS website. I recommend you visit http://aws.amazon.com/elasticmapreduce/ for more details.

[Figure: Amazon EMR architecture. Input data in Amazon S3 is processed by a cluster of EC2 nodes managed by EMR, and the results are written back to Amazon S3.]

Data collection
The data storage for this project is Amazon S3 (where S3 stands for Simple Storage Service). It can store anything, has unlimited scalability, and has 99.999999999 percent durability.

If you have a little more money and want better performance, go for storage on:

• Amazon DynamoDB: This is a NoSQL database with unlimited scalability and very low latency.

• Amazon Redshift: This is a relational, parallel data warehouse that scales to petabytes of data and should be used if performance is your top priority. It is even more expensive in comparison to DynamoDB, in the order of $1,000/TB/year.


Configuring the Hadoop cluster
Please visit http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/emr-what-is-emr.html for the full documentation and relevant screenshots.

An Amazon Elastic Compute Cloud (EC2) instance is a single data processing node. Amazon Elastic MapReduce (EMR) is a fully managed cluster of EC2 processing nodes that uses the Hadoop framework. Basically, the configuration steps are:

1. Sign up for an account with Amazon.
2. Create a Hadoop cluster with the default Amazon distribution.
3. Configure the EC2 nodes with a high-memory and high-CPU configuration, as the risk simulations will be very memory-intensive operations.
4. Configure your user role and the security associated with it (a programmatic sketch of these steps follows this list).
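If you prefer to script the cluster creation instead of clicking through the console, the AWS SDK for Java exposes the same operations. The following is a minimal sketch assuming the v1 SDK (com.amazonaws:aws-java-sdk-emr); the instance types, node count, release label, role names, and log bucket are all illustrative assumptions:

import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduce;
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClientBuilder;
import com.amazonaws.services.elasticmapreduce.model.JobFlowInstancesConfig;
import com.amazonaws.services.elasticmapreduce.model.RunJobFlowRequest;
import com.amazonaws.services.elasticmapreduce.model.RunJobFlowResult;

public class CreateRiskCluster {
    public static void main(String[] args) {
        AmazonElasticMapReduce emr = AmazonElasticMapReduceClientBuilder.defaultClient();

        // High-memory, high-CPU nodes, since the simulations are memory intensive
        JobFlowInstancesConfig instances = new JobFlowInstancesConfig()
                .withMasterInstanceType("m4.2xlarge")    // illustrative instance types
                .withSlaveInstanceType("m4.2xlarge")
                .withInstanceCount(20)                   // illustrative node count
                .withKeepJobFlowAliveWhenNoSteps(true);

        RunJobFlowRequest request = new RunJobFlowRequest()
                .withName("var-risk-simulation")
                .withReleaseLabel("emr-5.30.0")          // pick a current EMR release
                .withServiceRole("EMR_DefaultRole")      // roles/security as in step 4
                .withJobFlowRole("EMR_EC2_DefaultRole")
                .withLogUri("s3://mybank-risk-logs/")    // hypothetical log bucket
                .withInstances(instances);

        RunJobFlowResult result = emr.runJobFlow(request);
        System.out.println("Started cluster: " + result.getJobFlowId());
    }
}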

Data upload
Now you have to upload the client portfolio and parameter data into Amazon S3 as follows:

1. Create an input bucket on Amazon S3, which is like a directory and must have a unique name, something like <organization name + project name + input>.

2. Upload the source files over a secure corporate Internet connection (see the SDK-based sketch after this list).
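The upload itself can also be automated with the AWS SDK for Java rather than done through the console. A minimal sketch, assuming the v1 SDK (com.amazonaws:aws-java-sdk-s3) and hypothetical bucket, key, and file names, would look like this:

import java.io.File;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class UploadPortfolios {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        String bucket = "mybank-risk-input";  // hypothetical, must be globally unique

        // Step 1: create the input bucket if it does not already exist
        if (!s3.doesBucketExistV2(bucket)) {
            s3.createBucket(bucket);
        }

        // Step 2: upload a portfolio file over the corporate Internet connection
        s3.putObject(bucket, "portfolios/portfolio-2015-03-31.csv",
                new File("/data/export/portfolio-2015-03-31.csv"));
    }
}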

I recommend you use one of the two Amazon data transfer services, AWS Import/Export and AWS Direct Connect, if there is any opportunity to do so.

The AWS Import/Export service works as follows:

• Export the data in Amazon's format onto a portable storage device (a hard disk, CD, and so on) and ship it to Amazon.

• Amazon imports the data into S3 using its high-speed internal network and sends you back the portable storage device.

• The process takes 5–6 days and is recommended only for an initial large data load—not an incremental load.

• The guideline is simple: calculate your data size and network bandwidth. If uploading over the network would take on the order of weeks or months, you are better off using this service instead.


The AWS Direct Connect service works as follows:

• Establish a dedicated network connection from your on-premise data center to AWS, at anything from 1 Gbps to 10 Gbps

• Use this service if you need to import/export large volumes of data in and out of the Amazon cloud on a day-to-day basis

Data transformation
Rewrite the existing simulation programs into Map and Reduce programs and upload them into S3. The functional logic will remain the same; you just need to rewrite the code using the MapReduce framework, as shown in the following template, and compile it as MapReduce-0.0.1-VarRiskSimulationAWS.jar.

The mapper logic splits the client portfolio data into partitions and applies iterative simulations to each partition. The reducer logic aggregates the mapper results into value and risk figures.

package com.hadoop.Var.MonteCarlo;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class VarMonteCarlo {

    public static void main(String[] args) throws Exception {
        if (args.length < 2) {
            System.err.println("Usage: VarMonteCarlo <input path> <output path>");
            System.exit(-1);
        }

        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "VaR calculation");
        job.setJarByClass(VarMonteCarlo.class);

        job.setMapperClass(VarMonteCarloMapper.class);
        job.setReducerClass(VarMonteCarloReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);
        job.setOutputKeyClass(Text.class);
        // RiskArray is the custom Writable (defined elsewhere) that holds the
        // aggregated risk output
        job.setOutputValueClass(RiskArray.class);

        // The first argument is the input path, the second is the output path
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.waitForCompletion(true);
    }

    // Mapper: partitions the client portfolio data and runs the iterative
    // simulations on each partition
    public static class VarMonteCarloMapper
            extends Mapper<LongWritable, Text, Text, Text> {
        // <Implement your algorithm here>
    }

    // Reducer: aggregates the mapper results into value and risk figures
    public static class VarMonteCarloReducer
            extends Reducer<Text, Text, Text, RiskArray> {
        // <Implement your algorithm here>
    }
}

Once the Map and Reduce code is developed, please follow these steps:

1. Create an output bucket on Amazon S3, which is like a directory and must have a unique name, something like <organization name + project name + results>.

2. Create a new job workflow using the following parameters:

• Input Location: the S3 bucket directory containing the client portfolio data files
• Output Location: the S3 bucket directory to which the simulation results will be written
• Mapper: the textbox should be set to java -classpath MapReduce-0.0.1-VarRiskSimulationAWS.jar com.hadoop.Var.MonteCarlo.VarMonteCarloMapper
• Reducer: the textbox should be set to java -classpath MapReduce-0.0.1-VarRiskSimulationAWS.jar com.hadoop.Var.MonteCarlo.VarMonteCarloReducer
• Master EC2 instance: select one of the larger instance types
• Core EC2 instances: select larger instance types with a lower count
• Task EC2 instances: select larger instance types with a very high count, in line with the number of risk simulation iterations

3. Execute the job workflow and monitor its progress (a programmatic equivalent using the AWS SDK for Java follows this list).

4. The job is expected to complete much faster than the current 20–30 hours and should be done in less than an hour.

5. The simulation results are written to the output S3 bucket.
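The same job workflow can also be submitted programmatically to a running cluster with the AWS SDK for Java instead of through the console. The sketch below assumes the v1 SDK; the jar location, S3 paths, and cluster ID are hypothetical placeholders:

import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduce;
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClientBuilder;
import com.amazonaws.services.elasticmapreduce.model.AddJobFlowStepsRequest;
import com.amazonaws.services.elasticmapreduce.model.HadoopJarStepConfig;
import com.amazonaws.services.elasticmapreduce.model.StepConfig;

public class SubmitVarStep {
    public static void main(String[] args) {
        AmazonElasticMapReduce emr = AmazonElasticMapReduceClientBuilder.defaultClient();

        // Run the compiled driver with the S3 input and output locations as its
        // two arguments, mirroring the console job workflow described above
        HadoopJarStepConfig jarStep = new HadoopJarStepConfig()
                .withJar("s3://mybank-risk-code/MapReduce-0.0.1-VarRiskSimulationAWS.jar")
                .withMainClass("com.hadoop.Var.MonteCarlo.VarMonteCarlo")
                .withArgs("s3://mybank-risk-input/portfolios/",
                          "s3://mybank-risk-results/simulations/");

        StepConfig step = new StepConfig()
                .withName("var-monte-carlo")
                .withActionOnFailure("CONTINUE")
                .withHadoopJarStep(jarStep);

        emr.addJobFlowSteps(new AddJobFlowStepsRequest()
                .withJobFlowId("j-XXXXXXXXXXXXX")  // the cluster ID returned at creation
                .withSteps(step));
    }
}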


Data analysis
You can download the simulation results from the Amazon S3 output bucket for further analysis with local tools.

In this case, you should be able to simply download the data locally, as the result volume may be relatively low.
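If you want to script the download as well, the same SDK can pull the reducer output down for local analysis; the bucket, key, and local path below are hypothetical:

import java.io.File;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;

public class DownloadResults {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        // Download one reducer output file for analysis with local tools
        s3.getObject(
                new GetObjectRequest("mybank-risk-results", "simulations/part-r-00000"),
                new File("/tmp/var-results.csv"));
    }
}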

Summary
In this chapter, we learned how and when big data can be processed in the cloud, right from configuration, collection, and transformation to the analysis of data.

Currently, Hadoop in the cloud is not used much in banks due to a few concerns about data security and performance. However, whether those concerns are justified is debatable.

For the rest of this book, I will discuss projects using on-premise Hadoop implementations only.

In the next chapter, I will pick up a medium-scale on-premise Hadoop project and see it in a little more detail.