Awesum CF Info


Upload: harshil-gandhi

Post on 10-Apr-2018


Page 1: Awesum CF Info

8/8/2019 Awesum CF Info

http://slidepdf.com/reader/full/awesum-cf-info 1/61

Computational finance


Computational finance or financial engineering is a cross-disciplinary field which relies on mathematical finance, numerical methods, computational intelligence and computer simulations to make trading, hedging and investment decisions, as well as facilitating the risk management of those decisions. Utilizing various methods, practitioners of computational finance aim to precisely determine the financial risk that certain financial instruments create.

1) Mathematical finance
2) Numerical methods
3) Computer simulations
4) Computational intelligence
5) Financial risk

History

Generally, individuals who fill positions in computational finance are known as “quants”, referring to the quantitative skills necessary to perform the job. Specifically, knowledge of the C++ programming language, as well as of the mathematical subfields of stochastic calculus, multivariate calculus, linear algebra, differential equations, probability theory and statistical inference, is often an entry-level requisite for such a position. C++ has become the dominant language for two main reasons: the computationally intensive nature of many algorithms, and the focus on libraries rather than applications.

Computational finance was traditionally populated by Ph.D.s in finance, physics and mathematics who moved into the field from more pure, academic backgrounds (either directly from graduate school, or after teaching or research). However, as the actual use of computers has become essential to rapidly carrying out computational finance decisions, a background in computer programming has become useful, and hence many computer programmers enter the field either from Ph.D. programs or from other fields of software engineering. In recent years, advanced computational methods, such as neural networks and evolutionary computation, have opened new doors in computational finance. Practitioners of computational finance have come from the fields of signal processing, computational fluid dynamics and artificial intelligence.

Today, all full-service institutional finance firms employ computational finance professionals in their banking and finance operations (as opposed to being ancillary information technology specialists), while there are many other boutique firms, ranging from 20 or fewer employees to several thousand, that specialize in quantitative trading alone. JPMorgan Chase & Co. was one of the first firms to create a large derivatives business and employ computational finance (including through the formation of RiskMetrics), while D. E. Shaw & Co. is probably the oldest and largest quant fund (Citadel Investment Group is a major rival).


Introduction

One of the most common applications of computational finance is within the arena of investment banking. Because of the sheer amount of funds involved in this type of situation, computational finance comes to the fore as one of the tools used to evaluate every potential investment, whether it be something as simple as a new start-up company or a well-established fund. Computational finance can help prevent the investment of large amounts of funding in something that simply does not appear to have much of a future.

Another area where computational finance comes into play is the world of financial risk management. Stockbrokers, stockholders, and anyone who chooses to invest in any type of investment can benefit from using the basic principles of computational finance as a way of managing an individual portfolio. Running the numbers for individual investors, just as for larger concerns, can often make it clear what risks are associated with any given investment opportunity. The result can often be an individual who is able to sidestep a bad opportunity, and live to invest another day in something that will be worthwhile in the long run.

In the business world, the use of computational finance can often come into play when the time to engage in some form of corporate strategic planning arrives. For instance, reorganizing the operating structure of a company in order to maximize profits may look very good at first glance, but running the data through a process of computational finance may in fact uncover some drawbacks to the current plan that were not readily visible before.

Once the complete and true expenses associated with the restructure are accounted for, it may prove to be more costly than anticipated, and in the long run not as productive as was originally hoped. Computational finance can help get past the hype and provide some realistic views of what could happen, before any corporate strategy is implemented.


Quantitative analysis

Quantitative analysis, in finance, is the application of numerical or quantitative techniques to financial problems. Similar work is done in most other modern industries, where it goes by the same name. In the investment industry, people who perform quantitative analysis are frequently called quants. Although the original quants were concerned with risk management and derivatives pricing, the meaning of the term has expanded over time to include those individuals involved in almost any application of mathematics in finance. An example is statistical arbitrage.

History

Quantitative finance started in the U.S. in the 1930s as some astute investors began using mathematical formulae to price stocks and bonds.

Robert C. Merton, a pioneer of quantitative analysis, introduced stochastic calculus into the study of finance.

Harry Markowitz's 1952 Ph.D. thesis "Portfolio Selection" was one of the first papers to formally adapt mathematical concepts to finance. Markowitz formalized a notion of mean return and covariances for common stocks, which allowed him to quantify the concept of "diversification" in a market. He showed how to compute the mean return and variance for a given portfolio, and argued that investors should hold only those portfolios whose variance is minimal among all portfolios with a given mean return. Although the language of finance now involves Itô calculus, minimization of risk in a quantifiable manner underlies much of the modern theory.

In 1969 Robert Merton introduced stochastic calculus into the study of finance. Merton was motivated by the desire to understand how prices are set in financial markets, which is the classical economics question of "equilibrium", and in later papers he used the machinery of stochastic calculus to begin investigation of this issue.

At the same time as Merton's work, and with Merton's assistance, Fischer Black and Myron Scholes were developing their option pricing formula, work for which Scholes and Merton were awarded the 1997 Nobel Prize in Economics. It provided a solution for a practical problem, that of finding a fair price for a European call option, i.e., the right to buy one share of a given stock at a specified price and time. Such options are frequently purchased by investors as a risk-hedging device. In 1981, Harrison and Pliska used the general theory of continuous-time stochastic processes to put the Black-Scholes option pricing formula on a solid theoretical basis, and as a result showed how to price numerous other "derivative" securities.
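The Black-Scholes call formula described above can be sketched in a few lines. Python is used here for brevity, and the parameter values are illustrative, not taken from the text:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF, written via erf to avoid external dependencies."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Fair value of a European call under the Black-Scholes model.

    S: spot price, K: strike, T: time to expiry in years,
    r: continuously compounded risk-free rate, sigma: annualized volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# An at-the-money one-year call: S = K = 100, r = 5%, sigma = 20%.
print(round(black_scholes_call(100, 100, 1.0, 0.05, 0.2), 4))  # → 10.4506
```

Writing the normal CDF through `erf` keeps the sketch dependency-free; in practice a library implementation would be used.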


Quantitative and Computational Finance

‘Quantitative Finance’ as a branch of modern finance is one of the fastest growing areas within the corporate world. Together with the sophistication and complexity of modern financial products, this exciting discipline continues to act as the motivating factor for new mathematical models and the subsequent development of associated computational schemes. Alternative names for this subject area are Mathematical Finance, Financial Mathematics or Financial Engineering.

This is a course in the applied aspects of mathematical finance, in particular derivative pricing. The necessary understanding of products and markets required will be covered during the course. The overall theme of the course is to develop the Partial Differential Equation (PDE) approach to the pricing of options. As well as a two-hour examination during the summer term, students will undertake a short computing project where they will use numerical and computational techniques to perform derivative pricing.

Simulation Methods in Finance
Brief introduction to Stochastic Differential Equations (SDEs) – drift, diffusion, Itô's Lemma. The statistics of random number generation in Excel. Simulating asset price SDEs in Excel.

Financial Products and Markets
Introduction to the financial markets and the products which are traded in them: equities, indices, foreign exchange, the fixed income world and commodities. Options contracts and strategies for speculation and hedging.

Black-Scholes framework
Similarity reduction and fundamental solution for the heat equation. Black-Scholes PDE: simple European calls and puts; put-call parity. The PDE for pricing commodity and currency options. Discontinuous payoffs – Binary and Digital options. The greeks: theta, delta, gamma, vega & rho and their role in hedging.

Computational Finance
Solving the pricing PDEs numerically using Explicit, Implicit and Crank-Nicolson Finite Difference Schemes. Stability criteria. Monte Carlo Technique for derivative pricing.

Fixed-Income Products
Introduction to the properties and features of fixed income products: yield, duration & convexity. Stochastic interest rate models: stochastic differential equation for the spot interest rate; bond pricing PDE; popular models for the spot rate (Vasicek, CIR and Hull & White); solutions of the bond pricing equation.
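The simulation and Monte Carlo topics above can be combined into one sketch: simulate the risk-neutral asset-price SDE dS = rS dt + σS dW and price a European call by averaging discounted payoffs. Python stands in for the Excel/MATLAB tools named in the outline, and the parameters are illustrative:

```python
import random
from math import exp, sqrt

def mc_european_call(S0, K, T, r, sigma, n_paths=200_000, seed=42):
    """Price a European call by simulating terminal GBM prices.

    Under risk-neutral Black-Scholes dynamics dS = r*S*dt + sigma*S*dW,
    S_T = S0 * exp((r - sigma^2/2)*T + sigma*sqrt(T)*Z) with Z ~ N(0, 1),
    so no time-stepping is needed for a payoff depending only on S_T.
    """
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * T
    vol = sigma * sqrt(T)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        s_t = S0 * exp(drift + vol * z)
        total += max(s_t - K, 0.0)   # call payoff at expiry
    return exp(-r * T) * total / n_paths

price = mc_european_call(S0=100, K=100, T=1.0, r=0.05, sigma=0.2)
print(round(price, 2))  # should land close to the Black-Scholes value of about 10.45
```

For path-dependent payoffs the SDE would instead be stepped through time; here the exact lognormal distribution of S_T lets the simulation jump straight to expiry.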


Fixed Income

The fixed-income market demands a vast selection of investment options with a variety of credit qualities, maturities, and yields to meet investors' objectives. To accomplish this, fixed income groups frequently create and modify mathematical models to calculate bond pricing, perform yield analysis, calculate cash flows, and develop hedging strategies. Fixed-income research groups use the thousands of prewritten math and graphics functions in MathWorks products to access bond data, perform statistical analysis, calculate spreads, determine bond and derivative pricing, perform sensitivity analyses, and run Monte Carlo simulations. Advanced graphics and rendering capabilities in MATLAB make it easy to review cash flows, visualize decision trees, plot spot and forward curves, and create deployable interactive 2- and 3-D models.
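As a small sketch of the bond pricing and duration calculations mentioned above, consider a plain annual-coupon bond. The figures are invented, and Python stands in for the MATLAB tooling:

```python
def bond_price(face, coupon_rate, years, ytm):
    """Present value of an annual-coupon bond discounted at yield ytm."""
    coupon = face * coupon_rate
    pv = sum(coupon / (1 + ytm) ** t for t in range(1, years + 1))
    return pv + face / (1 + ytm) ** years

def macaulay_duration(face, coupon_rate, years, ytm):
    """Cash-flow-weighted average time to payment, in years."""
    coupon = face * coupon_rate
    price = bond_price(face, coupon_rate, years, ytm)
    weighted = sum(t * coupon / (1 + ytm) ** t for t in range(1, years + 1))
    weighted += years * face / (1 + ytm) ** years
    return weighted / price

# A 5-year, 6% annual-coupon bond at a 5% yield trades above par,
# and its duration is shorter than its 5-year maturity.
p = bond_price(1000, 0.06, 5, 0.05)
d = macaulay_duration(1000, 0.06, 5, 0.05)
print(round(p, 2), round(d, 2))
```

When the coupon rate equals the yield, the same function prices the bond exactly at par, which is a quick sanity check on the discounting.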

Equity

Smart security investing requires in-depth research and analysis. Measuring all the influencing factors is an essential part of risk management. As a result, research groups continually create and modify mathematical models to calculate stock value, review forecasts, and develop innovative risk strategies.

Equity research groups use the thousands of math and graphics functions in MathWorks products to access stock data, perform statistical analysis, determine derivatives pricing, perform sensitivity analyses, and run Monte Carlo simulations. The graphics capabilities in MATLAB offer a variety of ways to review time series data, visualize portfolio risks and returns, and create forecasting graphs.

Investment Management and Trading

To meet the investment needs of individuals, institutions, and governments, investment firms need to deliver a wide range of investment opportunities with risk-adjusted performance and consistent returns over time. To accomplish this, financial professionals need to develop and use mathematical models to optimize portfolios and develop trading strategies and systems that can respond to market conditions. Investment management and trading research groups use the thousands of math and graphics functions in MathWorks products to easily access securities data, perform statistical analysis, determine pricing, conduct sensitivity and principal component analyses, and implement buy and sell criteria. The graphics capabilities in MATLAB offer a variety of ways to easily review time series data, visualize portfolio risks and returns, and create forecasting graphs. With MathWorks deployment tools, you can easily compile and integrate your MATLAB algorithms into your system.

Mathematical and statistical approaches

According to Fund of Funds analyst Fred Gehm, "There are two types of quantitative analysis and, therefore, two types of quants. One type works primarily with mathematical models and the other primarily with statistical models. While there is no logical reason why one person can't do both kinds of work, this doesn't seem to happen, perhaps because these types demand different skill sets and, much more important, different psychologies."

A typical problem for a numerically oriented quantitative analyst would be to develop a model for pricing and managing a complex derivative product.

A typical problem for a statistically oriented quantitative analyst would be to develop a model for deciding which stocks are relatively expensive and which stocks are relatively cheap. The model might include a company's book value to price ratio, its trailing earnings to price ratio, and other accounting factors. An investment manager might implement this analysis by buying the underpriced stocks, selling the overpriced stocks, or both.
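A toy version of the relative-value screen just described might look like this; the tickers and accounting figures are invented for illustration:

```python
def book_to_price(book_value_per_share, price):
    """High book-to-price looks cheap; low book-to-price looks expensive."""
    return book_value_per_share / price

# Hypothetical universe: (ticker, book value per share, price).
universe = [
    ("AAA", 25.0, 40.0),
    ("BBB", 10.0, 50.0),
    ("CCC", 30.0, 35.0),
    ("DDD", 8.0, 60.0),
]

# Rank the universe from cheapest to richest by book-to-price.
ranked = sorted(universe, key=lambda row: book_to_price(row[1], row[2]),
                reverse=True)
cheap = [ticker for ticker, *_ in ranked[:2]]   # candidates to buy
rich = [ticker for ticker, *_ in ranked[-2:]]   # candidates to sell
print(cheap, rich)
```

A real model would combine several such factors (earnings-to-price, momentum, and so on) and control for sector and size; the ranking step, however, looks much like this.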

One of the principal mathematical tools of quantitative finance is stochastic calculus.

According to a July 2008 Aite Group report, today quants often use alpha generation platforms to help them develop financial models. These software solutions enable quants to centralize and streamline the alpha generation process.


Areas of application

Areas where computational finance techniques are employed include:

Investment banking
Forecasting
Risk management software
Corporate strategic planning
Securities trading and financial risk management
Derivatives trading and risk management
Investment management
Pension scheme
Insurance policy
Mortgage agreement
Lottery design
Islamic banking
Currency peg
Gold and commodity valuation
Collateralised debt obligation
Credit default swap
Bargaining
Market mechanism design


Classification of method

Computational finance or financial engineering is a cross-disciplinary field which relies on mathematical finance, numerical methods, computational intelligence and computer simulations to make trading, hedging and investment decisions, as well as facilitating the risk management of those decisions. Utilizing various methods, practitioners of computational finance aim to precisely determine the financial risk that certain financial instruments create.

1) Mathematical finance
2) Numerical methods
3) Computer simulations
4) Computational intelligence
5) Financial risk

Mathematical finance

Mathematical finance comprises the branches of applied mathematics concerned with the financial markets.

The subject has a close relationship with the discipline of financial economics, which is concerned with much of the underlying theory. Generally, mathematical finance will derive, and extend, the mathematical or numerical models suggested by financial economics. Thus, for example, while a financial economist might study the structural reasons why a company may have a certain share price, a financial mathematician may take the share price as a given, and attempt to use stochastic calculus to obtain the fair value of derivatives of the stock (see: Valuation of options). In terms of practice, mathematical finance also overlaps heavily with the field of computational finance (also known as financial engineering). Arguably, these are largely synonymous, although the latter focuses on application, while the former focuses on modeling and derivation (see: Quantitative analyst).

The fundamental theorem of arbitrage-free pricing is one of the key theorems in mathematical finance.

History

The history of mathematical finance starts with the theory of portfolio optimization by Harry Markowitz, on using mean-variance estimates of portfolios to judge investment strategies, causing a shift away from the concept of trying to identify the best individual stock for investment. Using a linear regression strategy to understand and quantify the risk (i.e. variance) and return (i.e. mean) of an entire portfolio of stocks and bonds, an optimization strategy was used to choose a portfolio with the largest mean return subject to acceptable levels of variance in the return. Simultaneously, William Sharpe developed the mathematics of determining the correlation between each stock and the market. For their pioneering work, Markowitz and Sharpe, along with Merton Miller, shared the 1990 Nobel Prize in economics, awarded for the first time ever for a work in finance.
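The mean-variance calculation at the heart of this story can be sketched for a two-asset portfolio; the return statistics below are invented:

```python
from math import sqrt

def portfolio_stats(weights, means, cov):
    """Mean and standard deviation of a portfolio's return.

    weights: asset weights summing to 1
    means:   expected return of each asset
    cov:     covariance matrix of asset returns (list of lists)
    """
    n = len(weights)
    mean = sum(weights[i] * means[i] for i in range(n))
    var = sum(weights[i] * weights[j] * cov[i][j]
              for i in range(n) for j in range(n))
    return mean, sqrt(var)

# Two assets: a volatile stock and a quieter bond, mildly correlated.
means = [0.10, 0.04]
cov = [[0.04, 0.002],
       [0.002, 0.01]]

mean, sd = portfolio_stats([0.5, 0.5], means, cov)
print(round(mean, 3), round(sd, 3))
```

Note the diversification effect Markowitz quantified: the portfolio's standard deviation comes out below the weighted average of the two assets' individual standard deviations (0.20 and 0.10), because the assets are far from perfectly correlated.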

The portfolio-selection work of Markowitz and Sharpe introduced mathematics to the "black art" of investment management. With time, the mathematics has become more sophisticated. Thanks to Robert Merton and Paul Samuelson, one-period models were replaced by continuous-time, Brownian-motion models, and the quadratic utility function implicit in mean-variance optimization was replaced by more general increasing, concave utility functions.

INTRODUCTION:

The mathematical formulation of problems arising in science, engineering, economics and finance involving rates of change with respect to one independent variable is governed by ordinary differential equations. Solutions of "real life" problems often require developing and applying numerical/computational techniques to model complex physical situations which otherwise are not possible to solve by analytical means. The choice of such a technique depends on how accurate a solution is required, along with several other factors including computing time constraints and the stability of a method. Once a suitable numerical technique has been applied and the problem is transformed into an algorithmic form, one can use the powerful computational facilities available.

GOALS:

Develop / derive techniques to solve an initial value problem (IVP) or boundary value problem (BVP) and calculate the associated local/global truncation errors.

Identify and apply an economical and efficient method to get a numerical solution of an ODE or a system of ODEs.

Examine the stability of various numerical schemes.

Compare numerical solutions of some easy differential equations with their analytical counterparts.

Gain experience of ODE solvers through MATLAB.
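The last two goals can be illustrated on an easy IVP with a known analytical counterpart: y′ = y, y(0) = 1, whose exact solution is eᵗ. Forward Euler is sketched in Python rather than a MATLAB solver, and the step counts are arbitrary; halving the step roughly halves the global error of this first-order scheme:

```python
from math import exp

def euler(f, y0, t0, t1, n):
    """Forward Euler for y' = f(t, y) with n equal steps on [t0, t1]."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)   # one explicit Euler step
        t += h
    return y

f = lambda t, y: y  # y' = y, exact solution y(t) = e^t

err_coarse = abs(euler(f, 1.0, 0.0, 1.0, 100) - exp(1.0))
err_fine = abs(euler(f, 1.0, 0.0, 1.0, 200) - exp(1.0))
print(round(err_coarse / err_fine, 2))  # close to 2: global error is O(h)
```

Doubling the step count again would halve the error once more, which is exactly the kind of truncation-error comparison the goals above call for.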


Applied mathematics

Applied mathematics is a branch of mathematics that concerns itself with the mathematical techniques typically used in the application of mathematical knowledge to other domains.

Divisions of applied mathematics

There is no consensus as to what the various branches of applied mathematics are. Such categorizations are made difficult by the way mathematics and science change over time, and also by the way universities organize departments, courses, and degrees. Historically, applied mathematics consisted principally of applied analysis, most notably differential equations, approximation theory (broadly construed, to include representations, asymptotic methods, variational methods, and numerical analysis), and applied probability. These areas of mathematics were intimately tied to the development of Newtonian physics, and in fact the distinction between mathematicians and physicists was not sharply drawn before the mid-19th century. This history left a legacy as well; until the early 20th century, subjects such as classical mechanics were often taught in applied mathematics departments at American universities rather than in physics departments, and fluid mechanics may still be taught in applied mathematics departments.

Today, the term applied mathematics is used in a broader sense. It includes the classical areas above, as well as other areas that have become increasingly important in applications. Even fields such as number theory that are part of pure mathematics are now important in applications (such as cryptology), though they are not generally considered to be part of the field of applied mathematics per se. Sometimes the term applicable mathematics is used to distinguish between the traditional field of applied mathematics and the many more areas of mathematics that are applicable to real-world problems.

Mathematicians distinguish between applied mathematics, which is concerned with mathematical methods, and the applications of mathematics within science and engineering. A biologist using a population model and applying known mathematics would not be doing applied mathematics, but rather using it. However, nonmathematicians do not usually draw this distinction. The use of mathematics to solve industrial problems is called industrial mathematics. Industrial mathematics is sometimes split into two branches: techno-mathematics (covering problems coming from technology) and econo-mathematics (for problems in economy and finance). The success of modern numerical mathematical methods and software has led to the emergence of computational mathematics, computational science, and computational engineering, which use high performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary programs.

Utility of applied mathematics

Historically, mathematics was most important in the natural sciences and engineering. However, after World War II, fields outside of the physical sciences have spawned the creation of new areas of mathematics, such as game theory, which grew out of economic considerations, or neural networks, which arose out of the study of the brain in neuroscience, or bioinformatics, from the importance of analyzing large data sets in biology.

The advent of the computer has created new applications, both in studying and using the new computer technology itself (computer science, which uses combinatorics, formal logic, and lattice theory), as well as using computers to study problems arising in other areas of science (computational science), and of course studying the mathematics of computation (numerical analysis). Statistics is probably the most widespread application of mathematics in the social sciences, but other areas of mathematics are proving increasingly useful in these disciplines, especially in economics and management science.

Other mathematical sciences (associated with applied mathematics)

Applied mathematics is closely related to other mathematical sciences.

Scientific computing

Scientific computing includes applied mathematics (especially numerical analysis), computing science (especially high-performance computing), and mathematical modelling in a scientific discipline.

Computer Science

Computer science relies on logic and combinatorics.

Operations research and management science

Operations research and management science are often taught in faculties of engineering, business, and public policy.

Statistics

Applied mathematics has substantial overlap with the discipline of statistics. Statistical theorists study and improve statistical procedures with mathematics, and statistical research often raises mathematical questions.


Statistical theory relies on probability and decision theory, and makes extensive use of scientific computing, analysis, and optimization; for the design of experiments, statisticians use algebra and combinatorics. Applied mathematicians and statisticians often work in a department of mathematical sciences (particularly at colleges and small universities).

Statisticians have long complained that many mathematics departments have assigned mathematicians (without statistical competence) to teach statistics courses, effectively giving "double blind" courses. Examining data from 2000, Schaeffer and Stasny reported:

By far the majority of instructors within statistics departments have at least a master's degree in statistics or biostatistics (about 89% for doctoral departments and about 79% for master's departments). In doctoral mathematics departments, however, only about 58% of statistics course instructors had at least a master's degree in statistics or biostatistics as their highest degree earned. In master's-level mathematics departments, the corresponding percentage was near 44%, and in bachelor's-level departments only 19% of statistics course instructors had at least a master's degree in statistics or biostatistics as their highest degree earned. As we expected, a large majority of instructors in statistics departments (83% for doctoral departments and 62% for master's departments) held doctoral degrees in either statistics or biostatistics. The comparable percentages for instructors of statistics in mathematics departments were about 52% and 38%.

This unprofessional conduct violates the "Statement on Professional Ethics" of the American Association of University Professors (which has been affirmed by many colleges and universities in the USA) and the ethical codes of the International Statistical Institute and the American Statistical Association. The principle that statistics instructors should have statistical competence has been affirmed by the guidelines of the Mathematical Association of America, which have been endorsed by the American Statistical Association.

Actuarial science

Actuarial science uses probability, statistics, and economic theory.

Other disciplines

The line between applied mathematics and specific areas of application is often blurred. Many universities teach mathematical and statistical courses outside of the respective departments, in departments and areas including business and economics, engineering, physics, psychology, biology, computer science, and mathematical physics.

Mathematical tools


Asymptotic analysis
Calculus
Copulas
Differential equation
Ergodic theory
Gaussian copulas
Numerical analysis
Real analysis
Probability
Probability distribution
  o Binomial distribution
  o Log-normal distribution
Expected value
Value at risk
Risk-neutral measure
Stochastic calculus
  o Brownian motion
  o Lévy process
Itô's lemma
Fourier transform
Girsanov's theorem
Radon-Nikodym derivative
Monte Carlo method
Quantile function
Partial differential equations
  o Heat equation
Martingale representation theorem
Feynman-Kac formula
Stochastic differential equations
Volatility
  o ARCH model
  o GARCH model
Stochastic volatility
Mathematical model
Numerical method
  o Numerical partial differential equations
      Crank-Nicolson method
      Finite difference method


Derivatives pricing

The Brownian Motion Model of Financial Markets
Rational pricing assumptions
  o Risk neutral valuation
  o Arbitrage-free pricing
Futures
  o Futures contract pricing
Options
  o Put–call parity (Arbitrage relationships for options)
  o Intrinsic value, Time value
  o Moneyness
  o Pricing models
      Black–Scholes model
      Black model
      Binomial options model
      Monte Carlo option model
      Implied volatility, Volatility smile
      SABR Volatility Model
      Markov Switching Multifractal
  o The Greeks
  o Optimal stopping (Pricing of American options)
Interest rate derivatives
  o Short rate model
      Hull-White model
      Cox-Ingersoll-Ross model
      Chen model
  o LIBOR Market Model
  o Heath-Jarrow-Morton framework
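One entry in the list above, the binomial options model, can be sketched as a Cox-Ross-Rubinstein tree for a European call. The parameters are illustrative and the step count arbitrary:

```python
from math import exp, sqrt

def crr_european_call(S0, K, T, r, sigma, steps=500):
    """European call price on a Cox-Ross-Rubinstein binomial tree."""
    dt = T / steps
    u = exp(sigma * sqrt(dt))           # up factor per step
    d = 1.0 / u                         # down factor per step
    p = (exp(r * dt) - d) / (u - d)     # risk-neutral up probability
    disc = exp(-r * dt)

    # Payoffs at expiry (j up-moves out of `steps`), then roll back
    # one layer at a time by discounted risk-neutral expectation.
    values = [max(S0 * u ** j * d ** (steps - j) - K, 0.0)
              for j in range(steps + 1)]
    for _ in range(steps):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

# Converges toward the Black-Scholes value (about 10.45) as steps grows.
print(round(crr_european_call(100, 100, 1.0, 0.05, 0.2), 2))
```

The same backward-induction loop, with an early-exercise comparison at each node, handles the American options mentioned under optimal stopping.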


Areas of application

Computational finance
Quantitative Behavioral Finance
Derivative (finance), list of derivatives topics
Modeling and analysis of financial markets
International Swaps and Derivatives Association
Fundamental financial concepts - topics
Model (economics)
List of finance topics
List of economics topics, List of economists
List of accounting topics
Statistical Finance


Numerical analysis

Numerical analysis is the study of algorithms for the problems of continuous mathematics (as distinguished from discrete mathematics).

One of the earliest mathematical writings is the Babylonian tablet YBC 7289, which gives a sexagesimal numerical approximation of √2, the length of the diagonal in a unit square. Being able to compute the sides of a triangle (and hence, being able to compute square roots) is extremely important, for instance, in carpentry and construction. In a rectangular wall section that is 2.40 meters by 3.75 meters, a diagonal beam has to be 4.45 meters long.

Numerical analysis continues this long tradition of practical mathematical calculations. Much like the Babylonian approximation to √2, modern numerical analysis does not seek exact answers, because exact answers are impossible to obtain in practice. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors.

Numerical analysis naturally finds applications in all fields of engineering and the physical sciences, but in the 21st century, the life sciences and even the arts have adopted elements of scientific computations. Ordinary differential equations appear in the movement of heavenly bodies (planets, stars and galaxies); optimization occurs in portfolio management; numerical linear algebra is essential to quantitative psychology; stochastic differential equations and Markov chains are essential in simulating living cells for medicine and biology.

Before the advent of modern computers, numerical methods often depended on hand interpolation in large printed tables. Since the mid 20th century, computers calculate the required functions instead. The interpolation algorithms nevertheless may be used as part of the software for solving differential equations.

General introduction

The overall goal of the field of numerical analysis is the design and analysis of techniques to give approximate but accurate solutions to hard problems, the variety of which is suggested by the following.

- Advanced numerical methods are essential in making numerical weather prediction feasible.
- Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of ordinary differential equations.
- Car companies can improve the crash safety of their vehicles by using computer simulations of car crashes. Such simulations essentially consist of solving partial differential equations numerically.
- Hedge funds (private investment funds) use tools from all fields of numerical analysis to calculate the value of stocks and derivatives more precisely than other market participants.


- Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and crew assignments and fuel needs. This field is also called operations research.
- Insurance companies use numerical programs for actuarial analysis.

The rest of this section outlines several important themes of numerical analysis.

History

The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method, Lagrange interpolation polynomial, Gaussian elimination, or Euler's method.

To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. Using these tables, often calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas given and achieve very good numerical estimates of some functions. The canonical work in the field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a very large number of commonly used formulas and functions and their values at many points. The function values are no longer very useful when a computer is available, but the large listing of formulas can still be very handy.

The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done.

Direct and iterative methods

Direct vs. iterative methods

Consider the problem of solving

3x³ + 4 = 28

for the unknown quantity x.

Direct Method

3x³ + 4 = 28.
Subtract 4: 3x³ = 24.
Divide by 3: x³ = 8.
Take cube roots: x = 2.


Note: the bisection procedure shown below differs slightly from the standard description of the bisection method.

For the iterative method, apply the bisection method to f(x) = 3x³ - 24. The initial values are a = 0, b = 3, f(a) = -24, f(b) = 57.

Iterative Method

a        b        mid      f(mid)
0        3        1.5      -13.875
1.5      3        2.25     10.17...
1.5      2.25     1.875    -4.22...
1.875    2.25     2.0625   2.32...

We conclude from this table that the solution is between 1.875 and 2.0625. The algorithm might return any number in that range with an error less than 0.2.
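The halving procedure tabulated above can be sketched in a few lines of Python (an illustrative sketch; the function and variable names are my own):

```python
def bisect(f, a, b, steps):
    """Bisection: repeatedly halve [a, b], keeping the sign change inside."""
    for _ in range(steps):
        mid = (a + b) / 2
        if f(a) * f(mid) <= 0:
            b = mid   # the root lies in [a, mid]
        else:
            a = mid   # the root lies in [mid, b]
    return a, b

f = lambda x: 3 * x**3 - 24
lo, hi = bisect(f, 0, 3, 4)
print(lo, hi)  # 1.875 2.0625, matching the last row of the table
```

Each halving shrinks the bracketing interval by a factor of two, so the error bound after n steps is (b - a) / 2ⁿ.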

Discretization and numerical integration

In a two-hour race, we have measured the speed of the car at three instants and recorded them in the following table.

 Time 0:20 1:00 1:40

km/h 140 150 180

A discretization would be to say that the speed of the car was constant from 0:00 to 0:40, then from 0:40 to 1:20 and finally from 1:20 to 2:00. For instance, the total distance traveled in the first 40 minutes is approximately (2/3 h × 140 km/h) = 93.3 km. This would allow us to estimate the total distance traveled as 93.3 km + 100 km + 120 km = 313.3 km, which is an example of numerical integration (see below) using a Riemann sum, because displacement is the integral of velocity.
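Carried out in code, the Riemann-sum estimate above looks like this (a minimal sketch of the same arithmetic):

```python
# Speeds (km/h) sampled at 0:20, 1:00 and 1:40, each treated as constant
# over one 40-minute (2/3-hour) slice of the two-hour race.
speeds = [140, 150, 180]
dt = 2 / 3                              # hours per slice
distance = sum(v * dt for v in speeds)  # Riemann sum of v(t) * dt
print(round(distance, 1))  # 313.3
```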

Ill-conditioned problem: Take the function f(x) = 1/(x − 1). Note that f(1.1) = 10 and f(1.001) = 1000: a change in x of less than 0.1 turns into a change in f(x) of nearly 1000. Evaluating f(x) near x = 1 is an ill-conditioned problem.
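The sensitivity claimed above is easy to verify numerically (a minimal check):

```python
# f(x) = 1/(x - 1) is ill-conditioned near x = 1: tiny input changes
# produce huge output changes.
f = lambda x: 1 / (x - 1)
dx = 1.1 - 1.001                 # a change in x of less than 0.1...
df = abs(f(1.001) - f(1.1))      # ...moves f(x) by nearly 1000
print(dx, df)
```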

Well-posed problem: By contrast, the function is continuous and so evaluating it is well-posed, at least for x not close to zero.

Direct methods compute the solution to a problem in a finite number of steps. These methods would give the precise answer if they were performed in infinite precision arithmetic. Examples include Gaussian elimination, the QR factorization method for solving systems of linear equations, and the simplex


method of linear programming. In practice, finite precision is used and the result is an approximation of the true solution (assuming stability).
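As a concrete instance of a direct method, here is a bare-bones Gaussian elimination with back-substitution (a sketch only: no pivoting, so it assumes the pivots stay nonzero; names are my own):

```python
def gauss_solve(A, b):
    """Solve Ax = b by forward elimination then back-substitution."""
    n = len(A)
    A = [row[:] for row in A]  # work on copies
    b = b[:]
    for k in range(n):                  # eliminate column k below the pivot
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):      # back-substitute from the last row up
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# 2x + y = 5 and x + 3y = 10 have the solution x = 1, y = 3
print(gauss_solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))  # [1.0, 3.0]
```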

In contrast to direct methods, iterative methods are not expected to terminate in a finite number of steps. Starting from an initial guess, iterative methods form successive approximations that converge to the exact solution only in the limit. A convergence criterion is specified in order to decide when a sufficiently accurate solution has (hopefully) been found. Even using infinite precision arithmetic these methods would not reach the solution within a finite number of steps (in general). Examples include Newton's method, the bisection method, and Jacobi iteration. In computational matrix algebra, iterative methods are generally needed for large problems.

Iterative methods are more common than direct methods in numerical analysis. Some methods are direct in principle but are usually used as though they were not, e.g. GMRES and the conjugate gradient method. For these methods the number of steps needed to obtain the exact solution is so large that an approximation is accepted in the same manner as for an iterative method.
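Newton's method, mentioned above, illustrates the iterative pattern: start from a guess and refine it until the answer is accurate enough. A sketch, applied to the sidebar equation 3x³ + 4 = 28:

```python
def newton(f, df, x, steps):
    """Newton's method: repeatedly follow the tangent line toward a root of f."""
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

f  = lambda x: 3 * x**3 - 24   # root of f is the solution of 3x^3 + 4 = 28
df = lambda x: 9 * x**2        # derivative of f
root = newton(f, df, 3.0, 8)
print(root)  # converges to 2.0 in a handful of iterations
```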

Discretization

Furthermore, continuous problems must sometimes be replaced by a discrete problem whose solution is known to approximate that of the continuous problem; this process is called discretization. For example, the solution of a differential equation is a function. This function must be represented by a finite amount of data, for instance by its value at a finite number of points in its domain, even though this domain is a continuum.

The generation and propagation of errors

The study of errors forms an important part of numerical analysis. There are several ways in which error can be introduced in the solution of the problem.

Round-off 

Round-off errors arise because it is impossible to represent all real numbers exactly on a finite-state machine (which is what all practical digital computers are).
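A classic one-line demonstration: 0.1 has no exact binary representation, so adding it ten times does not give exactly 1.

```python
total = sum([0.1] * 10)
print(total == 1.0)      # False: the stored value of 0.1 is slightly off,
print(abs(total - 1.0))  # and the tiny error survives the summation
```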

Truncation and discretization error

Truncation errors are committed when an iterative method is terminated and the approximate solution differs from the exact solution. Similarly, discretization induces a discretization error because the solution of the discrete problem does not coincide with the solution of the continuous problem. For instance, in the iteration in the sidebar to compute the solution of 3x³ + 4 = 28, after 10 or so iterations, we conclude that the root is roughly 1.99 (for example). We therefore have a truncation error of 0.01.


Once an error is generated, it will generally propagate through the calculation. For instance, we have already noted that the operation + on a calculator (or a computer) is inexact. It follows that a calculation of the type a + b + c + d + e is even more inexact.

Numerical stability and well-posed problems

Numerical stability is an important notion in numerical analysis. An algorithm is called numerically stable if an error, whatever its cause, does not grow to be much larger during the calculation. This happens if the problem is well-conditioned, meaning that the solution changes by only a small amount if the problem data are changed by a small amount. To the contrary, if a problem is ill-conditioned, then any small error in the data will grow to be a large error. Both the original problem and the algorithm used to solve that problem can be well-conditioned and/or ill-conditioned, and any combination is possible.

So an algorithm that solves a well-conditioned problem may be either numerically stable or numerically unstable. An art of numerical analysis is to find a stable algorithm for solving a well-posed mathematical problem. For instance, computing the square root of 2 (which is roughly 1.41421) is a well-posed problem. Many algorithms solve this problem by starting with an initial approximation x1 to √2, for instance x1 = 1.4, and then computing improved guesses x2, x3, etc. One such method is the famous Babylonian method, which is given by xk+1 = xk/2 + 1/xk. Another iteration, which we will call Method X, is given by xk+1 = (xk² − 2)² + xk. We have calculated a few iterations of each scheme in table form below, with initial guesses x1 = 1.4 and x1 = 1.42.

Babylonian       Babylonian       Method X                Method X
x1 = 1.4         x1 = 1.42        x1 = 1.4                x1 = 1.42
x2 = 1.41...     x2 = 1.41...     x2 = 1.40               x2 = 1.42
x3 = 1.41...     x3 = 1.41...     x3 = 1.40...            x3 = 1.42...
...              ...              ...                     ...
                                  x1000000 = 1.41421...   x28 = 7280.2284...
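Both update rules are one-liners, so the behavior in the table is easy to reproduce (a small sketch; the helper name is my own):

```python
def iterate(step, x, n):
    """Apply the update rule `step` to x, n times."""
    for _ in range(n):
        x = step(x)
    return x

babylonian = lambda x: x / 2 + 1 / x          # numerically stable
method_x   = lambda x: (x * x - 2) ** 2 + x   # numerically unstable

print(iterate(babylonian, 1.4, 10))   # ~1.41421356, converged long ago
print(iterate(method_x, 1.42, 10))    # already drifting away from sqrt(2)
```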

Observe that the Babylonian method converges fast regardless of the initial guess, whereas Method X converges extremely slowly with initial guess 1.4 and diverges for initial guess 1.42. Hence, the Babylonian method is numerically stable, while Method X is numerically unstable.

Areas of study


The field of numerical analysis is divided into different disciplines according to the problem that is to be solved.

Computing values of functions

Interpolation: We have observed the temperature to vary from 20 degrees Celsius at 1:00 to 14 degrees at 3:00. A linear interpolation of this data would conclude that it was 17 degrees at 2:00 and 18.5 degrees at 1:30.

Extrapolation: If the gross domestic product of a country has been growing an average of 5% per year and was 100 billion dollars last year, we might extrapolate that it will be 105 billion dollars this year.

Regression: In linear regression, given n points, we compute a line that passes as close as possible to those n points.

Optimization: Say you sell lemonade at a lemonade stand, and notice that at $1, you can sell 197 glasses of lemonade per day, and that for each increase of $0.01, you will sell one less lemonade per day. If you could charge $1.485, you would maximize your profit, but due to the constraint of having to charge a whole-cent amount, charging $1.49 per glass will yield the maximum income of $220.52 per day.
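Because prices move in whole cents, the optimum can be checked by brute force (a sketch of the arithmetic above; note that $1.48 ties $1.49, since the continuous optimum $1.485 lies exactly halfway between them):

```python
def income(cents):
    """Daily income in dollars when charging `cents` per glass."""
    glasses = 197 - (cents - 100)   # one fewer glass per cent above $1.00
    return cents * glasses / 100

best = max(income(c) for c in range(100, 298))
print(best)  # 220.52
```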

Differential equation: If you set up 100 fans to blow air from one end of the room to the other and then you drop a feather into the wind, what happens? The feather will follow the air currents, which may be very complex. One approximation is to measure the speed at which the air is blowing near the feather every second, and advance the simulated feather as if it were moving in a straight line at that same speed for one second, before measuring the wind speed again. This is called the Euler method for solving an ordinary differential equation.
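The Euler method itself is only a few lines. As a stand-in for the wind field, the sketch below uses the simple equation x′ = −x, whose exact solution is e^(−t) (the names and the test equation are my own choices):

```python
def euler(f, x0, t0, t1, n):
    """Advance x' = f(t, x) from t0 to t1 in n straight-line steps."""
    h = (t1 - t0) / n
    t, x = t0, x0
    for _ in range(n):
        x += h * f(t, x)   # move at the current slope for one step of length h
        t += h
    return x

approx = euler(lambda t, x: -x, 1.0, 0.0, 1.0, 1000)
print(approx)  # close to exp(-1) = 0.36787944...
```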

One of the simplest problems is the evaluation of a function at a given point. The most straightforward approach, of just plugging in the number in the formula, is sometimes not very efficient. For polynomials, a better approach is using the Horner scheme, since it reduces the necessary number of multiplications and additions. Generally, it is important to estimate and control round-off errors arising from the use of floating point arithmetic.
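The Horner scheme rewrites a polynomial such as 2x³ − 6x² + 2x − 1 as ((2x − 6)x + 2)x − 1, so each coefficient costs one multiplication and one addition (a minimal sketch):

```python
def horner(coeffs, x):
    """Evaluate a polynomial; coefficients are given from highest degree down."""
    result = 0
    for c in coeffs:
        result = result * x + c   # one multiply and one add per coefficient
    return result

print(horner([2, -6, 2, -1], 3))  # 2*27 - 6*9 + 2*3 - 1 = 5
```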

Interpolation, extrapolation, and regression

Interpolation solves the following problem: given the value of some unknown function at a number of points, what value does that function have at some other point between the given points? A very simple method is to use linear interpolation, which assumes that the unknown function is linear between every pair of successive points. This can be generalized to polynomial interpolation,


which is sometimes more accurate but suffers from Runge's phenomenon. Other interpolation methods use localized functions like splines or wavelets.
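Piecewise-linear interpolation can be sketched as follows (illustrative; the data are the temperature readings from the example earlier, with times written as fractional hours):

```python
def lerp(points, x):
    """Piecewise-linear interpolation through (x, y) pairs sorted by x."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)   # fraction of the way across the gap
            return y0 + t * (y1 - y0)
    raise ValueError("x is outside the data range")

temps = [(1.0, 20.0), (3.0, 14.0)]   # 20 degrees at 1:00, 14 degrees at 3:00
print(lerp(temps, 2.0))   # 17.0
print(lerp(temps, 1.5))   # 18.5
```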

Extrapolation is very similar to interpolation, except that now we want to find the value of the unknown function at a point which is outside the given points.

Regression is also similar, but it takes into account that the data is imprecise.

Given some points, and a measurement of the value of some function at these points (with an error), we want to determine the unknown function. The least-squares method is one popular way to achieve this.
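For a straight-line fit y ≈ ax + b, the least-squares solution has a closed form via the usual normal equations (a sketch; the sample data are invented for illustration):

```python
def fit_line(xs, ys):
    """Least-squares slope a and intercept b for the model y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Four noisy measurements scattered around the line y = 2x + 1
a, b = fit_line([0, 1, 2, 3], [1.1, 2.9, 5.1, 6.9])
print(a, b)  # close to 2 and 1
```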

Areas of application

- Scientific computing
- List of numerical analysis topics
- Gram-Schmidt process
- Numerical differentiation
- Symbolic-numeric computation

General

- numerical-methods.com
- numericalmathematics.com
- Numerical Recipes
- "Alternatives to Numerical Recipes"
- Scientific computing FAQ
- Numerical analysis DMOZ category
- Numerical Computing Resources on the Internet - maintained by Indiana University Stat/Math Center
- Numerical Methods Resources

Software

Since the late twentieth century, most algorithms are implemented in a variety of programming languages. The Netlib repository contains various collections of software routines for numerical problems, mostly in Fortran and C. Commercial products implementing many different numerical algorithms include the IMSL and NAG libraries; a free alternative is the GNU Scientific Library.

There are several popular numerical computing applications such as MATLAB, S-PLUS, LabVIEW, IDL and Scilab, as well as free and open source alternatives such as FreeMat, GNU Octave (similar to MATLAB), IT++ (a C++ library), R (similar to S-PLUS) and certain variants of Python. Performance varies widely: while vector and matrix operations are usually fast, scalar loops may vary in speed by more than an order of magnitude.


Many computer algebra systems such as Mathematica also benefit from the availability of arbitrary precision arithmetic, which can provide more accurate results. Also, any spreadsheet software can be used to solve simple problems relating to numerical analysis.


Computational intelligence

Computational intelligence (CI) is an offshoot of artificial intelligence. As an alternative to GOFAI it relies instead on heuristic algorithms such as those in fuzzy systems, neural networks and evolutionary computation. In addition, computational intelligence also embraces techniques that use swarm intelligence, fractals and chaos theory, artificial immune systems, wavelets, etc.

Computational intelligence combines elements of learning, adaptation, evolution and fuzzy logic (rough sets) to create programs that are, in some sense, intelligent. Computational intelligence research does not reject statistical methods, but often gives a complementary view (as is the case with fuzzy systems). Artificial neural networks are a branch of computational intelligence that is closely related to machine learning. Computational intelligence is further closely associated with soft computing, connectionist systems and cybernetics.

Artificial intelligence

Artificial Intelligence (AI) is the intelligence of machines and the branch of computer science which aims to create it. Major AI textbooks define the field as "the study and design of intelligent agents," where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines."

The field was founded on the claim that a central property of human beings, intelligence—the sapience of Homo sapiens—can be so precisely described that it can be simulated by a machine. This raises philosophical issues about the nature of the mind and the limits of scientific hubris, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of breathtaking optimism, has suffered stunning setbacks and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science.

AI research is highly technical and specialized, so much so that some critics decry the "fragmentation" of the field. Subfields of AI are organized around particular problems, the application of particular tools and longstanding theoretical differences of opinion. The central problems of AI include such traits as reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects. General intelligence (or "strong AI") is still a long-term goal of (some) research.


Perspectives on CI

CI in myth, fiction and speculation

Thinking machines and artificial beings appear in Greek myths, such as Talos of Crete, the golden robots of Hephaestus and Pygmalion's Galatea. Human likenesses believed to have intelligence were built in many ancient societies; some of the earliest were the sacred statues worshipped in Egypt and Greece, and the machines of Yan Shi, Hero of Alexandria, Al-Jazari or Wolfgang von Kempelen. It was widely believed that artificial beings had been created by Geber, Judah Loew and Paracelsus. Stories of these creatures and their fates discuss many of the same hopes, fears and ethical concerns that are presented by artificial intelligence.

Mary Shelley's Frankenstein considers a key issue in the ethics of artificial intelligence: if a machine can be created that has intelligence, could it also feel? If it can feel, does it have the same rights as a human being? The idea also appears in modern science fiction: the film Artificial Intelligence: A.I. considers a machine in the form of a small boy which has been given the ability to feel human emotions, including, tragically, the capacity to suffer. This issue, now known as "robot rights", is currently being considered by, for example, California's Institute for the Future, although many critics believe that the discussion is premature.

Another issue explored by both science fiction writers and futurists is the impact of artificial intelligence on society. In fiction, AI has appeared as a servant (R2D2 in Star Wars), a law enforcer (K.I.T.T. in Knight Rider), a comrade (Lt. Commander Data in Star Trek), a conqueror (The Matrix), a dictator (With Folded Hands), an exterminator (Terminator, Battlestar Galactica), an extension to human abilities (Ghost in the Shell) and the saviour of the human race (R. Daneel Olivaw in the Foundation Series). Academic sources have considered such consequences as: a decreased demand for human labor, the enhancement of human ability or experience, and a need for redefinition of human identity and basic values.

Several futurists argue that artificial intelligence will transcend the limits of progress and fundamentally transform humanity. Ray Kurzweil has used Moore's law (which describes the relentless exponential improvement in digital technology with uncanny accuracy) to calculate that desktop computers will have the same processing power as human brains by the year 2029, and that by 2045 artificial intelligence will reach a point where it is able to improve itself at a rate that far exceeds anything conceivable in the past, a scenario that science fiction writer Vernor Vinge named the "technological singularity". Edward Fredkin argues that "artificial intelligence is the next stage in evolution," an idea first proposed by Samuel Butler's "Darwin among the Machines" (1863), and expanded upon by George Dyson in his book of the same name in 1998. Several futurists and science fiction writers have predicted that human beings and machines will merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, which has roots in Aldous Huxley and Robert Ettinger, is now associated with robot designer Hans Moravec, cyberneticist Kevin Warwick


and inventor Ray Kurzweil. Transhumanism has been illustrated in fiction as well, for example in the manga Ghost in the Shell and the science fiction series Dune. Pamela McCorduck writes that these scenarios are expressions of an ancient human desire to, as she calls it, "forge the gods."

History of CI research

In the middle of the 20th century, a handful of scientists began a new approach to building intelligent machines, based on recent discoveries in neurology, a new mathematical theory of information, an understanding of control and stability called cybernetics, and above all, the invention of the digital computer, a machine based on the abstract essence of mathematical reasoning.

The field of modern AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956. Those who attended would become the leaders of AI research for many decades, especially John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, who founded AI laboratories at MIT, CMU and Stanford. They and their students wrote programs that were, to most people, simply astonishing: computers were solving word problems in algebra, proving logical theorems and speaking English. By the middle 60s their research was heavily funded by the U.S. Department of Defense, and they were optimistic about the future of the new field:

- 1965, H. A. Simon: "[M]achines will be capable, within twenty years, of doing any work a man can do."
- 1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."

These predictions, and many like them, would not come true. They had failed to recognize the difficulty of some of the problems they faced. In 1974, in response to the criticism of England's Sir James Lighthill and ongoing pressure from Congress to fund more productive projects, the U.S. and British governments cut off all undirected, exploratory research in AI. This was the first AI winter.

In the early 80s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of one or more human experts. By 1985 the market for AI had reached more than a billion dollars, and governments around the world poured money back into the field. However, just a few years later, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting AI winter began.

In the 90s and early 21st century, AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence is used for logistics, data mining, medical diagnosis and many other areas throughout the technology industry. The success was due to several factors: the incredible power of computers today (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on


similar problems, and above all a new commitment by researchers to solid mathematical methods and rigorous scientific standards.

Philosophy of AI

Artificial intelligence, by claiming to be able to recreate the capabilities of the human mind, is both a challenge and an inspiration for philosophy. Are there limits to how intelligent machines can be? Is there an essential difference between human intelligence and artificial intelligence? Can a machine have a mind and consciousness? A few of the most influential answers to these questions are given below.

Turing's "polite convention"

If a machine acts as intelligently as a human being, then it is as intelligent as a human being. Alan Turing theorized that, ultimately, we can only judge the intelligence of a machine based on its behavior. This theory forms the basis of the Turing test.

The Dartmouth proposal

"Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it." This assertion was printed in the proposal for the Dartmouth Conference of 1956, and represents the position of most working AI researchers.

Newell and Simon's physical symbol system hypothesis

"A physical symbol system has the necessary and sufficient means of general intelligent action." This statement claims that the essence of intelligence is symbol manipulation. Hubert Dreyfus argued that, on the contrary, human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation rather than explicit symbolic knowledge.

Gödel's incompleteness theorem

A formal system (such as a computer program) cannot prove all true statements. Roger Penrose is among those who claim that Gödel's theorem limits what machines can do.

Searle's strong AI hypothesis

"The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds." Searle counters this assertion with his Chinese room argument, which asks us to look inside the computer and try to find where the "mind" might be.


The artificial brain argument

The brain can be simulated. Hans Moravec, Ray Kurzweil and others have argued that it is technologically feasible to copy the brain directly into hardware and software, and that such a simulation will be essentially identical to the original.

CI research

In the 21st century, AI research has become highly specialized and technical. It is deeply divided into subfields that often fail to communicate with each other. Subfields have grown up around particular institutions, the work of particular researchers, particular problems (listed below), long-standing differences of opinion about how AI should be done (listed as "approaches" below) and the application of widely differing tools (see tools of AI, below).

Problems of AI

The problem of simulating (or creating) intelligence has been broken down into a number of specific sub-problems. These consist of particular traits or capabilities that researchers would like an intelligent system to display. The traits described below have received the most attention.

Deduction, reasoning, problem solving

Early AI researchers developed algorithms that imitated the step-by-step reasoning that human beings use when they solve puzzles, play board games or make logical deductions. By the late 80s and 90s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.

For difficult problems, most of these algorithms can require enormous computational resources — most experience a "combinatorial explosion": the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size. The search for more efficient problem-solving algorithms is a high priority for AI research.

Human beings solve most of their problems using fast, intuitive judgments rather than the conscious, step-by-step deduction that early AI research was able to model. AI has made some progress at imitating this kind of "sub-symbolic" problem solving: embodied approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside human and animal brains that give rise to this skill.

Knowledge representation

Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive


Computer vision is the ability to analyze visual input. A few selected subproblems are speech recognition, facial recognition and object recognition.

Social intelligence

Emotion and social skills play two roles for an intelligent agent:

- It must be able to predict the actions of others, by understanding their motives and emotional states. (This involves elements of game theory, decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.)
- For good human-computer interaction, an intelligent machine also needs to display emotions — at the very least it must appear polite and sensitive to the humans it interacts with. At best, it should have normal emotions itself.

Creativity

A sub-field of AI addresses creativity both theoretically (from a philosophical and psychological perspective) and practically (via specific implementations of systems that generate outputs that can be considered creative).

General intelligence

Most researchers hope that their work will eventually be incorporated into a machine with general intelligence (known as strong AI), combining all the skills above and exceeding human abilities at most or all of them. A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.

Many of the problems above are considered AI-complete: to solve one problem you must solve them all. For example, even a straightforward, specific task like machine translation requires that the machine follow the author's argument (reason), know what it's talking about (knowledge), and faithfully reproduce the author's intention (social intelligence). Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it.

Approaches to CI

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues. A few of the most long-standing questions that have remained unanswered are these: Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require "sub-symbolic" processing? Should artificial intelligence simulate natural intelligence, by studying human psychology or animal neurobiology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering? Can intelligent behavior be described using simple, elegant principles (such as logic)?



Researchers at MIT (such as Marvin Minsky and Seymour Papert) found that solving difficult problems in vision and natural language processing required ad-hoc solutions – they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their "anti-logic" approaches as "scruffy" (as opposed to the "neat" paradigms at CMU and Stanford). Commonsense knowledge bases (such as Doug Lenat's Cyc) are an example of "scruffy" AI, since they must be built by hand, one complicated concept at a time.

Knowledge based AI

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications. This "knowledge revolution" led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software. The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

Sub-symbolic AI

During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background. By the 1980s, however, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into "sub-symbolic" approaches to specific AI problems.

Bottom-up, embodied, situated, behavior-based or nouvelle AI

Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive. Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 50s and reintroduced the use of control theory in AI. These approaches are also conceptually related to the embodied mind thesis.



Computational Intelligence

Interest in neural networks and "connectionism" was revived by David Rumelhart and others in the middle 1980s. These and other sub-symbolic approaches, such as fuzzy systems and evolutionary computation, are now studied collectively by the emerging discipline of computational intelligence.

Statistical AI

In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI's recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research). Russell & Norvig (2003) describe this movement as nothing less than a "revolution" and "the victory of the neats."

Integrating the approaches

Intelligent agent paradigm

An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. The simplest intelligent agents are programs that solve specific problems. The most complicated intelligent agents are rational, thinking human beings. The paradigm gives researchers license to study isolated problems and find solutions that are both verifiable and useful, without agreeing on one single approach. An agent that solves a specific problem can use any approach that works: some agents are symbolic and logical, some are sub-symbolic neural networks and others may use new approaches. The paradigm also gives researchers a common language to communicate with other fields, such as decision theory and economics, that also use concepts of abstract agents. The intelligent agent paradigm became widely accepted during the 1990s.

An agent architecture or cognitive architecture

Researchers have designed systems to build intelligent systems out of interacting intelligent agents in a multi-agent system. A system with both symbolic and sub-symbolic components is a hybrid intelligent system, and the study of such systems is artificial intelligence systems integration. A hierarchical control system provides a bridge between sub-symbolic AI at its lowest, reactive levels and traditional symbolic AI at its highest levels, where relaxed time constraints permit planning and world modelling. Rodney Brooks' subsumption architecture was an early proposal for such a hierarchical system.



Tools of CI research

In the course of 50 years of research, AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

Search and optimization

Many problems in AI can be solved in theory by intelligently searching through many possible solutions: Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule. Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis. Robotics algorithms for moving limbs and grasping objects use local searches in configuration space. Many learning algorithms use search algorithms based on optimization.

Simple exhaustive searches are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use "heuristics" or "rules of thumb" that eliminate choices that are unlikely to lead to the goal (called "pruning the search tree"). Heuristics supply the program with a "best guess" for what path the solution lies on.

A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally

until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.
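The blind hill climbing described here can be sketched in a few lines of Python; the objective function, step size and starting point are illustrative assumptions, not part of any standard algorithm statement:

```python
# Minimal hill-climbing sketch: repeatedly step toward whichever
# neighbouring guess improves the objective, stop at a local optimum.

def hill_climb(f, x, step=0.1, max_iters=10_000):
    """Greedily move x uphill until no neighbouring step improves f."""
    for _ in range(max_iters):
        candidates = [x - step, x + step]   # look one step left and right
        best = max(candidates, key=f)
        if f(best) <= f(x):                 # no refinement possible
            return x
        x = best
    return x

# Landscape with a single peak at x = 3 (an assumed toy objective).
peak = hill_climb(lambda x: -(x - 3) ** 2, x=0.0)
print(round(peak, 1))
```

Because the search only ever moves uphill, it finds the nearest peak, which on a multi-modal landscape may be only a local optimum; simulated annealing and random restarts are the usual remedies.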

Evolutionary computation uses a form of optimization search. For example, an evolutionary search may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Forms of evolutionary computation include swarm intelligence algorithms (such as ant colony or particle swarm optimization) and evolutionary algorithms (such as genetic algorithms and genetic programming).
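A genetic algorithm of this kind can be sketched as follows; the OneMax fitness function (count the 1-bits), population size, mutation rate and generation count are illustrative assumptions:

```python
import random

# Toy evolutionary-computation sketch (OneMax): evolve a bit string
# toward all ones by selection and mutation. Seeded for repeatability.
random.seed(0)

def fitness(bits):                       # quality of a "guess"
    return sum(bits)

def mutate(bits, rate=0.05):
    return [b ^ (random.random() < rate) for b in bits]

def evolve(length=20, pop_size=30, generations=100):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]              # fittest half survive
        children = [mutate(random.choice(survivors))  # refined guesses
                    for _ in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```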

Logic

Logic was introduced into AI research by John McCarthy in his 1958 Advice Taker proposal. In 1963, J. Alan Robinson discovered a simple, complete and entirely algorithmic method for logical deduction which can easily be performed by digital computers. However, a naive implementation of the algorithm quickly leads to a combinatorial explosion or an infinite loop. In 1974, Robert Kowalski



suggested representing logical expressions as Horn clauses (statements in the form of rules: "if p then q"), which reduced logical deduction to backward chaining or forward chaining. This greatly alleviated (but did not eliminate) the problem.

Logic is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning, and inductive logic programming is a method for learning. There are several different forms of logic used in AI research.

Propositional or sentential logic is the logic of statements which can be true or false.
First-order logic also allows the use of quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other.
Fuzzy logic, a version of first-order logic which allows the truth of a statement to be represented as a value between 0 and 1, rather than simply True (1) or False (0). Fuzzy systems can be used for uncertain reasoning and have been widely used in modern industrial and consumer product control systems.
Default logics, non-monotonic logics and circumscription are forms of logic designed to help with default reasoning and the qualification problem.
Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics; situation calculus, event calculus and fluent calculus (for representing events and time); causal calculus; belief calculus; and modal logics.
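Forward chaining over Horn clauses of the "if p then q" form can be sketched in a few lines; the particular rules and starting facts here are illustrative assumptions:

```python
# Minimal forward-chaining sketch over Horn clauses: fire every rule
# whose premises are already known, until no new fact is derived.

rules = [                       # (premises, conclusion), i.e. "if p then q"
    ({"p"}, "q"),
    ({"q", "r"}, "s"),
]
facts = {"p", "r"}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)       # derived a new fact
            changed = True

print(sorted(facts))
```

Backward chaining runs the same rules in the other direction, starting from a goal and searching for premises that support it.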

Probabilistic methods for uncertain reasoning

Many problems in AI (in reasoning, planning, learning, perception and robotics) require the agent to operate with incomplete or uncertain information. Starting in the late 80s and early 90s, Judea Pearl and others championed the use of methods drawn from probability theory and economics to devise a number of powerful tools to solve these problems.

Bayesian networks are a very general tool that can be used for a large number of problems: reasoning (using the Bayesian inference algorithm), learning (using the expectation-maximization algorithm), planning (using decision networks) and perception (using dynamic Bayesian networks). Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).

A key concept from the science of economics is "utility": a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using



decision theory, decision analysis, and information value theory. These tools include models such as Markov decision processes, dynamic decision networks, game theory and mechanism design.
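A hand-worked sketch of these two ideas together, a Bayes-rule update followed by an expected-utility choice; the events, prior, likelihoods and utilities are illustrative assumptions:

```python
# Bayes-rule update under uncertainty, then a utility-maximizing choice.

prior = {"rain": 0.3, "dry": 0.7}          # assumed prior P(state)
likelihood = {"rain": 0.9, "dry": 0.2}     # assumed P(clouds | state)

# P(state | clouds) is proportional to P(clouds | state) * P(state).
unnorm = {s: likelihood[s] * prior[s] for s in prior}
z = sum(unnorm.values())
posterior = {s: v / z for s, v in unnorm.items()}

# Decision-theoretic step: pick the action with highest expected utility.
utility = {("umbrella", "rain"): 8, ("umbrella", "dry"): 6,
           ("none", "rain"): 0, ("none", "dry"): 10}

def expected_utility(action):
    return sum(posterior[s] * utility[(action, s)] for s in posterior)

best_action = max(["umbrella", "none"], key=expected_utility)
print(posterior["rain"], best_action)
```

Seeing clouds lifts the probability of rain from 0.3 to 27/41 (about 0.66), which is enough to make carrying the umbrella the higher-expected-utility action.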

Classifiers and statistical learning methods

The simplest AI applications can be divided into two types: classifiers ("if shiny then diamond") and controllers ("if shiny then pick up"). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems.

Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience. A classifier can be trained in various ways; there are many statistical and machine learning approaches.

A wide range of classifiers are available, each with its strengths and weaknesses. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is also referred to as the "no free lunch" theorem. Various empirical tests have been performed to compare classifier performance and to find the characteristics of data that determine classifier performance. Determining a suitable classifier for a given problem is, however, still more an art than a science.
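As a concrete instance of classification from labeled observations, a minimal k-nearest-neighbour sketch; the training observations, class labels and value of k are illustrative assumptions:

```python
from collections import Counter

# Minimal k-nearest-neighbour sketch: classify a new observation by a
# majority vote among the k closest labeled observations.

train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((5.0, 5.0), "B"), ((5.2, 4.9), "B"), ((4.8, 5.1), "B")]

def classify(x, k=3):
    dist = lambda p: (p[0] - x[0]) ** 2 + (p[1] - x[1]) ** 2
    neighbours = sorted(train, key=lambda item: dist(item[0]))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

print(classify((1.1, 0.9)))   # near the "A" cluster
print(classify((5.0, 5.0)))   # near the "B" cluster
```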

The most widely used classifiers are the neural network, kernel methods such as the support vector machine, the k-nearest neighbor algorithm, the Gaussian mixture model, the naive Bayes classifier, and the decision tree. The performance of these classifiers has been compared over a wide range of classification tasks in order to find data characteristics that determine classifier performance.

Neural networks

A neural network is an interconnected group of nodes, akin to the vast network of neurons in the human brain.

The study of artificial neural networks began in the decade before the field of AI research was founded. In the 1960s Frank Rosenblatt developed an important early version, the perceptron. Paul Werbos developed the backpropagation algorithm for multilayer perceptrons in 1974, which led to a renaissance in neural network research and connectionism in general in the middle 1980s. The Hopfield net, a form of attractor network, was first described by John Hopfield in 1982.



Common network architectures which have been developed include the feedforward neural network, the radial basis network, the Kohonen self-organizing map and various recurrent neural networks. Neural networks are applied to the problem of learning, using such techniques as Hebbian learning, competitive learning and the relatively new architectures of Hierarchical Temporal Memory and Deep Belief Networks.
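A minimal sketch of Rosenblatt's perceptron learning rule, trained here on the logical AND function; the learning rate and epoch count are illustrative assumptions:

```python
# Perceptron sketch: a single node adjusts its weights toward targets.
# AND is linearly separable, so the rule converges in a few epochs.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
w = [0.0, 0.0]
b = 0.0
lr = 0.1          # assumed learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                      # assumed number of epochs
    for x, target in data:
        error = target - predict(x)      # zero when already correct
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])
```

A single perceptron can only separate classes with a line (or hyperplane); overcoming that limit is exactly what the multilayer networks and backpropagation mentioned above provide.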

Control theory

Control theory, the grandchild of cybernetics, has many important applications, especially in robotics.

Specialized languages

AI researchers have developed several specialized languages for AI research:

IPL was the first language developed for artificial intelligence. It includes features intended to support programs that could perform general problem solving, including lists, associations, schemas (frames), dynamic memory allocation, data types, recursion, associative retrieval, functions as arguments, generators (streams), and cooperative multitasking.
Lisp is a practical mathematical notation for computer programs based on lambda calculus. Linked lists are one of Lisp languages' major data structures, and Lisp source code is itself made up of lists. As a result, Lisp programs can manipulate source code as a data structure, giving rise to the macro systems that allow programmers to create new syntax or even new domain-specific programming languages embedded in Lisp. There are many dialects of Lisp in use today.
Prolog is a declarative language where programs are expressed in terms of relations, and execution occurs by running queries over these relations. Prolog is particularly useful for symbolic reasoning, database and language parsing applications. Prolog is widely used in AI today.
STRIPS is a language for expressing automated planning problem instances. It expresses an initial state, the goal states, and a set of actions. For each action, preconditions (what must be established before the action is performed) and postconditions (what is established after the action is performed) are specified.
Planner is a hybrid between procedural and logical languages. It gives a procedural interpretation to logical sentences where implications are interpreted with pattern-directed inference.

AI applications are also often written in standard languages like C++ and languages designed for mathematics, such as MATLAB and Lush.
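The STRIPS idea of actions with preconditions and postconditions can be illustrated outside STRIPS itself; this Python sketch uses an assumed toy domain (a robot moving between two rooms) and is not STRIPS syntax:

```python
# Sketch of a STRIPS-style action: preconditions must hold in the
# current state; applying the action adds and deletes facts.

state = {"robot_in_a", "door_open"}        # assumed initial state

action = {
    "name": "move_a_to_b",
    "preconditions": {"robot_in_a", "door_open"},
    "add": {"robot_in_b"},                 # postconditions established
    "delete": {"robot_in_a"},              # facts no longer true
}

def apply_action(action, state):
    if not action["preconditions"] <= state:
        raise ValueError("preconditions not satisfied")
    return (state - action["delete"]) | action["add"]

new_state = apply_action(action, state)
print(sorted(new_state))
```

A planner searches for a sequence of such applications leading from the initial state to a state containing the goal facts.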

Evaluating artificial intelligence

How can one determine if an agent is intelligent? In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent, now known as the Turing test.



Remotely related topics:

Bioinformatics and Bioengineering
Computational finance and Computational economics
Intelligent systems
Emergence
Data mining
Concept mining

Software

Computational Intelligence Library (CILib)
OAT (Optimization Algorithm Toolkit): A set of standard computational intelligence optimization algorithms and problems in Java.

Organizations

IEEE Computational Intelligence Society
The Computational Intelligence and Machine Learning Virtual Community



Computer simulation

A computer simulation, a computer model or a computational model is a computer program, or network of computers, that attempts to simulate an abstract model of a particular system. Computer simulations have become a useful part of mathematical modeling of many natural systems in physics (computational physics), chemistry and biology, human systems in economics, psychology, and social science and in the process of engineering new technology, to gain insight into the operation of those systems, or to observe their behavior.

Computer simulations vary from computer programs that run a few minutes, to network-based groups of computers running for hours, to ongoing simulations that run for days. The scale of events being simulated by computer simulations has far exceeded anything possible (or perhaps even imaginable) using traditional paper-and-pencil mathematical modeling: over 10 years ago, a desert-battle simulation of one force invading another involved the modeling of 66,239 tanks, trucks and other vehicles on simulated terrain around Kuwait, using multiple supercomputers in the DoD High Performance Computer Modernization Program; a 1-billion-atom model of material deformation (2002); a 2.64-million-atom model of the complex maker of protein in all organisms, a ribosome, in 2005; and the Blue Brain project at EPFL (Switzerland), begun in May 2005, to create the first computer simulation of the entire human brain, right down to the molecular level.

History

Computer simulation was developed hand-in-hand with the rapid growth of the computer, following its first large-scale deployment during the Manhattan Project in World War II to model the process of nuclear detonation. It was a simulation of 12 hard spheres using a Monte Carlo algorithm. Computer simulation is often used as an adjunct to, or substitution for, modeling systems for which simple closed-form analytic solutions are not possible. There are many different types of computer simulation; the common feature they all share is the attempt to generate a sample of representative scenarios for a model in which a complete enumeration of all possible states of the model would be prohibitive or impossible. Computer models were initially used as a supplement for other arguments, but their use later became rather widespread.
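A minimal Monte Carlo sketch in the same spirit, estimating π from a random sample of points rather than an exhaustive enumeration; the sample count and seed are illustrative assumptions:

```python
import random

# Monte Carlo sketch: sample points in the unit square and count the
# fraction landing inside the quarter circle of radius 1; that fraction
# approaches pi/4 as the sample grows.
random.seed(42)

n = 100_000
inside = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
             for _ in range(n))
pi_estimate = 4 * inside / n
print(pi_estimate)
```

The error shrinks roughly as one over the square root of the sample size, which is why such simulations trade long run times for accuracy.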

Data preparation

The data input/output for the simulation can be either through formatted text files or a pre- and postprocessor.



Types

Computer models can be classified according to several independent pairs of attributes, including:

Stochastic or deterministic (and as a special case of deterministic, chaotic)
Steady-state or dynamic
Continuous or discrete (and as an important special case of discrete, discrete event or DE models)
Local or distributed.

These attributes may be arbitrarily combined to form terminology that describes simulation types, such as "continuous dynamic simulations" or "discrete dynamic simulations." For example:

Steady-state models use equations defining the relationships between elements of the modeled system and attempt to find a state in which the system is in equilibrium. Such models are often used in simulating physical systems, as a simpler modeling case before dynamic simulation is attempted.
Dynamic simulations model changes in a system in response to (usually changing) input signals.
Stochastic models use random number generators to model chance or random events.
A discrete event simulation (DES) manages events in time. Most computer, logic-test and fault-tree simulations are of this type. In this type of simulation, the simulator maintains a queue of events sorted by the simulated time they should occur. The simulator reads the queue and triggers new events as each event is processed. It is not important to execute the simulation in real time. It's often more important to be able to access the data produced by the simulation, to discover logic defects in the design, or the sequence of events.
A continuous dynamic simulation performs numerical solution of differential-algebraic equations or differential equations (either partial or ordinary). Periodically, the simulation program solves all the equations, and uses the numbers to change the state and output of the simulation. Applications include flight simulators, construction and management simulation games, chemical process modeling, and simulations of electrical circuits. Originally, these kinds of simulations were actually implemented on analog computers, where the differential equations could be represented directly by various electrical components such as op-amps. By the late 1980s, however, most "analog" simulations were run on conventional digital computers that emulate the behavior of an analog computer.



A special type of discrete simulation which does not rely on a model with an underlying equation, but can nonetheless be represented formally, is agent-based simulation. In agent-based simulation, the individual entities (such as molecules, cells, trees or consumers) in the model are represented directly (rather than by their density or concentration) and possess an internal state and set of behaviors or rules which determine how the agent's state is updated from one time-step to the next.
Distributed models run on a network of interconnected computers, possibly through the Internet. Simulations dispersed across multiple host computers like this are often referred to as "distributed simulations". There are several standards for distributed simulation, including Aggregate Level Simulation Protocol (ALSP), Distributed Interactive Simulation (DIS), the High Level Architecture (HLA) and the Test and Training Enabling Architecture (TENA).
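The event-queue mechanism at the heart of a discrete event simulation can be sketched directly; the event times and names here are illustrative assumptions:

```python
import heapq

# Discrete-event-simulation sketch: the simulator keeps a priority
# queue of (time, event) pairs and always processes the earliest next,
# regardless of the order events were scheduled.

events = []
heapq.heappush(events, (5.0, "job_arrives"))
heapq.heappush(events, (2.0, "machine_starts"))
heapq.heappush(events, (9.0, "job_finishes"))

log = []
while events:
    time, name = heapq.heappop(events)   # earliest simulated time first
    log.append((time, name))             # a real handler would run here

print(log)
```

In a full simulator each handler may schedule further events onto the same queue, so simulated time jumps from event to event rather than advancing in fixed steps.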

CGI computer simulation

Formerly, the output data from a computer simulation was sometimes presented in a table, or a matrix, showing how data was affected by numerous changes in the simulation parameters. The use of the matrix format was related to traditional use of the matrix concept in mathematical models; however, psychologists and others noted that humans could quickly perceive trends by looking at graphs or even moving images or motion pictures generated from the data, as displayed by computer-generated-imagery (CGI) animation. Although observers couldn't necessarily read out numbers, or spout math formulas, from observing a moving weather chart, they might be able to predict events (and "see that rain was headed their way") much faster than by scanning tables of rain-cloud coordinates. Such intense graphical displays, which transcended the world of numbers and formulae, sometimes also led to output that lacked a coordinate grid or omitted timestamps, as if straying too far from numeric data displays. Today, weather forecasting models tend to balance the view of moving rain/snow clouds against a map that uses numeric coordinates and numeric timestamps of events.

Similarly, CGI computer simulations of CAT scans can simulate how a tumor might shrink or change during an extended period of medical treatment, presenting the passage of time as a spinning view of the visible human head, as the tumor changes.

Other applications of CGI computer simulations are being developed to graphically display large amounts of data, in motion, as changes occur during a simulation run.



Computer simulation in science

Generic examples of types of computer simulations in science, which are derived from an underlying mathematical description:

a numerical simulation of differential equations which cannot be solved analytically; theories which involve continuous systems such as phenomena in physical cosmology, fluid dynamics (e.g. climate models, roadway noise models, roadway air dispersion models), continuum mechanics and chemical kinetics fall into this category.
a stochastic simulation, typically used for discrete systems where events occur probabilistically, and which cannot be described directly with differential equations (this is a discrete simulation in the above sense). Phenomena in this category include genetic drift, biochemical or gene regulatory networks with small numbers of molecules. (See also: Monte Carlo method.)

Specific examples of computer simulations follow:

statistical simulations based upon an agglomeration of a large number of input profiles, such as the forecasting of equilibrium temperature of receiving waters, allowing the gamut of meteorological data to be input for a specific locale. This technique was developed for thermal pollution forecasting.
agent-based simulation has been used effectively in ecology, where it is often called individual-based modeling and has been used in situations for which individual variability in the agents cannot be neglected, such as population dynamics of salmon and trout (most purely mathematical models assume all trout behave identically).
time-stepped dynamic models. In hydrology there are several such hydrology transport models, such as the SWMM and DSSAM models developed by the U.S. Environmental Protection Agency for river water quality forecasting.
computer simulations have also been used to formally model theories of human cognition and performance, e.g. ACT-R.
computer simulation using molecular modeling for drug discovery.
Computational fluid dynamics simulations are used to simulate the behaviour of flowing air, water and other fluids. One-, two- and three-dimensional models are used. A one-dimensional model might simulate the effects of water hammer in a pipe. A two-dimensional model might be used to simulate the drag forces on the cross-section of an aeroplane wing. A three-dimensional simulation might estimate the heating and cooling requirements of a large building.
An understanding of statistical thermodynamic molecular theory is fundamental to the appreciation of molecular solutions. Development of the Potential Distribution Theorem (PDT) allows one to simplify this complex subject to down-to-earth presentations of molecular theory.



Reservoir simulation for petroleum engineering to model the subsurface reservoir.
Process engineering simulation tools.
Robot simulators for the design of robots and robot control algorithms.
Traffic engineering to plan or redesign parts of the street network from single junctions over cities to a national highway network, see for example VISSIM.
Modeling car crashes to test safety mechanisms in new vehicle models.

The reliability and the trust people put in computer simulations depends on the validity of the simulation model, therefore verification and validation are of crucial importance in the development of computer simulations. Another important aspect of computer simulations is that of reproducibility of the results, meaning that a simulation model should not provide a different answer for each execution. Although this might seem obvious, this is a special point of attention in stochastic simulations, where random numbers should actually be semi-random numbers. An exception to reproducibility are human-in-the-loop simulations such as flight simulations and computer games. Here a human is part of the simulation and thus influences the outcome in a way that is hard if not impossible to reproduce exactly.
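In practice, reproducibility of a stochastic simulation is obtained by seeding the pseudo-random number generator; the toy "simulation" below (summing random draws) is an illustrative assumption standing in for a real model:

```python
import random

# Reproducibility sketch: the same seed always yields the same answer;
# a different seed yields a different, but equally repeatable, run.

def simulate(seed):
    rng = random.Random(seed)            # independent, seeded generator
    return sum(rng.random() for _ in range(1000))

print(simulate(seed=123) == simulate(seed=123))   # same seed, same result
print(simulate(seed=123) == simulate(seed=456))   # different seed, different run
```

Using a per-run `random.Random` instance rather than the shared global generator keeps runs independent of any other code that also draws random numbers.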

Vehicle manufacturers make use of computer simulation to test safety features in new designs. By building a copy of the car in a physics simulation environment they can save the hundreds of thousands of dollars that would otherwise be required to build a unique prototype and test it. Engineers can step through the simulation milliseconds at a time to determine the exact stresses being put upon each section of the prototype.

Computer graphics can be used to display the results of a computer simulation. Animations can be used to experience a simulation in real time, e.g. in training simulations. In some cases animations may also be useful in faster-than-real-time or even slower-than-real-time modes. For example, faster-than-real-time animations can be useful in visualizing the buildup of queues in the simulation of humans evacuating a building. Furthermore, simulation results are often aggregated into static images using various ways of scientific visualization.

In debugging, simulating a program execution under test (rather than executing natively) can detect far more errors than the hardware itself can detect and, at the same time, log useful debugging information such as instruction trace, memory alterations and instruction counts. This technique can also detect buffer overflow and similar "hard to detect" errors as well as produce performance information and tuning data.



The Computational Modelling Group at Cambridge University's Department of Chemical Engineering
Liophant Simulation
United Simulation Team - Genoa University
High Performance Systems Group at the University of Warwick, UK

Education

Simulation - An Enabling Technology in Software Engineering
Sabanci University School of Languages Podcasts: Computer Simulation by Prof. David M. Goldsman
IMTEK Mathematica Supplement (IMS) (some Mathematica-specific tutorials here)
The Creative Learning Exchange
McLeod Institute of Simulation Science

Examples

WARPP Distributed/Parallel System Simulation Toolkit written by theHigh Performance Systems Group at the University of WarwickA portfolio of free public simulations from the University of FloridaIntegrated Land Use, Transportation, Environment, (ILUTE) ModelingSystemNanorobotics Simulation - Computational Nanomechatronics Lab. atCenter for Automation in Nanobiotech (CAN)Online traffic simulationAdaptive Modeler - simulation models for price forecasting of financialmarketsShakemovie Caltech's Online Seismic Event Simulation

DIG - Demographics, Investment and Company Growth Simulation
Global Politics Simulation
Industrial & Educational Examples of Modelling & Simulation
Generalized online simulation utility
Catchment Modelling Toolkit
Cellular Automata for Simulation in Games


Financial Risk

Risk management is the identification, assessment, and prioritization of risks, followed by coordinated and economical application of resources to minimize, monitor, and control the probability and/or impact of unfortunate events. Risks can come from uncertainty in financial markets, project failures, legal liabilities, credit risk, accidents, natural causes and disasters, as well as deliberate attacks from an adversary. Several risk management standards have been developed, including those of the Project Management Institute, the National Institute of Standards and Technology, actuarial societies, and ISO. Methods, definitions and goals vary widely according to whether the risk management method is in the context of project management, security, engineering, industrial processes, financial portfolios, actuarial assessments, or public health and safety.

For the most part, these methodologies consist of the following elements,performed, more or less, in the following order.

1. identify, characterize, and assess threats
2. assess the vulnerability of critical assets to specific threats
3. determine the risk (i.e. the expected consequences of specific types of attacks on specific assets)
4. identify ways to reduce those risks
5. prioritize risk reduction measures based on a strategy
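The five steps above can be sketched in a few lines of code; the threat names and figures are invented for illustration, and risk is scored as probability times impact.

```python
# Hypothetical threats with invented probabilities and impact scores.
threats = [
    {"name": "market crash",    "probability": 0.05, "impact": 9.0},
    {"name": "data breach",     "probability": 0.20, "impact": 6.0},
    {"name": "key-person loss", "probability": 0.10, "impact": 4.0},
]

# Step 3: determine the risk as probability x impact.
for t in threats:
    t["risk"] = t["probability"] * t["impact"]

# Step 5: prioritize risk reduction, highest expected consequence first.
by_priority = sorted(threats, key=lambda t: t["risk"], reverse=True)
print([t["name"] for t in by_priority])
# ['data breach', 'market crash', 'key-person loss']
```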

The strategies to manage risk include transferring the risk to another party, avoiding the risk, reducing the negative effect of the risk, and accepting some or all of the consequences of a particular risk.

Introduction

This section provides an introduction to the principles of risk management. The vocabulary of risk management is defined in ISO Guide 73, "Risk management -- Vocabulary".

In ideal risk management, a prioritization process is followed whereby the risks with the greatest loss and the greatest probability of occurring are handled first, and risks with lower probability of occurrence and lower loss are handled in descending order. In practice the process can be very difficult, and balancing between risks with a high probability of occurrence but lower loss versus risks with high loss but lower probability of occurrence can often be mishandled.

Intangible risk management identifies a new type of risk that has a 100% probability of occurring but is ignored by the organization due to a lack of identification ability. For example, when deficient knowledge is applied to a situation, a knowledge risk materialises. Relationship risk appears when ineffective collaboration occurs. Process-engagement risk may be an issue when ineffective operational procedures are applied. These risks directly reduce the productivity of knowledge workers, decrease cost effectiveness, profitability, service, quality, reputation, brand value, and earnings quality. Intangible risk management allows risk management to create immediate value from the identification and reduction of risks that reduce productivity.

Risk management also faces difficulties in allocating resources. This is the idea of opportunity cost: resources spent on risk management could have been spent on more profitable activities. Again, ideal risk management minimizes spending while maximizing the reduction of the negative effects of risks.

Principles of risk management

The International Organization for Standardization identifies the following principles of risk management:

Risk management should create value.
Risk management should be an integral part of organizational processes.
Risk management should be part of decision making.
Risk management should explicitly address uncertainty.
Risk management should be systematic and structured.
Risk management should be based on the best available information.
Risk management should be tailored.
Risk management should take into account human factors.
Risk management should be transparent and inclusive.
Risk management should be dynamic, iterative and responsive to change.
Risk management should be capable of continual improvement and enhancement.

Process

According to the standard ISO/DIS 31000 "Risk management -- Principles and guidelines on implementation", the process of risk management consists of the following steps:


Establishing the context

Establishing the context involves

1. Identification of risk in a selected domain of interest.
2. Planning the remainder of the process.
3. Mapping out the following:
  o the social scope of risk management
  o the identity and objectives of stakeholders
  o the basis upon which risks will be evaluated, and constraints.
4. Defining a framework for the activity and an agenda for identification.
5. Developing an analysis of risks involved in the process.
6. Mitigation of risks using available technological, human and organizational resources.

Identification

After establishing the context, the next step in the process of managing risk is to identify potential risks. Risks are about events that, when triggered, cause problems. Hence, risk identification can start with the source of problems, or with the problem itself.

Source analysis: Risk sources may be internal or external to the system that is the target of risk management.

Examples of risk sources are: stakeholders of a project, employees of a company, or the weather over an airport.

Problem analysis: Risks are related to identified threats. For example: the threat of losing money, the threat of abuse of privacy information, or the threat of accidents and casualties. The threats may exist with various entities, most importantly with shareholders, customers and legislative bodies such as the government.

When either the source or the problem is known, the events that a source may trigger or the events that can lead to a problem can be investigated. For example: stakeholders withdrawing during a project may endanger funding of the project; privacy information may be stolen by employees even within a closed network; lightning striking a Boeing 747 during takeoff may make all people on board immediate casualties.

The chosen method of identifying risks may depend on culture, industry practice and compliance. The identification methods are formed by templates or the development of templates for identifying source, problem or event. Common risk identification methods are:


Objectives-based risk identification: Organizations and project teams have objectives. Any event that may endanger achieving an objective partly or completely is identified as risk.

Scenario-based risk identification: In scenario analysis different scenarios are created. The scenarios may be the alternative ways to achieve an objective, or an analysis of the interaction of forces in, for example, a market or battle. Any event that triggers an undesired scenario alternative is identified as risk - see Futures Studies for the methodology used by Futurists.

Taxonomy-based risk identification: The taxonomy in taxonomy-based risk identification is a breakdown of possible risk sources. Based on the taxonomy and knowledge of best practices, a questionnaire is compiled. The answers to the questions reveal risks. Taxonomy-based risk identification in the software industry can be found in CMU/SEI-93-TR-6.

Common-risk checking: In several industries, lists of known risks are available. Each risk in the list can be checked for application to a particular situation. An example of known risks in the software industry is the Common Vulnerability and Exposures list found at http://cve.mitre.org.

Risk charting (risk mapping): This method combines the above approaches by listing resources at risk, threats to those resources, modifying factors which may increase or decrease the risk, and consequences it is wished to avoid. Creating a matrix under these headings enables a variety of approaches. One can begin with resources and consider the threats they are exposed to and the consequences of each. Alternatively one can start with the threats and examine which resources they would affect, or one can begin with the consequences and determine which combination of threats and resources would be involved to bring them about.

Assessment

Once risks have been identified, they must then be assessed as to their potential severity of loss and probability of occurrence. These quantities can be either simple to measure, in the case of the value of a lost building, or impossible to know for sure, in the case of the probability of an unlikely event occurring. Therefore, in the assessment process it is critical to make the best educated guesses possible in order to properly prioritize the implementation of the risk management plan.

The fundamental difficulty in risk assessment is determining the rate of occurrence, since statistical information is not available on all kinds of past incidents. Furthermore, evaluating the severity of the consequences (impact) is often quite difficult for immaterial assets. Asset valuation is another question that needs to be addressed. Thus, best educated opinions and available statistics are the primary sources of information. Nevertheless, risk assessment should produce such information for the management of the organization that the primary risks are easy to understand and that the risk management decisions may be prioritized.


Thus, there have been several theories and attempts to quantify risks. Numerous different risk formulae exist, but perhaps the most widely accepted formula for risk quantification is:

Rate of occurrence multiplied by the impact of the event equals risk

Later research has shown that the financial benefits of risk management are less dependent on the formula used and more dependent on the frequency with which, and the way in which, risk assessment is performed.

In business it is imperative to be able to present the findings of risk assessments in financial terms. Robert Courtney Jr. (IBM, 1970) proposed a formula for presenting risks in financial terms. The Courtney formula was accepted as the official risk analysis method for US governmental agencies. The formula proposes calculation of the ALE (annualised loss expectancy) and compares the expected loss value to the security control implementation costs (cost-benefit analysis).
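A rough sketch of the Courtney-style cost-benefit comparison described above, with invented figures: the ALE is the annual rate of occurrence times the single loss expectancy per event, and a control is worthwhile when the reduction in ALE exceeds its cost.

```python
def ale(annual_rate, single_loss):
    """Annualised loss expectancy: expected occurrences per year x loss per event."""
    return annual_rate * single_loss

def control_worthwhile(annual_rate, single_loss, reduced_rate, control_cost):
    """Cost-benefit test: does the control reduce the ALE by more than it costs?"""
    saving = ale(annual_rate, single_loss) - ale(reduced_rate, single_loss)
    return saving > control_cost

# An outage occurring 0.5 times/year costing 100,000 per event (ALE = 50,000),
# with a 20,000 control that halves the rate (new ALE = 25,000).
print(control_worthwhile(0.5, 100_000, 0.25, 20_000))  # True: saving of 25,000
```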

Potential risk treatments

Once risks have been identified and assessed, all techniques to manage the risk fall into one or more of these four major categories:

Avoidance (eliminate)
Reduction (mitigate)
Transfer (outsource or insure)
Retention (accept and budget)

Ideal use of these strategies may not be possible. Some of them may involve trade-offs that are not acceptable to the organization or person making the risk management decisions. Another source, the US Department of Defense's Defense Acquisition University, calls these categories ACAT, for Avoid, Control, Accept, or Transfer. This use of the ACAT acronym is reminiscent of another ACAT (for Acquisition Category) used in US Defense industry procurements, in which risk management figures prominently in decision making and planning.

Risk avoidance

Risk avoidance includes not performing an activity that could carry risk. An example would be not buying a property or business in order not to take on the liability that comes with it. Another would be not flying in order not to take the risk that the airplane might be hijacked. Avoidance may seem the answer to all risks, but avoiding risks also means losing out on the potential gain that accepting (retaining) the risk may have allowed. Not entering a business to avoid the risk of loss also avoids the possibility of earning profits.


Risk reduction

Risk reduction involves methods that reduce the severity of the loss or the likelihood of the loss occurring. For example, sprinklers are designed to put out a fire to reduce the risk of loss by fire. This method may cause a greater loss by water damage and therefore may not be suitable. Halon fire suppression systems may mitigate that risk, but the cost may be prohibitive as a strategy. Risk management may also take the form of a set policy, such as only allowing the use of secured IM platforms (like Brosix) and not allowing personal IM platforms (like AIM) to be used, in order to reduce the risk of data leaks.

Modern software development methodologies reduce risk by developing and delivering software incrementally. Early methodologies suffered from the fact that they only delivered software in the final phase of development; any problems encountered in earlier phases meant costly rework and often jeopardized the whole project. By developing in iterations, software projects can limit effort wasted to a single iteration.

Outsourcing could be an example of risk reduction if the outsourcer can demonstrate higher capability at managing or reducing risks. In this case companies outsource only some of their departmental needs. For example, a company may outsource only its software development, the manufacturing of hard goods, or customer support needs to another company, while handling the business management itself. This way, the company can concentrate more on business development without having to worry as much about the manufacturing process, managing the development team, or finding a physical location for a call center.

Risk retention

Risk retention involves accepting the loss when it occurs. True self-insurance falls in this category. Risk retention is a viable strategy for small risks where the cost of insuring against the risk would be greater over time than the total losses sustained. All risks that are not avoided or transferred are retained by default. This includes risks that are so large or catastrophic that they either cannot be insured against or the premiums would be infeasible. War is an example, since most property and risks are not insured against war, so the loss attributed to war is retained by the insured. Also, any amount of potential loss (risk) over the amount insured is retained risk. This may also be acceptable if the chance of a very large loss is small or if the cost to insure for greater coverage amounts is so great that it would hinder the goals of the organization too much.

Risk transfer

In the terminology of practitioners and scholars alike, the purchase of an insurance contract is often described as a "transfer of risk." However, technically speaking, the buyer of the contract generally retains legal responsibility for the losses "transferred", meaning that insurance may be described more accurately as


Risk analysis results and management plans should be updated periodically. There are two primary reasons for this:

1. to evaluate whether the previously selected security controls are still applicable and effective, and
2. to evaluate possible changes in risk levels in the business environment (information risks, for example, change rapidly).

Limitations

If risks are improperly assessed and prioritized, time can be wasted dealing with risks of losses that are not likely to occur. Spending too much time assessing and managing unlikely risks can divert resources that could be used more profitably. Unlikely events do occur, but if the risk is unlikely enough it may be better to simply retain the risk and deal with the result if the loss does in fact occur. Qualitative risk assessment is subjective and lacks consistency. The primary justification for a formal risk assessment process is legal and bureaucratic.

Prioritizing the risk management processes too highly could keep an organization from ever completing a project or even getting started. This is especially true if other work is suspended until the risk management process is considered complete.

It is also important to keep in mind the distinction between risk and uncertainty. Risk can be measured as impact times probability.

Areas of risk management

As applied to corporate finance, risk management is the technique for measuring, monitoring and controlling the financial or operational risk on a firm's balance sheet. See value at risk.

The Basel II framework breaks risks into market risk (price risk), credit risk and operational risk, and also specifies methods for calculating capital requirements for each of these components.

Enterprise risk management

In enterprise risk management, a risk is defined as a possible event or circumstance that can have negative influences on the enterprise in question. Its impact can be on the very existence, the resources (human and capital), the products and services, or the customers of the enterprise, as well as external impacts on society, markets, or the environment. In a financial institution, enterprise risk management is normally thought of as the combination of credit risk, interest rate risk or asset liability management, market risk, and operational risk.

In the more general case, every probable risk can have a pre-formulated plan to deal with its possible consequences (to ensure contingency if the risk becomes a liability).

From the information above and the average cost per employee over time, or cost accrual ratio, a project manager can estimate:

the cost associated with the risk if it arises, estimated by multiplying employee costs per unit time by the estimated time lost (cost impact, C, where C = cost accrual ratio * S)
the probable increase in time associated with a risk (schedule variance due to risk, Rs, where Rs = P * S):
  o Sorting on this value puts the highest risks to the schedule first. This is intended to cause the greatest risks to the project to be attempted first so that risk is minimized as quickly as possible.
  o This is slightly misleading, as schedule variances with a large P and small S and vice versa are not equivalent. (The risk of the RMS Titanic sinking vs. the passengers' meals being served at slightly the wrong time.)
the probable increase in cost associated with a risk (cost variance due to risk, Rc, where Rc = P * C = P * CAR * S = P * S * CAR)
  o Sorting on this value puts the highest risks to the budget first.
  o See concerns about schedule variance, as this is a function of it, as illustrated in the equation above.
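The estimates above can be sketched as follows; the risk names, probabilities, time estimates and cost accrual ratio are hypothetical.

```python
def risk_estimates(p, s, car):
    """P = probability, S = estimated time lost, CAR = cost accrual ratio."""
    c = car * s    # cost impact if the risk arises: C = CAR * S
    rs = p * s     # schedule variance due to risk: Rs = P * S
    rc = p * c     # cost variance due to risk: Rc = P * C = P * S * CAR
    return c, rs, rc

risks = {"vendor slip": (0.3, 10, 1_000), "rework": (0.6, 2, 1_000)}
rc_by_risk = {name: risk_estimates(*args)[2] for name, args in risks.items()}

# Sorting on Rc puts the highest risks to the budget first.
for name in sorted(rc_by_risk, key=rc_by_risk.get, reverse=True):
    print(name, rc_by_risk[name])
```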

Risk in a project or process can be due either to Special Cause Variation or Common Cause Variation and requires appropriate treatment. This reiterates the concern about extremal cases not being equivalent, noted in the list immediately above.


Risk-management activities as applied to project management

Planning how risk will be managed in the particular project. Plans should include risk management tasks, responsibilities, activities and budget.
Assigning a risk officer - a team member other than the project manager who is responsible for foreseeing potential project problems. A typical characteristic of a risk officer is healthy skepticism.
Maintaining a live project risk database. Each risk should have the following attributes: opening date, title, short description, probability and importance. Optionally a risk may have an assigned person responsible for its resolution and a date by which the risk must be resolved.
Creating an anonymous risk reporting channel. Each team member should have the possibility to report risks that he or she foresees in the project.
Preparing mitigation plans for risks that are chosen to be mitigated. The purpose of the mitigation plan is to describe how this particular risk will be handled - what, when, by whom and how it will be done to avoid it or minimize consequences if it becomes a liability.
Summarizing planned and faced risks, the effectiveness of mitigation activities, and the effort spent on risk management.
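The "live project risk database" item above might be modelled as a small record type; the field names are illustrative rather than a standard schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Risk:
    opening_date: date
    title: str
    description: str
    probability: float               # 0..1
    importance: int                  # e.g. 1 (low) to 5 (critical)
    owner: Optional[str] = None      # optional person responsible for resolution
    resolve_by: Optional[date] = None

register = [
    Risk(date(2019, 8, 8), "Vendor delay", "Key vendor may slip delivery",
         probability=0.3, importance=4, owner="PM"),
]
# An anonymous reporting channel just appends risks with no owner attached.
register.append(Risk(date(2019, 8, 9), "Scope creep",
                     "Requirements are still growing", 0.5, 3))
print(len(register), register[0].title)  # 2 Vendor delay
```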

Risk management and business continuity

Risk management is simply a practice of systematically selecting cost-effective approaches for minimising the effect of threat realization on the organization. All risks can never be fully avoided or mitigated, simply because of financial and practical limitations. Therefore all organizations have to accept some level of residual risk.

Whereas risk management tends to be preemptive, business continuity planning (BCP) was invented to deal with the consequences of realised residual risks. The necessity to have BCP in place arises because even very unlikely events will occur if given enough time. Risk management and BCP are often mistakenly seen as rivals or overlapping practices. In fact these processes are so tightly tied together that such separation seems artificial. For example, the risk management process creates important inputs for the BCP (assets, impact assessments, cost estimates, etc.). Risk management also proposes applicable controls for the observed risks. Therefore, risk management covers several areas that are vital for the BCP process. However, the BCP process goes beyond risk management's preemptive approach and moves on from the assumption that the disaster will be realized at some point.

Risk Communication

Risk communication refers to the idea that people are uncomfortable talking about risk. People tend to put off admitting that risk is involved, as well as


Areas of application

Active Agenda
Benefit risk
Business continuity planning
Chief Risk Officer
Corporate governance
Cost overrun
Cost risk
Critical chain
Earned value management
Enterprise Risk Management
Environmental Risk Management Authority
Event chain methodology
Financial risk management
Futures Studies
Hazard prevention
Hazop
Insurance
International Risk Governance Council
ISDA
Legal Risk
List of finance topics
List of project management topics
Managerial risk accounting
Megaprojects
Megaprojects and risk
Occupational Health and Safety
Operational risk management
Optimism bias
Outsourcing
Precautionary principle
Process Safety Management
Project management
Public Entity Risk Institute
Reference class forecasting
Risk
Risk analysis (engineering)
Risk homeostasis
Risk Management Agency
Risk Management Authority
Risk Management Information Systems
Risk Management Research Programme
Risk register
Roy's safety-first criterion
Society for Risk Analysis
Timeboxing
Social Risk Management
Substantial equivalence
Uncertainty
Value at risk
Viable System Model
Vulnerability assessment


Conclusion

This topic lays the mathematical foundations for careers in a number of areas in the financial world. In particular, it is suitable for novice quantitative analysts and developers who are working in quantitative finance. This topic is also suitable for IT personnel who wish to develop their mathematical skills. Computational finance or financial engineering is a cross-disciplinary field which relies on mathematical finance, numerical methods, computational intelligence and computer simulations to make trading, hedging and investment decisions, as well as facilitating the risk management of those decisions. Utilizing various methods, practitioners of computational finance aim to precisely determine the financial risk that certain financial instruments create.

Mathematics

In this part of the topic we discuss a number of concepts and methods that are concerned with variables, functions and transformations defined on finite or infinite discrete sets. In particular, linear algebra will be important because of its role in numerical analysis in general and quantitative finance in particular. We also introduce probability theory and statistics, as well as a number of sections on numerical analysis. The latter group is of particular importance when we approximate differential equations using the finite difference method.

Numerical Methods

The goal of this part of the topic is to develop robust, efficient and accurate numerical schemes that allow us to produce algorithms in applications. These methods lie at the heart of computational finance, and a good understanding of how to use them is vital if you wish to create applications. In general, the methods approximate equations and models defined in a continuous, infinite-dimensional space by models that are defined on a finite-dimensional space.
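As a minimal sketch of the finite difference idea mentioned above (a continuous operator approximated on a discrete grid), the central difference below approximates the derivative f'(x):

```python
def central_diff(f, x, h=1e-5):
    """Approximate f'(x) on a discrete grid with spacing h."""
    return (f(x + h) - f(x - h)) / (2 * h)

# f(x) = x**2 has f'(2) = 4; the central difference is exact for quadratics.
print(round(central_diff(lambda x: x * x, 2.0), 6))  # 4.0
```

The same replacement of derivatives by difference quotients, applied on a grid in both time and space, is what turns a pricing PDE into a solvable system of algebraic equations.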