
Page 1: pattern recognition in stock market

pattern recognition in stock market

Page 2: pattern recognition in stock market

Introduction

Page 3: pattern recognition in stock market

Motivation

• Our time is limited; better not to waste it working

• Lifestyle costs money

• Create something else to do the job for you

Page 4: pattern recognition in stock market

MetaTrader

• Online trading platform

• Lets you trade foreign currencies, stocks and indexes

• MetaQuotes Language (MQL), similar to C, allows you to buy and sell programmatically

• Can be linked with dynamic-link libraries (DLLs)

Page 5: pattern recognition in stock market

Pattern recognition

Pattern recognition aims to classify data (patterns) based either on a priori knowledge or on statistical information extracted from the patterns. The patterns to be classified are usually groups of measurements or observations, defining points in an appropriate multidimensional space.

To understand is to perceive patterns

Page 6: pattern recognition in stock market

SVM

Page 7: pattern recognition in stock market

Linear Support Vector Machines

• A direct marketing company wants to sell a new book: "The Art History of Florence"

• Nissan Levin and Jacob Zahavi in Lattin, Carroll and Green (2003)

• Problem: How to identify buyers and non-buyers using the two variables:
  – Months since last purchase
  – Number of art books purchased

[Figure: scatter plot of ∆ buyers and ● non-buyers; x-axis: months since last purchase, y-axis: number of art books purchased]

Page 8: pattern recognition in stock market

Linear SVM: Separable Case

• Main idea of SVM: separate groups by a line.

• However: there are infinitely many lines that have zero training error…

• … which line shall we choose?

[Figure: the same scatter plot of ∆ buyers and ● non-buyers, with several candidate separating lines]

Page 9: pattern recognition in stock market

Linear SVM: Separable Case

• SVMs use the idea of a margin around the separating line.

• The thinner the margin, the more complex the model.

• The best line is the one with the largest margin.

[Figure: scatter plot of ∆ buyers and ● non-buyers with a separating line and its margin; x-axis: months since last purchase, y-axis: number of art books purchased]

Page 10: pattern recognition in stock market

Linear SVM: Separable Case

• The line having the largest margin is:

  w1x1 + w2x2 + b = 0

• Where
  – x1 = months since last purchase
  – x2 = number of art books purchased

• Note:
  – w1xi1 + w2xi2 + b ≥ +1 for i ∈ ∆
  – w1xj1 + w2xj2 + b ≤ –1 for j ∈ ●

[Figure: scatter plot with the separating line w1x1 + w2x2 + b = 0, the margin boundaries w1x1 + w2x2 + b = +1 and w1x1 + w2x2 + b = –1, and the normal vector w]

Page 11: pattern recognition in stock market

Linear SVM: Separable Case

• The width of the margin is given by:

  margin = 2 / ||w||,  where ||w|| = √(w1² + w2²)

• Note:

  maximize the margin 2/||w||  ⇔  minimize ||w||/2  ⇔  minimize ||w||²/2

[Figure: the same plot, with the margin of width 2/||w|| between the boundary lines w1x1 + w2x2 + b = +1 and w1x1 + w2x2 + b = –1]

Page 12: pattern recognition in stock market

w·xi + b ≥ +1 for yi = +1
w·xi + b ≤ –1 for yi = –1

Combined: yi(w·xi + b) – 1 ≥ 0 for all i

Page 13: pattern recognition in stock market

Linear SVM: Separable Case

• The optimization problem for SVM is:

  minimize L(w) = ||w||²/2  (i.e., maximize the margin 2/||w||)

• subject to:
  – w1xi1 + w2xi2 + b ≥ +1 for i ∈ ∆
  – w1xj1 + w2xj2 + b ≤ –1 for j ∈ ●

Page 14: pattern recognition in stock market

Linear SVM: Separable Case

• "Support vectors" are those points that lie on the boundaries of the margin.

• The decision surface (line) is determined only by the support vectors; all other points are irrelevant.

[Figure: scatter plot with the points lying on the margin boundaries highlighted as "support vectors"]

Page 15: pattern recognition in stock market

Linear SVM: Nonseparable Case

• Non-separable case: there is no line separating the two groups without errors.

• Here, SVM minimizes L(w, C):

  L(w, C) = ||w||²/2 + C Σi ξi

  (maximize the margin and minimize the training errors: L(w, C) = Complexity + Errors)

• subject to:
  – w1xi1 + w2xi2 + b ≥ +1 – ξi for i ∈ ∆
  – w1xj1 + w2xj2 + b ≤ –1 + ξj for j ∈ ●
  – ξi, ξj ≥ 0

∆ buyers ● non-buyers — training set: 1000 targeted customers

[Figure: overlapping scatter plot with margin boundaries; misclassified points incur slack ξ]

Page 16: pattern recognition in stock market

Setup:
– vectors Xi
– labels yi = ±1

Margin and error vectors:
– classifier: sign(w·X + b)
– margin vectors: yi(w·Xi + b) ≥ 1, i ∈ S
– error vectors: yi ≠ sign(w·Xi + b)

Soft-margin objective:

  min over w, b:  ||w||²/2 + C Σi max(0, 1 – yi(w·Xi + b))

Page 17: pattern recognition in stock market

Linear SVM: The Role of C

• Bigger C (thinner margin): smaller number of errors (better fit on the data), increased complexity

• Smaller C (wider margin): bigger number of errors (worse fit on the data), decreased complexity

• Vary both complexity and empirical error via C … by affecting the optimal w and the optimal number of training errors

[Figure: the same data fit with C = 5 (thinner margin) and C = 1 (wider margin)]
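Below is a small sketch (not from the original deck) of the role of C, assuming scikit-learn's SVC on made-up 2-D data; it reports the margin width 2/||w|| and the number of support vectors for two values of C:

```python
import numpy as np
from sklearn.svm import SVC

# Toy 2-D, slightly overlapping data (hypothetical stand-in for the
# buyers / non-buyers example on the earlier slides).
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 2) + [2, 2], rng.randn(20, 2) + [0, 0]])
y = np.array([1] * 20 + [-1] * 20)

for C in (1.0, 5.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    w = clf.coef_[0]
    margin = 2.0 / np.linalg.norm(w)   # margin width 2/||w||
    n_sv = len(clf.support_)           # support vectors incl. margin violators
    print(f"C={C}: margin width={margin:.3f}, support vectors={n_sv}")
```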

Page 18: pattern recognition in stock market

Non-linear SVMs

• Transform x → φ(x)

• The linear algorithm depends only on x·xi, hence the transformed algorithm depends only on φ(x)·φ(xi)

• Use a kernel function K(xi, xj) such that K(xi, xj) = φ(xi)·φ(xj)
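A quick numeric check of this identity (an added sketch) for the mapping used on the following slides, φ(x) = (x1², √2·x1x2, x2²), for which φ(xi)·φ(xj) = (xi·xj)²:

```python
import numpy as np

def phi(x):
    """Explicit feature map R^2 -> R^3 used on the next slides."""
    return np.array([x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2])

def K(xi, xj):
    """Polynomial kernel that computes phi(xi) . phi(xj) implicitly."""
    return np.dot(xi, xj) ** 2

xi, xj = np.array([1.0, -1.0]), np.array([0.5, 2.0])
print(np.dot(phi(xi), phi(xj)))  # explicit inner product in feature space
print(K(xi, xj))                 # same value, without ever computing phi
```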

Page 19: pattern recognition in stock market

Nonlinear SVM: Nonseparable Case

Mapping into a higher-dimensional space: (x1, x2) → (x1², √2·x1x2, x2²)

Optimization task: minimize L(w, C) = ||w||²/2 + C Σi ξi

subject to:
  – w1·x1i² + w2·√2·x1i x2i + w3·x2i² + b ≥ +1 – ξi for i ∈ ∆
  – w1·x1j² + w2·√2·x1j x2j + w3·x2j² + b ≤ –1 + ξj for j ∈ ●

[Figure: the transformed data matrix (columns x1², √2·x1x2, x2²) and the scatter plot of ∆ and ● points in the original (x1, x2) space]

Page 20: pattern recognition in stock market

Nonlinear SVM: Nonseparable Case

Map the data into a higher-dimensional space: R² → R³

(x1, x2) → (x1², √2·x1x2, x2²)

[Figure: the four points (1,1), (1,–1), (–1,1), (–1,–1) in the original (x1, x2) plane (∆ and ● classes) and their images on the axes x1², √2·x1x2, x2²]

Page 21: pattern recognition in stock market

Nonlinear SVM: Nonseparable Case

Find the optimal hyperplane in the transformed space.

[Figure: the same four points and a separating hyperplane in the (x1², √2·x1x2, x2²) space]

Page 22: pattern recognition in stock market

Nonlinear SVM: Nonseparable Case

Observe the decision surface in the original space (optional).

[Figure: the nonlinear decision surface mapped back into the original (x1, x2) plane]

Page 23: pattern recognition in stock market

Nonlinear SVM: Nonseparable Case

Dual formulation of the (primal) SVM minimization problem

Primal:
  min over w, b:  ||w||²/2 + C Σi ξi
  subject to: yi(w·xi + b) ≥ 1 – ξi,  ξi ≥ 0,  yi = ±1

Dual:
  max over α:  Σi αi – ½ Σi Σj αi αj yi yj (xi·xj)
  subject to: 0 ≤ αi ≤ C,  Σi αi yi = 0,  yi = ±1
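As an illustration (an addition, not from the deck), the dual can be solved for a toy data set with a generic optimizer; scipy's SLSQP is assumed here, and the points are made up:

```python
import numpy as np
from scipy.optimize import minimize

# Tiny separable toy set (made up for illustration).
X = np.array([[2.0, 2.0], [3.0, 3.0], [0.0, 0.0], [-1.0, 0.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
C = 10.0

G = (y[:, None] * y[None, :]) * (X @ X.T)   # matrix of y_i y_j (x_i . x_j)

def neg_dual(a):                            # minimize the negative dual
    return 0.5 * a @ G @ a - a.sum()

cons = {"type": "eq", "fun": lambda a: a @ y}   # sum_i alpha_i y_i = 0
bounds = [(0.0, C)] * len(y)                    # 0 <= alpha_i <= C
res = minimize(neg_dual, np.zeros(len(y)), bounds=bounds, constraints=[cons])

alpha = res.x
w = (alpha * y) @ X                 # w = sum_i alpha_i y_i x_i
sv = np.argmax(alpha)               # a point with alpha_i > 0 (support vector)
b = y[sv] - w @ X[sv]               # from y_sv (w . x_sv + b) = 1
print("alpha =", alpha.round(3), "w =", w.round(3), "b =", round(b, 3))
```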

Page 24: pattern recognition in stock market

Nonlinear SVM: Nonseparable Case

Dual formulation of the (primal) SVM minimization problem

Dual (after mapping x → φ(x)):
  max over α:  Σi αi – ½ Σi Σj αi αj yi yj φ(xi)·φ(xj)
  subject to: 0 ≤ αi ≤ C,  Σi αi yi = 0,  yi = ±1

For the mapping (x1, x2) → (x1², √2·x1x2, x2²):

  φ(xi)·φ(xj) = x1i² x1j² + 2 x1i x2i x1j x2j + x2i² x2j² = (xi·xj)² = K(xi, xj)
  (kernel function)

So the dual needs only K(xi, xj) = φ(xi)·φ(xj):

  max over α:  Σi αi – ½ Σi Σj αi αj yi yj K(xi, xj)

Page 25: pattern recognition in stock market

Solving

• Construct & minimise the Lagrangian:

  L(w, b, α) = ||w||²/2 – Σi αi [yi(w·xi + b) – 1],  with αi ≥ 0 wrt. constraint i, i = 1, …, N

• Take derivatives wrt. w and b, equate them to 0:

  ∂L(w, b, α)/∂w = w – Σi αi yi xi = 0  ⇒  w = Σi αi yi xi
  ∂L(w, b, α)/∂b = Σi αi yi = 0
  KKT condition: αi [yi(w·xi + b) – 1] = 0

The Lagrange multipliers αi are called 'dual variables'.

Each training point has an associated dual variable.

The parameters are expressed as a linear combination of training points; only SVs will have non-zero αi.

Page 26: pattern recognition in stock market
Page 27: pattern recognition in stock market

Applications

• Handwritten digit recognition
  – Of interest to the US Postal Service
  – 4% error was obtained
  – only about 4% of the training data were SVs

• Text categorisation
• Face detection
• DNA analysis
• …

Page 28: pattern recognition in stock market

Architecture of SVMs

• Nonlinear classifier (using a kernel)

• Decision function:

  f(x) = sgn( Σi=1..l vi (φ(x)·φ(xi)) + b ) = sgn( Σi=1..l vi k(x, xi) + b )

  where vi = αi yi for each training example xi; the αi are computed as the solution of a quadratic program.
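A minimal sketch of this decision function (the coefficients below are made-up placeholders, not a trained model):

```python
import numpy as np

def decision(x, X_train, v, b, kernel):
    """Kernel SVM decision function f(x) = sgn(sum_i v_i k(x, x_i) + b),
    with v_i = alpha_i * y_i taken from a trained model."""
    s = sum(vi * kernel(x, xi) for vi, xi in zip(v, X_train))
    return np.sign(s + b)

# Example with the quadratic kernel from the earlier slides; the
# coefficients are hypothetical, just to exercise the function.
kernel = lambda a, c: np.dot(a, c) ** 2
X_train = np.array([[1.0, 1.0], [-1.0, 1.0]])
v = np.array([0.5, -0.5])      # alpha_i * y_i (hypothetical)
print(decision(np.array([1.0, 0.8]), X_train, v, b=0.0, kernel=kernel))
```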

Page 29: pattern recognition in stock market
Page 30: pattern recognition in stock market

Artificial Neural Networks

Page 31: pattern recognition in stock market

Neural Network

Taxonomy of Neural Network Architecture

The architecture of a neural network refers to the arrangement of the connections between neurons, the processing elements, the number of layers, and the flow of signals in the network. There are mainly two categories of neural network architecture: feed-forward and feedback (recurrent) neural networks.

Page 32: pattern recognition in stock market

Neural Network

• Feed-forward network: Multilayer Perceptron

Page 33: pattern recognition in stock market

Neural Network

• Recurrent network

Page 34: pattern recognition in stock market

Multilayer Perceptron (MLP)

[Figure: MLP structure — input vector (x1, x2, x3, x4, …, xn) feeding an input layer, a hidden layer (h1, h2), and an output layer (O1); and a single neuron processing element with weights w1, w2, …, wn computing y = Σi wi xi and output F(y)]

Page 35: pattern recognition in stock market

Backpropagation Learning

• Architecture:
  – Feedforward network with at least one layer of non-linear hidden nodes, i.e., # of layers L ≥ 2 (not counting the input layer)
  – Node function is differentiable; most common: sigmoid function

• Learning: supervised, error driven, generalized delta rule

• Call this type of nets BP nets

• The weight update rule (gradient descent approach)

• Practical considerations

• Variations of BP nets

• Applications

Page 36: pattern recognition in stock market
Page 37: pattern recognition in stock market

Backpropagation Learning

• Notations:
  – Weights: two weight matrices:
      w(1,0): from input layer (0) to hidden layer (1)
      w(2,1): from hidden layer (1) to output layer (2)
      w(1,0)2,1: weight from node 1 at layer 0 to node 2 in layer 1
  – Training samples: pairs {(xp, dp), p = 1, …, P}, so it is supervised learning
  – Input pattern: xp = (xp,1, …, xp,n)
  – Output pattern: op = (op,1, …, op,K)
  – Desired output: dp = (dp,1, …, dp,K)
  – Error: lp,k = dp,k – op,k is the error for output node k when xp is applied;
      sum square error E = Σp=1..P Σk=1..K (lp,k)²
  – This error drives learning (change w(1,0) and w(2,1))

Page 38: pattern recognition in stock market

Backpropagation Learning

• Sigmoid function again:

  S(x) = 1 / (1 + e⁻ˣ)

  – Differentiable:

    S'(x) = e⁻ˣ / (1 + e⁻ˣ)² = S(x)(1 – S(x))

  – When |net| is sufficiently large, it moves into one of the two saturation regions, behaving like a threshold or ramp function.

• Chain rule of differentiation:

  if z = f(y), y = g(x), x = h(t), then dz/dt = (dz/dy)(dy/dx)(dx/dt) = f'(y) g'(x) h'(t)
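A short numeric check (an added sketch) of the derivative formula S'(x) = S(x)(1 – S(x)) against a finite difference:

```python
import numpy as np

def S(x):
    """Sigmoid node function S(x) = 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + np.exp(-x))

def dS(x):
    """Its derivative S'(x) = S(x) (1 - S(x))."""
    s = S(x)
    return s * (1.0 - s)

# Finite-difference check of the derivative formula.
x, h = 0.7, 1e-6
print(dS(x), (S(x + h) - S(x - h)) / (2 * h))   # both ~= 0.2217
```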

Page 39: pattern recognition in stock market

Backpropagation Learning

• Forward computing:
  – Apply an input vector x to the input nodes
  – Compute the output vector x(1) on the hidden layer:

      x(1)j = S(net(1)j) = S( Σi w(1,0)j,i · xi )

  – Compute the output vector o on the output layer:

      ok = S(net(2)k) = S( Σj w(2,1)k,j · x(1)j )

  – The network is said to be a map from input x to output o

• Objective of learning:
  – Modify the 2 weight matrices to reduce the sum square error

      E = Σp=1..P Σk=1..K (lp,k)²

    for the given P training samples as much as possible (to zero if possible)
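The forward pass above in a few lines of Python (a sketch; the layer sizes and weights are made up):

```python
import numpy as np

def forward(x, W10, W21):
    """One forward pass of the 2-layer BP net described above:
    hidden x1_j = S(sum_i W10[j,i] x_i), output o_k = S(sum_j W21[k,j] x1_j)."""
    S = lambda v: 1.0 / (1.0 + np.exp(-v))
    x1 = S(W10 @ x)     # hidden layer activations
    o = S(W21 @ x1)     # output layer activations
    return x1, o

# Hypothetical sizes: 3 inputs, 4 hidden nodes, 2 outputs.
rng = np.random.RandomState(1)
W10, W21 = rng.randn(4, 3), rng.randn(2, 4)
x1, o = forward(np.array([0.2, -0.5, 1.0]), W10, W21)
print(o)
```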

Page 40: pattern recognition in stock market

Backpropagation Learning

• Idea of BP learning:
  – Update of weights in w(2,1) (from hidden layer to output layer): delta rule as in a single-layer net using sum square error
  – The delta rule is not applicable to updating weights in w(1,0) (from input to hidden layer) because we don't know the desired values for hidden nodes
  – Solution: propagate errors at output nodes down to hidden nodes; these computed errors on hidden nodes drive the update of weights in w(1,0) (again by delta rule), hence the name error Back Propagation (BP) learning
  – How to compute errors on hidden nodes is the key
  – Error backpropagation can be continued downward if the net has more than one hidden layer
  – Proposed first by Werbos (1974); current formulation by Rumelhart, Hinton, and Williams (1986)

Page 41: pattern recognition in stock market

Backpropagation Learning

• Generalized delta rule:
  – Consider sequential learning mode: for a given sample (xp, dp),

      E = Σk (lp,k)² = Σk (dp,k – op,k)²

  – Update weights by gradient descent:

      for weights in w(2,1):  Δw(2,1)k,j = –η (∂E / ∂w(2,1)k,j)
      for weights in w(1,0):  Δw(1,0)j,i = –η (∂E / ∂w(1,0)j,i)

  – Derivation of the update rule for w(2,1):
    since E is a function of lk = dk – ok, ok is a function of net(2)k, and net(2)k is a function of w(2,1)k,j, by the chain rule:

      ∂E / ∂w(2,1)k,j = (∂E/∂ok) · (∂ok/∂net(2)k) · (∂net(2)k/∂w(2,1)k,j)

Page 42: pattern recognition in stock market

Backpropagation Learning

– Derivation of the update rule for w(1,0)j,i: consider hidden node j:
  weight w(1,0)j,i influences net(1)j;
  node j sends x(1)j = S(net(1)j) to all output nodes;
  so all K terms in E = Σk (dk – ok)² are functions of w(1,0)j,i.

By the chain rule (with net(2)k = Σj w(2,1)k,j x(1)j and net(1)j = Σi w(1,0)j,i xi):

  ∂E / ∂w(1,0)j,i = Σk [ (∂E/∂ok) · S'(net(2)k) · w(2,1)k,j ] · S'(net(1)j) · xi
                  = Σk [ –2(dk – ok) · S'(net(2)k) · w(2,1)k,j ] · S'(net(1)j) · xi

Page 43: pattern recognition in stock market

Backpropagation Learning

– Update rules:

  for outer layer weights w(2,1):

      Δw(2,1)k,j = η δk x(1)j,  where δk = (dk – ok) S'(net(2)k)

  for inner layer weights w(1,0):

      Δw(1,0)j,i = η μj xi,  where μj = ( Σk δk w(2,1)k,j ) S'(net(1)j)
      (the weighted sum of errors from the output layer)
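A sketch of one BP update step implementing these rules (layer sizes, η and data are illustrative; the factor 2 from ∂E/∂ok is absorbed into η, as in the slides' δk):

```python
import numpy as np

S = lambda v: 1.0 / (1.0 + np.exp(-v))
dS = lambda v: S(v) * (1.0 - S(v))

def bp_step(x, d, W10, W21, eta=0.5):
    """One gradient-descent update of both weight matrices (in place),
    following the update rules above."""
    net1 = W10 @ x; x1 = S(net1)          # forward: hidden layer
    net2 = W21 @ x1; o = S(net2)          # forward: output layer
    delta = (d - o) * dS(net2)            # delta_k = (d_k - o_k) S'(net2_k)
    mu = (W21.T @ delta) * dS(net1)       # mu_j = (sum_k delta_k w_kj) S'(net1_j)
    W21 += eta * np.outer(delta, x1)      # dw(2,1)_kj = eta delta_k x1_j
    W10 += eta * np.outer(mu, x)          # dw(1,0)_ji = eta mu_j x_i
    return np.sum((d - o) ** 2)           # sum square error for this sample

rng = np.random.RandomState(0)
W10, W21 = rng.randn(4, 3) * 0.1, rng.randn(2, 4) * 0.1
x, d = np.array([0.2, -0.5, 1.0]), np.array([1.0, 0.0])
for _ in range(5):
    print(bp_step(x, d, W10, W21))        # error should decrease
```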

Page 44: pattern recognition in stock market

Note: if S is a logistic function, then S’(x) = S(x)(1 – S(x))

Page 45: pattern recognition in stock market

Backpropagation Learning

• Pattern classification: an example
  – Classification of myoelectric signals
    • Input pattern: 2 features, normalized to real values between -1 and 1
    • Output patterns: 3 classes
  – Network structure: 2-5-3
    • 2 input nodes, 3 output nodes
    • 1 hidden layer of 5 nodes
    • η = 0.95, α = 0.4 (momentum)
  – Error bound e = 0.05
  – 332 training samples
  – Maximum iterations = 20,000
  – When stopped, 38 patterns remain misclassified

Page 46: pattern recognition in stock market

38 patterns misclassified

Page 47: pattern recognition in stock market
Page 48: pattern recognition in stock market

Strengths of BP Learning

• Great representation power
  – Any L2 function can be represented by a BP net
  – Many such functions can be approximated by BP learning (gradient descent approach)

• Easy to apply
  – Only requires that a good set of training samples is available
  – Does not require substantial prior knowledge or deep understanding of the domain itself (ill-structured problems)
  – Tolerates noise and missing data in training samples (graceful degradation)

• Easy to implement the core of the learning algorithm

• Good generalization power
  – Often produces accurate results for inputs outside the training set

Page 49: pattern recognition in stock market

Deficiencies of BP Learning

• Learning often takes a long time to converge
  – Complex functions often need hundreds or thousands of epochs

• The net is essentially a black box
  – It may provide a desired mapping between input and output vectors (x, o), but does not have the information of why a particular x is mapped to a particular o.
  – It thus cannot provide an intuitive (e.g., causal) explanation for the computed result.
  – This is because the hidden nodes and the learned weights do not have clear semantics.

• What can be learned are operational parameters, not general, abstract knowledge of a domain
  – Unlike many statistical methods, there is no theoretically well-founded way to assess the quality of BP learning
    • What is the confidence level of o computed from input x using such a net?
    • What is the confidence level of a trained BP net, with the final E (which may or may not be close to zero)?

Page 50: pattern recognition in stock market

• Problem with the gradient descent approach
  – It only guarantees to reduce the total error to a local minimum (E may not be reduced to zero)
  – It cannot escape from a local minimum error state
  – Not every function that is representable can be learned
  – How bad this is depends on the shape of the error surface: too many valleys/wells make it easy to be trapped in local minima
  – Possible remedies:
    • Try nets with different # of hidden layers and hidden nodes (they may lead to different error surfaces, some might be better than others)
    • Try different initial weights (different starting points on the surface)
    • Force escape from local minima by random perturbation (e.g., simulated annealing)

Page 51: pattern recognition in stock market

• Generalization is not guaranteed even if the error is reduced to 0
  – Over-fitting/over-training problem: the trained net fits the training samples perfectly (E reduced to 0) but does not give accurate outputs for inputs not in the training set
  – Possible remedies:
    • More and better samples
    • Using a smaller net if possible
    • Using a larger error bound (forced early termination)
    • Introducing noise into samples: modify (x1, …, xn) to (x1 + α1, …, xn + αn), where the αi are small random displacements
    • Cross-validation:
      – leave some (~10%) samples as test data (not used for weight update)
      – periodically check the error on the test data
      – learning stops when the error on the test data starts to increase

Page 52: pattern recognition in stock market

• Network paralysis with sigmoid activation function
  – Saturation regions: for S(x) = 1/(1 + e⁻ˣ), the derivative S'(x) = S(x)(1 – S(x)) → 0 as |x| → ∞; when x falls in a saturation region, S(x) hardly changes its value regardless of how fast the magnitude of x increases.
  – Input to a node may fall into a saturation region when some of its incoming weights become very large during learning. Consequently, the weights stop changing no matter how hard you try.
  – Possible remedies:
    • Use non-saturating activation functions
    • Periodically normalize all weights: wk,j := wk,j / ||wk||²

Page 53: pattern recognition in stock market

• The learning (accuracy, speed, and generalization) is highly dependent on a set of learning parameters
  – Initial weights, learning rate, # of hidden layers and # of nodes...
  – Most of them can only be determined empirically (via experiments)

Page 54: pattern recognition in stock market

Practical Considerations

• A good BP net requires more than the core of the learning algorithm. Many parameters must be carefully selected to ensure good performance.

• Although the deficiencies of BP nets cannot be completely cured, some of them can be eased by practical means.

• Initial weights (and biases)
  – Random, in [-0.05, 0.05], [-0.1, 0.1], or [-1, 1]

• Avoid bias in weight initialization
  – Normalize weights for the hidden layer w(1,0) (Nguyen-Widrow):
    • Randomly assign initial weights for all hidden nodes
    • For each hidden node j, normalize its weights by

        w(1,0)j,i := β · w(1,0)j,i / ||w(1,0)j||₂,  where β = 0.7 · m^(1/n)

      (m = # of hidden nodes, n = # of input nodes; after normalization, ||w(1,0)j||₂ = β)
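A sketch of this normalization, assuming the β = 0.7 · m^(1/n) reading reconstructed above:

```python
import numpy as np

def nguyen_widrow(W10):
    """Nguyen-Widrow-style normalization of hidden-layer weights:
    scale each hidden node's weight vector to length
    beta = 0.7 * m**(1/n) (m hidden nodes, n inputs)."""
    m, n = W10.shape
    beta = 0.7 * m ** (1.0 / n)
    norms = np.linalg.norm(W10, axis=1, keepdims=True)
    return beta * W10 / norms

rng = np.random.RandomState(0)
W10 = nguyen_widrow(rng.uniform(-0.5, 0.5, size=(5, 2)))
print(np.linalg.norm(W10, axis=1))   # every row now has length beta
```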

Page 55: pattern recognition in stock market

• Training samples:
  – Quality and quantity of training samples often determine the quality of learning results
  – Samples must collectively represent the problem space well
    • Random sampling
    • Proportional sampling (with prior knowledge of the problem space)
  – # of training patterns needed: there is no theoretically ideal number.
    • Baum and Haussler (1989): P = W/e, where
      W: total # of weights to be trained (depends on net structure)
      e: acceptable classification error rate
      If the net can be trained to correctly classify (1 – e/2)·P of the P training samples, then the classification accuracy of this net is 1 – e for input patterns drawn from the same sample space.
      Example: W = 27, e = 0.05, P = 540. If we can successfully train the network to correctly classify (1 – 0.05/2)·540 = 526 of the samples, the net will work correctly 95% of the time with other input.
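The Baum-Haussler arithmetic as a small helper (added for illustration):

```python
def baum_haussler(W, e):
    """Baum & Haussler (1989) sample-size rule of thumb from the slide:
    P = W/e training patterns; the net must classify (1 - e/2) * P of
    them correctly to reach accuracy 1 - e on the same sample space."""
    P = W / e
    required_correct = (1 - e / 2) * P
    return P, required_correct

P, correct = baum_haussler(W=27, e=0.05)
print(P, correct)   # 540.0 526.5  (the slide rounds 526.5 down to 526)
```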

Page 56: pattern recognition in stock market

• How many hidden layers and hidden nodes per layer?
  – Theoretically, one hidden layer (possibly with many hidden nodes) is sufficient for any L2 function
  – There are no theoretical results on the minimum necessary # of hidden nodes
  – Practical rule of thumb:
    • n = # of input nodes; m = # of hidden nodes
    • For binary/bipolar data: m = 2n
    • For real data: m >> 2n
  – Multiple hidden layers with fewer nodes may be trained faster for similar quality in some applications

Page 57: pattern recognition in stock market

– Example: compressing character bitmaps.
  • Each character is represented by a 7-by-9 pixel bitmap, or a binary vector of dimension 63
  • 10 characters (A – J) are used in the experiment
  • Error range:
      tight: 0.1 (off: 0 – 0.1; on: 0.9 – 1.0)
      loose: 0.2 (off: 0 – 0.2; on: 0.8 – 1.0)
  • Relationship between # hidden nodes, error range, and convergence rate:
      – relaxing the error range may speed up convergence
      – increasing the # of hidden nodes (to a point) may speed up convergence

      error range 0.1, hidden nodes 10: 400+ epochs
      error range 0.2, hidden nodes 10: 200+ epochs
      error range 0.1, hidden nodes 20: 180+ epochs
      error range 0.2, hidden nodes 20: 90+ epochs

      no noticeable speed-up when # of hidden nodes increases beyond 22

Page 58: pattern recognition in stock market

• Other applications
  – Medical diagnosis
    • Input: manifestations (symptoms, lab tests, etc.); Output: possible disease(s)
    • Problems:
      – no causal relations can be established
      – hard to determine what should be included as inputs
    • Currently the focus is on more restricted diagnostic tasks, e.g., predicting prostate cancer or hepatitis B based on standard blood tests
  – Process control
    • Input: environmental parameters; Output: control parameters
    • Learn ill-structured control functions

Page 59: pattern recognition in stock market

– Stock market forecasting
  • Input: financial factors (CPI, interest rate, etc.) and stock quotes of previous days (weeks); Output: forecast of stock prices or stock indices (e.g., S&P 500)
  • Training samples: stock market data of the past few years

– Consumer credit evaluation
  • Input: personal financial information (income, debt, payment history, etc.)
  • Output: credit rating

– And many more

– Keys to successful application
  • Careful design of the input vector (including all important features): some domain knowledge
  • Obtaining good training samples: time and other costs

Page 60: pattern recognition in stock market

Summary of BP Nets

• Architecture
  – Multi-layer, feed-forward (full connection between nodes in adjacent layers, no connection within a layer)
  – One or more hidden layers with non-linear activation function (most commonly used are sigmoid functions)

• BP learning algorithm
  – Supervised learning (samples (xp, dp))
  – Approach: gradient descent to reduce the total error, Δw = –η ∂E/∂w (why it is also called the generalized delta rule)
  – Error terms at output nodes, and error terms at hidden nodes (why it is called error BP)
  – Ways to speed up the learning process
    • Adding momentum terms
    • Adaptive learning rate (delta-bar-delta)
    • Quickprop
  – Generalization (cross-validation test)

Page 61: pattern recognition in stock market

• Strengths of BP learning
  – Great representation power
  – Wide practical applicability
  – Easy to implement
  – Good generalization power

• Problems of BP learning
  – Learning often takes a long time to converge
  – The net is essentially a black box
  – The gradient descent approach only guarantees a local minimum error
  – Not every function that is representable can be learned
  – Generalization is not guaranteed even if the error is reduced to zero
  – No well-founded way to assess the quality of BP learning
  – Network paralysis may occur (learning is stopped)
  – Selection of learning parameters can only be done by trial-and-error
  – BP learning is non-incremental (to include new training samples, the network must be re-trained with all old and new samples)

Page 62: pattern recognition in stock market

Experiments

Page 63: pattern recognition in stock market

Stock Prediction

• Stock prediction is a difficult task due to the nature of stock data, which are very noisy and time varying.

• The efficient market hypothesis claims that the future price of a stock is not predictable from publicly available information.

• However, this theory has been challenged by many studies, and a few researchers have successfully applied machine learning approaches such as neural networks to stock prediction.

Page 64: pattern recognition in stock market

Is the Market Predictable?

• Efficient Market Hypothesis (EMH) (Fama, 1965): the stock market is efficient in that current market prices reflect all information available to traders, so that future changes cannot be predicted relying on past prices or publicly available information.

• Murphy's law: anything that can go wrong will go wrong.

Fama et al. (1988) showed that 25% to 40% of the variance in stock returns over a period of three to five years is predictable from past returns.

Pesaran and Timmerman (1999) concluded that the UK stock market has been predictable over the past 25 years.

Saad (1998) successfully employed different neural network models to predict the short-term trend of various stocks.

Page 65: pattern recognition in stock market

Optimistic report

Page 66: pattern recognition in stock market

Implementation

• In this paper we propose to investigate SVM, MLP and RBF networks for the task of predicting the future trend of 3 major stock indices:
  a) Kuala Lumpur Composite Index (KLCI)
  b) Hongkong Hangseng index
  c) Nikkei 225 stock index
  using input based on technical indicators.

• The paper approaches the problem as a two-class pattern classification, formulated specifically to assist investors in making trading decisions.

• The classifier is asked to recognise investment opportunities that can give a return of r% or more within the next h days (r = 3%, h = 10 days).

Page 67: pattern recognition in stock market

System Block Diagram

• The classifier is to predict whether an increment of the stock index of more than 3% within the next 10-day period can be achieved.

[Diagram: daily historical data converted into technical analysis indicators → Classifier → "Increment achievable?" Yes / No]

Page 68: pattern recognition in stock market

Data Used

• Kuala Lumpur Stock Index (KLCI) for the period 1992-1997

Page 69: pattern recognition in stock market

Data Used

• Hangseng index (20/4/1992 - 1/9/1997)

Page 70: pattern recognition in stock market

Data Used

• Nikkei 225 stock index (20/4/1982 - 1/9/1987)

Page 71: pattern recognition in stock market

Input to Classifier

TABLE 1: DESCRIPTION OF INPUT TO CLASSIFIER

xi, i = 1, 2, 3 … 12; n = 15

DLN(t) = sign[q(t) – q(t–N)] · ln( q(t)/q(t–N) + 1 )    (1)

where q(t) is the index level at day t and DLN(t) is the actual input to the classifier.
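A sketch of transform (1) applied to a price series (implemented exactly as the formula is printed on the slide; the index levels below are made up):

```python
import numpy as np

def dln(q, N):
    """Input transform (1) from Table 1:
    DLN(t) = sign[q(t) - q(t-N)] * ln(q(t)/q(t-N) + 1)."""
    q = np.asarray(q, dtype=float)
    ratio = q[N:] / q[:-N]
    return np.sign(q[N:] - q[:-N]) * np.log(ratio + 1.0)

q = [100.0, 102.0, 101.0, 105.0, 103.0]   # made-up index levels
print(dln(q, N=1))
```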

Page 72: pattern recognition in stock market

Prediction Formulation

Consider ymax(t) as the maximum upward movement of the stock index value within the period t and t + h, where y(t) represents the stock index level at day t.

Page 73: pattern recognition in stock market

Prediction Formulation

Classification

The prediction of the stock trend is formulated as a two-class classification problem:

yr(t) > r%  →  Class 2
yr(t) ≤ r%  →  Class 1

Page 74: pattern recognition in stock market

Prediction Formulation

Classification

• Let (xi, yi), 1 ≤ i ≤ N, be a set of N training examples; each input example xi ∈ Rⁿ (n = 15 being the dimension of the input space) belongs to a class labelled by yi ∈ {+1, –1}.

[Figure: index series with segments labelled yi = +1 and yi = –1]

Page 75: pattern recognition in stock market

Performance Measure

• True Positive (TP) is the number of positive-class examples predicted correctly as the positive class.

• False Positive (FP) is the number of negative-class examples predicted wrongly as the positive class.

• False Negative (FN) is the number of positive-class examples predicted wrongly as the negative class.

• True Negative (TN) is the number of negative-class examples predicted correctly as the negative class.

Page 76: pattern recognition in stock market

Performance Measure

• Accuracy = (TP + TN) / (TP + FP + TN + FN)
• Precision = TP / (TP + FP)
• Recall rate (sensitivity) = TP / (TP + FN)
• F1 = 2 · Precision · Recall / (Precision + Recall)
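The four measures as a small helper (an added sketch; the confusion counts are made up):

```python
def metrics(TP, FP, FN, TN):
    """The four performance measures defined above."""
    accuracy = (TP + TN) / (TP + FP + TN + FN)
    precision = TP / (TP + FP)
    recall = TP / (TP + FN)          # sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Made-up confusion counts, just to exercise the formulas.
print(metrics(TP=40, FP=10, FN=20, TN=30))
```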

Page 77: pattern recognition in stock market

Testing Method

A rolling window method is used to capture training and test data:

Train = 600 data points, Test = 400 data points

[Diagram: rolling window over the time series — a train segment followed by a test segment, advanced along the series]
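A sketch of the rolling-window split (the step size is an assumption; the slides do not state how far the window advances):

```python
def rolling_windows(n, train=600, test=400):
    """Yield (train_idx, test_idx) ranges for the rolling-window scheme
    above. Advancing by one test block per step is an assumption."""
    start = 0
    while start + train + test <= n:
        yield (range(start, start + train),
               range(start + train, start + train + test))
        start += test

for tr, te in rolling_windows(2000):
    print(tr, te)
```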

Page 78: pattern recognition in stock market

Experiment and Result

• Experiments are conducted to predict the stock trend of three major stock indexes: KLCI, Hangseng and Nikkei.

• SVM, MLP and RBF networks are used to make trend predictions, based on both classification and regression approaches.

• A hypothetical trading system is simulated to find the annualized profit generated from the given predictions.

Page 79: pattern recognition in stock market

Experiment and Result

Page 80: pattern recognition in stock market

Trading Performance

• A hypothetical trading system is used.

• When a positive prediction is made, one unit of money is invested in a portfolio reflecting the stock index. If the stock index increases by more than r% (r = 3%) within the next h days (h = 10) at day t, the investment is sold at the index price of day t. If not, the investment is sold on day t+1 regardless of the price. A transaction fee of 1% is charged for every transaction.

• The annualised rate of return is used.
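A sketch of this trading rule (the fallback sell day is ambiguous in the slide's wording, so selling at the end of the h-day window is an assumption; prices and signals are made up):

```python
import numpy as np

def simulate(q, signals, r=0.03, h=10, fee=0.01):
    """On a positive signal at day t0, invest 1 unit at q[t0]; if the
    index rises more than r within the next h days, sell at that day's
    price, otherwise sell at the end of the window (assumption; the
    slide says 'sold on day t+1'). A 1% fee applies to each buy/sell."""
    profit = 0.0
    for t0 in np.where(signals)[0]:
        if t0 + h >= len(q):
            continue
        sell = q[t0 + h]                      # fallback: end of window
        for t in range(t0 + 1, t0 + h + 1):
            if q[t] > q[t0] * (1 + r):
                sell = q[t]                   # target reached at day t
                break
        profit += (sell / q[t0]) * (1 - fee) - (1 + fee)
    return profit

q = 100 + np.cumsum(np.random.RandomState(0).randn(300))  # made-up index
signals = np.zeros(300, dtype=bool); signals[[10, 50, 120]] = True
print(simulate(q, signals))
```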

Page 81: pattern recognition in stock market

Trading Performance

• Classifier Evaluation Using the Hypothetical Trading System

Page 82: pattern recognition in stock market

Trading Performance

Page 83: pattern recognition in stock market

Experiment and Result

• Classification Result

Page 84: pattern recognition in stock market

Experiment and Result

• The results show better performance of the neural network techniques when compared to the K-nearest-neighbour classifier. SVM shows overall better performance on average than the MLP and RBF networks in most of the performance metrics used.

Page 85: pattern recognition in stock market

Experiment and Result

• Comparison of Receiver Operating Characteristic (ROC) curves

Page 86: pattern recognition in stock market

Experiment and Result

• Area under the ROC curve

Page 87: pattern recognition in stock market

Conclusion

• We have investigated the SVM, MLP and RBF network as classifiers and regressors to assess their potential in the stock trend prediction task.

• The support vector machine (SVM) has shown better performance when compared to MLP and RBF.

• The SVM classifier with probabilistic output outperforms the MLP and RBF networks in terms of the error-reject tradeoff.

• Both the classification and regression models can be used for a profitable trend prediction system. The classification model has the advantage that a pattern rejection scheme can be incorporated.

Page 88: pattern recognition in stock market

This report

Page 90: pattern recognition in stock market

Results

• Basically zero correlation between the prediction and the actual outcome

• Suffered from many technical failures

• Still have faith that these methods (when applied correctly) can predict the future better than a random guess

• Tried many sorts of topologies for the BPN and many input values to the SVM; it looks like the secret does not lie there

• Future investigation: use wavelet/noiselet coefficients as inputs

Page 91: pattern recognition in stock market

References • http://www.cs.unimaas.nl/datamining/slides2009/svm_presentation.ppt

• http://merlot.stat.uconn.edu/~lynn/svm.ppt

• http://www.cs.bham.ac.uk/~axk/ML_SVM05.ppt

• http://www.stanford.edu/class/msande211/KKTgeometry.ppt

• http://www.csee.umbc.edu/~ypeng/F09NN/lecture-notes/NN-Ch3.ppt

• http://fit.mmu.edu.my/caiic/reports/report04/mmc/haris.ppt

• http://www.youtube.com/watch?v=oQ1sZSCz47w

• Google, Wikipedia and others