IEEE 2006 International Multitopic Conference, Islamabad, Pakistan, 23-24 December 2006


Recognition of Online Isolated Handwritten Characters by Backpropagation Neural Nets Using Sub-Character Primitive Features

Muhammad Faisal Zafar, Informatics Complex (ICCC), H-8/1 Islamabad, hmfzafar@iccc.org.pk
Dzulkifli Mohamad, FSKSM, Universiti Teknologi Malaysia, dzul@fsksm.utm.my
Muhammad Masood Anwar, Informatics Complex (ICCC), H-8/1 Islamabad, masood@iccc.org.pk

Abstract - In online handwriting recognition, the existing challenges are to cope with the problems of various writing fashions, variable size for the same character, different stroke orders for the same letter, and efficient data presentation to the classifier. The similarities of distinct character shapes and ambiguous writing further complicate the dilemma. A solitary solution to all these problems lies in the intelligent and appropriate extraction of features from the character at the time of writing. A typical handwriting recognition system focuses on only a subset of these problems. The goal of fully unconstrained handwriting recognition still remains a challenge due to the amount of variation found in characters. The handwriting recognition problem can be considered for various alphabets and at various levels of abstraction. The main goal of the work presented in this paper has been the development of an on-line handwriting recognition system which is able to recognize handwritten characters of several different writing styles. Due to the temporal nature of online data, this work has possible application to the domain of speech recognition as well. The work in this research aimed to investigate various features of handwritten letters, their use and discriminative power, and to find reliable feature extraction methods in order to recognize them. A 22-feature set of sub-character primitive features has been proposed using a quite simple approach to feature extraction. This approach has succeeded in providing robust pattern recognition features while keeping the feature domain space to a small, optimum size. The Backpropagation Neural Network (BPN) technique has been used as the classifier, and a recognition rate of up to 87% has been achieved even for highly distorted handwritten characters.

Keywords: Online handwriting recognition, feature extraction, neural networks

I. INTRODUCTION

Online handwriting recognition is one of the very complex and challenging problems [15, 16, 24], because of the variability in size and writing style of hand-printed characters [21], duplicate pixels caused by hesitation in writing, and non-adjacent consecutive pixels, which must be interpolated, caused by fast writing. As mentioned in the literature [9, 12], feature extraction plays an important role in the overall process of handwriting recognition. Many feature extraction techniques [2, 5, 6, 9, 12, 14, 20] have been proposed to improve overall recognition rates; however, most of them depend on the size and slope of handwritten characters. They require a very accurate resizing and slant correction procedure or technique; otherwise they achieve very poor recognition rates. Also, most existing techniques use only one characteristic of a handwritten character. This research focuses on a sub-character primitive feature extraction technique that does not use resizing of a character; it uses direction encoding to curtail the characteristics of a character and combines them to create a global feature vector.

Handwriting recognition can be approached from both the on-line and off-line perspectives, and the current focus of the market today is on-line handwriting recognition. With the increase in popularity of portable computing devices such as PDAs and handheld computers [3, 13], non-keyboard based methods for data entry are receiving more attention in the research communities and the commercial sector. The large number of symbols in some natural languages (e.g., Kanji contains 4,000 commonly used characters) makes keyboard entry an even more difficult task [18]. The most promising options are pen-based and voice-based inputs. Digitizing devices like [19] and computing platforms such as the IBM ThinkPad TransNote [8] and Tablet PCs [23] have a pen-based user interface. Such devices, which generate handwritten documents with online or dynamic (temporal) information, require efficient algorithms for processing and retrieving handwritten data [1]. A pen-based interface is more natural and easier to use for tasks involving complex formatting, like entering and editing equations, and drawing sketches and diagrams [10].

An aspect of on-line handwriting recognition that sets it apart from off-line handwriting recognition is the temporal input sequence information provided directly by the user. This dynamic information provides clean foreground separation and perfect thinning, and on-line recognition can bypass the preprocessing that is required by off-line recognition processing. On-line data, in general, is at least an order of magnitude more compact than off-line data because of the different dimensionalities in representation. The difference in data size also results in a substantial difference in processing time [10].

II. PROPOSED SUB-CHARACTER PRIMITIVE BASED FEATURE EXTRACTION

In handwriting recognition, extracting the right features is not easy due to the variance of handwriting. To find the proper features, we need to study the variance of handwriting, which comprises the number of strokes and the connection of strokes with other factors. In on-line handwriting, data is represented by a sequence of points. We need to extract features from images [11] or from a sequence of points [7] for off-line and on-line handwriting recognition respectively. The extracted features can be used for recognition and pre-classification. In fact, features are often used to represent handwritten characters and may thus be thought of as the abstract characteristics of a character [22].

In sub-character primitive feature extraction, each character model is decomposed into a small number of sub-character primitive units, each of which represents a small salient sub-region of the character. The simplest kind is the directional code computed from equally spaced regions. One advantage of sub-character primitives is that they are more concise due to their encoded nature. Another is that they offer a higher level view because they represent a region, not a point. One problem is that primitive level feature extraction, depending on the granularity it works on, must be extremely robust to be effective, since a single mistake may make a big hole in the overall picture because of the regional nature of the representation. An overview of the proposed system is presented in Fig 1.

Fig 1: Block diagram of the proposed system.

Data acquisition is performed by a digitizing tablet that samples the location of a stylus on the tablet at a rate of approximately 73-200 times per second. This generates a sequence of points which defines the trace of the pen over time.

Recognition results strongly depend on the quality of the data provided to the recognizer. Ideally, the recognizer would get data which contains only the features it needs for recognition. Strokes captured by a digitizing tablet tend to contain a small amount of noise, most likely introduced by the digitizing device, causing the sampled curves to be somewhat jagged. Removing irrelevant data decreases the amount of work necessary to locate the features and, more importantly, it reduces the possibility of incurring errors in locating or classifying these features. In order to achieve this aim, the input data points are filtered. In this research work two filters, a distance filter and a cross product filter, have been used. The distance filter removes identical consecutive data points from the input data. Then the cross product filter approximates curved strokes by line segments. The details can be seen in [17].
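The two filters can be sketched as follows (a minimal illustration, assuming strokes are lists of (x, y) tuples; the exact filter definitions and the collinearity tolerance belong to [17], so the details below are assumptions, not the authors' implementation):

```python
def distance_filter(points):
    """Remove identical consecutive data points (e.g. duplicates caused by pen hesitation)."""
    filtered = [points[0]]
    for p in points[1:]:
        if p != filtered[-1]:
            filtered.append(p)
    return filtered

def cross_product_filter(points, tolerance=1.0):
    """Approximate curved strokes by line segments: a point is kept only when the
    cross product of the segments around it exceeds a tolerance, i.e. the pen path
    turns noticeably there. The tolerance value is an assumption."""
    if len(points) < 3:
        return list(points)
    kept = [points[0]]
    for i in range(1, len(points) - 1):
        ax, ay = points[i][0] - kept[-1][0], points[i][1] - kept[-1][1]
        bx, by = points[i + 1][0] - points[i][0], points[i + 1][1] - points[i][1]
        if abs(ax * by - ay * bx) > tolerance:
            kept.append(points[i])
    kept.append(points[-1])
    return kept

# Usage: stroke = cross_product_filter(distance_filter(stroke))
```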

Direction vector encoding was introduced by Herbert Freeman [4]. It is a means of describing line drawings in an approximate way using a number of vectors of quantised directions. The direction of each line segment is approximated to the nearest direction available within the vector encoding. When a line changes direction, another approximation is used. The result is a chain of vectors capable of representing any arbitrary line drawing. Handwriting can be perceived as a line drawing. This is particularly the case for dynamic data, which can be represented by polylines of various complexities. Vector direction encoding represents the angular variation of the pen path well and is used in segment-and-recognize approaches (Powalka, 1995) [17]. In this work, experiments have been carried out using eight vector directions. All data samples were encoded using eight direction vectors. The encoding was performed on the filtered data.

For eight-direction encoding the following measures have been taken: the horizontal direction along the x-axis (at angle 0°) is coded as 1. Directions at angles 45°, 90°, 135°, 180°, 225°, 270°, and 315° are coded as 2, 3, 4, 5, 6, 7, and 8 respectively. When a character is drawn, the value of the angle θ is calculated by the relation:

θ = tan⁻¹(y/x) .................... (1)

As there are 4 quadrants, the distributions of the x and y coordinates for the computer screen are shown in Fig 2.


Fig 2: Four quadrants

Therefore, the values of θ in the 1st and 3rd quadrants are negative. To code the angle space between two angle values, a threshold is fixed. A value of θ below this threshold is coded to the lower nearest value and above it to the upper nearest value. For instance, the threshold between 0° (0 rad) and 45° (0.785 rad) is taken as 22° (0.384 rad). A θ value between 0° (0 rad) and 22° (0.384 rad) is coded as '1', between 22.1° and 45° it is coded as '2', and so on.
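For instance (illustrative values), a pen movement with Δx = 3 and Δy = −2 in screen coordinates gives θ = tan⁻¹(−2/3) ≈ −33.7° (−0.588 rad); since its magnitude lies between the 22° threshold and 45°, the segment is coded as '2'.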

In terms of radians the coding looks like:

Code = 1 when −0.385 rad < θ ≤ −0.01 rad
Code = 2 when −0.786 rad < θ ≤ −0.385 rad

and so on. The length of each vector is measured using the Euclidean distance:

d = √((x₂ − x₁)² + (y₂ − y₁)²) .................... (2)

Also, the slope of a line with points (x₁, y₁) and (x₂, y₂) is given by


tan θ = (y₂ − y₁) / (x₂ − x₁) .................... (3)

or

y₂ − y₁ = (x₂ − x₁) tan θ

Putting this value of y₂ − y₁ in (2):

d = √((x₂ − x₁)² + ((x₂ − x₁) tan θ)²)

or

d = (x₂ − x₁)√(1 + tan²θ)

or

x₂ = x₁ + d / √(1 + tan²θ) .................... (4)

Similarly,

y₂ = y₁ + d / √(1 + 1/tan²θ) .................... (5)

As the values of x₁, y₁, d, and θ are calculated from the previous points (x₁, y₁) and (x₂, y₂), the new values of x₂ and y₂ are obtained from (4) and (5). In this way, the exact length of the segment, i.e. the Euclidean distance d, remains the same after being coded.
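A compact sketch of this encoding step is given below. It assumes the standard mathematical angle convention (codes 1..8 placed counterclockwise at 0°, 45°, ..., 315°; the paper's screen-coordinate sign rule can be obtained by negating Δy) and assumes that the θ used in (4) and (5) is the quantised direction; the function names are illustrative, not the authors' code:

```python
import math

def direction_code(theta):
    """Quantise an angle in radians to a code 1..8 (1 = 0 deg, 2 = 45 deg, ...),
    splitting each 45-degree sector near its midpoint (the ~22-degree threshold)."""
    return int(round(theta / (math.pi / 4))) % 8 + 1

def encode_segment(x1, y1, x2, y2):
    """Return the direction code of the segment (x1, y1)-(x2, y2) and the new
    endpoint that preserves the Euclidean length d along the quantised direction,
    as in (4) and (5)."""
    d = math.hypot(x2 - x1, y2 - y1)
    theta = math.atan2(y2 - y1, x2 - x1)   # atan2 handles all four quadrants
    code = direction_code(theta)
    quantised = (code - 1) * math.pi / 4
    return code, (x1 + d * math.cos(quantised), y1 + d * math.sin(quantised))

# Example: encode_segment(0, 0, 3, 2) -> code 2, endpoint ~ (2.55, 2.55),
# i.e. the endpoint is re-placed on the 45-degree line at the same distance d.
```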

The feature extraction scheme proposed in this work is based on the encoded direction vectors, as they carry useful information which can be used for character recognition. When a character is encoded, the following information is collected from the code sequence:

i. Total length of the direction vector sequence (T).
ii. Total number of each direction vector present in the sequence (Dᵢ, i = 1, 2, ..., N), where N = number of encoding directions (in this case 8).

Below are given the different proposed features experimented with in this research work. They are produced by considering different combinations of local as well as global traits of every character.

1. Ratio rᵢ of the total number of each direction vector Dᵢ to the sequence length T, i.e.

rᵢ = Dᵢ / T, i = 1, 2, ..., N .................... (6)

This ratio is size invariant and helpful in getting the unique participation of each direction code for a particular character. It gives local insight into the character. In total these are eight ratios, as 8-direction encoding has been used in this work.

2. Norm n (the Minkowski metric) of all the rᵢ, given by

‖n‖₂ = √(r₁² + r₂² + ... + r_N²) .................... (7)

This is a global feature of every character and gives a unique value for a particular character. In connection with (6), it is also size invariant.

3. Total no. of direction vectors h1 pointing towards the left side.
4. Total no. of direction vectors h2 pointing towards the right side.
5. Total no. of direction vectors v1 pointing upwards.
6. Total no. of direction vectors v2 pointing downwards.
7. Total no. of direction vectors diag1 constituting left diagonals.
8. Total no. of direction vectors diag2 constituting right diagonals.
9. Total no. of local maxima maxim present in the sequence.
10. Total no. of local left arms LTarm present in the sequence.
11. Total no. of local right arms RTarm present in the sequence.
12. Total no. of local minima minim present in the sequence.
13. Total no. of clockwise cw movements present in the sequence.
14. Total no. of anti-clockwise acw movements present in the sequence.
15. Total no. of central angles ang present in the sequence.

The last thirteen features (3 to 15) have a local as well as a global nature and record how many times the event stated in the feature definition occurs. All these values are distinctive for a particular character.

Here a comprehensive algorithm for the extraction of features from the direction-vector-encoded sequence S = (s₀, s₁, s₂, ..., s_{T−1}) of a character is given. It implements everything step by step as proposed above. The values of the encoded sequence S and its length T are assumed known. N = 8 is the total number of directions used in encoding.

1. Measurement of the ratio rᵢ of the total number of each direction vector Dᵢ to the sequence length T:
rᵢ = Dᵢ / T, from (6).

2. Calculation of the norm n of the rᵢ:
‖n‖₂ = √(r₁² + r₂² + ... + r_N²), from (7).

3. Calculation of h1, h2, v1, v2, diag1, diag2:
Scan the code sequence S from start to end. Each one of h1, h2, v1, v2, diag1, diag2 will qualify for a count and will be incremented by 1 if at least three consecutive direction vectors are the same.

4. Calculation of maxim:
Let a, b, and c represent patches of three consecutive values in the sequence S such that a = 666, b = 777, c = 888. There will be a maxim if b comes immediately after a, if c comes immediately after a, or if c comes immediately after b, in each case with only one direction vector between them (the direction vector in between has been observed to be 2 or 3 in all the above cases).

5. Calculation of minim:
Let x and y represent patches of three consecutive values in the sequence S such that x = 222 and y = 333. There will be a minim if x comes immediately after b, if y comes immediately after b, if x comes immediately after c, or if y comes immediately after c.

6. Calculation of RTarm:
There will be an RTarm if b comes immediately after x, or if c comes immediately after x.

7. Calculation of LTarm:
There will be an LTarm if b comes immediately after y, or if c comes immediately after y.

8. Calculation of cw, acw and ang:
If the code sequence has two consecutive pairs of direction vectors in descending order (e.g. 5544), there is a clockwise movement and cw is incremented by 1. If the code sequence has two consecutive pairs of direction vectors in ascending order (e.g. 4455), there is an anticlockwise movement and acw is incremented by 1. If 88 comes immediately after 66 or 55, there is a central angle and ang is incremented by 1.
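Putting the algorithm together, a condensed sketch (not the authors' code) might look like the following; the grouping of codes into direction families, the run-length rules, and the gap handling in count_pattern are simplified assumptions restating the verbal definitions above:

```python
import math
from collections import Counter
from itertools import groupby

N = 8  # number of encoding directions

# Assumed grouping of the eight codes into direction families
# (1 = 0 deg, 2 = 45 deg, ..., 8 = 315 deg; left/right/up/down per the feature list).
GROUPS = {"h1_left": {5}, "h2_right": {1}, "v1_up": {3}, "v2_down": {7},
          "diag1_left": {4, 8}, "diag2_right": {2, 6}}

def runs_of(S):
    """Maximal runs of identical codes in S, as (code, length) pairs."""
    return [(code, len(list(g))) for code, g in groupby(S)]

def count_pattern(runs, firsts, seconds, min_len=3, max_gap=1):
    """Count a run of codes from `seconds` following a run from `firsts`, separated
    by at most `max_gap` intervening runs (a simplified reading of the rules above)."""
    long_runs = [(i, c) for i, (c, l) in enumerate(runs) if l >= min_len]
    return sum(1 for (i, c1), (j, c2) in zip(long_runs, long_runs[1:])
               if c1 in firsts and c2 in seconds and j - i - 1 <= max_gap)

def extract_features(S):
    """Sketch of the 22-element feature vector from a direction-code sequence S (codes 1..8)."""
    T = len(S)
    counts = Counter(S)
    r = [counts.get(i, 0) / T for i in range(1, N + 1)]   # step 1: ratios r_i = D_i / T, eq. (6)
    norm = math.sqrt(sum(ri * ri for ri in r))             # step 2: ||n||_2, eq. (7)

    rs = runs_of(S)                                        # step 3: runs of >= 3 identical codes
    group_counts = [sum(1 for c, l in rs if l >= 3 and c in codes) for codes in GROUPS.values()]

    maxim = count_pattern(rs, {6}, {7, 8}) + count_pattern(rs, {7}, {8})  # step 4 (a=666, b=777, c=888)
    minim = count_pattern(rs, {7, 8}, {2, 3})                             # step 5 (x=222, y=333)
    rtarm = count_pattern(rs, {2}, {7, 8})                                # step 6
    ltarm = count_pattern(rs, {3}, {7, 8})                                # step 7

    # Step 8: clockwise / anticlockwise movements and central angles from consecutive code pairs.
    pairs = [(S[k], S[k + 2]) for k in range(T - 3) if S[k] == S[k + 1] and S[k + 2] == S[k + 3]]
    cw  = sum(1 for a, b in pairs if b == a - 1)               # e.g. 55 followed by 44
    acw = sum(1 for a, b in pairs if b == a + 1)               # e.g. 44 followed by 55
    ang = sum(1 for a, b in pairs if a in (5, 6) and b == 8)   # 66 or 55 followed by 88

    return r + [norm] + group_counts + [maxim, ltarm, rtarm, minim, cw, acw, ang]
```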

III. EXPERIMENTS

A. Data Set and Model Parameters

The data used in this work was collected using a SummaSketch III tablet. It has an electric pen with a sensing writing board. An interface was developed to get the data from the tablet. Namboodiri and Jain (2004) [1] pointed out that the actual device for data collection is not important as long as it can generate a temporal sequence of x and y positions of the pen tip. However, the writing styles of people may vary considerably on different writing surfaces, and the script classifier may require training on different surfaces.

Selecting only a few characters from the entire character set to analyze the behaviour of different recognition models appeared appropriate. Hence, the 26 upper case English alphabets were considered in the case study. In the data set, the total number of handwritten characters is about 2000 characters, collected from 40 subjects. Every developed model was tested on characters drawn by individuals who did not participate in the sample collection for the data set. Each subject was asked to write on the tablet board (writing area). No restriction was imposed on the content or style of writing. The writers consisted of university students (from different countries), professors, and employees in private companies.

As direction encoding has been used for the sub-character primitive based system, the stroke order matters in this case, since different orderings of the strokes will result in different representations even though the written character is the same (see Fig 3). Fortunately, each character class has certain regularity in stroke orderings, so the number of different stroke orders is not large in most cases. Only three characters, "A", "M", and "N", have been found to be more frequently written by people in different orders. Thus for these three characters, in our data set, each has been considered twice depending on the two different writing orders. This factor has increased the alphabet domain from 26 to 29 characters. In the data set, the total number of handwritten characters is about 2000 characters used for training purposes, collected from 30 subjects. About 1000 more handwriting samples were used for testing purposes. Experiments were examined by extracting a feature vector of 22 elements from each character.

Fig 3: Character "A", with two different orders in which it could have been written indicated in boxed numbers.

B. Learning / Training

For classification purposes, two neural network techniques, back-propagation neural networks (BPN) and counter-propagation neural networks, have been used. Experiments were performed using data sets of 11, 22, 33, 44, 55 and 66 samples/character separately to observe the behaviour of the BPN.

In the BPN, sigmoid PEs were used in the hidden and the output layer. Twenty-nine output layer processing elements (PEs) corresponded to the twenty-six English alphabets (A, M and N having double entries) to be recognized. Table 1 shows the summary of the different parameter values used for the BPN during training. Training was stopped, and 16 samples (out of 319), 34 samples (out of 638), 39 samples (out of 957), 91 samples (out of 1276), 106 samples (out of 1595) and 136 samples (out of 1914) remained unclassified after 123500, 72000, 156000, 41000, 58000, and 28000 training presentations for the 11, 22, 33, 44, 55 and 66 samples/character models respectively.
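As an illustration of this training setup, a minimal sketch is given below, assuming the layer sizes and learning rates reported in Table 1 (bias units and the momentum term are omitted for brevity; this is not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 22, 30, 29             # 22 features, 30 hidden PEs, 29 output PEs
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_presentation(x, target, lr=0.5):
    """One backpropagation presentation for a single 22-element feature vector."""
    global W1, W2
    h = sigmoid(x @ W1)                        # hidden sigmoid PEs
    y = sigmoid(h @ W2)                        # output sigmoid PEs
    err = target - y
    delta_out = err * y * (1 - y)              # derivative of the sigmoid at the output
    delta_hid = (delta_out @ W2.T) * h * (1 - h)
    W2 += lr * np.outer(h, delta_out)
    W1 += lr * np.outer(x, delta_hid)
    return float(np.sum(err ** 2))             # this sample's contribution to the SSE
```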


Table 1: Details of the different parameter values used for the BPN during the training phase

Samples/Character | Total no. of characters | Iterations | Sum of Squared Error (SSE) | Learning rate | Momentum parameter | Hidden elements | Characters remained untrained
11 Each | 319  | 123500 | 7.4237    | 0.999  | 0.5 | 35 | 16 (5%)
22 Each | 638  | 72000  | 15.7978   | 0.5    | 0.2 | 30 | 34 (5.3%)
33 Each | 957  | 156000 | 16.1764   | 0.2    | 0.1 | 30 | 39 (4%)
44 Each | 1276 | 41000  | 30.768270 | 0.999  | 0.5 | 30 | 91 (7.1%)
55 Each | 1595 | 58000  | 42.029945 | 0.999  | 0.5 | 30 | 106 (6.6%)
66 Each | 1914 | 28000  | 60.69241  | 0.9999 | 0.5 | 30 | 138 (7.2%)

Table 2 below shows the learning trends of the BPN during training. Values of the sum of squared error (SSE) are given after every 5000 iterations.

Table 2: Learning trends of the BPN during the training phase (SSE)

Iterations | 11 Each | 22 Each | 33 Each | 44 Each | 55 Each | 66 Each
0     | 154.16 | 308.62 | 462.9  | 593.98 | 732.72 | 857.59
5000  | 11.63  | 21.26  | 28.31  | 37.24  | 56.38  | 68.94
10000 | 11.17  | 19.41  | 26.71  | 33.11  | 54.22  | 68.04
15000 | 7.91   | 19.32  | 23.89  | 38.56  | 51.72  | 62.55
20000 | 7.87   | 19.96  | 22.25  | 33.27  | 47.47  | 66.78
25000 | 7.83   | 18.04  | 21.93  | 33.84  | 45.92  | 60.73
30000 | 7.8    | 17.25  | 21.23  | 37.33  | 46.14  | 60.56
35000 | 7.77   | 16.31  | 21.23  | 40.68  | 45.0   | 60.69
40000 | 7.74   | 16.9   | 20.74  | 31.63  | 43.63  |
45000 | 7.71   | 17.08  | 19.49  | 30.76  | 43.23  |
50000 | 7.68   | 16.88  | 19.57  |        | 42.72  |

The performance trends for all data sets, presented in Table 2, are shown graphically in Fig 4.

Fig 4: Convergence trends of the different BPN models (SSE versus number of iterations for the 11, 22, 33, 44, 55 and 66 samples/character models).

C. Recognition Performance

As mentioned earlier, the models were evaluated on samples taken from individuals who did not participate in the initial process of setting up the training data set. This was done keeping in view the eventual aim of using the models in practical online recognition systems. The quality of an online handwriting recognizer is related to its ability to translate drawn characters irrespective of writing styles.

The BPN models were tested with high thresholds of 0.9 and 0.5, using the PE with the highest value above the threshold for input classification. Another criterion used in translating the BPN models' outputs was to eliminate the concept of a threshold and simply use the highest value. Note that the first two criteria will always have the possibility of a recognition failure: a network decision of not attributing any character to the input vector. The last criterion eliminates this somewhat desirable feature in the decision-making process. Table 3 below presents the statistics; CRs, FRs, and RFs are abbreviations for Correct Recognitions, False Recognitions, and Recognition Failures respectively.
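The three decision criteria can be sketched as follows (an illustrative helper, not the authors' code):

```python
import numpy as np

def classify(outputs, threshold=None):
    """Translate the BPN output vector into a decision: the index of the winning
    output PE, or None (a recognition failure) when no PE exceeds the threshold."""
    best = int(np.argmax(outputs))
    if threshold is not None and outputs[best] < threshold:
        return None
    return best

# 'Threshold' NONE -> classify(y); 'Threshold' 0.5 -> classify(y, 0.5); 'Threshold' 0.9 -> classify(y, 0.9)
```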

Table 3: Performance of the BPN models with three different criteria of classification

Samples/Character | Threshold: NONE (CRs / FRs / RFs) | Threshold: 0.5 (CRs / FRs / RFs) | Threshold: 0.9 (CRs / FRs / RFs)
11 Each | 66% / 34% / 0% | 68% / 18% / 14% | 67% / 15% / 18%
22 Each | 80% / 20% / 0% | 70% / 24% / 6%  | 74% / 8% / 18%
33 Each | 82% / 18% / 0% | 74% / 20% / 6%  | 77% / 8% / 15%
44 Each | 81% / 19% / 0% | 80% / 12% / 8%  | 79% / 11% / 10%
55 Each | 84% / 16% / 0% | 80% / 10% / 10% | 83% / 5% / 12%
66 Each | 87% / 13% / 0% | 86% / 10% / 4%  | 85% / 5% / 10%

D. Performance Analysis

For the developed BPN models, it was observed that learning became more difficult with the increase in training samples and, even after a long time of training, the models were unable to fully learn the training sets. A number of hidden PEs from 30 to 35 was found suitable. A learning rate value < 1 appeared appropriate for training. Generally, the recognition performance of the BPN models improved with the increase in samples/character. Figure 5 presents a graphical overview of the BPN performances with the three different decision criteria of recognition. The recognition rate without any threshold (NONE) was the highest (up to 87%), but at the cost of more false recognitions. This recognition rate gradually decreases when applying stricter thresholds (0.5 and 0.9), but this makes the system more reliable by permitting fewer false recognitions. However, overall the false recognitions were much fewer than the recognition failures (RFs) after applying thresholds. The larger number of RFs is due to a large number of untrained samples. This number can be reduced by experimenting with more suitable combinations of hidden PEs and learning rate, which will ultimately improve the recognition rate.

Fig 5: Graphical presentation of the BPN performances (CRs, FRs and RFs under the three classification criteria).

IV. CONCLUSION

An elementary online handwriting recognition prototype for isolated upper case English characters has been developed using a novel feature proposition. The system is also writer-independent, based on a neural network approach. The online drawn input character is captured as x, y coordinates. Filtering is performed to remove the noise. Then the character is converted to a direction code sequence. The 22-feature vector produced for the written character can then be stored for training purposes or can be supplied to a developed model for making a decision regarding the class of the character. The preliminary results are quite encouraging. The experiments provided the authors an opportunity to explore pattern recognition methodologies; the exercise provided a theoretical base for further investigations and an impetus for development work in this discipline.

REFERENCES

[1] Anoop M. Namboodiri and Anil K. Jain (2004). Online Handwritten Script Recognition. IEEE Trans. PAMI, 26(1): 124-130.
[2] Chakraborty, B., and Chakraborty, G. (2002). A New Feature Extraction Technique for On-line Recognition of Handwritten Alphanumeric Characters. Information Sciences, 148(1), 55-70.
[3] Evan Koblentz (May 2005). The Evolution of the PDA: 1975-1995. Computer Collector Newsletter, version 0.993.
[4] Freeman, H. (1974). Computer processing of line-drawing images. Computing Surveys, 6(1): 57-97.
[5] Gomez Sanchez, E., Gago Gonzalez, J.A., Dimitriadis, Y.A., Cano Izquierdo, J.M., and Lopez Coronado, J. (1998). Experimental Study of a Novel Neuro-fuzzy System for On-line Handwritten UNIPEN Digit Recognition. Pattern Recognition Letters, 19(3), 357-364.
[6] Hammandlu, M., Murali Mohan, K. R., Chakraborty, S., Goyal, S., and Choudhury, D. R. (2003). Unconstrained Handwritten Character Recognition Based on Fuzzy Logic. Pattern Recognition, 36(3), 603-623.
[7] Hu, J., Lim, S., and Brown, M. (1998). HMM based writer independent on-line handwritten character and word recognition. International Workshop on Frontiers in Handwriting Recognition, 143-155.
[8] IBM ThinkPad TransNote (2005). http://www-132.ibm.com/content/search/transnote.html
[9] Jaeger, S., Manke, S., Reichert, J., and Waibel, A. (2001). Online Handwriting Recognition: The NPen++ Recognizer. International Journal on Document Analysis & Recognition, 3(3), 169-180.
[10] Jong Oh (2001). An On-Line Handwriting Recognizer with Fisher Matching, Hypotheses Propagation Network and Context Constraint Models. PhD thesis, Department of Computer Science, New York University, USA.
[11] Liu, X.Y., and Blumenstein, M. (2004). Experimental analysis of the modified direction feature for cursive character recognition. International Workshop on Frontiers in Handwriting Recognition, 353-358.
[12] Parizeau, M., Lemieux, A., and Gagne, C. (2001). Character Recognition Experiments Using Unipen Data. Proceedings, Sixth International Conference on Document Analysis and Recognition, 481.
[13] Pen Computing Magazine: PenWindows (2005). http://www.pencomputing.com/PenWindows/index.html
[14] Ping, Z., and Lihui, C. (2002). A Novel Feature Extraction Method and Hybrid Tree Classification for Handwritten Numeral Recognition. Pattern Recognition Letters, 23(1), 45-56.
[15] Plamondon, R., and Privitera, C. M. (1999). The Segmentation of Cursive Handwriting: An Approach Based on Off-Line Recovery of the Motor-Temporal Information. IEEE Trans. Image Processing, 8(1): 80-91.
[16] Plamondon, Rejean, and Sargur N. Srihari (2000). On-Line and Off-Line Handwriting Recognition: A Comprehensive Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(1): 63-84.
[17] Powalka, R. Kazimierz (1995). An algorithm toolbox for on-line cursive script recognition. PhD thesis, The Nottingham Trent University.
[18] Scott D. Connell (2000). Online Handwriting Recognition Using Multiple Pattern Class Models. PhD thesis, Dept. of Computer Science and Engineering, Michigan State University, USA.
[19] Smart Technologies Inc. Homepage (2005). http://www.smarttech.com/
[20] Trier, O. D., Jain, A.K., and Taxt, T. (1996). Feature Extraction Methods for Character Recognition - A Survey. Pattern Recognition, 29(4), 641-662.
[21] Verma, B., Lu, J., Ghosh, M., and Ghosh, R. (2004). A Feature Extraction Technique for Online Handwriting Recognition. IEEE International Joint Conference on Neural Networks, IJCNN'04, Hungary, 1337-43.
[22] Watt, M. Stephen, and Xiaofang Xie (2005). Prototype Pruning by Feature Extraction for Handwritten Mathematical Symbol Recognition. In Ilias S. Kotsireas, editor, Maple Conference 2005, 423-437, Waterloo, Ontario, Maplesoft.
[23] Windows XP Tablet PC Edition Homepage (2005). http://www.microsoft.com/windowsxp/tabletpc/default.asp
[24] Xiaolin, L., and Yeung, D.-Y. (1997). On-line Handwritten Alphanumeric Character Recognition Using Dominant Points in Strokes. Pattern Recognition, 30(1), 31-44.
