

Research Article
Quantum Neural Network Based Machine Translator for Hindi to English

Ravi Narayan,1 V. P. Singh,1 and S. Chakraverty2

1 Department of Computer Science & Engineering, Thapar University, Patiala, Punjab 147 004, India
2 Department of Mathematics, National Institute of Technology Rourkela, Odisha 769 008, India

Correspondence should be addressed to Ravi Narayan; ravi.n.sharma@hotmail.com

Received 31 August 2013; Accepted 8 January 2014; Published 27 February 2014

Academic Editors: P. Bala and J. Silc

Copyright © 2014 Ravi Narayan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper presents a machine learning based machine translation system for Hindi to English, which learns from a semantically correct corpus. A quantum neural based pattern recognizer is used to recognize and learn the pattern of the corpus, using the information of the part of speech of each individual word in the corpus, like a human. The system performs machine translation using the knowledge gained during learning by inputting pairs of Devnagri-Hindi and English sentences. To analyze the effectiveness of the proposed approach, 2600 sentences have been evaluated during simulation and evaluation. The accuracy achieved on the BLEU score is 0.7502, on the NIST score 6.5773, on the ROUGE-L score 0.9233, and on the METEOR score 0.5456, which is significantly higher in comparison with Google Translation and Bing Translation for Hindi to English machine translation.

1. Introduction

Machine translation is one of the major fields of NLP, in which researchers have taken interest ever since computers were invented. Many machine translation systems, each with its pros and cons, are available for many languages. Researchers have also presented different approaches for computers to understand and generate languages with their semantics and syntax. But many languages still pose translation difficulties due to the ambiguity of their words and their grammatical complexity. A machine translator should address the key characteristic properties necessary to raise the performance of machine translation to the level of human translation. Most machine translators work on the alignment of words in a chunk (sentence).

This paper presents the quantum neural based machine translation for Hindi to English. The quantum neural network (QNN) based approach increases the accuracy of knowledge adaptation. In this work our main focus is to show the significant increase in the accuracy of machine translation achieved during our research with pairs of Hindi and English sentences. The machine translation is done using a new approach based on a quantum neural network which learns the patterns of the language from pairs of Hindi and English sentences.

Some researchers have done their machine translation (MT) using statistical machine translation (SMT). SMT uses pattern recognition for automatic machine translation systems over the available parallel corpora. Statistical machine translation needs an alignment mapping of words between the source and target sentences. On one hand, alignments are used to train the statistical models and, on the other hand, during the decoding process, to link the words in the source sentence to the words of the target sentence [1–4]. But SMT methods have the problem of word ordering. To overcome the problem of word ordering and to increase accuracy, some researchers introduced the concept of syntax-based reordering for Chinese-to-English and Arabic-to-English [5].

Recently, some work has been done with Hindi by several researchers using different methods of machine translation, like example based systems [5, 6], rule based systems [7], statistical machine translation [8], and parallel machine translation systems [9]. A. Chandola and Mahalanobis described the use of corpus patterns for the alignment and reordering of words for English to Hindi machine translation using the neural

Hindawi Publishing Corporation, The Scientific World Journal, Volume 2014, Article ID 485737, 8 pages, http://dx.doi.org/10.1155/2014/485737


network [10], but there are still many possibilities to develop an MT system for Hindi that increases the accuracy of MT. Some of the important works on Hindi are discussed in Section 2.

The main motivation behind the study of QNNs is the possibility of addressing unrealistic situations as well as realistic situations, which is not possible with a traditional neural network. A QNN learns and predicts more accurately and needs less computational power and time for learning in comparison to an artificial neural network. Researchers introduced this novel neural network model based on the superposition of quantum states, with a multilevel transfer function [11–13].

The most important difference between a classical neural network and a QNN is their respective activation functions. In a QNN, as a substitute for the normal activation functions, a multilevel activation function is used. Each multilevel function consists of a summation of sigmoid functions excited with a quantum difference [14].

In the QNN, the multilevel sigmoid function is employed as the activation function and is expressed as

sgm(x) = (1/n_s) ∑_{r=1}^{n_s} 1/(1 + exp(x − θ_r)),  (1)

where n_s denotes the total number of multilevel positions in the sigmoid functions and θ_r denotes the quantum interval of quantum level r [2].

2. Hindi-English Machine Translation Systems

2.1. ANGLABHARTI-II Machine Translation System. ANGLABHARTI-II, proposed in 2004, is a hybrid system based on a generalized example-base (GEB) with a raw example-base (REB). During development, the author established that altering the rule-base is hard and the outcome is possibly random. The system consists of an error-analysis component and a statistical language component for postediting. A preediting component can change the entered sentence into a structure that can be translated without difficulty [5].

2.2. MATRA Machine Translation System. MaTra, introduced in 2004, is based on a transfer approach using a frame-like structured representation. Rule-based and heuristic approaches are used to resolve ambiguities. A text classification module decides the category of a news item before the entered sentence is processed, and the system selects the appropriate dictionary based on the news domain. It requires human assistance in analyzing the input. The system also breaks up complex English sentences into simple sentences and, after examining the structure, produces Hindi sentences. This system is developed to work in the domain of news, annual reports, and technical phrases [7].

2.3. Hinglish Machine Translation System. The Hinglish machine translation system was developed in 2004 for standard Hindi to standard English. It was developed as an enhancement to the existing AnglaBharti-II MT system for English to Hindi and the AnuBharti-II system for Hindi to English translation developed by Sinha. The accuracy of this system is satisfactory, at more than 90%. As verbs can have multiple meanings, the system is not able to determine the sense, due to its shallow grammatical analysis [6].

2.4. IBM-English-Hindi Machine Translation System. The IBM-English-Hindi MT system was developed by IBM India Research Lab in 2006. At the beginning of this project they started to develop an example based MT system but later shifted to a statistical machine translation system from English to Indian languages.

2.5. Google Translate. Google Translate was developed by Franz-Josef Och in 2007. This model uses the statistical MT approach to translate English to other languages and vice versa. Among its 57 languages, Hindi and Urdu are the only Indian languages present in Google Translate. The accuracy of the system is good enough to understand the sentence after translation [15].

3. Proposed Machine Translation System for Hindi to English

The proposed machine translation (MT) system consists of two approaches: one is a rule based MT system and the other is a quantum neural based MT system. The source language goes into the rule based MT system and then passes through the QNN based MT system, which refines the MT done by the rule based MT module by recognizing and classifying the sentence category. 2600 sentences are used, in English with their corresponding Devanagari-Hindi sentences. Each Devanagari-Hindi sentence consists of words such as a question word, noun, helping verb, negative word, verb, preposition, article, adjective, postnoun, adverb, and so forth. Each English sentence likewise contains a question word, noun, helping verb, negative word, verb, preposition, article, adjective, postnoun, adverb, and so forth. The data used for training is produced by an algorithm based on a simple deterministic grammar. The entire architecture of the proposed MT system model is given in Figure 1.

4. Quantum Neural Architecture

As shown in Figure 2, the three-layer architecture of the QNN consists of inputs, one layer of multilevel hidden units, and an output layer. In the QNN, as a substitute for the normal activation functions, a multilevel activation function is used. Each multilevel function consists of a summation of sigmoid functions excited with a quantum difference:

sgm(x) = (1/n_s) ∑_{r=1}^{n_s} 1/(1 + exp(−x ± θ_r)),  (2)

where n_s denotes the total number of multilevel positions in the sigmoid functions and θ_r denotes the quantum interval of quantum level r.

Here every neural network node represents three substates in itself, separated by the quantum interval θ_r at quantum level r, where n_s denotes the number of grades in the quantum activation functions.

Figure 1: Architecture of the MT system model. The English sentence (input) passes through a tokenizer and a rule based POS tagger; the tagged sentence is used for training the QNN based POS tagger, and a lexicon, QNN based tag classifier, rule based grammar analysis, QNN and rule based sentence mapping, and semantic translation produce the Hindi sentence (output).

Figure 2: Architecture of the quantum neural network: an input layer, a hidden layer whose units have a normal state and excited states at +θ and −θ, and an output layer.
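As a concrete illustration, the three-layer forward pass of Figure 2 with three-substate hidden units can be sketched as follows (a minimal sketch; the layer sizes, random weights, and the choice of offsets −θ, 0, +θ are illustrative assumptions, not the trained system):

```python
import math
import random

def multilevel_sgm(x, theta):
    # Average of three sigmoids shifted by the quantum offsets -theta, 0, +theta,
    # giving each hidden node its normal state and two excited states.
    offsets = (-theta, 0.0, theta)
    return sum(1.0 / (1.0 + math.exp(-(x - t))) for t in offsets) / 3.0

def qnn_forward(inputs, w_hidden, w_out, theta=1.0):
    """Forward pass of a three-layer QNN: input -> multilevel hidden -> linear output."""
    hidden = [multilevel_sgm(sum(w * x for w, x in zip(row, inputs)), theta)
              for row in w_hidden]
    return [sum(w * h for w, h in zip(row, hidden)) for row in w_out]

random.seed(0)
w_hidden = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(5)]  # 4 inputs, 5 hidden
w_out = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(2)]     # 2 outputs
y = qnn_forward([0.1, 0.5, -0.3, 0.9], w_hidden, w_out)
```

Training such a network proceeds by gradient descent on the MSE, exactly as for a classical multilayer perceptron, with θ as an extra hyperparameter (compare Table 2).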

5. Quantum Neural Implementation of Translation Rules

The strategy is to first identify and tag the parts of speech using Table 1 and then translate the English (source language) sentences literally into Devanagari-Hindi (target language) with no rearrangement of words. After this syntactic translation, the words are rearranged for an accurate translation that retains the sense of the translated sentence. The rules are based on parts of speech, not on meaning. To facilitate the procedure, distinctive three-digit codes based on the parts of speech are assigned, as shown in Table 1.

For the special case when the input sentence and the resulting sentence have unequal numbers of words, the dummy numeric code 000 is used to give a similar word alignment.


Table 1: Numeric codes for parts of speech.

Parts of speech (subclass)            Numeric code
Prenoun (PN)                          100
Noun-infinitive (Ni)                  101
Pronoun (PRO)                         102
Gerund (GER)                          103
Relative pronoun (RPRO)               104
Postnoun (POSTN)                      105
Verb (V)                              110
Helping verb (HV)                     111
Adverb (ADV)                          112
Auxiliary verb (AUX)                  113
Interrogative (question word) (INT)   120
Demonstrative words (DEM)             121
Quantifier (QUAN)                     122
Article (A)                           123
Adjective (ADJ)                       130
Adjective-particle (ADJP)             131
Number (N)                            132
Preposition (PRE)                     140
Postposition (POST)                   141
Punctuation (PUNC)                    150
Conjunction (CONJ)                    160
Interjection (INTER)                  170
Negative word (NE)                    180
Determiner (D)                        190
Idiom (I)                             200
Phrases (P)                           210
Unknown words (UW)                    220

Case 1. When the input sentence and the resulting sentence have unequal numbers of words.

The coded version of the sentence is thus:

Ram will not go to the market

100 111 220 110 140 123 102

Therefore the input numeral sequence is [100 111 220 110 140 123 102] and the corresponding output is [100 102 220 110 111 000 000]. The outcome of the neural network might not be a perfect integer; it should be rounded off, and a few basic error adjustments might be needed to obtain the output numeral codes. The network is even likely to rearrange the locations of the 3-digit codes. In this way it learns the target-language knowledge needed for semantic rearrangement; it also helps in parts-of-speech tagging by pattern matching and, to a degree, in adopting and learning the grammar rules. For handling complex sentences, an algorithm is used. The algorithm first removes the interrogative and negative words; on the basis of conjunctions, the system then breaks up and converts the complex sentence into two or more small simple sentences. After the translation of each of the simple sentences, the system rejoins the entire subsentences and also adds back the removed interrogative and negative words in the sentence. The whole process is explained in Algorithm 1 in the next section.
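A minimal sketch of this numeric coding step follows (the tag-to-code table is a subset of Table 1; the tagger itself is stubbed out as a hand-supplied tag list, since the paper's rule based and QNN based taggers are not reproduced here):

```python
# Subset of the Table 1 codes: parts-of-speech subclasses -> three-digit codes.
POS_CODES = {
    "PN": 100, "PRO": 102, "V": 110, "HV": 111, "INT": 120,
    "A": 123, "PRE": 140, "NE": 180, "UW": 220,
}
PAD = 0  # dummy code 000, used to equalize source/target sequence lengths

def encode(tags, width):
    """Map a tagged sentence to its numeral sequence, padded with 000."""
    codes = [POS_CODES.get(t, POS_CODES["UW"]) for t in tags]
    return codes + [PAD] * (width - len(codes))
```

The padded, fixed-width numeral sequences on both sides are what give the network equally sized input and output vectors to learn the reordering from.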

5.1. Algorithm for the Proposed QNN Based MT System for Complex Sentences

QNNMTS(SENTENCE, TOKEN, N, LOC). Here SENTENCE is an array with N elements containing Hindi words. Parameter TOKEN contains the token of each word, and LOC keeps track of position. ICOUNT contains the maximum number of interrogative words encountered in the sentence, NCOUNT contains the maximum number of negative words encountered in the sentence, and CCOUNT contains the maximum number of conjunction words encountered in the sentence (see Algorithm 1).

6. Experiment and Results

All words in each language are assigned a unique numeric code on the basis of their respective part of speech. Experiments show that memorization of the training data is occurring. The results shown in this section are achieved after training with 2600 Devanagari-Hindi sentences and their English translations. 500 tests are performed with the system for each value of the quantum interval (θ), with random data sets selected from the 2600 sentences; the dataset is divided in a 4:3:3 ratio for training, validation, and test from the 2600 English sentences and their Devanagari-Hindi translations. The values in Table 2 are the averages of the 500 tests performed with the system for each value of the quantum interval (θ) over the 2600 sentences. The best performance is shown for a quantum interval (θ) equal to one with respect to all the parameters, that is, the epochs or iterations needed to train the network and the training, validation, and test performance in terms of mean square error (MSE). It is clearly shown that the QNN at θ equal to one is much more efficient than the classical artificial neural network at θ equal to zero. Table 2 clearly shows the comparison between the performance of the QNN and the ANN with respect to the above performance parameters, and as a result we can conclude that the QNN is better than the ANN for machine translation.

7. Evaluations and Comparison

This paper proposes a new machine translation method which combines the advantages of the quantum neural network. 2600 sentences are used to analyze the effectiveness of the proposed MT system.

The performance of the proposed system is comparatively analyzed against Google Translation (http://translate.google.com) and Microsoft's Bing Translation (http://www.bing.com/translator) by using various MT evaluation methods, like BLEU, NIST, ROUGE-L, and METEOR. For evaluation purposes we translate the same set of input sentences using our proposed system, Google Translation, and Bing Translation and then evaluate the output obtained from each of


Step 1. Check whether the sentence is a complex sentence or a simple sentence.
  [Initialize] Set ICOUNT := 1, NCOUNT := 1, CCOUNT := 1; ILOC, NLOC, CLOC.
  Repeat for LOC = 1 to N:
    if SENTENCE[LOC] = "Interrogative word" then
      Set ILOC[ICOUNT] := LOC; ICOUNT := ICOUNT + 1
    [End of if structure]
    if SENTENCE[LOC] = "Negative word" then
      Set NLOC[NCOUNT] := LOC; NCOUNT := NCOUNT + 1
    [End of if structure]
    if SENTENCE[LOC] = "Conjunction" then
      Set CLOC[CCOUNT] := LOC; CCOUNT := CCOUNT + 1
    [End of if structure]
  [End of for]
  if ILOC = NULL or NLOC = NULL or CLOC = NULL then go to Step 5.
Step 2. Remove the interrogative word from the complex sentence to make it affirmative.
  Repeat for X = 1 to ICOUNT:
    Set ITemp[X] := SENTENCE[ILOC[X]]; Set SENTENCE[ILOC[X]] := Null
  [End of for]
Step 3. Then remove the negative word to make it a simple sentence.
  Repeat for Y = 1 to NCOUNT:
    Set NTemp[Y] := SENTENCE[NLOC[Y]]; Set SENTENCE[NLOC[Y]] := Null
  [End of for]
Step 4. Split the sentence into two or more simple sentences on the basis of conjunctions.
  Repeat for Z = 1 to CCOUNT:
    Set CTemp[Z] := SENTENCE[CLOC[Z]]; Set SENTENCE[CLOC[Z]] := Hindi full stop ("|")
  [End of for]
Step 5. Pass each subsentence with TOKEN to the QNN based machine translator for repositioning.
Step 6. Refine the translated sentences by applying the grammar rules.
Step 7. Add the interrogative word if removed in Step 2.
  Repeat for X = 1 to ICOUNT:
    if ITemp[X] ≠ NULL then Set SENTENCE[ILOC[X]] := ITemp[X]
    [End of if structure]
  [End of for]
Step 8. Add the negative word if removed in Step 3.
  Repeat for Y = 1 to NCOUNT:
    if NTemp[Y] ≠ NULL then Set SENTENCE[NLOC[Y]] := NTemp[Y]
    [End of if structure]
  [End of for]
Step 9. Rejoin the entire subsentences if split in Step 4.
Step 10. Semantic translation.
Step 11. Exit.

Algorithm 1
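The split-translate-rejoin flow of Algorithm 1 can be sketched in a few lines (a simplified sketch, not the authors' implementation: the QNN translator is passed in as a stub, tags play the role of TOKEN, and removed words are reinserted at their original indices as an approximation of Steps 7 and 8):

```python
def qnnmts(sentence, pos, translate):
    """Sketch of Algorithm 1: save and blank interrogative/negative words,
    split at conjunctions, translate each simple clause, then rejoin and
    restore the saved words.

    sentence : list of words; pos : parallel list of tags ("INT", "NE",
    "CONJ", or other); translate : callable applied to each simple clause.
    """
    # Steps 1-3: record and blank out interrogative and negative words.
    saved = {i: w for i, (w, t) in enumerate(zip(sentence, pos)) if t in ("INT", "NE")}
    work = [None if i in saved else w for i, w in enumerate(sentence)]
    # Step 4: split at conjunctions into simple clauses.
    clauses, cur = [], []
    for w, t in zip(work, pos):
        if t == "CONJ":
            clauses.append(cur)
            cur = []
        elif w is not None:
            cur.append(w)
    clauses.append(cur)
    # Steps 5-6: translate each simple clause (QNN translator stubbed out).
    out = []
    for clause in clauses:
        out.extend(translate(clause))
    # Steps 7-9: reinsert the saved words at (roughly) their original slots.
    for i in sorted(saved):
        out.insert(min(i, len(out)), saved[i])
    return out
```

With an identity stub for `translate`, the function simply strips the conjunction and restores the question and negation words around the rejoined clauses.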

the systems. The fluency check is done by n-gram analysis using the reference translations.

7.1. BLEU. We have used BLEU (bilingual evaluation understudy) to calculate the score of the system output. BLEU is an IBM-developed metric which uses modified n-gram precision to compare the candidate translation against reference translations [16].

A comparative bar diagram between the proposed system, Google, and Bing based on the BLEU scale is shown in Figure 3. The bar diagram clearly shows that the proposed system has a remarkably high accuracy of 0.7502 on the BLEU scale; Bing


Table 2: Comparison of performance measurement of the MT system with QNN and a traditional neural network.

S. No. | Quantum interval (θ)       | Epoch (iteration) | Training performance (MSE) | Validation performance (MSE) | Test performance (MSE)
1      | θ = 0 (equivalent to ANN)  | 2004.391          | 0.002027                   | 0.002066                     | 0.002075
2      | 0.25                       | 1536.727          | 0.000185                   | 0.000205                     | 0.000203
3      | 0.5                        | 1547.305          | 0.000248                   | 0.000273                     | 0.000273
4      | 0.75                       | 1528.343          | 0.000231                   | 0.000248                     | 0.000252
5      | 1*                         | 1548.303          | 0.00025                    | 0.00027                      | 0.000272
6      | 1.25                       | 1567.066          | 0.000284                   | 0.0003                       | 0.0003
7      | 1.5                        | 1564.271          | 0.000296                   | 0.000318                     | 0.000316
8      | 1.75                       | 1578.643          | 0.000349                   | 0.000374                     | 0.000375
9      | 2                          | 1568.663          | 0.000184                   | 0.000204                     | 0.000209
10     | 2.25                       | 1596.607          | 0.000249                   | 0.000273                     | 0.000273
11     | 2.5                        | 1627.345          | 0.000256                   | 0.000281                     | 0.000276
12     | 2.75                       | 1654.092          | 0.00022                    | 0.000242                     | 0.000242
13     | 3                          | 1649.301          | 0.000348                   | 0.000361                     | 0.00037
14     | 3.25                       | 1693.214          | 0.000192                   | 0.000214                     | 0.000216
15     | 3.5                        | 1776.248          | 0.000294                   | 0.000315                     | 0.000323
16     | 3.75                       | 1785.429          | 0.000185                   | 0.000202                     | 0.000207
17     | 4                          | 1872.056          | 0.000217                   | 0.000237                     | 0.00024
*For the value of θ = 1 the optimum trade-off of iterations and the minimum error (MSE) is achieved.

Figure 3: Comparative bar diagram between the proposed system, Google, and Bing based on the BLEU scale (Bing 0.2626, Google 0.3501, proposed 0.7502).

has shown an accuracy of 0.2626 and Google has shown 0.3501 accuracy on the BLEU scale.

Calculate the n-gram resemblance by comparing the sentences. Then add the clipped n-gram counts for all the candidate sentences and divide by the number of candidate n-grams in the test sentence to calculate the precision score p_n for the whole test sentence:

p_n = [∑_{C ∈ Candidates} ∑_{n-gram ∈ C} Count_clip(n-gram)] / [∑_{C′ ∈ Candidates} ∑_{n-gram′ ∈ C′} Count(n-gram′)],  (3)

where Count_clip = min(Count, Max_Ref_Count); in other words, one truncates each word's count.

Figure 4: Comparative bar diagram between the proposed system, Google, and Bing based on the NIST scale (Bing 4.1744, Google 4.955, proposed 6.5773).

Here c denotes the length of the candidate translation and r denotes the reference sentence length. Then calculate the brevity penalty BP:

BP = 1 if c > r;  BP = e^(1 − r/c) if c ≤ r.  (4)

Then

BLEU = BP · exp(∑_{n=1}^{N} w_n log p_n).  (5)
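The computation of (3)-(5) can be sketched for a single sentence pair as follows (a simplified sentence-level sketch with uniform weights w_n = 1/N; the original IBM metric is corpus-level):

```python
import math
from collections import Counter

def modified_precision(candidate, references, n):
    # Clipped n-gram precision of (3): each candidate n-gram count is
    # truncated by its maximum count in any single reference.
    cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
    max_ref = Counter()
    for ref in references:
        for g, c in Counter(tuple(ref[i:i + n])
                            for i in range(len(ref) - n + 1)).items():
            max_ref[g] = max(max_ref[g], c)
    clipped = sum(min(c, max_ref[g]) for g, c in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0

def bleu(candidate, references, max_n=4):
    c = len(candidate)
    # Closest reference length (ties broken toward the shorter reference).
    r = min((len(ref) for ref in references), key=lambda L: (abs(L - c), L))
    bp = 1.0 if c > r else math.exp(1 - r / c)  # brevity penalty (4)
    precisions = [modified_precision(candidate, references, n)
                  for n in range(1, max_n + 1)]
    if min(precisions) == 0.0:
        return 0.0  # log of zero precision is undefined; score collapses to 0
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)  # (5)
```

A candidate identical to its reference scores 1.0; any missing 4-gram pulls the geometric mean down sharply, which is why sentence-level BLEU is usually smoothed in practice.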

7.2. NIST. Proposed by NIST (National Institute of Standards and Technology), this metric reduces the effect of longer N-grams by using an arithmetic mean over N-gram counts instead of


Figure 5: Comparative bar diagram among the proposed system, Google, and Bing based on the ROUGE-L scale (Bing 0.6475, Google 0.7189, proposed 0.9233).

the geometric mean of co-occurrences over N [17]. Figure 4 shows the comparative bar diagram between the proposed system, Google, and Bing based on the NIST scale. The bar diagram clearly shows that the proposed system has a remarkably high accuracy of 6.5773 on the NIST scale; Bing has shown an accuracy of 4.1744 and Google has shown 4.955 accuracy on the NIST scale.

NIST_score = BP_NIST × PRECISION_NIST,

PRECISION_NIST = ∑_{n=1}^{N} [ ∑_{all w_1⋯w_n that co-occur} Info(w_1⋯w_n) / ∑_{all w_1⋯w_n in hypo} (1) ],  (6)

where

Info(w_1⋯w_n) = −log_2( Count(w_1⋯w_n) / Count(w_1⋯w_{n−1}) ),  (7)

where Info gives more weight to the words that are difficult to predict and the counts are computed over the full set of references; theoretically, the precision range has no limit. Consider

BP_NIST = exp{ β · log²[ min(Len_hypo / Len_ref, 1) ] },  (8)

where Len_hypo is the total length of the hypothesis and Len_ref is the average length of all references, which does not depend on the hypothesis.
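The information weight of (7) can be sketched as follows (a minimal sketch; `ref_counts` is an assumed mapping from n-gram tuples to their counts over the reference set, and unseen n-grams or prefixes simply contribute zero weight here, a simplification of the full metric):

```python
import math

def info(ngram, ref_counts):
    """Info(w1..wn) = -log2(count(w1..wn) / count(w1..wn-1)): rarer
    continuations of a prefix, which are harder to predict, get larger
    weights in the NIST precision sum."""
    full = ref_counts.get(ngram, 0)
    prefix = ref_counts.get(ngram[:-1], 0)
    if full == 0 or prefix == 0:
        return 0.0  # simplification: no weight for unseen n-grams/prefixes
    return -math.log2(full / prefix)

counts = {("the",): 10, ("the", "cat"): 2}
w = info(("the", "cat"), counts)
```

Here "cat" follows "the" in only 2 of 10 reference occurrences of "the", so the bigram earns the weight −log2(2/10) ≈ 2.32.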

7.3. ROUGE-L. ROUGE-L (recall-oriented understudy for gisting evaluation-longest common subsequence) calculates the sentence-to-sentence resemblance using the longest common subsequence between the candidate translation and the reference translations. The longest common subsequence represents the similarity between two translations. F_lcs calculates the resemblance between two translations X of length m and Y of length n, where X denotes the reference translation and Y denotes the candidate translation [18]. A comparative bar diagram among the proposed system, Google, and Bing based on the ROUGE-L scale is shown in Figure 5. The bar diagram clearly shows that the proposed

Figure 6: Comparative bar diagram among the proposed system, Google, and Bing based on the METEOR scale (Bing 0.1384, Google 0.2021, proposed 0.5456).

system has a remarkably high accuracy of 0.9233 on the ROUGE-L scale; Bing has shown an accuracy of 0.6475 and Google has shown 0.7189 accuracy on the ROUGE-L scale.

R_lcs = LCS(X, Y) / m,
P_lcs = LCS(X, Y) / n,
F_lcs = (1 + β²) R_lcs P_lcs / (R_lcs + β² P_lcs),  (9)

where P_lcs is precision, R_lcs is recall, LCS(X, Y) denotes the length of the longest common subsequence of X and Y, and β = P_lcs / R_lcs when ∂F_lcs/∂R_lcs = ∂F_lcs/∂P_lcs. Consider

ROUGE-L = Harmonic Mean(P_lcs, R_lcs) = 2 P_lcs R_lcs / (P_lcs + R_lcs).  (10)

7.4. METEOR. METEOR (metric for evaluation of translation with explicit ordering) was developed at Carnegie Mellon University. Figure 6 shows the comparative bar diagram among the proposed system, Google, and Bing based on the METEOR scale. The bar diagram clearly shows that the proposed system has a remarkably high accuracy of 0.5456 on the METEOR scale; Bing has shown an accuracy of 0.1384 and Google has shown 0.2021 accuracy on the METEOR scale.

METEOR uses the weighted harmonic mean of unigram precision (P = m/w_t) and unigram recall (R = m/w_r). Here m denotes the number of unigram matches, w_t denotes the number of unigrams in the candidate translation, and w_r denotes the number of unigrams in the reference translation. F_mean is calculated by combining the recall and precision via a harmonic mean that places equal weight on precision and recall: F_mean = 2PR/(P + R).

This measure accounts for congruity with respect to single words, but to take longer n-gram matches into account a penalty p is calculated for the alignment: p = 0.5 (c/u_m)³. Here c denotes the number of chunks and u_m denotes the number of unigrams that have been mapped [19].


The final METEOR score (M-score) can be calculated as follows:

METEOR-score = F_mean (1 − p).  (11)
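A simplified sketch of this score using exact unigram matching only (the real metric also matches stems and synonyms and computes a proper alignment; here the chunk count is approximated by runs of matches that are consecutive in the reference, and `list.index` always returns the first occurrence, so repeated words are handled crudely):

```python
def meteor_sketch(candidate, reference):
    """Exact-match METEOR sketch: F_mean penalized by the fragmentation
    term p = 0.5 * (c / u_m)**3, combined as F_mean * (1 - p), per (11)."""
    m = sum(1 for w in candidate if w in reference)  # crude unigram matches
    if m == 0:
        return 0.0
    precision = m / len(candidate)                   # P = m / w_t
    recall = m / len(reference)                      # R = m / w_r
    f_mean = 2 * precision * recall / (precision + recall)
    # Count chunks: maximal runs of matched candidate words whose reference
    # positions are contiguous and in order.
    chunks, prev_idx = 0, None
    for w in candidate:
        if w in reference:
            idx = reference.index(w)  # first occurrence only (simplification)
            if prev_idx is None or idx != prev_idx + 1:
                chunks += 1
            prev_idx = idx
        else:
            prev_idx = None
    penalty = 0.5 * (chunks / m) ** 3
    return f_mean * (1 - penalty)
```

For a perfect match the whole sentence forms a single chunk, so the penalty shrinks as (1/m)³ and the score approaches 1 for longer sentences.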

The experiments confirm the accuracy achieved for machine translation based on the quantum neural network, which is better than that of the other bilingual translation methods.

8. Conclusion

In this work we have presented the quantum neural network approach for the problem of machine translation. It has demonstrated reasonable accuracy on various scores. It may be noted that the BLEU score achieved is 0.7502, the NIST score 6.5773, the ROUGE-L score 0.9233, and the METEOR score 0.5456. The accuracy of the proposed system is significantly higher in comparison with Google Translation, Bing Translation, and other existing approaches for Hindi to English machine translation. The accuracy of this system has been improved significantly by incorporating techniques for handling unknown words using the QNN. It is also shown above that it requires less training time than neural network based MT systems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] L. Rodríguez, I. García-Varea, and A. Jose Gámez, "On the application of different evolutionary algorithms to the alignment problem in statistical machine translation," Neurocomputing, vol. 71, pp. 755–765, 2008.

[2] S. Ananthakrishnan, R. Prasad, D. Stallard, and P. Natarajan, "Batch-mode semi-supervised active learning for statistical machine translation," Computer Speech & Language, vol. 27, pp. 397–406, 2013.

[3] D. Ortiz-Martínez, I. García-Varea, and F. Casacuberta, "The scaling problem in the pattern recognition approach to machine translation," Pattern Recognition Letters, vol. 29, pp. 1145–1153, 2008.

[4] J. Andrés-Ferrer, D. Ortiz-Martínez, I. García-Varea, and F. Casacuberta, "On the use of different loss functions in statistical pattern recognition applied to machine translation," Pattern Recognition Letters, vol. 29, no. 8, pp. 1072–1081, 2008.

[5] M. Khalilov and J. A. R. Fonollosa, "Syntax-based reordering for statistical machine translation," Computer Speech & Language, vol. 25, no. 4, pp. 761–788, 2011.

[6] R. M. K. Sinha, "An engineering perspective of machine translation: AnglaBharti-II and AnuBharti-II architectures," in Proceedings of the International Symposium on Machine Translation, NLP and Translation Support System (iSTRANS '04), pp. 134–138, Tata McGraw-Hill, New Delhi, India, 2004.

[7] R. M. K. Sinha and A. Thakur, "Machine translation of bi-lingual Hindi-English (Hinglish)," in Proceedings of the 10th Machine Translation Summit, pp. 149–156, Phuket, Thailand, 2005.

[8] R. Ananthakrishnan, M. Kavitha, J. H. Jayprasad et al., "MaTra: a practical approach to fully-automatic indicative English-Hindi machine translation," in Proceedings of the Symposium on Modeling and Shallow Parsing of Indian Languages (MSPIL '06), 2006.

[9] S. Raman and N. R. K. Reddy, "A transputer-based parallel machine translation system for Indian languages," Microprocessors and Microsystems, vol. 20, no. 6, pp. 373–383, 1997.

[10] A. Chandola and A. Mahalanobis, "Ordered rules for full sentence translation: a neural network realization and a case study for Hindi and English," Pattern Recognition, vol. 27, no. 4, pp. 515–521, 1994.

[11] N. B. Karayiannis and G. Purushothaman, "Fuzzy pattern classification using feed-forward neural networks with multilevel hidden neurons," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1577–1582, Orlando, Fla, USA, June 1994.

[12] G. Purushothaman and N. B. Karayiannis, "Quantum Neural Networks (QNN's): inherently fuzzy feedforward neural networks," IEEE Transactions on Neural Networks, vol. 8, no. 3, pp. 679–693, 1997.

[13] R. Kretzschmar, R. Bueler, N. B. Karayiannis, and F. Eggimann, "Quantum neural networks versus conventional feedforward neural networks: an experimental study," in Proceedings of the 10th IEEE Workshop on Neural Networks for Signal Processing (NNSP '00), pp. 328–337, December 2000.

[14] Z. Daqi and W. Rushi, "A multi-layer quantum neural networks recognition system for handwritten digital recognition," in Proceedings of the 3rd International Conference on Natural Computation (ICNC '07), pp. 718–722, Hainan, China, August 2007.

[15] T. Brants, A. C. Popat, P. Xu, F. J. Och, and J. Dean, "Large language models in machine translation," in Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pp. 858–867, Prague, Czech Republic, June 2007.

[16] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, "BLEU: a method for automatic evaluation of machine translation," in Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL '02), pp. 311–318, Philadelphia, Pa, USA, 2002.

[17] G. Doddington, "Automatic evaluation of machine translation quality using n-gram co-occurrence statistics," in Proceedings of the 2nd International Conference on Human Language Technology Research, 2002.

[18] C.-Y. Lin, "ROUGE: a package for automatic evaluation of summaries," in Proceedings of the Workshop on Text Summarization Branches Out, Barcelona, Spain, 2004.

[19] S Banerjee and A Lavie ldquoMETEOR an automatic metric forMT evaluation with improved correlation with human judg-mentsrdquo in Proceedings of theWorkshop on Intrinsic and ExtrinsicEvaluation Measures for MT Association of ComputationalLinguistics (ACL) Ann Arbor Mich USA 2005



The Scientific World Journal

Some researchers have realized ordered full-sentence translation rules for Hindi and English with a neural network [10], but there is still much scope to develop an MT system for Hindi and to increase the accuracy of MT. Some of the important works on Hindi are discussed in Section 2.

The main motivation behind the study of QNNs is the possibility of addressing unrealistic as well as realistic situations, which is not possible with a traditional neural network. A QNN learns and predicts more accurately and needs less computational power and time for learning in comparison to an artificial neural network. Researchers introduced a novel neural network model based on the superposition of quantum states, with a multilevel transfer function [11–13].

The most important difference between a classical neural network and a QNN lies in their respective activation functions. In a QNN, a multilevel activation function is used as a substitute for the normal activation function. Each multilevel function consists of a summation of sigmoid functions excited with a quantum difference [14].

In the QNN, the multilevel sigmoid function employed as the activation function is expressed as

sgm(x) = (1/n_s) Σ_{r=1}^{n_s} 1 / (1 + exp(−(x − θ^r)))    (1)

where n_s denotes the total number of multilevel positions in the sigmoid functions and θ^r denotes the quantum interval of quantum level r [2].
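The multilevel sigmoid of Eq. (1) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the number of levels and the quantum-interval values below are our own assumptions.

```python
import math

def sgm(x, thetas=(0.0, 1.0, 2.0)):
    """Multilevel sigmoid of Eq. (1): the average of n_s ordinary
    sigmoids, each shifted by a quantum interval theta_r.
    The interval values here are illustrative."""
    n_s = len(thetas)
    return sum(1.0 / (1.0 + math.exp(-(x - t))) for t in thetas) / n_s
```

Because each shifted sigmoid saturates at a different input, the sum forms a staircase-like transfer function with n_s graded levels instead of a single 0-to-1 transition.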

2. Hindi-English Machine Translation Systems

2.1. ANGLABHARTI-II Machine Translation System. ANGLABHARTI-II, proposed in 2004, is a hybrid system based on a generalized example-base (GEB) with a raw example-base (REB). At the time of development the authors established that altering the rule-base is hard and that the outcome can be random. The system includes an error-analysis component and a statistical language component for postediting, and a preediting component can convert the entered sentence into a structure that can be translated without difficulty [6].

2.2. MATRA Machine Translation System. MaTra, introduced in 2004, is based on the transfer approach using a frame-like structured representation, with rule-based heuristics used to resolve ambiguities. A text-classification module decides the category of a news item before the entered sentence is processed, and the system selects the appropriate dictionary based on the news domain. It requires human assistance in analyzing the input. The system also breaks up complex English sentences into simple sentences and, after examining the structure, produces Hindi sentences. It was developed to work in the domains of news, annual reports, and technical phrases [8].

2.3. Hinglish Machine Translation System. The Hinglish machine translation system was developed in 2004 for standard Hindi to standard English. It was developed as an enhancement to the existing AnglaBharti-II MT system for English to Hindi and to the AnuBharti-II system for Hindi to English translation, both developed by Sinha. The accuracy of this system is satisfactory, at more than 90%, but because verbs have multiple meanings it is not able to determine the sense, owing to its shallow grammatical analysis [7].

2.4. IBM-English-Hindi Machine Translation System. The IBM-English-Hindi MT system was developed by IBM India Research Lab in 2006. At the beginning of the project they started to develop an example-based MT system but later shifted to a statistical machine translation system from English to Indian languages.

2.5. Google Translate. Google Translate was developed under Franz-Josef Och in 2007. This model uses the statistical MT approach to translate English to other languages and vice versa. Among its 57 languages, Hindi and Urdu are the only Indian languages present in Google Translate. The accuracy of the system is good enough to understand a sentence after translation [15].

3. Proposed Machine Translation System for Hindi to English

The proposed machine translation (MT) system consists of two approaches: a rule based MT system and a quantum neural based MT system. The source language goes into the rule based MT system and then passes through the QNN based MT system, which refines the translation done by the rule based module by recognizing and classifying the sentence category. 2600 English sentences and their corresponding Devanagari-Hindi sentences are used. Each Devanagari-Hindi sentence consists of words such as a question word, noun, helping verb, negative word, verb, preposition, article, adjective, postnoun, adverb, and so forth; each English sentence contains the same categories of words. The training data are produced by an algorithm based on a simple deterministic grammar. The entire architecture of the proposed MT system model is given in Figure 1.

4. Quantum Neural Architecture

As shown in Figure 2, the three-layer architecture of the QNN consists of an input layer, one layer of multilevel hidden units, and an output layer. In the QNN, a multilevel activation function is used as a substitute for the normal activation function; each multilevel function consists of a summation of sigmoid functions excited with a quantum difference:

sgm(x) = (1/n_s) Σ_{r=1}^{n_s} 1 / (1 + exp(−x ± θ^r))    (2)

where n_s denotes the total number of multilevel positions in the sigmoid functions and θ^r denotes the quantum interval of quantum level r. Here every node of the neural network represents three substates in itself, separated by the quantum interval θ^r at quantum level r, where n_s denotes the number of grades in the quantum activation functions.

[Figure 1: Architecture of the MT system model. A tagged sentence for training (input) passes through the tokenizer, the rule based POS tagger, and the QNN based POS tagger; the lexicon and the QNN based tag classifier feed the rule based grammar analysis and the QNN and rule based sentence mapping, which performs the semantic translation from the English sentence (input) to the Hindi sentence (output).]

[Figure 2: Architecture of the quantum neural network. Inputs n_1, n_2, …, n_i feed a hidden layer whose units have a normal state and excited states at +θ and −θ; the output layer produces n_1, n_2, …, n_k.]

5. Quantum Neural Implementation of Translation Rules

The strategy is to first identify and tag the parts of speech using Table 1 and then translate the English (source language) sentences literally into Devanagari-Hindi (target language) with no rearrangement of words. After this syntactic translation, the words are rearranged for an accurate translation that retains the sense of the translated sentence. The rules are based on parts of speech, not on meaning. To facilitate the procedure, distinctive three-digit codes based on parts of speech are assigned, as shown in Table 1.

For the special case when the input sentence and the resulting sentence have unequal numbers of words, the dummy numeric code 000 is used to give a similar word alignment.


Table 1: Numeric codes for parts of speech.

Parts of speech (subclass)               Numeric code
Prenoun (PN)                             100
Noun-infinitive (Ni)                     101
Pronoun (PRO)                            102
Gerund (GER)                             103
Relative pronoun (RPRO)                  104
Postnoun (POSTN)                         105
Verb (V)                                 110
Helping verb (HV)                        111
Adverb (ADV)                             112
Auxiliary verb (AUX)                     113
Interrogative (question word) (INT)      120
Demonstrative words (DEM)                121
Quantifier (QUAN)                        122
Article (A)                              123
Adjective (ADJ)                          130
Adjective-particle (ADJP)                131
Number (N)                               132
Preposition (PRE)                        140
Postposition (POST)                      141
Punctuation (PUNC)                       150
Conjunction (CONJ)                       160
Interjection (INTER)                     170
Negative word (NE)                       180
Determiner (D)                           190
Idiom (I)                                200
Phrases (P)                              210
Unknown words (UW)                       220

Case 1. When the input sentence and the resulting sentence have unequal numbers of words.

The coded version of the sentence is thus:

Ram will not go to the market

100 111 220 110 140 123 102

Therefore the input numeral sequence is [100 111 220 110 140 123 102] and the corresponding output is [100 102 220 110 111 000 000]. The outcome of the neural network might not be a perfect integer; it should be rounded off, and a few basic error adjustments might be needed to obtain the output numeral codes. The network is even likely to rearrange the positions of the 3-digit codes. In this way it learns the target-language knowledge needed for semantic rearrangement; it also helps in parts-of-speech tagging by pattern matching and is helpful for adopting and learning the grammar rules up to a level. For handling complex sentences, an algorithm is used. The algorithm first removes the interrogative and negative words; then, on the basis of conjunctions, the system breaks up and converts the complex sentence into two or more small simple sentences. After the translation of each simple sentence, the system rejoins all the subsentences and also adds back the removed interrogative and negative words. The whole process is explained in Algorithm 1 in the next section.
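The three-digit coding and the rounding of network outputs described above can be sketched as follows. This is an illustrative sketch, not the paper's code: the POS tags assigned to the example words follow the coded sequence given above, and the helper names are ours.

```python
# Subset of the three-digit codes from Table 1 used by the example.
POS_CODE = {"PN": 100, "HV": 111, "UW": 220, "V": 110,
            "PRE": 140, "A": 123, "PRO": 102}

def encode(tags, length=None):
    """Map a POS-tag sequence to numeric codes, padding with the dummy
    code 000 (here the integer 0) so source and target align."""
    codes = [POS_CODE[t] for t in tags]
    if length is not None:
        codes += [0] * (length - len(codes))
    return codes

def decode_outputs(raw):
    """Network outputs are not perfect integers; round each one to the
    nearest valid code (the basic error adjustment described above)."""
    valid = sorted(set(POS_CODE.values()) | {0})
    return [min(valid, key=lambda c: abs(c - y)) for y in raw]

# "Ram will not go to the market" -> [100, 111, 220, 110, 140, 123, 102]
src = encode(["PN", "HV", "UW", "V", "PRE", "A", "PRO"])
out = decode_outputs([100.3, 101.8, 219.6, 110.2, 111.4, 0.2, -0.1])
```

Snapping to the nearest *valid* code, rather than plain rounding, ensures the decoded sequence always consists of codes that exist in Table 1 or the dummy code.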

5.1. Algorithm for Proposed QNN Based MT System for Complex Sentences

QNNMTS(SENTENCE, TOKEN, N, LOC). Here SENTENCE is an array with N elements containing Hindi words. Parameter TOKEN contains the token of each word, and LOC keeps track of position. ICOUNT contains the maximum number of interrogative words encountered in the sentence, NCOUNT the maximum number of negative words, and CCOUNT the maximum number of conjunction words (see Algorithm 1).

6. Experiment and Results

All words in each language are assigned a unique numeric code on the basis of their respective part of speech. Experiments show that memorization of the training data occurs. The results shown in this section are achieved after training with 2600 Devanagari-Hindi sentences and their English translations. 500 tests are performed with the system for each value of the quantum interval (θ), with random data sets selected from the 2600 sentences; the data set is divided in a 4:3:3 ratio for training, validation, and test, respectively, from the 2600 English sentences and their Devanagari-Hindi translations. The values in Table 2 are the averages of the 500 tests performed for each value of the quantum interval (θ). The best performance is obtained for a quantum interval (θ) equal to one with respect to all the parameters, that is, the epochs (iterations) needed to train the network and the training, validation, and test performance in terms of mean square error (MSE). It is clearly shown that the QNN at θ equal to one is much more efficient than the classical artificial neural network at θ equal to zero. Table 2 compares the performance of the QNN with the ANN with respect to the above performance parameters, and as a result we can conclude that the QNN is better than the ANN for machine translation.
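The 4:3:3 division of the 2600 sentence pairs (1040 training, 780 validation, 780 test) can be sketched as follows; the function name and the fixed seed are our own illustrative choices.

```python
import random

def split_433(pairs, seed=0):
    """Shuffle the corpus and split it in the 4:3:3 ratio used for
    training, validation, and test (1040 / 780 / 780 out of 2600)."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)  # reproducible random selection
    n = len(pairs)
    a = n * 4 // 10
    b = a + n * 3 // 10
    return pairs[:a], pairs[a:b], pairs[b:]

train, val, test = split_433(range(2600))
```

Repeating this with 500 different seeds would reproduce the paper's protocol of averaging over 500 randomly selected data sets.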

7. Evaluations and Comparison

This paper proposes a new machine translation method which combines the advantages of a quantum neural network. 2600 sentences are used to analyze the effectiveness of the proposed MT system.

The performance of the proposed system is comparatively analyzed with Google Translation (http://translate.google.com) and Microsoft's Bing Translation (http://www.bing.com/translator) by using various MT evaluation methods such as BLEU, NIST, ROUGE-L, and METEOR. For evaluation purposes we translate the same set of input sentences using our proposed system, Google Translation, and Bing Translation and then evaluate the output obtained from each of


Step 1. Check whether the sentence is a complex sentence or a simple sentence.
    [Initialize] Set ICOUNT := 1, NCOUNT := 1, CCOUNT := 1; ILOC, NLOC, CLOC.
    Repeat for LOC = 1 to N:
        if SENTENCE[LOC] = "Interrogative word" then
            Set ILOC[ICOUNT] := LOC; ICOUNT := ICOUNT + 1.
        [End of if structure]
        if SENTENCE[LOC] = "Negative word" then
            Set NLOC[NCOUNT] := LOC; NCOUNT := NCOUNT + 1.
        [End of if structure]
        if SENTENCE[LOC] = "Conjunction" then
            Set CLOC[CCOUNT] := LOC; CCOUNT := CCOUNT + 1.
        [End of if structure]
    [End of for]
    if ILOC = NULL or NLOC = NULL or CLOC = NULL then go to Step 5.
Step 2. Remove the interrogative word from the complex sentence to make it affirmative.
    Repeat for X = 1 to ICOUNT:
        Set ITemp[X] := SENTENCE[ILOC[X]]; Set SENTENCE[ILOC[X]] := Null.
    [End of for]
Step 3. Then remove the negative word to make it a simple sentence.
    Repeat for Y = 1 to NCOUNT:
        Set NTemp[Y] := SENTENCE[NLOC[Y]]; Set SENTENCE[NLOC[Y]] := Null.
    [End of for]
Step 4. Split the sentence into two or more simple sentences on the basis of conjunctions.
    Repeat for Z = 1 to CCOUNT:
        Set CTemp[Z] := SENTENCE[CLOC[Z]]; Set SENTENCE[CLOC[Z]] := Hindi full stop ("|").
    [End of for]
Step 5. Pass each subsentence with TOKEN to the QNN based machine translator for repositioning.
Step 6. Refine the translated sentences by applying the grammar rules.
Step 7. Add the interrogative word if removed in Step 2.
    Repeat for X = 1 to ICOUNT:
        if ITemp[X] ≠ NULL then Set SENTENCE[ILOC[X]] := ITemp[X].
        [End of if structure]
    [End of for]
Step 8. Add the negative word if removed in Step 3.
    Repeat for Y = 1 to NCOUNT:
        if NTemp[Y] ≠ NULL then Set SENTENCE[NLOC[Y]] := NTemp[Y].
        [End of if structure]
    [End of for]
Step 9. Rejoin all the subsentences if split in Step 4.
Step 10. Semantic translation.
Step 11. Exit.

Algorithm 1
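A minimal executable sketch of Algorithm 1 follows. It is an illustration under stated assumptions: the English word lists and the `translate` placeholder are ours, and for brevity the removed words are restored at the front rather than at their original ILOC/NLOC positions as in Steps 7 and 8.

```python
# Illustrative word lists standing in for the algorithm's word-class tests.
INTERROGATIVES = {"what", "when", "where", "why", "how"}
NEGATIVES = {"not", "never"}
CONJUNCTIONS = {"and", "but", "or"}

def qnn_mts(sentence, translate):
    """Sketch of Algorithm 1: strip interrogative/negative words, split on
    conjunctions, translate each simple subsentence, then restore and
    rejoin. `translate` stands in for the QNN translator of Steps 5-6."""
    itemp, ntemp, kept = [], [], []
    for w in sentence:                       # Steps 1-3: locate and remove
        if w in INTERROGATIVES:
            itemp.append(w)
        elif w in NEGATIVES:
            ntemp.append(w)
        else:
            kept.append(w)
    subs, cur = [], []                       # Step 4: split on conjunctions
    for w in kept:
        if w in CONJUNCTIONS:
            subs.append(cur)
            cur = []
        else:
            cur.append(w)
    subs.append(cur)
    out = []                                 # Steps 5-6 and 9: translate, rejoin
    for i, s in enumerate(subs):
        if i:
            out.append("|")                  # Hindi full stop marks the split
        out.extend(translate(s))
    return itemp + ntemp + out               # Steps 7-8, simplified restore

result = qnn_mts("why did he not go and she stay".split(), lambda s: s)
```

With an identity "translator", a complex sentence is decomposed and reassembled around the conjunction while the interrogative and negative words survive the round trip.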

the systems. The fluency check is done by n-gram analysis using the reference translations.

7.1. BLEU. We have used BLEU (bilingual evaluation understudy) to calculate the score of the system output. BLEU is an IBM-developed metric which uses modified n-gram precision to compare the candidate translation against reference translations [16].

The comparative bar diagram between the proposed system, Google, and Bing on the BLEU scale is shown in Figure 3. The bar diagram clearly shows that the proposed system has a remarkably high accuracy of 0.7502 on the BLEU scale. Bing


Table 2: Comparison of performance measurement of the MT system with QNN and a traditional neural network.

S.No  Quantum interval (θ)        Epoch (iteration)  Training perf. (MSE)  Validation perf. (MSE)  Test perf. (MSE)
1     θ = 0 (equivalent to ANN)   2004.391           0.002027              0.002066                0.002075
2     0.25                        1536.727           0.000185              0.000205                0.000203
3     0.5                         1547.305           0.000248              0.000273                0.000273
4     0.75                        1528.343           0.000231              0.000248                0.000252
5     1*                          1548.303           0.00025               0.00027                 0.000272
6     1.25                        1567.066           0.000284              0.0003                  0.0003
7     1.5                         1564.271           0.000296              0.000318                0.000316
8     1.75                        1578.643           0.000349              0.000374                0.000375
9     2                           1568.663           0.000184              0.000204                0.000209
10    2.25                        1596.607           0.000249              0.000273                0.000273
11    2.5                         1627.345           0.000256              0.000281                0.000276
12    2.75                        1654.092           0.00022               0.000242                0.000242
13    3                           1649.301           0.000348              0.000361                0.00037
14    3.25                        1693.214           0.000192              0.000214                0.000216
15    3.5                         1776.248           0.000294              0.000315                0.000323
16    3.75                        1785.429           0.000185              0.000202                0.000207
17    4                           1872.056           0.000217              0.000237                0.00024

*For the value of θ = 1, the optimum trade-off of iterations and the minimum error (MSE) is achieved.

[Figure 3: Comparative bar diagram between the proposed system, Google, and Bing based on the BLEU scale (Bing 0.2626, Google 0.3501, proposed 0.7502).]

has shown an accuracy of 0.2626 and Google has shown 0.3501 on the BLEU scale.

To compute the score, calculate the n-gram resemblance by comparing the sentences; then add the clipped n-gram counts for all the candidate sentences and divide by the number of candidate n-grams in the test set to obtain the precision score p_n for the whole test set:

p_n = Σ_{C ∈ Candidates} Σ_{n-gram ∈ C} Count_clip(n-gram) / Σ_{C′ ∈ Candidates} Σ_{n-gram′ ∈ C′} Count(n-gram′)    (3)

where Count_clip = min(Count, Max_Ref_Count); in other words, each word's count is truncated to its maximum count in any single reference.

[Figure 4: Comparative bar diagram between the proposed system, Google, and Bing based on the NIST scale (Bing 4.1744, Google 4.955, proposed 6.5773).]

Here c denotes the length of the candidate translation and r denotes the reference sentence length. Then calculate the brevity penalty BP:

BP = 1 if c > r;  BP = e^{(1 − r/c)} if c ≤ r.    (4)

Then

BLEU = BP · exp( Σ_{n=1}^{N} w_n log p_n ).    (5)
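Eqs. (3)-(5) can be sketched compactly for the simplified case of a single candidate against a single reference, with uniform weights w_n = 1/N; this is an illustration, not the evaluation script used in the paper.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Single-candidate, single-reference BLEU: clipped n-gram
    precisions p_n (Eq. 3), brevity penalty BP (Eq. 4), and the
    weighted geometric mean of Eq. (5) with w_n = 1/N."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        clipped = sum(min(c, ref[g]) for g, c in cand.items())  # Count_clip
        total = max(sum(cand.values()), 1)
        precisions.append(clipped / total)
    if any(p == 0 for p in precisions):
        return 0.0  # log is undefined; real implementations smooth instead
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else math.exp(1 - r / c)
    return bp * math.exp(sum(math.log(p) / max_n for p in precisions))

score = bleu("the cat is on the mat".split(), "the cat is on the mat".split())
```

A perfect match yields 1.0; any missing 4-gram drives this unsmoothed sketch to 0, which is why production scorers apply smoothing over the test set rather than per sentence.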

7.2. NIST. Proposed by NIST (the National Institute of Standards and Technology), this metric reduces the effect of longer N-grams by using the arithmetic mean over N-gram counts instead of


[Figure 5: Comparative bar diagram among the proposed system, Google, and Bing based on the ROUGE-L scale (Bing 0.6475, Google 0.7189, proposed 0.9233).]

the geometric mean of co-occurrences over N [17]. Figure 4 shows the comparative bar diagram between the proposed system, Google, and Bing on the NIST scale. The bar diagram clearly shows that the proposed system has a remarkably high accuracy of 6.5773 on the NIST scale; Bing has shown an accuracy of 4.1744 and Google has shown 4.955 on the NIST scale.

NIST_score = BP_NIST × PRECISION_NIST

PRECISION_NIST = Σ_{n=1}^{N} [ Σ_{all w_1…w_n that co-occur} Info(w_1…w_n) / Σ_{all w_1…w_n in hypo} 1 ]    (6)

where

Info(w_1…w_n) = −log_2( Count(w_1…w_n) / Count(w_1…w_{n−1}) )    (7)

Here Info gives more weight to the words that are difficult to predict, and the counts are computed over the full set of references; theoretically, the precision has no upper limit.

BP_NIST = exp{ β · log²[ min(Len_Hypo / Len_Ref, 1) ] }    (8)

where Len_Hypo is the total length of the hypothesis and Len_Ref is the average length of all references, which does not depend on the hypothesis.
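The information weight of Eq. (7) can be sketched as follows for a single reference; the function name is ours, and real NIST scoring aggregates counts over the full reference set.

```python
import math
from collections import Counter

def info_weights(reference, max_n=5):
    """Eq. (7): Info(w1..wn) = log2(count(w1..w_{n-1}) / count(w1..wn)),
    so n-grams whose final word is hard to predict get more weight."""
    toks = list(reference)
    counts = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(toks) - n + 1):
            counts[tuple(toks[i:i + n])] += 1
    total = len(toks)
    info = {}
    for gram, c in counts.items():
        # For unigrams the "(n-1)-gram" count is the total token count.
        prev = total if len(gram) == 1 else counts[gram[:-1]]
        info[gram] = math.log2(prev / c)
    return info

w = info_weights("the cat sat on the mat".split())
```

In this reference, "the" occurs twice out of six tokens, so Info("the") = log2(6/2) = log2(3), while the bigram "the cat" occurs once after two occurrences of "the", so Info("the cat") = log2(2/1) = 1.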

7.3. ROUGE-L. ROUGE-L (recall-oriented understudy for gisting evaluation, longest common subsequence) calculates sentence-to-sentence resemblance using the longest common subsequence between the candidate translation and the reference translations; the longest common subsequence represents the similarity between the two translations. F_lcs calculates the resemblance between two translations X of length m and Y of length n, where X denotes the reference translation and Y denotes the candidate translation [18]. The comparative bar diagram between the proposed system, Google, and Bing on the ROUGE-L scale is shown in Figure 5. The bar diagram clearly shows that the proposed

[Figure 6: Comparative bar diagram among the proposed system, Google, and Bing based on the METEOR scale (Bing 0.1384, Google 0.2021, proposed 0.5456).]

system has a remarkably high accuracy of 0.9233 on the ROUGE-L scale; Bing has shown an accuracy of 0.6475 and Google has shown 0.7189 on the ROUGE-L scale.

R_lcs = LCS(X, Y) / m

P_lcs = LCS(X, Y) / n

F_lcs = (1 + β²) R_lcs P_lcs / (R_lcs + β² P_lcs)    (9)

where P_lcs is precision, R_lcs is recall, LCS(X, Y) denotes the longest common subsequence of X and Y, and β = P_lcs / R_lcs when ∂F_lcs/∂R_lcs = ∂F_lcs/∂P_lcs.

Rouge-L = Harmonic Mean(P_lcs, R_lcs) = (2 × P_lcs × R_lcs) / (P_lcs + R_lcs)    (10)

7.4. METEOR. METEOR (metric for evaluation of translation with explicit ordering) was developed at Carnegie Mellon University. Figure 6 shows the comparative bar diagram between the proposed system, Google, and Bing on the METEOR scale. The bar diagram clearly shows that the proposed system has a remarkably high accuracy of 0.5456 on the METEOR scale; Bing has shown an accuracy of 0.1384 and Google has shown 0.2021 on the METEOR scale.

METEOR uses the weighted harmonic mean of unigram precision (P = m/w_t) and unigram recall (R = m/w_r). Here m denotes the number of unigram matches, w_t the number of unigrams in the candidate translation, and w_r the number of unigrams in the reference translation. F_mean is calculated by combining recall and precision via a harmonic mean that places equal weight on precision and recall: F_mean = 2PR / (P + R).

This measure accounts for congruity with respect to single words, but to take longer n-gram matches into account a penalty p is calculated for the alignment: p = 0.5 (c / u_m)³. Here c denotes the number of chunks and u_m denotes the number of unigrams that have been mapped [19].


The final METEOR score (M-score) is calculated as follows:

Meteor-score = F_mean (1 − p).    (11)

Experiments confirm that the accuracy achieved for machine translation based on the quantum neural network is better than that of the other bilingual translation methods.
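A unigram-level sketch of F_mean, the penalty p, and Eq. (11) follows. It is a simplification of real METEOR: matching is exact (no stemming or synonymy), and a chunk is approximated as a maximal run of adjacent matched candidate words without checking adjacency in the reference.

```python
def meteor_score(candidate, reference):
    """Sketch of Eq. (11): harmonic-mean F of unigram precision and
    recall, discounted by the penalty p = 0.5 * (chunks / matches)^3."""
    ref_set = set(reference)
    matched = [w in ref_set for w in candidate]
    m = sum(matched)
    if m == 0:
        return 0.0
    p_uni, r_uni = m / len(candidate), m / len(reference)
    f_mean = 2 * p_uni * r_uni / (p_uni + r_uni)
    # Count chunks: maximal runs of adjacent matched unigrams
    # (simplified; reference-side adjacency is not verified here).
    chunks = sum(1 for i, hit in enumerate(matched)
                 if hit and (i == 0 or not matched[i - 1]))
    penalty = 0.5 * (chunks / m) ** 3
    return f_mean * (1 - penalty)

s = meteor_score("the cat sat on the mat".split(),
                 "the cat sat on the mat".split())
```

Even an exact match is scored slightly below 1 because the single chunk still incurs the small penalty 0.5 · (1/m)³, which is the intended behavior of the fragmentation term.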

8. Conclusion

In this work we have presented a quantum neural network approach to the problem of machine translation. It has demonstrated reasonable accuracy on various scores: the BLEU score achieved is 0.7502, the NIST score 6.5773, the ROUGE-L score 0.9233, and the METEOR score 0.5456. The accuracy of the proposed system is significantly higher in comparison with Google Translation, Bing Translation, and other existing approaches for Hindi to English machine translation. The accuracy of the system has been improved significantly by incorporating techniques for handling unknown words using the QNN. It is also shown above that it requires less training time than neural network based MT systems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] L. Rodríguez, I. García-Varea, and J. A. Gámez, "On the application of different evolutionary algorithms to the alignment problem in statistical machine translation," Neurocomputing, vol. 71, pp. 755–765, 2008.

[2] S. Ananthakrishnan, R. Prasad, D. Stallard, and P. Natarajan, "Batch-mode semi-supervised active learning for statistical machine translation," Computer Speech & Language, vol. 27, pp. 397–406, 2013.

[3] D. Ortiz-Martínez, I. García-Varea, and F. Casacuberta, "The scaling problem in the pattern recognition approach to machine translation," Pattern Recognition Letters, vol. 29, pp. 1145–1153, 2008.

[4] J. Andrés-Ferrer, D. Ortiz-Martínez, I. García-Varea, and F. Casacuberta, "On the use of different loss functions in statistical pattern recognition applied to machine translation," Pattern Recognition Letters, vol. 29, no. 8, pp. 1072–1081, 2008.

[5] M. Khalilov and J. A. R. Fonollosa, "Syntax-based reordering for statistical machine translation," Computer Speech & Language, vol. 25, no. 4, pp. 761–788, 2011.

[6] R. M. K. Sinha, "An engineering perspective of machine translation: AnglaBharti-II and AnuBharti-II architectures," in Proceedings of the International Symposium on Machine Translation, NLP and Translation Support System (iSTRANS '04), pp. 134–138, Tata McGraw-Hill, New Delhi, India, 2004.

[7] R. M. K. Sinha and A. Thakur, "Machine translation of bi-lingual Hindi-English (Hinglish)," in Proceedings of the 10th Machine Translation Summit, pp. 149–156, Phuket, Thailand, 2005.

[8] R. Ananthakrishnan, M. Kavitha, J. H. Jayprasad, et al., "MaTra: a practical approach to fully-automatic indicative English-Hindi machine translation," in Proceedings of the Symposium on Modeling and Shallow Parsing of Indian Languages (MSPIL '06), 2006.

[9] S. Raman and N. R. K. Reddy, "A transputer-based parallel machine translation system for Indian languages," Microprocessors and Microsystems, vol. 20, no. 6, pp. 373–383, 1997.

[10] A. Chandola and A. Mahalanobis, "Ordered rules for full sentence translation: a neural network realization and a case study for Hindi and English," Pattern Recognition, vol. 27, no. 4, pp. 515–521, 1994.

[11] N. B. Karayiannis and G. Purushothaman, "Fuzzy pattern classification using feed-forward neural networks with multilevel hidden neurons," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1577–1582, Orlando, Fla, USA, June 1994.

[12] G. Purushothaman and N. B. Karayiannis, "Quantum Neural Networks (QNN's): inherently fuzzy feedforward neural networks," IEEE Transactions on Neural Networks, vol. 8, no. 3, pp. 679–693, 1997.

[13] R. Kretzschmar, R. Bueler, N. B. Karayiannis, and F. Eggimann, "Quantum neural networks versus conventional feedforward neural networks: an experimental study," in Proceedings of the 10th IEEE Workshop on Neural Networks for Signal Processing (NNSP '00), pp. 328–337, December 2000.

[14] Z. Daqi and W. Rushi, "A multi-layer quantum neural networks recognition system for handwritten digital recognition," in Proceedings of the 3rd International Conference on Natural Computation (ICNC '07), pp. 718–722, Hainan, China, August 2007.

[15] T. Brants, A. C. Popat, P. Xu, F. J. Och, and J. Dean, "Large language models in machine translation," in Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pp. 858–867, Prague, Czech Republic, June 2007.

[16] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, "BLEU: a method for automatic evaluation of machine translation," in Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL '02), pp. 311–318, Philadelphia, Pa, USA, 2002.

[17] G. Doddington, "Automatic evaluation of machine translation quality using n-gram co-occurrence statistics," in Proceedings of the 2nd International Conference on Human Language Technology Research, 2002.

[18] C.-Y. Lin, "ROUGE: a package for automatic evaluation of summaries," in Proceedings of the Workshop on Text Summarization Branches Out, Barcelona, Spain, 2004.

[19] S. Banerjee and A. Lavie, "METEOR: an automatic metric for MT evaluation with improved correlation with human judgments," in Proceedings of the Workshop on Intrinsic and Extrinsic Evaluation Measures for MT, Association for Computational Linguistics (ACL), Ann Arbor, Mich, USA, 2005.


Page 3: Research Article Quantum Neural Network Based Machine Translator for Hindi to English · 2019. 7. 31. · Research Article Quantum Neural Network Based Machine Translator for Hindi

The Scientific World Journal 3

Figure 1: Architecture of the MT system model (components: tokenizer, rule-based POS tagger, QNN-based POS tagger, lexicon, QNN-based tag classifier, rule-based grammar analysis, QNN- and rule-based sentence mapping, and semantic translation; inputs: English sentence and tagged sentences for training; output: Hindi sentence).

Figure 2: Architecture of the quantum neural network: an input layer (inputs n_1, ..., n_i), a hidden layer whose multilevel neurons take normal and excited states separated by quantum intervals +θ and −θ, and an output layer (outputs n_1, ..., n_k).

level r, where n_s denotes the number of grades in the quantum activation functions.

5. Quantum Neural Implementation of Translation Rules

The strategy is to first identify and tag the parts of speech using Table 1 and then translate the English (source language) sentences literally into Devanagari-Hindi (target language) with no rearrangement of words. After this syntactic translation, the words are rearranged for an accurate translation that retains the sense of the translated sentence. The rules are based on parts of speech, not on meaning. To facilitate the procedure, distinctive three-digit codes based on the parts of speech are assigned, as shown in Table 1.

For the special case when the input sentence and the resulting sentence have unequal numbers of words, the dummy numeric code 000 is used to give a similar word alignment.


Table 1: Numeric codes for parts of speech.

| Parts of speech (subclass) | Numeric code |
| Prenoun (PN) | 100 |
| Noun-infinitive (Ni) | 101 |
| Pronoun (PRO) | 102 |
| Gerund (GER) | 103 |
| Relative pronoun (RPRO) | 104 |
| Postnoun (POSTN) | 105 |
| Verb (V) | 110 |
| Helping verb (HV) | 111 |
| Adverb (ADV) | 112 |
| Auxiliary verb (AUX) | 113 |
| Interrogative (question word) (INT) | 120 |
| Demonstrative words (DEM) | 121 |
| Quantifier (QUAN) | 122 |
| Article (A) | 123 |
| Adjective (ADJ) | 130 |
| Adjective-particle (ADJP) | 131 |
| Number (N) | 132 |
| Preposition (PRE) | 140 |
| Postposition (POST) | 141 |
| Punctuation (PUNC) | 150 |
| Conjunction (CONJ) | 160 |
| Interjection (INTER) | 170 |
| Negative word (NE) | 180 |
| Determiner (D) | 190 |
| Idiom (I) | 200 |
| Phrases (P) | 210 |
| Unknown words (UW) | 220 |

Case 1. The input sentence and the resulting sentence have unequal numbers of words.

The coded version of the sentence is thus:

Ram will not go to the market
100 111 220 110 140 123 102

Therefore, the input numeral sequence is [100 111 220 110 140 123 102] and the corresponding output is [100 102 220 110 111 000 000]. The outcome of the neural network might not be a perfect integer; it should be rounded off, and a few basic error adjustments might be needed to obtain the output numeral codes. The network is also expected to rearrange the locations of the three-digit codes. In this way it learns the target-language knowledge needed for semantic rearrangement; it also helps in parts-of-speech tagging by pattern matching, and it learns the grammar rules up to a level. For handling complex sentences, an algorithm is used. The algorithm first removes the interrogative and negative words; on the basis of conjunctions, the system then breaks up and converts the complex sentence into two or more small simple sentences. After the translation of each simple sentence, the system rejoins all the subsentences and adds back the removed interrogative and negative words. The whole process is explained in Algorithm 1 in the next section.
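The coding-and-padding scheme described above can be sketched as follows; the code-table excerpt and helper names are illustrative, not the paper's implementation:

```python
# Sketch of the three-digit part-of-speech coding (Table 1) and the
# dummy-code padding used to align source and target sequences.
# POS_CODES holds only the codes needed for this example.

POS_CODES = {"PN": "100", "PRO": "102", "V": "110", "HV": "111",
             "A": "123", "PRE": "140", "UW": "220"}

def encode(tags):
    """Map a sequence of POS tags to their numeric codes."""
    return [POS_CODES[t] for t in tags]

def pad_to(codes, length):
    """Append the dummy code 000 until the sequence reaches `length`."""
    return codes + ["000"] * (length - len(codes))

# "Ram will not go to the market", tagged as in the paper's example
src = encode(["PN", "HV", "UW", "V", "PRE", "A", "PRO"])
tgt = pad_to(["100", "102", "220", "110", "111"], len(src))
print(src)  # ['100', '111', '220', '110', '140', '123', '102']
print(tgt)  # ['100', '102', '220', '110', '111', '000', '000']
```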

5.1. Algorithm for the Proposed QNN-Based MT System for Complex Sentences

QNNMTS(SENTENCE, TOKEN, N, LOC). Here SENTENCE is an array with N elements containing Hindi words. Parameter TOKEN contains the token of each word, and LOC keeps track of position. ICOUNT contains the number of interrogative words encountered in the sentence, NCOUNT the number of negative words encountered in the sentence, and CCOUNT the number of conjunction words encountered in the sentence (see Algorithm 1).

6. Experiments and Results

All words in each language are assigned a unique numeric code on the basis of their respective parts of speech. Experiments show that memorization of the training data occurs. The results in this section are achieved after training with 2600 Devanagari-Hindi sentences and their English translations. For each value of the quantum interval (θ), 500 tests are performed with random data sets selected from the 2600 English sentences and their Devanagari-Hindi translations; the data set is divided in a 4:3:3 ratio for training, validation, and test, respectively. The values in Table 2 are the averages of the 500 tests performed for each value of the quantum interval (θ). The best performance is obtained for a quantum interval (θ) equal to one with respect to all the parameters, that is, the epochs (iterations) needed to train the network and the training, validation, and test performance in terms of mean square error (MSE). This clearly shows that the QNN at θ equal to one is far more efficient than the classical artificial neural network, which corresponds to θ equal to zero. Table 2 compares the performance of the QNN with the ANN with respect to the above parameters; as a result, we can conclude that the QNN is better than the ANN for machine translation.
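The 4:3:3 random split described above can be sketched as follows (integer stand-ins replace the actual sentence pairs; the seed is our assumption for repeatability):

```python
import random

# Illustrative 4:3:3 split of the 2600 sentence pairs into training,
# validation, and test sets; integers stand in for the actual pairs.
pairs = list(range(2600))
random.Random(0).shuffle(pairs)      # random selection, seeded for repeatability

n_train = 2600 * 4 // 10             # 1040 pairs for training
n_val = 2600 * 3 // 10               # 780 pairs for validation
train = pairs[:n_train]
val = pairs[n_train:n_train + n_val]
test = pairs[n_train + n_val:]       # remaining 780 pairs for test
print(len(train), len(val), len(test))  # 1040 780 780
```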

7. Evaluations and Comparison

This paper proposes a new machine translation method that combines the advantages of the quantum neural network. 2600 sentences are used to analyze the effectiveness of the proposed MT system.

The performance of the proposed system is comparatively analyzed against Google Translation (http://translate.google.com) and Microsoft's Bing Translation (http://www.bing.com/translator) using various MT evaluation methods: BLEU, NIST, ROUGE-L, and METEOR. For evaluation purposes, we translate the same set of input sentences with the proposed system, Google Translation, and Bing Translation and then evaluate the output obtained from each of


Step 1. Check whether the sentence is a complex or a simple sentence.
    [Initialize] Set ICOUNT := 1, NCOUNT := 1, CCOUNT := 1; ILOC, NLOC, CLOC.
    Repeat for LOC = 1 to N:
        if SENTENCE[LOC] = "Interrogative word" then
            Set ILOC[ICOUNT] := LOC; ICOUNT := ICOUNT + 1.
        [End of if structure]
        if SENTENCE[LOC] = "Negative word" then
            Set NLOC[NCOUNT] := LOC; NCOUNT := NCOUNT + 1.
        [End of if structure]
        if SENTENCE[LOC] = "Conjunction" then
            Set CLOC[CCOUNT] := LOC; CCOUNT := CCOUNT + 1.
        [End of if structure]
    [End of for]
    if ILOC = NULL or NLOC = NULL or CLOC = NULL then go to Step 5.
Step 2. Remove the interrogative word(s) from the complex sentence to make it affirmative.
    Repeat for X = 1 to ICOUNT:
        Set ITemp[X] := SENTENCE[ILOC[X]]; Set SENTENCE[ILOC[X]] := Null.
    [End of for]
Step 3. Remove the negative word(s) to make it a simple sentence.
    Repeat for Y = 1 to NCOUNT:
        Set NTemp[Y] := SENTENCE[NLOC[Y]]; Set SENTENCE[NLOC[Y]] := Null.
    [End of for]
Step 4. Split the sentence into two or more simple sentences on the basis of conjunctions.
    Repeat for Z = 1 to CCOUNT:
        Set CTemp[Z] := SENTENCE[CLOC[Z]]; Set SENTENCE[CLOC[Z]] := Hindi full stop ("|").
    [End of for]
Step 5. Pass each subsentence with TOKEN to the QNN-based machine translator for repositioning.
Step 6. Refine the translated sentences by applying the grammar rules.
Step 7. Add back the interrogative word(s) removed in Step 2.
    Repeat for X = 1 to ICOUNT:
        if ITemp[X] ≠ NULL then Set SENTENCE[ILOC[X]] := ITemp[X]. [End of if structure]
    [End of for]
Step 8. Add back the negative word(s) removed in Step 3.
    Repeat for Y = 1 to NCOUNT:
        if NTemp[Y] ≠ NULL then Set SENTENCE[NLOC[Y]] := NTemp[Y]. [End of if structure]
    [End of for]
Step 9. Rejoin the subsentences split in Step 4.
Step 10. Perform the semantic translation.
Step 11. Exit.

Algorithm 1.
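Algorithm 1 can be sketched in Python as follows; the word sets and the identity translator are stand-ins for the paper's lexicon and QNN translator, and for brevity the removed words are re-added at the front rather than at their recorded positions:

```python
# Minimal sketch of Algorithm 1: remove interrogative and negative
# words, split on conjunctions into simple sub-sentences, translate
# each one, then rejoin and restore the removed words. The word sets
# and the identity translator are illustrative stand-ins.

INTERROGATIVES = {"what", "why", "when"}
NEGATIVES = {"not", "never"}
CONJUNCTIONS = {"and", "but"}

def qnn_mts(words, translate_simple):
    removed = [w for w in words if w in INTERROGATIVES | NEGATIVES]
    kept = [w for w in words if w not in INTERROGATIVES | NEGATIVES]
    subs, current = [], []           # Step 4: split on conjunctions
    for w in kept:
        if w in CONJUNCTIONS:
            subs.append(current)
            current = []
        else:
            current.append(w)
    subs.append(current)
    # Steps 5-9: translate each sub-sentence, rejoin, and re-add the
    # removed words (placed up front here for simplicity)
    translated = [translate_simple(s) for s in subs]
    return removed + [w for s in translated for w in s]

out = qnn_mts("why did ram not go and sita stay".split(), lambda s: s)
print(out)  # ['why', 'not', 'did', 'ram', 'go', 'sita', 'stay']
```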

the systems. The fluency check is done by n-gram analysis using the reference translations.

7.1. BLEU. We have used BLEU (bilingual evaluation understudy) to calculate the score of the system output. BLEU is an IBM-developed metric that uses modified n-gram precision to compare the candidate translation against reference translations [16].

The comparative bar diagram of the proposed system, Google, and Bing on the BLEU scale is shown in Figure 3. The bar diagram clearly shows that the proposed system has a remarkably high accuracy of 0.7502 on the BLEU scale; Bing


Table 2: Comparison of the performance of the MT system with QNN and a traditional neural network.

| S. No. | Quantum interval (θ) | Epoch (iteration) | Training performance (MSE) | Validation performance (MSE) | Test performance (MSE) |
| 1 | θ = 0 (equivalent to ANN) | 2004391 | 0.002027 | 0.002066 | 0.002075 |
| 2 | 0.25 | 1536727 | 0.000185 | 0.000205 | 0.000203 |
| 3 | 0.5 | 1547305 | 0.000248 | 0.000273 | 0.000273 |
| 4 | 0.75 | 1528343 | 0.000231 | 0.000248 | 0.000252 |
| 5 | 1* | 1548303 | 0.00025 | 0.00027 | 0.000272 |
| 6 | 1.25 | 1567066 | 0.000284 | 0.0003 | 0.0003 |
| 7 | 1.5 | 1564271 | 0.000296 | 0.000318 | 0.000316 |
| 8 | 1.75 | 1578643 | 0.000349 | 0.000374 | 0.000375 |
| 9 | 2 | 1568663 | 0.000184 | 0.000204 | 0.000209 |
| 10 | 2.25 | 1596607 | 0.000249 | 0.000273 | 0.000273 |
| 11 | 2.5 | 1627345 | 0.000256 | 0.000281 | 0.000276 |
| 12 | 2.75 | 1654092 | 0.00022 | 0.000242 | 0.000242 |
| 13 | 3 | 1649301 | 0.000348 | 0.000361 | 0.00037 |
| 14 | 3.25 | 1693214 | 0.000192 | 0.000214 | 0.000216 |
| 15 | 3.5 | 1776248 | 0.000294 | 0.000315 | 0.000323 |
| 16 | 3.75 | 1785429 | 0.000185 | 0.000202 | 0.000207 |
| 17 | 4 | 1872056 | 0.000217 | 0.000237 | 0.00024 |

*For the value θ = 1, the optimum trade-off between iterations and minimum error (MSE) is achieved.

Figure 3: Comparative bar diagram of the proposed system, Google, and Bing on the BLEU scale (Bing 0.2626, Google 0.3501, proposed 0.7502).

has shown an accuracy of 0.2626 and Google an accuracy of 0.3501 on the BLEU scale.

The n-gram resemblance is calculated by comparing the sentences: the clipped n-gram counts for all candidate sentences are added and divided by the number of candidate n-grams in the test sentences to give the precision score p_n for the whole test set:

$$p_n = \frac{\sum_{C \in \{\mathrm{Candidates}\}} \sum_{n\text{-gram} \in C} \mathrm{Count}_{\mathrm{clip}}(n\text{-gram})}{\sum_{C' \in \{\mathrm{Candidates}\}} \sum_{n\text{-gram}' \in C'} \mathrm{Count}(n\text{-gram}')} \quad (3)$$

where Count_clip = min(Count, Max_Ref_Count); in other words, each word's count is truncated to the maximum count observed in any single reference.

Figure 4: Comparative bar diagram of the proposed system, Google, and Bing on the NIST scale (Bing 4.1744, Google 4.955, proposed 6.5773).

Here c denotes the length of the candidate translation and r denotes the reference sentence length. The brevity penalty BP is then calculated as

$$\mathrm{BP} = \begin{cases} 1 & \text{if } c > r \\ e^{(1 - r/c)} & \text{if } c \le r \end{cases} \quad (4)$$

Then

$$\mathrm{BLEU} = \mathrm{BP} \cdot \exp\left(\sum_{n=1}^{N} w_n \log p_n\right) \quad (5)$$
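Equations (3)-(5) can be sketched in Python as follows; the helper names and the uniform weights w_n = 1/N are our assumptions, and the effective reference length r is taken as the reference length closest to the candidate length:

```python
import math
from collections import Counter

# Sketch of BLEU per equations (3)-(5): clipped n-gram precision,
# brevity penalty, and the weighted log-average of the precisions.

def ngrams(words, n):
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def modified_precision(cand, refs, n):
    cand_counts = ngrams(cand, n)
    clipped = 0
    for g, c in cand_counts.items():
        max_ref = max(ngrams(r, n)[g] for r in refs)  # Max_Ref_Count
        clipped += min(c, max_ref)                    # Count_clip
    return clipped / max(sum(cand_counts.values()), 1)

def bleu(cand, refs, N=2):
    c = len(cand)
    r = min((len(ref) for ref in refs), key=lambda L: abs(L - c))
    bp = 1.0 if c > r else math.exp(1 - r / c)        # equation (4)
    ps = [modified_precision(cand, refs, n) for n in range(1, N + 1)]
    if min(ps) == 0:
        return 0.0
    return bp * math.exp(sum(math.log(p) for p in ps) / N)  # equation (5)

cand = "ram will go to the market".split()
refs = ["ram will go to the market".split()]
print(round(bleu(cand, refs), 4))  # 1.0 for an exact match
```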

7.2. NIST. Proposed by NIST (National Institute of Standards and Technology), this metric reduces the effect of longer N-grams by using the arithmetic mean over N-gram counts instead of


Figure 5: Comparative bar diagram of the proposed system, Google, and Bing on the ROUGE-L scale (Bing 0.6475, Google 0.7189, proposed 0.9233).

the geometric mean of co-occurrences over N [17]. Figure 4 shows the comparative bar diagram of the proposed system, Google, and Bing on the NIST scale. The bar diagram clearly shows that the proposed system has a remarkably high accuracy of 6.5773 on the NIST scale; Bing has shown an accuracy of 4.1744 and Google an accuracy of 4.955 on the NIST scale.

$$\mathrm{NIST}_{\mathrm{score}} = \mathrm{BP}_{\mathrm{NIST}} \times \mathrm{PRECISION}_{\mathrm{NIST}}$$

$$\mathrm{PRECISION}_{\mathrm{NIST}} = \sum_{n=1}^{N} \frac{\sum_{\text{all } w_1 \cdots w_n \text{ that co-occur}} \mathrm{Info}(w_1 \cdots w_n)}{\sum_{\text{all } w_1 \cdots w_n \text{ in hypo}} 1} \quad (6)$$

where

$$\mathrm{Info}(w_1 \cdots w_n) = -\log_2\left(\frac{\mathrm{Count}(w_1 \cdots w_n)}{\mathrm{Count}(w_1 \cdots w_{n-1})}\right) \quad (7)$$

Here Info gives more weight to words that are difficult to predict, and the counts are computed over the full set of references; theoretically, the precision has no upper limit.

$$\mathrm{BP}_{\mathrm{NIST}} = \exp\left\{\beta \cdot \log^2\left[\min\left(\frac{\mathrm{Len}_{\mathrm{Hypo}}}{\mathrm{Len}_{\mathrm{Ref}}},\, 1\right)\right]\right\} \quad (8)$$

where Len_Hypo is the total length of the hypothesis and Len_Ref is the average length of all references, which does not depend on the hypothesis.
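The Info weight of equation (7) can be illustrated over a toy reference set (the two reference sentences below are invented examples):

```python
import math

# Sketch of the Info weight of equation (7): an n-gram that is hard to
# predict from its (n-1)-gram prefix gets a larger weight. Counts are
# taken over the reference set.

refs = ["ram will go to the market".split(),
        "ram will go home".split()]

def count(ngram):
    """Occurrences of `ngram` (a tuple of words) across all references."""
    if not ngram:                    # empty prefix: total word count
        return sum(len(r) for r in refs)
    return sum(1 for r in refs
               for i in range(len(r) - len(ngram) + 1)
               if tuple(r[i:i + len(ngram)]) == ngram)

def info(ngram):
    # Info(w1..wn) = -log2(Count(w1..wn) / Count(w1..wn-1))
    return -math.log2(count(ngram) / count(ngram[:-1]))

print(info(("ram", "will")))  # 0.0: "will" always follows "ram"
print(info(("go", "to")))     # 1.0: "to" follows "go" in half the cases
```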

7.3. ROUGE-L. ROUGE-L (recall-oriented understudy for gisting evaluation, longest common subsequence) calculates the sentence-to-sentence resemblance using the longest common subsequence (LCS) between the candidate translation and the reference translations; the LCS represents the similarity between two translations. F_lcs calculates the resemblance between two translations, X of length m and Y of length n, where X denotes the reference translation and Y the candidate translation [18]. The comparative bar diagram of the proposed system, Google, and Bing on the ROUGE-L scale is shown in Figure 5. The bar diagram clearly shows that the proposed

Figure 6: Comparative bar diagram of the proposed system, Google, and Bing on the METEOR scale (Bing 0.1384, Google 0.2021, proposed 0.5456).

system has a remarkably high accuracy of 0.9233 on the ROUGE-L scale; Bing has shown an accuracy of 0.6475 and Google an accuracy of 0.7189 on the ROUGE-L scale.

$$R_{\mathrm{lcs}} = \frac{\mathrm{LCS}(X, Y)}{m}, \qquad P_{\mathrm{lcs}} = \frac{\mathrm{LCS}(X, Y)}{n}, \qquad F_{\mathrm{lcs}} = \frac{(1 + \beta^2)\, R_{\mathrm{lcs}} P_{\mathrm{lcs}}}{R_{\mathrm{lcs}} + \beta^2 P_{\mathrm{lcs}}} \quad (9)$$

where P_lcs is the precision, R_lcs is the recall, LCS(X, Y) denotes the length of the longest common subsequence of X and Y, and β = P_lcs/R_lcs when ∂F_lcs/∂R_lcs = ∂F_lcs/∂P_lcs. Then

$$\text{ROUGE-L} = \mathrm{HarmonicMean}(P_{\mathrm{lcs}}, R_{\mathrm{lcs}}) = \frac{2 P_{\mathrm{lcs}} R_{\mathrm{lcs}}}{P_{\mathrm{lcs}} + R_{\mathrm{lcs}}} \quad (10)$$

7.4. METEOR. METEOR (metric for evaluation of translation with explicit ordering) was developed at Carnegie Mellon University. Figure 6 shows the comparative bar diagram of the proposed system, Google, and Bing on the METEOR scale. The bar diagram clearly shows that the proposed system has a remarkably high accuracy of 0.5456 on the METEOR scale; Bing has shown an accuracy of 0.1384 and Google an accuracy of 0.2021 on the METEOR scale.

METEOR uses the weighted harmonic mean of the unigram precision (P = m/w_t) and the unigram recall (R = m/w_r). Here m denotes the number of unigram matches, w_t the number of unigrams in the candidate translation, and w_r the number of unigrams in the reference translation. F_mean is calculated by combining the recall and precision via a harmonic mean that places equal weight on precision and recall:

$$F_{\mathrm{mean}} = \frac{2PR}{P + R}$$

This measure accounts for congruity with respect to single words; to account for longer n-gram matches, a penalty p is calculated for the alignment as

$$p = 0.5 \left(\frac{c}{u_m}\right)^3$$

where c denotes the number of chunks and u_m denotes the number of unigrams that have been mapped [19].


The final METEOR score (M-score) is calculated as

$$\text{METEOR-score} = F_{\mathrm{mean}} (1 - p) \quad (11)$$
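The METEOR computation per equation (11) can be sketched as follows (the match, length, and chunk counts below are invented for illustration):

```python
# Sketch of the METEOR score per equation (11): harmonic mean of
# unigram precision and recall, discounted by the chunk penalty.

def meteor(matches, cand_len, ref_len, chunks):
    p = matches / cand_len            # unigram precision, m / w_t
    r = matches / ref_len             # unigram recall, m / w_r
    f_mean = 2 * p * r / (p + r)
    penalty = 0.5 * (chunks / matches) ** 3
    return f_mean * (1 - penalty)     # equation (11)

# 5 matched unigrams forming a single contiguous chunk
print(round(meteor(matches=5, cand_len=5, ref_len=6, chunks=1), 4))  # 0.9055
```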

Experiments confirm that the accuracy achieved by the quantum neural network based machine translation is better than that of the other bilingual translation methods.

8. Conclusion

In this work we have presented a quantum neural network approach to the problem of machine translation. It demonstrates reasonable accuracy on various scores: the BLEU score achieved is 0.7502, the NIST score 6.5773, the ROUGE-L score 0.9233, and the METEOR score 0.5456. The accuracy of the proposed system is significantly higher than that of Google Translation, Bing Translation, and other existing approaches for Hindi to English machine translation. The accuracy of the system has been improved significantly by incorporating techniques for handling unknown words using the QNN. As shown above, it also requires less training time than neural network based MT systems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] L. Rodríguez, I. García-Varea, and J. A. Gámez, "On the application of different evolutionary algorithms to the alignment problem in statistical machine translation," Neurocomputing, vol. 71, pp. 755–765, 2008.

[2] S. Ananthakrishnan, R. Prasad, D. Stallard, and P. Natarajan, "Batch-mode semi-supervised active learning for statistical machine translation," Computer Speech & Language, vol. 27, pp. 397–406, 2013.

[3] D. Ortiz-Martínez, I. García-Varea, and F. Casacuberta, "The scaling problem in the pattern recognition approach to machine translation," Pattern Recognition Letters, vol. 29, pp. 1145–1153, 2008.

[4] J. Andrés-Ferrer, D. Ortiz-Martínez, I. García-Varea, and F. Casacuberta, "On the use of different loss functions in statistical pattern recognition applied to machine translation," Pattern Recognition Letters, vol. 29, no. 8, pp. 1072–1081, 2008.

[5] M. Khalilov and J. A. R. Fonollosa, "Syntax-based reordering for statistical machine translation," Computer Speech & Language, vol. 25, no. 4, pp. 761–788, 2011.

[6] R. M. K. Sinha, "An engineering perspective of machine translation: AnglaBharti-II and AnuBharti-II architectures," in Proceedings of the International Symposium on Machine Translation, NLP and Translation Support System (iSTRANS '04), pp. 134–138, Tata McGraw-Hill, New Delhi, India, 2004.

[7] R. M. K. Sinha and A. Thakur, "Machine translation of bi-lingual Hindi-English (Hinglish)," in Proceedings of the 10th Machine Translation Summit, pp. 149–156, Phuket, Thailand, 2005.

[8] R. Ananthakrishnan, M. Kavitha, J. H. Jayprasad et al., "MaTra: a practical approach to fully-automatic indicative English-Hindi machine translation," in Proceedings of the Symposium on Modeling and Shallow Parsing of Indian Languages (MSPIL '06), 2006.

[9] S. Raman and N. R. K. Reddy, "A transputer-based parallel machine translation system for Indian languages," Microprocessors and Microsystems, vol. 20, no. 6, pp. 373–383, 1997.

[10] A. Chandola and A. Mahalanobis, "Ordered rules for full sentence translation: a neural network realization and a case study for Hindi and English," Pattern Recognition, vol. 27, no. 4, pp. 515–521, 1994.

[11] N. B. Karayiannis and G. Purushothaman, "Fuzzy pattern classification using feed-forward neural networks with multilevel hidden neurons," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1577–1582, Orlando, Fla, USA, June 1994.

[12] G. Purushothaman and N. B. Karayiannis, "Quantum Neural Networks (QNN's): inherently fuzzy feedforward neural networks," IEEE Transactions on Neural Networks, vol. 8, no. 3, pp. 679–693, 1997.

[13] R. Kretzschmar, R. Bueler, N. B. Karayiannis, and F. Eggimann, "Quantum neural networks versus conventional feedforward neural networks: an experimental study," in Proceedings of the 10th IEEE Workshop on Neural Networks for Signal Processing (NNSP '00), pp. 328–337, December 2000.

[14] Z. Daqi and W. Rushi, "A multi-layer quantum neural networks recognition system for handwritten digital recognition," in Proceedings of the 3rd International Conference on Natural Computation (ICNC '07), pp. 718–722, Hainan, China, August 2007.

[15] T. Brants, A. C. Popat, P. Xu, F. J. Och, and J. Dean, "Large language models in machine translation," in Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pp. 858–867, Prague, Czech Republic, June 2007.

[16] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, "BLEU: a method for automatic evaluation of machine translation," in Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL '02), pp. 311–318, Philadelphia, Pa, USA, 2002.

[17] G. Doddington, "Automatic evaluation of machine translation quality using n-gram co-occurrence statistics," in Proceedings of the 2nd International Conference on Human Language Technology Research, 2002.

[18] C.-Y. Lin, "ROUGE: a package for automatic evaluation of summaries," in Proceedings of the Workshop on Text Summarization Branches Out, Barcelona, Spain, 2004.

[19] S. Banerjee and A. Lavie, "METEOR: an automatic metric for MT evaluation with improved correlation with human judgments," in Proceedings of the Workshop on Intrinsic and Extrinsic Evaluation Measures for MT, Association for Computational Linguistics (ACL), Ann Arbor, Mich, USA, 2005.



06

05

04

03

02

01

0

METEOR

BingGoogleProposed

01384

02021

05456

Figure 6 Comparative bar diagram among proposed systemGoogle and Bing based on METEOR scale

system has remarkably high accuracy of 09233 on ROUGE-L scale Bing has shown accuracy of 06475 and Google hasshown 07189 accuracy on ROUGE-L scale

119877lcs =LCS (119883 119884)

119898

119875lcs =LCS (119883 119884)

119899

119865lcs =(1 + 1205732) 119877lcs119875lcs

119877lcs + 1205732119875lcs

(9)

where 119875lcs is precision and 119877lcs is recall and LCS(119883 119884)denotes the longest common substring of 119883 and 119884 and 120573 =

119875lcs119877lcs when 120597119865lcs120597119877lcs = 120597119865lcs120597119875lcs

Rouge-L = Harmonic Mean (119875lcs 119877lcs) =(2 lowast 119875lcs lowast 119877lcs)

(119875lcs + 119877lcs)

(10)

74 METEOR METEOR (metric for evaluation of transla-tion with explicit ordering) is developed at Carnegie MellonUniversity Figure 6 shows comparative bar diagram betweenproposed system Google and Bing based onMETEOR scaleThe bar diagram clearly shows that the proposed system hasremarkably high accuracy of 05456 on METEOR scale Binghas shown accuracy of 01384 and Google has shown 02021accuracy on METEOR scale

The METEOR weighted harmonic mean of unigramprecision (119875 = 119898119908

119905) and unigram recall (119877 = 119898119908

119903) used

Here 119898 denotes unigram matches 119908119905denotes unigrams

in candidate translation and 119908119903is the reference translation

119865mean is calculated by combining the recall and precision viaa harmonic mean that places equal weight on precision andrecall as (119865mean = 2119875119877(119875 + 119877))

This measure is for congruity with respect to single wordbut for considering longer 119899-gram matches a penalty 119901 iscalculated for the alignment as (119901 = 05(119888119906

119898)3

)Here 119888 denotes the number of chunks and 119906

119898denotes the

number of unigrams that have been mapped [19]

8 The Scientific World Journal

Final METEOR-score (M-score) can be calculated asfollows

Meteor-score = 119865mean (1 minus 119901) (11)

Experiments confirm that the accuracy was achieved formachine translation based on quantum neural networkwhich is better than other bilingual translation methods

8 Conclusion

In this work we have presented the quantum neural networkapproach for the problem of machine translation It hasdemonstrated the reasonable accuracy on various scoresIt may be noted that BLEU score achieved 07502 NISTscore achieved 65773 ROUGE-L score achieved 09233 andMETEOR score achieved 05456 accuracy The accuracy ofthe proposed system is significantly higher in comparisonwith Google Translation Bing Translation and other exist-ing approaches for Hindi to English machine translationAccuracy of this system has been improved significantly byincorporating techniques for handling the unknown wordsusingQNN It is also shown above that it requires less trainingtime than the neural network based MT systems

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

References

[1] L Rodrıguez I Garcıa-Varea andA Jose Gamez ldquoOn the appli-cation of different evolutionary algorithms to the alignmentproblem in statistical machine translationrdquo Neurocomputingvol 71 pp 755ndash765 2008

[2] S Ananthakrishnan R Prasad D Stallard and P NatarajanldquoBatch-mode semi-supervised active learning for statisticalmachine translationrdquo Computer Speech amp Language vol 27 pp397ndash406 2013

[3] DOrtiz-Martınezb I Garcıa-Vareaa and F Casacubertab ldquoThescaling problem in the pattern recognition approach tomachinetranslationrdquo Pattern Recognition Letters vol 29 pp 1145ndash11532008

[4] J Andres-Ferrer D Ortiz-Martınez I Garcıa-Varea and FCasacuberta ldquoOn the use of different loss functions in statisticalpattern recognition applied to machine translationrdquo PatternRecognition Letters vol 29 no 8 pp 1072ndash1081 2008

[5] M Khalilov and J A R Fonollosa ldquoSyntax-based reordering forstatistical machine translationrdquo Computer Speech amp Languagevol 25 no 4 pp 761ndash788 2011

[6] R M K Sinha ldquoAn engineering perspective of machine trans-lation anglabharti-II and anubharti-II architecturesrdquo in Pro-ceedings of the International SymposiumonMachine TranslationNLP and Translation Support System (ISTRANS rsquo04) pp 134ndash138 Tata McGraw-Hill New Delhi India 2004

[7] RMK Sinha andAThakur ldquoMachine translation of bi-lingualHindi-English (Hinglish)rdquo in Proceedings of the 10th MachineTranslation Summit pp 149ndash156 Phuket Thailand 2005

[8] R Ananthakrishnan M Kavitha J H Jayprasad et al ldquoMaTraa practical approach to fully-automatic indicative English-Hindi machine translationrdquo in Proceedings of the Symposium onModeling and Shallow Parsing of Indian Languages (MSPIL rsquo06)2006

[9] S Raman and N R K Reddy ldquoA transputer-based parallelmachine translation system for Indian languagesrdquoMicroproces-sors and Microsystems vol 20 no 6 pp 373ndash383 1997

[10] A Chandola and A Mahalanobis ldquoOrdered rules for full sen-tence translation a neural network realization and a case studyfor Hindi and Englishrdquo Pattern Recognition vol 27 no 4 pp515ndash521 1994

[11] N B Karayiannis and G Purushothaman ldquoFuzzy pattern clas-sification using feed-forward neural networks with multilevelhidden neuronsrdquo in Proceedings of the IEEE InternationalConference on Neural Networks pp 1577ndash1582 Orlando FlaUSA June 1994

[12] G Purushothaman and N B Karayiannis ldquoQuantum NeuralNetworks (QNNrsquos) inherently fuzzy feedforward neural net-worksrdquo IEEE Transactions on Neural Networks vol 8 no 3 pp679ndash693 1997

[13] R Kretzschmar R Bueler N B Karayiannis and F EggimannldquoQuantum neural networks versus conventional feedforwardneural networks an experimental studyrdquo in Proceedings of the10th IEEE Workshop on Neural Netwoks for Signal Processing(NNSP rsquo00) pp 328ndash337 December 2000

[14] Z Daqi andW Rushi ldquoA multi-layer quantum neural networksrecognition system for handwritten digital recognitionrdquo inProceedings of the 3rd International Conference on NaturalComputation (ICNC rsquo07) pp 718ndash722 Hainan China August2007

[15] T Brants A C Popat P Xu F J Och and J Dean ldquoLargelanguage models in machine translationrdquo in Proceedings of theJoint Conference on Empirical Methods in Natural LanguageProcessing and Computational Natural Language Learning pp858ndash867 Prague Czech Republic June 2007

[16] K Papineni S Roukos T Ward and W -J Zhu ldquoBLEU amethod for automatic evaluation of machine translationrdquo inProceedings of the 40th Annual Meeting of the Association forComputational Linguistics (ACL rsquo02) pp 311ndash318 PhiladelphiaPa USA 2002

[17] G Doddington ldquoAutomatic evaluation of machine translationquality using n-gram co-occurrence statisticsrdquo in Proceedings ofthe 2nd International Conference on Human Language Technol-ogy Research 2002

[18] C Y Lin ldquoRouge a package for automatic evaluation of sum-mariesrdquo in Proceedings of the Workshop on Text SummarizationBranches Out Barcelona Spain 2004

[19] S Banerjee and A Lavie ldquoMETEOR an automatic metric forMT evaluation with improved correlation with human judg-mentsrdquo in Proceedings of theWorkshop on Intrinsic and ExtrinsicEvaluation Measures for MT Association of ComputationalLinguistics (ACL) Ann Arbor Mich USA 2005


The Scientific World Journal 5

Step 1. Check whether the sentence is a complex sentence or a simple sentence.
    [Initialize] Set ICOUNT := 1, NCOUNT := 1, CCOUNT := 1; arrays ILOC, NLOC, CLOC.
    Repeat for LOC = 1 to N:
        if SENTENCE[LOC] = "Interrogative word" then
            Set ILOC[ICOUNT] := LOC; ICOUNT := ICOUNT + 1
        [End of if structure]
        if SENTENCE[LOC] = "Negative word" then
            Set NLOC[NCOUNT] := LOC; NCOUNT := NCOUNT + 1
        [End of if structure]
        if SENTENCE[LOC] = "Conjunction" then
            Set CLOC[CCOUNT] := LOC; CCOUNT := CCOUNT + 1
        [End of if structure]
    [End of for]
    if ILOC = NULL and NLOC = NULL and CLOC = NULL then go to Step 5.
Step 2. Remove the interrogative word from the complex sentence to make it affirmative.
    Repeat for X = 1 to ICOUNT:
        Set ITemp[X] := SENTENCE[ILOC[X]]; Set SENTENCE[ILOC[X]] := NULL
    [End of for]
Step 3. Remove the negative word to make it a simple sentence.
    Repeat for Y = 1 to NCOUNT:
        Set NTemp[Y] := SENTENCE[NLOC[Y]]; Set SENTENCE[NLOC[Y]] := NULL
    [End of for]
Step 4. Split the sentence into two or more simple sentences at each conjunction.
    Repeat for Z = 1 to CCOUNT:
        Set CTemp[Z] := SENTENCE[CLOC[Z]]; Set SENTENCE[CLOC[Z]] := Hindi full stop ("|")
    [End of for]
Step 5. Pass each subsentence with its TOKEN to the QNN-based machine translator for repositioning.
Step 6. Refine the translated sentences by applying the grammar rules.
Step 7. Add the interrogative word back if it was removed in Step 2.
    Repeat for X = 1 to ICOUNT:
        if ITemp[X] is NOT NULL then Set SENTENCE[ILOC[X]] := ITemp[X]
        [End of if structure]
    [End of for]
Step 8. Add the negative word back if it was removed in Step 3.
    Repeat for Y = 1 to NCOUNT:
        if NTemp[Y] is NOT NULL then Set SENTENCE[NLOC[Y]] := NTemp[Y]
        [End of if structure]
    [End of for]
Step 9. Rejoin all the subsentences if they were split in Step 4.
Step 10. Semantic translation.
Step 11. Exit.

Algorithm 1.
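The decomposition and restoration steps of Algorithm 1 can be sketched in Python. The word lists below are hypothetical romanized placeholders for illustration only (the paper does not publish its lexicon), and the QNN translation of Steps 5–6 is omitted.

```python
# Sketch of Algorithm 1's pre- and post-processing. The word lists are
# hypothetical examples; the real system classifies words using POS
# information and its own lexicon.
INTERROGATIVES = {"kya", "kab", "kahan"}   # hypothetical entries
NEGATIVES = {"nahin", "mat"}               # hypothetical entries
CONJUNCTIONS = {"aur", "lekin"}            # hypothetical entries
HINDI_FULL_STOP = "|"

def decompose(tokens):
    """Steps 1-4: record interrogative/negative words and blank them;
    replace conjunctions with the Hindi full stop to split the sentence."""
    removed = {}                     # position -> removed word
    simple = []
    for pos, word in enumerate(tokens):
        if word in INTERROGATIVES or word in NEGATIVES:
            removed[pos] = word
            simple.append(None)      # slot kept so the word can be restored
        elif word in CONJUNCTIONS:
            removed[pos] = word
            simple.append(HINDI_FULL_STOP)
        else:
            simple.append(word)
    return simple, removed

def restore(tokens, removed):
    """Steps 7-9: put every removed word back at its recorded position."""
    result = list(tokens)
    for pos, word in removed.items():
        result[pos] = word
    return result

sentence = ["kya", "ram", "ghar", "nahin", "gaya"]
simple, removed = decompose(sentence)
# Steps 5-6 (QNN translation and grammar refinement) would run here.
assert restore(simple, removed) == sentence
```

The position map makes the restoration in Steps 7–8 a simple inverse of the removal, which is why the algorithm records locations rather than just the removed words.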

The fluency check is done by n-gram analysis using the reference translations.

7.1. BLEU. We have used BLEU (bilingual evaluation understudy) to calculate the score of the system output. BLEU is an IBM-developed metric which uses modified n-gram precision to compare the candidate translation against reference translations [16].

The comparative bar diagram of the proposed system, Google, and Bing on the BLEU scale is shown in Figure 3. The diagram clearly shows that the proposed system achieves a remarkably high accuracy of 0.7502 on the BLEU scale, whereas Bing has shown 0.2626 and Google has shown 0.3501.


Table 2: Comparison of performance measurement of the MT system with QNN and a traditional neural network.

S. No. | Quantum interval (θ)      | Epoch (iteration) | Training performance (MSE) | Validation performance (MSE) | Test performance (MSE)
1      | θ = 0 (equivalent to ANN) | 2004391           | 0.002027                   | 0.002066                     | 0.002075
2      | 0.25                      | 1536727           | 0.000185                   | 0.000205                     | 0.000203
3      | 0.50                      | 1547305           | 0.000248                   | 0.000273                     | 0.000273
4      | 0.75                      | 1528343           | 0.000231                   | 0.000248                     | 0.000252
5      | 1.00*                     | 1548303           | 0.000250                   | 0.000270                     | 0.000272
6      | 1.25                      | 1567066           | 0.000284                   | 0.000300                     | 0.000300
7      | 1.50                      | 1564271           | 0.000296                   | 0.000318                     | 0.000316
8      | 1.75                      | 1578643           | 0.000349                   | 0.000374                     | 0.000375
9      | 2.00                      | 1568663           | 0.000184                   | 0.000204                     | 0.000209
10     | 2.25                      | 1596607           | 0.000249                   | 0.000273                     | 0.000273
11     | 2.50                      | 1627345           | 0.000256                   | 0.000281                     | 0.000276
12     | 2.75                      | 1654092           | 0.000220                   | 0.000242                     | 0.000242
13     | 3.00                      | 1649301           | 0.000348                   | 0.000361                     | 0.000370
14     | 3.25                      | 1693214           | 0.000192                   | 0.000214                     | 0.000216
15     | 3.50                      | 1776248           | 0.000294                   | 0.000315                     | 0.000323
16     | 3.75                      | 1785429           | 0.000185                   | 0.000202                     | 0.000207
17     | 4.00                      | 1872056           | 0.000217                   | 0.000237                     | 0.000240

*For the value of θ = 1, the optimum trade-off between iteration count and minimum error (MSE) is achieved.

Figure 3: Comparative bar diagram of the proposed system, Google, and Bing on the BLEU scale (Bing 0.2626, Google 0.3501, proposed 0.7502).

The n-gram resemblance is calculated by comparing the sentences. The clipped n-gram counts for all the candidate sentences are then added and divided by the number of candidate n-grams in the test sentence to calculate the precision score p_n for the whole test sentence:

$$p_n = \frac{\sum_{C \in \text{Candidates}} \sum_{n\text{-gram} \in C} \text{Count}_{\text{clip}}(n\text{-gram})}{\sum_{C' \in \text{Candidates}} \sum_{n\text{-gram}' \in C'} \text{Count}(n\text{-gram}')} \quad (3)$$

where Count_clip = min(Count, Max_Ref_Count); in other words, each word's count is truncated.

Figure 4: Comparative bar diagram of the proposed system, Google, and Bing on the NIST scale (Bing 4.1744, Google 4.955, proposed 6.5773).

Here c denotes the length of the candidate translation and r denotes the reference sentence length. The brevity penalty BP is then calculated as

$$\text{BP} = \begin{cases} 1 & \text{if } c > r \\ e^{(1 - r/c)} & \text{if } c \le r \end{cases} \quad (4)$$

Then

$$\text{BLEU} = \text{BP} \cdot \exp\left(\sum_{n=1}^{N} w_n \log p_n\right) \quad (5)$$
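Equations (3)–(5) can be combined into a small sentence-level BLEU sketch. This is an illustrative implementation under the definitions above (uniform weights w_n = 1/N, shortest-reference length for r), not the evaluation script used in the paper.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def clipped_precision(candidate, references, n):
    """p_n of equation (3): each candidate n-gram count is clipped at the
    maximum count observed in any single reference (Count_clip)."""
    cand_counts = Counter(ngrams(candidate, n))
    if not cand_counts:
        return 0.0
    max_ref = Counter()
    for ref in references:
        for gram, cnt in Counter(ngrams(ref, n)).items():
            max_ref[gram] = max(max_ref[gram], cnt)
    clipped = sum(min(cnt, max_ref[g]) for g, cnt in cand_counts.items())
    return clipped / sum(cand_counts.values())

def bleu(candidate, references, max_n=4):
    """Equations (4)-(5): brevity penalty times the geometric mean of
    p_1..p_N with uniform weights w_n = 1/N."""
    c = len(candidate)
    r = min(len(ref) for ref in references)  # shortest reference length
    bp = 1.0 if c > r else math.exp(1 - r / c)
    precisions = [clipped_precision(candidate, references, n)
                  for n in range(1, max_n + 1)]
    if min(precisions) == 0:
        return 0.0  # log(0) undefined; score collapses to zero
    return bp * math.exp(sum(math.log(p) / max_n for p in precisions))

cand = "the cat is on the mat".split()
refs = ["the cat is on the mat".split()]
print(bleu(cand, refs))  # perfect match -> 1.0
```

The clipping step is what stops a degenerate candidate such as "the the the" from earning full unigram precision against a reference containing "the" only twice.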

7.2. NIST. Proposed by NIST (the National Institute of Standards and Technology), this metric reduces the effect of longer N-grams by using the arithmetic mean over N-gram counts instead of the geometric mean of co-occurrences over N [17]. Figure 4 shows the comparative bar diagram of the proposed system, Google, and Bing on the NIST scale. The diagram clearly shows that the proposed system achieves a remarkably high score of 6.5773 on the NIST scale, whereas Bing has shown 4.1744 and Google has shown 4.955.

Figure 5: Comparative bar diagram of the proposed system, Google, and Bing on the ROUGE-L scale (Bing 0.6475, Google 0.7189, proposed 0.9233).

$$\text{NIST}_{\text{score}} = \text{BP}_{\text{NIST}} \times \text{PRECISION}_{\text{NIST}}$$

$$\text{PRECISION}_{\text{NIST}} = \sum_{n=1}^{N} \frac{\sum_{\text{all } w_1 \cdots w_n \text{ that co-occur}} \text{Info}(w_1 \cdots w_n)}{\sum_{\text{all } w_1 \cdots w_n \text{ in hypo}} 1} \quad (6)$$

where

$$\text{Info}(w_1 \cdots w_n) = -\log_2\left(\frac{\text{Count}(w_1 \cdots w_n)}{\text{Count}(w_1 \cdots w_{n-1})}\right) \quad (7)$$

Here Info gives more weight to the words that are difficult to predict, and the counts are computed over the full set of references; theoretically, the precision has no upper limit.

$$\text{BP}_{\text{NIST}} = \exp\left(\beta \cdot \log^2\left[\min\left(\frac{\text{Len}_{\text{Hypo}}}{\text{Len}_{\text{Ref}}},\ 1\right)\right]\right) \quad (8)$$

where Len_Hypo is the total length of the hypothesis and Len_Ref is the average length of all references, which does not depend on the hypothesis.
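The information weighting of equation (7) and the brevity penalty of equation (8) can be sketched as follows. The β value is a hypothetical illustrative constant here; in practice it is chosen so that a fixed hypothesis/reference length ratio yields a fixed penalty.

```python
import math
from collections import Counter

def info_weights(references):
    """Equation (7), shown for bigrams relative to unigrams:
    Info(w1 w2) = -log2(Count(w1 w2) / Count(w1)),
    with counts taken over the full set of references."""
    unigrams, bigrams = Counter(), Counter()
    for ref in references:
        unigrams.update(ref)
        bigrams.update(zip(ref, ref[1:]))
    return {bg: -math.log2(cnt / unigrams[bg[0]])
            for bg, cnt in bigrams.items()}

def nist_bp(len_hypo, len_ref, beta=-4.0):
    """Equation (8): BP = exp(beta * log^2(min(LenHypo/LenRef, 1))).
    beta = -4.0 is an assumed illustrative value, not the official one."""
    ratio = min(len_hypo / len_ref, 1.0)
    return math.exp(beta * math.log(ratio) ** 2)

refs = [["the", "cat", "sat"], ["the", "dog", "sat"]]
weights = info_weights(refs)
print(weights[("the", "cat")])  # "cat" follows "the" half the time -> 1.0
print(nist_bp(3, 3))            # full-length hypothesis -> no penalty, 1.0
```

Rare continuations get large Info values, which is exactly the "weights more the words that are difficult to predict" behavior described above.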

7.3. ROUGE-L. ROUGE-L (recall-oriented understudy for gisting evaluation, longest common subsequence) calculates the sentence-to-sentence resemblance using the longest common subsequence between the candidate translation and the reference translations. The longest common subsequence represents the similarity between two translations. F_lcs calculates the resemblance between two translations X of length m and Y of length n, where X denotes the reference translation and Y denotes the candidate translation [18]. The comparative bar diagram of the proposed system, Google, and Bing on the ROUGE-L scale is shown in Figure 5. The diagram clearly shows that the proposed system achieves a remarkably high accuracy of 0.9233 on the ROUGE-L scale, whereas Bing has shown 0.6475 and Google has shown 0.7189.

Figure 6: Comparative bar diagram of the proposed system, Google, and Bing on the METEOR scale (Bing 0.1384, Google 0.2021, proposed 0.5456).

$$R_{\text{lcs}} = \frac{\text{LCS}(X, Y)}{m}, \qquad P_{\text{lcs}} = \frac{\text{LCS}(X, Y)}{n}, \qquad F_{\text{lcs}} = \frac{(1 + \beta^2)\, R_{\text{lcs}} P_{\text{lcs}}}{R_{\text{lcs}} + \beta^2 P_{\text{lcs}}} \quad (9)$$

where P_lcs is precision, R_lcs is recall, LCS(X, Y) denotes the longest common subsequence of X and Y, and β = P_lcs / R_lcs when ∂F_lcs/∂R_lcs = ∂F_lcs/∂P_lcs.

$$\text{ROUGE-L} = \text{HarmonicMean}(P_{\text{lcs}}, R_{\text{lcs}}) = \frac{2\, P_{\text{lcs}} R_{\text{lcs}}}{P_{\text{lcs}} + R_{\text{lcs}}} \quad (10)$$
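The quantities in equations (9) and (10) reduce to a longest-common-subsequence computation. A minimal sketch for the β = 1 case, where F_lcs is the plain harmonic mean of equation (10):

```python
def lcs_length(x, y):
    """Length of the longest common subsequence via dynamic programming."""
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i, xi in enumerate(x, 1):
        for j, yj in enumerate(y, 1):
            if xi == yj:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(x)][len(y)]

def rouge_l(reference, candidate):
    """Equations (9)-(10): R_lcs = LCS/m, P_lcs = LCS/n, then their
    harmonic mean (the beta = 1 case of F_lcs)."""
    lcs = lcs_length(reference, candidate)
    if lcs == 0:
        return 0.0
    r_lcs = lcs / len(reference)   # reference X has length m
    p_lcs = lcs / len(candidate)   # candidate Y has length n
    return 2 * p_lcs * r_lcs / (p_lcs + r_lcs)

ref = "the cat sat on the mat".split()
cand = "the cat was on the mat".split()
print(round(rouge_l(ref, cand), 4))  # LCS of 5 out of 6 words -> 0.8333
```

Because a subsequence need not be contiguous, ROUGE-L rewards in-order word matches even when they are interrupted, which is what distinguishes it from substring-based measures.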

7.4. METEOR. METEOR (metric for evaluation of translation with explicit ordering) was developed at Carnegie Mellon University. Figure 6 shows the comparative bar diagram of the proposed system, Google, and Bing on the METEOR scale. The diagram clearly shows that the proposed system achieves a remarkably high accuracy of 0.5456 on the METEOR scale, whereas Bing has shown 0.1384 and Google has shown 0.2021.

METEOR uses the weighted harmonic mean of unigram precision (P = m/w_t) and unigram recall (R = m/w_r). Here m denotes the number of unigram matches, w_t denotes the number of unigrams in the candidate translation, and w_r denotes the number of unigrams in the reference translation. F_mean is calculated by combining the recall and precision via a harmonic mean that places equal weight on precision and recall: F_mean = 2PR/(P + R).

This measure accounts for congruity with respect to single words only; to take longer n-gram matches into account, a penalty p is calculated over the alignment as p = 0.5 (c/u_m)^3, where c denotes the number of chunks and u_m denotes the number of unigrams that have been mapped [19].


The final METEOR score (M-score) is calculated as follows:

$$\text{METEOR-score} = F_{\text{mean}} (1 - p) \quad (11)$$
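Combining F_mean, the chunk penalty, and equation (11) gives a compact sketch. The match and chunk counts are assumed to come from a unigram aligner, which is not reimplemented here; note that this follows the equal-weight F_mean stated above.

```python
def meteor_score(matches, cand_len, ref_len, chunks):
    """F_mean = 2PR/(P+R) with P = m/w_t and R = m/w_r, chunk penalty
    p = 0.5 * (c / u_m)^3, and the final score of equation (11).
    All four inputs are assumed to be precomputed by an aligner."""
    if matches == 0:
        return 0.0
    precision = matches / cand_len           # P = m / w_t
    recall = matches / ref_len               # R = m / w_r
    f_mean = 2 * precision * recall / (precision + recall)
    penalty = 0.5 * (chunks / matches) ** 3  # p = 0.5 (c / u_m)^3
    return f_mean * (1 - penalty)            # equation (11)

# Six matched unigrams forming one contiguous chunk in 6-word sentences:
print(round(meteor_score(6, 6, 6, 1), 4))  # -> 0.9977
```

The cubic penalty grows quickly as matches fragment into many chunks, so a translation with the right words in the wrong order scores noticeably lower than one with long contiguous matches.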

The experiments confirm that the accuracy achieved by the quantum neural network based machine translation is better than that of the other bilingual translation methods.

8. Conclusion

In this work we have presented a quantum neural network approach to the problem of machine translation. It has demonstrated reasonable accuracy on various scores: a BLEU score of 0.7502, a NIST score of 6.5773, a ROUGE-L score of 0.9233, and a METEOR score of 0.5456. The accuracy of the proposed system is significantly higher in comparison with Google Translation, Bing Translation, and other existing approaches for Hindi-to-English machine translation. The accuracy of the system has been improved significantly by incorporating techniques for handling unknown words using the QNN. It has also been shown above that the system requires less training time than neural network based MT systems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] L. Rodríguez, I. García-Varea, and J. A. Gámez, "On the application of different evolutionary algorithms to the alignment problem in statistical machine translation," Neurocomputing, vol. 71, pp. 755–765, 2008.

[2] S. Ananthakrishnan, R. Prasad, D. Stallard, and P. Natarajan, "Batch-mode semi-supervised active learning for statistical machine translation," Computer Speech & Language, vol. 27, pp. 397–406, 2013.

[3] D. Ortiz-Martínez, I. García-Varea, and F. Casacuberta, "The scaling problem in the pattern recognition approach to machine translation," Pattern Recognition Letters, vol. 29, pp. 1145–1153, 2008.

[4] J. Andrés-Ferrer, D. Ortiz-Martínez, I. García-Varea, and F. Casacuberta, "On the use of different loss functions in statistical pattern recognition applied to machine translation," Pattern Recognition Letters, vol. 29, no. 8, pp. 1072–1081, 2008.

[5] M. Khalilov and J. A. R. Fonollosa, "Syntax-based reordering for statistical machine translation," Computer Speech & Language, vol. 25, no. 4, pp. 761–788, 2011.

[6] R. M. K. Sinha, "An engineering perspective of machine translation: AnglaBharti-II and AnuBharti-II architectures," in Proceedings of the International Symposium on Machine Translation, NLP and Translation Support System (iSTRANS '04), pp. 134–138, Tata McGraw-Hill, New Delhi, India, 2004.

[7] R. M. K. Sinha and A. Thakur, "Machine translation of bi-lingual Hindi-English (Hinglish)," in Proceedings of the 10th Machine Translation Summit, pp. 149–156, Phuket, Thailand, 2005.

[8] R. Ananthakrishnan, M. Kavitha, J. H. Jayprasad et al., "MaTra: a practical approach to fully-automatic indicative English-Hindi machine translation," in Proceedings of the Symposium on Modeling and Shallow Parsing of Indian Languages (MSPIL '06), 2006.

[9] S. Raman and N. R. K. Reddy, "A transputer-based parallel machine translation system for Indian languages," Microprocessors and Microsystems, vol. 20, no. 6, pp. 373–383, 1997.

[10] A. Chandola and A. Mahalanobis, "Ordered rules for full sentence translation: a neural network realization and a case study for Hindi and English," Pattern Recognition, vol. 27, no. 4, pp. 515–521, 1994.

[11] N. B. Karayiannis and G. Purushothaman, "Fuzzy pattern classification using feed-forward neural networks with multilevel hidden neurons," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1577–1582, Orlando, Fla, USA, June 1994.

[12] G. Purushothaman and N. B. Karayiannis, "Quantum Neural Networks (QNN's): inherently fuzzy feedforward neural networks," IEEE Transactions on Neural Networks, vol. 8, no. 3, pp. 679–693, 1997.

[13] R. Kretzschmar, R. Bueler, N. B. Karayiannis, and F. Eggimann, "Quantum neural networks versus conventional feedforward neural networks: an experimental study," in Proceedings of the 10th IEEE Workshop on Neural Networks for Signal Processing (NNSP '00), pp. 328–337, December 2000.

[14] Z. Daqi and W. Rushi, "A multi-layer quantum neural networks recognition system for handwritten digital recognition," in Proceedings of the 3rd International Conference on Natural Computation (ICNC '07), pp. 718–722, Hainan, China, August 2007.

[15] T. Brants, A. C. Popat, P. Xu, F. J. Och, and J. Dean, "Large language models in machine translation," in Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pp. 858–867, Prague, Czech Republic, June 2007.

[16] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, "BLEU: a method for automatic evaluation of machine translation," in Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL '02), pp. 311–318, Philadelphia, Pa, USA, 2002.

[17] G. Doddington, "Automatic evaluation of machine translation quality using n-gram co-occurrence statistics," in Proceedings of the 2nd International Conference on Human Language Technology Research, 2002.

[18] C.-Y. Lin, "ROUGE: a package for automatic evaluation of summaries," in Proceedings of the Workshop on Text Summarization Branches Out, Barcelona, Spain, 2004.

[19] S. Banerjee and A. Lavie, "METEOR: an automatic metric for MT evaluation with improved correlation with human judgments," in Proceedings of the Workshop on Intrinsic and Extrinsic Evaluation Measures for MT, Association for Computational Linguistics (ACL), Ann Arbor, Mich, USA, 2005.

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 6: Research Article Quantum Neural Network Based Machine Translator for Hindi to English · 2019. 7. 31. · Research Article Quantum Neural Network Based Machine Translator for Hindi

6 The Scientific World Journal

Table 2 Comparison of performance measurement of MT system with QNN and traditional neural network

S No Quantum interval (120579) Epoch(iteration)

Trainingperformance (MSE)

Validationperformance (MSE)

Test performance(MSE)

1 120579 = 0 is equivalent to ANN 2004391 0002027 0002066 00020752 025 1536727 0000185 0000205 00002033 05 1547305 0000248 0000273 00002734 075 1528343 0000231 0000248 00002525 1lowast 1548303 000025 000027 00002726 125 1567066 0000284 00003 000037 15 1564271 0000296 0000318 00003168 175 1578643 0000349 0000374 00003759 2 1568663 0000184 0000204 000020910 225 1596607 0000249 0000273 000027311 25 1627345 0000256 0000281 000027612 275 1654092 000022 0000242 000024213 3 1649301 0000348 0000361 00003714 325 1693214 0000192 0000214 000021615 35 1776248 0000294 0000315 000032316 375 1785429 0000185 0000202 000020717 4 1872056 0000217 0000237 000024lowastFor the value of 120579 = 1 the optimum trade-off of iteration and the minimum Error (MSE) achieved

0262603501

07502

0

02

04

06

08

BLEU

BingGoogleProposed

Figure 3 Comparative bar diagram between proposed systemGoogle and Bing based on BLEU scale

has shown accuracy of 02626 and Google has shown 03501accuracy on BLEU scale

Calculate the 119899-gram resemblance by comparing thesentences Then add the clipped 119899-gram counts for all thecandidate sentences and divide by the number of candidate119899-grams in the test sentence to calculate the precision score119901119899 for the whole test sentence

119901119899=

sum119862isinCandidatessum119899-gramisin119862Countclip (119899-gram)

sum1198621015840isinCandidatessum119899-gram1015840isin1198621015840 Countclip (119899-gram1015840)

(3)

where Countclip = min (Count Max Ref Count) In otherwords one truncates each wordrsquos count

417444955

65773

01234567

NIST

BingGoogleProposed

Figure 4 Comparative bar diagram between proposed systemGoogle and Bing based on NIST scale

Here 119888 denotes length of the candidate translation and119903 denotes reference sentence length Then calculate brevitypenalty BP

BP =

1 if 119888 gt 119903

119890(1minus119903119888) if 119888 le 119903(4)

Then

BLEU = BP sdot exp(119873

sum119899=1

119908119899log 119901119899) (5)

72 NIST Proposed by NIST (national institute of standardand technology) it reduces the effect of longer 119873-gramsby using arithmetic mean over 119873-grams counts instead of

The Scientific World Journal 7

06

1

08

04

02

0

ROUGE-L

BingGoogleProposed

0647507189

09233

Figure 5 Comparative bar diagram among proposed systemGoogle and Bing based on ROUGE-L scale

geometricmean of cooccurrences over119873 [17] Figure 4 showsthe comparative bar diagram between proposed systemGoogle and Bing based on NIST scale The bar diagramclearly shows that the proposed system has remarkably highaccuracy of 65773 onNIST scale Bing has shown accuracy of41744 and Google has shown 4955 accuracy on NIST scale

NISTscore = BPNIST lowast PRECISIONNIST

PRECISIONNIST

=

119873

sum119899=1

sumall 119908

1sdotsdotsdot119908119899that co-occur Info (1199081 sdot sdot sdot 119908119899)

sumall 1199081sdotsdotsdot119908119899in hypo (1)

(6)

where

Info (1199081sdot sdot sdot 119908119899) = minuslog

2(

Count (1199081sdot sdot sdot 119908119899)

Count (1199081sdot sdot sdot 119908119899minus1

))

(7)

where Info weights more the words that are difficult topredict and count is computed over the full set of referencestheoretically the precision range is having no limit

$$\text{BP}_{\text{NIST}} = \exp\left\{ \beta \cdot \log^2 \left[ \min\left( \frac{\text{Len}_{\text{Hypo}}}{\text{Len}_{\text{Ref}}}, 1 \right) \right] \right\}, \tag{8}$$

where $\text{Len}_{\text{Hypo}}$ is the total length of the hypothesis and $\text{Len}_{\text{Ref}}$ is the average length of all references, which does not depend on the hypothesis.
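Equations (6)–(8) can be sketched as follows. This is an illustrative Python version only: the value of $\beta$ and the use of the natural logarithm in the brevity penalty are simplifying assumptions here (the official NIST definition fixes $\beta$ so that the penalty equals 0.5 when the hypothesis is 2/3 of the average reference length).

```python
from collections import Counter
from math import exp, log

def grams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def nist_score(hypothesis, references, N=5, beta=-4.0):
    # Gather n-gram counts over the full reference set (used by Eq. 7).
    counts = {n: Counter() for n in range(1, N + 1)}
    total_words = 0
    for ref in references:
        total_words += len(ref)
        for n in range(1, N + 1):
            counts[n].update(grams(ref, n))

    def info(g):
        # Info(w1..wn) = -log2(Count(w1..wn) / Count(w1..w_{n-1})); for
        # unigrams the denominator is the total reference word count (Eq. 7).
        denom = total_words if len(g) == 1 else counts[len(g) - 1][g[:-1]]
        return -log(counts[len(g)][g] / denom, 2) if denom else 0.0

    # Summed Info of co-occurring n-grams over the number of hypothesis
    # n-grams, accumulated over n = 1..N (Eq. 6).
    precision = 0.0
    for n in range(1, N + 1):
        hyp = grams(hypothesis, n)
        if not hyp:
            continue
        matched = sum(info(g) for g in hyp if counts[n][g] > 0)
        precision += matched / len(hyp)

    # Brevity penalty (Eq. 8); beta = -4.0 is an assumed constant here.
    len_ref = total_words / len(references)  # average reference length
    ratio = min(len(hypothesis) / len_ref, 1.0)
    bp = exp(beta * log(ratio) ** 2)
    return bp * precision
```

Because Info rewards rare (hard-to-predict) $n$-grams, the score is unbounded above, unlike BLEU.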

7.3. ROUGE-L. ROUGE-L (Recall-Oriented Understudy for Gisting Evaluation, Longest Common Subsequence) measures sentence-to-sentence resemblance using the longest common subsequence (LCS) between the candidate translation and the reference translations; the LCS represents the similarity between the two translations. $F_{\text{lcs}}$ measures the resemblance between two translations $X$ of length $m$ and $Y$ of length $n$, where $X$ denotes the reference translation and $Y$ denotes the candidate translation [18]. The comparative bar diagram between the proposed system, Google, and Bing based on the ROUGE-L scale is shown in Figure 5. The bar diagram clearly shows that the proposed system achieves a remarkably high score of 0.9233 on the ROUGE-L scale, whereas Bing scores 0.6475 and Google scores 0.7189.

Figure 6: Comparative bar diagram among the proposed system, Google, and Bing based on the METEOR scale (Bing: 0.1384, Google: 0.2021, proposed: 0.5456).

$$R_{\text{lcs}} = \frac{\text{LCS}(X, Y)}{m}, \qquad P_{\text{lcs}} = \frac{\text{LCS}(X, Y)}{n},$$

$$F_{\text{lcs}} = \frac{(1 + \beta^2)\, R_{\text{lcs}} P_{\text{lcs}}}{R_{\text{lcs}} + \beta^2 P_{\text{lcs}}}, \tag{9}$$

where $P_{\text{lcs}}$ is precision, $R_{\text{lcs}}$ is recall, $\text{LCS}(X, Y)$ denotes the length of the longest common subsequence of $X$ and $Y$, and $\beta = P_{\text{lcs}} / R_{\text{lcs}}$ when $\partial F_{\text{lcs}} / \partial R_{\text{lcs}} = \partial F_{\text{lcs}} / \partial P_{\text{lcs}}$. With $\beta = 1$ this reduces to the harmonic mean:

$$\text{ROUGE-L} = \text{HarmonicMean}(P_{\text{lcs}}, R_{\text{lcs}}) = \frac{2 P_{\text{lcs}} R_{\text{lcs}}}{P_{\text{lcs}} + R_{\text{lcs}}}. \tag{10}$$
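The LCS-based recall, precision, and F-measure of equations (9) and (10) can be sketched in Python as follows; this minimal version uses the textbook dynamic-programming LCS and the $\beta = 1$ harmonic-mean form of equation (10).

```python
def lcs_length(x, y):
    # Dynamic-programming longest common subsequence length.
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def rouge_l(reference, candidate):
    # R_lcs = LCS/m, P_lcs = LCS/n, combined by the harmonic mean (Eq. 10),
    # i.e. F_lcs of Eq. 9 with beta = 1.
    lcs = lcs_length(reference, candidate)
    if lcs == 0:
        return 0.0
    r = lcs / len(reference)
    p = lcs / len(candidate)
    return 2 * p * r / (p + r)
```

For example, against the reference "the cat sat on the mat", the candidate "the cat on the mat" has LCS length 5, giving $R_{\text{lcs}} = 5/6$, $P_{\text{lcs}} = 1$, and ROUGE-L $= 10/11 \approx 0.909$.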

7.4. METEOR. METEOR (Metric for Evaluation of Translation with Explicit ORdering) was developed at Carnegie Mellon University. Figure 6 shows the comparative bar diagram between the proposed system, Google, and Bing based on the METEOR scale. The bar diagram clearly shows that the proposed system achieves a remarkably high score of 0.5456 on the METEOR scale, whereas Bing scores 0.1384 and Google scores 0.2021.

METEOR uses the weighted harmonic mean of unigram precision ($P = m / w_t$) and unigram recall ($R = m / w_r$). Here $m$ denotes the number of unigram matches, $w_t$ the number of unigrams in the candidate translation, and $w_r$ the number of unigrams in the reference translation. $F_{\text{mean}}$ is calculated by combining recall and precision via a harmonic mean that places equal weight on precision and recall: $F_{\text{mean}} = 2PR / (P + R)$.

This measure accounts for congruity with respect to single words; to account for longer $n$-gram matches, a penalty $p$ is calculated for the alignment as $p = 0.5 (c / u_m)^3$, where $c$ denotes the number of chunks and $u_m$ denotes the number of unigrams that have been mapped [19].


The final METEOR score (M-score) is calculated as

$$\text{METEOR-score} = F_{\text{mean}} (1 - p). \tag{11}$$
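A simplified sketch of this computation follows. It handles exact unigram matches only and aligns them greedily; the full METEOR metric also uses stemming and synonym modules and chooses the alignment that minimizes the number of chunks, all of which are omitted here.

```python
def meteor_sketch(candidate, reference):
    # Greedy left-to-right one-to-one alignment of exact unigram matches
    # (an assumption of this sketch; real METEOR minimizes chunk count).
    ref_used = [False] * len(reference)
    alignment = []  # (candidate index, reference index) pairs
    for i, word in enumerate(candidate):
        for j, ref_word in enumerate(reference):
            if not ref_used[j] and word == ref_word:
                ref_used[j] = True
                alignment.append((i, j))
                break
    m = len(alignment)
    if m == 0:
        return 0.0
    precision = m / len(candidate)  # P = m / w_t
    recall = m / len(reference)     # R = m / w_r
    f_mean = 2 * precision * recall / (precision + recall)
    # Chunks: maximal runs of matches contiguous in both sentences.
    chunks = 1
    for (c1, r1), (c2, r2) in zip(alignment, alignment[1:]):
        if c2 != c1 + 1 or r2 != r1 + 1:
            chunks += 1
    penalty = 0.5 * (chunks / m) ** 3  # p = 0.5 (c / u_m)^3
    return f_mean * (1 - penalty)      # Eq. 11
```

Note that even a perfect translation incurs a small penalty, since one chunk remains: for a three-word exact match the score is $1 - 0.5(1/3)^3 \approx 0.98$.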

The experiments confirm that the accuracy achieved by the quantum neural network based machine translation is better than that of the other bilingual translation methods compared.

8. Conclusion

In this work we have presented a quantum neural network approach to the problem of machine translation, which has demonstrated reasonable accuracy on various scores: the BLEU score achieved 0.7502, the NIST score 6.5773, the ROUGE-L score 0.9233, and the METEOR score 0.5456. The accuracy of the proposed system is significantly higher in comparison with Google Translation, Bing Translation, and other existing approaches for Hindi to English machine translation. The accuracy of this system has been improved significantly by incorporating techniques for handling unknown words using the QNN. It is also shown above that it requires less training time than neural network based MT systems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] L. Rodríguez, I. García-Varea, and J. A. Gámez, "On the application of different evolutionary algorithms to the alignment problem in statistical machine translation," Neurocomputing, vol. 71, pp. 755–765, 2008.

[2] S. Ananthakrishnan, R. Prasad, D. Stallard, and P. Natarajan, "Batch-mode semi-supervised active learning for statistical machine translation," Computer Speech & Language, vol. 27, pp. 397–406, 2013.

[3] D. Ortiz-Martínez, I. García-Varea, and F. Casacuberta, "The scaling problem in the pattern recognition approach to machine translation," Pattern Recognition Letters, vol. 29, pp. 1145–1153, 2008.

[4] J. Andrés-Ferrer, D. Ortiz-Martínez, I. García-Varea, and F. Casacuberta, "On the use of different loss functions in statistical pattern recognition applied to machine translation," Pattern Recognition Letters, vol. 29, no. 8, pp. 1072–1081, 2008.

[5] M. Khalilov and J. A. R. Fonollosa, "Syntax-based reordering for statistical machine translation," Computer Speech & Language, vol. 25, no. 4, pp. 761–788, 2011.

[6] R. M. K. Sinha, "An engineering perspective of machine translation: AnglaBharti-II and AnuBharti-II architectures," in Proceedings of the International Symposium on Machine Translation, NLP and Translation Support System (iSTRANS '04), pp. 134–138, Tata McGraw-Hill, New Delhi, India, 2004.

[7] R. M. K. Sinha and A. Thakur, "Machine translation of bi-lingual Hindi-English (Hinglish)," in Proceedings of the 10th Machine Translation Summit, pp. 149–156, Phuket, Thailand, 2005.

[8] R. Ananthakrishnan, M. Kavitha, J. H. Jayprasad, et al., "MaTra: a practical approach to fully-automatic indicative English-Hindi machine translation," in Proceedings of the Symposium on Modeling and Shallow Parsing of Indian Languages (MSPIL '06), 2006.

[9] S. Raman and N. R. K. Reddy, "A transputer-based parallel machine translation system for Indian languages," Microprocessors and Microsystems, vol. 20, no. 6, pp. 373–383, 1997.

[10] A. Chandola and A. Mahalanobis, "Ordered rules for full sentence translation: a neural network realization and a case study for Hindi and English," Pattern Recognition, vol. 27, no. 4, pp. 515–521, 1994.

[11] N. B. Karayiannis and G. Purushothaman, "Fuzzy pattern classification using feed-forward neural networks with multilevel hidden neurons," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1577–1582, Orlando, Fla, USA, June 1994.

[12] G. Purushothaman and N. B. Karayiannis, "Quantum Neural Networks (QNN's): inherently fuzzy feedforward neural networks," IEEE Transactions on Neural Networks, vol. 8, no. 3, pp. 679–693, 1997.

[13] R. Kretzschmar, R. Bueler, N. B. Karayiannis, and F. Eggimann, "Quantum neural networks versus conventional feedforward neural networks: an experimental study," in Proceedings of the 10th IEEE Workshop on Neural Networks for Signal Processing (NNSP '00), pp. 328–337, December 2000.

[14] Z. Daqi and W. Rushi, "A multi-layer quantum neural networks recognition system for handwritten digital recognition," in Proceedings of the 3rd International Conference on Natural Computation (ICNC '07), pp. 718–722, Hainan, China, August 2007.

[15] T. Brants, A. C. Popat, P. Xu, F. J. Och, and J. Dean, "Large language models in machine translation," in Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pp. 858–867, Prague, Czech Republic, June 2007.

[16] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, "BLEU: a method for automatic evaluation of machine translation," in Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL '02), pp. 311–318, Philadelphia, Pa, USA, 2002.

[17] G. Doddington, "Automatic evaluation of machine translation quality using n-gram co-occurrence statistics," in Proceedings of the 2nd International Conference on Human Language Technology Research, 2002.

[18] C. Y. Lin, "ROUGE: a package for automatic evaluation of summaries," in Proceedings of the Workshop on Text Summarization Branches Out, Barcelona, Spain, 2004.

[19] S. Banerjee and A. Lavie, "METEOR: an automatic metric for MT evaluation with improved correlation with human judgments," in Proceedings of the Workshop on Intrinsic and Extrinsic Evaluation Measures for MT, Association for Computational Linguistics (ACL), Ann Arbor, Mich, USA, 2005.

