International Journal of Computing and ICT Research, Vol. 11, Issue 1, June 2017
Contents Volume 11, No 1, June 2017.
ISSN 1818-1139 (PRINT), ISSN 1996-1065 (ONLINE)
Is Africa “Overproducing” “Well Trained” Information Technologists? – Part 1 …………………… 6
Joseph M. Kizza – Editor-in-Chief
Is the SAMR Model Valid and Reliable for Measuring the Use of ICT in Pedagogy? Answers from a Study of Teachers of Mathematical Disciplines in Universities in Uganda …………………… 11
Marjorie S K Batiibwe; Fred E K Bakkabulindi; John M Mango
Empirical Study of Continuous Change of Open Source System …………………… 31
Michael Abayomi Olatunji, Rufus Olalere Oladele and Amos Orenyi Bajeh
Measuring the Impacts of E-Learning on Students’ Achievement in Learning Process: An Experience from Tanzanian Public Universities …………………… 53
Titus Tossy
Reducing Checkpoint Overhead in Grid Environment …………………… 72
Faki Ageebe Silas, Jimoh Rasheed Gbenga
Keystroke Dynamics Authentication for a Web-Based Sales and Stock Solution …………………… 84
Oluwakemi C. Abikoye and Bilikis T. Sanni
International Journal of Computing and ICT Research
Editorial Board
Editor-in-Chief: Prof. Joseph M. Kizza,
Department of Computer Science and Engineering
College of Engineering and Computer Science
The University of Tennessee-Chattanooga,
615 McCallie Avenue, Chattanooga, Tennessee, USA
Managing Editors:
Information Technology
Prof. Shushma Patel, London South Bank University, UK
Information Systems
Prof. Ravi Nath, Creighton University, Nebraska, USA
Software Engineering
Prof. P.K. Mahanti, University of New Brunswick, Canada
Data Communication and Computer Networks
Prof. Vir Phoha, Syracuse University, New York , USA
Production Editor:
Journal Editorial Office:
The International Journal of Computing and ICT Research
Makerere University
P.O. Box 7062,
Kampala, Uganda.
Tel: +256 414 540628
Fax: +256 414 540620
Email: [email protected]
Web: http://www.ijcir.mak.ac.ug
Volume 11, Issue 1 June 2017
The International Journal of Computing and ICT Research
College of Computing and Information Sciences
Makerere University
P.O. Box 7062,
Kampala, Uganda.
Tel: +256 414 540628
Fax: +256 414 540628
Email: [email protected]
Web: http://www.ijcir.mak.ac.ug
Book Reviews
Every issue of the journal will carry one or more book reviews. This is a call for reviewers of
books. The book reviewed must be of interest to the readers of the journal. That is to say, the book
must be within the areas the journal covers. The reviews must be no more than 500 words. Send
your review electronically to the book review editor at: [email protected]
International Journal of Computing and ICT Research
The IJCIR is an independent biannual publication of Makerere University. In addition to
publishing original work from international scholars across the globe, the Journal strives to
publish original African work of the highest quality, work that embraces basic information and
communication technology (ICT) and that not only offers significant contributions to scientific
research and development but also takes into account local development contexts. The Journal
publishes papers in computer science, computer engineering, software engineering, information
systems, data communications and computer networks, ICT for sustainable development, and
other related areas. Two issues are published per year: June and December. For more detailed
topics please see: http://www.ijcir.mak.ac.ug
Submitted work should be original, unpublished, current research in computing and ICT, covering
theoretical or methodological aspects as well as applications to real-world problems from science,
technology, business or commerce.
Short, high-quality articles (not exceeding 20 single-spaced typed pages, including references) are
preferred. Papers are selected through a rigorous review process that secures the most scholarly,
critical, analytical, original, and informative papers. Papers are typically published
in less than half a year from the time a final corrected version of the manuscript is received.
Authors should submit their manuscripts in Word or PDF to [email protected]. Manuscripts
submitted will be admitted subject to adherence to the publication requirements in formatting and
style. For more details on manuscript formatting and style please visit the journal website at:
http://www.ijcir.mak.ac.ug.
Is Africa “Overproducing” “Well Trained” Information Technologists?
– Part 1
PROF. JOSEPH M. KIZZA1,
Editor-in-Chief
Department of Computer Science and Engineering,
The University of Tennessee-Chattanooga, Tennessee, 37403, USA.
IJCIR Reference Format:
Kizza, Joseph M. Is Africa “Overproducing” “Well Trained” Information Technologists? – Part
1, Vol. 11, Issue 1, pp. 6 - 10. http://ijcir.mak.ac.ug/volume11-issue1/article1.pdf
INTRODUCTION
In my editorial articles in the maiden issue (Vol. 1 Issue 1), in Vol. 3 Issue 2, and in subsequent
volumes, I pointed out that the African technology journey from the bottom of the stack to where
Africa is now has not been without problems. At the dawn of information and communications
technology (ICT), Africa’s technological imprints were but non-starters compared to the giant
steps being taken in the rest of the world. Throughout the continent, ICT capacity and
infrastructure were low and in some places non-existent, and equipment acquisition was sporadic
and unplanned. However, as I pointed out then and stress now, Africans were merely deprived, not
disabled. Africa’s leading universities and institutions set themselves on a quest to be ICT
incubators, to jump-start ICT education and research, and to build the ICT capacity needed to help
construct the badly needed infrastructure.
1 Author’s Address: Joseph M. Kizza, Department of Computer Science and Engineering, The University of Tennessee-
Chattanooga, Chattanooga, TN 37403, USA. [email protected]. "Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for
components of this work owned by others than IJCIR must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee."
© International Journal of Computing and ICT Research 2008.
International Journal of Computing and ICT Research, ISSN 1818-1139 (Print), ISSN 1996-1065 (Online), Vol.11, Issue 1, pp. 6 - 10, June 2017.
For these institutions and everybody else involved in the quest for technological advance, the climb
to the top of the technological mountain has been, and continues to be, steep. However, with
typical African determination, they are inching toward the top. Years into the climb, there have
been signs of achievement. Everywhere in Africa today, one witnesses exuberance and soaring
interest, especially among the young; an increasing inventory of working ICT equipment; a growing
number of confident young ICT technocrats; an unbelievably large number of young people taking
courses in information technology; and a growing number of governments believing more in ICT
as a tool for development. African technological acquisition, though still low by international
standards, driven mostly by an unprecedented indigenous interest in technological development
and the numerous and sometimes ambitious initiatives of NGOs and the donor community, is
changing the fortunes of Africa, quickly leapfrogging her into the 21st century with development
not seen in generations. The long-awaited African technological dawn may be in sight.
But as on all good and successful journeys, it is time to take a break and a breather and take stock
of what it is that we are getting. Indeed, though the results we are getting are still overwhelmingly
good, there are signs that things are not going as planned. There are signs of cracks in the castles
of success we are building. The first, and probably the biggest, crack to show is the growing number
of university graduates in ICT roaming African streets looking for employment. This
should lead us to serious soul searching about where we went wrong. Is Africa
“Overproducing” “Well Trained” Information Technologists?
Why do we have what seems to be so many frustrated young ICT graduates roaming our
streets, with annoyed parents who spent their life savings on a promise that was not to be? Did
we oversell ICT? Did we falsely advertise ICT? Did we train their sons and daughters right? Have
their governments failed to create enabling environments and infrastructures that would allow these
graduates to start on their own? Last and most important, did we misunderstand ICT? To try to
understand what is going on and to ignite a debate about this growing problem, I put quotation
marks around key words and phrases in my title question to indicate how loaded the question is.
As we start to debate the question looking for answers, I want to point out pertinent issues that
may explain some of the root causes. I discussed these very same issues, on the causes of the poor
education given to our students, in Vol. 5 Issue 2:
A PERVASIVE CONSULTANCY CULTURE Today, intellectual life in universities has
been reduced to bare-bones classroom activity. Extra-curricular seminars and workshops
have migrated to hotels. Workshop attendance goes with transport allowances and per
diem. All this is part of a larger process, the NGO-ization of the university. Academic
papers have turned into corporate-style power point presentations. Academics read less and
less. A chorus of buzz words have taken the place of lively debates (Mamdani, Mail and
Guardian Online)
Mahmood Mamdani: “African Universities Breed ‘Native Informers’, Not Researchers”
- A leading East African political scientist, Prof. Mahmood Mamdani, who is the director
of Makerere University’s Institute of Social Research, has put universities in Sub-Saharan
Africa in the dock by accusing them of not creating researchers but churning out native
informers to national and international non-governmental organizations. (Mamdani, Mail
and Guardian Online)
Statistics from the United Nations Educational, Scientific and Cultural Organization
(UNESCO) reveal that the entire African continent contributes only 2.3 per cent of the
world’s researchers. (Wachira Kigotho, online)
UNESCO estimates that on average, Africa has only 169 researchers per one million
inhabitants. Apart from having the lowest density of researchers in the world, investment
in research and development in Africa stands at 0.9 per cent. (Steppes in Sync)
Besides the substandard education given to our students as one root cause of this problem, there
are other causes, including:
Lack of resources, such as private capital funding and bank loans, to enable these
graduates to start on their own by building start-ups. The majority of technology-based
businesses and companies begin as individual start-ups.
African governments are not investing enough in policies for, and financing of, ICT-based
enabling environments. There are a few exceptions, like Kenya’s Konza Techno City, an
IT-focused “smart” metropolis code-named the Silicon Savannah (Jonathan Rosen).
False advertising by institutions of higher learning. In the last 10 years or so, more
universities and institutions of higher learning have sprung up across Africa than in the
previous fifty years. The key draw is ICT. Every one of these “new” institutions, some not
worthy to be called institutions of higher learning, with poorly trained and poorly paid
teaching faculty, advertises to parents that it offers a strong ICT curriculum and
that their sons and daughters will have no problem finding ICT-related jobs. Parents pull
their money out of mattresses to pay for the success of their offspring.
Probably the most serious problem is the misunderstanding of what ICT is. I attended a
conference where one African minister tried to convince everyone that “with ICTs”
(whatever ICTs meant to him) “we can do anything”. This has been, and continues to be, a
very serious and indeed dangerous problem. ICT is a spectrum of technologies; if one
is not careful, one can take course after course and come out without the ability to
write a single line of code to do anything useful. I have seen African students
with degrees in ICT come to American universities and be unable to pass a single
undergraduate freshman course in the computing sciences. It is a deplorable situation we are
in. In this situation, how do you expect to graduate a student who can find a meaningful
job?
Something needs to give!
REFERENCES
MAMDANI, MAHMOOD. The Importance of Research in a University.
http://pascalobservatory.org/pascalnow/blogentry/importance-research-university-mahmoud-
mamdani
KIGOTHO, WACHIRA. Africa Home to Only 2.3 per cent of World’s Researchers.
http://www.standardmedia.co.ke/InsidePage.php?id=2000040499&cid=4&ttl=Africa+home+to+
only+2.3+per+cent+world%E2%80%99s+researchers
MAMDANI, MAHMOOD. Africa’s Post-Colonial Scourge. Mail and Guardian Online.
http://www.ru.ac.za/media/rhodesuniversity/content/documents/institutionalplanning/African%2
0higher%20education%20-%20Mahmood%20Mamdani%2027May2011.pdf
STEPPES IN SYNC. About the Life of Creatives in the Overlooked ‘Pockets’ of the Planet.
http://steppesinsync.wordpress.com/about-steppes-in-sync/
ROSEN, JONATHAN. Business Report. “Kenya Tries to Build Its Silicon Valley”.
https://www.technologyreview.com/s/543406/kenya-tries-to-build-its-silicon-valley/
Is the SAMR Model Valid and Reliable for Measuring the Use of ICT in
Pedagogy? Answers from a Study of Teachers of Mathematical
Disciplines in Universities in Uganda
*MARJORIE S K BATIIBWE2; *FRED E K BAKKABULINDI; **JOHN M MANGO
*College of Education & External Studies, Makerere University
**College of Natural Sciences, Makerere University
ABSTRACT
Puentedura's (2006) SAMR model of the use of ICT operationalises ICT as consisting of
substitution (S), augmentation (A), modification (M) and redefinition (R) constructs. Accordingly,
this study sought, first, to establish the validity and reliability of each of the four constructs (S, A,
M & R); second, to test whether the four constructs were independent; and third, to re-examine
whether the four-factor SAMR model of ICT was reasonable. A sample of 261 was chosen from
among teachers of mathematical disciplines in four universities in Uganda, who responded to a self-
administered questionnaire. The analysis involved confirmatory factor analysis (CFA) and
Cronbach alpha (first objective); Pearson’s linear correlation (second objective); and exploratory
factor analysis (EFA) (third objective). CFA suggested that while not all the items of each of S
and A constructs were valid measures, all the items of each of M and R constructs were valid. The
four constructs were highly inter-correlated. EFA revealed that the four-factor SAMR model of
2 Authors’ Address: Marjorie S K Batiibwe*, Fred E K Bakkabulindi*, John M Mango**;
*College of Education & External Studies, Makerere University; **College of Natural Sciences, Makerere University.
"Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for
components of this work owned by others than IJCIR must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to
post on servers or to redistribute to lists, requires prior specific permission and/or a fee." © International Journal of Computing and ICT Research 2008.
International Journal of Computing and ICT Research, ISSN 1818-1139 (Print), ISSN 1996-1065 (Online), Vol. 11, Issue 1, pp. 11 - 30, June
2017.
operationalising ICT was questionable. Hence a call was made to researchers to continue testing
the SAMR model in different contexts with the intent of refining it.
General terms: Academic Staff, Cronbach Alpha, Factor Analysis, Mathematical disciplines, Use
of ICT in pedagogy
IJCIR Reference Format:
MARJORIE S K BATIIBWE; FRED E K BAKKABULINDI; JOHN M MANGO. Is the
SAMR Model Valid and Reliable for Measuring the Use of ICT in Pedagogy? Answers from a
Study of Teachers of Mathematical Disciplines in Universities in Uganda, Vol. 11, Issue 1, pp. 11
- 30. http://ijcir.mak.ac.ug/volume11-issue1/article2.pdf
1. INTRODUCTION
Lloyd (2005) defined ICT as “those technologies that are used for accessing, gathering,
manipulating and presenting or communicating information” (p. 3). These technologies, she
explained, include hardware such as computers and other devices; software applications; and
connectivity such as access to the Internet. She then defined the use or integration of ICT in
pedagogy (UIP) as “a change in pedagogical approach to make ICT less peripheral to schooling
and more central to student learning” (p. 5). From its conception, UIP is of utmost importance.
According to Noor-Ul-Amin (2013), UIP provides opportunities to the learner to access an
abundance of information using multiple information resources and also to view information from
multiple perspectives.
Given its importance, UIP has attracted several studies. But how is UIP measured in those studies?
To that effect, several scholars (e.g. Jamieson-Proctor, Watson, Finger, Grimbeek & Burnett, 2006;
Papanastasiou & Angeli, 2008; Peeraer & Petegem, 2012; Proctor, Watson & Finger, 2003;
Puentedura, 2006) have made attempts to develop and/or test instruments that measure UIP by
teachers and students. Of particular interest in this study is the substitution, augmentation,
modification and redefinition (SAMR) model (Puentedura, 2006). This study evaluated the validity
and reliability of an instrument based on the SAMR model in the context of the teachers of
mathematical disciplines in four universities in Uganda.
2. LITERATURE REVIEW
2.1 Instruments on the use of ICT in pedagogy other than those based on SAMR
Jamieson-Proctor et al. (2006) developed an instrument that they termed the “Learning with ICTs:
Measuring ICT Use in the Curriculum”. Their instrument was an improvement of the “ICT
Curriculum Integration Performance Measurement” instrument (Proctor et al., 2003), which had
137 self-rated items, scaled using a four-point Likert scale (Never, Sometimes, Often & Very often).
Using face validity that involved examining the items of the instrument for redundancy and
ambiguity, Jamieson-Proctor et al. reduced the items to 45 in number. They then administered the
refined instrument to 929 teachers in 38 state primary and secondary schools in Queensland,
Australia. Using a series of validity and reliability analyses, they further reduced the instrument to
20 items, and termed it “Learning with ICTs: Measuring ICT use in the Curriculum”.
Papanastasiou and Angeli (2008) constructed what they termed a “survey of factors affecting
teachers teaching with technology (SFA-T3)” (p. 70). Such factors included those related to (a) a
teacher’s knowledge of technology tools (14 items), and (b) a teacher’s frequency of using
technology for personal purposes (15 items). While they referred to their tool as being on “factors
affecting teachers teaching with technology”, in reality the tool had constructs – see those given
above – dealing with teaching with technology itself. They sought to determine the reliability and
validity of the instrument. Hence using Cronbach and factor analyses respectively, they found their
instrument sound.
Peeraer and Petegem (2012) developed an instrument to measure the integration of ICT in
education. Having carried out a literature search on the definitions of the integration of ICT in
education, they used the item response modeling approach (Wilson, 2005 cited in Peeraer &
Petegem, 2012, p. 1252) to develop a self-report instrument on teacher educators’ use of ICT for
teaching and support of students’ learning. In the bid to test the validity and reliability of their 15-
item instrument, Peeraer and Petegem (2012) collected data from 933 teacher educators working
in five teacher education institutions in five different regions in the north and centre of Vietnam.
Finally, using the Rasch model of measurement (Linacre, 2002 cited in Peeraer & Petegem, 2012),
they concluded that their instrument could be used with confidence.
2.2 Instruments on the use of ICT in pedagogy based on SAMR
As part of his work with the Maine Learning Technologies Initiative, Puentedura (2006) developed
the substitution, augmentation, modification and redefinition (SAMR) model of ICT which he
intended to encourage educators to significantly enhance the quality of education provided via
technology (Romrell, Kidder & Wood, 2014). According to Puentedura (2006 cited in Romrell et
al., 2014), at the level of substitution, ICT acts as a substitute for another technology, but with no
functional change. An example of this is when one uses a computer as a substitute for a typewriter
for purposes of producing documents, but without any significant change to the function of the
typewriter. At the level of augmentation, ICT acts as a substitute for another technology, but with
functional improvement. For example, one can use a computer as a substitute for a typewriter but
with a significant increase in functionality, such as spell checking. At the modification level, ICT allows
for activities to be redesigned. An example at the modification level is when ICT allows
learning processes to be redesigned to integrate technologies such as email and graphics packages.
Lastly, at the redefinition level, ICT allows for the creation of new tasks that were previously
inconceivable. According to Lubega et al. (2014), such tasks involve the use of visualization and
simulation tools as part of the learning activities. According to Puentedura (2013, cited in Romrell,
2014, p. 5), “learning activities that fall within the substitution and augmentation [S & A]
classification are said to enhance learning, while learning activities that fall within the modification
and redefinition [M & R] classifications are said to transform learning”.
In accordance with Green (2014) who contends that SAMR “is an extremely popular model” (p.
37), several researchers have used it to guide their studies on the use of ICT in pedagogy. For
example, Azama (2015) examined the integration of ICT by a beginning Japanese class in a public
high school in a rural town in California. He measured the integration of ICT using the SAMR
model. In particular, he reported that “technology enhanced activities… [were] developed based
on the SAMR model and students’ learning… documented through formative assessments and
personal reflections” (p. 21). In terms of results, Azama (2015) reported that,
although students’ responses did not show any particular SAMR stage being more engaging than
other stages, when students were asked to rate which technology tools they imagined they would
use in the future, significantly more students chose redefinition tasks over the other stages of
activities. Although students found some activities in the augmentation stage enjoyable, they
perceived those activities… as [mere] “tools of learning”. However, the activities in the
modification and redefinition stage[s] were perceived as useful by students for everyday purposes
(pp. 34 - 35).
Kihoza, Zlotnikova, Bada and Kalegele (2016) examined the technological knowledge,
competencies, skills, attitude, beliefs and readiness to integrate classroom technology of teacher
trainees and tutors. The trainees and tutors were from Morogoro Teacher Training College and
Mzumbe University both of Morogoro Region in Tanzania. They used an instrument structured
according to SAMR to collect data on the use of ICT in pedagogy from 206 respondents and used
descriptive statistics to show that the majority of the respondents had low pedagogical ICT
competencies. In particular they reported finding that, the “ICT competence level [that the]
majority reported was [that of] beginners” (p. 116). That is, the majority of the respondents were
at the substitution (S) level in SAMR terms. They went on to report that, in terms of the
“augmentation [A] attributes [that] were used to assess tutors’ and teacher trainees’ readiness to
integrate ICTs in classrooms”, the majority of the tutors reported being well prepared and
somewhat prepared. However for the teacher trainees, the majority reported either being poorly
prepared or not prepared at all (p. 117). They recorded similar patterns for the use of ICT at the
modification (M) and redefinition (R) levels.
Lubega et al. (2014) assessed the adoption of ICT in pedagogy by the academic staff and students
of Makerere University in Uganda. Using an instrument that they had structured as per the SAMR
model, they collected data from 600 respondents, which they analysed using descriptive statistics,
notably percentages. Hence Lubega et al. found that non-use of a number of ICT tools in
pedagogical processes in Makerere was highly prevalent. In particular, at the substitution (S) level
in SAMR terms, they reported finding that, “a number of teaching activities... [were] yet to be
computerized even at the basic substitution level” (p. 110). Then at the augmentation (A) level,
they summarized their finding by noting that, “more pedagogical activities in and outside the
classroom... [were] mainly ported onto substitution ICTs than augmentation ICTs” (p. 111). At the
modification (M) stage, they reported that, “the most common modification ICT... [was] the
Internet” (p. 112). Lastly with regard to the use of ICT at redefinition (R) level, they reported only
a trace of activities at that level.
Speirs (2016) assessed mobile device skills and the integration of technology into the lives of pre-service
teachers at the State University of New York (SUNY) Plattsburgh and practicing teachers in
New York State. Having structured his questionnaire using SAMR to categorise activities
commonly completed with mobile devices, Speirs (2016) used a class of 10 psychometrics
graduate students and their professor to ensure the face validity of the items. Then he collected
data from 53 respondents. In terms of analysis, he reported that “total values for each SAMR level
were calculated in accordance with the structure of the survey” (p. 15). Also, “scores for total
integration level (IntTot)… were evaluated for normality” (p. 16). Hence in terms of findings, he
reported that the “skewness for IntTot scores was highly… to the left, indicating that many
participants indicated high level[s] of technology integration into their lives” (p. 18).
Apart from empirical studies using SAMR, even literature reviews (e.g. Green, 2014; Romrell et
al., 2014) on the same are available. Green (2014) started generously by noting that “SAMR is an
extremely popular model” (p. 37) because “SAMR is clean and simple, which means it can be
easily adapted and interpreted in multiple ways” (p. 38). However she went on to question whether
Ruben Puentedura who had a PhD in Chemistry and whose name features “on several institutional
documents and news articles concerning his work in Physics, Chemistry and multimedia labs” (p.
38) could have originated the SAMR model independently. Green further asserted that,
“Puentedura has not published any results of the decade of study he claims to have conducted” (p.
39) as he developed SAMR. She thus went on to insinuate that Puentedura may have plagiarized
the idea of SAMR from Hughes (2005) whose article had identified three functions of technology
as replacement (which SAMR calls substitution), amplification (which SAMR calls augmentation)
and transformation (which corresponds to both of SAMR’s modification and redefinition
constructs together). Another shortcoming of SAMR, according to Green (2014), was that Ruben
Puentedura, who proposed it, had hardly used it in his own empirical studies. In particular, Green (2014)
contended that, “no peer reviewed papers on this model have been authored and published by
Puentedura” (p. 38-39). That implied that it is others (e.g. Azama, 2015; Kihoza et al., 2016;
Lubega et al., 2014; Speirs, 2016) who have empirically used SAMR and not its proponent,
Puentedura, himself. Green (2014) however, made a general critique of all such studies lamenting
that, “there is no record of research studies that could document the development and validation
of SAMR” (p. 39). In other words, the studies that have used the SAMR model have used it at face
value without checking its validity and reliability.
Romrell et al. (2014) devoted their energies to highlighting the SAMR model as a framework for
evaluating mobile learning (mLearning). Having defined mLearning as “learning that is
personalized, situated, and connected through the use of a mobile device” (p. 1), they went on to
review recent literature on mLearning and to provide examples of activities from studies, that fell
within each of the four classifications of the SAMR model. The classifications, as expected, were
substitution, augmentation, modification and redefinition.
As the reviewed studies suggest, and as Green (2014) claimed, among the studies (e.g. Azama,
2015; Kihoza et al., 2016; Lubega et al., 2014; Speirs, 2016) that have used the SAMR model
empirically, “there is no record of research studies that could document the development and
validation of SAMR” (p. 39). Indeed, as we prepared this article, our efforts to find one
scholarly article of that kind were in vain. In other words, the studies that have used the SAMR
model have used it at face value without checking its validity and reliability. On the basis of such
gaps, this study came in handy to test the validity and reliability of the SAMR model using the
teachers of mathematical disciplines in four universities in Uganda. In particular, the study sought,
first, to establish the validity and reliability of each of the four constructs (S, A, M & R). The
second objective of this study was to test whether the four constructs were independent. Third, the
study was to re-examine whether the SAMR model of ICT, as being made up of the four constructs
(S, A, M & R), was reasonable.
3. METHODOLOGY
3.1 Instrument
Using the survey design, data were collected using Puentedura's (2006) SAMR measure of the use
of ICT in pedagogy (UIP), which operationalises the use of ICT in pedagogy as having the
substitution (S), augmentation (A), modification (M) and redefinition (R) constructs. The
instrument was developed by Lubega et al. and was hence sourced from Lubega et al. (2014, Tables
III, IV, V & VIII). However, before collecting the data, a preliminary validation of the
instrument was carried out using face validity to see which items were applicable to teachers at
university level. Hence, the items on S reduced from 13 to 12; those on A from 16 to nine; those
on M from 10 to five; and lastly those on R from six to five. In total there were 31 items
measuring UIP. The items were scaled on a five-point Likert scale from a minimum of 1 for the
worst-case scenario (strongly disagree) to a maximum of 5 for the best-case scenario (strongly
agree).
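The Likert coding just described can be sketched in a few lines of Python. The scale anchors follow the instrument's 1-to-5 coding, but the responses below are made up for illustration and are not data from the study:

```python
# Coding a five-point Likert item: 1 = strongly disagree (worst case)
# through 5 = strongly agree (best case), then averaging across respondents.
SCALE = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
         "agree": 4, "strongly agree": 5}

# Hypothetical responses from four respondents to one item
responses = ["agree", "strongly agree", "neutral", "agree"]
scores = [SCALE[r] for r in responses]
print(sum(scores) / len(scores))  # 4.0
```

Averages of this kind are what later sections call the "average indexes" of the constructs.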
3.2 Sample
The sample comprised 261 academic staff teaching mathematical disciplines in four universities
in Uganda, namely Kyambogo (KyU), Makerere (Mak), Makerere University Business School
(MUBS) and Mountains of the Moon (MMU). The term “mathematical disciplines” was taken to
be broad and thus included a range of disciplines where mathematical skills are useful. Such
disciplines included Science, Technology, Engineering and Mathematics (STEM) disciplines and
their offshoots like Biomathematics, Finance, Management Science, Quantitative Data
Management, and Software Engineering. As suggested in Table 1, a typical respondent was aged
30 but below 40 years (62.9%); a male (64.0%); belonging to the Makerere University Business
School, MUBS (36.0%); having served for below five years as a lecturer at university level
(56.6%); holding a Masters degree (69.6%) as the highest qualification; and currently holding the
academic rank of Lecturer (48.8%).
Table 1: Background Characteristics of the Respondents

Item                                       Categories                           Frequency  Percent
Age group in years of the respondent       Up to 30                             33         15.7
                                           30 but below 40                      132        62.9
                                           40 and above                         45         21.4
Gender of the respondent                   Female                               94         36.0
                                           Male                                 167        64.0
University to which the respondent         Kyambogo                             50         19.2
belonged                                   Makerere                             87         33.3
                                           Makerere University Business School  94         36.0
                                           Mountains of the Moon                30         11.5
Tenure in years of lecturing at            Up to five                           141        56.6
university level of the respondent         Five but below 10                    85         34.1
                                           10 and above                         23         9.2
Highest academic qualification attained    Bachelors degree                     20         7.7
by the respondent                          Masters degree                       181        69.6
                                           PhD degree                           59         22.7
Academic rank of the respondent            Teaching Assistant                   20         7.7
                                           Assistant Lecturer                   68         26.2
                                           Lecturer                             127        48.8
                                           Senior Lecturer                      37         14.2
                                           Associate Professor                  6          2.3
                                           Professor                            2          0.8
3.3 Data Analysis
The validities of multi-item constructs of SAMR, namely S, A, M and R, were tested using
confirmatory factor analysis (CFA), while their reliabilities were tested using the Cronbach alpha
method. Pearson's linear correlation analysis was carried out to establish whether the constructs of
SAMR were independent, while exploratory factor analysis (EFA) helped with the re-assessment
of the structure of SAMR.
4. RESULTS
4.1 Validities and Reliabilities
The first objective of the study was to establish the validity and reliability of the measure for each
of the four constructs of SAMR. This was achieved via confirmatory factor analysis (CFA) and
Cronbach alpha on the items on each construct. The Kaiser rule or criterion (Kaiser, 1960, cited
in Khan, 2006, p. 690; Yong & Pearce, 2013, p. 85), also known as the Kaiser-Guttman rule (cited
in Schmidt, Baran, Thompson, Mishra, Koehler & Shin, 2009, p. 131), which stipulates that
factors with eigenvalues greater than one be considered significant, was used in the study. A
loading of 0.5 (Costello & Osborne, 2005) was used as the minimum for an item to be considered
as loading highly on a factor. For reliability tests, a benchmark of α = 0.7 (Tavakol & Dennick,
2011) for the Cronbach alpha was set. The results are presented in the subsequent tables (Tables
2 through 5).
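As a sketch of how these decision rules operate, the following Python fragment applies the Kaiser criterion and the variance-explained arithmetic to the eigenvalues later reported for the S construct (Table 2). The function names are ours, for illustration only:

```python
def retained_factors(eigenvalues, cutoff=1.0):
    """Kaiser criterion: retain only factors whose eigenvalue exceeds 1."""
    return [ev for ev in eigenvalues if ev > cutoff]

def variance_explained(eigenvalue, n_items):
    """Percentage of the total item variance that one factor accounts for."""
    return eigenvalue / n_items * 100

# Eigenvalues reported for the 12 items of the S construct (Table 2)
evs = [6.417, 1.536, 1.160]
print(retained_factors(evs))           # all three exceed 1.0, so all are kept
print(variance_explained(evs[0], 12))  # about 53.5 per cent
```

This reproduces the paper's arithmetic of eigenvalue divided by the number of items, e.g. 6.42/12 x 100 = 53.5%.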
4.1.1 Use of Substitution ICTs
According to Table 2, confirmatory factor analysis (CFA) reduced the 12 items of the first
construct (S) in SAMR to three factors. The factors had eigenvalues of 6.42, 1.54 and 1.16,
meaning that they accounted for 6.42/12 x 100 = 53.5%, 1.54/12 x 100 = 12.8% and 1.16/12 x 100 =
9.7% of the total variance among the 12 items, respectively. The loadings of the items on the
factors are given in Table 2 after a Varimax rotation, as recommended by Kline (1994, cited in
Foster, 1998, p. 207). Considering loadings of at least 0.5 as high, Table 2 suggests that seven
items (S3 – S5, S7 – S10) loaded highly on the first and main factor; five items (S1, S2, S4 –
S6) loaded highly on the second factor; and two others (S11 & S12) loaded highly on the third
and least of the significant factors. Given that "Kaiser's criterion will often yield too many
factors [to be] retained" (Khan, 2006, p. 690), for the sake of parsimony, as advised by Khan
(2006, pp. 690 - 691), only the five items (S3, S7 – S10) that loaded highly on only the first
and main factor were taken as the (most) valid items of S. Items S4 and S5 cross-loaded on the
first two factors and hence were dropped for being complex (Yong & Pearce, 2013, p. 84). The
reliability (Cronbach alpha, α) of the valid items (S3, S7 – S10) was 0.920, which, being
greater than 0.7, implied that the five items were reliable measures of S too.
Table 2: Loadings on Factors on the Use of Substitution ICTs (S)

Item  Description                                                               Factor 1  Factor 2  Factor 3
S1    I use ICTs to prepare my lecture notes, assignments and examinations                0.910
S2    I use presentation software (e.g. PowerPoint) to deliver my lectures                0.709
S3*   I upload my teaching and learning materials on electronic sites/
      devices (e.g. MUELE) for students to access                               0.711
S4    When supporting my students, I communicate to them using e-mail           0.571     0.662
S5    I refer my students to electronic databases for reference materials       0.569     0.634
S6    When supporting my students, I communicate to them using my cell phone              0.570
S7*   During my lectures, I use the smart boards/interactive boards
      installed in the lecture rooms for writing                                0.860
S8*   I encourage students to submit their course work assignments through
      e-mail                                                                    0.800
S9*   When communicating to my students, I use electronic notice boards         0.899
S10*  When supporting my students, I communicate to them through social
      media (e.g. blogs, chat rooms, discussion boards, Facebook, Instagram
      and Twitter)                                                              0.796
S11   I record my lectures on CDs or other media and give them to my
      students                                                                                      0.850
S12   I take video/audio recordings of myself while lecturing and use them
      in subsequent years to teach the same course to another cohort of
      students                                                                                      0.908
Eigenvalue                                                                      6.417     1.536     1.160
% variance explained                                                            53.5      12.8      9.7
Cronbach alpha (α) of the valid items = 0.920
*Valid items of S
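The Cronbach alpha figures in Table 2 and the later tables follow the standard formula, alpha = k/(k-1) x (1 - sum of item variances / variance of total scores). A minimal sketch using made-up Likert responses, not the study's data:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one inner list of scores per item, aligned so that
    position j in every inner list belongs to the same respondent j."""
    k = len(item_scores)
    item_vars = sum(pvariance(item) for item in item_scores)
    # Per-respondent total score across the k items
    totals = [sum(resp) for resp in zip(*item_scores)]
    total_var = pvariance(totals)
    return k / (k - 1) * (1 - item_vars / total_var)

# Illustrative responses: three items, four respondents
items = [[4, 5, 3, 4], [4, 4, 3, 5], [5, 4, 2, 4]]
print(round(cronbach_alpha(items), 3))  # 0.8
```

A result above the 0.7 benchmark cited earlier (Tavakol & Dennick, 2011) would be read as acceptable reliability.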
4.1.2 Use of Augmentation ICTs
According to Table 3, CFA reduced the nine items of the second construct (A) in SAMR to two
factors. The factors had eigenvalues of 5.07 and 1.17, meaning that the two factors accounted
for 56.3% and 13.0% respectively of the total variance among the nine items. The loadings of the
items on the factors are also given in Table 3 after a Varimax rotation. Considering loadings of
at least 0.5 as high, Table 3 suggests that only five items (A5 – A9) loaded highly on the first
and main factor, while the other four items (A1 – A4) loaded highly on the second and lesser of
the significant factors. As was the case for the construct S, and for the same reasons, only the
five items (A5 – A9) that loaded highly on the first and main factor were taken as the (most)
valid items of A. Their reliability (Cronbach alpha, α) of 0.861 implied that the five items
were also reliable measures of A.
Table 3: Loadings on Factors on the Use of Augmentation ICTs (A)

Item  Description                                                               Factor 1  Factor 2
A1    I use search engines (e.g. Google) to look for content in my discipline             0.834
A2    I use editorial tools (e.g. spell checker) in my word processor to
      correct grammatical errors in any document I process                                0.847
A3    I use editorial tools (e.g. Thesaurus) in my word processor to get
      alternative words to use in my documents                                            0.717
A4    I use online encyclopedias (e.g. Wikipedia) to make meaning of words
      or phrases that I do not understand                                                 0.695
A5*   I use digital libraries (e.g. MuLib and MakIR) as a source of useful
      content for my lectures                                                   0.781
A6*   I use the track changes tool in my word processor to review documents
      (e.g. student dissertations/theses)                                       0.645
A7*   I use internet group lists to contact my students in matters related
      to their academics                                                        0.785
A8*   I encourage my students to use Google docs to accomplish group course
      work assignments                                                          0.795     0.811
A9*   I use different videos to illustrate different case studies during my
      lectures                                                                  0.745
Eigenvalue                                                                      5.069     1.167
% variance explained                                                            56.3      13.0
Cronbach alpha (α) of the valid items = 0.861
*Valid items of A
4.1.3 Use of Modification ICTs
According to Table 4, CFA reduced the five items of the third construct (M) in SAMR to the ideal
situation of one factor. The factor had an eigenvalue of 3.08, meaning that the factor accounted for
61.5% of the total variance among the five items. The loadings of the respective items on the factor
are also given in Table 4. Considering loadings of at least 0.5 as high, Table 4 suggests that
all the five items (M1 – M5) loaded highly on the factor. Hence all of them were valid items
of M. Their reliability (Cronbach alpha, α) was 0.842, implying that the five items were also
reliable measures of M.
Table 4: Loadings on the Factor on the Use of Modification ICTs (M)

Item  Description                                                               Loading
M1    I assign students topics to research about from the Internet              0.517
M2    I lecture modules in my discipline using e-learning platforms
      (e.g. MUELE)                                                              0.892
M3    I use content authoring software when preparing my lectures               0.882
M4    I use online tools (e.g. RM Assessor) to assess my students               0.898
M5    I use video conferencing or Skype to teach my students when I am not
      at the University                                                         0.655
Eigenvalue                                                                      3.075
% variance explained                                                            61.5
Cronbach alpha (α) = 0.842
* All items were valid for M
4.1.4 Use of Redefinition ICTs
According to Table 5, CFA reduced the five items of the fourth and last construct (R) in SAMR to
the ideal situation of one factor. The factor had an eigenvalue of 3.46, meaning that the factor
accounted for 69.2% of the total variance among the five items. The loadings of the respective
items on the factor are also given in Table 5. Considering loadings of at least 0.5 as high,
Table 5 suggests that all the five items (R1 – R5) loaded highly on the factor. Hence all of
them were valid items of R. Their reliability (Cronbach alpha, α) was 0.888, implying that the
five items were reliable measures of R too.
Table 5: Loadings on the Factor on the Use of Redefinition ICTs (R)

Item  Description                                                               Loading
R1    I use open education resources (e.g. massive open online courses,
      MOOCs) as my lecturing materials                                          0.851
R2    I use electronic games when lecturing                                     0.514
R3    I use simulations (e.g. Second Life) when lecturing                       0.903
R4    I use e-learning platforms (e.g. MUELE) to encourage group discussions
      among my students                                                         0.900
R5    I use e-learning platforms (e.g. MUELE) to assess my students'
      learning                                                                  0.921
Eigenvalue                                                                      3.460
% variance explained                                                            69.2
Cronbach alpha (α) = 0.888
*All items were valid for R
4.2 Correlations among the SAMR Constructs
The second objective of the study was to test whether the four constructs in the SAMR model,
namely substitution (S), augmentation (A), modification (M) and redefinition (R), were
independent. Average indexes were computed from the valid items of the respective constructs in
Tables 2 - 5. The indexes were then correlated using Pearson's linear correlation. Table 6
(correlation matrix) suggests that all the four constructs were significantly inter-related.
Table 6: Inter-correlations of the SAMR Constructs

     S        A        M        R
S
A    0.667**
M    0.783**  0.727**
R    0.770**  0.703**  0.864**
** Correlation is significant at the 0.01 level
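The coefficients in Table 6 are ordinary Pearson correlations between the per-construct average indexes. A self-contained sketch, with hypothetical index values rather than the study's data:

```python
def pearson_r(x, y):
    """Pearson's linear correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical average indexes for two constructs over five respondents
s_index = [2.0, 3.4, 4.0, 1.8, 3.0]
m_index = [1.8, 3.0, 4.2, 2.0, 2.6]
print(round(pearson_r(s_index, m_index), 3))
```

Coefficients as high as those in Table 6 (0.667 to 0.864) are what prompt the question of whether the constructs are really distinct.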
4.3 Re-examining the Structure of SAMR
The third and last objective in the study was to re-examine whether the SAMR model of ICT
(Puentedura, 2006), as made up of the four constructs (S, A, M & R), was reasonable. Exploratory
factor analysis (EFA) reduced the 31 items in the SAMR instrument (Tables 2 - 5) to a number of
factors. However, as suggested in Table 7, only the first six factors were significant, since
their eigenvalues of 15.17, 2.40, 2.32, 1.25, 1.12 and 1.08 exceeded 1.00. These factors
explained 48.9%, 7.7%, 7.5%, 4.0%, 3.6% and 3.5% respectively of the joint variation in the 31
items. The items with high loadings (of at least 0.5) on the respective factors are also given
in Table 7 after a Varimax rotation. The question is: is the SAMR structure as suggested by
Puentedura (2006) discernible in Table 7? Table 7 suggests that only the first four of the
factors were meaningful, given that the sixth had only two cross-loading, and hence complex,
items, while the fifth had only two valid items and could be disregarded, in accordance with
Yong and Pearce (2013), who observe that "factors that have less than three variables… are…
undesirable" (p. 86). Thus, in agreement with the SAMR model, Table 7 suggested four factors
from the 31 items on the use of ICT.
Apart from Factor 3, which was dominated by only one construct (A), none of the other three
significant factors was dominated by items from only one construct of SAMR. In particular,
Factor 1 had five valid items (S3, S7, S8, S9 & S10) on the use of substitution ICTs; one valid
item (A7) on the use of augmentation ICTs; three valid items (M2 - M4) on the use of
modification ICTs; and three valid items (R3 – R5) on the use of redefinition ICTs. Factor 2 had
two valid items (S5 & S6) on the use of substitution ICTs, and three valid items (A5, A6 & A8)
on the use of augmentation ICTs. Factor 4 had two valid items (S11 & S12) on the use of
substitution ICTs, and one valid item (A9) on the use of augmentation ICTs. In summary, the
SAMR structure as suggested by Puentedura (2006) could hardly be discerned in Table 7.
Table 7: Factors, their eigenvalues, % variance explained and their highly loading items

Factor  Eigenvalue  % variance  Highly loading items (loadings in brackets)
1       15.17       48.9        S3 (0.689); *S4 (0.504); S7 (0.792); S8 (0.717); S9 (0.847);
                                S10 (0.696); **A4 (0.500); A7 (0.754); M2 (0.868); M3 (0.729);
                                M4 (0.697); ***R1 (0.540); R3 (0.691); R4 (0.868); R5 (0.885)
2       2.40        7.7         *S4 (0.626); S5 (0.643); S6 (0.647); A5 (0.781); A6 (0.628);
                                A8 (0.624)
3       2.32        7.5         A1 (0.751); A2 (0.762); A3 (0.692); **A4 (0.609)
4       1.25        4.0         S11 (0.799); S12 (0.893); A9 (0.531); ****R2 (0.529)
5       1.12        3.6         S1 (0.762); S2 (0.771)
6       1.08        3.5         ***R1 (0.600); ****R2 (0.583)
Footnote: Items prefixed with the same symbol (*S4, **A4, ***R1 and ****R2) were cross-loading,
and hence were dropped for being complex.
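The cross-loading rule applied in Table 7 (an item loading at least 0.5 on more than one factor is "complex" and is dropped) can be sketched as follows. The loadings shown are a small illustrative subset of the table, not a full reproduction:

```python
# Per-item loadings on each rotated factor; values at or above the cutoff
# on more than one factor mark an item as complex (cross-loading).
loadings = {
    "S4": [0.504, 0.626, 0.0],
    "S5": [0.0, 0.643, 0.0],
    "A4": [0.500, 0.0, 0.609],
}

def complex_items(loadings, cutoff=0.5):
    """Return the items that load at or above the cutoff on 2+ factors."""
    return [item for item, ls in loadings.items()
            if sum(l >= cutoff for l in ls) > 1]

print(complex_items(loadings))  # ['S4', 'A4']
```

With the full Table 7 loadings, this rule flags S4, A4, R1 and R2, the four starred items the footnote says were dropped.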
5. DISCUSSION
The first objective in the study was to establish the validity and reliability of each of the
four constructs, namely substitution (S), augmentation (A), modification (M) and redefinition
(R), in
Puentedura's (2006) SAMR model of ICT. Confirmatory factor analysis (CFA) showed that the items
on construct S were categorisable into three factors (see Table 2, i.e. S3, S7 – S10 as the most
important factor; S1, S2, S4 – S6 as another, less significant factor; and S11 and S12 forming
the third and least significant factor). This means that if, for example, all the items on S
were used as an independent variable in a model, the dependent variable would in effect be
regressed on three different factors of S, and not one as suggested by the SAMR model. If the
question is one of looking for the valid items on construct S, then, as we did in this paper
(subsection 4.1.1), the items (S3, S7 – S10) that loaded highly on only the first and main
factor were the most optimal set, given their high reliability (Cronbach alpha, α) of 0.920.
Dropping the other seven items (S1, S2, S4 – S6, S11 & S12) in favour of only five implies that
the construct S of SAMR may be unnecessarily long.
CFA showed that the items on construct A were categorisable into two factors (see Table 3, i.e.
A5 – A9 as the most important factor; and A1 – A4 forming a second but less important factor).
This means that if, for example, all the items on A were used as an independent variable in a
model, the dependent variable would in effect be regressed on two different factors of A, and
not one as suggested by the SAMR model. If the question is one of looking for the valid items on
construct A, then, as we did in this paper (subsection 4.1.2), the items (A5 – A9) that loaded
highly on only the first and main factor were the most optimal set, given their high reliability
(Cronbach alpha, α) of 0.861. Dropping the other four items (A1 – A4) in favour of only five
implies that the construct A of SAMR may be unnecessarily long. CFA showed that the items on
construct M could be reduced to one factor (see Table 4). This meant that all the five items on
construct M, which loaded highly on the one factor, were valid items of M. They were also
reliable, given their high reliability (Cronbach alpha, α) of 0.842. That all the five items (M1
– M5) were valid and reliable suggests that the construct M of SAMR may be of optimal length.
CFA showed that the items on construct R could also be reduced to one factor (see Table 5). All
the five items (R1 – R5) loaded highly on the factor, implying that they were valid. They were
also reliable (Cronbach alpha, α = 0.888), suggesting that they were an ideal measure of R.
The second objective in the study was to test whether the four constructs of SAMR, namely S, A,
M and R, were independent. The results of correlation analysis (Table 6) suggested that all the
four constructs were inter-related. This puts into question whether they really measure
different things, which is food for thought for future researchers. It could also imply that a
researcher carrying out a study using the constructs of SAMR as explanatory variables does not
have to include all of them. The third and last objective of the study was to re-examine whether
the four-factor SAMR model of ICT, that is, as made up of the four constructs (S, A, M & R), was
reasonable. Exploratory factor analysis, EFA (Table 7), showed that the SAMR structure as
suggested by Puentedura (2006) and operationalised by Lubega et al. (2014) could hardly be
replicated using our data set. While our exploratory study using teachers of mathematical
disciplines in four universities in Uganda cannot be used to dismiss Puentedura's (2006)
suggestions and the operationalisation of Lubega et al. (2014), it still raises plenty of food
for thought for future researchers. Is the four-factor SAMR model really reasonable? This
question still begs for research in several contexts, and from researchers including Ruben
Puentedura himself, whom Green (2014) accused of hardly having answered it.
6. CONCLUSION
The use of ICT in pedagogy (UIP) provides opportunities for the learner to access an abundance
of information using multiple information resources and to view information from multiple
perspectives. As such, UIP increases learner motivation and engagement by facilitating the
acquisition of basic skills (Noor-Ul-Amin, 2013). Given the importance of UIP, in the study
being concluded, an attempt was made to test the validity and reliability of one of the
instruments that measure UIP by teachers and students. The instrument was the substitution (S),
augmentation (A), modification (M) and redefinition (R), or SAMR, model (Puentedura, 2006, as
operationalised by Lubega et al., 2014). The context was that of the teachers of mathematical
disciplines in four universities in Uganda. The study was among the very first to report
findings on the validity and reliability of the SAMR model of ICT. CFA suggested that while some
items of S and A respectively were not valid, all the items of each of M and R were valid. The
four constructs were highly inter-related. Exploratory factor analysis (EFA) revealed that the
four-factor SAMR model of operationalising ICT was questionable. Hence a call was made to
researchers to continue testing the SAMR model in different contexts with the intent of refining
it. The study had limitations; obviously, the sample scope was limited. More studies on SAMR
could be carried out among teachers in the four universities other than those of mathematical
disciplines. Studies could even be extended to other universities in Uganda and to other
countries.
REFERENCES
AZAMA, Y. 2015. Effective integration of technology in a high school beginning Japanese
class. Capstones and Theses at Digital Commons. Paper 517. Retrieved from:
http://digitalcommons.csumb.edu/caps-theses.
COSTELLO, A. B., AND OSBORNE, J. W. 2005. Best practices in exploratory factor analysis:
Four recommendations for getting the most from your analysis. Practical Assessment Research
and Evaluation, 10 (7). Retrieved from: http://pareonline.net/getvn.asp?v=10&n=7.
FOSTER, J. J. 1998. Data analysis using SPSS for Windows. London, UK: Sage.
GREEN, L. S. 2014. Through the looking glass: Examining technology integration in school
librarianship. Knowledge Quest, 43(1), 36-43.
HUGHES, J. 2005. The role of teacher knowledge and learning experiences in forming
technology-related pedagogy. Journal of Technology and Teacher Education, 13(2), 277-302.
JAMIESON-PROCTOR, R. M., WATSON, G., FINGER, G., GRIMBEEK, P., & BURNETT,
P. C. 2006. Measuring the use of information and communication technologies (ICTs) in the
classroom. Computers in the Schools, 24(1), 167-184.
KAISER, H. F. 1960. The application of electronic computers to factor analysis. Educational
and Psychological Measurement, 20, 141 - 151. doi: 10.1177/001316446002000116
KHAN, J. H. 2006. Factor analysis in counseling psychology research, training, and practice:
Principles, advances, and applications. The Counseling Psychologist, 34 (5), 684 - 718. doi:
10.1177/0011000006286347
KIHOZA, P., ZLOTNIKOVA, I., BADA, J., AND KALEGELE, K. 2016. Classroom ICT
integration in Tanzania: Opportunities and challenges from the perspectives of TPACK and SAMR
models. International Journal of Education and Development Using Information and
Communication Technology, 12(1), 107-128.
KLINE, P. 1994. An easy guide to factor analysis. New York, US: Routledge.
LINACRE, J. M. 2002. Understanding Rasch measurement: Optimising rating scale category
effectiveness. Journal of Applied Measurement, 31 (1), 85 - 106.
LLOYD, M. 2005. Towards a definition of the integration of ICT in the classroom. In AARE
(Eds.), Proceedings of AARE ‘05 on Education research – creative dissent: Constructive solutions.
Parramatta, New South Wales.
LUBEGA, T. J., MUGISHA, A. K., AND MUYINDA, P. B. 2014. Adoption of the SAMR
model to assess ICT pedagogical adoption: A case of Makerere University. International Journal
of e-Education, e-Business, e-Management and e-Learning, 4(2), 106-115.
NOOR-UL-AMIN, S. 2013. An effective use of ICT for education and learning by drawing on
worldwide knowledge, research and experience: ICT as a change agent for education. Scholarly
Journal of Education, 2(4), 38-54.
PAPANASTASIOU, E. C., AND ANGELI, C. 2008. Evaluating the use of ICT in education:
Psychometric properties of the survey of factors affecting teachers teaching with technology (SFA-
T3). Journal of Educational Technology & Society, 11 (1), 69 - 86. Retrieved from:
http://www.jstor.org/stable/jeductechsoci.11.1.69.
PEERAER, J., AND PETEGEM, P. V. 2012. Measuring integration of information and
communication technology in education: An item response modeling approach. Computers &
Education, 58, 1247-1259.
PROCTOR, R., WATSON, G., AND FINGER, G. 2003. Measuring information and
communication technology (ICT) curriculum integration. Computers in Schools, 20(4), 67-87.
PUENTEDURA, R. R. 2006, November 28. Transformation, technology, and education in the
state of Maine. [Web log post]. Retrieved from:
http://www.hippasus.com/rrpweblog/archives/000095.html.
PUENTEDURA, R. R. 2013, May 29. SAMR: Moving from enhancement to transformation.
[Web log post]. Retrieved from: http://www.hippasus.com/rrpweblog/archives/000095.html.
ROMRELL, D., KIDDER, L. C., AND WOOD, E. 2014. The SAMR model as a framework
for evaluating mlearning. Journal of Asynchronous Learning Networks, 18(2), 1-15.
SCHMIDT, D. A., BARAN, E., THOMPSON, A. D., MISHRA, P., KOEHLER, M. J., AND
SHIN, T. S. 2009. Technological pedagogical content knowledge (TPACK): The development and
validation of an assessment instrument for preservice teachers. Journal of Research on Technology
in Education, 42(2), 123-149.
SPEIRS, M. N. 2016. Assessing teachers’ mobile device skills and the integration of
technology into their lives. Psychology at Digital Commons. Paper 6. Retrieved from:
http://digitalcommons.plattsburgh.edu/psychology-theses.
TAVAKOL, M., AND DENNICK, R. 2011. Making sense of Cronbach’s alpha. International
Journal of Medical Education, 2, 53-55. doi: 10.5116/ijme.4dfb.8dfd
WILSON, M. 2005. Constructing measures: An item response modeling approach. Mahwah, NJ:
Erlbaum.
YONG, A. G., AND PEARCE, S. 2013. A beginner’s guide to factor analysis: Focusing on
exploratory factor analysis. Tutorials in Quantitative Methods for Psychology, 9(2), 79-94.
Empirical Study of Continuous Change of Open Source System
Michael Abayomi Olatunji, Rufus Olalere Oladele and Amos Orenyi Bajeh
Department of Computer Science
University of Ilorin, Ilorin, Nigeria
P. M. B. 1515, Ilorin, Nigeria.
[email protected], [email protected], [email protected]
___________________________________________________________________________
ABSTRACT
This paper describes an empirical investigation of the first law of software evolution, the law
of continuing change, using four open source systems in evolution as case studies. The four
systems have 354 versions altogether, with a combined 55 years of evolution and running
existence. Three code analysis tools were used to parse the source code and generate the metrics
needed to quantify the continuing change property. Statistical functions and descriptive
statistics were used for metrics processing and analysis. The results of the study confirm the
law of continuing change. In particular, the results show that: (i) in the pre-release versions
many changes occur, but the changes reduce as the first release candidate of version 1
approaches; in the stable branch, after the first official release, more changes occur close to
the release of another stable or development major release; (ii) more changes occur in function
bodies than in function interfaces; (iii) changes in the stable branches are fewer than those in
the development branches; and (iv) function interface changes are correlated with function
statement changes. Within the domain of the case studies investigated, every system exhibited
shrinkage in the number of functions and lines of code at some point in the course of evolution.

Author's Address: Michael Abayomi Olatunji, Rufus Olalere Oladele and Amos Orenyi Bajeh,
Department of Computer Science, University of Ilorin, Ilorin, Nigeria, P. M. B. 1515, Ilorin,
Nigeria. [email protected], [email protected], [email protected] "Permission to make
digital or hard copies of part or all of this work for personal or classroom use is granted
without fee provided that copies are not made or distributed for profit or commercial advantage
and that copies bear this notice and the full citation on the first page. Copyrights for
components of this work owned by others than IJCIR must be honored. Abstracting with credit is
permitted. To copy otherwise, to republish, to post on servers or to redistribute to lists,
requires prior specific permission and/or a fee." © International Journal of Computing and ICT
Research 2008. International Journal of Computing and ICT Research, ISSN 1818-1139 (Print), ISSN
1996-1065 (Online), Vol. 11, Issue 1, pp. 31 - 52, June 2017.
Keywords: Lehman Law, Empirical investigation, Open Source Systems, Software Metrics,
Software Evolution, Quantification.
IJCIR Reference Format:
Michael Abayomi Olatunji, Rufus Olalere Oladele and Amos Orenyi Bajeh. Empirical Study of
Continuous Change of Open Source System, Vol. 11, Issue 1, pp. 31 - 52.
http://ijcir.mak.ac.ug/volume11-issue1/article3.pdf
INTRODUCTION
Software evolution is the process by which software is modified and adapted to its changing
environment in order to maintain relevance. Software evolution became a field of research owing
to the pioneering empirical studies of Meir Lehman, which he started about 47 years ago using
IBM systems as case studies. Lehman aimed at formulating a theory of software evolution from an
empirical and statistical perspective; he intended to bring out commonalities or invariants
within software systems. In the course of pursuing this aim, he arrived at three invariants,
which were postulated as the laws of software evolution in 1974. After years of continued
research in software evolution, more invariants were observed and postulated as laws; by 1996,
the laws of software evolution were eight in number, among which is the law of continuing
change, which is further investigated in this study. The laws of software evolution are said to
refer only to software systems classified as Evolutionary-type systems (E-type systems). These
are systems that solve problems involving people or the real world; they are reflections of
human processes. E-type systems can be Open Source Systems (OSS) or commercial/proprietary
systems, which are closed-source systems developed for a particular organisation.
The law of continuing change was the first of the three laws of software evolution postulated in 1974; it states that E-type systems must be continually adapted, else they become progressively less satisfactory [Lehman 96b]. Since user requirements change with time, software systems are inevitably subject to change and must evolve to continually meet the needs of their users; otherwise the software will, with time, fail to meet users' needs and lose relevance. The other laws of software evolution, which are not the focus of this paper, are stated as follows:
Increasing complexity (1974): as an E-type system evolves, its complexity increases unless work is done to maintain or reduce it.
Self-regulation (1974): the E-type system evolution process is self-regulating, with the distribution of product and process measures close to normal.
Conservation of organisational stability (1980): the average effective global activity rate in an evolving E-type system is invariant over the product lifetime (invariant work rate).
Conservation of familiarity (1980): as an E-type system evolves, all associated with it (developers, sales personnel and users, for example) must maintain mastery of its content and behaviour to achieve satisfactory evolution. Excessive growth diminishes that mastery; hence the average incremental growth remains invariant as the system evolves.
Continuing growth (1980): the functional content of an E-type system must be continually increased to maintain user satisfaction over its lifetime.
Declining quality (1996): the quality of an E-type system will appear to be declining unless it is rigorously maintained and adapted to changes in its operational environment.
Feedback system (first stated in 1974, formalised in 1996): the E-type system evolution process constitutes a multi-level, multi-loop, multi-agent feedback system and must be treated as such to achieve significant improvement over any reasonable base.
Lehman studied seven large commercial software systems to derive the laws of software evolution. There are many differences between commercial systems and OSS. For instance, the development of OSS is done by programmers scattered around the world. Also, in OSS, development and evolution/maintenance proceed concurrently, whereas in commercial systems they happen at different times. Some studies [Godfrey and Tu 2000; Herraiz 2008; Fernandez-Ramil et al. 2008; Israeli and Feitelson 2010; Neamtiu et al. 2013; Pirzada 1988; Alenezi and Almustafa 2015; Eick et al. 2001; Vasa 2010] show that some of the laws do not hold, or hold only partially, in OSS. The laws therefore have to be investigated with OSS in order to check their applicability and to arrive at a theory of software evolution, especially given the large amount of open source code now available in diverse repositories on the Internet. This paper is an attempt towards validating the first law, the law of continuing change, using a new set of metrics.
RELATED WORKS
Since the promulgation of Lehman's laws [Lehman 74; Lehman 78; Lehman 79; Lehman 80; Lehman 85a; Lehman 85b; Lehman 85c; Lehman 89; Lehman 90; Lehman 96], several studies have been carried out that attempt to validate and also refine the laws. The last expanded version of the laws, published in 1996 [Lehman 96b], has remained unmodified since then; however, most studies questioning or confirming their validity have been published since 1996.
Lawrence [1982], in his study on software evolution dynamics, validated the law of continuing change using the number of modules in each release as the metric.
Pirzada [1988] addressed the validity of Lehman's laws of software evolution using a statistical approach in his PhD thesis, which investigated the evolution of several flavours of UNIX. One of the results of Pirzada's study showed that all the UNIX flavours verified the first law, of continuing change. He used the number of modules in a release as the metric.
Godfrey and Tu [2000] studied the Linux kernel in an attempt to verify the laws of software evolution in the context of Open Source Software (OSS) and found that it was evolving at an accelerating pace and obeys the law of continuing change. Godfrey and Tu used system and module size as metrics for quantifying the law of Continuing Change.
Koch [2007] addressed the validity question with a large sample of software projects, precisely 8,621 projects from the SourceForge.net repository, using SLOC as the metric and graphical statistical evaluation. He observed that most of the projects grow either linearly or at a decreasing rate, in accordance with the laws; however, about 40% of the projects showed a super-linear growth pattern, which conflicts with the laws. Koch speculated that the cause of the super-linear growth might be a certain organisational model, which he called the chief programmer team, a term that originated at IBM in the 1970s.
Vasa [2010] studied 40 large open source systems using size and complexity metrics at the class level. Support was found for the law of continuing change, the law of self-regulation, the law of conservation of familiarity and the law of continuing growth.
Israeli and Feitelson [2010] also studied the Linux kernel, investigating whether it fulfils the laws. They found that the kernel's growth was initially super-linear and became linear from the 2.5 release; the law of continuing change is therefore verified, albeit at a varying change rate. Israeli and Feitelson concluded that Linux confirms most of Lehman's laws, especially those of growth, continuing change and stability. They used the number of source files counted in the arch and drivers directories of the Linux kernel as the metric to quantify the continuing change law.
In a study of 705 releases of 9 open source software projects, Neamtiu et al. [2013] reported that only the laws of continuing change and continuing growth were confirmed for all programs. The Neamtiu research group collected the datasets used in the present study. As metrics for this law, they used the cumulative number of changes to program elements (i.e., functions, types, and global variables).
Alenezi and Almustafa [2015] conducted an empirical investigation of OSS evolution, considering five OSS of different sizes and domains and using Cyclomatic Complexity and SLOC as metrics. They found support for the applicability of the laws of Continuing Change, Increasing Complexity and Continuing Growth to OSS.
In this paper, aside from the function count metric used in previous studies, the law of Continuing Change is also quantified using the following new metrics: parameters in functions, returns in functions, statements in function bodies, and the incremental differences in each of the following: function count, parameters, returns and function statements.
RESEARCH METHOD
SAMPLE OSS (CASE STUDIES)
We downloaded the datasets used as case studies for this paper from the UCR online repository. The repository contains the merged source code of 9 evolutionary OSS written in the C language. The main reason for using a dataset placed on an Internet repository by a group of evolution researchers, rather than trying to download all the publicly available official releases of the applications used as case studies, is that quantification of a law of software evolution must be based on empirical results that are verifiable and repeatable, and made on a large scale, so that statistically significant conclusions can be reached [Sjøberg et al. 2008]. If software evolution is analysed with data that is not available to third parties, it cannot be verified, repeated or replicated, and it is dangerous to build a theory on empirical studies that do not fulfil those requirements.
The datasets used as case studies in this paper are well-evolved open source applications, namely: Bison, Samba, SQLite and Vsftpd. These were selected because they have a long evolution history, are sizable, and are actively maintained.
OVERVIEW OF THE APPLICATIONS USED AS CASE STUDIES
Table 1 presents the sample OSS used for the empirical analysis.
Table 1: Sample OSS

Bison: a general-purpose parser generator, which can be used to develop a wide range of language parsers, from those used in simple desk calculators to complex programming languages.

Version             v1      v1.3    v1.5    v2.4.3
No. of functions    128     223     572     843
No. of parameters   71      249     1003    1631
No. of returns      223     456     1062    1481
Statements          3646    7665    13214   19449

Samba: a tool suite that facilitates Windows-UNIX/Linux interoperability. It is based on the common client/server protocols Server Message Block (SMB) and Common Internet File System (CIFS).

Version             v1.5.14  v1.9.18  v2.2.12  v3.3.1
No. of functions    120      940      4001     13157
No. of parameters   291      2071     10902    37292
No. of returns      379      2850     17142    69626
Statements          3039     25373    118597   506158

VSFTPD: "Very Secure FTP Daemon", the FTP server in major Linux distributions such as Ubuntu, CentOS and Fedora. It is licensed under the GNU General Public License and supports IPv6.

Version             v0.0.9  v1.1.0  v1.2.2  v2.1.0
No. of functions    325     401     450     590
No. of parameters   414     641     724     939
No. of returns      461     564     719     997
Statements          2987    4342    5470    7098

SQLite: an implementation of a self-contained SQL database engine, provided as a library in the C programming language. It is not a client-server database management system, and it comes with a "shell" that can be used for command-line interaction.

Version             v1.0.0  v2.0.7  v3.1.1  v3.6.11
No. of functions    167     303     391     1284
No. of parameters   393     677     886     3028
No. of returns      648     1197    1479    3556
Statements          6926    11527   13814   31126
Table 1 gives information on some selected versions of the sample OSS. We analysed all the versions of the sample OSS available on the UCR online repository at the time of carrying out this study. Bison has 33 versions (v1.0 - v2.4.3) released over a period of 22 years. 89 versions of Samba, released over a period of 15 years and growing from 5514 LOC to more than 1,000,000 LOC, were analysed. 60 versions of VSFTPD were analysed, as were 172 versions of SQLite, from its first version v1.0 of 17723 LOC to v3.6.11 of 65108 LOC.
Measurement Tools and Metrics
Two static code analysis tools were selected, namely Resource Standard Metrics (RSM) and PMCCABE. They consume C/C++ and Java source code files and generate diverse metrics. The RSM tool was used to determine file function count, total function parameters, total function returns and the number of statements in functions.
PMCCABE is a command-line tool in the Linux environment; it calculates McCabe-style Cyclomatic Complexity for C/C++ source code, statements per function, lines of code per function, blank lines and the like. PMCCABE reports per-function metrics and was run with Unix scripts at the Debian command line. Metric values from the PMCCABE tool were used to trace individual functions and their metric values for analysis purposes. The metric values of the two tools are correlated, which gives confidence that they are correct; nevertheless, visual inspection was still carried out.
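As an illustration, per-function output in the style of PMCCABE can be aggregated into version-level totals with a short Unix pipeline. This is a sketch under assumptions: the whitespace-separated output format below (statement count in the third column) and the file name metrics.txt are hypothetical and should be checked against the installed tool's man page.

```shell
# Hypothetical pmccabe-style output: modified McCabe, traditional McCabe,
# statement count, first line, length, location.
cat > metrics.txt <<'EOF'
8 10 63 1 47 parse.c(1): yyparse
3 3 12 49 20 parse.c(49): yyerror
EOF

# Count the functions and sum the per-function statement counts,
# yielding version-level FC and FS values.
awk '{ functions++; statements += $3 }
     END { printf "functions=%d statements=%d\n", functions, statements }' metrics.txt
```

In the measurement workflow described above, a loop over the source tree of each version would feed such totals into the per-version metric series.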
As regards metrics for the Continuing Change law, others, including Neamtiu et al. [2013], have used metrics such as the number of functions in each release, system size, module size, function modification, function complexity, changes to types, global variables and the like. This study uses function count, parameters in functions, returns of functions, statements in function bodies, and the incremental differences in each of the following: function count, parameters, returns and statements in functions. Apart from the function count metric, these metrics have not been used in previous studies. The metrics are defined as follows.
i. Function Count (FC): the number of functions in a software version.
ii. Functions Parameters (FP): the sum of all the parameters in the functions of a software version.
iii. Functions Returns (FR): the sum of the return statements in the functions of a software version.
iv. Functions Statements (FS): the sum of the executable expressions that terminate with a semi-colon.
v. Function Count Incremental Difference (FC-ID): the difference between the number of functions in a system version and that of the immediately preceding version.
vi. Functions Parameters Incremental Difference (FP-ID): the difference between the sum of parameters in the functions of a software version and that of the immediately preceding version.
vii. Functions Returns Incremental Difference (FR-ID): the difference between the sum of returns in the functions of a software version and that of the immediately preceding version.
viii. Functions Statements Incremental Difference (FS-ID): the difference between the sum of statements in the functions of a software version and that of the immediately preceding version.
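Given the per-version totals for any of these metrics, the incremental-difference series is simply the sequence of consecutive differences. A minimal sketch in Python, applied to the Bison function counts from Table 1 (the helper name incremental_differences is illustrative):

```python
def incremental_differences(per_version_totals):
    """FC-ID, FP-ID, FR-ID or FS-ID: the change between each
    version and its immediate predecessor."""
    return [curr - prev
            for prev, curr in zip(per_version_totals, per_version_totals[1:])]

# Bison function counts for v1, v1.3, v1.5 and v2.4.3 (Table 1)
bison_fc = [128, 223, 572, 843]
print(incremental_differences(bison_fc))  # [95, 349, 271]
```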
Continuing change: E-type software systems must be continually adapted, else they become progressively less satisfactory. E-type software systems adapt to their environment and to changing or growing requirements from their users. The software systems considered as case studies are widely used and well maintained, so they are good candidates for investigating the law of continuing change. Furthermore, changes in the form of incremental differences in function interfaces (parameters and returns) and in function bodies (function statements) depict the continuing change occurring to software systems in evolution. As the software is continually adapted to its environment, modifications to the function interfaces and bodies are inevitable.
Results and Discussion
Results
Having measured the sample OSS with the measurement tools, the following graphs depict the relationships between the selected metrics. Figures 1 to 9 depict the change behaviour of the four OSS: they show how the incremental differences in function counts, parameters, returns and statements change in the four sample OSS.
Figure 1. Graph of Function Count Incremental Difference (FC-ID), Function Parameters Incremental Difference (FP-ID), Function Returns Incremental Difference (FR-ID) and Function Statements Incremental Difference (FS-ID) against vsftpd pre-versions of v1.0 on the horizontal axis.
Figure 2a. Graph of Function Parameters Incremental Difference (FP-ID), Function Returns Incremental Difference (FR-ID) and Function Count Incremental Difference (FC-ID) against vsftpd version 1.0 releases on the horizontal axis.
Figure 2b. Graph of Function Statements Incremental Difference (FS-ID) against vsftpd version 1.0 releases on the horizontal axis.
Figure 3. Graph of Function Count Incremental Difference (FC-ID), Function Parameters Incremental Difference (FP-ID), Function Returns Incremental Difference (FR-ID) and Function Statements Incremental Difference (FS-ID) against vsftpd version 1.1 releases on the horizontal axis.
Figure 4. Graph of Function Count Incremental Difference (FC-ID), Function Parameters Incremental Difference (FP-ID), Function Returns Incremental Difference (FR-ID) and Function Statements Incremental Difference (FS-ID) against vsftpd version 0.0.9 to version 2.1.0 on the horizontal axis.
Figure 5a. Graph of Function Count Incremental Difference (FC-ID), Function Parameters Incremental Difference (FP-ID) and Function Returns Incremental Difference (FR-ID) against bison version 1.0 to version 2.4.3 on the horizontal axis.
Figure 5b. Graph of Function Statements Incremental Difference (FS-ID) against bison version 1.0 to version 2.4.3 on the horizontal axis.
Figure 6a. Graph of Function Parameters Incremental Difference (FP-ID), Function Count Incremental Difference (FC-ID) and Function Returns Incremental Difference (FR-ID) against samba version 1.5.14 to version 3.3.1 on the horizontal axis.
Figure 6b. Graph of Function Statements Incremental Difference (FS-ID) against samba version 1.5.14 to version 3.3.1 on the horizontal axis.
Figure 7a. Graph of Function Count Incremental Difference (FC-ID), Function Parameters Incremental Difference (FP-ID) and Function Returns Incremental Difference (FR-ID) against sqlite version 1.0.0 to version 3.6.11 on the horizontal axis.
Figure 7b. Graph of Function Statements Incremental Difference (FS-ID) against sqlite version 1.0.0 to version 3.6.11 on the horizontal axis.
Figure 8. Graph of Function Count Incremental Difference (FC-ID), Function Parameters Incremental Difference (FP-ID), Function Statements Incremental Difference (FS-ID) and Function Returns Incremental Difference (FR-ID) against sqlite version 2.1.0 to version 2.1.7 on the horizontal axis.
Figure 9. Graph of Function Count Incremental Difference (FC-ID), Function Returns Incremental Difference (FR-ID), Function Parameters Incremental Difference (FP-ID) and Function Statements Incremental Difference (FS-ID) against sqlite version 2.4.0 to version 2.4.12 on the horizontal axis.
Discussion
In Figure 1, it can be seen that in the evolution of the vsftpd systems there were many early changes in the pre-versions before the first main version 1.0, but close to the first major release candidate of v1.0 the changes reduced greatly, as the trend flattens along the horizontal axis.
Figures 2a and 2b show changes within the vsftpd version 1.0 releases. It is noted that the changes within the stable branch of version 1.0 fall within a short range, i.e. there is little change; precisely, in Fig. 2a the FC-ID range is (1 - (-1)) = 2, the FP-ID range is (3 - (-3)) = 6 and the FR-ID range is (1 - (-1)) = 2, while the FS-ID range in Fig. 2b is (53 - (-53)) = 106.
Figure 3 shows changes in the function components of the vsftpd v1.1 releases; there is more change within the development branch of vsftpd v1.1 than in vsftpd v1.0, as the range of the changes in the components is wider. The highest peak in the FS-ID trend in v1.1 is 150 while its lowest trough is -30, giving a range of (150 - (-30)) = 180, whereas the corresponding FS-ID range in v1.0 is (53 - (-53)) = 106; the FC-ID range in v1.1 is (12 - (-3)) = 15 against (1 - (-1)) = 2 in v1.0; and the FR-ID range in v1.1 is (13 - (-4)) = 17 against (1 - (-1)) = 2 in v1.0. The FP-ID trend behaves in the same way. There are also more peaks and troughs in the incremental difference trends of v1.1 than in those of v1.0. Nevertheless, there is continuing change in both the development branch and the stable branch.
Figure 4 shows that more changes occur in the function statements than in the function interfaces, in terms of parameters and returns. It also depicts all the trends from vsftpd version 0.0.9 to v2.1.0. In contrast to the vast reduction in changes close to the first main version 1.0, in the pre-versions leading to a later stable release more changes take place close to the stable release candidate version. The changes within the stable branches are lower than in the development branches, but there is continuing change in both. The standard deviation of the incremental difference of statements in functions of vsftpd from v0.0.9 to v2.1.0 is 161, the skewness is 4.6, the kurtosis is 27 and the mean is 69.
The standard deviation of the incremental difference of returns of vsftpd from version 0.0.9 to version 2.1.0 is 29, the skewness is 5.9 and the kurtosis is 40. Standard deviation is a measure of how widely values are dispersed from the average value, so standard deviations of 161 and 29 for the incremental differences of functions' statements and returns respectively show that there is variance/dispersion, which depicts continuing change. Kurtosis characterises the relative peakedness or flatness of a distribution compared with the normal distribution: positive kurtosis indicates a relatively peaked distribution, while negative kurtosis indicates a relatively flat one. The kurtosis values of 27 and 40 for the incremental differences of statements and returns respectively show the relative peakedness of the distributions of the metric values. Skewness characterises the degree of asymmetry of a distribution around its mean: positive skewness indicates a distribution with an asymmetric tail extending toward more positive values, while negative skewness indicates a tail extending toward more negative values. The skewness values of the incremental differences of statements and returns in the vsftpd systems are 4.6 and 5.9, so the asymmetric tails of both distributions extend towards positive values. In all the descriptive statistics considered for vsftpd version 0.0.9 to version 2.1.0, i.e. the graphs, standard deviation, skewness and kurtosis, change has been depicted.
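These descriptive statistics can be reproduced from any incremental-difference series. A minimal Python sketch follows; the sample-based standard deviation and the moment-based skewness and excess-kurtosis formulas are assumptions, since the paper does not state which estimators were used, and the input series is hypothetical:

```python
import math

def descriptive_stats(xs):
    """Mean, sample standard deviation, moment-based skewness and
    excess kurtosis of a series of incremental differences."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    skew = sum((x - mean) ** 3 for x in xs) / (n * sd ** 3)
    kurt = sum((x - mean) ** 4 for x in xs) / (n * sd ** 4) - 3
    return mean, sd, skew, kurt

# A hypothetical FS-ID series: mostly small changes plus one large jump,
# which yields positive skewness and positive (peaked) excess kurtosis.
mean, sd, skew, kurt = descriptive_stats([2, -3, 5, 1, 0, 4, 160])
print(skew > 0 and kurt > 0)  # True
```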
Figure 8 shows the graphs of the incremental differences in function counts, parameters, returns and statements for the sqlite v2.1 releases (a development branch). The change in the development branch of sqlite v2.1 is greater than that shown in Figure 9 for the stable branch of sqlite v2.4: going by the oscillations and variance of the trends, more continuing change occurred in the sqlite v2.1 releases.
It can be seen in all the trends from Figure 1 to Figure 9 that continuing change occurs to all the systems in evolution. Moreover, more functions, and invariably more function components, are added to the systems as they change; additions outnumber deletions. In very large systems like Samba (which has versions with more than 1 million LOC), shrinkage/deletion occurs less than in moderately large systems like the SQLite systems. Changes within a major version, among its minor versions (e.g. within major version 2.4, with minor versions v2.4.1, v2.4.2, v2.4.3 and so on), are fewer than the changes that occur from one major version to the next, regardless of whether the major version is a development or a stable one (i.e. from v2.4 to v2.5 to v3.0 and so on).
The incremental differences of the Samba (v1.5.14 to v3.3.1) function statements, returns, parameters and function counts have means of 5717, 786, 420 and 148 respectively, and standard deviations of 23805, 3140, 1502 and 533 respectively. This shows dispersion or variance, which is invariably continuing change. Similar statistical values are observed in the other software systems of this case study. In the study of Neamtiu et al. [2013], the cumulative numbers of changes to the program elements (i.e., functions, types, and global variables) of 9 evolutionary software systems were quantified, and continuing change was found in all of them. Similarly, components of the Linux operating system (the arch and drivers directories) were measured by Israeli and Feitelson [2010] using number-of-files and lines-of-code metrics; they also confirmed continuing change occurring in the Linux operating system.
Stable major versions are those like v1.0, 1.2, 2.0, 2.2, 2.4, 3.0, 3.4, 3.8, etc. Development major versions have an odd second number in the versioning scheme, e.g. v2.1, 2.3, 3.5, 3.7, etc.
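Under this even/odd convention (an assumption about the versioning scheme the systems studied here appear to follow, not a universal rule), a version string can be classified with a few lines of Python; branch_kind is a hypothetical helper name:

```python
def branch_kind(version):
    """Classify a major version as stable or development by the
    parity of its second (minor) number, per the convention above."""
    minor = int(version.lstrip("v").split(".")[1])
    return "stable" if minor % 2 == 0 else "development"

print(branch_kind("v2.4"))  # stable
print(branch_kind("v2.1"))  # development
```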
CONCLUSION
In the pre-versions, many changes occur, but close to the first release candidate of version 1 the changes reduce; in the stable branch after the first official release, however, more changes occur close to the release of the next stable or development major release.
Changes within a stable branch fall within a short range, i.e. there is little change, because simple enhancements, updates and bug fixes are the main work usually done in the stable branch. Changes within a development branch exceed those of the stable branch because many new small files are created for the new features and functionality of the development branch.
More changes occur in the function statements than in the function interfaces, in terms of parameters and returns. Changes within the stable branches are lower than in the development branches, but there is continuing change in both.
In all the trends, continuing change occurs to all the systems in evolution. Moreover, more functions, and invariably more function components, are added to the systems as they change; additions outnumber deletions. In very large systems like Samba (which has versions with more than 1 million LOC), shrinkage/deletion occurs less than in moderately large systems like the SQLite systems. Changes within a major version, among its minor versions (e.g. within major version 2.4, with minor versions v2.4.1, v2.4.2, v2.4.3 and so on), are fewer than the changes from one major version to the next, regardless of whether the major version is a development or a stable one (i.e. from v2.4 to v2.5 to v3.0 and so on).
The incremental differences of system components such as function statements, returns, parameters and function counts show dispersion from their means, going by their standard deviations. This dispersion or variance is invariably continuing change.
The systems under consideration observed the following versioning order: stable major versions are those like v1.0, 1.2, 2.0, 2.2, 2.4, 3.0, 3.4, 3.8, etc., while development major versions have an odd second number in the versioning scheme.
REFERENCES
ALENEZI, M., AND ALMUSTAFA, K. 2015. Empirical analysis of the complexity evolution in
open-source software systems. International Journal of Hybrid Information Technology, 8(2),
257-266.
EICK, S. G., GRAVES, T. L., KARR, A. F., MARRON, J. S., AND MOCKUS, A. 2001. Does
code decay? assessing the evidence from change management data. IEEE Transactions on
Software Engineering, 27(1), 1-12.
FERNANDEZ-RAMIL, J., LOZANO, A., WERMELINGER, M., AND CAPILUPPI, A. 2008.
Empirical studies of open source evolution. In Software evolution (pp. 263-288). Springer Berlin
Heidelberg.
GODFREY, M.W., AND TU, Q. 2000. Evolution in open source software: A case study. In Proceedings of the International Conference on Software Maintenance (pp. 131-142). IEEE.
HERRAIZ, I. 2008. A Statistical Examination of the Evolution and Properties of Libre
Software. PhD thesis, Universidad Rey Juan Carlos, Retrieved from
http://purl.org/net/who/iht/phd.
International Journal of Computing and ICT Research, Vol. 11, Issue 1, June 2017
53
Measuring the Impacts of E-Learning on Students’ Achievement in
Learning Process: An Experience from Tanzanian Public
Universities
Titus Tossy5
Mzumbe University, Tanzania
ABSTRACT
This paper is located within the global debate about the impact of e-learning, as one form of ICT, on students' achievement in the teaching and learning process in universities. From the perspective of Tanzania, this paper provides a model for measuring the impact of e-learning on students' achievement in universities. The rationale for the investigation stems from the observation that, despite hundreds of impact studies, the impact of e-learning on students' achievement remains difficult to measure and open to much reasonable debate, which has produced contradictory and elusive conclusions about the impacts of e-learning systems on students' achievement. A mixed-method research methodology involving a survey and interviews was employed to collect data for building the model. A multiple regression technique was used to analyze the hypothesized relationships conceptualized in the research model, and the model was built and validated using structural equation modeling and the Delphi technique respectively. In measuring the impact of e-learning on students' achievement, indicators such as student engagement, student cognitive learning, performance expectancy, student control, student satisfaction, continued use, student motivation, student self-esteem and student confidence in the e-learning system were found to have a significant positive relationship with students' achievement. The model has the potential to help policy makers, universities and other
5 Author’s Address: Titus Tossy, Mzumbe University, Tanzania "Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for
components of this work owned by others than IJCIR must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to
post on servers or to redistribute to lists, requires prior specific permission and/or a fee." © International Journal of Computing and ICT Research 2008.
International Journal of Computing and ICT Research, ISSN 1818-1139 (Print), ISSN 1996-1065 (Online), Vol.11, Issue 1, pp. 53 - 71, June
2017.
stakeholders to understand the impacts of e-learning after implementation in order to justify the total investment in the technology. The novelty of this research lies in extending the findings in the literature with constructs such as frequency of use and intention to use e-learning in a learning context.
Categories and Subject Descriptors:
K.3.1 [Computers and Education] Computer Uses in Education;
General Terms: Collaborative Learning, Distance Learning
Additional Key Words and Phrases: E-learning, learning process, impacts of E-learning, Tanzania
Universities, Public Universities
IJCIR Reference Format:
Titus Tossy. Measuring the Impacts of E-Learning on Students’ Achievement in Learning
Process: An Experience from Tanzanian Public Universities, Vol. 11, Issue 1, pp. 53-71.
http://ijcir.mak.ac.ug/volume11-issue1/article4.pdf
1. INTRODUCTION
Information and Communication Technologies (ICTs) have reshaped the landscape of the education sector by changing the way various education activities are conducted. Rapid developments in ICTs have improved access to and the efficiency of teaching and learning processes in universities (Lwoga and Komba, 2015), thereby leading to improved students' achievement. This academic achievement holds the promise of meaningful employment for graduates as well as movement towards a knowledge-based economy and rapid national economic growth (Olson et al., 2011). For this reason, most governments and universities in developed countries have invested in ICTs, and in e-learning systems in particular. As such, electronic learning systems (e-learning systems) have become a major phenomenon in recent years (Tossy, 2012), as they transform teacher-centered teaching and learning into a student-centered system (Trucano, 2005). Further, this transformation enables students to develop their problem-solving abilities, information-reasoning and communication skills, creativity, and other higher-order thinking skills (Rosenblit et al., 2005). E-learning indeed changes the way in which teaching, learning, and the administration of education activities are conducted (Tossy, 2012; Lwoga and Komba, 2015); it offers efficient use of time and eases the sharing of educational materials between students and staff (Shivaraj et al., 2013), and it improves the quality of teaching and learning (Kahiigi et al., 2008; Jones, 2011).
Despite these notable attributes of e-learning in teaching and learning, its impact on students' achievement remains difficult to measure and open to debate, as there are few conclusive statements (Trucano, 2005; Rosenblit and Gros, 2011). Others further argue that conclusions about the impacts of e-learning systems on students' achievement are contradictory (Hiltz et al., 2001; Trucano, 2005). It is also argued that data to support the perceived benefits of e-learning technologies are limited and that evidence of effective impact is elusive (Eurydice, 2011; Bocconi et al., 2013; Pandolfini, 2016). In developing countries, there is a paucity of information about the relationship between e-learning technologies and students' achievement (Rosenblit et al., 2011). There is thus a need for more research, notably to develop useful indicators and methodologies for measuring the impact of e-learning on teaching and learning in developing countries, including Tanzania, in order to guide policy formulation. This is important because developing countries, including Tanzania, are still at a very basic stage of e-learning technology adoption. Tanzania needs to tap into the experience of universities in developed countries that have long used e-learning so as to formulate innovative corrective measures.
2. E-LEARNING
Wentling et al. (2000:5) define e-learning as:
“The acquisition and use of knowledge distributed and facilitated primarily by electronic means. This form of learning currently depends on networks and computers but will likely evolve into systems consisting of a variety of channels (e.g. wireless, satellite) and technologies (e.g. cellular phones, etc.) as they are developed and adopted. E-learning can take the form of courses as well as modules and smaller learning objects. E-learning may incorporate synchronous or asynchronous access and may be distributed geographically with varied limits of time.” (Wentling et al., 2000:5)
E-learning is associated with a wide range of terms [Albert & Mori, 2001], referred to as 'labels', which have been used to describe the concept of e-learning. These labels include, but are not limited to, Web
Based Learning (WBL), Web Based Instruction (WBI), Web Based Training (WBT), Internet
Based Training (IBT), Online Resource Based Learning (ORBL), Advanced Distributed Learning
(ADL), Tele-Learning (T-L), Computer-Supported Collaborative Learning (CSCL), Mobile
Learning (M-learning or ML), Nomadic Learning, Off-Site Learning [Collis, 1996; Khana, 2005;
Yieke, 2005; Bates, 2001; Dam, 2004; Goodear et al., 2001; Pegler & Littlejohn, 2007; Dabbagh
et al., 2000; Barbara, 2002, 2004; Cramer et al., 2000; Salzbert & Polyson, 1995; Schreiber, et al.,
1998; Schank, 2001; Howard, 2003; and Singh, 2003]. The e-learning term is used interchangeably
with other related terms such as online learning, virtual learning, and web-based learning
(Twaakyondo, 2004).
While the use of e-learning has the added value of flexibility (“anywhere, anytime, anyplace”), it also facilitates both learner engagement and engaging experiences (Uys, 2004; Meyen, 2000; 2002). Meyen (2002) demonstrates how e-learning helps to overcome the traditional barriers to education delivery: lack of physical infrastructure, lack of qualified teaching staff, absence of adequate education budgets, and the failure of traditional pedagogy and curricula. East African countries are characterised by these barriers (Ndume et al., 2008). The failure of governments' efforts to build enough physical classrooms has created an opportunity for innovative education delivery via e-learning (Yieke, 2005). As Alavi and Leidner (2001) argue that e-learning's importance will grow right across the educational spectrum, from primary schools to HEIs, e-learning implementation in Tanzanian HEIs is taking place despite the various barriers outlined above. The implementation of e-learning differs from one HEI to another.
3. TANZANIA HIGHER EDUCATION STATUS
According to TCU (2010), the education sector in Tanzania has grown drastically over the past fifty years, owing to an increase in the number of Higher Education Institutions (HEIs). Student enrolment has increased tremendously since independence: MoEVT (2011) reports that in 1961 Tanzania had 1,737 students enrolled in 4 HEIs, while in 2011 a total of 244,045 students were enrolled in 358 HEIs (MoEVT, 2011). This growth emanated from a free market that encourages the establishment of both private and public HEIs, backed by various government policies on the education sector, such as Vision 2025, the ICT Policy and the Higher Education Master Plan (HEMP) (Maliyamkono, 2006:396-445). Despite the increase in the number of HEIs since 1961, the enrolment these institutions can offer has not kept pace with overall national population growth (Maliyamkono, 2006). This is due to limited enrolment capacity, geographical constraints, the cost of education, insufficient infrastructure, a lack of qualified personnel and a lack of innovative ideas (Chiemelie, 2012). In the light of these challenges, e-learning is seen as a solution in which enrolment depends on neither infrastructure nor geographical location (Noe, 2005). MoEVT (2011) argues that HEIs should deploy e-learning for their day-to-day training activities in order to minimize training costs and to remain competitive in the market. Furthermore, while MoCT (2003) articulates the need to harness ICT opportunities to meet the Vision 2025 goals by blending strategic ICT leadership, ICT infrastructure and an ICT industry through human capital, MoEVT (2007) stipulates that Tanzania needs national e-learning sensitization, stressing applications such as distance education, e-learning, m-learning and blended learning.
4. E-LEARNING AT HEIS IN TANZANIA
Dr. Gajaraj Dhanarajan (2001:9), President of the Commonwealth of Learning, argued that:
“One would be foolish to question the importance of the internet and www for education in this
new decade; at worst, it has the ability to connect communities of learners and teachers and at its
best it could very well be the tool that education has been waiting for these past thousands of years;
its promise is only limited by the imagination and capacity of the people who can apply and benefit
from it”.
This kind of vision of a future electronically driven and inclusive education has been a driving
force for HEIs in Tanzania and has provided the spur to implement e-learning. As is the case with
other African countries, the rate of implementation of e-learning platforms in Tanzania is still very
slow despite the potential opportunities provided by open source technology and the conducive
environments created by the respective governments. There have been some initiatives on the part
of governments to develop ICT policies as a way forward in the implementation of e-learning. In
addition, there have been different round table conferences and the formation of the Tanzania
Commission of Universities (TCU) has fostered a debate on a common education delivery. For
example, Tanzania has abolished all taxes related to computers and related equipment and reduced
licence fees and royalties’ payable by the telecommunication operators (Morrison & Khan, 2003
and McPherson & Nunes, 2008). The more established public and private HEIs have managed to
implement e-learning platforms in Tanzania. They are implementing these using either open source
or customized platforms such as WEBCT, Blackboard, Moodle, Joomla, etc. Other universities in
the Tanzania have started the basic process of ICT infrastructure expansion to include local area
network implementation, Internet, computer labs and other facilities, as a way forward to the
establishment of e-learning (Sife, et al., 2007).
5. E-LEARNING MARKET AND THE DRIVERS OF CHANGE IN TANZANIA
While e-learning is not a new phenomenon in the developed world, it may be new to some developing countries, and its market is growing rapidly worldwide. While Merrill Lynch (2003) argues that e-learning is the fastest-growing sector in developed countries, many developing countries (including Tanzania) are striving to implement e-learning in HEIs. Doughty et al. (2001) and Saint (1999) have documented the rise of the virtual university in Africa (including Tanzania), and many e-learning initiatives are in progress in Tanzania, such as Schoolnet, e-learning centres, and the African Virtual University (Ndume et al., 2008; Sife et al., 2007). The increase in demand for higher education is one of the driving forces for implementing e-learning. Higher population growth, lower education costs, increased access to education, higher participation rates in higher education, changes in the way firms organize work, and cost-effectiveness are the factors driving the implementation of e-learning in Tanzania (Ndume et al., 2008).
6. METHODOLOGY
Conceptual Model and Research Hypothesis Development
The research model for this study was formulated based on the information systems (IS) success model adapted from DeLone and McLean (1992). The model consists of three dimensions, each comprising three constructs, as illustrated in Figure 1. This paper therefore uses this conceptual model to underpin the measurement of the impact of e-learning systems on students' achievement in Tanzanian universities.
Figure 1: Conceptual model adapted from DeLone and McLean (1992). [The figure shows the independent/exogenous latent variables SACKS, SDMAL and SM, each measured by three indicators, linked to the dependent/endogenous variable: the impact of e-learning on students' achievement.]
Based on the conceptual model depicted in Figure 1, the following hypotheses were proposed:
Students’ acquisition of knowledge and skills (SACKS)
H1. Students’ engagement on using the system has a significant positive relationship with their
achievements
H2. Students’ performance expectancy has a significant positive relationship with students’
achievement
H3. Cognitive learning using e-learning system has a significant positive relationship with
students’ achievement
Students' development of maturity as an autonomous learner (SDMAL)
H4. Students' control in using the e-learning system has a positive relationship with students' achievement
H5. Students' continued use of the e-learning system has a positive relationship with students' achievement
H6. Students' satisfaction with the e-learning system has a positive relationship with students' achievement
Students' Motivation (SM)
H7. Students' motivation in using the e-learning system has a positive relationship with students' achievement
H8. Students' self-esteem regarding the e-learning system has a positive relationship with students' achievement
H9. Students' confidence in the e-learning system has a positive relationship with students' achievement
The study used a survey design involving 4 universities with long ICT experience, purposively selected from among 30 universities in Tanzania. Three hundred and fifty (350) respondents were targeted, of whom 306 (87.5% of the planned respondent pool) took part. A survey questionnaire based on five-point Likert scales (Likert, 1932) was employed, and in-depth interviews were used to collect qualitative data from ICT experts during model validation. The data were then analyzed quantitatively and qualitatively, respectively, to identify the indicators and aspects relevant to measuring the impact of using (or not using) e-learning systems on students' achievement. The empirical data were analyzed using multiple regression and structural equation modeling (SEM) with the Statistical Package for the Social Sciences (SPSS). Multiple regression was used to analyze the hypothesized relationships conceptualized in the research model. To validate the model, the Delphi technique was employed (Harold and Murray, 1975) and a new model was developed accordingly (Rowe and Wright, 1999).
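The multiple regression step above can be sketched in a few lines. The following is a minimal illustration, not the study's analysis: it estimates coefficients by ordinary least squares via the normal equations (XᵀX)b = Xᵀy, using invented Likert-style predictor scores and an outcome constructed to follow a known linear rule, so the fit should recover the rule's coefficients.

```python
# Ordinary least squares from scratch: solve (X'X) b = X'y.
# All data below are invented for illustration only.

def transpose(m):
    return [list(row) for row in zip(*m)]

def matmul(a, b):
    bt = transpose(b)
    return [[sum(x * y for x, y in zip(row, col)) for col in bt] for row in a]

def solve(a, rhs):
    """Gaussian elimination with partial pivoting for A x = rhs."""
    n = len(a)
    m = [row[:] + [rhs[i]] for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def ols(X, y):
    """Return coefficients b minimizing ||X b - y||^2."""
    Xt = transpose(X)
    XtX = matmul(Xt, X)
    Xty = [sum(x * yv for x, yv in zip(row, y)) for row in Xt]
    return solve(XtX, Xty)

# Hypothetical five-point Likert responses: (engagement, cognitive, expectancy),
# with a leading 1 for the intercept.
X = [[1, e, c, p] for e, c, p in
     [(5, 4, 5), (3, 3, 2), (4, 5, 4), (2, 2, 3), (5, 5, 5), (1, 2, 1)]]
# Achievement generated exactly as 0.5 + 0.2e + 0.3c + 0.4p, so OLS
# should recover these coefficients.
y = [0.5 + 0.2 * e + 0.3 * c + 0.4 * p for _, e, c, p in X]
b = ols(X, y)  # -> approximately [0.5, 0.2, 0.3, 0.4]
```

In practice a statistics package such as SPSS (as used in the study) also reports t-values, significance levels and collinearity diagnostics alongside these coefficients.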
7. RESULTS AND DISCUSSION
7.1 E-Learning Experience and Awareness
The study revealed that 75% of the respondents had been exposed to e-learning systems through having used them for learning; 9.5% had attended a course on e-learning; and 2% had heard about e-learning from a colleague at another institution or seen a colleague using it. It was further evident that 79% of students reported using e-learning frequently in their day-to-day learning activities, while 65% were found to have the intention of using e-learning methods in their academic career. These results match those of previous studies by Alexander (2008) and Mazman and Usluel (2009), which found that the more a person is involved in Internet or Web activities, the more likely they are to use e-learning. It is therefore likely that, in developing countries and particularly in Tanzania, the rate of use of e-learning methods will increase if universities embrace them in institutional operations.
7.2 Indicators of the impact of e-learning
The results of the multiple regression analyses are shown in Tables 1, 2 and 3.
Table 1: SACKS indicators of students' achievement

Measure   Indicator    β      t-value   Sig.   Tolerance   VIF     R²
SACKS     (Constant)   .412   2.304     .012                       .513
          SE           .268   .886      .271   .926        1.079
          SC           .618   7.854     .000   .641        1.560
          PE           .596   7.617     .000   .641        1.679
The results in Table 1 show that indicators such as students' engagement (SE), student cognitive learning using e-learning methods (SC) and performance expectancy (PE) on e-learning had a positive relationship with students' achievement.
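The tolerance and variance inflation factor (VIF) columns in the tables are standard collinearity diagnostics. As a hedged illustration (with invented scores, not the study's data): in the two-predictor case, the R² from regressing one predictor on the other equals their squared Pearson correlation, so tolerance = 1 − r² and VIF = 1/tolerance.

```python
# Tolerance and VIF for a pair of predictors, from their Pearson correlation.
# The two score vectors below are invented for illustration.
from statistics import mean

def pearson(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

sc = [4, 3, 5, 2, 5, 2]   # hypothetical cognitive-learning scores
pe = [5, 2, 3, 4, 5, 1]   # hypothetical performance-expectancy scores
r = pearson(sc, pe)
tolerance = 1 - r * r     # proportion of variance NOT shared with the other predictor
vif = 1 / tolerance       # here roughly 1.4: little collinearity
```

VIF values near 1, like those of roughly 1.0 to 1.7 reported in Tables 1-3, are generally taken to indicate that multicollinearity is not a problem.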
Table 2: SDMAL indicators of students' achievement

Measure   Indicator    β      t-value   Sig.   Tolerance   VIF     R²
SDMAL     (Constant)   .412   2.304     .012                       .684
          SCO          .191   .092      .244   .807        .931
          SS           .730   8.181     .000   .641        1.560
          CU           .592   6.211     .000   .641        1.559
The results [Table 2] further show that indicators such as students' control in using e-learning (SCO), students' satisfaction (SS) and continued use of e-learning (CU) had a positive relationship with students' achievement.
Table 3: SM indicators of students' achievement

Measure   Indicator    β      t-value   Sig.   Tolerance   VIF     R²
SM        (Constant)   1.106  6.88      .000                       .896
          SSE          .323   4.409     .000   .641        1.560
          MT           .545   7.191     .000   .641        1.679
          CON          -.069  .881      .257   .903        1.108
Table 3 indicates that students' self-esteem in using e-learning (SSE) and student motivation (MT) had a positive relationship with students' achievement, with the exception of students' confidence in using e-learning (CON).
7.3 A model for measuring e-learning impact on student achievement
The hypotheses above were tested using SEM. Of the nine relationships, eight were statistically significant (Table 4): student engagement (SE) (β = .268, p < .01); performance expectancy (PE) (β = .596, p < .01); student cognitive learning (SC) (β = .618, p < .01); control in using e-learning (SCO) (β = .191, p < .01); continued use (CU) (β = .592, p < .01); satisfaction (SS) (β = .730, p < .01); motivation (MT) (β = .545, p < .01); and self-esteem (SSE) (β = .323, p < .01). Only student confidence in using e-learning in the learning context (CON) (β = -.069) was not supported.
Table 4: Summary of hypotheses tested

Hypothesis                                                            Accepted/Rejected   β (p < .01)
H1  Students' engagement in using the system has a significant
    positive relationship with their achievement                      Accepted            .268
H2  Students' performance expectancy has a significant positive
    relationship with students' achievement                           Accepted            .596
H3  Cognitive learning using the e-learning system has a
    significant positive relationship with students' achievement      Accepted            .618
H4  Students' control in using the e-learning system has a
    positive relationship with students' achievement                  Accepted            .191
H5  Students' continued use of the e-learning system has a
    positive relationship with students' achievement                  Accepted            .592
H6  Students' satisfaction with the e-learning system has a
    positive relationship with students' achievement                  Accepted            .730
H7  Students' motivation in using the e-learning system has a
    positive relationship with students' achievement                  Accepted            .545
H8  Students' self-esteem regarding the e-learning system has a
    positive relationship with students' achievement                  Accepted            .323
H9  Students' confidence in the e-learning system has a positive
    relationship with students' achievement                           Rejected            -.069
With the latent variables presented in the conceptual model, a structural equation modeling (SEM) approach (Bollen, 1998; Hoyle and Panter, 1995) was used to determine the cause-effect relationships among the latent variables, their indicators and the impact of e-learning on students' achievement in education. Three regression models were developed and used to determine the values of the dependent variables: one each for students' acquisition of knowledge and skills (SACKS), students' development of maturity as an autonomous learner (SDMAL) and students' motivation (SM). The SACKS indicators were student engagement (SE), cognitive capacity (SC) and performance expectancy (PE); the SDMAL measurable indicators were student control (SCO), satisfaction (SS) and continued use (CU); and the measurable indicators for SM were student motivation (MT), self-esteem (SSE) and confidence (CON).
Based on the findings, the initial regression models were as follows:
SACKS = 0.268 SE + 0.596 PE + 0.618 SC   (R² = 0.513)   (1)
SDMAL = 0.191 SCO + 0.592 CU + 0.730 SS   (R² = 0.684)   (2)
SM = 0.545 MT + 0.323 SSE - 0.069 CON   (R² = 0.896)   (3)
Where:
SE = Student Engagement; SC = Student Cognitive Learning; PE = Performance Expectancy;
SCO = Student Control; SS = Student Satisfaction; CU = Continued Use;
CON = Confidence; MT = Student Motivation; SSE = Student Self-Esteem
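As a quick illustration of how the fitted equations (1)-(3) can be applied, the sketch below uses the β coefficients reported above; the per-student indicator scores are hypothetical standardized values, not data from the study.

```python
# Coefficients taken from equations (1)-(3); indicator scores are invented.
weights = {
    "SACKS": {"SE": 0.268, "PE": 0.596, "SC": 0.618},    # equation (1)
    "SDMAL": {"SCO": 0.191, "CU": 0.592, "SS": 0.730},   # equation (2)
    "SM":    {"MT": 0.545, "SSE": 0.323, "CON": -0.069}, # equation (3)
}

# Hypothetical standardized indicator scores for one student.
scores = {"SE": 1.0, "PE": 1.0, "SC": 1.0,
          "SCO": 1.0, "CU": 1.0, "SS": 1.0,
          "MT": 1.0, "SSE": 1.0, "CON": 1.0}

def latent(name):
    """Weighted sum of a latent variable's indicators."""
    return sum(b * scores[ind] for ind, b in weights[name].items())

# With every indicator at 1.0: SACKS = 1.482, SDMAL = 1.513, SM = 0.799.
```

Note how the negative CON coefficient pulls the SM score down, consistent with the rejection of H9 below.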
The entire model was found to fit the study significantly, as all three regression models had R² ≥ 0.5 (Hoyle and Panter, 1995). Hypotheses H1 to H8 were found to have a significant positive relationship with students' achievement. For hypothesis H9, however, the study revealed that students' confidence in the e-learning system had a negative relationship with students' achievement, contrary to the findings of the study conducted by Olson et al. (2011).
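The R² criterion applied above follows the usual definition R² = 1 − SS_res / SS_tot. A minimal sketch with invented observed and predicted achievement scores:

```python
# Coefficient of determination: R^2 = 1 - SS_res / SS_tot.
# The observed/predicted values below are invented for illustration.

def r_squared(observed, predicted):
    mean_obs = sum(observed) / len(observed)
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)          # total variance
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))  # residual variance
    return 1 - ss_res / ss_tot

observed = [3.0, 4.0, 5.0, 2.0, 4.5]
predicted = [3.2, 3.8, 4.9, 2.4, 4.2]
r2 = r_squared(observed, predicted)
adequate = r2 >= 0.5   # the fit threshold used in the study
```

Here the predictions track the observations closely, so R² comes out well above the 0.5 threshold.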
The findings above make clear that student engagement, student cognitive capacity and performance expectancy were the key indicators of the latent variable students' acquisition of knowledge and skills (SACKS), through which e-learning's impact on students' teaching and learning achievement can be observed. In addition, students' control, satisfaction and continued use of e-learning strategies were indicators of the latent variable students' development of maturity as an autonomous learner (SDMAL), which is known to influence students' teaching and learning achievements. The findings further show that self-esteem and motivation were indicators of the latent variable students' motivation (SM), which had a significant positive effect on students' teaching and learning achievement. As an exception, the study shows that students' confidence in e-learning had a negative impact on students' achievement. These findings agree with those of Olson et al. (2011) and the McGraw-Hill report (2011).
7.4 Model Validation
The model was validated using the Delphi technique, based on the assumption that group expert judgment is better than individual judgment (Amiresmaili et al., 2011). Two panels of ICT experts were therefore formed to discuss and evaluate the model. The experts were technical personnel, lecturers specialized in e-learning, and e-learning consultants. All relevant determinant factors obtained from Section 2 were critically discussed and compared by the panelists. The resulting expert judgments were used to test the validity of the model, which was then refined using inputs from the workshop. The model finally established was a function of students' acquisition of knowledge and skills (SACKS), development of maturity as an autonomous learner (SDMAL), motivation (SM) and behavioral intention (BI) as latent variables, each with measurable variables as presented above. This relation is depicted mathematically as follows:
Measurement Model = f (SACKS, SDMAL, SM, BI) + e
This shows that the model has the potential to improve the measurement of e-learning's impact on students' achievement, enabling institutional management to make decisions based on that impact and to realize the net benefit that justifies the total investment.
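One common way a Delphi round reaches consensus, sketched below with invented panel ratings and thresholds (the paper does not report its exact aggregation rule): each panelist rates every candidate indicator on a five-point scale, and an indicator is retained when the median rating is high and the interquartile range is narrow enough to signal agreement.

```python
# Hedged Delphi-round aggregation sketch: retain an indicator when the panel's
# median rating is at least min_median and the interquartile range (IQR) is at
# most max_iqr. Ratings and thresholds are invented for illustration.
from statistics import median, quantiles

def delphi_round(ratings, min_median=4, max_iqr=1):
    retained = {}
    for indicator, votes in ratings.items():
        q1, _, q3 = quantiles(votes, n=4)          # quartiles of the votes
        if median(votes) >= min_median and (q3 - q1) <= max_iqr:
            retained[indicator] = median(votes)
    return retained

panel = {
    "IU": [5, 4, 5, 4, 5, 4],   # intention to use: strong, tight agreement
    "FU": [4, 4, 5, 4, 4, 5],   # frequency of use: strong, tight agreement
    "XX": [1, 5, 2, 4, 3, 5],   # a contested indicator: wide spread, dropped
}
kept = delphi_round(panel)      # -> keeps "IU" and "FU", drops "XX"
```

Indicators that fail the consensus check would typically be fed back to the panel for another round with the group statistics attached, which is the iterative core of the Delphi method.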
8. CONCLUSION AND RECOMMENDATION
This study shows that the developed model [Figure 2] has the potential to be used to measure the impact of e-learning on students' achievement in universities and other institutions. Results obtained through a mixed-methods approach revealed that student engagement (SE), cognitive capacity (SC), performance expectancy (PE), control (SCO), continued use (CU), satisfaction (SS), confidence (CON), motivation (MT) and self-esteem (SSE) are important measurable indicators in the model. In particular, intention to use (IU) and frequency of using (FU) e-learning are measurable variables of behavioral intention (BI) that are of particular importance in evaluating its impact on students' achievement; these indicators are novel additions for measuring the impact of e-learning technology utilization with the developed model. These results call for more research focused on evaluating the impact of e-learning systems on students' achievement in teaching and learning using the model developed in this study. The developed model is important because it helps policy makers, university managements and other stakeholders to measure the impact of e-learning, understand its status and justify the total investment in the learning context.
Figure 2: A final model for measuring the impact of e-learning on students' achievement. [The figure links the latent variables SACKS (with indicators SE, SC, PE), SDMAL (SCO, SS, CU) and SM (MT, SSE, CON), together with behavioral intention, measured by intention to use (IU) and frequency of using (FU), to the impact of e-learning on students' achievement.]
9. REFERENCES
Alexander, B. (2008). ''Social networking in higher education'', available at http://net.educause.edu/ir/library/pdf/PUB7202s.pdf. Accessed on 13 November 2016.
Amiresmaili, M. et al. (2011). ‘‘A model for health services priority for Iran’’, Journal of
American Science, Vol. 7 No. 4.
Bocconi, S., Balanskat, A., Kampylis, P., & Punie, Y. (Eds.). (2013). Overview and analysis of learning initiatives in Europe. Luxembourg: European Commission
Borgatti, S.P. & Cross, R. (2003), ‘‘A relational view of information seeking and learning in social
networks’’, Management Science, Vol. 49 No. 4, pp. 432-45.
DeLone, W. H., & McLean, E. R. (1992). Information Systems Success: The Quest for the
Dependent Variable. Information Systems Research, 3(1), 60–95
Eurydice. (2011). Key data on learning and innovation through ICT at school in Europe 2011.
Brussels: EACEA P9 Eurydice
Guri-Rosenblit, S., & Gros, B. (2016). E-Learning: Confusing Terminology, Research Gaps and Inherent Challenges. International Journal of E-Learning and Distance Education, Vol. 25, No. 1.
Hiltz, S. R., Zhang, Y., & Turoff, M. (2001). Studies of effectiveness of learning networks. Newark,
N.J.: New Jersey Institute of Technology
Hoyle, R.H., & Panter, A.T. (1995).Writing about Structural Equation Models, Sage, Thousand
Oaks, CA: pp 3-18
Harold, A.L. and Murray, T. (1975), The Delphi Method: Techniques and Applications, Addison-
Wesley, Reading, MA.
Jones, D. T. (2011). An Information Systems Design Theory for E-learning: A thesis submitted
for the degree of Doctor of Philosophy of The Australian National University. Pp-17-431
Kahiigi, E. K. et al. (2008). ''Exploring the e-Learning State of Art.'' The Electronic Journal of e-Learning, Volume 6, Issue 2, pp. 77-88. Available online at www.ejel.org
Lwoga, E. T & Komba M (2015). Antecedents of continued usage intentions of web based learning
management system in Tanzania: Education + Training, Vol. 57 Iss 7 pp. 738 – 756 Permanent
links to this document: http://dx.doi.org/10.1108/ET-02-2014-0014. Accessed on 23/3/2015.
Mazman, S.G. & Usluel, Y.K. (2009), ‘‘The usage of social networks in educational context’’,
World Academy of Science, Engineering and Technology, Vol. 49 No. 1.
Munguatosha, G.M. et al. (2011). A social networked learning adoption model for higher
education institutions in developing countries: On the Horizon, Vol. 19 Iss 4 pp. 307 –
320.available online at http://dx.doi.org/10.1108/10748121111179439. Accessed on 12/12/2016
Olsonurt, J., Tarkleson, E., Sinclair, J., Yook, S., & Egidio, R. (2011). An Analysis of e-Learning
Impacts & Best Practices in Developing Countries. With Reference to Secondary School
Education in Tanzania: pp. 1-53. Available online at http://tism.msu.edu/ict4d +1 517.355.8372.
accessed on 19/11/2016
Pandolfini, V. (2016). Exploring the Impact of ICTs in Education: Controversies and Challenges.
Italian Journal of Sociology of Education, 8(2), 28-53. doi: 10.14658/pupj-ijse-20
Rowe, G. & Wright, G. (1999), ‘‘The Delphi technique as a forecasting tool: issues and
analysis’’,International Journal of Forecasting, Vol. 15 No. 4.
Shivaraj, O. et al. (2013). Students’ Attitude towards the Uses of Internet: Indian Journal of
Library and Information Science, 7(1), 13-23.
Tossy, T. (2012). Cultivating Recognition: A Classic Grounded Theory of E-Learning Providers
Working in East Africa: pp.1-381. Available online at http://www.elearningcouncil.com. Accessed
on 2/5/2016
72 International Journal of Computing and ICT Research, Vol. 11, Issue 1, June 2017
Trucano, M. (2005). Knowledge maps: ICTs in education. Washington D.C.: InfoDev, The
Information for Development Program.
Reducing Checkpoint Overhead in Grid Environment
Faki Ageebe Silas, Jimoh Rasheed Gbenga
Computer Science Department, Faculty of Communication and Information Science,
University of Ilorin, Ilorin, Nigeria
____________________________________________________________________________________________
ABSTRACT
Grid computing has become a major player in the super-computing community. But because of the
diversity and disruptive nature of grid resources, job failure is the norm rather than the exception.
Many researchers have therefore proposed models that enhance job survivability. Popular among
these is the checkpoint model, which saves the progress of already computed jobs on stable, secure
storage. This model avoids re-computing already computed jobs from scratch when resources fail.
However, the time a job spends checkpointing is itself a task that adds overhead to computing
resources, thereby reducing their performance. To avoid adding too much overhead to computing
resources, the number of checkpoints must be minimized. This
Author’s Address: Faki Ageebe Silas, Jimoh Rasheed Gbenga, Computer Science Department, Faculty of Communication and Information Science, University of Ilorin, Ilorin, Nigeria, [email protected]. "Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than IJCIR must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee." © International Journal of Computing and ICT Research 2008. International Journal of Computing and ICT Research, ISSN 1818-1139 (Print), ISSN 1996-1065 (Online), Vol. 11, Issue 1, pp. 72 - 83, June 2017.
study proposed checkpoint interval models implemented on the basis of the fault index history of
computing resources. Failed jobs are re-allocated from their last saved checkpoint using an
exception handler. The study observed that the arithmetic checkpoint model is better used when the
fault index of a computing resource is high, while the geometric checkpoint model is better when
the fault index of a resource is low.
Keywords: Arithmetic Checkpoint, Exception Handler, Fault Tolerance, Geometric Checkpoint
IJCIR Reference Format:
Faki Ageebe Silas, Jimoh Rasheed Gbenga. Reducing Checkpoint Overhead in Grid
Environment, Vol. 11, Issue 1, pp 72 - 83. http://ijcir.mak.ac.ug/volume11-issue1/article5.pdf
1.0 INTRODUCTION
The grid has proven to be a great computational resource in the computing world. But because of
the unstable and unpredictable nature of its resources, resource and job failure is the norm rather
than the exception. Systems and networks can fail, resources can be turned on and off, and the introduction of more
users can result in resource starvation, all of which significantly affect quality of service (QoS). To
maintain a tolerable QoS level for users, fault tolerance mechanisms are incorporated. Providing
fault tolerance in a grid environment while also optimizing resource utilization and job execution
time is a challenging task (Nakkeeran, 2015), because fault tolerance measures add computing
overhead to grid resources. This study adopts a checkpoint-and-recovery system as a measure to
increase job lifetime, thereby optimizing job completion time. The method aims to increase the
survivability of interrupted jobs by re-scheduling them from their last saved checkpoint, while
minimizing the number of checkpoints so as not to incur too much overhead. The model therefore
reduces the number of checkpoints and increases the survivability of jobs, thereby optimizing
resource performance (Li & Mascagni, 2003).
2.0 LITERATURE REVIEW
A grid checkpoint service must meet three basic requirements: the ability of software to exchange
information among resources, the ability of grid middleware and infrastructure to exchange and
maintain vital information, and the availability of checkpoint data to all resources
(Mangesh & Urmila, 2014). But achieving interoperability in a grid is difficult owing to the
complexity and unstable nature of grid resources. The function of fault tolerance, as Paul and Jie
(2003) put it, is to preserve the delivery of expected service despite the presence of faulty
processors and declining resource capability within the system. This means that errors within the
grid system should be detected and corrected promptly, and permanent faults that could cause great
havoc should be located and removed quickly, so that grid resources can deliver a tangible and
acceptable QoS. Owing to technological advancement and the increase in autonomous systems,
many researchers are exploring ways to design and implement fault-detection models that can predict and perform
recovery from crashed processors of computing resources. These models aim to improve resource
performance and job survivability in the presence of failed processors; their effectiveness largely
depends on tuning runtime parameters such as the checkpoint interval and the number of replicas
(Pankaj, 2011). Fault tolerance schemes in a grid can be either pro-active or post-active (Garg &
Singh, 2011; Ganga & Karthik, 2012). In the pro-active mechanism, the resources run a failure
prediction process before jobs are scheduled, in the hope that failures will follow the prediction;
the post-active mechanism handles a job failure after it has occurred. Most approaches to fault
tolerance in grid environments are post-active rather than pro-active (Sajjad & Babar, 2016).
According to Tanenbaum and Van Steen (2002), the most popular fault-tolerance model in use is
checkpointing, which involves periodically saving a snapshot of job progress on a stable storage
device that can survive failures (e.g. a hard disk). The information stored on the stable storage is
called a checkpoint. Whenever a processor or job crashes, the last saved checkpoint is used to
restart the job or resource rather than starting from the beginning. There are many varieties of
checkpoint model, but the popular ones can be categorized as coordinated, uncoordinated, or
communication-induced checkpointing (Elnozahy, Alvisi, Wang & Johnson, 2002).
The main advantage of checkpointing is that it is a general technique that can be applied to any
type of parallel application. However, the time taken to make a checkpoint is itself a task that adds
execution time overhead to a resource even when no crash has occurred. This overhead depends on
the frequency at which checkpoints are taken, which in turn depends on the programmer or
middleware design.
To improve the fault tolerance of grid systems, Ndeye, Pierre and Ousmane (2003) studied the
hierarchical performance of checkpoint protocols in grid computing, discussing protocols based on
rollback recovery classified into two categories: checkpoint-based rollback recovery and
message-logging protocols. The performance of the protocols was observed to depend on the
characteristics of the system, the network, and the applications running. Where computational
intensity is low, the Algorithm Based on System Checkpoint (ABSC) model is applicable;
where computational intensity is high, the Algorithm Based on Application Checkpoint (ABAC)
model is more suitable, producing slight overhead in fault-free situations but being very reliable
in faulty situations (Mangesh & Urmila, 2014). Adaptive fault-tolerant scheduling utilizes an
adaptive number of job replicas according to the grid failure history. This technique is composed
of Adaptive Job Replication (AJR) and Backup Resource Selection (BRS), where AJR determines
the number of replicas according to the selected resources.
To reduce checkpoint overheads, Ramesh (2016) proposed Optimal Checkpoint Automation
(OCA). The goal of this model is to make the most effective use of grid resources and to improve
throughput in the midst of faults. The model focuses on minimizing the effect of grid faults and
reducing fault recovery time through optimal automation of checkpoints. To evaluate the
performance of the model, fault indexes of resources were kept and analyzed, and jobs were
assigned to resources based on the next predicted pattern of failure. The failure patterns were
predicted using a Hidden Markov Model (HMM) to assign the checkpoint interval and to provide
an automatic failure replica (the checkpoint context file) to the grid resource. The proposed OCA
model performed better than the adaptive algorithm, though it depends on the previous history of
computing resources, which implies that the checkpoint intervals of new resources are
unpredictable.
Checkpointing is the most common method of achieving fault tolerance, though there remain
research issues around improving its efficiency and reducing its overheads. Sajadah (2011)
proposed a novel solution for checkpointing parallel applications in a grid. The model takes
checkpoints of applications in regions where there is no inter-process communication, thereby
reducing both the checkpoint overhead and the checkpoint size. Dilbag, Jaswinder and Amit
(2012) proposed a novel technique for analyzing the performance of checkpointing algorithms;
the model was implemented in a suitable cloud environment with six service nodes, with analysis
made in terms of parallel job execution.
3.0 ARITHMETIC AND GEOMETRIC CHECKPOINT MODEL (AGCM)
In this model, users submit jobs to the grid scheduler. Each submitted job is then assigned by the
scheduler to a computing resource that matches the requirements specified by the user. As
computing resources execute jobs, statistics about each job and its resources are sent to the Grid
Information Service (GIS) for storage. Resource capacity, checkpoint activity, and current load
are some of the information stored in the GIS.
Figure 1: Model with checkpoint and exception handler. (Components: User, Scheduler, Schedule Manager, Job Dispatcher, Checkpoint Manager, Fault Index Manager, Checkpoint Server, Exception Handler, GIS.)
Resources in a grid environment do not share the same specifications: processing speed, internal
scheduling policy, and load factor vary from resource to resource. In the same way, each job
differs from other jobs in execution time, deadline, and so forth. The fault index here indicates the
frequency of failure of a particular resource. These fault indexes are stored in the GIS as the
performance history of every resource, and the model uses them to determine how frequently a
checkpoint is made on a resource. This keeps checkpointing reasonable, bearing in mind that the
time to make a checkpoint also adds to the execution time of computing resources. In order to
make checkpoints at appropriate and economical times, the number of checkpoints must be
minimized. All computing resources with a high fault index history set their checkpoints using
the arithmetic model (eqn 1), while resources with a low fault index use the geometric checkpoint
model (eqn 2):
JendT = JstartT + (n − 1) Ci    (1)

JendT = JstartT × Ci^(n − 1)    (2)

where JendT is the end time of execution of job j,
JstartT is the start time of execution of job j,
n is the number of checkpoints of job j within the execution time, and
Ci is the checkpoint interval at which checkpoints are set.

The last saved checkpoint (LScp) for any job, as distributed across the computing resources, can
be recovered by the model in eqn 3:

LScp = Σ(i=1..n) JendT_i    (3)
This shows that the module(s) of a job can be recovered from one or more computing resources at
the time of a crash or processor failure. Recovery and reassignment of failed jobs to new
computing resources is achieved by an exception handler.
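As an illustrative sketch (the function names are ours, not the paper's), the two interval models from eqns 1 and 2 can be tabulated in Python. A resource with a high fault index would use the dense arithmetic schedule, while a resource with a low fault index would use the widening geometric schedule:

```python
def arithmetic_checkpoints(start, interval, count):
    """Checkpoint times under eqn 1: JendT = JstartT + (n - 1) * Ci."""
    return [start + (n - 1) * interval for n in range(1, count + 1)]

def geometric_checkpoints(start, interval, count):
    """Checkpoint times under eqn 2: JendT = JstartT * Ci**(n - 1)."""
    return [start * interval ** (n - 1) for n in range(1, count + 1)]

# Job starting at minute 1 with a 2-minute checkpoint interval, 8 checkpoints:
print(arithmetic_checkpoints(1, 2, 8))  # [1, 3, 5, 7, 9, 11, 13, 15]
print(geometric_checkpoints(1, 2, 8))   # [1, 2, 4, 8, 16, 32, 64, 128]
```

The contrast is visible immediately: arithmetic checkpoints stay evenly spaced (more checkpoints per unit of execution time, hence more overhead but less lost work on a crash), while geometric checkpoints spread exponentially apart.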
Exception Handler Algorithm
Step 1: Start
Step 2: If a processor Pi is recovering from a failure, its checkpoint is the last message saved to stable storage for processor Pi
Step 3: For k = 1 to N (N is the number of processors holding the same job), do: for every processor Pj that holds the same job,
Step 4: invoke the exception handler
Step 5: Stop
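A minimal Python sketch of this recovery path, under assumed names (Job, flaky, stable, and ResourceFailure are illustrative stand-ins, not components defined by the paper): a crash raises an exception, the handler catches it, and the job is reassigned to another matching resource starting from its last saved checkpoint rather than from scratch.

```python
class ResourceFailure(Exception):
    """Raised when a computing resource crashes mid-execution."""

class Job:
    def __init__(self):
        self.last_saved_checkpoint = 0  # minutes of progress on stable storage

def run_with_recovery(job, resources):
    """Dispatch the job; on a crash, reassign it from the last checkpoint."""
    for execute in resources:
        try:
            return execute(job, resume_from=job.last_saved_checkpoint)
        except ResourceFailure:
            continue  # exception handler: reassign to the next matching resource
    raise RuntimeError("no resource could complete the job")

def flaky(job, resume_from):
    job.last_saved_checkpoint = 6  # checkpointed 6 minutes of work, then crashed
    raise ResourceFailure

def stable(job, resume_from):
    return f"resumed at minute {resume_from}"

job = Job()
print(run_with_recovery(job, [flaky, stable]))  # resumed at minute 6
```

Only the work done after the last checkpoint is lost; the second resource picks the job up at minute 6 instead of minute 0.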
4.0 CHECKPOINT MODEL SIMULATION
A submitted job has an estimated start and end time, with checkpoints set at a fixed interval
between them. To test checkpoint overhead in the system, the study simulated checkpoints based
on the proposed models; the model chosen depends on the fault index of the computing resource.
The performance of the two models is shown in Figure 2. In the simulation, the job starts at the
first minute, the checkpoint interval is set at two minutes, and eight checkpoints are taken for
each model.
Figure 2: Graph of checkpoint interval vs. time of execution
Though the same number of checkpoints was set, the execution time of the jobs differs as
checkpointing progresses. At the 4th checkpoint, model I (eqn 1) has executed for 6 minutes while
model II (eqn 2) has executed for 8 minutes. At the 6th checkpoint, model I has executed for 10
minutes while model II has executed for 32 minutes. Both models exhibit closely related behavior
up to the fourth checkpoint interval, where the execution time and the number of checkpoints
made are almost the same. Though the numbers of checkpoints made are the same, the time used
in executing the job varies: model II
(eqn 2) takes longer than model I (eqn 1). The 8th checkpoint occurs when the job has executed
for 14 minutes under model I and 128 minutes under model II. This implies that model I makes
checkpoints more frequently than model II, thereby incurring more checkpoint overhead. In terms
of recovery, model II will do more re-computing of a failed job because its checkpoints are farther
apart in time. Conclusively, model I is more suitable for computing resources that have a high
fault index (mission-critical jobs) and poor uptime, while model II is suitable for computing
resources with more uptime and fewer processor crashes.
Recovery uses the exception handler and proceeds from the last checkpoint saved on stable
storage. The exception-handling model handles faults and reallocates jobs automatically to other
resources without the jobs starting again from scratch, because each job restarts from its last saved
checkpoint. The exception handler seamlessly returns failed jobs to the scheduler, making
recovery inexpensive and incurring little overhead per job.
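The execution times quoted above can be reproduced directly from eqns 1 and 2 with a start time of 1 minute and Ci = 2 minutes; note that the text reports the elapsed time (n − 1)·Ci for model I and the eqn-2 end time for model II, a small offset convention we follow here:

```python
start, interval = 1, 2  # job starts at minute 1; checkpoint interval Ci = 2 minutes

# Elapsed execution time at the n-th checkpoint under model I (eqn 1 offset),
# and end time at the n-th checkpoint under model II (eqn 2), for n = 1..8:
model_I = [(n - 1) * interval for n in range(1, 9)]
model_II = [start * interval ** (n - 1) for n in range(1, 9)]

print(model_I[3], model_I[5], model_I[7])    # 4th, 6th, 8th checkpoint: 6 10 14
print(model_II[3], model_II[5], model_II[7]) # 4th, 6th, 8th checkpoint: 8 32 128
```

These match the 6/8, 10/32, and 14/128 minute figures discussed in the simulation section.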
5.0 CONCLUSION
To minimize the checkpoint overhead incurred by resources, the study proposed two models: one
arithmetic (model I) and the other geometric (model II). Based on computing-resource history, the
geometric checkpoint is good for a resource that has good uptime with few withdrawals and
crashes, while the arithmetic checkpoint is good for mission-critical jobs or resources known to
have a high fault index. An exception handler was used to reassign jobs that were unable to finish
because of processor crashes to new computing resources that meet their requirements. Being of
simple design, the exception handler incurs little overhead and recovers jobs seamlessly.
REFERENCES
DILBAG, S., JASWINDER, S. AND AMIT, C., 2012. Evaluating overheads of integrated
multilevel checkpointing algorithms in cloud computing environment. International Journal of
Computer Network and Information Security, 4(5), pp.29-38.
ELNOZAHY, E. A., ALVISI, L., WANG, Y.M. AND JOHNSON, D. B., 2002. A survey of
rollback-recovery protocols in message-passing systems. ACM Computing Surveys, 34(3), pp.375-408.
GANGA, K. AND KARTHIK, S., 2012. A survey on fault tolerance in workflow management
and scheduling. IJARCET, 1(8), p.176.
GARG, R. AND SINGH, A.K., 2011. Fault tolerance in grid computing: state of the art and open
issues. International Journal of Computer Science & Engineering Survey, 2(1), p.88. doi:
10.5121/ijcses.2011.2107.
LI, Y. AND MASCAGNI, M., 2003. Improving performance via computational replication on a
large-scale computational grid. Proceedings of the Third International Symposium on Cluster
Computing and the Grid, Tokyo, Japan.
MANGESH, B. AND URMILA, S. 2014. Checkpointing based fault tolerant job scheduling
system for computational grid. International Journal of Advancements in Technology, 5(2).
NAKKEERAN, M. 2015. A survey on task checkpointing and replication based fault tolerance in
grid computing. International Research Journal of Engineering and Technology (IRJET), 3(9),
pp.832-838.
NDEYE, M. N., PIERRE S. AND OUSMANE, T. 2003. Performance comparison of hierarchical
checkpoint protocols grid computing. International Journal of Interactive Multimedia and
Artificial Intelligence, 1(5), pp.46-53.
PANKAJ, G., 2011. Grid computing and checkpoint approach. International Journal of Computer
and Management Studies, 11(1).
PAUL, T. AND JIE, X. 2003. Fault tolerance within a grid environment, University of Durham
DHI 3LE, United Kingdom.
RAMESH, B. 2016. An optimal checkpoint automation mechanism for fault tolerance in
computational grid. International Journal of Scientific & Engineering Research, 7(2), pp.1012-
1019.
SAJADAH, K. 2011. Checkpointing of parallel applications in a grid environment, A dissertation
submitted to the University of Westminster for the degree of Master of Philosophy, Centre for
Parallel Computing, School of Electronics and Computer Science, University of Westminster,
London, United Kingdom.
SAJJAD, H. AND BABAR, N. 2016. Fault tolerance in computational grids: perspectives,
challenges, and issues, SpringerPlus, 5(1), Nov 18. doi: 10.1186/s40064-016-3669-0.
TANENBAUM, A.S. AND VAN STEEN, M., 2002. Distributed Systems: Principles and
Paradigms. Prentice Hall.
Keystroke Dynamics Authentication for a Web-Based Sales and
Stock Solution
Oluwakemi C. Abikoye and Bilikis T. Sanni
Department of Computer Science, University of Ilorin, Ilorin, Nigeria
Corresponding author: [email protected]
ABSTRACT
The importance of security in any financial sector cannot be over-emphasized: single-factor
authentication (username and password) is no longer sufficient, so multifactor authentication is
required to secure financial systems. In this work, keystroke dynamics, a behavioral biometric, is
proposed as an additional security measure. Keystroke dynamics is an important authentication
tool with a high level of security that can be implemented to enhance other forms of
authentication. This is achieved using a static keystroke approach on passwords, a statistical
feature-extraction method to analyze keystroke timing from hold time and latency, and direct
comparison with thresholds based on password length for user verification and authorization. The
system is used in a web-based small and medium scale
Author’s Address: Oluwakemi C. Abikoye and Bilikis T. Sanni, Department of Computer Science, University of Ilorin, Ilorin, Nigeria. Corresponding author: [email protected]. "Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than IJCIR must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee." © International Journal of Computing and ICT Research 2008. International Journal of Computing and ICT Research, ISSN 1818-1139 (Print), ISSN 1996-1065 (Online), Vol. 11, Issue 1, pp. 84 - 113, June 2017.
enterprises (SMEs) sales and stock solution, and it provides a stronger security measure than
conventional password authentication.
Keywords: Keystroke Dynamics, Password, Security, Multifactor Authentication, Biometric,
Small and Medium Scale Enterprises (SMEs)
IJCIR Reference Format: Oluwakemi C. Abikoye and Bilikis T. Sanni. Keystroke Dynamics
Authentication for a Web-Based Sales and Stock Solution, Vol. 11, Issue 1, pp 84 - 113.
http://ijcir.mak.ac.ug/volume11-issue1/article6.pdf
1. INTRODUCTION
In this research work, a small and medium scale enterprise solution with improved security is
developed, using keystroke dynamics biometrics to enhance the usual password authentication.
Keystroke dynamics authentication is a multifactor authentication based on an individual's typing
style that can be used alongside other existing forms of security that involve typing, such as the
password used in this work. Keystroke dynamics is a very secure behavioral biometric, since each
individual has a different typing style (Sawant et al., 2013). The purpose of this behavioral
biometric is to serve as a two-step security verification for user logins, facilitating a higher level
of security during verification and authentication so that only authorized users have access to the
information in the software.
Authentication is the process of identifying and verifying an individual for security purposes.
User authentication is classified into three main categories (Modi, Upadhyay, and Thakor 2014;
Abualgasim and Osman 2011): knowledge-based, object- or token-based, and biometrics-based
authentication. Knowledge-based authentication is based on something one knows and is
characterized by secrecy, e.g. Personal Identification Number (PIN) codes and passwords. PINs
and passwords are a user's unique set of letters, numbers, or alphanumeric characters used for
security, usually registered together with a username.
Figure 1: Classification of Authentication
Token-based authentication is based on something one possesses, such as a keycard or badge,
while biometrics-based authentication is based on an individual's physical or behavioral features.
Biometrics is derived from two Greek words: bio, meaning "life", and metric, meaning "to
measure". In computing, biometric techniques are mainly used for user authentication, for measuring and
analyzing an individual's physiological or behavioral features, such as keystrokes, gait,
fingerprints, signature, lip movement, voice patterns, typing patterns, facial patterns, and
palm-print, for identification and verification purposes.
1.1 Types of Biometrics
There are two main categories of biometric-based authentication:
Physiological biometrics
Behavioral biometrics
Physiological Biometrics
Physiological biometrics measures an individual's physical features of a certain part of the body,
e.g. the human eye (iris and retina scanning), voice recognition, fingerprints, face recognition,
and palm-print. Physiological biometrics is normally considered more effective than behavioral
biometrics because such features are fairly consistent over time and are unique, but it is very
expensive and requires special equipment.
Behavioral Biometrics
Behavioural biometrics is based on what a person does, or how the person uses the body, e.g.
keystroke dynamics, gait, and signature. Behavioural biometrics is considered weaker than
physiological biometrics because behavioural traits are less constant and can vary over time.
Although behavioural traits are unstable, they are still used successfully because it remains
difficult to imitate another person's behaviour, such as a typing rhythm. Keystroke dynamics is
one of the cheapest biometric methods to implement, since the only hardware required is the
keyboard.
1.2 Keystroke Dynamics
Keystroke dynamics is one of several innovative technologies used to automate the process of
authenticating an individual based upon unique and personal behaviour. Keystroke dynamics
analyzes how a user types at a terminal by observing the keyboard, in order to identify the user
from his or her usual typing rhythm (Sawant et al. 2013). It also involves inspecting the timing
features of an individual's typing in order to identify patterns in their keystroke data. This may
include the analysis of features such as the pressing time, releasing time, and latency of all
authorized users, extracted while they type their password during the registration process.
Most studies have used statistical methods and neural networks for keystroke-based
authentication; data collection needs only a keyboard and software (Sawant et al. 2013). Every
keystroke mechanism depends on the key press and key release durations, which are converted
into security parameters (Singh and Arya, 2010).
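The two timing features named above can be sketched as follows; the event format (key, press time, release time, in milliseconds) and the sample values are hypothetical, chosen only to illustrate the computation:

```python
def hold_times(events):
    """Dwell time: how long each key is held down (release - press)."""
    return [release - press for _key, press, release in events]

def latencies(events):
    """Flight time: gap between releasing one key and pressing the next."""
    return [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]

# Hypothetical keystrokes for the first three characters of a password:
sample = [("p", 0, 95), ("a", 180, 260), ("s", 350, 430)]
print(hold_times(sample))  # [95, 80, 80]
print(latencies(sample))   # [85, 90]
```

These per-key vectors are the raw material for the statistical comparison against a user's enrolled typing profile.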
1.3 Small and Medium Enterprises
There have been related studies in the past, and research is still ongoing; reviews have shown that
integrated business software covers all relevant aspects of an enterprise (Davenport, 1998). Small
and Medium Scale Enterprises (SMEs) provide the cornerstones on which any country's economic
growth and stability rest, so there is a need to take every available security measure on such
systems. This work uses keystroke dynamics biometric authentication as an addition to the usual
login process. Experts have recommended that modern enterprise systems should be
service-oriented, constructed in modular form, and communicate via Web services (Schmitt,
2007). This keystroke dynamics authentication research for SMEs is cost-effective, which helps
to address the problems analyzed by Scott & Vessey (2002). Winkelmann and Klose (2008) stated
that in small and medium companies in particular, the introduction of a new system or the
migration from old to new platforms is accompanied by high costs in relation to turnover, and
naturally by associated risks;
integrating keystroke dynamics authentication into such a system attracts no additional cost. It is
almost free in terms of hardware, because the most significant input device used is the keyboard,
which is already part of the system's existing hardware.
2. RELATED WORK
Keystroke dynamics authentication, a behavioral biometric, has drawn a lot of research interest,
and a number of techniques covering its different aspects have been proposed. Several methods
and modules have been suggested, with special emphasis on data collection, feature extraction,
storage, and the matching process. Most of the papers used statistical methods and neural
networks for keystroke-based authentication; data collection needs only a keyboard and simple
software (Sawant et al. 2013). Avasthi and Sanwal (2016) carried out a comprehensive study of
keystroke dynamics covering identification, verification, methods, and performance measures
and metrics. Modi, Upadhyay, and Thakor (2014) compared existing systems on features such as
the authentication provided, classification method, outlier handling, keystroke hardening,
scalability, False Acceptance Rate (FAR), False Rejection Rate (FRR) ratio, and limits on ID and
password text.
Giot, El-Abed, and Rosenberger (2014) identified two main families of keystroke dynamics
methods making use of statistical techniques: (1) static methods, where the user is asked to type
the same string several times in order to build a model, and (2) dynamic methods, which allow the
authentication of individuals independently of what they are typing on the keyboard.
Sawant et al. (2013) proposed two algorithms to implement keystroke authentication efficiently: the Gaussian Probability Density Function and the Direction Similarity Measure. The two algorithms were fused by different methods, among which the AND rule showed the best result. In the first step, the mean, the standard deviation and a weight formula were used to calculate the weights; in the second step, the login-time keystroke data were compared with the registered mean ± standard deviation to produce a match count. A Monte Carlo approach was used for data collection; the time duration between key press and key release was converted into security parameters, while a parallel Decision Tree (DT) was used to identify the genuine user.
Shukla and Solanki (2013) focused on the problem of recognizing human emotion in naturalistic settings. Users' behavioral or physiological patterns were collected and mapped to emotional categories. Six basic emotional classes were considered: confidence, sadness, nervousness, happiness, tiredness and hesitation. The study aimed to develop a web application that recognizes human emotional states using four modules: data collection, which consists of gathering and labeling users' keystroke and mouse data; feature extraction and attribute selection, which reduces the number of attributes to facilitate classification; data labeling, in which the data provided by the humans were labeled while questioning the best accuracy of the trained system; and classification, in which fuzzy logic was used to recognize the user's emotional state.
Bajaj and Kaur (2013) applied a statistical method to measure the mean and average typing times and developed an application in Java. Some studies applied a selection process based on normality statistics. Abernethy and Rai (2012) captured time values associated with keystroke events; keystroke duration and digraph latency metrics were calculated from the raw data files, Artificial Neural Networks were used for classification, and results were reported as the false acceptance rate (FAR) and false rejection rate (FRR). Karnan and Krishnaraj (2012) implemented a statistical method on various extracted keystroke dynamics features: (i) duration (the amount of time a key is pressed); (ii) latency (the difference in time between two key events); (iii) mean and standard deviation (of each typed PIN); (iv) press-release (the latency between pressing and releasing a key); and (v) digraph.
Al-Jarrah (2012) used password typing rhythm to distinguish the genuine user from an unauthenticated one. The study comprised two phases, a training phase and a testing phase, in which the key down/up time of every key and the latency between keys were calculated. The comparison in the testing phase checks the value in binary: the score is 1 if the value is within a close distance threshold of the stored median, and 0 otherwise. The use of benchmark data in studying the detection performance of authentication systems is a sound scientific approach. A novel technique for free-text keystroke dynamics was proposed by Singh and Arya (2011): keys were classified into two halves (left and right) and four lines, giving eight groups in total, and a timing vector was used to calculate the flight time and to distinguish fraudulent from genuine users. The results obtained were good and very supportive.
In Giot et al. (2009), a comparative study was conducted considering the operational constraints of collaborative systems. The main scheme used includes: (i) enrollment, which consists of registering the user in the system and embeds the data capture, possibly some data filtering, the feature extraction, and the learning step; and (ii) the verification process, which performs the data capture, the feature extraction and the comparison with the biometric model. Depending on the study, this scheme may be slightly modified; some algorithms also adapt the user's model by adding the template created during the last successful authentication.
3. METHODOLOGY
Nigeria is still a developing country, especially when it comes to technology, and the majority of enterprises in Nigeria, particularly small and medium-scale enterprises, still operate a manual method of inventory. Staff records, stock records and daily sales are still recorded manually, especially in small enterprises, because of a fear of expansion. In this research, after a series of direct interviews, it was deduced that two things cause this fear: inadequate knowledge of computing technology, and the assumption that the new technology is too expensive. From information gathered by visiting some stores (small and medium enterprises), this project designs an inexpensive solution that does not require expert knowledge to handle the operations of SMEs in Nigeria effectively: a web-based application using PHP, JavaScript (jQuery) and HTML at the front end and MySQL at the back end for the database, with keystroke dynamics biometrics applied to the password, giving the organization (SME) a more secure system than a password alone. The cost of adding keystroke dynamics to any system is almost nil, since the only hardware required is the keyboard, which is already part of the computer system. This keystroke dynamics authentication security system is designed to eradicate the problems associated with the old system, in which unauthorized users could access an organization's data to perform fraudulent actions.
In this new system, even if an unauthorized user lays hands on a password (guessed, lost or stolen), the impostor will not be able to gain access. This is because, during the registration of new system users, the keystroke time is also calculated for every user as the password is typed. Even if an unauthorized user knows the password, he or she will not gain access to the new system because of the individual keystroke-time access control: to access the system, the inputted username, password and keystroke time must all match those in the database.
3.1 Basic Modules Used
Four main basic modules are used in keystroke dynamics in the papers reviewed (Modi, Upadhyay, and Thakor, 2014; Sawant, Nagargoje, Bora, Shelke, and Borate, 2013; Bajaj and Kaur, 2013; Abernethy and Rai, 2012):
Data Collection
Feature Extraction
Storage Database
Matching Process
3.1.1 Collection of Data
This is the enrollment phase, whereby authorized staff (both the admin staff and the sales representatives) register to create an account in the sales and stock web application. During registration, each user registers with a specific username and password, which are saved in the database against that user; while the password is being processed, the keystroke dynamics program also performs its operation using the static approach, in which the same username and password are maintained for each authorized user in order to build the user's model. In the static approach, the system checks the user only at authentication time. It provides additional security over the username/password pair and more robust user verification than a simple password. In this approach, the analysis is performed on the password as it is typed during registration, for every individual using the application. The minimum password length used is 4 and the maximum is 30. The static analysis is repeated at login time in conjunction with the password authentication methods. The username and password used during registration are saved against each user for verification at login. Data collection is done using the keyboard and software.
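As a sketch of this collection step, the timestamps can be gathered with a small recorder object; the browser wiring shown in the comments, and the names makeRecorder, onKeyDown and onKeyUp, are illustrative assumptions rather than part of the described system:

```javascript
// Minimal sketch (hypothetical helper names): records press/release
// timestamps for each character of the password as it is typed.
function makeRecorder() {
  return {
    press: [],   // press[i]: timestamp of the i-th key press
    release: [], // release[i]: timestamp of the i-th key release
    onKeyDown(t) { this.press.push(t); },
    onKeyUp(t) { this.release.push(t); }
  };
}
// In the browser this would be wired to the password field, e.g.:
//   field.addEventListener("keydown", e => rec.onKeyDown(e.timeStamp));
//   field.addEventListener("keyup",   e => rec.onKeyUp(e.timeStamp));
```

The recorder only accumulates raw timestamps; feature extraction (Section 3.1.2) is then computed from the two arrays.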
3.1.2 Feature Extraction
The keystroke data are derived from the password using the system clock. The key press time (the time at which the key is pressed down), the key release time (the time at which the key is released), the latency (the time between two consecutive keys) and the password length (the total number of keys in the password) are captured for each user (staff member) while each character of the password is typed during registration. The passwords are further analyzed using a statistical method to find the keystroke time, which is stored in the database for future verification of each registered user.
Latency is the interval between two successive keys.
𝐿 = 𝐾2𝑃 − 𝐾1𝑅
Where 𝐿 = Latency, 𝐾1𝑅 = the time Key 1 is Released, and 𝐾2𝑃 = the time Key 2 is Pressed.
Duration is the amount of time a key is held down, i.e. the hold time for each key of an individual password: the difference between the time a key is released and the time it is pressed.
𝐷 = 𝐾1𝑅 − 𝐾1𝑃
Where 𝐷 = Duration, 𝐾1𝑅 = the time Key 1 is Released, and 𝐾1𝑃 = the time Key 1 is Pressed.
Digraph is the duration or hold-time of a key plus latency between successive keys.
𝐷𝐼 = 𝐷_𝐾1 + 𝐿_𝐾1,2
Where 𝐷𝐼 = Digraph, 𝐷_𝐾1 = the Duration of Key 1, and 𝐿_𝐾1,2 = the Latency between Key 1 and Key 2.
Keystroke Time is the average time taken to type the whole password, considering the key press times, key release times, latencies and password length.
Password Length is the total number of keys in the password.
𝐾𝑇 = (𝐷𝐼_𝐾1 + 𝐷𝐼_𝐾2 + ⋯ + 𝐷𝐼_𝐾𝑛) / 𝑃𝐿
Where 𝐾𝑇 = Keystroke Time, 𝐷𝐼_𝐾𝑖 = the Digraph value for Key 𝑖 (𝑖 = 1, …, 𝑛), and 𝑃𝐿 = Password Length.
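The four formulas above can be sketched as one function (a minimal illustration; the function name and the choice of arrays are assumptions, and all timestamps are taken to be in the same unit):

```javascript
// Compute duration D, latency L, digraph DI and keystroke time KT
// from per-key press/release timestamps (one entry per password
// character, in typing order).
function keystrokeFeatures(press, release) {
  const n = press.length;                                 // PL: password length
  const duration = press.map((p, i) => release[i] - p);   // D  = K1R - K1P
  const latency = [];                                     // L  = K2P - K1R
  for (let i = 0; i < n - 1; i++) latency.push(press[i + 1] - release[i]);
  const digraph = latency.map((l, i) => duration[i] + l); // DI = D + L
  const keystrokeTime =                                   // KT = sum(DI) / PL
    digraph.reduce((s, d) => s + d, 0) / n;
  return { duration, latency, digraph, keystrokeTime };
}
```

Following the KT formula above, the digraph sum is divided by the password length PL.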
Figure 2: Feature Extraction
3.1.3 Storage Database
The database stores all the information of the new system in a systematic way, which makes updating, retrieval and maintenance easy. After feature extraction and the statistical calculations on the features, the keystroke time is stored in the database, from which it is retrieved for login verification. The database houses all the business information: inventories of staff, stock and daily sales (the name of the goods, the quantity of goods sold, the unit price of each item sold, and the cost of goods sold, i.e. the unit price multiplied by the quantity, recorded against each sales representative).
3.1.4 Matching Process
In the matching process, genuine users are identified by direct comparison. During login, the system compares the username and password to the pre-registered data in the database; if they do not correspond, it pops up an "incorrect username or password" message with a Back button to restart the process. Otherwise, as the user types the password to log in, the keystroke data are also checked: the keystroke time of the present login is compared with the keystroke time stored in the database. The system uses a threshold that depends on the password length. The default threshold on the keystroke time (in microseconds) is ±10 for password lengths of 4 to 10 characters; for lengths of 11 to 20 characters the threshold is ±15, and for lengths of 21 to 30 characters it is ±20. These thresholds are used when verifying the present keystroke time against the saved keystroke time. If the present keystroke time falls within the given range, the user is able to log in; otherwise an "access denied" message pops up and directs the user back to the login page to enter the password again and repeat the process.
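The length-dependent threshold described above can be sketched as follows (the function name is an assumption):

```javascript
// Threshold on the keystroke time (in microseconds) as a function of
// password length: 4-10 chars -> ±10, 11-20 -> ±15, 21-30 -> ±20.
function thresholdFor(passwordLength) {
  if (passwordLength >= 21) return 20;
  if (passwordLength >= 11) return 15;
  return 10; // default, for lengths 4-10
}
```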
3.2 Proposed Algorithm
3.2.1 Algorithm for Creating New User
Step 1: The user opens the create-account page.
Step 2: Input the username, password, retyped password and full name:
input name="username" type="text" id="username" size="30"
input name="pass1" type="password" id="pass1" size="30"
input name="pass2" type="password" id="pass2" size="30"
input name="name" type="text" id="fullname" size="30"
Step 3: Select the branch and role from the branch and role tables in the database.
Step 4: Submit.
Step 5: The system confirms that the password and the retyped password are the same:
if (password == retyped password)
{ create keystroke dynamics for the password }
else { "Your passwords do not match, please try again!!!" }
Step 6: The system extracts the keystroke features from the password to create the keystroke time:
the key press time (the time at which the key is pressed down) is extracted,
the key release time (the time at which the key is released) is extracted,
the latency (the time between two consecutive keys) is extracted,
the password length (the total number of keys in the password) is extracted,
and the keystroke time is calculated using the statistical method.
Step 7: Create and save the authorized user's username, password and keystroke time in the database against the individual username.
Step 8: Display the account-creation page.
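Steps 5 to 7 can be sketched together as a single function (an illustrative sketch only; enroll and its argument names are assumptions, and the keystroke time follows the mean-digraph formula of Section 3.1.2):

```javascript
// Given matching passwords and the captured timestamps, build the
// record that would be saved against the user in the login table.
function enroll(username, pass1, pass2, press, release) {
  if (pass1 !== pass2) return { error: "Your passwords do not match" };
  const n = press.length; // PL: password length
  let sum = 0;
  for (let i = 0; i < n - 1; i++) {
    const duration = release[i] - press[i];      // hold time of key i
    const latency = press[i + 1] - release[i];   // gap to the next key
    sum += duration + latency;                   // digraph for key pair i
  }
  // keystroke time = sum of digraphs / password length (Section 3.1.2)
  return { username, pass: pass1, keystrokeTime: sum / n };
}
```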
3.2.2 Algorithm for Feature Extraction
Begin;
  Input password; // character by character
  int size = strlen(password);
  if (size > 0)
  {
    // extract key press and key release times
    int i = 0;
    float timepress[size], timerelease[size];
    do {
      timepress[i] = timestamp(evt.keypress);
      timerelease[i] = timestamp(evt.keyrelease);
      i++;
    } while (i < size);
    // calculate duration, latency and digraph
    float duration[size], latency[size], digraph[size];
    for (j = 0; j < size - 1; j++)
    {
      duration[j] = timerelease[j] - timepress[j];
      latency[j] = timepress[j + 1] - timerelease[j];
      digraph[j] = duration[j] + latency[j];
    }
    // calculate the keystroke time from the digraphs (Section 3.1.2)
    keystroke = 0;
    for (i = 0; i < digraph.length; i++)
    { keystroke = keystroke + digraph[i]; }
    KeystrokeTime = keystroke / size; // size = PL, the password length
  }
  else
  { "go back to the registration page" }
End
3.2.3 Algorithm for User Authentication and Keystroke Verification
The user opens the login page.
Begin;
  // test for username and password match
  if ((user.pass == password) && (user.name == username))
  {
    // set the threshold according to the password length
    int threshold = 10; // default threshold, for password lengths 4-10
    int length = strlen(user.pass);
    if (length >= 11 && length <= 20)
    { threshold = 15; }
    if (length >= 21 && length <= 30)
    { threshold = 20; }
    // test the keystroke time
    if ((user.keystroke + threshold >= keystroke) && (user.keystroke - threshold <= keystroke))
    { "Login successful" }
    else
    { "Access denied due to keystroke conflict" }
  }
  else
  { "Password or username incorrect" }
End
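The algorithm above can be sketched as a runnable function (the field names on the stored user record are assumptions):

```javascript
// Verify a login attempt against the stored record: username and
// password must match, and the attempt's keystroke time must fall
// within the length-dependent threshold of the stored keystroke time.
function verifyLogin(stored, attempt) {
  if (stored.username !== attempt.username || stored.pass !== attempt.pass) {
    return "Password or username incorrect";
  }
  let threshold = 10;                 // default, for lengths 4-10
  const len = stored.pass.length;
  if (len >= 11 && len <= 20) threshold = 15;
  if (len >= 21 && len <= 30) threshold = 20;
  const ok = attempt.keystroke >= stored.keystroke - threshold &&
             attempt.keystroke <= stored.keystroke + threshold;
  return ok ? "Login successful" : "Access denied due to keystroke conflict";
}
```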
3.2.4 Sales and Stock Processes
This step follows login: the user clicks on the process he or she wants to perform, follows the steps and submits afterwards. The following pages on the index page perform the named operations:
View Transaction
Add Product
Daily Report
Transaction list
Change password
Create User Account
Stock Category
View stock
Branch
Logout
3.3 Relational Database Model for the System
(pk = primary key; nn = not null)
Login
  User_id         int(30)        pk
  Username        varchar(30)    nn
  Pass            varchar(30)    nn
  Name            varchar(30)    nn
  Role            varchar(30)    nn
  Branch_id       varchar(30)    nn
  Keystroke_Time  (30)           nn
Branch
  Branch_id       mediumint(40)  pk
  Name            varchar(40)    nn
  Branch_manager  varchar(40)
Customer
  Id              int(50)        pk
  Fullname        varchar(50)
  Gender          varchar(30)    nn
  City            varchar(40)    nn
  State           varchar(40)    nn
  Telephone       varchar(40)    nn
  Address         varchar(50)    nn
Category
  Cat_id          mediumint(9)   pk
  Category        varchar(40)    nn
Products
  Product_id      mediumint(30)  pk
  Product_name    varchar(50)    nn
  Unit_price      varchar(30)    nn
  Category        varchar(40)    nn
  Quantity        varchar(40)    nn
  Branch          varchar(40)    nn
  Barcode         varchar(40)    nn
Purchase order
  Transact_id     mediumint()    pk
  Date            date           nn
  Customer_name   varchar(50)    nn
  Total           varchar(50)    nn
  Paid            varchar(50)    nn
  Acc_balance     varchar(50)    nn
  Invoice         varchar(5000)  nn
  Operator        varchar(50)    nn
  Item1           varchar(100)   nn
  Item2           varchar(100)   nn
  Item3           varchar(100)   nn
  Item4           varchar(100)   nn
  Item5           varchar(100)   nn
  Item6           varchar(100)   nn
  Item7           varchar(100)   nn
  Item8           varchar(100)   nn
  Item9           varchar(100)   nn
  Item10          varchar(100)   nn
  Branch          varchar(500)   nn
  Trans_session   varchar(30)    nn
  Qty             varchar(30)    nn
  Unit_price      varchar(30)    nn
Stock
  Id              mediumint(50)  pk
  Item            varchar(40)    nn
  Unit            varchar(30)    nn
  Qty             varchar(30)    nn
  Price           varchar(40)    nn
  Amt_paid        varchar(40)    nn
  Balance         varchar(40)    nn
Login Table
The user ID, username, password, user privilege (role), branch and keystroke time are stored in the login table.
4 RESULTS AND DISCUSSION
Create Account
This page is used for the creation of new user accounts by users with admin-level privilege.
In this module, the keystroke features are extracted from the password typed by the authorized user when creating the account, and the Keystroke Time is calculated statistically and saved in the database against each user.
Figure 3: The create account page
Branch: the branch selected will be profiled to the new user.
Admin: selected if the new user is an admin user.
Creation of Keystroke Dynamics Authentication
This page shows the initial keystroke data created for an individual user, which will be stored in the database after the account is created.
Figure 4: The keystroke created
Figure 5(a): Account creation for user 1
Figure 5(b): Keystroke dynamics authentication created for user 1
Figure 6 (a): Account creation for user 2
Figure 6 (b): Keystroke dynamics authentication created for user 2
Login Page
The login page is where users are identified and authenticated through the username and password they present. All other pages of the application refer the user to the login page for authentication.
Figure 7: The application login page
Username: enter the username used at registration.
Password: a secret word or alphanumeric string used to gain admission into the system, which can only be created by an authorized user.
Error Page for Wrong Login Details
This page shows an error informing the user that the username or password does not correspond with what was initially stored in the database. The system proceeds to keystroke verification only if the username and password correspond.
Figure 8: Error page for wrong username or password
Back: directs the user back to the login page.
Error Page for Keystroke Dynamics
This page shows an error informing the user of the reason for being unable to log in, comparing the details stored in the database against those from the present login. If a malicious transaction is suspected, even if the password is correct, the unauthorized user will not be able to log in, because of the keystroke data saved against each user in the database; the present keystroke data of an individual user may be more or less than the initially stored keystroke data allow. The page takes the user back to re-login.
Figure 9: Error page for keystroke dynamics out of threshold
Back: directs the user back to the login page.
Figure 10(a): User 1 login page
Figure 10(b): User 1 error page for keystroke dynamics authentication.
Figure 11 (a): User 2 login page
Figure 11(b): User 2 error page for keystroke dynamics authentication.
Home Page
This page is the initial page of the website and the point of entry to all the information stored within the application. It contains links to all other available content and summarizes the aim of the work.
Figure 12: Home page
It contains the following options:
Change password: where any staff member can change the secret word, for a new user or when the user feels it has been exposed.
Create account: creates a new account for a staff member.
Manage users: where the admin user can delete a staff ID when a staff member resigns or is fired.
Stock category: the type of goods available for sale.
View stock: where staff can view the goods available in stock.
Transaction: the day-to-day sales.
Add product: used to add a product to the stock.
Daily report: shows the daily journal; an admin user can check all staff transactions, while a sales representative can see only his own.
Barcode reader: records a transaction using a scanner or manual input of the product ID.
Figure 13(a): User 1 login page with correct username, password and keystroke
Figure 13(b): User 1 Home page after Keystroke Dynamics Authentication
Figure 14(a): User 2 login page with correct username, password and keystroke
Figure 14(b): User 2 Home page after Keystroke Dynamics Authentication
Change Password
This page allows a user to change the password to protect the account from fraudulent transactions. The keystroke data are extracted from the new password and saved against the user in the database. It is advisable for users to change their passwords every three months to protect their accounts and save the company from losses.
Figure 15: Change password page
Error Page for Change Password
This page appears if the new password and the confirmed password do not match.
Figure 16: Error page for mismatched passwords
Logout: exits the application.
Back: directs the user back to the change-password page.
5 CONCLUSION
In this work, a behavioral biometric, keystroke dynamics authentication, is proposed on top of the usual password authentication used for authorization. An inventory of the different steps in business enterprises was taken, with emphasis on stocking, transactions and staff processing, in order to develop an e-commerce system on which multi-factor authentication is implemented with the aim of enhancing security. The researchers implemented a behavioral biometric method, static keystroke dynamics authentication, on top of the usual password authentication at login, producing a multi-factor authentication system that helps tackle the variety of security frauds involved in the user login process, whether in-house or from outside, by using the typing style of authorized users, which has been shown to be unique, together with the password. The researchers thus propose a more secure method of authentication as well as a business solution for SMEs. The system can also detect whether a user is authorized or fraudulent, which makes it secure. Experimental results on performance and effectiveness demonstrate the usefulness of implementing the system, and the system is scalable for handling large volumes of transactions. The new system meets user needs, is easy to understand, and can be used by both educated (professional) and uneducated (local trader) users. Its qualities include a low cost of adding the new feature to an existing system, since the keyboard is the only hardware required, built-in ease of maintenance, and low execution and response times.
REFERENCES
ABERNETHY, M., AND RAI, S. 2012. Applying Feature Selection to Reduce Variability in
Keystroke Dynamics Data for Authentication Systems. In proceedings of the 13th Australian
Information Warfare and Security Conference, Perth, Western Australia, December 2012, 17-23.
ABUALGASIM, S.D., AND OSMAN, I. 2011. An Application of the Keystroke Dynamics
Biometric for Securing PINs and Passwords. World of Computer Science and Information Technology
Journal (WCSIT) ISSN: 2221-0741, 1(9), 398-404.
AL-JARRAH, M.M. 2012. An Anomaly Detector for Keystroke Dynamics Based on Medians
Vector Proximity. Journal of Emerging Trends in Computing and Information Sciences, 3(6), 988-
993.
AVASTHI, S., AND SANWAL, T. 2016. Biometric Authentication Techniques: A Study on
Keystroke Dynamics. International Journal of Scientific Engineering and Applied Science
(IJSEAS) ISSN: 2395-3470, 2(1), 215-221.
BAJAJ, S., AND KAUR, S. 2013. Typing Speed Analysis of Human for Password Protection
(Based On Keystrokes Dynamics). International Journal of Innovative Technology and Exploring
Engineering (IJITEE) ISSN: 2278-3075, 3(2), 88-91.
DAVENPORT, T. 1998. Putting the Enterprise into the Enterprise System. Harvard Business
Review, 76(4), 121-131.
EPP, C. 2010. Identifying Emotional States through Keystroke Dynamics. A thesis submitted to
the college of Graduate Studies and Research in partial fulfillment of the Requirements for the
Degree of Master of Science in the Department of Computer Science, University of Saskatchewan
Saskatoon, Canada.
FERREIRA, J., SANTOS, H., AND PATRAO, B. 2011. Intrusion Detection through Keystroke
Dynamics. In proceedings of the 10th European Conference on Information Warfare and Security,
Tallinn, Estonia, July 2011, 81-90.
GIOT, R., EL-ABED, M., AND ROSENBERGER, C. 2014. Keystroke Dynamics Authentication.
In Biometrics, InTech, 2011, Chapter 8, 157-182. ISBN 978-953-307-618-8. <10.5772/170647>, <hal-00990373>.
GIOT, R., EL-ABED, M., AND ROSENBERGER, C. 2009. Keystroke Dynamics Authentication
for Collaborative System. The IEEE International Symposium on collaboration Technologies and
Systems (CTS), Baltimore, United States, May 2009, IEEE Computer Society, 172-179.
<10.1109/CTS2009.5067478>, <hal-00432764>.
KARNAN, M., AND KRISHNARAJ, N. 2012. A Model to Secure Mobile Devices Using
Keystroke Dynamics through Soft Computing Techniques. International Journal of Soft
Computing and Engineering (IJSCE) ISSN: 2231-2307, 2(3).
MASOCHA, R., CHILIYA, N., AND ZINDIYE, S. 2011. E-banking adoption by customers in the
rural milieus of South Africa: A case of Alice, Eastern Cape, South Africa. African Journal of
Business Management, 5(5), 1857-1863.
MODI, J., UPADHYAY, H.G., AND THAKOR, M. 2014. Password less Authentication Using
Keystroke Dynamics: A Survey. International Journal of Innovative Research in computer and
Communication Engineering, 2(11), 7060-7064.
MONROSE, F., AND RUBIN, A.D. 1999. Keystroke Dynamics as a Biometric for Authentication.
Future Generation Computer Systems, 16(2000), 351-359.
SAWANT, M.M., NAGARGOJE, Y., BORA, D., SHELKE, S., AND BORATE, V. 2013.
Keystroke Dynamics: Review Paper. International Journal of Advanced Research in Computer and
Communication Engineering, 2(10), 4018- 4020.
SCOTT, J. E., AND VESSEY, I. 2002. Managing Risks in Enterprise Systems Implementations.
Communication of the ACM, 45(4), 74-81.
SHUKLA, P., AND SOLANKI, R. 2013. Web Based Keystroke Dynamics Application for
Identifying Emotional State. International Journal of Advanced Research in Computer
Communication Engineering, 2(11), 4489-4493.
SINGH, S., AND ARYA, K.V. 2011. Key Classification: A New Approach in Free Text Keystroke
Authentication System. In proceedings of the 3rd Pacific- Asia Conference on Circuits and
Communication Systems (PACCS), Wuhan, China, July, 2011, 1-5.
VANDOMMELE, T. 2010. Biometric authentication today. Seminar on Network Security.
Retrieved from: http://www.cse.hut.fi/en/publications/B/11/papers/vandommele.pdf