The copyright © of this thesis belongs to its rightful author and/or other copyright
owner. Copies can be accessed and downloaded for non-commercial or learning
purposes without any charge or permission. The thesis cannot be reproduced or
quoted as a whole without permission from its rightful owner. No alteration or
change in format is allowed without permission from its rightful owner.
TEST DATA GENERATION METHOD FOR DYNAMIC-STRUCTURAL
TESTING IN AUTOMATIC PROGRAMMING ASSESSMENT
MD. SHAHADATH SARKER
MASTER OF SCIENCE (INFORMATION TECHNOLOGY)
UNIVERSITI UTARA MALAYSIA
2016
Permission to Use
In presenting this thesis in fulfillment of the requirements for a postgraduate degree from
Universiti Utara Malaysia, I agree that the University Library may make it freely available for
inspection. I further agree that permission for copying of this thesis in any manner, in whole or in
part, for scholarly purposes may be granted by my supervisor(s) or, in their absence, by the
Dean. It is understood that any copying or publication or use of this thesis or parts thereof for
financial gain shall not be allowed without my written permission. It is also understood that due
recognition shall be given to me and to Universiti Utara Malaysia for any scholarly use which
may be made of any material from my thesis.
Requests for permission to copy or to make other use of materials in this thesis, in whole or in
part, should be addressed to:
Dean of Awang Had Salleh Graduate School of Arts and Sciences
Universiti Utara Malaysia
06010 UUM Sintok
Kedah Darul Aman
Malaysia
Declaration
I declare that this thesis is my own work and has not previously been submitted in any form for
another degree or diploma at any other university or other institute of tertiary education.
Information derived from the published and unpublished works of others has been
acknowledged in the text, and a list of references is given.
Md. Shahadath Sarker
2016
Abstrak
Penaksiran Pengaturcaraan Automatik atau dikenali sebagai APA telah diketahui sebagai suatu
kaedah yang berkesan dalam membantu para pensyarah untuk melaksanakan penaksiran dan
penggredan tugasan pengaturcaraan pelajar. Untuk melaksanakan pengujian dinamik dalam
APA, adalah menjadi suatu keperluan untuk menyediakan set data ujian melalui proses
penjanaan data ujian yang bersistematik. Sekiranya memfokus kepada bidang pengujian perisian,
pelbagai kaedah untuk mengautomasikan penjanaan data ujian telah dicadangkan.
Walau bagaimanapun, kaedah-kaedah ini jarang diguna pakai di dalam kajian
semasa APA. Terdapat kajian awalan yang cuba mengintegrasikan APA dan penjanaan data
ujian, tetapi masih terdapat jurang dari segi menerbitkan dan menjana data ujian untuk pengujian
dinamik-berstruktur. Untuk mengatasi jurang ini, kajian ini mencadangkan suatu kaedah
penjanaan data ujian untuk melaksanakan pengujian dinamik-berstruktur (atau dikenali sebagai
DyStruc-TDG). DyStruc-TDG direalisasikan sebagai alatan fizikal yang bertindak sebagai
penjana data ujian untuk menyokong fungsian APA. Dapatan daripada eksperimen kawalan yang
dilaksana berdasarkan reka bentuk one-group pre-test dan post-test mendapati bahawa DyStruc-
TDG memperbaiki kriteria kecukupan data ujian kebolehpercayaan (atau pengujian positif)
dalam penaksiran pengaturcaraan. Kaedah yang dicadangkan ini dijangkakan dapat
membantu para pensyarah kursus pengaturcaraan awalan untuk menerbitkan dan menjana data
ujian dan kes ujian untuk melaksanakan penaksiran pengaturcaraan automatik untuk pengujian
dinamik-berstruktur tanpa memerlukan pengetahuan khusus dalam reka bentuk kes ujian. Dengan
mengguna pakai kaedah ini sebagai sebahagian APA, beban para pensyarah secara tidak langsung
dapat dikurangkan secara berkesan oleh kerana penaksiran tipikal yang manual sentiasa
cenderung kepada ralat dan menjadi penyebab kepada ketidakseragaman.
Kata kunci: penjanaan data ujian, Penaksiran Pengaturcaraan Automatik, pengujian dinamik, pengujian berstruktur, path coverage, Modified Condition/Decision Coverage.
Abstract
Automatic Programming Assessment, or APA, has been recognized as a significant method
for assisting lecturers in performing automated assessment and grading of students' programming
assignments. To execute dynamic testing in APA, it is necessary to prepare a set of test
data through a systematic test data generation process. In the software testing research area in
particular, various automated methods for test data generation have been proposed.
However, they are rarely utilized in recent studies of APA. There have been limited early
attempts to integrate APA and test data generation, but research on deriving and generating test
data for dynamic structural testing is still lacking. To bridge this gap, this study proposes a
method of test data generation for dynamic structural testing (called DyStruc-TDG).
DyStruc-TDG is realized as a tangible deliverable that acts as a test data generator to support
APA. The findings of a controlled experiment based on a one-group pre-test and post-test
design show that DyStruc-TDG improves the reliability criterion (also called positive
testing) of test data adequacy in programming assessments. The proposed method is expected
to assist lecturers who teach introductory programming courses in deriving and generating test
data and test cases to perform automatic programming assessment, without requiring
specialized knowledge of test case design for structural testing. By utilizing this
method as part of APA, lecturers' workload can be reduced effectively, since typical
manual assessment is error-prone and leads to inconsistency.
Keywords: test data generation, Automatic Programming Assessment, dynamic testing, structural testing, path coverage, Modified Condition/Decision Coverage.
Acknowledgement
Firstly, it is my great pleasure to thank Allah (SWT), who has given me the opportunity to
complete this thesis. A billion thanks to all those who directly or indirectly contributed to the
development of this work and who influenced my thinking, behavior, and actions throughout the
study period.
I express my sincere gratitude to my supervisor, Dr. Rohaida Binti Romli, for her continuous
support, cooperation, valuable suggestions, and motivation in completing this thesis; she shared
her interesting ideas and thoughts and made this work clear and accurate.
Lastly, I would like to thank all of my friends for their encouragement and valuable suggestions
during my study period.
Table of Contents
Permission to Use............................................................................................................................i
Declaration.....................................................................................................................................ii
Abstrak..........................................................................................................................................iii
Abstract .........................................................................................................................................iv
Acknowledgement…………...………………………………………..........................................v
List of Tables .................................................................................................................................x
List of Figures ............................................................................................................................xiii
List of Abbreviations…………………………………………………………………………...xv
CHAPTER 1: INTRODUCTION ...............................................................................................1
1.1 Background of Study ..........................................................................................................1
1.2 Problem Statement ..............................................................................................................2
1.3 Research Questions .............................................................................................................4
1.4 Research Objectives ............................................................................................................4
1.5 Scope of the Study ..............................................................................................................5
1.6 Contribution of the Study....................................................................................................6
1.7 Organization of the Thesis ………………………………………………………………..7
CHAPTER 2: LITERATURE REVIEW ………………………………………………...........8
2.1 Introduction to Software Testing …………………………………………………………8
2.2 Dynamic Structural Testing ………………………………………………………………9
2.3 Test Adequacy Criteria …………………………………………………………………...9
2.3.1 Loop Coverage …………………………………………………………………..11
2.3.2 Statement Coverage ……………………………………………………...……...14
2.3.3 Path Coverage …………………………………………………………………...15
2.3.4 Modified Condition/Decision Coverage ……………………………...................17
2.3.5 Multiple Condition Coverage ……………………………………………...........19
2.3.6 Condition Coverage …………………………………………………….……….19
2.3.7 Branch Coverage …………………………………………………………...........20
2.4 Comparison of Different Code Coverage ……………………………………………….20
2.5 Automatic Programming Assessment (APA) …………………………………………...22
2.6 Automatic Test Data Generation (ATDG) ………………………………………………24
2.7 Integration of APA and ATDG ………………………………………………………….24
2.8 Summary ………………………………………………………………………………...26
CHAPTER 3: METHODOLOGY ……………………………………………………………27
3.1 Research Procedure ………………………………………………………………...........27
3.1.1 Theoretical Study ………………………………………………………………..28
3.1.2 Construction of Method …………………………………………………………28
3.1.3 Development of Prototype ………………………………………………………30
3.1.4 Evaluation and Conclusion ………………………………………………...........30
3.1.4.1 Controlled Experiment …………………………………………………30
3.1.4.2 Procedures of Experiment ………………………………………...........30
3.1.4.3 Subject of Experiment ………………………………………………….31
3.1.4.4 Instrument of Data Collection ………………………………………….31
3.1.4.5 Comparative Evaluation ………………………………………………..31
3.1.5 Threats to the Validity of the Experiments………………………………............32
3.1.5.1 Internal Validity…………………………………………………...........32
3.1.5.2 External Validity ……………………………………………………….32
3.1.5.3 Construct Validity ………………………………………………...........33
3.2 Summary ………………………………………………………………………………...33
CHAPTER 4: PROPOSED METHOD (DyStruc-TDG) ………………..…………………...34
4.1 Structural Test Data Generation …………………………………………………………34
4.1.1 Selection Control Structure (Path Coverage) ……………………………………34
4.1.2 Loop Control Structure (Path Coverage)………………………………………...39
4.1.3 Selection Control Structure (Modified Condition/Decision Coverage) …………40
4.1.4 Loop Control Structure (Modified Condition/Decision Coverage) ………...…...47
4.2 Structural Test Data Generation by Examples …………………………………………..50
4.2.1 Selection Control Structure ……………………………………………………...50
4.2.2 Loop Control Structure (Modified Condition/Decision Coverage) ……………..53
4.3 Implementation of DyStruc-TDG ……………………………………………………….54
4.4 Unit Testing ……………………………………………………………………………..57
4.5 Summary ………………………………………………..……………………………….57
CHAPTER 5: EVALUATION …………..………………………….………….……………..58
5.1 Descriptive Statistics …………………………………………………………………….58
5.1.1 Question (1): Current Method……………………………………………………58
5.1.2 Question (1): DyStruc-TDG Method…………………….………………………61
5.1.3 Question (2): Current Method……………………………………………………63
5.1.4 Question (2): DyStruc-TDG Method…………………….………………………65
5.1.5 Question (3): Current Method……………………………………………………67
5.1.6 Question (3): DyStruc-TDG Method…………………….………………………69
5.1.7 Question (4): Current Method……………………………………………………70
5.1.8 Question (4): DyStruc-TDG Method…………………….………………………73
5.2 Comparative Evaluation …………………………………………………………………75
5.3 Summary ………………………………………………………………………………...77
CHAPTER 6: CONCLUSION ……………………………………..…………………………78
6.1 Revisit of Research Questions and Objectives…………………………………………..78
6.1.1 Discussion on Research Question (1)……………………………………………79
6.1.2 Discussion on Research Question (2) ...…………………………………………80
6.2 Contribution of the Study .……………………………………………………………….81
6.3 Limitations and Recommendations ……………………………………………………...81
6.4 Conclusion ……………………………………………………………………………….82
REFERENCES ……………………………………………………………………....................83
APPENDIX A: Experiment Assignments …………………………………………………….87
APPENDIX B: Pre-Test and Post-Test Questions …………………………………………...91
List of Tables
Table 2.1 Test cases for loop coverage ………………………………………………………….12
Table 2.2 Test cases for while loop coverage ……………………………………………………13
Table 2.3 Test cases for do…while coverage ……………………………………………………14
Table 2.4 Test cases for statement coverage …………………………………………………….15
Table 2.5 Test cases for path coverage ………………………………………………………….17
Table 2.6 Test cases for MC/DC coverage ……………………………………………………...18
Table 2.7 Formula obtained from MC/DC coverage ……………………………………………19
Table 2.8 Differences of code coverage based on coverage criteria (Hayhurst et al., 2001)……21
Table 2.9 Summary of the trends of APA (Rohaida, 2014) ……………………………………..23
Table 2.10 Integration of APA and ATDG (extended from Rohaida, 2014)……………………26
Table 3.1 Research Procedure …………………………………………………………………...27
Table 4.1 Generated test cases based on valid and invalid input conditions ……………………35
Table 4.2 Generated test cases based on valid input conditions ………………………………...36
Table 4.3 Test cases for valid input conditions ………………………………………………….37
Table 4.4 Test cases for invalid input conditions ……………………………………………….37
Table 4.5 Test cases for all input conditions for selection-nested ………………………………38
Table 4.6 Test cases for only valid input conditions for selection-nested ………………………39
Table 4.7 Test cases for valid and invalid input conditions for loop control structures ………...39
Table 4.8 Test cases for valid input for loop control structures …………………………………40
Table 4.9 Truth Table for one option ……………………………………………………………40
Table 4.10 Generated test cases for one option …………………………………………………41
Table 4.11 Truth table for two options ………………………………………………………….41
Table 4.12 Independently effect option (A) …………………………………………………….41
Table 4.13 Independently effect option (B) ……………………………………………………..42
Table 4.14 Independently effect option (A) and (B) ……………………………………………42
Table 4.15 Truth table for three options …………………………...……………………………42
Table 4.16 Independently effect option (A) ……………………………………………………..43
Table 4.17 Independently effect option (B) ……………………………………………………..43
Table 4.18 Independently effect option (C) ……………………………………………………..43
Table 4.19 Independently effect option (A), (B) and (C) ……………………...………………..44
Table 4.20 Truth table for four options ………………...………………………………………..44
Table 4.21 Independently effect option (A) ……………………………………………………..45
Table 4.22 Independently effect option (B) ……………………………………………………..45
Table 4.23 Independently effect option (C) ……………………………………………………..45
Table 4.24 Independently effect option (D) ……………………………………………………..46
Table 4.25 Independently effect option (A), (B), (C) and (D) …………………………………..46
Table 4.26 Truth Table for one option (loop) …………………………………………………...47
Table 4.27 Generated test cases for one option (loop) …………………………………………..48
Table 4.28 Truth table for two options (loop) ……………………...……………………………48
Table 4.29 Independently effect Option (X) …………………………………………………….49
Table 4.30 Independently effect option (Y) ……………………………………………………..49
Table 4.31 Independently effect option (X) and option (Y)……………………………………..49
Table 4.32 Illustration of sample program ………………………………………………………50
Table 4.33 Generated test cases for one option (MC/DC) ……………………………………..51
Table 4.34 Illustration of sample program for two options……………………………………...51
Table 4.35 Generated test cases for two options (MC/DC) ……………………………………..52
Table 4.36 Illustration of sample program for three options ……………………………………53
Table 4.37 Generated test cases for three options (MC/DC) ……………………………………53
Table 4.38 Illustration of sample program (loop) ……………………………………………….54
Table 4.39 Generated test cases for one option (loop) …………………………………………..54
Table 5.1 Number of paths for selection control structures ……………………………………..59
Table 5.2 Number of test cases to cover each path by current method for question (1) ………...60
Table 5.3 Total number of test cases by current method for question (1) ………………………60
Table 5.4 Total number of test cases by DyStruc-TDG for question (1) ………………………..61
Table 5.5 Total number of test cases by DyStruc-TDG for question (1) ………………………..62
Table 5.6 Number of test cases to cover by current method for question (2) ….………………..64
Table 5.7 Total number of test cases by current method for question (2) ………………………65
Table 5.8 Number of test cases to cover by DyStruc-TDG method for question (2) …………...65
Table 5.9 Total number of test cases by DyStruc-TDG for question (2) ……………………….66
Table 5.10 Number of test cases to cover by current method for question (3) ………………....68
Table 5.11 Total number of test cases by DyStruc-TDG method for question (3) ……………..69
Table 5.12 Test data for sentinel loop in current method ……………………………………….70
Table 5.13 Number of paths for selection and repetition control structures ……………………71
Table 5.14 Number of test cases to cover by current method for question (4) ………………….72
Table 5.15 Total number of test cases by current method for question (4) ……………………..73
Table 5.16 Number of test cases to cover by DyStruc-TDG method for question (4) ………….74
Table 5.17 Total number of test cases by DyStruc-TDG method for question (4) ……………..74
Table 5.18 Comparison of three studies ………………………………………………………...76
List of Figures
Figure 2.1 Types of software testing techniques ………………………………………...............8
Figure 2.2 Ranking of code coverage based on survey (Abdurahim, 2014)…………………….10
Figure 2.3 Code segment for “for loop” control structures ……………………………………..11
Figure 2.4 Flow graph of code segment for loop control structures …………………………….11
Figure 2.5 Code segment for while loop coverage ……………………………………………...12
Figure 2.6 Flow graph of while loop coverage ………………………………………………….12
Figure 2.7 Code segment for statement coverage ……………………………………………….13
Figure 2.8 Flow graph of do…while loop ……………………………………………………….13
Figure 2.9 Code segment for statement coverage ……………………………………………….14
Figure 2.10 Flow graph of statement coverage ………………………………………………….14
Figure 2.11 Control flow graph ………………………………………………………………....15
Figure 2.12 Code segment for selection control structures ……………………………………..16
Figure 2.13 Code segment for selection control structures (MC/DC) …………………………..18
Figure 3.1 Flow chart of DyStruc-TDG method ………………………………………………..29
Figure 4.1 Example of tree structure for selection control structures …………………………...35
Figure 4.2 Example of tree structure for nested selection control structures………….………...36
Figure 4.3 Example of two selection control structures ………………………………………...38
Figure 4.4 Example of for loop control structure ……………………………………………….47
Figure 4.5 Example of do…while loop control structure …………………………………..........48
Figure 4.6 Sample of programming exercise with one option …………………………………..50
Figure 4.7 Sample of programming exercise with two options …………………………………51
Figure 4.8 Sample of programming exercise with three options ………………………………..52
Figure 4.9 Sample of programming exercise do…while loop with one option …………………53
Figure 4.10 Interface of generated test data (selection control structures) ……………………...55
Figure 4.11 Interface of generated test data (selection control structures) ……………………...55
Figure 4.12 Interface of generated test data (Selection and Loop control structures) …………..56
Figure 4.13 Interface of generated test data for MC/DC (selection and loop control structures) .56
Figure 5.1 Control flow graph for selection control structure ………………………………..….59
Figure 5.2 Test cases coverage between Current Method and DyStruc-TDG for question (1) …63
Figure 5.3 Control flow graph for repetition (counter loop) …………………………………….63
Figure 5.4 Test cases coverage between Current Method and DyStruc-TDG for question (2) …67
Figure 5.5 Control flow graph for repetition (sentinel loop) ……………………………………68
Figure 5.6 Test cases coverage between Current Method and DyStruc-TDG for question (3) …70
Figure 5.7 Control flow graph for repetition and selection control structures …………………..71
Figure 5.8 Test cases coverage between Current Method and DyStruc-TDG for question (4) …75
Figure 5.9 Sample of programming exercise ……………………………………………………76
List of Abbreviations
APA Automatic Programming Assessment
ATDG Automatic Test Data Generation
DyStruc-TDG Dynamic Structural Test Data Generation
MC/DC Modified Condition/Decision Coverage
OOAD Object Oriented Analysis and Design
UML Unified Modeling Language
UUM Universiti Utara Malaysia
CHAPTER 1
INTRODUCTION
1.1 Background of Study
Learning computer programming languages has become essential for students pursuing studies
in the Information Technology, Computer Science, and Software Engineering disciplines.
Introductory computer programming courses are commonly offered to first-year degree students
in these fields (Truong et al., 2005). Good programming skills are necessary for students to
master programming, and students become skilled in programming only through practice
(Lahtinen et al., 2005). Computer programming courses are therefore normally designed to
combine extensive practical work with theory. The goal of the practical component is to develop
students' basic understanding of programming principles and their ability to write basic source
code. Accordingly, students are given many programming exercises as take-home assignments or
hands-on practice in class to develop their programming understanding and skills (Rohaida et
al., 2010). When the assessment of programming exercises is done manually for a large number
of students, it imposes a heavy workload on lecturers; manual assessment is a genuinely difficult
task that cannot ensure the consistency and accuracy of the marking scheme (Rohaida et al.,
2010). Therefore, the concept of Automatic Programming Assessment (APA) has become very
important for assessing students' programs, grading them, and providing feedback (Saikkonen et
al., 2001). Besides, APA can improve the assessment of students' work in terms of consistency
and thoroughness of testing (Gupta et al., 2012).
According to Jackson (1996), APA is founded on software testing techniques. Programming
assessment normally involves measuring program quality, and to establish program quality the
program must be tested. Hence, the quality of a program can be measured through software
testing techniques (Rohaida et al., 2010). Software testing is a method for locating, measuring,
and disclosing errors that occur in a program (Latiu, 2012). Software testing can be categorized
into two types: static analysis and dynamic testing. Static testing does not involve executing the
program (Zin et al., 1994), whereas dynamic testing requires executing the program with test
data (Chu et al., 1997). Test data are data developed as input in order to perform testing on a
software program (Korel, 1990).
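To make the distinction concrete, dynamic testing can be pictured as executing a program against a prepared set of test data and comparing actual with expected outputs. The following is a minimal illustrative sketch, not code from this thesis: the sample program classify_grade, its grading thresholds, and the test values are all invented for the example.

```python
# Illustrative sketch of dynamic testing: execute a program with concrete
# test data and compare each actual output against the expected output.
# The sample program and the test values below are hypothetical.

def classify_grade(score):
    """Sample student program: map a numeric score to a letter grade."""
    if score >= 80:
        return "A"
    elif score >= 60:
        return "B"
    else:
        return "F"

def run_dynamic_test(program, test_data):
    """Run the program on each input; record whether it meets expectations."""
    results = []
    for given_input, expected_output in test_data:
        actual_output = program(given_input)  # dynamic testing: real execution
        results.append(actual_output == expected_output)
    return results

# Each (input, expected) pair is one test case; together the three inputs
# exercise all three paths through the selection structure above.
test_data = [(85, "A"), (70, "B"), (40, "F")]
print(run_dynamic_test(classify_grade, test_data))  # [True, True, True]
```

In contrast, a static analysis of the same program would inspect its source (for example, its control flow graph) without ever executing it.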
REFERENCES
Bertolino, A., & Marchetti, E. (2005). A brief essay on software testing. Software Engineering,
3rd edn. Development process, 1, 393-411.
Blumenstein, M., Green, S., Nguyen, A., & Muthukkumarasamy, V. (2004). GAME: a Generic
Automated Marking Environment for programming assessment. International Conference
on Information Technology: Coding and Computing, 2004. Proceedings. ITCC 2004
(pp. 212-216, Vol. 1).
Burnstein, I. (2003). Practical Software Testing, Springer-Verlag, New York.
Cheng, Z., Monahan, R., & Mooney, A. (2011). nExaminer: A semi-automated computer
programming assignment assessment framework for Moodle.
Choy, M., Nazir, U., Poon, C. K., & Yu, Y. T. (2005). Experiences in using an automated system
for improving students’ learning of computer programming. In Advances in Web-Based
Learning–ICWL 2005 (pp. 267-272). Springer Berlin Heidelberg.
Chu, H. D. (1997). An evaluation scheme of software testing techniques (pp. 259-262). Springer
US.
Clarke, L. A. (1976). A system to generate test data and symbolically execute programs. IEEE
Transactions on Software Engineering, SE-2(3), 215-222.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of
information technology. MIS quarterly, 319-340.
Edvardsson, J. (1999, October). A survey on automatic test data generation. In Proceedings of
the 2nd Conference on Computer Science and Engineering (pp. 21-28).
Foong, O. M., Tran, Q. T., Yong, S. P., & Rais, H. M. (2014, June). Swarm inspired test case
generation for online C++ programming assessment. In Computer and Information
Sciences (ICCOINS), 2014 International Conference on (pp. 1-5). IEEE.
Fraenkel, J. R., & Wallen, N. E. (1993). How to design and evaluate research in education (Vol.
7). New York: McGraw-Hill.
Ghani, K., & Clark, J. A. (2009, September). Automatic test data generation for multiple
condition and MCDC coverage. In Software Engineering Advances, 2009. ICSEA'09.
Fourth International Conference on (pp. 152-157). IEEE.
Goodenough, J. B., & Gerhart, S. L. (1975). Toward a theory of test data selection. Software
Engineering, IEEE Transactions on, (2), 156-173.
Guo, M., Chai, T., & Qian, K. (2010, April). Design of Online Runtime and Testing
Environment for Instant Java Programming Assessment. In Information Technology: New
Generations (ITNG), 2010 Seventh International Conference on (pp. 1102-1106). IEEE.
Gupta, N., Mathur, A. P., & Soffa, M. L. (1998). Automated test data generation using an
iterative relaxation method. ACM SIGSOFT Software Engineering Notes, 23(6), 231-244.
Gupta, S., & Dubey, S. K. (2012). Automatic Assessment of Programming assignment, 452003,
315–323.
Hakulinen, L., & Malmi, L. (2014, June). QR code programming tasks with automated
assessment. In Proceedings of the 2014 conference on Innovation & technology in
computer science education (pp. 177-182). ACM.
Hayhurst, K. J., Veerhusen, D. S., Chilenski, J. J., & Rierson, L. K. (2001). A practical tutorial
on modified condition/decision coverage.
Ihantola, P. (2006). Automatic test data generation for programming exercises with symbolic
execution and Java PathFinder. Master's thesis, Helsinki University of Technology,
Departement of Theoretical Computer Science.
IPL Information Processing Ltd. (197a). Designing Unit Test Cases. Available
http://www.ipl.com/pdf p0823.pdf. Retrieved on: 20 Feb 2009
Jackson, D. (2000). A semi-automated approach to online assessment. ACM SIGCSE
Bulletin, 32(3), 164-167.
Jackson, D., & Usher, M. (1997). Grading student programs using ASSYST. ACM SIGCSE
Bulletin, 29(1), 335–339.
James, R., Ivar, J., & Grady, B. (1999). The unified modeling language reference
manual. Reading: Addison Wesley.
Koomen, T., & Pol, M. (1999). Test process improvement: a practical step-by-step guide to
structured testing. Addison-Wesley Longman Publishing Co., Inc.
Korel, B. (1990). Automated software test data generation. Software Engineering, IEEE
Transactions on, 16(8), 870-879.
Korel, B. (1996). Automated test data generation for programs with procedures. ACM SIGSOFT
Software Engineering Notes, 21(3), 209–215.
Lahtinen, E., Ala-Mutka, K., & Järvinen, H. M. (2005, June). A study of the difficulties of
novice programmers. In ACM SIGCSE Bulletin (Vol. 37, No. 3, pp. 14-18). ACM.
Latiu, G. I., Cret, O. A., & Vacariu, L. (2012). Automatic Test Data Generation for Software
Path Testing Using Evolutionary Algorithms. 2012 Third International Conference on
Emerging Intelligent Data and Web Technologies, 1–8.
Luck, M., & Joy, M. (1999). A secure on‐line submission system. Software: Practice and
Experience, 29(8), 721–740.
Malmi, L., Karavirta, V., Korhonen, A., Nikander, J., Seppälä, O., & Silvasti, P. (2004). Visual
Algorithm Simulation Exercise System with Automatic Assessment:
TRAKLA2. Informatics in education, 3(2), 267-288.
McMinn, P. (2004). Search-based software test data generation: A survey. Software Testing
Verification and Reliability, 14(2), 105-156.
Monpratarnchai, S., Fujiwara, S., Katayama, A., & Uehara, T. (2014). Automated testing for
Java programs using JPF-based test case generation. ACM SIGSOFT Software
Engineering Notes, 39(1), 1-5.
Offutt, J., Liu, S., Abdurazik, A., & Ammann, P. (2003). Generating test data from state-based
specifications. Software Testing, Verification and Reliability, 13(1), 25–53.
Pargas, R. P., Harrold, M. J., & Peck, R. R. (1999). Test-data generation using genetic
algorithms. Software Testing Verification and Reliability, 9(4), 263-282.
Pressman, R. S. (2005). Software engineering: a practitioner's approach. Palgrave Macmillan.
Rayadurgam, S., & Heimdahl, M. P. E. (2003, December). Generating MC/DC Adequate Test
Sequences Through Model Checking. In SEW (p. 91).
Romli, R. (2014). Test Data Generation Framework for Automatic Programming Assessment,
84–89.
Romli, R., Sulaiman, S., & Zamli, K. Z. (2010). Automatic Programming Assessment and Test
Data Generation, 00(c).
Romli, R., Sulaiman, S., & Zamli, K. Z. (2013). Designing a Test Set for Structural Testing in
Automatic Programming Assessment.
Rumbaugh, J. (2003). Object-oriented analysis and design (OOAD).
Saikkonen, R., Malmi, L., & Korhonen, A. (2001, June). Fully automatic assessment of
programming exercises. In ACM Sigcse Bulletin (Vol. 33, No. 3, pp. 133-136). ACM.
Sekaran, U. (2006). Research methods for business: A skill building approach. John Wiley &
Sons.
Tillmann, N., & de Halleux, J. (2008). Pex: white box test generation for .NET, 134-153.
Tillmann, N., De Halleux, J., Xie, T., Gulwani, S., & Bishop, J. (2013, May). Teaching and
learning programming and software engineering via interactive gaming. In Software
Engineering (ICSE), 2013 35th International Conference on (pp. 1117-1126). IEEE.
Truong, N., Bancroft, P., & Roe, P. (2005). Learning to program through the web. ACM
SIGCSE Bulletin, 37(3), 9.
Varshney, S., & Mehrotra, M. (2013). Search based software test data generation for structural
testing: a perspective. ACM SIGSOFT Software Engineering Notes, 38(4), 1-6.
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of
information technology: Toward a unified view. MIS quarterly, 425-478
Watkins, J., & Mills, S. (2010). Testing IT: an off-the-shelf software testing process. Cambridge
University Press.
Zamli, K. Z., Ashidi, N., Isa, M., Fadel, M., & Klaib, J. (2007). A Tool for Automated Test Data
Generation ( and Execution ) Based on Combinatorial Approach, 19–36.
Zhu, H., Hall, P. A., & May, J. H. (1997). Software unit test coverage and adequacy. Acm
computing surveys (csur), 29(4), 366-427.
Zidoune, W., & Benouhiba, T. (2012). Targeted adequacy criteria for search-based test data
generation.2012 International Conference on Information Technology and E-Services, 1-6.