
PERSONALIZING GROUP INSTRUCTION USING KNOWLEDGE SPACE THEORY AND CLUSTERING TECHNIQUES

by

Rim S. Zakaria

A Thesis Presented to the Faculty of the American University of Sharjah, College of Engineering, in Partial Fulfillment of the Requirements for the Degree of Master of Science in Engineering Systems Management

Sharjah, United Arab Emirates

May 2016


© 2016 Rim S. Zakaria. All rights reserved.


Approval Signatures

We, the undersigned, approve the Master's Thesis of Rim S. Zakaria.

Thesis Title: Personalizing Group Instruction Using Knowledge Space Theory and Clustering Techniques

Dr. Imran A. Zualkernan
Associate Professor, Department of Computer Science and Engineering
Thesis Advisor

Dr. Hazim El-Baz
Associate Professor, Department of Industrial Engineering
Thesis Committee Member

Dr. Tarik Ozkul
Professor, Department of Computer Science and Engineering
Thesis Committee Member

Dr. Moncer Hariga
Director, Engineering Systems Management Graduate Program

Dr. Mohamed Guma El-Tarhuni
Associate Dean, College of Engineering

Dr. Leland Blank
Dean, College of Engineering

Dr. Khaled Assaleh
Interim Vice Provost for Research and Graduate Studies


Acknowledgments

First and foremost, I would like to express all my gratitude to Allah (SWT) for the strength, patience, and divine support He has granted me in everything I have done in my life, whether at a social, professional, or academic level.

I would also like to show my gratitude and thanks to the two people who strongly encouraged me to take on the endeavor of my master's degree and this thesis: Dr. Imran Zualkernan, who has always probed our minds to think well outside the box and who attentively supervised and encouraged my efforts throughout the process, and Noha Tarek, who has for the last 10 years been one of my greatest friends, supporters, and strongest believers in my potential.

In addition, I would like to express my deepest gratitude and appreciation to my dear husband Hisham Shoblaq; my mother; my father; my siblings: Amer Zakaria (thanks for all the Nescafé Dolce Gusto shots), Eman Zakaria, Yasmin Zakaria, and Rana Zakaria and her family Ahmed, Soliman, and Yusef; and my in-laws Sharif Shoblaq, Hanan Shoblaq, and Hussam Shoblaq, who have supported me greatly and seen me through the ups and downs of the entire process.

I would also like to extend my gratitude to all ESM faculty members for their continuous help and support throughout the MSc program. I am also very grateful for the advice and help provided by my committee members: Dr. Moncer Hariga, Dr. Hazim El-Baz, and Dr. Tarik Ozkul.

I would also like to thank and extend my deepest appreciation to all CEN faculty and staff, especially Ms. Salwa Mohammed for her guidance and encouragement, and Dr. Mahmoud Ismail and Mr. Fekrat El-Wehedi, who witnessed my efforts throughout. I also thank Dr. Taha Landolsi and Dr. Cindy Gunn, who gave me valuable advice and recommendations during the Master's degree application process.

I would also like to extend my deep appreciation to my friends of more than ten years: Zeinab Alayan, my sister-in-law Rasha Saffarini, Maram Jibreel, Rashid Al Hammadi, Rawan Tayem, Nihal Al Khunaizi, and Nour Nour, who have always encouraged me and inquired about my progress.

Finally, I would also like to thank my colleagues in the ESM program and CEN.


Dedication

I dedicate this thesis to all my family and friends, and to everyone out there who is truly striving to do good for humanity.

We do need more of those…


Abstract

In today's competitive market, the availability of skills and competencies that are aligned with an organization's objectives and that contribute to its long-term success and survival is not always guaranteed. Lack of such alignment leads to a Skill Gap: the gap between an organization's current capabilities and the skills its employees must have to achieve the organization's goals. This is especially true for engineering companies, where the half-life of knowledge is relatively short. One cause of Skill Gaps is inefficient, standardized training programs. Due to high costs, most training programs are not personalized and deliver generic training that may not be effective in addressing individual or group Skill Gaps. This thesis explores personalizing and optimizing the content delivery decisions made by workforce trainers and instructors. The proposed approach is data-driven and combines a set-theoretic framework, Knowledge Space Theory (KST), with analytic techniques such as cluster analysis. Specifically, the K-Means, DBSCAN, and EM clustering techniques are used in conjunction with KST to cluster learners based on the skills they have already acquired and the skills they are ready to acquire next. These clusters of learners can be used to design personalized training and instructional programs. Internal measures (Compactness, Separation, the Dunn Index, and the Davies-Bouldin Index) and external measures (Purity, Entropy, Normalized Mutual Information, and the Adjusted Rand Index) are used to compare the alternative clustering techniques, and a sensitivity analysis is carried out. In general, K-Means appears to perform better than DBSCAN and EM for this type of data. However, there is no systematic preference between clustering on prior learning and clustering on affordance for future learning.

Search Terms: Skills Management, Talent Management, Instructional Decision-Making, Knowledge Space Theory, KST, HR, Human Capital Management, Optimizing Decision-Making, Clustering, Labor Training and Development


Table of Contents

List of Tables
List of Figures
List of Abbreviations

Chapter 1: Introduction
1.1. Background
1.2. Problem Statement
1.3. Constraints and Assumptions
1.4. Significance of the Research
1.5. Research Methodology
1.6. Thesis Organization

Chapter 2: Literature Review and Previous Work
2.1. Learners' Abilities and Data Warehousing
2.2. Workplace Training and Development
2.3. Competency Models
2.4. Knowledge Space Theory (KST)
2.5. Clustering Analysis

Chapter 3: Approach and Algorithm
3.1. Illustrative KST Example
3.2. Approach and Algorithm
3.3. Detailed Example
3.3.1. Step 1.
3.3.2. Step 2.
3.3.3. Step 3.
3.3.4. Step 4.

Chapter 4: Evaluation
4.1. Internal Indices
4.2. External Indices

Chapter 5: Data Collection and KST Encoding
5.1. Data Collection
5.2. Illustrative Example of NUMBERS Unit
5.2.1. Determining the inner and outer fringes of students.
5.2.2. Encoding the inner and outer fringe sets of students.
5.3. Additional Observations

Chapter 6: K-Means Clustering
6.1. K-Means Overview
6.1.1. K-Means distance metrics.
6.2. K-Means Results
6.2.1. Clustering control and treatment students based on inner fringes.
6.2.2. Clustering control and treatment students based on outer fringes.
6.3. K-Means Results Evaluation
6.4. K-Means Comparative Analysis
6.4.1. K-Means clustering as explained by knowledge states.
6.4.2. K-Means clustering as explained by 25th percentile/quartile.
6.5. K-Means Overall Summary

Chapter 7: DBSCAN Clustering
7.1. DBSCAN Overview
7.1.1. DBSCAN procedure and parameters.
7.2. DBSCAN Results
7.2.1. Clustering control and treatment students based on inner fringes.
7.2.2. Clustering control and treatment students based on outer fringes.
7.3. DBSCAN Results Evaluation
7.4. DBSCAN Comparative Analysis
7.4.1. DBSCAN clustering as explained by knowledge states.
7.4.2. DBSCAN clustering as explained by 25th percentile/quartile.
7.5. DBSCAN Overall Summary

Chapter 8: EM Clustering
8.1. EM Overview
8.1.1. EM procedure and parameters.
8.2. EM Results
8.2.1. Clustering control and treatment students based on inner fringes.
8.2.2. Clustering control and treatment students based on outer fringes.
8.3. EM Results Evaluation
8.4. EM Comparative Analysis
8.4.1. EM clustering as explained by knowledge states.
8.4.2. EM clustering as explained by 25th percentile/quartile.
8.5. EM Overall Summary

Chapter 9: Overall Results Analysis
9.1. Pairwise Comparison Using Clustering Based on Knowledge States
9.2. Pairwise Comparison Using Students Grouping Based on Quartiles
9.3. Pairwise Comparison Using Fringes K-Means Clustering
9.4. Pairwise Comparison Using Fringes DBSCAN Clustering
9.5. Pairwise Comparison Using Fringes EM Clustering
9.6. Summary

Chapter 10: Sensitivity Analysis
10.1. Quantitative Analysis
10.2. Qualitative Analysis

Chapter 11: Model Validation and Further Insights
11.1. Model Validation for Generalizability
11.2. External Key Insights

Chapter 12: Conclusion and Future Research

References

Appendix A: KST Details
Appendix B: Quartiles Details
Appendix C: K-Means Results Details
  K-Means Results at 60% Threshold
Appendix D: DBSCAN Results Details
  DBSCAN Data Sets Epsilons and k-NN Plots
  DBSCAN MinPts Variation (MinPts = 2, 3, 5, 10, and 20)
Appendix E: Knowledge States Clustering Results
  K-Means Results Internal Indices for Knowledge States Clustering
  DBSCAN Results Internal Indices for Knowledge States Clustering
  EM Results Internal Indices for Knowledge States Clustering
Appendix F: Generalization Example Details
  K-Means Inner Fringes Clustering Results
  K-Means Outer Fringes Clustering Results
  DBSCAN Inner Fringes Clustering Results
  DBSCAN Outer Fringes Clustering Results
  EM Inner Fringes Clustering Results
  EM Outer Fringes Clustering Results

Vita


List of Tables

Table 1: Clustering Algorithms to Be Used
Table 2: Dimensions of Clustering Valuation Criteria
Table 3: Example of Student Topic Assessment Scores
Table 4: Example of Student Topic Deterministic Assessment Scores
Table 5: Students' Best Knowledge States Using a Deterministic Method
Table 6: Students Inner and Outer Fringes
Table 7: Students Inner and Outer Fringes Conversion
Table 8: K-Means Distance Metrics
Table 9: Pre-test Control Students K-Means Clusters Based on Inner Fringes
Table 10: Post-test Control Students K-Means Clusters Based on Inner Fringes
Table 11: Pre-test Control Students K-Means Clusters Kruskal-Wallis Mean Ranks
Table 12: Pre-test Control Students K-Means Clusters Mood's Median Test
Table 13: Post-test Control Students K-Means Inner Fringes Clusters Kruskal-Wallis Mean Ranks
Table 14: Post-test Control Students K-Means Inner Fringes Clusters Mood's Median Test
Table 15: Pre-test Treatment Students K-Means Clusters Based on Inner Fringes
Table 16: Post-test Treatment Students K-Means Clusters Based on Inner Fringes
Table 17: Pre-test Treatment Students K-Means Inner Fringes Clusters Kruskal-Wallis Mean Ranks
Table 18: Pre-test Treatment Students K-Means Inner Fringes Clusters Mood's Median Test
Table 19: Pre-test Control Students K-Means Clusters Based on Outer Fringes
Table 20: Post-test Control Students K-Means Clusters Based on Outer Fringes
Table 21: Pre-test Control Students K-Means Outer Fringes Clusters Kruskal-Wallis Mean Ranks
Table 22: Pre-test Control Students K-Means Outer Fringes Clusters Mood's Median Test
Table 23: Post-test Control Students K-Means Outer Fringes Clusters Kruskal-Wallis Mean Ranks
Table 24: Post-test Control Students K-Means Outer Fringes Clusters Mood's Median Test
Table 25: Pre-test Treatment Students K-Means Clusters Based on Outer Fringes
Table 26: Post-test Treatment Students K-Means Clusters Based on Outer Fringes
Table 27: Pre-test Treatment Students K-Means Outer Fringes Clusters Kruskal-Wallis Mean Ranks
Table 28: Pre-test Treatment Students K-Means Outer Fringes Clusters Mood's Median Test
Table 29: Post-test Treatment Students K-Means Outer Fringes Clusters Kruskal-Wallis Mean Ranks
Table 30: Post-test Treatment Students K-Means Outer Fringes Clusters Mood's Median Test
Table 31: K-Means Results Evaluation
Table 32: K-Means Clusters Intra-class Correlation Coefficient
Table 33: Is K-Means Clustering Based on Knowledge States
Table 34: Is K-Means Clustering Based on Quartiles
Table 35: Pre-test Control Students DBSCAN Clusters Based on Inner Fringes at ε = 11
Table 36: Post-test Control Students DBSCAN Clusters Based on Inner Fringes at ε = 11
Table 37: Pre-test Control Students DBSCAN Inner Fringes Clusters Kruskal-Wallis Mean Ranks
Table 38: Pre-test Control Students DBSCAN Inner Fringes Clusters Mood's Median Test
Table 39: Pre-test Treatment Students DBSCAN Clusters Based on Inner Fringes at ε = 6
Table 40: Post-test Treatment Students DBSCAN Clusters Based on Inner Fringes at ε = 10
Table 41: Pre-test Treatment Students DBSCAN Inner Fringes Clusters Kruskal-Wallis Mean Ranks
Table 42: Pre-test Treatment Students DBSCAN Inner Fringes Clusters Mood's Median Test
Table 43: Post-test Treatment Students DBSCAN Inner Fringes Clusters Kruskal-Wallis Mean Ranks
Table 44: Post-test Treatment Students DBSCAN Inner Fringes Clusters Mood's Median Test
Table 45: Pre-test Control Students DBSCAN Clusters Based on Outer Fringes at ε = 2
Table 46: Post-test Control Students DBSCAN Clusters Based on Outer Fringes at ε = 2
Table 47: Pre-test Control Students DBSCAN Outer Fringes Clusters Kruskal-Wallis Mean Ranks
Table 48: Pre-test Control Students DBSCAN Outer Fringes Clusters Mood's Median Test
Table 49: Post-test Control Students DBSCAN Outer Fringes Clusters Kruskal-Wallis Mean Ranks
Table 50: Post-test Control Students DBSCAN Outer Fringes Clusters Mood's Median Test
Table 51: Pre-test Treatment Students DBSCAN Clusters Based on Outer Fringes at ε = 1
Table 52: Post-test Treatment Students DBSCAN Clusters Based on Outer Fringes at ε = 1
Table 53: Pre-test Treatment Students DBSCAN Outer Fringes Clusters Kruskal-Wallis Mean Ranks
Table 54: Pre-test Treatment Students DBSCAN Outer Fringes Clusters Mood's Median Test
Table 55: DBSCAN Results Evaluation
Table 56: DBSCAN Clusters Intra-class Correlation Coefficient
Table 57: Is DBSCAN Clustering Based on Knowledge States
Table 58: Is DBSCAN Clustering Based on Quartiles
Table 59: Pre-test Control Students EM Clusters Based on Inner Fringes
Table 60: Post-test Control Students EM Clusters Based on Inner Fringes
Table 61: Pre-test Control Students EM Inner Fringes Clusters Kruskal-Wallis Mean Ranks
Table 62: Pre-test Control Students EM Inner Fringes Clusters Mood's Median Test
Table 63: Post-test Control Students EM Inner Fringes Clusters Kruskal-Wallis Mean Ranks
Table 64: Post-test Control Students EM Inner Fringes Clusters Mood's Median Test
Table 65: Pre-test Treatment Students EM Clusters Based on Inner Fringes
Table 66: Post-test Treatment Students EM Clusters Based on Inner Fringes
Table 67: Pre-test Treatment Students EM Inner Fringes Clusters Kruskal-Wallis Mean Ranks
Table 68: Pre-test Treatment Students EM Inner Fringes Clusters Mood's Median Test
Table 69: Post-test Treatment Students EM Inner Fringes Clusters Kruskal-Wallis Mean Ranks
Table 70: Post-test Treatment Students EM Inner Fringes Clusters Mood's Median Test
Table 71: Pre-test Control Students EM Clusters Based on Outer Fringes
Table 72: Post-test Control Students EM Clusters Based on Outer Fringes
Table 73: Pre-test Control Students EM Outer Fringes Clusters Kruskal-Wallis Mean Ranks
Table 74: Pre-test Control Students EM Outer Fringes Clusters Mood's Median Test
Table 75: Pre-test Treatment Students EM Clusters Based on Outer Fringes
Table 76: Post-test Treatment Students EM Clusters Based on Outer Fringes
Table 77: Pre-test Treatment Students EM Outer Fringes Clusters Kruskal-Wallis Mean Ranks
Table 78: Pre-test Treatment Students EM Outer Fringes Clusters Mood's Median Test
Table 79: Post-test Treatment Students EM Outer Fringes Clusters Kruskal-Wallis Mean Ranks
Table 80: Post-test Treatment Students EM Outer Fringes Clusters Mood's Median Test
Table 81: EM Results Evaluation
Table 82: EM Clusters Intra-class Correlation Coefficient
Table 83: Is EM Clustering Based on Knowledge States
Table 84: Is EM Clustering Based on Quartiles
Table 85: Knowledge States Clustering and Quartiles Pairwise Comparison
Table 86: Knowledge States and K-Means Clustering Pairwise Comparison
Table 87: Knowledge States and DBSCAN Clustering Pairwise Comparison
Table 88: Knowledge States and EM Clustering Pairwise Comparison
Table 89: Quartiles and Knowledge States Pairwise Comparison
Table 90: Quartiles and K-Means Clustering Pairwise Comparison
Table 91: Quartiles and DBSCAN Clustering Pairwise Comparison
Table 92: Quartiles and EM Clustering Pairwise Comparison
Table 93: K-Means and Knowledge States Pairwise Comparison
Table 94: K-Means Clustering and Quartiles Pairwise Comparison
Table 95: K-Means and DBSCAN Clustering Pairwise Comparison
Table 96: K-Means and EM Clustering Pairwise Comparison
Table 97: DBSCAN and Knowledge States Pairwise Comparison
Table 98: DBSCAN Clustering and Quartiles Pairwise Comparison
Table 99: DBSCAN and K-Means Clustering Pairwise Comparison
Table 100: DBSCAN and EM Clustering Pairwise Comparison
Table 101: EM and Knowledge States Pairwise Comparison
Table 102: EM Clustering and Quartiles Pairwise Comparison
Table 103: EM and K-Means Clustering Pairwise Comparison
Table 104: EM and DBSCAN Clustering Pairwise Comparison
Table 105: Quantitative Analysis
Table 106: 33% vs. 60% K-Means Clustering Results
Table 107: K-Means Quantitative Analysis – Internal Indices
Table 108: K-Means Qualitative Analysis – External Indices as Compared to KS Clustering
Table 109: DBSCAN Quantitative Analysis – Internal Indices
Table 110: DBSCAN Qualitative Analysis – External Indices as Compared to KS Clustering
Table 111: EM Quantitative Analysis – Internal Indices
Table 112: EM Qualitative Analysis – External Indices as Compared to KS Clustering
Table 113: Generalization Example Quantitative Analysis with Comparison
Table 114: Generalization Example Qualitative Analysis – Internal Indices – CP and SP
Table 115: Generalization Example Qualitative Analysis – Internal Indices – DB and DVI
Table 116: Generalization Example Qualitative Analysis – External Indices as Compared to KS Clustering – CA and Entropy
Table 117: Generalization Example Qualitative Analysis – External Indices as Compared to KS Clustering – NMI and ARI
Table 118: Generalization Example Qualitative Analysis – External Indices as Compared to Quartiles – CA and Entropy
Table 119: Generalization Example Qualitative Analysis – External Indices as Compared to Quartiles – NMI and ARI
Table 120: K-Means and School Pairwise Comparison
Table 121: K-Means and School Type (Teacher Gender) Pairwise Comparison
Table 122: K-Means and School Grade 2 Enrollment Size Pairwise Comparison
Table 123: NUMBERS Unit KST Description
Table 124: NUMBERS Unit KST Inner and Outer Fringes
Table 125: NUMBERS Unit Inner and Outer Fringes Binary-to-Decimal Conversion
Table 126: Pre-test Control Students Quartiles
Table 127: Pre-test Treatment Students Quartiles
Table 128: Post-test Control Students Quartiles
Table 129: Post-test Treatment Students Quartiles
Table 130: Pre-test Control Students Clusters Based on Inner Fringes
Table 131: Post-test Control Students Clusters Based on Inner Fringes
Table 132: Pre-test Treatment Students Clusters Based on Inner Fringes
Table 133: Post-test Treatment Students Clusters Based on Inner Fringes
Table 134: Pre-test Control Students Clusters Based on Outer Fringes
Table 135: Post-test Control Students Clusters Based on Outer Fringes
Table 136: Pre-test Treatment Students Clusters Based on Outer Fringes
Table 137: Post-test Treatment Students Clusters Based on Outer Fringes
Table 138: DBSCAN ε for Pre-test Control Students Inner Fringes Data Set
Table 139: DBSCAN ε for Post-test Control Students Inner Fringes Data Set
Table 140: DBSCAN ε for Pre-test Treatment Students Inner Fringes Data Set
Table 141: DBSCAN ε for Post-test Treatment Students Inner Fringes Data Set
Table 142: DBSCAN ε for Pre-test Control Students Outer Fringes Data Set
Table 143: DBSCAN ε for Post-test Control Students Outer Fringes Data Set
Table 144: DBSCAN ε for Pre-test Treatment Students Outer Fringes Data Set
Table 145: DBSCAN ε for Post-test Treatment Students Outer Fringes Data Set
Table 146: Varying MinPts for Pre-test Control Students Inner Fringes Clusters
Table 147: Varying MinPts for Post-test Control Students Inner Fringes Clusters
Table 148: Varying MinPts for Pre-test Treatment Students Inner Fringes Clusters
Table 149: Varying MinPts for Post-test Treatment Students Inner Fringes Clusters
Table 150: Varying MinPts for Pre-test Control Students Outer Fringes Clusters
Table 151: Varying MinPts for Post-test Control Students Outer Fringes Clusters
Table 152: Varying MinPts for Pre-test Treatment Students Outer Fringes Clusters
Table 153: Varying MinPts for Post-test Treatment Students Outer Fringes Clusters
Table 154: K-Means Results Evaluation for Knowledge States Clustering
Table 155: DBSCAN Results Evaluation for Knowledge States Clustering
Table 156: EM Results Evaluation for Knowledge States Clustering
Table 157: Pre-test Control Students K-Means Clusters Based on Inner Fringes
Table 158: Post-test Control Students K-Means Clusters Based on Inner Fringes
Table 159: Pre-test Treatment Students K-Means Clusters Based on Inner Fringes
Table 160: Post-test Treatment Students K-Means Clusters Based on Inner Fringes
Table 161: Pre-test Control Students K-Means Clusters Based on Outer Fringes
Table 162: Post-test Control Students K-Means Clusters Based on Outer Fringes
Table 163: Pre-test Treatment Students K-Means Clusters Based on Outer Fringes
Table 164: Post-test Treatment Students K-Means Clusters Based on Outer Fringes
Table 165: Pre-test Control Students DBSCAN Clusters Based on Inner Fringes
Table 166: Post-test Control Students DBSCAN Clusters Based on Inner Fringes
Table 167: Pre-test Treatment Students DBSCAN Clusters Based on Inner Fringes
Table 168: Post-test Treatment Students DBSCAN Clusters Based on Inner Fringes
Table 169: Pre-test Control Students DBSCAN Clusters Based on Outer Fringes
Table 170: Post-test Control Students DBSCAN Clusters Based on Outer Fringes
Table 171: Pre-test Treatment Students DBSCAN Clusters Based on Outer Fringes
Table 172: Post-test Treatment Students DBSCAN Clusters Based on Outer Fringes
Table 173: Pre-test Control Students EM Clusters Based on Inner Fringes
Table 174: Post-test Control Students EM Clusters Based on Inner Fringes
Table 175: Pre-test Treatment Students EM Clusters Based on Inner Fringes
Table 176: Post-test Treatment Students EM Clusters Based on Inner Fringes
Table 177: Pre-test Control Students EM Clusters Based on Outer Fringes
Table 178: Post-test Control Students EM Clusters Based on Outer Fringes
Table 179: Pre-test Treatment Students EM Clusters Based on Outer Fringes
Table 180: Post-test Treatment Students EM Clusters Based on Outer Fringes


List of Figures

Figure 1: Standard Competency Model
Figure 2: Overview of Clustering Algorithms
Figure 3: Examples of K-Means, DBSCAN, and EM Clustering
Figure 4: An Example Knowledge Structure for Grade 4 Multiplication
Figure 5: Approach
Figure 6: Algorithm
Figure 7: Suggested Algorithm Overview
Figure 8: DBSCAN Test on Multi-dimensional Data
Figure 9: EM Test on Multi-dimensional Data
Figure 10: KS for Grade II NUMBERS Unit
Figure 11: Knowledge State Assignments for Grade II NUMBERS Unit
Figure 12: NUMBERS Unit Knowledge States Inner and Outer Fringe Sets Encoding
Figure 13: Knowledge State Transfer between Pre-test and Post-test for the NUMBERS Unit
Figure 14: K-Means Results Internal Indices Comparison (Control Students)
Figure 15: K-Means Results Internal Indices Comparison (Treatment Students)
Figure 16: K-Means Results Internal Indices Comparison (Control vs. Treatment)
Figure 17: k-NN and k-NN Plot Example
Figure 18: DBSCAN Illustration
Figure 19: Comparing K-Means and DBSCAN Clustering to Quartile Grouping
Figure 20: EM Inner Fringes Clustering Results Gaussian Distribution
Figure 21: EM Outer Fringes Clustering Results Gaussian Distribution
Figure 22: Comparing K-Means and EM Clustering to Quartile Grouping
Figure 23: K-Means Results Internal Indices Comparison (33% vs. 60%)
Figure 24: K-Means Results Internal Indices Comparison for Data Set 1
Figure 25: K-Means Results Internal Indices Comparison for Data Set 2
Figure 26: Geographical Location of Clustering Results for Inner Fringes – Vehari District
Figure 27: Geographical Location of Clustering Results for Outer Fringes – Vehari District
Figure 28: Geographical Location of Clustering Results for Inner Fringes – Mandi Bahauddin District
Figure 29: Geographical Location of Clustering Results for Outer Fringes – Mandi Bahauddin District
Figure 30: Students Inner Fringes Data Set k-NN Plot
Figure 31: Students Outer Fringes Data Set k-NN Plot


List of Abbreviations

ARI Adjusted Rand Index

BIC Bayesian Information Criterion

BIRCH Balanced Iterative Reducing and Clustering using Hierarchies

BSS Between cluster Sum of Squares

CA Cluster Accuracy

CH Calinski and Harabasz

CP Compactness

CTT Classical Test Theory

CV Coefficient of Variation

DB Davies-Bouldin

DBSCAN Density-Based Spatial Clustering of Applications with Noise

DVI Dunn Validity Index

EM Expectation Maximization

HClust Hierarchical Clustering

ICC Intra-class Correlation Coefficient

IFS Inner Fringes Set

IRT Item Response Theory

k-NN kth Nearest Neighbor

KS Knowledge Structure

KST Knowledge Space Theory

NMI Normalized Mutual Information

OFS Outer Fringes Set

SP Separation


SVM Support Vector Machine

WSS Within cluster Sum of Squares


Chapter 1: Introduction

1.1. Background

Labor and Skills Management, sometimes also referred to as Talent Management or Human Capital Management [1], is one of the main functions of most HR entities in local and global organizations. The function emerged in the 1990s [1] as a movement to drive an organization's business strategy, excellence, and success through its employees' skills, talents, and acquired professional knowledge and competencies. It can therefore be defined as the process of recruiting, managing, assessing, developing, and maintaining an organization's most important resource: people [1]. Typically this is done by monitoring employees' performance, assessing their skills, and improving them where required, both to build a strong workforce that supports the organization's mission and its short- and long-term goals and objectives, and to increase employees' value in a market whose skill demands and experience requirements are continuously changing.

Managing employees' skills in organizations is usually facilitated through the design and definition of relevant job roles, each represented by the set of skills and knowledge areas the individual employee is expected to acquire. These skills and knowledge help the employee perform the role's tasks effectively and efficiently. Organizations should therefore take care of their individual employees, nurture their existing skills, and address their skills needs by integrating skills management into their human resources management system [2].

However, according to [3], the McKinsey Global Institute's June 2012 report, The World at Work: Jobs, Pay, and Skills for 3.5 Billion People, projects that by 2020 there could be a global shortage of 38 to 40 million high-skill workers (about 13% of the demand for such workers) and 45 million middle-skill workers (about 15% of demand), alongside a surplus of 90 to 95 million low-skill workers (about 10%) [3]. This skills shortage exposes organizations to a risk that the American Society for Training and Development (ASTD), the world's largest association dedicated to workplace learning and development professionals, terms a Skills Gap.

As defined by [3], a Skills Gap is a significant gap between an organization's current capabilities and the skills it needs to achieve its goals. A Skills Gap leaves an organization unable to fill critical business roles because it lacks employees with the right skills, knowledge, and capabilities to perform the job, thereby reducing the organization's competitive advantage in the market and halting its growth.

1.2. Problem Statement

The challenge addressed in this thesis is minimizing the Skill Gap in an organization by optimizing the decisions a workforce trainer or instructor makes to enhance employees' skills and professional knowledge efficiently.

Based on a review of the literature, the approach uses individuals' knowledge states to cluster them according to what they have mastered recently and what they are ready to master next. Clustering on knowledge states to optimize and personalize instructional decision-making for trainers has not been considered before. The proposed approach consists of:

• Using individuals' assessment results for the various skills related to their job role(s) or the main topic being taught. These results determine each individual's current knowledge state, based on a predetermined threshold score and the algorithm of Knowledge Space Theory (KST).

• Utilizing viable clustering techniques to group the individuals in a given sample, department, or set of similar job roles in a meaningful manner. Clusters are formed based on the skills the individuals have learned or acquired recently (their inner fringe) and the skills they are ready to learn or acquire next given their current knowledge state (their outer fringe).

• Validating the clusters using different clustering evaluation measures and comparing the evaluation results. Finally, the clustering techniques themselves are compared to determine which is most appropriate.

Consequently, the outcomes of this approach can be used by workforce trainers and instructors to personalize the training and learning of different groups of employees across organization departments and/or similar job roles; a minimal code sketch of the pipeline follows.
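The sketch below illustrates the shape of this pipeline in R, the language used for the thesis experiments. The data, topic names, and 60% threshold are hypothetical, and K-Means is run directly on binary mastery patterns for brevity; the actual method first matches each pattern to the closest state in a KST knowledge structure and clusters on the inner and outer fringe encodings.

    # Illustrative sketch only; data, topics, and threshold are hypothetical.
    scores <- data.frame(counting    = c(80, 40, 95),
                         addition    = c(70, 30, 90),
                         place_value = c(20, 10, 85))
    threshold <- 60                        # criterion-referenced cut-off score
    mastered  <- (scores >= threshold) * 1 # 1 = topic mastered, 0 = not yet

    # Group learners with similar mastery patterns; the full method clusters
    # on fringe encodings derived from KST knowledge states instead.
    km <- kmeans(mastered, centers = 2, nstart = 25)
    split(rownames(scores), km$cluster)    # learner groups for instruction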

1.3. Constraints and Assumptions

The first constraint is that the proposed solution should be directly and easily applicable in companies where certain resources might be unavailable or difficult to obtain. For example, it is not safe to assume that a company has the budget for one-to-one training of its individual employees. The problem should therefore address groups of employees in a department or a particular job role where skills are common.

The second constraint is that the approach should apply well to businesses of any size. For example, according to [4], a micro-size enterprise has fewer than 10 employees, a small-size enterprise fewer than 50, a medium-size enterprise fewer than 250, and a large-size enterprise more than 250; these size thresholds can vary by industry and region.

Throughout the approach, we assume that individual employees can be treated as students, since both are learners receiving instructor-led training to improve their proficiency and enhance their learning and knowledge.

1.4. Significance of the Research

The significance of this research rests on two main motivations. First, the method can be applied in any context where the knowledge state of an individual over a set of topics or skills in a given subject or competency is of interest, and in environments ranging from schools to workplaces that train their employees. Second, the method provides a lower-cost, more time-efficient approach to training and teaching individuals in an optimized, personalized way; this matters because in 2013 organizations' expenditure on training and development increased by 1% over the previous year, to an average of $1,280 per employee, while average training duration grew from 30.3 hours to 31.5 hours [5].

1.5. Research Methodology

The objective of this thesis is attained by conducting the following steps:

Step 1. Carrying out a literature review on data mining, KST, cluster analysis, clustering algorithms and techniques, and clustering evaluation and validation measures.

Step 2. Collecting assessment data from individuals in a learning and education context; this data is used to test the thesis approach.

Step 3. Developing the approach by formulating the algorithm that integrates the set-theoretic framework of KST with the clustering techniques to be used.

Step 4. Coding and running the formulated algorithms in RStudio (S-Language) and other statistical and data mining software such as Minitab (see the sketch after this list).

Step 5. Comparing the results using suitable validation indices, through pairwise comparisons between clustering algorithms as well as between each clustering algorithm and the pre-set hypotheses.

Step 6. Performing a sensitivity analysis by selecting a different threshold score for the collected assessment data (criterion referencing) and by using the median score of the collected assessment data (norm referencing).

Step 7. Demonstrating the generalizability of the proposed approach by applying the algorithm to another sample of learners.

Step 8. Deducing key insights about the relevance of the results to the educational environment from which the assessment data was collected.

Step 9. Summarizing the findings and conclusions after running the developed thesis approach.
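As a concrete, hedged illustration of Steps 4 and 5, the sketch below runs the three clustering techniques used in this thesis and compares two of the resulting partitions with one external index. The packages (dbscan, mclust), the placeholder data set, and every parameter value are illustrative assumptions rather than the settings actually used in the experiments.

    # Sketch of Steps 4-5; packages, data, and parameters are illustrative.
    library(dbscan)  # dbscan() for density-based clustering
    library(mclust)  # Mclust() for EM clustering; also adjustedRandIndex()

    x <- scale(iris[, 1:4])                    # placeholder data, standardized

    km <- kmeans(x, centers = 3, nstart = 25)  # K-Means
    db <- dbscan(x, eps = 0.8, minPts = 5)     # DBSCAN; eps read off a k-NN plot
    em <- Mclust(x, G = 3)                     # EM via a Gaussian mixture

    # Step 5 (external validation): compare two partitions pairwise, e.g., ARI.
    adjustedRandIndex(km$cluster, em$classification)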

1.6. Thesis Organization

The thesis is organized in to several Chapters as follows:

Chapter 1 - this chapter is the introduction for this thesis. It contains the

background for this thesis, problem statement, the constraints and assumptions

taken into account, the significance of conducting this research, and the

methodology used to complete this thesis.

Chapter 2 – this chapter contains the relevant literature review and previous work.

Chapter 3 - this chapter contains the formulated approach and algorithm for the

thesis approach and a description of the components used in it.

Chapter 4 – this chapter contains the clustering evaluation measures to be used to

validate the results from the proposed approach.

Page 27: PERSONALIZING GROUP INSTRUCTION USING KNOWLEDGE …

27

Chapter 5 – this chapter contains an illustrative example of the KST algorithm

application on a given data sample. In this chapter, the inner fringes and outer

fringes required to perform the clustering analysis are extracted using the KST

characteristics. Also, the results from the KST application example are analyzed.

Chapter 6 - this chapter contains the K-Means clustering technique overview, an

illustrative example of the clustering method, an evaluation of the clustering

method, results analysis, and a comparative analysis between K-Means clustering

results and pre-defined class labels of the same data sample used.

Chapter 7 - this chapter contains the DBSCAN clustering technique overview, an

illustrative example of the clustering method, an evaluation of the clustering

method, results analysis, and a comparative analysis between DBSCAN

clustering results and pre-defined class labels of the same data sample used.

Chapter 8 - this chapter contains the EM clustering technique overview, an

illustrative example of the clustering method, an evaluation of the clustering

method, results analysis, and a comparative analysis between EM clustering

results and pre-defined class labels of the same data sample used.

Chapter 9 - this chapter contains an overall results analysis comparing the three different clustering techniques used, as well as comparing the clustering techniques to grouping using the knowledge states and to grouping using the median of the overall scores collected (quartile/25th-percentile grouping).

Chapter 10 - this chapter contains the sensitivity analysis of the proposed thesis approach. The sensitivity analysis is done by selecting a different threshold score for the collected assessment data (criterion referencing) as well as by using the median score of the collected assessment data (norm referencing).

Chapter 11 - this chapter contains a partial, quick test of the approach on another data sample of assessment scores, to further verify the significance of the model and to indicate its generalizability. The chapter also contains some key insights about the relevancy of the model results to the educational environment from which the assessment data used in this chapter was collected.

Chapter 12 - this chapter contains the conclusion for this thesis, limitations, and

future work and research relevant to the thesis approach proposed.


Chapter 2: Literature Review and Previous Work

This chapter includes the literature review and previous work in relevance to the

thesis approach.

The decision-making process of instructors and teachers is, in general, affected by several aspects depending on the state and condition of the classroom and training environment in which they conduct their lessons. In addition, [6] points out many other factors, such as the teacher's lesson plan and the educational, social, and behavioral goals they want to attain in the context of the classroom. To support these decisions, data can be collected from training and classroom environments to inform the instructor's choices and make them more effective.

2.1. Learners’ Abilities and Data Warehousing

A number of methods have been used previously to make instructional decisions

and to measure and analyze learners’ abilities based on their performance data. For

example, [7] develops a formula using the concept of Item Response Theory (IRT) to estimate a student's ability level, resulting in a personalized adaptive test with a minimum number of questions based on the student's estimated ability.

Another earlier method of abilities measurement used, as [8] points out, is the Classical

Test Theory (CTT) which approximates the reliability of the observed scores of the test

given on a set of items, adding an error element to the observed scores to get true scores.

However, neither of the previously mentioned methods takes into consideration the knowledge state of the learner in the taught subject/lesson; rather, they are either item-oriented in the case of IRT or test-oriented in the case of CTT. They also deal with the learner's abilities, proficiency levels, and/or the likelihood of the learner answering a specific question correctly.

On the other hand, several techniques have been proposed previously using data

mining approaches. For example, in one case study by [9] data mining and data

warehousing were used to predict student academic performance in schools. Similarly,

[10] have recently proposed a contextualized, differential sequence mining method to

derive an individual’s learning behavior patterns. Likewise, neural networks, support

vector machine (SVM), decision trees, and multinomial logistic regression were used

by [11] to predict and analyze secondary education placement-test scores. Moreover,

[12] used data-driven discovery to construct better student models to improve student


learning. Finally, [13] discuss the use of cluster analysis for data mining in educational

technology research.

2.2. Workplace Training and Development

Most modern workplaces are adopting a culture of continual learning to help their employees grow and to build a strong, highly skilled workforce, which promotes the long-term success of the company and reinforces its profitability.

According to [14], there are several common techniques for improving employees’

Technical skills as well as Communication skills, such as On-the-Job training, Role

Playing, Self-Instruction, Team Building Training, Games & Simulations, Mentoring,

Computer-based Training, Performance Appraisals, and Job Rotation. Our method can be considered a combination of Computer-Based Training and Mentoring. The Computer-Based Training part involves the use of technology, such as easily accessible Android smartphones and tablets, to collect performance and skill assessment data and help reduce training costs, which might normally include travel and accommodation expenses. The Mentoring part involves the feedback and counseling given to the employees/trainees, after their performance data is analyzed using the thesis model, to improve their work effectiveness.

On the other hand, several techniques have been used previously for delivering

effective workplace training and skill developments. For example, [15] proposes a

conceptual framework which tries to identify the factors that minimize the difference

between the expected and actual performance results of trainees, help employees

recognize their actual professional capabilities, and help improve the effectiveness of

the trainee’s learning transfer from the skill and training institution to the actual

application in the workplace. Another example, [16], attempts to propose an initial

framework for modeling engineers’ skill competencies and needs in engineering

companies which was applied in several industrial case studies. Finally, others like [17]

and [18] emphasize the use of competency models to link skills requirements to business

goals and organizational strategies.


2.3. Competency Models

A competency is a behavior, knowledge, skill, ability, or any other characteristic that contributes to the employees' success in performing their identified duties and areas of responsibility [17].

Competency models provide an organized, systematic, and efficient way for

organizations to assess and evaluate the existing skills among their employees and

identify their skills needs to improve their performance and career progression to align

with an organization’s business objectives.

According to [17], there are several ways to build competency models and

architectures, but the standard template in Figure 1 includes the following layers of

progression of the competency levels:

Core Competencies – this includes the general skills that all employees should

acquire to maintain the key values of the organization (e.g. Teamwork /

Networking, Customer Focus, etc.)

Job Family Competencies – this includes skills that are similar among different

groups of job roles (e.g. Project Management Fundamentals, etc.)

Technical / Professional Competencies – this includes skills that are specialized

to specific job roles (e.g. ability to use a specific software, etc.)

Leadership Competencies – this includes skills specific to senior and executive

level roles critical for organizational development, strategic objectives

attainment, and influencing work of other employees in the company (e.g.

strategic thinking, people management, etc.)

Each competency, regardless of level, is made up of a set of skills to be mastered

by the learner to acquire the addressed competency. For example, Project Management

is a Job Family competency which is made of a set of skills and knowledge area to be

mastered such as Communication, Scope Management, Human Resources

Management, Time Management, Risk Management, Cost Management, Procurement

Management, Quality Management, Project Integration, etc. [19]

Our approach will attempt to optimize a trainer’s decision regarding the

employees’ knowledge and skills needs in order to help them in a way that fulfills the

attainment of the four previously mentioned competencies.


Figure 1: Standard Competency Model.

Next, the proposed method is made up of two main concepts that will be

discussed in the literature in this chapter. The concepts are Knowledge Space Theory

(KST) and Clustering Analysis.

2.4. Knowledge Space Theory (KST)

As defined by [20], Knowledge Space Theory (KST) is a set-theoretical framework, and a single Knowledge Structure (KS) is made up of a collection of knowledge states, each of which represents the subset of problems in the learning domain that a learner is capable of performing at some predetermined level of competence. In later years, a type of KST based on competencies and the skills associated with them was developed, known as Competence-based Knowledge Space Theory. In contrast to the original KST, [21] refers to the underlying skills associated with the sets of competencies. This type of KST is more popular in workplace and professional applications of knowledge transfer, as it involves competence states along with the knowledge state. However, the original KST framework suffices for the thesis approach, as the model requires only the knowledge state.


KS fringes represent the symmetric difference between a given knowledge state and its neighboring states. An inner fringe represents what a learner has just learned, and an outer fringe represents what a learner is ready to learn next.

There are several previous applications of KST in the context of learning and education. For example, ALEKS (Assessment and LEarning in Knowledge Spaces) [22] is one of the prominent practical applications based on KST; ALEKS is a web-based system that continuously and individually assesses Kindergarten through Grade 12 students on mathematics, science, and business topics. Another application, named ComKoS [23], integrates a competency-based assessment model, ComBA, with KST to provide computer-based feedback and multiple response possibilities.

2.5. Clustering Analysis

In general, [24] defines clustering as dividing objects/events into meaningful groups (clusters) based on information found in and extracted from the data describing the objects or the relationships between them. Typically, the characteristics of the objects/events in one cluster or group are similar or related, and they differ from the characteristics of the objects/events in the other groups (clusters).

In the context of education and learning, clustering analysis has been used as a feasible data mining method to analyze learning patterns and behaviors in Online Learning Environments (OLEs) [13]. According to Antonenko, cluster analysis can help educational researchers develop learner profiles that are formed based on the learner's activity during a learning session.

There are many clustering methods available as [25] points out, and there is no

one “best” algorithm to select, as the clustering method adopted depends on the

applications, the conditions that they are used in, and the type of data sets being used.

According to [25], clustering algorithms can be classified into five classes; each

of which has several algorithms categorized under them as shown in Figure 2:

Partitioning-based – this class involves algorithms that divide the data into partitions, each of which has a center as reference and contains at least one object. Each partition represents a cluster.

Hierarchical-based – this class involves algorithms that divide clusters in a hierarchical manner depending on the proximity between data items. The process can be bottom-up (agglomerative), where several initially formed clusters recursively merge until no further merging is appropriate, or top-down (divisive), where one big cluster containing the entire data set is formed initially and then recursively splits into smaller clusters until no further splitting is appropriate.

Density-based – this class involves algorithms that divide objects based on nearby neighboring points, thus forming dense regions. This type of clustering can keep outliers out of the clusters formed.

Model-based – this class involves algorithms that optimize the clustering of the data by automatically determining the best number of clusters into which the given data can be divided.

Grid-based – this class involves algorithms that divide the data into clusters in the form of grids, which ensures faster processing time when performing statistical operations and analysis on the data in each grid.

Figure 2: Overview of Clustering Algorithms.

Some notable clustering algorithms are K-Means, DBSCAN, and EM; they are briefly described in Table 1 and Figure 3:


Table 1: Clustering Algorithms to be Used

Class                 Algorithm   Description of Clustering Algorithm Selected [25]
Partitioning-based    K-Means     K-Means iteratively searches for possible cluster
                                  centers and then assigns each object to the center
                                  with properties most similar to the object. This
                                  forms different clusters with different centers
                                  and objects.
Density-based         DBSCAN      DBSCAN stands for Density-Based Spatial Clustering
                                  of Applications with Noise. It forms clusters
                                  containing border points that lie within a specific
                                  radius of a core point. Points that are neither
                                  border points nor core points are known as noise
                                  points and form outlier clusters. The algorithm can
                                  recognize clusters with arbitrary shapes and is
                                  useful when the data has a lot of noise.
Model-based           EM          EM stands for Expectation Maximization. It is an
                                  iterative algorithm for finding maximum likelihood
                                  estimates of parameters in statistical models that
                                  depend on hidden, unobserved variables of the data.
                                  It assigns each point in the data set to the most
                                  likely Gaussian distribution, each with its own
                                  statistical parameters and properties.

The results from the selected clustering algorithms will be evaluated for validity using certain measures, discussed in the Evaluation chapter, in order to determine which algorithm gives the best set of results and instructional decisions.

To evaluate the "goodness" of the suggested approach combining KST and Clustering Analysis, the criteria mentioned in [25] will be examined for the selected clustering algorithms. The evaluation criteria depend on the three-dimensional aspects of Big Data (Volume, Variety, and Velocity), which are shown in Table 2:


Figure 3: Examples of K-Means, DBSCAN, & EM Clustering.

Table 2: Dimensions of Clustering Evaluation Criteria

Property   Criteria
Volume     Size of Dataset; Handling High Dimensionality; Handling Outliers/Noisy Data
Variety    Type of Dataset; Clusters Shape
Velocity   Time Complexity
Others     Input Parameters; Stability


Chapter 3: Approach and Algorithm

In this chapter, an example of a KS is presented, along with the intuition behind

using the KST fringes. Next, the latter example is used to explain the approach and

algorithm.

3.1. Illustrative KST Example

Figure 4 below is an example of a KS for the Grade 4 unit of Multiplication. The KS in the example consists of eight knowledge states (ф, A, B, C, D, E, F, and G), and each knowledge state is made up of a set of topics (a, b, c, and/or d) indicating which topics the learner knows at that knowledge level. In general, constructing an appropriate KS is not an easy task; it usually involves input from experienced teachers and one of many methods, as mentioned by [20].

Figure 4: An Example Knowledge Structure for Grade 4 Multiplication.

The topics (a, b, c, and/or d) also form the fringes that go in and out of the

knowledge states of the Grade 4 Multiplication KS. The topics going into the knowledge

states are called inner fringes. They represent what the student has learnt recently given

his/her current knowledge state. Using the example in Figure 4, a student at knowledge

state (‘C’) has an inner fringe set {c} and has recently learned the topic ‘Multiplication

of Decimals’ (i.e. c).

On the other hand, the topics going out of the knowledge states are called outer

fringes, and they represent what the student is ready to learn next given his/her current


knowledge state. Using the same example in Figure 4, a student at knowledge state (‘C’)

has an outer fringe set {b, d} and is ready to be taught next the topics ‘Multiplication of

Fractions’ (i.e. b) and ‘Multiplication of Percentages’ (i.e. d).
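To make the fringe computation concrete, the following is a minimal R sketch (R being the language used for the thesis implementation via RStudio). The state family below is the one implied by the fringes quoted in this section, and the function names are illustrative, not code from the thesis:

# Knowledge structure of Figure 4, reconstructed from the fringes in the text.
KS <- list(phi = character(0), A = c("a"), B = c("a","b"), C = c("a","c"),
           D = c("a","b","d"), E = c("a","b","c"), F = c("a","c","d"),
           G = c("a","b","c","d"))
topics <- c("a","b","c","d")

is_state <- function(s) any(sapply(KS, function(k) setequal(k, s)))

# Inner fringe: topics whose removal leaves another state in KS.
inner_fringe <- function(state) Filter(function(t) is_state(setdiff(state, t)), state)
# Outer fringe: topics whose addition yields another state in KS.
outer_fringe <- function(state) Filter(function(t) is_state(union(state, t)),
                                       setdiff(topics, state))

inner_fringe(KS$C)   # "c"      -> what was recently learned
outer_fringe(KS$C)   # "b" "d"  -> what is ready to be learned next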

Fringes form a crucial part of the thesis approach as they will be used to cluster

the learners accordingly. The intuition behind using the KST’s inner and outer fringes

is attributed to the concept of Zone of Proximal Development (ZPD). As [20] points

out, some have argued that inner and outer fringes in the KS are representations of the

Zone of Proximal Development (ZPD) of learners [26]. In the thesis approach, ZPD is

appropriate because “the ZPD explicitly includes the intervention of external agents,

such as human teachers or other students (via cooperative problem solving). This is

consistent with the possible use of the outer fringe(s) mentioned previously, which

involves the selection of the part of the class that is prepared for pointed instruction on

a particular topic” [20]. Therefore, in alignment with ZPD, for the data sample on which

the proposed approach will be applied, the outer fringes of the learners who will be

clustered together meaningfully before undergoing the training sessions. Instructions

represent what learners are ready to learn unaided and without teacher intervention. On

the other hand, the outer fringes of the learners who will be clustered together

meaningfully after undergoing training sessions and instructor-led sessions represent

what students are ready to learn with the help of a teacher using conventional techniques

or with the aid of technology. Accordingly, the same can be said about inner fringes.

3.2. Approach and Algorithm

The primary approach in this thesis is to cluster the learners using the inner

fringes and outer fringes of their knowledge states in a given KST. The approach is

presented using the flow chart in Figure 5. In addition, the Algorithm in Figure 6

presents the pseudo-code for the approach.

As noted in Step 3 of the approach, the binary vectors of the inner fringes and outer fringes are collapsed into a single positive real number. The reason for this collapse will be explained later in the Detailed Example section.


Figure 5: Approach.

Algorithm: Extracting Inner and Outer Fringe Sets and Clustering Learners Based on Fringes

Input:
    Knowledge Space KS
    Performance Data of Learners/Students DATA := {s1, s2, s3, ..., sn}
    Threshold score of the skills set th
Output:
    Clusters of learners based on their encoded inner and outer fringes

for each si ∈ DATA do
    get inner fringes MIFi
    get outer fringes MOFi
    if score of inner/outer fringe > th then
        assign vector value of MIFi / MOFi = 1
    else
        assign vector value of MIFi / MOFi = 0
    MIFn = MIFi ∪ MIFn
    MOFn = MOFi ∪ MOFn
// Convert each fringe set in MIFn/MOFn to R+
// Cluster learners/students based on their encoded inner and outer fringes

Figure 6: Algorithm.


3.3. Detailed Example

In this section, the steps of the approach in Figure 5 are explained in detail using the Multiplication KST example in Figure 4.

3.3.1. Step 1. In Step 1, the inner fringes and outer fringes of the students/learners in a given data sample are extracted using the inputs to the proposed approach. The inputs to the method proposed here are: 1) an appropriately constructed KS, 2) a threshold score to determine whether a student passed or failed a topic, and 3) the students'/learners' performance data on the topics covered in the KS. Table 3 shows a sample of 5 students (s1, s2, s3, s4, s5) and their performance results in the topics included in the KS shown in Figure 4. Note that some data might be unavailable or missing, which is a realistic situation when collecting any type of data.

Table 3: Example of Student Topic Assessment Scores

           Items in Knowledge Structure
Student      a      b      c      d
s1          60%    29%    55%    20%
s2          80%    77%    28%    17%
s3          62%    27%    90%    15%
s4          85%    75%    80%    80%
s5          17%    65%    60%    25%

The KST properties of inner and outer fringes are critical to the method being proposed. The inner and outer fringes for every state in Figure 4 are shown on the arrows between the knowledge states. For example, the inner fringe for state ('A') in Figure 4 is represented by the set {a}, and the outer fringe for state ('A') is represented by the set {b, c}. In other words, a learner in state ('A') is ready to learn either b or c to move forward in his/her learning.

3.3.2. Step 2. In Step 2, using the threshold score from the input, a binary vector of the extracted inner and outer fringes is created for every student/learner. Several methods can be used for the binary vector transformation. The example here uses an absolute threshold: scores less than or equal to 30% are assigned a 0, and scores above 30% are assigned a 1. The results after the transformation are shown in Table 4.
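As a small illustration, the thresholding of Step 2 can be reproduced in R (the scores are the Table 3 values; the variable names are ours):

# Raw topic scores from Table 3, one row per student.
scores <- rbind(s1 = c(a = .60, b = .29, c = .55, d = .20),
                s2 = c(a = .80, b = .77, c = .28, d = .17),
                s3 = c(a = .62, b = .27, c = .90, d = .15),
                s4 = c(a = .85, b = .75, c = .80, d = .80),
                s5 = c(a = .17, b = .65, c = .60, d = .25))
th <- 0.30                           # absolute threshold used in the example
binary <- ifelse(scores > th, 1, 0)  # reproduces the binary vectors of Table 4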

Once a binary item vector for each student has been calculated, a student can be

assigned to a unique ‘best’ knowledge state in the KS that best fits his/her performance.


Results based on a deterministic approach described in [27] for this assignment are

shown in Table 5.

Table 4: Example of Student Topic Deterministic Assessment Scores

           Items in Knowledge Structure
Student      a    b    c    d
s1           1    0    1    0
s2           1    1    0    0
s3           1    0    1    0
s4           1    1    1    1
s5           0    1    1    0

Table 5: Students' Best Knowledge States Using a Deterministic Method

Student   Best 'Fit' State   State Components
s1        C                  [a, c]
s2        B                  [a, b]
s3        C                  [a, c]
s4        G                  [a, b, c, d]
s5        E                  [a, b, c]

Once each student has been assigned to a best-fit knowledge state, the inner and outer fringes for each student can be calculated, as shown in Table 6 below. With regard to the Inner Fringe Set (IFS) items, 0 means the IFS item has not been recently learnt, and 1 means the IFS item has been recently learnt by the student. In the case of the Outer Fringe Set (OFS) items, 0 means the OFS item is not ready to be learnt next, and 1 means the OFS item is ready to be learnt next by the student.

Table 6: Students' Inner and Outer Fringes

Student   Inner Fringe(s)   Outer Fringe(s)   Inner Fringe Set   Outer Fringe Set
                                              [a,b,c,d]          [a,b,c,d]
s1        {c}               {b,d}             [0,0,1,0]          [0,1,0,1]
s2        {b}               {c,d}             [0,1,0,0]          [0,0,1,1]
s3        {c}               {b,d}             [0,0,1,0]          [0,1,0,1]
s4        {b,c,d}           { }               [0,1,1,1]          [0,0,0,0]
s5        {b,c}             {d}               [0,1,1,0]          [0,0,0,1]

For example, students s1 and s3 possess the same knowledge state ('C'), and therefore have the same inner fringe {c} and outer fringes {b, d}, meaning that they could be clustered together from a future-learning perspective. In general, both inner and outer fringes can be used to cluster students; the algorithm proposed here uses both.
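A hedged R sketch of such an assignment, continuing the sketch from Section 3.1 (it reuses KS and topics from there): the rule below picks the state whose indicator vector has the smallest Hamming distance to the student's binary vector, which is a plausible stand-in for the deterministic method of [27] rather than a reproduction of it:

# Indicator vector of a state over the topic set, e.g. {a,c} -> c(1,0,1,0).
state_vec <- function(state) as.integer(topics %in% state)

# Best-fit state: minimal Hamming distance between state vector and responses.
best_fit <- function(b) {
  d <- sapply(KS, function(s) sum(abs(state_vec(s) - b)))
  names(which.min(d))
}

best_fit(c(1, 0, 1, 0))  # "C", matching s1 and s3 in Table 5
best_fit(c(0, 1, 1, 0))  # "E", matching s5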

3.3.3. Step 3. In Step 3, the IFS/OFS binary vectors are collapsed into a single positive real number (R+), as shown in Table 7 below:

Table 7: Students' Inner and Outer Fringes Conversion

Student   IFS (binary)   IFS (decimal)   OFS (binary)   OFS (decimal)
s1        [0,0,1,0]      2               [0,1,0,1]      5
s2        [0,1,0,0]      4               [0,0,1,1]      3
s3        [0,0,1,0]      2               [0,1,0,1]      5
s4        [0,1,1,1]      7               [0,0,0,0]      0
s5        [0,1,1,0]      6               [0,0,0,1]      1

There are several motives for converting the fringe sets from binary form to decimal form; these will be explained in Step 4, as the reasons are connected to the types of clustering algorithms used in this thesis.

In addition, note that the lower the decimal value of a fringe set, the more difficult the topics it contains. In the Multiplication example in Figure 4, topic difficulty progresses in ascending order, with topic a being the least challenging and topic d the most challenging. If the IFS or OFS is {d}, the binary vector is [0,0,0,1] and the decimal value is 1, whereas if the IFS or OFS is {a,b}, the binary vector is [1,1,0,0] and the corresponding decimal value is 12.
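A one-line R encoding of this conversion (leftmost topic as the most significant bit), with the Table 7 values as a check:

# Collapse a binary fringe vector to its positive decimal code.
encode <- function(b) sum(b * 2^(rev(seq_along(b)) - 1))

encode(c(0, 0, 1, 0))  # 2  -> IFS of s1
encode(c(0, 1, 0, 1))  # 5  -> OFS of s1
encode(c(1, 1, 0, 0))  # 12 -> the {a,b} example above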

3.3.4. Step 4. In Step 4, the last step of the approach, after applying the KST algorithm to the given sample of students and identifying their encoded IFS and OFS, the students are clustered according to their inner fringes and outer fringes using the clustering algorithms K-Means, DBSCAN, and EM.

For every clustering algorithm used, different viable metrics will be tested depending on the clustering technique, and results will be populated per metric, as shown in Figure 7.


Figure 7: Suggested Algorithm Overview.

For example, K-Means clustering has metrics such as the maximum number of centers allowed, DBSCAN clustering depends on the distance between points and the minimum number of neighborhood points allowed, and EM clustering depends on the log-likelihood value. These different metrics are discussed in each technique's designated chapter.

As mentioned previously in Step 3, collapsing the IFS and OFS into an R+ value is motivated by several factors. Firstly, the thresholding used when initially extracting each student's fringe sets makes the dimensionality of the fringes to be clustered grow with the number of topics in the subject being tested. For example, the Multiplication subject in Figure 4 consists of four topics, so a fringe set such as [0,1,0,1] has a dimension of four. Such high-dimensional binary data does not work well with most of the clustering algorithms used later, such as DBSCAN and EM. Therefore, all the fringe sets are linearized and collapsed to a single one-dimensional positive decimal code.

With respect to DBSCAN, the high dimensionality of binary data would always result in a single DBSCAN cluster containing all the learners. This is due to the ε (epsilon) value (explained in the DBSCAN procedure and parameters section in Chapter 7) always being 1, i.e., the maximum distance between one data point and another will always be 1, as shown in Figure 8. The figure assumes each student is assessed on three topics, so the data has a dimensionality of three. A DBSCAN result containing only one cluster is not useful for the proposed approach's purpose of providing feedback to the teacher and educational administrator.
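The behavior can be sketched in R with the dbscan package (an assumption on our part; the thesis names only RStudio). The five rows below are the IFS vectors and decimal codes from Tables 6 and 7, and the eps and minPts values are illustrative:

library(dbscan)

bin <- rbind(c(0,0,1,0), c(0,1,0,0), c(0,0,1,0), c(0,1,1,1), c(0,1,1,0))
dec <- matrix(apply(bin, 1, function(b) sum(b * 2^(rev(seq_along(b)) - 1))),
              ncol = 1)

# On the binary vectors every point chains to another within eps = 1,
# so DBSCAN returns a single cluster swallowing all learners.
dbscan(bin, eps = 1, minPts = 2)$cluster

# On the decimal codes the same parameters separate the learners
# (here: {s1,s3}, {s4,s5}, with s2 flagged as noise).
dbscan(dec, eps = 1, minPts = 2)$cluster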


Figure 8: DBSCAN Test on Multi-dimensional Data.

Similarly, with respect to EM clustering, the high dimensionality of binary data would always result in a single EM cluster containing all the learners. The cluster takes the form of a spherical distribution with a distance to the center (radius) of 1, as shown in Figure 9. As in the DBSCAN case, an EM result containing only one cluster is not useful for the proposed model's purpose of providing feedback to the teacher and educational administrator.

Figure 9: EM Test on Multi-dimensional Data.

Hence, converting the binary data to decimal data allows the clustering algorithms to behave better and provide a more meaningful outcome.

Finally, the clustering results will be evaluated using different Internal and External indices, which are discussed in the next chapter.


Chapter 4: Evaluation

To evaluate the proposed approach, known clustering validity measures are calculated for every clustering result formed. Following [25], two types of indices are calculated: Internal Indices, which assess a clustering using only the clustered data itself (typically through the cluster centroids), and External Indices, which compare the clustering against externally provided class labels.

4.1. Internal Indices

The Internal validation indices are as follows:

- Compactness (CP) – this index measures the average distance between the data points in a cluster and their cluster centroid. It is one of the most common validity measures for clustering analysis and is calculated in two stages: first, the compactness of each individual cluster is calculated using Equation (1); second, the average compactness over all clusters is calculated using Equation (2). The lower the value of CP, the better.

\overline{CP}_i = \frac{1}{\Omega_i} \sum_{x_i \in C_i} \| x_i - w_i \| \qquad (1)

where:
\Omega_i is the total number of elements in the ith cluster in the result
x_i is an element/data point in the ith cluster in the result
w_i is the centroid of the ith cluster in the result

\overline{CP} = \frac{1}{K} \sum_{k=1}^{K} \overline{CP}_k \qquad (2)

where:
K is the total number of clusters formed in the result
\overline{CP}_k is the individual compactness of cluster k in the result

- Separation (SP) – this index measures the degree of separation between the individual clusters in the result by averaging the pairwise Euclidean distances between the cluster centers, as in Equation (3). The lower the value of SP, the closer the clusters in the result are to each other.

SP = \frac{2}{k^2 - k} \sum_{i=1}^{k} \sum_{j=i+1}^{k} \| w_i - w_j \|_2 \qquad (3)

where:
k is the total number of clusters
w_i and w_j are the centroids of the ith and jth clusters in the result (j \neq i, j = i+1, \dots, k)

- Dunn Validity Index (DVI) – this index measures both the degree of compactness of the clusters and the degree of separation between the individual clusters in the result. It is calculated using Equation (4). The higher the value of DVI, the better, as a high value indicates that the clusters are compact and well-separated.

DVI = \frac{\min_{1 \le i \neq j \le K} \delta(C_i, C_j)}{\max_{1 \le m \le K} \Delta(C_m)} \qquad (4)

where:
K is the total number of clusters in the result
\delta(C_i, C_j) is the inter-cluster distance between cluster i and cluster j
\Delta(C_m) is the intra-cluster distance within cluster m

- Davies-Bouldin Index (DB) – this index measures the ratio of within-cluster scatter/dispersion (intra-cluster) to between-cluster separation (inter-cluster). It is calculated using Equations (5) and (6). The lower the DB value, the more compact the individual clusters are and the further apart the clusters in the result are from each other.

R_{ij} = \frac{var(C_i) + var(C_j)}{\| c_i - c_j \|}, \qquad R_i = \max_{j=1,\dots,k;\; j \neq i} R_{ij} \qquad (5)

where:
var(C_i) is the variance of the ith cluster in the result
var(C_j) is the variance of the jth cluster in the result
c_i is the centroid of cluster i
c_j is the centroid of cluster j

DB = \frac{1}{k} \sum_{i=1}^{k} R_i \qquad (6)

where:
k is the total number of clusters in the result

- WSS and BSS – these indices represent the within-cluster sum of squares and the between-cluster sum of squares. WSS is calculated using Equation (7) and BSS using Equation (8). The lower the value of WSS, the better, and the higher the value of BSS, the better.

WSS = \sum_{i=1}^{k} \sum_{x \in C_i} \| x - m_i \|^2 \qquad (7)

where:
C_i is cluster i
x is an observation belonging to cluster i
m_i is the mean of cluster i

BSS = \sum_{i=1}^{k} |C_i| \, \| m - m_i \|^2 \qquad (8)

where:
|C_i| is the number of observations in cluster i
m is the mean of all the data across all clusters
m_i is the mean of cluster i

- Intra-class Correlation Coefficient (ICC) – a descriptive statistic that estimates the strength of resemblance and correlation between observations within a single cluster in a clustering result [28]. The ICC is estimated using the variance analysis of a one-way ANOVA and should be non-negative. The estimated ICC is calculated using the following equation [29]:

ICC = \frac{\sigma_\alpha^2}{\sigma_\alpha^2 + \sigma_\varepsilon^2} \qquad (9)

where:
\sigma_\alpha^2 is the variance of the unobserved random trait between the clusters in a single clustering outcome, and
\sigma_\varepsilon^2 is the pooled variance within the clusters in a single clustering outcome

An R sketch showing how some of these internal indices can be computed follows this list.
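A minimal sketch, assuming base R's kmeans() and using the encoded fringes of Table 7 as toy data (the variable names are ours; the formulas follow Equations (1)-(3), (7), and (8)):

set.seed(1)
x  <- matrix(c(2, 4, 2, 7, 6), ncol = 1)   # encoded fringe codes (Table 7)
km <- kmeans(x, centers = 2)

# Compactness (Equations (1)-(2)): mean point-to-centroid distance per cluster.
cp_k <- sapply(seq_len(nrow(km$centers)), function(i) {
  pts <- x[km$cluster == i, , drop = FALSE]
  mean(sqrt(rowSums(sweep(pts, 2, km$centers[i, ])^2)))
})
cp <- mean(cp_k)

sp  <- mean(dist(km$centers))   # Separation (Equation (3)): avg pairwise centroid distance
wss <- km$tot.withinss          # WSS (Equation (7))
bss <- km$betweenss             # BSS (Equation (8))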

4.2. External Indices

The External validation indices are as follows:

- Adjusted Rand Index (ARI) – this index measures how correctly the data elements are clustered together. It considers the number of pairs of elements that occur in the same cluster and the number of pairs that occur in different clusters, and is calculated using Equation (10). ARI lies between 0 and 1; the closer ARI is to 1, the better.

ARI = \frac{n_{11} + n_{00}}{n_{00} + n_{11} + n_{10} + n_{01}} \qquad (10)

where:
n_{00} is the number of pairs of data points that are in different clusters in both partitions/data sets
n_{11} is the number of pairs of data points that are in the same cluster in both partitions/data sets
n_{10} is the number of pairs that are in the same cluster in the first partition but in different clusters in the second
n_{01} is the number of pairs that are in different clusters in the first partition but in the same cluster in the second

- Normalized Mutual Information (NMI) – this index measures the amount of statistical information shared between the variables representing the cluster assignments and the pre-defined label assignments of the data instances. It is calculated using Equation (11). An NMI of 1 means that the clustering assignments perfectly match the predefined label assignments; an NMI of 0 means that the match is weak.

NMI = \frac{\sum_{h} \sum_{l} d_{h,l} \log \left( \frac{\Omega \, d_{h,l}}{d_h c_l} \right)}{\sqrt{\left( \sum_{h} d_h \log \frac{d_h}{\Omega} \right) \left( \sum_{l} c_l \log \frac{c_l}{\Omega} \right)}} \qquad (11)

where:
d_h is the number of instances in class h
c_l is the number of instances in cluster l
d_{h,l} is the number of instances occurring in both class h and cluster l
\Omega is the total number of instances

- Cluster Accuracy (CA) – this index, also referred to as Purity, measures the percentage of data points that are correctly classified compared to the predefined class labels. It is calculated using Equation (12). The higher the CA, the better.

CA = \frac{1}{\Omega} \sum_{i=1}^{K} max(C_i \mid L_i) \qquad (12)

where:
\Omega is the total number of data points
C_i is the set of elements in the ith cluster
L_i is the set of class labels that appear in the ith cluster
max(C_i \mid L_i) is the number of times the most recurring label in the ith cluster appears

- Entropy – this measure compares the results of a cluster analysis to externally known results and given class labels. It is calculated using Equations (13), (14), and (15) in turn. The lower the entropy value, the better. (An R sketch of Purity and Entropy follows this list.)

p_{ij} = \frac{m_{ij}}{m_j} \qquad (13)

where:
p_{ij} is the probability that a member of cluster j belongs to class i
m_{ij} is the number of values of class i in cluster j
m_j is the number of values in cluster j

e_j = - \sum_{i=1}^{L} p_{ij} \log_2 p_{ij} \qquad (14)

where:
e_j is the entropy of cluster j
L is the total number of classes

e = \sum_{j=1}^{k} \frac{m_j}{m} e_j \qquad (15)

where:
e is the sum of the entropies of the clusters, weighted by the size of each cluster
k is the total number of clusters
m_j is the number of values in cluster j
m is the total number of data points
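As a sketch of the external measures, Purity and Entropy can be computed in R from a contingency table of cluster assignments against class labels; adjustedRandIndex() assumes the mclust package (our choice, not named by the thesis), and the vectors below are toy values:

clusters <- c(1, 1, 2, 2, 2)                  # toy cluster assignments
labels   <- c("C", "B", "C", "G", "E")        # toy predefined class labels
tab <- table(clusters, labels)

# Cluster Accuracy / Purity (Equation (12)).
purity <- sum(apply(tab, 1, max)) / length(labels)

# Entropy (Equations (13)-(15)): per-cluster entropy, then size-weighted sum.
ej <- apply(prop.table(tab, 1), 1,
            function(p) -sum(p[p > 0] * log2(p[p > 0])))
entropy <- sum(rowSums(tab) / sum(tab) * ej)

# library(mclust); adjustedRandIndex(clusters, labels)   # ARI (Equation (10))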

Moreover, in addition to the Internal and External Indices, to measure the dispersion of the scores of the individuals within the clusters, the Coefficient of Variation (CV) is estimated using the formula:

CV = \frac{\sigma}{\mu} \qquad (16)

Finally, to emphasize the significance of the research and the "goodness" of the thesis model, a Comparative Analysis is performed on the results of every clustering technique.

As mentioned earlier in this section, the Comparative Analysis measures include NMI, CA (Purity), Entropy, and ARI. These measures depend on pre-determined class-label hypotheses, which are used as the reference ground-truth grouping of the same data under study.

The first class-labels hypothesis is grouping based on the knowledge states of the students.

The second class-labels hypothesis is grouping the students based on the quartiles (25th-percentile bins) of their overall subject/unit scores, including the median of the scores.


Chapter 5: Data Collection and KST Encoding

5.1. Data Collection

First of all, two data samples were collected. The first data sample, referred to as Data Set 1, is used for model development and initial validation in Chapters 5, 6, 7, and 8. The second data sample, referred to as Data Set 2, is used for further model validation in Chapter 11.

The data samples used are based on pre-assessment and post-assessment grades in Grade 2 mathematics and literacy subjects, with the curricula based on Pakistan's National Curriculum for Mathematics [30]. The data was collected by teachers and educational administrators throughout an entire school year from over 260 different schools located in rural areas of Pakistan.

Eight Mathematics units were received, each with its own pre-assessment and post-assessment grades for the topics comprising the unit. One unit, Unit 1 – NUMBERS, was chosen out of the eight because it has the largest number of topics to be covered, and because these topics are essential for any child to advance in the other Grade 2 Mathematics units as well as the Mathematics units of higher grades.

A pre-assessment grade (also referred to as Pre-test) is the grade of the students in a topic BEFORE the topic concerned is taught by the educator or teacher. A post-assessment grade (also referred to as Post-test) is the grade of the students in a topic AFTER the topic concerned is taught by the educator or teacher.

Furthermore, the students are divided into two groups: Control and Treatment. A Control group contains students who are instructed using conventional teaching methods. A Treatment group contains students who are instructed using technological means such as tablets. In Data Set 1, there are 54 Control students and 148 Treatment students. In Data Set 2, there are 187 Control students and 615 Treatment students.

This type of data sample is suitable for the proposed approach because the model is general and can be applied in any context where the knowledge state of an individual about a set of topics/skills in a given subject/competency is concerned.


5.2. Illustrative Example of NUMBERS Unit

The first part of the approach proposed is determining the knowledge states of

the students to deduce their inner fringes (what the students have recently learned) and

outer fringes (what the students are ready to learn next).

The KST algorithm was applied to Data Set 1 as described in the Data Collection section. To recap, the data is divided into pre-assessment and post-assessment scores on Grade 2 mathematics topics; in the illustrated example, the assessment data covers the topics in the NUMBERS unit. The students are divided into two groups: a Control group and a Treatment group.

5.2.1. Determining the inner and outer fringes of students. First, the

appropriate KS of the NUMBERS unit was constructed using one of the many

techniques cited in [20]. The resulting KS is shown in Figure 10. Table 123 in Appendix

A: KST Details shows the topics in the NUMBERS unit that each small letter

represents. Using a threshold of 33% [31] and Data Set 1, 54 Control group students

and 148 Treatment group students were assigned to their appropriate knowledge states

in the KS in Figure 10. The IFS and OFS for every knowledge state can also be seen in

Table 124 in Appendix A: KST Details.

Figure 10: KS for Grade II NUMBERS Unit.


For the pre-test and post-test assessments performed on the Control and

Treatment groups, the knowledge state assignments are shown in Figure 11:

Figure 11: Knowledge State Assignments for Grade II NUMBERS Unit.

Therefore, from the knowledge state assignments shown in Figure 11, the IFS and OFS for these students can be deduced from Table 124 in Appendix A. For example, students in knowledge state ('C') will have the IFS {c} (i.e. they have learned topic c) and the OFS {b,d,e} (i.e. they are ready to be taught topics b, d, and e).

5.2.2. Encoding the inner and outer fringe sets of students. After determining the IFS and OFS for every student, the sets were treated as binary sequences and converted to positive decimals so that the students could be clustered clearly and conveniently. For example, a student in knowledge state ('C') will have the IFS {c}, which in binary is [0,0,1,0,0,0,0] and in decimal is 16, and the OFS {b,d,e}, which in binary is [0,1,0,1,1,0,0] and in decimal is 44. The complete decimal conversion corresponding to every fringe set of the NUMBERS unit can be found in Appendix A: KST Details.

Figure 12 is a visual representation of the decimal encoding of the IFS and OFS corresponding to each knowledge state. It gives an idea of how far one IFS/OFS is from another.

For example, with regard to inner fringes, knowledge states ('F') and ('G') are conceptually close to each other, which means they can potentially be clustered together, as opposed to two knowledge states that are far apart, such as ('I') and ('C'). The same applies to the outer fringes of knowledge states ('F') and ('G').

However, it is important to note that the IFS and OFS encodings are different for the same knowledge state, because the topics recently learnt by a student differ from the topics to be learnt next given a particular knowledge state. Therefore, it is not obvious how clustering will occur using only the knowledge states without their fringes; knowledge states only give an approximation of how clustering might occur, as will be observed in later chapters.

5.3. Additional Observations

Firstly, after the Pre-test, about 92.6% of the learners in the Control group belong to knowledge states ('H') and ('I'), and 93.2% of the learners in the Treatment group likewise belong to knowledge states ('H') and ('I'). Therefore, it can be deduced that for both groups the majority of the students have attained most of the topics required in the NUMBERS unit, since they belong to the higher-level knowledge states ('H') and ('I'). This is an interesting observation, since the Pre-test assessment is done BEFORE students receive instruction from the educator/teacher, yet the students seem to already have some idea about the topics in the NUMBERS unit. On the other hand, 7.4% of the Control group students and 6.8% of the Treatment group students belong to the lower-level knowledge states.

Secondly, after the Post-test, about 96.3% of the learners in the Control group belong to knowledge states ('H') and ('J'), and 96.6% of the learners in the Treatment group belong to knowledge states ('H') and ('I'). Therefore, it can be deduced that for the Control group, about 50% of the students have mastered the NUMBERS unit, since they belong to the highest-level knowledge state ('J'). In the case of the Treatment group, however, most of the students seem to have stayed in knowledge states ('H') and ('I'), and none of them have completely mastered the NUMBERS unit. This might be an indication that the traditional teaching methods used with the Control group were more effective than the technological methods used with the Treatment group.

Furthermore, for both groups, less than 4% of the Post-test students belong to the lower-level knowledge states. This might be an indication that teacher instruction helped improve the students' knowledge of the topics included in the NUMBERS unit compared to the Pre-test.

In terms of knowledge transfers, looking at Figure 13 below, after teacher intervention the Post-test assessment results show that 27.8% of the learners in the Control group who were at the lower knowledge states in the Pre-test moved to higher ones. Furthermore, 50% of the Control group learners remained in the same knowledge state, whereas 22.2% seem to have moved to lower knowledge states. Overall, 50% of the Control learners mastered all the topics in the NUMBERS unit.

On the other hand, the Post-test assessment results show that 33.8% of the learners in the Treatment group who were at the lower knowledge states in the Pre-test moved to higher ones. Furthermore, 47.3% of the Treatment group learners remained in the same knowledge state, whereas 18.9% seem to have moved to lower knowledge states. However, none of the Treatment learners mastered all the topics in the NUMBERS unit by reaching knowledge state ('J').

Hence, for both the Control and Treatment groups, the educator might want to single out the latter individuals, investigate the reasons behind the falling back of the students who transitioned to lower knowledge states, and determine the possible factors that led to this transition; for example, whether the causes relate to the teaching method, the curriculum design, the educators themselves, and/or the students' sociological state. Moreover, the percentage of students in the Treatment group who transitioned to higher knowledge states is higher than that of the Control group, which might indicate that the teaching methods used with the Treatment group are more effective. However, one cannot deny that none of the Treatment group students mastered all the topics in the NUMBERS unit.

Figure 13: Knowledge State Transfer between Pre-test and Post-test for the

NUMBERS unit.

In the next chapters, the various clustering techniques are discussed, along with how each clustering method clusters/groups the students based on the single-space conversion of the IFSs and OFSs extracted in this chapter.


Chapter 6: K-Means Clustering

6.1. K-Means Overview

K-Means clustering is a common type of partitioning-based algorithm. K-Means partitions the data set into K groups. The clusters are first initialized by randomly selecting K instances from the data set. Next, the mean of every cluster formed is calculated, and K-Means assigns each instance in the data set to the initially formed K clusters. The assignment can be done using different distance metrics [32], with Euclidean distance being the most common. The algorithm then iteratively reassigns the data instances to the clusters and recalculates the mean of every cluster. When the algorithm converges to a local minimum, the reassignment stops and the output consists of the K clusters. The local minimum of convergence depends on the centroids each cluster started with [33].

In summary, regardless of the distance measure used, the K-Means clustering in the approach uses K centroids. The K-centroid algorithm minimizes the total error by assigning each observation to the nearest cluster center; its formula is as follows [34]:

C(i) = \arg\min_{1 \le k \le K} d(x_i, m_k), \quad i = 1, \dots, N \qquad (17)

where:
x_i is the ith observation to be assigned, and
m_k is the kth center

The K-Means clustering technique in this thesis produces the optimal number of clusters using the Calinski and Harabasz (CH) index [31]. The CH index is based on the inter-cluster error sum of squares and the intra-cluster squared differences of all objects in the individual clusters; the formula is shown in Equation (18).

The value of q is the optimal number of clusters produced by the K-Means clustering, and q belongs to the set of integers between 2 and n-2, where n is the number of unique observations in the data set being clustered.

K-Means applications in education and learning include predicting students' academic performance using a deterministic model that analyzes existing students' results [33].


CH(q) = \frac{trace(B_q)/(q-1)}{trace(W_q)/(n-q)} \quad \text{for } q \in \{2, \dots, n-2\} \qquad (18)

where:
B_q = \sum_{k=1}^{q} n_k (c_k - c)(c_k - c)^T is the error sum of squares between the different clusters (inter-cluster)
W_q = \sum_{k=1}^{q} \sum_{i \in C_k} (x_i - c_k)(x_i - c_k)^T is the squared differences of all objects in a cluster from their respective cluster center c_k (intra-cluster)
c_k is the centroid of cluster k
c is the centroid of the data matrix
n_k is the number of objects in cluster C_k, and
x_i is the p-dimensional vector of observations of the ith object in cluster k
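A minimal R sketch of this selection, under the assumption that base R kmeans()'s betweenss and tot.withinss fields stand in for trace(B_q) and trace(W_q), and reading the (n-q) denominator over all observations (our interpretation); the toy data are the encoded fringes of Table 7:

x <- matrix(c(2, 4, 2, 7, 6), ncol = 1)
n_obs <- nrow(x)
q_max <- nrow(unique(x)) - 2          # search range 2..(n-2) over unique points

ch <- sapply(2:q_max, function(q) {
  km <- kmeans(x, centers = q, nstart = 10)
  (km$betweenss / (q - 1)) / (km$tot.withinss / (n_obs - q))   # Equation (18)
})
q_opt <- (2:q_max)[which.max(ch)]     # q with the largest CH value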

6.1.1. K-Means distance metrics. The K-Means clustering algorithm assigns observations to the mean that yields the least within-cluster sum of squares. Although Euclidean distance is the usual distance metric in K-Means, the distance between points can also be calculated using other distance metrics, as shown in Table 8 below [32]:

Table 8: K-Means Distance Metrics

Distance Metric                 Description                                   Formula
Euclidean Distance              Square distance between two points            d(x,y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}
                                or vectors x and y.
Maximum (Chebychev) Distance    Maximum distance between two points           d(x,y) = \max_{1 \le i \le n} |x_i - y_i|
                                or vectors x and y.
Manhattan (CityBlock) Distance  Absolute distance between two points          d(x,y) = \sum_{i=1}^{n} |x_i - y_i|
                                or vectors x and y.
Canberra Distance               Omits terms with zero numerator and           d(x,y) = \sum_{i=1}^{n} \frac{|x_i - y_i|}{|x_i| + |y_i|}
                                denominator from the sum, imputing
                                them as missing.

The upcoming K-Means Results section uses Euclidean distance to get

the K-Means clustering outcome.
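All four metrics of Table 8 are available directly in base R's dist(); a quick sketch with two toy vectors:

x <- rbind(p = c(1, 0, 1, 0), q = c(0, 1, 1, 1))
for (m in c("euclidean", "maximum", "manhattan", "canberra"))
  cat(m, ":", dist(x, method = m), "\n")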


6.2. K-Means Results

The first clustering technique tested in the approach is K-Means. K-Means

clustering was applied on the inner fringes and outer fringes results obtained in Chapter

5.

6.2.1. Clustering control and treatment students based on inner fringes. Next, the students were clustered based on their fringe sets, starting with the inner fringes (i.e., the topics they have recently learned given their knowledge level). Inner fringe clustering results can give the educator feedback on how students tend to progress in a certain subject, and can also help improve the curriculum design to adapt to students' progress behavior.

Using K-Means clustering, the optimal number of clusters was determined using the method from [31]. As seen in Table 9 and Table 10 below, the means, medians, and standard deviations of the respective Pre-test and Post-test clusters of the Control group were calculated first.

As observed in Table 9 and Table 10, the first point to note is that the difference between the individual clusters, whether in the Pre-test or the Post-test, is the topic(s) that have been recently learned by the students. For example, after the Pre-test, the Control students in C2, who are at knowledge state ('H'), have inner fringes d and e, and the Control students in C3, who are at knowledge state ('J'), have inner fringe g. Therefore, in feedback form, the teacher is informed that the students in C2 are the ones who have recently learned the topics of 'Read numbers up to 999' (i.e. d) and 'Count backward ten step down from any given number' (i.e. e), whereas the students in C3 are the ones who have recently learned the topic of 'Count and write in 10s (e.g. 10, 20, 30, etc.)' (i.e. g). The teacher can use this information to identify the reason behind some students' ability to attain the last topic of 'Counting in 10s' faster than other students. One of the reasons might be that the students in C3 already have sufficient prior knowledge of the topics in state ('I'), which contains the prerequisites of knowing how to count numbers up to 999 backwards and forwards (i.e. d and e) and arranging them in any order (i.e. f).

The second observation is that only a few knowledge state transfers happened between the Pre-test and the Post-test.


Table 9: Pre-test Control Students K-Means Clusters Based on Inner Fringes

              Knowledge States                    Clusters Statistics              No. of
Cluster   A  B  C  D  E  F  G  H   I  J      Mean     Median   SD      CV        Students
C1        0  1  0  0  0  1  0  0   0  0      0.5068   0.5068   0.1210  0.24      2
C2        0  0  0  2  0  0  0  24  0  0      0.6407   0.6380   0.1122  0.18      26
C3        0  0  0  0  0  0  0  0   0  26     0.8421   0.8223   0.1082  0.13      26
All       0  1  0  2  0  1  0  24  0  26     0.7327   0.7484   0.1539  0.21      54

Table 10: Post-test Control Students K-Means Clusters Based on Inner Fringes

              Knowledge States                    Clusters Statistics              No. of
Cluster   A  B  C  D  E  F  G  H   I  J      Mean     Median   SD      CV        Students
C'1       0  0  0  2  0  0  0  25  0  0      0.6333   0.6119   0.1461  0.23      27
C'2       0  0  0  0  0  0  0  0   0  27     0.9014   0.9690   0.1023  0.11      27
All       0  0  0  2  0  0  0  25  0  27     0.7673   0.7869   0.1841  0.24      54


While it is true that the students who were at the lower knowledge states ('B'), ('D'), and ('F') in the Pre-test moved to higher knowledge states in the Post-test, a majority of the students were already high up in the knowledge structure in the Pre-test, with 24 students in ('H') and 26 students in ('J'), meaning they had completed the NUMBERS unit. Also, the effect of receiving instruction can be seen in the Post-test clustering results, as the one student who was at knowledge state ('B') in the Pre-test moved to knowledge state ('D') in the Post-test, having recently learned the topic of 'Read numbers up to 999' (i.e. d) via the teacher's instruction.

In terms of statistical testing, first, at a significance level of 0.05, the assumption of normality was tested on the large clusters in the result using the Anderson-Darling test and the Shapiro-Wilk test. For the test on the Pre-test Control students' inner fringes, the two large clusters C2 (A = 0.293, p-value = 0.5756; W = 0.9636, p-value = 0.4681) and C3 (A = 0.4944, p-value = 0.1968; W = 0.9459, p-value = 0.1861) are normally distributed. For the test on the Post-test Control students' inner fringes, the two large clusters C'1 (A = 0.809, p-value = 0.0316; W = 0.9161, p-value = 0.03181) and C'2 (A = 1.9276, p-value < 0.05; W = 0.8219, p-value < 0.05) are not normally distributed.
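The testing pipeline can be sketched in R as follows (shapiro.test and kruskal.test are base R; ad.test assumes the nortest package and leveneTest the car package; the score vectors here are random placeholders, not the thesis data):

set.seed(42)
c2 <- runif(26, 0.4, 0.9)   # placeholder scores for cluster C2
c3 <- runif(26, 0.6, 1.0)   # placeholder scores for cluster C3

shapiro.test(c2)            # Shapiro-Wilk normality test
# nortest::ad.test(c2)      # Anderson-Darling normality test

scores  <- c(c2, c3)
cluster <- factor(rep(c("C2", "C3"), each = 26))
# car::leveneTest(scores ~ cluster)   # equal variances across clusters?
kruskal.test(scores ~ cluster)        # do medians differ across clusters?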

Next, at a significance level of 0.05, the assumption of equal variance across the clusters in a single result was tested using Levene's test. When Levene's test indicated equal variances across the clusters in a single result, a Kruskal-Wallis test was performed on the medians of those clusters. For the test on the Pre-test Control students' inner fringes, Levene's test indicated equal variances across clusters (F = 0.0462, p-value = 0.9549), whereas the median scores across clusters (Kruskal-Wallis: chi2 = 26.0246, df = 2, p-value < 0.05) were significantly different. To emphasize this significant difference of the median scores across clusters, the Kruskal-Wallis mean/average ranks and Mood's median test results for the clusters are as follows:

Table 11: Pre-test Control Students K-Means Clusters Kruskal-Wallis Mean Ranks

Cluster   No. of Students   Median Scores   Average Rank
C1        2                 0.5068          6.5
C2        26                0.6380          18.0
C3        26                0.8223          38.6
Overall   54                                27.5


Table 12: Pre-test Control Students K-Means Clusters Mood's Median Test

Cluster | No. of Students | Median Score | Individual 95.0% CIs
C1 | 2 | 0.5068 |
C2 | 26 | 0.638 |
C3 | 26 | 0.8223 |
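The variance and median comparisons applied above can be sketched as follows, assuming the per-cluster score lists are already extracted. The cluster values below are hypothetical and `compare_clusters` is an illustrative helper, not the thesis's actual code.

```python
# Sketch of the equal-variance gate (Levene) followed by the median tests
# (Kruskal-Wallis and Mood's median test) used for each clustering result.
from scipy.stats import levene, kruskal, median_test

def compare_clusters(*clusters, alpha=0.05):
    f_stat, lev_p = levene(*clusters)          # equal variance across clusters?
    out = {"Levene F": f_stat, "Levene p": lev_p}
    if lev_p > alpha:                          # variances not significantly different
        h_stat, kw_p = kruskal(*clusters)      # do cluster medians differ?
        m_stat, mood_p, grand_med, _ = median_test(*clusters)
        out.update({"KW chi2": h_stat, "KW p": kw_p, "Mood p": mood_p})
    return out

# Hypothetical per-cluster score lists (e.g. C1, C2, C3 of Table 9).
c1 = [0.42, 0.59, 0.51, 0.48]
c2 = [0.60, 0.64, 0.66, 0.61, 0.63]
c3 = [0.80, 0.84, 0.82, 0.86, 0.81]
print(compare_clusters(c1, c2, c3))
```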

For the test on Post-test Control Students Inner Fringes, Levene's test indicated equal variances across clusters (F = 0.6222, p-value = 0.4338), whereas Median scores across clusters (Kruskal-Wallis: chi2 = 29.0584, df = 1, p-value < 0.05) were significantly different. Similarly, to emphasize this significant difference, the Kruskal-Wallis mean ranks and Mood's Median test results of the clusters are as follows:

Table 13: Post-test Control Students K-Means Inner Fringes Clusters Kruskal-Wallis Mean Ranks

Cluster | No. of Students | Median Score | Average Rank
C'1 | 27 | 0.6119 | 16.0
C'2 | 27 | 0.969 | 39.0
Overall | 54 | | 27.5

Table 14: Post-test Control Students K-Means Inner Fringes Clusters Mood's Median Test

Cluster | No. of Students | Median Score | Individual 95.0% CIs
C'1 | 27 | 0.6119 |
C'2 | 27 | 0.969 |

Next, K-Means clustering and descriptive statistics were applied to the Treatment students, as seen in Table 15 and Table 16 below.


Table 15: Pre-test Treatment Students K-Means Clusters Based on Inner Fringes

Cluster | Knowledge States (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C1 | 1 0 0 0 0 0 1 0 0 0 | 0.4103 | 0.4103 | 0.0079 | 0.02 | 2
C3 | 0 0 0 2 3 0 0 0 0 0 | 0.4386 | 0.4907 | 0.128 | 0.29 | 5
C2 | 0 3 0 0 0 0 0 0 0 0 | 0.4752 | 0.5291 | 0.1185 | 0.25 | 3
C5 | 0 0 0 0 0 0 0 59 0 0 | 0.6426 | 0.624 | 0.0966 | 0.15 | 59
C4 | 0 0 0 0 0 0 0 0 79 0 | 0.8264 | 0.8095 | 0.1116 | 0.14 | 79
All | 1 3 0 2 3 0 1 59 79 0 | 0.7273 | 0.7197 | 0.1568 | 0.22 | 148

Table 16: Post-test Treatment Students K-Means Clusters Based on Inner Fringes

Cluster | Knowledge States (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C'4 | 0 0 0 0 0 2 0 0 0 0 | 0.6841 | 0.6841 | 0.2263 | 0.33 | 2
C'1 | 0 0 0 0 2 0 0 45 0 0 | 0.7049 | 0.7526 | 0.1319 | 0.19 | 47
C'3 | 0 0 0 0 0 0 1 0 0 0 | 0.7087 | 0.7087 | N/A | N/A | 1
C'2 | 0 0 0 0 0 0 0 0 98 0 | 0.9371 | 0.9629 | 0.0793 | 0.08 | 98
All | 0 0 0 0 2 2 1 45 98 0 | 0.8584 | 0.9261 | 0.1489 | 0.17 | 148


As observed in Table 15 and Table 16 of the Treatment students, and as seen previously for the Control students, the difference between the individual clusters, whether in the Pre-test or the Post-test, is again the topic(s) that have been recently learned by the students. For example, after the Pre-test, the Treatment students in C5 who are at knowledge state ('H') have inner fringes d and e, and the Treatment students in C4 who are at knowledge state ('I') have inner fringe f. Therefore, in feedback form, the teacher becomes informed that the students in C5 are the ones who have recently learned the topics of 'Read numbers up to 999' (i.e. d) and 'Count backward ten step down from any given number' (i.e. e), whereas the students in C4 are the ones who have recently learned the topic of 'Arrange numbers up to 999, written in mixed form in increasing or decreasing order' (i.e. f). The teacher can use this information to identify the reason why some students attain the topic of 'Arrange numbers up to 999' faster than other students. Furthermore, after the Pre-test, the students in knowledge states ('D') and ('E') are in the same cluster C3 because they have the same inner fringe, which is d. Therefore, the teacher will know that the students in this group C3 have recently learnt the topic of 'Read numbers up to 999'. In addition, after the Post-test, K-Means optimally clustered the students at knowledge state ('H') and knowledge state ('E') together, possibly because the common topic recently learnt by these students is 'Read numbers up to 999' (i.e. d), even though their inner fringe sets differ by 'Count backward ten step down from any given number' (i.e. e). This might inform the teacher that students who attain sufficient knowledge in reading numbers up to 999 have the potential to simultaneously acquire the skill of counting backwards ten step down from any given number.

The second observation is that, compared to the Control students, more knowledge state transfers happened between the Pre-test and the Post-test in the case of the Treatment students. Also, the students who were in the lowest knowledge states ('A') and ('B') moved to one of the higher knowledge states ('H') and ('I'). The larger amount of knowledge state transfer might be due to the larger number of students in the Treatment batch as compared to the Control batch (i.e. 148 vs 54 students). Moreover, it might be due to the teaching method used with the Treatment group, which involves using tablets. This method helped the teachers transform students who only had knowledge of counting numbers up to a certain low limit and identifying place values into students attaining more demanding topics, such as counting up/down to/from 999 and arranging numbers in any order. However, it seems that this same teaching method did not help any of the Treatment students master the full NUMBERS unit, as none of the students is in knowledge state ('J'). The teacher might not have been able to use the tablets as efficiently to teach the last topic of counting and writing in 10s as the traditional teaching method used with the Control students, and therefore more training on teaching with technology might need to be provided to the teacher.

In terms of statistical testing, first, at a significance level of 0.05, for the test on

Pre-test Treatment Students Inner Fringes, the two large clusters C4 (A = 1.5407, p-

value < 0.05; W = 0.941, p-value < 0.05) and C5 (A = 1.3607, p-value < 0.05; W =

0.9353, p-value < 0.05) are not normally distributed. For the test on Post-test Treatment

Students Inner Fringes, the two large clusters C’1 (A = 2.1933, p-value < 0.05; W =

0.8387, p-value < 0.05) and C’2 (A = 9.3122, p-value < 0.05; W = 0.742, p-value <

0.05) are also not normally distributed.

Next, at a significance level of 0.05, for the test on Pre-test Treatment Students

Inner Fringes, Levene’s test indicated equal variances across clusters (F= 2.4098, p-

value= 0.05199), whereas Median scores across clusters (Kruskal-Wallis: chi2=

79.7629, df = 4, p-value < 0.05) were significantly different. To emphasize this significant difference, the Kruskal-Wallis mean ranks and Mood's Median test results of the clusters are as follows:

Table 17: Pre-test Treatment Students K-Means Inner Fringes Clusters Kruskal-Wallis Mean Ranks

Cluster | No. of Students | Median Score | Average Rank
C1 | 2 | 0.4103 | 3.5
C3 | 5 | 0.4907 | 6.8
C2 | 3 | 0.5291 | 12.0
C5 | 59 | 0.624 | 48.4
C4 | 79 | 0.8095 | 102.4
Overall | 148 | | 74.5


Table 18: Pre-test Treatment Students K-Means Inner Fringes Clusters Mood's Median Test

Cluster | No. of Students | Median Score | Individual 95.0% CIs
C1 | 2 | 0.4103 |
C3 | 5 | 0.4907 |
C2 | 3 | 0.5291 |
C5 | 59 | 0.624 |
C4 | 79 | 0.8095 |

6.2.2. Clustering control and treatment students based on outer fringes. Next, the students were clustered based on their outer fringes (i.e. what topics they are ready to learn next). Outer fringe clustering results can help the educator separate learners into distinct groups and identify which topics are best taught next to each of these groups.

Using K-Means clustering, the optimal number of clusters was determined using

the method from [31]. As seen in Table 19 and Table 20 below, first the Control group

Pre-test and Post-test means, medians, and standard deviations of their respective

clusters were calculated.
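As a rough illustration of this step, the sketch below clusters encoded fringe-set values with K-Means and picks the number of clusters by silhouette score. The thesis determines the optimal k with the method from [31], which may differ from this criterion, and the encoded fringe values shown are hypothetical.

```python
# Hedged sketch: K-Means over 1-D encoded fringe-set values, selecting the
# number of clusters by the silhouette criterion (a stand-in for [31]).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_fringes(fringe_values, k_max=6):
    X = np.asarray(fringe_values, dtype=float).reshape(-1, 1)
    best = (None, -1.0, None)                       # (k, score, labels)
    for k in range(2, min(k_max, len(X) - 1) + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        score = silhouette_score(X, labels)
        if score > best[1]:
            best = (k, score, labels)
    return best[0], best[2]

# Hypothetical encoded outer-fringe values, one per student (max value 64).
k, labels = cluster_fringes([48, 48, 12, 48, 6, 48, 12, 48, 48, 6])
print(k, labels)
```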

As observed in Table 19 and Table 20, the difference between the individual clusters, whether in the Pre-test or the Post-test, is the topic(s) that the students are ready to learn given their current knowledge state. For example, after the Pre-test, the Control students in C1 who are at knowledge states ('F'), ('H') and ('J') have outer fringes e and f, and the Control students in C2 who are at knowledge states ('B') and ('D') have outer fringes c, d, and e. This informs the teacher that, using the conventional teaching methods, the students in C1 are ready to learn the topic of 'Count backward ten step down from any given number' (i.e. e) and the topic of 'Arrange numbers up to 999, written in mixed form in increasing or decreasing order' (i.e. f); otherwise, the other C1 students who have already mastered all the topics in the NUMBERS unit can just attend the lesson to revise the topics of counting and arranging numbers. On the other hand, the students in C2 are ready to learn the topic of 'Identify the place value of a specific digit in a 3-digit number' (i.e. c), the topic of 'Read numbers up to 999' (i.e. d), and the topic of 'Count backward ten step down from any given number' (i.e. e).


Table 19: Pre-test Control Students K-Means Clusters Based on Outer Fringes

Cluster | Knowledge States (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C2 | 0 1 0 2 0 0 0 0 0 0 | 0.5771 | 0.6476 | 0.1352 | 0.23 | 3
C1 | 0 0 0 0 0 1 0 24 0 26 | 0.7418 | 0.7562 | 0.1511 | 0.20 | 51
All | 0 1 0 2 0 1 0 24 0 26 | 0.7327 | 0.7484 | 0.1539 | 0.21 | 54

Table 20: Post-test Control Students K-Means Clusters Based on Outer Fringes

Cluster | Knowledge States (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C'2 | 0 0 0 2 0 0 0 0 0 0 | 0.2868 | 0.2868 | 0.0352 | 0.12 | 2
C'1 | 0 0 0 0 0 0 0 25 0 27 | 0.7858 | 0.7893 | 0.1607 | 0.20 | 52
All | 0 0 0 2 0 0 0 25 0 27 | 0.7673 | 0.7869 | 0.1841 | 0.24 | 54


The topics suggested by the outer fringe method are more useful to a teacher for planning what to teach the students next and how to group them than teaching what the Knowledge Structure suggests. For example, the Knowledge Structure in Figure 10 advises that, to move from knowledge state ('H') to the higher knowledge states ('I') and ('J'), the teacher should deliver the topic of arranging numbers (i.e. f) as well as that of counting and writing in 10s (i.e. g). The outer fringe method, however, stresses that the teacher has to focus on and spend time teaching only the topic of arranging numbers, and not move forward until the topic is acquired by the students.

The second observation, regarding knowledge state transfer, is similar to that of the corresponding inner fringe clustering.

In terms of statistical testing, first, at a significance level of 0.05, for the test on

Pre-test Control Students Outer Fringes, cluster C1 (A = 0.3121, p-value = 0.5392; W

= 0.9751, p-value = 0.356) is normally distributed. On the other hand, for the test on

Post-test Control Students Outer Fringes, cluster C’1 (A = 1.2201, p-value = 0.003185;

W = 0.9236, p-value = 0.002563) is not normally distributed.

Next, at a significance level of 0.05, for the test on Pre-test Control Students

Outer Fringes, Levene’s test indicated equal variances across clusters (F= 0.1149, p-

value = 0.736), and Median scores across clusters (Kruskal-Wallis: chi2= 2.5759, df =

1, p-value = 0.1085) were not significantly different in the case of this K-Means result.

This insignificance can be seen in the clusters' Kruskal-Wallis mean ranks and Mood's Median test results, which are as follows:

Table 21: Pre-test Control Students K-Means Outer Fringes Clusters Kruskal-Wallis Mean Ranks

Cluster | No. of Students | Median Score | Average Rank
C2 | 3 | 0.6476 | 13.3
C1 | 51 | 0.7562 | 28.3
Overall | 54 | | 27.5


Table 22: Pre-test Control Students K-Means Outer Fringes Clusters Mood's Median Test

Cluster | No. of Students | Median Score | Individual 95.0% CIs
C2 | 3 | 0.6476 |
C1 | 51 | 0.7562 |

For the test on Post-test Control Students Outer Fringes, Levene’s test indicated

equal variances across clusters (F= 3.348, p-value = 0.07302), whereas Median scores

across clusters (Kruskal-Wallis: chi2= 5.6762, df = 1, p-value = 0.0172) were

significantly different. This significance can be seen in the clusters' Kruskal-Wallis mean ranks and Mood's Median test results, which are as follows:

Table 23: Post-test Control Students K-Means Outer Fringes Clusters Kruskal-Wallis Mean Ranks

Cluster | No. of Students | Median Score | Average Rank
C'2 | 2 | 0.2868 | 1.5
C'1 | 52 | 0.7893 | 28.5
Overall | 54 | | 27.5

Table 24: Post-test Control Students K-Means Outer Fringes Clusters Mood's Median Test

Cluster | No. of Students | Median Score | Individual 95.0% CIs
C'2 | 2 | 0.2868 |
C'1 | 52 | 0.7893 |

Next, K-Means clustering and descriptive statistics were applied to the Treatment students, as seen in Table 25 and Table 26 below.


Table 25: Pre-test Treatment Students K-Means Clusters Based on Outer Fringes

Cluster | Knowledge States (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C2 | 0 0 0 0 3 0 0 0 0 0 | 0.3872 | 0.4571 | 0.1511 | 0.39 | 3
C3 | 0 0 0 0 0 0 1 0 0 0 | 0.4048 | 0.4048 | N/A | N/A | 1
C1 | 1 0 0 0 0 0 0 0 0 0 | 0.4159 | 0.4159 | N/A | N/A | 1
C5 | 0 3 0 0 0 0 0 0 0 0 | 0.4752 | 0.5291 | 0.1185 | 0.25 | 3
C6 | 0 0 0 2 0 0 0 0 0 0 | 0.5158 | 0.5158 | 0.0013 | 0.00 | 2
C4 | 0 0 0 0 0 0 0 59 79 0 | 0.7478 | 0.7405 | 0.1391 | 0.19 | 138
All | 1 3 0 2 3 0 1 59 79 0 | 0.7273 | 0.7197 | 0.1568 | 0.22 | 148

Table 26: Post-test Treatment Students K-Means Clusters Based on Outer Fringes

Cluster | Knowledge States (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C'1 | 0 0 0 0 2 0 0 0 0 0 | 0.2813 | 0.2813 | 0.0594 | 0.21 | 2
C'2 | 0 0 0 0 0 2 1 45 98 0 | 0.8663 | 0.9274 | 0.1334 | 0.15 | 146
All | 0 0 0 0 2 2 1 45 98 0 | 0.8584 | 0.9261 | 0.1489 | 0.17 | 148


As observed in Table 25 and Table 26, the same outer fringe concept applies to the Treatment group as to the Control group. For example, after the Pre-test, the Treatment students in C4 who are at knowledge states ('H') and ('I') have outer fringes f and g, and the Treatment students in C5 who are at knowledge state ('E') have outer fringe b. Therefore, the teacher is advised to teach the students in C4 the more demanding topics of arranging numbers (i.e. f) as well as counting and writing in 10s (i.e. g), and to separate them from the students in C5, who need to be taught next the simpler topic of identifying simple place values (i.e. b). This will help the students in C5 focus more on attaining the primitive topics, rather than being taught the more complex topics directly, which would cause them to struggle with the NUMBERS unit.

The second observation, regarding knowledge state transfer, is similar to that of the corresponding inner fringe clustering.

In terms of statistical testing, first, at a significance level of 0.05, for the test on

Pre-test Treatment Students Outer Fringes, cluster C4 (A = 1.1272, p-value < 0.05; W

= 0.9653, p-value < 0.05) is not normally distributed. For the test on Post-test Treatment

Students Outer Fringes, cluster C’2 (A = 6.5695, p-value < 0.05; W = 0.8705, p-value

< 0.05) is also not normally distributed.

Next, at a significance level of 0.05, for the test on Pre-test Treatment Students

Outer Fringes, Levene’s test indicated equal variances across clusters (F= 1.9064, p-

value= 0.09689), whereas Median scores across clusters (Kruskal-Wallis: chi2=

26.1194, df = 5, p-value < 0.05) were significantly different. To emphasize this significant difference, the Kruskal-Wallis mean ranks and Mood's Median test results of the clusters are as follows:

Table 27: Pre-test Treatment Students K-Means Outer Fringes Clusters Kruskal-Wallis Mean Ranks

Cluster | No. of Students | Median Score | Average Rank
C3 | 1 | 0.4048 | 3.0
C1 | 1 | 0.4159 | 4.0
C2 | 3 | 0.4571 | 4.3
C6 | 2 | 0.5158 | 10.6
C5 | 3 | 0.5291 | 12.0
C4 | 138 | 0.7405 | 79.3
Overall | 148 | | 74.5


Table 28: Pre-test Treatment Students K-Means Outer Fringes Clusters Mood's Median Test

Cluster | No. of Students | Median Score | Individual 95.0% CIs
C3* | 1 | 0.4048 |
C1* | 1 | 0.4159 |
C2 | 3 | 0.4571 |
C6 | 2 | 0.5158 |
C5 | 3 | 0.5291 |
C4 | 138 | 0.7405 |

*No arithmetic/visual CI can be shown for this cluster as it only has one member.

For the test on Post-test Treatment Students Outer Fringes, Levene’s test

indicated equal variances across clusters (F= 2.4458, p-value = 0.12), whereas Median

scores across clusters (Kruskal-Wallis: chi2= 5.8926, df =1, p-value <0.05) were

significantly different. Similarly, to emphasize this significant difference, the Kruskal-Wallis mean ranks and Mood's Median test results of the clusters are as follows:

Table 29: Post-test Treatment Students K-Means Outer Fringes Clusters Kruskal-Wallis Mean Ranks

Cluster | No. of Students | Median Score | Average Rank
C'1 | 2 | 0.2813 | 1.5
C'2 | 146 | 0.9274 | 75.5
Overall | 148 | | 74.5

Table 30: Post-test Treatment Students K-Means Outer Fringes Clusters Mood's Median Test

Cluster | No. of Students | Median Score | Individual 95.0% CIs
C'1 | 2 | 0.2813 |
C'2 | 146 | 0.9274 |


Overall, for Data Set 1, from a K-Means clustering perspective, it is better to provide advice and guidance to the teachers about their in-class instruction using the outer fringes rather than the inner fringes. K-Means outer fringe clustering makes it easier for the teacher to group students according to the topic(s) each group is ready to learn next. Outer fringe clustering can also help teachers focus on struggling students who need to learn the simpler topics and spend more time with them, so they do not keep falling behind the other, more advanced students.

K-Means inner fringe clusters may, in turn, help educational administrators design their curricula more efficiently by observing and forecasting which topics learners tend to acquire simultaneously, while teachers can investigate the knowledge structure to check potential topics to teach next.

In terms of the statistical properties of the clusters in each K-Means result, the clusters comprising the larger numbers of students were not normally distributed. Also, in most results, Levene's test demonstrated equal variance across the clusters, but the Median scores across them differed significantly as per the Kruskal-Wallis test and Mood's Median test.

6.3. K-Means Results Evaluation

Next, the results from the K-Means clustering were evaluated using the indices described in Chapter 4. First, the Internal indices CP, SP, DB, DVI, WSS, and BSS were calculated using the methods from [35]. These indices for the K-Means clusters are shown below in Table 31.
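As background for reading Table 31, the two scatter measures can be sketched directly. This is a minimal illustration assuming 1-D fringe values and integer cluster labels; CP, SP, DB, and DVI follow the definitions in [35] and are not reproduced here.

```python
# Sketch of WSS (within-cluster sum of squares, lower = more cohesive) and
# BSS (between-cluster sum of squares, higher = more separated).
import numpy as np

def wss_bss(values, labels):
    x = np.asarray(values, dtype=float)
    labels = np.asarray(labels)
    grand_mean = x.mean()
    wss = bss = 0.0
    for c in np.unique(labels):
        members = x[labels == c]
        centroid = members.mean()
        wss += ((members - centroid) ** 2).sum()            # scatter inside each cluster
        bss += len(members) * (centroid - grand_mean) ** 2  # scatter between clusters
    return wss, bss

# Hypothetical fringe values grouped into two clusters.
print(wss_bss([48, 50, 47, 12, 11, 13], [0, 0, 0, 1, 1, 1]))
```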

Looking at the resulting indices, first, in terms of compactness, the K-Means inner fringe clusters are always more compact than the outer fringe clusters. Furthermore, the Treatment clusters are always more compact than the Control clusters, with the Treatment inner fringe compactness being 0.0025 after the Pre-test and 0.0617 after the Post-test.

Next, the separation of the K-Means outer fringe clusters is better than that of the inner fringe clusters, with the Treatment outer fringe clusters being the most separated after the Pre-test as well as after the Post-test; the separations are 25.09 and 30.60 respectively.

The DB of the Treatment clusters is overall lower than that of the Control clusters, with the Treatment outer fringe clusters' DB being 0.0018 after the Pre-test and 0.0190 after the Post-test.


Table 31: K-Means Results Evaluation (Internal Indices)

Pre-test Clusters | ↓CP* | ↑SP+ | ↓DB | ↑DVI | ↓WSS | ↑BSS | ↓WSS/BSS
Control Inner Fringes | 0.3195 | 13.7538 | 0.2127 | 0.4375 | 0.0430 | 0.043 | 0.043
Control Outer Fringes | 1.1142 | 18.9804 | 0.4793 | 1.0000 | 0.1518 | 0.1518 | 0.1518
Treatment Inner Fringes | 0.0025 | 12.5333 | 0.0441 | 0.3333 | 0.0069 | 0.0069 | 0.0069
Treatment Outer Fringes | 0.4927 | 25.0903 | 0.0018 | 4.0000 | 0.0047 | 0.0047 | 0.0047

Post-test Clusters | ↓CP | ↑SP | ↓DB | ↑DVI | ↓WSS | ↑BSS | ↓WSS/BSS
Control Inner Fringes | 0.2849 | 10.7037 | 0.0532 | 1.7500 | 29.62963 | 1546.685 | 0.0192
Control Outer Fringes | 1.0173 | 15.0385 | 0.0677 | 7.0000 | 51.92308 | 435.5584 | 0.1192
Treatment Inner Fringes | 0.0617 | 12.7876 | 0.0018 | 1.0000 | 30.6383 | 8808.389 | 0.0035
Treatment Outer Fringes | 0.5802 | 30.6027 | 0.0190 | 3.4286 | 88.9589 | 1847.744 | 0.0481

*↓ means the less the value the better.
+↑ means the greater the value the better.


Also, the DVI of the K-Means outer fringe clusters is higher than that of the inner fringe clusters, with the Treatment outer fringe clusters' DVI being 4.0 after the Pre-test and the Control outer fringe clusters' DVI being 7.0 after the Post-test.

In addition, the WSS and BSS of every K-Means clustering result were calculated. In terms of WSS, the Pre-test outer fringe clusters are more cohesive than the Pre-test inner fringe clusters, as the WSS for the outer fringe clusters is lower than that for the inner fringe clusters. However, this does not appear to be the case for the Post-test results.

In terms of BSS, the values suggest that in most results the inner fringe clusters are better separated than the outer fringe clusters. Despite this finding, however, the SP values among the Internal indices are more critical in describing how well the clusters are separated than the BSS. According to SP, the outer fringe clusters are better separated.

As seen in Figure 14 and Figure 15, at a threshold of 33%, looking at the compactness and DB, the inner fringe clustering results have a better quality than the outer fringe results. On the other hand, if the clustering results were judged based on separation and DVI, the outer fringe clusters would give the better quality of results.

Figure 14: K-Means Results Internal Indices Comparison (Control Students)


Figure 15: K-Means Results Internal Indices Comparison (Treatment Students)

Therefore, before determining which fringes are better for giving feedback to the educational administrator, the Internal indices used to make the judgment have to be decided first.

Figure 16: K-Means Results Internal Indices Comparison (Control vs. Treatment)

It is also noted in Figure 16 that the indices for the Treatment clusters are "better" than those of the Control clusters. For example, with regards to the Post-test outer fringes, the CP values for the Control and Treatment clustering results are 1.0173 and 0.5802 respectively. This means the Treatment clustering results are more compact than the Control ones. Hence, there is still a potential that teaching methods using technology might prove to be better than the conventional methods. The Pre-test Treatment outer fringe clusters and the Post-test Treatment outer fringe K-Means clusters can be considered to have a "good" quality due to their suitable Internal index values as compared to the other K-Means clustering results.

Finally, the ICC value was calculated for each of the eight K-Means clustering results. The ICCs are as follows:

Table 32: K-Means Clusters Intra-class Correlation Coefficient

K-Means Clusters | ICC
Pre-test Control Inner Fringes | 0.6334
Pre-test Control Outer Fringes | 0.2969
Pre-test Treatment Inner Fringes | 0.6804
Pre-test Treatment Outer Fringes | 0.6823
Post-test Control Inner Fringes | 0.6898
Post-test Control Outer Fringes | 0.8231
Post-test Treatment Inner Fringes | 0.7186
Post-test Treatment Outer Fringes | 0.9040

In terms of ICC, as shown in Table 32, the ICC values of the K-Means fringe clustering results are overall more than 60%, which is considered acceptable, with the exception of the ICC of the Pre-test Control outer fringe clusters at 29.69%. Therefore, the high ICC of most of the results, especially the outer fringe results, reinforces the meaningfulness of the K-Means clustering results, and it also indicates that clustering the students based on fringes makes a difference as opposed to grouping them based on school or grades only.
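One common way to compute such an ICC is the one-way ICC(1) form, sketched below under the assumption that the cluster labels define the groups; the thesis does not state which ICC variant it uses, so this is only an illustration.

```python
# Hedged sketch of a one-way intraclass correlation coefficient, ICC(1),
# from scores grouped by cluster label.
import numpy as np

def icc1(values, labels):
    x, g = np.asarray(values, dtype=float), np.asarray(labels)
    groups = [x[g == c] for c in np.unique(g)]
    n, m = len(x), len(groups)
    k = n / m                                     # average group size
    grand = x.mean()
    ss_between = sum(len(v) * (v.mean() - grand) ** 2 for v in groups)
    ss_within = sum(((v - v.mean()) ** 2).sum() for v in groups)
    ms_between = ss_between / (m - 1)
    ms_within = ss_within / (n - m)
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical scores for two clusters; a high ICC means students within a
# cluster are more alike than students across clusters.
print(icc1([0.60, 0.64, 0.62, 0.90, 0.95, 0.92], [0, 0, 0, 1, 1, 1]))
```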

The External indices of the K-Means clustering results will be discussed in the K-Means Comparative Analysis section of this chapter, where the clustering results are compared to pre-defined class labels.

6.4. K-Means Comparative Analysis

Finally, to emphasize the importance and "goodness" of the approach, a comparative analysis is done between the K-Means clustering results and each of two sets of pre-defined class labels. The measures and indices used are NMI, CA (Purity), Entropy, and ARI. The NMI, CA, and ARI measures were calculated using the techniques mentioned in [36], [37], and [38] respectively.
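For reference, the four External indices can be computed along the following lines. This sketch assumes two label vectors over the same students and follows the standard definitions, which may differ in sign convention from the Entropy values reported in the tables below.

```python
# Sketch of the External indices: Purity (CA), Entropy, NMI, and ARI, for two
# labelings of the same students.
import numpy as np
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score
from sklearn.metrics.cluster import contingency_matrix

def purity_and_entropy(class_labels, cluster_labels):
    m = contingency_matrix(class_labels, cluster_labels)   # classes x clusters
    purity = m.max(axis=0).sum() / m.sum()
    props = m / m.sum(axis=0, keepdims=True)               # class mix per cluster
    weights = m.sum(axis=0) / m.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        per_cluster = -np.nansum(props * np.log2(props), axis=0)
    return purity, float((weights * per_cluster).sum())

# Hypothetical knowledge-state labels vs fringe-cluster labels.
ks = [0, 0, 1, 1, 2, 2]
fr = [0, 0, 1, 1, 1, 2]
print(purity_and_entropy(ks, fr),
      normalized_mutual_info_score(ks, fr),
      adjusted_rand_score(ks, fr))
```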


6.4.1. K-Means clustering as explained by knowledge states. The first hypothesis for the pre-defined class labels is that K-Means clusters the students according to their knowledge states.

First, for each cluster resulting from applying the K-Means algorithm to the targeted data, the Purity, Entropy, NMI, and ARI were calculated. A detailed

comparative analysis was created for every run on every data sample: (Pre-test Control

Inner Fringes), (Pre-test Control Outer Fringes), (Pre-test Treatment Inner Fringes),

(Pre-test Treatment Outer Fringes), (Post-test Control Inner Fringes), (Post-test Control

Outer Fringes), (Post-test Treatment Inner Fringes), and (Post-test Treatment Outer

Fringes).

Overall, the comparative analysis results for the first pre-defined class labels

hypothesis are as follows:

Table 33: Is K-Means Clustering Based on Knowledge States

K-Means Clusters | ↑CA* | ↓Entropy+ | ↑NMI | ↑ARI
Pre-test Control Inner Fringes | 0.9445 | -0.2254 | 0.8413 | 0.8940
Pre-test Control Outer Fringes | 0.5370 | -0.9442 | 0.4969 | 0.1712
Pre-test Treatment Inner Fringes | 0.9932 | -0.0135 | 0.5147 | 0.8665
Pre-test Treatment Outer Fringes | 1 | 0 | 0.8812 | 1
Post-test Control Inner Fringes | 0.9630 | -0.1905 | 0.0796 | 0.0028
Post-test Control Outer Fringes | 1 | 0 | 1 | 1
Post-test Treatment Inner Fringes | 1 | 0 | 0.9366 | 1
Post-test Treatment Outer Fringes | 0.6756 | -0.9014 | 0.0694 | 0.0262

*↑ means the greater the value, the more resemblance between fringes and KS clustering.
+↓ means the less the value, the more resemblance between fringes and KS clustering.

With respect to K-Means clustering, and looking at Table 33 as compared to the knowledge states clustering results, it can be seen that the overall purity of each K-Means clustering result of the fringes, in both the Pre-test and the Post-test, is greater than 50%, sometimes approaching 100%, as in the cases of Pre-test Treatment Outer Fringes, Post-test Control Outer Fringes, and Post-test Treatment Inner Fringes. The same applies to the Entropy measure, which approaches 0 in most of the latter data sets.

Moreover, the overall NMI between the KS clusters and the fringe clusters is more than 50% in half of the cases, sometimes also approaching 100%. This is especially true for Post-test Control Outer Fringes.

Also, the overall ARI values between the KS clusters and the fringe clusters are more than 50% in half of the cases, also sometimes approaching 100%. This is especially true for Pre-test Treatment Outer Fringes, Post-test Control Outer Fringes, and Post-test Treatment Inner Fringes.

With respect to K-Means clustering, even though it may appear that clustering based on knowledge state alone is sufficient to give the teacher feedback about the students, without using their fringes, this is not entirely true, as there are still cases where the External indices show low resemblance between what the students are ready to learn next and the knowledge level they are at. For example, the External indices between Pre-test Control Outer Fringes and KS are CA = 0.5370, Entropy = -0.9442, NMI = 0.4969, and ARI = 0.1712. Another example is the External indices between Post-test Treatment Outer Fringes and KS, which are CA = 0.6756, Entropy = -0.9014, NMI = 0.0694, and ARI = 0.0262.

Overall, the comparative analysis for the first hypothesis of clustering students based on knowledge states is satisfactory. Therefore, with respect to K-Means clustering, even though in most of the cases there was a high resemblance between the fringe clustering outcomes and the knowledge states clustering outcome, this does not justify using only knowledge states rather than fringes to get information about the learners' learning progress. This is clearly demonstrated by the other cases, where the resemblance between the fringe clustering and knowledge states clustering outcomes was low.

6.4.2. K-Means clustering as explained by 25th percentile/quartile. The second hypothesis for the pre-defined class labels depends on grouping the students based on the 25th percentiles/quartiles of the students' overall NUMBERS unit scores.

First, the learners were distributed into four different groups using the 25th percentiles of the learners' average scores in the NUMBERS unit in the Illustrative Example. For every data set, the quartiles were extracted, and the Mean, Median, SD, and CV were calculated. The tables can be found in Appendix B: Quartiles Details. A detailed comparative analysis was created for every run on every data sample: (Pre-test Control Inner Fringes), (Pre-test Control Outer Fringes), (Pre-test Treatment Inner Fringes), (Pre-test Treatment Outer Fringes), (Post-test Control Inner Fringes), (Post-test Control Outer Fringes), (Post-test Treatment Inner Fringes), and (Post-test Treatment Outer Fringes).
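The quartile grouping itself is straightforward; a sketch is shown below under the assumption that each student is represented by a single average NUMBERS unit score, with hypothetical scores.

```python
# Sketch of splitting students into four groups by the quartiles of their
# average unit scores.
import numpy as np

def quartile_groups(scores):
    q1, q2, q3 = np.percentile(scores, [25, 50, 75])
    return [int(s > q1) + int(s > q2) + int(s > q3) for s in scores]  # 0..3

scores = [0.41, 0.62, 0.75, 0.88, 0.53, 0.91, 0.66, 0.48]
print(quartile_groups(scores))   # group index per student
```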


Overall, the comparative analysis results for the second pre-defined class labels

hypothesis are as follows:

Table 34: Is K-Means Clustering Based on Quartiles

K-Means Clusters | ↑CA* | ↓Entropy+ | ↑NMI | ↑ARI
Pre-test Control Inner Fringes | 0.5 | -1.524 | 0.3078 | 0.1973
Pre-test Control Outer Fringes | 0.2963 | -1.9349 | 0.0816 | 0.0022
Pre-test Treatment Inner Fringes | 0.473 | -1.5125 | 0.2939 | 0.4230
Pre-test Treatment Outer Fringes | 0.3244 | -1.8535 | 0.1458 | 0.0484
Post-test Control Inner Fringes | 0.5185 | -1.4784 | 0.3682 | 0.2490
Post-test Control Outer Fringes | 0.2963 | -1.9239 | 0.1112 | 0.0011
Post-test Treatment Inner Fringes | 0.4595 | -1.4818 | 0.3573 | 0.2234
Post-test Treatment Outer Fringes | 0.2635 | -1.9726 | 0.0604 | 0.0004

*↑ means the greater the value, the more resemblance between fringes clustering and Quartiles.
+↓ means the less the value, the more resemblance between fringes clustering and Quartiles.

With respect to K-Means clustering, and looking at Table 34 as compared to the Quartiles grouping results, it can be seen that the overall purity of each K-Means clustering result of the fringes, in both the Pre-test and the Post-test, is less than 50% in the majority of the cases. The same applies to the absolute Entropy measures, which exceed 1 in all of these data sets. These values indicate the lack of correspondence between the knowledge level of the learners and their corresponding unit scores.

Moreover, all NMI and ARI values between the fringe clusters and the Quartile groups are less than 50%, which indicates the large discrepancy between grouping based on quartiles of scores and K-Means clustering based on fringes.

Overall, the comparative analysis for the second hypothesis, using students' Medians to group the students based on the 25th percentiles/quartiles of their overall NUMBERS unit scores, was satisfactory in affirming the significance of the approach proposed in this thesis. It indicates that the knowledge states of learners and what they score in the course are not connected.

6.5. K-Means Overall Summary

In summary, the findings regarding clustering learners' fringes using the K-Means algorithm can be seen from two perspectives. The first perspective concerns the clustering method itself, K-Means, and the second concerns providing advice to the teacher and/or educational administrator.


With respect to K-Means clustering, using the dataset of Grade 2 learners' scores in the NUMBERS unit, outer fringe clustering results (what the learners are ready to learn) are better for giving the teacher feedback for in-class instruction if the clustering results are chosen based on the SP and DVI internal properties of the clusters. Otherwise, inner fringe clustering results (what the learners have recently learned) are better for that feedback if the clustering results are chosen based on the CP and DB internal properties of the clusters. Therefore, for K-Means clustering, both types of fringes can be used to guide the educational administrator on how to manage the students' learning experience.

When validating the K-Means clusters using the External indices, a high resemblance between the fringe clustering outcomes and the knowledge states clustering outcome is expected, but not entirely; therefore, it is not enough to use only knowledge states to get information about the learners' learning progress. Finally, dividing students into Quartiles using Median scores, as seen in the 'K-Means clustering as explained by 25th percentile/quartile' section, is not a good way to cluster students, as determined by the External index values in Table 34. In most cases, for a single K-Means clustering outcome, the Median across the distinct clusters is significantly different. However, the fringe clusters still do not group the students the same way the Quartiles method did.

With respect to testing using different distance metrics, the algorithm was

repeated for Manhattan, Maximum, and Canberra distance metrics. The results were

exactly the same as when using the Euclidean distance metric with the K-Means

algorithm.
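These four metrics differ only in how pairwise distances are computed. A small sketch of the distance matrices over hypothetical 1-D fringe values is given below; SciPy's names for Manhattan and Maximum are "cityblock" and "chebyshev".

```python
# Sketch comparing the four distance metrics over the same points.
import numpy as np
from scipy.spatial.distance import cdist

x = np.array([[48.0], [12.0], [6.0]])   # hypothetical encoded fringe values
for metric in ("euclidean", "cityblock", "chebyshev", "canberra"):
    print(metric)
    print(cdist(x, x, metric=metric).round(3))
```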

With respect to providing advice to the teachers and administrators, first, the students in the model development dataset come from different schools, with 2-4 representative student samples from each of the 36 schools. Therefore, in this thesis the advice given to the teacher is at the cluster level rather than the school level. At the cluster level, using the proposed model, the administrator would tell the teacher generally how the proposed model divided the students, and that the teacher should split his/her students into two or more groups (as suggested by the K-Means clustering outcomes), each group with the optimal topics that its students are ready to learn next (again as suggested by the K-Means clustering outcomes). For example, as shown previously in Table 19, after performing the Pre-test on the Control students, the K-Means outer fringe clustering outcome suggests that the teacher divide the students into two groups to teach them two different sets of topics: one group to be taught the topics of 'Identify the place value of a specific digit in a 3-digit number', 'Read numbers up to 999', and 'Count backward ten step down from any given number', and another group to be taught the topics of 'Count backward ten step down from any given number' and 'Arrange numbers up to 999, written in mixed form in increasing or decreasing order'.

Finally, with regards to inner fringes, the K-Means Post-test inner fringe clustering outcomes can inform the administrator how well the teachers are trained to teach the topics in a unit, whether through the conventional teaching methods (i.e. as used with the Control students) or the technological methods (i.e. as used with the Treatment students). Also, the administrator can use the inner fringe results to inform the teacher which topics the students tend to spend more time acquiring as compared to other topics.

In the next chapter, DBSCAN clustering will be applied to the same data that K-Means was tested on.


Chapter 7: DBSCAN Clustering

7.1. DBSCAN Overview

DBSCAN stands for Density-Based Spatial Clustering of Applications with Noise. Given a set of points in space, DBSCAN groups together the points which lie closely packed together within a given distance. On the other hand, the points which lie too far away from their nearest neighbors and stand alone in regions of low density are marked as outliers (aka Noise points). The DBSCAN algorithm can recognize clusters with arbitrary shapes, and it is useful when the data has a lot of noise [39].

7.1.1. DBSCAN procedure and parameters. The DBSCAN clustering algorithm used in the proposed model depends on two main parameters: ε (epsilon) and MinPts.

ε (epsilon) specifies the distance at which points are close enough to each other to be considered part of a cluster; it is the maximum allowed radius of the neighborhood around a point. Any point at a distance greater than ε from its neighborhood points is considered an outlier. MinPts specifies how many neighboring points should be included in a single dense region or cluster; it is the minimum number of points in the ε-neighborhood of a point. According to [40], to get the optimum DBSCAN clustering result, MinPts is often set to the number of dimensions of the dataset being clustered plus one (dimension(data) + 1). ε is decided using the knee of the kth nearest neighbor (k-NN) plot. Basically, for points in a cluster, their kth nearest neighbors are at roughly the same distance from one another. For example, in Figure 17 below, for k = 4, the knee is approximately at k-NN distance 10; therefore ε = 10.
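The k-NN distance curve behind such a plot can be generated as sketched below, assuming the (constant, fringe value) encoding described in the next paragraph; the knee is then read off the plotted curve. The helper name and the sample points are hypothetical.

```python
# Sketch of the sorted k-NN distance curve used to pick epsilon at the knee.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_distance_curve(points, k):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(points)  # +1: self is nearest
    dists, _ = nn.kneighbors(points)
    return np.sort(dists[:, -1])     # distance to the k-th true neighbor

# Hypothetical (constant, fringe value) rows; plot the curve and read epsilon
# where it bends sharply upward.
X = np.array([[1.0, 48], [1.0, 50], [1.0, 47], [1.0, 12], [1.0, 11], [1.0, 30]])
print(knn_distance_curve(X, k=4))
```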

For the dataset being used for model development, the value of MinPts will always be equal to 3, as each dataset has a dimension of 2: a constant and the fringe set value. The constant is required in order for DBSCAN to work properly, as DBSCAN requires data with more than one dimension. However, ε is different for every dataset used. Also, the DBSCAN algorithm used in the thesis model uses the nearest neighbor search strategy known as the "kdtree" data structure for faster k-nearest neighbor search [41]. The distances are calculated using Euclidean distances.


Figure 17: k-NN and k-NN Plot Example.

Consequent to the algorithm, DBSCAN classifies points in the following ways [39]:

Core Points – a point p is a core point if there are at least MinPts points within a distance ε from it. The points which are within an ε distance from the core point are known as "directly reachable". Therefore, no points are "directly reachable" from a point which is not a core point. In Figure 18, A is a core point and the green dots around it are "directly reachable" points.

Density-reachable Points – a point q is "density-reachable" if it is reachable from a core point p via a path p1, p2, …, pn with p1 = p and pn = q, where each pi+1 is "directly reachable" from pi. All the points on the path between p1 and pn must be core points. In Figure 18, B and C in blue are "density-reachable" points.

Outliers – a point which is neither a core point nor a "density-reachable" point is an outlier. In Figure 18, N in red is an outlier point.

Figure 18: DBSCAN Illustration.


7.2. DBSCAN Results

DBSCAN is the second clustering technique to be tested in the approach. As in the K-Means Results section, DBSCAN clustering was applied to the inner fringe and outer fringe results obtained in Chapter 4.

Using DBSCAN clustering, for every data set/case the number of clusters was determined using ε and MinPts. While MinPts is equal to 3 for all cases, ε was calculated using the knee of the kth nearest neighbor (k-NN) plot mentioned earlier. The different possible epsilons for each of the eight data sets and their k-NN plots can be found in Appendix D: DBSCAN Results Details.
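A minimal sketch of one such run is given below, assuming the (constant, fringe value) encoding, MinPts = 3, and a kd-tree neighbor search; note that scikit-learn's min_samples counts the point itself, so this is an approximation of the MinPts definition above. The fringe values shown are hypothetical.

```python
# Sketch of a DBSCAN run over encoded fringe values; label -1 marks Noise.
import numpy as np
from sklearn.cluster import DBSCAN

def dbscan_fringes(fringe_values, eps):
    X = np.column_stack([np.ones(len(fringe_values)), fringe_values])
    return DBSCAN(eps=eps, min_samples=3, algorithm="kd_tree").fit_predict(X)

print(dbscan_fringes([48, 50, 47, 49, 12, 6], eps=2))  # e.g. [0 0 0 0 -1 -1]
```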

7.2.1. Clustering control and treatment students based on inner fringes.

Like in the K-Means example, the students were clustered based on their fringe sets,

starting with inner fringes (i.e. what topics they have recently learned given the

knowledge level the student is in).

With respect to inner fringes, using the k-NN plot method, the ε for Pre-test and

Post-test Control students DBSCAN clusters was found to be 11. An ε of 11 means that,

for a group of Pre-test Control students to be considered as a cluster, the students who

are neighbors to each other must be within 17% (i.e. 11/64 as 64 is the highest possible

fringe set number) away from each other in terms of the topics they have recently

learned. The same applies for the Post-test Control students.

As seen in Table 35 and Table 36 below, first the Control group Pre-test and

Post-test means, medians, and standard deviations of their respective clusters were

calculated.

As observed in Table 35 and Table 36, first, as in the case of K-Means clustering, the difference between the individual clusters, whether in the Pre-test or the Post-test, is the topic(s) that have been recently learned by the students. For example, after the Pre-test, the Control students in C1 who are at knowledge states ('H'), ('J'), and ('D') have inner fringes d, e, and g, and the Control students in the Noise cluster C0 who are at knowledge states ('B') and ('F') have inner fringes b and c.


Table 35: Pre-test Control Students DBSCAN Clusters Based on Inner Fringes at ε = 11

Cluster | Knowledge States (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C0 (Noise) | 0 1 0 0 0 1 0 0 0 0 | 0.5068 | 0.5068 | 0.1210 | 0.24 | 2
C1 | 0 0 0 2 0 0 0 24 0 26 | 0.7414 | 0.7531 | 0.1492 | 0.20 | 52
All | 0 1 0 2 0 1 0 24 0 26 | 0.7327 | 0.7484 | 0.1539 | 0.21 | 54

Table 36: Post-test Control Students DBSCAN Clusters Based on Inner Fringes at ε = 11

Cluster | Knowledge States (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C'1 | 0 0 0 2 0 0 0 25 0 27 | 0.7673 | 0.7869 | 0.1841 | 0.24 | 54
All | 0 0 0 2 0 0 0 25 0 27 | 0.7673 | 0.7869 | 0.1841 | 0.24 | 54


Therefore, in feedback form, with respect to DBSCAN clustering, the teacher becomes informed that the students in C1 are the ones who have recently learned the topics of 'Read numbers up to 999' (i.e. d), 'Count backward ten step down from any given number' (i.e. e), and 'Count and write in 10s (e.g. 10, 20, 30, etc.)' (i.e. g), whereas the students who were considered as Noise in C0 are the ones who have recently learned the topics of 'Identify simple Place Value' (i.e. b) and 'Identify the place value of a specific digit in a 3-digit number' (i.e. c). The teacher can use this information to identify the reasons why the majority of students were able to attain the more advanced topics concerning reading and counting numbers, whereas the few distinct students who were considered as outliers by the DBSCAN algorithm were not proficient enough to move to the more advanced topics, and were only able to acquire the simpler topics concerning identifying place values.

In terms of statistical testing, first, at a significance level of 0.05, the assumption of normality was tested on the non-Noise DBSCAN clusters in the result using the Anderson-Darling test and the Shapiro-Wilk test. For the test on Pre-test Control Students Inner Fringes, cluster C1 (A = 0.2682, p-value = 0.67; W = 0.9775, p-value = 0.4267) is normally distributed. For the test on Post-test Control Students Inner Fringes, the only cluster in the result, which contains all of the students in the data set, C'1 (A = 0.9775, p-value = 0.0129; W = 0.9303, p-value = 0.003755), is not normally distributed.

Next, at a significance level of 0.05, the assumption of equal variance across the non-Noise clusters and the Noise cluster in a single result was tested using Levene's test. If Levene's test indicated equal variance across the clusters, a Kruskal-Wallis test was performed on the Medians of the clusters. For the test on Pre-test Control Students Inner Fringes, Levene's test indicated equal variances across clusters (F = 0.2964, p-value = 0.5885), whereas Median scores across clusters (Kruskal-Wallis: chi2 = 3.7008, df = 1, p-value = 0.05439) were not significantly different. This insignificance can be seen in the clusters' Kruskal-Wallis mean ranks and Mood's Median test results, shown in Table 37 and Table 38.

For the test on Post-test Control Students Inner Fringes, Levene's test was not applicable as the result had only one cluster level.


Table 37: Pre-test Control Students DBSCAN Inner Fringes Clusters Kruskal-Wallis Mean Ranks

Cluster | No. of Students | Median Score | Average Rank
C0 (Noise) | 2 | 0.5068 | 6.5
C1 | 52 | 0.7531 | 28.3
Overall | 54 | | 27.5

Table 38: Pre-test Control Students DBSCAN Inner Fringes Clusters Mood's Median Test

Cluster | No. of Students | Median Score | Individual 95.0% CIs
C0 (Noise) | 2 | 0.5068 |
C1 | 52 | 0.7531 |

Next, DBSCAN clustering and descriptive statistics were applied to the Treatment students, as seen in Table 39 and Table 40.

With respect to inner fringes, using the k-NN plot method, the ε for Pre-test and

Post-test Treatment students DBSCAN clusters were found to be 6 and 10 respectively.

An ε of 6 means that, for a group of Pre-test Treatment students to be considered as a

cluster, the students who are neighbors to each other must be within 9% (i.e. 6/64) away

from each other in terms of the topics they have recently learned. Furthermore, an ε of

10 means that, for a group of Post-test Treatment students to be considered as a cluster,

the students who are neighbors to each other must be within 15% (i.e. 10/64) away from

each other in terms of the topics they have recently learned.

As observed in Table 39 and Table 40 of the Treatment students, and as seen previously for the Control students, first, the difference between the individual clusters, whether in the Pre-test or the Post-test, is again the topic(s) that have been recently learned by the students. For example, after the Pre-test, the Treatment students in C1 who are at knowledge states ('H'), ('I'), ('D') and ('E') have inner fringes d, e, and f, and the Treatment students in the Noise cluster C0 who are at knowledge states ('A') and ('B') have inner fringes a and b.


Table 39: Pre-test Treatment Students DBSCAN Clusters Based on Inner Fringes at ε = 6

Cluster | Knowledge States (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C0 (Noise) | 1 3 0 0 0 0 1 0 0 0 | 0.4492 | 0.4159 | 0.0911 | 0.20 | 5
C1 | 0 0 0 2 3 0 0 59 79 0 | 0.737 | 0.7283 | 0.1496 | 0.20 | 143
All | 1 3 0 2 3 0 1 59 79 0 | 0.7273 | 0.7197 | 0.1568 | 0.22 | 148

Table 40: Post-test Treatment Students DBSCAN Clusters Based on Inner Fringes at ε = 10

Cluster | Knowledge States (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C'2 | 0 0 0 0 0 2 1 0 0 0 | 0.6923 | 0.7087 | 0.1606 | 0.23 | 3
C'1 | 0 0 0 0 2 0 0 45 98 0 | 0.8618 | 0.9286 | 0.1473 | 0.17 | 145
All | 0 0 0 0 2 2 1 45 98 0 | 0.8584 | 0.9261 | 0.1489 | 0.17 | 148


Therefore, in feedback form, after the Post-test, the teacher becomes informed that the students in C'1 are the ones who have recently learned the topics of 'Read numbers up to 999' (i.e. d), 'Count backward ten step down from any given number' (i.e. e), and 'Arrange numbers up to 999, written in mixed form in increasing or decreasing order' (i.e. f), whereas the students in C'2 are the ones who have recently learned the topics of 'Identify simple Place Value' (i.e. b), 'Identify the place value of a specific digit in a 3-digit number' (i.e. c), and 'Count backward ten step down from any given number' (i.e. e). The teacher can use this information to identify the reasons why the majority of Treatment students were able to attain the advanced topics concerning reading and counting numbers, whereas the few distinct students who were considered as outliers by the DBSCAN algorithm were not proficient enough and were only able to acquire the topics concerning identifying simple place values. Also, the teacher can use the inner fringe results to anticipate which topics the students can potentially have recently learned if the teacher uses the instructional procedure used with the Treatment students.

In terms of statistical testing, first, at a significance level of 0.05, for the test on Pre-test Treatment Students Inner Fringes, cluster C1 (A = 0.7447, p-value = 0.05131; W = 0.973, p-value = 0.006317) is barely normally distributed as indicated by the Anderson-Darling test, but not normally distributed as indicated by the Shapiro-Wilk test. For the test on Post-test Treatment Students Inner Fringes, the large cluster in the result, which contains most of the students in the data set, C'1 (A = 6.754, p-value < 0.05; W = 0.8413, p-value < 0.05), is not normally distributed.

Next, at a significance level of 0.05, for the test on Pre-test Treatment Students Inner Fringes, Levene's test indicated equal variances across clusters (F = 1.515, p-value = 0.2204), and Median scores across clusters (Kruskal-Wallis: chi2 = 12.2297, df = 1, p-value = 0.0004703) were significantly different. To emphasize this significant difference, the Kruskal-Wallis mean ranks and Mood's Median test results of the clusters are shown in Table 41 and Table 42.

For the test on Post-test Treatment Students Inner Fringes, Levene's test indicated equal variances across clusters (F = 0.0361, p-value = 0.8495), whereas Median scores across clusters (Kruskal-Wallis: chi2 = 3.407, df = 1, p-value = 0.06492) were not significantly different.


Table 41: Pre-test Treatment Students DBSCAN Inner Fringes Clusters Kruskal-Wallis Mean Ranks

Cluster | No. of Students | Median Score | Average Rank
C0 (Noise) | 5 | 0.4159 | 8.6
C1 | 143 | 0.7283 | 76.8
Overall | 148 | | 74.5

Table 42: Pre-test Treatment Students DBSCAN Inner Fringes Clusters Mood's Median Test

Cluster | No. of Students | Median Score | Individual 95.0% CIs
C0 (Noise) | 5 | 0.4159 |
C1 | 143 | 0.7283 |

This insignificance can be seen in the clusters' Kruskal-Wallis mean ranks and Mood's Median test results, which are as follows:

Table 43: Post-test Treatment Students DBSCAN Inner Fringes Clusters Kruskal-Wallis Mean Ranks

Cluster | No. of Students | Median Score | Average Rank
C'2 | 3 | 0.7087 | 29.3
C'1 | 145 | 0.9286 | 75.4
Overall | 148 | | 74.5

Table 44: Post-test Treatment Students DBSCAN Inner Fringes Clusters Mood's Median Test

Cluster | No. of Students | Median Score | Individual 95.0% CIs
C'2 | 3 | 0.7087 |
C'1 | 145 | 0.9286 |

7.2.2. Clustering control and treatment students based on outer fringes.

Next, the students were clustered based on their outer fringes (i.e. what topics they are

ready to learn next).

With respect to outer fringes, using the k-NN plot method, the ε for Pre-test and

Post-test Control students DBSCAN clusters was found to be 2. An ε of 2 means that,


for a group of Pre-test Control students to be considered as a cluster, the students who

are neighbors to each other must be within 3% (i.e. 2/64) away from each other in terms

of the topics they are ready to learn next. The same applies for the Post-test Control

students.

As seen in Table 45 and Table 46 below, first the Control group Pre-test and

Post-test means, medians, and standard deviations of their respective clusters were

calculated.

As observed in Table 45 and Table 46, first, as in the case of K-Means, the difference between the individual clusters, whether in the Pre-test or the Post-test, is the topic(s) that the students are ready to learn given their current knowledge state. For example, after the Pre-test as well as the Post-test, the Control students in the non-Noise clusters C1 and C'1, who are at knowledge states ('H'), ('J'), and/or ('F'), have outer fringes e and f, whereas the Control students in the Noise clusters C0 and C'0, who are at knowledge states ('B') and/or ('D'), have outer fringes c, d, and e. This DBSCAN result is similar to the result when K-Means was used with this data set. Therefore, the previous information about the clusters informs the teacher that, using the conventional teaching methods, the students in C1 and C'1 are ready to learn the topic of 'Count backward ten step down from any given number' (i.e. e) and the topic of 'Arrange numbers up to 999, written in mixed form in increasing or decreasing order' (i.e. f); otherwise, the other C1 and C'1 students who have already mastered all the topics in the NUMBERS unit can just attend the lesson to revise the topics of counting and arranging numbers.

On the other hand, the few Noise students in C0 and C'0, who are ready to learn the topic of 'Identify the place value of a specific digit in a 3-digit number' (i.e. c), the topic of 'Read numbers up to 999' (i.e. d), and/or the topic of 'Count backward ten step down from any given number' (i.e. e), should be given extra practice questions so they can catch up with the rest of the students in C1 and C'1.

In terms of statistical testing, first, at a significance level of 0.05, for the test on

Pre-test Control Students Outer Fringes, cluster C1 (A = 0.3121, p-value = 0.5392; W

= 0.9751, p-value = 0.356) is normally distributed. On the other hand, for the test on

Post-test Control Students Outer Fringes, cluster C’1 (A = 1.2201, p-value = 0.003185;

W = 0.9236, p-value = 0.002563) is not normally distributed.


Table 45: Pre-test Control Students DBSCAN Clusters Based on Outer Fringes at ε = 2

Cluster | Knowledge States (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C0 (Noise) | 0 1 0 2 0 0 0 0 0 0 | 0.5771 | 0.6476 | 0.1352 | 0.23 | 3
C1 | 0 0 0 0 0 1 0 24 0 26 | 0.7418 | 0.7562 | 0.1511 | 0.20 | 51
All | 0 1 0 2 0 1 0 24 0 26 | 0.7327 | 0.7484 | 0.1539 | 0.21 | 54

Table 46: Post-test Control Students DBSCAN Clusters Based on Outer Fringes at ε = 2

Cluster | Knowledge States (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C'0 (Noise) | 0 0 0 2 0 0 0 0 0 0 | 0.2868 | 0.2868 | 0.0352 | 0.12 | 2
C'1 | 0 0 0 0 0 0 0 25 0 27 | 0.7858 | 0.7893 | 0.1607 | 0.20 | 52
All | 0 0 0 2 0 0 0 25 0 27 | 0.7673 | 0.7869 | 0.1841 | 0.24 | 54


Next, at a significance level of 0.05, for the test on Pre-test Control Students Outer Fringes, Levene's test indicated equal variances across clusters (F = 0.1149, p-value = 0.736), and Median scores across clusters (Kruskal-Wallis: chi2 = 2.5759, df = 1, p-value = 0.1085) were not significantly different in the case of this DBSCAN result. This insignificance can be seen in the clusters' Kruskal-Wallis mean ranks and Mood's Median test results, which are as follows:

Table 47: Pre-test Control Students DBSCAN Outer Fringes Clusters Kruskal-Wallis Mean Ranks

Cluster | No. of Students | Median Score | Average Rank
C0 (Noise) | 3 | 0.6476 | 13.3
C1 | 51 | 0.7562 | 28.3
Overall | 54 | | 27.5

Table 48: Pre-test Control Students DBSCAN Outer Fringes Clusters Mood's Median Test

Cluster | No. of Students | Median Score | Individual 95.0% CIs
C0 (Noise) | 3 | 0.6476 |
C1 | 51 | 0.7562 |

For the test on Post-test Control Students Outer Fringes, Levene's test indicated equal variances across clusters (F = 3.348, p-value = 0.07302), whereas Median scores across clusters (Kruskal-Wallis: chi2 = 5.6762, df = 1, p-value = 0.0172) were significantly different. This significance can be seen in the clusters' Kruskal-Wallis mean ranks and Mood's Median test results, which are as follows:

Table 49: Post-test Control Students DBSCAN Outer Fringes Clusters Kruskal-Wallis Mean Ranks

Cluster | No. of Students | Median Score | Average Rank
C'0 (Noise) | 2 | 0.2868 | 1.5
C'1 | 52 | 0.7893 | 28.5
Overall | 54 | | 27.5


Table 50: Post-test Control Students DBSCAN Outer Fringes Clusters Mood's Median Test

Cluster | No. of Students | Median Score | Individual 95.0% CIs
C'0 (Noise) | 2 | 0.2868 |
C'1 | 52 | 0.7893 |

Next, DBSCAN clustering and descriptive statistics were applied to the Treatment students, as seen in Table 51 and Table 52 below.

With respect to outer fringes, using the k-NN plot method, the ε for the Pre-test and Post-test Treatment students DBSCAN clusters was found to be 1. An ε of 1 means that, for a group of Pre-test Treatment students to be considered a cluster, the students who are neighbors of each other must be within 1.5% (i.e. 1/64) of each other in terms of the topics they are ready to learn next. The same applies to the Post-test Treatment students.
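To make this step concrete, the following is a minimal Python sketch of the k-NN plot method and the subsequent DBSCAN run; the fringe matrix, variable names, and random placeholder data are illustrative assumptions rather than the study's actual data or code.

```python
# Minimal sketch of picking epsilon from a k-NN distance plot, then DBSCAN.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
fringe_matrix = rng.integers(0, 2, size=(148, 10)).astype(float)  # placeholder data
min_pts = 3  # MinPts value used throughout this chapter

# Distance from every point to its MinPts-th nearest neighbor, sorted:
# the "knee" of this curve suggests a candidate epsilon.
nn = NearestNeighbors(n_neighbors=min_pts).fit(fringe_matrix)
distances, _ = nn.kneighbors(fringe_matrix)
knee_curve = np.sort(distances[:, -1])

eps = 1.0  # value read off the knee (1 for Treatment, 2 for Control above)
labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(fringe_matrix)
# Points labelled -1 form the Noise cluster (C0/C'0 in the tables).
```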

As observed in Table 51 and Table 52, the same outer fringes concept applies to the Treatment group as to the Control group. For example, after the Pre-test as well as the Post-test, the Treatment students in C1 and C'1 who are at knowledge states ('H') and ('I') have outer fringes f and g, and the Treatment students in the Noise clusters C0 and C'0 who are at the other knowledge states have all the other outer fringes.

Therefore, the teacher is advised to teach the students in C1 and C'1 the more demanding topics of arranging numbers (i.e. f) as well as counting and writing in 10s (i.e. g), and to separate them from the minority of Noise students in C0 and C'0 who need to be taught next the simpler topic of identifying simple place values (i.e. b) and simple counting and reading numbers backwards (i.e. d and e). This will help the less knowledgeable students in the Noise cluster focus on attaining the primitive topics, rather than being taught the more complex topics straight away and struggling with the NUMBERS unit.

In terms of statistical testing, first, at a significance level of 0.05, for the test on Pre-test Treatment Students Outer Fringes, cluster C1 (A = 1.1272, p-value = 0.005779; W = 0.9653, p-value = 0.00139) is not normally distributed. For the test on Post-test Treatment Students Outer Fringes, cluster C'1 (A = 6.7364, p-value < 0.05; W = 0.8669, p-value < 0.05) is also not normally distributed.
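The normality checks quoted above (A for Anderson-Darling, W for Shapiro-Wilk) can be reproduced along the following lines; the score vector is a placeholder, and note that SciPy's anderson() reports critical values rather than a p-value, so the p-values quoted in the text would come from a different implementation.

```python
# Sketch of the normality tests reported in this section (placeholder data).
import numpy as np
from scipy import stats

scores = np.random.rand(143)  # stand-in for one cluster's score vector

W, p_w = stats.shapiro(scores)            # Shapiro-Wilk statistic W and p-value
ad = stats.anderson(scores, dist='norm')  # Anderson-Darling statistic A
# Reject normality at the 5% level if A exceeds the matching critical value.
reject_5pct = ad.statistic > ad.critical_values[2]
```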


Table 51: Pre-test Treatment Students DBSCAN Clusters Based on Outer Fringes at ε = 1

DBSCAN Cluster | Knowledge States (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C0 (Noise) | 1 3 0 2 3 0 1 0 0 0 | 0.4439 | 0.4739 | 0.1049 | 0.24 | 10
C1 | 0 0 0 0 0 0 0 59 79 0 | 0.7478 | 0.7405 | 0.1391 | 0.19 | 138
All | 1 3 0 2 3 0 1 59 79 0 | 0.7273 | 0.7197 | 0.1568 | 0.22 | 148

Table 52: Post-test Treatment Students DBSCAN Clusters Based on Outer Fringes at ε = 1

DBSCAN Cluster | Knowledge States (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C'0 (Noise) | 0 0 0 0 2 2 1 0 0 0 | 0.5279 | 0.5241 | 0.2539 | 0.48 | 5
C'1 | 0 0 0 0 0 0 0 45 98 0 | 0.87 | 0.9305 | 0.131 | 0.15 | 143
All | 0 0 0 0 2 2 1 45 98 0 | 0.8584 | 0.9261 | 0.1489 | 0.17 | 148


Next, at a significance level of 0.05, for the test on Pre-test Treatment Students Outer Fringes, Levene's test indicated equal variances across clusters (F = 2.1191, p-value = 0.1476), whereas Median scores across clusters (Kruskal-Wallis: chi2 = 26.0427, df = 1, p-value < 0.05) were significantly different. This significance can be seen in the clusters' Kruskal-Wallis average ranks and Mood's Median test results below:

Table 53: Pre-test Treatment Students DBSCAN Outer Fringes Clusters Kruskal-Wallis Mean Ranks

Cluster | No. of Students | Median Score | Average Rank
C0 (Noise) | 10 | 0.4739 | 7.7
C1 | 138 | 0.7405 | 79.3
Overall | 148 | | 74.5

Table 54: Pre-test Treatment Students DBSCAN Outer Fringes Clusters Mood's Median Test

Cluster | No. of Students | Median Score
C0 (Noise) | 10 | 0.4739
C1 | 138 | 0.7405
(Individual 95.0% CIs were displayed graphically.)

For the test on Post-test Treatment Students Outer Fringes, Levene's test indicated unequal variances across clusters (F = 7.8747, p-value = 0.005697); therefore, the Kruskal-Wallis test was not applicable.
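A small sketch of the variance and median tests used throughout this section is shown below; the two score vectors are illustrative placeholders.

```python
# Sketch of the homogeneity-of-variance and median comparisons.
import numpy as np
from scipy import stats

noise_scores = np.random.rand(5)    # e.g. a small Noise cluster (placeholder)
main_scores = np.random.rand(143)   # e.g. the dominant cluster (placeholder)

F, p_levene = stats.levene(noise_scores, main_scores)  # equal variances?
H, p_kw = stats.kruskal(noise_scores, main_scores)     # equal medians (chi2 approx.)
# As in the text, Kruskal-Wallis is only interpreted when Levene's test
# does not indicate unequal variances.
```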

Overall, from a DBSCAN clustering perspective, the algorithm seems to treat the minority of students who are at the lower knowledge levels as Noise/Outliers. Therefore, DBSCAN may guide the teacher on which students to focus on, offering a more efficient way of addressing the knowledge needs of the students in a class.

With regard to outer fringes, the clustering result for the Control data sets is exactly the same as when K-Means clustering was applied to them. For example, both clustered together the students with outer fringes e and f, which concern the topics of backwards counting and arranging numbers in ascending/descending order.


In terms of the statistical properties of the clusters in each DBSCAN result, the clusters comprising the larger numbers of students were mostly not normally distributed. Also, in most results, Levene's test demonstrated equal variance across the clusters, but with a significant difference in the Median scores across them as per the Kruskal-Wallis and Mood's Median tests. This is similar to the statistical properties of the clusters in the K-Means results.

All in all, the K-Means clustering results were more logical and made more sense in providing feedback to the teacher than the DBSCAN clustering results.

7.3. DBSCAN Results Evaluation

Next, the results from the DBSCAN clustering were evaluated using the indices

described in Chapter 4. Firstly, the Internal Indices CP, SP, DB, DVI, WSS, and BSS

were calculated using the methods from [35]. The latter indices for the DBSCAN

clusters are shown below in Table 55.

Looking at the resulting indices, firstly, in terms of compactness, the DBSCAN outer fringes clusters are always more compact than the inner fringes clusters. Furthermore, as with K-Means, the Treatment clusters are always more compact than the Control clusters, with compactness reaching 0.5501 after the Pre-test and 0.4505 after the Post-test. However, the overall compactness of the K-Means clusters is better than that of the DBSCAN clusters, as their CP values are lower.

Next, the separation of the DBSCAN inner fringes clustering results is better than that of the outer fringes results, with the Treatment inner fringes clusters being the most separated after the Pre-test as well as after the Post-test; the separations are 36.0643 and 44.1471 respectively. The overall separation indices of the DBSCAN clusters are higher than those of the K-Means clusters, even though the K-Means separation indices seem to be more consistent.

The DB of the outer fringes clusters is overall lower than that of the inner fringes clusters, with the Control outer fringes clusters' DB being 0.4793 after the Pre-test and 0.0677 after the Post-test. However, the overall DB of the K-Means clusters is better than that of the DBSCAN clusters, as their DB values are lower.

Also, the DVI of the DBSCAN Control clusters is higher than that of the Treatment clusters, with the Control outer fringes clusters' DVI being 1.0 after the Pre-test and 7.0 after the Post-test.


Table 55: DBSCAN Results Evaluation

Pre-test Clusters | ↓CP | ↑SP | ↓DB | ↑DVI | ↓WSS | ↑BSS | ↓WSS/BSS
Control, Inner Fringes | 5.6036 | 33.6538 | 0.6417 | 1.25 | 1643.769 | 2181.268 | 0.7536
Control, Outer Fringes | 1.1142 | 18.9804 | 0.4793 | 1 | 154.9804 | 1020.723 | 0.1518
Treatment, Inner Fringes | 4.9521 | 36.0643 | 0.6028 | 0.625 | 4275.088 | 6283.479 | 0.6804
Treatment, Outer Fringes | 0.5501 | 25.3725 | 0.5134 | 0.15 | 1155.375 | 6002.645 | 0.1925

Post-test Clusters | ↓CP | ↑SP | ↓DB | ↑DVI | ↓WSS | ↑BSS | ↓WSS/BSS
Control, Inner Fringes | N/A | N/A | N/A | N/A | 1576.315 | 0 | -
Control, Outer Fringes | 1.0173 | 15.0385 | 0.0677 | 7 | 51.92308 | 435.5584 | 0.1192
Treatment, Inner Fringes | 4.3708 | 44.1471 | 0.1594 | 3.6 | 3110.639 | 5728.388 | 0.543
Treatment, Outer Fringes | 0.4505 | 14.6853 | 1.1736 | 0.0714 | 894.8392 | 1041.864 | 0.8589

↓ means the lower the value the better; ↑ means the greater the value the better.
N/A means not applicable, as the clustering result contains only one cluster.


However, the overall DVI of the K-Means clusters is better than that of the DBSCAN clusters, as the K-Means DVI values are higher for the equivalent data sets in most cases.

In addition, the WSS and BSS of every DBSCAN clustering result were calculated. In terms of WSS, as with K-Means, the Pre-test outer fringes clusters are more cohesive than the Pre-test inner fringes clusters, as the WSS for the outer fringes clusters is lower. The same holds for the Post-test data sets. The WSS of the K-Means clusters is considerably better, being overall considerably lower than that of the DBSCAN clusters.

In terms of BSS, the values suggest that, in most results, the inner fringes clusters are better separated than the outer fringes clusters. The SP indices further support that observation.
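For reference, a minimal sketch of how the WSS and BSS cohesion/separation measures can be computed is given below; it assumes a data matrix X of fringe vectors and an array of cluster labels, both placeholders.

```python
# WSS sums squared distances of points to their own cluster centroid; BSS sums
# the size-weighted squared distances of the centroids to the overall centroid.
import numpy as np

def wss_bss(X, labels):
    overall = X.mean(axis=0)
    wss = bss = 0.0
    for c in np.unique(labels):
        members = X[labels == c]
        centroid = members.mean(axis=0)
        wss += ((members - centroid) ** 2).sum()
        bss += len(members) * ((centroid - overall) ** 2).sum()
    return wss, bss
```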

Overall, with respect to the DBSCAN clustering results at a threshold of 33%, the Pre-test Control outer fringes clusters and the Post-test Control outer fringes DBSCAN clusters can be considered to have a "good" quality given their suitable Internal indices values compared to the other DBSCAN clustering results. This further reinforces the finding that, with respect to DBSCAN clustering, outer fringes are more useful than inner fringes in providing guidance to teachers for in-class instruction, if the judgment is based on CP and DB. However, unlike K-Means, the indices for the Control clusters are overall "better" than those of the Treatment clusters, so there remains a possibility that the conventional teaching methods might prove better than the technology-based methods.

Furthermore, despite the preceding Internal indices analysis for DBSCAN, the K-Means Internal indices were overall "better" than those of DBSCAN. This might be an indication that the feedback provided by K-Means clustering is of better quality and value to the teacher than the feedback provided by DBSCAN clustering.

Finally, the ICC was calculated for the eight DBSCAN cluster results. The ICCs for the DBSCAN results are as follows:


Table 56: DBSCAN Clusters Intra-class Correlation Coefficient

DBSCAN Clusters | ICC
Pre-test Control, Inner Fringes | 0.4961
Pre-test Control, Outer Fringes | 0.2969
Pre-test Treatment, Inner Fringes | 0.6401
Pre-test Treatment, Outer Fringes | 0.7056
Post-test Control, Inner Fringes | *N/A
Post-test Control, Outer Fringes | 0.8231
Post-test Treatment, Inner Fringes | 0.3294
Post-test Treatment, Outer Fringes | 0.7542

* N/A means not applicable, as the clustering result contains only one cluster.

In terms of ICC, as shown in Table 56, the ICC values of the DBSCAN fringes clustering results are greater than 60%, which is considered acceptable, half of the time; the exceptions are the Pre-test Control inner and outer fringes clusters and the Post-test Treatment inner fringes clusters, at 49.61%, 29.69%, and 32.94% respectively.

As with K-Means, the above-60% ICCs indicate that clustering the students based on fringes makes a difference as opposed to grouping them based on school or grades only. All in all, the ICCs of the K-Means clustering results are better than those of DBSCAN.
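As a rough illustration of the computation behind Table 56, the following one-way ICC sketch treats the clusters as groups; the exact ICC formulation used in the thesis is the one defined in Chapter 4, so this is an assumed, simplified form.

```python
# Rough sketch of a one-way ICC over cluster groups (assumed form).
import numpy as np

def icc_oneway(groups):
    """groups: list of 1-D score arrays, one per cluster."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    # Average group size, adjusted for unbalanced clusters
    n0 = (n - sum(len(g) ** 2 for g in groups) / n) / (k - 1)
    return (ms_between - ms_within) / (ms_between + (n0 - 1) * ms_within)
```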

As previously done with the K-Means clustering results, the External indices of the DBSCAN clustering results will be discussed in the DBSCAN Comparative Analysis section of this chapter, where the clustering results will be compared to pre-defined class labels to emphasize the significance of the proposed model.

7.4. DBSCAN Comparative Analysis

As in K-Means clustering, to emphasize the importance and "goodness" of the approach, a comparative analysis is done between the DBSCAN clustering results and each of two sets of pre-defined class labels. The measures and indices used are NMI, CA (Purity), Entropy, and ARI. The NMI, CA, and ARI measures were calculated using the techniques mentioned in [36], [37], and [38] respectively.
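A short sketch of the four External indices is given below; NMI and ARI come directly from scikit-learn (used here as an assumed stand-in for the methods of [36], [37], and [38]), while purity and entropy are derived from the contingency table. The label arrays are illustrative.

```python
# Purity/entropy from the contingency table; NMI and ARI from scikit-learn.
import numpy as np
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score
from sklearn.metrics.cluster import contingency_matrix

def purity_and_entropy(truth, pred):
    m = contingency_matrix(truth, pred)      # rows: classes, columns: clusters
    purity = m.max(axis=0).sum() / m.sum()
    p = m / m.sum(axis=0, keepdims=True)     # class mix inside each cluster
    logp = np.zeros_like(p)
    np.log2(p, out=logp, where=p > 0)        # avoid log2(0)
    ent_per_cluster = -(p * logp).sum(axis=0)
    weights = m.sum(axis=0) / m.sum()
    return purity, (weights * ent_per_cluster).sum()

truth = np.array([0, 0, 1, 1, 2, 2])         # pre-defined class labels (toy)
pred = np.array([0, 0, 1, 1, 1, 2])          # cluster labels (toy)
purity, entropy = purity_and_entropy(truth, pred)
nmi = normalized_mutual_info_score(truth, pred)
ari = adjusted_rand_score(truth, pred)
```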

7.4.1. DBSCAN clustering as explained by knowledge states. The first pre-defined class labels hypothesis is that DBSCAN clusters the students based on their knowledge states.


First, for each cluster resulting from applying the DBSCAN algorithm to the targeted data, the Purity, Entropy, NMI, and ARI were calculated. A detailed

comparative analysis was created for every run on every data sample: (Pre-test Control

Inner Fringes), (Pre-test Control Outer Fringes), (Pre-test Treatment Inner Fringes),

(Pre-test Treatment Outer Fringes), (Post-test Control Inner Fringes), (Post-test Control

Outer Fringes), (Post-test Treatment Inner Fringes), and (Post-test Treatment Outer

Fringes).

Overall, the comparative analysis results for the first pre-defined class labels

hypothesis are as follows:

Table 57: Is DBSCAN Clustering Based on Knowledge States

DBSCAN Clusters | ↑CA | ↓Entropy | ↑NMI | ↑ARI
Pre-test Control, Inner Fringes | 1 | 0 | N/A | 1
Pre-test Control, Outer Fringes | 1 | 0 | N/A | 1
Pre-test Treatment, Inner Fringes | 0.9594 | -0.2359 | 0.3577 | 0.6464
Pre-test Treatment, Outer Fringes | 0.9932 | -0.0317 | 0.8704 | 1
Post-test Control, Inner Fringes | 0.963 | -0.2284 | N/A | 1
Post-test Control, Outer Fringes | 1 | 0 | 1 | 1
Post-test Treatment, Inner Fringes | 1 | 0 | N/A | 1
Post-test Treatment, Outer Fringes | 1 | 0 | N/A | 1

↑ means the greater the value the more resemblance between fringes and KS clustering; ↓ means the lower the value the more resemblance.
N/A means not applicable, as the clustering result contains only one cluster.

With respect to DBSCAN clustering, looking at Table 57 and comparing against the knowledge state (KS) clustering results, it can be seen that the overall purity of each DBSCAN clustering result of the fringes, in both the Pre-test and the Post-test, is greater than 90%, sometimes reaching 100%, as in the case of the Pre-test Control Inner and Outer Fringes, Post-test Control Outer Fringes, and Post-test Treatment Inner and Outer Fringes. The same applies to the Entropy measure, which approaches 0 in most of these data sets.

Moreover, the overall NMI (where applicable) between the KS clusters and the Fringes clusters exceeds 50% half of the time, sometimes approaching 100%; this is especially true for the Post-test Control Outer Fringes.

Also, the overall ARI values between the KS clusters and the Fringes clusters mostly approach 100%; this holds for all data sets except the Pre-test Treatment Inner Fringes, where the ARI is 64.64%.


With respect to DBSCAN clustering, even though the External indices of the DBSCAN fringes clusters were close to 100% in more cases than for K-Means when compared against the KS clusters, we cannot safely deduce that the knowledge state alone is sufficient to give the teacher feedback about the students without using their fringes. In a few cases, the External indices show less than 100% resemblance between what the students are ready to learn next and the knowledge level they are at. Also, in some cases the resemblance was not clear due to the non-applicability of the comparative analysis, which is mostly attributable to the noise in the DBSCAN clustering.

Overall, the comparative analysis for the first hypothesis, that of clustering students based on knowledge states, is satisfactory. Therefore, with respect to DBSCAN clustering, even though in most cases there was a high resemblance between the fringes clustering outcomes and the KS clustering outcome, this still does not justify using only KS, rather than fringes, to get information about the learners' learning progress. Also, the comparative analysis results for this hypothesis are not entirely reliable, as the DBSCAN clustering results contain noise/outliers that might not make sense for this comparison. Hence, in this case too, the K-Means clustering results prove more reliable than the DBSCAN results.

7.4.2. DBSCAN clustering as explained by 25th percentile/quartile. The second pre-defined class labels hypothesis depends on grouping the students based on the 25th percentiles/quartiles of their overall NUMBERS unit scores.
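Concretely, the quartile labels can be produced along these lines; the score vector is a placeholder.

```python
# Sketch of the quartile-based grouping used as pre-defined labels: students
# are split at the 25th, 50th, and 75th percentiles of their unit scores.
import numpy as np

scores = np.random.rand(148)  # stand-in for the NUMBERS unit scores
cuts = np.percentile(scores, [25, 50, 75])
quartile_labels = np.digitize(scores, cuts)  # 0..3 correspond to Q1..Q4
```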

As was done for K-Means, a detailed comparative analysis was created for every

run on every data sample against the Quartiles. The data sample includes (Pre-test

Control Inner Fringes), (Pre-test Control Outer Fringes), (Pre-test Treatment Inner

Fringes), (Pre-test Treatment Outer Fringes), (Post-test Control Inner Fringes), (Post-

test Control Outer Fringes), (Post-test Treatment Inner Fringes), and (Post-test

Treatment Outer Fringes).

Overall, the comparative analysis results for the second pre-defined class labels

hypothesis are as follows:


Table 58: Is DBSCAN Clustering Based on Quartiles

DBSCAN Clusters | ↑CA | ↓Entropy | ↑NMI | ↑ARI
Pre-test Control, Inner Fringes | 0.2963 | -1.9239 | 0.1112 | 0.0434
Pre-test Control, Outer Fringes | 0.2963 | -1.9349 | 0.0816 | 0.0634
Pre-test Treatment, Inner Fringes | 0.2905 | -1.9294 | 0.1075 | 0.0416
Pre-test Treatment, Outer Fringes | 0.3244 | -1.8535 | 0.1732 | 0.0798
Post-test Control, Inner Fringes | 0.2593 | -1.999 | N/A | 0
Post-test Control, Outer Fringes | 0.2963 | -1.9239 | 0.1112 | 0.0434
Post-test Treatment, Inner Fringes | 0.2635 | -1.9777 | 0.0417 | 0.0001
Post-test Treatment, Outer Fringes | 0.277 | -1.9552 | 0.0684 | 0.0415

↑ means the greater the value the more resemblance between fringes clustering and Quartiles; ↓ means the lower the value the more resemblance.
N/A means not applicable, as the clustering result contains only one cluster.

With respect to DBSCAN clustering, looking at Table 58 and comparing against the Quartiles grouping results, it can be seen that the overall purity of each DBSCAN clustering result of the fringes, in both the Pre-test and the Post-test, is less than 30% in all of the cases. The same applies to the Entropy measure, whose absolute value exceeds 1 in all of these data sets. These values indicate that the knowledge levels of the learners and their corresponding unit scores are unrelated.

Moreover, all NMI and ARI values between the fringes clustering results and the quartile groups are less than 20%, which indicates a large discrepancy between grouping based on quartiles of scores and DBSCAN clustering based on fringes.

Figure 19 compares how closely the K-Means and DBSCAN clustering results resemble the Quartiles grouping in each case. While the inner fringes cases of K-Means resemble the Quartiles grouping more closely, the outer fringes cases of DBSCAN resemble it more closely than the K-Means outer fringes results, whose resemblance to the quartiles is very low. This reinforces the observation that K-Means clustering based on fringes differs from quartiles grouping more than DBSCAN clustering does. The N/A cases are those where the External indices were not applicable because the DBSCAN clustering result contained only one cluster.


Figure 19: Comparing K-Means and DBSCAN clustering to Quartile grouping.

Overall, as in the case of K-Means, the comparative analysis for the second hypothesis, which uses students' Medians to group them based on the 25th percentiles/quartiles of their overall NUMBERS unit scores, was satisfactory in affirming the significance of the approach proposed in the thesis. It indicates that the knowledge states of the learners and what they score in the course are not connected.

7.5. DBSCAN Overall Summary

In summary, the findings regarding clustering learners' fringes using the DBSCAN algorithm can be seen from two perspectives. The first perspective concerns the clustering method itself, DBSCAN, and the second concerns providing advice to the teacher and/or educational administrator.

With respect to DBSCAN clustering, using the dataset of Grade 2 learners' scores in the NUMBERS unit, and opposite to K-Means, DBSCAN clustering of outer fringes gives the teacher better feedback and guidance for in-class instruction than inner fringes clustering, and it is more logical if the Internal indices used for judgment are CP and DB. However, the Internal indices of the corresponding K-Means clusters were better than those of DBSCAN, and therefore the K-Means clusters had a better quality.

When validating the DBSCAN clusters using the External indices, the high resemblance between the fringes clustering outcomes and the KS clustering outcome is largely, but not entirely, expected, and it is not safe to rely completely on the comparative analysis


due to the noise factor in the DBSCAN clustering results. Moreover, as with K-Means, dividing the students into Quartiles using Median scores, as seen in the 'DBSCAN clustering as explained by 25th percentile/quartile' section, is not a good way to group students, as determined by the External indices values in Table 58. For a single DBSCAN clustering outcome, the Medians across the distinct clusters are different. Finally, the value of MinPts was varied in every case to observe the effect of changing it below and above 3. The MinPts values tested were 2, 5, 10, and 20. The clustering results were most of the time the same as those using MinPts = 3. Therefore, varying MinPts has almost no effect on the number of clusters formed by DBSCAN. The tables showing the results for the different MinPts values can be examined in Appendix D: DBSCAN Results Details.

With respect to providing advice to teachers and administrators, at a cluster level, DBSCAN may be good at identifying the distinct students (outliers) in a large dataset. These distinct students might be the weaker or stronger ones in a class or group of students in a selected sample. Hence, the administrator could tell the teacher which students the proposed model advises focusing on. For example, as shown in Table 52, after performing the Post-test on the Treatment students, 5 students out of the 148 were considered outliers with respect to the entire class performance. This is because, while the majority of the class has only topic g left (i.e. 'Count and write in 10s (e.g. 10, 20, 30, etc.)') to entirely complete their knowledge of the NUMBERS unit, these 5 students are lagging behind. Therefore, the model suggests that the teacher give extra tuition or more practice problems to help the 5 students acquire knowledge of the topics they are missing, namely 'Identify simple Place Value', 'Read numbers up to 999', and 'Count backward ten step down from any given number'. Hence, the 5 students would catch up with the rest of the class.

All in all, for the data sample used, the K-Means clustering results proved better than the DBSCAN clustering results, as they made more sense in terms of the number of clusters and the validation indices. In the next chapter, EM clustering will be applied to the same data on which K-Means and DBSCAN were tested.


Chapter 8: EM Clustering

8.1. EM Overview

EM stands for the Expectation-Maximization clustering algorithm. EM clustering is a type of model-based clustering based on a finite mixture of distributions, as a finite number of clusters is represented in the clustering result [41]. Each cluster is represented by one distribution, which has its own mean and standard deviation. Given a new student x to classify into one of the EM result clusters A and B, the probability of the student belonging to cluster A, as opposed to the other cluster B in the result, is as follows:

Pr(A|x) = Pr(x|A) * Pr(A) / Pr(x) = f(x, μ_A, σ_A) * p_A / p_x

with f(x, μ, σ) = (1 / (√(2π) * σ)) * e^(−(x − μ)² / (2σ²))   (19)

where:

Pr(A) is the probability of cluster A (p_A and p_x denote Pr(A) and Pr(x) respectively),
μ_A is the mean of the distribution of cluster A, and
σ_A is the standard deviation of the distribution of cluster A.

The new student x is placed into the cluster to which it has the highest probability of belonging compared to the other clusters. For example, if Pr(A|x) > Pr(B|x), then the student belongs to cluster A.
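A minimal numeric sketch of the assignment rule in Equation (19) follows; the means, standard deviations, and priors are made-up values.

```python
# Assigning a new score x to the cluster with the highest posterior (Eq. 19).
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

mu = np.array([0.60, 0.90])     # mu_A, mu_B (illustrative)
sigma = np.array([0.10, 0.05])  # sigma_A, sigma_B (illustrative)
prior = np.array([0.5, 0.5])    # p_A, p_B (illustrative)

x = 0.75                                     # a new student's score
joint = gaussian_pdf(x, mu, sigma) * prior   # f(x; mu_c, sigma_c) * p_c
posterior = joint / joint.sum()              # normalise by Pr(x)
cluster = 'A' if posterior[0] > posterior[1] else 'B'
```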

8.1.1. EM procedure and parameters. The EM clustering algorithm uses an iterative procedure which consists of two steps: (1) Expectation and (2) Maximization. In the Expectation step, the cluster probability of each data instance is calculated. Next, in the Maximization step, the distribution parameters, μ and σ, are estimated based on the cluster probabilities. Each cluster's probabilities are stored as instance weights, and based on these weighted instances, μ and σ for the cluster are estimated as follows:


Given an EM result which consists of two classifications, cluster A and cluster B, with instance weights w_1, w_2, ..., w_n for a given cluster:

μ_A = (w_1*x_1 + w_2*x_2 + ... + w_n*x_n) / (w_1 + w_2 + ... + w_n)   (20)

σ_A² = (w_1*(x_1 − μ_A)² + w_2*(x_2 − μ_A)² + ... + w_n*(x_n − μ_A)²) / (w_1 + w_2 + ... + w_n)   (21)

and likewise for μ_B and σ_B², using cluster B's instance weights.
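The two steps can be sketched as follows for the two-cluster, one-dimensional case; the starting parameters and data are illustrative placeholders, and Equations (20) and (21) appear in the M-step.

```python
# One Expectation/Maximization round: the E-step computes instance weights
# (cluster probabilities); the M-step re-estimates each cluster's weighted
# mean (Eq. 20) and weighted variance (Eq. 21).
import numpy as np

def em_step(x, mu, sigma, prior):
    # E-step: responsibility of each cluster for each score x_i
    dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * sigma ** 2)) \
           / (np.sqrt(2 * np.pi) * sigma)
    w = dens * prior
    w /= w.sum(axis=1, keepdims=True)
    # M-step: weighted mean and variance per cluster
    mu_new = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
    var_new = (w * (x[:, None] - mu_new) ** 2).sum(axis=0) / w.sum(axis=0)
    return mu_new, np.sqrt(var_new), w.mean(axis=0)

x = np.random.rand(148)  # placeholder 1-D scores
mu, sigma, prior = np.array([0.3, 0.8]), np.array([0.1, 0.1]), np.array([0.5, 0.5])
for _ in range(50):  # in practice, iterate until the log-likelihood saturates
    mu, sigma, prior = em_step(x, mu, sigma, prior)
```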

The iterative procedure converges when the log-likelihood saturates and reaches

its largest value. The log-likelihood increases with every iteration, and it is calculated

as follows:

Given an EM result which consists of two classifications, cluster A and cluster B:

log-likelihood = Σ_i log( p_A * Pr[x_i | A] + p_B * Pr[x_i | B] )   (22)

Furthermore, with every log-likelihood calculated, EM approximates the Bayesian Information Criterion (BIC) to determine the number of clusters in the EM classification results. Accordingly, the larger the value of the BIC, the more evidence there is supporting the resulting EM classification [41]. BIC is calculated as follows:

BIC = -2 * (maximum log-likelihood) + k * ln(n)   (23)

where:

k is the number of parameters to be estimated (in our case one parameter, the fringe), and
n is the number of data instances in a given data set.
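A sketch of BIC-driven selection of the number of clusters is shown below, using scikit-learn's GaussianMixture as an assumed stand-in for the tool used in the thesis; its 'tied' covariance option plays the role of the equal-variance univariate model (E), and note that scikit-learn defines BIC with the opposite sign convention (lower is better).

```python
# Choosing the number of EM clusters by BIC (placeholder 1-D data).
import numpy as np
from sklearn.mixture import GaussianMixture

x = np.random.rand(148, 1)  # stand-in for the 1-D fringe scores

bics = {}
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, covariance_type='tied',
                         random_state=0).fit(x)
    bics[k] = gm.bic(x)  # lower is better under sklearn's convention
n_clusters = min(bics, key=bics.get)
```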

In addition to the log-likelihood and BIC, since the dimensionality of the data set used is 1-D, the EM method in the thesis uses the univariate Gaussian mixture model of equal variance (E). Therefore, BIC, which is a goodness-of-fit measure, will try to fit


the univariate Gaussian to each of the clusters in the single result [41]. For the univariate Gaussian model, during initialization EM classifies the data using quantiles [42].

Trying to fit such a Gaussian mixture to the clusters might lead to what is known as "overfitting", which would yield poor predictive performance for newly introduced students and can exaggerate minor changes and fluctuations in the data [43]. Therefore, the EM clustering results might not be as reliable as the K-Means clustering results.

8.2. EM Results

EM is the third clustering technique to be tested in the approach. As in the previous K-Means and DBSCAN sections, EM clustering was applied to the inner fringes and outer fringes results obtained in Chapter 4.

Using EM clustering, the number of clusters for every data set/case was determined using the BIC calculated from the maximum log-likelihood of that data set.

8.2.1. Clustering control and treatment students based on inner fringes. As in the K-Means and DBSCAN examples, the students were clustered based on their fringe sets, starting with inner fringes (i.e. the topics they have recently learned given the knowledge state they are in).

As seen in Table 59 and Table 60 below, first the Control group Pre-test and Post-test means, medians, and standard deviations of the respective clusters were calculated.

As observed in Table 59 and Table 60, like in the previous clustering algorithms,

first, the difference between the individual clusters, whether it was the Pre-test or Post-

test, is the topic(s) that have been recently learned by the students. For example, after

Post-test, the Control students in C’2 who are at knowledge states (‘D’) and (‘H’) have

inner fringes d and e, and the Control students in C’1 who are at knowledge state (‘J’)

have inner fringes g. Therefore, in feedback form, the teacher becomes informed that

the students in C’2 are the ones who have recently learned the topics of ‘Read numbers

up to 999’ (i.e. d) and ‘Count backward ten step down from any given number’ (i.e. e),

whereas the students in C’1 are the ones who have recently learned the topic of ‘Count

and write in 10s (e.g. 10, 20, 30, etc.)’ (i.e. g) only.


Table 59: Pre-test Control Students EM Clusters Based on Inner Fringes

EM Cluster | Knowledge States (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C2 | 0 1 0 0 0 1 0 0 0 0 | 0.5068 | 0.5068 | 0.121 | 0.24 | 2
C1 | 0 0 0 2 0 0 0 24 0 26 | 0.7414 | 0.7531 | 0.1492 | 0.20 | 52
All | 0 1 0 2 0 1 0 24 0 26 | 0.7327 | 0.7484 | 0.1539 | 0.21 | 54

Table 60: Post-test Control Students EM Clusters Based on Inner Fringes

EM Cluster | Knowledge States (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C'2 | 0 0 0 2 0 0 0 25 0 0 | 0.6333 | 0.6119 | 0.1461 | 0.23 | 27
C'1 | 0 0 0 0 0 0 0 0 0 27 | 0.9014 | 0.969 | 0.1023 | 0.11 | 27
All | 0 0 0 2 0 0 0 25 0 27 | 0.7673 | 0.7869 | 0.1841 | 0.24 | 54


This behavior in clustering is similar to K-Means clustering as seen in Table 10

for the same set of students, Post-test Control students. Therefore, similarly to the

feedback from K-Means, the teacher can use this information to identify the reasons

why some students were able to attain the last topic of ‘Counting in 10s’ faster than

other students. One of the reasons might be that the students in C’1 already have

sufficient prior knowledge about the topics in state (‘I’) which contains the prerequisite

of knowing how to count numbers up to 999 backwards and forwards (i.e. d and e) and

arranging them in any order (i.e. f).

In terms of statistical testing, first, at a significance level of 0.05, for the test on Pre-test Control Students Inner Fringes, cluster C1 (A = 0.2682, p-value = 0.67; W = 0.9775, p-value = 0.4267) is normally distributed. For the test on Post-test Control Students Inner Fringes, the two large clusters C'1 (A = 1.9276, p-value < 0.05; W = 0.8219, p-value < 0.05) and C'2 (A = 0.809, p-value = 0.0316; W = 0.9161, p-value = 0.03181) are not normally distributed.

Next, at a significance level of 0.05, for the test on Pre-test Control Students Inner Fringes, Levene's test indicated equal variances across clusters (F = 0.2964, p-value = 0.5885), and Median scores across clusters (Kruskal-Wallis: chi2 = 3.7008, df = 1, p-value = 0.05439) were not significantly different. This lack of significance can be seen in the clusters' Kruskal-Wallis average ranks and Mood's Median test results below:

Table 61: Pre-test Control Students EM Inner Fringes Clusters Kruskal-Wallis Mean Ranks

Cluster | No. of Students | Median Score | Average Rank
C2 | 2 | 0.5068 | 6.5
C1 | 52 | 0.7531 | 28.3
Overall | 54 | | 27.5

Table 62: Pre-test Control Students EM Inner Fringes Clusters Mood's Median Test

Cluster | No. of Students | Median Score
C2 | 2 | 0.5068
C1 | 52 | 0.7531
(Individual 95.0% CIs were displayed graphically.)


For the test on Post-test Control Students Inner Fringes, Levene's test indicated equal variances across clusters (F = 0.6222, p-value = 0.4338), whereas Median scores across clusters (Kruskal-Wallis: chi2 = 29.0584, df = 1, p-value < 0.05) were significantly different. This significance can be seen in the clusters' Kruskal-Wallis average ranks and Mood's Median test results below:

Table 63: Post-test Control Students EM Inner Fringes Clusters Kruskal-Wallis Mean Ranks

Cluster | No. of Students | Median Score | Average Rank
C'2 | 27 | 0.6119 | 16.0
C'1 | 27 | 0.9690 | 39.0
Overall | 54 | | 27.5

Table 64: Post-test Control Students EM Inner Fringes Clusters Mood's Median Test

Cluster | No. of Students | Median Score
C'2 | 27 | 0.6119
C'1 | 27 | 0.9690
(Individual 95.0% CIs were displayed graphically.)

Next, EM clustering and descriptive statistics were applied to the Treatment students, as seen in Table 65 and Table 66 below.

As observed in Table 65 and Table 66 for the Treatment students, as seen previously for the Control students, the difference between the individual clusters, whether in the Pre-test or the Post-test, is again the topic(s) that have been recently learned by the students. For example, after the Pre-test, the Treatment students in C3 who are at knowledge states ('D'), ('E'), and ('H') have inner fringes d and e, and the Treatment students in C1 who are at knowledge state ('I') have inner fringe f. The difference between the K-Means clustering in Table 16 and the EM clustering in Table 65 for the same data set is that K-Means separated the students at knowledge states ('D') and ('E') from the students at knowledge state ('H'), even though they have a common inner fringe topic d.


Table 65: Pre-test Treatment Students EM Clusters Based on Inner Fringes

EM Cluster | Knowledge States (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C4 | 1 3 0 0 0 0 1 0 0 0 | 0.4492 | 0.4159 | 0.0911 | 0.20 | 5
C3 | 0 0 0 2 3 0 0 59 0 0 | 0.6267 | 0.6194 | 0.1126 | 0.18 | 64
C1 | 0 0 0 0 0 0 0 0 79 0 | 0.8264 | 0.8095 | 0.1116 | 0.14 | 79
C2 | 0 0 0 0 0 0 0 0 0 0 | 0 | 0 | 0 | 0 | 0
All | 1 3 0 2 3 0 1 59 79 0 | 0.7273 | 0.7197 | 0.1568 | 0.22 | 148

Table 66: Post-test Treatment Students EM Clusters Based on Inner Fringes

EM Cluster | Knowledge States (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C'2 | 0 0 0 0 0 2 1 0 0 0 | 0.6923 | 0.7087 | 0.1606 | 0.23 | 3
C'1 | 0 0 0 0 2 0 0 45 98 0 | 0.8618 | 0.9286 | 0.1473 | 0.17 | 145
All | 0 0 0 0 2 2 1 45 98 0 | 0.8584 | 0.9261 | 0.1489 | 0.17 | 148


Therefore, in feedback form, EM informs the teacher that the students in C3 are the ones who have recently learned the topics of 'Read numbers up to 999' (i.e. d) and 'Count backward ten step down from any given number' (i.e. e), whereas the students in C1 are the ones who have recently learned the topic of 'Arrange numbers up to 999, written in mixed form in increasing or decreasing order' (i.e. f). As in the K-Means results, the teacher can use this information to identify the reasons why some students were able to attain the last topic of 'Arrange numbers up to 999' faster than other students. Furthermore, after the Pre-test, the students at knowledge states ('D') and ('E') are in the same cluster C3 because they have the same inner fringe, d. Therefore, the teacher will know that the students in group C3 have recently learned the topic of 'Read numbers up to 999'. In addition, as opposed to K-Means, after the Post-test, EM clustered the students at knowledge state ('E') with the students at knowledge states ('H') and ('I'), perhaps because the common topic recently learned by some of these students is 'Read numbers up to 999' (i.e. d), even though the inner fringe sets differ in 'Count backward ten step down from any given number' (i.e. e) and 'Arrange numbers up to 999, written in mixed form in increasing or decreasing order' (i.e. f). This might inform the teacher that students who attain sufficient knowledge in reading numbers up to 999 have the potential to simultaneously arrange them in any mixed form and also count backwards ten steps down from any given number.

In terms of statistical testing, first, at a significance level of 0.05, for the test on

Pre-test Treatment Students Inner Fringes, the two large clusters C1 (A = 1.5407, p-

value < 0.05; W = 0.941, p-value < 0.05) and C3 (A = 0.9897, p-value = 0.01219; W =

0.9418, p-value = 0.00461) are not normally distributed. For the test on Post-test

Treatment Students Inner Fringes, the large cluster C’1 (A = 6.754, p-value < 0.05; W

= 0.8413, p-value < 0.05) is also not normally distributed.

Next, at a significance level of 0.05, for the test on Pre-test Treatment Students Inner Fringes, Levene's test indicated equal variances across clusters (F = 1.1184, p-value = 0.3296), whereas Median scores across clusters (Kruskal-Wallis: chi2 = 75.3734, df = 2, p-value < 0.05) were significantly different. This significance can be seen in the clusters' Kruskal-Wallis average ranks and Mood's Median test results below:


Table 67: Pre-test Treatment Students EM Inner Fringes Clusters Kruskal-Wallis Mean Ranks

Cluster | No. of Students | Median Score | Average Rank
C4 | 5 | 0.4159 | 8.6
C3 | 64 | 0.6194 | 45.2
C1 | 79 | 0.8095 | 102.4
Overall | 148 | | 74.5

Table 68: Pre-test Treatment Students EM Inner Fringes Clusters Mood's Median Test

Cluster | No. of Students | Median Score
C4 | 5 | 0.4159
C3 | 64 | 0.6194
C1 | 79 | 0.8095
(Individual 95.0% CIs were displayed graphically.)

For the test on Post-test Treatment Students Inner Fringes, Levene's test indicated equal variances across clusters (F = 0.0361, p-value = 0.8495), and Median scores across clusters (Kruskal-Wallis: chi2 = 3.407, df = 1, p-value = 0.06492) were not significantly different. This lack of significance can be seen in the clusters' Kruskal-Wallis average ranks and Mood's Median test results below:

Table 69: Post-test Treatment Students EM Inner Fringes Clusters Kruskal-Wallis Mean Ranks

Cluster | No. of Students | Median Score | Average Rank
C'2 | 3 | 0.7087 | 29.3
C'1 | 145 | 0.9286 | 75.4
Overall | 148 | | 74.5

Table 70: Post-test Treatment Students EM Inner Fringes Clusters Mood's Median Test

Cluster | No. of Students | Median Score
C'2 | 3 | 0.7087
C'1 | 145 | 0.9286
(Individual 95.0% CIs were displayed graphically.)


8.2.2. Clustering control and treatment students based on outer fringes.

Next, the students were clustered based on their outer fringes (i.e. what topics they are

ready to learn next).

As seen in Table 71 and Table 72 below, first the Control group Pre-test and

Post-test means, medians, and standard deviations of their respective clusters were

calculated.

As observed in Table 71 and Table 72, as in the corresponding K-Means and DBSCAN examples, the difference between the individual clusters, whether in the Pre-test or the Post-test, is the topic(s) that the students are ready to learn given their current knowledge state. For example, after the Pre-test, the Control students in C1 who are at knowledge states ('F'), ('H') and ('J') have outer fringes e and f, and the Control students in C3 who are at knowledge states ('B') and ('D') have outer fringes c, d, and e. This informs the teacher that, using the conventional teaching methods, the students in C1 are ready to learn the topic of 'Count backward ten step down from any given number' (i.e. e) and the topic of 'Arrange numbers up to 999, written in mixed form in increasing or decreasing order' (i.e. f); otherwise, the rest of the students in C1, who have nothing left to learn and have already mastered all the topics in the NUMBERS unit, can just attend the lesson to revise the topics of counting and arranging numbers. On the other hand, the students in C3 are ready to learn the topic of 'Identify the place value of a specific digit in a 3-digit number' (i.e. c), the topic of 'Read numbers up to 999' (i.e. d), and the topic of 'Count backward ten step down from any given number' (i.e. e). In terms of clustering behavior, for the Pre-test Control students, EM clustered the students the same way as K-Means, with the exception that EM suggests there are 3 clusters instead of the 2 suggested by K-Means. But, as seen in Table 71, C2 has no students; it might correspond to students at knowledge state ('G') with outer fringe d (i.e. students who only need to learn the topic of reading numbers up to 999 next). This deduction about C2 was made by looking at the distributions in Figure 21.

On the other hand, for the Post-test Control students, EM tried to fit all the students at knowledge states ('D'), ('H'), and ('J') into one cluster C'1, unlike K-Means, which put the ('D') students in one cluster and the ('H') and ('J') students in another. Therefore, in feedback form for teachers, EM suggests teaching the topics of identifying the place value in a given 3-digit number and arranging numbers in any order together, alongside the students who have already mastered the entire NUMBERS unit.


Table 71: Pre-test Control Students EM Clusters Based on Outer Fringes

EM Cluster | Knowledge States (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C3 | 0 1 0 2 0 0 0 0 0 0 | 0.5771 | 0.6476 | 0.1352 | 0.23 | 3
C1 | 0 0 0 0 0 1 0 24 0 26 | 0.7418 | 0.7562 | 0.1511 | 0.20 | 51
C2 | 0 0 0 0 0 0 0 0 0 0 | 0 | 0 | 0 | 0 | 0
All | 0 1 0 2 0 1 0 24 0 26 | 0.7327 | 0.7484 | 0.1539 | 0.21 | 54

Table 72: Post-test Control Students EM Clusters Based on Outer Fringes

EM Cluster | Knowledge States (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C'1 | 0 0 0 2 0 0 0 25 0 27 | 0.7673 | 0.7869 | 0.1841 | 0.24 | 54
All | 0 0 0 2 0 0 0 25 0 27 | 0.7673 | 0.7869 | 0.1841 | 0.24 | 54


In terms of statistical testing, first, at a significance level of 0.05, for the test on

Pre-test Control Students Outer Fringes, cluster C1 (A = 0.3121, p-value = 0.5392; W

= 0.9751, p-value = 0.356) is normally distributed. On the other hand, for the test on

Post-test Control Students Outer Fringes, cluster C’1 (A = 0.9775, p-value = 0.0129; W

= 0.9303, p-value = 0.003755) is not normally distributed.

Next, at a significance level of 0.05, for the test on Pre-test Control Students

Outer Fringes, Levene’s test indicated equal variances across clusters (F= 0.1149, p-

value = 0.736), and Median scores across clusters (Kruskal-Wallis: chi2= 2.5759, df =

1, p-value = 0.1085) were not significantly different in the case of this EM clustering

result as can be seen in the tests below.

Table 73: Pre-test Control Students EM Outer Fringes Clusters Kruskal-Wallis Mean Ranks

Cluster | No. of Students | Median Score | Average Rank
C3 | 3 | 0.6476 | 13.3
C1 | 51 | 0.7562 | 28.3
Overall | 54 | | 27.5

Table 74: Pre-test Control Students EM Outer Fringes Clusters Mood's Median Test

Cluster | No. of Students | Median Score
C3 | 3 | 0.6476
C1 | 51 | 0.7562
(Individual 95.0% CIs were displayed graphically.)

For the test on Post-test Control Students Outer Fringes, Levene's test and the Kruskal-Wallis test were not applicable, as the result contains only one cluster.

Next, EM clustering and descriptive statistics were applied to the Treatment students, as seen in Table 75 and Table 76 below.


Table 75: Pre-test Treatment Students EM Clusters Based on Outer Fringes

EM Cluster | Knowledge States (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C3 | 1 3 0 0 3 0 0 0 0 0 | 0.429 | 0.4571 | 0.1194 | 0.28 | 7
C1 | 0 0 0 2 0 0 1 59 79 0 | 0.7421 | 0.7393 | 0.1433 | 0.19 | 141
C2 | 0 0 0 0 0 0 0 0 0 0 | 0 | 0 | 0 | 0 | 0
All | 1 3 0 2 3 0 1 59 79 0 | 0.7273 | 0.7197 | 0.1568 | 0.22 | 148

Table 76: Post-test Treatment Students EM Clusters Based on Outer Fringes

EM Cluster | Knowledge States (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C'3 | 0 0 0 0 2 0 0 0 0 0 | 0.2813 | 0.2813 | 0.0594 | 0.21 | 2
C'2 | 0 0 0 0 0 2 1 0 0 0 | 0.6923 | 0.7087 | 0.1606 | 0.23 | 3
C'1 | 0 0 0 0 0 0 0 45 98 0 | 0.87 | 0.9305 | 0.131 | 0.15 | 143
All | 0 0 0 0 2 2 1 45 98 0 | 0.8584 | 0.9261 | 0.1489 | 0.17 | 148


As observed in Table 75 and Table 76, as with the Control group, the same outer fringes concept applies to the Treatment group. For example, after the Pre-test, unlike K-Means, EM grouped into C3 the Treatment students who have knowledge states ('D'), ('G'), ('H') and ('I') with outer fringes c, e, f and g, and into C1 the Treatment students who have knowledge states ('A'), ('B'), and ('E') with outer fringes b and c. Therefore, the teacher is advised to teach the students in C3 the more demanding topics of counting backwards (i.e. e), arranging numbers (i.e. f), and counting and writing in 10s (i.e. g), together with the simpler topic of identifying 3-digit place values (i.e. c), and to separate them from the students in C1 who need to be taught next the simpler topic of identifying simple place values (i.e. b) as well as the topic of identifying 3-digit place values (i.e. c). If the teacher combines the students in C3 who need to learn topic c, concerned with identifying 3-digit place values, with the students in C1, this will help those students focus on attaining the primitive topics, rather than being taught the more complex topics e, f, and g straight away and struggling with the NUMBERS unit. For the Post-test Treatment students, EM gave the same kind of clustering result, except that it put the students at knowledge states ('F') and ('G') in a different cluster from those at ('H') and ('I'), thus suggesting that the teacher teach the students who need the topics of reading numbers up to 999 (i.e. d) and counting backwards (i.e. e) separately from the more complex topics to be taught at knowledge states ('H') and ('I').

In terms of statistical testing, first, at a significance level of 0.05, for the test on

Pre-test Treatment Students Outer Fringes, cluster C1 (A = 0.9624, p-value = 0.0148;

W = 0.9717, p-value = 0.005023) is not normally distributed. For the test on Post-test

Treatment Students Outer Fringes, cluster C’1 (A = 6.7364, p-value < 0.05; W = 0.8669,

p-value < 0.05) is also not normally distributed.

Next, at a significance level of 0.05, for the test on Pre-test Treatment Students

Outer Fringes, Levene’s test indicated equal variances across clusters (F= 0.8633, p-

value= 0.3544), whereas Median scores across clusters (Kruskal-Wallis: chi2= 17.9108,

df = 5, p-value <0.05) were significantly different as seen in the tests below.


Table 77: Pre-test Treatment Students EM Outer Fringes Clusters Kruskal-Wallis Mean Ranks

Cluster | No. of Students | Median Score | Average Rank
C3 | 7 | 0.4571 | 7.6
C1 | 141 | 0.7393 | 77.8
Overall | 148 | | 74.5

Table 78: Pre-test Treatment Students EM Outer Fringes Clusters Mood's Median Test

Cluster | No. of Students | Median Score
C3 | 7 | 0.4571
C1 | 141 | 0.7393
(Individual 95.0% CIs were displayed graphically.)

For the test on Post-test Treatment Students Outer Fringes, Levene’s test

indicated equal variances across clusters (F= 1.2142, p-value = 0.2999), whereas

Median scores across clusters (Kruskal-Wallis: chi2= 9.4532, df = 2, p-value =

0.008857) were significantly different as seen in the test below.

Table 79: Post-test Treatment Students EM Outer Fringes Clusters Kruskal-Wallis Mean Ranks

Cluster | No. of Students | Median Score | Average Rank
C'3 | 2 | 0.2813 | 1.5
C'2 | 3 | 0.7087 | 29.3
C'1 | 143 | 0.9305 | 76.5
Overall | 148 | | 74.5

Table 80: Post-test Treatment Students EM Outer Fringes Clusters Mood's Median Test

Cluster | No. of Students | Median Score
C'3 | 2 | 0.2813
C'2 | 3 | 0.7087
C'1 | 143 | 0.9305
(Individual 95.0% CIs were displayed graphically.)


Overall, from an EM clustering perspective, as with K-Means, it is better to provide advice and guidance to teachers about their in-class instruction using the outer fringes rather than the inner fringes. Also, it is more useful to provide the feedback after the Pre-test assessment, since at that stage the students were tested before receiving any instruction from the teacher, so the outer fringes can guide the teacher on what to teach next and how to plan the lessons and sessions accordingly.

In terms of the statistical properties of the clusters in each EM result, the clusters comprising the larger numbers of students were most of the time not normally distributed. Also, in most results, Levene's test demonstrated equal variance across the clusters, but there was a significant difference in the Median scores across them as per the Kruskal-Wallis test.

The difference between EM and K-Means is that EM uses the BIC goodness-of-fit measure, which depends on quantile-based initialization, to fit the data to a univariate Gaussian mixture model, as seen in Figure 20 and Figure 21. For negative BIC, the closer the BIC value is to negative infinity, the better the goodness of fit. However, in all cases the BIC values were between -100 and -2000, which is far from negative infinity. Therefore, in all cases the goodness of fit was not good.

Figure 20: EM Inner Fringes Clustering Results Gaussian Distribution.


Figure 21: EM Outer Fringes Clustering Results Gaussian Distribution.

Furthermore, Figure 20 and Figure 21 confirm that EM's attempt to fit the data to a univariate Gaussian model is not sensible, as the data are not normally distributed. This corroborates the poor goodness of fit, and so the clustering results from EM are not really reliable due to "overfitting." Hence, the K-Means results are more reliable.

8.3. EM Results Evaluation

Next, the results from the EM clustering were evaluated using the indices

described in Chapter 4. Firstly, the Internal Indices CP, SP, DB, DVI, WSS, and BSS

were calculated using the methods from [35]. The latter indices for the EM clusters are

shown below in Table 81.

Looking at the resulting indices, firstly, in terms of compactness, the EM outer fringes clusters are most of the time more compact than the inner fringes clusters. Furthermore, the Treatment outer fringes clusters are always more compact than the Control clusters, with compactness being 0.9857 after the Pre-test and 0.435 after the Post-test. The K-Means compactness was similar to EM's in some cases, such as the Pre-test Control Outer Fringes (CP = 1.1142). However, the EM cluster result was more compact than the corresponding K-Means result in the case of the Post-test Treatment Outer Fringes (EM CP = 0.435 < K-Means CP = 0.5802).


Table 81: EM Results Evaluation

Pre-test Clusters | ↓CP | ↑SP | ↓DB | ↑DVI | ↓WSS | ↑BSS | ↓WSS/BSS
Control, Inner Fringes | 5.6036 | 33.6538 | 0.6417 | 1.25 | 1643.769 | 2181.268 | 0.7536
Control, Outer Fringes | 1.1142 | 18.9804 | 0.4793 | 1 | 154.9804 | 1020.723 | 0.1518
Treatment, Inner Fringes | 0.264 | 12.9555 | 0.2152 | 0.1875 | 956.95 | 9601.618 | 0.0997
Treatment, Outer Fringes | 0.9857 | 30.8906 | 0.2535 | 0.6 | 794.3526 | 6363.668 | 0.1248

Post-test Clusters | ↓CP | ↑SP | ↓DB | ↑DVI | ↓WSS | ↑BSS | ↓WSS/BSS
Control, Inner Fringes | 0.2849 | 10.7037 | 0.0532 | 1.75 | 29.62963 | 1546.685 | 0.0192
Control, Outer Fringes | N/A | N/A | N/A | N/A | 487.4815 | 0 | -
Treatment, Inner Fringes | 4.3708 | 44.1471 | 0.1594 | 3.6 | 3110.639 | 5728.388 | 0.543
Treatment, Outer Fringes | 0.435 | 14.785 | 0.0505 | 0.5 | 41.50583 | 1895.197 | 0.0219

↓ means the lower the value the better; ↑ means the greater the value the better.
N/A means not applicable, as the clustering result contains only one cluster.


Next, the separation of the EM inner fringes clustering results is most of the time better than that of the outer fringes results, with the Treatment inner fringes clusters being the most separated after the Post-test (SP = 44.1471). However, the Pre-test Treatment outer fringes cluster result (SP = 30.8906) has a higher SP than the corresponding inner fringes result (SP = 12.9555). The separation of the EM clustering results is overall higher than that of the K-Means clustering results.

The DB of the Treatment clusters is overall lower than that of the Control clusters, with the Treatment outer fringes clusters' DB being 0.2535 after the Pre-test and 0.0505 after the Post-test. The DB of the K-Means clustering results is overall lower than that of the EM clustering results.

Also, the DVI of EM is sometimes higher for inner fringes than for outer fringes and vice versa, with the DVI being highest for inner fringes in the Post-test Treatment case, at 3.6, and highest for outer fringes in the Pre-test Control case, at 1.0. The DVI of the K-Means clustering results is overall higher than that of the EM clustering results.

In addition, the WSS and BSS of every EM clustering result were calculated. In terms of WSS, the Pre-test outer fringes clusters are more cohesive than the Pre-test inner fringes clusters, as the WSS for the outer fringes clusters is lower. The case is similar for the Post-test Treatment clustering results, but not for the Control clustering results.

In terms of BSS, the BSS values suggest that in all results the inner fringes

clusters are better separated than outer fringes clusters. This is also reflected in the

separation index (SP) results above.

Overall, with respect to the EM clustering results at a threshold of 33%, the Pre-test Treatment outer fringes clusters and the Post-test Treatment outer fringes EM clusters can be considered to have a "good" quality given their suitable Internal indices values compared to the other EM clustering results. As with K-Means, this further reinforces the finding that, with respect to EM clustering, outer fringes are more useful than inner fringes for providing guidance to teachers for in-class instruction. It is also noted that the indices for the Treatment clusters are "better" than those of the Control clusters, so there is still a potential that teaching methods using technology might prove better than the conventional methods.


However, the "goodness" of the Internal indices for the EM clustering results is undermined and negated by the "over-fitted", unreliable EM clustering results mentioned previously, owing to the poor goodness of fit.

Moreover, the ICC was calculated for the eight EM cluster results. The ICCs for the EM results are as follows:

Table 82: EM Clusters Intra-class Correlation Coefficient

EM Clusters | ICC
Pre-test Control, Inner Fringes | 0.4961
Pre-test Control, Outer Fringes | 0.2969
Pre-test Treatment, Inner Fringes | 0.648
Pre-test Treatment, Outer Fringes | 0.7009
Post-test Control, Inner Fringes | 0.6898
Post-test Control, Outer Fringes | N/A
Post-test Treatment, Inner Fringes | 0.3294
Post-test Treatment, Outer Fringes | 0.8143

N/A means not applicable, as the clustering result contains only one cluster.

In terms of ICC, as shown in Table 82, the ICC values of the EM fringes clustering results are greater than 60%, which is considered acceptable, half of the time; the exceptions are the Pre-test Control inner and outer fringes clusters and the Post-test Treatment inner fringes clusters, at 49.61%, 29.69%, and 32.94% respectively. As with K-Means, the above-60% ICCs indicate that clustering the students based on fringes makes a difference as opposed to grouping them based on school or grades only. All in all, the ICCs of the K-Means clustering results are better than those of EM as well.

The External indices of the EM clustering results will be discussed in the EM Comparative Analysis section of this chapter, where the clustering results will be compared to pre-defined class labels to emphasize the significance of the proposed model.

8.4. EM Comparative Analysis

As in K-Means and DBSCAN clustering, to emphasize the importance and "goodness" of the approach, a comparative analysis is done between the EM clustering results and each of two sets of pre-defined class labels. The measures and indices used are NMI, CA (Purity), Entropy, and ARI. The NMI, CA, and ARI measures were calculated using the techniques mentioned in [36], [37], and [38] respectively.


8.4.1. EM clustering as explained by knowledge states. The first pre-defined class labels hypothesis is that EM clusters the students based on their knowledge states.

First, for each cluster resulting from applying the EM algorithm to the targeted data, the Purity, Entropy, NMI, and ARI were calculated. A detailed comparative analysis was

created for every run on every data sample: (Pre-test Control Inner Fringes), (Pre-test

Control Outer Fringes), (Pre-test Treatment Inner Fringes), (Pre-test Treatment Outer

Fringes), (Post-test Control Inner Fringes), (Post-test Control Outer Fringes), (Post-test

Treatment Inner Fringes), and (Post-test Treatment Outer Fringes).

Overall, the comparative analysis results for the first pre-defined class labels

hypothesis are as follows:

Table 83: Is EM Clustering Based on Knowledge States

Test      Group      Fringes         *↑CA     +↓Entropy   ↑NMI     ↑ARI
Pretest   Control    Inner Fringes   0.5      -1.1885     0.3177   0.1033
Pretest   Control    Outer Fringes   0.537    -1.0563     0.476    0.1608
Pretest   Treatment  Inner Fringes   0.9595   -0.1954     0.8748   0.9454
Pretest   Treatment  Outer Fringes   0.5811   -1.0289     0.3885   0.1458
Posttest  Control    Inner Fringes   0.963    -0.1905     0.0796   0.0028
Posttest  Control    Outer Fringes   0.963    -0.2284     N/A      0
Posttest  Treatment  Inner Fringes   0.9797   -0.1216     0.3609   0.5492
Posttest  Treatment  Outer Fringes   0.9932   -0.0186     0.7653   0.8817

*↑ means the greater the value, the more resemblance between fringes and KS clustering.
+↓ means the less the value, the more resemblance between fringes and KS clustering.
N/A means not applicable as the clustering result contains only one cluster.

With respect to EM clustering, looking at Table 83 and comparing against the knowledge states (KS) clustering results, it can be seen that the overall purity of each EM fringes clustering result in both Pre-test and Post-test is greater than 50%, but never approaches 100% as in some corresponding K-Means and DBSCAN cases. The same applies to the Entropy measure, which is sometimes higher than 1 in absolute value, as for the Pre-test Control Outer Fringes and Treatment Outer Fringes data sets (1.0563 and 1.0289 respectively).
Moreover, the NMI between the KS clusters and the fringes clusters is overall less than 70%, with the highest outer fringe case being 76.53%.
Also, the ARI values between the KS clusters and the fringes clusters are less than 50% in over half of the cases, the highest outer fringes ARI being 88.17% for the Post-test Treatment students.


With respect to EM clustering, in most cases the External Indices show low resemblance between what the students are ready to learn next and the knowledge level they are at. For example, the External Indices between the Pre-test Control Outer Fringes and KS are CA=0.5370, Entropy=1.0563, NMI=0.476, and ARI=0.1608. Another example is the External indices between the Pre-test Treatment Outer Fringes and KS: CA=0.5811, Entropy=1.0289, NMI=0.3885, and ARI=0.1458.
Overall, the comparative analysis for the first hypothesis, clustering students based on knowledge states, is satisfactory. Therefore, with respect to EM clustering, using only KS rather than fringes to obtain information about the learners' learning progress is not sufficient, as shown by the many cases in Table 83 where the resemblance between the fringes and KS clustering outcomes was low.

8.4.2. EM clustering as explained by 25th percentile/quartile. The second pre-defined class-label hypothesis depends on grouping the students based on the 25th percentiles/quartiles of their overall NUMBERS unit scores.
First, the learners were distributed into four different groups using the 25th percentiles of the learners' average score in the NUMBERS unit in the Illustrative Example. For every data set, the quartiles were extracted, and the Mean, Median, SD, and CV were calculated; the tables can be found in Appendix B: Quartiles Details. A detailed comparative analysis was created for every run on every data sample: (Pre-test Control Inner Fringes), (Pre-test Control Outer Fringes), (Pre-test Treatment Inner Fringes), (Pre-test Treatment Outer Fringes), (Post-test Control Inner Fringes), (Post-test Control Outer Fringes), (Post-test Treatment Inner Fringes), and (Post-test Treatment Outer Fringes). A minimal sketch of the quartile grouping step is given below.
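As a concrete illustration, the following minimal sketch (with hypothetical average scores, not the study data) reproduces the quartile grouping step: students are split into four groups at the 25th, 50th, and 75th percentiles of their average unit score.

    import numpy as np

    scores = np.array([35.0, 48, 52, 61, 66, 70, 74, 81, 88, 93])  # hypothetical average scores
    q1, q2, q3 = np.percentile(scores, [25, 50, 75])
    groups = np.digitize(scores, bins=[q1, q2, q3])                # quartile group 0..3 per student
    for s, g in zip(scores, groups):
        print(f"score {s:5.1f} -> quartile group Q{g + 1}")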

Overall, the comparative analysis results for the second pre-defined class labels hypothesis are shown in Table 84.
With respect to EM clustering, looking at Table 84 and comparing against the Quartiles grouping results, like the K-Means and DBSCAN clustering results, it can be seen that the overall purity of the EM fringes clustering results in both Pre-test and Post-test is less than 50% in the majority of the cases. The same applies to the Entropy measure, which exceeds 1 in absolute value in all of these data sets. These values indicate the lack of a relationship between the knowledge level of the learners and their corresponding unit scores.

Table 84: Is EM Clustering Based on Quartiles

Test      Group      Fringes         *↑CA     +↓Entropy   ↑NMI     ↑ARI
Pretest   Control    Inner Fringes   0.2963   -1.9239     0.1112   0.0011
Pretest   Control    Outer Fringes   0.2963   -1.9349     0.0816   0.0022
Pretest   Treatment  Inner Fringes   0.473    -1.5499     0.2939   0.2045
Pretest   Treatment  Outer Fringes   0.3041   -1.9        0.1347   0.0083
Posttest  Control    Inner Fringes   0.5185   -1.4784     0.3682   0.249
Posttest  Control    Outer Fringes   0.2593   -1.999      N/A      0
Posttest  Treatment  Inner Fringes   0.2635   -1.9777     0.0417   0.0001
Posttest  Treatment  Outer Fringes   0.277    -1.9494     0.0719   0.0012

*↑ means the greater the value, the more resemblance between fringes clustering and Quartiles.
+↓ means the less the value, the more resemblance between fringes clustering and Quartiles.
N/A means not applicable as the clustering result contains only one cluster.

Moreover, all NMI and ARI values between the fringes clustering results and the quartile groups are less than 40%, indicating a large discrepancy between grouping based on score quartiles and EM clustering based on fringes.

Figure 22 compares the resemblance of the K-Means and EM clustering results to the Quartiles grouping results, showing which is closer in each case. Even though in all cases the K-Means clustering results resemble the quartiles more than (or occasionally equally to) the EM clustering results, the reliability of the K-Means clustering results outweighs that of EM, as EM's Gaussian "overfitting" to the clusters does not make sense. The N/A cases are those where the External indices were not applicable because the EM clustering result contained only one cluster.

Figure 22: Comparing K-Means and EM clustering to Quartile grouping.


Overall, as in the case of K-Means, the comparative analysis for the second hypothesis, grouping students based on the 25th percentiles/quartiles of their overall NUMBERS unit scores using the Median, was satisfactory in affirming the significance of the approach proposed in this thesis. It indicates that the knowledge states of learners and their course scores are not connected.

8.5. EM Overall Summary

In summary, the findings regarding clustering learners' fringes using the EM algorithm fall into two perspectives. The first concerns the clustering method itself, EM, and the second concerns providing advice to the teacher and/or educational administrator.

With respect to EM clustering, as with K-Means, judging whether inner or outer fringes are better requires deciding which of the clusters' Internal indices to base the decision on. Unlike K-Means and DBSCAN, when validating the EM clusters using the External indices, the medium resemblance between the fringes clustering outcomes and the KS clustering outcome indicated that knowledge states alone are not enough to obtain information about the learners' learning progress. Finally, as with K-Means and DBSCAN, dividing students into quartiles using Median scores, as described in the "K-Means clustering as explained by 25th percentile/quartile" section, is not a good way to cluster students, as shown by the External Indices values in Table 84. For a single EM clustering outcome, the Medians across the distinct clusters differ; nevertheless, the clusters still do not group the students the same way the Quartiles method does.

With respect to providing advice to teachers and administrators, the feedback is similar to that given using the K-Means algorithm.

Finally, compared to K-Means, EM gave fewer clusters in the individual results, and it usually clustered the simpler topics together and the more complex topics together, as seen in Table 75. Even though, in terms of processing and efficiency, EM is robust to noisy data and missing data and converges faster than DBSCAN and K-Means according to [44], its results are not as useful as the K-Means clustering results, because the goodness-of-fit measure is poor when clustering fringes, as seen earlier in Figure 20 and Figure 21.


All in all, for the data sample used, among the three clustering techniques K-Means seems to give the most sensible and logical clustering of the fringes, with inner fringes being more efficient in providing the teacher with feedback for personalized class lessons if the judgment is based on CP and DB, and outer fringes more efficient if the decision is based on SP and DVI.


Chapter 9: Overall Results Analysis

In this chapter, an overall technical analysis of the results is performed through pairwise comparisons between the K-Means, EM, and DBSCAN inner and outer fringes clustering results from the example applied to the model development data of 54 Control students and 148 Treatment students. Pairwise comparisons are also made against the clustering of students' knowledge states and against grouping students using the median of the overall collected scores (25th percentiles/quartiles). The measures used to compare the different techniques are the External indices: CA, Entropy, NMI, and ARI. The data sample used for the comparison is the same one used in the Illustrative Examples of the previous chapters.

9.1. Pairwise Comparison Using Clustering based on Knowledge States

First, the knowledge states clustering will be compared with the quartiles grouping and with the K-Means, DBSCAN, and EM fringes clustering results. The pairwise comparisons using knowledge states clustering are as follows:

Table 85: Knowledge States Clustering and Quartiles Pairwise Comparison (KS Clusters <-> Quartiles)

Test      Group      *↑CA     +↓Entropy   ↑NMI     ↑ARI
Pretest   Control    0.5185   -1.5069     0.2979   0.199
Pretest   Treatment  0.473    -1.5203     0.3024   0.1957
Posttest  Control    0.2963   -1.9239     0.1112   0.0011
Posttest  Treatment  0.2702   -1.967      0.055    0.0008

*↑ means the greater the value, the more resemblance between KS clustering and Quartiles.
+↓ means the less the value, the more resemblance between KS clustering and Quartiles.

As seen in Table 85, knowledge states clustering gives very low resemblance to the quartiles grouping results; for example, the NMI values between the knowledge states results and the quartiles are all roughly 30% or below.

As seen in Table 86, knowledge states clustering gives a high resemblance to most of the K-Means outer fringes clustering results. For example, after the Pre-test, the knowledge states clustering has a purity of 97.3% and an NMI of 88.12% when compared to the Treatment students' Outer fringes K-Means clusters. Also, after the Post-test, the knowledge states clustering has a purity and an NMI of 100% when compared to the Control students' Outer fringes K-Means clusters.

Table 86: Knowledge States and K-Means Clustering Pairwise Comparison (KS <-> K-Means Clusters)

Test      Group      Fringes         *↑CA     +↓Entropy   ↑NMI     ↑ARI
Pretest   Control    Inner Fringes   0.963    -0.1632     0.8413   0.894
Pretest   Control    Outer Fringes   1        0           0.4969   1
Pretest   Treatment  Inner Fringes   0.5878   -0.998      0.5147   0.1892
Pretest   Treatment  Outer Fringes   0.973    -0.1125     0.8812   0.9376
Posttest  Control    Inner Fringes   0.537    -0.962      0.0796   0.0028
Posttest  Control    Outer Fringes   1        0           1        1
Posttest  Treatment  Inner Fringes   0.9797   -0.1293     0.9366   0.9735
Posttest  Treatment  Outer Fringes   0.9865   0.9865      0.9865   0.9865

*↑ means the greater the value, the more resemblance between KS K-Means clustering and fringes K-Means clustering.
+↓ means the less the value, the more resemblance between KS K-Means clustering and fringes K-Means clustering.

However, in some cases of inner fringes clustering results, the knowledge states give a low resemblance. For example, after the Pre-test, the knowledge states clustering has a purity of 58.78% and an NMI of 51.47% when compared to the Treatment students' Inner fringes K-Means clusters. This indicates that, with regard to K-Means, clustering based on fringes is not really the same as clustering based on the students' knowledge states.

Table 87: Knowledge States and DBSCAN Clustering Pairwise Comparison (KS <-> DBSCAN Clusters)

Test      Group      Fringes         *↑CA     +↓Entropy   ↑NMI     ↑ARI
Pretest   Control    Inner Fringes   0.963    -0.2284     N/A      1
Pretest   Control    Outer Fringes   0.9444   -0.3097     N/A      1
Pretest   Treatment  Inner Fringes   0.9662   -0.118      0.3577   0.6464
Pretest   Treatment  Outer Fringes   0.9932   -0.0578     0.8704   1
Posttest  Control    Inner Fringes   1        0           N/A      1
Posttest  Control    Outer Fringes   1        0           1        1
Posttest  Treatment  Inner Fringes   0.9797   -0.1431     N/A      0
Posttest  Treatment  Outer Fringes   0.9662   -0.2131     N/A      1

*↑ means the greater the value, the more resemblance between KS DBSCAN clustering and DBSCAN fringes clustering.
+↓ means the less the value, the more resemblance between KS DBSCAN clustering and DBSCAN fringes clustering.
N/A means not applicable as the clustering result contains only one cluster.

As seen in Table 87, knowledge states clustering gives a high resemblance to most of the DBSCAN inner and outer fringes clustering results. For example, after the Pre-test, the knowledge states clustering has a purity of 99.32% and an NMI of 87.04% when compared to the Treatment students' Outer fringes DBSCAN clusters. In some cases, like the Treatment Inner fringes clusters, even though purity is high, the NMI of the knowledge states clustering results is low when compared to the fringes DBSCAN clusters. This is largely attributed to the presence of the Noise cluster in the fringes DBSCAN clustering results. Therefore, as with K-Means, clustering based on fringes under DBSCAN is not the same as clustering based on the students' knowledge states.

Table 88: Knowledge States and EM Clustering Pairwise Comparison (KS <-> EM Clusters)

Test      Group      Fringes         *↑CA     +↓Entropy   ↑NMI     ↑ARI
Pretest   Control    Inner Fringes   0.9815   -0.051      0.3177   0.399
Pretest   Control    Outer Fringes   1        0           0.476    N/A
Pretest   Treatment  Inner Fringes   0.9662   -0.1099     0.8748   0.9454
Pretest   Treatment  Outer Fringes   0.9865   -0.0465     0.3885   0.9606
Posttest  Control    Inner Fringes   0.537    -0.962      0.0796   0.0028
Posttest  Control    Outer Fringes   1        0           N/A      1
Posttest  Treatment  Inner Fringes   0.9798   -0.0848     0.3609   0.5492
Posttest  Treatment  Outer Fringes   0.9798   -0.0848     0.7653   0.8779

*↑ means the greater the value, the more resemblance between KS EM clustering and fringes EM clustering.
+↓ means the less the value, the more resemblance between KS EM clustering and fringes EM clustering.
N/A means not applicable as the clustering result contains only one cluster.

As seen in Table 88, in terms of purity, knowledge states clustering gives a high resemblance to most of the EM fringes clustering results. For example, after both the Pre-test and the Post-test, the knowledge states clustering has a purity of 100% when compared to the Control students' Outer fringes EM clusters. Also, after the Post-test, the knowledge states clustering has a purity of 97.98% when compared to the Treatment students' Outer fringes EM clusters. However, unlike K-Means and DBSCAN, in terms of NMI the knowledge states give a low resemblance in most fringes clustering cases. For example, after the Pre-test, the knowledge states clustering has NMIs of 47.6% and 38.85% when compared to the Control and Treatment students' Outer fringes EM clusters respectively.

Compared with the pairwise comparisons between the knowledge states clusters and the K-Means or DBSCAN fringes clustering, the pairwise comparison with the EM clustering results gave the lowest NMI values in most cases. This indicates that, with regard to EM as well, clustering based on fringes is not the same as clustering based on the students' knowledge states.

9.2. Pairwise Comparison Using Students Grouping based on Quartiles

Next, grouping students based on quartiles will be compared with the knowledge states clustering and the K-Means, DBSCAN, and EM fringes clustering results. The pairwise comparisons using quartiles are as follows:

Table 89: Quartiles and Knowledge States Pairwise Comparison (Quartiles <-> KS Clusters)

Test      Group      *↑CA     +↓Entropy   ↑NMI     ↑ARI
Pretest   Control    0.7037   -0.8736     0.2979   0.199
Pretest   Treatment  0.7432   -0.7777     0.3024   0.1957
Posttest  Control    0.963    -0.1534     0.1112   0.0011
Posttest  Treatment  0.973    -0.1463     0.055    0.0008

*↑ means the greater the value, the more resemblance between Quartiles and KS clustering.
+↓ means the less the value, the more resemblance between Quartiles and KS clustering.

As seen in Table 89, in terms of purity, the quartiles give a relatively high resemblance to the knowledge states clustering results, with CA values between roughly 70% and 97%. However, in terms of NMI, the quartiles give a very low resemblance to the knowledge states clustering results, with NMI values sometimes under 30% and even under 10%.

The low NMI values are very important because they emphasize that clustering based on knowledge states and fringes is not the same as grouping the students based on the quartiles/25th percentiles of their overall subject score. The same can also be observed in the quartiles' pairwise comparisons with the K-Means, DBSCAN, and EM fringes clustering results in Table 90, Table 91, and Table 92 respectively.

Table 90: Quartiles and K-Means Clustering Pairwise Comparison (Quartile <-> K-Means Clusters)

Test      Group      Fringes         *↑CA     +↓Entropy   ↑NMI     ↑ARI
Pretest   Control    Inner Fringes   0.7408   -0.7165     0.3078   0.1973
Pretest   Control    Outer Fringes   0.9445   -0.2453     0.0816   0.0022
Pretest   Treatment  Inner Fringes   0.7365   -0.8881     0.2939   0.423
Pretest   Treatment  Outer Fringes   0.9324   -0.3572     0.1458   0.0484
Posttest  Control    Inner Fringes   0.7778   -0.4794     0.3682   0.249
Posttest  Control    Outer Fringes   0.963    -0.1534     0.1112   0.0011
Posttest  Treatment  Inner Fringes   0.8176   -0.5336     0.3573   0.2234
Posttest  Treatment  Outer Fringes   0.9865   -0.0759     0.0604   0.0004

*↑ means the greater the value, the more resemblance between Quartiles and fringes K-Means clustering.
+↓ means the less the value, the more resemblance between Quartiles and K-Means fringes clustering.

Table 91: Quartiles and DBSCAN Clustering Pairwise Comparison (Quartile <-> DBSCAN Clusters)

Test      Group      Fringes         *↑CA     +↓Entropy   ↑NMI     ↑ARI
Pretest   Control    Inner Fringes   0.963    -0.1534     0.1112   0.5934
Pretest   Control    Outer Fringes   0.9445   -0.2453     0.0816   0.6002
Pretest   Treatment  Inner Fringes   0.9662   -0.1428     0.1075   0.6062
Pretest   Treatment  Outer Fringes   0.9324   -0.2105     0.1732   0.6239
Posttest  Control    Inner Fringes   1        0           N/A      1
Posttest  Control    Outer Fringes   0.963    -0.1534     0.1112   0.5934
Posttest  Treatment  Inner Fringes   0.9797   -0.1207     0.0417   0.5739
Posttest  Treatment  Outer Fringes   0.9662   -0.1683     0.0684   0.6046

*↑ means the greater the value, the more resemblance between Quartiles and DBSCAN fringes clustering.
+↓ means the less the value, the more resemblance between Quartiles and DBSCAN fringes clustering.
N/A means not applicable as the clustering result contains only one cluster.

Table 92: Quartiles and EM Clustering Pairwise Comparison (Quartile <-> EM Clusters)

Test      Group      Fringes         *↑CA     +↓Entropy   ↑NMI     ↑ARI
Pretest   Control    Inner Fringes   0.963    -0.1534     0.1112   0.0011
Pretest   Control    Outer Fringes   0.9445   -0.2453     0.0816   0.0022
Pretest   Treatment  Inner Fringes   0.7702   -0.7218     0.2939   0.2045
Pretest   Treatment  Outer Fringes   0.9527   -0.175      0.1347   0.0083
Posttest  Control    Inner Fringes   0.7778   -0.4794     0.3682   0.249
Posttest  Control    Outer Fringes   1        0           N/A      0
Posttest  Treatment  Inner Fringes   0.9797   -0.1207     0.0417   0.0001
Posttest  Treatment  Outer Fringes   0.9662   -0.1954     0.0719   0.0012

*↑ means the greater the value, the more resemblance between Quartiles and fringes EM clustering.
+↓ means the less the value, the more resemblance between Quartiles and fringes EM clustering.
N/A means not applicable as the clustering result contains only one cluster.

9.3. Pairwise Comparison Using Fringes K-Means Clustering

Next, the K-Means fringes clustering results will be compared with the DBSCAN and EM fringes clustering results, as well as with the knowledge states clustering results and the quartiles grouping. The pairwise comparisons using the K-Means fringes clustering results are shown in Table 93.
The results analysis for Table 93 can be referred to in Chapter 5, under the section "K-Means clustering as explained by knowledge states".

Table 93: K-Means and Knowledge States Pairwise Comparison (K-Means <-> KS Clusters)

Test      Group      Fringes         *↑CA     +↓Entropy   ↑NMI     ↑ARI
Pretest   Control    Inner Fringes   0.9445   -0.2254     0.8413   0.8940
Pretest   Control    Outer Fringes   0.5370   -0.9442     0.4969   0.1712
Pretest   Treatment  Inner Fringes   0.9932   -0.0135     0.5147   0.8665
Pretest   Treatment  Outer Fringes   1        0           0.8812   1
Posttest  Control    Inner Fringes   0.9630   -0.1905     0.0796   0.0028
Posttest  Control    Outer Fringes   1        0           1        1
Posttest  Treatment  Inner Fringes   1        0           0.9366   1
Posttest  Treatment  Outer Fringes   0.6756   -0.9014     0.0694   0.0262

*↑ means the greater the value, the more resemblance between fringes K-Means and KS K-Means clustering.
+↓ means the less the value, the more resemblance between fringes K-Means and KS K-Means clustering.

Table 94: K-Means Clustering and Quartiles Pairwise Comparison (K-Means Clusters <-> Quartile)

Test      Group      Fringes         *↑CA     +↓Entropy   ↑NMI     ↑ARI
Pretest   Control    Inner Fringes   0.5      -1.524      0.3078   0.1973
Pretest   Control    Outer Fringes   0.2963   -1.9349     0.0816   0.0022
Pretest   Treatment  Inner Fringes   0.473    -1.5125     0.2939   0.4230
Pretest   Treatment  Outer Fringes   0.3244   -1.8535     0.1458   0.0484
Posttest  Control    Inner Fringes   0.5185   -1.4784     0.3682   0.2490
Posttest  Control    Outer Fringes   0.2963   -1.9239     0.1112   0.0011
Posttest  Treatment  Inner Fringes   0.4595   -1.4818     0.3573   0.2234
Posttest  Treatment  Outer Fringes   0.2635   -1.9726     0.0604   0.0004

*↑ means the greater the value, the more resemblance between fringes K-Means clustering and Quartiles.
+↓ means the less the value, the more resemblance between fringes K-Means clustering and Quartiles.

The results analysis for Table 94 can be referred to in Chapter 5, under the section "K-Means clustering as explained by 25th percentile/quartile".

As seen in Table 95, in terms of purity, K-Means clustering gives a very high resemblance to most of the DBSCAN fringes clustering results; the CA values are 100% most of the time.
In terms of NMI, on the other hand, K-Means clustering resembles the DBSCAN outer fringes clustering results more closely than the inner fringes ones. For example, after both the Pre-test and the Post-test, the K-Means clustering has an NMI of 100% when compared to the Control students' Outer fringes DBSCAN clusters, whereas against the Treatment and Control students' Inner fringes DBSCAN clusters the NMI value is mostly below 40%.

Table 95: K-Means and DBSCAN Clustering Pairwise Comparison (K-Means <-> DBSCAN Clusters)

Test      Group      Fringes         *↑CA     +↓Entropy   ↑NMI     ↑ARI
Pretest   Control    Inner Fringes   1        0           0.438    0.1237
Pretest   Control    Outer Fringes   1        0           1        1
Pretest   Treatment  Inner Fringes   1        0           0.3936   0.1068
Pretest   Treatment  Outer Fringes   1        0           0.8418   1
Posttest  Control    Inner Fringes   1        0           N/A      0
Posttest  Control    Outer Fringes   1        0           1        1
Posttest  Treatment  Inner Fringes   1        0           0.3686   0.092
Posttest  Treatment  Outer Fringes   0.9798   -0.1424     0.4751   1

*↑ means the greater the value, the more resemblance between fringes K-Means and DBSCAN clustering.
+↓ means the less the value, the more resemblance between fringes K-Means and DBSCAN clustering.
N/A means not applicable as the clustering result contains only one cluster.

Table 96: K-Means and EM Clustering Pairwise Comparison (K-Means <-> EM Clusters)

Test      Group      Fringes         *↑CA     +↓Entropy   ↑NMI     ↑ARI
Pretest   Control    Inner Fringes   1        0           0.438    0.1228
Pretest   Control    Outer Fringes   1        0           1        1
Pretest   Treatment  Inner Fringes   1        0           0.9229   0.9443
Pretest   Treatment  Outer Fringes   1        0           0.7388   0.7993
Posttest  Control    Inner Fringes   1        0           1        1
Posttest  Control    Outer Fringes   1        0           N/A      0
Posttest  Treatment  Inner Fringes   1        0           0.3686   0.092
Posttest  Treatment  Outer Fringes   0.9798   -0.1424     0.6482   0.5605

*↑ means the greater the value, the more resemblance between fringes K-Means and EM clustering.
+↓ means the less the value, the more resemblance between fringes K-Means and EM clustering.
N/A means not applicable as the clustering result contains only one cluster.

As seen in Table 96, in terms of purity, K-Means clustering gives a very high resemblance to most of the EM fringes clustering results; the CA values are 100% most of the time.
In terms of NMI, on the other hand, K-Means clustering resembles the EM outer fringes clustering results more closely than the inner fringes ones, similar to K-Means' resemblance to DBSCAN. For example, after the Pre-test, the K-Means clustering has an NMI of 100% when compared to the Control students' Outer fringes EM clusters, whereas when compared to the Control students' Inner fringes EM clusters the NMI value is 43.8%. In another example, after the Post-test, the K-Means clustering has an NMI of 64.82% when compared to the Treatment students' Outer fringes EM clusters, whereas when compared to the Treatment students' Inner fringes EM clusters the NMI value is 36.86%.

Therefore, K-Means highly resembles the DBSCAN and EM clustering results in most of the cases involving the students' outer fringes. This might indicate that outer fringes are more efficient in providing feedback to teachers and educational administrators for planning students' personalized lessons, and the reliability of using outer fringes to plan what to teach the students next is reinforced by the high resemblance between the results of the different clustering algorithms. A minimal sketch of such a cross-algorithm comparison is given below.
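As an illustration of this cross-algorithm check, the following minimal sketch (with hypothetical label vectors, not the study data) computes the pairwise NMI between the three algorithms' cluster assignments for the same students; a high NMI between two assignments means they group the students similarly.

    from itertools import combinations
    from sklearn.metrics import normalized_mutual_info_score

    # Hypothetical cluster assignments of the same students by the three algorithms.
    results = {
        "K-Means": [0, 0, 1, 1, 2, 2, 2],
        "DBSCAN":  [0, 0, 1, 1, 1, 2, 2],   # a DBSCAN noise point would get its own label, e.g. -1
        "EM":      [1, 1, 0, 0, 2, 2, 2],
    }
    for a, b in combinations(results, 2):
        print(f"NMI({a}, {b}) = {normalized_mutual_info_score(results[a], results[b]):.4f}")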

9.4. Pairwise Comparison Using Fringes DBSCAN Clustering

Next, the DBSCAN fringes clustering results will be compared with the K-Means and EM fringes clustering results, as well as with the knowledge states clustering results and the quartiles grouping. The pairwise comparisons using the DBSCAN fringes clustering results are as follows:

Table 97: DBSCAN and Knowledge States Pairwise Comparison (DBSCAN <-> KS Clusters)

Test      Group      Fringes         *↑CA     +↓Entropy   ↑NMI     ↑ARI
Pretest   Control    Inner Fringes   1        0           N/A      1
Pretest   Control    Outer Fringes   1        0           N/A      1
Pretest   Treatment  Inner Fringes   0.9594   -0.2359     0.3577   0.6464
Pretest   Treatment  Outer Fringes   0.9932   -0.0317     0.8704   1
Posttest  Control    Inner Fringes   0.963    -0.2284     N/A      1
Posttest  Control    Outer Fringes   1        0           1        1
Posttest  Treatment  Inner Fringes   1        0           N/A      1
Posttest  Treatment  Outer Fringes   1        0           N/A      1

*↑ means the greater the value, the more resemblance between fringes DBSCAN and KS DBSCAN clustering.
+↓ means the less the value, the more resemblance between fringes DBSCAN and KS DBSCAN clustering.
N/A means not applicable as the clustering result contains only one cluster.

The results analysis for Table 97 can be referred to in Chapter 6, under the section "DBSCAN clustering as explained by knowledge states".

Table 98: DBSCAN Clustering and Quartiles Pairwise Comparison (DBSCAN Clusters <-> Quartile)

Test      Group      Fringes         *↑CA     +↓Entropy   ↑NMI     ↑ARI
Pretest   Control    Inner Fringes   0.2963   -1.9239     0.1112   0.0434
Pretest   Control    Outer Fringes   0.2963   -1.9349     0.0816   0.0634
Pretest   Treatment  Inner Fringes   0.2905   -1.9294     0.1075   0.0416
Pretest   Treatment  Outer Fringes   0.3244   -1.8535     0.1732   0.0798
Posttest  Control    Inner Fringes   0.2593   -1.999      N/A      0
Posttest  Control    Outer Fringes   0.2963   -1.9239     0.1112   0.0434
Posttest  Treatment  Inner Fringes   0.2635   -1.9777     0.0417   0.0001
Posttest  Treatment  Outer Fringes   0.277    -1.9552     0.0684   0.0415

*↑ means the greater the value, the more resemblance between fringes DBSCAN clustering and Quartiles.
+↓ means the less the value, the more resemblance between fringes DBSCAN clustering and Quartiles.
N/A means not applicable as the clustering result contains only one cluster.

The results analysis for Table 98 can be referred to in Chapter 6, under the section "DBSCAN clustering as explained by 25th percentile/quartile".

Table 99: DBSCAN and K-Means Clustering Pairwise Comparison (DBSCAN <-> K-Means Clusters)

Test      Group      Fringes         *↑CA     +↓Entropy   ↑NMI     ↑ARI
Pretest   Control    Inner Fringes   0.5185   -0.963      0.438    1
Pretest   Control    Outer Fringes   1        0           1        1
Pretest   Treatment  Inner Fringes   0.554    0.554       0.554    0.554
Pretest   Treatment  Outer Fringes   0.9527   -0.1467     0.8418   N/A
Posttest  Control    Inner Fringes   0.5      -1          N/A      1
Posttest  Control    Outer Fringes   1        0           1        1
Posttest  Treatment  Inner Fringes   0.6757   -0.909      0.3686   0.0926
Posttest  Treatment  Outer Fringes   0.9865   -0.0328     0.4751   1

*↑ means the greater the value, the more resemblance between fringes DBSCAN and K-Means clustering.
+↓ means the less the value, the more resemblance between fringes DBSCAN and K-Means clustering.
N/A means not applicable as the clustering result contains only one cluster.

As seen in Table 99, in terms of purity, DBSCAN clustering gives a very high resemblance to most of the K-Means outer fringes clustering results, with CA values mostly around 95%-100%. On the other hand, DBSCAN clustering gives a medium resemblance to most of the K-Means inner fringes clustering results, with CA values mostly between 50% and 68%.
In terms of NMI, DBSCAN clustering resembles the K-Means outer fringes clustering results more closely than the inner fringes ones. For example, after the Pre-test, the DBSCAN clustering result has an NMI of 100% when compared to the Control students' Outer fringes K-Means clusters, whereas when compared to the Control students' Inner fringes K-Means clusters the NMI value is 43.8%. In another example, after the Post-test, the DBSCAN clustering has an NMI of 100% when compared to the Control students' Outer fringes K-Means clusters, whereas against the Control students' Inner fringes K-Means clusters the NMI index is not applicable, as the DBSCAN inner fringes clustering result for the Post-test Control students has only one cluster.

Table 100: DBSCAN and EM Clustering Pairwise Comparison (DBSCAN <-> EM Clusters)

Test      Group      Fringes         *↑CA     +↓Entropy   ↑NMI     ↑ARI
Pretest   Control    Inner Fringes   1        0           1        1
Pretest   Control    Outer Fringes   1        0           1        1
Pretest   Treatment  Inner Fringes   0.5675   -0.9586     0.4264   1
Pretest   Treatment  Outer Fringes   0.9797   -0.0595     0.6875   1
Posttest  Control    Inner Fringes   0.5      -1          N/A      1
Posttest  Control    Outer Fringes   1        0           N/A      1
Posttest  Treatment  Inner Fringes   1        0           1        1
Posttest  Treatment  Outer Fringes   0.9865   -0.0328     0.9309   1

*↑ means the greater the value, the more resemblance between fringes DBSCAN and EM clustering.
+↓ means the less the value, the more resemblance between fringes DBSCAN and EM clustering.
N/A means not applicable as the clustering result contains only one cluster.

As seen in Table 100, in terms of purity, DBSCAN clustering gives a very high resemblance to most of the EM fringes clustering results, with CA values mostly around 100% except for two inner fringes cases where the CA is around 50%.
Also, in terms of NMI, DBSCAN clustering gives a high resemblance to most of the EM inner and outer fringes clustering results. For example, after the Pre-test, the DBSCAN clustering result has an NMI of 100% when compared to the Control students' Inner and Outer fringes EM clusters. In another example, after the Post-test, the DBSCAN clustering has an NMI of 93.09% when compared to the Treatment students' Outer fringes EM clusters and 100% for the Treatment students' Inner fringes EM clusters. This might indicate that the DBSCAN and EM clustering techniques behave similarly when clustering students according to fringes.

9.5. Pairwise Comparison Using Fringes EM Clustering

Next, the EM fringes clustering results will be compared with the K-Means and DBSCAN fringes clustering results, as well as with the knowledge states clustering results and the quartiles grouping. The pairwise comparisons using the EM fringes clustering results are as follows:

Table 101: EM and Knowledge States Pairwise Comparison (EM <-> KS Clusters)

Test      Group      Fringes         *↑CA     +↓Entropy   ↑NMI     ↑ARI
Pretest   Control    Inner Fringes   0.5      -1.1885     0.3177   0.1033
Pretest   Control    Outer Fringes   0.537    -1.0563     0.476    0.1608
Pretest   Treatment  Inner Fringes   0.9595   -0.1954     0.8748   0.9454
Pretest   Treatment  Outer Fringes   0.5811   -1.0289     0.3885   0.1458
Posttest  Control    Inner Fringes   0.963    -0.1905     0.0796   0.0028
Posttest  Control    Outer Fringes   0.963    -0.2284     N/A      0
Posttest  Treatment  Inner Fringes   0.9797   -0.1216     0.3609   0.5492
Posttest  Treatment  Outer Fringes   0.9932   -0.0186     0.7653   0.8817

*↑ means the greater the value, the more resemblance between fringes EM and KS EM clustering.
+↓ means the less the value, the more resemblance between fringes EM and KS EM clustering.
N/A means not applicable as the clustering result contains only one cluster.

The results analysis for Table 101 can be referred to earlier in this chapter, under the section "EM clustering as explained by knowledge states".

Table 102: EM Clustering and Quartiles Pairwise Comparison (EM <-> Quartile)

Test      Group      Fringes         *↑CA     +↓Entropy   ↑NMI     ↑ARI
Pretest   Control    Inner Fringes   0.2963   -1.9239     0.1112   0.0011
Pretest   Control    Outer Fringes   0.2963   -1.9349     0.0816   0.0022
Pretest   Treatment  Inner Fringes   0.473    -1.5499     0.2939   0.2045
Pretest   Treatment  Outer Fringes   0.3041   -1.9        0.1347   0.0083
Posttest  Control    Inner Fringes   0.5185   -1.4784     0.3682   0.249
Posttest  Control    Outer Fringes   0.2593   -1.999      N/A      0
Posttest  Treatment  Inner Fringes   0.2635   -1.9777     0.0417   0.0001
Posttest  Treatment  Outer Fringes   0.277    -1.9494     0.0719   0.0012

*↑ means the greater the value, the more resemblance between fringes EM clustering and Quartiles.
+↓ means the less the value, the more resemblance between fringes EM clustering and Quartiles.
N/A means not applicable as the clustering result contains only one cluster.

The results analysis for Table 102 can be referred to earlier in this chapter, under the section "EM clustering as explained by 25th percentile/quartile".

As seen in Table 103, in terms of purity, EM clustering gives a very high resemblance to most of the K-Means fringes clustering results, with CA values mostly around 95%-100%. On the other hand, EM clustering gives a medium resemblance to some of the K-Means inner fringes clustering results, with CA values around 50%-68% in those cases.
In terms of NMI, EM clustering gives a fluctuating resemblance to the K-Means fringes clustering results. For example, after the Pre-test, the EM clustering result has an NMI of 100% when compared to the Control students' Outer fringes K-Means clusters, whereas when compared to the Treatment students' Outer fringes K-Means clusters the NMI value is 73.88%. In another example, after the Post-test, the EM clustering has an NMI of 64.82% when compared to the Treatment students' Outer fringes K-Means clusters. These results are similar to the previously discussed pairwise comparison between the K-Means and EM clustering results shown in Table 96.

Table 103: EM and K-Means Clustering Pairwise Comparison (EM <-> K-Means Clusters)

Test      Group      Fringes         *↑CA     +↓Entropy   ↑NMI     ↑ARI
Pretest   Control    Inner Fringes   0.5185   -0.963      0.438    1
Pretest   Control    Outer Fringes   1        0           1        1
Pretest   Treatment  Inner Fringes   0.9527   -0.2038     0.9229   1
Pretest   Treatment  Outer Fringes   0.9527   -0.2289     0.7388   N/A
Posttest  Control    Inner Fringes   1        0           1        1
Posttest  Control    Outer Fringes   0.963    -0.2284     N/A      1
Posttest  Treatment  Inner Fringes   0.6757   0.6757      0.6757   0.6757
Posttest  Treatment  Outer Fringes   1        0           0.6482   0.5597

*↑ means the greater the value, the more resemblance between fringes EM and K-Means clustering.
+↓ means the less the value, the more resemblance between fringes EM and K-Means clustering.
N/A means not applicable as the clustering result contains only one cluster.

Table 104: EM and DBSCAN Clustering Pairwise Comparison (EM <-> DBSCAN Clusters)

Test      Group      Fringes         *↑CA     +↓Entropy   ↑NMI     ↑ARI
Pretest   Control    Inner Fringes   1        0           1        1
Pretest   Control    Outer Fringes   1        0           1        1
Pretest   Treatment  Inner Fringes   1        0           0.4264   0.1185
Pretest   Treatment  Outer Fringes   0.9797   -0.1417     0.6875   1
Posttest  Control    Inner Fringes   1        0           N/A      0
Posttest  Control    Outer Fringes   0.963    -0.2284     N/A      1
Posttest  Treatment  Inner Fringes   1        0           1        1
Posttest  Treatment  Outer Fringes   1        0           0.9309   1

*↑ means the greater the value, the more resemblance between fringes EM and DBSCAN clustering.
+↓ means the less the value, the more resemblance between fringes EM and DBSCAN clustering.
N/A means not applicable as the clustering result contains only one cluster.

As seen in Table 104, in terms of purity, EM clustering gives a very high resemblance to most of the DBSCAN fringes clustering results, with CA values mostly around 100%.
Also, in terms of NMI, wherever applicable, EM clustering gives a high resemblance to most of the DBSCAN inner and outer fringes clustering results. For example, after the Pre-test, the EM clustering result has an NMI of 100% when compared to the Control students' Inner and Outer fringes DBSCAN clusters. In another example, after the Post-test, the EM clustering has an NMI of 93.09% when compared to the Treatment students' Outer fringes DBSCAN clusters and 100% for the Treatment students' Inner fringes DBSCAN clusters. Overall, these results are similar to the previously discussed pairwise comparison between the DBSCAN and EM clustering results in Table 100.
Therefore, this last pairwise comparison is a further indication that the DBSCAN and EM clustering techniques behave similarly when clustering students according to fringes.

9.6. Summary

Overall, several conclusions about the model's overall results can be drawn from the preceding pairwise comparison tests.
Firstly, clustering students based on the medians of their overall scores (quartiles) is different from clustering students according to their inner and outer fringes (i.e., the topics they learnt recently and the topics they are ready to learn next).
Secondly, clustering students based on their knowledge states is not entirely the same as clustering them based on their inner and outer fringes.
Finally, the K-Means, DBSCAN, and EM outer fringes clustering results highly resemble one another, as seen from the External indices obtained, especially the NMI values. This might indicate that using outer fringes rather than inner fringes to give feedback to the instructor or educational administrators is more consistent across the different clustering algorithms, and that it is efficient and effective in helping plan customized lessons for the different groups of students, who have different topic-knowledge requirements for progressing proficiently in a given subject.


Chapter 10: Sensitivity Analysis

In this chapter, a sensitivity analysis will be done to examine the effect of

varying the threshold of the subject’s topics’ scores on the Internal and External indices

of the clustering results. Two types of threshold referencing will be tested. The first one

is called Criterion Referencing, and the second one is called Norm Referencing.

According to [45], Criterion Referencing is used to determine whether students have attained a specific set of skills or concepts; it is a form of summative assessment. Each learner's score is compared with a pre-determined standard score: if the student scores at or above that standard, he/she has "passed" the topic, otherwise "failed". Hence, a student's performance is independent of the other students' performances. This kind of referencing was used in the Illustrative Example of the previous chapters, with a pre-determined score of 33%. For the sensitivity analysis, another pre-determined score of 60% will be tested to observe the effect of varying the criterion-referencing threshold on the clustering results.

According to [45], Norm Referencing measures each student's performance in a specific topic relative to the performances of the others in the same topic; it is a form of formative assessment. Each learner's score in a given topic/concept is compared with the overall median of all students' scores in that topic/concept: if the student scores at or above the median, he/she has "passed" the topic/concept, otherwise "failed". Hence, this form of referencing identifies the low and high achievers in a given topic/concept. For the sensitivity analysis, for every topic in the KST, the overall median of all the students' scores will be used as the threshold, to observe the effect of norm referencing on the clustering results. A minimal sketch contrasting the two referencing schemes is given below.
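The following minimal sketch (with a hypothetical students-by-topics score matrix, not the study data) contrasts the two referencing schemes: criterion referencing binarizes each score against a fixed cut score (33% or 60%), while norm referencing binarizes against each topic's median.

    import numpy as np

    scores = np.array([[30.0, 70, 55],    # hypothetical students x topics score matrix
                       [45.0, 35, 80],
                       [65.0, 60, 20],
                       [50.0, 90, 75]])

    criterion_33 = (scores >= 33).astype(int)                         # fixed cut score of 33%
    criterion_60 = (scores >= 60).astype(int)                         # stricter cut score of 60%
    norm_median  = (scores >= np.median(scores, axis=0)).astype(int)  # per-topic median cut

    print("33% criterion patterns:\n", criterion_33)
    print("60% criterion patterns:\n", criterion_60)
    print("Norm (median) patterns:\n", norm_median)

Each row of the binarized matrices is a student's pass/fail response pattern, which is what the KST machinery turns into knowledge states and fringes.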

The sensitivity analysis will be divided into Quantitative Analysis and

Qualitative Analysis.

10.1. Quantitative Analysis

In the Quantitative Analysis, the effect of the Criterion Referencing score (60%) and the Norm Referencing score (each topic's Median) on the number of clusters obtained in the K-Means, DBSCAN, and EM results will be compared with respect to the previous Illustrative Example, which used 33%.


The overall Quantitative Analysis test results are as shown in Table 105.

As seen in Table 105, with regard to Criterion Referencing, increasing the threshold from 33% to 60% increases the number of clusters per result in most K-Means, DBSCAN, and EM cases (a minimal counting sketch follows Table 105).

Table 105: Quantitative Analysis

Number of Clusters in Result
Technique  Test      Group      Fringes         33%          60%          Median
K-Means    Pretest   Control    Inner Fringes   3            5            1
K-Means    Pretest   Control    Outer Fringes   2            4            1
K-Means    Pretest   Treatment  Inner Fringes   5            7            1
K-Means    Pretest   Treatment  Outer Fringes   6            8            1
K-Means    Posttest  Control    Inner Fringes   2            5            1
K-Means    Posttest  Control    Outer Fringes   2            5            1
K-Means    Posttest  Treatment  Inner Fringes   4            5            1
K-Means    Posttest  Treatment  Outer Fringes   2            5            1
DBSCAN     Pretest   Control    Inner Fringes   1 (+Noise)   1 (+Noise)   0 (+Noise)
DBSCAN     Pretest   Control    Outer Fringes   1 (+Noise)   1 (+Noise)   0 (+Noise)
DBSCAN     Pretest   Treatment  Inner Fringes   1 (+Noise)   2 (+Noise)   0 (+Noise)
DBSCAN     Pretest   Treatment  Outer Fringes   1 (+Noise)   1 (+Noise)   0 (+Noise)
DBSCAN     Posttest  Control    Inner Fringes   1            1 (+Noise)   0 (+Noise)
DBSCAN     Posttest  Control    Outer Fringes   1 (+Noise)   1 (+Noise)   0 (+Noise)
DBSCAN     Posttest  Treatment  Inner Fringes   2            2 (+Noise)   0 (+Noise)
DBSCAN     Posttest  Treatment  Outer Fringes   1 (+Noise)   1 (+Noise)   0 (+Noise)
EM         Pretest   Control    Inner Fringes   2            2            1
EM         Pretest   Control    Outer Fringes   3            3            1
EM         Pretest   Treatment  Inner Fringes   4            5            1
EM         Pretest   Treatment  Outer Fringes   3            6            1
EM         Posttest  Control    Inner Fringes   2            4            1
EM         Posttest  Control    Outer Fringes   1            1            1
EM         Posttest  Treatment  Inner Fringes   2            3            1
EM         Posttest  Treatment  Outer Fringes   3            2            1
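As a rough illustration of why a higher criterion threshold can yield more clusters, the following sketch (with random placeholder scores, not the study data) counts the distinct pass/fail response patterns produced at each threshold; more distinct patterns give the fringe-based clustering more ways to split the students.

    import numpy as np

    rng = np.random.default_rng(0)
    scores = rng.uniform(0, 100, size=(54, 6))   # placeholder: 54 students x 6 topics

    for thr in (33, 60):
        patterns = {tuple(row) for row in (scores >= thr).astype(int)}
        print(f"threshold {thr}%: {len(patterns)} distinct pass/fail response patterns")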

To check whether changing the threshold clusters the learners differently, a pairwise comparison between the 33% K-Means clustering results and the 60% K-Means clustering results was done, as shown in Table 106.

Table 106: 33% vs. 60% K-Means Clustering Results (33% K-Means <-> 60% K-Means Clusters)

Test      Group      Fringes         *↑CA     +↓Entropy   ↑NMI     ↑ARI
Pretest   Control    Inner Fringes   0.8148   -0.8024     0.6604   0.8423
Pretest   Control    Outer Fringes   0.8518   -0.8001     0.4376   0.5964
Pretest   Treatment  Inner Fringes   0.8648   -0.7468     0.6991   0.8907
Pretest   Treatment  Outer Fringes   0.5338   -1.5873     0.2945   0.2515
Posttest  Control    Inner Fringes   0.926    -0.4136     0.841    0
Posttest  Control    Outer Fringes   0.5185   -1.2951     0.306    N/A
Posttest  Treatment  Inner Fringes   0.9459   -0.2968     0.8449   0.8723
Posttest  Treatment  Outer Fringes   0.6621   -1.2045     0.1942   0.4702

*↑ means the greater the value, the more resemblance between the two clustering results.
+↓ means the less the value, the more resemblance between the two clustering results.
N/A means there is no correspondence at all between the clustering results for 33% and 60%.

Looking at Table 106, it can be seen that even though the purity/accuracy between the 33% and 60% K-Means clustering results is above 80% most of the time, the NMI and ARI values are mostly below 50%, especially in the outer fringes cases: for the Pre-test and Post-test Treatment Outer fringes, the NMI values are 29.45% and 19.42% respectively, and the ARI values are 25.15% and 47.02% respectively.

Hence, it can be concluded that changing the threshold significantly changes how learners are clustered together based on the inner/outer fringes of their knowledge states. Furthermore, increasing the threshold increases the number of clusters in the result. Changing the threshold can also change the knowledge state a student is currently at, and therefore change his/her inner and outer fringes; this is why the NMI between the 33% and 60% K-Means clustering results was low.

On the other hand, with regard to Norm Referencing, using each topic's median as the threshold for passing that topic leads to only one cluster per result in all the K-Means, DBSCAN, and EM cases. This is because all the students fall into one knowledge-state level under this kind of threshold, and hence they all have the same inner and outer fringes.

10.2. Qualitative Analysis

In the Qualitative Analysis, the effect of the Criterion Referencing score (60%) and the Norm Referencing score (each topic's Median) on each of the Internal and External indices of the K-Means, DBSCAN, and EM clustering results will be compared with respect to the previous Illustrative Example, which used 33%.
With regard to K-Means, the overall Qualitative Analysis results for the Internal indices are shown in Table 107.

Table 107: K-Means Qualitative Analysis – Internal Indices

Index   Test      Group      Fringes         33%       60%       Median
*↓CP    Pretest   Control    Inner Fringes   0.3195    0.2762    0
*↓CP    Pretest   Control    Outer Fringes   1.1142    1.0632    0
*↓CP    Pretest   Treatment  Inner Fringes   0.0025    0.0021    0
*↓CP    Pretest   Treatment  Outer Fringes   0.4927    0.0188    0
*↓CP    Posttest  Control    Inner Fringes   0.2849    0.1455    0
*↓CP    Posttest  Control    Outer Fringes   1.0173    0.0137    0
*↓CP    Posttest  Treatment  Inner Fringes   0.0617    0.0029    0
*↓CP    Posttest  Treatment  Outer Fringes   0.5802    0.0029    0
+↑SP    Pretest   Control    Inner Fringes   13.7538   18.9213   0
+↑SP    Pretest   Control    Outer Fringes   18.9804   27.0214   0
+↑SP    Pretest   Treatment  Inner Fringes   12.5333   18.5688   0
+↑SP    Pretest   Treatment  Outer Fringes   25.0903   15.8796   0
+↑SP    Posttest  Control    Inner Fringes   10.7037   18.7446   0
+↑SP    Posttest  Control    Outer Fringes   15.0385   11.7872   0
+↑SP    Posttest  Treatment  Inner Fringes   12.7876   19.6376   0
+↑SP    Posttest  Treatment  Outer Fringes   30.6027   7.4957    0
↓DB     Pretest   Control    Inner Fringes   0.2127    0.0303    N/A
↓DB     Pretest   Control    Outer Fringes   0.4793    0.0706    N/A
↓DB     Pretest   Treatment  Inner Fringes   0.0441    0.0046    N/A
↓DB     Pretest   Treatment  Outer Fringes   0.0018    0.0034    N/A
↓DB     Posttest  Control    Inner Fringes   0.0532    0.001     N/A
↓DB     Posttest  Control    Outer Fringes   0.0677    0.0108    N/A
↓DB     Posttest  Treatment  Inner Fringes   0.0018    0.0049    N/A
↓DB     Posttest  Treatment  Outer Fringes   0.019     0.0081    N/A
↑DVI    Pretest   Control    Inner Fringes   0.4375    0.3333    N/A
↑DVI    Pretest   Control    Outer Fringes   1         0.75      N/A
↑DVI    Pretest   Treatment  Inner Fringes   0.3333    1         N/A
↑DVI    Pretest   Treatment  Outer Fringes   4         0.25      N/A
↑DVI    Posttest  Control    Inner Fringes   1.75      1         N/A
↑DVI    Posttest  Control    Outer Fringes   7         0.5       N/A
↑DVI    Posttest  Treatment  Inner Fringes   1         1         N/A
↑DVI    Posttest  Treatment  Outer Fringes   3.4286    0.25      N/A

*↓ means the less the value, the better.
+↑ means the greater the value, the better.
N/A means not applicable as the clustering result contains only one cluster.

Page 148: PERSONALIZING GROUP INSTRUCTION USING KNOWLEDGE …

148

In the case of K-Means clustering, as seen in Table 107, increasing the threshold from 33% to 60% decreases the CP values of the clustering results. With regard to separation, most of the 60% clustering results have higher SP values than the 33% ones, with the exception of the Pre-test Control Outer fringes results and the Post-test Control and Treatment Outer fringes results. With regard to the DB index, most of the 60% clustering results have lower DB values than the 33% ones, with the exception of the Pre-test Treatment Outer fringes results and the Post-test Treatment Inner fringes results. Finally, increasing the threshold from 33% to 60% decreases the DVI values of the clustering results. A minimal sketch of computing two of these indices (DB and DVI) is given below.
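For reference, the following minimal sketch (toy 2-D data, not the thesis data) shows how two of the Internal indices used throughout this analysis can be computed: Davies-Bouldin via scikit-learn, and a simple Dunn index (DVI), taken here as the smallest between-cluster distance divided by the largest within-cluster diameter; CP and SP rest on the same distance machinery.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import davies_bouldin_score
    from scipy.spatial.distance import cdist, pdist

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(m, 0.3, size=(20, 2)) for m in (0.0, 3.0, 6.0)])  # toy data
    labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X)

    def dunn_index(X, labels):
        clusters = [X[labels == k] for k in np.unique(labels)]
        min_between = min(cdist(a, b).min()
                          for i, a in enumerate(clusters) for b in clusters[i + 1:])
        max_diameter = max(pdist(c).max() for c in clusters if len(c) > 1)
        return min_between / max_diameter

    print(f"DB  = {davies_bouldin_score(X, labels):.4f}  (lower is better)")
    print(f"DVI = {dunn_index(X, labels):.4f}  (higher is better)")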

Figure 23: K-Means Results Internal Indices Comparison (33% vs 60%)

Looking at Figure 23, it can be observed that in most cases all the Internal indices, except for DVI, are better at the higher threshold. The DVI at 33% surpasses the DVI at 60% in six cases, which means the 33% clusters are well separated relative to their compactness, with little variance between the members within each cluster. Regardless, it might be a good idea to increase the pass/fail threshold for the topics in a unit taught to the students.

Also, as seen in Table 107, the Internal indices were not applicable to the Norm Referencing clustering results, as all the K-Means results contain only one cluster.

Next, with regard to K-Means, the overall Qualitative Analysis results for the External indices are shown in Table 108.

Table 108: K-Means Qualitative Analysis – External Indices as compared to KS Clustering

Index      Test      Group      Fringes         33%       60%       Median
*↑CA       Pretest   Control    Inner Fringes   0.9445    0.9074    1
*↑CA       Pretest   Control    Outer Fringes   0.5370    0.537     1
*↑CA       Pretest   Treatment  Inner Fringes   0.9932    0.9865    1
*↑CA       Pretest   Treatment  Outer Fringes   1         0.9797    1
*↑CA       Posttest  Control    Inner Fringes   0.9630    0.9815    1
*↑CA       Posttest  Control    Outer Fringes   1         0.9815    1
*↑CA       Posttest  Treatment  Inner Fringes   1         1         1
*↑CA       Posttest  Treatment  Outer Fringes   0.6756    1         1
+↓Entropy  Pretest   Control    Inner Fringes   -0.2254   -0.2727   0
+↓Entropy  Pretest   Control    Outer Fringes   -0.9442   -0.9812   0
+↓Entropy  Pretest   Treatment  Inner Fringes   -0.0135   -0.0439   0
+↓Entropy  Pretest   Treatment  Outer Fringes   0         -0.0558   0
+↓Entropy  Posttest  Control    Inner Fringes   -0.1905   -0.1099   0
+↓Entropy  Posttest  Control    Outer Fringes   0         -0.051    0
+↓Entropy  Posttest  Treatment  Inner Fringes   0         0         0
+↓Entropy  Posttest  Treatment  Outer Fringes   -0.9014   0         0
↑NMI       Pretest   Control    Inner Fringes   0.8413    0.8993    N/A
↑NMI       Pretest   Control    Outer Fringes   0.4969    0.666     N/A
↑NMI       Pretest   Treatment  Inner Fringes   0.5147    0.9609    N/A
↑NMI       Pretest   Treatment  Outer Fringes   0.8812    0.9527    N/A
↑NMI       Posttest  Control    Inner Fringes   0.0796    0.9496    N/A
↑NMI       Posttest  Control    Outer Fringes   1         0.9703    N/A
↑NMI       Posttest  Treatment  Inner Fringes   0.9366    0.5768    N/A
↑NMI       Posttest  Treatment  Outer Fringes   0.0694    0.5768    N/A
↑ARI       Pretest   Control    Inner Fringes   0.8940    0.9442    N/A
↑ARI       Pretest   Control    Outer Fringes   0.1712    0.399     N/A
↑ARI       Pretest   Treatment  Inner Fringes   0.8665    0.9956    N/A
↑ARI       Pretest   Treatment  Outer Fringes   1         0.998     N/A
↑ARI       Posttest  Control    Inner Fringes   0.0028    0.9669    N/A
↑ARI       Posttest  Control    Outer Fringes   1         0.9957    N/A
↑ARI       Posttest  Treatment  Inner Fringes   1         0.8887    N/A
↑ARI       Posttest  Treatment  Outer Fringes   0.0262    0.8887    N/A

*↑ means the greater the value, the more resemblance between fringes and KS clustering.
+↓ means the less the value, the more resemblance between fringes and KS clustering.
N/A means not applicable as the clustering result contains only one cluster.

In the case of K-Means clustering, as seen in Table 108, when compared to their respective knowledge states clustering results, the 33% and 60% fringes clustering results have roughly the same CA and Entropy values. On the other hand, the 60% fringes clustering results have higher NMI and ARI values than the 33% results in most K-Means cases, although the differences are not substantial.
Also, as seen in Table 108, apart from purity, the External indices were not applicable to the Norm Referencing clustering results, as all the K-Means results contain only one cluster. The purity is trivially 1 in all such cases, as the knowledge states clustering results also contain only one cluster.

Next, with regard to DBSCAN, the overall Qualitative Analysis results for the Internal indices are shown in Table 109.

In the case of DBSCAN clustering, as seen in Table 109, increasing the threshold from 33% to 60% increases the CP values of the clustering results. With regard to separation, most of the 60% clustering results have higher SP values than the 33% ones, with the exception of the Pre-test Control Inner fringes results. With regard to the DB index, the 60% Treatment students' clustering results have lower DB values than the 33% ones. Finally, increasing the threshold from 33% to 60% decreases the DVI values of the clustering results. Hence, like K-Means, the DVI values indicate that the 33% DBSCAN clustering results are better separated given the compactness and separation they have.
Also, as seen in Table 109, the Internal indices were not applicable to the Norm Referencing clustering results, as all the DBSCAN results contain only one cluster.

With regard to DBSCAN, the overall Qualitative Analysis results for the External indices are shown in Table 110.
In the case of DBSCAN clustering, as seen in Table 110, when compared to their respective knowledge states clustering results, the 33% and 60% fringes clustering results have roughly the same CA and Entropy values. Moreover, where applicable, the 60% fringes clustering results have roughly the same NMI and ARI values as the 33% results in most DBSCAN cases.

Table 109: DBSCAN Qualitative Analysis – Internal Indices

Index   Test      Group      Fringes         33%       60%       Median
*↓CP    Pretest   Control    Inner Fringes   5.6036    9.1083    N/A
*↓CP    Pretest   Control    Outer Fringes   1.1142    1.9666    N/A
*↓CP    Pretest   Treatment  Inner Fringes   4.9521    5.3052    N/A
*↓CP    Pretest   Treatment  Outer Fringes   0.5501    1.1234    N/A
*↓CP    Posttest  Control    Inner Fringes   N/A       N/A       N/A
*↓CP    Posttest  Control    Outer Fringes   1.0173    1.1636    N/A
*↓CP    Posttest  Treatment  Inner Fringes   4.3708    4.356     N/A
*↓CP    Posttest  Treatment  Outer Fringes   0.4505    0.5332    N/A
+↑SP    Pretest   Control    Inner Fringes   33.6538   16.1454   N/A
+↑SP    Pretest   Control    Outer Fringes   18.9804   27.3869   N/A
+↑SP    Pretest   Treatment  Inner Fringes   36.0643   46.3496   N/A
+↑SP    Pretest   Treatment  Outer Fringes   25.3725   32.582    N/A
+↑SP    Posttest  Control    Inner Fringes   N/A       N/A       N/A
+↑SP    Posttest  Control    Outer Fringes   15.0385   35.102    N/A
+↑SP    Posttest  Treatment  Inner Fringes   44.1471   51.1534   N/A
+↑SP    Posttest  Treatment  Outer Fringes   14.6853   25.8957   N/A
DB      Pretest   Control    Inner Fringes   0.6417    1.1283    N/A
DB      Pretest   Control    Outer Fringes   0.4793    0.6747    N/A
DB      Pretest   Treatment  Inner Fringes   0.6028    0.2142    N/A
DB      Pretest   Treatment  Outer Fringes   0.5134    0.4906    N/A
DB      Posttest  Control    Inner Fringes   N/A       N/A       N/A
DB      Posttest  Control    Outer Fringes   0.0677    0.5758    N/A
DB      Posttest  Treatment  Inner Fringes   0.1594    0.0506    N/A
DB      Posttest  Treatment  Outer Fringes   1.1736    0.9021    N/A
DVI     Pretest   Control    Inner Fringes   1.25      0.1111    N/A
DVI     Pretest   Control    Outer Fringes   1         0.15      N/A
DVI     Pretest   Treatment  Inner Fringes   0.625     0.375     N/A
DVI     Pretest   Treatment  Outer Fringes   0.15      0.0455    N/A
DVI     Posttest  Control    Inner Fringes   N/A       N/A       N/A
DVI     Posttest  Control    Outer Fringes   7         0.15      N/A
DVI     Posttest  Treatment  Inner Fringes   3.6       0.8571    N/A
DVI     Posttest  Treatment  Outer Fringes   0.0714    0.0455    N/A

*↓ means the less the value, the better.
+↑ means the greater the value, the better.
N/A means not applicable as the clustering result contains only one cluster.

Table 110: DBSCAN Qualitative Analysis – External Indices as compared to KS Clustering

Index      Test      Group      Fringes         33%       60%       Median
*↑CA       Pretest   Control    Inner Fringes   1         1         1
*↑CA       Pretest   Control    Outer Fringes   1         1         1
*↑CA       Pretest   Treatment  Inner Fringes   0.9594    1         1
*↑CA       Pretest   Treatment  Outer Fringes   0.9932    1         1
*↑CA       Posttest  Control    Inner Fringes   0.963     1         1
*↑CA       Posttest  Control    Outer Fringes   1         1         1
*↑CA       Posttest  Treatment  Inner Fringes   1         0.9932    1
*↑CA       Posttest  Treatment  Outer Fringes   1         0.9662    1
+↓Entropy  Pretest   Control    Inner Fringes   0         0         0
+↓Entropy  Pretest   Control    Outer Fringes   0         0         0
+↓Entropy  Pretest   Treatment  Inner Fringes   -0.2359   0         0
+↓Entropy  Pretest   Treatment  Outer Fringes   -0.0317   0         0
+↓Entropy  Posttest  Control    Inner Fringes   -0.2284   0         0
+↓Entropy  Posttest  Control    Outer Fringes   0         0         0
+↓Entropy  Posttest  Treatment  Inner Fringes   0         -0.0578   0
+↓Entropy  Posttest  Treatment  Outer Fringes   0         -0.0676   0
↑NMI       Pretest   Control    Inner Fringes   N/A       N/A       N/A
↑NMI       Pretest   Control    Outer Fringes   N/A       N/A       N/A
↑NMI       Pretest   Treatment  Inner Fringes   0.3577    N/A       N/A
↑NMI       Pretest   Treatment  Outer Fringes   0.8704    N/A       N/A
↑NMI       Posttest  Control    Inner Fringes   N/A       N/A       N/A
↑NMI       Posttest  Control    Outer Fringes   1         N/A       N/A
↑NMI       Posttest  Treatment  Inner Fringes   N/A       0.5379    N/A
↑NMI       Posttest  Treatment  Outer Fringes   N/A       0.5276    N/A
↑ARI       Pretest   Control    Inner Fringes   1         1         N/A
↑ARI       Pretest   Control    Outer Fringes   1         1         N/A
↑ARI       Pretest   Treatment  Inner Fringes   0.6464    1         N/A
↑ARI       Pretest   Treatment  Outer Fringes   1         1         N/A
↑ARI       Posttest  Control    Inner Fringes   1         1         N/A
↑ARI       Posttest  Control    Outer Fringes   1         1         N/A
↑ARI       Posttest  Treatment  Inner Fringes   1         0.548     N/A
↑ARI       Posttest  Treatment  Outer Fringes   1         1         N/A

*↑ means the greater the value, the more resemblance between fringes and KS clustering.
+↓ means the less the value, the more resemblance between fringes and KS clustering.
N/A means not applicable as the clustering result contains only one cluster.

Also, as seen in Table 110, apart from purity, the External indices were not applicable to the Norm Referencing clustering results, as all the DBSCAN results contain only one cluster. Like K-Means, the purity is always 1 in such cases, as the knowledge states clustering results also contain only one cluster.

Next, with regard to EM, the overall Qualitative Analysis results for the Internal indices are shown in Table 111.

In the case of EM clustering, as seen in Table 111, increasing the threshold from 33% to 60% increases the CP values of the clustering results, except for a few cases such as the Pre-test Treatment Outer fringes clusters. With regard to separation, most of the 60% clustering results have higher SP values than the 33% ones, with the exception of the Post-test Treatment Inner fringes results. With regard to the DB index, the 60% clustering results have lower DB values than the 33% ones. Finally, increasing the threshold from 33% to 60% decreases the DVI values of the clustering results, except for a few cases such as the Post-test Treatment Outer fringes results. Like K-Means and DBSCAN, the EM clustering results at 33% are better separated given their compactness and separation values as compared to the 60% results.
Also, as seen in Table 111, the Internal indices were not applicable to the Norm Referencing clustering results, as all the EM results contain only one cluster.

With regards to EM, the overall Qualitative Analysis test results for External

indices are shown in Table 112.

In the case of EM clustering, as seen in Table 112, when compared to their respective knowledge states clustering results, the 60% fringes clustering results generally have higher CA values than the 33% ones, except for a few cases such as the Post-test Treatment Outer fringes clustering results. The Entropy values mirror the CA values. Moreover, the 60% fringes clustering results have similar or higher NMI and ARI values than the 33% ones, again with a few exceptions such as the Post-test Treatment Outer fringes clustering results.

Also, as seen in Table 112, apart from purity, the External indices were not applicable to the Norm Referencing clustering results, as all the EM results contain only one cluster. As with K-Means and DBSCAN, the purity is always 1 in these cases because the knowledge states clustering results also contain only one cluster.


Table 111: EM Quantitative Analysis – Internal Indices

Internal Indices | Data Sets | 33% | 60% | Median

*↓CP

Pretest

Control Inner Fringes 5.6036 6.5085 N/A

Outer Fringes 1.1142 1.2516 N/A

Treatment Inner Fringes 0.264 0.7166 N/A

Outer Fringes 0.9857 0.541 N/A

Posttest

Control Inner Fringes 0.2849 0.3302 N/A

Outer Fringes N/A N/A N/A

Treatment Inner Fringes 4.3708 0.08 N/A

Outer Fringes 0.435 0.8002 N/A

+↑SP

Pretest

Control Inner Fringes 33.6538 50.76 N/A

Outer Fringes 18.9804 27.2934 N/A

Treatment Inner Fringes 12.9555 20.1792 N/A

Outer Fringes 30.8906 32.846 N/A

Posttest

Control Inner Fringes 10.7037 19.1798 N/A

Outer Fringes N/A N/A N/A

Treatment Inner Fringes 44.1471 19.7851 N/A

Outer Fringes 14.785 45.6895 N/A

↓DB

Pretest

Control Inner Fringes 0.6417 0.2857 N/A

Outer Fringes 0.4793 0.2213 N/A

Treatment Inner Fringes 0.2152 0.0405 N/A

Outer Fringes 0.2535 0.0363 N/A

Posttest

Control Inner Fringes 0.0532 0.0734 N/A

Outer Fringes N/A N/A N/A

Treatment Inner Fringes 0.1594 0.0705 N/A

Outer Fringes 0.0505 0.0525 N/A

↑DVI

Pretest

Control Inner Fringes 1.25 0.6452 N/A

Outer Fringes 1 0.3 N/A

Treatment Inner Fringes 0.1875 0.375 N/A

Outer Fringes 0.6 0.5 N/A

Posttest

Control Inner Fringes 1.75 0.5833 N/A

Outer Fringes N/A N/A N/A

Treatment Inner Fringes 3.6 0.625 N/A

Outer Fringes 0.5 5.1429 N/A

*↓ means the less the value the better.

+↑ means the greater the value the better.

N/A means not applicable as the clustering result contains only one cluster.


Table 112: EM Qualitative Analysis – External Indices as compared to KS Clustering

External Indices | Data Sets | 33% | 60% | Median

*↑CA

Pretest

Control Inner Fringes 0.5 0.8333 1

Outer Fringes 0.537 0.963 1

Treatment Inner Fringes 0.9595 0.8919 1

Outer Fringes 0.5811 0.6013 1

Posttest

Control Inner Fringes 0.963 0.9445 1

Outer Fringes 0.963 0.5 1

Treatment Inner Fringes 0.9797 0.9594 1

Outer Fringes 0.9932 0.6824 1

+↓Entropy

Pretest

Control Inner Fringes -1.1885 -0.615 0

Outer Fringes -1.0563 -0.0741 0

Treatment Inner Fringes -0.1954 -0.4805 0

Outer Fringes -1.0289 -0.9712 0

Posttest

Control Inner Fringes -0.1905 -0.2349 0

Outer Fringes -0.2284 -1.3027 0

Treatment Inner Fringes -0.1216 -0.131 0

Outer Fringes -0.0186 -1.0625 0

↑NMI

Pretest

Control Inner Fringes 0.3177 0.0703 N/A

Outer Fringes 0.476 0.744 N/A

Treatment Inner Fringes 0.8748 0.8199 N/A

Outer Fringes 0.3885 0.6178 N/A

Posttest

Control Inner Fringes 0.0796 0.8362 N/A

Outer Fringes N/A N/A N/A

Treatment Inner Fringes 0.3609 0.9373 N/A

Outer Fringes 0.7653 0.4087 N/A

↑ARI

Pretest

Control Inner Fringes 0.1033 0.1765 N/A

Outer Fringes 0.1608 0.8596 N/A

Treatment Inner Fringes 0.9454 0.839 N/A

Outer Fringes 0.1458 0.354 N/A

Posttest

Control Inner Fringes 0.0028 0.9371 N/A

Outer Fringes 0 0 N/A

Treatment Inner Fringes 0.5492 0.9871 N/A

Outer Fringes 0.8817 0.1314 N/A

*↑ means the greater the value, the more resemblance between fringes and KS clustering.

+↓ means the less the value, the more resemblance between fringes and KS clustering.

N/A means not applicable, as the clustering result contains only one cluster.


Overall, with regards to Criterion Referencing, increasing the threshold at which a student passes the topics in a subject can somewhat increase the number of clusters in a single result, regardless of whether K-Means, DBSCAN, or EM is used. The increase also produces clustering results of better quality, as indicated by the Internal indices seen previously in Figure 23.

In terms of External indices, the K-Means and DBSCAN results at the 33% and 60% thresholds were relatively similar. On the other hand, the External indices of the EM results at 60% were higher than those at 33%.

Finally, with regards to Norm Referencing using the topics’ medians, all the students will have the same fringe sets and so will all fall into one cluster in a single result.
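To make the two referencing schemes concrete, the following is a minimal base-R sketch, assuming a hypothetical students-by-topics score matrix with values in [0, 1]; all variable names are illustrative only.

  # Criterion vs. norm referencing on a toy score matrix
  set.seed(1)
  scores <- matrix(runif(20), nrow = 5,
                   dimnames = list(paste0("s", 1:5), letters[1:4]))
  passed.33 <- scores >= 0.33                       # criterion referencing, 33%
  passed.60 <- scores >= 0.60                       # criterion referencing, 60%
  topic.med <- apply(scores, 2, median)             # per-topic medians
  passed.norm <- sweep(scores, 2, topic.med, ">=")  # norm referencing by medians

The binary pass/fail patterns produced by each scheme are what the KST step then turns into knowledge states and fringe sets.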


Chapter 11: Model Validation and Further Insights

The purpose of this chapter is to first validate the generalizability of the approach proposed in this thesis using Data Set 2, which is much larger than the earlier sample, Data Set 1. To reiterate from the Data Collection section of Chapter 5, the larger Data Set 2 is also based on pre-assessment and post-assessment grades for the Grade 2 mathematics NUMBERS unit. Compared to Data Set 1, used in the clustering examples of Chapters 6, 7, and 8, Data Set 2 contains 802 students, with 187 in the Control group and 615 in the Treatment group.

The second purpose is to derive several external key insights regarding the relation between certain school properties in Data Set 2 and the fringes clustering results obtained from the same data set.

11.1. Model Validation for Generalizability

First, the KST algorithm was applied on the data sample to extract the inner and

outer fringe sets. Next, K-Means, DBSCAN, and EM clustering algorithms were run on

the extracted inner fringe sets and outer fringe sets. Finally, the fringes clustering results

from every K-Means, DBSCAN, and EM were validated using the Internal and External

indices mentioned earlier.
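A condensed sketch of this pipeline is given below, assuming a hypothetical numeric matrix of encoded fringe sets; dbscan() comes from the fpc package [35] and Mclust() from mclust [42], while kmeans() is base R. The parameter values are illustrative, not the thesis settings.

  # Condensed validation-pipeline sketch (hypothetical fringe matrix)
  library(fpc)      # dbscan()
  library(mclust)   # Mclust()
  set.seed(1)
  fringes <- matrix(sample(0:64, 60, replace = TRUE), ncol = 2)
  km <- kmeans(fringes, centers = 3, nstart = 25)  # K-Means
  db <- dbscan(fringes, eps = 6, MinPts = 3)       # DBSCAN; eps from a k-NN plot
  em <- Mclust(fringes)                            # EM; model selected by BIC
  labels <- list(kmeans = km$cluster,              # label vectors fed into the
                 dbscan = db$cluster,              # Internal and External
                 em = em$classification)           # index calculations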

Let us call the model development data Data Set 1 and the model validation data Data Set 2. The numbers of clusters per result for Data Set 2, as obtained from the K-Means, DBSCAN, and EM clustering techniques, are compared with those of Data Set 1 in Table 113.

In terms of the number of clusters per result, the findings shown in Table 113 verify what was observed when the clustering techniques were applied to Data Set 1 in the previous chapters. To briefly reiterate, also for the larger Data Set 2, K-Means gave the maximum number of clusters per result when compared to EM and DBSCAN.

With regards to DBSCAN, the results obtained with this data sample are similar to the example from Chapter 7. Hence, also for the larger Data Set 2, the DBSCAN algorithm did not produce more than 1 or 2 clusters per result, and in most cases the Noise clusters contained the distinct students (outliers) that represented the minority subset of the data set used. With regards to EM, in this case too, “overfitting” of the data in the clusters per result occurred due to the poor goodness-of-fit measure.


Table 113: Generalization Example Quantitative Analysis with Comparison

Data Sets | Data Set 2 (model validation): K-Means, EM, DBSCAN | Data Set 1 (model development): K-Means, EM, DBSCAN

Pretest

Control Inner Fringes 2 1 1(+Noise) 3 2 1(+Noise)

Outer Fringes 2 1 1(+Noise) 2 3 1(+Noise)

Treatment Inner Fringes 5 4 2(+Noise) 5 4 1(+Noise)

Outer Fringes 4 3 1(+Noise) 6 3 1(+Noise)

Posttest

Control Inner Fringes 3 2 1(+Noise) 2 2 1

Outer Fringes 3 3 1(+Noise) 2 1 1(+Noise)

Treatment Inner Fringes 4 3 1(+Noise) 4 2 2

Outer Fringes 4 4 1(+Noise) 2 3 1(+Noise)


A detailed account of the contents of the clusters per result can be found in

Appendix F: Generalization Example Details.

Next, the Internal indices of the clustering results obtained from the K-Means, DBSCAN, and EM clustering techniques were calculated; they are shown in Table 114 and Table 115.

In terms of Internal indices, the findings shown in Table 114 and Table 115 verify what was observed when the clustering techniques were applied to Data Set 1 in the previous chapters. Most of the EM clustering results have somewhat similar compactness and separation to the K-Means results. On the other hand, the K-Means clustering results have better DB and DVI measures than the EM results. Therefore, as observed before, the K-Means clustering results are of better quality than those of EM and DBSCAN. Furthermore, also in this example, the outer fringes results gave better Internal indices than the inner fringes ones. This further supports the view that outer fringes give more efficient and optimized feedback to teachers as to what to teach the students next.

Furthermore, the overall compactness of the K-Means clusters is better than that of the DBSCAN clusters, as their CP values are lower. The overall separation of the DBSCAN clusters is better than that of the K-Means clusters, as the DBSCAN results’ SP values are higher. The overall DB measures of the K-Means clustering results are better than those of the DBSCAN clusters, as they are lower. Moreover, the overall DVI of the K-Means clusters is better than that of the DBSCAN clusters, as it is higher for the equivalent data set in most cases.
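For reference, the two indices driving these observations can be computed as in the following sketch; davies.bouldin is a hypothetical helper implementing the standard Davies-Bouldin definition over a toy data matrix X and label vector cl, while the Dunn index (DVI) is taken from fpc::cluster.stats [35]. This is a sketch of the textbook formulas, not the thesis scripts.

  # DB and DVI on toy data (standard definitions)
  library(fpc)
  davies.bouldin <- function(X, cl) {
    ks <- sort(unique(cl))
    cen <- t(sapply(ks, function(k) colMeans(X[cl == k, , drop = FALSE])))
    S <- sapply(ks, function(k) {              # mean distance to own centroid
      Xi <- X[cl == k, , drop = FALSE]
      mean(sqrt(rowSums(sweep(Xi, 2, cen[which(ks == k), ])^2)))
    })
    M <- as.matrix(dist(cen))                  # centroid-to-centroid distances
    R <- outer(S, S, "+") / M
    diag(R) <- -Inf                            # exclude the i = j case
    mean(apply(R, 1, max))                     # lower is better
  }
  set.seed(1)
  X <- matrix(rnorm(40), ncol = 2)
  cl <- rep(1:2, each = 10)
  davies.bouldin(X, cl)
  cluster.stats(dist(X), cl)$dunn              # Dunn index (DVI); higher is better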

For K-Means clustering, Figure 24 and Figure 25 were constructed to check if

there is a certain pattern in how the Internal indices behave when using different data

sets.

For both data sets, outer fringes gave better SP and DVI values than inner fringes. However, while in Data Set 1 the inner fringes had better CP and DB values than the outer fringes, the same does not hold for Data Set 2. This indicates that when data is collected from different places, the indices of goodness will differ. Hence, after looking at the results for the two data sets, for the approach to generalize, the Internal indices have to be examined to decide whether it is better to use inner fringes or outer fringes.


Table 114: Generalization Example Quantitative Analysis – Internal Indices – CP and SP

Internal Indices | Data Sets | Data Set 2 (model validation): K-Means, EM, DBSCAN | Data Set 1 (model development): K-Means, EM, DBSCAN

*↓CP

Pretest

Control Inner Fringes 2.6373 N/A 2.1227 0.3195 5.6036 5.6036

Outer Fringes 0.8432 N/A 0.8432 1.1142 1.1142 1.1142

Treatment Inner Fringes 0.9633 0.0167 1.6557 0.0025 0.264 4.9521

Outer Fringes 0.6033 0.6623 0.8452 0.4927 0.9857 0.5501

Posttest

Control Inner Fringes 0.6611 5.408 5.408 0.2849 0.2849 N/A

Outer Fringes 0.0388 0.0388 1.0206 1.0173 N/A 1.0173

Treatment Inner Fringes 0.0221 0.0228 2.2292 0.0617 4.3708 4.3708

Outer Fringes 0.2098 0.2108 0.2226 0.5802 0.435 0.4505

+↑SP

Pretest

Control Inner Fringes 9.9714 N/A 10.583 13.7538 33.6538 33.6538

Outer Fringes 30.6767 N/A 30.6767 18.9804 18.9804 18.9804

Treatment Inner Fringes 13.4064 11.7236 12.8988 12.5333 12.9555 36.0643

Outer Fringes 30.7511 31.1283 26.6928 25.0903 30.8906 25.3725

Posttest

Control Inner Fringes 11.4372 42.0811 42.0811 10.7037 10.7037 N/A

Outer Fringes 11.5894 11.5894 31.1138 15.0385 N/A 15.0385

Treatment Inner Fringes 12.8359 12.8355 47.0094 12.7876 44.1471 44.1471

Outer Fringes 22.4759 22.4825 22.4366 30.6027 14.785 14.6853

*↓ means the less the value the better.

+↑ means the greater the value the better.

N/A means not applicable as the clustering result contains only one cluster.


Table 115: Generalization Example Quantitative Analysis – Internal Indices – DB and DVI

Internal Indices | Data Sets | Data Set 2 (model validation): K-Means, EM, DBSCAN | Data Set 1 (model development): K-Means, EM, DBSCAN

*↓DB

Pretest

Control Inner Fringes 0.2939 N/A 0.5364 0.2127 0.6417 0.6417

Outer Fringes 0.032 N/A 0.032 0.4793 0.4793 0.4793

Treatment Inner Fringes 0.0046 0.0357 0.0418 0.0441 0.2152 0.6028

Outer Fringes 0.0038 0.0648 0.3434 0.0018 0.2535 0.5134

Posttest

Control Inner Fringes 0.0149 0.1285 0.1285 0.0532 0.0532 N/A

Outer Fringes 0.0017 0.0017 0.0333 0.0677 N/A 0.0677

Treatment Inner Fringes 0.0042 0.0571 0.1447 0.0018 0.1594 0.1594

Outer Fringes 0.0012 0.0446 0.6759 0.019 0.0505 1.1736

+↑DVI

Pretest

Control Inner Fringes 0.175 N/A 0.1489 0.4375 1.25 1.25

Outer Fringes 7 N/A 7 1 1 1

Treatment Inner Fringes 1 0.25 0.0968 0.3333 0.1875 0.625

Outer Fringes 1.3333 1.5 0.0455 4 0.6 0.15

Posttest

Control Inner Fringes 1.75 3.2727 3.2727 1.75 1.75 N/A

Outer Fringes 1 1 7 7 N/A 7

Treatment Inner Fringes 1.5 0.375 2.25 1 3.6 3.6

Outer Fringes 2 0.125 0.0455 3.4286 0.5 0.0714

*↓ means the less the value the better.

+↑ means the greater the value the better.

N/A means not applicable as the clustering result contains only one cluster.


Figure 24: K-Means Results Internal Indices Comparison for Data Set 1

Figure 25: K-Means Results Internal Indices Comparison for Data Set 2

Next, the External indices of the clustering results obtained from K-Means,

DBSCAN, and EM clustering techniques as compared to knowledge states clustering

were calculated and are shown in Table 116 and Table 117.


Table 116: Generalization Example Qualitative Analysis – External Indices as compared to KS Clustering – CA and Entropy

External Indices | Data Sets | Data Set 2 (model validation): K-Means, EM, DBSCAN | Data Set 1 (model development): K-Means, EM, DBSCAN

*↑CA

Pretest

Control Inner Fringes 1 0.4492 1 0.9445 0.5 1

Outer Fringes 0.7487 0.4492 1 0.5370 0.537 1

Treatment Inner Fringes 0.8602 0.9984 1 0.9932 0.9595 0.9594

Outer Fringes 0.974 0.974 1 1 0.5811 0.9932

Posttest

Control Inner Fringes 0.8931 0.5187 1 0.9630 0.963 0.963

Outer Fringes 0.9893 0.9893 1 1 0.963 1

Treatment Inner Fringes 1 0.9821 1 1 0.9797 1

Outer Fringes 0.8846 1 1 0.6756 0.9932 1

+↓Entropy

Pretest

Control Inner Fringes 0 -1.5402 0 -0.2254 -1.1885 0

Outer Fringes -0.6664 -1.5402 0 -0.9442 -1.0563 0

Treatment Inner Fringes -0.449 -0.0094 0 -0.0135 -0.1954 -0.2359

Outer Fringes -0.1678 -0.1681 0 0 -1.0289 -0.0317

Posttest

Control Inner Fringes -0.3678 -1.3566 0 -0.1905 -0.1905 -0.2284

Outer Fringes -0.0705 -0.0705 0 0 -0.2284 0

Treatment Inner Fringes 0 -0.0758 0 0 -0.1216 0

Outer Fringes -0.5108 0 0 -0.9014 -0.0186 0

*↑ means the greater the value, the more resemblance between fringes and KS clustering.

+↓ means the less the value, the more resemblance between fringes and KS clustering.

N/A means not applicable, as the clustering result contains only one cluster.


Table 117: Generalization Example Qualitative Analysis – External Indices as compared to KS Clustering – NMI and ARI

External Indices | Data Sets | Data Set 2 (model validation): K-Means, EM, DBSCAN | Data Set 1 (model development): K-Means, EM, DBSCAN

*↑NMI

Pretest

Control Inner Fringes 1 N/A N/A 0.8413 0.3177 N/A

Outer Fringes 0.175 N/A N/A 0.4969 0.476 N/A

Treatment Inner Fringes 0.3392 0.6313 N/A 0.5147 0.8748 0.3577

Outer Fringes 0.8528 0.7845 N/A 0.8812 0.3885 0.8704

Posttest

Control Inner Fringes 0.8367 0.0977 N/A 0.0796 0.0796 N/A

Outer Fringes 0.9558 0.9558 N/A 1 N/A 1

Treatment Inner Fringes 0.9501 0.3253 N/A 0.9366 0.3609 N/A

Outer Fringes 0.2333 0.9362 N/A 0.0694 0.7653 N/A

↑ARI

Pretest

Control Inner Fringes 1 0 1 0.8940 0.1033 1

Outer Fringes 0.0431 0 1 0.1712 0.1608 1

Treatment Inner Fringes 1 1 1 0.8665 0.9454 0.6464

Outer Fringes 0.8878 0.8961 1 1 0.1458 1

Posttest

Control Inner Fringes 0.8336 0.0218 1 0.0028 0.0028 1

Outer Fringes 0.9786 0.9786 1 1 0 1

Treatment Inner Fringes 0.93 0.278 1 1 0.5492 1

Outer Fringes 0.2897 1 1 0.0262 0.8817 1

*↑ means the greater the value, the more resemblance between fringes and KS clustering.

+↓ means the less the value, the more resemblance between fringes and KS clustering.

N/A means not applicable, as the clustering result contains only one cluster.


In terms of clustering validation using External indices, the findings shown in Table 116 and Table 117 verify what was observed when comparing the fringes clustering results of Data Set 1 with the knowledge states clustering results. Looking at the CA, NMI, and ARI indices for the Data Set 2 results, even though in most applicable cases these measures indicate a very high resemblance between the K-Means/EM/DBSCAN fringes clustering results and their corresponding knowledge states clustering results (i.e., CA/NMI/ARI > 80%), using only knowledge states rather than fringes to obtain information about the learners’ learning progress is not sufficient. This is evident in some K-Means cases, such as the Post-test Treatment Outer fringes clusters, where the NMI is 23.33% and the ARI is 28.97%. It is also evident in some EM cases, such as the Post-test Treatment Inner fringes clusters, where the NMI is 32.53% and the ARI is 27.8%.
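For completeness, the NMI and ARI figures quoted here can be reproduced from two label vectors as in the sketch below; adj.rand.index() comes from the fossil package [38], and the nmi() helper is a hypothetical implementation using one common normalization (the geometric mean of the two entropies).

  # NMI and ARI between two toy partitions
  library(fossil)
  nmi <- function(a, b) {
    tab <- table(a, b); n <- sum(tab)
    pij <- tab / n
    pa <- rowSums(pij); pb <- colSums(pij)
    mi <- sum(ifelse(pij > 0, pij * log(pij / outer(pa, pb)), 0))
    ha <- -sum(ifelse(pa > 0, pa * log(pa), 0))
    hb <- -sum(ifelse(pb > 0, pb * log(pb), 0))
    mi / sqrt(ha * hb)               # 1 means identical partitions
  }
  a <- c(1, 1, 2, 2, 3, 3)           # toy fringes clustering labels
  b <- c(1, 1, 2, 3, 3, 3)           # toy knowledge-states labels
  nmi(a, b)
  adj.rand.index(a, b)               # adjusted Rand index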

Finally, the External indices comparing the K-Means, DBSCAN, and EM clustering results with the grouping of students into quartiles, as dictated by their overall scores for the NUMBERS unit, were calculated and are shown in Table 118 and Table 119.
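A minimal sketch of this quartile grouping, assuming a hypothetical vector of overall unit scores:

  # Grouping students into quartiles of their overall unit scores
  set.seed(1)
  unit.scores <- runif(100)
  qs <- quantile(unit.scores, probs = c(0, 0.25, 0.5, 0.75, 1))
  quartile.group <- cut(unit.scores, breaks = qs, include.lowest = TRUE,
                        labels = c("Q1", "Q2", "Q3", "Q4"))
  table(quartile.group)   # roughly 25 students per quartile

The resulting quartile labels are then compared with the fringes cluster labels using the same External indices (CA, Entropy, NMI, and ARI).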

In terms of clustering validation using External indices, the findings shown in Table 118 and Table 119 verify what was observed in the previous clustering results for Data Set 1 when comparing the fringes clustering results with the quartiles grouping results. As compared to the quartiles grouping, the overall purity of each K-Means/EM/DBSCAN fringes clustering result, in both the Pre-test and the Post-test, is less than 40% in the majority of cases. The same applies to the Entropy measure, whose magnitude is greater than 1 in all of the latter data sets. These values indicate and confirm the irrelevancy between the knowledge level of the learners and their corresponding unit scores.

Moreover, most of the NMI and ARI values between the fringes clustering results and the quartiles groups are less than 20%, indicating a large discrepancy between grouping based on score quartiles and K-Means/EM/DBSCAN clustering based on fringes.


Table 118: Generalization Example Qualitative Analysis – External Indices as compared to Quartiles – CA and Entropy

External Indices | Data Sets | Data Set 2 (model validation): K-Means, EM, DBSCAN | Data Set 1 (model development): K-Means, EM, DBSCAN

*↑CA

Pretest

Control Inner Fringes 0.3529 0.2567 0.3476 0.5 0.5 0.2963

Outer Fringes 0.3529 0.2567 0.3529 0.2963 0.537 0.2963

Treatment Inner Fringes 0.3968 0.3968 0.3903 0.473 0.9595 0.2905

Outer Fringes 0.3236 0.3236 0.3122 0.3244 0.5811 0.3244

Posttest

Control Inner Fringes 0.3316 0.2621 0.2621 0.5185 0.963 0.2593

Outer Fringes 0.3369 0.3369 0.2887 0.2963 0.963 0.2963

Treatment Inner Fringes 0.3398 0.3382 0.3057 0.4595 0.9797 0.2635

Outer Fringes 0.309 0.3073 0.3057 0.2635 0.9932 0.277

+↓Entropy

Pretest

Control Inner Fringes -1.9012 -1.9995 -1.9166 -1.524 -1.1885 0.2963

Outer Fringes -1.903 -1.9995 -1.903 -1.9349 -1.0563 0.2963

Treatment Inner Fringes -1.8345 -1.8425 -1.8576 -1.5125 -0.1954 0.2905

Outer Fringes -1.9174 -1.9262 -1.9558 -1.8535 -1.0289 0.3244

Posttest

Control Inner Fringes -1.9404 -1.9783 -1.9783 -1.4784 -0.1905 0.2593

Outer Fringes -1.9302 -1.9302 -1.9712 -1.9239 -0.2284 0.2963

Treatment Inner Fringes -1.9475 -1.951 -1.9806 -1.4818 -0.1216 0.2635

Outer Fringes -1.97 -1.978 -1.9796 -1.9726 -0.0186 0.277

*↑ means the greater the value, the more resemblance between fringes and Quartiles.

+↓ means the less the value, the more resemblance between fringes and Quartiles.

N/A means not applicable, as the clustering result contains only one cluster.


Table 119: Generalization Example Qualitative Analysis – External Indices as compared to Quartiles – NMI and ARI

External Indices | Data Sets | Data Set 2 (model validation): K-Means, EM, DBSCAN | Data Set 1 (model development): K-Means, EM, DBSCAN

*↑NMI

Pretest

Control Inner Fringes 0.0772 N/A 0.0644 0.3078 -1.1885 0.1112

Outer Fringes 0.0734 N/A 0.0734 0.0816 -1.0563 0.0816

Treatment Inner Fringes 0.1055 0.0865 0.0928 0.2939 -0.1954 0.1075

Outer Fringes 0.0704 0.0655 0.0366 0.1458 -1.0289 0.1732

Posttest

Control Inner Fringes 0.0404 0.0514 0.0514 0.3682 -0.1905 N/A

Outer Fringes 0.0419 0.0419 0.0287 0.1112 -0.2284 0.1112

Treatment Inner Fringes 0.0302 0.0273 0.0037 0.3573 -0.1216 0.0417

Outer Fringes 0.0182 0.0062 0.0041 0.0604 -0.0186 0.0684

↑ARI

Pretest

Control Inner Fringes 0.0367 0 0.2383 0.1973 0.1033 0.0434

Outer Fringes 0.046 0 0.2561 0.0022 0.1608 0.0634

Treatment Inner Fringes 0.3943 0.0716 0.3496 0.4230 0.9454 0.0416

Outer Fringes 0.0149 0.0167 0.1762 0.0484 0.1458 0.0798

Posttest

Control Inner Fringes 0.0182 0.0001 0.0137 0.2490 0.0028 0

Outer Fringes 0.022 0.022 0.1175 0.0011 0 0.0434

Treatment Inner Fringes 0.0055 0.0055 0.0153 0.2234 0.5492 0.0001

Outer Fringes 0.0016 0.0016 0.0379 0.0004 0.8817 0.0415

*↑ means the greater the value, the more resemblance between fringes and Quartiles.

+↓ means the less the value, the more resemblance between fringes and Quartiles.

N/A means not applicable, as the clustering result contains only one cluster.


After running the thesis model on the new, larger data sample of Grade 2 students’ scores in the NUMBERS unit (Data Set 2), the overall results obtained, and the observations and findings made, highly correspond to the clustering examples using Data Set 1 in Chapters 6, 7, and 8. The findings include:

• K-Means is the better choice for clustering fringes in this model.

• In general, deciding which fringes are the better choice for providing efficient and personalized/optimized feedback to educational administrators can be deduced by looking at the Internal indices of the clustering results.

• Clustering and grouping students based only on their knowledge states (without fringes) and/or quartiles is not the same as clustering students based on inner and outer fringes (what they learnt recently and what they are ready to learn next given their current knowledge state). This can be deduced from the External indices of the clustering results.

• If the data is collected from different places, with students of a different nature, at different snapshots in time, the indices of goodness will also differ.

Therefore, these findings support and validate the generalizability of the approach proposed in this thesis.

11.2. External Key Insights

With regards to the external insights, an overall pragmatic analysis of the results is done to identify potential correlations between certain characteristics of the teachers/students/schools and the fringes clustering results. Here, a pairwise comparison is made between the K-Means clustering results and certain characteristics of the sample, such as the geographical locations of the students under study, the gender of the teachers, the school in which each student is enrolled, and the school’s Grade 2 enrollment size. The measures used to extract the external insights about the model results are the External indices used in earlier chapters.

The insights are given for the clustering results obtained from the model validation data set only, as the school information was made available for this data set.

Firstly, the pairwise comparison was done between the clustering results and the school in which each student is enrolled, as shown in Table 120. In addition, using [46], the geographical locations (latitude and longitude) of the clustering results were obtained for both districts in which the schools are located; they are shown in Figure 26, Figure 27, Figure 28, and Figure 29. Figure 26 and Figure 27 are for the first district, Vehari, and Figure 28 and Figure 29 are for the second district, Mandi Bahauddin.
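The maps themselves can be produced along the following lines with ggmap [46]. The schools data frame and map coordinates below are hypothetical placeholders, and recent ggmap releases require a map-provider API key; this is a sketch of the plotting step, not the thesis scripts.

  # Plotting clustered schools on a district map (hypothetical coordinates)
  library(ggmap)   # also attaches ggplot2
  schools <- data.frame(lon = c(72.35, 72.40, 73.50),
                        lat = c(30.04, 30.10, 32.59),
                        cluster = factor(c(1, 1, 2)))
  basemap <- get_map(location = c(lon = 72.9, lat = 31.3), zoom = 7)
  ggmap(basemap) +
    geom_point(data = schools, aes(x = lon, y = lat, colour = cluster),
               size = 3)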

Table 120: K-Means and School Pairwise Comparison

K-Means Clusters <-> School *↑CA +↓Entropy ↑NMI ↑ARI

Pretest

Control Inner Fringes 0.0904 -4.0872 0.2025 1

Outer Fringes 0.096 -4.115 0.1841 1

Treatment Inner Fringes 0.0453 -5.7092 0.2044 0.3752

Outer Fringes 0.0296 -5.9789 0.1446 0.3436

Posttest

Control Inner Fringes 0.1073 -3.9851 0.2252 1

Outer Fringes 0.1469 -3.7991 0.2745 1

Treatment Inner Fringes 0.0366 -5.945 0.1605 0.3663

Outer Fringes 0.0244 -6.1548 0.0998 0.3787

*↑ means the greater the value, the more resemblance between K-Means fringes clustering and school names.

+↓ means the less the value, the more resemblance between K-Means fringes clustering and school names.

It can be seen from the CA, NMI, and most of the ARI values that there is very little relevancy between the schools the learners are in and the K-Means fringes clustering results. The CA is less than 10% and the NMI is less than 30%. This is especially true for the results of the Pre-test Treatment Outer fringes (CA = 2.96%, NMI = 14.46%, ARI = 34.36%) and the Post-test Treatment Outer fringes (CA = 2.44%, NMI = 9.98%, ARI = 37.87%). Therefore, the school a student is in does not affect his/her knowledge of the topics mastered recently and the topics he/she is to be taught next.

As seen in the figures, in some cases similar clusters of the K-Means results seem to be located close to one another. This can be seen, for example, in the Post-test Control and Treatment Outer fringes results in the district of Vehari. However, it is not obvious whether there is a direct, apparent relationship between the clustering of the fringes and the geographical locations of the clustering results.

On the other hand, an educational administrator can look at the maps to identify where the different learners are located and which cluster most students belong to. For example, using Figure 29, in the case of the Post-test Treatment Outer fringes results, the educational administrator can deduce that most of the students belong to cluster 1, and so most of them are ready to learn the topics of cluster 1. After this deduction, the educational administrator can pass this feedback to the teachers in that region so that they can take appropriate teaching decisions.


Figure 26: Geographical Location of Clustering Results for Inner Fringes – Vehari District


Figure 27: Geographical Location of Clustering Results for Outer Fringes – Vehari District


Figure 28: Geographical Location of Clustering Results for Inner Fringes – Mandi Bahauddin District


Figure 29: Geographical Location of Clustering Results for Outer Fringes – Mandi Bahauddin District


Next, the pairwise comparison was done between the clustering results and the school types, as shown in Table 121. School type refers to whether it is a boys’ school or a girls’ school. If it is a boys’ school, the teacher gender is male, and if it is a girls’ school, the teacher gender is female.

Table 121: K-Means and School Type (Teacher Gender) Pairwise Comparison

K-Means Clusters <-> School Type *↑CA +↓Entropy ↑NMI ↑ARI

Pretest

Control Inner Fringes 0.661 -0.8879 0.0409 0.0267

Outer Fringes 0.661 -0.9183 0.0062 0.0223

Treatment Inner Fringes 0.5871 -0.9698 0.0075 0.0038

Outer Fringes 0.5871 -0.9713 0.0084 0.0031

Posttest

Control Inner Fringes 0.661 -0.9222 0.0016 0.0034

Outer Fringes 0.661 -0.9205 0.003 0.0137

Treatment Inner Fringes 0.5888 -0.9634 0.0181 0.0144

Outer Fringes 0.5888 -0.9683 0.021 0.0061

*↑ means the greater the value, the more resemblance between K-Means fringes clustering and teacher gender.

+↓ means the less the value, the more resemblance between K-Means fringes clustering and teacher gender.

It can be seen from the CA, NMI, and ARI values that there is very little relevancy between the teacher’s gender and the K-Means fringes clustering results. The CA is less than 60% in most cases, and the NMI and ARI are only a few percent. The Post-test results are a good indication of this finding, as the Post-test gives the assessment scores of students after teacher intervention in the students’ learning process. For example, this can be seen in the results of the Post-test Treatment Inner fringes (CA = 58.88%, NMI = 1.81%, ARI = 1.44%) and the Post-test Treatment Outer fringes (CA = 58.88%, NMI = 2.1%, ARI = 0.61%). Therefore, the gender of the teacher or of the students in the school does not affect the students’ knowledge of the topics they mastered recently and the topics they are to be taught next.

Finally, the pairwise comparison was done between the clustering results and the school’s Grade 2 enrollment size, as shown in Table 122.

It can be seen from the CA and NMI values that there is very little relevancy between the school’s Grade 2 enrollment size and the K-Means fringes clustering results. The CA is less than 15% in most cases and the NMI is less than 30%. For example, this can be seen in the results of the Post-test Treatment Inner fringes (CA = 9.59%, NMI = 10.6%) and the Post-test Treatment Outer fringes (CA = 8.71%, NMI = 7.33%). Therefore, the school’s enrollment size per class does not affect the students’ knowledge of the topics they mastered recently and the topics they are to be taught next.

Table 122: K-Means and School Grade 2 Enrollment size Pairwise Comparison

K-Means Clusters <-> Enrollment Size *↑CA +↓Entropy ↑NMI ↑ARI

Pretest

Control Inner Fringes 0.1243 -3.8406 0.1686 N/A

Outer Fringes 0.1243 -3.84 0.1654 N/A

Treatment Inner Fringes 0.0819 -4.7333 0.1118 0.75

Outer Fringes 0.0906 -4.8528 0.0855 N/A

Posttest

Control Inner Fringes 0.13 -3.6914 0.2195 N/A

Outer Fringes 0.1695 -3.5221 0.2653 N/A

Treatment Inner Fringes 0.0959 -4.8152 0.106 N/A

Outer Fringes 0.0871 -4.9319 0.0733 N/A

*↑ means the greater the value, the more resemblance between K-Means fringes clustering and school enrollment.

+↓ means the less the value, the more resemblance between K-Means fringes clustering and school enrollment.

N/A means there is no correspondence between the clustering results and the enrollment sizes.

To summarize and reiterate, the following four external key insights can be made about the fringes clustering results and the properties of the schools and teachers with which the learners are associated:

1) The school in which a student is enrolled does not affect the students’ knowledge of the topics they mastered recently and the topics they are to be taught next.

2) There is no direct, relevant relation between the fringes clustering results and the geographical locations they are in. This may already be deduced from the first insight about the school name, as the geographical location corresponds to the location of the school.

3) The gender of the teacher or of the students in the school does not affect the students’ knowledge of the topics they mastered recently and the topics they are to be taught next.

4) The school’s enrollment size per class does not affect the students’ knowledge of the topics they mastered recently and the topics they are to be taught next.


Chapter 12: Conclusion and Future Research

To reiterate, the purpose of this thesis was to develop a model that helps instructors/teachers personalize and optimize in-class instruction to improve the learners’ knowledge-acquisition experience. Given a certain subject (such as Mathematics in the example), the learner’s recently mastered topics in the subject (inner fringes) and what he/she is ready to learn next (outer fringes) are first extracted from the learner’s knowledge levels. Next, the fringe sets of all the learners are clustered using different clustering algorithms. The quality of the clustering results is finally evaluated using the viable clustering evaluation measures (Internal and External indices).

Consequently, the major results from the approach are the following:

• For our data samples, Data Set 1 and Data Set 2, using the learners’ fringe sets, the K-Means algorithm gave better, more optimal clustering results than the DBSCAN and EM algorithms.

• In general, deciding which fringes are the better choice for providing efficient and personalized/optimized feedback to educational administrators can be deduced by looking at the Internal indices of the clustering results.

• The model was tested for generalizability using the larger Data Set 2, and its generalizability was positively validated. Hence, the approach is applicable to different types of data sets collected from different locations.

• For our cases, the higher the threshold score for passing the topics, the better the quality of the clustering results.

• There is very low resemblance/connection between the school name, geographical location, enrollment size per class, or teacher/student gender and the way students’ fringe sets are clustered.

The limitations of the approach are, firstly, that the KST construction has to be done by instructors familiar with the subject being taught. Secondly, the model is limited to subjects and skill sets that have hierarchical dependencies. For example, numeracy works fine, but literacy would not, as the connection between its topics is parallel rather than hierarchical.


In the future, other techniques could be examined and potentially integrated into this model, such as Item Response Theory, which would combine the learner’s knowledge level and his/her probability of correctly answering assessment questions. Moreover, other clustering algorithms, such as grid-based and hierarchy-based clustering algorithms, could be explored.


References

[1] P. Khatri, S. Gupta, K. Gulati, and S. Chauhan, “Talent management in HR,”

Journal of management and strategy, vol. 1, no. 1, pp. 39–46, 2010.

[2] J. Demers and R. Colman, "Skills management. (news and views).(new research

chair at the school of management of Universite du Quebec a Montreal) - CMA

management," in HighBeam Research, CMA Management, 2003. [Online].

Accessed: Mar. 1, 2015.

[3] “Bridging the Skills Gap: New Factors Compound the Growing Skills

Shortage,” American Society for Training and Development Std., 2010.

[4] D. Chambers, "The three sizes of business justify different small cell technical

solutions," in ThinkSmallCell, 2012. [Online]. Accessed: Apr. 5, 2015.

[5] L. Miller, “2014 State of the Industry Report: Spending on Employee Training

Remains a Priority,” The Association for Talent Development’s (ATD), Rep.,

2014.

[6] M. Manley-Casimir, “The Teacher as Decision-Maker: Connecting Self with the

Practice of Teaching,” Childhood Education, vol. 65, no. 5, pp. 288, 1989.

[7] M. Al-A’ali, “Implementation of an Improved Adaptive Testing Theory,”

Journal of Educational Technology and Society, vol. 10, pp. 80, 2007.

[8] M. Erguven, “Two approaches to psychometric process: Classical test theory

and item response theory,” Journal of Education, vol. 2, no. 2, pp. 23-30, 2013.

[9] Y. Kurniawan and E. Halim, “Use data warehouse and data mining to predict

student academic performance in schools: A case study (perspective application

and benefits),” in 2013 IEEE International Conference on Teaching, Assessment

and Learning for Engineering (TALE), 2013, pp. 98 – 103.

[10] J. S. Kinnebrew, K. M. Loretz, and G. Biswas, “A contextualized, differential sequence mining method to derive students’ learning behavior patterns,” JEDM – Journal of Educational Data Mining, vol. 5, no. 1, pp. 190–219, 2013.


[11] E. Şen, B. Uçar, and D. Delen, “Predicting and analyzing secondary education placement-test scores: A data mining approach,” Expert Systems with Applications, vol. 39, no. 10, pp. 9468–9476, 2012.

[12] K. Koedinger, J. Stamper, E. McLaughlin, and T. Nixon, "Using data-driven

discovery of better student models to improve student learning", in 16th

International Conference on Artificial Intelligence in Education, Memphis, TN,

United states, 2013, pp. 421-430.

[13] S. Antonenko, P. D. Toy, and D. S. Niederhauser, “Using cluster analysis for

data mining in educational technology research,” Educational Technology

Research and Development, vol. 60, pp. 383 – 398, 2012.

[14] A. Kunche, R. Kumar Puli, S. Guniganti, and D. Puli, “Analysis and Evaluation

of Training Effectiveness,” Human Resource Management Research, vol. 1, no.

1, pp. 1–7, 2011.

[15] R. M. Yasin, Y. F. A. Nur, C. R. Ridzwan, R. M. Bekri, R. A. A. Abd, I. I.

Mahazir, and H. T. Ashikin, “Learning Transfer at Skill Institutions’ and

Workplace Environment: A Conceptual Framework,” Asian Social Science, vol.

10, no. 1, p. 179, 2014.

[16] G. Coates, C. Thompson, A. Duffy, B. Hills, and I. Whitfield, “Modelling skill

competencies in engineering companies,” Engineering Designer, vol. 35, no. 5,

pp. 16-19, 2009.

[17] “Behavioral Competency Framework,” UAE Federal Authority for Government

Human Resources Std.

[18] D. Rodriguez, R. Patel, A. Bright, D. Gregory, and M. K. Gowing, “Developing

Competency Models To Promote Integrated Human Resource Practices,”

Human Resource Management, vol. 41, no. 3, pp. 309 – 324, 2002.

[19] Project Management Institute, A Guide to the Project Management Body of Knowledge (PMBOK Guide). Newtown Square, PA: Project Management Institute, Inc., 2008.


[20] J.-C. Falmagne, X. Hu, D. Eppstein, C. Doble, and D. Albert, Knowledge Spaces

Applications in Education. Berlin, Heidelberg: Springer Berlin Heidelberg,

2013.

[21] “Competence-based knowledge space theory,” in Css-kti.tugraz.at, 2008.

[Online]. Accessed: Nov. 1, 2014.

[22] “ALEKS (Assessment and LEarning in Knowledge Spaces),” in ALEKS, 2013.

[Online]. Accessed: Nov. 1, 2014.

[23] O. Sitthisak, L. Gilbert, and T. Soonklang, “Integrating Competence Models with Knowledge Space Theory for Assessment,” in 16th CAA International Computer Assisted Assessment Conference, Southampton, UK, 2013.

[24] V. Kumar, "An introduction to cluster analysis for data mining", CS Dept,

University of Minnesota, Minnesota, USA, 2000.

[25] A. Fahad, N. Alshatri, Z. Tari, A. Alamri, I. Khalil, A. Y. Zomaya, S. Foufou,

and A. Bouras, “A Survey of Clustering Algorithms for Big Data: Taxonomy

and Empirical Analysis,” IEEE Transactions on Emerging Topics in Computing,

vol. 2, no. 3, pp. 267–279, 2014.

[26] L. S. Vygotsky and M. Cole, Mind in society: the development of higher

psychological processes. Cambridge: Harvard University Press, 1978.

[27] C. Stahl, “Knowledge Space Theory,” in CRAN, 2011. [Online]. Accessed: Nov.

1, 2014.

[28] W. Revelle, “Package psych,” in CRAN, 2015. [Online]. Accessed: Dec. 7, 2015.

[29] J. Uebersax, "Intraclass Correlation and Variance Component Methods", John-

uebersax.com, 2000. [Online]. Accessed: Dec. 17, 2015.

[30] “National Curriculum for Mathematics Grades I–XII 2006,” Government of

Pakistan ministry of education Islamabad, 2006.


[31] “Academic grading in Pakistan,” in Research.omicsgroup.org, 2012. [Online].

Accessed: May 1, 2015.

[32] M. Charrad, N. Ghazzali, V. Boiteau, and A. Niknafs, “Package NbClust,” in

CRAN, 2012. [Online]. Accessed: Jan. 1, 2015.

[33] O. J. Oyelade, O. O. Oladipupo, and I. C. Obagbuwa, “Application of k-Means

Clustering algorithm for prediction of Students’ Academic Performance,”

International Journal of Computer Science and Information Security, vol. 7, no.

1, pp. 292–295, 2010.

[34] N. Ray, “Clustering”, CS Dept, University of Alberta, Alberta, Canada, 2009.

[35] C. Hennig, “Package fpc,” in CRAN, 2015. [Online]. Accessed: Nov. 1, 2015.

[36] K. Hornik and W. Böhm, “Package clue,” in CRAN, 2015. [Online]. Accessed: Nov. 1, 2015.

[37] J. Colby, “R Clustering ‘purity’ metric,” in Stackoverflow, 2012. [Online].

Accessed: Nov. 1, 2015.

[38] M. J. Vavrek, “Package fossil,” in CRAN, 2015. [Online]. Accessed: Nov. 1, 2015.

[39] M. Ester, H.-P. Kriegel, J. Sander, and X. Xu, “A density-based algorithm for discovering clusters in large spatial databases with noise,” in Second International Conference on Knowledge Discovery and Data Mining (KDD-96), 1996, pp. 226–231.

[40] M. Hahsler, “R Code for Chapter 8 of Introduction to Data Mining: Clustering,” in Michael.Hahsler, 2015. [Online]. Accessed: Jan. 1, 2016.

[41] C. Fraley and A. E. Raftery, “How Many Clusters? Which Clustering Method? Answers Via Model-Based Cluster Analysis,” The Computer Journal, vol. 41, no. 8, pp. 578–588, 1998.


[42] C. Fraley, A. E. Raftery, T. B. Murphy, and L. Scrucca, “mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation,” Department of Statistics, University of Washington, Technical Report, 2012.

[43] X. Anguera, T. Shinozaki, C. Wooters and J. Hernando, "Model Complexity

Selection and Cross-Validation EM Training for Robust Speaker

Diarization," 2007 IEEE International Conference on Acoustics, Speech and

Signal Processing - ICASSP '07, Honolulu, HI, 2007, pp. IV-273-IV-276.

[44] B. Umale and Nilav M., “Overview of K-means and Expectation Maximization Algorithm for Document Clustering,” International Journal of Computer Applications, vol. 63, no. 13, pp. 578–588, 2014.

[45] W. Huitt, Measurement and evaluation: Criterion- versus norm-referenced

testing. Valdosta, GA: Valdosta State University, 1996.

[46] D. Kahle and H. Wickham, “Package ‘ggmap,’” in CRAN, 2016. [Online]. Accessed: Apr. 25, 2016.


Appendix A: KST Details

Table 123: NUMBERS unit KST description

KST Topic

Letter Topic Letter Description

a Count numbers up to 100

b Identify simple Place Value

c Identify the place value of a specific digit in a 3-digit number

d Read numbers up to 999

e Count backward ten step down from any given number

f Arrange numbers up to 999, written in mixed form in

increasing or decreasing order

g Count and write in 10s (e.g. 10, 20, 30, etc.)

Table 124: NUMBERS unit KST Inner and Outer Fringes

Knowledge

State Inner Fringe Set Outer Fringe Set

ф [] [a]

A [a] [b,c]

B [b] [c,d,e]

C [c] [b,d,e]

D [d] [c]

E [d] [b]

F [b,c] [e]

G [b,c,e] [d]

H [d,e] [f]

I [f] [g]

J [g] []
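The fringes in Table 124 follow the standard KST definitions: the inner fringe of a state contains the items whose removal leaves another state in the structure, and the outer fringe contains the items whose addition yields another state. The following base-R sketch applies these definitions to a toy knowledge structure (not the NUMBERS structure above); the helper names are illustrative.

  # Inner and outer fringes of a state in a toy knowledge structure
  states <- list(character(0), "a", c("a", "b"), c("a", "c"), c("a", "b", "c"))
  is.state <- function(s, states) any(sapply(states, setequal, s))
  fringes <- function(K, states, topics) {
    inner <- Filter(function(q) is.state(setdiff(K, q), states), K)
    outer <- Filter(function(q) is.state(union(K, q), states),
                    setdiff(topics, K))
    list(inner = inner, outer = outer)
  }
  fringes(c("a", "b"), states, topics = c("a", "b", "c"))
  # inner: "b" (removing b leaves state {a}); outer: "c" ({a,b,c} is a state)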


Table 125: NUMBERS unit Inner and Outer Fringes Binary to Decimal Conversion

Knowledge State | Inner Fringe Set | Inner Fringe Set (Binary) | Inner Fringe Set (Decimal) | Outer Fringe Set | Outer Fringe Set (Binary) | Outer Fringe Set (Decimal)

ф [] [0,0,0,0,0,0,0] 0 [a] [1,0,0,0,0,0,0] 64

A [a] [1,0,0,0,0,0,0] 64 [b,c] [0,1,1,0,0,0,0] 48

B [b] [0,1,0,0,0,0,0] 32 [c,d,e] [0,0,1,1,1,0,0] 28

C [c] [0,0,1,0,0,0,0] 16 [b,d,e] [0,1,0,1,1,0,0] 44

D [d] [0,0,0,1,0,0,0] 8 [c] [0,0,1,0,0,0,0] 16

E [d] [0,0,0,1,0,0,0] 8 [b] [0,1,0,0,0,0,0] 32

F [b,c] [0,1,1,0,0,0,0] 48 [e] [0,0,0,0,1,0,0] 4

G [b,c,e] [0,1,1,0,1,0,0] 52 [d] [0,0,0,1,0,0,0] 8

H [d,e] [0,0,0,1,1,0,0] 12 [f] [0,0,0,0,0,1,0] 2

I [f] [0,0,0,0,0,1,0] 2 [g] [0,0,0,0,0,0,1] 1

J [g] [0,0,0,0,0,0,1] 1 [] [0,0,0,0,0,0,0] 0
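The binary encoding above treats topic a as the most significant of the seven bits. A short sketch of this conversion (a hypothetical helper, reproducing the values in Table 125):

  # Binary-to-decimal encoding of a fringe set over topics a..g
  fringe.to.decimal <- function(fringe, topics = letters[1:7]) {
    bits <- as.integer(topics %in% fringe)
    sum(bits * 2^(length(bits):1 - 1))   # a -> 64, b -> 32, ..., g -> 1
  }
  fringe.to.decimal(c("b", "c"))   # 48, matching state A's outer fringe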


Appendix B: Quartiles Details

Table 126: Pre-test Control Students Quartiles

Quartiles | Mean | Median | SD | CV | No. of Students

Q1 0.5386 0.5633 0.0728 0.1352 14

Q2 0.6848 0.6814 0.0405 0.0592 13

Q3 0.7813 0.7786 0.0221 0.0283 13

Q4 0.9260 0.9304 0.0622 0.0671 14

Table 127: Pre-test Treatment Students Quartiles

Quartiles | Mean | Median | SD | CV | No. of Students

Q1 0.5308 0.5502 0.0809 0.1525 37

Q2 0.6688 0.6605 0.0339 0.0508 37

Q3 0.7788 0.7769 0.0342 0.0439 38

Q4 0.9348 0.9429 0.0498 0.0532 36

Table 128: Post-test Control Students Quartiles

Quartiles | Mean | Median | SD | CV | No. of Students

Q1 0.5318 0.5833 0.1127 0.2119 14

Q2 0.7120 0.7212 0.0613 0.0861 13

Q3 0.8380 0.8357 0.0395 0.0472 13

Q4 0.9886 0.9901 0.0101 0.0102 14

Table 129: Post-test Treatment Students Quartiles

Quartiles | Mean | Median | SD | CV | No. of Students

Q1 0.6529 0.7037 0.1222 0.1871 37

Q2 0.8310 0.8095 0.0503 0.0606 37

Q3 0.9558 0.9595 0.9558 1.0000 37

Q4 0.9939 1.0000 0.0081 0.0082 37


Appendix C: K-Means Results Details

K-Means Results at 60% Threshold

Table 130: Pre-test Control Students Clusters Based on Inner Fringes

K-Means Clusters | Knowledge States: A B C D E F G H I J | Mean | Median | SD | CV | No. of Students

C1 0 1 0 0 0 0 0 0 0 0 0.4212 0.4212 N/A N/A 1

C4 0 0 0 2 3 0 0 0 0 0 0.5897 0.5924 0.0816 0.14 5

C5 0 0 1 0 0 0 0 21 0 0 0.6639 0.6507 0.1196 0.18 22

C2 2 0 0 0 0 0 2 0 0 0 0.7727 0.8073 0.1863 0.24 4

C3 0 0 0 0 0 0 0 0 0 22 0.8408 0.8155 0.1107 0.13 22

All 2 1 1 2 3 0 2 21 0 22 0.7327 0.7484 0.1539 0.21 54

Table 131: Post-test Control Students Clusters Based on Inner Fringes

K-Means Clusters | Knowledge States: A B C D E F G H I J | Mean | Median | SD | CV | No. of Students

C'1 0 0 0 0 1 0 0 0 0 0 0.3117 0.3117 N/A N/A 1

C'3 2 0 0 0 0 0 0 0 0 0 0.4214 0.4214 0.2255 0.5352 2

C'2 0 0 0 0 0 0 1 0 0 0 0.6458 0.6458 N/A N/A 1

C'4 0 0 1 0 0 0 0 22 0 0 0.6651 0.6167 0.1143 0.1718 23

C'5 0 0 0 0 0 0 0 0 0 27 0.9014 0.969 0.1023 0.1135 27

All 2 0 1 0 1 0 1 22 0 27 0.7673 0.7869 0.1841 0.24 54


Table 132: Pre-test Treatment Students Clusters Based on Inner Fringes

K-Means Clusters | Knowledge States: A B C D E F G H I J | Mean | Median | SD | CV | No. of Students

C7 7 0 0 0 0 0 0 0 0 0 0.447 0.4907 0.1356 0.3034 7

C2 0 3 0 0 0 0 0 0 0 0 0.566 0.569 0.0356 0.0628 3

C6 0 0 0 2 6 0 0 0 0 0 0.5823 0.5614 0.09 0.1546 8

C1 0 0 5 0 0 0 0 0 0 0 0.5928 0.601 0.1276 0.2152 5

C3 0 0 0 0 0 0 0 51 0 0 0.6526 0.6367 0.098 0.1502 51

C4 0 0 0 0 0 0 0 0 71 0 0.8341 0.8095 0.1089 0.1305 71

C5 0 0 0 0 0 1 2 0 0 0 0.895 0.9085 0.0332 0.0371 3

All 7 3 5 2 6 1 2 51 71 0 0.7273 0.7197 0.1568 0.22 148

Table 133: Post-test Treatment Students Clusters Based on Inner Fringes

K-Means Clusters | Knowledge States: A B C D E F G H I J | Mean | Median | SD | CV | No. of Students

C'3 4 0 0 0 0 0 0 0 0 0 0.4984 0.5194 0.2531 0.5078 4

C'2 0 0 1 0 0 0 0 0 0 0 0.5241 0.5241 N/A N/A 1

C'1 0 0 0 0 0 1 4 0 0 0 0.7166 0.7087 0.0805 0.1124 5

C'4 0 0 0 0 0 0 0 42 0 0 0.7271 0.7571 0.1 0.1375 42

C'5 0 0 0 0 0 0 0 0 96 0 0.9417 0.9632 0.0732 0.0778 96

All 4 0 1 0 0 1 4 42 96 0 0.8584 0.9261 0.1489 0.17 148


Table 134: Pre-test Control Students Clusters Based on Outer Fringes

K-Means Clusters | Knowledge States: A B C D E F G H I J | Mean | Median | SD | CV | No. of Students

C3 0 1 0 0 3 0 0 0 0 0 0.5809 0.62 0.1106 0.1904 4

C2 2 0 1 0 0 0 0 0 0 0 0.6969 0.6952 0.1523 0.2185 3

C1 0 0 0 2 0 0 2 0 0 0 0.7241 0.7549 0.2386 0.3296 4

C4 0 0 0 0 0 0 0 21 0 22 0.7501 0.7571 0.1457 0.1943 43

All 2 1 1 2 3 0 2 21 0 22 0.7327 0.7484 0.1539 0.21 54

Table 135: Post-test Control Students Clusters Based on Outer Fringes

K-Means Clusters | Knowledge States: A B C D E F G H I J | Mean | Median | SD | CV | No. of Students

C'1 0 0 0 0 1 0 0 0 0 0 0.3117 0.3117 N/A N/A 1

C'3 2 0 1 0 0 0 0 0 0 0 0.4762 0.5808 0.1856 0.3897 3

C'2 0 0 0 0 0 0 1 0 0 0 0.6458 0.6458 N/A N/A 1

C'4 0 0 0 0 0 0 0 22 0 0 0.6687 0.6219 0.1156 0.1729 22

C'5 0 0 0 0 0 0 0 0 0 27 0.9014 0.969 0.1023 0.1135 27

All 2 0 1 0 1 0 1 22 0 27 0.7673 0.7869 0.1841 0.24 54


Table 136: Pre-test Treatment Students Clusters Based on Outer Fringes

K-Means Clusters | Knowledge States: A B C D E F G H I J | Mean | Median | SD | CV | No. of Students

C5 7 0 0 0 0 0 0 0 0 0 0.447 0.4907 0.1356 0.3034 7

C7 0 3 0 0 6 0 0 0 0 0 0.5652 0.5369 0.0767 0.1356 9

C1 0 0 5 0 0 0 0 0 0 0 0.5928 0.601 0.1276 0.2152 5

C4 0 0 0 2 0 0 0 0 0 0 0.6351 0.6351 0.0695 0.1094 2

C2 0 0 0 0 0 0 0 51 0 0 0.6526 0.6367 0.098 0.1502 51

C3 0 0 0 0 0 0 0 0 71 0 0.8341 0.8095 0.1089 0.1305 71

C6 0 0 0 0 0 1 0 0 0 0 0.8571 0.8571 N/A N/A 1

C8 0 0 0 0 0 0 2 0 0 0 0.9139 0.9139 0.0076 0.0083 2

All 7 3 5 2 6 1 2 51 71 0 0.7273 0.7197 0.1568 0.22 148

Table 137: Post-test Treatment Students Clusters Based on Outer Fringes

K-Means Clusters | Knowledge States: A B C D E F G H I J | Mean | Median | SD | CV | No. of Students

C'2 4 0 1 0 0 0 0 0 0 0 0.5036 0.5241 0.2195 0.4359 5

C'3 0 0 0 0 0 0 4 0 0 0 0.6848 0.6975 0.0434 0.0633 4

C'4 0 0 0 0 0 0 0 42 0 0 0.7271 0.7571 0.1 0.1375 42

C'1 0 0 0 0 0 1 0 0 0 0 0.844 0.844 N/A N/A 1

C'5 0 0 0 0 0 0 0 0 96 0 0.9417 0.9632 0.0732 0.0778 96

All 4 0 1 0 0 1 4 42 96 0 0.8584 0.9261 0.1489 0.17 148


Appendix D: DBSCAN Results Details

DBSCAN Data Sets Epsilons and k-NN Plots

Table 138: DBSCAN ε for Pre-test Control Students Inner Fringes Data Set

MinPts Epsilon Best_Eps No_of_Clusters Noise Perc_Noise

3 4 No 1 28 51.9%

3 11 Yes 1 2 3.7%

3 20 No 1 0 0%

3 36 No 1 0 0%

Table 139: DBSCAN ε for Post-test Control Students Inner Fringes Data Set

MinPts Epsilon Best_Eps No_of_Clusters Noise Perc_Noise

3 4 No 1 27 50%

3 11 Yes 1 0 0%

Table 140: DBSCAN ε for Pre-test Treatment Students Inner Fringes Data Set

MinPts Epsilon Best_Eps No_of_Clusters Noise Perc_Noise

3 4 No 1 84 56.8%

3 6 Yes 1 5 3.4%

3 20 No 1 0 0%

3 32 No 1 0 0%

Table 141: DBSCAN ε for Post-test Treatment Students Inner Fringes Data Set

MinPts Epsilon Best_Eps No_of_Clusters Noise Perc_Noise

3 4 No 2 98 66.2%

3 10 Yes 2 0 0%

3 36 No 1 0 0%

3 40 No 1 0 0%


Figure 30: Students Inner Fringes Data Set k-NN Plot.
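The k-NN plots above are the standard device for choosing ε: the distances to each point's k-th nearest neighbour are sorted and plotted, and ε is read off at the "knee" of the curve. A base-R sketch of that computation on hypothetical data:

  # Sorted k-NN distance curve used to pick DBSCAN's eps
  knn.dist <- function(X, k = 3) {
    d <- as.matrix(dist(X))
    sort(apply(d, 1, function(row) sort(row)[k + 1]))   # skip self-distance 0
  }
  set.seed(1)
  X <- matrix(sample(0:64, 60, replace = TRUE), ncol = 2)
  plot(knn.dist(X, k = 3), type = "l",
       xlab = "points sorted by distance", ylab = "3-NN distance")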

Table 142: DBSCAN ε for Pre-test Control Students Outer Fringes Data Set

MinPts Epsilon Best_Eps No_of_Clusters Noise Perc_Noise

3 2 Yes 1 3 5.6%

3 14 No 1 0 0%

3 24 No 1 0 0%

Table 143: DBSCAN ε for Post-test Control Students Outer Fringes Data Set

MinPts Epsilon Best_Eps No_of_Clusters Noise Perc_Noise

3 2 Yes 1 2 3.7%

3 14 No 1 0 0%

Table 144: DBSCAN ε for Pre-test Treatment Students Outer Fringes Data Set

MinPts Epsilon Best_Eps No_of_Clusters Noise Perc_Noise

3 1 Yes 1 10 6.8%

3 4 No 2 4 2.7%

3 6 No 2 3 2%

3 12 No 1 1 0.7%

3 16 No 1 0 0%


Table 145: DBSCAN ε for Post-test Treatment Students Outer Fringes Data Set

MinPts Epsilon Best_Eps No_of_Clusters Noise Perc_Noise

3 1 Yes 1 5 3.4%

3 2 No 1 3 2%

3 6 No 1 2 1.4%

3 28 No 1 0 0%

Figure 31: Students Outer Fringes Data Set k-NN Plot.

DBSCAN MinPts Variation (MinPts = 2, 3, 5, 10, and 20)

Table 146: Varying MinPts for Pre-test Control Students Inner Fringes Clusters

MinPts | Best ε | Number of Clusters | % of Data in Clusters

2 5.5 1 C0 (Noise) = 28/54 (51.9%)

C1 = 26/54 (48.1%)

3 11 1 C0 (Noise) = 2/54 (3.7%)

C1 = 52/54 (96.3%)

5 11 1 C0 (Noise) = 2/54 (3.7%)

C1 = 52/54 (96.3%)

10 11 1 C0 (Noise) = 2/54 (3.7%)

C1 = 52/54 (96.3%)

20 11 1 C0 (Noise) = 2/54 (3.7%)

C1 = 52/54 (96.3%)


Table 147: Varying MinPts for Post-test Control Students Inner Fringes Clusters

MinPts | Best ε | Number of Clusters | % of Data in Clusters

2 5.5 1 C0 (Noise) = 27/54 (50%)

C1 = 27/54 (50%)

3 11 1 C0 (Noise) = 0/54 (0%)

C1 = 54/54 (100%)

5 11 1 C0 (Noise) = 0/54 (0%)

C1 = 54/54 (100%)

10 11 1 C0 (Noise) = 0/54 (0%)

C1 = 54/54 (100%)

20 11 1 C0 (Noise) = 0/54 (0%)

C1 = 54/54 (100%)

Table 148: Varying MinPts for Pre-test Treatment Students Inner Fringes Clusters

MinPts | Best ε | Number of Clusters | % of Data in Clusters

2 6 1 C0 (Noise) = 5/148 (3.4%)

C1 = 143/148 (96.6%)

3 6 1 C0 (Noise) = 5/148 (3.4%)

C1 = 143/148 (96.6%)

5 6 1 C0 (Noise) = 5/148 (3.4%)

C1 = 143/148 (96.6%)

10 6 1 C0 (Noise) = 5/148 (3.4%)

C1 = 143/148 (96.6%)

20 6 1 C0 (Noise) = 5/148 (3.4%)

C1 = 143/148 (96.6%)

Table 149: Varying MinPts for Post-test Treatment Students Inner Fringes Clusters

MinPts | Best ε | Number of Clusters | % of Data in Clusters

2 6 2

C0 (Noise) = 0/148 (0%)

C1 = 145/148 (98%)

C2 = 3/148 (2%)

3 10 2

C0 (Noise) = 0/148 (0%)

C1 = 145/148 (98%)

C2 = 3/148 (2%)

5 10 1 C0 (Noise) = 3/148 (2%)

C1 = 145/148 (98%)

10 10 1 C0 (Noise) = 3/148 (2%)

C1 = 145/148 (98%)

20 10 1 C0 (Noise) = 3/148 (2%)

C1 = 145/148 (98%)


Table 150: Varying MinPts for Pre-test Control Students Outer Fringes Clusters

MinPts | Best ε | Number of Clusters | % of Data in Clusters

2 2 1 C0 (Noise) = 3/54 (5.6%)

C1 = 51/54 (94.4%)

3 2 1 C0 (Noise) = 3/54 (5.6%)

C1 = 51/54 (94.4%)

5 2 1 C0 (Noise) = 3/54 (5.6%)

C1 = 51/54 (94.4%)

10 2 1 C0 (Noise) = 3/54 (5.6%)

C1 = 51/54 (94.4%)

20 2 1 C0 (Noise) = 3/54 (5.6%)

C1 = 51/54 (94.4%)

Table 151: Varying MinPts for Post-test Control Students Outer Fringes Clusters

MinPts | Best ε | Number of Clusters | % of Data in Clusters

2 2 1 C0 (Noise) = 2/54 (3.7%)

C1 = 52/54 (96.3%)

3 2 1 C0 (Noise) = 2/54 (3.7%)

C1 = 52/54 (96.3%)

5 2 1 C0 (Noise) = 2/54 (3.7%)

C1 = 52/54 (96.3%)

10 2 1 C0 (Noise) = 2/54 (3.7%)

C1 = 52/54 (96.3%)

20 2 1 C0 (Noise) = 2/54 (3.7%)

C1 = 52/54 (96.3%)

Table 152: Varying MinPts for Pre-test Treatment Students Outer Fringes Clusters

MinPts | Best ε | Number of Clusters | % of Data in Clusters

2 1 1 C0 (Noise) = 10/148 (6.8%)

C1 = 138/148 (93.2%)

3 1 1 C0 (Noise) = 10/148 (6.8%)

C1 = 138/148 (93.2%)

5 1 1 C0 (Noise) = 10/148 (6.8%)

C1 = 138/148 (93.2%)

10 1 1 C0 (Noise) = 10/148 (6.8%)

C1 = 138/148 (93.2%)

20 1 1 C0 (Noise) = 10/148 (6.8%)

C1 = 138/148 (93.2%)


Table 153: Varying MinPts for Post-test Treatment Students Outer Fringes Clusters

MinPts | Best ε | Number of Clusters | % of Data in Clusters

2 1 1 C0 (Noise) = 5/148 (3.4%)

C1 = 143/148 (96.6%)

3 1 1 C0 (Noise) = 5/148 (3.4%)

C1 = 143/148 (96.6%)

5 1 1 C0 (Noise) = 5/148 (3.4%)

C1 = 143/148 (96.6%)

10 1 1 C0 (Noise) = 5/148 (3.4%)

C1 = 143/148 (96.6%)

20 1 1 C0 (Noise) = 5/148 (3.4%)

C1 = 143/148 (96.6%)


Appendix E: Knowledge States Clustering Results

K-Means Results Internal Indices for Knowledge States Clustering

Table 154: K-Means Results Evaluation for Knowledge States Clustering

K-Means Clusters Internal Indices

Pre-test Clusters *↓CP +↑SP ↓DB ↑DVI

Control 0.0796 2.6961 0.0653 1

Treatment 0.5087 4.7372 0.1384 0.5

Post-test Clusters ↓CP ↑SP ↓DB ↑DVI

Control 1.0173 5.0385 0.2021 2

Treatment 0.0845 1.22 0.3379 0.3333

*↓ means the less the value the better.

+↑ means the greater the value the better.

DBSCAN Results Internal Indices for Knowledge States Clustering

Table 155: DBSCAN Results Evaluation for Knowledge States Clustering

Clusters            | ↓CP*   | ↑SP+   | ↓DB    | ↑DVI
Pre-test Control    | N/A    | N/A    | N/A    | N/A
Pre-test Treatment  | 0.5115 | 4.7834 | 0.3735 | 0.6667
Post-test Control   | 1.0173 | 5.0385 | 0.2021 | 2
Post-test Treatment | N/A    | N/A    | N/A    | N/A

* ↓ indicates that lower values are better.
+ ↑ indicates that higher values are better.


EM Results Internal Indices for Knowledge States Clustering

Table 156: EM Results Evaluation for Knowledge States Clustering

Clusters            | ↓CP*   | ↑SP+   | ↓DB    | ↑DVI
Pre-test Control    | 0.0033 | 2.6759 | 0.0351 | 2
Pre-test Treatment  | 0.0215 | 1.8032 | 0.1255 | 0.3333
Post-test Control   | 1.0173 | 5.0385 | 0.2021 | 2
Post-test Treatment | 0.4519 | 3.1736 | 0.3524 | 0.5

* ↓ indicates that lower values are better.
+ ↑ indicates that higher values are better.


Appendix F: Generalization Example Details

K-Means Inner Fringes Clustering Results

Table 157: Pre-test Control Students K-Means Clusters Based on Inner Fringes

Cluster | Knowledge states (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C1  | 0 0 0 0 54 2 0 84 0 0  | 0.3621 | 0.3571 | 0.2018 | 0.56 | 140
C2  | 0 0 0 0 0 0 0 0 0 47   | 0.5603 | 0.5    | 0.228  | 0.41 | 47
All | 0 0 0 0 54 2 0 84 0 47 | 0.4119 | 0.4013 | 0.2252 | 0.55 | 187

Table 158: Post-test Control Students K-Means Clusters Based on Inner Fringes

Cluster | Knowledge states (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C'1 | 0 0 0 0 0 2 0 0 0 0    | 0.6    | 0.6    | 0.0182 | 0.03 | 2
C'2 | 0 0 0 0 20 0 0 70 0 0  | 0.6289 | 0.6364 | 0.3015 | 0.48 | 90
C'3 | 0 0 0 0 0 0 0 0 0 95   | 0.7502 | 0.8125 | 0.2487 | 0.33 | 95
All | 0 0 0 0 20 2 0 70 0 95 | 0.6902 | 0.7727 | 0.2802 | 0.41 | 187


Table 159: Pre-test Treatment Students K-Means Clusters Based on Inner Fringes

Cluster | Knowledge states (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C2  | 4 0 0 0 0 0 0 0 0 0       | 0      | 0      | 0      | N/A    | 4
C5  | 0 0 0 0 86 0 0 251 0 0    | 0.3956 | 0.3616 | 0.1996 | 0.5047 | 337
C3  | 0 0 0 0 0 0 0 0 257 0     | 0.5933 | 0.5909 | 0.2153 | 0.3629 | 257
C4  | 0 0 0 0 0 16 0 0 0 0      | 0.6165 | 0.5858 | 0.1945 | 0.3155 | 16
C1  | 0 0 0 0 0 0 1 0 0 0       | 0.6818 | 0.6818 | N/A    | N/A    | 1
All | 4 0 0 0 86 16 1 251 257 0 | 0.4819 | 0.4545 | 0.2311 | 0.48   | 615

Table 160: Post-test Treatment Students K-Means Clusters Based on Inner Fringes

Cluster | Knowledge states (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C'1 | 0 0 0 0 11 0 0 71 0 0   | 0.7035 | 0.749  | 0.2413 | 0.343  | 82
C'2 | 0 0 0 0 0 6 0 0 0 0     | 0.7854 | 0.8163 | 0.2203 | 0.2805 | 6
C'3 | 0 0 0 0 0 0 0 0 526 0   | 0.8294 | 0.9091 | 0.2153 | 0.2596 | 526
C'4 | 1 0 0 0 0 0 0 0 0 0     | 0.9508 | 0.9508 | N/A    | N/A    | 1
All | 1 0 0 0 11 6 0 71 526 0 | 0.8124 | 0.9091 | 0.2227 | 0.27   | 615
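As a companion to Tables 157–160, the following sketch shows how fringe vectors might be clustered with K-Means and how the per-cluster score statistics (Mean, Median, SD, and CV, where CV = SD/Mean) are derived. The data here is synthetic: X and scores are placeholders for the students' fringe vectors and normalized test scores, and the cluster count is illustrative.

```python
# Sketch of K-Means fringe clustering plus the per-cluster score statistics
# (Mean, Median, SD, CV) reported in Tables 157-164. All data is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((187, 10))  # placeholder fringe vectors, one row per student
scores = rng.random(187)   # placeholder normalized test scores

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for k in np.unique(labels):
    s = scores[labels == k]
    mean, med, sd = s.mean(), np.median(s), s.std(ddof=1)
    cv = sd / mean if mean > 0 else float("nan")  # CV = SD / Mean
    print(f"C{k + 1}: n={s.size:>3}  mean={mean:.4f}  median={med:.4f}  "
          f"SD={sd:.4f}  CV={cv:.2f}")
```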


K-Means Outer Fringes Clustering Results

Table 161: Pre-test Control Students K-Means Clusters Based on Outer Fringes

Cluster | Knowledge states (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C1  | 0 0 0 0 54 0 0 0 0 0   | 0.298  | 0.2298 | 0.1903 | 0.6385 | 54
C2  | 0 0 0 0 0 2 0 84 0 47  | 0.4581 | 0.4394 | 0.2224 | 0.4855 | 133
All | 0 0 0 0 54 2 0 84 0 47 | 0.4119 | 0.4013 | 0.2252 | 0.55   | 187

Table 162: Post-test Control Students K-Means Clusters Based on Outer Fringes

Cluster | Knowledge states (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C'2 | 0 0 0 0 0 2 0 70 0 0   | 0.6139 | 0.619  | 0.3186 | 0.5189 | 72
C'1 | 0 0 0 0 20 0 0 0 0 0   | 0.6801 | 0.7046 | 0.2075 | 0.3052 | 20
C'3 | 0 0 0 0 0 0 0 0 0 95   | 0.7502 | 0.8125 | 0.2487 | 0.3316 | 95
All | 0 0 0 0 20 2 0 70 0 95 | 0.6902 | 0.7727 | 0.2802 | 0.41   | 187


Table 163: Pre-test Treatment Students K-Means Clusters Based on Outer Fringes

Cluster | Knowledge states (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C2  | 4 0 0 0 0 0 0 0 0 0       | 0      | 0      | 0      | N/A    | 4
C3  | 0 0 0 0 86 0 0 0 0 0      | 0.3258 | 0.3058 | 0.1868 | 0.5735 | 86
C1  | 0 0 0 0 0 16 0 251 257 0  | 0.5108 | 0.4956 | 0.2243 | 0.4392 | 524
C4  | 0 0 0 0 0 0 1 0 0 0       | 0.6818 | 0.6818 | N/A    | N/A    | 1
All | 4 0 0 0 86 16 1 251 257 0 | 0.4819 | 0.4545 | 0.2311 | 0.48   | 615

Table 164: Post-test Treatment Students K-Means Clusters Based on Outer Fringes

Cluster | Knowledge states (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C'3 | 0 0 0 0 11 0 0 0 0 0    | 0.7195 | 0.7727 | 0.2151 | 0.299  | 11
C'2 | 0 0 0 0 0 6 0 0 0 0     | 0.7854 | 0.8163 | 0.2203 | 0.2805 | 6
C'1 | 0 0 0 0 0 0 0 71 526 0  | 0.8142 | 0.9091 | 0.2229 | 0.2738 | 597
C'4 | 1 0 0 0 0 0 0 0 0 0     | 0.9508 | 0.9508 | N/A    | N/A    | 1
All | 1 0 0 0 11 6 0 71 526 0 | 0.8124 | 0.9091 | 0.2227 | 0.27   | 615


DBSCAN Inner Fringes Clustering Results

Table 165: Pre-test Control Students DBSCAN Clusters Based on Inner Fringes

Cluster | Knowledge states (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C1         | 0 0 0 0 54 0 0 84 0 0  | 0.3633 | 0.3575 | 0.2028 | 0.56 | 138
C0 (Noise) | 0 0 0 0 0 2 0 0 0 47   | 0.5489 | 0.4867 | 0.2307 | 0.42 | 49
All        | 0 0 0 0 54 2 0 84 0 47 | 0.4119 | 0.4013 | 0.2252 | 0.55 | 187

Table 166: Post-test Control Students DBSCAN Clusters Based on Inner Fringes

Cluster | Knowledge states (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C'0 (Noise) | 0 0 0 0 0 2 0 0 0 0    | 0.6    | 0.6    | 0.0182 | 0.03 | 2
C'1         | 0 0 0 0 20 0 0 70 0 95 | 0.6912 | 0.7727 | 0.2816 | 0.41 | 185
All         | 0 0 0 0 20 2 0 70 0 95 | 0.6902 | 0.7727 | 0.2802 | 0.41 | 187


Table 167: Pre-test Treatment Students DBSCAN Clusters Based on Inner Fringes

Cluster | Knowledge states (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C1         | 0 0 0 0 86 0 0 251 0 0    | 0.3956 | 0.3616 | 0.1996 | 0.5  | 337
C0 (Noise) | 4 0 0 0 0 0 0 0 257 0     | 0.5843 | 0.5909 | 0.2258 | 0.39 | 261
C2         | 0 0 0 0 0 16 1 0 0 0      | 0.6203 | 0.5871 | 0.189  | 0.3  | 17
All        | 4 0 0 0 86 16 1 251 257 0 | 0.4819 | 0.4545 | 0.2311 | 0.48 | 615

Table 168: Post-test Treatment Students DBSCAN Clusters Based on Inner Fringes

Cluster | Knowledge states (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C'0 (Noise) | 1 0 0 0 0 6 0 0 0 0     | 0.809  | 0.9091 | 0.2106 | 0.26 | 7
C'1         | 0 0 0 0 11 0 0 71 526 0 | 0.8125 | 0.9091 | 0.223  | 0.27 | 608
All         | 1 0 0 0 11 6 0 71 526 0 | 0.8124 | 0.9091 | 0.2227 | 0.27 | 615


DBSCAN Outer Fringes Clustering Results

Table 169: Pre-test Control Students DBSCAN Clusters Based on Outer Fringes

Cluster | Knowledge states (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C0 (Noise) | 0 0 0 0 54 0 0 0 0 0   | 0.298  | 0.2298 | 0.1903 | 0.64 | 54
C1         | 0 0 0 0 0 2 0 84 0 47  | 0.4581 | 0.4394 | 0.2224 | 0.49 | 133
All        | 0 0 0 0 54 2 0 84 0 47 | 0.4119 | 0.4013 | 0.2252 | 0.55 | 187

Table 170: Post-test Control Students DBSCAN Clusters Based on Outer Fringes

Cluster | Knowledge states (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C'0 (Noise) | 0 0 0 0 20 0 0 0 0 0   | 0.6801 | 0.7046 | 0.2075 | 0.31 | 20
C'1         | 0 0 0 0 0 2 0 70 0 95  | 0.6914 | 0.7727 | 0.2881 | 0.42 | 167
All         | 0 0 0 0 20 2 0 70 0 95 | 0.6902 | 0.7727 | 0.2802 | 0.41 | 187


Table 171: Pre-test Treatment Students DBSCAN Clusters Based on Outer Fringes

Cluster | Knowledge states (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C0 (Noise) | 4 0 0 0 86 16 1 0 0 0     | 0.3604 | 0.3447 | 0.2238 | 0.62 | 107
C1         | 0 0 0 0 0 0 0 251 257 0   | 0.5075 | 0.4924 | 0.2246 | 0.44 | 508
All        | 4 0 0 0 86 16 1 251 257 0 | 0.4819 | 0.4545 | 0.2311 | 0.48 | 615

Table 172: Post-test Treatment Students DBSCAN Clusters Based on Outer Fringes

Cluster | Knowledge states (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C'0 (Noise) | 1 0 0 0 11 6 0 0 0 0    | 0.7543 | 0.7954 | 0.2119 | 0.28 | 18
C'1         | 0 0 0 0 0 0 0 71 526 0  | 0.8142 | 0.9091 | 0.2229 | 0.27 | 597
All         | 1 0 0 0 11 6 0 71 526 0 | 0.8124 | 0.9091 | 0.2227 | 0.27 | 615


EM Inner Fringes Clustering Results

Table 173: Pre-test Control Students EM Clusters Based on Inner Fringes

Cluster | Knowledge states (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C1  | 0 0 0 0 54 2 0 84 0 47 | 0.4119 | 0.4013 | 0.2252 | 0.55 | 187
All | 0 0 0 0 54 2 0 84 0 47 | 0.4119 | 0.4013 | 0.2252 | 0.55 | 187

Table 174: Post-test Control Students EM Clusters Based on Inner Fringes

Cluster | Knowledge states (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C'2 | 0 0 0 0 0 2 0 0 0 0    | 0.6    | 0.6    | 0.0182 | 0.03 | 2
C'1 | 0 0 0 0 20 0 0 70 0 95 | 0.6912 | 0.7727 | 0.2816 | 0.41 | 185
All | 0 0 0 0 20 2 0 70 0 95 | 0.6902 | 0.7727 | 0.2802 | 0.41 | 187


Table 175: Pre-test Treatment Students EM Clusters Based on Inner Fringes

Cluster | Knowledge states (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C2  | 0 0 0 0 86 0 0 0 0 0      | 0.3258 | 0.3058 | 0.1868 | 0.5735 | 86
C3  | 0 0 0 0 0 0 0 251 0 0     | 0.4195 | 0.407  | 0.1986 | 0.4734 | 251
C4  | 4 0 0 0 0 16 1 0 0 0      | 0.5022 | 0.5182 | 0.3014 | 0.6003 | 21
C1  | 0 0 0 0 0 0 0 0 257 0     | 0.5933 | 0.5909 | 0.2153 | 0.3629 | 257
All | 4 0 0 0 86 16 1 251 257 0 | 0.4819 | 0.4545 | 0.2311 | 0.48   | 615

Table 176: Post-test Treatment Students EM Clusters Based on Inner Fringes

Cluster | Knowledge states (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C'2 | 0 0 0 0 11 0 0 71 0 0   | 0.7035 | 0.749  | 0.2413 | 0.343  | 82
C'3 | 1 0 0 0 0 6 0 0 0 0     | 0.809  | 0.9091 | 0.2106 | 0.2603 | 7
C'1 | 0 0 0 0 0 0 0 0 526 0   | 0.8294 | 0.9091 | 0.2153 | 0.2596 | 526
All | 1 0 0 0 11 6 0 71 526 0 | 0.8124 | 0.9091 | 0.2227 | 0.27   | 615


EM Outer Fringes Clustering Results

Table 177: Pre-test Control Students EM Clusters Based on Outer Fringes

Cluster | Knowledge states (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C1  | 0 0 0 0 54 2 0 84 0 47 | 0.4119 | 0.4013 | 0.2252 | 0.5468 | 187
All | 0 0 0 0 54 2 0 84 0 47 | 0.4119 | 0.4013 | 0.2252 | 0.55   | 187

Table 178: Post-test Control Students EM Clusters Based on Outer Fringes

Cluster | Knowledge states (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C'2 | 0 0 0 0 0 2 0 70 0 0   | 0.6139 | 0.619  | 0.3186 | 0.5189 | 72
C'3 | 0 0 0 0 20 0 0 0 0 0   | 0.6801 | 0.7046 | 0.2075 | 0.3052 | 20
C'1 | 0 0 0 0 0 0 0 0 0 95   | 0.7502 | 0.8125 | 0.2487 | 0.3316 | 95
All | 0 0 0 0 20 2 0 70 0 95 | 0.6902 | 0.7727 | 0.2802 | 0.41   | 187


Table 179: Pre-test Treatment Students EM Clusters Based on Outer Fringes

Cluster | Knowledge states (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C3  | 4 0 0 0 86 0 0 0 0 0      | 0.3113 | 0.291  | 0.1947 | 0.6254 | 90
C1  | 0 0 0 0 0 16 1 251 257 0  | 0.5111 | 0.4958 | 0.2242 | 0.4387 | 525
C2  | 0 0 0 0 0 0 0 0 0 0       | N/A    | N/A    | N/A    | N/A    | 0
All | 4 0 0 0 86 16 1 251 257 0 | 0.4819 | 0.4545 | 0.2311 | 0.48   | 615

Table 180: Post-test Treatment Students EM Clusters Based on Outer Fringes

Cluster | Knowledge states (A B C D E F G H I J) | Mean | Median | SD | CV | No. of Students
C'4 | 1 0 0 0 11 0 0 0 0 0    | 0.7388 | 0.7954 | 0.2157 | 0.292  | 12
C'3 | 0 0 0 0 0 6 0 0 0 0     | 0.7854 | 0.8163 | 0.2203 | 0.2805 | 6
C'1 | 0 0 0 0 0 0 0 71 526 0  | 0.8142 | 0.9091 | 0.2229 | 0.2738 | 597
C'2 | 0 0 0 0 0 0 0 0 0 0     | N/A    | N/A    | N/A    | N/A    | 0
All | 1 0 0 0 11 6 0 71 526 0 | 0.8124 | 0.9091 | 0.2227 | 0.27   | 615
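EM-style clustering of the kind reported in Tables 173–180 can be approximated with a Gaussian mixture fitted by expectation-maximization, as sketched below; this is a minimal illustration, not the thesis's actual EM implementation or settings, and X is a placeholder. Hard-assigning each student to the most probable component can leave a component with no members, which would be consistent with the zero-member C2 clusters in Tables 179 and 180.

```python
# Sketch of EM clustering via a Gaussian mixture (illustrative only;
# `X` is a placeholder students-by-fringes matrix).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.random((615, 10))  # placeholder fringe vectors

gm = GaussianMixture(n_components=4, random_state=0).fit(X)
labels = gm.predict(X)  # hard assignment to the most probable component
for k in range(gm.n_components):
    print(f"C{k + 1}: {int(np.sum(labels == k))} students")
```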


Vita

Rim S. Zakaria was born in 1988; she is a Palestinian who was born and raised in the United Arab Emirates. She was educated in private schools following the British curricula, completing seven O-Levels at Rosary School Sharjah and four AS-Levels at Al Ma'arifa International Private School, from which she graduated with honors in 2006. She earned a Bachelor of Science in Computer Engineering from the American University of Sharjah, graduating cum laude in 2010. During her undergraduate years, she was named to the Dean's List seven times and to the Chancellor's List three consecutive times.

Ms. Zakaria worked for one year as an Activation Engineer at Du Telecom and then for three years at IBM as an IT Analyst, Junior IT Architect, and PMO. In 2014, she began the Master's program in Engineering Systems Management at the American University of Sharjah.

Ms. Zakaria has published four papers at Institute of Electrical and Electronics Engineers conferences in the area of educational and learning technologies.

In her leisure time, Ms. Zakaria enjoys playing the piano, ice skating, reading fiction and non-fiction, and playing video games.