
TEACHING AND TESTING SPEAKING SKILLS AT THE

UNIVERSITY FRESHMAN LEVEL: A CASE STUDY

Nailah Riaz

(Reg # 120954)

DOCTOR OF PHILOSOPHY

in

Linguistics and Literature

DEPARTMENT OF ENGLISH, FACULTY OF SOCIAL SCIENCES

AIR UNIVERSITY, ISLAMABAD

April, 2020


TEACHING AND TESTING SPEAKING SKILLS AT THE

UNIVERSITY FRESHMAN LEVEL: A CASE STUDY

Nailah Riaz

(Reg #120954)

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF

THE REQUIREMENTS FOR THE DEGREE OF

DOCTOR OF PHILOSOPHY

in

Linguistics and Literature

To

DEPARTMENT OF ENGLISH, FACULTY OF SOCIAL SCIENCES

AIR UNIVERSITY, ISLAMABAD

April, 2020


SUPERVISOR THESIS APPROVAL FORM

Title of thesis: TEACHING AND TESTING SPEAKING SKILLS AT THE UNIVERSITY FRESHMAN LEVEL: A CASE STUDY

Submitted by: Nailah Riaz

Registration No.: 120954 Discipline: Linguistics & Literature

Candidate for the degree of: PhD in Linguistics and Literature

This thesis has been read by me and has been found to be satisfactory regarding content,

English usage, format, citation, bibliographic style, and consistency, and thus fulfills the

qualitative requirements of this study.

Dr. Sham Haidar

Name of Supervisor Signature of the Supervisor

Date: 28th April, 2020


Dedicated to my caring husband and AUM&A, who have always stood by me...


ACKNOWLEDGMENTS

I owe thanks to Air University and my co-workers for supporting me in a variety of ways. First of all, I am thankful to the Vice Chancellor AVM Faaiz Amir (Retd.) and to the Dean of the Faculty of Social Sciences and Department of English, Air University, Prof. Dr Wasima Shehzad, for letting me complete this work at my own pace. Indeed, the office of the Vice Chancellor has never disappointed. The Dean FSS guided me through matters such as the process of completing a PhD, and entrusted me to a competent local advisor. I am grateful to the Senior Dean Prof. Dr Zafar Ullah Koreshi for allowing me to carry out classroom research in the language lab as an additional teaching initiative (50 minutes weekly in the three sections of the Bachelors of Mechatronics Engineering). I am highly thankful to my supervisor Prof. Dr Riaz Hassan for his support, encouragement and advice. He inspired me not only to undertake a next-to-impossible research study on oracy, but also to complete it in spite of physically and mentally challenging phases in my life.

I am indebted to my local advisor cum supervisor, Assistant Professor Dr Sham Haidar (Department of English, Faculty of Social Sciences, Air University). He not only empathetically understood my academic and medical context but also very patiently kept motivating me with his insightful guidance. Always approachable, he provided constructive and prompt feedback and criticism on my research work. In addition, I value my advisor cum supervisor for considering my individual advisory needs.

I would like to express my sincere gratitude to the (Ex) VC AU Dr Ijaz Malik, the (Ex) Dean of the Faculty of Social Sciences Dr Rubina Kamran, Dr Abdullah Sadiq, Dr Moeen Zafar, Dr Irfan Ul Haq, Air Commodore Afzal (Retd.), Air Commodore Wahab Motla, Dr Tasneem Shah, Dr Asad Naeem, Dr Basharat Ullah Malik, Dr Muhammad Anwar, Dr Ismat Jabeen, Dr Afia Kanwal, Dr Akhtar Abbas, Isma Waseem, Naveed Ehsan, Afroz Ilyas, and Samia Mudasser for sharing their perspectives and practices. I shall always remember the prompt responses and productive directives, even during Ramzan Kareem, of Dr Farzana Masroor, Assistant Professor, PG Program and Thesis/Defense Coordinator, Department of English, Air University, Islamabad, Pakistan.

My heartfelt gratitude goes to my foreign evaluators, Dr Faizah BT Mohamad Nor and Dr Reiko Kataoka, for their aptly detailed feedback; to my external examiners, Dr Ajmal Gulzar and Dr Hazrat Umar, for their crisp academic criticism; and to my internal examiners, Dr Farzana Masroor and Dr Uzma Anjum, for their guiding comments. Thank you all for helping me shape and guide the direction of the research study with careful instructions.

I would sincerely like to thank the network department, the automation department and the library for their timely support. The timely technical assistance and maintenance of Muhammad Farooq Arshad, Sohail Khalid, Mohammad Wajid, Mohammad Yasir Iqbal, and Nasir Mahmood are beyond acknowledgement. In the AU automation department, Zubair Azam, Raja Ghalib Hussain and Masood Ahmed facilitated me whenever needed. I am grateful to the Deputy Registrar Air University, Mr. Amjad Mahmood, for providing me with reliable information about the semester extension(s). Without his technical guidance, it would have been hard to stay motivated to complete my research study.

I am truly obliged to my family and relatives, who considerately accepted my commitment and supported me morally throughout my research work. I am greatly thankful to my friend Abida Hassan and my children Assad, Saad, Haleema and Kiran for their motivational sessions when I most needed them. I could never have actualized this dream without the prayers of my sisters, students and well-wishers.

Above all, I greatly acknowledge the ever-available support of my husband Squadron Leader Irfan Ul Haq (Retd).

Finally, no matter what I say, I can never acknowledge enough the bounties of Allah SWT. May He always grant my efforts to facilitate the youth of my country, Ameen.


ABSTRACT

English serves as the main language of communication in the world, and hence also for people in Pakistan. Yet the skills of oracy get scant attention in English Language Teaching (ELT) classes. One reason for this limited focus on oracy is that testing speaking skills appears inaccessible. This widespread issue in Pakistan makes the present work relevant to all centers of ELT. Speaking is an important coordinating skill, not only for the other language skills but also for all types of learning. Motivating students to speak in the English language, this classroom research focuses primarily on two dimensions of English speaking skills (ESS), namely (1) what experiences can be built into classroom teaching, and (2) what kind of testing can be used. Testing is important to the teaching cycle as it provides datum lines of progress and achievement. Kim's (2010) criteria provide an analytical tool. This work assesses learners' autonomous interaction in recorded speaking performances (RSPs) in a university language lab, and tests the hypothesis that university freshmen (UF) develop their ESS if they are exposed to purposeful teaching reinforced with relevant testing procedures. A longitudinal approach was adopted to evaluate progress in the speaking ability of UF from one semester to another. Four components were investigated: (1) teaching practices, (2) the structure of tests, (3) the contribution of tasks and (4) progressive rating of speaking performance. Data were analyzed both qualitatively and quantitatively. College language learners' (CLLs) responses to practices of teaching, learning, and testing English speaking skills at college level, and the speaking performances of the university freshmen (UF), were statistically analyzed. Interviews of university English language teachers (UELTs) and university management and administration (UM&A) were analyzed through textual analysis. The study found that a proper focus on speaking skills, combined with the use of a criterion for their analysis, improved the speaking skills of the learners. Their performance improved after explicit instruction in speaking and an explicit criterion for the evaluation of their speaking performance. Moreover, the use of Kim's criteria helped in understanding the evaluation of students' speaking skills. The study thus suggests that explicit speaking instruction can reduce the differences in English proficiency among students who graduated from different schools. The study attempts to contribute to the knowledge of teaching methods and of learning English as a second language, specifically for the development of speaking skills in university freshmen. A judicious allocation of weightage for ESS in the overall assessment of English in the undergraduate program of studies is recommended.


TABLE OF CONTENTS

TITLE PAGES
THESIS AND DEFENSE APPROVAL FORM
AUTHOR’S DECLARATION
PLAGIARISM UNDERTAKING
SUPERVISOR THESIS APPROVAL FORM
DEDICATION
ACKNOWLEDGMENTS
ABSTRACT
TABLE OF CONTENTS
LIST OF APPENDICES
LIST OF TABLES
LIST OF FIGURES
LIST OF ILLUSTRATIONS
LIST OF ABBREVIATIONS

1. INTRODUCTION
1.1. Positionality of the Researcher
1.2. Context of the Current Study
1.3. Statement of the Problem
1.4. Research Objectives
1.5. Research Questions/Hypothesis
1.6. Significance of English Speaking Skills
1.6.1. Communicative Function and Communicative Competence
1.7. Importance of Teaching/Testing of Oracy
1.8. Rationale for Evaluating Speaking Ability
1.9. Methodology
1.10. Delimitation
1.11. Chapters Breakdown
1.12. Conclusion

2. LITERATURE REVIEW
2.1. Introduction
2.1.1. Crisis of Educational Quality
2.2. Teaching English Speaking Skills (ESS)
2.3. Language Acquisition
2.4. Language Learning
2.5. Learning and Teaching of English Speaking Skill
2.5.1. Tasks as Means to English Language Learning and Teaching
2.6. Testing of English Speaking Skill
2.7. Rationale for Using Kim’s Scoring Rubrics
2.7.1. Analytic Scoring Rubrics and Interaction Specifications of RSA
2.7.2. Interagency Language Roundtable (ILR) Proficiency Ratings and ACTFL
2.8. Raters’ Contribution to Students’ Speaking Performance
2.9. Impact of British Rule
2.10. Official Language of Pakistan
2.11. National Language of Pakistan
2.12. Pakistani English
2.13. Englishness of English
2.14. Promoting ESS in Pakistan
2.15. Conclusion

3. RESEARCH METHODOLOGY
3.1. Introduction
3.1.1. Researcher cum University English Language Teacher
3.2. Research Design
3.2.1. Justification of Research Design
3.2.2. Classroom Research
3.2.3. Case Study Method
3.2.4. Mixed Method Approach
3.3. Research Strategy
3.3.1. Background of UF 2013
3.3.2. Research Participants
3.4. Data Collection
3.4.1. Time Frame of Research Data
3.4.2. In-Class Survey
3.4.3. Video Interviews with University English Language Teachers
3.4.3.1. UELTs’ Teaching Practices
3.4.3.2. UELTs’ Testing Techniques
3.4.4. University Management/Administration’s Interviews
3.4.4.1. ORIC Perspective
3.4.4.2. Office of QEC Perspective
3.4.4.3. Perspective of Head of Computer Science Department
3.4.4.4. The Office of Senior Dean Perspective
3.4.4.5. The Office of Vice Chancellor (VC) Perspective
3.4.5. Rationale for Recorded Speaking Performances
3.4.5.1. Rationale for Near-Natural Recordings
3.4.6. Semester 1 (Fall, 2013)
3.4.7. Semester 2 (Spring, 2014)
3.4.8. Evaluation of Students’ Speaking Performances
3.4.9. Comparative Evaluation of Meaningfulness (Semester 1 & 2)
3.5. Scope and Limitations of the Methodology
3.6. Presenting Data, Analysis and Interpretation
3.6.1. Process of Triangulation
3.7. Conclusion

4. DATA PRESENTATION, ANALYSIS AND INTERPRETATION
4.1. Introduction
4.2. Analysis of University Freshmen’s (UF) Survey
4.2.1. Practical Use of English Language at Personal Level
4.2.2. Practical Use of English Language at Public Level
4.2.3. Practical Use of English Language at Academic Level
4.2.4. Teaching Techniques for English Oral Skills at Freshman Level
4.2.5. Teaching/Testing of English Oral Skills at College Level (2013)
4.2.6. Practices of Testing Criteria of English Oral Skills at College Level
4.2.7. Weightage of Oral Skills in Overall English Assessment at College Level
4.2.8. Conclusion of University Freshmen’s (UF) Survey
4.3. Analysis of Interviews
4.3.1. Analysis of Interviews of the UELTs
4.3.1.1. Teaching Practices of the UELTs
4.3.1.2. UELTs’ Evolving Sets of Criteria
4.3.1.3. Weightage of ESS for UELTs
4.3.1.4. Conclusion of UELTs’ Interviews
4.3.2. Analysis of Interviews of University Management/Administration
4.3.2.1. UM&A and University Teaching Practices
4.3.2.2. UM&A and a Set of Criteria on English Speaking Skills
4.3.2.3. UM&A and Weightage for English Speaking Skills
4.3.2.4. Conclusion of University Management & Administration’s Interviews
4.4. Rationale of the UF’s Recorded Speaking Performances
4.4.1. Using Analytic Scoring Rubric
4.4.2. Speaking Performances of Semester 1 & 2
4.4.3. Analysis of Evaluation of Meaningfulness (Semester 1 & 2)
4.4.4. Analysis of Evaluation of Grammatical Competence (Semester 1 & 2)
4.4.5. Analysis of Evaluation of Discourse Competence (DC) (Semester 1 & 2)
4.4.6. Analysis of Evaluation of Task Completion (Semester 1 & 2)
4.4.7. Analysis of Evaluation of Intelligibility (Semester 1 & 2)
4.4.8. Findings of the Comparative Evaluation of Semester 1 & 2

5. FINDINGS, IMPLICATIONS, CONCLUSIONS, RECOMMENDATIONS
5.1. Background of the UF
5.1.1. Survey (2013) Based Findings from the UF’s Lens
5.1.2. English Speaking Practices of the UF at the Joining Time
5.1.3. Reasons for Lesser Practice in ESS at UF Level
5.1.4. UELTs on UF’s Indigenous ESS at Joining Time
5.1.4.1. UELTs’ Consciously Teaching ESS
5.1.5. The Symbolic Power of ESS on the Pakistani Social Set-up
5.2. Responsibility of a Higher Seat of Learning
5.2.1. The University Management and Administration’s Perspective
5.2.2. The University Management and Administration’s Recommendations
5.2.3. ESS Adds Value to UF’s Marketability
5.2.4. Niches in English Language Teaching of the UF
5.2.5. The University Freshmen’s Requirements
5.3. UELT Researcher’s Reflection
5.3.1. UELTs – The Agents of Change to RSPs
5.3.2. UELTs’ Diverse Techniques to Capacitate the UF’s ESS
5.3.3. Interruption Obstructs Language Learning
5.3.4. Asking Questions from Pairs and Groups
5.3.5. Using Analytic Scoring Rubrics and Its Benefits
5.3.5.1. Impact of a Criterion on the Rater and the UF
5.3.6. Recorded Speaking Performances (RSPs)
5.3.6.1. Benefits of Speaking Performances
5.3.6.2. Evaluating Peer Performances
5.3.6.3. Practicing ESS in RSPs and UF’s Output
5.3.7. Results of UF’s Speaking Skills
5.4. The Recommended Weightage for ESS and Its Impact
5.4.1. Contribution of Research Study
5.4.2. Theoretical Underpinnings of Research Study
5.4.3. Limitations of the Study
5.4.4. Conclusions and Recommendations
5.4.5. Implications and Future Research Prospects

REFERENCES
PLAGIARISM REPORT

LIST OF APPENDICES

Appendix A: Survey conducted among the UF 2013 (Questionnaire)
Appendix A-1: Saved impression of in-class survey
Appendix B: List of questions for interviewing UELTs
Appendix B-1: Saved impression of the record of UELTs’ video interviews
Appendix C: List of questions for interviewing the UM&A
Appendix C-1: Saved impression of UM&A interviews
Appendix D: Kim’s (2010) analytic scoring rubric
Appendix E: Proofreading Certificate

LIST OF TABLES

3.1 Meaningfulness (Communication Effectiveness)
3.2 Comparative Evaluation of Meaningfulness in Speaking Performance of Semester 1 & 2 (2013-2014)
4.1 Language learners’ frequency of speaking English at personal level in 2013
4.2 Frequency of language learners’ practical use of English oral skills at public level in 2013
4.3 Teaching, using & testing of English oral skills academically at freshman level
4.4 Frequency of English oral skills teaching techniques at freshman level in 2013
4.5 Frequency of teaching/testing of English oral skills at freshman level in 2013
4.6 Usage frequency of testing criteria for English oral skills at college level in 2013
4.7 Weightage of English oral skills at college/freshman level in 2013
4.8 Number of responses on ‘No’ to ‘Limited’ scale-point in Semester 1 & 2
4.9 Number of responses on ‘Fair’ scale-point in Semester 1 & 2
4.10 Number of responses on ‘Adequate’ scale-point in scoring rubric (Sem 1 & 2)
4.11 Number of responses on ‘Good’ scale-point in Semester 1 & 2
4.12 Scale points ‘Adequate’ and ‘Good’ together in meaningfulness, Semester 1 & 2
4.13 Scale points ‘Adequate’ and ‘Good’ together in grammatical competence, Semester 1 & 2
4.14 Scale points ‘Adequate’ and ‘Good’ together in discourse competence, Semester 1 & 2
4.15 Scale points ‘Adequate’ and ‘Good’ together in task completion, Semester 1 & 2
4.16 Scale points ‘Adequate’ and ‘Good’ together in intelligibility, Semester 1 & 2
4.17 Achievement of the UF in scale point ‘Excellent’ of test constructs
5.1 Frequency of English speaking skills taught and tested at college level (2013)
5.2 Frequency of speaking English with parents, family and friends at college level (2013)
5.3 UM&A perspective on the benefits of enhancing UF’s ESS
5.4 UM&A perspective on the significant need of English language for the UF
5.5 UM&A’s support to let UELTs enhance ESS of UF
5.6 UM&A’s perspective on marketability of the UF
5.7 UELTs’ diverse techniques to capacitate the UF’s ESS
5.8 The UELTs’ intervening practices (2014) to correct ESS of the UF
5.9 UELTs’ and UM&A’s criterion to check UF’s ESS
5.10 UELTs’ checked linguistic features in the UF’s ESS
5.11 UM&A’s perspective on standardized criterion for assessing ESS
5.12 Awareness about criterion led to achievement at college level (2013)

LIST OF FIGURES

4.1 Meaningfulness Limited (1): response is generally unclear and extremely hard to understand
4.2 Meaningfulness Limited (3): response delivers extremely simple, limited ideas
4.3 Meaningfulness Fair (1): response often displays obscure points, leaving the listener confused
4.4 Meaningfulness Fair (3): response delivers simple ideas
4.5 Meaningfulness Adequate (1): response occasionally displays obscure points; however, main points are still conveyed
4.6 Meaningfulness Adequate (2): response includes some elaboration
4.7 Meaningfulness Adequate (3): response delivers somewhat simple ideas
4.8 Meaningfulness Good (1): response is generally meaningful; in general, what the speaker wants to convey is clear and easy to understand
4.9 Meaningfulness Good (2): response is well elaborated
4.10 Meaningfulness Good (3): response delivers generally sophisticated ideas
4.11 Meaningfulness Excellent (1): response is completely meaningful; what the speaker wants to convey is completely clear and easy to understand
4.12 Meaningfulness Excellent (2): response is fully elaborated
4.13 Meaningfulness Excellent (3): response delivers sophisticated ideas
4.14 Grammatical Competence No (3): response contains not enough evidence to evaluate
4.15 Grammatical Competence Limited (1): response is almost always grammatically inaccurate, which causes difficulty in understanding what the speaker wants to say
4.16 Grammatical Competence Fair (1): response displays several major errors as well as frequent minor errors, sometimes causing confusion
4.17 Grammatical Competence Fair (2): response displays a narrow range of syntactic structures, limited to simple sentences
4.18 Grammatical Competence Fair (3): response displays use of simple and inaccurate lexical form
4.19 Grammatical Competence Adequate (1): response rarely displays major errors that obscure meaning and a few minor errors, but what the speaker wants to say can be understood
4.20 Grammatical Competence Adequate (2): response displays a somewhat narrow range of syntactic structures; too many simple sentences
4.21 Grammatical Competence Adequate (3): response displays somewhat simple syntactic structures
4.22 Grammatical Competence Adequate (4): response displays use of somewhat simple or inaccurate lexical form
4.23 Grammatical Competence Good (1): response is generally grammatically accurate without any major errors (e.g., article usage, subject/verb agreement, etc.)
4.24 Grammatical Competence Good (2): response displays a relatively wide range of syntactic structures and lexical form
4.25 Grammatical Competence Good (3): response displays relatively complex syntactic structures and lexical form
4.26 Grammatical Competence Excellent (1): response is grammatically accurate
4.27 Grammatical Competence Excellent (2): response displays a wide range of syntactic structures and lexical form
4.28 Discourse Competence No (1): response is incoherent
4.29 Discourse Competence No (3): response contains not enough evidence to evaluate
4.30 Discourse Competence Limited (1): response is generally incoherent
4.31 Discourse Competence Limited (2): response displays illogical or unclear organization, causing great confusion
4.32 Discourse Competence Limited (3): response displays attempts to use cohesive devices, but they are either quite mechanical or inaccurate, leaving the listener confused
4.33 Discourse Competence Fair (1): response is loosely organized, resulting in generally disjointed discourse
4.34 Discourse Competence Fair (2): response often displays illogical or unclear organization, causing some confusion
4.35 Discourse Competence Fair (3): response displays repetitive use of simple cohesive devices; uses of cohesive devices are not always effective
4.36 Discourse Competence Adequate (1): response is occasionally incoherent
4.37 Discourse Competence Adequate (2): response contains parts that display somewhat illogical or unclear organization; however, as a whole, it is in general logically structured
4.38 Discourse Competence Adequate (3): response at times displays somewhat loose connection of ideas
4.39 Discourse Competence Adequate (4): response displays use of simple cohesive devices
4.40 Discourse Competence Good (1): response is generally coherent
4.41 Discourse Competence Good (2): response displays generally logical structure
4.42 Discourse Competence Good (3): response displays good use of cohesive devices that generally connect ideas smoothly
4.43 Discourse Competence Excellent (2): response is logically structured; logical openings and closures; logical development of ideas
4.44 Task Completion No (2): response contains not enough evidence to evaluate
4.45 Task Completion Limited (1): response barely addresses the task
4.46 Task Completion Fair (1): response insufficiently addresses the task
4.47 Task Completion Fair (2): response displays some major incomprehension/misunderstanding(s) that interferes with successful task completion
4.48 Task Completion Adequate (1): response, Semester 1 & 2, 2013-2014
4.49 Task Completion Adequate (2): response, Semester 1 & 2, 2013-2014
4.50 Task Completion Adequate (3): response, Semester 1 & 2, 2013-2014
4.51 Task Completion Adequate (4): response, Semester 1 & 2, 2013-2014
4.52 Task Completion Good (1): response addresses the task well
4.53 Task Completion Good (2): response includes no noticeably misunderstood points
4.54 Task Completion Good (3): response completely covers all main points with a good amount of details discussed in the prompt
4.55 Task Completion Excellent (1): response fully addresses the task
4.56 Task Completion Excellent (2): response displays completely accurate understanding of the prompt without any misunderstood points
4.57 Intelligibility No (1): response completely lacks intelligibility
4.58 Comparative study of Intelligibility Limited (1), Sem 1 & 2 (2013-2014)
4.59 Comparative study of Intelligibility Limited (2), Sem 1 & 2 (2013-2014)
4.60 Comparative study of Intelligibility Limited (3), Sem 1 & 2 (2013-2014)
4.61 Comparison of Intelligibility Fair (1) responses, Sem 1 & 2 (2013-2014)
4.62 Comparison of Intelligibility Fair (2) responses, Sem 1 & 2 (2013-2014)
4.63 Comparison of Intelligibility Fair (4) responses, Sem 1 & 2 (2013-2014)
4.64 Comparative study of Sem 1 & 2 on Intelligibility Adequate (1) in 2013-2014
4.65 Comparative study of Sem 1 & 2 on Intelligibility Adequate (2) in 2013-2014
4.66 Comparative study of Sem 1 & 2 on Intelligibility Adequate (3) in 2013-2014
4.67 Comparative study of Sem 1 & 2 on Intelligibility Adequate (4) in 2013-2014
4.68 Comparative study of Sem 1 & 2 on Intelligibility Good (1) in 2013-2014
4.69 Comparative study of Sem 1 & 2 on Intelligibility Good (2) in 2013-2014
4.70 Comparative study of Sem 1 & 2 on Intelligibility Good (3) in 2013-2014
4.71 Comparative study of Sem 1 & 2 on Intelligibility Excellent (1) in 2013-2014
4.72 Comparative study of Sem 1 & 2 on Intelligibility Excellent (2) in 2013-2014

LIST OF ILLUSTRATIONS

1 Study Research Design
2 Research Participants
3 Research Site
4 Feedback through Email to All Students
5 Steps of Data Analysis
6 UF’s No-Limited Control on ESS (2013-2014)
7 Difference at Level ‘Adequate’, Sem 1 & 2
8 Emailed Feedback to All Students on Required Specifications
9 RSPs Help UF Verbally Evaluate Their Peers
10 Consciously Teaching/Testing/Grading ESS is Vital
11 Testing ESS Contributed to Pakistani Prospective Engineers
12 Emailed Feedback to All Students on Long Utterances
13 RSP: An Effective Technique for Ample Opportunities to Practice
14 Ratio in Weightage (ESS) on University Grade Sheet
15 Contribution of Research Study

LIST OF ABBREVIATIONS

American Council on the Teaching of Foreign Languages ACTFL
Bachelors of Engineering for Mechatronics BEMTS
College English language learners CELLs
College English language teachers CELTs
Discourse Competence DC
Educational Testing Service ETS
English language learners ELLs
English Language Teaching ELT
English Language Teachers ELTs
English speaking performances ESPs
English speaking skills ESS
Faculty of Social Sciences FSS
Figure Fig
First language L1
Grammatical Competence GC
Greetings, apologies, and congratulations GAC
Higher Education Commission HEC
Intelligibility INT
Interagency Language Roundtable ILR
Language acquisition device LAD
Meaningfulness MFN
Minnesota Language Proficiency Assessments Model MLPA
Pakistani English PE
Pakistan Engineering Council PEC
Recorded speaking performances RSPs
Royal Society of Arts RSA
Semester Sem/SEM
Task Completion TC
Task Completion Adequate TC Adq
Universal grammar UG
University English Language Teachers UELTs
University freshmen UF
University management/administration UM&A
Zone of proximal development ZPD

CHAPTER 1

INTRODUCTION

English is a global language, spoken all over the world for communication, and it is one of the official languages of Pakistan (Cook, 2016; Rahman, 2005; Sultana, 2009). This language connects the Pakistani people with the rest of the world in all aspects of life (Haidar, 2016; Schneider, 2007), from tourism to trade. Not only an international mode of communication (Riaz, Haidar, & Hassan, 2019), English as a language of prestige has seeped into the daily conversation of people, who change this language to suit their needs. Given the role that this language plays in upward social mobility, the university curriculum expects new entrants to communicate in English. Taking into account the most common school background of such students (Kanwal, 2016; Zulfiqar, 2011), this appears to be an unrealistic expectation. It thus becomes vital to teach them oral skills consciously, as a directed activity. English speaking skill (ESS) is the target skill that needs to be fostered (Norris, 2009, p. 412). However, teaching the English language to mixed-ability large classes is one of the challenges in Pakistan (Shamim, Negash, Chuku, & Demewoz, 2007).

In fact, effective learning depends on effective teaching (Riaz, 2012). The learners could complete their tasks and perform their assignments in a better vein in smaller classes (15 students), but at freshman level, in a class of 40 students, managing task-based activities is difficult. Large classes are environment-based professional challenges for language teachers (Wette & Barkhuizen, 2009). However, a teacher can create an environment that necessitates language learning, and can think of different ways to motivate language learners to learn and practice language (Riaz, 2012). Both types of motivation, intrinsic (learning for personal satisfaction) and extrinsic (learning for reward or avoiding punishment), are interdependent (Deci & Ryan, 2010; Mulvahill, 2018). In the process of learning and teaching, motivation helps in self-regulation. By internalizing regulation, learners and teachers experience greater autonomy in action.

Exam-based instruction, the syllabus for language teaching, the mixed abilities of the UF, lack of motivation, fatigue, and the anxiety to give more time to core subjects make developing classroom oracy, and testing it in large classes, appear not only difficult but next to impossible. Next, teaching is rounded off by testing. Testing is an effective activity for teachers and learners to know where they stand (Laar, 1998). “Some accommodation is required… some criteria need to be determined” (Hassan, 2009, p. 263) for the evaluation of students. Exploring the possibilities of testing the response-ability of the learners is one aim of this study. However, ‘operationalizing its assessment’ (Norris, 2009, p. 412), particularly the nature of the testing construct, is not the goal of this study. The objectives of this research are: (1) to teach oral skills to second language learners, (2) to test the suitability of Kim’s (2010) criterion for assessing learners’ oral skills, and (3) to examine the viability of the hypothesis that university freshmen improve their English speaking skills if taught and assessed purposefully in English courses. Being one of the official languages, English has become an important language of and for communication. The learners use it for a variety of subjects and activities during their course of studies. The books they study for engineering courses are written in English, and the terminology they use for their core courses originates from English. Beyond this, they attempt their examination papers in the English language. While communicating, almost half of their expression, if not all of it, relies on this language through code mixing and code switching. Using English as the main language of communication, Pakistani language learners code switch because they use their first and second languages, e.g. Punjabi and Urdu, in the same routine situations (Crystal, 2012).

3

When English has such a significant place in their academic life, it is crucial for freshmen

to enhance their proficiency in this language. Testing is also very important to determine

learners’ starting and sustaining strengths, and to monitor their performance as the course

progresses. Learners need to have communicative competence in English in order to aid

their learning of other subjects. Therefore, teachers need tools to assess this competence

progressively. To suggest an accessible testing criterion is one of the objectives of this

exploration. Although this investigation is a case study of the Bachelors of Mechatronics

Engineering, first and second semester (2013-2014), Air University, Islamabad, its findings

can benefit other institutions and ELT centers. The offshoots of the topic go beyond a

particular university, occasion, time-frame or course.

Speaking skills refer to the ability to talk with others for exchanging ideas and for

enhancing knowledge and understanding. In the present global society, most of the

countries are devoting considerable resources to respond to the incessant needs for English

language teaching (Savignon, 2018). Oracy is the broader spectrum against which I have

chosen to focus primarily on ‘speech making’. Oracy is a type of spoken discourse that

involves coherence and turn taking. Considering coherence and cohesion in spoken

interactions can mark discourse competence (Riggenbach, 2006). Despite their basic

importance, speaking skills are hardly touched in most ELT classes in Pakistan.

My study was fundamental in that I wanted to understand the actual practices of English

teaching and learning (Riaz, 2012, p.2) and their relationship to English speaking skills

(ESS). In order to test the validity of the hypothesis that the UF evolve their English

speaking skills if taught and assessed in English courses, I explored answers to my research

questions i.e. how the learners can be taught oral skills, and what the factorial structure of

the speaking test is.

I have organized the rest of the chapter systematically to discuss my positionality in the

first section, since I have been an essential phenomenon in this mixed-methods research.

The second section probes the context of the current study beginning with the HEC

curriculum to teaching/testing practices of ESS of the UELTs, to learning practices of the


UF. The third section provides the statement of the problem, identifying gaps in the

teaching/learning processes of ESS. The fourth section states the research objectives.

The fifth section presents the research questions and hypothesis. The sixth section presents

the significance of ESS, and its subsection, 1.6.1, highlights the relationship of communicative

functions and communicative competence. The seventh section links the teaching of oracy

with the testing procedures. The eighth section offers the rationale for evaluating speaking

ability. The ninth section sketches the methodology for this research study. The tenth

section clearly defines the boundaries, as the delimitation of this study. The eleventh section

presents a chapter-wise breakdown of the thesis. The twelfth section concludes the chapter.

1.1 Positionality of the Researcher

It is relevant to discuss my positionality and what it brought to the research since

positionality plays an important role in qualitative studies (Bourke, 2014; Creswell, 2012).

I studied at a Cantonment Board school and college. Being a member of the Blue Birds and

Girl Guides, in addition to being the Head Girl of the school, I always felt the need to use the

English language, crucial for ‘networks of power’ (Ashraf, 2006, p. 209), for

communication purposes (for reasons, see section 5.1.5). However, my communication

and conversation remained an amalgam of Urdu and English, as was the custom in those days.

My medium of instruction changed from Urdu to English at college level. Therefore,

throughout my school and college days, I personally tried to develop my speaking

competence by interacting with other competent speakers at school and college; in family

and social circle. The reason is that the English language has always been identified as a

language of power and domination (Shamim, 2008; Zulfiqar, 2011). In order to enhance

my English speaking competence, I managed to do a short course in the English language

other than my regular studies at Intermediate level (grade 11 and 12). I have been observing

students at different levels since 1991, when I joined the teaching profession. As the in-

charge of a language lab in Riyadh, Saudi Arabia, I found that teaching spoken English was not

effective without equitable weightage for it in the overall assessment of the English

language. Throughout school life, the learners need to manage academic conversations


other than reading and writing assignments. Speaking and listening happen to be the most

important and fundamental skills.

During my Master of Philosophy, I discerned a lack of speaking ability among some of the

most knowledgeable classmates. Then, while conducting research sessions for my MPhil

thesis (Riaz, 2012), I observed that more than 50% of the time was invested in discussing

an issue before writing about it concisely. I realized the importance of discussions for

writing sessions. I observed confidence among the nonnative/second/third/foreign

language learners of the research sessions. Given an opportunity to voice their analysis of

a written statement, they grew clearer and more committed to learning than before. These

research sessions led to the idea of teaching and testing of speaking skills at university

freshmen level. As a UELT, I was cognizant of the difficulties of checking the written

examination of the UF in large classes. So much so that with the help of the University

Automation department, I introduced computer based tests (CBTs) in Technical Report

Writing courses in 2003. These tests were based on multiple choice questions (MCQs).

It is important to remember that theory might be tested through MCQs in written form; ESS,

however, need a matching (Puppin, 2007) testing system. Then I discussed this idea with the

Dean of the department who approved it saying “very little work has been done on oracy

in Pakistan”. I, along with the English department, started compiling a textbook and

working on redesigning the outline for the English ‘Communication Skills’ course. The aim

was “an attempt to achieve certain ends in students-products” (Srivastava, 2005, p.3).

In 2013, the Higher Education Commission (HEC), Pakistan mandated the UF of Mechatronics to

use the language lab for enhancing their speaking ability. I talked to the Senior Dean,

heading the department of Mechatronics at that time. I sought his permission to use

the language lab to let the learners improve their speaking skills. Incorporating in-lab speaking

activities in the Communication Skills course, I started the task in line with the Curriculum.

This practice fulfilled HEC, Pakistan’s demands for the fresh graduates to use the language lab

as well.

One minute of speaking per class could not develop the meaningful, intelligible discourse

competence of the nonnative/second language learners, nor could it develop a language

speaker’s communicative competence. ‘The little use

of the English language that traditionally takes place, is in the form of one-sentence

expressions or one-word verbal expressions inserted in Urdu conversations. One may

comment that this form of occasional code switching or a short sentence can hardly be

termed as English language’ (Manan, 2015, p.177). However, the weekly recordings of the

students helped me to generate renewed commitment. This study presents an integration of

the language lab into usual classroom practice, which was intricate and demanding, requiring the

utilization of “a host of skills and instructional paradigms” (Greenfield, 2003, p.57). I

wish I could have videotaped the spontaneous responses of the UF. Much was going on

in the Communication Skills and Technical Writing classes. It was difficult to hold the

camera and videotape a response there and then, but in the future this videotaping

might be made possible for English teachers through in-built cameras in the

classrooms. This research study demonstrates how language speaking skill and technology

can be unified through the technical support or “professional growth activities”

(Greenfield, 2003, p.58).

Teaching abroad, using a language lab, I concluded that assessing all the language learners

in their speaking performances was difficult and time-consuming. Therefore, on joining the

profession of teaching in Pakistan, I comfortably accommodated the written examination

of English language learning. However, while teaching at the university level, the family, academic, and

social pressure on university students to interact in the English language helped me realize

the impact that testing (Hughes, 2001; Lasagabaster, 2011) might have on the speaking

performances of the university freshmen (UF). Testing brought more deliberation to

speaking (Norris, 2009).

For testing, a criterion for gauging the oral skills of language learners from a variety of

school streams was requisite to motivate them to move forward to an adequate level. At

UF level, the language learners are developing and building up their linguistic experiences

that keep them motivated (Mulvahill, 2018). In a nonnative/second/third language learning

context, it was important to show the UF what to do, and not to let them feel inferior.

As a teacher-researcher, I could do it better than a non-researcher teacher. Having a feel


for the UF of AU, my presence at the site made me cognizant of the data I collected

(Hubbard & Power, 1993). An outside researcher might not have done it with such

cognizance. Therefore, I was interested to explore how testing ESS could be implemented

at university level.

My MPhil research (Riaz, 2012) informed me that administrators had to invest time in

their employees’ (departmental coordinators and personal secretaries/assistants) writing

tasks due to the employees’ limited writing abilities. The administrators edited their coordinators’ writing.

As language learners, they had not been trained to write routine correspondence independently.

As a teaching researcher, I realized that the employers or administrators could not possibly

speak, or present on behalf of the newly hired employees. The newly hired had to do the

speaking themselves. Thus, I was keen to equip the UF to speak English. Employment

was the targeted goal of the UF. Employers were not satisfied with the English skill of their

employees. Choosing teaching practices from the UELTs and combining them with the

learning practices of the UF, I did not scrutinize the UF for problems with pronunciation,

intonation or pacing in the beginning. This helped them build confidence in the target

language. The UF’s minor misunderstandings were ignored. While learning, students

committed errors but time and practice taught them gradually. I took corrective measures

through collective feedback via email to all the UF. This type of feedback saved the UF

from humiliation and pedagogical intervention. It also saved me, as a UELT, from the time-

consuming explanation of individual feedback. In addition, the other UF remained involved

in revising syntactic structures or inaccurate lexical forms.

Thus, my personal experience as a student, as a University English language teacher cum

researcher, and then, as a scholar for higher education helped me understand the UF’s

problems in using English. In order to locate the role of testing in the speaking performances of

the UF, I am next going to discuss the context of the current study. I introduce some

methods and techniques for enhancing and assessing oral skills that could be of benefit

wherever the language is taught in the country.


1.2 Context of the Current Study

The HEC Curriculum (English) for Bachelor of Engineering (revised in 2009) seeks ‘to

improve the students’ proficiency in English Language,’ a broad objective which is further

divided into the skills of reading comprehension, writing, listening, and speaking. The

implication is that equal emphasis be given to the skills of oracy as to those of literacy.

However, it is observed that in actual practices students rarely get a chance to speak, so

this aspect of the curriculum remains underdeveloped. There are several reasons for this,

the most obvious being (1) the nation’s public examination system, which is heavily

weighted towards writing, (2) the poorly trained teachers (Kanwal, 2016), many of whom

rely heavily on input from local languages for their teaching, and (3) the difficulty of

defining, controlling or judging the spontaneous nature of natural speech, something that

can neither be written into a syllabus nor assessed through written examinations. At this

stage it is enough to say that the main impediment in the teaching of oral skills is the

difficulty of evaluating them. This has led to my main focus, which is the ‘testing’ part of

the learning cycle. Clarifications in this area can have a kind of backwash effect, leading

to clearer definitions in syllabus design (Canale & Swain, 1980), to more purposeful

teaching, and to a greater awareness among all stakeholders of the importance of these

skills. In effect, some aspects of course design and classroom practices move backwards

from expectations (and constraints) in assessment, rather than forward from preconceived

objectives.

The BEMTS intake (2013) showed that only 11% of the UF from ‘O’ and ‘A’ Level education

managed to get admission at the University (see Section 3.3.1). The rest of the UF were

from government schools and colleges. The stakeholders in the education system needed

to put in more effort to enhance the UF’s ESS. The UELTs and the UF were required to

have adequate motivation to reach a sufficiently proficient level of ESS.

In the United Kingdom, Practitioner Research comprised 10 projects formulated on the

basis of ideas that emerged from published research findings within the area of classroom

talk and learning. These projects were linked together under the title, ‘Better


communication skills as a means of reducing the barriers to learning’ and were sponsored

by Education Action Zone. The focus of these projects was to enhance learning through

talk and to reduce barriers to learning. The National Oracy Project was also one of them

(Thompson, 2007). Although the context of these projects is different, they reinforce my

understanding and approach in these matters, namely, that assessment moves learning

forward (Heritage, 2007). Assessment is usually the incentive behind active learning that

generates accountability among the users (Savignon, 2018). ‘Teaching to the test’

(Popham, 2001) is common in Pakistan. Neither students nor teachers want to waste time

on activities that will not be tested. Students want to know if such tests will make a

difference to their overall results. If they are not a part of official procedures, they lose

interest.

In the Pakistani education system, speaking continues to be ignored or, at best, given sketchy

and grudging attention, and that too only in a few institutions. Second language learners

are left with a generic weakness. Conscientization (Freire, 1970), a concept that seeks

‘critical consciousness,’ needs to be introduced to equip students with language skills.

Research processes help to reorient, focus and energize co-researchers to widen their

understanding and transform the existing reality (Smith, 1996). Though time-consuming,

learners need to be given freedom to express their ideas, thoughts and opinions. Language

trainees should neither be restricted nor oppressed, nor should teachers be allowed to

hegemonize in an overbearing manner. I seek to reorient policy making, curriculum

development and classroom practices to give due attention to (1) the enhancement

of oral skills through teaching and evaluation, (2) the assistance of learners in removing

barriers to learning, and (3) the motivation of learners towards visible progress on the

ladder of success.

My study focuses on the teaching and testing of oral skills (restricted to speaking, though

listening is presupposed). Historically, language teaching methodologies have failed to pair

the teaching and testing activities of ESS in English language learning (Norris, 2009).

However, their development can help students in many circumstances. Oral skills allow

the exposure, perception and negotiation of social and political contradictions (Thompson,


2007). This research identifies the gaps, i.e., lack of a criterion, lack of practice, lack of

testing, and minimal weightage for English speaking skills in overall assessment of English

language, that exist in our approaches to second/third language teaching for students of an

engineering university. These gaps should be understood, acknowledged and addressed

through practices available in the world today, especially those that can be adapted to a

national context characterized by light budgets, low motivation, poorly trained teachers,

lopsided curricula and the absence of testing requirements (Kanwal, 2016).

1.3 Statement of the Problem

English, being one of the official languages of Pakistan, is written and spoken all over

the country in its own restricted way. It is taught throughout the country; the curriculum is revised

at given intervals to develop and assess the language capabilities of learners. In this context,

classroom talk is carried out, and writing is done and assessed. However, the focused (teaching

and) testing of the oral skills is commonly avoided. It is painstaking and time-consuming

to assess oracy; it is difficult for the teacher to grade students in the absence of a well-

practiced and valid criterion of assessment. Serious attention has to be given to criteria for

evaluation, or levels of achievement (Canale & Swain, 1980, p. 25).

In order to address this problem, this study uses Kim’s (2010) analytic scoring rubric

containing five rating scales (Meaningfulness, Grammatical Competence, Discourse

Competence, Task Completion and Intelligibility) to assess the students’ recorded

responses.

The assessment methods fulfill the aims and objectives of the courses (Bachelors of

Mechatronics Engineering, Fall, 2013 & Spring 2014).

1.4 Research objectives

a) To critically examine how learners can be taught English oral skills

b) To experiment with the structure of a speaking test


c) To measure the extent to which raters (students and teachers) contribute to

improving the UF’s speaking performance by including tests in the course

d) To evaluate the usefulness of tasks in contributing to students’ speaking

performance

The practice of teaching the English language needs to change. English possesses the

status of an international language (Kachru, 1990). However, language teachers teach

the English language by teaching English literature. Instruction by the lecture method does not

support interaction. Interactive and task-based teaching (Alam & Bashir Uddin, 2013)

might promote English oral skills among the language learners (see section 2.2). A task is a

function, errand, exercise, project, real-life situation (Hassan, 2004; Rabab’ah, 2003),

scenario or story that entwines these elements (Konno, Nonaka & Ogilvy, 2014). Task-based activities

range from announcing to responding. Tasks help learners perform functions linguistically

(Lalljee, 1998). In other words, a language use task is ‘an activity that involves individuals

in using language for the purpose of achieving a particular goal or objective in a particular

situation’ (Bachman & Palmer, 1996, p. 44). Moreover, examining the structure of a

speaking test could help to measure the strengths and weaknesses (Poonpon, 2010) of the

university freshmen. Raters, particularly the teachers (Kim, 2010) and generally the students,

contribute to the UF’s speaking performance. However, attaining the research

objectives was challenging in large classes.

1.5 Research questions / Hypothesis

This research addresses the following four questions:

(1) How can the learners be taught English oral skills?

(2) What is the possible factorial structure of a speaking test?

(3) Do raters (students and teachers) contribute to the UF’s speaking performance? If yes, to

what extent?

(4) How do tasks contribute to students’ speaking performance?


This study examines the validity of the following hypothesis: the English speaking skills of

university freshmen evolve if they are purposefully taught and assessed in English courses.

The contribution of tasks in promoting students’ speaking performances is gauged through the

factorial structure of speaking tests, a structure composed of five essential factors: 1)

Meaningfulness, 2) Grammatical competence, 3) Discourse competence, 4) Intelligibility,

and 5) Task completion. The factorial structure of speaking tests could help the raters

and the testees to gauge speaking performances.

1.6 Significance of English Speaking Skills

English Speaking Skills (ESS) are in high demand in professional life. The power of

English language is ‘symbolic’ as well. It endows its users with ‘a certain linguistic

capital’, ‘based on enciphering and deciphering’ as an ‘economic exchange’. Thus,

‘utterances’ are ‘signs of wealth’ and ‘signs of authority’ (Bourdieu, 1991, p. 502).

However, ‘in Asian countries, English is either a foreign language or a second language’

(Patil, 2008, p. 230). As a second/third or foreign language, English in Pakistan has

become a ‘survival’ requirement (Canagarajah, 2005; Lambert, Genesee, Holobow &

Chartrand, 1993). Global prospects of English language (Annamalai, 2004; Crystal, 2012)

have revolutionized its role in economic prosperity. English speaking nations,

particularly the United States, have a governing role in politics and economy within the

anglosphere. The use of English as a contact language on

internet has incited ‘a pull toward English as a much sought-after commodity at the

national, subnational and supranational levels’ (Annamalai, 2004, p. 6). The global

(Ntshuntshe, 2011) growth has transformed English into the language of International

Capitalism (Pennycook, 1997). Globally learners aspire to learn English for functional

reasons, i.e. passing an exam in English, getting a raise in salary, reading books in English

or becoming a teacher of English (Gatenby, 1948).


As an example of the role of English in social mobility, Pakistani graduates well versed in

the English language choose to join multinational corporations or international non-

governmental organizations (NGOs). In fact, performing in English can help the UF

accomplish more than performing without English language (Greenfield, 2003). Some of

these language competent graduates join civil services or the military (Haidar, 2016;

Rahman, 2005a). However, most of the employers are not comfortable with the English

skills of their white-collar employees, even though they consider English language competence

as the most important factor for fresh recruits. Without disagreeing with the concept that

English divides (Durrani, 2012; Haidar, 2016; Ramanathan, 2005a) this research presents

the other side of the concept that English unites the global language users.

ESS is a source of power and learners can realize it through formal assessment (Shohamy,

1993). English is a single universal language that allows all mankind to communicate with

each other directly (Schneider, 2007). As the world’s de facto lingua franca (Majhanovich,

2013), the English language has diversified for international communication, politics, commerce, travel,

media and news. Crystal (1997) explains the role of English as the leading

world language (Schneider, 2007, p.1). English is enjoying unprecedented status (Crystal,

2012) in the 21st century (Majhanovich, 2013). In Pakistan, English is mainly restricted

to the formal domains, such as administration and formal government machinery

(Manan, 2015, p.236).

However, the English language has not only managed to stay in formal and official functions;

it has indigenized/domesticated and grown local roots (Schneider, 2007, p.2). English is

most probably the language of communication at international conferences. Globally, more

universities are offering programs in business studies in English. Internationally, policies

are regularly written in the English language. Undeniably, the international science journals

are progressively published in English. Moreover, being a lingua franca, it is a dominant

language in the administration and the conferences of the African Union (AU), which

represents more than fifty countries comprising about one billion people

(Majhanovich, 2013). It is the language of communication of the Association of South East

Asian Nations (ASEAN): Indonesia, Malaysia, the Philippines, Singapore, Thailand,

Brunei, Vietnam, Laos, Burma (Myanmar) and Cambodia (Clayton, 2006). It has

always been the sole official and working language of ASEAN (Kirkpatrick, 2008). Thus,

English has been the common language for diverse types of linguistic exchange.

1.6.1 Communicative Function and Communicative Competence

Proficiency in English besides one’s regional language opens the gates of employment

(Agnihotri, 2007; Khan & Chaudhury, 2012). Being an international language (Holliday,

2005), English is useful for communication and work opportunities with

people of other nationalities (Hamid et al., 2013). English has been considered as an

‘indispensable means of communication’ in the corporate sector. It is one of the most

significant employability and global literacy skills (Khan & Chaudhury, 2012, p.116).

Globalization has metamorphosed English into a prestigious language of the world

(Rassool, 2013). Other than standard British or American spoken English, the international

citizens need to understand varieties of English spoken around the world (Flowerdew &

Miller, 2005). The fluidity of English language is not merely due to social and

geographical mobility (Blommaert, 2010). Rather, the pervasive use of the English language

has infiltrated the lives of Pakistanis. So much so that without English competence, people

cannot perform linguistically. English is ‘important for competition in a globalized world

order’ (Government of Pakistan, 2009, p.11). ‘Linguistic markets are hierarchical, ranging

from highly formal to informal, and different forms of linguistic capital have value in

certain markets’ (Haidar, 2016, p. 31). Without comprehending the English language,

communicability might be hindered by major or minor incomprehension and

misunderstanding interfering with business(es). These limited interactions among speakers

‘reinforce the existing language deficiency’ (Ashraf, 2006, p. 2). The commoners become

language-less in their localities (Haidar, 2016, p.53). To save the Pakistani youth from

becoming language-less, it is important to strengthen their communication skills.

Due to the British colonial background in Pakistan (see section 2.9), English remained the

most crucial language in education and professional positions. The constitution of Pakistan

is in the English language. English, being the official language of the country, empowers


Pakistanis to manage business internationally (see section 2.10). English language connects

Pakistan with the world. While co-existing, Pakistani languages and the English language

influence each other. As a result of this co-existence, another variant of the English language,

called Pakistani English, has been born, cultivating nationally and internationally intelligible

varieties of English (see section 2.12). The localized varieties of English spoken around

the world are referred to as World Englishes (Bamgbo, 2003).

However, the concept of communicative competence (Hymes, 1972) is indispensable in

world Englishes (Berns, 2019). Hymes’ notion of ‘ability for use’ links to the idea of

communicative performance to foreground interaction (Hymes, 1972, p.64).

Communicative functions (Canale & Swain, 1980) are means to communicate, e.g.

greetings, apologies, and congratulations. These functions and tasks tend to deliver

information. Semester-1 used a communicative/functional approach to learn ESS (See

section 3.4.6). The theory of sociolinguistics encompasses concepts of ‘verbal

repertoire’/‘communicative repertoire’, ‘linguistic routines’, and ‘domains of language

behavior’ (Hymes, 1972, p. 70). The communicative repertoire of the UF varied across

second/third/nonnative/foreign language learners’ levels. Their linguistic routines were

the activities of a single person or interaction of two or more UF.

A vision of prosperity has elevated the status of English to a language of higher education.

The UF’s ESS could lead them to succeed in learning modern knowledge, professions, and

higher positions (Canagarajah & Ashraf, 2013; Cheng & Curtis, 2010; Hassan, 2009; Riaz,

Haidar, Hassan, 2019). Integrating the second language with the learning of academic

content, the learners made greater progress in English (Lambert, Genesee, Holobow &

Chartrand, 1993). English is a highly regarded language in academia. Oracy takes

precedence because immediate communication takes place through oral channels (Murphy,

Hildebrandt & Thomas, 1997; Wilkinson, 1970).

The English language is a means to advance professionally and socially. That is why ‘O’ and

‘A’ Level students, having learned English as a language, function linguistically in

professional fields. English competence being the requirement for job interviews, these

students more frequently pass this initial ordeal. It is crucial for the education system to

empower students to be able to compete on an equal footing with each other in the job

market. Therefore, English language teaching and learning must take priority in the

educational systems in a global world (Haidar, 2016; Majhanovich, 2013).

Spoken communication has a coordinating role in the learning process (Hall, 1983;

Wilkinson, 1970). Natural language acquisition takes place through understanding

messages without discerning every single word and structure in it (Krashen & Terrell,

1985). This study deals with deliberate language learning that takes place in a classroom

and a language lab. It starts with communicative tasks or language situations (Hymes,

1972) that involved the UF in Sem-1 to talk about their most exciting experiences in life

and/or discuss problems and their solution. Through social knowledge, the UF’s

meaningful comprehension enabled them to know how and when to use utterances

appropriately (Hymes, 1972). Their communicative competence was benchmarked

through communicative functions termed as tasks in the present research throughout two

semesters. Functionally, the UF informed, persuaded and promoted good will through their

recorded speaking performances (RCPs). Their communicative competence, aligned with

the two semesters’ curriculum, was appraised through an analytical framework.

There are people in some communities around the world, and in Pakistan, who never make

the transition to reading and writing (unless necessary). They are satisfied to live in a speaking

culture (Flowerdew & Miller, 2005). There are tribal languages in South America, Africa

and Asia that still have no writing system. Asking a question or raising another option

might position the UF to augment a point in a classroom environment. Hence, the

education system in Pakistan needs to incorporate ESS in English language learning to

make the UF linguistically functional (Riaz, Haidar & Hassan, 2019). The UF’s ESS lead

them to success in learning modern knowledge, professions, and higher positions

(Canagarajah & Ashraf, 2013; Cheng & Curtis, 2010; Hassan, 2009). Thus, developing

ESS and confirming the development of language through a criterion rather than assuming

language learners’ communicative competence is the need of the day. In fact, the process

of learning and usage of English language requires conscious efforts (Schmidt, 1995) on


the part of teachers and learners. It should be mandatory for the pupils to learn ESS for

competent bearings (Rahman, 2005). The act of learning depends on ‘intended’ teaching.

Teaching oracy might not exist independently; it is consciously appropriated through testing.

This consciousness actively intends (Eagleton, 2011) English speaking skills.

1.7 Importance of Teaching/Testing of Oracy

Language teachers commonly entertain a perspective that language teaching in classrooms

needs to be associated with the world outside the classroom (Widdowson, 1978, p.16).

Language is a factor present in every phase of socialization in a given community, from

basic needs to task orientation, instructions, process comprehension, task completion, and

feedback, continuation of tasks and improvement of processes—in fact, of living itself. On

analysis it is seen that this use is largely confined to reading and writing (Kanwal, 2016;

Manan, 2015; Riaz, Haidar & Hassan, 2019; Zulfiqar, 2011). Commercial and official

transactions, planning, coordination, even letter-writing and day-to-day written

communication, are often conducted in some kind of localized English, while

speaking is usually done in either the national or a regional language, reinforcing makeshift

dichotomies that lead to a number of psychological and linguistic anomalies. However,

while important, the analysis of social practices and implications is not the primary concern

of this limited study. It is critical to note that most of the private and public schools cannot

develop the required linguistic competence in their learners (National Education Policy

2009). Research suggests ‘extra coaching’ to improve students’ language skills (Kanwal,

2016, p. 233).

The centrality of speaking skills (Aleksandrzak, 2011) in learning a language and in using

it for meaningful activities (Norris, 2009) in everyday living demands deliberation not only

in teaching those skills to learners, but also in assessing those skills to convince learners of their

seriousness. Assessment of learning improves teachers' and students' performance

(Pedulla, Abrams, Madaus, Russell, Ramos & Miao, 2003). A progressive focus on formal

accuracy in linguistic competence might reduce the danger of fossilization (Canale &

Swain, 1980). However, excessive interruption in the form of correction on the part of


language teachers might inhibit the process of learning. ESS requires testing and grading

like English writing skills (Riaz, Haidar & Hassan, 2019). Assessment of second/third

language is fundamentally consequential within the practices of language teaching.

Considering the evaluation of ESS may steer learning and teaching

processes in Pakistan toward improvement (Shahzad, 2018). Assessing speaking skills regularly and

contributing continuous feedback is important to minimize deviations which might

otherwise harden into permanent features. Teaching and classroom activities cannot be

conducted without communication and oral communicative exchanges are the first step

(Wilkinson, 1970). In language and culture, members of a community routinely participate

in a speech activity known as conversation (Riggenbach, 2006). Learners need oral

communicative competence, and teachers need to have tools not only to teach, but also to

assess this competence for continuous improvement. Regular testing enables learners to

see where they stand and what needs to be improved.

I discussed students’ problems in language learning due to the gaps between College

English Education and University Requirements with the Dean Faculty of Social Sciences

(FSS) as well as the Senior Dean Air University. Their support encouraged me to carry on

with the practitioner’s research in spite of the time and curriculum constraints. The Faculty

of Social Sciences, Department of English, responding to the increased demand to improve the

speaking skills of undergraduate students, included a number of modules on oral skills in the

Fall Quarter 2013 Air University reading package for the Communication Skills course meant

for the UF (e.g., Introduction-Advanced Communication Skills and Review of Communication

Basics from MTD Training (2012), Oral Presentation Skills (Storz, 2002), Social Communication

(Carver & Fotinos, 1998), and English for Work: Everyday Technical English (Lambert &

Murray, 2003)).

methodologies provided the learners with opportunities to pay conscious attention to their

speaking ability, and to address their generic weaknesses with deliberate efforts by

performing a variety of tasks and engaging in cooperative activities like role-playing and

interaction to practice their language and spoken skills. Activities and tasks influence the

language used (Bygate, 2016). It can be called ‘language-in-action’ (Carter & McCarthy,

199); the UF performed a variety of activities. At that time, the departmental consensus was


that a functional approach, a way to have ‘a voice in world affairs’ (Crystal, 2003, p. 24), in the

English curriculum might help the teaching faculty to enhance the ESS of the UF. Something

more was required for conscientious teaching and learning of ESS. Then I, as a UELT,

realized the need for testing ESS to enhance speaking skills.

This research work adds to two dimensions of teaching English language: it examines the

type of experiences that the English teachers can build into the class teaching and then the

type of testing that the teachers can administer. Then the audio recorded speaking

performances of semester-1 and semester-2 were assessed and compared to find the

difference achieved in their speaking ability. Research is a state of mind that instead of

waiting reaches out to change (Hubbard & Power, 1993). Language is produced in a

classroom environment by speaking and by writing; it is received through reading and

listening. It is noted that primacy goes to oracy in both its receptive and productive aspects.

In the natural acquisition of language, we see that a child develops oral skills long before

he/she starts to learn reading and writing. There are people in some communities round the

world (Kachru, Kachru, & Sridhar, 2008; Wang & Postiglione , 2008), and in our country,

who never make the difficult, relatively artificial transition to reading and writing, satisfied,

apparently, to live their lives in a speaking culture. Even where literacy exists, there are

many (sometimes highly) literate persons who rarely employ it. Communication is done

most of the time through listening and speaking. Oracy takes precedence because

immediate communication takes place through oral channels. Why, then, is oracy not given

the same status in the curricula and teaching as literacy?

As a researcher my stance is that, without ignoring the obvious advantages of literacy,

oracy should be given equal treatment. As Riggenbach (2006) states, this debate can be

resolved only if syllabus designers, linguists, researchers and teachers are convinced that

better oracy can be attained if addressed consciously, not as a vague, hoped-for by-product

of other activities. Generally, talk and learning have a strong relationship, and particularly in

classroom contexts, this relationship is even stronger. Keeping this relationship paramount,

this study demonstrates teaching English to second language learners, using


an analytic scoring rubric, and sharing rubrics with the learners to achieve improved

outcomes.

1.8 Rationale for Evaluating Speaking Ability

The assessment of the university students is divided into two parts. The first part is

internal assessment, comprising quizzes, assignments, a project, and presentations,

including midterm exams; this part is allocated 55% of the total evaluation. The second part

is the final examination, which is allocated 45% on a scale of 100 percent. When the

students know that only 5-10 % of their assessment is going to be on speaking ability

(semester project presentation), and the remaining 90-95% of the assessment would be on quizzes,

assignments, semester project, midterm examination and final examination, many of the

students lose interest in oral activities, further reducing the chances to enhance speaking

ability. Institutional support to teachers is mandatory to sustain innovation and change

(Savignon, 2018). It is an essential collaboration (Greenfield, 2003). Hence, if

administration, management, and teachers aim to improve learners’ speaking ability, it is

vitally relevant to test students on their oracy, and give reasonable weightage to their

speaking performances in overall assessment of English Language.
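The grade composition described above can be sketched numerically. Note that the split of the internal 55% across individual components below is a hypothetical illustration (the text only specifies that speaking carries roughly 5-10% via the project presentation), not the university’s actual marking scheme.

```python
# Illustrative sketch of the assessment composition discussed above.
# The 55/45 internal/final split comes from the text; the breakdown of
# the internal components is an assumed example, not the real scheme.
internal = {
    "quizzes": 10,
    "assignments": 10,
    "project_presentation": 10,  # the only orally assessed component
    "midterm": 25,
}  # internal components sum to 55%
final_exam = 45

total = sum(internal.values()) + final_exam
speaking_share = internal["project_presentation"] / total * 100
print(f"total = {total}%, speaking share = {speaking_share:.0f}%")
```

Under these assumed weights, speaking contributes only about a tenth of the overall grade, which illustrates why learners may deprioritize oral activities.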

1.9 Methodology

I have given a detailed account of the methodology I employed for this research in another

part of this study (Chapter 3, Methodology). At this point it is enough to say that I

considered it important to form some base-line assessment of background proficiency

through a survey administered to 120 students of the first semester in three sections of

Mechatronic Engineering (Riaz, Haidar & Hassan, 2019). My research focuses only on the

submitted audio recorded speaking performances of students from the three sections of

semester-1, Mechatronic Engineering throughout the course entitled Communication

Skills. These speaking performances were assessed using an analytic scoring rubric

consisting of five categories (meaningfulness, grammatical competence, discourse

competence, task completion, and intelligibility). Then again, in semester-2, throughout


the course of Technical Writing their oral proficiency was gauged using the same analytic

scoring rubric. Next, the acquired scales (proficiencies) of the first semester were compared

with the acquired scales of the second semester students to evaluate learning and skill gains.

The sample sizes of the two semesters were different, so the results were converted and

compared in percentages. The time period selected for this research was two semesters of

Communication Skills and Technical Writing, from September, 2013 to May, 2014.
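The percentage-based comparison described above, converting rubric scores into percentages so that cohorts of different sizes can be compared, can be sketched in a few lines. The five categories follow the study’s analytic scoring rubric, but the band maximum, the sample scores, and the function name are illustrative assumptions, not the study’s actual data or procedure.

```python
# Illustrative sketch: comparing rubric scores across two semesters of
# different sample sizes by converting per-category means to percentages.
# Categories follow the study's analytic rubric; the 5-point band maximum
# and all score values below are hypothetical.

CATEGORIES = ["meaningfulness", "grammatical competence",
              "discourse competence", "task completion", "intelligibility"]
MAX_PER_CATEGORY = 5  # assumed maximum band per rubric category

def category_percentages(scores):
    """scores: list of per-student dicts mapping category -> awarded band."""
    return {
        cat: 100 * sum(s[cat] for s in scores) / (MAX_PER_CATEGORY * len(scores))
        for cat in CATEGORIES
    }

# Hypothetical data: semester 1 has 3 students, semester 2 has 2.
sem1 = [dict.fromkeys(CATEGORIES, 3), dict.fromkeys(CATEGORIES, 4),
        dict.fromkeys(CATEGORIES, 2)]
sem2 = [dict.fromkeys(CATEGORIES, 4), dict.fromkeys(CATEGORIES, 5)]

p1, p2 = category_percentages(sem1), category_percentages(sem2)
for cat in CATEGORIES:
    print(f"{cat}: {p1[cat]:.0f}% -> {p2[cat]:.0f}% (gain {p2[cat] - p1[cat]:+.0f} pp)")
```

Because each semester’s mean is divided by its own maximum possible total, the resulting percentages are comparable even when the two cohorts differ in size.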

Triangulation (Haidar, 2016; Stake, 1995) (see section 3.6.1) was considered useful and

relevant for a fuller comprehension of the study’s objectives. University

management/administration (UM&A) is an overarching body for all staff working in a

university. Interviews were conducted with the UM&A to gauge their perspective on

teaching and testing of speaking skills in English, and the extent of support that could be

expected to improve those processes. Then, interviews were also organized with

colleagues, the UELTs engaged in second-language teaching to obtain their perspectives

and inputs on these matters. Their viewpoint on the students’ linguistic output was crucial

for the foundation of the research study.

These interviews were customized, tabulated and analyzed through textual analysis for the

purposes of this study. The students’ survey and evaluation of the two semesters were

analyzed through Microsoft Excel.

1.10 Delimitation

Listening, speaking, reading and writing are mutually supportive and important

components of communication. Receptive (reading, listening) and productive (writing,

speaking) skills should be taught together (Goldenberg, 2008; Greenfield, 2003;

Simatupang, Hendar & Supri, 2019). However, what happens is that reading and writing

tend to quickly overtake speaking and listening, mainly because reading and writing

(especially) can be tested relatively easily. To fill the gap that exists, as stated earlier, this

study is about teaching and testing of speaking skills (only) at the University freshman


level, at Air University, Islamabad. While significant, the analysis of social practices is not

the major goal of this delimited study.

This research suggests in practical terms an accessible criterion for testing speaking

performances, quantifying the extent of language advancement, finding areas of weakness

in knowledge and speaking performance, and facilitating teachers and students to gauge

progress and address deficiencies. Without downgrading or undermining other skills, the

focus is on speaking. Oral skills have a crucial coordinating role in the learning process

and must therefore be accorded due importance in the totality of things.

1.11 Chapters Breakdown

I have divided this research into five chapters. The first chapter introduces this study

through the researcher’s role as a UELT, finding a discrepancy in theory and practice. The

second chapter accords an overview of the relevant literature in this context. It establishes

gaps in the development of the English teaching and testing system. After reviewing the literature, the

third chapter takes up quantitative and qualitative methods to probe into the research

questions. After methodology, the fourth chapter analyzes the collected data under the

heading of data presentation, analysis and interpretation. Then, the fifth chapter submits

findings, recommendations and implications for future research followed by references and

appendices.

1.12 Conclusion

Introducing the status of the English language, the first chapter informs about the positionality

of the researcher (see 1.1), including her teaching, learning, and academic practices.

Secondly, it discusses the context of the current study (see 1.2), and thirdly, it makes a

statement of the problem (see 1.3). Fourthly, the problem statement leads to the objectives

of the research (see 1.4). Fifthly, it poses the research questions and proposes a hypothesis

for this investigation (see 1.5). Sixthly, the chapter develops an understanding of the

significance of English speaking skills (see 1.6). Then, it discusses the importance of

teaching/testing of oracy (see 1.7). Next, it presents the rationale for evaluating speaking

ability (see 1.8), followed by the methodology (see 1.9). Briefly, it describes the

organization and design of the present research. After delimiting the present research (see

1.10), the introduction submits the succeeding chapters’ breakdown (see 1.11). Finally,

summarizing chapter one in this conclusion (see 1.12), the study proceeds to the review of

relevant literature in chapter 2.

CHAPTER 2

LITERATURE REVIEW

2.1 Introduction

This chapter reviews the related research literature in the field to place my dissertation

study “as a contribution to the ongoing discourse about the topic” (Marshall & Rossman,

2011, p. 23). To start with, section 2.1 builds a road map for the review of relevant literature

for the current research. Section 2.1.1 deals with the crisis of educational quality. Taking

into account the most common school background of the UF, it becomes vital to teach them

oral skills consciously (see chapter 1). Thus, I build on accessible research about the role

teaching plays in enhancing the English speaking proficiency of the UF in section 2.2. After

that, I briefly review second language acquisition and its relevance to English language

learning in the classroom in section 2.3. I review language learning supported by language

acquisition, and relate it to the outer faculty of teaching, in section 2.4. I contrast sponsored

practitioner research projects to promote spoken English with language teachers’ efforts to

promote ESS within a running university semester in section 2.5. I review the contribution

of tasks to the UF’s ESS in subsection 2.5.1. I discuss the testing of English speaking skills

in section 2.6 to support the main theme of the study. The rationale for using a scoring

rubric follows the testing of ESS in section 2.7. Then, I review the contributions of the

users of scoring rubrics, the teachers and students as raters, in section 2.8. Section 2.9

focuses on the functionality of the English language, and the conventions of the language

to be kept for intelligibility, as an impact of British rule. Section 2.10 envisages English as

the official language of Pakistan. Section 2.11 muses on the national language of Pakistan.

These sections lead to Pakistani English in section 2.12, which acknowledges the

Englishness of English among varieties of the English language in section 2.13. Section

2.14 reviews promoting ESS academically in Pakistan. Finally, a summary of the

discussion concludes the chapter in section 2.15.

2.1.1 Crisis of Educational Quality

Insufficient English competence among applicants for undergraduate induction has been

alarming the administration and faculty of Air University about the ‘crisis of educational

quality’ (Gardiner, 1998, p. 71) since the establishment of the university in 2002. Teachers

teaching the UF have been shuffling linguistic exponents and course objectives in the

subject outlines and course breakup templates to help students overcome their

communication deficiencies and to prepare them for their relevant professions, but not “to

much satisfaction” (Zulfiqar, 2011, p. 28). University freshmen bring with them a

difference in their language communication ability (Kanwal, 2016). At university level

there is hardly any space for communicative activity to develop UF’s proficiency in using

English language (Qadir, 1996). This situation demands a review of classroom

discourse practices (Gulzar, 2009, p. 58). Without practice, language cannot be acquired

(Rabab’ah, 2003). As leaders, teachers need to have broad vision to implement what they

see as appropriate in given situations. Teacher leaders articulate a positive future for all

learners, show genuine interest in their needs and well-being, and work with administrators

to solve issues of equity, fairness, and justice. Encouraging collective responsibility to

emphasize accomplishments, teachers nurture a culture for success (Crowther, 2009).

Viewing learning as a process, teachers must understand the students’ prior learning and

what they need to learn, help the students maintain their motivation, allow time for practice

to develop skills, provide feedback, understand the surroundings, and develop a climate in

which students can learn and be self-directed.

2.2 Teaching English Speaking Skills

Hierarchically, basic education stabilizes primary education. Colleges reinstitute school

education and universities extend college education. Since the quality of secondary

education is lapsing (Memon, 2007), the responsibility of the university ultimately grows

greater than ever. At university level, English language skills lack ‘quality and attention

with a number of implications for learners’ overall skills and attainment levels and thereby

reducing chances of upward mobility’ (Kanwal, 2016, p. 72). Behavioral educational

philosophy focuses on increasing desirable behaviour for teaching by ‘incentives’,

‘prompts’ and ‘reinforcement’ to ‘encourage its occurrence in the future’ (O'Donnell, Reeve

& Smith, 2011, p. 168). A teaching methodology is required that helps the language learners develop

speech patterns and argumentation to express thoughts in the English language. Such an

approach to teaching boosts learners’ communicative competence in classroom activities

because they practice English as a device for purposeful interaction (Kusaka & Robertson,

2006). Communicative Competence is defined as one construct with four subcategories

reflecting the use of the linguistic system in 1) grammatical competence (GC), and 2)

discourse competence (DC). The functional aspects of communication were mirrored in 3)

sociolinguistic competence and 4) strategic competence (Canale & Swain, 1980).

In Pakistan, the ELTs usually focus on grammar (Jabeen, 2013; Patil, 2008; Zulfiqar, 2011)

to teach English language. However, ‘grammaticality alone’ is ‘not sufficient’ (Patil, 2008,

p.229). Therefore, the linguistic system is catered through GC and DC in the analytic

framework of the research, whereas the functional aspects of communication are

measured through meaningfulness (MFN), task completion (TC), and intelligibility (INT).

The approaches to teach ESS may differ, but the practical move empowers the language

learners to manage the relevant structure of sentences and then, modify attained sentence

structures to handle diverse situations (Wilson & Peterson, 2006). ‘Students learn (if

anything) precisely what they are taught’ (Carroll, 1971, p. 111). ‘Technological tools’ can


be implemented as ‘pedagogical instruments’ (Bakar & Latif, 2010, p. 120) in large classes

to teach speaking skills to the students.

In Pakistan, at university level, language teaching is done through English for Specific

Purposes (ESP). These courses cater to the specific needs of the students enrolled in

different departments. The UELTs aim to develop effective communication skills (literacy

and oracy) among the undergraduates through these courses. The UELTs use discipline

specific scenarios to help the UF ‘internalize the language content’ to excel in ‘professional

environment’ in near future (Kanwal, 2016, p. 215). The observers, teachers and learners

need to be asked to focus on relevant conversation strategies (Riggenbach, 2006; Wrigley,

1994). To enhance learners’ communicative competence, many teachers integrate

communicative tasks (Canale & Swain, 1980) with conventional instructions (Cook, 2016;

Wette & Barkhuizen, 2009). Tasks that testees perform are operations with fully specified

oral content (students know what they are tested for). These operations involve expressing

day to day needs, narrating events, soliciting information, conducting instructions,

reporting, pair or group conversations (Buckwalter, 2001; Bachman, 2002; Hughes, 2001;

Laar, 1998). The pair or group conversations lead the language learners/UF to cooperative

learning (Greenfield, 2003). The learners favour exchanging ideas with each other.

Teachers clearly define their expectations to motivate (Chen, Warden & Chang, 2005)

language learners. Teachers revise tasks, the rating criteria, and pilot tests (Sweet, Reed,

Lentz & Alcaya, 2000). English being the language of higher education, the UF are

expected to communicate through it. To enable the UF to communicate, it is paramount to

teach them ESS, rationally (Haque, 1982). Oracy has been the central and primary skill

(Aleksandrzak, 2011) of the four in second language learning (Cook, 2016; Bailey &

Savage, 1994). Teaching English speaking skills in large classrooms is challenging. The

teaching materials developed in native contexts are used in Pakistan (a country of English

second/third/nonnative/foreign language learners). Language teachers teach native English

dealing with British and/or American history, literature, and culture and customs even

when English possesses the status of an international language (Kachru, 1990). Internet

use, information technology, low-cost phone calls, multinational businesses, travel,

education and mass entertainment have added to globalization. The requirement for


international students is to understand not only the standard British or American spoken

English but other varieties of English spoken across the world (Flowerdew & Miller, 2005).

Knowing that schools in Great Britain promoted oracy to enhance the ability to manage

personal needs (Laar, 1998), I gained more confidence to guide the UF to develop their

ESS. Maintaining class discipline while motivating the learning of ESS, mixing learners’ abilities, teaching the

whole class, providing the language learners the opportunity to practice, giving them

feedback, managing written quizzes, tests, presentations and exams (Park, Anderson &

Karimbux, 2016) are some of the challenges the UELTs encounter. The majority of teachers

prefer an instructional approach that can establish learners’ linguistic ability, particularly

oral skills. However, instruction or lecture method does not offer space for communicative

language use. Teachers prioritize their responsibilities in diverse manners: teaching the

subject, developing learners’ linguistic competence, or preparing them for English

examination. Some teachers perceive promoting learners’ oral abilities as their major duty.

A teacher’s major obligation is using a combination of subject-centered and learner-

centered approaches (Wette & Barkhuizen, 2009).

The UELTs use numerous ways to meet challenges. Questioning (Parker & Hess, 2001)

the language learners provides interactional resources for classroom language teaching.

Asking questions is a good initial ice-breaking technique. Open-ended questions lead the

learners to argue and reason, promoting talk. The UELTs remain at the forefront to

lead interaction. But the verbal participation in the teacher-led discussion often involves

linguistic and/or pragmatic errors that invite occasional prompts and corrections from the

teacher. However, motivating all the UF through questions in a large class is hard. Teachers

could inspire the class participants to engage in the interactive sessions through questions,

gradually seeking solicited or voluntary responses. However, ‘the little use of the English

language that traditionally takes place, is in the form of one-sentence expressions or one-

word verbal expressions inserted in Urdu conversations. One may comment that this form

of occasional code switching or a short sentence can hardly be termed as English language’

(Manan, 2015, p. 177). It also cannot be denied that some English language learners’

proficiency influences typical teacher-fronted discussions. Discussion and interaction


(Greenfield, 2003; Swain & Lapkin, 1998) stimulate student class participation. However,

‘the ratio of those who either often or always use English in their discussions with teachers

is considerably lower than those who do not use other languages such as Urdu’ (Manan,

2015, p. 174). Though difficult, providing opportunities to practice discussion, and testing

ESS, is important for enhancing speaking skills. Testing measures the impact of

teaching and learning (Lasagabaster, 2011) of ESS.

UELTs as motivators involve the shy language learners to participate in class talk through

display questions (Lee, 2006), false questions (Paoletti & Fele, 2004), or test questions to

develop their communicative language use. The interaction progresses as the teacher uses

her turns to steer the discourse in a particular direction, and the students recognize the teacher’s

speaking style and her invitations to speak next. UELTs ask referential questions (Lee,

2006) to offer adequate ground for communicative language use. These questions give

flexible access to the interactional sequence. Knowing the functions and relevance of the

types of questions to language acquisition and language learning facilitates practicing

language teachers. Language teachers inquire in three-turn sequences, initiation-response-

evaluation (IRE) or initiation-response-feedback (IRF) (Goldenberg, 2008; Cook, 2016),

to hear a response from a UF; after receiving an answer, the teacher assesses it, and

monitors and mentors the student’s learning. Questioning techniques provide linguistic means

to deliver content knowledge, and make meaning (Canale & Swain, 1980; Parker & Hess,

2001). All types of questions asked of the learners might limit genuine language use.

It enables targeted second language learners to develop self-confidence by responding to

teacher’s question. Self-confidence is another step to sustain interest in the long process of

language learning. But this practice is hard to sustain in a large classroom. The verbal

response-ability of the learners is contingent on teachers’ solicitation that is demanding.

Even though questions are asked, all the learners do not necessarily respond, nor do all of

them have the capability or linguistic confidence (Hassan, 2009) to answer those questions.

The vast majority of students ‘either do not use or sometimes use English during their

questions… English is seldom used as a tool for communication in the oral form’ (Manan,

2015, p.175). Having said this, silence in answer to a teacher’s question might denote that

students are reflecting to form a response. Thus, they might be asked to scribble their points


to reduce their anxiety and offer valid responses. Teachers provide speaking opportunities

to the second language learners considering their capabilities, shyness and nervousness

(Patil, 2008; Nawab, 2012).

Other than asking questions to elicit responses, UELTs stimulate language competencies

of the UF through situations (Hassan, 2004), scenarios, stories that entwine (Konno,

Nonaka & Ogilvy, 2014) common problems and solutions. Reflective of real life linguistic

situations (Rabab’ah, 2003), scenarios enable the language teachers to prepare learners for

real life situations (Clandinin & Connelly, 1996; Patil, 2008; Santoro & Allard, 2008).

Extending talk opportunities for the learners and compressing teachers’ talk to provide

space to the language learners through situations/skits can be a productive impetus for the

students to speak a target language. Language teachers believe in the productivity of

narratives for teaching and learning processes of English language to understand the

experiences from diverse perspectives. Through reflection and discussion, the UF learn to

use the target language available through acquisition and learning. Incorporating routine

functions, vocabulary, role play, tone, intonation, emotion, and drama (Clipson-Boyles,

1998) in the teaching process of English language speaking leads language learners to

discourse (Thompson, 2007). However, this practice is time restrained and syllabus

constrained in a running semester (Riaz, 2012).

Related to scenarios are brainstorming exercises and cartoons that involve English

language learners in classroom talk. Classroom talks enhance their speaking ability in the

target language (Patil, 2008; Nawab, 2012). But developing oracy in a large classroom is

arduous (Aleksandrzak, 2011). The activity of brainstorming achieves the goal of

increasing the students’ speaking time as they share their ideas, connect and organize them

gaining confidence for self-initiative. A large amount of information can be exchanged

through brainstorming sessions which maximizes learners’ speaking time (Cullen, 1998;

Nakatani, 2010). Language learners’ maximized time for language learning teaches them

the target language. It can be used as a starting point to develop concepts and to rearrange

thinking (Lalljee, 1998). But the students’ increased speaking time does not stretch the

limited time for language learning.


Designed activities increase learners’ consciousness about the active role of conversants

(language learners). Different methods and activities can be used to teach and enhance oral

skills according to the interests, learning styles, age, educational background, size of the

class, and learning requirements of the pupils (Riggenbach, 2006). Just as educational

background is important in fulfilling the learning requirements of the language learners,

consistency of teaching and testing of English speaking skills from the same

concentric circle is important. Language teachers can cultivate and guide the learning

behaviors in variance (Carroll, 1971). Teachers engaging the UF in discussions incline the

participants to speak. Discussions over controversial issues contribute to logical thinking,

listening to other discussants’ point of view, reflecting, evaluating and speaking out one’s

own viewpoint. Discussions are the most difficult but appropriate pedagogical approach

(Hand & Levinson, 2012; Parker & Hess, 2001; Goldenberg, 1991). Incorporating

activities (discussions) for speaking skills in all subject areas promote teaching and

learning processes (Lalljee, 1998). However, these stimuli of language production require

more support for developing the target language.

UELTs, as team trainers, address the challenges of large ESS classes by strategically distributing work among learners. To help learners practice conversing in a large class, the class can be divided into two sections: learners in the recording section collect information from the talking section and write it down. Participants need to be reminded that the purpose of the activity is rehearsing talk, not merely documenting information (Bresnihan, 1994).

Speaking skills are also taught through audio-visual aids recorded in the voices of native speakers of the target language. Such one-sided listening allows the UF only a random absorption of ESS. The viability of the English language demands that teachers coach these ‘sponge’ learners, who to some extent absorb the vocabulary, tone, accent, and style of native English speakers, into learners who actively construct meaning in English and participate in conversations (Wilson & Peterson, 2006, p. 2). Audiolingualism spotlighted speaking performances (Aleksandrzak, 2011; Cook, 2016).


Lado (1964), drawing on the developing science, stressed the audio-lingual approach to language teaching, and contrastive analysis helped to improve second language teaching materials. Rivers (1964) assumed that foreign language learning through the audio-lingual approach is a mechanical process of habit formation. Exclusive reliance on either “audio lingual habit theory” or “cognitive code-learning theory” is inadequate (Carroll, 1971, p. 110), and this study likewise finds that the spoken word is more than a repeated routine. Furthermore, in foreign languages the spoken form is acquired earlier than the written form (Canale & Swain, 1980). Analogy, or similitude, offers a base for learning a foreign language as learners associate words across different contexts (Konishi, Kanero, Freeman, Golinkoff & Pasek, 2014; Burstall, 1965).

With motivation as the basic incentive to begin second, third, or foreign language learning and to sustain it through a lifelong learning process, many frameworks for classroom talk have been offered, with increased stress on class participation (Holderness & Lalljee, 1998). Learners are empowered to talk in a variety of contexts in the classroom environment. All the processes of speaking, from announcing to responding, must be made explicit so that they can be worked on intentionally (Lalljee, 1998). Teachers praise the language learners who use new words, which encourages the other learners as well (Holderness, 1998). Students should be encouraged to speak while their errors are overlooked (Nawab, 2012); their own deviations and those of their classmates help them learn during practice. Teachers must remain fully cognizant of the progress of learners’ competence in speech (Laar, 1998).

Language is spoken first and written later; to begin language learning, “speaking must have a priority in language teaching” (Demirezen, 1988, p. 137). In language, primacy goes to speech (Cook, 2016). Speaking is an activity in itself, and learning to speak means undertaking a procedure for actively constructing meaning. PhD scholars’ oral skills in a research viva defense are paramount to their success: they are expected to defend their research study and elaborate on the aspects relevant to it (Tinkler & Jackson, 2002). Language learners should be capable of performing functions linguistically (Lalljee, 1998). Speakers build on their attained language reservoir by speaking in scenarios and tasks.


Their enthusiasm then stimulates them to continue lifelong learning of the second language (Masgoret & Gardner, 2003; Wette & Barkhuizen, 2009). A scoring rubric, moreover, sustains learners’ motivation.

According to Vygotsky (1978), learning is a collective as well as an individual experience. Social interaction drives individual verbal development (Donato, 1994). In groups, learners mutually influence each other’s learning processes (Parker & Hess, 2001). Since learners have different capacities, they become challenges as well as resources for each other (Anderson, 2016; Wilson & Peterson, 2006). To learn ESS, the UF must have confidence, space, time, and interaction with peers (Lambert, Genesee, Holobow & Chartrand, 1993) and with the teacher. A Pakistani researcher observes that ‘English as a medium of communication nearly seldom takes place in the classrooms, a fact that runs counter to the perceptions and expectations of the stakeholders’ (Manan, 2015, p. 175). However, large classes, time constraints, and the workload of teachers as well as of speakers restrain the UF from the required practice (Anderson, 2016).

Non-native language learning is an increasingly complex phenomenon, and most people around the world come in contact with two or more languages (Jafri, Zai, Arain & Soomro, 2013; Kumaravadivelu, 2003). The first language is acquired naturally: in informal situations children acquire it from their parents, family, and friends within their comfort zone. If the language of education is not the first language, students must learn the target language either informally or, more usually, formally: informally they acquire linguistic competence from their surroundings, and formally they learn speaking competence academically (Krashen, 1976). According to Spolsky’s model of second language learning, the social context accommodates both formal and informal learning opportunities, but learners avail themselves of those opportunities and produce speaking performances in line with their attitude, motivation, capabilities, age, and experience (as cited in Mitchell, Myles & Marsden, 2013). Learners whose parents used English were more motivated to learn to speak it than learners whose parents did not use the target language (Krashen, 1976).


2.3 Language Acquisition

Language acquisition and language learning are different routes that intersect time and again. Acquisition is the unconscious process (Krashen, 2003, 1982, 1976) by which children pick up their first language from their surroundings. Researchers in second language acquisition (SLA) and teaching recognize that the responsibility for acquiring a second language in a classroom lies ‘primarily with the learners rather than the teacher’ (Buckwalter, 2001, p. 380). Some researchers identify English language teaching with encouraging language acquisition through a ‘conducive and motivating environment’ that offers opportunities for meaningful language learning at the ‘subconscious level’ without sidelining ‘conscious learning’ (Zulfiqar, 2011, p. 38). However, merely hearing a language is insufficient for learning it; opportunities to talk to others in the target language strengthen speakers’ oracy. Language acquisition is a meaningful exchange of ideas in a routine environment, and it supports conscious language learning. The classroom can provide an environment in which language is acquired and learned concurrently (Krashen, 1976).

2.4 Language Learning

Children determine the rules of language in the conscious process of language learning. Using the learned rules, they focus on language form in the classroom experience; learners learn the rules of language but do not acquire them (Mitchell, Myles, & Marsden, 2013). Human beings are designed for learning (Senge, 1990, p. 7). A ‘good’ second language teacher creates opportunities to promote language (Buckwalter, 2001, p. 380). Without sidelining language acquisition, opportunities to talk cooperatively with each other in a learning environment improve speaking skills, which may be one of the most helpful means of improving English (Greenfield, 2003). Researchers such as Rabab’ah (2003) hold that all university courses should be taught in English because doing so could improve university students’ linguistic ability and add to their communicative competence. However, English merely as the medium of instruction cannot suffice to promote the language as an instrument of ‘economic exchange’ (Bourdieu, 1991, p. 502). English language learners develop second language competencies at two levels: basic interpersonal communicative skills (BICS) and cognitive academic language proficiency (CALP). BICS, the language of conversational interaction, develops within one to three years; CALP, the language of more advanced vocabulary skills, takes five to seven years to fully flourish (Cummins, 2000).

Cummins (2000) discusses that in the academic context, as tasks become abstract and the degree of contextualization decreases, there is a case in some circumstances for discrete point testing of certain core skills. English is the main source of cognitive social capital (Ashraf, 2006, p. 211). It is important to observe and rectify that university students make ‘some basic errors in pronunciation’ and cannot express themselves ‘comfortably and efficiently’ while handling ‘academic topics’ or ‘common everyday topics’ (Mukattash, 1983, as cited in Rabab’ah, 2003). The present research explores BICS in the first semester by teaching and testing through RSPs, graded in Microsoft Excel. ‘It is rare in everyday life for language to function as a pure instrument of communication’ (Bourdieu, 1991, p. 502); language is more than an instrument of communication. Thus, the current research also probes the optimal development of CALP through the same process of teaching. Besides being a mode of communication, English takes on a social value and expands into symbolic competence. This exceptional competence helps its speakers function effectively in social groups, vertically and horizontally, through interpersonal relationships, and these relationships have an impact on mutual resources.

Teaching and learning move together. Teaching relies on learning theories to inform the role of coaching, and teachers need to integrate learning theories productively to promote ESS (Carroll, 1971). Some researchers held that reward and punishment (Skinner, 1948), or stimulus and response, form language learning habits (McLeod, 2007). The behaviorist theory applies to the initial stage, establishing the basic background of language learning. If learning is response dependent, then each learner can learn under the same learning conditions (Demirezen, 1988); in a large class, however, it is not possible to provide the same learning conditions to the UF, since only three to four UF might interact and respond in a fifty-minute language class. Language rehearsals with relevant language models help avoid linguistic errors. Other researchers proposed that acquiring a set of appropriate speech habits leads to learning ESS (Howatt & Widdowson, 2004). Yet others contended that classroom learning should take its own pace (Marsden, Mitchell, & Myles, 2013; Newmark, 1966; Thornbury, 2012). The newer theories of learning conceptualize the language learning process: functional and communicative language is more viable than the analysis of language structure. Premature language production leads learners to seek help from their native language, and some researchers are not in favor of combating the native language, because it is what the second language learners already know (Newmark, 1966). Non-native language learners infer some second language speaking rules; they use an interlanguage, uniquely supporting their communication in the second or foreign language with the vocabulary, structure, and dialect of their native language (Selinker, 1972). Psycholinguists have observed that, being different, individuals learn a second language at variance (Carroll, 1971; Dornyei, 1990). Learning a language is using it to perform what one understands; it is a transformation of perceptions into utterances. Communicative and functional approaches and language aptitude are the major cognitive and non-cognitive factors that bear on second language learning. Second language learners have to form new habits and learn new structures at unfamiliar points in the process of language learning. Children have an internal agenda that helps them learn language from their environment (Chomsky, 1959), but mastering a language is a long-term process (Demirezen, 1988). The mental grammar, universal grammar (UG), language acquisition device (LAD), and innate faculty ‘wired in’ to learners (Carroll, 1971, p. 109) also influence second language development: a biologically triggered, inherent language capacity that explains children’s spontaneous growth of language in their native environment without teaching intervention. Without denying children’s inner faculty for acquiring language, I believe in consciously teaching and testing the target language to boost this phenomenon of developmental biology. Inbuilt biological schedules capacitate language acquisition and speech production (Lenneberg, 1967). However, language learning is, above all, practice through producing language in context (Swain & Lapkin, 1998).

Teachers should teach and evaluate all four skills to build up learners’ linguistic competence thoroughly. The outer faculty of teaching through a variety of tools, including the testing of speech performances, enhances learners’ vocal communication (Cheng, 2008). Second language learners are usually cognitively mature enough to deal with philosophical and abstract concepts linguistically.

Beyond their previous knowledge of language learning, the UF’s parameter for learning the second language is the UG of their first language, which the input (instruction, environment) modifies toward the desired results. Second language learners need to reconfigure language from their first language; basic formal features construct the different lexical items of every language (Lardiere, 2009). Children develop a second language provided they hear it most of the time. Listening to a diversity of words and linguistic structures promotes their ESS; they learn new words related to their areas of interest. Interactivity, not passivity, builds up their second language, and they grasp vocabulary within a purposeful frame of reference. Semantic and syntactic development processes correspond with each other (Konishi et al., 2014). Language development is a two-way traffic comprising teaching and learning, and teaching and learning interwoven with motivation enhance language development (Riaz, 2012). Born with intrinsic motivation, humankind has the curiosity to learn, and learning boosts self-esteem (Deci & Ryan, 2010; Senge, 1990). However, people can become passive and alienated depending on social conditions (Ryan & Deci, 2000). In the present research study, the social environment comprised the English classroom, the language lab, the UF, and the UELT.

One of the objectives of this research is to teach oral skills to the UF, the second language learners. Likewise, one of the research questions this study addressed was how to teach oral skills to the UF. Learning and teaching of ESS are interconnected.

2.5 Learning and Teaching of English Speaking Skills

Teaching of ESS relates to UF learning and shows how teaching and learning transactions take place. Educating non-native learners of English is a mutual responsibility of classroom teachers and the English language learners (ELLs), who negotiate their roles and responsibilities in group conversations. Conversations improve teachers’ instructional practices through checks and balances (English, 2009; Goldenberg, 1991). The motivation of language teachers and the motivation of language learners are interdependent in securing linguistic achievement (Guilloteaux & Dornyei, 2008; Dornyei, 2005). A teacher’s motivation positively influences language acquisition (Karaoglu, 2008) and learning, and the first and foremost job of a teacher is to develop learning among learners (Volante, 2004). However, ‘when English is not comprehended by all, communicability over large areas is achieved at the expense of serious gaps in internal communication’ (Ashraf, 2006, p. 2).

Without underestimating the challenges that English language teachers face, this study draws inspiration from earlier initiatives taken in the UK. If it was vital there to enhance oracy (Thompson, 2007), its relevance cannot be denied in Pakistan, where English is one of the official languages as well as an important international communicative tool. The absence of sponsored practitioner research in the country has left language teachers to manage the teaching of ESS individually, provided the higher authorities support and approve. Official interest in promoting spoken English, to the extent of sponsoring practitioner research projects, is quite different from individual teachers’ efforts to promote the ESS of the UF instead of waiting for the Curriculum Authority to act at the national level.

2.5.1 Tasks as Means to English Language Learning and Teaching

Performing linguistic/communicative functions (see section 1.6.1) establishes language learners’ ability to speak. Tasks and scenarios such as role play related to sociolinguistics (Canale & Swain, 1980; Laar, 1998) help the UF expand their language learning capacities. Aspiration for accomplishment motivates foreign language learning (Dornyei, 1990), and the motivation of the language teachers and their need to achieve their target ignite the learners’ motivation to excel in the target language. Students learn more effectively when their minds are focused on the task rather than on the language they are using (Prabhu, 1987). Non-native-speaking teachers must be invested in for effective language teaching, as they will mostly be the ones teaching the language (Brumfit, 1986).


In second language learning, learners at first absorb passively, gradually moving to active engagement in linguistic activity as they reconstruct the absorbed structures of the second language. Learning practices differ within a learning community, and language teachers likewise evolve from information deliverers into architects of educative experiences. Some language teachers, as founts of knowledge, instruct grammar rules and correct the learners’ language errors; others give the language learners space and set them tasks to perform linguistically. Tasks provide a range of learning activities, from simple, brief exercises to more complex and lengthy activities such as problem solving and decision making (Breen, 1987). Given tasks (Canale & Swain, 1980) enable language learners to actively construct meaning in the field of education (Wilson & Peterson, 2006). ‘Students need opportunities to learn in multiple ways, and teachers need to have a pedagogical repertoire that draws from myriad learning theorists’ (Wilson & Peterson, 2006, p. 4). Tasks give learners opportunities to learn language meaningfully and to function in the language. Activities that involve individuals in using language to achieve particular goals or objectives in particular situations are tasks (Bachman, 2002, p. 458); scenarios, discussions, and points of view are among the tasks that contribute to students’ speaking performance. The research questions of the present study are explored around such diverse tasks. In the teaching and learning of oral skills, tasks are ‘regarded as a vehicle for assessment’; the factorial structure of the speaking tests includes task completion as one of the five factors, an approach classified as a ‘strong sense of performance assessment’ (McNamara, 1996; Savignon, 1972, cited in Kim, 2010, p. 1). Considering the nature of the tasks, the raters, the teacher, as well as the peers could all contribute to the UF’s English speaking performances (ESPs).

Learning to speak is an activity managed jointly by teachers and learners. It constructs meaning through performing mock functions, which are acquired from the surroundings and learned collectively and individually, informally and formally, via family and parents, teachers and co-learners, with variation due to differences in learners’ abilities. ‘Code complexity, the language required to accomplish the task, cognitive complexity, the thinking required to accomplish the task, and communicative stress are the performance conditions for accomplishing a task’ (Skehan, 1998, p. 88, cited in Bachman, 2002, p. 465) that need to be considered. Then, by using questioning techniques and designed activities, proposing scenarios, creating tasks, situations, and skits, employing role play and drama, holding discussions (Greenfield, 2003), and conducting cartoon descriptions and narratives, the UF could be taught to speak English. In large classes, however, this is demanding (Nunan, 2003, p. 596).

Performance of meaningful tasks is central to the language learning process: students perform tasks or solve problems instead of learning the structure of the language (Harmer, 2007). Teaching and learning are interlinked; thus, alongside teaching motivation, strategies, approaches, and methods, the learners’ own motivation, strategies, approaches, and methods complement the process of teaching and learning. Tasks have an essential similarity to real-life language use (Rabab’ah, 2003), and success in examination can be expected to correlate with success in real-life language use (Prabhu, 1987). However, the extent of meaning construction, gauged by a criterion, optimizes the cycle of teaching, learning, testing, and achieving ESS. Hence, this study explores and observes a testing criterion for teaching ESS that motivates the whole process. A further objective of this research is to examine the viability of the hypothesis that university freshmen improve their speaking skills if taught and assessed purposefully in English courses.

2.6 Testing of English Speaking Skills

English Speaking Skills are coordinating skills for all types of learning, which makes them a major concern for research. Speaking is central to the development of learning (Laar, 1998), yet as a researcher-teacher I found oracy a ubiquitous weakness among the university freshmen. Despite its contribution to learning, ESS is usually given insufficient attention in ELT classes (Alam & Basiruddin, 2013; Bygate, 2011; Wilkinson, 1970). I therefore reviewed the possibilities of enhancing the learners’ speaking ability. A number of written quizzes and assignments, a written midterm (including a written response on the oral communication portion), a project presentation, and a written final examination (Jabeen, 2013; Kanwal, 2016; Zulfiqar, 2011) at the end of every semester was the usual practice for testing the UF. Oral communication was tested as a question to be answered (in dialogue form) in the written paper, an ‘abstract demonstration of knowledge’ (McNamara, 1996; Puppin, 2007). The UF took this practice seriously because it was graded (Ur, 2008), and the UELTs gave them some general feedback for improvement. Speaking tests have always been credited with significance for placement purposes; however, they are usually not regarded as prestigiously as written tests (Fulcher, 2014). Testing the UF’s speaking performances is essential to teaching and learning, since the performances become considerable in language teaching (Shahzad, 2018).

Testing of ESS strengthens the teaching of ESS: it is good for teachers and learners to know where they stand (Laar, 1998, p. 27), and testing motivates learning (Pedulla, Abrams, Madaus, Russell, Ramos & Miao, 2003). However, the ESS of the UF are neither tested systematically nor graded in Pakistan. Being a problem solver, the UELT was ready to research (Hubbard & Power, 1993). Testing is interlinked with the improvement (Kanwal, 2016, p. 310) of the UF’s ESS. Language learners’ achievement could be maximized if greater attention were paid to ‘the improvement of classroom assessment’ (Stiggins, 2002). It is possible to assess speaking skills as ‘part of classroom-based assessment, giving them a fair share of marks’ (Mathews, 2018, p. 21).

Keeping in view the UF’s genuine efforts in preparing their project presentations, I thought of evaluating their ESS. The usefulness of tests is cyclical (Bachman & Palmer, 1996, p. 35). Testing of interactive teaching gauged the English language learners’ interest in probing and challenging, rather than unquestioningly accepting, the perspectives taught (Alexander, 2015). I required a testing criterion to assess the UF’s ESS, but there is a lack of consensus on criteria, and speaking skills are difficult to fit into a framework of quantitative assessment (Fulcher, 2014). It is significant to realize that speaking competence cannot be measured directly; ‘only speaking performance is observable’ (Canale & Swain, 1980, p. 6). Thus, systematic testing of ESS helps the learning community as well as the teaching community to observe and plan the constructs of speaking performance and to focus their attention on improvement.


Language tests are mandatory because people generally infer learners’ language ability from their grades on language tests (Ur, 2008). Language tests identify non-natives, foreign language learners, and second language learners at different levels of education (Cheng, 2008). Instead of dividing the UF into categories of non-native language learners, this research focuses on enhancing ESS by testing the English speaking performances (ESPs) of the UF. Testing the UF’s speaking performances is fundamental to teaching and learning, as the performances become meaningful and substantial in language teaching (Shahzad, 2018). Thus, ‘actual performances of relevant tasks’, known as performance tests (Puppin, 2007), are practiced. The researcher/UELT made no specific test preparation (Pedulla, Abrams, Madaus, Russell, Ramos & Miao, 2003, p. 72); the speaking performances of the UF were carried out as routine academic semester practices. However, the UF were adequately instructed before performing (see Chapter 3 on Methodology).

In large classes, teachers cannot collect reliable information about learners’ routine achievements, and they lack resources for testing. Owing to this drain on resources, no teacher training is appropriated for classroom assessment; neither teachers nor administrators are trained to build systems for classroom assessment. These chronic problems affect classrooms, universities, communities, cities, and the country itself, and in addition they impinge on a nation’s image around the globe. Ultimately, the students face the aftermath (Stiggins, 2002).

Students are selected for admission to universities, placed in different language programs, screened as potential immigrants, and selected as potential employees on the basis of the scores they obtain on language tests (Bachman, 2004). My point, however, is that if such is their significance, tests of spoken English must be conducted in the regular semesters of different programs of study. This study incorporated tests of the UF’s speaking performances into the running program of their undergraduate studies, stressing the testing of ESS so that language teachers and language learners take the enhancement of ESS seriously.


The elevated demands for ESS entail the integration of a performance component in L2 testing, with emphasis on speaking performance assessments. Performance-based assessments are defined as contextualized, authentic, task-based, and learner-centered (Sweet, Reed, Lentz & Alcaya, 2000). They focus on eliciting learners’ underlying language ability through authentic oral performance on a given task. This research explores how tasks contribute to students’ speaking performance. Tasks are the apparatus that enables raters to assess learners’ oral proficiency according to a given criterion, and language teachers keep revising tasks both before and after execution, once they have received the students’ perspectives on the assessment.

This study uses a task-centered approach, which focuses on what examinees can do rather than on grammar and vocabulary alone (Mislevy, Steinberg & Almond, 2002) in the target language. Through tasks, testers can examine learners’ knowledge and their ability to use language in exactly the way the testers wish to examine (Bachman, 2002). The tasks in this research are the activities that the UF performed and that were recorded for assessment in the first and then the second semester; they are target (English) language use tasks (Bachman & Palmer, 1996). This approach provides a systematic way to evaluate examinees’ task fulfillment, and the activities of the UF can be seen as a series of tasks central to the methodology (Swales, 1990). According to the task-centered approach, test contexts or tasks play a crucial role in measuring L2 ability because examinees’ performance is evaluated against real-world conditions. Tasks prompt learners’ linguistic and perceivable resources, and they are outcome-oriented in that the language learners work through a lifelike situation (Ahmadian, 2016).

Understanding the interwoven nature of teaching, learning, and testing, it is vital to test these skills, because evaluation gives testees knowledge of where they stand and an aspiration to do better. But the Pakistani exam system does not include a test of students’ speaking ability, resulting in neglect of the area (Nawab, 2012); in examinations, grammar is more important than speaking performance (Greenfield, 2003). Thus, exploring the possibilities of testing the ‘response-ability’ of the learners was vital for this study. Testees perform tasks that the tester measures against a criterion, and the inferred speaking ability informs both stakeholders so they can take the relevant directions. Tasks, interlocutors, raters, and examinees’ speaking abilities all affect speaking performance (Kim, 2010). Equipping language teachers with an accessible criterion, and testing that criterion’s suitability for assessing learners’ oral skills, is imperative for testing ESS. The factorial structure of the speaking test needs to meet the linguistic requirements of the UF.

This study uses Kim’s (2010) analytic scoring rubric containing five rating scales (1)

Meaningfulness, (2) Grammatical competence, (3) Discourse competence, (4) Task

completion, (5) Intelligibility to measure the students’ recorded responses. These five

rating scales are the testing constructs for the UF speaking performances (Riaz, Haidar,

Hassan, 2019). Teaching and testing are in constant partnership, placement tests are taken

to determine the level of proficiency, and achievement tests are taken to observe the

progress of the learners. Internship and job interviews require the interviewees to perform

well. This demand for speaking performance requires the UF to qualify speaking

performance. The factorial structure of the speaking test regulates valid constructs, the

domains of knowledge (Stevens, et al., 2008), and the factors through which consistent

ratings prevail (Jones, 1979; Liao, 2004). Further reinforcement comes, ‘a test is reliable if

it measures consistently’ (Hughes, 2001). Validity and reliability in second language

performance assessment challenges extended attention for the use and development of

performance tests (Liao, 2004). If oral ability is to be encouraged, then it needs to be tested.

Oral ability cannot be endorsed by written tests (Bygate, 2011, p. 412). As a matter of content validity, testing oral ability supports advancement in developing oral ability, which is not the current practice. It is vital to test and sufficiently weigh certain abilities in relation to other abilities (Hughes, 2001). I have observed that learners do not prepare components of language that carry little or no weight in assessment: if class participation is not graded, for example, they rarely try to participate; their project presentation, however, is graded, so they prepare it to obtain maximum grades. Grades keep the language learners moving on (Chamberlin, Yasué & Chiang, 2018; Ur, 2008).

ESS is too sophisticated to be learned, in a limited semester, through a chain of stimulus and response (Demirezen, 1988). However, testing ESS is one condition that brings the language learners into the learning space. Next to testing and grading is the weightage of the whole process. 'On the ELT front, the learning and teaching of the English language in Nepal suffered a serious quality setback when the weightage of English in relation to other subjects was reduced by half in the 1970s' (Giri, 2005, p. 23). Granting balanced weightage to various language skills during assessment, and removing the 'heavy imbalance between the assessments of different skills', could motivate the Pakistani UF to learn ESS to 'acceptable levels'. Without an equitable ratio in weightage, the students 'might fail to develop listening and speaking skills' (Mathews, 2018, p. 21). Teachers can contribute to improving testing by designing more useful tests themselves and by pressing other teachers, professional testers, and examining boards to maintain the practice (Hughes, 2001).
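The effect of weightage described above can be sketched numerically. The skills, scores, and weights below are hypothetical illustrations, not drawn from any actual Pakistani assessment scheme:

```python
# Illustrative sketch (hypothetical weights): how the weightage given to each
# skill shapes what a course grade rewards. Under the imbalanced scheme, a
# student can neglect speaking almost entirely with little cost to the grade.

def course_grade(scores: dict, weights: dict) -> float:
    """Weighted course grade (0-100) from per-skill scores (0-100)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[skill] * w for skill, w in weights.items())

scores = {"reading": 80, "writing": 75, "listening": 40, "speaking": 30}

imbalanced = {"reading": 0.45, "writing": 0.45, "listening": 0.05, "speaking": 0.05}
balanced   = {"reading": 0.25, "writing": 0.25, "listening": 0.25, "speaking": 0.25}

print(round(course_grade(scores, imbalanced), 2))  # 73.25: weak speaking barely matters
print(round(course_grade(scores, balanced), 2))    # 56.25: weak speaking now costs marks
```

The gap between the two grades is the kind of signal learners respond to: under the balanced weights, ignoring speaking visibly lowers the course grade.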

The test makers need to consider the purpose of the test, the skills and abilities to be tested, the significance of the test, and the limitations of constructing, administering, and scoring with scarce facilitation. The reason is that a clear purpose, a clear idea of the required skills and abilities, a clear sense of the significance of the practice, and a realization of the hurdles in construction, administration, and scoring enable the test makers to keep a practical approach to testing (Bachman & Palmer, 1996; Hughes, 2001).

Meeting the challenges of large classes, limited time, and administering speaking performance assessments is managed through audio-recorded assignments. Direct audio recording of students' speaking performance gives firsthand knowledge of their English speaking ability. Some researchers find recording speaking performance to be a feasible alternative for evaluating ESS (Cheng, 2008). It is neither observation of the UF's ESS nor insight gained through detailed discussion with parents (Laar, 1998); it is a direct test of their speaking performance. Evidence considers direct tests more valid than semi-direct tests (Fulcher, 2003; Cheng, 2008).

2.7 Rationale for Using Kim’s Scoring Rubrics

Written work is usually measured on the testing constructs of comprehensibility, discourse competence, and fluency or naturalness of expression. The testing constructs particular to oral presentations, e.g., impact on the audience, eye contact, intelligibility, effective use of visuals, and appropriate body language, can be incorporated in the scoring rubric for ESS. English academic projects rounded off in a written product and an oral presentation lend credibility to this kind of assessment. However, oral presentations can also be gauged on their own (Wrigley, 1994, p. 25).

I chose Kim’s analytic rubrics as the importance and the purpose of tests, the ESS of the

UF; the limitations of constructing, administrating and scoring with scarce facilitation were

catered to. The tests were relevant to the UF’s curriculum for the courses. They were

learner-centered, and course- centered. The scoring rubric was analytical. Language

problems could be diagnosed. It used a systematic approach as compared to the running

practices of the UELTs. Moreover, a scoring rubric performs as feedback based on trial

and error stance in non-pejorative (Carroll, 1971, p. 111) and non-derisive manner. Using

a testing criterion is agreeing with the constructs of the test and the procedures (Bachman,

2002) that a criterion offers. Thus, the analytic rubric assists to benchmark the UF’s oral

recordings. The testing constructs of the rubric perform like ‘learning goals’, and

‘achievement targets’. Moreover, companionating the scoring rubric with the UF is like

informing them about the goals, from the beginning of teaching and learning process

(Stiggins, 2002). For the purpose of this research I have done the same in order to alert the

UF about their learning goals that might lead them to the achievement targets.
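As an illustration of how an analytic rubric of this kind operates, the following sketch scores a single recorded response on the five constructs, assuming six levels (1-6) per scale. The validation logic, total, and percentage are my own illustrative assumptions, not part of Kim's (2010) actual descriptors:

```python
# Illustrative sketch (not Kim's actual descriptors): scoring one recorded
# response on the five analytic constructs, each rated on six levels (1-6).

CONSTRUCTS = (
    "meaningfulness",
    "grammatical_competence",
    "discourse_competence",
    "task_completion",
    "intelligibility",
)

def score_response(ratings: dict) -> dict:
    """Validate a rater's analytic ratings and return a score profile."""
    for construct in CONSTRUCTS:
        level = ratings.get(construct)
        if level is None:
            raise ValueError(f"missing rating for {construct!r}")
        if not 1 <= level <= 6:
            raise ValueError(f"{construct} must be rated 1-6, got {level}")
    total = sum(ratings[c] for c in CONSTRUCTS)
    return {
        "profile": {c: ratings[c] for c in CONSTRUCTS},   # diagnostic detail
        "total": total,                                   # out of 30
        "percentage": round(100 * total / 30, 1),
    }

# Example: a mid-level speaker, strongest on task completion.
result = score_response({
    "meaningfulness": 4,
    "grammatical_competence": 3,
    "discourse_competence": 3,
    "task_completion": 5,
    "intelligibility": 4,
})
print(result["total"], result["percentage"])  # 19 63.3
```

The per-construct profile, rather than the single total, is what makes the rubric diagnostic: weaknesses in, say, grammatical competence remain visible even when the overall score is adequate.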

2.7.1 Analytic Scoring Rubrics and Interaction Specifications of RSA

The running practices of the UELTs matched the basic-level oral interaction specifications of the Royal Society of Arts' (RSA) criterial levels of performance, consisting of accuracy, appropriacy, range, flexibility, and size (Hughes, 2001). On the other hand, the analytic scoring rubric that the present study uses consists of five rating scales (meaningfulness, grammatical competence, discourse competence, task completion, and intelligibility) to score the learners' recorded oral responses (Kim, 2010). I have found RSA's description of accuracy ('pronunciation may be heavily influenced by L1 and accented though generally intelligible. Any confusion caused by grammatical/lexical errors can be clarified by the candidate' (Hughes, 2001, p. 50)) vague: the degree of influence of L1 is not specified, intelligibility is generalized, and the extent of confusion is not quantified. Likewise, I have found the description of appropriacy in RSA ('use of language broadly appropriate to function, though no subtlety should be expected. The intention of the speaker can be perceived without excessive effort.') under-specified. Specifications help testers and testees see what step to take next. RSA's third criterial level of performance, range ('severely limited range of expression is acceptable, may often have to search for a way to convey the desired meaning'), was not precisely stated; it did not cater to the different abilities of the language learners, and it led raters to subjective scoring. Flexibility ('Need not usually take the initiative in conversation, may take time to respond to a change of topic. Interlocutor may have to make considerable allowances and often adopt a supportive role') and size ('Contributions generally limited to one or two simple utterances are acceptable'), the fourth and fifth criterial levels of performance, are also not particularized. On the contrary, the analytic scoring rubric that this study uses accommodates language learners with variance in ability (see Appendix D, Kim's (2010) analytic scoring rubric).

The flexible span of the descriptions across the six levels of Kim's (2010) analytic scoring rubric for the assessment of second language learners has helped the researcher to assess and accommodate learners' ESS relevantly. Kim offers scales for measuring 'grammatical competence' and 'meaningfulness' at variance, to retain the spirit of communication. The 'excellent' level of the testing constructs is challenging but not demotivating for second language learners; it sets benchmarks. The rubric deals with degrees of meaningfulness from 'completely meaningful' to 'incomprehensible', interlinking 'generally meaningful', 'occasionally unclear', 'often unclear', and 'generally unclear'. More effective second language learning takes place if emphasis is placed immediately on getting one's meaning across (communication) rather than on grammaticalness or appropriacy. The 'dominant mechanism of the learner' (Canale & Swain, 1980, p. 11) is to try to convey meaning. It is the ability to function in a communicative setting with skills through which meaning can be disseminated (Savignon, 1972). There are no extremes in this criterion. It considers communication of meaning and offers a regular scheme of testing.

Under pressure of work and time, language teachers generally tested the speaking ability of the language learners on grammar, vocabulary, mechanics, fluency, and form, like John Anderson's range based on an oral ability scale found in Harris (1968, as cited in Hughes, 2001). This oral ability scale runs from 'like that of educated native writer' and 'Few (if any) noticeable errors' to 'virtually impossible' and 'seriously impaired' communication (Hughes, 2001, pp. 91-93), across five scales that each have six levels. My point is that a 'native writer' is too remote an example to be placed in a testing scale for second language learners. If compared, Kim's scales of discourse competence and intelligibility include Anderson's testing scales of vocabulary, mechanics, fluency, and form. One of Anderson's scales, grammar, imparts the idea of rules of language, grammatical knowledge, like 'declarative knowledge' (Du, 2013, p. 1), which is language learners' traditionally acquired knowledge. Contrarily, grammatical competence, as one of Kim's scales for measuring speaking ability, is the use of grammatical knowledge in language output. It is like 'procedural knowledge' (Du, 2013, p. 1) that enables the learners to use their declarative knowledge in a variety of contexts. Improving language learners' grammatical competence is the aim of English grammar teaching in non-native situations because grammar is an essential component of language (He, 2013). Developing grammatical competence is the practice of the theory. Thus, grammatical competence communicates an internalized ability to interact; that is why it has been included in the language-assessing scales. Other than these scales, 'meaningfulness' and 'task completion' encompass 'size'. Kim's rubric can equip the raters to use concrete scales to appreciate the speaking performance of second language learners.

2.7.2 Interagency Language Roundtable (ILR) proficiency ratings and ACTFL

The American Council for the Teaching of Foreign Languages (ACTFL) Guidelines, which use the Interagency Language Roundtable (ILR) proficiency ratings, are too detailed to be strictly observed by language teachers. ACTFL describes Intermediate-Mid ability as 'successfully'/effectively handling a variety of 'uncomplicated', 'basic', 'communicative' tasks and 'social situations': talking 'simply about self and family members', asking and answering questions, and participating in simple conversations involving 'personal history and leisure time activities'. ACTFL further describes that 'utterance length increases' but speech may involve 'frequent long pauses', 'the smooth incorporation of even basic conversational strategies is often hindered as the speaker struggles to create appropriate language forms', pronunciation may continue to be influenced by the first language, and 'the Intermediate-Mid speaker can generally be understood by sympathetic interlocutors' (Hughes, 2001, p. 103), though obstacles in communication might be encountered.

Time and effort must be invested to continuously improve the practice of testing speaking ability and to attain reliable results; otherwise, scientific scaling of oral ability is difficult (Cheng, 2008; Hughes, 2001). Language teachers must know the purpose of testing language ability in order to assess it systematically, so that the learners can also conscientiously try to improve in the specific areas of measurement.

In a number of criteria, native speaker competence is assumed to be the ideal for non-native English language learners and/or English as a foreign language learners (Zhang & Elder, 2010). Some raters, due to their 'nativized styles' (Kachru, 1991), evaluate second language learners on the criteria of 'Native English'; this practice can affect some evaluations negatively and others positively. Testing criteria need to be practically feasible, observing 'pragmatic components' (Klesmer, 1993, p. 19).

The dissemination of English has been seen in three concentric circles. These circles, the inner circle, the outer circle, and the expanding circle (Kachru, 1991), call for relevant criteria to assess language ability. Native speakers may not always serve as the benchmark for non-native or second language learners (Elder, McNamara, Kim, Pill & Sato, 2017). Taking an assessment criterion from the inner circle and applying it to assess learners from the outer circle or the expanding circle can be dismal: the criterion meets the ability of the inner circle, neither the ability of the outer circle nor the ability of the expanding circle.

As far as language testing is concerned, systematic varieties of English such as Hong Kong English, Singapore English, Japlish, Chinglish (Crystal, 2012), China English (Zhang & Elder, 2010), Pakistani English (PE) (Rahman, 2014), and Paklish (Hassan, 2004) can function as a standard in their own capacity, though native standards of speaking in the fields of teaching and testing need to be regarded. All these varieties are equivalent to 'English as a lingua franca' (Seidlhofer, 2001, p. 152). Since many rating scales directly or indirectly invoke native speaker norms, this may influence the test constructs and may force the raters to interpret the test taker's performance differently (Zhang & Elder, 2010). Language raters (teachers) can usefully contribute to filling learners' relevant spoken-proficiency gaps through reflective written comments on aggregate assessment (Zhang & Elder, 2010), whereas some researchers expect language teachers to check grammatical theories to observe the extent to which those theories enable speakers to use and understand a language (Carroll, 1971). This study offers a comprehensive practice in a large class of 45-50 language learners; thus, a scoring rubric relieves the language raters from writing time-consuming comments.

Non-native English-speaking raters focus on a limited range of abilities in judging candidates' oral test performance (Zhang & Elder, 2010). The scorer's cognitive processes (Bejar, 2012) need to be consistent with the constructs for measurement. According to Educational Testing Service (ETS) ratings, language learners showcase a higher level of proficiency in some aspects of performance than in others. The assessment tasks need to be long enough to measure the speaking ability of the assessees. Language teachers are advised to create speaking tasks and tests directly corresponding with the class activities. The language learners are supposed to be provided with contextualized tasks organized around a single theme. Sweet et al. (2000) found that these activities contribute to the language learners' training to accomplish a communicative purpose in real life.

The Minnesota Language Proficiency Assessments (MLPA) model for performance-based assessment recommends designing tasks that provide language learners/test takers with 'a series of interrelated tasks, contextualized in a way that they can build on information learned in previous tasks as they complete subsequent tasks'. CARLA's Mini-guide for Assessment Development encourages language teachers to keep revising tasks both before and after execution, after receiving the students' perspective on the assessment. But in the limited time of one semester to the next, a UELT could manage all these requisites, bringing out 'adequate' ESS, by observing a rubric.

2.8 Raters Contribution to Students’ Speaking Performance

This research enquires into the contribution that raters make to the students' speaking performances. Other than being language trainers, teachers as raters influence the performance of their students (Kim, 2010). Teaching speaking skills is one step, and learning them in variation is another; developing and sustaining this skill for application in life is the top required step, which can be aspired to through testing. Testing is graded by the teachers as raters. Assessment is like accountability for learners' learning and for rating teachers' teaching (Bachman & Palmer, 1996). However, the test scores must be probed to gauge the improvement of students' learning (Stiggins, 2002). Teaching ESS without testing, and without awarding grades to the students, was almost a denial of its academic standing. The stance of accountability in measuring oral skill progressively continues to exert an impact on program content, objectives, and goals (Savignon, 2018). The test constructs help the raters to contribute to the learning of the candidates' ESS, while teaching and assessing their learned speaking ability. Moreover, speaking tests need to prevent the interference of other, irrelevant factors in the score (Fulcher, 2014). The raters motivate the language learners to showcase better ESS than before, by developing a positive attitude to learning language and raising their expectancy of success (Chen, Warden & Chang, 2005; Guilloteaux & Dornyei, 2008).

Individual differences among raters might lead to inconsistent rating, because some scorers might be more severe or more generous than others. It is vital for the rater to comprehend and accept the measuring constructs of a rubric, due to the relevance of raters' cognitive considerations. If a rater ignores a particular construct, s/he will not assign it equal significance, and the assigned scores become invalid through this act of negation. Raters' neglect of a construct can impact test takers' spoken performances. Moreover, the background of the judges can influence the score representation of the language learners by constructing trivial factors that might not have been included in the rubrics (Bejar, 2012), rendering the rater's judgment inconsistent and affecting the students' performances. Sometimes the background knowledge of the scorer enables her/him to abridge the scoring criterion to a convenient graph that accelerates or holds the momentum of scoring. Besides these factors, raters can generously score an average response that follows weak responses, and vice versa. Just as a scorer can overestimate a lengthy written response, a rater can overrate a fluent spoken response that contains 'fluff' and underrate a precise response. These variables influence the gradation of performance responses. 'Physical environment' and 'physiological state', like 'fatigue and hunger', are other factors that might prevail during scoring: the mental state of the scorer is affected, and the ratings of students' performances are constrained. But, being productive, teachers as raters can be more motivated to develop strategies to conduct 'efficient scoring' in 'the most economical and prompt fashion' (Bejar, 2012). There might have to be a resolution between what a school can afford and the level of validity that is obtained through testing; these are economic issues (Fulcher, 2014).
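The severity differences described above can be checked with a simple comparison of rater means. The scores below are hypothetical, and the gap statistic is an illustrative check, not a method used in this study:

```python
# Illustrative sketch (hypothetical data): a simple check for rater severity.
# Two raters score the same set of recorded responses on a 1-6 scale; a large
# gap in their mean scores suggests one rater is systematically more severe.

from statistics import mean

def severity_gap(rater_a: list, rater_b: list) -> float:
    """Mean score of rater A minus mean score of rater B on shared responses."""
    if len(rater_a) != len(rater_b):
        raise ValueError("raters must score the same set of responses")
    return mean(rater_a) - mean(rater_b)

# Hypothetical scores for six candidates on the 'intelligibility' scale.
scores_a = [4, 5, 3, 4, 5, 4]   # rater A
scores_b = [3, 4, 3, 3, 4, 3]   # rater B, consistently about one level lower

gap = severity_gap(scores_a, scores_b)
print(round(gap, 2))  # 0.83 -> rater B is the more severe of the two
```

A gap near zero does not prove agreement on individual candidates, but a persistently large gap is a quick signal that rater training or moderation is needed.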

Raters' backgrounds or other factors could lead to rater effects as they rate. A rater 'forms a mental response representation': s/he compares a 'work product', a 'response', a speaking performance with a 'mental scoring rubric', and the work product is tentatively assigned a score category. At times, assessors assess the testees' performances according to a checklist of classification and observation. This behavior of the scorers reveals that their allocation to categories and test constructs is 'probabilistic', not 'deterministic'. McNamara further identifies the impact of the test method as used by a skillful and an unskillful interlocutor: a single candidate interacting with two different interlocutors can obtain passing grades with one interlocutor and failing grades with the other. The analyzed difference lies in the skillful interlocutor's sympathetic handling of a variety of topics, probing of the candidates' viewpoints, and facilitation of their interaction, and in the unskillful interlocutor's contrary ways (McNamara, 2006). Hence, the skill of the interlocutor, and the candidate's familiarity with the conventions of a designed task (another way for language teachers to enhance learning outcomes), influence the outcomes, the performances of the candidates.

An examination of rater orientations and test-taker performance finds that raters, other than considering a range of 'performance features' within each 'conceptual category', conduct 'holistic ratings' informed by all 'the assessment categories', without being dominated by grammatical considerations (Brown, Iwashita & McNamara, 2005). Some scorers include features of speech not mentioned in the rubrics (e.g., pronunciation, fluency, and communicative skills). Raters' cognition can play a constructive or destructive role in the speaking performance and, likewise, in the rating of speaking performances.

The levels of proficiency at which raters assess the oral proficiency of the test-takers are the 'decision points' (Upshur & Turner, 1995) that the raters identify. While assessing performance without a specific set of criteria on the Cambridge Assessment of Spoken English, assessors regard grammatical competence at the lower levels and sociolinguistic and stylistic competence at the upper levels (Brown, Iwashita & McNamara, 2005; Pollitt & Murray, 1996). Raters have different perceptions: some evaluators of oral proficiency value the test-takers' reproduction of input vocabulary in responses, while others value the ability of the test-takers to rephrase. Some scorers take repair in speech negatively, associating it with hesitation and disfluency; others positively associate it with self-monitoring and self-correction. Therefore, raters should be trained for ESS testing (Shahzad, 2018).

Some teachers find it conducive to grade class discussions and participation to encourage the learners to engage in purposeful interaction (Wesley, 2013). Teaching English speaking skills and enhancing the English speaking ability of the UF require vigilantly combining diverse approaches to meaningfulness, communicative discourse competence, grammatical competence, and intelligibility for assessment purposes. As a rater and assessor, I have approached a scoring rubric with defined testing domains to cater to the diversity of the UF (see 1.2), and to the developments in the English language as world Englishes, global English, an international language (Holliday, 2005), Paklish (Hassan, 2004), Pakistani English (Hassan, 2004; Rahman, 1990), Hong Kong English (Joseph, 2004), and English as a lingua franca. Having said this, the most important feature of this dynamic language is its identity as 'English English'.

2.9 Impact of British rule

Pakistan was part of British India before partition in 1947. It is important to gauge the impact of British rule on the language policy of Pakistan. The British established the dominance of the English language with the introduction of English education (Evans, 2002). On the Indian subcontinent, in colonial times, the superiority of English was established with the introduction of a British-modeled school system along with making English a prerequisite for the Indian Civil Service. Due to the interests of the strong civil and military bureaucracy, the superiority of English continues to exist. English privileges the children of the civil and military bureaucracy, which thus resists replacing English with local languages or the national language, i.e., Urdu. In addition to this, the continuation of English is supported nationwide because of the internal linguistic diversity (Qadeer, 2006). Moreover, different linguistic groups consider English a neutral language among the languages spoken in Pakistan. In countries that were British colonies in the past, like Pakistan, English serves as a gateway to a position of prestige in society (Coleman, 2010; Rahman, 2005a). Due to its historic association with the elite since British colonial times, English as a gatekeeper of prestige is the language of the elite (Rahman, 1998). People in Pakistan generally consider it superior to other languages (Shamim, 2011). English became the dominant language, replacing Arabic (the Muslims' religious language) and Persian, the official language of the Moguls, the Muslim rulers of the subcontinent before the British (Evans, 2002). Most of the Muslims in the subcontinent were resistant to the imposition of the English language in colonial times, due to political defeat by the British and the fear that it would dilute religious fervor and blunt opposition to British dominance. This led to a division of the Muslims in India and Pakistan into groups: one group rejected English, another accepted it, and yet another believed in pragmatically utilizing it (Rahman, 2005b). Urdu used to be considered the language of the Muslims on the subcontinent. With the independence of Pakistan, Urdu became the national language and the language of education. The British colonial background in Pakistan made English the most important language for education and professional positions. On the other hand, the ordinary people had little exposure to English at home, so school was the only place to learn and use English.

2.10 Official language of Pakistan

Countries and organizations need particular language(s) to transact their official business, identified as official language(s). An official language is a state language that commands a legal status. However, a state language is not necessarily used routinely by the majority of the people in a country. Pakistan is home to many first languages at the provincial level, e.g., Urdu, Punjabi, Pashto, Sindhi, and Saraiki. Other than provincial languages, Pakistanis interact in diverse languages at the regional and local level, e.g., Gujari, Hindko, Balochi, Kashmiri, Shina, Brahui, Khowar, Balti, Burushaski, Dhatki, Haryanvi, Wakhi, and Marwari. Within the country, Urdu, the national language of Pakistan, is the lingua franca. 'For simplicity, I assume no one is natively plurilingual' (Pool, 1991, p. 497). In a world with thousands of languages, choosing official language(s) is an essential governmental issue. Keeping a broader vision of the interests (though conflicting in nature) of a country and its people, languages are officialized. The Constitution of Pakistan (1973), Article 251, permitted the use of the English language for official purposes 'until arrangements are made for its replacement by Urdu' (Shamim, 2008, p. 238). However, other than Urdu, English being an official language of Pakistan (Manan, Dumanig & David, 2017) empowers the country's people to conduct business internationally. The people of Pakistan, whose language is not English, understand and communicate in this language with struggle. 'English, in Pakistan, enjoys the status of a privileged official language' (Mahmood, 2009, p. vii). Pakistanis compete unfairly for professions and positions. The 'universities are designed to be the important producers ... of knowledge. It is their responsibility, by doing this, to educate the elites of the future' (Mahmood, 2016, p. 77). In Pakistani society, 'English and Urdu have gained great significance especially when it comes to affecting the social capital' (Ashraf, 2006, p. 2).

2.11 National language of Pakistan

According to the Constitution of Pakistan (1973, Article 251), Urdu is the national language of Pakistan (Manan, Dumanig & David, 2017). It is the 'state language of Pakistan', as the founder of Pakistan, Mohammad Ali Jinnah, called it (Khalique, 2007). It was the parent language of the Mohajirs, the immigrants who strove for independence and migrated to Pakistan. It is one of the official languages of the country; however, it is the mother tongue of less than 8% of the population (Shamim, 2008). It is also one of the 22 official languages of India (Jha, 2010). Urdu is the lingua franca within Pakistan, and the lower levels of government administration are also conducted in Urdu. From the independence of Pakistan till 1988 (before the introduction of the international language from Grade 1, instead of later grades, in public sector schools), the Muslim identity and the unification of the people of Pakistan were spurring forces for the advancement of the national language. The Constitution of Pakistan (1973) allowed the global lingua franca, English, to be used as an official language for 15 years, until Urdu, the national language of Pakistan, could be upgraded to manage affairs at the government level. Meager organized efforts were exerted on status and corpus planning for the promotion of the national language that was to replace English, the other official language of Pakistan (Shamim, 2008). A National Language Promotion Department (National Language Authority) was established to meet demands at the national level (Ilahi, 2013). However, the English language kept flourishing through the economic advancement of the elite in Pakistan, frequently via governmental and non-governmental organizations' support.

Within Pakistan, the national and the international language are surviving in competition. In the national context, Urdu is a linguistic magnet that unites people from all provinces, two self-governing territories, and a federal territory. On the other side, the English language connects Pakistan with the world at large. Both languages have a dominant role to play (Abbas, Pervaiz & Arshad, 2019).

2.12 Pakistani English

Language represents nations, countries, and people. Being a means of communication, it promotes businesses and industries. English, being the official language of Pakistan (see 2.10) and an international language, is widely used in Pakistan. Pakistan was placed in the outer circle of the three concentric circles of the sociolinguistic profile of the English language (Kachru, 1992, p. 356). In Pakistan, Urdu and English have been coexisting as official languages since the independence of the country. 'When the languages coexist, it is natural that they influence each other' (Abbas, Pervaiz & Arshad, 2019, p. 153). This coexistence of the two official languages, along with the other local languages of Pakistan, gave birth to another variant of the English language. The English language used in Pakistan is called Pakistani English (PE) (McArthur, 2005). PE is not 'a mass of ignorant errors which must not be encouraged' (Rahman, 1990, p. 2). Educators and linguists are apprehensive about cultivating nationally and internationally intelligible varieties of English (Kachru, 1992, p. 49).

English is used for the performance of internal and external functions in the country. Developing a flavor of its own, without impeding communication, PE is similar to British and American English (Mahmood, 2009). As one of the varieties of English, PE is a non-native English variety that linguistically tends to look inwards: Pakistani English relies on local forms and customs; it is endonormative (Rahman, 1990). PE is part of the South Asian Englishes (Kachru, 1975), 'the expansion of cultural identities' (Kachru, 1986a, p. 355). Due to limited educational background, it could be broken English, or the English users might be semi-fluent; this is similar to the developing pidgin Englishes across the world (Crystal, 2008, p. 4). In a language contact situation, an official language of a country becomes the second language that receives treatment from the country's linguistic environment (Kachru, 1992, p. 148). However, PE has been developing for functional purposes.

Generally, in the linguistic area of South Asia, Indian English had been described on the assumption that its description applied to Pakistani English as well. In research on Pakistani English, Indian and Pakistani speakers were likely to accommodate to Indian English instead of aiming at British or American models of English language learning (Rahman, 1990). Pakistan, along with India and several African countries, used a moderately stable variety of English (Rahman, 1990). As far as lexicon, word stock, semantics, and connotations are concerned, PE should be considered a non-native, nativized variety of English. Thus, the language teachers in the East and the West are to

show more linguistic tolerance for accepting the local varieties of English (Baumgardner,

1987). The Urduization of English (Baumgardner, Kennedy & Shamim, 1993) was endonormative; PE was Urduized English. However, the local linguistic repertoire of Pakistan is intrinsically plurilingual (Canagarajah & Ashraf, 2013), comprising Punjabi, Pashto, Sindhi, and Balochi in addition to Urduized English (Rahman, 1990).

2.13 Englishness of English

English, being a common mode of communication as well as the language of science and knowledge, has developed a number of variants in different countries all over the world. As an international language (Holliday, 2005) that connects people around the globe, it is natural for speakers to function in this language under the influence of their own accent and way of speaking. Urdu, the national language of Pakistan, is likewise spoken in divergent fashion all over

the country. Paklish (Hassan, 2004), Pakistani English (Baumgardner, 1987; Hassan, 2004;

Rahman, 1990), Indian English, Singaporean or Nigerian English (Rahman, 1990) make a

part of world Englishes that contribute to global English. Americans, the English, Scots, Australians, and other users of the language around the world have their own idiosyncratic ways of speaking it. As all the variants of English survive due to communicational needs, the effort to maintain a link with Basic English remains undeniable; unharnessed communication in English might lead to a chaos of meaninglessness. It is significant to keep the Englishness of English for understanding meaning, and conventions need to be kept for intelligibility (Hassan, 2004; Rahman, 1990).

World Englishes (WE) refers to the localized varieties of English spoken in the world. It is an umbrella term covering all the varieties of the language, including those under the influence of the United Kingdom and the United States (Jenkins, 2006). Second language acquisition research extended to WE (Kachru, 1992), which today encompasses English worldwide, including Asian Englishes (and PE among them). WE focuses on the functionality of language. Linguistics is not concerned with borders; it is a study of language (Hassan, 2004, p. 2). English has been diversified because it is spoken by the indigenous peoples of the world.

Studies into English as a Lingua Franca (ELF) have been carried out at different levels (Deterding & Kirkpatrick, 2005; House, 1999; Jenkins, 2000; James, 2000; Kirkpatrick, 2004; Mauranen, 2003, as cited in Jenkins, 2006, p. 169). Teachers, teacher trainers, and educators need to learn about the similarities, dissimilarities, and intelligibility issues between Englishes from Braj Bihari Kachru’s inner circle to the outer circle. They further need to find the overlaps within the expanding circle of non-native speakers in order to affirm those speakers’ linguistic rights (Ammon, 2000). The need of the present times is to raise and develop awareness of a pluricentric approach to the English language: one that accommodates a variety of interacting codified standard forms of various countries related to the different circles. This approach could accredit the speakers’ and language

learners’ English to mirror their own sociolinguistic reality (Jenkins, 2006, p.173).

Mirroring their own reality of linguistic command, learners could then engage with different countries’ English language teaching, learning, and testing, as compared with the distanced, monocentric approach of the native speakers. One of the most crucial moves in language

enhancement could be blending a ‘WES-ELF perspective into testing’ (Canagarajah, 2005a

cited in Jenkins, 2006, p.174). This blend could help speakers of other languages distinguish between a linguistic error and a local variety. Knowledge of this difference could add confidence, along with linguistic capital, to non-native language users.


The role of English as an international lingua franca (Pakir, 2009) is that of a link language bridging speakers whose native languages are other than English. On analysis, all types of English form a bridge of communication for people conveying their messages to speakers of other languages. Significant is the exchange and the

transmission, be it intellectual, pragmatic or scientific. ‘If the job of communication is

achieved, variations should not matter’ (Hassan, 2004, p.3). Asian language learners ‘do

not try to speak English’ in their ‘constant fear of instant teacher correction’ (Patil, 2008,

p. 231). To promote English speaking skills, language teachers need to modify their facilitation methodology and techniques.

2.14 Promoting ESS in Pakistan

English language is required for survival in Pakistan (Canagarajah, 2005; Lambert,

Genesee, Holobow & Chartrand, 1993). The HEC Curriculum (English) for the Bachelor of Engineering seeks to improve the students’ proficiency in English language skills (Curriculum Division, HEC, 2009). The ‘crisis of English teaching in Pakistan’ (Manan,

Dumanig & David, 2017, p.736) has been analyzed from diverse angles. Unsuccessful

language policies (Manan, 2015), unproductive curriculum, untrained teachers, traditional

teaching techniques, over-crowded classrooms, lack of motivation, and teacher-centered

activities (Kanwal, 2016; Jabeen, 2013; Zulfiqar, 2011) have been explored. English is taught to Pakistani students from an early stage of childhood. However, the majority of Pakistani students are unable to communicate in English fluently and confidently (Kamran,

2008).

The UF rarely get a chance to speak English in large classes (Shamim, Negash, Chuku &

Demewoz, 2007). The main impediment in the teaching of speaking skills is the difficulty

of testing them (Fulcher, 2014). There are tensions between policy and practice

(Canagarajah & Ashraf, 2013). “In practice, English is not used meaningfully and

substantively in classroom transactions, which can be helpful in learning the language as

purported in policy and presumed by supporters of the policy” (Manan, 2012, p. iv).

Developing the schools, colleges, and universities around the reduction of theory-practice


difference would help academia to facilitate the students in developing ESS. However,

straight-for-English policy suffers from mismatches in theory and practice (Manan, 2012).

There is a “disconnect between policy and implementation” (Mustafa, 2011, p. 120); reducing this theory-practice difference would help universities facilitate the UF in developing ESS.

In Pakistan, the formulation of language policy has been emphasized (Mustafa, 2011). English language teaching involves a number of challenges, including the relatively low proficiency of students and teachers in government and non-elite (low-cost, low-fee) private secondary schools (Shamim, 1993), teachers’ attitudes to the national education policy (2009), and the fluctuating transition towards an English-medium policy (Channa, 2014). The educational policy would not be rewarding, as the teachers did not regard it as practically desirable. The implementation of the policy led to the realization of ‘students’ lack

of sound skill base in English language, and teachers’ lack of satisfactory level

competencies in the English language…teachers believed that they needed training to be

able to teach in English’ (Manan, 2012, p. 71). A few of the teachers believed that teaching in English could help them improve their English language proficiency and teaching skills. Believing the English-medium policy logically beneficial, the teachers considered that, if effectively taught, the English-as-a-subject policy could be positively constructive. The

study of English as a compulsory subject from grade one was a considerable change in

most of the urban areas. However, many undergraduates pass out without attaining competency in the English language (Zulfiqar, 2011). This ‘policy suffers from chaos’ (Manan, 2012, p. 297), as English as a subject versus English as a second or foreign language is confused without clear guidelines.

Keeping in mind the need for communicative competence among the undergraduates (see section 1.6.1), promoting ESS at the UF level becomes crucial. Since teachers and learners need to invest conscious effort (Schmidt, 1995) to promote the process of learning and English language usage, learning ESS should be mandatory for the UF for their future competent bearings (Rahman, 2005). English language policy and planning (Canagarajah, 2005;

Canagarajah & Ashraf, 2013; Channa, 2014; Dixon & Peake, 2008; Durrani, 2012; Manan,


2012; Rassool, 2013; Shamim, 2006, 2008, 2011) plays a crucial role in the advancement

of the target language. Perceptions of different stakeholders about English-medium

education policy in the low-fee English-medium schools have been analyzed (Manan,

2015). Within Pakistan, access to the English language through different school systems has been explored (Haidar, 2016; Kanwal, 2016). The interdependent struggles for literacy, access to English, and technological progress reflect an inclination to engage with the international community from a position of strength rather than weakness (Norton & Kamal, 2003). Possible curriculum changes to improve the communicative competence of learners at the undergraduate level have been dealt with (Zulfiqar, 2011). English teaching and learning practices in classrooms, students’ exposure to the English language in their sociocultural ecology, and the attendant change in culture and society have been examined (Manan, 2015).

Classroom discourse, code switching and its effects on language learning in Pakistan, have

been researched (Gulzar, 2009). Code mixing of English with the national language, Urdu, is a common aspect of the present sociolinguistic situation; the everyday conversation of a layperson is fraught with English words (Rasul, 2006). The shaping of Pakistani English and its features has been discussed (Baumgardner, 1987; Mahmood, 2009; Rahman, 1990;

Riaz, 2004). Research has been done to develop communicative skills of the Pakistani

language learners (Alam & Bashir, 2013; Jabeen, 2013). However, the present research

focuses on the testing part of English speaking skills for linguistic promotion.

Testing, as a means to ascertain what the learners have learnt, is a meaningful component

in the teaching and learning process (Alexander, 2015; Bachman, 2004; Cheng, 2008; Hughes,

2001; Kanwal, 2016; Laar, 1998; Lasagabaster, 2011; Pedulla, Abrams, Madaus, Russell,

Ramos & Miao, 2003; Shahzad, 2018). ‘The most important quality of a test is its

usefulness’ (Bachman & Palmer, 1996, p.17). The usefulness of tests equates with

reliability (i.e., consistency of measurement), construct validity (i.e., defining a measurable

construct), authenticity (i.e., communicative and task-based), impact (i.e., on macro level

(society, educational systems), on micro level (individuals: test takers, teachers), and

practicality (the ways in which tests will be implemented) (Bachman & Palmer, 1996). The

test makers need to endorse the significance and purpose of the tests; the skills and abilities to be tested; and the limitations of constructing, administering, and scoring with scarce facilitation (Bachman & Palmer, 1996; Hughes, 2001). ‘Rubric is an assessment tool that lists the criteria for a piece of work’ (Andrade, 2005, p. 27). A rubric defines the advantageous qualities along with the prevailing drawbacks in learners’ performances. A rubric that teachers use to assign grades is known as a scoring rubric; it might be used for peer assessment, self-assessment, and teacher feedback (Andrade, 2005). It is imperative to examine the factors that affect the ESS of the UF in their speaking tests during the testing and rating phases. The testing phase involves the learner, task, interlocutor, and interaction, whereas the rating phase includes the raters and the rating scales (Kim, 2010).
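The working of an analytic scoring rubric of this kind can be sketched computationally. The criteria, weights, and band scale below are hypothetical, not taken from Kim’s (2010) rubric or Appendix D; the sketch only illustrates how per-criterion band ratings might be combined into a single score for a speaking performance.

```python
# Minimal sketch of analytic rubric scoring.
# Criteria, weights, and the 1-5 band scale are hypothetical, for illustration only.

RUBRIC_WEIGHTS = {
    "fluency": 0.3,
    "accuracy": 0.3,
    "pronunciation": 0.2,
    "task_fulfilment": 0.2,
}

def score_performance(ratings: dict) -> float:
    """Combine per-criterion band ratings (1-5) into one weighted score."""
    for criterion, band in ratings.items():
        if criterion not in RUBRIC_WEIGHTS:
            raise ValueError(f"unknown criterion: {criterion}")
        if not 1 <= band <= 5:
            raise ValueError(f"band out of range for {criterion}: {band}")
    return round(sum(RUBRIC_WEIGHTS[c] * b for c, b in ratings.items()), 2)

# One freshman's performance as rated on the four hypothetical criteria.
ratings = {"fluency": 4, "accuracy": 3, "pronunciation": 4, "task_fulfilment": 5}
print(score_performance(ratings))  # 3.9
```

If a rubric instead reports raw band totals, the weights can simply be set equal; the point is that a scoring rubric turns a recorded performance into a number that can be compared across raters and semesters.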

To conclude, the main focus of the study was to explore, describe, and interpret the classroom teaching of English speaking skills at the university freshman level, tested through an analytical scoring rubric on assigned tasks to which raters could contribute. This study was guided by the concept of oracy, frameworks of talk, task-based language teaching of speaking, large classrooms, testing speaking, and the analytical framework of a scoring rubric. While discussing frameworks of teaching and testing, the researcher tried to connect them with the topic of the research. It can be claimed that the present study is an attempt to better understand the phenomenon of testing spoken English in the Pakistani context. After the review of the literature, the conclusion of the whole discussion is presented in the following section, 2.15.

2.15 Conclusion

The review of the literature discusses how the teaching of the English language extends from basic to primary levels, from schools to colleges, and from colleges to universities. Due to lapses in the quality of school education (Kanwal, 2016; Memon, 2007; Zulfiqar, 2011), the responsibility ultimately builds up on university education. Researchers have focused on developing the English language; however, this chapter taps the gap of testing ESS in the examination system in Pakistan. Testing ESS is good for the UELTs and the UF (if shared) to know

their output and required input (Laar, 1998). Krashen’s ‘comprehensible input’ and Swain’s concept of ‘output’ (2005) together integrate second language learning. ‘Learners need sufficient output also’ (Manan, 2015, p. 245). Speaking performances are the output of one of the productive skills, speaking, just as texts are the output of writing. Testing this output, the outpouring of language learners, could then lead to testing to teach. Testing enhances (Kanwal, 2016) the

process of learning ESS. The UF might grasp English speaking skills to attain

contemporary knowledge, occupations, and power positions (Canagarajah & Ashraf, 2013;

Cheng, 2008; Haidar, 2017; Hassan, 2009; Haque, 1982; Jafri, Zai, Arain, & Soomro, 2013;

Rasul, 2013). Smooth-spoken speakers of English are better placed than those unskilled in ESS (Rahman, 2005). Therefore, regularly testing ESS guides the learning and teaching community to treat the constructs of speaking performance strategically and to target improvement.

The survey of literature establishes how this research bridges the gap in the research base on testing ESS: the testing of English speaking skills has not yet been encompassed in Pakistan.

CHAPTER 3

RESEARCH METHODOLOGY

3.1 Introduction

This chapter introduces the methodology of developing speaking skill through testing,

grading, and giving weightage in overall assessment of English language learning.

Speaking is a core skill of a language. English speaking skills ensure high status for the

users (Haidar, 2018). In order to understand the processes that lead to students’ progress in the different proficiencies of ESS, English teaching and learning advancements needed to be observed and analyzed in settings as natural as possible, in the form of recorded speaking performances. Recordings help in improving oracy; they are an educational mechanism that the UELTs and English language learners use. ‘One reason for the survival of oral literature is that while the number of bards appears to be declining, their means of communicating with listeners has improved dramatically with the introduction of radio throughout Africa’ (Hale, 1982). Therefore, I used qualitative (Greenfield, 2003) methods

of research to explore the development of oral English skills, with the research question: how can the learners be taught oral skills? I conducted qualitative research because I found it vital to explore the issue of developing ESS. I wanted to study the teaching/learning practices


(ESS) of the prospective engineers (2013-2014) from the department of Mechatronic Engineering. Therefore, I also included a quantitative component in the study: I classified the measurable variables and heard the silenced voices. The qualitative part of the research was a prerequisite to the detailed understanding (Creswell & Poth, 2016, p. 40) needed for developing ESS. I propose concentrating on improving the ESS of the UF. Educators have diversified focuses such as accuracy, grammar, fluency, posture, stress, expansion, diction, attitude, body language, gestures, and knowledge of function, tone, and intonation. The main purpose is to promote ESS, be it in correct or incorrect English. The UF were expected to gain confidence through classroom talk. In short, a need for extra attention to English speaking ability was observed. The next step was a feasible research plan. The appropriateness of a research

design can be evaluated through the methods used to conduct a study (Creswell & Poth,

2016).

3.1.1. Researcher cum university English language teacher

Research integrates teaching and learning (Clark, 1997, p.244). As a university English

language teacher, I found my own teaching and my students’ learning producing a seamless

blend (Clark, 1987) of a teacher and a researcher. My roles as a teacher and a researcher

merged (Colbeck, 1998). I studied at a Cantonment Board school and college. As a member of the Blue Birds and Girl Guides, in addition to being the Head Girl of the school, I always felt a need to use the English language for communication purposes. However, my communication and conversation remained an amalgam of Urdu and English, as was the

custom those days. My medium of instruction changed from Urdu to English at college

level. Therefore, throughout my school and college days, I personally tried to develop my

speaking competence by interacting with other competent speakers at school and college;

in family and social circle. Those were the personal efforts to acquire English language

(see section 2.3), the language of moving in networks of power. In order to enhance my

English speaking competence, I managed to do a short course in English language other

than my regular studies at Intermediate level (grade 11 and 12). Throughout school life,

the learners need to manage academic conversations other than reading and writing


assignments. Speaking and listening happen to be the most important and fundamental

skills.

As a full-time faculty member at the same university, besides teaching different types of English language courses, I was in charge of two student societies: the Air University Music Society (AUMS) and the Shaoor Society. Language plays an important role in learners’ lives and builds the cognitive capacities that affect their social capital (Ashraf, 2006). Both societies practiced social entrepreneurship, generating funds and contributing to the solution of social, vocational, and environmental issues, such as sponsoring a child, providing computer literacy to poor children, and cleaning the environment. ‘Social capital is related to social entrepreneurship’ (Madhooshi & Samimi, 2015, p. 108).

During my Master of Philosophy, I discerned a lack of speaking ability among some of the most knowledgeable schoolmates. Then, while conducting research sessions for my MPhil thesis (Riaz, 2012), I observed that more than 50% of the time used to be invested in discussing an issue before writing about it concisely, and I realized the importance of discussion for writing sessions. I observed growing confidence among the non-native/second/third/foreign language learners in the research sessions: given an opportunity to voice their analysis of a written statement, they grew clearer and more committed to learning than before. These

research sessions led to the idea of teaching and testing speaking skills at the university freshman level. As a UELT, I was cognizant of the difficulties of checking the written examinations of the UF in large classes, so much so that, with the help of the university Automation department, I introduced computer-based tests (CBTs) in Technical Report Writing courses in 2003. These tests were based on multiple choice questions (MCQs).

It is important to remember that theory might be tested through MCQs and in written form; ESS, however, need a matching (Puppin, 2007) testing system. I then discussed this idea with the Dean of the department, who approved it, saying “very little work has been done on oracy in Pakistan”. Along with the English department, I started compiling a textbook and redesigning the outline for the English ‘Communication Skills’ course. The aim was “an attempt to achieve certain ends in students-products” (Srivastava, 2005). The Higher

Education Commission (HEC), Pakistan mandated in 2013 that the UF from Mechatronics use the language lab for enhancing their speaking ability. I talked to the Senior Dean, then heading the department of Mechatronics, and sought his permission to use the language lab to let the learners improve their speaking skills. Incorporating in-lab speaking activities into the Communication Skills course, I started the task in line with the curriculum. This practice fulfilled HEC, Pakistan’s demand for fresh graduates to use the language lab as well.

While teaching abroad using a language lab, I had concluded that assessing all the language learners on their speaking performances was difficult and time-consuming. Therefore, on joining the teaching profession in Pakistan, I comfortably accommodated the written examination of English language learning. However, while teaching at the university level, the family, academic, and social pressure on university students to interact in English helped me realize the impact that testing (Bachman & Palmer, 1996; Hughes, 2001; Lasagabaster, 2011) might have on the speaking performances of the university freshmen (UF). Testing brought more deliberation to speaking (Chamberlin, Yasué & Chiang, 2018; Ur, 2008).

My MPhil research (Riaz, 2012) informed me that administrators had to invest time in their employees’ writing tasks because of those employees’ inability to write: the administrators edited their coordinators’ writing. As language learners, the employees had not been trained to write routine correspondence independently. As a teaching researcher, I realized that employers or administrators could not possibly speak or present on behalf of newly hired employees; the newly hired had to do the speaking themselves. Thus, I was keen to equip the UF to speak English. Employment was the targeted goal of the UF.

Employers were not satisfied with the English skills of their employees. Choosing teaching practices from the UELTs and combining them with the learning practices of the UF, I did not scrutinize the UF for problems with pronunciation, intonation, or pacing in the beginning (Alam & Bashir, 2013). This helped them build confidence in the target language, and their minor misunderstandings were ignored. While learning, students committed errors, but time and practice taught them gradually. I took corrective measures through collective feedback via email to all the UF. This type of feedback saved the UF from humiliation and pedagogical intervention, and it saved me as a UELT from the time-consuming explanation of individual feedback. In addition, the other UF remained involved in revising syntactic structures or inaccurate lexical forms.

Thus, my personal experience as a student, then as a university English language teacher cum researcher, and finally as a scholar in higher education helped me understand the UF’s problems in using English. Research is a process (England, 1994, p. 244), not merely a product; it is an ongoing process (Bourke, 2014, p. 1). The present study represents the shared space that I as a researcher-teacher and the research participants shaped (England, 1994) in the classrooms/language lab.

Before the acceptance of my PhD proposal, I held meetings with the university dissertation committee members: Professor Riaz Hassan, Prof Wasima Shehzad, Prof Rubina Kamran, and Assistant Prof Ismat Jabeen. These meetings, discussions, and question/answer sessions helped me fine-tune the topic and methodology of my dissertation for the benefit of my study, the English department(s), the engineering departments, the university freshmen, the university English language teachers, the university management and administration, and all English language teaching centers at large.

3.2 Research Design

This study uses a mixed methods research design, combining qualitative and quantitative methods (Creswell, 2009). The study aims to explore the possibility and affordances of teaching and evaluating oracy at the university freshman level involving multiple pools of respondents; therefore, a mixed methodology is deemed appropriate to obtain multiple perspectives and a well-rounded picture of the issue. Moreover, the researcher believes that reality is both subjective and objective and that the purpose of methodology is to answer the research question in the best possible way. The research questions have guided the selection of the data instruments and helped in carrying out further inquiry in a systematic way. Therefore, the paradigm used in the study is pragmatic, achieving its purpose through both qualitative and quantitative methods. It is believed that interviewing the respondents


helps obtain elaborate descriptions, while analyzing the students’ speaking quantitatively supports validating and cross-validating the information gathered from the interviews.

In addition, the selection of mixed methods is motivated by the nature of the research purpose and an epistemological stance termed participatory (Creswell, 2012). Therefore, the study, being mixed methods, incorporates and integrates statistical information as well as contextualizing the understanding of the participants. Mixed methods is used because variation in data sources leads to the validity of the study; it also covers gaps in the methodology and presents a better picture of the phenomenon at hand. Therefore, the rationale for combining qualitative and quantitative data is to enhance the validity of the study results (Creswell, 2012). The strategy of using more than one research instrument in the measurement of the main variables has been referred to as ‘triangulation of measurement’ (Bryman, 2003, p.

130). As a researcher, I balanced the weaknesses of the questionnaire with the strengths of qualitative interviews with the indirect observers, the UM&A; the direct observers, the UELTs; and the insider, the researcher/UELT herself, to improve the ‘reliability and validity’ of the present research (Denzin, 1970, p. 1). The research design of the present

study follows in Illustration 1:

Illustration 1. Study Research Design


This section addresses the research design of the present study. First, it clarifies the purposes of the research (see 1.3, 1.4, 1.5). Secondly, it demonstrates the research tools, i.e., the survey (see 3.4.2), the interviews (see 3.4.3), a scoring rubric (see Appendix D), the speaking performances of the UF in semesters 1 and 2 (see 4.3.9), and the comparative evaluation of their speaking ability in two consecutive semesters (see 4.3.10-4.3.15). Thirdly, it specifies the research participants (see 3.3.2) and the deixis of classroom research (see 3.2.1). Fourthly, it discusses the results of the survey administered to the freshmen to gauge their background knowledge in ESS (see 5.1.1). Fifthly, it examines the activities performed in the first semester (see 3.4.7) and the second semester (see 3.4.8). Then, it considers the strategies of data collection (compilation) and examination (analysis) (see chapter 4). Finally, it submits findings and contributions to the relevant literature (see chapter 5).

3.2.1 Justification of research design

The research design of the present study is constructed on classroom research (see section

3.2.2), research questions/hypothesis (see section 1.5), research participants (see section

3.3.2), use of the mixed method approach (see section 3.2.4), research instruments, the in-class survey (see

section 3.4.2) and video interviews (see section 3.4.3), and the data analysis techniques

(i.e., statistical or qualitative) (Onwuegbuzie & Leech, 2005). The framing of research

questions was crucial. The research questions of the present study provided a scope to

examine the teaching and testing of English oral skills including the role of the raters using

tasks, and a scoring rubric through a case study method. Qualitative research questions are

open-ended (Creswell & Poth, 2016) and evolving (Onwuegbuzie & Leech, 2005) to keep

the research process open to continual discovery (Hubbard & Power, 1993, p. 7). The

research questions ‘How do tasks contribute to students’ speaking performance?’ and ‘How

can the learners be taught oral skills?’ indicated a case study (Onwuegbuzie & Leech,

2005). Tasks, speaking performances, and the teaching and testing techniques were the boundaries of the case (Stake, 1995) that defined the case study methodology. A further question was: ‘Do raters (students and teachers) contribute to students’ speaking performance? If yes, to what extent?’ Here, the qualitative phase could be represented by a case study research design.

The qualitative research method provided a detailed description of the research topic (see


sections 3.4.4 and 3.4.5). The qualitative research paradigm is more exploratory in nature, which helped me interweave the current case study. On the other hand, using the quantitative research method, I focused on quantifying the speaking competencies of the UF through the classified features in the structure of the scoring rubric. ‘What is the factorial structure of the speaking test?’ is a quantitative question that sustained the research purpose of studying the difference in the English speaking performances of sem-1 and sem-2. This question on the structure of the speaking test sought to quantify speaking responses on the included variables (see Appendix D on the scoring rubric). As a mixed methods researcher, I looked to a

combination of approaches to collect and analyze data (Creswell & Poth, 2016).
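The semester-wise comparison just described can be sketched in a few lines. The score lists below are invented for illustration (rubric totals out of 5 for the same five students in sem-1 and sem-2); the sketch computes each student’s gain and the group’s mean gain, the kind of descriptive comparison that would precede any factorial analysis of the speaking test.

```python
from statistics import mean

# Hypothetical rubric totals (out of 5) for the same students in two semesters.
sem1 = [2.8, 3.1, 3.5, 2.6, 3.0]
sem2 = [3.4, 3.3, 3.9, 3.1, 3.6]

# Per-student gain from sem-1 to sem-2, and the mean gain across the group.
gains = [round(b - a, 2) for a, b in zip(sem1, sem2)]
print("per-student gains:", gains)          # [0.6, 0.2, 0.4, 0.5, 0.6]
print("mean gain:", round(mean(gains), 2))  # 0.46
```

With real data, an inferential step (for example, a paired comparison of the two semesters) would follow; only the descriptive gain is shown here.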

As a teacher researcher, I chose classroom research as my research design to improve the in-class teaching and learning practices, integrating teaching and research (Colbeck, 1998). As a teacher, I was familiar with the environment; the UF and I could participate together, learning about our own classroom (Hubbard & Power, 1993). I strategically worked long hours to achieve classroom teaching and research goals together and found classroom research manageable for attaining the research objectives. In the classroom, while teaching, I could examine how recorded speaking performances of the UF could support their ESS. Data collection was conducted in semesters 1 and 2 of 2013-2014. Classroom research enabled me to experiment with Kim’s (2010) scoring rubric, to attend to the extent to which the raters could contribute to improving the UF’s speaking performances, to examine the usefulness of the activities in enriching the UF’s ESS, and to find, in person, answers to the research questions of my study (see 1.5).

I used the case study approach for the particularity of a single case (the university freshmen in three sections of Mechatronic Engineering) so that it could be applied, if and when required, to the generality of relevant higher seats of learning that produce future engineers. I double checked

my understanding of the context through the university English language teachers and the

university management and administration. I comprehended the setting of the case through

adequate contextual descriptions from the office of Research Innovation and

Commercialization (ORIC) (see section 3.4.4.1.), the office of Quality Enhancement Cell

(QEC) (see section 3.4.4.2), and the office of Vice Chancellor (VC) (see section 3.4.4.5.).


Quantitative and qualitative are two different methods of conducting research. However, they complement each other: the two methods stand in a flesh (qualitative) on bone (quantitative) relationship (Bryman, 2003). The quantitative method is clearly discernible, as in the evaluation of students’ speaking performances (see section 3.4.9). The research questions of the current study helped me make a technical decision (Bryman, 2003) about how to apportion the qualitative and quantitative research methods. The classroom is a rich site (Hubbard & Power, 1993, p. xiv), and my research is grounded in the richness of the classroom.

3.2.2 Classroom Research

Classrooms generate research. In classroom research, the data is both qualitative and quantitative in nature (Schensul, 1999); hence, each data source (e.g., research participants, survey, interviews, RSPs) had to be analyzed in accordance with the specific research tradition under which it fell. Language testing is to be related to language teaching and language use (Bachman & Palmer, 1996, p. 13). For years, the UELTs have conducted classroom research unknowingly while teaching; they do not consider themselves researchers. I used my classrooms as places for research work and the UF as co-researchers. I changed the way I worked with the university language learners as I explored my classrooms methodically through research (Hubbard & Power, 1993). The UELTs have to solve problems emerging within an actual classroom for contextualized (Nunan, 1990, p. 2) learning procedures. Thus, the research design employed in this study is classroom research (Dornyei, 2007), marked by the quality of relationships between the classmates (Dornyei, 2007, p. 720). Classroom research is defined as research focused on the classroom (Allwright & Bailey, 2006). In other words, classroom research supports researchers in investigating what takes place in a classroom. I collected data from actual classrooms and a language lab and focused my research on the UF. My research catered to developing their ESS through pedagogical tasks and classroom interaction. It involved testing of ESS, the target skill, as a language-promoting activity of the UELTs (Nunan, 2003). I knew the university, the students, the colleagues, and my agendas. My research is grounded in this rich resource base (Hubbard & Power, 1993, p. xiv). I consulted expert opinion on the issue of oracy that I wanted to explore within the classroom setting. In this study, I as a teacher researcher


employed a criterion as the independent variable to assess its effects on the spoken performance of the UF. The UF’s speaking performances were the dependent variables. Concentrating on the relationships between the independent and dependent variables, this study falls into the category of quantitative research, mainly for its classroom data generation and collection. However, the extensive interview-based data collection from the UM and the UELTs directed me to also use a qualitative method of research (Greenfield, 2003). Together, these composed a mixed-method approach for my research.

3.2.3 Case Study Method

The case study method furnished the present investigation with contextual insights. ‘Case study is the study of the particularity and complexity of a single case’ (Stake, 1995). I have been teaching at the same university for more than a decade. These teaching years had given me time to observe the way freshmen could learn and use oral English. My study illustrates the way the Bachelor of Engineering in Mechatronics (BEMTS) semester-1 cohort at Air University developed its speaking skills by BEMTS semester-2 through being assessed on a particular criterion. Case study is the way to do educational research (Hubbard & Power, 1993).

The research questions (see 1.5) of the present study constructed a scope to examine the teaching and testing of English oral skills, including the role of the raters, using tasks and a scoring rubric through the case study method. To Creswell (1998), qualitative research questions are open-ended and evolving (Onwuegbuzie & Leech, 2006). The research questions ‘How do tasks contribute to students’ speaking performance?’ and ‘How can the learners be taught oral skills?’ indicated a case study (Onwuegbuzie & Leech, 2006, p. 482), as did ‘Do raters (students and teachers) contribute to students’ speaking performance? If yes, to what extent?’ Here, the qualitative phase could be represented by a case study research design. ‘What is the factorial structure of the speaking test?’ is a quantitative question that sustains the research purpose of studying the difference in the English speaking performances of sem-1 and sem-2. This question on the structure of the speaking test sought to quantify speaking responses on the included variables (see Appendix D on scoring rubrics).
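As a hedged illustration of what quantifying rubric-scored responses can involve (the criterion names and scores below are invented, not the rubric in Appendix D), inter-criterion correlations are a common first step before examining a factorial structure, since criteria that correlate strongly tend to load on a common factor:

```python
# Minimal sketch, under invented data: inspecting how rubric criteria
# co-vary across scored speaking responses. Criterion names and scores
# are hypothetical, not taken from the study's rubric.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Keys = rubric criteria; values = scores awarded across five responses.
scores = {
    "fluency":       [3, 4, 2, 5, 3],
    "pronunciation": [4, 4, 3, 4, 3],
    "vocabulary":    [3, 4, 2, 4, 3],
}

for a in scores:
    for b in scores:
        print(f"{a} vs {b}: {pearson(scores[a], scores[b]):.2f}")
```

A full factorial analysis would then extract factors from such a correlation matrix; this sketch only shows the quantification step the question implies.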


The present case study concerns a real-life phenomenon that gives space to a mixed-method approach to handle numerical and textual data for its research questions (William, 2007). The results of the survey conducted among the university freshmen in 2013 informed me about the uniqueness of the batch. I obtained qualitative data through interviews (Creswell, 2012; Greenfield, 2003) with the university administration and management (UM&A), which included Engineering Management, subject teachers, and the university English language teachers (UELTs). The support of the UM&A was tapped, and the oral English teaching and testing mechanism of the UELTs was realized. I spent ample time collecting, examining, and analyzing data due to the longitudinal nature of the research. Next, I de-identified the survey, the interviews of the UM&A and UELTs, and the speaking performances of the UF in sem-1 and sem-2, and saved them on a computer hard drive and an external drive (Haidar, 2016). I tagged all files as follows: F:\Drive - E\2013 Students surveys (Appendix A), F:\Drive - E\Administrators Management Interviews, F:\Drive - E\English Teachers Interviews, F:\Drive - E\Recordings 1 Semester Communication Skills\BEMTS 1A B C September 2013 Recordings\Semester 1 Scored, F:\Drive - E\Recordings 2 Semester TW 2014\2 Semester Scored, etc.

3.2.4 Mixed Method Approach

This study is built on quantitative as well as qualitative data (Dornyei, 2007). One purpose of qualitative methods is to discover important questions, processes, and relationships (Marshall & Ross, 2011), whereas quantitative methods help researchers collect hard, rigorous, and reliable data (Bryman, 2003). Combining both methods generated knowledge that created a better understanding of the classroom research than using one method alone. Using a mixed-method approach involves collecting, analyzing, and interpreting qualitative and quantitative data in a research study that explores the same latent reality (Onwuegbuzie & Leech, 2006). The quantitative data contained the number of students from government colleges or A Level school systems, the number of surveyed students, and the scores that semester-1 and semester-2 students obtained in the speaking performances. From the quantitative data, I generated verbal information. The qualitative data contained interviews (see section 3.4.3). I used the mixed-method approach, a combination of qualitative and quantitative


(Greenfield, 2003) research, to handle the two types of data (Dornyei, 2007), which helped with triangulation (see section 3.6.1). Triangulation helped attain a comprehensive appreciation, from different perspectives, of the phenomenon of teaching and promoting English speaking skills through testing procedures.

Oracy and literacy are interdependent (Wilkinson, 1970). Teaching and learning communication skills during the learning period has been mandated at the university freshman level (Curriculum Division, HEC, Revised 2009). Without comprehensive testing procedures, the teaching and learning processes have slowed down. I quantified the collected information about the teaching and testing of English oral skills to attain background knowledge of the UF in the field (see Table 4.5). Oracy in the English language has been ignored in favor of literacy in the English language. Identifying the gap in testing English speaking skills, this case study explored the teaching and testing of English speaking skills, hypothesizing that these skills of freshmen could be developed if evaluated. Reviewing the literature, it investigated diverse ways to facilitate English speaking skills (see section 2.2). I started with the statement of a research problem (see section 1.3) and then examined relevant literature to provide a rationale for the research problem. The rationale positions my study within the ongoing literature about the topic (Creswell & Poth, 2016), the testing of English speaking skills (see section 5.2.6). The present research examined the recorded English speaking performances of the UF on the factorial structure of a speaking test. Other than considering the raters’ contribution to the speakers’ performances, it weighed the input of tasks in stimulating the students’ speaking output.

3.3 Research Strategy

The research strategy was designed by observing the research questions and examining the validity of the research hypothesis (see 1.4) of this study. Ontologically, regarding the nature of the reality to be improved, the teacher researcher found the gap of testing and grading the English speaking skills of the UF in the teaching-learning process of the English language. The UELTs (see section 4.3.1.4) and the UM&A (see table 5.3) affirmed this reality. Epistemologically, being the UELT of the UF, the researcher as ‘an insider’ (Creswell & Poth, 2016) collaborates with


the research participants (see section 3.3.2) to reduce the identified gap (see section 1.3). Methodologically, I focused on the aim of this study. I applied a criterion (Kim’s (2010) analytic scoring rubric, see Appendix D) to the UF’s speaking performances to help them see their stronger speaking constructs and improve their speaking ability where required. I tapped the background knowledge and practice of the UF to achieve this goal. I discovered the contribution of the teachers/raters to the UF’s speaking performance through interviews with the UELTs. Interviews with the UM&A ascertained their perspectives on the teaching and testing of the ESS of the UF. The speaking performances of the UF in sem-1 and sem-2 were rated on criteria to monitor the range of the UF’s excellence on semester-wise tasks in ESS.

3.3.1 Background of UF 2013

According to the 2013 intake in BEMTS, more than 89% of the students enrolled in university education were from government colleges, and less than 11% of the students were from ‘O’ and ‘A’ Level education (refer to Appendix, 2013 Intake BEMTS Students). The UF had to undertake a written admission test before getting enrolled in the university. After obtaining 60% marks in the written test, the UF were interviewed to check their confidence level and the level of their speaking performance. Air University is basically an engineering university running different departments. I chose the Mechatronics department for this research study because 1) HEC had mandated use of the language lab for enhancing the speaking ability of prospective mechatronic engineers in 2013-2014, 2) I was teaching English in all the sections of the department, 3) I could handle the affairs of English language teaching single-handedly, 4) I did not have to oversee the working of another UELT for the classroom procedure, and 5) I could schedule an extra hour for speaking performances in the departmental free slots. Pragmatically, as a UELT researcher I had the freedom to choose the methods, techniques, and procedures of research that best met the needs and purposes (Creswell & Poth, 2016, p. 23).


3.3.2 Research Participants

The UELTs (9), the UF (120), and the UM and A (11) were my study participants, as shown in Illustration 2:

Illustration 2. Research Participants

These research participants were chosen because of ‘easy accessibility’ (Etikan, Musa & Alkassim, 2016, p. 2). Moreover, the study participants understood the necessity of English in Pakistan (Haidar, 2016, p. 104). Teaching language to the UF, the UELTs were the best source to enlighten me about their oral language teaching techniques and practices. I interviewed them to develop an insight into the learners’ linguistic cognition as they joined university, and to crosscheck the UF/CELLs’ information about the same as data source triangulation (Haidar, 2016; Stake, 1995). It was paramount for me to know about the UELTs’ way of teaching oral English, the value they gave to learners’ ESS, their ESS testing criteria, and their testing techniques. These UELTs were teaching in different departments of Air University. They had taught at other national and international universities as well.

Then, the UF in three sections of Mechatronic Engineering participated in this research. My research dealt only with the submitted audio-recorded speaking performances of students from the three sections of semester-1 Mechatronic Engineering throughout two courses of English language. As a UELT, even without solid weightage in the overall assessment of


English language, I could generally motivate the UF to submit their speaking performances for the evaluation and improvement of their ESS.

Air University campus was the site as shown in Illustration 3:

Illustration 3. Research Site

I conducted interviews with the management and administration of the university, including the vice chancellor, deans, directors, and different heads of departments at Air University. They were the decision makers, policy makers, and implementers of university policies. They are considered to be effective leaders, effective communicators, and supervisors who maintain discipline and understand leadership and management (Kanwal, 2016). Their support could impact the overall efficiency of the language teachers, the language learners, and the university itself. Their opinion and speculation, conception and implementation could transform the status of the English language on the map of Air University. The researcher’s long-time association with the university was also taken into account. To ensure the quality of responses, in-person interviews were conducted.

The sample for this study contained 292 submitted (recorded) responses (pair, group, and individual) of the UF from the three sections of Mechatronic Engineering semester-1, Air University. Group tasks and pair work motivate mutual interaction because of their interactive, discovery-oriented nature (Savignon, 2018). Individual RSPs catered to the UF’s individual problems, i.e., non-availability of a partner/interlocutor or a late submission.


Then, in the second semester, 562 recorded responses from the same engineering sections represented the sample. It was a conveniently accessible sample (Dornyei, 2007) because I was teaching those UF. They submitted their speaking performances to me (their language teacher) for evaluation. I taught and tested the research participants. The population was the UF at university level. Due to the different sample sizes of the two semesters, the results (competencies) of both semesters were converted to percentages and compared. Convenience sampling (Etikan, Musa & Alkassim, 2016) was the sampling technique for this part of the study.
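A minimal sketch of this normalization step, under invented counts (only the sample sizes 292 and 562 come from the text): expressing each semester’s tally as a percentage of its own number of submissions makes the two unequal samples directly comparable.

```python
def to_percentage(count, sample_size):
    """Express a raw count as a percentage of its own sample size."""
    return 100.0 * count / sample_size

# Sample sizes as reported in the study; the top-band counts here are
# hypothetical, for illustration only.
sem1_total, sem2_total = 292, 562
sem1_top_band, sem2_top_band = 73, 197

sem1_pct = to_percentage(sem1_top_band, sem1_total)
sem2_pct = to_percentage(sem2_top_band, sem2_total)
print(f"sem-1: {sem1_pct:.1f}%  sem-2: {sem2_pct:.1f}%")
```

Comparing percentages rather than raw counts is what allows the 292-response and 562-response samples to be set side by side.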

3.4 Data Collection

A problem generates research and methodology. The selection of tools and modes of data collection to explore a process might unfold a solution. ‘Qualitative study capitalizes on ordinary ways of getting acquainted with things’ (Stake, 1995, p. 49). Collecting data for a case study ‘involves a wide array of procedures as the researcher builds an in-depth picture of the case’ (Creswell, 2012, p. 132). Similarly, this research, within a time frame and using qualitative and quantitative tools (Banyard & Miller, 1998; Greenfield, 2003), deciphers the problem of developing English speaking skills at the university freshman level.

3.4.1 Time Frame of Research Data

Focusing on the educative aspects of the student community, I opted for a longitudinal study to highlight the change that took place in their speaking ability from semester-1 (September 2013) to semester-2 (May 2014). Data were collected for two specific time periods. Semester-1 was treated as a single unit, and progress was worked out in semester-2 as a single unit, i.e., the outcome of the speaking performances of semester-1 was compared with the outcome of the speaking performances of semester-2. Menard (2002) recommended, and Dornyei endorsed, data collection for two time periods where the cases were on a par with each other. Moreover, data between two time periods could be compared for analysis (Dornyei, 2007). To embark on this research, I surveyed the UF to learn about their background English language learning experiences.


3.4.2 In Class Survey

Usually, English is taught at college level as a subject (Haidar & Fang, 2019; Manan, 2012), not as a language or a communicative skill. Finding out about the practices in language teaching and testing at college level in contemporary Pakistan, from the university freshmen, was crucial to the background knowledge of the research. Two steps were taken: a report on admissions to Mechatronic Engineering (BEMTS), Air University was obtained, and a survey was conducted. The report helped establish that, in the 2013 intake, more than 89% of the students under study were from government colleges from different corners of the country, and only 11% of the students were from ‘O’ and ‘A’ Level education. Therefore, to learn about their background English language proficiency, I emailed a questionnaire (Greenfield, 2003) to the students.

In my presence (as a UELT), they completed their in-class survey and emailed it to me as part of a lesson (Dornyei, 2007) in the language lab of the university. The teacher’s administering the survey herself during class time made the students take it seriously. This survey enabled me to understand the students’ situation in language learning. It aimed at describing certain characteristics of the sample for this study. The survey offered the informants 40 questions with the options ‘yes’, ‘no’, ‘sometime’, ‘uncertain’, and ‘occasionally’. These options helped the researcher measure the extent of their liking for the use of the English language at the personal, public, and academic levels. I was able to see into the teaching and testing practices, and the testing criteria for oral skills, at their college level. The survey conducted for this research asked factual questions (Dornyei, 2007) related to the UF’s history of language learning at college level (see survey questionnaire in the appendix). The data gathered through the UF’s responses corresponded with the college English teachers’ talk (Ashraf, Riaz, & Zulfiqar, 2008). I further validated the gathered information through data triangulation by interviewing the UELTs and the UM and A.
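As a hedged sketch of how such closed-option responses can be tallied (the responses below are invented; only the five option labels come from the survey description), per-question option frequencies yield the ‘extent of liking’ measure directly:

```python
from collections import Counter

# The five option labels offered by the survey.
OPTIONS = ("yes", "no", "sometime", "uncertain", "occasionally")

# Hypothetical responses to one of the 40 questions.
responses = ["yes", "yes", "no", "sometime", "yes", "occasionally", "no"]

tally = Counter(responses)          # option -> count
total = len(responses)
for option in OPTIONS:
    share = 100.0 * tally[option] / total
    print(f"{option:>12}: {tally[option]} ({share:.1f}%)")
```

Repeating this per question gives a frequency profile for each of the 40 items, which is the kind of quantification the chapter describes.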


3.4.3 Video Interviews with University English Language Teachers

Qualitative research possesses a more humanistic stance. Interviewing is a resourceful research tool (Creswell, 2012; Dornyei, 2007; Greenfield, 2003). Being a key instrument, as a qualitative researcher I collected data by interviewing participants (Creswell, 2012). I conducted one-on-one video interviews with my colleagues, the UELTs. Then, using qualitative content analysis, I identified and coded the themes, i.e., teaching, learning, testing, grading, and weightage of English speaking skills, in the interviews ‘for the subjective interpretation of the content of text data’ (Hsieh & Shannon, 2005, p. 1278).
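As a crude illustration only (the theme keywords and the interview snippet below are invented, and keyword matching is far simpler than the interpretive coding Hsieh and Shannon describe), a first mechanical pass over interview segments might look like this:

```python
# Hypothetical keyword lists per theme; real content analysis relies on
# the researcher's interpretive judgment, not keyword matching alone.
THEME_KEYWORDS = {
    "teaching": ("teach", "lesson", "instruct"),
    "testing":  ("test", "assess", "exam"),
    "grading":  ("grade", "score", "marks"),
}

def code_segment(segment):
    """Return the set of theme codes whose keywords appear in a segment."""
    text = segment.lower()
    return {theme for theme, kws in THEME_KEYWORDS.items()
            if any(kw in text for kw in kws)}

# Invented interview snippet for illustration.
segment = "We test presentations but rarely grade the speaking itself."
print(code_segment(segment))
```

Such a pass can only flag candidate segments; assigning and refining the final codes remains the researcher’s interpretive work.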

I planned to analyze the way the UF were guided to promote their speaking ability. I reviewed the way their linguistic ability had been gauged before the start of the present study. Then the UF could be empowered to work with language (Kanwal, 2016). To empower the UF to function in the language, they needed to be cognizant of their academic standing in ESS. It was important to interview the UELTs to develop an insight into the learners’ linguistic cognition as they joined university (Bygate, 2011, p. 412), the UELTs’ way of teaching, the value they gave to learners’ ESS, their language testing criteria, and their language testing techniques. Teachers have first-hand knowledge of students (Sayer, 2015).

3.4.3.1 UELTs’ Teaching Practices

I procured permission from the UELTs teaching at undergraduate level (or who had taught at undergraduate level) to participate in my research (Creswell, 2012). Pedagogical practices involve a combination of ‘teaching methods, teacher competency, and availability of instructional facilities, assessment and options available to teachers’ (Kanwal, 2016, p. 61). These variables generate diverse results in students’ language proficiency. The UELTs were consciously teaching ESS to the UF (see section 4.3.1.1). As mutually discussed, in a second language learning context it was important to show the learners what to do, besides not letting them feel inferior. The point of argument was that checking understandability made the learners conscious, whereas some compromise releasing their tension made


interaction effective to a greater extent (Interview, T5, 7/5/2014). Thus, a yardstick was required to measure the level of understanding among the language learners.

3.4.3.2 UELTs’ Testing Techniques

All the UELTs were testing the presentation skills of the UF. They were using their own criteria to judge the project presentations in lieu of the English speaking skills of the UF (see section 4.3.1.2). Keeping the different criteria in mind, I had a broad canvas to work on. Including ‘everything’ (Interview, T1, 5/3/2014) in assessment procedures was beyond possibility. Leaving ‘other things’ (Interview, T1, 5/3/2014) while including tone, fluency, and body language was probably insufficient. Retaining vocabulary, pronunciation, and facial expression for evaluative procedures, and leaving the rest to the assessor’s imagination, was also not justified. Coming from three different school systems, English medium, Urdu medium, and Madrassa schools (Interview, T8, 4/6/2014), the UF had different levels of understanding of the English language. I inferred that the UELTs and the UF needed to be trained to address the identified gap of testing the English speaking skills of the UF.

Next, I attained a panoramic overview of the top management and administration (the vice chancellor, deans, directors, and different heads of departments at Air University) through interviews, the reason being that the vitality of co-constructed knowledge during exchanges between the two stakeholders in an interview (Duit & Treagust, 2012) could not be denied. Thus, listing the ‘intended uses of tests’, the ‘potential consequences’, and the ‘possible outcomes’ in terms of the ‘desirability or undesirability of their occurring’, together with the frequency of the ‘likely outcomes’, was crucial in order to observe and treat them accordingly in the long run. The analysis of the possible consequences needed to be compared with the ‘without tests’ consequences (Bachman & Palmer, 1996, p. 35).

As a UELT, I motivated and capacitated the sem-1 and sem-2 UF ‘to perform at their highest level of ability’ (Bachman & Palmer, 1996). The analytic scoring rubric helped the UELT ‘build considerations of fairness’ during the process of evaluation. The


speaking performances of the UF were based on communication skills (see section 3.4.6), which acknowledged the curriculum as the body of knowledge (Kanwal, 2016) that the UELTs of the English department wanted to transmit (Srivastava, 2005, p. 4) to the UF. The approach of testing the speaking ability of the UF through RSPs (submitting their understanding of the presentation, or their appreciation of the article or of one aspect of it), based on their course curriculum (see section 3.4.7), gave the test takers confidence. It drew on their listening, observing, thinking, analyzing, verbalizing, strategic, motivating, appreciating, and criticizing skills. The test takers were also anxious to make a good impression on their rater. Thus, the testing process was humanized by involving the test takers in the process. The test takers were treated as responsible individuals. The UF were as well informed about the testing procedure as humanly possible. The UELT as test giver and the UF as test takers were accountable for using the test tasks. Moreover, the scores (decisions) of the UF could be verified against the descriptors of the scoring rubric. However, there were ‘no universal answers’ to the test tasks. Thus, the testing procedures matched the philosophy of language testing (Bachman & Palmer, 1996, p. 13).

3.4.4 University Management & Administration’s Interviews

Interviewing the UM and A, the ‘campus informants’ (Creswell, 2012, p. 341), was crucial for learning about their views on the teaching, testing, and grading of ESS. These interviews helped in understanding the use of ESS in the university, along with its level of importance, and in identifying what learning facilities were available for the UF (Haidar, 2016, p. 64). The in-depth interviews with the UM and A and the UELTs helped me further understand diverse viewpoints and mutual feelings so as to facilitate the learning condition of the UF. The UM and A were the decision makers, the policy makers, and the implementers of university policies. They were knowledgeable academicians, competent to contribute to the development of the learners. They understood the power of language based on enciphering and deciphering (Bourdieu, 1991, p. 503). Their support could enhance the overall efficiency of the language teachers, the language learners, the university, and society itself. Their opinion and speculation, conception and implementation could transform the status of the English language on the map of Air University.


Appointments were made, a list of questions (see Appendix C) was dropped off at their offices prior to the one-on-one interviews for their consideration, and the interviews were video recorded. The questions were pre-prepared, and the interviewees were encouraged to elaborate on the focal issues. These recorded interviews were compiled, listened to, transcribed verbatim (Creswell, 2012, p. 289), and then tabulated for the present study. It is vital to note that, without changing the meaning, the responses of the interviewees were adapted for this study. The positive responses of the UM and A further motivated me to explore how I could test the ESS at UF level.

3.4.4.1 ORIC Perspective

ORIC stands for the Office of Research Innovation and Commercialization. Research is crucial for achievable economic growth and a future knowledge economy, and research is possible through language. The exceptionally exclusive goal of linguistic production is the ‘pursuit of maximum informative efficiency’ (Bourdieu, 1991, p. 503). The English language is a means to observe and direct economic growth, and it has an effect on economic growth (Lee, 2012). Economic growth leads to an increased demand for English-speaking employees and thus to higher English proficiency. English has been widely used in the commercial sector as well. The goal of the Higher Education Commission is to motivate and facilitate the higher education institutions in prioritizing research. To attain this purpose, centers called Offices of Research, Innovation and Commercialization (ORICs) have been established in universities. ORIC provides integrated services for all research, innovation, and commercialization related matters. In addition, copyright and collaborations fall within its jurisdiction. Furthermore, it arranges seminars, symposiums, conferences, and workshops. Language, being crucially important in the lives of men and women, affects the living and thinking capacities, which in turn affect the social capital of the people (Ashraf, 2006, p. 2). Thus, the status of the English language helps to shape the social climate.

I interviewed the director of ORIC keeping in view the crucial role that the office might play in the promotion of the ESS of the UF. The office of ORIC was ready to launch any constructive idea for the university. It aims to serve and develop the community for social


improvement through creating a linkage between industry and academia. ORIC was cognizant of the different backgrounds of the UF and of the fact that they did not all belong to urban areas. Communication in the English language is a weak area that needs to be consciously promoted. Learning ESS means meeting market needs and requirements, and uniformity with market trends is of great importance (Noor, Ismail & Arif, 2014, p. 5). The students owe their education and advancement to their institution. Written and verbal communication plays a significant role in future careers. Some students might be exceptional at their studies but unable to transfer or present their ideas. Some instructors might be experts in their subject, but if they cannot transfer their knowledge, they fail as instructors. Training the trainers could be beneficial. ORIC could provide strategic and operational support to the research activities of the university. It would have a central role in facilitating the outcomes of the university’s research by focusing on this major trait, which can ultimately impact the welfare of the community. When the progress of the community is the concern, and the higher seats of learning, i.e., universities entailing schools and colleges, are the sources, the offices of the quality enhancement cell play a constructive role in promoting higher standards of research and evaluation.

3.4.4.2 Office of QEC Perspective

The Quality Enhancement Cell (QEC) is a high-level working body whose establishment as a national unit was initiated on the direction of the Higher Education Commission. The QEC Directorate was established at Air University in 2010. It is responsible for promoting higher standards of education and research in the university and for developing quality assurance processes and evaluation standards (Wahab, 2013). Universities need to realize that quality is constantly based on continuous and committed efforts (Hina & Ajmal, 2016, p. 117). The QEC envisions bringing excellence to all programs offered at the university. It is committed to adding to the quality of education in each department by implementing an effective and efficient quality assurance system to fulfill the requirements of all stakeholders. In Pakistan, the standard of education did not match international standards, and a degree obtained from Pakistan was not accepted abroad (Herani, Mugheri & Advani, 2015, p. 37).


QEC’s vision to bring excellence includes promotion of English language including

English speaking skills of the university freshman.

Air University QEC aims at bringing the educational standards at par with recognized

international standards. It intends to implement HEC criteria, a total of eleven standards

that articulate specific dimensions of institutional quality (2015, Herani, Mugheri &

Advani; Wahab, 2013) for higher education and promote quality culture. University’s

governance system implements university’s goals. It establishes corporate effectiveness

and integrity. The institution creates and maintains enabling environments for teaching

learning service and scholarship that helps in developing a research culture. It assures

provision of support adequate for the appropriate functioning of all programs being offered

by the University. Quality education is a prerequisite to gain access to knowledge which

guarantees economic development (2007, Batool & Qureshi, p. ix). Economic development

and quality enhancement can never be separated from utterances that receive their value

‘only in relation to a market’ (Bourdieu, 1991, p. 503). At Air University, different tiers of

governance such as faculty, administration, staff, students, and the governing body support

each other to achieve the institutional mission and objectives in an appropriate manner.

The prevalent system encourages a participatory approach that allows open discussion of issues concerning mission, planning, and resource allocation by those who assume responsibility for the respective activities.

The academic programs offered by the university are consistent with its mission and goals.

These programs culminate in identified competencies of students and lead to degrees. The

institution works effectively to plan, provide, evaluate, assure and improve the academic

quality and integrity of its academic programs, curricula, credits and degrees awarded.

The QEC at the University works with the specific end goal of efficiently enhancing quality in higher education (Hina & Ajmal, 2016, p. 118).

QECs are working in universities. However, most of the faculty and students are incognizant of their work (Hina & Ajmal, 2016, p. 117). Faculty members could make changes in the contents of a course within ten percent of the total course contents. The labs could be upgraded as and when a faculty member recommended it or a program need arose. In the

category of engineering, Air University was 7th in the ranking of domestic universities in

Pakistan. The department of English was the first one selected for the establishment and

the development of the self-assessment board, according to the criteria and standards.

During a discussion, the Dean of FSS acquainted the office of QEC with the requirement

of a well-equipped language lab. The director of quality enhancement was directly involved in the support that the administration could give to the department of English to improve the speaking ability of the non-native/second language learners.

At its establishment, the first and foremost requirement was to fulfill the criteria given by the Higher Education Commission. Therefore, the office of QEC assisted the Vice Chancellor to

undertake the important step for the establishment of the language lab. The

department/University facilitated the UELTs and the UF through software. This facilitation, whenever availed, supported the students in different aspects of learning language(s). It inspired them to work through that software to self-learn.

Promoting the speaking ability of the students was very important. The University got a blend of students from urban and rural backgrounds. The students from rural backgrounds faced difficulty with the English language. Those students in particular, and all students in general, were supported in learning the English language. The UF, the research participants for the present study, recorded their speaking performances in the language lab.

This setting provided the UF with an opportunity to discuss, talk, comment, practice their

performances, record, review their recordings (if they wanted) and re-record with their

peers.

Another aspect to be emphasized was that the Director QEC was a teacher as well. Thus, a director-teacher’s perspective was retrieved. As a teacher, the director QEC strongly

believed in developing ESS. The office of QEC was cognizant that it could not compel the UELTs to take on an initiative overload to urgently solve the high level of underachievement in English oral skills. However, at the management level, it appreciated any move of the kind by the department or an individual during the summer vacations. In case of the availability of


students from different departments of the university belonging to the local area(s), the UELTs could encourage them to benefit from the resources. Moreover, classes needed to be formed.

While teaching in one of the departments of the university, it was observed that the entrants were shy of speaking, especially in English. They could be encouraged to learn. Freshmen had to be given assignments and presentations until they developed their oral skills at the university level. Then, the students could present themselves well in English. The students

coming from rural backgrounds, with Urdu as the medium of instruction, had never spoken English. They were shy of talking in English. They could explain their point of view in Urdu but not in English. Therefore, they needed to be encouraged. Only 15 to 20 percent could convey their ideas in English. Body language, interacting, and conveying their ideas to others required skills and techniques that were challenging for them. The students at

the early stages, i.e., the 1st to 4th semesters of their engineering courses, were not good communicators. However, in the final year presentations, they were able to present their ideas. The UF lagged behind in rephrasing, defining, explaining, and expanding their ideas or concepts. They were too shy to speak. They might not pick appropriate words, fearing that fun would be made of their talk in English.

The UF are generally shy of talking to the top management. Once they are comfortable, they talk fluently. They can interact with the teachers because the teachers are frequent visitors to the class. A teaching director could doubly benefit the students. Oral skills in English are one of the students’ weak areas. Only 20% could convey their ideas in

English; otherwise, most of them could not. However, 80% of the students presented their final year projects well. In a team of two or three students, every individual had to present his or her area of the presentation. The Department of English has been playing a great role in teaching English speaking skills.

The UF learn English as a language by deliberately interacting with academicians, professors, lecturers, peers, and the intelligentsia. Basically, their future is in English: from job interviews to interaction with people at different levels, either they use the English language or their communication is heavily loaded with it. During the course of study, the students

realize that a deficiency in the English language could deprive them of an opportunity in the job market. The office of QEC emphasized that the learners must consciously be taught ESS. Thus, within three credit hours, English reading, writing, listening, and speaking skills are crucial to teach in an integrated manner in an English language course; QEC demands the same.

However, the basic purpose of the English courses in universities is to build up the students’ speaking, comprehension, and writing skills in English. In a three-credit-hour course,

30 to 40% attention should be given to ESS to enable the students to communicate

comfortably.

Other than the English language courses, the core requirements of a course, i.e., dynamics and statics, must be fulfilled. After learning, the students need to present their understanding.

Thus, their speaking performances must be considered. Students coming from humble backgrounds or rural areas might score 800-900 marks out of 1000, covering the core element of the course. However, the only issue is that sometimes they cannot convey it in English; they might convey it in their national language. At the end of the course, the

students have to write a report, they have to develop a project, and they have to present that

project. In order to assess ESS, a competent team of professionals must approve certain locally based, nationally and internationally recognized criteria.

Thus, the university needs to groom them for the future. Without overshadowing the main element, the significance of English must be considered. QEC oversees, through the HoDs and Deans, that classroom procedures, lectures, discussions, and question-answer

sessions are managed in the English language. The Vice Chancellor of Air University re-emphasized to the university teachers that they should interact with students in English.

While assessing a subject through midterm exams, quizzes, or the final examination, the university teachers must assess the English writing and speaking skills at about 8 to 10%. The main reason is that the students’ speech competence adds value to their

marketability. It is vitally important for a finance or marketing student to give certain presentations. An engineer has to present and market his ideas. Thus, meeting the need of the times, the sustainability of quality criteria is vital for enhancing the standard of language education at the university (Hina & Ajmal, 2016, p. 118).

Since engineers need to present and market their ideas, I tried to harness the approaches of

the Head of the newly established department of Computer Science. As a newly appointed

head, he was keen to incorporate the best advancements in developing the department.

3.4.4.3. Perspective of Head of Computer Science Department

The head houses the brain in a human body. It contains the sensory organs (eyes, ears, nose, and a tongue in the mouth) that function for the body, processing relevant information through the brain. Likewise, heads of departments (HoDs) have the prime duty to promptly, compatibly, and equitably implement the required administrative and managerial processes. Above all, heads of departments are effective communicators (Kanwal, 2016). Most universities

have different departments. The departments run under the supervision of teachers in

charge/HoDs. These HoDs shoulder pedagogic, supervisory and administrative duties to

support the principal/vice chancellor (Heinmiller, 1921, p. 149).

Some of the undertakings of a HoD are leadership (knowledge transfer, development of

academic and research standing) and management (financial, people, and quality

assurance), and responsibility for teaching, research, and students. On the pedagogic side, the head of department is first of all responsible for the maintenance of high standards of teaching (Heinmiller, 1921, p. 149). The HoDs manage departmental communications, among other relevant undertakings, e.g., university policies, decisions,

systems and performances.

The Computer Science (CS) department at Air University, Islamabad, Pakistan, was newly established in 2014 with 120-150 UF. The HoD of the department was keen to

incorporate the best advancements in developing the department. Every project in the Computer Science department had to have a written report (applying the students’ technical writing skills) and a presentation (applying their ESS) about what technologies the students had used, what challenges they faced, how they improved their project, and what they did to maintain the improvement. One of the biggest differences that teachers could make was providing

exposure to the English language to the Pakistani UF. The best thing that any department could do to promote English speaking skills is to make sure that the students speak English extensively. To speak English extensively, the students need to practice it. The Computer Science department has two courses in English. If ESS can be incorporated within those

two courses of English language, i.e., Communication Skills and Technical Report Writing,

that would be great. On the pedagogical side, the HoD is responsible for the maintenance of a high standard of teaching (Heinmiller, 1921). Thus, incorporating ESS within the two already running courses on English Communication Skills and Technical Report Writing was an agreed feasible option. This is what the present study did: English speaking skills were incorporated within the two courses, and opportunities to practice were provided to the UF through recorded speaking performances.

Pakistan is a small economy as compared to China and India (Ahmed, Mahmood, Hasan,

Sidhu & Butt, 2016). The HoD of CS opined that the Chinese cannot speak English, and most of the Indians are hard to understand when they start speaking English. Therefore, if the Pakistanis want to compete in the field of CS with the Indians and the Chinese, one of the edges that they could gain is speaking fluent English and reading and writing English well.

Some of the UF tend to get better as they get senior because they have more exposure to

some of the teachers and people who teach them technical writing and English oral skills.

As they become senior, they tend to improve, but it is not good enough. In a class of 40

students, all need to speak, write, and read well. Only then could the head of the department of Computer Science be satisfied. RSP was one technique that the

current study used as an added effort to facilitate all the UF with equal opportunity to speak,

discuss, agree and disagree.

Since the head of department must be an authority in teaching methods (Heinmiller, 1921,

p. 150) at the university level, the best the CS teachers could do to promote the English language was to deliver their lectures in English, apart from conversing with the students in English. The English department was registering complaints from different departments that

the students could not speak English. The English department and the facilitators

facilitating the English language needed to work on the gaps in reading, speaking, and writing.

The UELTs needed to innovate ways to attain the goal. The UELTs could do anything, but it had to be a combined effort of all the subject teachers. Two courses of English language could not suffice for the four-year program of the Bachelor of Computer Science Engineering.

It was a constant effort. Thus, reviews must be planned to take place at frequent intervals

(Heinmiller, 1921, p. 150).

Reading was perceived as the most important skill, because almost the entire body of knowledge is in English. Unless materials are translated into English, they do not get published. Joseph Fourier, an important scientist, wrote The Analytical Theory of Heat in the 1800s. It was a critically important piece of work, almost the basis of telecom engineering and electronic engineering, in the French language. Almost a hundred years after he died, it was translated into English. Thus, if you want to gain knowledge, reading English is crucial.

It was imperative for the UF to realize that all the knowledge is in English. People would listen to them if they spoke in English: going online, interacting with people to get projects, doing them, writing documentation, and shipping them off. However, it is very hard to make them understand this. The UF should be guided to voluntarily get involved in learning ESS.

Moreover, to surely train the UF in the international language, the CS teachers (supporting the UELTs) encouraged and motivated the students to talk in English without worrying about the correctness of form. The students were advised to practice ESS. Ideally, the department head wanted all the students to speak, write, and read well. To motivate the UF to speak well, the department chair suggested that 30% of the marks be allotted to ESS on a scale of 100% for evaluating the English language; the 10% assigned to ESS was considered insufficient.

The HoD of CS proposed that the UELTs decide on standardized criteria to assess the UF’s English speaking performances. Then, the students could be assessed on a criterion, not instinctively. A criterion was important so that the students could


see a reliable and valid resource to assess their speaking competencies. They needed to be

assessed as a unit under the same exposure through a single set of criteria.

The freshmen in our country come from a yearly system to a semester system. They have a hard time adjusting to an unknown system. The HoD of CS did not want to put excessive pressure on the UF. Another problem might be that the UF could stop asking questions if they were restricted to speaking English. If the UF were restricted to asking questions in the English language only, only a couple out of 40 might ask questions, and the rest of them might call

them a theta. Thus, the change in the strategy was to deliver lectures in English. If the UF

did not understand, the teacher would explain in Urdu. Then, the UF were supposed to try

to understand. However, the HoD of CS could not stop the UF from asking him a question in Urdu.

Focus on speaking English was a complicated issue in the Computer Science department. The UF from the City School and Beacon House school systems and some Army Public Schools could speak decently compared to the UF from government schools. Only 5% of the students

speak decent English. They have a barrier in their minds that people would make fun of

them. The UF need to gain confidence to speak English without being conscious of their

accent and pronunciation. They need to speak. English is not their language. They were

not born with it. It is just a language.

3.4.4.4. The Office of Senior Dean Perspective

The dean of deans joined Air University in 2002. After setting up a computer science department, he started mechatronic engineering in 2006. He became head of the department of mechatronics, then the dean of engineering, and finally the senior dean of the university, seeing the university evolve from a small organization into one of the highly ranked universities of Pakistan, officially categorized as an engineering university by HEC for purposes of standards.


Cognizant of the fact that English is a global language, important for reading anything, for following television and the internet, and for connecting with people all over the world, he was interested in a speech-focused course where the UELTs could improve the English speaking and comprehension of the UF. People think in terms of the available words and

usually they do not think outside the world of words. The senior dean was inclined to increase the vocabulary and fluency of the students. Being an initiative-inclined dean, he preferred that the students participate in debates, dramas, and activities that could come naturally to them rather than feel shy of extracurricular activities. He envisioned the university undergraduates feeling relaxed in terms of speaking English. He wanted the graduates to

join international forums where certain norms are observed. Due to immigration facilities,

he liked to see Pakistani youngsters doing good jobs in Canada, Australia, and New

Zealand. Some of them were well placed. In the next 5-10 years, he could foresee many of

the graduates in an international environment. The UF needed to be very fluent.

Pakistan is a delicate nation. We are intelligent and hardworking people. We are not low

IQ people. We are the people who survive in the most difficult environments because we

live in many difficult circumstances. Our people are capable. They can excel, and they do excel if we go on an individual basis. English should not be their handicap. They need to be

very proficient. The dean of deans was confident that the base of the undergraduates could

be developed. Speaking English is part of education. At university, there had been

academic arguments whether university should have 100% English teaching in the class

room, or 80% or 50%. Many of the senior colleagues in engineering departments favored

80% in Urdu and 20% in English. Their reason was to strengthen the national language. To them, a child could learn in his or her own language. They gave the examples of Japan and China: these nations became great not by learning English but by translating everything into Japanese and Chinese. Similarly, in his 20s, he had observed his Chinese colleagues in England and America photocopy books and send them to China. Their government gave them money to send every book in their field to the embassy. The embassy sent the books to China, where they were translated into Chinese. This is how they proliferated knowledge in China.


However, we are a country that is going into the English camp. The Chinese and the

Japanese find it strange that we speak English to each other. As the deans’ dean, he delivered his speech in English at the convocation, whereas the Prime Minister delivered his speech in Urdu. The Prime Minister was appreciated for his nationalism and solidarity. However, Urdu, in spite of being our national language, does not have many scientific terms in engineering. The other colleagues from engineering disagreed with this argument, as it might never let the national language develop. Well informed as the senior dean was, he asserted that the university should have 80% English teaching and 20% Urdu, because in that 20% the teachers become informal with the students to make them comfortable. Many of the students who go to America, Canada, and Australia should be successful there and should also keep their own culture.

At university, the class strength is typically 40 students. Out of 40 students,

4-5 students are good, and about 10 students are adequate. However, about 25 students are

weak in English. The system of education at the school level churns out the majority of the weak undergraduates. The majority has been neglected. The standard of government schools is falling. The solution to this problem is to wind up private education at the school level and to strengthen government education. We need to rebuild government education at the school level. The majority of people need to be included and refocused on.

English is difficult. The students are not comfortable expressing their point of view in English. Most of them are unable to talk; they are not shy. They use strange language in phone text messages, and their words are short. However, they are decent and well mannered. It is not easy for them to talk on any topic of their liking for 5 minutes. The students join a university at 18 years of age; when they graduate, they are 23-24. They undergo a change. From a school where they are taught to sit, at university they are expected to open up. They

change from boys and girls to young men and women.

Engineering professors focus on equations and the quantitative side. Engineers do not have much English; they focus more on Engineering and Mathematics. As an engineering

professor, he used to teach 95% in English, but he had to compromise, maybe to 80%, teaching in English. To him, university English teachers have an immense responsibility

to shoulder within only 6 credit hours out of 40 credit hours in Engineering. It is 15% of

the time the prospective engineers spend with English faculty members. Basically, there is something wrong with our degree program in Pakistan. We have adopted the American system. In the American system, the language is American English, which is a variant of the English language. The English taught at Air University is called the King’s English, the colonial English brought into the British Indian Empire by people like George Mountbatten, who was the queen’s cousin. He used to speak upper-class English. So the English that we speak is not the English of Manchester or Birmingham; it is the English of Cambridge and Oxford. In England, the English language tells class. In the four parts of the United Kingdom, i.e., England, Scotland, Wales, and Northern Ireland, people are classified according to their English. This is how the people of the higher class were separated from the people of the lower class. Therefore, the two universities of Cambridge and Oxford were established alongside the universities of Manchester and Birmingham for their different classes. English was carefully taught differently to their own people because they never wanted their lower classes to come into the upper classes.

For promoting English speaking skills, first of all, it is paramount to maintain a common practice of talking to each other in English inside the university, as English needs to be used as a tool for the betterment of people. Pakistanis may not talk in English on the streets or in the shops because they do not want to lose their talent. They are proud of their identity and

culture; they have no complexes. They are not the people of a weak past; therefore, they are proud of their past. We would never want our children to forget Urdu. However, they need to learn English to be competent.

In spite of the pressure from the senior colleagues to promote and learn in the national language, being the senior dean, the dean of deans advises the engineering professors to conduct their classes in the English language. Despite being foreign qualified, the engineering professors and lecturers are not comfortable with English. Bright and brilliant faculty members, having spent years abroad for their PhD degrees, strongly argue for the use of the national language for lectures and the understanding of concepts. However, the university


undergraduates cannot graduate without two mandatory courses of English

Communication Skills and Technical Writing. Today’s world is multilingual. There is no harm in learning another language or a number of languages. There is no harm in being good in Chinese. Without knowing Chinese, we might be totally illiterate, unable to communicate, read, or write; it is natural. We need to think about the future of our children; that is our future.

These days, speaking is the most important skill. Half a century back, writing was a bigger skill than speaking. Quaid-e-Azam was a very eloquent person, and knowledgeable people are very eloquent. With the passage of time and the advancement of technology, the spoken word became more important and quicker than the written word. These days, the media is strong. In the last American elections, hearing Obama talk on TV was almost a feast for the ears. The words came beautifully out of his mouth, as if he were not human.

The speech competence of the students adds value to their marketability. Entering a room

for an interview, sitting down, and expressing oneself in a relaxed manner are important skills. Be it an engineer, a banker, or a business person, people assess a graduate as a person: they want to know if he or she can express himself or herself. If not, he or she might be the best person on earth, but the company will not give him or her due value. The employer wants a person who can express himself or herself well, who knows the right word to use, and who is persuasive. This is why the senior dean would like the students at the freshman level to be evaluated in ESS.

The office of the senior dean could prefer some available established norms on particular criteria to standardize the procedure of evaluation. Science and engineering are quantitative, and the social sciences have other ways to evaluate. He exclaimed that he had no idea how the evaluation of ESS could be done. However, human beings are not machines, so the evaluation should be neither purely scientific nor personal. Dr Abdus Salam, a

professor in Imperial College London, belonged to a poor family from a village, Jhang, in

Pakistan. He went to England to do his PhD when he was 20; otherwise, he studied at schools in Lahore. He learned English late in life, yet his grammar and vocabulary were amazing and his eloquence was outstanding. Dr Salam’s flawless English was an example showing that even a scientist who probably never formally studied the English language could be good at it. The message for the young prospective engineers is that to speak a language, one should try to be excellent in it. Dr Salam’s English was good, and he proved that no matter at what age one learns a language, one must always have a desire to be perfect. Nevertheless, since human beings cannot be perfect, near perfection should be the target.

Students’ truly weak areas are English and Mathematics. It is unfair that English is 6 credit hours out of one hundred and twenty hours. English should be in every semester, from semester 1 to semester 8. The connection of the English faculty with the engineering students and other departments should not be restricted to a couple of courses. This connection, in one form or the other, should be sustained throughout their four-year stay in the degree program. This is the biggest mistake that we are making. However, in spite of emphasizing the use of the language lab for learning English, HEC and PEC will not let the university put English in every semester.

3.4.4.5. The Office of Vice Chancellor (VC) Perspective

A vice-chancellor (VC) is the chief executive of a university. His main offices are policy

making and administration (Collison & Millen, 1969, p. 79). He may serve as chairman of

the governing body. Mainly, this office secures an adequate financial base to deliver the

University’s mission. Providing academic and administrative leadership to the University,

he fulfills aims and objectives of the University other than carrying out important civic

duties. Air University is predominantly an engineering university. According to Aristotle, ‘Engineering is the art of directing the great sources of power in nature for the convenience of humankind’ (Reed, 2014). English, being the official language of the University, is taken as a supportive subject in the university.

According to University Portfolio report (Wahab, 2013), an ‘official publication’ (Shenton,

2004, p. 66), the Vice Chancellor at Air University holds the status of a member of the higher management. He heads the university functional committee (UFC), the academic


council, the selection board, the Finance and Planning Committee, and the Executive

Committee, other than being a member of the Board of Governors (BOG).

In 2014, following a semi-structured list of questions, I video-recorded an interview of approximately 38 minutes with the vice chancellor. The video recording was transcribed verbatim. I analyzed the transcript for emergent themes related to ESS at the UF level. The key

themes that emerged were 1) importance of ESS, 2) teaching of ESS, 3) learning practices

of ESS, 4) doing away with the reluctance of the learners, 5) students’ role in society and

6) support of the university administration and management.

Air University offers academic programs consistent with its mission and goals. The

academic programs crown the established skills and proficiencies of students. The skills

and proficiencies direct the students to degrees. The university works efficaciously to plan,

provide, evaluate, validate and improve the academic quality of its programs, curricula,

credits, and degrees awarded (Wahab, 2013, p. 81). The interview with the Vice Chancellor

acknowledged the perceptions and spoken words of the leader as he commented on the

significance of ESS. Captaining the ship, the Vice Chancellor was accessible for establishing quality assurance processes and enhancing the students’ experience (Scott, Bell, Coates & Grebennikov, 2010, p. 411). In the University, different tiers of governance, i.e., faculty,

students, staff, administration, and the management mutually support each other to achieve the institutional mission. The prevalent system at the University encouraged a participatory approach that allowed discussion of diverse issues, like planning and resource allocation, with the concerned authorities for the respective activities.

Air University aims to attain the status of a leading national university, outshining others in the fields of teaching, learning, research, innovation, and public service (Wahab, 2013, p. 23). Thus, new ways were identified to revamp the learning and teaching outcomes for the UF, to make university systems, practices, and processes more agile. Oral presentations were considered crucial at the professional level. The initial steps to employment, like job interviews, were discussed, along with apprehensions about how a candidate might be underestimated if he or she could not express ideas or sell expertise. Spoken English


plays a central role in a professional’s life. Thus, vigorous emphasis had to be placed on

ESS at the University level. The UELTs, utilizing the available time, must uplift the UF in written as well as oral English to bridge the collegial gaps in language learning. This could create a positive working and learning environment (Scott,

Bell, Coates & Grebennikov, 2010, p. 409). Identifying the features likely to improve the

overall understanding of promoting ESS among the UF was important. About 4-6 credit

hours were available to the UELTs to teach English language. Therefore, within those

credit hours the stakeholders had to bring up the English language proficiency (written or

oral) among the students. It was crucial to consider the effectiveness of teaching and

learning ESS procedures. Next, it was to gauge the results of teaching and learning of

English Speaking Skills to find niches for the advancement of the same. Successful

implementation of new initiatives was vital to generate improvements in learning and

teaching quality (Scott, Bell, Coates & Grebennikov, 2010, p. 406). Thus, I assessed the

general academic motivation for ESS, and considered the probable barriers towards up

gradation of the same. It was paramount to annex their linguistic proficiency to professional

level. The syllabus in English needed to cater to this delta / passage to contribute to the

professionalism of the UF.

A university treats all disciplines equitably as it aims to educate. Moreover, the nucleus of its mission is to reconstruct theory, society and habitat. Individuals are free to unfold mysteries at universities, the formal citadels of learning and of generating newer knowledge (Bosetti & Walker, 2010, p. 9). The English language is one of the prerequisites for graduation, and it has been globalized over time. On the other hand, the UF's problems in spoken English were acknowledged at the university level. 'One of the most pressing concerns for vice-chancellors is the fundamental challenge of globalization' (Bosetti & Walker, 2010, p. 6). Identifying the problem of spoken English with a global perspective in mind, then solving it through concept and newness, was the art and competence appropriate to succeed. To handle the problem of the English language, the office of the Vice Chancellor was ready to augment the ongoing productive practices of ESS teaching and learning, be they extra classes, extra coaching, more language courses, or language labs, within the limits and resources of the university. The labs can also be upgraded as and when a faculty member recommends it or a program need arises (Wahab, 2013, p. 84). It was suggested that the English language department administer a survey to benchmark the emphasis to be placed on spoken English and written language. Considering the credit hours for the English language, it was recommended to incorporate changes in the related syllabus. For the leader, it was crucial to provide a structure to the overall planning process that was to be introduced to implement the direction set (Bryman, 2007, p. 698).

Another compelling concern of a vice chancellor is establishing an essential role for his university in society (Bosetti & Walker, 2010, p. 6). Universities, through research, teaching and scholarship, raise the standard of living of the citizens of a country and expand their pursuits. Education, particularly through universities, drives economic and human resilience (Nelson, 2003) to integrate diversity in society. The English language is used in research, teaching and scholarship to strengthen the economy and human placement and procurement. Strengthening the English speaking performances of the university's individuals cum professionals was a change that could not simply happen but had to be led (Scott, Bell, Coates & Grebennikov, 2010, p. 401). Some extra effort was required, and the office of the Vice Chancellor was ready to contribute to the development of the oral skills of the UF: educating the students to the extent that they could express themselves as early as humanly possible within their program(s) of study, converse with their fellow students in English, and exchange ideas in English. They needed to be encouraged to engage in interaction. They might remain reluctant for some time; however, the rate of learning would improve. The Vice Chancellor was positive that linguistic proficiency depended upon the involvement of the UELTs and the UF. The governance system at the university supports its mission and strengthens its effectiveness. "The institution creates and maintains enabling environments for teaching learning service" (Wahab, 2013, p. 35). This validates adequate support for the functioning of all programs that the university offers. In a highly competitive world, improving the students' proficiency in spoken and written English is a part of the mission. The UF's interactivity would most probably contribute to their societal roles.


However, in spite of being paramount, proficiency in spoken English was declining, and the university had to raise the standard of English to fill the deficit. Teaching and learning is a progressively expensive constituent of the university's assignment; however, there has been restricted awareness of the part university management and administration play in establishing effective teaching and learning (Debowski & Blake, 2004). Graduating from schools and colleges, the UF were university students unfamiliar with the environment of a university. As university students, they had to take charge, which extended the responsibility of their faculty. The faculty of the 1st and 2nd semesters needed to transform them from students into professionals. The UELTs had two semesters specifically, and eight semesters generally, to contribute to that transformation.

The UF's interactivity depended on their background. Most of the UF from regular schools had difficulty in interaction, whereas the UF from Cambridge O and A levels were more responsive. The UELTs had to remove the barrier of reluctance that the UF faced in the English language. Faculty members could make changes in course contents within ten percent of the total course contents (Wahab, 2013, p. 82). Overall improvement in the students' English speaking performances was observed by the time of graduation. While formally teaching, the UELTs needed to demonstrate the best results in the minimum time; the rate of learning would improve alongside proficiency in English. Merely four to five active UF in an English class reflect an inactive instructor, whereas an active instructor involves the UF by encouraging and motivating them. Beyond this, one department might put more emphasis on English language proficiency than another. The university teachers were always advised to conduct lessons, discussions, and communication in English; the office of the Vice Chancellor expected the teachers to converse 95-96% in English. In a class of 55-60 students, four students per group presented their project. Two of the student presenters performed well while the other two were reserved, which made a 50% success rate. Besides, the parents gave positive feedback on the conduct of their children.

Being an engineer, the Vice Chancellor was cognizant that, thanks to engineering jargon, the engineers could manage to communicate in spite of limited proficiency in the English language. However, the role that technical English speaking and writing plays in facilitating them could not be overlooked. At the university teaching level, the ESS of the university teachers counted for research orientation and knowledge transfer. Thus, the English language should be given importance equal to the other engineering subjects. The instructors in professional programs needed to engage with the faculty of English language to see the extent of their role in a class. Through guidance, the engineering faculty might contribute to the ESS of the university students for four years. The university students are required to express themselves in terms of reports and projects.

The faculty needed a permanent set of criteria to gauge ESS, through which confidence in assessment might be achieved. Instructors in possession of a criterion would probably evaluate the learners better than those not observing one. Realizing the importance of assessment, the UELTs most probably could be coached in assessment techniques, so that every instructor might gauge the learners' ESS in the same way. Results measured through a valid criterion become reliable sources of information about learning for the University.

3.4.5 Rationale for Recorded Speaking Performances

Pakistani teachers have to manage large classes; giving every learner an opportunity to speak in English in a 50-minute class is unachievable. Getting the students' speaking performances recorded was a way to address this problem. Evaluating the speaking performances was an immense yet achievable task. If built-in cameras with a powerful sound system had been available, videotaping natural in-class practices might have been possible for the study, but receiving audio clips of the students' responses was a way out of the constraints. It meant managing within available resources, with the guided motivation of the facilitator. This technique of learning English speaking skills takes up the implications for further research, i.e., 'development of classroom activities that encourage meaningful communication in the second language learning and are administratively feasible' (Canale and Swain, 1980, p. 36).


Having said this, the first semester students were asked to record one-minute short dialogues. The researcher deemed it important to retain the commitment of the first semester students, so the task duration was reduced to suit their requirements. The short dialogue was approved to boost their confidence level, as they felt comfortable in a shorter speaking performance. Other than short dialogues, scenarios, 'stories with twists and turns' (Konno, Nonaka, & Ogilvy, 2014), were an impetus for 'discussion, reflection, and action', stimulating the UF to perform speaking while developing critical thinking. These activities 'made the participants apply their critical thinking skills when discussing their topic of concern' (Bakar & Latif, 2010, p. 137). For example, a group of students exchanged their most exciting experiences. A scenario that they created had the potential to use language in a meaningful, informative way. They supported their stories with lexical forms and syntactic structures: 'ahead of them', 'motivating them', 'giving them pieces of useful advice', 'out of breath', 'exhausted but somehow we made it to Monal', 'steep climbs', 'trip to Swat', 'Marghazar', 'White Palace', 'Kalaam', 'Badein', 'Bahrain', 'Margalla Hills', 'hiking', 'the most beautiful river in Pakistan-the river Swat', which provided the language learners with an opportunity to explore their capacity to use the target language (Refer to DVD, Semester 1 Scored, Audio Recording A247 B248 C249 D250 E251 Group).

Believing in the productivity of narratives, Ninetto Santoro and Andrea Allard evolved an array of scenarios to develop speculation about teaching and learning processes and to understand the experiences from diverse perspectives. I used scenarios in my classes to prompt and promote the linguistic competence of the language learners, based on the reflective skills they developed through the scenarios. Scenarios directed the language learners to shape their talk in contemplation, in the form of utterances, statements, questions, analysis and synthesis. The UF, working individually or in pairs, were required to experience realia to achieve speaking (1998, p. 158). These discussions, narratives, pair talks and group talks were not possible in large classes. However, recording the speaking performances of individuals, pairs, and groups facilitated the UF in developing speaking skills and critical thinking as well. Likewise, testing of the same was possible because of the recordings.


3.4.5.1. Rationale for Near Natural Recordings

As a researcher, I wanted to record the natural responses of the university freshmen for the current study. However, recording natural conversations in the English language did not appear feasible; it was difficult for me to record natural speaking performances of the UF in the classroom environment. In a fifty-minute class, forty to sixty-five UF could not all talk in English, even in an English language class (see section 1.2). I used recorded speaking performances to make the practice of ESS possible. Motivating the UF to secure good grades, I kept providing them with opportunities to speak English and send the recordings to the English language teacher. Their practices and performances were like role play (Canale & Swain, 1980; Clipson-Boyles, 1998; Laar, 1998). The UF used the English language occasionally and formally (see Table 5.2). The objectives of the current research study were to teach oral skills to second language learners, to test the suitability of Kim's (2010) criterion for assessing the UF's oral skills, and to measure the extent of the raters' (students' and teachers') contribution to improving the UF's task-based speaking performance in the tests and speaking assignments of two English language courses, i.e., Communication Skills and Technical Writing.

This framework of research questions gauges the extent to which university freshmen could improve their English speaking skills during the running semesters, through English courses and speaking tests in engineering departments like Mechatronics. The usefulness of testing is cyclical (Bachman & Palmer, 1996). The current study opens research gates for the future: with robust recording systems installed in classrooms (see section 3.4.5), recording natural English language utterances and interactions could also become possible for testing and scoring, to improve and teach the same.

3.4.6 Semester 1 (Fall, 2013)

First semester Communication Skills classes for the UF in Mechatronics Engineering started in fall 2013. Acknowledging curriculum as the body of knowledge (Kanwal, 2016) that the educators wished 'to transmit' (Srivastava, 2005, p. 4) to the UF, the department of English compiled a customized textbook for the course, Communication Skills. The freshmen were introduced to oral skills and to the testing of oral skills. I observed the UF's participation in class activities, and the difference in class participation when they were graded. They were introduced to rubrics to raise awareness. At the freshman level, the students had a fear of the unknown; they were not used to class interaction. Thus, it was vital to help them overcome their fear and make them comfortable talking with each other and with the facilitator. I, as a UELT, made class participation mandatory and marked it. With the incentive of obtaining grades for class participation, class interaction increased and the learners had more opportunities to promote oracy. Getting grades for their responses encouraged them to participate actively. As class participation imparted confidence, they were led to realize that appropriate responses could be graded better than impulsive responses. Observing the students' response to their graded class participation, I continued with my research study. Students from the same classes were taught oral skills, they were asked to record their speaking performances, and then their recorded responses were assessed to discover the results.

I used various frameworks for classroom talk (Holderness & Lalljee, 1998) to inspire the UF to make utterances and develop oracy. They were encouraged, through on-the-spot grades, to talk about issues of interest. Questions, and probes within questions, were asked to elicit responses. Scenarios were created for the students, or they were asked to create situations to hold conversations. They were asked to hold discussions. Display questions were asked to give them confidence. The language lab was used for pair and group assignments and for recordings of short discussions (audio recorded through the Audacity software). Within pair recordings, the UF were asked to submit recordings on 'greetings, apologies, and congratulations (GAC)', communicative functions (Canale & Swain, 2002), providing them with an opportunity to combine three functions of language, speech acts, in one speaking performance. Other than pair recordings, fourteen groups of 3-6 students also submitted their recordings on different topics like 'telephoning effectively', 'internet users are becoming less social', 'reading precedes writing and speaking', 'social communication', 'hiking', 'our lives are designed by our efforts not by our destiny', 'the larger a city is the more isolated the people are', 'the most exciting experience', 'interview for assistant maintenance engineers', 'if I become the president of my country', 'job interview', 'problem and solution: help me and my family', and 'problem and solution: promotion and illness'. In addition to 112 pair recordings (224 responses) and 14 group recordings (63 responses), 5 individuals (5 responses) emailed their oral feedback on the difference they found in their communicative competence between the first week and the last week of the course. Pair tasks gave students support and confidence in the first semester. Recorded group discussions enabled them to take their turns; in group discussions, they worked as teams. A communicative or functional approach (Canale & Swain, 2002) was used for the UF to learn ESS.
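The response counts above can be tallied in a short sketch. This is illustrative bookkeeping only, using the figures reported in this section:

```python
# Tally of semester 1 recorded speaking performances, from the counts
# reported above: 112 pairs, 14 groups totalling 63 speakers, 5 individuals.
pair_recordings = 112
pair_responses = pair_recordings * 2      # two speakers per pair
group_responses = 63                      # 14 groups of 3-6 students
individual_responses = 5                  # emailed oral feedback

total_responses = pair_responses + group_responses + individual_responses
print(total_responses)  # 292, the semester 1 total graded in section 3.4.8
```

The total matches the 292 first-semester performances scored later against Kim's (2010) rating scales.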

The UF were provided with software (Audacity) to record their speaking performances in the language lab. They were trained to record through the Audacity software and reminded to send usable files. For every .aup file, the senders were informed that they had delayed their feedback by not sending an .ogg file. Illustration 4 follows:

Illustration 4. Feedback through Email to All Students

The above visual shows how the UF were provided with instructions and feedback to comply with. Since the researcher was teaching three sections of BEMTS-1, they were provided with a title format for the files, like '1A 234 567 Intro', '1B 899 786 Intro', '1C 765 432', but the UF took time to follow the title format, which added to their own and their UELT's tasks.
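The submission checks described above (rejecting unfinished .aup project files and nudging students toward the title format) could be sketched roughly as follows. The helper name and the exact pattern are illustrative assumptions, not part of the study:

```python
import re

# Illustrative pattern for the title format: section (1A/1B/1C), one or
# more three-digit roll numbers, an optional task label, .ogg extension.
TITLE_FORMAT = re.compile(r"^1[ABC]( \d{3})+( \w+)?\.ogg$")

def check_submission(filename: str) -> str:
    """Hypothetical check mirroring the feedback emailed to students."""
    if filename.endswith(".aup"):
        # .aup is an unfinished Audacity project, not playable audio
        return "resend as .ogg"
    if not TITLE_FORMAT.match(filename):
        return "rename using the title format"
    return "accepted"

print(check_submission("1B 899 786 Intro.ogg"))  # accepted
print(check_submission("1A 234 567 Intro.aup"))  # resend as .ogg
```

A teacher handling hundreds of emailed files could run such a filter before grading, so that feedback on unusable files goes out immediately rather than at scoring time.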

3.4.7 Semester 2 (Spring, 2014)

As the second semester started, one group of students was required to present a research article every week in the Mechatronics Engineering semester 2 (Spring, 2014) Technical Writing classes, followed by a question-and-answer session. The audience UF were assigned to record their comments on their peers' presentations, in addition to submitting their own experience of presenting. In the academic context, tasks took an abstract form due to the different units of information to be integrated into the whole context of information (Cummins, 2000). Thus, in addition to basic interpersonal communicative skills (BICS), cognitive academic language proficiency (CALP) was tested.

Groups of students from semester 2 presented research articles including Discussing Controversial Issues in the Classroom (Hand & Levinson, 2012), The Impact of an elaborated assessee's role in peer assessment (Kim, 2009), Guilty in whose eyes? University students' perceptions of cheating and plagiarism in academic work and assessment (Ashworth, Bannister, Thorne, & Students, 1997), Reading as a Writer in Australia and China: Adapting the Workshop (Kroll & Dai, 2014), Developing team skills through a collaborative writing assignment (Thomas, 2014), Tweeting an Ethos: Emergency Messaging, Social Media, and Teaching Technical Communication (Bowdon, 2014), Using Social Media for Collective Knowledge-Making: Technical Communication Between the Global North and South (Longo, 2014), Social Media in Technical Communication (Kimme Hea, 2014), Technical Communication Unbound: Knowledge Work, Social Media, and Emergent Communicative Practices (Ferro & Zachry, 2014), Adventures in the blogosphere: from blog readers to blog writers, Writing to Learn: Benefits and Limitations (Fry & Villagomez, 2012), and 'He's gone and wrote over it': the use of wikis for collaborative report writing in a primary school classroom (Doult & Walker, 2014).


In the second semester, the students were asked to submit recorded comments on the way their colleagues had presented their articles and how well they had prepared their presentations: what was effective, and what needed to be improved. The learners were more inclined to submit their speaking performances due to the expectations of the language teacher, the incentive of being graded (Chen, Warden & Chang, 2005), and their own willingness to give feedback on the efforts of their peers. They talked about their talking, the strengths and weaknesses of their submissions (Greenfield, 2003). Thus, I included 562 individual responses from semester 2 in this study.

3.4.8 Evaluation of Students’ Speaking Performances

The significance of helping the UF want to learn could never be denied. Boosting the learners' self-respect and encouraging them to feel capable of learning was a mechanism I chose as a UELT. This made healthy assessments possible (Stiggins, 2002), in which the students feel encouraged.

The collected performances, 292 from the first semester and 562 from the second, were graded according to Kim's (2010) rating scales. The practice of grading presentations stimulated the language learners to learn attentively; it made them learn consciously (Palmer, 1917, as cited in Ellis, 1993, p. 102). The result of each speaking performance was entered on a Microsoft Excel sheet as the Evaluation of Speaking Performance of Semester 1 and the Evaluation of Speaking Performance of Semester 2. Then, the collective standing of semester 1 was compared with the collective standing of semester 2 on five scales (meaningfulness, grammatical competence, discourse competence, task completion, and intelligibility).

The percentages of all the performances under the five main categories (meaningfulness, grammatical competence, discourse competence, task completion, and intelligibility), with their six-point scale variations (5 for 'excellent control', 4 for 'good', 3 for 'adequate', 2 for 'fair', 1 for 'limited', and 0 for 'no control'), were compared across the two semesters to find out the difference.
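The comparison just described can be sketched as follows. The scores here are invented placeholders, not the study's data; the sketch only shows how percentage distributions on the 0-5 scale can be contrasted between the two semesters:

```python
from collections import Counter

def scale_percentages(scores):
    """Share of performances at each point of the 0-5 scale, in percent."""
    counts = Counter(scores)
    n = len(scores)
    return {level: 100 * counts.get(level, 0) / n for level in range(6)}

# Hypothetical meaningfulness ratings for a handful of performances
semester1 = [2, 2, 3, 3, 3, 4, 4, 5]
semester2 = [3, 3, 4, 4, 4, 4, 5, 5]

p1 = scale_percentages(semester1)
p2 = scale_percentages(semester2)
difference = {level: p2[level] - p1[level] for level in range(6)}
print(difference)  # positive values at the upper levels show a shift
                   # toward better control in semester 2
```

The same arithmetic, done in Excel in the study, was repeated for each of the five testing constructs.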


I include a comparative evaluation of semesters 1 and 2 on Meaningfulness (one testing construct) as a specimen of the method I used to compare all the testing constructs of the two semesters.

3.4.9 Comparative Evaluation of Meaningfulness (Semester1 & 2)

All the speaking performances of semester 1 (292) and of semester 2 (562) were scored by a single scorer, the researcher herself, to maintain scoring consistency and avoid discrepancy. Following is Hyun Jung Kim's analytic scoring rubric for meaningfulness:

Table 3.1 Meaningfulness (Communication Effectiveness)

Is the response meaningful and effectively communicated? The response:

1. 5 Excellent: is completely meaningful-what the speaker wants to convey is completely clear and easy to understand. 4 Good: is generally meaningful-in general, what the speaker wants to convey is clear and easy to understand. 3 Adequate: occasionally displays obscure points; however, main points are still conveyed. 2 Fair: often displays obscure points, leaving the listener confused. 1 Limited: is generally unclear and extremely hard to understand. 0 No: is incomprehensible.

2. 5 Excellent: is fully elaborated. 4 Good: is well elaborated. 3 Adequate: includes some elaboration. 2 Fair: includes little elaboration. 1 Limited: is not well elaborated. 0 No: contains not enough evidence to evaluate.

3. 5 Excellent: delivers sophisticated ideas. 4 Good: delivers generally sophisticated ideas. 3 Adequate: delivers somewhat simple ideas. 2 Fair: delivers simple ideas. 1 Limited: delivers extremely simple, limited ideas.

In Table 3.1, I replaced the bulleted descriptions of the six-point scales (0 for 'no control' to 5 for 'excellent control') with numbers (1, 2, and 3) for better understanding of the criterion. After scoring the recorded oral responses of students in semester 1 and semester 2, each recorded speaking performance is rated, per criterion, on a six-point scale from no control to excellent control.

According to Table 3.1, an excellent meaningful response-1 is completely meaningful, clear, and easy to understand. Likewise, an excellent meaningful response-2 is fully elaborated, and an excellent meaningful response-3 delivers sophisticated ideas. Each of these three statements under the category of excellent meaningfulness shows a different shade of excellence: one is comprehensively clear, another is amplified, and the third offers mature ideas. As can be seen, the level good in the construct of meaningfulness has three variations: the first is generally meaningful, clear and easy to grasp; the second is well evolved; the third offers generally mature ideas. The level adequate in the testing construct of meaningfulness also has three variations. The first occasionally displays obscure points; however, it is worthwhile to note that this level still conveys the main points. The second extension of adequate includes some elaboration, and the third carries somewhat simple ideas. The level of fair meaningfulness likewise has three steps. The first often displays obscure points, though, unlike the first step of limited meaningfulness, it is not generally unclear. The second step of fair meaningfulness includes little elaboration, in comparison with the second step of limited meaningfulness, which is not well elaborated. The third step of fair meaningfulness communicates simple ideas, whereas the third step of limited meaningfulness communicates extremely simple and limited ideas. As can be seen, the level of no meaningfulness has two extensions, distinguishing between an incomprehensible response and a delivery containing not enough evidence to evaluate.

The assessment of the BEMTS-1 batch according to Kim's five rating scales was compared with the assessment of the same batch as it was promoted to BEMTS-2. The meaningfulness in the speaking performances of the learners in the first semester was compared with the meaningfulness of their speaking performances in the second semester. The students had a general idea that their competencies were being evaluated. The purpose of comparing the assessment of semester 1 with the assessment of semester 2 on the same test construct is to measure the change resulting from the deliberate teaching and testing of speaking performances in both semesters at the freshman level.

3.5 Scope and Limitations of the Methodology

1) I could not conduct a pilot study, the preliminary study (Thabane, et al., 2010), in the Mechatronics Engineering semester 1 (Fall, 2013) Communication Skills classes to familiarize the students with oral skills and Kim's rubrics prior to performing full-scale classroom research. However, as a UELT, I managed in-class awareness sessions for mindful, intentional (Shapiro, Carlson, Astin & Freedman, 2006) oracy. In addition, class participation and interaction was made mandatory and was marked as well.

2) I, as a teacher researcher, could not force, motivate, convince or inspire every single UF from Mechatronics Engineering to submit their speaking performances. However, the UF tacitly understood that their best three assignments, whether written or oral, would be chosen for grading.

3) The sample of my research study is a small portion of the vaster ocean that I attempted to understand.

4) Due to limited resources, two students had to use one seat and one computer in a single booth. Accommodating themselves (Shamim, Negash, Chuku & Demewoz, 2007) in the available language lab booths, the UF started taking turns at their convenience. Through weekly practice the UF learned to accommodate each other.

5) In my (the UELT's) presence, the university technical support from Networks trained the UF (2013-2014) to record speaking performances through the Audacity software. Even then, some of the learners sent the .aup file instead of the .ogg file in compressed form. In the Audacity software, the .aup format works for unfinished projects to be modified later, whereas the .ogg format compresses finished audio data and offers quality.

6) In order to produce fluency in their interaction, the UF jotted down their utterances to ease their recorded speaking performances, and they sometimes read out their dialogues. I had to compromise on the spontaneity of a talk, discussion, comment or analysis. However, reading their talking points (Dawes, 2013) encouraged the UF to talk to each other. Furthermore, these short and simple main points helped them focus on the topic and showed their commitment to perform linguistically.

7) The present research focuses on oracy, the English speaking skills; it is not traditional content-focused knowledge (O'Reilly & McNamara, 2007) or reproduction of information. However, I, as a rater and assessor, remained mindful of the topics for the speaking performances of the UF.

8) The recorded speaking performances might tempt the UF to repeat their fellow speakers' scripts. However, I alerted the UF against this offense, using the same strategy as for written assignments of the kind.

9) The testing constructs of the rubric are the achievement targets (Stiggins, 2002) for the UF. As a UELT, I could not share the scoring rubric with the UF as thoroughly as was ideal. However, I managed to inform the UF about the main testing constructs of the rubric (See Appendix D, Table 1).

Then, after presenting the research data, I analyzed and interpreted it.

3.6 Presenting data, Analysis and Interpretation

It has been a challenging task to analyze a survey (questionnaire) (Greenfield, 2003); interviews of the UM&A and the UELTs; the evaluation of the English speaking performances of semester 1 and semester 2; and then the comparative study of the two semesters. I have prepared, organized and analyzed the collected data, then reduced it into major themes like teaching, testing, and grading in relation to the research objectives (see 1.4) which form the basis of this study. The data of the research is presented in tables and figures, followed by interpretation and discussion (Creswell, 2012, p. 148).

3.6.1 Process of Triangulation

The mixed method research design (see section 3.2), involving a survey, interviews with two different stakeholders (the UELTs and the UM&A), and the results of the RSPs according to the analytic rubric, required methodological triangulation. I assured the research validity through cross verification. The interviews of the two stakeholders gave me insight into two different attitudes and approaches towards the promotion of ESS. 'People's attitudes are tilted favorably toward particular languages due to various historical and geopolitical reasons' (Canagarajah & Ashraf, 2013, p. 264). Most of the UM&A belonged to the Engineering block, whereas the UELTs belonged to the English department, teaching in engineering departments. Some of the UELTs had taught at language and linguistics universities as well, which lent a richer perspective to the core issue (see section 4.3.1.1), i.e., testing leading to grading and the weightage of the assessment of ESS within the 100% assessment of the English language. Two different perceptions emerged from the data: the UM&A strongly endorsed the university's official policy of speaking English, presuming that the pressure to speak only English could empower the university undergraduates to use the English language; however, the need for testing ESS was also realized (see Table 5.3). The UELTs believed in testing and grading ESS to expedite the learning of the English language, and in an academic contribution that would help the UF and the UELTs display better output than before. Therefore, I tested the consistency of the findings through methodological triangulation of the research instruments.

The current study proposes an equity ratio for English speaking skills in the overall 100% assessment of English language (see section 4.3.2.3 for the UM&A; see section 4.3.1.3 for the UELTs), considering HEC policy, Wilkinson’s concept of oracy, the available human resources, and the educational background of university freshmen. The current teaching/learning practices can be inattentive, as they do not practically consider the above conditions. In classrooms, English is used without communicative potential (Manan, 2015, p. iv). Meaningful use of the English language in classrooms can be helpful in learning the language as declared in policy. ‘Data source triangulation is an effort to see if what we are observing and reporting carries the same meaning when found under different circumstances’ (Stake, 1995, p. 113). The research questions guided the triangulation of the data for this study (see section 1.5). Tasks and raters (students and UELTs) could teach ESS through an analytic scoring rubric. Only the important data was deliberately triangulated.

During the course of the research, I asked the UELTs and the UM&A about the importance of ESS, and then reviewed the available literature on the topic. I fathomed the spoken language repertoire (Bygate, 2011) of the UF at the time of joining (see section 3.4.2) and the progress they made throughout their four-and-a-half-year study program (see Table 5.3).

The data is analyzed at two levels. First, a numerical description of the data, produced using Microsoft Excel, is given. This is followed by commentary based on the results obtained from the data and on the arguments presented in chapters 2 and 3. By doing so, the arguments presented in the literature review and the results gathered from the responses received during data collection are linked. Content from the survey and the recorded audio performances is organized and analyzed with Microsoft Excel. The qualitative data collected through interviews with the UM&A and the UELTs is transcribed and tabulated for use in this case study.
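The frequency percentages reported in the tables of chapter 4 were produced in Microsoft Excel; the same calculation can be sketched in a few lines of Python (the survey responses below are hypothetical, for illustration only):

```python
from collections import Counter

# Hypothetical responses to one survey item ("Do you like to talk in English?");
# the real data set of this study was tabulated in Microsoft Excel.
responses = ["yes", "occasionally", "no", "occasionally", "yes", "occasionally"]

counts = Counter(responses)
total = len(responses)

# Percentage of respondents per category, rounded to two decimal places,
# matching the precision used in Tables 4.1 to 4.7.
percentages = {category: round(100 * n / total, 2)
               for category, n in counts.items()}
print(percentages)
```

For the six hypothetical responses above, this yields 33.33% ‘yes’, 50.0% ‘occasionally’, and 16.67% ‘no’.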

For ‘triangulation of measurement’ (Bryman, 2003, p. 130), more than one research instrument (see section 3.2) was used in the current study. The data was analyzed from a research participant's point of view. As I have been in the field of education, particularly English teaching, since 1991, my observation of how English was taught in the past and how it is taught now across Pakistan also forms part of the commentary. This study became possible because of the RSPs as a modern technique to practice ESS, to score analytically according to the testing constructs, and to redefine the levels of the language learners’ English language proficiency and achievements.


3.7 Conclusion

Thus, after the introduction, this chapter on methodology dealt with the research design (see section 3.2) and its justification (see section 3.2.1). The research design uses classroom research (see section 3.2.2) and a case study method (see section 3.2.3); the case study employed a mixed method approach (see section 3.2.4). This chapter explained that, before contemplating a research strategy (see section 3.3), I familiarized myself with the background (see section 3.3.1) of the research participants (see section 3.3.2) of the study. Then, it examined the methods for collecting data (see section 3.4). Data collection covered the time frame of the current research data (see section 3.4.1), the tools of data collection, the in-class survey (see section 3.4.2), and video interviews with English language teachers (see section 3.4.3), inclusive of the UELTs’ teaching practices (see section 3.4.3.1) and their testing techniques (see section 3.4.3.2). Chapter three presented data collection from university management and administration through interviews (see section 3.4.4). Then, under UM&A, the perspective of ORIC (see section 3.4.4.1), the perspective of QEC (see section 3.4.4.2), the perspective of the head of the computer science department (see section 3.4.4.3), the perspective of the dean of deans (see section 3.4.4.4), and the perspective from the office of the vice chancellor (see section 3.4.4.5) were presented to establish the feasibility of this research study. Furthermore, this chapter on methodology provided a rationale for recording the speaking performances of the university freshmen (see section 3.4.5), and offered justification for using near natural recordings (see section 3.4.5.1) instead of naturalistic conversations. After the rationale, this chapter gave an account of semester 1, 2013 (see section 3.4.6), and semester 2, 2014 (see section 3.4.7). Next, it delivered an evaluation of students’ speaking performances (see section 3.4.8), and presented the comparative evaluation of Meaningfulness (semesters 1 & 2) (see section 3.4.9) to show the audience a specimen of the comparative method used to report results in ESS teaching, learning, and testing. Finally, chapter three offered the scope and limitations of the methodology (see section 3.5), leading to the data presentation, analysis and interpretation (see section 3.6) through the process of triangulation (see section 3.6.1), which flows into chapter four.

CHAPTER 4

DATA PRESENTATION, ANALYSIS AND

INTERPRETATION

4.1 Introduction

This chapter presents the three steps of the procedure, namely data analysis, data presentation, and data interpretation, as shown in Illustration 5:

Illustration 5. Steps of Data Analysis

The first step, data analysis, involved examination of the different data sets, i.e., the survey among the UF, the interviews (UELTs and UM&A), and the recorded speaking performances of the university freshmen from Mechatronic Engineering (2013-2014), through methodological triangulation. The researcher collected primary research data through a convenience sampling strategy to gather ‘slices of data at different times and social situations’ from different relevant people to triangulate (Bryman, 2004, p. 2). The main research design (see section 3.2) supports the data analysis for the current study. The second step, data presentation, involved thematic division of the data collected through qualitative (Creswell, 2012; Dornyei, 2007; Greenfield, 2003) and quantitative (Bryman, 2003) research tools. The data was arranged in the form of tables and figures. At the third step, the data was analyzed and interpreted through data triangulation.

This chapter is divided into four sections. This section briefly introduces chapter four. In section 4.2, the analysis of the university freshmen's (UF) survey is presented. Rather than banking on the UF's assumed level of ESS, the quantitative survey method was chosen as a result of ‘design suitability’ (Leech, Onwuegbuzie & Combs, 2011, p. 20). The practical use of English language at the personal level is interpreted in sub section 4.2.1. The practical use of English language at the public level is described in 4.2.2. The practical use of English language at the academic level is analyzed in sub section 4.2.3. Then, the teaching techniques for English oral skills at freshman level are described in 4.2.4. The techniques of testing English oral skills at freshman level are presented in 4.2.5, followed by the practices of testing criteria of English oral skills at college level in sub section 4.2.6. After that, the weightage of oral skills in the overall English assessment at college level is interpreted in sub section 4.2.7. The conclusion of the UF's survey is presented in 4.2.8, signifying the interpretation of background knowledge and practice in ESS at college level to build on at UF level at university, as emphasized for interpretation of the collected data (Leech, Onwuegbuzie & Combs, 2011; Onwuegbuzie & Leech, 2004).

The second category of this chapter presents the interviews in section 4.3, analyzing the content of the interviews of the university English teachers and the university management and administration in the following sub sections. These interviews with two different but converging stakeholders have been analyzed separately. Looking through these two separate but related categories of interviews not only matched ‘design suitability’ but added to ‘design fidelity’ (Leech, Onwuegbuzie & Combs, 2011, p. 20), capturing the ‘meaning, effects, or relationship’ as well. Sub section 4.3.1 offers the analysis of the interviews of the UELTs. Sub section 4.3.1.1 defines the teaching practices of the UELTs. Sub section 4.3.1.2 analyzes the UELTs’ evolving sets of individualistic criteria. Sub section 4.3.1.3 unfolds the estimated weightage of ESS for the UELTs. Then, sub section 4.3.1.4 concludes the highlights of the UELTs’ interviews, interpreting the iconic knowledge base and teaching practice in adding to the ESS of the UF. Next, the second part of the analysis of interviews, 4.3.2, presents the analysis of the interviews of the UM&A, the university management and administration. Sub section 4.3.2.1 deals with teaching practices and the UM&A’s perspective. Sub section 4.3.2.2 analyzes the UM&A's views on a set of criteria on English speaking skills. Sub section 4.3.2.3 examines the UM&A’s concept of weightage for ESS. Then, 4.3.2.4 concludes the themes from the interviews of the UM&A, demonstrating ‘within-design consistency’, as the role of the UM&A fits the research design in a ‘seamless manner’ (Leech, Onwuegbuzie & Combs, 2011, p. 20).

The fourth category of chapter 4 is the analysis of the UF's recorded speaking performances, which appropriates ‘analytic adequacy’ (Leech et al., 2011), using the strategies ‘to provide possible answers to research questions’. Sub section 4.4.1 interprets the use of the scoring rubric. Sub section 4.4.2 analyzes the speaking performances of semesters 1 and 2. Further sub sections then analyze the evaluation of meaningfulness (semesters 1 & 2) in 4.4.3. Sub section 4.4.4 presents the analysis of the evaluation of grammatical competence (semesters 1 & 2). Sub section 4.4.5 offers the analysis of the evaluation of discourse competence (DC) (semesters 1 & 2). Sub section 4.4.6 analyzes the evaluation of task completion (semesters 1 & 2). Sub section 4.4.7 examines the evaluation of intelligibility (semesters 1 & 2). Then, the findings of the comparative evaluation of semesters 1 and 2 are presented in sub section 4.4.8.

4.2 Analysis of University Freshmen’s (UF) Survey

The university freshmen (UF) were college graduates (grade 12), the college English language learners (CELLs) of the present research. These UF/CELLs had mixed English language competencies (see section 2.1.1). At the personal level, their parents wanted them to talk in the English language to sound educated. English language had a cosmetic build-up (Kelson, Cooke, & Lansky, 1990) in their lives: in family get-togethers, speaking it could make some impression. It was a kind of social pressure (Akram & Ghani, 2013). Moreover, English plays an important role in their education: it is a compulsory subject and a language of knowledge. As such, it exerted an academic pressure. In order to fathom the social and academic pressures to learn English oral skills at college level, I emailed a questionnaire to the UF (2013). After working out the UF's academic standing in ESS through the survey, I developed a number of themes related to the practical use of English language at UF level, i.e., the practical use of English language at the personal, public, and academic levels.

4.2.1 Practical Use of English Language at Personal Level

The statistics obtained from the survey (2013) show that at UF level, some students used the English language at the personal level with their friends (4.16%), parents (5%), and in family get-togethers (3.33%), as compared to the students who did not use English. English holds the status of a foreign language where it is not spoken ‘at home’, in offices, at ‘the station’, ‘the post office’, and ‘even at the airport’. Nonetheless, English has ‘some privilege’ in the classrooms alongside extensive use of the national and local languages (Patil, 2008, p. 230). It is reaffirmed that ‘a considerably small number of students use English for the purpose of communication or interaction at home’ (Manan, 2015, p. 234). It was interesting to note that 41.66% of parents expected their children to talk in English. In addition, more than 24% of parents occasionally expected their children to talk in English. Sometimes parents use the local language at home but feel that another major language should also be spoken (Cook, 2016). In the case of Pakistani parents, it is English.

The following statistics reveal that in spite of more than 40% of students' personal liking, the utility of the English language at the personal level was not enough to encourage the UF to learn it with more enthusiasm than they were doing at that time:


Table 4.1 Language learners’ frequency of speaking English at personal level, in 2013

S. No.  Language learners’ use of English oral skill   Learners’ use %   Occasional use %   Absolutely no practice %
1.      Liking to talk                                 40.83             50.83               7.50
2.      Talk to friends                                 4.16             53.33              40.83
3.      In family get-togethers                         3.33             38.33              58.33
4.      Talk to parents                                 5.00             22.50              72.50
5.      Parents’ talk                                   5.00             21.66              73.33
6.      Parents’ expectation from children to talk     41.66             24.16              34.16

Other than the affirmative (positive) responses to the practical use of English language at the personal level, Table 4.1 shows the occasional practices at the same level. The information in this table emphasizes that occasional use of the English language outweighs the absolute use of the target language. Contrary to previous Pakistani research reporting that the ‘vast majority of children have no exposure to English at home’ (Manan, 2015, p. iv) and that ‘most of the people, especially the lower classes, have little exposure to English at home’ (Haidar, 2016, p. 18), the current study finds that the UF had some exposure to the English language. ‘One reason behind the problem might be the limited exposure of the learners in terms of language input as well as opportunities for output’ (Zulfiqar, 2011, p. 81). ‘A number of forces act together to influence language learning procedures and thereby outcomes in case of English language learning as a foreign or a second language’ (Kanwal, 2016, p. 55). The occasional use of the English language is a force to learn it and use it appropriately. Speaking English occasionally during family get-togethers is like wearing make-up in public to show one's best appearance with some extra effort. Cosmetics build up the physical image of a woman (Kelson, Cooke, & Lansky, 1990); likewise, talking in English, or using English words in native talk, upgrades the appearance of a speaker. Mohanty (2013) rightly tags it as cosmetic Anglicization (as cited in Manan, 2015, p. 178). The language learners talk to their parents in English only formally, on occasions; their parents also only occasionally talk to them in English. But parents' expectation that their children talk in English obviously reflects their desire to see their children converse in the language of power and status. The social pressure exerted via parents (Akram & Ghani, 2013) on the language learners is worth consideration. The expectations of parents, as major stakeholders (Haidar, 2016; Kanwal, 2016) in educating their children, could be catered for in academics.

The difference between the positive and occasional expectations of the parents who want their children (the surveyed UF) to talk in the English language is also important to consider. ‘They are weak in speaking because they do not have practice’ (Manan, 2015, p. 383). The percentage of students occasionally expected to talk in English demands that the students remain prepared to talk in the target language. The UF's/CELLs' lack of practice of ESS at the personal level leaves a vacuum in the use of the target language, demanding that university education perform the responsibilities of college education as well. The percentage of the UF absolutely not liking to talk in the English language is contrasted with the percentage of the UF liking to talk in English in Table 4.1. The UF who periodically like to talk in English, and the UF who surely liked to talk to their friends in English, are compared. In informal environments, i.e., family gatherings, less than 4% of the UF used ESS and more than 38% of the UF/CELLs used ESS from time to time, in comparison with more than 58% of the UF/CELLs who did not use ESS. Statistically, more than 72% of students did not talk to their parents in English, in contrast to 22.50% who occasionally talked to their parents in English; only 5% of students used ESS. From this, I inferred that the UF/CELLs who do not talk to their friends and family in English might have stopped themselves from using and practicing the English language at the informal level, leaving a gap in learning the target language.

4.2.2 Practical Use of English Language at Public Level

The citizens of a nation or community form the public, and language is the mode of communication between the individuals of a community. The English language, being one of the official languages of Pakistan (Cook, 2016; Rahman, 2005; Sultana, 2009), connects the Pakistani people internationally. Being a language of prestige, English has blended into the daily discourse of the Pakistani public. Pakistanis have adjusted the English language into the Pakistani languages Punjabi, Pashto, Sindhi, Balochi, and Urdu (Rahman, 1990), in addition to the varieties of English used by other nations, mainly British and American. Table 4.2 shows that 30% of the UF/CELLs heard English most of the time, probably because of their schooling, family background (Haidar, 2016; Kanwal, 2016), and linguistic traditions. ‘English is almost a first language for the upper class in Pakistan, who have more exposure to English through their schooling (Rahman, 2001)’ (as cited in Haidar, 2016, p. 25). Thus, only a small number of students used the English language at the public level: in public places (2.50%), outside the classrooms (8.33%), and in public dealings (4.16%).

Table 4.2 Frequency of language learners’ practical use of English oral skills at public level in 2013

S. No.  Practical use of English                  Positive use %   Occasional use %   No use %   Silence %
1.      Hearing English most of the time          30.00            40.83              28.33      0.84
2.      Speaking English outside the classrooms    8.33            55.83              35.83      0.01
3.      Speaking English in public dealings        4.16            53.33              42.50      0.01
4.      Talking in English in public places        2.50            55.83              41.66      0.01

Table 4.2 shows that the UF had limited exposure to the English language and constrained usage at the public level. Thirty percent of the UF heard English most of the time, whereas 40.83% occasionally heard the English language and 28.33% did not hear it. Research shows that most learners do not get opportunities to hear and speak English (Patil, 2008, p. 230), whereas listening to a language enhances language competence (Haidar, Farrukh & Dar, 2019; Jabeen, 2013; Zulfiqar, 2011). In language acquisition (see section 2.3), ‘the need of communication’ motivates a second/third/foreign language learner ‘to listen and speak’; the urge for ‘praise, opportunity to receive attention, acceptance and approval’, and the ‘academic desires for achieving high grades, acknowledgement, appreciation and privilege’, are some of the reasons to participate actively in classrooms (Zulfiqar, 2011, p. 70). However, in public dealings, more than 42% of students did not use the English language, perhaps because they moved among the local public, where they could manage their dealings in the native/local or national language. Their native/local or national language sufficed to tackle the causal conditions and locally-situated needs (Canagarajah, 2005). Nevertheless, the language learners’ occasional use of English at the public level (occasional hearing, 40.83%; speaking outside the classroom, 55.83%; speaking English in public dealings, 53.33%; talking in English in public places, 55.83%) was higher than their positive use of English at the public level. From highly formal to informal, from personal to practical, from positive to occasional use, the different frequencies of ‘linguistic capital have value in certain markets’ (Haidar, 2016, p. 31). The occasional use of the English language, in addition to its positive use at the public level, by the UF required linguistic preparedness via schooling, over and above the public pressure.

4.2.3 Practical Use of English Language at Academic Level

In contrast to the practical use of the English language at the personal and public levels, the academic use of English can be observed in Table 4.3:

Table 4.3 Teaching, using & testing of English oral skills academically at freshman level, 2013

S. No.  Use of English oral skill         % of      Teaching of          % of      Testing criteria   % of      Testing of         % tested
        at academic level                 students  oral skills          teachers  of oral skills     students  oral skills
1.      Present projects                  65.83     Taught oral skills   18.33     Told               27.50     Tested             10.00
2.      Sometimes present projects        16.66     Sometimes taught     35.00     Uncertain          16.66     Sometimes tested   29.16
3.      Not using EL to present project   15.83     No teaching          46.66     Not told           55.83     No testing         60.83
4.      Silent                             1.68     Silent                0.01     Silent              0.01     Silent              0.01

The academic utility of the English language is greater than its personal or public utility. Table 4.3 illustrates the balance required between the teaching of oral skills and the testing of those taught skills through the presentation of projects. There is a need to explore ‘the opportunities available’ to the CELLs/UF to learn ESS, which ‘mainly includes course text books, helping materials made available to the learners, exposure to task-based projects (guided or unguided)’, and the in-large-classroom opportunities ‘to perform the roles’ these UF will have ‘to take up being the part of the language events in their immediate academic and social lives’ (Zulfiqar, 2011, p. 105). The table summarizes the frequency of teaching, using, and testing spoken English academically at UF level, and provides the data needed to analyze the need to purposefully teach and test the ESS of the UF.

4.2.4 Teaching Techniques for English Oral Skills at Freshman Level

Some interesting statistics were obtained about the teaching techniques employed to help the UF learn and speak the English language.

Table 4.4 Frequency of English oral skills teaching techniques at freshman level in 2013

S. No.  Oral English teaching techniques used by college   Affirmative   Occasional   Absolutely   Silent
        English language teachers (CELTs)                  % of CELLs    % of CELLs   no %         %
1.      Teachers talked to college English language        55.00         36.66         6.66        2.34
        learners (CELLs)
2.      Teachers expected CELLs to talk                    60.83         35.00         3.33        0.84
3.      CELLs allowed to ask questions in class            87.50          8.33         3.33        0.84
4.      Opportunities given to support statements          65.83         25.00         9.16        0.01
5.      Asked to support statements                        54.16         34.16        10.83        0.85
6.      In-class arguments were appreciated                50.83         39.16         9.16        0.85
7.      Strengths in oral skills appreciated               43.33         34.16        22.50        0.01
8.      Teachers expected CELLs to respond                 43.33         37.50        17.50        1.67
9.      Chances given to share ideas in class              70.00         22.50         6.66        0.84
10.     Incentives given to talk                           25.83         33.33        40.00        0.84
11.     Cooperative Learning Method (CLM) used             47.50         17.50        33.33        1.67
12.     Permission to discuss topics in class              54.16         33.33        11.66        0.85
13.     Motivated to speak English in class                48.33         30.83        20.83        0.01

Teachers play an important role in the language learning process of students. Table 4.4 shows that 55% of the CELTs communicated with the language learners in the target language. According to some research studies, teachers and students seldom used English ‘in their formal or informal classroom transactions’; as a result, the students did ‘not get opportunities to develop their communicative competence in the language’ (Manan, 2015, p. 295). ‘Our students immensely lack in spoken English proficiency’ (Jabeen, 2013, p. 54). However, the statistics of the survey for this study, conducted in Islamabad, Pakistan, showed that more than 60% of the CELTs expected their learners to talk to them in English. The CELTs talked to the CELLs in English to motivate them to reciprocate in the same language, and they expected the CELLs to speak the target language. More than 48% of the CELTs motivated the students. Interacting with the college language learners in English, and then making them realize that they were expected to respond in the same language, was one of the empowering techniques that the CELTs could use. Since English is the language of power and promotion, if the CELLs heard the target language they could have felt motivated to respond in it.

Allowing the CELLs to ask questions is an effective technique to provide talk opportunities to the students. Another study in the Pakistani context shows that the vast majority of students either did not use English or only sometimes used it during their questions; the frequency of English language use was ‘significantly low’ (Manan, 2015, p. 173). During the course of the current study, more than 87% of the CELLs had permission to ask questions in class. This opportunity, if availed, could have contributed to the students’ speaking ability. Besides this, more than 65% of the teachers used the technique of giving the students chances to support their statements in class; availing this opportunity could have enabled them to develop their speaking skills, though it was restrained by time (scant) and class size (large) constraints. Appreciation might have been equivalent to recognition, and recognition could have led to motivation. More than 50% of the college teachers appreciated in-class arguments. More than 43% of the college teachers acknowledged students’ strengths in their oral skills, such as meaningfulness and grammatical competence. Discussion revitalized the learners’ interaction; more than 54% of the CELTs allowed discussion in their classes. Such acknowledgement could have motivated the students to develop arguments, which might have enhanced their speaking skills to the extent they attained.

The CELTs must be acknowledged for their personal, individual efforts: without being standardized by policy or curriculum, they tried to inspire the CELLs to talk in the English language. Education involves a combination of ‘teaching methods, teacher competency and availability of instructional facilities, assessment and options available to teachers’ (Kanwal, 2016, p. 61). Table 4.4 provides information about the stimulating ways of the CELTs, who allowed the CELLs to ask questions in class. The CELLs were inspired to support their statements, as this teaching technique provided them space to speak beyond a proclaimed statement. In Pakistan, classrooms are mostly crowded and students are of mixed ability. Nevertheless, according to Table 4.4, the CELTs appreciated in-class arguments, probably whenever they could. More than 43% of the CELTs squeezed out time to acknowledge the oral strengths of the CELLs, and granted them opportunities to share their ideas wherever they could. More than 25% of the CELTs, most probably through miscellaneous incentives, encouraged the CELLs/UF to communicate in English. More than 47% of the CELTs used the CLM. However, teaching time, large classes, and syllabus constraints could hardly be ignored.

According to Table 4.4, most of the techniques that were not used to teach oral skills appear at a lower percentage than the same techniques positively used to enhance the speaking ability of the CELLs/UF in 2013. The statistics show that teachers’ expectations about students’ output exceeded the input as far as English communication skills were concerned. The CELTs’ expectations contributed to the CELLs’ motivation to develop their ESS. However, ‘improvements in achievement are not necessarily a function of high teacher expectations’ (Wiggan, 2007, p. 322). Forty percent of the CELLs/UF had absolutely no incentives to talk in English. Educational psychologists determine that ‘offering an incentive is a way of sending students the message that attractive consequences will be forthcoming if they engage in the desired behaviour’ (O'Donnell, Reeve & Smith, 2011, p. 168). When teachers promise a reward in order to solicit students’ learning and class participation, they use the reward as an incentive. For more than 33% of the language learners, the cooperative learning method was never used. However, I realize that these teaching techniques alone are not enough to enhance the CELLs’ ESS. The CELLs, as UF at university level, could perform better if they were tested and graded in English speaking skills in their English language examination.

4.2.5 Teaching/Testing of English Oral Skills at College Level (2013)

Table 4.5 Frequency of teaching/testing of English oral skills at freshman level in 2013

                      Affirmative   Occasional   Uncertainty   Absolutely no
                      % of UF       % of UF      % of UF       % of UF
Oral skills taught    18.33         35.00        0.01          46.66
Oral skills tested    10.00         29.16        n/a           60.83


Table 4.5 displays that less than 19% of the CELLs/UF were taught oral skills. When students are taught oral skills, they try to practice the learned skills and are motivated to use them in personal, public, and academic forums. According to this table, the percentage of teaching of English oral skills is low. Moreover, testing complements teaching. Table 4.5 also shows the percentage of participants whose oral skills were tested at college level: more than 60% of students were not tested in English oral skills. ‘The learners neither get an opportunity to practice using English for communicative purpose, nor the books have specific activities to focus on teaching speaking and listening skills’ (Jabeen, 2013, p. 169). Moreover, ‘the focus during English classes has been on reading from textbooks and writing rather than listening and speaking, the two skills both play better role for learning’ (Manan, 2015, p. 313). The affirmative practice of testing oral skills at UF/CELL level was limited to 10 percent, while the occasional practice of testing oral skills was more than 29%. Nonetheless, the affirmative and occasional testing of ESS at UF level demonstrates the CELTs’ discretion to intentionally tap the CELLs'/UF's proficiency in ESS. Probably the nonexistent testing, grading, and weightage of English oral skills in the overall examination system of English language led the majority of the CELTs to zero practices of testing ESS in 2013. I tried to address this gap through the present study.

My experience as a UELT informed me that the testing and grading of oral language skills could further motivate learners to learn the language, as observed by several researchers (Alexander, 2015; Bachman & Palmer, 1996; Bachman, 2004; Cheng, 2008; Hughes, 2001; Kanwal, 2016; Laar, 1998; Lasagabaster, 2011; Pedulla, Abrams, Madaus, Russell, Ramos & Miao, 2003; Shahzad, 2018). If teachers observe a criterion and let the students know it, then, knowing what they are expected to achieve, the students can learn the target language. But I needed to test this assumption.

4.2.6 Practices of Testing Criteria of English Oral Skills at College Level

Table 4.6 shows the practices of grading and using testing criteria for English oral skills at

college level as informed by the students.


Table 4.6 Usage frequency of testing criteria for English oral skills at college level in 2013

S.No.  Use of testing criteria                          Affirmative %  Occasional %  Uncertainty %  Silent/No %
1.     statement (if supported) graded                  26.66          15.00         57.50          0.84
2.     awareness about criteria of testing oral skills  27.50          0.01          16.66          55.83
3.     CELLs’ efforts to achieve that criterion         26.66          27.50         1.68           44.16

Table 4.6 helped me deduce that more than 26% of the CELTs graded the statements of the CELLs (if supported) to generate greater in-class participation. But those grades were not incorporated into the overall assessment of students’ language learning. More than 27% of the CELTs made the CELLs aware of the testing criteria. As can be seen, the CELTs graded the CELLs’ oral skills on their own. An important finding is that, out of the 27.50% CELLs/UF who knew the criteria, 26.66% tried to achieve them. ‘For tasks that require a limited or extended production response, the test takers’ understanding of the criteria for correctness may affect the way they approach the given tasks and hence the way they perform’ (Bachman & Palmer, 1996, p. 189). Occasionally, more than 27% of the CELLs tried to achieve those criteria. I infer from this table that if more students had been told about the testing criteria, a higher number of UFs could have tried to achieve them.

4.2.7 Weightage of Oral Skills in Overall English Assessment at College Level

Table 4.7 Weightage of English oral skills at college/freshman level in 2013

S.No.  Weightage of oral skill in English overall assessment  50%    10%    Uncertain %  Silent %
1.     Freshman awareness                                     12.50  16.66  70.83        0.01

Table 4.7 shows that more than 70% of the CELL/UF were uncertain about the weightage of English oral skills in the overall assessment of the English language examination at freshman level in 2013. Weightage, the assignment of a quota, can play a constructive and motivating role in increasing learning. ‘The weightage of blog discussion was only 10%, it played an important role towards the completion of the entire integrated project since both the oral presentation and the written report to a great extent were based on the blog discussion’ (Bakar & Latif, 2010, p. 120). The weightage of English oral skills plays a vital role in the language learning of the students: the heavier the weightage, the greater the efforts to achieve that level of learning. Clarity about the assessment procedures can proportionately motivate learners to acquire that skill. On the contrary, ambivalence might have left more than 70% of the CELLs confused, not knowing whether to work intently on English oral skills.

4.2.8 Conclusion of University Freshmen’s (UF) Survey

The UF’s survey identified a gap in English language learning at the personal, public, and academic levels. This questionnaire survey formed part of the present research study. The survey informed the researcher that the occasional practices of talking in English at the personal, public, and academic levels were high. These occasional requirements of the UF at three different levels make it mandatory at university level to equip the UF to handle these occasions. To equip the UF with the required proficiency, it was vital to give them practice in tasks so that they could keep functioning linguistically. Language learning is all about practice: the more the learners practice in routine, the better speakers they can become. So, at university level, in order to learn to voice their ideas to enter the market, and to conduct presentations to sell their products, they required training in ESS. Thus, this study deals with the deficiency of oral skills at the academic level to support the personal and public levels of the UF.

Having analyzed the UF’s background knowledge of English speaking skills, I approached the university English language teachers, the most relevant stakeholders for accessing information about the ESS of the UF.

4.3 Analysis of Interviews

The University Management and Administration (UM&A) and the University English Language Teachers (UELTs) are two of the stakeholders in the promotion of English language education. Therefore, they were interviewed. The UELTs are directly involved in the teaching, learning, and testing activities of the UF. Thus, I had a dialogue with them to exchange firsthand knowledge about their experience of pedagogical practices. Then, in the present study, I used qualitative content analysis ‘to explicate the issues of study design and analytical procedures’ (Hsieh & Shannon, 2005, p. 1278).

4.3.1 Analysis of Interviews of the UELTs

At university joining time, 77% of the UELTs encountered UF/CELLs who confronted difficulty in elaborating information, but 22% of the UELTs had a contrary experience. That means the majority of the UELTs encountered below-average UF/CELLs who could only superficially introduce themselves in localized English with jumbled-up time frames. At joining time, these students fluctuated in vocabulary, knowledge, and sentence structure, with missing sociolinguistic competence in English. Few students with language competence had appropriate conventions to interact. The UELTs were cognizant that mature and linguistically fluent speakers also had to be taught to interact at different levels. This information provided me with a starting-point appraisal (Bygate, 2011) of the UF’s ability to communicate. However, the UELTs could not go back 12 years to teach the UF to function linguistically in English. Taking up the students from where they had left off at their college was challenging.

The UELTs had to instruct and regulate the body language, dress, and posture of the UF. These features of the UF were graded. Deficient vocabulary limited the competence of the learners to use an applicable tone. The learners’ lack of education in appropriate interaction made it necessary to teach ESS at university level. At UF level, teaching of oral skills was carried out at a low percentage (see Table 4.3).

The UELTs, like the UM&A, unanimously found improvement in the speaking performances of the graduating students. The first factor was time, the second vocabulary, the third practice, the fourth opportunity, and the fifth exposure. In particular, two types of development could be clearly seen: consequential and inconsequential. To boost speakers’ discourse competence, the UELTs provided the learners with opportunities to participate in group discussions, question-answer sessions, reading aloud, answering questions, and delivering comments.

While consciously teaching oral skills, some UELTs did not code switch, believing that second/third language learners or foreign language learners could acquire the language like their native language. To them, the learners were required to develop awareness about language through discussion and interaction. On the contrary, some UELTs considered code switching an encouraging phenomenon for second language learners to establish interpersonal relationships with the bilingual community (Sert, 2005). Code switching enabled the speakers to maintain fluidity in speech and thus gave the learners confidence in English. I found that some UELTs allowed learners to code switch to a certain extent. If the UELTs aimed to teach ESS and they attained their aim, it was acceptable. Since the UF and the UELTs could connect through their national language, code switching did not cause barriers in intelligibility. The UELTs believed in developing English speaking ability in a speaker-friendly environment where learners interacted with their teachers, peers, seniors, and juniors in the target language. The UELTs and the UM&A emphasized intelligibility in the speech of the UF. The second language learners needed to be understandable and coherent as they spoke.

In addition to this, a higher seat of learning, i.e. a university, accommodates the learners to attend seminars, conferences, and workshops in the lingua franca, providing the exposure needed to produce efficient speakers. For classroom language teaching, the UELTs incorporated additional creative activities to generate the interest of the UF by providing opportunities to enhance their communication skills.

The UELTs insisted on the UF speaking English during class discussions and interactions. Unanimously, they motivated the UF to talk in English to showcase their talent through the international language for the advancement of their careers. However, in a class of 40 plus, it was hard for the UELTs to completely stop the UF from switching to the home language. To counter this, the UELTs regularly talked about the significance of acquiring and using English for communicative purposes. The UELTs used diverse teaching techniques for teaching ESS to the UF (see Section 4.3.2).

4.3.1.1 Teaching Practices of the UELTs

Assuring the UF that they could speak English was fundamental to their confident interaction. The teachers’ encouragement for the UF to enquire and express themselves in English could give them the confidence to use the language in spite of being grammatically deficient. Gradually, the learners could overcome their linguistic deficiencies. Asking questions of their colleagues and teachers, or answering their questions, accorded courage to the UF, but not all the UF could manage such practices due to time and class-size constraints. The UELTs, while teaching ESS, consciously taught grammar. However, they deliberated on the weaker areas of the UF (Interview T1, 5/3/2014). One of the UELTs, with her experience of teaching ESS in a university of languages, stated her teaching practices:

‘When I was teaching (English speaking skills), it was a very different scenario altogether

because it was just teaching of that language. So one hour for grammar, one hour for

speaking, one listening, one reading, one writing. And the courses that were designed, they

were in a way that all core skills were being taught in each class. So reinforcement was

being given in each class. (Moreover), students were asked to speak on different topics and

that was extempore. You would just give them a topic there and then. Give them a minute

to think about it and then ask them to speak on the subject. They were also asked to pass

the page in discussion classes and topic for that participation was also given to them, about

5 minutes or 10 minutes before the discussion time. So, that kind of practice should be

given to the students in the classroom also’ (Interview T8, 4/6/2014).

This teaching technique of giving language learners practice in the classroom activities that they were expected to engage in has been reinforced (Savignon, 1976, as cited in Canale & Swain, 1980). However, it is worthwhile to note that 1) ESS was not the exclusive skill to be taught to the UF from Mechatronic Engineering in the present research study, and 2) it was meagerly graded and weighted in the overall assessment of English language at the researched university.

As informed by the UELTs, to reassure the learners, they facilitated the UF to communicate in the target language by prompting relevant vocabulary and phrases if the learners fell short of them. However, some teachers were against prompts. Nevertheless, the prompts were meant for the facilitation of the learners. They were used in the beginning, when and if required. Once the learners picked up, the UELTs expected them to manage their talk confidently. The UELTs invited silent students into group discussion and active participation, and asked the UF to bounce back a question. Counseling and preparing learners to compere events and functions was another effective strategy to facilitate them to respond to the call of speaking. However, not all the UF could avail themselves of opportunities of the kind. This strategy was effective for somewhat proficient students who were somewhat confident. Depending on the keenness of the learner and the facilitation of the teacher, it helped the UF to a certain degree. Having done this, the UELTs tested the ESS of the UF through project presentations. All the UELTs had their own criteria to assess the ESS of the UF.

4.3.1.2 UELTs’ Evolving Sets of Criteria

To check the presentation skills of the UF, the UELTs had sets of criteria with diverse testing constructs including relevance, tone, voice, and body language. The language teachers had different perceptions and were assessing students according to those individual perceptions. They observed use of phrases, fluency, accuracy, confidence, correct English, errors, grammar, vocabulary, pronunciation, facial expression, and introduction. In short, it appeared to be a combination of everything, sometimes leaving other things out. However, it was not humanly possible to include everything in an evaluation sheet. Assessment criteria needed to be justified; they could not have been based on personal opinion. Evaluation needed to be done scientifically. Some UELTs suggested observing a criterion, a standard set of certain categories, to evaluate the ESS of the UF. An achievable criterion followed for assessment leads to scientific evaluation. The UELTs measured the quality of the UF presentations. Quality of speaking performances could set the UF en route to excellence of expression and communication. A general-purpose language criterion for the evaluation of the performances of the UF was required. Observing a criterion for the assessment of ESS could have been a constructive practice for knowing the overall standing of the learners in a particular construct of speaking ability; both the learners and the UELT could then invest further efforts in that category. Raters could appraise the ESS of the UF after validating/ratifying their performances in relation to the constructs of the criterion. Moreover, observing a criterion for the assessment of ESS implied a greater possibility of following it than not observing one. Linguistic structures could justify the speaking ability of the UF at different levels to further develop English language. Possessing a criterion could accord confidence particularly to new UELTs, and to UELTs across the board, in evaluating English linguistic structures. The speaking competence of the UF could be established through the range of lexical forms and syntactic structures in a scoring rubric.

4.3.1.3 Weightage of ESS for UELTs

Speaking competency multiplies the chances of employability, adding to its significance at the learning stage. The end products of learning are job, salary, and status (Haidar, 2016; Hassan, 2009; Rahman, 2005), and ESS is one of the key requirements. Merely understanding that ESS adds to the marketability of the UF is inadequate. Inadequate linguistic skills have been creating a barrier to academic and economic success (Gray, 1996; Kanwal, 2016; Pecorino & Dozier, 2000; Saville-Troike, 1984). ESS enables the UF to achieve that level. However, that level demands involvement in the learning processes. Realizing the future prospects of the UF, the UELTs liked to assign 20% to 50% value to their ESS, while the UM&A liked to accredit 15% to 50% value to the learners’ ESS. I, as a researcher UELT, having observed the classroom practices of implementing English language policy along with motivating the UF to practice English speaking skills, seek to accredit 50% to ESS at this point in time. Accrediting 50% to ESS in the 100% assessment of English language might expedite the practices of transmission, acquisition, and transformation of participation in English language.


It was important to seek the UF’s commitment to ESS. Allotting reasonable weightage to ESS in the overall assessment could engage the UF to endeavor to develop their relevant competence. The UELTs suggested 20% to 50% weightage for the assessment of ESS. Being a UELT myself, I propose 50% weightage for the evaluation of ESS in the overall evaluation of the four skills in English language.

4.3.1.4 Conclusion of UELTs’ Interviews

To summarize, the UELTs’ encouragement to motivate (Chen, Warden & Chang, 2005) the UF to enquire and express themselves in English could give them the confidence to speak English. Asking questions (Parker & Hess, 2001) of their colleagues and teachers, or answering their questions, accorded courage to the UF. Students were asked to speak extempore on different topics for about 5 to 10 minutes. However, not all the UF could manage such practices due to time and large-class (Shamim, Negash, Chuku & Demewoz, 2007) constraints. The UELTs consciously taught grammar (Du, 2013); however, they deliberated on the weaker areas of the UF. They facilitated the UF to communicate in the target language by prompting (Chamot, 2004; Demuth, 1986) relevant vocabulary and phrases if the learners fell short of them. The UELTs invited silent students into group discussion (Shamim, Negash, Chuku & Demewoz, 2007) and asked the UF to bounce back a question. They counseled the language learners and prepared them to compere events (Zulfiqar, 2011). However, not all the UF could avail themselves of opportunities of the kind (4.3.1.1). As language teachers, they used sets of criteria including diverse testing constructs, that is, relevance, tone, voice, and body language, to measure the learners’ language competencies. They observed grammar, vocabulary, pronunciation, fluency (Brown, Iwashita & McNamara, 2005), use of phrases, accuracy, confidence, correct English, errors, facial expression, and introduction: a combination of everything, sometimes leaving other things out. A general-purpose language criterion for the evaluation of the performances of the UF was required (4.3.1.2). Other than a criterion, it was important to assign reasonable value to ESS. Realizing the future prospects of the UF, the UELTs liked to assign 20% to 50% weightage to the language learners’ ESS in the overall assessment of English language, while the UM&A liked to accredit 15% to 50% value to the learners’ ESS (4.3.1.3).

4.3.2 Analysis of Interviews of University Management/Administration

The routine Pakistani environment does not encourage speaking English at the personal informal level (see Section 4.2.8). However, the UF discerned English as the language of knowledge, science, and technology. Within-semester testing of ESS at university level was one option that the present research study concentrated on. Within this concentration, I tried to revive the key role of university management and administration (UM&A). The university Directorate governed budgets, personnel, and policy, and the UM&A regulated the interests of the students and the UELTs.

I interviewed the UM&A one on one to market the idea of my present research (i.e., reinforcing conscientious teaching of ESS and further strengthening it by testing through an analytic scoring rubric), and to seek their perspectives on enhancing the speaking ability of the UF through teaching and testing of ESS. They were cognizant that the speaking competence of the UF, coming from different streams of schools, varied. The UM&A was interviewed to evaluate the teaching, testing, grading, and measuring practices of English speaking skills of the students. Interviews, as qualitative research tools, help to explain and understand the research participants’ opinions on a research problem. Hence, using ‘a wide range of informants’ through triangulation of data sets, based on the research design of the present study, I used ‘individual viewpoints and experiences’ against each other to verify the results. This process supported the study to construct ‘a rich picture’ of the academic needs and academic behaviors ‘of those under scrutiny’ (Shenton, 2004, p. 66). In this way, the present research study offered better future plans to the UM&A for the success of the students, teachers, university, and other stakeholders.


4.3.2.1 UM&A and University Teaching Practices

English is central to many activities around the world (see Section 1.6). There is no strength in the educational system of Engineering without English and Mathematics. The university has a good English department that, in collaboration with the Engineering departments, might work out the required English language writing and speaking curricula. The students need to present their projects. They need to attain fluency by choosing the right words to communicate a flow of ideas. Probably, people think in terms of the available words and usually do not think outside the world of words. That is why teachers need to increase the vocabulary of the students, although many people also think in terms of pictures.

As far as the teaching practices of the UELTs were concerned, the UM&A stressed the understanding of concepts and maintaining the spirit of inquiry without undermining the importance of language. Logically, when management provides data about the service under study, it ‘may well prove invaluable’, helping to explain their attitude and enhancing the contextual data related to the research site (Shenton, 2004, p. 66). Moreover, for understanding concepts and reviving the inquisitive character of the students, the UELTs needed to create chances for English speaking for all of them.

According to the UM&A, the UF could casually introduce themselves. They were respectful. However, using appropriate etiquette of interaction was challenging for them. They had to be pulled out of their cultural inhibitions and stimulated to interact when required. The UM&A deemed assigning oral tasks and presentations to students as practically important. The qualitative interaction with the UM&A was critical for an ‘audit trail’ to establish the confirmability of the two interviewed data sets; it could allow the course of the current research to be tracked (Shenton, 2004, p. 72). Due to the significance of English at the international forum, the university management fully endorsed teaching English oral skills to the UF from the teachers’ perspective.

According to Vygotsky’s sociocultural theory, learning is a social phenomenon that takes place within the communities in which learners move and interact. As learners participated in different activities, their transforming roles helped them establish understanding about the activities (Dunn & Lantolf, 1998). They learned to function and used language in a metamorphosis of participation (Rogoff, 1994). The present research, without denying the role of transmission of knowledge from experts or acquisition of knowledge by learners, stressed ‘transformation of participation’, which supports learners in removing barriers of reluctance to evolve speaking ability. The vitality of using a lingua franca could not be overemphasized. From school to university, and from university to the market, professional life is a chain linked up with language competence; wherever the linguistic link was weak, the chain broke. The UF were required to receive training in oracy to increase their chances of employability because employers looked for average but well-spoken employees. The thick description in the current case study was an ‘important provision for promoting credibility. It helped to convey the actual situations that have been investigated and, to an extent, the contexts that surround them’ (Shenton, 2004, p. 69). The UM&A considered English proficiency to be the most important factor for selecting a candidate (Khan & Chaudhury, 2012). Having said this, the UF needed to be reminded that English could help them learn better and move better nationally and globally. ‘Language is always in the news, and the nearer a language moves to becoming a global language, the more newsworthy it is’ (Crystal, 2012, p. 1). However, considering learning English a long-term goal and opting for short-term goals to pass core subjects, the UF kept their perceived long-term goal of learning the speaking skill aside. Hence, motivation emphasizing the reasons they should develop English speaking ability, along with deliberate teaching and testing, appeared to be an effective tool to address the problem.

Formally, English was the language of conduct at AU, and the teaching faculty was advised to conform. The university teachers were not expected to switch over to Urdu. To the UM&A, the UF had the basic speaking ability; they needed to be taught conscientiously and to make ESS a base for their professional communication. Teaching ESS consciously was already in practice. In a class of 40 students it was inconceivable to establish the speaking ability of all. However, commitment to excellence demanded sustenance.


4.3.2.2 UM&A and a Set of Criteria on English Speaking Skills

A pragmatic language criterion (Klesmer, 1993) could help the language learners and language teachers to observe and improve deficiencies and expand language proficiencies. At university induction time, most of the learners had limited expression, but they upgraded it as they graduated. Overall, different results were experienced in different departments and different environments under different sets of criteria. At graduation time, 80% proficient learners created a considerable difference for some directors, whereas only 100% proficient speakers could satisfy one head of a department.

The UM&A took the development of criteria as a significant undertaking. Comprehensibility, clarity (Poonpon, 2010), kinesics, confidence (Poonpon, 2010), time lag, comfortable pace, and fluency were emphasized. A practical criterion could ‘identify the existence of a linguistic or academic lag, it could also help determine the magnitude of such a lag’. This implication is crucial for the ‘school environment’ (Klesmer, 1993, p. 1). A pragmatic set of criteria must respect effective communication to a diverse audience. Fluency depends on meaningfulness, grammatical competence, and discourse competence, and the whole package banks on practice and opportunity. The established criterion was supposed to offer productively encouraging and challengingly achievable targets. The UELTs know the standard of the students, the goals of the semester, and the learners’ potential progress. Thus, they could constitute a standard to accommodate mixed-ability (0-10) learners. It was endorsed that constituting criteria should be the language teacher’s and language learners’ mutual responsibility, and the researcher agrees that the UELTs might shoulder this responsibility. The UM&A advised following the criterion, once it had been decided, to do justice to the students. Moreover, relying on the standard norms of the national and international level, developing a locally based criterion by a team that some competent authority could approve was appreciated. The UM&A expected the UELTs to quantify the ESS of the UF collectively, not individually. The varied difference found in the speaking competence of the learners solicited more efforts on the part of the facilitators as well as the facilitated to achieve excellence in the area of speaking.


However, difficulties of evaluation were admitted. Evaluating the UF’s speaking competency could guide the language teachers to reflect more than before on their speaking practices. It was acknowledged that evaluation could reassure learners of the extent of their learning. Furthermore, the learners could observe the proficiency level of their English speaking ability and, from their end, attain an edge over others. Attaining excellence could be a stimulant for the UELTs to design a productively encouraging and challengingly achievable criterion for the language learners, to train themselves and the UF. Thus, the administration deemed scaling the ESS of learners important. However, learning to speak English within the time frame of two semesters under an austere criterion could dismay the second language learners.

The language teachers could deliberately build on the available blocks of speaking competency. The challenges could be addressed by following a suitable criterion for assessing the language learners’ speaking ability. Thus, this research suggested a criterion for grading the ESS of the UF. Having said this, it is important to understand that human beings are not perfect. However, the ‘desire to be perfect’ (Interview 10, 22/4/2014) is crucial. This desire could stimulate the language teachers to design or adopt/adapt the best criteria, and the language learners to train themselves to achieve them. This way the emphasis remains on endeavours toward excellence and perfection in language learning.

A public university Vice Chancellor therefore advised developing a graspable criterion with attainable standards from within the curricula. Speaking English, learners achieve the nucleus of language. Their achievement could be acknowledged by a justified percentage of marks allotted to the assessment of ESS. For valid assessment, the UELTs needed to observe a criterion, and they were supposed to invest the available time in enhancing the competencies of the UF. The reliability of the evaluation of ESS could be ensured through the process of its conduct. The UM&A expected the UELTs to take up the UF from where they had left off in their college life. It was a tall order. However, I dared to meet this challenge in large classes by introducing recorded speaking performances (see 3.4.6) through the present study.


4.3.2.3 UM&A and Weightage for English Speaking Skills

The facilitated needed heterogeneous activities to learn language, and the facilitators needed a pedagogical repository that emerges from multiple learning theorists (Wilson & Peterson, 2006). However, the gradual progress in the ESS of the UF could have been speedier with a few more constructive steps; individually or collectively, the UELTs could work on this project of developing ESS. Promoting the ESS of the UF was conceived as indispensable. The UM&A endorsed the evaluation of the UF’s ESS and recommended further observation to see the difference it made. The UM&A recommended 15% to 50% weightage for English speaking skills.

4.3.2.4 Conclusion of University Management & Administration’s Interviews

To summarize, the UM&A emphasized the understanding of concepts and maintaining the spirit of inquiry without undermining the importance of language. The UELTs needed to create chances for English speaking for all of the students. English was the language of conduct at AU, and the teaching faculty was advised to comply (4.3.2.1). Moreover, a realistic and practical language criterion could help the language learners and the language teachers to observe and revamp deficits and promote language competence (4.3.2.2). The UM&A endorsed the evaluation of the UF’s ESS and recommended further observation to see the difference it made. Hence, the UM&A recommended 15% to 50% weightage for English speaking skills (4.3.2.3).

4.4 Rationale of the UF’s Recorded Speaking Performances

Learning a language to speak solicits opportunities (see Section 2.5.1) and practice (Anderson, 2016; Bresnihan, 1994; English, 2009; Goldenberg, 1991; Jabeen, 2013; Manan, 2015; Nawab, 2012; Park, Anderson & Karimbux, 2016; Riaz, 2012; Shamim, Negash, Chuku & Demewoz, 2007; Swain & Lapkin, 1998). English language speaking is all about learning and practicing: learning through practice. It is exploring linguistic possibilities in tasks (Ahmadian, 2016; Bachman, 2002; Bachman & Palmer, 1996; Breen, 1987; Canale & Swain, 1980; Harmer, 2007; Hughes, 2001; Laar, 1998; Prabhu, 1987; Puppin, 2007; Riaz, Haidar & Hassan, 2019; Swales, 1990; Sweet, Reed, Lentz & Alcaya, 2000; Wilson & Peterson, 2006), activities, and real-life situations. In Pakistan, language teachers have to manage large classes (Aleksandrzak, 2011; Nunan, 2003; Shamim, Negash, Chuku & Demewoz, 2007) of mixed-ability students. The reason for initiating recorded speaking performances of the students as a teaching/learning methodology was to conform to the requirements of language learning in large classes with mixed-ability students, and to cater to personal (see Section 4.2.1), public (see Table 4.2), academic (see Table 4.3), and global pressures (see Section 2.13) for a language-friendly environment. The recorded speaking performances were the ‘technological tools’ and ‘pedagogical instruments’ (Bakar & Latif, 2010). The tool and instrument of RSPs assisted in developing speaking tests: tests that granted the learners practice in speaking, listening to their own speaking, finding the level of their speaking performance, and practicing anew from the level that required more attention and deliberate practice.

The usefulness of the recorded tests is cyclical and recurring. In the current case study, the 'consideration of practicality' and the 'potential consequences' of tests might affect the UELTs' decisions in a semester to accommodate the students' needs. Practicality might prompt the UELTs to reconsider and revise some of the earlier specifications of oral tests (Bachman & Palmer, 1996, pp. 35-36). I guided the UF to record their speaking performances (Kim, 2010), providing them with this opportunity to function linguistically. It was challenging for the UF and the UELT researcher before they became comfortable with this new method of learning, teaching, and testing ESS. The UF could play back, listen to, and redo their assignments before submission if they wished. Furthermore, not only the UELT but the UF themselves could gauge their linguistic strengths and weaknesses through the shared scoring rubric (see Appendix D). A standardized test, if used judiciously, helps teachers distinguish learners' stabilities and instabilities (McMillan, 2000). In a mixed ability class, the UF could have different levels of competence, but it was unlikely for a class of 40 to have either no grammatical competence or perfect grammatical competence. Through the method of RSPs, results could be validated. The UF engaged with ESS 'more frequently and autonomously'; RSPs provided them with 'extra opportunities to acquire the language on their own' (Bakar & Latif, 2010, p. 140). They had varying levels of linguistic competence. There were no closed-ended answers to the speaking competencies of the UF, and no quick fix solution to the existing problems of teaching, learning, speaking, testing and grading of ESS. However, maximum effort on the part of the UELT and the UF could attain possible excellence in ESS in the minimum possible time.

4.4.1 Using Analytic Scoring Rubric

Scores or numbers are the results of tests. These scores are an important part of the measurement process of skills. A scoring rubric is 'a set of scoring guideline' (Perlman, 2003, p. 1). Bachman and Palmer (1996) specified two steps for the scoring method: first, defining criteria to assess the testees' responses; second, determining the procedures to match a score. Analytic scoring rubrics 'generally provide more detailed information that may be more useful in planning and improving instruction and communicating with students'. A good rubric has well defined scale points; providing a complete guideline, it covers 'the range from very poor to excellent performance' (Perlman, 2003, p. 500). Kim's (2010) analytic scoring rubric defines all levels of the five testing constructs. Thus, as a rater I could systematically weigh the learners' language proficiency from the score point 'No' to 'Limited', 'Fair', 'Adequate', 'Good', and 'Excellent' in each of the five testing constructs. This is how oral skills might be taught to the UF without underestimating their skills, inculcating the confidence that 'Limited' control of language manifests the effort they make to lift their skills from 'No' control. Having 'Limited' control of a particular testing construct of the analytic scoring rubric suggests that they might next be able to deliver 'simple ideas' with 'little elaboration', which is the level 'Fair', one step above 'Limited' in the chosen analytic rubric.
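The structure of such a rating record can be pictured as a simple lookup over the six levels and five constructs. The following Python sketch is illustrative only: the construct names are those used in this study, but the function and its shape are an assumption of mine, not Kim's (2010) published rubric, whose full level descriptors appear in Appendix D.

```python
# Illustrative sketch of one analytic scoring record: five testing
# constructs, each rated on the six-point scale used in this study.
# Level labels follow the rubric; everything else is hypothetical.

LEVELS = ["No", "Limited", "Fair", "Adequate", "Good", "Excellent"]  # 0..5

CONSTRUCTS = [
    "meaningfulness",
    "grammatical competence",
    "discourse competence",
    "task completion",
    "intelligibility",
]

def score_performance(ratings):
    """Map numeric ratings (0-5), one per construct, to rubric labels."""
    if set(ratings) != set(CONSTRUCTS):
        raise ValueError("rate all five constructs")
    return {c: LEVELS[ratings[c]] for c in CONSTRUCTS}

# Example: a hypothetical recorded speaking performance
example = score_performance({
    "meaningfulness": 4,
    "grammatical competence": 3,
    "discourse competence": 4,
    "task completion": 5,
    "intelligibility": 4,
})
print(example["meaningfulness"])  # Good
```

A record of this shape makes the gap to the next level explicit, e.g. a 'Limited' rating in one construct points the learner at the 'Fair' descriptor as the next target.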

Understanding a message or a speaking performance facilitates communication. Coherence helps speakers logically structure their ideas and adds to discourse competence (McNamara, 1997). The meaningfulness of the tasks could be sustained through shorter or longer representation according to the speaking ability of the UF. Hence, grammatical competence, intelligibility, and discourse competence, together with meaningfulness and task completion (Kim, 2010), were the testing constructs in the rubric used for scoring the speaking performances of the UF in semesters 1 and 2. The process of language learning swung from shorter sentences to longer sentences and discussions. The UF practiced learning the target language from utterances to dialogues (e.g. A83 B84 Short Dialogue.ogg), discussions (e.g. A220 B221 C222 D223 Group.ogg) to role-plays and dramas, and from short presentations to long presentations. All these tasks and activities had topics, to which the UF were expected to remain relevant (e.g. A155 B156 Intro.ogg did not talk about interests and hobbies as instructed). The pair A155 B156 had 'good' discourse competence but seemed casual in the conduct of the task, which is why they missed including interests/likes and dislikes in their introduction. Sometimes brief performances (e.g. A63 B64 Intro.ogg) were better graded (for they rarely displayed major errors and the speakers could be understood) than long performances (e.g. A159 B160 Intro.ogg) in which the UF either displayed several major errors or occasionally displayed obscure points. Thus, the quantity or length of the speaking performances, though gauged, was not considered in the present study, the first classroom research of its kind here. This leaves an opening for further research on the quantity and length of speaking performances (classroom discussions of the UF).

Using the analytic scoring rubric (Kim, 2010), the researcher UELT could validate the results, given the authenticity and credibility available in the form of RSPs.

4.4.2 Speaking Performances of Semester 1 & 2

The present study hypothesized that the UF develop their ESS if taught and assessed in English courses. Language learning (see section 2.4) and the learning and teaching of English speaking skills (see section 2.5), including language acquisition (see section 2.3), zoomed in on Krashen's hypothesis of comprehensible input and Swain's hypothesis of output. Testing adds to the value of teaching and learning of ESS, stimulating the UF to practice further to gain optimal command of the target language. ESS testing needs to actively resist 'oppression' (Wiggan, 2007). The stakeholders in the education system (university management and administration, university English language teachers, and policy makers) need to support testing so that it emerges as an ally of teaching and strengthens learning. As an experienced UELT, I made certain observations of the in-class speaking practices of the UF. Most of them were shy (Kanwal, 2016; Zulfiqar, 2011), reluctant, or underconfident about participating in discussions, question answer sessions, and presentations. They were apprehensive about speaking wrong English (Jabeen, 2013). They were uneasy that their peers could laugh at them (Alam & Bashir Uddin, 2013). It was an uphill task to motivate the majority of the UF to perform in a 50-minute class. To give space to the reluctant students, for the first time at AU, Islamabad, Pakistan, the UF were asked to submit audio recorded speaking performances for the English Communication Skills course (see Section 3.4.6). Initially, three steps were taken to obtain the speaking performances of the UF: an extra class of one practice hour, a language lab with the Audacity software installed on all computers, and sharing the analytic scoring rubric with the sample population. At the beginning of the first semester, the UF were trained to record their performances through Audacity and familiarized with the scoring rubric.

Moreover, the UF had the incentive to earn grades for class participation (see section 3.4.6). Unlike in some research (Chamberlin, Yasué & Chiang, 2018), grades inspired the UF to interact in class. As class interaction (through recorded speaking performances) increased, the learners created opportunities for themselves to enhance oracy. Grades encouraged the UF to participate somewhat confidently. The response of the UF to their graded class participation led the researcher to continue with her research study. The UF from the same classes were taught oral skills. They were asked to record their assignments in pairs, in groups, and individually. Their recorded responses were then assessed according to the rubric in five categories (see Appendix D). This analytic rubric defines the categories and the levels within the categories, saving the UELTs from narrative evaluations, and showing the UF as well as the UELTs what route to choose to optimize linguistic achievement, in the manner of Vygotsky's zone of proximal development (ZPD).

As the first semester batch was promoted to the second semester, in the Technical Writing class the UF, in groups of 4-5 students, were supposed to present one research article (included in their university customized course compilation) every week. (The UF were divided into groups according to the class roll. Thus, they knew the topic of the article they were supposed to prepare and present; how they prepared for the presentation was their own choice until their turn to present.) The audience (UF) were supposed to record their comments on the presentation and on the ESS of the presenters, their own class fellows. As the UF from the second semester emailed their recorded comments, I as the rater found that those comments ranged from 'incomprehensible' (included in 'No' control of the testing construct) to 'sophisticated ideas' (included in 'Excellent' control of the testing construct) in meaningfulness. The scope for evaluation, as in the first semester, ranged from 'no grammatical control' to 'excellent grammatical control' in grammatical competence. The rubric stretched from 'incoherent' to 'completely coherent' in discourse competence, from 'no understanding of the prompt' to fully addressing the task in task completion, and from lacking intelligibility to 'completely intelligible' in intelligibility.

The speaking performances of semesters 1 and 2 were scored on the same analytic rubric. The result of each speaking performance was entered on a Microsoft Excel sheet (refer to Appendix, Evaluation of Speaking Performance of Semester 1, and Evaluation of Speaking Performance of Semester 2). Rating each recorded oral response of the UF in semester 1 and semester 2 on the six-point scale from no control to excellent control, according to the scoring rubric, made me realize the different sample sizes. Thus, percentages were calculated so that totals could be compared. Then, the collective standing of semester 1 in the five constructs of the rubric, i.e., meaningfulness, grammatical competence, discourse competence, task completion, and intelligibility, was compared quantitatively with the collective standing of semester 2 in the five scales.

4.4.3 Analysis of Evaluation of Meaningfulness (Semester 1 & 2)

The base of the first semester Communication Skills students' speaking performances was social communication (see section 1.6.1). They recorded their responses on topics of their interest. The second semester Technical Writing students, however, recorded their comments on their peers' presentations of research articles compiled in book form for the relevant course (see Section 3.4.8). The UF's increased number of recorded submissions in the second semester demonstrated their independence (these were solo submissions) and active participation, vis-à-vis boosted interest, commitment, and reduced shyness.

The testing constructs, with varied levels of communication, suited the diversity of the nonnative/second/third/foreign language learners, the UF. In addition, because the data came from different sample sizes, percentages were calculated so that totals could be compared, as below:

Table 3.2 Comparative Evaluation of Meaningfulness in Speaking Performance of Semester 1 & 2 (2013-2014)

Semester | 0 No  | 1 Limited | 2 Fair | 3 Adequate | 4 Good | 5 Excellent | Total Performances
1        | 0     | 7         | 13     | 70         | 170    | 32          | 292
1        | 0     | 2.39%     | 4.45%  | 23.97%     | 58.21% | 10.95%      | 100%
2        | 4     | 16        | 49     | 160        | 272    | 61          | 562
2        | 0.71% | 2.84%     | 8.71%  | 28.46%     | 48.39% | 10.85%      | 100%

Table 3.2, on the comparative evaluation of meaningfulness in the speaking performances of semesters 1 and 2 (2013-2014), shows that the recorded oral responses were rated on the meaningfulness rating scale, a six-point scale from the 'No' to the 'Excellent' level. The obtained results were converted into percentages for comparison because the sample sizes and totals of recorded speaking performances differ. I could have used a t-test to compare the two groups of different sizes; however, percentages appeared more accessible to me than a t-test.
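The conversion from raw counts to comparable percentages can be reproduced from the counts in Table 3.2. The Python sketch below is illustrative; the reported figures match truncation (not rounding) to two decimals, and that truncation step is my assumption about how the table values were produced.

```python
import math

# Raw counts of meaningfulness ratings from Table 3.2
# (levels 0..5: No, Limited, Fair, Adequate, Good, Excellent).
semester_1 = [0, 7, 13, 70, 170, 32]    # total 292 performances
semester_2 = [4, 16, 49, 160, 272, 61]  # total 562 performances

def to_percentages(counts):
    """Convert level counts to percentages of the semester total,
    truncated to two decimals (assumed to mirror the reported table)."""
    total = sum(counts)
    return [math.floor(c / total * 10000) / 100 for c in counts]

print(to_percentages(semester_1))  # [0.0, 2.39, 4.45, 23.97, 58.21, 10.95]
print(to_percentages(semester_2))  # [0.71, 2.84, 8.71, 28.46, 48.39, 10.85]
```

Because each list sums to (approximately) 100%, the two semesters can be compared level by level despite the unequal totals of 292 and 562 performances.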

As can be seen, 10.95% of the recorded speaking performances in the first semester were rated excellent, whereas in the second semester 10.85% excellent recorded oral responses were produced. That means the responses of both semesters were close in excellence (see Section 3.4.10). Cummins (2000) drew a distinction between basic interpersonal communicative skills (BICS) and cognitive academic language proficiency (CALP). The distinction refers to the different time periods required by second (third, non-native, foreign) language learners to achieve 'conversational fluency' and 'grade-appropriate academic proficiency' (Cummins, 2003). Thus, the UF in the second semester, in general, needed more time to convey their meaning in a well elaborated manner. In the first semester, 58.21% good responses can be observed, as compared to 48.39% good responses in the second semester. In the second semester, 28.46% adequate responses can be noticed, as compared to 23.97% adequate responses in the first semester. The second semester also produced 8.71% fair speaking performances, as compared to 4.45% fair performances in the first semester. The limited control of meaningfulness in the second semester (2.84%) and in the first semester (2.39%) reflects the endeavours made to create some meaning through utterances throughout the two semesters. A notable result is that the lowest frequency in both semesters was reported at the 'No' control level of the testing construct Meaningfulness.

Fig. 4.1 shows that the spoken responses in the second semester were even less ambiguous than the first semester responses:

Fig. 4.1 Meaningfulness Limited (1) response is generally unclear and extremely hard to understand [bar chart comparing Semester 1 and Semester 2 percentages]

Fig. 4.2 Meaningfulness Limited (3) response delivers extremely simple, limited ideas [bar chart comparing Semester 1 and Semester 2 percentages]

It can be seen that the speaking performances at the Meaningfulness Limited (1) level were negligible in both semesters. Likewise, Fig. 4.2 displays exceedingly simple and limited ideas in the second semester. The extensions of 'Limited' control might be observed in relation to the extensions of 'No' control and the extensions of 'Fair' control of the testing construct of Meaningfulness. However, both semesters submitted the smallest number of RSPs in the Meaningfulness Limited (3) extension.

Fig. 4.2 indicates that an inconsequential number of UF in the first and second semesters produced Meaningfulness Limited (1) (extremely unclear) and Limited (3) (extremely limited) speaking performances.

Semester 2 offered more responses than semester 1 in the Meaningfulness Fair (1) category, displaying obscure points. This might be due to epistemological reasons and more advanced materials:

Fig. 4.3 Meaningfulness Fair (1) response often displays obscure points leaving the listener confused [bar chart comparing Semester 1 and Semester 2 percentages]

Fig. 4.4 Meaningfulness Fair (3) response delivers simple ideas [bar chart comparing Semester 1 and Semester 2 percentages]

The practitioners teaching the UF might understand this ambiguity in the speaking performances. Diverse linguistic ability levels, varying with school system(s) and family background(s), could be detected through the meagre percentages. Responses of semester 1 and semester 2 that delivered simple ideas under the category of Meaningfulness Fair (3) are presented in Fig. 4.4. There is a likelihood that, as language learners try to express themselves in the second/third/nonnative/foreign language, their responses might often show obscure points and might often confuse their listeners. Only continuous deliberate practice helps them convey 'main points'. When they are able to convey their main points, they reach the next level of linguistic proficiency, i.e., the level 'Adequate' in the analytic scoring rubric.

Fig. 4.5 suggests that more than 15% of the speaking performances of both semesters reached Meaningfulness Adequate (1). However, semester 1 responses went beyond 16%, occasionally involving ambiguity. According to Fig. 4.5, more responses of the first semester communicated main points, although those responses occasionally had obscure points.

Fig. 4.5 Meaningfulness Adequate (1) response occasionally displays obscure

points; however, main points are still conveyed

Fig. 4.6 Meaningfulness Adequate (2) response includes some elaboration


Fig. 4.7 Meaningfulness Adequate (3) delivers somewhat simple ideas

On the other hand, Fig. 4.6 shows that the speaking performances of the second semester that carried some elaboration were greater in percentage than the performances of the first semester; semester 2 started elaborating to the degree of Adequate (2) in Meaningfulness. The competency to elaborate could help the UF reduce the ambiguity of their thought presentations. Moreover, the comparison of the speaking performances of the first and second semesters in Fig. 4.7 showed that the performances from the latter semester shared moderately uncomplicated ideas. The difference between the responses of semester 1 and semester 2 is in decimals; however, it was not 0 percent.

A step up from 'Adequate' in the analytic scoring rubric is 'Good', further categorized into Good (1), (2), and (3):


Fig. 4.8 Good (1) response is generally meaningful: in general, what the speaker wants to convey is clear and easy to understand

Fig. 4.9 Good (2) Meaningfulness response is well elaborated

Fig. 4.10 Good (3) Meaningfulness response delivers generally sophisticated ideas

The first semester responses of the UF were generally meaningful, as shown in Fig. 4.8. However, it is important to contemplate Fig. 4.9, which shows that the second semester responses were better explained, at scale point Good (2) under Meaningfulness, than the first semester responses. Well elaborated speaking performances under Meaningfulness Good (2) almost doubled in the second semester, a progress worth noting in Fig. 4.9. Moreover, Fig. 4.10 displays that the second semester responses conveyed more refined ideas than the first semester responses. That only one UF (E237) in semester 1 (see A233 B234 C235 D236 E237 Group.ogg) managed to deliver 'generally sophisticated ideas' was an interesting finding. Kim's (2010) scoring rubric is finely analytic. It separates speaking performances from 'incomprehensible' to 'generally unclear', from 'obscure points' to 'main points' 'still conveyed', and from 'generally meaningful' to 'completely meaningful'. The approach of the scoring rubric is rational, and it is crucial to teach ESS rationally (Haque, 1982); only then does the 21st century learner learn. English speaking skill is hard to learn within semesters through stimulus and response (Demirezen, 1988). However, testing ESS is one stimulant that could draw the UF into the learning space. Semester 2 responses had started maturing.

Moving on to Meaningfulness Excellent (1), Fig. 4.11 presents more comprehensively meaningful speaking performances in semester 1 than in semester 2: more responses of the first semester (8.90%) were thoroughly clear and effortlessly understandable than of the second semester (3.91%).

Fig. 4.11 Excellent (1) response is completely meaningful-what the speaker wants to

convey is completely clear and easy to understand

Fig. 4.12 Excellent (2) Meaningfulness Response is fully elaborated

Fig. 4.13 Excellent (3) Meaningfulness response delivers sophisticated ideas

However, it is edifying to note that more of the second semester responses were fully explained under Excellent (2) in Meaningfulness, as compared to the first semester responses. According to Fig. 4.12, the UF had started getting expository. As can be seen, a smaller percentage of UF qualified at level (2) of excellence in Meaningfulness; however, semester 2 responses presented Excellent (2) competency in meaning making. Furthermore, semester 2 speaking performances at the Excellent (3) level can be seen in Fig. 4.13; semester 2 responses carried more schooled ideas than semester 1 responses.

In Meaningfulness (the first testing construct of the criterion for this research study), from the level of 'No' to 'Excellent', the most noticeable feature was the expository (elaborative) stance and more sophisticated ideas of the semester 2 responses. One of the most conspicuous features of the responses in Meaningfulness was the negligible frequency at the 'No' level, which positively indicated that the UF in the first as well as the second semester produced responses carrying some meaning in English. The second remarkable feature of the


second semester responses was exposition (see Fig. 4.6, Fig. 4.9) and sophistication (see Fig. 4.10) at certain levels of Meaningfulness. The gradually developing elaboration in the recorded speaking performances of semester 2 was minimizing ambiguity in their communication.

The second testing construct of the criterion was Grammatical Competence, which stood for accuracy, complexity, and range (see Appendix D, Table 1). All of the UELTs (100%) observed the grammatical competence of the UF. Grammar was more important than speaking performances in examinations (see 2.5). This research study, however, emphasizes task-based English language teaching, learning, and testing. The testing of ESS was task based and included levels of grammatical competence for gauging the variance in this competency.

4.4.4 Analysis of Evaluation of Grammatical Competence (Semester 1 & 2)

Grammatical competence is the proficiency to generate a variety of linguistic structures and lexical forms. The results of the speaking performances of semesters 1 and 2 were compared to evaluate progress on the grammatical competence scale, a six-point scale (No, Limited, Fair, Adequate, Good, and Excellent).

From semester one to semester two, the speaking performances of the UF did not display 'no grammatical control', i.e., 'No (1)'. In both semesters no response showed 'severely limited or no range and sophistication of grammatical structure and lexical form', i.e., 'No (2)'. However, Fig. 4.14 demonstrated that fewer than 2% of responses in the second semester did not carry sufficient evidence to assess grammatical competence.


Fig. 4.14 Grammatical Competence No (3) response contains not enough evidence

to evaluate

It is worth noting that in semester 2 a negligible percentage of recorded speaking performances (RSPs) did not contain enough evidence to evaluate, that is, the No (3) level of grammatical competence. The rest of the performances had some evidence of grammatical competence to be evaluated. Jabeen (2013, p. 57) defined grammar competence as 'control of basic language structures'.

As can be seen (Fig. 4.15), close to 5% of RSPs from the second semester were difficult to understand:

Fig. 4.15 Grammatical Competence Limited (1) response is almost always

grammatically inaccurate, which causes difficulty in understanding what the

speaker wants to say


In Fig. 4.16, the results of grammatical competence in the extension of Fair (1) were barely positive.

Fig. 4.16 Grammatical Competence Fair (1) response displays several major errors as well as frequent minor errors, causing confusion sometimes

Fig. 4.16 showed that comparatively fewer RSPs from the second semester than from the first displayed several major errors as well as frequent minor errors in the Grammatical Competence Fair (1) category. As a scorer, I could infer that the UF in the second semester reduced the frequency of casual errors.

Fig. 4.17 Grammatical Competence Fair (2) demonstrated a slight increase in a narrow range of syntactic structures in simple sentences in the second semester as compared to the first semester. According to Fig. 4.18, the scale point Fair (3) in Grammatical Competence is slightly higher in the second semester RSPs, referring to the use of simple word forms.


Fig. 4.17 Grammatical Competence Fair (2) response displays a narrow range of

syntactic structures, limited to simple sentences

Fig. 4.18 Grammatical Competence Fair (3) response displays use of simple and

inaccurate lexical form

Modest progress in the form of a narrow range of syntactic structures, limited to simple sentences, is observable in Fig. 4.17. However, this humble progress (see also Figs. 4.16, 4.18, 4.21, 4.22, and 4.27) is better than no progress. Jabeen's research (2013, p. 294) reports that the language learners improved in the 'basic components of spoken language' after her three-month experimental study. According to her, most of the language learners at intermediate level were able to communicate on 'a variety of topics' in 'diverse situations' in simple sentences 'without having full command on the relevant vocabulary, language structure and fluency'. This means that deliberate teaching and testing brings motivating results. In line with the latest research outcomes in the relevant field, there is a need to let English language grow through the required methodology, and to regulate the English language teaching system through policy. Considering the latest research trends, the second language policy needs to


complement the teaching of English speaking skills with the testing of English language skills at the university level. In addition to testing English language skills, granting equal weightage to ESS might be a constructive step in building social capital at the university level. Some lexical form can be tracked in the semester 2 RSPs through Fig. 4.18; however, the use was simple and inconsistent.

Fig. 4.19 Grammatical Competence Adequate (1) response rarely displays major

errors that obscure meaning and a few minor errors but what the speaker wants to

say can be understood

Fig. 4.20 Grammatical Competence Adequate (2) response displays a somewhat

narrow range of syntactic structures; too many simple sentences

Fig. 4.19 shows that the second semester responses attaining the Adequate (1) level rarely demonstrated errors considerable enough to create ambiguity; their meaning could be understood. The grammatical competence of the RSPs of the second semester was slightly lower at the Adequate (1) level than that of the RSPs of the first semester. This scale point overlooked a few minor errors when the talk could be understood. Then, Fig. 4.20 shows that more second


semester responses attained the grammatical competence Adequate (2) level than first semester responses, implying that more performances from the second semester did not have errors. More second semester responses showed simple linguistic structures (without inaccurate lexical forms) than first semester responses, as shown in Fig. 4.21, whereas, according to Fig. 4.22, inaccurate lexical forms occurred in a smaller ratio in the second semester than in responses from the first semester:

Fig. 4.21 Grammatical Competence Adequate (3) response displays somewhat

simple syntactic structures

Fig. 4.22 Grammatical Competence Adequate (4) displays use of somewhat simple

or inaccurate lexical form

[Fig. 4.21 chart: Semester 1 vs Semester 2, 0.00%–2.50%]

[Fig. 4.22 chart: Semester 1 vs Semester 2, 0.00%–4.00%]

Fig. 4.23 Grammatical Competence Good (1) response is generally grammatically accurate without any major errors (e.g., article usage, subject/verb agreement, etc.)

Fig. 4.24 Grammatical Competence Good (2) response displays a relatively wide

range of syntactic structures and lexical form

Fig. 4.25 Grammatical Competence Good (3) response displays relatively complex

syntactic structures and lexical form

In the first semester, the responses of the UF were mostly grammatically precise at score point Good (1) in the testing construct of grammatical competence, as presented in Fig. 4.23. However, in the second semester, the speaking performances of the UF demonstrated a comparatively wide spectrum of linguistic structures and word forms, as shown in Fig. 4.24. Then, Fig. 4.25 demonstrates that in the second semester, the UF used comparatively complex syntactic structures and lexical forms at Good (3) of GC. The difference between Good (1), Good (2), and Good (3) in GC helped the UELT and the UF distinguish between their speaking performances. In Sem-2, the UF displayed a ‘wide range of syntactic structures’, ‘relatively complex syntactic structures’, and lexical forms. Teaching, motivation, testing, environment, practice, and participation brought not a radical but a gradual change in the second semester responses.

[Fig. 4.23 chart: Semester 1 vs Semester 2, 0.00%–60.00%]

[Fig. 4.24 chart: Semester 1 vs Semester 2, 0.00%–15.00%]

[Fig. 4.25 chart: Semester 1 vs Semester 2, 0.00%–2.00%]

Fig. 4.26 shows that more of the first semester speaking performances than of the second semester were grammatically accurate, achieving Grammatical Competence Excellent (1).

Fig. 4.26 Grammatical Competence Excellent (1) response is grammatically accurate

According to Fig. 4.27, the second semester responses attained the scale-point ‘Excellent (2)’, which denotes a wide range of grammatical structures and lexical form.

Fig. 4.27 Grammatical Competence Excellent (2) response displays a wide range of

syntactic structures and lexical form

At scale-point Excellent (3), the responses rose from 0% in the first semester to 1.25% in the second, showing advanced syntactic structures.

[Fig. 4.26 chart: Semester 1 vs Semester 2, 0.00%–6.00%]

[Fig. 4.27 chart: Semester 1 vs Semester 2, 0.00%–4.00%]

The most remarkable feature to be reported on grammatical competence (the second testing construct of the criterion for the present research) is the zero occurrence of ‘no grammatical control’, i.e., ‘No (1)’, and no manifestation of ‘severely limited or no range and sophistication of grammatical structure and lexical form’, i.e., ‘No (2)’, in both semesters within the research time frame. This implies that the RSPs had a certain level of grammatical control and some range of sophistication in linguistic structure and lexical form. In the later semester, fewer RSPs generated noticeable errors than in the previous semester (see Fig. 4.16). The UF’s RSPs in the second semester showed a meagre positive, rather than negative, trend at the Fair (1) level; a slight increase over the first semester could be seen (Fig. 4.17). Moreover, some modest progress in the form of a narrow range of syntactic structures, limited to simple sentences, was observed (Fig. 4.17). Use of simple (though inaccurate) lexical form was slightly higher in the second semester RSPs (Fig. 4.18). However, some lexical form could be tracked in the Sem-2 RSPs (Fig. 4.18); the use of words was simple but inconsistent, and the lexicon was at variance. The second semester responses rarely demonstrated major errors that created ambiguity; the meaning of their utterances could be understood (Fig. 4.19). Then, the second semester responses attained some grammatical competence at the Adequate (2) level (Fig. 4.20). The simple linguistic structures in the RSPs of the second semester were without inaccurate lexical forms (Fig. 4.21), and inaccurate lexical forms occurred in a smaller ratio in the second semester (Fig. 4.22). Moreover, in the second semester, the speaking performances of the UF demonstrated a comparatively wide spectrum of linguistic structures and word forms (Fig. 4.24), and they used comparatively complex syntactic structures and lexical forms at the Good (3) level (Fig. 4.25). The first semester RSPs were comparatively stronger than the second semester RSPs; however, the second semester RSPs showed a ‘wide range of grammatical structures and lexical form’ at the Excellent (2) level (Fig. 4.27). In short, teaching, testing, and grading, along with an autonomous learning environment, practice, motivation, and participation, did not bring a radical change in the second semester responses; however, a gradual change was discernible.

In speaking situations involving a variety of tasks, meaningfulness is the most important feature; it draws on a number of competencies to fulfill users’ needs. Grammatical competence is one of those competencies, not the only competency to focus on for acquiring a second language (Zulfiqar, 2011, p. 1). Patil (2008) acknowledges the services of his teachers for his ‘lexical, phonological and grammatical competence’ (Patil, 2008, p. 229). Nevertheless, teachers need to develop students’ ‘ability to speak appropriately (grammar, vocabulary) and according to particular circumstances (pragmatics)’ (Haidar, 2016, p. 31).

Discourse competence is one of the four subcategories of communicative competence

(Canale & Swain, 1980). It is the third testing construct of the criterion set for the study.

It stands for organization and cohesion (see Appendix D, Table 1). Organization is a design

and pattern in speaking. Cohesion and coherence generate unity and connectivity within a

theme (Celce-Murcia, Dörnyei, & Thurrell, 1995; Halliday & Hasan, 1976; Kim, 2010;

Riggenbach, 2006).

4.4.5. Analysis of Evaluation of Discourse Competence (DC) (Semester 1 & 2)

Discourse competence is the way ideas are connected, and thoughts are organized in

utterances. DC pragmatically helps users to speak appropriately. Teachers must develop students’ ‘sense of linguistic’ and ‘social appropriateness’ before burdening them with ‘doses of grammar’ (Patil, 2008, p. 239). The RSPs of the two semesters were rated on the third scale

of Discourse Competence with six scale-points from ‘No’ to ‘Excellent’.
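The rating procedure described above reduces, computationally, to a tally: each RSP receives a scale-point label on a given construct, and the per-semester percentage at each scale-point is reported. A minimal illustrative sketch (the function name and the sample counts below are hypothetical, not the study's data):

```python
from collections import Counter

# The rubric (Appendix D, Table 1) uses six scale-points, several of which
# carry numbered sub-descriptors; only the top-level labels are modeled here.
SCALE_POINTS = ["No", "Limited", "Fair", "Adequate", "Good", "Excellent"]

def scale_point_percentages(ratings):
    """Convert a list of scale-point labels (one per rated RSP) to percentages."""
    counts = Counter(ratings)
    total = len(ratings)
    return {sp: round(100 * counts.get(sp, 0) / total, 2) for sp in SCALE_POINTS}

# Hypothetical semester-1 ratings for one construct (25 RSPs in all).
sem1 = ["Adequate"] * 12 + ["Good"] * 9 + ["Fair"] * 3 + ["No"] * 1
print(scale_point_percentages(sem1))
```

Repeating the tally per semester yields the paired Semester 1 vs Semester 2 figures compared throughout this section.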

A negligible percentage of incoherent responses can be gauged from Fig. 4.28, and Fig.

4.29 in DC:


Fig. 4.28 Discourse Competence No (1) response is incoherent

Fig. 4.29 Discourse Competence No (3) response contains not enough evidence

to evaluate

Fig. 4.28 on Discourse Competence ‘No (1)’ shows that the percentage of incoherent and disconnected responses reduced in the second semester. None of the responses in either semester demonstrated ‘non-existent organization’; therefore, Discourse Competence No (2) did not apply to either semester. The responses of the UF in semesters 1 and 2 were organized to a certain extent. Fig. 4.29 displays that throughout the two semesters, less than 2% of utterances dropped to the discourse competence ‘No (3)’ level.

[Fig. 4.28 chart: Semester 1 vs Semester 2, 0.00%–1.50%]

[Fig. 4.29 chart: Semester 1 vs Semester 2, 1.34%–1.44%]

Fig. 4.30 Discourse Competence Limited (1) response is generally incoherent

Fig. 4.31 Discourse Competence Limited (2) response displays illogical or

unclear organization, causing great confusion

Fig. 4.32 Discourse Competence Limited (3) response displays attempts to use

cohesive devices, but they are either quite mechanical or inaccurate leaving the

listener confused

Fig. 4.30 shows that throughout the two semesters, less than 2% of UF responses went down to discourse competence Limited (1). Fig. 4.31 on discourse competence Limited (2) conveys that less than 2% of the UF’s RSPs displayed unclear organization of utterances in both semesters. Fig. 4.32 on DC Limited (3) reveals that 1.78% of the UF in the second semester tried to use connectors mechanically.

Fig. 4.33 shows that the UF’s recorded speaking performances were loosely organized at the DC Fair (1) level. However, the percentage of disjointed discourse reduced meagerly in the second semester on scale-point Fair (1); semester two discourse was less disjointed than semester one:

[Fig. 4.30 chart: Semester 1 vs Semester 2, 1.20%–1.70%]

[Fig. 4.31 chart: Semester 1 vs Semester 2, 0.00%–2.00%]

[Fig. 4.32 chart: Semester 1 vs Semester 2, 0.00%–2.00%]

Fig. 4.33 Discourse Competence Fair (1) response is loosely organized, resulting

in generally disjointed discourse

Fig. 4.34 Discourse Competence Fair (2) response often displays illogical or

unclear organization, causing some confusion

Fig. 4.35 Discourse Competence Fair (3) response displays repetitive use of simple

cohesive devices; uses of cohesive devices are not always effective

Fig. 4.34 on DC Fair (2) shows that, on promotion to the second semester, the UF improved fractionally in organizing their utterances. As they might have been trying to sound logical, they were marginally less confusing compared to the first semester. Fig. 4.35 on DC Fair (3) displays that in the second semester, the UF started using simple cohesive devices.

Fig. 4.36 exhibits DC Adequate (1) among the four subcategories of the scale-point Adequate.

[Fig. 4.33 chart: Semester 1 vs Semester 2, 0.00%–4.00%]

[Fig. 4.34 chart: Semester 1 vs Semester 2, 0.00%–4.00%]

[Fig. 4.35 chart: Semester 1 vs Semester 2, 0.00%–4.00%]

Fig. 4.36 Discourse Competence Adequate (1) response is occasionally

incoherent

Fig. 4.37 Discourse Competence Adequate (2) response contains parts that display somewhat illogical or unclear organization; however, as a whole, it is in general logically structured

Fig. 4.36 informs that in the first semester, the UF submitted speaking performances that

were periodically disjointed. However, in the second semester, the percentage of

occasionally incoherent speaking performances decreased. Fig. 4.37 DC Adq (2) reports

that in the second semester, the percentage of illogical organization reduced. It is

interesting to note how ‘loose connection of ideas’ reduces in the RSPs of the UF in

semester-2:

[Fig. 4.36 chart: Semester 1 vs Semester 2, 0.00%–20.00%]

[Fig. 4.37 chart: Semester 1 vs Semester 2, 0.00%–10.00%]

Fig. 4.38 Discourse Competence Adequate (3) at times displays somewhat

loose connection of ideas

Fig. 4.39 Discourse Competence Adequate (4) response displays use of

simple cohesive devices

Fig. 4.38 on DC Adq (3) shows that, in comparison with the first semester RSPs, fewer second semester RSPs fell on this scale-point. The UF in Sem-2 started using simple connectors to create logic in their talk. Fig. 4.39 on DC Adq (4) demonstrates that the second semester performances used simple cohesive devices more than the first semester performances. The chosen scoring rubric tracks the language learners’ journey in learning ESS in a step-wise manner.

Then, DC Good has the following three subcategories.

[Fig. 4.38 chart: Semester 1 vs Semester 2, 0.00%–3.00%]

[Fig. 4.39 chart: Semester 1 vs Semester 2, 0.00%–25.00%]

Fig. 4.40 Discourse Competence Good (1) response is generally coherent

Fig. 4.41 Discourse Competence Good (2) response displays generally logical

structure

Fig. 4.42 Discourse Competence Good (3) response displays good use of

cohesive devices that generally connect ideas smoothly

Fig. 4.40 on DC Good (1) displays that the first semester responses were ‘generally coherent’ as compared to the second semester responses. The second semester responses of the UF show a ‘generally logical structure’ as compared to the first semester responses, as displayed in Fig. 4.41 on DC Good (2). Fig. 4.42 on DC Good (3) exhibits that the second semester performances used cohesive devices effectively as compared to the first semester; the semester two responses linked ideas evenly. In the Sem-2 RSPs, the progression of thought through the UF’s language emerged logically according to the design of the scoring rubric. Without the rubric, it might have been difficult to distinguish the responses finely. As the RSPs were rewindable, the distinctions could be made systematically across the three levels of ‘Good’ in DC.

Scale-point Excellent in DC has three subcategories. DC Excellent (1) did not apply to the Semester-2 responses. However, 2.40% of Semester-1 responses, being ‘completely

[Fig. 4.40 chart: Semester 1 vs Semester 2, 0.00%–25.00%]

[Fig. 4.41 chart: Semester 1 vs Semester 2, 0.00%–20.00%]

[Fig. 4.42 chart: Semester 1 vs Semester 2, 0.00%–20.00%]

coherent’, qualified for Excellent (1). Fig. 4.43 on DC Excellent (2) reveals that in the second semester, more UF submitted logically structured responses with logical openings and closures in the subcategory Excellent (2).

Fig. 4.43 Discourse Competence Excellent (2) response is logically

structured-logical openings and closures; logical development of ideas

None of the first semester performances qualified DC Excellent (3). However, 1.07% of

the second semester responses used ‘logical connectors, a controlling theme, or repetition

of key words’.

Concluding the UF’s performance on discourse competence at different levels: incoherent and disconnected responses reduced in the second semester, showing greater coherence in utterances (Fig. 4.28). It is worthwhile to note that less than 2% of utterances dropped to the discourse competence ‘No (3)’ and ‘Limited (1)’ levels throughout the two semesters (Fig. 4.29 and Fig. 4.30, respectively). The UF grew somewhat more logical in the second semester; their utterances were organized more clearly than the first semester utterances, and their RSPs were less confusing than in the previous semester (Fig. 4.31). As they completed semester one, their discourse competence at the Fair (1) level improved; their communication was rather organized, and less incoherent, in semester two (Fig. 4.33). The RSPs improved in organization and sounded more logical, and were less confusing compared to the first semester (Fig. 4.34, Fair 2). Moreover, the UF started using simple cohesive devices in the second semester (Fig. 4.35, Fair 3). Other than this, the percentage of occasionally

[Fig. 4.43 chart: Semester 1 vs Semester 2, 0.00%–15.00%]

incoherent speaking performances decreased in the second semester (Fig. 4.36, Adq 1). The percentage of illogical organization reduced in the second semester (Fig. 4.37, Adq 2). The tendency of the UF to connect their ideas somewhat loosely decreased in the second semester (Fig. 4.38, Adq 3). The second semester performances used simple cohesive devices more than the first semester performances (Fig. 4.39, Adq 4). The RSPs in the second semester showed a ‘generally logical structure’ compared to the first semester RSPs (Fig. 4.41, Good 2). The second semester RSPs used cohesive devices effectively, linking ideas evenly (Fig. 4.42, Good 3). The second semester UF submitted structured RSPs with logical openings and closures. Unlike the first semester RSPs, 1.07% of the second semester RSPs used valid linkers to harness a theme and create meaning through key words (Fig. 4.43, Excellent 2).

Task completion is the fourth testing construct of the criterion. Task completion means the extent to which a speaker completes a task (see Appendix D, Table 1). Tasks are ‘regarded as a vehicle for assessment’ (Kim, 2010, p. 1) (see section 2.5.1). Tasks have a primary ‘focus on meaning’. The ‘real world tasks’ and ‘pedagogical tasks’ (Ahmadian, 2016, p. 377) grant the learners scope to make meaning through their speaking performances.

4.4.6. Analysis of Evaluation of Task Completion (Semester 1 & 2)

Task Completion (TC), the fourth scale of the rubric considered for the present study, gauged the extent to which a speaker/UF completed the task. The scope of a performed task was determined by the subcategories of six scale-points, from ‘No’ to ‘Excellent’. The two subcategories in scale-point ‘No’ of Task Completion are ‘No (1)’ and ‘No (2)’. TC ‘No (1)’ did not apply to the responses of the first semester; it hardly (0.18%) applied to the second semester.

Fig. 4.44 on TC ‘No (2)’ presents less than 1% of the UF performances in both semesters.

Fig. 4.44 Task Completion No (2) response contains not enough

evidence to evaluate

Task Completion scale-point Limited has two subcategories, Limited (1) and Limited (2). Fig. 4.45 shows that in the second semester, less than 1% of responses brushed past TC Limited (1).

Fig. 4.45 Task Completion Limited (1) response barely addresses the

task

In the secondary category of TC Limited (2), 1.78% of responses from the second semester showed misunderstandings interfering with the completion of tasks (see Appendix D). Important to note is the low percentage of lower-level speakers (from ‘No’ to ‘Limited’ control) at the university freshman level; higher-level speakers (from ‘Adequate’ to ‘Excellent’ control) (see Table 4.8 and the illustration) are greater in percentage.
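The semester-on-semester comparisons that run through this chapter reduce to percentage-point differences per rubric descriptor. A minimal sketch of that comparison (the function name is hypothetical, and the sample figures are illustrative, chosen only to echo magnitudes like the 1.36 percentage-point drop reported later for TC Excellent (1), not taken from the study's tables):

```python
def pp_change(sem1_pct, sem2_pct):
    """Percentage-point change from Semester 1 to Semester 2 per descriptor."""
    return {d: round(sem2_pct[d] - sem1_pct[d], 2) for d in sem1_pct}

# Illustrative Task Completion percentages per sub-descriptor (hypothetical).
sem1 = {"Limited (2)": 2.10, "Fair (1)": 9.80, "Excellent (1)": 6.16}
sem2 = {"Limited (2)": 1.78, "Fair (1)": 9.20, "Excellent (1)": 4.80}
print(pp_change(sem1, sem2))
```

A negative value marks a descriptor whose share of responses fell in the second semester, which is desirable for lower scale-points but not for higher ones.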

[Fig. 4.44 chart: Semester 1 vs Semester 2, 0.00%–1.00%]

[Fig. 4.45 chart: Semester 1 vs Semester 2, 0.00%–4.00%]

Scale-point Fair in Task Completion has three divisions, Fair (1), Fair (2), and Fair (3).

According to Fig. 4.46 on TC Fair (1), more responses from the second semester attained

the relevant subcategory:

Fig. 4.46 Task Completion Fair (1) response insufficiently addresses the task

Fig. 4.47 Task Completion Fair (2) response displays some major

incomprehension/ misunderstanding(s) that interferes with successful task

completion

Fig. 4.47 on TC Fair (2) informs that in the second semester the percentage of speaking

performances with ‘some major incomprehension’ slightly decreased. This means that the UF in Sem-2 comprehended their tasks better than in Sem-1, even though the Sem-2 tasks were more advanced in nature. The first semester responses did not fit TC Fair (3) (see Appendix D); only 0.53% of the UF in the second semester attained it.

Scale-point Adequate in Task Completion has four descriptors, Adequate (1), Adequate

(2), Adequate (3), and Adequate (4).

[Fig. 4.46 chart: Semester 1 vs Semester 2, 8.40%–10.00%]

[Fig. 4.47 chart: Semester 1 vs Semester 2, 2.98%–3.10%]

Fig. 4.48 Task completion Adequate (1) response Semester 1&2 2013-2014

Fig. 4.49 Task completion Adequate (2) response Semester 1&2 2013-2014

Fig. 4.48 on TC Adequate (1) displays that the second semester respondents adequately addressed the tasks as compared to the first semester. Fig. 4.49 on TC Adq (2) shows that in the second semester, there were more responses that completed the task with inconsequential misunderstanding.

[Fig. 4.48 chart: Task Completion Adequate (1), response adequately addresses the task; Semester 1 vs Semester 2, 0%–25%]

[Fig. 4.49 chart: Task Completion Adequate (2), response includes minor misunderstanding(s) that does not interfere with task fulfillment; Semester 1 vs Semester 2, 0%–8%]

[Fig. 4.50 chart: Task Completion Adequate (3), response touches upon all main points, but leaves out details; Semester 1 vs Semester 2, 0%–6%]

[Fig. 4.51 chart: Task Completion Adequate (4), response completely covers one (or two) main points with details, but leaves the rest out; Semester 1 vs Semester 2, 0.00%–1.50%]

Fig. 4.50 Task completion Adequate (3) response Semester 1&2 2013-2014

Fig. 4.51 Task completion Adequate (4) response Semester 1&2 2013-2014

Fig. 4.50 on TC Adq (3) demonstrates that the Sem-2 responses conveyed all major points but did not include details, as compared to Sem-1 responses. Fig. 4.51 on TC Adq (4) exhibits that, in comparison with Sem-1, the Sem-2 responses covered a couple of major points with details.

Scale-point Good in Task Completion had three secondary categories, Good (1), Good (2),

and Good (3).

Fig. 4.52 Task Completion Good (1) response addresses the task well.

Fig. 4.53 Task Completion Good (2) response includes no noticeably

misunderstood points

Fig. 4.54 Task Completion Good (3) response completely covers all main

points with a good amount of details discussed in the prompt

Fig. 4.52 shows the percentages of the UF performing their tasks well. In both semesters, it was a well-deserved achievement. However, the percentage of Sem-2 UF at the TC Good level was divided between two subcategories, i.e., TC Good (1) and Good (2). Fig. 4.52 on TC Good (1) shows that Sem-1 performances addressed the assigned task well. However, Fig.

[Fig. 4.52 chart: Semester 1 vs Semester 2, 0.00%–35.00%]

[Fig. 4.53 chart: Semester 1 vs Semester 2, 0.00%–30.00%]

[Fig. 4.54 chart: Semester 1 vs Semester 2, 0.00%–1.50%]

4.53 on TC Good (2) informs that Sem-2 responses did not include ‘noticeably misunderstood points’. Fig. 4.54 on TC Good (3) reveals that less than 2% of RSPs, with some decimal difference between Sem-1 and Sem-2, covered ‘all main points with a good amount of details in the prompt’.

The scale-point Excellent in TC carried three categories, Excellent (1), Excellent (2), and

Excellent (3).

Fig. 4.55 Task Completion Excellent (1) response fully addresses the task

Fig.4.56 Task Completion Excellent (2) response displays completely accurate

understanding of the prompt without any misunderstood points

Fig. 4.55 on TC Excellent (1) exhibits that 4.80% of the UF managed to attain excellence in the second semester by fully addressing the task; however, this is 1.36% less than the first semester. Then, Fig. 4.56 on TC Excellent (2) shows that 0.89% of responses from Sem-2 achieved the level of Excellent (2). This might seem an insignificant percentage at the onset; however, it implies a possibility for the UF to achieve this mark even when the task was challenging. Only 0.34% of responses from semester one qualified for TC Excellent (3), which completely

[Fig. 4.55 chart: Semester 1 vs Semester 2, 0.00%–7.00%]

[Fig. 4.56 chart: Semester 1 vs Semester 2, 0.00%–2.50%]

covered all main points with complete details discussed in the prompt. However, the UF in the second semester needed to strive further to achieve Excellent (3) in task completion.

Task Completion (TC), the fourth testing construct of the analytic scoring rubric considered for the present research, gauged the extent to which a speaker/UF completed the task. The scope of a performed task was determined by the subcategories of six scale-points, from ‘No’ to ‘Excellent’ (see Appendix D, Table 1).

It is worth noticing that Task Completion ‘No (1)’ hardly (0.18%) applied to the second semester recorded speaking performances. In both semesters, less than 1% of RSPs did not contain enough evidence to evaluate (Fig. 4.44, No 2). Less than 1% of RSPs displayed major incomprehension that interfered with addressing a task in the second semester (Fig. 4.45). Less than 10% of RSPs insufficiently addressed the task (Fig. 4.46, Fair 1). However, the second semester RSPs showed a slight decrease in ‘some major incomprehension’ (Fig. 4.47, Fair 2). Then, the second semester respondents addressed the tasks comparatively adequately (Fig. 4.48, Adq 1). More responses from the second semester than from the first semester completed the task with inconsequential misunderstanding (Fig. 4.49, Adq 2). The Semester-2 responses conveyed all major points without including details (Fig. 4.50, Adq 3). Semester-2 responses covered a couple of major points with essential details (Fig. 4.51, Adq 4). Semester-1 performances addressed the assigned task well (Fig. 4.52, Good 1). However, Semester-2 responses did not include ‘noticeably misunderstood points’ (Fig. 4.53, Good 2). Less than 2% of RSPs, with some decimal difference between semesters 1 and 2, covered main points with useful details in the prompt (Fig. 4.54, Good 3). Overall, 4.80% of the UF managed to attain excellence in the second semester in the testing construct of TC across the three levels of excellence; however, this was 1.36% less than the first semester. Almost 1% of the second semester responses completely addressed the task (Fig. 4.55, Excellent 1). Then, 0.89% of responses from Semester-2 displayed accurate understanding of the prompt (Fig. 4.56, Excellent 2). Barely 0.34% of responses from semester one fully covered main points with ‘complete details discussed in the prompt’ (TC, Excellent 3). The UF in the second semester needed to strive further to achieve Excellent (3) in task completion. Achievement in decimals might appear insignificant at the onset; however, it implied a probability for the UF to achieve this mark even when the task was challenging.

The fifth testing construct of the scoring rubric applied to the assessment of the UF for this

research study is intelligibility. Intelligibility refers to pronunciation and prosodic features

(intonation, rhythm, and pacing) of speech (see Appendix D, Table 1).

4.4.7 Analysis of Evaluation of Intelligibility (Semester 1 & 2)

Intelligibility (INT) comprehensively includes articulation with variation in spoken pitch, stress, rhythm, and pacing. These features add clarity to speaking ability. Intelligibility, or comprehensibility, happens to be one of the many factors involved in effective and successful communication (Jabeen, 2013, p. 156). Aspects of pronunciation, like grammar and vocabulary, need to be mastered for ‘comfortable intelligibility’ (Patil, 2008, p. 235). Patil (2008) further elaborates that Asians are not required to talk like Americans or Australians; however, they are required to be understood by the Americans and the Australians, and by fellow Asians. This is what the developments in the English language as world Englishes, global English, international language, Paklish, Pakistani English, Singaporean or Nigerian English, Hong Kong English, and English as lingua franca might refer to (Hassan, 2004; Holliday, 2005; Joseph, 2004; Rahman, 1990) (see section 2.8).

While evaluating the UF on the testing construct of Intelligibility, I, as a rater, had in mind that the localized varieties of English spoken in the world are referred to as World Englishes (WE). This umbrella term, WE, is used for English in a way that covers all the varieties of the language under the influence of the United States of America (Jenkins, 2006). However, conventions need to be kept for intelligibility (Hassan, 2004; Rahman, 1990). Thus, it was equally crucial to keep the Englishness of English (see section 2.13) for understanding the meaning.

Six scale-points in INT ranged from ‘No’ to ‘Excellent’, with additional descriptors for each scale-point. The two divisions of ‘No’ in the INT scale are No (1) and No (2).

Fig. 4.57 Intelligibility No (1) response completely lacks intelligibility

Fig. 4.57 on INT ‘No (1)’ shows that 0.68% of responses in the first semester completely lacked intelligibility. However, in the second semester, the percentage reduced further; fewer UF responses in the second semester ‘completely lacked intelligibility’. In the first semester, none of the UF responses went down to INT No (2); only 0.18% of responses in the second semester did not have ‘enough evidence to evaluate’ intelligibility. Scale-point Limited in INT has a further five divisions, showing the flexibility in the approach to testing second/third/foreign language learners.

Fig. 4.58 Comparative study of Intelligibility Limited (1) sem-1&2 (2013-

2014)

[Charts comparing Semester 1 and Semester 2: Intelligibility No (1), response completely lacks intelligibility (0.00%–1.00%); Limited (1), response generally lacks intelligibility (0.00%–1.50%); Limited (2), response is generally unclear, choppy, fragmented, or telegraphic (0.00%–0.80%); Limited (5), response requires considerable listener effort (0.00%–1.50%)]

Fig. 4.59 Comparative study of Intelligibility Limited (2), sem-1&2 (2013-2014)

Fig. 4.60 Comparative study of Intelligibility Limited (3), sem-1&2 (2013-2014)

No response was marked Limited (3) on the Intelligibility scale in either semester. Limited

(4) applied to only 0.36% of responses in the second semester; it was 0% in the first. A

positive trend among the UF of semesters 1 and 2 could thus be observed. A total of 5.25%

across the five levels of limited command of intelligibility is traceable over both semesters.

The scale-point Fair in Intelligibility has four descriptors.

Fig. 4.61 Comparison of Intelligibility Fair (1) responses, sem-1&2 (2013-2014)

Fig. 4.62 Comparison of Intelligibility Fair (2) responses, sem-1&2 (2013-2014)

Fig. 4.63 Comparison of Intelligibility Fair (4) responses, sem-1&2 (2013-2014)

Fig. 4.61 on INT Fair (1) shows a reduction in less intelligible responses in the second

semester, indicating slightly better speaking and recording ability of the UF.

[Bar charts comparing Semester 1 and Semester 2 percentages for: Fair (1) ‘response often lacks intelligibility impeding communication’; Fair (2) ‘response frequently exhibits problems with pronunciation, intonation or pacing’; Fair (4) ‘response may require significant listener effort at times’]

Fig. 4.62 shows that INT Fair (2) responses increased in semester-2 as compared to

semester-1. This could have been due to advanced vocabulary, complex grammatical

structures, reading from scripts, or recording noise. The descriptor of Fair (3) could not be

applied to semester-1 responses, but 0.18% of responses in semester-2 did ‘not sustain at a

consistent level throughout’. Fig. 4.63 on INT Fair (4) shows that the percentage of second

semester responses grew higher than that of first semester responses. The recording

environment and the collective practicing time in the language lab sometimes required

considerable listener effort.

The four sub-categories of Adequate in Intelligibility extended from Adequate (1) to Adequate (4).

Fig. 4.64 Comparative study of Sem-1&2 on Intelligibility Adequate (1) in 2013-2014

Fig. 4.65 Comparative study of Sem-1&2 on Intelligibility Adequate (2) in 2013-2014

Fig. 4.64 on INT Adequate (1) displays that the second semester responses fell to half the

percentage of Adequate (1), showing improvement in the intelligibility of speaking

practices. Fig. 4.65 on Adequate (2) exhibits that more semester-1 responses showed some

difficulties with pronunciation, intonation or pacing as compared to semester-2 responses.

[Bar charts comparing Semester 1 and Semester 2 percentages for: Adequate (1) ‘response may lack intelligibility at places impeding communication’; Adequate (2) ‘response exhibits some difficulties with pronunciation, intonation or pacing’]


Fig. 4.66 Comparative study of Sem-1&2 on Intelligibility Adequate (3) in 2013-2014

Fig. 4.67 Comparative study of Sem-1&2 on Intelligibility Adequate (4) in 2013-2014

Comparatively, this problem decreased in the latter semester to almost half the percentage

of the previous one. Fig. 4.66 on INT Adq (3) discerns minor fluidity in semester-2

responses. However, semester-2 responses in the sub-category Adq (4) of INT (Fig. 4.67)

required some listener effort at times.

The scale-point Good in Intelligibility contained three extensions.

[Bar charts comparing Semester 1 and Semester 2 percentages for: Adequate (3) ‘response exhibits some fluidity’; Adequate (4) ‘response may require some listener effort at times’]


Fig. 4.68 Comparative study of Sem-1&2 on Intelligibility Good (1) in 2013-2014

Fig. 4.69 Comparative study of Sem-1&2 on Intelligibility Good (2) in 2013-2014

Fig. 4.70 Comparative study of Sem-1&2 on Intelligibility Good (3) in 2013-2014

Fig. 4.68 on INT Good (1) shows a reduction, in the second semester, in minor problems

with pronunciation or intonation; however, the responses were generally less intelligible

than in the first semester. Fig. 4.69 on INT Good (2) displays that semester-1 responses

were generally clear, fluid and sustained, though the pace varied at times. Apparently, the

UF found it difficult to sustain fluidity in the latter semester. Only 0.71% of RSPs in the

second semester were ‘almost always clear, fluid and sustained’; negligible in general terms

but highly visible in a classroom environment, this percentage nonetheless showed an

upward, positive trend. Fig. 4.70 on INT Good (3) likewise demonstrates improvement in

the second semester responses, which were effortlessly intelligible.

Finally, the scale-point Excellent in INT had three dimensions.

[Bar charts comparing Semester 1 and Semester 2 percentages for: Good (1) ‘response may include minor difficulties with pronunciation or intonation, but generally intelligible’; Good (2) ‘response is generally clear, fluid and sustained; pace may vary at times’; Good (3) ‘response does not require much listener effort’]


Fig. 4.71 Comparative study of Sem-1&2 on Intelligibility Excellent (1) in 2013-2014

Fig. 4.72 Comparative study of Sem-1&2 on Intelligibility Excellent (3) in 2013-2014

Fig. 4.71 on INT Excellent (1) shows that, in spite of accent, semester-1 responses were

completely intelligible. INT Excellent (1) responses were also present in the latter semester,

but at a lower percentage than in the first. The first semester responses could not attain INT

Excellent (2). Fig. 4.72 delineates that the speaking performances in the second semester

increased in the sub-category Excellent (3) of Intelligibility.

Intelligibility (INT) comprehensively included articulation and linguistic features. One

linguistic feature was variation in spoken pitch; another was the focus on important

elements of the spoken message; a third was the regulation of conversational interaction.

In addition, pacing was the speed at which a talk moved; it was also a technique that

determined the appeal of the conversation, talk or discussion for the audience. These

features added clarity to speaking ability.

In sem-1 & sem-2, less than 1% of RSPs completely lacked intelligibility or did not have

‘enough evidence to evaluate’ intelligibility (Fig. 4.57; No 1 and No 2, respectively).

[Bar charts comparing Semester 1 and Semester 2 percentages for: Excellent (1) ‘response is completely intelligible although accent may be there’; Excellent (3) ‘response does not require listener effort’]


Throughout the two semesters, the RSPs did not ‘contain frequent pauses and hesitations’

(INT, Limited 3). Only 0.36% of RSPs from the second semester contained consistent

pronunciation and intonation problems (INT, Limited 4). Positively, only 5.25% could be

traced at the five levels of limited grasp in the testing construct of Intelligibility. In the

second semester, 2.31% of RSPs lacked intelligibility impeding communication (Fig. 4.61,

Fair 1). In the latter semester, 2.31% of RSPs showed ‘problems with pronunciation,

intonation or pacing’ (Fig. 4.62, Fair 2), and 0.18% could ‘not sustain at a consistent level

throughout’ (INT, Fair 3). At times, 5.52% of the second semester RSPs required

significant listener effort (Fig. 4.63, Fair 4). The second semester responses improved in

intelligibility (Fig. 4.64, Adq 1), and showed fewer difficulties with pronunciation,

intonation or pacing as compared to semester-1 responses (Fig. 4.65, Adq 2). Minor fluidity

in semester-2 responses was discerned (Fig. 4.66, Adq 3), and these responses required

some listener effort at times (Fig. 4.67, Adq 4). In the second semester, minor problems

with pronunciation or intonation reduced (Fig. 4.68, Good 1). Semester-1 responses were

generally clear, fluid and sustained (Fig. 4.69, Good 2); apparently, the UF found it difficult

to sustain fluidity in semester-2. However, second semester responses improved at Good 3

(Fig. 4.70).

In spite of accent, semester-1 responses were completely intelligible (Fig. 4.71, Excellent

1); semester-2 RSPs at this level were fewer than in the first semester. In the first semester,

none of the RSPs was ‘almost always clear, fluid and sustained’; however, a generally

negligible number (0.71%) of RSPs in the second semester, though obvious in the

classroom, reached this level (INT, Excellent 2), showing an upward trend. Finally, 12.63%

of recorded speaking performances in the second semester did not require listener effort

(Fig. 4.72, Excellent 3).

After comparing the RSPs of sem-1 and sem-2 on the testing constructs of meaningfulness,

grammatical competence, discourse competence, task completion and intelligibility, it was

instructive to acknowledge the findings of the comparative evaluation of semesters 1 and 2.


4.4.8. Findings of the Comparative Evaluation of Semester 1& 2

It is significant to observe the trends in the comparative evaluation of semesters 1 and 2.

The scale-points ‘No’ and ‘Limited’ communicate high underachievement on the scoring

rubric. Thus, I combined them in Table 4.8 to see the percentage of high underachievement

among the UF in 2013-2014.

Table 4.8 Number of responses at scale-points ‘No’ to ‘Limited’ in semesters 1 & 2

S. No.  Scale-point in scoring rubric   Semester 1 (%)   Semester 2 (%)
1.      INT ‘No’                        0.00             less than 1
2.      INT ‘Limited’                   less than 3      less than 3
3.      GC ‘No’                         less than 1      less than 2
4.      GC ‘Limited’                    less than 3      less than 7
5.      MFN ‘No’                        0.00             less than 1
6.      MFN ‘Limited’                   less than 3      less than 3
7.      DC ‘No’                         less than 3      less than 2
8.      DC ‘Limited’                    less than 4      less than 5
9.      TC ‘No’                         less than 1      less than 1
10.     TC ‘Limited’                    less than 3      less than 3

Table 4.8, analyzing the percentages of speaking performances at the ‘No’ to ‘Limited’

scale-points in semesters 1 and 2, reveals a small percentage of responses at these lowest

scale-points. This trend could be uplifting for the stakeholders. A visual illustration of the

same follows in Illustration 6:


Illustration 6. UF’s No-Limited Control on ESS (2013-2014)

As a researcher UELT, I found it motivating for the administration, management, teachers

and learners to know that less than 4% of the UF’s performances in the first semester, and

less than 7% in the second semester, had ‘No’ to ‘Limited’ control of ESS. The rest of the

speaking performances fell at the ‘Fair’, ‘Adequate’, ‘Good’, or ‘Excellent’ points of the

scoring rubric’s scales. I tabulated the results attained at the scale-points ‘No’ and

‘Limited’ as ‘less than’ because it presented the results at an understandable level. Most

probably, 96% of the UF’s performances in the first semester, and 93% in the second

semester, had ‘Fair’ to ‘Excellent’ control of ESS.

Table 4.9 Number of responses at scale-point ‘Fair’ in semesters 1 & 2

S. No.  Scale-point ‘Fair’ in scoring rubric   Semester-1 (%)   Semester-2 (%)
1.      Meaningfulness ‘Fair’                  less than 5      less than 9
2.      Grammatical Competence ‘Fair’          less than 15     less than 16
3.      Discourse Competence ‘Fair’            less than 8      less than 10
4.      Task Completion ‘Fair’                 less than 8      less than 10
5.      Intelligibility ‘Fair’                 less than 7      less than 11


Table 4.9 reveals that semester-1 responses attained 43% at scale-point ‘Fair’ across the

five categories of the applied scoring rubric, whereas semester-2 responses attained 56%

overall. If the percentage attained by the UF of the first semester (19%) and the second

semester (23%) at the scale-points ‘No’, ‘Limited’, and ‘Fair’ altogether is set aside, the

rest of the percentage, 81% for the first semester and 77% for the second, stretches from

‘Adequate’ to ‘Excellent’.
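These overall ‘Fair’ totals and the remaining ‘Adequate’-to-‘Excellent’ shares can be cross-checked with a short script (a minimal sketch; the upper-bound figures are taken from Table 4.9 and the surrounding text, and the variable names are my own):

```python
# Cross-check the overall 'Fair' totals and the remaining 'Adequate'-to-
# 'Excellent' share reported alongside Tables 4.9 and 4.10.
fair_upper = {  # "less than" upper bounds per construct, from Table 4.9
    1: [5, 15, 8, 8, 7],
    2: [9, 16, 10, 10, 11],
}
no_limited_fair = {1: 19, 2: 23}  # combined 'No' + 'Limited' + 'Fair' share (%)

fair_total = {sem: sum(vals) for sem, vals in fair_upper.items()}
remaining = {sem: 100 - pct for sem, pct in no_limited_fair.items()}

print(fair_total)   # overall 'Fair' upper bounds per semester
print(remaining)    # share stretching from 'Adequate' to 'Excellent'
```

Running the sketch reproduces the figures quoted in the text: 43% and 56% at ‘Fair’, leaving 81% and 77% from ‘Adequate’ to ‘Excellent’.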

Table 4.10 Number of responses at scale-point ‘Adequate’ in the scoring rubric (sem-1&2)

S. No.  Scale-point ‘Adequate’ in scoring rubric   Semester-1 (%)   Semester-2 (%)
1.      Meaningfulness ‘Adequate’                  more than 23     more than 28
2.      Grammatical Competence ‘Adequate’          more than 32     more than 34
3.      Discourse Competence ‘Adequate’            more than 34     more than 32
4.      Task Completion ‘Adequate’                 more than 25     more than 34
5.      Intelligibility ‘Adequate’                 more than 24     more than 27

Table 4.10 reveals the second semester’s strongest standing, i.e. the scale-point ‘Adequate’,

across the five categories of the applied scoring rubric. The scale-point ‘Adequate’ was

acceptable in the scoring rubric. The semester-2 performances were satisfactory in

meaningfulness, grammatical competence, task completion, and intelligibility. For a

clearer understanding of the achievements of sem-1 & 2 at the ‘Adequate’ level, a visual

illustration (Illustration 7) with exact percentages follows:


Illustration 7. Difference at Level ‘Adequate’ SEM 1 & 2

However, the UF of Semester-2 needed to concentrate on their discourse competence.

Table 4.11 Number of responses at scale-point ‘Good’ in semesters 1 & 2

S. No.  Scale-point ‘Good’ in scoring rubric   Semester 1 (%)   Semester 2 (%)
1.      Meaningfulness ‘Good’                  more than 58     more than 48
2.      Grammatical Competence ‘Good’          more than 45     more than 37
3.      Discourse Competence ‘Good’            more than 44     more than 39
4.      Task Completion ‘Good’                 more than 51     more than 42
5.      Intelligibility ‘Good’                 more than 55     more than 40


The scale-point ‘Good’ generally refers to a competent level. Table 4.11 acknowledges

that the first semester respondents’ performances were commendable at this scale-point;

semester-2 had room to improve at this level. However, putting the scale-points ‘Adequate’

and ‘Good’ together makes 76.85% in meaningfulness (see Table 4.12), 71.35% in

grammatical competence (see Table 4.13), 72.6% in discourse competence (see Table

4.14), 77.40% in task completion (see Table 4.15), and 68.15% in intelligibility (see Table

4.16) in the second semester.

Table 4.12 Scale-points ‘Adequate’ and ‘Good’ together in meaningfulness, semesters 1 & 2

Semester   Adequate (3)   Good (4)   Total     Total Performances
1          23.97%         58.21%     82.18%    100%
2          28.46%         48.39%     76.85%    100%

Table 4.13 Scale-points ‘Adequate’ and ‘Good’ together in grammatical competence, semesters 1 & 2

Semester   Adequate (3)   Good (4)   Total     Total Performances
1          32.53%         45.21%     77.74%    100%
2          34.34%         37.01%     71.35%    100%

Table 4.14 Scale-points ‘Adequate’ and ‘Good’ together in discourse competence, semesters 1 & 2

Semester   Adequate (3)   Good (4)   Total     Total Performances
1          34.59%         44.18%     78.77%    100%
2          32.74%         39.86%     72.6%     100%

Table 4.15 Scale-points ‘Adequate’ and ‘Good’ together in task completion, semesters 1 & 2

Semester   Adequate (3)   Good (4)   Total     Total Performances
1          25.00%         51.71%     76.71%    100%
2          34.52%         42.88%     77.40%    100%

Table 4.16 Scale-points ‘Adequate’ and ‘Good’ together in intelligibility, semesters 1 & 2

Semester   Adequate (3)   Good (4)   Total     Total Responses
1          24.32%         55.82%     80.14%    100%
2          27.76%         40.39%     68.15%    100%
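The combined totals in Tables 4.12 to 4.16 are simply the sums of the ‘Adequate’ and ‘Good’ shares per construct; they can be verified with a minimal Python sketch (figures taken from the tables above, names my own):

```python
# Cross-check the 'Adequate' + 'Good' totals reported in Tables 4.12-4.16.
# Percentages per (semester) are (adequate, good) pairs from the tables above.
data = {
    "meaningfulness":         {1: (23.97, 58.21), 2: (28.46, 48.39)},
    "grammatical competence": {1: (32.53, 45.21), 2: (34.34, 37.01)},
    "discourse competence":   {1: (34.59, 44.18), 2: (32.74, 39.86)},
    "task completion":        {1: (25.00, 51.71), 2: (34.52, 42.88)},
    "intelligibility":        {1: (24.32, 55.82), 2: (27.76, 40.39)},
}

totals = {
    construct: {sem: round(adequate + good, 2)
                for sem, (adequate, good) in semesters.items()}
    for construct, semesters in data.items()
}

for construct, sems in totals.items():
    print(f"{construct}: sem-1 {sems[1]}%, sem-2 {sems[2]}%")
```

Each computed total matches the ‘Total’ column of the corresponding table, e.g. 82.18% and 76.85% for meaningfulness.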

Analyzing Tables 4.12 to 4.16, room for improvement remained obvious in the second

semester speaking performances. However, as I observed, when the UF were promoted to

the second semester they accessed advanced materials, lexical items, and syntactic

structures. They delivered presentations on research articles and submitted recorded

comments on those presentations (see Section 3.4.8). They needed to elaborate their points

well, structure their ideas logically, address their tasks well, and communicate clearly in a

sustained manner. The UF in the second semester did so to a certain extent, and their

speaking performances improved at the scale-point ‘Excellent’:

Table 4.17 Achievement of the UF at scale-point ‘Excellent’ of the test constructs

S. No.  Test Construct             Semester-1       Semester-2
1.      Meaningfulness             more than 10%    more than 10%
2.      Grammatical Competence     more than 5%     more than 5%
3.      Discourse Competence       more than 7%     more than 12%
4.      Task Completion            more than 8%     more than 5%
5.      Intelligibility            more than 10%    more than 17%

The analysis of scale-point ‘Excellent’ in Table 4.17 demonstrates that, despite advanced

materials, specific terminology, and complex grammatical structures, a certain percentage

of the UF in the second semester did qualify at the level of Excellence in the relevant testing

constructs. This information revealed potential in the UF.

Language learning is a lifelong process; there is no quick-fix solution. Giving reasonable

weightage to ESS in the overall assessment of English language (the examination system),

providing the UF with opportunities to rehearse language (UELTs), observing a criterion

for testing ESS (UELTs and UF), and identifying gaps in their utterances and spaces to

work on further (UELTs and UF) gave the UF reasons to develop their speaking ability.

My research study is useful for university language teaching practitioners, researching

practitioners (Burns, 2005), and policy-making practitioners. It will help language

teachers, language learners, and administrators understand teaching, learning, and testing,

and thus develop English speaking skills in universities, colleges, schools and language

centers at large. My research will also be helpful for achieving equity of weightage for the

assessment of speaking skills, like English writing skills, in the overall assessment of

English language. In this study I have tried to understand the learners’ perceptions about

English speaking skills and their academic and professional requirements for this

international and official language. I have detailed their language acquisition and learning

experiences to unfold to language teachers, learners and administrators what has been done

and what can be done to further enhance the UF’s global interactive skill, in chapter five

on findings, implications, conclusions, and recommendations.

CHAPTER 5

FINDINGS, IMPLICATIONS, CONCLUSIONS,

RECOMMENDATIONS

This final chapter is divided into four parts, in which I revisit the research questions and

summarize the results and discussions. The first part highlights the UF’s background

knowledge and practice of English speaking skills, which helped in locating the main

problem through gaps in teaching-learning practices (see section 5.1 to sub-sections 5.1.5).

The second part examines the responsibility of the higher seat of learning (see section 5.2

to sub-sections 5.2.5). In the third part (see section 5.3 to sub-sections 5.3.7), the researcher

UELT reflects on the teaching practices, teachers’/raters’ techniques, use of RSPs, and use

of an analytic scoring rubric to test the UF’s speaking performances, and submits the key

findings; part three (5.3) deals extensively with the ‘how’ and ‘why’ of the research

questions. The fourth part makes recommendations about weightage for ESS (see section

5.4) for the stakeholders, including the management, administration, board of governors,

faculty board of studies and the English language teachers. Revisiting the contributions of

the research study (see section 5.4.1), the theoretical underpinnings are highlighted (see

section 5.4.2). Discussing the limitations of the study (see section 5.4.3), the chapter closes

by unfolding the implications and future research prospects (see section 5.4.5).

5.1. Background of the UF

The present study explored the teaching and testing of speaking skills at the UF level of

the Bachelor of Engineering in Mechatronics (BEMTS), 2013 intake. The findings are

made against the backdrop of the HEC Curriculum (English) for Bachelor of Engineering

(revised in 2009), the university freshmen’s education, the university management and

administration’s (UM&A) perspective, the university English language teachers’ (UELTs)

deixis/frame of reference, and the University Curriculum, which expected the UF to

communicate in the English language.

5.1.1. Survey (2013) Based Findings from the UF’s Lens

The UF survey (2013), conducted as a class activity, reported that more than 40% of

students liked to talk in English. Interestingly, the same percentage of students did not talk

to their friends in English. This established that at the informal level the learners did not

use the English language, as Memon (2007), Manan (2015), and Kanwal (2016) had

affirmed. At UF level, the majority of the learners only occasionally talked to their friends

in English. Several studies (e.g., Coleman, 2010; Mansoor, 2003, 2005; Rahman, 2002,

2005; Shamim, 2008; Tamim, 2014) have found that most Pakistani school graduates lack

English language fluency, especially speaking skills, when entering university. The survey

among the UF found that the following percentages of the UF were taught and tested in

ESS at college level:

Table 5.1 Frequency of English speaking skills taught and tested at college level (2013)

Action             Did (%)   Occasionally (%)   Did not (%)
Taught             18.33     35.00              46.66
Tested (projects)  65.83     16.66              15.83


Table 5.1 discloses that most of the UF presented their projects in English; their speaking

performances were tested mostly without oral skills being taught (see Table 4.3). However,

the CLLs were verbally guided, and occasionally practiced oral skills. They had learnt

English as a subject (Kachru, 1990), through the lecture method, not as a language skill

through interaction. Their previous education could not enable them to function in English

(Kanwal, 2016; Memon, 2007; Zulfiqar, 2011). Due to the negligible teaching attention to

ESS, the target of employability for the UF was overshadowed. In the national educational

culture, the testing of English writing skills has overshadowed English oral skills. Hence,

this study found that English speaking needs to be developed academically, with all due

official processing, since the UF are bound to use English at the academic level.

The UF from A levels were generally thought to be better communicators, as they were

more self-assured. However, this was not necessarily the case; the over-confidence of such

students sometimes brought them failure. Examining the ESS of the UF across the board

clarified these general assumptions. A scoring rubric engaged the UF in the exercise of

persistent speaking performances.

5.1.2. English Speaking Practices of the UF at the Joining Time

The survey report found out that the students held short discussions and shared their ideas.

In the process of discussion, whenever they were asked to support their statements they

could manage through code mixing and code switching. However, they could not deal with

situations in English language at UF level (Rabab’ah, 2003). They could not have had much

praxis at college level. Their English speaking skills required for ‘networks of power’

(Ashraf, 2006, p. 209) as ‘signs of wealth’ and ‘signs of authority’ for ‘economic exchange’

(Bourdieu, 1991, p. 503) (see section 1.6) in the long run could neither be regularly tested

nor graded. In the beginning of semester-1, following English speaking practices were

identified:


Table 5.2 Frequency of speaking English with parents, family and friends at college level (2013)

(Talked) to whom        Liked (%)   Occasionally liked (%)   Did not like (%)
Generally               40.83       50.83                    7.50
Friends                 4.16        53.33                    40.83
Parents                 5.00        22.50                    72.50
Parents – Child         5.00        21.66                    73.33
In family               3.33        38.33                    58.33
Outside the classroom   8.33        55.83                    35.83
In public dealings      4.16        53.33                    42.50
At public places        2.50        55.83                    41.66
Teacher – Learner       55.00       36.66                    6.66
Generally heard         30.00       40.83                    28.33
Teacher expected        60.83       35.00                    3.33
Parents expected        41.66       24.16                    34.16

Based on Table 5.2, the UF’s ratio of exercising ESS varied across parents, family, and

friends, and outside the classroom, at public places and in public dealings. The English

language was used as a tool for status, to show a level of sophistication and formality; it

was not used as a means for informal interaction. Their usage of the English language was

occasion-specific. Thus, their English speaking practices resulted in less exercise of the

target language. Language learning progresses with practice; but why should the UF

practice a language that did not add to their academic standing? ESS did not enhance their

grade point average (GPA). Had the UF seen some immediate benefit in using it, they could

have practiced ESS with greater motivation. Their liking for talking in English outside

their classrooms was less than 9 percent; their practice of English oral language was

confined to classrooms at UF level. More than 55% of the UF occasionally liked to talk in

English outside their classrooms at college level (see Table 5.2). However, liking for

English alone could not escalate its use to the level of actual practice.


This study finds that the expectations of teachers and parents had stimulated the UF to talk

in English, but more stimulation was required for better output. The UF did not use the

English language informally; they had limited exposure to the target language and needed

exposure, incentive and practice in ESS. For the UF, teaching, learning, testing and grading

of ESS was vital to develop a serious aptitude for it; then they could adopt the language of

progress with the mutual efforts of the stakeholders in English language education. After

tabulation of the survey, I sought the UELTs’ impression of the UF’s indigenous ESS as

they joined the university.

5.1.3. Reasons for Lesser Practice in ESS at UF Level

When English speaking practice was not outlined in the syllabi, the English language

teachers probably overlooked testing it. When English speaking performances were not

graded, and ESS had no weightage in the overall assessment of English language, the

UELTs as well as the UF/the English language learners most probably left ESS to chance

and choice, liking or disliking. Academic authorities certainly could not afford to ignore,

overlook or leave the learning of as important a language as English to whimsical likes

and dislikes.

5.1.4. UELTs on UF’s Indigenous ESS at Joining Time

The diverse properties found in the UF demanded careful handling on the part of a UELT.

The UELTs encountered mixed-ability UF at their lowest (see section 4.2). Apart from a

few, the UELTs saw below-average UF: the UF were not good at ESS, and the majority

were very poor at it. Some UELTs discovered that the UF could not restate information,

while a few encountered the opposite. More than 33% of UELTs had the impression that

the UF could not expand information; contrary to this, more than 44% of UELTs felt that

the UF could explain an argument. The University English Language Teachers thus had

mixed perceptions about the UF’s speaking ability. The UELTs’ perceptions coincided

with the university automation report on the enrollment of the Bachelor of Engineering in

Mechatronics (BEMTS), 2013 intake: the UF were mostly from Government colleges. The

background knowledge of the UF corresponded with the UELTs’ outset notion. This

contributed to the major cause of the UF’s facing linguistic problems at joining time.

Academic deliberation in the process of teaching is required.

5.1.4.1 UELTs’ Consciously Teaching ESS

One positive finding was that all the UELTs were deliberately teaching ESS to the UF to

cater to the social communication component (see section 1.7). Within a semester, two

weeks were spent on social interaction to deliberately teach ESS. The UF were intentionally

taught to participate in group discussions, question-answer sessions, reading aloud,

answering questions, and making other comments. Through questioning, the students were

provided opportunities to speak; real-life situations, natural and professional, were used to

make them talk. Code-switching was avoided to develop awareness and understanding

through interaction and discussion. Discreetly, they were taught language through role-

play. They were deliberately coached because they were examined on ESS through the

semester presentation. Additional creative activities were included in the curricula to

consciously enhance the UF’s speaking ability. Thus, the teaching of ESS had started.

The semester presentation (see section 1.8) was a one-time performance in a semester, in

a group of four to five students. Some of the UF invested major effort and ample time in

the preparation of these performances. Unfortunately, testing was meagerly done.

Moreover, the earned grades (5%) were diluted in the overall grade point.

5.1.5. The Symbolic Power of ESS on the Pakistani Social Set-up

The UF/language learners who speak the English language fluently earn respect, because

the English language has social value and symbolic power, as Bourdieu (1991) tags

linguistic practice. English adds to speakers’ prestige over others who cannot express

themselves in English, and fluent speakers have the capacity to persuade others. Therefore,

at university level, the UF needed to learn to speak their ideas in order to enter the market

in the long run. One of the UELTs said: ‘Once (the students) graduate and (they) go for a

job, it is (their) performance … based on (their) oral speech. One of my students did

research on the promotion of employees and their capability in English speaking

(promotion and speech competence are interlinked). When the employees are interviewed,

their linguistic competence does count’ (Interview, T7, 28/5/2014). This is how speakers’

competence in the English language could be related to ‘linguistic capital’ and ‘signs of

wealth’ (Bourdieu, 1991, p. 503).

Another UELT said: ‘ESS is very important because if you look at today’s market trends,

students have to give a lot of presentations, they have to go for interviews. Everyone

cannot get a government job, so people have to go to the corporate sector as well. Now in

the corporate sector, whether you like it or not, English happens to be the language for

every kind of business. So they need to have that confidence where they can express

themselves and they can do that with comfort and ease’ (Interview, T8, 4/6/2014). The

‘signs of authority’ and ‘economic exchange’ (Bourdieu, 1991) authenticate this

pronouncement. The UF required training in conducting presentations to sell their product.

For the UF who heard English most of the time, for those who heard the language

periodically, and for those who did not hear it, acquiring the English language was

mandatory. Thus, the mission was to develop ESS at a faster pace than was being achieved

at the time. The university was committed to preparing the UF with the link language to

better connect with their profession and the globe. The benefits of ESS in acquiring all

types of knowledge cannot be over-emphasized; UF with oral communicative competence

might expand their learning.

5.2. Responsibility of a Higher Seat of Learning

A university is a living, evolving and changing seat of learning with a learning climate.

Once a university commits to excellence, it needs to continuously strive towards that merit;

it attaches importance to the idea of improvement. The present case study emphasizes the

promotion of English as an international language for the advancement of its students’

internationality. Universities judge the achievements of their scholars through measures

that focus on grades and tests (Freiberg, 2005, p. 4). At the UF level, the academic utility

of the English language is greater than its personal or public utility in Pakistan. It is the

duty of the educational system to enhance the academic competence of the UF within an

academic programme. Universities’ role as important producers of knowledge has already

been emphasized (Mahmood, 2016). The UF can exchange their research and ideas on

international forums through ESS, and the UM&A’s backing could support the UELTs in

putting in extra effort.

The UELTs might undertake initiative overload (see Table 5.3) to enhance the possibilities of classroom research in the area of oracy, and they deserve praise for their volunteerism. However, owing to their personal and professional commitments, the majority of the UELTs could not take on an extra load for teaching ESS. The aim was to advance the speaking ability of the UF, whether through initiative overload, an ESS-focused short course, or the incorporation of required changes into the running courses. The idea of initiative overload was applauded by 100% of the administrators and was termed a key requirement for establishing learners’ speaking performances. However, a separate study could be conducted on how the UELTs might undertake initiative overload. Command of ESS benefits the UF’s academics and career building, a realization the stakeholders need to make at the earliest.

5.2.1. The University Management and Administration’s Perspective

To address this problem of inadequate ESS, I, in my capacity as a UELT, sought the UM&A’s perspective, in addition to that of the UELTs, on the importance and enhancement of ESS. The managerial and administrative perspective of the UM&A lent a richer dimension in the forms of interdepartmental collaboration, focused ESS courses, and the UELTs’ initiative overload. Their three-dimensional directorial frame of reference was beneficial but challenging. The UM&A was cognizant of the importance of ESS. Table 5.3 displays their cognizance of the enhanced marketability and employability that ESS brings to the UF.


Table 5.3 UM&A Perspective on the benefits of enhancing UF’s ESS

UM&As’ Perspective (UM&A %)
Importance of UF’s ESS: 90.90
ESS adds value to UF’s marketability: 100.00
Interdepartmental collaboration promotes use of ESS: 72.72
Developing a focused ESS course: 63.63
UELTs’ undertaking initiative overload: 100.00
UF’s varied speaking competence at joining time: 72.72
Need to evaluate ESS at UF level: 81.81
UF’s varied speaking difference at graduation time: 81.81
Conscious teaching of ESS: 45.45
Advising teaching faculty to interact in English: 81.81

One of the members of the UM&A informed:

Oral skills are very important. Members of National Business and Educational Council

(NBEC) and the employers who recently visited the AU Business School mentioned the

fluent oral skills of their employees from ULSM that is known to emphasize specific oral

skills. The undergrads, ultimately, the professionals who can interact better with the

colleagues… If they are in business they have to win their customers. If they do not have

the capability of expression they lack a very important requirement. (Interview, 3,

30/4/2014)

Another member of UM&A stated:

English Department can be supported by making sure that English is spoken by most of

the people. Motivating the undergrads that if they want to compete in Computer Science

(CS) with the Indian and the Chinese, one of the edge that they can get is speak(ing) very

fluent English. They should be allowed to speak English even if they speak incorrectly.

(Interview, 7, 1/4/2014)

One more UM&A said:


It (ESS) is one of the very important aspects (of English language). Encouraging the

students or providing them the environment that they have the speaking skills of English

(is crucial) because the entire curriculum and books everything is in English, so in their

presentations or their communications, etc. defending their projects, everywhere they

would go (English is used). (Interview, 4, 5/5/2014)

Yet another UM&A exclaimed:

These days speaking is the most important skill... Quaid-e-Azam was a very eloquent

person and (the interviewee) used to be told that knowledgeable people are very eloquent.

And as time passed, as the years passed, the written word became less important and the

spoken word became more important. (Interview 9, 22/4/2014)

The UM&A was well informed about the implications of developing English oral skills:

Table 5.4 UM&A Perspective on the significant need of English language for the UF

S. No Administration’s reasons for enhancement of English oral skills

1. Professional knowledge

2. All the knowledge is in English

3. A tool to learn

4. To learn better

5. To gain more confidence

6. To be competent

7. Emphasis on growth

8. Traveling

9. Exploration

10. Responsibility

11. Future

12. Marketability

13. To express ideas

14. To practice

15. To become more fluent

16. English speaking teachers

Table 5.4 shows that the UM&A was cognizant of the UF’s need to be enabled to speak English. However, the UF’s exposure to interaction in English was limited to the UELTs and the others who interacted with them in English, if and whenever they did. Upgrading ESS was a requisite for the improved learning, confidence, and competence of the UF, as other studies have also concluded (Alam & Basiruddin, 2013; Ashraf, 2006; Jabeen, 2013; Kanwal, 2016; Nawab, 2012; Patil, 2008; Rabab’ah, 2003; Zulfiqar, 2011). Interacting in English was fundamental for further exploring relevant fields, and it was beneficial for relocation, i.e., more traveling and the handling of greater responsibilities. Enhancing the English speaking capability of the UF meant moving them closer to a brighter future. It could augment their marketability, leading to further growth, as Bourdieu (1991) validated. The UF could add to their verbal expression and become more fluent through further practice. The UM&A was ready to support the UELTs in promoting the ESS of the UF (see sections 3.4.4.1, 3.4.4.2, and 3.4.4.3).

Table 5.5 UM&A’s support to let UELTs enhance ESS of UF

Tasks for UELTs (UM&A support %)
To collaborate with the other departments: more than 72
To develop an ESS-focused course: more than 63
To incorporate required changes within the curricula, to design a compact course: more than 27

Table 5.5 shows the different ways the UM&A could support the UELTs in enhancing the ESS of the UF. More than 72% of the UM&A could discern the benefits, for the UF, of promoting collaboration between the Department of English and the other departments. Table 5.5 helped me choose feasible options from the available UM&A support. In a running semester, the UF are fully committed to their core courses. As a researcher teacher, I chose to undertake initiative overload. It demanded personal commitment and extra time that was hard but manageable for the researcher UELT. The UM&A could not mandate it for the UELTs.

However, the researcher teacher was supported to put in some extra efforts.

More than 63% of the UM&A approved of an ESS-focused course. However, the other departments could not spare credits for yet another course in English language; interdepartmental collaboration required mutual agreements. ESS-focused courses prescribed interchangeable procedures for adding and subtracting features across the Communication Skills (listening, speaking, reading, and writing) and Technical Writing (listening, speaking, reading, writing) courses. Understanding the importance of ESS, the UM&A supported the faculty in devising short courses and in mentoring students to join those courses to enhance their proficiency. However, the already overstressed UF could not afford to join these extra courses; they did not have time to enrol in another ESS-focused language course. In a running semester, the UF were fully committed to their core courses. The UELTs’ initiative overload demanded personal commitment and extra time that was hard for the majority of the UELTs, and the UM&A could not mandate it. The UELTs could, at their own discretion, invest the available time in enhancing the ESS of the language learners/the UF. The UELTs had mammoth responsibilities.

5.2.2. The University Management and Administration’s Recommendations

The university management and administration included the administrative structures that comprised the academic administration, deans, and chairs of the different departments responsible for supervising the university. Some of them were involved in teaching as well, so their perspective could have impacted teaching/learning practices. One of the UM&A stated:

‘The UF are at a beginning of their life and at this stage, any student will have infinite

capacity to absorb knowledge. English is one of them. So if the environment is provided

with the proper things available to them, despite limited teaching they can learn a lot through conscious teaching of oral skills’ (Interview, 1, 18/6/2014).

The UF might acquire language in a language-congenial environment. However, deliberate task-based teaching, useful testing, and grading could have expedited the process for both the UF and the UELTs.

For establishing learners’ speaking ability, constant and combined efforts were recommended. The majority of the administration emphasized developing interdepartmental collaboration to improve the speaking ability of the language learners. It was a common direction and an achievable goal, provided the leadership could appoint noncompetitive representatives from different departments to work on this mega project. The analyses showed that the exchange of recommendations at the interdepartmental level could mutually benefit the different departments of the university, and ultimately the university itself. This type of project would involve time, motivation, the collection of in-context subject-specific vocabulary, and incentives for the relevant departments; fiscal support could then be availed accordingly. A separate research study can be conducted on this topic.

The UM&A endorsed developing a skill-focused course to promote learners’ speaking ability. It could have been a pedagogical activity aimed at uplifting learners’ competencies, and it could have been constructive in practice. To avoid encumbering the language teachers with this extra duty in a running semester, a team of language teachers could be assigned the task in the summer semester. Once designed, the course might be run in the summer semester or in the evening. The researcher’s experience informed her that students could find it hard to join a course in summer; they could be more inclined toward outdoor activities, and they might like to spend their summer with their families. On the contrary, if the course were run in the evening, the UF would not want to attend it in a running semester unless it were made mandatory, and they could be overburdened with an extra oracy-specific course. Thus, considering the credit hours of the degree programs, the best option was to incorporate certain changes within the running English courses.

‘Adaptation is one of ... essential functions of a successful organization’ (Mahmood, 2016,

p.77). One of the UM&A recommended: ‘…freshmen must be given some assignments,

presentations till they develop their oral skills in university’ (Interview, 5, 18/4/2014).

Adopting the recommendation, I gave the UF weekly assignments to submit in the form of recorded speaking performances to strengthen their oral skills. However, my rights as a UELT limited me to inspiring the UF to perform their best, grading their RSPs, and incorporating their best grades into the assignment category of the overall assessment of the English language courses (2013-2014).
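The grading arrangement described above can be sketched in a few lines. The weekly marks and the 10% assignment-category weight used below are hypothetical figures for illustration only, not the course’s actual scheme:

```python
# Hypothetical sketch: fold a UF's best weekly RSP (recorded speaking
# performance) grade into the assignment category of the overall course
# assessment. The weekly marks and the category weight are invented
# figures for illustration only.

def assignment_score(rsp_grades, weight=0.10):
    """Scale the best RSP grade (out of 100) by the assignment
    category's weight in the course total."""
    best = max(rsp_grades)   # only the best grade counts
    return best * weight     # contribution to the course total

weekly_rsps = [62, 75, 71, 80]        # four weekly RSP grades out of 100
print(assignment_score(weekly_rsps))  # 8.0 (out of a 10-mark category)
```

Counting only the best grade rewards improvement across the weekly submissions without penalizing the early, hesitant performances.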

Another UM&A informed:

‘It’s difficult for them (the freshmen) to speak in good English… If they can’t speak in

English that means, they can’t explain their point of view... Everything is done in English’ (Interview, 8, 15/4/2014).

Since English gave confidence to a student, the faculty members could interact with students and ask them questions in English. However, very few of the UF could rephrase, explain, or re-ask a question, or expand on the information. Moreover, since ESS demanded practice and time, the UELTs and the UF needed to invest more effort than they were doing.

A UM&A reiterated:

‘Over the last years... in every class (40 students) 4-5 students are very good, about 10

students are ok, but about 25 students are found weak in English. So the majority (of UF)

is weak and this is not so much (of) their fault, it is because (of) our system of education.

Most of our students are unable to talk. They are not comfortable to express their point of

view in English because they’re not comfortable with English. English is difficult. Most of


engineering students can neither rephrase nor explain. If 5 minutes are given to them to

talk on something or anything of their liking, it is not easy (for them)’ (Interview, 10,

15/5/2014).

5.2.3. ESS Adds Value to UFs’ Marketability

The UM&A were cognizant of the prestige that English speaking skill enjoys nationally

and globally. English is used as a lingua franca across the world.

Table 5.6 UM&A’s perspective on marketability of the UF

UM&A Perspective on ESS Adding to UFs’ Marketability

1. If (the UF) want to make communication and present themselves and their

knowledge, (their speech competence adds) 100% (value to their

marketability).

2. (ESS) gives them an edge in the market.

3. They (Employers) tend to get students who are better in speaking ability. They

(good speakers) were given differential jump in terms of their salary and in

terms of their induction.

4. If they can communicate well, it enhances their marketability. If they are good

in communication, it would bring them success.

5. It all depends (on speech competence).

6. Once you speak in English, they put you in a different grade. (When) you do

not speak in English they put you in a different grade.

7. In computer science, technical writers, documentation/ quality assurance

Personnel… if people want to work for a multinational and they cannot speak

English, they are not getting in… So to be considered by the entire world,

English speaking skills are absolutely necessary.

8. In Pakistan, if one speaks good English and if one can write good English, one

does not have a problem.

9. If they can express themselves in proper English, they will have proper

opportunities for securing good jobs… in the market they can cash their

knowledge easily and will prosper.

10. People assess engineers, bankers or businesspersons, as professional…they

want to know if these professionals can express themselves…if they

cannot…they may be the most right person but the company will not give him

that value. So the employer would want a person who is persuasive and who

knows the right words to use.

11. If they (employers) are asked what sort of candidates they look for… in

majority of the cases they say that they do not want very high grade

students…they are looking for an average student who can speak well.

Table 5.6 shows the UM&A’s awareness that English speaking performances could give their graduates an identity in their fields internationally. English speaking competence was their

source of cognitive social capital (Ashraf, 2006). This kind of awareness strengthens the

reasons to assess and help the UF advance their speaking ability.

5.2.4. Niches in English Language Teaching of the UF

The process of teaching, testing, and grading was discussed with the UELTs and the UM&A with a view to enhancing the ESS of the UF. The teaching of ESS had already started by 2013 within the research site. However, teaching without testing and grading was delaying the process of enhancing the ESS of the UF. The policy makers, the curriculum designers, the Directorate of Examinations and Academics, the UM&A, and the UELTs all have a larger role to play.

Constructive and productive steps need to be taken to prepare the learners to talk and to be proficient in the lingua franca. The UF need more opportunities to participate in class interaction and to gain more confidence in conversing in the English language. They need to perform a variety of tasks and conduct presentations to be able to compete academically. How, then, could the students be expected to compete academically when they were not taught ESS, or, if taught, were not tested to validate their learning?

This research study finds that collaborating UELTs could bring a positive change in the teaching and enhancement of ESS. They could work out the weekly vacant slots in their schedules, use the language lab, and offer extra coaching to give the UF time to rectify their ESS gradually. Apparently, ESS is overlooked, and the responsibility of the UELTs is compounded. Without falling prey to the dilemma of what to do and what not to do, the UELTs had to teach the UF what they could have learnt in their earlier twelve years of English instruction. There was no quick solution to fill the gaps in the English speaking ability of the UF; maximum effort on the part of the UELTs and the UF was required to attain possible excellence in ESS in the minimum possible time.

The UELTs were attentively teaching ESS to the UF. However, they could not provide the UF with adequate opportunities to perform a variety of tasks so as to speak English fluently and accurately (Alam & Basiruddin, 2013). The nucleus of a language is attained only if the students speak in that language. How could the UELTs enable all the UF to talk, and to practice their talk in English, in a large class within a 50-minute period? Only a group or two of the UF could demonstrate ESS through a task or activity at a time in a large class. Such demonstration could hardly provide all the UF with practice of ESS; still, it was better than no practice. In addition to the teaching of ESS, the learning of ESS requires the process of testing to complement the level of learning. Thus, teaching oral skills prompted a need for testing oral skills.

5.2.5. The University Freshmen’s Requirements

The survey (2013) showed that more than 60% of the CELLs’ oral skills were not tested. Reports of zero testing of oral skills ranged from more than 48% to more than 60% of the CELLs. Besides this, more than 55% of the students were not acquainted with the testing criteria, and more than 70% of the CELLs were unsure about the weightage of oral skills in the overall assessment. These loopholes needed to be fixed. The students needed more opportunities to participate in class interaction and to gain more confidence in conversing in the English language. They needed to perform a variety of tasks and conduct presentations to be able to compete academically. The university freshmen needed to be involved in classroom talk that could most likely enhance their speaking ability in the target language.

Linguistic deficiency did not allow the UF to apply ESS. To appropriate ESS, some of the UF read out long written scripts in their speaking performances to make their ideas sound lofty and impressive. However, given that their aim was to speak better, their act of reading out manuscripts can initially be allowed leniently, though observed; their academic endeavors can feasibly be appreciated, and spontaneity in speech can be identified. Speaking a second/third language spontaneously is deliberate hard work (Vygotsky, 1934/1962; Lambert, Genesee, Holobow, & Chartrand, 1993) which demands practice. The UF needed sufficient practice in ESS.

As a UELT, I found a gap between the requirements of ESS at the college level and the demands of ESS at the UF level. Moreover, after graduation, these UF face trouble as fresh employees in different organizations if not trained in ESS academically. Linguistically, the UF needed to be prepared, at the bare minimum, for entry-level jobs. Failing at ESS is most probably a result of the non-equity that exists between literacy and oracy at the academic level, as emphasized by Wilkinson (1970).

5.3 UELT Researcher’s Reflection

As a UELT researcher, I observed that a singular, focused learning objective (writing) was easier to achieve than the multiple objectives (reading, writing, listening, speaking, critical thinking, comprehension, grammar) of an English language learning program in a running semester. Nevertheless, I contemplated the feasibility of incorporating English speaking skill alongside English writing skill in a semester. I gathered that real-life situations could enable the UF to submit an utterance, a comment, or a conversation. The language learners could recount their striking life experiences in the form of RSPs, in contrast to practising the written exercises given in textbooks. However, the UF, while expressing their views on topics of interest, needed to be motivated to meet the challenges of communicating in the English language; they could be stimulated to record their comments. The UF peer reviewed each other’s recorded speaking performances to promote communication, interaction, and rationale. Their critique became productive, though playful at times.


Some self-motivated UF jotted down their talking points (Dawes, 2013); some rerecorded their performances to submit reasonable RSPs. The UF had the option to rehearse and practice scripts before recording their speaking performances, if they wished and could spare the time within the given deadline. Verbal analysis of the RSPs enabled the UF to improve their responses. At times, perseverance impacted their performances, and the committed ELLs (English language learners) had an edge over the non-committed ELLs. Teaching at the UF level, a teacher plays the role of a facilitator who maintains students’ autonomy but may intervene unintentionally to bridge gaps (if any) in a speaking assignment or presentation. RSPs, however, suspended the possibility of the teacher’s intervention: the UF recorded their speaking performances autonomously.

Initially hitting and missing, the concentrated efforts of the UF, alongside the teacher’s facilitation, helped them adjust to the speaking assignments in the form of RSPs. After understanding the purpose (improvement, grades, assessment weightage) of recording their speaking performances, the UF could manage to perform in the first or the second version. The language learners weighed speaking in small groups (Greenfield, 2003) and recording themselves to be a useful part of learning the language. Moreover, small groups of the UF motivated each other to rehearse their RSPs to the extent they could.

As mentioned earlier, this investigation is a case study of the Bachelors of Mechatronics Engineering, first and second semesters (2013-2014), Air University Islamabad. Its findings can benefit other institutions and ELT centers, and ample opportunities exist for future research in this area. This study introduces methods, techniques, and examples in the form of recorded speaking performances for assessing and enhancing oral skills to other centers of language teaching, though not immediately. Most of the research done on the development of oracy has been sponsored and funded (Thompson, 2007; Gardner & Dickins, 1999; Wrigley, 1994), as such projects and case studies are costly. Every individual UELT deals with different personal and professional circumstances, and a UELT’s circumstances influence her/his motivation, enthusiasm, and sense of responsibility. Dynamic UELTs observe how the system can be developed through their research (Hubbard & Power, 1993). A compatible environment is the requisite for the UELTs and the UF to contribute to the development of ESS or any other skill. Without denying the significance of acquisition, the unconscious process of learning language (Krashen, 2003, 1982, 1976), this research offers other academically available options for grasping language. Learning a language is a conscious process of determining the rules of the language through classroom experience; rules of language are learned, not acquired (Mitchell, Myles, & Marsden, 2013). Likewise, testing language is a conscious process, effective and impactful if kept transparent. I have yet to research further to discover a better option for making the UF’s ESS assessment transparent than an analytic scoring rubric.

In the beginning, the UF reluctantly submitted their recorded performances. After a number of motivational sessions, and reminders that their recorded speaking performances would be graded like their written assignments, the UF started emailing their spoken responses. Gradually (toward the second semester), the UF became used to the change and submitted their speaking performances more confidently. It was paramount to reassure them that their performances would be assessed, graded, and weighted to add to or subtract from their CGPA. The present study might provide important baseline data to reorient the policy debates on the teaching and testing of ESS around a more realistic set of assumptions.

Oracy tests needed to be examined within a framework of students’ language ability, tasks, and rating criteria interacting with one another (Kim, 2010). The UELTs needed to observe a criterion that kept in view the linguistic abilities of the UF instead of using imported native-speaker standards. While measuring linguistic competencies, the fact that both the speakers and their audience were nonnatives/second language learners was considered, without disregarding the English spoken and understood at international forums. This required the speaking ability to be scaled from within learners of the same level, and the measuring criterion needed to emerge from the same speakers.

This study discovers that the UELTs as well as the UF could achieve excellence in ESS through the concrete effort of recording speaking performances. Language teachers’ responsibility is manifold: they are to keep the learners motivated, and they need to observe and mold learners’ behavior optimally to facilitate learning what needs to be learnt. Observing a criterion, the UELT and the UF gradually become aware of the test constructs. As the UELTs design the UF’s learning experiences, they improve their own teaching practices in doing so. The UF become mindful of the meaningfulness of their speaking performance. They start realizing the difference among the scales of excellent, good, adequate, fair, limited, and no competence. An analytic scoring rubric could train them to analyze the differences between major and minor errors. They could begin to make out the distinction between a wide, a relatively wide, or a somewhat narrow range of syntactic structures. When the UF know that their performances are to be evaluated, they start self-correcting. They try to repair (Buckwalter, 2001, p. 381) and fix their talk without the teacher’s intervention. Repair in conversation is self-monitoring that takes the UF a step ahead on the road to language learning. The present research classifies repair with self-correction (Kasper, 1985).

5.3.1. UELTs: The Agents of Change to RSPs

Teacher/researcher agency is hard to deny. The power of the spoken word emphasizes the significance of acquiring, learning, teaching, and facilitating oral skills, and UELTs are the agents who promote change within this cycle. A spoken word, once uttered, can never be taken back (Hassan, 2004). All the steps and procedures involved in enabling speech require practice, and classroom practice depends on teachers. Practicing speaking skills was next to impossible in large classes. However, this study made practicing, testing, and grading possible by bringing a change in the teaching and practicing methodology. RSPs (recorded speaking performances) gave the UF chances to practice what they wanted to say; they could analyze how they wanted to say it until they said it clearly. This classroom practice could most probably metamorphose into a speaking habit. It could reduce second/third/nonnative/foreign language learners’ problems of hesitation and lack of confidence in speaking the English language. Through the continuous efforts of the UELTs and the UF, the likelihood of speech fluency could be increased. The prospect of ESS adding to the employability of the UF is hard to deny.


This study suggests that the UELTs, realizing the importance of oracy, need to plan specifically to enhance the speaking ability of the students until this is done by the authorities at the national level. The UELTs will have to take on the activating role of an instigator of situations which allow students to develop communication skills (Canale & Swain, 1980, p. 33). A student’s performance is evaluated through actual participation, and the quality of performance is tested and determined in groups, not only individually. Thus, language teachers need to consider individual learners as well as communities of learners. Projects are the final product of the undergraduate Mechatronics Engineering students’ academic program, and they are to be presented in English. Therefore, logically, the education system needs to reinforce English speaking skills more than it does at present.

Avoiding a negative impact on score validity by sustaining a constant physical environment is vital, though hard to achieve. It was difficult to assess the audio-recorded performances of the UF, but by using an analytic scoring rubric, their strengths and weaknesses were grasped for further amelioration. Language teachers cannot evaluate as accurately as teachers in the physical sciences can; however, practice in assessing speaking performances generated a likelihood of accuracy. Using a criterion for ESS generated awareness of the different test constructs, leading to deliberate efforts by the stakeholders, the UF and the UELTs.

Finding Kim’s (2010) analytic scoring rubrics feasible for the UF’s requirements, I used them to assess their recorded speaking performances.
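Purely as an illustration of how an analytic (as opposed to holistic) scoring rubric aggregates judgments, the sketch below averages per-category band scores into an overall band. The category names, the 0-5 band range, and the equal weights are assumptions for illustration; they are not Kim’s (2010) actual rubric.

```python
# Illustrative sketch of analytic scoring: each recorded speaking
# performance (RSP) receives a separate band score per category, and
# the overall band is a weighted average of those scores. Category
# names, the 0-5 range, and the equal weights are assumed here, not
# taken from Kim's (2010) rubric.

BANDS = {5: "excellent", 4: "good", 3: "adequate",
         2: "fair", 1: "limited", 0: "no competence"}

# Equal weights for illustration; an analytic rubric may weight
# categories differently.
WEIGHTS = {"fluency": 1, "accuracy": 1, "vocabulary": 1,
           "pronunciation": 1, "coherence": 1}

def overall_band(scores):
    """Weighted average of per-category band scores (0-5 each)."""
    total_weight = sum(WEIGHTS.values())
    weighted = sum(WEIGHTS[cat] * scores[cat] for cat in WEIGHTS)
    return weighted / total_weight

def describe(band):
    """Map a (possibly fractional) band to its nearest descriptor."""
    return BANDS[round(band)]

# Example: one UF's RSP as scored per category by the UELT.
rsp_scores = {"fluency": 3, "accuracy": 2, "vocabulary": 4,
              "pronunciation": 3, "coherence": 3}
band = overall_band(rsp_scores)
print(band, describe(band))  # 3.0 adequate
```

Because each category is scored separately, the rubric exposes a profile of strengths and weaknesses (strong vocabulary, weaker accuracy in the example) rather than a single opaque mark, which is what makes the assessment transparent to the UF.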

5.3.2 UELTs’ Diverse Techniques to Capacitate the UF’s ESS

This study found the following ways in which the UELTs were teaching ESS to the UF:

Table 5.7 UELTs’ diverse techniques to capacitate the UF’s ESS

S. No. Teaching techniques
1. Ask questions
2. Provide opportunities
3. Provoke interaction among the UF
4. Support the UF to respond
5. Cooperate through rephrasing
6. Iterate phrases
7. Guide in their office
8. Encourage to develop an argument
9. Hold to realize potential
10. Give prompts
11. Extend chances
12. Allow the UF to seek help
13. Jot down response before presenting

Table 5.7 showcases the UELTs’ teaching techniques for capacitating the UF linguistically. Those techniques included asking questions, provoking in-class interaction, giving prompts, supporting the UF to respond, and cooperating with the UF through rephrasing and iterating phrases. According to the table, the UELTs guided the UF in their offices and encouraged them to develop arguments. The UELTs extended chances and opportunities for the UF to seek help from their language coaches. These were some of the teaching techniques the UELTs were employing to teach ESS to the UF. This repertoire of language teaching techniques inspired the researcher to find some niches for capacitating the UF to enhance their speaking performances.

5.3.3 Interruption Obstructs Language Learning

This case study found that the UELTs refrained from interrupting the UF’s English language utterances. The UELTs realized that interruption is a ‘demotivating factor’ (Zulfiqar, 2011), and their teaching practices had changed somewhat accordingly. A consensus emerged that intervention hinders students from speaking fluently.


Table 5.8 The UELTs’ non-intervening practices (2014) to correct the ESS of the UF

The UELTs’ non-intervening practices to correct the ESS of the UF

T1 .. First listen, identify all the weak points, and once the student is done with

speaking then...explain the points where … (the student was) not appropriate

in (his/her) speaking style. So whether it is a presentation or a research

project, I would point out but afterwards.

T2 … I tell them that I want them to speak and I would appreciate (their) making

mistakes instead of keeping silent all the time. (But) this making errors and

again correcting them…depends upon the nature of the activity as well.

T3 Well I don’t believe in correcting because I think that damages their self-

esteem.

T4 If you keep on correcting them, the flow (of their ideas) will be disturbed.

T5 They must acquire (language) and when you interrupt, it is not acquiring, you

make them acquire.

T6 ..The students who come to the computer science especially in the earlier

years … they were very average as far as their English language speaking

and confidence was concerned. And we really had to work a lot on their

language…

T7 Most of the time I don’t correct the students there and then.

T8 No, we don’t do that. However, once they have completed what they’re

saying, then for the general audience you can tell where they went wrong.

But first you encourage them.

T9 There are general discussion about tenses and the grammar that we carry on

in the class so that I can give them example of how to use the language

correctly but I don’t do so when they are speaking.

Table 5.8 shows that the UELTs mostly did not interrupt the UF while they were speaking. They did not believe in interrupting the UF during interaction, and did not intervene to correct their pronunciation, intonation, or pacing. Rather, they helped the UF build confidence in the target language. It is a notable result that the participating UELTs tried to save the UF from embarrassment that might have damaged their self-esteem. The UF could have been demotivated through interference; instead, they were adequately guided to speak appropriately.

5.3.4 Asking Questions from Pairs and Groups

Questions give flexible access to the interactional sequence. Knowing the functions of different question types, and their relevance to language acquisition and language learning, could help both the practicing UELTs and the UF. Learning language in pairs and groups provided them with more opportunities to function in the English language (Greenfield, 2003; Shamim, Negash, Chuku & Demewoz, 2007). The interaction then progresses as the teacher uses her turns to steer the discourse in a particular direction, and the students recognize the teacher’s speaking style and her ways of inviting them to speak next. However, the practice of questioning is hard to sustain in a large classroom. In the recorded speaking performances, the UF used questioning techniques. Questioning enabled the targeted UF/second language/nonnative learners to develop self-confidence by responding to each other’s questions. Moreover, the use of RSPs gave the UF the opportunity to practice ESS. The sets of criteria of the UELTs (see section 4.3.1.2) could be reformulated in the light of the UM&A’s expectations from a criterion. The criterion expectations of both stakeholders follow:

Table 5.9 UELTs and UM&A’s criterion to check UF’s ESS

UELTs’ Criterion for UF ESS | UM&A’s Criterion for UF ESS
tone, voice clarity | explaining one’s point of view, fluency, comfortable pace
criterion in mind | develop a locally based criterion by a team, relying on standard norms at the national and international level
time | time lag
admit that evaluation did not adjust with intuition | UF need to be assessed
assessment was not done in a conscious way | criterion must be followed to justify
body language | kinesics
subject matter | emphasized endeavors toward excellence and perfection in language learning
relevance | arduous and challenging
accuracy, errors | a team could be required to develop a nationally and internationally recognized criterion
introduction | certain statements, questions, and expressions could be included in the criterion to measure UF’s ESS
oral skills | regarded the independence of UELTs to develop a standardized criterion for the evaluation of ESS
must have criteria | follow a benchmark for improvement of ESS

Table 5.9 juxtaposes the sets of criteria that the UELTs and the UM&A conceived to check the UF’s ESS. The table reveals similarities in the criteria required to assess the speaking ability of the UF, i.e. clarity of speech, relevance, kinesics, time, and adherence to a criterion. According to this table, the UELTs emphasized having a criterion for ESS, whereas the UM&A highlighted the need for excellence and perfection in language learning, as it could help to develop confidence, comprehensibility, and paced speaking in the UF. At university level, the UELTs taught ESS assuming that the students had basic English language knowledge. Most of the UELTs checked the following linguistic features in the speaking performances of the UF:

Table 5.10 UELTs’ checked linguistic features in the UF’s ESS

Vital for ESS | Diction | Fluency | Understandability | Relevance | Grammar
UELTs (%) | 77.77 | 66.66 | 88.88 | 77.77 | 99.99


Table 5.10 identifies the majority of the UELTs’ self-reported testing constructs for ESS. The UF’s ESS was generally assessed during presentations by deducting marks for grammar, vocabulary, accent, and pronunciation from their assumed oral skills. The UELTs had experienced a range of competencies, so they had decided the constructs of a criterion individually, and had individually figured out how to achieve that criterion as well. The UELTs could not afford to give feedback and correction to all the UF because of large classes and administrative distractions (Carroll, 1971, p. 113). However, some of the UF were sometimes told about their errors and deficiencies in general terms. This type of practice needed to be changed: the UELTs needed to scale ESS for systematic assessment. An austere criterion might demotivate the majority of average or below average learners, so developing a motivating criterion that could boost learners’ morale to attain the standard became a target. It was important to put the UF at ease, and not to let them feel humiliated or depressed. The UELTs needed to use achievable standards for the UF. To achieve stability in ESS, the UF required equal exposure at university level through achievable standards, and needed to be made cognizant of them. The desire to be perfect was crucial: it could stimulate the UELTs to design the best criterion, and the UF to train themselves to achieve it.
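The percentages in Table 5.10 are consistent with counts out of the nine interviewed UELTs (T1 to T9 in Table 5.8): 77.77% corresponds to 7 of 9 teachers, and 99.99% to all 9 when each teacher’s share (100/9) is truncated to 11.11. A minimal sketch of this arithmetic, assuming n = 9 raters and illustrative counts chosen to reproduce the reported values:

```python
# Hypothetical sketch: derive Table 5.10-style percentages from rater counts.
# Assumes nine UELTs (T1-T9); the counts below are illustrative, chosen so
# the resulting percentages match the values reported in Table 5.10.
N_UELTS = 9
counts = {
    "Diction": 7,
    "Fluency": 6,
    "Understandability": 8,
    "Relevance": 7,
    "Grammar": 9,
}

def construct_percentages(counts, n):
    """Return the share of raters (%) who checked each construct."""
    # Truncate the per-rater share to two decimals, mirroring the thesis
    # convention of reporting 9/9 as 99.99 rather than 100.00.
    per_rater = int(100 / n * 100) / 100  # 11.11 for n = 9
    return {k: round(per_rater * v, 2) for k, v in counts.items()}

print(construct_percentages(counts, N_UELTS))
```

The truncation step is the only reason Grammar appears as 99.99 rather than 100 in the table.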

This study disapproves of unnecessary standardization. A criterion based on personal opinion might not lead to valid and reliable outcomes. The audio recorded speaking performances enabled the teacher/rater to evaluate the speaking ability of the UF methodically. A criterion needs to enforce some established norms so that speaking performances are evaluated scientifically. Thus, eliciting responses in the form of speaking performances provided the UF with solid opportunities that were scientifically measured.

Table 5.11 UM&A’s perspective on Standardized Criterion for Assessing ESS

Do’s/Agreeable | Don’ts/Disagreeable
UF need to be assessed | Discourage
Scale progress of UF | Beat down UF spirits
UELTs need to make a criterion | Hide criteria from UF
UF need to be cognizant | Confuse
Tell UF how to achieve that criterion | Unnerve
Develop a motivating criterion to boost learners’ morale to attain the standards | Avoid an austere criterion that might demotivate the majority of average or below average UF
Discover some concrete, graspable, and achievable standards within the curricula | No unnecessary standardization
Clarity, explaining point of view, and understanding are crucial | Do not let UF grow presumptuous
Include certain statements, questions, time lag, and expressions in the criterion | UF should not feel humiliated
UF need to be put at ease | UF should not feel depressed
Follow a benchmark for improvement; provide a win-win situation for UELT and UF | Bully
Respect effective communication to diverse audiences | Let the UF remain monotone

Table 5.11 compares and contrasts the do’s and don’ts of the UM&A. According to this table, a unique combination emerged: a thoughtful balance between excellence and accomplishment in the ESS of the UF. As can be seen, the UM&A had a learner friendly approach. It is worthwhile to note that the UM&A recommended an encouraging, motivating, and transparent criterion through which the UF could be encouraged and motivated without growing overconfident.

This study deems the scaling of the UF’s ESS important. Without observing a criterion, it was difficult to achieve consistent and valid scores, as different raters valued different aspects of presentations. Rating criteria made the assessment of interactive performances less biased and closer to a standard. While measuring linguistic competencies, the fact that the UF were nonnative/second language learners, and that the audience were nonnative/second language learners too, was considered. Language learning is a living phenomenon. A target of excellence in language criteria lends a constant aspiration to the users. Gaps in learning could be identified, and by addressing these identified gaps through continuous and conscientious efforts, the users could humanly minimize them. This research offers an accessible criterion to the UELTs for exploring the linguistic potential of the UF. Moreover, the testing scales lent conscientization and deliberate effort to the end users. Using a criterion, i.e. an analytic scoring rubric, accorded confidence to the researcher UELT particularly and to the UF generally. Roughly the same assurance could be accorded to the other UELTs.
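An analytic scoring rubric of this kind awards a separate band to each test construct rather than one holistic mark, which is what lets a rater locate strengths and weaknesses. A minimal sketch, where the construct names echo those discussed in this study but the 0-4 band range and equal weighting are illustrative assumptions, not Kim’s (2010) actual scales:

```python
# Illustrative sketch of analytic (construct-by-construct) scoring.
# The 0-4 band range and equal weighting are assumptions for illustration.
CONSTRUCTS = ["meaningfulness", "grammatical competence",
              "intelligibility", "task completion"]
BAND_MIN, BAND_MAX = 0, 4

def score_performance(bands):
    """Validate per-construct bands; return the analytic profile and total."""
    for construct in CONSTRUCTS:
        band = bands[construct]
        if not BAND_MIN <= band <= BAND_MAX:
            raise ValueError(f"{construct}: band {band} outside {BAND_MIN}-{BAND_MAX}")
    total = sum(bands[c] for c in CONSTRUCTS)
    return {"per_construct": bands, "total": total,
            "max_total": BAND_MAX * len(CONSTRUCTS)}

# Unlike one holistic mark, the analytic profile shows where a speaker
# is strong (intelligibility) and weak (grammatical competence):
result = score_performance({
    "meaningfulness": 3,
    "grammatical competence": 2,
    "intelligibility": 4,
    "task completion": 3,
})
```

Because each construct is reported separately, both rater and learner can see which aspect of the performance pulled the total down.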

The survey statistics (Table 5.12) established that the chances of meeting a stated testing criterion were greater than the chances of meeting an unknown or general one.

Table 5.12 Awareness about criterion led to achievement at college level (2013)

Action | Achievement
Criterion awareness | 27.50
Criterion fulfillment | 26.66

Table 5.12 suggests that if more students had been told about a testing criterion, a higher number of them might have tried to achieve it.

Thus, after introducing the analytic scoring rubric (Kim, 2010) (see section 4.4.1) to the UF of semester 1, as I, their UELT, started emailing them feedback, I observed change in their subsequent RSPs. The feedback emailed to all the UF kept reinforcing what was required in their speaking performances. Illustration 8 shows an example of this feedback:

Illustration 8. Emailed Feedback to All Students on Required Specifications


The feedback in the above illustration made not only the testees concerned, but all the UF in the Mechatronics engineering department who opened the researcher UELT’s attachment, realize the specifications missing under the topic ‘introduction’ in the speaking performance. Broadcasting feedback in this way enhanced its impact on the university freshmen.

Evaluating recorded performances can be facilitated by training workshops. The HEC, English departments, ELT centers, or organizations like SPELT could, individually or collaboratively, train the UELTs through a couple of training sessions. A workshop, a seminar, or a series of seminars might be held to impart assessment techniques that enable the conscious evaluation of language learners’ recorded speaking performances on an analytic scoring rubric.

5.3.5 Using Analytic Scoring Rubrics and Their Benefits

Motivating the UF, emphasizing the reasons they should develop English speaking ability, and deliberately teaching ESS appear to be effective tools for addressing the problem of establishing ESS. The incentive to earn grades for class participation can inspire the UF to interact in class, and increased interaction in class creates opportunities for the UF to enhance oracy. Grades reassure the UF and lead them to engage in class interaction somewhat consciously. To inspire them, the UELTs can broaden their vision by repeatedly reminding them of the significance of ESS. Through conscientization, some of the students not only build a diverse reservoir of phrases and patterns but also optimize meaningfulness, grammatical competence, intelligibility, and task completion in the English language. The UF can work out the strengths and weaknesses of their speaking performances through a criterion. The act of testing may have a washback effect (Norris, 2009) on language learning, classroom procedures, and the curriculum itself.

Once the problems are identified, the UELTs can shift the focus to language enhancement accordingly. Likewise, when the participants are empowered to evaluate peers’ performances democratically, they can appreciate the strengths of their class fellows and identify language deviations. This productive activity can help them adopt the strengths and avoid the deviations, and thus help the UF evolve their speaking ability. Concrete, graspable, and achievable standards in an analytic rubric can lead the language learners to better achievement. Statements in a criterion can help the language learners and the language raters discern both the achievements made and the targets yet to be achieved. Critical consciousness about the significance of ESS can direct the UF to strive for language learning. Through conscientization, policy makers, curriculum developers, and language teachers could give due attention to enhancing ESS through teaching and evaluation, enabling the UF to move up their respective ladders of ESS. This study is useful for practitioners, researchers, and policy makers. It will help teachers, students, and administrators understand the significance of ESS assessment and grading. The profitability of speaking English (admission to universities, immigration to other countries, fellowships abroad, employment in national and multinational companies) demands the deliberate teaching of ESS at the UF level.

5.3.5.1 Impact of a Criterion on the Rater and the UF

Teaching spoken English is a universal, ongoing phenomenon. The teachers’ knack for managing students’ learning behaviors is one of the rarely traversed variables in educational research (Carroll, 1971). The UELTs used numerous ways to meet the challenges of developing English oral skills; however, motivating all the UF in a large class was hard-won. Teachers could inspire the class participants to engage in interactive sessions through questions, gradually seeking solicited or voluntary responses. Evaluation has great influence on teachers’ teaching and on their reviewing of their teaching practices (Norris, 2009). The teacher and the taught realized the gaps in their linguistic outcomes, and this knowledge helped both to bridge the difference in ESS. Using an analytic scoring rubric as extrinsic motivation, and sharing the rubric with the UF to impart intrinsic motivation, I validated the positive outcome by testing the speaking performances of the UF (see Chapter 1).

Assessing ESS against a criterion stimulated the learners to commit to ESS. Thus, the researcher gauged the learners’ ESS through defined scales, which enabled the UF to realize what they needed to appropriate. Enforcing a criterion to test language development could be one of the most impactful moves on the part of the stakeholders.

Extending talk opportunities for the learners, and compressing teacher talk to give space to the language learners through situations and skits, provided a productive impetus to speak the target language. The UELTs believed in the productivity of narratives in the teaching and learning of English, as a way of understanding experiences from diverse perspectives. Through reflection and discussion, the UF learned to use the target language made available through acquisition and learning. Recording their responses saved the UF from the embarrassment of silence in a classroom. Thus, the UELTs had to nurture the output oriented responsibility of teaching English oral skills to the UF, while the UF had to attend to the input oriented responsibility of performing linguistically. As a UELT, I found a viable option in incorporating the required changes into the running course.

Both learners and teachers encountered a win-win situation. Moreover, continued attention to measuring speaking ability can generate meticulousness, and this continued, meaningful effort can bring change to the Pakistani UF that extends to Pakistani society.

5.3.6 Recorded Speaking Performances (RSPs)

For the first time at academic level, the UF recorded and submitted their speaking performances for evaluation. Assessing oracy is laborious and tedious, and language teachers encounter difficulties in grading students on the spot in the absence of a valid and reliable assessment criterion. At AU, Islamabad, Pakistan, this was the first time the UF were asked to submit audio recorded speaking performances for the English Communication Skills and Technical Writing courses.

The UF listened to each other’s recordings and commented on them. Some of the UF recorded and submitted genuine opinions. This practice helped them identify gaps in their peers’ speaking performances and mend their own. Illustration 9 validates this:


Illustration 9. RSPs Help UF Verbally Evaluate Their Peers

However, some UF submitted limited, stereotyped responses just to get their assignments marked. Learning ESS is one thing; practicing ESS is another, and it entails time. ESS was one module of English studies within their core Bachelor’s program in Mechatronics. The policy makers, the board of governors, and the faculty board of studies have to harness time to empower the oppressed English speaking skills at Air University and at all other universities in identical circumstances. Illustration 10 supports the reliability of this finding:

Illustration 10. Consciously Teaching/Testing/Grading ESS is Vital

If an equitable ratio of weightage for ESS is granted, the already motivated efforts of the UELTs and the UF, in spite of all the limitations (see section 5.4.1), might bring in positive changes in the form of ‘cognitive social capital’ (Ashraf, 2006).


In this case study, the UF harnessed the available time by recording their talk and submitting it to me, their UELT, through email. Recording their greetings, their comments on different topics in the syllabi, and their discussions of issues of mutual interest reduced their shyness about talking in English. Illustration 11 validates this:

Illustration 11. Testing ESS Contributed to the Pakistani Prospective Engineers

Learners take time to adjust to a new method of teaching and learning, and practice does make a difference. Moreover, the vacant slots in the UF’s weekly schedule helped them practice ESS in the form of audio recordings. From the UF’s and the UELT’s perspective, they were investing their free slots in the development of ESS. Why should they keep emailing the audio assignments? It was an extra hassle for both the UF and the UELT. The UF were of the mindset that their program of studies was tough, and they were overwhelmed by the academic burden. I, the UELT and researcher, wanted to conduct this classroom research to bring about an improvement in the speaking performances of the UF for the future. That said, a significant move was to grant an academic benefit to the UF for practicing ESS: the UF needed some strong inducement to invest an extra hour in it. The UF were accustomed to submitting written assignments and getting them graded, with the marks adding to their GPA and CGPA at graduation. Therefore, earning grades sustained the motivation of the UF to practice ESS in diverse situations in the form of recorded speaking performances.


5.3.6.1 Benefits of Speaking Performances

Recording speaking performances was an efficient way to engage introverted students in activity. Availing a free fifty-minute slot in the learners’ weekly schedule, any motivated and dedicated UELT could provide the UF with opportunities to record and practice their speaking performances. Some of the introverted students (e.g. 284 Feedback.ogg), who participated reluctantly in class activities, planned and recorded their tasks more regularly than they participated in class. It was like talking over the phone and communicating what one found hard to say face to face. Teachers need to balance learning opportunities between introverted and extroverted learners. If a teacher encourages volunteerism in seeking responses from a class, most of the chances might be taken by the extroverts, whose response-ability might be at its peak. By contrast, the opportunity to record responses provided both types of students with equal chances to submit their performances. Through this study, I have tried to understand students’ achievements in English speaking performances at their own pace. They experienced acquiring English by talking to each other in the form of RSPs. The UF actively participated in the process of learning, sometimes researching a topic before recording themselves for assigned speaking performances. Illustration 12 shows emailed feedback on one of the RSPs:

Illustration 12. Emailed Feedback to All Students on Long Utterances

One benefit of RSPs was the physical availability of the spoken response: the UELTs could customize their teaching methodologies and appreciate language learners’ commitment with a rewindable specimen. A second benefit was the use of email to send feedback to the whole batch, so that one piece of feedback served all similar performances needing improvement. Once implemented, this research shall help the UELTs minimize the load of corrections and feedback on written scripts towards the end of the semester.

Moreover, in planning their speaking performances, the UF got another opportunity to think, discuss, phrase, and rephrase them, which could add to the UF’s speaking experience. It helped to motivate the UF with the prospect that recording their utterances could enable them to develop more confidence, gain clearer concepts, debate, argue, negotiate, persuade, and strengthen their literacy via oracy. Recording their speaking performances was a new experience for the UF. Learning and then confidently redoing the activity was difficult in the beginning; however, with practice, the UF gradually started finding this language learning technique quite exciting. Illustration 13 validates this:

Illustration 13. RSP- An Effective Technique for Ample Opportunities to Practice

Most of them gained confidence by jotting down their points before recording their performances. This process led the UF to think about and reflect on what they intended to record. The time spent listening, thinking, critiquing, jotting down points, and then recording gave them reflective time to self-correct in the process. While performing, they repaired their utterances (e.g. in 544.ogg, the speaker corrected herself by replacing ‘misspelling’ with ‘misspelt words’), which showed that they self-monitored (see section 5.3).

The speaking performances helped the UF actively build meaning from the prompts (topics). Continued practice of speaking performances could empower the UF to voice their opinions somewhat confidently, in class or in real life situations outside it. Recorded speaking performances provided the UF with opportunities to participate in discussions (e.g. A265 B266 C267 D268 E269 Group.ogg), ask questions (e.g. A47 B48 Automobile.ogg), answer questions (e.g. A27 B28 Seeking Info.ogg), support their statements (e.g. A29 B30 Smoking.ogg), rationalize their approach (e.g. A21 B22 Floods in Pak.ogg), voice their point of view (e.g. A33 B34 Muslims and Islam.ogg), conduct interviews (e.g. A213 B214 Sort Dialogue.ogg), and comment constructively on each other’s performances without interruption (e.g. A183 B184 Volunteerism.ogg). The presence of the UELT in the university language lab ensured the English oral language practice and recording of the UF. Once an equitable weightage for ESS within the overall assessment of English language is implemented in a university, the RSPs can be submitted through mobile, email, or the UELTs’ common space on the intranet.

Evaluating peer performances is a major benefit of RSPs. It is difficult for a number of student raters to assess a single written composition simultaneously, whereas a number of student raters can evaluate a single RSP at one time. Thus, establishing the credibility of RSPs through student and teacher raters is reliable and time efficient.
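When several student and teacher raters score the same RSP on the same rubric, their scores can be pooled and checked for consistency. A minimal sketch, where the rater names and scores are invented for illustration and the spread-based agreement check is one simple option rather than a method prescribed by this study:

```python
# Hypothetical sketch: pool multiple raters' scores for one RSP and flag
# low agreement. Rater names and band scores are invented for illustration.
def pool_ratings(scores, max_spread=1.0):
    """Return the mean band and whether all raters fell within max_spread bands."""
    values = list(scores.values())
    mean = sum(values) / len(values)
    agreed = (max(values) - min(values)) <= max_spread
    return round(mean, 2), agreed

# One RSP rated by a teacher and two student raters on the same rubric:
ratings = {"teacher": 3.0, "student_A": 3.5, "student_B": 3.0}
mean, agreed = pool_ratings(ratings)
```

A wide spread between raters would signal that the RSP, or the rubric descriptor, needs to be revisited before the score is trusted.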

5.3.6.2 Evaluating Peer Performances

Through RSPs, the participants were empowered to evaluate peers’ performances democratically; they learned to appreciate the strengths of their class fellows and identify deviations. This productive activity helped them adopt the strengths and avoid the deviations, which in turn helped the UF evolve their own speaking ability. They listened to some of their fellow UF’s audio clips (e.g. 119.ogg) and submitted their own opinions, agreeing and disagreeing with the fellow analyst to a certain extent. The transcript of the RSP 119.ogg follows:

‘I want to analyze the analysis of Wajeeh on the topic, ‘Discussing controversial issues in

the classroom’, Wajeeh was very confident in his recording. His accent was impressive.

He spoke very well… He discussed all the aspects of the topic. His way of speaking was

fabulous. The use of vocabulary was good. He discussed every member in the recording…

I agree with Wajeeh that Taimoor was well prepared and gave meaningful examples…he

told that Haroon was confused in a part of the presentation, but according to me he was a

little bit confused because it was the first presentation of the class. I agree with Wajeeh in

case of Rafi Ullah that Rafi was not prepared. Even he did not know his slides’ (Transcript,

119.ogg).

RSPs like the one above made the researcher UELT realize how the UF emerged as autonomous and confident second language speakers. Commenting on each other’s performances enhanced the UF’s cognitive and metacognitive powers, as it engaged their higher-level thinking, and this helped them verbalize better than before. This method could be used by other centers of English teaching in the same way or with variations.

Then, where criticality was lacking (e.g. 47.ogg; 48.ogg; 49.ogg; 50.ogg), the students were reminded that genuine peer assessment could help their companions address the gaps in their speaking performances. The UELTs would need to check the UF for stereotyped RSPs in which the speakers’ comments are limited to ‘very well’, ‘very good’, and ‘very nice’ only. The UF concerned (i.e. 47.ogg; 48.ogg; 49.ogg; 50.ogg) assumed that he had completed the task simply by submitting RSPs. However, the RSPs, as a reliable record, made the UF concerned and the other listeners attend to what they were expected to say, what the presenters presented, and how the presenting UF performed, so as to qualify appreciative or depreciative comments.

Moreover, using RSPs to give practice to the UF and enhance their speaking ability might deliver another benefit, to the social scientist. The recurrent submissions of a single UF gave an impression based on the ‘pattern of associations with other information’ (Kunda, Sinclair & Griffin, 1997), namely the uniformity of comments in four RSPs on four different peer groups.

5.3.6.3 Practicing ESS in RSPs and UF’s Output

Acknowledging the symbolic significance of English speaking skills (see section 1.6), whose status in Pakistan ranges from second language to foreign language, it is crucial to unfold this linguistic ‘sign of wealth’, as Bourdieu (1991) terms language.

Some of the peer assessments had an impact on the presenters and the fellow UF (see 543.ogg). One of the student analysts emailed a well elaborated response that displayed complex syntactic structures and lexical forms, and good use of cohesive devices that connected her ideas smoothly. Completing the task well, she submitted her RSP with prosodic features (intonation, rhythm, and pacing); her comment (feedback) on her presenting peers follows:

“Presentation on the article, Technical Communication between Global North … First of

all… did a brilliant job, his research was amazing, his examples were well defined,

thoroughly enjoyed the examples that he kept on giving on the topic which made it very

easy for us to understand what he was saying…he was very fluent… I think he could have

done better because we have seen him present before... definitely not up to his potential.

Next to him was …he was so non serious...completely cut off from the audience…I was

very depressed …and he was reading of the book as usual…he was not prepared

obviously….. Next, was his partner… even worst…he couldn’t even read out of the book

properly… why stand there and make a fool out of yourself? …it was very disturbing …if

I had been his teacher, I could have been offended. It affects your other group mates who

prepare well. Other than that, overall, group presentation was only good because of …

there were so many typing errors as well…it was not a serious presentation. I think we need

to work on it’ (543.ogg).


UELTs might relate to similar comments and peer assessments (543.ogg), which help them achieve the purpose of attaining not only the target language but also a positive attitude, commitment, and comradeship. This finding affirms Bourdieu’s (1991) view that language is a ‘sign of authority’ leading to ‘social capital’.

Another UF’s elaborated, logically structured, and intelligible peer assessment, which covered all the main points with a good amount of detail (552.ogg), follows:

‘…in the start… was quite fast but he slowed down as he continued, possibly because he

realized that he had to meet the minimum requirement of two minutes…the major flaw that

I could point out was in the grammatical competence, e.g. at the 30th second. He mixed

past tense with the present tense because he started with the past tense and used present

tense in between. When he was talking about…he used the verb… ‘expresses’ which shows

the mistake…in the selection of points mentioned by … was intelligible because they were

in a sequence…he explained about every presenter from… to …And it also met the

requirement of discourse competence because every point that he mentioned was supported

by a small brief detail that was necessary to proceed. This recording was overall

meaningful. It met the requirement of Meaningfulness. Although at one or two points…

there was some lack in Grammatical Competence, and usually grammatical competence

puts a question mark on the meaningfulness… on the meaningfulness of the scenario you

are talking about. That actually creates confusion. But that was not much…at only one or

two points… Then, lastly, as far as task completion is concerned the task was complete…’

(552.ogg)

The RSP-based analysis (552.ogg) might assist the UELTs in developing logical openings and closures, and logical development of ideas, among the UF through peer assessments. The cognitive development in the linguistic outpour of the UF (552.ogg) reflects Vygotsky's concept of 'the transformation of socially (classroom/language lab) shared activities into internalized process' (Mahn & Holbrook, 1996, p. 191 as cited in Jabeen, 2013, p. 11).

Furthermore, the researcher UELT infers that the exposure, experience, training, or habit of speech analysis could serve second language learners well even after the completion of the academic semesters.

Verbal appreciation in peer assessment (feedback) can motivate learners and support positive behaviour, as reinforced by O'Donnell, Reeve and Smith (2011).

‘…Presentation on the article, Adventures in Blogosphere from blog readers to blog

writers…the presenters were…In spite of the problem with the projector as well as the

computer… started the presentation and confidently gave the introduction. However, there

was a lot of disturbance by the IT staff members who were repairing the fault in

computer…did not even stop the presentation nor did he get confused. He said that blogs

have many advantages such as they enhance the critical thinking, literacy and promote the

use of internet as a research tool. The one who frequently read a blog can increase his or

her vocabulary. He explained the methodology of the article briefly and discussed the

research questions. The second presenter was …He presented well but he was a bit nervous.

He was anxious while presenting. He discussed the main portion of the article and told

about comparison between German student blog and French students blog… The

conclusion was presented by… Although he was confident yet his voice was not clear and

it seemed that he was talking to himself…’ (483.ogg)

The RSP (483.ogg), in the form of peer analysis, helped at three levels: firstly, one presenter UF's positive personal handling was reinforced; secondly, the second presenter UF's presentation skills were democratically appreciated and his nervousness was pinpointed so it could be minimized; thirdly, the intelligibility impeding communication in the third presenter UF's performance was highlighted for the batch of engineering students. This type of prompt diagnostic feedback from students proves constructive in large classes, assisting the UELTs in obtaining better versions of RSPs than those produced without students' feedback.


5.3.7 Results of UF’s Speaking Skills

The UF produced more than 10% 'excellent' recorded speaking performances in both the first and the second semester. Less than 3% of the UF had 'limited' control on meaningfulness in either semester, whereas only 0.71% of the UF had 'no' control on the testing construct of meaningfulness throughout the two semesters (see Table 3.2). The rest of the UF showed varied competence, from 'fair' to 'adequate' and 'good' control on meaningfulness. Having said this, I, as a language teacher researcher, infer that even limited, narrow control on meaningfulness reflects the endeavours made by the UF to create some meaning through their utterances throughout the two semesters.
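Percentage distributions of this kind, showing what share of RSPs fell at each rubric level for a given construct, can be tallied with a short script. The sketch below is illustrative only: the level labels loosely follow the analytic rubric described in the study, and the ratings in the example list are invented, not the study's data.

```python
from collections import Counter

# Hypothetical rubric levels, ordered from lowest to highest control
# (an assumption, loosely following the analytic scoring rubric).
LEVELS = ["no", "limited", "fair", "adequate", "good", "excellent"]

def level_percentages(scores):
    """Return the percentage of RSPs rated at each level for one construct."""
    counts = Counter(scores)
    total = len(scores)
    return {lvl: round(100 * counts.get(lvl, 0) / total, 2) for lvl in LEVELS}

# Invented ratings for a 'meaningfulness' construct, for illustration only.
sem1_meaningfulness = ["adequate", "good", "excellent", "fair", "limited",
                       "good", "adequate", "fair", "good", "adequate"]
print(level_percentages(sem1_meaningfulness))
```

Running the same tally per construct and per semester would yield the comparative figures of the kind reported above.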

Likewise, in the second testing construct of the criterion, grammatical competence, neither sem-1 nor sem-2 speaking performances displayed 'no grammatical control'. The present study found that both semesters' RSPs had a certain level of grammatical control and some range of sophistication in linguistic structure and lingual form; the range of competence of the UF excluded the level termed 'no' control, the lowest level in the analytic rubric followed. Sem-2 responses were less confusing than semester one's at the 'limited' level; accordingly, fewer RSPs from the second semester committed considerable errors at that level (see Fig. 4.16). The modest progress in the form of a 'narrow range of syntactic structures', 'simple sentences', and 'simple word forms' observable in sem-2 should not be ignored, as it represents progress at every extension of an advancing level in the analytic scoring rubric (see Fig. 4.17, Fig. 4.18). At the 'adequate' level of grammatical competence, the sem-2 RSPs rarely demonstrated considerable errors that create ambiguity, so their meaning could be understood. The sem-2 RSPs delivered simple sentences with a small range of syntactic structures (see Fig. 4.20). They offered simple linguistic structures with fewer inaccurate lexical forms (see Fig. 4.21, Fig. 4.22). In sem-2, the speaking performances of the UF demonstrated a comparatively 'wide spectrum of linguistic structures' and word forms at the 'good' level of grammatical competence (see Fig. 4.24). The RSPs used comparatively 'complex syntactic structures and lexical forms' (see Fig. 4.25) and showed a 'wide range of grammatical structures and lexical form' at the 'excellent' level (see Fig. 4.27). Though usually considered negligible, 1.25% of RSPs in sem-2 showed 'advanced syntactic structures'. In short, teaching, testing, and grading, along with an autonomous learning environment, practice through RSPs, motivation, and participation, brought a gradual change in the sem-2 responses.

Discourse competence was the third testing construct of the criterion set for the study. From the first semester to the second, incoherent and disconnected responses decreased, showing growing coherence in utterances (see Fig. 4.28). None of the RSPs in either semester entirely lacked organization in utterances; the RSPs of the UF were organized to a certain extent. Hence, discourse competence 'No (2)' was not applicable to either semester. Throughout the two semesters, fewer than 2% of utterances dropped to the discourse competence 'No (3)' level (see Fig. 4.29), less than 2% of UF responses were generally incoherent (see Fig. 4.30), and less than 2% of RSPs displayed unclear organization of utterances (see Fig. 4.31). However, less than 2% of RSPs in the second semester tried to use connectors, though mechanically (see Fig. 4.32), at the discourse competence 'limited (3)' level.

The percentage of disjointed discourse reduced marginally in sem-2 at the 'fair' scale-point of discourse competence: their discourse was barely organized but less incoherent (see Fig. 4.33). The RSPs showed fractional improvement in the organization of their utterances, as the UF might have been trying to sound logical (see Fig. 4.32), and they were marginally less confusing (see Fig. 4.34). Nonetheless, in the second semester, the UF started using simple cohesive devices (see Fig. 4.35). As a whole, on different scale-points of discourse competence (DC) at the 'adequate' level, the sem-2 responses were in general 'logically structured', as incoherent occasions in speaking performances decreased while the UF used 'simple cohesive devices' (see Fig. 4.36, Fig. 4.37, Fig. 4.38, and Fig. 4.39). In DC at the 'good' level, the sem-2 responses of the UF had a 'generally logical structure' (see Fig. 4.41) and used cohesive devices linking ideas evenly (see Figure 4.42). At the 'excellent' scale-point in DC, the sem-2 responses included 'logical openings and closures', and the RSPs had logical development of ideas (see Figure 4.43). In sem-2, a meagre percentage (1.07%) of RSPs displayed 'logical connectors, a controlling theme, or repetition of key words'.


Task completion (TC) was the fourth testing construct of the analytic scoring rubric used for the current research. It is worth noticing that only 0.18% of sem-2 RSPs could not complete the assigned task. In both semesters, less than 1% of RSPs did not contain enough evidence to evaluate task completion (see Fig. 4.44). That means 1) throughout the two semesters the UF had the competencies, at various levels of the rubric, to complete tasks, and 2) the tasks assigned were within the range of their competencies per the selected rubric. Less than 1% of RSPs displayed 'major incomprehension that interfered with addressing a task' in sem-2 (see Fig. 4.45), and less than 10% of RSPs insufficiently addressed the task at the 'fair' level (see Fig. 4.46). The sem-2 RSPs showed a slight decrease in 'major incomprehension' (see Fig. 4.47), as the respondents addressed the tasks comparatively adequately (Fig. 4.48). They completed tasks with 'inconsequential misunderstanding' (see Fig. 4.49): they conveyed all 'major points without including details' (see Fig. 4.50) and covered 'a couple of major points with essential details' (see Fig. 4.51). In sem-2, the UF did not include 'noticeably misunderstood points' in their RSPs in the testing construct of task completion at the 'good' level (see Fig. 4.53); at the same scale-point, they had elaborated the assigned tasks well in sem-1. In sem-2, 4.80% of the UF attained excellence at the three levels of the rubric in TC.

From sem-1 to sem-2, the RSPs of the UF were intelligible, except for less than 1% of RSPs that did not have enough evidence to evaluate intelligibility (see Fig. 4.57). Moreover, the RSPs did not have 'frequent pauses and hesitations', apart from only 0.36% of RSPs from sem-2. In all, 5.25% of RSPs could be traced at the five levels of 'limited' control in the testing construct of intelligibility. In the second semester, 2.31% of RSPs lacked intelligibility, impeding communication, at the 'fair' scale-point (see Fig. 4.61); a further 2.31% of RSPs had problems with pronunciation, intonation, or pacing (see Fig. 4.62); and 0.18% of RSPs could not sustain a consistent level throughout. In the second semester, 5.52% of RSPs at times required significant listener effort (see Fig. 4.63). The second semester responses improved in intelligibility at the 'adequate' level (see Fig. 4.64) and showed fewer difficulties with pronunciation, intonation, or pacing (see Fig. 4.65). Minor fluidity in semester-2 responses was discerned at the 'adequate' scale-point in the fifth testing construct, intelligibility, of the scoring rubric (see Fig. 4.66). In the second semester, minor problems with pronunciation or intonation reduced at the 'good' scale-point (Fig. 4.68), and the responses improved (see Fig. 4.70). In the second semester, 0.71% of RSPs were 'always clear, fluid and sustained' in intelligibility at the 'excellent' scale-point. Though generally a negligible percentage, it was obvious in the classroom, which could be motivating for the UELTs while grading and for the UF while listening for peer assessment. Finally, 12.63% of RSPs in sem-2 did not require listener effort at the 'excellent' scale-point of the intelligibility construct (see Fig. 4.72).

After comparing the RSPs of sem-1 and sem-2 on the testing constructs of meaningfulness, grammatical competence, discourse competence, task completion, and intelligibility, it is instructive to read the findings of the comparative evaluation of semesters 1 and 2 in the light of a few facts from the research period. Firstly, the UF in the first semester mostly had interesting topics that were light in nature, whereas in the second semester they had academic topics. At freshman level, the first semester mostly introduces introductory subjects, and the sem-1 UF are still trying to understand university life. As they are promoted to sem-2, they come to understand the hardship of the core engineering subjects. Attention to the English language reduces as emphasis shifts to the engineering subjects that are graded; prospective engineers usually concentrate on engineering subjects rather than English Technical Writing or any other subject from the Social Sciences. Moreover, the speaking skills were not graded; hence the UF paid less attention to the English courses, saving their time for the engineering courses. Secondly, in semester 2 the English Technical Writing course was an advanced course, as were the other semester-2 courses, and the UF were concentrating on their engineering courses more than on Technical Writing. Thirdly, recording speaking performances was a new mode of submitting assignments. Fourthly, the speaking skills must be given an equitable ratio of weightage in the overall (100%) assessment of English language to retrieve better results than those of the present study.


5.4 The Recommended Weightage for ESS & its Impact

In an English language course, equal attention to literacy and oracy is justified. Observing the HEC curriculum necessitates equal teaching and testing of literacy as well as oracy to fulfill its requirement; a futuristic approach to education demands the same. One of the UELTs commented on the percentage of evaluation for ESS in the overall weightage of English language:

“(Out of 100 marks, 10 marks have been allocated for presentation skills) I think that is too

less…” (Interview, T8, 4/6/2014).

The main impediment in developing oral skills is the gap between theory and practice. The following Illustration 14 portrays the university grade sheet:

Illustration 14. Ratio in Weightage (ESS) on University Grade Sheet

English speaking skills, in the form of presentations, make a meagre contribution to the overall assessment of English language, even though they contribute to the social capital and cultural capital of the future; the target language is used for economic exchange globally.

One of the UM&A stated:

“One of the languages has to be mastered for communicating with the people around you

especially at organizational level. And presenting your case and making it useful for the


society, we got to have good communication skills, speaking (presentation) skills as well

as on the paper itself. As a university we have two basic subjects, technical English and

communicative English. Students’ presentations are encouraged in class and in final year

especially. They present their projects also. So, the English department can obviously go

in this direction…Develop your organization around it” (Interview, 10, 15/5/2014).

Another UELT commented:

“If weightage is the part and parcel of the whole curriculum and teaching process, only

then the UF would be positive. Otherwise, it is very difficult for the UF to...allocate

separate time for such activity" (Interview, T9, 25/6/2014). Being a UELT myself, I propose 50% weightage for the evaluation of ESS in the overall evaluation of the four skills in English language (Interview, T1, 5/3/2014). Low weightage obstructs the deliberate learning of ESS, whereas viable weightage for ESS in the overall assessment of English language can magnetize the UF into developing the relevant competence. Allotting 50% of the English language assessment to speaking performances is a step in the movement to promote and develop English speaking skills for the benefit of the young generation, the builders of the nation, and for Pakistan's international trade and development.
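The arithmetic of the proposed weighting can be sketched briefly. In the sketch below, the 50% share for ESS is the study's proposal, but the component scores and the equal split of the remaining 50% across the other skills are assumptions made here for illustration, not a scheme taken from the thesis or the university grade sheet.

```python
# A minimal sketch of the proposed weighting, assuming hypothetical skill
# scores out of 100 and an equal split of the non-ESS share (an assumption).
def overall_english_score(ess, reading, writing, listening, ess_weight=0.50):
    """Combine skill scores (0-100) with the proposed ESS weightage."""
    other_weight = (1.0 - ess_weight) / 3  # equal share for the other skills
    return round(ess * ess_weight
                 + (reading + writing + listening) * other_weight, 2)

# Example: a student strong on paper but weaker in speaking.
print(overall_english_score(ess=60, reading=85, writing=80, listening=75))
```

Under such a scheme, a low speaking score pulls the overall grade down noticeably, which is precisely the incentive effect the proposal relies on.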

5.4.1 Contribution of Research Study

The following Illustration 15 encapsulates the contributions of the present research study:

Illustration 15. Contribution of Research Study


As a UELT myself, I was the rater, the scorer, for both semesters. The statistics obtained from the evaluations of both semesters; the comparison between the assessments of sem-1 and sem-2; the linguistic competencies of the semesters worked out through Kim's (2010) analytic scoring rubric; and the comparatively weaker areas in the oral linguistic competence of sem-1 and sem-2, viewed in the light of Cummins' concepts of BICS and CALP, are the new insights that the present research offers into the teaching, learning, and testing of English speaking skills at university freshman level. Moreover, for the first time in an engineering university, RSPs have been used as a method of practicing ESS and as a mode of testing. The question of an equitable ratio for English speaking skills within the 100% weightage for English language has been raised; the university grade sheet has been observed strictly since the inception of the university in 2003. This research introduces a new approach to achieving the same goal of increasing the English speaking ability of the UF in mainstream classrooms or the language lab.

A number of studies have been carried out to show the status of the English language in Pakistan (Abbas, Pervaiz & Arshad, 2019; Ahmed, 2004; Coleman, 2010; Rahman, 2005; Sultana, 2009). Different approaches (task-based learning, blogs, Pakistani newspapers) have been tried to promote the target language (Ahmadian, 2016; Bakar & Latif, 2010; Baumgardner, 1987). Language learning motivation has been discussed (Akram & Ghani, 2013), and different methodologies to promote English language have been unfolded (Jabeen, 2013; Zulfiqar, 2011). Divided medium of instruction (Channa, 2014; Haidar, 2016; Kanwal, 2016; Manan, 2015; Rahman, 2001; Rahman, 2005a) and bilingualism, code-switching, and code-mixing (Gulzar, 2009; Rasul, 2006) have been examined. Problems in developing English language have been investigated (Alam & Bashir Uddin, 2013; Baumgardner, Kennedy & Shamim, 1993; Shamim, 1993; Shamim, 2006; Shamim, 2008). The Pakistani variety of English has been analyzed (Haque, 1982; Hassan, 2004; Rahman, 1990), and solutions to the problems impeding the progress of English language have been probed (Canagarajah & Ashraf, 2013; Shamim, Negash, Chuku & Demewoz, 2007; Shamim, 1993). Testing has been approved as an effective activity for teachers and learners to know where they stand (Laar, 1998). Some accommodation has been solicited, and some criteria have been recommended (Hassan, 2009) for the evaluation of students. Thus, the present


research tested ESS in running semesters at university level. This research determined a rubric to accommodate testing within the teaching and learning processes of English language in Pakistan. The current study proposes an equitable ratio for ESS in the 100% assessment of English language, something never considered before in engineering universities. The present research study explored the process of learning English language in the classroom by 1) giving the learners opportunities to practice ESS in the form of RSPs, 2) assessing the RSPs on an analytic scoring rubric, 3) sharing the rubric with the language learners, and 4) offering ways to expedite in-class feedback to promote English language.

5.4.2 Theoretical Underpinnings of Research Study

The theoretical underpinnings of the study are diversely interlinked. Language is social

capital for ‘enciphering and deciphering’, a tool for ‘economic exchange’ (Bourdieu, 1991)

(see section 1.6). English language is a medium, and as a medium it is the main source of 'cognitive social capital' (Ashraf, 2006, p. 211). The status of the English language (Crystal, 2012; Kachru, 1990) necessitates relating it to Krashen's input hypothesis, which places primary importance on the comprehensible input that language learners are exposed to and distinguishes two systems: 1) the acquired system and 2) the learned system. The present research is based on the learned system (in the classroom/language lab), connected to 'cognitive social capital' (Ashraf, 2006, p. 211), without excluding the acquired system completely. Swain's (2005) output hypothesis complements Krashen's input hypothesis in second/third/nonnative/foreign language learning: without output, the production of language (see section 5.3.6.3), the concept of input could not be complete. This learned-system base directed the study towards deliberate teaching and conscientious testing and grading; the learned system was introducible through BoGs and FBS in the present case study particularly, and in other universities and language centers generally. Basic interpersonal communicative skills (BICS) and cognitive academic language proficiency (CALP) (Cummins, 2000) matched the extent and content of the two semesters of the present study. Vygotsky's theory of language development focused on the zone of proximal development (ZPD) (Vygotsky, 1978). The ZPD is the level of linguistic development that learners attain when they engage in social (classroom/language lab) interactions with other UF; it is the scope between a learner's potential to learn and the actual learning that might take place. The current study provided in-class/language-lab social interaction to the research participants, the UF. Due to the diversity in the potential of learners, learning takes place at varied levels.

When the value of speakers' language depends on the 'market' (Bourdieu, 1991, p. 503), the speakers' language, the 'community linguistic repertoire' (Ashraf, 2006, p. 1), needs to be built up academically for the global 'linguistic market' (Haidar, 2016, p. 31). To conform to global requirements on the academic front, testing (Bachman & Palmer, 1996; Hughes, 2001; Laar, 1998; Lasagabaster, 2011; Norris, 2009) of oracy, suppressed in the process of teaching oral skills, was unshackled.

There is never a single theory but a repertoire of theories that may produce what a

researcher needs for the social good. A research study is born of an interplay of many

theories.

5.4.3 Limitations of the Study

Limitations are weaknesses in a study that lie beyond the researcher's control. The results of the present study are nevertheless suggested to be applicable to other language centers as well (Simon, 2011). This research study had more than one limitation. Epistemologically, the sample population for the study was limited; only one department of one university could be involved, and further research in more than one department might yield better understanding than the present study. Secondly, semesters 1-2 (2013-2014) were text based; however, understanding the essentials for the UF, the textbooks were customized by the department of English. Then, as a UELT, I could not pressurize the UF to submit their speaking performances in the way I could bind them to their written assignments, which the university mandated. However, systematic grading and heavier weightage for ESS could channel the UELT's motivated labour into exciting the UF to a better undertaking of English oral skills than before. Ontologically, I could not view the reality of this case study only objectively; I could not detach myself from the data analysis. Therefore, I used a combination of objectivity and subjectivity and, mindful of my positionality while conducting the research, tried to create a new reality. Theoretically, I could not record the natural interaction of the UF. While scaling the speaking performances, I had to ignore the read-aloud speaking of the UF as an initial move to encourage the language learners; however, I informed the language learners about the difference between speaking and reading by providing a few opportunities to analyze RSPs in the lab classes.

This study found that, unlike oral examination (viva voce), which promotes academic integrity (Grove, 2014), a couple of RSPs that the UF submitted were identical (their scripts of speaking performances matched). Recorded speaking performances might tempt the UF into repeating their fellows' speaking scripts. However, I warned against the offense, as I used to do for written assignments of the kind: the first time, I gave them a chance to realize the mistake and not repeat it; the second time, I cancelled the submitted performance. Such human problems could be handled per routine, and the UELTs could reduce, control, or stop such misconduct.

5.4.4 Conclusions and Recommendations

This research work added to two dimensions of teaching English language. Catering to the first research question, it examined the type of experiences that English teachers could build into class teaching (see section 3.4.3.1) and the type of testing that teachers could operate in large classes (see section 3.4.5). When ESS was not assessed, the drive to acquire better ESS subsided into the drive to achieve in other subjects that were assessed and rewarded with grades. Most of the UELTs found the UF below average in ESS, the reason being that the learners did not have opportunities to be tested and to demonstrate their acquisition and learning at academic levels.

There were two different sides of a bigger picture. In spite of the recognized benefits of English speaking skills (see sections 1.6, 1.6.1, and 1.7), they were not practiced actively (see section 1.3). It was through teaching, training, testing, grading, and assurance of academic standing that the UF could overpower their lapses (see sections 4.4.8 and 5.3.6.2). Teaching and testing practices complementing each other could steadily promote interaction in English language. Despite its contribution to learning, ESS was usually given scant attention in ELT classes (Alam & Basiruddin, 2013). The known difficulties of assessing English speaking performances, teaching ESS in large classes, and time constraints in completing the syllabus kept contributing to the bypassing of speaking skills. Yet, by necessity, speaking is one of the most vital aspects of English language for today's learners. In order to promote the ESS of the UF, the stakeholders had to take keen interest in enhancing the UF's understanding of matter and materials.

This study has undertaken the largely ignored issue of whether or not the communicative approach should focus on oral before written skills (Canale & Swain, 1980, p. 36). It was crucial for the UF to advance their ESS, and for the UELTs to keep facilitating them in achieving this. Academic pressure is one of the most powerful incentives that UELTs and UM&A can exert on the learners, and vice versa. The UF need to 'struggle for survival in the quest for knowledge' (Canagarajah, 2005, p. 244). Contradictorily, the speaking skills of the UF were neither tested nor graded like their writing skills; teachers usually assess students' speaking skills without a formal criterion (Riaz, Haidar & Hassan, 2019). The promotion of ESS is a co-construction, a combined effort of the policy makers, the teaching faculty, and the UF/English language learners in implementation and practice.

The UELTs need to find ways to involve the passive, disinterested, shy, and low-profile students beyond the UF who voluntarily participate in class interaction. Teaching researchers are to reflect conscientiously on the speaking practices of the UF to build on the available blocks of already learned ESS. The vision of capacity building in ESS needs to be elaborated to keep the UF motivated and to strengthen their ESS for better future chances. The university freshman is the product of a chain that runs from school to college, from college to university, from undergraduate to graduate, and from graduate to postgraduate study. Every link of this chain needs to be strong; ideally, language needs to be nurtured from school to postgraduate level, leaving no loophole.


Learning of English language needed to be made accessible to the UF. RSPs are an effective method of teaching, practicing, learning, and testing English language, and the processes and procedures involved respond to four of the research questions. Considering the teaching and learning practices of English language, testing and grading spoken English, and then incorporating the weightage of those earned grades was yet to be tried for the promotion of ESS. Language learners listen to their language teachers without asking a question or demanding an explanation (Ntshuntshe, 2011; Riaz, Haidar & Hassan, 2019). They need to break this habit and build a new habit of asking questions or disagreeing with a statement. The UELTs might transform the performance of students from empty vessels, 'loyal listeners' (Ervina, Simatupang, Hendar, I. Z. S., 2019, p. 22), into active participants.

However, this is challenging. Through this practice the UF might start participating in class discussions, voicing their opinions, and pursuing their points of view. The net result is a positive gain in confidence, and the confidence of the UF can bring a shift in their learning paradigm: instead of merely accepting knowledge from the teaching channels, the UF might question, experiment, reflect, and analytically make a difference to society and the community as a whole.

A common analytical framework of a scoring rubric in future research studies would strongly serve the UF's evaluation in ESS. Observing a criterion to gauge proficiency in ESS is essential for scientific grading, and sharing the rubric scales with the UF, for a clearer concept of how to reduce their shortcomings, is effective; it is a double-edged approach. Testing the undergraduates' potential through analytic scoring rubrics and measuring the proficiency level of their ESS is significant for the UELTs, the UF, the parents, and the university itself.

In the absence of evaluation, the UF did not put keen effort into enhancing their ESS, whereas regular testing would enable them to analyze where they stand, what to improve, and how to improve it. Assessment determines the status of learning and promotes greater learning (Stiggins, 2002). Regular testing and grading of ESS at UF level would soften the brunt of future tests.


All in all, this study offers a comprehensive practice: in a large class of language learners, within the limited time from one semester to the next, a UELT could manage the requisites and bring out 'adequate' ESS by observing a rubric. It is the beginning of research in assessment literacy (Popham, 2001) in ESS. Through recurrent practice, a better and wider impact could be achieved. Once the speaking skill attains equity in the teaching and testing processes of English language, its weightage in the overall assessment can enhance the productivity of the UF's international functionality. Thus, a scoring rubric reinforces the efforts of language raters to improve the speaking processes of the UF.

5.4.5 Implications and Future Research Prospects

I, as a UELT researcher, acknowledge that further aspects are likely to emerge from the study of a wider sample. The goal of producing practical engineers must be complemented by the goal of producing efficient speakers. Departmental collaboration could combine the technical knowledge of other departments with ESS, a proposition that might be pursued by the department of English and the other departments collaboratively in future research. The competing core interests of different departments could be balanced, with the English department's focus on enhancing ESS placed at par with, for instance, Mechatronics' focus on the UF's comprehension of concepts and theory, so that the UF could demonstrate understanding through practical display, be it in the native language, a mix of native and international language, or a lingua franca (a hybrid language).

The present study can be replicated in different departments of the same or different universities to find out whether the results diverge. This work invites research in the field of oral English assessment literacy (Taylor, 2009); a step forward would most probably offer better solutions. Moreover, the UELTs have to work towards a 'Futures Curriculum', that is, a curriculum which actively discusses the future and prepares students for their lives ahead (Littlejohn, 2014, p. 7). More research is endorsed with different kinds of institutions.


During the process of the present research, I found that the boundaries of English as a lingua franca, international language, world Englishes, Paklish, Pakistani English, Hong Kong English, and 'English as a global language, English-based pidgins and creoles' (Pawlak & Waniek-Klimczak, 2014) are blurred. 'The emphasis in world English initially should be on justifying the very existence of world Englishes and their viability' (Bamgbose, 2003, p. 427). Banking on the blurred boundaries of global English, this research seeks to incorporate some criterion by keeping the Englishness of English (see section 2.13) for the development of English speaking skills, while possibly retaining its communicative flavor. The UF are supposed to have a controlled freedom to learn ESS to the best of their capacity: the criterion to gauge linguistic capability was the 'control', and world Englishes were the 'freedom' that the UF exercised due to their Paklish/Pakistani English (Hassan, 2004; Rahman, 1990).

The present research highlighted the need for attention to the English speaking ability of
the UF through assessment procedures. Enhancing the ESS of the UF was a matter of
concern, and evaluating ESS was difficult, but the challenges were met by applying an
analytic scoring rubric. This study set out to explore the teaching and testing of ESS at the
UF level. It identified the types and practices of teaching and testing ESS at this level, and
the rationale and impetus for the heterogeneity in those practices. It validated ways in
which the UF could be taught oral skills. The study verified the factorial structure of the
speaking test and explored the extent of the raters' and the tasks' contributions to students'
speaking performance. For regulating learners' speaking ability, this research emphasized
constant, combined efforts of the UELTs and the UF to enhance ESS, without neglecting
the training of the trainers (see section 1.2), that is, educating teachers through professional
programs and adding productive exposure in the form of interaction in English to exchange
ideas with academicians and intellectuals. Participation in different task-based activities
and transforming roles helped the UF learn to function linguistically. Recording the
speaking performances of the UF as a means of contributing to their speaking ability is
recommended.

The present research introduced a criterion for measuring students' speaking skills using
different techniques, such as allowing students to record their utterances. Introducing a
criterion for measuring the English speaking skills of the UF of the Department of
Mechatronics Engineering was a new move. It developed a mechanism for measuring the
speaking performances of students that brought a considerable positive change in their
speaking skills (Riaz, Haidar, & Hassan, 2019). The hypothesis was tested, and it was
found that the UF developed their ESS adequately (see Table 4.10) when taught and
assessed. Hence, if the tested procedures are applied, this study contributes to solving a
crucial problem: the English-speaking ability of students at universities.

REFERENCES

Abbas, F., Pervaiz, A., & Arshad, F. (2019). The competing status of Urdu and English

after declaration of Urdu as official language in Pakistan. Journal of Research

(Urdu), 34 (1), 142-158.

Agnihotri, R. K. (2007). “Identity and multilinguality: The case of India”. In A. B. M. Tsui,

& J. W. Tollefson (Eds.), Language policy, culture, and identity in Asian contexts

(pp. 185-204). Mahwah, NJ: Lawrence Erlbaum Associates.

Ahmed, N. (2004). An evaluative study of the English course at the Intermediate Level.

NUML Research Magazine, 1, 45-55.

Ahmadian, M. J. (2016). Task-based language teaching and learning. The Language
Learning Journal, 44(4), 377-380. https://doi.org/10.1080/09571736.2016.1236523

Ahmed, S., Mahmood, A., Hasan, A., Sidhu, G. A. S., & Butt, M. F. U. (2016). A

comparative review of China, India and Pakistan renewable energy sectors and

sharing opportunities. Renewable and Sustainable Energy Reviews, 57, 216-225.

Alam, Q., & Bashir Uddin, A. (2013). Improving English oral communication skills of

Pakistani public school’s students. International Journal of English Language

Teaching, 1(2), 17-36.

Aleksandrzak, M. (2011). Problems and challenges in teaching and learning speaking at
advanced level. Glottodidactica, 37. Wydawnictwo Naukowe UAM.
http://hdl.handle.net/10593/1680

Alexander, R. (2015). Dialogic teaching. Retrieved May 02, 2015, from
http://www.robinalexander.org.uk/dialogic-teaching/

Ammon, U. (2000). “Towards more fairness in international English: Linguistic rights of

non-native speakers?” In R. Phillipson (Ed.), Rights to language: Equity, power

and education (pp.111–116). Mahwah, NJ: Lawrence Erlbaum.

Anderson, T. (2016). Theories for learning with emerging technologies. Emergence and

innovation in digital learning: Foundations and Applications, 35-50.

Andrade, H. G. (2005). Teaching with rubrics: The good, the bad, and the ugly. College

Teaching, 53(1), 27-31.

Annamalai, E. (2004). Medium of power: The question of English in education in India. In

J. W. Tollefson, & A. B. M. Tsui (Eds.), Medium of instruction policies: Which

agenda? Whose agenda? (pp. 177-194). Mahwah, NJ: Lawrence Erlbaum

Associates.

Ashraf, H. (2006). A study of language learning as an element affecting the social capital
of the people of Pakistan (Unpublished doctoral dissertation). National University
of Modern Languages, Islamabad, Pakistan.

Ashraf, H., Riaz, N., & Zulfiqar, I. (2008). Identifying Gaps between College English

Education and University Requirements: Abstract. In We Care, We Share, We are

the ELT World, Program Book, Society of Pakistan English Language Teachers,

24th Annual Conference, Islamabad, Pakistan. Oxford University Press.

Ashworth, P., Bannister, P., & Thorne, P. (1997). Guilty in whose eyes? University
students' perceptions of cheating and plagiarism in academic work and assessment.
Studies in Higher Education, 22(2), 187-203.

Atherton, J. (2010). Learning and teaching: SOLO taxonomy.

Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice: Designing and

developing useful language tests (Vol. 1). Oxford University Press.

Bachman, L. F. (2002). Some reflections on task-based language performance assessment.

Language Testing, 19(4), 453-476.

Bachman, L. F. (2004). Basic concepts and terms. In L. F. Bachman (Ed), Statistical

analyses for language assessment book, Cambridge language assessment (pp. 1-

11). Cambridge: Cambridge University Press.

Bailey, K. M., & Savage, L. (Eds.). (1994). New ways in teaching speaking. Bloomington,

Illinois, USA: TESOL.

Bakar, N. A., & Latif, H. (2010). ESL Students feedback on the use of blogs for language

learning. 3L: Language, Linguistics, Literature, 16(1).

Bamgbo, A. (2003). A recurring decimal: English in language policy and planning. World

Englishes, 22(4), 419-431.

Banyard, V. L., & Miller, K. E. (1998). The powerful potential of qualitative research for

community psychology. American Journal of Community Psychology, 26(4), 485-

505.

Batool, Z., & Qureshi, R. H. (2007). Quality assurance manual for higher education in

Pakistan. Higher Education Commission, Pakistan.

Baumgardner, R. J. (1987). Utilizing Pakistani newspaper English to teach grammar.

World Englishes, 6(3), 241-252.

Baumgardner, R. J., Kennedy, A. E., & Shamim, F. (1993). The Urduization of English in

Pakistan. The English language in Pakistan, 83-203.

Bejar, I. I. (2012, Fall). Rater Cognition: Implications for Validity. Educational

Measurement: Issues and Practice, 31(3), 2-9.

Bosetti, L., & Walker, K. (2010). Perspectives of UK vice‐chancellors on leading

universities in a knowledge‐based economy. Higher Education Quarterly, 64(1), 4-

21.

Bourdieu, P. (1991). Language and symbolic power. Harvard University Press.

Bourke, B. (2014). Positionality: Reflecting on the research process. The Qualitative

Report, 19(33), 1-9. Retrieved from https://nsuworks.nova.edu/tqr/vol19/iss33/3

Bowdon, M. A. (2014). Tweeting an Ethos: Emergency Messaging, Social Media, and

Teaching Technical Communication. Technical Communication Quarterly, 23(1),

35-54.

Breen, M. (1987). Learner contributions to task design. Language Learning Tasks, 7, 23-

46.

Bresnihan, B. (1994). Conversation Talking Zone. In K. M. Bailey, & L. Savage (Eds.),

New ways in teaching speaking (p. 305). Bloomington, Illinois, USA: TESOL.

Brown, A., Iwashita, N., & McNamara, T. (2005). An examination of rater orientations
and test-taker performance on English-for-academic-purposes speaking tasks.
Princeton, NJ: Educational Testing Service.

Brumfit, C. (1986). The practice of communicative teaching (Vol. 124). Pergamon Press.

Bryman, A. (2003). Quantity and quality in social research (Vol. 18). Routledge.

Bryman, A. (2004). Triangulation and measurement. Department of Social Sciences,
Loughborough University, Loughborough, Leicestershire.
www.referenceworld.com/sage/socialscience/triangulation.pdf

Bryman, A. (2007). Effective leadership in higher education: A literature review. Studies

in Higher Education, 32(6), 693–710.

Buckwalter, P. (2001). Repair sequences in Spanish L2 dyadic discourse: A descriptive

study. The Modern Language Journal, 85(3), 380-397.

https://www.jstor.org/stable/1193107

Burstall, C. (1965). Language teaching: A scientific approach. Springer

https://www.jstor.org/stable/3442162

Bygate, M. (2009). “Teaching and Testing Speaking”. The handbook of language teaching,

412.

Bygate, M. (2011). Tasks in classrooms: Developing TBLT as a researched pedagogy:

Invited Colloquia. In 4th Biennial International TBLT Conference, The University

of Auckland.

Canale, M., & Swain, M. (1980). Theoretical bases of communicative approaches to

second language teaching and testing. Applied Linguistics, 1(1), 1-47.

Canagarajah, S. (2005). “Introduction”. In S. Canagarajah (Ed.), Reclaiming the local in

policy and practice (pp. xiii–xxx). Mahwah, NJ: Lawrence Erlbaum.

Canagarajah, S., & Ashraf, H. (2013). Multilingualism and education in South Asia:

Resolving policy/practice dilemmas. Annual Review of Applied Linguistics, 33,

258-285.

Carroll, J. B. (1971). Current issues in psycholinguistics and second language teaching.
TESOL Quarterly, 101-114.

Carter, R., & McCarthy, M. (1997). Exploring spoken English (Vol. 2). Cambridge

University Press.

Carver, T. K., & Fotinos-Riggs, S. D. (1998). A conversation book 2: English in everyday

life. White Plains, NY: Longman.

Chamberlin, K., Yasué, M., & Chiang, I. C. A. (2018). The impact of grades on student

motivation. Active Learning in Higher Education, 1469787418819728.

Chamot, A. U. (2004). Issues in language learning strategy research and

teaching. Electronic Journal of Foreign Language Teaching, 1(1), 14-26.

Channa, L. A. (2014). English medium for the government primary schools of Sindh,
Pakistan: An exploration of government primary school teachers' attitudes
(Doctoral dissertation). The University of Georgia.

Chen, J. F., Warden, C. A., & Chang, H.-T. (2005, December). Motivators that do not

motivate: The case of Chinese EFL learners and the influence of culture on

motivation. TESOL Quarterly, 39(4), 609-633.

Cheng, L. (2008). The key to success: English language testing in China. Language

Testing, 25(1), 15-37.

Cheng, L., & Curtis, A. (2010). English language assessment and the Chinese learner.

Routledge.

Chomsky, N. (1959). A review of B. F. Skinner's Verbal Behavior. Language, 35(1), 26-58.

Clandinin, D. J., & Connelly, F. M. (1996, April). Teachers' professional knowledge

landscapes: Teacher stories. Stories of teachers. School stories. Stories of schools.

Educational Researcher, 25(3), 24-30.

Clark, B. R. (1997). The modern integration of research activities with teaching and

learning. The Journal of Higher Education, 68(3), 241-255.

Clark, B. R. (1987). The academic life: Small worlds, different worlds. A Carnegie
Foundation special report. Princeton, NJ: Princeton University Press.

Clayton, T. (2006). Language choice in a nation under transition (pp. 207-239). Springer

US.

Clipson-Boyles, S. (1998). Developing oracy through drama. In An introduction to
oracy (p. 242). Herndon, VA: Cassell.

Colbeck, C. L. (1998). Merging in a seamless blend: How faculty integrate teaching and

research. The Journal of Higher Education, 69(6), 647-671.

Coleman, H. (2010). Teaching and learning in Pakistan: The role of language in education.

Islamabad: The British Council, 148-157.

Cook, V. (2016). Second language learning and language teaching (4th ed.). London:
Hodder Education.

Creswell, J. (2012). Qualitative inquiry and research design: Choosing among five
approaches. London: Sage.

Creswell, J. W., & Poth, C. N. (2016). Qualitative inquiry and research design: Choosing

among five approaches. Sage publications.

Crowther, F., Ferguson, M., & Hann, L. (2009). Developing teacher leaders: How teacher

leadership enhances school success. Corwin Press.

Crystal, D. (2008). Two thousand million? English Today, 24(1), 3-6.

Crystal, D. (2012). English as a global language. Cambridge university press.

Cullen, B. (1998). Brainstorming Before Speaking Tasks. The Internet TESL Journal, IV

(7).

Cummins, J. (2000). Language, power and pedagogy: Bilingual children in the

crossfire (Vol. 23). Multilingual Matters.

Cummins, J. (2003). Basic interpersonal communicative skills and cognitive academic

language proficiency. BICS and CALP. Accessed on July, 25, 2010.

Curriculum Division, HEC. (Revised 2009). Curriculum of Computer Engineering

B.E/BSC. Higher Education Commission Islamabad, Curriculum Division.

Islamabad: HEC.

Dawes, L. (2013). Talking points: discussion activities in the primary classroom.

Routledge.

Deci, E. L., & Ryan, R. M. (2010). Intrinsic motivation. The Corsini Encyclopedia of

Psychology, 1-2.

Demirezen, M. (1988). Behaviorist theory and language learning. Hacettepe Üniversitesi

Eğitim Fakültesi Dergisi, 3(3).

Demuth, K. (1986). Prompting routines in the language socialization of Basotho

children. Language socialization across cultures, 51-79.

Denzin, N.K. (1970), The Research Act in Sociology, Chicago: Aldine.

Denzin, N. K. (1989). Interpretive biography (Vol. 17). Sage.

Donato, R. (1994). Collective scaffolding in second language learning. Vygotskian

approaches to second language research, 33456.

Dörnyei, Z. (1990). Conceptualizing motivation in foreign‐language learning. Language

learning, 40(1), 45-78.

Dornyei, Z. (2005). The psychology of the language learner: Individual differences in

second language acquisition. Lawrence Erlbaum Associates, Inc., Publishers

Dornyei, Z. (2007). Research Methods in Applied Linguistics. New York: Oxford

University Press.

Doult, W., & Walker, S. A. (2014). ‘He's gone and wrote over it’: the use of wikis for

collaborative report writing in a primary school classroom. Education 3-13, 42(6),

601-620.

Dörnyei, Z. (2007). Creating a motivating classroom environment. In International

handbook of English language teaching (pp. 719-731). Springer, Boston, MA.

Du, X. (2013). English Grammar Automatic Output Model under Non-native Environment.

Theory and Practice in Language Studies, 3(1), 29-34.

Ducate, L. C., & Lomicka, L. L. (2008). Adventures in the blogosphere: from blog readers

to blog writers. Computer Assisted Language Learning, 21(1), pp. 9-28.

Duit, R., & Treagust, D. F. (2012). How can conceptual change contribute to theory and
practice in science education. In Second international handbook of science
education (pp. 107-118). Springer, Dordrecht.

Dunn, W. E., & Lantolf, J. P. (1998). Vygotsky's zone of proximal development and

Krashen's i+ 1: Incommensurable constructs; incommensurable theories. Language

learning, 48(3), 411-442.

Durrani, M. (2012). Banishing colonial specters: Language ideology and education policy

in Pakistan. Working Paper in Educational Linguistics, 27(1), 29-49.

Eagleton, T. (2011). Literary theory: An introduction. John Wiley & Sons.

England, K. V. (1994). Getting personal: Reflexivity, positionality, and feminist

research. The professional geographer, 46(1), 80-89.

English, B. (2009, October 02). Who is responsible for educating English language

learners? Discursive construction of roles and responsibilities in an inquiry

community. Language and Education, 23(6), 487-507.

Simatupang, E. C. M., Hendar, & Supri, I. Z. (2019). The impact of using Oraiapp.com
on improving students' speaking skill for non-native speakers. Universal Journal
of Educational Research, 7(4A), 22-26. doi:10.13189/ujer.2019.071404

Etikan, I., Musa, S. A., & Alkassim, R. S. (2016). Comparison of convenience sampling

and purposive sampling. American Journal of Theoretical and Applied

Statistics, 5(1), 1-4.

Evans, S. (2002). Macaulay's minute revisited: Colonial language policy in nineteenth-

century India. Journal of Multilingual and Multicultural Development, 23(4), 260-

281.

Flowerdew, J., & Miller, L. (2005). Second language listening: Theory and practice.

Cambridge University Press.

Shamim, F. (2008). Trends, issues and challenges in English language education in

Pakistan. Asia Pacific Journal of Education, 28(3), 235-249.

Freire, P. (1970). Cultural action and conscientization. Harvard Educational Review,

40(3), 452-477.

Fry, S. W., & Villagomez, A. (2012). Writing to Learn: Benefits and Limitations. College

Teaching, 60(4), pp. 170-175.

Fulcher, G. (2003). Testing second language speaking. Pearson Education.

Fulcher, G. (2012). Assessment literacy for the language classroom. Language Assessment

Quarterly, 9(2), 113-132.

Fulcher, G. (2014). Testing second language speaking. Routledge.

Gatenby, E. V. (1948). Reasons for failure to learn a foreign language (Part 2). ELT

Journal, 2(5), 134-139.

Gardiner, L. F. (1998). Why we must change: The research evidence. Thought and

Action, 14(1), 71-88.

Giri, R. A. (2005). The adaptation of language testing models to national testing of school

graduates in Nepal: Processes, problems and emerging issues. (Doctoral

dissertation), Victoria University of Technology.

Goldenberg, C. N. (1991). Instructional conversations and their classroom

application (Vol. 2). National Center for Research on Cultural Diversity and

Second Language Learning.

Goldenberg, C. (2008). Teaching English language learners: What the research does-and

does not-say. https://digitalcommons.georgiasouthern.edu/esed5234-master/27

Government of Pakistan. (2009). National education policy 2009.

Greenfield, R. (2003). Collaborative e-mail exchange for teaching secondary ESL: A case

study in Hong Kong. Language Learning & Technology, 7(1), 46-70.

Guilloteaux, M. J., & Dörnyei, Z. (2008). Motivating language learners: A
classroom-oriented investigation of the effects of motivational strategies on
student motivation. TESOL Quarterly, 42(1), 55-77.

Gulzar, M. A. (2009). Classroom discourse in bilingual context: Effects of code switching
in language learning in Pakistani TEFL classroom (Doctoral dissertation).
National University of Modern Languages, Islamabad, Pakistan.
http://prr.hec.gov.pk/jspui/bitstream/123456789/1865/1/1385S.pdf

Haidar, S., & Fang, F. (2019). English language in education and globalization: A

comparative analysis of the role of English in Pakistan and China. Asia Pacific

Journal of Education, 39(2), 165-176.

Haidar, S., Farrukh, F., & Dar, S. R. (2019). Desire for English in youth: An exploratory

study of language learners in Pakistan. Journal of Education and Educational

Development, 6(2), 288-307

Haidar, S. (2016). Passport to privilege: Access to English in different school systems in

Pakistan. (Unpublished doctoral dissertation) Rochester, New York: University of

Rochester.

Haidar, S. (2019). Access to English in Pakistan: Inculcating prestige and leadership

through instruction in elite schools. International Journal of Bilingual Education

and Bilingualism, 22(7), 833-848.

Haidar, S. (2018). The role of English in developing countries: English is a passport to

privilege and needed for survival in Pakistan. English Today, 1-7.

Hale, T. A. (1982). From Written Literature to the Oral Tradition and Back: Camara Laye,

Babou Condé, and Le Maître de la Parole: Kouma Lafôlô Kouma. French Review,

790-797.

Hall, J. K. (1993). The role of oral practices in the accomplishment of our everyday lives:

The sociocultural dimension of interaction with implications for the learning of

another language1. Applied linguistics, 14(2), 145-166.

Hamid, M. O., Jahan, I., & Islam, M. M. (2013). Medium of instruction policies and

language practices, ideologies and institutional divides: Voices of teachers and

students in a private university in Bangladesh. Current Issues in Language

Planning, 14(1), 144-163.

Hand, M., & Levinson, R. (2012). Discussing Controversial Issues in the Classroom.

Educational Philosophy and Theory, 44(6), 614-629.

Haque, A. R. (1982). The position and status of English in Pakistan. World Englishes, 2(1),

6-9.

Harmer, J. (2007). The practice of English language teaching. Harlow. English: Pearson

Longman.

Hassan, R. (2004). Aspects of Psycholinguistics. National University of Modern

Languages, Islamabad

Hassan, R. (2004). Remaking English in Pakistan (Remedial phonology for Pakistani

Students). Islamabad, Punjab, Pakistan: National University of Modern Languages.

Hassan, R. (2009). Teaching Writing to Second Language Learners. Bloomington, IN

Hughes, A. (1989). Testing for language teachers. Cambridge: Cambridge University
Press.

Hymes, D. (1972). On communicative competence. In J. B. Pride & J. Holmes (Eds.),
Sociolinguistics (pp. 269-293). Harmondsworth: Penguin.

He, H. (2013). On FL learners' individual differences in grammar learning and their
grammatical competence training. Theory and Practice in Language Studies, 3(8),
1369-1374.

Heinmiller, L. E. (1921). The head of department. The High School Journal, 4(7), 149-151.

Herani, G. M., Mugheri, M. S., & Advani, A. (2015). Measuring the endeavors’ Impact of

Quality Enhancement Cell on Quality of Higher Education system in Pakistan. A

case of Private and Public Universities in Pakistan. Journal of Management for

Global Sustainable Development, 1(1), 37-39.

Heritage, M. (2007, October). Formative Assessment: What Do Teachers Need to Know

and Do? PHI DELTA KAPPAN, 89(02), 140-145.

Hina, K., & Ajmal, M. (2016). Quality Assurance and Enhancement Mechanism in Tertiary

Education of Pakistan: Recent Status, Issues and Expectations. Pakistan Journal of

Education, 33(1). doi:http://dx.doi.org/10.30971/pje.v33i1.13

Holderness, J. (1998). A communication framework for English as an additional
language (EAL) learners. In An introduction to oracy/Frameworks for talk (p.
242). Herndon, VA: Cassell.

Holderness, J., & Lalljee, B. (Eds.). (1998). An introduction to oracy/Frameworks for
talk. Herndon, VA: Cassell.

Holliday, A. (2005). The struggle to teach English as an international language. Oxford,

England: Oxford University Press

https://www.hec.gov.pk/english/services/universities/ORICs/Pages/default.aspx

https://au.edu.pk/QEC/Manual_Doc/UPR-Final-4th-sep-13-v0.1.pdf

Howatt, A. P. R., & Widdowson, H. G. (2004). A history of ELT. Oxford University Press.

Hsieh, H. F., & Shannon, S. E. (2005). Three approaches to qualitative content

analysis. Qualitative health research, 15(9), 1277-1288.

Hubbard, R. S., & Power, B. M. (1993). The art of classroom inquiry. Portsmouth, NH:

Heinemann.

Hughes, A. (2001). Testing for language teachers (13th printing). Cambridge:
Cambridge University Press.

Ilahi, M. (2013). Linguistic Disharmony, National Language Authority and Legislative

Drafting in Islamic Republic of Pakistan. Eur. JL Reform, 15, 400.

Jabeen, I. (2013). English Language Teaching: Implementing Collaborative Language

Learning Approach in Federal Colleges of Pakistan (Unpublished PhD Thesis).

National University of Modern Languages, Islamabad, Pakistan.

http://prr.hec.gov.pk/jspui/handle/123456789/2404

Jafri, I. H., Zai, S. Y., Arain, A. A., & Soomro, K. A. (2013). English background as the

predictors for students’ speaking skills in Pakistan. Journal of Education and

Practice, 4(20), 30-35.

Jenkins, J. (2006). Current perspectives on teaching World Englishes and English as a

lingua franca. TESOL Quarterly, 40(1), 157-181.

Jha, G. N. (2010, May). The TDIL Program and the Indian Language Corpora Initiative

(ILCI). In LREC.

Jones, R. (1979). Performance testing of second language proficiency. Concepts in

Language Testing. Washington: TESOL, 50-57.

Joseph, J. E. (2004). Case Study 1: The new quasi-nation of Hong Kong. In Language and

identity (pp. 132-161). Houndmills, England: Palgrave Macmillan.

Kachru, B. B. (1975). Lexical innovations in south Asian English. International Journal of

the Sociology of Language, 1975(4), 55-74.

Kachru, B. B. (1986). The alchemy of English: The spread, functions, and models of non-

native Englishes. University of Illinois Press.

Kachru, B. B. (1990). World Englishes and applied linguistics. World Englishes, 9(1),
3-20.

Kachru, B. B. (1991). World Englishes and Applied Linguistics. 29p. In: Tickoo, Makhan

L., Ed. Languages & Standards: Issues, Attitudes, Case Studies; FL 019 461.

http://files.eric.ed.gov/fulltext/ED347805.pdf

Kachru, B. B. (1992). World Englishes: Approaches, issues and resources. Language

teaching, 25(1), 1-14.

Kachru, B. B. (1992). “Teaching World Englishes”. In B. B. Kachru (Ed.), The other

tongue. English across cultures (2nd éd. pp. 355-365). Urbana, IL: University of

Illinois Press.

Kamran, R. (2008). Attitudinal undercurrents in second language learning. National

University of Modern Languages Research Magazine, 2.

Kanwal, A. (2016, October 7). Effects of Socially Stratified Education on Linguistic

Performance. (Unpublished PhD Thesis). National University of Modern

Languages, Islamabad, Pakistan.

http://prr.hec.gov.pk/jspui/handle/123456789/9212

Karaoglu, S. (2008, June). Motivating Language Learners to Succeed. TESOL

international Association, 5(2), 1-3.

Kasper, G. (1985). Repair in foreign language teaching. Studies in Second Language

Acquisition, 7(2), 200-215. https://www.jstor.org/stable/44487343

Khan, R. & Chaudhury, T. A. (2012). The Bangladeshi employment sector: Employer

perspectives concerning English proficiency. Indonesian Journal of Applied

Linguistics, 2(1), 116-129.

Khalique, H. (2007). The Urdu-English relationship and its impact on Pakistan’s social

development. The Annal of Urdu Studies, (22), 99-112.

Kim, H. J. (2010). Investigating the Construct Validity of a Speaking Performance Test.

Spaan Fellow Working Papers in Second or Foreign language assessment, 8, 1-30.

Kim, M. (2009). The Impact of an elaborated assessee’s role in peer assessment.

Assessment & Evaluation in Higher Education, 34(1), pp. 105-114.

Kimme Hea, A. C. (2014). Social Media in Technical Communication. Technical

Communication Quarterly, 23(1), pp. 1-5.

Kirkpatrick, A. (2008). English as the official working language of the Association of

Southeast Asian Nations (ASEAN): Features and strategies. English today, 24(2),

27-34.

Kirkpatrick, A. (2014, March 11). Researching English as a Lingua Franca in Asia: the

Asian Corpus of English (ACE) project. Asian Englishes, 13(1), 4-18.

Klesmer, H. (1993). Development of ESL Achievement Criteria as a Function of Age and

Length of Residence in Canada. ESL Achievement Project. North York Board of

Education (Ontario). FL 024 633

Konishi, H., Kanero, J., Freeman, M. R., Golinkoff, R. M., & Pasek, K. H. (2014, August

04). Six Principles of Language Development: Implications for Second Language

Learners. Developmental Neuropsychology, 39(5), 404-420.

Konno, N., Nonaka, I., & Ogilvy, J. (2014). Scenario Planning: The Basics. World Futures:

The Journal of New Paradigm Research, 70(1), 28-43.

Krashen, S. D. (1976). Formal and informal linguistic environments in language
acquisition and language learning. TESOL Quarterly, 157-168.

Krashen, S. (1982). Principles and practice in second language acquisition. Pergamon Press

Inc. ISBN 0-08-028628-3

Krashen, S. D., & Terrell, T. D. (1983). The natural approach: Language acquisition in

the classroom.

Krashen, S. D. (2003). Explorations in language acquisition and use. Portsmouth, NH:
Heinemann.

Kroll, J., & Dai, F. (2013). Reading as a writer in Australia and China: Adapting the

workshop. New Writing, 11(1), 77-91.

Kumaravadivelu, B. (2003). A postmethod perspective on English language

teaching. World Englishes, 22(4), 539-550.

Kunda, Z., Sinclair, L., & Griffin, D. (1997). Equal ratings but separate meanings:

Stereotypes and the construal of traits. Journal of Personality and Social

Psychology, 72(4), 720.

Kusaka, L. L., & Robertson, M. (2006). Beyond language: Creating opportunities for

authentic communication and critical thinking. Gengo toBunka, 14, 21-38.

Laar, B. (1998). Play and inventive activity. In J. Holderness & B. Lalljee (Eds.), An
introduction to oracy/Frameworks for talk (pp. 1-242). Wiltshire: Cassell.

Lasagabaster, D. (2011). English achievement and student motivation in CLIL and EFL

settings. Innovation in language Learning and Teaching, 5(1), 3-18.

Lalljee, B. (1998). Using Talk across the Curriculum. In An Introduction to Oracy/

Frameworks for Talk (p. 242). Herndon, VA 20172: Cassell.

Lambert, V., & Murray, E. (2003). English for Work/ Everyday Technical English.

Longman.

Lambert, W. E., Genesee, F., Holobow, N., & Chartrand, L. (1993). Bilingual education

for majority English-speaking children. European Journal of Psychology of

Education, 8(1), 3.

Lardiere, D. (2009). Some thoughts on the contrastive analysis of features in second

language acquisition. Second language research, 25(2), 173-227.

Lee, Y.-A. (2006, Dec.). Respecifying display questions: Interactional resources for

language teaching. Teachers of English to Speakers of Other Languages, Inc.

(TESOL), 40(4), 691-713.

Lenneberg, E. H. (1967). The biological foundations of language. Hospital Practice, 2(12),

59-67.

Liao, Y. F. (2004). Issues of validity and reliability in second language performance

assessment. Studies in Applied Linguistics and TESOL, 4(2).

Lin, N. (2017). Building a network theory of social capital. In Social capital (pp. 3-28).

Routledge.

Littlejohn, A. (2014). Language Teaching for the Future, Revisited. MEXTESOL Journal,

Vol. 38(3)

Littlejohn, A. (1998). Language teaching for the millennium. English Teaching

Professionals, 8, 3-5.

Littlejohn, A. (2013). The social location of language teaching: From zeitgeist to

imperative. ELT in a changing world: Innovative approaches to new challenges, 3-

16.

Longo, B. (2014). Using Social Media for Collective Knowledge-Making: Technical

Communication between the Global North and South. Technical Communication

Quarterly, 23(1), pp. 22-34.

Luoma, S. (2004). Assessing speaking. Cambridge: Cambridge University Press.

Madhooshi, M., & Samimi, M. H. J. (2015). Social entrepreneurship & social capital: A

theoretical analysis. American Journal of Economics, Finance and

Management, 1(3), 102-112.

Mahmood, K. (2016). Overall Assessment of the Higher Education Sector. Higher

Education Commission (HEC): H-9, Islamabad, Pakistan, 1-80.

Mahmood, M. A. (2009). A corpus based analysis of Pakistani English (Doctoral

dissertation), Bahauddin Zakariya University, Multan.

Manan, S. A. (2015). Mapping mismatches: English-medium education policy, perceptions

and practices in the low-fee private schools in Quetta Pakistan (Doctoral

dissertation), University of Malaya, Kuala Lumpur, Malaysia.

Manan, S. A., Dumanig, F. P., & David, M. K. (2017). The English-medium fever in

Pakistan: Analyzing policy, perceptions and practices through additive

bi/multilingual education lens. International Journal of Bilingual Education and

Bilingualism, 20(6), 736-752.

Marsden, E., Mitchell, R., & Myles, F. (2013). Second Language Learning Theories.

Routledge.


Masgoret, A. M., & Gardner, R. C. (2003). Attitudes, motivation, and second language

learning: A meta‐analysis of studies conducted by Gardner and

associates. Language learning, 53(S1), 167-210.

Mathews, S. M. (2018). Language skills and secondary education in India. Economic and

Political Weekly, 53(15), 20-2

Ahmadian, M. J. (2016). Task-based language teaching and learning. The Language

Learning Journal, 44(4), 377-380. DOI: 10.1080/09571736.2016.1236523

Majhanovich, S. (2013). English as a tool of neo-colonialism and globalization in Asian

contexts. In Y. Hébert, & A. Abdi (Eds.), Critical perspectives on international

education (pp. 249-261). Rotterdam: Sense Publishers.

McArthur, T. B., McArthur, T., & McArthur, R. (Eds.). (2005). Concise Oxford companion

to the English language. Oxford University Press, USA.

McLeod, S. A. (2007). B. F. Skinner: Operant conditioning. Retrieved September 9, 2009.

Marshall, C. & Rossman, G. (2011). Designing qualitative research, (5th Ed.). London:

Sage.

McMillan, J. H. (2000). Fundamental assessment principles for teachers and school

administrators. Practical Assessment, Research & Evaluation, 7(8), 89-103.

McNamara, T. F. (1996). Measuring second language performance. Addison Wesley

Longman.


McNamara, T. F. (1997). ‘Interaction’ in second language performance assessment: Whose

performance?. Applied Linguistics, 18(4), 446-466.

McNamara, T. (2006). Validity in language testing: The challenge of Sam Messick's

legacy. Language Assessment Quarterly, 3(1), 31-51.

Memon, G. R. (2007). Education in Pakistan: The key issues, problems and the new

challenges. Journal of Management and Social Sciences, 3(1), 47-55.


Mislevy, R. J., Steinberg, L. S., & Almond, R. G. (2002). Design and analysis in task-

based language assessment. Language testing, 19(4), 477-496.

Mitchell, R., Myles, F., & Marsden, E. (2013). Second language learning theories

(3rd ed.). New York: Routledge.

Mulvahill, E. (2018). Understanding Intrinsic vs. Extrinsic Motivation in the Classroom.

We are teachers, Supporting Students, Classroom Ideas. Retrieved on 26 July,

2019. https://www.weareteachers.com

Murphy, H. A., Hildebrandt, H. W., & Thomas, J. P. (1998). Effective business

communications (7th ed., international ed.). New York: McGraw-Hill.

ISBN 0-07-114507-9

Mustafa, Z. (2011). Tyranny of language in education, the problems and its solutions

Karachi: Ushba Publishing International.

Nakatani, Y. (2010). Identifying strategies that facilitate EFL learners' oral

communication: A classroom study using multiple data collection procedures. The

Modern Language Journal, 94(1), 116-136.

National Education Policy (2009). Ministry of Education, Government of Pakistan.


Nawab, A. (2012, March). Is it the way to teach language the way we teach language?

English language teaching in rural Pakistan. Academic Research International,

2(2), 696-705.

Newmark, L. (1966). How not to interfere with language learning. International Journal

of American Linguistics, 32(1), 77-83.

Norris, J. M. (2009). Task-based teaching and testing. In The handbook of language

teaching (p. 578). Wiley-Blackwell. DOI: 10.1002/9781444315783.ch30

Norton, B., & Kamal, F. (2003). The imagined communities of English language learners

in a Pakistani school. Journal of Language, Identity, and Education, 2(4), 301-317.

Ntshuntshe, N. A. (2011). Literacy practices and English as the language of learning and

teaching in a grade nine classroom (Master dissertation), University of the Western

Cape. South Africa.

Nunan, D. (2003). Practical English language teaching. New York: McGraw-Hill.

O'Donnell, A. M., Reeve, J., & Smith, J. K. (2011). Educational Psychology: Reflection

for Action. John Wiley & Sons.

Onwuegbuzie, A. J., & Leech, N. L. (2005). On becoming a pragmatic researcher: The

importance of combining quantitative and qualitative research methodologies.

International Journal of Social Research Methodology, 8(5), 375-387.

Pakir, A. (2009). English as a lingua franca: analyzing research frameworks in international

English, world Englishes, and ELF. World Englishes, 28(2), 224-235.

Pakistan IPO Summit (2013). Retrieved from www.safeasia.com/IPO2013/docs/IPO2013

Paoletti, I., & Fele, G. (2004). Order and Disorder in the Classroom. Pragmatics, 14(1),

69-85.


Park, S. E., Anderson, N. K., & Karimbux, N. Y. (2016). OSCE and case presentations as

active assessments of dental student performance. Journal of Dental

Education, 80(3), 334-338.

Parker, W. C., & Hess, D. (2001). Teaching with and for discussion. Teaching and Teacher

Education, 17(3), 273-289.

Patil, Z. N. (2008). Rethinking the objectives of teaching English in Asia. Asian EFL

Journal, 10(4), 227-240.

Pawlak, M., & Waniek-Klimczak, E. (Eds.). (2014). Issues in teaching, learning and

testing speaking in a second language. NY: Springer.

Pedulla, J. J., Abrams, L. M., Madaus, G. F., Russell, M. K., Ramos, M. A., & Miao, J.

(2003). Perceived effects of state-mandated testing programs on teaching and

learning: Findings from a national survey of teachers. EDRS, ERIC

Pennycook, A. (1997). Vulgar pragmatism, critical pragmatism, and EAP. English for

specific purposes, 16(4), 253-269.

Perlman, C. C. (2003). “Performance assessment: Designing appropriate performance

tasks and scoring rubrics”. Measuring Up: Assessment Issues for Teachers,

Counselors, and Administrators. Retrieved June 13, 2012, from Education

Resources Information Centre.


Pollitt, A., & Murray, N. L. (1996). What raters really pay attention to. Studies in language

testing, 3, 74-91.

Pool, J. (1991). The official language problem. American Political Science Review, 85(2),

495-514.


Poonpon, K. (2010). Expanding a second language speaking rating scale for instructional

and assessment purposes. English Language Institute, 8, 69-94.

Popham, W. J. (2001). Teaching to the test? Educational Leadership, 58(6), 16-21.

Prabhu, N. S. (1987). Second language pedagogy (Vol. 20). Oxford: Oxford University

Press.

Puppin, L. (2007). A Paradigm Shift: From Paper-and-Pencil Tests to Performance-Based

Assessment. In English Teaching Forum (Vol. 45, No. 4, pp. 10-17). US

Department of State. Bureau of Educational and Cultural Affairs, Office of English

Language Programs, SA-5, 2200 C Street NW 4th Floor, Washington, DC 20037.

Qadeer, M. (2006). Pakistan: Social and cultural transformations in a Muslim nation.

New York: Routledge.

Qadir, S. (1996). Introducing Study Skills at Intermediate Level in Pakistan. (Unpublished

PhD thesis), University of Lancaster U.K.

Rabab’ah, G. (2003). Communicating Problems Facing Arab Learners of English. Journal

of Language and Learning 3(1), 180-197.

Rahman, T. (2001). English-teaching institutions in Pakistan. Journal of Multilingual and

Multicultural Development, 22(3), 242-261.

Rahman, T. (2004). Denizens of alien worlds: A survey of students and teachers at

Pakistan's Urdu and English language-medium schools, and

madrassas. Contemporary South Asia, 13(3), 307-326.

Rahman, T. (2005a). Passports to privilege: the English-medium Schools in Pakistan.

Peace and Democracy in South Asia, 1(1), 24-44.


Rahman, S. (2005). Orientations and motivation in English language learning: A study of

Bangladeshi students at undergraduate level. Asian EFL Journal, 7(1), 29-55.

Rahman, T. (1990). Pakistani English. Islamabad: National Institute of Pakistan

Studies.

Rahman, T. (1998). Language and politics in Pakistan. Karachi, Pakistan: Oxford

University Press.

Rahman, T. (2014). Pakistani English. Islamabad, Pakistan: National Institute of Pakistan

Studies, Quaid-i-Azam University, Islamabad.

Ramanathan, H. (2001). Assessment and Testing in an English Classroom in India. Paper

Presented at the Annual Meeting of the Mid-Western Educational Research

Association. Chicago.

Ramanathan, V. (2005a). The English-vernacular divide: Postcolonial language politics

and practice. Bilingual education and bilingualism, (49). Clevedon, UK:

Multilingual Matters.

Rassool, N. (2013). The political economy of English language and development: English

vs. national and local languages in developing countries. English and development:

Policy, pedagogy and globalization, 17, 45-68.

Rasul, S. (2006). Language hybridization in Pakistan as socio-cultural phenomenon:

An analysis of code-mixed linguistic patterns (Unpublished doctoral dissertation).

National University of Modern Languages, Islamabad, Pakistan. Retrieved from

http://prr.hec.gov.pk/jspui/handle/123456789/5707

Reed, T. (2014). Changing the conversation: Messages for improving public

understanding of engineering. stem4innovation.tamu.edu, [PDF] tamu.edu


Riaz, N. (2012, July 17). Wordiness: A Problem in Undergraduate Writing at Air

University (AU), Islamabad, Pakistan. (Unpublished MPhil dissertation), Air

University, Islamabad, Federal, Pakistan. AUCLC0672

Riaz, N., Haidar, S., Hassan, R., (2019). Developing English Speaking Skills: Enforcing

Testing Criteria. Global Social Sciences Review (GSSR), IV, (II), 183 – 197.

http://dx.doi.org/10.31703/gssr.2019(IV-II).18

Riggenbach, H. (1990). Discourse analysis and spoken language instruction. Annual

Review of Applied Linguistics, 11, 152-163.

Riggenbach, H. (1994). Students as Language Researchers. In K. M. Bailey, & L. Savage

(Eds.), New Ways in Teaching Speaking (p. 305). Bloomington, Illinois, USA:

TESOL.

Riggenbach, H. (1998). Evaluating learner interactional skills: Conversation at the micro

level. Talking and testing: Discourse approaches to the assessment of oral

proficiency, 53-67.

Riggenbach, H. (2006). Discourse analysis in the language classroom: Vol. 1. The

spoken language. Ann Arbor, MI: The University of Michigan Press.

Rogoff, B. (1994). Developing understanding of the idea of communities of learners. Mind,

Culture, and Activity, 1(4), pp. 209-229.

Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of

intrinsic motivation, social development, and well-being. American

psychologist, 55(1), 68.


Santoro, N., & Allard, A. (2008, April 29). Scenarios as springboards for reflection on

practice: stimulating discussion. Reflective Practice: International and

Multidisciplinary Perspectives, 9(2), 167-176.

Savignon, S. J. (1972). Communicative Competence: An Experiment in Foreign-Language

Teaching. Language and the Teacher: A Series in Applied Linguistics, Volume 12.

ERIC

Savignon, S. J. (2018). Communicative competence. The TESOL encyclopedia of English

language teaching, 1-7.

Sayer, F. (2015). Public history: A practical guide. New York: Bloomsbury Academic.

Retrieved October 22, 2015, from https://books.google.ae/books

Schmidt, R. (1995). “Consciousness and foreign language learning: A tutorial on the role

of attention and awareness in learning”. Attention and awareness in foreign

language learning, 9, 1-63.

Schneider, E. W. (2007). Postcolonial English: Varieties around the world. Cambridge:

Cambridge University Press.

Scott, G. (1999). Change matters: Making a difference in education and training. Sydney

and London: Allen & Unwin.

Scott, G., Bell, S., Coates, H., & Grebennikov, L. (2010). Australian higher education

leaders in times of change: the role of Pro Vice-Chancellor and Deputy Vice-

Chancellor. Journal of Higher Education Policy and Management, 32(4), 401-418.

DOI: 10.1080/1360080X.2010.491113:

https://www.researchgate.net/publication/228371415


Seidlhofer, B. (2001). Closing a conceptual gap: The case for a description of English as a

lingua franca. International Journal of Applied Linguistics, 11(2), 133-158.

Selinker, L. (1972). Interlanguage. IRAL-International Review of Applied Linguistics in

Language Teaching, 10(1-4), 209-232.

Senge, P. M. (1990). The leader’s new work: Building learning organizations. Sloan

management review, 32(1), 7-23.

Senge, P. M. (2004). “The leader's new work: Building learning organizations”. How

Organizations Learn: Managing the search for knowledge, 462-486.

Sert, O. (2005, August). The Functions of Code Switching in ELT Classrooms. The

Internet TESL Journal, XI (8).

Shamim, F. (1993). Teacher-learner behaviour and classroom processes in large ESL

classes in Pakistan. (Unpublished doctoral dissertation), School of Education,

University of Leeds, UK

Shamim, F. (2006). Case studies of organization of English language teaching in public-

sector universities in Pakistan. Research report for the National Committee on

English, Higher Education Commission, Islamabad, Pakistan.

Shamim, F. (2008). Trends, issues and challenges in English language education in

Pakistan. Asia Pacific Journal of Education, 28(3), 235-249.

Shamim, F. (2011). “English as the language for development in Pakistan: Issues,

challenges and possible solutions”. In H. Coleman (Ed.), Dreams and realities:

Developing countries and the English language (pp. 291-309). London: British

Council.


Shamim, F., Negash, N., Chuku, C., & Demewoz, N. (2007). Maximizing learning in large

classes: Issues and options. England: British Council. Retrieved 11 12, 2018

Shahzad, K. (2018). Spoken Language Testing Practices in National University of Modern

Languages. Journal of Research in Social Sciences, 6(1), 291-314.

Shenton, A. K. (2004). Strategies for ensuring trustworthiness in qualitative research

projects. Education for Information, 22, 63–75.

Shohamy, E. G. (1993). The power of tests: The Impact of Language Tests on Teaching

and Learning. Washington, DC: NFLC Occasional Papers.

Skinner, B. F. (1948). 'Superstition’ in the pigeon. Journal of Experimental

Psychology, 38(2), 168.

Smith, J. A. (1996). Beyond the divide between cognition and discourse: Using

interpretative phenomenological analysis in health psychology. Psychology and

health, 11(2), 261-271.

Srivastava, D. S. (2005). Curriculum and instruction. Gyan Publishing House.

Stake, R. E. (1995). The art of case study research. Sage.

Stevens, S. Y., Krajcik, J. S., Shin, N., Pellegrino, J. W., Geier, S., Swarat, S., et al. (2008).

Using construct-centered design to align curriculum, instruction, and assessment

development in emerging science. Proceedings of the International Conference of the

Learning Sciences (Vol. 3, pp. 314-321). International Society of the Learning Sciences.

Stickland, R. (1998). Questioning, Arguing and Reasoning. In An Introduction to Oracy/

Frameworks for Talk (p. 242). Herndon, VA 20172: Cassell.

Stiggins, R. J. (2002). Assessment crisis: The absence of assessment for learning. Phi Delta

Kappan, 83(10), 758-765.


Storz, C. (2002). Oral presentation skills: A practical guide. Retrieved from

http://www.sis.pitt.edu/~gray/ITMgnt/references/presentations/oralPresentationSkills.pdf

Summary report of journal operations. (2010). American Psychologist, 65, 524-525.

Sultana, N. (2009). Retrieved 2017 from www.oric.numl.edu.pk

Swain, M. (2005). “The output hypothesis: Theory and research”. In E. Hinkel (Ed.),

Handbook of research in second language teaching and learning (pp. 471–483).

Mahwah, N.J. ; London: L. Erlbaum Associates.

Sweet, G., Reed, D., Lentz, U., & Alcaya, C. (2000). Developing speaking and writing

tasks for second language assessment. Minneapolis, MN: University of Minnesota

Center for Advanced Research on Language Acquisition. Available:

http://carla.umn.edu/assessment/MLPA/pdfs/Speaking_Writing_Tasks_Guide.pdf

Tabachnick, B. G., & Fidell, L. S. (1996). Using multivariate statistics.

Taylor, L. (2009). Developing assessment literacy. Annual Review of Applied Linguistics,

29, 21-36.

Thabane, L., Ma, J., Chu, R., Cheng, J., Ismaila, A., Rios, L. P., et al. (2010). A tutorial on

pilot studies: the what, why and how. BMC Medical Research Methodology, 10(1),

1-10.

Thomas, T. A. (2014). Developing team skills through a collaborative writing assignment.

Assessment & Evaluation in Higher Education, 39(4), 479-495.

Thompson, P. (2007, March). Developing classroom talk through practitioner research.

Educational Action Research, 15(1), 41-60.


Thornbury, S. (2012). Speaking instruction. Pedagogy and Practice in Second Language

Teaching, 198-207.

Upshur, J. A., & Turner, C. E. (1995). Constructing rating scales for second language

tests. ELT Journal, 49(1), 3-12.

Tinkler, P., & Jackson, C. (2002). In the dark? Preparing for the PhD viva. Quality

Assurance in Education, 10(2), 86-97.

Ur, P. (2008). A course in language teaching. Ernst Klett Sprachen.


Volante, L. (2004). Teaching to the Test: What Every Educator and Policy-Maker Should

Know. Canadian Journal of Educational Administration and Policy 35. 1-6.

Vygotsky, L. (1962). Thought and language. Cambridge, MA: The MIT Press.

Wesley, C. (2013, September 05). Sanctioning Silence in the Classroom. The Chronicle of

Higher Education. Retrieved May 07, 2015, from

http://chronicle.com/article/Sanctioning-Silence-in-the/141369/

Wette, R., & Barkhuizen, G. (2009). Teaching the book and educating the person:

Challenges for university English language teachers in China. Asia Pacific Journal

of Education, 29(2), 195-212.

Widdowson, H. G. (1978). Teaching language as communication. Oxford: Oxford

University Press.


Wiggan, G. (2007). Race, school achievement, and educational inequality: Toward a

student-based inquiry perspective. Review of Educational Research, 77(3), 310-

333.

Wilkinson, A. (1970, January). The Concept of Oracy. The English Journal, 59(1), 71-77.

Williams, C. (2007). Research methods. Journal of Business & Economic Research, 5(3),

65-72.

Wilson, S. M., & Peterson, P. L. (2006, July). Theories of Learning and Teaching What

Do They Mean for Educators? Best Practices NEA Research Working Paper, i-26.

Wrigley, H. S. (1994a). Meeting the challenges of transition: Perspectives on the

REEP/AALS Transition Project. Washington, DC: U.S. Department of Education.

(ED 373 596)

Zhang, Y., & Elder, C. (2011). Judgments of oral proficiency by non-native English

speaking teacher raters: Competing or complementary constructs? Language

Testing, 28(1), 31-50.

Zulfiqar, I. (2011, June). The Effects of the Interaction between Monomodal and

Multimodal Texts on Language Performance in Pakistani ESL Context: A

Longitudinal Case Study (Unpublished doctoral dissertation) National University

of Modern Languages, Islamabad, Pakistan.

http://prr.hec.gov.pk/jspui/handle/123456789/757


APPENDIX A

Survey conducted among the UF 2013 (Questionnaire)

Roll No.

Class, Semester & Section:

College:

Survey for students about teaching of oral skills in high school

In past years we have received many complaints about how weak university

entry-level students' English oral skills are. To help us explore whether oral

skills have been taught and tested at the higher secondary level, please take a

moment to fill in the following survey.

The information you give will be used only for research purposes and

will not be revealed to anyone.

Please send it to [email protected] or return it to Nailah

Riaz, Faculty Offices, Basement Language Training Centre (LTC), Air

University Islamabad, Pakistan.

1. Do you like to talk in English?

Yes

No

Occasionally


2. Do you talk to your friends in English?

Yes

No

Occasionally

3. Do you talk to your parents in English?

Yes

No

Occasionally

4. Do your parents talk to you in English?

Yes

No

Occasionally

5. Do your teachers talk to you in English?

Yes

No

Occasionally

6. Do you hear English most of the time?

Yes


No

Occasionally

7. Do your teachers expect you to talk in English?

Yes

No

Sometimes

8. Do your parents expect you to talk in English?

Yes

No

Occasionally

9. Were you taught oral skills at College level?

Yes

No

Sometimes

10. Were you allowed to ask questions in class?

Yes

No

sometimes

11. Were your oral skills tested at college level?

Yes


No

Occasionally

12. What was the weightage of oral skills in overall assessment?

10%

50%

Uncertain

13. Was Cooperative Learning Method (CLM) used in class?

Yes

No

Uncertain

14. Were you asked to support your statements?

Yes

No

Sometimes

15. Was your statement (if supported) in class graded?

Yes

No

Sometimes

16. Were you given opportunities to support your statements in class?

Yes


No

Occasionally

17. Were your in-class arguments appreciated?

Yes

No

Sometimes

18. Did you have tests for oral skills?

Yes

No

Sometimes

19. Were your oral abilities evaluated?

Yes

No

Sometimes

20. Were your strengths in oral skills appreciated?

Yes

No

Sometimes

21. Did the teachers expect you to respond in English?


Yes

No

Sometimes

22. Were you taught how to talk in English?

Yes

No

Occasionally

23. Were you given chances to share your ideas in class?

Yes

No

Occasionally

24. Was there any incentive to talk in English?

Yes

No

Sometimes

25. Were your oral skills tested?

Yes

No

Occasionally

26. Were you told about the criterion of testing your oral skills?


Yes

No

Uncertain

27. Did you try to achieve that criterion?

Yes

No

Sometimes

28. Was lecture method used in English class?

Yes

No

Sometimes

29. Was Cooperative Learning Method (CLM) used in class?

Yes

No

Occasionally

30. Were you allowed to discuss topics in class?

Yes

No

Occasionally

31. Were you motivated to speak in English in class?


Yes

No

Occasionally

32. Were you motivated to speak in English in class?

Yes

No

Occasionally

33. Was your course content conversation-based?

Yes

No

Uncertain

34. Were there oral tests in your college education?

Yes

No

Sometimes

35. Did you have task-based curriculum of English learning in College?

Yes

No

Uncertain

36. Do you speak English outside the classroom?

Yes


No

Occasionally

37. Do you speak English in public dealings?

Yes

No

Sometimes

38. Do you speak English at family get-togethers?

Yes

No

Occasionally

39. Do you talk in English in public places?

Yes

No

Sometimes

40. Did you present your projects in English?

Yes

No

Sometimes


Appendix A-1: Saved impression of the in-class survey


APPENDIX B

List of questions for interviewing UELTs

Dear English Language Teachers

I would like to interview you.

The purpose of these interviews is to seek academic information related to the

teaching and testing of English speaking skills at the freshman level. The responses

will be kept confidential and will be used for research purposes only.

Thanks for your precious time!

1. For how many years have you been teaching English at University level?

2. At what Level do you teach?

3. In which discipline do you teach?

4. Where do you rank the English speaking competence of your students as they join

university?

5. What difference do you find in their speaking ability after they complete ‘English

Communication Skills’?

6. Do you consciously teach oral skills to your students, or do you take it for granted

that the students will naturally learn how to talk in English?

7. How do you teach English speaking skills to students? (Do you insist on their

talking in English? Do you try to convince them to talk in English? Do you

correct them time and again? Do you let them complete without intervention? Do

you ask them Questions? Do you help them respond?)

8. What value do you give to the speaking skills of the learners? (Does their speech

competence add value to their marketability? If so, how? )


9. How do you assess their English spoken skills? (Do you follow a criterion to

assess? Do you assess intuitively? Do you check understandability? Fluency?

Quality (diction)? Quantity (long speeches)? Relevance (to the point or

digression)? Grammar?)

10. How much attention should be given to enhancing learners’ speaking competence

and why?

11. What percentage should be allotted to the assessment of speaking skills on the

scale of 100 percent?

Nailah Riaz

PhD Candidate

Reg # 120954

Air University

Islamabad


APPENDIX B-1: Saved impression of the record of UELTs’ video interviews


APPENDIX C

List of questions for interviewing the UM and A

Dear Management/ Administration

I would like to interview you.

The purpose of these interviews is to seek genuine opinions related to the teaching and

testing of English speaking skills at the university freshman level.

The responses will be kept confidential and will be used for research purposes only.

Thanks for your precious time!

1. For how many years have you been the Vice Chancellor/ Senior Dean/Dean/HoD

of/in Air University?

2. What support (happens to be very important) can Management/Administration

give to the department of Humanities (English) to improve the speaking ability of

the nonnative learners?

3. Would you like the department of Humanities (English) to develop a course

focusing on enhancing the speaking ability of the students? Why?

4. Do you think English teachers need to undertake ‘initiative overload’ (well-

known in England) to urgently solve the high level of underachievement in oral

skills?

5. Do you teach as well? (In that case you might have a teacher’s perspective on the

issue of student’s oral (speaking/listening) skills as well.) As a teacher, do you

think, freshmen should be taught spoken skills to exchange idea, to interact with

others, and to participate in discussions in class rooms, workshops, conferences,

etc.?

6. Where do you rank the English speaking competence of your students as they join

university, if you ever have interacted with or observed the students at freshmen

level? (For example, can they introduce themselves in English Language? Can

they explain their point of view clearly? Can they exhibit appropriate etiquette in

interaction with the audience? Can they clarify, rephrase, explain, expand and restate

information and ideas? Can they use appropriate body language, dress and

posture? Can they use suitable tone? Can they hold appropriate interaction with

audience? Are you satisfied with the speaking ability of the students?)


7. What kind of difference do you find in their speaking ability as they present their

Final Year projects?

8. Do you believe in consciously teaching oral skills to students, or do you take it for

granted that the students would naturally start speaking in English if the teachers

talk to them in English?

9. Why, in your opinion, would the students start interacting in English?

10. Would you like to advise the teachers (other than English Language as well) in

your departments to interact with students in English?

11. How much importance do you give to the speaking skills of the learners?

12. Does their speech competence add value to their marketability? If so, how?

13. If understanding and talking in English happens to be so important, would you

like the students at freshmen level to be evaluated?

14. Would you like the English teachers to follow a standardized criterion?

15. How are the English teachers going to establish a criterion?

16. How much attention should be given to enhancing learners’ speaking competence

and why?

17. What percentage should be allotted to the assessment of speaking skills on the

scale of 100 percent?

Nailah Riaz PhD Candidate Reg # 120954

Air University Islamabad


APPENDIX C-1: Saved impression of UM&A Interviews


APPENDIX D

Kim’s (2010) Analytic Scoring Rubric

Table1. Kim’s (2010) analytic scoring rubric

Analytic Scoring Rubric

Meaningfulness (Communication Effectiveness) Is the response meaningful and

effectively communicated?

Grammatical

Competence

Accuracy, Complexity and Range

Discourse

Competence

Organization and Cohesion

Task Completion To what extent does the speaker complete the task?

Intelligibility Pronunciation and prosodic features (intonation, rhythm, and pacing)

Table 2. Meaningfulness (Communication Effectiveness): Is the response meaningful and effectively communicated?

1. The response: 5 Excellent: is completely meaningful (what the speaker wants to convey is completely clear and easy to understand). 4 Good: is generally meaningful (in general, what the speaker wants to convey is clear and easy to understand). 3 Adequate: occasionally displays obscure points; however, main points are still conveyed. 2 Fair: often displays obscure points, leaving the listener confused. 1 Limited: is generally unclear and extremely hard to understand. 0 No: is incomprehensible.

2. The response: 5 Excellent: is fully elaborated. 4 Good: is well elaborated. 3 Adequate: includes some elaboration. 2 Fair: includes little elaboration. 1 Limited: is not well elaborated. 0 No: contains not enough evidence to evaluate.

3. The response: 5 Excellent: delivers sophisticated ideas. 4 Good: delivers generally sophisticated ideas. 3 Adequate: delivers somewhat simple ideas. 2 Fair: delivers simple ideas. 1 Limited: delivers extremely simple, limited ideas.

*(The researcher has replaced the bulleted descriptions of the six-point scale (0 for ‘no control’ to 5 for ‘excellent control’) with numbers (1, 2, and 3) for better understanding of the criteria.)


Table 3. Grammatical Competence: Accuracy, Complexity and Range

1. The response: 5 Excellent: is grammatically accurate. 4 Good: is generally grammatically accurate without any major errors (e.g., article usage, subject/verb agreement, etc.) that obscure meaning. 3 Adequate: rarely displays major errors that obscure meaning, and a few minor errors (but what the speaker wants to say can be understood). 2 Fair: displays several major errors as well as frequent minor errors, sometimes causing confusion. 1 Limited: is almost always grammatically inaccurate, which causes difficulty in understanding what the speaker wants to say. 0 No: displays no grammatical control.

2. The response: 5 Excellent: displays a wide range of syntactic structures and lexical form. 4 Good: displays a relatively wide range of syntactic structures and lexical form. 3 Adequate: displays a somewhat narrow range of syntactic structures; too many simple sentences. 2 Fair: displays a narrow range of syntactic structures, limited to simple sentences. 1 Limited: displays lack of basic sentence structure knowledge. 0 No: displays severely limited or no range and sophistication of grammatical structure and lexical form.

3. The response: 5 Excellent: displays complex syntactic structures (relative clauses, embedded clauses, passive voice, etc.) and lexical form. 4 Good: displays relatively complex syntactic structures and lexical form. 3 Adequate: displays somewhat simple syntactic structures. 2 Fair: displays use of simple and inaccurate lexical form. 1 Limited: displays generally basic lexical form. 0 No: contains not enough evidence to evaluate.

4. The response: 3 Adequate: displays use of somewhat simple or inaccurate lexical form.

*(The researcher has replaced the bulleted descriptions of the six-point scale (0 for ‘no control’ to 5 for ‘excellent control’) with numbers (1, 2, and 3) for better understanding of the criteria.)

Table 4. Discourse Competence: Organization and Coherence

5 Excellent. The response:
1. is completely coherent.
2. is logically structured: logical openings and closures; logical development of ideas.
3. displays smooth connection and transition of ideas by means of various cohesive devices (logical connectors, a controlling theme, repetition of key words, etc.).

4 Good. The response:
1. is generally coherent.
2. displays generally logical structure.
3. displays good use of cohesive devices that generally connect ideas smoothly.

3 Adequate. The response:
1. is occasionally incoherent.
2. contains parts that display somewhat illogical or unclear organization; however, as a whole, it is in general logically structured.
3. at times displays somewhat loose connection of ideas.
4. displays use of simple cohesive devices.

2 Fair. The response:
1. is loosely organized, resulting in generally disjointed discourse.
2. often displays illogical or unclear organization, causing some confusion.
3. displays repetitive use of simple cohesive devices; use of cohesive devices is not always effective.

1 Limited. The response:
1. is generally incoherent.
2. displays illogical or unclear organization, causing great confusion.
3. displays attempts to use cohesive devices, but they are either quite mechanical or inaccurate, leaving the listener confused.

0 No control. The response:
1. is incoherent.
2. displays virtually non-existent organization.
3. contains not enough evidence to evaluate.


Table 5. Task Completion: To what extent does the speaker complete the task?

5 Excellent. The response:
1. fully addresses the task.
2. displays completely accurate understanding of the prompt, without any misunderstood points.
3. completely covers all main points discussed in the prompt, with complete details.

4 Good. The response:
1. addresses the task well.
2. includes no noticeably misunderstood points.
3. completely covers all main points discussed in the prompt, with a good amount of detail.

3 Adequate. The response:
1. adequately addresses the task.
2. includes minor misunderstanding(s) that does not interfere with task fulfillment.
3. touches upon all main points, but leaves out details.

2 Fair. The response:
1. insufficiently addresses the task.
2. displays some major incomprehension/misunderstanding(s) that interferes with task completion.
3. completely covers one (or two) main points with details, but leaves the rest out, OR touches upon bits and pieces of the prompt.

1 Limited. The response:
1. barely addresses the task, OR displays major incomprehension/misunderstanding(s) that interferes with addressing the task.

0 No control. The response:
1. shows no understanding of the prompt.
2. contains not enough evidence to evaluate.


Table 6. Intelligibility: Pronunciation and Prosodic Features (intonation, rhythm, and pacing)

5 Excellent. The response:
1. is completely intelligible, although an accent may be there.
2. is almost always clear, fluid and sustained.
3. does not require listener effort.

4 Good. The response:
1. may include minor difficulties with pronunciation or intonation, but is generally intelligible.
2. is generally clear, fluid and sustained; pace may vary at times.
3. does not require much listener effort.

3 Adequate. The response:
1. may lack intelligibility in places, impeding communication.
2. exhibits some difficulties with pronunciation, intonation or pacing.
3. exhibits some fluidity.
4. may require some listener effort at times.

2 Fair. The response:
1. often lacks intelligibility, impeding communication.
2. frequently exhibits problems with pronunciation, intonation or pacing.
3. may not be sustained at a consistent level throughout.
4. may require significant listener effort at times.

1 Limited. The response:
1. generally lacks intelligibility.
2. is generally unclear, choppy, fragmented or telegraphic.
3. contains frequent pauses and hesitations.
4. contains consistent pronunciation and intonation problems.
5. requires considerable listener effort.

0 No control. The response:
1. completely lacks intelligibility.
2. contains not enough evidence to evaluate.


APPENDIX E

Proof Reading Certificate