
Holistic approach to learning and teaching introductory object-oriented programming

Neena Thota* and Richard Whitfield

School of Intelligent Systems and Technology, University of Saint Joseph, Rua de Londres 16 (NAPE), Macau S.A.R., China

(Received 21 October 2009; final version received 28 March 2010)

This article describes a holistic approach to designing an introductory, object-oriented programming course. The design is grounded in constructivism and pedagogy of phenomenography. We use constructive alignment as the framework to align assessments, learning, and teaching with planned learning outcomes. We plan learning and teaching activities, and media with an understanding of variation theory and the ways in which students learn to program. We outline the implementation of the course, and discuss the findings from the first cycle of an action research study with a small sample of undergraduate students. An investigation of the preferred (deep/surface) learning approaches of the students led us to believe that these approaches can be influenced through course design. Personal constructs of the students, elicited through the repertory grid technique, revealed that rich inventories of learning resources are highly valued. We comment on the transformational processes of the experience of the participants, and identify areas for further refinement and investigation in the next action research cycle.

Keywords: object-oriented programming; constructive alignment; action research; variation theory

Introduction

Learning to program is recognized as being problematic for students. Many studies highlight the problem that students lack the knowledge and skills for problem-solving, and struggle with object-oriented programming (Lister et al., 2004; McCracken et al., 2001; Ragonis & Ben-Ari, 2005; Robins, Rountree, & Rountree, 2003). Students' lack of motivation, self-efficacy, and failure to see the relevance of programming courses also influence success and failure rates (Rountree, Rountree, & Robins, 2002; Wiedenbeck, 2005).

*Corresponding author. Email: [email protected]

Computer Science Education

Vol. 20, No. 2, June 2010, 103–127

ISSN 0899-3408 print/ISSN 1744-5175 online

© 2010 Taylor & Francis

DOI: 10.1080/08993408.2010.486260

http://www.informaworld.com


For the first author, the frustrations and travails of teaching object-oriented programming (OOP) to novices provided the impetus and personal motivation to try to improve learning outcomes and her own teaching practice. An interest in finding out how students learn to program and ways to integrate this knowledge into teaching was a powerful incentive to join the league of teacher–researchers undertaking action research (McNiff & Whitehead, 2002). This article traces the attempt to understand and adopt learning and teaching theories to design an introductory OOP course. It also outlines the implementation and evaluation of the course that was supervised by the second author.

At our institution, undergraduate students from mixed majors are required to take an introductory Java programming course to develop broad foundational skills and knowledge for other fields. Most of the students are non-native English speakers and have part-time jobs. For the first author, teaching was planned around choosing a text book, identifying the topics, lecturing, giving practice exercises and exams, and using normative assessment (grading on the curve) to derive the final mark. A general dissatisfaction set in with the outcomes of her teaching – the high failure rate and the students' perceptions that programming was "hard" and about writing a lot of code. There was also the requirement to prepare the novice programmers from multi-cultural backgrounds for today's global economy with its emphasis on distributed and collaborative team projects. Rapid and significant changes in information technology meant that OOP-related content was expected to be integrated with the university-specified learning management system. Moreover, with the advances in visualization and animation tools for OOP, it was felt that these tools should be adopted and possibly integrated with the learning management system. The interest in finding out how students learn to program led to the phenomenographic research on programming. An understanding of the ways in which students approach programming led to finding ways of applying this understanding to the learning and teaching context. The theories that contributed to the development of the conceptual framework of a holistic approach to course design are explored in the following section.

Exploring learning and teaching theories

Research of a relational nature into teaching and learning, which includes phenomenographic studies, is consistent with the constitutionalist beliefs of nondualism and consideration of an individual's awareness of the world (Trigwell & Prosser, 1997). From a phenomenographic perspective, Marton and Booth (1997, p. 38) see learning as changing as a person. Outcomes of learning are posited as being related to qualitative differences in approaches to learning. Extrinsic motivation is seen to lead to a surface approach which is perceived as being more focused on the task at hand.


Intrinsic motivation or interest is seen to lead to a deep approach which is perceived as seeking meaning in the task. As a research approach, phenomenography seeks to uncover the variation in capabilities for experiencing phenomena. A fundamental assumption is that the capabilities represent qualitatively different understandings that are hierarchically ordered. In the educational context, the differences between these capabilities are considered as "educationally critical differences" and changes between them vital for learning (Marton & Booth, 1997, p. 111).

Phenomenography and programming

Focusing on computer science, Marton and Booth (1997) discuss different approaches to programming. The surface or opportunistic approach is identified as an expedient or constructual way of attempting to write a program without interpreting the problem. The deep or interpretative approach is typified by an operational or structural way focused on the goal of the program or the features of the problem. The structural approach is believed to point to the phase of studying a problem, the operational approach refers to the specification phase of the requirements of a program, and the constructual approach denotes the program writing phase. Marton and Booth recommended the use of structural and operational approaches by students as a principled way of writing programs.

Research that focuses on how students learn OOP reveals that students have qualitatively different categories of conceptions of what it means to program (Bruce et al., 2004; Eckerdal & Berglund, 2005; Stamouli & Huggard, 2006). These categories indicated increasing understanding or awareness of the experience of programming. Students with a surface orientation were found to experience learning to program as "getting through" the unit with a focus on the syntax of the language. On the other hand, students with a deep orientation were found to experience learning to program through understanding, integrating concepts, and problem solving. The conceptualization of programming as learning a way of thinking and programming as an act of participation is reported in these studies.

For the first author, the knowledge of the qualitative differences in ways of programming created the realization that learners should be encouraged to experience the entire gamut of understanding and be motivated to progress to the higher ordered categories. The first author concluded that her practice might be related to the adoption of deep/surface learning approaches by the students. It became important to her to bring about a change in the students' perceptions that programming was not just about writing code but a way of thinking and acculturation. However, there still remained the conundrum of how to generate a teaching strategy for bringing about this change.


In further perusing the literature on the relational approach to teaching and learning, it became clear that learning outcomes can be influenced by the learning context, students' perception of the learning context, and students' approaches to learning, as opposed to factors that exist before the student enters the course (Ramsden, 2005; Trigwell & Prosser, 1991). A principle that seeks to influence the learning approaches of students through a design for teaching is constructive alignment.

Constructive alignment

Biggs (1996) defines constructive alignment as the explicit formulation of intended learning outcomes, followed by the process of synchronization with constructivist-based learning/teaching activities and assessment tasks likely to lead to attaining the learning outcomes. Constructivism emphasizes that students construct meaning through deep learning from relevant activities. As a learning theory, it has profoundly influenced the teaching of programming (Ben-Ari, 1998). Constructivist approaches to teaching OOP can be found in the work of Hadjerrouit (1999), Van Gorp and Grissom (2001), and Wulf (2005). There is also a growing interest in constructive alignment for course design in computer science (Brabrand, 2007). Achieving alignment rests on clearly specifying learning outcomes. The first author began the task of defining outcomes that could lead to more students adopting deeper approaches to programming, thereby fostering a change in students' perceptions of programming.

Taxonomies for learning and assessment

Outcomes related to the programming goals and the future workplace needs of the students were incorporated from different learning and assessment taxonomies. A computer science-specific Matrix Taxonomy (Fuller et al., 2007) has been devised that differentiates the ability to understand and interpret code from the ability to design and build a new product. This taxonomy provides a matrix that maps programming-related activities to the levels in the revised version of Bloom's taxonomy. Fuller et al. give examples of how to track various paths of learning such as: (a) being able to read program code, analyze, and even evaluate it without the ability to design a solution or produce program code; and (b) being able to apply and synthesize without the ability to analyze or evaluate program code. Additionally, Fuller and Keim (2008) argue that to improve constructive alignment, professional values must be included and explicitly assessed as learning outcomes in computing courses. Therefore, cognitive and affective outcomes for the OOP course were drawn from the matrix taxonomy and Krathwohl, Bloom, and Masia's (1964) taxonomy for the affective domain.


To assess learning outcomes, the Structure of the Observed Learning Outcome (SOLO) taxonomy by Biggs and Collis (1982) was adopted for the programming course. SOLO was devised as a series of levels to match the evolving structural complexity of qualitatively different learning outcomes. The taxonomy deals with the content of the learner's response to assessment, and is applicable for holistic assessment of cognitive and affective outcomes (Biggs & Tang, 2007). Programming activities are amenable to assessment using the SOLO taxonomy, and the levels can be achieved by traversing a diagonal path in the matrix taxonomy (Fuller et al., 2007). Seminal work in applying the SOLO taxonomy to assess programming outcomes can be found in the BRACElet project (Whalley & Robbins, 2007).

Phenomenography as pedagogy

The systematic alignment of teaching and learning activities and assessment tasks with the intended learning outcomes was initiated for the programming course. At this stage of the study, work centered on designing constructivist activities such as pair programming and team projects and formulating assessment criteria based on the SOLO levels. The first author of this article developed an understanding of practice as integration of learning, teaching, and assessment. With this growing awareness came the recognition that a consideration of the educationally critical differences in understanding OOP was lacking in the course design. Constructive alignment has been criticized for ignoring the needs and preferences of individual learners. Mayes and de Freitas (2004, p. 6), for example, contend that "the alignment process cannot proceed without first examining the underlying assumptions about learning, and then adopting teaching methods that align with those assumptions".

Booth (1997) distilled the principles for teaching from phenomenographic research: awareness of the content as it should be understood by the learner; identification of the educationally critical aspects for learning; planning and reflection on learning experiences that reveal the variation in the learners' experience of learning; and ensuring that the learning tasks are personally relevant to the learner. In discussing phenomenographic pedagogy, Marton and Booth (1997) espoused two principles of teaching: (a) building a relevance structure by conveying the aims, demands, and outcomes of the learning situation, and (b) making use of variation to present the object of learning. Trigwell, Prosser, and Ginns (2005, p. 350) concluded that "the aim of the phenomenographic pedagogy process is to raise teachers' awareness of their thinking and practice and on how variation in this practice might be related to their students' approaches to learning".


Variation theory

According to variation theory (Marton, Runesson, & Tsui, 2004), learners can discern the critical features of the object of learning if they are exposed simultaneously to the patterns of variation that bring these critical features to the focal awareness of the learners. These patterns should present contrast, generalization, separation, and fusion of the critical aspects of the object of learning. As Runesson (2006, p. 406) further explained, "In a learning situation, a space of dimensions of variation is opened. The pattern of the opened variation is a potential for learning; it is a space of learning".

Variation theory has been applied to identify dimensions of variation of the critical aspects of learning concepts about objects and classes. Eckerdal and Thune (2005) found that objects were experienced progressively as code fragments, as an active component of a program, and as a model of a real world phenomenon. Similarly, classes were comprehended as code, as a template for object properties and behaviors, and as a model of a real world phenomenon. Variation theory has also been utilized for understanding student conceptions of stored objects and program correctness (Sorva, 2007; Stamouli & Huggard, 2006). The first author of this article resolved that if experiencing variation is an essential condition for learning, then the course environment should provide for diverse learning experiences. The course design should enable students to experience educationally critical ways of learning so as to develop a comprehensive view and understanding of OOP.
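To make these progressively richer conceptions concrete, the following minimal Java sketch (our own illustration, not an example drawn from Eckerdal and Thune's data or from the course materials; the BankAccount class and its members are hypothetical) shows how one class can be read in each of the three ways: as a fragment of code, as a template for the properties and behaviors of its objects, and as a model of a real world phenomenon, with the object created from it acting as an active component of a running program.

public class BankAccount {
    // Read one way, this is simply code; read another way, it is a template that
    // fixes the properties (owner, balance) and behaviors (deposit, getBalance)
    // of every BankAccount object; read a third way, it models a real world account.
    private final String owner;
    private double balance;

    public BankAccount(String owner, double openingBalance) {
        this.owner = owner;
        this.balance = openingBalance;
    }

    public void deposit(double amount) {
        balance = balance + amount;
    }

    public double getBalance() {
        return balance;
    }

    public static void main(String[] args) {
        // The object created here is an active component of the running program,
        // not just a fragment of code on the page.
        BankAccount account = new BankAccount("Ana", 100.0);
        account.deposit(50.0);
        System.out.println(account.owner + ": " + account.getBalance()); // prints Ana: 150.0
    }
}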

In terms of implementing variation theory, Marton and Trigwell (2000) mention the possibility of opening up new patterns of variation with the use of information technology. Computer science education, by necessity, is dependent on software for teaching and learning. A range of educational media such as learning objects, visualization, development, and animation tools are available for OOP. The incorporation of some of these tools with a learning management system is reported by Roßling and Kothe (2009). A principled approach for generating a teaching strategy with educational media can be found in the Conversational Framework developed by Laurillard (2002). This framework is based on phenomenographic research and combines discursive, adaptive, interactive, and reflective activities matched with educational media. Laurillard prescribed a sequence of iterations of dialogue, action-feedback, adaptation, and reflection to expose students to new ideas, to improve their practice, and link this improved practice to further developed understanding. The framework seemed a purposeful way to integrate learning opportunities, assessment and feedback mechanisms, and management of student progression with OOP-related tools and the university's learning management system.


Holistic approach to course design

The conceptual framework of our course design (see Figure 1) is inspired by constructivism and pedagogy of phenomenography, and anchored in constructive alignment and variation theory. We augment the principle of constructive alignment by blending learning/teaching activities and assessment tasks with an understanding of ways of learning. We integrate OOP-related educational media and learning experiences to open up the space of learning and make possible many paths for learning. Our holistic approach to course design is an integration of learning and teaching theories that aligns outcomes, assessments, learning and teaching activities, and media with an awareness of the content of learning and an understanding of how students learn to program. We aspire to influence the learning approaches of students and to bring about a change in the perceptions that students have about programming. We plan to teach in a way that brings educationally critical aspects of learning OOP to the fore and to design a learning context that provides for different kinds of variations in experience.

Figure 1. Conceptual framework of course design. Adapted from Thota and Whitfield (2009a).

Our approach is based on the following principles, which are grounded in the pedagogic theories we adopt, phenomenographic research findings on ways of programming, and instructional design theory for educational media:


• Alignment of outcomes and assessments
• Alignment of learning approaches and learning/teaching activities
• Alignment of educational media and learning experiences

These principles are discussed in detail in the course implementation given in the following sections.

Alignment of outcomes and assessments

The primary goals for our introductory programming course were that the students should build up a problem-solving ability for OOP and develop some design competence. The programming activities were also planned to equip students with skills to work productively as part of a pair and team, and develop the ability for organization and internalization of affective values. Our emphasis on problem solving, design, and the development of team and communication skills aligns with the long-term career needs of our students irrespective of their degree discipline. Table 1 enumerates our Intended Learning Outcomes (ILOs) from the cognitive (Fuller et al., 2007) and affective (Krathwohl et al., 1964) domains. Programming-related outcomes (ILOs 1–3) are in ascending order and imply inclusive understanding.

Students were expected to use the full range of assessment and other teaching and learning activities to achieve the desired programming outcomes and affective values. The curriculum was tailored to emphasize particular ways or even the full range of possible ways of going about learning to program (Bruce et al., 2004). Students modeled real world problems and followed the entire process from analysis to implementation of code (Eckerdal & Thune, 2005). Several opportunities were given to students to contrast, generalize, and separate aspects of approaches (Marton et al., 2004) in preparing the assignments given to achieve ILO2. Then, the students fused all of these aspects to complete the end-of-term project to achieve ILO3.

All assignments and projects were graded using criteria based on the SOLO taxonomy (Biggs & Collis, 1982). We ensured that unintended but desirable outcomes were also allowed, by crediting initiative to experiment with new ideas that exceeded the expectations for the assessment (Thompson, 2007). The exam, which constituted 10% of the course grade, consisted of questions related only to ILO1, i.e. recognize vocabulary, trace, implement, and translate code. Collaborative work was given an individual and pair/team grade derived from self- and peer evaluation, team observation, and interview. The final course grade was calculated on the basis of the weighted percentage of all assessment activities (quizzes, exam, assignments, and project). We also stipulated that a student had to achieve all ILOs to a satisfactory level to pass the course.


Table 1. Alignment of learning outcomes and assessments.

ILO 1 (Cognitive). Demonstrate knowledge and understanding of essential facts and concepts relating to OOP.
Programming outcomes and affective values: Recognize – Base knowledge, vocabulary of the domain; Trace – Desk-check a coding solution; Implement – Code a low-level solution, given a completed design; Translate – Interpret and convert code.
Assessment activities: Quizzes and exam (15%).

ILO 2 (Cognitive). Deploy appropriate theory, practices and tools for problem definition, specification, design, implementation, maintenance and evaluation of programs.
Programming outcomes and affective values: Analyze – Probe the complexity of a solution; Relate – Understand a solution in context of others; Present – Explain a solution to others; Adapt – Modify a solution for other domains/ranges; Debug – Detect and correct flaws in a design; Apply – Use a solution as a component in a larger problem.
Assessment activities: Programming assignments (30%) and group project.

ILO 3 (Cognitive). Use object-oriented design as a mechanism for problem solving as well as facilitating modularity and software reuse.
Programming outcomes and affective values: Design – Devise a solution structure; Model – Illustrate or create an abstraction of a solution; Refactor – Redesign a solution (as for optimization).
Assessment activities: Group project (50%).

ILO 4 (Affective). Work productively as part of a pair/team.
Programming outcomes and affective values: Cooperate; Communicate ideas; Receive, respond, value.
Assessment activities: Programming assignments and group project.

ILO 5 (Affective). Demonstrate ability for organization and internalization of values.
Programming outcomes and affective values: Reflect; Organize, characterize.
Assessment activities: Class work and journals (5%).

Note: Adapted from Thota and Whitfield (2009a).


Alignment of learning approaches and learning/teaching activities

Collaborative work using pair and team programming projects, and guided practice activities were incorporated (Van Gorp & Grissom, 2001; Wulf, 2005). A broad range of learning contexts – peer assessment, oral presentations, interactive lectures/labs, and role plays – were employed to deepen understanding and to keep students engaged and motivated (Biggs & Tang, 2007). Constructivist recommendations for effective learning in programming were followed: the identification of students' preconceptions, and the provision of adequate and appropriate tools (Hadjerrouit, 1999). Helpful resources for the novice programmers were available in the form of adaptive quizzes, and lecturer and peer feedback. Course content, code samples, and tutorial exercises were provided online through the university's learning management system. Prior knowledge and the possible range of learning styles (Felder & Silverman, 1988) were acknowledged by presenting a variety of material that was suitable for novices and for challenging more experienced students.

Specific attention was paid to increasing the relevance of the course and student motivation for learning (Booth, 1997). Writing about their personal goals and learning plans enabled students to define their aims and to reflect on their level of motivation. Journal entries, on topics such as pair programming, team work, and testing, focused the attention of the students on the relevance of these for programming. Best practices in OOP included interpreting code before producing code at the method, class, and class model levels (Caspersen & Bennedsen, 2007). Practice exercises at different levels of proficiency were matched with the intended learning outcomes to encourage qualitatively different conceptual understandings. These activities explicitly addressed student misconceptions of objects and classes (Ragonis & Ben-Ari, 2005; Robins et al., 2003). Examples that were relevant and familiar to multi-cultural groups (Suhonen, Thompson, Davies, & Kinshuk, 2007) were cited in class to create dimensions of variation. Use of unified modeling language (UML) diagrams and the debugger was added so that students could comprehend the critical aspects of understanding objects and classes (Eckerdal & Thune, 2005).

Alignment of educational media and learning experiences

We integrated object-oriented content with our learning management system to provide opportunities to our students to discuss, interact, adapt their understandings, and reflect upon them. The first phase of development of a computing augmented learning management system (CALMS) is reported in a related paper (Thota & Whitfield, 2009b).


The adaptation of the conversational framework (Laurillard, 2002) to integrate narrative, interactive, adaptive, communicative, and productive media forms that enhance the programming experience is depicted in Figure 2. We provided support for OOP with a range of learning resources:

• BlueJ integrated development environment (Kolling, Quig, Patterson, & Rosenberg, 2003).
• Jeliot for visualizations (Bednarik, Moreno, & Myller, 2006).
• JEWL library class for creating graphical user interfaces (English, 2004).
• JUnit extension for BlueJ for testing programs (Patterson, Kolling, & Rosenberg, 2003); an illustrative test sketch follows this list.
• Violet editor for UML diagrams (Horstmann & Pellegrin, 2009).
• Links on the online course page for learning objects (Bradley & Boyle, 2004).
• Links on the online course page for videos, animations, games, and multimedia tutoring (Moritz, Wei, Parvez, & Blank, 2005).
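As an illustration of the kind of test this tool chain supports, the sketch below shows a small JUnit 3 style test such as students might run from the BlueJ JUnit extension. It is our own hypothetical example, not taken from the course materials, and it reuses the hypothetical BankAccount class sketched earlier in the article.

import junit.framework.TestCase;

// A minimal JUnit 3 style unit test; BankAccount is the hypothetical class from
// the earlier sketch, assumed to be on the classpath.
public class BankAccountTest extends TestCase {

    public void testDepositIncreasesBalance() {
        BankAccount account = new BankAccount("Ana", 100.0);
        account.deposit(50.0);
        // The third argument is the tolerance used when comparing doubles.
        assertEquals(150.0, account.getBalance(), 0.001);
    }
}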

Research design of the course evaluation

The first author taught the redesigned introductory OOP course in the fall semester of 2008.

Figure 2. Educational media and learning experiences in programming. The arrows indicate the direction of interactions between the various elements.


Participants included 26 undergraduates drawn from information systems, design, business technology management, and pre-university majors. All the students were non-native English speakers of mixed nationalities (Chinese, Portuguese, Brazilian, and Nigerian). Consent for research was given by 21 students and all findings are based on the data collected from these students.

The main aims of our first action research cycle were:

(1) Investigate the learning approaches of the students, to understand the extent to which the learning context and assessments influenced the choice of the approaches;

(2) Gain an understanding of the personal constructs of the students about their course experience;

(3) Investigate the effectiveness of the learning/teaching activities as perceived by students.

Data collection and analysis

We utilized a two-phase, sequential mixed methods approach (Creswell & Plano Clark, 2007) to collect data. We investigated the learning approaches of the students by administering R-SPQ-2, the revised two-factor version of the study process questionnaire (Biggs, Kember, & Leung, 2001). The approach scores of the students were tabulated and correlated with the course grades, and the relationship of the course grades with exam marks was also investigated. To gain an understanding of the personal constructs of the students, and to investigate the perceived effectiveness of the activities, we purposefully selected a cross-section of students (n = 14) and interviewed them using the repertory grid technique (Kelly, 1955). A sample size of 15–25 is considered sufficient for an interview with a repertory grid to generate enough constructs to approximate the universe defined by the intervention (Tan & Hunter, 2002). We tried to minimize implementation and attitudinal effects of the researcher with a data collection log and a reflective log maintained by the first author. Ongoing discussion with the second author (who supervised the project) ensured that the threat to internal validity in the form of data collector bias was dealt with, and care was taken not to overlook any results or responses (Fraenkel & Wallen, 2006).

The R-SPQ-2 questionnaire

Scores of the R-SPQ-2 questionnaire are seen as outcomes of teaching, and indicative of the quality of the teaching environment (Biggs et al., 2001). The measure consists of 20 items split into two main scales of surface approach and deep approach. Each main scale is comprised of two subscales, "motive" and "strategy".


Deep motive is characterized as driven by intrinsic interest, and deep strategy as intention to maximize meaning, while surface motive is characterized as indicating fear of failure, and surface strategy as focus on a narrow target or rote learning (Biggs et al., 2001). Each of the subscales has five items. Responses are recorded on a 5-point Likert-type scale from 1 (this item is never or only rarely true of me) to 5 (this item is always or almost always true of me). Cronbach α values for scale reliability in this study are reported at an acceptable level of 0.80 for the deep approach and 0.81 for the surface approach. By requiring students to respond to items reworded to suit the course context, the ongoing approach scores were expected to reveal how programming tasks were handled. Empirical support for the R-SPQ-2F, adapted for programming studies, has been reported by the multi-institutional Building Research in Australasian Computing Education (BRACE) study (Fincher et al., 2005).
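As a concrete illustration of this scoring scheme, the short Java sketch below sums the motive and strategy items into a deep or surface approach score. It is our own sketch rather than part of the instrument: the item numbers passed in would have to follow the published R-SPQ-2F scoring key, which we do not reproduce here.

// Illustrative scoring of the 20-item questionnaire: an approach score is the sum
// of its motive and strategy subscale items (five items each, rated 1-5), giving a
// possible range of 10-50 per approach.
public class ApproachScores {

    // responses holds the 20 ratings (1-5) indexed by item number 1..20;
    // itemNumbers lists the items belonging to one subscale (e.g. deep motive).
    static int subscaleScore(int[] responses, int[] itemNumbers) {
        int sum = 0;
        for (int item : itemNumbers) {
            sum += responses[item - 1];
        }
        return sum;
    }

    static int approachScore(int[] responses, int[] motiveItems, int[] strategyItems) {
        return subscaleScore(responses, motiveItems) + subscaleScore(responses, strategyItems);
    }
}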

Repertory grid interviews

The repertory grid technique was developed by Kelly (1955). The technique was based on his belief of constructive alternativism, which assumes that different people have different ways of construing the same thing. The repertory grid offers a way to combine qualitative and quantitative methods of knowledge elicitation at the individual and aggregated levels (Rocco et al., 2003). Zuber-Skerritt (1991) notes that repertory grids provide richer data than a questionnaire and reduce researcher interference or bias as compared to more traditional interviewing techniques.

Learning and teaching activities in the programming course were represented as elements in each grid (see columns in Figure 3). Using the built-in triadic elicitation script in RepGrid IV software (Gaines & Shaw, 2005), three elements were offered at a time. Each student was asked in which way two elements were alike (emergent pole) in being helpful for programming and differed from the third (implicit pole). The perceptions or constructs (see rows in Figure 3) were noted. By convention, the emergent pole is on the left-hand end of the grid and defines the "1" end of a 5-point scale, and the implicit pole is on the right-hand end of the grid and defines the "5" end of a 5-point scale. Each bi-polar construct was then used to rate all the elements on the 5-point scale defined by the poles of the elicited construct. The process was repeated until no more constructs could be elicited.

At the end of the construct elicitation period, an additional construct (Overall learnt a lot/Overall did not learn much) was supplied and the student was asked to rate all the elements on this construct (Honey, 1979). Each interview lasted about an hour during which 9–10 constructs per student were elicited.


The graphic plots of the focused cluster analysis and principal components spatial analysis, generated automatically by the software from the ratings (see Figures 3 and 4), were shown to the student to engender a discussion. Any required adjustments to ratings or wording of constructs were immediately carried out. The interested reader is referred to Jankowicz (2004) for the interpretation of the figures.

The correlation between the ratings of a construct and the ratings of the supplied construct was measured by computing the sum of differences and the percentage similarity score. The importance of each construct as high, intermediate, or low (H-I-L) for an individual was derived by grouping the similarity scores for each interviewee (Honey, 1979). The first author thematically coded all the descriptive constructs into emergent categories (Jankowicz, 2004). Two independent coders allocated the bipolar constructs to the construct categories. Acceptable inter-rater reliability figures are reported as: 84.52% for average pairwise percent agreement; 0.817 for average pairwise Cohen's κ; 0.817 for Krippendorff's α (Hayes & Krippendorff, 2007). For each category a mean percentage similarity score was computed. This score was used to estimate the relative importance of that category. The aggregated set of constructs represents the categorized views of all the individuals and conveys a summary of individual meanings.
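For readers unfamiliar with Honey's procedure, the sketch below shows one common way of turning the sum of differences between an elicited construct's ratings and the supplied construct's ratings into a percentage similarity score. The exact formula is our assumption and stands in for the calculation performed by the RepGrid software.

// Illustrative percentage similarity between one elicited construct and the
// supplied construct, both rated 1-5 across the same elements (grid columns).
public class SimilarityScore {

    static double percentSimilarity(int[] constructRatings, int[] suppliedRatings) {
        int sumOfDifferences = 0;
        for (int i = 0; i < constructRatings.length; i++) {
            sumOfDifferences += Math.abs(constructRatings[i] - suppliedRatings[i]);
        }
        // On a 5-point scale the largest possible difference per element is 4, so the
        // score is 100 for identical ratings and 0 for maximally different ratings.
        int maxPossible = 4 * constructRatings.length;
        return 100.0 * (1.0 - (double) sumOfDifferences / maxPossible);
    }
}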

Figure 3. Sample RepGrid IV focused cluster analysis output. The % similarity scores of adjacent elements or constructs are provided on scales in numeric form and graphically in the form of dendrograms. The software provides dark shading to the ratings of 4 and 5, light shading to the ratings of 3, and no shading to the ratings of 1 and 2.


Additionally, to investigate the perceived effectiveness of the activities, the ratings on the supplied construct were treated as a set of rating scales for the elements (Jankowicz, 2004), reverse scored and ranked from 1 (Not at all effective) to 5 (Most effective).

Findings

First, we present the results of the R-SPQ-2 questionnaire, along with the correlations of the approach scores with the course grades, and the correlations of the course grades with the exam marks. Next, we describe the personal constructs of the students elicited from the repertory grid interviews. Finally, we give the numeric ratings from the grids for the perceived effectiveness of the learning/teaching activities.

Findings from questionnaire

The scores on the R-SPQ-2 questionnaire (n = 21) are summarized in Table 2. Summing up the items of the two subscales (motive and strategy) constituted the approach scores. The mean and standard deviation show that the students used both deep and surface approaches.

Figure 4. Sample RepGrid IV principal components spatial analysis output. The data has been rotated, through principal components analysis, to position the elements in a two-dimensional plot where distances between elements reflect their ratings according to the set of constructs.


Mean scores for deep approach (32.48) were higher than mean scores for surface approach (27.14), showing greater use of the deep approach to learning. Mean scores for the motive subscales were higher than mean scores for the strategy subscales, showing that achievement was a major motivation for use of strategies.

The relationship of approach and course grades was non-significant. The results indicated a positive relationship between deep approach and course grades (r = 0.24, n = 21, p > 0.05), and a negative relationship between surface approach and course grades (r = −0.26, n = 21, p > 0.05). Partial correlations were calculated to explore the relationship of deep approach and course grades while controlling for surface approach. The result was a non-significant relationship with an increase in effect size between deep approach and the course grades (r = 0.29, n = 21, p > 0.05). The relationship of exam marks and course grades was investigated using the Pearson product-moment correlation coefficient. There was a positive correlation between the two variables (r = 0.92, n = 21, p < 0.05) indicating a strong relationship (Cohen, 1988), with higher exam marks associated with higher course grades.
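For reference, the first-order partial correlation reported above can be computed from the three pairwise correlations; the minimal sketch below shows the standard formula (the variable names are ours).

// Standard first-order partial correlation: the association between deep approach
// scores and course grades after removing the shared influence of surface approach.
public class PartialCorrelation {

    static double partial(double rDeepGrade, double rDeepSurface, double rSurfaceGrade) {
        double numerator = rDeepGrade - rDeepSurface * rSurfaceGrade;
        double denominator = Math.sqrt((1 - rDeepSurface * rDeepSurface)
                * (1 - rSurfaceGrade * rSurfaceGrade));
        return numerator / denominator;
    }
}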

Findings from repertory grids

Personal constructs

A total of 112 constructs were elicited from the students (n = 14) during the interviews. The construct categories and summaries are shown in Table 3. The mean similarity score was used to rank the categories and indicates the relative importance of that category for the students. The first column shows the category name, description, and an example of a construct (with similarity score) that represented the category. For example, in the first row the category name is "Learning through information". This is followed by the description of the category. The mean similarity score for the constructs (n = 7) in this category was 53.57%. In total, two of the constructs in this category were rated high (H), three were rated as intermediate (I), and two were rated as of low (L) importance to the particular student from whom the construct was elicited.

Table 2. Approach scores of students on the R-SPQ-2 questionnaire (n = 21).

                        Motive              Strategy            Approach
                        Surface   Deep      Surface   Deep      Surface   Deep
Group (possible range)  (5–25)    (5–25)    (5–25)    (5–25)    (10–50)   (10–50)

Mean                    13.67     16.38     13.48     16.10     27.14     32.48
Standard deviation       3.41      2.97      3.79      3.11      6.48      5.71

Note: For each group, the range of possible values is shown in parentheses.


Table 3. Categories and summaries of constructs in order of importance.

Learning through information (n = 7, m = 53.57%; similarity scores: 2H, 3I, 2L; e.g. Theory – Communication and organization skills, 70%, H)
This category relates to learning from resources such as lecturer slides, notes, books, and web pages.
Summary: Students differentiated learning theoretical concepts from developing collaborative, communicative or organizational skills in other activities. Resources were also categorized as freely available or specifically given to students.

Learning as experiencing (n = 19, m = 52.37%; similarity scores: 8H, 4I, 7L; e.g. More variety – Less variety, 75%, H)
This category describes the environment or the learning experience in the course, i.e. a value judgment of the course or learning process.
Summary: Course activities that were perceived to be active and having variety were considered to be more useful and related to long-term learning. Constructs relating to compulsory assignments had lower mean similarity scores.

Learning through reflection (n = 26, m = 45.77%; similarity scores: 5H, 12I, 9L; e.g. Get different ideas – Examples and information, 70%, H)
This category encapsulates constructs about the learning process, personal understanding or knowledge development.
Summary: Activities that generated ideas or reflected on the skill or ability of the student to program were considered more effective and leading to understanding. Formal transmission of knowledge was perceived as ineffective, with self-study skills likely to be only for individual benefit.

Learning through scaffolding (n = 20, m = 43.75%; similarity scores: 6H, 6I, 8L; e.g. Showed errors and mistakes – No solution for errors and mistakes, 80%, H)
This category relates to learning from feedback, and solutions from peers or the lecturer.
Summary: Feedback was considered useful for increasing understanding and for improving the work. Students could distinguish between sharing knowledge and having to ask for help. There was no clear consensus about the effectiveness of help before or after completing an assignment.

Learning by coding (n = 14, m = 40.71%; similarity scores: 4H, 2I, 8L; e.g. Ideas for code – Ideas for plan, 60%, L)
This category describes programming related activities and includes reading and writing code, and designing a program.
Summary: Students perceived class activities as being more helpful for learning programming. There is no clear consensus on what constitutes effectiveness for learning programming – the need to know how to write code was perceived to be as important as learning to design a program. Students were able to differentiate using programs from thinking about programming.

Learning from assessment (n = 13, m = 40.38%; similarity scores: 2H, 5I, 6L; e.g. Shows errors and progress – Final assessment, 60%, I)
This category summarizes aspects that relate to assessments and grading.
Summary: Students were able to differentiate between types of assessments. Formative assessments, that allowed students to see their errors and receive comments about their work, were considered as being more helpful than summative assessments. The least effective assessments were perceived as those that simply returned the grade with no comments (e.g. written exam).

Learning from collaboration (n = 13, m = 33.85%; similarity scores: 1H, 5I, 7L; e.g. Sharing solving problems – Not useful, 65%, I)
This category relates to collaboration among students for pair programming and team projects.
Summary: Students mostly perceived group work as creating conflicts, though some did see the benefit of team work for sharing ideas and solving problems. One student expressed a lack of experience with group work as a cause of the problem.


The second column gives a summary of the personally salient constructs in the category and indicates if there is consensus in the group. Constructs relating to informational resources have the highest mean similarity score for all constructs, indicating that these resources were considered the most important for learning programming. Constructs relating to collaborative work have the lowest mean similarity score, indicating that the students had some problems with the nature of the work.

Perceived effectiveness of activities

The ratings that the 14 students gave to each element in the repertory grid, on the Overall learnt a lot/Overall did not learn much construct, are shown in Figure 5. Material from the lecturer was deemed as overall most effective for learning (84%). Lecturer feedback in the form of written comments or personal communication and participation in pair programming (79%) were viewed as encouraging learning. Help from peers (77%), and working in a team project (74%) were also appreciated. Use of BlueJ, Violet and Jeliot software (71%) and the links to different web-based resources (70%) were considered moderately effective for learning. The reflective work (59%), the practice quizzes (66%), and the written exam (67%) were ranked as least helpful for learning programming.



Discussion

The results of implementing our introductory programming course are encouraging and insightful and validate some of our design choices. Our investigation of the learning approaches of the students suggests some influence of the learning context and assessments on the choice of the approaches. Our understanding of the personal constructs and the perceptions about the effectiveness of the learning and teaching activities reveals that some course elements worked better than others. The findings have motivated changes to the course design and generated an agenda for action in future research cycles.

We found that our students were not strongly aligned with either deep or surface learning approaches. The students seem to have adapted their learning approach on a needs basis, using the surface/deep learning approach that they perceived was the most effective for the learning situation being faced. We found deep learning approaches were positively correlated with course grades, while surface learning approaches were negatively correlated. These correlations, while not statistically significant, show how the course context influenced the handling of the programming tasks. The SOLO-based grade descriptors, for the assignments and projects, appear to have benefited students in meeting the requirements of quality criteria. We recognize the intuitively appealing result that by appropriately structuring the learning outcomes and assessments, constructive alignment can affect the learning approach adopted by students.

Figure 5. Effectiveness of learning/teaching activities as perceived by students. Sums of the ranks are shown as percentages.


We do not claim that course grades (based on a combination of quizzes, exam, assignments, and project marks) accurately reflect the students' programming ability. Our programming-related learning outcomes (please refer to ILOs 1–3 in Table 1) were formulated to show increasing levels of understanding. The exam tested the ability to recognize vocabulary, trace, implement, and translate code. The assignments and project dealt with higher order skills. The strong correlation of exam marks (weighted 10%) with the course grades reveals that the ability to read and comprehend code had an impact on being able to design and model programs. Some of our students seem to have traversed from attaining theoretical competencies to practical application of their understanding.

Our investigation of the personal constructs and the perceptions of effectiveness of the learning and teaching activities revealed which factors might be important for students and should thereby be the focus for future course design. We have learnt that students consider a choice of resources, assignments, and activities as helpful for programming. Our students prefer a variety of informational sources created specifically for them rather than links to web resources available online. Preparing and collecting the material required extra preparation, but our use of narrative media for varied content seems to have yielded significant benefits to the students.

The descriptions of the course experiences show that our students were aware of the manner in which we used the learning and teaching activities and media to create varied and active learning experiences. Our choice of adaptive and interactive media meant additional overhead for students, but the variety of software assisted students in understanding some concepts related to classes and objects. The reflections on the personal learning process indicate that the students were aware of the distinction between learning of facts and learning for understanding. We are inclined to believe that the reflective journal writing activity was not favored due to a general reluctance to write and the weekly nature of the task.

Understanding and knowledge of programming was perceived to develop through skills-based activities and brainstorming together for ideas. The written exam and practice quizzes were not very popular as our students preferred to design and create programs, rather than answer questions about vocabulary or trace code. Detailed feedback was deemed effective for learning. We admit that it entails a lot of work to use grade descriptors and to return exhaustive comments on assignments and projects. Pair programming was preferred over programming by oneself. Peer help and the team project were ranked as being only moderately useful for effective learning. We note the presence of mixed nationalities and the possible negative impact of cultural differences on collaborative work and the use of communicative media.

Critical reflection on the first action research cycle has motivated refinements to the course design in the next cycle.


We recognize that there is a need for one-on-one tutoring in the students' native language, in addition to feedback from the lecturer. We plan to incorporate peer tutoring for students as an additional element of peer managed activity. We intend to design materials in-house to bring out critical aspects to tackle the persistent OOP-related errors and misconceptions that we have noticed. We have also been alerted to the need for formal instruction for students to develop essential team skills. We propose to introduce collaborative media for sharing code and designs. We intend to modify the alignment of outcomes and assessments, by increasing the weighted percentage for the exam marks with a proportionate reduction in group project percentage. We understand that reading and code tracing activities are not popular with the students. Our challenge will be to help our novice programmers to learn these essential skills while giving opportunities to model and design programs.

Our long-term future work includes:

(1) The investigation of changes in the learning approach scores of students before and after the programming course.

(2) Comparison study of exam marks and course grades to assess the success of learning outcomes related to programming.

(3) Comparative study of approaches of cohorts of students with different instructors.

(4) Study of preferences for resources based on learning approaches.

Conclusions

In this article, we have demonstrated an instructional design that takes a holistic approach to learning and teaching. We have adopted multiple perspectives of learning and turned to pedagogy of phenomenography to build dimensions of variation. We have applied constructive alignment to the course context. We have incorporated constructivist activities to enable learning. We have integrated the use of technology for learning resources, for feedback on assessment, for engaging learners, and for providing variation in programming experiences.

Our students have shown a glimmer of change in their perceptions of programming. The budding notion of "programming thinking" is visible. For the first author, the adoption of an epistemology based on "balance, inclusion, and connection" (Miller, 2007, p. 6) has crafted an understanding that a holistic educator honors students' ways of learning. This understanding has transformed her thinking about educational practice, as seeing the world through lenses of both learning and teaching.

Jacobs (2000) suggests that evaluators of educational innovations should combine formative, summative, and illuminative measures.


In a formative evaluation of the educational innovation, we have highlighted areas of weakness with the aim of improvement. We have used the illuminative approach to focus on processes within the classroom rather than concentrate on outcomes related to the instructional design. In the future, we hope to submit a summative evaluation of the impact with a demonstration of outcomes as evidence.

Anderson and Herr (1999) have linked five validity criteria (outcome, process, democratic, catalytic, and dialogic) to the goals of action research. We do not claim any "successful" outcome in terms of improvement in student learning due to the course design. We see the most important outcome as our increased understanding of the learning approaches and preferences of our students. Process validity questions the extent to which problems are framed and solved to permit ongoing learning. Our holistic perspective has enabled us to integrate theories to constructively blend learning and teaching activities and assessment tasks with an understanding of how students learn to program. The results are relevant to the local setting and demonstrate democratic validity by taking into account the multiple perspectives of our students. Catalytic validity refers to our professional growth and increased knowledge about course design. The discussions between the authors have contributed to our growth as reflective practitioners. Our action research initiative was informed by theory. We hope that our practice will enrich learning for our students and in doing so contribute to the theory of course design for OOP.

Acknowledgments

The authors express their gratitude to the students who participated in the research and the coders who categorized the constructs. The authors also thank the reviewers and the editor for the helpful comments and suggestions.

References

Anderson, G.L., & Herr, K. (1999). The new paradigm wars: Is there room for rigorous practitioner knowledge in schools and universities? Educational Researcher, 28(5), 12–40.

Bednarik, R., Moreno, A., & Myller, N. (2006). Program visualization for programming education – Case of Jeliot 3. Association for Computing Machinery New Zealand Bulletin, 2. Retrieved March 6, 2007, from http://en.scientificcommons.org/16952695

Ben-Ari, M. (1998). Constructivism in computer science education. ACM SIGCSE Bulletin, 30(1), 257–261.

Biggs, J. (1996). Enhancing teaching through constructive alignment. Higher Education, 32(3), 347–364.

Biggs, J., & Collis, K.F. (1982). Evaluating the quality of learning: The SOLO taxonomy. New York, NY: Academic Press.

Biggs, J., Kember, D., & Leung, D.Y.P. (2001). The revised two-factor study process questionnaire: R-SPQ-2F. British Journal of Educational Psychology, 71(1), 133–149.

Biggs, J., & Tang, C. (2007). Teaching for quality learning at university: What the student does (3rd ed.). Maidenhead: SRHE & Open University Press.

Booth, S. (1997). On phenomenography, learning and teaching. Higher Education Research & Development, 16(2), 135–158.

Brabrand, C. (2007). Constructive alignment for teaching model-based design for concurrency. In Proceedings of the 2nd Workshop on Teaching Concurrency (TeaConc'2007) (pp. 1–17). Siedlce: Publishing House of University of Podlasie.

Bradley, C., & Boyle, T. (2004). The design, development, and use of multimedia learning objects. Journal of Educational Multimedia and Hypermedia, 13, 371–389. Retrieved from http://www.aace.org/pubs/jemh/

Bruce, C., McMahon, C., Buckingham, L., Hynd, J., Roggenkamp, M., & Stoodley, I. (2004). Ways of experiencing the act of learning to program: A phenomenographic study of introductory programming students at university. Journal of Information Technology Education, 3, 143–160.

Caspersen, M.E., & Bennedsen, J. (2007). Instructional design of a programming course: A learning theoretic approach. In Proceedings of the 3rd International Computing Education Research Workshop (ICER) (pp. 111–122). New York, NY: ACM Press.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum Associates.

Creswell, J.W., & Plano Clark, V.L. (2007). Designing and conducting mixed methods research. Thousand Oaks, CA: Sage.

Eckerdal, A., & Berglund, A. (2005). What does it take to learn "programming thinking"? In R. Anderson, S.A. Fincher, & M. Guzdial (Eds.), Proceedings of the 2005 International Workshop on Computing Education Research (pp. 135–142). New York, NY: ACM Press.

Eckerdal, A., & Thuné, M. (2005). Novice Java programmers' conceptions of "object" and "class", and variation theory. In Proceedings of the 10th Annual Conference on Innovation and Technology in Computer Science Education (ITiCSE) (pp. 89–93). New York, NY: ACM Press.

English, J. (2004). Automated assessment of GUI programs using JEWL. ACM SIGCSE Bulletin, 36(3), 137–141.

Felder, R., & Silverman, L. (1988). Learning and teaching styles in engineering education. Engineering Education, 78(7), 674–681.

Fincher, S., Baker, B., Box, I., Cutts, Q., de Raadt, M., Haden, P., . . . Tutty, J. (2005). Programmed to succeed?: A multi-national, multi-institutional study of introductory programming courses. Retrieved July 2, 2007, from Computing Laboratory, University of Kent Web site: http://www.cs.kent.ac.uk/pubs/2005/2157/content.pdf

Fraenkel, J.R., & Wallen, N.E. (2006). How to design and evaluate research in education (6th ed.). New York, NY: McGraw-Hill.

Fuller, U., Johnson, C.G., Ahoniemi, T., Cukierman, D., Hernan-Losada, I., Jackova, J., . . . Thompson, E. (2007). Developing a computer science-specific learning taxonomy. ACM SIGCSE Bulletin, 39(4), 152–170.

Fuller, U., & Keim, B. (2008). Assessing students' practice of professional values. In Proceedings of the 13th Annual Conference on Innovation and Technology in Computer Science Education (ITiCSE) (pp. 88–92). New York, NY: ACM Press.

Gaines, B., & Shaw, M. (2005). RepGrid IV [Computer software]. Retrieved October 25, 2007, from http://repgrid.com/

Hadjerrouit, S. (1999). A constructivist approach to object-oriented design and programming. ACM SIGCSE Bulletin, 31(3), 171–174.

Hayes, A.F., & Krippendorff, K. (2007). Answering the call for a standard reliability measure for coding data. Communication Methods and Measures, 1(1), 77–89.

Honey, P. (1979). The repertory grid in action: How to use it to conduct an attitude survey. Industrial and Commercial Training, 11(11), 452–459.

Horstmann, C.S., & Pellegrin, A. (2009). Violet (Version 0.21.0) [Computer software]. Retrieved July 12, 2008, from http://alexdp.free.fr/violetumleditor

Jacobs, C. (2000). The evaluation of educational innovation. Evaluation, 6(3), 261–280.

Jankowicz, D. (2004). The easy guide to repertory grids. Chichester: Wiley.

Kelly, G.A. (1955). The psychology of personal constructs. New York, NY: Norton.

Kölling, M., Quig, B., Patterson, A., & Rosenberg, J. (2003). The BlueJ system and its pedagogy. Computer Science Education, 13(4), 249–268.

Krathwohl, D.R., Bloom, B.S., & Masia, B.B. (1964). Taxonomy of educational objectives: The classification of educational goals. Handbook II: Affective domain. New York, NY: David McKay.

Laurillard, D. (2002). Rethinking university teaching: A conversational framework for the effective use of educational technology (2nd ed.). London: Routledge Falmer Press.

Lister, R., Adams, E.S., Fitzgerald, S., Fone, W., Hamer, J., Lindholm, M., . . . Thomas, L. (2004). A multi-national study of reading and tracing skills in novice programmers. ACM SIGCSE Bulletin, 36(4), 119–150.

Marton, F., & Booth, S. (1997). Learning and awareness. Mahwah, NJ: Lawrence Erlbaum Associates.

Marton, F., Runesson, U., & Tsui, A. (2004). The space of learning. In F. Marton & A. Tsui (Eds.), Classroom discourse and the space of learning (pp. 3–40). Mahwah, NJ: Lawrence Erlbaum Associates.

Marton, F., & Trigwell, K. (2000). Variatio est mater studiorum. Higher Education Research & Development, 19(3), 381–395.

Mayes, T., & de Freitas, S. (2004). Review of e-learning theories, frameworks and models: Stage 2 of the e-learning models desk study. Retrieved February 9, 2007, from JISC Web site: http://www.jisc.ac.uk

McCracken, M., Almstrum, V., Diaz, D., Guzdial, M., Hagan, D., Kolikant, Y.B.-D., . . . Wilusz, T. (2001). A multi-national, multi-institutional study of assessment of programming skills of first-year CS students. ACM SIGCSE Bulletin, 33(4), 125–140.

McNiff, J., & Whitehead, J. (2002). Action research: Principles and practice (2nd ed.). London: Routledge Falmer Press.

Miller, J. (2007). The holistic curriculum (2nd ed.). Toronto: OISE Press.

Moritz, S.H., Wei, F., Parvez, S.M., & Blank, G.D. (2005). From objects-first to design-first with multimedia and intelligent tutoring. In Proceedings of the 10th Annual SIGCSE Conference on Innovation and Technology in Computer Science Education (ITiCSE) (pp. 99–103). New York, NY: ACM Press.

Patterson, A., Kölling, M., & Rosenberg, J. (2003). Introducing unit testing with BlueJ. ACM SIGCSE Bulletin, 35(3), 11–15.

Ragonis, N., & Ben-Ari, M. (2005). A long-term investigation of the comprehension of OOP concepts by novices. Computer Science Education, 15(3), 203–221.

Ramsden, P. (2005). The context of learning in academic departments. In F. Marton, D. Hounsell, & N. Entwistle (Eds.), The experience of learning (pp. 198–216). Edinburgh: University of Edinburgh, Centre for Teaching, Learning and Assessment.

Robins, A., Rountree, J., & Rountree, N. (2003). Learning and teaching programming: A review and discussion. Computer Science Education, 13(2), 137–172.

Rocco, T.S., Bliss, L.A., Gallagher, S., Perez-Prado, A., Alacaci, C., Dwyer, E.S., et al. (2003). The pragmatic and dialectical lenses: Two views of mixed methods use in education. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in social and behavioral research (pp. 595–615). Thousand Oaks, CA: Sage.

Rößling, G., & Kothe, A. (2009). Extending Moodle to better support computing education. In Proceedings of the 14th Annual Conference on Innovation and Technology in Computer Science Education (ITiCSE) (pp. 146–150). New York, NY: ACM Press.

Rountree, N., Rountree, J., & Robins, A. (2002). Predictors of success and failure in a CS1 course. ACM SIGCSE Bulletin, 34(4), 121–124.

Runesson, U. (2006). What is it possible to learn? On variation as a necessary condition for learning. Scandinavian Journal of Educational Research, 50(4), 397–410.

Sorva, J. (2007). Students' understandings of storing objects. In R. Lister & Simon (Eds.), Proceedings of the Seventh Baltic Sea Conference on Computing Education Research (Koli Calling 2007), Koli, Finland (Vol. 88, pp. 127–135). Darlington: ACS.

Stamouli, I., & Huggard, M. (2006). Object oriented programming and program correctness: The students' perspective. In Proceedings of the Second International Computing Education Research Workshop (ICER) (pp. 109–118). New York, NY: ACM.

Suhonen, J., Thompson, E., Davies, J., & Kinshuk, K. (2007). Applications of variation theory in computing education. In R. Lister & Simon (Eds.), Proceedings of the Seventh Baltic Sea Conference on Computing Education Research (Koli Calling 2007) (pp. 217–220). Darlington: ACS.

Tan, F.B., & Hunter, M.G. (2002). The repertory grid technique: A method for the study of cognition in information systems. MIS Quarterly, 26(1), 39–57.

Thompson, E. (2007). Holistic assessment criteria: Applying SOLO to programming projects. In S. Mann & Simon (Eds.), Proceedings of the Ninth Australasian Computing Education Conference (ACE 2007) (Vol. 66, pp. 155–162). Darlington: ACS.

Thota, N., & Whitfield, R. (2009a). Constructive Alignment: How? In A. Pears & C. Schulte (Eds.), Ninth Koli Calling International Conference on Computing Education Research (Koli Calling) (pp. 115–116). Koli, Finland.

Thota, N., & Whitfield, R. (2009b). Use of CALMS to enrich learning in introductory programming courses. In Proceedings of the 17th International Conference on Computers in Education, Hong Kong [CD-ROM].

Trigwell, K., & Prosser, M. (1991). Improving the quality of student learning: The influence of learning context and student approaches to learning on learning outcomes. Higher Education, 22(3), 251–266.

Trigwell, K., & Prosser, M. (1997). Towards an understanding of individual acts of teaching and learning. Higher Education Research & Development, 16(2), 241–252.

Trigwell, K., Prosser, M., & Ginns, P. (2005). Phenomenographic pedagogy and a revised Approaches to teaching inventory. Higher Education Research & Development, 24(4), 349–360.

Van Gorp, M.J., & Grissom, S. (2001). An empirical evaluation of using constructive classroom activities to teach introductory programming. Computer Science Education, 11(3), 247–260.

Whalley, J.L., & Robbins, P. (2007). Report on the fourth BRACELET workshop. Bulletin of Applied Computing and Information Technology, 5. Retrieved from http://www.naccq.ac.nz

Wiedenbeck, S. (2005). Factors affecting the success of non-majors in learning to program. In Proceedings of the First International Computing Education Research Workshop (ICER) (pp. 13–24). New York, NY: ACM Press.

Wulf, T. (2005). Constructivist approaches for teaching computer programming. In Proceedings of the 6th Conference on Information Technology Education (SIGITE) (pp. 245–248). New York, NY: ACM Press.

Zuber-Skerritt, O. (1991). Eliciting personal constructs of research, teaching and/or professional development. International Journal of Qualitative Studies in Education, 4(4), 333–340.
