A current literature review of the concept of performance in virtual teams
By Gilles E. St-Amant (1), Claire IsaBelle (2), Hélène Fournier (3)
Cahier de recherche 04-2008
1- Gilles E. St-Amant, Professor of Management, UQAM, Case postale 8888, succursale Centre-Ville, Montréal (Québec) Canada H3C 3P8. Email: [email protected]
2- Claire IsaBelle, Professor, Université d’Ottawa, 145 rue J.J.-Lussier, Ottawa, Ontario, K1N 6N5. Email: [email protected]
3- Hélène Fournier, Researcher, Institute for Information Technology, National Research Council Canada, 55-1113 Chemin Crowley Farm, Moncton (N.-B.), E1A 7R1. Email: [email protected]
N.B. Working papers are preliminary versions of papers circulated on a limited basis for information and discussion. They have not undergone any editorial or refereeing process and should not be quoted or reproduced without the written consent of the authors. Comments and suggestions are welcome and should be directed to the authors.
To consult the VDR-ESG working papers, visit our Web site: http://www.esg.uqam.ca/document/
Legal deposit: National Library of Canada, 2008. Legal deposit: Bibliothèque nationale du Québec, 2008.
Page 1 de 28
A current literature review of the concept of performance in virtual teams
Dr. Gilles E. St-Amant is a professor at the University of Quebec at Montreal. His work focuses on the organizational and individual capabilities needed for collaborative work using ICT, especially videoconferencing and Group Decision-Support Systems (GDSS). He heads a research group on the management of individual and organizational capabilities. In the last five years, his teaching, research and practical work have involved the management of individual and organizational e-business capabilities, and knowledge and business processes.
Dr. Claire IsaBelle is a professor at the University of Ottawa. Her work focuses on training contexts for school principals and on educational leadership in facilitating teacher education and teachers' use of ICT to improve teaching and learning conditions. She is also interested in continuing education for school principals through collaborative work in the school environment.
Dr. Hélène Fournier is a researcher at the Institute for Information Technology, National Research Council Canada, in Moncton.
We thank Jean-François DENAULT, research assistant, for his help.
Abstract
Virtual teams are an emerging organizational phenomenon. Nonetheless, to substantiate performance, it is important to use proper assessment tools. We completed a review of the existing literature on performance and virtual teams and uncovered eighteen articles that describe their approach to measuring performance. Performance is measured with either a grading instrument, through quantitative output, or through participant self-assessment. There are three key factors in choosing a performance evaluation tool for a virtual team: the length of time the virtual team will be in operation, the team's degree of virtuality, and the temporal dispersion between participants. Grading seems best adapted to short-term, artificial teams and quantified results to longer-term teams, while self-assessment can be used in both types of environments.
Résumé: Increasingly, businesses and educational settings ask individuals to work in so-called virtual teams. This article presents a literature review on the performance of distributed work teams. Based on specific criteria, the instruments and methodology used, and the types of participants, the article summarizes the results of eighteen studies. We find three important factors in performance evaluation instruments: the duration of teamwork, the degree of virtuality, and the dispersion between participants.
Introduction
Virtual teams are increasingly present in organizations. From academic to business settings,
distributed teams are believed to improve organizational performance and save costs, while
favoring the use of dispersed expertise. Nonetheless, to confirm performance, one must have
the proper tools to evaluate these teams. As such, a review of existing literature on
performance and virtual teams was undertaken to document how researchers currently
measure said performance. After defining our two main concepts (virtual team and
performance) and our methodology, we will present the findings of our review, and offer
suggestions for future research in this field.
1. Concepts
1.1 Virtual team
A virtual team is defined as a group of collaborators who use technology-supported
communications to accomplish their goals and objectives. They are usually geographically
dispersed and interact through interdependent tasks while guided by a common purpose. In
fact it is this technology component that makes the concept of virtuality possible (Peters,
2003). Virtual teams can exist for a short period of time, or be continuous.
Some authors (Maznevski & Chudoba, 2000) add a global component to virtual teams,
specifying that global virtual teams are teams that “work and live in different countries”. Bell
and Kozlowski (2002) also argued that virtual teams are dispersed and do not meet in the conventional sense. Other authors, such as Kratzer, Leenders, and Van Engelen (2006), argue that virtuality is a matter of degree, and that the extent to which a team is virtual is determined by three factors: the proximity of team members, the communication modality, and team task communication.
Many factors have an impact on virtual team performance such as trust (Jarvenpaa, Knoll, &
Leidner, 1998; Panteli, 2004; Powell, 2006), team structure (Piccoli, Powell, & Ives, 2004),
cohesion (Salisbury, 2006), task interdependence (Rico & Cohen, 2005), group
interdependence (Balthazard, Potter, & Warren, 2004a), team empowerment (Kirkman,
2004), and task coordination (Maznevski & Chudoba, 2000).
1.2 Performance
Our second concept for this literature review is performance. In its simplest form, performance
is seen as the ability to solve a problem and reach the correct answer (Corbitt, Gardiner, &
Wright, 2004; Piccoli et al. 2004; Rico & Cohen, 2005; Staples & Zhao, 2006). Balthazard et
al. (2004a) described performance as the absolute difference between the team’s consensus
solution and the expert solution. Authors who privilege this point of view usually use grading instruments to evaluate performance.
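The expert-distance scoring these grading instruments rely on can be sketched minimally. The following is a hypothetical illustration, not any study's published instrument: it assumes team and expert solutions are expressed as rankings of the same items, and scores a team by the sum of absolute rank differences from the expert ranking (lower is better).

```python
# Hypothetical sketch of expert-distance scoring (an assumed setup, not the
# published instrument). Both solutions rank the same items; the score is the
# sum of absolute rank differences, so lower means better performance.
def expert_distance_score(team_ranking, expert_ranking):
    assert set(team_ranking) == set(expert_ranking), "rankings must cover the same items"
    team_pos = {item: i for i, item in enumerate(team_ranking)}
    expert_pos = {item: i for i, item in enumerate(expert_ranking)}
    return sum(abs(team_pos[item] - expert_pos[item]) for item in expert_pos)

# A team that swaps the expert's top two items accumulates a distance of 2.
print(expert_distance_score(["water", "mirror", "map"], ["mirror", "water", "map"]))
```

Survival-style exercises such as the desert survival task discussed later in this review are typically scored in this spirit, with an expert ranking serving as the 'correct' answer.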
In an organizational setting, Langevin and Picq (2001) considered that a performing organization is one that achieves its objectives. They defined two dimensions of performance: an economic dimension and a social dimension. Performance can also be seen as the
result of an individual’s effort (Ahuja, Galletta, & Carley, 2003), reflecting a more qualitative
approach. Other authors see performance not only in terms of quantitative output, but also as
having qualitative elements; some of these qualitative elements include solution acceptance
(Balthazard et al. 2004a), satisfaction (Paul, Seetharaman, Samarah, & Mykytyn, 2004) and
action quality (Maznevski & Chudoba, 2000).
Peters (2003) has theorized that performance grids developed for traditional teams could
theoretically be used for virtual teams; this was supported by a study done by Potter and
Balthazard (2002), but our review did not identify many authors who applied traditional
performance evaluation tools. Staples and Cameron (2005) evaluated performance in their
study through an adapted Cohen’s Team Effectiveness model; outputs of this model are team
performance, attitudes and behaviors, with interrelations between each type of output.
Both Balthazard, Potter, and Warren (2004a) and Balthazard, Waldman, Howell, and Atwater (2004b) used the Ethical Decision Challenge to measure performance. Lu (2006) also
used a multi-structured approach to defining performance, examining communications, trust,
team participation and coordination, and work outcomes as the four indicators of
performance.
2. Data collection
Our data collection was designed to gather the majority of articles related to performance issues or that used performance as an output for virtual teams. We collected the articles between December 2006 and February 2007. We favored recently published articles (written after 2003); articles that were cited recurrently were also added to our target literature collection.
We started our research by identifying a list of key terms that were potentially related to the
concept of performance in the virtual environment. The first expressions we used were
“virtual team”, “performance” and “effectiveness”. This research led us to broaden our search to include “organizational”, “virtual” & “team”, “decision support system”, “team-building”, “decision”, “productivity” & ”evaluation”. This search was done in all major management and management information systems (MIS) databases (such as ABI/INFORM and Emerald).
During the second step of the literature review, we carefully examined the bulk of the articles
we had collected. We split them into two categories; a first set of fifteen articles was identified
as core articles, and as potentially having the most information on the subjects of
performance and virtual teams. A second subset was set aside as providing peripheral
information.
These fifteen articles were then used to search forward and backward on the concepts of
“virtual team” and “performance” literature; we carefully examined the bibliography of each of
the fifteen articles and identified another eight articles of interest to our research. Also, using
Google Scholar, we looked forward; we used our current bibliography to see if an article was
quoted by a later article, enabling us to identify four other articles of interest. Through revision
and reading of abstracts, we chose the best twenty articles for our first literature review.
Our final literature review includes 30 articles. The methodology we used gives us confidence that we have identified the most pertinent articles related to our subject.
3. Groups studied
Academic research is often limited by the sample available to the researcher. Some authors used student teams, while others studied ongoing business teams. Next, a short review of both sample types and their respective advantages is presented.
3.1 Students
Student teams are characterized by the artificial settings in which the research is done. Most
often, the performance is assessed by grading the output (Carte et al. 2006; Corbitt et al.
2004; Furumo & Pearson, 2006; Rico & Cohen, 2005; Staples & Zhao, 2006), although some
authors have favored self assessment (Paul et al. 2004) or hybrid evaluations (Balthazard et
al. 2004a).
Using students as proxies in studies is a limitation in many cases. As mentioned by Furumo and Pearson (2006), students' motivation for completing the project may differ from that of gainfully employed workers. Balthazard et al. (2004b) added that the risks of poor performance are not as great as in an actual work setting, and the benefits of excellent performance are equally limited. Transferability is another issue authors using student teams must contend with, although some try to attenuate it by using MBA students (Paul et al. 2004; Balthazard et al. 2004a). It remains unclear whether an artificial
environment and assigning a grade to the work output allows for the simulation of a real-world
setting. Nonetheless, Carte et al. (2006) mention that any concerns about the use of students
in experimentation are lessened if the students are performing relevant tasks within their
experience.
Research projects varied greatly in length: one hour in the case of Staples and Zhao (2006), five weeks in the research by Piccoli et al. (2004), and up to a semester (Carte et al. 2006). Some authors wanted to examine teams with no common history (Piccoli et al. 2004); this makes student teams an ideal setting to study phenomena where team history need not be present.
Finally, as mentioned by Piccoli et al. (2004) and Corbitt et al. (2004), student teams have the advantage of affording enhanced control over observable elements. Also, student
teams often lack clear power structures and the task they are presented with is often well
defined early on (Tucker & Panteli, 2003).
3.2 Business
Business teams are characterized by their functional nature. These types of research projects favor quantitative evaluation (Ahuja et al. 2003; Horii et al. 2005; Kirkman et al. 2002), self-assessment (Horwitz, Bravington, & Silvis, 2006; Kratzer et al., 2006; Maznevski & Chudoba, 2000) or hybrid models such as the balanced scorecard (Kirkman et al. 2004).
Authors collected data through surveys (Kirkman, 2002; Kratzer et al. 2006; Horwitz, 2006; Lu, 2006), interviews (Staples & Cameron, 2005; Kirkman, 2002), a mix of observation and interviews (Maznevski, 2003), or communication logs and company documentation (Maznevski & Chudoba, 2000; Ahuja et al. 2002).
Some of the data observed was historical rather than gathered as it occurred. This limited the authors' interaction with employees, as well as the currency of the gathered data. As Ahuja et al. (2003) report, “field studies are also limited by the fact that they have no control over the factors that might interfere with the phenomena under investigation”.
Small samples in terms of the number of teams (Kirkman et al. 2004) or participants (Horwitz et al. 2006), or a limited range of studied business environments (Lu, 2006; Kirkman et al. 2002), can also impact the transferability of the findings to other corporate environments.
4. Performance in virtual teams
Our review enabled us to identify three main methods of evaluating virtual team performance: output grading, quantified results and self-assessment. A brief description of each performance indicator is presented in the next sections, with a summary of the tools used by each author presented at the end of the paper (see Table 1).
4.1 Grading the output
Some authors assess project performance by grading or evaluating the quality of the work
output completed by virtual teams (Balthazard et al. 2004; Corbitt et al. 2004; Furumo &
Pearson, 2006; Rico & Cohen, 2005; Staples & Zhao, 2006; Balthazard et al. 2004).
Researchers who use this method require participants to complete a task or solve a problem,
which is evaluated by independent experts against an established ‘correct’ solution.
Balthazard et al. (2004a) studied the impact of different levels of interaction on the
performance of virtual teams. The study used a sample of 248 professional managers in an
executive MBA program, spread out in 63 virtual teams. These groups completed the Internet
version of the “Ethical Decision Challenge” (Balthazard, 2000; Cooke, 1994), a structured
problem-solving exercise used for team building in academic and corporate settings.
Participants completed the exercise for either academic or professional credits. Team
solutions were compared to the experts’ solution, ascertaining the team’s performance. Team
performance was also evaluated through a post-study questionnaire, using questions and
scales from existing literature. The results of the study showed that expertise (which
manifests itself through individual characteristics such as extensive knowledge, experience or
complex problem solving) is a predictor of group performance, while interaction style
predicted other contextual outcomes (such as solution acceptance, cohesion, effectiveness).
Contextual measures of team performance used by Balthazard et al. (2004) are included in
Appendix A.
Another study by Balthazard et al. (2004b) showed many similarities in methodology. In this
second study, the authors attempted to compare the performance of virtual teams with their
traditional counterparts. Results showed that face-to-face teams were most likely to
demonstrate higher levels of interaction than virtual teams. The study was conducted with
336 students who were either MBA or senior management students. The performance tool used was once again the “Ethical Decision Challenge” test. The authors
found that there was a positive relationship between cohesion and performance, while team
size did not impact performance.
Carte, Chidambaram, and Becker (2006) examined the role of leadership on virtual teams.
Using data collected from student teams over the course of the semester, the authors used a
classical grading method to evaluate performance in virtual teams. Hence, three independent
instructors evaluated the final project prepared by the 22 virtual teams, with marks being
averaged out between the three evaluations. The authors found that leadership had a definite
impact on virtual team performance. Furthermore, high-performing teams exchanged more
messages overall than low-performing teams.
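The averaged-grading scheme used in studies like this one can be sketched minimally. This is a hypothetical illustration, assuming each grader scores every team on the same numeric scale (the study's actual rubric is not specified here):

```python
# Hypothetical sketch of multi-grader averaging (assumed setup, not the
# study's actual rubric): each grader independently scores every team, and a
# team's performance is the mean of its marks across graders.
def average_grades(scores_by_grader):
    """scores_by_grader: one list of per-team scores per grader."""
    n_graders = len(scores_by_grader)
    return [sum(team_scores) / n_graders for team_scores in zip(*scores_by_grader)]

# Three graders scoring two teams: the teams average 80.0 and 70.0.
print(average_grades([[80, 60], [90, 70], [70, 80]]))
```

Averaging across independent graders is the usual way such studies smooth out individual grader bias before comparing teams.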
Corbitt et al. (2004) tested hypotheses linked to trust, task time, team performance and
developmental stages in virtual teams; our interest lies in the relationship between trust and
performance and benchmarking performance between virtual and face-to-face teams. The
experiment used students from two MIS courses (37 students in one group, 41 in the other).
The authors assigned the same task to both groups; groups from one class were allowed to
meet face-to-face, while the others could not. Remaining aspects (such as due date, team
composition, research topics) were the same for all teams. Team members and leaders were randomly assigned to groups, and each team member received information different from that of the other members. Each team was then provided with a company scenario and had to
complete a 3 to 4 page report. Following this task, two independent evaluators scored the
report using a 10-point scale for each of the four paper requirements. The research found that
virtual teams did not perform worse on average than face-to-face teams, and trust was
positively related to team performance.
Furumo and Pearson (2006) examined differences in output between individuals and teams, especially virtual teams; specific topics included outcomes, output quality and individual performance in teams when doing preferred tasks. A total of 444 business students were
randomly assigned to three-member teams in one of four conditions; face-to-face / intellective
or face-to-face / preference, virtual / intellective or virtual / preference. Teams were given one
week to analyze data from a fictional company and report to the CEO. For the intellective
task, there existed one correct answer, while the preference task could have multiple
solutions. Following the task, students completed a survey measuring trust, cohesion and satisfaction with the team's outcome and process. A key concept in Furumo and Pearson's article was cohesion, defined as the sense of belonging an individual feels toward a group, which is likely to impact individual performance and satisfaction. Once graded, the output was found to be similar between face-to-face and virtual teams, while process and outcome satisfaction was significantly lower in virtual teams; high cohesiveness, however, resulted in higher performance.
Rico and Cohen (2005) set out to verify the relationship between task interdependency,
synchronicity and team performance. The study used 240 graduate human resource students
assigned to 80 three-person teams, which were given a merit-rating task (Marci, 1989). All
teams were virtual, and a single correct solution existed to evaluate team performance; this
solution was based on two independent experts’ answer. Following the task, participants
filled out a survey of self-reporting questions on interdependence and synchrony to compare
to the performance variable. The study found a positive relationship between the performance of virtual teams and synchronous communication in low-interdependence tasks.
Staples and Zhao (2006) examined the effects of cultural diversity on virtual teams. The
authors also studied if face-to-face / virtual team nature impacted the performance of multi-
cultural teams. Seventy-nine teams of four to five students (mix of graduate and
undergraduate students) were used for this study. Participation was not linked to any course, and a small compensation ($15) was given to participants; a small bonus ($20 per person) was also given to the top-performing teams. Teams were assigned to conditions crossing homogeneous or heterogeneous cultural diversity with face-to-face or virtual team format.
Johnson and Johnson’s (1994) desert survival task was chosen, as it requires teams to solve
a problem that has a correct answer (i.e. an expert’s answer). The authors found that
although culturally diverse teams had lower results, the difference was not statistically
significant. There was also no significant difference between the performance of face-to-face
and virtual teams.
Piccoli et al. (2004) set out to study the impact of managerial controls on team performance. Their hypothesis was that self-directed and well-coordinated teams would exhibit better performance. They employed a pre-test, post-test design for 51 virtual teams of three or four members. A total of 201 students participated in the study. To limit the influence of culture, participating universities were similar in all cultural dimensions. To increase motivation, a substantial percentage of the students' grade (20-25%) was based on the results of the team; a monetary prize of $1,500 was offered to the two best teams. Multi-criteria tools were used to
study surrounding concepts (effectiveness, communication, satisfaction), while team
performance was based on the quality of the final document produced by the team (a
business plan for a new business or marketing venture). The study did not report any
significant relationship between coordination effectiveness or communication effectiveness
and performance.
Our research shows that this type of evaluation is almost exclusively used in an academic
context with student teams. Also, many authors use grading and established tests to verify
performance because it affords the highest level of control over confounding factors
(McGrath, 1982). Weaknesses in grading output lie in transferability of results and limited use
in non-academic contexts.
4.2 Quantifying results
In these projects, researchers quantify different output variables to evaluate team
performance; some use a multi-criteria approach. The favored environment for quantified results is business; as such, researchers had access to teams that had been in operation for long periods of time and could study the impacts on the organization. Consequently, it is very difficult to use this type of indicator on short-lived teams.
Horii, Levitt, and Jin (2005) concentrated their work on the impact of culture on the
performance of virtual teams. Using an ethnographic approach, four case studies were
prepared in joint-venture projects between Japanese and American companies in the San
Francisco Bay Area. Project performance was defined in terms of project duration, project
cost and project quality. The authors believe that the results extend the possibility of using
simulation modeling to capture distinguishing cross-cultural phenomena.
Kirkman, Rosen, Tesluk, and Gibson (2004) worked to determine the impact of team
empowerment on virtual team performance. Set in a high-technology service organization,
the authors studied 35 geographically dispersed teams. Data was gathered through a survey
which measured a number of process improvement items. The authors also used the
organization’s own performance evaluation tool, which assessed improvements in cycle time.
Their study demonstrated that team empowerment (defined as increased task motivation due to team members' collective, positive assessments of their organizational tasks) was significantly and positively related to process improvement, which the authors identified as a key measure of performance.
Kirkman, Rosen, Gibson, Tesluk, and McPherson (2002) investigated the challenges companies struggle with when establishing high-performing virtual teams. The challenge of interest was recognizing virtual team performance; recognition is defined here as the different elements used to acknowledge performance. This study conducted
interviews with employees on 65 virtual teams in a corporate setting and used the
Performance Scorecard to evaluate performance, including quantitative elements (growth,
profitability, and process improvement) and self assessment elements (customer satisfaction).
Ahuja et al. (2003) proposed that functional role, status and communication have a direct influence on performance. As such, they studied the output of the Soar project group and used the number of publications as a measure of group performance, a rather unusual indicator. Performance was thus measured in terms of the quantity (weighted by quality) of Soar-related publications. The period studied was 1989 to 1993.
Results of the research indicated that centrality was a stronger direct predictor of performance
than the individual characteristics of participants.
4.3 Self-assessment
Some authors used self-assessment by team members to evaluate team performance. As
described by Staples and Cameron (2005), group beliefs about the team’s performance have
been found to be a strong predictor of group effectiveness in previous research. Hence, in this
category, we encompass all authors who used participant feedback to evaluate performance,
even if some of the outcomes evaluated were more practical in nature.
Horwitz et al. (2006) sought to identify the importance of factors such as cross-cultural issues, leadership (as a catalyst and integrator of team progress) and social cohesion for virtual team effectiveness. The authors surveyed 63 employees who use virtual technologies and used self-assessment to identify performance; the questionnaire included questions on management and performance measures, team dynamics, and cross-cultural issues. The questions were a mix of Likert-scale and open-ended questions; questions such as “What do you think would be most helpful in making sure your team gets off to a successful start?” were used to code performance categories. The findings were quite broad, as the authors identified four factors
as the most important for virtual team performance. These factors were cross-cultural
communications improvement, managerial and leadership communication, goal and role
clarification and relationship building.
Paul et al. (2004) examined the relationships between heterogeneity, conflict management
and team performance. Using students enrolled in an MBA program, the authors used self-assessment to evaluate team performance. A total of 63 students participated. Four
dependent variables were examined; these included satisfaction with the decision making
process, perceived decision quality, perception of participation and collaborative conflict
management style. The results demonstrated statistically significant relationships between
collaborative conflict management style and performance outcomes. The table used by Paul
et al. (2004) is included in Appendix B.
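Self-assessment instruments of this kind typically aggregate several Likert-scale items into one score per construct. A minimal sketch follows; the item names are invented for illustration (the actual items used by these studies appear in their appendices):

```python
# Hypothetical sketch of scoring a self-assessed construct (item names are
# invented for illustration): the construct score is the mean of the 1-5
# Likert responses to the items belonging to that construct.
def construct_score(responses, items):
    """responses: dict of item name -> 1..5 rating; items: the construct's item names."""
    return sum(responses[item] for item in items) / len(items)

answers = {"decision_satisfaction_1": 4, "decision_satisfaction_2": 5, "participation_1": 3}
# Mean of the two satisfaction items: 4.5.
print(construct_score(answers, ["decision_satisfaction_1", "decision_satisfaction_2"]))
```

Averaging items per construct is the conventional way multi-item survey scales are reduced to the dependent variables these studies analyze.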
Kratzer et al. (2006) studied how teams' creative output was influenced by their virtual nature; this output was measured by asking team members to rate the team's creative accomplishment. Teams that were more creative were also deemed higher performing.
The sample consisted of 44 research and development teams in eleven companies. The
authors found that the level of virtuality did not impact creative performance, instead, overall
flexibility given to participants in their creative process had a greater impact on performance.
Maznevski and Chudoba (2000) worked on the factors that influence effectiveness in global
virtual teams. Hence, three global virtual teams were studied over a period of 21 months. The
decision outcomes were evaluated through a series of self-assessed criteria such as quality
of work as perceived by team members, individual agreement with team decisions and
strength of relationships in the group. The study of these groups led the researchers to propose a series of propositions for effective teams, including propositions on decision processes and communication issues (medium, message and volume).
Staples and Cameron (2005) studied the influence of input factors on the performance of
virtual teams. In all, 39 members of six virtual teams were interviewed. The authors found
that there is a positive relationship between team performance and interpersonal skills, team
size, team turnover, team potency, team spirit and innovation. Team performance was measured through team potency, process assessment, and outcome variables, always through
the participants' perceptions. The authors' construct measurements are included in Appendix C.
Lu (2006) examined the relationship between virtuality and performance in a corporate
setting. For this study, performance was examined in four broad areas; communications,
trust, team participation and coordination, and work outcomes. The study was done at Intel, where 1,269 employees answered the researchers' web-based survey. The study found that geographic distribution did not have a significant impact on team performance.
Although self-assessment is often criticized as a measurement tool, some authors include outside evaluation in addition to self-reporting to ensure validity. In Kratzer et al.'s (2006) study, evaluation of teams by team managers was compared to self-reported measures, and
the results showed no statistically significant differences between the two ratings. The
variables influencing performance of virtual teams in business and educational environments
are presented in Tables 2 and 3 at the end of the paper.
5. Further research
Powell et al. (2004) argued that performance research in virtual teams was traditionally conducted by comparing them to face-to-face teams, and that research should shift to understanding performance factors in virtual teams. Our review has revealed that this shift has occurred, as an increasing number of research projects are dedicated to virtual team performance factors such as multiculturalism (Staples & Zhao, 2006; Horii et al. 2005; Horwitz et al. 2006), interaction (Balthazard et al. 2004a), level of virtuality, trust (Corbitt et al. 2004), interdependency (Rico & Cohen, 2005), managerial control (Piccoli et al. 2004), empowerment (Kirkman et al. 2004), communication (Ahuja et al. 2003), leadership (Carte et al. 2006), and conflict management (Paul et al. 2004).
Individual performance is another subject Powell et al. (2004) mentioned as needing further research. Our review has shown that a number of authors using self-assessment as a performance indicator not only focus on individual performance, but will
also touch on team performance (Kirkman et al. 2004; Kratzer et al. 2006; Lu, 2006).
Nonetheless, there remains a need to better understand which individual skills improve performance in the virtual environment, and how deficits in these skills can be mitigated.
Virtuality is an increasingly blurry concept. Although some authors still insist that virtual teams never meet face-to-face (Corbitt et al. 2004), levels of virtuality are quickly becoming an issue in performance research. So instead of comparing virtual teams to face-to-face teams, an integrated vision of virtual teams should emerge, along with an opportunity to develop the notion of degrees of virtuality (Kratzer et al. 2006; Staples & Zhao, 2006). Moreover, Shekhar's (2006) article on virtuality might be a great starting point for future research on the subject. In the same vein, Balthazard et al. (2004b) recognize that research on temporally dispersed teams must be increased if we are to improve our understanding of VT and FTF teams.
Corbitt et al. (2004) found that virtual teams outperformed face-to-face teams. This is a reversal of the traditional literature, where virtual interaction was found to increase the amount of time required to accomplish tasks (Martins, Gilson, & Maynard, 2004; Balthazard et al. 2004b). Nonetheless, length of time might be the key element in increasing performance in virtual teams; perhaps VT need more time to build their social and formal structures before attaining the performance of FTF teams.
Also, as multicultural teams become a popular research topic, the level of diversity should be tested to better understand whether there are midpoints of performance. As mentioned by Langevin and Picq (2001), multicultural teams add linguistic and cultural problems when functioning in virtual settings, yet very little literature studies the impact on the overall performance of virtual teams. In their study, Staples and Zhao (2006) tested only low and high levels of diversity, but there is an opportunity to study the phenomenon as a continuous variable rather than as extremes. Rhythm of interactions (Maznevski & Chudoba, 2000) is another example where such a continuum could be studied.
The use of student teams limits the study of longitudinal effects on performance in virtual teams. There is an opportunity to take a longitudinal approach to see if the results are the same over time. This need is recognized in the literature reviews by Powell et al. (2006) and Martins et al. (2004). More research is necessary to test if results remain consistent through time
(Powell et al. 2004). Also, results emerging from student teams are sometimes limited or inconsistent; as mentioned by Hiltz (2006), subjects often seem to do just enough to obtain a good grade for their participation in tasks that were not "real" for them. Indeed, Balthazard et al. (2004b) add that virtual teams are inappropriate when the team is newly formed or short-lived, which would certainly invalidate some of the results we have collected. The difference in technology maturity between students and managers is also something that needs to be explored, as student teams might have core competencies that managers lack when adapting to a virtual environment, directly impacting performance.
Finally, as demonstrated in the Horwitz et al. (2006) study, enabling technologies have a direct impact on team performance, but studies on those tools are lacking. Instant messaging is a synchronous tool that is gaining acceptance and is regularly used. Corbitt et al. (2004) mention that using these tools might be a substitute for face-to-face meetings. Research on the subject could be beneficial to our understanding of the tools behind the virtualization of teams. Also, the presence of audio capabilities might influence team performance by permitting greater expression of extraversion (Balthazard et al. 2004a). Hiltz (2006) further adds that many research efforts have ignored the supporting and inference processes resulting from the 'real world' mix of communications and data that can support a task group.
6. Discussion
Current research on virtual teams is often directed at student teams rather than at teams in business environments. Although there is an argument for ease of data collection, the longitudinal aspects are generally not investigated, which could lead to a biased body of work; more often than not, student teams do not have the time they need to develop organizational structures beyond the artificial settings given to them. However, our case studies have shown that performance evaluation in business settings is usually conducted over long periods of time. Since the competencies and backgrounds of students and managers differ, different evaluation tools and criteria should be used to evaluate performance in these two types of teams. In reality, the existence of other company parameters enables the evaluation of the impact of virtual teams on the organization, rather than measuring the performance of the team in an isolated task or setting. This dichotomy in research must be acknowledged.
The impact of virtual teams is of interest to managers. In an age where telecommuting and international teams are increasingly common and performance expectations are rising, one must remember that corporations are organic in nature, and the transformation of some functions and business units into virtual structures cannot be abstracted from the impact felt on the entire structure. Competition and conflict between virtual and 'traditional' teams is a topic that has not been extensively studied, and it could be of interest to managers, especially those faced with both structures at the same time. Finally, the displacement of traditional workers from a face-to-face environment to a virtual environment cannot be made without carefully understanding the impact on the individuals themselves.
Virtual teams still rely on the cooperative effort that members give to projects, and measuring team and individual performance inevitably leads to research on adjacent areas such as trust, leadership, competencies and team skills. As such, training that enables a seamless transition from traditional to virtual teams, and its impact on performance, remains an unexplored subject. How is performance affected? How long can the transition period be expected to last? Are some user profiles expected to make a better transition, or achieve better performance than others? Teams are not homogeneous entities, and the contribution of individuals (as experts, or local team leaders) and its impact on performance remain key to better understanding the notion of performance.
Nonetheless, three key aspects have emerged as essential to measuring performance in
virtual teams. They are as follows:
Length of time: Teams that are active for a short amount of time versus those that are active for months are essentially two different phenomena; teams with short life spans often do not fully develop the organizational elements (power structure, leadership, communication protocols) that long-term teams have. This has a definite impact on what can be evaluated.
Degree of virtuality: Different levels of virtuality lead to different experiences for participants. Some virtual teams never meet face-to-face, while others use virtual tools (such as email or instant messaging) as a supplement to face-to-face interaction. As mentioned earlier, virtuality is a matter of degree, determined by three factors: the proximity of team members, the communication modality, and team task communication.
Page 18 de 28
Temporal disparity: Teams that interact through synchronous communication have access to more information than those in asynchronous situations. Furthermore, asynchronous teams may require more supervision and training than synchronous teams.
Hence the following recommendations can be made when developing performance evaluation
tools for virtual teams:
Grading seems to apply to teams that exist for a short period of time in relatively artificial settings. As such, researchers working with student teams, or with short-term projects in organizations, could turn to evaluating the output of VT. Researchers could adapt existing tools to evaluate team performance, such as Cohen's Team Effectiveness model (used by Staples & Cameron, 2005) or the Ethical Decision Challenge (Balthazard et al. 2004a), or they can rely on grading the 'academic project' prepared by the team.
Quantifying results can be used in studies conducted in organizations, in which the results have an impact on organizational structures other than the one being studied. For example, a VT working on a marketing strategy could have an impact on company indicators. Cycle time is another often-used indicator, but it relies on an existing benchmark. Nonetheless, quantifying results can only be used when the researcher has access to a recurring project, and either the team must be studied over a long period of time or the researcher must have access to historical data.
Self-assessment can be used in multiple settings, even in conjunction with the other two performance evaluation methods. Many scales already exist that could be applied to new projects. Furthermore, self-assessment can be used to evaluate both team performance and individual performance. However, self-assessment should be used sparingly in short-lived VT, as participants might not have the retrospective view, or sufficient information, to provide pertinent data.
Table 1: Classification of studies by performance assessment method and group studied

| Author (Year) | Grading output | Quantifying results | Self-assessment | Students (artificial) | Business (functional) |
|---|---|---|---|---|---|
| Ahuja et al. (2003) | | X | | | X |
| Balthazard et al. (2004a) | X | | X | X | |
| Balthazard et al. (2004b) | X | | | X | |
| Carte et al. (2006) | X | | | X | |
| Corbitt et al. (2004) | X | | | X | |
| Furumo and Pearson (2006) | X | | | X | |
| Kirkman et al. (2004) | | X | X* | | X |
| Horii et al. (2005) | | X | | | X |
| Horwitz et al. (2006) | | | X | | X |
| Kirkman et al. (2002) | | X | | | X |
| Kratzer et al. (2006) | | | X | | X |
| Lu (2006) | | | X | | X |
| Maznevski and Chudoba (2000) | | | X | | X |
| Paul et al. (2004) | | | X | X | |
| Piccoli et al. (2004) | X | | X | X | |
| Rico and Cohen (2005) | X | | | X | |
| Staples and Cameron (2005) | | | X | | X |
| Staples and Zhao (2006) | X | | | X | |
Table 2: Variables influencing performance of virtual teams in business environments

Variables examined across these studies: team empowerment, expertise, recognition, flexibility, level of virtuality, leadership, interpersonal skills, communication, and team size.

| Author (Year) | Findings (method and impact) |
|---|---|
| Ahuja et al. (2003) | Q (+) |
| Kirkman et al. (2004) | Q / S (+) |
| Horii et al. (2005) | Not applicable: more research-oriented than concrete results |
| Horwitz et al. (2006) | S (+), S (+), S (+) |
| Kirkman et al. (2002) | Q (+) |
| Kratzer et al. (2006) | S (+), S (0) |
| Lu (2006) | S (0) |
| Maznevski and Chudoba (2000) | S (+) |
| Staples and Cameron (2005) | S (+), S (+), S (+) |

Legend: G = grading output; Q = quantifying results; S = self-assessment. (+) = positive impact on performance; (0) = neutral impact; (-) = negative impact.
Table 3: Variables influencing performance of virtual teams in educational environments

Variables examined across these studies: leadership, expertise, communication, cultural diversity, level of virtuality, trust, interpersonal skills, team size, and cohesion.

| Author (Year) | Findings (method and impact) |
|---|---|
| Balthazard et al. (2004a) | G / S (+), G / S (+) |
| Balthazard et al. (2004b) | G (0), G (+) |
| Carte et al. (2006) | G (+), G (+) |
| Corbitt et al. (2004) | G (+) |
| Furumo and Pearson (2006) | G (0) |
| Paul et al. (2004) | S (+) |
| Piccoli et al. (2004) | G / S (0), G / S (0) |
| Rico and Cohen (2005) | G (+) |
| Staples and Zhao (2006) | G (-), G (0) |

Legend: G = grading output; Q = quantifying results; S = self-assessment. (+) = positive impact on performance; (0) = neutral impact; (-) = negative impact.
7. Bibliography

7.1 Case Studies

Ahuja, M.K., Galletta, D.F., & Carley, K.M. (2003, January). Individual Centrality and Performance in
Virtual R&D Groups: An Empirical Study. Management Science, 49(1), 21-38.
Balthazard, P., Potter, R., & Warren, J. (2004a). Expertise, Extraversion and Group Interaction Styles as Performance Indicators in Virtual Teams. Data Base for Advances in Information Systems, 35(1), 41.
Balthazard, P., Waldman, D., Howell, J., & Atwater, L. (2004b). Shared Leadership and Group
Interaction Styles in Problem-Solving Virtual Teams, Proceedings of the 37th Hawaii
International Conference on System Sciences.
Carte, T. C., Chidambaram, L., & Becker, A. (2006, July). Emergent Leadership in Self-Managed
Virtual Teams - A Longitudinal Study of Concentrated and Shared Leadership Behaviors. Group
Decision and Negotiation, 15(4).
Corbitt, G., Gardiner, L., & Wright, L. (2004). A Comparison of Team Developmental Stages, Trust
and Performance for Virtual versus Face-to-Face Teams. California State University,
Proceedings of the 37th Hawaii International Conference on System Sciences.
Furumo, K., & Pearson, J. M. (2006). An Empirical Investigation of how Trust, Cohesion, and
Performance Vary in Virtual and Face-to-Face Teams. University of Hawaii at Hilo & Southern
Illinois University, Proceedings of the 39th Hawaii International Conference on System
Sciences.
Horii, T., Levitt, R., & Jin, Y. (2005). Cross-Cultural Virtual Design Teams: Cultural Influences on
Team Performance in Global Projects, ASCE Conference Proceedings.
Horwitz, F.M., Bravington, D., & Silvis, U. (2006). The promise of virtual teams: identifying key
factors in effectiveness and failure. Journal of European Industrial Training, 30 (6), 472-494.
Kirkman, B.L., Rosen, B., Tesluk, P.E., & Gibson, C.B. (2004). The impact of team empowerment on virtual team performance: the moderating role of face-to-face interaction. Academy of Management Journal, 47(2), 175-192.
Kirkman, B.L., Rosen, B., Gibson, C.B., Tesluk, P.E., & McPherson, S.O. (2002). Five challenges to virtual team success: lessons from Sabre Inc. Academy of Management Executive, 16(3), 67-79.
Kratzer, J., Leenders, R.Th.A.J., & Van Engelen, J.M.L. (2006, January). Managing creative team performance in virtual environments: an empirical study in 44 R&D teams. Technovation, 26(1), 42-49.
Page 23 de 28
Lu, M. (2006). Virtuality and Team Performance: Understanding the Impact of Variety of Practice. Journal of Global Information Technology Management, 9(1).
Maznevski, M.L. & Chudoba, K.M. (2000, September-October). Bridging Space Over Time: Global
Virtual Team Dynamics and Effectiveness, Organization Science, 11(5), 473-492.
Paul, S., Seetharaman, P., Samarah, I., & Mykytyn, P.P. (2004, January). Impact of heterogeneity
and collaborative conflict management style on the performance of synchronous global virtual
teams, Information & Management, 41(3), 303–321.
Piccoli, G., Powell, A., & Ives, B. (2004). Virtual teams: team control structure, work processes, and
team effectiveness, Information Technology & People, 17(4), 359-379.
Rico, R., & Cohen, S. G. (2005). Effects of task interdependence and type of communication on
performance in virtual teams. Journal of Managerial Psychology, 20(3-4), 261-274.
Staples, D. S., & Cameron, A. F. (2005, January). The Effect of Task Design, Team Characteristics,
Organizational Context and Team Processes on the Performance and Attitudes of Virtual Team
Members. Queen’s School of Business, Queen’s University, Proceedings of the 38th Hawaii
International Conference on System Sciences.
Staples, D. S., & Zhao, L. (2006, July). The effects of cultural diversity in virtual teams versus face-to-face teams. Group Decision and Negotiation, 15(4), 389-406.
7.2 Literature reviews
Langevin, P. and Picq, T. (2001). Contrôle des équipes virtuelles : Une revue. Cahier de recherches
EM Lyon, 2001-04.
Martins, L.L., Gilson, L.L., & Maynard, M.T. (2004). Virtual Teams: What Do We Know and Where Do We Go From Here? Journal of Management, 30(6), 805-835.
Powell, A., Piccoli, G., & Ives, B. (2004, Winter). Virtual Teams: A Review of Current Literature and
Directions for Future Research, Data Base for Advances in Information Systems 35(1), 6–36.
7.3 Other consulted articles
Balthazard, P.A. (1999). Virtual version of the Group Styles Inventory by R.A. Cooke and J.C. Lafferty. Arlington Heights, IL: Human Synergistics/Center for Applied Research.
Balthazard, P. A. (2000). Virtual version Ethical Decision Challenge by R. A. Cooke. Arlington
Heights IL: Human Synergistics/Center for Applied Research.
Bell, B.S. & Kozlowski, S.W.J. (2002). A typology of virtual teams: implications for effective
leadership. Group and Organization Management, 27(1), 14-49.
Page 24 de 28
Hiltz, R. S., et al. (in press). Asynchronous Virtual Teams: Can Software Tools and Structuring of Social Processes Enhance Performance? In Human-Computer Interaction in Management Information Systems: Applications. Armonk, NY: M. E. Sharpe, Inc.
Jarvenpaa, S. L., Knoll, K., & Leidner, D. E. (1998). Is anybody out there? Antecedents of trust in
global virtual teams. Journal of Management Information Systems, 14: 29–64.
Peters, L. (2003). The Virtual Environment: The “How-to” of Studying Collaboration and
Performance of Geographically Dispersed Teams. University of Massachusetts. Proceedings of
the Twelfth IEEE International Workshops on Enabling Technologies: Infrastructure for
Collaborative Enterprises (WETICE’03), 1080-1383/03.
Potter, R. E., & Balthazard, P. A. (2002). Understanding human interaction and performance in the virtual team. Journal of Information Technology Theory and Application, 4(1), 1-23.
Powell, A., Galvin, J. & Piccoli, G. (2006). Antecedents to team member commitment from near and
far - A comparison between collocated and virtual teams. Information Technology & People,
19(4), 299-322.
Shekhar, S. (2006). Understanding the virtuality of virtual organizations, Leadership & Organization
Development Journal, 27(6), 465-483.
Tucker, R., & Panteli, N. (2003, June 15-17). Back to basics: sharing goals and developing trust in global virtual teams. In N. Korpela, R. Montealegre, & A. Poulymenakou (Eds.), Organizational Information Systems in the Context of Globalization (pp. 85-98). Boston: Kluwer Academic Publishers.
Appendix A - Contextual measures of Team Performance used by Balthazard (2004)

Cohesion
1. Members appeared to feel that they were really part of the group.
2. People offering new ideas were likely to get clobbered (reverse).
3. The group members really helped each other out on this task.
4. Some people showed no respect for the others (reverse).
5. Members of the group really stuck together.
6. There were feelings in the group which tended to pull the group apart (reverse).
7. The group really got along well with one another.
8. There was constant bickering (reverse).
9. It appeared that members of the group would look forward to working with one another again.

Process effectiveness
1. Were the potential risks to research subjects fully considered by the group?
2. Was the importance of the research procedures (to investigators, the hospital, and to scientific knowledge) fully considered by the group?

Solution acceptance
1. Were you personally committed to the course of action proposed by the team?
2. Did you think the solution generated by the group was better than the one developed personally?
3. Did you think the group came up with the best solution possible, given the time available to solve the problem?
4. Did you have reservations about any of the decisions reached by the group?
5. Would you feel comfortable defending the group's decisions?
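Instruments like the cohesion scale above mix positively worded items with reverse-scored items (marked "reverse"), which must be flipped before averaging. A minimal scoring sketch, assuming a 1-7 Likert scale; the function name and the sample ratings are illustrative, not taken from the original studies:

```python
# Sketch: scoring a Likert-type scale that contains reverse-coded items.
# Assumes responses are given in item order on a 1-7 scale.

def score_scale(responses, reverse_items, scale_min=1, scale_max=7):
    """Average item responses after reverse-coding the flagged items."""
    adjusted = []
    for i, r in enumerate(responses, start=1):
        if i in reverse_items:
            r = scale_max + scale_min - r  # e.g. 7 -> 1 and 1 -> 7 on a 1-7 scale
        adjusted.append(r)
    return sum(adjusted) / len(adjusted)

# The cohesion instrument above marks items 2, 4, 6, and 8 as reverse-scored.
ratings = [6, 2, 7, 1, 6, 2, 6, 1, 7]  # one hypothetical respondent, 9 items
cohesion = score_scale(ratings, reverse_items={2, 4, 6, 8})  # 58/9, about 6.44
```

A composite team-level score would then typically be the mean of members' individual scores, subject to the short-lived-team caveats discussed in the recommendations above.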
Appendix B - Indicator items for different instruments as used by Paul et al. (2004)

1. Satisfaction with decision-making process
1. I was able to evaluate a number of alternatives during the decision-making session.
2. Our group was able to reach a consensual solution without any major conflict.
3. I feel that the group members converged on the final decision.
4. I did not rush to provide my solutions.
5. I was not rushed by others in the session.
6. The decision-making process of the group was complete.
7. The progress of the group towards the stated goals of the task was satisfactory.
8. Overall, as a member of our team, I am satisfied with the process I employed in arriving at the final solution.
9. Overall, I am satisfied with the solution process our group employed to arrive at the final decision.

2. Perceived decision quality
1. The decision made by my group is practical.
2. The decision made by my group is fair.
3. I am confident that the final decision we came up with is the best decision.
4. I feel that the quality of the group's decision would have positive effects on the performance of the university.
5. Overall, it is my opinion that our final decision is of high quality.

3. Collaborative conflict management style
1. I collaborated with my teammates to come up with decisions acceptable to us.
2. I tried to bring all our concerns out in the open so that the issues could be resolved in the best possible way.
3. I tried to work with my team members to find solutions to a problem that satisfy our expectations.
4. I exchanged accurate information with my teammates to solve a problem together.
5. I tried to investigate an issue with my team members to find a solution acceptable to us.

4. Perceived participation
1. I always felt free to voice my comments during the meeting.
2. Other members appeared to have felt free to make positive and negative comments.
3. Everyone had a chance to express his/her opinion.
4. Team members responded to the comments made by others.
5. The group members participated very actively in today's meeting.
6. Overall, the participation of each member in the chosen task was effective.

Appendix C - Construct Measurement as prepared by Staples and Cameron (2005)

Task Design Assessment
1. What are the variety of skill sets required to complete the task? How are these skills distributed among the team members?
2. On a 7-point scale (1 = very little and 7 = very much), how would you answer the following question: How does the work, or project, affect the lives or well-being of others?
3. How much autonomy does the group have in determining the parameters of the task, the methods for achieving the task, or even the task itself?
4. What kind of feedback is provided to the group on their performance? Is feedback provided regularly, and is this feedback useful?
5. Is responsibility for the final outcome shared equally among all members?

Team Composition Assessment
1. Are there adequate technical skills among the group members to complete the task? Do you feel that your individual technical skills are sufficient?
2. On a 7-point scale (1 = very low and 7 = very high), how would you rate the general level of interpersonal skills in your group? Why?
3. What level of relevant IT training and abilities do the team members have? Is it adequate for the existing IT tools? What level of IT training and experience do you as an individual have, and is it adequate?
4. How many team members are there? Are there too few/too many team members to do a good job?
5. How long has the group been working together? Is there a high turnover in the group membership? How was the team first started / how did members get to know each other?
6. How many members of the team are geographically dispersed? How dispersed is the team (number of time zones spread among members)? How often do they meet face to face?

Group/Team Potency [12]
1. My team has confidence in itself.
2. My team believes it can become unusually good at producing high-quality work.
3. My team expects to be known as high-performing.
4. My team feels it can solve any problem it encounters.
5. My team believes it can be very productive.
6. My team can get a lot done when it works hard.
7. No task is too tough for my team.
8. My team expects to have a lot of influence around here.

Team Process Assessment
1. How would you characterize your team's level of coordination? What is the level of duplication that occurs (or does any duplication occur)?
2. Is there a sense of team spirit in your group? Why?
3. How comfortable are your team members with sharing important information within the team? How comfortable are your team members with taking advice from, or deferring to, someone in the team with greater knowledge or skill?
4. Has the team adopted or created any new innovations or inventions to improve your way of doing required tasks?

Organizational Context Assessment
1. What is the reward system? How are rewards distributed?
2. How adequately is training available and supported?
3. Who has the information you need to do your job? How easy is it to get the information you need?
4. Does your geographic location hinder or increase your access to required resources? How difficult is it to acquire resources as the need arises, and does your location make a difference? What resources (if any) do you feel are missing in your offsite work, compared to onsite work?
5. What kinds of IT tools / infrastructure are present?
6. Power/authority: What is the power structure in your team? What level of authority does your team have in making important decisions?

Team Outcome Variables

Team Performance
1. On a 7-point scale (1 = very low and 7 = very high), how would you rate your own team's performance? Why?
2. Do you think your team is very effective (i.e. meeting objectives on time in an efficient and effective manner)? Why or why not?

Motivation with the Task
1. On a 7-point scale (1 = very low and 7 = very high), how would you answer the following question: How would you characterize your level of motivation with your team's current project? Why?

Satisfaction with Being Part of the Team
1. On a 7-point scale (1 = very low and 7 = very high), how would you answer the following question: How would you describe your level of satisfaction with your team? Why?