A Qualitative Approach to Upward Evaluation of Leadership Performance: Pros and Cons

by Cathryn G. Turrentine, Edward F. Lener, Michelle L. Young, and Victoria T. Kok

Available online 18 May 2004

This article describes a qualitative upward evaluation of the leadership performance of library managers. Follow-up studies were conducted, focusing on the advantages and disadvantages of the qualitative approach to upward appraisal. The authors discuss pros and cons to guide others who might use this methodology for upward appraisals in the future.

The Journal of Academic Librarianship, Volume 30, Number 4 (July 2004), pages 304–313

Cathryn Turrentine is Director of Planning and Assessment, Division of Student Affairs, Virginia Polytechnic Institute and State University, 112 Burruss Hall, Blacksburg, VA 24061-0250, USA <[email protected]>

Upward performance appraisals have been used in business and industry for about the past three decades as part of a 360-degree appraisal process. They are most often used as a professional development tool rather than for performance or compensation evaluation. Current estimates indicate that about 90 percent of Fortune 1000 companies use some form of multisource evaluation to rate their managers. Such evaluations have been recognized by many authors as an important tool to help managers improve their performance.1

Most of the upward performance appraisals reported in the literature to date have utilized quantitative methods. These include upward appraisals for library managers. There are good reasons, however, to be dissatisfied with quantitative methods in performance appraisal. Quantitative methods describe average responses but may obscure deeply held and important individual views. In addition, many quantitative methods (those that use inferential statistical analysis) rely on random selection of samples that must meet minimum requirements for sample size. Nonrandom samples can be biased in many ways that would distort the interpretation of the results. Further, many quantitatively based upward appraisal instruments use Likert-type scales. These tend to elicit responses that are biased in the direction of agreement with the survey items. While qualitative methods have challenges and limits of their own, they at least offer the opportunity to explore the respondents' views more deeply and with somewhat greater certainty about what a response might mean.2

This article describes a qualitative upward evaluation of the leadership performance of library managers. It outlines the pros and cons of this approach to guide those who might use a similar methodology in the future.

METHODS

In fall 2001, the dean of libraries at Virginia Tech, a large, research-extensive, land-grant university, appointed a seven-member team to design and carry out an upward appraisal of the libraries' eight managers (seven unit directors and the associate dean). The appraisal team included the authors, plus Alan Armstrong, Ladd Brown, and Jana Doyle. One member of the appraisal team was from outside the library. The team was charged to develop the appraisal instrument (see Appendix A) and process, with input from the directors and associate dean and from both faculty and staff throughout the library.3

The charge to the appraisal team had specific requirements for the research methods, the instrument, and the appraisal process. The dean specified qualitative methodology for this appraisal process because she had been the subject of quantitative upward appraisals herself and had sometimes found it difficult to interpret quantitative feedback or to know what action she might take to improve in a particular area based on a numerical response. With regard to the instrument, the charge stated that the team was to develop "a brief, qualitative instrument, eliciting specific input focusing on major leadership issues appropriate to meeting the mission of the Libraries,... and resulting in information that can assist the director or associate dean in doing things that will make her/him a better leader." This portion of the charge constrained the appraisal team to use a locally developed instrument that was tailored for the specific library organization in which the appraisal was conducted. Commercially available instruments would have been too long for this purpose and not tailored to the specific needs of the library, and most would have used quantitative methods. The charge further stated that the appraisal process was to include (a) confidentiality of responses, (b) the ability to evaluate both one's own director or associate dean and also those to whom one does not report, and (c) a mechanism for differentiating responses from the director or associate dean's own area of responsibility from the responses of others.4

This upward appraisal was the fourth upward appraisal for the University Libraries at Virginia Tech, but it was the first to use a qualitative approach. The first experience with upward evaluation was in 1981. This assessment used a Likert-scale instrument with the option to add written comments. Only library faculty were eligible to participate; classified staff were excluded. Employees were asked to sign their evaluation forms, and the signed responses were given in their entirety to the director.5 Quantitative upward evaluations were conducted again in 1987 and in 1994. These were open to both faculty and staff. They used scannable forms for automated scoring and were anonymous.

As the appraisal team developed the instrument and the data collection and analysis methods described below, it sought input from the dean, the associate dean and directors, and the faculty and staff of the library. Suggestions and concerns expressed by each of these constituent groups were incorporated into the final process to the extent possible within the original charge to the team. This process of consultation followed best practices for upward appraisals as described in the literature.6

The appraisal team implemented the upward appraisal according to the original charge and then followed this with a postsurvey of nonrespondents to determine the reasons for failure to complete the upward appraisal. Next the authors surveyed the dean, associate dean, and directors to examine their perception of the advantages and disadvantages of the qualitative approach. Finally, members of the team conducted a follow-up focus group with employees in one workgroup to learn the advantages and disadvantages of this qualitative approach from their point of view. The first step of this four-step process (the upward appraisal of the leadership performance of managers) provided the feedback to managers that was the original charge to the appraisal team. The next three steps evaluated the process itself, with a focus on the pros and cons of the qualitative approach.

Step One Methods: Upward Appraisal of Leadership Performance

Participants

Eligible participants for the original upward performance appraisal were the 127 librarians and classified staff members in all branches of the University Libraries at Virginia Tech. Student employees were not eligible to participate.

Instrument

This study used a locally developed instrument, the Leadership Performance Survey, a short, qualitative, written survey instrument focused on leadership. The instrument and cover letter appear in Appendix A.


The selection of leadership characteristics as the issue of concern for this appraisal was included in the original charge to the appraisal team. The team discussed the model of leadership developed by Kouzes and Posner. This model includes five leadership practices (challenging the process, inspiring a shared vision, enabling others to act, modeling the way, and encouraging the heart), with each practice subdivided into two related commitments. For example, the practice titled "modeling the way" includes the commitments to "set the example" and "achieve small wins." Starting from this rather lengthy list, the appraisal team added and subtracted leadership practices that in their view were most important to library employees.

The final list of leadership practices covered in this survey included vision, collaboration, communication, fairness, and leading by example. Some of the characteristics selected by the team (role modeling and some of the elements of fairness) are consistent with those identified in a study of library employees as important to subordinates in evaluating their supervisors. The focus on leadership characteristics is outside the norm for performance appraisals in most U.S. libraries, which typically address a very different set of management skills, such as budgeting and staffing.7

To help respondents understand the qualities to be assessed in each section, the committee provided brief examples of each. For instance, the fairness section included the following phrases: is even-handed; recognizes accomplishments of individuals and groups appropriately; and distributes the resources and workload of the department fairly.

In each section, the respondent was asked to provide examples of "things this person does well in this area" and "things this person could do to improve performance in this area." A brief space was provided for response in each section, and respondents were invited to use additional paper if necessary.

In addition, respondents were asked to provide an overall evaluation on any issues not addressed in the five preceding sections. Again, the comments were to focus on "things this person does well" and "things this person could do to improve performance."

To make data analysis possible, respondents were asked to identify the person being evaluated and to check one of the following options: "I report to this person," or "I do not report to this person, but I feel I know his or her work well enough to comment on at least some aspects of leadership."

After the draft of this instrument was developed, the appraisal team met with the dean as well as the managers who would be the subject of the appraisal. Next the team held separate information sessions with the library faculty and classified staff. In each meeting, the draft instrument was shared along with the process that would be used for the appraisal. Each group had an opportunity to comment on both the instrument and the process. This consultative approach was intended to create buy-in from all constituent groups and to reduce anxiety that had been expressed by both managers and employees. Most comments in these meetings centered on the concern for anonymity of responses by employees and confidentiality of feedback to supervisors.

Data Collection

Hard copies of the Leadership Performance Survey were distributed with a cover letter from the dean to all salaried library personnel. Respondents were given three ways to complete the survey: in hard copy, online, or by personal interview. Three e-mail reminders were sent over the following ten days, and the final due date for returning the surveys was set for two weeks after original distribution.

Data Analysis

Team members entered the hard copy and interview responses into the online survey form. Once all responses were consolidated into the online survey software, the data were downloaded into an Excel spreadsheet. Using this spreadsheet, team members individually analyzed the responses about each manager, searching for common themes across respondents for each person. Where comments in one section of the survey pertained more appropriately to another section, they were analyzed with the more appropriate section only. For example, some comments in the collaboration section pertained more appropriately to communication and were therefore grouped with other communication responses. To enhance the objectivity of this analysis, one team member prepared the first draft of the summary because she does not work in the libraries and does not know the directors or associate dean. The remaining team members then revised her draft based on their own reading of the survey responses. This form of peer debriefing is an important way to enhance the quality of the data analysis for a qualitative study.8
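For readers who want to see the mechanics of this step, the grouping of downloaded responses by manager and section (including the reassignment of misplaced comments) can be sketched in a few lines of Python. This is a hypothetical illustration, not the team's actual tooling; the field names (manager, section, comment) and the sample comments are invented for the example.

```python
from collections import defaultdict

# Hypothetical exported rows, one comment per row, roughly as a
# spreadsheet download of consolidated survey responses might look.
rows = [
    {"manager": "Director A", "section": "collaboration",
     "comment": "Keeps everyone informed about decisions."},
    {"manager": "Director A", "section": "communication",
     "comment": "Could hold more regular staff meetings."},
    {"manager": "Director B", "section": "fairness",
     "comment": "Distributes workload evenly."},
]

# Some comments filed under one section really concern another; the
# team grouped such comments with the more appropriate section.
# This toy rule stands in for that human judgment.
def reassign(row):
    if row["section"] == "collaboration" and "informed" in row["comment"]:
        return "communication"
    return row["section"]

# Group comments by manager and (possibly reassigned) section so an
# analyst can read all responses about one person together and look
# for common themes.
grouped = defaultdict(list)
for row in rows:
    grouped[(row["manager"], reassign(row))].append(row["comment"])

for (manager, section), comments in sorted(grouped.items()):
    print(manager, section, len(comments))
```

The analytic judgment (what counts as a common theme, which section a comment really belongs to) stays with the human readers; code like this only organizes the material for them.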

Step Two Methods: Postsurvey of Nonrespondents

For the second step of this study, the postsurvey of nonrespondents, the eligible participants were all librarians and classified staff members who did not respond to the original appraisal. After the responses to the original survey were analyzed, the appraisal team developed a second survey with the following purposes: (a) to determine why some library employees chose not to respond to the original survey, and (b) to develop a set of recommendations from both respondents and nonrespondents to improve future iterations of this process. Postsurveys such as this are recommended in the literature to aid in refining the process for future iterations.9


This postsurvey asked respondents to identify all of the following issues that contributed to their choice not to respond to the original leadership appraisal survey:

• It was a waste of my time.
• I was concerned about how the results would be handled.
• I was concerned about how the results would be used.
• I was concerned about the confidentiality of my responses.
• The timing of the survey was not convenient.
• I disliked the survey format.
• I thought the survey was confusing.
• I thought the survey was too long.
• The survey did not cover enough areas of importance.
• I dislike writing.
• I could not come up with the words to describe my true feelings.
• I was afraid of retribution.
• I have bad memories from the last upward appraisal, and I did not want to be a part of anything like that again.
• I meant to return the survey but I never got around to it.
• I never received the survey.
• I do not know what you are talking about.

The postsurvey was sent to all library employees, with a request that this survey be completed and returned only by those who had not responded to the original survey. This instrument was distributed online only, with a cover message from the chair of the appraisal team. Responses to the postsurvey were downloaded directly into an Excel spreadsheet from the online survey software. Analysis was limited to frequency distributions and a summary of comments.
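Because nonrespondents could check every reason that applied, this analysis reduces to a frequency count over a multi-select field. A minimal sketch of that tally follows; the reason labels and responses here are invented for illustration, not the study's actual data.

```python
from collections import Counter

# Hypothetical postsurvey responses: each nonrespondent checked all
# reasons that applied, so one response is a list of reasons.
responses = [
    ["confidentiality", "retribution"],
    ["confidentiality", "waste of time"],
    ["confidentiality"],
]

# Frequency distribution: in how many responses did each reason appear?
counts = Counter(reason for checked in responses for reason in checked)

for reason, n in counts.most_common():
    print(f"{reason}: {n}")
```

Note that with a multi-select item the counts sum to more than the number of respondents, which is why results like those reported below are given per reason rather than as percentages of a single total.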

Step Three Methods: Manager Interviews

Approximately one year after the original performance appraisal, two of the authors scheduled individual interviews with the dean, associate dean, and unit directors. Each interview lasted approximately thirty minutes, and both interviewers were present for each. Before the interviews, team members developed an interview protocol focused on the advantages and disadvantages of the qualitative appraisal that had taken place the previous year. Interviewers followed a semistructured approach to the interviews. That is, they used an established protocol to assure that all questions of interest were covered, but with the freedom to probe unclear responses or to follow the conversation into unanticipated but relevant areas. The interviewers took field notes of the interviews and transcribed them. The authors reviewed these transcripts and coded them for emergent themes, first within each interview and then across all interviews. The coders then met to resolve any significant differences.
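The reconciliation step described above, independent coding followed by discussion of differences, amounts to comparing each coder's theme assignments and flagging the passages where they diverge. The passage identifiers and theme labels in this sketch are hypothetical:

```python
# Hypothetical theme codes assigned independently by two coders,
# keyed by an interview passage identifier.
coder_a = {"int1-p1": "confidentiality", "int1-p2": "usefulness",
           "int2-p1": "follow-up"}
coder_b = {"int1-p1": "confidentiality", "int1-p2": "format",
           "int2-p1": "follow-up"}

# Passages where the coders disagree are the ones to be resolved
# by discussion and consensus.
disagreements = {
    passage: (coder_a[passage], coder_b[passage])
    for passage in coder_a
    if coder_a[passage] != coder_b[passage]
}

print(disagreements)
```

In practice qualitative researchers do this comparison by hand or with qualitative-analysis software; the point is only that the disagreement list, not either coder's codes alone, drives the consensus meeting.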

Step Four Methods: Employee Focus Group

In the final portion of this study, fourteen employees were invited to participate in a focus group that took place about eighteen months after the original upward performance appraisal. These were library faculty and staff from the Instruction and Outreach Department. This department was selected for this portion of the study for two main reasons: (a) the number of responses related to the director of this unit had been relatively high, and (b) the director had "closed the loop" of the upward performance appraisal with the employees in this department by sharing some of the feedback she had received from the dean. This type of purposive sampling is typical of qualitative studies.10

This focus group used a semistructured protocol that was similar to the one used for the manager interviews. Again, the questions focused on the advantages and disadvantages of the qualitative methods used in the upward performance appraisal, this time from the point of view of those completing the survey.

Two of the authors conducted the focus group. As with the manager interviews, the facilitators for this focus group took notes and transcribed them. For both the manager interviews and the employee focus group, no tape recordings were used, in order to reduce the participants' anxiety and encourage fuller responses. Data analysis for the focus group responses was similar to the analysis for the manager interviews. The authors reviewed transcripts and coded them for emergent themes, then resolved any differences in coding by discussion and consensus.

RESULTS

Step One: Upward Appraisal of Leadership Performance

Seventy-four usable responses were received from the 127 eligible salaried employees of the libraries. Since respondents were permitted to submit appraisals of more than one manager, it was not possible to determine how many unique individuals responded to the survey, and therefore also impossible to determine a response rate.

Two respondents reported their responses by personal interview, twenty mailed hard copies of the survey, and fifty-two used online submission. All but one of these responses related to a specific director or associate dean. One response made a comment about the leadership group in general rather than about a specific individual.

Most respondents were thoughtful and balanced in their appraisals. Some were extremely critical and used blunt language. Previous research has shown that this is particularly common for computer-based surveys where the respondent perceives the protection of anonymity. Kiesler and Sproull, for example, found that computer responses were twice as long as paper responses and that they included less socially desirable responses than paper surveys. Waterton and Duffy and McBrien found computer-based responses to be less inhibited than paper-based or interview responses. In the present study, the lengthier responses came online, but critical responses were received in both hard copy and online.11

Members of the appraisal team summarized the responses concerning individual managers in a confidential report for the dean. Copies of the raw responses to the Leadership Performance Survey for each manager were included in an appendix so that she could verify the accuracy of the appraisal team's perceptions. The report specified, however, that these responses were confidential and should not be shared with the managers about whom they were written, in order to protect the confidentiality of the respondents.

Based on this report and the dean's own reading of the raw response data, she prepared a written summary for each individual manager. She shared this in a private meeting with each manager. Later the dean called a special meeting of all library employees to share in a general way the results of the upward appraisal. Then each manager met with employees to share the overall feedback he or she had received. Individual results from this portion of the assessment are not reported here to preserve confidentiality.

Step Two: Postsurvey of Nonrespondents

Respondents (n = 12) to this survey were asked to indicate all of the reasons that contributed to their choice not to respond to the leadership appraisal survey. The most common responses to this survey indicated concern about confidentiality (n = 12), concern about retribution (n = 9), and a belief that completing the survey was a waste of time (n = 6). These responses were consonant with the comments that members of the appraisal team received from library employees both before and after the original upward appraisal took place.


Step Three: Manager Interviews

The dean considered the process very successful overall and indicated a desire to repeat it approximately every five years. She found the team's report to be well organized and useful but noted that seeing the full data herself was helpful to developing a complete picture. She also indicated a desire to see a higher response rate. In describing her one-on-one meetings with the managers, she found that while some were initially threatened by the process, most had recognized as legitimate the points identified by the survey responses as needing improvement.

The managers expressed a high level of satisfaction with the survey process and the degree of confidentiality maintained by the appraisal team. Those who had been through past upward evaluations that had been strictly quantitative in nature generally liked the qualitative format, although some would have preferred a mix of qualitative and quantitative questions. Nearly all managers recommended that an appraisal such as this one be conducted on a regular basis, with most suggesting a three-year interval.

Some managers expressed concern that different areas of the survey were given different weight by the dean in her one-on-one feedback meetings. For example, several managers reported that the dean had de-emphasized the communication section in her feedback to them. Managers also commented that there had been no further follow-up on the results of this appraisal to assess progress on implementing any suggested changes. Some managers wanted more detail than they were able to glean from the summary results that the dean shared with them. Finally, several of the managers commented that it would be valuable to use a very similar instrument in future upward appraisals so that data could be compared from one assessment to the next.

Step Four: Employee Focus Group

Four employees participated in the final step of this study, the focus group. Participants were generally happy with the survey format, the appraisal team's work, and the overall intention of the project. However, some voiced concerns about the success of the project overall and questioned whether there had been any real impact. Some participants reported that they did not get back enough information at the end of the appraisal, so they had no way to measure whether their own comments had been reinforced by similar comments from other employees. They also were unable to judge whether any changes in the manager's behavior were related to the results of the evaluation, as opposed to normal changes that would have occurred over time even without the appraisal. Finally, these employees believed that those managers most in need of improvement were the ones least likely to change as a result of this process.

DISCUSSION

Inherent Sense of Threat

From the beginning, it was clear that this assessment was threatening both to the managers who were being evaluated and also to the employees who were providing an assessment of their supervisors. This was particularly evident in the concern for confidentiality by all constituents. Any performance evaluation can be threatening, whether it uses quantitative or qualitative methods. However, there was heightened sensitivity about the qualitative format because employees reasonably believed that their words could identify them in ways that a numerical response could not. This concern was probably exacerbated in this particular setting because some employees who had been at Virginia Tech for many years remembered the earliest upward appraisal, in which responses were not anonymous.

Libraries are places where freedom of expression is valued. Even in this environment, many of the employees in this study were afraid to express their opinions about a matter that concerned them directly: the supervision of their own departments. This comment is not intended as a criticism of the Virginia Tech libraries. Rather, it is an observation that even such a strong value for freedom of expression cannot overcome the fact that performance appraisals inherently threaten employment, even as they are intended to improve performance, and upward performance appraisals can be similarly threatening to employees as well as managers. Valuing free expression does not make that expression necessarily safe.

Open Process with Closed Results

This sense of threat on both sides made it particularly important that the appraisal team create an open process with closed results. That is, before the original upward performance appraisal was conducted, the appraisal team was very open about what was planned. The team was careful to offer many opportunities for managers and employees in all departments to learn about the instrument and the proposed appraisal process and to comment on both.


Everyone had a chance to ask questions and to be assured about any concerns before the appraisal began. Then, once the actual appraisal started, every element of the data collection and analysis was closed to everyone but members of the appraisal team. Each step was carefully and confidentially conducted. The completed appraisal instruments were forwarded to the only member of the appraisal team who was not employed by or officed in the libraries, so that employees would not feel that their identity could be guessed from the process of submitting the form. The meetings of the appraisal team to analyze the responses were conducted under strict rules of confidentiality. The report to the dean was clearly labeled "confidential," and there were reminders within the report that individual responses were not to be shared with the managers. The dean's general feedback summaries for each manager were not shared with the appraisal team or with anyone else in the libraries, unless the manager personally chose to do so.

In the end, both managers and employees responded positively to the fact that their own portion of the process had been handled with such care. Interestingly, both managers and employees said that the process and the results would have been more meaningful to them if they had been privy to more of the information on the other side. Some managers wanted more information about specific employee responses in order to judge whether the team had done an appropriate job of summarizing the results and in order to understand some of the comments more specifically. Of course, sharing such information would have compromised the confidentiality of the employees who responded to the survey. On the other side, employees felt that they were left out of the loop once the survey was completed. They wanted more information about what their managers had been told and what the dean had asked the managers to change in their behavior. Of course, sharing this type of information would have compromised the confidentiality that is necessary in any employee's performance evaluation.

The authors believe that the team made the best choices available under the circumstances. To offer any weaker guarantees of confidentiality of responses would have made it far less likely that employees would have been willing to respond to the survey in the first place. To offer any weaker guarantees of confidentiality regarding the feedback to managers would have caused them to withdraw their support from the process at the outset and might have risked a formal grievance. Overall, the authors believe that a qualitative upward performance appraisal must have a general climate of openness and safety for both managers and employees in order to be accepted. The open process with closed results provided this climate for the present appraisal. For future upward appraisals that use methods such as these, it would be important to stress the limitations on both sides so that everyone understands that the confidentiality procedures that protect them also limit what they can know about the outcome.

Strong Versus Weak Relationships

The authors observed that this process seemed to work best in departments where relationships between the manager and the employees were already fairly strong. In those cases, the employees felt comfortable in responding to the survey and the managers felt comfortable in receiving feedback from the survey. Most of the responses concerned suggestions for changes in specific behaviors that would be relatively easy to accomplish and that would make a visible difference in the function of the unit. This is precisely the way that an upward evaluation is supposed to work.

However, in departments where the relationships between the manager and the employees were already very strained, this appraisal process did not work nearly so well. The response rate in those departments was much lower because the employees did not trust the confidentiality or efficacy of the appraisal process. The willingness of the managers in those departments to accept the feedback of the employees was also much lower. In those departments, the types of comments that employees made were more likely to center on personality issues, which were unlikely to lead to change, rather than on leadership behaviors that might reasonably be changed.

Of course, these relationship issues are not limited to qualitative upward appraisals. They would be similar if the appraisal methods were quantitative.

Misunderstanding of Qualitative Methods

The authors observed that many people involved in this study did not fully understand and appreciate the nature of qualitative analysis, including typical elements of qualitative research such as purposive sampling, small sample sizes, and non-numeric responses. Several of the managers felt a need to confirm the qualitative results with a set of numbers: "How many people responded in this way?" The dean herself, who had requested a qualitative approach, wanted to confirm the appraisal team's findings by viewing the complete data set. These responses indicate that some people had little confidence in the value of qualitative results and were far more willing to give credence to numerical results. Those familiar with qualitative research would argue that a carefully conducted qualitative study can tell far more about a phenomenon than a quantitative study, and that interpreting quantitative results accurately is sometimes very difficult or even impossible. For future qualitative upward appraisals, it would be important to spend time at the outset explaining what qualitative research is and what it is not, and helping constituents on all sides understand the value of results that are derived inductively.

Some Technical Issues

This appraisal team ran into a few technical issues that could be addressed in any similar appraisals in the future. First, several employees apparently misunderstood the question about whether they reported to the director they were evaluating. From the responses, it is clear that some employees within a director's unit indicated that they did not report to that director because their reporting relationship was more than one step removed. The appraisal team shared the instrument with employees in advance of the administration, but none ever indicated a misunderstanding of this issue. This finding is a reminder that a locally developed instrument should be piloted with potential participants. However, in this case, the number who apparently misunderstood this question was small enough that even a pilot administration might not have caught the problem.

Second, some staff members do not have a private office or work space as library faculty may have. This lack of privacy heightens the sense of threat that may exist for these staff members in responding to an upward appraisal and may have depressed the response rate. In future appraisals, this problem could be reduced by providing a private place for any employee to complete the appraisal instrument.

Third, the more verbal a participant is and the more education a participant has, the more comfortable that person is likely to be in completing a qualitative survey such as this one. For future appraisals, it might be helpful to consider whether a particular survey format arbitrarily inhibits the responses of any particular group of employees.

Fourth, employees in smaller departments were particularly concerned about confidentiality, since their responses could not reasonably be combined with many others on a particular issue. This would be true for quantitative upward appraisals in these departments as well. Upward appraisals of any format may not be appropriate in smaller units if confidentiality is a concern.

CONCLUSION

Both managers and employees expressed a willingness to repeat this process, perhaps as often as every three years. Interestingly, even those who had complaints about the process and those who were not sure of the immediate value of the appraisal in their own department expressed an interest in repeating the appraisal.

"Both managers and employees expressed a willingness to repeat this process [of upward evaluation]. . ."

This survey was conducted in the belief that a qualitative appraisal might be superior in some ways to a quantitative format. The hope was that it would provide more meaningful information to managers to guide their professional development. Based on the observations of the authors throughout this process, and on the responses of the managers and employees who participated in it, it is clear that a qualitative upward appraisal can provide richer information than would be available from a quantitative survey. The optimal circumstances for using a qualitative format include (a) a climate of safety on both sides, (b) an open process with closed results, (c) relationships between managers and employees that are essentially healthy, and (d) a clear understanding of what qualitative data are and what they are not. The authors agree with both managers and employees who recommended that this process will lead to the most meaningful results if it is institutionalized so that a similar instrument is used every few years. This repetition could help reduce the sense of threat on all sides and make it possible to observe trends over time in a particular manager's performance.

July 2004 309

APPENDIX A

310 The Journal of Academic Librarianship



NOTES AND REFERENCES

1. Susan J. Wells, "A New Road: Traveling Beyond 360-Degree Evaluation," HRMagazine 44 (1999): 82–88; Leanne Atwater & David Waldman, "Accountability in 360 Degree Feedback," HRMagazine 43 (1998): 96–102; Leanne E. Atwater, et al., "An Upward Feedback Field Experiment: Supervisors' Cynicism, Reactions, and Commitment to Subordinates," Personnel Psychology 53 (2000): 275–297; Linda deLeon & Ann J. Ewen, "Multi-Source Performance Appraisals," Review of Public Personnel Administration 17 (1997): 22–36; Stephen L. Guinn, "Executive Development: Why Successful Executives Continue to Change," Career Development International 4 (1999): 240–243; Jeff W. Johnson & Kerri L. Ferstl, "The Effects of Interrater and Self-Other Agreement on Performance Improvement Following Upward Feedback," Personnel Psychology 52 (1999): 271–303; J.E. Osborne, "Upward Evaluations: What Happens When Staffers Evaluate Supervisors," Supervisory Management 35 (1990): 1–2; Richard Rubin, "The Development of a Performance Evaluation Instrument for Upward Evaluation of Supervisors by Subordinates," Library & Information Science Research 16 (1994): 315–328; Richard Rubin, "Upward Appraisal: What Do Subordinates Consider Important in Evaluating Their Supervisors?" Library & Information Science Research 17 (1995): 151–161; James W. Smither, et al., "An Examination of the Effects of an Upward Feedback Program Over Time," Personnel Psychology 48 (1995): 1–34; Alan G. Walker & James W. Smither, "A Five-Year Study of Upward Feedback: What Managers Do With Their Results Matters," Personnel Psychology 52 (1999): 393–423.

2. Jess A. Martin, "Staff Evaluation of Supervisors," Special Libraries 70 (1979): 26–29; Germaine C. Linkins, "Department Head Evaluations: The Virginia Tech Experience," Journal of Library Administration 5 (1984): 53–59; Martin Elliot Jaffe & Sheila Ives, "They Shoot Supervisors, Don't They?," Library Journal 112 (1987): 116–118; Rubin, 1994; Fred Pyrczak, Making Sense of Statistics: A Conceptual Overview, 2nd ed., Pyrczak Publishing, Los Angeles, 2001, pp. 66–67; Linda A. Suskie, Questionnaire Survey Research: What Works, 2nd ed., Association for Institutional Research, Tallahassee, FL, 1996, p. 33.

3. Eileen Hitchingham, "Appraisal Team for Evaluating Leadership Performance of the Library Directors and the Associate Dean," University Libraries, Virginia Tech, 2001; Eileen Hitchingham, meeting with the appraisal team, August 30, 2001.

4. Ibid.; Eileen Hitchingham, meeting withthe appraisal team, August 30, 2001.

5. Germaine C. Linkins, "Department Head Evaluations: The Virginia Tech Experience," Journal of Library Administration 5 (1984): 53–59.

6. James L. Hall, Joel K. Leidecker, & Christopher DiMarco, "What We Know about Upward Appraisals of Management: Facilitating the Future Use of UPAs," Human Resource Development Quarterly 7 (1996): 209–226;

Dianne LaMountain, "Things Are Looking Up," Small Business Reports 17 (1992): 11–12.

7. James M. Kouzes & Barry Z. Posner, The Leadership Challenge: How to Keep Getting Extraordinary Things Done in Organizations, 2nd ed., Jossey-Bass, San Francisco, 1995; Mary Jane Santos, "An Examination of Standardized Evaluation Forms Used in Public Libraries to Evaluate Professional Librarians" (master's thesis, Kent State University, 1992); Rubin, 1995.

8. Manning, "Giving Voice," pp. 24–25.

9. Kenneth M. Nowack, Jeanne Hartley, & William Bradley, "How to Evaluate Your 360 Feedback Efforts," Training & Development 53 (4) (1999): 48–53.

10. Kathleen Manning (Ed.), Giving Voice to Critical Campus Issues: Qualitative Research in Student Affairs, American College Personnel Association, Alexandria, VA, 1999, pp. 16–17.

11. Sara Kiesler & Lee S. Sproull, "Response Effects in the Electronic Survey," Public Opinion Quarterly 50 (1986): 402–413; J.J. Waterton & C. Duffy, "A Comparison of Computer Interviewing Techniques and Traditional Methods in the Collection of Self-Reported Alcohol Consumption Data in a Field Survey," International Statistical Review 52 (1984): 173–182; B.D. McBrien, "The Role of the Personal Computer in Data Collection, Data Analysis, and Data Presentation: A Case Study" (paper presented to the 82nd ESOMAR Seminar, Nice, France, November 7–10, 1984); Nicolaos E. Synodinos & Jerry M. Brennan, "Computer Interactive Interviewing in Survey Research," Psychology & Marketing 5 (1988): 117–137.
