An Experimental Study of Performance Information Systems



Author(s): Andrew D. Luzi and Kenneth D. Mackenzie
Source: Management Science, Vol. 28, No. 3 (Mar., 1982), pp. 243-259
Published by: INFORMS
Stable URL: http://www.jstor.org/stable/2630879
Accessed: 07/11/2012 02:39


AN EXPERIMENTAL STUDY OF PERFORMANCE INFORMATION SYSTEMS*

ANDREW D. LUZI† AND KENNETH D. MACKENZIE‡

Performance information systems minimally consist of a performance standard, performance measurement, incentives to match performance to the standard, and periodic reports on performance. In this study four performance information systems are imposed on laboratory groups solving simple routine problems. These included speed, minimum messages, minimum error, and voting. There was a control condition with no explicit performance information system. In aggregate, the groups tried to meet the assigned standard of performance. The performance information systems also set into motion other group processes resulting in numerous sensible but unanticipated consequences. It was not possible to conclude that the explicit performance information systems improved overall performance. The choice of performance information system did affect the task processes, the choice of structure, and the involvement in task supporting processes. The variability of behavior within each group suggests caution in widespread application of performance information systems.
(PERFORMANCE INCENTIVES; INFORMATION SYSTEMS; ORGANIZATION DESIGN)

1. Introduction

Managers are generally held accountable for the performance of subordinates. They develop and adopt procedures and systems to obtain performance information. This information is used to control performance. Performance information systems are prominent features of most large organizations. The development and installation of performance information systems is a large industry, and one that will continue to grow as demands for accountability increase.

The basic steps in developing a performance information system are, in principle, straightforward. First, the manager decides what he or she wants the organizational unit to do. Second, the manager seeks a measure for amounts of the activity being performed. Third, the manager tries to develop a standard of performance against which he or she can compare actual performance. Fourth, the manager develops procedures for obtaining performance information. Fifth, an incentive system is

created to assist in bringing actual performance close to the standard. Finally, a system is set up to provide results to the employees.

Despite the wide range and sophistication of performance information systems, the diverse behavioral consequences are widely recognized but the underlying behavioral processes are not well understood [14]. Any intrusion into an operating system disturbs it. Performance information systems are, by their very nature, intrusive. After all, a prime goal for developing and installing one is to gain control over performance. Do they work? If they work sometimes and not others, can we explain why? Are there unanticipated consequences that need understanding?

The purpose of this study is to test impacts of several performance information systems on group performance and problem solving processes. We seek to extend a theory of group structures [9], [10] into this problem area and thereby begin the development of a theory capable of answering questions about the relationships

*Accepted by Arie Y. Lewin; received February 4, 1980. This paper has been with the authors 11 months for 1 revision.
†Pennsylvania State University.
‡University of Kansas.



between a performance information system and the human response to it. To accomplish these purposes we deliberately worked with simple problems, small groups in a laboratory, and elementary performance information systems.

Performance Information Systems

Minimally, a performance information system has four characteristics: (1) a performance standard, (2) a performance measure, (3) periodic reports which match actual performance against the standard, and (4) an incentive scheme to encourage performance to meet the standard.

Unanticipated and disruptive results can be minimized, in theory, if five conditions are met. These include: (1) one measures what one is intending to measure (validity), (2) the measurements are consistently applied and the purposes are acceptable (fairness and legitimacy), (3) the person or group controls the key factors that allow it to meet the standard, (4) the promised rewards actually materialize, and (5) the standards are not tightened as they are met. In the case of multiple performance information systems in a large organization, there is a sixth condition: that the performance information systems are meshed to achieve overall organizational objectives. Dornbusch and Scott [2] describe these conditions in depth. The first five conditions are met in this study.

In this study we investigate the behavior of a five-person group performing a routine, repetitive task which is subject to a performance information system. Such a little organization is a laboratory analog to a group of clerks at the lowest organizational level. We believe that such a group is the least likely to produce dysfunctional consequences in responding to a performance information system. We seek to understand the behavioral responses to operating with a performance information system under the most favorable conditions.

Group Problem Solving Processes

Groups and organizations engage in complex and interrelated task, micro political, and social activities. It is possible to organize one's perception of group behavior around the flows of these activities. Each of these time dependent and contingent flows constitutes a process. Performance is an output of these processes. Just as the final score in a tennis match does not explain the play, the reliance on an output measure such as performance may not capture the reasons for the performance.

A performance measure covers only selected aspects of its constituent processes. There is an inherent incompleteness in any performance information system because human and group behavior is multidimensional, fluid, political, and interrelated. Many processes are dependent upon other processes. For example, a micro political process for the emergence of a leader will change the task processes [9], [10]. Emergencies in the task processes can, in turn, set off new micro political processes to change structures and the leaders.

A performance information system can reinforce the repetition of existing processes. It can also reinforce change processes believed to result in performance closer to the standard. However, if the group is completing a unit of work collectively, a performance measure tied to the completion of a unit of work can disrupt organizational and social processes cutting across such work cycles. A partial performance measure can disrupt the flows within a work cycle. For example, if a bank teller has zero discretion in approving a check for payment because of the controls, an influential depositor with a twenty-year history with the bank can become annoyed at the procedures and threaten to remove his deposits to a rival. The depositor's reaction to the teller's action can disrupt the loan operations, cause borrowing from the Federal Reserve Board, and


affect the tasks of other bank personnel. It could even trigger the sudden arrival of the bank examiners. The facts that: (1) group outputs are the result of complex, dynamic, and interconnected streams of processes, (2) performance measures are surrogates of the output, (3) the incentives built into a performance information system can trigger shifts in these processes, and (4) the performance measures usually are based on outputs rather than the processes leading to the outputs, imply that there should be non-trivial relationships among performance information systems and group problem solving processes.

For these relationships to exist, there must be an attempt by group members to meet the standard of performance. If the standard is ignored, i.e., has no influence, there will be no systematic differences between groups' performances under different performance information systems. This standard, along with the other aspects of a performance information system, sets up a "game" to play that unfolds into sets of group processes. Which aspects of the initial conditions (or "game") cause a group to attempt to meet the standard is unknown. But, if the minimal conditions mentioned in the previous section are met, it is hypothesized that groups will attempt to meet the standard of performance assigned them. Once groups enter their problem solving processes, these processes themselves also become determinants of subsequent behavior. Thus, the relationships between performance information systems and group problem solving processes (and results) are very complex. It is necessary to investigate these relationships to understand how performance information systems can be used to influence and control group performance.

We shall focus on the task processes and structural change processes in this study (cf. [9] for a more detailed presentation of these processes). The task processes consist of the contingent interpersonal flows of interaction used to complete a work cycle. A task process is defined in terms of its stages, called milestones, and the flows among the milestones. Each milestone is reached after interactions. The pattern of interactions for each milestone is its structure. Structural change processes are contingent interpersonal exchanges of influence concerned with changing the structures and the task processes.

Different performance information systems can affect a given process differently depending on what the group perceives it needs to accomplish to meet the standard. In the task process, this difference can be shown by the frequency of task activities and the type of structures used to complete the task. We shall refer to three structural forms: wheels, chains, and all-channels. These are illustrated in Figure 1. Previous work has established that a group typically selects a wheel, chain, or all-channel for the type of problem selected for this experiment [10]. Thus, we describe structural changes in this study in terms of shifts among these forms.

Changes in the task processes are due to the structural change processes. Structural change processes involve influence attempts which are called votes [9], [10]. A sequence of votes on a given task structure is called an election. Depending on the problem attributes upon which the performance information system focuses, different performance information systems can result in different structural change processes. An extensive description of structural change processes can be found in [9], [10].

Groups have numerous concurrent group processes. Group problem solving processes have been described in [3], [4], [9], [10], and [15]. Linkages between group processes and task effectiveness have been described ([1] and [5]). The specific group processes highlighted in the study are: (1) the establishment of procedures to solve the problems, (2) verification of previously sent messages, (3) discussion of goals, and (4) social or non-task oriented interactions. These were selected because of the nature of the problems and the choice of performance information systems used in this study. Details about these processes and the experiments can be found in [8].


[FIGURE 1. The three structural forms: wheel, chain, and all-channel.]


General Hypotheses

Structures represent need satisfying patterns of interaction. Performance information systems affect the instrumentality of actions to satisfy these needs. The structures can change to allow the group to meet the performance standard. Given that the standards are valid, fair, and legitimate, that the group has control over those processes needed to meet the standards, and that the rewards are stable and matched to the closeness of performance to the standard, we have four general hypotheses:

H1: Groups will attempt to meet the standard of performance assigned them.

H2: Relationships among the performance information system and structural change processes will affect: (a) the number of votes made to influence the choice of structures and (b) the number of elections held about the choice of structure.

H3: Relationships among the performance information system and the task processes will influence: (a) the choice of structures and (b) the frequency of task interaction.

H4: Different performance information systems will tend to produce differences in: (a) the number of procedural messages, (b) the number of verification messages, (c) the number of messages concerned with goals, and (d) the number of non-task oriented messages.

Consequently, if one selects different performance information systems while holding constant the nature of the problem, these general hypotheses predict very different behavior depending upon the choice of the performance information system. That is, for the same type of task situation, one can expect very different behaviors, depending upon the selection of the performance information system. There will be variety in both the results and the group processes. One should also expect different results and group processes if the task situation is changed under a common performance information system. Therefore, there is a basic indeterminacy among group processes, structures, performance, and performance information systems. We postulate that this inherent indeterminacy is the prime cause of unanticipated consequences in applying performance information systems.

2. Method

The experiment employed four different performance information systems, a control with no explicit performance information system (together referred to herein as the five experimental conditions), a laboratory setting, and a process paradigm with which to analyze the data. Forty five-person groups were utilized, with eight groups per experimental condition. Each group was under only one performance information system and worked six to eight problems. The analysis focused on how the groups worked their problems.

Five Experimental Conditions

Four different performance information systems are examined in this study, each representing a different experimental condition. A group was under only one of these conditions for its entire experiment. For each condition, the standard of performance was the level of performance at which the members received maximum pay per problem. The performance measure was the measure of their performance as against the given standard. A group received a measure of its performance after it successfully completed each problem. In the first four conditions outlined below, the minimum amount that could be earned was $0 and the maximum was 20 cents for each problem for each person in the group.


The performance standard for one condition was the number of minutes to solve a problem. This was referred to as the speed condition, with the performance standard set at three minutes. The incentive was a linear relationship based on 20 cents for the three-minute standard and a penny-a-minute penalty. (The lab clock measured to the nearest minute.)

A second condition, the minimum message condition (min. msg.), was represented by the performance standard of the number of excess task messages sent by the group. Each problem could be solved with a minimum of 18 messages. The standard of performance was set at zero excess messages sent. The incentive was a linear relationship based on 20 cents for zero excess messages and with a penny deduction for each excess message.

A third condition, the error condition, was represented by the performance standard of the number of errors per problem. The standard of performance was set at zero

errors, and the incentive was a linear relationship based on 20 cents for zero errors with a five-cent deduction for each error.

A fourth condition, the vote condition, was represented by the performance standard of the number of organizational messages sent. The standard of performance was ten organizational messages. The incentive was a linear relationship based on 20 cents for ten organizational messages sent with a two-cent deduction for each organizational message not sent.

A fifth experimental condition provided a link to the previous experiments [8] and acted as a control. This was referred to as the $/hour condition. No performance standard was set, nor was any measure of performance made. The incentive was not tied to a standard, nor were any periodic reports given. The participants were simply told that the faster they worked the six or eight problems, the sooner everyone could go home and the higher their rate of pay ($/hour) would be. The pay was a flat $2.20 per participant for working all problems.

The speed, minimum messages, and error conditions were reasonable in that they (1) directly affected the problems the groups solved, (2) are fairly common means of evaluating performance, and (3) are concerned with the input, output, and process activities required to solve the problems efficiently. The voting condition attempted to stimulate a process of structural change that has in other studies [8] proven helpful in obtaining performance. The voting condition was concerned directly with a means and not the end results. Each condition could readily be measured as a group worked through its problem, so that within a minute or two after the solution was accepted as correct, a group was provided the results of its performance. The four incentive schemes are summarized in the sketch below.
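As a compact restatement of the four linear incentive schemes, the following Python sketch computes one person's pay per problem in cents. It is our illustration, not the authors' procedure; the function name and the assumption that pay is clamped between the stated $0 minimum and 20-cent maximum are ours.

```python
def pay_per_problem(condition, minutes=0, excess_messages=0, errors=0, org_messages=0):
    """One person's pay per problem, in cents, under the four linear schemes.

    Assumes pay is clamped to the stated $0 minimum and 20-cent maximum.
    """
    if condition == "speed":        # 20 cents at the 3-minute standard, 1 cent per extra minute
        pay = 20 - 1 * max(0, minutes - 3)
    elif condition == "min_msg":    # 20 cents at zero excess messages, 1 cent per excess message
        pay = 20 - 1 * excess_messages
    elif condition == "error":      # 20 cents at zero errors, 5 cents per error
        pay = 20 - 5 * errors
    elif condition == "vote":       # 20 cents at 10 organizational messages, 2 cents per message short
        pay = 20 - 2 * max(0, 10 - org_messages)
    else:
        raise ValueError(f"unknown condition: {condition}")
    return max(0, min(20, pay))

print(pay_per_problem("speed", minutes=7))       # 16
print(pay_per_problem("vote", org_messages=6))   # 12
```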

The Experimental Problems

The problems presented to the group were deductive and simple (cf. [8], [9] for a complete description). Every subject was given a set of symbols at the beginning of each problem. Each subject was unaware of the set of symbols the other members received. The subjects had to share their symbols to obtain a solution, which consisted of a minimum list of the different symbols. There were four types of symbols: numbers, colors, letters, and shapes. The type of symbol varied randomly with the problem. The variety of symbols made it difficult to define a procedure for a consistent ordering of the minimum list of symbols without someone performing the coordination.

Laboratory Setting and Directions

The laboratory setting consisted of five isolated booths in a semi-circle around the experimenter. Communication was by written message only. Each subject was identified by the color of the pen he or she used. The forms were pre-marked with the


sender's color, and the receiver could be designated by circling the color of a booth on the communications form. Carbon paper was provided for multiple messages. Each message was sent through a mail slot to the experimenter, who stamped the date, time, and message number on it. It was then delivered to the designated booth.

The subjects were first brought together outside the experiment room and given instructions. An example problem was worked on a blackboard, and the participants were allowed to ask questions until the experimenter felt that everyone knew exactly how to work the problems.

Sample problem data, similar to that presented to the participants, appears below.

Group Member    Symbols Received
Red             2, 3
Black           1, 0
Green           2, 1
Blue            3, 4
Purple          1, 3

One possible solution, derived for the group, might be 2, 3, 1, 0, 4. The subjects then received the rules for an acceptable group solution: (1) each different symbol must be included, (2) a given symbol can be included only once, (3) each member must have a solution and turn it in to the experimenter, and (4) the order of the symbols must be the same for each one of the five submitted solutions. After the experimenter received one solution from each subject, he would make a ruling. If the group answer was incorrect, the experimenter would announce this fact and wait for another answer. If it was correct, this would be announced, the group performance measure would be calculated and distributed to each subject, and the information for the next problem would be sent to the subjects. A small sketch of this solution rule appears at the end of these directions.

Next, the subjects were introduced to the forms they would use and shown how they could communicate with these forms. They were told that they could communicate with whomever they wished and on any subject. They received sample forms representative of the type placed in their booths. One form was addressed to the experimenter and was used for submitting the solution. Two general memorandum forms, white and yellow, were distributed. The experimenter explained that the white form was to be used if the content concerned the task and messages about the task. The yellow form was used for "organizing content" and for social messages. "Task" meant subject matter concerned with original symbols or solutions, or messages about these symbols or solutions. "Organizing content" referred to the fact that the group, as a small organization, had to organize itself and devise a method of coordinating its activities before it could begin production.

The final part of the directions concerned the standard of performance. This was the only aspect of the experiment in which the instructions differed for groups under different experimental conditions. All groups having an explicit performance information system received a form that explained it.
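To make the solution rule concrete, here is a minimal Python sketch of one acceptable way to derive a group solution: every distinct symbol appears exactly once, in first-seen order. This is our illustration of the rules, not the coordination procedure any group actually used; the booth data repeat the sample problem above.

```python
def minimal_solution(symbol_lists):
    """Merge the members' symbol lists into one minimum list:
    each distinct symbol exactly once, ordered by first appearance."""
    solution, seen = [], set()
    for symbols in symbol_lists:
        for s in symbols:
            if s not in seen:
                seen.add(s)
                solution.append(s)
    return solution

booths = {"Red": [2, 3], "Black": [1, 0], "Green": [2, 1],
          "Blue": [3, 4], "Purple": [1, 3]}
print(minimal_solution(booths.values()))  # [2, 3, 1, 0, 4], the sample solution above
```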

Participants

The 250 subjects were, in general, associated with the University of Kansas. They ranged from freshmen to Ph.D. students. Most of the subjects were drawn from the student body of the School of Business. The authors discussed the experiment in various business school classes there and concluded by requesting volunteers. During these presentations the authors explained that the incentives consisted of $1.00 for listening to directions and a maximum of $.20 per problem for six to eight problems, depending on the group's performance. (The first four groups were under a slightly different incentive system. They were told that the pay was $.20 per problem with a


maximum of eight problems. The change was made because the experiments required more time than originally expected. We were unable to detect any effect of this slight change under the conditions of this experiment.) Approximately ten percent of each class volunteered. Recruiting took place over a six-month period and as participants were needed to run the groups.

Measures

Four performance measures were employed: time per problem, number of messages per problem, number of errors per problem, and number of organizing messages (called votes) sent during each problem solving period. To describe the task processes, we use two phases. Phase One is data exchange; it is completed when one or more subjects hold all original problem data given by the experimenter. Phase Two is reaching the solution; it is completed when each subject has the correct answer. We report here the number and types of messages by phase and the structures selected for each phase.

A content analysis was also performed to detect procedural, verification, goal, and social messages. A procedural message describes a method for deriving a solution so that each member can prepare his own solution consistent with each other member without exchanging solutions. Verification messages verify the correctness of initial problem data or solutions sent in solving the problems. Goal messages state an objective or express encouragement on reaching an objective. Finally, social or non-task oriented messages are messages having no direct relationship or influence on working the problems.

The measures for the structural change processes include the number of votes in an election for the data phase and the number of elections held. A vote is a message having content concerning the structural form of the group. An election begins with a recall vote calling into question a current structure and ends when there is consensus on the new structure. Some elections result in no change in structure. Some groups engage in sequences of elections over the sequence of problems. (A full description of the underlying theory of Behavioral Constitutions can be found in [9], chapter 7.) A sketch of this vote and election count appears at the end of this section.

As a check on these objective measures of group behaviors, a questionnaire was administered to each participant at the end of each group session. This subjective assessment was made in three areas: (1) paying attention to the standard, (2) satisfaction with the problems and their performance, and (3) the relative importance of four attributes related to the four performance information systems: minimize time, minimize errors, minimize messages, and send organizing messages.
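As an illustration of how the vote and election counts above are defined, the following sketch tallies votes and elections from a simplified stream of organizing messages. The message labels and the consensus signal are our hypothetical coding, not the authors' instrument; an election opens with a recall vote and closes at consensus.

```python
def count_votes_and_elections(organizing_messages):
    """Tally votes (messages about structural form) and elections
    (recall vote through consensus) from a simplified message stream."""
    votes, elections, in_election = 0, 0, False
    for msg in organizing_messages:
        if msg == "recall_vote":       # calls the current structure into question
            votes += 1
            if not in_election:
                elections += 1
                in_election = True
        elif msg == "vote":            # an influence attempt within an election
            votes += 1
        elif msg == "consensus":       # agreement reached; structure may be unchanged
            in_election = False
    return votes, elections

stream = ["recall_vote", "vote", "vote", "consensus", "recall_vote", "consensus"]
print(count_votes_and_elections(stream))  # (4, 2): four votes across two elections
```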

Results

The results for Hypothesis 1 are tabulated in Table 1. The speed groups were fastest, the minimum message groups sent the fewest messages, the voting groups voted the most, and the error groups had the fewest errors. These effects are all statistically significant in magnitude except for the error groups. While the error groups had the lowest number of errors, they were not significantly lower than either the minimum message or voting groups. The control group ($/hour) was intermediate on each of the four performance measures.

Data for Hypothesis 2 are given in Table 2. The data clearly show that the vote condition encouraged both voting and more elections. Seven of the eight comparisons are statistically significant. While there were more elections in the vote condition than any other, there is not a statistically significant difference (t = 1.5) in the mean number of elections held in the speed condition (1.48) compared with the vote condition (2.38).


TABLE 1
Mean Number of Minutes, Messages, Errors, and Organizational Messages by Experimental Condition

                          Experimental Condition
                    Speed    Min.Msg.  Error    Vote     $/Hour   Comparisons
                     (1)       (2)      (3)      (4)       (5)    t(48)
(a) Minutes per      5.0       7.5      8.6     11.3       6.5    1 vs 2   3.51***
    completed       (2.6)     (4.2)    (5.0)    (5.6)     (3.4)   1 vs 3   4.42***
    solution(a)                                                   1 vs 4   7.07***
                                                                  1 vs 5   2.35**
(b) Messages        23.4      12.0     29.6     55.7      22.7    2 vs 1   5.30***
    between        (13.3)     (6.6)   (19.7)   (23.6)    (12.9)   2 vs 3   5.86***
    subjects per                                                  2 vs 4  12.40***
    completed                                                     2 vs 5   5.12***
    solution(a)
(c) Errors           .7083     .3125    .1667    .2917     .4375  3 vs 1   3.42***
                    (.988)    (.624)   (.476)   (.711)    (.848)  3 vs 2   1.29
                                                                  3 vs 4   0.96
                                                                  3 vs 5   1.93*
(d) Organizational  11.1       5.2     10.9     25.8       9.9    4 vs 1   3.19***
    messages       (20.9)     (7.6)   (12.2)   (24.1)    (14.1)   4 vs 2   5.66***
                                                                  4 vs 3   3.82***
                                                                  4 vs 5   3.96***

Notes: Cell entries are mean values over all groups and problems in an experimental condition; standard deviations appear in parentheses.
(a) "Per completed solution" is calculated by taking total time (or messages between subjects) per problem and dividing by one (the correct solution) plus the number of error solutions submitted.
* p < 0.05; ** p < 0.01; *** p < 0.001 (all one tailed).
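The "per completed solution" normalization in the table note, and the style of pairwise comparison in the rightmost column, can be restated in a few lines of Python. This is our reconstruction with made-up per-problem figures, not the authors' data or code; we use a two-sample t-test from scipy and halve the p-value for the one-tailed comparison.

```python
# Sketch of the Table 1 normalization and a one-tailed pairwise comparison.
# The per-problem (total, error_solutions) pairs below are hypothetical.
from scipy import stats

def per_completed_solution(total, error_solutions):
    """Divide total minutes (or messages) by 1 correct solution + errors."""
    return total / (1 + error_solutions)

speed = [per_completed_solution(t, e) for t, e in [(6, 1), (4, 0), (5, 0), (7, 1)]]
vote  = [per_completed_solution(t, e) for t, e in [(12, 0), (10, 1), (14, 0), (11, 0)]]

t_stat, p_two_sided = stats.ttest_ind(speed, vote, equal_var=False)
print(t_stat, p_two_sided / 2)  # one tailed: speed condition faster than vote
```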

TABLE 2
Mean Number of Votes and Elections in the Structural Change Process

                          Experimental Condition
                    Speed    Min.Msg.  Error    Vote     $/Hour   Comparisons
                     (1)       (2)      (3)      (4)       (5)    t(48)
(a) Number of        9.6       3.9      7.5     21.2       7.7    4 vs 1   2.92**
    votes in        (20.1)    (8.2)   (10.2)   (18.2)    (11.1)   4 vs 2   5.99***
    elections for                                                 4 vs 3   4.52***
    the data                                                      4 vs 5   4.39***
    milestone
(b) Number of        1.48      .73     1.00     2.38      1.06    4 vs 1   1.50
    elections       (3.6)     (1.6)    (1.4)    (2.6)     (1.5)   4 vs 2   3.79***
                                                                  4 vs 3   3.25***
                                                                  4 vs 5   3.07***

Notes: Cell entries are mean values over all groups and problems in an experimental condition; standard deviations appear in parentheses.
* p < 0.05; ** p < 0.01; *** p < 0.001 (all one tailed).


TABLE 3
Frequency of Selected Structure Across All Problems for Each Experimental Condition, Data Exchange and Solution Phases (in %)

                          Experimental Condition
              Speed (1)    Min.Msg. (2)   Error (3)     Vote (4)     $/Hour (5)
Selected     Data  Soln.   Data  Soln.   Data  Soln.   Data  Soln.   Data  Soln.
Structure    Phase Phase   Phase Phase   Phase Phase   Phase Phase   Phase Phase
Wheel          21    38      42    77      42    65      23    56      42    54
Chain           0     0      27     0       6     0       0     4       0     0
All-Channel    31     0       2     0      13     4      17     0      23     2
Other(a)       48    62      29    23      39    31      60    40      35    44

(a) "Other" structures are ways of completing the data or solution phases that do not conform to the three structures in Figure 1. They typically involve more messages than a wheel or chain but fewer than an all-channel.

Table 3 contains data relevant to part (a) of the third hypothesis. For the data phase, the relationship between the choice of structure and the experimental conditions yields a χ² of 66.5 with twelve degrees of freedom and a significance level of p < 0.0001. The choice of a chain structure occurred mainly for the minimum message condition, rarely in the error condition, and never for the other conditions. The wheel structure was selected less in the speed and vote conditions than in the other three conditions. The time trend for the selection of structure in the data exchange phase is shown in Figure 2. (The trend for the solution phase is similar.) A sketch of this kind of χ² test appears below.
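The χ² statistic above comes from a contingency table of structure choices by condition. The published table reports percentages rather than raw counts, so the sketch below uses hypothetical counts purely to show the mechanics of such a test with scipy.

```python
# Hypothetical counts of structure choices (rows) by condition (columns);
# the paper reports percentages, so these numbers are illustrative only.
from scipy.stats import chi2_contingency

counts = [[10, 20, 20, 11, 20],   # wheel
          [ 0, 13,  3,  0,  0],   # chain
          [15,  1,  6,  8, 11],   # all-channel
          [23, 14, 19, 29, 17]]   # other

chi2, p, dof, expected = chi2_contingency(counts)
print(chi2, dof, p)  # dof = (4 rows - 1) * (5 columns - 1) = 12
```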

[FIGURE 2. Cumulative Graph of the Number of Wheels and Chains Used in the Data Exchange Phase. Key: Speed, Min. Msg., Error, Vote, $/Hour; horizontal axis: problem number.]


TABLE 4
Mean Number of Data Phase Interactions and Mean Number of Solution Phase Interactions for Each Experimental Condition

                          Experimental Condition
                Speed    Min.Msg.  Error    Vote     $/Hour   Comparisons
Phase            (1)       (2)      (3)      (4)       (5)    t(48)
Data            13.9       6.9      9.2     13.1      11.0    2 vs 1   5.56***
                (7.3)     (4.9)    (6.6)    (6.9)     (7.6)   2 vs 3   1.95*
                                                              2 vs 4   5.11***
                                                              2 vs 5   3.14**
Solution         9.0       6.0      8.5     11.5      10.6    2 vs 1   2.24*
                (8.4)     (3.6)    (7.5)   (16.8)    (12.2)   2 vs 3   2.07*
                                                              2 vs 4   2.20*
                                                              2 vs 5   2.51**

Notes: Cell entries are mean values over all groups and problems in an experimental condition; standard deviations appear in parentheses.
* p < 0.05; ** p < 0.01; *** p < 0.001 (all one tailed).

The trend is to move towards greater centralization in both the data exchange phase and the solution phase. However, the slopes vary with the experimental condition, with the min. msg. condition centralizing most rapidly and the speed condition most slowly. For the solution phase, the relationship between the choice of structure and the experimental conditions yields a χ² = 31.0 with 12 degrees of freedom and a significance level of p < 0.002.

Data in Table 1 provide support for part (b) of the third hypothesis. These data can be reworked to show that the frequency of task interaction also varied with both the experimental condition and the task phase, as is shown in Table 4. The minimum message condition had statistically significantly fewer interactions for both problem phases when compared with the other four conditions.

The average number and standard deviations for each of the four dimensions of the fourth general hypothesis are given for each experimental condition in Table 5. The values of the standard deviations exceed the value of the mean for all 20 means. This high variance represents the variability in processes within each group and across conditions. Summary statistics mask the dynamics within a group.

TABLE 5
Average Number and Standard Deviation of Procedural, Verification, Goal, and Social Messages by Experimental Condition

                          Experimental Condition
Dimension of      Speed    Min.Msg.  Error    Vote     $/Hour
Group Behavior     (1)       (2)      (3)      (4)       (5)
Procedural         5.13      1.73     1.73     3.31      2.77
Messages          (8.1)     (4.7)    (4.2)    (5.0)     (4.9)
Verification       1.35      0.63     7.00     6.52      3.63
Messages          (2.5)     (1.4)   (10.2)    (9.3)     (6.78)
Goal Messages      0.96      1.17     0.71     4.94      0.60
                  (2.0)     (2.0)    (1.5)    (6.8)     (2.2)
Social Messages    1.23      0.85     6.33     9.25      1.35
                  (4.1)     (1.4)    (8.1)   (12.8)     (2.1)

Notes: Cell entries are mean values over all groups and problems in an experimental condition; standard deviations appear in parentheses.


Summary statistics do show the basic trends. However, the high variability does suggest some indeterminacy among group processes, structures, performance, and performance information systems. A paper could be written on each group as it struggles from problem to problem.

Questionnaire Results

The questionnaire was administered following the completion of the set of six problems. Figure 3 presents a portion of the questionnaire used in the experiment. These questionnaire results allow a corroboration of the main results.

A performance information system contains both a standard and an incentive to meet it. The subjects were asked to rate their relative concern with the standard and the incentive. The results indicate that both were important, with relatively more emphasis on the standard than the incentive. The mean responses for the speed, minimum messages, error, and vote conditions are respectively 3.5, 4.5, 3.3, and 4.5. Performance data in Tables 1 and 4 are consistent.

1. You were given "Directions For Computing Your Individual Pay Per Problem". Your pay per problem was directly related to how close your performance came to the standard of performance. We would like you to think back on your experience and try to recall whether you were concerned more about meeting the standard or the loss of pay if it was not met. Please circle one number on the scale below:
   where (a) represents paying attention only to meeting the standard of performance,
   where (b) represents paying attention equally to both pay and meeting the standard of performance,
   where (c) represents paying attention only to pay.

   (a) Only to the standard   (b) Equally to both pay and standard   (c) Only to pay
   0  1  2  3  4  5  6  7  8  9

2. Below are three factors you may have considered in deciding to send each message.
   (Scale anchors: Not At All / On Some Messages / On Every Message)
   (a) How often did you consider the effect a message would have on meeting the standard of performance?   0  1  2  3  4  5  6  7  8  9
   (b) How often did you consider the effect a message would have on the group's performance?   0  1  2  3  4  5  6  7  8  9
   (c) How often did you consider the effect a message would have on money earned or lost?   0  1  2  3  4  5  6  7  8  9

3. We are interested in how you rate your satisfaction with the problems, the performance of the group, and your participation in the experiment. Circle one number in each row.
   (Scale anchors: None At All / A Little / A Moderate Amount / A Great Amount / A Very Great Amount)
   Interest in the problems                             0  1  2  3  4  5  6  7  8  9
   Satisfaction with the group's performance            0  1  2  3  4  5  6  7  8  9
   Satisfaction with participation in the experiment    0  1  2  3  4  5  6  7  8  9

4. Rank order the following statements 1 through 4 in terms of their importance to you. The rank of 1 is the most important and the rank of 4 is the least important.
   __ Minimize time taken
   __ Minimize the number of errors made
   __ Minimize the number of messages sent
   __ Send as many organizing messages as possible

FIGURE 3. A Portion of the Questionnaire Administered After Completing the Set of Problems.


The reasoning behind the hypotheses and results implied concern with the effect of a message sent upon the results. Question 2 addresses this concern. The average responses to Question 2(a) are 6.8, 7.6, 6.2, and 5.5 respectively for the speed, minimum messages, error, and vote conditions. These data indicate that subjects were paying attention to the effects their messages had on meeting the standard. The average responses to Question 2(b) are 7.9, 7.9, 6.9, and 6.1 respectively for the four conditions. These results indicate the subjects were concerned by the effect of a message sent upon the performance of the group. Questions 2(a) and 2(b) both mirror the concern with the effect of their messages upon performance. Question 2(c) inquired about the effect on the incentive reward. Here the results are 3.1, 5.5, 2.4, and 4.3 respectively for the speed, minimum messages, error, and vote conditions. These data indicate a much lower concern for the monetary reward than for performance. These results may reflect the small value of the incentives to meet the standard. Meeting the standard is, of course, the means to achieve the maximum reward and is, in this sense, more basic than the reward itself.

On all three parts of Question 2, the minimum message groups had the highest average responses. There is more of a direct means-end linkage in this condition than in the other three. Any excess message caused a direct decline in meeting the standard, in their performance, and in the amount of reward received. Furthermore, Question 2 focuses directly on messages, the object of concern to the minimum message groups.

Question 3 asked about satisfaction with the problems, the performance of the group, and with participation in the experiment. Subjects' responses suggest moderate interest in the problems. The mean responses were 5.6, 5.1, 4.9, and 4.9 for the speed, minimum messages, error, and vote conditions respectively. There was relatively greater satisfaction with the group's performance. The mean responses were 6.2, 6.1, 7.6, and 6.0 respectively for the speed, minimum messages, error, and vote conditions. The satisfaction with the group is statistically significantly greater for the error condition than for the other three. This may be due to the probably greater intellectual challenge of devising processes to reduce errors. Individuals, by and large, were satisfied with their individual performance. The average results were 6.6, 6.6, 7.0, and 6.9 respectively for the four conditions.

Question 4 is a forced ranking of problem attributes related to the four standards. Of interest here is not the ranking but the difference between conditions on the importance of each of the four separate categories. The results are shown in Table 6. The groups of each condition ranked their respective performance standard as more important than the other conditions did. Also, within the speed, minimum messages, and error conditions (columns of Table 6), they ranked the problem attribute related to their respective standard as most important. This was not true in the vote condition, where a very large amount of time was taken to solve the problem, thus forcing problem attributes other than their standard to become important. For example, they may have been saying to themselves, "Will we ever get done?"

Previous discussion of results showed how the groups attempted to meet the performance standard. The results from Question 4 are consistent with the assumption that meeting the standard was an important element in the group's behavior.

3. Discussion of Results

The main conclusion of this study is that the subjects are very sensible in responding to the demands of a performance information system. There exists high variability within each group, dictated by its special problems arising during the problem solving processes in which each group chose to engage. The aggregate data present the trends


TABLE 6
Mean and Standard Deviations of Questionnaire Responses to Question Number 4, Figure 3

                          Experimental Condition
                    Speed    Min.Msg.  Error    Vote     $/Hour   Comparisons
                     (1)       (2)      (3)      (4)       (5)    t(40)
Minimize Time        1.6       3.1      2.3      1.8       1.8    1 vs 2   9.66***
                    (0.8)     (0.6)    (0.7)    (0.9)     (0.9)   1 vs 3   3.88***
                                                                  1 vs 4    .93
                                                                  1 vs 5   1.17
Minimize Errors      1.9       1.8      1.3      1.9       1.8    3 vs 1   4.45***
                    (0.6)     (0.8)    (0.6)    (0.7)     (1.8)   3 vs 2   3.45***
                                                                  3 vs 4   4.49***
                                                                  3 vs 5   3.55***
Minimize             3.1       1.6      3.1      3.5       2.8    2 vs 1   8.05***
Messages            (0.8)     (0.8)    (0.9)    (0.8)     (0.9)   2 vs 3   7.42***
                                                                  2 vs 4  10.39***
                                                                  2 vs 5   6.28***
Send Organizational  3.5       3.5      3.4      2.9       3.7    4 vs 1   2.68**
Messages            (0.9)     (1.0)    (0.8)    (1.1)     (0.8)   4 vs 2   2.47**
                                                                  4 vs 3   2.56**
                                                                  4 vs 5   3.62***

Notes: Cell entries are means with standard deviations in parentheses.
* p < 0.05; ** p < 0.01; *** p < 0.001 (all one tailed).

for each condition. A manager imposing a performance information system across a number of units is interested in the aggregate response. However, the same manager and the subunit managers must be aware of the large variability within each unit. These variabilities are a prime source of unanticipated effects. They are often sensible responses to specific contingent events occurring to and within each group. The implications of the aggregate data are direct. Under the conditions of this study, the groups do respond to the demands of the performance information system.

The achievement of these results is consistent with the following argument. The performance standard presented a baseline with which to compare performance. Discrepancy between the anticipated results and the standard defined the problem which directed the problem solving efforts. Thus the performance information system had two effects: (1) it served to define the problems and (2) it continually reinforced this problem definition.

These results are consistent with arguments made elsewhere [13]. Incentives to be speedy produce speedy behavior. Incentives to minimize the number of messages result in fewer messages. Asked to eliminate errors, the error condition groups have the lowest error rate. When encouraged to engage in micro-political structural change processes, the groups responded with large quantities of votes. In aggregate, the groups responded to the demands of the performance information system by complying. Different performance information systems produce, in aggregate, consistently different behaviors. This basic finding that, under the conditions of this experiment, groups will try to do what is asked of them is encouraging to persons attempting to create performance information systems.

However, the result that subjects will attempt to comply with a performance information system under these experimental conditions places a responsibility on the performance system designer. A well-intentioned performance information system can backfire by reinforcing behavior that was not intended. For example, the voting condition encourages subjects to think and decide about group structure. But once a


decision has been reached, the continuance of the process can be counter-productive in achieving desired performance. It can also destabilize other processes. As seen in Table 5, vote condition groups send more goal messages than any other condition; they send more non-task or social messages than any other condition; they use almost as many verification messages as the error condition groups; and more than any other condition they get involved in many procedural messages. So, an initial means to achieve performance may persist past the point of being useful and actually worsen performance. These results are clearly in line with modern contingency theory. Because it is easier to start than to stop a performance information system, the implications are clear to the designer. The designer must be concerned with both the immediate impact and the longer run social processes set in motion by a performance information system.

The minimum message performance information system, in aggregate, had a reverse effect. It discouraged messages other than those directly needed to solve the problem. These groups sent the fewest organizational, procedural, verification, and social messages. But it took them longer to solve each problem than in the speed or $/hour conditions. It also encouraged the formation of chain and wheel structures.

The minimization of error performance information system, in aggregate, while discouraging errors, stimulated extra messages to avoid errors, producing the greatest number of verification messages. This caution led to the second highest time to complete a problem, the second highest number of messages, and the second highest number of organizational messages. While it did produce the lowest number of errors, the response to the demands of the performance information system set other processes into motion that adversely affected performance.

The speed condition's performance information system did produce the greatest speed. This came at the cost of the largest number of errors and the largest number of procedural messages to straighten out the task processes. The high number of procedural messages mirrors the tendency shown in Figure 2 to avoid selecting a structure such as a wheel or chain that would simplify the procedures to solve the problem. The demands for speed apparently interfered with the group's taking time to think through its organization structure.

The control condition of $/hour did not have an explicit performance information system. It did encourage speed, and its data resemble those of the speed condition more than any other condition. Its performance is intermediate on each of the performance measures and on the other four dimensions of group behavior shown in Table 5. Judgments concerning which condition produced the most effective groups are subjective. But the $/hour groups avoided the excesses of the processes stimulated by the other four performance information systems.

The conditions of this experiment constitute a relatively pure case in which to investigate the behavioral impacts of various performance information systems. Unlike most organizations, these groups had no prior history, no environmental uncertainty, no change in personnel or tasks, and were of short duration. Unlike most performance information systems in organizations, the performance information systems were applied objectively and consistently, they were well understood and documented [9], and the incentives to comply were monetarily trivial.

4. Implications of the Results for Management Practice

Lippitt and Mackenzie [7] published a paper describing authority-task gaps and how administrators seem to select strategies for solving problems arising out of authority-task gaps. The main idea was that organizations do not operate as they are designed, and these structural inconsistencies between what should be and what is are natural


results of the apparent speeds by which the actual organization, the official organization, and the legally mandated organization adapt to change. One of the authors has been involved with approximately 30 organizational design engagements. There has not been a single case where the official organization is closely followed by the actual organization. In some cases the actual organization does not even resemble the official one [11], [12].

These findings are relevant to the design of performance incentive systems, which usually are designed around the official organizational structures. If the actual performance incentive system is designed on the assumption that the organization operates as designed, when the assumption is wrong there is a good possibility that the performance incentive system is inappropriate. This can result in what Kerr [6] calls "the folly of rewarding A and hoping for B."

This phenomenon of the authority-task gap is further exacerbated by relying on the accounting system to generate the numbers upon which the performance incentive system is based. Given the conservatism of accounting departments and their remoteness from the day to day operations, it is entirely possible that the performance incentive system is inaccurate and unresponsive to change.

Furthermore, it is often the human resources and personnel departments that sponsor and administer the concept of performance incentives. Because these departments are often the conservators of the official organization and because they are frequently isolated from the on-going organizational work, they tend to aggravate the inconsistencies between what is rewarded and what should have been rewarded given the actual practices. The need for consistency and fairness in an organization that is not functioning as designed dooms many well-intentioned performance incentive systems to at least partial irrelevance.

Finally, just as environmental change and internal changes are creating authority-task gaps, they also serve to change the assumptions upon which the standards are based. For example, rapidly rising interest rates coupled with a recession can affect the relevance of an incentive system geared to other assumptions about the economy. A mortgage loan officer rewarded by the dollar volume of mortgages might be encouraged to make bad loans. The performance standards for a salesman based on volume and margin goals can result in a sudden drop of income followed by his defection to another wholesaler.

It is the lack of adaptability of many performance incentive systems to the actual organizational and economic realities that causes many of the problems with performance incentive systems. We have learned from these experiments that subjects show sensible responses to performance incentive systems and that they actually try to meet them. The implications of these findings for the real world, where actual performance incentive systems are inconsistent, are clear. Many existing performance incentive systems are inappropriate. Inappropriate performance incentive systems may be counter-productive. More research into the dynamics for adjusting performance incentive systems to the organizational and environmental realities is needed.

References

1. ARGYRIS, C., "The Incompleteness of Social Psychological Theory: Examples from Small Group, Cognitive Consistency, and Attribution Research," Amer. Psychologist, Vol. 24 (1969), pp. 893-908.
2. DORNBUSCH, S. M. AND SCOTT, W. R., Evaluation and the Exercise of Authority, Jossey-Bass, San Francisco, Calif., 1975.
3. GARBARINO, J., "The Impact of Anticipated Reward upon Cross-Age Tutoring," J. Personality Social Psychology, Vol. 32, No. 3 (1975), pp. 421-428.
4. HACKMAN, J. R., BROUSSEAU, K. R. AND WEISS, J. A., "The Interaction of Task Design and Group Performance Strategies in Determining Group Effectiveness," Organizational Behavior and Human Performance, Vol. 16 (1976), pp. 350-365.
5. HOFFMAN, L. R., "Group Problem Solving," in L. Berkowitz (ed.), Advances in Experimental Social Psychology, 2, Academic Press, New York, 1965.
6. KERR, S., "On the Folly of Rewarding A, While Hoping for B," Acad. Management J. (1975), pp. 769-783.
7. LIPPITT, M. E. AND MACKENZIE, K. D., "Authority-Task Problems," Admin. Sci. Quart., Vol. 21 (1976), pp. 643-660.
8. LUZI, A. D., Selected Performance Information Systems and Small Group Problem Solving Behavior, Unpublished Doctoral Dissertation, The University of Kansas, School of Business, 1978.
9. MACKENZIE, K. D., A Theory of Group Structures, Volume I: Basic Theory, Gordon and Breach, New York, 1976.
10. MACKENZIE, K. D., A Theory of Group Structures, Volume II: Empirical Tests, Gordon and Breach, New York, 1976.
11. MACKENZIE, K. D., "A Process Based Measure for the Degree of Hierarchy in a Group: III. Applications to Organizational Design," J. Enterprise Management, Vol. 1 (1978), pp. 175-184.
12. MACKENZIE, K. D., "Concepts and Measurements in Organizational Development," Dimensions of Productivity Research, Vol. 1 (J. Hogan, ed.), American Productivity Center, Houston, Texas, 1981, pp. 233-304.
13. POUNDS, W. F., "The Process of Problem Finding," Industrial Management Rev. (Fall 1969), pp. 1-20.
14. RIDGWAY, V. F., "Dysfunctional Consequences of Performance Measurements," Admin. Sci. Quart., Vol. 1 (1956), pp. 240-247.