
16 PERSONALITY FACTOR MODEL BY CATTELL

Personality traits, and the scales used to measure them, are numerous, and commonality amongst the traits and scales is often difficult to obtain. To curb the confusion, many personality psychologists have attempted to develop a common taxonomy. A notable attempt at developing a common taxonomy is Cattell's Sixteen Personality Factor Model, based upon personality adjectives taken from natural language. Although Cattell contributed much to the use of factor analysis in his pursuit of a common trait language, his theory has not been successfully replicated.

Science has always strived to develop a methodology through which questions are answered using a common set of principles; psychology is no different. In an effort to understand differing personalities in humans, Raymond Bernard Cattell maintained the belief that a common taxonomy could be developed to explain such differences.

Cattell's scholarly training began at an early age when he was awarded admission to King's College at Cambridge University, where he graduated with a Bachelor of Science in Chemistry in 1926 (Lamb, 1997). According to personal accounts, Cattell's socialist attitudes, paired with interests developed after attending a Cyril Burt lecture in the same year, turned his attention to the study of psychology, still regarded then as a philosophy (Horn, 2001). Following the completion of his doctoral studies in psychology in 1929, Cattell lectured at the University of Exeter, where, in 1930, he made his first contribution to the science of psychology with the Cattell Intelligence Tests (Scales 1, 2, and 3). During fellowship studies in 1932, he turned his attention to the measurement of personality, focusing on the understanding of economic, social, and moral problems and on how objective psychological research on moral decisions could address them (Lamb, 1997). Cattell's most renowned contribution to the science of psychology also pertains to the study of personality. Cattell's 16 Personality Factor Model aims to construct a common taxonomy of traits, using a lexical approach to narrow natural language down to a standard set of applicable personality adjectives. Though his theory has never been replicated, his contributions to factor analysis have been exceedingly valuable to the study of psychology.

Origins of the 16 Personality Factor Model

In developing a common taxonomy of traits for the 16 Personality Factor Model, Cattell relied heavily on the previous work of scientists in the field. Previous development of a list of personality descriptors by Allport and Odbert in 1936, and Baumgarten's similar work in German in 1933, focused on a lexical approach to the dimensions of personality. Since psychology, like most other sciences, requires a descriptive model to be effective, the construction of a common taxonomy is necessary to explain personality simply (John, 1990). Already focused on the understanding of personality as it pertains to psychology, Cattell set out to narrow the work already completed by his predecessors. The goal of the research was to achieve integration of language and personality, that is, to identify the personality-relevant adjectives in the language that relate to specific traits.

The lexical approach to language creates the foundation of a shared taxonomy of natural-language personality description (John, 1990). Historically, psychologists relied on such natural language to aid in the identification of personality attributes for such a taxonomy. The first step in such a process was to narrow all adjectives within a language to those relating to personality descriptions, as this provided the researchers with a base guiding the lexical approach. The adjectives available within a language form a limited set of variables, having developed from the spoken word as the language evolved. Since there is a finite set of adjectives in a language, narrowing the variables into base personality categories becomes necessary, as multiple adjectives can express similar meanings within the language (John, 1999).

In the process of developing a taxonomy, a process that had taken predecessors sixty years up to this point, Allport and Odbert systematized thousands of personality attributes in 1936. They recognized four categories of adjectives in developing the taxonomy: personality traits, temporary states, highly evaluative judgments of personal conduct and reputation, and physical characteristics. Personality traits are defined as "generalized and personalized determining tendencies--consistent and stable modes of an individual's adjustment to their environment" (John, 1999), as stated by Allport and Odbert in their research. Each adjective relative to personality falls within one of these categories, aiding in the identification of major personality categories and creating a primitive taxonomy, which many psychologists and researchers would elaborate and build upon later. Norman (1967) divided the same limited set of adjectives into seven categories, which, like Allport and Odbert's categories, were all mutually exclusive (John, 1999). Despite this, the work of both parties has been criticized as containing ambiguous category boundaries, resulting in the general conviction that such boundaries should be abolished and that the work has less significance than earlier judged.

Factor Analysis

Introduced and established by Pearson in 1901 and by Spearman three years thereafter, factor analysis is a process by which large clusters and groupings of data are replaced and represented by factors. As variables are reduced to factors, relationships between the factors begin to define the relationships in the variables they represent (Goldberg & Digman, 1994). In the early stages of the method's development, there was little widespread use, due largely to the immense amount of hand calculation required to determine accurate results, often spanning periods of several months. Later, a mathematical foundation was developed, aiding the process and contributing to the later popularity of the methodology. Today, the power of computers makes factor analysis a simple process compared with the early 1900s, when only the most devoted researchers could use it to attain accurate results (Goldberg & Digman, 1994).

In performing a factor analysis, the single most important consideration is the selection of variables; restricting the analysis to a single, well-represented domain yields the most accurate outcome (Goldberg & Digman, 1994). Exploratory factor analysis governs a single domain, while confirmatory factor analysis, often less accurate and more difficult to calculate, governs several domains. In terms of variables, it is unusual to see a factor analysis with fewer than 50 variables; with fewer, another statistical method may be a better, easier way to process the information. A standard sample size for such an analysis ranges from 500 to 1,000 participants (Goldberg & Digman, 1994).
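To make the reduction step concrete, here is a minimal sketch in Python using scikit-learn's FactorAnalysis estimator. The synthetic data and the choices of 60 variables, 800 participants, and 3 factors are illustrative assumptions only; this is not a reconstruction of Cattell's actual analyses.

```python
# Minimal factor-analysis sketch: reduce many observed rating variables to a
# few latent factors and inspect the loadings.
# Assumptions: synthetic data, 60 variables, 800 "participants", 3 factors.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_participants, n_variables, n_factors = 800, 60, 3

# Simulate ratings driven by a few underlying (source-like) factors plus noise,
# mimicking clusters of correlated adjective ratings.
latent = rng.normal(size=(n_participants, n_factors))
true_loadings = rng.normal(size=(n_factors, n_variables))
ratings = latent @ true_loadings + rng.normal(scale=0.5, size=(n_participants, n_variables))

fa = FactorAnalysis(n_components=n_factors, random_state=0)
fa.fit(ratings)

# Each row of components_ is one factor; large absolute loadings show which
# observed variables cluster together on that factor.
for i, loadings in enumerate(fa.components_):
    top = np.argsort(np.abs(loadings))[::-1][:5]
    print(f"Factor {i + 1}: strongest variables -> {top.tolist()}")
```

The clusters of correlated observed variables play the role of surface traits in this toy setup, while the extracted factors correspond, loosely, to underlying source traits.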

Cattell, another champion of the factor analysis methodology, believed that there are three major sources of data when it comes to research concerning personality traits (Hall & Lindzey, 1978). L-Data, also referred to as the life record, could include actual records of a person's behavior in society, such as court records. Cattell, however, gathered the majority of L-Data from ratings given by peers. Self-rating questionnaires, also known as Q-Data, gathered data by allowing participants to assess their own behaviors. The third source of Cattell's data, the objective test, also known as T-Data, created a unique situation in which the subject is unaware of the personality trait being measured (Pervin & John, 2001).

With the intent of generality, Cattell's sample population was representative of several age groups, including adolescents, adults, and children, as well as several countries, including the U.S., Britain, Australia, New Zealand, France, Italy, Germany, Mexico, Brazil, Argentina, India, and Japan (Hall & Lindzey, 1978).

Through factor analysis, Cattell identified what he referred to as surface and source traits. Surface traits represent clusters of correlated variables, and source traits represent the underlying structure of the personality. Cattell considered source traits much more important in understanding personality than surface traits (Hall & Lindzey, 1978). The identified source traits became the primary basis for the 16 PF Model. The 16 Personality Factor Model aims to measure personality based upon sixteen source traits. Table 1 summarizes the surface descriptors associated with each source trait at the low and high ends of its range.

Critical Review

Although Cattell contributed much to personality research through the use of factor analysis, his theory is greatly criticized. The most apparent criticism of Cattell's 16 Personality Factor Model is the fact that, despite many attempts, his theory has never been entirely replicated. In 1971, Howarth and Brown's factor analysis of the 16 Personality Factor Model found 10 factors that failed to relate to items present in the model. Howarth and Brown concluded "that the 16 PF does not measure the factors which it purports to measure at a primary level" (Eysenck & Eysenck, 1987). Studies conducted by Sell et al. (1970) and by Eysenck and Eysenck (1969) also failed to verify the 16 Personality Factor Model's primary level (Noller, Law, & Comrey, 1987). The reliability of Cattell's self-report data has also been questioned by researchers (Schuerger, Zarrella, & Hotz, 1989).

Cattell and colleagues responded to the critics by maintaining the stance that the studies failed to replicate the primary structure of the 16 Personality Factor Model because they were not conducted according to Cattell's methodology. However, using Cattell's exact methodology, Kline and Barrett (1983) were able to verify only four of the sixteen primary factors (Noller, Law, & Comrey, 1987). In response to Eysenck's criticism, Cattell himself published the results of his own factor analysis of the 16 Personality Factor Model, which also failed to verify the hypothesized primary factors (Eysenck, 1987).


Despite all the criticism of Cattell's hypothesis, his empirical findings led the way for investigation and later discovery of the 'Big Five' dimensions of personality. Fiske (1949) and Tupes and Christal (1961) simplified Cattell's variables to five recurrent factors known as extraversion or surgency, agreeableness, conscientiousness, emotional stability, and intellect or openness (Pervin & John, 1999).

Cattell's Sixteen Personality Factor Model has been greatly criticized by many researchers, mainly because of the inability to replicate it. More than likely, errors in computation occurred during Cattell's factor analysis, resulting in skewed data and thus the inability to replicate. Since computer programs for factor analysis did not exist in Cattell's time and calculations were done by hand, it is not surprising that some errors occurred. However, through investigation into the validity of Cattell's model, researchers did discover the Big Five factors, which have been monumental in understanding personality as we know it today.

Table 1. Primary Factors and Descriptors in Cattell's 16 Personality Factor Model (adapted from Conn & Rieke, 1994)

Warmth (A)
Low range: Impersonal, distant, cool, reserved, detached, formal, aloof (Schizothymia)
High range: Warm, outgoing, attentive to others, kindly, easy-going, participating, likes people (Affectothymia)

Reasoning (B)
Low range: Concrete thinking, lower general mental capacity, less intelligent, unable to handle abstract problems (Lower Scholastic Mental Capacity)
High range: Abstract-thinking, more intelligent, bright, higher general mental capacity, fast learner (Higher Scholastic Mental Capacity)

Emotional Stability (C)
Low range: Reactive emotionally, changeable, affected by feelings, emotionally less stable, easily upset (Lower Ego Strength)
High range: Emotionally stable, adaptive, mature, faces reality calmly (Higher Ego Strength)

Dominance (E)
Low range: Deferential, cooperative, avoids conflict, submissive, humble, obedient, easily led, docile, accommodating (Submissiveness)
High range: Dominant, forceful, assertive, aggressive, competitive, stubborn, bossy (Dominance)

Liveliness (F)
Low range: Serious, restrained, prudent, taciturn, introspective, silent (Desurgency)
High range: Lively, animated, spontaneous, enthusiastic, happy-go-lucky, cheerful, expressive, impulsive (Surgency)

Rule-Consciousness (G)
Low range: Expedient, nonconforming, disregards rules, self-indulgent (Low Super Ego Strength)
High range: Rule-conscious, dutiful, conscientious, conforming, moralistic, staid, rule-bound (High Super Ego Strength)

Social Boldness (H)
Low range: Shy, threat-sensitive, timid, hesitant, intimidated (Threctia)
High range: Socially bold, venturesome, thick-skinned, uninhibited (Parmia)

Sensitivity (I)
Low range: Utilitarian, objective, unsentimental, tough-minded, self-reliant, no-nonsense, rough (Harria)
High range: Sensitive, aesthetic, sentimental, tender-minded, intuitive, refined (Premsia)

Vigilance (L)
Low range: Trusting, unsuspecting, accepting, unconditional, easy (Alaxia)
High range: Vigilant, suspicious, skeptical, distrustful, oppositional (Protension)

Abstractedness (M)
Low range: Grounded, practical, prosaic, solution-oriented, steady, conventional (Praxernia)
High range: Abstract, imaginative, absent-minded, impractical, absorbed in ideas (Autia)

Privateness (N)
Low range: Forthright, genuine, artless, open, guileless, naive, unpretentious, involved (Artlessness)
High range: Private, discreet, nondisclosing, shrewd, polished, worldly, astute, diplomatic (Shrewdness)

Apprehension (O)
Low range: Self-assured, unworried, complacent, secure, free of guilt, confident, self-satisfied (Untroubled)
High range: Apprehensive, self-doubting, worried, guilt-prone, insecure, worrying, self-blaming (Guilt Proneness)

Openness to Change (Q1)
Low range: Traditional, attached to the familiar, conservative, respecting traditional ideas (Conservatism)
High range: Open to change, experimental, liberal, analytical, critical, free-thinking, flexible (Radicalism)

Self-Reliance (Q2)
Low range: Group-oriented, affiliative, a joiner and follower, dependent (Group Adherence)
High range: Self-reliant, solitary, resourceful, individualistic, self-sufficient (Self-Sufficiency)

Perfectionism (Q3)
Low range: Tolerates disorder, unexacting, flexible, undisciplined, lax, self-conflict, impulsive, careless of social rules, uncontrolled (Low Integration)
High range: Perfectionistic, organized, compulsive, self-disciplined, socially precise, exacting will power, control, self-sentimental (High Self-Concept Control)

Tension (Q4)
Low range: Relaxed, placid, tranquil, torpid, patient, composed, low drive (Low Ergic Tension)
High range: Tense, high energy, impatient, driven, frustrated, overwrought, time-driven (High Ergic Tension)
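For readers who want to work with the table programmatically, the sketch below encodes a few of the factors as a simple lookup and maps a score to the low- or high-range descriptors. The sten banding used here (1-3 low, 4-7 average, 8-10 high) is an assumption for illustration only; it is not specified in the table above.

```python
# Illustrative only: a lookup of low/high descriptors for a few 16PF factors,
# with an assumed sten banding (1-3 low, 4-7 average, 8-10 high).
FACTORS = {
    "A": ("Warmth", "reserved, impersonal, distant", "warm, outgoing, attentive to others"),
    "C": ("Emotional Stability", "reactive, easily upset", "emotionally stable, adaptive, mature"),
    "Q4": ("Tension", "relaxed, placid, patient", "tense, impatient, driven"),
}

def describe(factor: str, sten: int) -> str:
    """Return a rough descriptive band for an assumed 1-10 sten score."""
    name, low, high = FACTORS[factor]
    if sten <= 3:
        return f"{name}: low range ({low})"
    if sten >= 8:
        return f"{name}: high range ({high})"
    return f"{name}: average range"

print(describe("A", 2))   # Warmth: low range (...)
print(describe("Q4", 9))  # Tension: high range (...)
```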


ATTITUDE COMPETENCE

If you want to be successful you should try to absorb as much knowledge as possible, right? Well, not quite. At least not only! I believe success, whether we talk at a professional or personal level, derives from three factors: knowledge, competencies and attitudes. Most people, however, pay excessive attention to the knowledge component while neglecting the development of the other two.

Before discussing the argument further we need to define what we mean by each of these factors. Knowledge is practical information gained through learning, experience or association. Examples of knowledge:

• second-degree equations
• human anatomy
• the rules of Monopoly
• how to change a wheel
• the capital of Zimbabwe (Harare; if nothing else, you learned this reading this article…)

Competencies, on the other hand, refer to the ability to perform specific tasks. Examples of competencies:

• ability to communicate effectively
• ability to write clearly
• ability to play an instrument
• ability to solve problems
• ability to dance


The last one, attitude, involves how people react to certain situations and how they behave in general. Examples of attitudes:

• being proactive
• being able to get along with other people
• being optimistic
• being critical towards other people
• being arrogant

Now, if you take a look at the picture below, you will see that attitudes are the base of the pyramid. One should, therefore, focus on developing the right attitudes before moving on to competencies and knowledge. If you take a look at the five attitudes we used as examples, it is clear that one would desire to develop the first three. Distinguishing between a desirable and a problematic attitude is actually an easy task.

Why then do we fail to dedicate enough energy to the development of valuable attitudes? First, because we might think that attitude is determined by genetics, meaning that some people are born optimistic while others are naturally pessimistic, and there is nothing one can do to change it. This is far from the truth. While most people are naturally inclined to behave in certain ways, we can still radically change or develop specific attitudes at will. Developing or changing an attitude will require much more work than developing a competence or gaining some knowledge, but that is exactly why it is also more valuable.

The second reason why people fail to focus on attitudes is that they are not aware of the benefits they would derive from doing so. Common sense says that the more knowledgeable someone is, the more successful he will be. While this might be true, it is only so if that person also has the right attitudes.

After developing the attitudes (which is a lifelong process, by the way) one should focus on competencies. Competencies come before knowledge because they are flexible and can be applied to many different situations.

Consider two different men, John and Mark, working for a financial services company. Both of them are eager to succeed, so they spend lots of time trying to grow professionally. John uses his time gaining as much knowledge as possible; he studies balance sheets, financial reports, accounting practices and the like. Mark, on the other hand, gets the knowledge that is necessary to carry out his job. Other than that, he uses his time to improve his writing skills, his ability to solve problems, to come up with innovative ideas and so on. Should the financial services sector enter a downturn some day, who do you think will have a harder time? Yeah, I am sure you have guessed it.


The last part of the pyramid is formed by knowledge. Now, when I argue that one should develop attitudes and competencies before acquiring knowledge, I am not saying that knowledge is not important. Far from it: knowledge is essential. But if you consider the information and communication technologies revolution, you can see that virtually anyone in the world has access to all the information ever produced. I know that information and knowledge are two different things, but the process of transforming one into the other is not that complex. What I am saying, therefore, is that knowledge alone will not be sufficient. It does not represent a competitive advantage per se.

Summing up, success at a personal or professional level will inevitably derive from three factors: attitudes, competencies and knowledge. Most people pay excessive attention to the knowledge component while neglecting the development of competencies and attitudes. Make sure you are focusing on all three components; it is the best strategy in the long run.

CAPABILITY MATURITY MODEL

The Capability Maturity Model (CMM) is a service mark owned by Carnegie Mellon University (CMU) and refers to a development model elicited from actual data. The data was collected from organizations that contracted with the U.S. Department of Defense, which funded the research, and became the foundation from which CMU created the Software Engineering Institute (SEI). Like any model, it is an abstraction of an existing system. When it is applied to an existing organization's software development processes, it allows an effective approach toward improving them. Eventually it became clear that the model could be applied to other processes. This gave rise to a more general concept that is applied to business processes and to developing people.

The Capability Maturity Model (CMM) was originally developed as a tool for objectively assessing the ability of government contractors' processes to perform a contracted software project. The CMM is based on the process maturity framework first described in the 1989 book Managing the Software Process by Watts Humphrey. It was later published in a report in 1993 (Technical Report CMU/SEI-93-TR-024 ESC-TR-93-177 February 1993, Capability Maturity Model SM for Software, Version 1.1) and as a book by the same authors in 1995.

Though the CMM comes from the field of software development, it is used as a general model to aid in improving organizational business processes in diverse areas; for example, in software engineering, systems engineering, project management, software maintenance, risk management, system acquisition, information technology (IT), services, business processes generally, and human capital management. The CMM has been used extensively worldwide in government offices, commerce, industry, and software development organizations.

HISTORY

Prior need for software processes

In the 1970s, the use of computers grew more widespread, more flexible, and less costly. Organizations began to adopt computerized information systems, and the demand for software development grew significantly. The processes for software development were in their infancy, with few standard or "best practice" approaches defined. As a result, the growth was accompanied by growing pains: project failure was common, the field of computer science was still in its infancy, and the ambitions for project scale and complexity exceeded the market capability to deliver. Individuals such as Edward Yourdon, Larry Constantine, Gerald Weinberg, Tom DeMarco, and David Parnas began to publish articles and books with research results in an attempt to professionalize the software development process.

In the 1980s, several US military projects involving software subcontractors ran over-budget and were completed far later than planned, if at all. In an effort to determine why this was occurring, the United States Air Force funded a study at the SEI.

Precursor

The Quality Management Maturity Grid was developed by Philip B. Crosby in his book "Quality is Free".[1]

The first application of a staged maturity model to IT was not by CMM/SEI, but rather by Richard L. Nolan, who published the stages of growth model for IT organizations in 1973.[2]

Watts Humphrey began developing his process maturity concepts during the later stages of his 27-year career at IBM.

Development at SEI


Active development of the model by the US Department of Defense Software Engineering Institute (SEI) began in 1986, when Humphrey joined the Software Engineering Institute, located at Carnegie Mellon University in Pittsburgh, Pennsylvania, after retiring from IBM. At the request of the U.S. Air Force, he began formalizing his Process Maturity Framework to aid the U.S. Department of Defense in evaluating the capability of software contractors as part of awarding contracts.

The result of the Air Force study was a model for the military to use as an objective evaluation of software subcontractors' process capability maturity. Humphrey based this framework on the earlier Quality Management Maturity Grid developed by Philip B. Crosby in his book "Quality is Free".[1] However, Humphrey's approach differed because of his unique insight that organizations mature their processes in stages, based on solving process problems in a specific order. Humphrey based his approach on the staged evolution of a system of software development practices within an organization, rather than measuring the maturity of each separate development process independently. The CMM has thus been used by different organizations as a general and powerful tool for understanding and then improving general business process performance. Watts Humphrey's Capability Maturity Model (CMM) was published in 1988[3] and as a book in 1989, in Managing the Software Process.[4]

Organizations were originally assessed using a process maturity questionnaire and a Software Capability Evaluation method devised by Humphrey and his colleagues at the Software Engineering Institute (SEI). The full representation of the Capability Maturity Model as a set of defined process areas and practices at each of the five maturity levels was initiated in 1991, with Version 1.1 being completed in January 1993.[5] The CMM was published as a book[6] in 1995 by its primary authors, Mark C. Paulk, Charles V. Weber, Bill Curtis, and Mary Beth Chrissis.

Superseded by CMMI

The CMM model proved useful to many organizations, but its application in software development has sometimes been problematic. Applying multiple models that are not integrated within and across an organization could be costly in training, appraisals, and improvement activities. The Capability Maturity Model Integration (CMMI) project was formed to sort out the problem of using multiple CMMs. For software development processes, the CMM has been superseded by Capability Maturity Model Integration (CMMI), though the CMM continues to be a general theoretical process capability model used in the public domain.

Adapted to other processes

The CMM was originally intended as a tool to evaluate the ability of government contractors to perform a contracted software project. Though it comes from the area of software development, it can be, has been, and continues to be widely applied as a general model of the maturity of processes (e.g., IT service management processes) in IS/IT (and other) organizations.

Maturity model


A maturity model can be viewed as a set of structured levels that describe how well the behaviours, practices and processes of an organisation can reliably and sustainably produce required outcomes. A maturity model may provide, for example:

• a place to start
• the benefit of a community's prior experiences
• a common language and a shared vision
• a framework for prioritizing actions
• a way to define what improvement means for your organization

A maturity model can be used as a benchmark for comparison and as an aid to understanding - for example, for comparative assessment of different organizations where there is something in common that can be used as a basis for comparison. In the case of the CMM, for example, the basis for comparison would be the organizations' software development processes.

Structure

The Capability Maturity Model involves the following aspects:

Maturity Levels: a 5-level process maturity continuum - where the uppermost (5th) level is a notional ideal state where processes would be systematically managed by a combination of process optimization and continuous process improvement.

Key Process Areas: a Key Process Area (KPA) identifies a cluster of related activities that, when performed together, achieve a set of goals considered important.

Goals: the goals of a key process area summarize the states that must exist for that key process area to have been implemented in an effective and lasting way. The extent to which the goals have been accomplished is an indicator of how much capability the organization has established at that maturity level. The goals signify the scope, boundaries, and intent of each key process area.

Common Features: common features include practices that implement and institutionalize a key process area. There are five types of common features: Commitment to Perform, Ability to Perform, Activities Performed, Measurement and Analysis, and Verifying Implementation.

Key Practices: The key practices describe the elements of infrastructure and practice that contribute most effectively to the implementation and institutionalization of the KPAs.

Levels

There are five levels defined along the continuum of the CMM[7] and, according to the SEI: "Predictability, effectiveness, and control of an organization's software processes are believed to improve as the organization moves up these five levels. While not rigorous, the empirical evidence to date supports this belief."

1. Initial (chaotic, ad hoc, individual heroics) - the starting point for use of a new process.
2. Managed - the process is managed in accordance with agreed metrics.
3. Defined - the process is defined/confirmed as a standard business process, and decomposed to levels 0, 1 and 2 (the latter being Work Instructions).
4. Quantitatively managed
5. Optimizing - process management includes deliberate process optimization/improvement.

Within each of these maturity levels are Key Process Areas (KPAs) which characterise that level, and for each KPA there are five definitions identified:

1. Goals
2. Commitment
3. Ability
4. Measurement
5. Verification

The KPAs are not necessarily unique to CMM, representing, as they do, the stages that organizations must go through on the way to becoming mature. The CMM provides a theoretical continuum along which process maturity can be developed incrementally from one level to the next. Skipping levels is not allowed/feasible.

N.B.: The CMM was originally intended as a tool to evaluate the ability of government contractors to perform a contracted software project. It has been used for and may be suited to that purpose, but critics pointed out that process maturity according to the CMM was not necessarily mandatory for successful software development. There were/are real-life examples where the CMM was arguably irrelevant to successful software development; these examples include many shrinkwrap companies (also called commercial-off-the-shelf or "COTS" firms or software package firms). Such firms would have included, for example, Claris, Apple, Symantec, Microsoft, and Lotus. Though these companies may have successfully developed their software, they would not necessarily have considered or defined or managed their processes as the CMM describes for level 3 or above, and so would have fitted level 1 or 2 of the model. This did not - on the face of it - frustrate the successful development of their software.

Level 1 - Initial (Chaotic)
It is characteristic of processes at this level that they are (typically) undocumented and in a state of dynamic change, tending to be driven in an ad hoc, uncontrolled and reactive manner by users or events. This provides a chaotic or unstable environment for the processes.

Level 2 - Repeatable
It is characteristic of processes at this level that some processes are repeatable, possibly with consistent results. Process discipline is unlikely to be rigorous, but where it exists it may help to ensure that existing processes are maintained during times of stress.

Level 3 - Defined
It is characteristic of processes at this level that there are sets of defined and documented standard processes established and subject to some degree of improvement over time. These standard processes are in place (i.e., they are the AS-IS processes) and used to establish consistency of process performance across the organization.

Level 4 - Managed
It is characteristic of processes at this level that, using process metrics, management can effectively control the AS-IS process (e.g., for software development). In particular, management can identify ways to adjust and adapt the process to particular projects without measurable losses of quality or deviations from specifications. Process Capability is established from this level.

Level 5 - Optimizing
It is a characteristic of processes at this level that the focus is on continually improving process performance through both incremental and innovative technological changes/improvements. At maturity level 5, processes are concerned with addressing statistical common causes of process variation and changing the process (for example, to shift the mean of the process performance) to improve process performance. This would be done at the same time as maintaining the likelihood of achieving the established quantitative process-improvement objectives.

Software process framework

The software process framework documented here is intended to guide those wishing to assess an organization's or project's consistency with the CMM. For each maturity level there are five checklist types:

Policy - Describes the policy contents and KPA goals recommended by the CMM.

Standard - Describes the recommended content of select work products described in the CMM.

Process - Describes the process information content recommended by the CMM. The process checklists are further refined into checklists for: roles, entry criteria, inputs, activities, outputs, exit criteria, reviews and audits, work products managed and controlled, measurements, documented procedures, training, and tools.

Procedure - Describes the recommended content of documented procedures described in the CMM.

Level overview - Provides an overview of an entire maturity level. The level overview checklists are further refined into checklists for: KPA purposes (Key Process Areas), KPA goals, policies, standards, process descriptions, procedures, training, tools, reviews and audits, work products managed and controlled, and measurements.

Overview

CMMI is a process improvement approach that provides organizations with the essential elements of effective processes that ultimately improve their performance. CMMI can be used to guide process improvement across a project, a division, or an entire organization. It helps integrate traditionally separate organizational functions, set process improvement goals and priorities, provide guidance for quality processes, and provide a point of reference for appraising current processes.

The benefits you can expect from using CMMI include the following:

• Your organization's activities are explicitly linked to your business objectives.
• Your visibility into the organization's activities is increased to help you ensure that your product or service meets the customer's expectations.
• You learn from new areas of best practice (e.g., measurement, risk).

CMMI is being adopted worldwide, including in North America, Europe, Asia, Australia, South America, and Africa. This kind of response has substantiated the SEI's commitment to CMMI. You can use CMMI in three different areas of interest:

• Product and service acquisition (CMMI for Acquisition model)
• Product and service development (CMMI for Development model)
• Service establishment, management, and delivery (CMMI for Services model)

CMMI models are collections of best practices that you can compare to your organization's best practices and guide improvement to your processes. A formal comparison of a CMMI model to your processes is called an appraisal. The Standard CMMI Appraisal Method for Process Improvement (SCAMPI) incorporates the best ideas of several process improvement appraisal methods.
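A formal SCAMPI appraisal is far more involved than this, but the basic idea of comparing an organization's practices against the practices a model expects can be sketched as a simple gap analysis. The practice names below are hypothetical placeholders, not actual CMMI practice statements.

```python
# Toy gap analysis: compare the practices an organization performs against the
# practices a model expects. Practice names are hypothetical placeholders,
# not actual CMMI practice statements.
model_practices = {
    "plan the project", "track progress against the plan",
    "manage requirements", "control work products",
}
organization_practices = {
    "plan the project", "manage requirements",
}

satisfied = model_practices & organization_practices
gaps = model_practices - organization_practices

print("Satisfied:", sorted(satisfied))
print("Gaps to address:", sorted(gaps))
```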

Capability Maturity Model (CMM) - The Software Engineering Institute's model of software engineering that specifies five levels of maturity of the processes of a software organisation. CMM offers a framework for evolutionary process improvement. Originally applied to software development (SE-CMM), it has been expanded to cover other areas, including Human Resources and Software Acquisition.

The levels - foci - and key process areas are:

Level 1 Initial - Heroes - None.


Level 2 Repeatable - Project Management - Software Project Planning, Software Project Tracking and Oversight, Software Subcontract Management, Software Quality Assurance, Software Configuration Management, Requirements Management.

Level 3 Defined - Engineering Process - Organisation Process Focus, Organisation Process Definition, Peer Reviews, Training Program, Inter-group Coordination, Software Product Engineering, Integrated Software Management.

Level 4 Managed - Product and Process Quality - Software Quality Management, Quantitative Process Management.

Level 5 Optimising - Continuous Improvement - Process Change Management, Technology Change Management, Defect Prevention.
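The level-to-key-process-area mapping listed above can be captured in a small lookup structure, as in the sketch below; the dictionary layout and helper function are illustrative choices, not part of the model itself.

```python
# The CMM levels, their focus, and key process areas from the list above,
# encoded as a simple lookup table (the structure itself is illustrative).
CMM_LEVELS = {
    1: ("Initial", "Heroes", []),
    2: ("Repeatable", "Project Management", [
        "Software Project Planning", "Software Project Tracking and Oversight",
        "Software Subcontract Management", "Software Quality Assurance",
        "Software Configuration Management", "Requirements Management"]),
    3: ("Defined", "Engineering Process", [
        "Organisation Process Focus", "Organisation Process Definition",
        "Peer Reviews", "Training Program", "Inter-group Coordination",
        "Software Product Engineering", "Integrated Software Management"]),
    4: ("Managed", "Product and Process Quality", [
        "Software Quality Management", "Quantitative Process Management"]),
    5: ("Optimising", "Continuous Improvement", [
        "Process Change Management", "Technology Change Management",
        "Defect Prevention"]),
}

def key_process_areas(level: int) -> list[str]:
    """Return the key process areas worked on at a given maturity level."""
    name, focus, kpas = CMM_LEVELS[level]
    return kpas

print(key_process_areas(2))
```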

BUSINESS EXCELLENCE MODEL

Business excellence is the systematic use of quality management principles and tools in business management, with the goal of improving performance based on the principles of customer focus, stakeholder value, and process management. Key practices in business excellence applied across functional areas in an enterprise include continuous and breakthrough improvement, preventative management, and management by facts. Some of the tools used are the balanced scorecard, Lean, the Six Sigma statistical tools, process management, and project management.

Business excellence, as described by the European Foundation for Quality Management (EFQM), refers to "outstanding practices in managing the organization and achieving results, all based on a set of eight fundamental concepts." These concepts are "results orientation, customer focus, leadership and constancy of purpose, management by processes and facts, people development and involvement, continuous learning, innovation and improvement; partnership development, and public responsibility."

In general, business excellence models have been developed by national bodies as a basis for award programs. For most of these bodies, the awards themselves are secondary in importance to the widespread adoption of the concepts of business excellence, which ultimately leads to improved national economic performance. By far the majority of organizations that use these models do so for self-assessment, through which they may identify improvement opportunities, areas of strength, and ideas for future organizational development. Users of the EFQM Excellence Model, for instance, do so for the following purposes: self-assessment, strategy formulation, visioning, project management, supplier management, and mergers. The most popular and influential model in the western world is the Malcolm Baldrige National Quality Award Model (also known as the Baldrige model, the Baldrige Criteria, or the Criteria for Performance Excellence), launched by the US government. More than 60 national and state/regional awards base their frameworks upon the Baldrige criteria.


When used as a basis for an organization's improvement culture, the business excellence criteria within the models broadly channel and encourage the use of best practices into areas where their effect will be most beneficial to performance. When used simply for self-assessment, the criteria can clearly identify strong and weak areas of management practice so that tools such as benchmarking can be used to identify best practice and enable the gaps to be closed. These critical links between business excellence models, best practice, and benchmarking are fundamental to the success of the models as tools of continuous improvement.

The essence of the methodology is to concentrate on a balanced blend of focus between processes, technologies and resources (human, financial, etc.). The main idea is that none of those elements can be improved by itself; each needs to be balanced and improved in a blend with the other two.

Process Phases - Because it blends different methodologies that have specific phases within their processes, Business Excellence drives results through four well-defined phases: Discover/Define, Measure/Analyze, Create/Optimize, Monitor/Control. Those phases evolve continuously within the ever-growing organization, driving constant monitoring, optimization and re-evaluation.

Overview of the Excellence Model

The Model is an over-arching, non-prescriptive framework based on nine criteria. Five of these are 'Enablers' and four are 'Results'. The 'Enabler' criteria cover what an organisation does. The 'Results' criteria cover what an organisation achieves. 'Results' are caused by 'Enablers'.


The Model, which recognises there are many approaches to achieving sustainable excellence in all aspects of performance, is based on the premise that: 

Excellent results with respect to Performance, Customers, People and Society are achieved through Leadership driving Policy and Strategy, which is delivered through People, Partnerships and Resources, and Processes.

The arrows emphasise the dynamic nature of the model. They show innovation and learning helping to improve enablers that in turn lead to improved results.

Model structure

The Model's nine boxes, shown above, represent the criteria against which to assess an organisation's progress towards excellence. Each of the nine criteria has a definition, which explains the high level meaning of that criterion.

To develop the high level meaning further each criterion is supported by a number of sub-criteria. Sub-criteria pose a number of questions that should be considered in the course of an assessment. 

Below each sub-criterion are lists of possible areas to address. The areas to address are not mandatory nor are they exhaustive lists but are intended to further exemplify the meaning of the sub-criterion. 

Enablers
• Leadership
• Policy & Strategy
• People
• Partnerships & Resources
• Processes

Results
• Customer Results
• People Results
• Society Results
• Key Performance Results

The Fundamental Concepts of Excellence

The EFQM Model is a non-prescriptive framework that recognises there are many approaches to achieving sustainable excellence. Within this non-prescriptive approach there are some Fundamental Concepts which underpin the EFQM Model. These are expressed below.

There is no significance intended in the order of the concepts. The list is not meant to be exhaustive and they will change as excellent organisations develop and improve.

Results Orientation
Excellence is achieving results that delight all the organisation's stakeholders.

Customer Focus
Excellence is creating sustainable customer value.

Leadership & Constancy of Purpose
Excellence is visionary and inspirational leadership, coupled with constancy of purpose.

Management by Processes & Facts
Excellence is managing the organisation through a set of interdependent and interrelated systems, processes and facts.

People Development & Involvement
Excellence is maximising the contribution of employees through their development and involvement.

Continuous Learning, Innovation & Improvement
Excellence is challenging the status quo and effecting change by using learning to create innovation and improvement opportunities.

Partnership Development
Excellence is developing and maintaining value-adding partnerships.

Corporate Social Responsibility
Excellence is exceeding the minimum regulatory framework in which the organisation operates and striving to understand and respond to the expectations of its stakeholders in society.

RADAR

At the heart of the self-assessment process lies the logic known as RADAR, which has the following elements: Results, Approach, Deployment, Assessment and Review.


The logic of RADAR® states that an organisation should:

• Determine the Results it is aiming for.

• Implement an integrated set of sound Approaches to deliver the required results.

• Deploy the approaches systematically.

• Assess and Review the effectiveness of the approaches.

Why organizations should have a Business Excellence model?

1) Highly competitive environment
2) To achieve the company's Mission & Vision
3) To achieve excellent business results & customer delight
4) To improve brand image
5) CEO's passion for excellence
6) To build a great organization
7) To achieve break-through improvements

A business excellence model is the means of achieving all the above. Organisations can develop a customized business excellence model and/or adopt one or more of the concepts mentioned below.

1) Hoshin Kanri (policy deployment)
2) TQM
3) Deming award model
4) MBNQA model
5) Six Sigma
6) Lean
7) TPM
8) CII-Exim or EFQM
9) Balanced scorecard
10) SEI-CMMI/PCMMI

With rapid change in the marketplace and ever-increasing competition in today's business environment, organizations are looking for every opportunity to improve their business results. To be competitive and to sustain growth, the organization needs a firm approach to drive performance and attain higher levels of efficiency. The business excellence model is a tool with a set of criteria that the organization can use to improve performance on the critical factors that drive its business to success.

The criteria provide a framework for performance excellence and help the organization to assess and measure performance on a wide range of key business indicators: customer, product and service, operational, and financial.

This allows the organization to carry out a self-assessment of its business performance, to identify strengths, and to target "opportunities for improvement" in processes and results affecting all key stakeholders, including customers, employees, owners, suppliers, and the public.

The criteria also help the organization to align its resources, improve productivity and effectiveness, and achieve its goals. In short, a business excellence model:

• Provides comprehensive coverage of strategy-driven performance
• Focuses on the needs, expectations and satisfaction of all stakeholders
• Examines all processes that are essential in achieving business excellence
• Is a framework to assess and enhance business excellence
• Drives continuous improvement of the organization's overall performance and capabilities
• Delivers ever-improving value to customers, resulting in marketplace success
• Supports understanding and analysis of the business in areas like leadership, strategy, customer, market, information & data, knowledge sharing, HR, production processes, and results
• Is a framework for excellence through values, processes, and outcomes; in the model, these are called "Approach", "Deployment" and "Results"
• Is not a prescriptive model
• Asks questions, but does not provide solutions

Organizations also use the "balanced scorecard" as a performance measurement system. This is again a framework, which enables an organization to translate its vision & strategy into a coherent set of performance measures.

Although there are many excellence models available, you can refer to the "Malcolm Baldrige National Quality Award" model; the Tata Business Excellence Model is based on it.

TATA Business Excellence Model

The Tata Business Excellence Model (TBEM) is a framework which helps companies to achieve excellence in their business performance. This is the model chosen by the TATA Group to help in building globally competitive organizations across TATA Group companies. TBEM is based on the Malcolm Baldrige National Quality Award Model of the U.S.


The Criteria have three important roles in strengthening competitiveness:

• To help improve organizational performance practices, capabilities, and results
• To facilitate communication and sharing of best practices information among all organisations within the TATA Group
• To help in guiding organizational planning and opportunities for learning

The TBEM Criteria are designed to help organizations use an integrated approach to organisational performance management that results in:

• Delivery of ever-improving value to customers and stakeholders, contributing to organizational sustainability
• Improvement of overall organisational effectiveness and capabilities
• Organisational and personal learning

The Criteria are built on the following set of 11 Interrelated Core Values and Concepts:

• Visionary Leadership
• Customer-driven Excellence
• Organisational and Personal Learning
• Valuing Employees and Partners
• Agility
• Focus on the Future
• Managing for Innovation
• Management by Fact
• Social Responsibility
• Focus on Results and Creating Value
• Systems Perspective

The Core Values and Concepts are embodied in seven Categories, as follows:

• Leadership
• Strategic Planning
• Customer and Market Focus
• Measurement, Analysis, and Knowledge Management
• Workforce Focus
• Process Management
• Business Results

The TBEM criteria are the operational details of the Core Values, applied to the different facets of a business organisation. The 7 Criteria Categories are divided into 18 Items and 32 Areas to Address. The TBEM framework has the following characteristics:

• Focus on business results
• Non-prescriptive and adaptable
• Maintains a system perspective
• Supports goal-based diagnosis

TBEM instills a process-centric approach in an organisation as a means to achieve its chosen business goals. Tata Teleservices Limited, as a part of the TATA Group, has adopted the Tata Business Excellence Model as an integral part of its operating structure and uses it to grow from strength to strength, keeping operational excellence and business results in focus.


BUSINESS FUNDING MODEL

Importance of Sustainability Planning

Many existing campus-based publishing collaborations pay less direct attention to sustainability planning and financial structures than to the design and technical implementation of the collaborative projects themselves. Such a focus is understandable, as working through these sustainability issues requires that a collaboration's partners reconcile significant operational and cultural differences. However, as libraries and presses move beyond narrowly defined, low-risk projects to undertake more ambitious long-term publishing programs, resolving these differences becomes increasingly critical to success. This section describes the organizational context in which most collaborations will operate, including:

the disparate funding models of the library and the press, and why they must be reconciled to support significant, long-term collaboration;

the potential benefits of earned revenue for fulfilling a collaboration’s mission; and

the utility of business principles—irrespective of funding model—for managing a collaboration.

Section 5 discusses practical issues relevant to structuring and managing a library-press publishing collaboration, including: 

setting financial performance expectations (whether subsidized deficit, cost recovery, or net surplus seeking; see the sketch after this list);

tracking costs and allocating resources; and
choosing between multiple projects.
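As a rough numerical illustration of those performance expectations, the sketch below uses entirely hypothetical figures to compute each project's contribution (revenue minus direct costs, the definition used later in this section) and to classify the overall program as a subsidized deficit, cost recovery, or a net surplus.

```python
# Hypothetical figures only: classify a small publishing portfolio as running a
# subsidized deficit, recovering its costs, or generating a net surplus.
projects = {
    "open-access journal": {"revenue": 0, "direct_costs": 40_000},
    "monograph series":    {"revenue": 55_000, "direct_costs": 35_000},
}
institutional_subsidy = 15_000
indirect_costs = 10_000

# Contribution = what a project generates minus the direct costs it incurs.
contributions = {name: p["revenue"] - p["direct_costs"] for name, p in projects.items()}
net_position = sum(contributions.values()) + institutional_subsidy - indirect_costs

for name, c in contributions.items():
    print(f"{name}: contribution {c:+,}")

if net_position < 0:
    print(f"Program runs a subsidized deficit of {-net_position:,}")
elif net_position == 0:
    print("Program operates at cost recovery")
else:
    print(f"Program generates a net surplus of {net_position:,}")
```

In this toy portfolio, the surplus from the revenue-generating project helps cross-subsidize the mission-driven, non-revenue project, which is the balancing act described in the rest of this section.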

Together these sections provide an overview of the financial and business issues many libraries and presses face in collaborating and offer practical insight on how a collaboration might be structured and managed.

4.2 Reconciling Financial Models

Many current publishing partnerships are of limited scope and duration. For a collaboration with relatively modest goals, a temporary diversion of staff time and/or a limited capital outlay may provide sufficient resources for its projects. However, such an ad hoc approach will often be ill-suited for sustaining more ambitious, long-term collaborative programs. Libraries and university presses share much in common: both operate on a nonprofit model and each seeks, in its own way, to fulfill a mission that serves the needs of its host institution. However, there are real differences in the operating structures and strategies of libraries and presses, and these must be reconciled to allow a library-press partnership to realize its full potential. If these differences are not explicitly recognized and accommodated, the library may not consider its mission objectives to be adequately served and the press may not be able to commit significant resources to a long-term collaborative publishing program. In such cases, collaborative activity would lack the full commitment of both partners, and as a result, the scale, scope, and duration of collaborative projects would be limited.


University presses are sometimes characterized as resistant to change or unsupportive of new models that might support scholarly communication in a networked, digital publishing environment.[45] Indeed, some university presses may view the range of potential business models narrowly, focusing on established market models, even when those models are beginning to fail. However, considering the limited resources, slim margins, and cost-recovery expectations under which presses must typically operate, this conservatism is scarcely surprising.

If a university press were to be fully subsidized by its host institution—a remote contingency under the prevailing model, in which relatively little of the press's activity directly benefits the host institution—then it might operate under the same funding model as the library. Unless such an improbable transformation takes place, practical reality dictates that a partnership establish a financial structure that reconciles the disparate funding models under which each partner operates.

Libraries and academic computing centers are funded by institutional standing budgets, while university presses generate most or all of their operating budgets through earned revenue from market activities. A typical breakdown of an institutional library's funding sources would include about 75-85% from university appropriations and about 5-15% from designated funds, with the balance coming from sponsored programs and endowments.[46] In terms of expense categories, approximately 45% of a university library's budget will typically cover staff costs and 40% will go towards materials acquisitions, with other operating expenses representing 15% of the budget.[47]

On average, university presses operate on a combination of earned income (80-90%) and institutional subsidies (5-15%), supplemented by title subsidies and endowment income (5%).[48] As presses depend on earned revenue for 80-90% of their operating budgets, they must manage their publishing activities overall to balance mission fulfillment and revenue generation. Some press projects will balance both the press's mission and revenue objectives, while other projects may cross-subsidize mission-worthy publications that are incapable of covering their own costs. Whatever the mix, overall, the press must manage its publishing portfolio to cover both direct and indirect costs to remain operationally self-sustaining.

Recognizing the requirements of the press's funding model will allow a collaboration to channel subsidies and/or create hybrid revenue-subsidy models that permit the press to participate fully in a collaboration. For presses and libraries to collaborate successfully requires a funding model and financial structure that allows the press to participate without diverting resources from other mission-critical publishing programs. If a collaboration fails to accommodate the requirements imposed by the press's financial model, then participation in the collaboration would require the press to divert resources from other subsidized mission-critical publishing activities, which may be highly valued by the host institution and its faculty.

4.2.1 Partnership Funding Models

The need for a shared financial understanding remains, irrespective of the source of a partnership's income.
A partnership's strategic objectives, and the types of projects that it intends to undertake as a result, will affect whether subsidies, earned revenue, or a combination of the two provides a viable business model for its projects. If the partnership emphasizes open-access models, or provides products or services that cannot capture sufficient value on the open market, then the potential for generating self-sustaining revenue from those activities may be limited. Even where its activities are capable of generating earned revenue, a market approach might compromise the collaboration's mission and objectives by limiting its target audiences' access to the products or services it offers.

If a partnership were to secure a subsidy sufficient to fund all of the activities necessary to achieve its mission, then there would be no need for it to use revenue-generating models. However, there may be instances where partners want to pursue activities for which a) adequate subsidies are not available and/or b) an earned revenue model provides a viable source of income.

4.2.2 The Role of Earned Revenue

Ideally, a campus-based publishing venture would receive subsidies from its host institution commensurate with the full mission value it delivers. In practice, this will seldom be the case. Competition for scarce institutional resources, coupled with the problems inherent in demonstrating and quantifying the mission value delivered by its activities, may leave a partnership inadequately subsidized to fully achieve its objectives. In such situations, a collaboration may elect to generate earned revenue by imposing fees for some or all of its products and services.

Although university presses work under a market model, they operate differently than commercial entities. While commercial publishers maximize profits, university presses seek to maximize mission attainment, publishing as much high-quality content as their resources allow. However, mission maximization is subject to financial constraints. By exploiting market opportunities to generate income, a publishing partnership can relax the financial constraint and thus fuel greater levels of mission attainment. In this way, a partnership may be able to pursue more activities that fulfill its mission with a combination of subsidy and earned revenue than by subsidy alone. As long as the income-generating activities are well aligned with the partnership's mission, and revenue generation serves as a means to an end rather than an end in itself, the market activity may contribute positively to achieving the mission. In such cases, the surplus generated can be applied to support publishing programs that do not generate revenue, and that might otherwise not be possible.[49]

A partnership can subsidize financially unprofitable projects from the revenue contributed by projects that generate a surplus and/or from income from institutional subsidies and other sources. If all the partnership's projects were to generate positive financial contributions, there would be no need for cross-subsidies. However, for many partnerships, some projects will require cross-subsidies from projects with positive contributions.

In terms of program investment decisions, the marginal cost of increasing the publishing program should equal the marginal mission attainment per dollar spent plus the marginal revenue generated.[50] A publishing project with a positive financial contribution (the difference between what the project generates and the direct costs it incurs) provides funds available for cross-subsidizing publishing activities that support the program's mission but that are not financially self-sustaining. Although this approach does not avoid the problems inherent in assigning a financial value to mission attainment, it does provide a financial framework in which the projected returns can be assessed.[51]
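Read as an investment rule, the condition cited at [50] can be stated compactly. The shorthand below (MC, MMA, MR) is introduced here purely for illustration and does not appear in the source:

MC = MMA + MR

where MC is the marginal cost of expanding the publishing program, MMA is the marginal mission attainment per dollar spent (expressed in dollar terms), and MR is the marginal revenue generated by the expansion. On this reading, expanding the program remains worthwhile up to the point at which the equality holds.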

Page 25: MODELS 5

4.3 The Utility of Business Principles

The aggressive market practices of some commercial publishers have tainted the perception of market-based publishing models for many in the academy; indeed, such excesses will sometimes provide the impetus for library participation in online publishing collaborations. However, business processes and market models do have relevance and utility for campus-based publishing partnerships. Regardless of whether it uses a subsidy or earned revenue model, a collaboration can benefit from the market orientation that a press brings to the partnership. It will be important for library partners in collaborations to examine whether resistance to market forces and business principles represents a genuine value conflict, as opposed to cultural stereotyping.[52]

Here the distinction between competition and profit as motivators for campus-based market activities is instructive. Campus-based publishing collaborations need to couple the feedback mechanisms and performance stimulants of market participation with the value-driven goal of mission attainment. As Bok and others have observed, market forces compel nonprofit entities to assess both what they do and how well they do it.[53] Markets provide incentives to respond to demand and to improve operating efficiency and productivity. All things being equal, cost savings from increased efficiency fund cross-subsidies for non-revenue-generating projects with high mission value and allow an initiative to charge less for its services than profit-maximizing ventures. The pressures of market competition on revenue contribution, which funds mission attainment, should prompt productivity improvements, including gains in efficiency that the collaboration would not have undertaken had it been insulated from competition.

Thus, while complete reliance on the market and earned revenue would expose a collaboration to forces that may not align well with its mission and values, ignoring the market sacrifices the discipline that market participation requires. Stated negatively, insulation from market forces can reduce the mission relevance and financial value of a partnership's output, lower its operating efficiency, and result in the suboptimal use of resources.

The issue in applying business principles and practices is not that a partnership should alter its mission, in terms of what it publishes or the constituencies it serves, in order to generate a surplus. Rather, it is that, in serving its mission, the collaboration should operate as efficiently and cost-effectively as possible given the resources available. This will allow the partnership to better serve the needs of its constituencies by funding activities with high mission value but low market value.

Page 26: MODELS 5

RISK MANAGEMENT

Risk management is the identification, assessment, and prioritization of risks (defined in ISO 31000 as the effect of uncertainty on objectives, whether positive or negative), followed by the coordinated and economical application of resources to minimize, monitor, and control the probability and/or impact of unfortunate events[1] or to maximize the realization of opportunities. Risks can come from uncertainty in financial markets, project failures, legal liabilities, credit risk, accidents, natural causes and disasters, as well as deliberate attacks from an adversary. Several risk management standards have been developed, including those of the Project Management Institute, the National Institute of Standards and Technology, actuarial societies, and ISO.[2][3] Methods, definitions and goals vary widely according to whether the risk management method is applied in the context of project management, security, engineering, industrial processes, financial portfolios, actuarial assessments, or public health and safety.

The strategies to manage risk include transferring the risk to another party, avoiding the risk, reducing the negative effect of the risk, and accepting some or all of the consequences of a particular risk.

Certain aspects of many risk management standards have come under criticism for producing no measurable improvement in risk, even though confidence in estimates and decisions increases.

This section provides an introduction to the principles of risk management. The vocabulary of risk management is defined in ISO Guide 73, "Risk management. Vocabulary."[2]

Page 27: MODELS 5

In ideal risk management, a prioritization process is followed whereby the risks with the greatest loss and the greatest probability of occurring are handled first, and risks with lower probability of occurrence and lower loss are handled in descending order. In practice the process can be very difficult, and balancing between risks with a high probability of occurrence but lower loss versus risks with high loss but lower probability of occurrence can often be mishandled.

Intangible risk management identifies a new type of risk: one that has a 100% probability of occurring but is ignored by the organization due to a lack of identification ability. For example, when deficient knowledge is applied to a situation, a knowledge risk materializes. Relationship risk appears when ineffective collaboration occurs. Process-engagement risk may be an issue when ineffective operational procedures are applied. These risks directly reduce the productivity of knowledge workers and decrease cost effectiveness, profitability, service, quality, reputation, brand value, and earnings quality. Intangible risk management allows risk management to create immediate value from the identification and reduction of risks that reduce productivity.

Risk management also faces difficulties in allocating resources. This is the idea of opportunity cost: resources spent on risk management could have been spent on more profitable activities. Again, ideal risk management minimizes spending while minimizing the negative effects of risks.
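The prioritization described above is commonly operationalized by ranking risks on expected loss, i.e. probability multiplied by magnitude of loss. The short Python sketch below is only an illustration of that ranking; the risk list, field names, and figures are invented for the example.

# Minimal sketch: rank risks by expected loss (probability * loss).
# Example data and field names are illustrative, not from the source.
risks = [
    {"name": "server outage",   "probability": 0.30, "loss": 50_000},
    {"name": "data breach",     "probability": 0.05, "loss": 400_000},
    {"name": "key staff leave", "probability": 0.20, "loss": 80_000},
]

def expected_loss(risk):
    """Expected loss = probability of occurrence * magnitude of loss."""
    return risk["probability"] * risk["loss"]

# Handle the risks with the greatest expected loss first.
for risk in sorted(risks, key=expected_loss, reverse=True):
    print(f'{risk["name"]}: expected loss = {expected_loss(risk):,.0f}')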

Method

For the most part, these methods consist of the following elements, performed, more or less, in the following order:

1. identify, characterize, and assess threats
2. assess the vulnerability of critical assets to specific threats
3. determine the risk (i.e. the expected consequences of specific types of attacks on specific assets)


4. identify ways to reduce those risks
5. prioritize risk reduction measures based on a strategy

Principles of risk management

The International Organization for Standardization (ISO) identifies the following principles of risk management:[4]

Risk management should:

create value
be an integral part of organizational processes
be part of decision making
explicitly address uncertainty
be systematic and structured
be based on the best available information
be tailored
take into account human factors
be transparent and inclusive
be dynamic, iterative and responsive to change
be capable of continual improvement and enhancement

According to the standard ISO 31000 "Risk management -- Principles and guidelines on implementation,"[3] the process of risk management consists of the following steps.

Establishing the context

Establishing the context involves:

1. Identification of risk in a selected domain of interest.
2. Planning the remainder of the process.
3. Mapping out the following:
   the social scope of risk management
   the identity and objectives of stakeholders
   the basis upon which risks will be evaluated, and constraints.
4. Defining a framework for the activity and an agenda for identification.
5. Developing an analysis of risks involved in the process.
6. Mitigation or solution of risks using available technological, human and organizational resources.

Identification

After establishing the context, the next step in the process of managing risk is to identify potential risks. Risks are about events that, when triggered, cause problems. Hence, risk identification can start with the source of problems, or with the problem itself.

Source analysis: Risk sources may be internal or external to the system that is the target of risk management.


Examples of risk sources are: stakeholders of a project, employees of a company or the weather over an airport.

Problem analysis: Risks are related to identified threats, for example the threat of losing money, the threat of abuse of private information, or the threat of accidents and casualties. The threats may exist with various entities, most importantly with shareholders, customers and legislative bodies such as the government.

When either a source or a problem is known, the events that the source may trigger or the events that can lead to the problem can be investigated. For example: stakeholders withdrawing during a project may endanger funding of the project; privacy information may be stolen by employees even within a closed network; lightning striking an aircraft during takeoff may make all people on board immediate casualties.

The chosen method of identifying risks may depend on culture, industry practice and compliance. The identification methods are formed by templates, or by the development of templates, for identifying source, problem or event. Common risk identification methods are:

Objectives-based risk identification: Organizations and project teams have objectives. Any event that may endanger achieving an objective partly or completely is identified as risk.

Scenario-based risk identification: In scenario analysis, different scenarios are created. The scenarios may be the alternative ways to achieve an objective, or an analysis of the interaction of forces in, for example, a market or battle. Any event that triggers an undesired scenario alternative is identified as risk (see Futures Studies for the methodology used by futurists).

Taxonomy-based risk identification: The taxonomy in taxonomy-based risk identification is a breakdown of possible risk sources. Based on the taxonomy and knowledge of best practices, a questionnaire is compiled. The answers to the questions reveal risks.[5]

Common-risk checking: In several industries, lists of known risks are available. Each risk in the list can be checked for application to a particular situation.[6]

Risk charting:[7] This method combines the above approaches by listing the resources at risk, the threats to those resources, the modifying factors which may increase or decrease the risk, and the consequences it is wished to avoid. Creating a matrix under these headings enables a variety of approaches. One can begin with resources and consider the threats they are exposed to and the consequences of each. Alternatively, one can start with the threats and examine which resources they would affect, or one can begin with the consequences and determine which combination of threats and resources would be involved to bring them about.

Assessment

Once risks have been identified, they must then be assessed as to their potential severity of loss and probability of occurrence. These quantities can be either simple to measure, in the case of the value of a lost building, or impossible to know for sure, in the case of the probability of an unlikely event occurring. Therefore, in the assessment process it is critical to make the best educated guesses possible in order to properly prioritize the implementation of the risk management plan.

The fundamental difficulty in risk assessment is determining the rate of occurrence, since statistical information is not available on all kinds of past incidents. Furthermore, evaluating the severity of the consequences (impact) is often quite difficult for immaterial assets. Asset valuation is another question that needs to be addressed. Thus, best educated opinions and available statistics are the primary sources of information. Nevertheless, risk assessment should produce information such that the primary risks are easy for the management of the organization to understand and the risk management decisions can be prioritized. Thus, there have been several theories and attempts to quantify risks. Numerous risk formulae exist, but perhaps the most widely accepted formula for risk quantification is:

Risk = Rate of occurrence x Impact of the event

Composite Risk Index

The above formula can also be re-written in terms of a Composite Risk Index, as follows:

Composite Risk Index = Impact of risk event x Probability of occurrence

The impact of the risk event is assessed on a scale of 0 to 5, where 0 and 5 represent the minimum and maximum possible impact of an occurrence of a risk (usually in terms of financial losses). The probability of occurrence is likewise assessed on a scale from 0 to 5, where 0 represents a zero probability of the risk event actually occurring while 5 represents a 100% probability of occurrence.

The Composite Risk Index thus can take values ranging from 0 through 25, and this range is usually arbitrarily divided into three sub-ranges. The overall risk assessment is then Low, Medium or High, depending on the sub-range containing the calculated value of the Composite Risk Index. For instance, the three sub-ranges could be defined as 0 to 8, 9 to 16 and 17 to 25.

Note that the probability of risk occurrence is difficult to estimate, since past data on frequencies are not readily available, as mentioned above. Likewise, the impact of the risk is not easy to estimate, since it is often difficult to estimate the potential financial loss in the event of risk occurrence. Further, both of the above factors can change in magnitude depending on the adequacy of risk avoidance and prevention measures taken, and due to changes in the external business environment. Hence it is absolutely necessary to periodically re-assess risks and intensify or relax mitigation measures as necessary.
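As a minimal illustration of the index described above, the Python sketch below computes the composite value from the two 0-5 scores and maps it onto the example sub-ranges given in the text (0-8 Low, 9-16 Medium, 17-25 High). The function names and example scores are invented for the illustration.

def composite_risk_index(impact: int, probability: int) -> int:
    """Composite Risk Index = impact (0-5) x probability of occurrence (0-5)."""
    if not (0 <= impact <= 5 and 0 <= probability <= 5):
        raise ValueError("impact and probability must each be on a 0-5 scale")
    return impact * probability

def risk_band(index: int) -> str:
    """Map the 0-25 index onto the illustrative sub-ranges from the text."""
    if index <= 8:
        return "Low"
    if index <= 16:
        return "Medium"
    return "High"

# Example: a fairly likely event with severe financial impact.
cri = composite_risk_index(impact=4, probability=3)
print(cri, risk_band(cri))  # prints: 12 Medium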

Risk mitigation measures are usually formulated according to one or more of the following major risk options:

1. Design a new business process with adequate built-in risk control and containment measures from the start.
2. Periodically re-assess risks that are accepted in ongoing processes as a normal feature of business operations, and modify mitigation measures.
3. Transfer risks to an external agency (e.g. an insurance company).
4. Avoid risks altogether (e.g. by closing down a particular high-risk business area).


Later research has shown that the financial benefits of risk management are less dependent on the formula used and more dependent on the frequency and manner in which risk assessment is performed.

In business it is imperative to be able to present the findings of risk assessments in financial terms. Robert Courtney Jr. (IBM, 1970) proposed a formula for presenting risks in financial terms.[8] The Courtney formula was accepted as the official risk analysis method for US governmental agencies. The formula proposes the calculation of ALE (annualised loss expectancy) and compares the expected loss value to the security control implementation costs (cost-benefit analysis).
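The comparison this approach calls for can be sketched generically as follows. This is only an illustration of the usual annualised-loss-expectancy calculation (single loss expectancy multiplied by the annual rate of occurrence, compared with the annual cost of a control); the function names and figures are invented and are not taken from Courtney's own formula.

def annualised_loss_expectancy(single_loss_expectancy: float,
                               annual_rate_of_occurrence: float) -> float:
    """ALE = SLE x ARO, i.e. the expected loss per year."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Example: an incident costing 20,000 per occurrence, expected 0.3 times a year.
ale_before = annualised_loss_expectancy(20_000, 0.3)    # 6,000 per year without the control
ale_after = annualised_loss_expectancy(20_000, 0.05)    # 1,000 per year with the control
control_cost = 3_000                                    # assumed annual cost of the control

# Cost-benefit view: the control is worthwhile if the reduction in ALE exceeds its cost.
net_benefit = (ale_before - ale_after) - control_cost
print(f"Net annual benefit of the control: {net_benefit:,.0f}")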

Potential risk treatments

Once risks have been identified and assessed, all techniques to manage the risk fall into one or more of these four major categories:[9]

Avoidance (eliminate, withdraw from or not become involved)
Reduction (optimize - mitigate)
Sharing (transfer - outsource or insure)
Retention (accept and budget)

Ideal use of these strategies may not be possible. Some of them may involve trade-offs that are not acceptable to the organization or person making the risk management decisions. Another source, from the US Department of Defense, Defense Acquisition University, calls these categories ACAT, for Avoid, Control, Accept, or Transfer. This use of the ACAT acronym is reminiscent of another ACAT (for Acquisition Category) used in US Defense industry procurements, in which Risk Management figures prominently in decision making and planning.

Risk avoidance

This includes not performing an activity that could carry risk. An example would be not buying a property or business in order to avoid taking on the legal liability that comes with it. Another would be not flying, in order not to take the risk that the airplane might be hijacked. Avoidance may seem the answer to all risks, but avoiding risks also means losing out on the potential gain that accepting (retaining) the risk may have allowed. Not entering a business to avoid the risk of loss also avoids the possibility of earning profits.

Hazard prevention

Main article: Hazard prevention

Hazard prevention refers to the prevention of risks in an emergency. The first and most effective stage of hazard prevention is the elimination of hazards. If this takes too long, is too costly, or is otherwise impractical, the second stage is mitigation.

Risk reduction

Risk reduction or "optimisation" involves reducing the severity of the loss or the likelihood of the loss occurring. For example, sprinklers are designed to put out a fire to reduce the risk of loss by fire. This method may cause a greater loss by water damage and therefore may not be suitable. Halon fire suppression systems may mitigate that risk, but the cost may be prohibitive as a strategy.


Acknowledging that risks can be positive or negative, optimising risks means finding a balance between negative risk and the benefit of the operation or activity, and between risk reduction and the effort applied. By effectively applying HSE management in its organisation, an offshore drilling contractor can optimise risk to achieve levels of residual risk that are tolerable.[10]

Modern software development methodologies reduce risk by developing and delivering software incrementally. Early methodologies suffered from the fact that they only delivered software in the final phase of development; any problems encountered in earlier phases meant costly rework and often jeopardized the whole project. By developing in iterations, software projects can limit effort wasted to a single iteration.Outsourcing could be an example of risk reduction if the outsourcer can demonstrate higher capability at managing or reducing risks.[11] For example, a company may outsource only its software development, the manufacturing of hard goods, or customer support needs to another company, while handling the business management itself. This way, the company can concentrate more on business development without having to worry as much about the manufacturing process, managing the development team, or finding a physical location for a call center.

Risk sharing

Risk sharing is briefly defined as "sharing with another party the burden of loss or the benefit of gain, from a risk, and the measures to reduce a risk."

The term 'risk transfer' is often used in place of risk sharing, in the mistaken belief that you can transfer a risk to a third party through insurance or outsourcing. In practice, if the insurance company or contractor goes bankrupt or ends up in court, the original risk is likely to revert to the first party. As such, in the terminology of practitioners and scholars alike, the purchase of an insurance contract is often described as a "transfer of risk." However, technically speaking, the buyer of the contract generally retains legal responsibility for the losses "transferred", meaning that insurance may be described more accurately as a post-event compensatory mechanism. For example, a personal injuries insurance policy does not transfer the risk of a car accident to the insurance company. The risk still lies with the policy holder, namely the person who has been in the accident. The insurance policy simply provides that if an accident (the event) occurs involving the policy holder, then some compensation may be payable to the policy holder that is commensurate with the suffering/damage.

Some ways of managing risk fall into multiple categories. Risk retention pools technically retain the risk for the group, but spreading it over the whole group involves transfer among individual members of the group. This is different from traditional insurance in that no premium is exchanged between members of the group up front, but instead losses are assessed to all members of the group.

Risk retention

Risk retention involves accepting the loss, or benefit of gain, from a risk when it occurs. True self-insurance falls in this category. Risk retention is a viable strategy for small risks where the cost of insuring against the risk would be greater over time than the total losses sustained. All risks that are not avoided or transferred are retained by default. This includes risks that are so large or catastrophic that they either cannot be insured against or the premiums would be infeasible. War is an example, since most property and risks are not insured against war, so the loss attributable to war is retained by the insured. Also, any amount of potential loss (risk) over the amount insured is retained risk. This may also be acceptable if the chance of a very large loss is small or if the cost to insure for greater coverage amounts is so great that it would hinder the goals of the organization too much.

Create a risk management plan

Select appropriate controls or countermeasures to manage each risk. Risk mitigation needs to be approved by the appropriate level of management. For instance, a risk concerning the image of the organization should have a top-management decision behind it, whereas IT management would have the authority to decide on computer virus risks.

The risk management plan should propose applicable and effective security controls for managing the risks. For example, an observed high risk of computer viruses could be mitigated by acquiring and implementing antivirus software. A good risk management plan should contain a schedule for control implementation and the persons responsible for those actions.

According to ISO/IEC 27001, the stage immediately after completion of the risk assessment phase consists of preparing a Risk Treatment Plan, which should document the decisions about how each of the identified risks should be handled. Mitigation of risks often means selection of security controls, which should be documented in a Statement of Applicability, which identifies which particular control objectives and controls from the standard have been selected, and why.

Implementation

Implementation follows all of the planned methods for mitigating the effect of the risks: purchase insurance policies for the risks that it has been decided to transfer to an insurer, avoid all risks that can be avoided without sacrificing the entity's goals, reduce others, and retain the rest.

Review and evaluation of the plan

Initial risk management plans will never be perfect. Practice, experience, and actual loss results will necessitate changes in the plan and contribute information that allows different decisions to be made in dealing with the risks being faced.

Risk analysis results and management plans should be updated periodically. There are two primary reasons for this:

1. to evaluate whether the previously selected security controls are still applicable and effective, and

2. to evaluate possible changes in risk levels in the business environment; information risks, for example, change rapidly as the business environment changes.

Limitations

If risks are improperly assessed and prioritized, time can be wasted in dealing with risks of losses that are not likely to occur. Spending too much time assessing and managing unlikely risks can divert resources that could be used more profitably. Unlikely events do occur, but if the risk is unlikely enough it may be better to simply retain the risk and deal with the result if the loss does in fact occur. Qualitative risk assessment is subjective and lacks consistency. The primary justification for a formal risk assessment process is legal and bureaucratic.

Prioritizing the risk management processes too highly could keep an organization from ever completing a project or even getting started. This is especially true if other work is suspended until the risk management process is considered complete.

It is also important to keep in mind the distinction between risk and uncertainty. Risk can be measured as impact x probability.

Areas of risk management

As applied to corporate finance, risk management is the technique for measuring, monitoring and controlling the financial or operational risk on a firm's balance sheet (see value at risk). The Basel II framework breaks risks into market risk (price risk), credit risk and operational risk, and also specifies methods for calculating capital requirements for each of these components.

Enterprise risk management

Main article: Enterprise Risk Management

In enterprise risk management, a risk is defined as a possible event or circumstance that can have negative influences on the enterprise in question. Its impact can be on the very existence, the resources (human and capital), the products and services, or the customers of the enterprise, as well as external impacts on society, markets, or the environment. In a financial institution, enterprise risk management is normally thought of as the combination of credit risk, interest rate risk or asset liability management, market risk, and operational risk.

In the more general case, every probable risk can have a pre-formulated plan to deal with its possible consequences (to ensure contingency if the risk becomes a liability). From the information above and the average cost per employee over time, or cost accrual ratio (CAR), a project manager can estimate:

the cost associated with the risk if it arises, estimated by multiplying employee costs per unit time by the estimated time lost (cost impact, C, where C = CAR x S and S is the estimated time lost);

the probable increase in time associated with a risk (schedule variance due to risk, Rs, where Rs = P x S and P is the probability that the risk occurs). Sorting on this value puts the highest risks to the schedule first. This is intended to cause the greatest risks to the project to be attempted first, so that risk is minimized as quickly as possible. It is slightly misleading, however, as schedule variances with a large P and small S, and vice versa, are not equivalent (compare the risk of the RMS Titanic sinking with the passengers' meals being served at slightly the wrong time);

the probable increase in cost associated with a risk (cost variance due to risk, Rc, where Rc = P x C = P x CAR x S). Sorting on this value puts the highest risks to the budget first. The same concern applies as for schedule variance, since cost variance is a function of it, as shown in the equation above. A minimal numerical sketch of these quantities follows below.
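The Python sketch below works through these quantities for two invented risks; the cost accrual ratio, probabilities, and time estimates are purely illustrative and are not taken from the source.

# Illustrative only: compute cost impact (C), schedule variance (Rs) and
# cost variance (Rc) for each risk, then rank by cost variance.
CAR = 1_500.0  # assumed cost accrual ratio: average cost per employee per day

risks = [
    # (name,                        probability P, estimated time lost S in days)
    ("key supplier slips",          0.4,           10),
    ("integration bug found late",  0.2,           25),
]

for name, p, s in risks:
    c = CAR * s   # cost impact if the risk arises
    rs = p * s    # schedule variance due to risk
    rc = p * c    # cost variance due to risk (= P x CAR x S)
    print(f"{name}: C={c:,.0f}, Rs={rs:.1f} days, Rc={rc:,.0f}")

# Rank by cost variance, putting the highest risks to the budget first.
ranked = sorted(risks, key=lambda r: r[1] * CAR * r[2], reverse=True)
print([name for name, _, _ in ranked])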


Risk in a project or process can be due either to Special Cause Variation or Common Cause Variation, and requires appropriate treatment. This reiterates the concern, noted in the list above, that extremal cases are not equivalent.

Risk management activities as applied to project management

In project management, risk management includes the following activities:

Planning how risk will be managed in the particular project. The plan should include risk management tasks, responsibilities, activities and budget.

Assigning a risk officer - a team member other than the project manager who is responsible for foreseeing potential project problems. A typical characteristic of a risk officer is healthy skepticism.

Maintaining a live project risk database. Each risk should have the following attributes: opening date, title, short description, probability and importance. Optionally, a risk may have an assigned person responsible for its resolution and a date by which the risk must be resolved. (A minimal sketch of such a record appears after this list.)

Creating an anonymous risk reporting channel. Each team member should have the possibility of reporting risks that he or she foresees in the project.

Preparing mitigation plans for risks that are chosen to be mitigated. The purpose of the mitigation plan is to describe how this particular risk will be handled: what will be done, when, by whom, and how, in order to avoid it or minimize the consequences if it becomes a liability.

Summarizing planned and faced risks, the effectiveness of mitigation activities, and the effort spent on risk management.
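As a minimal sketch of the risk record described in the list above, the following Python dataclass models the listed attributes; the class name, field names, and example values are invented for the illustration.

from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class RiskRecord:
    """Illustrative risk-register entry with the attributes listed above."""
    title: str
    short_description: str
    probability: float             # e.g. 0.0 to 1.0
    importance: int                # e.g. 1 (low) to 5 (high)
    opening_date: date = field(default_factory=date.today)
    owner: Optional[str] = None    # optional person responsible for resolution
    resolve_by: Optional[date] = None

# Example entry in a live project risk database.
risk = RiskRecord(
    title="Vendor API deprecation",
    short_description="Payment vendor retires its old API before our migration completes",
    probability=0.35,
    importance=4,
    owner="integration lead",
    resolve_by=date(2025, 12, 31),
)
print(risk.title, risk.probability, risk.importance)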

Risk management for megaprojects

Megaprojects (sometimes also called "major programs") are extremely large-scale investment projects, typically costing more than US$1 billion per project. Megaprojects include bridges, tunnels, highways, railways, airports, seaports, power plants, dams, wastewater projects, coastal flood protection schemes, oil and natural gas extraction projects, public buildings, information technology systems, aerospace projects, and defence systems. Megaprojects have been shown to be particularly risky in terms of finance, safety, and social and environmental impacts. Risk management is therefore particularly pertinent for megaprojects, and special methods and special education have been developed for such risk management.[12][13]

Risk management of Information Technology

Main article: IT risk management

Information technology is increasingly pervasive in modern life in every sector.[14][15][16]

IT risk is a risk related to information technology. This relatively new term reflects an increasing awareness that information security is simply one facet of a multitude of risks that are relevant to IT and the real-world processes it supports. A number of methodologies have been developed to deal with this kind of risk; for example, ISACA's Risk IT framework ties IT risk to enterprise risk management.

Risk management techniques in petroleum and natural gas


For the offshore oil and gas industry, operational risk management is regulated by the safety case regime in many countries. Hazard identification and risk assessment tools and techniques are described in the international standard ISO 17776:2000, and organisations such as the IADC (International Association of Drilling Contractors) publish guidelines for HSE Case development which are based on the ISO standard. Further, diagrammatic representations of hazardous events are often expected by governmental regulators as part of risk management in safety case submissions; these are known as bow-tie diagrams. The technique is also used by organisations and regulators in the mining, aviation, health, defence, industrial and finance sectors.[17]

Risk management and business continuity

Risk management is simply a practice of systematically selecting cost-effective approaches for minimising the effect of threat realization on the organization. All risks can never be fully avoided or mitigated, simply because of financial and practical limitations. Therefore all organizations have to accept some level of residual risk.

Whereas risk management tends to be preemptive, business continuity planning (BCP) was invented to deal with the consequences of realised residual risks. The necessity to have BCP in place arises because even very unlikely events will occur if given enough time. Risk management and BCP are often mistakenly seen as rivals or overlapping practices. In fact these processes are so tightly tied together that such separation seems artificial. For example, the risk management process creates important inputs for the BCP (assets, impact assessments, cost estimates etc.). Risk management also proposes applicable controls for the observed risks. Therefore, risk management covers several areas that are vital for the BCP process. However, the BCP process goes beyond risk management's preemptive approach and assumes that the disaster will happen at some point.

Risk communication

Risk communication is a complex cross-disciplinary academic field. Problems for risk communicators involve how to reach the intended audience, how to make the risk comprehensible and relatable to other risks, how to pay appropriate respect to the audience's values related to the risk, how to predict the audience's response to the communication, and so on. A main goal of risk communication is to improve collective and individual decision making. Risk communication is somewhat related to crisis communication.

Bow tie diagrams

A popular solution to the quest to communicate risks and their treatments effectively is to use bow tie diagrams. These have been effective, for example, in a public forum to model perceived risks and communicate precautions during the planning stage of offshore oil and gas facilities in Scotland. Equally, the technique is used for HAZID (Hazard Identification) workshops of all types, and results in a high level of engagement. For this reason (amongst others) an increasing number of government regulators for major hazard facilities (MHFs), offshore oil & gas, aviation, etc. welcome safety case submissions which use diagrammatic representation of risks at their core.

Communication advantages of bow tie diagrams:[17]


Visual illustration of the hazard, its causes, consequences, controls, and how controls fail.

The bow tie diagram can be readily understood at all personnel levels. "A picture paints a thousand words."

Seven cardinal rules for the practice of risk communication

(as first expressed by the U.S. Environmental Protection Agency and several of the field's founders)

Accept and involve the public/other consumers as legitimate partners.
Plan carefully and evaluate your efforts with a focus on your strengths, weaknesses, opportunities, and threats.
Listen to the public's specific concerns.
Be honest, frank, and open.
Coordinate and collaborate with other credible sources.
Meet the needs of the media.
Speak clearly and with compassion.

ORGANIZATIONAL MATURITY STAGES

The organizational life cycle (OLC) is a model which proposes that over the course of time, business firms move through a fairly predictable sequence of developmental stages. This model, which has been a subject of considerable study over the years, is linked to the study of organizational growth and development. It is based on a biological metaphor: business firms resemble living organisms because they demonstrate a regular pattern of developmental process. Organizations that are said to pass through a recognizable life cycle, wrote Gibson, Ivancevich, and Donnelly in Organizations: Behavior, Structure, Processes, are fundamentally impacted by external environmental circumstances as well as internal factors: "We're all aware of the rise and fall of organizations and entire industries…. Marketing experts acknowledge the existence of product-market life cycles. It seems reasonable to conclude that organizations also have life cycles."

In a summary of OLC models, Quinn and Cameron wrote in Management Science that the models typically propose that "changes that occur in organizations follow a predictable pattern that can be characterized by developmental stages. These stages are sequential in nature; occur as a hierarchical progression that is not easily reversed; and involve a broad range of organizational activities and structures." The number of life cycle stages proposed in various works studying the phenomenon has varied considerably over the years. Some analysts have delineated as many as ten different stages of an organizational life cycle, while others have flattened it down to as few as three stages. Most models, however, describe the organizational life cycle as a period comprised of four or five stages that can be encapsulated as start-up, growth, maturity, decline, and death (or revival).

Trends in OLC Study

While a number of business and management theorists alluded to developmental stages in the early to mid-1900s, Mason Haire's 1959 work Modern Organization Theory is generally recognized as one of the first studies that used a biological model for organizational growth and argued that organizational growth and development followed a regular sequence. The study of organizational life cycles intensified, and by the 1970s and 1980s it was well established as a key component of overall organizational growth.

Organizational life cycle is an important model because of its premise and its prescription. The model's premise is that requirements, opportunities, and threats both inside and outside the business firm will vary depending on the stage of development in which the firm finds itself. For example, threats in the start-up stage differ from those in the maturity stage. As the firm moves through the developmental stages, changes in the nature and number of requirements, opportunities, and threats exert pressure for change on the business firm. Baird and Meshoulam stated in the Academy of Management Review that organizations move from one stage to another because the fit between the organization and its environment is so inadequate that either the organization's efficiency and/or effectiveness is seriously impaired or the organization's survival is threatened. The OLC model's prescription is that the firm's managers must change the goals, strategies, and strategy implementation devices of the business to fit the new set of issues. Thus, different stages of the company's life cycle require alterations in the firm's objectives, strategies, managerial processes (planning, organizing, staffing, directing, controlling), technology, culture, and decision-making. For example, in a longitudinal study of 36 corporations published in Management Science, Miller and Friesen proposed five growth stages: birth, growth, maturity, decline, and revival.
They traced changes in the organizational structure and managerial processes as the business firms proceeded through the stages. At birth, the firms exhibited a very simple organizational structure with authority centralized at the top of the hierarchy. As the firms grew, they adopted more sophisticated structures and decentralized authority to middle- and lower-level managers. At maturity, the firms demonstrated significantly more concern for internal efficiency and installed more control mechanisms and processes.

Growth Phases

Despite the increase in interest in OLC, most scholarly works focusing on organizational life cycles have been conceptual and hypothetical in content. Only a small minority have attempted to test the organizational life cycle model empirically. One widely cited conceptual work, however, was published in the Harvard Business Review in 1972 by L. Greiner. He used five growth phases: growth through creativity; growth through direction; growth through delegation; growth through coordination; and growth through collaboration. Each growth stage encompassed an evolutionary phase ("prolonged periods of growth where no major upheaval occurs in organization practices") and a revolutionary phase ("periods of substantial turmoil in organization life"). The evolutionary phases were hypothesized to be about four to eight years in length, while the revolutionary phases were characterized as the crisis phases. At the end of each of the five growth stages listed above, Greiner hypothesized that an organizational crisis will occur, and that the business's ability to handle these crises will determine its future:

Phase 1 - Growth through creativity eventually leads to a crisis of leadership. More sophisticated and more formalized management practices must be adopted. If the founders can't or won't take on this responsibility, they must hire someone who can, and give this person significant authority.

Phase 2 - Growth through direction eventually leads to a crisis of autonomy. Lower-level managers must be given more authority if the organization is to continue to grow. The crisis involves top-level managers' reluctance to delegate authority.

Phase 3 - Growth through delegation eventually leads to a crisis of control. This occurs when autonomous employees who prefer to operate without interference from the rest of the organization clash with business owners and managers who perceive that they are losing control of a diversified company.

Phase 4 - Growth through coordination eventually leads to a crisis of red tape. Coordination techniques like product groups, formal planning processes, and corporate staff become, over time, a bureaucratic system that causes delays in decision making and a reduction in innovation.

Phase 5 - Growth through collaboration is characterized by the use of teams, a reduction in corporate staff, matrix-type structures, the simplification of formal systems, an increase in conferences and educational programs, and more sophisticated information systems. While Greiner did not formally delineate a crisis for this phase, he guessed that it might revolve around "the psychological saturation of employees who grow emotionally and physically exhausted by the intensity of team work and the heavy pressure for innovative solutions."

Organization Life Cycle and the Small Business Owner

Entrepreneurs who are involved in the early stages of business creation are unlikely to become preoccupied with life cycle issues of decline and dissolution.
Indeed, their concerns are apt to be in such areas as securing financing, establishing relationships with vendors and clients, preparing a physical location for business operations, and other aspects of business start-up that are integral to establishing and maintaining a viable firm.


Basically, these firms are almost exclusively concerned with the very first stage of the organization life cycle. Small business enterprises that are well established, on the other hand, may find OLC studies more relevant. Indeed, many recent examinations of organization life cycles have analyzed ways in which businesses can prolong desired stages (growth, maturity) and forestall negative stages (decline, death). Certainly, there exists no timeline that dictates that a company will begin to falter at a given point in time. "Because every company develops at its own pace, characteristics, more than age, define the stages of the cycle," explained Karen Adler and Paul Swiercz in Training & Development.

Small business owners and other organization leaders may explore a variety of options designed to influence the enterprise's life cycle, from new products to new markets to new management philosophies. After all, once a business begins to enter a decline phase, it is not inevitable that the company will continue to plummet into ultimate failure; many companies are able to reverse such slides (a development that is sometimes referred to as turning the OLC bell curve into an "S" curve). But entrepreneurs and managers should recognize that their business is always somewhere along the life cycle continuum, and that business success is often predicated on recognizing where the business is situated along that continuum.

Organisational life cycle: growth, maturity, decline and death

Organisations exhibit a similar, though not identical, life-cycle pattern of changes to living organisms. They grow, mature, decline, and eventually pass away. However, there are some differences that require attention. Firstly, the duration of each stage is less precise than that of typical organisms. In human beings, physiological growth reaches its climax at about the age of 25, whereas the growth phase of an organisation can vary to a great extent. Secondly, the mechanics upon which changes are based are different. Living organisms are typical biological machines with their own physics and chemistry, while organisations are not. According to Boulding (1956), organisations are at a higher level of complexity than living organisms.

Genetic factors and available resources both influence growth in organisms. Organisms develop from fertilisation to maturity through a programmed or predetermined genetic code, a process termed 'ontogenic development' (Ayres, 1994). Apart from this, it is also necessary that the organism acquire sufficient resources from the environment to sustain its life and remain viable. Although the concept of ontogenic development may not be directly applicable to the growth of real organisations due to the difference in basic constituents and mechanisms (i.e. biological vs. socio-technical), there is a similar idea upon which the description of growth in organisations can be based. Greiner (1972) proposed a growth model that explained growth in business organisations as a predetermined series of evolution and revolution (Figure 11.2, "The five phases of organisational growth (adapted from Greiner, 1972)"). In order to grow, the organisation is supposed to pass through a series of identifiable phases or stages of development and crisis, which is similar, to some degree, to the concept of ontogenic development. Thus, it is interesting to see that systems at different levels of complexity (Boulding, 1956) can exhibit a similar pattern of change. This is also consistent with General System Theory, which attempts to unify the bodies of knowledge in various disciplines (Bertalanffy, 1973).


Figure 11.2. The five phases of organisational growth (adapted from Greiner, 1972).

Greiner's model suggests how organisations grow, but the basic reasons behind the growth process and its mechanics remain unclear. As mentioned previously, growth in a living organism is a result of the interplay between the ontogenic factor and the environment. Here, positive feedback plays a vital role in explaining changes in a living system. Although both positive and negative feedback work in concert in any living system, in order to grow (or to effect other changes in a system), the net type of feedback must be positive (Skyttner, 2001). In organisms, starting at birth, the importation of materials and energy from the environment not only sustains life but also contributes to growth. As they keep growing, so does their ability to acquire resources. This means that the more they grow, the more capacity for resource acquisition they have and the more resources they can access. This growth and the increase in resource acquisition capabilities provide a positive feedback loop, which continues until the organism matures. The positive feedback loop becomes active again when the organism starts to decline, as discussed later.

An analogy can be made between the process of growth in a business organisation and that in an organism (provided that the business organisation pursues a growth strategy). If the resources in a niche or a domain are abundant, a business organisation in that niche is likely to run at a profit (provided that the relevant costs are under control). An increase in profit results in an improvement in return on investment (ROI), which tends to attract more funds from investors. The firm can use these funds to reinvest for expansion, to gain more market control, and to make even more profit. This positive feedback will continue until limiting factors (e.g. an increase in competition or the depletion of resources within a particular niche) take effect.

A living system cannot perpetually maintain growth, nor can it ensure its survival and viability forever. After its growth, the system matures, declines, and eventually ends. This can be explained by using the concept of 'homeokinesis' (Cardon, et al., 1972; Van Gigch, 1978, 1991; Skyttner, 2001). It has already been argued that one of the most important characteristics of any living system is that it has to be in a homeostatic, or dynamic, equilibrium condition to remain viable. Nonetheless, the fact that a living system deteriorates over time and eventually expires indicates that there is a limit to this.


Rather than maintaining its dynamic equilibrium, it is argued that a living system is really in a state of disequilibrium, a state of evolution termed 'homeokinesis'. Rather than being a living system's normal state, homeostasis is the ideal or climax state that the system is trying to achieve, but that is never actually achievable. Homeostasis can be described in homeokinetic terms as a 'homeokinetic plateau': the region within which negative feedback dominates in the living system. In human physiology, after age 25 (the physiological climax state), the body starts to deteriorate but can still function. After achieving maturity, it seems that a living system has more factors and contingencies to deal with, which require more energy and effort to keep under control. Beyond the 'upper threshold', it is apparent that the system is again operating in a positive feedback region, and is deteriorating. Even though the living system is trying its best to maintain its viability, this effort cannot counterbalance or defeat the entropically increasing trend. The system gradually and continuously loses its integration and proper functioning, which eventually results in the system's expiry.

Although we argue that the concept of homeokinesis and net positive feedback can also be applied to the explanation of deterioration and demise in organisations, as noted earlier it is very difficult to make a direct homology between changes in organisms and changes in organisations. Rather than being biological machines, which can be described and explained, to a large extent if not (arguably) completely, in terms of physics and chemistry, organisations are much more complex socio-technical systems comprising ensembles of people, artefacts, and technology working together in an organised manner.

Figure 11.3. Control requires that the system be maintained within the bounds of the homeokinetic plateau. Adapted from Van Gigch (1991).

As mentioned earlier, after its maturity the organism gradually and continuously loses its ability to keep its integration and organisation under control (to counterbalance the entropically increasing trend), and this finally leads to its demise. While this phenomenon is normal in biological systems, and even though organisations in general may experience decline and death (as many empires and civilisations did in history), it appears that the entropic process in organisations is less definite and more complicated than that in organisms. Kiel (1991) suggests that this dissimilarity can be explained in terms of systems' differences in their abilities to extract and utilise energy, and in their capacity to reorganise as a result of unexpected and chaotic contextual factors. This suggests that biological systems are less resilient and capable than social systems with respect to natural decline. This may be reflected in the difference in the timing and duration of each of their developmental phases. For example, while the duration of each phase in the life cycle, and the life expectancy, are relatively definite for a particular type of organism, such durations are very difficult, if not impossible, to specify for organisations. A small business may, on average, last from several months to a number of years whereas, in contrast, the Roman Catholic Church has lasted for centuries (Scott, 1998). It may be that the size and form of the organisation are influential factors in this respect, a proposition that still requires further empirical investigation.

To remain within the region of the homeokinetic plateau, the proper amount of control must be present for a well-functioning and sustainable living system, and similarly for organisations. Too little control will lead to poor integration and a chaotic situation, whereas too much control results in poor adaptation and inflexibility.

BUSINESS PLANNING

A business plan is a formal statement of a set of business goals, the reasons why they are believed attainable, and the plan for reaching those goals. It may also contain background information about the organization or team attempting to reach those goals.

Business plans may also target changes in perception and branding by the customer, client, tax-payer, or larger community. When an existing business is to undergo a major change, or when a new venture is being planned, a 3-to-5-year business plan is essential.

Business plans may be internally or externally focused. Externally focused plans target goals that are important to external stakeholders, particularly financial stakeholders. They typically have detailed information about the organization or team attempting to reach the goals. With for-profit entities, external stakeholders include investors and customers.[1] External stakeholders of non-profits include donors and the clients of the non-profit's services.[2] For government agencies, external stakeholders include tax-payers, higher-level government agencies, and international lending bodies such as the IMF, the World Bank, various economic agencies of the UN, and development banks.

Internally focused business plans target intermediate goals required to reach the external goals. They may cover the development of a new product, a new service, a new IT system, a restructuring of finance, the refurbishing of a factory or a restructuring of the organization. An internal business plan is often developed in conjunction with a balanced scorecard or a list of critical success factors. This allows the success of the plan to be measured using non-financial measures. Business plans that identify and target internal goals, but provide only general guidance on how they will be met, are called strategic plans.

Operational plans describe the goals of an internal organization, working group or department.[3] Project plans, sometimes known as project frameworks, describe the goals of a particular project. They may also address the project's place within the organization's larger strategic goals.

Business plans are decision-making tools. There is no fixed content for a business plan. Rather, the content and format of the business plan are determined by the goals and audience. A business plan represents all aspects of the business planning process, declaring vision and strategy alongside sub-plans to cover marketing, finance, operations, human resources and, when required, a legal plan. A business plan is, in effect, a bound summary of those disciplinary plans.

For example, a business plan for a non-profit might discuss the fit between the business plan and the organization’s mission. Banks are quite concerned about defaults, so a business plan for a bank loan will build a convincing case for the organization’s ability to repay the loan. Venture capitalists are primarily concerned about initial investment, feasibility, and exit valuation. A business plan for a project requiring equity financing will need to explain why current resources, upcoming growth opportunities, and sustainable competitive advantage will lead to a high exit valuation.

Preparing a business plan draws on a wide range of knowledge from many different business disciplines: finance, human resource management, intellectual property management, supply chain management, operations management, and marketing, among others.[5] It can be helpful to view the business plan as a collection of sub-plans, one for each of the main business disciplines.[6]

"... A good business plan can help to make a good business credible, understandable, and attractive to someone who is unfamiliar with the business. Writing a good business plan can’t guarantee success, but it can go a long way toward reducing the odds of failure." 

The format of a business plan depends on its presentation context. It is not uncommon for businesses, especially start-ups, to have three or four formats for the same business plan:

an "elevator pitch" - a three minute summary of the business plan's executive summary. This is often used as a teaser to awaken the interest of potential funders, customers, or strategic partners.

an oral presentation - a hopefully entertaining slide show and oral narrative that is meant to trigger discussion and interest potential investors in reading the written presentation. The content of the presentation is usually limited to the executive summary and a few key graphs showing financial trends and key decision making benchmarks. If a new product is being proposed and time permits, a demonstration of the product may also be included.

a written presentation for external stakeholders - a detailed, well written, and pleasingly formatted plan targeted at external stakeholders.

an internal operational plan - a detailed plan describing planning details that are needed by management but may not be of interest to external stakeholders. Such plans have a somewhat higher degree of candor and informality than the version targeted at external stakeholders.

Typical structure for a business plan for a start-up venture[7]

- cover page and table of contents
- executive summary
- business description
- business environment analysis
- industry background
- competitor analysis
- market analysis
- marketing plan
- operations plan
- management summary
- financial plan
- attachments and milestones

A business plan is often prepared when starting a new organization, business venture, or product (service), or when expanding, acquiring or improving any of the above.

There are numerous benefits of doing a business plan, including:

- To identify any problems in your plans before you implement those plans.
- To get the commitment and participation of those who will implement the plans, which leads to better results.
- To establish a roadmap to compare results as the venture proceeds from paper to reality.
- To achieve greater profitability in your organization, products and services, all with less work.
- To obtain financing from investors and funders.
- To minimize your risk of failure.
- To update your plans and operations in a changing world.
- To clarify and synchronize your goals and strategies.

For these reasons, the planning process often is as useful as the business plan document itself.

Types of Content of a Business Plan

Business plans appear in many different formats, depending on the audience for the plan and complexity of the business. However, most business plans address the following five topic areas in one form or another.

1. Business summary -- Describes the organization, business venture or product (service), summarizing its purpose, management, operations, marketing and finances.

2. Market opportunity -- Concisely describes what unmet need it will (or does) fill, presents evidence that this need is genuine, and that the beneficiaries (or a third party) will pay for the costs to meet this need. Describes credible market research on target customers (including perceived benefits and willingness to pay), competitors and pricing.

3. People -- Arguably the most important part of the plan, it describes who will be responsible for developing, marketing and operating this venture, and why their backgrounds and skills make them the right people to make this successful. Ideally, each person in the management team (and key program and technical staff) is identified by name.

4. Implementation -- This is the how-to section of the plan, where the action steps are clearly described, usually in four areas: start-up, marketing, operations and financial. Marketing builds on the market research presented, e.g., in the Market Opportunity section of the plan, including your competitive niche (how you will be better than your competitors in ways that matter to your target customers). The financial plan includes, e.g., costs to launch, operate, market and finance the business, along with conservative estimates of revenue, typically for three years; a break-even analysis is often included in this section (a simple break-even calculation is sketched after this list).

5. Contingencies -- This section outlines the most likely things that could go wrong with implementing this plan, and how management is prepared to respond to those problems if they emerge.
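To make the break-even analysis mentioned in the Implementation topic concrete, here is a minimal illustrative sketch in Python. The single-product cost model and the figures used are assumptions chosen purely for illustration; they are not part of any particular plan template.

    # Break-even point for a single product: the number of units at which
    # total revenue covers fixed plus variable costs.
    def break_even_units(fixed_costs: float, price_per_unit: float,
                         variable_cost_per_unit: float) -> float:
        contribution_margin = price_per_unit - variable_cost_per_unit
        if contribution_margin <= 0:
            raise ValueError("price must exceed variable cost per unit")
        return fixed_costs / contribution_margin

    # Hypothetical example: 50,000 in fixed costs, price 25, variable cost 15 per unit.
    units = break_even_units(50_000, 25.0, 15.0)
    print(f"Break-even at {units:.0f} units")  # 5000 units

A conservative revenue estimate from the financial plan can then be compared against this figure to judge whether the venture clears break-even within the planning horizon.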

In many cases, an organization will already have in its possession some of the information needed for preparing a business plan. For example, in the case of nonprofits, grant proposals often contain some of this information.

Preparation for Planning a Business Venture (nonprofit or for-profit)

Before you start a major venture, there are several considerations about yourself that you should address. This manual guides you through those considerations. Then the manual guides you through the major considerations you'll have to address when you complete your business plan. The manual includes numerous links to other free resources as the reader goes through each section of the manual.

Key planning questions

- Where are we now?
- How did we get here?
- Where would we like to be?
- How do we get there?
- Are we on course to achieve our targets?

The planning process - an overview of the key steps

1. Analyse the external environment
2. Analyse the internal environment
3. Define the business and mission
4. Set corporate objectives
5. Formulate strategies
6. Make tactical plans
7. Build in procedures for monitoring and controlling

The planning cycle
The key elements & models


Where are we now?

The purpose of situational analysis is to determine which opportunities to pursue:

- PEST/PESTLE - identify and analyse trends in the environment
- Competitor analysis - understand and, if possible, predict the behaviour of competitors
- Audit of internal resources
- SWOT analysis: build on strengths, resolve weaknesses, exploit opportunities, confront threats

Situational analysis

Analysing the present situation is the prelude to devising objectives and strategies for the future

We need to understand where we are and where we have come from before planning the future

But we must always be careful to avoid paralysis by analysis. This describes a situation in which no decisions are made because of the disproportionate amount of effort that goes into the analysis phase.

Explanations for "paralysis by analysis":

1. High complexity of the situation at hand
2. Excessive amount of analytical data
3. Poor prioritisation
4. Excessive focus on planning rather than action
5. Inability to delegate
6. A rigid, formal and rational organisational culture
7. Aversion to risk

Where are we going?

Vision

- Non-specific directional and motivational guidance for the entire operation
- What will the organisation be like in five years' time?

Mission statement

- An organisation’s reason for being
- It is concerned with the scope of the business and what distinguishes it from similar businesses

Objectives - SMART objectives
Goals - specific statements of anticipated results

Strategy versus tactics

“Strategy without tactics is the slowest route to victory. Tactics without strategy is the noise before the defeat.” (Sun Tzu, The Art of War)

- Strategy: the broad approach to the achievement of objectives over the long term
- Tactics: detailed filling-in of measures designed to contribute to the strategy; designed to achieve short-term goals

Strategy

- The annual business plan specifies actions needed to implement the strategy
- Strategy is the broad approach to the achievement of objectives
- It starts with the identification and evaluation of strategic objectives
- And then summarises how to fulfil the objectives
- Strategic options can be analysed by using Ansoff’s matrix and Porter’s generic strategies


Tactics

- Tactics are designed for the short term
- Tactics are the details within the overall strategy
- The details include what, where and how activities will take place to accomplish a goal

Examples:

- Promotional mix
- Pricing policy
- Production plan

These details will be contained in programmes and budgets and will eventually be translated into action plans.

Action

- This involves the implementation of the plan
- Remember the greatest strategy on earth is useless unless properly implemented

This stage involves:

- Action plans
- The development of costed action programmes
- Detailed budgets
- Project management
- Putting the strategy and tactics into action

Action plans

Action plans convert strategy into a series of steps which answer the following questions:

- What is to be done?
- How is it to be done?
- By whom is it to be done?
- Who is responsible for making sure it is done?
- By when is it to be done?

Budgets

Budgets play a key role in planning. Budgets are presented as spreadsheets showing:

- Expected sales or cash inflow
- Expected and planned expenditure

Discretionary spending, such as that on promotion, can be planned in terms of type of spending and timing with the aid of a budget spreadsheet. This allows specialist managers to enjoy some autonomy within the budget but at the same time places a cap on spending and facilitates the monitoring of spending.

Monitor and control

The results of a business should be monitored to determine whether or not the strategic initiatives are being implemented on schedule and within the budgeted resources allocated to them. If senior managers are to retain control (whilst delegating detailed implementation) there must be an efficient data collection system feeding back information on progress.

Control mechanisms

Gantt charts: progress on a project can be monitored against the schedule in the chart

Budgets: monitor actual performance against budget to analyse variance (a minimal variance calculation is sketched after this list)

Page 49: MODELS 5

Management information system (MIS): rather than gathering information on an ad hoc basis, organisations use computer software to gather a wide variety of information as a by-product of activities. Computer systems capture and process the data to provide managers with continuously updated results.
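As a minimal illustration of the budget-variance monitoring described above, the short Python sketch below compares actual spending against budget and flags only substantial deviations, in the spirit of management by exception. The category names, figures and the 10% threshold are assumptions made purely for illustration.

    # Flag budget lines whose variance exceeds a tolerance (management by exception).
    budget = {"promotion": 20_000, "production": 50_000, "salaries": 80_000}   # planned spend
    actual = {"promotion": 26_500, "production": 49_000, "salaries": 81_000}   # actual spend

    TOLERANCE = 0.10  # intervene only when deviation exceeds 10% of the budgeted figure

    for line, planned in budget.items():
        variance = actual[line] - planned
        if abs(variance) > TOLERANCE * planned:
            print(f"{line}: over/under budget by {variance:+,} - investigate")
        else:
            print(f"{line}: within tolerance (variance {variance:+,})")

A spreadsheet or MIS report performs the same comparison; the point is that only the exceptional lines are escalated to management.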

Management by exception

A review of current data allows managers to compare actual performance against standards or plans. These comparisons will reveal deviations from the plan and, if substantial, will lead to further investigation. This approach is an example of management by exception: leave matters to subordinates down the line and intervene only when there is evidence of deviation from the plan.

Advice to managers

- Monitor for success, not control for its own sake
- Only intervene where deviation is substantial
- Feed back results to allow subordinates to correct minor deviations
- Keep a focus on strategic goals
- If you micro-manage you will not be able to see the wood for the trees
- Monitor selectively
- Focus on variables that are of great significance and those that provide early warning of major problems
- And always avoid paralysis by analysis

Evaluation and modification

- The evaluation of performance should lead to an ongoing review process
- Where necessary, modify plans and take corrective action to put the organisation back on course to achieve its objectives
- Planning is not a one-off event but a continuing process: the implementation has to be fine-tuned during the period of the plan
- Results from the plan will be fed into next year's plan

A final thought: "Planning without action is futile. Action without planning is fatal."


CAPABILITY MATURITY MODEL


The Capability Maturity Model (CMM) is a service mark owned by Carnegie Mellon University (CMU) and refers to a development model elicited from actual data. The data was collected from organizations that contracted with the U.S. Department of Defense, which funded the research, and it became the foundation from which CMU created the Software Engineering Institute (SEI). Like any model, it is an abstraction of an existing system. When it is applied to an existing organization's software development processes, it allows an effective approach toward improving them. Eventually it became clear that the model could be applied to other processes. This gave rise to a more general concept that is applied to business processes and to developing people.

Overview

The Capability Maturity Model (CMM) was originally developed as a tool for objectively assessing the ability of government contractors' processes to perform a contracted software project. The CMM is based on the process maturity framework first described in the 1989 book Managing the Software Process by Watts Humphrey. It was later published in a report in 1993 (Technical Report CMU/SEI-93-TR-024 ESC-TR-93-177, February 1993, Capability Maturity Model SM for Software, Version 1.1) and as a book by the same authors in 1995.

Though the CMM comes from the field of software development, it is used as a general model to aid in improving organizational business processes in diverse areas; for example in software engineering, system engineering, project management, software maintenance, risk management, system acquisition, information technology (IT), services, business processes generally, and human capital management. The CMM has been used extensively worldwide in government offices, commerce, industry and software development organizations.

History

Prior need for software processes

In the 1970s, the use of computers grew more widespread, flexible and less costly. Organizations began to adopt computerized information systems, and the demand for software development grew significantly. The processes for software development were in their infancy, with few standard or "best practice" approaches defined. As a result, the growth was accompanied by growing pains: project failure was common, the field of computer science was still in its infancy, and the ambitions for project scale and complexity exceeded the market capability to deliver. Individuals such as Edward Yourdon, Larry Constantine, Gerald Weinberg, Tom DeMarco, and David Parnas began to publish articles and books with research results in an attempt to professionalize the software development process. In the 1980s, several US military projects involving software subcontractors ran over-budget and were completed far later than planned, if at all. In an effort to determine why this was occurring, the United States Air Force funded a study at the SEI.

Precursor

The Quality Management Maturity Grid was developed by Philip B. Crosby in his book "Quality is Free".[1]

The first application of a staged maturity model to IT was not by CMM/SEI, but rather by Richard L. Nolan, who in 1973 published the stages of growth model for IT organizations.[2]

Watts Humphrey began developing his process maturity concepts during the later stages of his 27-year career at IBM.

Development at SEI

Active development of the model by the US Department of Defense Software Engineering Institute (SEI) began in 1986 when Humphrey joined the Software Engineering Institute, located at Carnegie Mellon University in Pittsburgh, Pennsylvania, after retiring from IBM. At the request of the U.S. Air Force he began formalizing his Process Maturity Framework to aid the U.S. Department of Defense in evaluating the capability of software contractors as part of awarding contracts. The result of the Air Force study was a model for the military to use as an objective evaluation of software subcontractors' process capability maturity. Humphrey based this framework on the earlier Quality Management Maturity Grid developed by Philip B. Crosby in his book "Quality is Free".[1] However, Humphrey's approach differed because of his unique insight that organizations mature their processes in stages based on solving process problems in a specific order. Humphrey based his approach on the staged evolution of a system of software development practices within an organization, rather than measuring the maturity of each separate development process independently. The CMM has thus been used by different organizations as a general and powerful tool for understanding and then improving general business process performance. Watts Humphrey's Capability Maturity Model (CMM) was published in 1988[3] and as a book in 1989, in Managing the Software Process.[4]


Organizations were originally assessed using a process maturity questionnaire and a Software Capability Evaluation method devised by Humphrey and his colleagues at the Software Engineering Institute (SEI). The full representation of the Capability Maturity Model as a set of defined process areas and practices at each of the five maturity levels was initiated in 1991, with Version 1.1 being completed in January 1993.[5] The CMM was published as a book[6] in 1995 by its primary authors, Mark C. Paulk, Charles V. Weber, Bill Curtis, and Mary Beth Chrissis.

Superseded by CMMI

The CMM model proved useful to many organizations, but its application in software development has sometimes been problematic. Applying multiple models that are not integrated within and across an organization could be costly in training, appraisals, and improvement activities. The Capability Maturity Model Integration (CMMI) project was formed to sort out the problem of using multiple CMMs. For software development processes, the CMM has been superseded by Capability Maturity Model Integration (CMMI), though the CMM continues to be a general theoretical process capability model used in the public domain.

Adapted to other processes

The CMM was originally intended as a tool to evaluate the ability of government contractors to perform a contracted software project. Though it comes from the area of software development, it can be, has been, and continues to be widely applied as a general model of the maturity of processes (e.g., IT service management processes) in IS/IT (and other) organizations.

Model topics

Maturity model

A maturity model can be viewed as a set of structured levels that describe how well the behaviours, practices and processes of an organisation can reliably and sustainably produce required outcomes. A maturity model may provide, for example:

- a place to start
- the benefit of a community’s prior experiences
- a common language and a shared vision
- a framework for prioritizing actions
- a way to define what improvement means for your organization

A maturity model can be used as a benchmark for comparison and as an aid to understanding - for example, for comparative assessment of different organizations where there is something in common that can be used as a basis for comparison. In the case of the CMM, for example, the basis for comparison would be the organizations' software development processes.

Structure

The Capability Maturity Model involves the following aspects:

Maturity Levels: a 5-level process maturity continuum - where the uppermost (5th) level is a notional ideal state where processes would be systematically managed by a combination of process optimization and continuous process improvement.

Key Process Areas: a Key Process Area (KPA) identifies a cluster of related activities that, when performed together, achieve a set of goals considered important.


Goals: the goals of a key process area summarize the states that must exist for that key process area to have been implemented in an effective and lasting way. The extent to which the goals have been accomplished is an indicator of how much capability the organization has established at that maturity level. The goals signify the scope, boundaries, and intent of each key process area.

Common Features: common features include practices that implement and institutionalize a key process area. There are five types of common features: Commitment to Perform, Ability to Perform, Activities Performed, Measurement and Analysis, and Verifying Implementation.

Key Practices: The key practices describe the elements of infrastructure and practice that contribute most effectively to the implementation and institutionalization of the KPAs.

Levels

There are five levels defined along the continuum of the CMM[7] and, according to the SEI: "Predictability, effectiveness, and control of an organization's software processes are believed to improve as the organization moves up these five levels. While not rigorous, the empirical evidence to date supports this belief."

1. Initial (chaotic, ad hoc, individual heroics) - the starting point for use of a new process.
2. Managed - the process is managed in accordance with agreed metrics.
3. Defined - the process is defined/confirmed as a standard business process, and decomposed to levels 0, 1 and 2 (the latter being Work Instructions).
4. Quantitatively managed
5. Optimizing - process management includes deliberate process optimization/improvement.

Within each of these maturity levels are Key Process Areas (KPAs) which characterise that level, and for each KPA there are five definitions identified:

1. Goals
2. Commitment
3. Ability
4. Measurement
5. Verification
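To make the structure of the continuum concrete, the following is a minimal Python sketch of the five levels and the five KPA definitions listed above, together with a helper reflecting the rule, stated below, that levels cannot be skipped. It is only an illustration of the structure, not an official SEI artifact; the level names follow the list above.

    # Illustrative representation of the CMM maturity continuum (not an SEI artifact).
    MATURITY_LEVELS = {
        1: "Initial",
        2: "Managed",            # described as "Repeatable" in the level summaries below
        3: "Defined",
        4: "Quantitatively managed",
        5: "Optimizing",
    }

    KPA_DEFINITIONS = ["Goals", "Commitment", "Ability", "Measurement", "Verification"]

    def next_level(current: int) -> int:
        # Levels must be climbed one at a time; skipping levels is not allowed.
        if current not in MATURITY_LEVELS:
            raise ValueError(f"unknown maturity level: {current}")
        return min(current + 1, 5)

    for level, name in MATURITY_LEVELS.items():
        print(f"Level {level} ({name}) -> next target: level {next_level(level)}")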

The KPAs are not necessarily unique to CMM, representing, as they do, the stages that organizations must go through on the way to becoming mature. The CMM provides a theoretical continuum along which process maturity can be developed incrementally from one level to the next. Skipping levels is not allowed/feasible.

N.B.: The CMM was originally intended as a tool to evaluate the ability of government contractors to perform a contracted software project. It has been used for and may be suited to that purpose, but critics pointed out that process maturity according to the CMM was not necessarily mandatory for successful software development. There were/are real-life examples where the CMM was arguably irrelevant to successful software development, and these examples include many shrinkwrap companies (also called commercial-off-the-shelf or "COTS" firms or software package firms). Such firms would have included, for example, Claris, Apple, Symantec, Microsoft, and Lotus. Though these companies may have successfully developed their software, they would not necessarily have considered or defined or managed their processes as the CMM describes for level 3 or above, and so would have fitted level 1 or 2 of the model. This did not, on the face of it, frustrate the successful development of their software.

Level 1 - Initial (Chaotic)

It is characteristic of processes at this level that they are (typically) undocumented and in a state of dynamic change, tending to be driven in an ad hoc, uncontrolled and reactive manner by users or events. This provides a chaotic or unstable environment for the processes.

Level 2 - Repeatable

It is characteristic of processes at this level that some processes are repeatable, possibly with consistent results. Process discipline is unlikely to be rigorous, but where it exists it may help to ensure that existing processes are maintained during times of stress.

Level 3 - Defined

It is characteristic of processes at this level that there are sets of defined and documented standard processes established and subject to some degree of improvement over time. These standard processes are in place (i.e., they are the AS-IS processes) and used to establish consistency of process performance across the organization.

Level 4 - Managed

It is characteristic of processes at this level that, using process metrics, management can effectively control the AS-IS process (e.g., for software development). In particular, management can identify ways to adjust and adapt the process to particular projects without measurable losses of quality or deviations from specifications. Process capability is established from this level.

Level 5 - Optimizing

It is a characteristic of processes at this level that the focus is on continually improving process performance through both incremental and innovative technological changes/improvements. At maturity level 5, processes are concerned with addressing statistical common causes of process variation and changing the process (for example, to shift the mean of the process performance) to improve process performance. This would be done at the same time as maintaining the likelihood of achieving the established quantitative process-improvement objectives.

Software process framework

The documented software process framework is intended to guide those wishing to assess an organization's or project's consistency with the CMM. For each maturity level there are five checklist types:

Policy - Describes the policy contents and KPA goals recommended by the CMM.

Standard - Describes the recommended content of select work products described in the CMM.

Process - Describes the process information content recommended by the CMM. The process checklists are further refined into checklists for: roles, entry criteria, inputs, activities, outputs, exit criteria, reviews and audits, work products managed and controlled, measurements, documented procedures, training, and tools.

Procedure - Describes the recommended content of documented procedures described in the CMM.

Level overview - Provides an overview of an entire maturity level. The level overview checklists are further refined into checklists for: KPA purposes (Key Process Areas), KPA goals, policies, standards, process descriptions, procedures, training, tools, reviews and audits, work products managed and controlled, and measurements.

THE CHANGE ARENA


The Change Curve

The Change Curve model describes the four stages most people go through as they adjust to change. You can see this in figure 1, below.

When a change is first introduced, people's initial reaction may be shock or denial, as they react to the challenge to the status quo. This is stage 1 of the Change Curve.

Once the reality of the change starts to hit, people tend to react negatively and move to stage 2 of the Change Curve: they may fear the impact; feel angry; and actively resist or protest against the changes. Some will wrongly fear the negative consequences of change. Others will correctly identify real threats to their position. As a result, the organization experiences disruption which, if not carefully managed, can quickly spiral into chaos.


For as long as people resist the change and remain at stage 2 of the Change Curve, the change will be unsuccessful, at least for the people who react in this way. This is a stressful and unpleasant stage. For everyone, it is much healthier to move to stage 3 of the Change Curve, where pessimism and resistance give way to some optimism and acceptance.

Tip: It's easy just to think that people resist change out of sheer awkwardness and lack of vision. However, you need to recognize that for some, change may affect them negatively in a very real way that you may not have foreseen. For example, people who've developed expertise in (or have earned a position of respect from) the old way of doing things can see their positions severely undermined by change.

At stage 3 of the Change Curve, people stop focusing on what they have lost. They start to let go, and accept the changes. They begin testing and exploring what the changes mean, and so learn the reality of what's good and not so good, and how they must adapt. By stage 4, they not only accept the changes but also start to embrace them: they rebuild their ways of working. Only when people get to this stage can the organization really start to reap the benefits of change.

Using the Change Curve

With knowledge of the Change Curve, you can plan how you'll minimize the negative impact of the change and help people adapt more quickly to it. Your aim is to make the curve shallower and narrower, as you can see in figure 2.


As someone introducing change, you can use your knowledge of the Change Curve to give individuals the information and help they need, depending on where they are on the curve. This will help you accelerate change, and increase its likelihood of success. Actions at each stage are:

Stage 1:

At this stage, people may be in shock or in denial. Even if the change has been well planned and you understand what is happening, this is when the reality of the change hits, and people need to take time to adjust. Here, people need information, need to understand what is happening, and need to know how to get help. This is a critical stage for communication. Make sure you communicate often, but also ensure that you don't overwhelm people: they'll only be able to take in a limited amount of information at a time. But make sure that people know where to go for more information if they need it, and ensure that you take the time to answer any questions that come up.

Stage 2:

As people start to react to the change, they may start to feel concern, anger, resentment or fear. They may resist the change actively or passively. They may feel the need to express their feelings and concerns, and vent their anger. For the organization, this stage is the "danger zone". If this stage is badly managed, the organization may descend into crisis or chaos. So this stage needs careful planning and preparation. As someone responsible for change, you should prepare for this stage by carefully considering the impacts and objections that people may have. Make sure that you address these early with clear communication and support, and by taking action to minimize and mitigate the problems that people will experience. As the reaction to change is very personal and can be emotional, it is often impossible to preempt everything, so make sure that you listen and watch carefully during this stage (or have mechanisms to help you do this) so you can respond to the unexpected.

Stage 3:

This is the turning point for individuals and for the organization. Once you turn the corner to stage 3, the organization starts to come out of the danger zone, and is on the way to making a success of the changes.


Individually, as people's acceptance grows, they'll need to test and explore what the change means. They will do this more easily if they are helped and supported to do so, even if this is a simple matter of allowing enough time for them to do so. As the person managing the changes, you can lay good foundations for this stage by making sure that people are well trained, and are given early opportunities to experience what the changes will bring. Be aware that this stage is vital for learning and acceptance, and that it takes time: don't expect people to be 100% productive during this time, and build in contingency time so that people can learn and explore without too much pressure.

Stage 4:

This stage is the one you have been waiting for! This is where the changes start to become second nature, and people embrace the improvements to the way they work. As someone managing the change, you'll finally start to see the benefits you worked so hard for. Your team or organization starts to become productive and efficient, and the positive effects of change become apparent. While you are busy counting the benefits, don't forget to celebrate success! The journey may have been rocky, and it will certainly have been at least a little uncomfortable for some people involved: everyone deserves to share the success. What's more, by celebrating the achievement, you establish a track record of success, which will make things easier the next time change is needed.

The change curve is a behavioral model of group and individual reactions to the process of change. The communicator’s task in any change process is to try to “concertina” the curve, by helping people to adjust and enthusiastically support change as quickly as possible. This requires a communication strategy for each angle of the curve.

- Satisfaction: Listen to employees; roll out your strategy.
- Denial: Maximise face-to-face communication; address the “me” issues.
- Resistance: Involve yourself in informal channels; use multiple communication forms.
- Exploration: Communicate timelines for the project; encourage involvement.
- Hope: Repeat and reinforce your objectives and strategy; build buy-in.
- Commitment: Reward behaviour change.

The change curve can also track commitment to change. Try putting up posters of the change curve in each department and ask employees to fix a spot on it to show their current mood. The feedback, which is anonymous, can be used to assess the overall feeling of the company.

CHANGE EQUATION

The Formula for Change was created by Richard Beckhard and David Gleicher, refined by Kathie Dannemiller and is sometimes called Gleicher's Formula. This formula provides a model to assess the relative strengths affecting the likely success or otherwise of organisational change programs.

D x V x F > R


Three factors must be present for meaningful organizational change to take place. These factors are:

D = Dissatisfaction with how things are now;
V = Vision of what is possible;
F = First, concrete steps that can be taken towards the vision.

If the product of these three factors is greater than

R = Resistance,

then change is possible. Because D, V, and F are multiplied, if any one is absent or low, then the product will be low and therefore not capable of overcoming the resistance. To ensure a successful change it is necessary to use influence and strategic thinking in order to create vision and identify those crucial, early steps towards it. In addition, the organization must recognize and accept the dissatisfaction that exists by communicating industry trends, leadership ideas, best practice and competitive analysis to identify the necessity for change.
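Because the three factors are multiplied rather than added, a single weak factor drags the whole product below the resistance threshold. The short Python sketch below illustrates this; the 0-to-1 scoring scale and the example values are assumptions chosen purely for illustration and are not part of Gleicher's formula itself.

    # Gleicher/Beckhard change formula: change is possible when D x V x F > R.
    def change_is_possible(dissatisfaction: float, vision: float,
                           first_steps: float, resistance: float) -> bool:
        # All inputs are assumed to be rough scores on a 0-1 scale.
        return dissatisfaction * vision * first_steps > resistance

    # Strong dissatisfaction and vision, but almost no concrete first steps:
    print(change_is_possible(0.9, 0.8, 0.05, 0.3))  # False - the weak F factor sinks the product
    # The same situation with credible first steps in place:
    print(change_is_possible(0.9, 0.8, 0.7, 0.3))   # True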

History

The original formula, as created by Gleicher and authored by Beckhard and Harris, is:

C = (ABD) > X

where C is change, A is the status quo dissatisfaction, B is a desired clear state, D is practical steps to the desired state, and X is the cost of the change.

It was Kathleen Dannemiller who dusted off the formula and simplified it, making it more accessible for consultants and managers. Dannemiller and Jacobs first published the more common version of the formula in 1992 (and Jacobs, 1994). Paula Griffin stated it accurately (in Wheatley et al, 2003) when she wrote that Gleicher started it, Beckhard and Harris promoted it, but it really took off when Dannemiller changed the language to make it easier to remember and use.

The Change Equation shows how to tap into the power of the diversity already in your organization and turn it into a "pluralistic" workplace where change is not something to resist but something to embrace. Organizational change agents, business leaders, human resource managers, and anyone who wants to make his or her organization stronger and more competitive will find in this readable volume a wealth of practical solutions that will help any forward-looking organization thrive in the new economy.

Traditionally, "change projects" have often been driven by technology implementations or upgrades, with business processes and working practices being changed to fit in with the new system. In today's turbulent economy, however, change is just as likely to be driven by something else: a long-established competitor unexpectedly going bust, for example, or your bank calling in a loan, or a layer of middle management being made redundant. Whatever the situation, when change looms on the horizon, chances are that you'll hear things like:” I can't believe that restructuring the sales force is really going to increase sales.” Upgrading the system is such a disruption. I just don't see why we need to go through all that work.""Our current system isn't great, but what's so wonderful about the new one? How will that be any better?""I know that Corascon going under should be good news for us, but I can't work out what I should be doing about it."With comments like these flying around, how will you get everyone to agree with the changes you have in mind? After all, you can't do this without them!This is where Beckhard and Harris's Change Equation can help. In this article, we'll look at this equation, and see how you can use it to roll out successful change in the future.

Explaining the Change Equation

Richard Beckhard and Reuben Harris first published their change equation in 1977 in "Organizational Transitions: Managing Complex Change", and it's still useful today. It states that for change to happen successfully, the following statement must be true:

Dissatisfaction x Vision x First Steps > Resistance to change (the Beckhard change equation)

An early indicator of the chances of success

Historically, the Beckhard change equation can be seen as a major milestone in the field of Organisational Development in that it acknowledged the role and importance of employee involvement in change. It represented a significant shift in management thinking from the "command and control" of the industrial age to a people-centric approach. Richard Beckhard has long been considered one of the founders of organisation development and the creator of the core framework for large system change. He articulated a generic change framework, which comprises four main themes:

(1) Determining the need for change - We must be clear why things need to change. We need to articulate why it is unacceptable and undesirable to conduct business in the same way. If we are not dissatisfied with the present situation, then there is no motivation to change.

(2) Articulating a desired future - Ensuring that your employees fully understand and can picture their future as part of a changed organisation and can see their place in the new organization.

(3) Assessing the present and what needs to be changed in order to move to the desired future - Making sure that each employee understands what they need to know and do to prepare themselves for the change, and what steps they need to take in order for this change to be successful.

(4) Getting to the desired future by managing the transition - using external specialist help and appropriate processes - see the diagram below.

The change equation is expressed as Dissatisfaction x Vision x First Steps > Resistance to Change. Three factors must be present for meaningful organizational change to take place, namely:

- Dissatisfaction with the status quo
- Vision of what is possible
- First, concrete steps that can be taken towards the vision

If any of these factors are missing or weak, then you’re going to experience resistance. In my experience, nobody voluntarily initiates a change in their life unless there is an actual or perceived threat to survival and/or an aspirational desire linked to a personal vision. So the Beckhard change equation is, in my opinion, a change model that recognises the basic psychology of change at the personal level.


Abstract

This paper will focus on the topic of organisational change and its management from an information systems perspective. The paper will examine the issues raised during a review of the change management literature – looking at the major approaches to change management, namely, the planned, emergent and contingency approaches – as background to the issues raised in other papers in this theme of the book. As in the Management In The 90s (MIT90s) study, a very broad definition of the term IT is used to include: computers of all types, hardware, software, communications networks and the integration of computing and communications technologies. The paper will then examine change management within the context of Information Systems (IS) theory and practice. This will lead to a discussion of an emerging model by Orlikowski and Hofman which will be briefly reviewed to provide insight into the types of models which are likely to provide a focus for research in the area in the near future. The model also provides a strong and interesting framework against which to view some of the papers that follow in this theme of the book.

1. Introduction

As we approach the twenty-first century there can be little doubt that successful organisations of the future must be prepared to embrace the concept of change management. Change management has been an integral part of organisational theory and practice for a long time; however, many theorists and practitioners now believe that the rate of change that organisations are subjected to is set to increase significantly in the future. Indeed, some even go so far as to suggest that the future survival of all organisations will depend on their ability to successfully manage change (Burnes 1996; Peters 1989; Toffler 1983).

It could be argued that the study of organisational change management should be the preserve of the social scientist or the business manager. After all, much of the theory has evolved from social and business studies and not from the field of computer science. However, information systems do not exist in a vacuum, and it is widely accepted that technology, particularly Information Technology (IT), is one of the major enablers of organisational change (Markus and Benjamin 1997; Scott-Morton 1991). The successful development of any information system must address sociological issues, including the effects of the system itself on the organisation into which it is introduced. Paul (1994) maintains that information systems must be developed specifically for change as they must constantly undergo change to meet changing requirements. Clearly, organisational change is an important issue.

This paper will focus on the topic of organisational change management from an information systems perspective. The paper will examine the issues raised during a review of the change management literature as background to the issues raised in other papers in this theme of the book. As in the Management In The 90s (MIT90s) study (Scott-Morton 1991), a very broad definition of the term IT is used to include: computers of all types, hardware, software, communications networks and the integration of computing and communications technologies.

2. Overview of the Field


Many of the theories and models relating to the management of organisational change have evolved from the social sciences (Burnes 1996; Bate 1994; Dawson 1994). Information Systems (IS) research is of course a much newer discipline. However, the socio-technical nature of information systems is now recognised and many of the IS theories and models have been adopted and adapted from the social sciences (Yetton et al. 1994; Benjamin and Levinson 1993). This paper presents a discussion of the change management literature drawn from a social science perspective, which is then related to an IS perspective of IT-enabled change. We will begin by giving a broad overview of change management and examining the nature of change and its applicability to the IS field. We will then briefly examine the foundations of change management theory. Specifically, the three main theories that underpin the different approaches to change management are examined, which concentrate on individual, group and organisation-wide change respectively. The paper will then examine the major approaches to change management, namely, the planned, emergent and contingency approaches. The planned approach to change, based on the work of Lewin (1958), has dominated change management theory and practice since the early 1950s. The planned approach views the change process as moving from one fixed state to another. In contrast, the emergent approach, which appeared in the 1980s (Burnes 1996), views change as a process of continually adapting an organisation to align with its environment. The contingency approach is a hybrid approach which advocates that there is not ‘one best way’ to manage change. The paper will then examine change management within the context of Information Systems (IS) theory and practice. In particular, the paper will investigate the fundamental characteristics of IT-enabled change and will discuss how this differs from the management of change in pure social systems. Finally, the Improvisational Change Model proposed by Orlikowski and Hofman (1997) will be examined in detail. This model is based on the same principles as the emergent approach to change management and, similarly, Orlikowski and Hofman (1997) maintain that their model is more suitable than the traditional Lewinian models for modern, networked organisations using adaptive technologies.

3. Change Management

Although it has become a cliché, it is nevertheless true to say that the volatile environment in which modern organisations find themselves today means that the ability to manage change successfully has become a competitive necessity (Burnes 1996; Kanter 1989; Peters and Waterman 1982). The aim of this section is to provide a broad overview of the substance of change and of change management. Organisational change is usually required when changes occur to the environment in which an organisation operates. There is no accepted definition of what constitutes this environment; however, a popular and practical working definition is that the environmental variables which influence organisations are political, economic, sociological and technological (Jury 1997). Change has been classified in many different ways. Most theorists classify change according to the type or the rate of change required and this is often referred to as the substance of change (Dawson 1994). Bate (1994) proposes a broad definition for the amount of change which he argues may be either incremental or transformational. Bate maintains that incremental change occurs when an organisation makes a relatively minor change to its technology, processes or structure whereas transformational change occurs when radical change programmes are implemented. Bate also argues that modern organisations are subject to continual environmental change and consequently they must constantly change to realign themselves. Although there is a general recognition of the need to successfully manage change in modern organisations, questions regarding the substance of change and how the process can be managed in today’s context remain largely unanswered. There are numerous academic frameworks available in the management literature that seek to explain the issues related to organisational change and many of these frameworks remain firmly rooted in the work of Lewin (1958). Dawson (1994) points out that, almost without exception, contemporary management texts uncritically adopt Lewin’s 3-stage model of planned change and that this approach is now taught on most modern management courses. This planned (Lewinian) approach to organisational change is examined in detail later in the paper. Information systems are inherently socio-technical systems and, therefore, many of the theories and frameworks espoused by the social sciences for the management of change have been adopted by the IS community. Consequently, even the most modern models for managing IT-enabled change are also based on the Lewinian model (Benjamin and Levinson 1993). Figure 1 depicts the most popular and prominent models for understanding organisational change, which are examined in detail in later sections of this paper. These models will subsequently be compared with the main change management models adopted by the IS community.

Figure 1. Principal Change Management Models: change management approaches comprising the Planned Approach, the Emergent Approach and the Contingency Approach, together with the Contextualist and Processual perspectives.

4. Theoretical Foundations

Change management theories and practice originate from different, diverse, social science disciplines and traditions. Consequently, change management does not have clear and distinct boundaries and the task of tracing its origins and concepts is extremely difficult. This section will briefly examine the foundations of change management theory as these foundations underpin later discussions concerning the most prominent models for understanding organisational change. Whatever form change takes and whatever the required outcomes of any change initiative, managers responsible for implementing change must address the management issues at either an individual, group or organisational level. It may also be argued that a successful change programme must address the management issues at all levels. Three of the main theories upon which change management theory stands are: the individual, group dynamics and the open systems perspectives, which are summarised in the remainder of this section.

The Individual Perspective

The individual perspective school is divided into two factions known as the Behaviourists and the Gestalt-Field psychologists. Behaviourists believe that behaviour is caused by an individual’s interaction with the environment. The basic principle of this approach, which originates from Pavlov’s (1927) work, is that human actions are conditioned by their expected consequences. Put simply, this means that rewarded behaviour is repeated while ignored behaviour tends not to be repeated. Gestalt-Field protagonists, however, believe that behaviour is not just caused by external stimuli, but that it arises from how an individual uses reason to interpret these stimuli. Behaviourists attempt to effect organisational change by modifying the external stimuli acting upon the individual whereas Gestalt-Field theorists seek to change individual self-awareness to promote behavioural and thus organisational change.

The Group Dynamics Perspective

Group dynamics theorists believe that the focus of change should be at the group or team level and that it is ineffectual to concentrate on individuals to bring about change as they will be pressured by the group to conform. The group dynamics school has been influential in developing the theory and practice of change management and of all the schools they have the longest history (Schein 1969). Lewin (1958) maintains that the emphasis on effecting organisational change should be through targeting group behaviour rather than individual behaviour since people in organisations work in groups and, therefore, individual behaviour must be seen, modified or changed to align with the prevailing values, attitudes and norms (culture) of the group. The group dynamics perspective manifests itself as the modern management trend for organisations to view themselves as teams rather than merely as a collection of individuals.

The Open Systems Perspective

Proponents of the open systems perspective believe that the focus of change should be neither on the individual nor on the group but that it should be on the entire organisation (Burnes 1996). Organisations are viewed as a collection of interconnected sub-systems and the open systems approach is based on analysing these sub-systems to determine how to improve the overall functioning of the organisation. The sub-systems are regarded as open because they interact not only internally with each other but also with the external environment. Therefore, internal changes to one sub-system affect other sub-systems which in turn impact on the external environment (Buckley 1968). The open systems perspective focuses on achieving overall synergy rather than on optimising any one individual sub-system (Mullins 1989). Burke (1980) maintains that this holistic approach to understanding organisations is reflected in a different approach to change management which is driven by three major factors: interdependent sub-systems, training and management style. An organisation's sub-systems are regarded as interdependent and Burke argues that change cannot occur in one sub-system in isolation without considering the implications for the other sub-systems. He also argues that training cannot achieve organisational change alone as it concentrates on the individual and not the organisational level. Burke also maintains that modern organisations must adopt a consultative management approach rather than the more prevalent controlling style epitomized by Taylor’s (1911) famous work.

The Planned Approach


Much of the literature relating to the planned approach to organisational change is drawn from Organisational Development (OD) practice and numerous OD protagonists have developed models and techniques as an aid to understanding the process of change (Dawson 1994). The origins of most of the developments in this field can be traced to the work of Lewin (1958) who developed the highly influential Action Research and Three-Phase Models of planned change which are summarised in the remainder of this section.

The action research model

Lewin (1958) first developed the Action Research (AR) model as a planned and collective approach to solving social and organisational problems. The theoretical foundations of AR lie in Gestalt-Field and Group Dynamics theory. Burnes (1996) maintains that this model was based on the premise that an effective approach to solving organisational problems must involve a rational, systematic analysis of the issues in question. AR overcomes "paralysis through analysis" (Peters and Waterman 1982: 221) as it emphasises that successful action is based on identifying alternative solutions, evaluating the alternatives, choosing the optimum solution and, finally, achieving change by taking collective action and implementing the solution. The AR approach advocates the use of a change agent and focuses on the organisation, often represented by senior management. The AR approach also focuses on the individuals affected by the proposed change. Data related to the proposed change is collected by all the groups involved and is iteratively analysed to solve any problems. Although the AR approach emphasises group collaboration, Burnes (1996) argues that cooperation alone is not always enough and that there must also be a 'felt-need' by all the participants.

The three-phase model

Lewin's ubiquitous Three-Phase model (1958) is a highly influential model that underpins many of the change management models and techniques used today (Burnes 1996; Dawson 1994). The main thrust of this model is that an understanding of the critical steps in the change process will increase the probability of successfully managing change. Lewin (1958) also argues that any improvement in group or individual performance could be prone to regression unless active measures are taken to institutionalise the improved performance level. Any subsequent behavioural or performance change must involve the three phases of unfreezing the present level, moving to a new level and re-freezing at the new level. Lewin (1958) argues that there are two opposing sets of forces within any social system: the driving forces that promote change and the resisting forces that maintain the status quo. Therefore, to unfreeze the system the strength of these forces must be adjusted accordingly. In practice the emphasis of OD practitioners has been to provide data to unfreeze the system by reducing the resisting forces (Dawson 1994). Once these negative forces are reduced the organisation is moved towards the desired state through the implementation of the new system. Finally, re-freezing occurs through a program of positive reinforcement to internalise new attitudes and behaviour. Burnes (1996) argues that this model merely represents a logical extension of the AR model, since unfreezing and moving respectively equate to the research and action phases of the AR model.
Lewin's Three-Phase model of planned change has since been extended by numerous theorists to enhance its practical application, including Lippitt et al.'s (1958) seven-phase model and Cummings and Huse's (1989) eight-phase model. All these models are based on the planned approach to change management and, according to Cummings and Huse (1989), they all share one fundamental concept: "the concept of planned change implies that an organisation exists in different states at different times and that planned movement can occur from one state to another". The implications of this concept are that an understanding of planned organisational change cannot be gained by simply understanding the processes which bring about change; it is also necessary to understand the states that an organisation passes through before attaining the desired future state (Burnes 1996).
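
Lewin's unfreezing step is often operationalised as a force-field analysis: listing the driving and resisting forces, estimating their relative strengths, and working on the resisting forces until the balance shifts. The short Python sketch below is purely illustrative; the force names and weights are invented for the example and are not taken from Lewin or the OD literature, but it shows the simple arithmetic behind 'reducing the resisting forces' that Dawson (1994) describes.

# Illustrative force-field arithmetic in the spirit of Lewin's unfreezing step.
# The forces and their weights below are invented for the example.
driving_forces = {"competitive pressure": 4, "new leadership vision": 3}
resisting_forces = {"ingrained group norms": 5, "fear of job losses": 3}

def net_force(driving, resisting):
    """Balance of driving minus resisting force strengths."""
    return sum(driving.values()) - sum(resisting.values())

def unfreeze_by_reducing_resistance(driving, resisting, reductions):
    """Apply planned reductions to the resisting forces and report whether
    the system is now free to move towards the desired state."""
    weakened = {name: max(0, strength - reductions.get(name, 0))
                for name, strength in resisting.items()}
    return net_force(driving, weakened) > 0, weakened

ready, remaining = unfreeze_by_reducing_resistance(
    driving_forces, resisting_forces,
    {"ingrained group norms": 2, "fear of job losses": 2})
print("Ready to move to the new level:", ready)
print("Remaining resistance:", remaining)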

The Emergent Approach

Within the social sciences, an approach described by Burnes (1996) as the emergent approach is a popular contemporary alternative to the planned approach to the management of change. The emergent approach was popularised in the 1980s and includes what other theorists have described as processual or contextualist perspectives (Dawson 1994). However, these perspectives share the common rationale that change cannot and should not be 'frozen', nor should it be viewed as a linear sequence of events within a given time period, as it is with a planned approach. In contrast, with an emergent approach, change is viewed as a continuous process. The modern business environment is widely acknowledged to be dynamic and uncertain and, consequently, theorists such as Wilson (1992) and Dawson (1994) have challenged the appropriateness of a planned approach to change management. They advocate that the unpredictable nature of change is best viewed as a process which is affected by the interaction of certain variables (depending on the particular theorist's perspective) and the organisation. Dawson (1994) proposed an emergent approach based on a processual perspective which he argues is not prescriptive but analytical, and is thus better able to achieve a broad understanding of change management within a complex environment. Put simply, advocates of the processual perspective maintain that there cannot be a prescription for managing change due to the unique temporal and contextual factors affecting individual organisations. Dawson succinctly summarises this perspective, saying that "change needs to be managed as an ongoing and dynamic process and not a single reaction to adverse contingent circumstance" (Dawson 1994: 182). For advocates of the emergent approach it is the uncertainty of the external environment which makes the planned approach inappropriate. They argue that rapid and constant changes in the external environment require appropriate responses from organisations, which in turn force them to develop an understanding of their strategy, structure, systems, people, style and culture and how these can affect the change process (Dawson 1994; Pettigrew and Whipp 1993; Wilson 1992). This has in turn led to a requirement for a 'bottom-up' approach to planning and implementing change within an organisation. The rapid rate and amount of environmental change has prevented senior managers from effectively monitoring the business environment to decide upon appropriate organisational responses. Pettigrew and Whipp (1993) maintain that emergent change involves linking action by people at all levels of a business.
Therefore, with an emergent approach to change, the responsibility for organisational change is devolved and managers must take an enabling rather than a controlling approach to managing. Although the proponents of emergent change may have different perspectives, there are, nevertheless, some common themes that relate them all. Change is a continuous process aimed at aligning an organisation with its environment, and it is best achieved through many small-scale incremental changes which, over time, can amount to a major organisational transformation. Furthermore, this approach requires the consent of those affected by change, as it is only through their behaviour that organisational structures, technologies and processes move from abstract concepts to concrete realities (Burnes 1996).

The Contingency Approach

Burns and Stalker (1961) established a contingent relationship between an organisation and its environment and the need to adapt to that environment. Perhaps more importantly, they also showed that there was more than 'one best way' to do this. In contrast to both the planned and the emergent approaches to change management, the basic tenet of the contingency approach to change management is that there is no 'one best way' to change. Although British theorists acknowledge that contingency theory has contributed significantly to organisational design theory, they do not acknowledge that it has had the same impact on change management theory (Burnes 1996; Bate 1994). However, within North America and Australia a rational model of change based on a contingency perspective has prevailed; therefore this section will briefly discuss this approach (Dawson 1994). A contingency approach has been taken by Dunphy and Stace (1993), who proposed a model of organisational change strategies and developed methods to place an organisation within that model. Dunphy and Stace (1993) maintain that their model reconciles the opposing views of the planned and emergent theoretical protagonists. It can be argued that the planned and emergent approaches to change management are equally valid but that they apply to different organisational circumstances. For example, an organisation facing constant and significant environmental changes may find an emergent approach to change management more appropriate than a planned approach. In short, a model of change could embrace a number of approaches, with the most suitable approach being determined by the organisation's individual environment. The resultant continuum can be seen in Figure 2:

Environment: Stable to Turbulent. Approach to change: Planned to Emergent.

Figure 2. The Change Management Continuum (from Burnes 1996: 197)

Contingency theory is a rejection of the 'one best way' approach taken by the majority of change management protagonists. This approach adopts the perspective that an organisation is 'contingent' on the situational variables it faces and, therefore, organisations must adopt the most appropriate change management approach.


IT-Enabled Organisational Change

Previous sections of this paper have dealt with the different approaches to managing organisational change taken from a social science perspective. Regardless of which model is adopted, the requirement for an organisation to change is generally caused by changes in its environmental variables, which many academics and practitioners agree are political, economic, sociological and technological (Jury 1997; Scott-Morton 1991). This section will focus on one of these environmental variables, namely technology, in the specific form of IT, and will examine the major issues that are particular to IT-enabled change. Woodward's (1965) study demonstrated the need to take into account technological variables when designing organisations, and this gave credibility to the argument for technological determinism, which implies that organisational structure is 'determined' by the form of the technology. However, despite the general acceptance that the application of change management techniques can considerably increase the probability of a project's success, many IT-enabled change projects have failed for non-technical reasons. Some projects, such as the London Ambulance Service Computer Aided Dispatch System, have failed with fatal consequences (Benyon-Davies 1995). Markus and Benjamin (1997) attribute this to what they describe as the magic bullet theory of IT, whereby IT specialists erroneously believe in the magic power of IT to create organisational transformation. Some academics argue that although IT is an enabling technology it cannot by itself create organisational change (Markus and Benjamin 1997; McKersie and Walton 1991). McKersie and Walton (1991) maintain that to create IT-enabled organisational change it is necessary to actively manage the changes. They also argue that the effective implementation of IT is, at its core, a task of managing change. The Management In The 1990s (MIT90) program (Scott Morton 1991) proposed a framework for understanding the interactions between the forces involved in IT-enabled organisational change. A simplified adaptation of this framework is shown below; its elements are strategy, structure, processes, skills and roles, and IT, set within the external technological and external socioeconomic environments.

Adapted from the MIT90s Framework (Scott-Morton 1991)

Proponents of the MIT90s model maintain that, to successfully manage IT-enabled change, it is necessary to ensure that the organisational choices, the technology and the strategic choices depicted in Figure 2.3 are properly aligned (Scott-Morton 1991). In contrast, however, Yetton et al. (1994) challenge the view that the critical issue in managing IT successfully is alignment. They argue that IT can be used deliberately to modify an organisation's strategy and also that the MIT90s framework is a static model that does not address the dynamic nature of change. Nonetheless, despite this criticism, the MIT90s study has been highly influential
to IS academics and practitioners (Yetton et al. 1994; Benjamin and Levinson 1993). The MIT90s study concluded that the benefits of IT are not generally being realised by organisations because investment is biased towards technology and not towards managing changes in organisational processes, structure and culture. Benjamin and Levinson (1993) maintain that IT-enabled change is different from change which is driven by other environmental concerns. They argue that skills, jobs and organisational control processes change radically. Zuboff (1988) also described the revolutionary changes in jobs and control processes within organisations that take full advantage of IT as workers become 'informated' and thus empowered. Ives and Jarvenpaa (1994) provide a vision of the effect of IT-enabled changes on basic work methods as organisations become global networked organisations to take advantage of collaborative work methods. IT-enabled changes also span functions and organisations as technology enables increased inter- and intra-organisational coordination with decreased transaction costs (Kalakota and Whinston 1996). Many academics and practitioners would agree that IT-enabled change is different from more general change processes and that change must be managed to be successful (Yetton et al. 1994; Benjamin and Levinson 1993). Clearly, the change process must be understood to be managed, and a number of models have been proposed for this. One such model is Benjamin and Levinson's (1993), which draws on the general change management literature to develop a framework for managing IT-enabled change. This framework is typical of many IS change models (Orlikowski and Hofman 1997) which have been adopted and adapted from the social sciences and are based on the Lewinian unfreeze, change and re-freeze approach to change management discussed previously. However, in a situation reminiscent of the developments within the social sciences, a number of new IT-enabled change management models are now emerging which are based on the emergent or contingent approaches to change management.

Orlikowski and Hofman’s Improvisational Change Model

A key example of this type of model is presented by Orlikowski and Hofman (1997). We will review this model here to provide insight into the types of models which are likely to provide a focus for research in the area in the near future. The model also provides a strong and interesting framework against which to view some of the papers that follow in this theme of the book. Theirs is an improvisational model for managing technological change which is an alternative to the predominant Lewinian models. They maintain that IT-enabled change managers should take as a model the Trukese navigator, who begins with an objective rather than a plan and responds to conditions as they arise in an ad-hoc fashion. They also argue that traditional Lewinian change models are based on the fallacious assumption that change occurs only during a specified period, whereas they maintain that change is now a constant. This is similar to the arguments of the proponents of the emergent change management approach which were examined earlier in this paper. The origins of Orlikowski and Hofman's (1997) Improvisational Change Model can be found in a study by Orlikowski (1996) which examined the use of new IT within one organisation over a two year period. The study concluded by demonstrating the critical role of situated change enacted by organisational members using groupware technology over time. Mintzberg (1987) first made the distinction between deliberate and emergent strategies, and Orlikowski (1996) argues that the perspectives which have influenced
studies of IT-enabled organisational change have similarly neglected emergent change. Orlikowski challenges the arguments that organisational change must be planned, that technology is the primary cause of technology-based organisational transformation and that radical changes always occur rapidly and discontinuously. In contrast, she maintains that organisational transformation is an ongoing improvisation enacted by organisational actors trying to make sense of and act coherently in the world.

Model assumptions and types of change

Orlikowski and Hofman's (1997) Improvisational Change Model is based on two major assumptions. First, the changes associated with technology implementations constitute an ongoing process rather than an event with an end point after which an organisation can return to a state of equilibrium. Second, not all of the technological and organisational changes associated with this ongoing process can be anticipated in advance. Based on these assumptions, Orlikowski and Hofman (1997) have identified three different types of change:

· Anticipated change. Anticipated changes are planned ahead of time and occur as intended; for example, the implementation of e-mail that accomplishes its intended aim of facilitating improved communications.

· Opportunity-based change. Opportunity-based changes are not originally anticipated but are intentionally introduced during the ongoing change process in response to an unexpected opportunity; for example, as companies gain experience with the World Wide Web they may deliberately respond to unexpected opportunities to leverage its capabilities.

· Emergent change. Emergent changes arise spontaneously from local innovation and are not originally anticipated or intended; for example, the use of e-mail as an informal grapevine for disseminating rumours throughout an organisation.

Orlikowski and Hofman (1997) maintain that both anticipated and opportunity-based changes involve deliberate action, in contrast to emergent changes which arise spontaneously and usually tacitly from organisational members' actions over time. Furthermore, they contend that the three types of change usually build iteratively on each other in an undefined order over time. They also argue that practical change management using the Improvisational Change Model requires a set of processes and mechanisms to recognise the different types of change as they occur and to respond effectively to them.
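
As a rough illustration of this taxonomy, and not something Orlikowski and Hofman provide themselves, the sketch below classifies a change event by two attributes taken from their definitions: whether it was planned in advance and whether it was introduced deliberately.

# Minimal sketch of Orlikowski and Hofman's (1997) three change types.
# The classification rule is an interpretation of the definitions above,
# not code from the original paper.
def classify_change(planned_in_advance, deliberately_introduced):
    if planned_in_advance and deliberately_introduced:
        return "anticipated"        # planned ahead of time and occurs as intended
    if deliberately_introduced:
        return "opportunity-based"  # not planned, but purposefully seized in flight
    return "emergent"               # arises spontaneously and tacitly from local practice

print(classify_change(True, True))    # anticipated
print(classify_change(False, True))   # opportunity-based
print(classify_change(False, False))  # emergent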

Critical enabling conditions

Orlikowski and Hofman (1997) suggest that there are certain enabling conditions which must be fulfilled to allow their Improvisational Change Model to be successfully adopted for implementing technology within an organisation. The first of these enabling conditions is that dedicated resources must be allocated to provide ongoing support for the change process, which Orlikowski and Hofman (1997) maintain is inherently continuous. They also suggest that another enabling condition is the interdependent relationship between the organisation, the technology and the change model, as depicted in the figure below.

Aligning the Key Change Dimensions (from Orlikowski and Hofman 1997: 18)


Orlikowski and Hofman's (1997) research suggested that the interaction between these key change dimensions must ideally be aligned, or at least not in opposition. Their research also suggested that an Improvisational Change Model may only be appropriate for introducing open-ended technology into organisations with adaptive cultures. Open-ended technology is defined by them as technology which is locally adaptable by end users, with customizable features and the ability to create new applications. They maintain that open-ended technology is typically used in different ways across an organisation. Orlikowski and Hofman appear to share similar views to the contingency theorists discussed earlier, as they do not subscribe to the view that there is 'one best way' for managing IT-enabled change.

Orlikowski's (1996) research, upon which Orlikowski and Hofman's (1997) Improvisational Change Model is based, concluded that further empirical research was needed to determine the extent to which an improvisational perspective of organisational change is useful in other contexts and how different organisational and technological conditions influence the improvisations attempted and implemented. Orlikowski and Hofman's (1997) Improvisational Change Model is a first attempt at moving this research theme forward and it is an area which is likely to grow in importance over the next few years.

Summary

The dominant theories and models relating to the management of change have evolved from the social sciences. IS research is relatively much newer, and the socio-technical nature of information systems has caused most IS theories and models to be adapted from the social sciences. The main theories that provide the foundation for general change management approaches are the individual, group dynamics and open systems perspectives. The planned approach to change management tends to concentrate on changing the behaviour of individuals and groups through participation. In contrast, the newer emergent approach to change management focuses on the organisation as an open system, with its objective being to continually realign the organisation with its changing external environment. Lewin's (1958) model is a highly influential planned approach model that underpins many of the change management models and techniques used today, and most contemporary management texts adopt this three-phase unfreeze, change and re-freeze model. The rationale of the newer emergent approach is that change should not be 'frozen' or viewed as a linear sequence of events but that it should be viewed as an ongoing process. Contingency theory is a rejection of the 'one best way' approach taken by planned and emergent protagonists. The contingency approach adopts the perspective that an organisation is 'contingent' on the situational variables it faces and, therefore, it must adopt the most appropriate change management approach. Many IT-enabled change projects fail despite the general acceptance that change management can considerably increase the probability of a project's success. This is often attributable to the misconception that IT is not only an enabling technology but that it can also, by itself, create organisational change. The highly influential MIT90s framework is useful for understanding the interactions between the forces involved in IT-enabled organisational change, which must be aligned to create successful organisations. IT-enabled change is different from changes driven by other environmental concerns and the
process must be understood to be managed. Consequently, many IS change models have adopted and adapted the Lewinian unfreeze, change and re-freeze approach to change management. However, in a situation reminiscent of the developments within the social sciences, a number of new IT-enabled change management models are now emerging which are based on the emergent or contingent approaches to change management. Orlikowski and Hofman (1997) have proposed an improvisational model for managing technological change as one alternative to the predominant Lewinian models. This improvisational model is based on the assumptions that technological changes constitute an ongoing process and that not every change associated with that ongoing process can be anticipated beforehand. Based on these assumptions, Orlikowski and Hofman (1997) have identified three different types of change, namely anticipated, opportunity-based and emergent changes. Both anticipated and opportunity-based changes involve deliberate action, in contrast to emergent changes which arise spontaneously and usually tacitly from organisational actors' actions over time. These three types of change build iteratively on each other in an undefined order over time. Orlikowski and Hofman (1997) suggest that the critical enabling conditions which must be fulfilled to allow their Improvisational Change Model to be successfully adopted for implementing technology are aligning the key dimensions of change and allocating dedicated resources to provide ongoing support for the change process. The review of models of change presented in this paper provides background for the following papers in this theme, and provides a developing research perspective against which to view the issues discussed by the other authors.

The Iceberg Model by McClelland

The American psychologist McClelland proposed the well-known iceberg model in 1973. The model divides the qualities underlying an individual's performance into a visible part, 'above the waterline' of the iceberg, and a hidden part, 'below the waterline'. The part above the surface comprises basic knowledge and basic skills: the externally visible competencies that are easy to understand and measure, and relatively easy to change through training and development. The part below the surface comprises social roles, self-image, traits and motives: internal characteristics that are difficult to measure and are not easily changed by outside influence, yet play a key role in the behaviour and performance of staff.

The six dimensions of people's quality

1. Knowledge: the factual and experiential information an individual has acquired in a particular field.
2. Skill: the ability to apply knowledge in a structured way to perform a specific job, that is, the techniques and know-how required in a particular area.
3. Social role: a person's patterns of behaviour based on their attitudes and values; their manner and style.
4. Self-concept: a person's attitudes, values and self-image.
5. Trait: enduring physical characteristics and consistent responses to situations and information in the environment; an individual's traits and motives can predict their performance at work over the long term.
6. Motive: the recurring thoughts and preferences in a particular area (such as achievement, affiliation or influence) that drive, guide and determine a person's outward actions.
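
Purely as an illustration of the split the model describes, and not part of McClelland's own work, the sketch below groups the six dimensions by whether they sit above or below the waterline of the iceberg.

# Illustrative grouping of the six competency dimensions into the visible
# and hidden parts of the iceberg described above.
ICEBERG = {
    "above the waterline": ["knowledge", "skill"],
    "below the waterline": ["social role", "self-concept", "trait", "motive"],
}

def is_easily_trained(dimension):
    """True for the surface-level dimensions that, per the model, are
    relatively easy to develop through training and development."""
    return dimension in ICEBERG["above the waterline"]

print(is_easily_trained("skill"))   # True
print(is_easily_trained("motive"))  # False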

Topography of Mind: Freud's Iceberg Model for the Unconscious, Pre-conscious and Conscious

According to Freud, there are three levels of consciousness. The conscious (small) is the part of the mind that holds what you're aware of.

You can verbalize your conscious experience and you can think about it in a logical fashion.

The preconscious (small to medium) is ordinary memory. Although things stored here aren't in the conscious, they can readily be brought into consciousness.

The unconscious (enormous): Freud felt that this part of the mind was not directly accessible to awareness. In part, he saw it as a dump box for urges, feelings and ideas that are tied to anxiety, conflict and pain. These feelings and thoughts have not disappeared; according to Freud, they are still there, exerting influence on our actions and our conscious awareness. This is where most of the work of the Id, Ego and Superego takes place.

Material passes easily back and forth between the conscious and the preconscious. Material from these two areas can slip into the unconscious. Truly unconscious material can't be made available voluntarily, according to Freud; you need a psychoanalyst to do this! We can use the metaphor of an iceberg to help us understand Freud's topographical theory of the mind's layout.

Only 10% of an iceberg is visible (conscious) whereas the other 90% is beneath the water (preconscious and unconscious).

The Preconscious is allotted approximately 10% -15% whereas the Unconscious is allotted an overwhelming 75%-80%.

The Change Management Iceberg by Wilfried Krüger

The Change Management Iceberg of Wilfried Krüger is a strong visualization of what is arguably the essence of change in organizations: dealing with barriers. According to Krüger, many change managers only consider the top of the iceberg: cost, quality and time ('issue management'). However, below the surface of the water there are two more dimensions of change and implementation management:

- management of perceptions and beliefs, and
- power and politics management.

What kind of barriers arise, and what kind of implementation management is consequently needed, depends on:
- the kind of change: changing 'hard' things only (information systems, processes) just scratches the surface, while changing 'soft' things as well (values, mindsets and capabilities) is much more profound;
- the applied change strategy: revolutionary, dramatic change as in Business Process Reengineering, or evolutionary, incremental change as in Kaizen.

Below the surface of the Change Management Iceberg:

- Opponents have both a negative general attitude towards change and a negative behaviour towards this particular change for them personally. They need to be managed through management of perceptions and beliefs to change their minds as far as possible.
- Promoters, on the other hand, have both a positive general attitude towards change and are positive about this particular change for them personally. They take advantage of the change and will therefore support it.
- Hidden opponents have a negative general attitude towards change although they seem to support the change on a superficial level ('opportunists'). Here, management of perceptions and beliefs supported by information (issue management) is needed to change their attitude.
- Potential promoters have a generally positive attitude towards change but, for certain reasons, they are not (yet) convinced about this particular change. Power and politics management seems to be appropriate in this case.
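
To make the four groups concrete, here is a small illustrative sketch (an interpretation of the descriptions above, not Krüger's own) that maps a person's general attitude towards change and their apparent stance on the specific change onto the four categories and the management emphasis suggested above.

# Illustrative classification of stakeholders under the Change Management
# Iceberg. The mapping follows the descriptions above; the function name
# and structure are this sketch's own.
def classify_stakeholder(general_attitude_positive, appears_to_support_change):
    if general_attitude_positive and appears_to_support_change:
        return "promoter", "will support the change; keep them engaged"
    if general_attitude_positive and not appears_to_support_change:
        return "potential promoter", "power and politics management"
    if not general_attitude_positive and appears_to_support_change:
        return "hidden opponent", "management of perceptions and beliefs, backed by issue management"
    return "opponent", "management of perceptions and beliefs"

print(classify_stakeholder(False, True))   # hidden opponent
print(classify_stakeholder(True, False))   # potential promoter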

COACHING: THE SKILL/WILL MATRIX


Why Skill/Will Matrix?

If you have assigned a task to someone and the job has not quite been done well enough, one of the most likely reasons is that:

You have delegated the task to someone who is unwilling – or unable – to complete the job, and have then remained relatively uninvolved or 'hands-off', or

You may have been too directive or 'hands-on' with a capable person who was quite able to complete the assignment with little assistance from you; you just ended up demotivating him/her.

Consequently, whether you are managing, or leading, or coaching, it is critical to match your style of interaction with the player's readiness for the task. The Skill/Will Matrix will help you do this.

Details on Applying the Skill / Will Matrix

Direct:
- Build the will: develop a vision of future performance, identify motivations, provide a clear briefing
- Build the skill...
- Sustain the will...
- Supervise closely, with tight control and clear rules/deadlines...

Guide...

Excite...

Delegate...

Modern Management: 12 Breakthrough Ideas

Source: Harvard Business Review. Executive summary by Anastasia Bibikova.


These are some of the best ideas relating to the practice of management. Even if you know that somebody has already used them, don't simply try to repeat them. Consider them, debate them and let them inspire your own thinking!

8. The Use of Giving Alms: there is no use in giving alms to those who ask for assistance. Why be overly compassionate towards people who are really just looking for two or three 'coaching' words to help them find their own way or solution?

10 Roles of an Inspirational Leader

Coach and train your people to greatness. Empowerment alone is not enough. You must train and coach your people to enhance their learning ability and performance. Coaching is the key to unlocking the potential of your people, your organization, and yourself. It increases your effectiveness as a leader. As a coach, you must help your people grow and achieve more by inspiring them, asking effective questions and providing feedback. Find the right combination of instructor-led training and coaching follow-ups to achieve success.

The High Low Matrix can help managers overcome one of the more challenging aspects of their role: understanding what motivates their employees. It's easy to assume that, because you are motivated by knowing you did a good job or by making an impact on your environment, others feel this enthusiasm as well. In the real world, people have many different motivations.

 

In turn, people also have different levels of skill sets for particular tasks. Their level of skill can often depend on their experience, the level of training they have received, or the type of task itself.

Since most coaching techniques rely on the employee's skills and their will to accomplish a goal, it is important to understand how these two aspects work together. This knowledge will help you to better craft your approach with your employees and teams to get the best results possible from each individual.

Let's start by introducing the High Low Matrix.

As you can see, the High Low Matrix coaching model is a two-by-two grid, in the style of a Punnett square, of an employee's will versus their skill, and it contains some coaching techniques to utilize based on where the associate falls.


Let's get into further detail about how to use this coaching model and discuss each of the coaching techniques to use once we've identified where our employee falls on the High Low Matrix.

First, how do we identify whether an employee is exhibiting or feeling a high degree of will? This should be somewhat obvious from how they approach their work. If tasks that are not skill related are still delivered in a less than stellar fashion, or their attitude has changed recently, you can infer that the associate's motivation has slipped. Utilizing the IGROW Model will help you to determine the root causes of any changes in behavior.

Assessing their skill is typically a much simpler task as it is likely why you are here. You have no doubt seen results from your employee or team that do not meet your expectations and therefore have determined a change must be made. Again, to determine the exact skill that the associate is struggling with that has caused the poor performance, the IGROW Model is recommended.

Now that you have determined both the employee's skill level and their will level, it is time to discuss the coaching techniques that you should apply based on where the employee falls in the High Low Matrix coaching model.

High Low Matrix Coaching Model: Advise

Are you faced with an employee who is highly motivated, yet as much as they want it, they just can't seem to deliver the results needed? This is your low skilled, high willed employee. In order to effectively apply coaching techniques to this type of employee, the Advise phase of the High Low Matrix coaching model should be applied. Advising is focused on providing the skills necessary to turn the employee's motivation into success. By focusing on teaching or training the skill, you will leverage the employee's desire and provide them with the necessary tools to improve. Throughout the learning process, it is key that you continually give the employee praise and endorsement for their improvements. Remember, you may be expecting great leaps, but even baby steps deserve some positive reinforcement. Skipping this valuable step could result in a backslide in will, and you'll then have a whole other set of challenges to deal with.

High Low Matrix Coaching Model: Motivate

At the other end of the spectrum, you may be faced with an employee who has the necessary skill set to deliver, and perhaps has delivered great results in the past, but is experiencing a will issue that is apparent in their performance. Again, utilizing the IGROW Model will help you to identify what has occurred or changed and impacted their level of motivation. This coaching model will also help you determine if your approach should be one of Coaching vs. Counseling. When faced with an employee who lacks the will, there are some key areas to focus on while trying to re-engage them. Start by determining what the employee's 'hot buttons' or motivators are. This can be done simply by asking what they take the most pride in at work or how they like to be recognized for a job well done. Once you have determined their hot buttons, focus on them. Whenever the employee does deliver, use your new-found knowledge to show your appreciation. Next, you should determine whether there are any roadblocks or constraints that the employee is experiencing. Oftentimes, removing these roadblocks or providing options can alleviate their challenges with motivation.

High Low Matrix Coaching Model: Direct

For situations that involve a low skilled and low willed employee, the Direct coaching technique should be utilized. Directing is focused on a combination of the previously discussed coaching techniques applied together. Since this employee is not very skilled and is not very motivated, the two key areas to focus on are training and praising. You will first need to provide the employee with the tools to develop their skills. This does not mean that the tools you provide, such as training, have to be new information to them; they can also reinforce information they have already been presented with. Because the employee is not delivering results, there can often be a challenge with their confidence level that is inhibiting their ability to apply new or existing skills. Giving the employee low-risk opportunities to practice their skills and to succeed will allow you to provide them with the positive feedback they need, and will result in a confidence-building experience for them. This is one of the better coaching techniques to apply with this type of employee.

High Low Matrix Coaching Model: Delegate

The last type of employee is the high skilled, high willed employee. These are typically your top performers who consistently deliver results and strive to do a good job. They are a motivating force for themselves and typically for your team. If they are succeeding, you may wonder why we are discussing their development at all. Many schools of thought tell us to always focus on bringing up our bottom performers, and this oftentimes leaves a lack of focus on our top performers. Since these high skilled, high willed employees are likely your future leaders, utilizing the Delegate coaching technique can help them develop to the next level. Delegating is often misused in the business world today. Many managers use this technique not for development but as a way to reduce their workload or stress by shoving the work onto
their team. This is not always a bad thing, but what most managers don't do is follow up on the opportunities they've delegated to others or provide them with the tools or resources to succeed. Delegation, when used effectively, will often take more time than just doing the task yourself. You should not be using this technique as a means to reduce your workload as you will be working through the task with your high performer, helping them to learn and master it. This is how development through delegation works. Giving your highly motivated top performers the opportunity to be challenged and to continue to learn will ensure they continue to be your highly motivated top performers. While delegating you should also focus on praising and endorsing what they do well as well as offering them opportunities to either make decisions or to collaborate on decisions being made. This will continue to instill a sense of ownership in them.

As we have discussed, each scenario you are faced with requires a different approach or coaching technique to achieve the desired results. To ensure you are successful in your approach, be sure to spend time prior to your coaching session thinking about where the associate falls, what motivates them, and what options you may want to offer. Being prepared for the discussion will make a great difference.
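
The quadrant-to-technique mapping described in the sections above can be summarised in a few lines. The sketch below is only an illustration of the High Low Matrix logic as presented here; the function name and the boolean inputs are invented for the example, and real assessments of skill and will are of course far less binary.

# Illustrative mapping from an employee's skill and will to the coaching
# technique described in the High Low Matrix sections above.
def coaching_technique(high_skill, high_will):
    if not high_skill and high_will:
        return "Advise"    # teach or train the skill, praise progress
    if high_skill and not high_will:
        return "Motivate"  # find hot buttons, remove roadblocks
    if not high_skill and not high_will:
        return "Direct"    # train and praise, give low-risk practice
    return "Delegate"      # stretch assignments with follow-up and shared decisions

print(coaching_technique(high_skill=False, high_will=True))   # Advise
print(coaching_technique(high_skill=True, high_will=False))   # Motivate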

CULTURE LEVEL BY SCHEIN

Organizational culture is an idea in the field of organizational studies and management which describes the psychology, attitudes, experiences, beliefs and values (personal and cultural values) of an organization. It has been defined as "the specific collection of values and norms that are shared by people and groups in an organization and that control the way they interact with each other and with stakeholders outside the organization."[1]

This definition goes on to explain organizational values, also called "beliefs and ideas about what kinds of goals members of an organization should pursue and ideas about the appropriate kinds or standards of behavior organizational members should use to achieve these goals." From organizational values develop organizational norms: guidelines or expectations that prescribe appropriate kinds of behavior by employees in particular situations and control the behavior of organizational members towards one another.

Strong culture is said to exist where staff respond to stimulus because of their alignment to organizational values. In such environments, strong cultures help firms operate like well-oiled machines, cruising along with outstanding execution and perhaps minor tweaking of existing procedures here and there.


Conversely, there is weak culture where there is little alignment with organizational values and control must be exercised through extensive procedures and bureaucracy.

Where culture is strong, that is, where people do things because they believe it is the right thing to do, there is a risk of another phenomenon: groupthink. "Groupthink" was described by Irving L. Janis. He defined it as "a quick and easy way to refer to a mode of thinking that people engage in when they are deeply involved in a cohesive in-group, when the members' strivings for unanimity override their motivation to realistically appraise alternative courses of action." This is a state in which people, even if they have different ideas, do not challenge organizational thinking, and therefore there is a reduced capacity for innovative thought. This could occur, for example, where there is heavy reliance on a central charismatic figure in the organization, or where there is an evangelical belief in the organization's values, or in groups where a friendly climate is at the base of their identity (avoidance of conflict). In fact, groupthink is very common; it happens all the time, in almost every group. Members who are defiant are often turned down or seen as a negative influence by the rest of the group, because they bring conflict.

Innovative organizations need individuals who are prepared to challenge the status quo—be it group-think or bureaucracy, and also need procedures to implement new ideas effectively.

Several methods have been used to classify organizational culture. Some are described below:

Hofstede (1980[2]) demonstrated that there are national and regional cultural groupings that affect the behavior of organizations.

Hofstede looked for national differences among more than 100,000 IBM employees in different parts of the world, in an attempt to find aspects of culture that might influence business behavior.

Hofstede identified five dimensions of culture in his study of national influences:

Power distance - The degree to which a society expects there to be differences in the levels of power. A high score suggests that there is an expectation that some individuals wield larger amounts of power than others. A low score reflects the view that all people should have equal rights.

Uncertainty avoidance reflects the extent to which a society accepts uncertainty and risk.

Individualism vs. collectivism - individualism is contrasted with collectivism, and refers to the extent to which people are expected to stand up for themselves, or alternatively to act predominantly as a member of the group or organization. However, recent research has shown that high individualism does not necessarily mean low collectivism, and vice versa. Research indicates that the two concepts are actually unrelated. Some people and cultures might have both high individualism and high collectivism, for example.
Someone who highly values duty to his or her group does not necessarily give a low priority to personal freedom and self-sufficiency.

Masculinity vs. femininity - refers to the value placed on traditionally male or female values. Male values, for example, include competitiveness, assertiveness, ambition, and the accumulation of wealth and material possessions.

Long-term vs. short-term orientation - refers to a society's time horizon, that is, the importance attached to the future as opposed to the past and the present.

Deal and Kennedy

Deal and Kennedy defined organizational culture as "the way things get done around here". They measured organizations along two dimensions: the speed of feedback (whether a quick response to actions is received) and the degree of risk involved. Slow feedback combined with high risk characterises what they called the Bet-your-Company Culture, found in fields such as oil prospecting or military aviation.

The Process Culture occurs in organizations where there is little or no feedback. People become bogged down with how things are done not with what is to be achieved. This is often associated with bureaucracies. While it is easy to criticize these cultures for being overly cautious or bogged down in red tape, they do produce consistent results, which is ideal in, for example, public services.

Charles Handy

Charles Handy[4] (1985) popularized Roger Harrison's 1972 work on looking at culture, which some scholars have used to link organizational structure to organizational culture. He describes Harrison's four types thus:

a Power Culture, which concentrates power among a few. Control radiates from the center like a web. Power and influence spread out from a central figure or group. Power derives from the top person, and a personal relationship with that individual matters more than any formal title or position. Power Cultures have few rules and little bureaucracy; swift decisions can ensue.

In a Role Culture, people have clearly delegated authorities within a highly defined structure. Typically, these organizations form hierarchical bureaucracies. Power derives from a person's position and little scope exists for expert power. Such organizations are controlled by procedures, role descriptions and authority definitions. Predictable and consistent systems and procedures are highly valued.

By contrast, in a Task Culture, teams are formed to solve particular problems. Power derives from expertise as long as a team requires expertise. These cultures often feature the multiple reporting lines of a matrix structure. It is a small-team approach, in which teams are highly skilled and specialized in their own areas of expertise.

A Person Culture exists where all individuals believe themselves superior to the organization. Survival can become difficult for such organizations, since the concept of an organization suggests that a group of like-minded individuals pursue the organizational goals. Some professional partnerships can operate as person cultures, because each partner brings a particular expertise and clientele to the firm.

Edgar Schein

Edgar Schein,[5] an MIT Sloan School of Management professor, defines organizational culture as:

"A pattern of shared basic assumptions that was learned by a group as it solved its problems of external adaptation and internal integration, that has worked well enough to be considered valid and, therefore, to be taught to new members as the correct way you perceive, think, and feel in relation to those problems"(Schein, 2004, p. 17).

According to Schein, culture is the most difficult organizational attribute to change, outlasting organizational products, services, founders and leadership and all other physical attributes of the organization. His organizational model illuminates culture from the standpoint of the observer, described by three cognitive levels of organizational culture.

At the first and most cursory level of Schein's model are the organizational attributes that can be seen, felt and heard by the uninitiated observer, collectively known as artifacts. These include the facilities, offices, furnishings, visible awards and recognition, the way that members dress, how each person visibly interacts with others and with organizational outsiders, and even company slogans, mission statements and other operational creeds.

The next level deals with the professed culture of an organization's members - the values. At this level, local and personal values are widely expressed within the organization. Organizational behavior at this level usually can be studied by interviewing the organization's membership and using questionnaires to gather attitudes about organizational membership.

At the third and deepest level, the organization's tacit assumptions are found. These are the elements of culture that are unseen and not cognitively identified in everyday interactions between organizational members. Additionally, these are the elements of culture which are often taboo to discuss inside the organization. Many of these 'unspoken rules' exist without the conscious knowledge of the membership. Those with sufficient experience to understand this deepest level of organizational culture usually become acclimatized to its attributes over time, thus reinforcing the invisibility of their existence. Surveys and casual interviews with organizational members cannot draw out these attributes; rather, much more in-depth means are required to first identify and then understand organizational culture at this level. Notably, culture at this level is the underlying and driving element often missed by organizational behaviorists.

Using Schein's model, understanding paradoxical organizational behaviors becomes more apparent. For instance, an organization can profess highly aesthetic and moral standards at the second level of Schein's model while simultaneously displaying curiously opposing behavior at the third and deepest level of culture. Superficially, organizational rewards can imply one organizational norm but at the deepest level imply something completely different. This insight offers an understanding of the difficulty that organizational newcomers have in assimilating organizational culture and why it takes time to become acclimatized. It also explains why organizational change agents usually fail to achieve their goals: underlying tacit cultural norms are generally not understood before would-be change agents begin their actions. Merely understanding culture at the deepest level may be insufficient to institute cultural change, because the dynamics of interpersonal relationships (often under threatening conditions) are added to the dynamics of organizational culture while attempts are made to institute the desired change.

Robert A. Cooke

The Organizational Culture Inventory: Culture Clusters

Robert A. Cooke, PhD, defines culture as the behaviors that members believe are required to fit in and meet expectations within their organization. The Organizational Culture Inventory measures twelve behavioral norms that are grouped into three general types of cultures:

•Constructive Cultures, in which members are encouraged to interact with people and approach tasks in ways that help them meet their higher-order satisfaction needs.

•Passive/Defensive Cultures, in which members believe they must interact with people in ways that will not threaten their own security.

•Aggressive/Defensive Cultures, in which members are expected to approach tasks in forceful ways to protect their status and security.

The Constructive Cluster

The Constructive Cluster includes cultural norms that reflect expectations for members to interact with others and approach tasks in ways that will help them meet their higher order satisfaction needs for affiliation, esteem, and self-actualization.

The four cultural norms in this cluster are:

•Achievement

•Self-Actualizing

•Humanistic-Encouraging

•Affiliative


Organizations with Constructive cultures encourage members to work to their full potential, resulting in high levels of motivation, satisfaction, teamwork, service quality, and sales growth. Constructive norms are evident in environments where quality is valued over quantity, creativity is valued over conformity, cooperation is believed to lead to better results than competition, and effectiveness is judged at the system level rather than the component level. These types of cultural norms are consistent with (and supportive of) the objectives behind empowerment, total quality management, transformational leadership, continuous improvement, re-engineering, and learning organizations.

The Passive/Defensive Cluster

Norms that reflect expectations for members to interact with people in ways that will not threaten their own security are in the Passive/Defensive Cluster.

The four Passive/Defensive cultural norms are:

•Approval

•Conventional

•Dependent

•Avoidance

In organizations with Passive/Defensive cultures, members feel pressured to think and behave in ways that are inconsistent with the way they believe they should in order to be effective. People are expected to please others (particularly superiors) and avoid interpersonal conflict. Rules, procedures, and orders are more important than personal beliefs, ideas, and judgment. Passive/Defensive cultures experience a lot of unresolved conflict and turnover, and organizational members report lower levels of motivation and satisfaction.

The Aggressive/Defensive Cluster

The Aggressive/Defensive Cluster includes cultural norms that reflect expectations for members to approach tasks in ways that protect their status and security.

The Aggressive/Defensive cultural norms are:

•Oppositional

•Power

•Competitive

•Perfectionistic

Organizations with Aggressive/Defensive cultures encourage or require members to appear competent, controlled, and superior. Members who seek assistance, admit shortcomings, or concede their position are viewed as incompetent or weak. These organizations emphasize finding errors, weeding out “mistakes,” and encouraging members to compete against each other rather than competitors. The short-term gains associated with these strategies are often at the expense of long-term growth.
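
As a compact restatement of the inventory's structure, drawn only from the cluster descriptions above (the dictionary and helper below are illustrative, not part of Cooke's instrument):

# The twelve OCI behavioural norms grouped into Cooke's three general
# culture types, as listed in the preceding sections.
OCI_CLUSTERS = {
    "Constructive": ["Achievement", "Self-Actualizing", "Humanistic-Encouraging", "Affiliative"],
    "Passive/Defensive": ["Approval", "Conventional", "Dependent", "Avoidance"],
    "Aggressive/Defensive": ["Oppositional", "Power", "Competitive", "Perfectionistic"],
}

def cluster_of(norm):
    """Return the general culture type a given behavioural norm belongs to."""
    for cluster, norms in OCI_CLUSTERS.items():
        if norm in norms:
            return cluster
    raise ValueError("Unknown OCI norm: " + norm)

print(cluster_of("Avoidance"))        # Passive/Defensive
print(cluster_of("Perfectionistic"))  # Aggressive/Defensive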

G. Johnson[6] described a cultural web, identifying a number of elements that can be used to describe or influence Organizational Culture:

The Paradigm: What the organization is about; what it does; its mission; its values.

Control Systems: The processes in place to monitor what is going on. Role cultures would have vast rulebooks. There would be more reliance on individualism in a power culture.

Organizational Structures: Reporting lines, hierarchies, and the way that work flows through the business.

Power Structures: Who makes the decisions, how widely spread is power, and on what is power based?

Symbols: These include organizational logos and designs, but also extend to symbols of power such as parking spaces and executive washrooms.

Rituals and Routines: Management meetings, board reports and so on may become more habitual than necessary.

Stories and Myths: build up about people and events, and convey a message about what is valued within the organization.

These elements may overlap. Power structures may depend on control systems, which may exploit the very rituals that generate stories which may not be true.

Edgar Henry Schein (born 1928), a professor at the MIT Sloan School of Management, has made a notable mark on the field of organizational development in many areas, including career development, group process consultation, and organizational culture. He is generally credited with inventing the term "corporate culture".

Schein's organizational culture model


Illustration of Schein's model of organizational culture

Schein's model of organizational culture originated in the 1980s. Schein (2004) identifies three distinct levels in organizational cultures:

Artifacts and behaviors

Espoused values

Assumptions

The three levels refer to the layers of corporate culture.

Artifacts include any tangible or verbally identifiable elements in an organization. Architecture, furniture, dress code, office jokes, and history all exemplify organizational artifacts.

Values are the organization's stated or desired cultural elements. This is most often a written or stated tone that the CEO or President hopes to exude throughout the office environment. Examples of this would be employee professionalism, or a "family first" mantra.

Assumptions are the actual values that the culture represents, not necessarily correlated with the espoused values. These assumptions are typically so well integrated into the office dynamic that they are hard to recognize from within.
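
To keep the three layers straight, the following is a tiny illustrative summary of the levels and the examples given above, expressed as a plain data structure; it is a reading aid, not anything from Schein's own work.

# Schein's three levels of organizational culture, with the examples
# mentioned in the text above.
SCHEIN_LEVELS = {
    "artifacts and behaviors": ["architecture", "furniture", "dress code", "office jokes", "history"],
    "espoused values": ["stated professionalism standards", "a 'family first' mantra"],
    "basic underlying assumptions": ["the actual, taken-for-granted values, hard to see from within"],
}

for level, examples in SCHEIN_LEVELS.items():
    print(level + ": " + ", ".join(examples))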

The model has undergone various modifications, such as the Raz update of Schein's organizational culture model (2006), and others.

Coercive persuasion

Schein has written on the issues surrounding coercive persuasion, comparing and contrasting the use of brainwashing for "goals that we deplore and goals that we accept".

According to Edgar Schein, organizational culture is the residue of an organization's success. One can easily change an organization's products, services, or leadership, but it is difficult to change its culture. Organizational culture can be described at three cognitive levels from the viewpoint of an observer. The first level includes the facilities, furnishings, awards and recognition, the uniform or dress of the members, and how members interact with each other and with outsiders.

The next level includes the culture of the business organization as depicted by company slogans, mission statements, and various operational creeds. The behavior of members can be reviewed by interviewing them and by using questionnaires to gauge their attitudes. The third level includes the tacit assumptions of the organization, which, although unseen, influence the daily interaction between members. These unspoken rules prove their existence by shaping the culture of the organization. Surveys and casual interviews are of little help in uncovering culture at this level; only more in-depth knowledge of the organization makes it possible to identify and understand it, and that understanding helps safeguard the business.

Importance of culture and model

This model makes it easier to understand paradoxical organizational behaviors. Because organizational norms differ at the deepest level, it is difficult for a newcomer to adjust to a new organization, and it is why organizational change agents fail when they act without fully understanding the organization's cultural norms. Along with in-depth knowledge of the culture, the interpersonal relationships between members also play an important role in understanding the dynamics of Schein's organizational culture. The basic underlying assumptions act as a cognitive defense mechanism for both individuals and groups; because the culture is deep-seated and complex, surfacing these assumptions is difficult, anxiety-provoking, and time-consuming. Leaders should remain somewhat marginal to the organization's culture so that they can learn new cultural assumptions and understand member psychology. The model helps in understanding the culture of an organization at its different levels.

Organizations and culture problems with solutions

The culture of an organization is its customs and rites, including behavior patterns, traditions, rituals, structural stability, integration, and patterning. Large organizations face the difficulty of subculture development and of integrating newcomers; during a merger, for example, a new culture must be created for the growth and success of the company.

A good organizational culture, in Schein's sense, should be developed so as to give employees a safe and ethical environment. Organizational culture supports effective communication among members of the company at every level, and this can be achieved through group activities that align the thoughts and ideas of individual members so that they work in one direction toward the goal. Through development and innovation an organization can build a good culture that helps the group make its presence felt both nationally and internationally.

Organizational Culture & Leadership by Edgar H Schein

"Some are born great,

Some achieve greatness,

And some have greatness thrust upon 'em"

Oct 1997

OCAIonline (Organizational Culture Assessment Instrument online) is a tool for diagnosing organizational culture, developed by professors Robert Quinn and Kim Cameron.

Culture is a phenomenon that surrounds us all. Studying it helps us understand how it is created, embedded, developed, manipulated, managed, and changed.

Culture defines leadership.

Understand the culture to understand the organization.

Defining Organizational Culture

Culture is customs and rites.

Good managers must work from a more anthropological model.

Each org has its own way and an outsider brings his/her baggage as observer.

Understand new environment and culture before change or observation can be made.

Observe behavior: language, customs, and traditions

Group norms: standards and values

Espoused values: published, publicly announced values.

Formal Philosophy: mission

Rules of the Game: rules to all in org

Climate: climate of group in interaction

Embedded skills:

Habits of thinking, acting, paradigms: Shared knowledge for socialization.

Shared meanings of the group

Metaphors or symbols:

Culture: norms, values, behavior patterns, rituals, traditions.

Culture implies structural stability, patterning, and integration.

Culture is the accumulated shared learning from shared history.

2 problems all groups must deal with:

1. Survival, growth, and adaptation in environment

2. Internal integration that permits functioning and adapting.

Culture Formally Defined

A pattern of shared basic assumptions that the group learned as it solved its problems of external adaptation and internal integration, that has worked well enough to be considered valid and, therefore, to be taught to new members as the correct way to perceive, think, and feel in relation to those problems.

The problem of socialization: teaching newcomers

The problem of behavior:

Can a large org have one culture? Subcultures.

Summary

Culture explains incomprehensible, irrational

Org with history has culture.

Not every group develops a culture.

Once culture exists it determines criteria of leadership.

Leaders should be conscious of culture otherwise it will manage them.

Levels of Culture

Artifacts: on the surface; what one sees, hears, and feels; visible products

Language, technology

Products

Creations

Style: clothing, manners of address, myths, stories

Easy to observe

Difficult to decipher

Symbols are ambiguous

Problems in classification

Espoused Values

All group learning reflects original values

Those who prevail influence group: the leaders

First begins as shared value then becomes shared assumption

Social validation happens with shared learning.

Initially started by founder, leader and then assimilated.

Basic Assumptions

Evolve as solution to problem is repeated over and over again.

Hypothesis becomes reality

To learn something new requires resurrection, reexamination, frame breaking

Culture defines us:

What we pay attention to

What things mean

React emotionally

What actions to take when

Humans need cognitive stability

Defense mechanisms

McGregor: if people are treated consistently in terms of certain basic assumptions, they come eventually to behave according to those assumptions in order to make their world stable and predictable.

Different cultures make different assumptions about others based on own values etc: see them with our eyes not theirs.

Third party may help solve differences between 2 cultures

Each new member comes with own assumptions.

Culture of 2 orgs

Case study of 2 distinct companies illustrating multilevel analysis of culture.

One can best understand a system by trying to change it.

Key is he looked at these two with his eyes, his culture, not theirs.

Lurk for a while to understand the culture.

Dimensions of Culture

External Environments

Develop a model of how assumptions arise and persist

Identify issues groups faced from origin of group, if possible.

Group growth and culture formation are intertwined.

Essential elements:

Mission and strategy

Goals

Means of developing consensus, reaching goals

Measurement

Correction

Mission and strategy

Each new group must develop shared concept to survive.

What is the function? Can be multifunctional.

If consensus on the mission can be reached in debate, then a culture has developed.

Culture exists when members share identity and mission.

Goals: to achieve consensus on goals, the group needs a shared language and shared assumptions.

Mission can be timeless, but goals must have end point

Do not confuse assumptions of goals with assumptions of mission.

Means to Achieve Goals

Need clear consensus on means to achieve goals.

Goals can be ambiguous, but not how to achieve them.

Means:

Design of tasks

Division of labor

Org structure

Reward and incentive system

Control system

Info systems

Skills, technology, and knowledge acquired to cope become part of culture of org

Lack of consensus on who owns what can cause difficulty

Be aware of feelings about territory, property, turf

Crowding is a problem

Changing is difficult because of internal "property."

Consensus on means creates behavioral regularities

Once regularities in place, stability and patterns are in place.

Measuring results

Need consensus on how to evaluate self

From top

Trust self

Use outsider

Trust hard data

Correction, Repair

Consensus needed on how to affect change

Change may be for growth not just to solve a problem

Corrective action can have a great effect on culture, because it may call the culture, and even the mission, into question.

Summary

Culture is multidimensional, multifaceted.

Culture reflects group's efforts to cope and learn.

Managing Internal Integration

The external environment is important, but so are internal relationships.

Major Internal Issues:

Common Language

Group boundaries for inclusion or exclusion

Distributing power and status

Developing norms of intimacy, friendship, and love

Rewards and punishments

Explaining the unexplainable: ideology and religion

Common Language

To function as group must have common language

Conflict arises when two parties make assumptions about each other without communicating.

Often creators create common language

Common understanding begins with categories of action, gesture, and speech.

Group Boundaries

Consensus of who is in and who is out.

Leader usually sets this, but group tests it.

Orgs can have three dimensions:

Lateral movement: from one task to the next

Vertical movement: from one rank to the next

Exclusionary: from outsider to insider

As org ages becomes more complex:

Indiv may belong to many levels, depts....

Distribution of Power and Status

How will influence, power, and authority be allocated?

All need to have some power or know limits

Who will grant power?

Power can be earned

Or assigned

Developing Rules

How to deal with authority and with peers

We use family model in new situations

Allocating Reward and Punishment

Must have system of sanctions for obeying and disobeying rules.

Explaining the Unexplainable

Facing issues not under control: weather, natural disaster

Religion can provide this

Ideology too

Myths, stories, legends help

Summary

Every group must learn to be a group.

Groups must reach consensus

Reality, Truth, Time, Space, Human Nature, Activity, Relationships

Develop shared assumptions about more abstract, general, deeper issues.

The deeper issues:

Nature of reality and truth

Nature of time

Nature of space

Nature of human nature

Nature of human activity

Nature of human relationships

Nature of reality and truth

What is real and how to determine reality

Levels of Reality

External reality refers to that which is determined empirically, by objective tests (the Western tradition), e.g. the SAT.

Social reality arises when a group reaches consensus on things.

Individual reality is self learned knowledge, experience, but this truth may not be shared by others.

High Context/Low Context

Low = unidirectional culture, events have clear universal meanings

High = mutual causality culture, events understood only in context, meanings can vary, categories can change

Moralism-Pragmatism

Pragmatist seeks validation in self

Moralist seeks validation in a general philosophy, moral system, or tradition.

*Different bases for what is true: p 102

Pure dogma: based on tradition, it has always been this way.

Revealed dogma: wisdom based on trust in the authority of wise men, formal leaders, prophets, or kings.

Truth derived by a "rational-legal" process

Truth as that which survives conflict and debate

Truth as that which works: let's try it and test it

Truth as established by scientific method, borders on pure dogma

What is Information?

*to test for reality a group must determine what is information.

*Information or data

Nature of time

*Time is a fundamental symbolic category that we use for talking about the orderliness of social life.

*Time: not enough, late, on time, too much at one time, lost time never found again.

Basic Time Orientation

*past, present, future Gettysburg Address

**Only present counts for immediacy

**Past exists to show past glories, successes

**Future always with vision, ideas

Monochronic and Polychronic Time

*monochronic: one thing at a time

*polychronic: several things done simultaneously. Kill two birds with one stone.

Planning Time and development Time

*planning time is linear, fixed, monochronic, closure.

*developmental time is limitless, as long as needed. Process world, open ended.

Discretionary Time Horizons

*deals with size and units, classes=44 minutes, 34 large

*annually, quarterly, monthly, weekly, daily, hourly...

*length of time depends on the work to be done, e.g. research people may not get closure.

*differ by function and occupation and by rank.

Temporal Symmetry and Pacing

*subtle how activities are paced

Time imposes social order

Pacing, rhythms of life, sequence, duration, symbolic

Time is critical because it is so invisible, taken for granted.

Nature of space

Comes to have symbolic meanings

Distance and Relative Placement

*has both physical and social meaning

**intimacy distance: touching near; 6"-18" far

**personal distance: 18"-30" near; 2'-4' far. soft voice

**social distance: 4'-7' near; 7'-12' far.

**public distance: 12'-25' near; >25' far.

*feelings about distance have biological roots.

*we use partitions, walls, sound barriers, etc

*intrusion distance

Symbols of Space

*who has what and how much

*size may determine rank.

*decorating one's office

*design of space reflects much

Body Language

*use of gestures, body position

*who do you sit next to, avoid, touch, bow to

*deferring reinforces hierarchical relationship

Nature of human nature

What are our basic instincts?

What is inhuman behavior?

Good or bad intrinsic or learned?

Self is compartmentalized: work, family, leisure

Maslow's basic needs

Human nature is complex and malleable

Changes in life cycle as humans mature

Humans can learn new things.

Nature of human activity

How humans act in relation to their environment

The Doing Orientation

*controlled by manipulation

*pragmatic orientation toward the nature of reality

*belief in human perfectibility

**getting things done, let's do something; focus on task, on efficiency, on discovery

*id with Prometheus

*id with Athena = task

*id with Zeus = building useful political relationships

The Being Orientation

*nature is powerful

*humanity is subservient

*kind of fatalism

Cannot influence nature

Become accepting and enjoy what one has

*think in terms of adapting

Being-in-Becoming Orientation

*harmony with nature

*develop own capacity to achieve perfect union with the environment

*focus on what person is rather than what can be accomplished.

*id with Apollo which emphasizes hierarchy, rules, clear roles

Activity Orientation and Role Definition

*nature of work and the relationships between work, family, and personal concepts

*in one work is primary

*in another family is primary

*in another self-interest is primary

*in another integrated life-style is possible

Organization/Environment Relations

*can nature be subjugated

*nature harmonized with

*does group view itself capable of dominating nature

*or coexisting with nature

Nature of human relationships

Make group safe, comfortable, and productive

Must solve problems of power, influence, hierarchy

And intimacy, love, peer relationship

Individual and Groupism

*individual competitive

*cooperative: group more important than indiv

*hierarchy and tradition

Participation and Involvement

Etzioni's theories:

Coercive systems

Members are alienated

Exit if can

Peer relationship develops as defense v authority

Unions develop

Utilitarian systems

Will participate

Evolve work groups

Incentive system

Systems based on goal consensus between leaders and followers

Morally involved

Identify with org

Evolve around tasks in support of org

At a more specific level

Autocratic

Paternalistic

Consultative/democratic

Participative and power sharing

Delegative

Abdicative

Characteristics of Role Relationships

Parsons:

Emotionally charged or emotionally neutral

Diffuse or specific: like family or salesperson

Universalistic or particularistic: broad or specific criteria

Ascription or achievement oriented: family connections or accomplishments

Self or collectively oriented

Culture is not only deep it is wide and complex.

Culture comes to cover all aspects of life.

How Leaders Create Org Cultures

Mysterious: how does it happen?

Culture Beginnings and the Impact of Founders as Leaders

Spring from three sources:

Beliefs, values, and assumptions of founders

Learning experiences of group members

New beliefs, values, and assumptions brought by new members

Impact of founder most important.

Orgs do not form spontaneously or accidentally.

The process of culture formation is the process of creating a small group:

Single person (founder) has idea.

Founder brings in one or more people and creates core group. They share vision and believe in the risk.

Founding group acts in concert, raises money, work space...

Others are brought in and a history is begun.

Jones an example of "visible management"

Culture does not survive if the main culture carriers depart and if bulk of members leave.

Smithfield started things and then left them to the members.

Murphy of Action: Total consensus had to be met. Open office landscape.

How Founders/Leaders Embed and Transmit Culture

Leader assumptions are "taught" to the group.

Things tried out are leader imposed teaching.

How do leaders get their ideas implemented?

Socialization

Charisma

Acting, by doing, exuding confidence

Culture-Embedding Mechanisms

Primary Embedding Mechanisms:

What leaders pay attention to, measure, and control on a regular basis

How leaders react to critical incidents and organizational crises

Observed criteria by which leaders allocate scarce resources

Deliberate role modeling, teaching, and coaching

Observed criteria by which leaders allocate rewards and status

Observed criteria by which leaders recruit, select, promote, retire, and excommunicate organizational members

Secondary Articulation and Reinforcement Mechanisms:

Organization design and structure

Organizational systems and procedures

Organizational rites and rituals

Design of physical space, facades, and buildings

Stories, legends, and myths about people and events

Formal statements of organizational philosophy, values, and creed

The primary embedding mechanisms create what is called the "climate" of the organization.

"Climate" precedes existence of a group culture.

What Leaders Pay Attention to, Measure, and Control

What leader systematically pays attention to communicates major beliefs.

What is noticed?

Comments made

Casual questions and remarks

Becomes powerful if leader sees it and is consistent

If leader is unaware and inconsistent then confusion can ensue.

Consistency more important than intensity of attention.

Attention is focused in part by the kinds of questions that leaders ask and how they set the agendas for meetings

Emotional reactions.

Important what they do not react to.

Leader Reactions to Critical Incidents and Organizational Crises

In crisis: how do they deal with it?

Creates new norms, values, working procedures, reveals important underlying assumptions.

Crises are especially important in culture creation.

Crisis heightens anxiety, which motivates new learning.

A crisis is what is perceived to be a crisis, and what is defined by leader

Crisis about leader, insubordination, tests leader.

Observed Criteria for Resource Allocation

How budgets are created reveals leader assumption.

What is acceptable financial risk?

How much of what is decided is all inclusive? bottom up? top down?

Deliberate Role Modeling, Teaching, and Coaching

The leader's own visible behavior has great value for communicating assumptions and values to others.

Video tape is good

Informal messages are very powerful.

Observed Criteria for Allocation of Rewards and Status

Members learn from their own experience with promotions, performance appraisals, and discussions with the boss.

What is rewarded or punished is a message.

Actual practice, what happens as opposed to what is written or said?

If something is to be learned, there must be a reward system set up to ensure it.

Observed Criteria for Recruitment, Selection, Promotion, Retirement, and Excommunication

Adding new members is very telling because it is unconsciously done.

Also who doesn't get promoted says something.

Secondary Articulation and Reinforcement Mechanisms

In a young org, design, structure, architecture, rituals, stories, and formal statements are cultural reinforcers, not culture creators.

Once an org stabilizes these become primary and constrain future leaders.

These are cultural artifacts that are highly visible but hard to interpret.

When the org is in its developmental stage, the leader is the driving force. After a while these mechanisms become the driving forces for the next generation.

These secondary mechanisms will become primary in Midlife or mature orgs.

Organization Design and Structure

Organizing an org involves more passion than logic.

Founders have strong ideas about how to organize

Build a tight hierarchy that is highly centralized.

Strength is in people, so decentralize

Negotiated solutions (Murphy)

How stable structure should be is variable.

Some stick to original setup

Some constantly rework

Design

Some articulate why this way

Some not aware of why this way

Structure and design can be used to reinforce the leader's assumptions.

Organizational Systems and Procedures

Routines most visible parts of life in org: daily, weekly, monthly, quarterly, annually.

Group’s members seek this kind of order

They formalize the process of "paying attention."

Systems and procedures give consistency

Inconsistency allows for subcultures.

Rites and Rituals of the Organization

Rites and rituals may be central in deciphering as well as communicating the cultural assumptions.

They can be powerful reinforcers too.

They are only views of a limited portion of org so be careful.

Design of Physical Space, Facades, and Buildings

Visible features

Symbolic purposes

May convey philosophy

Open office means openness

REINVENTING GOVERNMENT OSBORNE

Osborne and Gaebler argue that a revolutionary restructuring of the public sector is under way — an "American Perestroika." Like the Soviet version, they believe this one is being driven largely by politicians and bureaucrats who, under great fiscal pressure, are introducing market forces into monopolistic government enterprises. In this book, they integrate hundreds of examples of these initiatives into a basically new concept of how government should function. This concept is organized into ten chapters, reflecting the ten operating principles that distinguish a new "entrepreneurial" form of government.

Osborne and Gaebler suggest that governments should: 1) steer, not row (or as Mario Cuomo put it, "it is not government's obligation to provide services, but to see that they're provided"); 2) empower communities to solve their own problems rather than simply deliver services; 3) encourage competition rather than monopolies; 4) be driven by missions, rather than rules; 5) be results-oriented by funding outcomes rather than inputs; 6) meet the needs of the customer, not the bureaucracy; 7) concentrate on earning money rather than spending it; 8) invest in preventing problems rather than curing crises; 9) decentralize authority; and 10) solve problems by influencing market forces rather than creating public programs.

The authors insist that this book does not offer original ideas. Rather, it is a comprehensive compilation of the ideas and experiences of innovative practitioners and activists across the country. The authors build on the work of a handful of political scientists who have studied bureaucratic reform efforts, especially that of James Q. Wilson, whose 1989 book Bureaucracy laid out key elements of what they call "a new paradigm." They also count Robert Reich, Alvin Toffler, and Harry Boyte among their chief influences. As they point out in the acknowledgements, however, the biggest influence on their thinking comes not from government but from management consultants like Thomas Peters, W. Edwards Deming, and Peter Drucker. These writers all recognize that corporations suffer from bureaucratic rigidities just like governments do, and that the structures of both are rooted in bygone eras. Too many corporations are still bound to the strict work rules and centralized command that marked the Industrial Age, they insist. Similarly, most government agencies are bound by civil service rules and other Progressive era reforms designed to control costs, eliminate patronage, and guarantee uniform service to the public. "Hierarchical, centralized bureaucracies designed in the 1930s or 1940s simply do not function well in the rapidly changing, information-rich, knowledge-intensive society and economy of the 1990s," they write. Suffering from the same rigidities, governments and businesses must transform themselves in essentially the same way: by flattening hierarchies, decentralizing decision-making, pursuing productivity-enhancing technologies, and stressing quality and customer satisfaction.

Osborne and Gaebler are careful to point out that while much of what is discussed in the book could be summed up under the category of market-oriented government, markets are only half the answer. Markets are impersonal, unforgiving, and, even under the most structured circumstances, inequitable, they point out. As such, they must be coupled with "the warmth and caring of families and neighborhoods and communities." They conclude that entrepreneurial governments must embrace both markets and community as they begin to shift away from administrative bureaucracies.

The search for a “third way” somewhere between socialism and capitalism is to the modern era what the search for the philosopher’s stone was to the Dark Ages. Although the implosion of the Soviet Union has put a damper on calls for more bureaucracy, most people harbor various misconceptions and fallacies that make them equally distrustful of fully free markets. So, yet another pair of would-be alchemists has set out in search of that elusive goal: big-government order without the big government.

In Reinventing Government, David Osborne and Ted Gaebler attempt to chart a course between big government and laissez faire. They want nothing to do with “ideology.” Rather, Osborne and Gaebler are technocrats in search of pragmatic answers. “Reinventing Government,” they write, “addresses how governments work, not what governments do.” Thus, from the standpoint of what governments do, the book is a proverbial grab bag of policy prescriptions, some good, some bad.

In the course of the book’s eleven chapters, Osborne and Gaebler lay the foundations for what they call “entrepreneurial government.” That is, government that is active, but bereft of bureaucracy and its attendant red tape and inefficiency. In Osborne and Gaebler’s paradigm, the problems that America presently faces are all a result of the reforms of the Progressive Era. The large bureaucracies set up to discourage corruption and abuses of power actually waste more resources through regulations and procedures than they save. To stop a small number of crooks, bureaucracy must tie the hands of honest employees.

Osborne and Gaebler outline a series of sweeping reforms, all aimed at changing the entire focus of government. The old way of thinking envisions a government that identifies problems and then introduces an agency or program to solve the problem. The result has been a plethora of large bureaucracies and programs targeted at specific problems but achieving no real success. Entrepreneurial government identifies broad social goals. Instead of being burdened with hierarchies and rules, agencies are allowed great leeway within which to meet the goals. The civil service system is replaced with a system that rewards innovation and holds employees responsible for failure to meet goals.

Old methods of budgeting are scrapped. The line-item budget does not permit innovation nor the flexibility needed to deal with the unforeseen. Furthermore, the practice of throwing more money into programs that do not work must end. Such a policy encourages failure. Instead, programs that succeed will be rewarded with the funds to expand. Programs that fail can be weeded out over time.

The purpose of government, Osborne and Gaebler contend, is not actually to deliver services, but to set policy. They call it steering rather than rowing. In order to deliver services, governments can contract out to private providers, utilize government providers in competition with private firms, or utilize different government firms in competition with each other. Only in the case of so-called natural monopolies, such as utilities, should government be the sole provider.

Osborne and Gaebler provide anecdotal evidence of the success of “entrepreneurial government.” For example, in Phoenix, Arizona, the city-owned garbage collection firm competes on an equal footing with several private firms. The result has been a decrease of 4.5 percent a year in solid-waste costs.

Another example is the East Harlem school system. East Harlem, thanks to a system of public school choice, has some of the most successful public schools in America. This is despite the dire economic conditions that prevail in the district. Students can choose between different schools (some located within the same building) operating independently, with programs tailored to meet the tastes of different students. Osborne and Gaebler seek a decentralized government, with control of programs exercised at the lowest possible level. In the Kenilworth-Parkside development in Washington, D.C., the residents were given control over their own housing project. They wrote their own by-laws and took charge of making repairs. Eventually, the residents started their own adult education program and created a fund to help finance business ventures taken on by enterprising residents.

When it replaces the bureaucracies of old, “entrepreneurial government” is indeed an improvement. Still, the effects must be temporary. “Entrepreneurial government” can only delay the inevitable. Most “entrepreneurial government” schemes still leave a gap between those who consume a service and those who pay for it. The costs are spread over a broad tax base. Only specific groups, however, actually receive the service. Thus, the “customers” are partially subsidized by non-customers. The result is that the demand for the service is greater than it would be if the customers had to pay the full price. The public good is inevitably overproduced and resources misallocated.

Of course, “entrepreneurial government” is even worse when applied to areas already relatively free of government intervention. “Entrepreneurial government” seeks to “structure” the market. As Osborne and Gaebler note, structuring “is a way of using public leverage to shape private decisions to achieve collective goals. It is a classic method of entrepreneurial governance: active government without bureaucratic government.”

The problem with Osborne and Gaebler’s analysis is its short-sighted empirical framework. In their preoccupation with finding whatever solution will “work,” they ignore basic principles of how human beings behave and how markets operate. When policy planners structure the market, they change the incentive system. Resources flow into areas in which they otherwise would not. The planners only have two choices: They can send resources to where they think they are needed, or they can send resources to where already overestimated customer demand is greatest. In either case, inefficiency will result. Even with the political reforms Osborne and Gaebler mention (campaign finance reform, term limits, etc.), decisions concerning resource allocation will still be politically motivated, not economically motivated.

Economic intervention, whether performed by a bureaucracy or an “entrepreneurial government,” will always result in inefficiency. Osborne and Gaebler’s attempt to make entrepreneurial government an end rather than a means is misguided. Bureaucracy is not the problem. Government intervention is the problem. Laissez faire is the solution.

Peter Senge and the learning organization

Peter Senge’s vision of a learning organization as a group of people who are continually enhancing their capabilities to create what they want to create has been deeply influential. We discuss the five disciplines he sees as central to learning organizations and some issues and questions concerning the theory and practice of learning organizations. 

Peter M. Senge (1947- ) was named a ‘Strategist of the Century’ by the Journal of Business Strategy, one of 24 men and women who have ‘had the greatest impact on the way we conduct business today’ (September/October 1999). While he has studied how firms and organizations develop adaptive capabilities for many years at MIT (Massachusetts Institute of Technology), it was Peter Senge’s 1990 book The Fifth Discipline that brought him firmly into the limelight and popularized the concept of the ‘learning organization’. Since its publication, more than a million copies have been sold and in 1997, Harvard Business Review identified it as one of the seminal management books of the past 75 years.

On this page we explore Peter Senge’s vision of the learning organization. We will focus on the arguments in his (1990) book The Fifth Discipline as it is here we find the most complete exposition of his thinking.

Peter Senge

Born in 1947, Peter Senge graduated in engineering from Stanford and then went on to undertake a master's in social systems modeling at MIT (Massachusetts Institute of Technology) before completing his PhD in management. Said to be a rather unassuming man, he is a senior lecturer at the Massachusetts Institute of Technology. He is also founding chair of the Society for Organizational Learning (SoL). His current areas of special interest focus on decentralizing the role of leadership in organizations so as to enhance the capacity of all people to work productively toward common goals.

Peter Senge describes himself as an 'idealistic pragmatist'. This orientation has allowed him to explore and advocate some quite ‘utopian’ and abstract ideas (especially around systems theory and the necessity of bringing human values to the workplace). At the same time he has been able to mediate these so that they can be worked on and applied by people in very different forms of organization. One aspect of this is Senge’s involvement in the Society for Organizational Learning (SoL), a Cambridge-based, non-profit membership organization. Peter Senge is its chair and co-founder. SoL is part of a ‘global community of corporations, researchers, and consultants’ dedicated to discovering, integrating, and implementing ‘theories and practices for the interdependent development of people and their institutions’. One of the interesting aspects of the Center (and linked to the theme of idealistic pragmatism) has been its ability to attract corporate sponsorship to fund pilot programmes that carry within them relatively idealistic concerns.

Aside from writing The Fifth Discipline: The Art and Practice of The Learning Organization (1990), Peter Senge has also co-authored a number of other books linked to the themes first developed in The Fifth Discipline. These include The Fifth Discipline Fieldbook: Strategies and Tools for Building a Learning Organization (1994); The Dance of Change: The Challenges to Sustaining Momentum in Learning Organizations (1999) and Schools That Learn (2000).

The learning organization

According to Peter Senge (1990: 3) learning organizations are:

…organizations where people continually expand their capacity to create the results they truly desire, where new and expansive patterns of thinking are nurtured, where collective aspiration is set free, and where people are continually learning to see the whole together.

The basic rationale for such organizations is that in situations of rapid change only those that are flexible, adaptive and productive will excel. For this to happen, it is argued, organizations need to ‘discover how to tap people’s commitment and capacity to learn at all levels’ (ibid.: 4).

While all people have the capacity to learn, the structures in which they have to function are often not conducive to reflection and engagement. Furthermore, people may lack the tools and guiding ideas to make sense of the situations they face. Organizations that are continually expanding their capacity to create their future require a fundamental shift of mind among their members. 

When you ask people about what it is like being part of a great team, what is most striking is the meaningfulness of the experience. People talk about being part of something larger than themselves, of being connected, of being generative. It becomes quite clear that, for many, their experiences as part of truly great teams stand out as singular periods of life lived to the fullest. Some spend the rest of their lives looking for ways to recapture that spirit. (Senge 1990: 13)

For Peter Senge, real learning gets to the heart of what it is to be human. We become able to re-create ourselves. This applies to both individuals and organizations. Thus, for a ‘learning organization’ it is not enough to survive. ‘”Survival learning” or what is more often termed “adaptive learning” is important – indeed it is necessary. But for a learning organization, “adaptive learning” must be joined by “generative learning”, learning that enhances our capacity to create’ (Senge 1990: 14).

The dimension that distinguishes learning organizations from more traditional organizations is the mastery of certain basic disciplines or ‘component technologies’. The five that Peter Senge identifies are said to be converging to innovate learning organizations. They are:

Systems thinking

Personal mastery

Mental models

Building shared vision

Team learning

He adds to this recognition that people are agents, able to act upon the structures and systems of which they are a part. All the disciplines are, in this way, ‘concerned with a shift of mind from seeing parts to seeing wholes, from seeing people as helpless reactors to seeing them as active participants in shaping their reality, from reacting to the present to creating the future’ (Senge 1990: 69). It is to the disciplines that we will now turn.

Systems thinking – the cornerstone of the learning organization

A great virtue of Peter Senge’s work is the way in which he puts systems theory to work. The Fifth Discipline provides a good introduction to the basics and uses of such theory – and the way in which it can be brought together with other theoretical devices in order to make sense of organizational questions and issues. Systemic thinking is the conceptual cornerstone (‘The Fifth Discipline’) of his approach. It is the discipline that integrates the others, fusing them into a coherent body of theory and practice (ibid.: 12). Systems theory’s ability to comprehend and address the whole, and to examine the interrelationship between the parts provides, for Peter Senge, both the incentive and the means to integrate the disciplines.

Here is not the place to go into a detailed exploration of Senge’s presentation of systems theory (I have included some links to primers below). However, it is necessary to highlight one or two elements of his argument. First, while the basic tools of systems theory are fairly straightforward they can build into sophisticated models. Peter Senge argues that one of the key problems with much that is written about, and done in the name of management, is that rather simplistic frameworks are applied to what are complex systems. We tend to focus on the parts rather than seeing the whole, and to fail to see organization as a dynamic process. Thus, the argument runs, a better appreciation of systems will lead to more appropriate action.

‘We learn best from our experience, but we never directly experience the consequences of many of our most important decisions’, Peter Senge (1990: 23) argues with regard to organizations. We tend to think that cause and effect will be relatively near to one another. Thus when faced with a problem, it is the ‘solutions’ that are close by that we focus upon. Classically we look to actions that produce improvements in a relatively short time span. However, when viewed in systems terms short-term improvements often involve very significant long-term costs. For example, cutting back on research and design can bring very quick cost savings, but can severely damage the long-term viability of an organization. Part of the problem is the nature of the feedback we receive. Some of the feedback will be reinforcing (or amplifying) – with small changes building on themselves. ‘Whatever movement occurs is amplified, producing more movement in the same direction. A small action snowballs, with more and more and still more of the same, resembling compound interest’ (Senge 1990: 81). Thus, we may cut our advertising budgets, see the benefits in terms of cost savings, and in turn further trim spending in this area. In the short run there may be little impact on people’s demands for our goods and services, but longer term the decline in visibility may have severe penalties. An appreciation of systems will lead to recognition of the use of, and problems with, such reinforcing feedback, and also an understanding of the place of balancing (or stabilizing) feedback. (See, also, Kurt Lewin on feedback.) A further key aspect of systems is the extent to which they inevitably involve delays – ‘interruptions in the flow of influence which make the consequences of an action occur gradually’ (ibid.: 90). Peter Senge (1990: 92) concludes:

The systems viewpoint is generally oriented toward the long-term view. That’s why delays and feedback loops are so important. In the short term, you can often ignore them; they’re inconsequential. They only come back to haunt you in the long term.

Peter Senge advocates the use of ‘systems maps’ – diagrams that show the key elements of systems and how they connect. However, people often have a problem ‘seeing’ systems, and it takes work to acquire the basic building blocks of systems theory, and to apply them to your organization. On the other hand, failure to understand system dynamics can lead us into ‘cycles of blaming and self-defense: the enemy is always out there, and problems are always caused by someone else’ (Bolman and Deal 1997: 27; see, also, Senge 1990: 231).
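
The advertising example above can be made concrete with a small numerical sketch. The following Python snippet is illustrative only and is not from Senge; the growth rates, the delay length, and the function name simulate are assumptions chosen to show how a reinforcing cut-the-budget loop, combined with a delayed demand response, hides the long-term cost of a short-term saving.

```python
# Illustrative sketch only (not Senge's): a reinforcing loop with a delay.
# Cutting the advertising budget saves money now, but demand reacts to the
# budget level from several periods ago, so the damage shows up late.
# All numbers are invented for illustration.

def simulate(periods: int = 12, cut_after: int = 3, delay: int = 4) -> None:
    budget, demand = 100.0, 1000.0
    history = []                                   # remembered budget levels
    for t in range(periods):
        if t >= cut_after:
            budget *= 0.9                          # reinforcing loop: keep trimming spend
        history.append(budget)
        past_budget = history[max(0, t - delay)]   # demand sees an old budget level
        demand *= 1.0 + 0.02 * (past_budget - 100.0) / 100.0
        print(f"t={t:2d}  budget={budget:6.1f}  demand={demand:7.1f}")

simulate()
```

In this toy run, demand barely moves for the first few periods after the cut, which is exactly the feedback that tempts further cuts; only once the delay has elapsed does the decline begin.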

The core disciplines

Alongside systems thinking, there stand four other ‘component technologies’ or disciplines. A ‘discipline’ is viewed by Peter Senge as a series of principles and practices that we study, master and integrate into our lives. The five disciplines can be approached at one of three levels:

Practices: what you do.

Principles: guiding ideas and insights.

Essences: the state of being of those with high levels of mastery in the discipline (Senge 1990: 373).

Each discipline provides a vital dimension. Each is necessary to the others if organizations are to ‘learn’.

Personal mastery. ‘Organizations learn only through individuals who learn. Individual learning does not guarantee organizational learning. But without it no organizational learning occurs’ (Senge 1990: 139). Personal mastery is ‘the discipline of continually clarifying and deepening our personal vision, of focusing our energies, of developing patience, and of seeing reality objectively’ (ibid.: 7). It goes beyond competence and skills, although it involves them. It goes beyond spiritual opening, although it involves spiritual growth (ibid.: 141). Mastery is seen as a special kind of proficiency. It is not about dominance, but rather about calling. Vision is vocation rather than simply just a good idea.

People with a high level of personal mastery live in a continual learning mode. They never ‘arrive’. Sometimes, language, such as the term ‘personal mastery’ creates a misleading sense of definiteness, of black and white. But personal mastery is not something you possess. It is a process. It is a lifelong discipline. People with a high level of personal mastery are acutely aware of their ignorance, their incompetence, their growth areas. And they are deeply self-confident. Paradoxical? Only for those who do not see the ‘journey is the reward’. (Senge 1990: 142)

In writing such as this we can see the appeal of Peter Senge’s vision. It has deep echoes in the concerns of writers such as M. Scott Peck (1990) and Erich Fromm (1979). The discipline entails developing personal vision; holding creative tension (managing the gap between our vision and reality); recognizing structural tensions and constraints, and our own power (or lack of it) with regard to them; a commitment to truth; and using the sub-conscious (ibid.: 147-167).

Mental models.  These are ‘deeply ingrained assumptions, generalizations, or even pictures and images that influence how we understand the world and how we take action’ (Senge 1990: 8). As such they resemble what Donald A Schön talked about as a professional’s ‘repertoire’. We are often not that aware of the impact of such assumptions etc. on our behaviour – and, thus, a fundamental part of our task (as Schön would put it) is to develop the ability to reflect-in- and –on-action. Peter Senge is also influenced here by Schön’s collaborator on a number of projects, Chris Argyris.

The discipline of mental models starts with turning the mirror inward; learning to unearth our internal pictures of the world, to bring them to the surface and hold them rigorously to scrutiny. It also includes the ability to carry on ‘learningful’ conversations that balance inquiry and advocacy, where people expose their own thinking effectively and make that thinking open to the influence of others. (Senge 1990: 9)

If organizations are to develop a capacity to work with mental models then it will be necessary for people to learn new skills and develop new orientations, and for there to be institutional changes that foster such change. ‘Entrenched mental models… thwart changes that could come from systems thinking’ (ibid.: 203). Moving the organization in the right direction entails working to transcend the sorts of internal politics and game playing that dominate traditional organizations. In other words it means fostering openness (Senge 1990: 273-286). It also involves seeking to distribute business responsibly far more widely while retaining coordination and control. Learning organizations are localized organizations (ibid.: 287-301).

Building shared vision. Peter Senge starts from the position that if any one idea about leadership has inspired organizations for thousands of years, ‘it’s the capacity to hold a shared picture of the future we seek to create’ (1990: 9). Such a vision has the power to be uplifting – and to encourage experimentation and innovation. Crucially, it is argued, it can also foster a sense of the long-term, something that is fundamental to the ‘fifth discipline’.

When there is a genuine vision (as opposed to the all-too-familiar ‘vision statement’), people excel and learn, not because they are told to, but because they want to. But many leaders have personal visions that never get translated into shared visions that galvanize an organization… What has been lacking is a discipline for translating vision into shared vision - not a ‘cookbook’ but a set of principles and guiding practices.

The practice of shared vision involves the skills of unearthing shared ‘pictures of the future’ that foster genuine commitment and enrolment rather than compliance. In mastering this discipline, leaders learn the counter-productiveness of trying to dictate a vision, no matter how heartfelt. (Senge 1990: 9)

Visions spread because of a reinforcing process. Increased clarity, enthusiasm and commitment rub off on others in the organization. ‘As people talk, the vision grows clearer. As it gets clearer, enthusiasm for its benefits grow’ (ibid.: 227). There are ‘limits to growth’ in this respect, but developing the sorts of mental models outlined above can significantly improve matters. Where organizations can transcend linear thinking and grasp systems thinking, there is the possibility of bringing vision to fruition.

Team learning. Such learning is viewed as ‘the process of aligning and developing the capacities of a team to create the results its members truly desire’ (Senge 1990: 236). It builds on personal mastery and shared vision – but these are not enough. People need to be able to act together. When teams learn together, Peter Senge suggests, not only can there be good results for the organization, members will grow more rapidly than could have occurred otherwise.

The discipline of team learning starts with ‘dialogue’, the capacity of members of a team to suspend assumptions and enter into a genuine ‘thinking together’. To the Greeks dia-logos meant a free-flowing of meaning through a group, allowing the group to discover insights not attainable individually…. [It] also involves learning how to recognize the patterns of interaction in teams that undermine learning. (Senge 1990: 10)

The notion of dialogue that flows through The Fifth Discipline is very heavily dependent on the work of the physicist, David Bohm (where a group ‘becomes open to the flow of a larger intelligence’, and thought is approached largely as a collective phenomenon). When dialogue is joined with systems thinking, Senge argues, there is the possibility of creating a language more suited for dealing with complexity, and of focusing on deep-seated structural issues and forces rather than being diverted by questions of personality and leadership style. Indeed, such is the emphasis on dialogue in his work that it could almost be put alongside systems thinking as a central feature of his approach.

Leading the learning organization

Peter Senge argues that learning organizations require a new view of leadership. He sees the traditional view of leaders (as special people who set the direction, make key decisions and energize the troops) as deriving from a deeply individualistic and non-systemic worldview (1990: 340). At its centre the traditional view of leadership ‘is based on assumptions of people’s powerlessness, their lack of personal vision and inability to master the forces of change, deficits which can be remedied only by a few great leaders’ (op. cit.). Against this traditional view he sets a ‘new’ view of leadership that centres on ‘subtler and more important tasks’.

In a learning organization, leaders are designers, stewards and teachers. They are responsible for building organizations where people continually expand their capabilities to understand complexity, clarify vision, and improve shared mental models – that is they are responsible for learning…. Learning organizations will remain a ‘good idea’… until people take a stand for building such organizations. Taking this stand is the first leadership act, the start of inspiring (literally ‘to breathe life into’) the vision of the learning organization. (Senge 1990: 340)

Many of the qualities that Peter Senge discusses with regard to leading the learning organization can be found in the shared leadership model (discussed elsewhere on these pages). For example, what Senge approaches as inspiration can be approached as animation. Here we will look at the three aspects of leadership that he identifies – and link his discussion with some other writers on leadership.

Leader as designer. The functions of design are rarely visible, Peter Senge argues, yet no one has a more sweeping influence than the designer (1990: 341). The organization’s policies, strategies and ‘systems’ are key areas of design, but leadership goes beyond this. Integrating the five component technologies is fundamental. However, the first task entails designing the governing ideas – the purpose, vision and core values by which people should live. Building a shared vision is crucial early on as it ‘fosters a long-term orientation and an imperative for learning’ (ibid.: 344). Other disciplines also need to be attended to, but just how they are to be approached is dependent upon the situation faced. In essence, ‘the leaders’ task is designing the learning processes whereby people throughout the organization can deal productively with the critical issues they face, and develop their mastery in the learning disciplines’ (ibid.: 345).

Leader as steward. While the notion of leader as steward is, perhaps, most commonly associated with writers such as Peter Block (1993), Peter Senge has some interesting insights on this strand. His starting point was the ‘purpose stories’ that the managers he interviewed told about their organization. He came to realize that the managers were doing more than telling stories, they were relating the story: ‘the overarching explanation of why they do what they do, how their organization needs to evolve, and how that evolution is part of something larger’ (Senge 1990: 346). Such purpose stories provide a single set of integrating ideas that give meaning to all aspects of the leader’s work – and not unexpectedly ‘the leader develops a unique relationship to his or her own personal vision. He or she becomes a steward of the vision’ (op. cit.). One of the important things to grasp here is that stewardship involves a commitment to, and responsibility for the vision, but it does not mean that the leader owns it. It is not their possession. Leaders are stewards of the vision, their task is to manage it for the benefit of others (hence the subtitle of Block’s book – ‘Choosing service over self-interest’). Leaders learn to see their vision as part of something larger. Purpose stories evolve as they are being told, ‘in fact, they are as a result of being told’ (Senge 1990: 351). Leaders have to learn to listen to other people’s vision and to change their own where necessary. Telling the story in this way allows others to be involved and to help develop a vision that is both individual and shared.

Leader as teacher. Peter Senge starts here with Max de Pree’s (1990) injunction that the first responsibility of a leader is to define reality. While leaders may draw inspiration and spiritual reserves from their sense of stewardship, ‘much of the leverage leaders can actually exert lies in helping people achieve more accurate, more insightful and more empowering views of reality’ (Senge 1990: 353). Building on an existing ‘hierarchy of explanation’ leaders, Peter Senge argues, can influence people’s view of reality at four levels: events, patterns of behaviour, systemic structures and the ‘purpose story’. By and large most managers and leaders tend to focus on the first two of these levels (and under their influence organizations do likewise). Leaders in learning organizations attend to all four, ‘but focus predominantly on purpose and systemic structure. Moreover they “teach” people throughout the organization to do likewise’ (Senge 1993: 353). This allows them to see ‘the big picture’ and to appreciate the structural forces that condition behaviour. By attending to purpose, leaders can cultivate an understanding of what the organization (and its members) are seeking to become. One of the issues here is that leaders often have strengths in one or two of the areas but are unable, for example, to develop systemic understanding. A key to success is being able to conceptualize insights so that they become public knowledge, ‘open to challenge and further improvement’ (ibid.: 356).

“Leader as teacher” is not about “teaching” people how to achieve their vision. It is about fostering learning, for everyone. Such leaders help people throughout the organization develop systemic understandings. Accepting this responsibility is the antidote to one of the most common downfalls of otherwise gifted teachers – losing their commitment to the truth. (Senge 1990: 356)

Leaders have to create and manage creative tension – especially around the gap between vision and reality. Mastery of such tension allows for a fundamental shift. It enables the leader to see the truth in changing situations.

Issues and problems

When making judgements about Peter Senge’s work, and the ideas he promotes, we need to place his contribution in context. His is not meant to be a definitive addition to the ‘academic’ literature of organizational learning. Peter Senge writes for practicing and aspiring managers and leaders. The concern is to identify how interventions can be made to turn organizations into ‘learning organizations’. Much of his, and similar theorists’
efforts have been ‘devoted to identifying templates, which real organizations could attempt to emulate’ (Easterby-Smith and Araujo 1999: 2). In this field some of the significant contributions have been based around studies of organizational practice, while others have ‘relied more on theoretical principles, such as systems dynamics or psychological learning theory, from which implications for design and implementation have been derived’ (op. cit.). Peter Senge, while making use of individual case studies, tends to the latter orientation.

The most appropriate question in respect of this contribution would seem to be whether it fosters praxis – informed, committed action – on the part of those it is aimed at. This is an especially pertinent question as Peter Senge looks to promote a more holistic vision of organizations and the lives of people within them. Here we focus on three aspects. We start with the organization.

Organizational imperatives. Here the case against Peter Senge is fairly simple. We can find very few organizations that come close to the combination of characteristics that he identifies with the learning organization. Within a capitalist system his vision of companies and organizations turning wholeheartedly to the cultivation of the learning of their members can only come to fruition in a limited number of instances. While those in charge of organizations will usually look in some way to the long-term growth and sustainability of their enterprise, they may not focus on developing the human resources that the organization houses. The focus may well be on enhancing brand recognition and status (Klein 2001); developing intellectual capital and knowledge (Leadbeater 2000); delivering product innovation; and ensuring that production and distribution costs are kept down. As Will Hutton (1995: 8) has argued, British companies’ priorities are overwhelmingly financial. What is more, ‘the targets for profit are too high and time horizons too short’ (1995: xi). Such conditions are hardly conducive to building the sort of organization that Peter Senge proposes. Here the case against Senge is that within capitalist organizations, where the bottom line is profit, a fundamental concern with the learning and development of employees and associates is simply too idealistic.

Yet there are some currents running in Peter Senge’s favour. The need to focus on knowledge generation within an increasingly globalized economy does bring us back in some important respects to the people who have to create intellectual capital.

Productivity and competitiveness are, by and large, a function of knowledge generation and information processing: firms and territories are organized in networks of production, management and distribution; the core economic activities are global – that is they have the capacity to work as a unit in real time, or chosen time, on a planetary scale. (Castells 2001: 52)

A failure to attend to the learning of groups and individuals in the organization spells disaster in this context. As Leadbeater (2000: 70) has argued, companies need to invest not just in new machinery to make production more efficient, but in the flow of know-how that will sustain their business. Organizations need to be good at knowledge generation, appropriation and exploitation. This process is not that easy:

Knowledge that is visible tends to be explicit, teachable, independent, detachable; it is also easy for competitors to imitate. Knowledge that is intangible, tacit, less teachable, less observable, is more complex but more difficult to detach from the person who created it or the context in which it is embedded. Knowledge carried by an individual only realizes its commercial potential when it is replicated by an organization and becomes organizational knowledge. (ibid.: 71)

Here we have a very significant pressure for the fostering of ‘learning organizations’. The sort of know-how that Leadbeater is talking about here cannot be simply transmitted. It has to be engaged with, talked about and embedded in organizational structures and strategies. It has to become people’s own.

A question of sophistication and disposition. One of the biggest problems with Peter Senge’s approach has nothing to do with the theory, its rightness, or the way it is presented. The issue here is that the people to whom it is addressed do not have the disposition or theoretical tools to follow it through. One clue lies in his choice of ‘disciplines’ to describe the core of his approach. As we saw, a discipline is a series of principles and practices that we study, master and integrate into our lives. In other words, the approach entails significant effort on the part of the practitioner. It also entails developing quite complicated mental models, and being able to apply and adapt these to different situations – often on the hoof. Classically, the approach involves a shift from product to process (and back again). The question then becomes whether many people in organizations can handle this. All this has a direct parallel within formal education. One of the reasons that product approaches to curriculum (as exemplified in the concern for SATs, examination performance and school attendance) have assumed such a dominance is that alternative process approaches are much more difficult to do well. They may be superior – but many teachers lack the sophistication to carry them forward. There are also psychological and social barriers. As Lawrence Stenhouse put it some years ago: ‘The close examination of one’s professional performance is personally threatening; and the social climate in which teachers work generally offers little support to those who might be disposed to face that threat’ (1975: 159). We can make the same case for people in most organizations.

The process of exploring one’s performance, personality and fundamental aims in life (and this is what Peter Senge is proposing) is a daunting task for most people. To do it we need considerable support, and the motivation to carry the task through some very uncomfortable periods. It calls for the integration of different aspects of our lives and experiences. There is, here, a straightforward question concerning the vision – will people want to sign up to it? To make sense of the sorts of experiences generated and explored in a fully functioning ‘learning organization’ there needs to be ‘spiritual growth’ and the ability to locate these within some sort of framework of commitment. Thus, as employees, we are not simply asked to do our jobs and to get paid. We are also requested to join in something bigger. Many of us may just want to earn a living!

Politics and vision. Here we need to note two key problem areas. First, there is a question of how Peter Senge applies systems theory. While he introduces all sorts of
broader appreciations and attends to values – his theory is not fully set in a political or moral framework. There is no consideration of questions of social justice, democracy and exclusion. His approach largely operates at the level of organizational interests. This would not be such a significant problem if there was a more explicit vision of the sort of society that he would like to see attained, and attention to this with regard to management and leadership. As a contrast we might turn to Peter Drucker’s (1977: 36) elegant discussion of the dimensions of management. He argued that there are three tasks – ‘equally important but essentially different’ – that face the management of every organization. These are:

To think through and define the specific purpose and mission of the institution, whether business enterprise, hospital, or university.

To make work productive and the worker achieving.

To manage social impacts and social responsibilities. (op. cit.)

He continues:

None of our institutions exists by itself and as an end in itself. Every one is an organ of society and exists for the sake of society. Business is no exception. ‘Free enterprise’ cannot be justified as being good for business. It can only be justified as being good for society. (Drucker 1977: 40)

If Peter Senge had attempted greater connection between the notion of the ‘learning organization’ and the ‘learning society’, and paid attention to the political and social impact of organizational activity then this area of criticism would be limited to the question of the particular vision of society and human flourishing involved.

Second, there is some question with regard to political processes concerning his emphasis on dialogue and shared vision. While Peter Senge clearly recognizes the political dimensions of organizational life, there is a sneaking suspicion that he may want to transcend it. In some ways there is a link here with the concerns and interests of communitarian thinkers like Amitai Etzioni (1995, 1997). As Richard Sennett (1998: 143) argues with regard to political communitarianism, it ‘falsely emphasizes unity as the source of strength in a community and mistakenly fears that when conflicts arise in a community, social bonds are threatened’. Within it (and arguably aspects of Peter Senge’s vision of the learning organization) there seems, at times, to be a dislike of politics and a tendency to see danger in plurality and difference. Here there is a tension between the concern for dialogue and the interest in building a shared vision. An alternative reading is that difference is good for democratic life (and organizational life) provided that we cultivate a sense of reciprocity, and ways of working that encourage deliberation. The search is not for the sort of common good that many communitarians seek (Guttman and Thompson 1996: 92) but rather for ways in which people may share in a common life. Moral disagreement will persist – the key is whether we can learn to respect and engage with each other’s ideas, behaviours and beliefs.

Conclusion

John van Maurik (2001: 201) has suggested that Peter Senge has been ahead of his time and that his arguments are insightful and revolutionary. He goes on to say that it is a matter of regret ‘that more organizations have not taken his advice and have remained geared to the quick fix’. As we have seen there are very deep-seated reasons why this may have been the case. Beyond this, though, there is the question of whether Senge’s vision of the learning organization and the disciplines it requires has contributed to more informed and committed action with regard to organizational life. Here we have little concrete evidence to go on. However, we can make some judgements about the possibilities of his theories and proposed practices. We could say that while there are some issues and problems with his conceptualization, at least it does carry within it some questions around what might make for human flourishing. The emphases on building a shared vision, team working, personal mastery and the development of more sophisticated mental models, and the way he runs the notion of dialogue through these, have the potential to allow workplaces to be more convivial and creative. The drawing together of the elements via the Fifth Discipline of systemic thinking, while not being to everyone’s taste, also allows us to approach a more holistic understanding of organizational life (although Peter Senge does himself stop short of asking some important questions in this respect). These are still substantial achievements – and when linked to his popularizing of the notion of the ‘learning organization’ – it is understandable why Peter Senge has been recognized as a key thinker.

The Fifth Discipline: The Art and Practice of the Learning Organization (Senge 1990) is a book by Peter Senge (a senior lecturer at MIT) focusing on group problem solving using the systems thinking method in order to convert companies into learning organizations. The five disciplines represent approaches (theories and methods) for developing three core learning capabilities: fostering aspiration, developing reflective conversation, and understanding complexity.

The Five Disciplines

The five disciplines of the learning organization discussed in the book are:

1) "Personal mastery is a discipline of continually clarifying and deepening our personal vision, of focusing our energies, of developing patience, and of seeing reality objectively." (p. 7)

2) "Mental models are deeply ingrained assumptions, generalizations, or even pictures of images that influence how we understand the world and how we take action." (p. 8)

3) "Building shared vision a practice of unearthing shared pictures of the future that foster genuine commitment and enrollment rather than compliance." (p. 9)

4) "Team learning starts with dialogue, the capacity of members of a team to suspend assumptions and enter into genuine thinking together." (p. 10)

5) Systems thinking - The Fifth Discipline that integrates the other four.

"Systems thinking also needs the disciplines of building shared vision, mental models, team learning, and personal mastery to realize its potential. Building shared vision fosters a commitment to the long term. Mental models focus on the openness needed to unearth shortcomings in our present ways of seeing the world. Team learning develops the skills of groups of people to look for the larger picture beyond individual perspectives. And personal mastery fosters the personal motivation to continually learn how our actions affect our world."

The Learning Disabilities

1) "I am my position."

People fail to recognize their purpose as a part of the enterprise. Instead, they see themselves as an inconsequential part of a system over which they have little influence, leading them to limit themselves to the jobs they must perform at their own positions. This makes it hard to pinpoint the reason an enterprise is failing, with so many hidden 'loose screws' around.

2) "The enemy out there." 3) The Illusion of Taking Charge

4) The Fixation of Events

The tendency to see things as results of short-term events undermines our ability to see things on a grander scale. Cave men needed to react to events quickly for survival. However, the biggest threats we face nowadays are rarely sudden events, but slow, gradual processes, such as environmental changes.

5) The Parable of the Boiling Frog

6) The Delusion of Learning from Experience

7) The Myth of the Management Team

The 11 Laws of the Fifth Discipline

1) Today's problems come from yesterday's "solutions."

2) The harder you push, the harder the system pushes back.

3) Behavior will grow better before it grows worse.

4) The easy way out usually leads back in.

5) The cure can be worse than the disease.

6) Faster is slower.

7) Cause and effect are not closely related in time and space.

8) Small changes can produce big results...but the areas of highest leverage are often the least obvious.

9) You can have your cake and eat it too – but not all at once.

10) Dividing an elephant in half does not produce two small elephants.

11) There is no blame.

Deming's 14 points

W Edwards Deming was an American statistician who was credited with the rise of Japan as a manufacturing nation, and with the invention of Total Quality Management (TQM). Deming went to Japan just after the War to help set up a census of the Japanese population. While he was there, he taught 'statistical process control' to Japanese engineers - a set of techniques which allowed them to manufacture high-quality goods without expensive machinery. In 1960 he was awarded a medal by the Japanese Emperor for his services to that country's industry.

Deming returned to the US and spent some years in obscurity before the publication of his book "Out of the crisis" in 1982. In this book, Deming set out 14 points which, if applied to US manufacturing industry, would, he believed, save the US from industrial doom at the hands of the Japanese.

Although Deming does not use the term Total Quality Management in his book, it is credited with launching the movement. Most of the central ideas of TQM are contained in "Out of the crisis".

The 14 points seem at first sight to be a rag-bag of radical ideas, but the key to understanding a number of them lies in Deming's thoughts about variation. Variation was seen by Deming as the disease that threatened US manufacturing. The more variation - in the length of parts supposed to be uniform, in delivery times, in prices, in work practices - the more waste, he reasoned.
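
Deming's premise that variation means waste can be made concrete with a very small statistical-process-control style check. The sketch below is illustrative only: the 12-inch nominal length, the sample measurements and the plain mean plus-or-minus three standard deviations limits are assumptions for the example, not figures taken from Deming's own material.

# Illustrative sketch only: a simplified Shewhart-style control check in the spirit
# of statistical process control. The measurements and the 3-sigma rule used here
# are assumptions for the example, not figures from Deming's work.
import statistics

# Hypothetical baseline measurements (inches) taken while the process was stable.
baseline = [12.01, 11.98, 12.03, 12.00, 11.97, 12.02, 11.99, 12.01]

mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)      # estimate of common-cause variation
ucl = mean + 3 * sigma                  # upper control limit
lcl = mean - 3 * sigma                  # lower control limit

# New parts coming off the line: points outside the limits suggest a special cause.
new_parts = [12.00, 12.03, 12.30, 11.98]
for i, x in enumerate(new_parts):
    status = "out of control" if (x > ucl or x < lcl) else "within limits"
    print(f"part {i}: {x:.2f} in -> {status}")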

From this premise, he set out his 14 points for management, which we have paraphrased here:

1."Create constancy of purpose towards improvement". Replace short-term reaction with long-term planning. 

2."Adopt the new philosophy". The implication is that management should actually adopt his philosophy, rather than merely expect the workforce to do so.

3."Cease dependence on inspection". If variation is reduced, there is no need to inspect manufactured items for defects, because there won't be any. 

4."Move towards a single supplier for any one item." Multiple suppliers mean variation between feedstocks. 

5."Improve constantly and forever". Constantly strive to reduce variation. 

6."Institute training on the job". If people are inadequately trained, they will not all work the same way, and this will introduce variation. 

7."Institute leadership". Deming makes a distinction between leadership and mere supervision. The latter is quota- and target-based. 

8."Drive out fear". Deming sees management by fear as counter- productive in the long term, because it prevents workers from acting in the organisation's best interests. 

9."Break down barriers between departments". Another idea central to TQM is the concept of the 'internal customer', that each department serves not the management, but the other departments that use its outputs. 

10."Eliminate slogans". Another central TQM idea is that it's not people who make most mistakes - it's the process they are working within. Harassing the workforce without improving the processes they use is counter-productive. 

11."Eliminate management by objectives". Deming saw production targets as encouraging the delivery of poor-quality goods. 

12."Remove barriers to pride of workmanship". Many of the other problems outlined reduce worker satisfaction. 

13."Institute education and self-improvement". 

14."The transformation is everyone's job". 

William Edwards Deming (October 14, 1900 – December 20, 1993) was an American statistician, professor, author, lecturer, and consultant. He is perhaps best known for his work in Japan. There, from 1950 onward, he taught top management how to improve design (and thus service), product quality, testing and sales (the last through global markets)[1] through various methods, including the application of statistical methods.

Deming made a significant contribution to Japan's later reputation for innovative high-quality products and its economic power. He is regarded as having had more impact upon
Japanese manufacturing and business than any other individual not of Japanese heritage. Despite being considered something of a hero in Japan, he was only just beginning to win widespread recognition in the U.S. at the time of his death.

Dr. Deming's teachings and philosophy can be seen through the results they produced when they were adopted by Japanese industry, as the following example shows: Ford Motor Company was simultaneously manufacturing a car model with transmissions made in Japan and the United States. Soon after the car model was on the market, Ford customers were requesting the model with Japanese transmission over the USA-made transmission, and they were willing to wait for the Japanese model. As both transmissions were made to the same specifications, Ford engineers could not understand the customer preference for the model with Japanese transmission. Finally, Ford engineers decided to take apart the two different transmissions. The American-made car parts were all within specified tolerance levels. On the other hand, the Japanese car parts were virtually identical to each other, and much closer to the nominal values for the parts - e.g., if a part were supposed to be one foot long, plus or minus 1/8 of an inch - then the Japanese parts were within 1/16 of an inch. This made the Japanese cars run more smoothly and customers experienced fewer problems. Engineers at Ford could not understand how this was done, until they met Deming.[3]
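
To make the point of the transmission story concrete, the following sketch compares two hypothetical batches of parts that both satisfy the same plus-or-minus 1/8 inch specification but differ sharply in how tightly they cluster around the nominal value. The numbers are invented for illustration; they are not Ford or Deming data.

# Illustrative sketch only: hypothetical measurements showing how two batches can
# both be 'within specification' yet differ greatly in variation, which is the
# point of the transmission comparison described above.
import statistics

NOMINAL = 12.0          # target length in inches
TOLERANCE = 1.0 / 8.0   # specification: plus or minus 1/8 inch

batch_a = [12.11, 11.90, 12.08, 11.93, 12.10, 11.88, 12.05, 11.95]  # loose but in spec
batch_b = [12.01, 11.99, 12.02, 12.00, 11.98, 12.01, 12.00, 11.99]  # tightly clustered

for name, batch in [("A", batch_a), ("B", batch_b)]:
    in_spec = all(abs(x - NOMINAL) <= TOLERANCE for x in batch)
    spread = statistics.stdev(batch)
    worst = max(abs(x - NOMINAL) for x in batch)
    print(f"batch {name}: in spec={in_spec}, std dev={spread:.3f}, "
          f"largest deviation from nominal={worst:.3f} in")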

Deming received a BSc in electrical engineering from the University of Wyoming at Laramie (1921), an M.S. from the University of Colorado (1925), and a Ph.D. from Yale University (1928). Both graduate degrees were in mathematics and physics. Deming had an internship at Bell Telephone Laboratories while studying at Yale. He later worked at the U.S. Department of Agriculture and the Census Bureau. While working under Gen. Douglas MacArthur as a census consultant to the Japanese government, he famously taught statistical process control methods to Japanese business leaders, returning to Japan for many years to consult and to witness economic growth that he had predicted would come as a result of application of techniques learned from Walter Shewhart at Bell Laboratories. Later, he became a professor at New York University while engaged as an independent consultant in Washington, D.C.

Deming was the author of Out of the Crisis (1982–1986) and The New Economics for Industry, Government, Education (1993), which includes his System of Profound Knowledge and the 14 Points for Management (described below). Deming played flute & drums and composed music throughout his life, including sacred choral compositions and an arrangement of The Star Spangled Banner.[4]

In 1993, Deming founded the W. Edwards Deming Institute in Washington, D.C., where the Deming Collection at the U.S. Library of Congress includes an extensive audiotape and videotape archive. The aim of the W. Edwards Deming Institute is to foster understanding of The Deming System of Profound Knowledge to advance commerce, prosperity, and peace.

The 14 points are a basis for transformation of [American] industry. Adoption and action on the 14 points are a signal that management intend to stay in business and aim to
protect investors and jobs. Such a system formed the basis for lessons for top management in Japan in 1950 and in subsequent years.

The 14 points apply anywhere, to small organisations as well as to large ones, to the service industry as well as to manufacturing. They apply to a division within a company.

1. Create constancy of purpose toward improvement of product and service, with the aim to become competitive and to stay in business, and to provide jobs.

2. Adopt the new philosophy. We are in a new economic age. Western management must awaken to the challenge, must learn their responsibilities, and take on leadership for change.

3. Cease dependence on inspection to achieve quality. Eliminate the need for inspection on a mass basis by building quality into the product in the first place.

4. End the practice of awarding business on the basis of price tag. Instead, minimise total cost. Move towards a single supplier for any one item, on a long-term relationship of loyalty and trust.

5. Improve constantly and forever the system of production and service, to improve quality and productivity, and thus constantly decrease costs.

6. Institute training on the job.

7. Institute leadership. The aim of supervision should be to help people and machines and gadgets to do a better job. Supervision of management is in need of an overhaul, as well as supervision of production workers.

8. Drive out fear, so that everyone may work effectively for the company.

9. Break down barriers between departments. People in research, design, sales, and production must work as a team, to foresee problems of production and in use that may be encountered with the product or service.

10. Eliminate slogans, exhortations, and targets for the workforce asking for zero defects and new levels of productivity. Such exhortations only create adversarial relationships, as the bulk of the causes of low quality and low productivity belong to the system and thus lie beyond the power of the work force.

11. a. Eliminate work standards (quotas) on the factory floor. Substitute leadership.

b. Eliminate management by objective. Eliminate management by numbers, numerical goals. Substitute leadership.

12. a. Remove barriers that rob the hourly paid worker of his right to pride in workmanship. The responsibility of supervisors must be changed from sheer numbers to quality.

b. Remove barriers that rob people in management and engineering of their right to pride in workmanship. This means, inter alia, abolishment of the annual or merit rating and management by objective.

13. Institute a vigorous program of education and self-improvement.

14. Put everybody in the company to work to accomplish the transformation. The transformation is everybody's job.

Point 1: Create constancy of purpose toward improvement of the product and service so as to become competitive, stay in business and provide jobs.

Point 2: Adopt the new philosophy. We are in a new economic age. We no longer need live with commonly accepted levels of delay, mistake, defective material and defective workmanship.

Point 3: Cease dependence on mass inspection; require, instead, statistical evidence that quality is built in.

Point 4: Improve the quality of incoming materials. End the practice of awarding business on the basis of a price alone. Instead, depend on meaningful measures of quality, along with price.

Point 5: Find the problems; constantly improve the system of production and service. There should be continual reduction of waste and continual improvement of quality in every activity so as to yield a continual rise in productivity and a decrease in costs.

Point 6: Institute modern methods of training and education for all. Modern methods of on-the-job training use control charts to determine whether a worker has been properly trained and is able to perform the job correctly. Statistical methods must be used to discover when training is complete.

Point 7: Institute modern methods of supervision. The emphasis of production supervisors must be to help people to do a better job. Improvement of quality will automatically improve productivity. Management must prepare to take immediate action on response from supervisors concerning problems such as inherited defects, lack of maintenance of machines, poor tools or fuzzy operational definitions.

Point 8: Fear is a barrier to improvement so drive out fear by encouraging effective two-way communication and other mechanisms that will enable everybody to be part of change, and to belong to it.

Fear can often be found at all levels in an organization: fear of change, fear of the fact that it may be necessary to learn a better way of working and fear that their positions might be usurped frequently affect middle and higher management, whilst on the shop-floor, workers can also fear the effects of change on their jobs.

Point 9: Break down barriers between departments and staff areas. People in different areas such as research, design, sales, administration and production must work in teams to tackle problems that may be encountered with products or service.

Point 10: Eliminate the use of slogans, posters and exhortations for the workforce, demanding zero defects and new levels of productivity without providing methods. Such exhortations only create adversarial relationships.

Point 11: Eliminate work standards that prescribe numerical quotas for the workforce and numerical goals for people in management. Substitute aids and helpful leadership.

Point 12: Remove the barriers that rob hourly workers, and people in management, of their right to pride of workmanship. This implies abolition of the annual merit rating (appraisal of performance) and of management by objectives.

Point 13: Institute a vigorous program of education, and encourage self-improvement for everyone. What an organization needs is not just good people; it needs people that are improving with education.

Point 14: Top management's permanent commitment to ever-improving quality and productivity must be clearly defined and a management structure created that will continuously take action to follow the preceding 13 points.

GESTALT THEORY

Gestalt theory is a broadly interdisciplinary general theory which provides a framework for a wide variety of psychological phenomena, processes, and applications. Human beings are viewed as open systems in active interaction with their environment. It is especially suited for the understanding of order and structure in psychological events, and has its origins in some orientations of Johann Wolfgang von Goethe, Ernst Mach, and particularly of Christian von Ehrenfels and the research work of Max Wertheimer, Wolfgang Köhler, Kurt Koffka, and Kurt Lewin, who opposed the elementistic approach to psychological events, associationism, behaviorism, and psychoanalysis. The coming to power of national socialism substantially interrupted the fruitful scientific development of Gestalt theory in the German-speaking world; Koffka, Wertheimer, Köhler and Lewin emigrated, or were forced to flee, to the United States. The GTA views as its main task the provision of a scientific and organizational framework for the elaboration and further development of the perspective of Gestalt theory in research and practice. In this sense, Gestalt theory is not limited only to the concept of the Gestalt or the whole, or to the Gestalt principles of the organization of perception (as it is presented in many publications), but must be understood as essentially far broader and more encompassing:

- The primacy of the phenomenal: Recognizing and taking seriously the human world of experience as the only immediately given reality, and not simply discussing it away, is a fundamental assertion of Gestalt theory, the fruitfulness of which for psychology and psychotherapy has by no means been exhausted.

- It is the interaction of the individual and the situation in the sense of a dynamic field which determines experience and behavior, and not only drives (psychoanalysis, ethology) or external stimuli (behaviorism, Skinner) or static personality traits (classical personality theory).

- Connections among psychological contents are more readily and more permanently created on the basis of substantive concrete relationships than by sheer repetition and reinforcement.

- Thinking and problem solving are characterized by appropriate substantive organization, restructuring, and centering of the given ('insight') in the direction of the desired solution.

- In memory, structures based on associative connections are elaborated and differentiated according to a tendency for optimal organization.

- Cognitions which an individual cannot integrate lead to an experience of dissonance and to cognitive processes directed at reducing this dissonance.

- In a supra-individual whole such as a group, there is a tendency toward specific relationships in the interaction of strengths and needs.

The epistemological orientation of Gestalt theory tends to be a kind of critical realism. Methodologically, the attempt is to achieve a meaningful integration of experimental and phenomenological procedures (the experimental-phenomenological method). Crucial phenomena are examined without reduction of experimental precision. Gestalt theory is to be understood not as a static scientific position, but as a paradigm that is continuing to develop. Through developments such as the theory of the self-organization of systems, it attains major significance for many of the current concerns of psychology.

Gestalt psychology or gestaltism (German: Gestalt - "essence or shape of an entity's complete form") of the Berlin School is a theory of mind and brain positing that the operational principle of the brain is holistic, parallel, and analog, with self-organizing tendencies. The Gestalt effect is the form-generating capability of our senses, particularly with respect to the visual recognition of figures and whole forms instead of just a collection of simple lines and curves. In psychology, gestaltism is often opposed to structuralism and Wundt. The phrase "The whole is greater than the sum of the parts" is often used when explaining Gestalt theory.

The concept of Gestalt was first introduced in contemporary philosophy and psychology by Christian von Ehrenfels (a member of the School of Brentano). The idea of Gestalt has its roots in theories by Johann Wolfgang von Goethe, Immanuel Kant, and Ernst Mach. Max Wertheimer's unique contribution was to insist that the "Gestalt" is
perceptually primary, defining the parts of which it was composed, rather than being a secondary quality that emerges from those parts, as von Ehrenfels's earlier Gestalt-Qualität had been.

Both von Ehrenfels and Edmund Husserl seem to have been inspired by Mach's work Beiträge zur Analyse der Empfindungen (Contributions to the Analysis of the Sensations, 1886), in formulating their very similar concepts of Gestalt and Figural Moment, respectively.

Early 20th century theorists, such as Kurt Koffka, Max Wertheimer, and Wolfgang Köhler (students of Carl Stumpf), saw objects as perceived within an environment according to all of their elements taken together as a global construct. This 'gestalt' or 'whole form' approach sought to define principles of perception – seemingly innate mental laws which determined the way in which objects were perceived. It is based on the here and now, and on the way things are viewed. A basic distinction is that between figure and ground: at first glance, do you see the figure in front of you or the background?

These laws took several forms, such as the grouping of similar, or proximate, objects together, within this global process. Although Gestalt has been criticized for being merely descriptive, it has formed the basis of much further research into the perception of patterns and objects (Carlson et al. 2000), and of research into behavior, thinking, problem solving and psychopathology.

It should also be emphasized that Gestalt psychology is distinct from Gestalt psychotherapy. One has little to do with the other.

The investigations developed at the beginning of the 20th century, based on traditional scientific methodology, divided the object of study into a set of elements that could be analyzed separately with the objective of reducing the complexity of this object. Contrary to this methodology, the school of Gestalt practiced a series of theoretical and methodological principles that attempted to redefine the approach to psychological research.

The theoretical principles are the following:

Principle of Totality - The conscious experience must be considered globally (by taking into account all the physical and mental aspects of the individual simultaneously) because the nature of the mind demands that each component be considered as part of a system of dynamic relationships.

Principle of psychophysical isomorphism - A correlation exists between conscious experience and cerebral activity.

Based on the principles above the following methodological principles are defined:

Phenomenon Experimental Analysis - In relation to the Totality Principle, any psychological research should take phenomena as its starting point and not be solely focused on sensory qualities.

Biotic Experiment - The School of Gestalt established a need to conduct real experiments which sharply contrasted with and opposed classic laboratory experiments. This signified experimenting in natural situations, developed in real conditions, in which it would be possible to reproduce, with higher fidelity, what would be habitual for a subject.

Gestalt psychology attempts to understand psychological phenomena by viewing them as organised and structured wholes rather than the sum of their constituent parts. Thus, Gestalt psychology dissociates itself from the more 'elementistic'/reductionistic/decompositional approaches to psychology like structuralism (with its tendency to analyse mental processes into elementary sensations) and it accentuates concepts like emergent properties, holism, and context.

In the 1930s and 1940s Gestalt psychology was applied to visual perception, most notably by Max Wertheimer, Wolfgang Köhler, and Kurt Koffka, who founded the so-called gestalt approaches to form perception. Their aim was to investigate the global and holistic processes involved in perceiving structure in the environment (e.g. Sternberg 1996). More specifically, they tried to explain human perception of groups of objects and how we perceive parts of objects and form whole objects on the basis of these. The investigations in this subject crystallised into "the gestalt laws of perceptual organization", several of which are frequently cited in the HCI and interaction design communities.

Diffusion of innovations

Diffusion of Innovations is a theory of how, why, and at what rate new ideas and technology spread through cultures. The concept was first studied by the French sociologist Gabriel Tarde (1890) and by German and Austrian anthropologists such as Friedrich Ratzel and Leo Frobenius.[1] Its basic epidemiological or internal-influence form was formulated by H. Earl Pemberton[2], who provided examples of institutional diffusion such as postage stamps and compulsory school laws.
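
For readers who want the quantitative form: in the diffusion literature the internal-influence model referred to here is usually written as a simple logistic growth equation. The rendering below is a standard textbook form, added here for illustration rather than quoted from Pemberton or Rogers:

\frac{dN(t)}{dt} = b \, N(t) \, \bigl[ M - N(t) \bigr]

where N(t) is the cumulative number of adopters at time t, M is the total number of potential adopters in the social system, and b is an internal-influence (imitation) coefficient. Its solution is the familiar S-shaped logistic curve.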

The key elements in diffusion research are:

Innovation - Rogers defines an innovation as "an idea, practice, or object that is perceived as new by an individual or other unit of adoption" [5].

Communication channels - A communication channel is "the means by which messages get from one individual to another" [6].

Time - "The innovation-decision period is the length of time required to pass through the innovation-decision process" [7]. "Rate of adoption is the relative speed with which an innovation is adopted by members of a social system" [8].

Social system - "A social system is defined as a set of interrelated units that are engaged in joint problem solving to accomplish a common goal" [9].

Decisions

Two factors determine what type a particular decision is:

- Whether the decision is made freely and implemented voluntarily;
- Who makes the decision.

Based on these considerations, three types of innovation-decisions have been identified within diffusion of innovations.

Optional Innovation-Decision - This decision is made by an individual who is in some way distinguished from others in a social system.

Collective Innovation-Decision - This decision is made collectively by all individuals of a social system.

Authority Innovation-Decision - This decision is made for the entire social system by a few individuals in positions of influence or power.

Diffusion of an innovation occurs through a five–step process. This process is a type of decision-making. It occurs through a series of communication channels over a period of time among the members of a similar social system. Ryan and Gross first identified adoption as a process in 1943 (Rogers 1962, p. 79). Rogers categorizes the five stages (steps) as: awareness, interest, evaluation, trial, and adoption. An individual might reject an innovation at any time during or after the adoption process. In later editions of Diffusion of Innovations, Rogers changed the terminology of the five stages to: knowledge, persuasion, decision, implementation, and confirmation. However, the descriptions of the categories have remained similar throughout the editions.

Five stages of the adoption process

Knowledge - In this stage the individual is first exposed to an innovation but lacks information about the innovation. During this stage of the process the individual has not been inspired to find more information about the innovation.

Persuasion - In this stage the individual is interested in the innovation and actively seeks information/detail about the innovation.

Decision - In this stage the individual takes the concept of the innovation and weighs the advantages/disadvantages of using the innovation and decides whether to adopt or reject the innovation. Due to the individualistic nature of this stage Rogers notes that it is the most difficult stage on which to acquire empirical evidence (Rogers 1964, p. 83).

Implementation - In this stage the individual employs the innovation to a varying degree depending on the situation. During this stage the individual determines the usefulness of the innovation and may search for further information about it.

Confirmation - Although the name of this stage may be misleading, in this stage the individual finalizes their decision to continue using the innovation and may use the innovation to its fullest potential.

The rate of adoption is defined as: the relative speed with which members of a social system adopt an innovation. It is usually measured by the length of time required for a certain percentage of the members of a social system to adopt an innovation (Rogers 1962, p. 134). The rates of adoption for innovations are determined by an individual’s adopter category. In general individuals who first adopt an innovation require a shorter adoption period (adoption process) than late adopters.

Within the rate of adoption there is a point at which an innovation reaches critical mass. This is the point on the adoption curve at which enough individuals have adopted an innovation that its continued adoption is self-sustaining. In describing how an innovation reaches critical mass, Rogers outlines several strategies to help an innovation reach this stage. These strategies are: have the innovation adopted by a highly respected individual within a social network, thereby creating an instinctive desire for it; inject the innovation into a group of individuals who would readily use it; and provide positive reactions and benefits for early adopters of the innovation.

Rogers defines several intrinsic characteristics of innovations that influence an individual’s decision to adopt or reject an innovation. Relative advantage, the first characteristic, is how improved an innovation is over the previous generation. Compatibility, the second characteristic, is the degree to which an innovation can be assimilated into an individual’s life. The complexity of an innovation is a significant factor in whether it is adopted: if the innovation is too difficult to use, an individual will not be likely to adopt it. The fourth characteristic, trialability, determines how easily an innovation may be experimented with as it is being adopted; if a user has a hard time trying an innovation, this individual will be less likely to adopt it. The final characteristic, observability, is the extent to which an innovation is visible to others. An innovation that is more visible will drive communication among the individual’s peers and personal networks and will in turn create more positive or negative reactions.

Rogers defines an adopter category as a classification of individuals within a social system on the basis of innovativeness. In the book Diffusion of Innovations, Rogers suggests a total of five categories of adopters in order to standardize the usage of adopter categories in diffusion research. The adoption of an innovation follows an S curve when plotted over a length of time.[10] The categories of adopters are: innovators, early adopters, early majority, late majority, and laggards (Rogers 1962, p. 150)

Educational technology is a field of innovation and change. Many of the most important products and practices developed by educational technologists require dramatic shifts in the way we think about, deliver, administer, and assess instruction and training.  Studying the adoption, diffusion, implementation, and institutionalization of innovations is essential to the field of educational technology because the field has suffered from a lack of widespread acceptance of technology (Burkman, 1987).  While it’s possible to point to some notable exceptions, such as the common use of electronic mail or word processors in higher education (Green,1996) or the growing use of performance technology in industry (Desrosiers & Harmon, 1996), the way that education and training are conducted has changed very little during the past few decades. 

One major reason for this lack of utilization is that educational technologists have concentrated their efforts on developing instructionally sound and technically superior products while giving less consideration to other issues.  Technical superiority, while important, is not the only factor that determines whether or not an innovation is widely adopted--it might not even be the most important factor (Pool, 1997).  A complex web of social, economic, technical, organizational, and individual factors interact to influence which technologies are adopted and to alter the effect of a technology after it has been adopted (Segal, 1994). In order to fully understand the field, practitioners have to understand more than just hardware, software, design models, and learning theory.  Understanding why people use educational technology and, perhaps more importantly, why they don’t is at the core of the process. That’s where adoption, diffusion, implementation, and institutionalization come in.

            In this chapter, we will discuss the adoption, diffusion, implementation, and institutionalization of educational technology. We will begin by looking at some of the best known theories about adoption and diffusion. Following this we will discuss some examples of how adoption and diffusion theory has been incorporated into the field of educational technology.  Then, we will discuss a very important trend--the gradual shift in focus from thinking about adoption (the initial decision to use an innovation) to
thinking about implementation and institutionalization. We will define implementation and institutionalization and discuss why this shift is happening.  We will also provide a list of conditions that contribute to implementation (Ely, 1999) and include a summary and conclusions.

Overview of the Adoption and Diffusion Process          

            There has been a long and impressive history of research related to the adoption and diffusion of innovations (Surry & Brennan, 1998).   Many of the most important and earliest studies in this area were conducted by researchers working in the field of rural sociology (Rogers, 1995). In fact, a study that investigated the diffusion of hybrid-seed corn (Ryan & Gross, 1943) is considered to be the first major, influential diffusion study of the modern era (Rogers, 1995).  Other researchers have investigated the diffusion of innovations in such diverse fields as solar power (Keeler, 1976), farm innovations in India (Sekon, 1968), and weather forecasting (Surry, 1993).

            The most widely cited and most influential researcher in the area of adoption and diffusion is Everett Rogers. Rogers’ Diffusion of Innovations is perhaps the single most important book related to this topic and provides a comprehensive overview of adoption and diffusion theory. It was first published in 1962 and is now in its 4th edition (Rogers, 1995).

            One of the most important theories discussed by Rogers is the Innovation-Decision Process Model. As shown in Figure 1, this model suggests that the adoption of an innovation is not a single act, but a process that occurs over time.  Potential adopters go through five stages when interacting with an innovation. The first stage is “Knowledge” in which potential adopters find out about an innovation and gain a basic understanding of what it is and how it works. The second stage is “Persuasion” in which potential adopters form a positive or negative impression of the innovation. It is only in the third stage, “Decision”, that the innovation is actually adopted or rejected. The fourth stage, “Implementation”, occurs when the innovation is actually used. In the fifth stage, “Confirmation”, the adopter seeks information about the innovation and either continues or discontinues use of the innovation.  The Confirmation Stage might also describe the
adoption of an innovation that was previously rejected.

Figure 1.  Five stages of Rogers’ (1995) Innovation-Decision Process Model.

            Another important and influential idea discussed by Rogers is the concept of adopter categories.  This concept states that, for any given innovation, a certain percentage of the population will readily adopt the innovation, while others will be less likely to adopt. According to Rogers, there is usually a normal distribution of the various adopter categories that forms the shape of a bell curve (see Figure 2). “Innovators”, those who readily adopt an innovation, make up about 2.5% of any population.  “Early Adopters” make up approximately 13.5% of the population. Most people will fall into either the Early Majority (34%) or the Late Majority (34%) categories. “Laggards”, those who will resist an innovation until the bitter end, comprise about 16% of the population.  The concept of adopter categories is important because it shows that all innovations go through a natural, predictable, and sometimes lengthy process before becoming widely adopted within a population.

Figure 2.  Hypothesized distribution of adopter categories within a typical population.
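
These percentages follow from slicing a normal distribution of adoption times at one and two standard deviations from the mean, as the hypothesized distribution in Figure 2 suggests. The short sketch below recomputes the shares using those conventional cut-offs; it is illustrative only, and the values it prints (about 2.3%, 13.6%, 34.1%, 34.1% and 15.9%) round to the figures quoted above.

# Illustrative sketch only: recovering Rogers' adopter-category percentages from
# the standard normal distribution, using the usual cut-offs at one and two
# standard deviations from the mean time of adoption.
from math import erf, sqrt

def normal_cdf(z):
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

categories = {
    "Innovators":     normal_cdf(-2.0),                      # earlier than mean - 2 sd
    "Early adopters": normal_cdf(-1.0) - normal_cdf(-2.0),   # between -2 sd and -1 sd
    "Early majority": normal_cdf(0.0) - normal_cdf(-1.0),    # between -1 sd and the mean
    "Late majority":  normal_cdf(1.0) - normal_cdf(0.0),     # between the mean and +1 sd
    "Laggards":       1.0 - normal_cdf(1.0),                 # later than mean + 1 sd
}

for name, share in categories.items():
    print(f"{name:<15}{share:6.1%}")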

The concept of perceived attributes (Rogers, 1995) has served as the basis for a number of diffusion studies (e.g., Fliegel & Kivlin, 1966; Wyner, 1974). Perceived attributes refers to the opinions of potential adopters who base their feelings about an innovation on how they perceive that innovation in regard to five key attributes: Relative Advantage; Compatibility; Complexity; Trialability; and Observability.  In short, this construct states that people are more likely to adopt an innovation if the innovation offers them a better way to do something, is compatible with their values, beliefs and needs, is not too complex, can be tried out before adoption, and has observable benefits.  Perceived
attributes are important because they show that potential adopters base their opinions of an innovation on a variety of attributes, not just relative advantage. Educational technologists, therefore, should try to think about how potential adopters will perceive their innovations in terms of all of the five attributes, and not focus exclusively on technical superiority.

            The S-shaped adoption curve is another important idea that Rogers (1995) has described.  This curve shows that a successful innovation will go through a period of slow adoption before experiencing a sudden period of rapid adoption and then a gradual leveling off.  When depicted on a graph, this slow growth, rapid expansion and leveling off form an S-shaped curve (see Figure 3). The period of rapid expansion, for most successful innovations, occurs when social and technical factors combine to permit the innovation to experience dramatic growth.  For example, one can think of the many factors that combined to lead to the widespread acceptance of the World Wide Web between the years 1993 and 1995.

Figure 3.  Example of an S-curve showing initial slow growth, a period of rapid adoption, and a gradual leveling off.
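
The shape of such a curve can be reproduced by stepping through the internal-influence (logistic) diffusion model numerically. The sketch below is purely illustrative: the market size, imitation coefficient and starting value are invented for the example, and a simple fixed-step update stands in for a full solution of the underlying equation.

# Illustrative sketch only: generating an S-shaped adoption curve by numerically
# integrating the internal-influence (logistic) diffusion model.
M = 1000.0      # total potential adopters (assumed)
b = 0.0004      # internal-influence (imitation) coefficient (assumed)
n = 5.0         # initial adopters (assumed)
adopters = []

for t in range(30):          # 30 time periods
    adopters.append(n)
    n += b * n * (M - n)     # dN/dt = b * N * (M - N), Euler step of size 1

# Slow start, rapid middle expansion, then leveling off near M.
for t in range(0, 30, 5):
    print(f"t={t:2d}  adopters={adopters[t]:7.1f}")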

Diffusion Theory Applied to Educational Technology

The theories and concepts discussed by Rogers in Diffusion of Innovations are applicable to the study of innovations in almost any field.  A number of researchers have used these theories and concepts to study the adoption and diffusion of educational technology innovations. In the field of educational technology, diffusion theory has most often been applied to the study of either artifacts, such as computers, or knowledge, such as innovative teaching techniques (Holloway, 1996).  Ernest Burkman (1987) is one of the authors who specifically links diffusion theory with educational technology. Burkman realized that educational technology had been suffering from little utilization and turned to diffusion theory for a possible solution.  He used perceived attributes to develop a method for developing instructional products that would be more appealing to potential adopters.  Burkman called his new approach "user-oriented instructional development (UOID)". The five steps in Burkman's UOID are:

1) Identify the potential adopter

2) Measure relevant potential adopter perceptions

3) Design and develop a user-friendly product

4) Inform the potential adopter (of the product's user-friendliness)

5) Provide post-adoption support

In addition to Burkman, other researchers have incorporated diffusion theory into educational technology applications. For example, Stockdill and Morehouse (1992) used diffusion concepts in a checklist of factors to consider when attempting to increase the adoption of distance learning and other educational technologies. Farquhar and Surry (1994) used diffusion theory to identify and analyze factors that might impede or assist the adoption of instructional innovations within organizations. Sherry, Lawyer-Brook, and Black (1997) used diffusion concepts as the basis for an evaluation of a program intended to introduce teachers to the Internet.  A growing amount of dissertation research is being conducted in the area of diffusion theory as it is related to educational technology.

From Diffusion and Adoption to Implementation

There appears to be a growing trend in innovation research away from adoption and diffusion towards implementation and institutionalization. As the adoption and diffusion process moves along, the actual use or implementation of an innovation in a specific setting becomes more and more important. Of course, implementation should be an integral part of a comprehensive and systematic change plan from the beginning.  Michael Fullan, a prominent researcher in this area, defines implementation as "...the actual use of an innovation in practice."  Further, he calls the implementation perspective "...both the content and process of dealing with ideas, programs, activities, structures, and policies that are new to the people involved" (Fullan, 1996).  Until Fullan and Pomfret (1977) spelled out the process and issues in their review of implementation research, not much was said about the steps after diffusion and adoption.

From Replication to Mutual Adaptation

In the process of implementation, innovations that require replication for successful outcomes often follow an approach that is analogous to behavioral learning. That is, each product, procedure, and practice has to maintain a high fidelity to the original or else success cannot be guaranteed.  Fullan and Pomfret (p. 360) introduced the concept of "mutual adaptation," whereby local conditions should be considered and the original materials and procedures altered accordingly.  It was felt that local professionals could make better assessments of the needs and potential reception of the innovation than the original developer or researcher.  Purists, however, felt that if replication was not identical to the original specifications, implementation might fail.

        Once professional educators realized that they could modify programs, products and practices, it was a short step to an approach that was less “lock step” and more analogous to constructivism. Local participation in the modifications created a greater sense of ownership.

Other Models

        One of the tools often used to guide implementation efforts in schools is Hall's Concerns Based Adoption Model (CBAM) (Hall & Hord, 1987).  In the implementation phase of this model, the Levels of Use (LoU) scale is introduced (Hall & Loucks, 1975).  The basic levels are: Nonuse; Orientation (initial information); Preparation (to use); Mechanical use; Routine; Refinement; Integration; and Renewal. The last four levels actually move into the area of institutionalization discussed later in this chapter.  A modification of the LoU, Levels of Technological Implementation (LoTi), based on measurement of classroom use of computers, has been proposed by Moersch (1995).  Moersch modifies Hall's levels to provide guidance for determining the extent of implementation using seven levels: Nonuse; Awareness; Exploration; Infusion; Integration; Expansion; and Refinement.
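Since LoU and LoTi are ordinal scales, a minimal Python sketch (illustrative only; the 0-6 numeric codes simply follow the order Moersch's levels are listed in above and are not taken from the source) shows one way an evaluator might encode and compare levels:

```python
# Sketch: encoding Moersch's seven LoTi labels (as listed above) as an ordered scale.
# The 0-6 numbering follows the order given in the text; treat it as illustrative
# rather than an official coding scheme.
from enum import IntEnum

class LoTi(IntEnum):
    NONUSE = 0
    AWARENESS = 1
    EXPLORATION = 2
    INFUSION = 3
    INTEGRATION = 4
    EXPANSION = 5
    REFINEMENT = 6

def has_reached(observed: LoTi, target: LoTi) -> bool:
    """True if an observed classroom rating meets or exceeds a target level."""
    return observed >= target

print(has_reached(LoTi.INFUSION, LoTi.EXPLORATION))  # True
```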

What About Resistance to Innovations?

        Over the years there have been studies and explorations of the resistance factors that thwart diffusion and implementation efforts. Prominent among those who have journeyed into this puzzling morass are Zaltman and Duncan (1977).  These authors define resistance as "...any conduct that serves to maintain the status quo in the face of pressure to alter the status quo."  The basic argument has been that if we knew what types of resistance exist, perhaps we could design strategies to combat them.  There are many different types of resistance. They can be classified as cultural, social, organizational and psychological.  This approach to implementation has been successful only when strategies for overcoming specific points of resistance have been developed.

Looking for Facilitative Conditions

A less common approach to understanding the process of implementation has been to tease out the reasons for successful programs rather than to identify the barriers.  Where innovations have been adopted and implemented, what are the conditions that appear to facilitate the process?  Are there consistencies among the facilitating conditions from innovation to innovation and from place to place?  This logic shifts the concern from resistance to the more positive one of facilitating factors, thus providing an avenue for further exploration.  Rather than coming up with ways to get around resistance, a series of studies looked at successful implementation of innovations and asked, "Why were these innovations successful?"  The findings of these studies uncovered eight conditions that contribute to implementation (Ely, 1999).

 

1.  Dissatisfaction with the status quo.  Things could be better. Others seem to be moving ahead while we are standing still.  Dissatisfaction is based on an innate feeling or is induced by a "marketing" campaign.

 

 2.  Knowledge and skills exist.  Knowledge and skills are those required by the ultimate user of the innovation.  Without them, people become frustrated and immobilized.  Training is usually a vital part of most successful innovations.

 

3.  Availability of resources.  Resources are the things that are required to make implementation work--the hardware, software, audiovisual media and the like. Without them, implementation is reduced.

 

4.  Availability of time.  Time is necessary to acquire and practice knowledge and skills.  This means good time, "company" time, not just personal time at home.

 

5.  Rewards and/or incentives exist.  An incentive is something that serves as an expectation of a reward--a stimulus to act.  A reward is something given for meeting an acceptable standard of performance.

 

6.  Participation.  This is shared decision-making; communication among all parties involved in the process or their representatives.

 

7.  Commitment.  This condition demonstrates firm and visible evidence that there is endorsement and continuing support for the innovation. This factor is seen most frequently in those who advocate the innovation and their supervisors. 

 


  8.  Leadership.  This factor includes (1) leadership by the executive officer of the organization and, sometimes, by a board, and (2) leadership within the institution or project related to the day-to-day activities of the innovation being implemented.

 

Variables in the Setting and the Innovation Itself

        It is clear that the eight conditions are present in varying degrees whenever examples of successful implementation are studied.  What is not so clear is the role of the setting in which the innovation is implemented.  The setting and the nature of the innovation are major factors influencing the degree to which each condition is present.  Some of the variables in the setting include organizational climate, political complexity and certain demographic factors.  Some of the most important variables regarding the innovation are the attributes of the innovation discussed earlier--its relative advantage (when compared with the current status), compatibility with the values of the organization or institution, its complexity (or simplicity), trialability before wholesale adoption and observability by other professionals or the public. But...is implementation the final stage?

        Implementation should lead naturally into institutionalization. Some writers call it "routinization" or "continuation.”  The ultimate criterion for a successful innovation is that it is routinely used in settings for which it was designed.  It has become integral to the organization or the social system and is no longer considered to be an innovation.  A classic work on the topic defines institutionalization as "...an assimilation of change elements into a structured organization modifying the organization in a stable manner....a process through which an organization assimilates an innovation into its structure"  (Miles, Eckholm, & Vandenburghe, 1987).

Indicators of Institutionalization

According to the Regional Laboratory for Educational Improvement of the Northeast and Islands (Eiseman, Fleming & Roody, 1990),  there are six commonly accepted indicators of institutionalization:

  1.  Acceptance by relevant participants--a perception that the innovation legitimately belongs;

  2.  The innovation is stable and routinized;

  3.  Widespread use of the innovation throughout the institution or organization;

  4.  Firm expectation that use of the practice and/or product will continue within the institution or organization;


  5.  Continuation does not depend upon the actions of specific individuals but upon the organizational culture, structure or procedures; and

  6.  Routine allocations of time and money.

Once implementation has been achieved, one more decision must be made:  "Is this innovation something we want to continue for the immediate future?"  If it is, the above criteria could be used to assess the extent to which the innovation is institutionalized.  Several other indicators of routine use, called "passages and cycles" are listed by Yin and Quick (1978):  support by local funds; new personnel classification; changes in governance; internalization of training; and turnover of key personnel.

Summary and Conclusions

Case studies of diffusion, adoption, implementation and institutionalization have been conducted in many organizations and settings.  One important conclusion is that there is no formula for this process. There are many elements that should be considered in the process, most of them outlined in this chapter.  However, simple transfer of these principles to specific environments would likely be futile.  Just as most instructional development requires a systemic approach, so does the change process.  There is no substitute for a "front-end analysis" or needs assessment that yields the goals and objectives to be attained. Communication among all participants throughout the process is essential. A strategy or plan for achieving the goals is the best way to proceed when considering the many variables that are likely to affect the outcomes. Evaluation should be a constant partner during the process.

All of this activity should be coordinated by a change agent--a person who is sensitive to the variables that will impinge on the process. The change agent could be an internal person or an external specialist. Awareness of and experience with the change process are essential for a successful outcome.

The technology adoption lifecycle is a sociological model developed by Joe M. Bohlen, George M. Beal and Everett M. Rogers at Iowa State University,[1] building on earlier research conducted there by Neal C. Gross and Bryce Ryan. [2][3][4] Their original purpose was to track the purchase patterns of hybrid seed corn by farmers.

Beal, Rogers and Bohlen together developed a technology diffusion model[5] and later Everett Rogers generalized the use of it in his widely acclaimed book, Diffusion of Innovations [6] (now in its fifth edition), describing how new ideas and technologies spread in different cultures. Others have since used the model to describe how innovations spread between states in the U.S.


Rogers' bell curve

The technology adoption lifecycle model describes the adoption or acceptance of a new product or innovation, according to the demographic and psychological characteristics of defined adopter groups. The process of adoption over time is typically illustrated as a classical normal distribution or "bell curve." The model indicates that the first group of people to use a new product is called "innovators," followed by "early adopters." Next come the early and late majority, and the last group to eventually adopt a product are called "laggards."

The demographic and psychological (or "psychographic") profiles of each adoption group were originally specified by the North Central Rural Sociology Committee, Subcommittee for the Study of the Diffusion of Farm Practices (as cited by Beal and Bohlen in their study above).

The report summarized the categories as:

innovators - had larger farms, were more educated, more prosperous and more risk-oriented

early adopters - younger, more educated, tended to be community leaders

early majority - more conservative but open to new ideas, active in community and influential among neighbours

late majority - older, less educated, fairly conservative and less socially active

laggards - very conservative, had small farms and capital, oldest and least educated

Everett M. Rogers (March 6, 1931 - October 21, 2004) was a communication scholar, sociologist, writer, and teacher. He is best known for originating the diffusion of innovations theory and for introducing the term early adopter.

Rogers was born on Pinehurst Farm in Carroll, Iowa, in 1931. His father loved electromechanical farm innovations but was highly resistant to biological-chemical innovations, so he resisted adopting the new hybrid seed corn, even though it yielded 25% more crop and was resistant to drought. During the Iowa drought of 1936, the hybrid seed corn stood tall on the neighbor's farm while the crop on the Rogers' farm wilted. Rogers' father was finally convinced.

Rogers had no plans to attend university until a school teacher drove him and some classmates to Ames to visit Iowa State University. Rogers decided to pursue a degree in agriculture there. He then served in the Korean War for two years. He returned to Iowa State University to earn a Ph.D. in sociology and statistics in 1957.

When the first edition (1962) of Diffusion of Innovations was published, Rogers was an assistant professor of rural sociology at Ohio State University. He was only 30 years old but was becoming a world-renowned academic figure. By the mid-2000s, Diffusion of Innovations had become the second-most-cited book in the social sciences (Arvind Singhal: Introducing Professor Everett M. Rogers, 47th Annual Research Lecturer, University of New Mexico)[1]. The fifth edition (2003, with Nancy Singer Olaguera) addresses the spread of the Internet and how it has transformed the way human beings communicate and adopt new ideas.

Rogers proposes that adopters of any new innovation or idea can be categorized as innovators (2.5%), early adopters (13.5%), early majority (34%), late majority (34%) and laggards (16%), based on the bell curve of the normal distribution. These categories, based on standard deviations from the mean of the normal curve, provide a common language for innovation researchers. Each adopter's willingness and ability to adopt an innovation depends on their awareness, interest, evaluation, trial, and adoption. People can fall into different categories for different innovations: a farmer might be an early adopter of mechanical innovations, but a late majority adopter of biological innovations or VCRs.
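Because the category boundaries fall at whole standard deviations from the mean adoption time, the familiar percentages can be recovered from the standard normal distribution. The short sketch below is illustrative only (the exact tail value is about 2.3%, which Rogers rounds to 2.5%) and uses only Python's standard library:

```python
# Sketch: deriving Rogers' adopter-category percentages from the normal curve.
# Cut-offs at -2, -1, 0 and +1 standard deviations from the mean adoption time
# reproduce the familiar 2.5 / 13.5 / 34 / 34 / 16 split (values are approximate;
# Rogers rounds them for presentation).
from statistics import NormalDist

z = NormalDist()  # standard normal distribution (mean 0, sd 1)

categories = {
    "innovators":     z.cdf(-2.0),                 # earlier than 2 sd before the mean
    "early adopters": z.cdf(-1.0) - z.cdf(-2.0),   # between 2 sd and 1 sd before the mean
    "early majority": z.cdf(0.0)  - z.cdf(-1.0),   # from 1 sd before the mean up to the mean
    "late majority":  z.cdf(1.0)  - z.cdf(0.0),    # from the mean up to 1 sd after it
    "laggards":       1.0 - z.cdf(1.0),            # later than 1 sd after the mean
}

for name, share in categories.items():
    print(f"{name:<15} {share:.1%}")
```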

When graphed, the rate of adoption forms what came to typify the Diffusion of Innovations model: an S-shaped curve. The graph essentially shows the cumulative percentage of adopters over time: slow at the start, more rapid as adoption increases, then leveling off until only a small percentage of laggards have not adopted (Rogers, Diffusion of Innovations, 1983).
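Purely as an illustration of the shape described above (not a formula taken from Rogers), a cumulative adoption curve is often approximated with a logistic function; the midpoint t0 and growth rate k in the sketch below are assumed values chosen only to show the S-shape:

```python
# Illustrative only: a logistic function is a common way to approximate the
# cumulative S-shaped adoption pattern described above. The midpoint (t0) and
# growth rate (k) are arbitrary, assumed values, not Rogers' data.
import math

def cumulative_adoption(t, t0=10.0, k=0.8):
    """Share of the population that has adopted by time t (between 0.0 and 1.0)."""
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

for t in range(0, 21, 4):
    share = cumulative_adoption(t)
    bar = "#" * int(share * 40)          # crude text plot of the S-curve
    print(f"t={t:2d}  {share:5.1%}  {bar}")
```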

His research and work became widely accepted in communications and technology adoption studies, and also found its way into a variety of other social science studies. Geoffrey Moore's Crossing the Chasm drew from Rogers in explaining how and why technology companies succeed. Rogers was also able to relate his communications research to practical health problems, including hygiene, family planning, cancer prevention, and drunk driving.


LEVERS OF CONTROL SIMONS

The five control levers include:

1. Belief systems,
2. Boundary systems,
3. Internal control systems,
4. Diagnostic systems, and
5. Interactive systems.

Belief systems relate to the fundamental values of the organization. Examples in this category include mission statements and vision statements. Boundary systems describe constraints in terms of employee behavior, i.e., forbidden actions. Internal control systems are related to protecting assets, while diagnostic systems theoretically provide information indicating when a system is in control or out of control. Interactive systems focus on communicating and implementing the organization's strategy. The purpose of an interactive system is to promote debate about the assumptions underlying the organization's strategy and, ultimately, to promote learning and growth.


Some confusion arises over how and where the balanced scorecard fits into the levers of control. According to Kaplan & Norton, successful balanced scorecard adopters use the scorecard as an interactive system (p. 350). Some balanced scorecard implementations have failed because companies used the scorecard only as a diagnostic system.

Simons' term "interactive system" seems to be essentially the same as Kaplan & Norton's term "strategic system". The message is that, to obtain the potential benefits of the balanced scorecard, an organization has to use it as a strategic system.


"Total quality," "empowerment of employees," and "process reengineering" -- all popular themes in today's organizations -- suggest that traditional controls may no longer be appropriate. In Levers of Control, Harvard Business School professor and author Robert Simons takes a shot at questions about how today's control systems look and how management can implement and utilize them effectively. He also looks at the auditor's role in recommending and evaluating control systems.

Simons' premise is that a fundamental problem in creating organizational value is balancing unlimited opportunity with limited attention. In his analysis, Simons makes several significant assumptions about human behavior: individuals in organizations are ethical; they desire to achieve and contribute; and individuals possess creative potential. However, Simons recognizes that there are inherent tensions between what individuals want to do and actually will do. If the reader disagrees with any of these premises, it will be difficult to find value in the rest of Simons' ideas.

Much of Simons' narrative is devoted to explaining what the new "levers of control" are, how they interrelate, and how these constructs support or contradict other management theories. Simons' familiarity with strategic management theory is evident and integrates well with the discussions of the levers of control.

Simons focuses primarily on the informational aspects of management control systems, the levers managers use to process and transmit information. According to Simons, if four constructs are understood and analyzed -- core values, risks to be avoided, strategic uncertainties, and critical performance variables -- then each construct can be controlled with one of four different levers. The first lever consists of an organization's beliefs systems, which are often embodied in the mission statement and are used to communicate core values. Lever two is made up of boundary systems, which basically form the organization's own "Ten Commandments" and are used to define acceptable risks and standards of business conduct. Diagnostic control systems, which include the traditional methods used to measure critical performance variables, make up lever three. Finally, the fourth lever of control, interactive control systems, consists of the formal information systems that managers use to involve themselves regularly and personally in the decision-making activities of subordinates; the focus is on process. It is in discussion of this fourth lever that Simons integrates other management theories most heavily.

Traditional management controls such as diagnostic control systems are given a bad rap by Simons. The focus of diagnostic systems is on outcomes, Simons maintains, and he argues that managers pay either too little or too much attention to them. Simons suggests that heavy reliance on staff groups such as internal auditors as the gatekeepers of diagnostic control systems yields a number of organizational benefits.

Simons proposes that the levers of control work more -- or less -- effectively depending upon the current phase of the firm's life cycle. He presents results from his ten-year examination of control in several companies from ten different industries, including banking; computer, food, and machinery manufacturing; and health aids. His evidence shows how the ten managers and their organizations utilized the control levers to drive changes during the first 18 months of the managers' tenures. Depending upon the reader's experience, the field study may enhance understanding of the control lever and life cycle interrelationships.

Simons provides an excellent narrative on balancing empowerment and control and a handy summary of what managers and staff groups must do to effectively implement the four control systems. And, although well articulated, Simons' examination certainly does not oversimplify the challenges experienced by all members of an organization as they collaborate to achieve the enterprise's worthwhile goals.

Introduction

One of the most difficult problems managers face today is maintaining control, efficiency, and productivity while still giving employees the freedom to be creative, innovative and flexible.

Giving employees too much autonomy has led to disaster for many companies, including such well-known names as Sears and Standard Chartered Bank.  In these companies and many others, employees had enough independence that they were able to engage in and mask underhanded, and sometimes illegal, activities.  When these deviant behaviors finally came to light, the companies incurred substantial losses not only financially, but also in internal company morale and external public relations.

One method of preventing these kinds of incidents is for companies to revert to the “machinelike bureaucracies” of the 1950s and 60s.  In these work environments, employees were given very specific instructions on how to do their jobs and then were watched constantly by superiors to ensure the instructions were carried out properly. 

In the modern corporate world, this method of managing employees has all but been abandoned except in those industries that lend themselves to standardization and repetition of work activities (e.g., in casinos and on assembly lines). In most industries, managers simply do not have time to watch everyone all the time.  They must find ways to encourage employees to think for themselves, to create new processes and methods, while still retaining enough control to ensure that employee creativity will ultimately benefit and improve the company.


There are four control levers or "systems" that can aid managers in achieving the balance between employee empowerment and effective control:

1. diagnostic control systems,
2. beliefs systems,
3. boundary systems, and
4. interactive control systems.

Diagnostic Control Systems

This control lever relies on quantitative data, statistical analyses and variance analyses.  Managers use these and other numerical comparisons (e.g., actual to budget, increases/decreases in overhead from month to month, etc.) to periodically scan for anything unusual that might indicate a potential problem. 

Diagnostic systems can be very useful for detecting some kinds of problems, but they can also induce employees and even managers to behave unethically in order to meet some kind of preset goal.  Meeting the goal, no matter how it’s done, ensures the numbers won’t fluctuate in a manner that would draw negative attention to a particular department or person. 

Employee bonuses (and sometimes even employment, itself) are often based on how well performance goals have been met or exceeded, measured in quantitative terms.  If the goals are reasonable and attainable, the diagnostic system works quite well.  It enables managers to assign tasks and go on to other things, releasing them from the leash of perpetual surveillance.  Empowered employees are free to complete their work, under some but not undue pressure to meet a deadline, productivity level, or other goal, and to do it in a way that may be new or innovative.

However, when goals become unrealistic, empowered employees may sometimes use their capacity for creativity to manipulate the factors under their control in order not to fall short of their manager’s expectations.  Such manipulations can only have very short-term positive effects and can very possibly, depending on their magnitude, lead to long-run disaster for the company.

Beliefs Systems

This control lever is used to communicate the tenets of corporate culture to every employee of the company.  Beliefs systems are generally broad and designed to appeal to many different types of people working in many different departments. 

In order for beliefs systems to be an effective lever of control, employees must be able to see key values and ethics being upheld by those in supervisory and other top executive positions.  Senior management must be careful to adopt a particular belief or mission not simply because it is in vogue at the time, but because it reflects the true nature and value system of the company as a whole.


It is easier for employees to understand on an informal, innate level the mission and credo of a company that operates in only one industry, as did many companies in the past.  As companies grow more complex, however, it is becoming more and more necessary to establish formal, written mission statements and codes of ethics so that there can be no mistaking where the company is going and how it is going to get there.

Boundary Systems

This control lever is based on the idea that in an age of empowered employees, it has become easier and more effective to set the rules regarding what is inappropriate rather than what is appropriate.  The effect of this kind of thinking is to allow employees to create and define new solutions and methods within defined constraints.  The constraints are set by top management and are meant to steer employees clear of certain industries, types of clients, etc.  They are also intended to focus employee efforts on areas that have been determined to be best for the company, in terms of profitability, productivity, efficiency, etc.

Boundary systems can be thought of in terms of “minimum standards,” and can help to guard the good name of a company, an asset that can be very difficult to rebuild once damaged.  Examples of these kinds of standards include forbidding employees to discuss client matters outside the office or with anyone not employed by the company (sometimes including even spouses) and refusing to work on projects or with clients deemed to be “undesirable.” 

Many times a company will implement a boundary system only after it has suffered a major crisis due to the lack of one.  It is important that companies begin to be proactive in establishing boundaries before they are needed.

Boundary systems are the flipside of belief systems.  They are the “dark, cold constraints” to the “warm, positive, inspirational” tenets of belief systems.

Interactive Control Systems

The key to this control lever is the word “interactive.”  In order for this kind of control system to work, it is critical that subordinates and supervisors maintain regular, face-to-face contact.  Management must be able to glean what is most critical from all aspects of an organization’s operations so that they can establish and maintain on a daily basis their overall strategic plan for the company.

Companies use many different tools to accomplish this kind of regular communication.  One popular method is to analyze data from frequently released reports (for example, the Nielsen ratings), internally generated production reports, and professional journals.

Though this may seem somewhat like the diagnostic control system discussed earlier, there are four important characteristics which set the interactive control system apart: 1) the interactive system focuses on constantly changing data of an overall strategic nature; 2) the strategic nature of the data warrants attention from all levels of management on a regular basis; 3) the data is best analyzed in a face-to-face setting in groups that include all levels of employees; and 4) the system itself stimulates these regular discussions.

Conclusion

Empowering employees is necessary for the continuing health and improvement of most companies.  Using the four levers of control discussed above in conjunction with one another, managers can unleash the creative potential of their subordinates without losing overall control of their team and its objectives.    

Harness Employees’ Creativity with the Four Levers of Control

To achieve. Potential organizational block: lack of focus or of resources. Managerial solution: build and support clear targets. Control lever: diagnostic control systems.

To contribute. Potential organizational block: uncertainty about purpose. Managerial solution: communicate core values and mission. Control lever: beliefs systems.

To do right. Potential organizational block: pressure or temptation. Managerial solution: specify and enforce rules of the game. Control lever: boundary systems.

To create. Potential organizational block: lack of opportunity or fear of risk. Managerial solution: open organizational dialogue to encourage learning. Control lever: interactive control systems.

ORGANIC ORGANIZATION

A term created by Tom Burns and G.M. Stalker in the late 1950s, organic organizations, unlike mechanistic organizations (also coined by Burns and Stalker), are flexible and value external knowledge.

Also called organismic organization, this form of organizational structure was widely sought and proposed, but never proved to really exist since, in contrast to the mechanistic organization, it has the least hierarchy and specialization of functions. For an organization to be organic, the people in it should be equally leveled, with no job descriptions or classifications, and communication should take a hub-network-like form. It thrives on the power of personalities and a lack of rigid procedures and communication, and can react quickly and easily to changes in the environment; thus it is said to be the most adaptive form of organization.

An organic organization is a fluid and flexible network of multi-talented individuals who perform a variety of tasks, as per the definition of D. A. Morand.

Organic Organizations Lead to Teamwork

An organic organization is one in which the organization exists dependently, meaning that it takes the needs of its employees into consideration. Since in an organic organization the ideas and opinions of the employees are taken into consideration, this leads to group leadership and teamwork. Group leadership is better than individual leadership because several people control the environment, instead of one person telling everyone what is expected. Since organic organizations take the ideas of employees into consideration, they open the door to teamwork among employees. The use of organic organizations is also good because it becomes, in some way, an incentive for employees to perform to the best of their ability.


Tom Burns, a sociologist, teamed up with G. M. Stalker, a psychologist, to look at the impact of technical innovation on organisations in the 1960s. The question was how a traditional firm in Scotland could move into electronics in order to have a future, moving from a position of (diminishing) stability to one of fast-moving change. The findings were pessimistic, in that Burns and Stalker doubted whether traditional structures could incorporate fast-moving change; such firms could not attract electronics research and development engineers into their organisations.

This is because the individual in a traditional pyramidal organisation is not simply committed to the company. There is the group or department with a stable career structure, and its sectional interests can be in conflict with other groups' interests. If something new comes into the firm, these established sections compete for control over the added functions and resources. This works against the interests of the company as a whole, is inefficient, and the adaptations which do take place accentuate the problem. These are pathological systems.


There are three typical pathological system responses:

Burns and Stalker offer an alternative: the organismic or organic form of management (also called systemic). Gone are the formal roles and specialisms based on assigned, precisely defined tasks. Gone is the idea that overall knowledge and co-ordination are found only at the top of the hierarchy.

In organismic management a continual adjustment and flexibility in individual tasks is emphasised. Knowledge is collaborative rather than restricted into specialisms. Communication is horizontal, vertical and diagonal as required by the types of work involved. An organisational chart would depend on which job is being done and what process it involves, and it may not last long. Everyone should consult and consider the overall aims of the company as the situation keeps changing.

Technical additions and fast change means that experts are needed. Experts may know more than many managers. Expert career structures go beyond the organisation (just as do top executives) and may be based on individual reputations. The politicking then is more diffuse. The power system leaks out at many levels.

There are a number of sociological analyses here. One is the Weberian ideal types of mechanistic and organismic organisations. So they are not actual expected organisations but tendencies for analysis. The mechanistic relates to Weber also on bureaucracy and its rational-legal authority. This seemed to be the depressing summit of capitalist organisation. However, they have shown it needs stability. Without any reference to human fulfilment, they have argued for a need for a more human and responsive type of institution. So there is more than a hint of the Parsonian sociology of functional systems with adaptation, goal attainment, integration and pattern maintenance (Haralambos, Holborn, 1995, 873), with manifest and latent functions - actually, motivations - in terms of people using the manifest language of the overt system of formal control while operating latently with other motivations (Merton, 1949, in Coser, Rosenberg, 1976, 528). One organisation then adapts successfully to a stable system, and one to a changing system.

There is also history. A firm has to know its past and reveal to itself the three systems of motivation. The mechanistic organisation defends itself through its people in positions of power, career climbing and purposive decision taking. It takes a huge change to become organismic if it can be done.

Because of the use of sociological categories, the mechanistic and the organismic can be applied elsewhere. I applied it to historical and broad Christian Churches. Heterodox liberal Christians inside these organisations were organismic (or systemic) in authority, because they took it upon themselves to be the experts and specialists of theology in their very diverse writings. Those heterodox who left to join specialist liberal denominations pursued instead human relations authority, because they were essentially re-creators of open gatherings that discussed truth. Orthodox liberal Christians are bureaucrats and compromisers, unable to hold together a Church that is spiralling away into its new denominational constituents, each with their own types of authority, namely the charismatic, traditional and systemic.

PATH-GOAL THEORY

The path-goal theory, also known as the path-goal theory of leader effectiveness or the path-goal model, is a leadership theory in the field of organizational studies developed by Robert House, an Ohio State University graduate, in 1971 and revised in 1996. The theory states that a leader's behavior is contingent on the satisfaction, motivation and performance of his or her subordinates. The revised version also argues that the leader engages in behaviors that complement subordinates' abilities and compensate for deficiencies. The path-goal model can be classified both as a contingency theory and as a transactional leadership theory.

The theory was inspired by the work of Martin G. Evans (1970),[1] which examined leadership behaviors and followers' perceptions of the degree to which following a particular behavior (path) will lead to a particular outcome (goal).[2] The path-goal theory was also influenced by the expectancy theory of motivation developed by Victor Vroom in 1964.

According to the original theory, the manager’s job is viewed as guiding workers to choose the best paths to reach their goals, as well as the organizational goals. The theory argues that leaders will have to engage in different types of leadership behavior depending on the nature and the demands of a particular situation. It is the leader’s job to assist followers in attaining goals and to provide the direction and support needed to ensure that their goals are compatible with the organization’s goals.[4]

A leader's behavior is acceptable to subordinates when viewed as a source of satisfaction, and motivational when need satisfaction is contingent on performance and the leader facilitates, coaches, and rewards effective performance. The original path-goal theory identifies achievement-oriented, directive, participative, and supportive leader behaviors:

The directive path-goal clarifying leader behavior refers to situations where the leader lets followers know what is expected of them and tells them how to perform their tasks. The theory argues that this behavior has the most positive effect when the subordinates' role and task demands are ambiguous and intrinsically satisfying.[5]

The achievement-oriented leader behavior refers to situations where the leader sets challenging goals for followers, expects them to perform at their highest level, and shows confidence in their ability to meet this expectation.[5] Occupations in which the achievement motive was most predominant included technical jobs, sales, science, engineering, and entrepreneurship.[2]


The participative leader behavior involves leaders consulting with followers and asking for their suggestions before making a decision. This behavior is predominant when subordinates are highly personally involved in their work.[2]

The supportive leader behavior is directed towards the satisfaction of subordinates' needs and preferences. The leader shows concern for the followers' psychological well-being.[5] This behavior is especially needed in situations in which tasks or relationships are psychologically or physically distressing.[2]

Path-goal theory assumes that leaders are flexible and that they can change their style as situations require. The theory proposes two contingency variables, environment and follower characteristics, that moderate the leader behavior-outcome relationship. Environmental factors, such as task structure, the authority system, and the work group, are outside the control of the follower; they determine the type of leader behavior required if follower outcomes are to be maximized. Follower characteristics are locus of control, experience, and perceived ability; these personal characteristics of subordinates determine how the environment and the leader are interpreted. Effective leaders clarify the path to help their followers achieve goals and make the journey easier by reducing roadblocks and pitfalls. Research demonstrates that employee performance and satisfaction are positively influenced when the leader compensates for shortcomings in either the employee or the work setting.

In contrast to the Fiedler contingency model, the path-goal model states that the four leadership styles are fluid, and that leaders can adopt any of the four depending on what the situation demands.


The Path-Goal Theory of Leadership was developed to describe the way that leaders encourage and support their followers in achieving the goals they have been set by making the path that they should take clear and easy.

In particular, leaders:

Clarify the path so subordinates know which way to go.

Remove roadblocks that are stopping them going there.

Increase the rewards along the route.

Leaders can take a strong or limited approach in these. In clarifying the path, they may be directive or give vague hints. In removing roadblocks, they may scour the path or help the follower move the bigger blocks. In increasing rewards, they may give occasional encouragement or pave the way with gold.

This variation in approach will depend on the situation, including the follower's capability and motivation, as well as the difficulty of the job and other contextual factors.

House and Mitchell (1974) describe four styles of leadership:

Supportive leadership


Considering the needs of the follower, showing concern for their welfare and creating a friendly working environment. This includes increasing the follower's self-esteem and making the job more interesting. This approach is best when the work is stressful, boring or hazardous.

Directive leadership

Telling followers what needs to be done and giving appropriate guidance along the way. This includes giving them schedules of specific work to be done at specific times. Rewards may also be increased as needed and role ambiguity decreased (by telling them what they should be doing).

This may be used when the task is unstructured and complex and the follower is inexperienced. This increases the follower's sense of security and control and hence is appropriate to the situation.

Participative leadership

Consulting with followers and taking their ideas into account when making decisions and taking particular actions. This approach is best when the followers are expert, their advice is needed, and they expect to be able to give it.

Achievement-oriented leadership

Setting challenging goals, both in work and in self-improvement (and often together). High standards are demonstrated and expected. The leader shows faith in the capabilities of the follower to succeed. This approach is best when the task is complex.

Directive leadership: Specific advice is given to the group and ground rules and structure are established. For example,  clarifying expectations, specifying or assigning certain work tasks to be followed.

Supportive leadership: Good relations are promoted with the group and sensitivity to subordinates' needs is shown.

Participative leadership: Decision making is based on  consultation with the group and information is shared with the  group.

Achievement-oriented leadership: Challenging goals are set and high performance is encouraged while confidence is shown in the groups' ability.
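Purely to illustrate the situational logic summarized above (a sketch of one possible reading, not House and Mitchell's formal model), the style-to-situation guidance can be expressed as a small rule set; the Situation attributes and the ordering of the rules are assumptions made for this example:

```python
# Illustrative rule-of-thumb mapping of situations to path-goal leadership styles,
# following the descriptions above. The attribute names and the rule ordering are
# assumptions made for this sketch.
from dataclasses import dataclass

@dataclass
class Situation:
    task_is_structured: bool      # is the task clear and routine?
    task_is_complex: bool         # does the task demand high performance or skill?
    work_is_distressing: bool     # stressful, boring or hazardous work?
    followers_are_expert: bool    # experienced followers who expect to be consulted?

def suggest_style(s: Situation) -> str:
    if not s.task_is_structured and not s.followers_are_expert:
        return "directive"            # clarify roles when tasks are ambiguous
    if s.work_is_distressing:
        return "supportive"           # attend to well-being under distressing work
    if s.followers_are_expert:
        return "participative"        # consult experts who expect to contribute
    if s.task_is_complex:
        return "achievement-oriented" # set challenging goals, show confidence
    return "supportive"               # default: maintain a friendly climate

print(suggest_style(Situation(False, True, False, False)))  # directive
```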


Responsibility assignment matrix

A Responsibility Assignment Matrix (RAM), also known as RACI matrix or Linear Responsibility Chart (LRC), describes the participation by various roles in completing tasks or deliverables for a project or business process. It is especially useful in clarifying roles and responsibilities in cross-functional/departmental projects and processes.[1]

RACI is an acronym derived from the four key responsibilities most typically used: Responsible, Accountable, Consulted, and Informed.

Responsible

Those who do the work to achieve the task. There is typically one role with a participation type of Responsible, although others can be delegated to assist in the work required (see also RASCI below for separately identifying those who participate in a supporting role).

Accountable (also Approver or final Approving authority)

The one ultimately accountable for the correct and thorough completion of the deliverable or task, and the one to whom Responsible is accountable. In other words, an Accountable must sign off (Approve) on the work that Responsible provides. There must be only one Accountable specified for each task or deliverable.

Consulted

Those whose opinions are sought; and with whom there is two-way communication.

Informed

Those who are kept up-to-date on progress, often only on completion of the task or deliverable; and with whom there is just one-way communication.

Very often the role that is Accountable for a task or deliverable may also be Responsible for completing it (indicated on the matrix by the task or deliverable having a role Accountable for it, but no role Responsible for its completion, i.e. it is implied). Outside of this exception, it is generally recommended that each role in the project or process for each task receive, at most, just one of the participation types. Where more than one participation type is shown, this generally implies that participation has not yet been fully resolved, which can impede the value of this technique in clarifying the participation of each role on each task.

Role Distinction

There is a distinction between a role and individually identified people: a role is a descriptor of an associated set of tasks; may be performed by many people; and one person can perform many roles. For example, an organisation may have 10 people who can perform the role of project manager, although traditionally each project only has one project manager at any one time; and a person who is able to perform the role of project manager may also be able to perform the role of business analyst and tester.


The matrix is typically created with a vertical axis (left-hand column) of tasks (e.g., from a work breakdown structure, WBS) or deliverables (e.g., from a product breakdown structure, PBS), and a horizontal axis (top row) of roles (e.g., from an organizational chart), as illustrated in an example responsibility assignment (RACI) matrix.
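As a sketch of the layout just described, a RACI matrix can be held as a mapping from tasks (rows) to role assignments (columns), with a simple check for the rule that each task has exactly one Accountable; the task and role names below are hypothetical examples, not from the source:

```python
# Sketch: a RACI matrix as tasks (rows) x roles (columns), following the layout
# described above. Task and role names are hypothetical; the check enforces the
# "exactly one Accountable per task" rule from the text.
raci = {
    # task                    role -> participation type (R, A, C or I)
    "Draft project plan":   {"Project Manager": "A", "Business Analyst": "R", "Sponsor": "C"},
    "Approve budget":       {"Sponsor": "A", "Project Manager": "R", "Finance": "C"},
    "Run acceptance tests": {"Tester": "R", "Project Manager": "A", "Business Analyst": "I"},
}

def check_single_accountable(matrix):
    """Return tasks that do not have exactly one Accountable role."""
    problems = []
    for task, assignments in matrix.items():
        accountable = [role for role, part in assignments.items() if part == "A"]
        if len(accountable) != 1:
            problems.append((task, accountable))
    return problems

print(check_single_accountable(raci))  # [] means every task has exactly one Accountable
```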

The Seven Habits of Highly Effective People


The Seven Habits of Highly Effective People, first published in 1989, is a self-help book written by Stephen R. Covey. It has sold over 15 million copies in 38 languages since first publication; a 15th anniversary edition was released in 2004. Covey presents an approach to being effective in attaining goals by aligning oneself to what he calls "true north" principles of a character ethic that he presents as universal and timeless.

Each chapter is dedicated to one of the habits, which are represented by the following imperatives:

The First Three Habits concern moving from dependence to independence (i.e. self-mastery)

Habit 1: Be Proactive

Synopsis: Take initiative in life by realizing that your decisions (and how they align with life's principles) are the primary determining factor for effectiveness in your life. Take responsibility for your choices and the consequences that follow.

Habit 2: Begin with the End in Mind

Synopsis: Self-discover and clarify your deeply important character values and life goals. Envision the ideal characteristics for each of your various roles and relationships in life.

Habit 3: Put First Things First

Synopsis: Planning, prioritizing, and executing your week's tasks based on importance rather than urgency. Evaluating if your efforts exemplify your desired character values, propel you towards goals, and enrich the roles and relationships elaborated in Habit 2.

The Next Three are to do with Interdependence (i.e. working with others)

Habit 4: Think Win-Win

Synopsis: Genuinely striving for mutually beneficial solutions or agreements in your relationships. Valuing and respecting people by understanding a "win" for all is ultimately a better long-term resolution than if only one person in the situation had gotten his way.

Habit 5: Seek First to Understand, then to be understood

Synopsis: Using empathetic listening to be genuinely influenced by a person, which compels them to reciprocate the listening and take an open mind to being influenced by you. This creates an atmosphere of caring, respect, and positive problem solving.

Habit 6: Synergize


Synopsis: Combining the strengths of people through positive teamwork, so as to achieve goals no one person could have done alone. How to yield the most prolific performance out of a group of people by encouraging meaningful contribution and modeling inspirational and supportive leadership.

The Last habit relates to self-rejuvenation.

Habit 7: Sharpen the Saw

Synopsis: The balancing and renewal of your resources, energy, and health to create a sustainable long-term effective lifestyle.

Covey coined the term abundance mentality or abundance mindset, a concept in which a person believes there are enough resources and success to share with others. It is commonly contrasted with the scarcity mindset (i.e. destructive and unnecessary competition), which is founded on the idea that, if someone else wins or is successful in a situation, that means you lose; not considering the possibility of all parties winning (in some way or another) in a given situation. Individuals with an abundance mentality are able to celebrate the success of others rather than be threatened by it.

A number of books appearing in the business press since then have discussed the idea. The abundance mentality is believed to arise from having high self-worth and security (see Habits 1, 2, and 3), and leads to the sharing of profits, recognition and responsibility. Organizations may also apply an abundance mentality while doing business.

Covey explains the "Upward Spiral" model in the sharpening the saw section. Through our conscience, along with meaningful and consistent progress, the spiral will result in growth, change, and constant improvement. In essence, one is always attempting to integrate and master the principles outlined in The 7 Habits at progressively higher levels at each iteration. Subsequent development on any habit will render a different experience and you will learn the principles with a deeper understanding. The Upward Spiral model consists of three parts: learn, commit, do. According to Covey, one must be increasingly educating the conscience in order to grow and develop on the upward spiral. The idea of renewal by education will propel one along the path of personal freedom, security, wisdom, and power.

Dr Stephen Covey's inspirational book - 7 Habits Of Highly Effective People®

Dr Stephen Covey is a hugely influential management guru, whose book The Seven Habits Of Highly Effective People became a blueprint for personal development when it was published in 1989. The Seven Habits are said by some to be easy to understand but not as easy to apply. Don't let the challenge daunt you: the 'Seven Habits' are a remarkable set of inspirational and aspirational standards for anyone who seeks to live a full, purposeful and good life, and are applicable today more than ever, as the business world becomes more attuned to humanist concepts. Covey's values are full of integrity and humanity, and contrast strongly with the process-based ideologies that characterised management thinking in earlier times.

Stephen Covey, as well as being a renowned writer, speaker, academic and humanist, has also built a huge training and consultancy products and services business, Franklin Covey, which has a global reach and has at one time or another consulted with and provided training services to most of the world's leading corporations.

Stephen Covey's Seven Habits of Highly Effective People®

Habit 1 - be proactive®

This is the ability to control one's environment, rather than have it control you, as is so often the case. It involves self-determination, choice, and the power to decide one's response to stimulus, conditions and circumstances.

Habit 2 - begin with the end in mind®

Covey calls this the habit of personal leadership - leading oneself that is, towards what you consider your aims. By developing the habit of concentrating on relevant activities you will build a platform to avoid distractions and become more productive and successful.

Habit 3 - put first things first®

Covey calls this the habit of personal management. This is about organizing and implementing activities in line with the aims established in habit 2. Covey says that habit 2 is the first or mental creation; habit 3 is the second or physical creation. (See the section on time management.)

Habit 4 - think win-win®

Covey calls this the habit of interpersonal leadership, necessary because achievements are largely dependent on co-operative efforts with others. He says that win-win is based on the assumption that there is plenty for everyone, and that success follows a co-operative approach more naturally than the confrontation of win-or-lose.

Habit 5 - seek first to understand and then to be understood®

One of the great maxims of the modern age. This is Covey's habit of communication, and it's extremely powerful. Covey helps to explain this in his simple analogy 'diagnose before you prescribe'. Simple and effective, and essential for developing and maintaining positive relationships in all aspects of life. (See the associated sections on Empathy, Transactional Analysis, and the Johari Window.)

Habit 6 - synergize®


Covey says this is the habit of creative co-operation - the principle that the whole is greater than the sum of its parts, which implicitly lays down the challenge to see the good and potential in the other person's contribution.

Habit 7 - sharpen the saw®

This is the habit of self renewal, says Covey, and it necessarily surrounds all the other habits, enabling and encouraging them to happen and grow. Covey interprets the self into four parts: the spiritual, mental, physical and the social/emotional, which all need feeding and developing.

Stephen Covey's Seven Habits are a simple set of rules for life - inter-related and synergistic, and yet each one powerful and worthy of adopting and following in its own right. For many people, reading Covey's work, or listening to him speak, literally changes their lives. This is powerful stuff indeed and highly recommended.

This 7 Habits summary is just a brief overview - the full work is fascinating, comprehensive, and thoroughly uplifting. Read the book, or listen to the full audio series if you can get hold of it.

In his more recent book, The 8th Habit, Stephen Covey introduced (logically) an eighth habit, which deals with personal fulfilment and with helping others to achieve fulfilment too. This aligns helpfully with Maslow's notions of 'Self-Actualization' and 'Transcendence' in the Hierarchy of Needs model, and also with the later life-stages in Erikson's Psychosocial Life-Stage Theory. The 8th Habit book also focuses on leadership, another distinct aspect of fulfilment through helping others. Time will tell whether the 8th Habit achieves recognition and reputation close to Covey's classic original 7 Habits work.

Stephen R. Covey (born October 24, 1932 in Salt Lake City, Utah) is the author of the best-selling book, The Seven Habits of Highly Effective People. Other books he has written include First Things First, Principle-Centered Leadership, and The Seven Habits of Highly Effective Families. In 2004, Covey released The 8th Habit. In 2008, Covey released The Leader In Me—How Schools and Parents Around the World Are Inspiring Greatness, One Child at a Time. He is currently a professor at the Jon M. Huntsman School of Business at Utah State University.

The Seven Surprises for New CEOs

The Seven Surprises for New CEOs were first described in the Harvard Business Review of October 2004, in an article on CEO leadership by Michael Porter, Jay Lorsch and Nitin Nohria. As a newly minted CEO, you may think you finally have the power to set strategy, the authority to make things happen, and full access to the finer points of your business. But if you expect the job to be as simple as that, you're in for an awakening. Even though you bear full responsibility for your company's well-being, you are a few steps removed from many of the factors that drive results. You have more power than anybody else in the corporation, but you need to use it with extreme caution. Porter et al. found that nothing - not even running a large business within the company - fully prepares a person to be the chief executive. The seven surprises listed below are the most common for new CEOs, and they carry some important lessons. First, as a new CEO you must learn to manage organizational context rather than focus on daily operations. Second, you must recognize that your position does not confer the right to lead, nor does it guarantee the loyalty of the organization. Finally, you must remember that you are subject to a host of limitations, even though others might treat you as omnipotent.

Michael Porter, Professor of Harvard Business School and acknowledged as the most influential living management thinker, has seven surprises for new CEOs.

He says: “As a newly minted CEO, you may think you finally have the power to set strategy, the authority to make things happen, and full access to the finer points of your business. But if you expect the job to be as simple as that, you're in for an awakening. Even though you bear full responsibility for your company's wellbeing, you are a few steps removed from many of the factors that drive results.

“You have more power than anybody else in the corporation, but you need to use it with extreme caution. Nothing - not even running a large business within the company - fully prepares a person to be the chief executive.”

Professor Porter will return to South Africa on July 3 for a full-day event organised by Global Leaders, following his half-day workshop for the Global Leaders Africa Summit a year ago. He will present a cutting-edge programme covering corporate strategy, South Africa’s global competitiveness and CSR initiatives.

The seven most common surprises are:

• You can't run the company.

• Giving orders is very costly.

• It is hard to know what is really going on.

• You are always sending a message.

• You are not the boss.

• Pleasing shareholders is not the goal.

• You are still only human.

He explained: “These surprises carry some important and subtle lessons. First, you must learn to manage organisational context rather than focus on daily operations. Second, you must recognise that your position does not confer the right to lead, nor does it guarantee the loyalty of the organisation. 

“Finally, you must remember that you are subject to a host of limitations, even though others might treat you as omnipotent. How well and how quickly you understand, accept, and confront the seven surprises will have a lot to do with your success or failure as a CEO.” 


TWO FACTOR THEORY

The two-factor theory (also known as Herzberg's motivation-hygiene theory) states that there are certain factors in the workplace that cause job satisfaction, while a separate set of factors cause dissatisfaction. It was developed by Frederick Herzberg, a psychologist, who theorized that job satisfaction and job dissatisfaction act independently of each other.

Attitudes and their connection with industrial mental health are related to Maslow's theory of motivation. His findings have had a considerable theoretical, as well as a practical, influence on attitudes toward administration[2]. According to Herzberg, individuals are not content with the satisfaction of lower-order needs at work, for example, those associated with minimum salary levels or safe and pleasant working conditions. Rather, individuals look for the gratification of higher-level psychological needs having to do with achievement, recognition, responsibility, advancement, and the nature of the work itself. So far, this appears to parallel Maslow's theory of a need hierarchy. However, Herzberg added a new dimension to this theory by proposing a two-factor model of motivation, based on the notion that the presence of one set of job characteristics or incentives leads to worker satisfaction at work, while another and separate set of job characteristics leads to dissatisfaction at work. Thus, satisfaction and dissatisfaction are not on a continuum with one increasing as the other diminishes, but are independent phenomena. This theory suggests that to improve job attitudes and productivity, administrators must recognize and attend to both sets of characteristics and not assume that an increase in satisfaction leads to a decrease in dissatisfaction.

The two-factor, or motivation-hygiene theory, developed from data collected by Herzberg from interviews with a large number of engineers and accountants in the Pittsburgh area. From analyzing these interviews, he found that job characteristics related to what an individual does — that is, to the nature of the work he performs — apparently have the capacity to gratify such needs as achievement, competency, status, personal worth, and self-realization, thus making him happy and satisfied. However, the absence of such gratifying job characteristics does not appear to lead to unhappiness and dissatisfaction. Instead, dissatisfaction results from unfavorable assessments of such job-related factors as company policies, supervision, technical problems, salary, interpersonal relations on the job, and working conditions. Thus, if management wishes to increase satisfaction on the job, it should be concerned with the nature of the work itself — the opportunities it presents for gaining status, assuming responsibility, and for achieving self-realization. If, on the other hand, management wishes to reduce dissatisfaction, then it must focus on the job environment — policies, procedures, supervision, and working conditions[1]. If management is equally concerned with both (as is usually the case), then managers must give attention to both sets of job factors.

The theory was based around interviews with 203 American accountants and engineers in Pittsburgh, chosen because of their professions' growing importance in the business world. The subjects were asked to relate times when they felt exceptionally good or bad about their present job or any previous job, and to provide reasons, and a description of the sequence of events giving rise to that positive or negative feeling.

Here is the description of this interview analysis:

Briefly, we asked our respondents to describe periods in their lives when they were exceedingly happy and unhappy with their jobs. Each respondent gave as many "sequences of events" as he could that met certain criteria—including a marked change in feeling, a beginning and an end, and contained some substantive description other than feelings and interpretations…

The proposed hypothesis appears verified. The factors on the right that led to satisfaction (achievement, intrinsic interest in the work, responsibility, and advancement) are mostly unipolar; that is, they contribute very little to job dissatisfaction. Conversely, the dis-satisfiers (company policy and administrative practices, supervision, interpersonal relationships, working conditions, and salary) contribute very little to job satisfaction[3].

Two-factor theory distinguishes between:

Motivators  (e.g., challenging work, recognition, responsibility) that give positive satisfaction, arising from intrinsic conditions of the job itself, such as recognition, achievement, or personal growth[4], and

Hygiene factors  (e.g. status, job security, salary and fringe benefits) that do not give positive satisfaction, though dissatisfaction results from their absence. These are extrinsic to the work itself, and include aspects such as company policies, supervisory practices, or wages/salary[4].

Essentially, hygiene factors are needed to ensure an employee is not dissatisfied, while motivation factors are needed to motivate an employee to higher performance. Herzberg also further classified our actions and how and why we do them: for example, if you perform a work-related action because you have to, that is classed as movement, but if you perform a work-related action because you want to, that is classed as motivation.
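To make the distinction concrete, here is a minimal, purely illustrative Python sketch (not part of Herzberg's original study) that tallies hypothetical employee comments against the motivator and hygiene categories named above. The category lists and sample data are assumptions for illustration only.

```python
# Illustrative sketch of Herzberg's two-factor classification.
# The factor lists follow the motivators/hygiene factors named in the text;
# the survey data below is entirely hypothetical.

MOTIVATORS = {"achievement", "recognition", "responsibility",
              "advancement", "work itself", "personal growth"}
HYGIENE = {"company policy", "supervision", "salary",
           "working conditions", "interpersonal relations", "job security"}

def classify_feedback(comments):
    """Count how many reported factors fall into each of Herzberg's two sets."""
    counts = {"motivator": 0, "hygiene": 0, "unclassified": 0}
    for factor in comments:
        if factor in MOTIVATORS:
            counts["motivator"] += 1
        elif factor in HYGIENE:
            counts["hygiene"] += 1
        else:
            counts["unclassified"] += 1
    return counts

if __name__ == "__main__":
    sample = ["salary", "recognition", "supervision",
              "working conditions", "achievement", "company policy"]
    print(classify_feedback(sample))
    # {'motivator': 2, 'hygiene': 4, 'unclassified': 0}
    # A hygiene-heavy tally points to reducing dissatisfaction (the job
    # environment) before expecting motivators to raise satisfaction.
```

A tally like this only restates the theory's logic: attending to hygiene factors removes dissatisfaction, while motivators are what raise satisfaction and performance.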

Unlike Maslow, who offered little data to support his ideas, Herzberg and others have presented considerable empirical evidence to confirm the motivation-hygiene theory, although their work has been criticized on methodological grounds.

Skandia Navigator: Measuring Intangible Assets


The leading feature of Skandia's Navigator is its flexibility. The companies that comprise the Skandia Corporation, maybe even departments in these companies, are not required to adopt a set form or number of measures. They are not even required to report on the same indicators from year to year, because the Navigator is primarily seen as a navigation tool and not one that provides detailed implementation guidelines. Despite its pioneering work and leadership in measuring IC, Skandia still believes in the value of learning through taking an experimental approach. Nevertheless, the Navigator is adopted widely across Skandia and has been incorporated in Skandia's management information system under the Dolphin system.

Skandia applies the BSC idea to the Navigator by applying measures to monitor critical business success factors under each of five focuses: financial, human, process, customer, and renewal. Under the Navigator model, the measuring entity—whether the organization or individual business units or departments—asks the question, "What are the critical factors that enable us to achieve success under each of the focus areas?" Then a number of indicators designed to reflect both present and future performance under these factors are chosen.

Edvinsson explains that the measuring entity may also have a different starting point by asking, "What are the key success factors for the measuring entity in general?" The entity then asks, "What are the indicators that are needed to monitor present and future performance for the chosen success factors?" Once these are determined, as many measures as necessary are chosen to monitor them. Finally, these measures are examined and placed under the five focuses depending on what they purport to measure.

For example, SkandiaLink asked senior managers to identify five separate key success factors for the company in 1997. These included establishing long-term relationships with satisfied customers, establishing long-term relationships with distributors (particularly banks), implementing efficient administrative routines, creating an IT system that supports operations, and employing satisfied and competent employees. Each of these "success factors" generated a set of indicators, and a total of 24 were selected for tracking. For the satisfied customer factor, for example, this generated the following indicators:

•   Satisfied customer index

•   Customer barometer

•   New sales

•   Market share

•   Lapse rate

•   Average response time at the call center

•   Discontinued calls at the call center


•   Average handling time for completed cases

•   Number of new products

These indicators are then grouped under the various focuses. As key success factors change, the overall set of indicators for a certain period (strategic phase) that the Navigator model monitors also changes. Not only does the Navigator allow this high level of flexibility in the choice of indicators from time to time, but it also encourages individual employees to express their goals and monitor their own and their team's performance.
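The mechanics of this grouping can be illustrated with a small Python sketch. It is a hypothetical illustration only: the focus names follow the five Navigator focuses named above, the indicator-to-focus assignments are assumptions made for the example, and none of it reflects an actual Skandia data model.

```python
# Hypothetical illustration of the Navigator idea: indicators are chosen per
# success factor, then grouped under the five focuses (financial, human,
# process, customer, renewal). Assignments below are assumptions, not Skandia's.

success_factors = {
    "satisfied customers": ["satisfied customer index", "customer barometer",
                            "market share", "lapse rate"],
    "efficient administration": ["average handling time for completed cases",
                                 "administrative expense/gross premiums written"],
    "competent employees": ["training expense/employee", "employee turnover"],
}

indicator_focus = {
    "satisfied customer index": "customer",
    "customer barometer": "customer",
    "market share": "customer",
    "lapse rate": "customer",
    "average handling time for completed cases": "process",
    "administrative expense/gross premiums written": "process",
    "training expense/employee": "human",
    "employee turnover": "human",
}

def group_by_focus(factors, focus_map):
    """Regroup the chosen indicators under the Navigator focuses."""
    grouped = {}
    for indicators in factors.values():
        for name in indicators:
            focus = focus_map.get(name, "unassigned")
            grouped.setdefault(focus, []).append(name)
    return grouped

if __name__ == "__main__":
    for focus, indicators in group_by_focus(success_factors, indicator_focus).items():
        print(focus, "->", indicators)
```

The point of the sketch is simply that indicators are selected per success factor first, and only afterwards sorted under the focuses, which is why different units can end up with very different indicator sets.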

In one example, the Navigator model was used by Skandia's corporate IT to monitor its vision of making IT the company's competitive edge. To that end, the IT department used the following measures: Under the financial focus, the department measured return on capital employed, operating results, and value added/employee. The customer focus looked at the contracts that the department handled for Skandia-affiliated companies. The indicators included number of contracts, savings/contract, surrender ratio, and points of sale. The human focus tracked number of full-time employees, number of managers, number of female managers, and training expense/ employee. Under the process focus the department measured the number of contracts per employee, administrative expense/gross premiums written, and IT/administrative expense.

In Skandia's IC Supplement, published in 1994, each of Skandia's companies reported and monitored a different set of indicators reflecting the strategies and key success factors of each. The number of indicators under each focus and the factors that each company attempted to monitor were different, with the exception of recurring generic indicators like customer and employee satisfaction. But even with generic measures, the same measures were not used consistently. For example, two out of five companies looked at employee turnover as an indicator of employee satisfaction under the human focus, while the other companies focused on the number of full-time employees in addition to or instead of training hours. As a result, the number of indicators generated for the whole organization was enormous.

Compared to the BSC model, where the measures are more or less prescribed, the Navigator's underlying philosophy allows for multiple variations. The underlying philosophy is to provide the highest level of flexibility within a defined framework. Skandia wants the Navigator to be a tool for plotting a course rather than a detailed guideline. The details can be filled in later as management steers the business toward meeting its strategic goals. Being flexible and idiosyncratic to the needs of the measuring unit, the Navigator ensures that the whole organization talks IC, while at the same time allowing each measuring unit to develop its own dialect.

Despite inconsistencies and the huge number of indicators generated, Skandia automated the Navigator, through the Dolphin system, and incorporated it into its management information system (MIS). With time the Dolphin system will probably lead to streamlining the various "navigators" and give rise to a more consistent set of indicators through sharing and communication. Skandia appears serious about communication despite the inconsistency of the measures used, to the extent that it has reported these measures to external stakeholders. In 1993, Skandia appointed an IC (as opposed to financial) controller to "systemically develop intellectual capital information and accounting systems, which can then be integrated with traditional financial accounting". Though IC reporting requires more consistent measures, or a well-defined model, Skandia appears determined to balance its desire to provide transparency about how the organization is run with its wish to continue experimenting with the Navigator.

Value Reporting Framework (PwC)

The insights gained from our global research programme have been codified into PricewaterhouseCoopers' corporate reporting framework. The framework identifies four broad categories of information that all industries and companies share in common: market overview, strategy and structure, managing for value, and performance, all underpinned by relevant performance measures. Each of these broad categories encompasses specific elements that, according to our research, both companies and investors consider critical to assessing performance.

As our ongoing research has expanded across industries and as our experience in applying our knowledge to the real world of corporate reporting has grown, the corporate reporting framework has evolved. Our industry-specific research and analysis enables us to tailor the framework by highlighting the elements that are most important to a particular industry.

Integrated reporting refers to the integrated representation of a company’s performance in terms of both financial and non-financial results. Integrated reporting provides greater context for performance data, clarifies how sustainability fits into operations or a business, and may help embed sustainability into company decision making. Some companies that report in an integrated manner also report additional sustainability information, often online, for specific stakeholder groups.


In the King III Report (otherwise known as King Code of Governance for South Africa 2009), integrated reporting is referred to in this manner: "A key challenge for leadership is to make sustainability issues mainstream. Strategy, risk, performance and sustainability have become inseparable; hence the phrase ‘integrated reporting’ which is used throughout this Report." [1]

Companies that produce integrated reports include BASF, Philips, Novo Nordisk, United Technologies Corporation (UTC) and American Electric Power (AEP). In 2008, UTC was the first Dow Jones Industrial Average member to produce an integrated report.

The Prince of Wales' Accounting for Sustainability project introduced the Connected Reporting Framework in 2007. Companies reporting using this framework, which links sustainability performance reporting with financial reporting and strategic direction in a connected way, include Aviva, BT and HSBC.[2]

Corporate reporting on financial and non-financial information in a single document has grown as socially responsible investing (SRI) has grown faster than the investment industry overall. Globally, the SRI market has grown at an annual rate of 22% since 2003, while global growth rates for assets under management have stagnated around 10%.[3] As more assets are managed with SRI frameworks, more investors are going beyond financial information to consider non-financial, extra-financial or environmental, social and governance (ESG) information in investment decisions.

IFAC (International Federation of Accountants), the Global Reporting Initiative (GRI), and The Prince's Accounting for Sustainability Project are collaborating to establish an International Integrated Reporting Committee (IIRC) to oversee the development of global integrated reporting standards and guidelines.

Altman Z-score

The Z-score formula for predicting bankruptcy was published in 1968 by Edward I. Altman, who was, at the time, an Assistant Professor of Finance at New York University. The formula may be used to predict the probability that a firm will go into bankruptcy within two years. Z-scores are used to predict corporate defaults and as an easy-to-calculate control measure for the financial distress status of companies in academic studies. The Z-score uses multiple corporate income and balance sheet values to measure the financial health of a company.

The Z-score is a linear combination of four or five common business ratios, weighted by coefficients. The coefficients were estimated by identifying a set of firms which had declared bankruptcy and then collecting a matched sample of firms which had survived, with matching by industry and approximate size (assets).

Altman applied the statistical method of discriminant analysis to a dataset of publicly held manufacturers. The estimation was originally based on data from publicly held manufacturers, but has since been re-estimated based on other datasets for private manufacturing, non-manufacturing and service companies.

The original data sample consisted of 66 firms, half of which had filed for bankruptcy under Chapter 7. All businesses in the database were manufacturers, and small firms with assets of <$1 million were eliminated.

The original Z-score formula was as follows: Z = 0.012T1 + 0.014T2 + 0.033T3 + 0.006T4 + 0.999T5.

T1 = Working Capital / Total Assets. Measures liquid assets in relation to the size of the company.

T2 = Retained Earnings / Total Assets. Measures profitability that reflects the company's age and earning power.

T3 = Earnings Before Interest and Taxes / Total Assets. Measures operating efficiency apart from tax and leveraging factors. It recognizes operating earnings as being important to long-term viability.

T4 = Market Value of Equity / Book Value of Total Liabilities. Adds market dimension that can show up security price fluctuation as a possible red flag

T5 = Sales/ Total Assets. Standard measure for sales turnover (varies greatly from industry to industry).

Altman found that the ratio profile for the bankrupt group fell at -0.25 avg, and for the non-bankrupt group at +4.48 avg.
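As a worked illustration, the sketch below computes a Z-score in Python. It uses the commonly quoted rescaled form of the 1968 formula (Z = 1.2·T1 + 1.4·T2 + 3.3·T3 + 0.6·T4 + 1.0·T5, with all ratios expressed as decimals rather than percentages); the input figures are invented for the example.

```python
# Altman Z-score sketch using the commonly quoted rescaled coefficients
# (decimal ratios). The input numbers below are invented for illustration.

def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, total_liabilities, sales, total_assets):
    t1 = working_capital / total_assets
    t2 = retained_earnings / total_assets
    t3 = ebit / total_assets
    t4 = market_value_equity / total_liabilities
    t5 = sales / total_assets
    return 1.2 * t1 + 1.4 * t2 + 3.3 * t3 + 0.6 * t4 + 1.0 * t5

if __name__ == "__main__":
    z = altman_z(working_capital=1_200_000, retained_earnings=800_000,
                 ebit=600_000, market_value_equity=3_000_000,
                 total_liabilities=2_500_000, sales=5_000_000,
                 total_assets=6_000_000)
    print(f"Z = {z:.2f}")
    # Commonly cited interpretation zones: Z > 2.99 "safe",
    # 1.81-2.99 "grey", Z < 1.81 "distress".
```

With these invented inputs the score lands around 2.3, in the "grey" zone, which is exactly the kind of early-warning reading the model is meant to give.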

Z Score Analysis

A company failure or bankruptcy prediction method developed by Professor Edward Altman of New York University. A company's Z score is a positive function of five factors:

(net working capital) / (total assets)

(retained earnings) / (total assets)

(EBIT) / (total assets)

(market value of common and preferred) / (book value of debt)

(sales) / (total assets)

Although the weights are not equal, the higher each ratio, the higher the Z score and the lower the probability of bankruptcy. Also called Zeta.

BASS DIFFUSION MODEL

The Bass diffusion model was developed by Frank Bass and describes the process of how new products get adopted as an interaction between users and potential users. It has been described as one of the most famous empirical generalisations in marketing, along with the Dirichlet model of repeat buying and brand choice. The model is widely used in forecasting, especially product forecasting and technology forecasting. Mathematically, the basic Bass diffusion is a Riccati equation with constant coefficients.

Frank Bass published his paper "A new product growth model for consumer durables" in 1969.[2] Prior to this, Everett Rogers published Diffusion of Innovations, a highly influential work that described the different stages of product adoption. Bass contributed some mathematical ideas to the concept.

This model has been widely influential in marketing and management science. In 2004 it was selected as one of the ten most frequently cited papers in the 50-year history of Management Science [4]. It was ranked number five, and the only marketing paper in the list. It was subsequently reprinted in the December 2004 issue of Management Science.

Model formulation

The basic Bass model is:

f(t) / [1 - F(t)] = p + q·F(t)  [2]

Where:

f(t) = dF(t)/dt is the rate of change of the installed base fraction

F(t) is the installed base fraction

p is the coefficient of innovation

q is the coefficient of imitation

Sales S(t) is the rate of change of installed base (i.e. adoption) f(t) multiplied by the ultimate market potential m:

S(t) = m·f(t) = m·[(p + q)^2 / p]·e^(-(p+q)t) / [1 + (q/p)·e^(-(p+q)t)]^2  [2]

The time of peak sales:

t* = ln(q/p) / (p + q)  [2]

Explanation

The coefficient p is called the coefficient of innovation, external influence or advertising effect. The coefficient q is called the coefficient of imitation, internal influence or word-of-mouth effect.

Typical values of p and q when time t is measured in years:[5]

The average value of p has been found to be 0.03, and it is often less than 0.01. The average value of q has been found to be 0.38, with a typical range between 0.3 and 0.5.
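The closed-form expressions above are easy to explore numerically. The Python sketch below evaluates the cumulative adoption F(t), the adoption rate f(t) and sales m·f(t) for the typical parameter values quoted above (p = 0.03, q = 0.38), and checks the analytical peak-sales time. The market potential m and the time horizon are illustrative assumptions.

```python
import math

# Bass diffusion model: evaluate F(t), f(t) and sales for typical p and q.
# p, q follow the typical values in the text; m and the horizon are assumptions.

def bass_F(t, p, q):
    """Cumulative fraction of adopters at time t."""
    e = math.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

def bass_f(t, p, q):
    """Rate of change of the installed base fraction at time t."""
    e = math.exp(-(p + q) * t)
    return ((p + q) ** 2 / p) * e / (1.0 + (q / p) * e) ** 2

if __name__ == "__main__":
    p, q, m = 0.03, 0.38, 100_000   # m = ultimate market potential (assumed)
    t_peak = math.log(q / p) / (p + q)
    print(f"analytical peak-sales time: {t_peak:.2f} years")
    for t in range(0, 21, 5):
        print(f"t={t:2d}  F={bass_F(t, p, q):.3f}  sales={m * bass_f(t, p, q):,.0f}")
```

With these typical values the peak arrives after roughly six years, which reproduces the familiar S-shaped cumulative adoption curve with a bell-shaped sales curve underneath it.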


Extensions to the model

Generalised Bass model (with pricing)

Bass found that his model fit the data for almost all product introductions, despite a wide range of managerial decision variables, e.g. pricing and advertising. This means that decision variables can shift the Bass curve in time, but that the shape of the curve is always similar.

Although many extensions of the model have been proposed, only one of these reduces to the Bass model under ordinary circumstances.[4] This model, the Generalised Bass model, was developed in 1994 by Frank Bass, Trichy Krishnan and Dipak Jain:

f(t) / [1 - F(t)] = [p + q·F(t)]·x(t)

where x(t) is a function of percentage change in price and other variables.

Successive generations

Technology products succeed one another in generations. Norton and Bass extended the model in 1987 for sales of products with continuous repeat purchasing. The formulation for three generations is as follows:[4]

S1(t) = F(t1)·m1·[1 - F(t2)]

S2(t) = F(t2)·[m2 + F(t1)·m1]·[1 - F(t3)]

S3(t) = F(t3)·[m3 + F(t2)·(m2 + F(t1)·m1)]

where

mi = ai·Mi

Mi is the incremental number of ultimate adopters of the ith generation product

ai is the average (continuous) repeat buying rate among adopters of the ith generation product

ti is the time since the introduction of the ith generation product

F(ti) = [1 - e^(-(p+q)ti)] / [1 + (q/p)·e^(-(p+q)ti)] for ti > 0, and 0 otherwise

It has been found that the p and q terms are generally the same between successive generations.
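A compact Python sketch of the three-generation formulation as reconstructed above is given below. All parameter values (p, q, the mi terms and the launch times) are invented for illustration; the point is only to show how sales of each generation build on, and then give way to, the next.

```python
import math

# Sketch of the Norton-Bass successive-generation formulation reconstructed
# above. p and q are shared across generations (as the text notes); all the
# numbers below are illustrative assumptions.

def F(t, p=0.03, q=0.38):
    """Bass cumulative adoption, zero before the generation is launched."""
    if t <= 0:
        return 0.0
    e = math.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

def three_generation_sales(t, m, launches):
    """Sales of generations 1-3 at calendar time t.

    m        -- list [m1, m2, m3] of market potentials (m_i = a_i * M_i)
    launches -- list of launch times for each generation
    """
    t1, t2, t3 = (t - launch for launch in launches)
    s1 = F(t1) * m[0] * (1 - F(t2))
    s2 = F(t2) * (m[1] + F(t1) * m[0]) * (1 - F(t3))
    s3 = F(t3) * (m[2] + F(t2) * (m[1] + F(t1) * m[0]))
    return s1, s2, s3

if __name__ == "__main__":
    for year in range(0, 25, 4):
        s1, s2, s3 = three_generation_sales(year, m=[100, 150, 200],
                                            launches=[0, 6, 12])
        print(f"t={year:2d}  gen1={s1:6.1f}  gen2={s2:6.1f}  gen3={s3:6.1f}")
```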

Relationship with other s-curves

There are two special cases of the Bass diffusion model.

The first special case occurs when q = 0; the model then reduces to the exponential distribution.

The second special case occurs when p = 0; the model then reduces to the logistic distribution.

The Bass model is a special case of the Gamma/shifted Gompertz distribution (G/SG).


Use in online social networks

The rapid, recent (as of early 2007) growth in online social networks (and other virtual communities) has led to an increased use of the Bass diffusion model. The Bass diffusion model is used to estimate the size and growth rate of these social networks.

BRAND PERSONALITY AAKER

The Brand Personality Dimensions of Jennifer Aaker (Journal of Marketing Research, 8/97, pp 347-356) is a framework to describe and measure the “personality” of a brand in five core dimensions, each divided into a set of facets. It is a model to describe the profile of a brand by using an analogy with a human being.  I believe that it's imperative that a brand be carefully "humanized" in order to connect with the audience.

Here's what Ms. Aaker, and her father before her, researched:

THE FIVE CORE DIMENSIONS AND THEIR FACETS

These are:

1. Sincerity (down-to-earth, honest, wholesome, cheerful)

2. Excitement (daring, spirited, imaginative, up-to-date)

3. Competence (reliable, intelligent, successful)

4. Sophistication (upper class, charming)

5. Ruggedness (outdoorsy, tough)

Each facet is in turn measured by a set of traits. The trait measurements are done using a five point scale (1 = not at all descriptive, 5 = extremely descriptive) rating the extent to which each trait describes the specific brand.

AN EXPLANATION OF THE TRAITS BELONGING TO EACH OF THE FACETS

These traits are:

Sincerity:

o Down-to-earth: down-to-earth, family-oriented, small-town

o Honest: honest, sincere, real

o Wholesome: wholesome, original

o Cheerful: cheerful, sentimental, friendly

Excitement:

o Daring: daring, trendy, exciting

o Spirited: spirited, cool, young

o Imaginative: imaginative, unique

o Up-to-date: up-to-date, independent, contemporary

Competence:

o Reliable: reliable, hard working, secure

o Intelligent: intelligent, technical, corporate

o Successful: successful, leader, confident

Sophistication:

o Upper class: upper class, glamorous, good looking

o Charming: charming, feminine, smooth

Ruggedness:

o Outdoorsy: outdoorsy, masculine, western

o Tough: tough, rugged
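To show how the five-point trait ratings described above roll up into facet and dimension scores, here is a small Python sketch. The aggregation rule (simple averaging) and the sample ratings are assumptions for illustration only; Aaker's published scale development is more involved.

```python
# Illustrative roll-up of 5-point trait ratings into Aaker's facets and
# dimensions. Simple averaging and the sample ratings are assumptions.

FACETS = {
    "Sincerity": {
        "Down-to-earth": ["down-to-earth", "family-oriented", "small-town"],
        "Honest": ["honest", "sincere", "real"],
        "Wholesome": ["wholesome", "original"],
        "Cheerful": ["cheerful", "sentimental", "friendly"],
    },
    "Excitement": {
        "Daring": ["daring", "trendy", "exciting"],
        "Spirited": ["spirited", "cool", "young"],
        "Imaginative": ["imaginative", "unique"],
        "Up-to-date": ["up-to-date", "independent", "contemporary"],
    },
    # ... Competence, Sophistication and Ruggedness follow the same pattern.
}

def dimension_scores(ratings):
    """Average trait ratings (1-5) up to facet level, then to dimension level."""
    scores = {}
    for dimension, facets in FACETS.items():
        facet_means = []
        for traits in facets.values():
            rated = [ratings[t] for t in traits if t in ratings]
            if rated:
                facet_means.append(sum(rated) / len(rated))
        scores[dimension] = sum(facet_means) / len(facet_means) if facet_means else None
    return scores

if __name__ == "__main__":
    sample_ratings = {"down-to-earth": 4, "honest": 5, "cheerful": 3,
                      "daring": 2, "imaginative": 4, "up-to-date": 3}
    print(dimension_scores(sample_ratings))
```

The output is simply a profile of the brand along the dimensions, which is the "personality" comparison the framework is designed to support.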

5 Dimensions of Brand Personality

It’s Saturday, and I have spent the morning  reading and drinking coffee. I’m actually feeling kind of lazy and thinking about the screening of the documentary I worked on earlier this year. So, because I am feeling lazy, and not in the mood to post some lengthy piece with loads of pictures, and such I am going to post a quick break down of the 5 dimensions of brand personality. (I just saw a bunch of you roll your eyes.)

How many times have you heard the statement, “the consumer owns the brand”?

It would probably be safe to say you’ve heard it a dozen or so times, and possibly uttered it yourself, because it happens to be true.  No matter what the product or service that an organization is offering to its target audience, success or failure is dependent upon the consumers’ buying in to what they’re selling.

Consumers make purchasing decisions based on any number of factors they associate with individual brands, and companies spend millions on advertising and marketing activities so that they can influence what those associations might be.  Just as we each choose our friends based on their personalities, brands can elicit the same sort of response in consumers.  In light of this, wouldn't it be interesting to know which human personality traits consumers tend to apply to brands?

Well, it's a good thing for us that someone has studied this and given us a few answers:

1st Dimension – SINCERITY

Consumers interpret sincere brands as being down-to-earth, honest, wholesome, and cheerful.  Sure, some people find Rachael Ray annoying, but more people find her endearing – the kind of woman you can sit down with for a chat at the kitchen table.

2nd Dimension – EXCITEMENT

The most exciting brands are daring, spirited, imaginative, and on the cutting edge.  Not only are Burton snowboards on the cutting edge of technology and performance, the products bearing the Burton name are designed with their audience in mind.  Funky graphics and forward-thinking designs make Burton a leader in their competitive industry.

3rd Dimension – COMPETENCE

Reliability, intelligence, and success are the traits associated with these brands.  Even in these trying economic times, there are a few financial services firms that still manage to play well in consumer minds.  Charles Schwab is the stable, successful, smart guy next door who can tell you what to do with your 401k allocations.

4th Dimension – SOPHISTICATION

A brand that is sophisticated is viewed as charming and fit for the upper classes.  When it comes to esteem and seemingly eternal longevity, the Chanel brand is unequaled.  In good times and bad, this brand remains strong as a symbol of a life lived in all the right places, doing all the right things.

5th Dimension – RUGGEDNESS

Interestingly, consumers pick up on this personality dimension quite well.  Rugged brands are seen as outdoorsy and tough.  The North Face has built an empire by outfitting people who actually do scary outdoorsy things, and those who just like to look good on the streets of NYC.

CRISIS MANAGEMENT

Crisis management is the process by which an organization deals with a major event that threatens to harm the organization, its stakeholders, or the general public. Three elements are common to most definitions of crisis: (a) a threat to the organization, (b) the element of surprise, and (c) a short decision time.[1] Venette[2] argues that "crisis is a process of transformation where the old system can no longer be maintained." Therefore the fourth defining quality is the need for change. If change is not needed, the event could more accurately be described as a failure or incident.

In contrast to risk management, which involves assessing potential threats and finding the best ways to avoid those threats, crisis management involves dealing with threats after they have occurred. It is a discipline within the broader context of management consisting of skills and techniques required to identify, assess, understand, and cope with a serious situation, especially from the moment it first occurs to the point that recovery procedures start.

Crisis management consists of:

Methods used to respond to both the reality and perception of crises.


Establishing metrics to define what scenarios constitute a crisis and should consequently trigger the necessary response mechanisms.

Communication that occurs within the response phase of emergency management scenarios.

The crisis management methods of a business or an organization are collectively called its crisis management plan.

Crisis management is occasionally referred to as incident management, although several industry specialists such as Peter Power argue that the term crisis management is more accurate. [3]

The credibility and reputation of organizations is heavily influenced by the perception of their responses during crisis situations. Organizing and communicating a timely response to a crisis is a challenge for any business. There must be open and consistent communication throughout the hierarchy to contribute to a successful crisis communication process.

The related terms emergency management and business continuity management focus respectively on the prompt but short lived "first aid" type of response (e.g. putting the fire out) and the longer term recovery and restoration phases (e.g. moving operations to another site). Crisis is also a facet of risk management, although it is probably untrue to say that Crisis Management represents a failure of Risk Management since it will never be possible to totally mitigate the chances of catastrophes occurring.

During the crisis management process, it is important to identify types of crises, in that different crises necessitate the use of different crisis management strategies. The range of potential crises is enormous, but crises can be clustered.

Lerbinger categorized seven types of crises:

Natural disaster

Technological crises

Confrontation

Malevolence

Crisis of skewed management values

Crisis of deception

Crisis of management misconduct


Crisis Management Model

Successfully defusing a crisis requires an understanding of how to handle a crisis - before it occurs. Gonzalez-Herrero and Pratt identified the different phases of crisis management.

There are three phases in any crisis management process:

1. The diagnosis of the impending trouble or the danger signals

2. Choosing an appropriate turnaround strategy

3. Implementation of the change process and its monitoring

Management Crisis Planning

No corporation looks forward to facing a situation that causes a significant disruption to their business, especially one that stimulates extensive media coverage. Public scrutiny can result in a negative financial, political, legal and government impact. Crisis management planning deals with providing the best response to a crisis.

Contingency planning

Preparing contingency plans in advance, as part of a crisis management plan, is the first step to ensuring an organization is appropriately prepared for a crisis. Crisis management teams can rehearse a crisis plan by developing a simulated scenario to use as a drill. The plan should clearly stipulate that the only people to speak publicly about the crisis are the designated persons, such as the company spokesperson or crisis team members. The first hours after a crisis breaks are the most crucial, so working with speed and efficiency is important, and the plan should indicate how quickly each function should be performed. When preparing to offer a statement externally as well as internally, information should be accurate. Providing incorrect or manipulated information has a tendency to backfire and will greatly exacerbate the situation. The contingency plan should contain information and guidance that will help decision makers to consider not only the short-term consequences, but the long-term effects of every decision.

Business continuity planning

When a crisis will undoubtedly cause a significant disruption to an organization, a business continuity plan can help minimize the disruption. First, one must identify the critical functions and processes that are necessary to keep the organization running. Then each critical function and/or process must have its own contingency plan in the event that one of the functions/processes ceases or fails. Testing these contingency plans by rehearsing the required actions in a simulation will allow all involved to become more sensitive and aware of the possibility of a crisis. As a result, in the event of an actual crisis, the team members will act more quickly and effectively.


Structural-functional systems theory

Providing information to an organization in a time of crisis is critical to effective crisis management. Structural-functional systems theory addresses the intricacies of information networks and levels of command making up organizational communication. The structural-functional theory identifies information flow in organizations as "networks" made up of members and "links". Information in organizations flows in patterns called networks.

Examples of successful crisis management

Tylenol (Johnson and Johnson)

In the fall of 1982, a murderer added 65 milligrams of cyanide to some Tylenol capsules on store shelves, killing seven people, including three in one family. Johnson & Johnson recalled and destroyed 31 million capsules at a cost of $100 million. The affable CEO, James Burke, appeared in television ads and at news conferences informing consumers of the company's actions. Tamper-resistant packaging was rapidly introduced, and Tylenol sales swiftly bounced back to near pre-crisis levels.[16]

When another bottle of tainted Tylenol was discovered in a store, it took only a matter of minutes for the manufacturer to issue a nationwide warning that people should not use the medication in its capsule form.[17]

Odwalla Foods

When Odwalla's apple juice was thought to be the cause of an outbreak of E. coli infection, the company lost a third of its market value. In October 1996, an outbreak of E. coli bacteria in Washington state, California, Colorado and British Columbia was traced to unpasteurized apple juice manufactured by natural juice maker Odwalla Inc. Forty-nine cases were reported, including the death of a small child. Within 24 hours, Odwalla conferred with the FDA and Washington state health officials; established a schedule of daily press briefings; sent out press releases which announced the recall; expressed remorse, concern and apology, and took responsibility for anyone harmed by their products; detailed symptoms of E. coli poisoning; and explained what consumers should do with any affected products. Odwalla then developed - through the help of consultants - effective thermal processes that would not harm the products' flavors when production resumed. All of these steps were communicated through close relations with the media and through full-page newspaper ads.[18]

Mattel

Mattel Inc., the toy maker, has been plagued with more than 28 product recalls, and in the summer of 2007, amid problems with exports from China, faced two product recalls in two weeks. The company "did everything it could to get its message out, earning high marks from consumers and retailers. Though upset by the situation, they were appreciative of the company's response. At Mattel, just after the 7 a.m. recall announcement by federal officials, a public relations staff of 16 was set to call reporters at the 40 biggest media outlets. They told each to check their e-mail for a news release outlining the recalls, invited them to a teleconference call with executives and scheduled TV appearances or phone conversations with Mattel's chief executive. The Mattel CEO Robert Eckert did 14 TV interviews on a Tuesday in August and about 20 calls with individual reporters. By the week's end, Mattel had responded to more than 300 media inquiries in the U.S. alone."[19]

Pepsi

The Pepsi Corporation faced a crisis in 1993 which started with claims of syringes being found in cans of Diet Pepsi. Pepsi urged stores not to remove the product from shelves while the cans and the situation were investigated. This led to an arrest, which Pepsi made public and then followed with their first video news release, showing the production process to demonstrate that such tampering was impossible within their factories. A second video news release displayed the man arrested. A third video news release showed surveillance from a convenience store where a woman was caught replicating the tampering incident. The company simultaneously publicly worked with the FDA during the crisis. The corporation was completely open with the public throughout, and every employee of Pepsi was kept aware of the details. This made public communications effective throughout the crisis. After the crisis had been resolved, the corporation ran a series of special campaigns designed to thank the public for standing by the corporation, along with coupons for further compensation. This case served as a design for how to handle other crisis situations.[20]

Examples of unsuccessful crisis management

Bhopal

The Bhopal disaster, in which poor communication before, during, and after the crisis cost thousands of lives, illustrates the importance of incorporating cross-cultural communication in crisis management plans. According to American University's Trade Environmental Database Case Studies (1997), local residents were not sure how to react to warnings of potential threats from the Union Carbide plant. Operating manuals printed only in English are an extreme example of mismanagement, but indicative of systemic barriers to information diffusion. According to Union Carbide's own chronology of the incident (2006), a day after the crisis Union Carbide's upper management arrived in India but was unable to assist in the relief efforts because they were placed under house arrest by the Indian government. Symbolic intervention can be counter-productive; a crisis management strategy can help upper management make more calculated decisions in how they should respond to disaster scenarios. The Bhopal incident illustrates the difficulty in consistently applying management standards to multi-national operations and the blame shifting that often results from the lack of a clear management plan.[21]

Ford and Firestone Tire and Rubber Company


The Ford-Firestone Tire and Rubber Company dispute transpired in August 2000. In response to claims that their 15-inch Wilderness AT, radial ATX and ATX II tire treads were separating from the tire core—leading to grisly, spectacular crashes—Bridgestone/Firestone recalled 6.5 million tires. These tires were mostly used on the Ford Explorer, the world's top-selling sport utility vehicle (SUV).[22]

The two companies committed three major blunders early on, say crisis experts. First, they blamed consumers for not inflating their tires properly. Then they blamed each other for faulty tires and faulty vehicle design. Then they said very little about what they were doing to solve a problem that had caused more than 100 deaths—until they got called to Washington to testify before Congress.[23]

Exxon

On March 24, 1989, a tanker belonging to the Exxon Corporation ran aground in the Prince William Sound in Alaska. The Exxon Valdez spilled millions of gallons of crude oil into the waters off Valdez, killing thousands of fish, fowl, and sea otters. Hundreds of miles of coastline were polluted and salmon spawning runs disrupted; numerous fishermen, especially Native Americans, lost their livelihoods. Exxon, by contrast, did not react quickly in terms of dealing with the media and the public; the CEO, Lawrence Rawl, did not become an active part of the public relations effort and actually shunned public involvement; the company had neither a communication plan nor a communication team in place to handle the event—in fact, the company did not appoint a public relations manager to its management team until 1993, 4 years after the incident; Exxon established its media center in Valdez, a location too small and too remote to handle the onslaught of media attention; and the company acted defensively in its response to its publics, even laying blame, at times, on other groups such as the Coast Guard. These responses also happened within days of the incident.

POSITIONING TROUT

In marketing, positioning has come to mean the process by which marketers try to create an image or identity in the minds of their target market for its product, brand, or organization.

Re-positioning involves changing the identity of a product, relative to the identity of competing products, in the collective minds of the target market.

De-positioning involves attempting to change the identity of competing products, relative to the identity of your own product, in the collective minds of the target market.

The original work on Positioning was consumer marketing oriented, and was focused not so much on the question of position relative to competitive products as on cutting through the ambient "noise" and establishing a moment of real contact with the intended recipient. In the classic example of Avis claiming "No. 2, We Try Harder", the point was to say something so shocking (and it was, by the standards of the day) that it cleared space in your brain and made you forget all about who was #1, not to make some philosophical point about being "hungry" for business.

The growth of high-tech marketing may have had much to do with the shift in definition towards competitive positioning. An important component of hi-tech marketing in the age of the World Wide Web is positioning in major search engines such as Google, Yahoo and Bing, which can be accomplished through Search Engine Optimization, also known as SEO. This is an especially important component when attempting to improve competitive positioning among a younger demographic, which tends to be web oriented in their shopping and purchasing habits as a result of being highly connected and involved in social media in general.

Although there are different definitions of Positioning, probably the most common is: identifying a market niche for a brand, product or service utilizing traditional marketing placement strategies (i.e. price, promotion, distribution, packaging, and competition).

Positioning is also defined as the way in which marketers create an impression in the customer's mind.

Positioning is a concept in marketing which was first introduced by Jack Trout ("Industrial Marketing" magazine, June 1969) and then popularized by Al Ries and Jack Trout in their best-selling book "Positioning - The Battle for Your Mind" (McGraw-Hill, 1981).

This differs slightly from the context in which the term was first published in 1969 by Jack Trout in the paper "Positioning is a game people play in today's me-too market place" in the publication Industrial Marketing, in which the case is made that the typical consumer is overwhelmed with unwanted advertising, and has a natural tendency to discard all information that does not immediately find a comfortable (and empty) slot in the consumer's mind. It was then expanded into their ground-breaking first book, "Positioning: The Battle for Your Mind," in which they define Positioning as "an organized system for finding a window in the mind. It is based on the concept that communication can only take place at the right time and under the right circumstances" (p. 19 of 2001 paperback edition).

What most will agree on is that Positioning is something (perception) that happens in the minds of the target market. It is the aggregate perception the market has of a particular company, product or service in relation to their perceptions of the competitors in the same category. It will happen whether or not a company's management is proactive, reactive or passive about the on-going process of evolving a position. But a company can positively influence the perceptions through enlightened strategic actions.

Generally, the product positioning process involves the following steps (a brief illustrative sketch follows this list):

Defining the market in which the product or brand will compete (who the relevant buyers are)


Identifying the attributes (also called dimensions) that define the product 'space'

Collecting information from a sample of customers about their perceptions of each product on the relevant attributes

Determine each product's share of mind

Determine each product's current location in the product space

Determine the target market's preferred combination of attributes (referred to as an ideal vector)

Examine the fit between:

The position of your product

The position of the ideal vector

Position.
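The perceptual-mapping arithmetic behind steps such as locating each product in the attribute space and examining its fit with the ideal vector can be shown in a short Python sketch. The attribute names, the rating scores and the ideal vector below are entirely hypothetical.

```python
import math

# Hypothetical perceptual-map sketch: each brand is a point in attribute space
# (average customer ratings), and fit is measured as distance to the ideal vector.

attributes = ["price_value", "quality", "innovation"]

brand_positions = {             # assumed average ratings on a 1-7 scale
    "Brand A": [5.5, 4.0, 3.5],
    "Brand B": [3.0, 6.0, 5.5],
    "Brand C": [4.5, 5.0, 6.5],
}

ideal_vector = [5.0, 6.0, 6.0]  # assumed preferred attribute combination

def distance_to_ideal(position, ideal):
    """Euclidean distance between a brand's perceived position and the ideal."""
    return math.sqrt(sum((p - i) ** 2 for p, i in zip(position, ideal)))

if __name__ == "__main__":
    ranked = sorted(brand_positions.items(),
                    key=lambda kv: distance_to_ideal(kv[1], ideal_vector))
    for brand, pos in ranked:
        print(f"{brand}: distance to ideal = {distance_to_ideal(pos, ideal_vector):.2f}")
    # The smallest distance indicates the closest fit between a brand's
    # current position and the target market's preferred combination of attributes.
```

In practice the attribute space comes from customer research rather than assumed numbers, but the fit calculation works in the same way.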

The process is similar for positioning your company's services (Faheem, 2010). Services, however, don't have the physical attributes of products - that is, we can't feel them or touch them or show nice product pictures. So you need to ask first your customers and then yourself: what value do clients get from my services? How are they better off from doing business with me? Also ask: is there a characteristic that makes my services different?

Write out the value customers derive and the attributes your services offer to create the first draft of your positioning. Test it on people who don't really know what you do or what you sell, watch their facial expressions and listen for their response. When they want to know more because you've piqued their interest and started a conversation, you'll know you're on the right track.

More generally, there are three types of positioning concepts:

1. Functional positions

Solve problems

Provide benefits to customers

Get favorable perception by investors (stock profile) and lenders

2. Symbolic positions

Self-image enhancement


Ego identification

Belongingness and social meaningfulness

Affective fulfillment

3. Experiential positions

Provide sensory stimulation

Provide cognitive stimulation

Brand management in highly competitive and dynamic markets will only be effective if the brand itself stays close to its roots of uniqueness and core values, focuses on specific market segments and captures a competitive positioning in a specific market. The two brand management tools that can fulfil that role are brand identity and brand positioning. Brand identity and brand positioning need to be connected within their own specific functions: brand identity expresses the brand's tangible and intangible unique characteristics over the long term, whereas brand positioning, a competition-oriented combat tool, works over the short term. Positioning communicates a specific aspect of identity at a given time in a given specific market segment within a field of competition. Hence positioning derives from identity and may change over time and/or differ per product (Kapferer, 2007:95-102).

Brand positioning is the sum of all activities that position the brand in the mind of the customer relative to its competition. Positioning is not about creating something new or different, but about manipulating the mindset and retying existing connections (Ries & Trout, 2001:2-5). Kotler and Keller define brand positioning as an "act of designing the company's offering and image to occupy a distinct place in the mind of the target market." The objective of positioning is to locate the brand in the minds of stakeholders, customers and prospects in particular. A recognizable and trusted customer-focused value proposition can be the result of a successful positioning without doing something to the product itself. It's the rational and persuasive reason to buy the brand in highly competitive target markets (Kotler & Keller, 2006:310). Therefore it is essential to understand and to know the position a brand owns in the mind of a customer instead of defining what the brand stands for. To position a brand efficiently within its market, it is critical to evaluate the brand objectively and assess how the brand is viewed by customers and prospects (Ries & Trout, 2001:193-206).

Positioning the brand in highly over-communicated B2B environments can easily fail when the minds of customers and prospects get confused. Trout suspects that people filter the information allowed to access their minds as a self-defence mechanism. For this reason, management needs to understand five mental elements in the positioning process in order to position the brand successfully in the mind of customers and prospects (Trout, 1995:3-8):


1. Minds are limited

According to social scientists our selective process has at least three rings of defence: (1) selective exposure, (2) selective attention, (3) selective retention (Trout, 1995:11-12).

"Learning is simply remembering what we're interested in" (Trout, 1995:13).

The mind accepts only (new) information which matches its current mindset, every mismatch will be filtered out and blocked. (Ries & Trout, 2001:29).

2. Minds hate confusion

To avoid confusion emphasize simplicity and focus on the most obvious powerful attribute and position it well into the mind (Trout, 1995:11-24).

3. Minds are insecure

Minds tend to be emotional rather than rational. Often people tend to follow others as a principle, social proof to guide the insecure mind in making decisions and reduce the risk of doing something wrong. Behavioural scientists say there are five forms of perceived risk: (1) monetary risk, (2) functional risk, (3) physical risk, (4) social risk, (5) psychological risk (Trout, 1995:25-28).

4. Minds don't change

According to Petty and Cacioppo (as quoted in Trout, 1995:35), "...beliefs are thought to provide the cognitive foundation of an attitude. In order to change an attitude, then, it is presumably necessary to modify the information on which that attitude rests. It is generally necessary, therefore, to change a person's belief, eliminate old beliefs, or introduce new beliefs."

Once the market has made up its mind about a brand, it is impossible to change that mind (Trout, 1995:34).

In general, the mind is sensitive to prior knowledge or experience. (Ries & Trout, 2001:6) At the end it comes back to what the market is familiar and already comfortable with (Trout, 1995:34-35).

5. Minds can lose focus

Variations to the brand, for example line extensions, can blur the brand in the mind; in other words, the mind loses focus. To reinforce the mindset it is necessary to stay focussed and consistent on the key attributes of the brand.

Positioning is in essence a strategy to position the brand against other brands (Trout, 1995:146). Consequently, positioning requires a balance of ideal points-of-parity and points-of-difference brand associations within the given market and competitive environment. Establishing brand positioning starts with identifying: (1) the target market, (2) the nature of competition, (3) the points of parity (POP), and (4) the points of difference (POD) (Keller, 2006:98-99).

Marketers can use a brand mantra to emphasize the core brand associations that reflect the "heart and soul" of the brand. The brand mantra is a three- to five-word phrase that captures the indisputable "heart and soul" - the essence or spirit - of the brand positioning. The brand mantra communicates what the brand is and what it is not. Next to that, it can provide brand guidance for appropriate product extensions, line extensions, acquisitions and mergers, and internal communication. A brand mantra also provides a short list of crucial brand considerations which leverage a consistent brand image and internal branding. Keller distinguishes three determinant categories to design a brand mantra: (1) the emotional modifier, (2) the descriptive modifier, (3) the brand function (Keller, 2006:121-123).

Positioning results from an analytical process based on four questions: (1) a brand for what, (2) a brand for whom, (3) a brand for when, (4) a brand against whom? Obviously it indicates the brand's distinctive characteristics, essential points of difference, attractiveness to the market and "raison d'être". Kapferer's standard approach to achieving the desired positioning is based on four determinations: (1) definition of the target market, (2) definition of the frame of reference and subjective category, (3) promise or consumer benefit, (4) reason to believe (Kapferer, 2007:100-102).

According to Ries and Trout the secret of a successful position is to balance a unique position with an appeal that is not too narrow. Organizations should look for manageable, smaller targets to which they can deliver an appropriate and unique value proposition, rather than a bigger, homogeneous, highly competitive market. Success lies in the willingness to sacrifice a minor role in the total market in return for leadership in specific oligopolistic market segments (Ries & Trout, 2001:208).

Treacy and Wiersema argue that leadership comes with a strong focus on delivering excellent customer value. Based on a three-year study of 40 companies they distinguish three value disciplines on which organizations should focus: (1) operational excellence, (2) customer intimacy, or (3) product leadership. See figure 18. The challenge for organizations is to sustain the chosen value discipline consistently throughout the organization and to create internal long-term consistency. There are two golden rules for success: (1) excel in one of the three value disciplines to create a leadership position, and (2) deliver an adequate level of excellence in the two other value disciplines (Treacy & Wiersema, 1993:3-14).

Figure 18: Value discipline model (Treacy & Wiersema, 1993:3)

According to Kapferer there is a direct connection between the brand essence, brand identity and brand position that enables the brand to change over (long-term) time within certain degrees of freedom and still remain itself. Brand positioning capitalizes on a specific part of identity within the playing field, which varies by segment, demography, market dynamics and time. For that reason Kapferer argues that on a global level a unified identity can initiate multiple specific market strategies for different markets without jeopardizing the brand essence and identity (Kapferer, 2007:105).

ENTERPRISE ARCHITECTURE ZACHMAN

The Zachman Framework is an enterprise architecture framework which provides a formal and highly structured way of viewing and defining an enterprise. It consists of a two-dimensional classification matrix based on the intersection of six communication questions (What, Where, When, Why, Who and How) with six rows representing reification transformations.

The Zachman Framework is not a methodology in that it lacks specific methods and processes for collecting, managing, or using the information that it describes. The Framework is named after its creator John Zachman, who first developed the concept in the 1980s at IBM. It has been updated several times since.

The Zachman "Framework" is a taxonomy for organizing architectural artifacts (in other words, design documents, specifications, and models) that takes into account both whom the artifact targets (for example, business owner and builder) and what particular issue (for example, data and functionality) is being addressed.


The term "Zachman Framework" has multiple meanings. It can refer to any of the frameworks proposed by John Zachman:

The initial framework, named A Framework for Information Systems Architecture, by John Zachman, published in a 1987 article in the IBM Systems Journal.[5]

The Zachman Framework for Enterprise Architecture, an update of the 1987 original, extended and renamed in the 1990s.[6]

One of the later versions of the Zachman Framework, offered by Zachman International as an industry standard. In other sources the Zachman Framework is introduced as a framework, originated by and named after John Zachman, represented in numerous ways. This framework is explained as, for example:

a framework to organize and analyze data,[7]

a framework for enterprise architecture,[8]

a classification system, or classification scheme,[9]

a matrix, often in a 6x6 format,

a two-dimensional model[10] or an analytic model,

a two-dimensional schema, used to organize the detailed representations of the enterprise.[11]

Besides the frameworks developed by John Zachman, numerous extensions and/or applications have been developed, which are also sometimes called Zachman Frameworks.

The Zachman Framework summarizes a collection of perspectives involved in enterprise architecture. These perspectives are represented in a two-dimensional matrix that defines the types of stakeholders along the rows and the aspects of the architecture along the columns. The framework does not define a methodology for an architecture. Rather, the matrix is a template that must be filled in with the goals/rules, processes, material, roles, locations, and events specifically required by the organization. Further modeling by mapping between columns in the framework identifies gaps in the documented state of the organization.[12]

The framework is a simple and logical structure for classifying and organizing the descriptive representations of an enterprise. It is significant to both the management of the enterprise and the actors involved in the development of enterprise systems.[13] While there is no order of priority for the columns of the Framework, the top-down order of the rows is significant to the alignment of business concepts and the actual physical enterprise. The level of detail in the Framework is a function of each cell (and not the rows). When applied by IT the lower rows focus on information technology; however, the framework can apply equally to physical material (ball valves, piping, transformers, fuse boxes, for example) and the associated physical processes, roles, locations, etc. related to those items.

Framework for enterprise architecture

In the 1997 paper "Concepts of the Framework for Enterprise Architecture" Zachman explained that the framework should be referred to as a "Framework for Enterprise Architecture", and should have been from the beginning. In the early 1980s however, according to Zachman, there was "little interest in the idea of Enterprise Reengineering or Enterprise Modeling and the use of formalisms and models was generally limited to some aspects of application development within the Information Systems community".[20]

In 2008 Zachman Enterprise introduced the Zachman Framework: The Official Concise Definition as a new Zachman Framework standard.

Planner's View (Scope) - The first architectural sketch is a "bubble chart" or Venn diagram, which depicts in gross terms the size, shape, partial relationships, and basic purpose of the final structure. It corresponds to an executive summary for a planner or investor who wants an overview or estimate of the scope of the system, what it would cost, and how it would relate to the general environment in which it will operate.

Owner's View (Enterprise or Business Model) - Next are the architect's drawings that depict the final building from the perspective of the owner, who will have to live with it in the daily routines of business. They correspond to the enterprise (business) models, which constitute the designs of the business and show the business entities and processes and how they relate.

Designer's View (Information Systems Model) - The architect's plans are the translation of the drawings into detail requirements representations from the designer's perspective. They correspond to the system model designed by a systems analyst who must determine the data elements, logical process flows, and functions that represent business entities and processes.

Builder's View (Technology Model) - The contractor must redraw the architect's plans to represent the builder's perspective, with sufficient detail to understand the constraints of tools, technology, and materials. The builder's plans correspond to the technology models, which must adapt the information systems model to the details of the programming languages, input/output (I/O) devices, or other required supporting technology.

Subcontractor View (Detailed Specifications) - Subcontractors work from shop plans that specify the details of parts or subsections. These correspond to the detailed specifications that are given to programmers who code individual modules without being concerned with the overall context or structure of the system. Alternatively, they could represent the detailed requirements for various commercial-off-the-shelf (COTS), government off-the-shelf (GOTS), or components of modular systems software being procured and implemented rather than built.

Actual System View or The Functioning Enterprise

Focus or Columns

In summary, each perspective focuses attention on the same fundamental questions, then answers those questions from that viewpoint, creating different descriptive representations (i.e., models), which translate from higher to lower perspectives. The basic model for the focus (or product abstraction) remains constant. The basic model of each column is uniquely defined, yet related across and down the matrix.[26] In addition, the six categories of enterprise architecture components, and the underlying interrogatives that they answer, form the columns of the Zachman Framework and these are:[24]

1. The data description — What

2. The function description — How

3. The network description — Where

4. The people description — Who


5. The time description — When

6. The motivation description — Why

In Zachman’s opinion, the single factor that makes his framework unique is that each element on either axis of the matrix is explicitly distinguishable from all the other elements on that axis. The representations in each cell of the matrix are not merely successive levels of increasing detail, but actually are different representations — different in context, meaning, motivation, and use. Because each of the elements on either axis is explicitly different from the others, it is possible to define precisely what belongs in each cell.
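Because each row and column label is explicitly defined, the matrix itself can be sketched as a small data structure. The following is a minimal illustrative sketch in Python, not taken from Zachman's own publications; the cell contents, artifact names and the gap-analysis helper are invented for this example.

```python
# Minimal sketch (not from Zachman's publications): representing the 6x6
# classification matrix as a dictionary keyed by (perspective, interrogative).
# Row and column labels follow the descriptions above; cell contents are
# placeholders that an organization would fill with its own artifacts.

PERSPECTIVES = [          # rows; the top-down order is significant
    "Planner (Scope)",
    "Owner (Business Model)",
    "Designer (Information Systems Model)",
    "Builder (Technology Model)",
    "Subcontractor (Detailed Specifications)",
    "Functioning Enterprise",
]

INTERROGATIVES = ["What", "How", "Where", "Who", "When", "Why"]  # columns; no priority order

# One cell per (row, column) intersection; each cell holds a list of artifacts.
matrix = {(p, q): [] for p in PERSPECTIVES for q in INTERROGATIVES}

# Example: file a hypothetical artifact in the Owner/What cell.
matrix[("Owner (Business Model)", "What")].append("Business entity model")

# Gap analysis: list cells that still have no documented artifact.
gaps = [cell for cell, artifacts in matrix.items() if not artifacts]
print(f"{len(gaps)} of {len(matrix)} cells are undocumented")
```

Such a structure only organizes artifacts; consistent with the text above, it supplies no methodology for producing them.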

Leadership Styles at Work

We believe that the focus of this type of training should be on the situational use of leadership styles, and the flexing of those styles to varying circumstances at work.  For example, what is the most effective style to use when placed in a certain situation?  This is one of the guiding principles behind the various models of leadership styles.

This last point is an important one.  Research has demonstrated that a leader's ability to adapt his or her leadership style to the situation at hand is important to the organization's success.  The best leaders are skilled at several styles, and instinctively understand when to use them at work.

Choosing a Leadership Style

In the following sections we are going to explain the six different leadership styles that were identified by Daniel Goleman in connection with his theory of emotional intelligence.  We've chosen Goleman's model of leadership style because it's both simple and all-encompassing.

In his writings, Goleman described a total of six different leadership styles.  Much of this information already appears in our article on situational leadership.  If you're interested in the effective application of different leadership styles, then you might want to look at that article too because it also speaks to the theory put forth by Ken Blanchard and Paul Hersey.

The examples of leadership styles appearing below contain a brief description of the leader's characteristics, as well as an example of when the styles are most effective.

Coaching Leaders

In the Coaching Leadership Style the leader focuses on helping others in their personal development, and in their job-related activities.  The coaching leader aids others to get up to speed by working closely with them to make sure they have the knowledge and tools to be successful.  This situational leadership style works best when the employee already understands their weaknesses, and is receptive to improvement suggestions or ideas.

Pacesetting Leaders

When employees are self-motivated and highly skilled, the Pacesetting Leadership Style is extremely effective.  The pacesetting leader sets very high performance standards for themselves and the group.  They exemplify the behaviors they are seeking from other members of the group.  This leadership style needs to be used sparingly since workers can often "burn out" due to the demanding pace of this style.

Democratic Leaders

The Democratic Leadership Style gives members of the work group a vote, or a say, in nearly every decision the team makes.  When used effectively, the democratic leader builds flexibility and responsibility.  They can help identify new ways to do things with fresh ideas.  Be careful with this style, however, because the level of involvement required by this approach, as well as the decision-making process, can be very time consuming.

Affiliative Leaders

The Affiliative Leadership Style is most effective in situations where morale is low or teambuilding is needed.  This leader is easily recognized by their theme of "employee first."  Employees can expect much praise from this style; unfortunately, poor performance may also go without correction.

Authoritative Leaders

If your business seems to be drifting aimlessly, then the Authoritative Leadership Style can be very effective in this type of situation.  The authoritative leader is an expert in dealing with the problems or challenges at hand, and can clearly identify goals that will lead to success.  This leader also allows employees to figure out the best way to achieve those goals.

Coercive Leaders

The Coercive Leadership Style should be used with caution because it's based on the concept of "command and control," which usually causes a decrease in motivation among those interacting with this type of manager.  The coercive leader is most effective in situations where the company or group requires a complete turnaround.  It is also effective during disasters, or when dealing with underperforming employees - usually as a last resort.

Mastering Multiple Leadership Styles


The formula for a leader's success is really quite simple:  The more leadership styles you are able to master, the better a leader you will become.  Certainly the ability to switch between styles, as situations warrant, will produce superior results and a better workplace climate.

In fact, Goleman's research revealed that leaders who were able to master four or more leadership styles - especially the democratic, authoritative, affiliative and coaching styles - often achieved superior performance from their followers as well as a healthy climate in which to work.

It's not easy to master multiple leadership styles.  In order to master a new way of leading others, we may need to unlearn old habits.  This is especially important for leaders that fall back on the pacesetting and coercive leadership styles, which have a negative effect on the work environment.

Learning a new leadership style therefore takes practice and perseverance.  The more often the new style or behavior is repeated, the stronger the link between the situation at hand and the desired reaction.

You can work with a coach, a mentor, or keep your own notes on how you reacted under certain conditions.  Learning a new skill requires time, patience, feedback, and even rewards to stay motivated.  If you're going to attempt to learn a different leadership style make sure your approach contains each of these elements.

Daniel Goleman, Richard Boyatzis and Annie McKee, in Primal Leadership, describe six styles of leading that have different effects on the emotions of the target followers.

These are styles, not types. Any leader can use any style, and a good mix that is customised to the situation is generally the most effective approach.

The Visionary Leader

The Visionary Leader moves people towards a shared vision, telling them where to go but not how to get there - thus motivating them to struggle forwards. They openly share information, hence giving knowledge power to others.

They can fail when trying to motivate more experienced experts or peers.

This style is best when a new direction is needed.

Overall, it has a very strong impact on the climate.

The Coaching Leader

The Coaching Leader connects wants to organizational goals, holding long conversations that reach beyond the workplace, helping people find strengths and weaknesses and tying these to career aspirations and actions. They are good at delegating challenging assignments, demonstrating faith that demands justification and which leads to high levels of loyalty.

Done badly, this style looks like micromanaging.

It is best used when individuals need to build long-term capabilities.

It has a highly positive impact on the climate.

The Affiliative Leader

The Affiliative Leader creates people connections and thus harmony within the organization. It is a very collaborative style which focuses on emotional needs over work needs.

When done badly, it avoids emotionally distressing situations such as negative feedback. Done well, it is often used alongside visionary leadership.

It is best used for healing rifts and getting through stressful situations.

It has a positive impact on climate.

The Democratic Leader

The Democratic Leader acts to value inputs and commitment via participation, listening to both the bad and the good news.

When done badly, it looks like lots of listening but very little effective action.

It is best used to gain buy-in or when simple inputs are needed (when you are uncertain).

It has a positive impact on climate.

The Pace-setting Leader

The Pace-setting Leader builds challenge and exciting goals for people, expecting excellence and often exemplifying it themselves. They identify poor performers and demand more of them. If necessary, they will roll up their sleeves and rescue the situation themselves.

They tend to be low on guidance, expecting people to know what to do. They get short term results but over the long term this style can lead to exhaustion and decline.


Done badly, it lacks Emotional Intelligence, especially self-management. A classic problem happens when the 'star techie' gets promoted.

It is best used for getting results from a motivated and competent team.

It often has a very negative effect on climate (because it is often poorly done).

The Commanding Leader

The Commanding Leader soothes fears and gives clear directions by his or her powerful stance, commanding and expecting full compliance (agreement is not needed). They need emotional self-control for success and can seem cold and distant.

This approach is best in times of crisis when you need unquestioned rapid action and with problem employees who do not respond to other methods.

JUST IN TIME

Just in time is an inventory strategy companies employ to increase efficiency and decrease waste by receiving goods only as they are needed in the production process, thereby reducing inventory costs.

This method requires that producers are able to accurately forecast demand.

A good example would be a car manufacturer that operates with very low inventory levels, relying on its supply chain to deliver the parts it needs to build cars. The parts needed to manufacture the cars arrive neither before nor after they are needed; they arrive just as they are needed.

This inventory supply system represents a shift away from the older "just in case" strategy where producers carried large inventories in case higher demand had to be met.

Just-in-time (JIT) is an inventory strategy that strives to improve a business's return on investment by reducing in-process inventory and associated carrying costs. The just-in-time production method is also called the Toyota Production System. To meet JIT objectives, the process relies on signals or kanban (看板) between different points in the process, which tell production when to make the next part. Kanban are usually 'tickets' but can be simple visual signals, such as the presence or absence of a part on a shelf. Implemented correctly, JIT can improve a manufacturing organization's return on investment, quality, and efficiency.
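As a rough illustration of the kanban signalling described above, the sketch below models a single workstation whose parts bin raises a replenishment signal when it falls to a threshold. This is a minimal, simplified sketch; the class name, threshold and quantities are invented for the example and are not taken from the Toyota Production System literature.

```python
# Minimal kanban-style pull sketch (illustrative only; names and figures are
# invented). A workstation consumes parts from a bin; when the bin drops to
# the kanban threshold, a replenishment signal is raised and one kanban
# quantity is delivered from upstream.

class Workstation:
    def __init__(self, name, bin_level, kanban_threshold, kanban_quantity):
        self.name = name
        self.bin_level = bin_level
        self.kanban_threshold = kanban_threshold
        self.kanban_quantity = kanban_quantity

    def consume(self, parts):
        """Use parts from the bin; return True if a kanban signal is raised."""
        self.bin_level -= parts
        return self.bin_level <= self.kanban_threshold

    def replenish(self):
        """Upstream responds to the kanban by delivering one kanban quantity."""
        self.bin_level += self.kanban_quantity


assembly = Workstation("assembly", bin_level=10, kanban_threshold=4, kanban_quantity=8)

for hour, demand in enumerate([3, 2, 4, 1, 3], start=1):
    signal = assembly.consume(demand)
    print(f"hour {hour}: bin={assembly.bin_level}, kanban signal={signal}")
    if signal:
        assembly.replenish()   # parts arrive just as they are needed
```

The point of the sketch is that production upstream is triggered only by actual consumption downstream, not by a forecast or a fixed schedule.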

Quick notification of stock depletion, so that personnel can order new stock, is critical to the inventory reduction at the center of JIT. This saves warehouse space and costs. However, the complete mechanism for making this work is often misunderstood.


For instance, its effective application cannot be independent of other key components of a lean manufacturing system or it can "...end up with the opposite of the desired result."[1] In recent years manufacturers have continued to try to hone forecasting methods (such as applying a trailing 13-week average as a better predictor for JIT planning),[2] however some research demonstrates that basing JIT on the presumption of stability is inherently flawed.

The philosophy of JIT is simple: inventory is waste. JIT inventory systems expose the hidden costs of keeping inventory, and are therefore not a simple solution for a company to adopt. The company must follow an array of new methods to manage the consequences of the change. The ideas in this way of working come from many different disciplines including statistics, industrial engineering, production management, and behavioral science. The JIT inventory philosophy defines how inventory is viewed and how it relates to management.

Inventory is seen as incurring costs, or waste, instead of adding and storing value, contrary to traditional accounting. This does not mean to say JIT is implemented without an awareness that removing inventory exposes pre-existing manufacturing issues. This way of working encourages businesses to eliminate inventory that does not compensate for manufacturing process issues, and to constantly improve those processes to require less inventory. Secondly, allowing any stock habituates management to stock keeping. Management may be tempted to keep stock to hide production problems. These problems include backups at work centers, machine reliability, process variability, lack of flexibility of employees and equipment, and inadequate capacity.

In short, the just-in-time inventory system focuses on having "the right material, at the right time, at the right place, and in the exact amount" (Ryan Grabosky), without the safety net of inventory. The JIT system has broad implications for implementers.

During the birth of JIT, multiple daily deliveries were often made by bicycle. Increased scale has required a move to vans and lorries (trucks). Cusumano (1994) highlighted the potential and actual problems this causes with regard to gridlock and burning of fossil fuels. This violates three JIT waste guidelines:

1. Time—wasted in traffic jams

2. Inventory—specifically pipeline (in transport) inventory

3. Scrap—fuel burned while not physically moving

Benefits

Main benefits of JIT include:

Page 211: MODELS 5

Reduced setup time. Cutting setup time allows the company to reduce or eliminate inventory for "changeover" time. The tool used here is SMED (single-minute exchange of dies).

The flow of goods from warehouse to shelves improves. Small or individual piece lot sizes reduce lot delay inventories, which simplifies inventory flow and its management.

Employees with multiple skills are used more efficiently. Having employees trained to work on different parts of the process allows companies to move workers where they are needed.

Production scheduling and work hour consistency synchronized with demand. If there is no demand for a product at the time, it is not made. This saves the company money, either by not having to pay workers overtime or by having them focus on other work or participate in training.

Increased emphasis on supplier relationships. A company without inventory does not want a supply system problem that creates a part shortage. This makes supplier relationships extremely important.

Supplies come in at regular intervals throughout the production day. Supply is synchronized with production demand and the optimal amount of inventory is on hand at any time. When parts move directly from the truck to the point of assembly, the need for storage facilities is reduced.

JIT (Just-in-Time) manufacturing

'Just-in-time' is a management philosophy and not a technique.

It originally referred to the production of goods to meet customer demand exactly, in time, quality and quantity, whether the 'customer' is the final purchaser of the product or another process further along the production line.

It has now come to mean producing with minimum waste. "Waste" is taken in its most general sense and includes time and resources as well as materials. Elements of JIT include:

Continuous improvement.

o Attacking fundamental problems - anything that does not add value to the product.

o Devising systems to identify problems.

o Striving for simplicity - simpler systems may be easier to understand, easier to manage and less likely to go wrong.


o A product-oriented layout - less time spent moving materials and parts.

o Quality control at source - each worker is responsible for the quality of their own output.

o Poka-yoke - 'foolproof' tools, methods, jigs, etc. that prevent mistakes.

o Preventative maintenance, Total productive maintenance - ensuring machinery and equipment functions perfectly when it is required, and continually improving it.

Eliminating waste. There are seven types of waste:

o waste from overproduction.

o waste of waiting time.

o transportation waste.

o processing waste.

o inventory waste.

o waste of motion.

o waste from product defects.

Good housekeeping - workplace cleanliness and organisation.

Set-up time reduction - increases flexibility and allows smaller batches. Ideal batch size is 1 item.

Multi-process handling - a multi-skilled workforce has greater productivity, flexibility and job satisfaction.

Levelled / mixed production - to smooth the flow of products through the factory.

Kanbans - simple tools to 'pull' products and components through the process.

Jidoka (autonomation) - providing machines with the autonomous capability to use judgement, so workers can do more useful things than stand watching them work.

Andon (trouble lights) - to signal problems to initiate corrective action.

The just-in-time (JIT) method creates the movement of material to a specific location at the required time, i.e. just before the material is needed in the manufacturing process. The technique works when each operation is closely synchronized with the subsequent ones to make that operation possible. JIT is a method of inventory control that brings material into the production process, warehouse or to the customer just in time to be used, which reduces the need to store excessive levels of material in the warehouse.

Just in time is a ‘pull’ system of production, so actual orders provide a signal for when a product should be manufactured. Demand-pull enables a firm to produce only what is required, in the correct quantity and at the correct time.

This means that stock levels of raw materials, components, work in progress and finished goods can be kept to a minimum. This requires carefully planned scheduling and flow of resources through the production process. Modern manufacturing firms use sophisticated production scheduling software to plan production for each period of time, which includes ordering the correct stock. Information is exchanged with suppliers and customers through EDI (Electronic Data Interchange) to help ensure that every detail is correct.

Supplies are delivered right to the production line only when they are needed. For example, a car manufacturing plant might receive exactly the right number and type of tyres for one day’s production, and the supplier would be expected to deliver them to the correct loading bay on the production line within a very narrow time slot.
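The arithmetic behind such a delivery can be sketched as a simple demand-pull calculation: derive the day's component requirements directly from confirmed orders and a bill of materials, so only what is actually required is brought to the line. The order figures and bill of materials below are invented for illustration.

```python
# Simplified demand-pull sketch (figures and bill of materials invented for
# illustration): derive tomorrow's component deliveries directly from actual
# orders, so only what is required is brought to the line.

daily_orders = {"hatchback": 120, "estate": 80}      # confirmed customer orders

bill_of_materials = {                                 # components per vehicle
    "hatchback": {"tyre": 5, "seat": 5},              # 4 fitted + 1 spare
    "estate":    {"tyre": 5, "seat": 7},
}

required = {}
for model, quantity in daily_orders.items():
    for component, per_unit in bill_of_materials[model].items():
        required[component] = required.get(component, 0) + per_unit * quantity

print(required)   # {'tyre': 1000, 'seat': 1160} -- delivered just in time
```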

Advantages of JIT

Lower stock holding means a reduction in storage space which saves rent and insurance costs

As stock is only obtained when it is needed, less working capital is tied up in stock

There is less likelihood of stock perishing, becoming obsolete or out of date

Avoids the build-up of unsold finished product that can occur with sudden changes in demand

Less time is spent on checking and re-working the product of others as the emphasis is on getting the work right first time

Disadvantages of JIT

There is little room for mistakes as minimal stock is kept for re-working faulty product

Production is very reliant on suppliers and if stock is not delivered on time, the whole production schedule can be delayed


There is no spare finished product available to meet unexpected orders, because all product is made to meet actual orders – however, JIT is a very responsive method of production

FOUR DIMENSIONS OF RELATIONAL WORK

What Are the Four Dimensions?

The four dimensions were identified by Timothy Butler, Director of Career Development Programs at Harvard Business School, and James Waldroop, a founding principal of the consulting firm Peregrine Partners.

Butler and Waldroop analyzed the psychological tests of over 7,000 business professionals and published their findings in their 2004 article, "Understanding 'People' People." According to their findings, the Four Dimensions of Relational Work are:

1. Influence.

2. Interpersonal facilitation.

3. Relational creativity.

4. Team leadership.

Many of us are strong in at least one of these areas – but we may be strong in several areas, or in none of them.

It's not relevant which area is strongest. What is relevant is that if we or our team members have a strength in one area, we should try to match the work to that strength.

Butler and Waldroop argue that a good match will make both the manager and the team happier, because everyone will be using their natural strengths. This should also improve the team's performance and productivity.

Let's examine each of the Four Dimensions in greater detail:

1. Influence

People who are strong in this dimension enjoy being able to influence others. They're great at negotiating and persuading, and they love having knowledge and ideas that they can share. Influencers are also good at creating networks: they excel at making strategic friendships and connections.

Influencers don't always have to be in a sales role to use this strength effectively. Perhaps a team member always seems able to "lift" tired colleagues. Or maybe a manager can be relied on to persuade clients to give his team a little more time on a deadline. Both are effective influencers.

2. Interpersonal Facilitation

Team members who are strong in this area are often "behind the scenes" workers. They're good at sensing people's emotions and motivations. They're also skilled at helping others cope with emotional issues and conflict.

For instance, if you suspect that someone you're dealing with has a "hidden agenda" during group meetings, then you may need to ask for help from someone on your team who is strong in interpersonal facilitation. A person with strong intuition will likely have some insight into what is motivating this other team member.

3. Relational Creativity

People who are strong in this dimension are masters at using pictures and words to create emotion, build relationships, or motivate others to act.

Remember that relational creativity is different from influencing. Influencing involves person-to-person interaction, while relational creativity occurs from a distance. An example is a corporate copywriter who writes such a moving speech that the CEO is able to inspire the entire company to meet an aggressive deadline.

4. Team Leadership

Team members who are strong in team leadership succeed through their interactions with others.

This area also might sound like the influencing dimension, but there's an important difference. Influencers thrive on the end result and the role they play in closing a deal. But team leaders thrive on working through other people to accomplish goals, and they're more interested in the people and processes necessary to reach the goal.

Tip: You can also apply the Four Dimensions of Relational Work to yourself when thinking about your own career development. For example, if you're strong in interpersonal facilitation, you may decide to pursue a career that uses that strength.

Assessing the Four Dimensions


It's generally easy to evaluate technical skills when you're recruiting or reviewing a team member's work history. However, identifying someone's interpersonal skills and strengths takes more effort.

Use the following tips to help you to assess your current team members, or to ensure that you're hiring the right person for a position.

Listen carefully - For example, when you ask a job candidate to explain the best moment at her last job, listen closely. If she talks about when she influenced a key decision, she might be strong in the influence dimension. Remember, influencers love to impact and shape decisions, so also try to find out if she's ever served on a committee or executive board.

Structure your conversation around a specific skill - For instance, if you need to find a new team member who is strong in interpersonal facilitation, then structure your interview or performance appraisal around that skill. Ask the candidate to describe how he would resolve a conflict between two other colleagues. You could even try role playing.

Ask when the person experiences "flow" - Finding someone skilled at relational creativity can be difficult. This is because someone may be strong in this area, but has never had a job, project, or task that used this strength. Ask your team member or candidate to describe a time when she experienced flow. If her task at that time was creative, she might be strong in relational creativity.

Notice how the person makes you feel - It's often easy to identify a person skilled in team leadership, even if he has never held a management position. Pay attention to how you feel when talking to this person, and how that person interacts with other members of his team. If he gets people excited and motivated about their work, or about the opportunities that the organization faces, then he might excel at team leadership.

Rewarding Your Team

As well as using the four dimensions to build your team, and assign tasks and projects to the most appropriate people, you can also use the model to reward your team effectively. Relational work is often ignored or undervalued. But these interpersonal traits are what make the organization function effectively.

It's important to compensate your team members for these skills, because the more they're rewarded, the more they'll use those skills.

Start by educating your team members about their own dimension. You could do this in informal, one-on-one conversations or during their performance appraisals. Try to connect some type of compensation to their skill, and make sure they understand that they'll be rewarded for using their strengths.


You can also reward team members by giving them work that uses their strength. This may require you to create a new role, or mean simply reshaping the role that a person has now. It doesn't have to be a huge change; adding tasks or projects that use people's strengths can influence dramatically how satisfied they are with their jobs – and with the organization.

Tip 1: To help ensure balance, try to structure your teams so that all four dimensions are represented by someone. (Of course, this may not be a suitable approach for all teams - so use your best judgment.)

Tip 2: When you look for people to fill each dimension, don't make decisions based on job titles, because team members may not currently be in roles or positions that use their strengths.

Key Points

The Four Dimensions of Relational Work can help you understand team members' interpersonal strengths, as well as your own strengths. The four dimensions are influence, interpersonal facilitation, relational creativity, and team leadership.

Matching people's strongest dimensions with the work they do benefits everyone. When you and your team are using your strengths, you're all more satisfied and excited about what you're doing – and your organization benefits from improved productivity and engagement.
