Development of a knowledge-based self-assessment system
for measuring organisational performance
Kwai-Sang Chin a,*, Kit-Fai Pun b, Henry Lau c
a Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong, Tat Chee Avenue, Hong Kong, People’s Republic of China
b Department of Mechanical Engineering, The University of The West Indies, St Augustine, Trinidad and Tobago
c Department of Manufacturing Engineering, The Hong Kong Polytechnic University, Hung Hom, Hong Kong, People’s Republic of China
Abstract
Effective performance measurement is an important task in the discipline of engineering management. With the support of City University
of Hong Kong, a research project was initiated to develop a knowledge-based expert self-assessment (KES) training toolkit on measuring and
assessing organisational performance based on the evaluation criteria of a renowned Business Excellence Model—the Malcolm Baldrige
National Quality Award (MBNQA). This paper explains the development of the toolkit and elaborates its system framework, requirements, design and
validation. The project results show that the toolkit can facilitate the teaching of students on engineering management courses
by providing a stimulating learning environment and practical experiences in measuring and assessing enterprise performance. Incorporating
the KES model and toolkit into the engineering management curriculum can provide students and industrial users with hands-on experience
and insights into organisational PM.
© 2003 Elsevier Science Ltd. All rights reserved.
Keywords: Performance measurement; Self-assessment; Knowledge-based system
1. Introduction
There are various dimensions of enterprise performance
measurement (PM), such as financial versus non-financial
and qualitative versus quantitative. Traditionally, many
organisations rely largely on financial measures and process
outcomes using self-referenced objective data from internal
sources. Meanwhile, there are also many organisations
adopting the total quality management (TQM) philosophies
to foster continuous performance improvements. In this
context, it is essential for organisations to monitor their
performance on a regular basis. Since the 1990s, the
introduction of various international standards and qual-
ity/business excellence awards has helped thousands of
organisations measure and assess their performance through
exercises of management reviews, internal and external
audits. Self-assessment against the compliance require-
ments of standards (e.g. ISO 9001:1994, ISO 14001:1996,
and OHSAS 18001:1999) or criteria of renowned awards
(e.g. the Malcolm Baldrige National Quality Award
(MBNQA), the European Quality Award (EQA), and
the Deming Prize (DP)) provides this type of assessment
framework. The topic has received considerable attention
from academic researchers and is well-defined in the
literature (Bemowski & Stratton, 1995; Conti, 1997;
Coulambidou & Dale, 1995; Hakes, 1998; Lascelles &
Peacock, 1996). However, little use has been made of these
frameworks to develop decision models and analysis tools for supporting
the organisational performance assessment process. It is
also found that a number of organisations do not have
sufficient expertise and knowledge to carry out their own
self-assessment process. Knowledge-based and action-
learning approaches thus provide a feasible solution to
apply these frameworks for self-assessment.
The idea of developing a knowledge-based or expert self-
assessment training toolkit with a prototype system is
useful, because it means that there is a ready-made
methodology for applying the performance excellence
model (e.g. MBNQA) to a business. Supported by the
City University of Hong Kong’s quality enhancement fund, the purpose of this
project was to develop such a knowledge-based expert self-
assessment (KES) training toolkit and prototype system that
facilitate teaching and student learning of engineering
management courses in enterprise PM. This paper reviews
the concepts and development of PM that help safeguard
0957-4174/03/$ - see front matter © 2003 Elsevier Science Ltd. All rights reserved.
doi:10.1016/S0957-4174(02)00192-6
Expert Systems with Applications 24 (2003) 443–455
www.elsevier.com/locate/eswa
* Corresponding author. Fax: +852-2788-8423.
E-mail address: [email protected] (K.-S. Chin).
continuous performance improvement in organisations. It
discusses the concepts and applications of knowledge-based
expert systems and web-based training for measuring and
assessing enterprise performance. It describes the frame-
work of the model and explains the system requirements and
design of the KES training toolkit, and draws the
conclusions based on the evaluation results of the KES
system. The project affirms that the proposed KES training
toolkit and system can facilitate organisational learning and
performance improvement processes.
2. Notion of performance measures
According to Neely, Gregory and Platts (1995), PM is a
process of quantifying the efficiency and effectiveness of
action that leads to performance. In the past, the focus of
attention has been on measuring financial performance, such
as sales turnover, profit, debt and return on investment.
These financial measures do not match entirely with the
competencies and skills required by companies for today’s
changing business environment (Geanuracos & Meiklejohn,
1993; Medori, Steeple, Pye, & Wood, 1995). It is not
enough only to know the amount of gross profit or loss; it is also
necessary to explain the driving forces behind success or
failure. Rather than analysing these reasons from a
historical perspective, it is really important to understand
organisational excellence, which potentially leads to the
success of a business in the future (Kanji, 2001). Accounting
figures alone do not emphasise the elements that will lead to
good or poor future financial results. Many other indicators
of business performance (such as quality, customer
satisfaction, innovation and market share) can
reflect an organisation’s economic condition and growth
prospects better than its reported earnings do (Eccles &
Pyburn, 1992). Therefore, performance measures must go
beyond the presentation of financial figures and serve as the
driver for fostering performance not only in financial terms
but also in non-financial aspects like quality, customer
satisfaction, innovation and market share.
3. Concepts of total quality management
and business excellence
The concepts of TQM and business excellence (BE) have
come to the fore in recent times, being adopted by
organisations as the means of understanding and satisfying
the needs and expectations of their customers and taking
costs out of their operations (Dale, 1999). TQM is an
integrated management philosophy and set of practices that
emphasise, among other things, continuous improvement, meeting
customer requirements, reducing rework, long-range thinking,
increased employee involvement and teamwork, process
redesign, competitive benchmarking, team-based problem-
solving, constant measurement of results, and closer
relationships with suppliers (Powell, 1995; Whitney &
Pavett, 1998). It refers to a basic vision of what an
organisation should look like and how it should be managed.
This includes a stakeholder perspective, customer and
people orientation and corporate responsibility (Dale,
1999; Van Schalkwyk, 1998). TQM creates an organis-
ational culture that fosters continuous improvements in
everything by everyone at all times, and requires changes in
organisational processes, strategic priorities, individual
belief, attitudes and behaviours (Dale, 1999; Shin,
Kalinowski, & El-Enein, 1998). The shift from traditional
management to TQM is revolutionary and the implemen-
tation of TQM involves a fundamental change in the way in
which business is conducted (Bounds, Yorks, Adams, &
Ranney, 1994). Those changes include making customers a
top priority, a relentless pursuit of continuous improvement
of business processes, and managing the systems of the
organisation through teamwork.
Meanwhile, the pursuit of corporate excellence as a way
of managing businesses for competitive advantage has been
increasingly recognised and has led, among others, to the
formation of the European Foundation for Quality Manage-
ment (EFQM) in 1988 (Hakes, 1997). The EFQM sub-
sequently developed its BE model and used it as a framework
for the award of the EQA and the associated national quality
awards (Adebanjo, 2001; EFQM, 2002). The EFQM model
was largely based on the concept of TQM as both a holistic
philosophy and an improvement on other TQM-based
models, such as the MBNQA. Recent developments of
these national and regional quality awards serve as models of
TQM and offer continually changing blueprints and/or
tools for self-assessment and benchmarking (Pun, Chin, &
Lau, 1999). If used properly, these tools will help
organisations evaluate their current level of performance,
identify and prioritise areas for improvement, integrate
improvement actions in their business plan and identify best
practice (Adebanjo, 2001). The opportunity to carry out
future assessments against these models also means that
progress towards excellence can be measured, which promotes
continuous improvement. The TQM approach to perform-
ance measurement is consistent with BE initiatives under
way in many companies: cross-functional integration,
continuous improvement, customer–supplier partnerships
and team rather than individual accountability. In addition,
corporate efforts to decentralise decision-making through
empowerment, improved efficiency and competitiveness,
increased cooperation and execution of strategy are consist-
ent with the balanced scorecard framework of performance
measures (Kanji & Moura e Sa, 2002; Walker, 1996).
4. The TQM–BE–PM integration and self-assessment
Recent research suggests that both TQM and PM can
produce economic value to many firms (Dale, 1999;
Kermally, 1997; Neely, 1998). One of the best indicators
is the achievement of competitive advantage obtained from
integrating TQM–BE concepts into performance measures.
The integration has to comprise a thorough definition of
measures and indicators to monitor the TQM implemen-
tation process and corporate performance from a stakehol-
der’s perspective. Many researchers and practitioners
believe that a few well-defined performance dimensions and
critical success factors (CSF) can help develop specific
measures to monitor progress and performance towards
excellence (Kanji, 2001; Neely et al., 1995). In many
circumstances, these measurement systems are embedded in
the CSF. Despite being to some extent organisation- or
industry-specific, these factors can be grouped into some
principles that have been systematically proven to be
universally valid. Kanji (2001) argues that the criteria for
performance measures are rooted in these factors of the
organisation and ultimately correspond to the determinants
of BE. Various balanced scorecard techniques (Kaplan &
Norton, 1996) and various excellence awards (EFQM, 2002;
NIST, 2002) are examples that incorporate the principles
identified using a CSF approach and have been empirically
tested and validated in different contexts.
Self-assessment is a comprehensive, systematic and
regular review of an organisation’s activities that ultimately
result in planned improvement actions (EFQM, 2002;
Henderson, 1997). The assessment process helps organis-
ations identify their strengths and shortcomings and best
practices where they exist (Neely, 1998). According to
Hillman (1994), the three main elements in self-assessment
are model, measurement and management. The objective of
self-assessment is to identify and act on the areas of the
improvement process that require additional effort, while
recognising and maintaining that which is already going
well. Karapetrovic and Willborn (2001) add that self-
assessments are aimed at identifying strengths, weaknesses
and opportunities for improvement. With the common
direction and an increased consistency of purpose, self-
assessments can provide organisations with opportunities to
build greater unity in pursuit of initiatives that effect
improvement (Hill, 1996; Shergold & Reed, 1996). They
generate results and valuable inputs for the annual
corporate planning cycle, and also encourage the integration
of a range of quality initiatives and performance improve-
ments that may have been separately pursued across the
organisation (Beasley, 1994; Pun et al., 1999; Van der Wiele
& Brown, 1999). In other words, self-assessment is a means
that helps organisations analyse their status quo in integrating
TQM–BE with performance measurement in achieving the
strategic objectives. Adebanjo (2001) also argues that one
key benefit of the use of the BE models is the opportunity for
self-assessment and benchmarking.
Henderson (1997) argues that organisations must estab-
lish their performance measurement systems with self-
assessment orientation. Otherwise, this may result in
fragmentation of efforts, slow response and weak pro-
ductivity growth in the organisations. Business environment
and operational situations vary in different organisations.
The identification of various CSF or indicators provides a
feasible means for linking TQM–BE concepts and per-
formance measures strategically (Kanji, 2001). The inte-
gration will bring changes to the current operations and
practices, and will only succeed if they are implemented as a
long-term organisational paradigm shift, but not a quick fix
(Bounds et al., 1994).
5. Model for knowledge-based expert self-assessment
5.1. A systems framework
The proposed KES model has seven categories of
evaluation criteria with respect to that of MBNQA (NIST,
2002). These categories are leadership, strategic planning,
customer and market focus, information and analysis,
human resource focus, process management, and business
results. The criteria provide a systematic framework for
assessing and measuring performance on a composite of key
indicators of organisation performance (Fig. 1). This
includes evaluating performance, identifying areas for
improvement, and developing recommendations and plans
for further action. A total of 1000 score points is allocated to
18 items of the seven categories, each focusing on a major
requirement. Under each item, there are several areas to be
addressed. The organisation can assess its performance on
these areas with relevant information. The framework
comprises several core approach, deployment and results
elements that govern the operations of the KES model for
self-assessment and initiating continuous improvement. A
summary of evaluation criteria, items and sub-items of the
KES model is shown in Table 1.
5.2. Modelling of self-assessment instruments
A user can furnish its performance information using the
KES model with respect to three dimensions, namely
approach, deployment, and results dimensions as advocated
Fig. 1. A systems perspective of the MBNQA criteria framework.
by the MBNQA (NIST, 2002). Firstly, the approach
dimension covers what an organisation plans to do and
the reasons for it. This refers to how the organisation
addresses the evaluation requirements, or in other words, the
method(s) being used. Secondly, the deployment dimension
covers what the organisation does to deploy the approach.
This refers to the extent to which the approach is applied to
individual evaluation criteria and sub-criteria. Lastly, the
results dimension covers what the organisation achieves in
performance and what the organisation does to assess and
review both the approach and the deployment of the
approach. This stresses the analysis of the results achieved
and monitoring of the ongoing learning activities. Table 2
shows a summary of the performance results, anticipated
outcomes and focal areas of self-assessments. The items
developed for each criterion in the self-assessment tool are
modelled, and guidelines are given as to the identification of
key factors required to fulfil each criterion’s requirements.
For each sub-section of individual criteria, items are
developed to assess the presence of approaches and the
extent of deployment; and for the sub-sections of results
orientation, the extent of positive trend in the results is
assessed. The structure for developing items in the self-
assessment tool is depicted in Fig. 2.
The organisational performance assessment is designed
as a set of questionnaires. Users are required to answer a series of
questions across the seven categories. The first six categories
(i.e. leadership, strategic planning, customer and market
focus, information and analysis, human resources, and
process management) are assigned as approach-deploy-
ment, while the category 7 (i.e. business results) is
assigned as results. For the approach-deployment dimen-
sions of questions, users should base on the scoring
guidelines of approach-deployment. For the results
dimension, users should base on the scoring guidelines
of results. A set of self-assessment questionnaire is
developed. To illustrate how items are developed and
included in the questionnaire, examples from category 1
of the evaluation criteria are shown in Fig. 3. These
particular items refer to the ‘policies, objectives, and
strategies’ under the category of leadership criterion. The
self-assessments of these items are based on the approach
and deployment dimensions.
Table 1
Categories of evaluation criteria and score points

Categories of evaluation criteria   Items   Sub-items   Questions   Score points
Leadership                            2        7           44          120
Strategic planning                    2        5           30           85
Customer and market focus             2        6           31           85
Information and analysis              2        5           27           90
Human resource focus                  3        9           54           85
Process management                    3        8           38           85
Business results                      4        8           95          450
Total                                18       48          319         1000
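The score allocation in Table 1 can be held in a simple data structure. The following Python sketch is illustrative only (the dictionary layout and helper name are assumptions, not part of the KES implementation); the figures themselves are taken from the table, and the helper confirms that the allocated points sum to 1000.

```python
# Hypothetical sketch: the KES evaluation categories of Table 1.
# Names and numbers come from the table; the structure is assumed.
KES_CATEGORIES = {
    "Leadership":                {"items": 2, "sub_items": 7, "questions": 44, "points": 120},
    "Strategic planning":        {"items": 2, "sub_items": 5, "questions": 30, "points": 85},
    "Customer and market focus": {"items": 2, "sub_items": 6, "questions": 31, "points": 85},
    "Information and analysis":  {"items": 2, "sub_items": 5, "questions": 27, "points": 90},
    "Human resource focus":      {"items": 3, "sub_items": 9, "questions": 54, "points": 85},
    "Process management":        {"items": 3, "sub_items": 8, "questions": 38, "points": 85},
    "Business results":          {"items": 4, "sub_items": 8, "questions": 95, "points": 450},
}

def total_points():
    """Sum of the allocated score points across all seven categories."""
    return sum(c["points"] for c in KES_CATEGORIES.values())
```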
Table 2
Self-assessment dimensions, anticipated outcomes and focal areas

Approach
Anticipated outcomes: A sound approach includes having a clear rationale, defined and developed processes and a clear focus on stakeholders, supporting policy and strategy, and linked to other approaches where appropriate.
Focal areas: appropriateness and effectiveness of use of the methods; alignment with the organisation’s needs; degree to which the approach is repeatable, integrated, and consistently applied; reliable information and data; evidence of innovation.

Deployment
Anticipated outcomes: The strategies, policies and actions should be deployed in relevant areas, in a systematic manner.
Focal areas: deployment addressing the evaluation requirements; adoption by all appropriate work units.

Results
Anticipated outcomes: The results should show positive trends and/or sustained performance. Performance measurement targets should be met or exceeded, and performance will compare well with others and will have been caused by the approaches. In addition, the scope of the results should address the relevant areas, and the outputs should be used to identify priorities, plan and implement improvement.
Focal areas: the company’s current performance; performance relative to appropriate comparisons and/or benchmarks; rate, breadth, and importance of the performance improvements; linkage of results measures to process and action plans; all relevant factors of the other three dimensions; subject to regular measurement.
5.3. Self-assessment scoring methods
The users (i.e. assessors and/or auditors) are required to
examine whether the organisation has the necessary
approaches and the extent to which those approaches are deployed.
They assess the ability of the approaches to fulfil the
requirements rather than judging the approaches against any
specific methods. A scoring method is proposed to allocate a
percentage score to each sub-criterion. A set of scoring
guides is used to facilitate the users in making a decision to
respond to individual criteria and items. The calculations of
assigned scores are explained below corresponding to the
results of individual items, categories, approach-deploy-
ment, results, and overall score. These are:
1. For individual items,

   ItemScore = Σ_i (ans_i / 10) × (weight / n)

   where n is the total number of questions in the item, weight is the points allocated to the item, and ans_i is the answer to question i.

2. For individual categories,

   CategScore = Σ_i ItemScore_i

   where ItemScore_i is the score of each item in the category.

3. For approach-deployment, the score is the sum of the scores of categories 1–6.

4. For results, the score is the sum of the scores of all items in category 7.

5. The overall score is the sum of the approach-deployment and results scores.
Data collected must be analysed to identify improvement
opportunities and control the outcomes and the way they are
being perceived. The percentage scores assigned to
individual criteria and sub-criteria are then combined to
give an overall score. The scores are computed and then
recorded in the scores summary sheet. The maximum score
for each criterion ranges from 85 to 450 points out of
1000 points. These are taken together to calculate the final
score for the organisation.
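The scoring calculations above can be sketched in Python. The function names are illustrative assumptions, but the arithmetic follows the stated formulas for item, category, approach-deployment, results, and overall scores.

```python
def item_score(answers, weight):
    """ItemScore = sum(ans_i / 10 * (weight / n)), where n is the
    number of questions in the item, weight the points allocated
    to the item, and ans_i the answer (0-10) to each question."""
    n = len(answers)
    return sum(a / 10 * (weight / n) for a in answers)

def category_score(item_scores):
    """CategScore: the sum of the scores of the category's items."""
    return sum(item_scores)

def overall_score(category_scores):
    """Approach-deployment sums categories 1-6, results is category 7;
    the overall score (maximum 1000) is their sum."""
    approach_deployment = sum(category_scores[:6])
    results = category_scores[6]
    return approach_deployment + results
```

For example, an item worth 120 points whose four questions all score 10 attains the full 120 points.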
5.4. Interpretation of self-assessment scoring results
The KES model forms a single framework that can be
integrated in the performance management system of
organisations. The maximum possible score of the overall
performance index is 1000. An organisation attaining an
overall performance index of 800 or above can be considered
an excellent performer, and one of 600 or above a good
performer. The scoring analysis
can help the user utilise its resources and keep up
improvement progress that it may experience. It also
indicates how individual evaluation criteria are interrelated
in responding effectively to the mission, goals and
requirements of the organisation. Through regular self-
assessments, user organisations can identify where they
should concentrate their improvement efforts in a way that
maximises the performance outcomes that determine their sustained
improvement and growth. They can also take into account
their starting point and the constraints that they may have in
getting improvements above certain levels in particular
performance areas.
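The interpretation of the overall index can be expressed as a small classifier. This Python sketch is illustrative: the 800 and 600 thresholds come from the text, while the function name and the label for scores below 600 are hypothetical placeholders not stated in the paper.

```python
def performer_band(index):
    """Classify the overall performance index (maximum 1000).
    Thresholds of 800 (excellent) and 600 (good) follow the text;
    the label below 600 is an assumed placeholder."""
    if index >= 800:
        return "excellent"
    if index >= 600:
        return "good"
    return "below good"
```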
6. System requirements of KES
The system requirements for the KES training toolkit and
prototype system development addressed three main areas,
including (1) the self-assessment requirements, (2) the
training requirements, and (3) the web design requirements.
The self-assessment requirements describe what elements
the system needs to have in order to serve as a self-
assessment tool that is based on MBNQA (NIST, 2002). The
training requirements describe the elements needed to achieve
the training purpose of familiarising the user with
the concepts of organisational self-assessment through the
use of self-assessment system (Boyle, 1997; Gibbons &
Fairweather, 1998). The web design requirements describe
elements of user interface, system flow, and other design
areas (Heimpel, 2000). In order to help develop a user-
friendly system, several important elements are identified in
Table 3 with respect to these three areas.
Fig. 2. The structure for developing a self-assessment item.
Fig. 3. An illustrated example of questionnaire development.
7. Building of knowledge base
7.1. Structuring acquired knowledge with Microsoft Access
Knowledge acquisition gathers all the
information required for the KES training system. The knowledge
was acquired from literature and interviews with industrial
experts and academics. After the knowledge acquisition, the
acquired expert knowledge was stored in different
tables of a database built with the Microsoft Access
software. There are seven tables storing the questions,
strengths, weaknesses and recommendations for seven
categories of criteria, respectively. There are also multiple
sets of tables for storing (1) the answers and input
information, (2) the calculated scores for all categories
and items and (3) the assessment progress of individual
users.
7.2. Establishment of ‘if– then’ rules
The acquired knowledge can be represented in facts,
rules and frames and then processed by the inference engine.
The KES system adopted a rule-based approach to control
the inference engine and make conclusions. Several simple
‘if–then’ rules were established for giving comments on the
answers (Table 4), strengths (Table 5), weaknesses
(Table 6), and recommendations (Table 7). Recommendations
are generated at the sub-item level only, and each
sub-item has one set of recommendations.
Table 3
System requirements of KES
Requirement Important elements
Self-assessment Allow users to answer the self-assessment questions
with respect to the seven evaluation criteria
Provide scoring guidelines for users to follow
Allow users to submit relevant information
Show the self-assessment results of different categories
and items in terms of scores, overall scores and relative
percentages
Provide a summary report of self-assessment and a
feedback report showing the strengths and weaknesses
of the organisation
Offer recommendations and generate the priorities for
improvement based on the results
Allow users to be able to search for the previous
self-assessment record
Store the answers, supporting information, and scores
systematically in the database
Training Communicate with users about the self-assessment
Provide information on MBNQA framework and
criteria
Allow users to explore the self-assessment questions in
different categories
Provide sample practices relating to assessment items
for users to learn
Assist users to investigate the organisational strengths
and weaknesses
Provide glossary for the user to look up for common
terms
Provide a user manual for easy reference
Provide useful links for users to get further information
if needed
Web design Provide clear and easy-to-follow instructions to users
Allow users to find tasks easily when they want to
browse
Allow users to be able to choose the criteria to answer
Provide warning messages for any wrong step made by
users
Design web pages in clear and consistent navigation
Make good use of colours (e.g. text in a single-coloured
background)
Avoid the simultaneous display of highly saturated,
spectrally extreme colours
Table 4
Rules for giving comments on answers

If (condition)            Then (conclusion)
Answer value ≤ 1          Very poor
1 < answer value ≤ 3      Poor
3 < answer value ≤ 6      Acceptable
6 < answer value ≤ 8      Good
Answer value > 8          Excellent
Table 5
Rules for evaluating strength areas

If (condition)                 Then (conclusion)
Answer value > 7               This answer is assigned to the strengths
Number of strengths = 0        The organisation does not have any visible strength. It should try hard to develop its strength(s) on this criterion to achieve higher performance
0 < number of strengths ≤ 3    The organisation has only a few strengths or strength areas. It should put more effort into continuous improvement
3 < number of strengths ≤ 6    The organisational performance is about average. There are only a few strengths or strength areas. The organisation should maintain these strengths and deal with other poor performance areas
6 < number of strengths ≤ 10   The organisation has a certain number of strength areas. It should maintain these strength areas and work hard on the others
Number of strengths > 10       The organisational performance is good in many areas. It is essential for the organisation to put effort into maintaining all these strength areas
If more than one weakness exists in the same sub-item,
only one set of recommendations will be displayed for that
sub-item.
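The rules of Tables 4–7 can be sketched as plain conditionals. In this Python sketch the function names are illustrative assumptions, but the thresholds and conclusions follow the tables: answers above 7 count as strengths, answers below 3 as weaknesses, and each sub-item carries a single recommendation set that is shown only when the sub-item contains a weakness.

```python
def answer_comment(value):
    """Table 4: map a 0-10 answer value to a comment."""
    if value <= 1:
        return "Very poor"
    if value <= 3:
        return "Poor"
    if value <= 6:
        return "Acceptable"
    if value <= 8:
        return "Good"
    return "Excellent"

def classify_answers(answers):
    """Tables 5 and 6: answers above 7 are strengths and answers
    below 3 are weaknesses; return the two counts."""
    strengths = sum(1 for a in answers if a > 7)
    weaknesses = sum(1 for a in answers if a < 3)
    return strengths, weaknesses

def recommend(sub_item_has_weakness, recommendation):
    """Table 7: display the sub-item's single recommendation set
    only when that sub-item contains at least one weakness."""
    return recommendation if sub_item_has_weakness else None
```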
8. System structure of KES
The KES system is a knowledge-based/expert system
(KBS/ES). It is designed to execute over the Internet, and
the web server plays an important role in the system design.
Fig. 4 shows the KBS/ES structure of the KES training
system. The active server pages (ASP) scripts are
executed when the server receives a request.
The execution of the scripts can be treated as the inference
engine of the KBS/ES (Boyle, 1997; Evans & Lindsay,
1987). The working memory is the temporary store that holds
transient information during the script execution, whereas the
database stores the permanent information. During the execution
of the scripts, whenever information needs to be read from or
written to the database, the server establishes a connection to
the database so that data can be retrieved or updated. The
relationship between the working memory and the database is
thus analogous to that between a computer’s main memory and
its permanent storage.
The flowchart technique makes it easy
to understand the process and the flow of the system.
Fig. 5 shows the system process flowchart that describes the
overall flow of the assessment system. There are three loops.
The first loop allows users to choose among different sets of
questionnaires. The second loop runs from the first page to
the last page of the chosen set of questionnaires. The third loop is
used to check the progress status. Figs. 6 and 7 show the
flowcharts for seven questionnaires and searching records,
respectively.
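The three loops of Fig. 5 can be sketched as nested control flow. All names in this Python sketch are illustrative assumptions rather than the system's actual ASP code: questionnaires are modelled as named lists of pages, and `pick_next` stands in for the user's choice of questionnaire set.

```python
def run_assessment(questionnaires, pick_next):
    """Sketch of the three loops in Fig. 5. `questionnaires` maps a
    questionnaire name to its list of pages; `pick_next(done)` returns
    the name of the next set to answer, or None to stop early."""
    answered = {}
    done = set()
    while len(done) < len(questionnaires):        # loop 3: progress check
        name = pick_next(done)                    # loop 1: choose a set
        if name is None:
            break
        for page in questionnaires[name]:         # loop 2: first to last page
            answered.setdefault(name, []).append(page)
        done.add(name)
    return answered, done
```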
9. System maintenance
This web-based system has two major components for
ease of maintenance. The first component is the ASP scripting
that controls the overall functions of the system, and the
second one is the database that stores all relevant
information. In order to change or improve the features,
the ASP scripting must be modified accordingly. Modifying
the ASP scripting is a difficult task because incorrect
scripting might lead to malfunction of the system.
Regarding the self-assessment
questions in the questionnaires, the information in the
database can be directly modified. Moreover, some
questions require users to submit supporting
information. If the supporting information for a question is
no longer needed, the ‘checkInput’ value can simply be
changed from ‘1’ to ‘0’ in the questionnaire table of the
database. If there are other questions that are designed to request supporting
Table 6
Rules for evaluating weak areas

If (condition)                  Then (conclusion)
Answer value < 3                This answer is assigned to the weaknesses
Number of weaknesses = 0        The organisational performance is excellent and there is no visible weakness. The organisation should maintain that good performance
0 < number of weaknesses ≤ 3    The organisation has a few weaknesses or weak areas. It should address these weaknesses
3 < number of weaknesses ≤ 6    The organisational performance is about average. There are a number of weaknesses or weak areas. The organisation should seek ways to avoid and prevent these weaknesses
6 < number of weaknesses ≤ 10   The organisation has a certain number of weak areas. It should get rid of these weak areas
Number of weaknesses > 10       The organisational performance is very bad in that there are many weak areas. It is essential for the organisation to deal with these weaknesses immediately and to prevent the situation deteriorating
Table 7
Rules for giving recommendations

If (condition)                           Then (conclusion)
Number of weaknesses = 0                 The organisation should put effort into maintaining the strength areas and improving the weak areas
There exists a weakness in the sub-item  Display the recommendation of that sub-item
No weakness exists in the sub-item       No recommendation
Fig. 4. The KBS/ES structure of KES training system.
information, the ‘checkInput’ value should be changed
from ‘0’ to ‘1’. This provides a flexible way to modify
some self-assessment questions. There is the database
containing the relevant information for the system (such as
the self-assessment questions, the answers, supporting
information, and the progress status of users). The system
developer is able to use the information for further
investigation.
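The decision logic of Tables 6 and 7 can be sketched as a small rule base: an answer scored below 3 marks a weakness, the total weakness count selects an overall conclusion, and each weak sub-item yields its stored recommendation. The Python sketch below is illustrative only — the actual KES system is a web-based KBS/ES, and the function names, sub-item identifiers and sample answers here are hypothetical — but it instantiates the same thresholds:

```python
WEAKNESS_THRESHOLD = 3  # answers scored below this value count as weaknesses

def find_weaknesses(answers):
    """Return the sub-item ids whose answer value falls below the threshold."""
    return [item for item, value in answers.items() if value < WEAKNESS_THRESHOLD]

def overall_conclusion(n):
    """Map the number of weaknesses to an overall conclusion (Table 6)."""
    if n == 0:
        return "Performance is excellent; maintain the good performance."
    if n <= 3:
        return "Few weak areas; prevent these weaknesses."
    if n <= 6:
        return "About average; seek ways to avoid and prevent these weaknesses."
    if n <= 10:
        return "A certain number of weak areas; get rid of these weak areas."
    return "Many weak areas; deal with these weaknesses immediately."

def recommendations(weak_items, recommendation_table):
    """Table 7: display the stored recommendation for each weak sub-item."""
    return {item: recommendation_table[item] for item in weak_items}

# Hypothetical answers (1-5 scale) and recommendation texts for three sub-items
answers = {"1.1": 4, "1.2": 2, "2.1": 1}
recs = {"1.1": "Maintain current leadership practices.",
        "1.2": "Strengthen leadership communication.",
        "2.1": "Review the strategic planning process."}

weak = find_weaknesses(answers)       # -> ['1.2', '2.1']
print(overall_conclusion(len(weak)))  # two weaknesses -> prevention advice
print(recommendations(weak, recs))
```

In the real system the thresholds and recommendation texts would live in the database rather than in code, so that the system developer can adjust them without rescripting.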
10. Development of training features
According to Gandell, Weston, Finkelstein and Winer
(2000), several web capabilities can be used as action-based
teaching and learning strategies. These include content presentation; searchable information; information exchange; guidance, practice, and feedback; discussion; and simulations. Gibbons and Fairweather (1998) also argue
Fig. 5. System process flowchart.
that web-based training can offer several advantages as
compared with traditional modes of training. The KES
system provides a web-based environment for the training
of organisational performance measurement. The users
(including undergraduates, postgraduates and industrial
users) can acquire knowledge of organisational performance assessment through practice with an on-line assessment tool. During this practice, they will learn what organisational assessment is, the criteria used in the assessment, the areas to be assessed, the sample practices of these areas, the scoring conditions and special terms, as well as how to identify strengths, weaknesses and improvement methods. Several training features of the system are explained as follows.
10.1. A virtual company for web-based training
Based on the current industrial scenario in Hong Kong, a
sample virtual company was created for web-based training.
The company provides a simulated organisation structure
and operations for users (particularly those undergraduates who may not have any industrial experience) to familiarise themselves with the operations and performance measurement practices in an industrial organisation. The company’s profile and
general information are provided, comprising the important
elements in managing the company. Fig. 8 shows a sample
page of the operation profiles of the virtual company.
Alternatively, student users are highly advised to contact
industrial companies to collect real data, while industrial
Fig. 6. Flowchart for seven questionnaires.
users are encouraged to employ the KES system in their
organisations. The students are required to prepare an information sheet to build up the operation profile of any organisation undergoing performance assessment. After completing the self-assessment, the
KES system can generate the results with scores, summary
reports, strengths, weaknesses, and recommendations. Users
can learn how and why the company can obtain such results
and generate action plans for improvement. For instance, the
users can analyse the strengths and weaknesses of the virtual
company and other organisations by referring to the
summary reports.
10.2. On-line user manual
An on-line user manual is created to provide users with
guidelines from the start to the end of the assessment
practices. Users can refer to the manual any time during the
assessment practices (Fig. 9). Besides, in order to provide
users with quick reference to the terms related to
performance measurement and self-assessment, a glossary
is built into the KES system.
Fig. 7. Flowchart for searching records.
Fig. 8. Profile of a virtual company for web-based training.
11. Evaluation of the KES system
In order to evaluate the applicability of the KES model
and accompanying training toolkit, a web-based user
satisfaction survey was conducted. A group of seven invited
respondents (including industrial users and postgraduates)
were asked to complete a survey questionnaire after a trial
of the KES system. The questionnaire contained 10
questions that addressed three main areas, including the
contents, interface design, and willingness to use. A five-
point Likert-scale was employed.
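As an illustration of how such Likert responses might be summarised per area, the sketch below computes mean scores from hypothetical data — the seven respondents' ratings and the grouping of the ten questions into the three areas are invented for illustration, not the survey's actual figures:

```python
from statistics import mean

# Hypothetical grouping: questions 1-4 cover contents, 5-8 interface
# design, and 9-10 willingness to use (the paper does not give the split).
AREAS = {"contents": [0, 1, 2, 3],
         "interface design": [4, 5, 6, 7],
         "willingness to use": [8, 9]}

def area_means(responses):
    """Average Likert score per area across all respondents."""
    return {area: mean(r[q] for r in responses for q in questions)
            for area, questions in AREAS.items()}

# Seven hypothetical respondents, ten ratings each
# (1 = strongly disagree ... 5 = strongly agree).
responses = [
    [4, 4, 5, 4, 3, 4, 4, 3, 4, 4],
    [5, 4, 4, 4, 4, 4, 3, 4, 5, 4],
    [4, 3, 4, 4, 4, 3, 4, 4, 3, 3],
    [4, 4, 4, 5, 4, 4, 4, 4, 4, 4],
    [3, 4, 4, 4, 2, 3, 3, 3, 3, 2],
    [4, 5, 4, 4, 4, 4, 4, 4, 4, 4],
    [5, 4, 4, 4, 4, 4, 4, 4, 5, 4],
]

for area, score in area_means(responses).items():
    print(f"{area}: {score:.2f}")
```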
Most respondents agreed that the contents of the system
met the requirements of organisational performance
assessment. Results show that respondents understood the
rationale and relationship of self-assessment questions, and
most of them agreed that these questions are well
structured. The KES system adopted a step-by-step
approach of self-assessment. Most (i.e. five out of seven)
respondents agreed that the system flow was clear and easy
to follow. Five out of seven respondents agreed that the
system is user-friendly. One respondent was dissatisfied
with the long waiting time and other problems in loading
the pages. Most respondents were happy with the trial of the KES system and agreed that they became more aware
of the measurement and assessment of organisational
performance. More than half of the respondents indicated
that they were willing to use the system. On the other hand,
one respondent expressed his reservation because he could
not complete all the tasks in the assessment, and could not
obtain the results, scores, summary, feedback, and
recommendations from the system. Generally speaking, this assessment system uses an innovative approach to assessing organisational performance. The system provides a practical means to identify areas for improvement, strengths for development, and recommendations for improvement. In terms of system design, it is also user-friendly and easy to use. However, the long assessment time and the large number of questions are disadvantages for those who just want a preliminary assessment of their organisational performance.
12. Conclusion
The success and continuity of an organisation depend
on its performance. Recent business literature gives much
prominence to balanced scorecards, TQM, BE models,
and other similar approaches for assessing enterprise
Fig. 9. An on-line user manual.
performance. Most of these systems and/or frameworks
share the basic understanding that success in today’s
global marketplace requires measures of the critical
aspects of performance. The interest in TQM–BE has been fuelled by a range of national and regional awards (such as the MBNQA and EQA). These awards are increasingly used by organisations as part of the business improvement process and strategic benchmarking. As the goal of both the MBNQA and self-assessment is to achieve BE, the concepts of self-assessment can be applied with the MBNQA criteria as a catalyst to enhance the self-assessment approach. The advantage of this approach is its comprehensiveness and thoroughness, while the drawback is the substantial amount of resources required. Moreover, the involvement of consultants or experts in the self-assessment is especially important.
The KES model adopts the guiding principles embodied in the MBNQA. The scoring guides provide users with an objective means of self-assessment to profile their strengths
and weaknesses, and identify improvement opportunities
with respect to the seven evaluation criteria. The self-
assessment results obtained constitute a solid foundation for
comparing performance records, integrating key operations
requirements, and stepping towards result-oriented per-
formance improvement. Based on an exploratory user
satisfaction survey with a group of users, the findings
validated the potential applicability of the KES system.
While the model may supplement any BE model, it serves three important purposes:
1. It is a working tool for guiding the implementation of performance measurement systems in organisations;
2. It helps organisations improve their management practices in relation to performance measures and self-assessment; and
3. It facilitates the sharing of information on best practices and the benchmarking of performance within and among organisations.
The KES training toolkit (including the model and the associated KES system) adopts knowledge-based technology and an action-learning approach. It is anticipated that the training toolkit can provide students and users with a stimulating learning environment in which to gain practical experience in measuring and assessing enterprise performance. The toolkit materials can also enrich the engineering management curriculum in the university sector.
Acknowledgements
The authors would like to thank City University of Hong
Kong for supporting this project under the Quality
Enhancement Fund (Project no. 8710199). The authors
also appreciate the contribution from Mr I K Leung of the
Department of Manufacturing Engineering and Engineering
Management of City University of Hong Kong in the
software development.
References
Adebanjo, D. (2001). TQM and business excellence: is there really a
conflict? Measuring Business Excellence, 5(3), 37–40.
Beasley, K. (1994). Self-assessment: A tool for integrated management.
Cheltenham: Stanley Thornes Publishers.
Bemowski, K., & Stratton, B. (1995). How do people use the Baldrige
criteria? Quality Progress, 28(5), 43–47.
Bounds, G., Yorks, L., Adams, M., & Ranney, G. (1994). Beyond total
quality management: Towards the emerging paradigm. New York:
McGraw-Hill.
Boyle, T. (1997). Design for multimedia learning. USA: Prentice-Hall.
Conti, T. (1997). Organisational self-assessment. London: Chapman &
Hall.
Coulambidou, L., & Dale, B. G. (1995). The use of quality management
self-assessment in the UK: A state of the art study. Quality World
Technical Supplement, September, 110–118.
Dale, B. G. (1999). Managing quality (3rd ed). Oxford: Blackwell
Publishers.
Eccles, R., & Pyburn, P. J. (1992). Creating a comprehensive system to
measure performance. Management Accounting, October, 41–44.
EFQM (2002). The European Quality Award, http://www.EFQM.org/
(April).
Evans, J. R., & Lindsay, W. M. (1987). Expert systems for statistical quality
control. In N. A. Botten, & T. Raz (Eds.), Expert systems (pp.
131–136). Industrial Engineering and Management Press.
Gandell, T., Weston, C., Finkelstein, A., & Winer, L. (2000). Appropriate
use of the web in teaching higher education. In B. L. Mann (Ed.),
Perspectives in web course management (pp. 61–68). Canadian
Scholars’ Press.
Geanuracos, J., & Meiklejohn, I. (1993). Performance measurement: The
new agenda. London: Business Intelligence.
Gibbons, A. S., & Fairweather, P. G. (1998). Computer-based instruction:
Design and development. New Jersey: Educational Technology
Publications.
Hakes, C. (1997). The corporate self-assessment handbook. Bristol: Bristol
Quality Centre.
Hakes, C. (1998). Total quality management: The key to business
improvement. London: Chapman & Hall.
Heimpel, R. (2000). Elements of web course design. In B. L. Mann (Ed.),
Perspectives in web course management (pp. 119–134). Canadian
Scholars’ Press.
Henderson, S. (1997). Black swans don’t fly double loops: the limits of the
learning organisation? The Learning Organisation, 5(3), 99–105.
Hill, R. (1996). A measure of the learning organisation. Industrial and
Commercial Training, 28(1), 19–25.
Hillman, G. P. (1994). Making self-assessment successful. The TQM
Magazine, 6(3), 29–31.
Kanji, G. K. (2001). An integrated approach of organisational excellence,
http://www.gopal-kanji.com (December).
Kanji, G. K., & Moura e Sa, P. (2002). Kanji’s business scorecard. Total
Quality Management, 13(1), 13–27.
Kaplan, R. S., & Norton, D. P. (1996). The balanced scorecard:
Translating strategy into action. Boston, MA: Harvard Business
School Press.
Karapetrovic, S., & Willborn, S. (2001). Audit and self-assessment in
quality management: comparison and compatibility. Managerial
Auditing Journal, 16(6), 366–377.
Kermally, S. (1997). Managing performance in brief. Oxford: Butterworth/
Heinemann.
Lascelles, D. M., & Peacock, R. (1996). Self-assessment for business
excellence. Berkshire: McGraw-Hill.
Medori, D., Steeple, D., Pye, T., & Wood, R. (1995). Performance measures:
the way forward. Proceedings of the Eleventh National Conference on
Manufacturing Research (pp. 589–593). Leicester: DeMontfort University.
Neely, A. (1998). Measuring business performance—why, what and how.
London: The Economist Books.
Neely, A., Gregory, M., & Platts, K. (1995). Measuring performance system
design: a literature review and research agenda. International Journal of
Operations and Production Management, 15(4), 80–116.
NIST (2002). Malcolm Baldrige National Quality Award, http://www.nist.gov/ (April).
Powell, T. C. (1995). Total quality management as competitive advantage:
a review and empirical study. Strategic Management Journal, 13(2),
119–134.
Pun, K. F., Chin, K. S., & Lau, H. (1999). A self-assessed quality
management system based on integration of MBNQA/ISO 9000/ISO
14000. International Journal of Quality and Reliability Management,
16(6), 606–629.
Shergold, K., & Reed, D. M. (1996). Striving for excellence: how self-
assessment using the business excellence model can result in step
improvements in all areas of business activities. The TQM Magazine,
8(6), 48–52.
Shin, D., Kalinowski, J. K., & El-Enein, G. A. (1998). Critical
implementation issues in total quality management. SAM Advanced
Management Journal, 63(1), 10–14.
Van der Wiele, T., & Brown, A. (1999). Self-assessment practices in
Europe and Australia. International Journal of Quality and Reliability
Management, 16(3), 238–251.
Van Schalkwyk, J. C. (1998). Total quality management and the
performance measurement barrier. The TQM Magazine, 10(2),
124–131.
Walker, K. (1996). Corporate performance reporting revisited: the balanced
scorecard and dynamic management reporting. Industrial Management
and Data Systems, 26, 24–30.
Whitney, G., & Pavett, C. (1998). Total quality management as an
organisational change: predictors of successful implementation. Quality
Management Journal, 5(4), 9–22.