
Pergamon

Computing Systems in Engineering, Vol. 6, Nos 4/5, pp. 431-436, 1995
Copyright © 1995 Elsevier Science Ltd
Printed in Great Britain. All rights reserved
0956-0561(95)00045-3                                 0956-0521/95 $9.50 + 0.00

A PARALLEL APPROACH TO THE ANALYTIC HIERARCHY PROCESS DECISION SUPPORT TOOL

LUÍS DIAS, JOÃO PAULO COSTA and JOÃO NAMORADO CLÍMACO

Faculdade de Economia da Universidade de Coimbra, Av. Dias da Silva, 166-3000 Coimbra, Portugal and INESC, Rua Antero de Quental, 199-3000 Coimbra, Portugal

(Received 9 December 1993; accepted in revised form 30 June 1995)

Abstract--The Multiple Criteria Decision Aiding methods dedicated to discrete problems follow different philosophies and strategies for selecting, clustering or ranking alternatives. This work presents a tool using one such method--the Analytic Hierarchy Process (AHP). The Decision Maker (DM) can structure his criteria as a hierarchy tree having the alternatives as leaf nodes. The DM must then build matrices for each node by performing pairwise comparisons between its children. The AHP finds the weights of each child concerning the parent criterion by calculating the elements of the eigenvector corresponding to the maximum eigenvalue of the comparison matrix. Weights are then combined in order to obtain the influence of each alternative on the top of the hierarchy. A DM expects that a Decision Support Tool works faster than he/she does. In order to achieve speed a parallel approach was developed. The parallel implementations described in this work follow different message-passing strategies and capitalise on the fact that the vector of weights for each matrix can be calculated independently. The authors used a network of four Inmos Transputers. Research focuses on finding which implementation runs faster and on how the DM's options affect the speedups obtainable.

1. INTRODUCTION

Decision makers often face difficult choices needing fast responses: the executive cannot lose a business opportunity, the engineer cannot fall behind schedule, the physician has a patient that cannot wait, etc. Choices can become rather difficult when involving several criteria as usually no alternative excels at all of those criteria. The Decision Maker (DM) should acknowledge such difficulties and use a Decision Support Tool based on some Multiple Criteria Decision Aiding method. The Analytic Hierarchy Process (AHP)¹ is one such method. It requires the DM to establish priorities for the criteria by judging them in pairs in order to evaluate their relative importance. The output of the method is a vector containing the global value of each alternative considering the DM's input.

Decision Support Systems should be designed to be used by a non-specialist DM. A DM expects a user-friendly environment and a system that works faster than he/she does. Moreover, the decision maker will have to run the system many times with different parameter sets for some "what-if" analysis to cope with uncertainty, which makes speed a crucial factor. In this work a parallel approach to the AHP is discussed as a means to obtain speed.

This paper is structured as follows. First the AHP is explained. Then the chosen parallel approach is described. That description includes the hardware and software environment. Finally results are presented and conclusions are derived.

2. THE ANALYTIC HIERARCHY PROCESS

The Analytic Hierarchy Process invites the DM to structure his conflicting criteria hierarchically as an inverted tree. An example (choice of a computer) of a hierarchy of criteria and alternatives is shown in Fig. 1. The top of the hierarchy (i.e. the root of the tree) is usually a quite general criterion such as "best choice" or "welfare of society". At an inferior level the DM states the criteria that have an influence on the top criterion. These criteria are represented in the tree as children of the root node. The DM can further subdivide each criterion into lower level criteria that have an influence on it, again as a parent-children relationship in the tree. This process stops when the DM is satisfied in the sense that he/she feels that the hierarchy adequately represents the structure of the decision problem. The alternatives can then be appended as the leaves of the tree. Note that the DM can build a hierarchy in the top-down fashion described above or in a bottom-up fashion starting with a set of low level criteria that are successively aggregated until only the top criterion remains.
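The paper does not show its data structures, but as a rough illustration of the hierarchy just described, a criterion tree could be held in C (the language of the authors' implementation) along the following lines. The type, the field names and the MAX_CHILDREN bound are assumptions made here for the sketch; the later sketches reuse them.

    /* Hypothetical node of the criteria hierarchy; names and bounds are
       illustrative only, not taken from the paper. */
    #define MAX_CHILDREN 16

    typedef struct Node {
        const char  *label;                           /* e.g. "Speed" or "Computer a"    */
        int          n_children;                      /* 0 for alternatives (leaf nodes) */
        struct Node *children[MAX_CHILDREN];
        double       A[MAX_CHILDREN][MAX_CHILDREN];   /* pairwise comparison matrix      */
        double       w[MAX_CHILDREN];                 /* weights of the children         */
        double       ci;                              /* consistency index, if requested */
    } Node;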

The purpose of the method is to derive the weight (global value) of each alternative concerning the top of the hierarchy. The weight of each alternative concerning a particular low level criterion is found (in a way described later) by comparing the alternatives between themselves in relation to that criterion. That must be done for each alternative concerning each criterion at the bottom of the hierarchy. Similarly, all the criteria except the one at the top of the hierarchy have a weight (an influence) on its parent criterion. Therefore, considering the example depicted in Fig. 1, a good performance of "Computer a" considering low level criterion "Speed" is not meaningful if "Speed" does not weigh much in relation to "Best computer".

[Fig. 1. Example of a 2-level criteria hierarchy for the choice of a computer, with criteria Speed (S), Cost (C) and Reliability (R) and alternatives Computer a, Computer b and Computer c. The alternatives are the leaf nodes. Next to each criterion is the comparison matrix for its children.]

The DM does not need to express weights directly. The DM is instead required to evaluate all alternatives in relation to a low level criterion and all criteria in relation to their parent in a pairwise fashion. One feature of the hierarchical approach is that the DM concentrates on one criterion at a time. For each criterion the result of this evaluation is a comparison matrix of its children (also shown in Fig. 1). If one denotes A as one such matrix one can say that:

--The values a_ij result from comparing child i and child j according to a semantic scale such as:

• a_ij = 1 if i and j are equally important;
• a_ij = 3 if i is weakly more important than j;
• a_ij = 5 if i is strongly more important than j;
• a_ij = 7 if i is demonstrably more important than j;
• a_ij = 9 if i is absolutely more important than j;
• a_ij is an even number if the DM desires a more detailed comparison.

--A is square, positive (a_ij > 0), and reciprocal (a_ij = 1/a_ji and a_ii = 1).
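As a small sketch only (reusing the hypothetical MAX_CHILDREN bound introduced earlier), the reciprocity property means the DM needs to supply just the judgements above the diagonal; the rest of the matrix can be filled in automatically:

    /* Fill the diagonal and lower triangle so that a[i][i] = 1 and
       a[j][i] = 1 / a[i][j]; the upper triangle is assumed to already
       hold the DM's pairwise judgements. */
    void make_reciprocal(int n, double a[][MAX_CHILDREN])
    {
        for (int i = 0; i < n; i++) {
            a[i][i] = 1.0;
            for (int j = i + 1; j < n; j++)
                a[j][i] = 1.0 / a[i][j];
        }
    }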

The example depicted in Fig. 1 shows that for this DM "Reliability" is strongly more important than "Speed" and "Cost" (understood as low cost) is between weakly and strongly more important than "Speed" on what concerns the top criterion (in this case, a goal).

According to Saaty¹ the weights w = (w_1, ..., w_n) of the children of A are the elements of the eigenvector corresponding to its maximum eigenvalue λ_max (see Appendix for more explanation). We therefore have A w = λ_max w.

The method does not assume that the decision maker's pairwise comparisons are consistent, thus allowing for subjective and unstructured judgement. A Consistency Index (CI) is proposed by Saaty¹ to measure (if the DM wishes so) the consistency of judgement for each comparison matrix, so that the validity of the method's results can be assessed. It equals (λ_max - n)/(n - 1). λ_max needs to be found if and only if the consistency index is to be calculated. It can be approximated by making λ_max equal to the average of (Aw)_i / w_i (i = 1, 2, ..., n).
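Assuming the weight vector w has already been computed (the algorithm is given next), λ_max and the CI can be approximated exactly as just described. This is a sketch under the assumptions of the earlier Node example, not the authors' code:

    /* Approximate the maximum eigenvalue as the average of (A w)_i / w_i
       and return the Consistency Index (lambda_max - n) / (n - 1). */
    double consistency_index(int n, double a[][MAX_CHILDREN], const double w[])
    {
        double lambda_max = 0.0;
        for (int i = 0; i < n; i++) {
            double aw_i = 0.0;
            for (int j = 0; j < n; j++)
                aw_i += a[i][j] * w[j];     /* (A w)_i */
            lambda_max += aw_i / w[i];
        }
        lambda_max /= n;
        return (lambda_max - n) / (n - 1.0);
    }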

For determining the eigenvectors the algorithm chosen was the one that Saaty¹ considers gives a good approximation. It consists of an iterative process that stops when the difference between two iterations' values is less than a chosen threshold ε (a small positive number):

    A_0 <- A
    k <- 1                /* k indicates the iteration number */
    A_k <- (A_{k-1})^2
    {w_i}_k <- sum of row i of A_k / sum of all [a_ij]_k, for i = 1, ..., n
    REPEAT
        k <- k + 1
        A_k <- (A_{k-1})^2
        {w_i}_k <- sum of row i of A_k / sum of all [a_ij]_k, for i = 1, ..., n
    UNTIL |{w_i}_k - {w_i}_{k-1}| < ε, for i = 1, ..., n.
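A direct transcription of this iteration into C might look as follows (again a sketch reusing the hypothetical MAX_CHILDREN bound; the authors' 3L Parallel C code is not reproduced in the paper):

    #include <math.h>   /* fabs */

    /* w_i = (sum of row i of A) / (sum of all entries of A). */
    static void row_sum_weights(int n, double a[][MAX_CHILDREN], double w[])
    {
        double total = 0.0;
        for (int i = 0; i < n; i++) {
            w[i] = 0.0;
            for (int j = 0; j < n; j++)
                w[i] += a[i][j];
            total += w[i];
        }
        for (int i = 0; i < n; i++)
            w[i] /= total;
    }

    /* A <- A^2 (in place, via a temporary copy). */
    static void square_in_place(int n, double a[][MAX_CHILDREN])
    {
        double tmp[MAX_CHILDREN][MAX_CHILDREN];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                double s = 0.0;
                for (int m = 0; m < n; m++)
                    s += a[i][m] * a[m][j];
                tmp[i][j] = s;
            }
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                a[i][j] = tmp[i][j];
    }

    /* Iterative approximation of the principal eigenvector of the comparison
       matrix a: square the matrix, take normalised row sums, and stop when no
       component of w changes by eps or more between two iterations.
       (For slow convergence the squared matrix could be rescaled each
       iteration to avoid overflow; omitted for brevity.) */
    void weights_from_matrix(int n, double a[][MAX_CHILDREN], double w[], double eps)
    {
        double ak[MAX_CHILDREN][MAX_CHILDREN];
        double w_prev[MAX_CHILDREN];
        int converged;

        for (int i = 0; i < n; i++)            /* A_0 <- A            */
            for (int j = 0; j < n; j++)
                ak[i][j] = a[i][j];

        square_in_place(n, ak);                /* A_1 <- (A_0)^2      */
        row_sum_weights(n, ak, w);             /* first estimate of w */

        do {
            for (int i = 0; i < n; i++)
                w_prev[i] = w[i];
            square_in_place(n, ak);            /* A_k <- (A_{k-1})^2  */
            row_sum_weights(n, ak, w);
            converged = 1;
            for (int i = 0; i < n; i++)
                if (fabs(w[i] - w_prev[i]) >= eps)
                    converged = 0;
        } while (!converged);
    }

A node's weights would then be obtained with a call such as weights_from_matrix(nd->n_children, nd->A, nd->w, eps).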

The consistency of the DM (measurable by the CI), the choice of whether to calculate the CI and the choice of ε will influence the speedup obtainable in parallel approaches.

In summary, an AHP outline is as follows:

Step 1. Get the DM's options (tree structure, comparison matrices, whether to calculate the CI or not, threshold ε).
Step 2. For each node in the hierarchy do:
    2.1. Calculate the eigenvector of the comparison matrix corresponding to the maximum eigenvalue.
    2.2. If the CI is to be calculated then compute the maximum eigenvalue and then the CI.
Step 3. Combine the weights:
    3.1. Build one matrix per level containing one column per criterion in that level. Fill each column with the vector of weights of the children of one node.
    3.2. Multiply these matrices to obtain the weight of each alternative in relation to the top criterion.
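For a two-level hierarchy such as the one in Fig. 1, Step 3 reduces to one matrix-vector product. A minimal sketch (function and parameter names are ours, not the authors'):

    /* Step 3 specialised to a two-level hierarchy: global[a] is the sum over
       criteria c of top_w[c] * crit_w[c][a], where top_w holds the weights of
       the criteria with respect to the top node and crit_w[c] holds the
       weights of the alternatives under criterion c. */
    void combine_two_levels(int n_crit, int n_alt,
                            const double top_w[],
                            double crit_w[][MAX_CHILDREN],
                            double global[])
    {
        for (int a = 0; a < n_alt; a++) {
            global[a] = 0.0;
            for (int c = 0; c < n_crit; c++)
                global[a] += top_w[c] * crit_w[c][a];
        }
    }

Fed with the weight vectors of the example that follows, this reproduces the final vector (0.28, 0.53, 0.19).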

As an example, the weights for the matrices in Fig. 1 would be:
• concerning "Best computer": "Speed"--0.10, "Cost"--0.33, "Reliability"--0.57;
• concerning "Speed": a--0.73, b--0.19, c--0.08;
• concerning "Cost": a--0.4(4), b--0.4(4), c--0.1(1);
• concerning "Reliability": a--0.10, b--0.64, c--0.26.
Finally, concerning "Best computer":
    a--[0.73 0.4(4) 0.10] × [0.10 0.33 0.57]^T = 0.28
    b--[0.19 0.4(4) 0.64] × [0.10 0.33 0.57]^T = 0.53
    c--[0.08 0.1(1) 0.26] × [0.10 0.33 0.57]^T = 0.19


3. PARALLEL APPROACH

The AHP offers some opportunities for the parallel program designer to exploit in order to improve execution speed. One possible implementation of a parallel AHP takes advantage of the fact that after the criterion tree of matrices has been built the vector of weights for each matrix can be calculated independently (Step 2 above). This parallel implementation takes advantage of the fact that the iterative procedure used to calculate the vector of weights takes most of the execution time. The parallel implementations followed a common strategy:

-- Read the data and build the tree (not in parallel).
-- Calculate the weights in parallel: each processor will receive a comparison matrix and will return its eigenvector and, if requested, its CI. When a processor finishes its task it will receive another matrix, until there are no more matrices to send. A task referred to as the "master" task will control the whole process and collect the resulting weights.
-- The weights are then combined by the "master" task to obtain the final result (not in parallel).
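The authors' implementations (described below) use 3L Parallel C message passing on a transputer network, and those primitives are not reproduced here. Purely to illustrate the division-of-labour strategy just listed, the sketch below uses POSIX threads on shared memory instead; it reuses the hypothetical Node, weights_from_matrix and consistency_index from the earlier sketches, and every name in it is ours.

    #include <pthread.h>
    #include <stddef.h>

    /* Shared state for the pool of workers (illustrative analog of the farm). */
    typedef struct {
        Node          **nodes;     /* flattened list of criterion nodes      */
        int             n_nodes;
        int             next;      /* index of the next matrix to hand out   */
        int             want_ci;   /* whether the DM requested the CI        */
        double          eps;       /* convergence threshold                  */
        pthread_mutex_t lock;
    } Farm;

    /* Each worker repeatedly takes the next unprocessed comparison matrix,
       computes its weight vector (and, if requested, its CI) and stops when
       no matrices remain. */
    static void *worker(void *arg)
    {
        Farm *f = arg;
        for (;;) {
            pthread_mutex_lock(&f->lock);
            int i = (f->next < f->n_nodes) ? f->next++ : -1;
            pthread_mutex_unlock(&f->lock);
            if (i < 0)
                break;
            Node *nd = f->nodes[i];
            weights_from_matrix(nd->n_children, nd->A, nd->w, f->eps);
            if (f->want_ci)
                nd->ci = consistency_index(nd->n_children, nd->A, nd->w);
        }
        return NULL;
    }

    /* The "master" builds the tree serially, starts one worker per processor
       (four here), waits for them, and then combines the weights serially. */
    void run_farm(Node **nodes, int n_nodes, int want_ci, double eps)
    {
        Farm farm = { nodes, n_nodes, 0, want_ci, eps };
        pthread_t tid[4];
        pthread_mutex_init(&farm.lock, NULL);
        for (int t = 0; t < 4; t++)
            pthread_create(&tid[t], NULL, worker, &farm);
        for (int t = 0; t < 4; t++)
            pthread_join(tid[t], NULL);
        pthread_mutex_destroy(&farm.lock);
    }

On the transputer network the same pattern is expressed with explicit message passing: the master sends a matrix down a channel and a worker sends back the eigenvector and, if requested, the CI.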

The implementations use an affordable four-node multicomputer.² Research will focus on how different implementations run compared to a sequential approach and on how the DM's choices affect the speedups obtainable.

3.1. Environment overview

The work described in this paper used a multicomputer add-in board with four nodes. Each node consists of an Inmos T800 Transputer³ processor with a local memory of 1 MByte. The transputers communicate with each other through direct full-duplex links. One of the transputers acts as an interface to the host (an Intel 486 based PC) by communicating with its AT bus. This transputer is called the root node (Fig. 2).

The software used to develop and run the application was 3L's Alien File Server and Parallel C compiler.⁴ The typical program cycle consists of editing the source code on a general purpose text editor, compiling the code, linking the code with a run-time library, writing a configuration file and running the program. There are two run-time libraries available: a "standard" library for the root node (which is the only one that can communicate with the host and thus perform I/O) and a standalone, smaller library for the remaining nodes. On what concerns the configurer there are two options. A general purpose configurer allows the program designer to place the processes in chosen processors and to allocate transputer links to channels of communication between processes. There is also a "flood configurer" that is only used in master-worker programs like the implemented processor farm presented below. When using the latter configurer all communication routing becomes transparent to the programmer and the program makes use of all available processors.

[Fig. 2. Hardware overview.]

3.2. Implemented programs

The following programs were built exploiting the parallel calculation of the comparison matrices' weights. All work is performed serially except Step 2 of the AHP outline (on the parallel versions). The parallelization of this step follows a division of labour strategy as described above.

(a) i486--straightforward sequential implementation of AHP running on the host processor.

(b) single--straightforward sequential implementation of AHP running on the root transputer.

(c) farm--parallel program using the language's farming functions. The master task sends the comparison matrices to worker tasks which calculate their weights and, if requested, their CIs. The master task sends the matrices from the main body of the program and receives the results through threads. All other calculations are performed by the master task. Two configurations were tested: one places worker tasks in all available processors (farm4) while the other runs all the tasks on the root transputer (farm1).

(d) DIYfarm--a do-it-yourself version of a farming model. The network is treated as a tree structure in which the root processor is the root of the tree and all other transputers are the leaves. All communications are handled explicitly and the processors' status is monitored. The root node runs a master task which has a thread for sending matrices to worker nodes and has receiving threads (one per worker) for getting the results. There are three worker processes:

in implementation DIYfarm3 there is a worker per non-root node, while in implementation DIYfarm1 all workers run on the root processor.

[Fig. 3. Tree structure: top criterion, level 1 criteria, level 2 criteria, alternatives.]

Table 1. Execution times considering a threshold of 0.01

Series  CI?  i486  single  farm1  farm4  DIYfarm1  DIYfarm3  final1  final4*  final4  mean CI
Data1   N     175     245    335    207       368       138     354      135     133    1.397
        Y     190     258    348    214       383       142     369      140     138
Data2   N     765    1121   1475    825      1612       579    1602      579     531    1.137
        Y     830    1175   1528    850      1673       593    1663      597     544
Data3   N     104     143    186    104       203        73     204       74      68    1.11
        Y     109     150    193    107       211        76     211       77      69
Data4   N     292     435    610    378       660       274     654      266     258    0.636
        Y     322     464    639    391       694       283     687      276     265
Data5   N     157     213    269    174       296        99     294      102      88    1.268
        Y     163     223    278    151       306       103     304      105      91
Data6   N     271     366    453    243       498       166     486      169     138    1.371
        Y     279     380    466    255       513       170     501      173     143
Data7   N      91      16     25     25        25        22      25       16      17    0
        Y     103      17     27     25        27        22      26       17      17

(e) Final--a variation of DIYfarm. The master has only one thread per channel, both sending matrices and receiving results. The worker task remains unchanged. Three different configurations were devised:

-- final1 has only one worker, which runs on the root transputer;
-- final4* has one worker per non-root transputer;
-- final4 has one worker task per non-root transputer and an additional worker sharing the root transputer with the master task.

4. RESULTS

Our research focused on two different aspects: one was finding which implementation would run faster, while the other was finding how the DM's choices affect the obtainable speedup. Seven different tree structures similar to the one in Fig. 3 were devised to answer these questions. For each of these structures we randomly generated ten sample problems. We measured the time it took to run the calculation of the eigenvectors (Step 2, possibly in parallel) plus the final weight multiplications (Step 3, performed serially) and calculated its arithmetic mean. The mean was a good statistic as there were no extreme values. It was always close to the median value. The measured part of the program execution was repeated a number of times (different from one structure to another) so that the total execution times would become similar. The tree structures are:

• data1--One criterion at the top level having 5 children. There are 5 alternatives. There are 6 comparison matrices with dimension 5.
• data2--One criterion at the top level having 4 children, each child having 5 children. There are 5 alternatives. There are 24 (4 criteria at level 1 plus 20 criteria at level 2) comparison matrices with dimension 5 plus one (for the top criterion) with dimension 4.
• data3--One criterion at the top level having 5 children, each child having 5 children. There are 5 alternatives. There are 31 comparison matrices having dimension 5.
• data4--One criterion at the top level having 4 children, each child having 5 children. There are 3 alternatives. There is one comparison matrix with dimension 4, 4 with dimension 5, and 20 with dimension 3.
• data5--One criterion at the top level having 4 children, each child having 5 children. There are 7 alternatives. There is one comparison matrix with dimension 4, 4 with dimension 5, and 20 with dimension 7.
• data6--One criterion at the top level having 4 children, each child having 5 children. There are 9 alternatives. There is one comparison matrix with dimension 4, 4 with dimension 5, and 20 with dimension 9.
• data7--Binary tree (each criterion has 2 children) with 5 levels and 2 alternatives. This makes 31 comparison matrices of dimension 2.

Table 2. Real speedups for the fastest program considering configurations with 2, 3 and 4 processors

Series               CI?  Threshold  2P    3P    4P
Data1 (6 nodes,       N   0.01       1.24  1.66  1.84
  5 x 5)              Y   0.01       1.22  1.74  1.87
                      Y   0.0001     1.30  1.83  1.98
Data2 (25 nodes,      N   0.01       1.23  1.71  2.11
  5 x 5)              Y   0.01       1.26  1.73  2.16
                      Y   0.0001     1.30  1.83  2.28
Data3 (31 nodes,      N   0.01       1.23  1.70  2.10
  5 x 5)              Y   0.01       1.26  1.74  2.17
                      Y   0.0001     1.29  1.80  2.34
Data4 (25 nodes,      N   0.01       1.05  1.40  1.69
  3 x 3)              Y   0.01       1.08  1.42  1.75
                      Y   0.0001     1.15  1.58  1.91
Data5 (25 nodes,      N   0.01       1.34  1.92  2.42
  7 x 7)              Y   0.01       1.36  1.92  2.45
                      Y   0.0001     1.39  2.02  2.48
Data6 (25 nodes,      N   0.01       1.44  2.08  2.65
  9 x 9)              Y   0.01       1.44  2.08  2.66
                      Y   0.0001     1.43  2.12  2.66
Data7 (31 nodes,      N   0.01       0.80  0.89  0.94
  2 x 2)              Y   0.01       0.81  0.94  1.00
                      Y   0.0001     0.81  0.89  1.00

[Fig. 4. Real speedup for the problems with nodes having dimension 5 using a threshold of 0.0001 (series for 2, 3 and 4 processors; horizontal axis: number of nodes).]

[Fig. 5. Real speedup for the problems having 25 nodes using a threshold of 0.0001 (series for 2, 3 and 4 processors; horizontal axis: node dimension).]

[Fig. 6. Real speedup for the problems with nodes having dimension 5 using four processors (series: threshold 0.01 and no CI; threshold 0.0001 and CI; horizontal axis: number of nodes).]

The results in Table 1 show that the implementation final4 consistently beat all other transputer implementations and beat the host in most of the cases. The expression real speedup⁵ refers to the fastest sequential implementation, which--contrary to intuition--is not always the straightforward one. The expression nominal speedup refers to the speedup of a program when running unmodified on several processors. On the contrary, real speedup (or speedup ratio) is reserved for designating the speedup obtainable over the fastest sequential implementation.
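In symbols (our notation, not the authors'): writing T_seq for the execution time of the fastest sequential implementation, T(1) for the time of the parallel program on a single processor and T(k) for its time on k processors, the two notions can be read as

    \text{nominal speedup}(k) = \frac{T(1)}{T(k)}, \qquad
    \text{real speedup}(k) = \frac{T_{\mathrm{seq}}}{T(k)}.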

The results in Table 2 show how the real speedup varies with the dimension of the problem and the dimension of the network of processors (2, 3 or 4 processors), considering the fastest program. It also features results using two different values for the threshold ε: 0.01 and 0.0001. The results for a threshold of 0.0001 are presented in Figs 4 and 5, featuring speedup as a function of problem size. In Fig. 4 it can be seen that maximum speedups were achieved for the configurations using two or three processors, in that speedup does not grow with the problem size (number of nodes). In what concerns the configuration with four processors there seems to be room for a little more growth. In Fig. 5 maximum speedups were not achieved, although there is little growth in the case of a two-processor network as node dimension increases. The results show that although there was a decreasing efficiency in the use of processors, speedup kept increasing as the number of processors increased and that this increase is larger when the problem size is larger. Naturally, these conclusions are only valid for the problem dimensions experimented with.

One other aspect to be studied was the influence of the DM's options (chosen threshold ε, whether to calculate the CI, number of criterion nodes, size of nodes and inconsistency) on speedup. It can be seen from the results that speedup generally increases when the CI is to be calculated and also increases as the threshold decreases. Both of these factors contribute to higher computational effort on the worker tasks without increasing communication among processors.

[Fig. 7. Real speedup for the problems having 25 or 31 nodes using four processors (series: threshold 0.01 and no CI; threshold 0.0001 and CI; horizontal axis: node dimension).]

[Fig. 8. Real speedup versus CI for all problems, using four processors.]

[Fig. 9. Real speedup and regression line Y* = 0.97 + 1.73X versus CI for all problems except the one with only 6 nodes.]

On what concerns the number of nodes in the hierarchy of criteria we studied the performance of our best parallel implementation (final4) on the structures data1, data2 and data3, i.e. structures with nodes of dimension 5. The results (summarised in Fig. 6) show that if the number of nodes is higher than 25 then speedup does not increase, which is understandable as we have only four processors available.

For studying the relationship between the dimension of the criterion nodes and speedup we used all structures except data1, i.e. structures with 25 or 31 nodes. Figure 7 shows that the difference between 25 and 31 nodes is of little importance but the difference between node dimensions is not. Higher node dimension contributes to better speedups.

Finally we studied the impact of inconsistency on speedup, taking into account all seven structures available. At first sight there does not seem to be a relationship (Fig. 8). However, the points at the lower right of the chart refer to the tree structure with only six nodes, which is the only structure with a number of nodes in the order of the number of processors available. If we exclude that smaller structure we find an almost linear relationship (Fig. 9). This relationship, however, may not remain valid outside the scope defined by our data, and the excellent correlation coefficient found owes much to the problems with no inconsistency.

5. CONCLUSIONS

We have obtained good speedups (both nominal and real) with our conceptually simple approach to the AHP decision support tool. We found that the fastest implementation was final4, in which we did not use the language's farming functions. Instead we handled communications explicitly, had one thread per channel at the master task and had a worker task in each of the four transputers.

We also found that higher node dimension and inconsistency contribute to better speedups. This approach therefore excels when the nodes have a larger number of children. This happens, for instance, when the DM is considering a large number of alternatives.

REFERENCES

1. T. L. Saaty, The Analytic Hierarchy Process, McGraw-Hill, New York, 1980.
2. W. C. Athas and C. L. Seitz, "Multicomputers: message-passing concurrent computers", Computer 21, 9-24 (1988).
3. IMS T800 Transputer: Preliminary Data, Inmos Ltd, 1987.
4. Parallel C: User Guide, 3L Ltd, Livingstone, 1988.
5. A. Gupta, Parallelism in Production Systems, Pitman, London, 1987.
6. L. Lakshmivarahan and K. D. Sudarshan, Analysis and Design of Parallel Algorithms, McGraw-Hill, 1990.

APPENDIX

We reproduce here Saaty's¹ justification for his method of calculating weights.

Let A be an n by n comparison matrix. The matrix is reciprocal: a_ij = 1/a_ji and a_ii = 1. The method requires that we find the vector of weights w resulting from the comparisons expressed in A. If the comparisons were based on exact measurements, then each a_ij = w_i/w_j and w would be an eigenvector of A for the eigenvalue n: A w = n w. It can also be shown that when a_ii = 1 the sum of all eigenvalues of A equals n. If the matrix A is based on exact measurements then the maximum eigenvalue λ_max equals n and all other eigenvalues of A are null.

Another known result is that if the a_ij change by a small amount in a positive reciprocal matrix then the eigenvalues change little. Based on this fact Saaty¹ argues that even if a_ij ≠ w_i/w_j and there is some inconsistency (which holds true in general for subjective measurements), the vector of weights w will be the eigenvector of A corresponding to its maximum eigenvalue. That eigenvector is then normalised so that Σ w_i = 1.
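In LaTeX notation, the consistent case referred to above can be written out in one line:

    a_{ij} = \frac{w_i}{w_j}
    \;\Longrightarrow\;
    (Aw)_i = \sum_{j=1}^{n} a_{ij}\, w_j = \sum_{j=1}^{n} \frac{w_i}{w_j}\, w_j = n\, w_i
    \quad\Longrightarrow\quad Aw = nw .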