

A Reputation System for Trustworthy QoS Information in Service-Based Systems

Jun Lin†, Changhai Jiang†, Hai Hu†, Kai-Yuan Cai†,

†Beijing University of Aeronautics and Astronautics, Beijing 100191, China, [email protected]

Stephen S. Yau††, Dazhi Huang††

††Arizona State University, Tempe, AZ 85287-8809, USA, [email protected]

Abstract—In service-based systems (SBS) development, the qualities of service (QoS) are significant factors in composing high-quality workflows, yet the QoS claimed by service providers may not be trustworthy. In this paper, a reputation system that supervises both service providers and service clients without much monitoring cost is introduced. The approach is based on the deviation of the claimed QoS from the feedback QoS reported by monitors or clients, and is able to provide more trustworthy predicted QoS. A concluding experiment validates that the reputation system is effective and efficient.

Keywords—service-based systems; qualities of services; reputation system; trustworthiness

I. PROBLEM FORMULATION

To facilitate the development and management of applications, service-oriented architecture (SOA) [1] has been adopted for many large-scale distributed applications, including e-commerce, transportation, homeland security and military [2]. Systems developed based on SOA are called service-based systems (SBS). The basic components of SBS are individual services, each of which provides certain capabilities. By composing several services into a workflow, a given application can be implemented in a convenient and reliable way.

Before workflow composition, developers need to identify the capabilities required by the application and search for available services providing such functions. During the service selection process, the qualities of services (QoS), such as throughput, response time, accuracy, security protection and functionality-related attributes, are, besides the functional requirements, significant attributes of the candidate services for composing high-quality workflows. Service-level agreements (SLA) [3] and protection-level agreements (PLA) [4] [5] are provided by service providers to claim their services' QoS in service contracts. Application developers use this QoS information to select among the available services providing the needed functionality.

Lin, Jiang, Hu and Cai are supported by the National Natural Science Foundation of China (No. 60973006) and the Beijing Natural Science Foundation (No. 4112033). Yau and Huang are supported by the U.S. National Science Foundation under grant number 0725340.

However, the QoS profiles provided by service providers may be questionable, since providers may exaggerate the qualities of their services to attract developers. Clearly, to guarantee the quality of the whole application, trustworthy QoS information is required to help developers make good decisions on service selection. In such situations, where the participants of distributed collaborative systems have no face-to-face access to each other, reputation systems are widely adopted to assist trust evaluation and decision making [6] [7] [8] [9]. In recent years, many reputation evaluation mechanisms have been developed that rely on presumed distributions of service user feedback ratings or on large numbers of monitors for supervision. To save monitoring resources and provide better supervision of not only service providers but also users, this paper introduces a reputation system for both services and users, together with a predicted real QoS, that encourages trustworthy behavior and deters dishonest participation without any presumed distribution or much monitoring cost.

The rest of this paper is organized as follows: previous work on related topics is presented in Section II; the general approach is described in Section III; details on the computation and updating of client and provider reputation are presented in Section IV; Section V reports the experiment conducted on a SBS testbed and analyzes the experimental data; Section VI gives the conclusions and future work.

II. RELATED STUDIES

To specify and guarantee the qualities of services, there are two contracts between service provider and service user from the legal and monetary point of view: SLA and PLA. An SLA assures users that they can get the promised qualities of service they pay for and obligates the service providers to achieve their service promises. A PLA is a contract on the quality of protection (QoP), which specifies the security criteria.

However, in many scenarios these contracts are not effective enough to keep the QoS profile trustworthy, and researchers have developed several kinds of approaches to improving the trustworthiness of QoS information. One common practice among these solutions is the reputation system. Recent research has shown that reputation systems have contributed to fraud avoidance in peer-to-peer e-commerce

2012 IEEE 36th International Conference on Computer Software and Applications Workshops, 978-0-7695-4758-9/12 $26.00 © 2012 IEEE, DOI 10.1109/COMPSACW.2012.41

interactions such as eBay [10] [11], Amazon [12] and Yahoo [13], and led to better buyer satisfaction.

Most of these systems calculate the reputations of services from service user feedback. Feedback may be screened out or weighted based on its deviation from the majority opinion, using statistical techniques to ensure the effectiveness of the reputation evaluation [9] [14] [15], often with a presumed distribution of the collected ratings. Vu, Hauswirth and Aberer used trust/distrust propagation and clustering techniques to assign weights to user reports [16], making assumptions about the properties of the service users and the distribution of user ratings in the system. Yau and Sun [17] presented an approach to estimating the trust of service providers using not only the QoS profile, but also trust values with respect to collaboration and competition relationships between the estimated provider and the other providers; the transitivity of these trust values is also included in the approach.

Other systems accumulate reputation through monitors and user feedback, but lack a comprehensive user supervision mechanism and thus need heavy monitoring to ensure the trustworthiness of users. Yau, Huang and Yin [18] presented an approach in which the reputation of a service is based on the deviation between the QoS profile claimed by its provider and that reported by monitors and service client feedback.

III. GENERAL APPROACH

To construct a reputation system for SBS, we consider the following types of participants in a general SBS as depicted in Fig. 1: system administrator, service providers and service clients.

Figure 1. The types of participants in a general SBS.

The system administrator is the agency that manages the service directory, in which providers publish and clients search for and use services. It is also responsible for managing the reputation system, including its related entities called service proxies. The service proxies in our reputation system are trusted reference entities that monitor the services: they pass communications between the client invoking a service and the invoked service, while observing and reporting the QoS themselves. Service providers publish services with claimed QoS and functions, while service clients search the directory and select services based on their functions and claimed QoS.

Assuming that the QoS reports of the proxies are fully trustworthy, the idea of our approach is to generate the reputation of an invoked service by comparing the QoS reported by the proxy with the claimed QoS, and then to calculate a predicted QoS of the service that deviates less from the reported QoS than the claimed one does. Since monitoring all service invocations through proxies would incur too much overhead in a large-scale SBS, we use proxies to collect QoS reports only for a portion of invocations; for the others, we rely on clients' feedback QoS reports instead, under the assumption that, supervised by a mechanism called client reputation, these feedback reports can be regarded as trustworthy to a certain degree. Whether a client talks to the service directly or via a proxy is decided by the system at invocation time and is not disclosed to the client.
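The invocation-time split between proxy-monitored and feedback-only calls can be sketched as follows. This is a minimal illustration, not the paper's implementation; the routing helper and the monitoring fraction are assumptions (Section V uses a 50% random monitoring rate).

```python
import random

def route_invocation(monitor_fraction: float, rng: random.Random) -> str:
    """Decide, at invocation time, whether this call goes through a
    trusted proxy (QoS observed by the proxy) or directly to the
    service (QoS taken from the client's feedback report).
    The client is not told which path was chosen."""
    return "proxy" if rng.random() < monitor_fraction else "direct"

rng = random.Random(42)  # fixed seed for a reproducible illustration
paths = [route_invocation(0.5, rng) for _ in range(1000)]
proxy_share = paths.count("proxy") / len(paths)
```

With a 0.5 monitoring fraction, roughly half the invocations are monitored, so only about half the invocations incur proxy overhead while the rest are covered by client feedback.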

In addition, the system generates a service provider reputation from all of the provider's service reputations, and this result in turn affects the reputation of any service newly published by the provider. The details of client, service and service provider reputation are elaborated in Section IV. The reputation system processes the QoS reports and updates the reputations of the client, the service and its provider, as well as the predicted QoS of the service, after every invocation.

Finally, the service reputation and the predicted QoS, instead of the claimed QoS, are presented to the client as a service-selection reference at search time. An overview of the reputation system is shown in Fig. 2.

Figure 2. The overview of the reputation system.


The major activities of the system participants are illustrated in Fig. 2, where the minus sign between client-reported QoS and proxy-reported QoS stands for the absolute deviation between them, the other minus signs stand for subtracting the claimed QoS from the proxy-reported or client-reported QoS, and an arrow leaving one box and entering two different boxes means the same value feeds both inputs at the same time.

IV. COMPUTATION OF REPUTATION

A. Client Reputation

In our reputation system, the reputation of a client is defined as the probability that the client submits valid QoS reports. Since the QoS of any specific SBS generally covers many aspects, the client reputation, like the service and service provider reputations, is accordingly a vector whose elements correspond to the different QoS aspects.

We use proxies as references to learn clients' credibility; if an invocation is unmonitored, the client reputation update is skipped. After a monitored invocation, the client and the proxy both submit a QoS report on the service's performance. Let q_i^C be the QoS report from client C about C's ith invocation and q_i^P be the QoS report from the proxy about the same invocation; the corresponding reputation evaluation e_i^C of C about its ith invocation is

e_i^C = I(|q_i^C - q_i^P| / q_i^P), (1)

where

I(x) = 1 if x <= θ, and I(x) = 0 otherwise, (2)

and θ is an adjustable parameter standing for the credibility threshold. Considering possible error factors such as network latency, there is a normal difference between the two reports; for any specific SBS and its QoS aspects, experiments need to be conducted to determine these values statistically.

A reputation evaluation measures the deviation of a client's QoS report from the proxy's after a service invocation, and is discretized to 0 and 1, where 1 stands for a valid client report and 0 otherwise.

Define R_t^C as the client reputation of C after C's tth invocation. If the tth invocation is among C's early invocations, we directly use the value of e_t^C as R_t^C. For invocations with enough history information, we calculate R_t^C as the weighted aggregation of a list containing the evaluations e^C of the past several monitored invocations, with the previous reputation in the earliest slot. Suppose the last invocation is the tth and the list length is k; then the list is e_t^C, e_{t-1}^C, ..., e_{t-k+2}^C and R_{t-k+1}^C. If any of these invocations was not monitored, we look further back and shift to older invocations. Let w_j be the weight assigned to the jth element of the list; the reputation of client C at the end of the tth invocation is

R_t^C = Σ_{j=1}^{k-1} w_j · e_{t-j+1}^C + w_k · R_{t-k+1}^C. (3)

Then, for unmonitored invocations, only those aspects of the client's QoS report whose corresponding client reputation is larger than 0.5 are adopted by the administrator for comparison with the QoS aspects claimed by the provider.
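The client-reputation update of this subsection can be sketched in Python. The exact functional forms below are assumptions consistent with the prose, not the paper's formulas: the per-invocation evaluation discretizes the relative deviation of the client's report from the proxy's against the credibility threshold, and the running reputation is a weighted aggregation over recent monitored evaluations (for simplicity, the carry-over of the previous reputation in the earliest slot is omitted).

```python
def evaluate_report(client_qos: float, proxy_qos: float, theta: float = 0.1) -> int:
    """Per-invocation evaluation: 1 (valid report) if the client's value
    deviates from the proxy's by at most the credibility threshold theta
    in relative terms, else 0. The relative-deviation form is an assumption."""
    return 1 if abs(client_qos - proxy_qos) / abs(proxy_qos) <= theta else 0

def aggregate(evaluations: list, weights: list) -> float:
    """Weighted aggregation of recent evaluations, newest first;
    simplified from (3) by dropping the carried-over old reputation."""
    return sum(w * e for w, e in zip(weights, evaluations))

# geometric weights from Section V.A: first term 0.42, ratio 0.6, length 6
weights = [0.42 * 0.6 ** j for j in range(6)]

# six monitored invocations, newest first: (client report, proxy report);
# the numeric values are illustrative
reports = [(30.0, 30.1), (29.0, 34.0), (33.0, 33.0),
           (31.0, 31.2), (35.0, 34.9), (28.0, 28.0)]
history = [evaluate_report(c, p) for c, p in reports]
client_rep = aggregate(history, weights)
```

The one exaggerated report (29 claimed against 34 observed) is flagged invalid, so the aggregated reputation drops below 1 but stays well above the 0.5 acceptance level.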

B. Service and Service Provider Reputation

The service reputation reflects the anticipated deviation of a service's actual QoS from its claimed QoS. It is normalized within (-1, 1), where 0 means complete accuracy and positive or negative values indicate deviation towards underrating or overrating, respectively. The service provider reputation reflects the overall reputation of the services provided by the provider.

1) Updating Service Reputation and Predicted QoS

The inputs for service reputation updating are the proxy's QoS report or, for an unmonitored invocation, the accepted aspects of the client's QoS report. Let e_i^S denote the single reputation evaluation for the ith invocation of service S, and let q_i^r and q_i^c be, respectively, the QoS reported by the proxy or the client and the QoS claimed by the provider of S for S's ith invocation; then

e_i^S = g(d_i^S), where d_i^S = (q_i^r - q_i^c) / q_i^c, (4)

and

g(x) = λx / (1 + λ|x|), (5)

where λ is an adjustable parameter for different QoS aspects [18].

Note that for different QoS aspects, the way to compute e_i^S differs. Generally, we divide the aspects into two categories, statistical and non-statistical, which leads to differences in computing the reported QoS and the evaluation discussed above.

Statistical QoS aspects include availability, reliability, etc., which require multiple data samples to generate one value. For these aspects, the corresponding QoS elements from multiple invocations of the same service produce one QoS aspect value, which substitutes for the reported QoS in the formulas above, and all updates of reputations and predicted QoS are performed after every batch of invocations. For example, the availability of a service can be deduced from the percentage of responded invocations among all invocations according to the QoS reports.

For the other QoS aspects, which contain an independent measurement in each report (such as time-related qualities), every QoS report generates one reputation evaluation via the evaluation function above.
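The availability example for statistical aspects can be made concrete: one value is derived per batch from many invocations. A minimal sketch, assuming a batch is simply recorded as a list of responded/not-responded flags:

```python
def batch_availability(responded: list) -> float:
    """One statistical QoS value per test batch: the fraction of
    invocations in the batch that received a response, per the QoS reports."""
    return sum(1 for r in responded if r) / len(responded)

# one batch of 100 invocations; True means the service responded
batch = [True] * 97 + [False] * 3
availability = batch_availability(batch)
```

This single per-batch value is what enters the reputation evaluation, which is why, for statistical aspects, reputations and predicted QoS are updated only after each batch rather than after each invocation.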


We define the reputation of service S after t invocations as R_t^S, the weighted aggregation of the latest evaluation e_t^S for S's tth invocation and the evaluations over the past invocations. At the end of the early invocations of S, R_t^S is defined as the single reputation evaluation of the latest invocation. Using the same weight array w_j as in (3), the service reputation of S at the end of S's tth invocation is

R_t^S = Σ_{j=1}^{k-1} w_j · e_{t-j+1}^S + w_k · R_{t-k+1}^S. (6)

Then, we can calculate the predicted QoS of service S from the claimed QoS and the service reputation after the tth invocation:

pQoS_t = q_t^c · (1 + g^{-1}(R_t^S)), (7)

where

g^{-1}(y) = y / (λ(1 - |y|)), (8)

and λ is an adjustable parameter [18].

2) Updating Service Provider Reputation

The service provider reputation is computed as the weighted average reputation of all services published by the provider. Suppose provider P has provided n services S_1, ..., S_n in the SBS; its reputation is

R^P = Σ_{i=1}^{n} v_i · R^{S_i}, (9)

where v_i is the weight of S_i, calculated as the percentage of the invocations of S_i among all invocations of P's services, and R^{S_i} is the latest service reputation of S_i.
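The service-side updates can be sketched end to end. The normalization g and its inverse used below, which map a signed relative deviation into (-1, 1) with a precision parameter lam, are illustrative assumptions chosen to match the stated properties (0 means complete accuracy; the sign distinguishes underrating from overrating; a large lam sharpens small deviations); they are not the paper's exact formulas, and the numeric values are invented for the example.

```python
def normalize(dev: float, lam: float = 100.0) -> float:
    """Map a signed relative deviation into (-1, 1); lam controls how
    strongly small deviations are resolved (assumed form, cf. (4)-(5))."""
    return lam * dev / (1.0 + lam * abs(dev))

def denormalize(rep: float, lam: float = 100.0) -> float:
    """Inverse of normalize, recovering a relative deviation (cf. (8))."""
    return rep / (lam * (1.0 - abs(rep)))

def predicted_qos(claimed: float, service_rep: float, lam: float = 100.0) -> float:
    """Predicted QoS from the claimed QoS and service reputation (cf. (7))."""
    return claimed * (1.0 + denormalize(service_rep, lam))

def provider_reputation(service_reps: list, invocations: list) -> float:
    """Provider reputation as the invocation-weighted average of the
    latest reputations of the provider's services (cf. (9))."""
    total = sum(invocations)
    return sum(r * n / total for r, n in zip(service_reps, invocations))

# a service claims 36 defects found, but the proxy observes only 30
dev = (30.0 - 36.0) / 36.0       # negative: the claim is exaggerated
rep = normalize(dev)             # negative reputation indicates overrating
pqos = predicted_qos(36.0, rep)  # pulled back toward the observed value
prov = provider_reputation([rep, 0.0], [80, 20])
```

Because (8) inverts (5), the predicted QoS recovers the observed value exactly for a single evaluation; with the weighted history of (6), it instead tracks a smoothed estimate of the real QoS.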

V. EXPERIMENTS

To evaluate the effectiveness and efficiency of the reputation system, we developed an experimental service-based system, equipped with the reputation system discussed above, that conducts software testing using different test strategies. The workflow, composed as in Fig. 3, consists of 11 services, and we select one service named TestStrategyService to evaluate the reputation system. Acting as the service provider, we develop, publish and maintain several services classified as TestStrategyService, which share the same function of selecting test cases but are based on different test strategies. Each service is published with a vector of claimed QoS values, named test steps and defect number, which reflect the service's efficiency in detecting deficiencies. For both QoS aspects, the larger the observed value, the better the QoS. At the same time, we take the role of several clients to search, select and use the service in the workflow and submit reports. Because the target QoS is statistical, the clients submit QoS reports and the system updates the reputations as well as the predicted QoS after each run of a test batch, which contains a large number of invocations of the selected service.

Figure 3. The workflow of the experimental service-based system.

A. Parameter Settings

We set the credibility threshold in (2) to 0.1 to keep strict supervision on clients, and the adjustable parameter in (5) and (8) to 100 to gain a preferable calculation precision. The weights in (3) and (6) are chosen as a geometric sequence whose first term, common ratio and length are 0.42, 0.6 and 6, respectively, to assign more weight to recent data. 50% of the test batches are randomly monitored by proxies for their real QoS. 80% of the clients are set to be honest and the others dishonest. All client and service reputations are initialized to 0.
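With the first term, ratio and length above, the weight sequence sums to just over 1 (about 1.001), so the weighted aggregations in (3) and (6) behave almost like convex combinations while still favoring recent data. A quick check:

```python
# weights of Section V.A: geometric, first term 0.42, ratio 0.6, length 6
weights = [0.42 * 0.6 ** j for j in range(6)]
total = sum(weights)

# closed form of the geometric sum for comparison
closed = 0.42 * (1 - 0.6 ** 6) / (1 - 0.6)
```

The strictly decreasing weights mean the newest evaluation dominates, which is what later makes reputation-lag exploitation ineffective.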

B. Data Analysis & General Discussions

1) Situation 1. A dishonest provider with a dishonest service is designed, all of whose claimed QoS aspects are better than the real ones.

The claimed QoS (cQoS), proxy-reported QoS (prQoS), client-reported QoS (rQoS) and predicted QoS (pQoS) of defect number are depicted in Fig. 4. Dishonest rQoS is fully detected, and pQoS follows prQoS without any latency. The service reputation, which is always less than 0 in Fig. 5, reflects the dishonesty of the service well. The absolute values of the deviations are depicted in Fig. 6, from which we can see that pQoS's offset is much smaller than that of cQoS in this situation.

The QoS and service reputation of the other QoS aspect, test steps, and their analysis are similar to those of defect number and are not repeated.

Figure 4. The cQoS, prQoS, rQoS and pQoS of defect number.

Figure 5. The reputation of the selected service for the QoS aspect of defect number.

Figure 6. The absolute values of the deviation of pQoS and cQoS from prQoS.

2) Situation 2. All service providers are designed to be honest at first. After the 10th batch invocation, the QoS aspects of one service become worse than the original.

The cQoS, prQoS, rQoS and pQoS of defect number are depicted in Fig. 7. As the QoS turns worse, the pQoS turns lower after only a little latency. A shortage of client behavior history results in the admission of dishonest rQoS and two cases of incorrect pQoS, at the 40th and 45th batches. Fig. 8 reflects the deterioration of the service reputation. The absolute values of the offsets are depicted in Fig. 9, which shows that pQoS's offset is much smaller than cQoS's.

Figure 7. The cQoS, prQoS, rQoS and pQoS of defect number for 60 test batches.

Figure 8. The reputation of the selected service for the QoS aspect of defect number.

Figure 9. Absolute values of the deviation of pQoS and cQoS from prQoS.

For the same reason, we do not repeat the case of the QoS of test steps.

3) Situation 3. All service providers are designed to be honest.

Fig. 10 shows the cQoS, prQoS, rQoS and pQoS of the first QoS aspect, test steps. After a period of client reputation accumulation, the system deters dishonest rQoS well and is thus able to make correct pQoS. The service reputation, which is never far from 0, is depicted in Fig. 11. The absolute values of the deviations are depicted in Fig. 12. Even in this situation, where the real QoS nearly equals the claimed QoS, pQoS's deviation is still smaller than cQoS's.

The QoS and service reputation of defect number and their analysis are the same as those of test steps.

Figure 10. The cQoS, prQoS, rQoS and pQoS of test steps

Figure 11. The reputation of the selected service for the first QoS aspect.

Figure 12. Absolute values of the deviation of pQoS and cQoS from prQoS.


For other situations where attacks occur, our reputation system is sufficiently robust against most attack types [19]. By introducing proxies as benchmarks and concealing whether the current invocation is monitored, unfair ratings and discrimination are avoided, respectively. For the proliferation case, where multiple representations of the same service pretend to be different services to increase the probability of being chosen, the system can keep the provider's profit at a reasonable level by initializing the new provider reputation to 0. Because the reputation system gives large weight to recent history when updating reputations, there is almost no chance to exploit the reputation lag. Value imbalance exploitation, which means providing a large number of high-quality low-value services and a small number of deceptive high-value services to make high profits, can be addressed by introducing a function of service value as a weight in the provider reputation updating algorithm. However, our system cannot handle the re-entry attack, in which an agent with a low score leaves a community and re-enters under a different identity; a stricter supervision mechanism for service registry and publishing should be studied in the future.

Regarding overhead, we use proxies to collect feedback only for a portion of invocations, so the system consumes far fewer monitoring resources than usual, and the extra network delay can be ignored because the proxies simply pass communications between clients and services while recording. To reduce computational complexity, we should study in the future how to make the aggregation of history information, which accounts for most of the calculation overhead, more efficient, in order to apply this system to large-scale practical SBS.

VI. CONCLUSIONS AND FUTURE WORKS

From the experiments above, we draw the following conclusions:

The reputation system gives a good evaluation of service and provider reputation despite dishonest clients and limited monitoring.

The reputation system gives a good prediction of QoS, much closer to the actual QoS than the claimed QoS, despite exaggerations by providers and clients.

The reputation system provides good supervision of clients without many monitors, with both honest and dishonest service providers.

When computing reputation for different QoS aspects, methods for choosing the different parameters should be developed so that the reputation data calculated from (5) and (8) are more precise and meaningful for each particular QoS aspect. Also note that larger numbers do not always mean better qualities for all QoS aspects; the calculations should be designed so that a positive reputation always indicates a good reputation. Besides, the qualities of services are determined not only by the services' own attributes but also by the applied situations and contexts, and we should study how to take this into account in the calculation.

REFERENCES

[1] IEEE Service-Oriented Architecture Standards, http://www.soa-standards.org.

[2] M. Bartoletti, P. Degano, and G. L. Ferrari, “Enforcing Secure Service Composition,” Proc. 18th IEEE Computer Security Foundations Workshop (CSFW), 2005, pp. 211-223.

[3] H. Rajan and M. Hosamani, “Tisa: Towards Trustworthy Services in a Service-Oriented Architecture,” IEEE Trans. on Services Computing, vol. 1(4), 2008, pp. 201-213.

[4] M. Alam, X. Zhang, M. Nauman, and T. Ali, “Behavioral Attestation for Web Services (BA4WS),” Proc. 2008 ACM Workshop on Secure Web Services, 2008, pp. 21-28.

[5] S. Yoshihama, T. Ebringer, M. Nakamura, S. Munetoh, and H. Maruyama, “WS-Attestation: Efficient and Fine-Grained Remote Attestation on Web Services,” Proc. 2005 IEEE Int'l Conf. on Web Services, 2005, pp. 750-757.

[6] F. Emekci, O. D. Sahin, D. Agrawal, and A. E. Abbadi, “A Peer-to-Peer Framework for Web Service Discovery with Ranking,” Proc. of IEEE Int'l Conf. on Web Services, 2004, pp. 192-199.

[7] S. Kalepu, S. Krishnaswamy, and S. W. Loke, “Reputation = F(User Ranking, Compliance, Verity),” Proc. of IEEE Int'l Conf. on Web Services, 2004, pp. 200-207.

[8] L. Zeng, B. Benatallah, M. Dumas, J. Kalagnanam, and Q. Z. Sheng, “Quality Driven Web Service Composition,” Proc. of Int'l World Wide Web Conf., 2003, pp. 411-421.

[9] Z. Malik and A. Bouguettaya, “RATEWeb: Reputation Assessment for Trust Establishment among Web Services,” The VLDB J., vol. 18(4), 2009, pp. 885-911.

[10] D. Houser and J. Wooders, “Reputation in Auctions: Theory, and Evidence from eBay,” Journal of Economics and Management Strategy, vol. 15, June 2006, pp. 353-369.

[11] P. Resnick, R. Zeckhauser, J. Swanson, and K. Lockwood, “The Value of Reputation on eBay: A Controlled Experiment,” Experimental Economics, vol. 9, June 2006, pp. 79-101.

[12] K. Lin, H. Lu, T. Yu, and C. Tai, “A Reputation and Trust Management Broker Framework for Web Applications,” Proc. 2005 IEEE Int'l Conf. on e-Technology, e-Commerce and e-Service (EEE'05), 2005, pp. 262-269.

[13] L. Xiong and L. Liu, “A Reputation-Based Trust Model for Peer-to-Peer Ecommerce Communities,” Proc. of IEEE Conf. on Electronic Commerce, June 2003.

[14] C. Dellarocas, “Immunizing Online Reputation Reporting Systems against Unfair Ratings and Discriminatory Behavior,” Proc. of 2nd ACM Conf. on Electronic Commerce, 2000, pp. 150-157.

[15] A. Whitby, A. Josang, and J. Indulska, “Filtering Out Unfair Ratings in Bayesian Reputation Systems,” The Icfain J. of Management Research, vol. 4(2), 2005, pp. 48-64.

[16] L. H. Vu, M. Hauswirth, and K. Aberer, “QoS-Based Service Selection and Ranking with Trust and Reputation Management,” Proc. of OTM'05, R. Meersman and Z. Tari (Eds.), LNCS 3760, 2005, pp. 466-483.

[17] S. S. Yau and P. Sun, “An Approach to Improving Trust Estimation in Service-Based Systems,” Proc. of 3rd IEEE Int'l Conf. on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC 2011), Oct. 2011, pp. 361-367.

[18] S. S. Yau, J. Huang, and Y. Yin, “Improving the Trustworthiness of Service QoS Information in Service-Based Systems,” Proc. IEEE 7th Conf. on Autonomic and Trusted Computing, Oct. 2010, pp. 208-218.

[19] A. Jøsang and J. Golbeck, “Challenges for Robust Trust and Reputation Systems,” Proc. of the 5th Int'l Workshop on Security and Trust Management (STM 2009), Sep. 2009.
