Computer Networks 190 (2021) 107952
Available online 22 February 2021
1389-1286/© 2021 Elsevier B.V. All rights reserved.
Journal homepage: www.elsevier.com/locate/comnet

An adaptive trust model based on recommendation filtering algorithm for the Internet of Things systems

Guozhu Chen a, Fanping Zeng a,b,∗, Jian Zhang c,d, Tingting Lu a, Jingfei Shen a, Wenjuan Shu a

a School of Computer Science and Technology, University of Science and Technology of China, Hefei, Anhui, China
b Anhui Province Key Lab of Software in Computing and Communication, Hefei, Anhui, China
c State Key Laboratory of Computer Science, Institute of Software Chinese Academy of Sciences, Beijing, China
d University of Chinese Academy of Sciences, Beijing, China

Keywords: Internet of Things; Trust model

ABSTRACT

The Internet of Things (IoT) is growing rapidly and brings great convenience to humans. But it also causes some security issues which may have negative impacts on humans. Trust management is an effective method to solve these problems by establishing trust relationships among interconnected IoT objects. In this paper, we propose an adaptive trust model based on recommendation filtering algorithm for the IoT systems. The utilization of sliding window and time decay function when calculating direct trust can greatly accelerate the convergence rate of trust evaluation. We design a recommendation filtering algorithm to effectively filter out bad recommendations and minimize the impact of malicious objects. An adaptive weight is developed to better combine direct trust and recommendation trust into synthesis trust so as to adapt to the dynamically hostile environment. In the simulation experiments, we compare our adaptive trust model with three related models: TBSM, NRB and NTM.
The experimental results indicate that our trust model converges fast and the mean absolute error is always less than 0.05 when the proportion of malicious nodes is from 10% to 70%. The comparative experiments further verify the effectiveness of our trust model in terms of accuracy, convergence rate and resistance to trust related attacks.

1. Introduction

The concept of Internet of Things (IoT) is to connect a large number of objects in the real physical world to the Internet based on standard communication protocols and unique addressing schemes [1]. These interconnected objects can be service providers offering services and sharing resources and information with each other. For the past few years, IoT has grown rapidly and a series of relevant services and applications including smart home, smart city and smart community [2] emerged. These services and applications bring great convenience to humans, but they also cause some security issues that may do harm to our lives. For example, a misbehaved object can perform various types of malicious attacks to destroy the integrity and availability of data and network resources. Trust management is an effective method to solve the above security issues by establishing trust relationships among objects and then excluding malicious objects. It allows multiple objects to share their opinions about the trust value of their companions [3].

Although trust management can effectively solve some of the security problems, there are still some challenges in building trust management systems. First, there are a large number of heterogeneous objects which have different functions and provide diverse services and applications.

∗ Corresponding author at: School of Computer Science and Technology, University of Science and Technology of China, Hefei, Anhui, China.
E-mail addresses: chengz18@mail.ustc.edu.cn (G. Chen), billzeng@ustc.edu.cn (F. Zeng).
https://doi.org/10.1016/j.comnet.2021.107952
Received 7 September 2020; Received in revised form 27 January 2021; Accepted 17 February 2021

Consequently, an IoT trust model should be universal and capable of running on various types of objects. Second, most objects have limited capacities, so the existing trust models in P2P and social networks are no longer applicable. Third, many of the objects will be malicious for their own benefit and then carry out various malicious attacks in order to reduce the trust value of others or improve their own trustworthiness. As a result, IoT trust models should be resistant to those malicious attacks.

To meet the challenges discussed above, we propose an adaptive trust model to establish trust relationships among objects. Our trust model, based on the recommendation filtering algorithm, can effectively resist malicious attacks carried out by misbehaved objects and evaluate the trust value of target objects accurately. The major contributions of our paper are as follows:

• We propose a system architecture based on trusted third parties (TTPs) which provides a secure and reliable trust computing
environment and hence saves storage and computing resources of IoT objects. Although in some previous work [4–7] and [8] the authors proposed hybrid architectures which are similar to ours, they did not specify what components are included in their proposed architectures, and they did not explain how to apply their trust models to the architectures they proposed. Instead, we clarify the components included in our architecture and the functions of these components. Meanwhile, we explain the process of trust evaluation and the interaction process of these components in the architecture we proposed.
• Considering that the impact of past feedback will decrease over time, we introduce a sliding window to store feedback and use a time decay function to reduce the weight of previous feedback. The differences from [5] and [6] are that we not only use the decay function to reduce the impact of previous feedback, we also propose a sliding window to save the feedback of the most recent period of time. The use of the sliding window can reflect changes in the trust value of IoT objects more quickly, because recent behaviors better reflect the current trust status of IoT objects.
• We design a recommendation filtering algorithm based on k-means to filter out bad recommendations provided by malicious recommenders. Although a similar filtering algorithm was proposed in [5], we also introduce three important factors on the basis of our filtering algorithm. Even if the filtering algorithm cannot completely filter out the bad recommendations, the use of these three important factors can reduce the negative impact of the bad recommendations on the calculation of the recommendation trust as much as possible.
• We introduce an adaptive weight that can adjust automatically according to the dynamic environment to combine direct trust and recommendation trust.
The experimental results indicate that our adaptive trust model enables fast and accurate trust evaluation and resists malicious attacks in the dynamically hostile environment, in contrast to the fixed weights used in [9] and [10].

The remainder of this paper is organized as follows. In Section 2, we introduce the concept of trust and the attack model in IoT. In Section 3, we survey the related work on IoT trust models. In Section 4, we propose the system architecture and the process of trust evaluation. In Section 5, we elaborate on our adaptive trust model, and we give the experimental results and relevant analysis in Section 6. Finally, we summarize the paper and outline future work in Section 7.

2. Background

In this section, we first introduce the concept of trust in IoT, the main participants in the trust model and the types of trust. Then, we list some trust related attacks that can break the trust management system. Finally, we introduce some common outlier detection methods which can be used to detect bad recommendations caused by those trust related attacks and filter them out from all the recommendations received by the trustor.

2.1. Trust in Internet of Things

In human society, trust usually indicates the degree of subjective belief between people. People are more likely to communicate with people they trust. Similarly, IoT objects are more willing to use services provided by trusted objects. Objects can evaluate the trust value of others through trust models before using their services.

There are three main participants in a trust model: trustor, trustee and recommender. A trustor is an object who wants to evaluate the trust value of others. Correspondingly, a trustee is an object who is evaluated by the trustor.
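The sliding-window and time-decay mechanism listed in the contributions above can be sketched as follows. This is a minimal illustration only: the window size, decay rate, rating scale in [0, 1] and the neutral prior of 0.5 are assumptions for the sketch, not the paper's actual parameters or formulas.

```python
from collections import deque
import math

class DirectTrust:
    """Direct trust from recent feedback: a sliding window keeps only the
    newest ratings, and an exponential time-decay function down-weights
    the older ones among those kept."""

    def __init__(self, window_size=20, decay_rate=0.1):
        # Each entry is (timestamp, rating in [0, 1]); deque(maxlen=...)
        # silently drops the oldest feedback once the window is full.
        self.window = deque(maxlen=window_size)
        self.decay_rate = decay_rate

    def add_feedback(self, timestamp, rating):
        self.window.append((timestamp, rating))

    def value(self, now):
        """Decay-weighted average of the ratings still in the window."""
        if not self.window:
            return 0.5  # neutral prior when there is no interaction history
        weights = [math.exp(-self.decay_rate * (now - t)) for t, _ in self.window]
        ratings = [r for _, r in self.window]
        return sum(w * r for w, r in zip(weights, ratings)) / sum(weights)
```

Because the window drops old feedback entirely and the decay discounts what remains, a node that turns malicious sees its direct trust fall after only a few bad interactions, which is the convergence-rate effect the contribution describes.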
If the trustor is satisfied with the service provided by the trustee, it will give the trustee a high trust rating. However, the trustor cannot interact with all trustees directly all the time. In this situation, the trustor needs recommendations from other objects that have interaction histories with trustees. Those objects who give recommendations to the trustor are called recommenders. According to the above descriptions, we know that there are two types of trust relationships including direct trust and recommendation trust between a trustor and a trustee. The type of a given trust relationship depends on the way the trustor communicates with the trustee. If the trustor communicates with the trustee directly, this trust relationship is considered direct trust. Otherwise, we call the trust relationship recommendation trust. In our trust model, the trustor evaluates the trustee’s trust value by synthesis trust that combines direct trust and recommendation trust by adaptive weight. 2.2. Attack model A malicious object is dishonest and can perform malicious attacks such as providing bad service or recommending adverse trust information about trustees to the trustor. We call these attacks trust related attacks. The trust related attacks are summarized as follows: • On–off attacks: A malicious object behaves well for a period of time and badly at other times. For example, a trustee can provide a trustor with good service that does not need many resources and prefers not to serve the trustor when the trustor needs too many resources. • Self promoting attacks: A malicious object can promote its reputation by offering good recommendations about itself so that it can be selected as a service provider and then provides poor service. A service requester can hardly select good service providers under these attacks if the trust model does not ignore bad recommendations about the malicious object itself. 
• Bad mouthing attacks: A malicious recommender can slander the reputation of a well-behaved trustee by providing the trustor with bad recommendations about that trustee. As a result, the trustee that is evaluated by the trustor with a low trust rating cannot be selected as a service provider. • Ballot stuffing attacks: These attacks are similar to bad mouthing attacks. A badly-behaved trustee that cannot offer satisfying service will be highly rated by malicious recommenders that give opposite recommendations to the trustor. When multiple recommenders collaborate with each other to perform these attacks at the same time, they can boost the reputation of a bad trustee quickly. • Selective misbehavior attacks: A malicious recommender provides the trustor with bad recommendations about some trustees and gives correct recommendations about others. In such a case, the trustor can hardly judge if the recommender is malicious because of its intermittent malicious behavior. From the above description of trust related attacks, we know that trust models are under many security threats that can break the functionality of trust management systems. Therefore, trust models should consider multiple trust factors in order to evaluate trustees accurately. They should also take more defensive measures to avoid the negative effect of bad recommendations so as to improve the stability of trust evaluation in the dynamically hostile environment. 2.3. Outlier detection methods In Section 2.2, we have already introduced that malicious recommenders which perform some trust related attacks such as bad mouthing attacks and ballot stuffing attacks will provide bad recommendations to the trustor. If the trustor uses these bad recommendations, the accuracy of the recommendation trust evaluation will be
G.Chen et al. Computer Networks 190 (2021)107952 reduced.In order to effectively avoid the negative impact of these trust mouthing attacks,ballot stuffing attacks,selective misbehavior attacks related attacks,these bad recommendations can be regarded as outliers and on-off attacks.We also explain these attacks in detail in Section 2.2 and detected by outlier detection methods.Therefore,the trustor can and our trust model can resist these trust related attacks effectively.In use a recommendation filtering algorithm based on outlier detection the next paragraph,we introduce some specific trust models and their methods to eliminate these bad recommendations when evaluating the advantages and limitations. recommendation trust of trustees.In this subsection,we introduce some Chen et al.[18]clarified the concept of trust and reputation in common outlier detection methods and then we will compare and IoT and proposed an IoT trust management model based on fuzzy analyze these methods in Section 5.2.1 so as to explain why we choose theory.But in their model,a trustor cannot evaluate trustees without k-means to filter out bad recommendations. direct interactions.To solve this problem,our trust model adopts the recommendation trust evaluation to help the trustor calculate the trust Grubbs'test:Grubbs'test which was proposed by Grubbs et al. 
[11]is a statistically based outlier detection method.It is used to value of trustees indirectly.Nitti et al.[19]proposed two types of trust models:subjective model and objective model.In the subjective detect outliers in one-dimensional data under the assumption that model,each trustor calculates and stores the trust value of trustees the data is generated by a Gaussian distribution.It calculates the itself.In the objective model,a distributed hash table is designed for z score of each data instance and compares the z score with the storing the information of each node.But these two trust models are threshold.The z score is calculated by dividing the absolute value susceptible to malicious nodes in the network.Considering that the of the difference between the data instance and the average value trust evaluation is sensitive to context,Saied et al.[20]designed a of the data by the standard deviation of the data.A data instance context-aware and multi-service approach to trust management.The whose z score greater than the threshold will be regarded as an model selects a certain number of historical trust values to calculate outlier. the current trust value.But it is difficult to quickly evaluate the Box plot:Box plot [12]is a simple statistical technique to detect trustworthiness when there is not enough trust related information.To outliers in one-dimensional and multi-dimensional data.It first solve this problem,Xia et al.[21]designed a kernel-based nonlinear calculates the Inter Quartile Range(/OR)which is the difference multivariate gray prediction model to predict the direct trust which between the first quartile(O)and the third quartile(O3).Then, needs a small amount of historical information.Experimental results data instances greater than 3+1.5 /OR or less than 01-1.5 indicate the accuracy and convergence rate of the trust model.But,the IOR will be regarded as outliers. 
proportion of malicious nodes is only 30%in their experiments.Our Isolation forest:Isolation forest was brought by Liu et al.[13] trust model is still accurate when the proportion of malicious nodes is and can be viewed as the unsupervised counterpart of decision as high as 70%. trees.An isolation tree is generated with a given sample set by Some work brings social attributes to the IoT.A comprehensive recursively choosing one random attribute and one random split model was proposed in [22]and used the social relations of users on value of the data on every tree node until the height limit is the real social platform to establish the social relationship among nodes reached or the terminal leaf contains one distinct data instance. so as to make the experimental results more persuasive.Chen et al.[9] The principle is that outliers have a higher chance of being divided trust into three types based on social attributes:honesty,coop- isolated on an earlier stage than normal data instances.Hence, eration and community-interest.The trust model separately calculates outliers are expected to have a shorter height in the isolation the three types of trust and combines them according to the actual trees. scenario.However,it needs a large number of experiments to determine Local outlier factor(LOF):LOF [14]is a well-known approach the best weight.When the trustor and the trustee do not interact with that first introduced the concept of local outliers.The LOF score each other directly,recommendations are important to trust evaluation. 
for a data instance is based on the average ratio of the instance's Xia et al.[23]proposed a trust model that divides recommendations neighbors'density to the instance's density.For a normal instance into direct recommendations and indirect recommendations and uses lying in a dense region,its local density will be similar to that of direct trust and similarity value to calculate the weight of the two its neighbors,while for an outlier,its local density will be lower types of recommendations.But their work lacked security analysis of than that of its neighbors.Hence,LOF scores of normal instances their model.To avoid the impact of bad recommendations,a trust are close to 1 while outliers'LOF scores are much greater than 1. model with clustering technique was proposed in [5]to dynamically DBSCAN:DBSCAN [15]is a density-based clustering algorithm filter out attacks related to bad recommendations.Similarly,Chen and can be used as an outlier detection method.It has two user- et al.[6]developed a trust management system that adopts distributed specified parameters that determine the density of the data and collaborative filtering to select feedback and uses social contacts as it autonomously determines the number of clusters.Users can filters.However,they did not illustrate how to establish social contacts determine which clusters of data instances are outliers according between nodes.Same as above related work,our model adopts a to the rules set in advance by themselves. recommendation filtering algorithm to filter out bad recommendations k-Means:k-Means [16]is another clustering algorithm and can provided by malicious recommenders.Besides,our model considers also be used for outlier detection.k is the number of clusters and three important factors:direct trust,similarity value and confidence needs to be specified by users in advance.Similar to DBSCAN, level to further reduce the impact of bad recommendations. 
users can determine which clusters of data instances are outliers Machine learning based trust models have been proposed in re- according to their own rules. cent years.A trust model based on SVM and k-means was presented in [24]to classify the extracted trust features and combine them to 3.Related work produce a final trust value,whereas it is only valid in some situations. Caminha et al.[25]proposed a smart trust management method that In this section,we survey recently proposed trust models for en- can detect on-off attacks.However,this method cannot resist collusion hancing the security of IoT systems.Guo et al.[17]published a survey attacks such as bad mouthing attacks.A trust evaluation method based and presented a classification of trust models for IoT and this classi. on usage scenarios was presented in [26].The authors believed that fication contains eight classes based on five trust design dimensions: the trustworthiness of the service provided by the target node varies trust composition,trust propagation,trust aggregation,trust update according to the scenario in which the service is used and they used and trust formation.The trust model we propose also involves these neural network training to obtain the trustworthiness of the service. five dimensions.Furthermore,they presented trust related attacks that Alshehri et al.[7]proposed a clustering-driven intelligent method can perturb the trust computation models:self promoting attacks,bad that can filter out dishonest recommenders.In addition,Boudagdigue
reduced. In order to effectively avoid the negative impact of these trust related attacks, these bad recommendations can be regarded as outliers and detected by outlier detection methods. Therefore, the trustor can use a recommendation filtering algorithm based on outlier detection methods to eliminate these bad recommendations when evaluating the recommendation trust of trustees. In this subsection, we introduce some common outlier detection methods; we then compare and analyze these methods in Section 5.2.1 to explain why we choose k-means to filter out bad recommendations.

• Grubbs' test: Grubbs' test, proposed in [11], is a statistics-based outlier detection method. It detects outliers in one-dimensional data under the assumption that the data is generated by a Gaussian distribution. It calculates the z score of each data instance and compares the z score with a threshold. The z score is the absolute value of the difference between the data instance and the mean of the data, divided by the standard deviation of the data. A data instance whose z score is greater than the threshold is regarded as an outlier.
• Box plot: The box plot [12] is a simple statistical technique to detect outliers in one-dimensional and multi-dimensional data. It first calculates the interquartile range (IQR), which is the difference between the first quartile (Q1) and the third quartile (Q3). Then, data instances greater than Q3 + 1.5 * IQR or less than Q1 − 1.5 * IQR are regarded as outliers.
• Isolation forest: The isolation forest was introduced by Liu et al. [13] and can be viewed as the unsupervised counterpart of decision trees.
An isolation tree is generated from a given sample set by recursively choosing one random attribute and one random split value of the data at every tree node until the height limit is reached or the terminal leaf contains one distinct data instance. The principle is that outliers have a higher chance of being isolated at an earlier stage than normal data instances. Hence, outliers are expected to have a shorter height in the isolation trees.
• Local outlier factor (LOF): LOF [14] is a well-known approach that first introduced the concept of local outliers. The LOF score for a data instance is based on the average ratio of the instance's neighbors' density to the instance's density. For a normal instance lying in a dense region, its local density will be similar to that of its neighbors, while for an outlier, its local density will be lower than that of its neighbors. Hence, LOF scores of normal instances are close to 1 while outliers' LOF scores are much greater than 1.
• DBSCAN: DBSCAN [15] is a density-based clustering algorithm and can be used as an outlier detection method. It has two user-specified parameters that determine the density of the data, and it autonomously determines the number of clusters. Users can determine which clusters of data instances are outliers according to rules they set in advance.
• k-Means: k-Means [16] is another clustering algorithm and can also be used for outlier detection. k is the number of clusters and needs to be specified by users in advance. Similar to DBSCAN, users can determine which clusters of data instances are outliers according to their own rules.

3. Related work

In this section, we survey recently proposed trust models for enhancing the security of IoT systems. Guo et al.
[17] published a survey presenting a classification of trust models for IoT into eight classes based on five trust design dimensions: trust composition, trust propagation, trust aggregation, trust update and trust formation. The trust model we propose also involves these five dimensions. Furthermore, they presented the trust related attacks that can perturb trust computation models: self-promoting attacks, bad mouthing attacks, ballot stuffing attacks, selective misbehavior attacks and on–off attacks. We also explain these attacks in detail in Section 2.2, and our trust model can resist these trust related attacks effectively. In the next paragraphs, we introduce some specific trust models and their advantages and limitations. Chen et al. [18] clarified the concepts of trust and reputation in IoT and proposed an IoT trust management model based on fuzzy theory. But in their model, a trustor cannot evaluate trustees without direct interactions. To solve this problem, our trust model adopts recommendation trust evaluation to help the trustor calculate the trust value of trustees indirectly. Nitti et al. [19] proposed two types of trust models: a subjective model and an objective model. In the subjective model, each trustor calculates and stores the trust value of trustees itself. In the objective model, a distributed hash table is designed for storing the information of each node. But these two trust models are susceptible to malicious nodes in the network. Considering that trust evaluation is sensitive to context, Saied et al. [20] designed a context-aware and multi-service approach to trust management. The model selects a certain number of historical trust values to calculate the current trust value. But it is difficult to quickly evaluate trustworthiness when there is not enough trust related information. To solve this problem, Xia et al.
[21] designed a kernel-based nonlinear multivariate gray prediction model to predict the direct trust, which needs only a small amount of historical information. Experimental results confirm the accuracy and convergence rate of the trust model, but the proportion of malicious nodes is only 30% in their experiments. Our trust model is still accurate when the proportion of malicious nodes is as high as 70%. Some work introduces social attributes into the IoT. A comprehensive model was proposed in [22] that used the social relations of users on a real social platform to establish the social relationships among nodes, so as to make the experimental results more persuasive. Chen et al. [9] divided trust into three types based on social attributes: honesty, cooperation and community-interest. Their trust model separately calculates the three types of trust and combines them according to the actual scenario. However, it needs a large number of experiments to determine the best weight. When the trustor and the trustee do not interact with each other directly, recommendations are important to trust evaluation. Xia et al. [23] proposed a trust model that divides recommendations into direct recommendations and indirect recommendations and uses direct trust and similarity value to calculate the weight of the two types of recommendations. However, their work lacked a security analysis of the model. To avoid the impact of bad recommendations, a trust model with a clustering technique was proposed in [5] to dynamically filter out attacks related to bad recommendations. Similarly, Chen et al. [6] developed a trust management system that adopts distributed collaborative filtering to select feedback and uses social contacts as filters. However, they did not illustrate how to establish social contacts between nodes. Like the related work above, our model adopts a recommendation filtering algorithm to filter out bad recommendations provided by malicious recommenders.
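The shared idea behind such clustering-based filtering, clustering (direct trust in the recommender, recommendation) pairs and keeping only the more trustworthy cluster, can be sketched as a generic two-cluster k-means. This is a simplified illustration with hypothetical names and data, not the exact algorithm of any model cited above:

```python
import random

# Generic 2-cluster k-means over (direct_trust, recommendation) pairs;
# the cluster whose centroid has the higher direct trust is kept.
# A simplified sketch, not any cited model's exact algorithm.
def filter_recommenders(points, iters=20):
    c1, c2 = random.sample(points, 2)  # random initial centroids
    for _ in range(iters):
        g1 = [p for p in points
              if (p[0] - c1[0]) ** 2 + (p[1] - c1[1]) ** 2
              <= (p[0] - c2[0]) ** 2 + (p[1] - c2[1]) ** 2]
        g2 = [p for p in points if p not in g1]
        if g1:
            c1 = (sum(p[0] for p in g1) / len(g1), sum(p[1] for p in g1) / len(g1))
        if g2:
            c2 = (sum(p[0] for p in g2) / len(g2), sum(p[1] for p in g2) / len(g2))
    keep = g1 if c1[0] >= c2[0] else g2  # trust the high-direct-trust cluster
    return [i for i, p in enumerate(points) if p in keep]

random.seed(0)
honest = [(0.9, 0.85), (0.85, 0.9), (0.8, 0.88)]  # high trust, consistent reports
liars = [(0.3, 0.10), (0.25, 0.15)]               # low trust, bad-mouthing reports
print(filter_recommenders(honest + liars))        # [0, 1, 2] — the honest cluster
```

Because bad mouthing drags both coordinates of a malicious recommender's vector away from the honest group, the two clusters separate cleanly even when the malicious share is large.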
Besides, our model considers three important factors: direct trust, similarity value and confidence level to further reduce the impact of bad recommendations. Machine learning based trust models have been proposed in recent years. A trust model based on SVM and 𝑘-means was presented in [24] to classify the extracted trust features and combine them to produce a final trust value, whereas it is only valid in some situations. Caminha et al. [25] proposed a smart trust management method that can detect on–off attacks. However, this method cannot resist collusion attacks such as bad mouthing attacks. A trust evaluation method based on usage scenarios was presented in [26]. The authors believed that the trustworthiness of the service provided by the target node varies according to the scenario in which the service is used and they used neural network training to obtain the trustworthiness of the service. Alshehri et al. [7] proposed a clustering-driven intelligent method that can filter out dishonest recommenders. In addition, Boudagdigue
et al. [27] proposed a distributed advanced analytical trust model based on a Markov chain which can effectively resist bad mouthing attacks and ballot stuffing attacks. But they do not explain how to select suitable nodes as recommenders. Wang et al. [4] proposed a novel trust mechanism based on a multilayer structure that solves energy consumption problems. Trust models based on machine learning may require large amounts of data to ensure the performance of trust evaluation. In contrast, our model uses an adaptive weight to combine direct trust and recommendation trust according to the current environment and only requires some necessary information to rapidly evaluate trustees. In addition, the introduction of TTPs can reduce the energy consumption of IoT objects and extend their lifetime.

4. System overview

In this section, we first present the architecture of our trust model and specify the role of each component in the architecture. Then, we give the process of trust evaluation in our trust model so as to explain how the components work together to establish trust relationships among objects in a dynamically hostile IoT environment.

4.1. The proposed system architecture

Fig. 1. Architecture of the trust model.
Fig. 2. Process of trust evaluation.

Fig. 1 illustrates the system architecture of our trust model. There are two main entities in it: nodes and trusted third parties (TTPs). The trustor, trustee and recommender are all nodes. Meanwhile, a node can play different roles according to different requirements. Nodes are usually IoT objects with limited capabilities and resources, so they can hardly perform complex computing all the time. To solve such problems, TTPs that provide a safe and reliable trust computing environment are introduced into our trust model. We divide the nodes into multiple groups and each group has a TTP responsible for assisting the node in trust evaluation.
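The grouping step, where each node is served by its closest TTP, can be sketched as follows; the coordinates, names and distance metric are hypothetical, purely for illustration:

```python
# Hypothetical sketch: assign each node to its nearest TTP by squared
# Euclidean distance; positions and names are made up for illustration.
def assign_to_closest_ttp(nodes, ttps):
    """nodes, ttps: dicts mapping name -> (x, y). Returns node -> TTP name."""
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return {n: min(ttps, key=lambda t: d2(pos, ttps[t])) for n, pos in nodes.items()}

nodes = {"n1": (0, 0), "n2": (9, 9), "n3": (1, 2)}
ttps = {"ttp_a": (0, 1), "ttp_b": (10, 10)}
print(assign_to_closest_ttp(nodes, ttps))  # {'n1': 'ttp_a', 'n2': 'ttp_b', 'n3': 'ttp_a'}
```

In a deployment the "distance" could equally be hop count or link latency; the point is only that each node reports to one nearby TTP.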
Each node sends feedback about the services it has received from service providers to its closest TTP in the process of trust evaluation. Hence, our system architecture is a hybrid architecture in which TTPs play supporting roles. Nodes in the trustor role rarely need to perform trust evaluation themselves: with the help of TTPs to evaluate the trust value of trustees, nodes can save as much energy as possible and thus extend their lifetime.

There are three components in a node: the feedback sender, the request sender and the trust value receiver. The details of these three components are as follows.

• Feedback sender: It sends feedback that is provided by nodes to TTPs. If a node is satisfied with the service it has received from a service provider, it will give positive feedback to its closest TTP through the feedback sender.
• Request sender: If a node wants to learn the trust value of others, it will send a request for trust evaluation to its closest TTP through the request sender.
• Trust value receiver: It receives the trust value of trustees that is sent by TTPs.

There are five components in a TTP: the feedback receiver, the feedback repository, the request receiver, the trust evaluation module and the trust value sender. The details of these five components are as follows.

• Feedback receiver: It receives feedback from nodes and then sends the feedback to the feedback repository.
• Feedback repository: It is the place that stores feedback from nodes. The feedback in the feedback repository will be used later to evaluate the trust value of trustees.
• Request receiver: It receives the request for trust evaluation from the trustor and then notifies the trust evaluation module to evaluate the specific trustee's trust value.
• Trust evaluation module: This module computes the direct trust, recommendation trust and synthesis trust of trustees using feedback from the feedback repository and the trust model we propose.
• Trust value sender: It sends the trust value that is evaluated by the trust evaluation module back to the trustor that sent the trust request.

4.2. Process of trust evaluation

In this subsection, we elaborate on how the components mentioned above cooperate with each other in the trust management system in
order to implement trust evaluation. Fig. 2 illustrates the process of trust evaluation and the detailed description is as follows:

(1) Each node periodically sends feedback about the services it has received from service providers to its closest TTP via its feedback sender.
(2) Each feedback receiver of the TTP receives feedback from nodes and uploads the feedback to its feedback repository.
(3) A trustor will use the request sender to send a trust evaluation request to its closest TTP when it wants to obtain the trust value of the target trustee.
(4) When a TTP receives a trust evaluation request from the trustor, it first searches whether there is feedback about the target trustee in its feedback repository. If not, it will request feedback about that trustee from other TTPs. The TTP which stores the required feedback will send it back.
(5) The TTP utilizes the feedback and its trust evaluation module to evaluate the direct trust, recommendation trust and synthesis trust of the target trustee.
(6) After the work of the trust evaluation module, the TTP sends the target trustee's trust value to the trustor through the trust value sender.
(7) Finally, the trustor receives the trust value of the trustee and then decides whether to receive services provided by the trustee.

5. The proposed trust model

In this section, we propose the concrete methods used in the trust model that can evaluate the trust value accurately and steadily in a dynamically hostile environment.

5.1. Direct trust

We adopt a Bayesian inference model [28] based on the beta probability density function to evaluate the direct trust of the trustee. Eq. (1) shows the direct trust of trustor i about trustee j.

DT_ij^(t) = (α_ij^(t) + 1) / (α_ij^(t) + β_ij^(t) + 2)    (1)

In Eq. (1), DT_ij^(t) represents the direct trust of trustor i about trustee j at time t.
It is a real number in the range [0, 1], where 1 indicates complete trust, 0.5 indicates uncertainty and 0 indicates complete distrust. α_ij^(t) denotes the total amount of positive feedback given by trustor i about trustee j from the beginning of trust evaluation to the current time t. Similarly, β_ij^(t) is the total amount of negative feedback. If the services provided by the trustee meet the requirements, the trustor will give positive feedback to the trustee. Otherwise, the trustor will give negative feedback.

We consider that the influence of feedback is blunted over time, because feedback from past interactions cannot accurately reflect the current status of the trustee, so the weight of previous feedback should be reduced. To achieve this, we introduce a time decay function whose value decreases constantly over time, and adopt a sliding window which only stores and updates the feedback from recent interactions. The sliding window has m time slots ordered from its left side to its right side. Each time slot stores the amount of positive and negative feedback during an interaction and the corresponding time when the interaction happened. The rightmost time slot stores the latest feedback, which has the most important influence on the direct trust evaluation. Eq. (2) shows the calculation of positive feedback and negative feedback.

α_ij^(t) = Σ_{i=1}^{m} e^{−λ(t−t_i)} * α_ij^(t_i) + pf
β_ij^(t) = Σ_{i=1}^{m} e^{−λ(t−t_i)} * β_ij^(t_i) + nf    (2)

In Eq. (2), α_ij^(t_i) denotes the amount of positive feedback provided by trustor i about trustee j at time t_i and β_ij^(t_i) denotes the amount of negative feedback. e^{−λ(t−t_i)} is a time decay function and λ is a decay factor that affects the decay rate of the time decay function. m is the size of the sliding window. pf and nf are the amounts of positive and negative feedback at time t, respectively. Another problem we need to solve in the direct trust evaluation is to mitigate the risk of on–off attacks.
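As a concrete sketch, Eqs. (1)–(2) can be computed as follows; the decay factor λ, the window contents and the feedback counts are illustrative values, not parameters taken from the paper:

```python
import math

# Sketch of Eqs. (1)-(2): direct trust from a sliding window of decayed
# feedback. Window slots are (t_i, pos_i, neg_i); lam (the decay factor
# lambda) and all sample values below are hypothetical.
def direct_trust(window, pf, nf, t, lam=0.1):
    # Eq. (2): time-decayed sums over the m slots plus the feedback at time t.
    alpha = sum(math.exp(-lam * (t - t_i)) * pos for t_i, pos, _ in window) + pf
    beta = sum(math.exp(-lam * (t - t_i)) * neg for t_i, _, neg in window) + nf
    # Eq. (1): beta-distribution (Bayesian) estimate of the direct trust.
    return (alpha + 1) / (alpha + beta + 2)

window = [(1, 3, 0), (2, 4, 1), (3, 5, 0)]  # m = 3 recent interactions
print(direct_trust(window, pf=2, nf=0, t=4))  # high: mostly positive feedback
print(direct_trust([], pf=0, nf=0, t=0))      # 0.5: complete uncertainty
```

A recent interaction thus outweighs an older one of the same size, which is what accelerates convergence when a trustee's behavior changes.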
We use a penalty factor to amplify the influence of negative feedback, so the trust value of the trustee will decrease faster if it provides the trustor with bad service. The trustor will give negative feedback about the trustee, and the weight of negative feedback will be greater under the influence of the penalty factor. Eq. (3) is the final formula to evaluate the direct trust.

DT_ij^(t) = (α_ij^(t) + 1) / (α_ij^(t) + β_ij^(t) * PF + 2)    (3)

In Eq. (3), PF is the penalty factor. The calculation of α_ij^(t) and β_ij^(t) can be found in Eq. (2).

5.2. Recommendation trust

When the trustor does not interact with the trustee directly, it lacks essential information to evaluate the trustee's direct trust. In this case, the trustor needs to request recommendations from recommenders who have interacted with the trustee before, and then uses these recommendations to calculate the recommendation trust of the trustee. Under trust related attacks, the trustor may receive some bad recommendations. To avoid the influence of these attacks, we propose a recommendation filtering algorithm based on k-means to filter out malicious recommenders. For the recommendations provided by the recommenders remaining after filtering, some important factors are applied to ensure the accuracy of the recommendation trust.

5.2.1. The choice of k-means

We have already discussed why we need a recommendation filtering algorithm based on outlier detection methods in Section 2.3. Now we analyze the applicability of these outlier detection methods according to the characteristics of bad recommendations and explain why we finally propose a recommendation filtering algorithm based on k-means instead of other outlier detection methods. The bad recommendations about the trustee provided by malicious recommenders are often opposite to the ground truth of the trustee.
For example, if the ground truth of a well-behaved trustee is 1, malicious recommenders are likely to give recommendations less than 0.5 to reduce the recommendation trust of the trustee. These behaviors performed by malicious recommenders are called bad mouthing attacks. Ballot stuffing attacks are just the opposite of these behaviors. When the proportion of malicious recommenders is relatively small, most of the recommendations received by the trustor are close to the ground truth of the trustee. The six outlier detection methods introduced above can all effectively detect bad recommendations in such a case. Then, the trustor can filter out these outliers based on the detection results. However, when the proportion of malicious recommenders increases, the proportion of bad recommendations will also increase. Not all of these outlier detection methods are effective in this situation. The average value of all recommendations is no longer close to the average value of the good recommendations, but lies somewhere between the good recommendations and the bad recommendations. The z scores of all recommendations will then be less than the fixed threshold, and thus Grubbs' test cannot detect bad recommendations as outliers. Similarly, the first quartile will fall among bad recommendations instead of good recommendations, resulting in all recommendations being within the specified range of the box plot. Therefore, the box plot cannot detect bad recommendations either. Both isolation forest and LOF treat data instances in the sparse area as outliers. The difference is that isolation
forest is based on the global distribution of data instances while LOF is based on the local density of data instances. When the proportion of malicious recommenders increases, a bad recommendation will also lie in a dense area, surrounded by other bad recommendations. Hence, it is difficult to judge whether a bad recommendation is an outlier based on its height in the isolation tree. Similarly, the LOF score of a bad recommendation will be close to 1 because its local density is almost the same as that of its neighbors. Obviously, neither isolation forest nor LOF can effectively detect bad recommendations when the proportion of malicious recommenders increases. DBSCAN and k-means are both clustering-based outlier detection methods. The former autonomously determines the number of clusters based on the density of data instances. The latter determines the number of clusters according to the user-specified parameter k and divides data instances into clusters based on their distances to the centroids. What they have in common is that both need to determine which clusters of data instances are outliers according to user-specified rules. In the recommendation trust evaluation, we can take the direct trust of the trustor about the recommenders, together with the recommendations provided by the recommenders, as data instances. The reason is that the average direct trust of the trustor about good recommenders is larger. The recommendations in the cluster whose centroid's first value is the largest can be regarded as good recommendations and the others are deemed bad. The trustor can then filter out bad recommendations from all recommendations it received based on the clustering results. Whether DBSCAN or k-means is used, good recommendations and bad recommendations will be divided into different clusters even if the proportion of malicious recommenders increases.
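The failure mode described above can be reproduced with a small synthetic experiment. The sketch below is illustrative code, not from the paper; the data, threshold and function names are our own assumptions. It applies a z-score test, as Grubbs' test does, to recommendations about a well-behaved trustee: with few bad-mouthing recommenders the bad values are flagged, but once they make up a large share of the pool, every z score drops below the threshold and nothing is detected.

```python
import random
import statistics

random.seed(42)

# Synthetic recommendations about a well-behaved trustee (ground truth ~0.9):
# good recommenders report ~0.9, bad-mouthing recommenders report ~0.1.
def make_recs(n, malicious_ratio):
    n_bad = int(n * malicious_ratio)
    good = [random.gauss(0.9, 0.03) for _ in range(n - n_bad)]
    bad = [random.gauss(0.1, 0.03) for _ in range(n_bad)]
    return good + bad

def z_score_outliers(recs, threshold=2.0):
    # Flag recommendations whose z score exceeds the fixed threshold.
    mean = statistics.mean(recs)
    stdev = statistics.stdev(recs)
    return [r for r in recs if abs(r - mean) / stdev > threshold]

# With few malicious recommenders, the bad values stand out as outliers.
few = make_recs(20, 0.1)
print(len(z_score_outliers(few)))   # the bad recommendations are flagged

# With many malicious recommenders, the mean drifts between the two groups,
# every z score shrinks below the threshold, and nothing is flagged.
many = make_recs(20, 0.4)
print(len(z_score_outliers(many)))
```

A clustering-based method, by contrast, still separates the two groups in the second case, which is what motivates the k-means-based filter below.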
Therefore, both of them are effective for detecting bad recommendations according to the rules we set. The space complexity of DBSCAN is O(m), where m is the number of data instances. The space complexity of k-means is O(m + k), where k is the number of clusters specified by users. In the recommendation trust evaluation, we set k to 2. As a result, the space complexity of k-means is approximately equal to that of DBSCAN. The time complexity of DBSCAN is O(m log m). The time complexity of k-means is O(Ikm), where I is the number of iterations specified by users. Because I and k are much smaller than m, the time complexity of k-means can be regarded as O(m). We can conclude that k-means is more efficient than DBSCAN in terms of time complexity. Based on the above comparative analysis, we conclude from both effectiveness and efficiency that k-means is a better choice for detecting bad recommendations than the other outlier detection methods. Consequently, we propose a recommendation filtering algorithm based on k-means.

5.2.2. Recommendation filtering algorithm based on k-means

We take the direct trust of the trustor about the recommenders and the recommendations provided by the recommenders as vectors, which the recommendation filtering algorithm takes as inputs. Through the filtering algorithm, the vectors are divided into two clusters. The cluster whose centroid is larger is selected and the corresponding recommenders are retained after filtering. Algorithm 1 illustrates the recommendation filtering algorithm in detail. The inputs are a list of recommenders R = {r1, r2, ..., rm} and the maximum number of iterations Iterations_max. The outputs are a list of recommenders R' = {r'1, r'2, ..., r'n} that remain after filtering. For each recommender rk, the algorithm constructs a vector of two values: the direct trust of the trustor about the recommender and the recommendation of the recommender about the trustee.
At the beginning of filtering, the algorithm randomly selects two vectors as the initial centroids of the clusters. The core part of the filtering algorithm is based on the k-means clustering algorithm (Lines 7–26). It separately calculates the Euclidean distance between each vector and the centroid of each of the two clusters. Each vector is then added to the cluster closer to it (Lines 10–18). The centroid of each cluster is recalculated after all vectors have been added to the corresponding clusters, and the process repeats until the centroids no longer change.

Algorithm 1 Recommendation Filtering Algorithm
Input: R = {r1, r2, ..., rm}, Iterations_max
Output: R' = {r'1, r'2, ..., r'n}
 1: for each rk ∈ R do
 2:     construct vector (DT_irk; DT_rkj)
 3: end for
 4: Initialize:
 5:     randomly select two vectors (DT_irx; DT_rxj), (DT_iry; DT_ryj) as centroids
 6:     Iterations = 1, R' = Ø
 7: repeat
 8:     C1 = Ø, C2 = Ø
 9:     Flag = false
10:     for each (DT_irk; DT_rkj) do
11:         dist1 = sqrt((DT_irk − DT_irx)^2 + (DT_rkj − DT_rxj)^2)
12:         dist2 = sqrt((DT_irk − DT_iry)^2 + (DT_rkj − DT_ryj)^2)
13:         if dist1 <= dist2 then
14:             C1 = C1 ∪ {(DT_irk; DT_rkj)}
15:         else
16:             C2 = C2 ∪ {(DT_irk; DT_rkj)}
17:         end if
18:     end for
19:     (DT'_irx; DT'_rxj) = centroid of C1
20:     (DT'_iry; DT'_ryj) = centroid of C2
21:     if the new centroids differ from (DT_irx; DT_rxj), (DT_iry; DT_ryj) then
22:         Flag = true
23:     end if
24:     (DT_irx; DT_rxj) = (DT'_irx; DT'_rxj), (DT_iry; DT_ryj) = (DT'_iry; DT'_ryj)
25:     Iterations = Iterations + 1
26: until Iterations > Iterations_max or Flag == false
27: if DT_irx >= DT_iry then
28:     Cfilter = C1
29: else
30:     Cfilter = C2
31: end if
32: for each (DT_irk; DT_rkj) ∈ Cfilter do
33:     R' = R' ∪ {rk}
34: end for
35: return R'

After the clustering, the vectors are divided into two clusters. One of the clusters includes the vectors corresponding to the good recommenders, and the first value of its centroid is larger because the average direct trust of the good recommenders is larger. We consider this cluster the trustworthy one. Finally, the recommenders corresponding to the vectors in the trustworthy cluster are subsequently used for the recommendation trust evaluation. Through the recommendation filtering algorithm, the impact of trust related attacks such as bad mouthing attacks and ballot stuffing attacks can be minimized by filtering out the malicious recommenders. However, the recommendations provided by the remaining recommenders may not all be used.
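For readers who prefer running code, the following is a minimal Python sketch of Algorithm 1. It is our interpretation, not the authors' implementation: names are illustrative, ties go to the first cluster, and an empty cluster keeps its previous centroid, details the pseudocode leaves open.

```python
import math
import random

def filter_recommenders(vectors, max_iterations=100):
    """2-means over (DT_irk, DT_rkj) vectors; returns the indices of the
    recommenders in the cluster whose centroid has the larger direct trust."""
    # Randomly pick two distinct vectors as the initial centroids.
    c1, c2 = random.sample(vectors, 2)
    for _ in range(max_iterations):
        cluster1, cluster2 = [], []
        # Assign every vector to the cluster with the closer centroid.
        for v in vectors:
            d1, d2 = math.dist(v, c1), math.dist(v, c2)
            (cluster1 if d1 <= d2 else cluster2).append(v)

        def centroid(cluster, fallback):
            # Mean of the cluster; keep the old centroid if the cluster is empty.
            if not cluster:
                return fallback
            return tuple(sum(x) / len(cluster) for x in zip(*cluster))

        new_c1, new_c2 = centroid(cluster1, c1), centroid(cluster2, c2)
        if new_c1 == c1 and new_c2 == c2:   # converged: Flag stays false
            break
        c1, c2 = new_c1, new_c2
    # Keep the cluster whose centroid has the larger direct-trust component.
    trusted = cluster1 if c1[0] >= c2[0] else cluster2
    return [i for i, v in enumerate(vectors) if v in trusted]

# Good recommenders: high direct trust, honest recommendations (~0.9).
# Bad-mouthing recommenders: low direct trust, recommendations ~0.1.
vecs = [(0.85, 0.9), (0.9, 0.88), (0.8, 0.92), (0.2, 0.1), (0.25, 0.15)]
print(filter_recommenders(vecs))   # the three good recommenders survive
```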
We need to consider some important factors that affect the accuracy of the recommendation trust.

5.2.3. Evaluation of recommendation trust

Although the recommendation filtering algorithm can effectively resist some trust related attacks, it may not filter out all the malicious
recommenders. In addition, even well-behaved recommenders may not evaluate the trustee accurately due to insufficient interactions and thus cannot provide precise recommendations. To solve these problems that the filtering algorithm cannot deal with, we apply three important factors in the evaluation of the recommendation trust.

In human society, we usually trust information provided by people whom we believe. Similarly, in the trust model, the trustor tends to use the recommendations provided by recommenders who are highly rated by the trustor. As a result, the direct trust of the trustor about the recommender is needed in the evaluation of the recommendation trust. The second important factor is the similarity of the direct trust evaluation. Generally speaking, the trustor is more willing to receive recommendations from recommenders who hold similar views. Similar views mean that the trustor and the recommenders give similar direct trust evaluations to a trustee who provides the same service. Eq. (4) shows how to calculate the similarity of the direct trust evaluation between the trustor and the recommender.

$$S^{(t)}_{ir_k} = 1 - \frac{\sum_{l \in Set(i,r_k)} \lvert DT_{il} - DT_{r_k l}\rvert}{\lvert Set(i,r_k)\rvert} \tag{4}$$

In Eq. (4), $S^{(t)}_{ir_k}$ denotes the similarity of the direct trust evaluation between trustor $i$ and recommender $r_k$. It falls in the interval $[0, 1]$, where 1 means that the trustor and the recommender give exactly the same evaluation for each trustee. $Set(i, r_k)$ represents the trustees common to $i$ and $r_k$, and $\lvert Set(i, r_k)\rvert$ is the number of common trustees. If a recommender only gives correct recommendations about part of the trustees, the direct trust evaluations of the trustor and the recommender about the same trustee may not be similar. In such a case, the similarity of the direct trust evaluation will be very small.
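Eq. (4) can be sketched as follows. The dictionary-based representation of direct trust values and the handling of an empty common-trustee set are our own assumptions.

```python
def direct_trust_similarity(dt_trustor, dt_recommender):
    """Eq. (4): similarity of direct trust evaluations over the trustees
    that trustor i and recommender r_k have in common.
    dt_trustor / dt_recommender map trustee id -> direct trust in [0, 1]."""
    common = dt_trustor.keys() & dt_recommender.keys()   # Set(i, r_k)
    if not common:
        # No shared trustees: an assumption, the paper leaves this case open.
        return 0.0
    diff = sum(abs(dt_trustor[l] - dt_recommender[l]) for l in common)
    return 1.0 - diff / len(common)

# A recommender that rates shared trustees much like the trustor scores near 1.
dt_i  = {"j1": 0.9, "j2": 0.2, "j3": 0.7}
dt_rk = {"j1": 0.85, "j2": 0.25, "j3": 0.75}
print(direct_trust_similarity(dt_i, dt_rk))   # ≈ 0.95
```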
Therefore, the trust model can resist selective misbehavior attacks by using the similarity factor in calculating the recommendation trust. The last factor is the confidence level of the recommender about the trustee. The confidence level reflects the number of interactions between the recommender and the trustee: the higher the confidence level, the more interactions there have been between them. Consequently, a recommender with a high confidence level is preferred because it can evaluate the direct trust of the trustee accurately through sufficient interactions. Eq. (5) shows the calculation of the confidence level. It is derived from the standard deviation of the beta distribution.

$$C^{(t)}_{r_k j} = 1 - \sqrt{\frac{12(\alpha^{(t)}_{r_k j} + 1)(\beta^{(t)}_{r_k j} + 1)}{(\alpha^{(t)}_{r_k j} + \beta^{(t)}_{r_k j} + 2)^2 (\alpha^{(t)}_{r_k j} + \beta^{(t)}_{r_k j} + 3)}} \tag{5}$$

In Eq. (5), $C^{(t)}_{r_k j}$ denotes the confidence level of recommender $r_k$ about trustee $j$ at time $t$. $\alpha^{(t)}_{r_k j}$ and $\beta^{(t)}_{r_k j}$ are the accumulated positive and negative feedback given by the $k$th recommender $r_k$ about target trustee $j$. Eq. (6) shows how to calculate them; it is similar to Eq. (2).

$$\alpha^{(t)}_{r_k j} = \sum_{i=1}^{m} e^{-\gamma(t - t_i)} \alpha^{(t_i)}_{r_k j}, \qquad \beta^{(t)}_{r_k j} = \sum_{i=1}^{m} e^{-\sigma(t - t_i)} \beta^{(t_i)}_{r_k j} \tag{6}$$

In Eq. (6), $m$ is the size of the sliding window between $r_k$ and $j$. $\gamma$ and $\sigma$ are the decay factors of the time decay function. $\alpha^{(t_i)}_{r_k j}$ and $\beta^{(t_i)}_{r_k j}$ are the positive and negative feedback at time $t_i$, respectively. We combine the three important factors explained above and give the calculation of the recommendation trust in Eq. (7).

$$RT^{(t)}_{ij} = \sum_{k=1}^{n} \frac{DT^{(t)}_{ir_k} S^{(t)}_{ir_k} C^{(t)}_{r_k j}}{\sum_{k'=1}^{n} DT^{(t)}_{ir_{k'}} S^{(t)}_{ir_{k'}} C^{(t)}_{r_{k'} j}} \, DT^{(t)}_{r_k j} \tag{7}$$

In Eq. (7), $RT^{(t)}_{ij}$ is the recommendation trust of trustee $j$ calculated by trustor $i$, and $n$ is the number of recommenders after filtering. $DT^{(t)}_{r_k j}$ is the recommendation provided by recommender $r_k$ about trustee $j$; its correctness depends on the behavior of the recommender.
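Eqs. (5)–(7) can be sketched as below. The function signatures and data layout are illustrative choices, not the paper's implementation.

```python
import math

def confidence_level(alpha, beta):
    """Eq. (5): confidence derived from the beta-distribution standard deviation."""
    num = 12 * (alpha + 1) * (beta + 1)
    den = (alpha + beta + 2) ** 2 * (alpha + beta + 3)
    return 1 - math.sqrt(num / den)

def decayed_feedback(feedback, t_now, decay):
    """Eq. (6): time-decayed accumulation over the sliding window.
    feedback is a list of (t_i, value) pairs inside the window."""
    return sum(math.exp(-decay * (t_now - t_i)) * v for t_i, v in feedback)

def recommendation_trust(recommenders):
    """Eq. (7): weighted average of the recommendations DT_rkj, where each
    weight combines direct trust DT_irk, similarity S_irk and confidence C_rkj.
    recommenders is a list of (DT_irk, S_irk, C_rkj, DT_rkj) tuples."""
    weights = [dt * s * c for dt, s, c, _ in recommenders]
    total = sum(weights)
    return sum(w / total * dt_rkj
               for w, (_, _, _, dt_rkj) in zip(weights, recommenders))

# With no feedback (alpha = beta = 0) the confidence is at its minimum;
# it grows toward 1 as interactions accumulate.
print(confidence_level(0, 0))    # -> 0.0
print(confidence_level(50, 2))   # ≈ 0.89
```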
The utilization of $DT^{(t)}_{ir_k}$, $S^{(t)}_{ir_k}$ and $C^{(t)}_{r_k j}$ can minimize the impact of bad and imprecise recommendations and thereby improve the precision of the recommendation trust evaluation.

5.3. Synthesis trust

Neither direct trust nor recommendation trust can comprehensively reflect the trustee's trustworthiness. Hence, our trust model uses synthesis trust, which is calculated by combining direct trust and recommendation trust. Eq. (8) shows the calculation of the synthesis trust.

$$T^{(t)}_{ij} = \omega DT^{(t)}_{ij} + (1 - \omega) RT^{(t)}_{ij} \tag{8}$$

In Eq. (8), $T^{(t)}_{ij}$ denotes the synthesis trust of trustor $i$ about trustee $j$. Its range is between 0 and 1, where 1 means complete trust and 0 means complete distrust. $DT^{(t)}_{ij}$ and $RT^{(t)}_{ij}$ are the direct trust and recommendation trust calculated by Eqs. (3) and (7). $\omega$ is a weight that balances the importance of direct trust and recommendation trust. It falls in the interval $[0, 1]$; the larger it is, the more important direct trust is. The selection of $\omega$ is pivotal to the trust model. We adopt an adaptive weight that adjusts automatically according to the dynamically hostile environment. The adaptive weight helps resist trust related attacks such as bad mouthing attacks and ballot stuffing attacks, thereby improving the accuracy of trust evaluation. Eq. (9) illustrates the calculation of the weight $\omega$.

$$\omega = \begin{cases} 1 - \theta e^{-\Delta t \cdot IN} & DT_{ir} \geq DT_{threshold}, \\ 1 & DT_{ir} < DT_{threshold}. \end{cases} \tag{9}$$

In Eq. (9), the calculation of $\omega$ is divided into two cases, depending on the comparison of $DT_{ir}$ and $DT_{threshold}$. $DT_{ir}$ denotes the average direct trust of trustor $i$ about all recommenders. $DT_{threshold}$ is the threshold of direct trust and is set to 0.5 by default. If $DT_{ir}$ is less than $DT_{threshold}$, $\omega$ is equal to 1. This means that trustor $i$ will rely only on the direct trust derived from its direct interactions with the target trustee if most of the recommenders are malicious.
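Eqs. (8) and (9) can be sketched as below, reading the extracted formula as ω = 1 − θe^(−Δt·IN). The constants follow the paper's defaults (θ = 0.1, DT_threshold = 0.5), while the function names are our own.

```python
import math

DT_THRESHOLD = 0.5   # default threshold from the paper
THETA = 0.1          # default theta from Table 1

def adaptive_weight(mean_dt_recommenders, interactions, dt_elapsed):
    """Eq. (9): fall back to direct trust only (w = 1) when the average direct
    trust of the recommenders suggests most of them are malicious."""
    if mean_dt_recommenders < DT_THRESHOLD:
        return 1.0
    return 1.0 - THETA * math.exp(-dt_elapsed * interactions)

def synthesis_trust(direct_trust, rec_trust, mean_dt_recommenders,
                    interactions, dt_elapsed):
    """Eq. (8): T = w * DT + (1 - w) * RT."""
    w = adaptive_weight(mean_dt_recommenders, interactions, dt_elapsed)
    return w * direct_trust + (1 - w) * rec_trust

# Mostly-malicious recommender pool: the recommendation trust is ignored.
print(synthesis_trust(0.9, 0.1, mean_dt_recommenders=0.3,
                      interactions=20, dt_elapsed=1.0))   # -> 0.9
```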
This way of calculating the weight can prevent the trustor from mistaking a good trustee for a malicious one when the proportion of malicious recommenders is high. When $DT_{ir}$ is equal to or greater than $DT_{threshold}$, the calculation of $\omega$ depends on the number of interactions $IN$ between trustor $i$ and trustee $j$, and on $\Delta t$, the difference between the current time and the time of the last interaction. A high number of interactions means that the trustor already knows the trustee adequately, so the direct trust of the trustor about the trustee is more accurate. But if the trustor and the trustee have not interacted recently, then even if they interacted many times long ago, the direct trust still cannot reflect the current trustworthiness of the trustee. In such a case, we regulate the importance of the direct trust via $\Delta t$. The adaptive weight can be dynamically adjusted according to the current interaction situation to adapt to the dynamically hostile environment.

6. Experimental results and analysis

The detailed performance evaluation of our work is done in two main parts. In the first part, we compare our proposed system architecture based on TTPs with the centralized architecture and the distributed architecture in terms of energy consumption. In the second part, we first validate the effectiveness of the recommendation trust evaluation and the adaptive weight. Then, we compare our trust model with three related models: TBSM [9], NRB [23] and NTM [4]. These three related models all adopt methods to avoid the negative impact of malicious nodes on trust evaluation. In TBSM [9], social relationships are established between nodes and used to help the trustor avoid bad recommendations provided by malicious
recommenders. In NRB [23], recommendation trust is divided into direct recommendation trust and indirect recommendation trust according to whether the trustor interacts with the trustee directly. In direct recommendation trust, the trustor selects nodes common to it and the trustee as recommenders. The trustor can easily learn the trustworthiness of the recommenders by interacting with them directly and thus avoids incorrect recommendations when calculating the recommendation trust of the trustee. In NTM [4], recommendations that are too high or too low are not used, to avoid bad mouthing attacks and ballot stuffing attacks. To ensure the fairness and effectiveness of the comparative experiments, we first use the best parameter values mentioned in their papers for the unique parameters of each trust model. Then, for the network simulation parameters such as the number of nodes, the moving range of nodes and the mobility model of nodes, we use the same parameter settings explained below. We perform comprehensive experiments based on the ns-3 simulator. The ns-3 simulator is a discrete-event network simulator to which the components of our trust model can be added. Table 1 lists the basic parameter values used to configure the network for the experiments and the default parameter values of our trust model. We consider an IoT environment with 200 nodes and 20 TTPs. The nodes move randomly in an area of 1000 × 800 square meters at a speed of 20 m/s, while the TTPs are uniformly distributed in this area. Each node selects the closest TTP by distance to help it store the feedback about neighbor nodes and evaluate the trust values of trustees in every round of trust evaluation. The mobility model we use in ns-3 is the random waypoint model, which closely resembles real-world movement [29]. We choose the AODV routing protocol, which has been implemented in ns-3 and supports our trust model well.
𝜆, 𝛾 and 𝜎 are the decay factors of the decay functions used when calculating the number of feedback; the size of a decay factor controls the speed of decay. 𝑚 is the size of the sliding window. 𝜃 is the parameter used to calculate the adaptive weight in Eq. (9). To determine the exact values of the constant terms in the above equations, we run multiple sets of experiments in the given network environment. For the three relevant trust models in the comparative work, we adopt the same method to obtain the values of the constant terms in their equations, which ensures the fairness of the comparative experiments.

To verify the attack resistance of our trust model, a proportion of malicious nodes that perform trust related attacks is randomly selected from all nodes. The trust related attacks in our experiments include on–off attacks, self promoting attacks, bad mouthing attacks, ballot stuffing attacks and selective misbehavior attacks, which are described in Section 2. Under on–off attacks, malicious nodes perform attacks only during random periods of time. Under self promoting attacks, malicious recommenders give recommendations about themselves to the trustor. When performing bad mouthing attacks and ballot stuffing attacks, malicious recommenders give false recommendations about the trustee that are opposite to the ground truth. However, malicious recommenders do not attack all trustors; they randomly attack only some of them. These are selective misbehavior attacks. The proportion of malicious nodes ranges from 10% to 70% with a default value of 30%. The trust evaluation interval is 100 s and the total simulation time is 10000 s. In the entire network, 14,833 data packets related to trust messages are sent and received by nodes, and the packet loss rate is 2.7%.
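To illustrate how the sliding window and the time decay interact, the sketch below computes a direct trust value as a time-decayed average of the 𝑚 = 5 most recent feedback records. The exponential decay form, the neutral prior of 0.5 and the function names are assumptions for illustration, since the paper's exact equations are not reproduced in this section; 𝜆 = 0.05 and 𝑚 = 5 follow Table 1.

```python
from collections import deque
import math

def direct_trust(window, now, lam=0.05):
    """Time-decayed average of (round, feedback) pairs in the sliding
    window, where feedback is 1 (positive) or 0 (negative)."""
    if not window:
        return 0.5  # no direct evidence: neutral prior (assumption)
    num = sum(math.exp(-lam * (now - t)) * f for t, f in window)
    den = sum(math.exp(-lam * (now - t)) for t, _ in window)
    return num / den

# keep only the m = 5 most recent feedback records (Table 1)
window = deque(maxlen=5)
for t, f in [(1, 1), (2, 1), (3, 0), (4, 1), (5, 1), (6, 1)]:
    window.append((t, f))
print(round(direct_trust(window, now=6), 3))  # 0.81: the older negative feedback gets slightly less weight
```

Because old feedback both ages out of the window and carries an exponentially smaller weight, a trustee's recent behavior dominates the estimate, which is what accelerates convergence.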
These trust messages include feedback messages about neighbors, trust value request messages, feedback messages from other TTPs and similar messages sent by the nodes and TTPs. The dynamically hostile environment in our experiments has two aspects. First, nodes move in random directions according to the random waypoint mobility model, so the neighbors of a node are constantly changing. Second, the proportion of malicious nodes changes from 10% to 70%. Our experiments focus on the effectiveness of the trust evaluation mechanisms that we adopt and the comparison with other relevant models in terms of convergence rate, stability and attack resistance in the dynamically hostile environment.

Table 1
Simulation parameters.

Parameter           Value
Nodes               200
Area                1000 m × 800 m
TTPs                20
Speed               20 m/s
P_F                 1.5
Radio range         100 m
Malicious ratio     30%
DT_threshold        0.5
Routing protocol    AODV
Mobility            Random waypoint model
𝜆                   0.05
𝛾                   0.7
𝑚                   5
𝜎                   0.7
𝜃                   0.1

Fig. 3. Energy consumption varying the number of nodes.

For simplicity, we assume that an honest node generates positive feedback with a probability of 95% and a malicious node generates negative feedback with a probability of 95%. This premise simplifies the sending and receiving of service requests between the trustor and the trustee. Our trust model depends on the feedback provided by the trustor, so this premise does not affect the function of the trust model and lets us focus on its performance in the dynamically hostile environment. The trust value in our experiments lies between 0 and 1, where 1 means completely trustworthy and 0 means completely untrustworthy; values in between can be understood as the likelihood of being trustworthy. Therefore, even if two trust evaluation results are numerically close, the meanings of the trust values can differ. We use the mean absolute error (MAE) to measure the accuracy of trust evaluation.
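The 95% feedback premise and the MAE metric can be made concrete with a short sketch; the function names are ours and the node behavior is exactly the simplifying assumption stated above.

```python
import random

def generate_feedback(is_malicious, p=0.95):
    """Honest nodes give positive feedback (1) with probability p;
    malicious nodes give negative feedback (0) with probability p."""
    good = random.random() < p
    return int(good if not is_malicious else not good)

def mae(estimates, ground_truth):
    """Mean absolute error between estimated and true trust values."""
    return sum(abs(e - g) for e, g in zip(estimates, ground_truth)) / len(estimates)

random.seed(0)
honest = [generate_feedback(is_malicious=False) for _ in range(1000)]
print(sum(honest) / len(honest))                 # close to 0.95
print(round(mae([0.97, 0.02], [1.0, 0.0]), 3))   # 0.025
```

In the second print, a trust model that estimates 0.97 for a fully honest node (truth 1.0) and 0.02 for a fully malicious one (truth 0.0) has MAE (0.03 + 0.02) / 2 = 0.025.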
The smaller the MAE, the higher the accuracy of the trust evaluation. The comparative results show that our model converges fast and remains stable in the trust evaluation. Besides, when the proportion of malicious nodes reaches 70%, the mean absolute error (MAE) of our trust evaluation is still less than 0.05, while the MAE of the other models becomes larger. This means that our model is more resistant to trust related attacks than the other models.

6.1. Energy consumption of the system architecture

In this subsection, we compare our proposed system architecture based on TTPs with the centralized architecture and the distributed architecture in terms of energy consumption. We apply our proposed trust model to each of the three architectures. The trust model in
our architecture has been introduced in Section 4. The implementation of the trust model in the centralized architecture and the distributed architecture is slightly different from our proposed architecture based on TTPs. In the centralized architecture, there is only one TTP at the center of the network, so each node periodically sends feedback to the same TTP, which is responsible for handling all trust evaluation requests. There is no TTP in the distributed architecture, and each node evaluates the trust values of other nodes by itself. Therefore, each node exchanges feedback with its neighbor nodes for the trust evaluation.

Fig. 4. Effectiveness of k-means.
Fig. 5. Effectiveness of recommendation trust evaluation.
Fig. 6. Effectiveness of adaptive weight.

In the trust management system, the energy consumption of a node is mainly divided into two parts: computing energy consumption and communication energy consumption. Computing energy consumption mainly refers to the energy consumed when nodes generate feedback; this part is the same in the three architectures. In the distributed architecture, however, each node additionally needs to evaluate the trust of others, so its computing energy consumption is greater than in the centralized architecture and our proposed architecture. Communication energy consumption is closely related to the number of trust messages sent and received by nodes in the network. We focus on the communication energy consumption of nodes in the three architectures, which can be obtained through the ns-3 energy module. Fig. 3 shows the overall energy consumption of nodes as the number of nodes varies in the three architectures. We observe that energy consumption increases as the number of nodes increases.
But the difference is that the energy consumption increases slowly in our proposed architecture while it increases rapidly in both the centralized architecture and the distributed architecture. Our proposed architecture thus achieves better energy saving than the other two. The reason is that nodes in our proposed architecture only need to exchange trust messages with their own TTP; beyond that, they have no additional energy consumption related to trust messages. Therefore, the overall energy consumption does not increase rapidly even as the number of nodes increases. In the centralized architecture, some nodes are outside the transmission range of the single TTP and cannot transmit trust messages to it directly, so other nodes must relay their trust messages. This relaying increases the energy consumption of some nodes, resulting in higher overall energy consumption than in our proposed architecture. Besides, as more nodes send trust messages to the same TTP, packet loss becomes more likely, and the resulting retransmissions of trust messages further increase the overall energy consumption. In the distributed architecture, each node needs to exchange trust messages with all neighboring nodes, so the average number of trust messages transmitted by each node is larger. Compared with the first two architectures, this undoubtedly causes each node to consume more energy, leading to higher overall energy consumption. Moreover, increasing the number of nodes raises the network density and hence the average number of neighbors per node, which in turn significantly increases the energy consumption of each node and gives the distributed architecture the maximum overall energy consumption. Clearly, our proposed architecture performs better in terms of energy consumption than the other two architectures.

6.2.
Effectiveness, convergence and attack resistance of the trust model

6.2.1. Effectiveness of k-means

In this subsection, we compare the recommendation filtering algorithm based on k-means with the other outlier detection methods introduced in Section 2.3 to further justify the effectiveness of our recommendation filtering algorithm. Fig. 4 shows the MAE of the recommendation trust evaluation when using recommendation filtering algorithms based on different outlier detection methods. To ensure the fairness of the comparative experiments, we conduct multiple sets of experiments to determine the values of the parameters required by each outlier detection method. The proportion of malicious nodes increases from 10% to 70%. Whenever a malicious node plays the role of a recommender, it acts as a malicious recommender that performs trust related attacks and provides bad recommendations to the trustor. We can see from Fig. 4 that when the proportion of malicious
Table 2
Trust evaluation convergence of the trust model.

Node type            Trust model
                     Our model   TBSM   NRB    NTM
Honest               0.97        0.94   0.93   0.78
Malicious            0.02        0.05   0.06   0.20
Honest to malicious  0.03        0.09   0.15   0.24

Table 3
Trust evaluation accuracy rate of the trust model.

Trust model   Accuracy rate
Our model     97.35%
TBSM          90.73%
NRB           91.45%
NTM           71.23%

nodes is small, all outlier detection methods work and reduce the MAE of the recommendation trust evaluation. Due to the different characteristics of each method, the filtering effects differ, resulting in different MAEs. When the proportion of malicious nodes increases, Grubbs' test, box plot and LOF have almost no effect on the MAE of the recommendation trust evaluation, which means that they cannot effectively detect the bad recommendations. We have already analyzed the reasons why these three methods do not work in Section 5.2.1. We can observe that the MAE of the recommendation filtering algorithm based on isolation forest is even larger than the MAE without any model when the proportion of malicious nodes reaches 50%. The reason is that isolation forest detects outliers based on the global distribution of data instances. When the proportion of malicious nodes increases, the region where the bad recommendations are located becomes denser while the region where the good recommendations are located becomes relatively sparser. This causes isolation forest to mistake good recommendations for outliers and increases the MAE of the recommendation trust evaluation. We can judge from the MAE of the recommendation trust evaluation that the filtering effect of DBSCAN is not as good as that of k-means. DBSCAN marks data instances as core points, border points and noise points in the process of clustering. When the proportion of malicious nodes increases, some good recommendations that are not in the dense region will be marked as noise points.
Such recommendations will not be in the cluster selected by the trustor and will be filtered out. In this case, the MAE of the recommendation trust evaluation increases slightly. This does not happen with the recommendation filtering algorithm based on k-means, which simply divides the good recommendations and the bad recommendations into two different clusters. Through comparative experiments based on different outlier detection methods, we again justify the effectiveness of our recommendation filtering algorithm based on k-means.

6.2.2. Effectiveness of recommendation trust evaluation

In this subsection, we validate the effectiveness of our recommendation trust evaluation by separately observing the impact of the factors we discuss in Section 5.2. Fig. 5 shows the MAE of the recommendation trust evaluation when considering different factors as the proportion of malicious nodes increases from 10% to 70%. We can observe that the MAE increases rapidly without any defensive measure, because the trustor will use bad recommendations provided by the malicious recommenders in the environment to evaluate the recommendation trust of the trustee. These bad recommendations seriously affect the accuracy of the recommendation trust and thus lead to a larger MAE. When we use the proposed recommendation filtering algorithm, the MAE stays around 0.1 and remains stable regardless of the increase in the proportion of malicious nodes, because the filtering algorithm can effectively filter out bad recommendations. Even if the proportion of malicious nodes in the environment increases, the filtering algorithm can still select trustworthy recommenders. Using the direct trust of the trustor in the recommender and the similarity of the direct trust evaluations between the trustor and the recommender can also resist trust related attacks when the proportion is not too high.
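A minimal version of the two-cluster filter discussed above can be sketched as follows. The one-dimensional k-means and the rule of keeping the cluster whose centroid is closer to the trustor's own direct-trust estimate are our simplifications for illustration; the paper's full algorithm in Section 5.2 may select the cluster differently.

```python
def kmeans_filter(recs, own_estimate, iters=20):
    """Split 1-D recommendations into two clusters with k-means (k = 2)
    and keep the cluster whose centroid is closer to the trustor's own
    direct-trust estimate (the selection rule is an assumption)."""
    c0, c1 = min(recs), max(recs)  # simple deterministic initialization
    for _ in range(iters):
        a = [r for r in recs if abs(r - c0) <= abs(r - c1)]
        b = [r for r in recs if abs(r - c0) > abs(r - c1)]
        if a: c0 = sum(a) / len(a)
        if b: c1 = sum(b) / len(b)
    return a if abs(c0 - own_estimate) <= abs(c1 - own_estimate) else b

# bad-mouthing recommenders report ~0.1 about an honest trustee (~0.9)
recs = [0.92, 0.88, 0.90, 0.12, 0.08, 0.15, 0.10]
kept = kmeans_filter(recs, own_estimate=0.85)
print(kept)                   # the honest cluster
print(sum(kept) / len(kept))  # recommendation trust from the kept values
```

Unlike density- or distribution-based detectors, the two clusters stay separable even when the bad recommendations outnumber the good ones, which is why the k-means filter degrades more gracefully as the malicious proportion grows.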
Honest recommenders have higher direct trust and similarity, so their recommendations receive a larger weight. The confidence level has no effect on this evaluation because it only reflects the quantity of interactions between the recommender and the trustee. The recommendation filtering algorithm may not be effective when the proportion of malicious recommenders exceeds half. In this case, we must use the direct trust, the similarity and the confidence level of the recommender; these three important factors reduce the weight of the bad recommendations as much as possible. When we combine the filtering algorithm with the three important factors, the MAE approaches 0 and stays stable. Through the experimental results, we demonstrate that the recommendation trust evaluation used in our trust model can effectively exclude bad and imprecise recommendations.

6.2.3. Effectiveness of adaptive weight

Fig. 6 shows the MAE of the trust evaluation using different weights as the proportion of malicious nodes increases from 10% to 70%. We can see that the MAE is smallest when using the adaptive weight in the trust evaluation. The reason is that the adaptive weight adjusts automatically according to the current interaction situation between the trustor and the trustee so as to adapt to the dynamically hostile environment. If the trustor has frequently interacted with the trustee recently, it can judge whether the trustee is trustworthy from its direct trust toward the trustee; in such a case, the direct trust is more credible than the recommendation trust calculated from recommendations provided by other recommenders. However, a fixed weight cannot freely regulate the importance of the direct trust and the recommendation trust, which results in a larger MAE.
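Since Eq. (9) itself is not reproduced in this section, the sketch below only illustrates the idea of the adaptive weight: more and fresher direct interactions raise the weight of the direct trust, while trustworthy recommenders (a high average trust value) pull weight toward the recommendation trust. The functional form, the time constant and the function names are assumptions; only 𝜃 = 0.1 comes from Table 1.

```python
import math

def adaptive_weight(n_interactions, time_since_last, avg_rec_trust,
                    theta=0.1, tau=500.0):
    """Hypothetical adaptive weight for the direct trust: more and
    fresher direct interactions raise it, trustworthy recommenders
    (high average trust) lower it. Eq. (9) in the paper may differ."""
    evidence = n_interactions / (n_interactions + 1 / theta)
    direct_part = evidence * math.exp(-time_since_last / tau)
    rec_part = (1 - direct_part) * avg_rec_trust
    total = direct_part + rec_part
    return direct_part / total if total else 0.5

def synthesis_trust(direct, recommendation, alpha):
    """Combine direct and recommendation trust with weight alpha."""
    return alpha * direct + (1 - alpha) * recommendation

# many recent direct interactions: direct trust dominates the synthesis
a = adaptive_weight(n_interactions=50, time_since_last=100, avg_rec_trust=0.9)
print(round(a, 2))                               # ~0.70
print(round(synthesis_trust(0.95, 0.40, a), 2))  # ~0.79
```

With no direct interactions the weight collapses to 0 and the synthesis falls back on the recommendation trust; as direct evidence accumulates and stays fresh, the direct trust takes over, which is the behavior a fixed weight cannot reproduce.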
For example, with a fixed direct-trust weight of 0.3 and recommendation-trust weight of 0.7, the trust evaluation converges quickly even if the trustor does not interact with the trustee directly, because the trustor can rely on the recommendation trust evaluated from the recommendations provided by recommenders. But if the proportion of malicious nodes is large, most of the recommendations are wrong and thus degrade the accuracy of the recommendation trust. With the adaptive weight, however, the weight of the recommendation trust will not be high in this situation: the trustor determines the weights of the direct trust and the recommendation trust according to the number of interactions between the trustor and the trustee, the time of the interactions and the average trust value of the recommenders. In conclusion, the adaptive weight in our trust model effectively combines the direct trust with the recommendation trust and reduces the MAE of the trust evaluation.

6.2.4. Convergence rate and stability of the trust model

In this subsection, we investigate the convergence rate and stability of our trust model; three relevant trust models (TBSM [9], NRB [23] and NTM [4]) are used for comparison. Fig. 7(a) shows the trust evaluation by the trustor of an honest trustee that is randomly selected and whose ground truth is constant at 1 over time. We observe that our trust model converges faster than the other trust models and remains stable with the minimum trust deviation. The convergence rate of NRB [23] and NTM [4] is slow and their stability is poor because they cannot effectively filter out bad recommendations and reduce the impact of malicious nodes. In our trust model and TBSM [9], the convergence rate is fast and the stability is good because bad recommendations are effectively filtered out.
Our trust model is better because the adaptive weight we proposed better combines the direct trust and the recommendation trust to reduce the MAE of the trust evaluation. Fig. 7(b) shows the trust evaluation of a malicious trustee whose ground truth is always 0. Our trust model also approaches the ground truth faster. The reason is the same as the