• Designing privacy-aware internet of things applications

      Perera, Charith; Barhamgi, Mahmoud; Bandara, Arosha K.; Ajmal, Muhammad; Price, Blaine; Nuseibeh, Bashar; Cardiff University; Université Claude Bernard Lyon; Open University, United Kingdom; University of Derby (Elsevier BV, 2019-09-28)
      Internet of Things (IoT) applications typically collect and analyse personal data that can be used to derive sensitive information about individuals. However, thus far, privacy concerns have not been explicitly considered in software engineering processes when designing IoT applications. With the advent of behaviour-driven security mechanisms, failing to address privacy concerns in the design of IoT applications can also have security implications. In this paper, we explore how a Privacy-by-Design (PbD) framework, formulated as a set of guidelines, can help software engineers integrate data privacy considerations into the design of IoT applications. We evaluated the utility of this PbD framework by studying how software engineers use it to design IoT applications, and we explore the challenges of using the set of guidelines to influence the IoT application design process. In addition to highlighting the benefits of having a PbD framework that makes privacy features explicit during the design of IoT applications, our studies also surfaced a number of challenges associated with the approach. A key finding of our research is that the PbD framework significantly increases both novice and expert software engineers’ ability to design privacy into IoT applications.
• Privacy-preserving crowd-sensed trust aggregation in the user-centric internet of people networks

      Azad, Muhammad; Perera, Charith; Bag, Samiran; Barhamgi, Mahmoud; Hao, Feng; University of Derby; Cardiff University; University of Warwick; Université Claude Bernard Lyon (ACM, 2020)
      Today we rely on Internet technologies for a wide range of services, from personal communication to entertainment. Online social networks (Facebook, Twitter, YouTube) have seen an increase in subscribers in recent years, forming a social network among people termed the Internet of People. In such a network, subscribers consume content disseminated by other subscribers. Malicious users can also exploit such platforms to spread malicious and fake content, which can have catastrophic consequences for a social network if it is not identified in time. Crowd-sensing in the Internet of People is a prospective solution for large-scale data collection: by leveraging feedback collected from the people of the Internet, it can help identify malicious subscribers of the network and inform better services. However, human involvement in crowd-sensing raises several challenges: preserving the privacy of participants, preventing the intentional spread of falsely high scores about particular users or content that would undermine the service, and assigning different trust scores to the people of the network without disclosing their trust weights. Therefore, a privacy-preserving system for computing the trust of people and their content in the network plays a crucial role in collecting high-quality data. In this paper, a novel trust model is proposed for evaluating the trust of people in the social network without compromising the privacy of the participants. The proposed system supports trust weight assignment to different classes of user (i.e. it can assign different weights to different users of the network), has a decentralized setup, and ensures privacy properties under both malicious and honest-but-curious adversarial models. We evaluated the performance of the system by developing a prototype and applying it to different online social network datasets.
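
      The abstract describes privacy-preserving trust aggregation only at a high level and does not specify the underlying cryptographic protocol. The Python sketch below is therefore only an illustrative assumption: it shows one common building block for this kind of aggregation, additive secret sharing of weighted ratings across several aggregators, so that no single party sees an individual rating while the weighted total can still be reconstructed. All names and parameters (share, aggregate_trust, n_servers, the example weights) are hypothetical and are not taken from the paper.

      ```python
      # Minimal sketch of privacy-preserving trust aggregation via additive
      # secret sharing. Hypothetical illustration only; NOT the authors' protocol.
      import random

      PRIME = 2**61 - 1  # modulus for the secret-sharing arithmetic


      def share(value, n_shares):
          """Split an integer into n additive shares modulo PRIME."""
          shares = [random.randrange(PRIME) for _ in range(n_shares - 1)]
          shares.append((value - sum(shares)) % PRIME)
          return shares


      def aggregate_trust(ratings, weights, n_servers=3):
          """Each rater weights its rating and distributes shares to n_servers
          aggregators. Each aggregator only ever sees random-looking shares;
          summing the per-server totals reconstructs the weighted sum alone."""
          server_totals = [0] * n_servers
          for rating, weight in zip(ratings, weights):
              weighted = rating * weight  # rater applies its own class weight
              for i, s in enumerate(share(weighted, n_servers)):
                  server_totals[i] = (server_totals[i] + s) % PRIME
          return sum(server_totals) % PRIME  # only the aggregate is revealed


      if __name__ == "__main__":
          ratings = [5, 3, 4, 1]   # per-rater scores for one subject (hypothetical)
          weights = [2, 1, 1, 3]   # per-class trust weights (hypothetical)
          total = aggregate_trust(ratings, weights)
          print("aggregate weighted trust:", total,
                "expected:", 5 * 2 + 3 * 1 + 4 * 1 + 1 * 3)
      ```

      In this sketch the per-class weights stand in for the abstract's idea of assigning different trust weights to different classes of user; a deployed scheme would also need the decentralized setup and the defences against malicious raters that the paper claims, which this toy example does not attempt to capture.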