Spotlight on Lablet Research #11 - Cloud-Assisted IoT Systems Privacy

Lablet: University of Kansas

The goal of this project is to develop principles and methods to model privacy requirements, threats, and protection mechanisms in cloud-assisted IoT systems.

The key to realizing the smart functionalities envisioned for the Internet of Things (IoT) is to securely and efficiently communicate, maintain, and analyze the tremendous amount of data generated by IoT devices. Because IoT devices are computational units with strict performance and energy constraints, integrating IoT with cloud platforms to leverage their computing and big data analysis capabilities is critically important. However, when data is transferred among connected devices or to the cloud, new security and privacy issues arise. In this project, University of Kansas researchers, led by Principal Investigator (PI) Fengjun Li and Co-PI Bo Luo, investigated privacy threats in cloud-assisted IoT systems, in which heterogeneous and distributed data are collected, integrated, and analyzed by different IoT applications. The research aims to develop a privacy threat analysis framework for modeling privacy threats in cloud-assisted IoT systems and to provide a holistic solution for privacy protection.

The number of IoT devices is expected to reach 125 billion by 2030. These devices collect a variety of data that may contain users' privacy-sensitive information, yet quantitatively assessing the privacy leakage of a text snippet is difficult. To address this problem, the researchers are among the first to develop a context-aware, text-based quantitative model for assessing private information. The team built a computational framework that uses natural language processing and deep neural networks to train prediction models, which can measure the privacy scores of texts from a social network dataset. These models serve as the foundation for user-alerting mechanisms that warn users when they attempt to disseminate sensitive information online.
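As a rough illustration of this kind of text-based privacy scoring, the sketch below trains a toy classifier that maps a snippet to a leakage probability and flags it for a user-alerting step. The snippets, labels, linear model, and 0.5 alert threshold are all illustrative assumptions; the project's actual models are deep neural networks trained on a social network dataset.

```python
# Minimal sketch: a text privacy-score predictor.
# The toy data, labels, and linear model are illustrative stand-ins for the
# project's deep-neural-network models trained on social-network text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training snippets labeled 1 (privacy-sensitive) or 0 (benign).
texts = [
    "My SSN is 123-45-6789, please keep it safe",
    "Flying out tomorrow, the house will be empty all week",
    "Great weather for a picnic today",
    "Just finished reading a good book on networking",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

def privacy_score(snippet: str) -> float:
    """Return an estimated probability that the snippet leaks private information."""
    return float(model.predict_proba([snippet])[0][1])

if __name__ == "__main__":
    post = "Here is my new phone number, call me anytime: 555-0100"
    score = privacy_score(post)
    if score > 0.5:  # alert threshold is an assumption, not from the project
        print(f"Warning: this post looks privacy-sensitive (score {score:.2f})")
```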

To support big data analytics over IoT-generated data while protecting its privacy, the KU researchers developed two privacy-preserving learning frameworks for collaborative learning tasks in cloud-assisted IoT systems. First, for scenarios in which clients (e.g., IoT devices) hold horizontally partitioned data (i.e., data with the same distribution or feature space), they developed a privacy-preserving incremental learning protocol. The protocol allows clients to train Support Vector Machines (SVMs) with a linear or polynomial kernel function and to securely update the SVM model by exchanging the kernel matrix under homomorphic encryption. Because IoT devices continuously generate new data, the scheme can actively update the learning model by integrating the new data into the quadratic program and modifying the kernel parameters when necessary, which is critical for prediction tasks in IoT applications.
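A minimal sketch of the kernel-based incremental update is shown below. It uses scikit-learn's precomputed-kernel SVM on toy data and simply extends the kernel matrix with a new batch before refitting; the homomorphic-encryption exchange of kernel entries is omitted, so this illustrates only the learning step, not the researchers' protocol itself.

```python
# Minimal sketch of incremental learning with a precomputed polynomial kernel.
# Kernel values are computed in the clear on toy data; in the actual protocol
# the kernel matrix is exchanged under homomorphic encryption.
import numpy as np
from sklearn.svm import SVC

def poly_kernel(A, B, degree=2, coef0=1.0):
    """Polynomial kernel K(a, b) = (a . b + coef0) ** degree."""
    return (A @ B.T + coef0) ** degree

rng = np.random.default_rng(0)

# Initial horizontally partitioned batch (same feature space across clients).
X = rng.normal(size=(20, 4))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])

svm = SVC(kernel="precomputed")
svm.fit(poly_kernel(X, X), y)

# A new batch arrives from the devices: extend the data, rebuild the kernel
# matrix, and refit. The incremental protocol only updates the affected
# kernel rows/columns, and does so under encryption.
X_new = rng.normal(size=(5, 4))
y_new = np.sign(X_new[:, 0] + 0.5 * X_new[:, 1])
X_all, y_all = np.vstack([X, X_new]), np.concatenate([y, y_new])
svm.fit(poly_kernel(X_all, X_all), y_all)

# Prediction against the training set uses the cross-kernel matrix.
X_test = rng.normal(size=(3, 4))
print(svm.predict(poly_kernel(X_test, X_all)))
```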

The researchers also designed a blockchain-based framework to support privacy-preserving and accountable federated learning tasks in IoT applications. As an emerging collaborative learning technique, Federated Learning (FL) allows high-quality, personalized models to be learned from data across distributed sources without transmitting the raw data off the devices. This enables large-scale collaborative IoT applications by leveraging the computational resources on IoT devices while improving user privacy protection. To prevent privacy leakage, model updates (e.g., stochastic gradient updates) are securely exchanged among the clients using homomorphic encryption, multi-party computation, and differential privacy schemes. Because the IoT devices are only loosely federated in FL tasks, some of them may act maliciously, cheating or colluding during model updates. The researchers' blockchain-based framework provides an accountable federated learning service by leveraging the immutability and decentralized-trust properties of the blockchain to provide provenance of model updates.
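The sketch below illustrates the accountable-aggregation idea in simplified form: clients mask their updates with pairwise random values that cancel in the sum (a stand-in for the HE/MPC and differential-privacy machinery), and each round's aggregate is committed to a hash-chained log that plays the role of the blockchain ledger. The names, parameters, and masking scheme are illustrative assumptions, not the project's actual framework.

```python
# Minimal sketch of accountable federated averaging with a hash-chained
# provenance log. A simple in-memory hash chain stands in for the blockchain;
# pairwise additive masks stand in for HE/MPC-based secure aggregation.
import hashlib
import json
import numpy as np

def record(chain, payload):
    """Append a block whose hash covers the payload and the previous hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    blob = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
    chain.append({"prev": prev, "payload": payload,
                  "hash": hashlib.sha256(blob.encode()).hexdigest()})

rng = np.random.default_rng(1)
global_model = np.zeros(4)
chain = []  # the provenance ledger

for rnd in range(3):
    # Each client computes a local update; the pairwise masks cancel in the
    # sum, so the aggregator only ever sees the average, never an individual
    # client's update.
    updates = [rng.normal(size=4) * 0.1 for _ in range(3)]
    mask01 = rng.normal(size=4)
    mask12 = rng.normal(size=4)
    masked = [updates[0] + mask01,
              updates[1] - mask01 + mask12,
              updates[2] - mask12]

    aggregate = sum(masked) / len(masked)  # equals the true average
    global_model += aggregate

    # Commit the round's aggregate to the ledger so any later dispute about
    # a model update can be audited against an immutable record.
    record(chain, {"round": rnd, "aggregate": aggregate.round(6).tolist()})

print("final model:", global_model.round(3))
print("ledger head:", chain[-1]["hash"][:16])
```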

Successful completion of this project will result in: 1) a systematic methodology for modeling privacy threats in the data communication, storage, and analysis processes of IoT applications; 2) a privacy threat analysis framework with an extensive catalogue of application-specific privacy needs and privacy-specific threat categorization; and 3) a privacy protection framework that maps existing Privacy Enhancing Technologies (PETs) to the identified privacy needs and threats of IoT applications, simplifying the selection of sound privacy protection countermeasures.

Additional details on the project can be found here.
