Provably Authentic Model Training in Machine Learning

The objective of the research project is to develop a provably secure protocol for ensuring the integrity of outsourced machine learning model training using cryptography.

Project information

Project duration

-

Funded by

Multiple sources (Focus area spearhead projects)

Project coordinator

University of Oulu

Project description

Machine learning is increasingly used in a diverse range of domains. It is envisioned to soon empower safety-critical systems, from personalized medicine to energy management. The massive volumes of generated data will require complex system architectures and AI/ML model training over a distributed paradigm.

Machine learning model training requires an extensive amount of computational power. It has become evident that in future paradigms, such as edge computing, individuals and organizations will need to outsource training tasks to external providers, either in the cloud or at the edge. The limited computational capabilities of user devices prevent them from training complex models themselves. However, delegating training raises serious concerns about trust and integrity: we want to be able to verify the accuracy and robustness of externally trained models.

In this project, we study methods for provably authentic model training. In particular, we apply cryptographic mechanisms to enforce that the model trainer follows the correct training algorithm on the fixed training data. As a result, when a user receives a trained model, it is accompanied by a cryptographic proof of its correctness.
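To illustrate the general idea (not the project's actual protocol), the sketch below binds a training run to a fixed dataset and a fixed update rule using a hash chain: each step commits to the previous chain value, the data batch consumed, and the resulting weights. All function names and the toy update rule are hypothetical, chosen only for illustration.

```python
import hashlib
import json

def digest(obj) -> str:
    """Deterministic SHA-256 digest of a JSON-serializable object."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def train_step(weights, batch, lr=0.1):
    # Toy deterministic update standing in for one step of the
    # agreed-upon training algorithm (hypothetical, for illustration).
    return [w - lr * (w - x) for w, x in zip(weights, batch)]

def train_with_transcript(init_weights, batches):
    """Run training and build a hash-chain transcript: each link commits
    to the previous link, the batch used, and the updated weights."""
    weights = init_weights
    chain = digest(init_weights)  # commitment to the initial model
    for batch in batches:
        weights = train_step(weights, batch)
        chain = digest({"prev": chain, "batch": batch, "weights": weights})
    return weights, chain

def verify(init_weights, batches, claimed_weights, claimed_chain) -> bool:
    """Verifier re-executes the fixed algorithm on the fixed data and
    checks that the claimed weights and transcript match."""
    weights, chain = train_with_transcript(init_weights, batches)
    return weights == claimed_weights and chain == claimed_chain
```

Note that this naive verifier re-executes the whole training run, which defeats the purpose of outsourcing; practical verifiable-computation schemes instead use succinct cryptographic proofs that can be checked far more cheaply than re-training. The sketch only shows what the transcript must bind together: the initial model, the data, and every intermediate state.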

Project leader: Juha Partala (http://cc.oulu.fi/~jpartala/)