Department of Computer Science


Verification Based Model Extraction Attack and Defence for Deep Neural Networks

Primary supervisor

Additional supervisors

  • Youcheng Sun


Funding

  • Competition Funded Project (Students Worldwide)

This research project is one of a number of projects at this institution and is in competition for funding with one or more of them. Usually the project that attracts the strongest applicant will be awarded the funding. Applications are welcome from suitably qualified candidates worldwide; however, funding may only be available to a limited set of nationalities, so you should read the full department and project details for further information.

Project description

Deep learning models, especially deep neural networks (DNNs), have become the standard models for solving many complex real-world problems, such as image recognition, speech recognition, natural language processing, and autonomous driving. However, training large-scale DNN models is by no means trivial: it requires not only large-scale datasets but also significant computational resources. The training cost can grow rapidly with task complexity and model capacity; for instance, it can cost $1.6 million to train a BERT model on the Wikipedia and Book corpora (15 GB). It is thus of utmost importance to protect DNNs from unauthorized duplication or reproduction.

Model extraction (also known as model stealing) is considered a serious security threat to DNNs. The goal of model extraction is to accurately steal the functionality of a victim model through its prediction API. To achieve this, the adversary first obtains an annotated dataset by querying the victim model for predictions (i.e., probability vectors) on a set of auxiliary samples, and then trains a copy of the victim model on that annotated dataset. A number of defence techniques have also been proposed to protect DNNs from model stealing.
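To make the attack setting concrete, the sketch below is a minimal, illustrative PyTorch example, not part of the project description: it distils a substitute network from a victim model that is accessible only through a hypothetical victim_predict API returning probability vectors. SubstituteNet, auxiliary_loader, and all hyperparameters are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch of a basic model extraction loop. The victim model is
# queried only through its prediction API (victim_predict, assumed here), and
# the adversary trains a substitute model on the returned probability vectors.

class SubstituteNet(nn.Module):
    """Adversary's copy of the victim; its architecture is a guess."""
    def __init__(self, in_dim=784, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def extract(victim_predict, auxiliary_loader, epochs=10, device="cpu"):
    """Train a substitute on (auxiliary sample, victim probability vector) pairs."""
    substitute = SubstituteNet().to(device)
    optimiser = torch.optim.Adam(substitute.parameters(), lr=1e-3)

    for _ in range(epochs):
        for x, _ in auxiliary_loader:            # labels of auxiliary data unused
            x = x.view(x.size(0), -1).to(device)
            with torch.no_grad():
                soft_labels = victim_predict(x)  # probability vector from the API
            optimiser.zero_grad()
            log_probs = F.log_softmax(substitute(x), dim=1)
            # KL divergence to the victim's soft labels (distillation-style training).
            loss = F.kl_div(log_probs, soft_labels, reduction="batchmean")
            loss.backward()
            optimiser.step()
    return substitute
```

The accuracy of the stolen copy depends on the auxiliary data and the query budget; defences typically perturb or truncate the returned probability vectors to degrade exactly this training signal.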

So far, existing model extraction attack and defence methods are mostly heuristics driven: their performance lacks formal guarantees or theoretical understanding. In contrast to such heuristics, a formal verification technique must provide provable guarantees for its results. The verification of DNNs has been widely studied in adversarial machine learning. In this project, you will investigate formal methods for mathematically rigorous reasoning about the theory and practice of DNN model extraction.
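As an illustration of the kind of provable guarantee that DNN verification aims for, the sketch below applies interval bound propagation, one standard verification technique from the adversarial machine learning literature, chosen here only for brevity, to a small, hypothetical ReLU network. The network, the input box radius, and all shapes are assumptions for the example.

```python
import torch
import torch.nn as nn

# Minimal interval bound propagation (IBP) sketch: given an input box
# [x - eps, x + eps], propagate lower/upper bounds through each layer to obtain
# sound bounds on the logits, i.e. a guarantee that holds for every input in
# the box rather than for a finite set of tested inputs.

def ibp_bounds(layers, lower, upper):
    """Propagate an input box through Linear/ReLU layers, returning output bounds."""
    for layer in layers:
        if isinstance(layer, nn.Linear):
            w, b = layer.weight, layer.bias
            centre = (upper + lower) / 2
            radius = (upper - lower) / 2
            new_centre = centre @ w.T + b
            new_radius = radius @ w.abs().T   # worst-case spread under the affine map
            lower, upper = new_centre - new_radius, new_centre + new_radius
        elif isinstance(layer, nn.ReLU):
            # ReLU is monotone, so bounds map directly.
            lower, upper = lower.clamp(min=0), upper.clamp(min=0)
    return lower, upper

layers = [nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3)]  # illustrative network
x = torch.randn(1, 4)
lo, hi = ibp_bounds(layers, x - 0.1, x + 0.1)
# Every input within the 0.1-box is guaranteed to produce logits inside [lo, hi].
```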

Person specification

For information

Essential

Applicants will be required to evidence the following skills and qualifications.

  • You must be capable of performing at a very high level.
  • You must have a self-driven interest in uncovering and solving unknown problems and be able to work hard and creatively without constant supervision.

Desirable

Applicants will be required to evidence the following skills and qualifications.

  • You will have good time management.
  • You will possess determination (which is often more important than qualifications) although you'll need a good amount of both.

General

Applicants will be required to address the following.

  • Comment on your transcript/predicted degree marks, outlining both strong and weak points.
  • Discuss your final-year Undergraduate project work and, if appropriate, your MSc project work.
  • How well does your previous study prepare you for undertaking Postgraduate Research?
  • Why do you believe you are suitable for doing Postgraduate Research?