Verifying Cyber-attacks in CUDA Deep Neural Networks for Self-Driving Cars
Primary supervisor
Contact admissions office
Other projects with the same supervisor
- Finding Vulnerabilities in IoT Software using Fuzzing, Symbolic Execution and Abstract Interpretation
- Application Level Verification of Solidity Smart Contracts
- Exploiting Software Vulnerabilities at Large Scale
- Designing Safe & Explainable Neural Models in NLP
- Using Program Synthesis for Program Repair in IoT Security
- Verification Based Model Extraction Attack and Defence for Deep Neural Networks
- Automated Repair of Deep Neural Networks
- Automatic Detection and Repair of Software Vulnerabilities in Unmanned Aerial Vehicles
- Combining Concolic Testing with Machine Learning to Find Software Vulnerabilities in the Internet of Things
- Hybrid Fuzzing Concurrent Software using Model Checking and Machine Learning
Funding
- Directly Funded Project (Students Worldwide)
This research project has funding attached. Applications for this project are welcome from suitably qualified candidates worldwide. Funding may only be available to a limited set of nationalities, so you should read the full department and project details for further information.
Project description
Compute Unified Device Architecture (CUDA) is a parallel computing platform and Application Programming Interface (API) model created by NVIDIA, which extends C/C++ and Fortran to harness the computational power of Graphics Processing Units (GPUs). Recent NVIDIA GPUs offer highly tuned implementations of the typical routines required by Deep Neural Networks (DNNs), e.g., forward and backward convolution, pooling, normalisation, and activation layers, which opens the prospect of wide-scale deployment of such networks in perception modules and end-to-end controllers for self-driving cars. However, this wide-scale deployment also raises the research question of how the GPU software can be verified, validated and certified to meet the standard requirements of safety-critical applications, especially when those applications are connected to the internet and thus exposed to adversarial perturbations. As a result, the main goals of this PhD research are to: (1) analyse and develop a deeper understanding of CUDA DNNs in order to capture the main properties of interest for establishing their secure and safe operation; (2) model the CUDA DNN library, taking into account aspects of security and safety; and (3) verify realistic self-driving car applications that rely on such a library, using explicit-state and symbolic model checking techniques to prevent possible cyber threats and attacks.
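To make concrete the kind of GPU code such a verification effort would need to model, the sketch below shows a minimal CUDA kernel for one of the routines mentioned above (a ReLU activation layer). It is an illustrative example written for this description, not code taken from cuDNN or any other CUDA DNN library; properties of interest for a model checker would include, for instance, the absence of out-of-bounds accesses or data races in kernels of this kind.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Minimal ReLU activation kernel: each thread clamps one element to zero
// from below. Production DNN libraries provide far more highly tuned
// versions of this and the other layers; this sketch only illustrates the
// style of CUDA code that a verifier would need to reason about.
__global__ void relu_forward(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                        // bounds check: a typical safety property
        out[i] = in[i] > 0.0f ? in[i] : 0.0f;
}

int main()
{
    const int n = 1 << 20;
    float *in = nullptr, *out = nullptr;

    // Unified memory keeps the example short; a tuned library would manage
    // device buffers and streams explicitly.
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i)
        in[i] = (i % 2 ? 1.0f : -1.0f);

    // Launch enough 256-thread blocks to cover all n elements.
    relu_forward<<<(n + 255) / 256, 256>>>(in, out, n);
    cudaDeviceSynchronize();

    printf("out[0] = %f, out[1] = %f\n", out[0], out[1]);
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```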
Person specification
For information
- Candidates must hold a minimum of an upper Second Class UK Honours degree or international equivalent in a relevant science or engineering discipline.
- Candidates must meet the School's minimum English Language requirement.
- Candidates will be expected to comply with the University's policies and practices of equality, diversity and inclusion.
Essential
Applicants will be required to evidence the following skills and qualifications.
- You must be capable of performing at a very high level.
- You must have a self-driven interest in uncovering and solving unknown problems and be able to work hard and creatively without constant supervision.
Desirable
Applicants will be required to evidence the following skills and qualifications.
- You will have good time management skills.
- You will possess determination (often more important than qualifications), although you will need a good amount of both.
General
Applicants will be required to address the following.
- Comment on your transcript/predicted degree marks, outlining both strong and weak points.
- Discuss your final-year Undergraduate project work and, if appropriate, your MSc project work.
- How well does your previous study prepare you for undertaking Postgraduate Research?
- Why do you believe you are suitable for doing Postgraduate Research?