
Autonomy and verification
The Autonomy and Verification Group focuses on autonomous systems and their development, verification, and analysis.
Applications include robots, self-driving vehicles, distributed sensor-rich systems and decision-making software, alongside the development of verification techniques, standards and ethics for such systems with respect to safety, responsibility, assurance and trust.
The Group is part of a distributed Autonomy and Verification Lab across several universities.
We expect soon to see fully autonomous vehicles, robots, and software, all of which will need to make their own decisions and take action without direct human intervention.
Once humans delegate decision-making processes, even partially, how can we be sure what autonomous systems will do? Will they be safe? Can we ever trust them? What if they fail? These questions become especially important as robotic devices, autonomous vehicles and similar systems are increasingly deployed in safety-critical situations.
Research focus
Our research is focused on the following specialist areas:
- Programming autonomous systems
Autonomous systems make their own decisions, and potentially take their own actions, without direct human control. How can we practically and effectively build these systems, especially if we want to be sure about aspects such as verification, reliability, ethics and responsibility? Increasingly, these systems can be purely software entities or embodied systems such as robots or vehicles. We are involved in internationally leading research devising new ways to capture 'autonomy' and to make systems' decisions and actions transparent, allowing for analysis, explainability, and broader societal confidence.
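One common way to make an agent's decisions transparent is to record, alongside each chosen action, the rule that triggered it. The following minimal sketch illustrates the idea; the rule names, percepts and actions are invented for illustration and do not represent the group's actual frameworks.

```python
# Illustrative sketch of transparent, rule-based agent decision-making.
# Each decision is logged together with the rule that fired, so the
# agent's behaviour can later be analysed and explained.

def decide(percepts, log):
    """Pick an action from percepts via prioritised rules, logging why."""
    rules = [
        # (rule name, triggering condition, resulting action)
        ("obstacle_ahead", lambda p: bool(p.get("obstacle")), "brake"),
        ("low_battery",    lambda p: p.get("battery", 100) < 20, "return_to_base"),
        ("goal_visible",   lambda p: bool(p.get("goal_visible")), "approach_goal"),
    ]
    for name, condition, action in rules:
        if condition(percepts):
            log.append((name, action))  # transparent record of the decision
            return action
    log.append(("default", "explore"))  # no rule fired: fall back to exploring
    return "explore"

log = []
decide({"obstacle": True, "battery": 10}, log)  # obstacle rule has priority
decide({"battery": 10}, log)                    # low battery triggers return
print(log)  # [('obstacle_ahead', 'brake'), ('low_battery', 'return_to_base')]
```

Because the log pairs every action with its justification, an engineer (or a verification tool) can later inspect why the agent braked rather than continued, which is the kind of transparency the research above aims for.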
- Ethics, responsibility, and trustworthiness
Once decision-making is delegated to autonomous systems, it essentially becomes the responsibility of the system's software to make the 'right' decisions. How do we know what the right decisions will be, and how can we be sure that our systems will make them? What standards and regulations should be in place before these complex systems are allowed to be deployed? How can we delegate responsibility if we are not certain of the system's aims and intentions? All these aspects come together in questions of safety, ethics, responsibility and trustworthiness.
- Verification of autonomous systems
What tools can we use to assess key properties of autonomous systems? From safety to security, and from privacy to reliability, we need techniques that provide strong guarantees across the range of increasingly autonomous systems. We produce world-leading research tackling the formal verification of a wide range of critical properties, both in software and in robots, vehicles, sensors, swarms and beyond. This work feeds into improved development processes as well as standards and regulations for future systems.
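A core technique behind such guarantees is model checking: exhaustively exploring every reachable state of a system model and checking that a safety property holds in all of them. The toy sketch below shows the idea on an invented rover model (the states, moves and property are assumptions for illustration, not the group's actual tooling).

```python
from collections import deque

# Toy explicit-state safety check for a hypothetical rover.
# A state is (position, battery). The rover may move forward (draining
# one unit of battery) or recharge in place. The safety invariant we
# check: the rover never leaves the charted area (position <= 5).

def successors(state):
    """Enumerate the rover's possible next states."""
    pos, battery = state
    nxt = []
    if battery > 0:
        nxt.append((pos + 1, battery - 1))   # move forward
    nxt.append((pos, min(battery + 1, 3)))   # recharge (capacity 3)
    return nxt

def safe(state):
    """Safety invariant: stay inside the charted area."""
    return state[0] <= 5

def check_safety(initial):
    """Breadth-first reachability: return the first reachable unsafe
    state found (a counterexample), or None if the invariant holds."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not safe(state):
            return state  # counterexample found
        for s in successors(state):
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return None  # every reachable state is safe
```

Running `check_safety((0, 3))` produces a counterexample at position 6: because recharging is unlimited, the model lets the rover drive past the boundary, so the invariant fails. Real verification tools apply the same reachability idea at scale, with symbolic representations and temporal-logic properties rather than a hand-written invariant.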