Department of Computer Science

Explainable AI and Dialog for Trust in Human-Robot Interaction

Additional supervisors

  • Angelo Cangelosi



  • Competition Funded Project (Students Worldwide)

This research project is one of a number of projects at this institution and is in competition for funding with one or more of them; the project that attracts the strongest applicant will usually be awarded the funding. Applications are welcome from suitably qualified candidates worldwide, but funding may only be available to a limited set of nationalities, so you should read the full department and project details for further information.

Project description

In this project we will investigate how contemporary explainable AI methods can be used and extended as a foundation for building trust in human-robot interaction. As part of the broader En-TRUST project (UKRI Trustworthy Autonomous Systems Node on Trust), we will develop novel transparent cognitive architectures of trust, which will address a fundamental gap in the safe and ethical application of AI.

The project will emphasise the development of novel neuro-symbolic architectures that can support fully transparent, trust-building interaction in robotics. Particular emphasis will be given to deep learning components that enable explainability, deeper inference and semantic control, such as multi-hop inference architectures and long-term memory-based models.

Topics of interest include:
- Explanation representation and generation in the context of robotics.
- Modelling the nexus between explanation and trust in robotics.
- Dialogue systems for human-robot trust and interaction.
- Deep learning models for explanation representation.
- Memory-based deep learning models for explainability.
- Neuro-symbolic models for explanation generation.
- Trustworthy multi-hop inference architectures.
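To give a flavour of the multi-hop inference theme above, the following is a minimal, purely illustrative sketch (not code from the project; all entities, relations and function names are hypothetical): a query is answered by chaining several stored facts, and the chain of intermediate hops doubles as a human-readable explanation.

```python
# Toy illustration of explainable multi-hop inference.
# All facts and names are hypothetical examples.

facts = {
    ("robot", "located_in"): "kitchen",
    ("kitchen", "part_of"): "apartment",
    ("apartment", "in_city"): "Manchester",
}

def multi_hop(entity, relations):
    """Follow a chain of relations from `entity`, recording every hop.

    Returning the intermediate hops is what makes the answer
    explainable: the chain itself is the explanation.
    """
    trace = [entity]
    for rel in relations:
        entity = facts.get((entity, rel))
        if entity is None:
            return None, trace  # chain broke: partial trace explains why
        trace.append(entity)
    return entity, trace

answer, explanation = multi_hop("robot", ["located_in", "part_of", "in_city"])
```

Here `answer` is reached only by composing three facts, and `explanation` records the full reasoning chain; neuro-symbolic approaches aim to recover this kind of inspectable chain from learned, sub-symbolic components.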

Applicants are expected to have:

- An excellent undergraduate degree in Computer Science (or a related discipline) and, preferably, a relevant MSc degree.
- Confidence and independence in programming complex systems in Python or Java.
- Previous academic or industry experience in Machine Learning, Robotics or Natural Language Processing (desired).
- Excellent report writing and presentation skills.

Please note that applicants must additionally satisfy the standard requirements for postgraduate study at the University of Manchester, such as a first-class or high upper-second-class degree (or an equivalent international qualification) and English language qualifications, as stated in the PGR guidelines.

Qualified applicants can apply directly at the website:

For questions about the position, please contact Andre Freitas and Angelo Cangelosi.

For further information please visit:

Person specification

Applicants will be required to evidence the following skills and qualifications.

  • You must be capable of performing at a very high level.
  • You must have a self-driven interest in uncovering and solving unknown problems and be able to work hard and creatively without constant supervision.
  • You will have good time management.
  • You will possess determination (which is often more important than qualifications), although you'll need a good amount of both.


Applicants will be required to address the following.

  • Comment on your transcript/predicted degree marks, outlining both strong and weak points.
  • Discuss your final-year undergraduate project work and, if appropriate, your MSc project work.
  • How well does your previous study prepare you for undertaking Postgraduate Research?
  • Why do you believe you are suitable for doing Postgraduate Research?