Department of Computer Science


Generating Goals from Responsibilities for Long Term Autonomy

Primary supervisor

Contact admissions office

Funding

  • Competition Funded Project (Students Worldwide)

This research project is one of a number of projects at this institution and is in competition for funding with one or more of them. Usually the project that attracts the strongest applicant will be awarded the funding. Applications for this project are welcome from suitably qualified candidates worldwide. Funding may only be available to a limited set of nationalities, and you should read the full department and project details for further information.

Project description

At its most general, an agent is an abstract concept that represents an autonomous computational entity that makes its own decisions. A general agent is thus simply the encapsulation of some distributed computational component within a larger system. However, in many settings, it is increasingly important for the agent to have explicit reasons (that it could explain, if necessary) for making one choice over another.

Belief-Desire-Intention (BDI) programming languages provide this capability. They are based on the concept of rational agency and draw heavily on the logic programming paradigm. Crucially, BDI agents make decisions based on intuitive concepts of how an agent's beliefs and desires lead to particular choices. In the BDI programming paradigm, beliefs represent the agent's (possibly incorrect) information about its environment, desires represent the agent's long-term goals, and intentions represent the goals that the agent is actively pursuing.
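
To make these concepts concrete, the following minimal Java sketch holds the three attitudes as plain data. The class and field names are purely illustrative and are not taken from any particular BDI language.

    import java.util.ArrayDeque;
    import java.util.HashSet;
    import java.util.Queue;
    import java.util.Set;

    // Illustrative only: the three BDI mental attitudes held as plain data.
    final class BdiAgentState {
        // Beliefs: the agent's (possibly incorrect) picture of its environment.
        final Set<String> beliefs = new HashSet<>();
        // Desires: long-term goals the agent would like to achieve.
        final Set<String> desires = new HashSet<>();
        // Intentions: goals the agent has committed to and is actively pursuing.
        final Queue<String> intentions = new ArrayDeque<>();
    }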

In general, BDI programming languages assume that goals are either supplied by the programmer at the start of program execution, communicated to the agent explicitly by some external source (e.g., the user), or triggered by events as part of "plans" supplied by the programmer.
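
The Java sketch below illustrates these three conventional goal sources. The method names, the string-based goal representation, and the example plan are all hypothetical, chosen only to show where goals typically enter a BDI system.

    import java.util.ArrayDeque;
    import java.util.List;
    import java.util.Map;
    import java.util.Queue;

    // Illustrative only: the three conventional sources of goals in a BDI system.
    final class GoalSources {
        final Queue<String> goals = new ArrayDeque<>();

        // Programmer-supplied plans: a triggering event mapped to the goal it raises.
        final Map<String, String> eventTriggeredPlans =
                Map.of("lowBattery", "achieve(recharged)");

        // 1. Initial goals written by the programmer before execution starts.
        void addInitialGoals(List<String> initial) {
            goals.addAll(initial);
        }

        // 2. Goals communicated explicitly by an external source such as the user.
        void onMessage(String requestedGoal) {
            goals.add(requestedGoal);
        }

        // 3. Goals triggered by events through programmer-supplied plans.
        void onEvent(String event) {
            String goal = eventTriggeredPlans.get(event);
            if (goal != null) {
                goals.add(goal);
            }
        }
    }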

The object of this project is to study whether some concept of Responsibility can provide a principled framework for generating goals, particularly for an autonomous agent (for instance, an agent controlling a robot deployed in a warehouse) that is expected to operate over a long period. Furthermore, can concepts of responsibility assist in assigning priorities to goals and in reasoning within co-operative multi-agent environments? The work will involve formal work, developing a logical framework for reasoning about responsibilities and relating them to goals, and adapting or extending the reasoning cycle of some BDI programming language in order to evaluate the theory. It requires a student confident in Maths and Logic, as well as in programming in Java. Experience with a logic programming language such as Prolog will also be a benefit.
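
As a purely speculative sketch of the idea to be investigated (not the project's method, since developing the formal framework is precisely the research task), one naive shape a responsibility might take in Java is a standing obligation that, when the agent's beliefs indicate it is not being met, generates a prioritised goal during the reasoning cycle. All names below are hypothetical.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Set;
    import java.util.function.Predicate;

    // Hypothetical: a responsibility as a condition over beliefs that, when violated,
    // yields a goal with an associated priority.
    record Responsibility(String name, Predicate<Set<String>> violated, String goal, int priority) {}

    final class ResponsibilityDrivenGoals {
        // One possible reasoning-cycle step: derive prioritised goals from unmet responsibilities.
        static List<String> generateGoals(Set<String> beliefs, List<Responsibility> responsibilities) {
            List<Responsibility> unmet = new ArrayList<>();
            for (Responsibility r : responsibilities) {
                if (r.violated().test(beliefs)) {
                    unmet.add(r);
                }
            }
            // Higher-priority responsibilities yield goals first.
            unmet.sort((a, b) -> Integer.compare(b.priority(), a.priority()));
            return unmet.stream().map(Responsibility::goal).toList();
        }
    }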

Person specification

For information

Essential

Applicants will be required to evidence the following skills and qualifications.

  • You must be capable of performing at a very high level.
  • You must have a self-driven interest in uncovering and solving unknown problems and be able to work hard and creatively without constant supervision.

Desirable

Applicants will be required to evidence the following skills and qualifications.

  • You will have good time management.
  • You will possess determination (which is often more important than qualifications) although you'll need a good amount of both.

General

Applicants will be required to address the following.

  • Comment on your transcript/predicted degree marks, outlining both strong and weak points.
  • Discuss your final year Undergraduate project work - and if appropriate your MSc project work.
  • How well does your previous study prepare you for undertaking Postgraduate Research?
  • Why do you believe you are suitable for doing Postgraduate Research?