Automating Exam Generation for Assessment

Funding

  • Competition Funded Project (Students Worldwide)
This research project is one of a number of projects at this institution and is in competition with them for funding. Usually the project that attracts the best applicant will be awarded the funding. Applications are welcome from suitably qualified candidates worldwide; however, funding may only be available to a limited set of nationalities, and you should read the full department and project details for further information.

Project description

One of the most significant and difficult tasks in education is the creation of valid, reliable, and useful assessments, especially exams. At the question level, we generally trade off ease of creation (where so-called subjective questions dominate: it is very easy to write essay questions) against ease of marking (where so-called objective questions such as multiple-choice questions (MCQs) dominate: essay questions are very hard to mark). Even if we succeed in question generation, a valid, reliable, and useful exam is not merely a set of individually good questions: it must also provide good coverage of the material and an appropriate balance of difficulty, among other properties.
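The exam-level properties mentioned above (coverage of material, balance of difficulty) can be framed as a selection problem over a question bank. The sketch below is purely illustrative and not part of the project description: the `Question` class, the `assemble_exam` function, and the greedy scoring rule are all hypothetical assumptions, one simple way such constraints might be operationalized.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Question:
    topic: str
    difficulty: float  # 0.0 (easy) .. 1.0 (hard)

def assemble_exam(bank, n_questions, target_difficulty=0.5):
    """Greedily pick questions, preferring topics not yet covered
    and keeping the running mean difficulty near the target."""
    chosen = []
    for _ in range(n_questions):
        remaining = [q for q in bank if q not in chosen]
        if not remaining:
            break

        def score(q):
            # Reward a question whose topic is not yet in the exam,
            # penalize drift of mean difficulty from the target.
            seen_topics = {c.topic for c in chosen}
            coverage_bonus = 0.0 if q.topic in seen_topics else 1.0
            mean = (sum(c.difficulty for c in chosen) + q.difficulty) / (len(chosen) + 1)
            return coverage_bonus - abs(mean - target_difficulty)

        chosen.append(max(remaining, key=score))
    return chosen

bank = [Question("logic", 0.2), Question("logic", 0.8),
        Question("sets", 0.5), Question("graphs", 0.9)]
exam = assemble_exam(bank, 3)
print([q.topic for q in exam])
```

A realistic system would of course replace this greedy heuristic with a proper constraint or optimization model, but the sketch shows how "good coverage" and "appropriate difficulty" become concrete, checkable objectives once questions carry metadata.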

Recently there has been some progress in question generation (either from text or from structured sources such as ontologies or linked data), but the field is still immature, and comparatively little work has been done on exam generation. The goal of this thesis is to significantly advance both areas. There is a strong possibility of industrial collaboration on this project.
