Pervasive Technology for Multimodal Human Memory Augmentation

Primary supervisor

Contact admissions office

Funding

  • Competition Funded Project (Students Worldwide)

Project description

This project aims to investigate the use of technology in supporting human multi-sensory memory augmentation.

Technology has long played an important role in supporting human memory. However, developments in pervasive computing are beginning to offer the potential to rethink and redefine how technology can augment human memory. For example, widespread sensing and quantified-self systems are creating an environment in which it is possible to capture fine-grained traces of many aspects of human activity that may help in remembering items of interest. Likewise, ever-present display technologies such as digital signage, together with mobile and wearable devices, provide opportunities to ensure that memory cues are available at opportune moments.

Emerging research in pervasive computing for human memory augmentation has largely focused on a single sensory modality, predominantly using visual data for memory capture and cueing (using, for example, worn cameras such as the SenseCam or Narrative Clip). However, both human perception and human memory are multi-sensory (i.e. they combine vision, hearing, touch, smell and taste); future technology for digital memory augmentation should reflect this multi-sensory perspective.

This PhD project will consider how we might create a multi-sensory memory augmentation system, analogous to biological memory, that fuses data from the pervasive technology all around us. This augmentation system is unlikely to directly mirror the human senses, but will instead make novel use of new developments in non-visual capture and presentation technologies (e.g. the always-on audio capture of assistants such as Siri and Amazon Echo). The project will lay the foundation for future multi-sensory platforms by uniquely considering how multiple channels reflecting the different senses might be combined to create technology that mirrors our natural capacity for multi-sensory human memory.

Development of multi-sensory memory augmentation must address challenges in both technical and social domains, such as:

(i) What forms of multi-sensory data can be captured by (or on behalf of) users whilst engaged in their daily activities?
(ii) What social, ethical and usability issues limit this capture?
(iii) How can data streams representing multiple sensory inputs be mined and integrated to create effective human memory cues?
(iv) What are the most appropriate presentation media for multi-sensory cueing?

Given the broad scope of the project, we are open to, and interested in, applicants' own ideas within the described research area.

This PhD project may examine the human elements of multi-sensory memory augmentation, the systems aspects of implementing it, or a combination of the two.

The successful candidate will work closely with members of the EPSRC PACTMAN project (exploring the role of privacy and consent in memory augmentation), which includes partners at the Universities of Edinburgh, Essex and Lancaster, the NHS and BBC. A willingness to engage with other disciplines (e.g. psychology) is essential.

The PhD project will be supervised by Dr Sarah Clinch, a Lecturer in Ubiquitous Computing at The University of Manchester. Her research focuses on the development and deployment of pervasive computing for new and emerging application domains, particularly those connected to human cognition, health and well-being. Dr Clinch publishes her work at leading international conferences including CHI, UbiComp and MobiSys. She is very keen to support motivated and talented students in developing their research skills. You can find out more about Dr Clinch at: http://sclinch.com. Informal enquiries about the project are also welcome and should be made directly to Dr Clinch (sarah.clinch@manchester.ac.uk).
