Presenting multiple streams of dynamic Web content to screen reader users
Modern Web pages are often far more interactive than they used to be, and are sometimes essentially applications. Sections of a page may update, either automatically or as a result of user interaction; indeed, one page may contain many such regions. Sighted users can generally monitor for and detect these changes, and quickly evaluate their interest in the new content, without much effort and without being unduly distracted from their primary task. For screen-reader users, however, who interact with the page in an audio environment, this is a much more difficult task, rendering these pages hard or impossible to use. The SASWAT project aims to understand how sighted users monitor and interact with these types of pages, and to use this understanding to develop techniques that allow screen-reader users to interact effectively with them.
Full scientific details: http://www.cs.manchester.ac.uk/our-research/groups/interaction-analysis-and-modelling/areas-and-projects/saswat
Code repository: https://bitbucket.org/IAMLab/
Data repository: http://iam-data.cs.manchester.ac.uk/investigations
Technical reports: http://iam-data.cs.manchester.ac.uk/investigations
Funded By: EPSRC (EP/E062954/1)
This project is complete
The growth of Web 2.0 technologies is fundamentally changing the way that people interact with the Web. A short time ago, navigating the Web was simply a matter of clicking links, moving from one static page to another. Now it’s possible to spend a considerable amount of time interacting with a single page through its “dynamic micro content” – items such as tickers, slideshows, videos, search facilities – that update independently, without changing the URL.
These Web pages provide an exciting, interactive experience for sighted users. For visually disabled users, however, they simply result in further barriers to accessibility. Adaptive technologies, such as screen readers, are currently unable to deal with dynamic updates.
The SASWAT project aims to address this, by understanding the sighted user’s experience, and mapping this to audio for visually disabled users.
There are two parallel strands to the SASWAT research:
We consider that viewing dynamic Web pages has many of the characteristics of a conversation. As the user reads the page, so the topic of conversation changes. If some of this information changes, how do we tell the user? Is the information sufficiently important that we must interrupt immediately, or has the conversation moved on sufficiently that the change is of little interest? We aim to use eye-tracking studies to develop a model of how attention is allocated when users interact with dynamic Web pages, and use this model as a basis for controlling information flow so that interaction can occur as naturally as possible.
Dynamic updates can be classified into patterns according to how the user interacts with them, and developers often use patterns from libraries such as the Yahoo! pattern library when developing sites. Can analysis of where and how these patterns are implemented be combined with experimental data about how people use them to suggest ways of presentation? In particular, can developers use pattern class as a basis for making the update more accessible, e.g., through ARIA markup?
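As a rough illustration of how pattern class might drive ARIA markup, the sketch below maps a few hypothetical pattern classes to live-region attributes. The class names, the mapping, and the helper function are illustrative assumptions, not part of the project's published rules; `aria-live` and `aria-atomic` are the standard WAI-ARIA attributes.

```python
# Hypothetical sketch: mapping dynamic-update pattern classes to ARIA
# live-region attributes. The pattern names and the mapping itself are
# illustrative assumptions, not the SASWAT project's actual rules.

ARIA_SETTINGS = {
    # Automatically scrolling content (e.g. tickers) should not interrupt.
    "ticker":     {"aria-live": "off", "aria-atomic": "false"},
    # Slideshows replace their whole content, so announce it as a unit,
    # but politely, when the user is idle.
    "slideshow":  {"aria-live": "polite", "aria-atomic": "true"},
    # User-initiated updates (e.g. search suggestions) are expected,
    # so a polite announcement is usually appropriate.
    "suggestion": {"aria-live": "polite", "aria-atomic": "false"},
    # Errors or warnings may justify an immediate interruption.
    "alert":      {"aria-live": "assertive", "aria-atomic": "true"},
}

def aria_markup(pattern_class: str) -> str:
    """Return an HTML attribute string for a region of the given class."""
    settings = ARIA_SETTINGS.get(pattern_class, {"aria-live": "polite"})
    return " ".join(f'{name}="{value}"' for name, value in settings.items())

print(aria_markup("ticker"))
```

A developer (or an authoring tool) could apply the resulting attributes to the container element of each dynamic region, letting screen readers decide when to announce its updates.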
HCW Researchers Present at W4A Conference (5/8/2009)
How do people split their visual attention across multiple screens? #media #hci (2/15/2012)
IJHCS publication for the SASWAT project (12/14/2011)
New Lab Member (9/4/2007)
Paper Accepted for publication in Universal Access in the Information Society #accessibility #a11y (10/26/2011)
Papers Accepted at HT07 (7/2/2007)
Papers Accepted at W4A 2011 Hyderabad, India (2/10/2011)
Papers Accepted at W4A’09 Madrid (3/5/2009)
Research Associate Vacancy (6/6/2007)
Technical evaluation of the update classification system (5/30/2012)
Final report summary:
Eye-tracking experiments allowed us to build a model of how sighted users allocate their attention when Web pages update. This Dynamic Update Viewing-likelihood (DUV) model can predict, with around 80% accuracy, whether a person will fixate on some dynamic content, based only on information about the update (its size and duration) and how it was initiated – no knowledge of the user's task is necessary. This model was used as the basis for a set of rules for audio presentation of updates, again based on the type of update (classified by how the page was affected and how the update was initiated). An implementation of the rules has been evaluated, showing that users who were blind or visually impaired found updates presented in this way easier to deal with than the relatively quiet way in which current screen readers often present them.
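The shape of such a predictor can be sketched as a simple scoring function over the three features the summary names: update size, duration, and initiation. The thresholds and scores below are invented for illustration; the actual DUV model and its parameters are described in the project's publications.

```python
# Illustrative sketch of a DUV-style predictor. The feature thresholds and
# weights are invented assumptions; only the choice of inputs (size,
# duration, initiation) follows the project's description of the model.

def viewing_likelihood(size_px: int, duration_s: float,
                       user_initiated: bool) -> float:
    """Estimate the probability that a sighted user fixates on an update,
    using only the update's size, duration, and how it was initiated."""
    score = 0.2                # baseline chance of noticing any change
    if user_initiated:
        score += 0.5           # users attend to updates they caused
    if size_px > 10_000:       # larger updates are more visually salient
        score += 0.2
    if duration_s > 2.0:       # persistent content is easier to catch
        score += 0.1
    return min(score, 1.0)
```

A presentation layer could then threshold this score: announce high-likelihood updates immediately, and queue or suppress the rest.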
Final report: Pending