The typical design approaches used in the development of assistive computing systems have a tendency to place an individual’s disability at the centre of the design process. The emphasis is on what a person lacks and how this can be compensated for in the design to bring individuals onto a par with everyone else.
This deficit model of disability, in which people are defined by what they lack, can lead designers to ask the wrong questions. For instance, if you can’t use a mouse or keyboard to operate a graphical application (due to a lack of motor control), the obvious question is how to make this experience more accessible (i.e. how do we normalise the interaction?).
A common solution is to provide users with some hardware that enables them to control the on-screen cursor and operate the system via head tracking or some other form of specialised (and typically expensive) hardware.
The problem with this approach is that people are having to use “alternative” technology to work with existing software designed for “average users” (i.e. people who can operate a mouse, keyboard, stylus, or use their fingers). So, whilst specialised hardware may make a graphical application somewhat accessible, it will likely result in a tedious and frustrating user experience.
Icons will be too small to select comfortably, drop-down menus with huge lists of options will be difficult to navigate, and the overall design of the application will not optimally support these alternative (or less common) input methods.
Perhaps a better design approach is to focus on the abilities people have and to make this the central focus of the design process. If, for example, eye control is the primary ability an individual has then how can we design interfaces that better support eye input? What would that interface look like and how can we create effective and comfortable user experiences that support a user’s workflow?
A focus on disability leads us to ask how we can make an experience accessible for people with a particular impairment. This is a very different question from asking how we can create interfaces that support an individual’s abilities. The former feels constrained, limited, and less ambitious, whereas the latter invites innovation and creative solutions.
This approach also raises important new questions and issues for designers – in particular, a key step in designing something new is to deeply understand your target audience to ensure you deliver a user experience that meets their needs.
But what language do you use when eliciting requirements?
It’s a sensitive area. Disability is (arguably) a relatively well-understood term that some people feel comfortable with, while others find it clearly offensive. On the other hand, asking which “abilities” someone has – when they clearly feel they have some form of impairment – may come across as rude and patronising.
This is a difficult area to balance and it may not be possible to please everyone.
It’s clear, however, that despite some twenty years of work on assistive computing, the majority of user interfaces still present significant accessibility issues for many users.
A design shift from disability to ability may help encourage designers to focus on building tailored and effective interface designs that better support users and their abilities (as opposed to the lesser goal of attempting to bring everyone onto a level playing field).