OptiKey: A Free Eye-Gaze Communication Tool

OptiKey is a FREE and open source assistive tool I found online recently that’s designed for people with motor and speech impairments. It’s essentially an on-screen keyboard that can be controlled via a user’s eyes to enable communication with others (via text-to-speech) and general control of a computer.

A screenshot of OptiKey being used with Word to write text

It runs on Windows (although it could be used on a Mac with software such as Parallels) and is designed to work in conjunction with a range of low-cost eye trackers currently available on the market (e.g. Tobii EyeX, EyeTribe, etc.). It therefore holds huge potential as a cost-effective solution for people with a range of conditions such as Cerebral Palsy, ALS, Motor Neurone Disease, and many others.

It also allows users to control the mouse cursor via their eyes, making it possible to complete other standard tasks such as web browsing, sending emails, and writing in Word (although some tasks are still likely to be difficult – drawing with your eyes, for example, is especially tricky).

A screenshot of OptiKey being used to browse the web

It can sometimes be tempting to think that free software will be substandard, but a lot of work has clearly gone into the development of OptiKey and I’m sure it can genuinely help a lot of people. It has the feel of a paid product and, from the testing I’ve been doing, appears free of any major bugs.

To get started, all you need is a calibrated eye tracking device (I’ve been using the Tobii EyeX); you then simply download the software. There is guidance for getting started and plenty of help available describing the different features of the application. There are also a couple of videos that provide an overview of how to use the software.

Button selections in OptiKey can be made in a variety of ways – either through dwell time (i.e. looking at a button for a set period of time) or via physical buttons (e.g. large switches). There’s also the option to use OptiKey with a webcam (via cursor control software such as Enable Viacam, Camera Mouse, and Opengazer) if you don’t have access to an eye tracker.
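As a rough illustration of how dwell selection works in general (this is a minimal sketch of the technique, not OptiKey’s actual implementation – the class name and the one-second default below are purely illustrative), the core logic is simply to track how long the gaze has rested continuously on the same button:

```python
import time

class DwellSelector:
    """Minimal sketch of dwell-time selection: a button 'fires' once the
    gaze has rested on it continuously for a set threshold."""

    def __init__(self, dwell_seconds=1.0):
        self.dwell_seconds = dwell_seconds   # illustrative default, not OptiKey's
        self.current_target = None
        self.dwell_started_at = None

    def update(self, target, now=None):
        """Call on each gaze sample with the button under the gaze point
        (or None). Returns the button when a selection fires."""
        now = time.monotonic() if now is None else now
        if target != self.current_target:
            # Gaze has moved to a different button (or off the keyboard):
            # restart the dwell timer for the new target.
            self.current_target = target
            self.dwell_started_at = now
            return None
        if target is not None and now - self.dwell_started_at >= self.dwell_seconds:
            # Threshold reached: select once, then reset.
            self.current_target = None
            self.dwell_started_at = None
            return target
        return None
```

Switch access is essentially the same loop with the timer swapped for a physical button press, which is presumably why tools like OptiKey can offer both selection methods from the same keyboard layout.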

The developer of OptiKey (Julius Sweetland) states that he developed the application to “… challenge the outrageously expensive, unreliable, and difficult to use AAC (alternative and augmentative communication) products on the market”. This is certainly a worthy goal and much more feasible now given the massive reduction in price of eye tracking technology over recent years.

In speaking to disabled artists on the D2ART project (and through past projects), it’s clear that many people with disabilities haven’t previously considered eye gaze technology because of its hefty price tag. However, we’re now entering an exciting period where this technology is far cheaper and therefore accessible to a much larger audience.

This, in turn, presents great opportunities for designers and developers to create innovative eye gaze applications that can genuinely help people with disabilities. OptiKey clearly demonstrates this potential – I’d strongly encourage you to check it out.

Embrace The Shake

Artist Phil Hansen used to specialise in Pointillism – a painting technique in which small points/dots are used to build up a larger overall image. That was until he developed a shake in his hand whilst at art school, which left him unable to make accurate points.

As the problem worsened, the points Phil was trying to draw started to look more like little tadpoles. To address the shakiness, he would hold the pen tighter in an attempt to gain more control, but this resulted in further complications that ultimately left him struggling to hold anything.

The end result was that Phil became disillusioned, left art school, and eventually quit art completely.

Some years later, he went to see a neurologist who told him he had permanent nerve damage. Phil describes in his TED talk how he showed the neurologist the squiggly lines he produced when trying to draw straight, and how the neurologist then encouraged him to “embrace the shake”.

This was a turning point for Phil – he started to create a range of “scribble” pictures and produced some amazing artwork. Whilst he wasn’t completely passionate about this particular “technique”, this led him to realise he could still make art that fulfilled him despite his shake – he just needed to find a different approach.

This realisation enabled him to broaden his creative horizons beyond a single type of art and to start exploring other ways of creating work that weren’t affected by his shake. In his own words, it was the first time he realised that “…embracing a limitation can drive creativity”.

He purchased some materials to support his work and began creating some incredible pieces of art. You’ve got to check out his work swapping a brush for karate chops to create a Bruce Lee portrait, painting a picture with hamburger grease, and writing stories on a revolving canvas to build up a single larger image.

So, why tell this story? Well, I think it has huge relevance for user experience designers. What would happen if we were to “embrace the shake” a little more? That is – instead of seeing accessibility and design for disability as a tick-box exercise, what if we were to view it as a creative constraint to help drive innovation and creativity?

By its very nature, designing for people with impairments typically requires the creation of new and unique solutions. If someone can only use their right eyebrow to interact with a system, you have to think very differently about the design of that system. The interface paradigms used for keyboard and mouse interactions aren’t particularly useful here – we need to find new ways of using systems, and that requires a more creative approach.

We shouldn’t see accessibility as an irritation, something where we do only the bare minimum to cover ourselves (or, worse, do nothing). This mentality is clearly wrong ethically and morally – but, worse still, this pervasive view results in many designers missing a great opportunity to work on fascinating and challenging interface design problems.

If you’re working on a new project in the near future, why not try placing an increased design emphasis on accessibility? If you focus early on the solutions required for people with a range of abilities, who knows – perhaps it might lead to some interesting creative insights that will help shape your overall design (for both your target audience and people with impairments).

(Check out Phil’s TED talk to hear his story and to see some of his amazing work)

Faking Disability

It’s generally accepted when conducting a user evaluation of some technology that you must test with your target audience.

If you’re researching the potential of a new medical prototype to support consultants during delicate surgical procedures, it makes little sense to test with psychology undergraduates – you want to work directly with consultants to ensure you’re gathering relevant data.

Testing with undergraduates (if they’re not the target audience) will rightly lead to people questioning the validity of the data collected and the results reported. This all sounds obvious, but it’s still surprisingly common in the accessibility research field to see studies that are not conducted with disabled participants.

It’s not unusual, for example, to see sighted people being blindfolded to simulate blindness or for people with no motor impairment to be used to evaluate eye tracking interfaces.

Whilst on the surface this may seem like a reasonable approach (especially if the target audience are difficult to recruit), several studies have demonstrated that this is not a reliable replacement. People who “pretend” to have a disability do not necessarily use systems in the same way as those who actually have those impairments.

But more than this – by not collaborating closely with disabled users from the initial phases of a project and during the evaluation process, we miss a golden opportunity to incorporate their unique insights into our work.

It’s clearly fine to use colleagues, students, and anyone else in early-stage evaluations to iron out any issues and to make initial observations, but this is not sufficient for later-stage research that is looking to be published in leading conferences and journals.

It’s essential that assistive computing designers and researchers work closely with their target audience – both to inform the design of any experimental prototypes and to evaluate the effectiveness of new tools developed.

This shouldn’t deter researchers and designers – it should instead encourage us to seek out partnerships with disabled users and related organisations to help enhance the quality of our work and increase the likelihood of creating genuine impact.

It can be tempting to take short-cuts and test with anyone you can find – but ultimately the quality of the work suffers and the validity of any results will be highly questionable.

We need to do more to embrace opportunities to work directly with disabled people throughout all stages of a project, and to integrate their feedback and observations into our research and design work.

Case Studies in Assistive Research

Recruiting participants for research studies around assistive technology can be a real issue. This can be for a variety of reasons – you might have difficulty locating participants with a particular condition, people may have difficulty visiting your lab, or you may lack the funding to cover an individual’s accessibility costs.

This can result in user studies where far fewer participants are recruited than would typically be required to draw useful conclusions from any data collected. This is especially problematic in the assistive computing research space, as the abilities of participants (even those with the same medical diagnosis) can vary massively, leading to a range of additional variables being introduced into an evaluation design.

As a consequence, the value of standard statistical analysis becomes highly questionable even when you have a large sample size – and more so if you’re only testing with a small number of participants. We therefore often lack the control required to perform rigorous controlled studies and attempting to force this approach isn’t particularly valuable.

It’s important to be fair here – a small sample size can still be incredibly useful and lead to interesting insights. Recruiting large numbers of people with ALS, for instance, will likely always be difficult – so any studies we can conduct in this area with the target audience can be highly valuable (even if it’s only with a couple of people).

However, many studies are still conducted with small sample sizes and analysed as though they were fully controlled experiments. This is understandable, as recruitment can be difficult, but are these types of studies really the best way to move the field forward? Is there an alternative?

I’d personally like to see more case studies that clearly describe the abilities of an individual (in terms of medical diagnosis, self-disclosure from the participant, and observations from the researchers), followed by a detailed description of their performance on the experimental task(s) undertaken.

This approach would allow us to really understand the abilities of an individual (beyond a basic medical diagnosis) and how any impairments influenced the interaction experience being evaluated.

Sure – this makes it hard to generalise your findings to a wider audience – but isn’t that the point? When it comes to assistive research and computing we’re designing and testing technology for people with very specific and unique requirements.

Trying to generalise in this area doesn’t always make sense. Generic guidelines, solutions, and research findings will always lead to average user experiences. Instead, we need to strive for personal, customisable, and highly tailored applications that better support people of all abilities.

Perhaps case studies are a more appropriate tool to help achieve this aim?

Design for Ability

The typical design approaches used in the development of assistive computing systems have a tendency to place an individual’s disability at the centre of the design process. The emphasis is on what a person lacks and how this can be compensated for in the design to bring the individual onto a par with everyone else.

This deficit model of disability – where the focus is on the abilities people lack – can lead designers to ask the wrong type of questions. For instance, if someone can’t use a mouse or keyboard to operate a graphical application (due to a lack of motor control), the obvious question becomes how we make this experience more accessible (i.e. how do we normalise the interaction?).

A common solution is to provide users with some hardware that enables them to control the on-screen cursor and operate the system via head tracking or some other form of specialised (and typically expensive) hardware.

The problem with this approach is that people are having to use “alternative” technology to work with existing software designed for “average users” (i.e. people who can operate a mouse, keyboard, stylus, or use their fingers). So, whilst specialised hardware may make a graphical application somewhat accessible, it will likely result in a tedious and frustrating user experience.

Icons will be too small to select comfortably, drop-down menus with huge lists of options will be difficult to navigate, and the overall design of the application will not optimally support these alternative (or less common) input methods.
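To put some rough numbers on the icon problem: consumer eye trackers are typically accurate to somewhere around a degree of visual angle, which translates into a surprisingly large error on screen at normal viewing distances. The sketch below is a back-of-the-envelope calculation only – the figures (1° accuracy, a 60 cm viewing distance, a 1920 px screen around 34.5 cm wide, and a 2× comfort margin) are illustrative assumptions rather than the specs of any particular tracker:

```python
import math

def min_gaze_target_px(accuracy_deg=1.0, viewing_distance_cm=60.0,
                       screen_width_cm=34.5, screen_width_px=1920,
                       comfort_factor=2.0):
    """Back-of-the-envelope minimum target size (in pixels) for gaze input,
    given an assumed tracker accuracy in degrees of visual angle."""
    # On-screen distance (cm) subtended by the accuracy angle at this viewing distance.
    error_cm = 2 * viewing_distance_cm * math.tan(math.radians(accuracy_deg) / 2)
    px_per_cm = screen_width_px / screen_width_cm
    return error_cm * px_per_cm * comfort_factor

print(round(min_gaze_target_px()))  # roughly 117 px with the assumptions above
```

A standard toolbar icon is a fraction of that size, which goes some way to explaining why gaze-first designs tend towards large, well-spaced targets rather than trying to emulate a mouse on a conventional interface.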

Perhaps a better design approach is to focus on the abilities people have and to make this the central focus of the design process. If, for example, eye control is the primary ability an individual has then how can we design interfaces that better support eye input? What would that interface look like and how can we create effective and comfortable user experiences that support a user’s workflow?

A focus on disability leans us towards asking how we can make an experience accessible for people with a particular impairment. This is a very different question to asking how we can create interfaces that support an individual’s abilities. The former feels constrained, limited, and less ambitious, whereas the latter is about innovation and creative solutions.

This approach also raises important new questions and issues for designers – in particular, a key step in designing something new is to deeply understand your target audience to ensure you deliver a user experience that meets their needs.

But what language do you use when eliciting requirements?

It’s a sensitive area. Disability is (arguably) a relatively well-understood term that some will feel comfortable with, whereas it clearly offends other people. On the other hand, asking which “abilities” someone has – when they clearly feel they have some form of impairment – may come across as a little rude and patronising.

This is a difficult area to balance and it may not be possible to please everyone.

It’s clear, however, that despite twenty years or so of work on assistive computing, the majority of user interfaces still present significant accessibility issues for many users.

A design shift from disability to ability may help encourage designers to focus on building tailored and effective interface designs that better support users and their abilities (as opposed to the lesser goal of attempting to bring everyone onto a level playing field).