Assistive Technology: The Myo Gesture Armband

The Myo Armband is an interesting product that enables people to control different systems via simple gestures. The promotional video below shows a variety of scenarios where the tool might be used, including playing games, using mid-air gestures to pause and play videos, manipulating presentation slides, and controlling drones and robots.

I’m personally not completely convinced that people would want or need to use the technology in those scenarios, but I’ve been playing around with the device recently and it clearly has some potential as an assistive tool. In particular, I can see how it might be useful for some people with physical impairments who may have difficulty using traditional tools (e.g. mouse, keyboard, stylus) to interact with technology.

To start using it, you slip the armband on and calibrate the device; it can then begin tracking the movements and gestures that you make. It works by reading the electrical activity in your muscles and the movements of your arm, enabling you to perform a range of mid-air gestures which can be mapped to different events or actions (e.g. moving a fist up and down to scroll a page).

The product includes software that allows you to type via a virtual keyboard and control the mouse cursor via different gestures. Whilst this is initially quite engaging and fun, it soon becomes frustrating to use and it’s clear that bolting this technology onto existing platforms such as Windows will never be a pleasant user experience over the longer term.

New application designs are needed that better support the novel interaction method offered through this device – simply trying to use it with interfaces that have been designed for a mouse and keyboard is unlikely to work effectively. I can imagine that it has some potential for people with physical impairments as a switch-based interface – that is, for example, if people can reliably tap two fingers together, this could be used as input for controlling a system.

Admittedly, this type of interaction would be tedious and slow, but for some people who have significant issues in using a mouse and keyboard it may provide a more accessible (and pain-free) method for controlling a computer. The Myo also comes with SDKs (software development kits) for a variety of platforms (Windows, Mac, Android, and iOS), so there’s the opportunity for developers to create solutions that are specifically customised to an individual’s preferred method of control.
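For example, a developer might map each recognised pose to a custom action. The sketch below is purely illustrative – the pose names follow Myo’s documented gesture set, but the event loop is simulated rather than wired to the real SDK:

```python
# A minimal sketch of mapping Myo poses to system actions. The pose names
# (fist, wave_in, etc.) match Myo's documented gesture set, but the events
# below are simulated -- a real application would receive them from the
# Myo SDK (via its native bindings or a community wrapper).

def scroll_down():
    print("Scrolling page down")

def next_slide():
    print("Advancing to next slide")

def toggle_playback():
    print("Pausing/playing video")

# Each recognised pose is mapped to a custom action -- for a switch-based
# interface you might map a single reliable gesture (e.g. double_tap) to
# the "select" event of a scanning keyboard.
POSE_ACTIONS = {
    "fist": scroll_down,
    "wave_in": next_slide,
    "double_tap": toggle_playback,
}

def on_pose(pose_name):
    """Dispatch a recognised pose to its configured action."""
    action = POSE_ACTIONS.get(pose_name)
    if action:
        action()

# Simulated stream of pose events for illustration.
for pose in ["fist", "double_tap", "wave_in", "fingers_spread"]:
    on_pose(pose)
```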

There’s also some great work going on around combining the device with prosthetics (check out the video below). This project is looking at how the Myo can be used to control specific movements of a robotic prosthetic hand to perform fine motor movements such as picking up objects with a couple of fingers. Really impressive work!

The Myo is certainly an interesting device with lots of potential – it will be exciting to see what’s developed with it over the next couple of years (also make sure to check out the Myo app store which provides some examples of what’s possible…).

An Introduction to Eye Gaze Tracking Technology

Eye gaze tracking enables people with severe physical impairments to interact with systems via their eyes. This technology has been around for a while now, but it used to be very expensive (several thousands of pounds just a few years ago) and therefore was not a viable option for the vast majority of people who could potentially benefit from using it.

However, the price has dropped dramatically over the past couple of years and there are now devices costing under £100 which can accurately track a user’s eyes. Furthermore, there is also open source software (and other low-cost alternatives) that works with these devices and enables users to control systems in a cost-effective way.

The process for setting up eye gaze tracking is relatively straightforward – it typically works by plugging a portable eye tracking device into your computer and then calibrating your eyes via software (usually through following a dot around a screen). Once a user’s eyes have been calibrated they can then start using their eyes to interact with applications.
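To give a rough sense of what calibration achieves conceptually, the sketch below fits a simple mapping from raw tracker readings to screen coordinates using known dot positions. Commercial trackers handle this internally (typically with more sophisticated models), and all the numbers here are made up for illustration:

```python
import numpy as np

# Known calibration dot positions on screen (x, y in pixels).
screen_points = np.array([[100, 100], [960, 100], [1820, 100],
                          [100, 980], [960, 980], [1820, 980]], dtype=float)

# Hypothetical raw tracker readings recorded while the user fixated each
# dot (with a little offset/scale error, as a real sensor would have).
raw_readings = np.array([[0.11, 0.09], [0.52, 0.10], [0.93, 0.11],
                         [0.10, 0.91], [0.51, 0.90], [0.94, 0.92]])

# Fit an affine transform: [x_screen, y_screen] = [x_raw, y_raw, 1] @ A
design = np.hstack([raw_readings, np.ones((len(raw_readings), 1))])
A, *_ = np.linalg.lstsq(design, screen_points, rcond=None)

def raw_to_screen(raw_x, raw_y):
    """Map a raw gaze reading to calibrated screen coordinates."""
    return np.array([raw_x, raw_y, 1.0]) @ A

print(raw_to_screen(0.5, 0.5))  # roughly the centre of the screen
```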

Eye Gaze Tracking Devices

Tobii EyeX: This device provides eye gaze tracking for under £100. It’s marketed primarily towards game developers, but has huge potential as an assistive device. There is also a software development kit available for those of you interested in developing your own eye tracking applications.

Eye Tribe: The Eye Tribe tracker is another more affordable eye gaze tracking device that allows you to easily add eye tracking functionality to a computer. Like the Tobii EyeX sensor, there is also a software development kit that enables developers to build applications that support eye gaze interactions.

Tobii Dynavox: In addition to the Tobii EyeX, Tobii also provides other eye tracking solutions through Tobii Dynavox – part of the Tobii group that specialises in building assistive technologies. Their products include the I-Series+, EyeMobile, PCEye Explore, and the PCEye Mini. These products tend to be more expensive than the Tobii EyeX and Eye Tribe devices.

Eye Gaze Tracking Software

Tobii Dynavox Applications: In addition to their eye tracking devices, Tobii also sell a range of eye gaze software. This includes AAC software, computer access tools, and educational applications.

OptiKey: This is open source communication and Windows control software that enables people with severe physical and motor impairments to use their eyes to interact with a wide range of applications. I’ve previously written about OptiKey if you’d like some further details about how it works.

Project IRIS: This is software that enables users to control the mouse cursor via their eyes. You can also use it to set up “interactors” which are essentially large custom buttons you can create and then associate with specific keyboard shortcuts.

Grid 3: Grid 3 is a recent update to Grid 2 and is one of the most popular assistive applications available on the market. It can work with a range of different technologies, but is commonly used with eye tracking devices. It provides communication tools, customised interfaces to common applications (e.g. Facebook, web browsers, etc.), environment control, and a suite of accessible games.

Interaction Considerations

Much of the software we use today has typically been designed for use with a mouse and keyboard, but this type of interaction is very different from using your eyes to interact with systems. Attempting to select small icons, for example, is quick and easy to do with a mouse, but can be incredibly tedious and frustrating when using your eyes (primarily because our eyes are always moving).

Software that’s designed for eye gaze tracking (e.g. OptiKey) often provides workarounds to address this type of problem (e.g. the ability to zoom into areas of a screen and then make a more accurate selection), but these additional selection steps can slow down your ability to efficiently use an interface. However, for people with severe physical impairments where eye gaze tracking may be the only viable solution for interacting with computers, this may be perceived as a relatively minor issue.

Another consideration when using eye tracking technology is an effect commonly referred to as the “Midas Touch”. This refers to the problem that if you’re using your eyes to operate some software then everything is potentially “selectable” which can lead users to frequently make accidental selections. The potential for everything on the screen to be selectable can also result in “tiring” user experiences because it can require lots of focused attention and concentration to ensure you don’t select objects by mistake.

The core design problem here is how can a computer distinguish between whether you’re casually looking around the screen (with no intention to make a selection) versus when you actually want to look at an object to select it (e.g. selecting a link on a webpage)? Again, eye tracking software normally provides a variety of potential solutions to the Midas Touch problem, but it’s an important issue to be aware of if you’re considering using this type of technology.
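The most common of these solutions is dwell-time selection: a target is only “clicked” if the user’s gaze rests on it beyond a set threshold, so casual glances don’t trigger anything. Here’s a minimal sketch of the idea (the timing values and toy gaze samples are illustrative):

```python
import math

DWELL_TIME = 0.8      # seconds of sustained gaze required to select
DWELL_RADIUS = 40.0   # pixels of wobble tolerated around the dwell centre

class DwellDetector:
    def __init__(self):
        self.start_time = None
        self.centre = None

    def update(self, x, y, timestamp):
        """Feed one gaze sample; returns the dwell point when selection fires."""
        if self.centre is None or math.dist((x, y), self.centre) > DWELL_RADIUS:
            # Gaze has moved on -- restart the dwell timer at the new point.
            self.centre = (x, y)
            self.start_time = timestamp
            return None
        if timestamp - self.start_time >= DWELL_TIME:
            selected = self.centre
            self.centre, self.start_time = None, None  # reset after firing
            return selected
        return None

detector = DwellDetector()
# Simulated samples: the user fixates near (500, 300) for about a second.
for t in [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]:
    result = detector.update(500 + t * 5, 300, t)
    if result:
        print("Selection triggered at", result)
```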

It is also likely that you would not want to use your eyes to control a mouse cursor – it can be easy to assume that this is how eye tracking technology might work best, but due to the nature of our eye movements (i.e. they are constantly on the move), this can produce a very jarring and uncomfortable user experience. Moreover, if you’re using your eyes to control the cursor, it is difficult to focus on other parts of the screen without selecting something by mistake (this is again relevant to the Midas Touch problem).
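Where gaze is used to drive a cursor, one common mitigation is to smooth the raw signal so the cursor doesn’t twitch with every micro-movement of the eye. A minimal sketch using an exponential moving average (the smoothing factor is illustrative and trades stability against lag):

```python
ALPHA = 0.15  # lower = smoother but laggier cursor

def make_smoother(alpha=ALPHA):
    state = {"x": None, "y": None}
    def smooth(x, y):
        # First sample initialises the filter; later samples nudge it.
        if state["x"] is None:
            state["x"], state["y"] = x, y
        else:
            state["x"] += alpha * (x - state["x"])
            state["y"] += alpha * (y - state["y"])
        return state["x"], state["y"]
    return smooth

smooth = make_smoother()
# Jittery raw samples around (400, 400) settle to a stable cursor position.
for raw in [(395, 404), (406, 398), (399, 403), (402, 397)]:
    print(smooth(*raw))
```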

Videos

Here is a selection of videos showing some examples of eye tracking in use and the types of products and software that are commercially available.

Tobii Dynavox – The Future of Disability Technology

This video provides an overview of Tobii’s I-Series+ that uses eye tracking to enable people to communicate with others and to control standard software such as Facebook, Skype, etc. It also provides a high level overview of how the technology works.

The Eye Tribe Tracker

This is a promotional video for the Eye Tribe tracker (which is available for under £100) that provides some nice examples of how eye tracking technology might be used in a variety of different areas (not just for assistive purposes).

Tobii EyeMobile

This video provides an overview of Tobii’s EyeMobile product that enables users to control a Windows 8 tablet with their eyes. This type of solution will be considerably more expensive than using a budget eye gaze tracker such as the Tobii EyeX or Eye Tribe, but it does provide an integrated solution that should work out-of-the-box.

OptiKey – FREE Eye Gaze Software

This video includes an overview of the free and open source software “OptiKey” by the developer Julius Sweetland. In this video he highlights his motivation for making the software free and why he decided to build the application.

Grid 3

This is a promotional video for Grid 3 – assistive software that is commonly used in conjunction with eye gaze tracking technology. The video provides examples of the type of functionality that Grid 3 provides in addition to showing some of their accessible applications (e.g. a customised interface for using Facebook).

Sarah Ezekiel

Sarah is an artist with motor neurone disease who produces her artistic work through eye gaze alone. Check out Sarah’s website to see some examples of her work (I’ve been working with Sarah through the D2ART project to explore the potential of new tools for disabled artists).

Further Reading (Academic Papers)

Eye tracking in human-computer interaction and usability research: Ready to deliver the promises
Robert J. K. Jacob and Keith S. Karn (2003)

This chapter was written in 2003, but is still highly relevant. It provides an overview of the history of eye tracking technology and highlights many of the key issues involved in eye-based interactions.

Twenty Years of Eye Typing: Systems and Design Issues
Päivi Majaranta and Kari-Jouko Räihä (2002)

This paper discusses eye typing systems and highlights the key design issues in building such systems. Again, whilst the paper is over ten years old, it still presents interaction issues that are highly relevant in today’s eye tracking systems.

Eye gaze tracking for human computer interaction
Drewes (2010)

This is a link to a PhD thesis which in the earlier sections provides an overview of how eye tracking works and some of the interaction issues involved in using this technology to interact with software and systems.

Assistive Technology for People with Parkinson’s Disease

People with Parkinson’s disease can have significant issues in using computers and can experience real difficulties when using a traditional mouse and keyboard. Tremor is the symptom most commonly associated with Parkinson’s, but there are several others that can influence the use of technology, such as muscle stiffness, reduced motor ability, issues with coordination, and a general lack of energy.

As the condition develops it can result in people becoming increasingly dependent on others for support with a wide range of everyday tasks. Having the ability to use technology and computers can therefore play a crucial role in enabling people to maintain some level of control and independence in their lives.

Making recommendations around assistive technology to support people with Parkinson’s can be particularly problematic due to the diversity of symptoms that individuals can experience. The fact that Parkinson’s is a degenerative condition also complicates matters as symptoms can change and develop over time.

There has been surprisingly little research exploring how people with Parkinson’s get on with assistive technology and its potential to support their use of computers. One study that has investigated this area was conducted by Begnum and Begnum, who examined the potential of “off-the-shelf” technology and peripherals to help make interactions with computers more comfortable.

They ran a study with eight people who had Parkinson’s where they made 45 assistive tools available to participants – these included a wide array of specially designed mice and keyboards, joysticks, trackballs, key guards, multi-touch tablets, and numerous other devices. The authors specifically chose tools that can easily be purchased from major retailers to ensure they are widely accessible to people with Parkinson’s.

The participants were allowed to choose which items to interact with and were given some set tasks to complete in addition to having time to freely experiment with each chosen device. There were several themes that emerged from the study – in particular, all participants (unsurprisingly) had issues in controlling an on-screen mouse pointer and clicking buttons.

To address these core issues, trackballs were found to be especially effective for people with tremor and all participants were reportedly helped when using these devices. Trackballs of different sizes and types were tested and the most important consideration appeared to be the match of hand size to trackball size. So, unsurprisingly, smaller trackballs worked best for people with smaller hands whereas users with larger hands preferred larger trackballs.

However, despite trackballs helping with mouse control, participants still had significant issues with clicking buttons and as such these types of devices didn’t provide an ideal solution for any of the users taking part in the study.

Several participants preferred a joystick for making clicks, primarily because the buttons are positioned in different locations from a standard mouse and trackball. On the other hand, the joystick approach also received negative feedback as some people found the devices heavy and cumbersome to use – this, in turn, resulted in some participants experiencing pain and tiredness.

This difference of opinion demonstrates the difficulty in finding appropriate assistive solutions for people with Parkinson’s – what works for some people may not be effective for others – it’s always crucial to develop solutions for individuals and their own specific requirements.

In terms of keyboards, smaller keyboards worked well in some cases as they limited finger movements which in turn reduced pain and fatigue. Conversely, some users with difficulty in selecting smaller targets needed larger keyboards (despite the “knock-on” issues around straining). Split keyboards appeared to work well for participants with muscle stiffness, although rubber keyboards and key guards were not perceived positively.

In summary, the authors conclude that, despite the wide range of technology tested, there was no single solution that was universally effective for mouse and keyboard use. However, it’s important to highlight that several of the devices evaluated did (in some way) enhance mouse and keyboard control and can thus offer significant benefits over traditional tools (check out the paper for the full list of technology tested). These were also tools that are widely available and can easily be purchased online (or from major retailers).

Unfortunately, as the authors discuss, many people with Parkinson’s are unaware of the assistive options commercially available and thus persist in struggling to use a standard mouse and keyboard – or become increasingly reliant on others to perform tasks for them (in turn reducing levels of independence).

This is a real shame and clearly more needs to be done to raise awareness of the assistive technology available on the market that can better support and enhance the use of computers and other technologies.

Reference
Begnum, M. E. N. & Begnum, K. M. (2012) On the usefulness of off-the-shelf computer peripherals for people with Parkinson’s Disease. Universal Access in the Information Society, 11(4), 347-357.

Assistive Technologies and Social Exclusion

Digital technologies are often heralded as having the potential to transform opportunities for disabled people. And rightly so – there are many examples of assistive digital tools that enable people with severe impairments to still maintain independence and participate in activities that wouldn’t be possible without the use of technology.

There are also a wide range of new innovative products constantly being released that are typically more affordable than they were 5-10 years ago. For instance, eye gaze tracking, mid-air gesturing, speech recognition, and motion tracking have dropped significantly in price over the past few years and are now much more accessible to disabled people.

But is the widely reported potential around digital tools for disabled people actually being realised today? Is assistive technology helping a majority of disabled people in their everyday lives and enabling them to develop and maintain independence? Are disabled users aware of the latest cutting-edge opportunities that enable them to interact with systems in new ways?

This is important to understand as more of our everyday activities are increasingly carried out online. If disabled people are not able to access or use assistive tools, digital technologies and online experiences could actually add another significant barrier, leading to further exclusion.

Macdonald and Clayton explored these types of areas in their paper “Back to the future, disability and the digital divide”. They wanted to investigate the impact of digital technologies on enhancing life opportunities for disabled people from deprived areas in Sunderland (UK). The authors conducted a survey with 811 people (300 disabled and 511 non-disabled) to explore how disabled people currently use digital and assistive technologies and the extent to which they can help them address disabling barriers and social exclusion (in comparison to a non-disabled socially excluded population).

There were some surprising findings – for instance, 71% of disabled participants stated that they had never used a laptop or personal computer. Moreover, 73% of respondents reported never having connected to the Web! These are worrying results – it can be easy to assume that disabled people will use these types of products and technologies because they are so pervasive and clearly offer huge benefits.

But it seems this isn’t happening for some reason, and a majority of people from this group of participants aren’t accessing even the most common and readily available mainstream products. When examining potential reasons behind these findings, the authors mention that the main barriers to accessing digital and online technologies are insufficient funds and a lack of confidence in skills or knowledge.

In addition to the above digital technologies, the authors also asked about assistive technologies and the extent to which disabled people are engaging with them. Again, the results are surprising – only 7% of people stated that they are using technology to assist them in independent living. So, in addition to a large number of participants not using common mainstream products, it appears that they are also not engaging with technologies that have been designed specifically with their needs in mind.

This study was conducted a few years ago, so the situation may have changed over that period – although my feeling is there’s unlikely to have been a dramatic shift. It’s also arguable whether these results apply to other regions around the UK and abroad. It’s clear, though, that despite the huge opportunities offered by digital technologies, they are not being utilised by disabled people (I’ve regularly seen this in my own research). If even the most basic tools aren’t commonly used, then it’s highly unlikely that more innovative devices are being widely used (or that people are even aware of them).

The authors conclude that rather than presenting enhanced life opportunities, digital technologies “… add an extra layer of exclusion” and that “… [they] will create a new level of social inequality reinforcing the digital divide within the UK”. These are strong statements that could become a reality if a majority of disabled users remain unaware of the digital assistive tools available or if the extortionate cost of some tools continues to make them inaccessible.

Reference
Macdonald, S. J. & Clayton, J. (2013) Back to the future, disability and the digital divide. Disability & Society, 28(5), 702-718.

Boomer Foot Mouse: Using Your Feet For Computer Control

The Boomer Foot Mouse is a device which looks like it has some interesting potential as an assistive tool. As the name of the device suggests, it enables the control of an on-screen cursor using only your feet. It doesn’t seem to be marketed primarily as an assistive device, but clearly has potential for people with a range of physical impairments (who may have difficulty in using a traditional mouse and keyboard).

[Image: the Boomer Foot Mouse]

From the overview video on the product’s website (included below) it looks like the middle button is used for cursor control whereas the left and right buttons are (unsurprisingly) for left and right clicks.

It’s fixed at an 18° angle (I’m guessing to make it more comfortable to use) and fitted with anti-skid pads to restrict movement during use (which is especially important for people with motor impairments who may accidentally move/kick it away).

It’s a novel device, although I wonder if any longer term evaluations have been conducted to see how people have found using it over extended periods of time. I’d imagine that some degree of motor control would be essential – especially when using the middle button for controlling the cursor.

Also, do people prefer to use one foot for controlling everything or does some combination of both feet work better? Or maybe this is purely down to an individual’s preferences? I wonder what double clicking would be like as well – if you’re performing some activity that requires lots of rapid double clicks (perhaps when playing a game), it seems like this could quickly make your feet start to ache.

On the overview video an example scenario is provided where the user is editing a spreadsheet – I can see that the device might be well suited for this type of relatively basic interaction, but I’m not sure how well it might support more creative and artistic tasks where fluid and dynamic cursor movements may be required.

It’s certainly an interesting device that might be useful for many disabled users – although it’s a little “pricey” at the moment at around $895 – which may ultimately prove a little too expensive for some people.

Designing Assistive Technology for Social Acceptance

Assistive technologies are typically designed with a primary focus on making them functional and usable for people with a range of different impairments. This is clearly a worthy goal and essential to ensure that assistive tools can help support disabled users in their everyday lives.

However, much less work to date has focused on the equally crucial area around the social impact of assistive technologies and how people feel when using them in a range of social situations. It is therefore unclear what disabled people think about assistive tools and how they are perceived by others out “in the wild” (i.e. beyond a development studio or research laboratory).

To address this lack of work, Shinohara and Wobbrock interviewed individuals with a variety of impairments to explore how the use of assistive technology is influenced by different social contexts. The interviews highlighted a number of interesting and important themes – in particular, it’s clear that certain assistive technologies can “mark” people out as being disabled and thus make them feel more self-conscious (e.g. phones that speak aloud).

This was especially prevalent in tools that were specifically designed as assistive devices (as opposed to mainstream products) – these types of tools tend to stand out more due to their uncommon and non-standard design. Conversely, devices that are smaller and look more like mainstream products unsurprisingly attract less attention.

The authors also reported that participants appreciated how assistive technologies can help provide equal access, but also that some tools are not as technically advanced as mainstream devices. To compound this further, these less technically capable devices typically cost significantly more than mainstream products.

You therefore end up paying far more for much less!

Another theme to emerge from the interviews was the appearance and aesthetics of assistive tools. In particular, it was highlighted that little effort appears to be put into the look and feel of assistive devices. On the surface it can be easy to assume that this doesn’t matter – if you can’t, for example, see an assistive device (due to a visual impairment), why does it matter what it looks like?

However, the authors reported that this is an important consideration for disabled users and their assistive tools. Blind users, for instance, still wanted devices that are beautiful and aesthetically appealing. This can help to reduce social anxieties around the use of assistive technologies whereas using large, bulky, and unattractive devices can attract unwanted attention.

Avoidance was another theme the researchers highlighted – in the sense that participants described avoiding certain assistive technologies as much as possible at different points in time. For instance, people avoided using a cane for as long as possible because of the way it can immediately mark you out as being disabled.

I’ve also heard similar views from disabled artists I’ve worked with on the D2ART project who have spoken, for example, about putting off using a wheelchair for as long as possible for similar reasons (despite experiencing significant issues in being able to walk). However, once they decided to try out a chair (with much apprehension), they actually found it to be very liberating, offering more opportunities and independence.

To address many of the issues around the social impact of assistive technologies, Shinohara and Wobbrock suggest a new design approach – Design for Social Acceptance (DSA) – where increased emphasis is placed on designing technologies that are more socially acceptable.

This means moving away from designing assistive tools in isolation without the input of disabled users – a more holistic and collaborative approach is required in conjunction with longer term field studies to explore perceptions and the impact of different designs in a social context.

This clearly adds time, resource, and complexity to a project – but is crucial to increase the likelihood that assistive tools will be “accepted” and regularly utilised by disabled users.

Reference
Shinohara, K. & Wobbrock, J. O. (2011) In the shadow of misperception: assistive technology use and social interactions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 705-714).

Finger Mouse – A Cheap Assistive Tool for Physical Impairments

There are lots of alternative mice around on the market that aim to help support people with physical impairments when using computers. However, there are also other interesting devices that have potential as assistive tools, but are not necessarily marketed or designed with this area in mind.

One example is Finger Mouse – an optical mouse I found recently that you can strap to your finger. When I first saw it, it was only £1.50 – it looks like it’s gone up to around £4 now (at the time of publishing this post), but it’s still very cheap and has potential as an assistive tool.

We evaluated use of this device on the D2ART project last year with physically impaired artists to explore whether it might be of any use in their practice. It proved surprisingly popular – in particular, artists with arthritis, dystonia, and other physical impairments found the device especially useful.

These positive responses were primarily because it enabled them to hold a mouse in a different way that was more comfortable than when using their whole hand to control a “standard” mouse (which can cause physical pain and discomfort).

[Image: the Finger Mouse]

In fact, since the evaluation sessions I’ve spoken with several of the artists we worked with and they have informed me that they have purchased the device and are using it as an alternative to a traditional mouse.

It’s a very basic device that’s powered through a USB connection and is designed to be operated between your index finger and thumb (although you could potentially use it on a different finger).

The artists that we worked with varied in how they preferred to use it – some liked to operate it directly on a table whereas others tended to use their own body as the surface on which they placed the device (i.e. predominantly their lap).

One artist with cerebral palsy also experimented with using it on the floor, which is how she typically produces her artistic work (as it provides her with more stability and control) – so it made sense for her to use the device in the same way she creates her art.

If you’re on the lookout for an alternative mouse, this one may well be worth checking out. It’s very affordable and provides a different approach for controlling your computer – and it was clearly a popular option with the artists we worked with on the D2ART project.

3D Print Your Own Mouth Controlled Mouse (For Under $20!)

Assistive tools and technologies can often be very expensive which in turn can make it difficult for the people who need them the most to afford and access them. It’s almost like disabled users have to pay an additional “tax” to get hold of technology that can help them in everyday tasks (whereas the rest of us can use more affordable “mainstream” products).

That’s why I love this project – it shows how to build a mouth operated mouse for under $20! The core components can be generated from a 3D printer and the other parts are readily available across the web for a small amount of cash.

The mouse can be connected to most PCs via a standard USB connection and the design of the casing enables it to be mounted to tripods thus allowing users to place it in a comfortable position that best suits their individual needs.

As you can see from the video above, the mouse works much like a joystick in that you move the stick with your mouth to control the on-screen cursor. The right button can be activated through pushing down on the mouthpiece whilst the left button can be “clicked” through a sensor built into the device that can detect when the user has sucked air through it.
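Conceptually, the mapping from mouthpiece readings to mouse events might look something like the sketch below. To be clear, this is not the project’s actual firmware (the real device presents itself as a standard USB mouse) – the thresholds and sensor ranges are invented for illustration:

```python
SUCK_THRESHOLD = 0.6    # normalised airflow reading that counts as a "suck"
PUSH_THRESHOLD = 0.7    # downward pressure that counts as a push
DEADZONE = 0.05         # ignore tiny joystick deflections

def map_inputs(stick_x, stick_y, push, airflow):
    """Translate raw mouthpiece readings into a mouse report."""
    dx = 0 if abs(stick_x) < DEADZONE else int(stick_x * 10)
    dy = 0 if abs(stick_y) < DEADZONE else int(stick_y * 10)
    return {
        "dx": dx,                           # cursor movement from the stick
        "dy": dy,
        "left": airflow > SUCK_THRESHOLD,   # sucking air = left click
        "right": push > PUSH_THRESHOLD,     # pushing down = right click
    }

# Example: a gentle nudge right while sucking through the mouthpiece.
print(map_inputs(stick_x=0.3, stick_y=0.0, push=0.1, airflow=0.8))
# -> {'dx': 3, 'dy': 0, 'left': True, 'right': False}
```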

The video shows the user completing a variety of tasks, although it also highlights some potential interaction issues that may still need to be addressed. For instance, you can see the user struggling to select multiple files and then moving them into another folder. This does eventually get completed successfully, but it takes a few attempts (I’m guessing this could become less of an issue after continued use with the device).

I also wonder how tiring use of this mouse might become over time – could it result in jaw ache or neck strain over extended periods of interaction? Furthermore, it would be good to know more about how dragging with the device works – it seems like this would involve sucking air through the device for a lengthy period of time (i.e. to simulate a click and hold) which could be an issue for many users.

It might be best to combine this approach with software such as Dwell Clicker 2, where the user can select the type of click they would like to perform (e.g. click and hold) and then use the joystick on the device to carry out the actual dragging movements.

It would also be interesting to get a sense of what it’s like to use applications such as Photoshop and Illustrator that typically require the selection and manipulation of small icons and interface elements. These can easily be controlled with a standard mouse, but how does this device compare?

It’s great to see projects like this one! It shows what’s possible with some cheap materials, a 3D printer, and a bit of creativity. There’s clearly huge potential in this area for creating affordable assistive tools that can be tailored to the specific requirements of individual users.

Multi-Touch Web Browsers for People with Tremor

People with tremor can experience significant barriers when attempting to use a range of input devices for computers. For instance, trying to accurately control a mouse or use a standard keyboard can be particularly problematic and in turn often results in frustrating user experiences.

Moreover, people with tremor can find touch-based interfaces incredibly difficult to use effectively. In their paper – Designing a Touchscreen Web Browser for People with Tremor – Wacharamanotham et al. discuss the issues that tremor can cause when using multi-touch interfaces.

They highlight an evaluation they conducted with 20 participants (with intention or Parkinsonian tremor) in which they tracked each user’s touches when using single and multiple fingers for a range of actions (e.g. tapping and sliding).

In terms of tapping, the authors observed “inadvertent lift-and-land movements” when manipulating an object – that is, tremor can cause a user’s finger(s) to briefly lift off the screen when (for example) they are attempting to select or move an object to a different location.

This can make it hard to distinguish double and triple taps from a single tap which can cause interaction issues as double/triple taps could potentially have different actions associated with them (depending on the application being used).
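One possible mitigation (not necessarily what the authors propose) is to merge a lift-and-land that happens quickly and close to the previous touch back into a single tap. A small sketch, with illustrative thresholds that would need tuning per user:

```python
import math

MERGE_WINDOW = 0.25   # seconds within which a lift-and-land is merged
MERGE_RADIUS = 30.0   # pixels within which the re-land must occur

def merge_taps(taps):
    """Collapse inadvertent lift-and-land events into single taps.

    `taps` is a list of (timestamp, x, y) tuples in time order.
    """
    merged = []
    for t, x, y in taps:
        if merged:
            last_t, last_x, last_y = merged[-1]
            if (t - last_t < MERGE_WINDOW
                    and math.dist((x, y), (last_x, last_y)) < MERGE_RADIUS):
                merged[-1] = (t, x, y)  # same touch continuing -- not a new tap
                continue
        merged.append((t, x, y))
    return merged

# Three rapid lift-and-lands at roughly the same spot collapse to one tap.
print(len(merge_taps([(0.00, 200, 200), (0.10, 205, 198), (0.18, 202, 203)])))  # 1
```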

It was also found that sliding movements can be problematic as the jitter caused by tremor can again lead to the finger being raised from the surface – which can result in the interface element becoming inactive. Furthermore, the authors found users’ fingers jittered more the closer they got to a target (they suggest this is probably due to increased anxiety as users get close to completing a task).

To address some of these issues Wacharamanotham et al. describe a new design for a touch-based web browser. In particular, they introduce “swabbing” – an interaction technique that enables a selection to be made via sliding movements towards the intended target (which the authors previously found lowered error rates).

[Image: the multi-touch browser with the swabbing interface overlaid on some text]

As can be seen in the image above, the web browser utilises a semi-transparent overlay that contains different options thus enabling users to browse and navigate the web through the swabbing interface.

A user can display the swabbing interface by tapping with five fingers on the screen – a single finger can also be held down to toggle a zoom mode to allow users to focus on a specific area of the page (thus refining the number of links, for example, that can be selected).

It seems from the paper that several overlays are available (although I’ve not personally tested the system) – the first provides basic browser functionality such as entering a URL, going back and forward between pages, and navigating between different browser tabs. There are also overlays for entering form fields and inputting text which requires the use of two-finger and three-finger sliding to navigate between characters.

In terms of selecting a link on a page, each link has an arrow next to it that matches the colour and angle of the relevant target in the swabbing interface (you can see an example of this in the image above). The user can then select the link of interest by sliding towards the direction of the target.
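As I understand it from the paper, the key idea is that accuracy depends on the overall direction of the slide rather than on landing precisely on a small link. The sketch below captures that matching step – the overlay targets, their angles, and the tolerance are all hypothetical:

```python
import math

# Hypothetical overlay targets, each with the angle (degrees) of its arrow.
TARGETS = {"Back": 180, "Forward": 0, "New tab": 90, "Enter URL": 270}
TOLERANCE = 30  # degrees of error tolerated around each target's angle

def select_by_swab(start, end):
    """Return the target whose angle matches the slide direction."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    angle = math.degrees(math.atan2(-dy, dx)) % 360  # screen y grows downward
    for name, target_angle in TARGETS.items():
        diff = abs((angle - target_angle + 180) % 360 - 180)
        if diff <= TOLERANCE:
            return name
    return None

# A jittery slide that is nonetheless broadly upward selects "New tab".
print(select_by_swab(start=(300, 500), end=(320, 280)))
```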

The paper is from 2013 and the authors discuss running a longitudinal study where a comparison would be made between this interface and a standard touch-screen web browser. It would be really interesting to see the results from this work – in particular, I wonder how well this type of approach would work for different web experiences (e.g. standard web pages versus interactive applications/games).

Also, how long does it take for people to become familiar with this new interaction method and how quickly can people accurately browse across pages? Does it provide any benefits over simply making targets (i.e. links) much larger in size to help facilitate accurate selection?

It’s certainly an interesting interaction approach I’ve not seen before!

Reference
Wacharamanotham, C., Kehrig, D., Mertens, A., Schlick, C., & Borchers, J. (2013) Designing a Touchscreen Web Browser for People with Tremor. Workshop on Mobile Accessibility, CHI2013.

Bolting On Accessibility

The process of making software, applications, and other interactive experiences accessible for disabled users often involves incorporating assistive solutions into existing and standard interfaces.

Eye gaze tracking software (e.g. OptiKey), for example, typically attempts to make Microsoft Windows more accessible through a range of different features (e.g. zooming, altering cursor size, snapping cursors to interface elements to enhance selection, etc.).

This is clearly important work and whilst it does help to make Windows more accessible, there’s still the underlying issue that we’re “bolting” novel technologies onto existing platforms.

To continue this example – Windows has primarily been designed to be used with a mouse, keyboard, and our fingers – using our eyes to control an interactive experience is very different from traditional approaches and comes with a whole host of pros and cons. Attempting to use your eyes to control a “standard” interface is therefore likely to be problematic and awkward in many circumstances.

For instance, take the selection of small items in an interface (e.g. an icon, an item from a drop-down list, etc.) – this is simple and fast to do with a mouse and keyboard, but much more difficult with your eyes.

A common solution in eye gaze applications is to use a magnifier – the user first looks in the general area where they’d like to make a selection, a magnifier is then displayed (after a button press or a dwell time selection), and the user can then make a more accurate selection within that magnification window (through again looking at the target and performing another button press).

So, whilst this approach enables users to make (relatively) accurate selections via eye gaze, they are also being forced to perform several steps to select an item when only two steps should be required (i.e. look at the item and select it – via dwell time or a button press).
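For what it’s worth, the reason the workaround helps is that it effectively divides gaze error by the zoom factor: a wobble of 40 pixels in the magnified view corresponds to only 10 pixels on the real screen at 4x zoom. Here’s a minimal sketch of the two-stage flow (the magnification factor is illustrative, and real tools such as OptiKey wrap this in far more polish):

```python
MAGNIFICATION = 4.0

class MagnifierSelection:
    def __init__(self):
        self.region_centre = None  # set after the first (coarse) selection

    def first_look(self, x, y):
        """Stage 1: a coarse gaze point opens a magnifier around that area."""
        self.region_centre = (x, y)
        print(f"Magnifying {MAGNIFICATION}x around {self.region_centre}")

    def second_look(self, mag_x, mag_y):
        """Stage 2: a point chosen inside the magnified view maps back to
        true screen coordinates, giving a far more precise final click."""
        cx, cy = self.region_centre
        true_x = cx + (mag_x - cx) / MAGNIFICATION
        true_y = cy + (mag_y - cy) / MAGNIFICATION
        self.region_centre = None
        return (true_x, true_y)

selector = MagnifierSelection()
selector.first_look(640, 360)          # rough glance near the target
print(selector.second_look(680, 320))  # precise pick in the zoomed view
```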

There’s no doubt that applications such as OptiKey and some of the tools offered by Tobii (e.g. Windows Control) can be a real enabler for many disabled users – but there is surely value in moving away from only making current platforms and software more accessible.

I’d like to see more applications that have been designed and optimised specifically for people using eye gaze tracking and other assistive technologies. This may require radically different interface designs from what we’re traditionally familiar with using, but this can also result in novel and more effective interaction experiences.

And who knows – these “new” interface approaches may end up appealing to a much wider mainstream audience!