People with physical and motor impairments can face significant barriers in producing artistic and graphical work. Through the D2ART project I’ve observed first-hand the issues people face when using traditional tools such as brushes and pencils.
Some assistive tools such as head wands, mouth sticks, and custom-designed grips (for holding brushes) can help make this process a little more accessible, but they can also lead to other health complications such as damage to teeth and neck strain.
Digital technologies hold much potential in providing new opportunities for people with motor impairments to produce artistic work. On the D2ART project we’ve been exploring a range of different technologies such as mid-air gesturing (using a Leap Motion sensor), eye tracking (via Tobii EyeX), head tracking (through webcams and software such as Enable Viacam), and facial expression “switches” (e.g. using KinesicMouse).
Another technology that holds potential is voice recognition – that is, the ability to use your voice to control systems and produce graphical and artistic work. However, there has been little research to date that has investigated the potential of this approach for creative activities.
One interesting paper I recently read (VoiceDraw: a hands-free voice-driven drawing application for people with motor impairments) described a field study with a “voice painter” that explored his process for creating artistic work via his voice.
The artist had a spinal cord injury at C4-C5 level which affected the dexterity of his elbow and wrist (in addition to having no sensation in his hands). His speech was unimpaired, but he had limited lung capacity.
This artist had been using Dragon Dictate along with Microsoft Paint and the MouseGrid feature, which enables users to navigate different elements of an interface using their voice. The painter would begin by moving the cursor in one of eight directions (top, upper right, right, left, etc.) through a spoken command (e.g. “drag upper right”).
The pointer would then start moving in that direction at a default speed set by the artist, which could be altered at any time through additional commands (e.g. “much faster”, “move left”, etc.).
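This command-driven movement can be sketched roughly as follows. This is a hypothetical illustration of the interaction style described above, not Dragon Dictate’s actual implementation; the command phrases, default speed, and speed factors are all assumptions.

```python
# Hypothetical sketch of voice-command cursor control in the style of
# MouseGrid/Dragon Dictate. Commands, speeds, and factors are illustrative.

DIRECTIONS = {
    "up": (0, -1), "upper right": (1, -1), "right": (1, 0),
    "lower right": (1, 1), "down": (0, 1), "lower left": (-1, 1),
    "left": (-1, 0), "upper left": (-1, -1),
}

class VoicePointer:
    def __init__(self, x=0, y=0, speed=5):
        self.x, self.y = x, y
        self.speed = speed          # pixels moved per update tick
        self.direction = (0, 0)     # stationary until a "drag" command

    def command(self, phrase):
        """Interpret a recognised spoken phrase."""
        if phrase.startswith("drag "):
            self.direction = DIRECTIONS[phrase[len("drag "):]]
        elif phrase == "much faster":
            self.speed *= 2
        elif phrase == "much slower":
            self.speed = max(1, self.speed // 2)
        elif phrase == "stop":
            self.direction = (0, 0)

    def tick(self):
        """Advance the pointer one update step in the current direction."""
        dx, dy = self.direction
        self.x += dx * self.speed
        self.y += dy * self.speed
```

The key point the sketch captures is that the pointer keeps moving between commands, which is exactly why a mid-sequence recognition failure can be so costly.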
Whilst this approach can enable the production of graphical work, the authors also highlight several issues with this method. For instance, if the speech recognition fails mid-way through an extensive series of movements it can result in costly mistakes that typically require undoing everything and starting again (a similar issue can also occur when using the eraser tool).
The painter also informed the researchers that his current creative methods restricted expressiveness and meant that he couldn’t achieve fluid brush strokes. The authors attempted to address this through an experimental prototype – VoiceDraw – that provided people with motor impairments more control when creating freeform drawings with their voice.
In particular, they looked at the use of non-speech sounds as a way of providing people with the ability to draw in fluid movements. The application uses different vowel sounds to move the pointer in a particular direction (e.g. “ee” for left, “a” for up, “aw” for right, and “oo” for down). The prototype also recognises distinct sounds such as “ck” and “ch”, which were mapped to different types of events/interactions (e.g. a mouse click for making selections).
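The sound-to-action mapping described above could be sketched as follows. This is a minimal illustration assuming the sound recognition itself is handled elsewhere (e.g. by a separate audio classifier); the “ch” event mapping is a hypothetical placeholder, as the paper’s exact assignments beyond the mouse click aren’t reproduced here.

```python
# Illustrative sketch of the VoiceDraw-style sound mapping: vowel sounds
# move the pointer continuously, consonant sounds trigger discrete events.
# Sound recognition is assumed to happen upstream of this function.

VOWEL_DIRECTIONS = {
    "ee": (-1, 0),   # left
    "a":  (0, -1),   # up
    "aw": (1, 0),    # right
    "oo": (0, 1),    # down
}

CONSONANT_EVENTS = {
    "ck": "mouse_click",    # e.g. for making selections
    "ch": "mode_switch",    # hypothetical mapping for illustration
}

def handle_sound(sound, pos):
    """Return (new_position, event) for a recognised sound."""
    if sound in VOWEL_DIRECTIONS:
        dx, dy = VOWEL_DIRECTIONS[sound]
        return (pos[0] + dx, pos[1] + dy), None
    return pos, CONSONANT_EVENTS.get(sound)
```

Because vowels can be sustained, holding “aw” (say) produces a continuous stream of rightward movement updates rather than a one-off jump, which is what makes fluid strokes possible.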
The thickness of a brush could be altered whilst painting by varying the loudness of the user’s voice – the pitch of the voice was initially tested for this purpose, but the artist experienced issues with this approach.
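A loudness-to-thickness mapping of this kind might look something like the sketch below. The dB range and the pixel limits are assumptions for illustration; the paper’s actual calibration isn’t reproduced here.

```python
import math

# Hedged sketch: map the loudness (RMS amplitude) of an audio frame to a
# brush thickness in pixels. The quiet/loud dB thresholds and the
# min/max thickness values are illustrative, not from the paper.

def rms(samples):
    """Root-mean-square amplitude of a frame of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def thickness_from_loudness(samples, min_px=1, max_px=40,
                            quiet_db=-40.0, loud_db=0.0):
    """Linearly map loudness in [quiet_db, loud_db] to [min_px, max_px]."""
    level = rms(samples)
    db = 20 * math.log10(max(level, 1e-6))  # avoid log(0) on silence
    t = (db - quiet_db) / (loud_db - quiet_db)
    t = min(1.0, max(0.0, t))               # clamp to the mapped range
    return min_px + t * (max_px - min_px)
```

Mapping loudness rather than pitch is plausibly more robust here: sustained vowels drift in pitch, and pitch control demands finer vocal dexterity than simply speaking louder or softer.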
The researchers asked the artist to evaluate the application by performing a series of tasks that compared his current approach with VoiceDraw (you can see examples of the artistic results in the paper).
The authors found that the artist was able to learn the interface and create drawings with smoother curves than was possible with his existing approach. Moreover, the authors found that he was also able to produce work significantly faster when using VoiceDraw.
The research in this paper shows some potential in the use of voice for graphical/artistic work, but there are still numerous interesting interaction issues that would need to be addressed to create a more fully featured tool.
For example, how would such an application enable you to pan and zoom to do more detailed work, apply different filters to elements of digital drawings and paintings, cut/copy/paste, etc., and work with multiple layers (as is common in many traditional graphical and artistic applications)?
Lots of interesting questions! This certainly seems like a fruitful area for further research.
Harada, S., Wobbrock, J., and Landay, J. (2007) VoiceDraw: a hands-free voice-driven drawing application for people with motor impairments. In Proceedings of the 9th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 27-34.