Finger Mouse – A Cheap Assistive Tool for Physical Impairments

There are lots of alternative mice on the market that aim to support people with physical impairments when using computers. However, there are also other interesting devices that have potential as assistive tools, but are not necessarily marketed or designed with this area in mind.

One example is Finger Mouse – an optical mouse I found recently that you can strap to your finger. When I first saw it, it was only £1.50 – it looks like it’s gone up to around £4 now (at the time of publishing this post), but it’s still very cheap and has potential as an assistive tool.

We evaluated use of this device on the D2ART project last year with physically impaired artists to explore whether it might be of any use in their practice. It proved surprisingly popular – in particular, artists with arthritis, dystonia, and other physical impairments found the device especially useful.

These positive responses were primarily because it enabled them to hold a mouse in a different way that was more comfortable than when using their whole hand to control a “standard” mouse (which can cause physical pain and discomfort).

An image of the Finger Mouse

In fact, since the evaluation sessions several of the artists we worked with have told me that they’ve purchased the device and are using it as an alternative to a traditional mouse.

It’s a very basic device that’s powered through a USB connection and is designed to be operated between your index finger and thumb (although you could potentially use it on a different finger).

The artists that we worked with varied in how they preferred to use it – some liked to operate it directly on a table whereas others tended to use their own body as the surface on which they placed the device (i.e. predominantly their lap).

One artist with cerebral palsy also experimented with using it on the floor, which is how she typically produces her artistic work (as it provides her with more stability and control) – so it made sense for her to use the device in the same way that she creates her art.

If you’re on the lookout for an alternative mouse, this one may well be worth checking out. It’s very affordable and provides a different approach for controlling your computer – and it was clearly a popular option with the artists we worked with on the D2ART project.

3D Print Your Own Mouth Controlled Mouse (For Under $20!)

Assistive tools and technologies can often be very expensive which in turn can make it difficult for the people who need them the most to afford and access them. It’s almost like disabled users have to pay an additional “tax” to get hold of technology that can help them in everyday tasks (whereas the rest of us can use more affordable “mainstream” products).

That’s why I love this project – it shows how to build a mouth-operated mouse for under $20! The core components can be printed on a 3D printer and the remaining parts are readily available across the web for a small amount of cash.

The mouse can be connected to most PCs via a standard USB connection and the design of the casing enables it to be mounted on a tripod, thus allowing users to place it in a comfortable position that best suits their individual needs.

As you can see from the video above, the mouse works much like a joystick in that you move the stick with your mouth to control the on-screen cursor. The right button can be activated through pushing down on the mouthpiece whilst the left button can be “clicked” through a sensor built into the device that can detect when the user has sucked air through it.
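To make that mapping a little more concrete, here’s a minimal sketch in Python (using the pynput library) of how joystick deflection, a push switch, and a suck sensor might be translated into cursor movements and clicks. The three sensor-reading functions are hypothetical placeholders – the actual project handles all of this in hardware over a standard USB connection – so this is an illustration of the interaction model rather than the project’s implementation.

```python
from pynput.mouse import Button, Controller

mouse = Controller()
SPEED = 10  # pixels moved per update at full joystick deflection

def read_joystick():
    """Hypothetical placeholder: deflection of the mouthpiece as (dx, dy) in -1.0..1.0."""
    return 0.0, 0.0

def suck_detected():
    """Hypothetical placeholder: True when the air-flow sensor registers a suck."""
    return False

def push_detected():
    """Hypothetical placeholder: True when the mouthpiece is pushed down."""
    return False

def update():
    # Move the cursor like a joystick: the further the deflection, the faster it moves.
    dx, dy = read_joystick()
    mouse.move(int(dx * SPEED), int(dy * SPEED))

    # Sucking air through the device acts as a left click...
    if suck_detected():
        mouse.click(Button.left)

    # ...and pushing down on the mouthpiece acts as a right click.
    if push_detected():
        mouse.click(Button.right)
```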

The video shows the user completing a variety of tasks, although it also highlights some potential interaction issues that may still need to be addressed. For instance, you can see the user struggling to select multiple files and then move them into another folder. This does eventually get completed successfully, but it takes a few attempts (I’m guessing this could become less of an issue with continued use of the device).

I also wonder how tiring use of this mouse might become over time – could it result in jaw ache or neck strain over extended periods of interaction? Furthermore, it would be good to know more about how dragging with the device works – it seems like this would involve sucking air through the device for a lengthy period of time (i.e. to simulate a click and hold) which could be an issue for many users.

It might be best to combine this approach with software that lets the user select the type of click they would like to perform (e.g. click and hold) and then use the joystick on the device to carry out the actual dragging movements (e.g. Dwell Clicker 2).
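As a rough sketch of that idea (and not how Dwell Clicker 2 actually works), a “latched” click-and-hold could be implemented so that one trigger presses the left button, the joystick then performs the drag, and a second trigger releases it – avoiding the need to suck air for the whole movement:

```python
from pynput.mouse import Button, Controller

mouse = Controller()
dragging = False

def toggle_drag():
    """First call presses and holds the left button; the second call releases it,
    so the joystick can be used for the dragging movement in between."""
    global dragging
    if not dragging:
        mouse.press(Button.left)
    else:
        mouse.release(Button.left)
    dragging = not dragging
```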

It would also be interesting to get a sense of what it’s like to use applications such as Photoshop and Illustrator that typically require the selection and manipulation of small icons and interface elements. These can easily be controlled with a standard mouse, but how does this device compare?

It’s great to see projects like this one! It shows what’s possible with some cheap materials, a 3D printer, and a bit of creativity. There’s clearly huge potential in this area for creating affordable assistive tools that can be tailored to the specific requirements of individual users.

Multi-Touch Web Browsers for People with Tremor

People with tremor can experience significant barriers when attempting to use a range of input devices for computers. For instance, trying to accurately control a mouse or use a standard keyboard can be particularly problematic and in turn often results in frustrating user experiences.

Moreover, people with tremor can find touch-based interfaces incredibly difficult to use effectively. In their paper – Designing a Touchscreen Web Browser for People with Tremor – Wacharamanotham et al. discuss the issues that tremor can cause when using multi-touch interfaces.

They highlight an evaluation they conducted with 20 participants (with intention or Parkinsonian tremor) in which they tracked each user’s touches when using single and multiple fingers for a range of actions (e.g. tapping and sliding).

In terms of tapping, the authors observed “inadvertent lift-and-land movements” when manipulating an object – that is, tremor can cause a user’s finger(s) to briefly lift off the screen when (for example) they are attempting to select or move an object to a different location.

This can make it hard to distinguish double and triple taps from a single tap, which can cause interaction issues as double/triple taps could potentially have different actions associated with them (depending on the application being used).
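One way an interface might compensate is to merge a lift that is immediately followed by a re-land at (almost) the same spot back into a single continuous touch. Here’s a minimal sketch of that idea – the 200 ms and 30 px thresholds are invented for illustration and are not values from the paper:

```python
MAX_GAP_S = 0.2    # a re-land within this time of a lift is treated as the same touch
MAX_DIST_PX = 30   # ...provided it lands this close to where the finger lifted

def merge_lift_and_land(events):
    """events: chronological list of (timestamp, x, y, kind) tuples, where kind is
    'down' or 'up'. Removes brief lift-and-land pairs so that a tremor-induced
    bounce is reported as one continuous touch rather than as extra taps."""
    merged = []
    i = 0
    while i < len(events):
        t, x, y, kind = events[i]
        nxt = events[i + 1] if i + 1 < len(events) else None
        if (kind == 'up' and nxt is not None and nxt[3] == 'down'
                and nxt[0] - t < MAX_GAP_S
                and abs(nxt[1] - x) < MAX_DIST_PX
                and abs(nxt[2] - y) < MAX_DIST_PX):
            i += 2  # drop both the spurious 'up' and the immediate 'down'
        else:
            merged.append(events[i])
            i += 1
    return merged
```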

It was also found that sliding movements can be problematic as the jitter caused by tremor can again lead to the finger being raised from the surface – which can result in the interface element becoming inactive. Furthermore, the authors found users’ fingers jittered more the closer they got to a target (they suggest this is probably due to increased anxiety when getting close to completing a task).

To address some of these issues Wacharamanotham et al. describe a new design for a touch-based web browser. In particular, they introduce “swabbing” – an interaction technique that enables a selection to be made via sliding movements towards the intended target (which the authors previously found lowered error rates).

A screenshot of the multi-touch browser with the swabbing interface overlaid on some text.

As can be seen in the image above, the web browser utilises a semi-transparent overlay that contains different options thus enabling users to browse and navigate the web through the swabbing interface.

A user can display the swabbing interface by tapping with five fingers on the screen – a single finger can also be held down to toggle a zoom mode to allow users to focus on a specific area of the page (thus refining the number of links, for example, that can be selected).

It seems from the paper that several overlays are available (although I’ve not personally tested the system) – the first provides basic browser functionality such as entering a URL, going back and forward between pages, and navigating between different browser tabs. There are also overlays for entering form fields and inputting text, which require the use of two-finger and three-finger sliding to navigate between characters.

In terms of selecting a link on a page, each link has an arrow next to it that matches the colour and angle of the relevant target in the swabbing interface (you can see an example of this in the image above). The user can then select the link of interest by sliding in the direction of that target.
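A simple way to implement this kind of selection is to compare the angle of the user’s slide with the angle assigned to each target’s arrow and pick the closest match. The sketch below captures the general idea (it’s not the authors’ implementation, and the angles and link names are made up):

```python
import math

def pick_swab_target(start, end, targets):
    """start, end: (x, y) points at the beginning and end of the slide.
    targets: dict mapping a link name to the angle (degrees) of its arrow.
    Returns the link whose arrow direction best matches the slide direction."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    slide_angle = math.degrees(math.atan2(dy, dx)) % 360

    def angular_distance(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)

    return min(targets, key=lambda name: angular_distance(slide_angle, targets[name]))

# Example: three links whose arrows sit at 0, 90 and 225 degrees in the overlay.
links = {"Home": 0, "News": 90, "Contact": 225}
print(pick_swab_target((100, 100), (160, 95), links))  # a roughly rightward slide -> "Home"
```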

The paper is from 2013 and the authors discuss running a longitudinal study where a comparison would be made between this interface and a standard touch-screen web browser. It would be really interesting to see the results from this work – in particular, I wonder how well this type of approach would work for different web experiences (e.g. standard web pages versus interactive applications/games).

Also, how long does it take for people to become familiar with this new interaction method and how quickly can people accurately browse across pages? Does it provide any benefits over simply making targets (i.e. links) much larger in size to help facilitate accurate selection?

It’s certainly an interesting interaction approach I’ve not seen before!

Reference
Wacharamanotham, C., Kehrig, D., Mertens, A., Schlick, C., & Borchers, J. (2013) Designing a Touchscreen Web Browser for People with Tremor. Workshop on Mobile Accessibility, CHI 2013.

Bolting On Accessibility

The process of making software, applications, and other interactive experiences accessible for disabled users often involves incorporating assistive solutions into existing and standard interfaces.

Eye gaze tracking software (e.g. OptiKey), for example, typically attempts to make Microsoft Windows more accessible through a range of different features (e.g. zooming, altering cursor size, snapping cursors to interface elements to enhance selection, etc.).

This is clearly important work and whilst it does help to make Windows more accessible, there’s still the underlying issue that we’re “bolting” novel technologies onto existing platforms.

To continue this example – Windows has primarily been designed to be used with a mouse, keyboard, and our fingers – using our eyes to control an interactive experience is very different from these traditional approaches and comes with a whole host of pros and cons. Attempting to use your eyes to control a “standard” interface is therefore likely to be problematic and awkward in many circumstances.

For instance, take the selection of small items in an interface (e.g. an icon, an item from a drop-down list, etc.) – this is simple and fast to do with a mouse and keyboard, but much more difficult with your eyes.

A common solution in eye gaze applications is to use a magnifier – the user first looks in the general area where they’d like to make a selection, a magnifier is then displayed (after a button press or a dwell time selection), and the user can then make a more accurate selection within that magnification window (through again looking at the target and performing another button press).
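To spell out how many steps that involves, here’s a rough sketch of the control flow behind this magnifier-style selection. The gaze, magnifier, and click functions are hypothetical placeholders (this is not the API of OptiKey, Tobii’s software, or any other product) – the point is simply the number of stages the user has to go through:

```python
import time

DWELL_TIME_S = 1.0   # how long the gaze must rest in one area to trigger a step

def wait_for_dwell(get_gaze_point, radius_px=40):
    """Hypothetical helper: blocks until the gaze stays within radius_px of one
    point for DWELL_TIME_S, then returns that point."""
    anchor = get_gaze_point()
    start = time.monotonic()
    while True:
        p = get_gaze_point()
        if abs(p[0] - anchor[0]) > radius_px or abs(p[1] - anchor[1]) > radius_px:
            anchor, start = p, time.monotonic()   # gaze wandered off - restart the dwell
        elif time.monotonic() - start >= DWELL_TIME_S:
            return anchor

def magnified_click(get_gaze_point, show_magnifier, click_at):
    # Step 1: dwell on the general area where the small target sits.
    rough = wait_for_dwell(get_gaze_point)
    # Step 2: a magnifier window is displayed around that area; it returns a
    # function mapping points inside the magnifier back to real screen coordinates.
    to_screen = show_magnifier(rough)
    # Step 3: dwell again inside the magnified view to refine the selection.
    refined = wait_for_dwell(get_gaze_point)
    # Step 4: only now is the click finally issued at the refined position.
    click_at(to_screen(refined))
```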

So, whilst this approach enables users to make (relatively) accurate selections via eye gaze, they are also being forced to perform several steps to select an item when only two steps should be required (i.e. look at the item and select it – via dwell time or a button press).

There’s no doubt that applications such as OptiKey and some of the tools offered by Tobii (e.g. Windows Control) can be a real enabler for many disabled users – but there is surely value in moving away from only making current platforms and software more accessible.

I’d like to see more applications that have been designed and optimised specifically for people using eye gaze tracking and other assistive technologies. This may require radically different interface designs from those we’re traditionally familiar with, but it could also result in novel and more effective interaction experiences.

And who knows – these “new” interface approaches may end up appealing to a much wider mainstream audience!

Using YouTube to Explore Multi-Touch Technology Experiences for People with Motor Impairments

It can be easy to assume that people with motor impairments might be excluded from the use of multi-touch devices (smart phones, tablets, etc.) due to the nature of these types of interactions (i.e. you need to use your hands and fingers). However, recent research by Anthony et al. found that this is not necessarily the case and that these types of devices can present new opportunities for disabled users.

This study used an intriguing approach in that the researchers analysed over 180 non-commercial videos uploaded to YouTube that included people with physical impairments using a touch-enabled device. This is a great way to get “real-world” perspectives on how people use multi-touch interfaces in their own environments (outside of an artificial lab).

The authors highlight some of the limitations of the approach (e.g. sampling bias – that is, the study only included people with an inclination to upload videos onto YouTube), but there is no doubt that this type of rich data can provide an interesting insight into how people are using the technology.

There were some interesting findings – in particular, from the wide range of videos it’s clear that many people with motor impairments are making use of multi-touch devices. The majority of users (91%) were able to use devices via direct touch (i.e. through their hands and fingers) whereas others (8%) used indirect methods such as head wands, mouth sticks, and styluses. Children also used arm and leg slings to provide more stable control.

Users often tailored devices to make them more accessible – for example, people created their own guides and barriers which were overlaid on the device (to make selections easier), built custom pointing devices (e.g. head/mouth pointers with copper wiring), and used screen protectors in the form of plastic bags.

Whilst people were able to use their fingers, there were still some interaction issues – sometimes fingernails would come into contact with the device, and these touches cannot be accurately recognised (fingernails aren’t conductive enough). People also held their fingers on the screen for too long, which can be detected as something other than a short tap. Moreover, the authors observed difficulties when dragging and sliding movements were required.

Despite these issues it’s clear that multi-touch devices hold much potential for people with motor impairments – as the authors state “…rather than finding a touch oriented interaction completely inaccessible, motor-impaired users in our videos and in our survey responses commented that these devices empower them to be more independent and do things they otherwise could not”.

However, the authors also highlight several areas where improvements could be made:

Adapting the sensitivity of the device: Head stick users, for example, can have difficulty tapping at the speed necessary to make selections – it would be useful if this could be tweaked (to allow more time for selections) or if the system could automatically learn this over time.

Multi-Touch Interactions: Mouth stick users can (unsurprisingly) have difficulty in performing 2-finger or 3-finger gestures. There is an accessibility feature on iOS products (AssistiveTouch) that could support these types of interactions, but the researchers didn’t spot anyone using this functionality.

Ignoring Long and Extended Touches: It would potentially be beneficial for some users if the device could ignore touches where there had been no significant movement for an extended period of time. This would help to mitigate accidental selections and make interactions a little smoother (a rough sketch of this idea follows below).
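As a quick illustration of that last suggestion, a touch filter could simply discard any contact that has rested in one place for longer than some threshold – likely a resting hand, arm, or pointer rather than a deliberate tap. The threshold values here are invented for the sake of the example:

```python
MAX_HOLD_S = 2.0        # a touch held longer than this without moving is ignored
MOVE_THRESHOLD_PX = 15  # movement smaller than this still counts as "resting"

def is_intentional(touch_down_time, now, start_pos, current_pos):
    """Return False for touches that have sat in one spot for a long time,
    so resting contacts don't generate accidental selections."""
    dx = abs(current_pos[0] - start_pos[0])
    dy = abs(current_pos[1] - start_pos[1])
    has_moved = dx > MOVE_THRESHOLD_PX or dy > MOVE_THRESHOLD_PX
    held_too_long = (now - touch_down_time) > MAX_HOLD_S
    return has_moved or not held_too_long
```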

I think that we also need more apps that are specifically designed to support people with physical impairments and take into consideration many of the interaction issues discussed in the paper. The nature and general design of apps means that they tend to be more accessible than traditional desktop software designed for mouse/keyboards (e.g. larger icons/buttons for our fingers), but there is more that can be done to design apps that provide better user experiences for people with motor impairments.

Overall, this is a really interesting piece of research highlighting both the potential for touch-based interactions to provide new opportunities for physically impaired users, but also the work that still needs to be done.

Reference
Anthony, L., Kim, Y., and Findlater, L. (2013) Analyzing user-generated YouTube videos to understand touchscreen use by people with motor impairments. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1223-1232). ACM.

The Sensel Morph – A New Assistive Tool?

The Sensel Morph looks like a very cool new device that could hold some potential as an assistive tool. On the surface it looks like a standard multi-touch trackpad (designed to replace the mouse and keyboard), but its key feature is the ability to determine the amount of pressure that each finger is applying when in contact with the device.

It’s so sensitive it can also detect the bristles from a paintbrush! However, the (arguably) most exciting feature is that touch can be tracked through any flexible material. This means that “tactile overlays” can be placed over the device and used to control a wide range of applications.

This has huge potential and essentially means you can convert the device into any type of interface. The developers are also currently working on an online tool that allows a user to design their own tactile overlays and then 3D print them.

This could be of real use to disabled users – it would enable them to design and customise an interface to their own needs thus making it easier for them to interact with different systems. This would reduce the need for disabled users to adapt themselves to current assistive technology and instead build interfaces that better support their own individual requirements.

This type of device could be particularly useful to people with motor impairments who may experience certain barriers when attempting to use standard touch interfaces. For example, large buttons (used to make selections easier) could essentially act as shortcuts to different applications or specific features within software.

Different interactive controls could also be used such as circular dials, sliders, or anything else that could enhance interactions with a particular application. This type of approach – combined with other assistive technologies (e.g. eye gaze tracking) – could help rapidly speed up workflow.
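To give a flavour of how an overlay like this might be wired up, the sketch below maps regions of the pad to shortcut actions and uses contact pressure to scale a brush size. I haven’t used the Sensel API, so the region layout, pressure scale, and contact values here are hypothetical – this is the general shape of the idea rather than working Sensel code:

```python
# Hypothetical overlay layout: (x0, y0, x1, y1) regions of the pad mapped to shortcuts.
SHORTCUT_REGIONS = {
    (0, 0, 60, 60):    "select_brush",
    (60, 0, 120, 60):  "select_eraser",
    (120, 0, 180, 60): "toggle_zoom",
}

MAX_PRESSURE = 1000.0  # made-up full-scale value used to normalise pressure readings

def handle_contact(x, y, pressure, trigger_action, set_brush_size):
    """Dispatch a single contact: large buttons along the top row act as shortcuts,
    anything below is treated as the drawing area, with heavier presses giving
    thicker strokes."""
    for (x0, y0, x1, y1), action in SHORTCUT_REGIONS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            trigger_action(action)
            return
    set_brush_size(1 + 20 * min(pressure / MAX_PRESSURE, 1.0))
```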

The Sensel Morph could also support blind and partially sighted users through Braille overlays with keys mapped to shortcuts (as discussed above), which could then be used to augment interactions typically performed through screen reader software. The combination of these tools offers exciting new interactive possibilities that could be of real value to people with visual impairments.

Moreover, the device presents new opportunities for disabled artists – there’s a nice case study online of an artist using the Sensel Morph.

This could be particularly useful for artists who need to use more traditional assistive tools such as head wands and mouth sticks. Attempting to use common artistic applications such as Illustrator and Photoshop with these tools can be particularly frustrating due to difficulties in trying to select small icons and other interface controls.

If, however, multiple Sensel Morphs were combined, one could contain large buttons which act as shortcuts to key tools (e.g. brushes, an eraser, colour palettes, zoom features, etc.) whilst a separate one could be used for actually producing artistic work (i.e. painting and drawing). This may provide a much more accessible approach that would present new and interesting opportunities for disabled artists.

You can currently pre-order the Sensel Morph for $249 – I’ve not had a chance to play with one yet, but I’m really excited to see what can be achieved with this device from an accessibility perspective.

VoiceArt: Creating Artistic and Graphical Work via Non-Speech Sounds

People with physical and motor impairments can face significant barriers in producing artistic and graphical work. Through the D2ART project I’ve observed first-hand the issues people face when using traditional tools such as brushes and pencils.

Some assistive tools such as head wands, mouth sticks, and custom designed grips (for holding brushes) can help make this process a little more accessible, but can also lead to other health complications such as damage to teeth and neck strain.

Digital technologies hold much potential in providing new opportunities for people with motor impairments to produce artistic work. On the D2ART project we’ve been exploring a range of different technologies such as mid-air gesturing (using a Leap Motion sensor), eye tracking (via Tobii EyeX), head tracking (through webcams and software such as Enable Viacam), and facial expression “switches” (e.g. using KinesicMouse).

Another technology that holds potential is voice recognition – that is, the ability to use your voice to control systems and produce graphical and artistic work. However, there has been little research to date that has investigated the potential of this approach for creative activities.

One interesting paper I recently read (VoiceDraw: a hands-free voice-driven drawing application for people with motor impairments) described a field study conducted with a “voice painter” to explore his process for creating artistic work via his voice.

The artist had a spinal cord injury at the C4-C5 level, which affected the dexterity of his elbow and wrist (in addition to leaving him with no sensation in his hands). His speech was unimpaired, but he had limited lung capacity.

This artist had been using Dragon Dictate along with Microsoft Paint and the MouseGrid feature that enables users to navigate different elements of an interface using their voice. The painter would begin by moving the cursor in one of eight directions (up, upper right, right, left, etc.) through a spoken command (e.g. “drag upper right”).

The pointer would then start moving in that direction at the default speed set by the artist, which could be altered at any time through additional commands (e.g. “much faster”, “move left”, etc.).

Whilst this approach can enable the production of graphical work, the authors also highlight several issues with this method. For instance, if the speech recognition fails mid-way through an extensive series of movements it can result in costly mistakes that typically require undoing everything and starting again (a similar issue can also occur when using the eraser tool).

The painter also informed the researchers that his current creative methods restricted expressiveness and meant that he couldn’t achieve fluid brush strokes. The authors attempted to address this through an experimental prototype – VoiceDraw – that provided people with motor impairments more control when creating freeform drawings with their voice.

In particular, they looked at the use of non-speech sounds as a way of providing people with the ability to draw in fluid movements. The application uses different vowel sounds to move the pointer in a particular direction (e.g. “ee” for left, “a” for up, “aw” for right, and “oo” for down). The prototype also recognises distinct sounds such as “ck” and “ch” which were mapped to different types of events/interactions (e.g. a mouse click for making selections).

The thickness of a brush could be altered whilst painting by varying the loudness of the user’s voice – the pitch of the voice was initially tested for this purpose, but the artist experienced issues with this approach.
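A very rough sketch of that mapping might look like the code below. The vowel label and loudness value are assumed to come from a (hypothetical) classifier and audio level meter – VoiceDraw itself uses continuous vowel-space recognition, which is considerably more sophisticated than this:

```python
import math

# Direction (in degrees) associated with each vowel sound, as described above.
VOWEL_DIRECTIONS = {"ee": 180, "a": 90, "aw": 0, "oo": 270}

STEP = 4  # pixels the brush moves per recognised sound frame

def next_brush_state(x, y, vowel, loudness):
    """vowel: one of the keys above; loudness: normalised volume in 0.0..1.0.
    Returns the new brush position and a stroke thickness."""
    angle = math.radians(VOWEL_DIRECTIONS[vowel])
    new_x = x + STEP * math.cos(angle)
    new_y = y - STEP * math.sin(angle)   # screen y grows downwards, so subtract for "up"
    thickness = 1 + 9 * loudness         # louder voice -> thicker stroke
    return new_x, new_y, thickness

# Example: an "a" sound at moderate volume moves the brush up with a mid-weight stroke.
print(next_brush_state(100, 100, "a", 0.5))  # -> (100.0, 96.0, 5.5)
```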

The researchers asked the artist to evaluate the application through performing a series of tasks that compared his current approach with VoiceDraw (you can see examples of the artistic results in the paper).

The authors found that the artist was able to learn the interface and create drawings with smoother curves than was possible with his existing approach. Moreover, the authors found that he was also able to produce work significantly faster when using VoiceDraw.

The research in this paper shows some potential in the use of voice for graphical/artistic work, but there are still numerous interesting interaction issues that would need to be addressed to create a more fully featured tool.

For example, how would such an application enable you to pan and zoom to do more detailed work, apply different filters to elements of digital drawings and paintings, cut/copy/paste, etc., and work with multiple layers (as is common in many traditional graphical and artistic applications)?

Lots of interesting questions! This certainly seems like a fruitful area for further research.

Reference
Harada, S., Wobbrock, J., and Landay, J. (2007) VoiceDraw: a hands-free voice-driven drawing application for people with motor impairments. In Proceedings of the 9th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 27-34.

OptiKey: A Free Eye-Gaze Communication Tool

OptiKey is a FREE and open source assistive tool I found online recently that’s designed for people with motor and speech impairments. It’s essentially an on-screen keyboard that can be controlled via a user’s eyes to enable communication with others (via text-to-speech) and general control of a computer.

A screenshot of OptiKey being used with Word to write text

It runs on Windows (although could be used on a Mac with software such as Parallels) and is designed to work in conjunction with a range of low-cost eye trackers that are currently available on the market (e.g. Tobii EyeX, EyeTribe, etc.). It therefore holds huge potential as a cost-effective solution for people with a range of conditions such as Cerebral Palsy, ALS, Motor Neurone Disease, and many others.

It also allows users to control the mouse cursor via their eyes making it possible to complete other standard tasks such as web browsing, sending emails, and writing in Word (although some tasks are still likely to be difficult – e.g. drawing with your eyes is especially tricky).

A screenshot of OptiKey being used to browse the web

It can be tempting to sometimes think that free software may be substandard, but a lot of work has clearly gone into the development of OptiKey and I’m sure it can genuinely help a lot of people. It has the feel of a paid product and seems free of any major bugs (from the testing I’ve been doing with it).

To get started, all you need is a calibrated eye tracking device (I’ve been using the Tobii EyeX) – then simply download the software. There is guidance for getting started and plenty of help available describing the different features of the application. There are also a couple of videos that provide an overview of how to use the software.

Button selections in OptiKey can be made in a variety of ways – either through dwell time (i.e. looking at a button for a set period of time) or via physical buttons (e.g. large switches). There’s also the option to use OptiKey with a webcam (via cursor control software such as Enable Viacam, Camera Mouse, and Opengazer) if you don’t have access to an eye tracker.

The developer of OptiKey (Julius Sweetland) states that he developed the application to “… challenge the outrageously expensive, unreliable, and difficult to use AAC (alternative and augmentative communication) products on the market”. This is certainly a worthy goal and much more feasible now given the massive reduction in price of eye tracking technology over recent years.

In speaking to disabled artists on the D2ART project (and through past projects), it’s clear that many people with disabilities haven’t previously considered eye gaze technology due to the expensive price tag. However, we’re now entering an exciting period where this technology is much cheaper and therefore more accessible to a much larger audience.

This, in turn, presents great opportunities for designers and developers to create innovative eye gaze applications that can genuinely help people with disabilities. OptiKey clearly demonstrates this potential – I’d strongly encourage you to check it out.

Embrace The Shake

Artist Phil Hansen used to specialise in Pointillism – a painting technique where a series of small points/dots are used to create a larger overall image. This was until he developed a shake in his hand whilst at art school, which left him unable to make accurate points.

As the problem worsened, the points Phil was trying to draw started to look more like little tadpoles. To address the shakiness he would hold the pen tighter in an attempt to gain more control, but this resulted in further complications that ultimately led to him having trouble holding anything.

The end result was that Phil became disillusioned, left art school, and eventually quit art completely.

Some years later, he went to see a neurologist who told him he had permanent nerve damage. Phil describes in his TED talk how he showed the neurologist the squiggly lines he could draw (when trying to draw straight), and the neurologist encouraged him to “embrace the shake”.

This was a turning point for Phil – he started to create a range of “scribble” pictures and produced some amazing artwork. Whilst he wasn’t completely passionate about this particular “technique”, this led him to realise he could still make art that fulfilled him despite his shake – he just needed to find a different approach.

This focus enabled him to broaden his creative horizons beyond just one type of art and to start exploring other ways of creating work that weren’t affected by his shake. In his own words, it was the first time he realised that “…embracing a limitation can drive creativity”.

He purchased some materials to support his work and began creating some incredible pieces of art. You’ve got to check out his work on swapping a brush for karate chops to create a Bruce Lee portrait, painting a picture with hamburger grease, and writing stories on a revolving canvas to create an incredible larger image.

So, why tell this story? Well, I think it has huge relevance for user experience designers. What would happen if we were to “embrace the shake” a little more? That is – instead of seeing accessibility and design for disability as a tick-box exercise, what if we were to view it as a creative constraint to help drive innovation and creativity?

By its very nature, designing for people with impairments typically requires the creation of new and unique solutions. If someone can only use their right eyebrow to interact with a system, you have to think very differently about the design of that system. The interface paradigms used for keyboard and mouse interactions aren’t particularly useful here – we need to find new ways of using systems that require a more creative approach.

We shouldn’t see accessibility as an irritation and something where we only do the bare minimum to cover ourselves (or worse, do nothing). This mentality is clearly ethically and morally wrong – but worse still this pervasive view results in many designers missing a great opportunity to work on fascinating and challenging interface design problems.

If you’re working on a new project in the near future, why not try placing an increased design emphasis on accessibility? If you focus early on the solutions required for people with a range of abilities, who knows – perhaps it might lead to some interesting creative insights that will help shape your overall design (for both your target audience and people with impairments).

(Check out Phil’s TED talk to hear his story and to see some of his amazing work)

Faking Disability

It’s generally accepted when conducting a user evaluation of some technology that you must test with your target audience.

If you’re researching the potential of a new medical prototype to support consultants during delicate surgical procedures it makes little sense testing with psychology undergraduates – you want to work directly with consultants to ensure you’re gathering relevant data.

Testing with undergraduates (if they’re not the target audience) will rightly lead to people questioning the validity of the data collected and the results reported. This all sounds obvious, but it’s still surprisingly common in the accessibility research field to see studies that are not conducted with disabled participants.

It’s not unusual, for example, to see sighted people being blindfolded to simulate blindness or for people with no motor impairment to be used to evaluate eye tracking interfaces.

Whilst on the surface this may seem like a reasonable approach (especially if the target audience are difficult to recruit), several studies have demonstrated that this is not a reliable replacement. People who “pretend” to have a disability do not necessarily use systems in the same way as those who actually have those impairments.

But more than this – by not collaborating closely with disabled users from the initial phases of a project and during the evaluation process, we miss a golden opportunity to incorporate their unique insights into our work.

It’s clearly fine to use colleagues, students, and anyone else in early stage evaluations to iron out any issues and to make initial observations, but this is not sufficient for later stage research that is looking to be published in leading conferences and journals.

It’s essential that assistive computing designers and researchers work closely with their target audience – both to inform the design of any experimental prototypes and to evaluate the effectiveness of new tools developed.

This shouldn’t deter researchers and designers – it should instead encourage us to seek out partnerships with disabled users and related organisations to help enhance the quality of our work and increase the likelihood of creating genuine impact.

It can be tempting to take short-cuts and test with anyone you can find – but ultimately the quality of the work suffers and the validity of any results will be highly questionable.

We need to further embrace opportunities to work directly with disabled people throughout all stages of a project and to integrate their feedback and observations into our research and design work.