
11.3.3: Action Research and Industrial Design

Edwin Blake, William Tucker, Meryl Glaser and Adinda Freudenthal

Community-based Action Research and Industrial Design approaches (from 2008)


Mobile Gestures

Deaf people prefer to use sign language to communicate with each other. Real-time video communication on mobile phones suffers from poor video quality, and while text-based communication is an alternative, results from other research studies show that Deaf people prefer signing to texting. This project implemented a gesture-based interface for asynchronous video communication for Deaf people. The gesture interface was built on a store-and-forward video architecture, since this preserves video quality even over low-bandwidth connections. The table below provides an overview of the cycle.

Cycle overview

Timeframe: 2008-2010
Community: DCCT
Local champion: N/A
Intermediary: Meryl Glaser (SLED), DCCT staff
Prototype: VideoChat
Coded by: Tshifhiwa Ramuhaheli (UCT)
Supervised by: Edwin Blake (UCT)
Technical details: N/A

Mobile gestures cycle overview

Diagnosis

This research builds on the video communication applications developed for Deaf people. Previous systems used a mouse and keyboard to select options. When signing, users have to sit some distance from the camera and the computer, so they often had to lean forward to reach the mouse or keyboard.

Previously, video chat ran only on computers, and the intention was to extend the interface to mobile devices, since the computer of choice in the developing world is the mobile phone. However, mainly because of the need to limit power consumption, the processing power of mobile phones has not increased as dramatically as that of desktop computers.

Plan Action

The objective was to find out whether a gesture-based interface could improve the usability of asynchronous (store-and-forward) video communication for Deaf users. We wanted to investigate whether an interface controlled by hand gestures would make it easier for Deaf people to communicate with each other. With such an interface, users can control the application from a comfortable signing distance.
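
The store-and-forward approach itself can be pictured with a small sketch. The code below is an illustration under assumed names, not the project's implementation: a message is recorded to local storage at full quality and queued for later upload, so a slow link delays delivery rather than degrading the video, which is what happens with real-time streaming.

    # Minimal sketch of store-and-forward video messaging (illustrative
    # names; a local file copy stands in for the real upload).
    import queue
    import shutil
    import threading
    from pathlib import Path

    OUTBOX = queue.Queue()  # recorded clips waiting to be delivered

    def record_clip(clip: Path) -> None:
        # The clip is already on local storage at full quality;
        # queue it for delivery instead of streaming it live.
        OUTBOX.put(clip)

    def uploader(server_dir: Path) -> None:
        # Drain the outbox whenever connectivity allows. A slow link
        # only delays delivery; it never degrades the stored video.
        server_dir.mkdir(exist_ok=True)
        while True:
            clip = OUTBOX.get()
            shutil.copy(clip, server_dir / clip.name)  # stand-in for an upload
            OUTBOX.task_done()

    threading.Thread(target=uploader, args=(Path("server"),), daemon=True).start()

The receiving side plays a clip only once the whole file has arrived, which is why playback quality is independent of the available bandwidth.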

The other objective was to extend video communication to mobile phones. On most mobile phones, however, the camera with the better video quality in terms of resolution and frame rate is on the back of the phone, and this quality is needed for sign language video communication. Unfortunately, this introduces a display problem: when the rear camera faces the user, the phone screen faces away from the user. We investigated whether a television could be used as the display while the rear camera records the video.

Requirements were gathered in an ongoing fashion throughout the project. The researcher learnt sign language at the accredited Sign Language Education and Development (SLED) centre. The researchers made weekly visits to the Deaf community at the Bastion for the Deaf in Newlands. These visits served both to gather information for the research and to assist the community with its ICT needs. Although the researcher was learning sign language, a professional interpreter was used during interviews, focus groups and evaluation sessions. During requirements gathering, a focus group study was conducted with Deaf users to get feedback on the existing video communication. A frequent comment was that a touch screen might be more effective than having to use a mouse.


Overview of how the computer and mobile prototype work.

The computer screen stands in for a TV to display the video; the phone is essentially used as a camera. The setup mimics a situation where a powerful phone uses a TV as its video output.

Implement Action

The main iterative stages were:

  1. Computer Prototype: try out the gesture-based interface
  2. First Mobile Phone Prototype: all processing on the phone
  3. Second Mobile Phone Prototype: the phone off-loaded processing onto a PC, to mimic a situation where more powerful phones were available (see above); a sketch of the off-loading idea follows this list.
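
The off-loading in the second mobile prototype can be sketched as a simple frame-shipping protocol. The details below (a length-prefixed JPEG stream over a plain socket) are assumptions for illustration, not the prototype's actual wire format: the phone sends each camera frame to the PC, which performs the processing the handset is too slow for.

    # Hypothetical frame-shipping protocol between phone and PC.
    import socket
    import struct

    def send_frame(sock: socket.socket, jpeg: bytes) -> None:
        # Phone side: ship one camera frame, length-prefixed.
        sock.sendall(struct.pack(">I", len(jpeg)) + jpeg)

    def recv_exact(conn: socket.socket, n: int) -> bytes:
        # Read exactly n bytes from the socket.
        buf = b""
        while len(buf) < n:
            chunk = conn.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed the connection")
            buf += chunk
        return buf

    def recv_frame(conn: socket.socket) -> bytes:
        # PC side: read one frame and hand it to the gesture detector.
        (length,) = struct.unpack(">I", recv_exact(conn, 4))
        return recv_exact(conn, length)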

The gesture-based interface was designed to be similar to a touch screen in the sense that users simply move a hand to a marked area displayed on the screen (see figures below). Instead of touching the screen at the marked area, they move their hand in front of the camera. The background of the screen displays a video of the user together with the marked areas, so when users move a hand they can see its corresponding position on the screen in real time. Once their hand is on the desired marked area, they hold it there for about a second until it is detected. This caters for the situation where a user accidentally places a hand in a marked area while signing: if the hand moves away too quickly, the gesture is ignored on the assumption that the user was signing and strayed into the marked area.
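
The selection behaviour just described amounts to a dwell-time check. The sketch below uses illustrative names and an assumed one-second threshold: a tracked hand position triggers a marked area only after resting in it long enough, while a hand that passes through quickly, as happens during signing, is ignored.

    # Dwell-time selection: a sketch of the logic described above.
    DWELL_SECONDS = 1.0  # assumed hold time before a selection fires

    class DwellSelector:
        def __init__(self, regions):
            self.regions = regions   # name -> (x, y, width, height)
            self.current = None      # region the hand is resting in
            self.entered_at = None   # when the hand entered it

        def update(self, hand_xy, now):
            # Feed the tracked hand position each frame; returns a region
            # name once the hand has dwelt there long enough, else None.
            hit = next((name for name, (x, y, w, h) in self.regions.items()
                        if x <= hand_xy[0] < x + w and y <= hand_xy[1] < y + h),
                       None)
            if hit != self.current:  # hand moved: restart the dwell timer
                self.current, self.entered_at = hit, now
                return None
            if hit and now - self.entered_at >= DWELL_SECONDS:
                self.entered_at = float("inf")  # fire once per visit
                return hit
            return None

    selector = DwellSelector({"record": (0, 0, 100, 100)})
    selector.update((50, 50), now=0.0)         # hand enters the area
    print(selector.update((52, 48), now=1.1))  # held long enough: prints 'record'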


The user is selecting the record option in order to start recording a video message.



The user is recording a video message.

The two available options are cancel (on the left), which cancels the recording, and send (on the right), which sends the video to the other user.


The user selected send and a confirmation message is shown at the bottom of the screen.


The user selected cancel and the confirmation screen is displayed asking the user if they are sure they want to cancel recording the video message.

Evaluate Action

  1. Computer Prototype: evaluated with Deaf users to determine whether the gesture-based interface was useful.
  2. Mobile Phone Prototype: not evaluated with Deaf users because it did not meet the performance requirements for a usable interface.
  3. Simulated Mobile Phone Prototype: evaluated by Deaf users for usability and effectiveness in facilitating sign language video communication, using a questionnaire, observations and interviews.

Reflection/Diagnosis

The users liked the new way of interacting with the video communication prototype and thought it made communicating by video easier. Although video quality suffered when users signed quickly, they could still make out the signs in the messages. The video quality was not what users would have preferred, but they considered it good enough to communicate with each other as long as they did not sign too fast.

Cycle overview

Timeframe: 2008-2009
Community: DCCT
Local champion: N/A
Intermediary: Meryl Glaser (SLED), DCCT staff
Prototype: SignSupport
Coded by: Koos Looijesteijn (TU Delft)
Supervised by: Adinda Freudenthal (TU Delft), Henri Christiaans (TU Delft), Edwin Blake (UCT), William Tucker (UWC), Meryl Glaser (SLED)
Technical details: Looijesteijn (2009), Freudenthal and Looijesteijn (2008)

SignSupport v1 cycle overview
Industrial Design approaches were brought into the project to complement the action research, applying context and user analysis methods before starting another design round. We started from reflections on the earlier work to design a telecommunication solution for Deaf-to-Deaf communication. However, as in every industrial design assignment, the design was not started immediately; first a thorough investigation was conducted. It is important to step back and check whether the right question is being asked, and to understand the user needs and the societal context for design. To this end, the communication problems of the Deaf community were studied in a very general manner: field research on the South African context; a literature review about being Deaf in South Africa; cultural probes (Mattelmäki and Battarbee, 2002) and context mapping (Sleeswijk Visser, 2009) with generative tools (Sanders, 2001); and analysis of the data.

We found that there is a need for telecommunication between Deaf people, but that most of the problems they pointed out had to do with communicating with hearing people. For this reason, the design assignment was reformulated as: design a solution that supports Deaf, illiterate South Africans in overcoming communication problems with hearing people (Looijesteijn, 2009).

The technical solution should support various types of communication with hearing people. A solution for talking to just anyone is not technically achievable in a reasonable time frame; therefore, the ICT solution will be built up module by module. We focused on the platform and on one example application: talking with a doctor.

Diagnosis

Two diagnosis rounds were conducted in preparation. An approach new to the project was taken; in particular, the use of ‘generative tools’ made a big difference.
Generative tools are drawing and building materials used in a focus session to make a visual representation of a task or situation under discussion (see figure below). In this investigation, Deaf participants were asked to create visual artefacts about their day-to-day experiences of communication problems, and about their communication ‘dreams’, i.e. how they want things to be. The participants discussed the visualized stories from their lives amongst each other. The research data consists of what the Deaf participants ‘said’ to each other, translated by an interpreter and recorded on video. Because the researcher could follow the conversations, he could occasionally ask questions; these clarifications were also part of the research data.


Deaf participants using generative tools.
Sleeswijk Visser (2009) explains that asking participants to use generative tools and make a visual artefact can uncover a deeper level of knowledge. These deeper levels are hard to reach by other means, such as interviewing. Tacit knowledge can be revealed, i.e. non-verbal knowledge that a person is not aware of having. Also made explicit are ‘obvious elements of life’, which are not easily mentioned in interviews because the participant does not realize they might be relevant to the researcher.

A visual artefact produced from using generative tools.
Cultural probes were used prior to the generative tools session, in order to introduce participants to the unconventional approach. They were given some creative homework assignments.

Context mapping refers to the designer making a (mental) map of the context of use by fusing the design research data in his or her mind. The data came from the generative sessions (not the creative artefacts themselves, but the discussions between the participants), from the cultural probes, and from an ethnography (Fetterman, 1998) conducted to understand the South African context, together with a literature review about being Deaf in South Africa. This grounding was needed because the designer was Dutch and therefore unfamiliar with the South African context.

A key finding from the sessions was that Deaf people want an aid to communicate with each other but, more importantly, that most of the problems they pointed out had to do with communicating with hearing people. This became our design goal, viz. a Deaf-to-hearing communication aid. Other details were also uncovered. For example, the Deaf participants explained that in South Africa doctors always wear a mouth mask, so they cannot see the doctor's facial expressions. Doctor-Deaf communication is virtually nonexistent, and the Deaf participants are very anxious that they will be treated wrongly because of misunderstandings. Another example concerned taxis: in South Africa people share taxi buses which have a set route, but sometimes, in consultation with the passengers, they deviate from it. Deaf passengers can miss this conversation and end up in the wrong part of town, left to walk home. This is very inconvenient and can also be dangerous, because one should not walk just anywhere in Cape Town. Some very nasty experiences were shared with us, which made it perfectly clear what the participants' priorities are and what the local contextual factors are. Many of the problems we uncovered are Deaf related, but they tend to be very much society specific.

From the first investigation it became clear that we had to design a system to assist communication with people from various public services. We decided to start by designing a module for communication with a doctor, because the Deaf participants had indicated this as the most serious problem. After this, a second, more focused investigation was conducted, including interviews with doctors working (or who had worked) in South Africa and a physician in training, and a literature review about South African healthcare, including traditional healers.

Plan Action

A design was made to support communication with the police, a pharmacist, taxi drivers, etc. Technology investigations and field studies revealed that an advanced mobile phone (or PDA) was the most suitable platform. Initially we considered using a PC in the hospital, but we found that many physicians in hospitals do not have a PC at their disposal. It would be better to empower Deaf people, and every Deaf person seems to have a mobile phone.
The final technology choices depended on the local situation. Internet access is extremely expensive in South Africa and our users are poor, so we decided to use canned video delivered as packages, which is cheaper than streaming video. The dialogues we support follow a tree structure, because communication with officials, e.g. doctors, is structured along known paths. It should be noted that these ‘known paths’ are not easy to determine, because again a lot of tacit knowledge is involved in such communication. Therefore, an investigation checking these assumptions about doctor-patient communication was performed before starting to design. A structure for the dialogue tree was designed and has now been implemented (see SignSupport v2 elsewhere). Furthermore, a user interface and interaction design for doctor and Deaf users were developed.
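
As an illustration of the dialogue-tree idea, the sketch below uses assumed field names rather than SignSupport's actual data model: each node pairs a canned SASL video with its English text, and the answer chosen at each step selects the next node, keeping the conversation on the known paths.

    # Illustrative dialogue tree pairing canned SASL video with English text.
    from dataclasses import dataclass, field

    @dataclass
    class DialogueNode:
        video: str    # path to the canned SASL clip (shown to the Deaf user)
        english: str  # English equivalent (shown to the doctor)
        answers: dict = field(default_factory=dict)  # answer -> next node

    pain = DialogueNode("videos/pain_where.mp4", "Where is the pain?")
    pain.answers["stomach"] = DialogueNode(
        "videos/how_long.mp4", "How long has your stomach hurt?")
    pain.answers["head"] = DialogueNode(
        "videos/how_long.mp4", "How long has your head hurt?")

    def step(node: DialogueNode, answer: str) -> DialogueNode:
        # Follow one branch of the structured conversation.
        return node.answers[answer]

    print(step(pain, "stomach").english)  # prints the doctor's next prompt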

User interface of Deaf to doctor communication
This figure shows canned video and text: the doctor works with text while the Deaf user works with SASL. A dialogue tree organizes the conversation so that the required sentence can be found according to the current context of the conversation.


The overall design of SignSupport v1 mock-up.

Evaluate Action

Usability test: a usability test of the prototype, running on a PC with partially working software, was conducted. There were no unsolvable problems with interaction or understanding.

Focus group evaluation - Deaf participants' opinion about the concept:
The Deaf participants liked the design, and appreciated its aims very much. Its essential properties touch on their basic needs, and on human rights, both of which are sometimes at risk. The Deaf participants also liked the user interface solution with the canned video and the dialogue style.


A Deaf user with the SignSupport mock-up running on a PC

Reflection

Feasibility assessment: it was estimated that the effort required to bring the design to full implementation was too great for a realistic time frame, because of all the possible dialogue trees. We therefore decided to continue, as a next step, with a somewhat simpler dialogue tree for a pharmacy scenario. This investigation has started. The pharmacy application will require somewhat different user interface properties. We plan to implement it as the first of a set of applications covering communication with the police, civil services and doctors. We hope that the South African government will understand the need for such technology and will also recognize the low cost of ICT compared with human interpreters. Our future prototype is not only meant for user testing but will also be shown to government. Once its value is established, we hope further applications will be sponsored from the government budget for Universal Access.


This project developed and evaluated prototypes providing browser-based and mobile video communication services for Deaf people. The aim was to identify an acceptable video communication technology for Deaf people, one that they would like to use in their day-to-day lives, by designing and evaluating several prototypes. The project focused on two technologies: browser-based systems and mobile applications. Several challenges emerged: specific Deaf user requirements are difficult to obtain, the technical details must be hidden from end users, and evaluation of prototypes includes both technical and social aspects. The work provides South African Sign Language communication for Deaf users in a disadvantaged Deaf community in Cape Town. We posit an experimental design to evaluate browser-based and mobile technologies in order to learn what constitutes acceptable video communication for Deaf users. Two browser-based prototypes and two mobile prototypes were built to this end, and both qualitative and quantitative data were collected in user tests to evaluate them. The video quality of the Android prototype satisfies Deaf users, and the portable asynchronous communication is convenient for them. The server is light on bandwidth and will therefore cost less than the alternatives, although Deaf users feel the handset is costly. The table below provides an overview of the cycle.

Cycle overview

Timeframe: 2009-2010
Community: DCCT
Local champion: N/A
Intermediary: Meryl Glaser (SLED), DCCT staff
Prototype: Prototype-Flash, Prototype-HTML5, Prototype-J2ME, Prototype-Android
Coded by: Yuan Yuan Wang (UWC)
Supervised by: William Tucker (UWC)
Technical details: Wang and Tucker (2009), Wang and Tucker (2010), Wang (2011)

Deaf Video Chat v2 cycle overview


Prototype-Flash architecture


Prototype-Flash user interface

Prototype-HTML5 architecture

Prototype-HTML5 user interface

Prototype-J2ME user interface


Prototype-Android architecture


Prototype-Android user interface


Many Deaf people use their mobile phones to communicate with SMS, yet they would prefer to converse in South African Sign Language. Deaf people, with a capital D, differ from deaf or hard-of-hearing people in that they primarily use sign language to communicate. This study explores how to implement a Deaf-to-hearing communication aid on a mobile phone to support a Deaf person's visit to a medical doctor. The aim is to help a Deaf person use sign language, on a cell phone, to tell a hearing doctor in English about medical problems. A preliminary trial of a computer-based mock-up indicated that Deaf users would like to see the prototype on a cell phone. A prototype will be built for a mobile phone browser, using sign language videos arranged in an organized way to identify a medical problem. The problem is then rendered in English and shown to the doctor on the phone. User trial data will be collected with questionnaires, semi-structured interviews and video recordings. The technical goal is to implement the prototype on a mobile device in a context-free manner, allowing the plug-and-play of more communication scenarios, such as visits to a doctor's office, the Department of Home Affairs or the police station. The table below provides an overview of the cycle.

Cycle overview

Timeframe: 2009-2010
Community: DCCT
Local champion: N/A
Intermediary: Meryl Glaser (SLED), DCCT staff
Prototype: SignSupport
Coded by: Muyowa Mutemwa (UWC)
Supervised by: William Tucker (UWC)
Technical details: Mutemwa and Tucker (2010)

SignSupport v2 cycle overview


Introduction and login screens
The Deaf user's login screen (a): after entering a username and password, the Deaf user continues to the next page by clicking on the smiling image. The introduction page (b) displays a SASL video and its English equivalent, which describe to the Deaf user what the system is about and how s/he can use it.


Question screen with video embedded in XHTML and an answer screen.
The question screen (a) has video embedded in an XHTML page playing inside a mobile browser using the Adobe Flash player. The English text equivalent appears below the video, and the navigation arrows sit between the SASL video and the English text. An answer page (b) has a SASL video and its English equivalent describing the answer. The Deaf user can navigate to the previous page using the left arrow and to the next page using the right arrow, or can accept the answer by clicking on the smiley face.


Two Deaf users testing SignSupport


New scenario creation screen
To create a new scenario, the user clicks one of five options. Option 1 creates the introduction page. Option 2 adds questions and their answers. Option 3 generates the hearing person's screen. Option 4 creates the response pages. Option 5 creates the English summary pages.
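
A hedged sketch of the scenario bundle these five options might produce follows; the field names and file names are assumptions based on the description above, not the actual tool's data model. Representing a scenario as a self-contained bundle of generated pages is what makes new scenarios plug-and-play on the phone.

    # Illustrative scenario bundle matching the five authoring options.
    from dataclasses import dataclass, field

    @dataclass
    class Scenario:
        intro: str = ""                # option 1: introduction page
        questions: dict = field(default_factory=dict)  # option 2: question -> answer page
        hearing_screen: str = ""       # option 3: hearing person's screen
        responses: list = field(default_factory=list)  # option 4: response pages
        summary: str = ""              # option 5: English summary pages

    doctor_visit = Scenario(
        intro="intro_doctor.xhtml",
        questions={"Where is the pain?": "answer_pain.xhtml"},
        hearing_screen="doctor_view.xhtml",
        responses=["response_pain.xhtml"],
        summary="summary_doctor.xhtml",
    )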