
Fourteenth Annual ACM SIGACCESS Conference on Assistive Technologies

Fullname: Proceedings of the 14th International ACM SIGACCESS Conference on Computers and Accessibility
Editors: Matt Huenerfauth; Sri Kurniawan
Location: Boulder, Colorado
Dates: 2012-Oct-22 to 2012-Oct-24
Publisher: ACM
Standard No: ISBN 978-1-4503-1321-6; ACM DL: Table of Contents; hcibib: ASSETS12
Papers: 78
Pages: 304
Links: Conference Website
Summary:The ASSETS conference explores the design, evaluation, and use of computing and information technologies to benefit people with disabilities and older adults. ASSETS is the premier forum for presenting innovative research on mainstream and specialized assistive technologies, accessible computing, and assistive applications of computer, network, and information technologies. This includes the use of technology by and in support of:
* Individuals with hearing, sight and other sensory impairments
* Individuals with motor impairments
* Individuals with memory, learning and cognitive impairments
* Individuals with multiple impairments
* Older adults with diverse capabilities
* Professionals who work with these populations

The papers are organized into the following sessions:
  1. Screen reader usage
  2. Designing for older adults
  3. Communication aids
  4. Accessibility at large
  5. Interactions without sight
  6. Understanding aging performance
  7. Shared work
  8. Visual impairment simulation
  9. Sign language
  10. Posters and demonstrations
  11. Student research competition abstract

Screen reader usage

Back navigation shortcuts for screen reader users, pp. 1-8
  Romisa Rohani Ghahari; Mexhid Ferati; Tao Yang; Davide Bolchini
When screen reader users need to backtrack through pages to re-find previously visited content, they are forced to listen to some portion of each unwanted page to recognize it. This makes aural back navigation inefficient, especially on large websites. To address this problem, we introduce topic- and list-based back: two navigation strategies that provide back-browsing shortcuts by leveraging the conceptual structure of content-rich websites. Both are manifested in Webtime, an accessible website on the history of the Web. A controlled study (N=10) conducted at the Indiana School for the Blind and Visually Impaired compared topic- and list-based back to traditional back mechanisms while participants completed fact-finding tasks. Topic- and list-based back significantly decreased time-on-task and the number of backtracked pages; the navigation shortcuts were also associated with reduced perceived cognitive effort and an improved navigation experience. The proposed strategies can operate as a supplement to current back mechanisms in information-rich websites.
Capture: a desktop display-centric text recorder, pp. 9-16
  Oren Laadan; Andrew Shu; Jason Nieh
As more and more information is designed for human visual consumption through computer displays, the need to capture and process display-centric content is becoming increasingly important, especially for visually impaired users. We present Capture, a novel display-centric text recorder that facilitates real-time access to onscreen text and its structure and contextual information, including data associated with both foreground and background windows. Capture provides an intelligent caching architecture that integrates with the standard accessibility framework available on modern operating systems to continuously track onscreen text and metadata. This enables fast, semantic information recording without any modifications to applications, window systems, or operating system kernels. The recorded data is useful for a variety of problem domains, including assistive technologies, desktop search, auditing, and predictive graphical user interfaces. We have implemented a Capture prototype on Linux with the GNOME Accessibility Toolkit. Our results on real desktop applications demonstrate that Capture provides low runtime overhead and much more complete recording of onscreen text than modern desktop screen readers used for visually impaired users.
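
As a sketch of the recording mechanism the abstract describes, the snippet below listens for text-change events on the Linux accessibility bus using the pyatspi bindings (the layer that GNOME's Accessibility Toolkit feeds into). It is a minimal illustration, not Capture's implementation; the real system's event handling and caching are more elaborate.

    import pyatspi

    def on_text_changed(event):
        # event.source is the accessible object whose text changed;
        # foreground and background windows both emit these events.
        try:
            text = event.source.queryText()
            snapshot = text.getText(0, text.characterCount)
            print(event.type, event.source.name, snapshot[:80])
        except NotImplementedError:
            pass  # the source does not implement the Text interface

    # Track on-screen text continuously, with no application changes.
    pyatspi.Registry.registerEventListener(on_text_changed, 'object:text-changed')
    pyatspi.Registry.start()  # blocks, dispatching accessibility events
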
Thematic organization of web content for distraction-free text-to-speech narration, pp. 17-24
  Muhammad Asiful Islam; Faisal Ahmed; Yevgen Borodin; I. V. Ramakrishnan
People with visual disabilities, especially those who are blind, have digital content narrated to them by text-to-speech (TTS) engines (e.g., with the help of screen readers). Naively narrating web pages with TTS engines, particularly pages consisting of several diverse pieces (e.g., news summaries, opinion pieces, taxonomy, ads), without organizing them into thematic segments makes it very difficult for a blind user to mentally separate out and comprehend the essential elements in a segment, and the effort to do so can cause significant cognitive stress. One can alleviate this difficulty by segmenting web pages into thematic pieces and then narrating each of them separately. Extant segmentation methods typically segment web pages using visual and structural cues. The use of such cues without taking into account the semantics of the content tends to produce "impure" segments containing extraneous material interspersed with the essential elements. In this paper, we describe a new technique for identifying thematic segments by tightly coupling visual, structural, and linguistic features present in the content. A notable aspect of the technique is that it produces segments with very little irrelevant content. Another interesting aspect is that the clutter-free main content of a web page, as produced by the Readability tool and the "Reader" feature of the Safari browser, emerges as a special case of the thematic segments created by our technique. We provide experimental evidence of the effectiveness of our technique in reducing clutter. We also describe a user study with 23 blind subjects assessing its impact on web accessibility.
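
The following sketch illustrates the general idea of coupling cues when scoring a candidate segment; it is a hypothetical scoring function, not the authors' algorithm. Lexical cohesion (a linguistic cue) is combined with structural and visual agreement:

    import re
    from collections import Counter
    from itertools import combinations

    def cohesion(blocks):
        # Mean pairwise cosine similarity of word-count vectors:
        # high cohesion suggests the text blocks share one theme.
        vecs = [Counter(re.findall(r'\w+', b.lower())) for b in blocks]
        def cos(a, b):
            num = sum(a[w] * b[w] for w in set(a) & set(b))
            den = (sum(v * v for v in a.values()) ** 0.5 *
                   sum(v * v for v in b.values()) ** 0.5)
            return num / den if den else 0.0
        pairs = list(combinations(vecs, 2))
        return sum(cos(a, b) for a, b in pairs) / len(pairs) if pairs else 1.0

    def segment_score(blocks, same_subtree, visually_adjacent):
        # Couple the three kinds of evidence: linguistic (cohesion),
        # structural (shared DOM ancestor), visual (contiguous layout).
        return (cohesion(blocks)
                + (0.2 if same_subtree else 0.0)
                + (0.2 if visually_adjacent else 0.0))
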

Designing for older adults

Basic senior personas: a representative design tool covering the spectrum of European older adults, pp. 25-32
  Bernhard Wöckl; Ulcay Yildizoglu; Isabella Buber; Belinda Aparicio Diaz; Ernst Kruijff; Manfred Tscheligi
The persona method is a powerful approach to focus on the needs and characteristics of target users, keeping complex user data, numbers and diagrams alive during the whole design cycle. However, the development of effective personas requires a considerable amount of time, effort and specific skills. This paper introduces the development of a set of 30 basic senior personas, covering a broad range of characteristics of European older adults, following a quantitative development approach. The aim of this tool is to support researchers and developers in building empathy with their target users when developing ICT solutions for the benefit of older adults. The main innovation lies in the representativeness of the basic senior personas. The personas build on multifaceted quantitative data from a single source including micro-level information from roughly 12,500 older individuals living in different European countries. The resulting personas may be applied in their basic form but are extendable to specific contexts. The suggested tool also addresses the main drawbacks of existing personas describing older adults by being both representative and cost-efficient. The basic senior personas, a filter tool, a manual and templates for "persona marketing" articles are available for free online at http://elderlypersonas.cure.at.
Considerations for technology that support physical activity by older adults, pp. 33-40
  Chloe Fan; Jodi Forlizzi; Anind Dey
Barriers to physical activity prevent older adults from meeting recommended physical activity levels necessary for maintaining quality of life. As technology becomes more advanced, we have the opportunity and the responsibility to address concerns faced by the aging population. We seek opportunities for technology to empower older adults to overcome barriers on their own by interviewing and learning from older adults who have successfully overcome these barriers. In this paper, we present a set of needs that technology can address, and considerations for designing technology interventions that support physical activity by older adults.
Design recommendations for TV user interfaces for older adults: findings from the eCAALYX project, pp. 41-48
  Francisco Nunes; Maureen Kerwin; Paula Alexandra Silva
While guidelines for designing websites and iTV applications for older adults exist, no previous work has suggested how best to design TV user interfaces (UIs) that are accessible to older adults. Building upon pertinent guidelines from related areas, this paper presents thirteen recommendations for designing UIs for TV applications for older adults. These recommendations are the result of iterative design, testing, and development of a TV-based health system for older adults that aims to provide a holistic solution to improve quality of life for older adults with chronic conditions by fostering their autonomy and reducing hospitalization costs. The authors' work and experience show that widely known UI design guidelines unsurprisingly apply to the design of TV-based applications for older adults, but acquire crucial importance in this context.

Communication aids

What we talk about: designing a context-aware communication tool for people with aphasia, pp. 49-56
  Shaun K. Kane; Barbara Linam-Church; Kyle Althoff; Denise McCall
Many people with aphasia experience difficulty recalling words extemporaneously, but can recognize those words when given an image, text, or audio prompt. Augmentative and alternative communication (AAC) systems can help address this problem by enabling people with aphasia to browse and select from a list of vocabulary words. However, these systems can be difficult to navigate, especially when they contain large amounts of content. In this paper, we describe the design and development of TalkAbout, a context-aware, adaptive AAC system that provides users with a word list adapted to their current location and conversation partner. This work was conducted in collaboration with 5 adults with aphasia. We then present guidelines for developing and evaluating context-aware technology for people with aphasia.
iSCAN: a phoneme-based predictive communication aid for nonspeaking individuals, pp. 57-64
  Ha Trinh; Annalu Waller; Keith Vertanen; Per Ola Kristensson; Vicki L. Hanson
The high incidence of literacy deficits among people with severe speech impairments (SSI) has been well documented. Without literacy skills, people with SSI are unable to effectively use orthographic-based communication systems to generate novel linguistic items in spontaneous conversation. To address this problem, phoneme-based communication systems have been proposed which enable users to create spoken output from phoneme sequences. In this paper, we investigate whether prediction techniques can be employed to improve the usability of such systems. We have developed iSCAN, a phoneme-based predictive communication system, which offers phoneme prediction and phoneme-based word prediction. A pilot study with 16 able-bodied participants showed that our predictive methods led to a 108.4% increase in phoneme entry speed and a 79.0% reduction in phoneme error rate. The benefits of the predictive methods were also demonstrated in a case study with a participant with cerebral palsy. Moreover, results of a comparative evaluation conducted with the same participant after 16 sessions of using iSCAN indicated that our system outperformed an orthographic-based predictive communication device that the participant had used for over 4 years.
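
A phoneme predictor of the kind described can be approximated with a simple n-gram model; the bigram sketch below is illustrative only (iSCAN's actual models are not described here):

    from collections import defaultdict, Counter

    class PhonemePredictor:
        # Bigram model: rank likely next phonemes given the previous one.
        def __init__(self):
            self.bigrams = defaultdict(Counter)

        def train(self, pronunciations):
            for phones in pronunciations:          # e.g. ['HH', 'AH', 'L', 'OW']
                for prev, nxt in zip(phones, phones[1:]):
                    self.bigrams[prev][nxt] += 1

        def predict(self, prev, k=3):
            # Offer the k most likely next phonemes for selection.
            return [p for p, _ in self.bigrams[prev].most_common(k)]

    model = PhonemePredictor()
    model.train([['HH', 'AH', 'L', 'OW'], ['HH', 'AE', 'T']])
    print(model.predict('HH'))   # ['AH', 'AE']
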
Detecting linguistic HCI markers in an online aphasia support group, pp. 65-70
  Yoram M. Kalman; Kathleen Geraghty; Cynthia K. Thompson; Darren Gergle
Aphasia is an acquired language disorder resulting from trauma or injury to language areas of the brain. Despite extensive research on the impact of aphasia on traditional forms of communication, little is known about the impact of aphasia on computer-mediated communication (CMC). In this study we asked whether the well-documented language deficits associated with aphasia can be detected in the online writing of people with aphasia. We analyzed 150 messages (14,754 words) posted to an online aphasia support forum by six people with aphasia and four controls. Significant linguistic differences between people with aphasia and controls were detected, suggesting five putative linguistic HCI markers for aphasia. These findings suggest that interdisciplinary research on communication disorders and CMC has both applied and theoretical implications.

Accessibility at large

A readability evaluation of real-time crowd captions in the classroom, pp. 71-78
  Raja S. Kushalnagar; Walter S. Lasecki; Jeffrey P. Bigham
Deaf and hard of hearing individuals need accommodations that transform aural to visual information, such as captions that are generated in real-time to enhance their access to spoken information in lectures and other live events. The captions produced by professional captionists work well for general events such as community or legal meetings, but are often unsatisfactory for specialized content such as higher education classrooms. In addition, professional captionists are scarce and expensive, and those with experience in specialized content areas are especially hard to hire. The captions produced by commercial automatic speech recognition (ASR) software are far cheaper, but are often perceived as unreadable due to ASR's sensitivity to accents and background noise, as well as its slow response time. We ran a study to evaluate the readability of captions generated by a new crowd-captioning approach versus those of professional captionists and ASR. In this approach, captions are typed by classmates into a system that aligns and merges the multiple incomplete caption streams into a single, comprehensive real-time transcript. Our study asked 48 deaf and hearing readers to evaluate transcripts produced by a professional captionist, ASR, and crowd captioning, and found that the readers preferred crowd captions over professional captions and ASR.
Web accessibility as a side effect, pp. 79-86
  John T. Richards; Kyle Montague; Vicki L. Hanson
This paper explores evidence for the conjecture that improvements in Web accessibility have arisen, in part, as side effects of changes in Web technology and associated shifts in the way Web pages are designed and coded. Drawing on an earlier study of Web accessibility trends over the past 14 years, it discusses several possible indirect contributors to improving accessibility including the use of new browser capabilities to create more sophisticated page layouts, a growing concern with improved page rank in search results, and a shift toward cross-device content design. Understanding these examples may inspire the creation of additional technologies with incidental accessibility benefits.
How do professionals who create computing technologies consider accessibility? pp. 87-94
  Cynthia Putnam; Kathryn Wozniak; Mary Jo Zefeldt; Jinghui Cheng; Morgan Caputo; Carl Duffield
In this paper, we present survey findings about how user experience (UX) and human-computer interaction (HCI) professionals, who create information and communication technologies (ICTs), reported considering accessibility in their work. Participants (N = 199) represented a wide range of job titles and nationalities. We found that most respondents (87%, N = 173) reported that accessibility was important or very important in their work; however, when considerations for accessibility were discussed in an open-ended question (N = 185), the scope was limited. Additionally, we found that aspects of empathy and professional experience were associated with how accessibility considerations were reported. We also found that many respondents indicated that decisions about accessibility were not in their control. We argue that a better understanding of how professionals consider accessibility has implications for how well academic programs in HCI and UX prepare students to consider and advocate for inclusive design.

Interactions without sight

Helping visually impaired users properly aim a camera, pp. 95-102
  Marynel Vázquez; Aaron Steinfeld
We evaluate three interaction modes to assist visually impaired users during the camera aiming process: speech, tone, and silent feedback. Our main assumption is that users are able to spatially localize what they want to photograph, and roughly aim the camera in the appropriate direction. Thus, small camera motions are sufficient for obtaining a good composition. Results in the context of documenting accessibility barriers related to public transportation show that audio feedback is valuable. Visually impaired users were not affected by audio feedback in terms of social comfort. Furthermore, we observed trends in favor of speech over tone, including higher ratings for ease of use. This study reinforces earlier work suggesting that users who are blind or have low vision find assisted photography appealing and useful.
Learning non-visual graphical information using a touch-based vibro-audio interface, pp. 103-110
  Nicholas A. Giudice; Hari Prasath Palani; Eric Brenner; Kevin M. Kramer
This paper evaluates an inexpensive and intuitive approach for providing non-visual access to graphic material, called a vibro-audio interface. The system works by allowing users to freely explore graphical information on the touchscreen of a commercially available tablet and synchronously triggering vibration patterns and auditory information whenever an on-screen visual element is touched. Three studies were conducted that assessed legibility and comprehension of the relative relations and global structure of a bar graph (Exp 1), pattern recognition via a letter identification task (Exp 2), and orientation discrimination of geometric shapes (Exp 3). Performance with the touch-based device was compared to the same tasks performed using standard hardcopy tactile graphics. Results showed similar error performance between modes for all measures, indicating that the vibro-audio interface is a viable multimodal solution for providing access to dynamic visual information and supporting accurate spatial learning and the development of mental representations of graphical material.
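
The core interaction loop is easy to picture: hit-test the touch point against on-screen elements and drive the haptic and audio channels accordingly. A minimal sketch (all names are illustrative, not the system's API):

    class Element:
        def __init__(self, x, y, w, h, pattern, label):
            self.x, self.y, self.w, self.h = x, y, w, h
            self.pattern = pattern   # vibration pattern id
            self.label = label       # spoken description

        def contains(self, tx, ty):
            return (self.x <= tx < self.x + self.w and
                    self.y <= ty < self.y + self.h)

    def on_touch(elements, tx, ty, vibrate, speak):
        # Trigger vibration and audio only while the finger is on an
        # element, so exploration mirrors tracing a tactile graphic.
        for e in elements:
            if e.contains(tx, ty):
                vibrate(e.pattern)
                speak(e.label)
                return e
        vibrate(None)   # off any element: silence both channels
        return None
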
Exploration and avoidance of surrounding obstacles for the visually impaired, pp. 111-118
  Limin Zeng; Denise Prescher; Gerhard Weber
Proximity-based interaction through a long cane is essential for blind and visually impaired people. We designed and implemented an obstacle detector consisting of a 3D Time-of-Flight (TOF) camera and a planar tactile display to extend the interaction range and provide rich non-visual information about the environment. After acquiring the spatial layout of obstacles, users choose a better path than with a white cane alone. A user study with 6 blind people showed that extra time is needed to ensure safe walking while reading the layout. Both hanging and ground-based obstacles were circumvented. Tactile mapping information was designed to represent precise spatial information around a blind user.

Understanding aging performance

Understanding the role of age and fluid intelligence in information search, pp. 119-126
  Shari Trewin; John T. Richards; Vicki L. Hanson; David Sloan; Bonnie E. John; Cal Swart; John C. Thomas
In this study, we explore the role of age and fluid intelligence on the behavior of people looking for information in a real-world search space. Analyses of mouse moves, clicks, and eye movements provide a window into possible differences in both task strategy and performance, and allow us to begin to separate the influence of age from the correlated but isolable influence of cognitive ability. We found little evidence of differences in strategy between younger and older participants matched on fluid intelligence. Both performance and strategy differences were found between older participants having higher versus lower fluid intelligence, however, suggesting that cognitive factors, rather than age per se, exert the dominant influence. This underscores the importance of measuring and controlling for cognitive abilities in studies involving older adults.
Elderly text-entry performance on touchscreens, pp. 127-134
  Hugo Nicolau; Joaquim Jorge
Touchscreen devices have become increasingly popular. Yet they lack tactile feedback and motor stability, making it difficult to type effectively on virtual keyboards. This is even worse for elderly users, whose declining motor abilities, particularly hand tremor, compound the problem. In this paper we examine the text-entry performance and typing patterns of elderly users on touch-based devices. Moreover, we analyze users' hand tremor profiles and their relationship to typing behavior. Our main goal is to inform future designs of touchscreen keyboards for elderly people. To this end, we asked 15 users to enter text under two device conditions (mobile and tablet) and measured their performance, both speed- and accuracy-wise. Additionally, we thoroughly analyze different types of errors (insertions, substitutions, and omissions), looking at touch input features and their main causes. Results show that omissions are the most common error type, mainly due to cognitive errors, followed by substitutions and insertions. While tablet devices can compensate for about 9% of typing errors, omissions are similar across conditions. Measured hand tremor largely correlates with text-entry errors, suggesting that it should be accounted for to improve input accuracy. Finally, we assess the effect of simple touch models and provide implications for design.
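
Error types of this kind are conventionally derived by aligning the presented and transcribed strings; the sketch below classifies them with a standard Levenshtein backtrace (a common text-entry analysis technique, not necessarily the authors' exact procedure):

    def classify_errors(presented, transcribed):
        # Count substitutions, insertions, and omissions via an
        # edit-distance alignment of the two strings.
        m, n = len(presented), len(transcribed)
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i
        for j in range(n + 1):
            d[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = presented[i - 1] != transcribed[j - 1]
                d[i][j] = min(d[i - 1][j - 1] + cost,   # substitution/match
                              d[i - 1][j] + 1,          # omission
                              d[i][j - 1] + 1)          # insertion
        subs = ins = oms = 0
        i, j = m, n
        while i > 0 or j > 0:
            cost = (presented[i - 1] != transcribed[j - 1]) if i and j else None
            if i and j and d[i][j] == d[i - 1][j - 1] + cost:
                subs += cost
                i, j = i - 1, j - 1
            elif j and d[i][j] == d[i][j - 1] + 1:
                ins += 1
                j -= 1
            else:
                oms += 1
                i -= 1
        return subs, ins, oms

    print(classify_errors('hello', 'helo'))   # (0, 0, 1): one omission
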

Shared work

Crowdsourcing subjective fashion advice using VizWiz: challenges and opportunities, pp. 135-142
  Michele A. Burton; Erin Brady; Robin Brewer; Callie Neylan; Jeffrey P. Bigham; Amy Hurst
Fashion is a language. How we dress signals to others who we are and how we want to be perceived. However, this language is primarily visual, making it inaccessible to people with vision impairments. Someone who is low-vision or completely blind cannot see what others are wearing or readily know what constitutes the norms and extremes of fashion, but nearly everyone they encounter can see (and judge) their fashion choices. We describe findings from a diary study with people with vision impairments that revealed the many accessibility barriers fashion presents, and an online survey that revealed that clothing decisions are often made collaboratively, regardless of visual ability. Based on these findings, we identified a need for a collaborative and real-time environment for fashion advice. We have tested the feasibility of providing this advice through crowdsourcing using VizWiz, a mobile phone application where participants receive nearly real-time answers to visual questions. Our pilot study results show that this application has the potential to address a great need within the blind community, but remaining challenges include improving photo capture and assembling a set of crowd workers with the requisite expertise. More broadly, our research highlights the feasibility of using crowdsourcing for subjective, opinion-based advice.
Online quality control for real-time crowd captioning, pp. 143-150
  Walter S. Lasecki; Jeffrey P. Bigham
Approaches for real-time captioning of speech are either expensive (professional stenographers) or error-prone (automatic speech recognition). As an alternative approach, we have been exploring whether groups of non-experts can collectively caption speech in real-time. In this approach, each worker types as much as they can and the partial captions are merged together in real-time automatically. This approach works best when partial captions are correct and received within a few seconds of when they were spoken, but these assumptions break down when engaging workers on-demand from existing sources of crowd work like Amazon's Mechanical Turk. In this paper, we present methods for quickly identifying workers who are producing good partial captions and estimating the quality of their input. We evaluate these methods in experiments run on Mechanical Turk in which a total of 42 workers captioned 20 minutes of audio. The methods introduced in this paper were able to raise overall accuracy from 57.8% to 81.22% while keeping coverage of the ground truth signal nearly unchanged.
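
One way to picture the quality-estimation step: score each worker by how often their (time-bucketed) words are corroborated by another worker. This is a simplified stand-in for the paper's methods:

    from collections import Counter

    def worker_quality(captions):
        # captions: {worker_id: set of (time_bucket, word) tokens}
        counts = Counter()
        for tokens in captions.values():
            counts.update(tokens)
        scores = {}
        for worker, tokens in captions.items():
            # Fraction of this worker's words echoed by at least one peer;
            # low agreement flags input to down-weight when merging.
            agreed = sum(1 for t in tokens if counts[t] >= 2)
            scores[worker] = agreed / len(tokens) if tokens else 0.0
        return scores

    streams = {'w1': {(0, 'the'), (0, 'quick'), (1, 'fox')},
               'w2': {(0, 'the'), (1, 'fox'), (1, 'jumps')},
               'w3': {(0, 'tea'), (2, 'sox')}}
    print(worker_quality(streams))   # w1/w2 score ~0.67, w3 scores 0.0
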
Designing for individuals: usable touch-screen interaction through shared user models, pp. 151-158
  Kyle Montague; Vicki L. Hanson; Andy Cobley
Mobile touch-screen devices are becoming increasingly popular across a diverse range of users. Whilst there is a wealth of information and utilities available via downloadable apps, there is still a large proportion of users with visual and motor impairments who are unable to use the technology fully due to their interaction needs. In this paper we present an evaluation of the use of shared user modelling and adaptive interfaces to improve the accessibility of mobile touch-screen technologies. By using ability-based information collected through application use and continually updating the user model and interface adaptations, users can easily make applications aware of their needs and preferences. Three smart phone apps were created for this study and tested with 12 adults who had diverse visual and motor impairments. Results indicated significant benefits from the shared user models, which can automatically adapt interfaces, across applications, to address usability needs.

Visual impairment simulation

PassChords: secure multi-touch authentication for blind people, pp. 159-166
  Shiri Azenkot; Kyle Rector; Richard Ladner; Jacob Wobbrock
Blind mobile device users face security risks such as inaccessible authentication methods, and aural and visual eavesdropping. We interviewed 13 blind smartphone users and found that most participants were unaware of or not concerned about potential security threats. Not a single participant used optional authentication methods such as a password-protected screen lock. We addressed the high risk of unauthorized user access by developing PassChords, a non-visual authentication method for touch surfaces that is robust to aural and visual eavesdropping. A user enters a PassChord by tapping several times on a touch surface with one or more fingers. The set of fingers used in each tap defines the password. We give preliminary evidence that a four-tap PassChord has about the same entropy, a measure of password strength, as a four-digit personal identification number (PIN) used in the iPhone's Passcode Lock. We conducted a study with 16 blind participants that showed that PassChords were nearly three times as fast as iPhone's Passcode Lock with VoiceOver, suggesting that PassChords are a viable accessible authentication method for touch screens.
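
The entropy comparison can be made concrete with a back-of-the-envelope calculation; the finger counts below are assumptions for illustration, not the paper's exact parameters:

    import math

    # If each of the 4 taps uses a nonempty subset of 3 tracked fingers,
    # there are 2**3 - 1 = 7 distinguishable chords per tap.
    print(math.log2((2**3 - 1) ** 4))   # ~11.2 bits for a 4-tap PassChord
    print(math.log2(10 ** 4))           # ~13.3 bits for a 4-digit PIN
    # With 4 fingers per tap (15 chords), 15**4 gives ~15.6 bits, so a
    # 4-tap PassChord brackets the PIN -- "about the same entropy".
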
"So that's what you see": building understanding with personalized simulations of colour vision deficiency BIBAFull-Text 167-174
  David R. Flatla; Carl Gutwin
Colour vision deficiencies (CVD) affect the everyday lives of a large number of people, but it is difficult for others -- even friends and family members -- to understand the experience of having CVD. Simulation tools can help provide this experience; however, current simulations are based on general models that have several limitations, and therefore cannot accurately reflect the perceptual capabilities of most individuals with reduced colour vision. To address this problem, we have developed a new simulation approach that is based on a specific empirical model of the actual colour perception abilities of a person with CVD. The resulting simulation is therefore a more exact representation of what a particular person with CVD actually sees. We tested the new approach in two ways. First, we compared its accuracy with that of the existing models, and found that the personalized simulations were significantly more accurate than the old method. Second, we asked pairs of participants (one with CVD, and one close friend or family member without CVD) to discuss images of everyday scenes that had been simulated with the CVD person's particular model. We found that the personalized simulations provided new insights into the details of the CVD person's experience. The personalized-simulation approach shows great promise for improving understanding of CVD (and potentially other conditions) for people with ordinary perceptual abilities.
Evaluation of dynamic image pre-compensation for computer users with severe refractive error, pp. 175-182
  Jian Huang; Armando Barreto; Malek Adjouadi
Visual distortion and blurring impede the efficient interaction between computers and their users. Visual problems can be caused by eye diseases, severe refractive errors or combinations of both. Several image enhancement methods based on contrast sensitivity have been used to help people with eye diseases (e.g., age-related macular degeneration and cataracts), whereas few methods have been designed for people with severe refractive errors. This paper describes a new pre-compensation method to counter the visual blurring caused by the severe refractive errors of a specific computer user. It preprocesses the pictorial information through dynamic pre-compensation, aiming to present customized images based on the ocular aberrations of the specific computer user. The new method improves on the previous static pre-compensation method by updating the aberration data according to pupil size variations in real time. The real-time aberration data enable better-suited pre-compensated images, as the pre-compensation model is updated dynamically. An empirical study was conducted to evaluate the efficiency of the new pre-compensation method through an icon recognition test. Statistical analysis showed that participants achieved significantly higher accuracy in recognizing icons with dynamic pre-compensation than when viewing the original icons. Accuracy was also significantly higher when the icons were processed with the dynamic pre-compensation method than with the previous static pre-compensation method.
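
Pre-compensation of this general family can be sketched as a regularized (Wiener-style) inverse filter: pre-filter the image with the inverse of the eye's point-spread function (PSF) so that the eye's blur approximately cancels. In the dynamic method, the PSF would be recomputed as the measured pupil size changes. A generic sketch, not the authors' implementation:

    import numpy as np

    def precompensate(image, psf, k=0.01):
        # image: grayscale array in [0, 1]; psf: centred kernel of the
        # same shape, derived from the user's measured aberrations and
        # the current pupil size. k regularizes the inverse filter.
        H = np.fft.fft2(np.fft.ifftshift(psf))
        inverse = np.conj(H) / (np.abs(H) ** 2 + k)
        out = np.real(np.fft.ifft2(np.fft.fft2(image) * inverse))
        return np.clip(out, 0.0, 1.0)   # displays have limited range
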

Sign language

Effect of presenting video as a baseline during an American Sign Language animation user study, pp. 183-190
  Pengfei Lu; Hernisa Kacorri
Animations of American Sign Language (ASL) have accessibility benefits for many signers with lower levels of written language literacy. Our lab has conducted several prior studies to evaluate synthesized ASL animations by asking native signers to watch different versions of animations and to answer comprehension and subjective questions about them. As an upper baseline, we used an animation of a virtual human carefully created by a human animator who is a native ASL signer. Considering whether to instead use videos of human signers as an upper baseline, we wanted to quantify how including a video upper baseline would affect how participants evaluate the ASL animations presented in a study. In this paper, we replicate a user study we conducted two years ago, with one difference: replacing our original animation upper baseline with a video of a human signer. We found that adding a human video upper baseline depressed the subjective Likert-scale scores that participants assign to the other stimuli (the synthesized animations) in the study when viewed side-by-side. This paper provides methodological guidance for how to design user studies evaluating sign language animations and facilitates comparison of studies that have used different upper baselines.
Design and evaluation of a classifier for identifying sign language videos in video sharing sites, pp. 191-198
  Caio D. D. Monteiro; Ricardo Gutierrez-Osuna; Frank M. Shipman
Video sharing sites provide an opportunity for the collection and use of sign language presentations about a wide range of topics. Currently, locating sign language videos (SL videos) in such sharing sites relies on the existence and accuracy of tags, titles or other metadata indicating that the content is in sign language. In this paper, we describe the design and evaluation of a classifier for distinguishing between sign language videos and other videos. A test collection of SL videos and videos likely to be incorrectly recognized as SL videos (likely false positives) was created for evaluating alternative classifiers. Five video features thought to be potentially valuable for this task were developed based on common video analysis techniques. A comparison of the relative value of the five video features shows that a measure of the symmetry of movement relative to the face is the best feature for distinguishing sign language videos. Overall, an SVM classifier provided with all five features achieves 82% precision and 90% recall when tested on the challenging test collection. The performance would likely be considerably higher when applied to the more varied collections of large video sharing sites.
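
The winning feature can be pictured as comparing motion energy on either side of the signer's face; the sketch below is one plausible formulation (the paper's exact definition may differ):

    import numpy as np

    def movement_symmetry(frames, face_x):
        # frames: list of grayscale frames (2-D arrays); face_x: column
        # of the detected face centre. Signing tends to produce motion
        # that is roughly balanced about the face.
        left = right = 0.0
        for prev, cur in zip(frames, frames[1:]):
            diff = np.abs(cur.astype(float) - prev.astype(float))
            left += diff[:, :face_x].sum()
            right += diff[:, face_x:].sum()
        total = left + right
        return 1.0 - abs(left - right) / total if total else 0.0   # 1.0 = symmetric
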

Posters and demonstrations

Turning off-the-shelf games into biofeedback games, pp. 199-200
  Regan L. Mandryk; Michael Kalyn; Yichen Dang; Andre Doucette; Brett Taylor; Shane Dielschneider
Biofeedback games help users maintain specific mental or physical states and are useful to help people with cognitive impairments learn to self-regulate their brain function. However, biofeedback games are expensive and difficult to create and are not sufficiently appealing to hold a user's interest over the long term. We present two systems that turn off-the-shelf games into biofeedback games. Our desktop approach uses visual feedback via texture-based graphical overlays that vary in their obfuscation of an underlying game based on the user's physiological state. Our mobile approach presents multi-modal feedback (audio or vibration) of a user's physiological state on an iPhone.
Smartphone application for indoor scene localization, pp. 201-202
  Nabeel Younus Khan; Brendan McCane
Blind people are unable to navigate easily in unfamiliar indoor environments without assistance. Knowing the current location is a particularly important aspect of indoor navigation. Scene identification inside buildings without any Global Positioning System (GPS) is a challenging problem. We present a smartphone-based assistive technology that uses computer vision techniques to localize an indoor location from a scene image. The aim of our work is to guide blind people during navigation inside buildings where GPS is not effective. Our current system uses a client-server model where the user takes a photo from their current location, the image is sent to the server, the location is sent back to the mobile device, and a voice message is used to convey the location information.
Specialized DVD player to render audio description and its usability performance, pp. 203-204
  Claude Chapdelaine
Our DVD player was designed based on needs identified in our prior in situ study with blind and visually impaired individuals, in which we identified three major issues whose resolution would significantly improve audio-visual content with audio description (AD). Besides the regular accessibility functions found in existing video players, we added three specialized functions to enhance the user experience: the player provides context information on the content, a basic or augmented quantity of AD, and recall functions for key information (scene identification, actions in the scene, and the actors in the scene). We propose to demonstrate the CRIM DVDPlayer and to discuss the results of our seven-month usability study.
IDEAL: a dyslexic-friendly ebook reader, pp. 205-206
  Gaurang Kanvinde; Luz Rello; Ricardo Baeza-Yates
We present an ebook reader for Android which displays ebooks in a more accessible manner for users with dyslexia. The ebook reader combines features that other related tools already have, such as text-to-speech technology, and new features, such as displaying the text with an adapted text layout based on the results of a user study with participants with dyslexia. Since there is no universal profile of a user with dyslexia, the layout settings are customizable and users can override the special layout setting according to their reading preferences.
FEPS: a sensory substitution system for the blind to perceive facial expressions, pp. 207-208
  Md. Iftekhar Tanveer; A. S. M. Iftekhar Anam; A. K. M Mahbubur Rahman; Sreya Ghosh; Mohammed Yeasin
This work demonstrates a visual-to-auditory sensory substitution device (SSD) called Facial Expression Perception through Sound (FEPS). It is designed to enable visually impaired people to participate in more effective social communication by perceiving their interlocutor's facial expressions. Earlier SSDs provided feedback on inferred emotions, whereas this system responds to the facial movements themselves. This avoids the complexities of expression-to-emotion mapping, the problem of capturing the multitude of possible emotions derived from a limited set of facial movements, and the difficulty of correctly predicting emotions given the lack of ground-truth data. The user's ability to understand the facial expressions has been verified in a usability study.
SymbolPath: a continuous motion overlay module for icon-based assistive communication, pp. 209-210
  Karl Wiegand; Rupal Patel
Augmentative and alternative communication (AAC) systems are often used by individuals with severe speech impairments. Icon-based AAC systems typically present users with arrays of icons that are sequentially selected to construct utterances, which are then spoken aloud using text-to-speech (TTS) synthesis. For touch-screen devices, users must lift their finger or hand to select individual icons and avoid selecting multiple icons at once. Because many individuals with severe speech impairments have concomitant limb impairments, repetitive and precise movements can be slow and effortful. The current work aims to enhance message formulation ease and speed by using continuous motion icon selection rather than discrete input. SymbolPath is an overlay module that can be integrated with existing icon-based AAC systems to enable continuous motion icon selection. Message formulation using SymbolPath consists of drawing a continuous path through a set of desired icons. The system then determines the most likely subset of desired icons on that path and rearranges them to form a meaningful and grammatical sentence. In addition to demonstrating the SymbolPath module, we plan to present usability data and discuss iterative modifications to the software.
Automated description generation for indoor floor maps, pp. 211-212
  Devi A. Paladugu; Hima Bindu Maguluri; Qiongjie Tian; Baoxin Li
People with visual impairment generally suffer from diminished freedom in navigating an environment. A practical need is to navigate through unfamiliar indoor environments such as school buildings, hotels, etc., for which commonly-used existing tools like canes, seeing-eye dogs and GPS devices cannot provide adequate support. We demonstrate a prototype system that aims at addressing this practical need. The input to the system is the name of the building/establishment supplied by a user, which is used by a web crawler to determine the availability of a floor map on the corresponding website. If available, the map is downloaded and used by the proposed system to generate a verbal description giving an overview of the locations of key landmarks inside the map with respect to one another. Our preliminary survey and experiments indicate that this is a promising direction to pursue in supporting indoor navigation for the visually impaired.
Non-visual-cueing-based sensing and understanding of nearby entities in aided navigation, pp. 213-214
  Juan Diego Gomez; Guido Bologna; Thierry Pun
Exploring unfamiliar environments is a challenging task in which, additionally, unsighted individuals frequently fail to gain perception of obstacles and make serendipitous discoveries. This is because the mental depiction of the context is drastically lessened due to the absence of visual information. It is still not clear in neuroscience whether stimuli elicited by visual cueing can be replicated by other senses (cross-modal transfer). In practice, however, everyone recognizes a key, whether it is felt in a pocket or seen on a table. We present a context-aware aid system for the blind that merges three levels of assistance to enhance the intelligibility of nearby entities: an exploration module to help gain awareness of the surrounding context, an alerting method that warns the user when a stumble is likely, and a recognition engine that retrieves natural targets previously learned. Practical experience with our system shows that, in the absence of visual cueing, audio and haptic trajectory playback coupled with computer-vision methods is a promising approach to depicting dynamic information about the immediate environment.
EZ Ballot with multimodal inputs and outputs, pp. 215-216
  Seunghyun Lee; Xiao Xiong; Liu Elaine Yilin; Jon Sanford
Current accessible voting machines require many voters with visual, cognitive and dexterity limitations to vote with assistance, if they can vote at all. To address these accessibility problems, we developed the EZ Ballot. Its linear layout fundamentally re-conceptualizes ballot design to provide the same simple and intuitive voting experience for all voters, regardless of ability or input/output (I/O) device used. Further, multimodal I/O interfaces were seamlessly integrated with the ballot structure to provide flexibility in accommodating voters with different abilities.
Breath mobile: a software-based hands-free and voice-free breathing-controlled mobile phone interface, pp. 217-218
  Jackson Feijó Filho; Thiago Valle; Wilson Prata
This work proposes the use of a low-cost, software-based breathing interface for mobile phones as an alternative interaction technology for people with motor disabilities. It explores the processing of audio from the microphone in mobile phones to trigger and launch software events. A proof of concept is demonstrated through the implementation and testing of a mobile application prototype that enables users to perform a basic operation on the phone, such as placing a call, through "puffing" interaction.
What is wrong with this word? Dyseggxia: a game for children with dyslexia, pp. 219-220
  Luz Rello; Clara Bayarri; Azuki Gorriz
We present Dyseggxia, a game application with word exercises for children with dyslexia. We designed the content of the game by combining linguistic and pedagogical criteria with corpus analysis. The main contributions are (i) designing exercises using an analysis of errors written by people with dyslexia and (ii) presenting Spanish reinforcement exercises in the form of a computer game. The game is available for free on iOS and Android.
Tapulator: a non-visual calculator using natural prefix-free codes, pp. 221-222
  Vaspol Ruamviboonsuk; Shiri Azenkot; Richard E. Ladner
A new non-visual method of numeric entry into a smartphone is designed, implemented, and tested. Users tap the smartphone screen with one to three fingers or swipe the screen in order to enter numbers. No buttons are used -- only simple, easy-to-remember gestures. A preliminary evaluation with sighted users compared the method to a standard accessible numeric keyboard with a VoiceOver-like screen reader interface for non-visual entry. We found that users entered numbers faster and with higher accuracy with our number entry method than with the VoiceOver-like interface, showing there is potential for use among blind people as well. We also describe the Tapulator, a complete calculator based on this non-visual numeric entry method that uses simple gestures for arithmetic operations and other calculator actions.
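
A prefix-free code over tap gestures might look like the following (a hypothetical assignment for illustration; Tapulator's actual code is not reproduced here). Because no codeword is a prefix of another, digits can be streamed without pauses or delimiters:

    # '1', '2', '3' denote taps with one, two, or three fingers.
    CODE = {'11': 1, '12': 2, '13': 3,
            '21': 4, '22': 5, '23': 6,
            '31': 7, '32': 8,
            '331': 9, '332': 0}   # '33' alone is unused, preserving prefix-freeness

    def decode(taps):
        # Greedy decoding is unambiguous for a prefix-free code.
        digits, buf = [], ''
        for t in taps:
            buf += t
            if buf in CODE:
                digits.append(CODE[buf])
                buf = ''
        return digits

    print(decode('11331'))   # taps (1,1) -> 1, then (3,3,1) -> 9: [1, 9]
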
Design goals for a system for enhancing AAC with personalized video, pp. 223-224
  Katie O'Leary; Charles Delahunt; Patricia Dowden; Ivan Darmansya; Jiaqi Heng; Eve A. Riskin; Richard E. Ladner; Jacob O. Wobbrock
Enabling end-users of Augmentative and Alternative Communication (AAC) systems to add personalized video content at runtime holds promise for improving communication, but the requirements for such systems are as yet unclear. To explore this issue, we present Vid2Speech, a prototype AAC system for children with complex communication needs (CCN) that uses personalized video to enhance representations of action words. We describe three design goals that guided the integration of personalized video to enhance AAC in our early-stage prototype: 1) Providing social-temporal navigation; 2) Enhancing comprehension; and 3) Enabling customization in real time. Our system concept represents one approach to realizing these goals, however, we contribute the goals and the system as a starting point for future innovations in personalized video-based AAC.
Liberi and the racer bike: exergaming technology for children with cerebral palsy, pp. 225-226
  Zi Ye; Hamilton A. Hernandez; T. C. Nicholas Graham; Darcy Fehlings; Lauren Switzer; Md Ameer Hamza; Irina Schumann
Children with cerebral palsy (CP) often have limited opportunities to engage in physical exercise and to interact with other children. We report on the design of a multiplayer exercise video game and a novel cycling-based exergaming station that allow children with CP to perform vigorous exercise while playing with other children. The game and the station were designed through an iterative and incremental participatory design process involving medical professionals, game designers, computer scientists, kinesiologists, physiotherapists, and eight children with CP. The station combines a physical platform allowing children with CP to provide pedaling input into a game, and a standard PC gamepad. With this station seven of eight children could play a cycling-based game effectively. The game is a virtual world featuring several minigames, group play, and an in-game money-based reward system. Abilities and limitations associated with CP were considered when designing the game. The data collected during the design sessions show that the games are fun and engaging and allow the children to reach exertion levels recommended by the American College of Sports Medicine.
Blue herd: automated captioning for videoconferences, pp. 227-228
  Ira R. Forman; Ben Fletcher; John Hartley; Bill Rippon; Allen Wilson
Blue Herd is a project in IBM Research to investigate automated captioning for videoconferences. Today videoconferences are held among meeting participants connected with a variety of devices: personal computers, mobile devices, and multi-participant meeting rooms. Blue Herd is charged with studying automated real-time captioning in that context. This poster explains the system that was developed for personal computers and describes our experiments extending it to mobile devices and multi-participant meeting rooms.
Toward the development of a BCI and gestural interface to support individuals with physical disabilities, pp. 229-230
  Kavita Krishnaswamy; Ravi Kuber
In this paper, we describe a first step towards the development of a solution to support the movement and repositioning of an individual's limbs. Limb repositioning is particularly valuable for individuals with physical disabilities who are either bed- or chair-bound, to help reduce the occurrence of contractures and pressure ulcers. A data gathering study was performed examining attitudes towards using BCI and gestural devices to control a robotic aid to assist with the repositioning process. Findings from a preliminary study evaluating a controller interface prototype suggest that while BCI and gestural technologies may play a valuable role in limiting the fatigue of interacting with a mouse or other input device, challenges remain in accurately identifying specific facial expressions (e.g. blinks). Future work will aim to refine gesture-detection algorithms, with a view to augmenting the experience of using a BCI and gestural device to control a robotic aid.
An electronic-textile wearable communication board for individuals with autism engaged in horse therapy, pp. 231-232
  Halley P. Profita
Horse therapy is becoming an increasingly popular physical therapy activity for individuals with social, communication, or cognitive impairments as a means to help enhance social and interpersonal skills [1]. However, horse therapy and other very physically engaging therapies pose a challenge for those who rely on a communication board to communicate as the highly unstable nature of such activities impedes device operation. In such an instance, users are typically forced to abandon their communication board [2], rendering them unable to convey vital pieces of information throughout the duration of the physical therapy activity. This poster presents the Electronic-Textile (E-Textile) Wearable Communication Board -- a device that was developed specifically to fill this void and support the communication needs of individuals with autism during horse therapy.
Tongible: a non-contact tongue-based interaction technique, pp. 233-234
  Li Liu; Shuo Niu; Jingjing Ren; Jingyuan Zhang
Using the tongue to access computers has been studied in recent years for people with no or minimal upper limb function. These studies mainly focus on utilizing mechanical or electromagnetic devices. Such devices, however, must make contact with the user's oral cavity, raising hygiene concerns and the risk of accidental ingestion. This work presents an interaction technique named Tongible that employs the tongue as input without any mechanical or electromagnetic assistive device. In Tongible, six tongue gestures are captured by an RGB camera and used as basic controlling gestures. Preliminary usability testing suggests that Tongible is effective for pointing and text entry for people with dexterity impairments.
Effectiveness of the haptic chair in speech training, pp. 235-236
  Suranga Nanayakkara; Lonce Wyse; Elizabeth A. Taylor
The 'Haptic Chair' [3] delivers vibrotactile stimulation to several parts of the body including the palmar surface of the hand (palm and fingers), and has been shown to have a significant positive effect on the enjoyment of music even by the profoundly deaf. In this paper, we explore the effectiveness of using the Haptic Chair during speech therapy for the deaf. We conducted a 24-week study with 20 profoundly deaf users to validate our initial observations. The improvements in word clarity observed over the duration of this study indicate that the Haptic Chair has the potential to make a significant contribution to speech therapy for the deaf.
Access to UML diagrams with the HUTN, pp. 237-238
  Helmut Vieritz; Daniel Schilberg; Sabina Jeschke
Modern software development includes the use of UML for (model-driven) analysis and design, customer communication, etc. Since UML is a graphical notation, alternative forms of representation are needed to avoid barriers for developers and other users with low vision. Here, the Human-Usable Textual Notation (HUTN) is tested and evaluated in a user interface modeling concept to provide accessible model-driven software design.
An interactive play mat for deaf-blind infants, pp. 239-240
  Crystal O'Bryan; Amina Parvez; Dianne Pawluk
There is a great need for interactive toys for deaf-blind infants (1-3 year olds) that motivate exploration of their environment and develop their motor and cognitive skills. We describe relevant design criteria, gleaned from the literature and a discussion with professionals who work with deaf-blind children. We then present a toy consisting of a play mat with three activity areas: one for remembering and repeating vibration patterns and two for matching textures. Vibrators that turn on as the infant moves in the direction of an activity area, as measured by pressure sensors, are used to encourage the infant to explore in that direction.
Investigating authentication methods used by individuals with Down syndrome, pp. 241-242
  Yao Ma; Jinjuan Heidi Feng; Libby Kumin; Jonathan Lazar; Lakshmidevi Sreeramareddy
Although there have been numerous studies investigating password usage by neurotypical users, little research has examined the authentication methods used by individuals with cognitive impairments. In this paper, we report a longitudinal study that investigates how individuals with Down syndrome (DS) interact with three user authentication mechanisms. It confirms that many individuals with DS are capable of using traditional alphanumeric passwords as well as learning other authentication methods. Contrary to previous belief, the results suggest that mnemonic passwords may not be easier to remember for individuals with DS during initial usage.
WatchMe: wrist-worn interface that makes remote monitoring seamless, pp. 243-244
  Shanaka Ransiri; Suranga Nanayakkara
Remote monitoring allows us to understand the regular living behaviors of the elderly and alert their loved ones in emergency situations. In this paper, we describe WatchMe, a software and hardware platform that focuses on making ambient monitoring intuitive and seamless. The WatchMe system consists of a server application and a client application implemented on a regular wristwatch; it thus requires minimal effort to monitor and is less disruptive to the user. We hope that the WatchMe system will contribute to improving the lives of the elderly by creating a healthy link between them and their loved ones.
Musica Parlata: a methodology to teach music to blind people, pp. 245-246
  Alfredo Capozzi; Roberto De Prisco; Michele Nasti; Rocco Zaccagnino
Music education for blind people heavily relies on Braille. The use of Braille for music causes difficulties for the blind student: new meanings for the Braille symbols have to be learned, and the reading of the music is not immediate. Moreover, in the majority of cases, music teachers don't know Braille. Although Braille remains the primary means of music education for blind people, alternative methods can help. We propose a new methodology that aids the reading of music scores by means of software that sings the names of the notes. Singing the names of the notes gives a blind user a direct perception of the score, and the information is conveyed directly to the student through the ear. Although the method has several limitations, we believe that it is effective. The methodology is not intended to "replace" Braille, but only to offer a different approach to the study of music.
Supporting employment matching with mobile interfaces, pp. 247-248
  Ziyi Zhang; Scott McCrickard; Shea Tanis; Clayton Lewis
People with cognitive disabilities need careful matching to find appropriate employment. However, it can be difficult for them to articulate their worries and concerns in a timely and useful manner. This work demonstrates how appropriately designed technology can assist in accommodating cognitive disabilities by providing avenues to assess workers' concerns about work environments. A mobile application was developed with targeted, repeated multimedia surveying to assess work concerns, with temporally and geographically tagged answers stored for review by a personal assistant, employer, job coach, or the person with a cognitive disability. An expert review provided feedback to ensure an appropriate application.
Combining emotion and facial nonmanual signals in synthesized American Sign Language, pp. 249-250
  Jerry C. Schnepp; Rosalee J. Wolfe; John C. McDonald; Jorge A. Toro
Translating from English to American Sign Language (ASL) requires an avatar to display synthesized ASL. Essential to the language are nonmanual signals that appear on the face. Previous avatars were hampered by an inability to portray emotion and facial nonmanual signals that occur at the same time. A new animation system addresses this challenge. Animations produced by the new system were tested with 40 members of the Deaf community in the United States. For each animation, participants were able to identify both nonmanual signals and emotional states. Co-occurring question nonmanuals and affect information were distinguishable, which is particularly striking because the two processes can move an avatar's brows in opposing directions.
Displaying braille and graphics with a "tactile mouse" BIBAFull-Text 251-252
  Victoria E. Hribar; Laura G. Deal; Dianne T. V. Pawluk
Refreshable tactile displays that move with the hand, such as those that resemble computer mice, can display tactile graphics faster and more cost-effectively for individuals who are blind or visually impaired than traditional paper methods of creating tactile diagrams. However, in tactile diagrams the word labels can be as important as the diagram itself, so it is important that these displays can also present Braille. In this work, we present and discuss findings from a study that compared three methods of displaying Braille and tactile graphics simultaneously with a tactile mouse: Braille and graphics at the same amplitude level, Braille and graphics at different amplitude levels, and Braille with a box around it. The simplest method, Braille and graphics at the same amplitude, surprisingly proved to be the most effective.
A participatory design workshop on accessible apps and games with students with learning differences BIBAFull-Text 253-254
  Lisa Anthony; Sapna Prasad; Amy Hurst; Ravi Kuber
This paper describes a Science-Technology-Engineering-Mathematics (STEM) outreach workshop conducted with post-secondary students diagnosed with learning differences, including Learning Disabilities (LD), Attention Deficit / Hyperactivity Disorders (AD/HD), and/or Autism Spectrum Disorders (ASD). In this workshop, students were actively involved in participatory design exercises such as data gathering, identifying accessible design requirements, and evaluating mobile applications and games targeted for diverse users. This hands-on experience broadened students' understanding of STEM areas, provided them with an opportunity to see themselves as computer scientists, and demonstrated how they might succeed in computing careers, especially in human-centered computing and interface design. Lessons learned from the workshop also offer useful insight on conducting participatory design with this unique population.
Hybrid auditory feedback: a new method for mobility assistance of the visually impaired BIBAFull-Text 255-256
  Ibrar Hussain; Ling Chen; Hamid Turab Mirza; Abdul Majid; Gencai Chen
In this paper we present a novel concept of hybrid auditory feedback for the mobility assistance of people with visual disabilities in indoor environments. Hybrid auditory feedback is a gradual conversion of sound from speech-only to non-speech (i.e., spearcons), based on the repetitiveness of the sound and how frequently the user travels the route. Using a within-subject design, eight participants carried out a task with a mobility assistance application, following the same route over several days. Preliminary results suggest that hybrid sounds in auditory feedback are more effective than non-speech feedback and more pleasant than speech-only feedback.
ClickerAID: a tool for efficient clicking using intentional muscle contractions BIBAFull-Text 257-258
  Torsten Felzer; Stephan Rinderknecht
This demo and poster present a tool designed to assist persons who are temporarily or permanently unable to reliably operate the buttons of a physical pointing device, for example because of tenosynovitis (TSV). The tool monitors a dedicated muscle of the user and emulates a click event at the current position of the mouse pointer in response to a contraction of that muscle (which can be as small as raising an eyebrow). The ClickType (the type of the click) -- left, right, single, double, drag -- is selected by the user (who is also responsible for moving the mouse pointer) and stays valid until a new one is selected.
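To make the click-emulation logic concrete, the following minimal Python sketch (using the pynput library; the callback name, ClickType labels, and the muscle-signal detector are illustrative assumptions, not details from the paper) shows how a recognized contraction could be translated into the currently selected click type:

    from pynput.mouse import Button, Controller

    mouse = Controller()
    click_type = "single"   # "single", "double", "right", or "drag";
                            # stays valid until the user selects a new one
    dragging = False        # drag state toggled by successive contractions

    def on_contraction():
        """Called by the (assumed) muscle-signal detector whenever an
        intentional contraction is recognized; emits the currently
        selected ClickType at the current pointer position."""
        global dragging
        if click_type == "single":
            mouse.click(Button.left, 1)
        elif click_type == "double":
            mouse.click(Button.left, 2)
        elif click_type == "right":
            mouse.click(Button.right, 1)
        elif click_type == "drag":
            if dragging:
                mouse.release(Button.left)   # finish the drag
            else:
                mouse.press(Button.left)     # start the drag
            dragging = not dragging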
Assistive system experiment designer ASED: a toolkit for the quantitative evaluation of enhanced assistive systems for impaired persons in production BIBAFull-Text 259-260
  Oliver Korn; Albrecht Schmidt; Thomas Hörz; Daniel Kaupp
This paper introduces the ASED toolkit: the Assistive System Experiment Designer. Combining a specially constructed assembly table with new software, it allows measuring the performance of impaired persons when using assistive systems for production environments (ASiPE). The ASiPE design tested with ASED goes beyond the state of the art through three enhancements, and with the help of ASED we are able to quantify and rank their effects on work quality and performance. The ASED toolkit, however, is not confined to the design tested; it can be used for the experimental analysis of any kind of manual process.
Optimizing gaze typing for people with severe motor disabilities: the iWriter Arabic interface BIBAFull-Text 261-262
  Areej Al-Wabil; Arwa Al-Issa; Itisam Hazzaa; May Al-Humaimeedi; Lujain Al-Tamimi; Bushra Al-Kadhi
Eye typing interfaces have made gaze-based communication in the Arabic language possible through dwell-time selection. This paper describes the design process for developing iWriter, an Arabic gaze communication system. Design considerations for optimizing gaze typing interfaces for Arabic script are discussed.
Toward a design of word processing environment for people with disabilities BIBAFull-Text 263-264
  Adam J. Sporka; Ondrej Polacek
The study presented in this paper aims to identify text editing actions that users routinely perform when composing a text document but that are not directly supported by common word processors. The results of this study will inform the design of a novel word processing interface controlled by a device with a limited number of input signals, operated by people with certain motor disabilities.
Preliminary evaluation of three eyes-free interfaces for point-and-click computer games BIBAFull-Text 265-266
  Javier Torrente; Eugenio J. Marchiori; José Ángel Vallejo-Pinto; Manuel Ortega-Moral; Pablo Moreno-Ger; Baltasar Fernández-Manjón
This paper presents a preliminary evaluation of the perceived entertainment value and ease of use of three eyes-free interfaces for point-and-click games. Interface 1 (I1) uses a web-like cyclical navigation system to change the focused interactive element. Interface 2 (I2) uses a sonar to help the user locate interactive elements with the mouse. Interface 3 (I3) interprets natural language commands typed in by the player. Results suggest that I2 adds the most entertainment value and is appropriate for experienced players. Players found I1 the easiest to use, while I3 seems more suitable for users with little gaming experience.
Accessible collaborative writing for persons who are blind: a usability study BIBAFull-Text 267-268
  John G. Schoeberlein; Yuanqiong Wang
Collaborative writing applications are widely utilized in organizations to co-author documents and jointly exchange ideas. Unfortunately, for persons who are blind, collaborative writing applications are often difficult to access and use. Therefore, this paper presents the results from several usability studies that examined how visually able persons and persons who are blind interact with collaborative writing applications, and the accessibility and usability issues they encounter.
E-Arithmetic: non-visual arithmetic manipulation for students with impaired vision BIBAFull-Text 269-270
  Nancy Alajarmeh; Enrico Pontelli
In this paper we present a web-based system that enables children with impaired vision to perform basic arithmetic: addition, subtraction, multiplication, and division. Accommodating varied levels of vision disability, from minor to severe, the new system provides an electronic auditory alternative to currently used tools.
A music application for visually impaired people using daily goods and stationeries on the table BIBAFull-Text 271-272
  Ikuko Eguchi Yairi; Takuya Takeda
Music applications such as GarageBand have become increasingly popular because they offer an intuitive, comfortable visual interface for playing, remixing, and composing music. However, visually impaired people have difficulty using such applications. This paper therefore introduces a novel music interface for visually impaired people that uses everyday goods and stationery on a table. An experimental system was developed with Kinect for AR-marker and gesture recognition, with the sounds of three instruments: piano, guitar, and percussion. Five young blind people participated in an evaluation of the most suitable combination of goods. The results show that the proposed interface is effective for both individual use and collaborative work.
A feasibility study of crowdsourcing and Google Street View to determine sidewalk accessibility BIBAFull-Text 273-274
  Kotaro Hara; Victoria Le; Jon Froehlich
We explore the feasibility of using crowd workers from Amazon Mechanical Turk to identify and rank sidewalk accessibility issues from a manually curated database of 100 Google Street View images. We examine the effect of three different interactive labeling interfaces (Point, Rectangle, and Outline) on task accuracy and duration. We close the paper by discussing limitations and opportunities for future work.
Replicating semantic connections made by visual readers for a scanning system for nonvisual readers BIBAFull-Text 275-276
  Debra Yarrington; Kathleen F. McCoy
When scanning through a text document for the answer to a question, visual readers can quickly locate text related to the answer while simultaneously getting a general sense of the document's content. For nonvisual readers, however, this poses a challenge, especially when the relevant text is spread out or worded in a way that cannot be searched for directly. Our goal is to make scanning quicker for nonvisual readers by giving them an experience similar to that of visual readers. To do this, we first determined what visual scanners focus on by using an eye tracker while they scanned for answers to complex questions. The resulting data revealed that text with loose semantic connections to the question is important. This paper reports on our efforts to develop a method that automatically replicates the connections made by visual scanners. Ultimately, our goal is a system that replicates the visual scanning experience, allowing nonvisual readers to glean information quickly, much as visual readers do when scanning. This project stems from work with students who are nonvisual readers and aims to make their school experience more equitable with that of students who scan visually.
It is not a talking book: it is more like really reading a book! BIBAFull-Text 277-278
  Yasmine N. El-Glaly; Francis Quek; Tonya L. Smith-Jackson; Gurjot Dhillon
In this research we designed, developed, and tested a reading system that enables Individuals with Blindness or Severe Visual Impairment (IBSVI) to fuse audio, tactile landmarks, and spatial information in order to read. The system renders electronic text documents on iPad-type devices and reads aloud each word touched by the user's finger. A tactile overlay on the iPad screen helps IBSVI navigate a page, furnishing a framework of tactile landmarks that gives them a sense of place on the page. As the user moves her finger along the tangible pattern of the overlay, the touched text on the iPad screen is rendered audibly using a text-to-speech synthesizer.
MeetUp: a universally designed smartphone application to find another BIBAFull-Text 279-280
  Nara Kim; Matthew I. Moyers
MeetUp is a universally designed mobile application that helps blind and sighted users find one another. The Android application was originally designed for blind users, but a visual interface was added so that it is usable by sighted users as well. The system design and user interactions with the application are described.
Gesture interface magnifiers for low-vision users BIBAFull-Text 281-282
  Seunghyun (Tina) Lee; Jon Sanford
This study compared different types of magnification and navigation methods on low-vision handheld magnifiers to determine the feasibility of a touch screen gesture interface. The results show that even though participants had no prior experience using gestures for magnification or navigation, they were more satisfied with gestures. Gestures were also faster and preferred over the indirect input methods of pushing a button or rotating a knob, which were already familiar to participants from other electronic device interfaces. The study suggests that gestures may afford an alternative and more natural magnification and navigation method for a new user-centric low-vision magnifier.
3D point of gaze estimation using head-mounted RGB-D cameras BIBAFull-Text 283-284
  Christopher McMurrough; Christopher Conly; Vassilis Athitsos; Fillia Makedon
This paper presents a low-cost, wearable headset for 3D Point of Gaze (PoG) estimation in assistive applications. The device consists of an eye tracking camera and a forward-facing RGB-D scene camera which, together, provide an estimate of the user's gaze vector and its intersection with a 3D point in space. The resulting system is able to compute the 3D PoG in real time using inexpensive and readily available hardware components.
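The core geometric step such a system must perform, intersecting the gaze ray with the depth image, can be sketched as follows. This minimal Python example is an illustration under assumed conventions (a pinhole depth camera and a gaze ray already expressed in the scene camera's coordinate frame); the function name and parameters are not from the paper:

    import numpy as np

    def point_of_gaze_3d(origin, direction, depth, fx, fy, cx, cy,
                         step=0.01, max_range=5.0):
        """March along the gaze ray and return the first 3D point that
        reaches the surface recorded in the depth image (meters, scene-
        camera coordinates). Simplified pinhole model; illustrative only."""
        direction = direction / np.linalg.norm(direction)
        t = step
        while t < max_range:
            p = origin + t * direction            # candidate point on the ray
            if p[2] > 0:                          # in front of the camera
                u = int(fx * p[0] / p[2] + cx)    # project into the image
                v = int(fy * p[1] / p[2] + cy)
                if 0 <= u < depth.shape[1] and 0 <= v < depth.shape[0]:
                    d = depth[v, u]
                    if d > 0 and p[2] >= d:       # ray has hit the surface
                        return p
            t += step
        return None                               # no intersection in range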
Displaying error & uncertainty in auditory graphs BIBAFull-Text 285-286
  Jared M. Batterman; Bruce N. Walker
Clear representation of uncertainty or error is crucial in graphs and other displays of data. Error bars are quite common in visual graphs, even though they are not always well designed and are often misunderstood, even by frequent users such as scientists and engineers. There has been little study of how to represent uncertainty in auditory graphs, such as those used increasingly by students and scientists with vision impairment. This study used conceptual magnitude estimation to determine how well different auditory dimensions (frequency, tempo) can represent error and uncertainty. The results will lead to more effective auditory displays of quantitative information and data.
Visualizations for self-reflection on mouse pointer performance for older adults BIBAFull-Text 287-288
  Jasmine Jones; Steven Hall; Mieke Gentis; Carrie Reynolds; Chitra Gadwal; Amy Hurst; Judah Ronch; Callie Neylan
Aging causes physical and cognitive changes that influence how we interact with the world around us. As personal data becomes increasingly available from a variety of sources, older adults can use this information to better understand these changes and adapt. Our project explores information visualization as a tool to help older adults interpret and understand their own personal data. To test this concept, we created visualizations of users' pointer performance metrics to help demystify problems in real-world mouse use. In a user study conducted with older adults with a range of computing experience, we learned that such visualizations can be a highly engaging information medium for this population. This paper presents our value-driven design process and recommendations for visualizations for older adults.

Student research competition abstract

Accessible skimming: faster screen reading of web pages BIBAFull-Text 289-290
  Faisal Ahmed
Sighted people know how to quickly glance over the headlines and news articles online to get the gist of information. On the other hand, people who are blind use screen-readers to listen through the content narrated by a serial audio interface. This interface does not give them an opportunity to know what content to skip and what to listen to. In this work, I present an automated approach to facilitate non-visual skimming of web pages.
Accessible web automation interface: a user study BIBAFull-Text 291-292
  Yury Puzis
With the growth of the Web as a platform for performing many useful daily tasks, such as shopping and paying bills, and as an important vehicle for doing business, the Web's potential to improve the quality of life of blind and low-vision users is greater than ever. However, the growing sophistication of Web applications continues to outpace the capabilities of tools that help make the Web more accessible. Web automation has the potential to bridge the divide between the ways visually impaired and sighted users access the Web, and to enable visually impaired users to breeze through Web browsing tasks that were previously slow, hard, or even impossible to accomplish. Typical automation interfaces require the user to record a macro, a useful sequence of browsing steps, so that these steps can be replayed in the future. In this paper, I present the results of an evaluation of two web automation user interfaces that enable web automation without having to record macros. The experiments suggest that the approach has the potential to significantly increase the accessibility and usability of web pages by reducing interaction time and enhancing the user experience.
Detecting hunchback behavior in autistic children with smart phone assistive devices BIBAFull-Text 293-294
  Shu-Hsien Lin
The research target in this case study was an autistic student at a special education school who often unconsciously became hunchbacked during group activities or when not talking to people. We designed a smartphone-based system for hunchback detection. Combined with an assistive T-shirt, the system could detect whether the student's back was hunched. Through this demonstration, we showcased the potential of using smartphones to develop simpler and more effective assistive devices for people with disabilities.
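Posture detectors of this kind are commonly built on a phone's accelerometer. The following minimal Python sketch is one plausible realization, not the authors' implementation; the tilt threshold, window length, and chest-mounted sensor placement are all assumptions:

    import math

    HUNCH_ANGLE_DEG = 25.0   # illustrative threshold, not from the paper
    WINDOW = 50              # consecutive samples required before alerting

    def forward_tilt_deg(ax, ay, az):
        """Estimate the forward tilt of the torso, in degrees, from the
        gravity vector measured by a chest-mounted accelerometer."""
        return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))

    class HunchDetector:
        def __init__(self):
            self.count = 0

        def update(self, ax, ay, az):
            """Feed one accelerometer sample; returns True once the tilt
            has exceeded the threshold for a full window of samples."""
            if forward_tilt_deg(ax, ay, az) > HUNCH_ANGLE_DEG:
                self.count += 1
            else:
                self.count = 0
            return self.count >= WINDOW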
Detecting the hand-mouthing behavior of children with intellectual disability using Kinect imaging technology BIBAFull-Text 295-296
  Tzu-Wei Wei
Research indicates that approximately 17% of individuals with intellectual disability engage in hand-mouthing behavior, and the proportion is even higher among those with extremely severe intellectual disability. Stereotypic and excessive hand-mouthing may lead to an unpleasant odor, lesions of the skin and muscular tissues, and infections. Correcting hand-mouthing behavior typically requires a substantial amount of intervention from special education staff, which results in prolonged treatment periods and has negative effects on the students' learning and interaction with their peers, creating barriers to their social integration. In this study, we applied Kinect imaging technology to detect children's hand-mouthing behavior. This method enables rapid verification of the hand-mouthing intervention strategies proposed by special education teachers, thereby reducing students' hand-mouthing behavior and facilitating individual learning.
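One plausible detection rule, offered here only as an illustration and not as the study's method, is to threshold the distance between the hand and head joints of the Kinect skeleton:

    import numpy as np

    MOUTH_RADIUS_M = 0.12   # illustrative hand-to-head distance threshold

    def hand_near_mouth(joints):
        """joints: dict mapping joint names to (x, y, z) positions in
        meters from one Kinect skeleton frame; joint names here are
        illustrative. Returns True if either hand lies within a small
        sphere around the head joint, a rough proxy for the mouth."""
        head = np.asarray(joints["head"])
        for hand in ("hand_left", "hand_right"):
            if np.linalg.norm(np.asarray(joints[hand]) - head) < MOUTH_RADIUS_M:
                return True
        return False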
Face tracking user interfaces using vision-based consumer devices BIBAFull-Text 297-298
  Norman H. Villaroman
One form of natural user interaction with a personal computer is based on natural face movements. This is especially helpful for users who cannot effectively use common input devices with their hands but have sufficient control of their heads. Using vision-based consumer devices makes such a user interface readily available and non-intrusive. This kind of interface presents significant challenges, particularly with accuracy and design. This research aims to investigate such problems and discover solutions for creating a usable and robust face tracking user interface using currently available technology. Design requirements were set, and different design options were implemented and evaluated.
Kinempt: a Kinect-based prompting system to transition autonomously through vocational tasks for individuals with cognitive impairments BIBAFull-Text 299-300
  Yu-Chi Tsai
Kinect is used as assistive technology for individuals with cognitive impairments to help them perform task steps independently. In a community-based rehabilitation program under the guidance of three job coaches, a task prompting system called Kinempt was designed to assist two participants in pre-service food preparation training. Results indicate that for participants with cognitive disabilities, the acquisition of job skills may be facilitated by using Kinempt in conjunction with operant conditioning strategies.
Reusable game interfaces for people with disabilities BIBAFull-Text 301-302
  Javier Torrente
Computer games are a very popular medium today, spanning multiple aspects of life, including not only leisure but also health and education. Despite their importance, their current level of accessibility is still low. One cause is that accessibility imposes additional cost and effort on developers that is in many cases unaffordable. To ease developers' work, this project proposes specialized tools for dealing with accessibility. The hypothesis was that tools could reduce the input needed to adapt games for people with special needs while achieving a good level of usability, resulting in a reduction of the cost and effort required. As game development tools and approaches are heterogeneous and diverse, two case studies were set up targeting two different platforms: a high-level PC game authoring tool and a low-level Android game programming framework. Several games were developed using these tools, and their usability was tested. Initial results indicate that high usability levels can be achieved with minimal additional input from the game author.
Wii remote as a web navigation device for people with cerebral palsy BIBAFull-Text 303-304
  Nithin Santhanam
This study evaluates the use of the Nintendo Wii remote, relative to a standard wireless mouse, as a web navigation device. Nine participants with cerebral palsy performed three typical web tasks, and six of them showed improved task times with the Wii remote. With suitable customization through freely available software, the Wii remote shows promise as a flexible and inexpensive alternative.