[1]
Exploring Interface Design for Independent Navigation by People with Visual
Impairments
Poster Session 2
/
Brady, Erin L.
/
Sato, Daisuke
/
Ruan, Chengxiong
/
Takagi, Hironobu
/
Asakawa, Chieko
Seventeenth International ACM SIGACCESS Conference on Computers and
Accessibility
2015-10-26
p.387-388
© Copyright 2015 ACM
Summary: Most user studies of navigation applications for people with visual
impairments have been limited by existing localization technologies, and
appropriate instruction types and information needs have been determined
through interviews. Using Wizard-of-Oz navigation interfaces, we explored how
people with visual impairments respond to different instruction intervals,
precision, output modalities, and landmark use during in situ navigation tasks.
We present the results of an experimental study with nine people with visual
impairments, and provide direction and open questions for future work on
adaptive navigation interfaces.
[2]
Can a blind person understand your world?
After-dinner "William Loughborough" speech
/
Asakawa, Chieko
Proceedings of the 2014 International Cross-Disciplinary Conference on Web
Accessibility (W4A)
2014-04-07
p.24
© Copyright 2014 ACM
Summary: Computers have changed the lives of blind people by allowing us to access
vast amounts of information on the net. Now we can read daily newspapers, hear
digital textbooks, shop for goods online, and join online social networks.
However, "sensing the surrounding real world" is still challenging for such
tasks as checking the colors of merchandise, responding to street signs, or
recognizing smiling faces. Driving a car is still one of the largest
challenges, but technology is continually breaking new ground. The expansion of
online data is now pushing machine learning techniques and crowd sourcing
methods, which together enable blind people to understand ever more about the
real world. Just as importantly, these same technologies can help sighted
people better understand the world, too. We have entered an era of assisted
cognition, not only for persons with disabilities, but for everyone. In this
talk, I will offer predictions about near-future possibilities and discuss how
these technologies can change our lives.
[3]
Age-Based Task Specialization for Crowdsourced Proofreading
Age-Related Issues
/
Kobayashi, Masatomo
/
Ishihara, Tatsuya
/
Itoko, Toshinari
/
Takagi, Hironobu
/
Asakawa, Chieko
UAHCI 2013: 7th International Conference on Universal Access in
Human-Computer Interaction, Part II: User and Context Diversity
2013-07-21
v.2
p.104-112
Keywords: Accessibility; Micro-tasks; Crowdsourcing; Collaboration; Elderly;
Intergenerational Communications
© Copyright 2013 Springer-Verlag
Summary: Crowdsourcing can efficiently produce accessible digital books for people
with print disabilities. However, particularly in Japan, the proofreading step
tends to be expensive because of language-related issues. The elderly
population is a promising source of proofreaders. Our surveys found that they
have strong linguistic skills and want to contribute to society. So why do they
rarely participate in Internet-based work scenarios such as crowdsourcing? We
introduce a collaborative crowdsourcing model that aims to fully utilize the
linguistic skills of the elderly by encouraging younger people to support the
elderly in overcoming their limited technical skills. We decompose each
proofreading task into several types of sub-tasks, where some sub-tasks
require more linguistic skill while others need more technical skill, so
that the linguistic and technical tasks can be distributed to older and
younger participants, respectively. We also discuss other scenarios that
may be suitable for such a multi-generational crowdsourcing model.
[4]
How Unfamiliar Words in Smartphone Manuals Affect Senior Citizens
Access to Mobile Interaction
/
Ishihara, Tatsuya
/
Kobayashi, Masatomo
/
Takagi, Hironobu
/
Asakawa, Chieko
UAHCI 2013: 7th International Conference on Universal Access in
Human-Computer Interaction, Part III: Applications and Services for Quality of
Life
2013-07-21
v.3
p.636-642
Keywords: Word familiarity; text readability; ageing; smartphone
© Copyright 2013 Springer-Verlag
Summary: Elderly people are motivated to continue working, but may have difficulties
working in full-time jobs and need flexible working styles to compensate for
their declining physical abilities. ICT can help support flexible working
styles by enhancing communication between people in distant places. Smartphones
offer various features for communication and information gathering, thus
creating more opportunities to work. However, smartphone adoption has been slow
for the elderly. One of the reasons is that elderly people have lower
familiarity with computer terminology and therefore find the manuals difficult
to understand. In this study, we investigated factors that make smartphone
manuals hard to understand. We first asked elderly people about their
familiarity with words found in smartphone manuals. Our second survey asked
about sentences extracted from the smartphone manuals. By analyzing these
results, we found that comprehension was highly correlated with their
familiarity with the specialized vocabulary.
[5]
Accessible photo album: enhancing the photo sharing experience for people
with visual impairment
Papers: design for the blind
/
Harada, Susumu
/
Sato, Daisuke
/
Adams, Dustin W.
/
Kurniawan, Sri
/
Takagi, Hironobu
/
Asakawa, Chieko
Proceedings of ACM CHI 2013 Conference on Human Factors in Computing Systems
2013-04-27
v.1
p.2127-2136
© Copyright 2013 ACM
Summary: While a photograph is a visual artifact, studies reveal that a number of
people with visual impairments are also interested in being able to share their
memories and experiences with their sighted counterparts in the form of a
photograph. We conducted an online survey to better understand the challenges
faced by people with visual impairments in sharing and organizing photos, and
reviewed existing tools and their limitations. Based on our analysis, we
developed an accessible mobile application that enables a visually impaired
user to capture photos along with audio recordings for the ambient sound and
memo description and to browse through them eyes-free. Five visually impaired
participants took part in a study in which they used our app to take
photographs in naturalistic settings and to share them later with a sighted
viewer. The participants were able to use our app to identify each photograph
on their own during the photo sharing session, and reported high satisfaction
in having been able to take the initiative during the process.
[6]
Lessons Learned from Crowd Accessibility Services
Designing for Inclusiveness I
/
Takagi, Hironobu
/
Harada, Susumu
/
Sato, Daisuke
/
Asakawa, Chieko
Proceedings of IFIP INTERACT'13: Human-Computer Interaction-1
2013
v.1
p.587-604
Keywords: Crowd-sourcing; accessibility; digital book; captioning; Web accessibility
© Copyright 2013 IFIP
Summary: Crowd accessibility services for people with disabilities, driven by
crowd-sourcing methods, are gaining traction as a viable means of realizing
innovative services by leveraging both human and machine intelligence. As the
approach matures, researchers and practitioners are seeking to build various
types of services. However, many of them encounter similar challenges, such as
variations in quality and sustaining contributor participation for durable
services. There are growing needs to share tangible knowledge about the best
practices to help build and maintain successful services. Towards this end, we
are sharing our experiences with crowd accessibility services that we have
deployed and studied. Initially, we developed a method to analyze the dynamics
of contributor participation. We then analyzed the actual data from three
service deployments spanning several years. The service types included Web
accessibility improvement, text digitization, and video captioning. We then
summarize the lessons learned and future research directions for sustainable
services.
[7]
Question-Answer Cards for an Inclusive Micro-tasking Framework for the
Elderly
Seniors and Usability
/
Kobayashi, Masatomo
/
Ishihara, Tatsuya
/
Kosugi, Akihiro
/
Takagi, Hironobu
/
Asakawa, Chieko
Proceedings of IFIP INTERACT'13: Human-Computer Interaction-3
2013
v.3
p.590-607
Keywords: Micro-Tasks; Gamification; Skill Assessment; Ageing; Elderly; Senior
Workforce
© Copyright 2013 IFIP
Summary: Micro-tasking (e.g., crowdsourcing) has the potential to help "long-tail"
senior workers utilize their knowledge and experience to contribute to their
communities. However, their limited ICT skills and their concerns about new
technologies can prevent them from participating in emerging work scenarios. We
have devised a question-answer card interface to allow the elderly to
participate in micro-tasks with minimal ICT skills and learning efforts. Our
survey identified a need for skill-based task recommendations, so we also added
a probabilistic skill assessment model based on the results of the micro-tasks.
We also discuss some scenarios to exploit the question-answer card framework to
create new work opportunities for senior citizens. Our experiments showed that
untrained seniors performed the micro-tasks effectively with our interface in
both controlled and realistic conditions, and the differences in their skills
were reliably assessed.
[8]
Characteristics of Elderly User Behavior on Mobile Multi-touch Devices
User Preferences and Behaviour
/
Harada, Susumu
/
Sato, Daisuke
/
Takagi, Hironobu
/
Asakawa, Chieko
Proceedings of IFIP INTERACT'13: Human-Computer Interaction-4
2013
v.4
p.323-341
Keywords: Mobile; Multi-touch; Smartphones; Tablet; Aging; Elderly
© Copyright 2013 IFIP
Summary: Smartphones and tablet devices have been rapidly proliferating, and
multi-touch interaction, powerful processors, and a rich array of sensors
make these devices an attractive service platform for older users. While a
growing body of work investigates the issues that elderly users experience
when interacting with mobile devices, most studies have focused either on
evaluating low-level interaction characteristics or on qualitative surveys.
We therefore conducted a user study with 21 elderly participants to
analyze the needs and issues faced by this user group under naturalistic usage
scenarios. Specifically, we interviewed each participant about their
experiences, had them perform various practical tasks using our custom testing
application, and analyzed the operation logs using our custom visualizations.
Based on our results, we summarize the types of issues observed, present
design considerations for the applications studied, and outline future
research directions.
[9]
How voice augmentation supports elderly web users
Web accessibility
/
Sato, Daisuke
/
Kobayashi, Masatomo
/
Takagi, Hironobu
/
Asakawa, Chieko
/
Tanaka, Jiro
Thirteenth Annual ACM SIGACCESS Conference on Assistive Technologies
2011-10-24
p.155-162
© Copyright 2011 ACM
Summary: Online Web applications have become widespread and have made our daily life
more convenient. However, older adults often find such applications
inaccessible because of age-related changes to their physical and cognitive
abilities. Two of the reasons that older adults may shy away from the Web are
fears of the unknown and of the consequences of incorrect actions. We are
extending a voice-based augmentation technique originally developed for blind
users. We want to reduce the cognitive load on older adults by providing
contextual support. An experiment was conducted to evaluate how voice
augmentation can support elderly users in using Web applications. Ten older
adults participated in our study and their subjective evaluations showed how
the system gave them confidence in completing Web forms. We believe that voice
augmentation may help address the users' concerns arising from their low
confidence levels.
[10]
Elderly User Evaluation of Mobile Touchscreen Interactions
Accessibility II
/
Kobayashi, Masatomo
/
Hiyama, Atsushi
/
Miura, Takahiro
/
Asakawa, Chieko
/
Hirose, Michitaka
/
Ifukube, Tohru
Proceedings of IFIP INTERACT'11: Human-Computer Interaction
2011-09-05
v.1
p.83-99
Keywords: Mobile; Smartphones; Touchscreens; Gestures; Aging; Elderly; Senior
Citizens; User Evaluation
© Copyright 2011 IFIP
Summary: Smartphones with touchscreen-based interfaces are increasingly used by
non-technical groups including the elderly. However, application developers
have little understanding of how senior users interact with their products and
of how to design senior-friendly interfaces. As an initial study to assess
standard mobile touchscreen interfaces for the elderly, we conducted
performance measurements and observational evaluations of 20 elderly
participants. The tasks included performing basic gestures such as taps, drags,
and pinching motions and using basic interactive components such as software
keyboards and photo viewers. We found that mobile touchscreens were generally
easy for the elderly to use and a week's experience generally improved their
proficiency. However, careful observations identified several typical problems
that should be addressed in future interfaces. We discuss the implications of
our experiments, seeking to provide informal guidelines for application
developers to design better interfaces for elderly people.
[11]
Sasayaki: augmented voice web browsing experience
Sound interactions
/
Sato, Daisuke
/
Zhu, Shaojian
/
Kobayashi, Masatomo
/
Takagi, Hironobu
/
Asakawa, Chieko
Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems
2011-05-07
v.1
p.2769-2778
© Copyright 2011 ACM
Summary: Auditory user interfaces have great Web-access potential for billions of
people with visual impairments, with limited literacy, who are driving, or who
are otherwise unable to use a visual interface. However, a sequential
speech-based representation can only convey a limited amount of information. In
addition, typical auditory user interfaces lose the visual cues such as text
styles and page structures, and lack effective feedback about the current
focus. To address these limitations, we created Sasayaki (from whisper in
Japanese), which augments the primary voice output with a secondary whisper of
contextually relevant information, automatically or in response to user
requests. It also offers new ways to jump to semantically meaningful locations.
A prototype was implemented as a plug-in for an auditory Web browser. Our
experimental results show that Sasayaki can reduce task completion times
for finding elements in webpages and increase satisfaction and
confidence.
[12]
On the audio representation of radial direction
Sound interactions
/
Harada, Susumu
/
Takagi, Hironobu
/
Asakawa, Chieko
Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems
2011-05-07
v.1
p.2779-2788
© Copyright 2011 ACM
Summary: We present and evaluate an approach towards eyes-free auditory display of
spatial information that considers radial direction as a fundamental type of
value primitive. There are many benefits to being able to sonify radial
directions, such as indicating the heading towards a point of interest in a
direct and dynamic manner, rendering a path or shape outline by sonifying a
continual sequence of tangent directions as the path is traced, and providing
direct feedback of the direction of motion of the user in a physical space or a
pointer in a virtual space. We propose a concrete mapping of vowel-like sounds
to radial directions as one potential method to enable sonification of such
information. We conducted a longitudinal study with five sighted and two blind
participants to evaluate the learnability and effectiveness of this method.
Results suggest that our directional sound mapping can be learned within a few
hours and be used to aurally perceive spatial information such as shape
outlines and path contours.
[13]
Are synthesized video descriptions acceptable?
Communication
/
Kobayashi, Masatomo
/
O'Connell, Trisha
/
Gould, Bryan
/
Takagi, Hironobu
/
Asakawa, Chieko
Twelfth Annual ACM SIGACCESS Conference on Assistive Technologies
2010-10-25
p.163-170
© Copyright 2010 ACM
Summary: We conducted a series of experiments to assess the feasibility of
synthesized narrations to describe online videos. To reduce cultural bias,
we included adult blind or low-vision participants from Japan and the U.S. in
the main study. Our research also includes a follow-up study we conducted in
Japan to assess the effectiveness of synthesized video descriptions in
realistic situations. The results showed that synthesized video descriptions
were generally accepted in both countries. We also found that appropriate
technology support allowed a novice describer to make effective video
descriptions. Based on these results, we discuss the implications for
developing a technology platform for describing online videos.
[14]
Sasayaki: an augmented voice-based web browsing experience
Posters and Demonstrations
/
Zhu, Shaojian
/
Sato, Daisuke
/
Takagi, Hironobu
/
Asakawa, Chieko
Twelfth Annual ACM SIGACCESS Conference on Assistive Technologies
2010-10-25
p.279-280
© Copyright 2010 ACM
Summary: While the usability of voice-based Web navigation has been steadily
improving, it is still not as easy for users with visual impairments as it is
for sighted users. One reason is that sequential voice representation can only
convey a limited amount of information at a time. Another challenge comes from
the fact that current voice browsers omit various visual cues such as text
styles and page structures, and lack meaningful feedback about the current
focus. To address these issues, we created Sasayaki, an intelligent voice-based
user agent that augments the primary voice output of a voice browser with a
secondary voice that whispers contextually relevant information as appropriate
or in response to user requests. A prototype has been implemented as a plug-in
for a voice browser. The results from a pilot study show that our Sasayaki
agent is able to improve users' information search task time and their overall
confidence level. We believe that our intelligent voice-based agent has great
potential to enrich the Web browsing experiences of users with visual
impairments.
[15]
Exploratory Analysis of Collaborative Web Accessibility Improvement
/
Sato, Daisuke
/
Takagi, Hironobu
/
Kobayashi, Masatomo
/
Kawanaka, Shinya
/
Asakawa, Chieko
ACM Transactions on Accessible Computing
2010-10
v.3
n.2
p.5
© Copyright 2010-10 ACM
Summary: The Web is becoming a platform for daily activities and is expanding the
opportunities for collaboration among people all over the world. The effects of
these innovations are seen not only in major Web services such as wikis and
social networking services but also in accessibility services. Collaborative
accessibility improvement has great potential to make the Web more adaptive.
Screen reader users, developers, site owners, and any Web volunteers who want
to help the users are invited into the activities to improve accessibility in a
timely manner. The Social Accessibility Project is an experimental service for
a new needs-driven improvement model based on collaborative metadata authoring
technologies. In 20 months, about 19,000 pieces of metadata were created for
more than 3,000 Web pages through collaboration, based on 355 requests
submitted by users. We encountered many challenges as we sought to create a new
mainstream approach and created distinctive features in new user interfaces to
address some of these challenges. Although the new features increased user
participation, serious issues remain. The productivity of the volunteers
exceeded our expectations, but we found large and important problems in the
users' lack of awareness of their own accessibility problems. This is a
critical problem for sustaining the active use of the service, because about
70% of the improvement starts with a request from a user. Helping users with
visual impairments understand the actual issues is a crucial and challenging
topic, and will lead to improved accessibility. We first introduce examples of
collaboration, analyze several kinds of statistics on the activities of the
users and volunteers of the pilot service, and then discuss our findings and
challenges. Five future foci are considered: site-wide metadata authoring,
encouraging active participation by users, quality management for the created
metadata, metadata for dynamic HTML applications, and collaborations with site
owners.
[16]
Social accessibility: the challenge of improving web accessibility through
collaboration
Web accessibility challenge
/
Sato, Daisuke
/
Kobayashi, Masatomo
/
Takagi, Hironobu
/
Asakawa, Chieko
Proceedings of the 2010 International Cross-Disciplinary Conference on Web
Accessibility (W4A)
2010-04-26
p.28
Keywords: accessibility, crowd sourcing, social computing, web
© Copyright 2010 ACM
Summary: There are billions of people who face problems in accessing webpages,
including people with disabilities, elderly people, and illiterate people in
developing countries. The need for accessible webpages has become too broad
to be left only to Web developers. The wisdom of crowds has become part of a
key strategy to combine various skills and knowledge into a community that can
address the needs for accessibility. Social Accessibility is one such project
for visually impaired people, which has been operating for more than a year,
producing findings and new challenges. Based on our experiences, the
collaborative approach can work well and be expanded for people with other
problems such as poor hearing, aged eyes, and reading problems.
[17]
Collaborative web accessibility improvement: challenges and possibilities
Web accessibility II
/
Takagi, Hironobu
/
Kawanaka, Shinya
/
Kobayashi, Masatomo
/
Sato, Daisuke
/
Asakawa, Chieko
Eleventh Annual ACM SIGACCESS Conference on Assistive Technologies
2009-10-26
p.195-202
Keywords: accessibility, collaboration, metadata, social computing, web
© Copyright 2009 ACM
Summary: Collaborative accessibility improvement has great potential to make the Web
more adaptive in a timely manner by inviting users into the improvement
process. The Social Accessibility Project is an experimental service for a new
needs-driven improvement model based on collaborative metadata authoring
technologies. In 10 months, about 18,000 pieces of metadata were created for
2,930 webpages through collaboration. We encountered many challenges as we
sought to create a new mainstream approach. The productivity of the volunteer
activities exceeded our expectations, but we found large and important problems
in the screen reader users' lack of awareness of their own accessibility
problems. In this paper, we first introduce examples, analyze some statistics
from the pilot service and then discuss our findings and challenges. Three
future directions including site-wide authoring are considered.
[18]
Providing synthesized audio description for online videos
Posters and system demonstrations
/
Kobayashi, Masatomo
/
Fukuda, Kentarou
/
Takagi, Hironobu
/
Asakawa, Chieko
Eleventh Annual ACM SIGACCESS Conference on Assistive Technologies
2009-10-26
p.249-250
Keywords: audio description, external metadata, online videos, speech synthesis,
text-to-speech (tts), web accessibility
© Copyright 2009 ACM
Summary: We describe an initial attempt to develop a common platform for adding an
audio description (AD) to an online video so that blind and visually impaired
people can enjoy such material. A speech synthesis technology allows content
providers to offer the AD at minimal cost. We exploit external metadata so that
the AD can be independent of the video format. The external approach also
allows external supporters to add ADs to any online videos. Our technology
includes an authoring tool for writing AD scripts, a Web browser add-on for
synthesizing ADs synchronized with original videos, and a text-based format to
exchange AD scripts.
[19]
What's Next? A Visual Editor for Correcting Reading Order
HCI and Web Applications 1
/
Sato, Daisuke
/
Kobayashi, Masatomo
/
Takagi, Hironobu
/
Asakawa, Chieko
Proceedings of IFIP INTERACT'09: Human-Computer Interaction
2009-08-24
v.1
p.364-377
Keywords: Reading flow; reading order; Web accessibility; ARIA flowto
© Copyright 2009 IFIP
Summary: The reading order, i.e., the serialized form of a webpage, should be
meaningful for alternative representations such as the audible forms needed
by visually impaired users. However, the serialized form rarely receives
attention because it is visually elusive for authors using existing WYSIWYG
authoring environments. Therefore, we propose a new
visualization technique called "reading flow" that visualizes the order of the
serialized form with variable granularity by using a visible path extending
through the elements in the content. This allows the authors to instantly
evaluate the ordering by the visual pattern of the path. Our approach also
allows them to interactively and intuitively reorganize the order of the
serialized form. The results of two comparative experiments show that our
reading flow greatly increases the ability of the authors to understand and
organize the ordering compared to the existing techniques.
[20]
EDITED BOOK
The Universal Access Handbook
2009
n.61
p.1034
CRC Press
== Introduction to Universal Access ==
Universal Access and Design for All in the Evolving Information Society
+ Stephanidis, C.
Perspectives on Accessibility: From Assistive Technologies to Universal Access and Design for All
+ Emiliani, P. L.
Accessible and Usable Design of Information and Communication Technologies
+ Vanderheiden, G. C.
== Diversity in the User Population ==
Dimensions of User Diversity
+ Ashok, M.
+ Jacko, J. A.
Motor Impairments and Universal Access
+ Keates, S.
Sensory Impairments
+ Kinzel, E.
+ Jacko, J. A.
Cognitive Disabilities
+ Lewis, C.
Age-Related Differences in the Interface Design Process
+ Kurniawan, S.
International and Intercultural User Interfaces
+ Marcus, A.
+ Rau, P.-L. P.
== Technologies for Diverse Contexts of Use ==
Accessing the Web
+ Hanson, V. L.
+ Richards, J. T.
+ Harper, S.
+ Trewin, S.
Handheld Devices and Mobile Phones
+ Kaikkonen, A.
+ Kaasinen, E.
+ Ketola, P.
Virtual Reality
+ Hughes, D.
+ Smith, E.
+ Shumaker, R.
+ Hughes, C.
Biometrics and Universal Access
+ Fairhurst, M. C.
Interface Agents: Potential Benefits and Challenges for Universal Access
+ André, E.
+ Rehm, M.
== Development Lifecycle of User Interfaces ==
User Requirements Elicitation for Universal Access
+ Antona, M.
+ Ntoa, S.
+ Adami, I.
+ Stephanidis, C.
Unified Design for User Interface Adaptation
+ Savidis, A.
+ Stephanidis, C.
Designing Universally Accessible Games
+ Grammenos, D.
+ Savidis, A.
+ Stephanidis, C.
Software Requirements for Inclusive User Interfaces
+ Savidis, A.
+ Stephanidis, C.
Tools for Inclusive Design
+ Waller, S.
+ Clarkson, P. J.
The Evaluation of Accessibility, Usability, and User Experience
+ Petrie, H.
+ Bevan, N.
== User Interface Development: Architectures, Components, and Tools ==
A Unified Software Architecture for User Interface Adaptation
+ Savidis, A.
+ Stephanidis, C.
A Decision-Making Specification Language for User Interface Adaptation
+ Savidis, A.
+ Stephanidis, C.
Methods and Tools for the Development of Unified Web-Based User Interfaces
+ Doulgeraki, C.
+ Partarakis, N.
+ Mourouzis, A.
+ Stephanidis, C.
User Modeling: A Universal Access Perspective
+ Adams, R.
Model-Based Tools: A User-Centered Design for All Approach
+ Stary, C.
Markup Languages in Human-Computer Interaction
+ Paternò, F.
+ Santoro, C.
Abstract Interaction Objects in User Interface Programming Languages
+ Savidis, A.
== Interaction Techniques and Devices ==
Screen Readers
+ Asakawa, C.
+ Leporini, B.
Virtual Mouse and Keyboards for Text Entry
+ Evreinov, G.
Speech Input to Support Universal Access
+ Feng, J.
+ Sears, A.
Natural Language and Dialogue Interfaces
+ Jokinen, K.
Auditory Interfaces and Sonification
+ Nees, M. A.
+ Walker, B. N.
Haptic Interaction
+ Jansson, G.
+ Raisamo, R.
Vision-Based Hand Gesture Recognition for Human-Computer Interaction
+ Zabulis, X.
+ Baltzakis, H.
+ Argyros, A.
Automatic Hierarchical Scanning for Windows Applications
+ Ntoa, S.
+ Savidis, A.
+ Stephanidis, C.
Eye Tracking
+ Majaranta, P.
+ Bates, R.
+ Donegan, M.
Brain-Body Interfaces
+ Gnanayutham, P.
+ George, J.
Sign Language in the Interface: Access for Deaf Signers
+ Huenerfauth, M.
+ Hanson, V. L.
Visible Language for Global Mobile Communication: A Case Study of a Design Project in Progress
+ Marcus, A.
Contributions of "Ambient" Multimodality to Universal Access
+ Carbonell, N.
== Application Domains ==
Vocal Interfaces in Supporting and Enhancing Accessibility in Digital Libraries
+ Catarci, T.
+ Kimani, S.
+ Dubinsky, Y.
+ Gabrielli, S.
Theories and Methods for Studying Online Communities for People with Disabilities and Older People
+ Pfeil, U.
+ Zaphiris, P.
Computer-Supported Cooperative Work
+ Gross, T.
+ Fetter, M.
Developing Inclusive e-Training
+ Savidis, A.
+ Stephanidis, C.
Training through Entertainment for Learning Difficulties
+ Savidis, A.
+ Grammenos, D.
+ Stephanidis, C.
Universal Access to Multimedia Documents
+ Petrie, H.
+ Weber, G.
+ Völkel, T.
Interpersonal Communication
+ Waller, A.
Universal Access in Public Terminals: Information Kiosks and ATMs
+ Kouroupetroglou, G.
Intelligent Mobility and Transportation for All
+ Bekiaris, E.
+ Panou, M.
+ Gaitanidou, E.
+ Mourouzis, A.
+ Ringbauer, B.
Electronic Educational Books for Blind Students
+ Grammenos, D.
+ Savidis, A.
+ Georgalis, Y.
+ Bourdenas, T.
+ Stephanidis, C.
Mathematics and Accessibility: A Survey
+ Pontelli, E.
+ Karshmer, A. I.
+ Gupta, G.
Cybertherapy, Cyberpsychology, and the Use of Virtual Reality in Mental Health
+ Renaud, P.
+ Bouchard, S.
+ Chartier, S.
+ Bonin, M.-P.
== Nontechnological Issues ==
Policy and Legislation as a Framework of Accessibility
+ Kemppainen, E.
+ Kemp, J. D.
+ Yamada, H.
Standards and Guidelines
+ Vanderheiden, G. C.
eAccessibility Standardization
+ Engelen, J.
Management of Design for All
+ Bühler, C.
Security and Privacy for Universal Access
+ Maybury, M. T.
Best Practice in Design for All
+ Miesenberger, K.
== Looking to the Future ==
Implicit Interaction
+ Ferscha, A.
Ambient Intelligence
+ Streitz, N. A.
+ Privat, G.
Emerging Challenges
+ Stephanidis, C.
[21]
Accessibility commons: a metadata infrastructure for web accessibility
Web accessibility
/
Kawanaka, Shinya
/
Borodin, Yevgen
/
Bigham, Jeffrey P.
/
Lunn, Darren
/
Takagi, Hironobu
/
Asakawa, Chieko
Tenth Annual ACM SIGACCESS Conference on Assistive Technologies
2008-10-13
p.153-160
© Copyright 2008 ACM
Summary: Research projects, assistive technology, and individuals all create metadata
in order to improve Web accessibility for visually impaired users. However,
since these projects are disconnected from one another, this metadata is
isolated in separate tools, stored in disparate repositories, and represented
in incompatible formats. Web accessibility could be greatly improved if these
individual contributions were merged. An integration method will serve as the
bridge between future academic research projects and end users, enabling new
technologies to reach end users more quickly. Therefore we introduce
Accessibility Commons, a common infrastructure to integrate, store, and share
metadata designed to improve Web accessibility. We explore existing tools to
show how the metadata that they produce could be integrated into this common
infrastructure, we present the design decisions made in order to help ensure
that our common repository will remain relevant in the future as new metadata
is developed, and we discuss how the common infrastructure component
facilitates our broader social approach to improving accessibility.
[22]
Social accessibility: achieving accessibility through collaborative metadata
authoring
Collaborative accessibility
/
Takagi, Hironobu
/
Kawanaka, Shinya
/
Kobayashi, Masatomo
/
Itoh, Takashi
/
Asakawa, Chieko
Tenth Annual ACM SIGACCESS Conference on Assistive Technologies
2008-10-13
p.193-200
© Copyright 2008 ACM
Summary: Web content is under the control of site owners, and therefore the site
owners have the responsibility to make their content accessible. This is a
basic assumption of Web accessibility. Users who want access to inaccessible
content must ask the site owners for help. However, the process is slow and too
often the need is mooted before the content becomes accessible. Social
Accessibility is an approach to drastically reduce the burden on site owners
and to shorten the time to provide accessible Web content by allowing
volunteers worldwide to "renovate" any webpage on the Internet. Users
encountering Web access problems anywhere at any time will be able to
immediately report the problems to a social computing service. Volunteers can
be quickly notified, and they can easily respond by creating and publishing the
requested accessibility metadata -- also helping any other users who encounter
the same problems. Site owners can learn about the methods for future
accessibility renovations based on the volunteers' external metadata. There are
two key technologies to enable this process, the external metadata that allows
volunteers to annotate existing Web content, and the social computing service
that supports the collaborative renovations. In this paper, we will first
review previous approaches, and then propose the Social Accessibility approach.
The scenario, implementation, and results of a pilot service are introduced,
followed by discussion of future directions.
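The abstract's central mechanism is external metadata: volunteer-authored annotations stored apart from the page and merged in at browse time. A rough sketch of that idea follows; the record fields and the matching rule are illustrative assumptions, not the actual Social Accessibility schema.

```python
# Minimal sketch of external accessibility metadata: annotations that
# volunteers author separately from a page and that a client merges in
# at browse time. The record schema here is an illustrative assumption.

import re

# Each record targets an element on a specific page and supplies the
# missing accessibility information (here: alt text for images).
metadata = [
    {"page": "http://example.com/", "target_src": "logo.png", "alt": "Example Inc. logo"},
    {"page": "http://example.com/", "target_src": "cart.png", "alt": "Shopping cart"},
]

def apply_metadata(page_url, html):
    """Insert volunteer-supplied alt text into <img> tags that lack it."""
    for record in metadata:
        if record["page"] != page_url:
            continue
        pattern = r'<img([^>]*src="%s"[^>]*)>' % re.escape(record["target_src"])
        def add_alt(m):
            attrs = m.group(1)
            if "alt=" in attrs:
                return m.group(0)  # author-supplied alt text always wins
            return '<img%s alt="%s">' % (attrs, record["alt"])
        html = re.sub(pattern, add_alt, html)
    return html

page = '<img src="logo.png"><img src="cart.png" alt="Cart">'
print(apply_metadata("http://example.com/", page))
```

The key design point the paper argues for survives even in this toy form: the page itself is never modified, so volunteers can repair content they do not own, and site owners can later fold the external records back into their markup.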
[23]
EDITED BOOK
Web Accessibility: A Foundation for Research
Human-Computer Interaction Series
/
Harper, Simon
/
Yesilada, Yeliz
2008
n.21
p.355
Springer London
DOI: 10.1007/978-1-84800-050-6
== Understanding Disabilities ==
Visual Impairments (3-13)
+ Barreto, A.
Cognitive and Learning Impairments (15-23)
+ Lewis, Clayton
Hearing Impairments (25-35)
+ Cavender, Anna
+ Ladner, Richard E.
Physical Impairment (37-46)
+ Trewin, Shari
Ageing (47-58)
+ Kurniawan, Sri H.
== Evaluation and Methodologies ==
Web Accessibility and Guidelines (61-78)
+ Harper, Simon
+ Yesilada, Yeliz
Web Accessibility Evaluation (79-106)
+ Abou-Zahra, Shadi
End User Evaluations (107-126)
+ Jay, Caroline
+ Lunn, Darren
+ Michailidou, Eleni
Authoring Tools (127-138)
+ Treviranus, Jutta
== Applications ==
Assistive Technologies (142-162)
+ Edwards, Alistair D. N.
Desktop Browsers (163-193)
+ Gunderson, Jon
Specialized Browsers (195-213)
+ Raman, T. V.
Browser Augmentation (215-229)
+ Hanson, Vicki L.
+ Richards, John T.
+ Swart, Cal
Transcoding (231-260)
+ Asakawa, Chieko
+ Takagi, Hironobu
== Specialised Areas ==
Education (263-271)
+ Salomoni, Paola
+ Mirri, Silvia
+ Ferretti, Stefano
+ Roccetti, Marco
Specialized Documents (274-285)
+ Munson, Ethan V.
+ Pimentel, Maria Graça da
Multimedia and Graphics (287-299)
+ Regan, Bob
+ Kirkpatrick, Andrew
Mobile Web and Accessibility (302-313)
+ Hori, Masahiro
+ Kato, Takashi
Semantic Web (315-330)
+ Horrocks, Ian
+ Bechhofer, Sean
Web 2.0 (331-343)
+ Gibson, Becky
Universal Usability (346-355)
+ Horton, Sarah
+ Leventhal, Laura
[24]
Automatic accessibility transcoding for flash content
Web accessibility
/
Sato, Daisuke
/
Miyashita, Hisashi
/
Takagi, Hironobu
/
Asakawa, Chieko
Ninth Annual ACM SIGACCESS Conference on Assistive Technologies
2007-10-15
p.35-42
© Copyright 2007 ACM
Summary: It is not surprising that rich Internet content, such as Flash and
DHTML, is among the most pervasive on the Web because of its visual
attractiveness to the sighted majority. Such visually rich content has been causing severe
accessibility problems, especially for people with visual disabilities. For
Flash content, the kinds of accessibility information necessary for screen
readers are not usually provided in the existing content. A typical example of
such missing data is the lack of alternative text for buttons, hypertext links,
widget roles, and so on. One of the major reasons is that the current
accessibility framework of Flash content imposes a burden on content authors to
make their content accessible. As a result, adding support for accessibility
tends to be neglected, and screen reader users are left out of the richer
Internet experiences.
Therefore, we decided to develop an automatic accessibility transcoding
system for Flash content to allow users to access a wider range of existing
content, and to reduce the workload for content authors by using an automatic
repair algorithm. It works as a client-side transcoding system based on the
internal object model inside the Flash content. It adds and repairs
accessibility information for existing Flash content, so screen readers can
present more accessible information to users. Our experiment using the pilot
system showed that 55% of the missing alternative texts for buttons in the
tested websites could be added automatically.
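The abstract does not detail the repair algorithm, but one plausible flavor of such a heuristic, deriving a label for an unlabeled button from its link target or instance name in the content's internal object model, can be sketched as follows. The field names and inference rules are assumptions for illustration, not the paper's actual algorithm.

```python
import re

def infer_alt_text(button):
    """Heuristically derive alternative text for a button that lacks one.

    `button` is a dict standing in for a node of the content's internal
    object model; the keys used here are illustrative assumptions.
    """
    if button.get("alt"):            # author-supplied text always wins
        return button["alt"]
    if button.get("label"):          # visible text label, if any
        return button["label"]
    url = button.get("link_url", "")
    if url:                          # fall back to the link target's last path segment
        segment = url.rstrip("/").rsplit("/", 1)[-1]
        segment = re.sub(r"\.\w+$", "", segment)   # drop any file extension
        if segment:
            return segment.replace("_", " ").replace("-", " ")
    name = button.get("instance_name", "")
    if name:                         # e.g. "btn_playMovie" -> "play Movie"
        name = re.sub(r"^(btn|button)[_ ]?", "", name)
        return re.sub(r"(?<=[a-z])(?=[A-Z])", " ", name) or None
    return None                      # nothing inferable; leave for a human

print(infer_alt_text({"link_url": "http://example.com/news/latest-headlines.html"}))
print(infer_alt_text({"instance_name": "btn_playMovie"}))
```

Heuristics of this kind are inherently partial, which is consistent with the reported result that only 55% of the missing alternative texts could be filled in automatically.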
[25]
Aibrowser for multimedia: introducing multimedia content accessibility for
visually impaired users
Non-visual presentation of information
/
Miyashita, Hisashi
/
Sato, Daisuke
/
Takagi, Hironobu
/
Asakawa, Chieko
Ninth Annual ACM SIGACCESS Conference on Assistive Technologies
2007-10-15
p.91-98
© Copyright 2007 ACM
Summary: Multimedia content with Rich Internet Applications using Dynamic HTML
(DHTML) and Adobe Flash is now becoming popular in various websites. However,
visually impaired users struggle with such content because its audio interferes
with the speech from screen readers and its structures are heavily optimized
for sighted users.
We have been developing an Accessibility Internet Browser for Multimedia
(aiBrowser) to address these problems. The browser has two novel features:
non-visual multimedia audio controls and alternative user interfaces using
external metadata. First, the aiBrowser lets users directly control the audio
of embedded media with fixed shortcut keys, so blind users can increase or
decrease the media volume, or pause and stop the media, to resolve conflicts
between the media audio and the speech
from the screen reader. Second, the aiBrowser can provide an alternative
simplified user interface suitable for screen readers by using external
metadata, which can even be applied to dynamic content such as DHTML and Flash.
In this paper, we discuss accessibility problems with multimedia content due
to streaming media and the dynamic changes in such content, and explain how the
aiBrowser addresses these problems by describing non-visual multimedia audio
controls and external metadata-based alternative user interfaces. The
evaluation of the aiBrowser was conducted by comparing it to JAWS, one of the
most popular screen readers, on three well-known multimedia-content-intensive
websites.
The evaluation showed that the aiBrowser made content that was inaccessible
with JAWS relatively accessible by using the multimedia audio controls and the
metadata-based alternative interfaces, which included alternative text, heading
information, and so on. The aiBrowser also drastically reduced the keystrokes
needed for navigation, suggesting improved non-visual usability.