[1]
WearWrite: Crowd-Assisted Writing from Smartwatches
Fat Fingers, Small Watches
Nebeling, Michael / To, Alexandra / Guo, Anhong / de Freitas, Adrian A. / Teevan, Jaime / Dow, Steven P. / Bigham, Jeffrey P.
Proceedings of the ACM CHI'16 Conference on Human Factors in Computing
Systems
2016-05-07
v.1
p.3834-3846
© Copyright 2016 ACM
Summary: The physical constraints of smartwatches limit the range and complexity of
tasks that can be completed. Despite interface improvements on smartwatches,
the promise of enabling productive work remains largely unrealized. This paper
presents WearWrite, a system that enables users to write documents from their
smartwatches by leveraging a crowd to help translate their ideas into text.
WearWrite users dictate tasks, respond to questions, and receive notifications
of major edits on their watch. Using a dynamic task queue, the crowd receives
tasks issued by the watch user and generic tasks from the system. In a
week-long study with seven smartwatch users supported by approximately 29 crowd
workers each, we validate that it is possible to manage the crowd writing
process from a watch. Watch users captured new ideas as they came to mind and
managed a crowd during spare moments while going about their daily routine.
WearWrite represents a new approach to getting work done from wearables using
the crowd.
[2]
"With most of it being pictures now, I rarely use it": Understanding
Twitter's Evolving Accessibility to Blind Users
Social Media and Health
Morris, Meredith Ringel / Zolyomi, Annuska / Yao, Catherine / Bahram, Sina / Bigham, Jeffrey P. / Kane, Shaun K.
Proceedings of the ACM CHI'16 Conference on Human Factors in Computing
Systems
2016-05-07
v.1
p.5506-5516
© Copyright 2016 ACM
Summary: Social media is an increasingly important part of modern life. We
investigate the use and usability of Twitter by blind users, via a
combination of surveys of blind Twitter users, large-scale analysis of tweets
from and Twitter profiles of blind and sighted users, and analysis of tweets
containing embedded imagery. While Twitter has traditionally been thought of as
the most accessible social media platform for blind users, Twitter's increasing
integration of image content and users' diverse uses for images have presented
emergent accessibility challenges. Our findings illuminate the importance of
the ability to use social media for people who are blind, while also
highlighting the many challenges such media currently present to this user base,
including difficulty in creating profiles, in awareness of available features
and settings, in controlling revelations of one's disability status, and in
dealing with the increasing pervasiveness of image-based content. We propose
changes that Twitter and other social platforms should make to promote fuller
access for users with visual impairments.
[3]
An Uninteresting Tour Through Why Our Research Papers Aren't Accessible
alt.chi: Authorship and Reviews
Bigham, Jeffrey P. / Brady, Erin L. / Gleason, Cole / Guo, Anhong / Shamma, David A.
Extended Abstracts of the ACM CHI'16 Conference on Human Factors in
Computing Systems
2016-05-07
v.2
p.621-631
© Copyright 2016 ACM
Summary: Our research is delivered as Portable Document Format (PDF) documents, and
very few include basic metadata to make them accessible to people with
disabilities. As a result, many people are unable to read them
efficiently, or at all. Over the past few years, we have tried everything from
writing guidelines and giving accessibility feedback, to enforcing
accessibility standards and volunteering to make PDFs accessible ourselves. The
problem with making PDFs accessible is in part due to the lack of good tools,
but the complexity of the PDF format makes improving tools difficult. Making
accessible research papers is as much about our choices as a community: our
choice of publication format, and our choice to make accessibility a voluntary
task for authors. In this paper, we overview the context in which PDFs became
our publication format, the difficulty in making PDF documents accessible given
current tools, what we have tried to make our PDFs more accessible, and
potential options for doing better in the future.
[4]
InstructableCrowd: Creating IF-THEN Rules via Conversations with the Crowd
Late-Breaking Works: Engineering of Interactive Systems
Huang, Ting-Hao Kenneth / Azaria, Amos / Bigham, Jeffrey P.
Extended Abstracts of the ACM CHI'16 Conference on Human Factors in
Computing Systems
2016-05-07
v.2
p.1555-1562
© Copyright 2016 ACM
Summary: In this paper, we introduce InstructableCrowd, a system that allows
end-users to instruct the crowd to create trigger-action ("if, then") rules
based on their needs. We create a framework which enables users to converse
with the crowd using their phone and describe a problem which they might have.
We create an interface for a crowd worker to both chat with the user and
compose a rule with an "IF" part connected to the user's phone sensors (e.g.
incoming emails, GPS location, meeting calendar, weather information etc.), and
a "THEN" part connected to the user's phone effectors (e.g. sending an email,
creating an alarm, posting a tweet, etc.). The system then sends the rules
created by the crowd to the user's phone to help the user solve their
problem.
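The trigger-action structure described above could be sketched as a small data type; the field names and matching logic below are illustrative assumptions, not the paper's actual schema.

```python
# A minimal sketch of one way an InstructableCrowd-style trigger-action
# rule might be represented. Field names ("sensor", "effector", etc.) are
# hypothetical; the paper does not specify this schema.
from dataclasses import dataclass, field


@dataclass
class Condition:
    sensor: str   # e.g. "calendar", "gps_location", "weather"
    value: str    # reading that must match for the condition to hold


@dataclass
class Action:
    effector: str           # e.g. "send_email", "create_alarm", "post_tweet"
    params: dict = field(default_factory=dict)


@dataclass
class Rule:
    conditions: list  # the "IF" part: all conditions must hold
    actions: list     # the "THEN" part: actions to run when they do

    def fire(self, sensor_readings: dict) -> list:
        """Return the actions to run if every condition matches."""
        for c in self.conditions:
            if sensor_readings.get(c.sensor) != c.value:
                return []
        return self.actions


# Example: "IF my calendar says 'meeting' THEN create an alarm."
rule = Rule(
    conditions=[Condition("calendar", "meeting")],
    actions=[Action("create_alarm", {"minutes_before": 15})],
)
print([a.effector for a in rule.fire({"calendar": "meeting"})])  # ['create_alarm']
```

In the system described, a crowd worker would author such a rule through the chat interface, and the user's phone would evaluate it against its sensors.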
[5]
Productivity Decomposed: Getting Big Things Done with Little Microtasks
Workshop Summaries
Teevan, Jaime / Iqbal, Shamsi T. / Cai, Carrie J. / Bigham, Jeffrey P. / Bernstein, Michael S. / Gerber, Elizabeth M.
Extended Abstracts of the ACM CHI'16 Conference on Human Factors in
Computing Systems
2016-05-07
v.2
p.3500-3507
© Copyright 2016 ACM
Summary: It is difficult to accomplish meaningful goals with limited time and
attentional resources. However, recent research has shown that concrete plans
with actionable steps allow people to complete tasks better and faster. With
advances in techniques that can decompose larger tasks into smaller units, we
envision that a transformation from larger tasks to smaller microtasks will
impact when and how people perform complex information work, enabling efficient
and easy completion of tasks that currently seem challenging. In this workshop,
we bring together researchers in task decomposition, completion, and sourcing.
We will pursue a broad understanding of the challenges in creating, allocating,
and scheduling microtasks, as well as how accomplishing these microtasks can
contribute towards productivity. The goal is to discuss how intersections of
research across these areas can pave the path for future research in this
space.
[6]
Coding Varied Behavior Types Using the Crowd
Demos
Yim, Jinyeong / Jasani, Jeel / Henderson, Aubrey / Koutra, Danai / Dow, Steven / Leung, Winnie / Lim, Ellen / Gordon, Mitchell / Bigham, Jeffrey / Lasecki, Walter
Companion Proceedings of ACM CSCW 2016 Conference on Computer-Supported
Cooperative Work and Social Computing
2016-02-27
v.2
p.114-117
© Copyright 2016 ACM
Summary: Social science researchers spend significant time annotating behavioral
events in video data in order to quantitatively assess interactions [2]. These
behavioral events may be instantaneous changes, continuous actions that span
unbounded periods of time, or behaviors that would be best described by
severity or other scalar ratings. The complexity of these judgments, coupled
with the time and effort required to meticulously assess video, results in a
training and evaluation process that can take days or weeks. Computational
analysis of video data is still limited due to the challenges introduced by
subjective interpretation and varied contexts. Glance [4] introduced a means of
leveraging human intelligence by recruiting crowds of paid online workers to
accurately analyze hours of video data in a matter of minutes. This approach
has been shown to expedite work in human-centered fields, as well as generate
training data for automated recognition systems. In this paper, we describe an
interactive demonstration of an improved, more expressive version of Glance
that expands the initial set of supported annotation formats (e.g. time range,
classification, etc.) from one to nine. Worker interfaces for each of these
options are dynamically generated, along with tutorials, based on the analyst's
question. These new features allow analysts to acquire more specific
information about events in video datasets.
[7]
A Spellchecker for Dyslexia
Reading and Language
Rello, Luz / Ballesteros, Miguel / Bigham, Jeffrey P.
Seventeenth International ACM SIGACCESS Conference on Computers and
Accessibility
2015-10-26
p.39-47
© Copyright 2015 ACM
Summary: Poor spelling is a challenge faced by people with dyslexia throughout their
lives. Spellcheckers are therefore a crucial tool for people with dyslexia, but
current spellcheckers do not detect real-word errors, a common type of error
made by people with dyslexia. Real-word errors are spelling mistakes
that result in an unintended but real word, for instance, form instead of from.
Nearly 20% of the errors that people with dyslexia make are real-word errors.
In this paper, we introduce a system called Real Check that uses a
probabilistic language model, a statistical dependency parser and Google
n-grams to detect real-word errors. We evaluated Real Check on text written by
people with dyslexia, and showed that it detects more of these errors than
widely used spellcheckers. In an experiment with 34 people (17 with dyslexia),
people with dyslexia corrected sentences more accurately and in less time with
Real Check.
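The real-word error idea can be illustrated with a toy sketch: score a sentence with a tiny bigram model and flag a word whose confusable alternative ("from" for "form") makes the sentence far more probable. Real Check combines a probabilistic language model, a dependency parser, and Google n-grams; the hand-built counts, threshold, and confusable table below are stand-ins to show only the core idea.

```python
# Toy real-word error detection via bigram scores. All data here is
# illustrative; a real system would use large n-gram counts and parsing.
BIGRAM_COUNTS = {
    ("far", "from"): 50, ("from", "home"): 40,
    ("far", "form"): 1,  ("form", "home"): 1,
}
CONFUSABLES = {"form": "from", "from": "form"}


def score(words):
    # Product of (smoothed) bigram counts as a crude sentence score.
    s = 1.0
    for a, b in zip(words, words[1:]):
        s *= BIGRAM_COUNTS.get((a, b), 0.1)
    return s


def flag_real_word_errors(words):
    flagged = []
    for i, w in enumerate(words):
        if w in CONFUSABLES:
            alt = words[:i] + [CONFUSABLES[w]] + words[i + 1:]
            # Flag only if the alternative reading is much more likely.
            if score(alt) > 10 * score(words):
                flagged.append((w, CONFUSABLES[w]))
    return flagged


print(flag_real_word_errors(["far", "form", "home"]))  # flags ('form', 'from')
```

The same check leaves "far from home" untouched, since no substitution improves its score; real-word errors are hard precisely because every word is individually valid.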
[8]
Dytective: Toward a Game to Detect Dyslexia
Poster Session 1
Rello, Luz / Ali, Abdullah / Bigham, Jeffrey P.
Seventeenth International ACM SIGACCESS Conference on Computers and
Accessibility
2015-10-26
p.307-308
© Copyright 2015 ACM
Summary: Detecting dyslexia is crucial so that people who have dyslexia can receive
training to avoid associated high rates of academic failure. In this paper we
present Dytective, a game designed to detect dyslexia. The results of a
within-subjects experiment with 40 children (20 with dyslexia) show significant
differences in how the two groups played Dytective. These differences suggest that
Dytective could be used to help identify those likely to have dyslexia.
[9]
CAN: composable accessibility infrastructure via data-driven crowdsourcing
Human computation
Huang, Yun / Dobreski, Brian / Deo, Bijay Bhaskar / Xin, Jiahang / Barbosa, Natã Miccael / Wang, Yang / Bigham, Jeffrey P.
Proceedings of the 2015 International Cross-Disciplinary Conference on Web
Accessibility (W4A)
2015-05-18
p.2
© Copyright 2015 ACM
Summary: Despite persistent effort, many web pages are still not accessible to
everyone. Fixing web accessibility problems can be complicated. Developers need
to have extensive knowledge not only of possible accessibility problems but
also of approaches for fixing them. This paper is about using the large number
of accessibility issues on real websites and crowd-sourced fixes for them as a
unique source of learning materials for web developers to learn how to build
accessible components in a cost-efficient manner. In this paper, we present the
design, development and study of CAN (Composable Accessibility Infrastructure),
a crowdsourcing infrastructure that collects web accessibility issues and their
fixes, dynamically composes solutions on-the-fly, and delivers the
crowd-sourced content as teaching materials. Our unique CAN user interaction
and system design enables end users with disabilities to both benefit from and
contribute to the system without additional effort in their daily web browsing,
and allows web developers to experience real accessibility issues and initiate
a learning process with first-hand materials. CAN also provides an opportunity
for data-driven discovery of the common implementation practices that cause
accessibility issues. We show how CAN addresses a set of accessibility issues
on the top 100 popular websites. We also present our user study results where
web developers who had varying knowledge of web accessibility all found our
system an effective and interesting platform for learning web accessibility.
[10]
Measuring text simplification with the crowd
Human computation
Lasecki, Walter S. / Rello, Luz / Bigham, Jeffrey P.
Proceedings of the 2015 International Cross-Disciplinary Conference on Web
Accessibility (W4A)
2015-05-18
p.4
© Copyright 2015 ACM
Summary: Text can often be complex and difficult to read, especially for people with
cognitive impairments or low literacy skills. Text simplification is a process
that reduces the complexity of both wording and structure in a sentence, while
retaining its meaning. However, this is currently a challenging task for
machines, and thus, providing effective on-demand text simplification to those
who need it remains an unsolved problem. Even evaluating the simplicity of text
remains a challenging problem for both computers, which cannot understand the
meaning of text, and humans, who often struggle to agree on what constitutes a
good simplification.
This paper focuses on the evaluation of English text simplification using
the crowd. We show that leveraging crowds can result in a collective decision
that is accurate and converges to a consensus rating. Our results from 2,500
crowd annotations show that the crowd can effectively rate levels of
simplicity. This may allow simplification systems and system builders to get
better feedback about how well content is being simplified, as compared to
standard measures which classify content into 'simplified' or 'not simplified'
categories. Our study provides evidence that the crowd could be used to
evaluate English text simplification, as well as to create simplified text in
future work.
[11]
A plug-in to aid online reading in Spanish
Learning and language
Rello, Luz / Carlini, Roberto / Baeza-Yates, Ricardo / Bigham, Jeffrey P.
Proceedings of the 2015 International Cross-Disciplinary Conference on Web
Accessibility (W4A)
2015-05-18
p.7
© Copyright 2015 ACM
Summary: Reading text on the Web is a challenging task for many people, such as those
with cognitive impairments, reading difficulties or people who are learning a
new language. In this paper we present a web browser plug-in to help with
reading Spanish text on the Web. The plug-in is freely available for Chrome and
presents definitions and simpler synonyms on demand for the selected web text.
The tool was modified following the suggestions of 5 people (2 with diagnosed
dyslexia) who tested it using the think-aloud protocol and undertook a
subsequent interview.
[12]
Enhancing Android accessibility for users with hand tremor by reducing fine
pointing and steady tapping
Wearables, tactiles and mobiles
Zhong, Yu / Weber, Astrid / Burkhardt, Casey / Weaver, Phil / Bigham, Jeffrey P.
Proceedings of the 2015 International Cross-Disciplinary Conference on Web
Accessibility (W4A)
2015-05-18
p.29
© Copyright 2015 ACM
Summary: Smartphones and tablets with touchscreens have demonstrated potential to
support the needs of individuals with motor impairments such as hand tremor.
However, those users still face major challenges with conventional touchscreen
gestures. These challenges are mostly caused by the fine precision requirement
to disambiguate between targets on small screens. To reduce the difficulty
caused by hand tremor in combination with small touch targets on the screen, we
developed an experimental system-wide assistive service called Touch Guard. It
enables enhanced area touch and a series of complementary features. This
service provides the enhanced area touch feature through two possible
disambiguation modes: magnification and descriptive targets list. In a
laboratory study with motor-impaired users, we compared both modes to
conventional tapping and tested Touch Guard with real-world applications.
Target-list-based disambiguation was more successful, reducing the error rate
by 65% compared to conventional tapping. In addition, several challenges and
design implications were discovered when presenting new touchscreen interaction
techniques to users with motor impairments. As the experimental product of an
intern research project at Google, Touch Guard demonstrates broad potential for
solving accessibility issues for people with hand tremor using their familiar
mobile devices, instead of high-cost hardware.
[13]
Creating accessible PDFs for conference proceedings
Standards and best practices
Brady, Erin / Zhong, Yu / Bigham, Jeffrey P.
Proceedings of the 2015 International Cross-Disciplinary Conference on Web
Accessibility (W4A)
2015-05-18
p.34
© Copyright 2015 ACM
Summary: A responsibility we have as researchers is to disseminate the results of our
research widely. A primary way we do this is through research publications.
When these publications are not accessible to everyone, some readers will be
excluded and the impact of our research limited. In this paper, we explore this
problem in two ways. First, we report on the accessibility of 1,811 papers in
the technical program of several top conferences related to accessibility and
human-computer interaction. Second, we reflect on our experience making papers
accessible for any CHI 2015 author who requested it. We offer thoughts on
research challenges and future work that may make our community's research more
accessible.
[14]
Gauging Receptiveness to Social Microvolunteering
Motivation & Participation
Brady, Erin / Morris, Meredith Ringel / Bigham, Jeffrey P.
Proceedings of the ACM CHI'15 Conference on Human Factors in Computing
Systems
2015-04-18
v.1
p.1055-1064
© Copyright 2015 ACM
Summary: Crowd-powered systems that help people are difficult to scale and sustain
because human labor is expensive and worker pools are difficult to grow. To
address this problem we introduce the idea of social microvolunteering, a type
of intermediated friendsourcing in which a person can provide access to their
friends as potential workers for microtasks supporting causes that they care
about. We explore this idea by creating Visual Answers, an exemplar social
microvolunteering application for Facebook that posts visual questions from
people who are blind. We present results of a survey of 350 participants on the
concept of social microvolunteering, and a deployment of the Visual Answers
application with 91 participants, which collected 618 high-quality answers to
questions asked over 12 days, illustrating the feasibility of the approach.
[15]
The Effects of Sequence and Delay on Crowd Work
Evaluating Crowdsourcing
Lasecki, Walter S. / Rzeszotarski, Jeffrey M. / Marcus, Adam / Bigham, Jeffrey P.
Proceedings of the ACM CHI'15 Conference on Human Factors in Computing
Systems
2015-04-18
v.1
p.1375-1378
© Copyright 2015 ACM
Summary: A common approach in crowdsourcing is to break large tasks into small
microtasks so that they can be parallelized across many crowd workers and so
that redundant work can be more easily compared for quality control. In
practice, this can result in the microtasks being presented out of their
natural order and often introduces delays between individual microtasks. In
this paper, we demonstrate in a study of 338 crowd workers that non-sequential
microtasks and the introduction of delays significantly decreases worker
performance. We show that interruptions where a large delay occurs between two
related tasks can cause up to a 102% slowdown in completion time, and
interruptions where workers are asked to perform different tasks in sequence
can slow down completion time by 57%. We conclude with a set of design
guidelines to improve both worker performance and realized pay, and
instructions for implementing these changes in existing interfaces for crowd
work.
[16]
Apparition: Crowdsourced User Interfaces that Come to Life as You Sketch
Them
Understanding Crowdwork in Many Domains
Lasecki, Walter S. / Kim, Juho / Rafter, Nick / Sen, Onkur / Bigham, Jeffrey P. / Bernstein, Michael S.
Proceedings of the ACM CHI'15 Conference on Human Factors in Computing
Systems
2015-04-18
v.1
p.1925-1934
© Copyright 2015 ACM
Summary: Prototyping allows designers to quickly iterate and gather feedback, but the
time it takes to create even a Wizard-of-Oz prototype reduces the utility of
the process. In this paper, we introduce crowdsourcing techniques and tools for
prototyping interactive systems in the time it takes to describe the idea. Our
Apparition system uses paid microtask crowds to make even hard-to-automate
functions work immediately, allowing more fluid prototyping of interfaces that
contain interactive elements and complex behaviors. As users sketch their
interface and describe it aloud in natural language, crowd workers and sketch
recognition algorithms translate the input into user interface elements, add
animations, and provide Wizard-of-Oz functionality. We discuss how design teams
can use our approach to reflect on prototypes or begin user studies within
seconds, and how, over time, Apparition prototypes can become fully-implemented
versions of the systems they simulate. Powering Apparition is the first
self-coordinated, real-time crowdsourcing infrastructure. We anchor this
infrastructure on a new, lightweight write-locking mechanism that workers can
use to signal their intentions to each other.
[17]
Zensors: Adaptive, Rapidly Deployable, Human-Intelligent Sensor Feeds
Understanding Crowdwork in Many Domains
Laput, Gierad / Lasecki, Walter S. / Wiese, Jason / Xiao, Robert / Bigham, Jeffrey P. / Harrison, Chris
Proceedings of the ACM CHI'15 Conference on Human Factors in Computing
Systems
2015-04-18
v.1
p.1935-1944
© Copyright 2015 ACM
Summary: The promise of "smart" homes, workplaces, schools, and other environments
has long been championed. Unattractive, however, has been the cost to run wires
and install sensors. More critically, raw sensor data tends not to align with
the types of questions humans wish to ask, e.g., do I need to restock my
pantry? Although techniques like computer vision can answer some of these
questions, doing so requires significant effort to build and train appropriate
classifiers. Even then, these systems are often brittle, with limited ability
to handle new or unexpected situations, including being repositioned and
environmental changes (e.g., lighting, furniture, seasons). We propose Zensors,
a new sensing approach that fuses real-time human intelligence from online
crowd workers with automatic approaches to provide robust, adaptive, and
readily deployable intelligent sensors. With Zensors, users can go from
question to live sensor feed in less than 60 seconds. Through our API, Zensors
can enable a variety of rich end-user applications and moves us closer to the
vision of responsive, intelligent environments.
[18]
Exploring Privacy and Accuracy Trade-Offs in Crowdsourced Behavioral Video
Coding
Understanding Crowdwork in Many Domains
Lasecki, Walter S. / Gordon, Mitchell / Leung, Winnie / Lim, Ellen / Bigham, Jeffrey P. / Dow, Steven P.
Proceedings of the ACM CHI'15 Conference on Human Factors in Computing
Systems
2015-04-18
v.1
p.1945-1954
© Copyright 2015 ACM
Summary: Coding behavioral video is an important method used by researchers to
understand social phenomena. Unfortunately, traditional hand-coding approaches
can take days or weeks of time to complete. Recent work has shown that these
tasks can be completed quickly by leveraging the parallelism of large online
crowds, but using the crowd introduces new concerns about accuracy,
reliability, privacy, and cost. To explore these issues, we conducted
interviews with 12 researchers who frequently code behavioral video, to
investigate common practices and challenges with video coding. We find accuracy
and privacy to be the researchers' primary concerns. To explore this more
concretely, we used sample videos to investigate whether crowds can accurately
recognize instances of commonly coded behaviors, and show that the crowd yields
accurate results. Then, we demonstrate a method for obfuscating participant
identity with a video blur filter, and find, as expected, that workers' ability
to identify participants decreases as blur level increases. The workers'
ability to accurately and reliably code behaviors also decreases, but not as
steeply as in the identity test. This trade-off between coding quality and privacy
protection suggests that researchers can use online crowds to code for some key
behaviors in video without compromising participant identity. We conclude with
a discussion of how researchers can balance privacy and accuracy on their own
data using a system we introduce called Incognito.
[19]
RegionSpeak: Quick Comprehensive Spatial Descriptions of Complex Images for
Blind Users
Accessibility at Home & on The Go
Zhong, Yu / Lasecki, Walter S. / Brady, Erin / Bigham, Jeffrey P.
Proceedings of the ACM CHI'15 Conference on Human Factors in Computing
Systems
2015-04-18
v.1
p.2353-2362
© Copyright 2015 ACM
Summary: Blind people often seek answers to their visual questions from remote
sources; however, the commonly adopted single-image, single-response model does
not always guarantee enough bandwidth between users and sources. This is
especially true when questions concern large sets of information, or spatial
layout, e.g., where is there to sit in this area, what tools are on this work
bench, or what do the buttons on this machine do? Our RegionSpeak system
addresses this problem by providing an accessible way for blind users to (i)
combine visual information across multiple photographs via image stitching, (ii)
(ii) quickly collect labels from the crowd for all relevant objects contained
within the resulting large visual area in parallel, and (iii) then
interactively explore the spatial layout of the objects that were labeled. The
regions and descriptions are displayed on an accessible touchscreen interface,
allowing blind users to interactively explore their spatial layout. We
demonstrate that workers from Amazon Mechanical Turk are able to quickly and
accurately identify relevant regions, and that asking them to describe only one
region at a time results in more comprehensive descriptions of complex images.
RegionSpeak can be used to explore the spatial layout of the regions
identified. It also demonstrates broad potential for helping blind users to
answer difficult spatial layout questions.
[20]
ApplianceReader: A Wearable, Crowdsourced, Vision-based System to Make
Appliances Accessible
WIP Theme: Ubicomp, Robots and Wearables
Guo, Anhong / Chen, Xiang 'Anthony' / Bigham, Jeffrey P.
Extended Abstracts of the ACM CHI'15 Conference on Human Factors in
Computing Systems
2015-04-18
v.2
p.2043-2048
© Copyright 2015 ACM
Summary: Visually impaired people can struggle to use everyday appliances with
inaccessible control panels. To address this problem, we present
ApplianceReader -- a system that combines a wearable point-of-view camera with
on-demand crowdsourcing and computer vision to make appliance interfaces
accessible. ApplianceReader sends photos of appliance interfaces that it has
not seen previously to the crowd, who work in parallel to quickly label and
describe elements of the interface. Computer vision techniques then track the
user's finger pointing at the controls and read out the labels previously
provided by the crowd. This enables visually impaired users to interactively
explore and use appliances without repeatedly asking the crowd.
ApplianceReader broadly demonstrates the potential of hybrid approaches that
combine human and machine intelligence to effectively realize intelligent,
interactive access technology today.
[21]
Accessible Crowdwork?: Understanding the Value in and Challenge of Microtask
Employment for People with Disabilities
Collaborating Under Constraints
Zyskowski, Kathryn / Morris, Meredith Ringel / Bigham, Jeffrey P. / Gray, Mary L. / Kane, Shaun K.
Proceedings of ACM CSCW 2015 Conference on Computer-Supported Cooperative
Work and Social Computing
2015-02-28
v.1
p.1682-1693
© Copyright 2015 ACM
Summary: We present the first formal study of crowdworkers who have disabilities via
in-depth open-ended interviews of 17 people (disabled crowdworkers and job
coaches for people with disabilities) and a survey of 631 adults with
disabilities. Our findings establish that people with a variety of disabilities
currently participate in the crowd labor marketplace, despite challenges such
as crowdsourcing workflow designs that inadvertently prohibit participation by,
and may negatively affect the worker reputations of, people with disabilities.
Despite such challenges, we find that crowdwork potentially offers different
opportunities for people with disabilities relative to the normative office
environment, such as job flexibility and lack of a need to rely on public
transit. We close by identifying several ways in which crowd labor platform
operators and/or individual task requestors could improve the accessibility of
this increasingly important form of employment.
[22]
How companies engage customers around accessibility on social media
Practices and tools
Brady, Erin / Bigham, Jeffrey P.
Sixteenth International ACM SIGACCESS Conference on Computers and
Accessibility
2014-10-20
p.51-58
© Copyright 2014 ACM
Summary: Social media offers a targeted way for mainstream technology companies to
communicate with people with disabilities about the accessibility problems that
they face. While companies have started to engage with users on social media
about accessibility, they differ greatly in terms of their approach and how
well they support the ways in which their users want to engage. In this paper,
we describe current use patterns of six corporate accessibility teams and their
users on Twitter, and present an analysis of these interactions. We find that
while many users want to interact directly with companies about accessibility,
companies prefer to redirect them to other channels and use Twitter for
broadcast messages promoting their accessibility work instead. Our analysis
demonstrates that users want to use social media to become part of the process
of improving accessibility of mainstream technology, and suggests the extent to
which a company is able to leverage this input depends greatly on how they
choose to present themselves and interact on social media.
[23]
Increasing the bandwidth of crowdsourced visual question answering to better
support blind users
Poster abstracts
Lasecki, Walter S. / Zhong, Yu / Bigham, Jeffrey P.
Sixteenth International ACM SIGACCESS Conference on Computers and
Accessibility
2014-10-20
p.263-264
© Copyright 2014 ACM
Summary: Many of the visual questions that blind people ask cannot be easily answered
with a single image or a short response, especially when questions are of an
exploratory nature, e.g. what is in this area, or what tools are available on
this work bench? We introduce RegionSpeak to allow blind users to capture large
areas of visual information, identify all of the objects within them, and
explore their spatial layout with fewer interactions. RegionSpeak helps blind
users capture all of the relevant visual information using an interface
designed to support stitching multiple images together. We use a parallel
crowdsourcing workflow that asks workers to define and describe regions of
interest, allowing even complex images to be described quickly. The regions and
descriptions are displayed on an auditory touchscreen interface, allowing users
to know what is in a scene and how it is laid out.
[24]
Legion scribe: real-time captioning by non-experts
Demonstration abstracts
Lasecki, Walter S. / Kushalnagar, Raja / Bigham, Jeffrey P.
Sixteenth International ACM SIGACCESS Conference on Computers and
Accessibility
2014-10-20
p.303-304
© Copyright 2014 ACM
Summary: The promise of affordable, automatic approaches to real-time captioning
imagines a future in which deaf and hard of hearing (DHH) users have immediate
access to speech in the world around them my simply picking up their phone or
other mobile device. While the challenges of processing highly variable natural
language has prevented automated approaches from completing this task reliably
enough for use in settings such as classrooms or workplaces [4], recent work in
crowd-powered approaches have allowed groups of non-expert captionists to
provide a similarly-flexible source of captions for DHH users. This is in
contrast to current human-powered approaches, which use highly-trained
professional captionists who can type up to 250 words per minute (WPM), but
also can cost over $100/hr. In this paper, we describe a real-time demo of
Legion:Scribe (or just "Scribe"), a crowd-powered captioning system that allows
untrained participants and volunteers to provide reliable captions with less
than 5 seconds of latency by computationally merging their input into a single
collective answer that is more accurate and more complete than any one worker
could have generated alone.
[25]
Making the web easier to see with opportunistic accessibility improvement
Building and using webpages
Bigham, Jeffrey P.
Proceedings of the 2014 ACM Symposium on User Interface Software and
Technology
2014-10-05
v.1
p.117-122
© Copyright 2014 ACM
Summary: Many people would find the Web easier to use if content were a little bigger,
even those who already find the Web possible to use now. This paper introduces
the idea of opportunistic accessibility improvement in which improvements
intended to make a web page easier to access, such as magnification, are
automatically applied to the extent that they can be without causing negative
side effects. We explore this idea with oppaccess.js, an easily-deployed system
for magnifying web pages that iteratively increases magnification until it
notices negative side effects, such as horizontal scrolling or overlapping
text. We validate this approach by magnifying existing web pages 1.6x on
average without introducing negative side effects. We believe this concept
applies generally across a wide range of accessibility improvements designed to
help people with diverse abilities.
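The iterative improvement loop described in the abstract can be sketched as follows. The function and parameter names are illustrative, and the side-effect check is a stub; in the real oppaccess.js system it would inspect the rendered page for horizontal scrolling or overlapping text.

```python
# Schematic of the opportunistic-improvement loop: keep increasing
# magnification until a check detects a negative side effect, then keep
# the last safe level. All names and values here are illustrative.
def max_safe_zoom(has_side_effects, start=1.0, step=0.1, limit=3.0):
    zoom = start
    # Try the next magnification level; accept it only if no side effect
    # (e.g. horizontal scrolling, overlapping text) would be introduced.
    while zoom + step <= limit and not has_side_effects(zoom + step):
        zoom = round(zoom + step, 2)
    return zoom


# Hypothetical page that starts horizontally scrolling above 1.6x:
print(max_safe_zoom(lambda z: z > 1.6))  # -> 1.6
```

The same greedy loop generalizes to other accessibility improvements: apply a transformation incrementally and back off at the first observable regression.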