[1]
RetroFab: A Design Tool for Retrofitting Physical Interfaces using
Actuators, Sensors and 3D Printing
Collaborative Fabrication: Making Much of Machines
/
Ramakers, Raf
/
Anderson, Fraser
/
Grossman, Tovi
/
Fitzmaurice, George
Proceedings of the ACM CHI'16 Conference on Human Factors in Computing
Systems
2016-05-07
v.1
p.409-419
© Copyright 2016 ACM
Summary: We present RetroFab, an end-to-end design and fabrication environment that
allows non-experts to retrofit physical interfaces, supporting changes to the
layout and behavior of those interfaces. Unlike software interfaces, physical
interfaces are often challenging to adapt because of their rigidity. With
RetroFab, a new physical interface is designed that serves as a proxy for the
legacy controls, which are then operated by actuators. RetroFab makes this
concept of retrofitting devices available to
non-experts by automatically generating an enclosure structure from an
annotated 3D scan. This enclosure structure holds together actuators, sensors
as well as components for the redesigned interface. To allow retrofitting a
wide variety of legacy devices, the RetroFab design tool comes with a toolkit
of 12 components. We demonstrate the versatility and novel opportunities of our
approach by retrofitting five domestic objects and exploring their use cases.
Preliminary user feedback reports on the experience of retrofitting devices
with RetroFab.
[2]
ChronoFab: Fabricating Motion
Prototyping for Fabrication, 3D Designing, Modelling & Printing
/
Kazi, Rubaiat Habib
/
Grossman, Tovi
/
Mogk, Cory
/
Schmidt, Ryan
/
Fitzmaurice, George
Proceedings of the ACM CHI'16 Conference on Human Factors in Computing
Systems
2016-05-07
v.1
p.908-918
© Copyright 2016 ACM
Summary: We present ChronoFab, a 3D modeling tool to craft motion sculptures,
tangible representations of 3D animated models, visualizing an object's motion
with static visuals depicting the transient, ephemeral traces it leaves
behind. Our tool casts 3D modeling as a dynamic art form by employing 3D
animation and dynamic
simulation for the modeling of motion sculptures. Our work is inspired by the
rich history of stylized motion depiction techniques in existing 3D motion
sculptures and 2D comic art. Based on a survey of such techniques, we present
an interface that enables users to rapidly explore and craft a variety of
static 3D motion depiction techniques, including motion lines, multiple
stroboscopic stamps, sweeps and particle systems, using a 3D animated object as
input. In a set of professional and non-professional usage sessions, ChronoFab
was found to be a superior tool for the authoring of motion sculptures,
compared to traditional 3D modeling workflows, reducing task completion times
by 79%.
[3]
Skuid: Sketching Dynamic Illustrations Using the Principles of 2D Animation
Expressive HCI
/
Kazi, Rubaiat Habib
/
Grossman, Tovi
/
Umetani, Nobuyuki
/
Fitzmaurice, George
Proceedings of the ACM CHI'16 Conference on Human Factors in Computing
Systems
2016-05-07
v.1
p.4599-4609
© Copyright 2016 ACM
Summary: We present Skuid, a sketching tool for crafting animated illustrations that
contain the exaggerated dynamics of stylized 2D animations. Skuid provides a
set of motion amplifiers which implement a set of established principles of 2D
animation. These amplifiers break down a complex animation effect into
independent, understandable chunks. Each amplifier imposes deformations to an
underlying grid, which in turn updates the corresponding strokes. Users can
combine these amplifiers at will when applying them to an existing animation,
promoting rapid experimentation. Skuid leverages the freeform nature of
sketching, allowing users to rapidly sketch, record motion, explore exaggerated
dynamics using the amplifiers, and fine-tune their animations. Practical
results confirm that users with no prior experience in animation can produce
expressive animated illustrations quickly and easily with Skuid.
[4]
Object-Oriented Drawing
Expressive HCI
/
Xia, Haijun
/
Araujo, Bruno
/
Grossman, Tovi
/
Wigdor, Daniel
Proceedings of the ACM CHI'16 Conference on Human Factors in Computing
Systems
2016-05-07
v.1
p.4610-4621
© Copyright 2016 ACM
Summary: We present Object-Oriented Drawing, which replaces most WIMP UI elements with
Attribute Objects. Attribute Objects embody the attributes of digital content
as UI objects that can be manipulated through direct touch gestures. In the
paper, the fundamental UI concepts are presented, including Attribute Objects,
which may be moved, cloned, linked, and freely associated with drawing objects.
Other functionalities, such as attribute-level blending and undo, are also
demonstrated. We developed a drawing application based on the presented
concepts with simultaneous touch and pen input. An expert assessment of our
application shows that direct physical manipulation of Attribute Objects
enables a user to quickly perform interactions which were previously tedious,
or even impossible, with a coherent and consistent interaction experience
throughout the entire interface.
[5]
Faster Command Selection on Touchscreen Watches
Interaction with Small Displays
/
Lafreniere, Benjamin
/
Gutwin, Carl
/
Cockburn, Andy
/
Grossman, Tovi
Proceedings of the ACM CHI'16 Conference on Human Factors in Computing
Systems
2016-05-07
v.1
p.4663-4674
© Copyright 2016 ACM
Summary: Small touchscreens worn on the wrist are becoming increasingly common, but
standard interaction techniques for these devices can be slow, requiring a
series of coarse swipes and taps to perform an action. To support faster
command selection on watches, we investigate two related interaction techniques
that exploit spatial memory. WristTap uses multitouch to allow selection in a
single action, and TwoTap uses a rapid combination of two sequential taps. In
three quantitative studies, we investigate the design and performance of these
techniques in comparison to standard methods. Results indicate that both
techniques are feasible, able to accommodate large numbers of commands, and
fast; users are able to quickly learn the techniques and reach a performance
of 1.0 seconds per selection, which is approximately one-third of the time of
standard commercial techniques. We also provide insights into the types of
applications for which these techniques are well-suited, and discuss how the
techniques could be extended.
[6]
The Effect of Visual Appearance on the Performance of Continuous Sliders and
Visual Analogue Scales
Natural User Interfaces for InfoVis
/
Matejka, Justin
/
Glueck, Michael
/
Grossman, Tovi
/
Fitzmaurice, George
Proceedings of the ACM CHI'16 Conference on Human Factors in Computing
Systems
2016-05-07
v.1
p.5421-5432
© Copyright 2016 ACM
Summary: Sliders and Visual Analogue Scales (VASs) are input mechanisms which allow
users to specify a value within a predefined range. At a minimum, sliders and
VASs typically consist of a line with the extreme values labeled. Additional
decorations such as labels and tick marks can be added to give information
about the gradations along the scale and allow for more precise and repeatable
selections. There is a rich history of research on the effect of labelling
in discrete scales (i.e., Likert scales); however, the effect of decorations
on continuous scales has not been rigorously explored. In this paper, we
perform a
2,000 user, 250,000 trial online experiment to study the effects of slider
appearance, and find that decorations along the slider considerably bias the
distribution of responses received. Using two separate experimental tasks, the
trade-offs between bias, accuracy, and speed-of-use are explored and design
recommendations for optimal slider implementations are proposed.
[7]
ExoSkin: On-Body Fabrication
Seams of Craft, Design and Fabrication
/
Gannon, Madeline
/
Grossman, Tovi
/
Fitzmaurice, George
Proceedings of the ACM CHI'16 Conference on Human Factors in Computing
Systems
2016-05-07
v.1
p.5996-6007
© Copyright 2016 ACM
Summary: There is a long tradition of crafting wearable objects directly on the
body, such as garments, casts, and orthotics. However, these high-skill, analog
practices have yet to be augmented by digital fabrication techniques. In this
paper, we explore the use of hybrid fabrication workflows for on-body printing.
We outline design considerations for creating on-body fabrication systems, and
identify several human, machine, and material challenges unique to this
endeavor. Based on our explorations, we present ExoSkin, a hybrid fabrication
system for designing and printing digital artifacts directly on the body.
ExoSkin utilizes a custom built fabrication machine designed specifically for
on-body printing. We demonstrate the potential of on-body fabrication with a
set of sample workflows, and share feedback from initial observation sessions.
[8]
SKUID: Sketching Stylized Animated Drawings with Motion Amplifiers
Video Showcase Presentations
/
Kazi, Rubaiat Habib
/
Grossman, Tovi
/
Umetani, Nobuyuki
/
Fitzmaurice, George
Extended Abstracts of the ACM CHI'16 Conference on Human Factors in
Computing Systems
2016-05-07
v.2
p.6
© Copyright 2016 ACM
Summary: We present Skuid, a sketching tool for crafting animated illustrations that
contain the exaggerated dynamics of stylized 2D animations. Skuid provides a
set of motion amplifiers which implement a set of established principles of 2D
animation. These amplifiers break down a complex animation effect into
independent, understandable chunks. Each amplifier imposes deformations to an
underlying grid, which in turn updates the corresponding strokes. Users can
combine these amplifiers at will when applying them to an existing animation,
promoting rapid experimentation. Skuid leverages the freeform nature of
sketching, allowing users to rapidly sketch, record motion, explore exaggerated
dynamics using the amplifiers, and fine-tune their animations.
[9]
Smart Makerspace: An Immersive Instructional Space for Physical Tasks
Session 4: Let's Get Practical
/
Knibbe, Jarrod
/
Grossman, Tovi
/
Fitzmaurice, George
Proceedings of the 2015 ACM International Conference on Interactive
Tabletops and Surfaces
2015-11-15
p.83-92
© Copyright 2015 ACM
Summary: We present the Smart Makerspace, a context-rich, immersive instructional
workspace for novice and intermediate makers. The Smart Makerspace guides
makers through the completion of a DIY task, while providing detailed
contextually-relevant assistance, domain knowledge, tool location, usage cues,
and safety advice. Through an initial exploratory study, we investigate the
challenges faced in completing maker tasks. Our observations allow us to define
design goals and a design space for a connected workshop. We describe our
implementation, including a digital workbench, augmented toolbox, instrumented
power-tools and environmentally aware audio. We present a qualitative user
study that produced encouraging results, with users unanimously finding its
features useful.
[10]
NanoStylus: Enhancing Input on Ultra-Small Displays with a Finger-Mounted
Stylus
Session 7A: Wearable and Mobile Interactions
/
Xia, Haijun
/
Grossman, Tovi
/
Fitzmaurice, George
Proceedings of the 2015 ACM Symposium on User Interface Software and
Technology
2015-11-05
v.1
p.447-456
© Copyright 2015 ACM
Summary: Due to their limited input area, ultra-small devices, such as smartwatches,
are even more prone to occlusion, or the fat finger problem, than their larger
counterparts, such as smart phones, tablets, and tabletop displays. We present
NanoStylus -- a finger-mounted fine-tip stylus that enables fast and accurate
pointing on a smartwatch with almost no occlusion. The NanoStylus is built from
the circuitry of an active capacitive stylus, and mounted within a custom
3D-printed thimble-shaped housing unit. A sensor strip is mounted on each side
of the device to enable additional gestures. A user study shows that NanoStylus
reduces error rate by 80% compared to traditional touch interaction, and by
45% compared to a traditional stylus. This high-precision pointing capability,
coupled with the implemented gesture sensing, gives us the opportunity to
explore a rich set of interactive applications on a smartwatch form factor.
[11]
Candid Interaction: Revealing Hidden Mobile and Wearable Computing
Activities
Session 7A: Wearable and Mobile Interactions
/
Ens, Barrett
/
Grossman, Tovi
/
Anderson, Fraser
/
Matejka, Justin
/
Fitzmaurice, George
Proceedings of the 2015 ACM Symposium on User Interface Software and
Technology
2015-11-05
v.1
p.467-476
© Copyright 2015 ACM
Summary: The growth of mobile and wearable technologies has often made it
difficult to understand what people in our surroundings are doing with their
devices.
In this paper, we introduce the concept of candid interaction: techniques for
providing awareness about our mobile and wearable device usage to others in the
vicinity. We motivate and ground this exploration through a survey on current
attitudes toward device usage during interpersonal encounters. We then explore
a design space for candid interaction through seven prototypes that leverage a
wide range of technological enhancements, such as Augmented Reality, shape
memory muscle wire, and wearable projection. Preliminary user feedback of our
prototypes highlights the trade-offs between the benefits of sharing device
activity and the need to protect user privacy.
[12]
MoveableMaker: Facilitating the Design, Generation, and Assembly of Moveable
Papercraft
Session 8B: Fabrication 3 -- Complex Shapes and Properties
/
Annett, Michelle
/
Grossman, Tovi
/
Wigdor, Daniel
/
Fitzmaurice, George
Proceedings of the 2015 ACM Symposium on User Interface Software and
Technology
2015-11-05
v.1
p.565-574
© Copyright 2015 ACM
Summary: In this work, we explore moveables, i.e., interactive papercraft that
harness user interaction to generate visual effects. First, we present a survey
of children's books that captured the state of the art of moveables. The
results of this survey were synthesized into a moveable taxonomy and informed
MoveableMaker, a new tool to assist users in designing, generating, and
assembling moveable papercraft. MoveableMaker supports the creation and
customization of a number of moveable effects and employs moveable-specific
features including animated tooltips, automatic instruction generation,
constraint-based rendering, techniques to reduce material waste, and so on. To
understand how MoveableMaker encourages creativity and enhances the workflow
when creating moveables, a series of exploratory workshops were conducted. The
results of these explorations, including the content participants created and
their impressions, are discussed, along with avenues for future research
involving moveables.
[13]
Supporting Subtlety with Deceptive Devices and Illusory Interactions
Grip, Move & Tilt: Novel Interaction
/
Anderson, Fraser
/
Grossman, Tovi
/
Wigdor, Daniel
/
Fitzmaurice, George
Proceedings of the ACM CHI'15 Conference on Human Factors in Computing
Systems
2015-04-18
v.1
p.1489-1498
© Copyright 2015 ACM
Summary: Mobile devices offer constant connectivity to the world, which can
negatively affect in-person interaction. Current approaches to minimizing the
social disruption and improving the subtlety of interactions tend to focus on
the development of inconspicuous devices that provide basic input or output.
This paper presents a more general approach to subtle interaction and
demonstrates how a number of principles from magic can be leveraged to improve
subtlety. It also presents a framework that can be used to classify subtle
interfaces along with a modular set of novel interfaces that fit within this
framework. Lastly, the paper presents a new evaluation paradigm specifically
designed to assess the subtlety of interactions. This paradigm is used to
compare traditional approaches to our new subtle approaches. We find our new
approaches are over five times more subtle than traditional interactions, even
when participants are aware of the technologies being used.
[14]
Tactum: A Skin-Centric Approach to Digital Design and Fabrication
Design and 3D Object Fabrication
/
Gannon, Madeline
/
Grossman, Tovi
/
Fitzmaurice, George
Proceedings of the ACM CHI'15 Conference on Human Factors in Computing
Systems
2015-04-18
v.1
p.1779-1788
© Copyright 2015 ACM
Summary: Skin-based input has become an increasingly viable interaction model for
user interfaces; however, it has yet to be explored outside the domain of mobile
computing. In this paper, we examine skin as an interactive input surface for
gestural 3D modeling-to-fabrication systems. When used as both the input
surface and base canvas for digital design, skin input can enable non-expert
users to intuitively create precise forms around highly complex physical
contexts: our own bodies. In this paper, we outline design considerations when
creating interfaces for such systems. We then discuss interaction techniques
for three different modes of skin-centric modeling: direct, parametric, and
generative. We also present Tactum, a new fabrication-aware design system that
captures a user's skin-centric gestures for 3D modeling directly on the body.
Lastly, we show sample artifacts generated with our system, and share a set of
observations from design professionals.
[15]
Your Paper is Dead!: Bringing Life to Research Articles with Animated
Figures
alt.chi: New User Interfaces
/
Grossman, Tovi
/
Chevalier, Fanny
/
Kazi, Rubaiat Habib
Extended Abstracts of the ACM CHI'15 Conference on Human Factors in
Computing Systems
2015-04-18
v.2
p.461-475
© Copyright 2015 ACM
Summary: The dissemination of scientific knowledge has evolved over the centuries
from handwritten manuscripts transcribed and published as physical
black-and-white prints on paper, to digital documents in full color available
for consultation online. Even though it now primarily relies on digital media,
academic publishing still generally adheres to its historical rigid paper-based
style, where static content is presented in the ready-to-print letter format. In
this paper, we reflect on our experience of authoring a published academic
article that embeds an animated figure and discuss the opportunities and
caveats of transitioning to such practice at the wider academic literature
scale.
[16]
Technology Transfer of HCI Research Innovations: Challenges and
Opportunities
Panels
/
Chilana, Parmit K.
/
Czerwinski, Mary P.
/
Grossman, Tovi
/
Harrison, Chris
/
Kumar, Ranjitha
/
Parikh, Tapan S.
/
Zhai, Shumin
Extended Abstracts of the ACM CHI'15 Conference on Human Factors in
Computing Systems
2015-04-18
v.2
p.823-828
© Copyright 2015 ACM
Summary: There has been a longstanding concern within HCI that even though we are
accumulating great innovations in the field, we rarely see these innovations
develop into products. Our panel brings together HCI researchers from academia
and industry who have been directly involved in technology transfer of one or
more HCI innovations. They will share their experiences around what it takes to
transition an HCI innovation from the lab to the market, including issues
around time commitment, funding, resources, and business expertise. More
importantly, our panelists will discuss and debate the tensions that we
(researchers) face in choosing design and evaluation methods that help us make
an HCI research contribution versus what actually matters when we go to market.
[17]
A series of tubes: adding interactivity to 3D prints using internal pipes
Interacting with 3D data
/
Savage, Valkyrie
/
Schmidt, Ryan
/
Grossman, Tovi
/
Fitzmaurice, George
/
Hartmann, Björn
Proceedings of the 2014 ACM Symposium on User Interface Software and
Technology
2014-10-05
v.1
p.3-12
© Copyright 2014 ACM
Summary: 3D printers offer extraordinary flexibility for prototyping the shape and
mechanical function of objects. We investigate how 3D models can be modified to
facilitate the creation of interactive objects that offer dynamic input and
output. We introduce a general technique for supporting the rapid prototyping
of interactivity by removing interior material from 3D models to form internal
pipes. We describe this new design space of pipes for interaction design, where
variables include openings, path constraints, topologies, and inserted media.
We then present PipeDream, a tool for routing such pipes through the interior
of 3D models, integrated within a 3D modeling program. We use two distinct
routing algorithms. The first has users define pipes' terminals, and uses path
routing and physics-based simulation to minimize pipe bending energy, allowing
easy insertion of media post-print. The second allows users to supply a desired
internal shape to which we fit a pipe route: for this we describe a
graph-routing algorithm. We present several prototypes created using our tool
to show its flexibility and potential.
[18]
Kitty: sketching dynamic and interactive illustrations
Creative tools
/
Kazi, Rubaiat Habib
/
Chevalier, Fanny
/
Grossman, Tovi
/
Fitzmaurice, George
Proceedings of the 2014 ACM Symposium on User Interface Software and
Technology
2014-10-05
v.1
p.395-405
© Copyright 2014 ACM
Summary: We present Kitty, a sketch-based tool for authoring dynamic and interactive
illustrations. Artists can sketch animated drawings and textures to convey
living phenomena, and specify the functional relationships between their
entities to characterize the dynamic behavior of systems and environments. An
underlying
graph model, customizable through sketching, captures the functional
relationships between the visual, spatial, temporal or quantitative parameters
of its entities. As the viewer interacts with the resulting dynamic interactive
illustration, the parameters of the drawing change accordingly, depicting the
dynamics and chain of causal effects within a scene. The generality of this
framework makes our tool applicable for a variety of purposes, including
technical illustrations, scientific explanation, infographics, medical
illustrations, children's e-books, cartoon strips and beyond. A user study
demonstrates the ease of usage, variety of applications, artistic
expressiveness and creative possibilities of our tool.
[19]
Video lens: rapid playback and exploration of large video collections and
associated metadata
Video
/
Matejka, Justin
/
Grossman, Tovi
/
Fitzmaurice, George
Proceedings of the 2014 ACM Symposium on User Interface Software and
Technology
2014-10-05
v.1
p.541-550
© Copyright 2014 ACM
Summary: We present Video Lens, a framework which allows users to visualize and
interactively explore large collections of videos and associated metadata. The
primary goal of the framework is to let users quickly find relevant sections
within the videos and play them back in rapid succession. The individual UI
elements are linked and highly interactive, supporting a faceted search
paradigm and encouraging exploration of the data set. We demonstrate the
capabilities and specific scenarios of Video Lens within the domain of
professional baseball videos. A user study with 12 participants indicates that
Video Lens efficiently supports a diverse range of powerful yet desirable video
query tasks, while a series of interviews with professionals in the field
demonstrates the framework's benefits and future potential.
[20]
Swipeboard: a text entry technique for ultra-small interfaces that supports
novice to expert transitions
Input techniques
/
Chen, Xiang 'Anthony'
/
Grossman, Tovi
/
Fitzmaurice, George
Proceedings of the 2014 ACM Symposium on User Interface Software and
Technology
2014-10-05
v.1
p.615-620
© Copyright 2014 ACM
Summary: Ultra-small smart devices, such as smart watches, have become increasingly
popular in recent years. Most of these devices rely on touch as the primary
input modality, which makes tasks such as text entry increasingly difficult as
the devices continue to shrink. In the sole pursuit of entry speed, the
ultimate solution is a shorthand technique (e.g., Morse code) that sequences
tokens of input (e.g., key, tap, swipe) into unique representations of each
character. However, learning such techniques is hard, as they often rely on
rote memorization. Our technique, Swipeboard, leverages our spatial memory of
a QWERTY keyboard to learn, and eventually master, a shorthand, eyes-free text
entry method designed for ultra-small interfaces. Characters are entered with
two swipes; the first swipe specifies the region where the character is
located, and the second swipe specifies the character within that region. Our
study showed that, with less than two hours of training on a reduced word
set, Swipeboard users achieved 19.58 words per minute (WPM), 15% faster than
an existing baseline technique.
[21]
Duet: exploring joint interactions on a smart phone and a smart watch
Watches and small devices
/
Chen, Xiang 'Anthony'
/
Grossman, Tovi
/
Wigdor, Daniel J.
/
Fitzmaurice, George
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.1
p.159-168
© Copyright 2014 ACM
Summary: The emergence of smart devices (e.g., smart watches and smart eyewear) is
redefining mobile interaction from the solo performance of a smart phone, to a
symphony of multiple devices. In this paper, we present Duet -- an interactive
system that explores a design space of interactions between a smart phone and a
smart watch. Based on the devices' spatial configurations, Duet coordinates
their motion and touch input, and extends their visual and tactile output to
one another. This transforms the watch into an active element that enhances a
wide range of phone-based interactive tasks, and enables a new class of
multi-device gestures and sensing techniques. A technical evaluation shows the
accuracy of these gestures and sensing techniques, and a subjective study on
Duet provides insights, observations, and guidance for future work.
[22]
Draco: bringing life to illustrations with kinetic textures
Image and animation authoring
/
Kazi, Rubaiat Habib
/
Chevalier, Fanny
/
Grossman, Tovi
/
Zhao, Shengdong
/
Fitzmaurice, George
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.1
p.351-360
© Copyright 2014 ACM
Summary: We present Draco, a sketch-based interface that allows artists and casual
users alike to add a rich set of animation effects to their drawings, seemingly
bringing illustrations to life. While previous systems have introduced
sketch-based animations for individual objects, our contribution is a unified
framework of motion controls that allows users to seamlessly add coordinated
motions to object collections. We propose a framework built around kinetic
textures, which provide continuous animation effects while preserving the
unique timeless nature of still illustrations. This enables many dynamic
effects difficult or not possible with previous sketch-based tools, such as a
school of fish swimming, tree leaves blowing in the wind, or water rippling in
a pond. We describe our implementation and illustrate the repertoire of
animation effects it supports. A user study with professional animators and
casual users demonstrates the variety of animations, applications and creative
possibilities our tool provides.
[23]
History assisted view authoring for 3D models
3D interaction: modeling and prototyping
/
Chen, Hsiang-Ting
/
Grossman, Tovi
/
Wei, Li-Yi
/
Schmidt, Ryan M.
/
Hartmann, Björn
/
Fitzmaurice, George
/
Agrawala, Maneesh
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.1
p.2027-2036
© Copyright 2014 ACM
Summary: 3D modelers often wish to showcase their models for sharing or review
purposes. This may consist of generating static viewpoints of the model or
authoring animated fly-throughs. Manually creating such views is often tedious
and few automatic methods are designed to interactively assist the modelers
with the view authoring process. We present a view authoring assistance system
that supports the creation of informative view points, view paths, and view
surfaces, allowing modelers to author the interactive navigation experience of
a model. The key concept of our implementation is to analyze the model's
workflow history, to infer important regions of the model and representative
viewpoints of those areas. An evaluation indicated that the viewpoints
generated by our algorithm are comparable to those manually selected by the
modeler. In addition, participants of a user study found our system easy to use
and effective for authoring viewpoint summaries.
[24]
CADament: a gamified multiplayer software tutorial system
Learning and games
/
Li, Wei
/
Grossman, Tovi
/
Fitzmaurice, George
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.1
p.3369-3378
© Copyright 2014 ACM
Summary: We present CADament, a gamified multiplayer tutorial system for learning
AutoCAD. Compared with existing gamified software tutorial systems, CADament
generates an engaging learning experience through competition. We investigate two
variations of our game, where over-the-shoulder learning was simulated by
providing viewports into other players' screens. We introduce an empirical lab
study methodology where participants compete with one another, and we study
knowledge transfer effects by tracking the migration of strategies between
players during the study session. Our study shows that CADament has an
advantage over pre-authored tutorials for improving learners' performance,
increasing motivation, and stimulating knowledge transfer.
[25]
Investigating the feasibility of extracting tool demonstrations from in-situ
video content
Tutorials
/
Lafreniere, Ben
/
Grossman, Tovi
/
Matejka, Justin
/
Fitzmaurice, George
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.1
p.4007-4016
© Copyright 2014 ACM
Summary: Short video demonstrations are effective resources for helping users to
learn tools in feature-rich software. However, manually creating demonstrations
for the hundreds (or thousands) of individual features in these programs would
be impractical. In this paper, we investigate the potential for identifying
good tool demonstrations from within screen recordings of users performing
real-world tasks. Using an instrumented image-editing application, we collected
workflow video content and log data from actual end users. We then developed a
heuristic for identifying demonstration clips, and had the quality of a sample
set of clips evaluated by both domain experts and end users. This multi-step
approach allowed us to characterize the quality of 'naturally occurring' tool
demonstrations, and to derive a list of good and bad features of these videos.
Finally, we conducted an initial investigation into using machine learning
techniques to distinguish between good and bad demonstrations.