Single user multitouch on the DiamondTouch: from 2 x 1D to 2D | | BIBAK | Full-Text | 1-8 | |
François Bérard; Yann Laurillau | |||
The DiamondTouch is a widely used multi-touch surface that offers high
quality touch detection and user identification. But its underlying detection
mechanism relies on two 1D projections (x and y) of the 2D surface. This
creates ambiguous responses when a single user exercises multiple contacts on
the surface and limits the ability of the DiamondTouch to provide full support
of common multi-touch interactions such as the unconstrained translation,
rotation and scaling of objects with two fingers. This paper presents our
solution to reduce this limitation. Our approach is based on a precise
modeling, using mixtures of Gaussians, of the touch responses on each array of
antennas. This greatly reduces the shadowing of the touch locations when two or
more fingers align with each other. We use these accurate touch detections to
implement two 1D touch trackers and a global 2D tracker. The evaluation of our
system shows that, in many situations, it can provide the complete 2D locations
of at least two contact points from the same user. Keywords: DiamondTouch, expectation maximization, input device, multitouch, tracking |
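The mixture-of-Gaussians idea named in the abstract and keywords can be illustrated compactly. The following Python sketch is not the authors' code; under assumed parameters, it only shows how expectation maximization can resolve two overlapping finger responses in one 1D antenna array.

```python
# A minimal sketch (not the authors' code) of the core idea: model the 1D
# response across one antenna array as a mixture of Gaussians and fit it
# with expectation maximization, so that two fingers whose responses overlap
# along an axis can still be resolved as two components.
import numpy as np

def fit_gmm_1d(signal, k=2, iters=50):
    """Fit k Gaussian components to a 1D antenna response via weighted EM."""
    x = np.arange(len(signal), dtype=float)
    w = signal / signal.sum()                      # sample weights from signal
    means = np.linspace(x[0], x[-1], k + 2)[1:-1]  # spread out initial means
    stds = np.full(k, len(signal) / (4.0 * k))
    pis = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each antenna.
        dens = np.exp(-0.5 * ((x[None] - means[:, None]) / stds[:, None]) ** 2)
        dens *= pis[:, None] / (stds[:, None] * np.sqrt(2.0 * np.pi))
        resp = dens / np.maximum(dens.sum(axis=0, keepdims=True), 1e-12)
        # M-step: re-estimate parameters with signal-weighted responsibilities.
        rw = resp * w[None]
        nk = np.maximum(rw.sum(axis=1), 1e-12)
        means = (rw @ x) / nk
        stds = np.sqrt((rw * (x[None] - means[:, None]) ** 2).sum(axis=1) / nk + 1e-6)
        pis = nk / nk.sum()
    return means, stds, pis

# Two overlapping finger responses on a 160-antenna axis:
xs = np.arange(160)
sig = np.exp(-0.5 * ((xs - 60) / 4.0) ** 2) + 0.8 * np.exp(-0.5 * ((xs - 72) / 4.0) ** 2)
print(fit_gmm_1d(sig, k=2))
```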
reacTIVision and TUIO: a tangible tabletop toolkit | | BIBAK | Full-Text | 9-16 | |
Martin Kaltenbrunner | |||
This article presents the recent updates and an evaluation of reacTIVision,
a computer vision toolkit for fiducial marker tracking and multi-touch
interaction. It also discusses the current and future development of the TUIO
protocol and framework, which has been primarily designed as an abstraction
layer for the description and transmission of pointers and tangible object
states in the context of interactive tabletop surfaces. The initial protocol
definition proved to be rather robust due to the simple and straightforward
implementation approach, which also supported its widespread adoption within
the open source community. This article also discusses the current limitations
of this simplistic approach and provides an outlook towards a next generation
protocol definition, which will address the need for additional descriptors and
the protocol's general extensibility. Keywords: computer vision, human computer interaction, interactive surfaces,
protocols, tangible user interfaces |
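For readers unfamiliar with TUIO, a client consumes per-frame bundles of "alive", "set", and "fseq" messages for each profile (e.g. /tuio/2Dcur for cursors). The sketch below is a minimal, illustrative consumer of already-decoded cursor messages; OSC transport and the exact argument layout of a given TUIO version are assumed to be handled elsewhere.

```python
# A minimal, illustrative consumer of TUIO cursor messages. Assumptions: OSC
# transport and decoding are handled elsewhere, and messages follow the
# TUIO 1.x cursor profile, where each frame bundle carries "alive" (active
# session ids), "set" (id plus normalized position and motion data), and
# "fseq" (frame sequence number).

class TuioCursorTracker:
    def __init__(self):
        self.cursors = {}   # session id -> (x, y), normalized to [0, 1]

    def on_message(self, args):
        cmd = args[0]
        if cmd == "set":
            sid, x, y = args[1], args[2], args[3]
            self.cursors[sid] = (x, y)          # add or update a cursor
        elif cmd == "alive":
            alive = set(args[1:])
            # Remove cursors whose session ids are no longer alive.
            self.cursors = {s: p for s, p in self.cursors.items() if s in alive}
        elif cmd == "fseq":
            pass   # frame boundary: a real client would commit updates here

tracker = TuioCursorTracker()
tracker.on_message(["set", 7, 0.25, 0.75, 0.0, 0.0, 0.0])
tracker.on_message(["alive", 7])
print(tracker.cursors)   # {7: (0.25, 0.75)}
```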
PyMT: a post-WIMP multi-touch user interface toolkit | | BIBAK | Full-Text | 17-24 | |
Thomas E. Hansen; Juan Pablo Hourcade; Mathieu Virbel; Sharath Patali; Tiago Serra | |||
Multi-touch and tabletop input paradigms open novel doors for post-WIMP
(Windows, Icons, Menus, Pointer) user interfaces. Developing these novel
interfaces and applications poses unique challenges for designers and
programmers alike. We present PyMT (Python Multi-Touch), a toolkit aimed at
addressing these challenges. We discuss PyMT's architecture and sample
applications to demonstrate how it enables rapid development of prototypes and
interaction techniques while being accessible to novice programmers and
providing great flexibility and creative freedom to advanced users. We share
experiences gathered in the open source development of PyMT to explore design
and programming challenges posed by multi-touch tabletop and post-WIMP
interfaces. Specifically, we discuss changes to the event model and the
implementation of development and debugging tools that we found useful along
the way. Keywords: GUI, Python, UI toolkits, graphics, multi-touch, open source, post-WIMP,
user interfaces |
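The event-model changes the authors mention revolve around delivering touch events to widgets. The following sketch is hypothetical and not PyMT's actual API; it only illustrates the widget-level touch dispatch style common to such toolkits.

```python
# A hypothetical sketch in the spirit of the touch event model the authors
# discuss; class and method names here are illustrative, not PyMT's actual
# API. Touches arrive as objects, and a widget claims a touch by returning
# True so that other widgets ignore it.

class Touch:
    def __init__(self, uid, x, y):
        self.uid, self.x, self.y = uid, x, y

class Widget:
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h

    def collide_point(self, x, y):
        return self.x <= x <= self.x + self.w and self.y <= y <= self.y + self.h

    def on_touch_down(self, touch):
        if self.collide_point(touch.x, touch.y):
            print(f"touch {touch.uid} down inside widget")
            return True   # claim the touch; stop further propagation
        return False

w = Widget(0, 0, 100, 100)
w.on_touch_down(Touch(uid=1, x=50, y=40))
```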
FiberBoard: compact multi-touch display using channeled light | | BIBAK | Full-Text | 25-28 | |
Daniel Jackson; Tom Bartindale; Patrick Olivier | |||
Multi-touch displays based on infrared (IR) light offer many advantages over
alternative technologies. Existing IR multi-touch devices either use complex
custom electronic sensor arrays, or a camera that must be placed relatively
distant from the display. FiberBoard is an easily constructed compact
IR-sensing multi-touch display. Using an array of optical fibers, reflected IR
light is channeled to a camera. As the fibers are flexible the camera is free
to be positioned so as to minimize the depth of the device. The resulting
display is around one tenth of the depth of a conventional camera-based
multi-touch display. We describe our prototype, its novel calibration process,
and virtual camera software based on existing multi-touch image processing
tools. Keywords: fiber optics, infrared sensing, multi-touch |
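The "virtual camera" mentioned above can be pictured as a per-fiber remapping: calibration (for instance, by flashing each display cell and recording the brightest camera pixel, an assumed procedure for illustration) yields a lookup table from grid cells to camera pixels, and each frame is resampled through it. A minimal sketch under those assumptions:

```python
# A minimal sketch, assuming calibration has already produced a lookup table
# from each fiber's cell in the display grid to the camera pixel where that
# fiber's other end appears. Each camera frame is then resampled through the
# table to form the "virtual camera" image handed to multi-touch trackers.
import numpy as np

def virtual_frame(camera, lut_rows, lut_cols):
    """camera: 2D grayscale frame. lut_rows/lut_cols: camera pixel coordinates
    per grid cell, each shaped (grid_h, grid_w). Returns the grid image."""
    return camera[lut_rows, lut_cols]

# Toy example: a 4x4 fiber grid sampled out of a 100x100 camera frame.
rng = np.random.default_rng(0)
cam = rng.random((100, 100))
rows = rng.integers(0, 100, size=(4, 4))
cols = rng.integers(0, 100, size=(4, 4))
print(virtual_frame(cam, rows, cols))
```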
Inverted FTIR: easy multitouch sensing for flatscreens | | BIBA | Full-Text | 29-32 | |
Florian Echtler; Andreas Dippon; Marcus Tönnis; Gudrun Klinker | |||
The increased attention which multitouch interfaces have received in recent years is partly due to the availability of cheap sensing hardware such as FTIR-based screens. However, this method has so far required a bulky projector-camera setup behind the screen. In this paper, we present a new approach to FTIR sensing by "inverting" the setup and placing the camera in front of the screen. This allows the use of unmodified flat screens as display, thereby dramatically shrinking the space required behind the screen and enabling the easy construction of new types of interactive surfaces. |
CRISTAL: a collaborative home media and device controller based on a multi-touch display | | BIBAK | Full-Text | 33-40 | |
Thomas Seifried; Michael Haller; Stacey D. Scott; Florian Perteneder; Christian Rendl; Daisuke Sakamoto; Masahiko Inami | |||
While most homes are inherently social places, existing devices designed to
control consumer electronics typically only support single user interaction.
Further, as the number of consumer electronics in modern homes increases,
people are often forced to switch between many controllers to interact with
these devices. To simplify interaction with these devices and to enable more
collaborative forms of device control, we propose an integrated remote control
system, called CRISTAL (Control of Remotely Interfaced Systems using
Touch-based Actions in Living spaces). CRISTAL enables people to control a wide
variety of digital devices from a centralized, interactive tabletop system that
provides an intuitive, gesture-based interface allowing multiple users to
control home media devices through a virtually augmented video image of the
surrounding environment. A preliminary user study of the CRISTAL system is
presented, along with a discussion of future research directions. Keywords: collaborative interface, multi-touch, remote controller |
Analysis of natural gestures for controlling robot teams on multi-touch tabletop surfaces | | BIBAK | Full-Text | 41-48 | |
Mark Micire; Munjal Desai; Amanda Courtemanche; Katherine M. Tsui; Holly A. Yanco | |||
Multi-touch technologies hold much promise for the command and control of
mobile robot teams. To improve the ease of learning and usability of these
interfaces, we conducted an experiment to determine the gestures that people
would naturally use, rather than the gestures they would be instructed to use
in a pre-designed system. A set of 26 tasks with differing control needs was
presented sequentially on a DiamondTouch to 31 participants. We found that the
task of controlling robots exposed unique gesture sets and considerations not
previously observed, particularly in desktop-like applications. In this paper,
we present the details of these findings, a taxonomy of the gesture set, and
guidelines for designing gesture sets for robot control. Keywords: gestures, human-computer interaction, human-robot interaction, multi-touch,
robot control, tabletop interface |
Developing the story: designing an interactive storytelling application | | BIBAK | Full-Text | 49-52 | |
John Helmes; Xiang Cao; Siân E. Lindley; Abigail Sellen | |||
This paper describes the design of a tabletop storytelling application for
children, called TellTable. The goal of the system was to stimulate creativity
and collaboration by allowing children to develop their own story characters
and scenery through photography and drawing, and record stories through direct
manipulation and narration. Here we present the initial interface design and
its iteration following the results of a preliminary trial. We also describe
key findings from TellTable's deployment in a primary school that relate to its
design, before concluding with a discussion of design implications from the
process. Keywords: children, drawing, interactive tabletop, interface design, photography,
storytelling, tangibles |
FluidPaint: an interactive digital painting system using real wet brushes | | BIBAK | Full-Text | 53-56 | |
Peter Vandoren; Luc Claesen; Tom Van Laerhoven; Johannes Taelman; Chris Raymaekers; Eddy Flerackers; Frank Van Reeth | |||
This paper presents FluidPaint, a novel digital paint system using real wet
brushes. A new interactive canvas that registers brush footprints and
paint strokes with high precision has been developed. It is based on the
real-time imaging of brushes and other painting instruments as well as the
real-time co-located rendering of the painting results. This new painting user
interface enhances the user experience and the artist's expressiveness. User
tests demonstrate the intuitive nature of FluidPaint, naturally integrating
interface elements of traditional painting in a digital paint system. Keywords: paint system, tabletop hardware, tangible UI, wet brushes |
Stacks on the surface: resolving physical order using fiducial markers with structured transparency | | BIBAK | Full-Text | 57-60 | |
Tom Bartindale; Chris Harrison | |||
We present a method for identifying the order of stacked items on
interactive surfaces. This is achieved using conventional, passive fiducial
markers, which, in addition to reflective regions, also incorporate structured
areas of transparency. This allows particular orderings to appear as unique
marker patterns. We discuss how such markers are encoded and fabricated, and
include relevant mathematics. To motivate our approach, we comment on various
scenarios where stacking could be especially useful. We conclude with details
from our proof-of-concept implementation, built on Microsoft Surface. Keywords: computer vision, direct manipulation, fiducial markers, input, interaction,
physical, physical state, piles, stacking, tangible, vertical ordering |
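The ordering principle can be made concrete: with the camera below the surface, each marker cell is reflective, absorbing, or transparent, and the camera sees, per cell, the first non-transparent cell from the bottom of the stack upward, so different orderings compose into different visible patterns. A conceptual sketch (not the paper's actual encoding):

```python
# A conceptual sketch (not the paper's actual encoding) of why structured
# transparency resolves stack order: with the camera below the surface, each
# marker cell is reflective (1), absorbing (0), or transparent (None), and
# the camera sees, per cell, the first non-transparent value from the bottom
# of the stack upward.

def composite(stack):
    """stack: markers listed bottom (camera side) first; each marker is a
    tuple of cells in {0, 1, None}, where None means transparent."""
    out = []
    for cells in zip(*stack):
        visible = next((c for c in cells if c is not None), 0)
        out.append(visible)
    return tuple(out)

a = (1, 0, None, None)
b = (0, 1, None, 1)
print(composite([a, b]))  # (1, 0, 0, 1): a closest to the camera
print(composite([b, a]))  # (0, 1, 0, 1): reversed order, distinct pattern
```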
I-Grabber: expanding physical reach in a large-display tabletop environment through the use of a virtual grabber | | BIBAK | Full-Text | 61-64 | |
Martha Abednego; Joong-Ho Lee; Won Moon; Ji-Hyung Park | |||
While working on large tabletop interfaces, reaching and manipulating
objects beyond the physical reach of a user can be considerably vexing. In
order to reach such an object, a user may have to physically move to the
object's location. Alternatively, a user could attempt to reach for the object
from his/her current location, but may intrude on the territory of other
users as a result. We propose a multi-touch interaction technique which
enables users to easily select and manipulate objects that are beyond their
physical reach. Our technique provides direct visual feedback to users, which
allows them to be aware of their current active location. Using a controllable
"interactive grabber" (I-Grabber) as a virtual hand extension, users can reach
and manipulate any object from their current location. Keywords: distance reaching, interaction technique, tabletop environment, touch input |
Visualizing and manipulating automatic document orientation methods using vector fields | | BIBAK | Full-Text | 65-68 | |
Pierre Dragicevic; Yuanchun Shi | |||
We introduce and illustrate a design framework whereby tabletop documents
are oriented according to vector fields that can be visualized and altered by
end users. We explore and illustrate the design space using interactive 2D
mockups and show how this approach can potentially combine the advantages of
the fully manual and fully automatic document orientation methods previously
proposed in the literature. Keywords: orientation, tabletop, vector fields |
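A vector-field orientation method reduces to sampling a field at each document's position and aligning the document with the sampled vector. A small illustrative sketch, with an example field (an assumption, not one from the paper) that turns documents toward the nearest table edge:

```python
# A small sketch of the framework's core idea, under the assumption that a
# field is simply a function from table position to a 2D vector: each
# document is rotated to align with the field sampled at its own position.
# The example field below is illustrative, not one from the paper.
import math

def field_outward(x, y, table_w, table_h):
    # Points from the table center outward, so an aligned document faces
    # the nearest table edge (a common automatic-orientation default).
    return (x - table_w / 2.0, y - table_h / 2.0)

def document_angle_deg(x, y):
    vx, vy = field_outward(x, y, table_w=200.0, table_h=100.0)
    return math.degrees(math.atan2(vy, vx))

print(document_angle_deg(10.0, 50.0))   # 180.0: faces the left edge
print(document_angle_deg(190.0, 50.0))  # 0.0: faces the right edge
```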
PaperLens: advanced magic lens interaction above the tabletop | | BIBAK | Full-Text | 69-76 | |
Martin Spindler; Sophie Stellmach; Raimund Dachselt | |||
In order to improve the three-dimensional (3D) exploration of virtual spaces
above a tabletop, we developed a set of navigation techniques using a handheld
magic lens. These techniques allow for an intuitive interaction with
two-dimensional and 3D information spaces, for which we contribute a
classification into volumetric, layered, zoomable, and temporal spaces. The
proposed PaperLens system uses a tracked sheet of paper to navigate these
spaces with regard to the Z-dimension (height above the tabletop). A formative
user study provided valuable feedback for the improvement of the PaperLens
system with respect to layer interaction and navigation. In particular, the
problem of keeping the focus on selected layers was addressed. We also propose
additional vertical displays in order to provide further contextual clues. Keywords: interaction techniques, multi-layer interaction, spatially aware displays,
tangible interaction, three-dimensional space |
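The Z-dimension navigation described above can be summarized as a mapping from tracked lens height to a layer index. A tiny sketch with assumed parameters (the layer thickness and clamping are illustrative):

```python
# A tiny sketch of the height mapping described above, with assumed numbers:
# the tracked lens height above the tabletop selects a layer in a layered
# information space (zoomable spaces would map height to scale instead).

LAYER_THICKNESS_MM = 50.0   # assumed physical height allotted to each layer

def layer_for_height(height_mm, n_layers):
    idx = int(height_mm // LAYER_THICKNESS_MM)
    return max(0, min(n_layers - 1, idx))   # clamp to the valid layer range

for h in (10.0, 60.0, 140.0, 900.0):
    print(h, "->", layer_for_height(h, n_layers=4))
```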
Exploring tangible and direct touch interfaces for manipulating 2D and 3D information on a digital table | | BIBA | Full-Text | 77-84 | |
Mark Hancock; Otmar Hilliges; Christopher Collins; Dominikus Baur; Sheelagh Carpendale | |||
On traditional tables, people often manipulate a variety of physical objects, both 2D in nature (e.g., paper) and 3D in nature (e.g., books, pens, models, etc.). Current advances in hardware technology for tabletop displays introduce the possibility of mimicking these physical interactions through direct-touch or tangible user interfaces. While both promise intuitive physical interaction, they are rarely discussed in combination in the literature. In this paper, we present a study that explores the advantages and disadvantages of tangible and touch interfaces, specifically in relation to one another. We discuss our results in terms of how effective each technique was for accomplishing both a 3D object manipulation task and a 2D information visualization exploration task. Results suggest that people can more quickly move and rotate objects in 2D with our touch interaction, but more effectively navigate the visualization using tangible interaction. We discuss how our results can be used to inform future designs of tangible and touch interaction. |
The Haptic Tabletop Puck: tactile feedback for interactive tabletops | | BIBAK | Full-Text | 85-92 | |
Nicolai Marquardt; Miguel A. Nacenta; James E. Young; Sheelagh Carpendale; Saul Greenberg; Ehud Sharlin | |||
In everyday life, our interactions with objects on real tables include how
our fingertips feel those objects. In comparison, current digital interactive
tables present a uniform touch surface that feels the same, regardless of what
it presents visually. In this paper, we explore how tactile interaction can be
used with digital tabletop surfaces. We present a simple and inexpensive device
-- the Haptic Tabletop Puck -- that incorporates dynamic, interactive haptics
into tabletop interaction. We created several applications that explore tactile
feedback in the area of haptic information visualization, haptic graphical
interfaces, and computer supported collaboration. In particular, we focus on
how a person may interact with the friction, height, texture and malleability
of digital objects. Keywords: friction, haptic, height, malleability, tabletops, tactile |
Enhancing input on and above the interactive surface with muscle sensing | | BIBAK | Full-Text | 93-100 | |
Hrvoje Benko; T. Scott Saponas; Dan Morris; Desney Tan | |||
Current interactive surfaces provide little or no information about which
fingers are touching the surface, the amount of pressure exerted, or gestures
that occur when not in contact with the surface. These limitations constrain
the interaction vocabulary available to interactive surface systems. In our
work, we extend the surface interaction space by using muscle sensing to
provide complementary information about finger movement and posture. In this
paper, we describe a novel system that combines muscle sensing with a
multi-touch tabletop, and introduce a series of new interaction techniques
enabled by this combination. We present observations from an initial system
evaluation and discuss the limitations and challenges of utilizing muscle
sensing for tabletop applications. Keywords: EMG, muscle sensing, surface computing, tabletops |
Hand distinction for multi-touch tabletop interaction | | BIBAK | Full-Text | 101-108 | |
Chi Tai Dang; Martin Straub; Elisabeth André | |||
Recent multi-touch multi-user tabletop systems offer rich touch contact
properties to applications: not only touch positions, but also finger
orientations. Applications can use these properties separately for each
finger or derive information by combining the given touch contact data. In this
paper, we present an approach for mapping fingers to the hand they belong to,
contributing to potential enhancements for gesture recognition and user
interaction. For instance, a gesture can be composed of multiple fingers of one
hand or of different hands. We present a simple heuristic for mapping fingers
to hands that applies constraints to the touch positions combined with the
finger orientations. We tested our approach on a diverse set of collected touch
contact data and analyzed the results. Keywords: input/interaction, multi-touch, tabletop hardware, touch properties,
tracking |
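One way to picture such a heuristic: projecting backward along each finger's reported orientation approximates the position of the hand, and touches whose projected positions fall close together are grouped. The sketch below is a simplified illustration with assumed distances, not the paper's exact constraints.

```python
# A simplified illustration of this kind of heuristic (distances and angles
# here are assumptions, not the paper's exact constraints): projecting
# backward along each finger's orientation approximates the hand position,
# and touches whose projected positions fall close together share a hand.
import math

HAND_OFFSET = 120.0        # assumed fingertip-to-palm distance (surface units)
SAME_HAND_RADIUS = 80.0    # assumed grouping radius for projected positions

def hand_anchor(x, y, orientation_deg):
    a = math.radians(orientation_deg)
    return (x - HAND_OFFSET * math.cos(a), y - HAND_OFFSET * math.sin(a))

def group_by_hand(touches):
    groups = []   # list of (anchor, [touches...]) pairs
    for t in touches:
        ax, ay = hand_anchor(*t)
        for anchor, members in groups:
            if math.dist((ax, ay), anchor) < SAME_HAND_RADIUS:
                members.append(t)
                break
        else:
            groups.append(((ax, ay), [t]))
    return [members for _, members in groups]

# Two fingers pointing roughly the same way, plus one far-away finger:
print(group_by_hand([(300, 400, 80), (340, 410, 95), (700, 420, 100)]))
```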
Getting practical with interactive tabletop displays: designing for dense data, "fat fingers," diverse interactions, and face-to-face collaboration | | BIBAK | Full-Text | 109-116 | |
Stephen Voida; Matthew Tobiasz; Julie Stromer; Petra Isenberg; Sheelagh Carpendale | |||
Tabletop displays with touch-based input provide many powerful affordances
for directly manipulating and collaborating around information visualizations.
However, these devices also introduce several challenges for interaction
designers, including discrepancies among the resolutions of the visualization,
the tabletop's display, and its sensing technologies; a need to support diverse
types of interactions required by different visualization techniques; and the
ability to support face-to-face collaboration. As a result, most interactive
tabletop applications for working with information currently demonstrate
limited functionality and do not approach the power or versatility of their
desktop counterparts.
We present a series of design considerations, informed by prior interaction design and focus+context visualization research, for ameliorating the challenges inherent in designing practical interaction techniques for tabletop information visualization applications. We then discuss two specific techniques, i-Loupe and iPodLoupe, which illustrate how different choices among these design considerations enable vastly different experiences in working with complex data on interactive surfaces. Keywords: i-Loupe, iPodLoupe, information visualization, interaction lenses,
resolution discrepancy |
Extending touch: towards interaction with large-scale surfaces | | BIBAK | Full-Text | 117-124 | |
Alexander Schick; Florian van de Camp; Joris Ijsselmuiden; Rainer Stiefelhagen | |||
Touch is a very intuitive modality for interacting with objects displayed on
arbitrary surfaces. However, when using touch for large-scale surfaces, not
every point is reachable. Therefore, an extension is required that keeps the
intuitiveness provided by touch: pointing. We present our system that allows
both input modalities in one single framework. Our method is based on 3D
reconstruction, using standard RGB cameras only, and allows seamless switching
between touch and pointing, even while interacting. Our approach scales very
well with large surfaces without modifying them. We present a technical
evaluation of the system's accuracy, as well as a user study. We found that
users preferred our system to a touch-only system, because they had more
freedom during interaction and could solve the presented task significantly
faster. Keywords: computer vision, large surfaces, multitouch, touch and pointing interaction,
visual hull |
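The seamless switching between touch and pointing can be sketched as a threshold on the reconstructed fingertip's distance to the surface: a close fingertip is a touch, while a distant one defines a pointing ray intersected with the surface plane. The threshold and the simple elbow-fingertip arm model below are assumptions for illustration, not the paper's method.

```python
# A schematic sketch of seamless touch/pointing switching, with assumed
# thresholds and a simple elbow-fingertip arm model (both are illustrative):
# a fingertip near the surface is a touch; otherwise the arm defines a
# pointing ray that is intersected with the surface plane z = 0.

TOUCH_THRESHOLD_M = 0.03   # assumed fingertip-to-surface touch distance

def interact(fingertip, elbow):
    """fingertip, elbow: (x, y, z) positions with the surface at z = 0."""
    fx, fy, fz = fingertip
    if fz < TOUCH_THRESHOLD_M:
        return ("touch", (fx, fy))
    ex, ey, ez = elbow
    t = ez / (ez - fz)   # ray parameter where elbow->fingertip hits z = 0
    return ("point", (ex + t * (fx - ex), ey + t * (fy - ey)))

print(interact((0.5, 1.0, 0.01), (0.6, 1.0, 0.4)))   # ('touch', ...)
print(interact((0.5, 1.0, 0.30), (0.6, 1.0, 0.6)))   # ('point', ...)
```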
Simulating grasping behavior on an imaging interactive surface | | BIBAK | Full-Text | 125-132 | |
Andrew D. Wilson | |||
We present techniques and algorithms to simulate grasping behavior on an
imaging interactive surface (e.g., Microsoft Surface). In particular, we
describe a contour model of touch contact shape, and show how these contours
may be represented in a real-time physics simulation in a way that allows more
realistic grasping behavior. For example, a virtual object may be moved by
"squeezing" it with multiple contacts undergoing motion. The virtual object is
caused to move by simulated contact and friction forces. Previous work [14]
uses many small rigid bodies ("particle proxies") to approximate touch contact
shape. This paper presents a variation of the particle proxy approach which
allows grasping behavior. The advantages and disadvantages of this new approach
are discussed. Keywords: game physics engines, interactive surfaces |
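As a schematic of the particle-proxy idea the abstract builds on: the contact contour is sampled into many small kinematic circle bodies that a physics engine can collide against virtual objects. The engine itself is omitted below; the sketch only shows the contour-to-proxy step, and is not the paper's code.

```python
# A schematic of the particle-proxy step only (the physics engine itself is
# omitted, and this is not the paper's code): the touch contact's contour is
# sampled into many small circle bodies that a rigid-body engine can collide
# against virtual objects, producing contact and friction forces when the
# contour moves or squeezes.
import math

def contour_to_proxies(contour, radius=2.0):
    """contour: ordered (x, y) points around the contact shape.
    Returns (x, y, r) circles to register as kinematic proxy bodies."""
    return [(x, y, radius) for x, y in contour]

def circular_contact(cx, cy, r, n=16):
    # Example contour: an idealized, roughly circular fingertip contact.
    return [(cx + r * math.cos(2 * math.pi * i / n),
             cy + r * math.sin(2 * math.pi * i / n)) for i in range(n)]

proxies = contour_to_proxies(circular_contact(100.0, 100.0, 8.0))
print(len(proxies), proxies[0])
```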
Sticky tools: full 6DOF force-based interaction for multi-touch tables | | BIBA | Full-Text | 133-140 | |
Mark Hancock; Thomas ten Cate; Sheelagh Carpendale | |||
Tabletop computing techniques are using physically familiar force-based interactions to enable compelling interfaces that provide a feeling of being embodied with a virtual object. We introduce an interaction paradigm that has the benefits of force-based interaction complete with full 6DOF manipulation. Only multi-touch input, such as that provided by the Microsoft Surface and the SMART Table, is necessary to achieve this interaction freedom. This paradigm is realized through sticky tools: a combination of sticky fingers, a physically familiar technique for moving, spinning, and lifting virtual objects; opposable thumbs, a method for flipping objects over; and virtual tools, a method for propagating behaviour to other virtual objects in the scene. We show how sticky tools can introduce richer meaning to tabletop computing by drawing a parallel between sticky tools and the discussion in Urp [20] around the meaning of tangible devices in terms of nouns, verbs, reconfigurable tools, attributes, and pure objects. We then relate this discussion to other force-based interaction techniques by describing how a designer can introduce complexity in how people can control both physical and virtual objects, how physical objects can control both physical and virtual objects, and how virtual objects can control virtual objects. |
Navigation modes for combined table/screen 3D scene rendering | | BIBAK | Full-Text | 141-148 | |
Rami Ajaj; Frédéric Vernier; Christian Jacquemin | |||
This paper compares two navigation techniques for settings that combine a 2D
table-top view and a 3D large wall display, both rendering the same 3D virtual
scene. The two navigation techniques, called Camera Based (CB) and View Based
(VB), strongly rely on the spatial relationships between both displays. In the
CB technique, the 3D point of view displayed on the wall is controlled through
a draggable icon on the 2D table-top view. The VB technique presents the same
icon on the table-top view but statically located at the center and oriented
toward the physical wall display while the user pans and rotates the whole
scene around the icon. While CB offers a more consistent 2D view, VB reduces
the user's mental rotations required to understand the relations between both
views. We performed a comparative user study showing users' preference for the
VB technique, while performance on complex tasks was better with the CB
technique. Finally, we discuss other aspects of such navigation techniques, such
as the possibility of having more than one point of view, occlusion, and
multiple users. Keywords: interactive table-top, navigation, virtual reality |
Investigating multi-touch and pen gestures for diagram editing on interactive surfaces | | BIBAK | Full-Text | 149-156 | |
Mathias Frisch; Jens Heydekorn; Raimund Dachselt | |||
Creating and editing large graphs and node-link diagrams are crucial
activities in many application areas. For these activities, we consider
multi-touch and pen input on interactive surfaces to be very promising. This
fundamental work presents
a user study investigating how people edit node-link diagrams on an interactive
tabletop. The study covers a set of basic operations, such as creating, moving,
and deleting diagram elements. Participants were asked to perform spontaneous
gestures for 14 given tasks. They could interact in three different ways: using
one hand, both hands, as well as pen and hand together. The subjects'
activities were observed and recorded in various ways, analyzed and enriched
with think-aloud data. As a result, we contribute a user-elicited collection of
touch and pen gestures for editing node-link diagrams. The study provides
valuable insight into how people would interact on interactive surfaces for this as
well as other tabletop domains. Keywords: bimanual input, diagram editing, hand gestures, multi-touch, node-link
diagrams, pen interaction, tabletop |
The effects of changing projection geometry on the interpretation of 3D orientation on tabletops | | BIBA | Full-Text | 157-164 | |
Mark Hancock; Miguel Nacenta; Carl Gutwin; Sheelagh Carpendale | |||
Applications with 3D models are now becoming more common on tabletop displays. Displaying 3D objects on tables, however, presents problems in the way that the 3D virtual scene is presented on the 2D surface; different choices in the way the projection is designed can lead to distorted images and difficulty interpreting angles and orientations. To investigate these problems, we studied people's ability to judge object orientations under different projection conditions. We found that errors increased significantly as the center of projection diverged from the observer's viewpoint, showing that designers must take this divergence into consideration, particularly for multi-user tables. In addition, we found that a neutral center of projection combined with parallel projection geometry provided a reasonable compromise for multi-user situations. |
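The projection conditions compared in the study differ in one formula: a perspective projection depends on the center of projection (COP), while a parallel projection does not. A small sketch of both, with the table taken as the plane z = 0:

```python
# A small sketch of the two projection choices the study varies: perspective
# projection from a center of projection (COP) versus parallel projection
# onto the table plane z = 0. Only the perspective image depends on where
# the COP sits relative to the observer.

def perspective(point, cop):
    """Project a 3D point onto z = 0 along the ray from the COP through it."""
    px, py, pz = point
    cx, cy, cz = cop
    t = cz / (cz - pz)   # ray parameter where it crosses z = 0
    return (cx + t * (px - cx), cy + t * (py - cy))

def parallel(point):
    """Orthographic projection: drop the z coordinate."""
    return (point[0], point[1])

p = (10.0, 10.0, 5.0)
print(perspective(p, cop=(0.0, 0.0, 50.0)))   # shifts with the COP
print(parallel(p))                            # viewpoint-independent
```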
ShadowGuides: visualizations for in-situ learning of multi-touch and whole-hand gestures | | BIBAK | Full-Text | 165-172 | |
Dustin Freeman; Hrvoje Benko; Meredith Ringel Morris; Daniel Wigdor | |||
We present ShadowGuides, a system for in-situ learning of multi-touch and
whole-hand gestures on interactive surfaces. ShadowGuides provides on-demand
assistance to the user by combining visualizations of the user's current hand
posture as interpreted by the system (feedback) and available postures and
completion paths necessary to finish the gesture (feedforward). Our experiment
compared participants learning gestures with ShadowGuides to those learning
with video-based instruction. We found that participants learning with
ShadowGuides remembered more gestures and expressed significantly higher
preference for the help system. Keywords: displacement, gesture learning, marking menus, multi-finger |
Stacked Half-Pie menus: navigating nested menus on interactive tabletops | | BIBAK | Full-Text | 173-180 | |
Tobias Hesselmann; Stefan Flöring; Marwin Schmitt | |||
Hierarchical menus can be found in many of today's software applications.
However, these menus are often optimized for mouse or keyboard interaction and
their suitability for touch screen-based interactive tabletops is questionable.
On touch based interfaces, screen occlusion by the user, menu item size and the
usage of intuitive navigation paradigms are essential aspects that need to be
considered. In this paper we present our approach: "Stacked Half-Pie menus"
that allow visualization of an unlimited number of hierarchical menu items as
well as interactive navigation and selection of these items by touch. Our
evaluation shows fairly high usability for touchable half-pie menus, making
them an interesting alternative to other established menu types on interactive
tabletops. Keywords: data selection, hierarchical pie menus, interactive surfaces, tabletops |
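A half-pie layout can be computed from little more than two numbers per ring: the items share a 180-degree span, and each hierarchy level becomes a concentric ring. The sketch below is a guess at a plausible layout with illustrative radii, not the authors' implementation.

```python
# A guess at a plausible layout computation (radii and spans illustrative,
# not the authors' implementation): the menu occupies a 180-degree half-pie
# anchored at the table edge, each hierarchy level is a concentric ring
# stacked outward, and items on a ring receive equal angular slices.

def half_pie_slices(n_items, level, inner_r=60.0, ring_w=40.0):
    """Return (start_deg, end_deg, r_inner, r_outer) for each item."""
    span = 180.0 / n_items
    r0 = inner_r + level * ring_w          # deeper levels stack outward
    return [(i * span, (i + 1) * span, r0, r0 + ring_w)
            for i in range(n_items)]

for s in half_pie_slices(4, level=0):      # root level with four items
    print(s)
```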
WebSurface: an interface for co-located collaborative information gathering | | BIBAK | Full-Text | 181-188 | |
Philip Tuddenham; Ian Davies; Peter Robinson | |||
Co-located collaborative Web browsing is a relatively common task and yet is
poorly supported by conventional tools. Prior research in this area has focused
on adapting conventional browsing interfaces to add collaboration support. We
propose an alternative approach, drawing on ideas from tabletop interfaces. We
present WebSurface, a novel tabletop interface for collaborative Web browsing.
WebSurface explores two design challenges of this approach: providing
sufficient resolution for legible text; and navigating through information. We
report our early experiences with an exploratory user study, in which pairs of
collaborators gathered information using WebSurface. The findings suggest that
a tabletop approach for collaborative Web browsing can help address limitations
of conventional tools, and presents beneficial affordances for information
layout. Keywords: collaborative web browsing, tabletop |
Actions speak loudly with words: unpacking collaboration around the table | | BIBAK | Full-Text | 189-196 | |
Rowanne Fleck; Yvonne Rogers; Nicola Yuill; Paul Marshall; Amanda Carr; Jochen Rick; Victoria Bonnett | |||
The potential of tabletops to enable groups of people to simultaneously
touch and manipulate a shared tabletop interface provides new possibilities for
supporting collaborative learning. However, findings from the few studies
carried out to date have tended to show small or insignificant effects compared
with other technologies. We present the Collaborative Learning Mechanisms
framework used to examine the coupling of verbal interactions and physical
actions in collaboration around the tabletop and reveal subtle mechanisms at
play. Analysis in this way revealed that what might be considered undesirable
or harmful interactions and intrusions in general collaborative settings might
be beneficial for collaborative learning. We discuss the implications of these
findings for how tabletops may be used to support children's collaboration, and
the value of considering verbal and physical aspects of interaction together in
this way. Keywords: children, collaborative learning, framework, user studies |
Collaborative Puzzle Game: a tabletop interactive game for fostering collaboration in children with Autism Spectrum Disorders (ASD) | | BIBA | Full-Text | 197-204 | |
A. Battocchi; F. Pianesi; D. Tomasini; M. Zancanaro; G. Esposito; P. Venuti; A. Ben Sasson; E. Gal; P. L. Weiss | |||
We present the design and evaluation of the Collaborative Puzzle Game (CPG), a tabletop interactive activity developed for fostering collaboration in children with Autism Spectrum Disorder (ASD). The CPG was inspired by cardboard jigsaw puzzles and runs on the MERL DiamondTouch table [7]. Digital pieces can be manipulated by direct finger touch. The CPG features a set of interaction rules called Enforced Collaboration (EC): in order to be moved, puzzle pieces must be touched and dragged simultaneously by two players. Two studies were conducted to test whether EC has the potential to serve as an interaction paradigm that would help foster collaborative skills. In Study 1, 70 boys with typical development were tested, and in Study 2, 16 boys with ASD were tested. Results show that EC has a positive effect on collaboration, although it appears to be associated with a more complex interaction. For children with ASD, EC was also related to a higher number of "negotiation" moves, which may reflect their greater need for coordination during the collaborative activity. |
Trollskogen: a multitouch table top framework for enhancing communication amongst cognitively disabled children | | BIBA | Full-Text | D1 | |
Ru Zarin | |||
Trollskogen is a communicative framework designed to enhance communication among people with cognitive disabilities. The forest is split up into interactive modules that provide a fun and engaging learning environment while helping to improve certain aspects of speech, reading/writing, and symbol-based languages. This framework has been deployed on a custom multi-touch table prototype built at the Interactive Institute Umeå, enabling the children to interact with their fingers in a more natural, intuitive way than with a traditional keyboard/mouse setup. |
The Haptic Tabletop Puck: the video | | BIBA | Full-Text | D2 | |
Nicolai Marquardt; Miguel A. Nacenta; James E. Young; Sheelagh Carpendale; Saul Greenberg; Ehud Sharlin | |||
In everyday life, our interactions with objects on real tables include how our fingertips feel those objects. In comparison, current digital interactive tables present a uniform touch surface that feels the same, regardless of what it presents visually. In this video, we demonstrate how tactile interaction can be used with digital tabletop surfaces. We present a simple and inexpensive device -- the Haptic Tabletop Puck -- that incorporates dynamic, interactive haptics into tabletop interaction. We created several applications that explore tactile feedback in the area of haptic information visualization, haptic graphical interfaces, and computer supported collaboration. In particular, we focus on how a person may interact with the friction, height, texture and malleability of digital objects. |
Getting practical with interactive tabletop displays: designing for dense data, "fat fingers," diverse interactions, and face-to-face collaboration | | BIBA | Full-Text | D3 | |
Stephen Voida; Matthew Tobiasz; Julie Stromer; Petra Isenberg; Sheelagh Carpendale | |||
Tabletop displays with touch-based input provide many powerful affordances for directly manipulating and collaborating around information visualizations. However, these devices also introduce several challenges for interaction designers, including discrepancies among the resolutions of the visualization, the tabletop's display, and its sensing technologies; a need to support diverse types of interactions required by different visualization techniques; and the ability to support face-to-face collaboration. As a result, most interactive tabletop applications for working with information currently demonstrate limited functionality and do not approach the power or versatility of their desktop counterparts. We present two specific techniques, i-Loupe and iPodLoupe, which illustrate how different design choices for addressing these challenges enable vastly different experiences in working with complex data on interactive surfaces. |
RealTimeChess: a real-time strategy and multiplayer game for tabletop displays | | BIBA | Full-Text | D4 | |
Jonathan Chaboissier; Frédéric Vernier | |||
RealTimeChess (RTC) is a real-time strategy and multiplayer game designed for tabletop displays. It is based on the pieces of chess but enables two to four players to play all at the same time. The speed of the game can be adjusted, and it is not necessary to know the strategies of chess in order to play RTC. There are several game modes, including a tutorial for beginners. Finally, RTC seamlessly combines direct and remote interaction techniques. |
Curator: a design environment for curating tabletop museum experiences | | BIBA | Full-Text | D5 | |
Benjamin Sprengart; Anthony Collins; Judy Kay | |||
Interactive tabletops show great potential to be used in learning contexts, particularly in museums, as a way for people to collaboratively learn and explore rich sets of digital information. However, it is a real challenge for exhibition designers, or Curators, to create exhibitions for tabletop displays, as it is tedious to create these data-sets manually. Curator is a cross-platform tool that can be used by non-technical designers and museum staff to construct rich information collections for exploration on our interactive tabletop. After the data-set has been constructed using Curator on a desktop computer, this information can be tested and displayed on the tabletop immediately, providing an engaging, collaborative experience for exploration and learning. |
Seamless interaction between "creation" and "appreciation": multi-touch drawing interface | | BIBA | Full-Text | D6 | |
Daisuke Funato; Satoshi Shibuya; Ayumi Kizuka; Ken-ichi Kimura; Rina Naganuma | |||
This study proposes a system that seamlessly supports "creation" and "appreciation" using a multi-touch interface. The system provides an interface in which drawings can be expressed through a variety of physical operations and personal objects, achieving a diversity of drawing actions that is difficult with a mouse or pen tablet. Moreover, it offers a creative environment in which users can appreciate both their own works and those of others, through a function that immediately posts each user's picture to an online gallery. |
PaperLens: advanced magic lens interaction above the tabletop | | BIBA | Full-Text | D7 | |
Martin Spindler; Raimund Dachselt | |||
To solve the challenge of exploring large information spaces on interactive surfaces such as tabletops, we developed an optically tracked, lightweight, passive display (magic lens) that provides elegant three-dimensional exploration of rich datasets. These can be volumetric, layered, zoomable, or temporal information spaces, which are mapped onto the physical volume above a tabletop. By moving the magic lens through the volume, the corresponding data is displayed, the lens thus serving as a window into virtuality. Various interaction techniques are introduced that utilize the lens's height in a novel way, e.g. for zooming or displaying different information layers. |
PAC-PAC: pinching gesture recognition for augmented tabletop video game | | BIBA | Full-Text | D8 | |
Toshiki Sato; Haruko Mamiya; Kentaro Fukuchi; Hideki Koike | |||
A novel tabletop entertainment system that allows simultaneous interactions by multiple participants was developed. The newly developed interaction technique of this system recognizes a pinching gesture performed with the thumb and forefinger. This gesture recognition technique enables rapid response and high degree-of-freedom input for the players. |
MeTaTop: a multi-sensory and multi-user interface for collaborative analysis | | BIBA | Full-Text | D9 | |
Christoph Bichlmeier; Sandro-Michael Heining; Latifa Omary; Philipp Stefan; Ben Ockert; Ekkehard Euler; Nassir Navab | |||
The video demonstrates the potential of integrating TableTop systems into medical environments to support the clinical workflow. Our group investigates their application for collaboratively reviewing patient data, such as medical imaging data, for diagnostics and the preparation and planning of surgical procedures. Usually, a team of clinicians dealing with a particular case reviews available patient data on a computer monitor. Browsing through stacks of slices reconstructed from volumetric imaging data is performed by only one person with classical interfaces such as keyboard or mouse. Other members of the team passively examine the presented imagery by looking over the main user's shoulder. In order to enhance the collaborative aspect of analyzing patient data, we suggest providing every participant with the ability to contribute more actively. For this reason, we designed and developed a multi-touch TableTop display system to support team-oriented discussion and decision making based on intuitively, interactively, and effectively presented patient data. |
3D multitouch advanced interaction techniques and applications | | BIBA | Full-Text | D10 | |
Christophe Bortolaso; Emmanuel Dubois; Nicolas Dittlo; Jean-Baptiste de la Rivière | |||
The Cubtile is a 3D multitouch device composed of five multitouch surfaces. It allows the use of classical multitouch gestures in 3D and thereby eases the manipulation of 3D scenes: it provides more direct ways to handle complex 3D operations such as applying arbitrary rotations. The video illustrates several of the advanced gestures that the Cubtile supports, and demonstrates the integration of this device in an actual museum application: it allows visitors, even non-experts, to manipulate with great efficiency a 3D environment used to teach the basics of species classification. |
TAP: visual analytics on surface computers | | BIBA | Full-Text | D11 | |
Stefan Flöring; Tobias Hesselmann | |||
In this demo we present TaP, a gesture-driven visual analytics application for the exploration of vast amounts of multidimensional data. Using multitouch gestures, users can intuitively interact with the system to look at data from different perspectives, modify visualisations in different ways, and ultimately unearth insights hidden inside the data. Using interactive tabletops, the system also enables multiple users to collaboratively analyse data in separate charts on the screen. The application also integrates stacked half-pie menus, a new approach for navigating deeply nested hierarchical data structures, specifically designed for use on interactive tabletops. |
TouchTones: multi-user collaborative music composer | | BIBA | Full-Text | D12 | |
Darren David; Lee Granas; Jules Konig; Nathan Moody; Joshua Santangelo | |||
TouchTones lets up to four people create music collaboratively on Microsoft Surface. You don't need to know anything about music to make something that sounds beautiful. Start an instrument playing by touching a colored spinner, change the arrow directions on the grid to change the melody, and that's about it! TouchTones provides an immediate and enjoyable musical experience for any small group. TouchTones can be learned with only a few seconds of exploration or by viewing its integrated help video. From there, additional features emerge through play. Create tricky melody paths through the note grid, or use multiple fingers and play TouchTones like a keyboard. Tested with users from age 4 to age 60, TouchTones opens up either minutes or hours of enjoyment, for as few as one user or even a whole family. TouchTones is a collaborative, multi-touch, multi-user, grid-based music sequencer that is being released as freeware for Microsoft Surface. It has four instruments distributed across four octaves, all playing to a master tempo. Sounds can be triggered by user-controlled animated "sprites" or by simply pressing a colored button and pressing one of the icons on the grid at the same time. The patterns on the grid produce melody, and anyone can alter the melody, even while it's playing. Volume and reset controls help to round out the simple and wholly visual user interface. While TouchTones comes with a clean, modern design and a set of pleasant sounds, it has been designed to be reskinnable. Both the sounds and visuals can be completely customized to match any brand, mood, or theme. |
FiberBoard: compact multi-touch display using channeled light | | BIBA | Full-Text | D13 | |
Daniel Jackson; Tom Bartindale; Patrick Olivier | |||
Multi-touch displays based on infrared (IR) light offer many advantages over alternative technologies. Existing IR multi-touch devices either use complex custom electronic sensor arrays, or a camera that must be placed relatively distant from the display. FiberBoard is an easily constructed compact IR-sensing multi-touch display. Using an array of optical fibers, reflected IR light is channeled to a camera. As the fibers are flexible the camera is free to be positioned so as to minimize the depth of the device. The resulting display is around one tenth of the depth of a conventional camera-based multi-touch display. We present our prototype, its novel calibration process, and virtual camera software based on existing multi-touch image processing tools. |
A multi-touch tabletop interface for applying collaborative creativity techniques | | BIBA | Full-Text | D14 | |
Marc René Frieß; Martin Kleinhans; Florian Echtler; Florian Forster; Georg Groh | |||
This demo video presents a collaborative multi-touch tabletop interface for a tool that supports idea generation by providing a generic architectural model for collaborative creativity techniques. The tool also provides situated support by allowing users to select among different interaction paradigms adapted to the interaction situation. For co-located settings in which communication and coordination are important, a tabletop interface is a promising form of IT support. |
Nori Scrum meeting table | | BIBA | Full-Text | D15 | |
Henning Voss; Georg Schneider | |||
Scrum is a process model commonly used for agile software development. Scrum is based on several meetings where teams of developers meet to monitor and plan a software development process together. These meetings, which are an integral part of Scrum, are usually held without the support of software or digital media. We developed an interactive, multitouch-enabled meeting table, which can be used by teams during the whole development process. The multitouch application guides development teams through meetings, and teams can plan and track their work directly and together. By supporting development teams with our interactive meeting table, their efficiency in a Scrum software development process can be increased. |