
Proceedings of the 2012 ACM International Conference on Interactive Tabletops and Surfaces

Fullname: Proceedings of the 2012 ACM International Conference on Interactive Tabletops and Surfaces
Editors: Orit Shaer; Chia Shen; Meredith Ringel Morris; Michael Horn
Location: Cambridge, Massachusetts
Dates: 2012-Nov-11 to 2012-Nov-14
Publisher: ACM
Standard No: ISBN 978-1-4503-1209-7; hcibib: ITS12
Papers: 63
Pages: 410
Summary: We are pleased to welcome you to the ACM International Conference on Interactive Tabletops and Surfaces (ITS) 2012, held November 11-14 in Cambridge, Massachusetts. ITS 2012 marks the 7th anniversary of a series of annual research workshops and conferences that began in 2006 in Adelaide, Australia. In this short time, the field of Interactive Tabletops and Surfaces has rapidly evolved. Hardware prices are falling, core sensing technology is improving, software toolkits are maturing, and interaction paradigms are beginning to solidify. As a result, interactive tabletops and surfaces are increasingly making their way beyond laboratories and appearing in a wide variety of private and public spaces including museums, offices, schools, airports, hotels, medical centers, and homes. Such surfaces employ a variety of technologies, including capacitive sensing, computer vision, and motion sensors, and appear in a variety of form factors such as large walls and tabletops, tablets and slates, flexible materials, and on-demand projection.
    Sponsored by ACM and generously supported by the National Science Foundation and industry, the ACM conference on Interactive Tabletops and Surfaces brings together researchers and practitioners from a variety of backgrounds and interest areas. The intimate size of this single-track symposium provides an ideal venue for leading researchers, practitioners, and students to exchange research results and experiences.
    This year we received 103 paper and note submissions (63 papers and 40 notes), of which we accepted 24 papers and 6 notes, for an overall acceptance rate of 29%. We also introduced a new review process modeled after the 2012 ACM CSCW conference, in which papers and notes underwent two review cycles. After the first review by two external reviewers and one program committee member, submissions received either a "Revise" or "Reject" decision. Authors of papers in the "Revise" pool then participated in a second phase of the submission process, in which they had 3½ weeks to revise and resubmit their work based on the initial reviews (and also received an additional meta-review from a second program committee member). Revised submissions were then re-reviewed as the basis for final decisions. This is similar to a journal review process, except that it is limited to one revision with a strict deadline. We believe that this process has resulted in an extremely high-quality papers and notes program, which we hope you find enlightening and exciting!
    In addition to the papers and notes program, the conference hosts several other programs. The Doctoral Symposium (DS) accepted seven doctoral students to discuss their research with peers and with a panel of experienced ITS researchers in an informal setting. The conference also includes an interactive hands-on reception that presents a juried selection of 21 posters and 18 demos. This year, ITS brings the theme of Beyond Flat Displays to our pre-conference venues: four tutorials that offer novel practical experiences to conference attendees with diverse skills and backgrounds, and a pre-conference workshop that will challenge attendees to explore the realm Beyond Flat Displays. The National Science Foundation extends generous support for DS attendees and for student travel awards; this support will result in more opportunities for students to engage in this diverse and growing community.
    The titles of the papers, notes, posters, and demos reflect the diversity of ideas and perspectives that comprise the ITS community, ranging from large tabletop surfaces to non-flat displays, from pen and paper to free-space interaction, from education to surfaces "in the wild". What brings it all together is a commitment to innovation that expands our understanding of the design considerations of ITS technologies and of their applications.
  1. Interacting in 3D
  2. Multiple displays and devices
  3. Surfaces in the wild
  4. Off the wall: free-space interactions with TVs and projected displays
  5. Surfaces in education
  6. Pens and paper
  7. Touching with precision
  8. Interaction techniques and widgets
  9. Understanding users
  10. Interacting with information using surfaces
  11. Doctoral symposium
  12. Demo session
  13. Posters

Interacting in 3D

Comparing elicited gestures to designer-created gestures for selection above a multitouch surface (pp. 1-10)
  Dmitry Pyryeskin; Mark Hancock; Jesse Hoey
Many new technologies are emerging that make it possible to extend interaction into the three-dimensional space directly above or in front of a multitouch surface. Such techniques allow people to control these devices by performing hand gestures in the air. In this paper, we present a method of extending interactions into the space above a multitouch surface using only a standard diffused surface illumination (DSI) device, without any additional sensors. Then we focus on interaction techniques for activating graphical widgets located in this above-surface space. We have conducted a study to elicit gestures for above-table widget activation. A follow-up study was conducted to evaluate and compare these gestures based on their performance. Our results showed that there was no clear agreement on what gestures should be used to select objects in mid-air, and that performance was better when using gestures that were chosen less frequently, but predicted to be better by the designers, as opposed to those most frequently suggested by participants.
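(What follows is not from the paper: purely as a loose, hypothetical sketch of how hover sensing with an unmodified DSI device can work, finger height above the surface can be inferred from blob brightness in the IR image, since a hovering finger reflects less of the diffused illumination. All constants are invented placeholders.)

```python
import numpy as np

def estimate_hover_height(ir_image: np.ndarray, blob_mask: np.ndarray) -> float:
    """Hypothetical DSI hover estimate: blob brightness falls off with
    finger height, so a per-device calibration curve can map mean
    brightness to height. The linear curve and all constants below are
    illustrative placeholders, not the authors' method."""
    brightness = float(ir_image[blob_mask > 0].mean())  # mean IR level, 0-255
    touch_level, cutoff_level = 200.0, 40.0             # calibration placeholders
    frac = (brightness - cutoff_level) / (touch_level - cutoff_level)
    frac = min(max(frac, 0.0), 1.0)                     # clamp to [0, 1]
    return (1.0 - frac) * 100.0  # height above surface, placeholder mm scale
```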
Direct manipulation and the third dimension: co-planar dragging on 3D displays (pp. 11-20)
  Max Möllers; Patrick Zimmer; Jan Borchers
Recent advances in touch and display technologies are supporting widespread use of touch-based direct manipulation techniques as well as 3D displays that give a perspectively correct view. Both techniques have consistency constraints, including the following: with direct manipulation, a dragged object should stick to the fingertip; with viewer-centered projection, head movement should update the scene's projection to preserve a sound 3D impression, e.g., leaning around a house should reveal its backyard. Unfortunately, these two contradict each other, making a combination, e.g., moving the head while touching or dragging an object, non-trivial. We introduce a design space of perspectively adjusted methods for direct manipulation to cope with this limitation, select nine different strategies from it, and evaluate six of them in depth. Participants dragged a box through a 3D maze with multiple, partially occluded levels. We identified one method to be among the fastest while yielding up to 32% fewer collisions than the other fast methods.
Evaluation of depth perception for touch interaction with stereoscopically rendered objects (pp. 21-30)
  Dimitar Valkov; Alexander Giesler; Klaus Hinrichs
Recent developments in the domain of Human-Computer Interaction have suggested combining stereoscopic visualization with touch interaction. Although this combination has the potential to provide more intuitive and natural interaction setups for a wide range of applications, until now interaction with such systems has mainly been constrained to simple navigation, whereas manipulation of the stereoscopically displayed objects is supported only rudimentarily.
   In this paper we investigate a user's ability to discriminate the depth or depth motion of stereoscopically rendered objects while performing touch gestures, and discuss implications for object selection and manipulation. Our results show that there is a usable range of imperceptible manipulation, which -- if properly applied -- could support interaction with objects floating in the vicinity of the display surface without noticeable impact on a user's visual or touch performance.

Multiple displays and devices

Gradual engagement: facilitating information exchange between digital devices as a function of proximity (pp. 31-40)
  Nicolai Marquardt; Till Ballendat; Sebastian Boring; Saul Greenberg; Ken Hinckley
The increasing number of digital devices in our environment enriches how we interact with digital content. Yet, cross-device information transfer -- which should be a common operation -- is surprisingly difficult. One has to know which devices can communicate, what information they contain, and how information can be exchanged. To mitigate this problem, we formulate the gradual engagement design pattern that generalizes prior work in proxemic interactions and informs future system designs. The pattern describes how we can design device interfaces to gradually engage the user by disclosing connectivity and information exchange capabilities as a function of inter-device proximity. These capabilities flow across three stages: (1) awareness of device presence/connectivity, (2) reveal of exchangeable content, and (3) interaction methods for transferring content between devices tuned to particular distances and device capabilities. We illustrate how we can apply this pattern to design, and show how existing and novel interaction techniques for cross-device transfers can be integrated to flow across its various stages. We explore how techniques differ between personal and semi-public devices, and how the pattern supports interaction of multiple users.
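(A minimal sketch, not from the paper, of the pattern's core idea: inter-device distance selects which of the three engagement stages a device's interface exposes. The stage names follow the abstract; the distance thresholds are invented for illustration.)

```python
from enum import Enum

class Stage(Enum):
    """The three stages of the gradual engagement design pattern."""
    AWARENESS = 1  # show device presence/connectivity
    REVEAL = 2     # reveal exchangeable content
    TRANSFER = 3   # enable cross-device content transfer

def engagement_stage(distance_m: float) -> Stage:
    """Map inter-device proximity to an engagement stage. The 3.0 m and
    1.0 m thresholds are illustrative; the pattern tunes stages to
    particular distances and device capabilities."""
    if distance_m > 3.0:
        return Stage.AWARENESS
    if distance_m > 1.0:
        return Stage.REVEAL
    return Stage.TRANSFER
```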
Eliciting usable gestures for multi-display environments (pp. 41-50)
  Teddy Seyed; Chris Burns; Mario Costa Sousa; Frank Maurer; Anthony Tang
Multi-display environments (MDEs) have advanced rapidly in recent years, incorporating multi-touch tabletops, tablets, wall displays and even position tracking systems. Designers have proposed a variety of interesting gestures for use in an MDE, some of which involve a user moving their hands, arms, body or even a device itself. These gestures are often used as part of interactions to move data between the various components of an MDE, which is a longstanding research problem. But designers, not users, have created most of these gestures, and concerns over implementation issues such as recognition may have influenced their design. We performed a user study to elicit these gestures directly from users, but found a low level of convergence among the gestures produced. This lack of agreement is important, and we discuss its possible causes and the implications it has for designers. To assist designers, we present the most prevalent gestures and some of the underlying conceptual themes behind them. We also provide analysis of how certain factors such as distance and device type impact the choice of gestures and discuss how to apply them to real-world systems.
MobiSurf: improving co-located collaboration through integrating mobile devices and interactive surfaces (pp. 51-60)
  Julian Seifert; Adalberto Simeone; Dominik Schmidt; Paul Holleis; Christian Reinartz; Matthias Wagner; Hans Gellersen; Enrico Rukzio
One of the most popular scenarios for advertising interactive surfaces in the home is their support for solving co-located collaborative tasks. Examples include joint planning of events (e.g., holidays) or deciding on a shared purchase (e.g., a present for a common friend). However, this usually implies that all interactions with information happen on the common display. This contrasts with the current practice of using personal devices and, further, with most people's tendency to constantly switch between individual and group phases because people have differing search strategies, preferences, etc. We therefore investigated how the combination of personal devices and a simple way of exchanging information between these devices and an interactive surface changes the way people solve collaborative tasks, compared to an existing approach of using personal devices. Our study results clearly indicate that the combination of personal and shared devices allows users to fluently switch between individual and group work phases, taking advantage of both device classes.

Surfaces in the wild

Tabletop games for photo consumption at theme parks (pp. 61-70)
  Edward Anstead; Abigail Durrant; Steve Benford; David Kirk
This paper broadly explores novel tabletop interaction design opportunities for photo-souvenir consumption in a theme park context. We present the design and user evaluation of two tabletop applications for the playful triaging of photo collections within groups from a day trip to a UK theme park. Combining triaging with gameplay, the designs explore two distinct styles of user interaction, requiring either speed and dexterity or thoughtful strategy. Herein we discuss the rationale for the design process and the findings generated from our evaluation. Our study reveals the social impact of gameplay on user engagement with triaging tasks and implications for the deployment of interactive tabletop interfaces within theme parks to support photo consumption as part of the park experience.
Investigating menu discoverability on a digital tabletop in a public setting (pp. 71-80)
  Mindy Seto; Stacey Scott; Mark Hancock
A common challenge to the design of digital tabletops for public settings is how to effectively invite and guide passersby -- who often have no prior experience with such technology -- to interact using unfamiliar interaction methods and interfaces. We characterize such enticement from the system interface as the system's discoverability. A particular challenge to modern surface interfaces is the discoverability of system functionality: Does the system require gestures? Are there system menus? If so, how are they invoked? This research focuses on the discoverability of system menus on digital tabletops designed for public settings. An observational study of menu invocation methods in a museum setting is reported. Study findings suggest that discernible and recognizable interface elements, such as buttons, supported by the use of animation, can effectively attract and guide the discovery of menus. Design recommendations for improving menu discoverability are also presented.
Re-collision: a collision reconstruction forensics tabletop interface (pp. 81-84)
  Marcel Tozser; Nicole Sultanum; Ehud Sharlin; Ken Rutherford; Colin Foster
In this paper we present the design, implementation and preliminary evaluation of Re-Collision, a prototype collision reconstruction tabletop interface. Re-Collision was developed through a participatory design process involving expert users from the Calgary Police Forensics Team and Collision Reconstruction Team. We briefly cover fundamental domain characteristics emerging from interview sessions and explain how these informed the design of Re-Collision. The paper details our current prototype implementation and discusses results of a design critique conducted with domain experts using our system, helping us assess the potential of tabletop interfaces as aids in the process of collision reconstruction, as well as delineate and discuss relevant design implications.

Off the wall: free-space interactions with TVs and projected displays

Investigating mid-air pointing interaction for projector phones (pp. 85-94)
  Christian Winkler; Ken Pfeuffer; Enrico Rukzio
Projector phones, mobile phones with built-in projectors, might significantly change the way we will use and interact with mobile phones. The potential of combining the mobile and the projected display, and further the potential of the mid-air space between them, have yet to be explored. In this paper we assess these potentials by reporting two user studies: first, an experimental comparison of four techniques for target selection on the projection, including interaction on the touchscreen of the projector phone as well as performing pointing gestures in mid-air around the phone. Our results indicate that interacting behind the phone yields the highest performance, albeit with twice the error rate. Second, a follow-up experiment where we analyzed the performance of the two best techniques of the first study within realistic mobile application scenarios such as browsing and gaming. The results show that mobile applications benefit from the projection, e.g., by overcoming the fat-finger problem on touchscreens and increasing the visibility of small objects. Our findings speak for the integration of a tracking camera at the bottom of the projector phone to enable mid-air pointing interaction.
Web on the wall: insights from a multimodal interaction elicitation study (pp. 95-104)
  Meredith Ringel Morris
New sensing technologies like Microsoft's Kinect provide a low-cost way to add interactivity to large display surfaces, such as TVs. In this paper, we interview 25 participants to learn about scenarios in which they would like to use a web browser on their living room TV. We then conduct an interaction-elicitation study in which users suggested speech and gesture interactions for fifteen common web browser functions. We present the most popular suggested interactions, and supplement these findings with observational analyses of common gesture and speech conventions adopted by our participants. We also reflect on the design of multimodal, multi-user interaction-elicitation studies, and introduce new metrics for interpreting user-elicitation study findings.
Kinected browser: depth camera interaction for the web (pp. 105-108)
  Daniel Liebling; Meredith Ringel Morris
Interest in and development of gesture interfaces has recently exploded, fueled in part by the release of Microsoft Corporation's Kinect, a low-cost, consumer-packaged depth camera with integrated skeleton tracking. Depth-camera-based gestures can facilitate interaction with the Web on keyboard-and-mouse-free and/or multi-user technologies, such as large display walls or TV sets. We present a toolkit for bringing such gesture affordances into modern Web browsers using existing Web programming methods. Our framework is designed to enable Web programmers to incrementally add this capability with minimum effort by leveraging Web standard DOM structures and event models. We describe our framework's design and architecture, and illustrate its usability and versatility.

Surfaces in education

A collaborative environment for engaging novices in scientific inquiry (pp. 109-118)
  Consuelo Valdes; Michelle Ferreirae; Taili Feng; Heidi Wang; Kelsey Tempel; Sirui Liu; Orit Shaer
We describe the design, implementation, and evaluation of GreenTouch, a collaborative environment that enables novice users to engage in authentic scientific inquiry. GreenTouch consists of a mobile user interface for capturing data in the field, a web application for data curation in the cloud, and a tabletop interface for exploratory analysis of heterogeneous data. This paper contributes: 1) the design, implementation, and validation of a collaborative environment which allows novices to engage in scientific data capture, curation, and analysis; 2) empirical evidence for the feasibility and value of integrating interactive surfaces in college-level education based on an in situ study with 54 undergraduate students; and 3) insights collected through iterative design, providing concrete lessons and guidelines for designing multi-touch interfaces for collaborative inquiry of complex domains.
Orchestrating a multi-tabletop classroom: from activity design to enactment and reflection (pp. 119-128)
  Roberto Martinez Maldonado; Yannis Dimitriadis; Judy Kay; Kalina Yacef; Marie-Theresa Edbauer
If multi-tabletop classrooms were available in each school, how would teachers plan and enact their activities to enhance learning and collaboration? How could they evaluate how the activities actually went compared with the plan? Teachers' effectiveness in orchestrating the classroom has a direct impact on students' learning. Interactive tabletops offer the potential to support teachers by enhancing their awareness and classroom control. This paper describes our mechanisms to help a teacher orchestrate a classroom activity using multiple interactive tabletops. We analyse automatically captured interaction data to assess whether the activity design, as intended by the teacher, was actually followed during its enactment. We report on an authentic classroom study embedded in the curriculum of an undergraduate Management unit, involving 236 students across 14 sessions. The main contribution of the paper is an approach for designing a multi-tabletop classroom that can help teachers plan their learning activities and provide data for assessment and reflection on the enactment of a series of classroom sessions.
Combinatorix: a tangible user interface that supports collaborative learning of probabilities (pp. 129-132)
  Bertrand Schneider; Paulo Blikstein; Wendy Mackay
Teaching abstract concepts is notoriously difficult, especially when we lack concrete metaphors that map to those abstractions. Combinatorix offers a novel approach that combines tangible objects with an interactive tabletop to help students explore, solve and understand probability problems. Students rearrange physical tokens to see the effects of various constraints on the problem space; a second screen displays the associated changes in an abstract representation, e.g., a probability tree. Using participatory design, college students in a combinatorics class helped iteratively refine the Combinatorix prototype, which was then tested successfully with five students. Combinatorix serves as an initial proof-of-concept that demonstrates how tangible tabletop interfaces that map tangible objects to abstract concepts can improve problem-solving skills.

Pens and paper

Tangible paper interfaces: interpreting pupils' manipulations (pp. 133-142)
  Quentin Bonnard; Patrick Jermann; Amanda Legge; Frédéric Kaplan; Pierre Dillenbourg
Paper interfaces merge the advantages of the digital and physical world. They can be created using normal paper augmented by a camera+projector system. They are particularly promising for applications in education, because paper is already fully integrated in the classroom, and computers can augment them with a dynamic display. However, people mostly use paper as a document, and rarely for its characteristics as a physical body. In this article, we show how the tangible nature of paper can be used to extract information about the learning activity. We present an augmented reality activity for pupils in primary schools to explore the classification of quadrilaterals based on sheets, cards, and cardboard shapes. We present a preliminary study and an in-situ, controlled study, making use of this activity. From the detected positions of the various interface elements, we show how to extract indicators about problem solving, hesitation, difficulty levels of the exercises, and the division of labor among the groups of pupils. Finally, we discuss how such indicators can be used, and how other interfaces can be designed to extract different indicators.
Empirical evaluation of uni- and bimodal pen and touch interaction properties on digital tabletops (pp. 143-152)
  Fabrice Matulic; Moira Norrie
Combined bimanual pen and touch input on digital tabletops is an appealing interaction paradigm enjoying growing popularity among many HCI researchers. Due to its relative novelty, its properties are still relatively unexplored and many hypotheses emerging from intuition and extrapolations from studies about touch and other pointing devices remain to be verified. We present an empirical evaluation consisting of three experiments aimed at investigating a few important issues of pen and touch interaction on horizontal surfaces. Specifically, we examine the compromise between speed and accuracy for the two input modalities in positioning and tracing contexts, the influence of palm-resting on pen precision and bimanual coordination for pen mode-switching via postures. We report on quantitative and qualitative results obtained from these trials and discuss their potential impact on the design of pen and touch systems.
Hand-rewriting: automatic rewriting similar to natural handwriting (pp. 153-162)
  Tomoko Hashida; Kohei Nishimura; Takeshi Naemura
We have developed a hybrid writing and erasure system called Hand-rewriting in which both human users and computer systems can write and erase freely on the same piece of paper. When the user writes on a piece of paper with a pen, for example, the computer system can erase what is written on the paper, and additional content can be written on the paper in natural print-like colors. We achieved this hybrid writing and erasure on paper by localized heating combined with handwriting with thermochromic ink and localized ultraviolet-light exposure on paper coated with photochromic material. This paper describes our research motivation, design, and implementation of this interface and examples of applications.

Touching with precision

Towards the keyboard of oz: learning individual soft-keyboard models from raw optical sensor data (pp. 163-172)
  Jörg Edelmann; Philipp Mock; Andreas Schilling; Peter Gerjets; Wolfgang Rosenstiel; Wolfgang Straßer
Typing on a touchscreen display usually lacks haptic feedback, which is crucial for maintaining finger-to-key assignment, especially for touch typists who are not looking at their keyboard. This makes typing substantially more error-prone on these devices. We present a soft keyboard model which we developed from typing data collected from users with diverging typing behavior. For data acquisition, we used a simulated perfect classifier we refer to as The Keyboard of Oz. To approximate this classifier, we used the complete sensor data of each keystroke and applied supervised machine learning techniques to learn and evaluate an individual keyboard model. The model not only accounts for individual keystroke distributions but also incorporates a classifier based on the images obtained from an optical touch sensor. The resulting highly individual classifier has remarkable classification accuracy. Additionally, we present an approach to compensate for hand drift during typing using a Kalman filter. We show that this filter performs significantly better with the keyboard model that takes raw sensor data into account.
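(The paper's filter operates within its learned keyboard model; purely as a sketch of the drift-compensation idea, a scalar constant-position Kalman filter shows how slow hand drift can be tracked from noisy per-keystroke offsets. The noise parameters are illustrative.)

```python
class DriftFilter:
    """1D Kalman filter for slow hand drift: each keystroke yields a
    noisy observation of the hand's offset from the nominal key
    centers, and the filtered offset shifts the key targets."""

    def __init__(self, q: float = 1e-4, r: float = 25.0):
        self.x = 0.0  # estimated drift offset (e.g., pixels)
        self.p = 1.0  # estimate variance
        self.q = q    # process noise: how fast drift can change
        self.r = r    # measurement noise: per-keystroke scatter

    def update(self, observed_offset: float) -> float:
        self.p += self.q                # predict: drift may have grown
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (observed_offset - self.x)
        self.p *= 1.0 - k
        return self.x                   # shift key targets by this amount
```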
Finger and hand detection for multi-touch interfaces based on maximally stable extremal regions (pp. 173-182)
  Philipp Ewerling; Alexander Kulik; Bernd Froehlich
We propose a new approach for touch detection on optical multi-touch devices that exploits the fact that the camera images reveal not only the actual touch points, but also objects above the screen such as the hand or arm of a user. Our touch processing relies on the Maximally Stable Extremal Regions algorithm for finding the users' fingertips in the camera image. The hierarchical structure of the generated extremal regions serves as a starting point for agglomerative clustering of the fingertips into hands. Furthermore, we suggest a heuristic supporting the identification of individual fingers as well as the distinction between left hands and right hands if all five fingers of a hand are in contact with the touch surface.
   Our evaluation confirmed that the system is robust against detection errors resulting from non-uniform illumination and reliably assigns touch points to individual hands based on the implicitly tracked context information. The efficient multithreaded implementation handles two-handed input from multiple users in real-time.
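(A minimal sketch of the two detection stages, assuming OpenCV's MSER implementation and SciPy single-linkage clustering; the area bounds and the 150 px hand radius are invented parameters, not the paper's.)

```python
import cv2
import numpy as np
from scipy.cluster.hierarchy import fclusterdata

def detect_touches(gray: np.ndarray):
    """Find fingertip candidates as maximally stable extremal regions in
    the camera image, then group them into hands by agglomerative
    (single-linkage) clustering of their centroids."""
    mser = cv2.MSER_create()
    mser.setMinArea(30)   # ignore noise specks
    mser.setMaxArea(800)  # keep fingertip-sized blobs, not palms or arms
    regions, _ = mser.detectRegions(gray)
    tips = np.array([r.mean(axis=0) for r in regions])  # region centroids
    if len(tips) < 2:
        return tips, np.ones(len(tips), dtype=int)
    # centroids closer than ~150 px are assumed to belong to one hand
    hands = fclusterdata(tips, t=150.0, criterion='distance', method='single')
    return tips, hands
```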
Measuring the linear and rotational user precision in touch pointing (pp. 183-192)
  François Bérard; Amélie Rochet-Capellan
This paper addresses the limit of user precision in pointing to a target when the finger is already in contact with a touch surface. User precision was measured for linear and rotational pointing. We developed a novel experimental protocol that improves the estimation of users' precision compared with previous protocols. Our protocol depends on high-resolution measurements of finger motions, achieved by means of two optical finger trackers specially developed for this study. The trackers provide stable and precise measurements of finger translations and rotations. We used them in two user experiments, which revealed that (a) users' precision for linear pointing is about 150 dpi, or 0.17 mm, and (b) in rotational pointing, users can reliably point at sectors as narrow as 2.76 degrees within 2 s. Our results provide new information for the optimization of interactions and sensing devices that involve finger pointing on a surface.

Interaction techniques and widgets

SnapRail: a tabletop user interface widget for addressing occlusion by physical objects (pp. 193-196)
  Genki Furumi; Daisuke Sakamoto; Takeo Igarashi
The screen of a tabletop computer is often occluded by physical objects such as coffee cups. This makes it difficult to see the virtual elements under the physical objects (visibility) and manipulate them (manipulability). Here we present a user interface widget, called "SnapRail," to address these problems, especially occlusion of a manipulable collection of virtual discrete elements such as icons. SnapRail detects a physical object on the surface and the virtual elements under the object. It then snaps the virtual elements to a rail widget that appears around the object. The user can then manipulate the virtual elements along the rail widget. We conducted a preliminary user study to evaluate the potential of this interface and collect initial feedback. The SnapRail interface received positive feedback from participants of the user study.
HandyWidgets: local widgets pulled-out from hands (pp. 197-200)
  Takuto Yoshikawa; Buntarou Shizuki; Jiro Tanaka
Large multi-touch tabletops are useful for collocated collaborative work involving multiple users. However, applying traditional WIMP interfaces to tabletops causes problems where users cannot reach GUI elements, such as icons or buttons, on the opposite side with their hands, and they sometimes have difficulty in reading the content of GUI elements because their view does not match the orientation of the content. To solve these problems, we present HandyWidgets that are widgets localized around users' hands. The widgets are quickly invoked by a bimanual multi-touch gesture which we call "pull-out". This gesture also allows users to adjust the position, orientation, and size of the widgets, in a continuous manner after invocation.

Understanding users

Touch, click, navigate: comparing tabletop and desktop interaction for map navigation tasks (pp. 205-213)
  Elham Beheshti; Anne Van Devender; Michael Horn
Multi-touch tabletops and desktop computers offer different affordances for interaction with digital maps. Previous research suggests that these differences may affect how a person navigates in the world. To test this idea we randomly assigned 22 participants to one of two conditions. Participants used the interfaces to complete a series of tasks in which they interacted with a digital map of a fictitious city and then attempted to navigate through a corresponding virtual world. However, based on participant performance, we find no evidence that interface type affects navigation ability. We discuss map navigation strategies across the two conditions and analyze multi-touch gestures used by participants in the tabletop condition. Finally, based on these analyses, we consider implications for the design of interactive map interfaces.
Microanalysis of active reading behavior to inform design of interactive desktop workspaces (pp. 215-224)
  Matthew Hong; Anne Marie Piper; Nadir Weibel; Simon Olberding; James Hollan
Hybrid paper-digital desktop workspaces have long been of interest in HCI, yet their design remains challenging. One continuing challenge is to support fluid interaction with both paper and digital media, while taking advantage of established practices with each. Today researchers are exploiting depth cameras and computer vision to capture activity on and above the desktop and enable direct interaction with digitally projected and physical media. One important prerequisite to augmenting desktop activity is understanding human behavior in particular contexts and tasks. Here we study active reading on the desktop. To better understand active reading practices and identify patterns that might serve as signatures for different types of related activity, we conducted a microanalysis of single users reading on and above the desktop workspace. We describe the relationship between multimodal body-based contextual cues and the interactions they signify in a physical desktop workspace. Detailed analysis of coordinated interactions with paper documents provides an empirical basis for designing digitally augmented desktop workspaces. We conclude with prototype design interactions for hybrid paper-digital desktop workspaces.
Interaction and recognition challenges in interpreting children's touch and gesture input on mobile devices (pp. 225-234)
  Lisa Anthony; Quincy Brown; Jaye Nias; Berthel Tate; Shreya Mohan
As mobile devices like the iPad and iPhone become increasingly commonplace, touchscreen interactions are quickly overtaking other interaction methods in terms of frequency and experience for many users. However, most of these devices have been designed for the general, typical user. Trends indicate that children are using these devices (either their parents' or their own) for entertainment or learning activities. Previous work has found key differences in how children use touch and surface gesture interaction modalities vs. adults. In this paper, we specifically examine the impact of these differences in terms of automatically and reliably understanding what kids meant to do. We present a study of children and adults performing touch and surface gesture interaction tasks on mobile devices. We identify challenges related to (a) intentional and unintentional touches outside of onscreen targets and (b) recognition of drawn gestures, that both indicate a need to design tailored interaction for children to accommodate and overcome these challenges.

Interacting with information using surfaces

Branch-explore-merge: facilitating real-time revision control in collaborative visual exploration (pp. 235-244)
  Will McGrath; Brian Bowman; David McCallum; Juan David Hincapié-Ramos; Niklas Elmqvist; Pourang Irani
Collaborative work is characterized by participants seamlessly transitioning from working together (coupled) to working alone (decoupled). Groupware should therefore facilitate smoothly varying coupling throughout the entire collaborative session. Towards achieving such transitions for collaborative exploration and search, we propose a protocol based on managing revisions for each collaborator exploring a dataset. The protocol allows participants to diverge from the shared analysis path (branch), study the data independently (explore), and then contribute back their findings onto the shared display (merge). We apply this concept to collaborative search in multidimensional data, and propose an implementation where the public view is a tabletop display and the private views are embedded in handheld tablets. We then use this implementation to perform a qualitative user study involving a real estate dataset. Results show that participants leverage the BEM protocol, spend significant time using their private views (40% to 80% of total task time), and apply public view changes for consultation with collaborators.
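(A toy sketch of the branch-explore-merge life cycle as a revision tree; the protocol's actual state model and conflict handling are richer than this last-writer-wins merge.)

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Revision:
    """One node in the revision tree: a snapshot of the exploration
    state, e.g., the active filters on the shared dataset."""
    state: dict
    parent: Optional["Revision"] = None

def branch(public: Revision) -> Revision:
    """Copy the public (tabletop) state onto a private tablet view."""
    return Revision(state=dict(public.state), parent=public)

def merge(public: Revision, private: Revision) -> Revision:
    """Contribute private findings back to the shared display; here,
    private edits simply override public ones."""
    merged = dict(public.state)
    merged.update(private.state)
    return Revision(state=merged, parent=public)
```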
Use your head: tangible windows for 3D information spaces in a tabletop environment (pp. 245-254)
  Martin Spindler; Wolfgang Büschel; Raimund Dachselt
Tangible Windows are a novel concept for interacting with virtual 3D information spaces in a workbench-like multi-display environment. They allow for performing common 3D interaction tasks in a more accessible manner by combining principles of tangible interaction, head-coupled perspective, and multi-touch techniques. Tangible Windows unify the interaction and representation space in a single device. They either act as physical peepholes into a virtual 3D world or as physical containers for parts of that world and are well-suited for the collaborative exploration and manipulation of such information spaces. One important feature of Tangible Windows is that the use of obtrusive hardware, such as HMDs, is strictly avoided. Instead, lightweight paper-based displays are used. We present different techniques for canonical 3D interaction tasks such as viewport control or object selection and manipulation, based on the combination of independent input modalities. We tested these techniques on a self-developed prototype system and received promising early user feedback.
TouchWave: kinetic multi-touch manipulation for hierarchical stacked graphs (pp. 255-264)
  Dominikus Baur; Bongshin Lee; Sheelagh Carpendale
The increasing popularity of touch-based devices is driving us to rethink existing interfaces. Within this opportunity, the complexity of information visualizations offers particular challenges. We explore these challenges to bring multi-touch interactions to a specific visualization technique, stacked graphs. Stacked graphs are a visually appealing and popular method for presenting time series data; however, they come with associated problems: issues with legibility, difficulties with comparisons, and restrictions in scalability. We present TouchWave, a rethinking and extension of stacked graphs for multi-touch capable devices that provides a variety of flexible layout adjustments, interactive options for querying data values, and seamless switching between different visualizations. In addition to ameliorating the main issues of stacked graphs, TouchWave also integrates hierarchical data within stacked graphs. We demonstrate TouchWave's capabilities with two datasets, a music listening history and movie box office revenues, and discuss the implications for weaning other visualizations off mouse and keyboard.
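(The layout underlying an ordinary stacked graph, which TouchWave extends interactively, is just a running sum; a minimal sketch, with TouchWave's hierarchical and kinetic features omitted.)

```python
import numpy as np

def stack_layers(series: np.ndarray):
    """Compute (bottom, top) outlines for a basic stacked graph.

    series: array of shape (n_layers, n_timesteps), non-negative.
    Each layer sits on the cumulative sum of the layers below it;
    other baselines (e.g., centered streamgraphs) shift all layers
    by a common per-timestep offset."""
    tops = np.cumsum(series, axis=0)
    bottoms = np.vstack([np.zeros(series.shape[1]), tops[:-1]])
    return list(zip(bottoms, tops))
```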

Doctoral symposium

Perception and reality: exploring urban planners' vision on GIS tasks for multi-touch displays (pp. 265-270)
  Rojin S. Vishkaie; Richard Levy
For urban and transportation planners, GIS has become an essential tool in land use planning and in the design of urban infrastructure. The use of PC-based GIS software has made it possible to analyze massive data sets on an urban scale. Although a wide variety of new multi-touch displays such as smartphones, tablets, tabletops and large wall displays are available, they are rarely used by planning professionals. This research explores the role that such displays may have for GIS experts, specifically urban planners. In conducting this research, interviews with GIS professionals focused on how they analyze GIS data, as well as their perceptions of the types of tasks that may be suitable for multi-touch displays. The primary tasks identified by the participants in the study were collaboration, visualization and data analysis. This research will further analyze these tasks with respect to urban planners and consider the properties of various multi-touch displays to explore their role in the GIS context.
Scalable interaction design for collaborative visual exploration of big data (pp. 271-276)
  Ioannis Leftheriotis
Novel input devices such as tangibles, smartphones, and multi-touch surfaces have given impetus to new interaction techniques. In this PhD research, the main motivation is to study novel interaction techniques and designs that augment collaboration in a collocated environment. Furthermore, the main research aim is to take advantage of scalable interaction design techniques and tools that can be applied across a variety of devices, so as to help users work together on a problem involving an abstract big data set, using visualizations in a collocated context.
Spatially aware tangible display interaction in a tabletop environment (pp. 277-282)
  Martin Spindler
One limiting factor of digital tabletops is that interaction is usually restricted to a single 2D surface. I propose to extend this interaction space to the 3D space above the table by using spatially aware handheld displays. Utilizing the spatial position and orientation of such tangible displays provides an additional dimension of interaction that allows users to interact with complex information spaces in a more direct and natural way. The simultaneous use of multiple tangible displays explicitly supports collaborative work. My research includes the identification of basic interaction principles, the design and implementation of a technical framework, and the development and evaluation of interactive systems demonstrating the benefits of tangible displays for different application domains.
Improving awareness of automated actions within digital tabletops (pp. 283-288)
  Y.-L. Betty Chang
My research investigates information visualization techniques that improve the awareness of complex automated activities within digital tabletop interfaces. As a case study, I am exploring digital tabletop board gaming as the context, to enable rapid design cycles and easy manipulation of variables such as level of complexity. Preliminary work has revealed that automation reduces workload; however, it also increases the potential for confusion, restricts flexibility, and may negatively impact the gaming experience. Through a series of laboratory studies, my dissertation research will investigate the impact on awareness and decision-making processes of the following three factors: 1) persistent display of automation results, 2) animation of automated actions, and 3) user control of automated actions. Finally, a field study is planned to deploy and validate the design concepts explored in the laboratory studies.
Designing tabletop activities for inquiry-based learning: lessons from phylogenetics, neuroscience and logistics (pp. 289-294)
  Bertrand Schneider
In this paper I discuss the lessons learnt from designing learning environments for science education. More specifically, I describe four projects I designed and (or) evaluated: Walden, a multi-touch, multi-display environment for informal science education; the Tinker Table, a tangible user interface for students in logistics; Phylo-Genie, a learning scenario for collaborative learning of phylogenetics; and finally BrainExplorer, a pen-based tabletop environment that enables direct interaction with a small-scale brain. I summarize my findings by defining three ways in which technology can enhance knowledge building for inquiry-based learning: via a "Representational Effect", by providing rich interaction techniques, and by preparing for future learning.

Demo session

3D touch panel interface using an autostereoscopic display (pp. 295-298)
  Takehiro Niikura; Takashi Komuro
We propose a 3D touch panel interface using an autostereoscopic display and a high-speed stereo camera. With this system, virtual objects are stereoscopically presented, and the objects respond to hand movement captured by the stereo camera, which makes users feel as if they are touching the objects directly. Since we use a high-speed camera to detect the fingertip, the system realizes more accurate synchronization between the real and virtual objects, without a feeling of strangeness.
FloTree: a multi-touch interactive simulation of evolutionary processes (pp. 299-302)
  Kien Chuan Chua; Yongqiang Qin; Florian Block; Brenda Phillips; Judy Diamond; E. Margaret Evans; Michael S. Horn; Chia Shen
We present FloTree, a multi-user simulation that illustrates key dynamic processes underlying evolutionary change. Our intention is to create an informal learning environment that links micro-level evolutionary processes to macro-level outcomes of speciation and biodiversity. On a multi-touch table, the simulation represents change from generation to generation in a population of organisms. By placing hands or arms on the surface, visitors can add environmental barriers, thus interrupting the genetic flow between the separated populations. This results in sub-populations that accumulate genetic differences independently over time, sometimes leading to the formation of new species. Learners can morph the result of the simulation into a corresponding phylogenetic tree. The free-form hand and body touch gestures invite creative input from users, encourage social interaction, and provide an opportunity for deep engagement.
SynFlo: an interactive installation introducing synthetic biology concepts (pp. 303-306)
  Kimberly Chang; Wendy Xu; Nicole Francisco; Consuelo Valdes; Robert Kincaid; Orit Shaer
SynFlo is an interactive installation that utilizes tangible interaction to help illustrate core concepts of synthetic biology through outreach programs. This playful installation allows users to create useful virtual life forms from standardized genetic components, exploring common synthetic biology concepts and techniques. The installation consists of Sifteo cubes, which are used to modify virtual E. coli to serve as environmental biosensors. The modified bacteria can then be deployed into an environment represented by a tabletop computer, where they detect environmental toxins. The goal of this research is to explore ways to develop effective interactive activities for outreach in STEM and to communicate the excitement and constraints of cutting-edge research.
MoClo planner: supporting innovation in bio-design through multi-touch interaction (pp. 307-310)
  Sirui Liu; Kara Lu; Nahum Seifeselassie; Casey Grote; Nicole Francisco; Veronica Lin; Linda Ding; Consuelo Valdes; Robert Kincaid; Orit Shaer
Synthetic biology is an emerging field that promises to revolutionize biotechnology through the design and construction of new biological constructs useful for medicine, agriculture, and industry. Software tools for this field are currently immature. Our research investigates how interactive tabletops and surfaces could be utilized to enhance innovation in biological design. Here, we present the MoClo Planner, a multi-touch interface which supports the design and construction of complex biological constructs. The MoClo Planner was developed in close collaboration with domain scientists to simplify the design stage of a cutting-edge laboratory method.
Fluid surface: interactive water surface display for viewing information in a bathroom (pp. 311-314)
  Yoichi Takahashi; Yasushi Matoba; Hideki Koike
Information is becoming accessible everywhere in everyday life due to the spread of smartphones and portable personal computers; however, there are very few methods for accessing content in a bathing environment. A smartphone can sometimes be carried into a bathroom, but it is unnatural to hold a device while bathing, so a suitable technique for information browsing in a bathing environment is required. We propose an interactive water surface display system, which uses image-recognition techniques. By using water, the system can support intuitive interactions peculiar to water, such as poking a finger up from under the water surface, stroking the water surface, and scooping up water. In this paper, we discuss interaction design in a bathroom, describing an implementation of our system and its applications.
Flexible surfaces for interactive audio (pp. 315-318)
  Jess Rowland; Adrian Freed
We present here flat, flexible audio speaker surface arrays, which are transparent, can be formed to various environments, and allow for user interaction. These speaker arrays provide an alternative to traditional models of sound reproduction, which often involve discrete point-source systems and bulky hardware passively received by the user. The surface array system opens up new possibilities for acoustic spaces, creativity, and sound interactivity.
8D display: a relightable glasses-free 3D display (pp. 319-322)
  Matthew Hirsch; Shahram Izadi; Henry Holtzman; Ramesh Raskar
Imagine a display that behaves like a window. Glancing through it, viewers perceive a virtual 3D scene with correct parallax, without the need to wear glasses or track the user. Light that passes through the display correctly illuminates the virtual scene. While researchers have considered such displays, or prototyped subsets of these capabilities, we contribute a new, interactive, relightable, glasses-free 3D display. By simultaneously capturing a 4D light field and displaying a 4D light field, we are able to realistically modulate the incident light on rendered content. We present our optical design and GPU pipeline. Beyond mimicking the physical appearance of objects under natural lighting, an 8D display can create arbitrary directional illumination patterns and record their interaction with physical objects. Our hardware points the way towards novel 3D interfaces, in which users interact with digital content using light widgets, physical objects, and gesture.
Skin games (pp. 323-326)
  Alvaro Cassinelli; Jussi Angesleva; Yoshihiro Watanabe; Gonzalo Frasca; Masatoshi Ishikawa
Recent developments in computer vision hardware have popularized the use of (free hand) gestures as well as full body posture as a form of input control in commercial gaming applications. However, the computer screen remains the place where the eyes must be placed at all times. Freeing graphic output from that rectangular cage is a hot topic in Spatial Augmented Reality (SAR). Using static or dynamic projection mapping and 'smart projectors', it is possible to recruit any surface in the surrounding for displaying the game's graphics. The present work introduces an original interaction paradigm building on kinetic interfaces and SAR: in 'Skin Games' the body acts simultaneously as the controller and as the (wildly deformable) projection surface on which to display the game's output.

Posters

Quantitative evaluation of an illusion of fingertip motion (pp. 327-330)
  Hiroyuki Okabe; Taku Hachisu; Michi Sato; Shogo Fukushima; Hiroyuki Kajimoto
In recent years, touch panels have become widespread as an intuitive means of controlling devices. Because the touch panel has a space over which a finger and a corresponding cursor move, certain actions become intuitive compared to force-input devices such as a pointing stick. If we could add an illusory feeling of finger motion to the force-input interface, it would become more intuitive. We have found a new haptic illusion of "motion", which occurs when an electrical tactile flow is presented on the fingertip while a shearing force is experienced. We have also investigated the conditions under which it occurs, focusing on the relation between shear force and the movement speed of the electrical tactile stimulation. In this study, we investigated its directional characteristics, focusing on the illusory position of the finger as perceived using a new electrocutaneous display mounted on a six-axis force sensor.
Seamless integration of mobile devices into interactive surface environments (pp. 331-334)
  Andreas Dippon; Norbert Wiedermann; Gudrun Klinker
This poster abstract describes the seamless integration of uninstrumented mobile devices into an interactive surface environment. By combining a depth camera with an RGB camera for tracking, we are able to identify uninstrumented mobile devices using visual marker tracking. We describe the technical details of combining the two cameras and an example application for the integration of mobile devices.
Control of ridge by using visuotactile cross-modal phenomenon (pp. 335-338)
  Maki Yokoyama; Taku Hachisu; Michi Sato; Shogo Fukushima; Hiroyuki Kajimoto
Currently, touch panels are used in many devices, and there have been many proposals to add tactile sensation to touch panels, which require additional electro-mechanical components. In this paper, we propose a simple method of adding the tactile sensation of a ridge using just a thin sheet. The sheet has ridges that are haptically imperceptible, but once a visual cue such as a line is presented, a visuotactile cross-modal response induces a haptic ridge. We tested the effects of the visual cue and of the height of the ridge. The results showed that the visual cue definitely enhances the feeling of a ridge.
Touch-consistent perspective for direct interaction under motion parallax (pp. 339-342)
  Yusuke Sugano; Kazuma Harada; Yoichi Sato
A 3D display is a key component to present virtual space in an intuitive way to users. A motion parallax-based 3D display can be easily combined with multi-touch surfaces, and it is expected to bring a natural experience of viewing and controlling 3D space. However, since virtual objects are rendered in accordance with the head position of the user, their projected positions are not fixed on the display surface. We propose a novel formulation of head-coupled perspective that adaptively changes the position of the projection image plane to maintain touch consistency of direct interaction.
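(To see why head-coupled rendering breaks touch consistency, consider the standard projection of a virtual point onto the display plane: its on-screen image moves with the head. A minimal sketch; the paper's adaptive image-plane correction is not reproduced.)

```python
import numpy as np

def screen_position(head: np.ndarray, point: np.ndarray) -> np.ndarray:
    """Intersect the ray from the head through a virtual 3D point with
    the display plane z = 0. For any point with z != 0, the result
    depends on the head position, so the touch location of a rendered
    object shifts under head motion (assumes head and point depths differ)."""
    t = head[2] / (head[2] - point[2])  # ray parameter where z reaches 0
    return head[:2] + t * (point[:2] - head[:2])
```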
Perceived intensity of click sensation for small touchscreen devices (pp. 343-346)
  Jeong Mook Lim; Heesook Shin; Jong-uk Lee; Ki-Uk Kyung
'Click' and 'tap' are among the most commonly used gestures on touchscreen UIs. Many studies have aimed to provide realistic tactile feedback when a user touches a graphic object on a touchscreen. In this study, we analyzed the perceived intensity levels of tactile sensation that can be utilized in clicking or tapping gestures on a touchscreen. Two experiments were conducted to investigate the number of perceivable intensity levels. The first was to find the range of accelerations for our new haptic device. The second was to measure the number of intensity levels that can be reliably identified within that acceleration range. Four stimuli were used in this experiment. The results showed an average information transfer of 1.34 bits for intensity identification, or equivalently, 2.53 correctly identifiable intensity levels.
Novel interaction techniques using touch-sensitive tangibles in tabletop environments (pp. 347-350)
  Saphyra Amaro; Masanori Sugimoto
In this work, we propose techniques for interaction that use a touch-sensitive tangible to assist 3D manipulation in tabletop applications. The objective of this research is to investigate the effectiveness and user satisfaction with this combination for performing virtual object manipulation in tabletop environments. A prototype of a touch-sensitive tangible was constructed and some of the proposed techniques were implemented, namely 3D translation and rotation. We conducted a pilot study to compare 3D manipulation on the tabletop with and without the tangible, from which we found that the touch-sensitive tangible was useful for 3D manipulation tasks.
Tangible interactions on a flat panel display using actuated paper sheets (pp. 351-354)
  Kota Amano; Akio Yamamoto
This paper describes active tangible interactions on a flat panel display using plain paper sheets as tangible media. A prototype system employs transparent planar electrostatic actuators that cover the surface of a flat-panel display to realize co-located tangible and visual interactions. In a demonstration program, users can interact with animated computer graphics through plain paper sheets actuated by the electrostatic actuators.
Revisiting hovering: interaction guides for interactive surfaces (pp. 355-358)
  Victor Cheung; Jens Heydekorn; Stacey Scott; Raimund Dachselt
Current touch-based interactive surfaces rely heavily on a trial-and-error approach for guiding users through the interaction process. In contrast, the legacy WIMP (Windows, Icons, Menus, Pointer) paradigm employs various methods to provide user assistance; a commonly used strategy is mouse hovering. This research explores how this strategy can be adapted and expanded for interactive surfaces, both to provide user assistance and to help address common surface interaction issues, such as precision. Design dimensions and considerations are discussed, and potential hover interaction techniques are proposed. These techniques emphasize the use of animation to facilitate user engagement and improve the overall user experience.
An immersive surface for 3D interactions BIBAFull-Text 359-362
  Yusuke Takeuchi; Masanori Sugimoto
This paper proposes a new tabletop interface that enables a user to visualize projected objects as if they existed on the tabletop surface. It uses head tracking without the need for any specialized head-mounted hardware, displays, or markers. Many interactive tabletop interfaces now support interactions above the surface because such interactions are more intuitive; for these 3D interactions, users should be able to gauge the size and height of the projected virtual objects. We evaluate our system quantitatively via a 3D interaction task, comparing it with a standard tabletop system.
Development of a context-enhancing surface based on the entrainment of embodied rhythms and actions sharing via interaction BIBAFull-Text 363-366
  Michiya Yamamoto; Yusuke Shigeno; Ryuji Kawabe; Tomio Watanabe
Surfaces with social functions are a recent trend. As the next step in the evolution of these surfaces, we propose a context-enhancing surface that allows people to entrain and share their embodied rhythms, supporting liveliness in their daily lives. We developed a prototype system using sensors and screens, and we propose a goal-oriented interaction and an entrainment-oriented interaction to support cheering in sports and liveliness in presentations.
Development of wall amusement with infrared radars BIBAFull-Text 367-370
  Yosuke Kimura; Haruki Ohta; Atsushi Karino; Tomoyuki Takami
This paper describes a wall amusement system built from a wall surface and infrared radars. A screen is projected onto the wall by an ultra-short-throw projector, and interaction between the wall surface and players is captured by infrared radar measurement. The system can be set up anywhere a sufficiently large wall is available. We present a version of the system that adds mirrors to the infrared radars, which greatly improves both the temporal and spatial resolution of the radar measurements. We use this new system to develop exergames on the wall surface.
Tool support for developing scalable multiuser applications on multi-touch screens BIBAFull-Text 371-374
  Ioannis Leftheriotis; Konstantinos Chorianopoulos; Letizia Jaccheri
MT (Multi-touch) screens are platforms that enhance multiuser collaboration. In this work, we underline the need for novel interaction techniques and toolkits that allow multi-user collaboration on larger MT surfaces. We present the ChordiAction toolkit, which uses a novel chorded interaction technique allowing simultaneous multi-user interaction in scalable MT applications. We describe the design, the architecture, and some efficient customization practices of the toolkit, and show how it can be effectively embedded in an application for multiuser interaction. As a proof of concept, we present some example applications built with the ChordiAction toolkit to show its potential, and discuss our plans for further evaluation of the technique.
ITS in the classroom: perspectives on using a multi-touch classroom BIBAFull-Text 375-378
  Emma Mercier; Steven Higgins; Elizabeth Burd; James McNaughton
Using interactive surfaces in a classroom requires an understanding of multiple users and stakeholders. While research on how students use the tables provides some insight, exploring the roles and needs of the teacher, and the interaction between groups in a classroom, adds an additional dimension to this design challenge. We summarize three years of design and research in a multi-touch classroom to illuminate some of the issues involved in placing interactive surfaces in the classroom environment. Results indicate that the tables can be used to support joint cognition, that the arrangement of tables in the classroom may influence collaborative interactions, and that allowing the teacher to project content from student tables to a shared interactive whiteboard (IWB) for whole-group discussion can facilitate progress within the groups.
Mobile assistant: enhancing desktop interaction using mobile phone BIBAFull-Text 379-382
  Haijun Xia; Jingning Zhang; Yeshuang Zhu; Chun Yu; Yuanchun Shi
The touchscreen of a mobile phone is an important input channel, whereas most desktop computers lack such a modality. Furthermore, we observe a common situation in which ubiquitous mobile phones lie beside users while they operate their computers. In this poster, we present Mobile Assistant (MA), which lets a touch mobile phone, manipulated by the non-dominant hand, assist desktop interaction without introducing extra devices. To demonstrate this interaction concept, two tasks are investigated: Symbol Input and Button Access. A user study shows that MA can significantly improve users' input efficiency in completing GUI tasks.
Amazing forearm as an innovative interaction device and data storage on tabletop display BIBAFull-Text 383-386
  Seiya Koura; Shunsuke Suo; Asako Kimura; Fumihisa Shibata; Hideyuki Tamura
In this study, we propose interaction techniques that make positive use of the forearm, which we define as the part of the arm between the elbow and the hand, on tabletop displays. On direct input surfaces, users' forearms often cause problems such as incorrect recognition and occlusion, so the forearm is usually considered a nuisance. We believe, however, that it can instead be used to create new interaction techniques. We propose such techniques for tabletop displays and describe how users can manipulate menus and data storage with them. Our study opens new possibilities for using the otherwise problematic forearm on tabletop displays.
WALDEN: multi-surface multi-touch simulation of climate change and species loss in thoreau's woods BIBAFull-Text 387-390
  Bertrand Schneider; Matthew Tobiasz; Charles Willis; Chia Shen
We present a case study of an interactive, multi-touch visualization for informal science education that spans multiple heterogeneous displays. Our visual simulation application, called WALDEN, has been developed using a Microsoft Surface and a large data wall. Multiple displays offer users the opportunity to interact with large visual datasets and observe complex visual simulations. We discuss the design of our system, findings from our case study, the shortcomings it revealed, and how we plan to address them.
Comparing the effect of interactive tabletops and desktops on students' cognition BIBAFull-Text 391-394
  Shima Salehi; Bertrand Schneider; Paulo Blikstein
In this pilot study we investigated the effect of the technological platform on the quality of students' cognition when analyzing a computer simulation. As an indicator of performance, we measured the percentage of ideal cycles of cognition, where an ideal cycle is defined as having three distinct steps: planning an action, executing it, and evaluating its effects. The results suggest that individuals were not affected by the orientation of the display; dyads, however, had twice as many ideal cycles of cognition when interacting with a tabletop as with a desktop. We discuss the implications and limitations of these preliminary results for classroom instruction.
PiMarking: co-located collaborative digital annotating on large tabletops BIBAFull-Text 395-398
  Yongqiang Qin; Chenjun Wu; Yuanchun Shi
There are situations in which co-located people are required to perform collaborative marking tasks: for example, human resource officers reviewing resumes together, or teachers grading answer sheets after an examination. In this poster, we introduce PiMarking, a collaborative system designed to accommodate user-authenticated marking tasks and face-to-face discussions on large-scale interactive tabletop surfaces. PiMarking facilitates user differentiation, document sharing, and synchronized marking among group members. It also provides user permission management mechanisms, with three modes of document sharing: distributed copy, shared display, and synchronized marking. We conducted a preliminary study using a realistic resume-marking task, which demonstrated the effectiveness of the features provided by PiMarking.
uEmergency: a collaborative system for emergency management on very large tabletop BIBAFull-Text 399-402
  Yongqiang Qin; Jie Liu; Chenjun Wu; Yuanchun Shi
The vertical displays, indirect input, and distant communication of traditional Emergency Management Information Systems make for unintuitive human-computer interaction and thus reduce the efficiency of decision-making. This paper presents uEmergency, a multi-user collaborative system for emergency management on a very large-scale interactive tabletop. It allows people to carry out face-to-face communication around a horizontal global map, on which the real-time situation can be browsed and analyzed directly with fingers and digital pens. We also present the results of a study in which two groups carried out a forest-firefighting task using the system. The results suggest that uEmergency can effectively help people manipulate objects, analyze the situation, and collaborate in coping with an emergency.
Bridging private and shared interaction surfaces in co-located group settings BIBAFull-Text 403-406
  Stacey Scott; Phillip McClelland; Guillaume Besacier
This work-in-progress paper describes the design of an interaction technique that addresses the challenges of transferring digital objects between private and shared surfaces, particularly in co-located group settings. We propose a transfer technique for bridging tablets and digital tables that builds on existing interaction techniques, such as virtual embodiments and multi-display bridging techniques, to improve awareness of the transfer process both for the person performing the transfer and for their collaborators. The technique also minimizes the effort involved in the transfer action, enabling people to focus on the activity at hand -- or the ongoing conversation -- rather than on the technologies being used.
BrainExplorer: an innovative tool for teaching neuroscience BIBAFull-Text 407-410
  Bertrand Schneider; Jenelle Wallace; Roy Pea; Paulo Blikstein
Neuroscience has recently brought many insights into the inner workings of the human brain. The way neuroscience is taught, however, has lagged behind, still relying on direct instruction and textbooks. We argue that the spatial nature of the brain makes it an ideal candidate for hands-on activities coupled with a tangible interface. In this paper we introduce BrainExplorer, a learning environment for teaching neuroscience that allows users to explore neural pathways on a custom tabletop platform. We conducted an evaluation with 28 participants, comparing students who learned neuroscience content by using BrainExplorer with students who learned by reading a textbook chapter. We found that our system promotes learning along three dimensions: memorizing scientific terminology, understanding a dynamic system, and transferring knowledge to a new situation.