HCI Bibliography : Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,284,120
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server on 2015-05-12 and again on 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: nacenta_m* Results: 46 Sorted by: Date
Records: 1 to 25 of 46
[1] iVoLVER: Interactive Visual Language for Visualization Extraction and Reconstruction Display and Visualizations / Méndez, Gonzalo Gabriel / Nacenta, Miguel A. / Vandenheste, Sebastien Proceedings of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.1 p.4073-4085
ACM Digital Library Link
Summary: We present the design and implementation of iVoLVER, a tool that allows users to create visualizations without textual programming. iVoLVER is designed to enable flexible acquisition of many types of data (text, colors, shapes, quantities, dates) from multiple source types (bitmap charts, webpages, photographs, SVGs, CSV files) and, within the same canvas, supports transformation of that data through simple widgets to construct interactive animated visuals. Aside from the tool, which is web-based and designed for pen and touch, we contribute the design of the interactive visual language and widgets for extraction, transformation, and representation of data. We demonstrate the flexibility and expressive power of the tool through a set of scenarios, and discuss some of the challenges encountered and how the tool fits within the current infovis tool landscape.

[2] Gaze-Contingent Manipulation of Color Perception Eye Gaze / Mauderer, Michael / Flatla, David R. / Nacenta, Miguel A. Proceedings of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.1 p.5191-5202
ACM Digital Library Link
Summary: Using real time eye tracking, gaze-contingent displays can modify their content to represent depth (e.g., through additional depth cues) or to increase rendering performance (e.g., by omitting peripheral detail). However, there has been no research to date exploring how gaze-contingent displays can be leveraged for manipulating perceived color. To address this, we conducted two experiments (color matching and sorting) that manipulated peripheral background and object colors to influence the user's color perception. Findings from our color matching experiment suggest that we can use gaze-contingent simultaneous contrast to affect color appearance and that existing color appearance models might not fully predict perceived colors with gaze-contingent presentation. Through our color sorting experiment we demonstrate how gaze-contingent adjustments can be used to enhance color discrimination. Gaze-contingent color holds the promise of expanding the perceived color gamut of existing display technology and enabling people to discriminate color with greater precision.

[3] Constructing Interactive Visualizations with iVoLVER Interactivity Demos / Méndez, Gonzalo Gabriel / Nacenta, Miguel A. Extended Abstracts of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.2 p.3727-3730
ACM Digital Library Link
Summary: iVoLVER, the Interactive Visual Language for Visualization Extraction and Reconstruction, is a web-based pen and touch system that graphically supports the construction of interactive visualizations and allows the extraction of data from different types of digital artifacts and photographs. Together, these features enable the creation of visualizations from data that is not structured in traditional formats, without the need for textual programming. This demonstration shows how iVoLVER visualizations are constructed and illustrates an interactive example that can be used in teaching and educational contexts.

[4] "Local Remote" Collaboration: Applying Remote Group Awareness Techniques to Co-located Settings Workshops / Scott, Stacey D. / Graham, T. C. Nicholas / Wallace, James R. / Hancock, Mark / Nacenta, Miguel Companion Proceedings of ACM CSCW 2015 Conference on Computer-Supported Cooperative Work and Social Computing 2015-03-14 v.2 p.319-324
ACM Digital Library Link
Summary: Co-located environments have long been considered ideal for many types of group work, such as planning, decision-making, and design, since they provide a rich communication environment (e.g., delay-free voice communication, face-to-face interaction, eye gaze, and non-verbal communication), as well as promote awareness and coordination through the use of shared artifacts. However, the recent move towards multi-device ecologies in co-located settings, such as the use of multiple personal devices (e.g., laptops, tablets) or multiple personal devices in conjunction with larger, shared displays, such as digital walls or tabletops, can interfere with these common co-located communication and collaboration strategies, as various group members mentally and/or physically shift their focus to their personal devices rather than to their collaborators or to any physically shared artifacts. Group communication and coordination can easily break down in these scenarios, as the lack of a physically shared group focus of attention can limit awareness of others' activities and task progress. In this workshop, researchers and practitioners will explore design techniques that can be used to address this issue and improve group awareness in these co-located multi-device ecologies. This will be accomplished through group presentations, brainstorming sessions, and small-group breakout sessions.

[5] Designing the Unexpected: Endlessly Fascinating Interaction for Interactive Installations Paper Session 2: Focus on Interaction / MacDonald, Lindsay / Brosz, John / Nacenta, Miguel A. / Carpendale, Sheelagh Proceedings of the 2015 International Conference on Tangible and Embedded Interaction 2015-01-15 p.41-48
ACM Digital Library Link
Summary: We present A Delicate Agreement, an interactive art installation designed to intrigue viewers by offering them an unfolding story that is endlessly fascinating. To achieve this, we set our story in the liminal space of an elevator, and populated this elevator with a set of unique characters. Viewers watch the story unfold through peepholes in the elevator's doors, where in turn their gaze can trigger changes in the storyline. This storyline's interactive response was created via a complex adaptive system using simple rules based on Goffman's performance theory.

[6] User-defined Interface Gestures: Dataset and Analysis Session 1: Gestures / Grijincu, Daniela / Nacenta, Miguel A. / Kristensson, Per Ola Proceedings of the 2014 ACM International Conference on Interactive Tabletops and Surfaces 2014-11-16 p.25-34
ACM Digital Library Link
Summary: We present a video-based gesture dataset and a methodology for annotating video-based gesture datasets. Our dataset consists of user-defined gestures generated by 18 participants from a previous investigation of gesture memorability. We design and use a crowd-sourced classification task to annotate the videos. The results are made available through a web-based visualization that allows researchers and designers to explore the dataset. Finally, we perform an additional descriptive analysis and quantitative modeling exercise that provide additional insights into the results of the original study. To facilitate the use of the presented methodology by other researchers we share the data, the source of the human intelligence tasks for crowdsourcing, a new taxonomy that integrates previous work, and the source code of the visualization tool.

[7] Paper vs. tablets: the effect of document media in co-located collaborative work Connection and collaboration / Haber, Jonathan / Nacenta, Miguel A. / Carpendale, Sheelagh Proceedings of the 2014 International Conference on Advanced Visual Interfaces 2014-05-27 p.89-96
ACM Digital Library Link
Summary: With new computer technologies, portable devices are rapidly approaching the dimensions and characteristics of traditional pen-and-paper-based tools. Text and graphic documents are now commonly viewed using small tablet computers. We conducted a study with small groups of participants to better understand how paper-based text and graphics are used by small collaborative groups, as compared to how these groups make use of documents presented on a digital tablet with digital styluses. Our results indicate that digital tools, as compared to paper tools, can affect the levels of verbal communication and participant gaze engagement with other group members. Additionally, we observed how participants spatially arranged paper-based and digital tools during collaborative group activities, how often they switched from digital to paper, and how they still prefer paper overall.

[8] Demo hour Demo hour / Karagozler, M. Emre / Poupyrev, Ivan / Fedder, Gary K. / Suzuki, Yuri / Yao, Lining / Niiyama, Ryuma / Ou, Jifei / Follmer, Sean / Ishii, Hiroshi / Brosz, John / Nacenta, Miguel A. / Pusch, Richard / Carpendale, Sheelagh / Hurter, Christophe / Rekimoto, Jun interactions 2014-05 v.21 n.3 p.6-9
ACM Digital Library Link
Summary: UIST is a premier forum for innovations in the software and hardware of human-computer interfaces. The UIST demo program enables attendees to experience firsthand the most interesting next-generation user interface technologies. The UIST 2013 demo program featured technologies ranging from energy-harvesting interactive paper to pneumatically actuated materials, providing attendees a vivid preview of some of the interactive systems that might shape our daily lives in the future. -- Per Ola Kristensson and T. Scott Saponas, UIST 2013 Demo Chairs

[9] Depth perception with gaze-contingent depth of field The third dimension / Mauderer, Michael / Conte, Simone / Nacenta, Miguel A. / Vishwanath, Dhanraj Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems 2014-04-26 v.1 p.217-226
ACM Digital Library Link
Summary: Blur in images can create the sensation of depth because it emulates an optical property of the eye; namely, the limited depth of field created by the eye's lens. When the human eye looks at an object, this object appears sharp on the retina, but objects at different distances appear blurred. Advances in gaze-tracking technologies enable us to reproduce dynamic depth of field in regular displays, providing an alternative way of conveying depth. In this paper we investigate gaze-contingent depth of field (GC DOF) as a method to produce realistic 3D images, and analyze how effectively people can use it to perceive depth. We found that GC DOF increases subjective perceived realism and depth and can contribute to the perception of ordinal depth and distance between objects, but it is limited in its accuracy.

[10] Quantitative measurement of virtual vs. physical object embodiment through kinesthetic figural after effects Multitouch interaction / Alzayat, Ayman / Hancock, Mark / Nacenta, Miguel Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems 2014-04-26 v.1 p.2903-2912
ACM Digital Library Link
Summary: Over the past decade, multi-touch surfaces have become commonplace, with many researchers and practitioners describing the benefits of their natural, physical-like interactions. We present a pair of studies that empirically investigates the psychophysical effects of direct interaction with both physical and virtual artefacts. We use the phenomenon of Kinesthetic Figural After Effects -- a change in understanding of the physical size of an object after a period of exposure to an object of different size. Our studies show that, while this effect is robustly reproducible when using physical artefacts, this same effect does not manifest when manipulating virtual artefacts on a direct, multi-touch tabletop display. We contribute quantitative evidence suggesting a psychophysical difference in our response to physical vs. virtual objects, and discuss future research directions to explore measurable phenomena to evaluate the presence of physical-like changes from virtual on-screen objects.

[11] Transmogrification: casual manipulation of visualizations Visualization & video / Brosz, John / Nacenta, Miguel A. / Pusch, Richard / Carpendale, Sheelagh Proceedings of the 2013 ACM Symposium on User Interface Software and Technology 2013-10-08 v.1 p.97-106
ACM Digital Library Link
Summary: A transmogrifier is a novel interface that enables quick, on-the-fly graphic transformations. A region of a graphic can be specified by a shape and transformed into a destination shape with real-time, visual feedback. Both origin and destination shapes can be circles, quadrilaterals or arbitrary shapes defined through touch. Transmogrifiers are flexible, fast and simple to create and invite use in casual InfoVis scenarios, opening the door to alternative ways of exploring and displaying existing visualizations (e.g., rectifying routes or rivers in maps), and enabling free-form prototyping of new visualizations (e.g., lenses).

[12] Multi-touch pinch gestures: performance and ergonomics Touch fundamentals / Hoggan, Eve / Nacenta, Miguel / Kristensson, Per Ola / Williamson, John / Oulasvirta, Antti / Lehtiö, Anu Proceedings of the 2013 ACM International Conference on Interactive Tabletops and Surfaces 2013-10-06 p.219-222
ACM Digital Library Link
Summary: Multi-touch gestures are prevalent interaction techniques for many different types of devices and applications. One of the most common gestures is the pinch gesture, which involves the expansion or contraction of a finger spread. There are multiple uses for this gesture -- zooming and scaling being the most common -- but little is known about the factors affecting performance and ergonomics of the gesture motion itself. In this note, we present the results from a study where we manipulated angle, direction, distance, and position of two-finger pinch gestures. The study provides insight into how variables interact with each other to affect performance and how certain combinations of pinch gesture characteristics can result in uncomfortable or difficult pinch gestures. Our results can help designers select faster pinch gestures and avoid difficult pinch tasks.

[13] ITS 2013 workshop on visual adaptation of interfaces Workshops and tutorials / Dostal, Jakub / Nacenta, Miguel / Raedle, Roman / Reiterer, Harald / Stellmach, Sophie Proceedings of the 2013 ACM International Conference on Interactive Tabletops and Surfaces 2013-10-06 p.491-492
ACM Digital Library Link
Summary: This workshop proposed to bring together researchers interested in visual adaptation of interfaces. The gaze-tracking community is often constrained to visual adaptation at short distances where gaze data is reliably available. Researchers working on distance-based interfaces tend to work in room-sized environments, with wall-sized displays or multiple displays. Visual adaptation using contextual information or personalisation is relatively independent of the size of the environment but comes with its own set of challenges due to the complexities of dealing with contextual information. Even though most of these researchers are creating visually adaptive interfaces, their approaches, concerns and constraints differ. The aim of this workshop was to create an opportunity to increase awareness of the diverse research as well as for establishing areas of possible collaboration.

[14] Memorability of pre-designed and user-defined gesture sets Papers: gesture studies / Nacenta, Miguel A. / Kamber, Yemliha / Qiang, Yizhou / Kristensson, Per Ola Proceedings of ACM CHI 2013 Conference on Human Factors in Computing Systems 2013-04-27 v.1 p.1099-1108
ACM Digital Library Link
Summary: We studied the memorability of free-form gesture sets for invoking actions. We compared three types of gesture sets: user-defined gesture sets, gesture sets designed by the authors, and random gesture sets in three studies with 33 participants in total. We found that user-defined gestures are easier to remember, both immediately after creation and on the next day (up to a 24% difference in recall rate compared to pre-designed gestures). We also discovered that the differences between gesture sets are mostly due to association errors (rather than gesture form errors), that participants prefer user-defined sets, and that they think user-defined gestures take less time to learn. Finally, we contribute a qualitative analysis of the tradeoffs involved in gesture type selection and share our data and a video corpus of 66 gestures for replicability and further analysis.

[15] The effects of tactile feedback and movement alteration on interaction and awareness with digital embodiments Papers: embodied interaction 2 / Doucette, Andre / Mandryk, Regan L. / Gutwin, Carl / Nacenta, Miguel / Pavlovych, Andriy Proceedings of ACM CHI 2013 Conference on Human Factors in Computing Systems 2013-04-27 v.1 p.1891-1900
ACM Digital Library Link
Summary: Collaborative tabletop systems can employ direct touch, where people's real arms and hands manipulate objects, or indirect input, where people are represented on the table with digital embodiments. The input type and the resulting embodiment dramatically influence tabletop interaction: in particular, the touch avoidance that naturally governs people's touching and crossing behavior with physical arms is lost with digital embodiments. One result of this loss is that people are less aware of each other's arms, and less able to coordinate actions and protect personal territories. To determine whether there are strategies that can influence group interaction on shared digital tabletops, we studied augmented digital arm embodiments that provide tactile feedback or movement alterations when people touched or crossed arms. The study showed that both augmentation types changed people's behavior (people crossed less than half as often) and also changed their perception (people felt more aware of the other person's arm, and felt more awkward when touching). This work shows how groupware designers can influence people's interaction, awareness, and coordination abilities when physical constraints are absent.

[16] Multi-touch rotation gestures: performance and ergonomics Papers: mobile gestures / Hoggan, Eve / Williamson, John / Oulasvirta, Antti / Nacenta, Miguel / Kristensson, Per Ola / Lehtiö, Anu Proceedings of ACM CHI 2013 Conference on Human Factors in Computing Systems 2013-04-27 v.1 p.3047-3050
ACM Digital Library Link
Summary: Rotations performed with the index finger and thumb involve some of the most complex motor actions among common multi-touch gestures, yet little is known about the factors affecting performance and ergonomics. This note presents results from a study where the angle, direction, diameter, and position of rotations were systematically manipulated. Subjects were asked to perform the rotations as quickly as possible without losing contact with the display, and were allowed to skip rotations that were too uncomfortable. The data show surprising interaction effects among the variables, and help us identify whole categories of rotations that are slow and cumbersome for users.

[17] Sometimes when we touch: how arm embodiments change reaching and collaboration on digital tables Gesture and touch / Doucette, Andre / Gutwin, Carl / Mandryk, Regan L. / Nacenta, Miguel / Sharma, Sunny Proceedings of ACM CSCW'13 Conference on Computer-Supported Cooperative Work 2013-02-23 v.1 p.193-202
ACM Digital Library Link
Summary: In tabletop work with direct input, people avoid crossing each other's arms. This natural touch avoidance has important consequences for coordination: for example, people rarely grab the same item simultaneously, and negotiate access to the workspace via turn-taking. At digital tables, however, some situations require the use of indirect input (e.g., large tables or remote participants), and in these cases, people are often represented with virtual arm embodiments. There is little information about what happens to coordination and reaching when we move from physical to digital arm embodiments. To gather this information, we carried out a controlled study of tabletop behaviour with different embodiments. We found dramatic differences in moving to a digital embodiment: people touch and cross with virtual arms far more than they do with real arms, which removes a natural coordination mechanism in tabletop work. We also show that increasing the visual realism of the embodiment does not change behaviour, but that changing the thickness has a minor effect. Our study identifies important design principles for virtual embodiments in tabletop groupware, and adds to our understanding of embodied interaction in small groups.

[18] Factors influencing visual attention switch in multi-display user interfaces: a survey Visual Attention / Rashid, Umar / Nacenta, Miguel A. / Quigley, Aaron Proceedings of the 2012 ACM International Symposium on Pervasive Displays 2012-06-04 p.1
ACM Digital Library Link
Summary: Multi-display User Interfaces (MDUIs) enable people to take advantage of the different characteristics of different display categories. For example, combining mobile and large displays within the same system enables users to interact with user interface elements locally while simultaneously having a large display space to show data. Although there is a large potential gain in performance and comfort, there is at least one main drawback that can override the benefits of MDUIs: the visual and physical separation between displays requires that users perform visual attention switches between displays. In this paper, we present a survey and analysis of existing data and classifications to identify factors that can affect visual attention switch in MDUIs. Our analysis and taxonomy bring attention to the often ignored implications of visual attention switch and collect existing evidence to facilitate research and implementation of effective MDUIs.

[19] The LunchTable: a multi-user, multi-display system for information sharing in casual group interactions Collaborative Displays / Nacenta, Miguel A. / Jakobsen, Mikkel R. / Dautriche, Remy / Hinrichs, Uta / Dörk, Marian / Haber, Jonathan / Carpendale, Sheelagh Proceedings of the 2012 ACM International Symposium on Pervasive Displays 2012-06-04 p.18
ACM Digital Library Link
Summary: People often use mobile devices to access information during conversations in casual settings, but mobile devices are not well suited for interaction in groups. Large situated displays promise to better support access to and sharing of information in casual conversations. This paper presents the LunchTable, a multi-user system based on semi-public displays that supports such casual group interactions around a lunch table. We describe our design goals and the resulting system, as well as a weeklong study of the interaction with the system in the lunch space of a research lab. Our results show substantial use of the LunchTable for sharing visual information such as online maps and videos that are otherwise difficult to share in conversations. Also, equal simultaneous access from several users does not seem critical in casual group interactions.

[20] The cost of display switching: a comparison of mobile, large display and hybrid UI configurations User and cognitive models / Rashid, Umar / Nacenta, Miguel A. / Quigley, Aaron Proceedings of the 2012 International Conference on Advanced Visual Interfaces 2012-05-22 p.99-106
ACM Digital Library Link
Summary: Attaching a large external display can help a mobile device user view more content at once. This paper reports on a study investigating how different configurations of input and output across displays affect performance, subjective workload and preferences in map, text and photo search tasks. Experimental results show that a hybrid configuration where visual output is distributed across displays is worst or equivalent to worst in all tasks. A mobile device-controlled large display configuration performs best in the map search task and equal to best in text and photo search tasks (tied with a mobile-only configuration). After conducting a detailed analysis of the performance differences across different UI configurations, we give recommendations for the design of distributed user interfaces.

[21] FatFonts: combining the symbolic and visual aspects of numbers Visualization / Nacenta, Miguel / Hinrichs, Uta / Carpendale, Sheelagh Proceedings of the 2012 International Conference on Advanced Visual Interfaces 2012-05-22 p.407-414
ACM Digital Library Link
Summary: In this paper we explore numeric typeface design for visualization purposes. We introduce FatFonts, a technique for visualizing quantitative data that bridges the gap between numeric and visual representations. FatFonts are based on Arabic numerals but, unlike regular numeric typefaces, the amount of ink (dark pixels) used for each digit is proportional to its quantitative value. This enables accurate reading of the numerical data while preserving an overall visual context. We discuss the challenges of this approach that we identified through our design process and propose a set of design goals that include legibility, familiarity, readability, spatial precision, dynamic range, and resolution. We contribute four FatFont typefaces that are derived from our exploration of the design space that these goals introduce. Finally, we discuss three example scenarios that show how FatFonts can be used for visualization purposes as valuable representation alternatives.

[22] Workshop on Infrastructure and Design Challenges of Coupled Display Visual Interfaces: in conjunction with Advanced Visual Interfaces 2012 (AVI'12) Workshops / Quigley, Aaron / Dix, Alan / Nacenta, Miguel / Rodden, Tom Proceedings of the 2012 International Conference on Advanced Visual Interfaces 2012-05-22 p.815-817
ACM Digital Library Link
Summary: An increasing number of interactive displays of very different sizes, portability, projectability and form factors are starting to become part of the display ecosystems that we make use of in our daily lives. Displays are shaped by human activity into an ecological arrangement and thus an ecology. Each combination or ecology of displays offers substantial promise for the creation of applications that effectively take advantage of the wide range of input, affordances, and output capability of these multi-display, multi-device and multi-user environments. Although the last few years have seen an increasing amount of research in this area, knowledge about this subject remains underexplored and fragmented, and cuts across a set of related but heterogeneous issues. This workshop brings together researchers and practitioners interested in the challenges posed by infrastructure and design.

[23] The HapticTouch toolkit: enabling exploration of haptic interactions Touchy feely / Ledo, David / Nacenta, Miguel A. / Marquardt, Nicolai / Boring, Sebastian / Greenberg, Saul Proceedings of the 6th International Conference on Tangible and Embedded Interaction 2012 v.9 p.115-122
ACM Digital Library Link
Summary: In the real world, touch-based interaction relies on haptic feedback (e.g., grasping objects, feeling textures). Unfortunately, such feedback is absent in current tabletop systems. The previously developed Haptic Tabletop Puck (HTP) aims at supporting experimentation with and development of inexpensive tabletop haptic interfaces in a do-it-yourself fashion. The problem is that programming the HTP (and haptics in general) is difficult. To address this problem, we contribute the HapticTouch toolkit, which enables developers to rapidly prototype haptic tabletop applications. Our toolkit is structured in three layers that enable programmers to: (1) directly control the device, (2) create customized combinable haptic behaviors (e.g., softness, oscillation), and (3) use visuals (e.g., shapes, images, buttons) to quickly make use of these behaviors. In our preliminary exploration we found that programmers could use our toolkit to create haptic tabletop applications in a short amount of time.

[24] ToCoPlay: Graphical Multi-touch Interaction for Composing and Playing Music Sound and Smell / Lynch, Sean / Nacenta, Miguel A. / Carpendale, Sheelagh Proceedings of IFIP INTERACT'11: Human-Computer Interaction 2011-09-05 v.3 p.306-322
Keywords: Multi-touch; collaboration; composition; music; musical instrument
Link to Digital Content at Springer
Summary: With the advent of electronic music and computers, the human-sound interface is liberated from the specific physical constraints of traditional instruments, which means that we can design musical interfaces that provide arbitrary mappings between human actions and sound generation. This freedom has resulted in a wealth of new tools for electronic music generation that expand the limits of expression, as exemplified by projects such as Reactable and Bricktable. In this paper we present ToCoPlay, an interface that further explores the design space of collaborative, multi-touch music creation systems. ToCoPlay is unique in several respects: it allows creators to dynamically transition between the roles of composer and performer, it takes advantage of a flexible spatial mapping between a musical piece and the graphical interface elements that represent it, and it applies current and traditional interface interaction techniques for the creation of music.

[25] Second workshop on engineering patterns for multi-touch interfaces Workshops / Luyten, Kris / Vanacken, Davy / Weiss, Malte / Borchers, Jan / Nacenta, Miguel ACM SIGCHI 2011 Symposium on Engineering Interactive Computing Systems 2011-06-13 p.335-336
ACM Digital Library Link
Summary: Multi-touch has gained a lot of interest in the last couple of years, and the increased availability of multi-touch enabled hardware has boosted its development. However, the current diversity of hardware, toolkits, and tools for creating multi-touch interfaces has its downsides: there is little reusable material and no generally accepted body of knowledge when it comes to the development of multi-touch interfaces. This workshop is the second workshop on this topic and the workshop goal remains unchanged: to seek a consensus on methods, approaches, toolkits, and tools that aid in the engineering of multi-touch interfaces and transcend the differences in available platforms. The patterns mentioned in the title indicate that we are aiming to create a reusable body of knowledge.