
Proceedings of the 14th International Conference on Human-Computer Interaction with Mobile Devices and Services

Fullname: Proceedings of the 14th International Conference on Human-Computer Interaction with Mobile Devices and Services
Editors: Elizabeth Churchill; Sriram Subramanian; Patrick Baudisch; Kenton O'Hara
Location: San Francisco, California
Dates: 2012-Sep-21 to 2012-Sep-24
Publisher: ACM
Standard No: ISBN: 978-1-4503-1105-2; ACM DL: Table of Contents; hcibib: MOBILEHCI12
Papers: 54
Pages: 450
Links: Conference Website
Summary:It is our great pleasure to welcome you to the 2012 ACM International Conference on Human-Computer Interaction with Mobile Devices and Services -- MobileHCI 2012.
    MobileHCI is the world's leading conference in the field of Human Computer Interaction concerned with portable and personal devices and with the services to which they enable access. Mobile HCI provides a multidisciplinary forum for academics, hardware and software developers, designers and practitioners to discuss the challenges and potential solutions for effective interaction with and through mobile devices, applications, and services.
    The conference continues to attract a significant number of submissions; this year we received 212 valid paper submissions. We have continued our commitment to improving the quality of the review process. A senior program committee of 38 internationally renowned scientists from academia and industry was assembled. Each paper received 3 or more high-quality peer reviews, as well as an additional meta-review by the assigned PC member. Following last year's successful cross-Atlantic split-committee meeting, the 38 committee members assembled in two locations (Palo Alto and Berlin) that were linked by audio and video connections. This provided an opportunity for the papers and reviews to be discussed in detail and for all final decisions to be agreed upon by the Program Committee as a whole.
    The outcome of this process was that 54 of the 212 submissions were accepted (25%) for inclusion in the final Program, to be presented in San Francisco in September 2012. Of these, 39 were full papers and 15 were notes. A shepherding process was also used, in which 6 of the 54 accepted papers were revised and improved under the expert guidance of a dedicated committee member. In keeping with our commitment to continually improving the quality of the Program, 8 papers/notes were recognized for excellence by being nominated for consideration as a Best Paper. A jury consisting of 5 members of the Program Committee was established to judge which of these papers represented research of the highest caliber in the field and thus deserved the Best Paper award. The final decision is to be revealed at the conference itself.
  1. Patterns of use
  2. Touch input
  3. Panel discussion
  4. Off and around the screen
  5. Trust and privacy
  6. Body, space and motion
  7. Understanding use
  8. Mobile augmented reality
  9. Understanding touch
  10. Multiplexing
  11. Non-visual interaction
  12. Location
  13. Collaboration and sharing
  14. Learning and training

Patterns of use

Understanding tablet use: a multi-method exploration BIBAFull-Text 1-10
  Hendrik Müller; Jennifer Gove; John Webb
Tablet ownership has grown rapidly over the last year. While market research surveys have helped us understand the demographics of tablet ownership and provided early insights into usage, little comprehensive research is available. This paper describes a multi-method research effort that employed written and video diaries, in-home interviews, and contextual inquiry observations to learn about tablet use across three locations in the US. Our research provides an in-depth picture of frequent tablet activities (e.g., checking emails, playing games, social networking), locations of use (e.g., couch, bed, table), and contextual factors (e.g., watching TV, eating, cooking). It also contributes an understanding of why and how people choose to use tablets. Popular activities for tablet use, such as media consumption, shopping, cooking, and productivity, are also explored. The findings from our research provide design implications and opportunities for enriching the tablet experience, as well as agendas for future research.
Exploring iPhone usage: the influence of socioeconomic differences on smartphone adoption, usage and usability BIBAFull-Text 11-20
  Ahmad Rahmati; Chad Tossell; Clayton Shepard; Philip Kortum; Lin Zhong
Previous studies have found that smartphone usage differs across users by orders of magnitude. We explore this variability to understand how users install and use native applications in ecologically valid environments. A quasi-experimental approach is applied to compare how users in different socio-economic status (SES) groups adopt new smartphone technology, along with how applications are installed and used. We present a longitudinal study of 34 iPhone 3GS users. 24 of these participants were chosen from two carefully selected SES groups that were otherwise similar and balanced. Usage data collected through an in-device programmable logger, as well as several structured interviews, identify similarities, differences, and trends, and highlight systematic differences in smartphone usage. A group of 10 lower-SES participants was later recruited and confirmed the influence of SES diversity on device usage. Among our findings are that a large number of applications were uninstalled, that lower-SES groups spent more money on applications and installed more applications overall, and that the lowest-SES group rated the usability of their iPhones poorly in comparison to the other groups. We further discuss the primary reasons behind this low score, and suggest design implications to better support users across SES brackets.
A note paper on note-taking: understanding annotations of mobile phone calls BIBAFull-Text 21-24
  Juan Pablo Carrascal; Rodrigo de Oliveira; Mauro Cherubini
Note-taking has been studied largely in the context of work meetings. However, people often need to remember information exchanged in informal situations, such as during mobile phone conversations. In this paper we present a study conducted with 59 subjects who had their phone calls semi-automatically transcribed for later annotation. Analysis of the 621 calls and the subjects' annotation behavior revealed that phone call recall is indeed a relevant user need. Furthermore, identifying patterns in phone calls, such as numbers and names, provides better indicators of annotation than variables related to the callers' profile, context of calls, or quality of service. Our findings suggest implications for the design of mobile phone annotation tools.
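The pattern-detection idea in this abstract can be sketched minimally in code. The regular expression and function below are illustrative assumptions, not the authors' implementation:

```python
import re

# Crude, illustrative pattern for number-like sequences in a transcript
# (an assumption for this sketch; the paper's actual detectors are not given here).
NUMBER = re.compile(r"\b\d[\d\s\-]{5,}\d\b")

def annotation_worthy(transcript):
    """Flag a call transcript that contains number-like patterns,
    which the study found to be strong indicators of annotation."""
    return bool(NUMBER.search(transcript))
```

For example, a transcript containing a phone number would be flagged, while small talk would not.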
Patterns of usage and context in interaction with communication support applications in mobile devices BIBAFull-Text 25-34
  Vassilios Stefanis; Athanasios Plessas; Andreas Komninos; John Garofalakis
Contact lists are one of the most frequently used applications on mobile devices. Users are reluctant to delete or remove contacts from their repositories, and as modern smartphones provide effectively unlimited contact list storage, these lists become increasingly large, sometimes measuring several hundred entries. In this paper we present our findings from two experiments with user-subjective and quantitative data concerning the use of mobile contact lists. We examine the role that frequency and recency of usage play in determining a contact's importance, with a view to aiding the speed and efficacy of the information seeking and retrieval process during use of the contact list application.
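A score combining frequency and recency of contact usage, as studied above, can be sketched as an exponentially decayed call count. The half-life weighting below is an assumption for illustration, not the authors' model:

```python
import time

def contact_score(call_timestamps, now=None, half_life_days=30.0):
    """Sum a weight per past call that halves every `half_life_days`,
    so contacts that are both frequent and recent score highest."""
    now = time.time() if now is None else now
    seconds_per_day = 86400.0
    return sum(
        0.5 ** ((now - t) / seconds_per_day / half_life_days)
        for t in call_timestamps
    )
```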

Touch input

An exploration of inadvertent variations in mobile pressure input BIBAFull-Text 35-38
  Craig Stewart; Eve Hoggan; Laura Haverinen; Hugues Salamin; Giulio Jacucci
This paper reports the results of an exploratory study into inadvertent grip pressure changes on mobile devices with a focus on the differences between static lab-based and mobile walking environments. The aim of this research is to inform the design of more robust pressure input techniques that can accommodate dynamic mobile usage. The results of the experiment show that there are significant differences in grip pressure in static and walking conditions with high levels of pressure variation in both. By combining the pressure data with accelerometer data, we show that grip pressure is closely related to user movement.
The fat thumb: using the thumb's contact size for single-handed mobile interaction BIBAFull-Text 39-48
  Sebastian Boring; David Ledo; Xiang 'Anthony' Chen; Nicolai Marquardt; Anthony Tang; Saul Greenberg
Modern mobile devices allow a rich set of multi-finger interactions that combine modes into a single fluid act, for example, one finger for panning blending into a two-finger pinch gesture for zooming. Such gestures require the use of both hands: one holding the device while the other is interacting. While on the go, however, only one hand may be available to both hold the device and interact with it. This mostly limits interaction to single-touch input (i.e., the thumb), forcing users to switch between input modes explicitly. In this paper, we contribute the Fat Thumb interaction technique, which uses the thumb's contact size as a form of simulated pressure. This adds a degree of freedom, which can be used, for example, to integrate panning and zooming into a single interaction. Contact size determines the mode (i.e., panning with a small size, zooming with a large one), while thumb movement performs the selected mode. We discuss nuances of the Fat Thumb based on the thumb's limited operational range and motor skills when that hand holds the device. We compared Fat Thumb to three alternative techniques in a task where people had to precisely pan and zoom to a predefined region on a map, and found that the Fat Thumb technique compared well to existing techniques.
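The core mode rule of Fat Thumb can be sketched in a few lines; the threshold value and the contact-size units below are assumptions for illustration:

```python
SIZE_THRESHOLD_MM = 12.0  # assumed boundary between a "thin" and "fat" thumb contact

def select_mode(contact_size_mm):
    """A small thumb contact selects panning; a large (flat) thumb
    contact selects zooming, as in the Fat Thumb technique."""
    return "pan" if contact_size_mm < SIZE_THRESHOLD_MM else "zoom"
```

Thumb movement then drives whichever mode the contact size selected.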
The hold-and-move gesture for multi-touch interfaces BIBAFull-Text 49-58
  Alexander Kulik; Jan Dittrich; Bernd Froehlich
We present the two-finger gesture hold-and-move as an alternative to the disruptive long-tap which utilizes dwell times for switching from panning to object dragging mode in touch interfaces. We make use of a second finger for object selection and manipulation while workspace panning is operated with the first finger. Since both operations can be performed simultaneously, the cumbersome and hard-to-control autoscrolling function is no longer needed when dragging an object beyond the currently visible viewport. Single-finger panning and pinch zooming still work as expected. A user study revealed that hold-and-move enables faster object dragging than the conventional dwell-time approach and that it is preferred by most users.
Brush-and-drag: a multi-touch interface for photo triaging BIBAFull-Text 59-68
  Seon Joo Kim; Hongwei Ng; Stefan Winkler; Peng Song; Chi-Wing Fu
Due to the convenience of taking pictures with various digital cameras and mobile devices, people often end up with multiple shots of the same scene with only slight variations. To enhance photo triaging, which is a very common photowork activity, we propose an effective and easy-to-use brush-and-drag interface that allows the user to interactively explore and compare photos within a broader scene context. First, we brush to mark an area of interest on a photo with our finger(s); our tailored segmentation engine automatically determines corresponding image elements among the photos. Then, we can drag the segmented elements from different photos across the screen to explore them simultaneously, and further perform simple finger gestures to interactively rank photos, select favorites for sharing, or to remove unwanted ones. This novel interaction method was implemented on a consumer-level tablet computer and demonstrated to offer effective interactions in a user study.

Panel discussion

A longitudinal review of Mobile HCI research methods BIBAFull-Text 69-78
  Jesper Kjeldskov; Jeni Paay
This paper revisits a research methods survey from 2003 and contrasts it with a survey from 2010. The motivation is to gain insight into how mobile HCI research has evolved over the last decade in terms of approaches and focus. The paper classifies 144 publications from 2009, published in 10 prominent outlets, by their research methods and purpose. Comparing this to the survey for 2000-02 shows that mobile HCI research has changed methodologically. From being almost exclusively driven by engineering and applied research, current mobile HCI is primarily empirically driven, involves a high number of field studies, and focuses on evaluating and understanding as well as engineering. It has also become increasingly multi-methodological, combining and diversifying methods from different disciplines. At the same time, new opportunities and challenges have emerged.

Off and around the screen

EdgeSplit: facilitating the selection of off-screen objects BIBAFull-Text 79-82
  Zahid Hossain; Khalad Hasan; Hai-Ning Liang; Pourang Irani
Devices with small viewports (e.g., smartphones or GPS units) result in interfaces where objects of interest can easily reside outside the view, in off-screen space. Researchers have addressed this challenge and have proposed visual cues to assist users in perceptually locating off-screen objects. However, little attention has been paid to methods for selecting those objects. Current designs of off-screen cues can result in overlaps that make it difficult to use the cues as handles through which users can select the off-screen objects they represent. In this paper, we present EdgeSplit, a technique that facilitates both the visualization and selection of off-screen objects on small devices. EdgeSplit exploits the space around the device's borders to display proxies of off-screen objects, and then partitions the border regions into non-overlapping areas that make selection of objects easier. We present an effective algorithm that provides such partitioning and demonstrate the effectiveness of EdgeSplit for selecting off-screen objects.
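The partitioning step can be sketched as follows: project each off-screen object to an angle around the screen center, then split the border (treated here as a full circle of angles) at the midpoints between neighbouring objects, so each proxy gets a non-overlapping region. This is an illustrative reconstruction, not the paper's algorithm:

```python
import math

def partition_border(objects, center):
    """objects: (x, y) off-screen points; returns one (start, end)
    angular region in degrees per object, sorted by object angle."""
    angles = sorted(
        math.degrees(math.atan2(y - center[1], x - center[0])) % 360
        for (x, y) in objects
    )
    n = len(angles)
    bounds = []
    for i in range(n):
        a, b = angles[i - 1], angles[i]
        if i == 0:
            b += 360  # wrap around between the last and first object
        bounds.append(((a + b) / 2) % 360)
    # Region i runs from the boundary before object i to the next boundary.
    return [(bounds[i], bounds[(i + 1) % n]) for i in range(n)]
```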
Around device interaction for multiscale navigation BIBAFull-Text 83-92
  Brett Jones; Rajinder Sodhi; David Forsyth; Brian Bailey; Giuliano Maciocci
In this paper we study the design space of free-space interactions for multiscale navigation afforded by mobile depth sensors. Such interactions will have a greater working volume, more fluid control and avoid screen occlusion effects intrinsic to touch screens. This work contributes the first study to show that mobile free-space interactions can be as good as touch. We also analyze sensor orientation and interaction volume usage, resulting in strong implications for how sensors should be placed on mobile devices. We describe a user study evaluating mobile free-space navigation techniques and the impacts of sensor orientation on user experience. Finally, we discuss guidelines for future mobile free-space interaction techniques and sensor design.
Dynamic visualization of large numbers of off-screen objects on mobile devices: an experimental comparison of wedge and overview+detail BIBAFull-Text 93-102
  Stefano Burigat; Luca Chittaro; Andrea Vianello
Overview+Detail [25] and Wedge [16] have been proposed in the literature as effective approaches to resolve the off-screen objects problem on mobile devices. However, they have been studied with a small number of off-screen objects and (in most studies) with static scenarios, in which users did not have to perform any navigation activity. In this paper, we propose improvements to Wedge and Overview+Detail which are specifically aimed at simplifying their use in dynamic scenarios that involve large numbers of off-screen objects. We compare the effectiveness of the two approaches in the considered scenario with a user study, whose results show that Overview+Detail allows users to be faster in searching for off-screen objects and more accurate in estimating their location.
How to position the cursor?: an exploration of absolute and relative cursor positioning for back-of-device input BIBAFull-Text 103-112
  Khalad Hasan; Xing-Dong Yang; Hai-Ning Liang; Pourang Irani
Observational studies indicate that most people use one hand to interact with their mobile devices. Interaction on the back of devices (BoD) has been proposed to enhance one-handed input for various tasks, including selection and gesturing. However, we do not possess a good understanding of some fundamental issues related to one-handed BoD input. In this paper, we attempt to fill this gap by conducting three studies. The first study explores suitable selection techniques; the second study investigates the performance and suitability of the two main modes of cursor movement: Relative and Absolute; and the last study examines solutions to the problem of reaching the lower part of the device. Our results indicate that for BoD interaction, relative input is more efficient and accurate for cursor positioning and target selection than absolute input. Based on these findings, we provide guidelines for designing BoD interactions for mobile devices.

Trust and privacy

Soft trust and mCommerce shopping behaviours BIBAFull-Text 113-122
  Serena Hillman; Carman Neustaedter; John Bowes; Alissa Antle
Recently, there has been widespread growth of shopping and buying on mobile devices, termed mCommerce. With this comes a need to understand how to best design experiences for mobile shopping. To help address this, we conducted a diary and interview study with mCommerce shoppers who have already adopted the technology and shop on their mobile devices regularly. Our study explores typical mCommerce routines and behaviours, along with issues of soft trust, which has long been a concern in eCommerce. Our results describe spontaneous purchasing and routine shopping behaviours where people gravitate to their mobile device even if a computer is nearby. We found that participants faced few trust issues because they had limited access to unknown companies. In addition, app marketplaces and recommendations from friends offered a form of brand protection. These findings suggest that companies can decrease trust issues by tying mCommerce designs to friend networks and known marketplaces. The caveat for shoppers, however, is that they can be easily lured into a potentially false sense of trust.
Context-aware, technology enabled social contribution for public safety using M-Urgency BIBAFull-Text 123-132
  Shivsubramani Krishnamoorthy; Ashok Agrawala
M-Urgency is a public safety system that (1) redefines how emergency calls are made to a Public Safety Answering Point (PSAP), such as the 911 system, and (2) is designed to be aware of the context in which it is used. M-Urgency enables mobile users to stream live audio and video from their devices to the local PSAP, along with their real-time location and relevant context information, enabling appropriate and prompt service. This paper presents a new feature, incorporated in M-Urgency, that enables social contribution: users in the vicinity of an emergency event can support the operations conducted by emergency personnel by supplying useful verbal/visual information or by providing assistance. Our experiments show a very positive response from participants to the capabilities of our system. In this paper, we also discuss the social implications of the system, such as privacy and security.
"There are no secrets here!": professional stakeholders' views on the use of GPS for tracking dementia patients BIBAFull-Text 133-142
  Yngve Dahl; Kristine Holbø
This paper investigates the attitudes of professional stakeholders involved in dementia care toward GPS tracking of dementia patients. Data were gathered via focus groups that met in the context of a field experiment in which patients' spatial activities were tracked using GPS. Four main topics emerged: (1) different perspectives on the purpose of the measure; (2) privacy concerns; and two underlying premises for employing GPS technology in professional care: (3) knowledge about patients and (4) routines for use.
   Our findings highlight the need to consider carefully which aspects of dementia patients' movements a GPS tracking system should provide to care workers, and how positioning information should be presented. We found that the level of detail required is intimately linked to the purpose of use. Positioning data that were regarded as being irrelevant for the immediate situation could be perceived as violations of patient privacy and damaging for the system's efficiency.
But I don't trust my friends: ecofriends -- an application for reflective grocery shopping BIBAFull-Text 143-146
  Jakob Tholander; Anna Ståhl; Mattias Jacobsson; Lisen Schultz; Sara Borgström; Maria Normark; Elsa Kosmack-Vaara
The Ecofriends application was designed to encourage people to reflect on their everyday grocery shopping from social and ecological perspectives. Ecofriends portrays the seasonality of various grocery products as being socially constructed, emphasizing subjective dimensions of what it means for a product to be in season, rather than attempting to communicate it as an established fact. It provides the user with unexpected information (news, weather, blog posts and tweets) about the place where the product was grown, and visualises how the product's popularity shifts throughout the year among the user's friends, among chefs and other food experts, and among the general public. Key findings from users' first encounters with the system are presented. In particular, we discuss aspects of trust, information fragments as catalysts, and how several of the participants were challenged by the system's portrayal of seasonality.

Body, space and motion

A recognition safety net: bi-level threshold recognition for mobile motion gestures BIBAFull-Text 147-150
  Matei Negulescu; Jaime Ruiz; Edward Lank
Designers of motion gestures for mobile devices face the difficult challenge of building a recognizer that can separate gestural input from motion noise. A threshold value is often used to classify motion and effectively balance the rates of false positives and false negatives. We present a bi-level threshold recognition technique designed to lower the rate of recognition failures by accepting either a tightly thresholded gesture or two consecutive possible gestures recognized by a relaxed model. Evaluation demonstrates that the technique can aid recognition for users who have trouble performing motion gestures. Lastly, we suggest the use of bi-level thresholding to scaffold the learning of gestures.
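The bi-level rule described here can be sketched directly; the score representation and the threshold values are assumptions for illustration, not the paper's recognizer:

```python
TIGHT = 0.90    # strict recognizer threshold (assumed value)
RELAXED = 0.70  # relaxed recognizer threshold (assumed value)

def recognize(scores):
    """Return the index at which a gesture is accepted: either one
    score clears the tight threshold, or two consecutive scores clear
    the relaxed threshold. Returns None if nothing is accepted."""
    prev_relaxed = False
    for i, s in enumerate(scores):
        if s >= TIGHT:
            return i
        if s >= RELAXED:
            if prev_relaxed:
                return i
            prev_relaxed = True
        else:
            prev_relaxed = False
    return None
```

A single confident attempt is accepted immediately, while borderline attempts are accepted only when repeated back-to-back.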
Extending a mobile device's interaction space through body-centric interaction BIBAFull-Text 151-160
  Xiang 'Anthony' Chen; Nicolai Marquardt; Anthony Tang; Sebastian Boring; Saul Greenberg
Modern mobile devices rely on the screen as a primary input modality. Yet the small screen real-estate limits interaction possibilities, motivating researchers to explore alternate input techniques. Within this arena, our goal is to develop Body-Centric Interaction with Mobile Devices: a class of input techniques that allow a person to position and orient her mobile device to navigate and manipulate digital content anchored in the space on and around the body. To achieve this goal, we explore such interaction in a bottom-up path of prototypes and implementations. From our experiences, as well as by examining related work, we discuss and present three recurring themes that characterize how these interactions can be realized. We illustrate how these themes can inform the design of Body-Centric Interactions by applying them to the design of a novel mobile browser application. Overall, we contribute a class of mobile input techniques where interactions are extended beyond the small screen, and are instead driven by a person's movement of the device on and around the body.
Tilt displays: designing display surfaces with multi-axis tilting and actuation BIBAFull-Text 161-170
  Jason Alexander; Andrés Lucero; Sriram Subramanian
We present a new type of actuatable display, called the Tilt Display, that provides visual feedback combined with multi-axis tilting and vertical actuation. The ability of Tilt Displays to physically mutate provides users with an additional information channel that facilitates a range of new applications, including collaboration and tangible entertainment, while enhancing familiar applications such as terrain modelling by allowing 3D scenes to be rendered in a physical-3D manner. Through a custom-built mobile 3x3 prototype, we examine the design space around Tilt Displays, categorise output modalities, and conduct two user studies. The first, an exploratory study, examines users' initial impressions of Tilt Displays and probes potential interactions and uses. The second takes a quantitative approach to understanding interaction possibilities with such displays, resulting in the production of two user-defined gesture sets: one for manipulating the surface of the Tilt Display, the second for conducting everyday interactions.
m+pSpaces: virtual workspaces in the spatially-aware mobile environment BIBAFull-Text 171-180
  Jessica Cauchard; Markus Löchtefeld; Mike Fraser; Antonio Krüger; Sriram Subramanian
We introduce spatially-aware virtual workspaces for the mobile environment. The notion of virtual workspaces was initially conceived to alleviate mental workload in desktop environments with limited display real-estate. Using spatial properties of mobile devices, we translate this approach and illustrate that mobile virtual workspaces greatly improve task performance for mobile devices. In a first study, we compare our spatially-aware prototype (mSpaces) to existing context switching methods for navigating amongst multiple tasks in the mobile environment. We show that users are faster, make more accurate decisions and require less mental and physical effort when using spatially-aware prototypes. We furthermore prototype pSpaces and m+pSpaces, two spatially-aware systems equipped with pico-projectors as auxiliary displays to provide dual-display capability to the handheld device. A final study reveals advantages of each of the different configurations and functionalities when comparing all three prototypes. Drawing on these findings, we identify design considerations to create, manipulate and manage spatially-aware virtual workspaces in the mobile environment.

Understanding use

Creative cameraphone use in rural developing regions BIBAFull-Text 181-190
  David Frohlich; Simon Robinson; Kristen Eglinton; Matt Jones; Elina Vartiainen
In this paper we consider the current and future use of cameraphones in the context of rural South Africa, where many people do not have access to the latest models and ICT infrastructure is poor. We report a new study of cameraphone use in this setting, and the design and testing of a novel application for creating rich multimedia narratives and materials. We argue for better creative media applications on mobile platforms in this region, and greater attention to their local use.
AutoWeb: automatic classification of mobile web pages for revisitation BIBAFull-Text 191-200
  Jie Liu; Wenchang Xu; Yuanchun Shi
Revisitation in mobile Web browsers takes more time than in desktop browsers due to the limitations of mobile phones. In this paper, we propose AutoWeb, a novel approach to speed up revisitation in mobile Web browsing. In AutoWeb, opened Web pages are automatically classified into groups based on their contents. Users can revisit an opened Web page more quickly by narrowing the search scope to a group of pages that share the same topic. We evaluated the classification accuracy, which reached 92.4%. Three experiments were conducted to investigate revisitation performance in three specific tasks. Results show that AutoWeb can reduce revisitation time significantly (by 29.5%), especially for long Web browsing sessions, and that it improves the overall mobile Web revisitation experience. We also compare automatic classification with other revisitation methods.
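Content-based grouping of opened pages, as described above, can be sketched with simple term-vector similarity. The tokenization, similarity measure and threshold are assumptions for illustration, not AutoWeb's classifier:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_pages(pages, threshold=0.3):
    """pages: list of page texts; returns groups of page indices,
    where a page joins the first group it is similar enough to."""
    groups, reps = [], []
    for i, text in enumerate(pages):
        vec = Counter(text.lower().split())
        for group, rep in zip(groups, reps):
            if cosine(vec, rep) >= threshold:
                group.append(i)
                rep.update(vec)  # fold the page into the group profile
                break
        else:
            groups.append([i])
            reps.append(vec)
    return groups
```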
Enough power to move: dimensions for representing energy availability BIBAFull-Text 201-210
  Anders Lundström; Cristian Bogdan; Filip Kis; Ingvar Olsson; Lennart Fahlén
Energy, and the design of energy feedback, is becoming increasingly important in the mobile HCI community. Our application area concerns electric vehicles; we thus depart from home and workplace appliances and address the range and energy anxiety caused by short driving range and long charging times in mobile settings. While some research has been done on the energy management of mobile devices, less has been done on mobility devices such as electric vehicles. We explore this topic by letting conventional fuel car drivers reflect on their current driving habits through an exploration tool that we developed. Our results reveal three dimensions of energy availability to consider in the design of energy-dependent mobility devices, and explain how these dimensions could be utilized in our design through energy visualizations. With this we contribute not only by demonstrating aspects of energy availability and mobility, but also by opening up interesting new possibilities and inquiries in our domain, and possibly others.

Mobile augmented reality

Revisiting peephole pointing: a study of target acquisition with a handheld projector BIBAFull-Text 211-220
  Bonifaz Kaufmann; David Ahlström
Peephole pointing is a promising interaction technique for large workspaces that contain more information than can be appropriately displayed on a single screen. In peephole pointing a window to the virtual workspace is moved in space to reveal additional content. In 2008, two different models for peephole pointing were discussed. Cao, Li and Balakrishnan proposed a two-component model, whereas Rohs and Oulasvirta investigated a similar model, but concluded that Fitts' law is sufficient for predicting peephole pointing performance. We present a user study performed with a handheld projector showing that Cao et al.'s model only outperforms Fitts' law in prediction accuracy when different peephole sizes are used and users have no prior knowledge of target location. Nevertheless, Fitts' law succeeds under the conditions most likely to occur. Additionally, we show that target overshooting is a key characteristic of peephole pointing and present the implementation of an orientation aware handheld projector that enables peephole interaction without instrumenting the environment.
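The Fitts' law baseline referred to in this abstract is the standard Shannon formulation, MT = a + b log2(D/W + 1); a minimal sketch (the constants a and b are illustrative):

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    """Predicted movement time (seconds) for a target at `distance`
    with size `width`, using the Shannon formulation of Fitts' law."""
    return a + b * math.log2(distance / width + 1)
```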
Mobile augmented reality: exploring design and prototyping techniques BIBAFull-Text 221-230
  Marco de Sá; Elizabeth Churchill
As mobile devices are enhanced with more sensors, powerful embedded cameras, and increased processing power and features, new user experiences become possible. A good example is the recent emergence of Augmented Reality (AR) applications that are designed for personal use while people are on-the-go. However, designing effective and usable AR experiences for mobile devices poses challenges for the design process. In this paper we outline reasons why simulating a compelling, mobile AR experience with sufficient veracity for effective formative design is a challenge, and present our work on prototyping and evaluation techniques for mobile AR. An experiment within the context of an ongoing design project (Friend Radar) is presented along with resulting findings and guidelines. We reflect on the benefits and drawbacks of low, mixed and high fidelity prototypes for mobile AR by framing them into a set of analytic categories extracted from the existing literature on prototyping and design.
Playing it real: magic lens and static peephole interfaces for games in a public space BIBAFull-Text 231-240
  Jens Grubert; Ann Morrison; Helmut Munz; Gerhard Reitmayr
Magic lens and static peephole interfaces are used in numerous consumer mobile phone applications such as Augmented Reality browsers, games or digital map applications in a variety of contexts including public spaces. Interface performance has been evaluated for various interaction tasks involving spatial relationships in a scene. However, interface usage outside laboratory conditions has not been considered in depth in the evaluation of these interfaces.
   We present findings about the usage of magic lens and static peephole interfaces for playing a find-and-select game in a public space and report on the reactions of the public audience to participants' interactions.
   Contrary to our expectations, participants favored the magic lens over the static peephole interface despite tracking errors, fatigue and potentially conspicuous gestures. Most passersby did not pay attention to the participants and vice versa. A comparative laboratory experiment revealed only a few differences in system usage.
Integrating the physical environment into mobile remote collaboration BIBAFull-Text 241-250
  Steffen Gauglitz; Cha Lee; Matthew Turk; Tobias Höllerer
We describe a framework and prototype implementation for unobtrusive mobile remote collaboration on tasks that involve the physical environment. Our system uses the Augmented Reality paradigm and model-free, markerless visual tracking to facilitate decoupled, live updated views of the environment and world-stabilized annotations while supporting a moving camera and unknown, unprepared environments. In order to evaluate our concept and prototype, we conducted a user study with 48 participants in which a remote expert instructed a local user to operate a mock-up airplane cockpit. Users performed significantly better with our prototype (40.8 tasks completed on average) as well as with static annotations (37.3) than without annotations (28.9). 79% of the users preferred our prototype despite noticeably imperfect tracking.

Understanding touch

Touch behavior with different postures on soft smartphone keyboards BIBAFull-Text 251-260
  Shiri Azenkot; Shumin Zhai
Text entry on smartphones is far slower and more error-prone than on traditional desktop keyboards, despite sophisticated detection and auto-correct algorithms. To strengthen the empirical and modeling foundation of smartphone text input improvements, we explore touch behavior on soft QWERTY keyboards when used with two thumbs, an index finger, and one thumb. We collected text entry data from 32 participants in a lab study and describe touch accuracy and precision for different keys. We found that distinct patterns exist for input among the three hand postures, suggesting that keyboards should adapt to different postures. We also discovered that participants' touch precision was relatively high given typical key dimensions, but there were pronounced and consistent touch offsets that can be leveraged by keyboard algorithms to correct errors. We identify patterns in our empirical findings and discuss implications for design and improvements of soft keyboards.
Digging unintentional displacement for one-handed thumb use on touchscreen-based mobile devices BIBAFull-Text 261-270
  Wenchang Xu; Jie Liu; Chun Yu; Yuanchun Shi
There is usually an unnoticed displacement on the screen between a finger's initial contact and its final lift-off when users tap on touchscreen-based mobile devices, which may affect users' target selection accuracy, gesture performance, etc. In this paper, we characterize this phenomenon as unintentional displacement and model it under both static and dynamic scenarios. We then conducted two user studies to understand unintentional displacement for the widely-adopted one-handed thumb use on touchscreen-based mobile devices under both scenarios. Our findings shed light on four questions: 1) what factors affect unintentional displacement; 2) what is the distance range of the displacement; 3) how the distance varies over time; and 4) how the unintentional points are distributed around the initial contact point. These results not only explain certain touch inaccuracies, but also provide an important reference for the optimization and future design of UI components, gestures, input techniques, etc.
PinyinPie: a pie menu augmented soft keyboard for Chinese pinyin input methods BIBAFull-Text 271-280
  Ying Liu; Xiantao Chen; Lingzhi Wang; Hequan Zhang; Shen Li
Soft keyboards for Chinese pinyin input methods are rarely studied, although pinyin is one of the default input methods on devices with touch screens. Through an analysis of digraph frequencies in the pinyin system, we discovered a unique characteristic: after the leading letter of a pinyin syllable, only 10 Roman letters are needed for the subsequent characters. Making use of this feature and existing knowledge on soft keyboard layout optimization, pie menus and ShapeWriter, we designed a pie menu augmented keyboard. We conducted a user study with a working prototype to test whether the pie menu increases user performance. We found that after about two hours' use of the pie menu augmented quasi-QWERTY keyboard, users reached a speed of 25 Chinese characters per minute with a slightly lower error rate. Moreover, users remembered the layout of the pie menu well after about two hours' use.

Multiplexing

An empirical investigation into how users adapt to mobile phone auto-locks in a multitask setting BIBAFull-Text 281-290
  Duncan Brumby; Vahab Seyedi
Auto-locks are a necessary feature on many modern-day mobile devices, but can they sometimes have detrimental consequences? In this paper we investigate how auto-locks can affect behavior in a demanding multitasking scenario. A study is conducted in which participants had to enter text using a touch-screen interface while driving a simulated vehicle in a lab setting. Different auto-lock mechanisms were implemented on the secondary device, manipulating both the duration of the lockout threshold (i.e., the period of inactivity before the auto-lock was initiated) and the complexity of the unlock procedure (i.e., how easy it was for the user to unlock the device once it had locked). Results showed that lane-keeping performance on the primary driving task was worse when there was a shorter lockout threshold. The reason for this was two-fold: (1) participants took fewer long pauses between typing actions, so as to avoid being locked out of the device, and (2) when the device did lock, unlocking it took time and further distracted the driver. In support of this latter finding, we also found that a more complex unlock procedure, which required a pin code to be entered, resulted in worse lane-keeping performance than when the device could be unlocked by making a simple button press. These findings suggest that auto-locks can dissuade users from regularly interleaving attention between other ongoing activities. Designers should keep this in mind when incorporating auto-locks in mobile devices.
Back to the app: the costs of mobile application interruptions BIBAFull-Text 291-294
  Luis Leiva; Matthias Böhmer; Sven Gehring; Antonio Krüger
Smartphone users might be interrupted while interacting with an application, either by intended or unintended circumstances. In this paper, we report on a large-scale observational study that investigated mobile application interruptions in two scenarios: (1) intended back and forth switching between applications and (2) unintended interruptions caused by incoming phone calls. Our findings reveal that these interruptions rarely happen (at most 10% of the daily application usage), but when they do, they may introduce a significant overhead (can delay completion of a task by up to 4 times). We conclude with a discussion of the results, their limitations, and a series of implications for the design of mobile phones.
Visual search on a mobile device while walking BIBAFull-Text 295-304
  Ji Jung Lim; Cary Feria
As smartphone usage increases, safety concerns have arisen. Previous research suggests that using mobile devices while walking impairs cognition. Mobile user interfaces designed not to require users' full attention may mitigate these safety concerns. The primary focus of this research was the perception process during visual search, rather than the physical target selection by finger tapping on which most previous research has focused. The effects of object size, contrast, and target location on mobile devices were examined while walking and standing. A serial visual search using "T" and "L" shapes on a mobile device was conducted, which controlled for the involvement of physical target selection. The results showed that walking, bigger object size, and target positions in the outer area of the mobile device display slowed visual search reaction time. This suggests that mobile interfaces can be improved through proper object sizing and placement.
Bridging waiting times on web pages BIBAFull-Text 305-308
  Florian Alt; Alireza Sahami Shirazi; Albrecht Schmidt; Richard Atterer
High-speed Internet connectivity makes browsing a convenient task. However, there are many situations in which surfing the web is still slow due to limited bandwidth, slow servers, or complex queries. As a result, loading web pages can take several seconds, making (mobile) browsing cumbersome. We present an approach which makes use of the time spent on waiting for the next page, by bridging the wait with extra cached or preloaded content. We show how the content (e.g., news, Twitter) can be adapted to the user's interests and to the context of use, hence making mobile surfing more comfortable. We compare two approaches: in time-multiplex mode, the entire screen displays bridging content until the loading is finished. In space-multiplex mode, content is displayed alongside the requested content while it loads. We use an HTTP proxy to intercept requests and add JavaScript code, which allows the bridging content from websites of our choice to be inserted. The approach was evaluated with 15 participants, assessing suitable content and usability.

Non-visual interaction

Thermal icons: evaluating structured thermal feedback for mobile interaction BIBAFull-Text 309-312
  Graham Wilson; Stephen Brewster; Martin Halvey; Stephen Hughes
This paper expands the repertoire of non-visual feedback for mobile interaction, established through Earcons and Tactons, by designing structured thermal cues for conveying information. Research into the use of thermal feedback for HCI has not looked beyond basic 'yes-no' detection of stimuli to the unique identification of those stimuli. We first designed thermal icons that varied along two parameters to convey two pieces of information. We also designed intramodal tactile icons, combining one thermal and one vibrotactile parameter, to test perception of different tactile cues and so evaluate the possibility of augmenting vibrotactile displays with thermal feedback. Thermal icons were identified with 82.8% accuracy, while intramodal icons had 96.9% accuracy, suggesting thermal icons are a viable means of conveying information in mobile HCI when audio and/or vibrotactile feedback is not suitable or desired.
Touching the micron: tactile interactions with an optical tweezer BIBAFull-Text 313-316
  Stuart Lamont; Richard Bowman; Matthias Rath; John Williamson; Roderick Murray-Smith; Miles Padgett
A tablet interface for manipulating microscopic particles is augmented with vibrotactile and audio feedback. The feedback is generated using a novel real-time synthesis library based on approximations to physical processes, and is efficient enough to run on mobile devices, despite their limited computational power. The feedback design and usability testing was done with a realistic simulator on appropriate tasks, allowing users to control objects more rapidly, with fewer errors and applying more consistent forces. The feedback makes the interaction more tangible, giving the user more awareness of changes in the characteristics of the optical tweezers as the number of optical traps changes.
An evaluation of BrailleTouch: mobile touchscreen text entry for the visually impaired BIBAFull-Text 317-326
  Caleb Southern; James Clawson; Brian Frey; Gregory Abowd; Mario Romero
We present the evaluation of BrailleTouch, an accessible keyboard for blind users on touchscreen smartphones. Based on the standard Perkins Brailler, BrailleTouch implements a six-key chorded braille soft keyboard. Eleven blind participants typed for 165 twenty-minute sessions on three mobile devices: 1) BrailleTouch on a smartphone; 2) a soft braille keyboard on a touchscreen tablet; and 3) a commercial braille keyboard with physical keys. Expert blind users averaged 23.2 words per minute (wpm) on the BrailleTouch smartphone. The fastest participant, a touchscreen novice, achieved 32.1 wpm during his first session. Overall, participants were able to transfer their existing braille typing skills to a touchscreen device within an hour of practice. We report the speed for braille text entry on three mobile devices, an in-depth error analysis, and the lessons learned for the design and evaluation of accessible and eyes-free soft keyboards.
PocketMenu: non-visual menus for touch screen devices BIBAFull-Text 327-330
  Martin Pielot; Anastasia Kazakova; Tobias Hesselmann; Wilko Heuten; Susanne Boll
We present PocketMenu, a menu optimized for non-visual, in-pocket interaction with menus on handheld devices with touch screens. By laying out menu items along the border of the touch screen, its tactile features guide the interaction. Additional vibro-tactile feedback and speech allow users to identify individual menu items non-visually. In an experiment, we compared PocketMenu with iPhone's VoiceOver. Participants had to control an MP3 player while walking down a road with the device in their pocket. The results provide evidence that in this context PocketMenu outperforms VoiceOver in terms of completion time, selection errors, and usability. Hence, it enables usage of touch screen apps in mobile contexts (e.g. walking, hiking, or skiing) and limited interaction spaces (e.g. device resting in a pocket).
Clicking blindly: using spatial correspondence to select targets in multi-device environments BIBAFull-Text 331-334
  Krzysztof Pietroszek; Edward Lank
We propose spatial correspondence targeting to support interaction between devices in multi-device environments when network connectivity fails. In spatial correspondence targeting, for a given target on surface A, an end-user envisions the relative position of that target on surface B and interacts on surface B without any visual depiction of the target on surface B. The targeting task relies on human spatial visualization ability, i.e. the ability to relate the spatial position of objects on one display to their scale-invariant position on another display. We provide experimental evidence that demonstrates that users may be able to target up to 25 discrete targets using a smartphone screen even in the absence of a depiction of the target on the smartphone screen. We argue that the accuracy of spatial correspondence targeting is sufficient for the technique to have many practical applications.

Location

A real-world study of an audio-tactile tourist guide BIBAFull-Text 335-344
  Delphine Szymczak; Kirsten Rassmus-Gröhn; Charlotte Magnusson; Per-Olof Hedvall
This paper reports on the in-context evaluation of an audio-tactile interactive tourist guide -- one test was done in a medieval city center, and the other was done at an archaeological site. The activity theory framework was used as a perspective to guide design, field-study and analysis. The evaluation shows that the guide allows users to experience an augmented reality, while keeping the environment in focus (in contrast with the common key-hole like experience that on-screen augmented reality generates). The evaluation also confirms the usefulness of extending the vibrational feedback to convey distance information as well as directional information.
TraceViz: "brushing" for location based services BIBAFull-Text 345-348
  Yung-Ju Chang; Pei-Yao Hung; Mark Newman
The popularization of Location Based Services (LBS) has created new challenges for interaction designers in validating the design of their applications. Existing tools designed to play back GPS location trace data streams have shown potential for testing LBS applications and for supporting rapid and reflective prototyping. However, selecting a useful set of location traces from among a large collection remains a difficult task. In this paper, we present TraceViz, the first system aimed specifically at supporting LBS designers in exploring, filtering, and selecting location traces. TraceViz employs dynamic queries and "brushing" to allow LBS designers to flexibly adjust their trajectory filter criteria to find location traces of interest. An evaluation performed with eight LBS designers and developers indicates that TraceViz is helpful for rapidly locating useful traces and also highlights areas for future improvement.
Urban exploration using audio scents BIBAFull-Text 349-358
  Andreas Komninos; Peter Barrie; Vassilios Stefanis; Athanasios Plessas
We describe the design and evaluation of an audio-based mixed reality navigation system that uses the concept of audio scents for the implicit guidance of tourists and visitors of urban areas, as an alternative to turn-by-turn guidance systems. A field trial of our prototype uncovers great potential for this type of implicit navigation and is received positively by our participants. We discuss the technical implementation of our prototype, detailed findings from quantitative and subjective evaluation data gathered during the field trial and highlight possible strands for further research and development.
iFitQuest: a school based study of a mobile location-aware exergame for adolescents BIBAFull-Text 359-368
  Andrew Macvean; Judy Robertson
Exergames, games that encourage and facilitate physical exercise, are growing in popularity thanks to progressions in ubiquitous technologies. While initial findings have confirmed the potential of such games, little research has been done on systems which target the needs of adolescent children. In this paper we introduce iFitQuest, a mobile location-aware exergame designed with adolescent children in mind. In an attempt to understand how exergames can be used to target adolescent children, and whether they can be effective for this demographic, we outline the results of a school based field study conducted within a P.E. class. Through a detailed analysis of our results, we conclude that iFitQuest appeals to twelve to fifteen year olds and causes them to exercise at moderate to vigorous levels. However, in order to develop effective systems that can dynamically adapt to the adolescent users, further research into different categories of users' behavior is required.
Tacticycle: supporting exploratory bicycle trips BIBAFull-Text 369-378
  Martin Pielot; Benjamin Poppinga; Wilko Heuten; Susanne Boll
Going on excursions to explore unfamiliar environments by bike is a popular activity in many places in the world. To investigate the nature of exploratory bicycle trips, we studied tourists on their excursions on a famous vacation island. We found that existing navigation systems are either not helpful or discourage exploration. We therefore propose Tacticycle, a conceptual prototype of a user interface for a bicycle navigation system. Relying on a minimal set of navigation cues, it helps users stay oriented while supporting spontaneous navigation and exploration at the same time. In cooperation with a bike rental shop, we rented the Tacticycle prototype to tourists who took it on their actual excursions. The results show that they always felt oriented and encouraged to playfully explore the island, providing a rich, yet relaxed travel experience. On the basis of these findings, we argue that exploratory trips can be well supported by providing only minimal navigation cues.

Collaboration and sharing

Displaying mobile feedback during a presentation BIBAFull-Text 379-382
  Jaime Teevan; Daniel Liebling; Ann Paradiso; Carlos Garcia Jurado Suarez; Curtis von Veh; Darren Gehring
Smartphone use in presentations is often seen as distracting to the audience and speaker. However, phones can encourage people to participate more fully in what is going on around them and build stronger ties with their companions. In this paper, we describe a smartphone interface designed to help audience members engage fully in a presentation by providing real time mobile feedback. This feedback is then aggregated and reflected back to the group via a projected visualization, with notifications provided to the presenter and the audience on interesting feedback events. We deployed this system in a large enterprise meeting, and collected information about the attendees' experiences with it via surveys and interaction logs. Participants reported that providing mobile feedback was convenient, helped them pay close attention to the presentation, and enabled them to feel connected with other audience members.
MobiComics: collaborative use of mobile phones and large displays for public expression BIBAFull-Text 383-392
  Andrés Lucero; Jussi Holopainen; Tero Jokela
We explore shared collocated interactions with mobile phones and public displays in an indoor public place. We introduce MobiComics, an application that allows a group of collocated persons to flexibly create and edit comic strip panels using their mobile phones. The prototype supports ad hoc sharing of comic strip panels between people and onto two public displays by taking the spatial arrangement of people into account, measured with a radio tracking technology integrated in their mobile phones. MobiComics also includes game-like elements to foster social interaction between participants. Our evaluations show that people enjoyed creating panels collaboratively and sharing content using the proposed interaction techniques. The included game-like features positively influenced social interaction.
I wanted to settle a bet!: understanding why and how people use mobile search in social settings BIBAFull-Text 393-402
  Karen Church; Antony Cousin; Nuria Oliver
Recent work in mobile computing has highlighted that conversations and social interactions have a significant impact on mobile Web and mobile search behaviours. To date, however, this social element has not been explored fully and little is known about why and how mobile users search for information in social settings. The goal of this work is to provide a deeper understanding of social mobile search behaviours so that we may improve future mobile search experiences that involve a social component. To this end we present the results of two studies: a survey involving almost 200 users and a two-week diary and follow-up interview study of 20 users. Our results extend past research in the mobile search space, by exploring the motivations, circumstances and experiences of using mobile search in social settings to satisfy group information needs. Our findings point to a number of open research challenges and implications for enriching the search experiences of mobile users.
Degrees of sharing: proximate media sharing and messaging by young people in khayelitsha BIBAFull-Text 403-412
  Marion Walton; Gary Marsden; Silke Haßreiter; Sena Allen
This paper explores the phone and mobile media sharing relationships of a group of young mobile phone users in Khayelitsha, South Africa. Intensive sharing took place within peer and intimate relationships, while resource sharing characterized relationships with a more extensive circle, including members of the older generation. Phones were kept open to others to avoid inferences of stinginess, disrespect, or secretiveness and the use of privacy features (such as passwords) was complicated by conflicts between an ethos of mutual support and the protection of individual property and privacy. Collocated phone use trumped online sharing but media on phones constituted public personae similar to social media 'profiles'. Proximate sharing within close relationships allowed social display, relationship-building and deference to authority. We suggest changes to current file-based interfaces for Bluetooth pairing, media 'galleries', and peer-to-peer text communication to better support such proximate exchanges of media and messaging.
Investigating collaborative annotation on slate pcs BIBAFull-Text 413-416
  Jennifer Pearson; George Buchanan; Harold Thimbleby
Mobile reading is becoming ever more popular with the introduction of eInk devices such as the Kindle, as well as the many reading applications available on slate PCs and cellular handsets. The portable nature and large storage capacity of these modern mobile devices is making reading a more technology-orientated activity. One aspect of mobile reading that has been given surprisingly little attention is collective reading -- which is a common activity with paper documents. We investigate the support of group reading using slate PCs, focussing on collective annotation. In the past, desktop PCs have proved inferior in many ways for reading, when compared to paper. Notably, user evaluations of our new system, BuddyBooks, demonstrate that the slate PC form factor can, in contrast, provide advantages for group reading. While annotation practices change with the new format, coordination within the group can be improved when touch-interaction is carefully exploited.

Learning and training

An investigation into the use of tactile instructions in snowboarding BIBAFull-Text 417-426
  Daniel Spelmezan
In many sports, athletes are spatially separated from their coach while practicing an exercise. This spatial separation makes learning new skills arduous because the coach cannot give instructions or feedback on performance. We present the findings of an in-the-wild study that demonstrate the potential for teaching sport skills with real-time tactile instructions. We focused on snowboard training. Ten amateurs learned a riding technique with a wearable system that automatically provided tactile instructions during descents. These instructions were in sync with the movements of the snowboard and signaled how to move the body. We found that tactile instructions could help snowboarders to improve their skills. We report insights into the snowboarders' opinions and give recommendations for teaching sport skills with tactile instructions. Our findings help to identify the conditions under which tactile instructions can support athletes in sports training.
Tip tap tones: mobile microtraining of mandarin sounds BIBAFull-Text 427-430
  Darren Edge; Kai-Yin Cheng; Michael Whitney; Yao Qian; Zhijie Yan; Frank Soong
Learning a second language is hard, especially when the learner's brain must be retrained to identify sounds not present in his or her native language. It also requires regular practice, but many learners struggle to find the time and motivation. Our solution is to break down the challenge of mastering a foreign sound system into minute-long episodes of "microtraining" delivered through mobile gaming. We present the example of Tip Tap Tones -- a mobile game with the purpose of helping learners acquire the tonal sound system of Mandarin Chinese. In a 3-week, 12-user study of this system, we found that an average of 71 minutes' gameplay significantly improved tone identification by around 25%, regardless of whether the underlying sounds had been used to train tone perception. Overall, results suggest that mobile microtraining is an efficient, effective, and enjoyable way to master the sounds of Mandarin Chinese, with applications to other languages and domains.
MemReflex: adaptive flashcards for mobile microlearning BIBAFull-Text 431-440
  Darren Edge; Stephen Fitchett; Michael Whitney; James Landay
Flashcard systems typically help students learn facts (e.g., definitions, names, and dates), relying on intense initial memorization with subsequent tests delayed up to days later. This approach does not exploit the short, sparse, and mobile opportunities for microlearning throughout the day, nor does it support learners who need the motivation that comes from successful study sessions. In contrast, our MemReflex system of adaptive flashcards gives fast feedback by retesting new items in quick succession, dynamically scheduling future tests according to a model of the learner's memory. We evaluate MemReflex across three user studies. In the first two studies, we demonstrate its effectiveness for both audio and text modalities, even while walking and distracted. In the third study of second-language vocabulary learning, we show how MemReflex enhanced learner accuracy, confidence, and perceptions of control and success. Overall, the work suggests new directions for mobile microlearning and "micro activities" in general.
Mobilizing education: evaluation of a mobile learning tool in a low-income school BIBAFull-Text 441-450
  Vanessa Frias-Martinez; Jesus Virseda; Aldo Gomero
The pervasiveness of feature phones in emerging economies has contributed to the advent of mobile learning applications for low-income populations. However, many of these tools lack the proper evaluation required to understand their educational impact. In this paper, we extend the state of the art by presenting the evaluation of a game-based mobile learning tool in both formal and informal settings at a low-income school in Lima, Peru. We show that EducaMovil improves knowledge acquisition in the formal environment of a classroom. In addition, use of the tool in more informal settings such as school breaks enhances the level of knowledge, as long as there is continuous engagement over time. We also demonstrate that EducaMovil can be used as a paperless complement to homework. Finally, we provide teachers with a set of guidelines for a successful deployment of EducaMovil at their schools.