
Proceedings of the 2013 Conference on Human-Computer Interaction with Mobile Devices and Services

Fullname: Proceedings of the 15th International Conference on Human-Computer Interaction with Mobile Devices and Services
Editors: Michael Rohs; Albrecht Schmidt; Daniel Ashbrook; Enrico Rukzio
Location: Munich, Germany
Dates: 2013-Aug-27 to 2013-Aug-30
Standard No: ISBN: 978-1-4503-2273-7; ACM DL: Table of Contents; hcibib: MOBILEHCI13
Links: Conference Website
  1. Tactile user interfaces
  2. Specific application areas
  3. Select and interact
  4. Navigation and selection
  5. User behavior
  6. Touch and text input
  7. Navigation, location and maps
  8. Unconventional mobile user interfaces, services and hardware
  9. Security and privacy
  10. Adaptation and design
  11. Developing world
  12. Collaboration and communication
  13. Studies
  14. Touch screen interaction and multi-modal user interfaces
  15. Industrial case studies
  16. Demos
  17. Posters
  18. Workshops

Tactile user interfaces

Peripheral vibro-tactile displays BIBAFull-Text 1-10
  Martin Pielot; Rodrigo de Oliveira
We report on a study exploring the boundaries of the peripheral perception of vibro-tactile stimuli. For three days, we exposed 15 subjects to a continual vibration pattern created by a mobile device worn in their trouser pocket. To guarantee that the stimuli would not require the subjects' focal attention, the vibration pattern was tested and refined to minimise its obtrusiveness, and during the study, the participants adjusted its intensity to just above their personal detection threshold. At random times, the vibration stopped and participants had to acknowledge these events as soon as they noticed them. Only 6.5% of the events were acknowledged fast enough to assume that the cue had been in the focus of the participants' attention. The majority of events were acknowledged within 1 to 10 minutes, which indicates that the participants were aware of the cue without focussing on it. In addition, participants reported not being annoyed by the signal in 94.4% of the events. These results provide evidence that vibration patterns can form non-annoying, lightweight information displays, which can be consumed at the periphery of a user's attention.
Side pressure for bidirectional navigation on small devices BIBAFull-Text 11-20
  Daniel Spelmezan; Caroline Appert; Olivier Chapuis; Emmanuel Pietriga
Virtual navigation on a mobile touchscreen is usually performed using finger gestures: drag and flick to scroll or pan, pinch to zoom. While easy to learn and perform, these gestures cause significant occlusion of the display. They also require users to explicitly switch between navigation mode and edit mode to either change the viewport's position in the document, or manipulate the actual content displayed in that viewport, respectively. SidePress augments mobile devices with two continuous pressure sensors co-located on one of their sides. It provides users with generic bidirectional navigation capabilities at different levels of granularity, all seamlessly integrated to act as an alternative to traditional navigation techniques, including scrollbars, drag-and-flick, or pinch-to-zoom. We describe the hardware prototype, detail the associated interaction vocabulary for different applications, and report on two laboratory studies. The first shows that users can precisely and efficiently control SidePress; the second, that SidePress can be more efficient than drag-and-flick touch gestures when scrolling large documents.
Finding my beat: personalised rhythmic filtering for mobile music interaction BIBAFull-Text 21-30
  Daniel Boland; Roderick Murray-Smith
A novel interaction style is presented, allowing in-pocket music selection by tapping a song's rhythm on a device's touchscreen or body. We introduce the use of rhythmic queries for music retrieval, employing a trained generative model to improve query recognition. We identify rhythm as a fundamental feature of music which can be reproduced easily by listeners, making it an effective and simple interaction technique for retrieving music. We observe that users vary in which instruments they entrain with and our work is the first to model such variability. An experiment was performed, showing that after training the generative model, retrieval performance improved two-fold. All rhythmic queries returned a highly ranked result with the trained generative model, compared with 47% using existing methods. We conclude that generative models of subjective user queries can yield significant performance gains for music retrieval and enable novel interaction techniques such as rhythmic filtering.
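The rhythmic-query idea can be illustrated with a minimal sketch: tap times are reduced to tempo-invariant inter-onset intervals and matched against stored per-song rhythm patterns. This is a hypothetical illustration of the general retrieval idea only, not the paper's trained generative model; the function names and the example library are invented.

```python
def inter_onset(taps):
    """Normalized inter-onset intervals of a sequence of tap times.

    Dividing by the total duration makes the query tempo-invariant:
    only the relative rhythm matters, not how fast the user taps.
    """
    gaps = [b - a for a, b in zip(taps, taps[1:])]
    total = sum(gaps)
    return [g / total for g in gaps]

def rank_songs(query_taps, library):
    """Rank songs by squared distance between the query's interval
    pattern and each song's stored rhythm pattern (both normalized)."""
    q = inter_onset(query_taps)

    def dist(pattern):
        n = min(len(q), len(pattern))
        return sum((a - b) ** 2 for a, b in zip(q[:n], pattern[:n]))

    return sorted(library, key=lambda song: dist(library[song]))
```

A query tapped at a steady beat would then rank an evenly spaced pattern above a syncopated one, regardless of the user's tapping tempo.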
VibPress: estimating pressure input using vibration absorption on mobile devices BIBAFull-Text 31-34
  Sungjae Hwang; Andrea Bianchi; Kwang-yun Wohn
This paper introduces VibPress, a software technique that enables pressure input on mobile devices by measuring the level of vibration absorption with the built-in accelerometer when the device is in contact with a damping surface (e.g., the user's hands). This is achieved using a real-time estimation algorithm running on the device. Through a user evaluation, we provide evidence that this system is faster than previous software-based approaches and as accurate as hardware-augmented approaches (up to 99.7% accuracy). With this work, we also provide insight into the maximum number of pressure levels that users can reliably distinguish, reporting usability metrics (time, errors and cognitive load) for different pressure levels and types of gripping gestures (press and squeeze).
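The underlying idea can be sketched as follows: while the vibration motor runs, grip pressure damps the vibration measured by the accelerometer, so a lower amplitude relative to a free-vibration baseline indicates higher pressure. This is a hypothetical sketch of that principle, not the paper's estimation algorithm; the function names and the linear quantization are assumptions.

```python
def rms(samples):
    """Root-mean-square amplitude of a window of accelerometer samples."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def pressure_level(samples, baseline_rms, levels=4):
    """Map measured vibration amplitude to a discrete pressure level.

    baseline_rms is the amplitude with no grip pressure (free vibration);
    absorption is the fraction of that amplitude damped away by the grip.
    """
    absorption = 1.0 - min(rms(samples) / baseline_rms, 1.0)
    # Quantize absorption in [0, 1] into `levels` discrete pressure levels.
    return min(int(absorption * levels), levels - 1)
```

In practice the baseline would be calibrated per device and the signal filtered, but the sketch shows why no extra hardware is needed.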

Specific application areas

Annotating ecology: looking to biological fieldwork for mobile spatial annotation workflows BIBAFull-Text 35-44
  Derek Reilly; Bonnie MacKay
We present findings from a qualitative study of the spatial practices of biological fieldwork. We argue that these fieldwork practices inform a vision of decentralized spatial annotation in which a variety of motivations, needs, and perspectives coexist, and may support each other synergistically. We contrast this with current and past designs of mobile spatial annotation systems in the literature. From our analysis we identify three guidelines for mobile annotation systems design in biological fieldwork that we argue also extend to other domains: allowing the management of space through user control over annotation processes, promoting structured but flexible annotation through user-defined annotation formats, and providing robust and comprehensive integration of disparate data sources to allow ad hoc, exploratory queries.
Apps for art's sake: resistance and innovation BIBAFull-Text 45-54
  Jo Briggs; Mark Blythe
The paper reports on the growing phenomenon of art-making on mobile devices and contributes findings from two studies of artists' responses to iPad painting apps: the first is a series of exploratory workshops in which artists were recruited to engage with a range of art apps; the second is a series of in-depth interviews with two artists who had incorporated the device and the Brushes app into their painting practice over a period of months and years. The artists in both studies generally agreed that the devices and apps were easy to use and enjoyable but remained ambivalent about the technologies and outcomes. Although there was excitement around new creative possibilities, there were also tensions around the status of the work being produced. The paper reflects on the role of popular digital production apparatus and information exchange in the constitution of artist-identities at a time of rapid techno-cultural change. It argues that while tablet computing and art apps have democratized certain artistic processes, these technologies have generated conflict with traditional conceptions of art and curation.
The reconfiguration of triage by introduction of technology BIBAFull-Text 55-64
  Marc Jentsch; Leonardo Ramirez; Lisa Wood; Erion Elmasllari
Triage is the process of sorting patients by order of treatment necessity in large scale emergencies. Usually, a paper tag is attached to each patient containing their classification and the results of an initial, quick diagnosis. Several projects have aimed to electronically augment the process by using ubiquitous computing components. In this paper we present drawbacks of introducing technology to the process, which have not been discussed elsewhere, based on an extensive set of expert workshops discussing the employment of technology in triage with the aid of technology probes. Our main finding is that the common set of functionalities of electronic triage systems involves unwanted reconfiguration of triage processes. By presenting a set of implications for the design of these mobile technologies, we show how potential negative effects can be mitigated.
Enhancing remote live sports experiences through an eyes-free interaction BIBAFull-Text 65-68
  Pedro Centieiro; Teresa Romão; A. Eduardo Dias
When using mobile screen apps on touch-based mobile devices to interact with live sports TV broadcasts, people need to keep an eye on those devices, since they do not provide tactile feedback. If the app on the mobile screen requires user interaction when something exciting is about to happen on the TV screen, the user needs to shift attention from the TV to the mobile screen, spoiling the whole experience. This paper presents WeBet, a mobile game that prompts users to bet if a goal is about to happen during a football match, without requiring their visual attention. WeBet aims to study if this concept conveys an exciting user experience by allowing a natural interaction with the mobile screen without looking at it. Results from a preliminary user test helped to validate our approach and identified important refinements for future work.

Select and interact

Voice augmented manipulation: using paralinguistic information to manipulate mobile devices BIBAFull-Text 69-78
  Daisuke Sakamoto; Takanori Komatsu; Takeo Igarashi
We propose a technique called voice augmented manipulation (VAM) for augmenting user operations in a mobile environment. This technique augments user interactions on mobile devices, such as finger gestures and button pressing, with voice. For example, when a user makes a finger gesture on a mobile phone and voices a sound into it, the operation will continue until the user stops making the sound or makes another finger gesture. The VAM interface also provides a button-based interface, in which the function connected to the button is augmented by voiced sounds. Two experiments verified the effectiveness of the VAM technique and showed that repeated finger gestures decreased significantly compared to current touch-input techniques, suggesting that VAM is useful in supporting user control in a mobile environment.
Understanding performance of eyes-free, absolute position control on touchable mobile phones BIBAFull-Text 79-88
  Yuntao Wang; Chun Yu; Jie Liu; Yuanchun Shi
Many eyes-free interaction techniques have been proposed for touchscreens, but little research has studied users' eyes-free pointing ability on mobile phones. In this paper, we investigate the single-handed thumb performance of eyes-free, absolute position control on mobile touch screens. Both 1D and 2D experiments were conducted. We explored the effects of target size and location on eyes-free touch patterns and accuracy. Our findings show that the variance of touch points per target converges as target size decreases. The centroid of touch points per target tends to be offset to the left of the target center along the horizontal direction, and to shift toward the screen center along the vertical direction. Average accuracy drops from 99.6% for the 2×2 layout to 85.0% for the 4×4 layout, and average accuracy per target varies depending on the location of the target. Our findings and design implications provide a foundation for future research based on eyes-free, absolute position control using the thumb on mobile devices.
Mobile pointing task in the physical world: balancing focus and performance while disambiguating BIBAFull-Text 89-98
  William Delamare; Céline Coutrix; Laurence Nigay
We address the problem of mobile distal selection of physical objects when pointing at them in augmented environments. We focus on the disambiguation step needed when several objects are selected with a rough pointing gesture. A usual disambiguation technique forces the users to switch their focus from the physical world to a list displayed on a handheld device's screen. In this paper, we explore the balance between change of users' focus and performance. We present two novel interaction techniques allowing the users to maintain their focus in the physical world. Both use a cycling mechanism, respectively performed with a wrist rolling gesture for P2Roll or with a finger sliding gesture for P2Slide. A user experiment showed that keeping users' focus in the physical world outperforms techniques that require the users to switch their focus to a digital representation distant from the physical objects, when disambiguating up to 8 objects.
Playing it real again: a repeated evaluation of magic lens and static peephole interfaces in public space BIBAFull-Text 99-102
  Jens Grubert; Dieter Schmalstieg
We repeated a study on the usage of a magic lens and a static peephole interface for playing a find-and-select game in a public space. While we reproduced the study setup and procedure, the task was conducted at a public transportation stop with different characteristics. The results on usage duration and user preference were significantly different from those reported for previous conditions. We investigate possible causes, specifically the differences in the spatial characteristics and the social contexts in which the study took place.

Navigation and selection

Towards the design of an intuitive multi-view video navigation interface based on spatial information BIBAFull-Text 103-112
  Silviu Apostu; Anas Al-Nuaimi; Eckehard Steinbach; Michael Fahrmair; Xiaohang Song; Andreas Möller
A Multi-View Video (MVV) is a set of related videos that capture an interesting scene from different perspectives at overlapping times. The work at hand is concerned with the design of innovative user interfaces (UIs) for viewing MVVs. As a first contribution, four different MVV UIs are designed. While different in design, their common aim is to allow a pleasant viewing and perspective-switching experience by reducing the cognitive effort associated with constructing a mental map of the scene. This is achieved by incorporating the spatial relationships of the available views into the UI elements. As a second contribution, a quality model is developed and a methodical evaluation process is designed. This is used to evaluate and compare the UIs. As a third contribution, we use principal component analysis (PCA) to reveal information about the perceptual quality space, which helps validate our proposed quality model. Based on the findings, a series of conclusions for best design practices is provided.
Toward compound navigation tasks on mobiles via spatial manipulation BIBAFull-Text 113-122
  Michel Pahud; Ken Hinckley; Shamsi Iqbal; Abigail Sellen; Bill Buxton
We contrast the Chameleon Lens, which uses 3D movement of a mobile device held in the nonpreferred hand to support panning and zooming, with the Pinch-Flick-Drag metaphor of directly manipulating the view using multi-touch gestures. Lens-like approaches have significant potential because they can support navigation-selection, navigation-annotation, and other such compound tasks by off-loading navigation to the nonpreferred hand while the preferred hand annotates, marks a location, or draws a path on the screen. Our experimental results show that the Chameleon Lens is significantly slower than Pinch-Flick-Drag for the navigation subtask in isolation. But our studies also reveal that for navigation between a few known targets the lens performs significantly faster, that differences between the Chameleon Lens and Pinch-Flick-Drag rapidly diminish as users gain experience, and that in the context of a compound navigation-annotation task, the lens performs as well as Pinch-Flick-Drag despite its deficit for the navigation subtask itself.
Imaginary devices: gesture-based interaction mimicking traditional input devices BIBAFull-Text 123-126
  Christian Steins; Sean Gustafson; Christian Holz; Patrick Baudisch
We propose Imaginary Devices, a set of freehand gestures that mimic the use of physical input devices. Imaginary Devices allow users to choose the input modality best suited for the task at hand, such as a steering wheel for a driving game or a joystick for a flight simulator. Exploiting the skills that users have acquired using physical input devices, they can instantly begin interacting with an Imaginary Device. Since no physical device is involved, users can switch quickly and effortlessly among a number of devices.
   We demonstrate the potential of Imaginary Devices with Grand Theft Auto, a game that requires players to change between roles often and quickly, and we examine the viability of the concept in two user studies. In the first study, we found that participants produced a wide range of postures to represent each device but all were able to reproduce the correct posture after a short demonstration. In the second study, we found that Imaginary Devices afford precise input control and approach the baseline performance set by physical devices.

User behavior

Does size matter?: investigating the impact of mobile phone screen size on users' perceived usability, effectiveness and efficiency BIBAFull-Text 127-136
  Dimitrios Raptis; Nikolaos Tselios; Jesper Kjeldskov; Mikael B. Skov
Given the wide adoption of smartphones, an interesting debate is taking place regarding their optimal screen size, and specifically whether possible portability issues counterbalance the obvious benefits of a larger screen. Moreover, the lack of scientific evidence about the concrete impact of mobile phones' screen size on usability raises questions for both practitioners and researchers. In this paper, we investigate the impact of a mobile phone's screen size on users' effectiveness, efficiency and perceived usability as measured using the System Usability Scale (SUS). An experiment was conducted with 60 participants, who interacted with the same information seeking application on three devices of the same brand that differed in screen size. A significant effect of screen size on efficiency was derived, leading to an important finding that users who interact with screens larger than 4.3 inches are more efficient during information seeking tasks.
Mobile devices as infotainment user interfaces in the car: contextual study and design implications BIBAFull-Text 137-146
  Jani Heikkinen; Erno Mäkinen; Jani Lylykangas; Toni Pakkanen; Kaisa Väänänen-Vainio-Mattila; Roope Raisamo
The spreading of mobile devices to all areas of everyday life impacts many contexts of use, including cars. Even though driving itself has remained relatively unchanged, there are now a wide variety of new in-car tasks, which people perform with both integrated infotainment systems and their mobile devices. To gain insights into this new task context and how it could be improved, we conducted a qualitative, contextual study in which we observed real-life car journeys with eight participants. The focus was on user interaction with touchscreen mobile devices, due to their wide range of functions and services. The findings show that the car is an extension of other contexts and it contains a rich set of infotainment tasks, including use of social media. Drivers emphasized gesture interaction and the use of non-visual modalities, for replacing visual information and notifying of changes in the driving context. Based on the findings, we present design implications for future in-car infotainment systems.
Managing distractions in complex settings BIBAFull-Text 147-150
  Robin Deegan
Mobile devices are being used in more and more complex settings, such as cars or medical environments, and these environments cause serious distractions for the mobile user. This paper presents novel research that investigates mobile user experiences when interacting with cognitively demanding distractions. This research finds that, surprisingly, the user's primary task is not always affected by the distraction but, in this case, the actual interaction between user and device is. This observation initially appears to contradict current research, which suggests that a distraction will affect the primary task. The main conclusion of this paper is that a user, when dealing with distraction, can balance their cognitive processes by applying fewer cognitive resources to the mobile device interaction in order to maintain their performance on the primary task. Essentially, the interface can appear more difficult and less user friendly.

Touch and text input

Exploring pinch and spread gestures on mobile devices BIBAFull-Text 151-160
  Jessica J. Tran; Shari Trewin; Calvin Swart; Bonnie E. John; John C. Thomas
Pinching and spreading gestures are prevalent in mobile applications today, but these gestures have not yet been studied extensively. We conducted an exploratory study of pinch and spread gestures with seated participants on a phone and a tablet device. We found device orientation did not have a significant effect on gesture performance, most pinch and spread tasks were completed in a single action, and they were executed in 0.9-1.2 seconds. We also report how participants chose to sit with the mobile device, variations in gesture execution method, and the effect of varying target width and gesture size. Our task execution times for different gesture distances and precision levels display a surprisingly good fit to a simple Fitts's Law model. We conclude with recommendations for future studies.
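The Fitts's Law model mentioned above relates movement time to an index of difficulty, MT = a + b·log2(D/W + 1) in the common Shannon formulation. A minimal sketch of fitting that model to observed gesture times by ordinary least squares (the paper's exact fitting procedure is not given here; variable names are illustrative):

```python
import math

def fitts_fit(distances, widths, times):
    """Least-squares fit of Fitts's law MT = a + b * ID,
    with ID = log2(D/W + 1) (Shannon formulation).

    Returns the intercept a (seconds) and slope b (seconds per bit).
    """
    ids = [math.log2(d / w + 1) for d, w in zip(distances, widths)]
    n = len(ids)
    mean_id = sum(ids) / n
    mean_t = sum(times) / n
    # Simple linear regression of time on index of difficulty.
    b = sum((i - mean_id) * (t - mean_t) for i, t in zip(ids, times)) / \
        sum((i - mean_id) ** 2 for i in ids)
    a = mean_t - b * mean_id
    return a, b
```

The goodness of fit (e.g., R²) over the (ID, MT) pairs is what a claim like "surprisingly good fit" would be assessed on.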
No-look flick: single-handed and eyes-free Japanese text input system on touch screens of mobile devices BIBAFull-Text 161-170
  Yoshitomo Fukatsu; Buntarou Shizuki; Jiro Tanaka
We present a single-handed and eyes-free Japanese kana text input system for the touch screens of mobile devices. We first conducted preliminary experiments to investigate the accuracy with which subjects could single-handedly point and flick without using their eyes. The results showed that users can point at a screen divided into a 2 × 2 grid with 100% accuracy, and can flick on a 2 × 2 grid without using their eyes with 96.1% accuracy using our algorithm for flick recognition. The system uses kana letter input based on two-stroke input with three keys to enable accurate eyes-free typing. First, users flick for consonant input, and then similarly flick for vowel input. We conducted a long-term user study to measure basic text entry speed and error rate under eyes-free conditions, as well as the readability of transcribed phrases. The mean text entry speed was 51.2 characters per minute (cpm) in the 10th session of the user study, and the mean error rate was 0.6% of all characters. The mean text entry speed was 33.9 cpm in the 11th session, which was conducted under totally eyes-free conditions, and the mean error rate was 4.8% of all characters. In addition to cpm and error rate, we measured the error rate of reading, which we devised as a novel metric of how accurately users can read transcribed phrases. The mean error rate of reading in the 11th session was 5.7% of all phrases.
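The 2 × 2 pointing and flick classification described above can be sketched in a few lines: a touch-down point selects a grid cell, and the vector from touch-down to touch-up gives the flick direction. This is a generic sketch of such recognition, not the authors' algorithm; the threshold and function names are assumptions.

```python
def quadrant(x, y, width, height):
    """Which cell of a 2x2 grid a touch lands in (0..3, row-major)."""
    return (1 if y >= height / 2 else 0) * 2 + (1 if x >= width / 2 else 0)

def flick_direction(x0, y0, x1, y1, min_dist=20):
    """Classify a stroke by its dominant axis of travel.

    Returns 'tap' for movements shorter than min_dist pixels, else one of
    'left', 'right', 'up', 'down' (screen coordinates, y grows downward).
    """
    dx, dy = x1 - x0, y1 - y0
    if dx * dx + dy * dy < min_dist * min_dist:
        return 'tap'
    if abs(dx) >= abs(dy):
        return 'right' if dx > 0 else 'left'
    return 'down' if dy > 0 else 'up'
```

Coarse targets (four cells, four directions) are what make the high eyes-free accuracies plausible.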
Analysis of children's handwriting on touchscreen phones BIBAFull-Text 171-174
  Elba del Carmen Valderrama Bahamóndez; Thomas Kubitza; Niels Henze; Albrecht Schmidt
Drawing and handwriting play a central role in primary schools. So far handwriting is practiced mainly on paper and blackboards. Providing tasks on paper can be challenging in developing countries. With the potential availability of mobile phones in classrooms, there is a new medium that can be used. We determined the effect of different touch technologies on children's handwriting for 18 third grade and 20 sixth grade participants. Children drew and wrote using different input techniques. We measured their performance and asked teachers to assess the legibility. We show that writing on touchscreens is less legible and slower than on paper. Further, the comparison of touchscreen technologies indicates that capacitive screens operated with a stylus yield the highest readability and are faster to use for writing than resistive screens. In contrast to these quantitative findings participants from third grade indicated that they prefer resistive screens with a thin stylus compared to using capacitive screens with a stylus or fingers.
Sandwich keyboard: fast ten-finger typing on a mobile device with adaptive touch sensing on the back side BIBAFull-Text 175-178
  Oliver Schoenleben; Antti Oulasvirta
This Note introduces a keyboard design that affords ten-finger touch typing by utilizing a touch sensor on the back side of a device. Previous work has used physical buttons. Using a touch sensor has the benefit that it retains the form factor and does not require a peripheral device. Moreover, any layout can be used. However, it is difficult to hit targets on a flat surface with no haptic feedback. Sandwich Keyboard is a prototype that folds any three-row keyboard layout onto the back of the device and thus, by retaining the finger-to-letter assignment, supports skill transfer. Sandwich Keyboard includes an algorithm for constant adaptation of the key targets on the back. We also learned that detecting key presses from finger release enhances the performance of touch-typing on a multitouch sensor. After eight hours of training, experienced typists of the QWERTY and of the Dvorak Simplified Keyboard (DSK) layouts reached 26.1 and 46.2 wpm, respectively. We discuss improvements necessary for further increasing both speed and accuracy.
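One plausible reading of "constant adaptation of key targets" is a running drift of each key's center toward where the typist actually touches, as sketched below. This is a hypothetical illustration, not the paper's algorithm; the nearest-center classification and the exponential-moving-average update are assumptions.

```python
def nearest_key(centers, touch):
    """Return the key whose current center is closest to the touch point.

    centers maps key labels to (x, y) tuples; touch is an (x, y) tuple.
    """
    return min(centers, key=lambda k: (centers[k][0] - touch[0]) ** 2 +
                                      (centers[k][1] - touch[1]) ** 2)

def adapt(centers, touch, alpha=0.1):
    """Classify a touch against the current key centers, then drift the
    matched key's center toward the touch (exponential moving average).

    Returns the key that was hit; centers is updated in place.
    """
    key = nearest_key(centers, touch)
    cx, cy = centers[key]
    centers[key] = (cx + alpha * (touch[0] - cx), cy + alpha * (touch[1] - cy))
    return key
```

Over many keystrokes the targets migrate to each typist's actual finger resting positions, compensating for the lack of haptic feedback on a flat sensor.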
Writing handwritten messages on a small touchscreen BIBAFull-Text 179-182
  Wolf Kienzle; Ken Hinckley
We present a method for composing handwritten messages on a small touchscreen device. A word is entered by drawing overlapped, screen sized letters on top of each other. The system does not require gestures or timeouts to delimit characters within a word -- it automatically segments the overlapping strokes and renders the message in real-time as the user is writing. The auto-segmentation algorithm was designed for practicality; it is extremely simple, requires only public domain data for training, and runs very fast on low-power devices. Drawings may also be included with the text. Experimental data indicates the effectiveness of our system, even for novice users.

Navigation, location and maps

Can you see me now?: location, visibility and the management of impressions on foursquare BIBAFull-Text 183-192
  Shion Guha; Jeremy Birnholtz
Location based social networking applications enable people to share their location with friends for social purposes by "checking in" to places they visit. Prior research suggests that both privacy and impression management motivate location disclosure concerns. In this interview study of foursquare users, we explore the ways people think about location sharing and its effects on impression management and formation. Results indicate that location-sharing decisions depend on the perceived visibility of the check-in, blur boundaries between public and private venues, and can initiate tensions within the foursquare friend network. We introduce the concept of "check-in transience" to explain factors contributing to impression management and argue that sharing location is often used as a signaling strategy to achieve social objectives.
City scene: field trial of a mobile street-imagery-based navigation service BIBAFull-Text 193-202
  Tuomas Vaittinen; Miikka Salminen; Thomas Olsson
Mobile navigation services are becoming more than merely digital maps; many of them include imagery from the street level. The user can benefit from enriching the 2D-map-based navigation with panoramic imagery from the citizen's perspective, hence gaining an authentic view of the frequent landmarks that urban environments include. In this paper we describe the user-centered design of a mobile street-imagery-based navigation service supporting navigation and exploration of unfamiliar cities. The service was evaluated with a field trial using tourists as participants. The participants used the service freely for the pedestrian navigation tasks that were relevant to them during the trial period. This approach shed light on issues that have not been raised by previous studies on image-based navigation, which have relied on more formal test tasks. The study confirmed that the images help with detecting the destination or assessing the atmosphere of a remote location but brought into focus the real world challenges related to downloading times and positioning accuracy.
Moving beyond the map: automated landmark based pedestrian guidance using street level panoramas BIBAFull-Text 203-212
  Jason Wither; Carmen E. Au; Raymond Rischpater; Radek Grzeszczuk
In the past, people have used very different forms of directions depending on how those directions were acquired. When one person gives another directions in a familiar area, they will frequently use landmarks to describe the route [10]. When the route comes from a personal navigation system, however, it will be displayed on a map and use street names for the directions.
   In this paper we present a system to automatically give landmark based navigation to pedestrians by using panoramic imagery to both find salient landmarks along a route automatically, and to present those landmarks to a pedestrian navigator in an immersive and intuitive manner. Our system primarily uses automatically detected business signs as landmarks, and currently works in a half dozen cities around the world. We have also evaluated our system and found that people can effectively navigate solely using landmark enhanced panoramas of decision points along the route.
Investigating collaborative mobile search behaviors BIBAFull-Text 213-216
  Shahriyar Amini; Vidya Setlur; Zhengxin Xi; Eiji Hayashi; Jason Hong
People use mobile devices to search, locate and discover local information around them. Mobile local search is frequently a social activity. This paper presents the results of a survey and an exploratory user study of collaborative mobile local search. The survey results show that people frequently search with others and that these searches often involve the use of more than one mobile device. We prototyped a collaborative mobile search app, which we used as a tool to investigate users' collaborative mobile search behavior. Our study results provide insights into how users collaborate while performing search. We also provide design considerations to inform future mobile local search technologies.

Unconventional mobile user interfaces, services and hardware

Responsive lighting: the city becomes alive BIBAFull-Text 217-226
  Esben Skouboe Poulsen; Ann Morrison; Hans Jørgen Andersen; Ole B. Jensen
We distributed fourteen controllable street lamps in a city square and recorded three comparative conditions and one 'usual' condition, operating the public lighting as if it were an interactive stage. The first condition tested was adaptive lighting that responded to people's occupancy patterns. The second was a mobile phone application that allowed people to customise color and responsive behaviours in the overhead lighting system. The third was ambient lighting responding to wind velocity. The study extends the discussion on multiuser interaction design in public lighting by asking: how can interactions using mobile phones, thermal tracking and wind inputs afford new social behaviors without disturbing the usual public functions of street lighting? This research lays foundational work on the affordances of mobile phones for engagement and interaction with public lighting. The study indicates that using personal phones as a tool for interaction in this setting has the potential to provide a stronger sense of ownership of urban place.
Exploring smartphone-based web user interfaces for appliances BIBAFull-Text 227-236
  Katie Derthick; James Scott; Nicolas Villar; Christian Winkler
We describe the SAWUI architecture by which smartphones can easily show user interfaces for nearby appliances, with no modification or pre-installation of software on the phone, no reliance on cloud services or networking infrastructure, and modest additional hardware in the appliance. In contrast to appliances' physical user interfaces, which are often as simple as buttons, icons and LEDs, SAWUIs leverage smartphones' powerful UI hardware to provide personalized, self-explanatory, adaptive, and localized UIs.
   To explore the opportunities created by SAWUIs, we conducted a study asking designers to redesign two appliances to include SAWUIs. Task characteristics including frequency, proximity, and complexity were used in deciding whether to place functionality on the physical UI, the SAWUI, or both. Furthermore, results illustrate how, in addition to support for accomplishing tasks, SAWUIs have the potential to enrich human experiences around appliances by increasing user autonomy and supporting better integration of appliances into users' social and personal lives.
Twisting touch: combining deformation and touch as input within the same interaction cycle on handheld devices BIBAFull-Text 237-246
  Johan Kildal; Andrés Lucero; Marion Boberg
We present a study that investigates the potential of combining, within the same interaction cycle, deformation and touch input in a handheld device. Using a flexible, input-only device connected to an external display, we compared a multitouch input technique and two hybrid deformation-plus-touch input techniques (bending and twisting the device, plus either front- or back-touch), in an image-docking task. We compared and analyzed the performance (completion time) and user experience (UX) obtained in each case, using multiple assessment metrics. We found that combining device deformation with front-touch produced the best UX. All the interaction techniques showed the same efficiency in task completion. This was a surprising finding, since multitouch (an integral input technique) was expected to be the most efficient technique in an image docking task (an interaction in an integral perceptual space). We discuss these findings in relation to self-reported qualitative data and observed interaction-procedure metrics. We found that the interaction procedures with the hybrid techniques were more sequential but also more paced. These findings suggest that the benefits of deformation input can still be observed when deformation and touch are combined in an input device.
ProjectorKit: easing rapid prototyping of interactive applications for mobile projectors BIBAFull-Text 247-250
  Martin Weigel; Sebastian Boring; Jürgen Steimle; Nicolai Marquardt; Saul Greenberg; Anthony Tang
Researchers have developed interaction concepts based on mobile projectors. Yet pursuing work in this area -- particularly building projector-based interaction techniques within an application -- is cumbersome and time-consuming. To mitigate this problem, we contribute ProjectorKit, a flexible open-source toolkit that eases the rapid prototyping of mobile projector interaction techniques.

Security and privacy

Improving user authentication on mobile devices: a touchscreen graphical password BIBAFull-Text 251-260
  Hsin-Yi Chiang; Sonia Chiasson
Typing text passwords is challenging when using touchscreens on mobile devices and this is becoming more problematic as mobile usage increases. We designed a new graphical password scheme called Touchscreen Multi-layered Drawing (TMD) specifically for use with touchscreens. We conducted an exploratory user study of three existing graphical passwords on smart phones and tablets with 31 users. From this, we set our design goals for TMD to include addressing input accuracy issues without having to memorize images, while maintaining an appropriately secure password space. Design features include warp cells which allow TMD users to continuously draw their passwords across multiple layers in order to create more complex passwords than normally possible on a small screen. We compared the usability of TMD to Draw A Secret (DAS) on a tablet computer and a smart phone with 90 users. Results show that TMD improves memorability, addresses the input accuracy issues, and is preferred as a replacement for text passwords on mobile devices.
Patterns in the wild: a field study of the usability of pattern and pin-based authentication on mobile devices BIBAFull-Text 261-270
  Emanuel von Zezschwitz; Paul Dunphy; Alexander De Luca
Graphical password systems based upon the recall and reproduction of visual patterns (e.g. as seen on the Google Android platform) are assumed to have desirable usability and memorability properties. However, there are no empirical studies that explore whether this is actually the case on an everyday basis. In this paper, we present the results of a real world user study across 21 days that was conducted to gather such insight; we compared the performance of Android-like patterns to personal identification numbers (PIN), both on smartphones, in a field study. The quantitative results indicate that PIN outperforms the pattern lock when comparing input speed and error rates. However, the qualitative results suggest that users tend to accept this and are still in favor of the pattern lock to a certain extent. For instance, it was rated better in terms of ease-of-use, feedback and likeability. Most interestingly, even though the pattern lock does not provide any undo or cancel functionality, it was rated significantly better than PIN in terms of error recovery; this provides insight into the relationship between error prevention and error recovery in user authentication.
Know your enemy: the risk of unauthorized access in smartphones by insiders BIBAFull-Text 271-280
  Ildar Muslukhov; Yazan Boshmaf; Cynthia Kuo; Jonathan Lester; Konstantin Beznosov
Smartphones store large amounts of sensitive data, such as SMS messages, photos, or email. In this paper, we report the results of a study investigating users' concerns about unauthorized data access on their smartphones (22 interviewed and 724 surveyed subjects). We found that users are generally concerned about insiders (e.g., friends) accessing their data on smartphones. Furthermore, we present the first evidence that the insider threat is a real problem impacting smartphone users. In particular, 12% of subjects reported a negative experience with unauthorized access. We also found that younger users are at higher risk of experiencing unauthorized access. Based on our results, we propose a stronger adversarial model that incorporates the insider threat. To better reflect users' concerns and risks, a stronger adversarial model must be considered during the design and evaluation of data protection systems and authentication methods for smartphones.
Money on the move: workload, usability and technology acceptance of second-screen ATM-interactions BIBAFull-Text 281-284
  Georg Regal; Marc Busch; Stephanie Deutsch; Christina Hochleitner; Martin Lugmayr; Manfred Tscheligi
In this paper we compare a single-screen touch interaction with an automated teller machine (ATM) against two alternative second-screen ATM interactions using a smartphone. In an experimental laboratory study, these three ATM interactions were compared by means of workload (NASA-TLX), usability (SEQ, UMUX) and technology acceptance (selected TAM3 scales and additional scales for trust and security) in a randomized, controlled within-subjects design (n=24). In one smartphone ATM interaction the Personal Identification Number (PIN) was entered on the mobile phone; in the other, the PIN was entered on the PIN-pad of the ATM. The results indicate that overall the second-screen ATM interaction with all interaction done on the mobile phone performed best.

Adaptation and design

Design guidelines for adaptive multimodal mobile input solutions BIBAFull-Text 285-294
  Bruno Dumas; María Solórzano; Beat Signer
The advent of advanced mobile devices, in combination with new interaction modalities and methods for the tracking of contextual information, opens new possibilities in the field of context-aware user interface adaptation. One particular research direction is the automatic context-aware adaptation of input modalities in multimodal mobile interfaces. We present existing adaptive multimodal mobile input solutions and position them within closely related research fields. Based on a detailed analysis of the state of the art, we propose eight design guidelines for adaptive multimodal mobile input solutions. The use of these guidelines is further illustrated through the design and development of an adaptive multimodal calendar application.
Keep doing what i just did: automating smartphones by demonstration BIBAFull-Text 295-303
  Rodrigo de A. Maués; Simone Diniz Junqueira Barbosa
Automating tasks can make a smartphone easier to use and more battery efficient. However, little work has been done so far to help end-users create such automations. In this paper, we explore an approach for automating smartphone tasks by demonstration. We have developed a mobile application called Keep Doing It that continuously records users' interactions with their smartphones. After users perform a task that they would like to automate, they can ask our application to create the automation based on their latest actions. Since users only have to use their smartphones, as they would naturally do, to demonstrate automations, we believe that our approach can lower the barrier for creating smartphone automations. Overall, an initial evaluation of the approach suggests that users would be willing to automate their phones by demonstration.
At the mobile experience flicks: making short films to make sense for mobile interaction design BIBAFull-Text 304-307
  Michael Leitner; Gilbert Cockton; Joyce S. R. Yee
We introduce four short films to analyse, display and make sense of mobile experience and mobile context for design purposes. The films were scripted and produced on the basis of diary and interview data looking at mobile texting and mobile social media use. We found the making of the films to be a way to understand, frame and focus the design space for researchers and designers. We reflect on our own process of making these short films and discuss the value of such an approach for mobile interaction design.

Developing world

ACQR: acoustic quick response codes for content sharing on low end phones with no internet connectivity BIBAFull-Text 308-317
  Jennifer Pearson; Simon Robinson; Matt Jones; Amit Nanavati; Nitendra Rajput
In this paper we introduce Acoustic Quick Response codes to facilitate sharing between Interactive Voice Response (IVR) service users. IVRs are telephone-based, and similar to the world wide web in many aspects, but currently lack support for content sharing. Our approach uses 'audio codes' to let people share their call positions, and allows callers to hold their normal (low-end) handsets together to synchronise. The technique uses remote generation and recognition of audio codes to ensure that sharing is possible on any type of phone without the need for textual literacy or an internet connection. We begin by exploring existing user needs for sharing, then evaluate the technical robustness of our audio-based design. We demonstrate the value of the approach for voice service users over several separate studies -- including an eight-month extended field deployment -- then conclude with a discussion of future possibilities for such scenarios.
Exploring the interplay between community media and mobile web in developing regions BIBAFull-Text 318-327
  Akhil Mathur; Sharad Jaiswal
In this paper, we present the lessons learned by bringing in content and content-creators from local community media initiatives into the technological fold of the web. Our work focuses on the Community Radio (CR) ecosystem in India, and through extensive field-studies we develop an in-depth understanding of the operations, strengths and challenges of a CR station. Based on this, we outline the design of a system that combines the content creation processes of a CR station with the mobile web. The system was evaluated with the users of a CR station in Bangalore over a month long deployment. Our key take-away is that the incorporation of a mobile web based delivery system can play a critical role in expanding the reach and consumption of community media in the target communities. Conversely, the relevance of CR content and role of radio jockeys (as trusted members of the community) can be a key driver in adoption of the mobile web in these communities. Together, such a hybrid approach points the way forward for more successful deployments of community media systems, and reveals several interesting HCI issues to be studied further.
The paper slip should be there!: perceptions of transaction receipts in branchless banking BIBAFull-Text 328-331
  Saurabh Panjwani; Mohona Ghosh; Ponnurangam Kumaraguru; Soumya Vardhan Singh
Mobile-based branchless banking has become a key mechanism for enabling financial inclusion in the developing world. A key component of all branchless banking systems is a mechanism to provide receipts to users after each transaction as evidence for successful transaction completion. In this paper, we present results from a field study that explores user perceptions of different receipt delivery mechanisms in the context of a branchless banking system in India. Our study shows that users have an affinity for paper receipts: despite the provision of an SMS receipt functionality by the system developers and their discouragement of the use of paper, users have pro-actively initiated a practice of issuing and accepting paper receipts. Several users are aware of the security limitations of paper receipts but continue to use them because of their usability benefits. We conclude with design recommendations for receipt delivery systems in branchless banking.

Collaboration and communication

'Eyes free' in-car assistance: parent and child passenger collaboration during phone calls BIBAFull-Text 332-341
  Chandrika Cycil; Mark Perry; Eric Laurier; Alex Taylor
This paper examines routine family car journeys, looking specifically at how passengers assist during a mobile telephone call while the drivers address the competing demands of handling the vehicle, interacting with various artefacts and controls in the cabin, and engaging in co-located and remote conversations while navigating through busy city roads. Based on an analysis of video fragments, we see how drivers and child passengers form their conversations and requests around the call so as to be meaningful and paced to the demands, knowledge and abilities of their co-occupants, and how the conditions of the road and emergent traffic are oriented to and negotiated in the context of the social interaction that they exist alongside. The study provides implications for the design of car-based collaborative media and considers how hands- and eyes-free natural interfaces could be tailored to the complexity of activities in the car and on the road.
Smartphone use does not have to be rude: making phones a collaborative presence in meetings BIBAFull-Text 342-351
  Matthias Böhmer; T. Scott Saponas; Jaime Teevan
Our personal smartphones are our daily companions, coming with us everywhere, including into enterprise meetings. This paper looks at smartphone use in meetings. Via a survey of 398 enterprise workers, we find that people believe phone use interferes with meeting productivity and collaboration. While individuals tend to think that they make productive use of their own phones in meetings, they perceive others as using their phones for unrelated tasks. To help smartphones create a more collaborative meeting environment, we present an application that identifies and describes meeting attendees. We deploy the application to 114 people at real meetings, and find that users value being able to access information about the other people in the room, particularly when those people are unfamiliar. To prevent users from disengaging from the meeting while using their phones, we employ a gaming approach that asks trivia questions about the other attendees. We observe that gameplay focuses attention within the meeting context and sparks conversations. These findings suggest ways smartphone applications might help users engage with the people around them in enterprise environments, rather than removing them from their immediate social context.
What's up with WhatsApp?: comparing mobile instant messaging behaviors with traditional SMS BIBAFull-Text 352-361
  Karen Church; Rodrigo de Oliveira
With the advent of instant mobile messaging applications, traditional SMS is in danger of losing its reign as the king of mobile messaging. Applications like WhatsApp allow mobile users to send real-time text messages to individuals or groups of friends at no cost. While there is a vast body of research on traditional text messaging practices, little is understood about how and why people have adopted and appropriated instant mobile messaging applications. The goal of this work is to provide a deeper understanding of the motives and perceptions of a popular mobile messaging application called WhatsApp and to learn more about what this service offers above and beyond traditional SMS. To this end, we present insights from two studies -- an interview study and a large-scale survey -- highlighting that while WhatsApp offers benefits such as cost, sense of community and immediacy, SMS is still considered a more reliable, privacy-preserving technology for mobile communication.

Studies

Upright or sideways?: analysis of smartphone postures in the wild BIBAFull-Text 362-371
  Alireza Sahami Shirazi; Niels Henze; Tilman Dingler; Kai Kunze; Albrecht Schmidt
In this paper, we investigate how smartphone applications, in particular web browsers, are used on mobile phones. Using a publicly available widget for smartphones, we recorded app usage and the phones' acceleration and orientation from 1,330 devices. Combining app usage and sensor data we derive the device's typical posture while different apps are used. Analyzing motion data shows that devices are moved more while messaging and navigation apps are used as opposed to browsers and other common applications. The time distribution between landscape and portrait shows that most of the landscape mode time is used for burst interaction (e.g., text entry), except for media apps, which are mostly used in landscape mode. Additionally, we found that over 31% of our users use more than one web browser. Our analysis reveals that the duration of mobile browser sessions is longer by a factor of 1.5 when browsers are explicitly started through the system's launcher in comparison to being launched from within another app. Further, users switch back and forth between apps and web browsers, which suggests that a tight and smooth integration of web browsers with native apps can improve the overall usability. From our findings we derive design guidelines for app developers.
Making sense of screen mobility: dynamic maps and cartographic literacy in a highly mobile activity BIBAFull-Text 372-381
  Oskar Juhlin; Alexandra Weilenmann
Dynamic, digital maps are increasingly used in many settings. This is an emerging domain of technology extending previous map studies and positioning technology. We draw upon ethnographic field studies of collaborative hunting, where hunting dogs are tracked and their location made visible on digital maps. We discuss mobility of two different kinds. First, we refer to mobility as the practice of physical movements of hunters, dogs and prey. Second, we refer to the movement of symbolic objects on a digital map screen, i.e. screen mobility, and the interpretational work that the hunters do to make sense of it. Representations of motion on a screen are of ongoing practical concern for the hunters. We show how they interpret such mobility in terms of accelerations, distance, trajectories and temporal alignments. The findings are used to revisit mobility theories and populate them with new notions to inspire design in broad domains.
User-specific touch models in a cross-device context BIBAFull-Text 382-391
  Daniel Buschek; Simon Rogers; Roderick Murray-Smith
We present a machine learning approach to train user-specific offset models, which map actual to intended touch locations to improve accuracy. We propose a flexible framework to adapt and apply models trained on touch data from one device and user to others. This paper presents a study of the first published experimental data from multiple devices per user, and indicates that models not only improve accuracy between repeated sessions for the same user, but across devices and users, too. Device-specific models outperform unadapted user-specific models from different devices. However, with both user- and device-specific data, we demonstrate that our approach allows us to combine this information to adapt models to the targeted device, resulting in significant improvement. On average, adapted models improved accuracy by over 8%. We show that models can be obtained from a small number of touches (≈60). We also apply models to predict input styles and identify users.
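The general idea of a touch-offset model can be illustrated with a minimal sketch (an illustrative assumption only, not the authors' actual method, which is more sophisticated): a per-user linear map from recorded touch positions to intended target positions, fitted with least squares on roughly 60 calibration touches.

```python
import numpy as np

def fit_offset_model(touches, targets):
    """Fit a linear offset model mapping actual touch locations to
    intended target locations via least squares.
    touches, targets: (n, 2) arrays of screen coordinates."""
    # Augment with a bias column so the model can learn a constant offset.
    X = np.hstack([touches, np.ones((len(touches), 1))])
    # Solve X @ W ~= targets for W (3x2: linear terms plus bias).
    W, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return W

def apply_offset_model(W, touches):
    """Correct raw touch locations using a fitted model."""
    X = np.hstack([touches, np.ones((len(touches), 1))])
    return X @ W

# Simulated user with a systematic down-right touch bias plus jitter.
rng = np.random.default_rng(0)
targets = rng.uniform(0, 100, size=(60, 2))
touches = targets + np.array([2.0, 5.0]) + rng.normal(0, 0.5, size=(60, 2))

W = fit_offset_model(touches, targets)
corrected = apply_offset_model(W, touches)
raw_err = np.linalg.norm(touches - targets, axis=1).mean()
corr_err = np.linalg.norm(corrected - targets, axis=1).mean()
print(corr_err < raw_err)  # correction reduces mean error
```

In such a scheme, cross-device adaptation would amount to re-estimating or transforming W for the target device's screen geometry.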

Touch screen interaction and multi-modal user interfaces

Oh app, where art thou?: on app launching habits of smartphone users BIBAFull-Text 392-395
  Alina Hang; Alexander De Luca; Jonas Hartmann; Heinrich Hussmann
In this paper, we present the results of a four-week real world study on app launches on smartphones. The results show that smartphone users are confident in the way they navigate on their devices, but that there are many opportunities for refinement. Users in our study tended to sort apps based on frequency of use, putting the most frequently used apps in places that they considered fastest to reach. Interestingly, users start most apps from within other apps, followed by the use of the homescreen.
SpeechTouch: precise cursor positioning on touch screen mobiles BIBAFull-Text 396-399
  Yingying Jiang; Feng Tian; Guang Li; Xiaolong Zhang; Yi Du; Guozhong Dai; Hongan Wang
On touch-screen mobile phones, text-related applications (e.g., email, notes) usually involve a cursor in editing tasks. However, accurately positioning the cursor to the editing position by touch is often a challenge because of the "fat finger" problem. To improve the accuracy of one-touch cursor positioning, we propose SpeechTouch, a multimodal method that combines imprecise touch input and simple speech input for precise cursor positioning. Our evaluation study shows that this method can significantly improve user performance in cursor positioning on touch screen mobiles.
Rapid selection of hard-to-access targets by thumb on mobile touch-screens BIBAFull-Text 400-403
  Neng-Hao Yu; Da-Yuan Huang; Jia-Jyun Hsu; Yi-Ping Hung
Current touch-based UIs commonly employ regions near the corners and/or edges of the display to accommodate essential functions. As the screen size of mobile phones is ever increasing, such regions become relatively distant from the thumb and hard to reach for single-handed use. In this paper, we present two techniques: CornerSpace and BezelSpace, designed to accommodate quick access to screen targets outside the thumb's normal interactive range. Our techniques automatically determine the thumb's physical comfort zone and only require minimal thumb movement to reach distant targets on the edge of the screen. A controlled experiment shows that BezelSpace is significantly faster and more accurate. Moreover, both techniques are application-independent, and instantly accommodate either hand, left or right.
Sparse selection of training data for touch correction systems BIBAFull-Text 404-407
  Daryl Weir; Daniel Buschek; Simon Rogers
Touch offset models which improve input accuracy on mobile touch screen devices typically require the use of a large number of training points. In this paper, we describe a method for selecting training points such that high performance can be attained with fewer data. We use the Relevance Vector Machine (RVM) algorithm, and show that performance improvements can be obtained with fewer than 10 training examples. We show that the distribution of training points is conserved across users and contains interesting structure, and compare the RVM to two other offset prediction models for small training set sizes.
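As a hypothetical illustration of sparse training-set selection (not the RVM machinery the paper actually uses), one could greedily pick the few calibration touches that most reduce held-out correction error for a simple linear offset model:

```python
import numpy as np

def fit(touches, targets):
    """Least-squares linear offset model (2D linear terms plus bias)."""
    X = np.hstack([touches, np.ones((len(touches), 1))])
    W, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return W

def predict(W, touches):
    X = np.hstack([touches, np.ones((len(touches), 1))])
    return X @ W

def greedy_select(touches, targets, k):
    """Greedily choose k training indices that minimise mean correction
    error on the remaining held-out touches."""
    chosen, remaining = [], list(range(len(touches)))
    for _ in range(k):
        def holdout_error(i):
            idx = chosen + [i]
            rest = [j for j in remaining if j != i]
            W = fit(touches[idx], targets[idx])
            return np.linalg.norm(predict(W, touches[rest]) - targets[rest],
                                  axis=1).mean()
        best = min(remaining, key=holdout_error)
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Simulated biased touches; select 8 of 40 points for training.
rng = np.random.default_rng(1)
targets = rng.uniform(0, 100, size=(40, 2))
touches = targets + np.array([3.0, 6.0]) + rng.normal(0, 0.5, size=(40, 2))

idx = greedy_select(touches, targets, k=8)
W = fit(touches[idx], targets[idx])
raw = np.linalg.norm(touches - targets, axis=1).mean()
corrected = np.linalg.norm(predict(W, touches) - targets, axis=1).mean()
print(corrected < raw)  # the sparse model still reduces mean error
```

The RVM achieves sparsity probabilistically rather than by explicit greedy search, but the payoff is the same: good correction from under 10 training examples.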
OMG!: a new robust, wearable and affordable open source mobile gaze tracker BIBAFull-Text 408-411
  Kristian Lukander; Sharman Jagadeesan; Huageng Chi; Kiti Müller
We present a novel, robust, affordable and wearable mobile gaze tracker. The tracker takes a model-based approach to tracking gaze and maps the calculated gaze onto a scene video. The system is built from standard off-the-shelf components and is, to our knowledge, the first to use a 3D printed frame. The system will be published as open source, and the total cost of the components for building the system is 350€. The model-based tracking provides a solution robust to changing lighting conditions and frame slippage on the head of the user.
MagPen: magnetically driven pen interactions on and around conventional smartphones BIBAFull-Text 412-415
  Sungjae Hwang; Andrea Bianchi; Myungwook Ahn; Kwangyun Wohn
This paper introduces MagPen, a magnetically driven pen interface that works both on and around mobile devices. The proposed device is accompanied by a new vocabulary of gestures and techniques that increase the expressiveness of the standard capacitive stylus. These techniques are: 1) detecting the orientation that the stylus is pointing to, 2) selecting colors using locations beyond screen boundaries, 3) recognizing different spinning gestures associated with different actions, 4) inferring the pressure being applied to the pen, and 5) identifying various pens associated with different operational modes. These techniques are achieved using commonly available smartphones that sense and analyze the magnetic field produced by a permanent magnet embedded in a standard capacitive stylus. This paper explores how magnets can be used to expand the design space of current pen interaction, and proposes a new technology to achieve such results.
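As a rough sketch of the sensing principle (a hypothetical illustration, not the authors' implementation), the in-plane angle of a magnet-equipped stylus can be estimated from the phone's 3-axis magnetometer by subtracting the ambient baseline field and taking the angle of the residual:

```python
import math

def pen_azimuth(reading, baseline):
    """Estimate the in-plane angle (radians) of a magnetic stylus from a
    3-axis magnetometer reading (x, y, z), after subtracting the ambient
    baseline field measured with no pen present."""
    dx = reading[0] - baseline[0]
    dy = reading[1] - baseline[1]
    return math.atan2(dy, dx)

# Ambient field alone, then with a pen whose magnet points along +y.
baseline = (30.0, -12.0, 45.0)   # microtesla; hypothetical values
with_pen = (30.0, 8.0, 45.0)
angle = pen_azimuth(with_pen, baseline)
print(round(math.degrees(angle)))  # 90
```

Distinguishing spins, pressure and pen identities, as MagPen does, would additionally require analysing the magnitude and temporal pattern of the residual field.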

Industrial case studies

A DIY power monitor to compare mobile energy consumption in situ BIBAFull-Text 416-421
  Paul Holleis; Marko Luther; Gregor Broll; Bertrand Souville
Many current smartphones need to be recharged every day despite only average usage. This problem is intensified when phones need to track their location on a continuous basis. We provide a previously unavailable platform for accurately measuring the energy consumption of different hardware/software solutions, comparing up to three variants in a mobile setting. We show that the accuracy of our system is similar to a fixed, commercial system, but it is especially useful to evaluate and optimize technology and algorithms that require the phone to be used on the go.
Co-designing for NFC and ATMs: an inspirational bits approach BIBAFull-Text 422-427
  Alexander Meschtscherjakov; Christine Gschwendtner; Manfred Tscheligi; Petra Sundström
This paper addresses the field of mobile payment through Near Field Communication (NFC) and Automated Teller Machines (ATMs). We report how we used the Inspirational Bits method to inspire the design of novel NFC usage scenarios and application ideas for ATMs in a joint industry and academia project. We describe a set of small applications exposing different properties of NFC (henceforth referred to as NFC-Bits) and how they informed and inspired collaborative design ideas for different use cases. The NFC-Bits reveal a broad range of NFC characteristics in a playful manner. We outline some of the developed use cases and describe features of MyPocket™, an application that stemmed from these ideas.
Giving voice to enterprise mobile applications BIBAFull-Text 428-433
  Chaya Bijani; Brent-Kaan White; Mark Vilrokx
Speech technology is gaining popularity and is used in a wide range of mobile applications and consumer devices. Speech-based interfaces circumvent the typing difficulties posed by soft keyboards. This paper describes the design and research of a speech-based interface for an enterprise sales application that combines speech recognition, natural language processing, and web services to drive the application. The paper further delves into the research methods applied -- paper prototype testing, structured interviews, and a lab study -- to gather feedback at various design stages. Finally, the paper describes how each of the three distinct research methods provided researchers with a more complete understanding of the user experience and led to a redesign.
HMI development for a purpose-built electric taxi in Singapore BIBAFull-Text 434-439
  Sebastian Osswald; Daniel Zehe; Philipp Mundhenk; Pratik Sheth; Martin Schaller; Stephan Schickram; Daniel Gleyzes
In this paper, we describe the development process of a human-machine interface (HMI) for a purpose-built electric taxi in Singapore. Based on the requirements of taxi drivers and passengers, we developed an automotive communication structure to support the connectivity between the vehicle HMI and mobile devices, support taxi-related functionalities and emphasize the characteristics of the electric vehicle. We carried out seven requirement studies in Singapore, implemented an integrated HMI platform, and developed different interface prototypes for the vehicle HMI and smart devices. This work contributes to the emerging field of automotive user interfaces by proposing a fully integrated HMI for an electric taxi.

Demos

An accessible, large-print, listening and talking e-book to support families reading together BIBAFull-Text 440-443
  Abbas Attarwala; Cosmin Munteanu; Ronald Baecker
Reading is an activity that is not only informative or pleasurable, but can have significant social benefits. Especially in a family setting, it is part of the interaction between children and their parents, it helps create a bond between children and their grandparents, and it even brings adults and their older parents closer. However, with families increasingly living or spending time in different locations or managing busy schedules that afford very little time together, the social opportunities enabled by reading are often lost. Furthermore, reading can be a challenge for older adults or for those with impaired eyesight. To address these problems, we are proposing ALLT -- an Accessible, Large-Print, Listening and Talking e-book. ALLT is a tablet-based e-reading application that enhances the capabilities of e-book readers through customizable and intelligent accessibility features. It provides support for asynchronous "reading together" by synchronizing the audio recording of one user with the text that is later read by another user. This addresses the needs of a variety of users, from visually impaired adults reading together with a loved one, to children being able to replay an interactive story previously read together with their grandparents. In this demo paper we present ALLT's features and detail how they support asynchronously reading together.
An experimentation environment for mobile 3D and virtual reality BIBAFull-Text 444-447
  Wolfgang Hürst; Jannes Beurskens; Marco van Laar
Unlike desktop screens, mobile devices can be moved around freely. This allows us to create different experiences when exploring 3D data and virtual environments on such handhelds. Yet, this additional degree of freedom not only introduces exciting new possibilities, but also potential issues when used in actual applications. We present a demo environment that enables users to explore different kinds of 3D visualizations on smartphones and tablets, experiment with various characteristics and implementation options, and experience the advantages and disadvantages of these different realizations.
Creating a stereoscopic magic-lens to improve depth perception in handheld augmented reality BIBAFull-Text 448-451
  Klen Copic Pucihar; Paul Coulton; Jason Alexander
Handheld Augmented Reality (AR) is often presented using the magic-lens paradigm, where the handheld device is portrayed as if it were transparent. Such virtual transparency is usually implemented using video captured by a single camera and rendered on the device's screen. This removes binocular disparity, which may undermine users' ability to correctly estimate depth when seeing the world through the magic-lens. To test this assumption, this paper presents a qualitative user study that compares a magic-lens implemented on a mobile phone with a transparent glass replica. Observational results and questionnaire analysis indicate that binocular disparity may play a significant role in participants' depth perception. These promising results led to the subsequent implementation of a stereoscopic magic-lens prototype on a commercially available mobile device.
Enhanced virtual transparency in handheld AR: digital magnifying glass BIBAFull-Text 452-455
  Klen Copic Pucihar; Paul Coulton
Handheld Augmented Reality (AR) is often presented using the magic-lens paradigm, in which the magic-lens acts as a transparent interface. Such transparency is usually implemented by rendering camera-captured video on the device's screen. The quality of this transparency is limited by the quality of the video stream, which may be affected by an unfocused camera lens, poor lighting conditions, and limited video-stream resolution. All of these factors may reduce the readability of the AR scene. To address rendering quality and increase scene readability, this paper presents an enhanced virtual transparency solution in which segments of the scene are replaced by high-definition digital content. The proposed enhanced virtual transparency is demonstrated through the design of a digital magnifying glass, implemented on an off-the-shelf mobile phone.
Interaction with services using an augmented reality user interface BIBAFull-Text 456-459
  Seamus Hickey; Antti Karhu; Juha Hyvärinen; Leena Arhippainen; Peter Antoniac
This paper introduces a new concept for interacting with mobile devices while on the move. The concept uses a combination of interaction techniques, such as AR, touch, and tilting and rotating the mobile device. Validation is done using basic scenarios in which users interact with various services available on the street. The user interface elements are represented by AR-anchored objects (still pictures) and floating objects that give visual cues about where services are available. Users are aided in their actions by freezing the user interface when the device is tilted down, so that they can easily manipulate the objects.
Mobile augmented reality: exploring content in natural and controlled settings using 3d tracking BIBAFull-Text 460-463
  Anton Fedosov; Tobias Eble
The advent of the mobile device has propelled the education, adoption, and implementation of Augmented Reality on a massive scale. The success and evolution of the underlying technology over the last five years show clearly that the industry is progressing, though compelling examples of useful augmented reality applications are yet to be explored. The ability to recognize images, markers, and 3D objects is one of the most important aspects of Augmented Reality. In the present work we propose three application examples that make recognition and visual search more intuitive, natural, and accessible.
PACMAN UI: vision-based finger detection for positioning and clicking manipulations BIBAFull-Text 464-467
  Haruhisa Kato; Hiromasa Yanagihara
This paper proposes an intuitive input interface that can handle various operations based on finger image recognition. It receives continuous analog input by detecting a knuckle of the user's clenched fist. In contrast to the conventional wireless mouse, whose sensitivity cannot be changed dynamically, the proposed method brings not only stable positioning but also quick clicking with a small finger gesture. In order to evaluate operability, we conducted a user experiment: a time trial for target selection. The subjects completed the task with the proposed controller in 44% less time than with a conventional wireless mouse. We confirmed that the proposed method can reliably follow finger gestures.
Pufftext: a puff controlled software-based hands-free spin keyboard for mobile phones BIBAFull-Text 468-471
  Jackson Feijó Filho; Wilson Prata; Thiago Valle
This work proposes the use of a low-cost software-based puff controlled hands-free spinning keyboard for mobile phones as an alternative interaction technology for people with motor disabilities. It attempts to explore the processing of the audio from the microphone in mobile phones to select characters from a spinning keyboard. A proof of concept of this work is demonstrated by the implementation and experimentation of a mobile application prototype that enables users to perform text input through "puffing" interaction.
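Editorial note: the abstract above describes selecting characters by detecting a "puff" in the phone's microphone audio. As an illustrative sketch only (not the authors' implementation), puff detection can be approximated by thresholding the RMS energy of an audio frame; the frame contents and threshold value below are hypothetical:

```python
import math

def is_puff(samples, threshold=0.3):
    """Detect a 'puff' as an audio frame whose RMS energy exceeds a threshold.

    `samples` are normalised amplitudes in [-1.0, 1.0]. The threshold is an
    illustrative value, not taken from the paper.
    """
    if not samples:
        return False
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return rms > threshold

# A loud burst (puff) versus quiet background noise:
print(is_puff([0.8, -0.7, 0.9, -0.8]))    # True
print(is_puff([0.02, -0.01, 0.03, 0.0]))  # False
```

In a real deployment the threshold would need calibration per device and environment, which is presumably part of what the prototype explores.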
Storyteller: in-situ reflection on study experiences BIBAFull-Text 472-475
  Benjamin Poppinga; Stefan Oehmcke; Wilko Heuten; Susanne Boll
Diary studies are often applied in HCI research to collect qualitative user impressions. Unfortunately, the period between the creation of a diary entry and the later reflection on it can be too long, leading to a loss of recency and context. This eventually results in incomplete or misinterpreted data. In this paper we present Storyteller, a mobile application that allows quick creation of diary entries and encourages users to reflect on them in situ through a storytelling approach. We argue that this can lead to more accurate and substantial qualitative insights.
TimedNavigation: a time-based navigation approach BIBAFull-Text 476-479
  Janko Timmermann; Benjamin Poppinga; Martin Pielot; Wilko Heuten; Susanne Boll
Travelers sometimes need to reach a destination in a given amount of time. However, today's navigation systems try to route users to the destination as fast as possible. In this paper, we present the concept of time-based pedestrian navigation. We use a map on a smartphone that highlights streets depending on whether they will lead to a destination in time. Our map also allows the users to choose between route alternatives during the walk.

Posters
A study of developing auditory icons for mobile phone service applications in Taiwan region BIBAFull-Text 480-485
  Jiunde Lee; Yu-ning Chang
Advances in mobile technology open expansive opportunity spaces for value-added services to users. However, they also introduce critical user experience problems (visual occlusion/clutter). The dynamic contexts of mobile phone use and diverse functionalities compete for users' visual attention. Interaction designers might need to rethink the design and evaluation of such user interfaces. The present study aimed to explore possible design concepts and rationales for auditory icons for mobile phone service applications in Taiwan.
A tap and gesture hybrid method for authenticating smartphone users BIBAFull-Text 486-491
  Ahmed Arif; Michel Pahud; Ken Hinckley; William Buxton
This paper presents a new tap-and-gesture hybrid method for authenticating mobile device users. The new technique adds four simple gestures -- up, down, left, and right -- to the dominant digit-lock technique, allowing users to either tap or perform any one of the four gestures on the digit keys. It offers a total of 6,250,000 unique four-symbol password combinations, substantially more than conventional techniques. Results of a pilot study showed that the new technique was slower and more error-prone than the digit-lock technique. However, we believe that with practice it could become faster and more accurate. Also, most users were comfortable with the new technique, and all of them felt more secure while using it.
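Editorial note: the combination count claimed above can be verified directly, assuming 10 digit keys that each accept either a tap or one of the four gestures, over a four-symbol password:

```python
# Each of the 10 digit keys can be tapped or receive one of 4 gestures,
# giving 10 * 5 = 50 distinct symbols per password position.
symbols_per_position = 10 * (1 + 4)

# A four-symbol password therefore allows 50^4 combinations.
combinations = symbols_per_position ** 4
print(combinations)  # 6250000, matching the figure reported in the abstract
```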
An interface for context-aware retrieval of mobile contacts BIBAFull-Text 492-497
  Vassilios Stefanis; Andreas Komninos; Athanasios Plessas; John Garofalakis
Our work discusses a mobile contact retrieval interface which attempts to contextually predict the contacts a user is likely to need access to, in order to facilitate the retrieval process. We compare our prototype implementation with retrieval from traditional applications (contact list and call log) in a preliminary lab experiment and discuss our findings from user behaviour. We conclude with suggestions on how to improve this interface in order to further enhance the retrieval process.
Assessing contextual mood in public transport: a pilot study BIBAFull-Text 498-503
  Pedro Maurício Costa; Jeremy Pitt; Teresa Galvão; João Falcão e Cunha
In recent years, technological developments in mobile and communication networks have paved the way for smart environments, whose final goal is to provide users with enhanced experiences. The measure of user experience satisfaction, or quality of experience, may be defined as an affective state in response to a service. Thus, an experiment was devised to explore the relationship between users' affective state and their context, for assessing quality of experience in urban public transport services. A pilot study, conducted to evaluate the feasibility and requirements of such an experiment, is presented, leading to a large-scale field study.
Connecting stakeholders through context logging BIBAFull-Text 504-509
  Sami Vihavainen; Kimmo Karhu; Andrea Botero
The ability to log users' physical context through the sensors of mobile devices, and to track their online behavior, has evolved considerably. In this poster abstract we present work in progress on understanding how various stakeholders can be connected through context logging technologies, and the underlying motivations. We (1) describe the general components and information flow of context logging systems, (2) frame the primary stakeholders from the context logging point of view, and (3) discuss application areas through the different stakeholders and their goals for applying context logging. Moreover, we present ContextLogger3, a context-logging tool that combines automatically acquired sensor and mobile activity data with manually created textual notes.
   We conclude by suggesting that designers should systematically consider implementing affordances for fostering design conversation between the users who log/are logged and the other stakeholders. We also suggest considering affordances that give users manual control over context logging, to narrow the gap between the realities represented by automatic context logging and those perceived by the user.
Contextualise! personalise! persuade!: a mobile HCI framework for behaviour change support systems BIBAFull-Text 510-515
  Sebastian Prost; Johann Schrammel; Kathrin Röderer; Manfred Tscheligi
This paper presents a context-aware, personalised, persuasive (CPP) system design framework applicable to the sustainable transport field and other behaviour change support system domains. It operates on a situational, a user, and a target behaviour layer. Emphasis is placed on interlinking each layer's behaviour change factors for greater effectiveness. A prototype CPP system for more sustainable travel behaviour is introduced to demonstrate how the framework can be applied in practice.
Developing mobile interface metaphors and gestures BIBAFull-Text 516-521
  Dietrich Kammer; Deborah Schmidt; Mandy Keck; Rainer Groh
Interaction design for mobile applications is challenging due to the diversity of technologies and devices. In addition, the now ubiquitous multi-touch screens demand novel and engaging interface metaphors. In this paper, we report insights and three practical results from a workshop with undergraduate students. The aim was to experiment with new technologies by providing a set of creativity techniques for ideation. By tackling interaction design both for tablets and smartphones, flexible interface metaphors were developed.
Evaluating NFC and touchscreen interactions in collaborative mobile pervasive games BIBAFull-Text 522-527
  Michael Wolbert; Abdallah El Ali
This paper presents the motivation, design, and pilot evaluation of CountMeIn, a pervasive collaborative game to improve the waiting-time experience (e.g., waiting for a train, or for a traffic light to turn green). We tested an NFC-based and a touchscreen version of CountMeIn in a small pilot study. Our early results showed that the NFC-based version increased collaboration and was overall more positively perceived than the touchscreen version. We discuss the challenges ahead in deploying CountMeIn in a real-world setting.
Exploring mobile representations of folksonomies to support the example context of a community gardening project BIBAFull-Text 528-533
  Tanja Döring; Axel Sylvester; Albrecht Schmidt; Rainer Malaka
In this paper, we present results from an ongoing participatory design research project that focuses on the design and evaluation of a tagging-system and corresponding visualizations to support actors and visitors of an urban gardening project. The contributions of this work are threefold. First, it addresses the yet underexplored field of integrating context information beyond location and time into mobile tagging systems and folksonomies. Second, it suggests novel visualizations to explore tagged data and folksonomies beyond tag clouds and simple map representations on mobile phones. And third, it gives novel insights into supporting the embedded practices of a DIY (Do-It-Yourself) urban gardening community with interactive systems based on ethnographic fieldwork and participatory design.
Landscape vs portrait mode: which is faster to use on your smart phone? BIBAFull-Text 534-539
  Karsten Seipp; Kate Devlin
Touchscreen smart phones can be operated in portrait (P) and landscape (L) orientation. However, whether a device is faster to operate in P or L and where to put a button in each layout for best findability and operability remains unclear. This research makes a first attempt to examine in which orientation a touch-operated interface is faster to use and whether certain "zones" can be identified that have a particularly good performance in either orientation. Our results indicate that such zones exist in both L and P, and that L is faster to use than P. However, the effects are only visible when the user has not been primed with the target name. We conclude our study with practical advice for designers to improve usability and efficiency of time-critical applications and dialogues.
Lessons learned from participatory design with and for people with dementia BIBAFull-Text 540-545
  Julia M. Mayer; Jelena Zach
In this paper we describe challenges and lessons learned from developing a mobile touchscreen-based assistive tool for people with dementia. We focus not on the features of the tool but on the general participatory design process that was applied. The insights presented were gained from interviews, focus groups, and observations. We found that projecting problems onto imaginary characters and using simple games specifically customized for people with dementia ease the process of eliciting user needs, and that realistic prototypes allow design evaluations with people with dementia early on.
Linear interface for graphical interface of touch-screen: a pilot study on improving accessibility of the android-based mobile devices BIBAFull-Text 546-551
  Daniel Kocielinski; Jolanta Brzostek-Pawlowska
The article presents the idea of a blind-friendly Versatile Multimodal Linear Interface (VMLI) model for touchscreen-based devices, as well as the results of a pilot study and implementation of VMLI.
   VMLI enhances the operation of touchscreen-based mobile devices and facilitates the use of a touchscreen by the blind. VMLI transforms a planar layout of objects into hierarchically structured linear lists that allow non-sequential access to items. List items are read by VMLI using installed text-to-speech software; users navigate through lists and select items using specific touchscreen gestures preferred by the blind. VMLI lets users choose their preferred method of text input: a virtual QWERTY keyboard, a virtual Braille keyboard, or the physical keyboard of a Braille note-taker. The results of preliminary research on a pilot implementation of VMLI in the Android system, covering functions such as managing contacts and composing messages to selected recipients, give grounds for continuing work on VMLI.
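Editorial note: the core transformation described above (a planar screen layout into hierarchically structured linear lists) can be sketched roughly as follows. This is an illustrative example only, with hypothetical item names, not the authors' code:

```python
# A planar screen layout modelled as a two-level hierarchy:
# top-level objects, each with its own list of sub-items.
ui_tree = {
    "Contacts": ["Add contact", "Search", "Favourites"],
    "Messages": ["Compose", "Inbox"],
}

def linearise(tree):
    """Flatten a two-level layout into a top-level list plus per-object
    sublists, so items can be read out sequentially or jumped to by index
    (non-sequential access)."""
    top_level = list(tree)
    sublists = {name: items[:] for name, items in tree.items()}
    return top_level, sublists

top, subs = linearise(ui_tree)
print(top)               # ['Contacts', 'Messages']
print(subs["Messages"])  # ['Compose', 'Inbox']
```

Each list item would then be handed to the text-to-speech engine as the user steps through it with the touchscreen gestures the abstract mentions.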
Milky: on-product app for emotional product to human interactions BIBAFull-Text 552-557
  Matthieu Deru; Simon Bergweiler
In this paper we present a new way of adding emotional interaction to products. Based on the Microsoft Gadgeteer rapid prototyping platform, we concretized our vision of an on-product app by implementing an anthropomorphic, intelligent milk carton. The purpose of this realization is to give customers a better view of a product's life-cycle. It also demonstrates that the frontier between pure mobile application development and the creation of tangible objects is very thin, and it opens new ways to integrate the Internet of Things through an anthropomorphic user interface, leading to a new form of product-to-human interaction.
Moving shapes: a multiplayer game based on color detection running on public displays BIBAFull-Text 558-563
  Denys J. C. Matthies; Ngo Dieu Huong Nguyen; Shaunna Janine Lucas; Daniel Botz
Over the past few years the use of public displays has increased drastically, the most common being flat LED walls or projections on walls. Presently, interactive public displays often make use of depth cameras. This paper introduces a cheaper variant that allows people to interact with the display and each other by using the color-detection abilities of an ordinary webcam. As a proof of concept, a simple game was created that demonstrates how people are able to control and interact with photographed shapes via their own smartphones. Alternatively, a special hardware interface was built for users who do not own a smartphone. Unlike ordinary games, this game works without points; instead, the leading user is awarded the ability to make decisions about game speed and is able to influence the audio through his movements.
Proximity sensor: privacy-aware location sharing BIBAFull-Text 564-569
  Heiko Müller; Jutta Fortmann; Janko Timmermann; Wilko Heuten; Susanne Boll
In this paper we report on a participatory design study with young girls. Our goal was to create a mobile phone app that displays the spatial proximity of friends in an abstract and privacy-aware manner. A group of 16 girls worked along the user-centred design process to create initial paper-based designs of an app that respects friends' privacy while displaying their proximity, to allow for spontaneous meetings or re-grouping after separation. Our participants created very promising results, which we intend to implement and evaluate further with a broader audience.
Representing and interpreting reformation in the wild BIBAFull-Text 570-575
  Effie Lai-Chong Law; Nicola Louise Bedall-Hill; Ross Parry; Adair Richards; Melissa Hawker
A mobile educational app combining the strengths of a sound pedagogical framework (Generic Learning Outcomes) and of interactive technologies (Augmented Reality (AR) and Quick Response (QR) codes) to deliver historical information about the Howard family during the Reformation is being developed by an interdisciplinary team. Visiting the relatively small archaeological site -- Thetford Priory -- with the new technologies may contribute to resolving an age-old puzzle through interpretations from multiple perspectives, and to fostering the sustainability of such a site.
Revisiting phone call UIs for multipurpose mobile phones BIBAFull-Text 576-581
  Matthias Böhmer; Sven Gehring; Jonas Hempel; Antonio Krüger
While mobile phones have undergone a significant evolution in recent years from single-purpose communication devices to multi-purpose devices, the fundamental design of phone call applications did not evolve accordingly. While their implementation has leveraged new hardware and software capabilities, the fundamental decisions people are able to make when they receive a call did not change. Currently, when a call comes in, a modal dialog opens in which the callee can either decline or accept the call. A recent study found that the current user interfaces of phone call applications (phone UIs) often lead to increased overhead when application usage is interrupted by phone calls [6]. In this paper, we revisit phone call UIs for multipurpose smartphones. We contribute a new design space for mobile phone call UIs, going beyond the simple accept-or-decline dilemma. We present a prototype implementation and discuss open challenges.
Smart2poster. bridging information and locality BIBAFull-Text 582-587
  Antonio Lotito; Giovanni Luca Spoto; Antonella Frisiello; Vito Macchia; Thomas Bolognesi; Francesco Ruà
This paper presents the Smart2Poster concept: a solution proposing an interaction modality aimed at bridging information and the surrounding physical world by means of familiar objects (a poster, a smartphone, and/or a TV screen), based on Near Field Communication (NFC) technology, which enables local peer-to-peer communication without requiring further connectivity. Moreover, the paper describes the usage scenarios that drove the design and implementation of the first prototype.
Stop questioning me!: towards optimizing user involvement during data collection on mobile devices BIBAFull-Text 588-593
  Nicholas Micallef; Mike Just; Lynne Baillie; Gunes Kayacik
Current methods of behavioral data collection from mobile devices either require significant involvement from participants to verify the 'ground truth' of the data, or rely on approximations that involve post-experiment comparisons to seed data. In this paper we argue that user involvement can be gracefully reduced by performing more intelligent seed comparisons. We aim to reduce participant involvement to the 'most interesting' temporal slots, both during the experiment and in post-experiment verification. We carried out a two-week study with four users, consisting of an initial opportunistic gathering of mobile sensor data. Our findings suggest that such a method can significantly reduce user involvement.
Tablets use in emerging markets: an exploration BIBAFull-Text 594-599
  Lucia Terrenghi; Laura Garcia-Barrio; Lidia Oshlyansky
Tablet sales are growing worldwide and changing the landscape of personal computing. This is true across mature markets as well as emerging ones. However, little research has been done on the influence of tablets in the emerging markets. This paper presents insights gained during an exploratory study on the use of tablets in four cities: Sao Paulo, Mexico City, Jakarta and Bangalore. The results uncover similarities and differences in the use of tablets in mature markets versus emerging markets and identify implications for design across markets.
WozARd: a wizard of oz tool for mobile AR BIBAFull-Text 600-605
  Günter Alce; Klas Hermodsson; Mattias Wallergård
Wizard of Oz methodology is useful when conducting user studies of a system that is in early development. It is essential to be able to simulate part of the system and to collect feedback from potential users. Using a human to act as the system is one way to do this.
   The Wizard of Oz tool presented here is called WozARd, and it aims to offer a set of tools that help the test leader control the visual, tactile, and auditory output presented to the test participant. Additionally, it is suitable for use in an augmented reality environment where images are overlaid on the phone's camera view or on glasses.
   The main features identified as necessary include the presentation of media such as images, video, and sound; navigation and location-based triggering; automatically taking photos; the capability to log test results and visual feedback; and the integration of the Sony SmartWatch for additional interaction possibilities.
i2ME: a framework for building interactive mockups BIBAFull-Text 606-611
  Shah Rukh Humayoun; Steffen Hess; Felix Kiefer; Achim Ebert
Providing interactive mockups of mobile applications during the design phase introduces new challenges for interaction designers compared to traditional static mockups. In particular, the complexity of this process increases when it comes to enabling the user to actually explore the intended user experience of the mobile environment by enhancing traditional handmade sketches or tool-generated wireframes with concrete mobile interaction elements. We introduce a framework, called i2ME (interactive Mockup-Building for Mobile Environment), for building interactive mockups with concrete mobile interaction elements. The framework enhances static mockups (handmade or tool-generated) with screen transitions and multi-touch gestures, and enables deployment of the resulting HTML5+JavaScript interactive mockups to multiple platforms and device classes.

Workshops
Informing future design via large-scale research methods and big data BIBAFull-Text 612-615
  Mattias Rost; Alistair Morrison; Henriette Cramer; Frank Bentley
With the launch of 'app stores' on several mobile platforms and the great uptake of smartphones among the general population, researchers have begun utilising these distribution channels to deploy research software to large numbers of users. Previous Research In The Large workshops have sought to establish base-line practice in this area. We have seen the use of app stores as being successful as a methodology for gathering large amounts of data, leading to design implications, but we have yet to explore the full potential for this data's use and interpretation. How is it possible to leverage the practices of large-scale research, beyond the current approaches, to more directly inform future designs? We propose that the time is right to re-energise discussions on large-scale research, looking further than the basic methodological issues and assessing the potential for informing the design of new mobile software.
Designing mobile augmented reality BIBAFull-Text 616-621
  Hartmut Seichter; Jens Grubert; Tobias Langlotz
The development of mobile Augmented Reality applications has become increasingly popular over the last few years. However, many of the existing solutions build on the reuse of available standard metaphors for visualization and interaction without considering the manifold contextual factors of their use. In this workshop we want to discuss theoretical design approaches and practical tools that should help developers make more informed choices when exploring the design space of Augmented Reality interfaces in mobile contexts.
Entertainment technology in transportation against frustration, aggression and irrationality BIBAFull-Text 622-625
  David Wilfinger; Alexander Meschtscherjakov; Manfred Tscheligi; Petra Sundström; Dalila Szostak; Roderick McCall
This workshop addresses two strong fields within the Mobile HCI community: games & entertainment and transportation user interfaces. Using transportation technology (e.g., a car, a plane, or public transportation) can be frustrating due to crowded streets, delays, and other travelers. Frustration may lead to aggression and negative experiences for other road users and passengers [4], leading to irrational behaviors [6]. Games & entertainment technology offers the potential to resolve these negative user experiences. This workshop brings together entertainment and transportation user interface experts who are willing to understand mobile entertainment technology as a potential solution to improve the experience of all travelers, drivers, and workers within the transportation field. The overall aim of the workshop is to create a common understanding of the challenges of entertainment in transportation, as well as to further extend the research agenda for entertainment in this context from both a scientific and an industrial perspective.
SiMPE: 8th workshop on speech and sound in mobile and pervasive environments BIBAFull-Text 626-629
  Amit A. Nanavati; Nitendra Rajput; Saurabh Srivastava; Cumhur Erkut; Antti Jylhä; Alexander I. Rudnicky; Stefania Serafin; Markku Turunen
The SiMPE workshop series started in 2006 with the goal of enabling speech processing on mobile and embedded devices. The SiMPE 2012 workshop extended the notion of audio to non-speech "Sounds" and thus the expansion became "Speech and Sound". SiMPE 2010 and 2011 brought together researchers from the speech and the HCI communities. Speech User interaction in cars was a focus area in 2009. Multimodality got more attention in SiMPE 2008. In SiMPE 2007, the focus was on developing regions.
   With SiMPE 2013, the 8th in the series, we continue to explore the area of speech along with sound. Beyond language processing and text-to-speech synthesis in the voice-driven interaction loop, sensors can track continuous human activities such as singing, walking, or shaking the mobile phone, and non-speech audio can facilitate continuous interaction. The technologies underlying speech processing and sound processing are quite different, and these communities have been working mostly independently of each other. And yet, for multimodal interactions on mobile devices, it is natural to ask whether and how speech and sound can be mixed and used more effectively and naturally.
U-PriSM 2: the second usable privacy and security for mobile devices workshop BIBAFull-Text 630-632
  Sonia Chiasson; Heather Crawford; Serge Egelman; Pourang Irani
The Second Usable Privacy and Security for Mobile Devices Workshop (U-PriSM 2) was held with MobileHCI'13. The U-PriSM 2 workshop was an opportunity for researchers and practitioners to discuss research challenges and experiences around the usable privacy and security of mobile devices (smart phones and tablets). Security often involves having non-security experts, or even novice users, regularly making important security decisions while their main focus is on other primary tasks. This is especially true for mobile devices where users can quickly and easily install apps, where user interfaces are minimal due to space constraints, and where users are often distracted by their environment.
   Participants had a chance to explore mobile device usage and the unique usable security and privacy challenges that arise, discuss proposed systems and ideas that address these needs, and work towards the development of design principles to inform future development in the area.
Workshop on prototyping to support the interaction designing in mobile application development (PID-MAD 2013) BIBAFull-Text 633-636
  Shah Rukh Humayoun; Steffen Hess; Achim Ebert
Recent changes in the mobile environment, such as the addition of multi-touch gestures, the use of sensors, or single-focused mobile apps, have brought several challenges for interaction designers in communicating their ideas and thoughts during early design activities. Traditional prototyping techniques may not provide sufficient support, because they lack current mobile interaction paradigms. Therefore, a shift is required in prototyping techniques and approaches in order to properly support the interaction design process of mobile application development for the current mobile environment.
   Targeting these concerns, the workshop envisions research that addresses the need for a change in existing prototyping techniques, as well as novel prototyping approaches and frameworks that support not only the interaction design process but the whole mobile application development process.