
Proceedings of the 2013 International Conference on Mobile and Ubiquitous Multimedia

Fullname: Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia
Editors: Matthias Kranz; Kåre Synnes; Sebastian Boring; Kristof Van Laerhoven
Location: Luleå, Sweden
Dates: 2013-Dec-02 to 2013-Dec-05
Standard No: ISBN: 978-1-4503-2648-3; ACM DL: Table of Contents; hcibib: MUM13
Links: Conference Website
  1. Visualization techniques
  2. Public display and collaboration
  3. Supporting architectures and methods
  4. New interaction methods
  5. Inferring the mobile user's context
  6. Designs and explorations
  7. Applications
  8. Location and navigation
  9. Demos
  10. Posters
  11. UbiChallenge

Visualization techniques

BubblesDial: exploring large display content graphs on small devices BIBAFull-Text 1
  Joanna Bergstrom-Lehtovirta; Tommy Eklund; Antti Jylhä; Kai Kuikkaniemi; Chao An; Giulio Jacucci
Large interfaces are fixed to a certain use context, for example to a physical smart space. Mobile counterparts of public interfaces allow users to continue interacting with the content even after leaving the space. However, wall applications make use of a large display surface, and fitting the same user interface to the constraints of a mobile screen is challenging. Starting with Bubble Wall -- an information exploration application for multitouch walls -- we developed interfaces for browsing the same content graphs on mobile devices. A comparison study of exploration and navigation tasks was conducted with two mobile interfaces reproducing the large-screen interactions: BubbleSpace, a more faithful redesign, and BubblesDial, which reduces the interactions for a better fit to a small screen. BubblesDial scored significantly better in the usability and performance evaluation, especially when participants were primed with use of the Bubble Wall. We also present implications for the redesign of these large content graph interactions for mobile use.
Using mobile devices to enable visual multiplexing on public displays: three approaches compared BIBAFull-Text 2
  Morin Ostkamp; Jonas Hülsermann; Christian Kray; Gernot Bauer
Public displays have become ubiquitous in many places regularly visited by large numbers of people (e.g., traffic hubs or malls). In addition to advertising, they often provide information related to the location (e.g., timetables). However, individuals can have difficulty finding information relevant to them -- either due to information overload or lack of personalization. Multiplexing, as defined in information theory, can help to address this issue by increasing the number of available channels. We propose three methods for visual multiplexing and report on a controlled lab-based comparison study. Our results indicate that visual multiplexing via mobile devices can be a feasible solution to provide personalized multimedia content on public displays, and that the three methods tested differ in terms of performance. We found that the content type shown has an impact on which method works best, and that self-reported workload differed according to content type and multiplexing method.
Beyond heat maps: mining common swipe gestures BIBAFull-Text 3
  Klaus Schaefers; David Ribeiro; Ana Correia de Barros
Heat maps are a common research tool for visualizing human-computer interaction. Despite being widely used to track navigation, clicks, cursor moves or eye gaze, heat maps have not yet been explored as a means to understand users' gestural interaction with mobile devices. This understanding is particularly relevant in the case of older adult users, who are often novice users and may also struggle with accuracy in gesture performance. This paper explores the application of the DBSCAN clustering algorithm to uncover the most relevant swipe gestures in data sets containing the user interactions of two mobile applications. An intuitive visualization of the clustering results is presented and compared in a case study with a heat map visualization, discussing the novelty and usefulness of these visualizations for user behaviour and usability studies.
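The clustering step named in the abstract can be sketched with a plain DBSCAN implementation; representing each swipe by its start and end coordinates is an illustrative assumption, not necessarily the paper's actual feature set, and the eps/min_pts values below are made up:

```python
import math

def dbscan(points, eps, min_pts):
    """Plain DBSCAN over a list of equal-length feature tuples.
    Returns one cluster label per point; -1 marks noise."""
    labels = [None] * len(points)
    cluster = -1

    def neighbours(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1              # too sparse: noise (may become a border point later)
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster     # noise reachable from a core point: border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = neighbours(j)
            if len(more) >= min_pts:    # j is itself a core point: keep expanding
                queue.extend(more)
    return labels

# A swipe as a 4-D feature vector: (start_x, start_y, end_x, end_y).
swipes = [(0, 0, 100, 0), (2, 1, 101, 2), (1, 2, 99, 1),   # horizontal swipes
          (0, 200, 100, 200), (2, 201, 99, 202)]           # same gesture, lower on screen
labels = dbscan(swipes, eps=20, min_pts=2)
```

Grouping swipes this way surfaces the dominant gestures directly, which is what a positional heat map cannot show: two swipes with the same trajectory but different screen positions end up in different clusters.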
Is autostereoscopy useful for handheld AR? BIBAFull-Text 4
  Frederic Kerber; Pascal Lessel; Michael Mauderer; Florian Daiber; Antti Oulasvirta; Antonio Krüger
Some recent mobile devices have autostereoscopic displays that enable users to perceive stereoscopic 3D without lenses or filters. This might be used to improve depth discrimination of objects overlaid on a camera viewfinder in augmented reality (AR). However, it is not known if autostereoscopy is useful in the viewing conditions typical of mobile AR. This paper investigates the use of autostereoscopic displays in a psychophysical experiment with twelve participants using a state-of-the-art commercial device. The main finding is that stereoscopy has a negligible effect, if any, on a small screen, even in favorable viewing conditions. Instead, the traditional depth cues, in particular object size, drive depth discrimination.
Mobile photo sharing through collaborative space in stereoscopic 3D BIBAFull-Text 5
  Jonna Häkkilä; Maaret Posti; Leena Ventä-Olkkonen; Olli Koskenranta; Ashley Colley
Mobile user interfaces (UIs) utilizing stereoscopic 3D have so far been driven predominantly by visual design, and when interaction design has been considered, the focus has been on single-user use cases. In this paper, we introduce a novel way to utilize stereoscopy in a mobile user interface in a collaborative task. We define a shared space, appearing below the screen level, which is used in a collaborative task of sharing mobile phone photos. Two users are able to share content by moving items between the private layer, at zero parallax, and the shared layer, at positive parallax. We developed the application through an iterative design process including two user studies (n=27, n=19). We report positive evaluation results, especially in regard to the utilitarian aspects of the UI design.

Public display and collaboration

Evaluation of a programming toolkit for interactive public display applications BIBAFull-Text 6
  Jorge C. S. Cardoso; Rui José
Interaction is repeatedly pointed out as a key enabling element towards more engaging and valuable public displays. Still, most digital public displays today do not support any interactive features. We argue that this is mainly due to the lack of efficient and clear abstractions that developers can use to incorporate interactivity into their applications. As a consequence, interaction represents a major overhead for developers, and users are faced with inconsistent interaction models across different displays. This paper describes the results of the evaluation of a widget toolkit for generalized interaction with public displays. Our toolkit was developed for web-based applications and it supports multiple interaction mechanisms, automatically generated graphical interfaces, asynchronous events and concurrent interaction. We have evaluated the toolkit along various dimensions -- system performance, API usability, and real-world deployment -- and we present and discuss the results in this paper.
Evaluating the experiential user experience of public display applications in the wild BIBAFull-Text 7
  Tuuli Keskinen; Jaakko Hakulinen; Tomi Heimonen; Markku Turunen; Sumita Sharma; Toni Miettinen; Matti Luhtala
Studying pervasive systems in the wild has recently gained significant interest. However, few methods exist that focus on the subjective user experience of such systems rather than on objective metrics like performance and task success. Multimodal interaction in particular poses challenges to understanding how different input and output methods affect the users' experience in this context. We present a new method for evaluating the experiential user experience of interactive systems. It combines two existing approaches from different fields: a questionnaire-based evaluation method called SUXES, intended for evaluating user expectations and experiences, and a theoretical experience framework, the Experience Pyramid, originally developed for analyzing and improving experiential tourism products. The new method was used in two field studies of multimodal public display applications. Our findings show that the method is a practical approach for user experience evaluation in the wild, especially in the case of pervasive applications that aim to provide novel experiences rather than facilitate task-oriented information access.
MobiZone: personalized interaction with multiple items on interactive surfaces BIBAFull-Text 8
  Markus Rader; Clemens Holzmann; Enrico Rukzio; Julian Seifert
Current interactive surfaces do not support user identification. Hence, personalized applications that consider user-specific access control are not possible. Diverse approaches for identifying and distinguishing users have been investigated in previous research. Token-based approaches -- e.g., which utilize the user's mobile phone -- are especially promising, as they also allow for consideration of the user's personal digital context (e.g., stored messages, contacts, or media data). However, existing interaction techniques are limited regarding their ability to enable users to manipulate (e.g., select or copy) multiple items at the same time, as they are cumbersome when the number of files exceeds a certain amount. We present MobiZone, a technique that enables users to interact with large numbers of items on an interactive surface while enabling personalized access by using the mobile phone as a token. MobiZone provides a spatial zone that can be positioned, resized and associated with any action according to the user's needs; items enclosed by the zone can be manipulated simultaneously. We present three interaction techniques (FlashLight&Control, Remote&Control, and Place&Control) that enable users to control the zone. Additionally, we report the results of a comparative user study in which we compared the different interaction techniques for MobiZone. The results indicate that users are fastest with Remote&Control, and they also rated Remote&Control slightly higher than the other techniques.
Designing for presence in social television interaction BIBAFull-Text 9
  Jarmo Palviainen; Kati Kuusinen; Kaisa Väänänen-Vainio-Mattila
In the past years, people have started to use social media to interact actively around TV content. However, despite over a decade of active research and product development, Social TV has not been adopted by large populations. This paper aims to support designing interaction for Social TV services and, more specifically, designing for presence and togetherness between viewers. Our constructive research consisted of a series of user studies and iterative prototyping. We conducted user studies with altogether 51 participants, both in the laboratory and in real-life contexts. To support presence, the final prototype includes three different modalities of communication -- voice and text-based chat, and animated gestures with avatars. The qualitative findings imply that gestures with avatars in a virtual space support Social TV and the experience of presence. Finally, we present an analysis of Social TV heuristics and their validity in the context of our designs.
A cross-device drag-and-drop technique BIBAFull-Text 10
  Adalberto L. Simeone; Julian Seifert; Dominik Schmidt; Paul Holleis; Enrico Rukzio; Hans Gellersen
Many interactions naturally extend across smartphones and devices with larger screens. Indeed, data might be received on the mobile device but more conveniently processed with an application on a larger device, or vice versa. Such interactions require spontaneous data transfer from a source location on one screen to a target location on the other device. We introduce a cross-device Drag-and-Drop technique to facilitate these interactions involving multiple touchscreen devices, with minimal effort for the user. The technique is a two-handed gesture, where one hand is used to suitably align the mobile phone with the larger screen, while the other is used to select and drag an object between devices and choose which application should receive the data.

Supporting architectures and methods

Blur-resistant joint 1D and 2D barcode localization for smartphones BIBAFull-Text 11
  Gábor Sörös; Christian Flörkemeier
With the proliferation of built-in cameras, barcode scanning on smartphones has become widespread in both consumer and enterprise domains. To avoid making the user precisely align the barcode at a dedicated position and angle in the camera image, barcode localization algorithms are necessary that quickly scan the image for possible barcode locations and pass those to the actual barcode decoder. In this paper, we present a barcode localization approach that is orientation, scale, and symbology (1D and 2D) invariant and shows better blur invariance than existing approaches, while operating in real time on a smartphone. Previous approaches focused on selected aspects such as orientation invariance and speed for 1D codes, or scale invariance for 2D codes. Our combined method relies on the structure matrix and the saturation from the HSV color system. The comparison with three other real-time barcode localization algorithms shows that our approach outperforms the state of the art with respect to symbology and blur invariance at the expense of reduced speed.
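The structure-matrix cue mentioned in the abstract can be sketched as follows. The eigenvalue-based coherence measure shown here is a standard formulation and only one ingredient of the paper's combined method; the HSV saturation cue and the actual gradient computation from image pixels are omitted:

```python
import math

def structure_matrix(gx, gy):
    """Accumulate the 2x2 structure matrix over an image patch from
    per-pixel gradients gx, gy (flat lists of equal length)."""
    jxx = sum(x * x for x in gx)
    jxy = sum(x * y for x, y in zip(gx, gy))
    jyy = sum(y * y for y in gy)
    return jxx, jxy, jyy

def gradient_coherence(jxx, jxy, jyy):
    """Coherence (lambda1 - lambda2) / (lambda1 + lambda2) in [0, 1]:
    near 1 for a single dominant gradient direction (1D barcode stripes),
    lower when strong gradients occur in both directions (2D codes)."""
    trace = jxx + jyy
    if trace == 0:
        return 0.0                          # flat patch, no gradients at all
    diff = math.hypot(jxx - jyy, 2 * jxy)   # equals lambda1 - lambda2
    return diff / trace

# A patch of vertical stripes: all gradients point along x.
stripes = gradient_coherence(*structure_matrix([1.0] * 10, [0.0] * 10))
```

Scanning patches for high gradient energy (large trace) then classifying by coherence gives candidate 1D versus 2D barcode regions without any orientation assumption, since the eigenvalues are rotation-invariant.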
Towards an information architecture for flexible reuse of digital media BIBAFull-Text 12
  Gerard Oleksik; Hans-Christian Jetter; Jens Gerken; Natasa Milic-Frayling; Rachel Jones
Our research is concerned with designing support for ubiquitous access, organization, and interpretation of digital assets that users produce and store across multiple devices and computing platforms. Through a co-design study with scientists we identified specific aspects of conceptual maps they wish to create for their projects in order to make sense of diverse collections of digital media related to their individual and collective work. By the use of a technology probe called DeskPiles we observed the mechanisms involved in such a process. We identified the practice of using sub-document elements to compose the views of digital collections and diverse linking strategies to express semantic relations among concepts and ideas and to enable access to supporting digital assets. Our findings reveal the need to expand traditional information architectures and include (1) an extended range of content reference options and (2) transformation services to enable extraction, conversion, and linking of digital content. We advocate a granular and multi-format representation of content as the basis for reuse and ubiquitous access of digital media in diverse multi-device environments.
NooSphere: an activity-centric infrastructure for distributed interaction BIBAFull-Text 13
  Steven Houben; Søren Nielsen; Morten Esbensen; Jakob E. Bardram
Distributed interaction is a computing paradigm in which the interaction with a computer system is distributed over multiple devices, users and locations. Designing and developing distributed interaction systems is intrinsically difficult, as it requires the engineering of a stable infrastructure to support the actual system and user interface. As an approach to this re-engineering problem, we introduce NooSphere, an activity-centric infrastructure and programming framework that provides a set of fundamental distributed services enabling quick development and deployment of distributed interactive systems. In this paper, we describe the requirements, design and implementation of NooSphere and validate the infrastructure by implementing three canonical, real-world deployable applications constructed on top of the NooSphere infrastructure.

New interaction methods

AR typing interface for mobile devices BIBAFull-Text 14
  Masakazu Higuchi; Takashi Komuro
We propose a new user interface system for mobile devices. By using augmented reality (AR) technology, the system overlays virtual objects on real images captured by a camera attached to the back of a mobile device, and the user can operate the mobile device by manipulating the virtual objects with his/her hand in the space behind the device. This system allows the user to operate the device in a wide three-dimensional space and to select small objects easily. In addition, the AR technology provides the user with a sense of reality in operating the device. We developed a typing application using our system and verified its effectiveness through user studies. The results showed that more than half of the subjects felt that the operation area of the proposed system is larger than that of a smartphone, and that both AR and the unfixed key plane are effective for improving typing speed.
Adding context to multi-touch region selections BIBAFull-Text 15
  Sven Strothoff; Klaus Hinrichs
As applications originating on desktop computers find their way onto multi-touch enabled mobile devices, many interaction tasks that were designed for computer mice spread to new touch-based environments. One example is region selection, for instance in image editing applications. Several studies have already investigated multi-touch object selection; region selections, however, have not been closely examined.
   Our proposed selection technique was designed for multi-touch interaction and better suits mobile devices. Taking advantage of multiple touches enables the user to easily extend, modify and refine selections based on the order and relative position -- the context -- of touches.
   We were concerned that controlling more degrees of freedom with our technique could negatively impact simple selections. To evaluate its performance, we present a user study that compares it to currently used techniques. Results show that our multi-touch region selection represents a good compromise between speed and precision, while providing the possibility to easily refine the selected region.
Finger in air: touch-less interaction on smartphone BIBAFull-Text 16
  Zhihan Lv; Alaa Halawani; Muhammad Sikandar Lal Khan; Shafiq Ur Réhman; Haibo Li
In this paper we present a vision-based intuitive interaction method for smart mobile devices. It is based on markerless finger gesture detection and attempts to provide a 'natural user interface'. No additional hardware is necessary for real-time finger gesture estimation. To evaluate the strengths and effectiveness of the proposed method, we designed two smartphone applications: a circle menu application, which provides the user with graphics and the smartphone's status information, and a bouncing ball game, a finger-gesture-based bouncing ball application. The users interact with these applications using finger gestures through the smartphone's camera view, which trigger the interaction events and generate activity sequences for interactive buffers. Our preliminary user study evaluation demonstrates the effectiveness and social acceptability of the proposed interaction approach.
Evaluation of hybrid front- and back-of-device interaction on mobile devices BIBAFull-Text 17
  Markus Löchtefeld; Christoph Hirtz; Sven Gehring
With the recent trend of increasing display sizes of mobile devices, one-handed interaction has become increasingly difficult when a user wants to maintain a safe grip around the device at the same time. In this paper we evaluate how a combination of hybrid front- and back-of-device touch input can be used to overcome the difficulties when using a mobile device with one hand. Our evaluation shows that, even though such a technique is slower than conventional front-of-device input, it allows for accurate and safe input.

Inferring the mobile user's context

Towards scalable activity recognition: adapting zero-effort crowdsourced acoustic models BIBAFull-Text 18
  Long-Van Nguyen-Dinh; Ulf Blanke; Gerhard Tröster
Human activity recognition systems traditionally require manual annotation of massive training data, which is laborious and non-scalable. An alternative approach is mining existing online crowd-sourced repositories for open-ended, free, annotated training data. However, differences across data sources or in observed contexts prevent a crowd-sourced model from reaching user-dependent recognition rates.
   To enhance the use of crowd-sourced data in activity recognition, we take an essential step forward by adapting a generic model based on crowd-sourced data to a personalized model. In this work, we investigate two adaptation approaches: 1) semi-supervised learning to combine crowd-sourced data and unlabeled user data, and 2) active learning to query the user for labels of samples where the crowd-sourced model fails to recognize the activity. We test our proposed approaches on 7 users using the auditory modality on mobile phones, with a total of 14 days of data and up to 9 daily context classes. Experimental results indicate that the semi-supervised model can indeed improve the recognition accuracy by up to 21% but is still significantly outperformed by a supervised model trained on user data. In the active learning scheme, the crowd-sourced model can reach the performance of the supervised model by requesting labels for only 0.7% of the user data. Our work illustrates a promising first step towards an unobtrusive, efficient and open-ended context recognition system that adapts free online crowd-sourced data into a personalized model.
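The active-learning scheme described above can be sketched as a confidence-thresholded query loop; the model interface (a callable returning a label with a confidence score) and the threshold value are illustrative assumptions, not the paper's actual query criterion:

```python
def active_learning_pass(model, samples, ask_user, threshold=0.6):
    """One pass over unlabeled user data: keep the crowd-sourced model's
    confident predictions, and query the user only where it is uncertain.
    Returns the labeled data and the number of queries issued."""
    labeled = []
    queries = 0
    for x in samples:
        label, confidence = model(x)    # model returns (label, confidence)
        if confidence < threshold:
            label = ask_user(x)         # manual annotation requested
            queries += 1
        labeled.append((x, label))
    return labeled, queries

# Toy demo: the "model" is confident on even inputs only.
toy_model = lambda x: ("walking", 0.9) if x % 2 == 0 else ("unknown", 0.1)
result, n_queries = active_learning_pass(toy_model, range(4),
                                         ask_user=lambda x: "talking")
```

The key property mirrored here is that annotation cost scales with the number of low-confidence samples, not with the size of the data set, which is why the paper can match supervised performance while labeling a tiny fraction of the user data.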
Enabling low-cost particulate matter measurement for participatory sensing scenarios BIBAFull-Text 19
  Matthias Budde; Rayan El Masri; Till Riedel; Michael Beigl
This paper presents a mobile, low-cost particulate matter sensing approach for use in Participatory Sensing scenarios. It shows that cheap commercial off-the-shelf (COTS) dust sensors can be used in distributed or mobile personal measurement devices at a cost one to two orders of magnitude lower than that of current hand-held solutions, while reaching meaningful accuracy. We conducted a series of experiments to juxtapose the performance of a gauged high-accuracy measurement device and a cheap COTS sensor that we fitted on a Bluetooth-enabled sensor module that can be interconnected with a mobile phone. Calibration and processing procedures using multi-sensor data fusion are presented that perform very well in lab situations and show practically relevant results in a realistic setting. An on-the-fly calibration correction step is proposed to address remaining issues by taking advantage of co-located measurements in Participatory Sensing scenarios. By sharing a few measurements across devices, a high measurement accuracy can be achieved in mobile urban sensing applications where devices join in an ad-hoc fashion. A performance evaluation was conducted by co-locating measurement devices with a municipal measurement station that monitors particulate matter in a European city, and simulations were run to evaluate the on-the-fly cross-device data processing.
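A minimal sketch of the calibration idea: fit a cheap sensor's raw readings against a co-located trusted reference with one-dimensional least squares. The paper's actual procedure uses multi-sensor data fusion and is more involved; the linear model here is an illustrative simplification:

```python
def fit_linear_calibration(raw, reference):
    """Least-squares fit of reference ~ a * raw + b, for a cheap sensor
    co-located with a trusted measurement device."""
    n = len(raw)
    mean_x = sum(raw) / n
    mean_y = sum(reference) / n
    sxx = sum((x - mean_x) ** 2 for x in raw)           # variance term
    sxy = sum((x - mean_x) * (y - mean_y)               # covariance term
              for x, y in zip(raw, reference))
    a = sxy / sxx
    b = mean_y - a * mean_x
    return a, b

def calibrate(raw_value, a, b):
    """Correct a single raw reading with the fitted coefficients."""
    return a * raw_value + b
```

In a participatory setting, the coefficients (a, b) would be the quantity shared between co-located devices, so a device that never visited the reference station can still correct its readings on the fly.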
Using time use with mobile sensor data: a road to practical mobile activity recognition? BIBAFull-Text 20
  Marko Borazio; Kristof Van Laerhoven
Having mobile devices that are capable of finding out what activity the user is doing has been suggested as an attractive way to alleviate interaction with these platforms, and has been identified as a promising instrument in, for instance, medical monitoring. Although the results of preliminary studies are promising, researchers tend to use high sampling rates in order to obtain adequate recognition rates with a variety of sensors. What has not been fully examined yet are ways to integrate information that does not come from sensors but lies in vast databases such as time use surveys. We examine using such statistical information combined with mobile acceleration data to determine 11 activities. We show how sensor and time survey information can be merged, and we evaluate our approach on continuous day-and-night activity data from 17 different users over 14 days each, resulting in a data set of 228 days. We conclude with a series of observations, including the types of activities for which the use of statistical data has particular benefits.
Human activity recognition using social media data BIBAFull-Text 21
  Zack Zhu; Ulf Blanke; Alberto Calatroni; Gerhard Tröster
Human activity recognition is a core component of context-aware, ubiquitous computing systems. Traditionally, this task is accomplished by analyzing signals from wearable motion sensors. While such signals can effectively distinguish various low-level activities (e.g. walking or standing), two issues exist: First, high-level activities (e.g. watching movies or attending lectures) are difficult to distinguish from motion data alone. Second, instrumentation of complex body sensor networks at population scale is impractical. In this work, we take the alternative approach of leveraging rich, dynamic, and crowd-generated self-report data as the basis for in-situ activity recognition. By treating the user as the "sensor", we make use of implicit signals emitted from natural use of mobile smartphones. Applying an L1-regularized linear SVM to features derived from textual content, semantic location, and time, we are able to infer 10 meaningful classes of daily life activities with a mean accuracy of up to 83.9%. Our work illustrates a promising first step towards comprehensive, high-level activity recognition using free, crowd-generated social media data.
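The classifier named in the abstract can be sketched for the binary case as subgradient descent on the L1-regularized hinge loss, which is what drives irrelevant feature weights towards zero. The features below are synthetic placeholders for the paper's textual, location, and time features, the hyperparameters are made up, and the paper's 10-class setting would wrap this in a one-vs-rest scheme:

```python
import random

def train_l1_svm(data, dim, lam=0.01, lr=0.1, epochs=200, seed=0):
    """Binary linear SVM trained by subgradient descent on
    hinge loss + L1 penalty. data is a list of (x_tuple, y) with y in {-1, +1}."""
    rng = random.Random(seed)
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            for i in range(dim):
                # Subgradient of the L1 penalty ...
                grad = lam * (1 if w[i] > 0 else -1 if w[i] < 0 else 0)
                # ... plus the hinge-loss gradient when the margin is violated.
                if margin < 1:
                    grad -= y * x[i]
                w[i] -= lr * grad
            if margin < 1:
                b += lr * y
    return w, b

def predict(w, b, x):
    """Sign of the decision function."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
```

With high-dimensional text-derived features, the L1 penalty acts as feature selection, which is presumably why the paper prefers it over the more common L2 regularizer.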
Inferring mood in ubiquitous conversational video BIBAFull-Text 22
  Dairazalia Sanchez-Cortes; Joan-Isaac Biel; Shiro Kumano; Junji Yamato; Kazuhiro Otsuka; Daniel Gatica-Perez
Conversational social video is becoming a worldwide trend. Video communication allows a more natural interaction when aiming to share personal news, ideas, and opinions, by transmitting both verbal content and nonverbal behavior. However, the automatic analysis of natural mood is challenging, since it is displayed in parallel via voice, face, and body. This paper presents an automatic approach to infer 11 natural mood categories in conversational social video using single and multimodal nonverbal cues extracted from video blogs (vlogs) on YouTube. The mood labels used in our work were collected via crowdsourcing. Our approach is promising for several of the studied mood categories. Our study demonstrates that although multimodal features perform better than single-channel features, not all the available channels are always needed to accurately discriminate mood in videos.

Designs and explorations

User-centred design of a mobile self-management solution for Parkinson's disease BIBAFull-Text 23
  Ana Correia de Barros; João Cevada; Àngels Bayés; Sheila Alcaine; Berta Mestre
Parkinson's disease (PD) is a highly prevalent and disabling condition, requiring frequent medication adjustments. In parallel, non-adherence to medical treatment might lead to severe consequences. Therefore, a solution to monitor PD symptoms, allowing neurologists to make informed decisions about medication adjustments, and one which could promote medical treatment adherence would be beneficial for both the patient and the medical doctor. In this paper we present the rationale and user-centred process for the design of four smartphone applications for the self-management of PD. We present the methods for evaluation and the results of usability tests. The results show that user-centred methods were efficient and that people with PD were able to achieve high task completion rates on usability tests with three of the applications for PD self-management. Future work should focus on detailed improvement of touch screen sensitivity to optimize error prevention.
Losing your creativity -- storytelling comparison between children and adolescents BIBAFull-Text 24
  Panu Åkerman; Arto Puikkonen
In this paper we study pico-projector-based storytelling among adolescents. We compare the results of our user study with 17 students to the results of our earlier study among young children. Our main focus was on creativity, playfulness and fun, as well as on the ubiquitous nature of the technology and the use of the environment. The comparison highlighted interesting differences. The nature of creativity seems to change with age, but the sources of fun and playfulness still share similarities. Groups also utilize their surroundings and the ubiquitous nature of the technology in a slightly different manner. The perceived capabilities of the provided technology also had a more profound effect on the adolescents, even to the extent of restricting their creativity.
Moths at midnight: design implications for supporting ecology-focused citizen science BIBAFull-Text 25
  Alan Chamberlain; Chloe Griffiths
This paper presents some initial findings, which form a set of design implications, from a study of the increasingly popular activity of setting up moth traps in private gardens. Moth trapping can be done either by an individual or a small group and involves setting a trap that will safely catch moths overnight. The trap is opened in the morning and the contents are identified and recorded. This information is usually reported to the local records centre (LRC). This research is based on a rapid ethnographic study and interviews, which reveal a series of intervention points at which mobile ubiquitous technology could augment this branch of citizen science (also known as crowd-sourced science), both supporting the aforementioned activity and enhancing the user's experience. These points relate to: the identification of species; habitat; flight season; verification; learning; reporting; and associated social information sharing.
The railway blues: affective interaction for personalised transport experiences BIBAFull-Text 26
  Pedro Maurício Costa; Asimina Vasalou; Jeremy Pitt; Teresa Galvão; João Falcão e Cunha
The convergence of personal devices, pervasive communication networks and remote computing has caused a fundamental shift in the user interaction paradigm. Multiple methods have enabled an implicit loop of interaction that goes beyond traditional graphical interfaces. Human emotion is one such dimension, supporting the development of empathic systems. Thus, quality of user experience, a subjective measure, may be defined as the affective state resulting from an interaction, which can be dynamically assessed. In mobile ubiquitous settings, leveraging this affective interaction to provide personalisation and immersive digital services has the potential to significantly impact user experience. This paper investigates the relationship between user affect and experience in the context of urban public transport.
Early perceptions of an augmented home BIBAFull-Text 27
  Leena Ventä-Olkkonen; Maaret Posti; Jonna Häkkilä
In this paper, we focus on charting future ubiquitous computing use cases utilizing mixed reality (MR) in domestic environments, which have so far been an unexplored domain for MR. We describe our early user research, in which participants from 12 households were asked, over one week, to brainstorm and assess their perceptions of imaginary mixed reality in their homes. This resulted in 167 user-generated concept ideas. We focus on the thematic findings that emerged from the collected material, and describe how people most often related the potential of MR concepts to applications concerning wellbeing, home-related information, communication and entertainment.

Applications

A comparative user study of faceted search in large data hierarchies on mobile devices BIBAFull-Text 28
  Mark Schneider; Ansgar Scherp; Jochen Hunz
We compare two approaches for exploring large, hierarchical data spaces on mobile devices using facets. While the first approach arranges the facets in a 3x3 grid, the second approach makes use of a scrollable list of facets for exploring the data. As a concrete scenario, we use category hierarchies that are dynamically obtained from different, distributed social media sources. We have conducted a between-group experiment of the two approaches with 64 subjects (41 male, 23 female) executing the same set of tasks, representing typical mobile users' information needs. The results show that subjects using the grid-based approach require significantly more time and more clicks to complete the tasks. However, regarding the subjects' satisfaction there is no significant difference between the two approaches. Thus, if efficiency is not the primary objective, the grid-based approach might be of interest, as it provides a navigation element that allows users to see their exact position in the data space. This might be a very useful feature in scenarios where knowing the exact position is crucial, such as browsing classification schemas in digital libraries or other formal taxonomies.
Micro-crowdfunding: achieving a sustainable society through economic and social incentives in micro-level crowdfunding BIBAFull-Text 29
  Mizuki Sakamoto; Tatsuo Nakajima
This paper introduces a new approach, named micro-crowdfunding, for motivating people to participate in achieving a sustainable society. Increasing people's awareness of how they participate in maintaining the sustainability of common resources, such as public sinks, toilets, shelves, and office areas, is central to achieving a sustainable society. Micro-crowdfunding, as proposed in this paper, is a new type of community-based crowdsourcing architecture that is based on the crowdfunding concept and uses the local currency idea as a tool for encouraging people who live in urban environments to become more aware of how important it is to sustain small, common resources with minimal effort. Because our approach is lightweight and uses a mobile phone, people can participate in micro-crowdfunding activities with little effort, anytime and anywhere.
   We present the basic concept of micro-crowdfunding and a prototype system. We also describe our experimental results, which show how economic and social factors are effective in facilitating micro-crowdfunding. Our results show that micro-crowdfunding increases awareness of social sustainability, and we believe that micro-crowdfunding makes it possible to motivate people to help achieve a sustainable society.
Visual authentication: a secure single step authentication for user authorization BIBAFull-Text 30
  Luis Roalter; Matthias Kranz; Andreas Möller; Stefan Diewald; Tobias Stockinger; Marion Koelle; Patrick Lindemann
User authentication on publicly exposed terminals with established mechanisms, such as typing the credentials on a virtual keyboard, can be insecure, e.g., due to shoulder surfing or a compromised terminal. In addition, username and password entry can be time-consuming and thus leaves room for improvement in terms of usability. As security and comfort often compete with each other, novel authentication and authorization methods, especially for public terminals, are desirable. In this paper, we present an approach to a distributed authentication and authorization system in which the user can be easily identified and enabled to use a service with his or her smartphone. The smartphone (as a personal and private device the user is always in control of) can provide a highly secure authentication token that is renewed and exchanged in the background without the user's participation. The claimed improvements were supported by a user survey with an implementation of a digital room management system as an example of a public display. The proposed authentication procedure would increase security and yet enable fast authentication at publicly exposed terminals.
AffectiView: mobile video camera application using physiological data BIBAFull-Text 31
  Takumi Shirokura; Nagisa Munekata; Tetsuo Ono
In recent years, devices such as smartphones have made it increasingly easy for people to capture videos and then share those videos on social networking services. Shared videos are capable of representing almost all emotional experiences, such as interest, astonishment and excitement. However, it is difficult for people who have little knowledge of professional video shooting to express these experiences using video alone. To address this issue, we developed AffectiView -- a mobile video camera application that captures users' affective responses while they are capturing videos, and also provides a way to share that data with other users. In this system, affective responses are measured using physiological proxies. We have organized the representations of physiological signals between users found in prior work into three styles. The system conveys video shooters' emotional experiences to viewers by applying each of these three styles to the captured video. A preliminary user study revealed positive effects of capturing and sharing physiological signals together with video.
NoseTapping: what else can you do with your nose? BIBAFull-Text 32
  Ondrej Polacek; Thomas Grill; Manfred Tscheligi
Touch-screen interfaces on smart devices have become ubiquitous in our everyday lives. In specific contextual situations, the capacitive touch interfaces used on current mobile devices are not accessible, for example when the user is wearing gloves during a cold winter. Although the market has responded by providing capacitive styluses and touchscreen-compatible gloves, these solutions are not widely accepted or appropriate in all such situations. Using the nose instead of the fingers is an easy way to overcome this problem. In this paper, we present in-depth results of a user study on nose-based interaction. The study was complemented by an online survey to elaborate the potential and acceptance of the nose-based interaction style. Based on the insights gained in the study, we identify the main challenges of nose-based interaction and contribute to the state of the art of design principles for this interaction style by adding two new design principles and refining one existing principle. In addition, we investigated the emotional effect of nose-based interaction based on the user experiences that emerged during the user study.

Location and navigation

Evaluating landmark attraction model in collaborative wayfinding in virtual learning environments BIBAFull-Text 33
  Pekka Kallioniemi; Jaakko Hakulinen; Tuuli Keskinen; Markku Turunen; Tomi Heimonen; Laura Pihkala-Posti; Mikael Uusi-Mäkelä; Pentti Hietala; Jussi Okkonen; Roope Raisamo
In Virtual Learning Environments, efficient navigation is a major issue, especially when navigation is used as a component of the learning process. This paper addresses the challenges in creating meaningful navigation routes from a language learning perspective. The work is grounded in findings from a specific case on German language learning, wherein two remotely located users communicated in a wayfinding guidance scenario. The users navigated through 360-degree virtual panoramic images using body gestures and could receive communication help via spoken hints by pointing at objects in the scenery. An important design consideration is how to choose these objects, as they have both navigational importance and pedagogical significance in terms of learning the desired language. Wayfinding interactions from 21 participants were compared to the values provided by a landmark attraction model applied to the landmarks along the routes. The results show a clear connection between the prominence of landmarks and the time spent on each panorama. This indicates that, together with pedagogical planning, the model can aid in selecting the interactive content for language learning applications in virtual environments.
I want to view it my way: interfaces to mobile maps should adapt to the user's orientation skills BIBAFull-Text 34
  Stefan Bienk; Markus Kattenbeck; Bernd Ludwig; Manuel Müller; Christina Ohm
Efficient human-computer interaction is a key to success for navigation systems, in particular when pedestrians are using them. Due to the increasing computational power of recent mobile devices, complex multimedia user interfaces for pedestrian navigation systems can be implemented. In order to provide the best-suited interface to each user, we present a user study comparing not only three map presentation modes (bird's eye, egocentric and a combined one), but also involving the users' sense of direction as a second independent factor. In the experiment conducted, we did not focus on a global navigation task, but on the repeated subtask of locating objects on the map. An ANOVA of the task completion times revealed a significant interaction effect between presentation mode and the participants' sense of direction. Consequently, we advocate user-adaptive presentation modes for pedestrian navigation systems.
Semantic enrichment of mobile phone data records BIBAFull-Text 35
  Zolzaya Dashdorj; Luciano Serafini; Fabrizio Antonelli; Roberto Larcher
The pervasiveness of mobile phones creates an unprecedented opportunity for analyzing human dynamics with the help of the data they generate. This enables a novel human-driven approach to service creation in a variety of domains (e.g., healthcare and transportation). Telecom operators own and manage billions of mobile network events (Call Detail Records -- CDRs) per day: interpreting such a big stream of data requires a deep understanding of the events' context through the available background knowledge. We introduce an ontological and stochastic model (HRBModel) to interpret mobile human behavior using merged mobile network data and geo-referenced background knowledge (e.g., OpenStreetMap). The model characterizes locations with the human activities that can happen there (with a given likelihood). This allows us to predictively compile sets of tasks that people are likely to engage in under certain contextual conditions, or to characterize exceptional events detected from anomalies in the CDRs. An experimental evaluation of the approach is presented.
Cicada fingerprinting system: from artificial to sustainable BIBAFull-Text 36
  Shunsuke Aoki; Hiroki Kobayashi; Kaoru Sezaki
Location estimation using artificial infrastructure for mobile computing has been actively studied, yet people have traditionally drawn much of their information from nature over the course of a day. Nature, including animals and plants, supplies various kinds of information acoustically and visually. In this context, we present the concept of the Cicada Fingerprinting System, a future localization scheme that will enable us to make the most of information from nature. In our system, users collect the chirps of cicadas as acoustic data via the microphone embedded in their smartphones. Using cicada chirps in a manner analogous to Wi-Fi fingerprinting, we can determine the location of users regardless of whether they are under a roof. That is to say, the Cicada Fingerprinting System applies cicadas' instinctive behaviour to localization. Furthermore, with our system, users are able to feel a sense of belonging to nature even in urban areas, where we spend much of our daily lives. This novel system is designed to make general users conscious of the presence of nature around them.

Demos


Eye drop: an interaction concept for gaze-supported point-to-point content transfer BIBAFull-Text 37
  Jayson Turner; Andreas Bulling; Jason Alexander; Hans Gellersen
The shared displays in our environment contain content that we desire. Furthermore, we often acquire content for a specific purpose, e.g., acquiring a phone number to place a call. We have developed a content transfer concept, Eye Drop, which provides techniques that allow fluid content acquisition and transfer from shared displays, and local positioning on personal devices, using gaze combined with manual input. The eyes naturally focus on content we desire. Our techniques use gaze to point remotely, removing the need for explicit pointing on the user's part. A manual trigger from a personal device confirms selection. Transfer is performed using gaze or manual input to smoothly transition content to a specific location on a personal device. This work demonstrates how these techniques can be applied to acquire content and apply actions to it through a natural sequence of interaction. We demonstrate a proof-of-concept prototype through five implemented application scenarios.
Seek'N'Share: a platform for location-based collaborative mobile learning BIBAFull-Text 38
  Tomi Heimonen; Markku Turunen; Sanna Kangas; Tamás Pallos; Pasi Pekkala; Santeri Saarinen; Katariina Tiitinen; Tuuli Keskinen; Matti Luhtala; Olli Koskinen; Jussi Okkonen; Roope Raisamo
We present a location-based collaborative mobile learning platform called Seek'N'Share. It consists of a Web-based learning assignment editor and a mobile application for exploring and capturing multimedia content in the field. The editor enables drag-and-drop creation of learning tasks, areas and points of interest using an intuitive Web interface. Assignments are accessed with an Android application that uses location information to provide content and tasks to learners as they explore the environment. The mobile application enables the learners to record audio and video and take pictures of their environment. This supports the overall goal of putting together a presentation as the outcome of the learning activity by combining predefined, contextual information with user-generated content. The platform is currently being piloted with local schools. Its novelty lies in its flexible support for creating location-based learning activities for unconstrained environments, and in the possibility for learners to collaboratively document their learning outcomes in situ.
Magic Ring: a self-contained gesture input device on finger BIBAFull-Text 39
  Lei Jing; Zixue Cheng; Yinghui Zhou; Junbo Wang; Tongjun Huang
Control and communication in computing environments with diverse equipment can be clumsy, obtrusive, and frustrating, even just finding the right input device or getting familiar with the input interface. In this paper, we present Magic Ring (MR), a finger-ring-shaped input device that uses an inertial sensor to detect subtle finger gestures and routine daily activities. As a self-contained, always-available, and hands-free input device, we believe that MR will enable diverse applications in intelligent computing environments. In this demonstration, we will show a prototype design of MR and three proof-of-concept application systems: a remote controller for electrical appliances such as a TV, radio, or lamp using simple finger gestures; a natural communication tool for chatting using simplified sign language; and a daily activity tracker that records daily activities such as room cleaning, eating, cooking, and writing with only one MR on the index finger.
Technical framework supporting a cross-device drag-and-drop technique BIBAFull-Text 40
  Adalberto L. Simeone; Julian Seifert; Dominik Schmidt; Paul Holleis; Enrico Rukzio; Hans Gellersen
We present the technical framework supporting a cross-device Drag-and-Drop technique designed to facilitate interactions involving multiple touchscreen devices. This technique supports users who need to transfer information received or produced on one device to another device that might be better suited to process it. Furthermore, it does not require any additional instrumentation. The technique is a two-handed gesture in which one hand is used to align the mobile phone with the larger screen, while the other is used to select and drag an object from one device to the other, where it can be applied directly onto a target application. We describe the implementation of the framework that enables spontaneous data transfer between a mobile device and a desktop computer.
Mobile dictation for healthcare professionals BIBAFull-Text 41
  Tuuli Keskinen; Aleksi Melto; Jaakko Hakulinen; Markku Turunen; Santeri Saarinen; Tamás Pallos; Pekka Kallioniemi; Riitta Danielsson-Ojala; Sanna Salanterä
We demonstrate a mobile dictation application utilizing automatic speech recognition for healthcare professionals. Development was done in close collaboration between human-technology interaction researchers, nursing science researchers, and professionals working in the area. Our work was motivated by the need to pass spoken patient information on to the next stages of treatment without additional manual steps. In addition, we wanted to enable truly mobile spoken information entry, i.e., dictation that can take place on the spot. In order to study the applicability, we conducted a small-scale Wizard-of-Oz evaluation in a real hospital environment with real nurses. Our main focus was to gather subjective expectations and experiences from the nurses themselves. The results show true potential for our mobile dictation application and its further development.

Posters


SmartPiggy: a piggy bank that talks to your smartphone BIBAFull-Text 42
  Tobias Stockinger; Marion Koelle; Patrick Lindemann; Lukas Witzani; Matthias Kranz
Saving money is usually a tedious task that requires a high degree of self-control for many of us. Some people have one or more specific savings targets in mind and thus need to prioritize them. We propose connecting a savings box with a personal smartphone, so that people become motivated to keep track of their savings for multiple targets. Using a savings box capable of counting money and connecting it to an app, we believe people will stick to their savings plans with higher motivation and be happier with their behavior. In this paper, we present first evidence for the success of this concept. We gathered feedback through an online user study in which participants were shown a video prototype. We propose further research directions with our SmartPiggy to confirm the feasibility of applying behavioral economics in HCI.
AppDetox: helping users with mobile app addiction BIBAFull-Text 43
  Markus Löchtefeld; Matthias Böhmer; Lyubomir Ganev
With the increasing adoption of smartphones, a problematic phenomenon has become apparent: people are changing their habits and becoming addicted to the different services that these devices provide. In this paper we present AppDetox: an app that allows users to purposely create rules that keep them from using certain apps. We describe our deployment of the app on a mobile application store, and present initial findings gained through observation of about 11,700 users of the application. We find that people are rather rigorous when restricting their app use, and that they mostly suppress the use of social networking and messaging apps.
Putting books back on the shelf: situated interactions with digital book collections on smartphones BIBAFull-Text 44
  Lauren Norrie; Marion Koelle; Roderick Murray-Smith; Matthias Kranz
We consider the reasons why we organise books in a physical environment and investigate whether situating interactions with a smartphone could improve the user experience of e-readers. Our prototype uses the Kinect depth sensor to detect the position of a user in relation to sections of a physical bookshelf. We also built a mobile application that allows users to browse and organise digital books by moving between each section. We present our initial observations of a user study that evaluated search and categorisation tasks with our prototype. Our findings motivate reasons to explore digital books in a physical environment and indicate issues to consider when designing situated interactions with e-readers.
jActivity: supporting mobile web developers with HTML5/JavaScript based human activity recognition BIBAFull-Text 45
  Michael Hauber; Anja Bachmann; Matthias Budde; Michael Beigl
Human Activity Recognition (HAR) using accelerometers has been studied intensively over the past decade. Recent HTML5 methods allow sampling a mobile phone's sensors from within web pages. Our objective is to leverage this for the creation of individual activity recognition modules that can be included in web applications to give them context-awareness. In this work we present jActivity, a first prototype of such a platform-independent HTML5/JavaScript framework, along with experiments to determine the general feasibility of, and challenges for, HAR in web applications. Our results indicate that the approach is promising, albeit so far limited to certain devices/user agents.
Assisting maintainers in the semiconductor factory: iterative co-design of a mobile interface and a situated display BIBAFull-Text 46
  Roland Buchner; Patricia M. Kluckner; Astrid Weiss; Manfred Tscheligi
Maintaining machines in semiconductor factories is a challenging task that, so far, has not been sufficiently supported by mobile interactive technology. This paper describes the early development of a maintainer support system. Our goal was to develop a user-experience prototype, consisting of a mobile and a situated interface, to support maintenance activities and the coordination between maintainers and shift leads. The interfaces are meant to reduce the amount of information presented and to improve awareness of defective equipment. Efforts described in this paper include the development of a conceptual user experience prototype, following an iterative user-centered design approach. Based on the requirements analysis, an initial mock-up of both interfaces was developed and later discussed with maintainers in a workshop. With an interactive Wizard of Oz (WOz) prototype, we examined the cooperative aspect as well as user experience factors (e.g., distraction, trust, usability) in a simulated factory environment.
Tele-embodied agent (TEA) for video teleconferencing BIBAFull-Text 47
  Muhammad Sikandar Lal Khan; Shafiq ur Réhman; Zhihan Lu; Haibo Li
We propose the design of a teleconferencing system that expresses nonverbal behavior (in our case, head gestures) along with audio-video communication. Previous audio-video conferencing systems fail to present the nonverbal behaviors that we, as humans, usually use in face-to-face interaction. Recently, research on teleconferencing systems has expanded to include the nonverbal cues of the remote person in distance communication. The accurate representation of nonverbal gestures in such systems is still challenging because they depend on hand-operated devices (like a mouse or keyboard). Furthermore, they still fall short of presenting accurate human gestures. We believe that incorporating embodied interaction in video teleconferencing (i.e., using the physical world as a medium for interacting with digital technology) can result in better nonverbal behavior representation. We introduce an experimental platform named Tele-Embodied Agent (TEA), which incorporates a remote person's head gestures to study a new paradigm of embodied interaction in video teleconferencing. Our preliminary tests show the accuracy (with respect to pose angles) and efficiency (with respect to time) of our proposed design. TEA can be used in the medical field, factories, offices, the gaming and music industries, and for training.
Co-creating a digital 3D city with children BIBAFull-Text 48
  Jonna Häkkilä; Maaret Posti; Olli Koskenranta; Leena Ventä-Olkkonen
In this paper, we present our co-creation work on a digital 3D city model, in which school children were asked to create drawings for the city landscape of their hometown. Altogether, 40 drawings were integrated into the 3D virtual world model to form a decorated 3D view of the city square and the main street. Our work presents a novel use case of using a local 3D city model as a creativity platform for children, opening a new viewpoint on how virtual world presentations can be used, e.g., to facilitate the perception of the local community.
Who's there?: experience-driven design of urban interaction using a tangible user interface BIBAFull-Text 49
  Leena Ventä-Olkkonen; Marianne Kinnula; Graham Dean; Tobias Stockinger; Claudia Zúñiga
During recent years, public displays relying on new types of display technologies have made their way into the city scene. In this paper, we present a concept that combines tangible interfaces with such ubiquitous urban interaction. We set out to create a tangible connection between different cities and employed an experience-driven design process towards our concept, called 'Who's There?'. We evaluated the concept by using a cardboard prototype with a group of fifteen users in a busy market square, where it generated considerable engagement and discussion with members of the public.
Jogging in a virtual city BIBAFull-Text 50
  Jonna Häkkilä; Leena Ventä-Olkkonen; Henglin Shi; Ville Karvonen; Yun He; Mikko Häyrynen
In our research, we explore the possibilities of combining a digital 3D city representation with a wellness application, and introduce a demo that aims to make running in a gym more inspiring and motivating. We present a prototype in which a running exercise on a treadmill is converted to a distance on the local city map and visualized in a 3D mirror-world presentation of the city. The user leaves a personalized tag at the spot reached and, in addition to seeing his or her own achievement, is able to see the performance of other runners on the streets of the virtual world. We evaluated the system in a gym, where 32 people tried out the prototype. The application was perceived as entertaining and interesting, and especially the ability to compare results with earlier runners was perceived as motivating.

UbiChallenge


2nd International UBI Challenge 2013 BIBAFull-Text 51
  Timo Ojala
This paper summarizes the 2nd UBI Challenge, which invited the global R&D community to design, implement, deploy and evaluate novel applications and services in a real-world setting atop the open urban computing testbed in Oulu, Finland. The paper first recaps the 1st UBI Challenge and then provides a procedural description of the 2nd UBI Challenge. The paper concludes with a discussion of issues related to participation in the UBI Challenge.
HotCity: enhancing ubiquitous maps with social context heatmaps BIBAFull-Text 52
  Andreas Komninos; Jeries Besharat; Denzil Ferreira; John Garofalakis
In this paper we present HotCity, a service that demonstrates how collecting and mining the interactions that users make with the urban environment through social networks can help tourists better plan activities, by sharing the collectively generated social context of a smart, connected city as a background layer to mapped POIs. The data for our service stems from the collection and analysis of one month's worth of human-physical environment interactions (i.e., Foursquare check-ins) for Oulu, a medium-sized city in Finland, where our service is deployed on ubiquitous public displays. Our analysis demonstrates that a good model of the city's dynamics can be built despite the low popularity of Foursquare amongst locals. Our findings from the field trial of the HotCity service yield several useful insights and important contributions. We found that using a heatmap as an intermediate layer of environmental context does not negatively affect the experience of users at the cognitive level, compared with a more traditional map-and-POI type of interface in which temporal aspects of context are not present. In the concluding sections, we discuss how this cloud-based service can also be used on a variety of ubiquitous computing platforms.
Martians from outer space: experimenting with location-aware cooperative multiplayer gaming on public displays BIBAFull-Text 53
  Jukka Holappa; Tommi Heikkinen; Elina Roininen
In this paper, we describe our experiences with location-aware cooperative multiplayer game on public displays. The game world is modelled after the city of Oulu, Finland where players protect the city from a Martian invasion. We investigate the potential of the used platform, the effects of locality and how a more complex and challenging gaming experience on public displays is received. We demonstrate that locality does have a significant effect on the game-play especially when the player can actually see the familiar surroundings in the game world. We also show that while the use of the different services vary a lot from place to place, our game can maintain a very good ranking when compared to other, more casual games.