
Proceedings of the 2012 Asia Pacific Conference on Computer Human Interaction

Fullname: Proceedings of the 10th Asia Pacific Conference on Computer Human Interaction
Editors: Kentaro Go; Mitsuhiko Karashima; Shin'ichi Fukuzumi; Xiangshi Ren
Location: Matsue-city, Shimane, Japan
Dates: 2012-Aug-28 to 2012-Aug-31
Standard No: ISBN 978-1-4503-1496-1; hcibib: APCHI12
  1. Haptic and model
  2. Game
  3. Interaction by hand and foot
  4. Usability and text entry
  5. Social
  6. 3D pointing
  7. Ergonomics design
  8. Robot and agents
  9. Multimodal
  10. Gesture and user experience
  11. Robot and VR
  12. Pen and UI design
  13. Home
  14. Elderly
  15. Touch
  16. UI design and framework
  17. Planning and measuring
  18. 3D
  19. UX / design
  20. CMC / CSCW

Haptic and model

An investigation of the relationship between texture and human performance in steering tasks BIBAFull-Text 1-6
  Minghui Sun; Xiangshi Ren; Shumin Zhai; Toshiharu Mukai
The steering law is a fundamental model of steering tasks. Many researchers have investigated it with respect to input device, task difficulty, subjective bias, scale effects, and so on. However, there has been little study of the effect of the surface environment, especially the texture of the interaction surface. In this paper, we experimentally investigated users' performance with various surface textures in steering tasks. Five common but different materials were used to provide different textures. Several potential factors of friction are considered in this study. The results showed that texture has no significant effect on movement time: users naturally and dynamically adjust their force to suit different textures. Within a limited range, the smoother the surface, the more trajectory errors occurred. Our evaluation also showed that different textures can significantly affect user satisfaction.
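For reference, the steering law the abstract builds on is Accot and Zhai's well-established model (not a contribution of this paper): movement time T depends on the path C and its width profile W(s),

```latex
T = a + b \int_{C} \frac{\mathrm{d}s}{W(s)}
```

For a straight tunnel of length A and constant width W this reduces to T = a + b(A/W), with a and b fitted empirically per device and condition.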


Game

Enhancing collaboration in tabletop board game BIBAFull-Text 7-10
  Taoshuai Zhang; Jie Liu; Yuanchun Shi
By combining a tabletop and mobile phones, we introduce a mechanism comprising private, public, and group workspaces for computer-mediated tabletop board games. It can sustain the important sociality between players while ensuring privacy and enhancing visual effects. Based on the popular board game Monopoly, we designed Copoly on a multi-touch tabletop and mobile phones, where players can form groups to collaborate. A qualitative and quantitative user study was conducted to explore patterns of collaboration and their effect on the tabletop game experience. The results indicated that social bonding played an important role in the frequency and pattern of collaboration in tabletop games, and that players gained a more joyful experience through competition and collaboration.
The influence of cooperative game design patterns for remote play on player experience BIBAFull-Text 11-20
  Anastasiia Beznosyk; Peter Quax; Karin Coninx; Wim Lamotte
The collaborative nature of many modern multiplayer games raises many questions in cooperative game design. We address one of them in this paper by analyzing cooperative game patterns in remote gameplay in order to identify the benefits and drawbacks of each. With the help of a user experiment, we analyzed player experience in a set of existing cooperative patterns for games played remotely without communication. By comparing patterns supporting closely- and loosely-coupled collaboration, we discovered that the first type provided a more enjoyable experience but introduced additional challenges when communication is lacking. By analyzing patterns for both closely- and loosely-coupled interaction, we determined the most beneficial pattern within each type. We conclude with the results of a pattern comparison in co-located and remote setups.

Interaction by hand and foot

Novel interaction techniques based on a combination of hand and foot gestures in tabletop environments BIBAFull-Text 21-28
  Nuttapol Sangsuriyachot; Masanori Sugimoto
Interactive tables, or tabletop devices, employ multi-finger gestures to interact with digital content on a table's surface. Many studies have confirmed the convenience and intuitiveness of multi-finger gestures performed with the hands. However, there are still some tasks that users cannot perform effectively via two-handed or multi-finger gestures. Given that the feet are occasionally used in the real world to support the hands in complex tasks such as driving a car, we considered that it might be useful to combine foot gestures with hand gestures to enhance user interactions with tabletop environments.
   In this study, we developed a high-resolution foot sensing platform based on multi-touch techniques known as frustrated total internal reflection and diffused illumination. We then used the device to study the effect of combining hand and foot gestures on tabletop systems by using a 3D drawing application. We conducted user evaluations to compare foot gestures and identified which gestures were most comfortable for performing a 3D model rotation task. We also compared the performance in a 3D drawing task when using only hand gestures with the performance when using hand and foot gestures together. Finally, we discussed how hand and foot gesture combination techniques could provide new user experiences in tabletop environments.
A comparison of flick and ring document scrolling in touch-based mobile phones BIBAFull-Text 29-34
  Huawei Tu; Feng Wang; Feng Tian; Xiangshi Ren
This study quantitatively analyzed the performance of two scrolling techniques (flick and ring) for document navigation on touch-based mobile phones with three input methods (index finger, pen, and thumb). Our findings were as follows: (1) overall, for the three input methods, flick resulted in shorter movement time and fewer crossings than ring, suggesting that flick is superior to ring for document navigation on touch-based mobile phones; (2) for pen and thumb input, there were interaction effects between scrolling technique and target distance: ring led to shorter movement time than flick for large target distances. This finding indicates that ring has a potential interaction advantage that should be explored further in future scrolling technique design; (3) both flick and ring document scrolling on touch-based mobile phones can be modeled by the Anderson model [2]. We believe these findings offer several insights for the design of scrolling techniques for document navigation on touch-based mobile phones.

Usability and text entry

Tag-based interaction in online and mobile banking: a preliminary study of the effect on usability BIBAFull-Text 35-40
  Rajinesh Ravendran; Ian MacColl; Michael Docherty
In this paper we describe tag-based interaction afforded by a tag-based interface in online and mobile banking, and present our preliminary usability evaluation findings. We conducted a pilot usability study with a group of banking users, comparing the present 'conventional' interface with the tag-based interface. The results show that participants perceive the tag-based interface as more usable in both online and mobile contexts. Participants also rated the tag-based interface better despite their unfamiliarity with it, and perceived it as more user-friendly. Additionally, the results highlight that tag-based interaction is more effective in the mobile context, especially for inexperienced mobile banking users. This in turn could have a positive effect on the adoption and acceptance of mobile banking in general, and in Australia specifically. We discuss our findings in more detail in the later sections of this paper and conclude with a discussion of future work.


Social

Social life logging: can we describe our own personal experience by using collective intelligence? BIBAFull-Text 41-50
  Koh Sueda; Henry Been-Lirn Duh; Jun Rekimoto
The famous Gestalt psychologist Kurt Koffka stated, "The whole is other than the sum of its parts." Similarly, collective intelligence such as social tagging exposes a social milieu that cannot be obtained from the descriptions of each individual. Previous automatic (or passive) life logging projects mainly focused on recording individual life activity; however, it is sometimes difficult to recollect a situation from one's own logs alone. In this project, we propose a social life logging system called "KiokuHacker" (kioku means memory in Japanese) that encourages users to describe their life activity by using a massive amount of processed, geotagged social tagging from the Internet. The results of a one-year user test show not only that our social life logging system encourages reminiscence that users cannot recollect by themselves, but also that it evokes reminiscence not directly related to the tags/scenes the system displayed.

3D pointing

Select ahead: efficient object selection technique using the tendency of recent cursor movements BIBAFull-Text 51-58
  Soonchan Park; Seokyeol Kim; Jinah Park
The virtual hand is one of the most intuitive metaphors for object selection in virtual environments because of its natural mapping between the user's action input and the cursor. However, it requires lengthy cursor manipulation for object selection, which directly increases the user's workload. In this paper, we propose 'Select Ahead', a new object selection technique that improves efficiency by reducing physical workload. Select Ahead guides the user to select a distant object along the estimated tendency of recent cursor movements. We evaluate the relative performance of Select Ahead through experiments in a 3D virtual environment with various object densities. The results show that Select Ahead significantly reduces the length of cursor movements compared to the 3D point cursor and the 3D bubble cursor, regardless of object density. In terms of total selection time, Select Ahead outperforms the 3D point cursor and shows no significant difference from the 3D bubble cursor.
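The abstract does not specify the estimator used. As a rough illustration of the general idea of selecting along the tendency of recent cursor movements, here is a minimal 2D sketch (the actual technique operates in 3D; the cone angle, helper names, and trail representation are our own assumptions):

```python
import math

def movement_direction(trail):
    """Overall direction of the recent cursor samples (unit vector).

    Assumes the cursor has actually moved over the trail.
    """
    dx = trail[-1][0] - trail[0][0]
    dy = trail[-1][1] - trail[0][1]
    n = math.hypot(dx, dy)
    return (dx / n, dy / n)

def select_ahead(trail, objects, max_angle_deg=20.0):
    """Pick the nearest object lying within a cone ahead of the motion.

    Returns None when no object falls inside the angular cone.
    """
    ux, uy = movement_direction(trail)
    cx, cy = trail[-1]
    best, best_dist = None, float("inf")
    for ox, oy in objects:
        vx, vy = ox - cx, oy - cy
        dist = math.hypot(vx, vy)
        if dist == 0:
            return (ox, oy)  # cursor already on the object
        cos_a = (vx * ux + vy * uy) / dist
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        if angle <= max_angle_deg and dist < best_dist:
            best, best_dist = (ox, oy), dist
    return best
```

For example, a cursor moving right along the x-axis would pre-select an object straight ahead while ignoring one off at 45 degrees.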
3D object selection for hand-held auto-stereoscopic display BIBAFull-Text 59-66
  Euijai Ahn; Hyunseok Yang; Gerard Kim
Interacting in a small (mobile) auto-stereoscopic display can be difficult because of the lack of accurate tracking of an interaction proxy, and having to maintain a fixed viewpoint and adapt to a different level of depth perception sensitivity. In this paper, we first propose to modify a standard stylus into a mechanical chain with joint sensors for 3D tracking. We also investigate a way to assist the user in selecting an object in the small phone space through supplementary multimodal feedback, such as sound and tactility. We have carried out an experiment comparing the effects of various combinations of multimodal feedback to object selection performance.
An exploration of interaction styles in mobile devices for navigating 3D environments BIBAFull-Text 309-313
  Hai-Ning Liang; James Trenchard; Myron Semegen; Pourang Irani
Large displays are becoming more ubiquitous, but often only present passive information to passersby (e.g., about the 3D layouts and maps of buildings). To improve users' experience, museums and similar places could provide a system in which users interactively navigate maps of these large public buildings to browse quickly what is available and plan trips that are efficient and more enjoyable. Personal touch-based mobile devices can be used effectively as input devices, allowing for opportunistic and serendipitous user interaction. In this paper, we explore the coupling of mobile devices to large displays. We present three interaction styles that enable users to navigate 3D environments and describe the results of a usability study with the three styles. The results of our study indicate that users prefer a combination of two styles, one supporting discrete, precise motions and the other fluid, continuous movements.

Ergonomics design

Extending "out of the body" saltation to 2D mobile tactile interaction BIBAFull-Text 67-74
  Youngsun Kim; Jaedong Lee; Gerard Kim
Funneling and saltation are two main perceptual illusion techniques for vibro-tactile feedback. They are often used to minimize the number of vibrators to be worn on the body and thereby build a less cumbersome and expensive feedback device. Recently, these techniques have been found to elicit an "out of the body" experience, i.e., the feeling of phantom sensations indirectly on a hand-held object. This paper explores the practical applicability of this theoretical result to mobile tactile interaction. Two psychophysical experiments were run to validate: (1) the 1D saltation effect through a hand-held smart phone, and (2) the effect of a saltation-based approach to 2D phantom sensation elicitation. Experimental results first confirmed the same "out of the body" saltation effect in 1D, originally tested on a metallic ruler by Miyazaki [15], on an actual mobile device. In addition, 2D modulated phantom sensation with a resolution of 5 x 3 on a 3.5 inch display space was achieved with saltation-based stimulation.
The groovepad: ergonomic integration of isotonic and elastic input for efficient control of complementary subtasks BIBAFull-Text 75-84
  Alexander Kulik; André Kunert; Anke Huckauf; Bernd Froehlich
The Groovepad is an input device that uses the physical frame of a regular touchpad as an elastic force sensor to permit additional rate-control input. The two independent input sensors can be used separately, but facilitate frequent and fluent switching between position-controlled and rate-controlled interaction techniques.
   We studied the usability of the Groovepad in pointing, panning, and dragging tasks. Our observations indicate that the use of the two input sensors for the same functionality (e.g., cursor control) can result in a decision dilemma, which adversely affects performance. As an alternative, we propose to use both sensors for complementary subtasks. For example, we performed workspace panning with the elastic frame of the Groovepad, while cursor motion was operated with the touchpad. This particular mapping possesses the compelling property that the frame of the touchpad serves as a tactile reference of the visual workspace. A user study revealed that our approach was preferred and performed significantly better than techniques that only used touchpad input.
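The complementary mapping described above (touchpad position drives the cursor, elastic frame force drives workspace panning rate) can be illustrated with a minimal update step. The gains, units, and function names below are our own assumptions for illustration, not values from the paper:

```python
def update(state, touch_delta, frame_force, dt,
           gain_pos=1.0, gain_rate=300.0):
    """One input-handling step for a Groovepad-style device.

    state:       (cursor_x, cursor_y, pan_x, pan_y)
    touch_delta: isotonic input -- finger displacement on the touchpad
    frame_force: elastic input  -- normalized force on the pad's frame
    Position control moves the cursor; rate control pans the workspace.
    """
    cx, cy, px, py = state
    cx += gain_pos * touch_delta[0]      # position-controlled cursor
    cy += gain_pos * touch_delta[1]
    px += gain_rate * frame_force[0] * dt  # rate-controlled panning
    py += gain_rate * frame_force[1] * dt
    return (cx, cy, px, py)
```

Because the two sensors feed separate subtasks, the frame doubles as a tactile boundary: pushing against it pans, lifting off leaves the cursor untouched.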

Robot and agents

Pygmy: a ring-shaped robotic device that promotes the presence of an agent on human hand BIBAFull-Text 85-92
  Masa Ogata; Yuta Sugiura; Hirotaka Osawa; Michita Imai
The human hand is an appropriate body part on which to attach an agent robot. Pygmy is an anthropomorphic device that produces a presence on the human hand by magnifying finger expressions. The device is a trial toward developing an interaction model of an agent on the hand. It is based on the concept of hand anthropomorphism and uses finger movements to create the anthropomorphic effect. Wearing the device is similar to having eyes and a mouth on the hand; the wearer's hand spontaneously expresses the agent's presence with the emotions conveyed by the eyes and mouth. Interactive manipulation by controllers and sensors makes the hand look animated. We observed that the character animated with the device elicited user collaboration and interaction as though there were a living thing on the user's hand. Further, users played with the device, treating the characters animated with Pygmy as their doubles.
Motion design of an interactive small humanoid robot with visual illusion BIBAFull-Text 93-100
  Hidenobu Sumioka; Takashi Minato; Kurima Sakai; Shuichi Nishio; Hiroshi Ishiguro
We propose a method that enables users to convey nonverbal information, especially gestures, through a portable robot avatar, based on illusory motion. The illusory motion of head nodding is realized with blinking lights on a human-like mobile phone called Elfoid. Two blinking patterns of LEDs are designed, based on biological motion and on illusory motion from shadows. The patterns are compared in order to select an appropriate pattern for the illusion of motion, in terms of naturalness of movement and quickness of perception. The results show that illusory motion performs better than biological motion. We also test whether the illusory motion of head nodding provides a positive effect compared with merely blinking lights. In the experiments, subjects engaged in a role-playing game were asked to complain to Elfoid about an unpleasant situation. The results show that subjects' frustration was eased by Elfoid's illusory head nodding.


Multimodal

Empirical study of a vision-based depth-sensitive human-computer interaction system BIBAFull-Text 101-108
  Farzin Farhadi-Niaki; Reza GhasemAghaei; Ali Arya
This paper presents the results of a user study on a vision-based, depth-sensitive input system for performing typical desktop tasks through arm gestures. We developed a vision-based HCI prototype for a comprehensive usability study. Using the Kinect 3D camera and the OpenNI software library, we implemented our system with high stability and efficiency by reducing ambient disturbances such as noise and dependency on lighting conditions. In our prototype, we designed an algorithm using the NITE toolkit to recognize arm gestures. Finally, through a comprehensive user experiment we compared natural arm gestures to conventional input devices (mouse/keyboard), for simple and complicated tasks, and in two different situations (small- and big-screen displays), measuring precision, efficiency, ease of use, pleasantness, fatigue, naturalness, and overall satisfaction, to verify the following hypothesis: on a WIMP user interface, gesture-based input is superior to mouse/keyboard when using a big screen. Our empirical investigation also showed that gestures are more natural and pleasant to use than mouse/keyboard. However, arm gestures can cause more fatigue than a mouse.
Brain–computer interface (BCI): is it strictly necessary to use random sequences in visual spellers? BIBAFull-Text 109-118
  Manson Cheuk-Man Fong; James William Minett; Thierry Blu; William Shi-Yuan Wang
The P300 speller is a standard paradigm for brain–computer interfacing (BCI) based on electroencephalography (EEG). It exploits the fact that the user's selective attention to a target stimulus among a random sequence of stimuli enhances the magnitude of the P300 evoked potential. The present study questions the necessity of using random sequences of stimulation. In two types of experimental runs, subjects attended to a target stimulus while the stimuli, four in total, were each intensified twelve times, in either random order or deterministic order. The 32-channel EEG data were analyzed offline using linear discriminant analysis (LDA). Similar classification accuracies of 95.3% and 93.2% were obtained for the random and deterministic runs, respectively, using the data associated with 3 sequences of stimulation. Furthermore, using a montage of 5 posterior electrodes, the two paradigms attained identical accuracy of 92.4%. These results suggest that: (a) the use of random sequences is not necessary for effective BCI performance; and (b) deterministic sequences can be used in some BCI speller applications.
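The LDA classifier the abstract refers to is a standard technique. As a hedged sketch (not the authors' pipeline — the two-class setup, the regularization term, and the assumption that EEG epochs have already been flattened into feature vectors are our own simplifications), a minimal Fisher discriminant looks like:

```python
import numpy as np

def fit_lda(X, y):
    """Two-class Fisher LDA: returns weight vector w and bias b.

    X: (n_samples, n_features) feature matrix; y: 0/1 labels.
    """
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class covariance, lightly regularized for stability.
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    Sw += 1e-6 * np.eye(Sw.shape[0])
    w = np.linalg.solve(Sw, m1 - m0)
    b = -0.5 * w @ (m0 + m1)  # threshold at the midpoint projection
    return w, b

def predict(w, b, X):
    """Label 1 when the discriminant score is positive, else 0."""
    return (X @ w + b > 0).astype(int)
```

In a speller, such a discriminant would score each epoch for the presence of a P300 response; the target is the stimulus whose epochs score highest.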

Gesture and user experience

Area gestures for a laptop computer enabled by a hover-tracking touchpad BIBAFull-Text 119-124
  Sangwon Choi; Jiseong Gu; Jaehyun Han; Geehyuk Lee
The touchpad has been the most popular pointing device for laptop computers. As large, multi-touch sensing touchpads are now common, we considered extending their input vocabulary by adding area gestures. To explore this possibility, we constructed a laptop-like mock-up with an optical, proximity-sensing touchpad, and implemented a few area gestures that may be useful in such an environment. We conducted a user test and ran a task walkthrough with a realistic scenario to verify the feasibility of area gestures in a laptop environment.
An interaction system using mixed hand gestures BIBAFull-Text 125-132
  Zhong Yang; Yi Li; Yang Zheng; Weidong Chen; Xiaoxiang Zheng
This paper presents a mixed hand gesture interaction system for virtual environments, in which "mixed" means that static and dynamic hand gestures are combined for both navigation and object manipulation. First, a simple average background model and skin color are used for hand area segmentation. Then a state-based spotting algorithm is employed to automatically identify the two types of hand gestures. A voting-based method is used for quick classification of static gestures, and a hidden Markov model (HMM) is used to recognize dynamic gestures. Since HMM training requires consistency in the training data output by the feature extraction stage, a data-alignment algorithm is proposed. Through our mixed hand gesture system, users can perform complicated operating commands in a natural way. The experimental results demonstrate that our methods are effective and accurate.
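HMM-based dynamic gesture recognition of the kind referenced above typically trains one model per gesture and classifies a feature sequence by picking the model with the highest likelihood. A minimal sketch using the forward algorithm with per-step scaling (the gesture names and toy single-state models below are invented for illustration, not the paper's models):

```python
import math

def forward_loglik(obs, start_p, trans_p, emit_p):
    """Log-likelihood of an observation sequence under one discrete HMM."""
    states = list(start_p)
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    z = sum(alpha.values())
    loglik = math.log(z)
    alpha = {s: a / z for s, a in alpha.items()}  # scale to avoid underflow
    for o in obs[1:]:
        alpha = {
            s: emit_p[s][o] * sum(alpha[p] * trans_p[p][s] for p in states)
            for s in states
        }
        z = sum(alpha.values())
        loglik += math.log(z)
        alpha = {s: a / z for s, a in alpha.items()}
    return loglik

def classify(obs, models):
    """Pick the gesture whose HMM assigns the highest likelihood."""
    return max(models, key=lambda g: forward_loglik(obs, *models[g]))
```

Here `models` maps each gesture name to its `(start_p, trans_p, emit_p)` parameters; in practice the observations would be quantized hand-trajectory features.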

Robot and VR

Robots in my contact list: using social media platforms for human-robot interaction in domestic environment BIBAFull-Text 133-140
  Xiaoning Ma; Xin Yang; Shengdong Zhao; Chi-Wing Fu; Ziquan Lan; Yiming Pu
This paper proposes putting domestic robots as buddies on our contact lists, thereby extending the use of social media from interpersonal interaction to human-robot interaction (HRI). Specifically, we present a robot management system that employs complementary social media platforms for humans to interact with the vacuuming robot Roomba and with a surveillance robot developed in this paper on top of an iRobot Create. The social media platforms adopted include short message service (SMS), instant messaging (MSN), an online shared calendar (Google Calendar), and a social networking site (Facebook). Hence, our system provides a rich set of user-familiar, intuitive, and highly accessible interfaces, allowing users to flexibly choose their preferred tools in different situations. An in-lab experiment and a multi-day field study were conducted to study the characteristics and strengths of each interface, and to investigate users' perceptions of the robots and their behavior in choosing interfaces during the course of HRI.
Facial design for humanoid robot BIBAFull-Text 141-148
  Ichiroh Kanaya; Shoichi Doi; Shohei Nakamura; Kazuo Kawasaki
In this research, the authors succeeded in creating facial expressions made with the minimum elements necessary for recognizing a face. The elements are two eyes and a mouth made from precise circles, which are transformed geometrically, through rotation and vertical scaling, to make facial expressions. The facial expression patterns made by these geometric elements and transformations were composed employing three dimensions of visual information suggested by many previous studies: slantedness of the mouth, openness of the face, and slantedness of the eyes. In addition, the relationships between the affective meanings of the visual information also corresponded to the results of the previous studies.
   The authors found that the facial expressions can be classified into 10 emotions: happy, angry, sad, disgust, fear, surprised, angry*, fear*, neutral (pleasant) indicating positive emotion, and neutral (unpleasant) indicating negative emotion. These emotions were portrayed by different geometric transformations. Furthermore, the authors discovered the "Tetrahedral model," which most clearly expresses the geometric relationships between facial expressions. In this model, each edge connecting the faces is an axis that controls the rotational and vertical scaling transformations of the eyes and mouth.
Empirical evaluation of mapping functions for navigation in virtual reality using phones with integrated sensors BIBAFull-Text 149-158
  Amal Benzina; Arindam Dey; Marcus Toennis; Gudrun Klinker
Mobile phones provide an interesting all-in-one alternative to 3D input devices in virtual environments. Mobile phones are becoming touch sensitive and spatially aware, and they are now a crucial part of our daily activities. We present Phone-Based Motion Control, a novel one-handed travel technique for virtual environments. The technique benefits from the touch capability offered by a growing number of mobile phones to control viewpoint translation in virtual environments, while the orientation of the viewpoint is controlled by the phone's built-in sensors. The travel interaction separates translation (touch-based translation control) and rotation (steer-based rotation control), assigning each set of degrees of freedom (DOF) to a separate interaction technique (separability).
   This paper examines how many DOF are needed to perform the travel task as easily and comfortably as possible. It also investigates different mapping functions between the user's actions on the mobile phone and the viewpoint change in the virtual environment. For that purpose, four techniques are implemented: rotate by heading, rotate by roll, rotate by roll with fixed horizon, and a merged rotation. Each technique has either 4 or 5 degrees of freedom and a different mapping between phone and viewpoint coordinates in the virtual environment. We performed an extensive user study to explore aspects of the travel techniques related to degrees of freedom and mapping functions. Results of the user evaluation show that the 4 DOF techniques seem to perform the travel task better. Although the results were not statistically decisive in favor of using the phone's roll to control the viewpoint heading, despite good results in terms of accuracy and time, there was a clear tendency for users to prefer roll as the desired mapping.
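The abstract does not give the mapping functions themselves. As an illustrative guess at what a "rotate by roll" rate mapping might look like (the dead zone, the 45-degree normalization, and the maximum turn rate are our own assumptions, not values from the paper):

```python
import math

def roll_to_heading(roll_deg, heading_deg, dt,
                    max_rate_deg=90.0, dead_zone_deg=5.0):
    """Rate control: phone roll past a dead zone turns the viewpoint heading.

    roll_deg:    current device roll from the built-in sensors
    heading_deg: current viewpoint heading in the virtual environment
    dt:          frame time in seconds
    """
    if abs(roll_deg) < dead_zone_deg:
        return heading_deg  # small tilts are ignored
    # Normalize roll to [-1, 1] over a 45-degree range, then apply as a rate.
    r = max(-1.0, min(1.0, roll_deg / 45.0))
    return (heading_deg + r * max_rate_deg * dt) % 360.0
```

A rate mapping like this is what makes the dead zone matter: without it, sensor noise near level would make the viewpoint drift continuously.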

Pen and UI design

Building interactive prototypes of mobile user interfaces with a digital pen BIBAFull-Text 159-168
  Clemens Holzmann; Manuela Vogler
Paper prototyping is commonly used to identify usability problems in the early stages of user interface design, but it is not well suited to the evaluation of mobile interfaces. The reason is that mobile applications are used in a rich real-world context, which is hard to emulate with a paper prototype. A more powerful technique is to test the design on a mobile device, but building a functional design prototype requires much more effort. In this paper, we try to get the best of both worlds by building interactive prototypes with a digital pen. We developed a system that allows for sketching a user interface on paper and manually associating the interface elements with functionality. This enables designers to bring their design ideas to paper without any restrictions, define the meaning of selected interface elements, and test them on a mobile device instantly. We conducted a user study in which participants had to design and test a small application with our system. The results provide evidence for the feasibility and positive aspects of our approach, but also show some limitations and missing functionality in its current implementation.
Mode switching techniques through pen and device profiles BIBAFull-Text 169-176
  Huawei Tu; Xing-Dong Yang; Feng Wang; Feng Tian; Xiangshi Ren
In pen-based interfaces, inking and gesturing are two central tasks, and switching from inking to gesturing is an important issue. Previous studies have focused on mode switching in pen-based desktop devices. However, because pen-based handheld devices are smaller and more mobile than pen-based desktop devices, the principles behind mode switching techniques for pen-based desktop devices may not apply to pen-based handheld devices. In this paper, we investigated five techniques for switching between ink and gesture modes on two form factors of pen-based handheld devices: a PDA and a Tablet PC. Two quantitative experiments were conducted to evaluate the performance of these mode switching techniques. Results showed that on the Tablet PC, pressure performed fastest but resulted in the most errors. On the PDA, back tapping offered the fastest performance. Although pressing and holding was significantly slower than the other techniques, it resulted in the fewest errors on both the Tablet PC and the PDA. Pressing a button on the handheld device offered overall fast and accurate performance on both devices.


Home

HomeOrgel: interactive music box for the aural representation of home activities BIBAFull-Text 177-186
  Maho Oki; Koji Tsukada; Kazutaka Kurihara; Itiro Siio
We propose a music-box-type interface, "HomeOrgel", that can express various activities in the home using sound. Users can also control the volume and content using common methods for controlling a music box: opening the cover and winding the spring. Users can hear the sounds of past home activities, such as cooking and the opening/closing of doors with the background music (BGM) mechanism of the music box. We developed the HomeOrgel device and installed it in an actual house. We also verify the effectiveness of our system through evaluation and discussion.
Cooking support with information projection onto ingredient BIBAFull-Text 193-198
  Yu Suzuki; Shunsuke Morioka; Hirotada Ueda
Recipes that once appeared only in cookbooks are being digitized and are now accessible on PCs and mobile devices, including smartphones. Researchers endeavor to provide details of the cooking process in these computerized recipes; however, cooking support systems tailored to novice cooks remain a matter of research. This paper details a cooking support system that specifically takes into consideration the needs of inexperienced cooks. The system provides concrete cooking instructions by superimposing a cutting line and a CG knife over the ingredients. In addition, a conversational robot, "Phyno", provides verbal and gestural support. The system not only provides detailed visual support for cooking novices, but also contributes to their safety. This paper explores the advantages and drawbacks of the system and reflects on its adequacy based on trial evidence.


Elderly

Elderly mental model of reminder system BIBAFull-Text 193-200
  Fariza Hanis Abdul Razak; Rafidah Sulo; Wan Adilah Wan Adnan
The growing number of elderly people is inevitable. As we get older, we experience some memory decline, so assistive technology such as a reminder system is recommended. However, the uptake of reminder systems is still low. Many researchers from Western countries are interested in exploring the use of reminder systems as assistive technology for the elderly. Nevertheless, no research has focused solely on what elderly users actually expect from a reminder system. Hence, this paper attempts to assess and propose an elderly mental model of reminder systems. We conducted a series of studies: an interview, a usability evaluation, and a drawing activity with eight (8) participants. Our results revealed that elderly users expect a reminder system to be simple, familiar, flexible, and recognizable. We also learned that drawing and user studies can be effective methods for assessing a mental model, depending on the type of user group involved in the study.
Restrain from pervasive logging employing geo-temporal policies BIBAFull-Text 201-208
  Mohsin Ali Memon; Jiro Tanaka; Tomonari Kamba
Life logging has been a prominent research concern in recent years with the invention of wearable life capture gadgets, and it has played a significant role in some situations, such as helping Alzheimer's disease patients. At the same time, however, it has raised privacy concerns among ordinary people. At present, life log devices pervasively capture information, including people in the vicinity without their consent. This will become a great concern in the future if the majority of people come to own life log devices that continuously record what is happening around them. In this paper, we propose a mechanism to restrict people from capturing a person in their personal digital diaries in real time by introducing a geo-temporal privacy framework. Furthermore, the system ensures that the unwilling party is not revealed to life logging system users, and privacy is sustained when the geo-temporal framework discontinues the log activity after an encounter with the reluctant party. The prototype is developed on an Android-based smart phone that works as a life log device with a policy controller. The phone is connected to an infrared transmitter/receiver with an interface board for identifying human proximity.
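The abstract does not spell out the policy representation. As a rough sketch of what a geo-temporal "do not log here, now" rule could look like (the zone identifiers and rule format are invented for illustration, not taken from the paper's framework):

```python
from datetime import time

def logging_allowed(policies, zone, now):
    """Return False when the current zone and time match any opt-out rule.

    policies: iterable of (zone_id, start, end) opt-out windows,
    each meaning "suppress capture in zone_id between start and end".
    """
    for policy_zone, start, end in policies:
        if zone == policy_zone and start <= now <= end:
            return False
    return True
```

A policy controller on the phone would consult such rules before committing each capture, dropping frames taken inside an opted-out zone and time window.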


Multi-tapping shortcut: a technique for augmenting linear menus on multi-touch surface BIBAFull-Text 209-218
  Kentaro Go; Hiroki Kasuga
In this paper, we propose the Multi-Tapping Shortcut (MTS), a technique aimed at augmenting linear menus on multi-touch surfaces. We designed this multi-finger two-handed interaction technique in an attempt to overcome limitations of direct pointing on interactive surfaces while maintaining compatibility with traditional interaction techniques. Multi-tapping Shortcuts exploit multi-tapping by simply tapping a finger on the surface several times. This report describes the results of an experimental evaluation of our technique, with comparison to the Radial Stroke Shortcut (RSS) technique. Results show that the mean task completion time with MTS is 21.7% faster than that with RSS. MTS also outperformed RSS in terms of error and some users' assessments of comfort.

UI design and framework

Designing a user interface for a painting application supporting real watercolor painting processes BIBAFull-Text 219-226
  Jiho Yeom; Geehyuk Lee
While research on non-photorealistic rendering provides simulation-based, realistic watercolor painting effects, current digital painting interfaces have yet to offer a realistic watercolor painting experience. This study proposes a digital watercolor painting interface that supports real watercolor painting processes. We evaluated the new interface against a conventional digital watercolor painting interface with respect to effectiveness, efficiency, and satisfaction. The new interface did not differ in efficiency, but was shown to be more effective in that it enabled users to produce more satisfactory paintings than the conventional interface. It was also favored in terms of satisfaction. This result suggests that a user interface supporting real watercolor painting processes is important for the usability of a simulation-based digital watercolor painting system.
Docking window framework: supporting multitasking by docking windows BIBAFull-Text 227-236
  Hirohito Shibata; Kengo Omura
When performing tasks on computers, users work with multiple documents in multiple applications and switch among multiple tasks, each perhaps involving multiple documents. This paper presents the Docking Window Framework, an extended multi-window system supporting such multitasking situations. It enables construction of workspaces comprising multiple windows, with simple switching between workspaces. Whereas previous systems emphasized support for task switching after workspace construction, the proposed system characteristically supports construction of workspaces through a docking user interface. It also supports simultaneous operation of multiple windows, provides a tiled window layout to reduce the overhead of window operations, and supports saving and restoring workspaces. We conducted two experiments to evaluate the system. In window-arrangement tasks, participants performed tasks faster with the proposed system than with a popular window system (Windows XP). Moreover, in task-switching tasks, participants using our system performed multiple tasks in parallel more efficiently.

Planning and measuring

RIM: risk interaction model for vehicle navigation BIBAFull-Text 237-242
  Linmi Tao; Lixia Meng; Fuchun Sun
Interactive auto-driving systems are used for disabled and elderly persons. In such systems, human errors during operation or interaction could lead to serious consequences during motion. A novel human-robot interaction model, termed risk interaction model (RIM), is proposed for quantitative evaluation of the risk for complex interactive systems in terms of human safety. The risk elements for system-human interaction are defined, and quantitative relations among the elements are formalized based on experimental analysis. Extensive experiments are used to validate RIM.


3D

Scan modeling: 3d modeling techniques using cross section of a shape BIBAFull-Text 243-250
  Tatsuhito Oe; Buntarou Shizuki; Jiro Tanaka
In clay modeling, a creator makes a model by using the shapes of objects, including the hands. In contrast, in traditional 3D modeling environments, shapes are assigned by the system a priori; no real-world object's shape is used. In this paper, we present Scan Modeling, in which the creator performs 3D modeling by scanning any real object. To realize Scan Modeling, we developed an input device called "Wakucon", a square-shaped device measuring 245 millimeters per side. The creator scans a cross section of a shape by placing an object inside the device; by moving the Wakucon or the inserted objects spatially while scanning, the creator can perform 3D modeling. In this paper, we describe the interaction techniques and implementation of Scan Modeling. Additionally, we present an evaluation of the accuracy of 3D models reconstructed using the Wakucon.
Spring: a solution for managing the third DOF with tactile interface BIBAFull-Text 251-258
  Robin Vivian; Jérôme Dinet; David Bertolo
Tablets with touch screens are among the most widely used interfaces for three main reasons: effectiveness, efficiency, and the fun side of interaction. Combined with significant increases in computing power, multi-touch terminals can run real-time virtual-world applications for entertainment or learning. This raises a question that was already current with conventional interfaces (the mouse, for example): how can a 2D input interface define designation, orientation, and movement actions, simply and intuitively, on objects in a 3D space that require at least 6 degrees of freedom (6 DOF for the object, and sometimes six more for the camera)? Our study provides a way of managing the depth dimension with a universal interaction principle: screwing/unscrewing. The first part of this paper presents the theoretical framework based on prior studies; the second part describes the formalization of a gesture grammar allowing intuitive management of the depth component.
User-defined surface+motion gestures for 3d manipulation of objects at a distance through a mobile device BIBAFull-Text 299-308
  Hai-Ning Liang; Cary Williams; Myron Semegen; Wolfgang Stuerzlinger; Pourang Irani
One form of input for interacting with large shared surfaces is through mobile devices. These personal devices provide interactive displays as well as numerous sensors for gestural input. We examine the possibility of using surface and motion gestures on mobile devices for interacting with 3D objects on large surfaces. If effective use of such devices over large displays is possible, then users can collaborate and carry out complex 3D manipulation tasks, which are not trivial to do. In an attempt to generate design guidelines for this type of interaction, we conducted a guessability study with a dual-surface concept device, which provides users access to information through both its front and back. We elicited a set of end-user surface- and motion-based gestures. Based on our results, we demonstrate reasonably good agreement among gestures for the choices of sensor-based (i.e., tilt), multi-touch, and dual-surface input. In this paper we report the results of the guessability study and the design of the gesture-based interface for 3D manipulation.

UX / design

How to motivate people to use internet at home: understanding the psychology of non-active users BIBAFull-Text 259-268
  Momoko Nakatani; Takehiko Ohno; Ai Nakane; Akinori Komatsubara; Shuji Hashimoto
Although many Internet services exist that can raise our quality of life, there are still many non-active users who cannot fully enjoy the convenience and potential of the Internet even when they have computers at home. To understand this failure to use the Internet in depth, we conducted a field study and arrived at an integrated model depicting the psychology of active and non-active computer users. Our model enables us to understand the psychology of the users and the external factors affecting them, and sheds light on how non-active users are stuck in a negative loop. Users who received a support service designed on the basis of our model dramatically changed their attitude and started to use the service actively.
Drawing and acting as user experience research tools BIBAFull-Text 269-278
  Alexandre Fleury
This paper discusses the use of participant-generated drawings and drama workshops as user experience research methods. Despite the lack of background literature on how drawings can generate useful insights into HCI issues, drawings have been used successfully in other research fields. Drama workshops, in contrast, seem to be increasingly popular in recent participatory design research. After briefly introducing such previous work, three case studies are presented, illustrating the use of drawing and drama workshops to investigate the relationship between media technology users and two specific devices, namely televisions and mobile phones. The paper focuses on the methods and discusses their benefits and the challenges associated with their application. In particular, the findings are compared to those collected through a quantitative cross-cultural survey. The experience gathered during the three case studies is very encouraging and calls for additional reports of UX evaluations involving drawing- and theatre-based exercises.


Effects of trust on group buying websites in China BIBAFull-Text 279-288
  Na Chen; Pei-Luen Patrick Rau
The research aimed to investigate 1) the factors influencing Chinese customers' trust in and purchasing probability on group buying websites; 2) the differences in trust between B2C and group buying websites; and 3) whether the Theory of Reasoned Action and Gefen's summarization of trust antecedents are applicable to Chinese group buying websites.
   The study consisted of three phases: 1) a pre-questionnaire about general trust in B2C and group buying websites; 2) an in-lab experiment followed by a post-questionnaire after each trust situation; and 3) a short open interview.
   According to the results: 1) cognition-based antecedent trust is the most important factor influencing Chinese customers' trust and purchasing probability on both B2C and group buying websites. 2) Participants show significantly lower general trust beliefs and trust intentions toward group buying websites; however, under the same trust situation, participants show significantly higher purchasing probabilities on group buying websites. 3) The Theory of Reasoned Action and Gefen's summarization of trust antecedents are not applicable to current Chinese group buying. Implications for group buying websites are discussed.
Estimation of conversational activation level during video chat using turn-taking information BIBAFull-Text 289-298
  Yurie Moriya; Takahiro Tanaka; Toshimitu Miyajima; Kinya Fujita
In this paper, we discuss the feasibility of estimating the activation level of a conversation using phonetic and turn-taking features. First, we recorded the voices in conversations of six three-person groups at three different activation levels. Then, we calculated the phonetic and turn-taking features and analyzed the correlation between the features and the activation level. The analysis revealed that response latency, overlap rate, and speech rate correlate with the activation levels and are less sensitive to individual deviation. We then formulated multiple regression equations and examined the estimation accuracy using the analyzed data of the six three-person groups. The results demonstrated the feasibility of estimating the activation level with approximately 18% root-mean-square error (RMSE).