[1]
Exploring current practices for battery use and management of smartwatches
Smart watches
/
Min, Chulhong
/
Kang, Seungwoo
/
Yoo, Chungkuk
/
Cha, Jeehoon
/
Choi, Sangwon
/
Oh, Younghan
/
Song, Junehwa
Proceedings of the 2015 International Symposium on Wearable Computers
2015-09-07
p.11-18
© Copyright 2015 ACM
Summary: As smartwatches emerge as a popular class of wearable device, a number of
commercial models have been released and are widely used. While many people
have concerns about smartwatch battery life, there has been no systematic study
of how smartwatches are primarily used, how long their batteries last, or how
real users discharge and recharge them. Accordingly, we know little about
current practices for smartwatch battery use and management. To address this,
we conduct an online survey of the usage behaviors of 59 smartwatch users and
an in-depth analysis of battery usage data from 17 Android Wear smartwatch
users. Through the survey and data analysis, we investigate the unique
characteristics of smartwatch battery usage, users' satisfaction and concerns,
and recharging patterns.
[2]
Simulation of an Affordance-Based Human-Machine Cooperative Control Model
Using an Agent-Based Simulation Approach
HCI in Business, Industry and Innovation
/
Oh, YeongGwang
/
Ju, IkChan
/
Kim, Namhun
HCI International 2015: 17th International Conference on HCI, Part III:
Users and Contexts
2015-08-02
v.3
p.226-237
Keywords: Human and robot collaboration; Affordance theory; Agent-based simulation
© Copyright 2015 Springer International Publishing Switzerland
Summary: An automated system relies mostly on a robot rather than a human operator.
In the automated system considered in this paper, the human operator mainly
verifies product quality, and the operator's performance is affected by his or
her individual characteristics. Because these characteristics are dynamic and
vary significantly with time and environment, agent-based modeling (ABM) is
better suited than discrete-event simulation (DES) for simulating the role of
the human operator. This paper presents a combined DES-ABM model that simulates
the performance of a human operator in a human-machine cooperative environment,
which may serve as a basis for further development of controllers for
supervisory control.
[3]
Novel Method for Notification from Interactive Smart Cover
Smart Devices, Objects and Materials
/
Oh, Young Hoon
/
Ju, Da Young
DAPI 2014: 3rd International Conference on Distributed, Ambient, and
Pervasive Interactions
2015-08-02
p.437-448
Keywords: Notification; Interactive; Accessory; Appcessory; Cover
© Copyright 2015 Springer International Publishing Switzerland
Summary: Traditional interaction methods on mobile devices often cause notification
stress. Several research projects have attempted software-based solutions, but
these are not always sufficient. In this design work, we propose a new
interaction method using an Interactive Smart Cover, a mobile device accessory
that adds a new notification channel while also protecting the device. We
extend its potential to future devices such as smartwatches, and discuss the
accessory's future applicability and limitations.
[4]
Activity Context Integration in Mobile Computing Environments
Location, Motion and Activity Recognition
/
Oh, Yoosoo
DAPI 2014: 3rd International Conference on Distributed, Ambient, and
Pervasive Interactions
2015-08-02
p.527-535
Keywords: Activity recognition; Context integration; Embedded middleware
© Copyright 2015 Springer International Publishing Switzerland
Summary: In this paper, we propose an approach to activity context integration as a
means of evaluating semantic information by integrating situational
information from the heterogeneous sensors in a smartphone. The proposed
activity context integration provides a foundation for interacting with
situation-aware mobile computing systems. Moreover, we develop a context-aware
embedded middleware that generates high-level integrated contexts through the
fusion of a smartphone's internal and external sensors. The proposed system
extracts semantic information such as the user's activities.
[5]
Simplified Expressive Mobile Development with NexusUI, NexusUp, and
NexusDrop
Papers: Networked Wireless Systems
/
Taylor, Benjamin
/
Allison, Jesse
/
Holmes, Daniel
/
Conlin, William
/
Oh, Yemin
NIME 2014: New Interfaces for Musical Expression
2014-06-30
p.39
© Copyright 2014 Authors
Summary: Developing for mobile and multimodal platforms is more important now than
ever, as smartphones and tablets proliferate and mobile device orchestras
become commonplace. We detail NexusUI, a JavaScript framework that enables
rapid prototyping and development of expressive multitouch electronic
instrument interfaces within a web browser.
[6]
Demos
NIME 2014: New Interfaces for Musical Expression
2014-06-30
p.66
© Copyright 2014 Authors
3DinMotion -- A mocap based interface for real time visualisation and sonification of multi-user interactions
+ Renaud, Alain
+ Charbonnier, Caecilia
+ Chagué, Sylvain
A Simple Architecture for Server-based (Indoor) Audio Walks
+ Resch, Thomas
+ Krebs, Matthias
Manhattan: End-User Programming for Music
+ Nash, Chris
Musical Instrument Mapping Design with Echo State Networks
+ Kiefer, Chris
Optical Measurement of Acoustic Drum Strike Locations
+ Sokolovskis, Janis
+ McPherson, Andrew
Simplified Expressive Mobile Development with NexusUI, NexusUp, and NexusDrop
+ Taylor, Benjamin
+ Allison, Jesse
+ Holmes, Daniel
+ Conlin, William
+ Oh, Yemin
Soundbeam
+ Hutchins, Charles
Tangible Scores: Shaping the Inherent Instrument Score
+ Tomás, Enrique
+ Kaltenbrunner, Martin
Techniques in Swept Frequency Capacitive Sensing: An Open Source Approach
+ Honigman, Colin
+ Hochenbaum, Jordan
+ Kapur, Ajay
Wubbles: a collaborative ephemeral musical instrument
+ Berthaut, Florent
+ Knibbe, Jarrod
[7]
Efficient CPU-GPU work sharing for data-parallel JavaScript workloads
WWW 2014 posters
/
Piao, Xianglan
/
Kim, Channoh
/
Oh, Younghwan
/
Kim, Hanjun
/
Lee, Jae W.
Companion Proceedings of the 2014 International Conference on the World Wide
Web
2014-04-07
v.2
p.357-358
© Copyright 2014 ACM
Summary: Modern web browsers are required to execute many complex, compute-intensive
applications, mostly written in JavaScript. With widespread adoption of
heterogeneous processors, recent JavaScript-based data-parallel programming
models, such as River Trail and WebCL, support multiple types of processing
elements including CPUs and GPUs. However, significant performance gains are
still left on the table since the program kernel runs on only one compute
device, typically selected at kernel invocation. This paper proposes a new
framework for efficient work sharing between CPU and GPU for data-parallel
JavaScript workloads. The work sharing scheduler partitions the input data into
smaller chunks and dynamically dispatches them to both CPU and GPU for
concurrent execution. For four data-parallel programs, our framework improves
performance by up to 65% with a geometric mean speedup of 33% over GPU-only
execution.
[8]
EDITED BOOK
Natural Interaction with Robots, Knowbots and Smartphones: Putting Spoken
Dialog Systems into Practice
/
Mariani, Joseph
/
Rosset, Sophie
/
Garnier-Rizet, Martine
/
Devillers, Laurence
2014
p.397
Springer New York
== Spoken Dialog Systems in Everyday Applications ==
Spoken Language Understanding for Natural Interaction: The Siri Experience (3-14)
+ Bellegarda, Jerome R.
Development of Speech-Based In-Car HMI Concepts for Information Exchange Internet Apps (15-28)
+ Hofmann, Hansjörg
+ Silberstein, Anna
+ Ehrlich, Ute
+ Berton, André
+ Müller, Christian
+ Mahr, Angela
Real Users and Real Dialog Systems: The Hard Challenge for SDS (29-36)
+ Black, Alan W.
+ Eskenazi, Maxine
A Multimodal Multi-device Discourse and Dialogue Infrastructure for Collaborative Decision-Making in Medicine (37-47)
+ Sonntag, Daniel
+ Schulz, Christian
== Spoken Dialog Prototypes and Products ==
Yochina: Mobile Multimedia and Multimodal Crosslingual Dialogue System (51-57)
+ Xu, Feiyu
+ Schmeier, Sven
+ Ai, Renlong
+ Uszkoreit, Hans
Walk This Way: Spatial Grounding for City Exploration (59-67)
+ Boye, Johan
+ Fredriksson, Morgan
+ Götze, Jana
+ Gustafson, Joakim
+ Königsmann, Jürgen
Multimodal Dialogue System for Interaction in AmI Environment by Means of File-Based Services (69-77)
+ Ábalos, Nieves
+ Espejo, Gonzalo
+ López-Cózar, Ramón
+ Ballesteros, Francisco J.
+ Soriano, Enrique
+ Guardiola, Gorka
Development of a Toolkit Handling Multiple Speech-Oriented Guidance Agents for Mobile Applications (79-85)
+ Hara, Sunao
+ Kawanami, Hiromichi
+ Saruwatari, Hiroshi
+ Shikano, Kiyohiro
Providing Interactive and User-Adapted E-City Services by Means of Voice Portals (87-98)
+ Griol, David
+ García-Jiménez, María
+ Callejas, Zoraida
+ López-Cózar, Ramón
== Multi-domain, Crosslingual Spoken Dialog Systems ==
Efficient Language Model Construction for Spoken Dialog Systems by Inducting Language Resources of Different Languages (101-110)
+ Misu, Teruhisa
+ Matsuda, Shigeki
+ Mizukami, Etsuo
+ Kashioka, Hideki
+ Li, Haizhou
Towards Online Planning for Dialogue Management with Rich Domain Knowledge (111-123)
+ Lison, Pierre
A Two-Step Approach for Efficient Domain Selection in Multi-Domain Dialog Systems (125-131)
+ Lee, Injae
+ Kim, Seokhwan
+ Kim, Kyungduk
+ Lee, Donghyeon
+ Choi, Junhwi
+ Ryu, Seonghan
+ Lee, Gary Geunbae
== Human-Robot Interaction ==
From Informative Cooperative Dialogues to Long-Term Social Relation with a Robot (135-151)
+ Buendia, Axel
+ Devillers, Laurence
Integration of Multiple Sound Source Localization Results for Speaker Identification in Multiparty Dialogue System (153-165)
+ Nakashima, Taichi
+ Komatani, Kazunori
+ Sato, Satoshi
Investigating the Social Facilitation Effect in Human--Robot Interaction (167-177)
+ Wechsung, Ina
+ Ehrenbrink, Patrick
+ Schleicher, Robert
+ Möller, Sebastian
More Than Just Words: Building a Chatty Robot (179-185)
+ Gilmartin, Emer
+ Campbell, Nick
Predicting When People Will Speak to a Humanoid Robot (187-198)
+ Sugiyama, Takaaki
+ Komatani, Kazunori
+ Sato, Satoshi
Designing an Emotion Detection System for a Socially Intelligent Human-Robot Interaction (199-211)
+ Chastagnol, Clément
+ Clavel, Céline
+ Courgeon, Matthieu
+ Devillers, Laurence
Multimodal Open-Domain Conversations with the Nao Robot (213-224)
+ Jokinen, Kristiina
+ Wilcock, Graham
Component Pluggable Dialogue Framework and Its Application to Social Robots (225-237)
+ Jiang, Ridong
+ Tan, Yeow Kee
+ Limbu, Dilip Kumar
+ Dung, Tran Anh
+ Li, Haizhou
== Spoken Dialog Systems Components ==
Visual Contribution to Word Prominence Detection in a Playful Interaction Setting (241-247)
+ Heckmann, Martin
Label Noise Robustness and Learning Speed in a Self-Learning Vocal User Interface (249-259)
+ Ons, Bart
+ Gemmeke, Jort F.
+ Van hamme, Hugo
Topic Classification of Spoken Inquiries Using Transductive Support Vector Machine (261-267)
+ Torres, Rafael
+ Kawanami, Hiromichi
+ Matsui, Tomoko
+ Saruwatari, Hiroshi
+ Shikano, Kiyohiro
Frame-Level Selective Decoding Using Native and Non-native Acoustic Models for Robust Speech Recognition to Native and Non-native Speech (269-274)
+ Oh, Yoo Rhee
+ Chung, Hoon
+ Kang, Jeom-ja
+ Lee, Yun Keun
Analysis of Speech Under Stress and Cognitive Load in USAR Operations (275-281)
+ Charfuelan, Marcela
+ Kruijff, Geert-Jan
== Dialog Management ==
Does Personality Matter? Expressive Generation for Dialogue Interaction (285-301)
+ Walker, Marilyn A.
+ Sawyer, Jennifer
+ Lin, Grace
+ Wing, Sam
Application and Evaluation of a Conditioned Hidden Markov Model for Estimating Interaction Quality of Spoken Dialogue Systems (303-312)
+ Ultes, Stefan
+ ElChab, Robert
+ Minker, Wolfgang
FLoReS: A Forward Looking, Reward Seeking, Dialogue Manager (313-325)
+ Morbini, Fabrizio
+ DeVault, David
+ Sagae, Kenji
+ Gerten, Jillian
+ Nazarian, Angela
+ Traum, David
A Clustering Approach to Assess Real User Profiles in Spoken Dialogue Systems (327-334)
+ Callejas, Zoraida
+ Griol, David
+ Engelbrecht, Klaus-Peter
+ López-Cózar, Ramón
What Are They Achieving Through the Conversation? Modeling Guide--Tourist Dialogues by Extended Grounding Networks (335-341)
+ Mizukami, Etsuo
+ Kashioka, Hideki
Co-adaptation in Spoken Dialogue Systems (343-353)
+ Chandramohan, Senthilkumar
+ Geist, Matthieu
+ Lefèvre, Fabrice
+ Pietquin, Olivier
Developing Non-goal Dialog System Based on Examples of Drama Television (355-361)
+ Nio, Lasguido
+ Sakti, Sakriani
+ Neubig, Graham
+ Toda, Tomoki
+ Adriani, Mirna
+ Nakamura, Satoshi
A User Model for Dialog System Evaluation Based on Activation of Subgoals (363-374)
+ Engelbrecht, Klaus-Peter
Real-Time Feedback System for Monitoring and Facilitating Discussions (375-387)
+ Sarda, Sanat
+ Constable, Martin
+ Dauwels, Justin
+ Dauwels (Okutsu), Shoko
+ Elgendi, Mohamed
+ Mengyu, Zhou
+ Rasheed, Umer
+ Tahir, Yasir
+ Thalmann, Daniel
+ Magnenat-Thalmann, Nadia
Evaluation of Invalid Input Discrimination Using Bag-of-Words for Speech-Oriented Guidance System (389-397)
+ Majima, Haruka
+ Torres, Rafael
+ Kawanami, Hiromichi
+ Hara, Sunao
+ Matsui, Tomoko
+ Saruwatari, Hiroshi
+ Shikano, Kiyohiro
[9]
User Guiding Information Supporting Application for Clinical Procedure in
Traditional Medicine
Complex Information Environments
/
Jang, Hyunchul
/
Oh, Yong-Taek
/
Kim, Anna
/
Kim, Sang Kyun
HIMI 2013: Human Interface and the Management of Information, Part II:
Information and Interaction for Health, Safety, Mobility and Complex
Environments
2013-07-21
v.2
p.100-109
Keywords: User guiding; Decision support; Ontology; Traditional medicine; Korean
medicine
© Copyright 2013 Springer-Verlag
Summary: Medical diagnostic procedures generally comprise a step of collecting
patients' symptoms, a step of making diagnostic decisions, and a step of
selecting appropriate methods of treatment. In traditional medical treatment
based on analogical inference, analyzing the symptoms collected so far and
choosing which symptoms to query next are critically important for diagnosis
and are essential conditions for appropriate treatment. Information systems
that present the diversity of symptom information and the available options for
the next step can prevent timely and useful knowledge from being missed during
these procedures. We have developed an application whose user interfaces guide
clinicians through various analytic cases and their next optional choices,
allowing them to improve the efficiency of their procedures. By analyzing data
semantically linked to symptoms, the application supports the efficient
collection of symptoms and selection of treatment methods. These interfaces
help users by requiring minimal operation while presenting diverse
possibilities.
[10]
NEXUS: Collaborative Performance for the Masses, Handling Instrument
Interface Distribution through the Web
Session 1: Performance (1)
/
Allison, Jesse
/
Oh, Yemin
/
Taylor, Benjamin
NIME 2013: New Interfaces for Musical Expression
2013-05-27
p.1
Keywords: NIME, distributed performance systems, Ruby on Rails, collaborative
performance, distributed instruments, distributed interface, HTML5, browser
based interface
© Copyright 2013 Authors
Summary: Distributed performance systems present many challenges to the artist:
managing performance information, distributing and coordinating interfaces
across many users, and providing cross-platform support so that the widest
possible user base enjoys a reasonable level of interaction.
Now that many features of HTML5 are implemented, powerful browser-based
interfaces can be distributed across a variety of static and mobile devices.
The authors propose leveraging a web application to handle the distribution of
user interfaces and to pass interactions via OSC to and from realtime
audio/video processing software. Interfaces developed in this fashion can reach
potential performers by delivering a unique user interface to any device with a
browser, anywhere in the world.
[11]
NEXUS: Collaborative Performance for the Masses, Handling Instrument
Interface Distribution through the Web
Demos (3)
/
Allison, Jesse
/
Oh, Yemin
/
Taylor, Benjamin
NIME 2013: New Interfaces for Musical Expression
2013-05-27
p.131
[12]
A Study on the Operator's Erroneous Responses to the New Human Interface of
a Digital Device to be Introduced to Nuclear Power Plants
Part IV / Cognitive and Psychological Issues in HCI
/
Oh, Yeon Ju
/
Lee, Yong Hee
/
Yun, Jong Hun
HCI International 2011: 14th International Conference on HCI - Posters'
Extended Abstracts, Part I
2011-07-09
v.5
p.337-341
Keywords: human error; EEG; ECG; nuclear power plant; human interface
Copyright © 2011 Springer-Verlag
Summary: It is extremely difficult to fully investigate the defects in digital
devices, and to prevent human errors in their interfaces, during the design
stage of nuclear power plants (NPPs). Human interface errors have been
investigated through usability studies and human reliability analysis (HRA),
and several methods and programs are available for preventing human errors.
However, quantitative usability approaches are of limited value in explaining
the detailed mechanisms of human error. Therefore, we define the Error Segment
(ES) and Interaction Segment (IS) to predict a specific human error potential
(HEP) in a digital device and its human interface. In this study, the predicted
HEP is verified by experiments including analysis of EEG and ECG data and
behavioral observations. The HEP in the human interface of a digital device can
thus be considered more carefully when preventing human errors in NPPs.
[13]
Foundation of a New Digital Ecosystem for u-Content: Needs, Definition, and
Design
Developing Virtual and Mixed Environments
/
Oh, Yoosoo
/
Duval, Sébastien
/
Kim, Sehwan
/
Yoon, Hyoseok
/
Ha, Taejin
/
Woo, Woontack
VMR 2011: 4th International Conference on Virtual and Mixed Reality, Part
II: Systems and Applications
2011-07-09
v.2
p.377-386
Copyright © 2011 Springer-Verlag
Summary: In this paper, we analyze and classify digital ecosystems to demonstrate the
need for a new digital ecosystem, oriented towards contents for ubiquitous
virtual reality (U-VR), and to identify appropriate designs. First, we survey
the digital ecosystems, explore their differences, identify unmet challenges,
and consider their appropriateness for emerging services tightly linking real
and virtual (i.e. digital) spaces. Second, we define a new type of content
ecosystem (u-Content ecosystem) and describe its necessary and desirable
features. Finally, the results of our analysis show that our proposed ecosystem
surpasses the existing ecosystems for U-VR applications and contents.
[14]
Development of Web-Based Participatory Trend Forecasting System: urtrend.net
Human Centered Design Methods and Tools
/
Jung, Eui-Chul
/
Lee, SoonJong
/
Chung, HeeYun
/
Kim, BoSup
/
Lee, HyangEun
/
Oh, YoungHak
/
Cho, YounWoo
/
Ra, WoongBae
/
Kwon, HyeJin
/
Lee, June-Young
HCD 2011: 2nd International Conference on Human Centered Design
2011-07-09
p.65-73
Keywords: Participatory System Design; Web 2.0; Trend Forecasting System
Copyright © 2011 Springer-Verlag
Summary: The goal of this research is to develop a participatory system that can
capture live trend issues and people's latent needs within those issues. Web
2.0 technology is adopted because an open, sharable information platform is
important for this development. The urtrend.net is built from three
sub-systems: an issue monitoring & generation system, an imagination &
creation system, and a value-finding system. This paper focuses on the
development of the first two. Using System 1, trend-related data are gathered
and analyzed to extract emerging trend issues in our lives. Using System 2,
people can freely join public discussions of the issues identified by System 1.
System 3 will be developed to analyze people's discussions and provide deep
insights for designers. The urtrend.net enables designers and planners to be
more creative and innovative because the system produces more sophisticated
trend information backed by rich and informative resources.
[15]
Vision-based Korean Manual Alphabet recognition game for beginners
Posters
/
Oh, Young-Joon
/
Jung, Keechul
Proceedings of the 2008 International Conference on Advances in Computer
Entertainment Technology
2008-12-03
p.417
© Copyright 2008 ACM
Summary: The Korean Manual Alphabet (KMA) corresponds to the vocabulary of Korean
Sign Language (KSL); in the deaf community, people use the KMA to spell out
each letter of a word, such as a newly coined word that has no established sign
[1]. Hearing people usually do not know or understand KSL words, but they can
use the KMA to communicate simply with the deaf without learning complex sign
languages. Min et al. developed a glove-based KMA recognition system using
Bluetooth [2]. In this paper, we propose a vision-based KMA recognition game
interface that uses a low-cost Universal Serial Bus (USB) camera. Our aim is to
let beginners learn KMA letters easily while playing a game. The system detects
a user's hand in captured images and identifies the falling Korean Alphabet
(KA) letter in the game that corresponds to the signed KMA letter. We evaluated
the capabilities of the proposed system with respect to the convenience and
reliability it offers users.