[1]
Empathy Glasses
Late-Breaking Works: Collaborative Technologies
/
Masai, Katsutoshi
/
Kunze, Kai
/
Sugimoto, Maki
/
Billinghurst, Mark
Extended Abstracts of the ACM CHI'16 Conference on Human Factors in
Computing Systems
2016-05-07
v.2
p.1257-1263
© Copyright 2016 ACM
Summary: In this paper, we describe Empathy Glasses, a head-worn prototype designed
to create an empathic connection between remote collaborators. The main
novelty of our system is that it is the first to combine the following
technologies with a focus on remote collaboration: (1) wearable facial
expression capture hardware, (2) eye tracking, (3) a head-worn camera, and
(4) a see-through head-mounted display. Using the system, a local user can
send their information and a view of their environment to a remote helper,
who can send back visual cues on the local user's see-through display to help
them perform a real-world task. A pilot user study was conducted to explore
how effectively the Empathy Glasses supported remote collaboration, and we
describe the implications that can be drawn from this study.
[2]
Facial Expression Recognition in Daily Life by Embedded Photo Reflective
Sensors on Smart Eyewear
Wearable and Mobile IUI 2
/
Masai, Katsutoshi
/
Sugiura, Yuta
/
Ogata, Masa
/
Kunze, Kai
/
Inami, Masahiko
/
Sugimoto, Maki
Proceedings of the 2016 International Conference on Intelligent User
Interfaces
2016-03-07
v.1
p.317-326
© Copyright 2016 ACM
Summary: This paper presents a novel smart eyewear that uses embedded photo
reflective sensors and machine learning to recognize a wearer's facial
expressions in daily life. We leverage the skin deformation that occurs when
wearers change their facial expressions. With small photo reflective sensors,
we measure the proximity between the skin surface of the face and the eyewear
frame, into which 17 sensors are integrated. A Support Vector Machine (SVM)
algorithm was applied to the sensor data. The sensors can cover various
facial muscle movements and can be integrated into everyday glasses. The main
contributions of our work are as follows. (1) The eyewear recognizes eight
facial expressions (92.8% accuracy for one-time use and 78.1% for use on 3
different days). (2) It is designed and implemented with social acceptability
in mind: the device looks like normal eyewear, so users can wear it anytime,
anywhere. (3) Initial field trials in daily life were undertaken. Our work is
one of the first attempts to recognize and evaluate a variety of facial
expressions with an unobtrusive wearable device.
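The recognition pipeline this abstract describes (a 17-dimensional vector of
proximity readings mapped to an expression label) can be sketched as follows.
This is a hypothetical illustration with synthetic sensor values, using a
simple nearest-centroid classifier in place of the paper's SVM; the three
expression labels are invented for the example.

```python
import random

NUM_SENSORS = 17
EXPRESSIONS = ["neutral", "smile", "frown"]

random.seed(0)  # deterministic synthetic data

# One synthetic mean proximity pattern (skin-to-frame distance) per expression.
profiles = {e: [random.uniform(0.0, 1.0) for _ in range(NUM_SENSORS)]
            for e in EXPRESSIONS}

def sample(expression, noise=0.05):
    """Simulate one noisy 17-sensor reading for a given expression."""
    return [v + random.gauss(0.0, noise) for v in profiles[expression]]

def train(labeled):
    """Compute one centroid per expression from labeled readings."""
    centroids = {}
    for label, vecs in labeled.items():
        n = len(vecs)
        centroids[label] = [sum(v[i] for v in vecs) / n
                            for i in range(NUM_SENSORS)]
    return centroids

def classify(centroids, reading):
    """Return the label whose centroid is nearest in squared distance."""
    dist2 = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl], reading))

training = {e: [sample(e) for _ in range(20)] for e in EXPRESSIONS}
model = train(training)
print(classify(model, sample("smile")))  # classifies a fresh noisy reading
```

An SVM would replace the centroid step with a max-margin decision boundary,
which matters once expression classes overlap in sensor space.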
[3]
MARCut: Marker-based Laser Cutting for Personal Fabrication on Existing
Objects
Work-in-Progress
/
Kikuchi, Takashi
/
Hiroi, Yuichi
/
Smith, Ross T.
/
Thomas, Bruce H.
/
Sugimoto, Maki
Proceedings of the 2016 International Conference on Tangible and Embedded
Interaction
2016-02-14
p.468-474
© Copyright 2016 ACM
Summary: Typical personal fabrication using a laser cutter allows objects to be
created from raw material and existing objects to be engraved. With current
methods, precisely aligning an object with the laser is difficult because the
manipulation is indirect. In this paper, we propose a marker-based system as
a novel paradigm for direct interactive laser cutting on existing objects.
Our system, MARCut, performs the laser cutting based on tangible markers that
are applied directly onto the object to express the design. Two types of
markers are available: hand-constructed Shape Markers that represent the
desired geometry, and Command Markers that indicate operational parameters
such as cut, engrave, or material.
[4]
Toward a platform for collecting, mining, and utilizing behavior data for
detecting students with depression risks
Data modeling & information management for pervasive assistive
environments
/
Suzuki, Einoshin
/
Deguchi, Yutaka
/
Matsukawa, Tetsu
/
Ando, Shin
/
Ogata, Hiroaki
/
Sugimoto, Masanori
Proceedings of the 2015 International Conference on PErvasive Technologies
Related to Assistive Environments
2015-07-01
p.26
© Copyright 2015 ACM
Summary: In this paper, we present our plan for constructing a platform for
collecting, mining, and utilizing behavior data to detect students with
depression risks. Unipolar depression makes a large contribution to the
burden of disease, ranking first in middle- and high-income countries. We
survey descriptors of depression and then design a data collection platform
for the classroom, based on the assumption that such descriptors are also
effective for students with depression risks. Visual, acoustic, and
e-learning data are chosen for collection, and various issues including
devices, preprocessing, and consent agreements are investigated. We also show
two kinds of utilization scenarios for the collected data and introduce
several techniques and methods we developed for feature extraction and early
detection.
[5]
BESIDE: Body Experience and Sense of Immersion in Digital Paleontological
Environment
WIP Theme: Gesture and Multimodal
/
Yoshida, Ryuichi
/
Egusa, Ryohei
/
Saito, Machi
/
Namatame, Miki
/
Sugimoto, Masanori
/
Kusunoki, Fusako
/
Yamaguchi, Etsuji
/
Inagaki, Shigenori
/
Takeda, Yoshiaki
/
Mizoguchi, Hiroshi
Extended Abstracts of the ACM CHI'15 Conference on Human Factors in
Computing Systems
2015-04-18
v.2
p.1283-1288
© Copyright 2015 ACM
Summary: We are developing an immersive learning support system for a paleontological
environment within a museum. The system measures the physical movement of the
learner using a Kinect sensor, and provides a sense of immersion in the
paleontological environment by adapting the surroundings according to these
movements. As the first stage of this project, we have developed a prototype
system that allows learners to experience the paleontological environment.
Here, we evaluate the operability of the system, degree of learning support,
and sense of immersion for primary schoolchildren. This paper summarizes the
current system and describes the evaluation results.
[6]
3D FDM-PAM: rapid and precise indoor 3D localization using acoustic signal
for smartphone
Posters
/
Nakamura, Masanari
/
Sugimoto, Masanori
/
Akiyama, Takayuki
/
Hashizume, Hiromichi
Adjunct Proceedings of the 2014 International Joint Conference on Pervasive
and Ubiquitous Computing
2014-09-13
v.2
p.123-126
© Copyright 2014 ACM
Summary: In this paper, we present an indoor 3D positioning method for smartphones
using acoustic signals. In our proposed 3D Frequency Division Multiplexing --
Phase Accordance Method (3D FDM-PAM), four speakers simultaneously emit burst
signals comprising two carrier waves at different frequencies to enable rapid
calculation of the smartphone's position. Through experiments, we show that
3D FDM-PAM can achieve a standard deviation of less than 2.8 cm at 7.8
measurements per second. The worst positioning error was 48.3 cm at the 95th
percentile. We investigate the causes of error and discuss potential
improvements to the localization performance.
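Once the acoustic ranges from each of the four speakers are known, the 3D
position can be recovered by standard multilateration: subtracting one range
equation from the others linearizes the problem into a small linear system.
The sketch below illustrates that step only; the speaker layout and target
point are invented and are not the paper's configuration.

```python
import math

def multilaterate(anchors, dists):
    """Estimate (x, y, z) from four anchor positions and measured ranges.
    Subtracting the first range equation from the others linearizes the
    problem into a 3x3 system, solved here with Cramer's rule."""
    p0, d0 = anchors[0], dists[0]
    A, b = [], []
    for pi, di in zip(anchors[1:], dists[1:]):
        A.append([2.0 * (pi[k] - p0[k]) for k in range(3)])
        b.append(d0 ** 2 - di ** 2
                 + sum(pi[k] ** 2 for k in range(3))
                 - sum(p0[k] ** 2 for k in range(3)))

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(A)
    sol = []
    for col in range(3):
        m = [row[:] for row in A]
        for r in range(3):
            m[r][col] = b[r]
        sol.append(det3(m) / d)
    return tuple(sol)

# Four loudspeakers at known positions (metres), ranges to a point at (1, 2, 1).
speakers = [(0, 0, 0), (4, 0, 0), (0, 4, 0), (0, 0, 4)]
ranges = [math.dist(s, (1.0, 2.0, 1.0)) for s in speakers]
print(multilaterate(speakers, ranges))  # ~ (1.0, 2.0, 1.0)
```

With noisy real measurements one would solve the overdetermined version by
least squares rather than exactly, which is where the reported centimetre-level
standard deviations come from.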
[7]
KIKIWAKE: participatory design of language play game for children to promote
creative activity-based on recognition of Japanese phonology
Wednesday short papers
/
Nakadai, Takahiro
/
Taguchi, Tomoki
/
Egusa, Ryohei
/
Namatame, Miki
/
Sugimoto, Masanori
/
Kusunoki, Fusako
/
Yamaguchi, Etsuji
/
Inagaki, Shigenori
/
Takeda, Yoshiaki
/
Mizoguchi, Hiroshi
Proceedings of ACM IDC'14: Interaction Design and Children
2014-06-17
p.265-268
© Copyright 2014 ACM
Summary: This study proposes a system for supporting the Shotoku Taishi game, a
language play game based on children's voices. The Shotoku Taishi game is a
group game in which several presenters vocalize different words at the same
time and the respondents must guess the combination of words. The authors
developed and implemented a system that uses a microphone array to extract
the voice of a specific presenter in this game. The participants were 36
elementary school students whose native language was Japanese. The results
showed that the participants enjoyed the Shotoku Taishi game and that this
group activity was a creative one that deepened their awareness of the
Japanese language.
[8]
Human SUGOROKU: learning support system of vegetation succession with
full-body interaction interface
Works-in-progress
/
Nakayama, Tomohiro
/
Adachi, Takayuki
/
Muratsu, Keita
/
Mizoguchi, Hiroshi
/
Namatame, Miki
/
Sugimoto, Masanori
/
Kusunoki, Fusako
/
Yamaguchi, Etsuji
/
Inagaki, Shigenori
/
Takeda, Yoshiaki
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.2
p.2227-2232
© Copyright 2014 ACM
Summary: In this study, we developed a simulation game called "Human SUGOROKU" that
consists of a full-body interaction system enabling elementary school
students to enjoy and learn vegetation succession. The students' sense of
immersion is improved by letting them play the game with their body
movements. We conducted an experiment with the students and investigated the
effects of the full-body interaction through questionnaires. The results
showed that the full-body interaction promotes a sense of immersion in the
game and enhances the students' understanding of vegetation succession. This
paper describes the structure of the system and the questionnaire results.
[9]
Virtual slicer: interactive visualizer for tomographic medical images based
on position and orientation of handheld device
/
Shimamura, Sho
/
Kanegae, Motoko
/
Morita, Jun
/
Uema, Yuji
/
Inami, Masahiko
/
Hayashida, Tetsu
/
Saito, Hideo
/
Sugimoto, Maki
Proceedings of the 2014 Virtual Reality International Conference
2014-04-09
p.12
© Copyright 2014 ACM
Summary: This paper introduces an interface that helps users understand the
correspondence between the patient and medical images. Surgeons determine the
extent of resection by using tomographic images such as MRI (Magnetic
Resonance Imaging) data. However, understanding the relationship between the
patient and tomographic images is difficult. This study aims to visualize the
correspondence more intuitively. In this paper, we propose an interactive
visualizer for medical images based on the relative position and orientation
of a handheld device and the patient. We conducted an experiment to compare
the performance of the proposed method with several other methods; the
proposed method showed the smallest error.
[10]
Virtual rope slider
/
Kodera, Tatsuya
/
Tani, Naoto
/
Morita, Jun
/
Maeda, Naoya
/
Tsuboi, Kazuna
/
Kanegae, Motoko
/
Shinozuka, Yukiko
/
Shimamura, Sho
/
Kubo, Kadoki
/
Nakayama, Yusuke
/
Lee, Jaejun
/
Pruneau, Maxime
/
Saito, Hideo
/
Sugimoto, Maki
Proceedings of the 2014 Virtual Reality International Conference
2014-04-09
p.36
© Copyright 2014 ACM
Summary: This paper proposes the "Virtual Rope Slider", which expands the
rope-sliding experience through visual, auditory, wind, and vestibular
stimulation. A rope slide in the real world has physical restrictions in
terms of scale and location, whereas our "Virtual Rope Slider" provides
scale- and location-independent experiences in a virtual environment. The
user can perceive a different sense of scale in the virtualized scenes
through multimodal stimulation combined with physical simulation.
[11]
Move-it sticky notes providing active physical feedback through motion
In focus or not?
/
Probst, Kathrin
/
Haller, Michael
/
Yasu, Kentaro
/
Sugimoto, Maki
/
Inami, Masahiko
Proceedings of the 2014 International Conference on Tangible and Embedded
Interaction
2014-02-16
p.29-36
© Copyright 2014 ACM
Summary: Post-it notes are a popular paper medium serving a multitude of purposes in
our daily lives, as they provide excellent affordances for quickly capturing
informal notes and for location-sensitive reminders. In this paper, we
present Move-it, a system that combines Post-it notes with a technologically
enhanced paperclip to demonstrate how a passive piece of paper can be turned
into an "active" medium that conveys information through motion. We present
two application examples that investigate the applicability of Move-it sticky
notes for ambient information awareness. Experimental results show that, in
comparison to existing notification systems, they reduce the negative effects
of interruptions on emotional state and performance, and provide unique
affordances by combining the advantages of physical and digital systems into
a novel active paper interface.
[12]
PukaPuCam: Enhance Travel Logging Experience through Third-Person View
Camera Attached to Balloons
Short Presentations
/
Yamamoto, Tsubasa
/
Sugiura, Yuta
/
Low, Suzanne
/
Toda, Koki
/
Minamizawa, Kouta
/
Sugimoto, Maki
/
Inami, Masahiko
Proceedings of the 2013 International Conference on Advances in Computer
Entertainment
2013-11-12
p.428-439
Keywords: life logging; third-person view; balloon; sightseeing
© Copyright 2013 Springer International Publishing
Summary: PukaPuCam is an application service that uses a camera attached to balloons
to capture photos of users continuously from a third-person view. Users can
then browse their photos with the PukaPuCam Viewer. PukaPuCam records the
interactions between users and their surrounding objects, or even with the
people they meet. Because the balloon experiences air resistance, its
inclination changes with the user's speed, capturing pictures from different
directions and angles. This adds interesting and unusual records to the
user's collection. Compared with other similar devices, PukaPuCam uses a
common design that people are familiar with -- a balloon -- making it an
interesting application for tourist spots. Because balloons are cute, we aim
to give users a more enjoyable, delightful experience.
[13]
Development of a Full-Body Interaction Digital Game for Children to Learn
Vegetation Succession
Extended Abstracts
/
Adachi, Takayuki
/
Mizoguchi, Hiroshi
/
Namatame, Miki
/
Kusunoki, Fusako
/
Sugimoto, Masanori
/
Muratsu, Keita
/
Yamaguchi, Etsuji
/
Inagaki, Shigenori
/
Takeda, Yoshiaki
Proceedings of the 2013 International Conference on Advances in Computer
Entertainment
2013-11-12
p.492-496
Keywords: Interactive Content; Ultrasonic Sensor; Embodiment; Learning Support System
© Copyright 2013 Springer International Publishing
Summary: In this study, we developed a simulation game called "Human SUGOROKU" that
simulates the vegetation succession of a real forest area in a virtual world.
The game consists of a full-body interaction system that enables children to
enjoy and learn vegetation succession by playing with their body movements.
We conducted an experiment with children and investigated the effects of the
full-body interaction through interviews. The results showed that the
full-body interaction promotes a sense of immersion in the game. This paper
describes the structure of the system and the interview results.
[14]
Generation of the Certain Kind of Figures Using the Movement Sense of
Localized Sound and Its Application
Universal Access and eInclusion
/
Shimizu, Michio
/
Sugimoto, Masahiko
/
Itoh, Kazunori
HCI International 2013: 15th International Conference on HCI: Posters'
Extended Abstracts Part I
2013-07-21
v.6
p.197-201
Keywords: the movement sense of the localized sound; the input tactile sense guide; a
figure education
© Copyright 2013 Springer-Verlag
Summary: In this report, a simple figure consisting of line segments and their
combinations is expressed virtually through the movement sense of localized
sound on a virtual sound screen. To help users form a mental image of such a
simple figure, we propose a system that combines the movement sense of
localized sound with an input tactile sense guide.
[15]
Generation of the Certain Kind of Figures Using the Movement Sense of
Localized Sound and Its Application
Perception and Interaction
/
Shimizu, Michio
/
Sugimoto, Masahiko
/
Itoh, Kazunori
HCI International 2013: 15th International Conference on HCI: Posters'
Extended Abstracts Part I
2013-07-21
v.6
p.507-510
Keywords: the movement sense of the localized sound; the input tactile sense guide; a
figure education
© Copyright 2013 Springer-Verlag
Summary: In this report, a simple figure consisting of line segments and their
combinations is expressed virtually through the movement sense of localized
sound on a virtual sound screen. To help users form a mental image of such a
simple figure, we propose a system that combines the movement sense of
localized sound with an input tactile sense guide.
[16]
Preliminary Design of a Network Protocol Learning Tool Based on the
Comprehension of High School Students: Design by an Empirical Study Using a
Simple Mind Map
Learning and Education
/
Satoh, Makoto
/
Muramatsu, Ryo
/
Kayama, Mizue
/
Itoh, Kazunori
/
Hashimoto, Masami
/
Otani, Makoto
/
Shimizu, Michio
/
Sugimoto, Masahiko
HCI International 2013: 15th International Conference on HCI: Posters'
Extended Abstracts Part II
2013-07-21
v.7
p.89-93
Keywords: Learning Tool; High School Student; Empirical Comprehension; Mind Map;
Network Protocol
© Copyright 2013 Springer-Verlag
Summary: The purpose of this study is to develop a learning tool for high school
students studying the scientific aspects of information and communication
networks. More specifically, we focus on the basic principles of network
protocols as the target of our learning tool. The tool gives students
hands-on experience to help them understand these basic principles.
[17]
Human SUGOROKU: full-body interaction system for students to learn
vegetation succession
Short Papers
/
Adachi, Takayuki
/
Goseki, Masafumi
/
Muratsu, Keita
/
Mizoguchi, Hiroshi
/
Namatame, Miki
/
Sugimoto, Masanori
/
Kusunoki, Fusako
/
Yamaguchi, Etsuji
/
Inagaki, Shigenori
/
Takeda, Yoshiaki
Proceedings of ACM IDC'13: Interaction Design and Children
2013-06-24
p.364-367
© Copyright 2013 ACM
Summary: In this study, we developed a simulation game called "Human SUGOROKU" that
consists of a full-body interaction system enabling elementary school
students to enjoy and learn vegetation succession. The students' sense of
immersion is improved by letting them play the game with their body
movements. We conducted an experiment with the students and investigated the
effects of the full-body interaction through interviews. The results showed
that the full-body interaction promotes a sense of immersion in the game.
This paper describes the structure of the system and the interview results.
[18]
KIKIWAKE: sound source separation system for children-computer interaction
Music and audio
/
Taguchi, Tomoki
/
Goseki, Masafumi
/
Egusa, Ryohei
/
Namatame, Miki
/
Sugimoto, Masanori
/
Kusunoki, Fusako
/
Yamaguchi, Etsuji
/
Inagaki, Shigenori
/
Takeda, Yoshiaki
/
Mizoguchi, Hiroshi
Extended Abstracts of ACM CHI'13 Conference on Human Factors in Computing
Systems
2013-04-27
v.2
p.757-762
© Copyright 2013 ACM
Summary: To separate children's voices from background noise in everyday living
environments, we developed a sound source separation system based on a
microphone array. We created a game using this system and conducted an
evaluation experiment with elementary school children. The results confirmed
that the system could separate three voices and that the game quantitatively
promoted children's interest in and awareness of the microphone array.
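Microphone-array separation of this kind typically builds on delay-and-sum
beamforming: delaying each microphone's signal to compensate for a chosen
direction of arrival and summing reinforces the target source while signals
from other directions partially cancel. The two-microphone, integer-sample
sketch below is a hypothetical illustration of that principle, not the
paper's implementation.

```python
import math

def delay_and_sum(signals, delays):
    """Advance each channel by its integer-sample delay and average."""
    n = len(signals[0])
    out = []
    for t in range(n):
        acc = 0.0
        for sig, d in zip(signals, delays):
            idx = t + d
            acc += sig[idx] if 0 <= idx < n else 0.0
        out.append(acc / len(signals))
    return out

# Two microphones pick up the same tone; the second hears it 5 samples late.
N, LAG = 64, 5
tone = [math.sin(2 * math.pi * 4 * t / N) for t in range(N)]
mic1 = tone[:]
mic2 = [0.0] * LAG + tone[:N - LAG]

steered = delay_and_sum([mic1, mic2], [0, LAG])  # compensates mic2's lag
unsteered = delay_and_sum([mic1, mic2], [0, 0])  # no compensation

power = lambda s: sum(x * x for x in s) / len(s)
print(power(steered) > power(unsteered))  # steering boosts the coherent source
```

Real systems use many microphones, fractional delays, and frequency-domain
processing, but the steering idea is the same.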
[19]
3D Object Surface Tracking Using Partial Shape Templates Trained from a
Depth Camera for Spatial Augmented Reality Environments
Posters
/
Tsuboi, K.
/
Oyamada, Y.
/
Sugimoto, M.
/
Saito, H.
Proceedings of AUIC'13, Australasian User Interface Conference
2013-01-29
p.125-126
© Copyright 2013 Australian Computer Society
Summary: We present a 3D object tracking method using a single depth camera for
Spatial Augmented Reality (SAR). The drastic changes of illumination in a SAR
environment make object tracking difficult. Our method uses a depth camera to
train on and track the 3D physical object. The training allows marker-less
tracking of the moving object under illumination changes. The tracking
combines feature-based matching and frame-sequential matching of point
clouds. Our method allows users to bring 3D objects of their choice into a
dynamic SAR environment.
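The frame-sequential point-cloud matching mentioned above can be illustrated
with a deliberately simplified sketch: assuming known correspondences and a
translation-only motion between two depth frames, the centroid difference
recovers the motion exactly. The point coordinates are invented; a real
tracker (including the paper's) must also estimate rotation and establish the
correspondences itself.

```python
def estimate_translation(frame_a, frame_b):
    """Return the translation mapping frame_a onto frame_b, given that
    point i in frame_a corresponds to point i in frame_b."""
    n = len(frame_a)
    cen_a = [sum(p[k] for p in frame_a) / n for k in range(3)]
    cen_b = [sum(p[k] for p in frame_b) / n for k in range(3)]
    return tuple(cen_b[k] - cen_a[k] for k in range(3))

# A small synthetic depth frame and the same points one frame later.
cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.5), (0.3, 0.7, 0.2)]
moved = [(x + 0.1, y - 0.2, z + 0.05) for (x, y, z) in cloud]
print(estimate_translation(cloud, moved))  # ~ (0.1, -0.2, 0.05)
```

Handling rotation as well leads to the standard Procrustes / ICP machinery,
where this centroid step is the first stage of each iteration.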
[20]
Novel interaction techniques using touch-sensitive tangibles in tabletop
environments
Posters
/
Amaro, Saphyra
/
Sugimoto, Masanori
Proceedings of the 2012 ACM International Conference on Interactive
Tabletops and Surfaces
2012-11-11
p.347-350
© Copyright 2012 ACM
Summary: In this work, we propose techniques for interaction that use a
touch-sensitive tangible to assist 3D manipulation in tabletop applications.
The objective of this research is to investigate the effectiveness and user
satisfaction with this combination for performing virtual object manipulation
in tabletop environments. A prototype of a touch-sensitive tangible was
constructed and some of the proposed techniques were implemented, namely 3D
translation and rotation. We conducted a pilot study to compare 3D manipulation
on the tabletop with and without the tangible, from which we found that the
touch-sensitive tangible was useful for 3D manipulation tasks.
[21]
An immersive surface for 3D interactions
Posters
/
Takeuchi, Yusuke
/
Sugimoto, Masanori
Proceedings of the 2012 ACM International Conference on Interactive
Tabletops and Surfaces
2012-11-11
p.359-362
© Copyright 2012 ACM
Summary: This paper proposes a new tabletop interface that enables a user to
visualize projected objects as if they existed on the tabletop surface. It uses
head tracking, without the need for any specialized head-mounted hardware,
displays, or markers. Nowadays, many interactive tabletop interfaces support
interactions above the surface because this is more intuitive. In these 3D
interactions, users should be able to gauge the size and height of the
projected virtual objects. We evaluate our system quantitatively via a 3D
interaction task, by comparing it with a standard tabletop system.
[22]
Novel interaction techniques based on a combination of hand and foot
gestures in tabletop environments
Interaction by hand and foot
/
Sangsuriyachot, Nuttapol
/
Sugimoto, Masanori
Proceedings of the 2012 Asia Pacific Conference on Computer Human
Interaction
2012-08-28
p.21-28
© Copyright 2012 Springer-Verlag
Summary: Interactive tables, or tabletop devices, employ multi-finger gestures to
interact with digital content on a table's surface. Many studies have
confirmed the convenience and intuitiveness of multi-finger gestures performed
with the hands. However, there are still some tasks which cannot be conducted
effectively by users via two-handed or multi-finger gestures. Given that feet
are used occasionally in the real world to support the hands in the performance
of complex tasks such as driving a car, we considered that it might be useful
to combine foot gestures with hand gestures to enhance user interactions with
tabletop environments.
In this study, we developed a high-resolution foot sensing platform based on
multi-touch techniques known as frustrated total internal reflection and
diffused illumination. We then used the device to study the effect of combining
hand and foot gestures on tabletop systems by using a 3D drawing application.
We conducted user evaluations to compare foot gestures and identified which
gestures were most comfortable for performing a 3D model rotation task. We also
compared the performance in a 3D drawing task when using only hand gestures
with the performance when using hand and foot gestures together. Finally, we
discussed how hand and foot gesture combination techniques could provide new
user experiences in tabletop environments.
[23]
Stop motion goggle: augmented visual perception by subtraction method using
high speed liquid crystal
/
Koizumi, Naoya
/
Sugimoto, Maki
/
Nagaya, Naohisa
/
Inami, Masahiko
/
Furukawa, Masahiro
Proceedings of the 2012 Augmented Human International Conference
2012-03-08
p.14
© Copyright 2012 ACM
Summary: Stop Motion Goggle (SMG) expands visual perception by allowing users to
perceive visual information selectively through a high-speed shutter. With
this system, the user can easily observe not only periodic rotational motion,
such as rotating fans or wheels, but also random motion, like bouncing balls.
In this research, we developed SMG and evaluated its effect on the visual
perception of fast-moving objects. Furthermore, this paper describes users'
behaviors under the expanded visual experience.
[24]
HATs: interact using height-adjustable tangibles in tabletop interfaces
Graspable interfaces
/
Mi, Haipeng
/
Sugimoto, Masanori
Proceedings of the 2011 ACM International Conference on Interactive
Tabletops and Surfaces
2011-11-13
p.71-74
© Copyright 2011 ACM
Summary: We present Height-Adjustable Tangibles (HATs) for tabletop interaction.
HATs are active tangibles with 4 degrees of freedom that are capable of moving,
rotating, and changing height. By adding height as an additional dimension for
manipulation and representation, HATs offer more freedom to users than ordinary
tangibles. HATs support bidirectional interaction, enabling them to reflect
changes in the digital model via active visual feedback and to assist users via
haptic feedback. A number of scenarios for using HATs are proposed, including
interaction with complex and dependent models and applying HATs as tangible
indicator widgets. We then introduce the implementation of HAT prototypes, for
which we utilize motor-driven potentiometers to realize bidirectional
interaction via the height dimension.
[25]
Novel interaction techniques by combining hand and foot gestures on tabletop
environments
Multi-surface
/
Sangsuriyachot, Nuttapol
/
Mi, Haipeng
/
Sugimoto, Masanori
Proceedings of the 2011 ACM International Conference on Interactive
Tabletops and Surfaces
2011-11-13
p.268-269
© Copyright 2011 ACM
Summary: Despite the convenience and intuitiveness of multi-touch gestures, there
are some tasks that users cannot conduct effectively even with two-handed
gestures. We propose novel input techniques that combine hand and foot
gestures to enhance user interactions in tabletop environments. We have
developed an early prototype of a sensor-based foot platform that recognizes
subtle foot gestures, designed foot gestures and interactions that support
users' simultaneous tasks, and obtained informal user feedback.