[1]
Facial Expression Recognition in Daily Life by Embedded Photo Reflective
Sensors on Smart Eyewear
Wearable and Mobile IUI 2
/
Masai, Katsutoshi
/
Sugiura, Yuta
/
Ogata, Masa
/
Kunze, Kai
/
Inami, Masahiko
/
Sugimoto, Maki
Proceedings of the 2016 International Conference on Intelligent User
Interfaces
2016-03-07
v.1
p.317-326
© Copyright 2016 ACM
Summary: This paper presents a novel smart eyewear that uses embedded photo
reflective sensors and machine learning to recognize a wearer's facial
expressions in daily life. We leverage the skin deformation when wearers change
their facial expressions. With small photo reflective sensors, we measure the
proximity between the skin surface of the face and the eyewear frame, into
which 17 sensors are integrated. A Support Vector Machine (SVM) algorithm is
applied to the sensor data. The sensors can cover various facial muscle
movements and can be integrated into everyday glasses. The main contributions
of our work are as follows. (1) The eyewear recognizes eight facial expressions
(92.8% accuracy for one-time use and 78.1% for use across 3 different days). (2) It
is designed and implemented considering social acceptability. The device looks
like normal eyewear, so users can wear it anytime, anywhere. (3) Initial field
trials in daily life were undertaken. Our work is one of the first attempts to
recognize and evaluate a variety of facial expressions in the form of an
unobtrusive wearable device.
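As a rough illustration of the classification step only, here is a minimal
sketch (not the authors' pipeline): an RBF-kernel SVM trained on frames of 17
proximity readings. The data, labels, and hyperparameters are placeholders.

    # Minimal sketch: classifying facial expressions from 17 photo reflective
    # sensor readings with an SVM. All values here are illustrative.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    NUM_SENSORS = 17  # sensors embedded in the eyewear frame

    # X: one row per frame of proximity readings; y: expression labels (0..7).
    rng = np.random.default_rng(0)
    X = rng.random((800, NUM_SENSORS))   # placeholder for recorded sensor data
    y = rng.integers(0, 8, size=800)     # placeholder for 8 expression classes

    # Scale each sensor channel, then fit an RBF-kernel SVM.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X, y)

    # Predict the expression for a new frame of 17 proximity values.
    frame = rng.random((1, NUM_SENSORS))
    print(clf.predict(frame))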
[2]
Augmented Winter Ski with AR HMD
/
Fan, Kevin
/
Seigneur, Jean-Marc
/
Guislain, Jonathan
/
Nanayakkara, Suranga
/
Inami, Masahiko
Proceedings of the 2016 Augmented Human International Conference
2016-02-25
p.34
© Copyright 2016 Authors
Summary: At the time of writing, several affordable Head-Mounted Displays (HMDs) are
about to be released to the mass market, most of them for Virtual Reality (VR;
e.g., Oculus Rift, Samsung Gear VR) but also for indoor Augmented Reality (AR)
with HoloLens. We have investigated how to adapt such HMDs, e.g., the Oculus
Rift, for an outdoor AR ski slope. Rather than setting up physical obstacles
such as poles, our system employs AR to render dynamic obstacles by different
means. During the demo, skiers wear a video-see-through HMD while skiing on a
real ski slope where AR obstacles are rendered.
[3]
Electrosmog Visualization through Augmented Blurry Vision
/
Fan, Kevin
/
Seigneur, Jean-Marc
/
Nanayakkara, Suranga
/
Inami, Masahiko
Proceedings of the 2016 Augmented Human International Conference
2016-02-25
p.35
© Copyright 2016 Authors
Summary: Electrosmog is the electromagnetic radiation emitted by wireless
technology such as Wi-Fi hotspots or cellular towers, and it poses a potential
hazard to humans. Electrosmog is invisible, so we rely on detectors that report
the electrosmog level as warnings such as numbers. Our system estimates the
electrosmog level from the number of Wi-Fi networks, the connected cellular
towers, and their signal strengths, and shows it in an intuitive representation
by blurring the vision of users wearing a Head-Mounted Display (HMD). The HMD
displays the user's augmented surroundings in real time with blurriness, as
though the electrosmog actually clouded the environment. For the demonstration,
participants can walk around wearing a video-see-through HMD and observe their
vision gradually blur as they approach our prepared dense wireless network.
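The mapping from wireless density to blur could look like the following
sketch. The Wi-Fi scanning step is platform-specific, so
count_nearby_networks() is a hypothetical stand-in, and the thresholds are
illustrative, not the paper's parameters.

    # Illustrative sketch of the blur mapping only.
    import cv2

    def count_nearby_networks() -> int:
        """Hypothetical: return the number of visible Wi-Fi networks."""
        return 12  # placeholder value

    def blur_for_electrosmog(frame, max_networks=20, max_kernel=51):
        """Blur the camera frame proportionally to nearby network density."""
        n = min(count_nearby_networks(), max_networks)
        k = 1 + 2 * int((n / max_networks) * (max_kernel // 2))  # odd kernel
        return cv2.GaussianBlur(frame, (k, k), 0)

    frame = cv2.imread("surroundings.jpg")  # stand-in for the HMD camera feed
    if frame is not None:
        cv2.imwrite("blurred.jpg", blur_for_electrosmog(frame))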
[4]
Quantifying reading habits: counting how many words you read
Quantifying and communicating through wearables
/
Kunze, Kai
/
Masai, Katsutoshi
/
Inami, Masahiko
/
Sacakli, Ömer
/
Liwicki, Marcus
/
Dengel, Andreas
/
Ishimaru, Shoya
/
Kise, Koichi
Proceedings of the 2015 International Conference on Ubiquitous Computing
2015-09-07
p.87-96
© Copyright 2015 ACM
Summary: Reading is a very common learning activity; many people do it every day,
even while standing in the subway or waiting in the doctor's office.
However, we know little about our everyday reading habits; quantifying them
could yield insights into better language skills, more effective learning, and
ultimately critical thinking. This paper presents a first contribution towards
establishing a reading log that tracks how much reading you do and when. We
present an approach capable of estimating the number of words a user reads and
evaluate it in a user-independent manner over 3 experiments with 24 users on 5
different devices (e-ink reader, smartphone, tablet, paper, computer screen).
We achieve an error rate as low as 5% (using a medical electrooculography
system) or 15% (based on eye movements captured by optical eye tracking) over a
total of 30 hours of recording. Our method works with both optical eye tracking
and electrooculography systems. We provide first indications that the method
also works on smart glasses that will soon be commercially available.
[5]
Smart Eyewear for Interaction and Activity Recognition
Interactivity
/
Ishimaru, Shoya
/
Kunze, Kai
/
Tanaka, Katsuma
/
Uema, Yuji
/
Kise, Koichi
/
Inami, Masahiko
Extended Abstracts of the ACM CHI'15 Conference on Human Factors in
Computing Systems
2015-04-18
v.2
p.307-310
© Copyright 2015 ACM
Summary: Smart eyewear is an emerging device class with a lot of possibilities for user interaction design and
unobtrusive activity tracking. In this paper we show applications using an
early prototype of J!NS MEME, smart glasses with integrated electrodes to
detect eye movements (Electrooculography, EOG) and motion sensors
(accelerometer and gyroscope) to monitor head motions. We present several
demonstrations: a simple eye movement visualization that detects left/right eye
motion and blinks; a game, "Blinky Bird", in which users help a bird avoid
obstacles using eye movements; and online detection of reading and talking
behavior using a combination of blinks, eye movement and head motion. We can
give people a long-term view of their reading, talking, and walking activity
over the day.
[6]
Gravitamine spice: a system that changes the perception of eating through
virtual weight sensation
Altered Experiences
/
Hirose, Masaharu
/
Iwazaki, Karin
/
Nojiri, Kozue
/
Takeda, Minato
/
Sugiura, Yuta
/
Inami, Masahiko
Proceedings of the 2015 Augmented Human International Conference
2015-03-09
p.33-40
© Copyright 2015 ACM
Summary: The flavor of food is not limited to the sense of taste; it also changes
according to information perceived through other senses such as hearing,
vision, and touch, and through individual experiences or cultural background.
We propose "Gravitamine Spice", a system that focuses on the cross-modal
interaction between our perceptions, mainly the weight of food we perceive when
we lift the utensils. The system consists of a fork and a seasoning called
"OMOMI". Users can change the perceived weight of the food by sprinkling the
seasoning onto it. Through this sequence of actions, users can enjoy different
dining experiences, which may change the taste of their food or their feeling
towards the food while chewing it.
[7]
RippleTouch: initial exploration of a wave resonant based full body haptic
interface
Haptics and Exoskeletons
/
Withana, Anusha
/
Koyama, Shunsuke
/
Saakes, Daniel
/
Minamizawa, Kouta
/
Inami, Masahiko
/
Nanayakkara, Suranga
Proceedings of the 2015 Augmented Human International Conference
2015-03-09
p.61-68
© Copyright 2015 ACM
Summary: We propose RippleTouch, a low-resolution haptic interface that is capable of
providing haptic stimulation to multiple areas of the body via a single
point-of-contact actuator. The concept is based on the low-frequency acoustic
wave propagation properties of the human body. By stimulating the body with
different amplitude-modulated frequencies at a single contact point, we were
able to dissipate the wave energy in a particular region of the body, creating
a haptic stimulation without direct contact. The RippleTouch system was
implemented on a regular chair, in which four bass-range speakers were mounted
underneath the seat and driven by a simple stereo audio interface. The system
was evaluated to investigate the effect of the frequency characteristics of the
amplitude modulation. Results demonstrate that we can effectively create
haptic sensations at different parts of the body with a single contact point
(i.e., the chair surface). We believe the RippleTouch concept could serve as a
scalable solution for providing full-body haptic feedback in a variety of
situations including entertainment, communication, public spaces and vehicular
applications.
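A minimal sketch of the signal-generation idea, assuming a low-frequency
carrier amplitude-modulated at a test frequency and written to a WAV file; the
specific frequencies are illustrative, not the paper's parameters.

    # Generate an amplitude-modulated low-frequency signal for the speakers.
    import numpy as np
    from scipy.io import wavfile

    SAMPLE_RATE = 44100
    carrier_hz = 40.0      # low-frequency carrier propagated through the body
    modulation_hz = 5.0    # amplitude-modulation frequency being tested
    duration_s = 2.0

    t = np.linspace(0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    envelope = 0.5 * (1 + np.sin(2 * np.pi * modulation_hz * t))  # 0..1 envelope
    signal = (carrier * envelope * 32767).astype(np.int16)

    wavfile.write("ripple.wav", SAMPLE_RATE, signal)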
[8]
How much do you read?: counting the number of words a user reads using
electrooculography
Learning and Reading
/
Kunze, Kai
/
Masai, Katsutoshi
/
Uema, Yuji
/
Inami, Masahiko
Proceedings of the 2015 Augmented Human International Conference
2015-03-09
p.125-128
© Copyright 2015 ACM
Summary: We read to acquire knowledge. Reading is a common activity performed in
transit and while sitting, for example while commuting to work or at home on
the couch. Although reading is associated with high vocabulary skills and even
with increased critical thinking, we still know very little about effective
reading habits. In this paper, we argue that as the first step towards
understanding reading habits in real life, we need to quantify them with
affordable and unobtrusive technology. Towards this goal, we present a system
that tracks how many words a user reads using electrooculography sensors.
Compared to previous work, we use active electrodes with a novel on-body
placement optimized both for integration into glasses (or other head-worn
eyewear) and for reading detection. Using this system, we present an algorithm
capable of estimating the number of words read by a user and evaluate it in a
user-independent manner over experiments with 6 users on 4 different devices
(8" and 9" tablets, paper, laptop screen). We achieve an error rate as low as
7% (based on eye motions alone) for the word count estimation (std = 0.5%).
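One plausible, much-simplified word-count heuristic is sketched below: count
forward saccades in the horizontal eye signal and scale by an assumed
words-per-saccade factor. The thresholds and the factor are illustrative; the
paper's actual algorithm is more involved.

    # Rough word-count heuristic over a horizontal eye-position signal.
    import numpy as np

    def estimate_words_read(horizontal, fs=100.0,
                            saccade_thresh=0.5, words_per_saccade=1.8):
        """horizontal: eye position (EOG or gaze x), one value per sample at
        fs Hz. Threshold and scale factor are illustrative placeholders."""
        velocity = np.diff(horizontal) * fs          # approximate eye velocity
        # A forward saccade shows up as a short positive velocity spike.
        spikes = velocity > saccade_thresh
        # Count rising edges so a multi-sample spike is counted once.
        saccades = np.count_nonzero(spikes[1:] & ~spikes[:-1])
        return int(saccades * words_per_saccade)

    # Example with a synthetic staircase signal (reading-like forward jumps).
    sig = np.repeat(np.arange(50, dtype=float), 20) + \
          np.random.default_rng(1).normal(0, 0.01, 1000)
    print(estimate_words_read(sig))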
[9]
PukuPuCam: a recording system from third-person view in scuba diving
Posters & Demonstrations
/
Hirose, Masaharu
/
Sugiura, Yuta
/
Minamizawa, Kouta
/
Inami, Masahiko
Proceedings of the 2015 Augmented Human International Conference
2015-03-09
p.161-162
© Copyright 2015 ACM
Summary: In this paper, we propose the "PukuPuCam" system, an apparatus that records
one's diving experience from a third-person view, allowing the user to recall
the experience at a later time. "PukuPuCam" continuously captures the center of
the user's viewpoint by attaching a floating camera to the user's body with a
string. With this simple technique, it is possible to maintain the same
viewpoint regardless of the diving speed or the underwater waves. Users can
therefore dive naturally without being conscious of the camera. The main aim
of this system is to enhance the diving experience by recording the user's
unconscious behaviour and interactions with the surrounding environment.
[10]
The augmented narrative: toward estimating reader engagement
Posters & Demonstrations
/
Kunze, Kai
/
Sanchez, Susana
/
Dingler, Tilman
/
Augereau, Olivier
/
Kise, Koichi
/
Inami, Masahiko
/
Terada, Tsutomu
Proceedings of the 2015 Augmented Human International Conference
2015-03-09
p.163-164
© Copyright 2015 ACM
Summary: We present the concept of bio-feedback-driven computing to design a
responsive narrative that acts according to the reader's experience. We
explore how to detect engagement and evaluate the usefulness of different
sensor modalities. We find that temperature and blink frequency are the best
estimators of engagement and can classify engaging and non-engaging content
user-independently without error for a small user sample (5 users).
[11]
Graffiti fur: turning your carpet into a computer display
Novel hardware I
/
Sugiura, Yuta
/
Toda, Koki
/
Hoshi, Takayuki
/
Kamiyama, Youichi
/
Igarashi, Takeo
/
Inami, Masahiko
Proceedings of the 2014 ACM Symposium on User Interface Software and
Technology
2014-10-05
v.1
p.149-156
© Copyright 2014 ACM
Summary: We devised a display technology that utilizes the phenomenon whereby the
shading properties of fur change as the fibers are raised or flattened. One can
erase drawings by first flattening the fibers by sweeping the surface by hand
in the fiber's growth direction, and then draw lines by raising the fibers by
moving the finger in the opposite direction. These material properties can be
found in various items such as carpets in our living environments. We have
developed three different devices to draw patterns on a "fur display" utilizing
this phenomenon: a roller device, a pen device and a pressure projection
device. Our technology can turn ordinary objects in our environment into
rewritable displays without requiring or creating any non-reversible
modifications to them. In addition, it can present large-scale images without
glare, and the images it creates incur no running cost to maintain.
[12]
Tracs: transparency-control for see-through displays
Augmented reality II
/
Lindlbauer, David
/
Aoki, Toru
/
Walter, Robert
/
Uema, Yuji
/
Höchtl, Anita
/
Haller, Michael
/
Inami, Masahiko
/
Müller, Jörg
Proceedings of the 2014 ACM Symposium on User Interface Software and
Technology
2014-10-05
v.1
p.657-661
© Copyright 2014 ACM
Summary: We present Tracs, a dual-sided see-through display system with controllable
transparency. Traditional displays are a constant visual and communication
barrier, hindering fast and efficient collaboration of spatially close or
facing co-workers. Transparent displays could potentially remove these
barriers, but introduce new issues of personal privacy, screen content privacy
and visual interference. We therefore propose a solution with controllable
transparency to overcome these problems. Tracs consists of two see-through
displays, with a transparency-control layer, a backlight layer and a
polarization adjustment layer in-between. The transparency-control layer is
built as a grid of individually addressable transparency-controlled patches,
allowing users to control the transparency overall or just locally.
Additionally, the locally switchable backlight layer improves the contrast of
LCD screen content. Tracs allows users to switch between personal and
collaborative work quickly and easily and gives them full control of the
transparent regions on their display.
[13]
Smarter eyewear: using commercial EOG glasses for activity recognition
Demos
/
Ishimaru, Shoya
/
Uema, Yuji
/
Kunze, Kai
/
Kise, Koichi
/
Tanaka, Katsuma
/
Inami, Masahiko
Adjunct Proceedings of the 2014 International Joint Conference on Pervasive
and Ubiquitous Computing
2014-09-13
v.2
p.239-242
© Copyright 2014 ACM
Summary: Smart eyewear computing is a relatively new subcategory of ubiquitous
computing research with enormous potential. In this paper we present a
first evaluation of soon-to-be commercially available Electrooculography (EOG)
glasses (J!NS MEME) for use in activity recognition. We discuss the
potential of EOG glasses and other smart eyewear. Afterwards, we show a first
signal-level assessment of MEME and present a classification task using the
glasses. We are able to distinguish 4 activities for 2 users (typing,
reading, eating and talking) using the sensor data (EOG and acceleration) from
the glasses, with an accuracy of 70% for 6-second windows and up to 100% for a
1-minute majority decision. The classification is done user-independently.
The results encourage us to further explore the EOG glasses as a platform for
more complex, real-life activity recognition systems.
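The windowing scheme can be illustrated as follows: per-window predictions
(e.g., from a classifier over EOG and acceleration features) are aggregated
into a one-minute majority decision. The labels below are placeholders.

    # Sketch of the majority-decision step over 6 s windows.
    from collections import Counter

    WINDOW_S, MAJORITY_S = 6, 60
    windows_per_decision = MAJORITY_S // WINDOW_S  # 10 windows per minute

    def majority_decision(window_labels):
        """Return the most frequent label among one minute of window labels."""
        return Counter(window_labels).most_common(1)[0][0]

    # Placeholder per-window predictions.
    preds = ["reading"] * 7 + ["talking"] * 2 + ["eating"]
    assert len(preds) == windows_per_decision
    print(majority_decision(preds))  # -> "reading"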
[14]
Position paper: brain teasers -- toward wearable computing that engages our
mind
WAHM 2014 -- Workshop on Ubiquitous Technologies for Augmenting the Human
Mind
/
Ishimaru, Shoya
/
Kunze, Kai
/
Kise, Koichi
/
Inami, Masahiko
Adjunct Proceedings of the 2014 International Joint Conference on Pervasive
and Ubiquitous Computing
2014-09-13
v.2
p.1405-1408
© Copyright 2014 ACM
Summary: The emerging field of cognitive activity recognition -- real-life tracking
of mental states -- can give us new possibilities to enhance our minds.
In this paper, we outline the use of wearable computing to engage the user's
mind. We argue that the more personal technology becomes, the more it should
also adapt to the user's long-term goals, improving mental fitness. We present
the concept of computing that engages our minds, and discuss some enabling
technologies as well as challenges and opportunities.
[15]
Sweat Sensing Technique for Wearable Device Using Infrared Transparency
HCI for Health, Well-Being and Sport
/
Ogata, Masa
/
Inami, Masahiko
/
Imai, Michita
HCI International 2014: 16th International Conference on HCI, Part III:
Applications and Services
2014-06-22
v.3
p.323-331
Keywords: Sweat; Wearable device; Sensing; Photo transparency
© Copyright 2014 Springer International Publishing
Summary: Wearable devices that are worn on the hand and display information are
rapidly becoming pervasive. However, acquiring and displaying a user's own
data, such as the amount of sweat flowing and the required amount of water for
a particular activity, on a wearable device remains difficult. We propose a
technique that senses the amount of sweat flowing from the human body. The
technique, which is implemented in a wearable device, utilizes infrared
transparency via a sponge that can hold the sweat. We selected sponge as the
material to hold the sweat because it enables repeated measurement of the
amount of sweat flowing from the human body. We also outline the development
and testing of a prototype device that realizes the proposed technique and
discuss its efficacy and feasibility.
[16]
Augmenting a Wearable Display with Skin Surface as an Expanded Input Area
Design for Novel Interaction Techniques and Realities
/
Ogata, Masa
/
Sugiura, Yuta
/
Makino, Yasutoshi
/
Inami, Masahiko
/
Imai, Michita
DUXU 2014: Third International Conference on Design, User Experience, and
Usability, Part II: User Experience Design for Diverse Interaction Platforms
and Environments
2014-06-22
v.2
p.606-614
Keywords: Skin Deformation; Wearable Display; Photo reflectivity
© Copyright 2014 Springer International Publishing
Summary: Wearable devices, such as wristwatch-type smart watches, are becoming
smaller and easier to implement. However, user interaction with wearable
displays is limited owing to the small display area. On larger displays, such
as tablet computers, the user has more space to interact with the device and
provide various inputs. A wearable device's small display area clearly limits
its ability to read finger gestures. We propose an augmented wearable display
that expands the user input area onto the skin. A user can perform finger
gestures on the skin to control a wearable display. The prototype device is
implemented using techniques that sense skin deformation by measuring the
distance between the skin and the wearable (wristwatch-type) device. With this
sensing technique, we demonstrate three types of input functions, creating
input via the skin around the wearable display and the device.
[17]
Workshop on assistive augmentation
Workshop summaries
/
Huber, Jochen
/
Rekimoto, Jun
/
Inami, Masahiko
/
Shilkrot, Roy
/
Maes, Pattie
/
Ee, Wong Meng
/
Pullin, Graham
/
Nanayakkara, Suranga Chandima
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.2
p.103-106
© Copyright 2014 ACM
Summary: Our senses are the dominant channel for perceiving the world around us, some
more central than others, such as the sense of vision. Whether they have
impairments or not, people often find themselves at the edge of their sensorial
capability and seek assistive or enhancing devices. We wish to put sensorial
ability and disability on a continuum of usability for a given technology,
rather than treat one or the other extreme as the focus.
The overarching topic of the workshop proposed here is the design and
development of assistive technology, user interfaces and interactions that
seamlessly integrate with a user's mind, body and behavior, providing an
enhanced perception. We call this "Assistive Augmentation".
The workshop aims to establish conversation and idea exchange with
researchers and practitioners at the junction of human-computer interfaces,
assistive technology and human augmentation. The workshop will serve as a hub
for the emerging community of assistive augmentation researchers.
[18]
Virtual slicer: interactive visualizer for tomographic medical images based
on position and orientation of handheld device
/
Shimamura, Sho
/
Kanegae, Motoko
/
Morita, Jun
/
Uema, Yuji
/
Inami, Masahiko
/
Hayashida, Tetsu
/
Saito, Hideo
/
Sugimoto, Maki
Proceedings of the 2014 Virtual Reality International Conference
2014-04-09
p.12
© Copyright 2014 ACM
Summary: This paper introduces an interface that helps users understand the
correspondence between the patient and medical images. Surgeons determine the
extent of resection using tomographic images such as MRI (Magnetic Resonance
Imaging) data. However, understanding the relationship between the patient and
tomographic images is difficult. This study aims to visualize the
correspondence more intuitively. We propose an interactive visualizer for
medical images based on the relative position and orientation of a handheld
device and the patient. We conducted an experiment to compare the performance
of the proposed method with several other methods; the proposed method showed
the smallest error.
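Under simplifying assumptions (the device pose already expressed as a rotation
matrix R and a position p in voxel coordinates), the core resampling step of
extracting an oblique slice from the volume might look like this sketch.

    # Sample an oblique slice from a tomographic volume given a device pose.
    import numpy as np
    from scipy.ndimage import map_coordinates

    def slice_volume(volume, R, p, size=128):
        """Sample a size x size slice whose plane is defined by the pose.
        R's first two columns span the slice plane; p is the slice center."""
        u = np.arange(size) - size / 2
        uu, vv = np.meshgrid(u, u)
        # Volume coordinates of every slice pixel: center + in-plane offsets.
        coords = (p[:, None, None]
                  + R[:, 0][:, None, None] * uu
                  + R[:, 1][:, None, None] * vv)
        return map_coordinates(volume, coords, order=1, mode="nearest")

    # Example: axis-aligned slice through the middle of a synthetic volume.
    vol = np.random.default_rng(2).random((64, 64, 64))
    R = np.eye(3)                       # identity pose
    p = np.array([32.0, 32.0, 32.0])    # slice center in voxel coordinates
    print(slice_volume(vol, R, p, size=32).shape)  # (32, 32)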
[19]
Present information through afterimage with eyes closed
3. Look into Your Eyes
/
Nojiri, Kozue
/
Low, Suzanne
/
Toda, Koki
/
Sugiura, Yuta
/
Kamiyama, Youichi
/
Inami, Masahiko
Proceedings of the 2014 Augmented Human International Conference
2014-03-07
p.3
© Copyright 2014 ACM
Summary: We propose a display method that uses the afterimage effect to present
images so that people can perceive them with their eyes closed. The afterimage
effect is an everyday phenomenon that we often experience, and it is commonly
exploited in practical settings such as movie production; however, many of us
are not aware of it. We believe this afterimage effect is an interesting way to
display information to users. We conducted an experiment comparing the duration
of the afterimage effect with the duration of the participant's exposure to the
projected image. We also prototyped a wearable display to give our proposal
more flexibility and mobility. With this, one could use the method for various
applications, such as confirming a password at a bank.
[20]
Pressure detection on mobile phone by camera and flash
4. Beyond Smartphones
/
Low, Suzanne
/
Sugiura, Yuta
/
Lo, Dixon
/
Inami, Masahiko
Proceedings of the 2014 Augmented Human International Conference
2014-03-07
p.11
© Copyright 2014 ACM
Summary: This paper proposes a method to detect pressure exerted on a mobile phone
by utilizing the phone's back camera and flash. When the phone is placed on the
palm, there is a gap between the palm and the camera, which allows light from
the flash to be reflected back to the camera. When pressure is applied to the
phone, the gap shrinks, reducing the brightness captured by the camera. This
phenomenon is used to detect two gestures: pressure applied on the screen and
pressure applied when the user squeezes the phone. We also conducted an
experiment to measure the change in brightness level depending on the amount of
force exerted on the phone when it is placed in two positions: parallel to the
palm and perpendicular to the palm. The results show that as the force
increases, the brightness level decreases. Using the phone's ability to detect
fluctuations in brightness, various pressure-based interaction applications,
such as games, may be developed.
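A minimal sketch of the sensing principle, with illustrative brightness
thresholds: map the mean brightness of the flash-lit camera frame to a 0..1
pressure estimate.

    # Map mean frame brightness to a pressure estimate.
    import cv2
    import numpy as np

    def pressure_level(frame, bright_baseline=180.0, dark_floor=40.0):
        """Return a 0..1 pressure estimate (1 = hard press); thresholds are
        illustrative placeholders."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        b = float(np.mean(gray))
        # Brighter than baseline -> no pressure; darker than floor -> full press.
        return float(np.clip((bright_baseline - b)
                             / (bright_baseline - dark_floor), 0.0, 1.0))

    cap = cv2.VideoCapture(0)  # stand-in for the phone's back camera feed
    ok, frame = cap.read()
    if ok:
        print(f"pressure: {pressure_level(frame):.2f}")
    cap.release()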
[21]
Emotional priming of mobile text messages with ring-shaped wearable device
using color lighting and tactile expressions
4. Beyond Smartphones
/
Pradana, Gilang Andi
/
Cheok, Adrian David
/
Inami, Masahiko
/
Tewell, Jordan
/
Choi, Yongsoon
Proceedings of the 2014 Augmented Human International Conference
2014-03-07
p.14
© Copyright 2014 ACM
Summary: In this paper, as a hybrid approach to placing greater emphasis on existing
cues in Computer-Mediated Communication (CMC), the authors explore the
emotional augmentation benefits of vibro-tactile stimulation, color lighting,
and the simultaneous transmission of both signals to accompany text messages.
Ring U, a ring-shaped wearable system aimed at promoting emotional
communication between people using vibro-tactile and color-lighting
expressions, is proposed as the implementation method. The experiment showed
that non-verbal stimuli can prime the emotion of a text message, driving it in
the direction of the emotional characteristic of the stimuli: positive stimuli
can prime the emotion towards a more positive valence, and negative stimuli can
invoke a more negative valence. Another finding is that, compared to their
effect on valence, touch stimuli have more effect on the activity level.
[22]
Multi-touch steering wheel for in-car tertiary applications using infrared
sensors
7. Driving
/
Koyama, Shunsuke
/
Sugiura, Yuta
/
Ogata, Masa
/
Withana, Anusha
/
Uema, Yuji
/
Honda, Makoto
/
Yoshizu, Sayaka
/
Sannomiya, Chihiro
/
Nawa, Kazunari
/
Inami, Masahiko
Proceedings of the 2014 Augmented Human International Conference
2014-03-07
p.5
© Copyright 2014 ACM
Summary: This paper proposes a multi-touch steering wheel for in-car tertiary
applications. Existing interfaces for in-car applications, such as buttons and
touch displays, have several operating problems. For example, drivers have to
consciously move their hands to the interfaces, as the interfaces are fixed at
specific positions. We therefore developed a steering wheel where touch
positions can correspond to different operating positions. The system can
recognize hand gestures at any position on the steering wheel by utilizing 120
infrared (IR) sensors embedded in it; the sensors are lined up in an array
surrounding the whole wheel. A Support Vector Machine (SVM) algorithm is used
to learn and recognize the different gestures from the sensor data. The
recognized gestures are flick, click, tap, stroke and twist. Additionally, we
implemented a navigation application and an audio application that utilize the
torus shape of the steering wheel. We conducted an experiment to assess how
well our proposed system recognizes flick gestures at three positions. Results
show that an average of 92% of flicks could be recognized.
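The paper classifies gestures with an SVM over the 120-sensor data; as a
simpler illustration of reading the same ring-shaped array, the following
heuristic (ours, with illustrative thresholds) flags a flick when the touch
peak travels quickly along the ring.

    # Detect a flick as a fast-moving touch peak on a ring of IR sensors.
    import numpy as np

    NUM_SENSORS = 120  # IR sensors arrayed around the steering wheel

    def detect_flick(frames, min_travel=5, fs=60.0, max_time_s=0.25):
        """frames: (T, 120) array of IR readings. A flick is a touch peak that
        travels at least min_travel sensor positions within max_time_s."""
        peaks = np.argmax(frames, axis=1)        # touch position per frame
        window = int(max_time_s * fs)
        for t in range(len(peaks) - window):
            # Wrap-around distance on the ring between two peak positions.
            d = (peaks[t + window] - peaks[t]) % NUM_SENSORS
            travel = min(d, NUM_SENSORS - d)
            if travel >= min_travel:
                return True
        return False

    # Synthetic example: a touch peak sweeping across sensors 40 -> 54.
    T = 30
    frames = np.zeros((T, NUM_SENSORS))
    frames[np.arange(T), 40 + np.arange(T) // 2] = 1.0
    print(detect_flick(frames))  # -> True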
[23]
SpiderVision: extending the human field of view for augmented awareness
8. Super Perception
/
Fan, Kevin
/
Huber, Jochen
/
Nanayakkara, Suranga
/
Inami, Masahiko
Proceedings of the 2014 Augmented Human International Conference
2014-03-07
p.47
© Copyright 2014 ACM
Summary: We present SpiderVision, a wearable device that extends the human field of
view to augment a user's awareness of things happening behind their back.
SpiderVision leverages a front and a back camera to enable users to focus on
the front view while employing intelligent interface techniques to cue them
about activity in the back view. The extended back view is blended in only when
the scene captured by the back camera is analyzed to be dynamically changing,
e.g., due to object movement. We explore factors that affect the blended
extension, such as view abstraction and blending area. We contribute results of
a user study that explores 1) whether users can perceive the extended field of
view effectively, and 2) whether the extended field of view is considered a
distraction. Quantitative analysis of the users' performance and qualitative
observations of how users perceive the visual augmentation are described.
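A minimal sketch of the blending idea, with illustrative parameters: frame
differencing on the back camera decides whether the back view is dynamic
enough to blend into the front view.

    # Blend the back view into the front view only when the back scene moves.
    import cv2
    import numpy as np

    def blend_views(front, back, prev_back, motion_thresh=8.0, alpha=0.4):
        """Overlay the back view onto the front view only when it is dynamic."""
        diff = cv2.absdiff(cv2.cvtColor(back, cv2.COLOR_BGR2GRAY),
                           cv2.cvtColor(prev_back, cv2.COLOR_BGR2GRAY))
        if float(np.mean(diff)) < motion_thresh:
            return front                   # static back scene: front view only
        back_small = cv2.resize(back, (front.shape[1], front.shape[0]))
        return cv2.addWeighted(front, 1 - alpha, back_small, alpha, 0)

    # Synthetic example: identical back frames -> front view returned unchanged.
    front = np.zeros((120, 160, 3), np.uint8)
    back = np.full((60, 80, 3), 128, np.uint8)
    print((blend_views(front, back, back.copy()) == front).all())  # -> True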
[24]
Move-it sticky notes providing active physical feedback through motion
In focus or not?
/
Probst, Kathrin
/
Haller, Michael
/
Yasu, Kentaro
/
Sugimoto, Maki
/
Inami, Masahiko
Proceedings of the 2014 International Conference on Tangible and Embedded
Interaction
2014-02-16
p.29-36
© Copyright 2014 ACM
Summary: Post-it notes are a popular paper format that serves a multitude of purposes
in our daily lives, as they provide excellent affordances for quickly capturing
informal notes and for location-sensitive reminding. In this paper, we present
Move-it, a system that combines Post-it notes with a technologically enhanced
paperclip to demonstrate how a passive piece of paper can be turned into an
"active" medium that conveys information through motion. We present two
application examples that investigate the applicability of Move-it sticky notes
for ambient information awareness. In comparison to existing notification
systems, experimental results show that they reduce the negative effects of
interruptions on emotional state and performance, and provide unique
affordances by combining the advantages of physical and digital systems into a
novel active-paper interface.
[25]
Cuddly: Enchant Your Soft Objects with a Mobile Phone
Long Presentations
/
Low, Suzanne
/
Sugiura, Yuta
/
Fan, Kevin
/
Inami, Masahiko
Proceedings of the 2013 International Conference on Advances in Computer
Entertainment
2013-11-12
p.138-151
Keywords: Soft objects; mobile phone based computing; camera-based measurement; flash
light
© Copyright 2013 Springer International Publishing
Summary: Cuddly is a mobile phone application that enchants soft objects to
enhance human interaction with them. Cuddly utilizes the mobile phone's camera
and flash light (LED) to detect the surrounding brightness value captured by
the camera. When one integrates Cuddly with a soft object and compresses the
object, the brightness level captured by the camera decreases. Utilizing the
measured change in brightness, we can implement diverse entertainment
applications using the various functions embedded in a mobile phone, such as
animation, sound, and Bluetooth communication. For example, we created a boxing
game by connecting two devices through Bluetooth, with one device inserted into
a soft object and the other acting as a screen.