[1]
TableHop: An Actuated Fabric Display Using Transparent Electrodes
Shape Changing Displays
/
Sahoo, Deepak Ranjan
/
Hornbæk, Kasper
/
Subramanian, Sriram
Proceedings of the ACM CHI'16 Conference on Human Factors in Computing
Systems
2016-05-07
v.1
p.3767-3780
© Copyright 2016 ACM
Summary: We present TableHop, a tabletop display that provides controlled
self-actuated deformation and vibro-tactile feedback to an elastic fabric
surface while retaining the ability for high-resolution visual projection. The
surface is made of a highly stretchable pure spandex fabric that is
electrostatically actuated using electrodes mounted on its top or underside. It
uses transparent indium tin oxide electrodes and high-voltage modulation to
create controlled surface deformations. Our setup actuates pixels and creates
deformations in the fabric of up to +/- 5 mm. Since the electrodes are
transparent, the fabric surface functions as a diffuser for rear-projected
visual images and avoids occlusion by users or actuators. Users can touch and
interact with the fabric to experience expressive interactions, as with any
fabric-based shape-changing interface. By using frequency modulation in the
high-voltage circuit, it can also create localized tactile sensations on the
user's fingertip when touching the surface. We provide simulation and
experimental results for the shape of the deformation and frequency of the
vibration of the surface. These results can be used to build prototypes of
different sizes and form-factors. We present a working prototype of TableHop
that has a 30×40 cm² surface area and uses a grid of 3×3 transparent electrodes.
It uses a maximum of 9.46 mW and can create tactile vibrations of up to 20 Hz.
TableHop can be scaled to make large interactive surfaces and integrated with
other objects and devices. TableHop will improve the user interaction
experience on 2.5D deformable displays.
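As a back-of-the-envelope illustration of the electrostatic actuation principle summarized above (a generic parallel-plate estimate, not the authors' model; the voltage and gap values are hypothetical):

```python
# Parallel-plate estimate of the electrostatic pressure pulling a
# conductive fabric toward a charged electrode. Illustrative only;
# the voltage and gap values are hypothetical, not from the paper.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def electrostatic_pressure(voltage_v, gap_m):
    """Attractive pressure P = eps0 * V^2 / (2 * d^2), in pascals."""
    return EPS0 * voltage_v ** 2 / (2 * gap_m ** 2)

# A few kilovolts across a few-millimetre gap yields pressures on the
# order of 10 Pa -- enough to deflect a light, pre-stretched membrane.
p = electrostatic_pressure(5e3, 3e-3)
```

This is the regime in which millimetre-scale deflections of a stretchable membrane become achievable at very low power, since electrostatic actuation draws almost no current.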
[2]
Mid-Air Haptics and Displays: Systems for Un-instrumented Mid-air
Interactions
Workshop Summaries
/
Subramanian, Sriram
/
Seah, Sue Ann
/
Shinoda, Hiroyuki
/
Hoggan, Eve
/
Corenthy, Loic
Extended Abstracts of the ACM CHI'16 Conference on Human Factors in
Computing Systems
2016-05-07
v.2
p.3446-3452
© Copyright 2016 ACM
Summary: A fundamental shift is underway in how we interact with our computers and
devices. Touchless sensing products are being launched across consumer
electronics, home, automotive and healthcare industries. Recent advances in
haptics and display technologies have meant that interaction designers can also
provide users with tactile feedback in mid-air AND display visual elements
wherever the user needs them, without instrumenting the user in any way. The
overarching goal of this workshop is to bring together a group of researchers
spanning across multiple facets of exploring interactions with mid-air systems
to discuss, explore, and outline research challenges for this novel area. We
are especially interested in exploring how novel display and haptic technology
provide users with more compelling and immersive experiences without
instrumenting them in any way.
[3]
Enhancing Interactivity with Transcranial Direct Current Stimulation
Posters
/
Wan, Bo
/
Vi, Chi
/
Subramanian, Sriram
/
Plasencia, Diego Martinez
Companion Proceedings of the 2016 International Conference on Intelligent
User Interfaces
2016-03-07
v.2
p.41-44
© Copyright 2016 ACM
Summary: Transcranial Direct Current Stimulation (tDCS) is a non-invasive type of
neural stimulation known for modulation of cortical excitability leading to
positive effects on working memory and attention. The availability of low-cost
and consumer grade tDCS devices has democratized access to such technology
allowing us to explore its applicability to HCI. We review the relevant
literature and identify potential avenues for exploration in the context of
enhancing interactivity and the use of tDCS in HCI.
[4]
Dynamir: Optical Manipulations Using Dynamic Mirror Brushes
Session 3: Fingers, Handprints and Dynamic Mirrors
/
Berthaut, Florent
/
Sahoo, Deepak Ranjan
/
McIntosh, Jess
/
Das, Diptesh
/
Subramanian, Sriram
Proceedings of the 2015 ACM International Conference on Interactive
Tabletops and Surfaces
2015-11-15
p.55-58
© Copyright 2015 ACM
Summary: Mirror surfaces are part of our everyday life. Among them, curved mirrors
are used to enhance our perception of the physical space, e.g., convex mirrors
are used to increase our field of view in the street, and concave mirrors are
used to zoom in on parts of our face in the bathroom. In this paper, we
investigate the opportunities opened when these mirrors are made dynamic, so
that their effects can be modulated to adapt to the environment or to a user's
actions. We introduce the concept of dynamic mirror brushes that can be moved
around a mirror surface. We describe how these brushes can be used for various
optical manipulations of the physical space. We also present an implementation
using a flexible mirror sheet and three scenarios that demonstrate some of the
interaction opportunities.
[5]
Ghost Touch: Turning Surfaces into Interactive Tangible Canvases with
Focused Ultrasound
Session 6: Artistic Sand & Biking
/
Marzo, Asier
/
McGeehan, Richard
/
McIntosh, Jess
/
Seah, Sue Ann
/
Subramanian, Sriram
Proceedings of the 2015 ACM International Conference on Interactive
Tabletops and Surfaces
2015-11-15
p.137-140
© Copyright 2015 ACM
Summary: Digital art technologies take advantage of the input, output and processing
capabilities of modern computers. However, full digital systems lack the
tangibility and expressiveness of their traditional counterparts. We present
Ghost Touch, a system that remotely actuates the artistic medium with an
ultrasound phased array. Ghost Touch transforms a normal surface into an
interactive tangible canvas in which the users and the system collaborate in
real-time to produce an artistic piece. Ghost Touch is able to detect traces
and reproduce them, therefore enabling common digital operations such as copy,
paste, save or load whilst maintaining the tangibility of the traditional
medium. Ghost Touch has enhanced expressivity since it uses a novel algorithm
to generate multiple ultrasound focal points with specific intensity levels.
Different artistic effects can be performed on sand, milk & ink, or liquid
soap.
[6]
Lifespan-based Partitioning of Index Structures for Time-travel Text Search
Session 1D: Text Processing
/
Nandi, Animesh
/
Subramanian, Suriya
/
Lakshminarasimhan, Sriram
/
Deshpande, Prasad M.
/
Raghavan, Sriram
Proceedings of the 2015 ACM Conference on Information and Knowledge
Management
2015-10-19
p.123-132
© Copyright 2015 ACM
Summary: Time-travel text search over a temporally evolving document collection is
useful in various applications. Supporting a wide range of query classes
demanded by these applications requires different index layouts optimized for
their respective query access patterns. The problem we tackle is how to
efficiently handle different query classes using the same index layout.
Our approach is to use list intersections on single-attribute indexes of
keywords and temporal attributes. Although joint predicate evaluation on
single-attribute indexes is inefficient in general, we show that partitioning
the index based on version lifespans coupled with exploiting the
transaction-time ordering of record-identifiers, can significantly reduce the
cost of list intersections.
We empirically evaluate different index partitioning alternatives on top of
open-source Lucene, and show that our approach is the only technique that can
simultaneously support a wide range of query classes efficiently, have high
indexing throughput in a real-time ingestion setting, and also have negligible
extra storage costs.
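The list-intersection strategy summarized above can be sketched with a minimal merge-based intersection of two sorted posting lists of record identifiers (a generic illustration of the building block, not the paper's partitioned layout):

```python
def intersect_postings(a, b):
    """Merge-intersect two ascending lists of record identifiers.

    Runs in O(len(a) + len(b)); a shared ordering of record-ids across
    single-attribute indexes is what makes joint predicate evaluation
    (keyword AND temporal range) cheap.
    """
    i = j = 0
    out = []
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return out

# keyword posting list ∩ temporal-attribute posting list
hits = intersect_postings([2, 5, 9, 14], [5, 9, 21])  # → [5, 9]
```

Lifespan-based partitioning shortens the lists each such intersection must scan, which is where the reported efficiency gain comes from.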
[7]
Continuous Tactile Feedback for Motor-Imagery Based Brain-Computer
Interaction in a Multitasking Context
Brain-Computer Interaction
/
Jeunet, Camille
/
Vi, Chi
/
Spelmezan, Daniel
/
N'Kaoua, Bernard
/
Lotte, Fabien
/
Subramanian, Sriram
Proceedings of IFIP INTERACT'15: Human-Computer Interaction, Part I
2015-09-14
v.1
p.488-505
Keywords: Brain-Computer interaction; Tactile feedback; Multitasking
© Copyright 2015 Springer International Publishing Switzerland
Summary: Motor-Imagery based Brain Computer Interfaces (MI-BCIs) allow users to
interact with computers by imagining limb movements. MI-BCIs are very promising
for a wide range of applications as they offer a new and non-time locked
modality of control. However, most MI-BCIs involve visual feedback to inform
the user about the system's decisions, which makes them difficult to use when
integrated with visual interactive tasks. This paper presents our design and
evaluation of a tactile feedback glove for MI-BCIs, which provides a
continuously updated tactile feedback. We first determined the best parameters
for this tactile feedback and then tested it in a multitasking environment: at
the same time users were performing the MI tasks, they were asked to count
distracters. Our results suggest that, as compared to an equivalent visual
feedback, the use of tactile feedback leads to a higher recognition accuracy of
the MI-BCI tasks and fewer errors in counting distracters.
[8]
Need for Touch in Human Space Exploration: Towards the Design of a Morphing
Haptic Glove -- ExoSkin
Tangible and Tactile Interaction
/
Seah, Sue Ann
/
Obrist, Marianna
/
Roudaut, Anne
/
Subramanian, Sriram
Proceedings of IFIP INTERACT'15: Human-Computer Interaction, Part IV
2015-09-14
v.4
p.18-36
Keywords: Space; Touch; Haptic feedback; Haptic glove; User experience;
Extra-vehicular activities; Haptic jamming; Field study; Technology probes
© Copyright 2015 Springer International Publishing Switzerland
Summary: The spacesuit, particularly the spacesuit glove, creates a barrier between
astronauts and their environment. Motivated by the vision of facilitating
full-body immersion for effortless space exploration, it is necessary to
understand the sensory needs of astronauts during extra-vehicular activities
(EVAs). In this paper, we present the outcomes from a two-week field study
performed at the Mars Desert Research Station, a facility where crews carry out
Mars-simulated missions. We used a combination of methods (a haptic logbook,
technology probes, and interviews) to investigate user needs for haptic
feedback in EVAs in order to inform the design of a haptic glove. Our results
contradict the common belief that a haptic technology should always convey as
much information as possible; rather, it should offer a controllable transfer.
Based on these findings, we identified two main design requirements to enhance
haptic feedback through the glove: (i) transfer of the shape and pressure
features of haptic information and (ii) control of the amount of haptic
information. We present the implementation of these design requirements in the
form of the concept and first prototype of ExoSkin. ExoSkin is a morphing
haptic feedback layer that augments spacesuit gloves by controlling the
transfer of haptic information from the outside world onto the astronauts'
skin.
[9]
Control of Non-Solid Diffusers by Electrostatic Charging
Non-Rigid Interaction Surfaces
/
Sahoo, Deepak Ranjan
/
Plasencia, Diego Martinez
/
Subramanian, Sriram
Proceedings of the ACM CHI'15 Conference on Human Factors in Computing
Systems
2015-04-18
v.1
p.11-14
© Copyright 2015 ACM
Summary: The form factors of displays using fog or water surfaces are limited by our
ability to control the non-solid substances used as the diffuser. We propose a
charging technique for polar aerosols (e.g., mist or fog) that allows control
of the shape and trajectory of such non-solid diffusers using electric fields.
We report experiments that allowed us to design a charging mechanism that
produces charged fog aerosols with homogeneous electrical mobility. We
illustrate our idea by demonstrating how electric fields can be used to control
the shape of a fog display and the trajectory of a bubble display.
[10]
LeviPath: Modular Acoustic Levitation for 3D Path Visualisations
Interaction in 3D Space
/
Omirou, Themis
/
Marzo, Asier
/
Seah, Sue Ann
/
Subramanian, Sriram
Proceedings of the ACM CHI'15 Conference on Human Factors in Computing
Systems
2015-04-18
v.1
p.309-312
© Copyright 2015 ACM
Summary: LeviPath is a modular system to levitate objects across 3D paths. It
consists of two opposed arrays of transducers that create a standing wave
capable of suspending objects in mid-air. To control the standing wave, the
system employs a novel algorithm based on combining basic patterns of movement.
Our approach allows the control of multiple beads simultaneously along
different 3D paths. Due to the patterns and the use of only two opposed arrays,
the system is modular and can scale its interaction space by joining several
LeviPaths. In this paper, we describe the hardware architecture, the basic
patterns of movement and how to combine them to produce 3D path visualisations.
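The physics behind the standing wave can be sketched as follows: beads are trapped near pressure nodes spaced half a wavelength apart. The snippet below assumes 40 kHz ultrasound and a 20 cm array separation, typical values for such levitators but not necessarily LeviPath's exact parameters.

```python
# Pressure-node positions of a standing wave between two opposed
# transducer arrays. 40 kHz and a 20 cm separation are assumed values
# for illustration, not the paper's specification.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def node_positions(freq_hz, separation_m):
    """Return node positions (m): spaced lambda/2 apart, the first one
    a quarter wavelength from the emitting array."""
    half_wavelength = SPEED_OF_SOUND / freq_hz / 2
    pos = half_wavelength / 2
    nodes = []
    while pos < separation_m:
        nodes.append(pos)
        pos += half_wavelength
    return nodes

nodes = node_positions(40_000, 0.20)  # dozens of trap positions in 20 cm
```

Moving a bead along a path then amounts to shifting the phase of the arrays so that its trapping node translates smoothly through space.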
[11]
Emotions Mediated Through Mid-Air Haptics
Feeling & Communicating Emotions
/
Obrist, Marianna
/
Subramanian, Sriram
/
Gatti, Elia
/
Long, Benjamin
/
Carter, Thomas
Proceedings of the ACM CHI'15 Conference on Human Factors in Computing
Systems
2015-04-18
v.1
p.2053-2062
© Copyright 2015 ACM
Summary: Touch is a powerful vehicle for communication between humans. The way we
touch (how) embraces and mediates certain emotions such as anger, joy, fear, or
love. While this phenomenon is well explored for human interaction, HCI
research is only starting to uncover the fine granularity of sensory
stimulation and responses in relation to certain emotions. Within this paper we
present the findings from a study exploring the communication of emotions
through a haptic system that uses tactile stimulation in mid-air. Here, haptic
descriptions for specific emotions (e.g., happy, sad, excited, afraid) were
created by one group of users to then be reviewed and validated by two other
groups of users. We demonstrate the non-arbitrary mapping between emotions and
haptic descriptions across three groups. This points to the huge potential for
mediating emotions through mid-air haptics. We discuss specific design
implications based on the spatial, directional, and haptic parameters of the
created haptic descriptions and illustrate their design potential for HCI based
on two design ideas.
[12]
Opportunities and Challenges for Data Physicalization
Natural User Interfaces for InfoVis
/
Jansen, Yvonne
/
Dragicevic, Pierre
/
Isenberg, Petra
/
Alexander, Jason
/
Karnik, Abhijit
/
Kildal, Johan
/
Subramanian, Sriram
/
Hornbæk, Kasper
Proceedings of the ACM CHI'15 Conference on Human Factors in Computing
Systems
2015-04-18
v.1
p.3227-3236
© Copyright 2015 ACM
Summary: Physical representations of data have existed for thousands of years. Yet it
is now that advances in digital fabrication, actuated tangible interfaces, and
shape-changing displays are spurring an emerging area of research that we call
Data Physicalization. It aims to help people explore, understand, and
communicate data using computer-supported physical data representations. We
call these representations physicalizations, analogously to visualizations --
their purely visual counterpart. In this article, we go beyond the focused
research questions addressed so far by delineating the research area,
synthesizing its open challenges and laying out a research agenda.
[13]
Marionette: a Multi-Finger Tilt Feedback Device for Curvatures and Haptic
Images Perception
WIP Theme: Gesture and Multimodal
/
Krusteva, Diana
/
Sahoo, Deepak
/
Marzo, Asier
/
Subramanian, Sriram
/
Coyle, David
Extended Abstracts of the ACM CHI'15 Conference on Human Factors in
Computing Systems
2015-04-18
v.2
p.1229-1234
© Copyright 2015 ACM
Summary: Marionette is a haptic device designed to explore touch perception limits
between real and device-induced shapes. Its novelty resides in the support for
2D exploration over a flat surface and multi-finger capabilities. Marionette is
able to apply inclination to four fingers with two degrees of freedom while the
user moves the device as if it were a mouse. The device is aimed at enabling a
new set of haptic user studies. Preliminary results suggest that the limit of
curvature perception in 2D curves is mainly determined by the inclination
information while touching with both one and four fingers. Additionally,
Marionette supports haptic images such as maps, time-changing functions, and
haptically enhanced telepresence.
[14]
Spending Time with Money: From Shared Values to Social Connectivity
Social Dynamics and My Phone
/
Ferreira, Jennifer
/
Perry, Mark
/
Subramanian, Sriram
Proceedings of ACM CSCW 2015 Conference on Computer-Supported Cooperative
Work and Social Computing
2015-02-28
v.1
p.1222-1234
© Copyright 2015 ACM
Summary: There is a rapidly growing momentum driving the development of mobile
payment systems for co-present interactions, using near-field communication on
smartphones and contactless payment systems. The design (and marketing)
imperative for this is to enable faster, simpler, effortless and secure
transactions, yet our evidence shows that this focus on reducing transactional
friction may ignore other important features around making payments. We draw
from empirical data to consider user interactions around financial exchanges
made on mobile phones. Our findings examine how the practices around making
payments support people in making connections, to other people, to their
communities, to the places they move through, to their environment, and to what
they consume. While these social and community bonds shape the kinds of
interactions that become possible, they also shape how users feel about, and
act on, the values that they hold with their co-users. We draw implications for
future payment systems that make use of community connections, build trust,
leverage transactional latency, and generate opportunities for rich social
interactions.
[15]
Through the combining glass
Augmented reality I
/
Plasencia, Diego Martinez
/
Berthaut, Florent
/
Karnik, Abhijit
/
Subramanian, Sriram
Proceedings of the 2014 ACM Symposium on User Interface Software and
Technology
2014-10-05
v.1
p.341-350
© Copyright 2014 ACM
Summary: Reflective optical combiners like beam splitters and two-way mirrors are
used in AR to overlap digital contents on the users' hands or bodies.
Augmentations are usually unidirectional, either reflecting virtual contents on
the user's body (Situated Augmented Reality) or augmenting the user's
reflections with digital contents (AR mirrors). But many other novel possibilities remain
unexplored. For example, users' hands, reflected inside a museum AR cabinet,
can allow visitors to interact with the artifacts exhibited. Projecting on the
user's hands as their reflection cuts through the objects can be used to reveal
objects' internals. Augmentations from both sides are blended by the combiner,
so they are consistently seen by any number of users, independently of their
location or, even, the side of the combiner through which they are looking.
This paper explores the potential of optical combiners to merge the space in
front and behind them. We present this design space, identify novel
augmentations/interaction opportunities and explore the design space using
three prototypes.
[16]
Identifying suitable projection parameters and display configurations for
mobile true-3D displays
3D
/
Serrano, Marcos
/
Hildebrandt, Dale
/
Subramanian, Sriram
/
Irani, Pourang
Proceedings of 2014 Conference on Human-Computer Interaction with Mobile
Devices and Services
2014-09-23
p.135-143
© Copyright 2014 ACM
Summary: We present a two-part exploration on mobile true-3D displays, i.e.
displaying volumetric 3D content in mid-air. We first identify and study the
parameters of a mobile true-3D projection, in terms of the projection's
distance to the phone, angle to the phone, display volume and position within
the display. We identify suitable parameters and constraints, which we propose
as requirements for developing mobile true-3D systems. We build on the first
outcomes to explore methods for coordinating the display configurations of the
mobile true-3D setup. We explore the resulting design space through two
applications: 3D map navigation and 3D interior design. We discuss the
implications of our results for the future design of mobile true-3D displays.
[17]
Portallax: bringing 3D displays capabilities to handhelds
3D
/
Plasencia, Diego Martinez
/
Karnik, Abhijit
/
Muñoz, Jonatan Martinez
/
Subramanian, Sriram
Proceedings of 2014 Conference on Human-Computer Interaction with Mobile
Devices and Services
2014-09-23
p.145-154
© Copyright 2014 ACM
Summary: We present Portallax, a clip-on technology to retrofit mobile devices with
3D display capabilities. Available technologies (e.g. Nintendo 3DS or LG
Optimus) and clip-on solutions (e.g. 3DeeSlide and Grilli3D) force users to
have a fixed head and device positions. This is contradictory to the nature of
a mobile scenario, and limits the usage of interaction techniques such as
tilting the device to control a game. Portallax uses an actuated parallax
barrier and face tracking to realign the barrier's position to the user's
position. This allows us to provide stereo, motion parallax and perspective
correction cues in 60 degrees in front of the device. Our optimized design of
the barrier minimizes colour distortion, maximizes resolution and produces
bigger view-zones, which support 81% of adults' interpupillary distances and
allow eye tracking to be implemented with the front camera. We present a reference
implementation, evaluate its key features and provide example applications
illustrating the potential of Portallax.
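The geometry of a parallax barrier, the optical element Portallax actuates, can be sketched with the textbook pitch relation below; the pixel pitch, viewing distance, and barrier gap are hypothetical values, not Portallax's actual dimensions.

```python
# Textbook parallax-barrier pitch for an autostereoscopic screen.
# All numeric values below are hypothetical, not Portallax's dimensions.
def barrier_pitch(pixel_pitch, viewing_dist, gap):
    """Barrier pitch b = 2 p d / (d + g): slightly under twice the pixel
    pitch, so that from viewing distance d each slit shows left-eye
    pixel columns to the left eye and right-eye columns to the right eye."""
    return 2 * pixel_pitch * viewing_dist / (viewing_dist + gap)

# e.g. ~78 µm pixels, 35 cm viewing distance, 0.7 mm barrier-to-screen gap
b = barrier_pitch(pixel_pitch=0.078e-3, viewing_dist=0.35, gap=0.7e-3)
```

Because the correct pitch and slit alignment depend on where the viewer's head is, actuating the barrier under face tracking (as Portallax does) keeps the view-zones centred on the moving user.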
[18]
Perception of ultrasonic haptic feedback on the hand: localisation and
apparent motion
Touch and stylus interaction
/
Wilson, Graham
/
Carter, Thomas
/
Subramanian, Sriram
/
Brewster, Stephen A.
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.1
p.1133-1142
© Copyright 2014 ACM
Summary: Ultrasonic haptic feedback is a promising means of providing tactile
sensations in mid-air without encumbering the user with an actuator. However,
controlled and rigorous HCI research is needed to understand the basic
characteristics of perception of this new feedback medium, and so how best to
utilise ultrasonic haptics in an interface. This paper describes two
experiments conducted into two fundamental aspects of ultrasonic haptic
perception: 1) localisation of a static point and 2) the perception of motion.
Understanding these would provide insight into 1) the spatial resolution of an
ultrasonic interface and 2) what forms of feedback give the most convincing
illusion of movement. Results show an average localisation error of 8.5 mm, with
higher error along the longitudinal axis. Convincing sensations of motion were
produced when travelling longer distances, using longer stimulus durations and
stimulating multiple points along the trajectory. Guidelines for feedback
design are given.
[19]
Is my phone alive?: a large-scale study of shape change in handheld devices
using videos
Shape-changing interfaces
/
Pedersen, Esben W.
/
Subramanian, Sriram
/
Hornbæk, Kasper
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.1
p.2579-2588
© Copyright 2014 ACM
Summary: Shape-changing handheld devices are emerging as research prototypes, but it
is unclear how users perceive them and which experiences they engender. The
little data we have on user experience is from single prototypes, only covering
a small part of the possibilities in shape change. We produce 51 videos of a
shape-changing handheld device by systematically varying seven parameters of
shape change. In a crowd-sourced study, 187 participants watched the videos and
described their experiences using rating scales and free text. We find
significant and large differences among parameters of shape change. Shapes that
have previously been used for notifications were rated the least urgent; the
degree of shape change was found to impact experience more than type of shape
change. The experience of shape change was surprisingly complex: hedonic
quality was inversely related to urgency, and some shapes were perceived as
ugly, yet useful. We discuss how to advance models of shape change and improve
research on the experience of shape change.
[20]
Changibles: analyzing and designing shape changing constructive assembly
Shape-changing interfaces
/
Roudaut, Anne
/
Reed, Rebecca
/
Hao, Tianbo
/
Subramanian, Sriram
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.1
p.2593-2596
© Copyright 2014 ACM
Summary: Advances in shape changing assemblies have been made in reconfiguration
algorithms, hardware designs and interaction techniques. However, no tools exist
for guiding designers in building those modular devices, especially for
choosing the shape of the units. The task becomes even more complex when the
units themselves can change their shapes to animate the entire assembly. In
this paper, we contribute the first analysis tool, which helps the designer
both choose the right subset of forms for the units and create an
assembly with maximum accuracy from the set of given objects. We introduce the
concept of Changibles that are interactive wireless units that can reshape
themselves and be attached together to create an animated assembly. We present
a use case to demonstrate the use of our tool, with an instantiation of six
Changibles that are used to construct a pulsing heart assembly.
[21]
Temporal, affective, and embodied characteristics of taste experiences: a
framework for design
Sensory experiences: smell and taste
/
Obrist, Marianna
/
Comber, Rob
/
Subramanian, Sriram
/
Piqueras-Fiszman, Betina
/
Velasco, Carlos
/
Spence, Charles
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.1
p.2853-2862
© Copyright 2014 ACM
Summary: We present rich descriptions of taste experience through an analysis of the
diachronic and synchronic experiences of each of the five basic taste
qualities: sweet, sour, salt, bitter, and umami. Our findings, based on a
combination of user experience evaluation techniques, highlight three main
themes: temporality, affective reactions, and embodiment. We present the taste
characteristics as a framework for design and discuss each taste in order to
elucidate the design qualities of individual taste experiences. These findings
add a semantic understanding of taste experiences, their temporality enhanced
through descriptions of the affective reactions and embodiment that the five
basic tastes elicit. These findings are discussed on the basis of established
psychological and behavioral phenomena, highlighting the potential for
taste-enhanced design.
[22]
SensaBubble: a chrono-sensory mid-air display of sight and smell
Sensory experiences: smell and taste
/
Seah, Sue Ann
/
Plasencia, Diego Martinez
/
Bennett, Peter D.
/
Karnik, Abhijit
/
Otrocol, Vlad Stefan
/
Knibbe, Jarrod
/
Cockburn, Andy
/
Subramanian, Sriram
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.1
p.2863-2872
© Copyright 2014 ACM
Summary: We present SensaBubble, a chrono-sensory mid-air display system that
generates scented bubbles to deliver information to the user via a number of
sensory modalities. The system reliably produces single bubbles of specific
sizes along a directed path. Each bubble produced by SensaBubble is filled with
fog containing a scent relevant to the notification. The chrono-sensory aspect
of SensaBubble means that information is presented both temporally and
multimodally. Temporal information is enabled through two forms of persistence:
firstly, a visual display projected onto the bubble which only endures until it
bursts; secondly, a scent released upon the bursting of the bubble slowly
disperses and leaves a longer-lasting perceptible trace of the event. We report
details of SensaBubble's design and implementation, as well as results of
technical and user evaluations. We then discuss and demonstrate how SensaBubble
can be adapted for use in a wide range of application contexts -- from an
ambient peripheral display for persistent alerts, to an engaging display for
gaming or education.
[23]
MisTable: reach-through personal screens for tabletops
Novel mobile displays and devices
/
Plasencia, Diego Martinez
/
Joyce, Edward
/
Subramanian, Sriram
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.1
p.3493-3502
© Copyright 2014 ACM
Summary: We present MisTable, a tabletop system that combines a conventional
horizontal interactive surface with personal screens between the user and the
tabletop surface. These personal screens, built using fog, are both see-through
and reach-through. Being see-through provides direct line of sight of the
personal screen and the elements behind it on the tabletop. Being reach-through
allows the user to switch from interacting with the personal screen to reaching
through it to interact with the tabletop or the space above it. The personal
screen allows a range of customisations and novel interactions such as
presenting 2D personal contents on the screen, 3D contents above the tabletop
or augmenting and relighting tangible objects differently for each user.
Moreover, having a personal screen for each user allows us to customize the view
of each of them according to their identity or preferences. Finally, the
personal screens preserve all well-established tabletop interaction techniques
like touch and tangible interactions. We explore the challenges in building
such a reach-through system through a proof-of-concept implementation and
discuss the possibilities afforded by the system.
[24]
Error related negativity in observing interactive tasks
Brain computer interfaces
/
Vi, Chi Thanh
/
Jamil, Izdihar
/
Coyle, David
/
Subramanian, Sriram
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.1
p.3787-3796
© Copyright 2014 ACM
Summary: Error Related Negativity (ERN) is triggered when a user either makes a mistake or
the application behaves differently from their expectation. It can also appear
while observing another user making a mistake. This paper investigates ERN in
collaborative settings where observing another user (the executer) perform a
task is typical and then explores its applicability to HCI. We first show that
ERN can be detected on signals captured by commodity EEG headsets like an
Emotiv headset when observing another person perform a typical multiple-choice
reaction time task. We then investigate the anticipation effects by detecting
ERN in the time interval when an executer is reaching towards an answer. We
show that we can detect this signal with both a clinical EEG device and with an
Emotiv headset. Our results show that online single trial detection is possible
using both headsets during tasks that are typical of collaborative interactive
applications. However, there is a trade-off between the detection speed and the
quality/prices of the headsets. Based on the results, we discuss and present
several HCI scenarios for use of ERN in observing tasks and collaborative
settings.
[25]
D-FLIP: Dynamic and Flexible Interactive PhotoShow
Short Presentations
/
Vi, Chi Thanh
/
Takashima, Kazuki
/
Yokoyama, Hitomi
/
Liu, Gengdai
/
Itoh, Yuichi
/
Subramanian, Sriram
/
Kitamura, Yoshifumi
Proceedings of the 2013 International Conference on Advances in Computer
Entertainment
2013-11-12
p.415-427
Keywords: Dynamic PhotoShow; Emergent Computing; EEG
© Copyright 2013 Springer International Publishing
Summary: We propose D-FLIP, a novel algorithm that dynamically displays a set of
digital photos using different principles for organizing them. A variety of
requirements for photo arrangements can be flexibly replaced or added through
the interaction and the results are continuously and dynamically displayed.
D-FLIP uses an approach based on combinatorial optimization and emergent
computation, where geometric parameters such as location, size, and photo angle
are considered to be functions of time; dynamically determined by local
relationships among adjacent photos at every time instance. As a consequence,
the global layout of all photos is automatically varied. We first present
examples of photograph behaviors that demonstrate the algorithm and then
investigate users' task engagement using EEG in the context of story
preparation and telling. The results show that D-FLIP requires less task
engagement and mental effort to support storytelling.