
Proceedings of the Sixth International Conference on Human-Computer Interaction

Fullname:Proceedings of the Sixth International Conference on Human-Computer Interaction
Editors:Yuichiro Anzai; Katsuhiko Ogawa; Hirohiko Mori
Location:Tokyo, Japan
Dates:1995-Jul-09 to 1995-Jul-14
Publisher:Elsevier Science
Standard No:ISBN 0-444-81795-6 ISSN 0921-2647; hcibib: HCII95
Pages:1179+1067
Links:www.elsevier.com
  1. HCII 1995-07-09 Volume I. Human and Future Computing
    1. I.1 Gestural Interface
    2. I.2 Visual Interface
    3. I.3 Multimedia Art and Entertainment
    4. I.4 User Interface for All -- Everybody, Everywhere, and Anytime
    5. I.5 Kansei Engineering
    6. I.6 Cognitive Science and HCI for Cooperation
    7. I.7 Multimodal Interface
    8. I.8 Nonverbal Communication
    9. I.9 Hypermedia / Hypertext
    10. I.10 Collaboration 1
    11. I.11 Collaboration 2
    12. I.12 Collaboration 3
    13. I.13 Virtual Reality 1
    14. I.14 Virtual Reality 2
    15. I.15 Virtual Reality 3
    16. I.16 Pen-Based Interface
    17. I.17 Three Dimensional Realtime Human-Computer Interfaces -- Virtual Reality

HCII 1995-07-09 Volume I. Human and Future Computing

I.1 Gestural Interface

Gesture Recognition for Manipulation in Artificial Realities BIBA 5-10
  Richard Watson; Paul O'Neill
In [1], we conclude that the flexible manipulation, by a human operator, of virtual objects in artificial realities is augmented by a gesture interface. Such an interface is described here; it can recognise static gestures, posture-based dynamic gestures, pose-based dynamic gestures, a "virtual control panel" involving posture and pose, and simple pose-based trajectory analysis of postures.
   The interface is based on a novel, application independent technique for recognising gestures. Gestures are represented by what we term approximate splines, sequences of critical points (local minima and maxima) of the motion of degrees of freedom of the hand and wrist. This scheme allows more flexibility in matching a gesture performance spatially and temporally and reduces the computation required, compared with a full spline curve fitting approach. Training the gesture set is accomplished through the interactive presentation of a small number of samples of each gesture.
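The abstract gives no implementation, but the "approximate spline" representation -- reducing each degree-of-freedom trajectory to its sequence of local minima and maxima -- can be sketched as follows (the function names, tolerance, and matching rule below are hypothetical illustrations, not the authors' code):

```python
def critical_points(samples):
    """Reduce a 1-D trajectory (one hand/wrist degree of freedom)
    to its critical points: (index, value) pairs at local minima
    and maxima, plus the two endpoints."""
    if len(samples) < 3:
        return list(enumerate(samples))
    points = [(0, samples[0])]
    for i in range(1, len(samples) - 1):
        prev, cur, nxt = samples[i - 1], samples[i], samples[i + 1]
        if (cur > prev and cur >= nxt) or (cur < prev and cur <= nxt):
            points.append((i, cur))
    points.append((len(samples) - 1, samples[-1]))
    return points

def matches(template, performance, value_tol=0.2):
    """Match a performed gesture against a template by comparing
    critical-point values only, which makes the comparison tolerant
    to spatial and temporal variation in the performance."""
    if len(template) != len(performance):
        return False
    return all(abs(tv - pv) <= value_tol
               for (_, tv), (_, pv) in zip(template, performance))
```

Matching on critical points rather than full resampled curves is what gives the spatial and temporal flexibility, and the reduced computation, that the abstract claims over full spline curve fitting.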
Hand Gesture Recognition Using Computer Vision Based on Model-Matching Method BIBA 11-16
  Nobutaka Shimada; Yoshiaki Shirai; Yoshinori Kuno
This paper proposes a method of 3-D model-based hand pose recognition from monocular silhouette image sequences. The principle of the method is to search for the hand pose which matches best to a silhouette in an image among possible candidates generated from the 3-D hand model. The number of candidates is reduced by considering the locations of features extracted from the silhouette, the prior probability of shape appearance, and the sensitivity of the shape change to the model parameter change. In addition, the multiple solutions are preserved to obtain the globally optimal solution over a long sequence.
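As a hedged illustration of the generate-and-test principle described above -- ranking candidate poses by silhouette fit weighted by prior probability of appearance, while retaining several hypotheses for later global optimization -- one might write (the feature encoding and the MAP-style score are assumptions, not the paper's method):

```python
import math

def rank_pose_candidates(candidates, observed, keep=4):
    """Generate-and-test pose search. Each candidate dict holds the
    silhouette features its 3-D hand-model pose predicts and a prior
    probability of that shape appearing. Candidates are ranked by
    squared feature error minus log-prior; several hypotheses are
    kept so later frames can resolve ambiguity."""
    def cost(c):
        err = sum((p - o) ** 2 for p, o in zip(c["features"], observed))
        return err - math.log(c["prior"])
    return sorted(candidates, key=cost)[:keep]
```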
A Human-Computer Dialogue Agent with Body Gestures, Hand Motion, and Speech BIBA 17-22
  Shan Lu; Shujun Yoshizaka; Toshiyuki Kamiya; Hitoshi Miyai
This paper presents an anthropomorphic dialogue agent system with human-like motion generation, which enables us to communicate with computer in nonverbal way. Some of motions of body, head, and hands are controlled based on the database extracted by analyzing the actual behavior of news announcers, movie stars, and puppets in conversational situations. As an experiment, the system integrated with voice input and output was implemented for CG Librarian, with the ability to guide and help users in the virtual library environment.
How Can Feelings be Conveyed in Network? -- Use of Gestural Animations as Nonverbal Information -- BIBA 23-28
  T. Inoue; K. Okada; Y. Matsushita
The purpose of this paper is to examine the possibility of making use of gestural animations to convey feelings in asynchronous network communication.
   Generally speaking, nonverbal communication is more important than verbal communication in face-to-face communication, because it conveys feelings more deeply than verbal communication does. However, nonverbal communication has not been used in traditional character-based network communication. With the development of multimedia networks, it has come to be thought that nonverbal communication can be valuable for network communication. There are many nonverbal behaviors: facial expression, eye contact, paralanguage, posture, gesture, and so on. Among these, gesture should be regarded as especially important. From this viewpoint, we have researched how to express one's feelings over a network [1][2].
   However, exactly what feelings can be conveyed by gestures on a display is not yet well known. Thus, an evaluation of feelings expressed by gesture has been done using animated cartoons. As a result, the feelings which can be conveyed by gesture were classified by Japanese subjects into five types: "introverted negative feelings like sadness", "positive feelings like happiness", "extroverted negative feelings like anger", "strained feelings like surprise or fear", and "indifferent feelings like boredom". In addition, an evaluation of electronic mails containing gestural CG animations has been done as an example of asynchronous network communication. As a result, the effect of using gestural animations has been revealed.
   In the following chapters, first, the importance of nonverbal communication and of expressing feelings through gesture is discussed. Secondly, the lack of nonverbal communication and the need to express feelings in network communication are discussed. Thirdly, an evaluation of feelings expressed through gesture is explained. Fourthly, the results are applied to an electronic mail with a CG animation and its evaluation is explained. Finally, conclusions are drawn.
Agent-Typed Multimodal Interface Using Speech, Pointing Gestures, and CG BIBA 29-34
  Haru Ando; Hideaki Kikuchi; Nobuo Hataoka
This paper proposes a sophisticated agent-typed user interface using speech, pointing gestures and CG technologies. An "Agent-typed Interior Design System" has been implemented as a prototype for evaluating the proposed agent-typed interface, which has speech and pointing gestures as input modalities, and in which the agent is realized by 3 dimensional CG (3-D CG) and speech guidance. In this paper, the details of system implementation and evaluation results, which clarified the effectiveness of the agent-typed interface, are described.

I.2 Visual Interface

Eye-Gaze Control of Multimedia Systems BIBA 37-42
  John Paulin Hansen; Allan W. Andersen; Peter Roed
Several non-intrusive systems for recordings of eye movements, gaze locations, pupil size and blink frequencies have been introduced in recent years. The application of this technology falls into two main categories: (a) active device control, and (b) passive recordings. Active device control means the voluntary use of gaze positioning to do selections. Traditionally, passive recordings of the user's ocular behavior have been made for analysis of, e.g., human computer interaction [1] or newspaper reading [2]. The first part of this paper describes a multimedia system that applies both types of recordings separately. The last part introduces a qualitatively new interaction principle, termed Interest and Emotion Sensitive media (IES) [3], that emerges when the two types are integrated. It is suggested that interest and emotion sensitive media hold great potential for information systems in general, e.g., applied to information stands and interactive television.
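The "active device control" described above -- voluntary gaze positioning used to make selections -- is commonly realized as dwell-time selection: a target fires once the gaze has rested on it long enough. A minimal sketch (the 500 ms threshold and the update API are illustrative assumptions, not taken from the paper):

```python
class DwellSelector:
    """Select a gaze target once fixation on it exceeds dwell_ms.
    Feed gaze samples in time order via update(); it returns the
    target id at the moment of selection, otherwise None."""
    def __init__(self, dwell_ms=500):
        self.dwell_ms = dwell_ms
        self.current = None
        self.start_ms = None

    def update(self, target, t_ms):
        if target != self.current:
            # Gaze moved to a new target (or off all targets): restart timer.
            self.current = target
            self.start_ms = t_ms
            return None
        if target is not None and t_ms - self.start_ms >= self.dwell_ms:
            self.start_ms = t_ms  # re-arm so the target does not re-fire every sample
            return target
        return None
```

The passive-recording side of the system would log the same gaze samples without the selection step, which is what allows the two modes to be integrated.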
Relationship between Eye Movement and Visual Cognition for Ambiguous Man/Girl Figures BIBA 43-48
  Yasufumi Kume; Nozomi Sato; Eiichi Bamba
In this paper, the cognitive mechanism in human beings is examined by measuring eye movements. A series extending Fisher's ambiguous figures is used as the visual stimulus. During the experimental session, the subject's eye movement is measured by an eye mark recorder, and eye fixation time, the number of eye fixations, and eye movement velocity are calculated. On the basis of these data, the internal cognitive mechanism in human beings is discussed.
The Role of Visuality: Interface Design of a CD-ROM as Cognitive Tool BIBA 49-54
  Gui Bonsiepe
When designing a CD-ROM for cognitive purposes, the challenge consists in finding a balance between two extremes: on the one side the rapidly tiring visual and acoustic overload of arcade games, and on the other side the visual atrophy of a command-line interface with monospace teletype font.
   The development of a CD-ROM makes evident the tension between language and image, between logocentrism and pictocentrism. The infatuation with the effects of MTV-style animation can easily tempt the designer to overlook the purpose that the selection, organisation and presentation of information should serve in the first place: effective communication -- a task that implies a cognitive effort by the designer.
An Interface for Sound Browsing in Video Handling Environment BIBA 55-60
  Kenichi Minami; Akihito Akutsu; Yoshinobu Tonomura; Hiroshi Hamada
New video handling techniques are indispensable for easier human-video interaction. In this paper, a new approach to video handling based on auditory information is proposed. Musical sound and voiced sound are detected by means of spectrum analysis of the sound track of the video data, and a sound browsing interface is developed. The presence of the detected sounds is indicated on the interface using appropriate images and coloured indicators. The interface provides an intuitive browsing environment in which users can randomly access the desired sound by selecting the images.
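The abstract does not specify the spectrum analysis; as a rough, hypothetical sketch of the idea, one could label a soundtrack frame by how much of its spectral energy falls in a speech-like band (the band limits, the 0.5 threshold, and the naive DFT below are illustrative only -- a real system would use an FFT):

```python
import math

def band_energy(frame, sample_rate, lo_hz, hi_hz):
    """Energy of a signal frame inside a frequency band, computed
    with a naive DFT (adequate for short frames in a sketch)."""
    n = len(frame)
    energy = 0.0
    for k in range(n // 2 + 1):
        freq = k * sample_rate / n
        if lo_hz <= freq < hi_hz:
            re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            energy += re * re + im * im
    return energy

def classify_frame(frame, sample_rate=8000):
    """Label a frame 'voice' if most of its energy lies in a
    speech-like band (roughly 100-1000 Hz here), else 'other'."""
    total = band_energy(frame, sample_rate, 0, sample_rate / 2) or 1e-12
    speech = band_energy(frame, sample_rate, 100, 1000)
    return "voice" if speech / total > 0.5 else "other"
```

Frame labels like these, aggregated over time, are the kind of information the browsing interface could render as coloured indicators along the video timeline.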
The Intelligibility of Time-Compressed Digital-Video Lectures BIBA 61-66
  Kevin A. Harrigan
In education, videotaped lectures are widely used. Time-compressed videotaped lectures are lectures played back in less time than the original recording. This paper first reviews the literature on time-compressed video. The SPECIAL System II is then described: a computer application that gives learners an iconic index into digitally-recorded videotaped lectures. The most distinctive aspect of the system is that the learner controls the percentage of time-compression. Finally, a formal experiment is described that determined the maximum percentage of time-compression useful to provide for the user.
TacTool: A Tactile Rapid Prototyping Tool for Visual Interfaces BIBA 67-74
  David V. Keyson; Hok Kong Tang
This paper describes the TacTool development tool and input device for designing and evaluating visual user interfaces with tactile feedback. TacTool is currently supported by the IPO trackball with force feedback in the x and y directions. The tool is designed to enable both the designer and the user to apply and create tactile fields in a user interface with no knowledge of computer programming. The user works with a set of tactile object fields called TouchCons and visual representations to build a graphical interface with tactile feedback. Direct manipulation of objects enables the creation of new complex fields which can be used for informational and navigational purposes. For example, the user can use a "path" object to draw a road which can subsequently be felt as a tactile channel, or a "hole" object which contains forces towards the centre of the hole. Tactile fields can be placed while an application is running; for example, a "tactile marker" can be placed to mark a significant point. A pulling force back towards this point can be always active or produced upon request. In addition to tactile feedback, TouchCons can provide active movement cues. For example, a "hint" field is used to create a tactile directional cue, a system-driven ball movement. Tactile information can thus be used to support a two-way communication channel between the system and the user.
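As an illustration of one of the tactile fields described above, the "hole" TouchCon -- forces directed towards the centre of the hole -- could be modelled as a simple bounded spring field (the gain and the circular geometry are assumptions; the abstract does not give the actual trackball control law):

```python
def hole_force(pos, center, radius, gain=1.0):
    """Force on the trackball at cursor position `pos` from a
    circular 'hole' field: inside the radius, a pull toward the
    centre proportional to the offset; outside, no force."""
    dx, dy = center[0] - pos[0], center[1] - pos[1]
    if dx * dx + dy * dy > radius * radius:
        return (0.0, 0.0)
    return (gain * dx, gain * dy)
```

The "tactile marker" described in the abstract would be the same field with an unbounded (or very large) radius, active always or on request.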

I.3 Multimedia Art and Entertainment

Network Neuro-Baby with Robotics Hand (An Automatic Facial Expression Synthesizer that Responds to Expressions of Feeling in the Human Voice and Handshake) BIBA 77-82
  Naoko Tosa; Hideki Hashimoto; Kaoru Sezaki; Yasuharu Kunii; Toyotoshi Yamaguchi; Kotaro Sabe; Ryosuke Nishino; Hiroshi Harashima; Fumio Harashima
Neuro-Baby (NB) is a totally new type of interactive performance system which responds to the human voice with a computer-generated baby face and sound effects. Emotion space model is employed to categorize the feelings of the speaker. To recognize the human voice we used a neural network which has been taught the relationship between a set of digitized wave patterns and the location of several emotion types in the emotion space. The facial expression is synthesized continuously according to the location which the neural network generates. The flexible design of NB is possible by changing the facial design, the layout in the emotion space, sensitivity to the transition of the feelings or the teaching pattern for the neural network.
   By networking NBs, we can enjoy nonverbal communication with each other. Such networked NBs will greatly help mutual understanding and the bridging of cultural gaps, as well as international cultural exchange. The first result will be demonstrated in 1995, by connecting two NBs between Japan and the USA. The networking issues concerning such a system are also addressed.
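The mapping from a location in the emotion space to a continuously synthesized facial expression is not detailed in the abstract; one hypothetical way to realize such continuous synthesis is inverse-distance blending of prototype expressions laid out in the space (the prototype layout and the facial parameters below are invented for illustration):

```python
def blend_expression(point, prototypes):
    """Synthesize a facial expression for a location in a 2-D
    emotion space by inverse-distance weighting of prototype
    expressions. `prototypes` maps an emotion name to a pair
    (position in emotion space, dict of facial parameters)."""
    weights = {}
    for name, (pos, params) in prototypes.items():
        d2 = (point[0] - pos[0]) ** 2 + (point[1] - pos[1]) ** 2
        if d2 == 0:
            return dict(params)  # exactly at a prototype
        weights[name] = 1.0 / d2
    total = sum(weights.values())
    keys = next(iter(prototypes.values()))[1].keys()
    return {k: sum(weights[n] * prototypes[n][1][k] for n in prototypes) / total
            for k in keys}
```

Because the output varies smoothly with the input point, the face changes continuously as the neural network moves the location in the emotion space, which matches the behaviour the abstract describes.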
On the User Adaptive Function of the Multimedia Learning System "The Technique of Gamelan -- Music and Dance" BIBA 83-88
  Tsutomu Oohashi; Emi Nishina; Norie Kawai; Yoshitaka Fuwamoto
This multimedia learning system can display a teacher's model dance on the screen at any speed and with as many repetitions as desired. In face-to-face instruction, by contrast, extreme runs of repetition may exhaust a teacher, and the style of the model performance may deviate. Although a teacher can change the speed of a performance to a certain degree, an extremely slow movement is very difficult, and a learner's requests for repeated performances or changes of speed may burden the teacher, or in some cases may simply be impractical. With our multimedia learning system, the learner does not have to worry about the teacher's response. By exploiting the advantage of a system that surpasses the limits of even the most patient teacher, while consistently striving to go beyond one's own limits, the learner may acquire technical skill exceeding the level normally reached through traditional face-to-face training. We believe this kind of advanced user adaptive function -- "a computer adapts to the human, rather than the human adapting to a computer" -- will cultivate a new stage of interaction between human beings and computers.
Multimedia Interactive Art: System Design and Artistic Concept of Real-Time Performance with Computer Graphics and Computer Music BIBA 89-94
  Yoichi Nagashima
This paper reports some applications of human-computer interaction in experimental performances of multimedia interactive arts. The human performer and the computer systems perform computer graphics and computer music interactively in real time. From a technical point of view, the paper investigates some special approaches: (1) the idea of "chaos" information processing techniques used in the musical part, (2) a real-time communication system for performance messages, (3) some original sensors and pattern detecting techniques, and (4) a distributed system using many computers for ease of development and arrangement.
Conception of Bioadaptable Children's Computer Toys BIBA 95-99
  V. V. Savchenko
The paper considers algorithms for a new generation of children's toys -- bioadaptable toys. Based on the analysis of physiological and/or psychophysiological parameters, the functional state or behavioural response of a child is interpreted at a specified moment, and depending on its value an adequate (corresponding to the set objective) control algorithm for the toy's "behaviour" is generated. The toy's "behaviour" is organized as semantic biofeedback, and hence can be used to correct the functional state of the child.
A Media Supported Playland and Interactions among Players BIBA 101-106
  Yu Shibuya; Hiroshi Tamura; Ken-ichi Okamoto
In this paper, we propose a new type of participatory entertainment using video images. In this entertainment, a player overlaps his or her self-image on a background picture and controls it spatially. A spatial control system for the self-image is examined. Using this system, we evaluate the difficulty of moving the self-image in the desired direction and placing it at the appropriate location. In spite of this difficulty, playing with overlapped images is enjoyable enough. Furthermore, watching such images with other people might be even more interesting: there are people who want to show their overlapped images, and people who want to watch other people's overlapped images. Interactions arise among these people through the overlapped images.
Virtual Performer: An Environment for Interactive Multimedia Art BIBA 107-112
  Haruhiro Katayose; Tsutomu Kanamori; Takashi Sakaguchi; Yoichi Nagashima; Kosuke Sato; Seiji Inokuchi
This paper gives an overview of the Virtual Performer, which is designed for composing and performing interactive multimedia arts. The Virtual Performer consists of sensory facilities, presentation facilities and authoring facilities. The sensory facilities consist of various transducer units; their plug-in style offers users a free and optimal set-up of sensors. The presentation facilities are MIDI, digital sound processing, CG generation and interactive video control. The authoring facilities offer two ways to design scenarios: one is to write recognition-action rules; the other is a normal sequencing technique. The latter is mainly used to switch scenes, and the former is mainly used to model the world of each scene. This paper shows some ongoing activities to produce multimedia art with the Virtual Performer.
Human-System Interaction Based on Active Objects BIBA 113-118
  Luis del Pino; Dag Belsnes
Virtual world applications pose a number of challenging problems concerning Human-System Interaction. In this type of system, the interface is a direct representation of the system state. Objects at the interface level have a direct correspondence with objects within the virtual world, which are inherently active. A new conceptual framework is required, and further research is needed to solve some of the problems (like multiple inheritance of behaviour) that arise when applying object orientation to active interface objects. Games and simulation systems represent an excellent testing ground in this sense.

I.4 User Interface for All -- Everybody, Everywhere, and Anytime

Human Information Technology for Living Oriented Innovation BIB 121-124
  Hiroshi Tamura
Human Interfaces for Individuals, Environment and Society BIBA 125-130
  Hirotada Ueda
The FRIEND21 (Future Personalized Information Environment Development), a Japanese government six-year project aimed at conducting research into human interfaces for the 21st century, came to an end on March 31, 1994. When FRIEND21 research first began, the phrase "media fusion" was still somewhat novel, and the concept that information, telecommunications, publishing, broadcasting, and various other media sectors would steadily be "fused" as the 21st century approached was still somewhat new. At the same time, cognitive psychology was very quickly gaining acceptance in human interface research, and people had begun to advocate the idea that cognitive engineering should be applied to system design focused on the user [1].
   Although in UIMS (User Interface Management System) most of the attention has been drawn to tool kits for window systems, another aspect of UIMS is the separation of two domains: function and presentation (metaphors). The merits of this separation become important from the perspective of the user rather than the system designer, because the human interface must change adaptively and dynamically along with the individual user, environment and society. Adaptive change has to be done through the dynamic combination of presentation and functions. It requires a framework that handles intention, tasks, context, polysemy and multiplicity in an integrated manner. We proposed a set of frameworks in our FRIEND21 project [2]: Metaware as the cognitive principles of systematic description and control for adaptive change, and the Agency Model as a mechanism for embodying the Metaware principle.
   With the aim of making it possible for everyone in the 21st century to be able to employ computers in their daily lives, FRIEND21 research focused on the development of basic technologies for software that would be commercially available five years after the completion of the project. The FRIEND21 project began with the slogan "systems that anyone can use anywhere anytime"; however, our research gradually developed a somewhat different objective: the creation of a computerized society imbued with sympathy and care. In other words, it would be a collaboration, or a coevolution, between humans and computers.
User Interfaces for Disabled and Elderly People: A European Perspective BIBA 131-136
  Pier Luigi Emiliani
Computer-based systems are widely diffused in tasks regarding access to and processing of information in all application environments, while their use in connection to telematics services is emerging. The increasing complexity of equipment, and of the applications accessible through them, has resulted in more in-depth attention to usability issues of the user interface. This is particularly important for disabled and elderly people, who in principle have the same needs for communication and access to information as do all other users, but different requirements and preferences for the functionalities of equipment, services and applications; particular attention to the man-machine interfaces is required, both at the level of media and interaction peripherals.
   This paper briefly reviews the current situation in European research and development activities regarding man-machine interfaces of computer systems and telecommunication terminals, services and applications that are accessible by people with disabilities; particular reference is made to specific projects supported by the Commission of the European Union which are carried out under the responsibility of the author.
Towards User Interfaces for All: Some Critical Issues BIBA 137-142
  C. Stephanidis
Mainstream research and development in the HCI field has mainly addressed the needs of the "average" able user [1]; only in recent years have some efforts been directed towards exploiting such technological outcomes, particularly in the Assistive Technology field. It is argued that these efforts have until now been carried out in a fragmented way, often following ad hoc procedures and addressing specific problems of specific users or user groups. The most common approach has been to adapt commercially available software products, or to develop special-purpose applications, in order to enable accessibility by a target user category. A representative example of adaptation-oriented approaches is the so-called screen reader, which enables partial accessibility of graphical User Interfaces by blind users. However, more recent approaches have demonstrated the technical feasibility of providing User Interface development systems which, at design time, take into consideration the access requirements of both able and disabled users [2], [3].
   The evident speed of technological progress in the HCI field necessitates a more holistic approach towards solving accessibility issues for people with disabilities, since the application of adaptation methods becomes inappropriate due to: (i) the high cost of producing customised, case-specific solutions for the different application domains, interaction technologies/environments, and target user groups; and (ii) technical problems, since for the vast majority of emerging interaction technologies the application of adaptation-oriented approaches may become practically impossible or meaningless (for instance, consider the problem of automatically reproducing a Virtual Reality based information cyberspace in a non-visual form).
   It is argued that, by following a proactive approach (i.e. addressing accessibility issues at design time), it is possible to ensure that forthcoming technologies are made accessible to all. Moreover, it is claimed that the population at large stands to gain additional benefits from such proactive considerations of emerging technological advancements (i.e. there is added value).
Access Considerations of Human-Computer Interfaces for People with Physical Disabilities BIBA 143-148
  F. Shein
Universal accessible design of human-computer interfaces will truly benefit not just those persons with obvious disabilities, but all users who for one reason or another have difficulty using current computer technologies. A key facet of universal design will be flexibility that allows tailoring to the specific abilities of the end-user. No 'one' interface will be right for everyone. This paper focuses on issues related to individuals with physical disabilities and presents an overview of the author's research in this area over the past decade.
Navigating the Graphical User Interface (GUI) by the Visually Impaired Computer User BIBA 149-154
  Arthur I. Karshmer
The use of modern computers by the visually handicapped has become more difficult over the past few years. In earlier systems the user interface was a simple character based environment. In those systems, simple devices like screen readers, braille output and speech synthesizers were effective. Current systems now run Graphical User Interfaces (GUIs) which have rendered these simple aids almost useless. What has become an enabling technology for the sighted computer user is rapidly becoming a disabling technology for the visually impaired.
Supporting User Interfaces for All Through User Modeling BIBA 155-157
  A. Kobsa
For adapting computer systems to the needs of different users, so-called "user models" are most often needed [1], [2], [3], [4]. User models are collections of information and assumptions about individual users (as well as user groups) which are needed in the adaptation process. This holds true for the "manual" adaptation of the interface by the system developer according to the requirements of specific user groups or individual users; it is even more necessary if a system is supposed to automatically adapt to the requirements of the current user [5].

I.5 Kansei Engineering

Hybrid Kansei Engineering System and Design Support BIBA 161-166
  Yukihiro Matsubara; Mitsuo Nagamachi
Kansei Engineering is defined as "translating technology of a consumer's feeling and image for a product into design elements" (Nagamachi, 1989). There are two types of Kansei Engineering System (KES): a consumer decision support system called the Forward KES, and a designer support system called the Backward KES. A combined, computerized system of the Forward KES and Backward KES would be a powerful supporting tool for both kinds of users. This paper introduces the structure of the combined system, and proposes the Hybrid KES as a new general framework for Kansei Engineering Systems.
Neural Networks Kansei Expert System for Wrist Watch Design BIBA 167-172
  Shigekazu Ishihara; Mitsuo Nagamachi; Keiko Ishihara
In Kansei engineering, several multivariate analyses are used for analyzing human feelings and building rules. Principal component analysis is used for analyzing semantic structure. Although these methods are reliable, they consume time and computing resources and require statistical expertise of the user. In this paper, we introduce an automatic semantic structure analyzer and Kansei expert system builder using self-organizing neural networks, ART1.5-SSS and PCAnet. ART1.5-SSS is our modified version of ART1.5, a variant of the Adaptive Resonance Theory neural network. It is used as a stable non-hierarchical cluster analyzer and feature extractor, even with small sample sizes. PCAnet, based on Sanger (1989), performs principal component analysis. These networks enable quick and automatic rule building in Kansei engineering expert systems.
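Sanger (1989) refers to the Generalized Hebbian Algorithm, which learns principal components with a neural network. A minimal sketch of one learning step is below (the authors' PCAnet details are not given in the abstract, so this is the textbook rule only):

```python
def gha_step(W, x, lr=0.01):
    """One step of Sanger's Generalized Hebbian Algorithm.
    W[i] is the weight vector of output unit i; over many centred
    input samples, W[i] converges toward the i-th principal
    component. Update rule:
        dW[i][j] = lr * y[i] * (x[j] - sum_{k<=i} y[k] * W[k][j])."""
    y = [sum(wj * xj for wj, xj in zip(w, x)) for w in W]
    old = [list(w) for w in W]  # update every row from the same snapshot
    for i in range(len(W)):
        for j in range(len(x)):
            recon = sum(y[k] * old[k][j] for k in range(i + 1))
            W[i][j] += lr * y[i] * (x[j] - recon)
    return W
```

Subtracting the reconstruction of earlier units is what makes successive rows converge to successive principal components, which is how an online network like PCAnet can replace batch principal component analysis.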
A Study of Image Recognition on Kansei Engineering BIBA 173-178
  T. Jindo; M. Nagamachi; Y. Matsubara
Kansei engineering is a discipline that concerns the creation of products that convey a certain desired image to customers, based on research into the relationships between the perceived impression of a product and its physical properties such as color or shape. This paper presents the results of a study concerning the classification of the features of products which tend to incorporate specialized or subjective elements. An attempt was made in this study to use image recognition technology to judge and classify the physical properties, i.e., design elements, of the automotive steering wheel automatically in order to eliminate any subjective aspects.
An Automatic Experimental System for Ergonomic Comfort BIBA 179-184
  K. Nishikawa; M. Nagamachi
To evaluate ergonomic comfort, we constructed a climate chamber equipped with a computer-assisted system for measuring physiological responses and psychological effects.
   This climate chamber is necessary to study the effect of thermal environments on the occupants. The condition of the chamber is determined by four parameters: air temperature, air velocity, relative humidity, and wall radiation. We can control these parameters remotely from a control room using a computer. The general method for evaluating the effects of the thermal environment is to measure electroencephalograms, electrocardiograms, and subjects' votes. These data can be acquired and analyzed in the control room for automatic control and analysis using another computer with original programs.
Kitchen Planning System using Kansei VR BIBA 185-190
  N. Enomoto; J. Nomura; K. Sawada; R. Imamura; M. Nagamachi
Using Virtual Reality (VR) technology, Matsushita Electric Works, Ltd. has been developing several application systems for industrial use since 1990, and since 1991 we have used a VR kitchen system to assist our customers in planning kitchens with our kitchen products. This system has been used by many customers and has caused the sales of our kitchen products to soar. As the next version of our VR kitchen system, the Kansei VR system is being developed by employing Kansei Engineering [Nagamachi, 1986]. This engineering strategy takes the consumers' qualitative preferences for kitchen layout, design, and decoration and produces an actual kitchen layout, complete with materials and decoration. This enables the customers to form a concrete image of the kitchens which will be built, and also enables them to see and touch the kitchen components in the virtual space. In order to take the consumers' idea of an ideal kitchen layout and convert it to an actual three-dimensional layout, we conducted investigations with our customers. As Kansei information, we used adjectives which express the atmosphere of the kitchen and the consumers' lifestyle. This paper presents a detailed overview of the Kansei VR system and the results of the Kansei investigations.
A Study of Kansei Rule Generation using Genetic Algorithm BIBA 191-196
  T. Tsuchiya; Y. Matsubara; M. Nagamachi
A Kansei Engineering expert system simulates the image a human forms when evaluating a product design. The knowledge base of the system is acquired using a method based on a genetic algorithm. An experiment on passenger car design is presented to confirm that the method induces useful rules.
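The abstract does not give details of the acquisition method; as a minimal illustrative sketch of how a genetic algorithm can induce design rules from evaluation data, the following might be used. All names, the bit-string rule encoding, the fitness definition, and the synthetic data are invented for illustration, not taken from the paper.

```python
import random

random.seed(0)

ATTRS = 8  # number of design attributes per candidate rule (assumption)

# Synthetic evaluation data: (attribute vector, subject's 0/1 Kansei rating).
DATA = [([random.randint(0, 1) for _ in range(ATTRS)], random.randint(0, 1))
        for _ in range(60)]

def fires(rule, sample):
    """A rule fires when every attribute it requires is present in the sample."""
    return all(s == 1 for r, s in zip(rule, sample) if r == 1)

def fitness(rule):
    """Fraction of samples where the rule's firing matches the subject's rating."""
    return sum(int(fires(rule, s)) == y for s, y in DATA) / len(DATA)

def evolve(pop_size=30, generations=40, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(ATTRS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, ATTRS)      # one-point crossover
            child = a[:cut] + b[cut:]
            # bit-flip mutation
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()   # the best-scoring candidate rule found
```

The surviving bit-string would then be read back as a human-interpretable rule linking design attributes to a Kansei word.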

I.6 Cognitive Science and HCI for Cooperation

SOFT Science and Technology Meets Cognitive Science and Human-Computer Interaction for Cooperation BIBA 199-204
  J. Long; H. Inoue; T. Kato; N. Miyake; T. Green; M. Harrison; E. Pollitzer
A workshop for 'Cooperation between Japan and the United Kingdom on SOFT Science and Technology' was held in Osaka, Japan last year, and hosted by the Science and Technology Agency. The aim was to explore the potential for Japanese/UK cooperation in SOFT Science and Technology, particularly as it relates to Cognitive Science and Human-Computer Interaction (HCI), in the way the UK has brought them together, under the Joint Councils' Initiative (JCI). The workshop raised a number of important technical issues which require further development and dissemination, to which this session is intended to contribute. More generally, the aim is to consider how human science and engineering can be more effectively brought together to solve human problems, with reference to Japan and the UK and their eventual cooperation. The discussion, however, will extend to collaboration in general. The session will be technical, rather than organisational or logistic, focussing on specific examples, as well as wider issues.
   Session participants are drawn from both Japanese and UK research communities, and from the human (cognitive) sciences and from engineering (HCI). However, what unites them is a concern to establish the more effective engineering of human technology systems, whether individual or social, industrial or domestic, for the whole population, including people with special needs.

I.7 Multimodal Interface

Multimodal Interface with Speech and Motion of Stick: CoSMoS BIBA 207-212
  Takeshi Ohashi; Takeshi Yamanouchi; Atsushi Matsunaga; Toshiaki Ejima
We propose a multimodal interface system that allows a user to issue commands by speech and by simple gestures. To command by gesture, the user takes a stick and moves it. Voice commands express verbs, adjectives, and names. The combination of these two modalities provides a powerful and flexible communication environment.
A Multi-Modal Interface with Speech and Touch Screen BIBA 213-218
  Seiichi Nakagawa; Jian Xim Zhang; Wicha Chengcharoen
A multi-modal input method combining speech and a touch screen was studied. Since these two methods are complementary, they can be used together to create a more user-friendly human interface.
   To illustrate this, a multi-modal robot control simulation system was built on a Sun Sparc 10 using the attached A/D converter at an 11.025 kHz sampling rate. The task was defined as follows: an operator operates a robot in the computer world by giving commands to a pseudo-robot shown on the display.
   To evaluate this system, a robot control simulation system using only speech input and a similar system using only touch-screen input were also developed. Six novice subjects performed two given tasks using the three input methods. After evaluating these systems, it was found that the multi-modal system incorporating both speech and touch-screen input was able to compensate for the deficiencies present when using only touch-screen input.
A Multimodal Operational System for Security Services BIBA 219-224
  M. L. Bourguet; S. Mimura; S. Ikeno; M. Komura
This paper addresses the question of multimodality applied to operational systems intended for professional users in charge of difficult and stressful tasks. For this type of application, the efficiency brought by a multimodal interface follows not only from the naturalness of the interaction but also from the match between the communication modalities and the task to be performed. We report our experience in the design and prototyping of SECOM's Centralized Security System (CSS), a multimodal operational system for security services.
Help and Prompting in Broad Band Multimedia Services BIBA 225-230
  Laureano Cavero; Pedro Concejero; Juan Gili
This paper describes a set of experiments carried out to investigate the effectiveness of different procedures for providing on-line help to the user. An experimental task was devised based on a prototype of a tourist broad band videotex application. Four different media for providing help were compared: text, speech, video sequence, and live human operator help. Three types of prompting were also tested: auditory prompts, visual prompts (text), and both types presented simultaneously.
Object-Oriented Multimedia User Interface BIBA 231-236
  V. Trajkovic; S. Gievska; D. Davcev
In this paper, we present an object-oriented multimedia (MM) user interface (UI). The main objective of this study was to create a general and extensible user interface model that allows manipulation of different media and easy connection to object-oriented applications.
   We introduce a new abstract construction called the Visual Widget, whose main property is its Semantic Knowledge about its relationships to other Visual Widgets.
   As an example, we connected our user interface to our system for the creation and presentation of continuous media streams.
A Multimodal Computer-Augmented Interface for Distributed Applications BIBAK 237-240
  Luc Julia; Adam Cheyer
In this paper, we present a distributed application integrating handwriting, gesture and speech recognition for a map-based task. Our implementation combines the best features of two existing agent-style approaches to developing multimodal systems.
Keywords: Distributed applications, Multimodal interface, Agent architecture, Pen computing, Speech recognition
Terminological Storage and Filtering of Unstructured Multimedia Information BIBA 241-247
  K. Ahmad; C. Thiopoulos
The current generation of multimedia systems can store and retrieve files containing textual and image data. Apart from a narrow range of textual data, current systems regard all other data types as unstructured. We propose a system for unstructured multimedia information based on free lexical descriptions of the multimedia objects to be stored in the system. Retrieval then operates on the descriptors, which are interlinked into a hypertext structure, by making use of an information filter.

I.8 Nonverbal Communication

A Modeling of Facial Expression and Emotion for Recognition and Synthesis BIBA 251-256
  Shigeo Morishima; Fumio Kawakami; Hiroshi Yamada; Hiroshi Harashima
Facial expression, like the voice, is essential to human communication. It includes several kinds of factors which can express non-verbal information as a voluntary or spontaneous activity. Recognizing and synthesizing facial expressions can therefore improve the communication environment between human and machine. Emotion is one of the most important factors which facial expression can convey.
   In this paper, an approach to modeling the human emotions that appear on the face is presented. This system supports analysis, synthesis, and coding of face images at the emotion level. There has recently been much research on facial image recognition [3]~[5] and synthesis [6]. However, until a few years ago the modeling of emotional states was discussed only by psychologists [7]~[10], and those models are not usable as criteria for our application, so an original emotion model is constructed here based on a multi-layered neural network. The dimension of the expression space can be reduced to three while still representing a variety of face categories through interpolation and linear combination of the basic expressions. This system can also realize analysis and synthesis of expression simultaneously.
A Multi-Modal Virtual Environment that Enhances Creativity through Human-to-Computer-to-Human Communication BIBA 257-262
  Yuri A. Tijerino; Shinji Abe; Fumio Kishino
In general, current research in virtual reality technology concentrates on providing better visualization and manipulation devices or methods. However, virtual environments have enormous potential as artificial media for communication of creative ideas. This paper describes some aspects of a system that enhances creativity and communication of creative ideas. The system is based on an intuitive multi-modal interface that combines simple hand gestures with speech information to produce instantaneous visual feedback of interpretations of the intention in the interactions. The paper also identifies the need to create ontologies of 3D shapes as the basis for the interpretation of intentions and proposes a two-level representation for such purpose.
Non-Vocal Behaviors in Communication and Coordination of TV Conferences BIBAK 263-268
  Sooja Choi; Hiroshi Tamura
In spite of the general emphasis on the significance of facial expressions and nonverbal cues in communication, experience with TV conferencing has not confirmed any notable enhancement in communication from introducing talking-head video of conference participants.
   In a previous paper, the authors reported some evaluation methods using a conference model. Although the participants did use visual and non-verbal behaviors, most of the communication was done in verbal language; thus no absolute need to initiate non-vocal behavior was built into the models. This paper proposes an elaborated conference model in which active non-vocal actions, such as a show of hands or janken (toss), are expected to occur in the sessions.
Keywords: TV conference, Non-vocal behavior (NVB), Visual attention, Communication, Coordination, Talk analysis
Effects of Pitch Adaptation in Prosody on Human-Machine Verbal Communication BIBA 269-274
  Tomio Watanabe
A machine which adapts to the optimal speech characteristics of a human speaker would be useful for smooth human-machine verbal communication. This paper focuses on the fundamental period of speech in pitch structure for prosodic adaptation and discusses the relationship between the fundamental period of a speaker's utterance and the fundamental period of the speaker's own preferred listening. From the results of sensory evaluation by paired comparison of varied fundamental periods, it is found that the mean fundamental period is preferred as the listening fundamental period, independent of the speaker's utterance fundamental period. In consideration of previous results on temporal adaptation, it is concluded that prosodic adaptation in temporal structure, such as speech activity adaptation, is effective in human-machine verbal communication, while for adaptation of the fundamental period in pitch structure it is sufficient that the machine's output fundamental period be set at the mean value.
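The concluding rule stated in the abstract, setting the machine's output fundamental period to the speaker's mean, is simple to state computationally. A hedged sketch follows; the paper's actual signal processing is not given here, so the F0-frame input and the zero-for-unvoiced convention are assumptions made for illustration.

```python
def mean_fundamental_period(f0_hz):
    """Mean fundamental period in milliseconds from per-frame F0 estimates.

    Frames with F0 = 0 are treated as unvoiced and skipped (assumption).
    """
    periods = [1000.0 / f for f in f0_hz if f > 0]
    return sum(periods) / len(periods)

# e.g. voiced frames of an utterance, in Hz (made-up values)
frames = [100.0, 0.0, 110.0, 125.0, 0.0, 118.0]

# Per the paper's conclusion, the machine's output fundamental period
# would simply be set to this mean value.
target_period_ms = mean_fundamental_period(frames)
```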

I.9 Hypermedia / Hypertext

Interface Alternatives for Hypertext Environments BIBA 277-282
  Garry Patterson
This paper seeks to make explicit the ranges of design choices among which authors select whenever they create a particular hypertext. It is suggested that the interface characteristics which are most helpful for hypertexts written as tutorials may differ from those design features which benefit users wishing to gain access through hypertexts to large (encyclopaedic) information sources, or hypertexts used by people as a means of personal information management. Although there is little evidence about which design features work best in which circumstances, an understanding of the range of interface options may help authors appreciate the trade-offs they often have to make when designing hypertexts.
On the Value of Non-Content Information in Networked Hypermedia Documents BIBA 283-288
  S. Lenman; C. Chapdelaine
A proposed classification of information in hypermedia documents into hypermedia, interfaces, end nodes and embellishments is used as a basis for raising some questions concerning usability of networked hypermedia on the WWW. An experiment was conducted in order to see if embellishments, here defined as document elements that have a decorative function, enhance memory of hypermedia documents. The results showed better recall of document names and a tendency to better recall of document content for documents with embellishments. However, many other factors are at play, and in a networked environment document transfer time also has to be taken into account.
Usability Problems with Network Hypermedia BIBA 289-294
  C. Chapdelaine; S. Lenman
The objective of the study presented here was to investigate usability problems encountered by people using the Mosaic browser for accessing the World Wide Web (WWW). It was found that 48% of the observed problems could be attributed to the implementation of the browser, mainly problems caused by improper feedback. Another 52% of the observed problems could be attributed to the design of the documents, mainly problems related to structure and presentation. New browsers might eliminate some of the problems observed with Mosaic. Still, the need remains for more precise and complete guidelines for hypermedia document design in order to assist content producers in improving the overall usability of the WWW.
Using Discourse to Aid Hypertext Navigation BIBA 295-300
  Robert Inder; Jon Oberlander
Hyper-media is about following links, but this obliges readers to track what they have and have not seen. Navigating within a document can be a significant task, which, if not done well enough, can leave one 'lost in hyperspace'. Displaying the structure of a hyper-document is one way of helping readers move through it. But we have adopted an alternative approach, as advocated by Nielsen [1]. Guided by ideas from Discourse Theory, we are trying to recognise the structure of the reader's own interaction with the system. We have created DS-Info, an enhanced version of the Info hypertext system within the Emacs editor. DS-Info uses the distinction between structural and cross-structural links to identify topics and digressions. We are currently empirically evaluating DS-Info. Preliminary results are encouraging.
Cognitively Adapted Hypertext for Learning BIBA 301-306
  Kelvin Clibbon
This paper discusses the effect of adapting hypertext to the learner. Cognitive overhead and disorientation limit the effectiveness of hypertext for learning. By cognitively adapting a hypertext system to the user and by providing instructional cues, the effects of these problems might be reduced. A quasi-experimental evaluation study is reported, with a view to testing the efficacy of this theory.
Building the HCI of Hypermedia Applications. The Abstract Data View Approach BIBA 307-312
  G. Rossi; D. Schwabe; C. J. P. Lucena; D. D. Cowan
In this paper we present a novel approach for specifying the interface aspects of a hypermedia application with Abstract Data Views. Using Abstract Data Views (ADVs) it is possible to describe, in an implementation-independent way, important aspects of the design, such as which media objects the user of the hypermedia application will perceive, how the user will interact with these objects, and which interface transformations will take place while navigating through the hypermedia. ADVs are presented in the context of an object-oriented hypermedia design method (OOHDM). We briefly discuss which design problems must be solved in order to specify the interface of a hypermedia application; we then present Configuration Diagrams as a design tool for specifying the static relationships between interface objects and nodes in a hypermedia application. ADVcharts, a notation combining concepts from Statecharts, Objectcharts, and Petri Nets, is later presented as a formalism for specifying the dynamic aspects of a hypermedia application. We finally discuss some further issues, such as reuse in the large of interface models.
Multimedia Authoring: A 3D Interactive Visualization Interface Based on a Structured Document Model BIBA 313-318
  Nabil Layaida; Jean-Yves Vion-Dury
Multimedia authoring is inherently a complex and tedious task. Users have to specify all the details of a multimedia presentation (temporal coordination, user interaction and spatial placement of data on the user display). At the same time they have to keep in mind a global overview of the entire document.
   Current document preparation systems use scripting languages or timelines to specify multimedia presentations. A complete specification of all the low-level presentation details puts a heavy burden on the editing task, and the author's creativity becomes limited by a high cognitive load. The complexity of editing multimedia documents is mainly related to the various tasks involved: document organization, temporal synchronization, spatial placement of multimedia objects, and resource attribution. It is therefore necessary to analyze these tasks in order to build an efficient authoring interface.
   In this paper, we present an interface based on a structured document model and multiple interactive views. This synthetic approach relieves the user of low-level and error-prone descriptions by reducing the document's complexity. We believe this is an efficient way to enhance the overall expressive power of the interface. Moreover, the structured approach eases the automatic processing of multimedia documents, allowing rapid production of spatial and temporal layouts from high-level logical descriptions and presentation directives.
   In the first part of the paper, we present the state of the art and the multimedia data and document model. In the second part, we describe our user interface and its different views.

I.10 Collaboration 1

Formulating Collaborative Engineering Design Using Machine Learning Method and Decision Theory BIBA 321-326
  Tetsuo Sawaragi; Michael R. Fehling; Osamu Katai; Yukihiro Tsuboshita
This paper discusses the formulation of collaborative design by multiple agents, using techniques from machine learning in artificial intelligence and uncertainty reasoning from decision theory. We introduce a learning technique for concept formation from prior examples (i.e., design precedents) as a method for constructing a design agent's own perspectives. A design coordinator's activity is then formulated decision-theoretically, concerning the selection of design prototypes. The formulation is illustrated using the example of designing a bridge girder.
Modeling Coordination Work: Lessons Learned from Analyzing a Cooperative Work Setting BIBA 327-332
  Peter H. Carstensen
In complex work settings, the effort required to coordinate the distributed activities of mutually interdependent actors is burdensome. It thus becomes relevant to address the possibility of designing computer-based mechanisms that support coordination activities. This paper discusses what a conceptual framework must provide to analysts and designers in order to support them in analyzing the coordination aspects of a work setting. The discussion is based on experiences from analyzing the coordination aspects of a large software design and test project by means of such a conceptual framework.
The Scenarionnaire: Empirical Evaluation of Software-Ergonomical Requirements for Groupware BIBA 333-338
  Markus Rohde
Concerning the design of groupware systems, a lack of software-ergonomic requirements is notable. Considering the different roles of interaction that arise when groupware is used, potential conflicts of interest between users can be found. To moderate these potential conflicts, software-ergonomic design principles have been developed but not yet implemented in groupware applications. One therefore has to choose methods which allow such principles to be evaluated prospectively, in a very early phase of development. The scenarionnaire is a questionnaire consisting of scenarios that offer different design options to be judged by users of groupware. Their subjective judgements are seen as indicators of the usability of certain design requirements.

I.11 Collaboration 2

Dynamics of Socially Distributed Cognition in Organization BIBA 341-346
  Takaya Endo
Through experience with human interface testing, evaluating, and designing activities in participatory design processes, we have learned several things from people connected to real problems in organizations. Among them are the importance of being an active cognitive listener or observer and an active re-interpreter and annotator, so as to find implicit problems and envision them, and the importance of being a participant in real organizational activities rather than a mere objective designer. We have thus been confronted with the need to develop a systematic and macroscopic cognitive engineering (CE), understandable not only to individuals but also to organizations, for resolving representation and human interface (HI) problems, such as HM(Machine)I, HG(Graphical representation)I, HH(Human)I, HE(Environment)I, HT(Task)I, HJ(Job)I, HO(Organization)I, HS(Society)I, etc., from the viewpoints of individuals and of organizations or societies. It is important to research and develop new CE methodologies that bridge the microscopic and macroscopic views for the harmonious development of cognitive artifacts, humans, and organizations. As such methodologies, SPSC (Social Problem Solving CE), PRFC (Problem Representation Facilitating CE), MYTC (Myself-Yourself-Task communicating CE), CMOC (Cerebellum Mode Operating CE), BECC (Behavior-Emotion-Cognition systems CE), MMBC (Microscopic-Macroscopic Bridging CE), IECC (Internal-External Considering CE), CARC (Cognitive Artifacts Reflecting CE), and HDEC (Harmonious Development Evolving CE) were proposed for solving fundamental problems of human interface and human communication, including artifact-mediated human communication in organizations [1].
   In this introductory paper, we discuss the fundamental issues of human communication and distributed cognition in organizations that play basic roles in the above-mentioned MYTC and BECC.
The Model of Media Conference BIBA 347-352
  Katsumi Takada; Hiroshi Tamura; Yu Shibuya
Routine conferences are classified into four types: 1. message transfer, 2. transaction, 3. coordination, and 4. tactical decision. In this paper, general conference models of the transaction and coordination types are formulated, and an experimental analysis of the models is described.
What is Expert Performance in Emergency Situations? BIBA 353-358
  Hiroshi Ujita
To understand expert behavior and define what constitutes good performance in emergency situations in huge and complex plants, human performance should be evaluated from the viewpoints of not only error but also various cognitive, psychological, and behavioral characteristics. Quantitative and qualitative measures of human performance are proposed for both individual operators and crews, based on an operator performance analysis experiment, among which the cognitive and behavioral aspects are the most important.
Human-Machine Interfaces for Cooperative Work BIBA 359-364
  G. Johannsen
The characteristics of cooperative work in industrial plants and transportation systems are investigated. Expert group meetings with different human user classes in a cement plant are described. An information flow diagram is shown for this application domain. Customer-oriented tasks are described for the other example of the integrated transportation systems. The participative design methodology for human-machine interfaces is briefly outlined. Some features of human-machine interfaces for supporting cooperation in cement plants and for passenger support in integrated transportation systems are explained.
An Evaluation Method of Communication in a Software Development Project and its Application for Diagnosis BIBA 365-370
  Mie Nakatani; Hiroshi Harashima; Shogo Nishida
Software development projects have recently grown larger, and it is difficult for managers to understand their communication problems. This paper proposes a systematic evaluation method, which is applied to a real project. The results of the evaluation are considered useful for understanding communication problems in software development projects.
Architecture for Synchronous Groupware Application Development BIBA 371-376
  Roland Balter; Slim Ben Atallah; Rushed Kanawati
This paper describes the design choices and the prototype implementation of CoopScan, a generic framework for synchronous groupware development. We focus on architectural issues and on strategies for integration of existing single-user applications into a collaborative environment. In this work, we propose a generic approach to application re-use. This approach is validated through the development of a testbed synchronous collaborative editor.

I.12 Collaboration 3

Mechanisms for Conflict Management in Groupware BIBA 379-384
  V. Wulf
The activation of certain functions in groupware affects different users, who might have conflicting interests. We develop technical mechanisms to support users in handling these conflicts. The usage of these mechanisms depends on the changing needs of the different fields of application. Furthermore, technical mechanisms are embedded in the social practice of conflict management in each field of application. Therefore, a software architecture for groupware applications should allow ordinary functions to be flexibly equipped with technical mechanisms for conflict management.
Intelligent Support to Operators through Cooperation BIBAK 385-390
  P. Brezillon
We have designed an intelligent cooperative system to support operators in their supervision task in nuclear plant applications. In this situation, cooperation has two modes, namely a waking state and a participating state. During the waking state, the system observes the operator's behavior and its consequences on the process behavior. During the participating state, the cooperative system builds a solution to a problem jointly with the user. The cooperation depends on the system's capabilities to explain and to incrementally acquire knowledge. This implies a revision of how cooperative systems are designed and developed. We develop these ideas in our application.
Keywords: Intelligent cooperative system, Cooperation, Explanation, Context
Cooperative Annotation on Remote Real Objects BIBA 391-396
  Masahide Shinozaki; Amane Nakajima
In normal desktop conferencing systems, it is difficult to work cooperatively with real objects in real time. In our system, a user can easily work with remote users, because remote users can draw annotations onto real objects from remote sites, and can move and erase the annotations dynamically. Remote users overwrite the annotation on a video window, and the annotation is displayed on the real object at the same position as in the video window by using projection. At the local site, a user can manipulate the real objects while looking at the annotations from the remote site. We have made a prototype system based on a multimedia conferencing system called ConverStation/2 [1]. In this paper, we describe the system configuration and experimental results in detail.
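The abstract does not describe the geometric calibration, but the mapping such a projection step requires, from video-window pixels to projector pixels, is commonly modeled with a planar homography when the annotated surface is roughly planar. A minimal sketch under that assumption follows; the matrix H is a made-up example, not calibration data from the system.

```python
def apply_homography(H, x, y):
    """Map pixel (x, y) through a 3x3 homography H using homogeneous coordinates."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w

# Illustrative "calibration": a pure translation between the two frames.
# A real H would come from matching known points in both views.
H = [[1.0, 0.0, 50.0],
     [0.0, 1.0, -20.0],
     [0.0, 0.0, 1.0]]

# An annotation drawn at video-window pixel (320, 240) lands here
# in projector coordinates, so it appears on the real object.
proj_x, proj_y = apply_homography(H, 320, 240)
```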
PeCo-Mediator: Supporting Access to Unknown Partners for Cooperation using Collective Personal Connections -- Adaptable Menu-Based Query Interface -- BIBA 397-402
  Hiroaki Ogata; Yoneo Yano; Nobuko Furugori; Jin Qun
This paper describes a groupware system called PeCo-Mediator and its adaptable menu-based query user interface (UI). PeCo-Mediator collects group users' personal connections (PeCo) to help users find partners who can solve their problems in business activities. Moreover, its UI adapts to a user's own perspective and to others' viewpoints so that diverse personal information can be used effectively.
Structured Cooperative Editing and Group Awareness BIBA 403-408
  Dominique Decouchant; Vincent Quint; Manuel Romero Salcedo
Cooperative editing is an important field in CSCW. Many editors have been developed or extended to allow several users to work simultaneously on shared documents. At the same time, an important research activity is carried out in the field of structured documents. Cooperative editing and structured documents share many common issues and it seems natural to take advantage of the advances in these two fields for constructing new tools that allow users to cooperate in producing complex structured documents.
   This paper presents the main features of Alliance, a cooperative asynchronous structured editor that we have developed with that approach. This application allows several users distributed on a local area network to cooperate for producing documents in a structured way. An early version of Alliance is described in [1].
   As in any groupware application, group awareness is a key issue in a cooperative editor [2] [9]. Group awareness allows each user to be informed of the work done by the others; it also allows him/her to decide how and when his/her own contribution should be shown to others. In the rest of this paper, we focus on the principles of Alliance group awareness, which is based on active icons that indicate the status of shared document fragments.
   In the next section, we present the principles of structured editing and Grif, the structured editor on which Alliance is based. Section 2 discusses the main issues in cooperative editing. Section 3 focuses on group awareness and explains its relation to document structure. Finally, the perspectives of this work are presented.
Work Groups in Computerized Manufacturing Systems BIBA 409-414
  Christina Kirsch; Eberhard Ulich
Currently, under the label of "Lean Production", the discussion of new production concepts is dominated by a renaissance of the concept of team work. Yet definitions of team work differ considerably, from the "Toyotistic" work group in Japanese manufacturing to the European concept of semi-autonomous work groups. The lack of a mutually agreed-upon definition of group work and of empirical evidence for its advantages gave rise to the investigation of the relation between work-group autonomy and the efficiency of computerization and working conditions within the research project "GRIPS" (original "Gestaltung rechnerunterstützter integrierter Produktionssysteme", Strohm et al. 1994).
Modeling and Simulation of Operator Team Behavior in Nuclear Power Plants BIBA 415-420
  K. Sasou; K. Takano; S. Yoshimura; K. Haraoka; M. Kitamura
This paper describes a technique for simulating plant operators facing abnormal events within a plant. "SYBORG: Simulation System for the Behavior of an Operating Group" runs on workstations and simulates the behavior (cognitive action and communication) of a team of three operators. This study introduces a "mental model" that describes how operators predict plant behavior and make decisions to prevent the deterioration of plant conditions. SYBORG simulates decision-making processes via communication among operators, taking into account human relations such as position, personality, and credibility. This paper also shows some interesting simulation results obtained with SYBORG.

I.13 Virtual Reality 1

A Network Virtual Reality Skiing System -- System Overview and Skiing Movement Estimation -- BIBA 423-428
  Akihisa Kenmochi; Shin'ich Fukuzumi; Keiji Nemoto; Katsuya Shinohara
A networked Virtual Reality skiing system is developed. The system allows the user to practice a variety of long and short turn actions, realized by two newly developed technologies: an apparatus for measuring the virtual skier's body movements and position, and a method for estimating the skier's velocity and position on the virtual slope on the basis of an actual skiing model. The system is capable of network connection, which allows two or more players to share the same virtual skiing environment and compete with each other.
Proposal of CYBERSCOPE World BIBA 429-434
  Akira Hiraiwa; Masaaki Fukumoto; Noboru Sonehara
Tele-existence is the concept behind technology that enables a person to have the real-time sensation of being in a place other than where he or she actually is [1,2,3]. This paper proposes a personal networked tele-existence system called CYBERSCOPE.
Visual Engineering System -- VIGOR: Virtual Environment for Visual Engineering and Operation BIBA 435-440
  Miwako Doi; Nobuko Kato; Naoko Umeki; Takahiro Harashima; Keigo Matsuda
We have developed a visual and interactive system that bridges systems developers, human factors engineers, human operators, and clients. The system -- VIGOR (Virtual environment for visual engineering and operation) -- has six components: a virtual space, a virtual user, interactive operation, dynamic simulation, physical simulation, and rendering.
   The virtual space component has a new LOD (level of detail) method, physical location constraints, behavioral and structural representations of objects, and adequate fonts, in order to offer high-quality images, high speed, and natural interaction.
   The virtual user works in the virtual space in place of a real human operator. Using the virtual user, we can quantitatively compare layout alternatives by measuring motions, visibility, and reach.
   We have applied VIGOR to operation-room design, hospital facilities layout, social human interface design, and operation and patrol training.
A Learning Environment for Maintenance of Power Equipment using Virtual Reality BIBA 441-446
  Shotaro Miwa; Takao Ueda; Masanori Akiyoshi; Shogo Nishida
This paper deals with a learning environment for the maintenance of power equipment using Virtual Reality. First, insights from cognitive science and an analysis of maintenance expertise are discussed from the viewpoint of an understanding-support system. A design philosophy for the learning environment is then proposed based on this analysis. A prototype system is designed and implemented using both an EWS (Engineering WorkStation) and a GWS (Graphic WorkStation). This prototype is applied to the maintenance of a Gas Insulated Substation, one type of power equipment, and its performance is evaluated through demonstration.
Evaluation of the Safety Features of a Virtual Reality System BIBA 447-452
  Y. Sugioka; S. Tadatsu; T. Nakayama; Y. Yamamoto; T. Kobayashi; Y. Takahashi; N. Yamaoka; Y. Nakanishi; T. Hayasaka; G. Goto; M. Sudo; Y. Kusaka; N. Furuta; K. Shindo; K. Yamazaki; T. Yamaguchi
The Hyper Hospital we have proposed is a novel medical care system constructed on an electronic, computerized information network, using virtual reality as the principal human interface [1]. The major purpose of the Hyper Hospital is to restore humane interactions between patients and their various medical caretakers by bringing them into much closer contact in the real medical scene than is possible in current conventional medical practice [2],[3].
   The Hyper Hospital will be built as a distributed system on a computerized information network. Each node of the Hyper Hospital network serves as part of the networked medical care system and takes on an activity from a variety of medical care facilities. Of these facilities, the most important is the personal VR system, which is designed to support each patient by providing a means of maintaining contact with the Hyper Hospital network from his or her private space. To implement the Hyper Hospital, it is mandatory to develop human-machine interfaces utilizing VR technology, such as a cybernetic interface [4] and a special VR software framework that allows participants to modify the VR world [5], together with ethological, psychological, and physiological studies of the behavior of healthy and diseased people [6].
   In the course of developing the Hyper Hospital, we examined the safety features of our own virtual reality system from physiological, neurological, and psychological viewpoints [7]. In the present paper, we report the results of our second series of experiments, carried out with healthy young subjects to check the safety features of our VR system.

I.14 Virtual Reality 2

An Architecture Model for Multimodal Interfaces with Force Feedback BIBAK 455-460
  Christophe Ramstein
Multimodal interfaces with force feedback pose new problems both in terms of their design and for hardware and software implementation. The first problem is to design and build force-feedback pointing devices that permit users both to select and manipulate interface objects (windows, menus and icons) and at the same time feel these objects with force and precision through their tactile and kinesthetic senses. The next problem is to model the interface such that it can be returned to the user via force-feedback devices: the task is to define the fields of force corresponding to interface objects and events, and to design algorithms to synthesize these forces in such a way as to provide optimum real-time operation. The final problem concerns the hardware and software architecture to be used to facilitate the integration of this technology with contemporary graphic interfaces. An architecture model for a multimodal interface is presented: it is based on the notion of a multiagent model and breaks down inputs and outputs according to multiple modalities (visual, auditory and haptic). These modalities are represented by independent software components that communicate with one another via a higher-level control agent.
Keywords: Multimodal interface, Software architecture model, Force feedback, Haptic device, Physical model
Surface Display: Presentation of Curved Surface in Virtual Reality Environment BIBA 461-465
  Koichi Hirota; Michitaka Hirose
Force feedback can be thought of as an interface realized through the phenomenon of contact. We have focused on the cross sections in this interface and tried to classify methodologies for realizing force feedback according to differences in the cross section. Based on this discussion, the concept of the Surface Display was proposed, and the technical issues in implementing the concept were pointed out. A prototype device was created to clarify the concept, and our solutions to these technical issues in the device are described.
Coherency between Kinesthetic and Visual Sensation for Two-Handed-Input in a Virtual Environment BIBA 467-472
  Masahiro Ishii; P. Sukanya; Ryo Takamatsu; Makoto Sato; Hiroshi Kawarada
This paper is about constructing a virtual work space for performing tasks with two-handed manipulation. We intend to provide a virtual environment that encourages users to accomplish tasks as they usually would in the real environment. Our approach uses a three-dimensional spatial interface device that allows the user to handle virtual objects directly with free hands and to feel some physical properties of the virtual objects, such as contact and weight. We investigated suitable conditions for constructing our virtual work space by simulating some basic assembly work, a Face-and-Fit task, and then selected the conditions under which subjects felt most comfortable performing this task. Finally, we verified the possibility of performing more complex tasks in this virtual work space by providing some simple virtual models and letting subjects create new models by assembling these components together. The subjects could naturally perform assembly operations and accomplish the task. Our evaluation shows that this virtual work space has the potential to support tasks that require manipulation by, or cooperation between, both hands in a natural manner.
On the Computer Simulation of Ball Dribble in the Virtual Environment BIBA 473-478
  Takashi Takeda; Yoshio Tsutsui
We have developed a platform for training systems using virtual reality technologies, which is expected to be effectively applicable to rehabilitation and various kinds of sports training [1], [2]. As part of our research using the platform, we have implemented the capability to simulate dribbling a basketball (and other balls of comparatively large mass), and have begun studying its effectiveness as a training capability.
   As a basketball has a comparatively small rebound coefficient, keeping it rebounding at a certain height requires some momentum, which seems useful for muscle training. By changing the ball's size, weight, or rebound coefficient, our virtual training system can easily provide a training environment optimal for the trainee's health condition.
   This paper reports on the dependency of the "difficulty / easiness" of ball dribbling on the rebound coefficient and on gravity. The measurements were conducted using our virtual environment with 3D visual, force, and haptic interfaces.
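The physics being varied here can be sketched with the standard restitution model (an illustration under simple assumptions, not the authors' implementation): each impact scales the ball's speed by the rebound coefficient e, so a free bounce loses height by a factor of e squared, and sustaining a fixed dribble height requires the hand to restore the lost momentum on every bounce.

```python
import math


def rebound_height(h0: float, e: float, n: int) -> float:
    """Height after n free bounces of a ball dropped from h0.

    Each impact scales speed by e, so height scales by e**2 per bounce.
    """
    return h0 * (e ** 2) ** n


def impulse_to_sustain(h: float, e: float, mass: float, g: float = 9.81) -> float:
    """Impulse per bounce the hand must supply to keep rebounds at height h."""
    v_impact = math.sqrt(2.0 * g * h)     # speed just before impact
    v_rebound = e * v_impact              # speed just after impact
    return mass * (v_impact - v_rebound)  # momentum deficit to restore
```

On this model, a smaller rebound coefficient or stronger gravity raises the per-bounce impulse, which is consistent with the paper's interest in how these parameters change dribbling difficulty.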
The Impetus Method for the Object Manipulation in Virtual Environment without Force Feedback BIBA 479-484
  Ryugo Kijima; Michitaka Hirose
Natural and realistic object manipulation is important for virtual environments (VE) because real applications require this capability. Moreover, through the act of manipulation the user can recognize the law that governs an object's behavior (active presence), in addition to the (passive) presence obtained through the visual sense [1] (Figure 1). Until now, however, the manipulations realized have generally been based on gestures, which are simple, symbolic, and not realistic. (For example, the hand grabs the object when the gesture is FIST, and the object then moves stuck to the hand [2].)
   The aim of this paper is to propose the Impetus method of manipulation calculation as a faster, simpler, and more reliable method. While this method is not based directly on physics [3~7], it provides several merits similar to those of physics, offering pseudo-physical attributes to control the characteristics of the phenomena. The method is also self-contained and can easily be combined with other methods in an integrated system.
Sound Distance Localization using Virtual Environment BIBA 485-490
  Michiko Ohkura; Yasuyuki Yanagida; Susumu Tachi
In this paper, attention is focused on a qualitative understanding of the role of sound intensity, which is considered one of the most important cues for localizing the distance of a sound. Using a virtual environment display system, experiments were successfully conducted to clarify the relation between sound intensity and the apparent distance from the sound source.
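As background for this cue (an illustration of the textbook free-field model, not the authors' experimental setup): a point source obeys the inverse-square law, so the sound level drops by about 6 dB for every doubling of distance, and a level change maps back to an apparent distance.

```python
import math


def level_drop_db(r1: float, r2: float) -> float:
    """Level difference in dB between distances r1 and r2 from a point
    source in a free field, assuming the inverse-square law I ~ 1/r**2."""
    return 20.0 * math.log10(r2 / r1)


def apparent_distance(drop_db: float, r_ref: float = 1.0) -> float:
    """Distance implied by a given level drop relative to r_ref."""
    return r_ref * 10.0 ** (drop_db / 20.0)
```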

I.15 Virtual Reality 3

The NRaD Virtual Presence Program BIBA 493-498
  Steven A. Murray
Research issues and methods for a new program of virtual presence research are described. The U.S. Navy anticipates extensive use of virtual environment (VE) systems for both mission and training needs. The Navy Command, Control, and Ocean Surveillance Center, RDTE Division (NRaD) is supporting these needs by developing empirical human engineering guidelines for VE system design. The Virtual Presence Program involves parallel investigations of visual and display system performance, spatial orientation, interaction methods, and studies of operator task performance.
The Task, Interaction, and Display (TID) Taxonomy for Human-Virtual Environment Interaction BIBA 499-504
  Kay M. Stanney; Phillip Hash; Dave Dryer
A taxonomy is proposed that classifies virtual environment tasks according to the type of task, user interaction, and display (TID) that evoke efficient human task performance. The TID taxonomy can assist virtual environment designers by guiding and directing their design efforts.
Enhancing the Fidelity of Virtual Environments through the Manipulation of Virtual Time BIBA 505-510
  Dutch Guckenberger; Kay Stanney
This paper investigates the benefits of manipulating simulated time in virtual environments. Above-real-time training was tested by having subjects perform a simple tracking and targeting task under two levels of time compression in a virtual environment (real time, or 1.0x, and 1.7x). Results indicated that within both subject groups (1.0x and 1.7x) there were no significant differences between the perceived temporal and mental demands of the testing and training phases; that is, the VT group did not perceive the change in temporal demand between the training (1.7x) and testing (1.0x) phases. There were, however, significant differences in perceived temporal demand between subject groups: the VT group perceived lower temporal demand during the testing (1.0x) phase than the control group. This perceived reduction could be beneficial for time-critical tasks, where training ready responses is essential for effective task performance. In addition, training under the accelerated time condition did not lead to any negative transfer of training.
Training Independent Living Skills in a "Barrier-Free" Virtual World BIBA 511-516
  Lyn Mowafy; Jay Pollack; Mike Stang; Larry Wallace
There are a variety of technological and operational needs in conventional programs for training independent living skills that are not being addressed, or are being solved poorly. In this paper, we will explore the potential of virtual environment technologies for filling the training gap. Our goal is to specify operating guidelines, technological changes and a research agenda for the development of advanced systems to train individuals "handicapped" by their physical environment. To demonstrate how these guidelines may be implemented, we will describe a program currently under development for training individuals with cognitive impairments how to access public transportation services.
Impact of Using Advanced Human Computer Interaction to Design Ground Vehicle Systems BIBA 517-522
  Grace M. Bochenek
Optimizing the user interface in conjunction with the Virtual Prototyping Process has several benefits. It results in improved requirements, because the user has the opportunity to explore more concepts and determine which technologies provide the greatest payoff before committing to a system development. The intrinsic modeling flexibility, together with its great potential as a human-factors tool, makes Virtual Reality the technology of choice for new simulation and simulator designs (Burdea and Coiffet, 1994). The time from concept to production can be significantly shortened (a 50% goal) by eliminating the hardware build-test-build cycles required in the traditional prototyping process, and significant cost savings can be achieved in system development and production.
Applied Virtual Reality Research and Applications at NASA/Marshall Space Flight Center BIBA 523-528
  Joseph P. Hale
A Virtual Reality (VR) applications program has been under development at NASA/Marshall Space Flight Center (MSFC) since 1989. The objectives of the MSFC VR Applications Program are to develop, assess, validate, and utilize VR in hardware development, operations development and support, mission operations training and science training. Before this technology can be utilized with confidence in these applications, it must be validated for each particular class of application. That is, the precision and reliability with which it maps onto real settings and scenarios, representative of a class, must be calculated and assessed. The approach of the MSFC VR Applications Program is to develop and validate appropriate virtual environments and associated object kinematic and behavior attributes for specific classes of applications. These application-specific environments and associated simulations will be validated, where possible, through empirical comparisons with existing, accepted tools and methodologies. These validated VR analytical tools will then be available for use in the design and development of space systems and operations and in training and mission support systems. Specific validation studies for selected classes of applications have been completed or are currently underway. These include macro-ergonomic "control-room class" design analysis, Spacelab stowage reconfiguration training, a full-body micro-gravity functional reach simulator, and a gross anatomy teaching simulator. This paper describes the MSFC VR Applications Program and the validation studies.

I.16 Pen-Based Interface

Pen-Based Interfaces in Engineering Environments BIBA 531-536
  R. Zhao; H.-J. Kaufmann; T. Kern; W. Muller
Conceptual design is usually done with paper and pen. Notepad computers open a new way of designing pen-based user interfaces to support this kind of design activity. However, pen-based user interfaces have some limitations, such as the recognition rate of handwritten characters and the size of memory and display, which restrict the applicability of such computers. Our main idea is to combine powerful graphical workstations with pen-based computers, each for appropriate applications, within an integrated environment in which the user interface adapts to the applications. This paper describes our engineering subenvironment, the EXPRESS modeling environment, and the design issues of using gestural interfaces for editing EXPRESS-G diagrams.
OS/omicron V4: An Operating System for Handwriting Interfaces BIBA 537-542
  Eiichi Hayakawa; Tomoyuki Morinaga; Yasushi Kato; Kazuaki Nakajima; Mitarou Namiki; Nobumasa Takahashi
This paper describes an operating system (OS) that supports handwriting interfaces. The key concept of this OS is to present a data model with a paper metaphor, called "Virtual Paper", in order to support multiple data types and the meanings of pen data. The OS, data manager, compiler, and window system are implemented so as to utilize "Virtual Paper" as a system resource.
Computing in the Ink Domain BIBA 543-548
  D. Lopresti; A. Tomkins
In this paper we propose treating electronic ink as first-class computer data. Doing so may help overcome some of the more stubborn barriers impeding the widespread acceptance of pen computing. We outline what we consider to be the important open questions, and describe a system we have built that demonstrates certain aspects of this philosophy. Still, much work remains to be done.
The Design of a Pen-Based Interface 'SHOSAI' for Creative Work BIBA 549-554
  Naoki Kato; Masaki Nakagawa
This paper presents a system named SHOSAI for supporting creative work through a handwriting (pen) interface. Since handwriting does not obstruct people's thinking, it is suitable for creative work. The system is designed not to obstruct the user's thinking and to combine the merits of paper with the advantages of computers. To satisfy this design concept, the system provides pen-based interaction techniques, a metaphor interface, and lazy recognition. Using this system, a user can create documents effectively, from the stage of thinking about content to the stage of printing final drafts.
An Experimental Study of Interfaces Exploiting a Pen's Merits BIBA 555-560
  Naoki Kato; Natsuko Fukuda; Masaki Nakagawa
This paper describes experiments comparing pen- and mouse-based interfaces. From these experiments the following results were obtained: (1) a pen is faster than a mouse for pointing and dragging; (2) for right-handed people it takes more time to move down and to the right than in other directions with a pen; (3) the timing of visual feedback affects performance on tasks where precision is required.
Interactive Freehand Sketch Interpreter for Geometric Modelling BIBA 561-566
  S. Sugishita; K. Kondo; H. Sato; S. Shimada; F. Kimura
This paper presents a new method for turning idea sketches into geometric models at a workstation. The idea sketches are drawn on a CRT screen with a stylus pen and tablet by designers at the initial stage of the design procedure, so designers can keep the same drawing style at a workstation as when using a pen on paper. The 'Sketch Interpreter' system can create correct geometric models in the computer even when the input idea sketches are perspectively incorrect. The data created are transferred to an advanced 3D-CAD system, and the system is applied as a front-end processor in design practice.
Recognition of On-Line Handdrawn Geometric Shapes by Fuzzy Filtering and Neural Network Classification BIBA 567-572
  Figen Ulgen; Andrew Flavell; Norio Akamatsu
As hand-held computers become widely utilized in many areas, alternative means of user-computer interaction are acquiring a much wider level of acceptance. Presenting users who may not have extensive computer training with a familiar environment increases the acceptability of, and provides a smooth integration to, advanced technology. Recognition of handdrawn shapes is beneficial in drawing packages and in automated sketch entry on hand-held computers. In this paper, we present a new approach to invariant geometric shape recognition which utilizes a fuzzy function to reduce noise and a neural network for classification. Our application's aim is to recognize ellipses, circles, rectangles, squares, and triangles. The neural network learns the relationship between the internal angles of a shape and its classification; therefore only a few training samples representing each shape class are sufficient. The results of our prototype system are very successful: the neural network correctly classified shapes bearing little resemblance to those in the training set.
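The internal-angle features such a classifier relies on can be sketched as follows (an illustrative computation, not the authors' code): the angles of a polygon are invariant to translation, rotation, and uniform scaling, which is why only a few training samples per shape class are needed.

```python
import math


def internal_angles(points):
    """Internal angles (degrees) at each vertex of a closed polygon.

    points: list of (x, y) vertices in order. Angles are invariant to
    translation, rotation, and uniform scaling of the polygon.
    """
    n = len(points)
    angles = []
    for i in range(n):
        ax, ay = points[i - 1]          # previous vertex (wraps around)
        bx, by = points[i]              # vertex at which the angle is taken
        cx, cy = points[(i + 1) % n]    # next vertex
        v1 = (ax - bx, ay - by)
        v2 = (cx - bx, cy - by)
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        cos_a = max(-1.0, min(1.0, dot / norm))  # clamp for float safety
        angles.append(math.degrees(math.acos(cos_a)))
    return angles
```

For example, every angle of a square comes out near 90 degrees and every angle of an equilateral triangle near 60, regardless of the shape's size or orientation.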

I.17 Three Dimensional Realtime Human-Computer Interfaces -- Virtual Reality

CIA-Tool: A Tool for Cooperative-Interactive Planning in Virtual Environments BIBA 575-586
  Andre Hinkenjann; Oliver Riedel
In many planning tasks the planning specialist is sooner or later confronted with the necessity of presenting his or her design to the future user and altering the design if necessary. Ideally, the iterative planning process should be carried out interactively with the user, to fundamentally reduce the duration of each step in the iterative planning spiral. A further advantage of such mutual cooperation between planner and user would be a reduction of costs, an advantage rarely separated from a shortened project duration. In addition, various variants of the design can be evaluated more quickly.
   This approach was partially applied in a prototype for the planning of rooms [1], in which the main focus was on the interactive aspects, with less attention placed on the conception of Computer Supported Cooperative Work (CSCW). Reasons for this were deficiencies in the quality and quantity of the necessary hardware, as well as the unsolved problems of integrating several VR devices within one application. Through a strategic alliance with the British firm Division, which also includes the joint development of new software packages, the realization of most of the concepts of the Cooperative-Interactive Application Tool (CIA-Tool) was made possible. The possibilities of the CIA-Tool in an application such as interior design were already established in 1993 ([2], [3]).
Virtual Reality -- the Ultimative Interface? BIBA 587-596
  Wilhelm Bauer; Hans-Jorg Bullinger; Andreas Rossler
Computers, telecommunications, and new media are converging rapidly today. This paper shows that Virtual Reality will be one of the integrating key technologies. A technical concept is introduced which potentially integrates all digital media of the future. Finally, the steps towards this concept and some of its impacts are discussed.
Multimodal Communication in Virtual Environments BIBAK 597-604
  Marc Cavazza; Xavier Pouteau; Didier Pernel
Virtual Environment techniques provide tools for a new generation of interfaces, which can place the user in a more realistic and complete environment. To achieve a realistic and usable interface, it is necessary to enter the third dimension by allowing the operator to access data from a 3D world and its symbolic representation. On one hand, to make those data available, information processing is an important part of the system; on the other hand, the way of accessing data also requires specific attention. This last point leads to the definition of the concept of "extended pointing" as a crucial part of multimodal communication for such applications. In this paper, a formal framework for implementation is proposed, and the interest of this approach is illustrated by situations from a fully implemented multimodal communication prototype for virtual environments.
Keywords: Multimodal communication, Enhancement elements, Coverbal gestures, Pointing gesture in a 3D environment
Virtual Reality Technology as Human Interface to Networked Medical System -- Its General Construction, User Reconfigurable Design, New Cybernetic Interface, Feasibility, and Safety Features BIB 605-610
  T. Yamaguchi; K. Yamazaki