
Proceedings of the 2005 International Conference on INtelligent TEchnologies for interactive enterTAINment

Fullname: INTETAIN 2005: First International Conference on Intelligent Technologies for Interactive Entertainment
Editors: Mark Maybury; Oliviero Stock; Wolfgang Wahlster
Location: Madonna di Campiglio, Italy
Dates: 2005-Nov-30 to 2005-Dec-02
Publisher: Springer Berlin Heidelberg
Series: Lecture Notes in Computer Science 3814
Standard No: DOI: 10.1007/11590323; hcibib: INTETAIN05; ISBN: 978-3-540-30509-5 (print), 978-3-540-31651-0 (online)
Links: Online Proceedings | Conference Website
COMPASS2008: Multimodal, Multilingual and Crosslingual Interaction for Mobile Tourist Guide Applications BIBAFull-Text 3-12
  Ilhan Aslan; Feiyu Xu; Hans Uszkoreit; Antonio Krüger; Jörg Steffen
COMPASS2008 is a general service platform developed to serve as the tourist and city explorer's assistant within the information services for the 2008 Beijing Olympic Games. Its main goals are to help foreigners overcome language barriers in Beijing and to assist them in finding information anywhere, anytime they need it. Novel strategies have been developed to exploit the interaction of multimodality, multilinguality and cross-linguality for intelligent information service access and information presentation via mobile devices.
Discovering the European Heritage Through the ChiKho Educational Web Game BIBAFull-Text 13-22
  Francesco Bellotti; Edmondo Ferretti; Alessandro De Gloria
The rapid success of the Internet has spurred continuous development of web-based technologies. Several applications also feature multimedia environments which are very appealing and can effectively convey knowledge, including in a life-long learning perspective. This paper presents the aims and development of the ChiKho EU project. ChiKho has designed a web-distributed educational game which allows players to share and improve their knowledge of the heritage of European cities and countries. The paper also describes ChiKho's logical structure, its interaction modalities and its technical architecture. Finally, we present preliminary usability results from tests with end users, performed in the four ChiKho exhibition sites (London, Genoa, Plovdiv and Kedainiai), where we launched a prototype version of the web game.
Squidball: An Experiment in Large-Scale Motion Capture and Game Design BIBAFull-Text 23-33
  Christoph Bregler; Clothilde Castiglia; Jessica DeVincezo; Roger Luke DuBois; Kevin Feeley; Tom Igoe; Jonathan Meyer; Michael Naimark; Alexandru Postelnicu; Michael Rabinovich; Sally Rosenthal; Katie Salen; Jeremi Sudol; Bo Wright
This paper describes Squidball, a new large-scale motion-capture-based game. It was tested on audiences of up to 4,000 players at SIGGRAPH 2004. It required the construction of the world's largest motion capture space at the time, and posed many other challenges in technology, production, game play, and the study of group behavior. Our aim was to entertain the SIGGRAPH Electronic Theater audience with a cooperative and energetic game played by the entire audience together, controlling real-time graphics and audio by bouncing and batting multiple large helium-filled balloons across the theater space. We detail in this paper the lessons learned.
Generating Ambient Behaviors in Computer Role-Playing Games BIBAFull-Text 34-43
  Maria Cutumisu; Duane Szafron; Jonathan Schaeffer; Matthew McNaughton; Thomas Roy; Curtis Onuczko; Mike Carbonaro
Many computer games use custom scripts to control the ambient behaviors of non-player characters (NPCs). Therefore, a story writer must write fragments of computer code for the hundreds or thousands of NPCs in the game world. The challenge is to create entertaining and non-repetitive behaviors for the NPCs without investing substantial programming effort to write custom non-trivial scripts for each NPC. Current computer games have simplistic ambient behaviors for NPCs; it is rare for NPCs to interact with each other. In this paper, we describe how generative behavior patterns can be used to quickly and reliably generate ambient behavior scripts that are believable, entertaining and non-repetitive, even for the more difficult case of interacting NPCs. We demonstrate this approach using BioWare's Neverwinter Nights game.
Telepresence Techniques for Controlling Avatar Motion in First Person Games BIBAFull-Text 44-53
  Henning Groenda; Fabian Nowak; Patrick Rößler; Uwe D. Hanebeck
First person games are computer games in which the user experiences the virtual game world from an avatar's view. This avatar is the user's alter ego in the game. In this paper, we present a telepresence interface for the first person game Quake III Arena, which gives the user the impression of presence in the game and thus leads to identification with his avatar. This is achieved by tracking the user's motion and using the motion data as control input for the avatar. Since the user wears a head-mounted display and perceives his actions affecting the virtual environment, he is fully immersed in the target environment. Without further processing of the user's motion data, the virtual environment would be limited to the size of the user's real environment, which is not desirable. The use of Motion Compression, however, allows exploring an arbitrarily large virtual environment while the user is actually moving in an environment of limited size.
Parallel Presentations for Heterogeneous User Groups -- An Initial User Study BIBAFull-Text 54-63
  Michael Kruppa; Ilhan Aslan
Presentations on public information systems, like a large screen in a museum, usually cannot support heterogeneous user groups appropriately, since they offer just a single channel of information. In order to support such groups with mixed interests, a more complex presentation method is needed. The method proposed in this paper combines a large stationary presentation system with several Personal Digital Assistants (PDAs), one for each user. The basic idea is to "overwrite" presentation parts on the large screen that are of little interest to a particular user with a personalized presentation on the PDA. We performed an empirical study with adult participants to examine the overall performance of such a system (i.e., how well the information is delivered to the users and how high the impact of the cognitive load is). The results show that, after an initial phase of getting used to the new presentation method, subjects' performance during parallel presentations was on par with performance during standard presentations. A crucial moment within these presentations is whenever the user needs to switch his attentional focus from one device to another. We compared two different methods of warning the user of an upcoming device switch (a virtual character "jumping" from one device to another, and an animated symbol) with a version where we did not warn the users at all. Objective measures did not favour either method. However, subjective measures show a clear preference for the character version.
Performing Physical Object References with Migrating Virtual Characters BIBAFull-Text 64-73
  Michael Kruppa; Antonio Krüger
In this paper we address the problem of performing references to wall-mounted physical objects. Our solution is based on virtual characters capable of performing reasonable combinations of motion, gestures and speech in order to disambiguate references to real-world objects. The novel idea of our work is to allow characters to migrate between displays to find an optimal position for the reference task. We have developed a rule-based system that, depending on the individual situation in which the reference is performed, determines the most appropriate reference method and technology from a number of alternatives. The described technology has been integrated into a museum guide prototype combining mobile and stationary devices.
AI-Mediated Interaction in Virtual Reality Art BIBAFull-Text 74-83
  Jean-luc Lugrin; Marc Cavazza; Mark Palmer; Sean Crooks
In this paper, we introduce a novel approach to using AI technologies to support user experience in Virtual Reality Art installations. The underlying idea is to use semantic representations of interaction events, so as to modify the course of actions and create specific impressions in the user. The system is based on a game engine ported to a CAVE-like immersive display, and uses the engine's event system to integrate AI-based simulation into the user's real-time interaction loop. The combination of a set of transformation operators and heuristic search provides a powerful mechanism for generating chains of events. The work is illustrated by the development of an actual VR Art installation inspired by Lewis Carroll's work, and we illustrate the system's performance on several examples from the actual installation.
Laughter Abounds in the Mouths of Computers: Investigations in Automatic Humor Recognition BIBAFull-Text 84-93
  Rada Mihalcea; Carlo Strapparava
Humor is an aspect of human behavior considered essential for inter-personal communication. Despite this fact, research in human-computer interaction has almost completely neglected the automatic recognition and generation of humor. In this paper, we investigate the problem of humor recognition, and bring empirical evidence that computational approaches can be successfully applied to this task. Through experiments performed on very large data sets, we show that automatic classification techniques can be effectively used to distinguish between humorous and non-humorous texts, with significant improvements observed over a priori known baselines.
AmbientBrowser: Web Browser for Everyday Enrichment BIBAFull-Text 94-103
  Mitsuru Minakuchi; Satoshi Nakamura; Katsumi Tanaka
We propose AmbientBrowser, a system that helps people acquire knowledge during everyday activities. It continuously searches Web pages using both system-defined and user-defined keywords, and displays summarized text obtained from the pages found. The system's sensors detect user and environmental conditions and control the system's behavior, such as knowledge selection or presentation style. Thus, the user can encounter a wide variety of knowledge without active operation. A pilot study showed that peripherally displayed knowledge can be read and can engage a user's interest. We implemented the system using a random information retrieval mechanism, an automatic kinetic typography composer, and easy methods of interaction using sensors.
Ambient Intelligence in Edutainment: Tangible Interaction with Life-Like Exhibit Guides BIBAFull-Text 104-113
  Alassane Ndiaye; Patrick Gebhard; Michael Kipp; Martin Klesen; Michael Schneider; Wolfgang Wahlster
We present COHIBIT, an edutainment exhibit for theme parks in an ambient intelligence environment. It combines ultimate robustness and simplicity with creativity and fun. Visitors can use instrumented 3D puzzle pieces to assemble a car. The key idea of our edutainment framework is that all actions of a visitor are tracked and commented on by two life-like guides. Visitors get the feeling that the anthropomorphic characters observe, follow and understand their actions, and provide guidance and motivation for them. Our mixed-reality installation provides a tangible (via the graspable car pieces), multimodal (via the coordinated speech, gestures and body language of the virtual character team) and immersive (via the large-size projection of the life-like characters) experience for a single visitor or a group of visitors. The paper describes the context-aware behavior of the virtual guides, the domain modeling and context classification, as well as event recognition in the instrumented environment.
Drawings as Input for Handheld Game Computers BIBAFull-Text 114-123
  Mannes Poel; Job Zwiers; Anton Nijholt; Rudy de Jong; Edward Krooman
The Nintendo DS™ is a handheld game computer that includes a small sketch pad as one of its input modalities. We discuss the possibilities for recognizing simple line drawings on this device, focusing on robustness and real-time behavior. The results of our experiments show that with devices now becoming available in the consumer market, effective image recognition is possible, provided a clear application domain is selected. In our case, this domain was the use of simple images as an input modality for computer games typical of small handheld devices.
Let's Come Together -- Social Navigation Behaviors of Virtual and Real Humans BIBAFull-Text 124-133
  Matthias Rehm; Elisabeth André; Michael Nischt
In this paper, we present a game-like scenario based on a model of social group dynamics inspired by theories from the social sciences. The model is augmented by a model of proxemics that simulates the role of distance and spatial orientation in human-human communication. By means of proxemics, a group of human participants may signal to other humans whether or not they welcome new group members. In this paper, we describe the results of an experiment we conducted to shed light on the question of how humans respond to such cues when shown by virtual humans.
Interacting with a Virtual Rap Dancer BIBAFull-Text 134-143
  Dennis Reidsma; Anton Nijholt; Rutger Rienks; Hendri Hondorp
This paper presents a virtual dancer that is able to dance to the beat of music coming in through the microphone and to motion beats detected in the video stream of a human dancer. In the current version, its moves are generated from a lexicon that was derived manually from the analysis of video clips of nine rap songs by different rappers. The system also allows for adaptation of the moves in the lexicon on the basis of style parameters.
Grounding Emotions in Human-Machine Conversational Systems BIBAFull-Text 144-154
  Giuseppe Riccardi; Dilek Hakkani-Tür
In this paper we investigate the role of user emotions in human-machine goal-oriented conversations. There has been growing interest in predicting emotions from acted and non-acted spontaneous speech. Much of the research has gone into determining the correct labels and improving emotion prediction accuracy. In this paper we evaluate the value of the user's emotional state within a computational model of emotion processing. We consider a binary representation of emotions (positive vs. negative) in the context of a goal-driven conversational system. For each human-machine interaction we acquire the temporal emotion sequence going from the initial to the final conversational state. These traces are used as features to characterize the user state dynamics. We ground the emotion traces by associating their patterns with dialog strategies and their effectiveness. To quantify the value of emotion indicators, we evaluate their predictions in terms of speech recognition and spoken language understanding errors, as well as task success or failure. We report results on 11.5K dialog samples from the How May I Help You? corpus.
Water, Temperature and Proximity Sensing for a Mixed Reality Art Installation BIBAFull-Text 155-163
  Isaac Rudomin; Marissa Diaz; Benjamín Hernández; Daniel Rivera
"Fluids" is an interactive and immersive mixed reality art installation that explores the relation of intimacy between reality and virtuality. We live in two different but connected worlds: our physical environment and virtual space. In this paper we discuss how we integrated them by using water and air as interfaces. We also discuss how we designed mechanisms for natural and subtle navigation between and within the different environments of the piece, and how we designed the environments and the installation to take advantage of the low-cost alternatives available today.
Geogames: A Conceptual Framework and Tool for the Design of Location-Based Games from Classic Board Games BIBAFull-Text 164-173
  Christoph Schlieder; Peter Kiefer; Sebastian Matyas
Location-based games introduce an element that is missing in interactive console games: movements of players involving locomotion, and thereby the physical effort characteristic of any sporting activity. The paper explores how to design location-based games combining locomotion with strategic reasoning, using classical board games as templates. It is shown that the straightforward approach to "spatializing" such games fails. A generic approach to spatialization is presented and described within a conceptual framework that defines a large class of geogames. The framework is complemented by a software tool allowing the game designer to find the critical parameter values which determine the game's balance of reasoning skills and motor skills. To illustrate the design method, a location-based version of the game TicTacToe is defined and analyzed.
Disjunctor Selection for One-Line Jokes BIBAFull-Text 174-182
  Jeff Stark; Kim Binsted; Ben Bergen
Here we present a model of a subtype of one-line jokes (not puns) that describes the relationship between the connector (part of the set-up) and the disjunctor (often called the punchline). This relationship is at the heart of what makes this common type of joke humorous. We have implemented this model in a system, DisS (Disjunctor Selector), which, given a joke set-up, can select the best disjunctor from a list of alternatives. DisS agrees with human judges on the best disjunctor for one typical joke, and we are currently testing it on other jokes of the same sub-type.
Multiplayer Gaming with Mobile Phones -- Enhancing User Experience with a Public Screen BIBAFull-Text 183-192
  Hanna Strömberg; Jaana Leikas; Riku Suomela; Veikko Ikonen; Juhani Heinilä
We have studied the use of a public screen integrated with a mobile multiplayer game in order to create a new kind of user experience. In the user evaluations, the game FirstStrike was tested by eight user groups, each containing four players. The evaluations showed that communication between the players increased with the use of the public display, and alliances were built. It was also found that the possibility of identifying the players by adding their photographs to the shared display makes the game more personal. We believe that this new way of communicating is a result of using the shared public screen in a mobile multiplayer game.
Learning Using Augmented Reality Technology: Multiple Means of Interaction for Teaching Children the Theory of Colours BIBAFull-Text 193-202
  Giuliana Ucelli; Giuseppe Conti; Raffaele De Amicis; Rocco Servidio
Augmented Reality technology permits concurrent interaction with the real environment and computer-generated virtual objects, making it an interesting technology for developing educational applications that allow manipulation and visualization. The work described here extends the traditional concept of a book with rendered graphics to help children understand the fundamentals of the theory of colours. A three-dimensional virtual chameleon shows children how, from the combination of primary colours, it is possible to get secondary colours and vice versa. The chameleon responds to children's actions by changing its appearance according to the colours of its surroundings. Our tangible interface becomes an innovative teaching tool conceived to support school learning methods, where the child can learn by playing with the virtual character, turning over the pages of the book and manipulating the movable parts. The main scientific contribution of this work is in showing what augmented reality-based interfaces can bring to improving existing learning methods.
Presenting in Virtual Worlds: Towards an Architecture for a 3D Presenter Explaining 2D-Presented Information BIBAFull-Text 203-212
  Herwin van Welbergen; Anton Nijholt; Dennis Reidsma; Job Zwiers
Entertainment, education and training are changing because of multi-party interaction technology. In the past we have seen the introduction of embodied agents and robots that take the role of a museum guide, a news presenter, a teacher, a receptionist, or someone trying to sell you insurance, houses or tickets. In all these cases the embodied agent needs to explain and describe. In this paper we present the design of a 3D virtual presenter that uses different output channels to present and explain; speech and animation (posture, pointing and involuntary movements) are among these channels. The behavior is scripted and synchronized with the display of a 2D presentation with associated text and regions that can be pointed at (sheets, drawings, and paintings). The emphasis in this paper is on the interaction between the 3D presenter and the 2D presentation.
Entertainment Personalization Mechanism Through Cross-Domain User Modeling BIBAFull-Text 215-219
  Shlomo Berkovsky; Tsvi Kuflik; Francesco Ricci
The growth of available entertainment information services, such as movie and CD listings, or travel and recreational activities, raises a need for personalization techniques that filter and adapt content to customers' interests and needs. Personalization technologies rely on user data, represented as User Models (UMs). UMs built by specific services are usually not transferable, due to commercial competition and the heterogeneity of the models' representations. This paper focuses on the second obstacle and discusses an architecture for mediating UMs across different domains of entertainment. The mediation improves the accuracy of the UMs and upgrades the provided personalization.
User Interview-Based Progress Evaluation of Two Successive Conversational Agent Prototypes BIBAFull-Text 220-224
  Niels Ole Bernsen; Laila Dybkjær
The H. C. Andersen system revives a famous character and lets it carry out natural interactive conversation for edutainment. We compare the results of structured user interviews from two successive user tests of the system.
Adding Playful Interaction to Public Spaces BIBAFull-Text 225-229
  Amnon Dekel; Yitzhak Simon; Hila Dar; Ezri Tarazi; Oren Rabinowitz; Yoav Sterman
Public spaces are interactive by the very fact that they are designed to be looked at, walked around, and used by multitudes of people on a daily basis. Architects design such spaces to create physical scenarios for people to interact with, but this interaction is usually one-sided: the physical space does not usually change or react. In this paper we present three interaction design projects which add reactive dynamics to objects located in public spaces, and in the process enhance the forms of interaction possible with them.
Report on a Museum Tour Report BIBAFull-Text 230-234
  Dina Goren-Bar; Michela Prete
We present a simulation study of some basic dimensions of adaptivity that guided the development of personalized summary reports on museum visits, as part of the PEACH project. Each participant was exposed to three simulated tour reports, realizing a sequential adaptive, a thematic adaptive and a non-adaptive version, respectively, and was subsequently questioned on each of the dimensions investigated. The results were unexpected. We discuss the possible reasons and propose conditions under which personalized report generators can be preferred over non-personalized ones.
A Ubiquitous and Interactive Zoo Guide System BIBAFull-Text 235-239
  Helmut Hlavacs; Franziska Gelies; Daniel Blossey; Bernhard Klein
We describe a new prototype for a zoo information system. The system is based on RFID and allows visitors to retrieve information about the zoo animals quickly and easily. RFID tags identifying the respective animals are placed near the animal habitats. Zoo visitors are equipped with PDAs containing RFID readers and WLAN cards. The PDAs read the RFID tag IDs and retrieve the corresponding HTML documents from a zoo Web server, showing information about the animals at various levels of detail and in various languages. Additionally, the system contains a JXTA- and XML-based peer-to-peer subsystem, enabling zoos to share their content with other zoos in an easy way. In this way, the effort of creating multimedia content can be reduced drastically.
Styling and Real-Time Simulation of Human Hair BIBAFull-Text 240-245
  Yvonne Jung; Christian Knöpfle
We present a method for realistic, real-time simulation of human hair, suitable for use in complex virtual reality applications. The core idea is to reduce the enormous amount of hair on a human head by combining neighboring hairs into wisps and using our cantilever beam algorithm to simulate them. The final rendering of these wisps is done using special hardware-accelerated shaders, which deliver high visual accuracy. Furthermore, we present our first attempt at interactive hair styling.
Motivational Strategies for an Intelligent Chess Tutoring System BIBAFull-Text 246-250
  Bruno Lepri; Cesare Rocchi; Massimo Zancanaro
The recognition of students' motivational states and the adaptation of instruction to students' motivations are hot topics in the field of intelligent tutoring systems. In this paper, we describe a prototype of an Intelligent Chess Tutoring System based on a set of motivational strategies borrowed from Dweck's theory. The main objectives of the prototype are to teach some chess tactics to middle-level players and to help them avoid helpless reactions after their errors. The prototype was implemented using Flash MX 2004. The graphical user interface features a life-like character functioning as a tutor.
Balancing Narrative Control and Autonomy for Virtual Characters in a Game Scenario BIBAFull-Text 251-255
  Markus Löckelt; Elsa Pecourt; Norbert Pfleger
We report on an effort to combine methods from storytelling and multimodal dialogue systems research to achieve flexible and immersive performances involving believable virtual characters conversing with human users. The trade-off between narrative control and character autonomy is exemplified in a game scenario.
Web Content Transformed into Humorous Dialogue-Based TV-Program-Like Content BIBAFull-Text 256-261
  Akiyo Nadamoto; Adam Jatowt; Masaki Hayashi; Katsumi Tanaka
We describe a browsing system for transforming declarative web content into humorous-dialogue, TV-program-like content presented through character agent animation and synthesized speech. We call this system Web2Talkshow; it enables users to consume web content in a manner similar to watching TV. Web content is transformed into humorous dialogues based on the keyword set of the web page. Using Web2Talkshow, users can watch and listen to desired web content in an easy, pleasant, and user-friendly way, like watching a comedy show.
Content Adaptation for Gradual Web Rendering BIBAFull-Text 262-266
  Satoshi Nakamura; Mitsuru Minakuchi; Katsumi Tanaka
We previously proposed a gradual Web rendering system, which renders Web content incrementally according to the context of the user and the environment, enabling casual Web browsing. Unfortunately, it had low levels of readability and enjoyment. In this paper, we describe its problems and introduce content adaptation mechanisms to solve them.
Getting the Story Right: Making Computer-Generated Stories More Entertaining BIBAFull-Text 267-271
  K. Oinonen; M. Theune; A. Nijholt; D. Heylen
In this paper we describe our efforts to increase the entertainment value of the stories generated by our story generation system, the Virtual Storyteller, at the levels of plot creation, discourse generation and spoken language presentation. We also discuss the construction of a story database that will enable our system to learn from previous stories and user feedback, to create better and more entertaining stories.
Omnipresent Collaborative Virtual Environments for Open Inventor Applications BIBAFull-Text 272-276
  Jan Peciva
This paper presents a library for collaborative virtual environments that enables developers to extend their standalone 3D applications into collaborative virtual environment applications without much effort. Collaborative virtual environment applications usually pose many extraordinary consistency problems for developers. Many of these problems are related to distributed systems and parallel processing. Most of them are already taken care of by the library presented in this paper. To show its usability and make programming with the library even easier, it was integrated as an extension into Open Inventor. As a result, all Open Inventor applications can now benefit from collaboration with other applications. Many of them require just a few lines of changes to their source code to get robust collaboration, compared to the hundreds of lines of a hand-made solution that would give only simple collaboration without any consistency guarantees.
SpatiuMedia: Interacting with Locations BIBAFull-Text 277-282
  Russell Savage; Ophir Tanz; Yang Cai
Affordable wireless networking enables a wide range of devices to encode spatial data into media, for both outdoor and indoor environments. In this paper, the authors present a platform for generating, transmitting and sharing location-aware media flows. Interactive applications, such as video search by location, location search by interesting spots, navigation games and an online SpatiuMedia community, are presented.
Singing with Your Mobile: From DSP Arrays to Low-Cost Low-Power Chip Sets BIBAFull-Text 283-287
  Barry Vercoe
The world's first software-only Karaoke Machine released in Japan (2002) has no ASIC for sound synthesis and effects processing, but instead a group of load-sharing DSPs that run a multiprocessor version of the author's Extended Csound to support a 64-voice orchestra, real-time MPEG decode, live voice tracking with pitch and tempo following and a full range of audio effects. A new version of the software aimed at low-cost low-power silicon now enables similar interactive performance on lightweight mobile platforms with the same professional audio quality and Internet connectivity. This presentation will describe the system, and close with a live demonstration.
Bringing Hollywood to the Driving School: Dynamic Scenario Generation in Simulations and Games BIBAFull-Text 288-292
  I. H. C. Wassink; E. M. A. G. van Dijk; J. Zwiers; A. Nijholt; J. Kuipers; A. O. Brugman
In this paper we discuss a framework for simulation software called the movie metaphor, applied to the Dutch Driving Simulator for dynamic control of traffic scenarios. The framework resolves software complexity through agent protocols inspired by the way work is organized on a movie set. It defines clear responsibilities for the agents so that the system is extensible, maintainable and easy to understand. The framework is a software pattern for multiagent systems, especially suitable for simulation software and games.
Webcrow: A Web-Based Crosswords Solver BIBAFull-Text 295-298
  Giovanni Angelini; Marco Ernandes; Marco Gori
Webcrow is a software system whose aim is to solve crosswords. Problems like solving crosswords have been informally defined as AI-Complete and are extremely challenging for machines. Webcrow is the first solver for Italian crosswords and the first system to tackle this language game using the Web as its knowledge base. Currently, Webcrow is competitive with an average crossword player. Crosswords that are "easy" for expert humans (i.e. crosswords from the cover pages of La Settimana Enigmistica) are solved, within a 15-minute time limit, with 80% of words and over 90% of letters correct. Webcrow performs well on crosswords designed for experts and, in general, outperforms an average undergraduate student on this kind of crossword.
COMPASS2008: The Smart Dining Service BIBAFull-Text 299-302
  Ilhan Aslan; Feiyu Xu; Jörg Steffen; Hans Uszkoreit; Antonio Krüger
The COMPASS2008 project is a Sino-German cooperation aiming to integrate advanced information technologies into a high-tech information system that helps visitors access location-sensitive information services during the 2008 Olympic Games in their preferred language, offering a variety of service-adaptive modalities on mobile devices. In this paper, we demonstrate one of the COMPASS2008 services, the Smart Dining Service, to showcase the new interaction concepts combining multimodality, multilinguality and location-sensitive information search.
DaFEx: Database of Facial Expressions BIBAFull-Text 303-306
  Alberto Battocchi; Fabio Pianesi; Dina Goren-Bar
DaFEx (Database of Facial Expressions) is a database created to provide a benchmark for evaluating the facial expressivity of Embodied Conversational Agents (ECAs). DaFEx consists of 1008 short videos containing emotional facial expressions of Ekman's six emotions plus the neutral expression. The facial expressions were recorded by 8 Italian professional actors (4 male and 4 female) in two acting conditions ("utterance" and "no-utterance") and at 3 intensity levels (high, medium, low). Particular attention was paid to image quality and framing. The large number of videos, the number of variables considered, and the very good video quality make DaFEx a reference corpus both for the evaluation of ECAs and for research in emotion psychology.
PeaceMaker: A Video Game to Teach Peace BIBAFull-Text 307-310
  Asi Burak; Eric Keylor; Tim Sweeney
PeaceMaker is a computer game simulation of the Israeli-Palestinian conflict. It is a tool that can be used to teach Israeli and Palestinian teenagers how both sides can work together to achieve peace. The player can choose to take the role of either the Israeli Prime Minister or the Palestinian President, react to in-game events, and interact with other political leaders and social groups to establish a stable resolution to the conflict. Derived from gameplay conventions found in commercial strategy games, PeaceMaker aims to prove that computer games can deal with current and serious political issues and that playing for peace and non-violence could be as challenging and satisfying as playing for the opposite goal.
A Demonstration of the ScriptEase Approach to Ambient and Perceptive NPC Behaviors in Computer Role-Playing Games BIBAFull-Text 311-314
  Maria Cutumisu; Duane Szafron; Jonathan Schaeffer; Matthew McNaughton; Thomas Roy; Curtis Onuczko; Mike Carbonaro
Writing code by hand to script the behaviors of thousands of non-player characters in a computer role-playing game adventure has a tremendous negative impact on the quality of games and their entertainment value. Many games use shared custom scripts for background characters, which produce repetitive and predictable behaviors. Game designers often need help from programmers when designing a game story, which can lead to lost productivity and a distorted design vision. ScriptEase is a tool that enables game designers to use ambient and perceptive patterns to specify complex, non-repetitive, entertaining behaviors for interactive characters without writing code. This demonstration illustrates how such ambient and perceptive behaviors can be easily and reliably inserted into BioWare Corp.'s Neverwinter Nights game stories.
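The pattern idea can be pictured as a parameterized template that designers fill in with options and that the tool expands into game script, so no handler code is written by hand. This is a hypothetical sketch (the template text, names and options are invented, not ScriptEase's generator or NWScript output):

```python
# Hypothetical sketch: an "ambient behavior" pattern a designer parameterizes,
# expanded into a script-like handler. Doubled braces escape literal braces.
AMBIENT_PATTERN = """\
// ambient behavior for {npc}
OnHeartbeat:
    if (Random(100) < {chance}) {{ PlayAnimation({npc}, "{animation}"); }}
"""

def instantiate(pattern, **options):
    # Fill the designer's chosen options into the pattern template.
    return pattern.format(**options)

script = instantiate(AMBIENT_PATTERN, npc="innkeeper",
                     chance=20, animation="sweep_floor")
```

The payoff of this approach is that the same vetted pattern can be reused across many characters, so behaviors stay varied while remaining reliable.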
Multi-user Multi-touch Games on DiamondTouch with the DTFlash Toolkit BIBAFull-Text 315-319
  Alan Esenther; Kent Wittenburg
Games and other forms of tabletop electronic entertainment are a natural application of the new multi-user multi-touch tabletop technology DiamondTouch [3]. Electronic versions of familiar tabletop games such as ping-pong or air hockey require simultaneous touch events that can be uniquely associated with different users. Multi-touch two-handed gestures useful for, e.g., rotating, stretching, capturing, or releasing also have natural uses in entertainment applications built on electronic tabletops. Here we show a set of games that illustrate the capabilities of an underlying authoring toolkit we call DTFlash. DTFlash is designed so that those familiar with Macromedia Flash authoring tools can add multi-user multi-touch gestures and behaviors to web-enabled games and other applications for the DiamondTouch table.
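To illustrate what a two-handed rotate/stretch gesture computes (a generic sketch, not the DTFlash API), the transform implied by two touch points can be derived from their positions at the start and end of the move:

```python
import math

# Generic sketch: derive the scale and rotation implied by a two-finger
# gesture from the initial points (p1, p2) and current points (q1, q2).
def two_touch_transform(p1, p2, q1, q2):
    dx0, dy0 = p2[0] - p1[0], p2[1] - p1[1]   # initial finger-to-finger vector
    dx1, dy1 = q2[0] - q1[0], q2[1] - q1[1]   # current finger-to-finger vector
    scale = math.hypot(dx1, dy1) / math.hypot(dx0, dy0)
    rotation = math.atan2(dy1, dx1) - math.atan2(dy0, dx0)
    return scale, rotation

# Fingers move apart while the pair rotates 90 degrees counter-clockwise:
s, r = two_touch_transform((0, 0), (1, 0), (0, 0), (0, 2))
```

The per-user touch identification that DiamondTouch provides is what lets such gestures be attributed to the right player when several people touch the table at once.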
Enhancing Social Communication Through Story-Telling Among High-Functioning Children with Autism BIBAFull-Text 320-323
  E. Gal; D. Goren-Bar; E. Gazit; N. Bauminger; A. Cappelletti; F. Pianesi; O. Stock; M. Zancanaro; P. L. Weiss
We describe a first prototype of a system for storytelling for high-functioning children with autism. The system, based on the Story-Table developed by IRST-itc, is aimed at supporting a group of children in the activity of storytelling. It is built on a unique multi-user touchable device (the MERL DiamondTouch) designed to enforce collaboration between users. The instructions were simplified to allow children with communication disabilities to learn and operate the Story-Table. First pilot results are very encouraging: the children were enthusiastic about communicating through the Story-Table and appeared able to learn to operate it with little difficulty.
Tagsocratic: Learning Shared Concepts on the Blogosphere BIBAFull-Text 324-327
  D. Goren-Bar; I. Levi; C. Hayes; P. Avesani
The blogosphere is the social network of weblogs written by bloggers and read and linked to by other bloggers. In this paper we suggest that this constitutes a new type of collaborative entertainment in which emerging topics of shared interest are discussed and developed online. We introduce Tagsocratic, a system designed to facilitate social interactions among bloggers. To do so, we use a novel agent-based architecture in which agents interact in order to learn their respective topic competences. We describe the language-games technique on which this architecture is based and our future work in this domain.
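For readers unfamiliar with the language-games technique, the classic "naming game" gives the flavor: agents converge on a shared label for a topic through repeated pairwise exchanges. The following is a minimal textbook-style sketch under that framing, not the Tagsocratic implementation:

```python
import random

random.seed(0)

# Minimal naming game: on a successful exchange both agents drop competing
# labels; on failure the hearer adopts the speaker's label.
class Agent:
    def __init__(self):
        self.labels = set()

    def speak(self):
        if not self.labels:                       # invent a label if needed
            self.labels.add("tag-%d" % random.randrange(10**6))
        return random.choice(sorted(self.labels))

    def hear(self, label):
        if label in self.labels:                  # success: collapse to winner
            self.labels = {label}
            return True
        self.labels.add(label)                    # failure: adopt it
        return False

agents = [Agent() for _ in range(5)]
for _ in range(200):                              # repeated pairwise games
    speaker, hearer = random.sample(agents, 2)
    label = speaker.speak()
    if hearer.hear(label):
        speaker.labels = {label}

converged = len(set().union(*(a.labels for a in agents))) == 1
```

In the blogosphere setting the "labels" stand in for topic tags, so agents representing different bloggers can learn that their differently named tags denote the same shared concept.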
Delegation Based Multimedia Mobile Guide BIBAFull-Text 328-331
  Ilenia Graziola; Cesare Rocchi; Dina Goren-Bar; Fabio Pianesi; Oliviero Stock; Massimo Zancanaro
We introduce a new interaction model based on delegation, in which the visitor can signal her preferences during the visit by means of a graphical widget (called the like-o-meter). The system takes this feedback into account and selects and organizes content according to the user's likes and dislikes. The user model is incrementally updated from the user's behaviour on the interface and is shown to the visitor through the same widget, which thus acts as an output as well as an input device.
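One simple way to realize such an incremental, two-way widget is an exponentially weighted running estimate per topic. This is a hypothetical sketch of the idea (the class, scale and update rule are invented, not the paper's actual user model):

```python
# Hypothetical sketch: per-topic liking estimate updated from like-o-meter
# input, read back out to position the same widget (input and output device).
class LikeOMeter:
    def __init__(self, alpha=0.5):
        self.alpha = alpha          # weight given to the newest signal
        self.scores = {}            # topic -> estimated liking in [-1, 1]

    def feedback(self, topic, value):
        """value: widget position, e.g. -1 (dislike) .. +1 (like)."""
        old = self.scores.get(topic, 0.0)
        self.scores[topic] = (1 - self.alpha) * old + self.alpha * value

    def display(self, topic):
        """Output side: where the widget should point for this topic."""
        return self.scores.get(topic, 0.0)

meter = LikeOMeter(alpha=0.5)
meter.feedback("baroque", 1.0)      # visitor signals strong liking
meter.feedback("baroque", 0.0)      # later, a neutral signal
```

A content selector can then rank presentation items by `display(topic)`, which is the delegation step: the visitor only nudges the widget and the system does the choosing.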
Personalized Multimedia Information System for Museums and Exhibitions BIBAFull-Text 332-335
  Jochen Martin; Christian Trummer
In this paper we present a multimedia information system based on an application server for publishing rich media content in museums and exhibitions. The system makes it possible to realize personalized, adaptive, knowledge-based exhibitions and exhibition components for use on site and online. It allows the presentation of digital content adjusted to the individual visitor's interests, supports display devices ranging from PDAs to projection screens, and allows the integration of different localization techniques for location-based information presentation. All of this leads to a completely new and exciting experience for the exhibition visitor. Furthermore, a special authoring tool is included that makes the creation and administration of exhibitions very easy for museums.
Let's Come Together -- Social Navigation Behaviors of Virtual and Real Humans BIBAFull-Text 336
  Matthias Rehm; Elisabeth André; Michael Nischt
In this paper, we present a game-like scenario based on a model of social group dynamics inspired by theories from the social sciences. The model is augmented by a model of proxemics that simulates the role of distance and spatial orientation in human-human communication. By means of proxemics, a group may signal to others whether or not new members are welcome to join. In this paper, we describe the results of an experiment conducted to shed light on how humans respond to such cues when they are shown by virtual humans.
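To make the proxemics idea concrete, here is an illustrative sketch loosely based on Hall's classic proxemic zones (the thresholds and the facing test are generic assumptions, not the paper's model): classify the distance between two agents and check whether one is oriented toward the other.

```python
import math

# Generic sketch: Hall-style distance zones (meters) and a facing test.
ZONES = [(0.45, "intimate"), (1.2, "personal"), (3.6, "social")]

def zone(a, b):
    d = math.hypot(b[0] - a[0], b[1] - a[1])
    for limit, name in ZONES:
        if d <= limit:
            return name
    return "public"

def facing(pos, heading, other, fov=math.pi / 2):
    """True if `other` lies within `fov` radians of the agent's heading."""
    angle = math.atan2(other[1] - pos[1], other[0] - pos[0])
    diff = (angle - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= fov / 2

# Two agents 1 m apart, the first facing the second:
z = zone((0, 0), (1, 0))
f = facing((0, 0), 0.0, (1, 0))
```

A virtual group can then signal openness by widening its spacing and turning members' headings toward a newcomer, exactly the kind of cue the experiment probes.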
Automatic Creation of Humorous Acronyms BIBAFull-Text 337-340
  Oliviero Stock; Carlo Strapparava
Society needs humor, and not just for entertainment. In the current business world, humor is considered so important that companies may hire humor consultants. Humor can be used "to criticize without alienating, to defuse tension or anxiety, to introduce new ideas, to bond teams, ease relationships and elicit cooperation". As far as human-computer interfaces are concerned, the naturalness and effectiveness we will demand in the future require the incorporation of models of possibly all human cognitive capabilities, including the handling of humor [1]. There are many practical settings where computational humor will add value, among them business applications (such as advertisement and e-commerce), general computer-mediated communication and human-computer interaction, friendlier natural language interfaces, and educational and edutainment systems. Applications need not necessarily emphasize interactivity; for instance, there are important prospects for humor in automatic information presentation. In the Web age, presentations will become more and more flexible and personalized, and will require humor contributions for electronic commerce developments (e.g. product promotion, getting selective attention, help in memorizing names, etc.), much as happened in the world of advertisement in the old broadcast communication.
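At its crudest, automatic acronym reanalysis can be pictured as re-expanding each letter from a deliberately incongruous lexicon. This is a toy sketch, far simpler than the authors' system (the lexicon and words are invented for illustration):

```python
import random

random.seed(1)

# Toy sketch: for each letter of the acronym, pick a word from an
# incongruous lexicon to produce a candidate humorous reading.
LEXICON = {
    "F": ["Fabulous", "Frantic"],
    "B": ["Bureaucratic", "Bewildered"],
    "I": ["Ice-cream", "Infinite"],
}

def expand(acronym):
    return " ".join(random.choice(LEXICON[letter]) for letter in acronym)

reading = expand("FBI")
```

A serious system would instead constrain the choices semantically, e.g. keeping syntactic parallelism with the original expansion while substituting words from a rhetorically opposed semantic field, which is where the humorous incongruity actually comes from.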