HCI Bibliography : Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,246,196
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server on 2015-05-12 and again on 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: dengel_a* Results: 22 Sorted by: Date
[1] Quantifying reading habits: counting how many words you read Quantifying and communicating through wearables / Kunze, Kai / Masai, Katsutoshi / Inami, Masahiko / Sacakli, Ömer / Liwicki, Marcus / Dengel, Andreas / Ishimaru, Shoya / Kise, Koichi Proceedings of the 2015 International Conference on Ubiquitous Computing 2015-09-07 p.87-96
ACM Digital Library Link
Summary: Reading is a very common learning activity; many people do it every day, even while standing in the subway or waiting in the doctor's office. However, we know little about our everyday reading habits; quantifying them enables us to gain more insight into better language skills, more effective learning, and ultimately critical thinking. This paper presents a first contribution towards establishing a reading log that tracks how much reading you are doing and at what time. We present an approach capable of estimating the number of words read by a user and evaluate it in a user-independent manner over 3 experiments with 24 users and 5 different devices (e-ink reader, smartphone, tablet, paper, computer screen). We achieve an error rate as low as 5% (using a medical electrooculography system) or 15% (based on eye movements captured by optical eye tracking) over a total of 30 hours of recording. Our method works for both optical eye tracking and electrooculography systems. We provide first indications that the method also works on smart glasses that will soon be commercially available.
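One plausible shape of the word-count estimate the abstract describes is counting small forward (left-to-right) saccades during reading and scaling by an assumed words-per-saccade factor. The thresholds and the scaling factor below are illustrative assumptions, not the paper's method details:

```python
# Toy sketch: estimate words read from horizontal saccade displacements.
# Positive dx = left-to-right movement; small forward jumps are treated as
# reading saccades, large or backward jumps (line returns, glances) are not.
def estimate_words_read(saccades, words_per_saccade=1.8,
                        min_dx=10, max_dx=120):
    """saccades: horizontal displacements in pixels; + means left-to-right."""
    forward = [dx for dx in saccades if min_dx <= dx <= max_dx]
    return round(len(forward) * words_per_saccade)
```

In practice such a factor would be fitted per device and user, which is consistent with the per-device evaluation the abstract reports.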

[2] Seed, a Natural Language Interface to Knowledge Bases Knowledge Management / Eldesouky, Bahaa / Maus, Heiko / Schwarz, Sven / Dengel, Andreas HIMI 2015: 17th International Conference on Human Interface and the Management of Information, Symposium on Human Interface, Part I: Information and Knowledge Design 2015-08-02 v.1 p.280-290
Keywords: Usability; Semantic Web; Natural language; Knowledge bases
Link to Digital Content at Springer
Summary: The World Wide Web has been rapidly developing in the last decade. In recent years, the Semantic Web has gained a lot of traction. It is a vision of the Web where data is understandable by machines as well as humans. Developments in the Semantic Web have paved the way for the creation of massive knowledge bases containing a wealth of structured information. However, allowing end-users to interact with and benefit from these knowledge bases remains a challenge.
    In this paper, we present Seed, an extensible knowledge-supported natural language text composition tool, which provides a user-friendly way of interacting with complex knowledge systems. It is integrable not only with public knowledge bases on the Semantic Web, but also with private knowledge bases used in personal or enterprise contexts.
    By means of a long-term formative user study and a short-term user evaluation with a sizable population of test subjects, we show that Seed was successfully used in exploring, modifying, and creating the content of complex knowledge bases. We show that it enables end-users to do so with nearly no domain knowledge while hiding the complexity of the underlying knowledge representation.

[3] Daily activity recognition combining gaze motion and visual features PETMEI -- 4th International Workshop on Pervasive Eye Tracking and Mobile Eye-Based Interaction / Shiga, Yuki / Dengel, Andreas / Toyama, Takumi / Kise, Koichi / Utsumi, Yuzuko Adjunct Proceedings of the 2014 International Joint Conference on Pervasive and Ubiquitous Computing 2014-09-13 v.2 p.1103-1111
ACM Digital Library Link
Summary: Recognition of user activities is a key issue for context-aware computing. We present a method for recognizing users' daily activities using gaze motion features and image-based visual features. Gaze motion features dominate when inferring the user's egocentric context, whereas image-based visual features dominate in recognizing environments and target objects. The experimental results show that fusing these different types of features improves the performance of daily activity recognition.

[4] In the blink of an eye: combining head motion and eye blink frequency for activity recognition with Google Glass 3. Look into Your Eyes / Ishimaru, Shoya / Kunze, Kai / Kise, Koichi / Weppner, Jens / Dengel, Andreas / Lukowicz, Paul / Bulling, Andreas Proceedings of the 2014 Augmented Human International Conference 2014-03-07 p.15
ACM Digital Library Link
Summary: We demonstrate how information about eye blink frequency and head motion patterns derived from Google Glass sensors can be used to distinguish different types of high-level activities. While it is well known that eye blink frequency is correlated with user activity, our aim is to show that (1) eye blink frequency data from an unobtrusive, commercial platform which is not a dedicated eye tracker is good enough to be useful and (2) adding head motion pattern information significantly improves the recognition rates. The method is evaluated on a data set from an experiment containing five activity classes (reading, talking, watching TV, mathematical problem solving, and sawing) performed by eight participants, showing 67% recognition accuracy for eye blinking only and 82% when extended with head motion patterns.
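The two-feature idea in the abstract can be sketched as nearest-centroid classification over (blink rate, head motion energy) windows. The centroid values and the distance rule here are invented for illustration; the paper's classifier and feature set are richer:

```python
# Assign a time window, described by blink rate (blinks/min) and a head
# motion energy score, to the activity whose centroid is nearest.
CENTROIDS = {                 # (blinks/min, motion energy) -- assumed values
    "reading":  (6.0, 0.2),
    "talking":  (18.0, 1.5),
    "watching": (10.0, 0.1),
}

def classify(blink_rate, motion_energy):
    def dist2(activity):
        b, m = CENTROIDS[activity]
        return (blink_rate - b) ** 2 + (motion_energy - m) ** 2
    return min(CENTROIDS, key=dist2)   # nearest centroid wins
```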

[5] A mixed reality head-mounted text translation system using eye gaze input Posters / Toyama, Takumi / Sonntag, Daniel / Dengel, Andreas / Matsuda, Takahiro / Iwamura, Masakazu / Kise, Koichi Proceedings of the 2014 International Conference on Intelligent User Interfaces 2014-02-24 v.1 p.329-334
ACM Digital Library Link
Summary: Efficient text recognition has recently been a challenge for augmented reality systems. In this paper, we propose a system with the ability to provide translations to the user in real time. We use eye gaze as a more intuitive and efficient input for ubiquitous text reading and translation in head-mounted displays (HMDs). The eyes can be used to indicate regions of interest in text documents and to activate optical character recognition (OCR) and translation functions. Visual feedback and navigation help in the interaction process, and text snippets with Japanese-to-English translations are presented in a see-through HMD. We focus on travelers who go to Japan and need to read signs, and propose two different gaze gestures for activating the OCR text reading and translation function. We evaluate which type of gesture suits our OCR scenario best. We also show that our gaze-based OCR method on the extracted gaze regions provides faster access to information than traditional OCR approaches. Other benefits are that visual feedback on the extracted text region can be given in real time, that the Japanese-to-English translation can be presented in real time, and that the augmentations of the synchronized and calibrated HMD in this mixed reality application are presented at exact locations in the augmented user view, allowing for dynamic text translation management in head-up display systems.

[6] Analysis and forecasting of trending topics in online media streams Social dynamics / Althoff, Tim / Borth, Damian / Hees, Jörn / Dengel, Andreas Proceedings of the 2013 ACM International Conference on Multimedia 2013-10-21 p.907-916
ACM Digital Library Link
Summary: Among the vast information available on the web, social media streams capture what people currently pay attention to and how they feel about certain topics. Awareness of such trending topics plays a crucial role in multimedia systems such as trend-aware recommendation and automatic vocabulary selection for video concept detection systems. Correctly utilizing trending topics requires a better understanding of their various characteristics in different social media streams. To this end, we present the first comprehensive study across three major online and social media streams, Twitter, Google, and Wikipedia, covering thousands of trending topics during an observation period of an entire year. Our results indicate that, depending on one's requirements, one does not necessarily have to turn to Twitter for information about current events, and that some media streams strongly emphasize content of specific categories. As our second key contribution, we further present a novel approach for the challenging task of forecasting the life cycle of trending topics in the very moment they emerge. Our fully automated approach is based on a nearest-neighbor forecasting technique exploiting our assumption that semantically similar topics exhibit similar behavior.
    We demonstrate on a large-scale dataset of Wikipedia page view statistics that forecasts by the proposed approach are about 9-48k views closer to the actual viewing statistics compared to baseline methods and achieve a mean average percentage error of 45-19% for time periods of up to 14 days.
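The nearest-neighbor forecasting assumption above can be sketched very compactly: a new topic's future time series is predicted as the average of the series of its k most semantically similar historical topics. The cosine similarity over topic embedding vectors is an illustrative stand-in for whatever semantic-similarity measure the paper uses:

```python
# Forecast a new topic's daily views from its k nearest historical topics.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def forecast(topic_vec, history, k=2):
    """history: list of (embedding, daily_views) pairs for past topics."""
    ranked = sorted(history, key=lambda h: cosine(topic_vec, h[0]),
                    reverse=True)
    neighbors = [views for _, views in ranked[:k]]
    days = len(neighbors[0])
    return [sum(v[d] for v in neighbors) / k for d in range(days)]
```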

[7] Gaze guided object recognition using a head-mounted eye tracker Gaze informed user interfaces / Toyama, Takumi / Kieninger, Thomas / Shafait, Faisal / Dengel, Andreas Proceedings of the 2012 Symposium on Eye Tracking Research & Applications 2012-03-28 p.91-98
ACM Digital Library Link
Summary: Wearable eye trackers open up a large number of opportunities to cater for the information needs of users in today's dynamic society. Users no longer have to sit in front of a traditional desk-mounted eye tracker to benefit from the direct feedback given by the eye tracker about users' interest. Instead, eye tracking can be used as a ubiquitous interface in a real-world environment to provide users with supporting information that they need. This paper presents a novel application of intelligent interaction with the environment by combining eye tracking technology with real-time object recognition. In this context we present (i) algorithms for guiding object recognition using fixation points, (ii) algorithms for generating evidence of users' gaze on particular objects, and (iii) Museum Guide 2.0, a next-generation museum guide built as a prototype application of gaze-based information provision in a real-world environment. We performed several experiments to evaluate our gaze-based object recognition methods. Furthermore, we conducted a user study in the context of Museum Guide 2.0 to evaluate the usability of the new gaze-based interface for information provision. These results show that an enormous amount of potential exists for using a wearable eye tracker as a human-environment interface.

[8] A robust realtime reading-skimming classifier Visual attention: studies, tools, methods / Biedert, Ralf / Hees, Jörn / Dengel, Andreas / Buscher, Georg Proceedings of the 2012 Symposium on Eye Tracking Research & Applications 2012-03-28 p.123-130
ACM Digital Library Link
Summary: Distinguishing whether eye tracking data reflects reading or skimming has already proved to be of high analytical value. But with a potentially more widespread usage of eye tracking systems at home, in the office, or on the road, the amount of environmental and experimental control tends to decrease. This in turn leads to an increase in eye tracking noise and inaccuracies which are difficult to address with current reading detection algorithms. In this paper we propose a method for constructing and training a classifier that is able to robustly distinguish reading from skimming patterns. It operates in real time, considering a window of saccades and computing features such as the average forward speed and angularity. The algorithm inherently deals with distorted eye tracking data and provides a robust, linear classification into the two classes read and skimmed. It achieves reaction times of 750 ms on average, is adjustable in its horizontal sensitivity, and provides confidence values for its classification results; it is also straightforward to implement. Trained on a set of six users and evaluated on an independent test set of six different users, it achieved an 86% classification accuracy and outperformed two other methods.
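The windowed feature computation and linear decision rule described above can be sketched as follows. The feature definitions, weights, and threshold are illustrative assumptions, not the paper's trained values:

```python
# Reading vs. skimming over a window of saccades: reading tends to show
# steady, small rightward saccades close to horizontal; skimming shows
# larger, more erratic movements.
import math

def features(saccades):
    """saccades: list of (dx, dy) displacements in pixels per saccade."""
    n = len(saccades)
    avg_forward = sum(dx for dx, _ in saccades) / n        # rightward drift
    angularity = sum(abs(math.atan2(dy, dx))               # mean deviation
                     for dx, dy in saccades) / n           # from horizontal
    return avg_forward, angularity

def classify(saccades, w_speed=0.01, w_ang=-1.0, bias=0.0):
    """Linear rule: non-negative score -> 'read', negative -> 'skimmed'."""
    speed, ang = features(saccades)
    score = w_speed * speed + w_ang * ang + bias
    return "read" if score >= 0 else "skimmed"
```

A signed score like this also yields the confidence value the abstract mentions, e.g. by passing it through a logistic function.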

[9] Towards robust gaze-based objective quality measures for text Eye tracking applications I / Biedert, Ralf / Dengel, Andreas / Elshamy, Mostafa / Buscher, Georg Proceedings of the 2012 Symposium on Eye Tracking Research & Applications 2012-03-28 p.201-204
ACM Digital Library Link
Summary: An increasing amount of text is being read digitally. In this paper we explore how eye tracking devices can be used to aggregate the reading data of many readers in order to provide authors and editors with objective and implicitly gathered quality feedback. We present a robust way to jointly evaluate the gaze data of multiple readers with respect to various reading-related features. We conducted an experiment in which a group of high school students composed essays that were subsequently read and rated by a group of seven other students. Analyzing the recorded data, we find that the number of regression targets, the reading-to-skimming ratio, reading speed, and reading count are the most discriminative features for distinguishing very comprehensible from barely comprehensible text passages. By employing machine learning techniques, we are able to classify the comprehensibility of text automatically with an overall accuracy of 62%.

[10] Universal eye-tracking based text cursor warping Uses and applications / Biedert, Ralf / Dengel, Andreas / Käding, Christoph Proceedings of the 2012 Symposium on Eye Tracking Research & Applications 2012-03-28 p.361-364
ACM Digital Library Link
Summary: In this paper we present an approach to building an eye-tracking-based text cursor placement system. When triggered, the system employs a computer-vision-based analysis of the screen's content around the current gaze position to find the most likely designated gaze target. It then synthesizes a mouse event at that position, allowing rapid text cursor repositioning even in applications which do not support eye tracking explicitly. For our system we compared three different computer vision methods in a simulation run and evaluated the best candidate in two double-blinded user studies. We used a total of 19 participants to assess the system's objective and perceived end-user speed-up. We demonstrate that in terms of reposition time the OCR-based method is superior to the other tested methods; it also beats common keyboard-mouse interaction for some users. We conclude that while the tool was almost universally preferred subjectively over keyboard-mouse interaction, the highest speed can be achieved by using the right amount of eye tracking.

[11] Reading and estimating gaze on smart phones Uses and applications / Biedert, Ralf / Dengel, Andreas / Buscher, Georg / Vartan, Arman Proceedings of the 2012 Symposium on Eye Tracking Research & Applications 2012-03-28 p.385-388
ACM Digital Library Link
Summary: While lots of reading happens on mobile devices, little research has been performed on how the reading interaction actually takes place. We therefore describe our findings from a study conducted with 18 users who were asked to read a number of texts while their touch and gaze data were being recorded. We found three reader types and identified their preferred alignment of text on the screen. Based on our findings we are able to computationally estimate the reading area with an approximate .81 precision and .89 recall. Our computed reading speed estimate has an average 10.9% wpm error in contrast to the measured speed, and combining both techniques we can pinpoint the reading location at a given time with an overall word error of 9.26 words, or about three lines of text on our device.

[12] Attentive documents: Eye tracking as implicit feedback for information retrieval and beyond / Buscher, Georg / Dengel, Andreas / Biedert, Ralf / Elst, Ludger V. ACM Transactions on Interactive Intelligent Systems 2012-01 v.1 n.2 p.9
ACM Digital Library Link
Summary: Reading is one of the most frequent activities of knowledge workers. Eye tracking can provide information on what document parts users read, and how they were read. This article aims at generating implicit relevance feedback from eye movements that can be used for information retrieval personalization and further applications.
    We report the findings from two studies which examine the relation between several eye movement measures and the user-perceived relevance of read text passages. The results show that the measures are generally noisy, but after personalizing them we find clear relations between the measures and relevance. In addition, the second study demonstrates the effect of using reading behavior as implicit relevance feedback for personalizing search. The results indicate that gaze-based feedback is very useful and can greatly improve the quality of Web search. The article concludes with an outlook introducing attentive documents that keep track of how users consume them. Based on eye movement feedback, we describe a number of possible applications to make working with documents more effective.

[13] EDITED BOOK Search Computing: Broadening Web Search Lecture Notes in Computer Science 7538 / Ceri, Stefano / Brambilla, Marco 2012 n.16 p.254 Springer Berlin Heidelberg
DOI: 10.1007/978-3-642-34213-4
ISBN: 978-3-642-34212-7 (print), 978-3-642-34213-4 (online)
Link to Digital Content at Springer
== Extraction and Integration ==
Web Data Reconciliation: Models and Experiences (1-15)
	+ Blanco, Lorenzo
	+ Crescenzi, Valter
	+ Merialdo, Paolo
	+ Papotti, Paolo
A Domain Independent Framework for Extracting Linked Semantic Data from Tables (16-33)
	+ Mulwad, Varish
	+ Finin, Tim
	+ Joshi, Anupam
Knowledge Extraction from Structured Sources (34-52)
	+ Unbehauen, Jörg
	+ Hellmann, Sebastian
	+ Auer, Sören
	+ Stadler, Claus
Extracting Information from Google Fusion Tables (53-67)
	+ Brambilla, Marco
	+ Ceri, Stefano
	+ Cinefra, Nicola
	+ Sarma, Anish Das
	+ Forghieri, Fabio
	+ et al
Materialization of Web Data Sources (68-81)
	+ Bozzon, Alessandro
	+ Ceri, Stefano
	+ Zagorac, Srdan
== Query and Visualization Paradigms ==
Natural Language Interfaces to Data Services (82-97)
	+ Guerrisi, Vincenzo
	+ Torre, Pietro La
	+ Quarteroni, Silvia
Mobile Multi-domain Search over Structured Web Data (98-110)
	+ Aral, Atakan
	+ Akin, Ilker Zafer
	+ Brambilla, Marco
Clustering and Labeling of Multi-dimensional Mixed Structured Data (111-126)
	+ Brambilla, Marco
	+ Zanoni, Massimiliano
Visualizing Search Results: Engineering Visual Patterns Development for the Web (127-142)
	+ Morales-Chaparro, Rober
	+ Preciado, Juan Carlos
	+ Sánchez-Figueroa, Fernando
== Exploring Linked Data ==
Extending SPARQL Algebra to Support Efficient Evaluation of Top-K SPARQL Queries (143-156)
	+ Bozzon, Alessandro
	+ Valle, Emanuele Della
	+ Magliacane, Sara
Thematic Clustering and Exploration of Linked Data (157-175)
	+ Castano, Silvana
	+ Ferrara, Alfio
	+ Montanelli, Stefano
Support for Reusable Explorations of Linked Data in the Semantic Web (176-190)
	+ Cohen, Marcelo
	+ Schwabe, Daniel
== Games, Social Search and Economics ==
A Survey on Proximity Measures for Social Networks (191-206)
	+ Cohen, Sara
	+ Kimelfeld, Benny
	+ Koutrika, Georgia
Extending Search to Crowds: A Model-Driven Approach (207-222)
	+ Bozzon, Alessandro
	+ Brambilla, Marco
	+ Ceri, Stefano
	+ Mauri, Andrea
BetterRelations: Collecting Association Strengths for Linked Data Triples with a Game (223-239)
	+ Hees, Jörn
	+ Roth-Berghofer, Thomas
	+ Biedert, Ralf
	+ Adrian, Benjamin
	+ Dengel, Andreas
An Incentive-Compatible Revenue-Sharing Mechanism for the Economic Sustainability of Multi-domain Search Based on Advertising (240-254)
	+ Brambilla, Marco
	+ Ceppi, Sofia
	+ Gatti, Nicola
	+ Gerding, Enrico H.

[14] Eye tracking analysis of preferred reading regions on the screen Work-in-progress, April 12-13 / Buscher, Georg / Biedert, Ralf / Heinesch, Daniel / Dengel, Andreas Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010-04-10 v.2 p.3307-3312
Keywords: eye tracking, mouse movements, reading, scrolling
ACM Digital Library Link
Summary: We report on an exploratory study analyzing preferred reading regions on a monitor using eye tracking. We show that users have individually preferred reading regions, varying in location on the screen and in size. Furthermore, we explore how scrolling interactions and mouse movements are correlated with position and size of the individually preferred reading regions.

[15] Text 2.0 Work-in-progress, April 14-15 / Biedert, Ralf / Buscher, Georg / Schwarz, Sven / Hees, Jörn / Dengel, Andreas Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010-04-10 v.2 p.4003-4008
Keywords: attentive text, eye tracking, reading
ACM Digital Library Link
Summary: We discuss the idea of text responsive to reading and argue that the combination of eye tracking, text and real time interaction offers various possibilities to enhance the reading experience. We present a number of prototypes and applications facilitating the user's gaze in order to assist comprehension difficulties and show their benefit in a preliminary evaluation.

[16] Segment-level display time as implicit feedback: a comparison to eye tracking Expansion and feedback / Buscher, Georg / van Elst, Ludger / Dengel, Andreas Proceedings of the 32nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval 2009-07-19 p.67-74
Keywords: display time, eye tracking, implicit feedback, personalization
ACM Digital Library Link
Summary: We examine two basic sources for implicit relevance feedback on the segment level for search personalization: eye tracking and display time. A controlled study has been conducted where 32 participants had to view documents in front of an eye tracker, query a search engine, and give explicit relevance ratings for the results. We examined the performance of the basic implicit feedback methods with respect to improved ranking and compared their performance to a pseudo relevance feedback baseline on the segment level and the original ranking of a Web search engine.
    Our results show that feedback based on display time on the segment level is much coarser than feedback from eye tracking. But surprisingly, for re-ranking and query expansion it did work as well as eye-tracking-based feedback. All behavior-based methods performed significantly better than our non-behavior-based baseline and especially improved poor initial rankings of the Web search engine.
    The study shows that segment-level display time yields results comparable to eye-tracking-based feedback. Thus, it should be considered in future personalization systems as an inexpensive but precise method for implicit feedback.

[17] Query expansion using gaze-based feedback on the subdocument level Query analysis & models: 1 / Buscher, Georg / Dengel, Andreas / van Elst, Ludger Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval 2008-07-20 p.387-394
ACM Digital Library Link
Summary: We examine the effect of incorporating gaze-based attention feedback from the user on personalizing the search process. Employing eye tracking data, we keep track of document parts the user read in some way. We use this information on the subdocument level as implicit feedback for query expansion and reranking.
    We evaluated three different variants incorporating gaze data on the subdocument level and compared them against a baseline based on context on the document level. Our results show that considering reading behavior as feedback yields powerful improvements in search result accuracy of ca. 32% in the general case. However, the extent of the improvements varies depending on the internal structure of the viewed documents and the type of the current information need.
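The subdocument-level idea can be sketched as weighting candidate expansion terms by how thoroughly the passages containing them were read. The additive weighting scheme below is an illustrative assumption, not the paper's specific variant:

```python
# Expand a query with terms from well-read passages, weighting each term
# by the accumulated reading score of the passages it occurs in.
def expand_query(query_terms, passages, top_n=3):
    """passages: list of (text, read_score) with read_score in [0, 1]."""
    weights = {}
    for text, score in passages:
        for term in text.lower().split():
            if term not in query_terms:
                weights[term] = weights.get(term, 0.0) + score
    best = sorted(weights, key=weights.get, reverse=True)[:top_n]
    return list(query_terms) + best
```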

[18] Eye movements as implicit relevance feedback Works in progress / Buscher, Georg / Dengel, Andreas / van Elst, Ludger Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems 2008-04-05 v.2 p.2991-2996
ACM Digital Library Link
Summary: Reading detection is an important step in the process of automatic relevance feedback generation based on eye movements for information retrieval tasks. We describe a reading detection algorithm and present a preliminary study to find expressive eye movement measures.

[19] Generating and using gaze-based document annotations Works in progress / Buscher, Georg / Dengel, Andreas / van Elst, Ludger / Mittag, Florian Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems 2008-04-05 v.2 p.3045-3050
ACM Digital Library Link
Summary: In this paper we describe a prototypical system that is able to generate document annotations based on eye movement data. Document parts can be annotated as being read or skimmed. We further explain ideas how such gaze-based document annotations could enhance document-centered office work in the future.

[20] Managing a document-based information space Visualization / Deller, Matthias / Agne, Stefan / Ebert, Achim / Dengel, Andreas / Hagen, Hans / Klein, Bertin / Bender, Michael / Bernardin, Tony / Hamann, Bernd Proceedings of the 2008 International Conference on Intelligent User Interfaces 2008-01-13 p.119-128
ACM Digital Library Link
Summary: We present a novel user interface in the form of a complementary virtual environment for managing personal document archives, i.e., for document filing and retrieval. Our implementation of a spatial medium for document interaction, exploratory search and active navigation plays to the strengths of human visual information processing and further stimulates it.
    Our system provides a high degree of immersion so that the user readily forgets the artificiality of our environment. Three well-integrated features support this immersion: first, we enable users to interact more naturally through gestures and postures (the system can be taught custom ones); second, we exploit 3D display technology; and third, we allow users to manage arrangements (manually edited structures, as well as computer-generated semantic structures). Our ongoing evaluation indicates that even non-expert users can efficiently work with the information in a document collection and that the process can actually be enjoyable.

[21] Human-centered interaction with documents Regular contributions / Dengel, Andreas / Agne, Stefan / Klein, Bertin / Ebert, Achim / Deller, Matthias Proceedings of the 2006 ACM International Workshop on Human-Centered Multimedia 2006-10-27 p.35-44
Keywords: 3D displays, 3D user interface, data glove, gesture recognition, immersion
ACM Digital Library Link
Summary: In this paper, we discuss a new user interface, a complementary environment for working with personal document archives, i.e. for document filing and retrieval. We introduce our implementation of a spatial medium for document interaction, explorative search and active navigation, which exploits and further stimulates the human strengths of visual information processing. Our system achieves a high degree of user immersion, so that he/she forgets the artificiality of the environment. This is achieved through a tripartite ensemble: allowing users to interact naturally with gestures and postures (optionally, users can teach individual gestures and postures to the system), exploiting 3D technology, and supporting the user in maintaining structures he/she discovers as well as providing computer-calculated semantic structures. Our ongoing evaluation shows that even non-expert users can efficiently work with the information in a document collection, and have fun.

[22] An Approach to Integrated Office Document Processing & Management Posters / Mattos, Nelson M. / Mitschang, Bernhard / Dengel, Andreas / Bleisinger, Rainer Proceedings of the Conference on Office Automation Systems 1990-04-25 p.118-122
Summary: We propose an approach towards an integrated document processing and management system intended to capture essentially freely structured documents, like those typically used in the office domain. The document analysis system ANASTASIL is capable of revealing the structure as well as the contents of complex paper documents. Moreover, it facilitates the handling of the contained information. Analyzed documents are stored in the management system KRISYS, which is connected to several different subsequent services. The described system can be considered an ideal extension of the human clerk, making his tasks in information processing easier. The symbolic representation of the analysis results allows an easy transformation into a given international standard, e.g., ODA/ODIF or SGML, and interchange via global networks.