
Proceedings of ADCS'13, Australasian Document Computing Symposium

Fullname: Proceedings of the 18th Australasian Document Computing Symposium
Editors: Shane Culpepper; Guido Zuccon; Laurianne Sitbon
Location: Brisbane, Australia
Dates: 2013-Dec-05 to 2013-Dec-06
Publisher: ACM
Standard No: ISBN: 978-1-4503-2524-0; ACM DL: Table of Contents; hcibib: ADCS13
Papers: 18
Pages: 116
Economic models of search 1
  Leif Azzopardi
  Searching is inherently an interactive process, usually requiring a number of queries to be submitted and a number of documents to be assessed in order to find the desired amount of relevant information. While numerous models of search have been proposed, they have been largely conceptual in nature, providing a descriptive account of the search process. For example, Bates' Berry Picking metaphor aptly describes how information seekers forage for relevant information [4]; however, it lacks any predictive or explanatory power. In this talk, I will outline how microeconomic theory can be applied to interactive information retrieval, where the search process is viewed as a combination of inputs (i.e. queries and assessments) which are used to "produce" output (i.e. relevance). Under this view, it is possible to build models that not only describe the relationship between interaction, cost and gain, but also explain and predict behaviour. During the talk, I will run through a number of examples of how economics can explain different behaviours: for example, why PhD students should search more than their supervisors (using an economic model developed by Cooper [6]), why queries are short [1], why Boolean searchers need to explore more results, and why it is okay to look at the first few results when searching the web [2]. I shall then describe how the costs of different interactions affect search behaviour [3], before extending the current theory to include other variables (such as the time spent on the search result page, the interaction with snippets, etc.) to create more sophisticated and realistic models. Essentially, I will argue that by using such models we can (an illustrative cost/gain sketch follows the list below):
  • 1. theorise and predict how users will behave when interacting with systems,
  • 2. ascertain how the costs of different interactions will influence search behaviour,
  • 3. understand why particular interaction styles, strategies and techniques are or are not adopted by users, and
  • 4. determine what interactions and functionalities are worthwhile based on their expected gain and associated costs.
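    As an illustrative aside (not part of the talk itself), the cost/gain view can be sketched numerically. The Cobb-Douglas-style gain function, the per-action costs and every constant below are assumptions chosen purely for illustration.

      # Toy cost/gain model of interactive search (illustrative sketch only).
      # The functional form and all constants here are assumptions.

      def gain(queries, assessments, k=2.0, alpha=0.4, beta=0.6):
          """Relevance 'produced' from queries issued and documents assessed."""
          return k * (queries ** alpha) * (assessments ** beta)

      def cost(queries, assessments, cost_q=10.0, cost_a=2.0):
          """Total interaction cost in seconds, with assumed per-action costs."""
          return queries * cost_q + assessments * cost_a

      # Two strategies: few queries with many assessments vs. the reverse.
      for q, a in [(2, 40), (10, 8)]:
          print(f"queries={q:2d} assessments={a:2d} "
                f"gain={gain(q, a):5.2f} cost={cost(q, a):6.1f}s")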
    Using eye tracking for evaluating web search interfaces 2-9
      Hilal Al Maqbali; Falk Scholer; James A. Thom; Mingfang Wu
    Using eye tracking in the evaluation of web search interfaces can provide rich information on users' information search behaviour, particularly regarding user interaction with different informative components on a search results screen. One of the main issues affecting the use of eye tracking in research is the quality of captured eye movements (calibration); therefore, in this paper we propose a method for determining the quality of calibration, since the existing eye tracking system (Tobii Studio) does not provide any criteria for this aspect. Another issue addressed in this paper is the adaptation of gaze direction: we display a black screen for 3 seconds between screens to avoid the effect of the previous screen on the user's gaze direction on the following screen. A further issue when employing eye tracking in the evaluation of web search interfaces is the selection of an appropriate filter for the raw gaze-point data. In our studies, we filtered this data by removing noise, identifying gaze points that occur in Areas of Interest (AOIs), optimising gaze data, and identifying viewed AOIs.
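    As an illustrative aside (not the authors' implementation), AOI filtering of the kind described above can be sketched as a point-in-rectangle test; the gaze-point and AOI representations below are assumptions.

      # Illustrative sketch: keep only gaze points that fall inside a named
      # Area of Interest (AOI), modelled here as an axis-aligned rectangle.

      def point_in_aoi(x, y, aoi):
          left, top, width, height = aoi
          return left <= x <= left + width and top <= y <= top + height

      def viewed_aois(gaze_points, aois):
          """Map each AOI name to the gaze points that landed inside it."""
          hits = {name: [] for name in aois}
          for (x, y) in gaze_points:
              for name, rect in aois.items():
                  if point_in_aoi(x, y, rect):
                      hits[name].append((x, y))
          return {name: pts for name, pts in hits.items() if pts}

      aois = {"snippet_1": (100, 200, 600, 80), "snippet_2": (100, 300, 600, 80)}
      print(viewed_aois([(150, 220), (900, 50), (120, 310)], aois))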
    Efficient top-k retrieval with signatures 10-17
      Timothy Chappell; Shlomo Geva; Anthony Nguyen; Guido Zuccon
    This paper describes a new method of indexing and searching large binary signature collections to efficiently find similar signatures, addressing the scalability problem in signature search. Signatures offer efficient computation with an acceptable measure of similarity in numerous applications. However, performing a complete search with a given search argument (a signature) requires a Hamming distance calculation against every signature in the collection. This quickly becomes excessive when dealing with large collections, presenting issues of scalability that limit their applicability.
       Our method efficiently finds similar signatures in very large collections, trading memory use and precision for greatly improved search speed. Experimental results demonstrate that our approach is capable of finding a set of nearest signatures to a given search argument with a high degree of speed and fidelity.
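    For context, a minimal sketch of the exhaustive baseline the paper improves on is given below: signatures are treated as integer bitstrings, Hamming distance is a popcount of their XOR, and the k nearest signatures are found by a full scan. The toy collection and the use of plain integers are assumptions for illustration.

      import heapq

      # Illustrative sketch: the exhaustive nearest-signature search that the
      # paper's index structure is designed to avoid. Toy integer bitstrings.

      def hamming(a, b):
          return bin(a ^ b).count("1")

      def nearest_signatures(query, collection, k=3):
          """Return the k signatures closest to 'query' by Hamming distance."""
          return heapq.nsmallest(k, collection, key=lambda s: hamming(query, s))

      collection = [0b101010, 0b101011, 0b000000, 0b111111, 0b101110]
      print(nearest_signatures(0b101010, collection, k=2))   # [42, 43]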
    An enterprise search paradigm based on extended query auto-completion: do we still need search and navigation? 18-25
      David Hawking; Kathy Griffiths
    Enterprise query auto-completion (QAC) can allow website or intranet visitors to satisfy a need more efficiently than traditional searching and browsing. The limited scope of an enterprise makes it possible to satisfy a high proportion of information needs through completion. Further, the availability of structured sources of completions such as product catalogues compensates for sparsity of log data. Extended forms (X-QAC) can give access to information that is inaccessible via a conventional crawled index.
       We show that it can be guaranteed that for every suggestion there is a prefix which causes it to appear in the top k suggestions. Using university query logs and structured lists, we quantify the significant keystroke savings attributable to this guarantee (worst case). Such savings may be of particular value for mobile devices. A user experiment showed that a staff lookup task took an average of 61% longer with a conventional search interface than with an X-QAC system.
       Using wine catalogue data we demonstrate a further extension which allows a user to home in on desired items in faceted-navigation style. We also note that advertisements can be triggered from QAC.
       Given the advantages and power of X-QAC systems, we envisage that websites and intranets of the [near] future will provide less navigation and rely less on conventional search.
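    As an illustrative aside (not the X-QAC system itself), the basic top-k prefix completion behaviour discussed above can be sketched as a ranked prefix lookup; the suggestion list and popularity scores below are assumptions.

      # Illustrative sketch only: ranked prefix lookup for query auto-completion.
      # Real systems index suggestions (e.g. in a trie); this linear scan just
      # shows top-k-by-prefix behaviour. Suggestions and scores are toy data.

      suggestions = {                 # suggestion -> assumed popularity score
          "parking permit": 120,
          "parking map": 95,
          "part-time enrolment": 60,
          "staff directory": 300,
      }

      def complete(prefix, k=3):
          matches = [(s, w) for s, w in suggestions.items() if s.startswith(prefix)]
          return [s for s, _ in sorted(matches, key=lambda x: -x[1])][:k]

      print(complete("par"))   # ['parking permit', 'parking map', 'part-time enrolment']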
    Classifying microblogs for disasters 26-33
      Sarvnaz Karimi; Jie Yin; Cecile Paris
    Monitoring social media in critical disaster situations can potentially assist emergency and media personnel to deal with events as they unfold, and focus their resources where they are most needed. We address the issue of filtering massive amounts of Twitter data to identify high-value messages related to disasters, and to further classify disaster-related messages into those pertaining to particular disaster types, such as earthquake, flooding, fire, or storm. Unlike the post-hoc analysis carried out in most previous studies, we focus on building a classification model from past incidents to detect tweets about current incidents. Our experimental results demonstrate the feasibility of using classification methods to identify disaster-related tweets. We analyse the effect of different features in classifying tweets and show that using generic features rather than incident-specific ones leads to better generalisation when classifying tweets from unseen incidents.
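    As an illustrative aside, the contrast between incident-specific and generic features can be sketched by replacing incident-specific tokens (place names, hashtags) with generic placeholders before training; the placeholder names and the tiny lexicon below are assumptions, not the authors' feature set.

      import re

      # Illustrative sketch: map incident-specific tokens to generic placeholders
      # so a classifier trained on past incidents can generalise to unseen ones.
      # The lexicon and placeholder names are assumptions.

      LOCATIONS = {"brisbane", "christchurch", "queensland"}
      HASHTAG = re.compile(r"#\w+")

      def generic_features(tweet):
          text = HASHTAG.sub("<hashtag>", tweet.lower())
          return ["<location>" if t in LOCATIONS else t for t in text.split()]

      print(generic_features("Flooding near Brisbane CBD #qldfloods"))
      # ['flooding', 'near', '<location>', 'cbd', '<hashtag>']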
    ADCS reaches adulthood: an analysis of the conference and its community over the last eighteen years 34-41
      Bevan Koopman; Guido Zuccon; Lance De Vine; Aneesha Bakharia; Peter Bruza; Laurianne Sitbon; Andrew Gibson
    How influential is the Australasian Document Computing Symposium (ADCS)? What do ADCS articles speak about, and who cites them? Who is the ADCS community and how has it evolved?
       This paper considers eighteen years of ADCS, investigating both the conference and its community. A content analysis of the proceedings uncovers the diversity of topics covered in ADCS and how these have changed over the years. Citation analysis reveals the impact of the papers. The number of authors and where they originate from reveal who has contributed to the conference. Finally, we generate co-author networks which reveal the collaborations within the community. These networks show how clusters of researchers form, the effect geographic location has on collaboration, and how these have evolved over time.
    Merging algorithms for enterprise search 42-49
      PengFei (Vincent) Li; Paul Thomas; David Hawking
    Effective enterprise search must draw on a number of sources -- for example web pages, telephone directories, and databases. Doing this means we need a way to make a single sorted list from results of very different types.
       Many merging algorithms have been proposed, but none has been applied to this realistic application. We report the results of an experiment which simulates heterogeneous enterprise retrieval, in a university setting, and uses multi-grade expert judgements to compare merging algorithms. Merging algorithms considered include several variants of round-robin, several methods proposed by Rasolofo et al. in the Current News Metasearcher, and four novel variations including a learned multi-weight method.
       We find that the round-robin methods and one of the Rasolofo methods perform significantly worse than the others. The GDS_TS method of Rasolofo achieves the highest average NDCG@10 score, but the differences between it and the other GDS methods, local reranking, and the multi-weight method were not significant.
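    For context, a minimal sketch of plain round-robin merging, one of the baseline families compared above, is given below; the per-source result lists are toy data and the duplicate handling is an assumption.

      from itertools import zip_longest

      # Illustrative sketch: round-robin merging of per-source result lists.
      # A result seen in more than one list keeps its first (highest) position.

      def round_robin(result_lists):
          merged, seen = [], set()
          for tier in zip_longest(*result_lists):
              for doc in tier:
                  if doc is not None and doc not in seen:
                      seen.add(doc)
                      merged.append(doc)
          return merged

      web = ["w1", "w2", "w3"]
      people = ["p1", "p2"]
      courses = ["c1"]
      print(round_robin([web, people, courses]))  # ['w1', 'p1', 'c1', 'w2', 'p2', 'w3']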
    Power walk: revisiting the random surfer 50-57
      Laurence A. F. Park; Simeon Simoff
    Measurement of graph centrality provides an indication of the importance or popularity of each vertex in a graph. When dealing with graphs that are not centrally controlled (such as the Web, social networks and academic citation graphs), a centrality measure must 1) correlate with vertex importance/popularity, 2) scale well in terms of computation, and 3) be difficult for individuals to manipulate. The Random Surfer probability transition model, combined with Eigenvalue Centrality, produced PageRank, which has been shown to satisfy the required properties. Existing centrality measures (including PageRank) make the assumption that all directed edges are positive, implying an endorsement. Recent work on sentiment analysis has shown that this assumption is not valid. In this article, we introduce a new method of transitioning through a graph, called Power Walk, that can successfully compute centrality scores for graphs with real-weighted edges. We show that it satisfies the desired properties, and that its computation time and centrality ranking are similar to those of the Random Surfer model for non-negative matrices. Finally, stability and convergence analysis shows that both stability and convergence under the power method depend on the Power Walk parameter β.
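    For context, a minimal sketch of the standard Random Surfer model that Power Walk generalises is given below: PageRank-style power iteration over a toy adjacency-list graph with non-negative edges. It is not the Power Walk method itself, and the damping factor and graph are assumptions.

      # Illustrative sketch: Random Surfer power iteration (PageRank-style) on a
      # toy graph with non-negative edges. Not the Power Walk method itself.

      def random_surfer(out_links, damping=0.85, iters=50):
          nodes = list(out_links)
          n = len(nodes)
          rank = {v: 1.0 / n for v in nodes}
          for _ in range(iters):
              new = {v: (1.0 - damping) / n for v in nodes}
              for v, targets in out_links.items():
                  if targets:
                      share = damping * rank[v] / len(targets)
                      for t in targets:
                          new[t] += share
                  else:                       # dangling vertex: spread uniformly
                      for t in nodes:
                          new[t] += damping * rank[v] / n
              rank = new
          return rank

      graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
      print(random_surfer(graph))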
    Exploring the magic of WAND 58-65
      Matthias Petri; J. Shane Culpepper; Alistair Moffat
    Web search services process thousands of queries per second, and filter their answers from collections containing very large amounts of data. Fast response to queries is a critical service expectation. The well-known WAND processing strategy is one way of reducing the amount of computation necessary when executing such a query. The value of WAND has now been validated in a wide range of studies, and it has become one of the key baselines against which all new top-k processing algorithms are benchmarked. However, most previous implementations of WAND-based retrieval approaches have been in the context of the Okapi BM25 similarity scoring regime. Here we measure the performance of WAND in the context of the alternative Language Model similarity score computation, and find that the dramatic efficiency gains reported in previous studies are no longer achievable. That is, when the primary goal of a retrieval system is to maximize effectiveness, WAND is relatively unhelpful in terms of attaining the secondary objective of maximizing query throughput rates. However, the BM-WAND algorithm does in fact help reduce the percentage of postings to be scored, but with additional computational overhead. We explore a variety of tradeoffs between scoring metric and processing regime and present new insight into how score-safe algorithms interact with rank scoring.
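    As an illustrative aside, the score-safe pruning idea behind WAND can be sketched in a much-simplified form: a document is skipped without full scoring when the sum of its per-term score upper bounds cannot beat the current top-k threshold. This is not the full pivot-based WAND algorithm, and the postings and scores below are toy data.

      import heapq

      # Much-simplified sketch of the score-safe pruning idea behind WAND (not
      # the full pivot-based algorithm). Postings, term scores and upper bounds
      # are toy data.

      postings = {                       # term -> {docid: precomputed term score}
          "wand":  {1: 1.2, 3: 0.8, 7: 2.1},
          "magic": {2: 0.5, 3: 1.9, 9: 0.4},
      }
      upper = {t: max(s.values()) for t, s in postings.items()}   # per-term maxima

      def top_k(query_terms, k=2):
          heap = []                      # min-heap of (score, docid) for the top k
          candidates = sorted({d for t in query_terms for d in postings.get(t, {})})
          for doc in candidates:
              hit_terms = [t for t in query_terms if doc in postings.get(t, {})]
              bound = sum(upper[t] for t in hit_terms)
              if len(heap) == k and bound <= heap[0][0]:
                  continue               # cannot enter the top k: skip full scoring
              score = sum(postings[t][doc] for t in hit_terms)
              if len(heap) < k:
                  heapq.heappush(heap, (score, doc))
              elif score > heap[0][0]:
                  heapq.heapreplace(heap, (score, doc))
          return sorted(heap, reverse=True)

      print(top_k(["wand", "magic"]))    # documents 3 and 7 rank highest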
    Integrated instance- and class-based generative modeling for text classification 66-73
      Antti Puurula; Sung-Hyon Myaeng
    Statistical methods for text classification are predominantly based on the paradigm of class-based learning that associates class variables with features, discarding the instances of data after model training. This results in efficient models, but neglects the fine-grained information present in individual documents. Instance-based learning uses this information, but suffers from data sparsity with text data. In this paper, we propose a generative model called Tied Document Mixture (TDM) for extending Multinomial Naive Bayes (MNB) with mixtures of hierarchically smoothed models for documents. Alternatively, TDM can be viewed as a Kernel Density Classifier using class-smoothed Multinomial kernels. TDM is evaluated for classification accuracy on 14 different datasets for multi-label, multi-class and binary-class text classification tasks and compared to instance- and class-based learning baselines. The comparisons to MNB demonstrate a substantial improvement in accuracy as a function of available training documents per class, ranging up to average error reductions of over 26% in sentiment classification and 65% in spam classification. On average TDM is as accurate as the best discriminative classifiers, but retains the linear time complexities of instance-based learning methods, with exact algorithms for both model estimation and inference.
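    For context, a minimal sketch of the Multinomial Naive Bayes baseline that TDM extends is given below, using add-one smoothing; it is not the Tied Document Mixture model itself, and the toy training data is an assumption.

      import math
      from collections import Counter, defaultdict

      # Illustrative sketch of the Multinomial Naive Bayes baseline, with
      # add-one smoothing. Not the Tied Document Mixture model itself.

      def train_mnb(docs):                        # docs: list of (label, tokens)
          class_counts = Counter(label for label, _ in docs)
          word_counts = defaultdict(Counter)
          vocab = set()
          for label, tokens in docs:
              word_counts[label].update(tokens)
              vocab.update(tokens)
          return class_counts, word_counts, vocab

      def classify(tokens, model):
          class_counts, word_counts, vocab = model
          total_docs = sum(class_counts.values())
          best = None
          for label in class_counts:
              total_words = sum(word_counts[label].values())
              logp = math.log(class_counts[label] / total_docs)
              for w in tokens:
                  logp += math.log((word_counts[label][w] + 1) /
                                   (total_words + len(vocab)))
              if best is None or logp > best[0]:
                  best = (logp, label)
          return best[1]

      model = train_mnb([("spam", ["win", "cash", "now"]),
                         ("ham",  ["meeting", "at", "noon"])])
      print(classify(["cash", "now"], model))     # spam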
    Choices in batch information retrieval evaluation 74-81
      Falk Scholer; Alistair Moffat; Paul Thomas
    Web search tools are used on a daily basis by billions of people. The commercial providers of these services spend large amounts of money measuring their own effectiveness and benchmarking against their competitors; nothing less than their corporate survival is at stake. Techniques for offline or "batch" evaluation of search quality have received considerable attention, spanning ways of constructing relevance judgments; ways of using them to generate numeric scores; and ways of inferring system "superiority" from sets of such scores.
       Our purpose in this paper is to consider these mechanisms as a chain of inter-dependent activities, in order to explore some of the ramifications of alternative components. By disaggregating the different activities, and asking what the ultimate objective of the measurement process is, we provide new insights into evaluation approaches, and are able to suggest new combinations that might prove fruitful avenues for exploration. Our observations are examined with reference to data collected from a user study covering 34 users undertaking a total of six search tasks each, using two systems of markedly different quality.
       We hope to encourage broader awareness of the many factors that go into an evaluation of search effectiveness, and of the implications of these choices, and encourage researchers to carefully report all aspects of the evaluation process when describing their system performance experiments.
    Conditional collocation in Japanese 82-88
      Takumi Sonoda; Takao Miura
    Collocation analysis is an important task in Natural Language Processing (NLP). From a linguistic perspective, collocation provides us with a way to place words close together in a natural manner. Through this approach, we can examine the deep structure of semantics through words and the situations in which they occur. Although there have been some investigations based on co-occurrence, there has been little discussion of conditional collocation. In this investigation, we discuss a computational approach to extracting conditional collocations using data mining and statistical techniques.
    Visual summarisation of text for surveillance and situational awareness in hospitals 89-96
      Hanna Suominen; Leif Hanlen
    Nosocomial infections (NIs, any infection that a patient contracts in a healthcare institution) cost 100,000 lives and five billion dollars per year for 300 million Americans alone. Surveillance in hospitals holds the potential of reducing NI rates by more than thirty per cent, but performing this task by hand is impossible at the scale of every appointment, examination, intervention, and other event in healthcare. Narratives in patient records can indicate NIs, and their automated processing could scale out surveillance. This paper describes a text summarisation system for NI surveillance and situational awareness in hospitals. The system is a cascaded sentence, report, and patient classifier. It generates three types of visual summaries for an input of patient narratives and ward maps: cross-sectional statuses at the same point in time, longitudinal trends over time, and highlighted text showing the textual evidence leading to a given status or trend. This gives evidence for and against a given NI at the levels of hospitals, wards, patients, reports, and sentences. The system has excellent recall and precision (e.g., 0.95 and 0.71 for reports) in summarisation for the subset of NIs from fungal species on 1,880 authentic records of 527 patients from 3 hospitals. To demonstrate the system design, we have developed a mobile, iPad-compatible web application and a simulation with eighteen patients on three medical wards in one hospital during one month, with 61 records in total. The design is extendable to other summarisation tasks.
    Quality biased thread retrieval using the voting model 97-100
      Ameer Tawfik Albaham; Naomie Salim
    Thread retrieval is an essential tool in knowledge-based forums. However, forum content quality varies from excellent to mediocre and spam; thus, search methods should find not only relevant threads but also those with high quality content. Some studies have shown that leveraging quality indicators improves thread search. However, these studies ignored the hierarchical and conversational structures of threads in estimating topical relevance and content quality. In that regard, this paper introduces the use of message-level quality indicators in ranking threads. To achieve this, we first use the Voting Model to convert message-level quality features into thread-level features. We then train a learning-to-rank method to combine these thread-level features. Preliminary results with some features reveal that representing threads as collections of messages is superior to treating them as concatenations of their messages. The results also show the utility of leveraging message content quality compared to non-quality-based methods.
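    As an illustrative aside, the general idea of the Voting Model can be sketched as messages "voting" for the thread that contains them, with their scores aggregated (CombSUM-style here) into a thread-level score; the scores and the choice of CombSUM aggregation are assumptions.

      # Illustrative sketch: messages vote for their parent thread, and message
      # scores are summed (CombSUM-style) into a thread score. Toy data only.

      def rank_threads(message_scores, thread_of):
          thread_score = {}
          for msg, score in message_scores.items():
              thread = thread_of[msg]
              thread_score[thread] = thread_score.get(thread, 0.0) + score
          return sorted(thread_score.items(), key=lambda kv: -kv[1])

      message_scores = {"m1": 0.9, "m2": 0.2, "m3": 0.8, "m4": 0.5}
      thread_of = {"m1": "t1", "m2": "t1", "m3": "t2", "m4": "t2"}
      print(rank_threads(message_scores, thread_of))  # thread t2 ranks above t1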
    Malformed UTF-8 and spam 101-104
      Matt Crane; Andrew Trotman; Richard O'Keefe
    In this paper we discuss some of the document encoding errors that were found when scaling our indexer and search engine up to large collections crawled from the web, such as ClueWeb09. We describe these encoding errors, the effect they can have on indexing and searching, how they are processed within our indexer and search engine, and how they relate to the quality of the page as measured by another method.
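    As an illustrative aside (not the handling used inside the authors' indexer), one common way to detect malformed UTF-8 is to attempt a strict decode and fall back to replacement characters:

      # Illustrative sketch: detecting malformed UTF-8 with Python's strict
      # decoder and falling back to replacement characters. Not the authors'
      # in-indexer handling.

      def check_utf8(raw_bytes):
          try:
              return raw_bytes.decode("utf-8"), True
          except UnicodeDecodeError:
              return raw_bytes.decode("utf-8", errors="replace"), False

      good = "café".encode("utf-8")
      bad = b"caf\xe9"                    # a Latin-1 byte masquerading as UTF-8
      print(check_utf8(good))             # ('café', True)
      print(check_utf8(bad))              # ('caf\ufffd', False)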
    Crisis management knowledge from social media 105-108
      Karl Kreiner; Aapo Immonen; Hanna Suominen
    More and more crisis managers, crisis communicators and laypeople use Twitter and other social media to provide or seek crisis information. In this paper, we focus on retrospective conversion of human-safety related data to crisis management knowledge. First, we study how Twitter data can be classified into the seven categories of the United Nations Development Program Security Model (i.e., Food, Health, Politics, Economic, Personal, Community, and Environment). We conclude that these topic categories are applicable, and that supplementing them with classification of individual authors into more generic sources of data (i.e., Official authorities, Media, and Laypeople) allows curating data and assessing crisis maturity. Second, we introduce automated classifiers, based on supervised learning and decision rules, for both tasks and evaluate their correctness. This evaluation uses two datasets collected during the 2011 Queensland floods and New Zealand earthquake crises. The topic classifier performs well in the major categories (i.e., 120-190 training instances) of Economic (F = 0.76) and Community (F = 0.67), while in the minor categories (i.e., 0-60 training instances) the results are more modest (F ≤ 0.41). The source classifier shows excellent results (F ≥ 0.83) in all categories.
    Towards information retrieval evaluation with reduced and only positive judgements 109-112
      Diego Mollá; David Martinez; Iman Amini
    This paper proposes a document distance-based approach to automatically expand the number of available relevance judgements when those are limited and reduced to only positive judgements. This may happen, for example, when the only available judgements are extracted from a list of references in a published clinical systematic review. We show that evaluations based on these expanded relevance judgements are more reliable than those using only the initially available judgements. We also show the impact of such an evaluation approach as the number of initial judgements decreases.
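    For context, the general idea can be sketched as follows: unjudged documents that are sufficiently similar to the known positive judgements are treated as relevant too. The bag-of-words cosine similarity and the 0.5 threshold below are assumptions, not the authors' distance measure.

      import math
      from collections import Counter

      # Illustrative sketch: expand positive-only relevance judgements by
      # document similarity. Representation and threshold are assumptions.

      def cosine(a, b):
          dot = sum(a[t] * b[t] for t in a if t in b)
          norm = math.sqrt(sum(v * v for v in a.values())) * \
                 math.sqrt(sum(v * v for v in b.values()))
          return dot / norm if norm else 0.0

      def expand_judgements(positives, unjudged, threshold=0.5):
          pos_vecs = [Counter(d.split()) for d in positives]
          return [doc for doc in unjudged
                  if any(cosine(Counter(doc.split()), p) >= threshold
                         for p in pos_vecs)]

      positives = ["randomised trial of statin therapy"]
      unjudged = ["statin therapy randomised follow-up trial", "weather report"]
      print(expand_judgements(positives, unjudged))   # keeps the first document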
    Managing short postings lists 113-116
      Andrew Trotman; Xiang-Fei Jia; Matt Crane
    Previous work has examined space-saving and throughput-increasing techniques for long postings lists in an inverted file search engine. In this contribution we show that highly sporadic terms (terms that occur in only 1 or 2 documents) make up a high proportion of the unique terms in the collection, and that these terms do appear in queries. The previously known space-saving method of storing their short postings lists in the vocabulary is compared to storing them in the postings file. We quantify the saving at about 6.5%, with no loss in precision, and suggest the adoption of this technique.
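    As an illustrative aside, the space-saving idea can be sketched as follows: a term occurring in at most two documents keeps its postings inline in the vocabulary entry, while longer lists are stored in the postings file and addressed by offset. The in-memory dictionaries below stand in for the on-disk structures and are an assumption, not the authors' implementation.

      # Illustrative sketch: short postings lists (1-2 documents) are stored
      # inline in the vocabulary; longer lists live in the postings file.

      postings_file = []        # long postings lists, addressed by offset
      vocabulary = {}           # term -> ("inline", [docids]) or ("offset", index)

      def add_term(term, docids):
          if len(docids) <= 2:
              vocabulary[term] = ("inline", docids)
          else:
              vocabulary[term] = ("offset", len(postings_file))
              postings_file.append(docids)

      def lookup(term):
          kind, value = vocabulary[term]
          return value if kind == "inline" else postings_file[value]

      add_term("hapax", [42])
      add_term("retrieval", [1, 5, 9, 12])
      print(lookup("hapax"), lookup("retrieval"))   # [42] [1, 5, 9, 12]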