IJHCS Tables of Contents: 40 41 42 43 44 45 46 47 48 49 50 51 52 53

International Journal of Human-Computer Studies 43

Editors: B. R. Gaines
Publisher: Academic Press
Standard No: ISSN 0020-7373; TA 167 A1 I5
Links: Table of Contents
  1. IJHCS 1995 Volume 43 Issue 1
  2. IJHCS 1995 Volume 43 Issue 2
  3. IJHCS 1995 Volume 43 Issue 3
  4. IJHCS 1995 Volume 43 Issue 4
  5. IJHCS 1995 Volume 43 Issue 5/6

IJHCS 1995 Volume 43 Issue 1

The Role of Flight Progress Strips in En Route Air Traffic Control: A Time-Series Analysis BIBA 1-13
  Mark B. Edwards; Dana K. Fuller; O. U. Vortac; Carol A. Manning
Paper flight progress strips (FPSs) are currently used in the United States en route air traffic control system to document flight information. Impending automation will replace these paper strips with electronic flight data entries. In this observational study, control actions, communication events, and computer interactions were recorded and analysed using time-series regression models. Regression models were developed to predict FPS activities (writing, manipulating, looking) at different levels of traffic complexity, for individuals and teams of air traffic controllers. Results indicated that writing was well predicted by a common, simple time-series equation. The ability to predict FPS manipulations was modest, but prediction of looking at FPSs was poor. Overall, these data indicate that (1) flight strip activities were similar for individuals and for the data-side controllers in the team (whose primary responsibility is the strips), and (2) flight strip activity for teams was predictable from the radar-side controller's actions but not the data-side controller's actions.
StEP(3D): A Standardized Evaluation Plan for Three-Dimensional Interaction Techniques BIBASummary 15-41
  Scott B. Grissom; Gary Perlman
Usability evaluation is a critical component of software development. However, the skills necessary to develop a valid and reliable evaluation plan may deter some organizations from performing usability evaluations. These organizations would benefit from having an evaluation plan available to them that was already designed for their needs. A standardized evaluation plan (StEP) is designed to evaluate or compare a wide variety of systems that share certain capabilities. StEPs are developed for a specific domain by usability specialists. These plans can then be used by evaluators with limited experience or facilities because the skills necessary to use a StEP are not as demanding as the skills needed to develop a StEP.
   Techniques have been proposed to make three-dimensional interfaces more flexible and responsive to the user, but the usability of these techniques has generally not been evaluated empirically. StEP(3D), a standardized evaluation plan for the usability of three-dimensional interaction techniques, combines performance-based evaluation with a user satisfaction questionnaire. It is designed to be portable and simple enough that evaluators can make comparisons of three-dimensional interaction techniques without special equipment or experience. It evaluates the usability of interaction techniques for performing quick and unconstrained three-dimensional manipulations. Two empirical experiments are reported that demonstrate the reliability and validity of StEP(3D). Experiment 1 shows StEP(3D) is appropriate for comparing techniques on different hardware platforms during summative evaluations. Experiment 2 shows StEP(3D) is sensitive enough to detect subtle changes in an interface during formative design.
   We make recommendations for developing StEPs based on data we collected and on our experiences with the development of StEP(3D). However, the recommendations are not limited to three-dimensional interaction techniques. Most of the recommendations apply to the development of StEPs in any domain and address issues such as portability, participant selection, experiment protocol and procedures, and usability measures. A collection of StEPs designed for particular domains and purposes would provide a library of reusable evaluation plans. This reusable approach to usability evaluation should reduce the cost of evaluations because organizations are able to take advantage of previously designed plans. At the same time, this approach should improve the quality of usability evaluations because StEPs are developed and validated by usability specialists.
Consultant-2: Pre- and Post-Processing of Machine Learning Applications BIBA 43-63
  D. Sleeman; M. Rissakis; S. Craw; N. Graner; S. Sharma
The knowledge acquisition bottleneck in the development of large knowledge-based applications has not yet been resolved. One approach which has been advocated is the systematic use of Machine Learning (ML) techniques. However, ML technology poses difficulties to domain experts and knowledge engineers who are not familiar with it. This paper discusses Consultant-2, a system which makes a first step towards providing system support for a "pre- and post-processing" methodology where a cyclic process of experiments with an ML tool, its data, data description language and parameters attempts to optimize learning performance.
   Consultant-2 has been developed to support the use of the Machine Learning Toolbox (MLT), an integrated architecture of 10 ML tools, and has evolved from a series of earlier systems. Consultant-0 and Consultant-1 had knowledge only about how to choose an ML algorithm based on the nature of the domain data. Consultant-2 is the most sophisticated: in addition, it has knowledge about how ML experts and domain experts pre-process domain data before a run with the ML algorithm, and how they further manipulate the data and reset parameters after a run of the selected ML algorithm to achieve a more acceptable result. How these several KBs were acquired and encoded is described. This knowledge has been acquired by interacting both with the ML algorithm developers and with domain experts who had been using the MLT toolbox on real-world tasks. A major aim of the MLT project was to enable a domain expert to use the toolbox directly, i.e. without necessarily having to involve either an ML specialist or a knowledge engineer. Consultant's principal goal was to provide specific advice to ease this process.
A Comprehension-Based Model of Correct Performance and Errors in Skilled, Display-Based, Human-Computer Interaction BIBA 65-99
  Muneo Kitajima; Peter G. Polson
This paper describes a computational model of skilled use of an application with a graphical user interface. The model provides a principled explanation of action slips, errors made by experienced users. The model is based on Hutchins, Hollan and Norman's analysis of direct manipulation and is implemented using Kintsch and Mannes's construction-integration theory of action planning. The model attends to a limited number of objects on the screen and then selects an action on one of them, such as moving the mouse cursor, clicking the mouse button, typing letters, and so on, by integrating information from various sources. These sources include the display, task goals, expected display states, and knowledge about the interface and the application domain. The model simulates a graph drawing task. In addition, we describe how the model makes errors even when it is provided with knowledge sufficient to generate correct actions.
Using Interaction Framework to Guide the Design of Interactive Systems BIBA 101-130
  Ann E. Blandford; Michael D. Harrison; Philip J. Barnard
Understanding the properties of interactions is essential to the design of effective interactive systems involving two or more agents, and to the evaluation of existing systems. This understanding can inform the design of multi-agent systems by helping the designer identify properties that a system should conform to. In addition, a focus on the properties of interactions can lead to a better understanding of the space of possibilities, by recognizing features of multi-agent systems which are often simply incidental outcomes of design, not explicitly considered in the design specification. We present an Interaction Framework, in which abstract interactional requirements and properties can be expressed in a way which is not biased towards the perspective of any one agent to the interaction. These can be used to derive requirements on the design of computer systems, to highlight those aspects of users which influence the properties of the interaction, and hence to guide the design of the interactive system.
Structured and Opportunistic Processing in Design: A Critical Discussion BIBA 131-151
  Linden J. Ball; Thomas C. Ormerod
We present a critical discussion of research into the nature of design expertise, in particular evaluating claims that opportunism is a major influence on the behaviour of expert designers. We argue that the notion of opportunism has been under-constrained, and as a consequence the existence of opportunism in expert design has been exaggerated. Much of what has been described as opportunistic design behaviour appears to reflect a mix of breadth-first and depth-first modes of solution development. Whilst acknowledging that opportunities can arise in the design process (e.g. serendipitous solution discovery), such events might equally confirm structured behaviour as cause unstructured behaviour. We argue that the default mode for truly expert designers is typically a top-down and breadth-first approach, since longer-term considerations of cost-effectiveness are more important for expert designers than short-term considerations of cognitive cost. However, there are situations (e.g. when faced with a highly unfamiliar design task) where it is cost-effective for experts to pursue a depth-first mode of solution development. The implications of our analysis for the development of methods and tools to support the design process are also discussed.

IJHCS 1995 Volume 43 Issue 2

Parallel Earcons: Reducing the Length of Audio Messages BIBA 153-175
  Stephen Brewster; Peter C. Wright; Alistair D. N. Edwards
This paper describes a method of presenting structured audio messages, earcons, in parallel so that they take less time to play and can better keep pace with interactions in a human-computer interface. The two component parts of a compound earcon are played in parallel so that the time taken is only that of a single part. An experiment was conducted to test the recall and recognition of parallel compound earcons as compared to serial compound earcons. Results showed that there are no differences in the rates of recognition between the two groups. Non-musicians are also shown to be equal in performance to musicians. Some extensions to the earcon creation guidelines of Brewster, Wright and Edwards are put forward based upon research into auditory stream segregation. Parallel earcons are shown to be an effective means of increasing the presentation rates of audio messages without compromising recognition rates.
A Reformulation Technique and Tool for Knowledge Interchange during Knowledge Acquisition BIBA 177-212
  Ole Jakob Mengshoel
A variety of knowledge acquisition (KA) techniques have proven useful for developing knowledge-based systems (KBSs). Such techniques typically have partially overlapping functionality with respect to the type of knowledge they may be used to acquire. Overlap between KA techniques means that parts of a knowledge base (KB) acquired using one technique may be refined and extended using some other technique; there is a need for knowledge interchange. This need is aggravated by the dynamic KBS development life-cycle, with frequent switches between life-cycle phases and techniques within each phase. Within this environment, it is very hard to predict exactly how knowledge interchange between acquisition techniques should take place. To address this problem, we have developed a knowledge reformulation technique and tool to support the knowledge engineer in performing knowledge interchange. The approach has two main types of functionality. First, it contains functions used for translation from and into knowledge acquisition KBs, using the Knowledge Interchange Format (KIF) as an intermediate language. The translation functions are based on grammars of KA techniques as well as approaches to formalize the knowledge acquisition KBs. We have developed translators for the well-known KA techniques card sort and repertory grid. The translators are written using the definite-clause grammar (DCG) formalism, which is based on Prolog. The second type of function in the knowledge reformulation approach is denoted the adaptation function. The adaptation functions are used for KB organizing, structuring and editing. These functions are utilized by the knowledge engineer to modify KBs before and after translation. In this paper we present the knowledge reformulation technique and tool, along with the methodological basis for the approach. An example of how the Knowledge Reformulation tool (KRF) can be used is also included.
Keyboard User Verification: Toward an Accurate, Efficient, and Ecologically Valid Algorithm BIBA 213-222
  Renee Napier; William Laverty; Doug Mahar; Ron Henderson; Michael Hiron; Michael Wagner
This paper proposes new measures of individual differences in typing behaviour which provide a means of accurately verifying the identity of the typist. A first study examined the efficacy of a multivariate measure of inter-key latencies and a probabilistic discriminator statistic in conjunction with an individual filtering system which eliminates occasional disfluent keystrokes. The results indicate that, under optimum conditions but with a very small test sample, these measures lead to better typist verification than measures suggested earlier by Umphress and Williams and then Leggett and Williams. A second study validated the improved algorithm under more ecologically valid conditions and showed that when training and test sessions were separated by one week, typist verification using the new algorithm achieved combined false-acceptance and false-rejection rates of 0.9% and 3.8% for test samples of 300 and 50 digraphs respectively.
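The digraph-latency idea underlying this kind of typist verification can be illustrated with a minimal sketch. This is not the authors' algorithm: the filtering cutoff, the distance measure, and the acceptance threshold below are all hypothetical simplifications of the multivariate statistic the paper describes.

```python
from statistics import mean

def digraph_latencies(timestamps, text):
    """Map each two-letter digraph to its observed inter-key latencies (ms)."""
    d = {}
    for i in range(len(text) - 1):
        d.setdefault(text[i:i + 2], []).append(timestamps[i + 1] - timestamps[i])
    return d

def filter_disfluent(latencies, cutoff=500):
    """Drop occasional disfluent keystrokes (pauses above a cutoff)."""
    return [t for t in latencies if t <= cutoff]

def verify(profile, sample, threshold=50):
    """Accept the typist if the mean digraph latencies in the test sample
    are, on average, within `threshold` ms of the stored profile."""
    shared = set(profile) & set(sample)
    diffs = [abs(mean(profile[g]) - mean(filter_disfluent(sample[g])))
             for g in shared if filter_disfluent(sample[g])]
    return bool(diffs) and mean(diffs) <= threshold
```

In practice a profile would be built from hundreds of digraphs collected in a training session, and the sample from the 50-300 digraphs of a test session.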
Can Computer Personalities be Human Personalities? BIBA 223-239
  Clifford Nass; Youngme Moon; B. J. Fogg; Byron Reeves; D. Christopher Dryer
The claim that computer personalities can be human personalities was tested by demonstrating that (1) computer personalities can be easily created using a minimal set of cues, and (2) that people will respond to these personalities in the same way they would respond to similar human personalities. The present study focused on the "similarity-attraction hypothesis," which predicts that people will prefer to interact with others who are similar in personality. In a 2 x 2, balanced, between-subjects experiment (n = 48), dominant and submissive subjects were randomly matched with a computer that was endowed with the properties associated with dominance or submissiveness. Subjects recognized the computer's personality type, distinct from friendliness and competence. In addition, subjects not only preferred the similar computer, but they were more satisfied with the interaction. The findings demonstrate that personality does not require richly defined agents, sophisticated pictorial representations, natural language processing, or artificial intelligence. Rather, even the most superficial manipulations are sufficient to exhibit personality, with powerful effects.
Levels and Types of Mediation in Instructional Systems: An Individual Differences Approach BIBA 241-259
  Nigel Ford
Thirty-eight university students were tested for field-dependence/-independence using Riding's computer-administered Cognitive Styles Analysis (CSA). They also learned using computerized versions of Pask and Scott's teaching materials designed to suit holist and serialist learning strategies. It was found that (a) students' holist and serialist competence could be predicted using CSA scores, (b) learning in matched conditions (using instructional materials structured to suit their learning styles) was significantly superior for both holists and serialists than in mismatched conditions, and (c) serialist instructional materials resulted in overall better learning performance and efficiency than did holist materials. Possible reasons for the lack of positive correlations reported in previous studies, along with implications for the development of user models to support the development of adaptive instructional systems, are discussed.
Internet: Which Future for Organized Knowledge, Frankenstein or Pygmalion? BIBA 261-274
  Luciano Floridi
The Internet is like a new country, with a growing population of millions of well educated citizens. If it wants to keep track of its own cultural achievements in real time, it will have to provide itself with an infostructure like a virtual National Library system. This paper proposes that institutions all over the world should take full advantage of the new technologies available, and promote and coordinate such a global service. This is essential in order to make possible a really efficient management of human knowledge on a global scale.
Bulletin BIB 275-277

IJHCS 1995 Volume 43 Issue 3

Knowledge-Based Hypermedia

Editorial: Knowledge-Based Hypermedia BIB 279
  David Madigan
Schema-Based Authoring and Querying of Large Hypertexts BIBA 281-299
  Bernd Amann; Michel Scholl; Antoine Rizk
Modern hypertext applications require new system support for hypertext authoring and user navigation through large sets of documents connected by links. This system support must be based on advanced, typed data models for describing the information structure in different application domains. Schema-based structuring through strongly typed documents and links has already been proposed and put to practical use in a multitude of hypertext applications. Systems such as Multicard/O2 and MORE have moreover exploited conceptual schemas for querying the resulting hyperdocuments in a more structured way. In this paper, we show how hypertext schemas and query languages can be utilized for designing hypertext authoring and browsing environments for large hypertexts. We illustrate our mechanisms using the Gram data model and describe their implementation on top of the Multicard hypermedia system connected to the O2 object-oriented database management system.
Rich Hypertext: A Foundation for Improved Interaction Techniques BIBA 301-321
  Kurt Normark; Kasper Østerbye
Hypertext has broader applications than being just an information browsing method. Hypertext is a framework which allows for a powerful structuring of large amounts of information. Using hypertext concepts, it is possible to model data in such a way that users can manipulate this information in many different ways, and at different levels of abstraction. In this paper the relationship between the internal representation and the external presentations is discussed. In particular, it is illustrated how a well designed internal representation serves as a foundation for specialized interactions, which can be tailored to specific application areas, primarily by taking the underlying types of information into account. The hypertext interaction approach proposed in this paper has been shaped by our experience with a prototype, which has been operational since 1993.
Concept Maps as Hypermedia Components BIBA 323-361
  Brian R. Gaines; Mildred G. Shaw
Concept mapping has a history of use in many disciplines as a formal or semi-formal diagramming technique. Concept maps have an abstract structure as typed hypergraphs, and computer support for concept mapping can associate visual attributes with node types to provide an attractive and consistent appearance. Computer support can also provide interactive interfaces allowing arbitrary actions to be associated with nodes such as hypermedia links to other maps and documents. This article describes a general concept mapping system that has an open architecture for integration with other systems, is scriptable to support arbitrary interactions and computations, and is customizable to emulate many styles of map. The system supports collaborative development of concept maps across local area and wide area networks, and integrates with the World-Wide Web in both client helper and server gateway roles. A number of applications are illustrated ranging through education, artificial intelligence, active documents, hypermedia indexing and concurrent engineering. It is proposed that concept maps be regarded as basic components of any hypermedia system, complementing text and images with formal and semi-formal active diagrams.
Adding Macroscopic Semantics to Anchors in Knowledge-Based Hypertext BIBA 363-382
  Jocelyne Nanard; Marc Nanard
We have developed a hypertext system that uses types to incorporate knowledge in hypertext. This paper addresses the problem of representing and using factual knowledge about documents for improving user interaction with documents in the context of a task. This application gives us the opportunity to discuss the extension of typing to anchors. We show that attaching knowledge to anchors through types must take into account the context of use of the anchored text. Thus, we introduce the notion of semantic anchoring of concepts within documents. We show how our system makes it possible to implement this approach without adding any new features. Beyond the experiment itself, the foundations of the approach and its connection with hypertext systems modelling, knowledge-based hypertext and knowledge acquisition are presented.
A Demonstrational Interface for Recording Technical Procedures by Annotation of Videotaped Examples BIBA 383-417
  Henry Lieberman
In conventional knowledge acquisition, a domain expert interacts with a knowledge engineer, who interviews the expert, and codes knowledge about the domain objects and procedures in a rule-based language, or other textual representation language. This indirect methodology can be tedious and error-prone, since the domain expert's verbal descriptions can be inaccurate or incomplete, and the knowledge engineer may not correctly interpret the expert's intent. We describe a user interface that allows a domain expert who is not a programmer to construct representations of objects and procedures directly from a video of a human performing an example procedure. The domain expert need not be fluent in the underlying representation language, since all interaction is through direct manipulation. Starting from digitized video, the user selects significant frames that illustrate before- and after- states of important operations. Then the user graphically annotates the contents of each selected frame, selecting portions of the image to represent each part, labeling the parts, and indicating part/whole relationships. The actions that represent the transition between frames are described using the technique of programming by demonstration (also called programming by example). The user performs operations on concrete visual objects in the graphical interface, and the system records the user's actions. Explanation-based learning techniques are used to synthesize a generalized program that can be used on subsequent examples. The knowledge acquisition and video annotation facilities are implemented as part of the graphical editor Mondrian, which incorporates a programming by demonstration facility. We explain the operation of Mondrian's interface in its base domain of graphical editing as well as for the video annotation and knowledge acquisition application. 
The result of the knowledge acquisition process is object descriptions for each object in the domain, generalized procedural descriptions, and visual and natural language documentation of the procedure. We illustrate the system in the domain of documentation of operational and maintenance procedures for electrical devices.
Experiences with Semantic Net Based Hypermedia BIBA 419-439
  W. Wang; R. Rada
The Many Using and Creating Hypermedia (MUCH) system is based on the Dexter model and treats the storage layer as a semantic net. The MUCH system provides a number of recommended link types for representing application domain concepts, such as thesauri, documents, and annotations. Users of the system are expected to use those link types in the course of authoring meaningful hypermedia. This paper is based on the logs of usage of the MUCH system over 2 years by over 200 people. Contrary to the expectations of the builders of the MUCH system, the users did not exploit the ability to type semantic links. Typically authors used the default link type regardless of their semantic intentions. When a link type other than the default type was chosen, that choice was often inconsistent with the way another user would label a similar link. The system has proven to be useful for authoring conventional documents. Authors, however, were not practically able to produce hypertext documents. Based on these experiences a new system, RICH (Reusable Intelligent Collaborative Hypermedia), has been designed and built which emphasizes rules for typing links and maintaining the integrity of the semantic net.
Hypermedia Exploration with Interactive Dynamic Maps BIBA 441-464
  Mountaz Zizi; Michel Beaudouin-Lafon
Interactive dynamic maps (IDMs) help users interactively explore webs of hypermedia documents. IDMs provide automatically-generated abstract graphical views at different levels of granularity. Visual cues give users a better understanding of the content of the web, which results in better navigation control and more accurate and effective expressions of queries. IDMs consist of topic maps, which provide visual abstractions of the semantic content of a web of documents, and document maps, which provide visual abstractions of subsets of documents.
   The major contributions of this work include (1) automatic techniques for building maps directly from a web of documents, including extraction of semantic content and use of a spatial metaphor for generating layout and filling space, (2) a direct manipulation interaction paradigm for exploring webs of documents, using maps and an integrated graphical query language, and (3) the ability to use the maps themselves as documents that can be customized, stored in a library and shared among users.
Repertory Hypergrids for Large-Scale Hypermedia Linking BIBA 465-481
  David Madigan; C. Richard Chapman; Jonathan R. Gavrin; Ole Villumsen; John H. Boose
Creation and maintenance of links in large hypermedia documents are difficult. Motivated by an application to a federal clinical practice guideline for cancer pain management, we have developed and evaluated a repertory grid-based linking scheme we call repertory hypergrids. Harnessing established knowledge acquisition techniques, the repertory hypergrid assigns each "knowledge chunk" a location in "context space". A chunk links to another chunk if they are both close in context space. We have developed a program to convert the hypergrid and associated knowledge chunks to HTML and have made the hypermedia clinical practice guideline available on the World Wide Web.
   To evaluate the scheme, we conducted two analyses. First, we conducted a protocol analysis using the paper-based guidelines. Six users of the guideline addressing typical cancer pain management tasks made 30 explicit links. The repertory hypergrid using a neighborhood size of 16 captures 24 of these links. With optimization, the repertory hypergrid captures 27 of the links with a neighborhood size of 14. Second, 18 users addressed the same tasks, six using the paper-based guideline, six using the hypermedia document with repertory hypergrid-created links ("TALARIA"), and six using the hypermedia document with randomly selected links ("Random TALARIA"). TALARIA users found the required information significantly more quickly than either the users of the paper-based guideline or of Random TALARIA, with no loss in accuracy.
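The "context space" linking idea can be sketched as follows. This is a generic nearest-neighbor illustration, not the authors' implementation: the chunk names, the two-dimensional ratings, and the Euclidean distance are hypothetical stand-ins for the repertory-grid constructs and the paper's own distance and neighborhood optimization.

```python
import math

def distance(a, b):
    """Euclidean distance between two chunks' construct ratings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def hypergrid_links(ratings, neighborhood=3):
    """Link each knowledge chunk to its `neighborhood` nearest chunks
    in context space (the space spanned by repertory-grid constructs)."""
    links = {}
    for name, vec in ratings.items():
        others = sorted(
            (o for o in ratings if o != name),
            key=lambda o: distance(vec, ratings[o]))
        links[name] = others[:neighborhood]
    return links
```

Each chunk's link list could then be emitted as HTML anchors, which matches the abstract's account of converting the hypergrid to a World Wide Web document.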
Selective Text Utilization and Text Traversal BIBA 483-497
  Gerard Salton; James Allan
Many large collections of full-text documents are currently stored in machine-readable form and processed automatically in various ways. These collections may include different types of documents, such as messages, research articles, and books, and the subject matter may vary widely. To process such collections, robust text analysis methods must be used, capable of handling materials in arbitrary subject areas, and flexible access must be provided to texts and text excerpts of varying size.
   In this study, global text comparison methods are used to identify similarities between text elements, followed by local context-checking operations that resolve ambiguities and distinguish superficially similar texts from texts that actually cover identical topics. A linked text structure, known as a text relationship map, is then created that relates similar texts at various levels of detail. In particular, text links are available for full texts, as well as text sections, paragraphs, and sentence groups. The relationship graphs are usable as conceptualization tools to illustrate various text manipulation operations and may also serve as browsing maps in situations where searches or text traversal operations are conducted under user control. In this study, the relationship maps are used to identify important text passages, to traverse texts selectively both within particular documents and between documents, and to provide flexible text access to large text collections in response to various kinds of user needs. An automated 29-volume encyclopedia is used as an example to illustrate various possible text accessing and traversal operations. Implementation details are not included in this initial study.
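The global text comparison step behind a text relationship map can be illustrated with a small sketch: link any two text excerpts whose term-vector similarity exceeds a threshold. The tokenization, the cosine measure, and the threshold value here are hypothetical simplifications; the paper's method additionally applies local context-checking to resolve ambiguities, which this sketch omits.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    num = sum(a[t] * b[t] for t in a if t in b)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def relationship_map(texts, threshold=0.3):
    """Link every pair of excerpts whose global similarity exceeds
    `threshold`; returns the map as a (name, name, score) edge list."""
    vecs = {name: Counter(t.lower().split()) for name, t in texts.items()}
    names = sorted(vecs)
    return [(a, b, round(cosine(vecs[a], vecs[b]), 2))
            for i, a in enumerate(names)
            for b in names[i + 1:]
            if cosine(vecs[a], vecs[b]) > threshold]
```

Run over texts at several granularities (full texts, sections, paragraphs, sentence groups), the resulting edge lists form the layered map that supports browsing and selective traversal.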
Bulletin BIB 499-501

IJHCS 1995 Volume 43 Issue 4

Strategies of Failure Diagnosis in Computer-Controlled Manufacturing Systems: Empirical Analysis and Implications for the Design of Adaptive Decision Support Systems BIBA 503-521
  Udo Konradt
This study investigates strategies of failure diagnosis at cutting machine tools using a verbal knowledge acquisition technique. Sixty-nine semi-structured interviews were performed with mechanical and electrical maintenance technicians, and a protocol analysis was conducted. Strategies were analysed as a function of the technicians' job experience, their familiarity with the problem, and problem complexity. The technicians were categorized into three groups (novices, advanced, and experts) based upon level of experience. Results show that typical strategies of failure diagnosis are "Historical information", "Least effort", "Reconstruction", and "Sensory check". Strategies that lead to a binary reduction of the problem space, such as "Information uncertainty" and "Split half", play only a minor role in real-life failure diagnosis. Job experience and familiarity with the problem significantly influenced the occurrence of strategies. In addition to "Symptomatic search" and "Topographic search", results show frequent use of case-based strategies, particularly for routine failures. In novel situations, technicians usually used "Topographic search". A software design method, strategy-based software design (SSD), is proposed that uses strategies to derive decision support systems adaptive to the different working styles and changing levels of experience in user groups. The methodology is briefly described and illustrated by the development of an information support system for maintenance and repair.
Evaluating Group Effectiveness through a Computer-Supported Cooperative Training Environment BIBA 523-538
  Kathleen M. Swigger; Robert Brazile
The long-term goal of this research is to teach effective computer-supported cooperative problem solving skills. In order to address this problem, we built a special interface designed to improve group effectiveness. The special interface is based on a communication competency model that assumes group effectiveness in a particular task depends upon the performance of certain competencies (or skills) that aid groups in collective problem solving. In order to support this model, we provide special online tools that correspond to and, in turn, support each of the competencies. This paper presents an evaluation of the Computer-Supported Cooperative Training (CSCT) environment and delineates the group behaviors that lead to successful task performance in this environment. Groups using the interface demonstrated more effective skills when compared with groups who performed the same task face-to-face. Furthermore, the CSCT environment showed that the competencies relating to group problem description and generation of alternative solutions were the most predictive of successful group interaction.
Cognitive and Computer Models of Physical Systems BIBA 539-559
  S. Chandra; D. I. Blockley
Models of physical systems range from those of initial individual cognition to mathematical representations on a computer which are accepted as the developed final models. It is conjectured that a formalization of the qualitative cognitive models will help us to understand how they are formed and will eventually help us to produce better computer models. The structure of these models would provide qualitative descriptions and explanations of behaviour which could be assimilated by non-specialists. It is argued that cognitive models should be produced with an awareness of the possible form of the final computer model. To illustrate this, a case study of the development of the cognitive and computer models of a naturally parallel physical process is presented. This early work is part of the broader goal of producing an appropriate computing environment through which various models and techniques are combined for producing explanations. A procedure for developing models from the primitive stage to computer implementation is suggested. Theories in cognitive science and research on mental models are briefly discussed.
A User-Adapted Iconic Language for the Medical Domain BIBA 561-577
  B. De Carolis; F. De Rosis; S. Errore
Although icons are presented as a universal language, some claim that cultural background, education and environment might influence the users' interpretation of their meaning. If this is true, the iconic language should be adapted to the user's characteristics. This paper presents results of a study that was aimed at designing the iconic language of a medical decision support system to be used in several European countries. The study included four main phases: listing and classification of the messages to be represented, collection of proposals about icons from representatives of potential users, preparation of candidates for evaluation and final evaluation of candidates by a sample of users. Results of this study indicate which icons are universally considered as "good" or "bad", and which ones are "controversial", that is, clearly preferred or clearly rejected by different interviewed subgroups. These results are also compared with results of previous studies, to single out factors which seem to condition acceptance of iconic messages. Finally, the paper describes the architecture of the interface which supports adapting icons to the user's characteristics.
Optimizing Digraph-Latency Based Biometric Typist Verification Systems: Inter and Intra Typist Differences in Digraph Latency Distributions BIBA 579-592
  D. Mahar; R. Napier; M. Wagner; W. Laverty; R. D. Henderson; M. Hiron
Umphress and Williams have shown that individual differences in digraph latency may provide a means of accurately verifying the identity of computer users. The present research refined this technique by exploring inter and intra subject differences in digraph latency distributions. Experiment 1 showed that there is marked heterogeneity in the latency with which individual subjects type different digraphs. Consequently, it was found that typist verification accuracy improved when a digraph-specific index of the distance between test and reference digraph latencies was employed. Experiment 1 also showed the utility of nonlinear modelling as a tool to establish optimum verification parameter settings. Experiment 2 showed that the use of a common low-pass temporal filter cutoff setting for all typists when screening digraphs is unwise. It was found that there is a significant interaction between subjects and filter settings such that verification accuracy may improve if subject-specific filter settings are used.
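The digraph-specific distance idea in Experiment 1 can be sketched as follows: build a per-digraph reference profile from enrolment samples, then score a test sample by the mean absolute z-score of each latency against that digraph's own distribution. The function names and the 2.0 threshold are illustrative assumptions, not parameters reported in the paper:

```python
from statistics import mean, stdev

def build_reference(samples):
    """Per-digraph reference profile: mean and standard deviation of
    latencies (ms) taken from a user's enrolment typing samples."""
    return {dg: (mean(ls), stdev(ls)) for dg, ls in samples.items() if len(ls) > 1}

def verify(reference, test, threshold=2.0):
    """Digraph-specific distance: average absolute z-score of each test
    latency against that digraph's own reference distribution. Accept the
    typist if the average distance falls under the threshold."""
    zs = [abs(lat - m) / s
          for dg, lat in test.items()
          if dg in reference
          for m, s in [reference[dg]]
          if s > 0]
    return (sum(zs) / len(zs)) <= threshold if zs else False
```

Experiment 2's subject-specific low-pass filtering would correspond to discarding, per typist, latencies above an individually tuned cutoff before `build_reference` is applied.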
A Model for Justification Production by Expert Planning Systems BIBA 593-619
  Susan M. Bridges
Although explanation capability is one of the distinguishing characteristics of expert systems, the explanation facilities of most existing systems are quite primitive. This paper describes an architecture for producing explanations that justify reasoning, based on the premises that (1) the task of justifying expert decisions is an intelligence-requiring activity and (2) the appropriate model for machine-produced justifications should be explanations written by people. The architecture consists of (1) a processing strategy for an Augmented Phrase Structure Grammar (APSG) used to structure text, (2) a special language designed for writing text-structuring grammars, and (3) a Justification Grammar (JG) representing the structure of justification texts. A prototype justification production system is presented to demonstrate how the model can be used to synthesize knowledge from a variety of sources and produce coherent, multisentential text similar to that produced by a domain expert.
Erratum BIB 621

IJHCS 1995 Volume 43 Issue 5/6

The Role of Formal Ontology in the Information Technology

Editorial: The Role of Formal Ontology in the Information Technology BIB 623-624
  N. Guarino; R. Poli
Formal Ontology, Conceptual Analysis and Knowledge Representation BIBA 625-640
  Nicola Guarino
The purpose of this paper is to defend the systematic introduction of formal ontological principles in the current practice of knowledge engineering, to explore the various relationships between ontology and knowledge representation, and to present the recent trends in this promising research area. According to the "modelling view" of knowledge acquisition proposed by Clancey, the modelling activity must establish a correspondence between a knowledge base and two separate subsystems: the agent's behaviour (i.e. the problem-solving expertise) and its own environment (the problem domain). Current knowledge modelling methodologies tend to focus on the former subsystem only, viewing domain knowledge as strongly dependent on the particular task at hand: in fact, AI researchers seem to have been much more interested in the nature of reasoning than in the nature of the real world. Recently, however, the potential value of task-independent knowledge bases (or "ontologies") suitable for large-scale integration has been underlined in many ways.
   In this paper, we compare the dichotomy between reasoning and representation to the philosophical distinction between epistemology and ontology. We introduce the notion of the ontological level, intermediate between the epistemological and the conceptual levels discussed by Brachman, as a way to characterize a knowledge representation formalism taking into account the intended meaning of its primitives. We then discuss some formal ontological distinctions which may play an important role for this purpose.
Formal Ontology, Common Sense and Cognitive Science BIBA 641-667
  Barry Smith
Common sense is on the one hand a certain set of processes of natural cognition -- of speaking, reasoning, seeing, and so on. On the other hand common sense is a system of beliefs (of folk physics and folk psychology). Over against both of these is the world of common sense, the world of objects to which the processes of natural cognition and the corresponding belief-contents standardly relate. What are the structures of this world and how does its scientific treatment relate to traditional and contemporary metaphysics and formal ontology? Can we embrace a thesis of common-sense realism to the effect that the world of common sense exists uniquely? Or must we adopt instead a position of cultural relativism which would assign distinct worlds of common sense to each group and epoch? The present paper draws on recent work in the fields of naive and qualitative physics, in perceptual and developmental psychology, and in cognitive anthropology, in order to consider in a new light these and related questions and to draw conclusions for the methodology and philosophical foundations of the cognitive sciences.
Top-Level Ontological Categories BIBA 669-685
  John F. Sowa
Philosophers have spent 25 centuries debating ontological categories. Their insights are directly applicable to the analysis, design, and specification of the ontologies used in knowledge-based systems. This paper surveys some of the ontological questions that arise in artificial intelligence, some answers that have been proposed by various philosophers, and an application of the philosophical analysis to the clarification of some current issues in AI. Two philosophers who have developed the most complete systems of categories are Charles Sanders Peirce and Alfred North Whitehead. Their analyses suggest a basic structure of categories that can provide some guidelines for the design of AI systems.
Bimodality of Formal Ontology and Mereology BIBA 687-696
  Roberto Poli
From the distinctions between "ontology" and "logic" and between "formal" and "material" we obtain two basic oppositions. Keeping the term "ontology" constant yields the opposition between "formal ontology" and "material ontology". This raises a question: when one speaks of ontology, how can its formal aspects be distinguished from its material ones? If, instead, we keep the term "formal" constant, the opposition is between "formal ontology" and "formal logic". The question here is therefore: when we talk about "formal" how can we distinguish between logic and ontology?
   Starting from these questions, I propose to update the somewhat old distinction between formal ontology as the domain of the distributive-collective opposition and material ontology as the domain of the parts-whole oppositions.
Knowledge Representation in Conceptual Realism BIBA 697-721
  Nino B. Cocchiarella
Knowledge representation in Artificial Intelligence (AI) involves more than the representation of a large number of facts or beliefs regarding a given domain, i.e. more than a mere listing of those facts or beliefs as data structures. It may involve, for example, an account of the way the properties and relations that are known or believed to hold of the objects in that domain are organized into a theoretical whole -- such as the way different branches of mathematics, or of physics and chemistry, or of biology and psychology, etc., are organized, and even the way different parts of our commonsense knowledge or beliefs about the world can be organized. But different theoretical accounts will apply to different domains, and one of the questions that arises here is whether or not there are categorial principles of representation and organization that apply across all domains regardless of the specific nature of the objects in those domains. If there are such principles, then they can serve as a basis for a general framework of knowledge representation independently of its application to particular domains. In what follows I will give a brief outline of some of the categorial structures of conceptual realism as a formal ontology. It is this system that I propose we adopt as the basis of a categorial framework for knowledge representation.
Classical Mereology and Restricted Domains BIBA 723-740
  Carola Eschenbach; Wolfgang Heydrich
Classical Mereology, the formal theory of the concepts of part, overlap and sum as defined by Lesniewski, does not have any notion of being a whole. Because of this neutrality the concepts of Mereology are applicable in each and every domain. This point of view is not generally accepted. But a closer look at domain-specific approaches defining non-classical (quasi-)mereological notions reveals that the question of whether something belongs to a restricted domain (and, thus, fulfills a certain criterion of integrity) has come to be mixed up with the question of whether it exists. We claim that the structural differences between restricted domains are not based on different mereological concepts, but on different concepts of being a whole. Taking Classical Mereology for granted in looking at different domains can shed more light on the specific nature of these domains, their similarities and differences. Three examples of axiomatic accounts dealing with restricted domains (linear orders of extended entities as they can be found in discussions of the ontology of time, topological structure and set theory) are discussed. We show that Classical Mereology is applicable to these domains as soon as they are seen as being embedded in a less restricted (or even the most comprehensive) domain. Each of the accounts may be axiomatically formulated by adding one non-mereological primitive to whatever concepts are chosen to develop Classical Mereology. These primitives are strongly related to the domain-specific notions of integrity or being a whole.
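For reference, the core of Classical Mereology in the tradition descending from Lesniewski can be stated with parthood as the sole primitive; overlap and unrestricted sum are then defined notions. This is one standard axiomatization among several equivalent ones, sketched here rather than quoted from the paper:

```latex
\begin{align*}
& x \le x && \text{(reflexivity of parthood)}\\
& x \le y \wedge y \le x \rightarrow x = y && \text{(antisymmetry)}\\
& x \le y \wedge y \le z \rightarrow x \le z && \text{(transitivity)}\\
& O(x,y) \;\equiv\; \exists z\,(z \le x \wedge z \le y) && \text{(overlap, defined)}\\
& \exists x\,\varphi(x) \rightarrow \exists z\,\forall y\,\bigl(O(y,z) \leftrightarrow \exists x\,(\varphi(x) \wedge O(y,x))\bigr) && \text{(unrestricted fusion)}
\end{align*}
```

The fusion schema is the neutrality the abstract alludes to: it guarantees a sum for every non-empty condition, with no requirement that the sum be a "whole" in any integral sense.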
Sheaf Mereology and Husserl's Morphological Ontology BIBA 741-763
  Jean Petitot
This paper begins with Husserl's phenomenological distinction between formal ontology (analytic theory of general objects) and "material" regional ontologies (types of "essences" of objects which prescribe "synthetic a priori" rules). It then shows that, as far as its "ontological design" is concerned, transcendental phenomenology can be seen as an "object-oriented" epistemology (opposed to the classical "procedural" epistemology). The paper also analyses the morphological example, which constitutes the core of Husserl's third Logical Investigation, of the unilateral relation of foundation between sense qualities and spatio-temporal extension. It gives a geometrical model using the geometrical concepts of fibration, sheaf and topos.
Algebraic Semantics for Natural Language: Some Philosophy, Some Applications BIBA 765-784
  Godehard Link
Information processing, when performed by an intelligent agent, draws on a wide array of knowledge sources. Among them are world knowledge, situation knowledge, conceptual knowledge and linguistic knowledge. The focus in this paper will be on the semantic knowledge which is part of the general linguistic competence of any speaker of a natural language (NL).
   In particular, this knowledge contains ways of organizing the linguistic ontology, i.e. the collection of heterogeneous entities that make up the domain of discourse. The representation language that is proposed here to model this knowledge stresses the structural properties of the ontology. This approach has been pursued under the name of algebraic semantics.
   The paper starts out by explaining the term "algebraic semantics" as it is used in logic. Two senses of "algebraic" are distinguished, called here "conceptual" and "structural". These two senses of the algebraic method are then applied to NL semantics. The conceptual part is realized by the method of structuring the domains of linguistic ontology in various ways. Thus plural entities are recognized along with mass entities and events. The common outlook here is mereological or lattice-theoretical. Some applications to the study of plurals are given that are intended to show the usefulness of the algebraic approach. Finally, the ontology of plurals is addressed, and comments are made on some relevant discussion of mereology in recent philosophical work. In sum, it is contended that the algebraic perspective, while being of interest in semantics and philosophy proper, also fits both the spirit and the practice of much work that has been done in the Artificial Intelligence (AI) field of knowledge representation.
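The lattice-theoretical outlook for plurals can be summarized as follows (this is the standard presentation of Link's algebraic semantics, sketched from the general literature rather than taken from this paper):

```latex
\begin{align*}
& \langle E, \oplus \rangle \text{ is a complete join semilattice; } a \oplus b \text{ is the plural sum of } a \text{ and } b\\
& a \le b \;\equiv\; a \oplus b = b && \text{(individual-part relation)}\\
& {}^{*}P \;=\; \text{the closure of } P \text{ under } \oplus && \text{(plural predicate: } {}^{*}P \text{ holds of all sums of } P\text{-individuals)}
\end{align*}
```

On this picture "the children sang" predicates \({}^{*}\mathit{sing}\) of the plural sum of the children, without leaving first-order ontology.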
Ontological Domains, Semantic Sorts and Systematic Ambiguity BIBA 785-807
  Johannes Dolling
This paper is concerned with some aspects of the relationship between ontological knowledge and natural language understanding. More specifically, I will consider how knowledge of ontological domains and knowledge of lexical meaning work together in the interpretation of linguistic expressions. An essential assumption is that, in accordance with ontological distinctions, there are various semantic sorts into which linguistic expressions can be divided. The specific purpose of the paper is to explore how, under these conditions, the intricate problem of systematic ambiguity can be dealt with. Here the term "systematic ambiguity" stands for the phenomenon that a word or a phrase has several possible meanings which are systematically related to one another and from which a suitable meaning can be selected depending on the linguistic and non-linguistic context of use. Taking into consideration that many predicative expressions impose certain sortal selection restrictions on their arguments, I will deal with the phenomenon that a systematically ambiguous word or phrase in some cases adapts itself to the semantic format of the expression it is combined with. Such an adaptation, eliminating one or more possible meanings of the word or phrase, is in fact a coercion of its semantic sort. I will argue for an approach which takes into account a set of semantic coercion operations to meet sortal constraints. Moreover, I will show how such sort coercions performed in language understanding are sanctioned by world knowledge.
A Linguistic Ontology BIBA 809-818
  Kathleen Dahlgren
This paper defends the choice of a linguistically-based content ontology for natural language processing and demonstrates that a single common-sense ontology produces plausible interpretations at all levels from parsing through reasoning. The paper explores some of the problems and tradeoffs for a method which has just one content ontology. A linguistically-based content ontology represents the "world view" encoded in natural language. The content ontology (as opposed to the formal semantic ontology which distinguishes events from propositions, and so on) is best grounded in the culture, rather than in the world itself, or in the mind. By "world view" we mean naive assumptions about "what there is" in the world, and how it should be classified. These assumptions are time-worn and reflected in language at several levels: morphology, syntax and lexical semantics. The content ontology presented in the paper is part of a Naive Semantic lexicon. Naive Semantics is a lexical theory in which associated with each word sense is a naive theory (or set of beliefs) about the objects or events of reference. While naive semantic representations are not combinations of a closed set of primitives, they are also limited by a shallowness assumption. Included is just the information required to form a semantic interpretation incrementally, not all of the information known about objects. The Naive Semantic ontology is based upon a particular language, its syntax and its word senses. To the extent that other languages codify similar world views, we predict that their ontologies are similar. Applied in a computational natural language understanding system, this linguistically-motivated ontology (along with other naive semantic information) is sufficient to disambiguate words, disambiguate syntactic structure, disambiguate formal semantic representations, resolve anaphoric expressions and perform reasoning tasks with text.
Sketch of an Ontology Underlying the Way We Talk About the World BIBA 819-830
  Jerry R. Hobbs
A general structure is proposed for an underlying conceptualization of the world that is particularly well suited to language understanding. It consists of a set of core theories of a very abstract character. Some of the most important of these are discussed, in particular, core theories that explicate the concepts of systems and the figure-ground relation, scales, change, causality, and goal-directed behavior. These theories are too abstract to impose many constraints on the entities and situations they are applied to; rather their main purpose is to provide the basis for a rich vocabulary for talking about entities and situations. The fact that the core theories apply so widely means that they provide a great many domains of discourse with a rich vocabulary.
Taxonomies of Logically Defined Qualitative Spatial Relations BIBA 831-846
  A. G. Cohn; D. A. Randell; Z. Cui
This paper develops a taxonomy of qualitative spatial relations for pairs of regions, which are all logically defined from two primitive (but axiomatized) notions. The first primitive is the notion of two regions being connected, which allows eight jointly exhaustive and pairwise disjoint relations to be defined. The second primitive is the convex hull of a region which allows many more relations to be defined. We also consider the development of the useful notions of composition tables for the defined relations and networks specifying continuous transitions between pairs of regions. We conclude by discussing what kind of criteria to apply when deciding how fine grained a taxonomy to create.
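The eight jointly exhaustive and pairwise disjoint relations the abstract mentions are those of the Region Connection Calculus (RCC-8), all definable from the connection primitive C(x,y) alone. A representative sample of the standard definitions, reproduced here from the general RCC literature:

```latex
\begin{align*}
DC(x,y)  &\equiv \neg C(x,y) && \text{(disconnected)}\\
P(x,y)   &\equiv \forall z\,(C(z,x) \rightarrow C(z,y)) && \text{(part)}\\
O(x,y)   &\equiv \exists z\,(P(z,x) \wedge P(z,y)) && \text{(overlap)}\\
EC(x,y)  &\equiv C(x,y) \wedge \neg O(x,y) && \text{(externally connected)}\\
PO(x,y)  &\equiv O(x,y) \wedge \neg P(x,y) \wedge \neg P(y,x) && \text{(partial overlap)}\\
TPP(x,y) &\equiv P(x,y) \wedge \neg P(y,x) \wedge \exists z\,(EC(z,x) \wedge EC(z,y)) && \text{(tangential proper part)}
\end{align*}
```

The remaining base relations are equality, non-tangential proper part, and the inverses of the two proper-part relations; the convex-hull primitive then refines this set further.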
Towards a Causal Ontology Coping with the Temporal Constraints between Causes and Effects BIBA 847-863
  Paolo Terenziani
The paper describes a causal ontology in which the temporal implications of causation (and, in particular, the temporal constraints it imposes between causes and effects) are treated in detail. It proposes a classification of causal relations on the basis of the temporal constraints they impose between causes and effects, and further refines the basic classification to cope also with causation with thresholds and with phenomena of production/consumption of stuff. Finally, the paper sketches an application of the causal ontology to developing causal nets used as domain knowledge for natural language interpretation, and briefly addresses the reasoning techniques required for this purpose.
Midwinters, End Games, and Body Parts: A Classification of Part-Whole Relations BIBA 865-889
  Peter Gerstl; Simone Pribbenow
This paper deals with the conceptual part-whole relation as it occurs in language processing, visual perception, and general problem solving. One important long-term goal is to develop a naive or common sense theory of the mereological domain, that is the domain of parts and wholes and their relations. In this paper, we work towards such a theory by presenting a classification of part-whole relations that is suitable for different cognitive tasks and give proposals for the representation and processing of these relations. In order to be independent of specific tasks like language understanding or the recognition of objects, we use structural properties to develop our classification.
   The paper starts with a brief overview of the mereological research in different disciplines and two examples of the role of part-whole relations in linguistics (possessive constructions) and knowledge processing (reasoning about objects). In the second section, we discuss two important approaches to mereological problems: the "Classical Extensional Mereology" as presented by Simons and the meronymic system of part-whole relations proposed by Winston, Chaffin and Herrmann. Our own work is described in the third and last section. First, we discuss different kinds of wholes according to their inherent compositional structure: complexes, collections, and masses. Then partitions induced by or independent of the compositional structure of a whole are described, accompanied by proposals for their processing.
Ontological Foundations for State and Identity Within the Object-Oriented Paradigm BIBA 891-906
  Flavio Bonfatti; Luca Pazzi
Objects can be seen, at an abstract level, as information tokens made of two parts: an identification part and a state, or value, part. The identification part contains an object identifier different from that of any other object. The state part contains instead a structured value denoting the collective value of the attributes of the object. While the identifier assigned to an object remains fixed, the state is allowed to change, i.e. different values can be found in the state part of the object at different times. An object model with identifiers abstracts the formal properties of identity, achieving a neat separation between object identification and object representation. Object identification becomes therefore a formal property preserved by the system. Traditional approaches in data and knowledge representation use instead some aspects of individuals' state, which only occasionally satisfy the uniqueness and continuity properties of identity. The problem is that identificative attributes chosen at a given time may carry different values or may not be unique as the context changes; in general, identification is conceptually different from representation. The paper proposes an ontological foundation for the concepts of object state and identity, showing formally the equivalence with the infinite properties which are inherent in the cognition of distinct real-world entities (Leibniz's principle).
Toward Principles for the Design of Ontologies Used for Knowledge Sharing BIBA 907-928
  Thomas R. Gruber
Recent work in Artificial Intelligence (AI) is exploring the use of formal ontologies as a way of specifying content-specific agreements for the sharing and reuse of knowledge among software entities. We take an engineering perspective on the development of such ontologies. Formal ontologies are viewed as designed artifacts, formulated for specific purposes and evaluated against objective design criteria. We describe the role of ontologies in supporting knowledge sharing activities, and then present a set of criteria to guide the development of ontologies for these purposes. We show how these criteria are applied in case studies from the design of ontologies for engineering mathematics and bibliographic data. Selected design decisions are discussed, and alternative representation choices are evaluated against the design criteria.
On the Relationship between Ontology Construction and Natural Language: A Socio-Semiotic View BIBA 929-944
  John A. Bateman
The design and construction of "ontologies" is currently a topic of great interest for diverse groups. Less clear is the extent to which these groups are addressing a common area of concern. By considering the kinds of information and information organizations that are required for adequate accounts of natural language and for sophisticated natural language capabilities in computational systems, this paper distinguishes several different classes of "ontology", each with its own characteristics and principles. A classification for these ontological "realms" is motivated on the basis of systemic-functional semiotics. The resulting stratified "meta-ontology" offers a unifying framework for relating distinct ontological realms while maintaining their individual orientations. In this context, formal ontology can be seen to provide a rather small (although important) component of the overall organization necessary. Claims for the sufficiency of formal ontology in AI and NLP need then to be treated with caution.
An Environment for Reusing Ontologies within a Knowledge Engineering Approach BIBA 945-965
  Thomas Pirlein; Rudi Studer
Domain models can be constructed more easily and made more robust by reusing ontologies in a well-defined way. In this paper the KARO approach is introduced, which provides various means of retrieving and adapting components of an ontology as part of a domain model construction process. KARO is based on the knowledge-processing component LILOG-KR provided by the LILOG text-understanding system. In particular, the notion of classification is applied for the retrieval of relevant categories. The upper structure of LILOG-KB serves as an exemplary ontology. By integrating KARO into the Model-based and Incremental Knowledge Engineering Environment (MIKE), the reuse of a predefined ontology can be integrated into the development process of expert systems in a systematic way.