
Journal of Usability Studies, Volume 2

Editors: Avi Parush
Publisher: Usability Professionals' Association
Standard No: ISSN 1931-3357
Links: Journal Home Page | Table of Contents
  1. JUS 2006 Volume 2 Issue 1
  2. JUS 2007 Volume 2 Issue 2
  3. JUS 2007 Volume 2 Issue 3
  4. JUS 2007 Volume 2 Issue 4

JUS 2006 Volume 2 Issue 1

Post-Modern Usability BIBPDF 1-6
  Arnold M. Lund
A Methodology for Testing Voting Systems BIBAHTMLPDF 7-21
  Ted Selker; Elizabeth Rosenzweig; Anna Pandolfo
This paper compares the relative merits of realistic versus laboratory-style experiments for testing voting technology. By analyzing three voting experiments, we describe the value of realistic settings in revealing the substantial challenges of controlling the voting process and providing a consistent voting experience.
   The methodology developed for this type of experiment will help other researchers test polling place protocols and administration. Comparing the results of laboratory experiments on voter verification with those of realistic voting experiments further validates the practice of testing equipment in laboratory settings.
   The methodology and protocol for testing voting systems can be applied to any voting technology, because the protocol replicates the real-world conditions of voting in the experiment.
Case WAP and Accountability: Shortcomings of the Mobile Internet as an Interactional Problem BIBAHTMLPDF 22-38
  Ilpo Koskinen; Petteri Repo; Kaarina Hyvönen
Wireless Application Protocol (WAP) is designed to allow access to the Internet on a mobile phone. Attempts to explain its limited success have focused on attitudinal and cognitive reasons for non-use, finding that although people recognize the benefits of WAP, issues such as lack of content, privacy concerns, and reference-group behavior account for non-use. Such explanations are incomplete, however, in that they do not address problems arising in actual use of, and interaction with, the technology. Our article studies the use of WAP as situated action. We focus on how users make sense of WAP pages and how they disambiguate, in situ, the responses from the service, i.e., new pages and new menus. Our method of transcribing videos of WAP use following the conventions of conversation analysis offers a cost-effective tool for understanding user interaction with technology and yields useful implications for design.
Reliability and Validity of the Mobile Phone Usability Questionnaire (MPUQ) BIBAHTMLPDF 39-53
  Young Sam Ryu; Tonya L. Smith-Jackson
This study was a follow-up to determine the psychometric quality of the usability questionnaire items derived from a previous study (Ryu and Smith-Jackson, 2005) and to identify a subset of items with higher reliability and validity. To evaluate the items, the questionnaire was administered to a representative sample of approximately 300 participants. The findings revealed a six-factor structure: (1) Ease of learning and use, (2) Assistance with operation and problem solving, (3) Emotional aspect and multimedia capabilities, (4) Commands and minimal memory load, (5) Efficiency and control, and (6) Typical tasks for mobile phones. The resulting 72 items constitute the Mobile Phone Usability Questionnaire (MPUQ), which evaluates the usability of mobile phones for the purposes of making decisions among competing variations in the end-user market, choosing among prototype alternatives during the development process, and assessing evolving versions during an iterative design process.

JUS 2007 Volume 2 Issue 2

A Great Leap Forward: The Birth of the Usability Profession (1988-1993) BIBPDF 54-60
  Joe Dumas
Heuristic Evaluation Quality Score (HEQS): A Measure of Heuristic Evaluation Skills BIBAHTMLPDF 61-75
  Shazeeye Kirmani; Shamugam Rajasekaran
Heuristic Evaluation is a discount usability engineering method in which three or more evaluators assess an interface's compliance with a set of heuristics. Because the quality of the evaluation depends heavily on the evaluators' skills, measuring those skills is critical to ensuring that evaluations meet a consistent standard. This study provides a framework for quantifying heuristic evaluation skills. Quantification is based on the number of unique issues identified by the evaluators and the severity of each issue: unique issues are categorized into eight user interface parameters, and severity into three levels. A benchmark computed from the collated evaluations is used to compare skills across applications as well as within applications. The resulting skill measurement divides the evaluators into levels of expertise. Two case studies illustrate the process and its applications. Further studies will help define an expert's profile.
Usability Evaluation of the Spatial OLAP Visualization and Analysis Tool (SOVAT) BIBAHTMLPDF 76-95
  Matthew Scotch; Bambang Parmanto; Valerie Monaco
Increasingly sophisticated technologies, such as On-Line Analytical Processing (OLAP) and Geospatial Information Systems (GIS), are being leveraged for conducting community health assessments (CHA). Little is known about the usability of OLAP and GIS interfaces with respect to CHA. We conducted an iterative usability evaluation of the Spatial OLAP Visualization and Analysis Tool (SOVAT), a software application that combines OLAP and GIS. A total of nine graduate students and six community health researchers were asked to think-aloud while completing five CHA questions using SOVAT. The sessions were analyzed after every three participants and changes to the interface were made based on the findings. Measures included elapsed time, answers provided, erroneous actions, and satisfaction. Traditional OLAP interface features were poorly understood by participants, and combined OLAP-GIS features needed to be better emphasized. The results suggest that the changes made to the SOVAT interface resulted in increases in both usability and user satisfaction.
Comments on "A Methodology for Testing Voting Systems" BIBPDF 96-98
  Whitney Quesenbery; John Cugini; Dana Chisnell; Bill Killam; Ginny Redish
Reply to Comments on "A Methodology for Testing Voting Systems" BIBPDF 99-101
  Ted Selker; Elizabeth Rosenzweig; Anna Pandolfo

JUS 2007 Volume 2 Issue 3

Introduction BIBHTML i
  Avi Parush
Expanding Usability Testing to Evaluate Complex Systems BIBAHTMLPDF 102-111
  Ginny Redish
This essay discusses ways that usability professionals can expand usability testing to evaluate complex systems, such as intelligence gathering and medical decision-making, that do not lend themselves to more traditional laboratory-based usability testing. In the essay, Redish explains what complex systems are, why they don't lend themselves to traditional usability test methodologies, and what other techniques are available for gathering and analyzing the data. The essay also discusses the importance of involving domain experts in the design of the test to ensure that both the components and the system as a whole are being adequately tested.
Adapting Usability Investigations for Agile User-Centered Design BIBAHTMLPDF 112-132
  Desiree Sy
When our company chose to adopt an Agile development process for new products, our User Experience Team took the opportunity to adjust, and consequently improve, our user-centered design (UCD) practices. Our interface design work required data from contextual investigations to guide rapid iterations of prototypes, validated by formative usability testing. This meant that we needed to find a way to conduct usability tests, interviews, and contextual inquiry -- both in the lab and the field -- within an Agile framework. To achieve this, we adjusted the timing and granularity of these investigations, and the way that we reported our usability findings.
   This paper describes our main adaptations. We have found that the new Agile UCD methods produce better-designed products than the "waterfall" versions of the same techniques. Agile communication modes have allowed us to narrow the gap between uncovering usability issues and acting on those issues by incorporating changes into the product.
Group Usability Testing: Evolution in Usability Techniques BIBAHTMLPDF 133-144
  Laura L. Downey
Usability testing has a long history. In its early form, it was conducted with many individual participants, much like traditional research experiments. With the advent of discount usability engineering techniques, fewer participants were required (5-7 versus 30-50) and protocols were simplified. This evolution from "many to few" has become the standard in formative testing. What is the next tool in our toolbox?
   This paper introduces a formative method called "group usability testing," in which several to many participants perform tasks individually but simultaneously, while one to several testers observe and interact with them. The idea for group usability testing arose as an answer to limited time and the availability of many users gathered in one place. The approach is described via a case study, and the data characteristics, benefits, and drawbacks of group usability testing are discussed. Additionally, the method is compared and contrasted with individual usability testing, co-discovery, task-based focus groups, and cooperative usability testing.
Usability studies and the Hawthorne Effect BIBAHTMLPDF 145-154
  Ritch Macefield
This paper provides a brief review of the Hawthorne effect, discusses how the effect relates to usability studies, and offers practitioners guidance in defending their studies against criticisms made on the basis of this effect.

JUS 2007 Volume 2 Issue 4

Introduction BIBHTML i
  Avi Parush
Surviving Our Success: Three Radical Recommendations BIBPDF 155-161
  Jared Spool
Making Usability Recommendations Useful and Usable BIBAHTMLPDF 162-179
  Rolf Molich; Robin Jeffries; Joseph Dumas
This paper evaluates the quality of the recommendations for improving a user interface that result from a usability evaluation. The study compares usability comments written by different authors but describing similar usability issues. The usability comments were provided by 17 professional teams who independently evaluated the usability of the website for the Hotel Pennsylvania in New York. The study finds that only 14 of the 84 comments studied (17%), addressing six usability problems, contained recommendations that were both useful and usable. Fourteen recommendations were not useful at all; sixteen were not usable at all. Quality problems include recommendations that are vague or not actionable, and ones that may not improve the overall usability of the application. The paper suggests characteristics of "useful and usable recommendations," that is, recommendations for solving usability problems that lead to changes that efficiently improve the usability of a product.
User Research of a Voting Machine: Preliminary Findings and Experiences BIBAHTMLPDF 180-189
  Menno de Jong; Joris van Hoof; Jordy Gosselt
This paper describes a usability study of the Nedap voting machine in the Netherlands. On the day of the national elections, 566 voters participated in our study immediately after having cast their real vote. The research focused on the correspondence between voter intents and voting results, distinguishing between usability (correspondence between voter intents and voter input) and machine reliability (correspondence between voter input and machine output). For the sake of comparison, participants also cast their votes using a paper ballot.
   The machine reliability appeared to be 100%, indicating that, within our sample, all votes that had been cast were correctly represented in the output of the voting machine. Regarding usability, 1.4% of the participants had cast the wrong vote using the voting machine. This percentage was similar to that of the paper ballot.
Metaphor-Based Design of High-Throughput Screening Process Interfaces BIBAHTMLPDF 190-210
  David B. Kaber; Noa Segall; Rebecca S. Green
This paper describes work on developing usable interfaces for creating and editing methods for high-throughput screening of chemical and biological compounds in the domain of life sciences automation. A modified approach to metaphor-based interface design was used as a framework for developing a screening method editor prototype analogous to the presentation of a recipe in a cookbook. The prototype was compared to an existing screening method editor application in terms of effectiveness, efficiency, and satisfaction of novice users and was found to be superior.