

COST Action 294 focused on the Maturity of Information Technology Usability Evaluation (MAUSE). Within the MAUSE project, Working Group (WG) 2 was charged with Comparing UEMs: Strategies and Implementation. Aware of the many methodological challenges in this area, WG2 began by considering how credible multi-site experiments (MSEs) could be designed.

To do this, we began with the CODELIGHTS MSE, in which members from France to Romania and from Iceland to Italy contributed evaluation data, reports and coding constructs that were used collaboratively in two workshops during the course of the MAUSE project. The approach taken was first to gather a collection of constructs that were being used, or considered, to compare usability evaluation methods (UEMs), for example on the basis of the severity of the problems that they could find, or the value to software development of their discoveries, explanations and recommendations. Such constructs shine light on differences and similarities between UEMs through a process of categorisation, often referred to as coding in qualitative human science methodologies. The WG2 CODELIGHTS MSE thus sought to illuminate UEM comparison through the research activity of coding qualitative data sets for usability evaluations. A collection of constructs was formed over two meetings, and then applied collaboratively in coding exercises to a collection of evaluation problem and summary reports at two further meetings. Insights and issues from these collaborative exercises fed into a final independent analysis by the co-author of this report.

This report explores the motivations behind the work of MAUSE WG2, its relation to other MAUSE WGs (WG1: Critical Review and Analysis of Individual UEMs, and WG4: Review on the Computational and Definitional Approaches in Usability Evaluation), and the results of group and individual activities (i.e., the implementation aspect of Comparing UEMs). It closes with a discussion of future strategies that would avoid direct comparisons and instead take a meta-review approach to in-depth case studies of usability work. The proposal is that case studies of usability work within full software development contexts should focus on the interactions between approaches to usability (A2Us, a looser and broader concept than UEMs). Meta-reviews would then derive models from these case studies. These models of usability work, and of the circumstances under which it can be effective, would provide a focused basis for future experimental work investigating the impact of specific interventions below the level of A2Us. This would greatly reduce the complexity of experiments relative to direct comparisons of UEMs, where the supposedly independent variables (i.e., the UEMs) are too ‘reactive’ and multi-faceted to allow credible comparisons of their role in effective usability work. Future research thus needs to investigate how the ingredients of A2Us (e.g., elements within the broad activity of user testing, such as think-aloud protocols or participant selection) contribute to valuable outcomes of evaluation and iteration within software development.
