Bowen, J., Reeves, S., & Schweer, A. (2013). A tale of two studies. In Proceedings of the Fourteenth Australasian User Interface Conference (AUIC2013), Adelaide, Australia, 29 January - 1 February 2013 (pp. 81-89).
Permanent Research Commons link: https://hdl.handle.net/10289/8425
Running user evaluation studies is a useful way of getting feedback on partially or fully implemented software systems. Unlike hypothesis-based testing (where specific design decisions can be tested or comparisons made between design choices), the aim is to find as many problems (both usability and functional) as possible prior to implementation or release. It is particularly useful in small-scale development projects that may lack the resources and expertise for other types of usability testing. Developing a user study that successfully and efficiently performs this task is not always straightforward, however. It may not be obvious what participants should be asked to do in order to explore as many parts of the system’s interface as possible. In addition, ad hoc approaches to study development may mean that the testing is not easily repeatable on subsequent implementations or updates, and that particular areas of the software may not be evaluated at all. In this paper we describe two (very different) approaches to designing an evaluation study for the same piece of software, and discuss the approaches taken, the differing results found, and our comments on both.
Copyright © 2013, Australian Computer Society, Inc. This paper appeared at the Fourteenth Australasian User Interface Conference (AUIC 2013), Adelaide, Australia. Conferences in Research and Practice in Information Technology (CRPIT), Vol. , Ross T. Smith and Burkhard Wuensche, Eds.