
      A tale of two studies

      Bowen, Judy; Reeves, Steve; Schweer, Andrea
Files
A tale of two studies.pdf (1.589 MB)
Link
crpit.com
      Citation
Bowen, J., Reeves, S., & Schweer, A. (2013). A tale of two studies. In Proceedings of the Fourteenth Australasian User Interface Conference (AUIC 2013), Adelaide, Australia, 29 January - 1 February 2013 (pp. 81-89).
      Permanent Research Commons link: https://hdl.handle.net/10289/8425
      Abstract
Running user evaluation studies is a useful way of getting feedback on partially or fully implemented software systems. Unlike hypothesis-based testing (where specific design decisions can be tested or comparisons made between design choices), the aim is to find as many problems (both usability and functional) as possible prior to implementation or release. It is particularly useful in small-scale development projects that may lack the resources and expertise for other types of usability testing. Developing a user study that performs this task successfully and efficiently is not always straightforward, however. It may not be obvious how to decide what the participants should be asked to do in order to explore as many parts of the system's interface as possible. In addition, ad hoc approaches to such study development may mean that the testing is not easily repeatable on subsequent implementations or updates, and that particular areas of the software may not be evaluated at all. In this paper we describe two (very different) approaches to designing an evaluation study for the same piece of software, discuss the approaches taken and the differing results found, and comment on both.
      Date
      2013
      Type
      Conference Contribution
      Publisher
      ACS
      Rights
Copyright © 2013, Australian Computer Society, Inc. This paper appeared at the Fourteenth Australasian User Interface Conference (AUIC 2013), Adelaide, Australia. Conferences in Research and Practice in Information Technology (CRPIT), Vol. , Ross T. Smith and Burkhard Wuensche, Eds.
      Collections
      • Computing and Mathematical Sciences Papers [1454]
      • General Papers [46]