Research Commons

      Putting formal specifications under the magnifying glass: Model-based testing for validation

      Aydal, Emine G.; Paige, Richard F.; Utting, Mark; Woodcock, Jim
Files
2009-ICST-Emine2.pdf (377.2 KB)
      DOI
       10.1109/ICST.2009.20
Citation
Aydal, E. G., Paige, R. F., Utting, M., & Woodcock, J. (2009). Putting formal specifications under the magnifying glass: Model-based testing for validation. In Proceedings of the 2nd International Conference on Software Testing, Verification, and Validation, Denver, Colorado, USA, April 1-4, 2009 (pp. 131-140). IEEE.
      Permanent Research Commons link: https://hdl.handle.net/10289/1831
      Abstract
A software development process is effectively an abstract form of model transformation, progressing from an end-user model of requirements to a system model from which code can be automatically generated. The success (or failure) of such a transformation depends substantially on obtaining a correct, well-formed initial model that captures user concerns.

Model-based testing automates black-box testing by deriving test cases from a model of the system under analysis. This paper proposes and evaluates a novel model-based testing technique that aims to reveal specification- and requirement-related errors by generating test cases from a test model and exercising them on the design model. The case study outlined in the paper shows that a separate test model not only increases the level of objectivity of the requirements, but also supports the validation of the system under test through test case generation. The results obtained from the case study support the hypothesis that there may be discrepancies between the formal specification of the system as modeled at the developer's end and the problem to be solved, and that formal verification methods alone may not be sufficient to reveal them. The approach presented in this paper aims to provide greater confidence in the design model that is used as the basis for code generation.
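The core loop the abstract describes, generating test cases from an independently written test model and exercising them on the design model to expose discrepancies, can be illustrated with a minimal sketch. The Python example below is hypothetical and not the authors' tooling or case study: both the bounded-stack system and the seeded off-by-one specification error in the design model are invented for illustration only.

import itertools

CAPACITY = 2  # assumed capacity for this toy example

class TestModel:
    """Independent model of the requirements: pushes beyond capacity are rejected."""
    def __init__(self):
        self.size = 0
    def push(self):
        if self.size < CAPACITY:
            self.size += 1
            return "ok"
        return "full"
    def pop(self):
        if self.size > 0:
            self.size -= 1
            return "ok"
        return "empty"

class DesignModel:
    """Design model with a seeded specification error: off-by-one capacity check."""
    def __init__(self):
        self.size = 0
    def push(self):
        if self.size <= CAPACITY:   # bug: admits CAPACITY + 1 elements
            self.size += 1
            return "ok"
        return "full"
    def pop(self):
        if self.size > 0:
            self.size -= 1
            return "ok"
        return "empty"

# Generate all operation sequences up to length 4 over the test model's
# alphabet, then run each sequence on both models and compare the outputs.
for length in range(1, 5):
    for seq in itertools.product(["push", "pop"], repeat=length):
        tm, dm = TestModel(), DesignModel()
        for op in seq:
            expected = getattr(tm, op)()
            actual = getattr(dm, op)()
            if expected != actual:
                print(f"Discrepancy on {seq}: expected {expected!r}, got {actual!r}")
                break

Running the sketch flags the sequence ("push", "push", "push"): the test model answers "full" while the flawed design model answers "ok". This is the kind of specification-level discrepancy that, as the abstract argues, formal verification of the design model alone would not surface, since the design model is internally consistent with its own (wrong) capacity check.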
      Date
      2009
      Type
      Conference Contribution
      Publisher
      IEEE Computer Society
      Rights
      ©2009 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
      Collections
      • Computing and Mathematical Sciences Papers [1455]