Research Commons

      History-based visual mining of semi-structured audio and text

      Bouamrane, Matt-Mouley; Luz, Saturnino; Masoodian, Masood
      Files
      History-Based Visual Mining of Semi-Structured Audio and Text.pdf (148.5 KB)
      DOI
      10.1109/MMMC.2006.1651349
      Link
      ieeexplore.ieee.org
      Citation
      Bouamrane, M.-M., Luz, S., & Masoodian, M. (2006). History-based visual mining of semi-structured audio and text. In Proceedings of the 12th International Multi-Media Modelling Conference (MMM 2006), Beijing, China, January 2006 (pp. 360-363). Washington, DC, USA: IEEE Computer Society.
      Permanent Research Commons link: https://hdl.handle.net/10289/1700
      Abstract
      Accessing specific or salient parts of multimedia recordings remains a challenge as there is no obvious way of structuring and representing a mix of space-based and time-based media. A number of approaches have been proposed which usually involve translating the continuous component of the multimedia recording into a space-based representation, such as text from audio through automatic speech recognition and images from video (keyframes). In this paper, we present a novel technique which defines retrieval units in terms of a log of actions performed on space-based artefacts, and exploits timing properties and extended concurrency to construct a visual presentation of text and speech data. This technique can be easily adapted to any mix of space-based artefacts and continuous media.
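      The abstract's core idea, defining retrieval units from a timestamped log of actions on space-based artefacts and linking them to concurrent audio, can be illustrated with a minimal sketch. This is purely illustrative and not the authors' implementation: the `Action` and `AudioSegment` schemas, the `retrieval_units` function, and the fixed ± `window` neighbourhood (a crude stand-in for the paper's "extended concurrency") are all assumptions.

      ```python
      from dataclasses import dataclass

      @dataclass
      class Action:
          """A logged operation on a space-based artefact (hypothetical schema)."""
          artefact: str
          time: float  # seconds from the start of the recording

      @dataclass
      class AudioSegment:
          """A time-based media span, e.g. one speaker turn."""
          start: float
          end: float
          speaker: str

      def retrieval_units(actions, segments, window=5.0):
          """For each logged action, collect the audio segments whose span
          overlaps a +/- `window`-second neighbourhood of the action's
          timestamp, yielding one retrieval unit per action."""
          units = {}
          for a in actions:
              lo, hi = a.time - window, a.time + window
              # Standard interval-overlap test against the widened window.
              units[(a.artefact, a.time)] = [
                  s for s in segments if s.start < hi and s.end > lo
              ]
          return units
      ```

      For example, an annotation made 12 s into a recording would pull in any speech segment intersecting the 7-17 s window, so a unit pairs the edited artefact with the speech surrounding the edit.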
      Date
      2006
      Type
      Conference Contribution
      Publisher
      IEEE Computer Society
      Rights
      This article has been published in Proceedings of the 12th International Multi-Media Modelling Conference (MMM 2006), Beijing, China, January 2006. ©2006 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
      Collections
      • Computing and Mathematical Sciences Papers [1454]