1992 Working Papers

Recent Submissions

  • Publication
    Comparing human and computational models of music prediction
    (Working Paper, Department of Computer Science, University of Waikato, 1992) Witten, Ian H.; Manzara, Leonard C.; Conklin, Darrell
    The information content of each successive note in a piece of music is not an intrinsic musical property but depends on the listener's own model of a genre of music. Human listeners' models can be elicited by having them guess successive notes and assign probabilities to their guesses by gambling. Computational models can be constructed by developing a structural framework for prediction, and "training" the system by having it assimilate a corpus of sample compositions and adjust its internal probability estimates accordingly. These two modeling techniques turn out to yield remarkably similar values for the information content, or "entropy," of the Bach chorale melodies. While previous research has concentrated on the overall information content of whole pieces of music, the present study evaluates and compares the two kinds of model in fine detail. Their predictions for two particular chorale melodies are analyzed on a note-by-note basis, and the smoothed information profiles of the chorales are examined and compared. Apart from the intrinsic interest of comparing human with computational models of music, several conclusions are drawn for the improvement of computational models.
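    As a rough illustration of the calculation this abstract refers to (and not the authors' actual models), the Python sketch below trains a first-order pitch model on a tiny invented corpus and prints the information content, -log2 p, of each note in a test melody; the corpus, the note encoding and the smoothing constant are all assumptions made for the example.

        import math
        from collections import defaultdict

        def train_first_order(corpus):
            """Count pitch-to-pitch transitions over a corpus of melodies."""
            counts = defaultdict(lambda: defaultdict(int))
            for melody in corpus:
                for prev, note in zip(melody, melody[1:]):
                    counts[prev][note] += 1
            return counts

        def information_profile(counts, melody, alphabet, alpha=1.0):
            """Per-note information content, -log2 p(note | previous note), with
            add-alpha smoothing so unseen transitions still get a probability."""
            profile = []
            for prev, note in zip(melody, melody[1:]):
                row = counts[prev]
                total = sum(row.values()) + alpha * len(alphabet)
                p = (row[note] + alpha) / total
                profile.append(-math.log2(p))
            return profile

        # Invented toy corpus: melodies as lists of MIDI pitch numbers.
        corpus = [[60, 62, 64, 65, 67, 65, 64, 62, 60],
                  [67, 65, 64, 62, 60, 62, 64, 62, 60]]
        alphabet = sorted({n for m in corpus for n in m})
        counts = train_first_order(corpus)
        test = [60, 62, 64, 62, 60]
        for note, bits in zip(test[1:], information_profile(counts, test, alphabet)):
            print(f"note {note}: {bits:.2f} bits")

    Averaging these per-note values gives the kind of overall entropy figure earlier work reported, while plotting them note by note gives an information profile of the sort the paper compares between human and computational predictors.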
  • Publication
    Data logging and performance analysis software for teachers of indigenous New Zealanders: early results
    (Working Paper, Department of Computer Science, University of Waikato, 1992) Barbour, Robert H.; Ford, Greg; Cunningham, Sally Jo
    Technology, in the form of personal computers, is making inroads into everyday life in every part of every nation. It is frequently assumed that this is 'a good thing'. However, there is a need for the people in each cultural group in each nation to appropriate technology for themselves. Indigenous people, such as the Maori of New Zealand/Aotearoa, are in danger of losing their language because technology has a European face. Yet despite the fact that the Maori are currently experiencing a cultural renaissance, there are no commercially available products that are specifically designed for Maori-speaking people.
  • Publication
    Natural language processing in speech understanding systems
    (Working Paper, Department of Computer Science, University of Waikato, 1992) Holmes, Geoffrey
    Speech understanding systems (SUS's) came of age in late 1971 as a result of a five-year development programme instigated by the Information Processing Technology Office of the Advanced Research Projects Agency (ARPA) of the Department of Defense in the United States. The aim of the programme was to research and develop practical man-machine communication systems. It has since been argued that the main contribution of this project was not to the development of speech science but to the development of artificial intelligence. That debate is beyond the scope of this paper, though few would question that the field within artificial intelligence to benefit most from the programme is natural language understanding. More recent projects of a similar nature, such as those in the United Kingdom's ALVEY programme and Europe's ESPRIT programme, have added further developments to this important field. This paper presents a review of some of the natural language processing techniques used within speech understanding systems. In particular, techniques for handling syntactic, semantic and pragmatic information are discussed; they are integrated into SUS's as knowledge sources. The most common application of these systems is to provide an interface to a database: the system has to carry out a dialogue with a user who is generally unknown to it. Typical examples are train and aeroplane timetable enquiry systems, travel management systems and document retrieval systems.
  • Publication
    Getting research students started: a tale of two courses
    (Working Paper, Department of Computer Science, University of Waikato, 1992) Witten, Ian H.; Bell, Timothy C.
    As graduate programs in Computer Science grow and mature and undergraduate populations stabilize, an increasing proportion of our resources is being devoted to the training of researchers in the field. Many inefficiencies are evident in our graduate programs. These include undesirably long average times to thesis completion, students' poor work habits and general lack of professionalism, and the unnecessary duplication of having supervisors introduce their students individually to the basics of research. Solving these problems requires specifically targeted education to get students started in their graduate research and introduce them to the skills and tools needed to complete it efficiently and effectively. We have used two different approaches in our respective departments. One is a (half-)credit course on research skills; the other is a one-week intensive non-credit "survival course" at the beginning of the year. The advantage of the former is the opportunity to cover material in depth and for students to practice their skills; the latter is much less demanding on students and is easier to fit into an existing graduate program.
  • Publication
    Voice input and information exchange in asynchronous group communication
    (Working Paper, Department of Computer Science, University of Waikato, 1992) McQueen, Robert J.
    Existing computer-supported co-operative work (CSCW) systems for group communication typically require some amount of keyboard input, and this may limit their usefulness. A prototype voice-input system for asynchronous (time-separated transactions) group communication (AGC), with simulated conversion to text, was developed, and an experiment was constructed to investigate whether it offered advantages over conventional keyboard-input computer conferencing for an information exchange task. Both the number of words used and the number of facts disclosed were higher for voice input than for text input, which suggests that voice input could be advantageous for future asynchronous group communication systems supporting information exchange.
  • Publication
    Computer improvisation of blues melodies
    (Working Paper, Department of Computer Science, University of Waikato, 1992) Hall, Mark A.
    A computer program has been written which composes blues melodies to fit a given backing chord sequence. The program comprises an analysis stage followed by a synthesis stage. The analysis stage takes blues tunes and produces zero-, first- and second-order Markov transition tables covering both pitches and rhythms. In order to capture the relationship between harmony and melody, a set of transition tables is produced for each chord in the analysed songs. The synthesis stage uses the output tables from analysis to generate new melodies; second-order tables are used as much as possible, with fall-back procedures to first- and zero-order tables to deal with zero-frequency problems. Some constraints are encoded in the form of rules to control the placement of rhythmic patterns within measures, pitch values for long-duration notes and pitch values for the start of new phrases. A listening experiment was conducted to determine how well the program captures the structure of blues melodies. Results showed that listeners were unable to reliably distinguish human-composed from computer-composed melodies.
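    The order fall-back scheme described in the abstract above can be sketched in a few lines of Python. This is an illustrative reconstruction rather than the program itself: it omits the per-chord tables and the rhythmic and phrasing rules, and the training melodies, pitch numbers and function names are all assumptions.

        import random
        from collections import defaultdict

        def build_tables(melodies):
            """Zero-, first- and second-order pitch transition counts."""
            zero = defaultdict(int)
            first = defaultdict(lambda: defaultdict(int))
            second = defaultdict(lambda: defaultdict(int))
            for m in melodies:
                for n in m:
                    zero[n] += 1
                for a, b in zip(m, m[1:]):
                    first[a][b] += 1
                for a, b, c in zip(m, m[1:], m[2:]):
                    second[(a, b)][c] += 1
            return zero, first, second

        def next_pitch(zero, first, second, prev2, prev1):
            """Prefer the second-order table; fall back to first order, then to
            zero order, when the current context never occurred in training."""
            for table in (second.get((prev2, prev1)), first.get(prev1), zero):
                if table:
                    pitches, weights = zip(*table.items())
                    return random.choices(pitches, weights=weights)[0]
            raise ValueError("no training data")

        # Invented toy training melodies (MIDI pitches over a blues-like scale).
        melodies = [[60, 63, 65, 66, 67, 70, 67, 65, 63, 60],
                    [60, 63, 65, 63, 60, 58, 60]]
        zero, first, second = build_tables(melodies)
        tune = [60, 63]
        for _ in range(10):
            tune.append(next_pitch(zero, first, second, tune[-2], tune[-1]))
        print(tune)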
  • Publication
    Semantic and generative models for lossy text compression
    (Working Paper, Department of Computer Science, University of Waikato, 1992) Witten, Ian H.; Bell, Timothy C.; Moffat, Alistair; Smith, Tony C.; Nevill-Manning, Craig G.
    The apparent divergence between the research paradigms of text and image compression has led us to consider the potential for applying methods developed for one domain to the other. This paper examines the idea of "lossy" text compression, which transmits an approximation to the input text rather than the text itself. In image coding, lossy techniques have proven to yield compression factors that are vastly superior to those of the best lossless schemes, and we show that this is also the case for text. Two different methods are described here, one inspired by the use of fractals in image compression. They can be combined into an extremely effective technique that provides much better compression than the present state of the art and yet preserves a reasonable degree of match between the original and received text. The major challenge for lossy text compression is identified as the reliable evaluation of the quality of this match.
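    The abstract above does not spell out either of the paper's two methods, so the Python sketch below is only a loosely related illustration of the evaluation problem it ends with: a deliberately crude, invented lossy transform paired with one simple (and, as the paper suggests, not obviously reliable) way of scoring how well the received text matches the original.

        def lossy(text):
            """An invented stand-in for a lossy text coder (not one of the
            paper's methods): lower-case the text and drop its vowels."""
            return "".join(c for c in text.lower() if c not in "aeiou")

        def match_quality(original, received):
            """Character-level similarity in [0, 1] based on edit distance."""
            prev = list(range(len(received) + 1))
            for i, ca in enumerate(original, 1):
                cur = [i]
                for j, cb in enumerate(received, 1):
                    cur.append(min(prev[j] + 1,                 # deletion
                                   cur[j - 1] + 1,              # insertion
                                   prev[j - 1] + (ca != cb)))   # substitution
                prev = cur
            return 1 - prev[-1] / max(len(original), len(received), 1)

        text = "the quick brown fox jumps over the lazy dog"
        approx = lossy(text)
        print(approx)
        print(round(match_quality(text, approx), 2))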
  • Publication
    Drawing diagrams in TeX documents
    (Working Paper, Department of Computer Science, University of Waikato, 1992) Rogers, Bill; Holmes, Geoffrey
    The typesetting language TeX [Knuth (1984)] is now available on a range of computers from mainframes to micros. It has an unequalled ability to typeset mathematical text, with its many formatting features and fonts. TeX has facilities for drawing diagrams using the packages tpic and PiCTeX. This paper describes a TeX preprocessor written in Pascal which allows a programmer to embed diagrams in TeX documents. These diagrams may involve straight or curved lines and labelling text. The package is provided for people who either do not have access to tpic or PiCTeX or who prefer to program in Pascal.
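    The preprocessor described above is written in Pascal and its input syntax is not given in this abstract. Purely to illustrate the general idea of a program that turns a simple line-and-label description into typesetting commands, here is a hedged Python sketch that emits a LaTeX picture environment; the element format, function name and dimensions are invented, and the output route differs from the tpic and PiCTeX options the paper discusses.

        def picture(width, height, elements):
            """Emit a LaTeX picture environment (millimetre units) from a small,
            invented element list: ('hline', x, y, length), ('vline', x, y, length)
            or ('label', x, y, text)."""
            out = [r"\setlength{\unitlength}{1mm}",
                   r"\begin{picture}(%d,%d)" % (width, height)]
            for kind, x, y, arg in elements:
                if kind == "hline":
                    out.append(r"\put(%d,%d){\line(1,0){%d}}" % (x, y, arg))
                elif kind == "vline":
                    out.append(r"\put(%d,%d){\line(0,1){%d}}" % (x, y, arg))
                elif kind == "label":
                    out.append(r"\put(%d,%d){\makebox(0,0){%s}}" % (x, y, arg))
            out.append(r"\end{picture}")
            return "\n".join(out)

        # A labelled 40 mm x 20 mm box.
        print(picture(50, 30, [("hline", 5, 5, 40), ("hline", 5, 25, 40),
                               ("vline", 5, 5, 20), ("vline", 45, 5, 20),
                               ("label", 25, 15, "diagram")]))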