
Comparing human and computational models of music prediction

Abstract
The information content of each successive note in a piece of music is not an intrinsic musical property but depends on the listener's own model of a genre of music. Human listeners' models can be elicited by having them guess successive notes and assign probabilities to their guesses by gambling. Computational models can be constructed by developing a structural framework for prediction, and "training" the system by having it assimilate a corpus of sample compositions and adjust its internal probability estimates accordingly. These two modeling techniques turn out to yield remarkably similar values for the information content, or "entropy," of the Bach chorale melodies. While previous research has concentrated on the overall information content of whole pieces of music, the present study evaluates and compares the two kinds of model in fine detail. Their predictions for two particular chorale melodies are analyzed on a note-by-note basis, and the smoothed information profiles of the chorales are examined and compared. Apart from the intrinsic interest of comparing human with computational models of music, several conclusions are drawn for the improvement of computational models.
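The "information content" and "entropy" referred to here are, in the usual Shannon sense, derived from the probability a model (human or computational) assigns to the note that actually occurs; the notation below is assumed for illustration and is not taken from the paper itself. For the i-th note assigned probability p_i, the per-note information content and the average over an n-note melody are:

h_i = -\log_2 p_i \quad \text{(bits per note)}

\bar{h} = \frac{1}{n} \sum_{i=1}^{n} -\log_2 p_i

Under this reading, a low average \bar{h} indicates a model that predicts the melody well, which is how the human and computational models can be compared note by note.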
Type
Working Paper
Series
Computer Science Working Papers
Citation
Witten, I. H., Manzara, L. C., & Conklin, D. (1992). Comparing human and computational models of music prediction (Computer Science Working Papers 92/4). Hamilton, New Zealand: Department of Computer Science, University of Waikato.
Date
1992
Publisher
Department of Computer Science, University of Waikato
Rights
© 1992 Ian H. Witten, Leonard C. Manzara & Darrell Conklin