Parameter Tuning Using Gaussian Processes

dc.contributor.advisor: Frank, Eibe
dc.contributor.advisor: Holmes, Geoffrey
dc.contributor.author: Ma, Jinjin
dc.date.accessioned: 2012-07-15T23:31:51Z
dc.date.available: 2012-07-15T23:31:51Z
dc.date.issued: 2012
dc.date.updated: 2012-01-23T02:35:20Z
dc.description.abstract: Most machine learning algorithms require their parameter values to be set before they are applied to a problem. Appropriate settings yield good performance, while inappropriate ones generally result in poor models. Hence, it is necessary to acquire the “best” parameter values for a particular algorithm before building the model. The “best” model not only reflects the “real” function and fits the existing points well, but also performs well when making predictions for new, previously unseen points. A number of methods have been proposed to optimize parameter values. All of them are essentially trial-and-error processes; the work presented in this thesis instead employs Gaussian process (GP) regression to optimize the parameter values of a given machine learning algorithm. In this thesis, we consider only learning algorithms with two parameters, and all candidate parameter values are specified on a 2-dimensional grid. To avoid brute-force search, Gaussian Process Optimization (GPO) uses “expected improvement” to select promising points rather than evaluating every point of the grid in turn. The point with the highest expected improvement is evaluated using cross-validation, and the resulting data point is added to the training set for the Gaussian process model. This process is repeated until a stopping criterion is met. The final model is then built using the learning algorithm with the best parameter values identified in this process. To test the effectiveness of this optimization method on regression and classification problems, we use it to optimize the parameters of several well-known machine learning algorithms, such as decision tree learning, support vector machines and boosting with trees.
Through the analysis of experimental results obtained on datasets from the UCI repository, we find that the GPO algorithm yields competitive performance compared with a brute-force approach, while exhibiting a distinct advantage in terms of training time and number of cross-validation runs. Overall, the GPO method is a promising method for the optimization of parameter values in machine learning.
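The GPO loop the abstract describes (fit a GP to evaluated parameter settings, score every grid point by expected improvement, evaluate the best one, repeat) can be sketched as follows. This is a minimal illustration, not the thesis implementation: the RBF kernel, its length scale, the fixed iteration budget, and the toy objective standing in for a cross-validation run are all assumptions.

```python
import math
import numpy as np

def rbf_kernel(A, B, length_scale=0.3):
    """Squared-exponential covariance between two point sets (assumed kernel)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior(X_train, y_train, X_grid, noise=1e-6):
    """GP regression posterior mean and standard deviation at each grid point."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_grid)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - (v ** 2).sum(axis=0)          # k(x, x) = 1 for this kernel
    return mu, np.sqrt(np.maximum(var, 1e-12))

# Standard normal pdf/cdf, vectorised with stdlib math only.
_pdf = lambda z: np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
_cdf = np.vectorize(lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

def expected_improvement(mu, sigma, best):
    """EI for maximisation: expected amount by which each point beats 'best'."""
    z = (mu - best) / sigma
    return (mu - best) * _cdf(z) + sigma * _pdf(z)

# Toy stand-in for "cross-validated performance at parameter setting (a, b)".
objective = lambda p: 1.0 - (p[0] - 0.3) ** 2 - (p[1] - 0.7) ** 2

# 2-dimensional grid of candidate parameter values, as in the thesis setup.
grid = np.array([[a, b] for a in np.linspace(0, 1, 11)
                        for b in np.linspace(0, 1, 11)])

X = np.array([[0.0, 0.0], [1.0, 1.0]])        # two seed evaluations
y = np.array([objective(p) for p in X])

for _ in range(10):                           # fixed budget as stopping criterion
    mu, sigma = gp_posterior(X, y, grid)
    nxt = grid[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X = np.vstack([X, nxt])                   # "evaluate" the chosen point and
    y = np.append(y, objective(nxt))          # add it to the GP training set

best_params = X[np.argmax(y)]                 # setting used for the final model
```

Each iteration costs one (simulated) cross-validation run, which is the source of the training-time advantage over brute-force search: the GP model decides where evaluations are worth spending.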
dc.format.mimetype: application/pdf
dc.identifier.citation: Ma, J. (2012). Parameter Tuning Using Gaussian Processes (Thesis, Master of Science (MSc)). University of Waikato, Hamilton, New Zealand. Retrieved from https://hdl.handle.net/10289/6497
dc.identifier.uri: https://hdl.handle.net/10289/6497
dc.language.iso: en
dc.publisher: University of Waikato
dc.rights: All items in Research Commons are provided for private study and research purposes and are protected by copyright with all rights reserved unless otherwise indicated.
dc.subject: Parameter Tuning
dc.subject: Gaussian Process Optimization
dc.subject: Machine Learning
dc.title: Parameter Tuning Using Gaussian Processes
dc.type: Thesis
pubs.place-of-publication: Hamilton, New Zealand
thesis.degree.grantor: University of Waikato
thesis.degree.level: Masters
thesis.degree.name: Master of Science (MSc)
Files
Original bundle
Name: thesis.pdf
Size: 2.81 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 2.07 KB
Description: Item-specific license agreed upon to submission