dc.contributor.author: Sanyang, Momodou L. (en_NZ)
dc.contributor.author: Durrant, Robert J. (en_NZ)
dc.contributor.author: Kabán, Ata (en_NZ)
dc.coverage.spatial: Conference held at Vancouver Convention Centre, Vancouver, Canada (en_NZ)
dc.date.accessioned: 2016-07-04T02:37:33Z
dc.date.available: 2016 (en_NZ)
dc.date.available: 2016-07-04T02:37:33Z
dc.date.issued: 2016 (en_NZ)
dc.identifier.citation: Sanyang, M. L., Durrant, R. J., & Kaban, A. (2016). How effective is Cauchy-EDA in high dimensions? In Proceedings of IEEE Congress on Evolutionary Computation 2016 (CEC2016). Conference held at Vancouver Convention Centre, Vancouver, Canada: IEEE. (en)
dc.identifier.uri: https://hdl.handle.net/10289/10506
dc.description.abstract: We consider the problem of high-dimensional black-box optimisation via Estimation of Distribution Algorithms (EDA) and the use of heavy-tailed search distributions in this setting. Some authors have suggested that employing a heavy-tailed search distribution, such as a Cauchy, may make EDA better explore a high-dimensional search space. However, other authors have found Cauchy search distributions to be less effective than Gaussian search distributions on high-dimensional problems. In this paper, we set out to resolve this controversy. To achieve this we run extensive experiments on a battery of high-dimensional test functions, and develop some theory which shows that small search steps are always more likely than large ones to move the search distribution towards the global optimum and, in particular, that large search steps in high-dimensional spaces nearly always do badly in this respect. We hypothesise that, since exploration by large steps is mostly counterproductive in high dimensions, and since the fraction of good directions decays exponentially fast with increasing dimension, one should instead focus mainly on finding the right direction in which to move the search distribution. We propose a minor change to standard Gaussian EDA which implicitly achieves this aim, and our experiments on a sequence of test functions confirm the good performance of our new approach. (en_NZ)
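
To make the comparison described in the abstract concrete, the sketch below implements a minimal continuous EDA (diagonal Gaussian model with truncation selection) that can optionally draw its offspring from a heavy-tailed Cauchy search distribution instead of a Gaussian one. This is an illustrative reading of the setup discussed above, not the authors' exact algorithm nor their proposed modification; the sphere objective, population size, selection fraction and all other parameter values are hypothetical choices.

# Illustrative sketch only -- NOT the paper's algorithm. A minimal continuous
# EDA that re-estimates a diagonal (mean, scale) model from elite samples each
# generation, with a choice of Gaussian or heavy-tailed Cauchy search
# distribution. All names and parameter values are hypothetical.
import numpy as np

def sphere(x):
    """Toy objective: squared distance from the origin (minimisation)."""
    return np.sum(x ** 2, axis=1)

def eda(objective, dim=100, pop_size=200, elite_frac=0.3,
        generations=500, search_dist="gaussian", seed=0):
    rng = np.random.default_rng(seed)
    mean = rng.uniform(-5.0, 5.0, size=dim)   # initial search-distribution mean
    std = np.full(dim, 2.0)                   # initial per-coordinate scale

    for _ in range(generations):
        noise = rng.standard_normal((pop_size, dim))
        if search_dist == "cauchy":
            # Cauchy step: divide each Gaussian draw by an independent |N(0,1)|
            # variate (Student-t with 1 d.o.f.), which occasionally produces
            # the very large "exploratory" steps discussed in the abstract.
            noise = noise / np.abs(rng.standard_normal((pop_size, 1)))
        pop = mean + std * noise

        # Truncation selection: keep the best elite_frac of the population,
        # then refit the mean and per-coordinate scale from the elites.
        elite = pop[np.argsort(objective(pop))[: int(elite_frac * pop_size)]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-12

    return mean, objective(mean[None, :])[0]

# Example usage: compare the two search distributions on a 100-dimensional sphere.
for dist in ("gaussian", "cauchy"):
    _, best = eda(sphere, search_dist=dist)
    print(dist, best)
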
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.publisher: IEEE (en_NZ)
dc.relation.uri: http://www.stats.waikato.ac.nz/~bobd/cec2016.pdf
dc.relation.uri: http://www.wcci2016.org/document/cec2016_5new.pdf
dc.rights: This is the author's accepted version of an article to be included in the Proceedings of IEEE Congress on Evolutionary Computation 2016 (CEC2016). © 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.source: IEEE Congress on Evolutionary Computation 2016 (en_NZ)
dc.subject: Machine learning
dc.title: How effective is Cauchy-EDA in high dimensions? (en_NZ)
dc.type: Conference Contribution
dc.relation.isPartOf: Proceedings of IEEE Congress on Evolutionary Computation 2016 (CEC2016) (en_NZ)
pubs.elements-id: 139296
pubs.finish-date: 2016-07-29 (en_NZ)
pubs.publication-status: Accepted (en_NZ)
pubs.publisher-url: http://www.wcci2016.org/document/cec2016_5new.pdf (en_NZ)
pubs.start-date: 2016-07-24 (en_NZ)