Show simple item record

dc.contributor.author: Gouk, Henry
dc.contributor.author: Pfahringer, Bernhard
dc.contributor.author: Frank, Eibe
dc.contributor.author: Cree, Michael J.
dc.contributor.editor: Berlingerio, Michele
dc.contributor.editor: Bonchi, Francesco
dc.contributor.editor: Gärtner, Thomas
dc.contributor.editor: Hurley, Neil
dc.contributor.editor: Ifrim, Georgiana
dc.coverage.spatial: Dublin, Ireland
dc.date.accessioned: 2019-01-25T00:10:36Z
dc.date.available: 2019
dc.date.available: 2019-01-25T00:10:36Z
dc.date.issued: 2019
dc.identifier.citation: Gouk, H., Pfahringer, B., Frank, E., & Cree, M. J. (2019). MaxGain: Regularisation of neural networks by constraining activation magnitudes. In M. Berlingerio, F. Bonchi, T. Gärtner, N. Hurley, & G. Ifrim (Eds.), Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2018. Lecture Notes in Computer Science (Vol. 11051, pp. 541–556). Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-030-10925-7_33
dc.identifier.isbn: 978-3-030-10924-0
dc.identifier.uri: https://hdl.handle.net/10289/12301
dc.description.abstract: Effective regularisation of neural networks is essential to combat overfitting due to the large number of parameters involved. We present an empirical analogue to the Lipschitz constant of a feed-forward neural network, which we refer to as the maximum gain. We hypothesise that constraining the gain of a network will have a regularising effect, similar to how constraining the Lipschitz constant of a network has been shown to improve generalisation. A simple algorithm is provided that involves rescaling the weight matrix of each layer after each parameter update. We conduct a series of studies on common benchmark datasets, and also a novel dataset that we introduce to enable easier significance testing for experiments using convolutional networks. Performance on these datasets compares favourably with other common regularisation techniques. Data related to this paper is available at: https://www.cs.waikato.ac.nz/~ml/sins10/
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.publisher: Springer
dc.rights: © 2019 Springer Nature Switzerland AG. This is the author's accepted version. The final publication is available at Springer via dx.doi.org/10.1007/978-3-030-10925-7_33
dc.source: Joint European Conference on Machine Learning and Knowledge Discovery in Databases
dc.subject: Machine learning
dc.title: MaxGain: Regularisation of neural networks by constraining activation magnitudes
dc.type: Conference Contribution
dc.identifier.doi: 10.1007/978-3-030-10925-7_33
dc.relation.isPartOf: Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2018. Lecture Notes in Computer Science
pubs.begin-page: 541
pubs.elements-id: 231429
pubs.end-page: 556
pubs.place-of-publication: Cham, Switzerland
pubs.publisher-url: https://link.springer.com/chapter/10.1007/978-3-030-10925-7_33
pubs.volume: 11051
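The abstract describes the MaxGain procedure as rescaling each layer's weight matrix after every parameter update so that the network's empirical gain stays bounded. As a rough illustration only — the function name, the `max_gain` cap, and the use of a plain output/input norm ratio on one batch are assumptions, not the paper's exact constraint — a single layer's rescaling step might be sketched as:

```python
import numpy as np

def rescale_weights(W, X, max_gain=2.0, eps=1e-8):
    """Hypothetical sketch of a MaxGain-style constraint for one linear layer.

    Estimates the layer's empirical gain on a batch X (ratio of output
    to input L2 norms) and shrinks W so the gain does not exceed max_gain.
    """
    Y = X @ W.T  # layer output for the batch
    gain = np.linalg.norm(Y) / (np.linalg.norm(X) + eps)
    if gain > max_gain:
        # The layer is linear in W, so scaling W scales the gain directly.
        W = W * (max_gain / gain)
    return W
```

In this sketch the step would be applied to each layer's weights immediately after the optimiser update, mirroring the "rescale after each parameter update" description in the abstract.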

