Show simple item record  

dc.contributor.author: Barddal, Jean Paul (en_NZ)
dc.contributor.author: Gomes, Heitor Murilo (en_NZ)
dc.contributor.author: Enembreck, Fabrício (en_NZ)
dc.contributor.author: Pfahringer, Bernhard (en_NZ)
dc.contributor.author: Bifet, Albert (en_NZ)
dc.contributor.editor: Frasconi, Paolo (en_NZ)
dc.contributor.editor: Landwehr, Niels (en_NZ)
dc.contributor.editor: Manco, Giuseppe (en_NZ)
dc.contributor.editor: Vreeken, Jilles (en_NZ)
dc.coverage.spatial: Riva del Garda, Italy (en_NZ)
dc.date.accessioned: 2017-05-04T03:51:49Z
dc.date.available: 2016 (en_NZ)
dc.date.available: 2017-05-04T03:51:49Z
dc.date.issued: 2016 (en_NZ)
dc.identifier.citation: Barddal, J. P., Gomes, H. M., Enembreck, F., Pfahringer, B., & Bifet, A. (2016). On dynamic feature weighting for feature drifting data streams. In P. Frasconi, N. Landwehr, G. Manco, & J. Vreeken (Eds.), Proceedings of European Conference on Machine Learning and Knowledge Discovery in Databases (Vol. LNAI 9852, pp. 129–144). Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-319-46227-1_9 (en)
dc.identifier.isbn: 9783319462264 (en_NZ)
dc.identifier.issn: 0302-9743 (en_NZ)
dc.identifier.uri: https://hdl.handle.net/10289/11028
dc.description.abstract: The ubiquity of data streams has been encouraging the development of new incremental and adaptive learning algorithms. Data stream learners must be fast and memory-bounded, but above all, tailored to adapt to possible changes in the data distribution, a phenomenon named concept drift. Recently, several works have shown the impact of a so far nearly neglected type of drift: feature drifts. Feature drifts occur whenever a subset of features becomes, or ceases to be, relevant to the learning task. In this paper we (i) provide insights into how the relevance of features can be tracked as a stream progresses according to the information-theoretic Symmetrical Uncertainty; and (ii) show how it can be used to boost two learning schemes: Naive Bayes and k-Nearest Neighbor. Furthermore, we investigate the usage of these two new dynamically weighted learners as prediction models in the leaves of the Hoeffding Adaptive Tree classifier. Results show improvements in accuracy (an average of 10.69% for k-Nearest Neighbor, 6.23% for Naive Bayes, and 4.42% for Hoeffding Adaptive Trees) on both synthetic and real-world datasets, at the expense of a bounded increase in both memory consumption and processing time. (en_NZ)
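The abstract above tracks feature relevance with Symmetrical Uncertainty, the standard information-theoretic measure SU(X, Y) = 2·I(X; Y) / (H(X) + H(Y)). The paper itself is not reproduced in this record, so the following is only a minimal sketch of that measure for discrete data; function and variable names are illustrative, not taken from the authors' implementation.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (in bits) of a discrete sequence."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def symmetrical_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), normalized to [0, 1]."""
    h_x, h_y = entropy(x), entropy(y)
    h_xy = entropy(list(zip(x, y)))        # joint entropy H(X, Y)
    mutual_info = h_x + h_y - h_xy         # I(X; Y)
    denom = h_x + h_y
    return 2 * mutual_info / denom if denom else 0.0

# A feature identical to the class label scores 1; a constant one scores 0.
label = [0, 0, 1, 1, 0, 1, 0, 1]
print(symmetrical_uncertainty(label, label))    # → 1.0
print(symmetrical_uncertainty([1] * 8, label))  # → 0.0
```

In a streaming setting such as the one the paper studies, these counts would be maintained incrementally (e.g., over a sliding window) rather than recomputed from scratch, so that each feature's SU score against the class can be updated per arriving instance.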
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.publisher: Springer (en_NZ)
dc.rights: © 2016 Springer International Publishing Switzerland. This is the author's accepted version. The final publication is available at Springer via https://dx.doi.org/10.1007/978-3-319-46227-1_9
dc.source: ECML PKDD 2016 (en_NZ)
dc.subject: computer science
dc.subject: data stream mining
dc.subject: concept drift
dc.subject: feature drift
dc.subject: feature weighting
dc.subject: machine learning
dc.title: On dynamic feature weighting for feature drifting data streams (en_NZ)
dc.type: Conference Contribution
dc.identifier.doi: 10.1007/978-3-319-46227-1_9 (en_NZ)
dc.relation.isPartOf: Proceedings of European Conference on Machine Learning and Knowledge Discovery in Databases (en_NZ)
pubs.begin-page: 129
pubs.elements-id: 142693
pubs.end-page: 144
pubs.finish-date: 2016-09-23 (en_NZ)
pubs.place-of-publication: Cham, Switzerland
pubs.start-date: 2016-09-19 (en_NZ)
pubs.volume: LNAI 9852 (en_NZ)
dc.identifier.eissn: 1611-3349 (en_NZ)

