dc.contributor.author: Smith, Tony C.
dc.contributor.author: Lorenz, Michelle
dc.coverage.spatial: Conference held at Snowbird, Utah
dc.date.accessioned: 2008-12-18T20:53:34Z
dc.date.available: 2008-12-18T20:53:34Z
dc.date.issued: 2001
dc.identifier.citation: Smith, T.C. & Lorenz, M. (2001). Better text compression from fewer lexical n-grams. In Proceedings of the Data Compression Conference (DCC ’01). Washington, DC, USA: IEEE Computer Society.
dc.identifier.uri: https://hdl.handle.net/10289/1722
dc.description.abstract: Word-based context models for text compression have the capacity to outperform simpler character-based models, but are generally unattractive because of inherent problems with exponential model growth and corresponding data sparseness. These ill effects can be mitigated in an adaptive lossless compression scheme by modelling syntactic and semantic lexical dependencies independently.
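The trade-off the abstract describes (word models can compress better, but their statistics grow and thin out much faster than character statistics) can be seen in a small toy experiment. The Python sketch below is illustrative only, not the authors' scheme: it codes the same text with an adaptive order-0 character model and an adaptive order-0 word model under a Laplace estimator, and it assumes the word lexicon is known in advance and free to transmit, which hides exactly the model-growth cost the paper is concerned with. The function name adaptive_code_length and the sample text are made up for this sketch.

    import math
    from collections import Counter

    def adaptive_code_length(symbols, alphabet_size):
        # Adaptive Laplace estimator: p(s) = (count(s) + 1) / (n + alphabet_size),
        # with counts updated after each symbol is coded. An arithmetic coder
        # would emit close to -log2 p(s) bits per symbol, so we sum that.
        counts = Counter()
        bits = 0.0
        for n, s in enumerate(symbols):
            p = (counts[s] + 1) / (n + alphabet_size)
            bits += -math.log2(p)
            counts[s] += 1
        return bits

    text = "the cat sat on the mat and the dog sat on the log " * 40

    chars = list(text)
    words = text.split()

    # Character model: a fixed byte alphabet of 256 symbols.
    char_bits = adaptive_code_length(chars, 256)

    # Word model: we pretend the lexicon of distinct words (with their
    # boundaries) is known in advance and costs nothing to transmit.
    lexicon = set(words)
    word_bits = adaptive_code_length(words, len(lexicon))

    print(f"char model: {char_bits / len(text):.2f} bits/char over {len(set(chars))} symbols")
    print(f"word model: {word_bits / len(text):.2f} bits/char over {len(lexicon)} symbols")

On this highly repetitive sample the word model needs well under one bit per character while the character model needs three to four; dropping the free-lexicon assumption forces the word model to also pay for every novel word (and for word pairs, triples, and so on at higher orders), which is the exponential growth and sparseness the paper mitigates by modelling syntactic and semantic lexical dependencies with separate, independent models.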
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.publisher: IEEE Computer Society
dc.rights: This paper has been published in the Proceedings of the Data Compression Conference (DCC ’01). © 2001 IEEE Computer Society.
dc.subject: computer science
dc.subject: text compression
dc.subject: machine learning
dc.title: Better text compression from fewer lexical n-grams
dc.type: Conference Contribution
dc.identifier.doi: 10.1109/DCC.2001.10047
dc.relation.isPartOf: DCC 2001: IEEE Data Compression Conference
pubs.begin-page: 516
pubs.elements-id: 11594
pubs.end-page: 516
pubs.finish-date: 2001-03-29
pubs.start-date: 2001-03-27

