
dc.contributor.author: Padmanabhan, Deepak
dc.contributor.author: Visweswariah, Karthik
dc.contributor.author: Wiratunga, Nirmalie
dc.contributor.author: Sani, Sadiq
dc.date.accessioned: 2016-09-21T17:25:13Z
dc.date.available: 2016-09-21T17:25:13Z
dc.date.issued: 2012-10-29
dc.identifier.citation: DEEPAK, P., VISWESWARIAH, K., WIRATUNGA, N. and SANI, S. 2012. Two-part segmentation of text documents. In Proceedings of the 21st ACM international conference on information and knowledge management, 29 October - 02 November 2012, Maui, USA. New York: ACM [online], pages 793-802. Available from: https://dx.doi.org/10.1145/2396761.2396862
dc.identifier.isbn: 9781450311564
dc.identifier.uri: http://hdl.handle.net/10059/1830
dc.description.abstract: We consider the problem of segmenting text documents that have a two-part structure, such as a problem part and a solution part. Documents of this genre include incident reports, which typically describe events relating to a problem followed by those pertaining to the solution that was tried. Segmenting such documents into their two component parts would render them usable in knowledge reuse frameworks such as Case-Based Reasoning. This segmentation problem presents a hard case for traditional text segmentation due to the lexical inter-relatedness of the segments. We develop a two-part segmentation technique that can harness a corpus of similar documents to model the behavior of the two segments and their inter-relatedness, using language models and translation models respectively. In particular, we use separate language models for the problem and solution segment types, whereas the inter-relatedness between segment types is modeled using an IBM Model 1 translation model. We model documents as being generated starting from the problem part, which comprises words sampled from the problem language model, followed by the solution part, whose words are sampled either from the solution language model or from a translation model conditioned on the words already chosen in the problem part. We show, through an extensive set of experiments on real-world data, that our approach outperforms state-of-the-art text segmentation algorithms in the accuracy of segmentation, and that such improved accuracy translates well to improved usability in Case-Based Reasoning systems. We also analyze the robustness of our technique to varying amounts and types of noise and empirically illustrate that our technique is quite noise tolerant, degrading gracefully with increasing amounts of noise.
dc.language.iso: eng
dc.publisher: ACM
dc.rights: https://creativecommons.org/licenses/by-nc-nd/4.0
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Text
dc.subject: Segmentation
dc.subject: Language models
dc.subject: Translation models
dc.title: Two-part segmentation of text documents.
dc.type: Conference publications
dc.publisher.uri: http://dx.doi.org/10.1145/2396761.2396862
dcterms.publicationdate: 2012-10-29
refterms.accessException: NA
refterms.dateDeposit: 2016-09-21
refterms.dateFCA: 2016-09-21
refterms.dateFCD: 2016-09-21
refterms.dateFreeToDownload: 2016-09-21
refterms.dateFreeToRead: 2016-09-21
refterms.dateToSearch: 2016-09-21
refterms.depositException: NA
refterms.panel: Unspecified
refterms.technicalException: NA
refterms.version: AM
rioxxterms.publicationdate: 2012-10-29
rioxxterms.type: Conference Paper/Proceeding/Abstract
rioxxterms.version: AM
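The abstract above describes a generative story: problem words are drawn from a problem language model, and each solution word is drawn either from a solution language model or from an IBM Model 1 style translation table conditioned on a problem word already generated. The following is a minimal sketch of that sampling process; all vocabularies, probabilities, and the mixing weight `lambda_lm` are toy values invented for illustration, not the distributions learned in the paper.

```python
import random

random.seed(0)

# Toy unigram language models for each segment type (assumed values).
problem_lm = {"printer": 0.4, "jams": 0.3, "error": 0.3}
solution_lm = {"replace": 0.5, "restart": 0.3, "cartridge": 0.2}

# IBM Model 1 style translation table: P(solution word | problem word).
translation = {
    "printer": {"cartridge": 0.6, "restart": 0.4},
    "jams":    {"replace": 0.7, "cartridge": 0.3},
    "error":   {"restart": 0.8, "replace": 0.2},
}

def sample(dist):
    """Draw one word from a {word: probability} distribution."""
    r, acc = random.random(), 0.0
    for word, p in dist.items():
        acc += p
        if r <= acc:
            return word
    return word  # guard against floating-point rounding

def generate(n_problem=3, n_solution=3, lambda_lm=0.5):
    # Problem part: every word comes from the problem language model.
    problem = [sample(problem_lm) for _ in range(n_problem)]
    # Solution part: each word comes either from the solution language
    # model, or from the translation model conditioned on a word
    # already chosen for the problem part.
    solution = []
    for _ in range(n_solution):
        if random.random() < lambda_lm:
            solution.append(sample(solution_lm))
        else:
            src = random.choice(problem)
            solution.append(sample(translation[src]))
    return problem, solution

problem, solution = generate()
print("PROBLEM: ", " ".join(problem))
print("SOLUTION:", " ".join(solution))
```

Segmentation in the paper works in the opposite direction: given a document, it searches for the split point that makes this generative story most likely, which is why the two language models and the translation table must first be estimated from a corpus of similar documents.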


Except where otherwise noted, this item's license is described as https://creativecommons.org/licenses/by-nc-nd/4.0