Please use this identifier to cite or link to this item: https://ruomo.lib.uom.gr/handle/7000/1172
Full metadata record
DC Field | Value | Language
dc.contributor.author | Pardis, George | -
dc.contributor.author | Diamantaras, Konstantinos I. | -
dc.contributor.author | Ougiaroglou, Stefanos | -
dc.contributor.author | Evangelidis, Georgios | -
dc.date.accessioned | 2022-08-26T08:02:40Z | -
dc.date.available | 2022-08-26T08:02:40Z | -
dc.date.issued | 2019-10-18 | -
dc.identifier | 10.1007/978-3-030-33607-3_55 | en_US
dc.identifier.isbn | 978-3-030-33606-6 | en_US
dc.identifier.isbn | 978-3-030-33607-3 | en_US
dc.identifier.issn | 0302-9743 | en_US
dc.identifier.issn | 1611-3349 | en_US
dc.identifier.uri | https://doi.org/10.1007/978-3-030-33607-3_55 | en_US
dc.identifier.uri | https://ruomo.lib.uom.gr/handle/7000/1172 | -
dc.description.abstract | Data reduction, achieved by collecting a small subset of representative prototypes from the original patterns, aims at alleviating the computational burden of training a classifier without sacrificing performance. We propose an extension of the Reduction by finding Homogeneous Clusters algorithm, which utilizes the k-means method to propose a set of homogeneous cluster centers as representative prototypes. We propose two new classifiers, which recursively produce homogeneous clusters and achieve higher performance than current homogeneous clustering methods with a significant speed-up. The key idea is the development of a tree data structure that holds the constructed clusters. Internal tree nodes consist of clustering models, while leaves correspond to homogeneous clusters where the corresponding class label is stored. Classification is performed by simply traversing the tree. The two algorithms differ in the clustering method used to build tree nodes: the first uses k-means, while the second applies EM clustering. The proposed algorithms are evaluated on a variety of datasets and compared with well-known methods. The results demonstrate very good classification performance combined with large computational savings. | en_US
dc.language.iso | en | en_US
dc.relation.ispartofseries | Lecture Notes in Computer Science | en_US
dc.rights | Attribution-NonCommercial-ShareAlike 4.0 International | *
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/4.0/ | *
dc.subject | FRASCATI::Natural sciences::Computer and information sciences | en_US
dc.subject.other | Classification | en_US
dc.subject.other | k-means | en_US
dc.subject.other | EM | en_US
dc.subject.other | Prototype Generation | en_US
dc.title | Fast Tree-Based Classification via Homogeneous Clustering | en_US
dc.contributor.department | Department of Applied Informatics (Τμήμα Εφαρμοσμένης Πληροφορικής) | en_US
dc.type | Conference Paper | en_US
local.identifier.volume | 11871 | en_US
local.identifier.firstpage | 514 | en_US
local.identifier.lastpage | 524 | en_US
local.identifier.volumetitle | Intelligent Data Engineering and Automated Learning – IDEAL 2019 | en_US
Appears in Collections:Department of Applied Informatics
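The abstract above describes a tree whose internal nodes hold clustering models and whose leaves store the class label of a homogeneous cluster, with classification performed by traversing the tree via nearest cluster centers. The following is a minimal illustrative sketch of the k-means variant of that idea, not the authors' implementation: the plain `kmeans` routine, the `Node` structure, and the guards for degenerate splits are all assumptions made for the example.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: returns cluster centers and point assignments."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign each point to its nearest center, then recompute centers
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

class Node:
    """Internal nodes hold cluster centers; leaves hold a class label."""
    def __init__(self):
        self.centers, self.children, self.label = None, [], None

def build(X, y, k=2):
    """Recursively cluster until each leaf corresponds to a homogeneous cluster."""
    node = Node()
    if len(np.unique(y)) == 1 or len(X) <= k:
        node.label = np.bincount(y).argmax()  # homogeneous (or tiny) cluster
        return node
    centers, labels = kmeans(X, k)
    if any(not np.any(labels == j) for j in range(k)):
        node.label = np.bincount(y).argmax()  # degenerate split: stop here
        return node
    node.centers = centers
    for j in range(k):
        node.children.append(build(X[labels == j], y[labels == j], k))
    return node

def classify(node, x):
    """Traverse the tree, following the nearest cluster center at each node."""
    while node.label is None:
        node = node.children[((node.centers - x) ** 2).sum(-1).argmin()]
    return node.label
```

Each recursive call strictly shrinks the cluster being split, so construction terminates; classifying a point then costs one nearest-center lookup per tree level rather than a scan over all stored prototypes, which is the source of the speed-up the abstract claims.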

Files in This Item:
File | Description | Size | Format
2019_IDEAL.pdf | | 386,15 kB | Adobe PDF


This item is licensed under a Creative Commons License.