- research-article, August 2024
Enhancing dynamic ensemble selection: combining self-generating prototypes and meta-classifier for data classification
Neural Computing and Applications (NCAA), Volume 36, Issue 32, Pages 20295–20320. https://doi.org/10.1007/s00521-024-10237-8
Abstract: In dynamic ensemble selection (DES) techniques, the competence level of each classifier is estimated from a pool of classifiers, and only the most competent ones are selected to classify a specific test sample and predict its class labels. A ...
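The DES idea summarized in this abstract can be illustrated with a generic overall-local-accuracy (OLA) selector: for each test sample, each pool member's accuracy is estimated on the sample's nearest validation neighbors, and the locally most competent classifier makes the prediction. This is a minimal sketch of dynamic selection in general, not the paper's self-generating-prototype or meta-classifier method; all names and parameter choices below are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

# Synthetic data split into train / validation (competence region) / test.
X, y = make_classification(n_samples=600, random_state=0)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# A pool of weak learners (shallow trees with different seeds).
pool = [DecisionTreeClassifier(max_depth=3, random_state=s).fit(X_tr, y_tr)
        for s in range(10)]
nn = NearestNeighbors(n_neighbors=7).fit(X_val)

def predict_ola(x):
    """Pick the pool member with the best accuracy in x's local region."""
    _, idx = nn.kneighbors(x.reshape(1, -1))
    region_X, region_y = X_val[idx[0]], y_val[idx[0]]
    scores = [clf.score(region_X, region_y) for clf in pool]  # local competence
    best = pool[int(np.argmax(scores))]
    return best.predict(x.reshape(1, -1))[0]

preds = np.array([predict_ola(x) for x in X_te])
print("dynamic-selection accuracy:", (preds == y_te).mean())
```

In practice the competence region, the competence measure, and the pool generation strategy are exactly the design points that DES papers such as this one vary.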
- research-article, February 2024
A heuristic hybrid instance reduction approach based on adaptive relative distance and k-means clustering
The Journal of Supercomputing (JSCO), Volume 80, Issue 9, Pages 13096–13123. https://doi.org/10.1007/s11227-023-05885-x
Abstract: The k nearest neighbor (KNN) classifier is one of the well-known instance-based classifiers. Nevertheless, low efficiency in both running time and memory usage is a great challenge for the KNN classifier and its improvements, due to noise and ...
- research-article, May 2023
Prototype selection for multi-label data based on label correlation
Neural Computing and Applications (NCAA), Volume 36, Issue 5, Pages 2121–2130. https://doi.org/10.1007/s00521-023-08617-7
Abstract: In multi-label learning, the training data is typically large-scale and contains numerous noisy and redundant instances. Directly inducing a classifier with raw data can result in higher memory overhead and lower classification performance. One ...
- research-article, March 2023
CSLSEP: an ensemble pruning algorithm based on clustering soft label and sorting for facial expression recognition
Multimedia Systems (MUME), Volume 29, Issue 3, Pages 1463–1479. https://doi.org/10.1007/s00530-023-01062-5
Abstract: Applying ensemble learning to facial expression recognition is an important research field nowadays, but "all" may not be better than "many": redundant learners in the classifier pool may hinder the ensemble system's performance, so ensemble ...
- research-article, May 2023
A prototype selection technique based on relative density and density peaks clustering for k nearest neighbor classification
Abstract: The k-nearest neighbor classifier (KNN) is one of the most famous classification models due to its straightforward implementation and an error bounded by twice the Bayes error. However, it usually degrades because of noise and the high cost of computing the ...
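A classical baseline for the prototype selection problem these KNN-oriented entries address is Hart's Condensed Nearest Neighbor (CNN), which keeps a subset that still classifies every training point correctly with 1-NN. This sketch shows the generic baseline, not the relative-density/density-peaks method of the paper above; the helper name and toy data are illustrative.

```python
import numpy as np

def condensed_nn(X, y, seed=None):
    """Hart's Condensed Nearest Neighbor: grow a subset of indices until
    1-NN on the subset classifies every training point correctly."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))
    keep = [order[0]]                     # start from one random instance
    changed = True
    while changed:                        # repeat passes until stable
        changed = False
        for i in order:
            # 1-NN prediction of sample i using the current subset
            d = np.linalg.norm(X[keep] - X[i], axis=1)
            if y[keep[int(np.argmin(d))]] != y[i]:
                keep.append(i)            # misclassified -> absorb it
                changed = True
    return np.array(keep)

# Toy data: two well-separated Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
idx = condensed_nn(X, y, seed=0)
print(f"kept {len(idx)} of {len(X)} instances")
```

On separable data CNN keeps only a handful of border and seed points, which is exactly the memory/runtime reduction the papers in this list try to achieve with fewer of CNN's noise-sensitivity problems.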
- research-article, December 2022
Dual dimensionality reduction on instance-level and feature-level for multi-label data
Neural Computing and Applications (NCAA), Volume 35, Issue 35, Pages 24773–24782. https://doi.org/10.1007/s00521-022-08117-0
Abstract: The training data in multi-label learning are often high dimensional and contain a quantity of noise and redundant information, resulting in high memory overhead and low classification performance during the learning process. Therefore, ...
- research-article, December 2022
Feature space partition: a local–global approach for classification
Neural Computing and Applications (NCAA), Volume 34, Issue 24, Pages 21877–21890. https://doi.org/10.1007/s00521-022-07647-x
Abstract: We propose a local–global classification scheme in which the feature space is, in a first phase, segmented by an unsupervised algorithm, allowing, in a second phase, the application of distinct classification methods in each of the generated sub-...
- research-article, November 2022
Unsupervised instance selection via conjectural hyperrectangles
Neural Computing and Applications (NCAA), Volume 35, Issue 7, Pages 5335–5349. https://doi.org/10.1007/s00521-022-07974-z
Abstract: Machine learning algorithms spend a great deal of time processing data because they are not fast enough to handle huge data sets. Instance selection algorithms aim to tackle this problem in particular. However, even instance selection algorithms can suffer ...
- research-article, October 2022
A hybrid prototype selection-based deep learning approach for anomaly detection in industrial machines
- Rodrigo de Paula Monteiro,
- Mariela Cerrada Lozada,
- Diego Roman Cabrera Mendieta,
- René Vinicio Sánchez Loja,
- Carmelo José Albanez Bastos Filho
Expert Systems with Applications: An International Journal (EXWA), Volume 204, Issue C. https://doi.org/10.1016/j.eswa.2022.117528
Abstract: Anomaly detection in time series is an important task for many applications; e.g., the maintenance policies of rotating machines within industries strongly rely on time series monitoring. Rotating machines are vital elements within ...
Highlights:
- Learning features for anomaly detection problems may be a challenging task.
- ...
- research-article, May 2022
K-nearest neighbors rule combining prototype selection and local feature weighting for classification
Abstract: The K-Nearest Neighbors (KNN) rule is a simple yet powerful classification technique in machine learning. Nevertheless, it suffers from some drawbacks such as high memory consumption, low time efficiency, class overlapping ...
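The two ingredients this entry combines — a reduced prototype set and feature weighting — can be sketched with a distance-weighted KNN vote over weighted features. This is a generic illustration under assumed names, not the paper's learned local weights; here `w` is simply a fixed per-feature weight vector.

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, x, k=5, w=None):
    """Distance-weighted k-NN vote; `w` is an optional per-feature weight
    vector (a stand-in for learned feature weights)."""
    w = np.ones(X_train.shape[1]) if w is None else np.asarray(w, dtype=float)
    # Weighted Euclidean distance to every training (prototype) point.
    d = np.sqrt((((X_train - x) ** 2) * w).sum(axis=1))
    nearest = np.argsort(d)[:k]
    votes = {}
    for i in nearest:
        # Closer neighbors get larger voting weight (inverse distance).
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + 1.0 / (d[i] + 1e-12)
    return max(votes, key=votes.get)

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
y = np.array([0, 0, 1, 1])
print(weighted_knn_predict(X, y, np.array([0.05, 0.0]), k=3))  # → 0
```

Prototype selection would shrink `X_train` before this step; feature weighting replaces the uniform `w` with weights fitted to the local class structure.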
- Article, October 2020
A Density-Based Prototype Selection Approach
Abstract: Due to the increasing size of datasets, prototype selection techniques have been applied to reduce the computational resources involved in data mining and machine learning tasks. In this paper, we propose a density-based approach for ...
- Article, May 2020
Anomaly Detection and Prototype Selection Using Polyhedron Curvature
Abstract: We propose a novel approach to anomaly detection, called Curvature Anomaly Detection (CAD), and Kernel CAD, based on the idea of polyhedron curvature. Using the nearest neighbors of a point, we consider every data point as the vertex of a ...
- research-article, March 2020
ProLFA: Representative prototype selection for local feature aggregation
Neurocomputing (NEUROC), Volume 381, Issue C, Pages 336–347. https://doi.org/10.1016/j.neucom.2019.11.073
Highlights:
- Representative prototype selection facilitates the interpretability of aggregated representations.
- The discriminability of aggregated representations is strengthened by enforcing domain-invariant projection of bundled descriptors along ...
Abstract: Given a set of hand-crafted local features, acquiring a global representation via aggregation is a promising technique to boost computational efficiency and improve task performance. Existing feature aggregation (FA) approaches, including Bag of ...
- research-article, January 2020
Prototype Selection and Dimensionality Reduction on Multi-Label Data
CoDS COMAD 2020: Proceedings of the 7th ACM IKDD CoDS and 25th COMAD, Pages 195–199. https://doi.org/10.1145/3371158.3371184
Abstract: The multi-label classification problem is one of the most general and relevant problems in the area of classification, where each item of the evaluated dataset is associated with more than one label. This paper discusses novel algorithms for prototype ...
- research-article, October 2019
Improving the combination of results in the ensembles of prototype selectors
Neural Networks (NENE), Volume 118, Issue C, Pages 175–191. https://doi.org/10.1016/j.neunet.2019.06.013
Abstract: Prototype selection is one of the most common preprocessing tasks in data mining applications. The vast amounts of data that we must handle in practical problems render the removal of noisy, redundant or useless instances a convenient ...
Highlights:
- We propose a new method for combining prototype selection algorithms.
- The ...
- research-article, September 2019
Evolutionary prototype selection for multi-output regression
Neurocomputing (NEUROC), Volume 358, Issue C, Pages 309–320. https://doi.org/10.1016/j.neucom.2019.05.055
Highlights:
- A new prototype selection for multi-output regression data sets is presented.
- A ...
Abstract: A novel approach to prototype selection for multi-output regression data sets is presented. A multi-objective evolutionary algorithm is used to evaluate the selections using two criteria: training data set compression and prediction ...
- research-article, September 2019
How much can k-means be improved by using better initialization and repeats?
Pattern Recognition (PATT), Volume 93, Issue C, Pages 95–112. https://doi.org/10.1016/j.patcog.2019.04.014
Highlights:
- The k-means clustering algorithm can be significantly improved by using a better initialization technique and by repeating (re-starting) the algorithm.
Abstract: In this paper, we study the most important factors that deteriorate the performance of the k-means algorithm, and how much this deterioration can be overcome either by using a better initialization technique or by repeating (...
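The two remedies named in this entry's highlights — better initialization and restarts — can be sketched together: k-means++ seeding plus keeping the lowest-SSE run over several restarts. This is a minimal sketch of the standard technique, not a reimplementation of the paper's experiments; the function names and parameters are illustrative.

```python
import numpy as np

def kmeans_pp_init(X, k, rng):
    """k-means++ seeding: each new center is drawn with probability
    proportional to squared distance from the nearest chosen center."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

def kmeans(X, k, n_repeats=10, n_iter=50, seed=0):
    """Lloyd's algorithm restarted n_repeats times; keep the lowest-SSE run."""
    rng = np.random.default_rng(seed)
    best_sse, best_labels = np.inf, None
    for _ in range(n_repeats):
        C = kmeans_pp_init(X, k, rng)
        for _ in range(n_iter):
            labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
            C = np.array([X[labels == j].mean(0) if np.any(labels == j) else C[j]
                          for j in range(k)])
        sse = ((X - C[labels]) ** 2).sum()
        if sse < best_sse:
            best_sse, best_labels = sse, labels
    return best_labels, best_sse

# Three well-separated 2-D clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.2, (40, 2)) for m in (0, 2, 4)])
labels, sse = kmeans(X, k=3)
print("best SSE over repeats:", round(float(sse), 3))
```

With naive random seeding, a single run can merge two true clusters and split another; restarting and keeping the best SSE is precisely the mitigation whose cost/benefit the paper quantifies.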
- Article, June 2019
Characterization of Handwritten Signature Images in Dissimilarity Representation Space
Abstract: The offline Handwritten Signature Verification (HSV) problem can be considered to involve difficult data, since it presents imbalanced class distributions, a high number of classes, a high-dimensional feature space and a small number of learning samples. ...
- Article, May 2019
Principal Sample Analysis for Data Ranking
Abstract: Because of the ever-growing amounts of data, challenges have appeared for storage and processing, making data reduction still an important field of study. Numerosity reduction, or prototype selection, is one of the primary methods of data reduction. ...