Handbook of Pattern Recognition and Computer Vision, 5th Ed., 2016
- Chi Hau Chen: Handbook of Pattern Recognition and Computer Vision, 5th Ed. World Scientific 2016, ISBN 978-981-4656-52-8
- Mariusz Flasinski: Syntactic Pattern Recognition: Paradigm Issues and Open Problems. 3-25
- Li Deng, Navdeep Jaitly: Deep Discriminative and Generative Models for Speech Pattern Recognition. 27-52
- Marco Loog, Jesse H. Krijthe, Are Charles Jensen: On Measuring and Quantifying Performance: Error Rates, Surrogate Loss, and an Example in Semi-Supervised Learning. 53-68
- Vidar V. Vikjord, Robert Jenssen: Information Theoretic Clustering Using a k-Nearest Neighbors-Based Divergence Measure. 69-88
- Laurent Heutte, Caroline Petitjean, Chesner Désir: Pruning Trees in Random Forests for Minimizing Non-Detection in Medical Imaging. 89-107
- João Paulo Papa, Willian Paraguassu Amorim, Alexandre Xavier Falcão, João Manuel R. S. Tavares: Recent Advances on Optimum-Path Forest for Data Classification: Supervised, Semi-Supervised, and Unsupervised Learning. 109-123
- Ching-Chung Li, Wen-Chyi Lin: On Curvelet-Based Texture Features for Pattern Classification. 125-139
- Bo-Yuan Feng, Ke Sun, Parmida Atighechian, Ching Y. Suen: Computer Recognition and Evaluation of Coins. 141-158
- Meng-Che Chuang, Jenq-Neng Hwang, Kresimir Williams: Supervised and Unsupervised Feature Descriptors for Error-Resilient Underwater Live Fish Recognition. 159-173
- Yi-Hsuan Yang, Ju-Chiang Wang, Yu-An Chen, Homer H. Chen: Model Adaptation for Personalized Music Emotion Recognition. 175-193
- Liyan Zhang, Dmitri V. Kalashnikov, Sharad Mehrotra: Context Assisted Person Identification for Images and Videos. 197-216
- Alan Brunton, Augusto Salazar, Timo Bolkart, Stefanie Wuhrer: Statistical Shape Spaces for 3D Data: A Review. 217-238
- Mehrsan Javan Roshtkhari, Martin D. Levine: Tracking without Appearance Descriptors. 239-254
- Ziheng Wang, Qiang Ji: Knowledge Augmented Visual Learning. 255-274
- Kaspar Riesen, Horst Bunke: Graph Edit Distance - Novel Approximation Algorithms. 275-291
- Marcus Liwicki, Volkmar Frinken, Muhammad Zeshan Afzal: Latest Developments of LSTM Neural Networks with Applications of Document Image Analysis. 293-311
- Gabriele Cavallaro, Mauro Dalla Mura, Jón Atli Benediktsson: Analyzing Remote Sensing Images with Hierarchical Morphological Representations. 313-330
- Yuan Yan Tang, Haoliang Yuan: Manifold-Based Sparse Representation for Hyperspectral Image Classification. 331-350
- Rouzbeh Maani, Sanjay Kalra, Yee-Hong Yang: A Review of Texture Classification Methods and Their Applications in Medical Image Analysis of the Brain. 351-369
- Yanbin Lu, Mina Yousefi, John Ellenberger, Richard H. Moore, Daniel B. Kopans, Adam Krzyzak, Ching Y. Suen: 3D Tomosynthesis to Detect Breast Cancer. 371-393
- Sonya Cates: Combining Representations for Improved Sketch Recognition. 397-413
- Sedat Ozer: Visual Object Recognition with Image Retrieval. 415-425
- Donavan Prieur, Eric Granger, Yvon Savaria, Claude Thibeault: Efficient Identification of Faces in Video Streams Using Low-Power Multi-Core Devices. 427-454
- Gabriele Moser, Paola Costamagna, Andrea De Giorgi, Lissy Pellaco, Andrea Trucco, Sebastiano B. Serpico: Kernel-Based Learning for Fault Detection and Identification in Fuel Cell Systems. 455-472
- Lin Gu, Antonio Robles-Kelly: Outdoor Shadow Modelling and Its Applications. 473-490
- Ivan Bogun, Eraldo Ribeiro: Fast Structured Tracker with Improved Motion Model Using Robust Kalman Filter. 491-507
- David J. Michael: Using 3D Vision for Automated Industrial Inspection. 509-520
- Xianju Wang, Xiangyun Mary Ye: Vision Challenges in Image-Based Barcode Readers. 521-538
- Matt Tanner, Matt Grimm, Harold B. Noyes: Parallel Pattern Matching Using the Automata Processor. 539-559