Abstract
The accuracy of optical character recognition (OCR) has improved significantly in recent years through the use of deep learning. However, when OCR is used in real applications, the shortage of annotated images often makes training difficult. Automatic annotation methods have been proposed to solve this problem, but many of them are based on active learning and require operators to confirm the generated annotation candidates. I propose a practical automatic annotation method for binarization, one of the components of OCR. The purpose of the proposed method is to confirm the quality of annotation candidates automatically. The method consists of three simple steps. First, crop a text region from the whole image. Second, binarize the cropped image at every threshold. Third, recognize all of the binarized crops and match the recognition results against a database of correct characters. If the characters match, the cropped binary image is correctly binarized, and the method selects it as an annotation for binarization. The cropping coordinates and the correct-character database (DB) can be obtained from a practical OCR system: because users of such a system usually enter corrections for OCR misrecognitions, the system can collect the correct characters and their coordinates. The experimental results indicate that annotations generated with the proposed method can improve the performance of deep-learning-based binarization. As a result, the normalized edit distance between the recognized text and the ground-truth text is reduced by 38.56% on the Find it! receipt image dataset.
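The following is a minimal sketch of the three-step procedure described above. It is not the paper's implementation: OpenCV's global thresholding and Tesseract (via pytesseract) stand in for the unspecified binarizer and recognizer, and the crop coordinates and correct-character string are assumed to come from the OCR system's user-correction log, as the abstract describes.

```python
# Hedged sketch of the annotation procedure: crop, binarize at all
# thresholds, recognize, and keep a crop only if recognition matches
# the correct characters supplied by the OCR system's users.
import cv2
import pytesseract


def generate_binarization_annotation(image, bbox, correct_text):
    """Return a correctly binarized crop to use as an annotation, or None.

    image        -- grayscale page image (NumPy array)
    bbox         -- (x, y, w, h) crop coordinates of one text region
    correct_text -- corrected characters entered by the system's user
    """
    # Step 1: crop the text region from the whole image.
    x, y, w, h = bbox
    crop = image[y:y + h, x:x + w]

    # Step 2: binarize the crop at every possible global threshold.
    for threshold in range(256):
        _, binary = cv2.threshold(crop, threshold, 255, cv2.THRESH_BINARY)

        # Step 3: recognize the binarized crop and match the result
        # against the correct-character database entry for this region.
        recognized = pytesseract.image_to_string(binary).strip()
        if recognized == correct_text:
            # The crop is correctly binarized; select it as an annotation.
            return binary

    # No threshold produced a matching recognition result.
    return None
```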
Cite this paper
Odate, R. (2020). Automatic Annotation Method for Document Image Binarization in Real Systems. In: Palaiahnakote, S., Sanniti di Baja, G., Wang, L., Yan, W. (eds) Pattern Recognition. ACPR 2019. Lecture Notes in Computer Science, vol. 12047. Springer, Cham. https://doi.org/10.1007/978-3-030-41299-9_36