
CN110969165B - Handwritten character recognition method, handwritten character recognition device, electronic equipment and storage medium - Google Patents

Info

Publication number
CN110969165B
Authority
CN
China
Prior art keywords
neural network
character
template
pictures
depth matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911210562.8A
Other languages
Chinese (zh)
Other versions
CN110969165A (en)
Inventor
刘毅
李志远
郭晓洲
龚国良
鲁华祥
边昳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Semiconductors of CAS
Original Assignee
Institute of Semiconductors of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Semiconductors of CAS filed Critical Institute of Semiconductors of CAS
Priority to CN201911210562.8A priority Critical patent/CN110969165B/en
Publication of CN110969165A publication Critical patent/CN110969165A/en
Application granted granted Critical
Publication of CN110969165B publication Critical patent/CN110969165B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/148 Segmentation of character regions
    • G06V30/153 Segmentation of character regions using recognition of characters or words
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Character Discrimination (AREA)

Abstract

A handwritten character recognition method, applied in the field of computer technology, comprises the following steps: obtaining a handwritten character picture to be recognized; generating a template character picture containing the same character as the handwritten character picture to be recognized; inputting the template character picture into a depth matching neural network so that the depth matching neural network extracts the template character features of the template character picture; and inputting the handwritten character picture to be recognized into the depth matching neural network so that the depth matching neural network recognizes it according to the template character features. The application also discloses a handwritten character recognition device, an electronic device, and a storage medium, which can effectively recognize handwritten characters.

Description

Handwritten character recognition method, handwritten character recognition device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and apparatus for recognizing handwritten characters, an electronic device, and a storage medium.
Background
Because of its wide application in optical character recognition systems for photographed documents, checks, forms, certificates, postal envelopes, tickets, and manuscripts, as well as in handwriting input devices, handwritten Chinese character recognition has been an important research field since the 1980s. A conventional character recognition system mainly comprises preprocessing, feature extraction, and classification: it extracts hand-designed features such as structural and statistical features and improves model accuracy with refined classifiers such as quadratic discriminant functions, but still falls far behind human performance. In recent years, deep convolutional neural networks, benefiting from the explosive development of computing power, massive training data, and better training techniques, have made significant progress in many computer vision tasks. At present, methods based on deep convolutional neural networks have become the new approach to handwritten Chinese character recognition.
Most Chinese character recognition methods focus on a balanced dataset containing the 3755 characters commonly used in the GB2312-80 standard first-level set, with hundreds of samples per character. All test characters appear during training, which is called a closed-set recognition problem. However, a more complete character set covers about 7000 characters of modern Chinese text, and the number of characters in historical and academic documents exceeds 54000, corresponding to an open-set recognition problem. To obtain satisfactory recognition performance, the training samples per character must be sufficient, especially for methods based on deep convolutional neural networks. Therefore, current methods can only handle a limited number of characters satisfactorily, which severely restricts the application range of handwritten Chinese character recognition.
Disclosure of Invention
The main object of the present application is to provide a handwritten character recognition method, apparatus, electronic device, and storage medium, aiming to solve the above problems in the prior art.
To achieve the above object, a first aspect of the embodiments of the present application provides a handwritten character recognition method, including:
acquiring a handwritten character picture to be identified;
generating a template character picture containing the same character as the handwritten character picture to be recognized;
inputting the template character picture to a depth matching neural network so that the depth matching neural network extracts template character features of the template character picture;
and inputting the handwritten character picture to be recognized to the depth matching neural network, so that the depth matching neural network recognizes the handwritten character picture to be recognized according to the template character features.
Further, before the template character picture is input to the depth matching neural network, the method includes:
constructing a deep convolutional neural network;
inputting a sample set to the deep convolutional neural network, and training the deep convolutional neural network with a stochastic gradient descent method by back-propagating the cross entropy loss gradient, wherein the sample set comprises a subset of all template character pictures and handwritten character pictures with a batch size of N;
when the error between the template character pictures and the handwritten character pictures in the sample set converges, stopping training of the deep convolutional neural network, obtaining the depth matching neural network, and storing the parameters of all layers in the depth matching neural network;
and updating parameters of the Softmax classification layer in the depth matching neural network by using all template character pictures and the handwriting character pictures with the batch size of N.
Further, the updating of the parameters of the Softmax classification layer in the depth matching neural network by using all template character pictures and the handwritten character pictures with a batch size of N comprises:
inputting all the template character pictures into the depth matching neural network, extracting the characteristics of all the template character pictures, and carrying out L2 regularization on the characteristics to obtain template characteristics;
inputting the handwritten character pictures with the batch size of N into the depth matching neural network, and extracting the characteristics of the handwritten character pictures to obtain handwritten characteristics;
updating parameters of a Softmax classification layer in the depth matching neural network according to the template characteristics and the handwriting characteristics;
wherein the template feature is φ(t_j), the handwriting feature is φ(x_j), and the parameter of the Softmax classification layer in the depth matching neural network is M; then:
M = φ(t_j)^T φ(x_j).
further, the deep convolutional neural network comprises a convolutional layer, a max pooling layer, a full connection layer and a Softmax classification layer;
the convolution kernels of the convolutional layers are 3×3 with a convolution stride of 1, the convolution mode is SAME, ReLU is adopted as the activation function, and batch normalization is not used;
the pooling kernel of the max pooling layer is 3×3 with a stride of 2;
the fully connected layer has 128 channels, and dropout is added after the fully connected layer on the branch that extracts features from the handwritten character picture, to avoid overfitting.
Further, the weights of the deep convolutional neural network are shared.
A second aspect of the embodiments of the present application provides a handwritten character recognition apparatus, including:
the acquisition module is used for acquiring the handwritten character picture to be identified;
the generation module is used for generating a template character picture containing the same character as the handwritten character picture to be recognized;
the input module is used for inputting the template character picture to a depth matching neural network so that the depth matching neural network extracts template character features of the template character picture;
the recognition module is used for inputting the hand-written character picture to be recognized into the depth matching neural network so that the depth matching neural network recognizes the hand-written character picture to be recognized according to the template character features.
Further, the apparatus further comprises:
the construction module is used for constructing a deep convolutional neural network;
the training module is used for inputting a sample set into the deep convolutional neural network and training the deep convolutional neural network with a stochastic gradient descent method by back-propagating the cross entropy loss gradient, wherein the sample set comprises a subset of all template character pictures and handwritten character pictures with a batch size of N;
the storage module is used for stopping training of the deep convolutional neural network when the error between the template character pictures and the handwritten character pictures converges, obtaining the depth matching neural network, and storing the parameters of each layer in the depth matching neural network;
and the updating module is used for updating parameters of the Softmax classification layer in the depth matching neural network by using all template character pictures and the handwriting character pictures with the batch size of N.
Further, the updating module includes:
the first extraction submodule is used for inputting all template character pictures into the depth matching neural network, extracting the characteristics of all the template character pictures, and carrying out L2 regularization on the characteristics to obtain template characteristics;
the second extraction submodule is used for inputting the handwriting character pictures with the batch size of N into the depth matching neural network, extracting the characteristics of handwriting character picture samples and obtaining handwriting characteristics;
the updating sub-module is used for updating parameters of a Softmax classification layer in the depth matching neural network according to the template characteristics and the handwriting characteristics;
wherein the template feature is φ(t_j), the handwriting feature is φ(x_j), and the parameter of the Softmax classification layer in the depth matching neural network is M; then:
M = φ(t_j)^T φ(x_j).
A third aspect of the embodiments of the present application provides an electronic device, including:
a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the handwritten character recognition method provided in the first aspect of the embodiments of the present application when executing the program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the handwritten character recognition method provided in the first aspect of the embodiments of the present application.
As can be seen from the foregoing embodiments of the present application, the handwriting character recognition method, apparatus, electronic device and storage medium provided by the present application may achieve the following beneficial effects:
1. The method generalizes well to characters that have not been learned and is robust to different training character subsets: when 500 Chinese character classes are randomly selected for training and the corresponding template character sets and datasets are generated, the recognition accuracy on new, unlearned characters reaches about 71.5%; with 1000 classes it reaches about 83.5%, and with 2000 classes about 91.6%.
2. The method performs similarly well with Chinese character templates of different styles, showing high robustness to the choice of template style.
3. The matching network achieves performance similar to that of a CNN-based classifier, with an accuracy loss of only 0.17%.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a handwritten character recognition method according to an embodiment of the present application;
FIG. 2 is a flow chart of a method for recognizing handwritten characters according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a deep convolutional neural network in a method for recognizing handwritten characters according to an embodiment of the present application;
FIG. 4 is a schematic diagram of training a deep convolutional neural network in a handwritten character recognition method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a handwriting recognition device according to an embodiment of the present application;
fig. 6 shows a schematic diagram of a hardware structure of an electronic device.
Detailed Description
In order to make the application objects, features and advantages of the present application more obvious and understandable, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Referring to fig. 1, fig. 1 is a flowchart of a handwritten character recognition method according to an embodiment of the present application. The method may be applied to an electronic device, where the electronic device includes: devices capable of data processing while mobile, such as mobile phones, tablet computers, portable computers, smart watches, and smart glasses, as well as devices capable of data processing at fixed locations, such as desktop computers, all-in-one machines, and smart televisions. The method mainly comprises the following steps:
S101, acquiring a handwritten character picture to be recognized;
The handwritten character picture may be written by the user on a touch screen with a finger or a stylus, or randomly generated by the system according to human writing habits; it carries the personalized writing characteristics of the user.
S102, generating a template character picture which is the same as the hand-written character picture character to be recognized;
the template character picture refers to a character picture having a specific font.
S103, inputting the template character picture to a depth matching neural network so that the depth matching neural network extracts template character features of the template character picture;
In one embodiment of the present application, before step S103, the method further includes:
constructing a deep convolutional neural network;
the method comprises the steps of inputting a sample set to the deep convolutional neural network, training the deep convolutional neural network by back propagation of cross entropy loss gradients by adopting a random gradient descent method, wherein the sample set comprises a subset of all template character pictures and handwritten character pictures with a batch size of N.
When the error between the template character pictures and the handwritten character pictures in the sample set converges, training of the deep convolutional neural network is stopped, the depth matching neural network is obtained, and the parameters of all layers in the depth matching neural network are stored.
And updating parameters of a Softmax classification layer in the depth matching neural network by using all template character pictures and the handwriting character pictures with the batch size of N.
In one embodiment of the present application, updating parameters of the Softmax classification layer in the depth matching neural network by using all template character pictures and the handwriting character pictures with a batch size of N includes:
inputting all the template character pictures into the depth matching neural network, extracting the characteristics of all the template character pictures, and carrying out L2 regularization on the characteristics to obtain template characteristics;
inputting the handwritten character pictures with the batch size of N into the depth matching neural network, and extracting the characteristics of the handwritten character pictures to obtain handwritten characteristics;
updating parameters of a Softmax classification layer in the depth matching neural network according to the template characteristics and the handwriting characteristics;
wherein, let the template feature be φ(t_j), the handwriting feature be φ(x_j), and the parameter of the Softmax classification layer in the depth matching neural network be M; then:
M = φ(t_j)^T φ(x_j).
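To make the role of M concrete, the following minimal Python (PyTorch) sketch treats the stacked template features as the Softmax-layer weights. The patent provides no code, so the function and tensor names are illustrative assumptions; the shapes (C template classes, 128-dimensional features) follow the description above.

```python
# Minimal sketch (not from the patent): using the template features as the
# Softmax-layer parameters, the logit of class j for an input x is
# phi(t_j)^T phi(x). Assumed shapes: tpl_feat (C, 128), hw_feat (N, 128).
import torch

def softmax_over_templates(hw_feat: torch.Tensor, tpl_feat: torch.Tensor) -> torch.Tensor:
    logits = hw_feat @ tpl_feat.t()   # entry (i, j) = phi(t_j)^T phi(x_i), i.e. M
    return logits.softmax(dim=1)      # class probabilities over the C templates
```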
In one embodiment of the present application, the weight-shared deep convolutional neural network includes convolutional layers, max pooling layers, a fully connected layer, and a Softmax classification layer;
the convolution kernels of the convolutional layers are 3×3 with a convolution stride of 1, the convolution mode is SAME, ReLU is adopted as the activation function, and batch normalization is not used;
the pooling kernel of the max pooling layer is 3×3 with a stride of 2;
the fully connected layer has 128 channels, and dropout is added after the fully connected layer on the branch that extracts features from the handwritten character picture, to avoid overfitting.
S104, inputting the handwritten character picture to be recognized into the depth matching neural network, so that the depth matching neural network recognizes the handwritten character picture to be recognized according to the template character features.
In the embodiment of the present application, a handwritten character picture to be recognized is obtained and a template character picture containing the same character is generated; the template character picture is input to the depth matching neural network so that it extracts the template character features, and the handwritten character picture is input to the depth matching neural network so that it is recognized according to the template character features. Handwritten characters can thus be recognized effectively.
Referring to fig. 2, fig. 2 is a flowchart of a handwritten character recognition method according to an embodiment of the present application, where the method may be applied to an electronic device, and the electronic device includes: a cell phone, tablet (Portable Android Device, PAD), notebook, personal digital assistant (Personal Digital Assistant, PDA), etc., the method comprising the steps of:
S1: constructing a deep convolutional neural network;
Specifically, the structure of the deep convolutional neural network is designed and the parameters of the convolutional and pooling layers are set: all convolutional layers use 3×3 kernels with a stride of 1, the pooling layers use 3×3 windows with a stride of 2, and the last two layers are a fully connected layer and a Softmax classification layer. Each of the first three convolutional layers is followed by a max pooling layer, the next two convolutional layers are followed by one max pooling layer, and the last two convolutional layers are followed by the fully connected layer. The first three convolutional layers have 16, 32, and 64 channels respectively, the two middle convolutional layers have 128 channels each, the last two convolutional layers have 256 channels each, and each pooling layer has the same number of channels as the preceding convolutional layer.
In one embodiment of the present application, the designed deep convolutional neural network includes seven convolutional layers, five max pooling layers, and one fully connected layer. The convolution kernel of each convolutional layer is 3×3, the convolution stride is set to 1, and the convolution mode is set to SAME, i.e., a ring of zeros is padded around the feature map, ensuring that the feature map keeps its size after passing through a convolutional layer. The pooling kernel of the max pooling layers is 3×3 with a stride of 2, so the width and height of the feature map are halved after pooling. The fully connected layer has 128 channels. All convolutional layers use ReLU as the activation function and do not use batch normalization, and the branch that extracts features from handwritten character pictures adds dropout after the fully connected layer to avoid overfitting. The overall structure of the deep convolutional neural network is shown in fig. 3: Input-16C3-MP3-32C3-MP3-64C3-MP3-128C3-128C3-MP3-256C3-256C3-128FC.
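For readability, here is a minimal PyTorch sketch of this backbone; the patent provides no code, so the class name, dropout rate, and padding details are assumptions. The text above counts five max pooling layers while the layer string lists four, so the sketch follows the layer string, which leaves a 4×4×256 map before the fully connected layer for a 64×64 input.

```python
import torch
import torch.nn as nn

def conv3(cin, cout):
    # 3x3 kernel, stride 1, SAME padding, ReLU activation, no batch normalization
    return nn.Sequential(
        nn.Conv2d(cin, cout, kernel_size=3, stride=1, padding=1),
        nn.ReLU(inplace=True),
    )

class MatchingBackbone(nn.Module):
    """Sketch of Input-16C3-MP3-32C3-MP3-64C3-MP3-128C3-128C3-MP3-256C3-256C3-128FC
    (names and the dropout rate are assumed, not given in the patent)."""

    def __init__(self, feat_dim=128, p_drop=0.5):
        super().__init__()
        def pool():
            # 3x3 pooling window, stride 2 (padding keeps spatial sizes even)
            return nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.features = nn.Sequential(
            conv3(1, 16), pool(),
            conv3(16, 32), pool(),
            conv3(32, 64), pool(),
            conv3(64, 128), conv3(128, 128), pool(),
            conv3(128, 256), conv3(256, 256),
        )
        # a 64x64 input is halved by each of the four poolings: 64 -> 32 -> 16 -> 8 -> 4
        self.fc = nn.Linear(4 * 4 * 256, feat_dim)
        self.dropout = nn.Dropout(p_drop)  # applied on the handwriting branch only

    def forward(self, x, handwriting=True):
        h = self.features(x).flatten(1)
        h = self.fc(h)
        return self.dropout(h) if handwriting else h
```

Calling the same module on both template and handwriting pictures realizes the weight sharing described in the text; only the handwriting branch passes through dropout.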
S2: training a deep convolutional neural network to extract the characteristics of the template character picture and the handwritten character picture;
It can be appreciated that, during training, a portion of the template character pictures is randomly selected in each training iteration to reduce the amount of data; that is, a subset T' of the template character pictures is randomly selected as input. Then the labels of the handwritten character pictures currently used for training are checked, repeated labels are removed, and handwritten character pictures with N distinct labels (the batch size) are randomly selected. As shown in fig. 4, the handwritten character pictures with a batch size of N and the template character pictures in the subset T' are input to the deep convolutional neural network for training; the objective of training is to minimize the following cross entropy loss function:
J(φ) = -(1/N) Σ_{i=1}^{N} log [ exp(φ(t_{y_i})^T φ(x_i)) / Σ_{t_j ∈ T'} exp(φ(t_j)^T φ(x_i)) ]

where J(φ) is the cross entropy loss of φ; x_i is the i-th handwritten character picture and φ(x_i) is its feature extracted by the deep convolutional neural network; t_{y_i} is the template character picture for label y_i and φ(t_{y_i}) is its feature; T' is the sampled subset of the 3755 template character pictures; and φ(t_j) is the feature extracted by the deep convolutional neural network for the j-th template character picture t_j.
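Under this reconstruction, the loss is an ordinary softmax cross-entropy whose logits are inner products between handwriting features and the features of the sampled template subset. A short sketch under those assumptions (tensor names and shapes are illustrative):

```python
# Sketch of the matching cross-entropy loss as reconstructed above.
# hw_feat: (N, d) features of the batch of handwritten pictures;
# tpl_feat: (|T'|, d) features of the sampled template subset T';
# labels: (N,) index of each sample's template within T'.
import torch
import torch.nn.functional as F

def matching_loss(hw_feat, tpl_feat, labels):
    logits = hw_feat @ tpl_feat.t()         # (N, |T'|): phi(t_j)^T phi(x_i)
    return F.cross_entropy(logits, labels)  # mean of -log softmax at y_i
```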
In this process, the sample set is input into the weight-shared deep convolutional neural network, and the parameters of the deep convolutional neural network are trained with a stochastic gradient descent method by back-propagating the cross entropy loss gradient. Training stops when the error of the deep convolutional neural network on the sample set converges, the depth matching neural network is obtained, and the parameters of all layers of the depth matching neural network are stored. The sample set consists of the handwritten character pictures with a batch size of N and the template character pictures in the subset T'.
In one embodiment of the present application, the deep convolutional neural network may be trained using local softmax regression with a template batch size of 128.
In one embodiment of the present application, a collection of image data is selected from the sample set; the input character images are 64×64 and are normalized to [-1, 1]. Since batch normalization is not added after the convolutional layers, the learning rate starts at 0.01 and is divided by 10 when the validation accuracy stops improving. The deep convolutional neural network is trained using a stochastic gradient descent algorithm with a batch size of 128. This process is repeated, continually updating and optimizing the parameters of the deep convolutional neural network, to obtain the depth matching neural network.
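Putting S2 together, a sketch of the training loop under these settings follows, reusing MatchingBackbone and matching_loss from the sketches above. sample_batch is a hypothetical stand-in for the real data pipeline: it just returns random tensors with the documented shapes and the [-1, 1] range, and the iteration count is assumed (the learning-rate schedule and validation pass are omitted).

```python
# Hedged training-loop sketch: SGD, batch size 128, initial learning rate 0.01.
import torch

def sample_batch(n=128, n_templates=500):
    # Hypothetical loader: random stand-ins for 64x64 grayscale pictures.
    hw = torch.rand(n, 1, 64, 64) * 2 - 1             # handwritten pictures
    tpl = torch.rand(n_templates, 1, 64, 64) * 2 - 1  # sampled template subset T'
    labels = torch.randint(0, n_templates, (n,))      # index of each label in T'
    return hw, labels, tpl

model = MatchingBackbone()
opt = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(1000):                        # iteration count assumed
    hw, labels, tpl = sample_batch()
    hw_feat = model(hw, handwriting=True)       # dropout on this branch
    tpl_feat = model(tpl, handwriting=False)    # shared weights, no dropout
    loss = matching_loss(hw_feat, tpl_feat, labels)
    opt.zero_grad()
    loss.backward()                             # back-propagate the loss gradient
    opt.step()
```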
S3: inputting all the template character pictures into a depth matching neural network for testing;
During testing, all template character pictures are input into the depth matching neural network, their features are extracted and L2-regularized, and the regularized features are stored in the depth matching neural network. The full set of template character pictures covers the 3755 characters commonly used in the GB2312-80 standard first-level set.
S4: updating parameters of the Softmax classification layer by utilizing the characteristics of the template character picture and the handwriting character picture;
Then, during testing, the handwritten character pictures with a batch size of N are input into the depth matching neural network and their features are extracted. Each handwriting feature is multiplied with the stored template features according to the formula φ(t_j)^T φ(x_j), and the result is integrated as the parameters of the Softmax classification layer. The training and inference processes are shown in fig. 4; the deep neural network in the training process in fig. 4 refers to the deep convolutional neural network, and the deep neural network in the testing process refers to the depth matching neural network.
S5: when the handwritten character picture needs to be identified, generating a template character picture which is the same as the handwritten character picture, inputting the template character picture into a depth matching network to extract features, and then inputting the handwritten character picture into the depth matching network for identification.
The same template character generating machine as used in training generates a template character picture containing the same character as the handwritten character picture; the template character picture is input into the depth matching neural network to extract its features, and the resulting parameters are attached to the Softmax classification layer of the network. The handwritten character picture is then input into the depth matching neural network for forward reasoning, so as to recognize the newly appearing character.
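An end-to-end sketch of this inference path (S3 to S5), again reusing MatchingBackbone from above; the placeholder tensors stand in for pictures produced by the template character generating machine and for the user's handwriting, which the patent does not specify in code form.

```python
# Inference sketch: extract and L2-regularize all template features, plug them
# in as the Softmax-layer parameters, then classify a handwritten picture by
# forward reasoning. Placeholder tensors replace real rendered pictures.
import torch
import torch.nn.functional as F

model = MatchingBackbone().eval()                # dropout disabled in eval mode
tpl_imgs = torch.rand(3755, 1, 64, 64) * 2 - 1   # stand-in: 3755 rendered templates
hw_img = torch.rand(1, 1, 64, 64) * 2 - 1        # stand-in: picture to recognize

with torch.no_grad():
    # extract template features in chunks to bound memory, then L2-regularize;
    # the result M serves as the Softmax classification layer parameters
    feats = torch.cat([model(c, handwriting=False) for c in tpl_imgs.split(256)])
    M = F.normalize(feats, p=2, dim=1)
    logits = model(hw_img) @ M.t()   # phi(t_j)^T phi(x) for every template j
    pred = logits.argmax(dim=1)      # index of the recognized character
```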
The method is trained on the offline CASIA-HWDB1.0 and CASIA-HWDB1.1 datasets and tested on the ICDAR-2013 offline competition dataset. The recognition rate of a CNN-based classifier is 95.75%, while the matching network of the invention achieves 95.58%, an accuracy loss of only 0.17%.
When training with 500 randomly selected Chinese character classes and the corresponding generated template character sets, the recognition accuracy on new, unlearned Chinese characters reaches about 71.5%; with 1000 classes it reaches about 83.5%, and with 2000 classes about 91.6%.
In the embodiment of the present application, a handwritten character picture to be recognized is obtained and a template character picture containing the same character is generated; the template character picture is input to the depth matching neural network so that it extracts the template character features, and the handwritten character picture is input to the depth matching neural network so that it is recognized according to the template character features. Handwritten characters can thus be recognized effectively.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a handwritten character recognition apparatus according to an embodiment of the present application, where the apparatus may be built in an electronic device, and the apparatus mainly includes:
an acquisition module 201, configured to acquire a handwritten character picture to be identified;
a generating module 202, configured to generate a template character picture containing the same character as the handwritten character picture to be recognized;
the input module 203 is configured to input the template character picture to a depth matching neural network, so that the depth matching neural network extracts template character features of the template character picture;
the recognition module 204 is configured to input the handwritten character picture to be recognized to the depth matching neural network, so that the depth matching neural network recognizes the handwritten character picture to be recognized according to the template character features.
In one embodiment of the present application, the apparatus further comprises:
the construction module is used for constructing a deep convolutional neural network;
the training module is used for inputting a sample set into the deep convolutional neural network, training the deep convolutional neural network by adopting a random gradient descent method through counter propagation cross entropy loss gradients, wherein the sample set comprises a subset of all template character pictures and handwritten character pictures with the batch size of N;
the storage module is used for stopping training of the deep convolutional neural network when the error between the template character picture samples and the handwritten character picture samples converges, obtaining the depth matching neural network, and storing the parameters of each layer in the depth matching neural network;
and the updating module is used for updating the parameters of the Softmax classification layer in the depth matching neural network by using all template character pictures and the handwritten character pictures with a batch size of N.
In one embodiment of the present application, the update module includes:
the first extraction submodule is used for inputting the template character picture to the depth matching neural network, extracting the characteristics of the template character picture, and carrying out L2 regularization on the characteristics to obtain template characteristics;
the second extraction submodule is used for inputting the handwritten character pictures with the batch size of N into the depth matching neural network, extracting the characteristics of the handwritten character pictures and obtaining handwriting characteristics;
the updating sub-module is used for updating parameters of a Softmax classification layer in the depth matching neural network according to the template characteristics and the handwriting characteristics;
wherein, let the template feature be φ(t_j), the handwriting feature be φ(x_j), and the parameter of the Softmax classification layer in the depth matching neural network be M; then:
M = φ(t_j)^T φ(x_j).
In one embodiment of the present application, the deep convolutional neural network comprises convolutional layers, max pooling layers, a fully connected layer, and a Softmax classification layer;
the convolution kernels of the convolutional layers are 3×3 with a convolution stride of 1, the convolution mode is SAME, ReLU is adopted as the activation function, and batch normalization is not used;
the pooling kernel of the max pooling layer is 3×3 with a stride of 2;
the fully connected layer has 128 channels, and dropout is added after the fully connected layer on the branch that extracts features from the handwritten character picture, to avoid overfitting.
In one embodiment of the present application, the weights of the deep convolutional neural network are shared.
In the embodiment of the present application, a handwritten character picture to be recognized is obtained and a template character picture containing the same character is generated; the template character picture is input to the depth matching neural network so that it extracts the template character features, and the handwritten character picture is input to the depth matching neural network so that it is recognized according to the template character features. Handwritten characters can thus be recognized effectively.
Further, the electronic device includes: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the handwritten character recognition method described in the embodiments shown in figs. 1 to 4 when executing the computer program.
The embodiments of the present application also provide a computer-readable storage medium, which may be located in the electronic device of the above embodiments, for example as a storage unit arranged in a main control chip or a data acquisition chip. The computer-readable storage medium stores a computer program which, when executed by a processor, implements the handwritten character recognition method described in the embodiments shown in figs. 1 to 4.
By way of example, the electronic device may be any of various types of mobile or portable computer system equipment that performs wireless communications. In particular, the electronic apparatus may be a mobile phone or smartphone (e.g., an iPhone(TM)-based phone), a portable game device (e.g., a Nintendo DS(TM), a PlayStation Portable(TM), a Gameboy Advance(TM), an iPhone(TM)), a laptop, a PDA, a portable internet device, a music player, a data storage device, or another handheld device such as a watch, headphones, a pendant, or an earpiece; the electronic apparatus may also be another wearable device (e.g., a head-mounted device (HMD) such as electronic glasses, electronic clothing, an electronic bracelet, an electronic necklace, an electronic tattoo, or a smart watch).
The electronic device may also be any of a number of electronic devices including, but not limited to, cellular telephones, smart phones, other wireless communication devices, personal digital assistants (PDAs), audio players and other media players, music recorders, video recorders, cameras and other media recorders, radios, medical devices, vehicle transportation equipment, calculators, programmable remote controls, pagers, laptop computers, desktop computers, printers, netbooks, portable multimedia players (PMPs), MPEG-1/MPEG-2 audio layer 3 (MP3) players, portable medical devices, digital cameras, and combinations thereof.
In some cases, the electronic device may perform a variety of functions (e.g., playing music, displaying video, storing pictures, and receiving and sending phone calls). The electronic apparatus may be a portable device such as a cellular telephone, media player, other handheld device, wristwatch device, pendant device, earpiece device, or other compact portable device, if desired.
As shown in fig. 6, the electronic device 10 may include control circuitry, which may include storage and processing circuitry 30. The storage and processing circuitry 30 may include memory, such as hard drive memory, non-volatile memory (e.g., flash memory or other electrically erasable programmable memory used to form solid-state drives, etc.), volatile memory (e.g., static or dynamic random access memory, etc.), and the like; embodiments of the present application are not limited in this respect. Processing circuitry in the storage and processing circuitry 30 may be used to control the operation of the electronic device 10. The processing circuitry may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application-specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuitry 30 may be used to run software in the electronic device 10, such as internet browsing applications, voice over internet protocol (Voice over Internet Protocol, VOIP) telephone call applications, email applications, media playing applications, operating system functions, and the like. Such software may be used to perform some control operations, such as image acquisition based on a camera, ambient light measurement based on an ambient light sensor, proximity sensor measurement based on a proximity sensor, information display functions implemented based on status indicators such as status indicators of light emitting diodes, touch event detection based on a touch sensor, functions associated with displaying information on multiple (e.g., layered) displays, operations associated with performing wireless communication functions, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button press event data, and other functions in electronic device 10, to name a few.
The electronic device 10 may also include an input-output circuit 42. The input-output circuit 42 enables the electronic device 10 to input and output data, i.e., it allows the electronic device 10 to receive data from external devices and to output data from the electronic device 10 to external devices. The input-output circuit 42 may further include a sensor 32. The sensor 32 may include an ambient light sensor, a proximity sensor based on light or capacitance, a touch sensor (e.g., an optical touch sensor and/or a capacitive touch sensor, where the touch sensor may be part of a touch display screen or used independently as a touch sensor structure), an acceleration sensor, and other sensors.
The input-output circuitry 42 may also include one or more displays, such as the display 14. The display 14 may comprise one or a combination of a liquid crystal display, an organic light emitting diode display, an electronic ink display, a plasma display, and displays using other display technologies. The display 14 may include an array of touch sensors (i.e., the display 14 may be a touch screen display). The touch sensor may be a capacitive touch sensor formed from an array of transparent touch sensor electrodes, such as indium tin oxide (ITO) electrodes, or a touch sensor formed using other touch technologies, such as acoustic wave touch, pressure-sensitive touch, resistive touch, or optical touch; embodiments of the present application are not limited in this respect.
The electronic device 10 may also include an audio component 36. Audio component 36 may be used to provide audio input and output functionality for electronic device 10. The audio components 36 in the electronic device 10 may include speakers, microphones, buzzers, tone generators, and other components for generating and detecting sound.
Communication circuitry 38 may be used to provide electronic device 10 with the ability to communicate with external devices. The communication circuitry 38 may include analog and digital input-output interface circuitry, and wireless communication circuitry based on radio frequency signals and/or optical signals. The wireless communication circuitry in the communication circuitry 38 may include radio frequency transceiver circuitry, power amplifier circuitry, low noise amplifiers, switches, filters, and antennas. For example, the wireless communication circuitry in the communication circuitry 38 may include circuitry for supporting near field communication (Near Field Communication, NFC) by transmitting and receiving near field coupled electromagnetic signals. For example, the communication circuit 38 may include a near field communication antenna and a near field communication transceiver. The communication circuitry 38 may also include a cellular telephone transceiver and antenna, a wireless local area network transceiver circuit and antenna, and the like.
The electronic device 10 may further include a battery, power management circuitry, and other input-output units 40. The input-output unit 40 may include buttons, levers, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes, and other status indicators, etc.
A user may control the operation of the electronic device 10 by inputting commands through the input-output circuit 42 and may use the output data of the input-output circuit 42 to effect receipt of status information and other outputs from the electronic device 10.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
It should be noted that basic units of the deep convolutional neural network not described in the drawings or the text take forms known to those skilled in the art and are not described in detail. Furthermore, the above definitions of the elements and methods are not limited to the specific structures, shapes, or modes mentioned in the embodiments, which may be simply modified or replaced by those of ordinary skill in the art.
It should also be noted that the examples may provide parameters with particular values, but these parameters need not be exactly equal to the corresponding values; they may approximate the corresponding values within acceptable error tolerances or design constraints. Unless specifically described or a step must occur in a given sequence, the order of the above steps is not limited to the list above and may be changed or rearranged according to the desired design. In addition, the above embodiments may be combined with each other or with other embodiments based on design and reliability considerations, i.e., the technical features of different embodiments may be freely combined to form more embodiments.
The foregoing describes the handwritten character recognition method, apparatus, electronic device, and storage medium provided in the present application. Those skilled in the art may make changes in the specific implementation and application scope according to the ideas of the embodiments of the present application; therefore, the content of this specification should not be construed as limiting the present application.

Claims (8)

1. A method of handwriting character recognition, comprising:
constructing a deep convolutional neural network;
inputting a sample set to the deep convolutional neural network, and training the deep convolutional neural network with a stochastic gradient descent method by back-propagating the cross entropy loss gradient, wherein the sample set comprises a subset of all template character pictures and handwritten character pictures with a batch size of N, wherein all the template character pictures are character pictures with a specific font, cover the 3755 characters commonly used in the GB2312-80 standard first-level set, and are generated by a template character generation machine;
when the error between the template character pictures and the handwritten character pictures in the sample set converges, stopping training of the deep convolutional neural network, obtaining a depth matching neural network, and storing the parameters of all layers in the depth matching neural network;
updating parameters of a Softmax classification layer in the depth matching neural network by using all template character pictures and handwriting character pictures with the batch size of N;
acquiring a handwritten character picture to be identified;
generating, using the same template character generating machine as used in training, a template character picture containing the same character as the handwritten character picture to be recognized;
inputting the template character picture to the depth matching neural network so that the depth matching neural network extracts template character features of the template character picture;
inputting the handwritten character picture to be recognized into the depth matching neural network, so that the depth matching neural network recognizes the handwritten character picture to be recognized according to the template character features;
wherein the handwritten character picture may be written by the user on a touch screen with a finger or a stylus, or randomly generated by the system according to human writing habits, and carries the personalized writing characteristics of the user.
2. The handwritten character recognition method according to claim 1, wherein updating the parameters of the Softmax classification layer in the depth matching neural network by using all template character pictures and the handwritten character pictures with a batch size of N comprises:
inputting all the template character pictures into the depth matching neural network, extracting the characteristics of all the template character pictures, and carrying out L2 regularization on the characteristics to obtain template characteristics;
inputting the handwritten character pictures with the batch size of N into the depth matching neural network, and extracting the characteristics of the handwritten character pictures to obtain handwritten characteristics;
updating parameters of a Softmax classification layer in the depth matching neural network according to the template characteristics and the handwriting characteristics;
wherein the template feature is φ(t_j), the handwriting feature is φ(x_j), and the parameter of the Softmax classification layer in the depth matching neural network is M; then:
M = φ(t_j)^T φ(x_j).
3. the handwritten character recognition method according to any of claims 1 to 2, wherein said deep convolutional neural network comprises a convolutional layer, a max pooling layer, a fully connected layer and a Softmax classification layer;
the convolution kernels of the convolutional layers are 3×3 with a convolution stride of 1, the convolution mode is SAME, ReLU is adopted as the activation function, and batch normalization is not used;
the pooling kernel of the max pooling layer is 3×3 with a stride of 2;
the fully connected layer has 128 channels, and dropout is added after the fully connected layer on the branch that extracts features from the handwritten character picture, to avoid overfitting.
4. The handwritten character recognition method according to any of claims 1 to 2, wherein the weights of the deep convolutional neural network are shared.
5. A handwritten character recognition apparatus, comprising:
the construction module is used for constructing a deep convolutional neural network;
the training module is used for inputting a sample set into the deep convolutional neural network and training the deep convolutional neural network with a stochastic gradient descent method by back-propagating the cross entropy loss gradient, wherein the sample set comprises a subset of all template character pictures and handwritten character pictures with a batch size of N, wherein all the template character pictures are character pictures with a specific font, cover the 3755 characters commonly used in the GB2312-80 standard first-level set, and are generated by a template character generation machine;
the storage module is used for stopping training of the deep convolutional neural network when the error between the template character pictures and the handwritten character pictures converges, obtaining a depth matching neural network, and storing the parameters of each layer in the depth matching neural network;
the updating module is used for updating the parameters of the Softmax classification layer in the depth matching neural network by using all template character pictures and the handwritten character pictures with a batch size of N;
the acquisition module is used for acquiring the handwritten character picture to be identified;
the generation module is used for generating, using the same template character generation machine as used in training, a template character picture containing the same character as the handwritten character picture to be recognized;
the input module is used for inputting the template character picture to the depth matching neural network so that the depth matching neural network extracts the template character features of the template character picture;
the recognition module is used for inputting the hand-written character picture to be recognized into the depth matching neural network so that the depth matching neural network recognizes the hand-written character picture to be recognized according to the template character characteristics;
wherein the handwritten character picture may be written by the user on a touch screen with a finger or a stylus, or randomly generated by the system according to human writing habits, and carries the personalized writing characteristics of the user.
6. The handwritten character recognition apparatus according to claim 5, wherein the updating module comprises:
the first extraction submodule is used for inputting all template character pictures into the depth matching neural network, extracting the characteristics of all the template character pictures, and carrying out L2 regularization on the characteristics to obtain template characteristics;
the second extraction submodule is used for inputting the handwriting character pictures with the batch size of N into the depth matching neural network, extracting the characteristics of handwriting character picture samples and obtaining handwriting characteristics;
the updating sub-module is used for updating parameters of a Softmax classification layer in the depth matching neural network according to the template characteristics and the handwriting characteristics;
wherein the template feature is φ(t_j), the handwriting feature is φ(x_j), and the parameter of the Softmax classification layer in the depth matching neural network is M; then:
M = φ(t_j)^T φ(x_j).
7. an electronic device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the handwritten character recognition method according to any of claims 1 to 4 when the computer program is executed.
8. A computer readable storage medium having stored thereon a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the handwritten character recognition method according to any one of claims 1 to 4.
CN201911210562.8A 2019-11-28 2019-11-28 Handwritten character recognition method, handwritten character recognition device, electronic equipment and storage medium Active CN110969165B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911210562.8A CN110969165B (en) 2019-11-28 2019-11-28 Handwritten character recognition method, handwritten character recognition device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911210562.8A CN110969165B (en) 2019-11-28 2019-11-28 Handwritten character recognition method, handwritten character recognition device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110969165A CN110969165A (en) 2020-04-07
CN110969165B true CN110969165B (en) 2024-04-09

Family

ID=70032485

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911210562.8A Active CN110969165B (en) 2019-11-28 2019-11-28 Handwritten character recognition method, handwritten character recognition device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110969165B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680196A (en) * 2013-11-27 2015-06-03 夏普株式会社 Handwriting character recognizing method and system
CN106408039A (en) * 2016-09-14 2017-02-15 华南理工大学 Off-line handwritten Chinese character recognition method carrying out data expansion based on deformation method
CN106951832A (en) * 2017-02-28 2017-07-14 广东数相智能科技有限公司 A kind of verification method and device based on Handwritten Digits Recognition
CN107169504A (en) * 2017-03-30 2017-09-15 湖北工业大学 A kind of hand-written character recognition method based on extension Non-linear Kernel residual error network
CN107563385A (en) * 2017-09-02 2018-01-09 西安电子科技大学 License plate character recognition method based on depth convolution production confrontation network
CN108319988A (en) * 2017-01-18 2018-07-24 华南理工大学 A kind of accelerated method of deep neural network for handwritten Kanji recognition
CN109977737A (en) * 2017-12-28 2019-07-05 新岸线(北京)科技集团有限公司 A kind of character recognition Robust Method based on Recognition with Recurrent Neural Network

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6492894B2 (en) * 2015-04-01 2019-04-03 富士通株式会社 Recognition program, recognition method, and recognition apparatus
US10521654B2 (en) * 2018-03-29 2019-12-31 Fmr Llc Recognition of handwritten characters in digital images using context-based machine learning

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680196A (en) * 2013-11-27 2015-06-03 夏普株式会社 Handwriting character recognizing method and system
CN106408039A (en) * 2016-09-14 2017-02-15 华南理工大学 Off-line handwritten Chinese character recognition method carrying out data expansion based on deformation method
CN108319988A (en) * 2017-01-18 2018-07-24 华南理工大学 A kind of accelerated method of deep neural network for handwritten Kanji recognition
CN106951832A (en) * 2017-02-28 2017-07-14 广东数相智能科技有限公司 A kind of verification method and device based on Handwritten Digits Recognition
CN107169504A (en) * 2017-03-30 2017-09-15 湖北工业大学 A kind of hand-written character recognition method based on extension Non-linear Kernel residual error network
CN107563385A (en) * 2017-09-02 2018-01-09 西安电子科技大学 License plate character recognition method based on depth convolution production confrontation network
CN109977737A (en) * 2017-12-28 2019-07-05 新岸线(北京)科技集团有限公司 A kind of character recognition Robust Method based on Recognition with Recurrent Neural Network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"基于集成学习改进的卷积神经网络的手写字符识别";黎育权;《电子技术与软件工程》;20180531;第167页 *

Also Published As

Publication number Publication date
CN110969165A (en) 2020-04-07

Similar Documents

Publication Publication Date Title
US10956771B2 (en) Image recognition method, terminal, and storage medium
US20230040146A1 (en) User device and method for creating handwriting content
CN107943860B (en) Model training method, text intention recognition method and text intention recognition device
CN111985240B (en) Named entity recognition model training method, named entity recognition method and named entity recognition device
US20180336184A1 (en) Emoji word sense disambiguation
US20120151420A1 (en) Devices, Systems, and Methods for Conveying Gesture Commands
CN110162956B (en) Method and device for determining associated account
CN112329926B (en) Quality improvement method and system for intelligent robot
CN109120781A (en) Information cuing method, electronic device and computer readable storage medium
US12008233B2 (en) Electronic device and method for generating a user-customized keypad based on usage characteristics
CN110969165B (en) Handwritten character recognition method, handwritten character recognition device, electronic equipment and storage medium
CN110866114B (en) Object behavior identification method and device and terminal equipment
US20230229245A1 (en) Emoji recommendation method of electronic device and same electronic device
CN117910478A (en) Training method of language model and text generation method
CN110096707B (en) Method, device and equipment for generating natural language and readable storage medium
CN106774983A (en) A kind of input method and equipment
CN115803747A (en) Electronic device for converting handwriting into text and method thereof
CN110795927B (en) n-gram language model reading method, device, electronic equipment and storage medium
CN113806532B (en) Training method, device, medium and equipment for metaphor sentence judgment model
CN114120044B (en) Image classification method, image classification network training method, device and electronic equipment
KR102367853B1 (en) A method of building custom studio
US12124753B2 (en) Electronic device for providing text associated with content, and operating method therefor
CN110942085B (en) Image classification method, image classification device and terminal equipment
US20230359352A1 (en) Method for providing clipboard function, and electronic device supporting same
CN117633537A (en) Training method of vocabulary recognition model, vocabulary recognition method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant