
CN107729801A - Vehicle color recognition system based on multi-task deep convolutional neural networks - Google Patents

Vehicle color recognition system based on multi-task deep convolutional neural networks Download PDF

Info

Publication number
CN107729801A
CN107729801A (application CN201710558817.4A; granted publication CN107729801B)
Authority
CN
China
Prior art keywords
color
vehicle
license plate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710558817.4A
Other languages
Chinese (zh)
Other versions
CN107729801B (en)
Inventor
汤平
汤一平
王辉
吴越
温晓岳
柳展
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yinjiang Technology Co.,Ltd.
Original Assignee
Enjoyor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Enjoyor Co Ltd filed Critical Enjoyor Co Ltd
Priority to CN201710558817.4A priority Critical patent/CN107729801B/en
Publication of CN107729801A publication Critical patent/CN107729801A/en
Application granted granted Critical
Publication of CN107729801B publication Critical patent/CN107729801B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133: Distances to prototypes
    • G06F18/24143: Distances to neighbourhood prototypes, e.g. restricted Coulomb energy networks [RCEN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/56: Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

A vehicle color recognition system based on multi-task deep convolutional neural networks comprises a high-definition camera mounted above the road travel lane, a traffic cloud server, and a vision-based vehicle color detection subsystem. The detection subsystem includes a vehicle localization module, a license plate localization module, a plate background color recognition module, a color difference detection module, a vehicle color correction module and a vehicle color recognition module. The vehicle localization, license plate localization and vehicle color recognition modules share the same Faster R-CNN deep convolutional neural network, which rapidly segments the vehicles on the road, further segments the license plate within each vehicle image, and then outputs the spatial position occupied by each vehicle and plate in the road image. The invention provides a multi-task deep convolutional neural network with higher detection accuracy and higher robustness for a vision-based vehicle color detection system.

Description

Vehicle color recognition system based on multi-task deep convolutional neural networks
Technical field
The present invention relates to the application of artificial intelligence, digital image processing, convolutional neural networks and computer vision to vehicle color recognition, and belongs to the field of intelligent transportation.
Background art
Color is an important external appearance feature of a vehicle. In the real world, many uncertain factors such as the color temperature of the light source, the light intensity, the shooting angle and the camera settings cause the captured vehicle color to deviate to some degree from its color under ideal conditions. Existing published vehicle color recognition methods are very sensitive to changes in vehicle pose and in the lighting environment around the vehicle; when the lighting environment changes, their recognition accuracy drops sharply and the vehicle color can no longer be identified reliably.
" localization method, the vehicle body of characteristic area are deep disclosed in the Chinese patent application of Application No. 200810041097.5 The recognition methods of light color and body color " provides the recognition methods of a kind of vehicle color identification method, vehicle color weight. Comprise the following steps:
1st, the patent builds the energy function of complexity, it is maximum to search for energy according to the textural characteristics and architectural feature of image
Point;
2nd, according to the maximum point location vehicle color of energy and the identification region of vehicle color weight;
3rd, the pixel color and colour darkness in identification region, and carry out the face that statistics finally gives identification region Color and colour darkness.
But the patent is in sample collection stage early stage, to the vehicle color identification under different light conditions at Reason;Different characteristic attributes is obtained by multiple color spaces in selected characteristic vector;Then make in training pattern Trained with the grader of multiple types;Chinese herbaceous peony cover area is only have selected when fixation and recognition region, for possible reflective Phenomenon is not dealt with so that final vehicle color identification and the identification of the vehicle color depth produces certain deviation.
" a kind of vehicle body color in vehicle video image disclosed in the Chinese patent application of Application No. 200810240292.0 Recognition methods and system " provides a kind of vehicle body color in vehicle video image recognition methods.The patent is in training pattern Multiple step format training is taken, is comprised the following steps:
1st, rough segmentation is carried out to vehicle body sample using cluster according to color template and obtains the sample of certain color or a variety of phases The mixing sample of nearly color;
2nd, mixing sample is finely divided using arest neighbors sorting technique;
3rd, the model obtained according to training is slightly identified to vehicle color;
4th, careful identification is carried out using arest neighbors sorting technique.
But the patent equally not yet considers caused by vehicle color to change under different light conditions;In selected characteristic vector When be also to use by tri- kinds of color spaces of HSV, YIQ, YCbCr and step by step;Then employed in training pattern Cluster and arest neighbors sorting technique are combined training pattern;Vehicle color cognitive phase and it is undeclared be to take which kind of strategy to knowing How the color of each pixel in other region is handled;And the patent only illustrates vehicle color identification method, not to car The identification of colour darkness is explained.
At present, body color recognition generally comprises two main modules: one detects and locates the region to be recognized, i.e. determines the body color reference region; the other classifies and recognizes the color of the image within that reference region.
The region to be recognized is detected and located in various ways. In "Vehicle color recognition method" 103544480A, "Body color recognition method based on color histograms" 105160691A and "Body color recognition method and device" 105354530A, the license plate is detected and located first, and the vehicle color reference region is then determined from the plate position. In "Vehicle color recognition method and device" 102737221B, the reference region is located from the texture and structural information of the image, after which a main recognition region and auxiliary recognition regions are located. In "A body color recognition method" 105005766A, the bounding rectangle of a moving target, obtained by moving-object detection in video, serves as the color reference region. "A method of automatically recognizing vehicle color in road gate videos and pictures" 104680195A does not clearly describe how the color candidate regions are located, stating only that there are several candidate regions, concentrated mainly on the hood.
The above vision detection techniques all predate the deep learning era and suffer from low detection accuracy and low detection robustness; in particular, the key problems of varying illumination and camera imaging conditions remain unsolved. Moreover, these patents disclose only technical outlines; many technical details and key issues in practical application are not addressed, in particular the detailed requirements arising from the Road Traffic Safety Law.
In recent years, deep learning has developed rapidly in computer vision. Deep learning can exploit large numbers of training samples, learning abstract representations of images layer by layer in its hidden layers, and so captures image features more comprehensively and directly. A digital image is described by a matrix, and convolutional neural networks describe the overall structure of an image well starting from local blocks of information; hence most deep learning methods in computer vision solve their problems with convolutional neural networks. Driven by the pursuit of higher detection accuracy and shorter detection time, deep convolutional detectors have evolved from R-CNN through Fast R-CNN to Faster R-CNN, bringing further gains in precision, speed, end-to-end training and practicality, and covering almost every field from classification to detection, segmentation and localization. Applying deep learning to vision-based vehicle color detection is therefore a research field of great practical value.
The human visual system has color constancy: it can recover the invariant surface color of an object under changing lighting environments and imaging conditions. Checkpoint monitoring cameras have no such "adjustment" function, and different lighting environments cause a certain deviation between the colors in the captured image and the true colors of objects. This deviation affects the accuracy and robustness of subsequent vehicle color analysis. Finding a suitable color correction algorithm that eliminates the influence of the lighting environment on color appearance, so that the processed image correctly reflects the true colors of objects, has therefore become a current research focus.
The national standard GA 36-2014 defines the details of motor vehicle number plates. Regarding color: large civil vehicles carry yellow plates with black characters; small civil vehicles carry blue plates with white characters; People's Armed Police vehicles carry white plates with a red "WJ" and the remaining characters in black; other foreign-registered vehicles carry black plates with white characters; embassy and consulate vehicles carry black plates with white characters and a hollow "使" (embassy) mark; trial plates are white with red characters and a "试" (trial) mark before the digits; temporary plates are white with red characters and the two characters "临时" (temporary) before the digits; replacement plates are white with black characters. The spacing between plate characters is 12 mm. These regulations on plates, especially those concerning color, provide a reference standard for vehicle color recognition. Under the same illumination, the vehicle color and the plate color shift by the same degree; correcting the vehicle color by the detected color difference of the plate is therefore of great significance for improving the recognition rate of vehicle color.
Summary of the invention
To overcome the low detection accuracy and poor robustness of existing vision-based vehicle color detection, the present invention provides a multi-task deep convolutional neural network with higher detection accuracy and higher robustness for a vision-based vehicle color detection system.
The technical solution adopted for the present invention to solve the technical problems is:
A vehicle color recognition system based on multi-task deep convolutional neural networks, comprising a high-definition camera mounted above the road travel lane, a traffic cloud server, and a vision-based vehicle color detection subsystem;
The high-definition camera obtains the video data of the road; it is mounted above the lane and transfers the road video data to the traffic cloud server over the network;
The traffic cloud server receives the road video data obtained by the high-definition camera and submits it to the vision-based vehicle color detection subsystem for vehicle color recognition;
The vision-based vehicle color detection subsystem comprises a vehicle localization module, a license plate localization module, a plate background color recognition module, a color difference detection module, a vehicle color correction module and a vehicle color recognition module. The vehicle localization module, the license plate localization module and the vehicle color recognition module share the same Faster R-CNN deep convolutional neural network, which rapidly segments the vehicles on the road, further segments the license plate within each vehicle image, and then outputs the spatial position occupied by each vehicle and plate in the road image.
Further, vehicle and plate segmentation and localization are composed of two models: one is the selective search network that generates RoIs; the other is the Faster R-CNN vehicle and plate detection network. After the two classification networks, a multi-level, multi-label, feature-fusing, progressively cascaded multi-task learning network is realized;
The selective search network is the RPN. The RPN takes an image of any scale as input and outputs a set of rectangular target proposal boxes, each with 4 position coordinate variables and a score; the targets of the proposal boxes are the vehicle objects and the plate objects;
For each proposal box the probability of target versus non-target is estimated by a softmax classification layer over two classes; the k proposal boxes are parameterized relative to k reference boxes called anchors;
Each anchor is centered at the current sliding-window position and corresponds to one scale and one aspect ratio; with 3 scales and 3 aspect ratios there are k = 9 anchors at each sliding position;
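As an illustration, the 3-scales-by-3-ratios anchor scheme can be sketched in plain Python. The base size of 16 and the concrete scale and ratio values are assumptions for the sketch; the patent does not list specific numbers.

```python
import itertools

def generate_anchors(base_size=16, scales=(8, 16, 32), ratios=(0.5, 1.0, 2.0)):
    """Generate k = len(scales) * len(ratios) anchors centered on a sliding-window
    position, as (x1, y1, x2, y2) boxes around the origin."""
    anchors = []
    for scale, ratio in itertools.product(scales, ratios):
        # Keep the anchor area at (base_size * scale)^2 while varying the shape.
        area = (base_size * scale) ** 2
        w = (area / ratio) ** 0.5
        h = w * ratio
        anchors.append((-w / 2, -h / 2, w / 2, h / 2))
    return anchors

anchors = generate_anchors()
```

At detection time each of these 9 anchors is translated to every sliding-window center on the feature map, giving the dense set of candidate boxes the RPN scores.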
To train the RPN, a binary label is assigned to each anchor, marking whether it is a target. A positive label is assigned to two classes of anchors: (I) the anchor(s) with the highest Intersection-over-Union (IoU, intersection over union) overlap with some real target bounding box, i.e. ground truth (GT); (II) anchors whose IoU with any GT bounding box exceeds 0.7. Note that one GT bounding box may assign positive labels to several anchors. A negative label is assigned to anchors whose IoU with all GT bounding boxes is below 0.3. Anchors that are neither positive nor negative contribute nothing to the training objective and are discarded;
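A minimal sketch of the IoU computation and the positive/negative/discard labeling rules above (thresholds 0.7 and 0.3 as stated); the box format and the handling of ties are simplifying assumptions:

```python
def iou(a, b):
    """Intersection-over-Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def label_anchors(anchors, gt_boxes, pos_thresh=0.7, neg_thresh=0.3):
    """Return 1 for positive, 0 for negative, -1 for discarded anchors."""
    best = [max(iou(a, g) for g in gt_boxes) for a in anchors]
    highest = max(best)
    labels = []
    for overlap in best:
        if overlap >= pos_thresh or overlap == highest:
            labels.append(1)      # rule (II) or rule (I)
        elif overlap < neg_thresh:
            labels.append(0)      # negative: low IoU with every GT box
        else:
            labels.append(-1)     # neither: excluded from training
    return labels
```

Rule (I) guarantees at least one positive anchor per image even when no overlap reaches 0.7.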
Following the multi-task loss of Faster R-CNN, the objective function is minimized. The loss for one image is defined as:

L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i*) + λ (1/N_reg) Σ_i p_i* L_reg(t_i, t_i*)    (1)

Here, i is the index of an anchor and p_i is the predicted probability that anchor i is a target; the GT label p_i* is 1 if the anchor is positive and 0 if it is negative. t_i is a vector of the 4 parameterized coordinates of the predicted bounding box, and t_i* is the coordinate vector of the GT bounding box associated with a positive anchor. λ is a balance weight, here λ = 10. N_cls, the normalizer of the cls term, is the mini-batch size, here N_cls = 256; N_reg, the normalizer of the reg term, is the number of anchor positions, N_reg ≈ 2,400. The classification loss L_cls is the log loss over the three categories, i.e. vehicle object, plate object and road background:

L_cls(p_i, p_i*) = −log p_i(p_i*)    (2)

where L_cls is the classification loss function, p_i is the predicted class distribution for anchor i, and p_i(p_i*) is the predicted probability of the true class given by the GT bounding box;
The regression loss L_reg is defined as:

L_reg(t_i, t_i*) = R(t_i − t_i*)    (3)

where L_reg is the regression loss function and R is the robust loss function, smooth L1, computed by formula (4):

smooth_L1(x) = 0.5 x²      if |x| < 1
               |x| − 0.5   otherwise    (4)

where smooth_L1 is the smooth L1 loss function and x is the variable;
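Formula (4) translates directly into code; this sketch applies it element-wise to one coordinate difference:

```python
def smooth_l1(x):
    """Smooth L1 robust loss: quadratic near zero, linear for large errors,
    so outlier box-regression targets do not dominate the gradient."""
    ax = abs(x)
    return 0.5 * x * x if ax < 1 else ax - 0.5
```

In formula (3) this function is summed over the 4 parameterized coordinates of each positive anchor's box.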
In the Faster R-CNN network, the input image passes through the deep convolutional network to yield a feature map; the corresponding RoIs are then obtained from the feature map and the RPN, and finally pass through the RoI pooling layer. An RoI, i.e. region of interest, is a vehicle object or a plate object;
The inputs of the Faster R-CNN network are N feature maps and R RoIs; the N feature maps come from the last convolutional layer, each of size w × h × c;
Each RoI is a tuple (n, r, c, h, w), where n is the index of the feature map, n ∈ (0, 1, 2, …, N−1), r, c are the top-left coordinates, and h, w are its height and width respectively;
The output is the feature map obtained by max pooling; the RoI in the original image is mapped to a block in the feature map, the feature-map block is down-sampled to a fixed size, and the result is passed to the fully connected layers.
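The RoI max-pooling step, down-sampling a mapped feature block to a fixed grid before the fully connected layers, can be sketched without any framework. The feature block is a plain 2D list here; the bin-partitioning scheme is one common choice, not the patent's exact one:

```python
def roi_max_pool(feature, out_h, out_w):
    """Max-pool a 2D feature crop (list of lists) down to a fixed
    out_h x out_w grid, one max per roughly equal-sized bin."""
    h, w = len(feature), len(feature[0])
    pooled = []
    for i in range(out_h):
        r0 = i * h // out_h
        r1 = max((i + 1) * h // out_h, r0 + 1)   # bin covers at least one row
        row = []
        for j in range(out_w):
            c0 = j * w // out_w
            c1 = max((j + 1) * w // out_w, c0 + 1)
            row.append(max(feature[r][c] for r in range(r0, r1)
                           for c in range(c0, c1)))
        pooled.append(row)
    return pooled
```

Because the output size is fixed regardless of the RoI's size, vehicles and plates of any scale can feed the same fully connected layers.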
Further, the selective search network and Fast R-CNN are trained independently; a 4-step training algorithm learns shared features by alternating optimization. Step 1: train the RPN as described above, initialized with an ImageNet pre-trained model and fine-tuned end to end for the region proposal task. Step 2: train a separate detection network by Fast R-CNN using the proposal boxes generated by the step-1 RPN; this detection network is likewise initialized by the ImageNet pre-trained model; at this point the two networks do not yet share convolutional layers. Step 3: re-initialize RPN training with the detection network, but fix the shared convolutional layers and fine-tune only the layers exclusive to the RPN; the two networks now share convolutional layers. Step 4: keeping the shared convolutional layers fixed, fine-tune the fc (fully connected) layers of Fast R-CNN. The two networks thus share the same convolutional layers and form a unified network;
Through the processing of the above two networks, the vehicle objects and plate objects in a video frame are detected and their sizes and spatial positions delimited; that is, the size and spatial position of each vehicle object and plate object are obtained. Here r_v, c_v are the top-left coordinates of a vehicle object in the image, and h_v, w_v are the height and width of its projection on the image plane; r_p, c_p are the top-left coordinates of the plate in the image, and h_p, w_p are the height and width of its projection on the image plane;
The Faster R-CNN network exploits a progressive cascade between tasks, proceeding in order through vehicle precise localization; vehicle, brand and series recognition; plate precise localization; plate recognition and plate color recognition; color difference detection; vehicle color correction; and vehicle color recognition.
Further, the plate background color recognition module processes the plate image to obtain the plate background color under the current environmental conditions. The plate image is converted to a gray-level histogram; the valleys of the histogram correspond to the gaps between characters, i.e. to the plate background. The RGB color components of the pixels in those intervals are averaged, finally giving the plate background color under the current environment.
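A simplified sketch of this extraction step: the patent reads the background off the valleys between character peaks of the gray-level histogram; this version takes the dominant gray-level bin instead, which coincides with the background for typical plates where background pixels outnumber character pixels (a simplifying assumption), then averages the RGB values in that bin:

```python
def plate_background_color(pixels, bins=16):
    """pixels: iterable of (r, g, b) plate pixels. Bucket pixels by gray
    level, take the most populated bucket as background, average its RGB."""
    buckets = [[] for _ in range(bins)]
    for r, g, b in pixels:
        gray = int(0.299 * r + 0.587 * g + 0.114 * b)   # ITU-R BT.601 luma
        buckets[min(gray * bins // 256, bins - 1)].append((r, g, b))
    bg = max(buckets, key=len)
    n = len(bg)
    return tuple(sum(p[i] for p in bg) / n for i in range(3))
```

The averaged triple is what the color difference detection module then compares against the national-standard plate colors.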
The color difference detection module compares the plate background color prescribed by the national standard with the plate background color under the current environment, and computes the color difference under the current environment. First, the plate background color measured under the current environment is compared with the several plate background color types of the national standard; the closest national-standard plate background color is obtained and taken as the plate background color under standard illumination. The color difference is computed in the CIE1976 Lab color space. To convert quickly from the RGB color space to the Lab color space, a fast conversion is used, as shown in formula (5);
where R, G, B are the color components in the RGB color space, L is the lightness component of the CIE1976 Lab color space, and a and b are the chroma components of the CIE1976 Lab color space;
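The patent's formula (5) is an integer-optimized fast approximation whose exact coefficients are not reproduced here; for reference, a standard floating-point sRGB to CIE L*a*b* conversion under the D65 illuminant looks as follows (the matrix and constants are the usual sRGB/CIE values, not taken from the patent):

```python
def srgb_to_lab(r, g, b):
    """Reference sRGB (0-255 per channel) -> CIE1976 L*a*b*, D65 white point."""
    def linearize(c):                     # inverse sRGB gamma
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    # linear RGB -> CIE XYZ (sRGB primaries, D65)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

The patent trades this floating-point pipeline for integer multiplications and shifts so the conversion can run at video rate.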
The L, a, b values of both the national-standard plate background color and the plate background color under the current environment are obtained by formula (5). Let L_NP and L_RP be the lightness of the national-standard plate background color and of the measured plate background color respectively, and a_NP, a_RP, b_NP, b_RP their chroma. The color difference ΔE_ab between the two is the CIE1976 Lab color difference computed by formula (6):

ΔE_ab = √(ΔL² + Δa² + Δb²)    (6)

where ΔL = L_NP − L_RP is the lightness difference, Δa = a_NP − a_RP and Δb = b_NP − b_RP are the chroma differences, and ΔE_ab is the color difference in NBS units.
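Formula (6) is the Euclidean distance in Lab space and can be written in one line:

```python
def delta_e_cie1976(lab1, lab2):
    """CIE1976 color difference: Euclidean distance between two (L, a, b) triples."""
    return sum((p - q) ** 2 for p, q in zip(lab1, lab2)) ** 0.5
```

Its output feeds the threshold test that decides whether vehicle color correction is needed.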
The vehicle color correction module corrects the vehicle color under the current environment according to the detected color difference, obtaining the vehicle color image under the ideal environment of the national standard. First it is determined whether the color difference ΔE_ab exceeds a preset threshold; if it does, vehicle color correction is performed, computed as shown in formula (7):

L_NM = L_RP + ΔL,  a_NM = a_RP + Δa,  b_NM = b_RP + Δb    (7)

where L_NM is the lightness of the vehicle color under the ideal national-standard environment, L_RP is the lightness of the vehicle color under the current environment, ΔL is the lightness difference between the national-standard plate background color and the plate background color under the current environment, a_NM and b_NM are the chroma of the vehicle color under the ideal national-standard environment, a_RP and b_RP are the chroma of the vehicle color under the current environment, and Δa and Δb are the chroma differences between the national-standard plate background color and the plate background color under the current environment;
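A sketch of this correction step: shift the vehicle's measured Lab values by the plate's color cast, i.e. the standard-minus-measured plate deltas, whenever the plate color difference exceeds the threshold. The 8.0 default threshold is an assumption borrowed from the 8.0 NBS tolerance the national standard places on blue plates; the patent leaves its own threshold symbol unspecified here:

```python
def correct_vehicle_color(vehicle_lab, plate_measured_lab, plate_standard_lab,
                          threshold=8.0):
    """Formula (7) sketch: vehicle Lab + (standard - measured) plate deltas,
    applied only when the plate color difference exceeds the threshold."""
    deltas = [s - m for s, m in zip(plate_standard_lab, plate_measured_lab)]
    if sum(d * d for d in deltas) ** 0.5 <= threshold:
        return tuple(vehicle_lab)      # within tolerance: no correction needed
    return tuple(v + d for v, d in zip(vehicle_lab, deltas))
```

Because the plate's true color is known from the standard, the same additive cast measured on the plate can be removed from the whole vehicle region.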
Further, the corrected vehicle color is inverse-transformed from the Lab color space back to the RGB color space, as shown in formula (8);
where R, G, B are the color components in the RGB color space, L is the lightness component of the CIE1976 Lab color space, and a and b are the chroma components of the CIE1976 Lab color space;
The equations of formula (8) are the optimized form: floating-point operations are converted to ordinary integer multiplications and shifts; "div2^23" in the formula denotes a right shift by 23 bits. The ranges of both RGB and Lab in the formula are [0, 255]. The RGB values of the vehicle color under the ideal national-standard environment are then obtained through the inverse Gamma function.
The vehicle color recognition module recognizes the corrected vehicle color. To share the Faster R-CNN deep convolutional network effectively, the color-corrected vehicle images are tagged with their color labels and used for training. At recognition time, after the processing steps of vehicle precise localization, plate precise localization, plate background color recognition, color difference detection and vehicle color correction, vehicle color recognition is finally performed by the Faster R-CNN deep convolutional network; passing the color-corrected vehicle image through the Faster R-CNN deep convolutional network yields the vehicle color under standard illumination conditions.
The beneficial effects of the present invention are mainly: the color deviation caused by illumination and camera settings is inherently eliminated, which effectively improves the detection robustness of vehicle color; and the multi-task deep learning convolutional neural network exploits the progressive cascade between tasks, so that vehicle precise localization, vehicle brand and series recognition, plate precise localization, plate recognition, plate color recognition and vehicle color recognition all share the same Faster R-CNN deep convolutional neural network, which improves the localization and recognition precision of each task while effectively shortening the overall recognition time and improving the real-time performance of detection and recognition.
Brief description of the drawings
Fig. 1 is the structure of Fast R-CNN;
Fig. 2 is the selective search network;
Fig. 3 is the progressive cascade structure of the multi-task Faster R-CNN;
Fig. 4 is the multi-task Faster R-CNN vehicle color vision detection network structure;
Fig. 5 illustrates the extraction of the plate background color under the current environment from the gray-level histogram;
Fig. 6 is the vehicle color recognition flow chart of the multi-task Faster R-CNN deep convolutional network.
Embodiment
The invention will be further described below in conjunction with the accompanying drawings.
Referring to Figs. 1-6, a vehicle color recognition system based on multi-task deep convolutional neural networks comprises a high-definition camera mounted above the road travel lane, a traffic cloud server, and a vision-based vehicle color detection subsystem;
The high-definition camera obtains the video data of the road; it is mounted above the lane and transfers the road video data to the traffic cloud server over the network;
The traffic cloud server receives the road video data obtained by the high-definition camera and submits it to the vision-based vehicle color detection subsystem for vehicle color recognition. The processing flow is shown in Fig. 6. First, the video image is segmented and located and the vehicle image is extracted. Then the plate image within the vehicle image is segmented and located again, and the plate image is extracted. The plate image is processed into a gray-level histogram; according to the character distribution of the plate, the background color region of the plate is extracted from the histogram. Further, the extracted plate background color is matched against the closest national-standard plate background color, and the RGB values of the national-standard plate background color and of the detected plate background color are both transformed into the Lab color space. The color difference between the two is then computed; if it exceeds the threshold, the vehicle image is transformed from the RGB color space into the Lab color space, chromatic aberration correction is applied to the vehicle image with this color difference to obtain a deviation-free vehicle image, and the corrected vehicle image is transformed back into the RGB color space. Finally, vehicle, brand, series and body color recognition is performed on the color-corrected vehicle image, giving an accurate appearance description.
GA 36-2014, the public security industry standard of the People's Republic of China "Motor vehicle number plates of the People's Republic of China" (hereinafter the national standard), gives the following detailed provisions on plate color. The national standard specifies that the chromaticity coordinates of metal plates under illuminant A shall meet clause 4.4.1 of GA 666-2006, as shown in Table 1; that the color difference between the chromaticity coordinates of the retroreflective surface of blue plates under illuminant D65 and the standard color plate shall not exceed 8.0 NBS; and that the luminance factor shall meet clause 4.4.2 of GA 666-2006, as shown in Table 2. These provisions provide the invention with a reference of standard colors.
Table 1
Table 2
The vision-based detection subsystem of vehicle color includes a vehicle localization detection module, a license plate localization detection module, a plate background color identification module, a color difference detection module, a vehicle color correction module and a vehicle color identification module. The vehicle localization detection module, the license plate localization detection module and the vehicle color identification module share the same Faster R-CNN deep convolutional neural network: the network rapidly segments vehicles out of the road image, further rapidly segments the license plate out of each vehicle image, and then outputs the spatial position occupied by each vehicle and plate in the road image.
Vehicle and license plate segmentation and localization are performed by two models: one model is the selective search network that generates RoIs; the other is the Faster R-CNN vehicle and plate target detection network, whose detection unit structure is shown in Fig. 1. The present invention further modifies the Faster R-CNN network: after the two classification and identification networks, a multi-level, multi-label, multi-feature-fusion, stage-by-stage progressive multi-task learning network is realized, as shown in Fig. 4.
The selective search network is the region proposal network (RPN). The RPN takes an image of any scale as input and outputs a set of rectangular target proposal boxes, each box comprising 4 position coordinates and a score. To generate region proposals, a small network slides over the convolutional feature map output by the last shared convolutional layer; this small network is fully connected to an n × n spatial window of the input feature map. Each sliding window is mapped to a low-dimensional vector, one value per feature map, and this vector is fed into two sibling fully connected output layers.
At each sliding-window position, k region proposals are predicted simultaneously, so the position regression layer has 4k outputs, the parameterized coordinates of the k bounding boxes. The classification layer outputs 2k scores, the estimated target/non-target probability of each proposal box; it is implemented as a two-class softmax layer, though the k scores could equally be produced with logistic regression. The k proposal boxes are parameterized relative to k reference boxes called anchors. Each anchor is centered on the current sliding window and corresponds to one scale and one aspect ratio; with 3 scales and 3 aspect ratios there are k = 9 anchors at each sliding position. For example, for a convolutional feature map of size w × h, there are w × h × k anchors in total. The RPN structure is shown in Fig. 2.
To train the RPN, a binary label is assigned to each anchor, marking whether it is a target. A positive label is given to two classes of anchor: (I) the anchor with the highest Intersection-over-Union (IoU, intersection over union) overlap with some real target bounding box, i.e. ground-truth (GT) box; (II) any anchor whose IoU overlap with some GT bounding box exceeds 0.7. Note that one GT bounding box may assign positive labels to multiple anchors. A negative label is given to anchors whose IoU with every GT bounding box is below 0.3. Anchors that are neither positive nor negative contribute nothing to the training objective and are discarded.
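The anchor labeling rule above can be sketched as follows (an illustrative sketch only, not code from the patent; the function names, the (x1, y1, x2, y2) box convention and the NumPy dependency are assumptions of this sketch):

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def label_anchors(anchors, gt_boxes, hi=0.7, lo=0.3):
    """Return 1 (positive), 0 (negative) or -1 (ignored) per anchor,
    following the two positive rules and the below-0.3 negative rule."""
    ious = np.array([[iou(a, g) for g in gt_boxes] for a in anchors])
    labels = np.full(len(anchors), -1, dtype=int)
    labels[ious.max(axis=1) < lo] = 0   # below 0.3 with every GT -> negative
    labels[ious.max(axis=1) > hi] = 1   # above 0.7 with some GT -> positive
    labels[ious.argmax(axis=0)] = 1     # highest IoU for each GT -> positive
    return labels
```

The third assignment implements rule (I): even if no anchor clears the 0.7 threshold, each GT box still produces at least one positive anchor.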
With these definitions, the multi-task loss of Faster R-CNN is followed to minimize the objective function. The loss function for one image is defined as:

$$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}}\sum_i p_i^* L_{reg}(t_i, t_i^*) \qquad (1)$$
Here, i is the index of an anchor, and p_i is the predicted probability that anchor i is a target; the GT label p_i^* is 1 if the anchor is positive and 0 if it is negative; t_i is a vector of the 4 parameterized coordinates of the predicted bounding box, and t_i^* is the coordinate vector of the GT bounding box associated with a positive anchor. λ is a balancing weight, here λ = 10; N_cls is the normalizer of the cls term, the mini-batch size, here N_cls = 256; N_reg is the normalizer of the reg term, the number of anchor positions, N_reg ≈ 2,400. The classification loss L_cls is the log loss over 3 classes (vehicle target, plate target vs. road background):

$$L_{cls}(p_i, p_i^*) = -\log\left[p_i^* p_i + (1 - p_i^*)(1 - p_i)\right] \qquad (2)$$
The regression loss L_reg is defined by the following function:

$$L_{reg}(t_i, t_i^*) = R(t_i - t_i^*) \qquad (3)$$
where L_reg is the regression loss and R is the robust loss function, smooth L1, computed by formula (4):

$$\mathrm{smooth}_{L1}(x) = \begin{cases} 0.5x^2 & \text{if } |x| < 1 \\ |x| - 0.5 & \text{otherwise} \end{cases} \qquad (4)$$
where smooth_{L1} is the smooth L1 loss function and x is its variable.
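Formula (4) is small enough to state directly in code (an illustrative sketch; the function name is an assumption):

```python
def smooth_l1(x):
    """Smooth L1 loss of formula (4): quadratic near zero, linear outside,
    so large regression errors are penalized less harshly than with L2."""
    x = abs(x)
    return 0.5 * x * x if x < 1 else x - 0.5
```

The two branches join continuously at |x| = 1, where both give 0.5.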
The Faster R-CNN network structure is shown in Fig. 3: the input image passes through the deep convolutional neural network to obtain the feature map; the corresponding RoIs are then obtained from the feature map and the RPN; finally the RoI pooling layer is applied. This layer is a single-level spatial "pyramid" pooling. Its input is N feature maps and R RoIs; the N feature maps come from the last convolutional layer, each of size w × h × c. Each RoI is a tuple (n, r, c, h, w), where n is the feature-map index, n ∈ (0, 1, 2, ..., N−1), r, c is the top-left coordinate, and h, w are the height and width, respectively. The output is the feature map obtained by max pooling. The layer serves two main purposes: it maps each RoI of the original image to a block of the feature map, and it down-samples the feature map to a fixed size before passing it to the fully connected layers.
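The two roles of the RoI pooling layer — mapping an RoI to a block of the feature map and max-pooling it to a fixed size — can be sketched for a single 2-D feature map as follows (a simplified sketch: real RoI pooling operates per channel and over a batch of RoIs; the names and the equal-binning scheme are assumptions):

```python
import numpy as np

def roi_max_pool(feature_map, roi, out_h=2, out_w=2):
    """Max-pool the RoI window of a (H, W) feature map down to out_h x out_w.
    roi = (r, c, h, w): top-left row/col and height/width in feature-map cells."""
    r, c, h, w = roi
    window = feature_map[r:r + h, c:c + w]
    out = np.empty((out_h, out_w))
    # split the window into an out_h x out_w grid of roughly equal bins
    row_edges = np.linspace(0, h, out_h + 1).astype(int)
    col_edges = np.linspace(0, w, out_w + 1).astype(int)
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = window[row_edges[i]:row_edges[i + 1],
                               col_edges[j]:col_edges[j + 1]].max()
    return out
```

Whatever the RoI's size, the output is always out_h × out_w, which is what lets arbitrarily sized proposals feed a fixed-size fully connected layer.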
Sharing weights between the selective search network and the detection network: trained independently, the RPN and Faster R-CNN would each modify their convolutional layers differently, so a technique is needed that allows the two networks to share convolutional layers rather than learning two separate networks. The invention uses a practical 4-step training algorithm that learns shared features through alternating optimization. Step 1: train the RPN as described above, initialized with an ImageNet-pretrained model and fine-tuned end to end for the region proposal task. Step 2: train a separate detection network with Faster R-CNN using the proposals generated by the step-1 RPN, likewise initialized with an ImageNet-pretrained model; at this point the two networks do not yet share convolutional layers. Step 3: re-initialize RPN training from the detection network, but fix the shared convolutional layers and fine-tune only the layers exclusive to the RPN; the two networks now share convolutional layers. Step 4: keeping the shared convolutional layers fixed, fine-tune the fc (fully connected) layers of Faster R-CNN. The two networks then share the same convolutional layers and form a unified network.
To handle objects at multiple scales, three simple scales are used for each feature point of the feature map — bounding-box areas of 128 × 128, 256 × 256 and 512 × 512 — together with three aspect ratios, 1:1, 1:2 and 2:1. With this design, neither multi-scale features nor multi-scale sliding windows are needed to predict large regions, which saves a large amount of running time.
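The 3 scales × 3 aspect ratios design can be sketched as follows (illustrative only; parameterizing the ratio as height/width is an assumption of this sketch):

```python
import numpy as np

def make_anchors(cx, cy, scales=(128, 256, 512), ratios=(1.0, 0.5, 2.0)):
    """Generate the k = 9 anchor boxes (x1, y1, x2, y2) centred on (cx, cy):
    3 areas (scale^2) x 3 height/width ratios, as in the text."""
    anchors = []
    for s in scales:
        area = float(s * s)
        for ratio in ratios:        # ratio = h / w
            w = np.sqrt(area / ratio)
            h = w * ratio
            anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return anchors
```

Note that each box preserves the target area scale² while its shape varies with the ratio.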
Through the processing of the above two networks, each vehicle in a video frame is detected and its size and spatial position are bounded: r_v, c_v is the top-left coordinate of the vehicle in the image, and h_v, w_v are the projected height and width of the vehicle in the image plane.
Since the objects of interest in the present invention are vehicles and license plates (hereinafter, RoIs), the convolutional neural network must, during learning and training, be trained with corresponding labels attached to the various vehicle, plate and road background images in order to localize and segment the various RoIs on the road; the Faster R-CNN deep convolutional neural network can then automatically segment and localize vehicles and plates. To improve plate localization accuracy, the invention applies the Faster R-CNN deep convolutional neural network a second time, to the already segmented and localized vehicle image, to segment and localize that vehicle's plate.
The invention employs a multi-task deep-learning convolutional neural network because multi-task networks often outperform single-task networks at image recognition: multi-task learning exploits the relatedness between tasks — the tasks share information during learning, which is a precondition of multi-task deep learning. When multiple tasks are trained simultaneously, the network uses the information shared between tasks to strengthen the inductive bias of the system and the generalization ability of the classifier. As shown in Fig. 3, the invention makes full use of the progressive cascade between tasks, in order: precise vehicle localization; vehicle type, brand and series identification; precise plate localization; plate recognition and plate color identification; color difference detection; vehicle color correction; vehicle color identification.
The vehicle localization detection module is used to segment and localize vehicle object images on the road: the Faster R-CNN deep convolutional neural network processes the road video image to obtain the size and spatial position of each vehicle, where r_v, c_v is the top-left coordinate of the vehicle in the image and h_v, w_v are its projected height and width in the image plane.
The license plate localization detection module is used to segment and localize the license plate image within the vehicle object image: the Faster R-CNN deep convolutional neural network processes the vehicle image to obtain the size and spatial position of the plate, where r_p, c_p is the top-left coordinate of the plate in the image and h_p, w_p are its projected height and width in the image plane.
The plate background color identification module is used to process the license plate image to obtain the plate background color under the prevailing environmental conditions. Specifically, a gray-level histogram of the plate image is computed, as shown in Fig. 5; the peaks and valleys of the histogram separate the characters from the inter-character regions of the plate, i.e. the plate background. The RGB color components of the pixels in the background interval are averaged, finally giving the plate background color under the prevailing environmental conditions.
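The histogram-based background color extraction can be sketched as follows (a simplified sketch only: a fixed band around the dominant gray peak stands in for the valley-delimited interval of Fig. 5, and the 20-gray-level band width is an assumption of this sketch):

```python
import numpy as np

def plate_background_rgb(plate_rgb):
    """Estimate the plate background colour: build the grey-level histogram,
    take the dominant grey peak (the character-free background region), and
    average the RGB components of the pixels falling in that region."""
    grey = plate_rgb.mean(axis=2).astype(int)
    hist = np.bincount(grey.ravel(), minlength=256)
    peak = hist.argmax()                  # dominant grey level = background
    mask = np.abs(grey - peak) <= 20      # pixels within the background band
    return plate_rgb[mask].mean(axis=0)   # mean R, G, B of background pixels
```

Because plate characters cover far less area than the background, the histogram's dominant peak reliably falls on the background gray level.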
The color difference detection module is used to compare the plate background color specified by the national standard with the plate background color under the prevailing environmental conditions, yielding the color difference under those conditions. First, the detected plate background color is compared against the several plate background color types specified by the national standard, and the closest national-standard plate background color is taken as the plate background color under standard illumination. The color difference is computed in the CIE 1976 Lab color space. In general, conversion from RGB to Lab requires two steps: first from 24-bit true-color RGB to the XYZ color space, then from XYZ to Lab. To convert quickly from RGB to Lab, the invention adopts a fast conversion, as shown in formula (5):

$$\begin{cases} L = Y_1 = 0.2126R + 0.7152G + 0.0722B \\ a = 1.4749 \times (0.2213R - 0.3390G + 0.1177B) + 128 \\ b = 0.6245 \times (0.1949R + 0.6057G - 0.8006B) + 128 \end{cases} \qquad (5)$$
The national-standard plate background color and the plate background color under the prevailing environmental conditions are both converted by formula (5) to obtain their respective L, a, b values, where L_NP and L_RP are the lightness values of the national-standard plate background color and of the plate background color under the prevailing conditions, and a_NP, a_RP, b_NP, b_RP are the corresponding chromaticities. The color difference ΔE_ab between the two is computed as the CIE 1976 Lab color difference by formula (6):

$$\Delta E_{ab} = \sqrt{\Delta L^2 + \Delta a^2 + \Delta b^2} \qquad (6)$$
where ΔL = L_NP − L_RP is the lightness difference, Δa = a_NP − a_RP and Δb = b_NP − b_RP are the chromaticity differences, and ΔE_ab is the color difference in NBS units.
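Formulas (5) and (6) translate directly into code (an illustrative sketch; the function names are assumptions, the coefficients are those of formula (5)):

```python
import math

def rgb_to_lab_fast(r, g, b):
    """Fast one-step RGB -> Lab approximation of formula (5)."""
    L = 0.2126 * r + 0.7152 * g + 0.0722 * b
    a = 1.4749 * (0.2213 * r - 0.3390 * g + 0.1177 * b) + 128
    bb = 0.6245 * (0.1949 * r + 0.6057 * g - 0.8006 * b) + 128
    return L, a, bb

def delta_e(lab1, lab2):
    """CIE 1976 colour difference of formula (6), in NBS units."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(lab1, lab2)))
```

For pure white (255, 255, 255) the a and b coefficient sums cancel, so the conversion yields the neutral point (255, 128, 128), a quick sanity check on the coefficients.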
The vehicle color correction module is used to correct the vehicle color under the prevailing environmental conditions according to the detected color difference, obtaining the vehicle color image under the ideal conditions of the national standard. First, it is judged whether the color difference ΔE_ab exceeds a preset threshold; if it does, vehicle color correction is carried out, computed as shown in formula (7):

$$\begin{cases} L_{NM} = L_{RP} + \Delta L \\ a_{NM} = a_{RP} + \Delta a \\ b_{NM} = b_{RP} + \Delta b \end{cases} \qquad (7)$$
where L_NM is the lightness of the vehicle color under the ideal national-standard conditions, L_RP is the lightness of the vehicle color under the prevailing environmental conditions, ΔL is the lightness difference between the national-standard plate background color and the plate background color under the prevailing conditions, a_NM and b_NM are the chromaticities of the vehicle color under the ideal national-standard conditions, a_RP and b_RP are the chromaticities of the vehicle color under the prevailing conditions, and Δa and Δb are the chromaticity differences between the national-standard plate background color and the plate background color under the prevailing conditions.
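The threshold-gated correction of formula (7) can be sketched as follows (an illustrative sketch; the default threshold of 8.0 NBS is an assumption borrowed from the national-standard tolerance quoted earlier, since the text leaves the threshold symbol unspecified):

```python
def correct_lab(vehicle_lab, plate_lab_standard, plate_lab_measured, threshold=8.0):
    """Colour-correct a vehicle Lab value per formula (7): shift it by the
    offset between the standard and the measured plate background colour,
    but only when the plate colour difference exceeds the threshold."""
    dL = plate_lab_standard[0] - plate_lab_measured[0]
    da = plate_lab_standard[1] - plate_lab_measured[1]
    db = plate_lab_standard[2] - plate_lab_measured[2]
    if (dL * dL + da * da + db * db) ** 0.5 <= threshold:
        return vehicle_lab            # within tolerance: no correction needed
    L, a, b = vehicle_lab
    return (L + dL, a + da, b + db)
```

The plate acts as a known colour reference: the illumination-induced shift it exhibits is assumed to apply equally to the whole vehicle, so the same (ΔL, Δa, Δb) offset is added to every vehicle pixel.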
Further, the corrected vehicle color is inverse-transformed from the Lab color space back to the RGB color space, as shown in formula (8);
where R, G, B are the color components of the RGB color space, L is the lightness component of the CIE 1976 Lab color space, and a and b are its chromatic components.
The equations of formula (8) are optimized formulas in which floating-point operations are converted to ordinary integer multiplications and bit shifts; the shift is written div2^23 in the formula, denoting a right shift by 23 bits. The value ranges of RGB and Lab in the formula are all [0, 255]; the RGB values of the vehicle color under the ideal national-standard conditions are then obtained through the inverse Gamma function.
The vehicle color identification module is used to identify the corrected vehicle color. To share the Faster R-CNN deep convolutional neural network effectively, the invention trains with color labels attached to the color-corrected vehicle images. When identifying vehicle color, after the processing steps of precise vehicle localization, precise plate localization, plate background color identification, color difference detection and vehicle color correction, vehicle color identification is finally performed by the Faster R-CNN deep convolutional neural network: identifying the color-corrected vehicle image with this network yields the color of the vehicle under standard illumination conditions.
The above is only a preferred embodiment of the present invention and is not intended to limit the invention; any modification, equivalent substitution, improvement and the like made within the spirit and principles of the invention shall be included in the scope of protection of the invention.

Claims (7)

  1. A vehicle color identification system based on a multi-task deep convolutional neural network, characterized by comprising a high-definition camera mounted above the road travel lane, a traffic cloud server and a vision-based detection subsystem of vehicle color;
    the high-definition camera is used to acquire video data of the road; it is arranged above the lane and transfers the road video data to the traffic cloud server over a network;
    the traffic cloud server is used to receive the road video data acquired by the high-definition camera and submit it to the vision-based detection subsystem of vehicle color for vehicle color identification;
    the vision-based detection subsystem of vehicle color includes a vehicle localization detection module, a license plate localization detection module, a plate background color identification module, a color difference detection module, a vehicle color correction module and a vehicle color identification module; the vehicle localization detection module, the license plate localization detection module and the vehicle color identification module share the same Faster R-CNN deep convolutional neural network, which rapidly segments vehicles out of the road image, further rapidly segments the license plate out of each vehicle image, and then outputs the spatial position occupied by each vehicle and plate in the road image.
  2. The vehicle color identification system based on a multi-task deep convolutional neural network as claimed in claim 1, characterized in that:
    vehicle and license plate segmentation and localization are performed by two models: one model is the selective search network that generates RoIs, and the other is the Faster R-CNN vehicle and plate target detection network; after the two classification and identification networks, a multi-level, multi-label, multi-feature-fusion, stage-by-stage progressive multi-task learning network is realized;
    the selective search network is the region proposal network (RPN); the RPN takes an image of any scale as input and outputs a set of rectangular target proposal boxes, each box comprising 4 position coordinates and a score; the targets of the proposal boxes are vehicle objects and plate objects;
    the estimated target/non-target probability of each proposal box is produced by a classification layer implemented as a two-class softmax layer; the k proposal boxes are parameterized relative to k reference boxes called anchors;
    each anchor is centered on the current sliding window and corresponds to one scale and one aspect ratio; with 3 scales and 3 aspect ratios there are k = 9 anchors at each sliding position;
    to train the RPN, a binary label is assigned to each anchor, marking whether it is a target; a positive label is then given to two classes of anchor: (I) the anchor with the highest Intersection-over-Union (IoU, intersection over union) overlap with some real target bounding box, i.e. ground-truth (GT) box; (II) any anchor whose IoU overlap with some GT bounding box exceeds 0.7; note that one GT bounding box may assign positive labels to multiple anchors; a negative label is given to anchors whose IoU with every GT bounding box is below 0.3; anchors that are neither positive nor negative contribute nothing to the training objective and are discarded;
    the multi-task loss of Faster R-CNN is followed to minimize the objective function; the loss function for one image is defined as:
    $$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}}\sum_i p_i^* L_{reg}(t_i, t_i^*) \qquad (1)$$
    here, i is the index of an anchor, and p_i is the predicted probability that anchor i is a target; the GT label p_i^* is 1 if the anchor is positive and 0 if it is negative; t_i is a vector of the 4 parameterized coordinates of the predicted bounding box, and t_i^* is the coordinate vector of the GT bounding box associated with a positive anchor; λ is a balancing weight, here λ = 10; N_cls is the normalizer of the cls term, the mini-batch size, here N_cls = 256; N_reg is the normalizer of the reg term, the number of anchor positions, N_reg ≈ 2,400; the classification loss L_cls is the log loss over three classes (vehicle target object, plate target object vs. road background):
    $$L_{cls}(p_i, p_i^*) = -\log\left[p_i^* p_i + (1 - p_i^*)(1 - p_i)\right] \qquad (2)$$
    where L_cls is the classification loss, p_i is the predicted probability that the anchor is the i-th target, and p_i^* is the corresponding GT label of the i-th target;
    the regression loss L_reg is defined by the following function:
    $$L_{reg}(t_i, t_i^*) = R(t_i - t_i^*) \qquad (3)$$
    where L_reg is the regression loss and R is the robust loss function, smooth L1, computed by formula (4):
    $$\mathrm{smooth}_{L1}(x) = \begin{cases} 0.5x^2 & \text{if } |x| < 1 \\ |x| - 0.5 & \text{otherwise} \end{cases} \qquad (4)$$
    where smooth_{L1} is the smooth L1 loss function and x is its variable;
    in the Faster R-CNN network, the input image passes through the deep convolutional neural network to obtain the feature map; the corresponding RoIs are then obtained from the feature map and the RPN, and finally the RoI pooling layer is applied; the RoIs, i.e. regions of interest, are the vehicle target objects and the plate target objects;
    for the Faster R-CNN network, the input is N feature maps and R RoIs; the N feature maps come from the last convolutional layer, each of size w × h × c;
    each RoI is a tuple (n, r, c, h, w), where n is the feature-map index, n ∈ (0, 1, 2, ..., N−1), r, c is the top-left coordinate, and h, w are the height and width, respectively;
    the output is the feature map obtained by max pooling; each RoI of the original image is mapped to a block of the feature map; the feature map is down-sampled to a fixed size and then passed to the fully connected layers.
  3. The vehicle color identification system based on a multi-task deep convolutional neural network as claimed in claim 2, characterized in that: the selective search network and Fast R-CNN are trained independently, using a 4-step training algorithm that learns shared features through alternating optimization; step 1: train the RPN as above, initialized with an ImageNet-pretrained model and fine-tuned end to end for the region proposal task; step 2: train a separate detection network with Fast R-CNN using the proposals generated by the step-1 RPN, likewise initialized with an ImageNet-pretrained model; at this point the two networks do not yet share convolutional layers; step 3: re-initialize RPN training from the detection network, but fix the shared convolutional layers and fine-tune only the layers exclusive to the RPN; the two networks now share convolutional layers; step 4: keeping the shared convolutional layers fixed, fine-tune the fc (fully connected) layers of Fast R-CNN; the two networks then share the same convolutional layers and form a unified network;
    through the processing of the above two networks, the vehicle target objects and plate target objects in a video frame are detected and bounded, giving their sizes and spatial positions: r_v, c_v is the top-left coordinate of a vehicle target object in the image and h_v, w_v are its projected height and width in the image plane; r_p, c_p is the top-left coordinate of a plate in the image and h_p, w_p are its projected height and width in the image plane;
    the Faster R-CNN network makes use of the progressive cascade between tasks, in order: precise vehicle localization; vehicle type, brand and series identification; precise plate localization; plate recognition and plate color identification; color difference detection; vehicle color correction; color identification.
  4. The vehicle color identification system based on a multi-task deep convolutional neural network as claimed in one of claims 1 to 3, characterized in that: the plate background color identification module is used to process the license plate image to obtain the plate background color under the prevailing environmental conditions; a gray-level histogram of the plate image is computed, whose peaks and valleys separate the characters from the inter-character regions of the plate, i.e. the plate background; the RGB color components of the pixels in the background interval are averaged, finally giving the plate background color under the prevailing environmental conditions.
  5. The vehicle color identification system based on a multi-task deep convolutional neural network as claimed in claim 4, characterized in that: the color difference detection module is used to compare the plate background color specified by the national standard with the plate background color under the prevailing environmental conditions, yielding the color difference under those conditions; first, the detected plate background color is compared against the several plate background color types specified by the national standard, and the closest national-standard plate background color is taken as the plate background color under standard illumination; the color difference is computed in the CIE 1976 Lab color space; to convert quickly from RGB to Lab, a fast conversion is used, as shown in formula (5):
    $$\begin{cases} L = Y_1 = 0.2126 \times R + 0.7152 \times G + 0.0722 \times B \\ a = 1.4749 \times (0.2213 \times R - 0.3390 \times G + 0.1177 \times B) + 128 \\ b = 0.6245 \times (0.1949 \times R + 0.6057 \times G - 0.8006 \times B) + 128 \end{cases} \qquad (5)$$
    where R, G, B are the color components of the RGB color space, L is the lightness component of the CIE 1976 Lab color space, and a and b are its chromatic components;
    the national-standard plate background color and the plate background color under the prevailing environmental conditions are both converted by formula (5) to obtain their respective L, a, b values; L_NP and L_RP are the lightness values of the national-standard plate background color and of the plate background color under the prevailing conditions, and a_NP, a_RP, b_NP, b_RP are the corresponding chromaticities; the color difference ΔE_ab between the two is computed as the CIE 1976 Lab color difference by formula (6):
    $$\Delta E_{ab} = \sqrt{\Delta L^2 + \Delta a^2 + \Delta b^2} \qquad (6)$$
    where ΔL = L_NP − L_RP is the lightness difference, Δa = a_NP − a_RP and Δb = b_NP − b_RP are the chromaticity differences, and ΔE_ab is the color difference in NBS units.
  6. The vehicle color identification system based on a multi-task deep convolutional neural network as claimed in claim 5, characterized in that: the vehicle color correction module is used to correct the vehicle color under the prevailing environmental conditions according to the detected color difference, obtaining the vehicle color image under the ideal conditions of the national standard; first, it is judged whether the color difference ΔE_ab exceeds a preset threshold; if it does, vehicle color correction is carried out, computed as shown in formula (7):
    L_NM = L_RP + ΔL
    a_NM = a_RP + Δa
    b_NM = b_RP + Δb        (7)
    In the formula, L_NM is the lightness of the vehicle color under the ideal statutory environment, L_RP is the lightness of the vehicle color under the current environmental conditions, ΔL is the lightness difference between the statutory plate background color and the plate background color under the current environmental conditions, a_NM and b_NM are the chromaticities of the vehicle color under the ideal statutory environment, a_RP and b_RP are the chromaticities of the vehicle color under the current environmental conditions, and Δa and Δb are the chromaticity differences between the statutory plate background color and the plate background color under the current environmental conditions;
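The threshold test and formula (7) together can be sketched as follows. The threshold value of 10 is an illustrative assumption; the claim requires a threshold but does not state a number:

```python
import math

DELTA_E_THRESHOLD = 10.0  # illustrative value; the claim does not fix one

def correct_vehicle_color(lab_rp, d_l, d_a, d_b):
    """Formula (7): when the plate color difference exceeds the threshold,
    shift the observed vehicle color (L_RP, a_RP, b_RP) by the
    plate-derived offsets (dL, da, db) to estimate its appearance under
    the statutory standard environment."""
    if math.sqrt(d_l ** 2 + d_a ** 2 + d_b ** 2) <= DELTA_E_THRESHOLD:
        return lab_rp  # within tolerance: no correction applied
    l_rp, a_rp, b_rp = lab_rp
    return (l_rp + d_l, a_rp + d_a, b_rp + d_b)
```

The key design point of the claim is that the offsets come from the plate background color, whose statutory value is known, so the same shift can be applied to the unknown vehicle body color.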
    Further, the corrected vehicle color is transformed back from the Lab color space to the RGB color space, as shown in formula (8);
    In the formula, R, G, B are the color components in the RGB color space, L is the lightness component of the CIE1976 Lab color space, and a and b are its chromaticity components;
    The equation group of formula (8) is an optimized form: floating-point operations are converted to ordinary integer multiplication and bit shifting, the shift being written as div2^23 in the formula, i.e. a right shift by 23 bits; the ranges of RGB and Lab in the formula are both [0, 255], and the RGB values of the vehicle color under the ideal statutory environment are then obtained through an inverse gamma function.
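Formula (8) itself is not reproduced in this excerpt, but the described optimization — replacing each floating-point multiply with an integer multiply followed by a 23-bit right shift (the div2^23 notation) — can be illustrated generically. The coefficient 0.7152 is reused from formula (5) purely as an example:

```python
SHIFT = 23  # "div2^23" in the patent text: discard 23 fractional bits

def to_q23(coeff):
    """Encode a floating-point coefficient as a Q23 fixed-point integer."""
    return round(coeff * (1 << SHIFT))

def q23_mul(coeff_q23, x):
    """Integer multiply then right shift by 23 -- the integer
    equivalent of coeff * x, as formula (8) is said to do."""
    return (coeff_q23 * x) >> SHIFT

c = to_q23(0.7152)
print(q23_mul(c, 200))  # prints 143, close to 0.7152 * 200 = 143.04
```

Precomputing the Q23 coefficients once lets the whole Lab-to-RGB transform run in integer arithmetic, which is the point of the optimization the claim describes.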
  7. The vehicle color recognition system based on a multi-task deep convolutional neural network as claimed in one of claims 1 to 3, characterized in that: the vehicle color recognition module is configured to identify the corrected vehicle color; in order to share the Faster R-CNN deep convolutional neural network effectively, the color-corrected vehicle images are given the corresponding color labels for training; when recognizing a vehicle's color, the processing steps of precise vehicle localization, precise license-plate localization, plate background color recognition, color difference detection and vehicle color correction are carried out, and finally the vehicle color is recognized by the Faster R-CNN deep convolutional neural network; passing the color-corrected vehicle image through the Faster R-CNN network thus yields the vehicle color under standard illumination conditions.
CN201710558817.4A 2017-07-11 2017-07-11 Vehicle color recognition system based on multitask deep convolution neural network Active CN107729801B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710558817.4A CN107729801B (en) 2017-07-11 2017-07-11 Vehicle color recognition system based on multitask deep convolution neural network


Publications (2)

Publication Number Publication Date
CN107729801A true CN107729801A (en) 2018-02-23
CN107729801B CN107729801B (en) 2020-12-18

Family

ID=61201088

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710558817.4A Active CN107729801B (en) 2017-07-11 2017-07-11 Vehicle color recognition system based on multitask deep convolution neural network

Country Status (1)

Country Link
CN (1) CN107729801B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104134067A (en) * 2014-07-07 2014-11-05 河海大学常州校区 Road vehicle monitoring system based on intelligent visual Internet of Things
CN105046196A (en) * 2015-06-11 2015-11-11 西安电子科技大学 Front vehicle information structured output method base on concatenated convolutional neural networks
CN105354572A (en) * 2015-12-10 2016-02-24 苏州大学 Automatic identification system of number plate on the basis of simplified convolutional neural network


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ITEYE_9281: "Conversion from RGB to Lab Color Space", BLOG.CSDN.NET/ITEYE_9281/ARTICLE/DETAILS/81643572 *
SHAOQING REN et al.: "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", ARXIV:1506.01497V3 *
REN YUTAO: "Research on Recognition Technology for Key Vehicle-Front Information", China Master's Theses Full-Text Database, Information Science & Technology II *

Cited By (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108389220A (en) * 2018-02-05 2018-08-10 湖南航升卫星科技有限公司 Remote sensing video image motion target real-time intelligent cognitive method and its device
CN108416377B (en) * 2018-02-26 2021-12-10 阿博茨德(北京)科技有限公司 Information extraction method and device in histogram
CN108399386A (en) * 2018-02-26 2018-08-14 阿博茨德(北京)科技有限公司 Information extracting method in pie chart and device
CN108416377A (en) * 2018-02-26 2018-08-17 阿博茨德(北京)科技有限公司 Information extracting method in block diagram and device
CN108399386B (en) * 2018-02-26 2022-02-08 阿博茨德(北京)科技有限公司 Method and device for extracting information in pie chart
CN108509978A (en) * 2018-02-28 2018-09-07 中南大学 The multi-class targets detection method and model of multi-stage characteristics fusion based on CNN
CN108509978B (en) * 2018-02-28 2022-06-07 中南大学 Multi-class target detection method and model based on CNN (CNN) multi-level feature fusion
CN108334955A (en) * 2018-03-01 2018-07-27 福州大学 Copy of ID Card detection method based on Faster-RCNN
CN108615049A (en) * 2018-04-09 2018-10-02 华中科技大学 A kind of vehicle part detection model compression method and system
CN108564088A (en) * 2018-04-17 2018-09-21 广东工业大学 Licence plate recognition method, device, equipment and readable storage medium storing program for executing
CN108734219B (en) * 2018-05-23 2022-02-01 北京航空航天大学 End-to-end collision pit detection and identification method based on full convolution neural network structure
CN108734219A (en) * 2018-05-23 2018-11-02 北京航空航天大学 A kind of detection of end-to-end impact crater and recognition methods based on full convolutional neural networks structure
CN108846343A (en) * 2018-06-05 2018-11-20 北京邮电大学 Multi-task collaborative analysis method based on three-dimensional video
CN108830908A (en) * 2018-06-15 2018-11-16 天津大学 A kind of magic square color identification method based on artificial neural network
CN110738225A (en) * 2018-07-19 2020-01-31 杭州海康威视数字技术股份有限公司 Image recognition method and device
CN109241321A (en) * 2018-07-19 2019-01-18 杭州电子科技大学 The image and model conjoint analysis method adapted to based on depth field
CN109145759A (en) * 2018-07-25 2019-01-04 腾讯科技(深圳)有限公司 Vehicle attribute recognition methods, device, server and storage medium
CN109145798B (en) * 2018-08-13 2021-10-22 浙江零跑科技股份有限公司 Driving scene target identification and travelable region segmentation integration method
CN109145798A (en) * 2018-08-13 2019-01-04 浙江零跑科技有限公司 A kind of Driving Scene target identification and travelable region segmentation integrated approach
CN109447064B (en) * 2018-10-09 2019-07-30 温州大学 A kind of duplicate rows License Plate Segmentation method and system based on CNN
CN109447064A (en) * 2018-10-09 2019-03-08 温州大学 A kind of duplicate rows License Plate Segmentation method and system based on CNN
CN109583305A (en) * 2018-10-30 2019-04-05 南昌大学 A kind of advanced method that the vehicle based on critical component identification and fine grit classification identifies again
CN109583305B (en) * 2018-10-30 2022-05-20 南昌大学 Advanced vehicle re-identification method based on key component identification and fine-grained classification
CN109543753B (en) * 2018-11-23 2024-01-05 中山大学 License plate recognition method based on self-adaptive fuzzy repair mechanism
CN109543753A (en) * 2018-11-23 2019-03-29 中山大学 Licence plate recognition method based on adaptive fuzzy repair mechanism
CN109657590A (en) * 2018-12-11 2019-04-19 合刃科技(武汉)有限公司 A kind of method, apparatus and storage medium detecting information of vehicles
CN111767915A (en) * 2019-04-02 2020-10-13 顺丰科技有限公司 License plate detection method, device, equipment and storage medium
CN109961057A (en) * 2019-04-03 2019-07-02 罗克佳华科技集团股份有限公司 A kind of vehicle location preparation method and device
CN109961057B (en) * 2019-04-03 2021-09-03 罗克佳华科技集团股份有限公司 Vehicle position obtaining method and device
CN110047102A (en) * 2019-04-18 2019-07-23 北京字节跳动网络技术有限公司 Methods, devices and systems for output information
CN110084190B (en) * 2019-04-25 2024-02-06 南开大学 Real-time unstructured road detection method under severe illumination environment based on ANN
CN110084190A (en) * 2019-04-25 2019-08-02 南开大学 Unstructured road detection method in real time under a kind of violent light environment based on ANN
CN110222604B (en) * 2019-05-23 2023-07-28 复钧智能科技(苏州)有限公司 Target identification method and device based on shared convolutional neural network
CN110222604A (en) * 2019-05-23 2019-09-10 复钧智能科技(苏州)有限公司 Target identification method and device based on shared convolutional neural networks
CN110334709A (en) * 2019-07-09 2019-10-15 西北工业大学 Detection method of license plate based on end-to-end multitask deep learning
CN110751155A (en) * 2019-10-14 2020-02-04 西北工业大学 Novel target detection method based on Faster R-CNN
CN111444911A (en) * 2019-12-13 2020-07-24 珠海大横琴科技发展有限公司 Training method and device of license plate recognition model and license plate recognition method and device
CN111444911B (en) * 2019-12-13 2021-02-26 珠海大横琴科技发展有限公司 Training method and device of license plate recognition model and license plate recognition method and device
CN111126515A (en) * 2020-03-30 2020-05-08 腾讯科技(深圳)有限公司 Model training method based on artificial intelligence and related device
CN111507210B (en) * 2020-03-31 2023-11-21 华为技术有限公司 Traffic signal lamp identification method, system, computing equipment and intelligent vehicle
CN111507210A (en) * 2020-03-31 2020-08-07 华为技术有限公司 Traffic signal lamp identification method and system, computing device and intelligent vehicle
CN111860539A (en) * 2020-07-20 2020-10-30 济南博观智能科技有限公司 License plate color recognition method, device and medium
CN111860539B (en) * 2020-07-20 2024-05-10 济南博观智能科技有限公司 License plate color recognition method, device and medium
CN112215245A (en) * 2020-11-05 2021-01-12 中国联合网络通信集团有限公司 Image identification method and device
CN112507801A (en) * 2020-11-14 2021-03-16 武汉中海庭数据技术有限公司 Lane road surface digital color recognition method, speed limit information recognition method and system
CN113239836A (en) * 2021-05-20 2021-08-10 广州广电运通金融电子股份有限公司 Vehicle body color identification method, storage medium and terminal
CN113673467A (en) * 2021-08-30 2021-11-19 武汉长江通信智联技术有限公司 Vehicle color identification method under white light condition
CN113723408B (en) * 2021-11-02 2022-02-25 上海仙工智能科技有限公司 License plate recognition method and system and readable storage medium
CN113723408A (en) * 2021-11-02 2021-11-30 上海仙工智能科技有限公司 License plate recognition method and system and readable storage medium
CN114240929A (en) * 2021-12-28 2022-03-25 季华实验室 Color difference detection method and device

Also Published As

Publication number Publication date
CN107729801B (en) 2020-12-18

Similar Documents

Publication Publication Date Title
CN107729801A (en) A kind of vehicle color identifying system based on multitask depth convolutional neural networks
Björklund et al. Robust license plate recognition using neural networks trained on synthetic images
CN106599773B (en) Deep learning image identification method and system for intelligent driving and terminal equipment
CN103824081B (en) Method for detecting rapid robustness traffic signs on outdoor bad illumination condition
CN109190444B (en) Method for realizing video-based toll lane vehicle feature recognition system
CN108108761A (en) A kind of rapid transit signal lamp detection method based on depth characteristic study
CN109344825A (en) A kind of licence plate recognition method based on convolutional neural networks
CN106326858A (en) Road traffic sign automatic identification and management system based on deep learning
CN108171112A (en) Vehicle identification and tracking based on convolutional neural networks
CN106845487A (en) A kind of licence plate recognition method end to end
CN105005989B (en) A kind of vehicle target dividing method under weak contrast
CN104517103A (en) Traffic sign classification method based on deep neural network
CN106022232A (en) License plate detection method based on deep learning
CN105809138A (en) Road warning mark detection and recognition method based on block recognition
CN105354568A (en) Convolutional neural network based vehicle logo identification method
CN108647700A (en) Multitask vehicle part identification model based on deep learning, method and system
CN105809121A (en) Multi-characteristic synergic traffic sign detection and identification method
CN108304785A (en) Road traffic sign detection based on self-built neural network and recognition methods
Priese et al. Ideogram identification in a realtime traffic sign recognition system
Huang et al. An in-car camera system for traffic sign detection and recognition
CN110348396B (en) Deep learning-based method and device for recognizing character traffic signs above roads
CN112464731B (en) Traffic sign detection and identification method based on image processing
CN111860509A (en) Coarse-to-fine two-stage non-constrained license plate region accurate extraction method
Fleyeh Traffic and road sign recognition
CN109993806A (en) A kind of color identification method, device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 310012 1st floor, building 1, 223 Yile Road, Hangzhou City, Zhejiang Province
Patentee after: Yinjiang Technology Co.,Ltd.
Address before: 310012 1st floor, building 1, 223 Yile Road, Hangzhou City, Zhejiang Province
Patentee before: ENJOYOR Co.,Ltd.