CN105608441A - Vehicle type identification method and system
- Publication number
- CN105608441A CN105608441A CN201610019285.2A CN201610019285A CN105608441A CN 105608441 A CN105608441 A CN 105608441A CN 201610019285 A CN201610019285 A CN 201610019285A CN 105608441 A CN105608441 A CN 105608441A
- Authority
- CN
- China
- Prior art keywords
- region
- feature
- information
- characteristic information
- corner point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention discloses a vehicle type identification method and system. The method comprises a classifier-generation process based on machine training and a judgment process for a picture to be tested. During classifier generation, the image range needed in each training-set picture is determined with reference to the license plate; the determined image range is divided into several regions; feature information is selected in each region; and the feature information selected in each region is fed into machine training to generate one classifier per region. The generated classifiers then perform single-region judgment on the picture to be tested, and the single-region results are combined by multi-region confidence fusion to obtain the vehicle type identification result. The method and system effectively improve the accuracy of vehicle type identification and can recognize detailed information such as the vehicle manufacturer. They are particularly suited to intelligent transportation systems and can provide strong evidence for handling traffic incidents.
Description
Technical field
The present invention relates to the field of vehicle type recognition in video image processing.
Background art
Vehicle type identification is an important component of intelligent transportation systems: it can provide strong evidence for handling traffic incidents and additional support for functions such as vehicle tracking. In the prior art, vehicle type identification falls mainly into two classes: template-based identification and classifier-based identification.
Template-based identification is suitable for rather specific scenes. Chinese patent application No. 201410014474.1 discloses a vehicle type identification method and device for ETC lanes, used in electronic toll collection. Vehicle type identification in that scene only requires that the identified vehicle correspond to a template stored in the system; the environment of the identified vehicle is relatively stable and the distance to the camera is essentially fixed. The method therefore cannot handle vehicle type identification in complex environments.
For this reason, Chinese patent application No. 201210049730.1 discloses a vehicle type identification method for complex scenes based on the classifier approach mentioned above: it mainly selects features and classifies them with a particular machine-learning discrimination method. The choice of features and the classifier produced by machine learning determine the efficiency and accuracy of vehicle type identification. The features and classifiers chosen in that scheme can distinguish, for example, compact cars from large vehicles, or cars from jeeps and taxis from minibuses, but identification of details such as the manufacturer or production period is out of reach.
Summary of the invention
The object of the present invention is to provide a vehicle type identification method and device that can identify precise vehicle attributes such as the manufacturer.
To solve the above technical problem, the present invention adopts the following technical solution. A vehicle type identification method comprises a classifier-generation process based on machine training and a judgment process for a picture to be tested. During classifier generation, the image range needed in each training-set picture is determined based on the license plate; the determined image range is divided into regions; feature information is selected in each region; and the feature information selected in each region is fed into machine training to generate one classifier per region. The generated classifiers perform single-region judgment on the picture to be tested, and the single-region results are combined by multi-region confidence fusion to obtain the vehicle type identification result.
The accuracy of vehicle type identification depends on the extraction and use of image feature information. The present invention first determines the needed image range with the license plate as reference, because license plates are manufactured to a unified standard. Within the determined range, the image is trained and judged region by region: each region trains its own independent classifier, and the single-region results are then combined by multi-region confidence fusion to obtain the vehicle type identification result. The method fully accounts for over-fitting in image recognition and reduces the interference of individual factors on the recognition result.
The invention also discloses a vehicle type identification system comprising a classifier-generation device and a judgment device for the picture to be tested. The classifier-generation device determines the image range needed in each training-set picture based on the license plate, divides the determined image range into regions, selects feature information in each region, and feeds the feature information selected in each region into machine training to generate one classifier per region. The judgment device performs single-region judgment on the picture to be tested with the generated classifiers and combines the single-region results by multi-region confidence fusion to obtain the vehicle type identification result.
With the above technical solution, the present invention has the following advantages. The image is divided into regions, an independent classifier is trained for each region, and the single-region results are combined by multi-region confidence fusion to obtain the recognition result. This effectively increases the accuracy of vehicle type identification and allows detailed information such as the vehicle manufacturer to be recognized. The invention is particularly suited to intelligent transportation systems and can provide strong evidence for handling traffic incidents.
Brief description of the drawings
The specific embodiments of the present invention are further described below with reference to the accompanying drawings:
Fig. 1 is a flow chart of an embodiment of the vehicle type identification method of the present invention;
Fig. 2 is an example of a vehicle image in a video frame;
Fig. 3 is an example of region division in an embodiment of the vehicle type identification method of the present invention;
Fig. 4 is an example of feature information extraction in an embodiment of the vehicle type identification method of the present invention.
Detailed description of the invention
The present invention involves a number of terms from image recognition technology, including corner points, HOG features, SIFT point features, random forests and various algorithms. These terms have well-established definitions in the art and are used directly herein.
Fig. 1 is a flow chart of a preferred embodiment of the present invention. It is divided into two main processes: machine training to generate the classifiers, and judgment of the picture to be tested. Classifier generation involves processing of the training-set pictures, while the judgment process involves processing of the picture to be tested. Both kinds of processing require determining the image range, because the scale of the same physical object differs between pictures and must be normalized. Since vehicle license plates follow a unified production standard, the plate can serve as the reference for determining the image range. Locating the license plate is routine technology and is not described further here.
Referring to Fig. 2, in this embodiment the range of the vehicle front-face image is determined from the license plate position; the vehicle rear can of course be handled in the same way, depending on the actual situation. The total image width is 5 times the plate width and the total image height is 10 times the plate height. Because the plate sits somewhat low on the vehicle front face, the image width is centered on the plate midpoint, while the height is taken with the plate midpoint as the reference point: 7.5 plate heights are kept above it and 2.5 plate heights below it. The image after range determination is as shown in Fig. 2. The range-determined image is then scaled to a uniform size for later processing, preferably 400x200 pixels in this embodiment. As can be seen from Fig. 1, the range is determined in the same way for the training-set pictures and for the picture to be tested, which is a precondition for the subsequent vehicle type judgment.
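A minimal Python sketch of this range determination and normalization, assuming the plate has already been located as an axis-aligned box (x, y, w, h); the helper name and the OpenCV usage are illustrative and not taken from the original disclosure:

```python
import cv2

def crop_by_plate(img, plate_box, out_size=(400, 200)):
    """Crop a vehicle front-face region referenced to the license plate.

    plate_box: (x, y, w, h) of the detected plate in pixels.
    The crop is 5 plate-widths wide and 10 plate-heights tall,
    with 7.5 plate-heights above the plate midpoint and 2.5 below it.
    """
    x, y, w, h = plate_box
    cx, cy = x + w / 2.0, y + h / 2.0          # plate midpoint
    left   = int(round(cx - 2.5 * w))
    right  = int(round(cx + 2.5 * w))
    top    = int(round(cy - 7.5 * h))
    bottom = int(round(cy + 2.5 * h))

    # Pad with black if the crop extends beyond the frame.
    pad_t, pad_l = max(0, -top), max(0, -left)
    pad_b = max(0, bottom - img.shape[0])
    pad_r = max(0, right - img.shape[1])
    padded = cv2.copyMakeBorder(img, pad_t, pad_b, pad_l, pad_r,
                                cv2.BORDER_CONSTANT, value=0)
    crop = padded[top + pad_t:bottom + pad_t, left + pad_l:right + pad_l]

    # Normalize to a common size (400x200 pixels in the embodiment).
    return cv2.resize(crop, out_size)
```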
After the image range has been determined, the image is divided into sub-regions. As shown in Fig. 3, in this embodiment the range-determined image is preferably divided into 12 regions. The region containing the license plate is excluded, because it contributes no information that increases discrimination accuracy for the concrete vehicle type, while the corner points on the plate would introduce unnecessary interference; this part of the image is therefore set aside. Referring again to Fig. 3, the areas immediately to both sides of the plate are excluded for the same reason.
It should be emphasized that the region division could be identical for the training-set pictures and the picture to be tested. However, because abrupt changes of information at region boundaries would affect the subsequent judgment, the preferred arrangement in this embodiment is that the image regions of the training-set pictures are divided differently from those of the picture to be tested. As shown in Fig. 3, the image regions of a training-set picture partially overlap one another, whereas the regions of a picture to be tested do not. For example, the two solid lines a and b together with two outer edges form region 1 of the test-picture division, while the two dashed lines A and B together with two outer edges form region 1 of the training-set division; solid lines a, b and d together with the uppermost edge form region 2 of the test-picture division, and dashed lines B, C and D together with the uppermost edge form region 2 of the training-set division. Regions 1 and 2 of the training-set division therefore overlap, and each training-set region is larger than, and completely contains, the corresponding test-picture region. Preferably it is wider by 20-25 pixels per side (e.g. the pixel difference between A and a) and taller by 15-20 pixels per side (e.g. the pixel difference between B and b), as in the sketch below.
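A sketch of one way to derive the enlarged, overlapping training-set regions from the non-overlapping test-picture regions, assuming each region is stored as an axis-aligned box; the margins chosen and the helper names are illustrative:

```python
def expand_region(box, img_w=400, img_h=200, dx=22, dy=17):
    """Enlarge a test-picture region box for training-set pictures.

    box: (x0, y0, x1, y1) of a non-overlapping test region.
    dx, dy: extra margin per side (20-25 px in width, 15-20 px in
    height in the embodiment), clipped to the image border.
    """
    x0, y0, x1, y1 = box
    return (max(0, x0 - dx), max(0, y0 - dy),
            min(img_w, x1 + dx), min(img_h, y1 + dy))

# Example: derive the 12 overlapping training regions from an assumed
# list of 12 non-overlapping (x0, y0, x1, y1) test-region boxes.
# train_regions = [expand_region(b) for b in test_regions]
```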
Having determined the image range and divided the regions, feature information is selected and extracted in each divided region. It should first be made clear that the region-division step, and the concrete way it is carried out, exist to make the subsequent vehicle type identification more accurate and effective. In Fig. 3 the 12 regions are laid out according to how concentrated the information effective for identification is, so the regions can be distinguished as feature-sparse regions, feature-dense regions and normal regions, the size and position of each region being determined by that concentration of effective information. The 12-region division is only a comparatively preferred arrangement: in principle a finer division benefits the subsequent identification, but an excessively fine division also reduces identification efficiency. As shown in Fig. 3, regions 1, 3 and 11 are feature-sparse regions, for which 8 corner points are preferably detected; regions 5 and 8 are feature-dense regions, for which 12 corner points are preferably detected; the remaining regions are normal regions, for which 10 corner points are preferably detected. As described below, the different numbers of selected corner points lead to different amounts of feature information, so that a physically feature-dense region plays a larger role in the subsequent identification, which increases accuracy. The FAST corner detection algorithm can be used for the concrete corner detection.
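A sketch of per-region FAST corner detection that keeps a fixed number of strongest corners per region, assuming OpenCV and region boxes as in the previous sketch; the per-region corner budgets (8/10/12) follow the embodiment, the rest is illustrative:

```python
import cv2

def detect_region_corners(gray, box, n_corners):
    """Detect FAST corners inside one region and keep the n strongest.

    gray: full grayscale picture (after range normalization).
    box:  (x0, y0, x1, y1) region box.
    Returns corner coordinates in full-image coordinates.
    """
    x0, y0, x1, y1 = box
    roi = gray[y0:y1, x0:x1]
    fast = cv2.FastFeatureDetector_create()
    kps = fast.detect(roi, None)
    # Keep the strongest responses; the embodiment uses 8, 10 or 12
    # corners for sparse, normal and dense regions respectively.
    kps = sorted(kps, key=lambda k: k.response, reverse=True)[:n_corners]
    return [(x0 + kp.pt[0], y0 + kp.pt[1]) for kp in kps]
```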
Next comes the selection and extraction of feature information in each region. The feature information in this embodiment is a fusion of three parts: position information, block feature information and point feature information. For efficiency, the corner points are first grouped: the corner points within a region can be combined arbitrarily, and each group of several matched corner points is treated as one unit. Referring to Fig. 4, in a preferred embodiment the corner points of each region are uniformly grouped three at a time. Taking the first region with its 8 corner points as an example, any three of them form one group, giving 56 combinations in total (8*7*6/(1*2*3)=56), and feature information is extracted for each three-point combination as shown in Fig. 4.
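A minimal sketch of the grouping step, simply enumerating all three-point combinations of a region's corners; the function name is illustrative:

```python
from itertools import combinations

def corner_triplets(corners):
    """All unordered 3-point groups of a region's corner points.

    For 8 corners this yields C(8,3) = 56 groups, matching the
    example of the first region in the embodiment.
    """
    return list(combinations(corners, 3))
```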
The position information comprises relative position information and absolute position information. Referring to Fig. 4 and again taking the first region as an example, let M denote the license plate center and A, B, C the three points detected in the region. First the distances from the three points to M are compared and the points are ordered by that distance: AM is the largest, BM the next, CM the smallest, so the ordering is A, B, C. For the absolute position information, M is taken as the origin of a two-dimensional Cartesian coordinate system (upper right the first quadrant, upper left the second, lower left the third, lower right the fourth); the x and y coordinates of A, B and C and their distances to the origin are extracted in turn, written AMx, AMy and |AM| for point A and analogously for B and C, giving 9 values in total. For the relative position information, C is taken as the reference point and the coordinate displacements of A and B relative to C are extracted (x and y axes, 4 variables in total), computed as (CMx-AMx), (CMy-AMy), (CMx-BMx), (CMy-BMy). Two further scale parameters are computed as (|AB|/|BC|) and (|AC|/|BC|); these ratios are more stable than other position information and more robust to scaling and rotation. The relative position information thus has 6 dimensions, which together with the 9 dimensions of absolute position information gives 15 dimensions of position information in total.
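A sketch of the 15-dimensional position feature for one three-corner group, following the ordering and quantities described above; the function and argument names are illustrative, and the sign convention of the relative displacements follows the formulas quoted above:

```python
import numpy as np

def position_feature(group, plate_center):
    """15-dim position feature of one 3-corner group.

    group: three (x, y) corner coordinates; plate_center: (x, y) of M.
    """
    M = np.asarray(plate_center, dtype=float)
    pts = [np.asarray(p, dtype=float) - M for p in group]  # coords w.r.t. M
    # Order points by distance to the plate center, farthest first -> A, B, C.
    A, B, C = sorted(pts, key=np.linalg.norm, reverse=True)

    absolute = [A[0], A[1], np.linalg.norm(A),
                B[0], B[1], np.linalg.norm(B),
                C[0], C[1], np.linalg.norm(C)]          # 9 values
    relative = [C[0] - A[0], C[1] - A[1],               # CMx-AMx, CMy-AMy
                C[0] - B[0], C[1] - B[1]]               # CMx-BMx, CMy-BMy
    scales = [np.linalg.norm(A - B) / np.linalg.norm(B - C),   # |AB|/|BC|
              np.linalg.norm(A - C) / np.linalg.norm(B - C)]   # |AC|/|BC|
    return np.array(absolute + relative + scales)       # 15 dims
```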
Block feature information is extracted next. The most critical aspect of block matching is alignment: two blocks must point to roughly the same region. Here the point positions are used to constrain the block position, which guarantees the alignment of the block matching well. Concretely, the maxima and minima of the x and y coordinates of A, B and C are computed and written Hx, Lx (maximum and minimum in x) and Hy, Ly (maximum and minimum in y); these 4 values fully determine a rectangular box. This rectangle is uniformly scaled to 24x24 pixels and a HOG feature is then extracted from it (the standard HOG extraction method can be used, yielding a 64-dimensional feature).
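A sketch of the block feature under one assumed HOG parameterization that yields 64 dimensions on a 24x24 patch (6x6 cells, 4 orientation bins, no block overlap); the original only specifies "standard HOG, 64 dimensions", so these exact parameters and the names are an assumption:

```python
import cv2

# 24x24 window, 6x6 cells, 4 bins -> 16 cells * 4 bins = 64 dims.
_HOG = cv2.HOGDescriptor((24, 24), (6, 6), (6, 6), (6, 6), 4)

def block_feature(gray, group):
    """64-dim HOG feature of the rectangle spanned by a 3-corner group."""
    xs = [p[0] for p in group]
    ys = [p[1] for p in group]
    lx, hx = int(min(xs)), int(max(xs)) + 1   # Lx, Hx
    ly, hy = int(min(ys)), int(max(ys)) + 1   # Ly, Hy
    patch = cv2.resize(gray[ly:hy, lx:hx], (24, 24))
    return _HOG.compute(patch).ravel()        # shape (64,)
```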
Finally, the SIFT point features of A, B and C are extracted in turn (128 dimensions each; the standard SIFT feature extraction method can likewise be used).
The position information, block feature information and point feature information are combined into the complete feature information, whose total dimensionality is 463 (15+64+128*3=463). The SIFT point features account for a relatively large share, so PCA (Principal Component Analysis) can be used to reduce them from 384 to 256 dimensions, making the total feature dimensionality 335 (15+64+256=335). Repeated experiments show that this reduction of the SIFT features has almost no effect on the overall accuracy but improves training efficiency. The feature selection and extraction in the other regions follow the same principle as in the first region; only the number of corner points may differ. The feature extraction is the same for training-set pictures and for pictures to be tested. By fusing triangular-structure position information referenced to the plate center, HOG features of rectangular blocks constrained by the triangle points, and SIFT point features, this embodiment makes full use of each kind of feature information and effectively improves the subsequent vehicle type recognition accuracy.
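A sketch assembling the 463-dimensional raw feature for one corner group and reducing the SIFT part with PCA fitted on the training set. It reuses the position_feature and block_feature helpers sketched above; scikit-learn's PCA stands in for the PCA step, and the keypoint size of 8 pixels, the function names and the dimension indices are assumptions:

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA

_SIFT = cv2.SIFT_create()

def point_feature(gray, group):
    """3 x 128 SIFT descriptors at the group's corner positions (384 dims)."""
    # group is assumed already ordered A, B, C (farthest to nearest from M).
    kps = [cv2.KeyPoint(float(x), float(y), 8) for x, y in group]
    _, desc = _SIFT.compute(gray, kps)
    return desc.ravel()                       # shape (384,)

def raw_feature(gray, group, plate_center):
    """15 + 64 + 384 = 463-dim raw feature of one 3-corner group."""
    return np.concatenate([position_feature(group, plate_center),
                           block_feature(gray, group),
                           point_feature(gray, group)])

# PCA is fitted once on the SIFT part of all training groups and reused
# for test pictures, reducing 384 -> 256 and the total to 335 dims.
def fit_sift_pca(train_raw_features):
    pca = PCA(n_components=256)
    pca.fit(np.asarray(train_raw_features)[:, 79:])   # SIFT part: dims 79..462
    return pca

def reduce_feature(raw, pca):
    return np.concatenate([raw[:79],
                           pca.transform(raw[79:].reshape(1, -1))[0]])
```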
After the feature information has been selected and extracted, training to generate the classifiers begins. In this embodiment the feature information is preferably fed into a random forest for training. A random forest can handle high-dimensional features, its randomness makes it less prone to over-fitting, it can handle discrete and continuous variables simultaneously, and training is comparatively fast. It must be emphasized that this embodiment judges per region: there are 12 regions in total, the feature information of each region is fed into its own random forest, and 12 random forest classifiers are finally produced. Because there are many vehicle classes, this embodiment preferably uses 200 decision trees; a number between 200 and 400 can be chosen according to actual demand, since beyond 400 trees the recognition rate hardly improves further and only efficiency drops. The decision trees use the C4.5 algorithm to select the best attribute and pessimistic pruning to prevent over-fitting. Each tree uses 25 randomly selected feature dimensions, and the maximum depth of a single tree is 30 levels.
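A sketch of the per-region training with scikit-learn's RandomForestClassifier as a stand-in; note this is only an approximation of the embodiment, since scikit-learn trees use CART-style splitting rather than C4.5 with pessimistic pruning, and the function and argument names are illustrative:

```python
from sklearn.ensemble import RandomForestClassifier

def train_region_classifiers(region_features, region_labels, n_regions=12):
    """Train one random forest per region.

    region_features[k]: array of feature vectors from region k
    (one row per 3-corner group of every training picture);
    region_labels[k]: the vehicle-class label of each row.
    """
    forests = []
    for k in range(n_regions):
        rf = RandomForestClassifier(
            n_estimators=200,        # 200 trees
            criterion="entropy",     # information-based splitting, nearest to C4.5
            max_features=25,         # 25 dims sampled per split (the embodiment
                                     # samples 25 dims per tree; per-split sampling
                                     # is scikit-learn's closest analogue)
            max_depth=30,            # depth limit of 30 levels
            n_jobs=-1)
        rf.fit(region_features[k], region_labels[k])
        forests.append(rf)
    return forests
```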
As described above, the feature information fed into the random forest has three parts, and each part differs in importance for vehicle type identification. Position information is the most stable and is robust to scaling and rotation, so its importance is higher than that of block feature information and point feature information; practical tests further show that block features discriminate slightly better than point features, so block feature information is in turn more important than point feature information. Feature selection in a random forest, however, is completely random. To bias the selection towards the more important feature information without destroying the randomness of the random forest, this embodiment adopts a weighted random feature selection: the position information, block features and point features are weighted in a fixed ratio, with the weights of position information, block feature information and point feature information decreasing in that order. One preferred choice is that the position weight is at least 2 times the block-feature weight and the block-feature weight at least 1.5 times the point-feature weight; another preferred choice is 4 to 9 parts for position information, 1.5 to 4 parts for block feature information and 1 to 1.5 parts for point features. In this embodiment the weight ratio of position, block and point feature information is 5:2:1: the 15 position dimensions are multiplied into 75 dimensions (new dimensions 0-4 correspond to original dimension 0, dimensions 5-9 to original dimension 1, and so on); the block feature is multiplied into 128 dimensions (new dimensions 0-1 correspond to original dimension 0, dimensions 2-3 to original dimension 1, and so on); the 256 point-feature dimensions remain unchanged. When the random forest selects features, the search range thus grows from the original 335 dimensions (0-334) to 459 dimensions (0-458), while the search algorithm still uses completely random numbers. This preserves the randomness of feature selection while raising the probability that the more important features are selected, which effectively improves the discrimination of vehicle types.
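A sketch of the weighted random selection implemented, as described, by replicating the position and block dimensions before training so that a uniformly random dimension choice favours them 5:2:1; the function and argument names are illustrative. If combined with the earlier training sketch, these 459-dimensional vectors would be what is fed to each region's forest:

```python
import numpy as np

def weight_by_duplication(feat, w_pos=5, w_block=2, w_point=1):
    """Expand a 335-dim feature (15 pos + 64 block + 256 point) so that
    uniformly random dimension selection is biased w_pos:w_block:w_point.

    With 5:2:1 this yields 15*5 + 64*2 + 256*1 = 459 dimensions (0-458).
    """
    pos, block, point = feat[:15], feat[15:79], feat[79:]
    return np.concatenate([np.repeat(pos, w_pos),      # dims 0-74
                           np.repeat(block, w_block),  # dims 75-202
                           np.repeat(point, w_point)]) # dims 203-458
```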
After the 12 random forest classifiers have been generated and the feature information of the picture to be tested has been selected and extracted, each region of the picture to be tested is first judged separately: the features extracted from each region of the test picture enter the classifier of the corresponding region. Ordinarily a random forest outputs only one result per judgment, namely the vehicle class that receives the most decision trees. However, since the features used here are local, several vehicle classes may in principle share similar local features, and taking only the class with the most trees while discarding the rest could lose effective information. This embodiment therefore has each random forest output the top 3 to 5 results by comprehensive matching score, preferably 5. Finally, multi-region confidence fusion is carried out: the matching results of all regions are fused into a confidence judgment. The judgment formula, which appears as an image in the original publication, uses the following quantities:
If a vehicle class C has comprehensive matching rank Kr in region K (Kr ranging from 1 to 5), the number of corner points obtained in region K is Ks, n is the total number of divided regions (12 in this embodiment) and m is the number of corner points per group (3 in this embodiment), then the vehicle class with the highest confidence ratio is the vehicle type identification result.
The denominator in the above formula acts as a normalization constant, ensuring that the confidences of all candidate classes appearing in this judgment sum to 1. The class with the highest confidence computed by the formula is taken as the vehicle type judgment result; since the judgment may still contain errors, the top 5 classes by confidence can also all be presented to the user. In this way detailed vehicle information such as the manufacturer and production year can be identified accurately.
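The exact fusion formula is not reproduced above, so the sketch below uses one plausible instantiation that is consistent with the quantities described (per-region rank Kr, per-region corner count Ks, and a normalizing denominator so candidate confidences sum to 1): each region votes for its top-ranked classes with a weight that grows with the region's corner count and shrinks with the rank. This scoring is an illustrative assumption, not the formula of the original disclosure, and all names are hypothetical:

```python
from collections import defaultdict

def fuse_confidences(region_rankings, region_corner_counts, top_r=5):
    """Multi-region confidence fusion (assumed scoring, see above).

    region_rankings[k]: ordered list of the top candidate classes
    output by region k's classifier (best first).
    region_corner_counts[k]: Ks, the corner count of region k.
    Returns (class, confidence) pairs, best first, confidences summing to 1.
    """
    scores = defaultdict(float)
    for ranking, ks in zip(region_rankings, region_corner_counts):
        for rank, cls in enumerate(ranking[:top_r], start=1):   # Kr = rank
            # More corners and a better rank give a larger contribution.
            scores[cls] += ks * (top_r + 1 - rank)
    total = sum(scores.values())                 # normalizing denominator
    fused = {cls: s / total for cls, s in scores.items()}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
```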
Corresponding to the vehicle type identification method, this embodiment also discloses a vehicle type identification system. The classifier-generation device comprises a region-division device: when dividing the determined image range into regions it excludes the license plate area, and the resulting regions comprise feature-sparse regions, feature-dense regions and normal regions, where a feature-dense region selects more feature information than a normal region and a normal region selects more feature information than a feature-sparse region.
The classifier-generation device comprises a feature-selection device that selects corner points in each region, where a feature-dense region selects more corner points than a normal region and a normal region selects more than a feature-sparse region. The corner points within a region are grouped arbitrarily, each group comprising several matched corner points. The feature information of each region comprises position information, block feature information and point feature information: the position information is the relative position information among the corner points of a group and the absolute position information of each corner point with respect to the license plate center; the block feature information is the HOG feature extracted from the rectangular box enclosing the corner points of a group; the point feature information is the SIFT point features of a group's corner points ordered by their position relation. The judgment device comprises a single-region judgment device that divides the picture to be tested into regions in a manner corresponding to the region division of the training-set pictures, then selects the feature information of each region of the test picture in a manner corresponding to the feature selection of the training-set pictures, and performs single-region judgment on the picture to be tested with the classifiers.
The judgment device comprises a multi-region confidence judgment device that performs the multi-region confidence judgment according to the matching results of the regions; the judgment formula uses the following quantities:
If a vehicle class C has comprehensive matching rank Kr in region K, the number of corner points obtained in region K is Ks, n is the total number of divided regions and m is the number of corner points per group, then the vehicle class with the highest confidence ratio is the vehicle type identification result.
Besides the preferred embodiments above, the present invention has other embodiments. Those skilled in the art can make various changes and variations according to the present invention; as long as they do not depart from the spirit of the present invention, they shall all fall within the scope defined by the appended claims.
Claims (12)
1. A vehicle type identification method comprising a classifier-generation process based on machine training and a judgment process for a picture to be tested, characterized in that: in the classifier-generation process, the image range needed in each training-set picture is determined based on the license plate, the determined image range is divided into regions, feature information is selected in each region, and the feature information selected in each region is fed into machine training to generate one classifier per region; single-region judgment is performed on the picture to be tested with the generated classifiers, and the vehicle type identification result is obtained from the single-region judgment results by multi-region confidence fusion judgment.
2. The vehicle type identification method according to claim 1, characterized in that: when the determined image range is divided into regions, the license plate area is excluded; the resulting regions comprise feature-sparse regions, feature-dense regions and normal regions, where a feature-dense region selects more feature information than a normal region and a normal region selects more feature information than a feature-sparse region.
3. The vehicle type identification method according to claim 2, characterized in that: corner points are selected in each region, where a feature-dense region selects more corner points than a normal region and a normal region selects more than a feature-sparse region; the corner points within a region are grouped arbitrarily, each group comprising several matched corner points; the feature information of each region comprises position information, block feature information and point feature information, the position information being the relative position information among the corner points of a group and the absolute position information of each corner point with respect to the license plate center, the block feature information being the HOG feature extracted from the rectangular box enclosing the corner points of a group, and the point feature information being the SIFT point features of a group's corner points ordered by their position relation.
4. The vehicle type identification method according to claim 3, characterized in that: the picture to be tested is divided into regions in a manner corresponding to the region division of the training-set pictures; after the region division, the feature information of each region of the test picture is selected in a manner corresponding to the feature selection of the training-set pictures, and single-region judgment is performed on the picture to be tested with the classifiers.
5. The vehicle type identification method according to claim 4, characterized in that: the feature information is fed into a random forest for training to generate the classifiers, and a weighted random feature selection method is adopted, wherein the position information, block feature information and point feature information are adjusted according to preset weight ratios, with successively decreasing weights, before forming the complete feature information.
6. The vehicle type identification method according to claim 5, characterized in that: when single-region judgment is performed on the picture to be tested, the top 3 to 5 results by comprehensive matching degree are output for each region, and multi-region confidence fusion judgment is performed according to the matching results of the regions; the judgment formula uses the following quantities:
If a vehicle class C has comprehensive matching rank Kr in region K, the number of corner points obtained in region K is Ks, n is the total number of divided regions and m is the number of corner points per group, then the vehicle class with the highest confidence ratio is the vehicle type identification result.
7. The vehicle type identification method according to claim 5, characterized in that: each sub-region divided for the training-set pictures is larger than the corresponding sub-region divided for the picture to be tested.
8. A vehicle type identification system comprising a classifier-generation device and a judgment device for the picture to be tested, characterized in that:
the classifier-generation device determines the image range needed in each training-set picture based on the license plate, divides the delimited image range into regions, selects feature information in each region, and feeds the feature information selected in each region into machine training to generate one classifier per region;
the judgment device performs single-region judgment on the picture to be tested with the generated classifiers and obtains the vehicle type identification result from the single-region judgment results by multi-region confidence fusion judgment.
9. The vehicle type identification system according to claim 8, characterized in that: the classifier-generation device comprises a region-division device that excludes the license plate area when dividing the determined image range into regions; the resulting regions comprise feature-sparse regions, feature-dense regions and normal regions, where a feature-dense region selects more feature information than a normal region and a normal region selects more feature information than a feature-sparse region.
10. The vehicle type identification system according to claim 9, characterized in that: the classifier-generation device comprises a feature-selection device that selects corner points in each region, where a feature-dense region selects more corner points than a normal region and a normal region selects more than a feature-sparse region; the corner points within a region are grouped arbitrarily, each group comprising several matched corner points; the feature information of each region comprises position information, block feature information and point feature information, the position information being the relative position information among the corner points of a group and the absolute position information of each corner point with respect to the license plate center, the block feature information being the HOG feature extracted from the rectangular box enclosing the corner points of a group, and the point feature information being the SIFT point features of a group's corner points ordered by their position relation.
11. The vehicle type identification system according to claim 10, characterized in that: the judgment device comprises a single-region judgment device that divides the picture to be tested into regions in a manner corresponding to the region division of the training-set pictures, then selects the feature information of each region of the test picture in a manner corresponding to the feature selection of the training-set pictures, and performs single-region judgment on the picture to be tested with the classifiers.
12. The vehicle type identification system according to claim 11, characterized in that: the judgment device comprises a multi-region confidence judgment device that performs multi-region confidence judgment according to the matching results of the regions; the judgment formula uses the following quantities:
If a vehicle class C has comprehensive matching rank Kr in region K, the number of corner points obtained in region K is Ks, n is the total number of divided regions and m is the number of corner points per group, then the vehicle class with the highest confidence ratio is the vehicle type identification result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610019285.2A CN105608441B (en) | 2016-01-13 | 2016-01-13 | Vehicle type recognition method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610019285.2A CN105608441B (en) | 2016-01-13 | 2016-01-13 | Vehicle type recognition method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105608441A true CN105608441A (en) | 2016-05-25 |
CN105608441B CN105608441B (en) | 2020-04-10 |
Family
ID=55988367
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610019285.2A Active CN105608441B (en) | 2016-01-13 | 2016-01-13 | Vehicle type recognition method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105608441B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090185723A1 (en) * | 2008-01-21 | 2009-07-23 | Andrew Frederick Kurtz | Enabling persistent recognition of individuals in images |
CN104318225A (en) * | 2014-11-19 | 2015-01-28 | 深圳市捷顺科技实业股份有限公司 | License plate detection method and device |
CN105160299A (en) * | 2015-07-31 | 2015-12-16 | 华南理工大学 | Human face emotion identifying method based on Bayes fusion sparse representation classifier |
CN105205486A (en) * | 2015-09-15 | 2015-12-30 | 浙江宇视科技有限公司 | Vehicle logo recognition method and device |
Non-Patent Citations (3)
Title |
---|
BO-YUAN FENG et al.: "Automatic recognition of serial numbers in bank notes", Pattern Recognition * |
BO-YUAN FENG et al.: "Extraction of Serial Numbers on Bank Notes", International Conference on Document Analysis & Recognition * |
BO-YUAN FENG et al.: "Part-Based High Accuracy Recognition of Serial Numbers in Bank Notes", Springer International Publishing * |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107450505A (en) * | 2016-05-31 | 2017-12-08 | 优信拍(北京)信息科技有限公司 | A kind of rapid detection system of vehicle, method |
CN106250852A (en) * | 2016-08-01 | 2016-12-21 | 乐视控股(北京)有限公司 | Virtual reality terminal and hand-type recognition methods and device |
CN106339445A (en) * | 2016-08-23 | 2017-01-18 | 东方网力科技股份有限公司 | Vehicle retrieval method and device based on large data |
CN106339445B (en) * | 2016-08-23 | 2019-06-18 | 东方网力科技股份有限公司 | Vehicle retrieval method and device based on big data |
CN106504540A (en) * | 2016-12-12 | 2017-03-15 | 浙江宇视科技有限公司 | A kind of analysis method of information of vehicles and device |
CN108319952B (en) * | 2017-01-16 | 2021-02-02 | 浙江宇视科技有限公司 | Vehicle feature extraction method and device |
CN108319952A (en) * | 2017-01-16 | 2018-07-24 | 浙江宇视科技有限公司 | A kind of vehicle characteristics extracting method and device |
US10275687B2 (en) | 2017-02-16 | 2019-04-30 | International Business Machines Corporation | Image recognition with filtering of image classification output distribution |
WO2018150243A1 (en) * | 2017-02-16 | 2018-08-23 | International Business Machines Corporation | Image recognition with filtering of image classification output distribution |
GB2572733A (en) * | 2017-02-16 | 2019-10-09 | Ibm | Image recognition with filtering of image classification output distribution |
GB2572733B (en) * | 2017-02-16 | 2021-10-27 | Ibm | Image recognition with filtering of image classification output distribution |
WO2018161435A1 (en) * | 2017-03-10 | 2018-09-13 | 深圳大学 | Chinese traditional medicine syndrome element differentiation method and device |
CN107784309A (en) * | 2017-11-01 | 2018-03-09 | 深圳汇生通科技股份有限公司 | A kind of realization method and system to vehicle cab recognition |
CN109703569A (en) * | 2019-02-21 | 2019-05-03 | 百度在线网络技术(北京)有限公司 | A kind of information processing method, device and storage medium |
CN109703569B (en) * | 2019-02-21 | 2021-07-27 | 百度在线网络技术(北京)有限公司 | Information processing method, device and storage medium |
CN113392809A (en) * | 2019-02-21 | 2021-09-14 | 百度在线网络技术(北京)有限公司 | Automatic driving information processing method and device and storage medium |
CN113392809B (en) * | 2019-02-21 | 2023-08-15 | 百度在线网络技术(北京)有限公司 | Automatic driving information processing method, device and storage medium |
CN112580665A (en) * | 2020-12-18 | 2021-03-30 | 深圳赛安特技术服务有限公司 | Vehicle money identification method and device, electronic equipment and storage medium |
CN112580665B (en) * | 2020-12-18 | 2024-04-19 | 深圳赛安特技术服务有限公司 | Vehicle style identification method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN105608441B (en) | 2020-04-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105608441A (en) | Vehicle type identification method and system | |
CN100356388C (en) | Biocharacteristics fusioned identity distinguishing and identification method | |
CN105160309B (en) | Three lanes detection method based on morphological image segmentation and region growing | |
Zaklouta et al. | Real-time traffic sign recognition in three stages | |
CN103488973B (en) | Vehicle brand recognition methods and system based on image | |
CN101925905B (en) | Identification and verification of unknown document according to eigen image process | |
CN101540000B (en) | Iris classification method based on texture primitive statistical characteristic analysis | |
CN102496034B (en) | High-spatial resolution remote-sensing image bag-of-word classification method based on linear words | |
CN102254188B (en) | Palmprint recognizing method and device | |
CN102708364B (en) | Cascade-classifier-based fingerprint image classification method | |
CN101329734A (en) | License plate character recognition method based on K-L transform and LS-SVM | |
CN101859382A (en) | License plate detection and identification method based on maximum stable extremal region | |
CN104077594A (en) | Image recognition method and device | |
CN103679191A (en) | An automatic fake-licensed vehicle detection method based on static state pictures | |
US11132582B2 (en) | Individual identification device | |
CN106778529A (en) | A kind of face identification method based on improvement LDP | |
CN102521561A (en) | Face identification method on basis of multi-scale weber local features and hierarchical decision fusion | |
CN105224945B (en) | A kind of automobile logo identification method based on joint-detection and identification algorithm | |
CN103186790A (en) | Object detecting system and object detecting method | |
CN108846831A (en) | The steel strip surface defect classification method combined based on statistical nature and characteristics of image | |
CN105005565A (en) | Onsite sole trace pattern image retrieval method | |
CN104881871A (en) | Traffic image segmentation method based on improved multi-object harmony search algorithm | |
CN104361319A (en) | Fake fingerprint detection method based on SVM-RFE (support vector machine-recursive feature elimination) | |
CN105825233A (en) | Pedestrian detection method based on random fern classifier of online learning | |
CN104573722A (en) | Three-dimensional face race classifying device and method based on three-dimensional point cloud |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 20200601 Address after: 250001 floor 17, building 3, Aosheng building, 1166 Xinluo street, Jinan City, Shandong Province Patentee after: Jinan boguan Intelligent Technology Co., Ltd Address before: Hangzhou City, Zhejiang province 310051 Binjiang District West Street Jiangling Road No. 88 building 10 South Block 1-11 Patentee before: ZHEJIANG UNIVIEW TECHNOLOGIES Co.,Ltd. |