
CN106503638A - Image processing for color recognition, vehicle color recognition method and system - Google Patents


Info

Publication number
CN106503638A
CN106503638A (application CN201610893880.9A; granted publication CN106503638B)
Authority
CN
China
Prior art keywords
color
target
pixel points
vehicle
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610893880.9A
Other languages
Chinese (zh)
Other versions
CN106503638B (en)
Inventor
孙盛
杨学军
曹志雷
冯力
边荣国
教颖辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electronic Information Machine Co Ltd Of Jinpeng
Original Assignee
Electronic Information Machine Co Ltd Of Jinpeng
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electronic Information Machine Co Ltd Of Jinpeng
Priority to CN201610893880.9A
Publication of CN106503638A
Application granted
Publication of CN106503638B
Status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08: Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to an image processing method for color recognition and a vehicle color recognition method and system. The image processing method for color recognition comprises the steps of: selecting a plurality of videos from a video acquisition system, selecting several frame images from each video to construct a training set, and identifying a background target codebook of the video acquisition system from the training set; identifying, according to the background target codebook, the foreground target pixel points corresponding to a target object in a target image; rejecting the pixel points corresponding to the reflective part of the target object from the foreground target pixel points to obtain the main pixel points corresponding to the main body part; and performing color recognition of the target object according to the main pixel points to determine the color of the target object. This effectively improves the accuracy of the recognized color.

Description

Image processing for color recognition, vehicle color recognition method and system
Technical Field
The invention relates to the technical field of image processing, and in particular to an image processing method for color recognition and a vehicle color recognition method and system.
Background
At present, video capture systems that obtain video images of a place from a fixed point, such as video recording systems, are widely used in daily life and work. Acquiring video from such a system, processing it, and recognizing the color of a target object is of great significance for security and related management work. For example, a vehicle monitoring video may be acquired from a bayonet (checkpoint) system on an urban road or at a community entrance, the color of a particular vehicle may be recognized, and the identity of the vehicle determined for vehicle retrieval, traffic management, and the like.
In traditional schemes for recognizing the color of a target image in a video, one method computes color values in the HSI color space with a standard color-difference formula and then classifies the color of the target image (e.g., a vehicle) with a BP neural network; another adopts color template matching; another classifies target colors with an H-S two-dimensional color histogram; and yet another separates the hue and saturation components in the HSV color space and then constructs a classifier to complete color classification. However, some target objects in a video capture system include both a main body part that requires color recognition (e.g., the body of a vehicle) and a reflective part that interferes with color recognition (e.g., the windows of a vehicle). The color recognition schemes above are susceptible to interference from the reflective part of the target object, making the recognized color less accurate.
Disclosure of Invention
In view of this, it is necessary to provide an image processing method and a vehicle color recognition method and system for color recognition, addressing the technical problem that conventional schemes make the recognized color less accurate.
An image processing method for color recognition, comprising the steps of:
selecting a plurality of videos from a video acquisition system, respectively selecting a plurality of frame images from each video, constructing a training set, and identifying a background target codebook of the video acquisition system according to the training set;
identifying foreground target pixel points corresponding to target objects in the target image according to the background target codebook;
eliminating pixel points corresponding to the light reflecting part of the target object in the foreground target pixel points to obtain main pixel points corresponding to the main part;
and identifying the color of the target object according to the main pixel points, and determining the color of the target object.
An image processing system for color recognition, comprising:
the selection module is used for selecting a plurality of videos from the video acquisition system, respectively selecting a plurality of frame images from each video, constructing a training set, and identifying a background target codebook of the video acquisition system according to the training set;
the identification module is used for identifying foreground target pixel points corresponding to target objects in the target images according to the background target codebook;
the eliminating module is used for eliminating pixel points corresponding to the light reflecting parts of the target objects in the foreground target pixel points to obtain main pixel points;
and the determining module is used for identifying the color of the target object according to the main pixel point and determining the color of the target object.
According to the above image processing method and system for color recognition, the background target codebook can be determined from a plurality of videos in the video acquisition system, the foreground target pixel points corresponding to the target object in the target image are identified using this background target codebook, the pixel points corresponding to the reflective parts that easily interfere with color recognition are removed from the foreground target pixel points, and color recognition is then performed on the main pixel points, which accurately represent the color of the target object. The color of the target object is thereby determined, and the accuracy of the recognized color is effectively improved.
A vehicle color identification method, comprising the steps of:
acquiring a vehicle monitoring video from a bayonet system; identifying a target vehicle in a vehicle monitoring video, wherein a vehicle body of the target vehicle is used as a main body part, and a vehicle window of the target vehicle is used as a light reflecting part;
and recognizing the color of the vehicle body according to the image processing method for recognizing the color of the target object, and determining the color of the target vehicle according to the recognized color of the vehicle body.
A vehicle color identification system, comprising:
the acquisition module is used for acquiring a vehicle monitoring video from the bayonet system; selecting a target vehicle in the vehicle monitoring video, wherein the body of the target vehicle is taken as a main body part, and the window of the target vehicle is taken as a light reflecting part;
and the vehicle color determining module is used for identifying the color of the vehicle body according to the image processing system for identifying the color of the target object and determining the color of the target vehicle according to the identified color of the vehicle body.
According to the vehicle color recognition method and system, a plurality of vehicle videos can be selected from the bayonet system to determine the background target codebook, the foreground target pixel points corresponding to the target vehicle are identified using this background target codebook, the pixel points corresponding to the vehicle windows, which easily interfere with color recognition, are eliminated from the foreground target pixel points, and color recognition is performed on the vehicle-body pixel points, which accurately represent the vehicle color; the recognized vehicle color therefore has higher accuracy.
Drawings
FIG. 1 is a flow diagram of an image processing method for color recognition according to one embodiment;
FIG. 2 is a schematic diagram of the positional relationship between the main axis and a horizontally distributed vehicle according to an embodiment;
FIG. 3 is a schematic diagram of the positional relationship between the main axis and a vertically distributed vehicle according to an embodiment;
FIG. 4 is a schematic view of a sub-area of the A-B plane of an embodiment;
FIG. 5 is a schematic view of a vehicle color identification process according to one embodiment;
FIG. 6 is a diagram illustrating an exemplary image processing system for color recognition.
Detailed Description
The following describes in detail, with reference to the accompanying drawings, specific embodiments of the image processing method for color recognition and the vehicle color recognition method and system according to the present invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating an image processing method for color recognition according to an embodiment, including the following steps:
s10, selecting a plurality of videos from a video acquisition system, respectively selecting a plurality of frame images from each video, constructing a training set, and identifying a background target codebook of the video acquisition system according to the training set;
the video acquisition system can comprise a system for acquiring video images of a fixed place such as a bayonet system at a vehicle checkpoint. A plurality of videos can be randomly selected from a video acquisition system, and then a plurality of frames of images are selected from the videos, so that the selected images and the training set constructed according to the images can include the characteristics of the corresponding fixed places as much as possible. The background target codebook includes a target object to be identified (e.g., a vehicle passing through a corresponding level) and a location background of a fixed location other than the target object. The background target codebook has a codeword C corresponding to the background target codebookBThe composition, which is determined from a number of frames of images from each video, respectively, has high integrity and accuracy.
S20, identifying foreground target pixel points corresponding to target objects in the target image according to the background target codebook;
the target image comprises a target object and a place background, and the foreground target pixel points are pixel points corresponding to the target object.
In an embodiment, the step of identifying a foreground target pixel point corresponding to a target object in a target image according to the background target codebook may include:
and identifying all code words in the background target codebook, and searching non-background pixel points in pixel points corresponding to the target image to obtain foreground target pixel points.
The pixel points corresponding to the target image comprise background pixel points and non-background pixel points, wherein the background pixel points refer to pixel points corresponding to the site background, and the non-background pixel points refer to pixel points corresponding to the target object, namely the foreground target pixel points.
As an embodiment, the step of searching for a non-background pixel point in a pixel point corresponding to the target image may include:
acquiring the RGB value of any pixel point p in the target image, and calculating the lightness I of the pixel point p according to the RGB value, where I = (R + G + B)/3, R is the R component value of the pixel point p, G is the G component value of the pixel point p, and B is the B component value of the pixel point p;
judging the pixel points which do not satisfy the following condition A or condition B in the target image as non-background pixel points:
condition a: color _ distance (p, C)B) Less than or equal to, wherein p represents a pixel point in the target image, CBA codeword in the background target codebook, Color _ distance () represents a distance of a solved Color, and represents a preset Color threshold; this is typically set to a small threshold, such as 0.3 or 0.5, etc.;
condition B: I_low ≤ I ≤ I_hi, where I_min denotes the minimum brightness and I_max the maximum brightness recorded for the matching codeword, and the brightness bounds are defined as I_low = m1·I_max and I_hi = min{m2·I_max, I_min/m1}, where min{} means taking the minimum value, m1 is a preset first lightness parameter, and m2 is a preset second lightness parameter; the first lightness parameter m1 can be set to 0.5, and the second lightness parameter m2 can be set to 1.5.
After traversing all codewords, i.e., substituting each codeword of the background target codebook into condition A and condition B to identify non-background pixel points, a pixel point that does not match any codeword can be judged to be a foreground target pixel point. After all pixel points of the image to be processed are checked in sequence, the foreground target pixel points are determined.
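The codeword test above (conditions A and B) can be sketched as follows. The codeword layout (a mean RGB vector plus the recorded minimum and maximum brightness), the chromatic-distance formula, and the threshold scale are illustrative assumptions in the spirit of the classic codebook background model, not the patent's exact data structures.

```python
import numpy as np

EPS = 0.4          # preset color threshold; the text suggests a small value such as 0.3 or 0.5
M1, M2 = 0.5, 1.5  # first and second lightness parameters m1, m2

def color_distance(pixel, codeword_rgb):
    """Chromatic distance between a pixel and a codeword: the distance of the
    RGB vector from its projection onto the codeword's RGB direction, so that
    pure brightness changes along the codeword direction cost nothing."""
    p = np.asarray(pixel, dtype=float)
    v = np.asarray(codeword_rgb, dtype=float)
    norm_p2 = np.dot(p, p)
    norm_v2 = np.dot(v, v)
    if norm_v2 == 0:
        return np.sqrt(norm_p2)
    proj2 = np.dot(p, v) ** 2 / norm_v2   # squared projection of p onto v
    return np.sqrt(max(norm_p2 - proj2, 0.0))

def matches_codeword(pixel, cw):
    """Condition A (color) and condition B (brightness) against one codeword.
    cw = (mean_rgb, i_min, i_max) with i_min/i_max the recorded brightness range."""
    mean_rgb, i_min, i_max = cw
    brightness = sum(pixel) / 3.0           # I = (R + G + B) / 3
    i_low = M1 * i_max                      # I_low = m1 * I_max
    i_hi = min(M2 * i_max, i_min / M1)      # I_hi = min{m2 * I_max, I_min / m1}
    cond_a = color_distance(pixel, mean_rgb) <= EPS
    cond_b = i_low <= brightness <= i_hi
    return cond_a and cond_b

def is_foreground(pixel, codebook):
    """A pixel is a foreground target pixel if it matches no background codeword."""
    return not any(matches_codeword(pixel, cw) for cw in codebook)
```

With one codeword built from a gray background patch, a strongly chromatic pixel is reported as foreground while a pixel matching the codeword is reported as background.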
S30, eliminating pixel points corresponding to the light reflecting part of the target object in the foreground target pixel points to obtain main pixel points corresponding to the main part;
the foreground target pixel points can comprise main body pixel points corresponding to the main body part, namely pixel points needing color identification, and pixel points corresponding to the reflection part of corresponding color identification due to reflection interference, such as pixel points corresponding to windows in a vehicle.
In one embodiment, the target object (e.g., a vehicle) is an axisymmetric figure; the reflective part of the target object is distributed symmetrically above the main body part; under normal circumstances, the reflective part lies above the main axis and the main body part below it;
the step of eliminating the pixel points corresponding to the light reflecting part of the target object in the foreground target pixel points to obtain the main pixel points comprises the following steps:
identifying a main axis of a target object corresponding to the foreground target pixel point;
and determining the foreground target pixel points below the main axis as main pixel points.
A specular-reflection highlight phenomenon usually occurs on reflective parts such as vehicle windows; the interference it causes in the captured image cannot be recovered by color correction, and performing color recognition on the interfered image would strongly affect the recognition result. Removing the pixel points corresponding to the reflective part of the target object therefore helps ensure the accuracy of the recognized color.
As an embodiment, the step of identifying the gravity center point of the target object corresponding to the foreground target pixel points and judging their distribution state may include:
The central moments are defined as λ_(p,q) = Σ_((x,y)∈T_B) (x − x̄)^p · (y − ȳ)^q, where T_B denotes the set of foreground target pixel points, (x, y) ∈ T_B indicates that the pixel point (x, y) belongs to the foreground target pixel points, (x̄, ȳ) is the gravity center (centroid) of T_B, and p and q are subscript variables that each take integer values; each pair of integers p and q has a corresponding λ_(p,q). For example, when p = 2 and q = 0, the obtained λ_(p,q) is λ_(2,0).
According to the orientation-angle formula θ = (1/2)·arctan(2λ_(1,1)/(λ_(2,0) − λ_(0,2))), the orientation angle θ is calculated. This formula is obtained by minimizing the sum of squared distances of the foreground pixel points to the axis: the intermediate variables in the derivation are temporary symbols, and the minimizing parameter is found by setting the first derivative with respect to θ to zero.
If θ lies in the interval [−π/4, π/4], the distribution state of the foreground target pixel points is judged to be horizontal.
In this embodiment, in order to accurately detect the area where a reflective part such as a vehicle window is located, the distribution state of the foreground target pixel points is first determined, and the direction of the main axis of the target object is then accurately detected. Taking a vehicle as an example, the main axis is parallel to the upper and lower edges of the vehicle window, and its orientation in the whole image is expressed by the orientation angle θ. If θ lies in the interval [−π/4, π/4], the vehicle target is horizontally distributed in the target image, and the positional relationship between the main axis and the vehicle is as shown in FIG. 2; otherwise, the vehicle target is vertically distributed in the target image, and the positional relationship between the main axis and the vehicle is as shown in FIG. 3. In FIG. 2, the vehicle is horizontally disposed; the dividing line (main axis) divides the vehicle into two parts, and the part below the dividing line is the main body part with the window removed. In FIG. 3, the vehicle is vertically distributed; the dividing line divides the vehicle into two parts, and the part below the dividing line is the vehicle target (main body part) with the window removed.
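The gravity-center and orientation-angle computation above can be sketched as follows, assuming the foreground target pixel points are given as coordinate arrays:

```python
import numpy as np

def orientation_angle(xs, ys):
    """theta = 0.5 * arctan(2*l11 / (l20 - l02)) from the central moments
    of the foreground pixel coordinate cloud."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    xb, yb = xs.mean(), ys.mean()          # gravity center (centroid)
    l11 = np.sum((xs - xb) * (ys - yb))    # central moment lambda_(1,1)
    l20 = np.sum((xs - xb) ** 2)           # lambda_(2,0)
    l02 = np.sum((ys - yb) ** 2)           # lambda_(0,2)
    # arctan2 handles the l20 == l02 case that a plain division would not
    return 0.5 * np.arctan2(2.0 * l11, l20 - l02)

def is_horizontal(theta):
    """Horizontal distribution when theta lies in [-pi/4, pi/4]."""
    return -np.pi / 4 <= theta <= np.pi / 4
```

For an elongated horizontal pixel cloud the angle comes out near zero, so the target is judged horizontally distributed; a vertical cloud yields an angle near π/2 and fails the test.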
And S40, identifying the color of the target object according to the main pixel points, and determining the color of the target object.
In an embodiment, the process of performing color identification on the target object according to the main pixel point includes:
performing a normalization calculation on the RGB values of the main pixel points to obtain the normalized R component value I_R, G component value I_G and B component value I_B, where I_R = r/(r + g + b), I_G = g/(r + g + b), I_B = b/(r + g + b), r is the R component value of the main pixel point, g is the G component value of the main pixel point, and b is the B component value of the main pixel point;
judging, according to I_R, I_G and I_B, whether the main pixel points belong to a mixed color;
if the main pixel point belongs to the mixed color, inputting the main pixel point into a mixed color classifier for color identification;
and if the main pixel point does not belong to the mixed color, inputting the main pixel point into a single-frequency color classifier for color identification.
For example, the colors of a target object such as a vehicle generally fall into mixed colors and single-frequency (pure) colors. The pixel tone of the target object can first be identified to determine whether the main pixel points belong to a mixed color, and the main pixel points are then input into the corresponding color classifier (a mixed-color classifier or a single-frequency-color classifier) to recognize the color of the target object.
As an embodiment, the main pixel points can be converted into the LAB color space and the A-B plane sampled in polar coordinates: in the radial direction of the pole, 10 is taken as the quantization unit; in the polar-angle direction, 90 degrees is taken as the quantization unit. The A-B plane is thus divided into 28 sub-regions, as shown in FIG. 4, each corresponding to one entry of a color statistics table. Using m_LAB(k) = #{q | q ∈ bin_k} to denote the number of pixels falling in the k-th bin, a 28-dimensional vector m_LAB = [m_LAB(1), m_LAB(2), ... m_LAB(28)]' can be constructed; this vector is the LAB part of the single-frequency color feature vector.
For the RGB components, a 6-dimensional vector recording the relative order of the components is constructed, whose 6 entries are the numbers of pixels satisfying R > G, G > R, R > B, B > R, G > B and B > G respectively, denoted m_RGB = [m_RGB(1), m_RGB(2), ... m_RGB(6)]'. The two vectors are combined into a 34-dimensional feature vector m_DG, which is input into an SVM classifier for classification, completing the construction of the single-frequency color classifier.
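The 34-dimensional single-frequency color feature can be sketched as below. The 7-ring radial extent is an assumption (the text fixes only the quantization units, 10 radially and 90 degrees angularly, and the total of 28 sub-regions, which 7 × 4 satisfies), and the A/B values are assumed to be already converted to the LAB color space.

```python
import numpy as np

def ab_polar_histogram(a_vals, b_vals):
    """28-bin polar histogram over the A-B plane: 7 radial rings of width 10
    (assumed extent) times 4 angular sectors of 90 degrees; m_LAB(k)."""
    a = np.asarray(a_vals, dtype=float)
    b = np.asarray(b_vals, dtype=float)
    radius = np.sqrt(a ** 2 + b ** 2)
    ring = np.minimum((radius // 10).astype(int), 6)   # 7 radial bins, clamp outliers
    angle = np.degrees(np.arctan2(b, a)) % 360.0
    sector = (angle // 90).astype(int)                 # 4 angular bins
    hist = np.zeros(28, dtype=int)
    np.add.at(hist, ring * 4 + sector, 1)              # unbuffered bin counting
    return hist

def rgb_order_counts(r, g, b):
    """6-dim vector m_RGB of pairwise order counts: R>G, G>R, R>B, B>R, G>B, B>G."""
    r, g, b = (np.asarray(v) for v in (r, g, b))
    return np.array([np.sum(r > g), np.sum(g > r), np.sum(r > b),
                     np.sum(b > r), np.sum(g > b), np.sum(b > g)])

def single_color_feature(a_vals, b_vals, r, g, b):
    """34-dim m_DG = [m_LAB | m_RGB], the SVM input for single-frequency colors."""
    return np.concatenate([ab_polar_histogram(a_vals, b_vals),
                           rgb_order_counts(r, g, b)])
```

A single pixel with A = B = 5 has radius ≈ 7.1 and angle 45 degrees, so it falls into ring 0, sector 0, i.e., bin 0 of the histogram.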
As an embodiment, the 3 color components of RGB are each quantized into 8 levels, giving 24 color levels in total; the numbers of pixel points corresponding to these 24 levels form a 24-dimensional feature vector, denoted m_quan. In addition, the numbers of pixels in the 3 channels satisfying R > G, G > R, R > B, B > R, G > B and B > G are recorded as m_RGB. The two are combined into a 30-dimensional feature vector m_DC. This feature vector is input into an SVM classifier for classification, completing the construction of the mixed-color classifier.
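The 30-dimensional mixed-color feature can be sketched in the same style; 8-bit channel values in 0-255 are assumed, so each 32-value band is one quantization level.

```python
import numpy as np

def mixed_color_feature(r, g, b):
    """30-dim m_DC = [m_quan | m_RGB]: each RGB channel quantized into 8 levels
    (24 per-channel level counts) plus the 6 pairwise order counts."""
    r, g, b = (np.asarray(v, dtype=int) for v in (r, g, b))
    m_quan = np.zeros(24, dtype=int)
    for i, chan in enumerate((r, g, b)):
        levels = np.minimum(chan // 32, 7)      # 8 levels per channel, 0..7
        np.add.at(m_quan, i * 8 + levels, 1)    # channel i owns bins i*8 .. i*8+7
    m_rgb = np.array([np.sum(r > g), np.sum(g > r), np.sum(r > b),
                      np.sum(b > r), np.sum(g > b), np.sum(b > g)])
    return np.concatenate([m_quan, m_rgb])
```

A single pixel (R, G, B) = (0, 255, 128) lands in level 0 of the R channel (bin 0), level 7 of the G channel (bin 15), and level 4 of the B channel (bin 20).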
As an example, the process of judging, according to I_R, I_G and I_B, whether the main pixel points belong to a mixed color may include:
calculating the Euclidean distance d_p between the main pixel point and the mixed-color central-axis vector, where d_p = (I_R − 0.333)^2 + (I_G − 0.333)^2 + (I_B − 0.333)^2;
calculating the Euclidean distance d_mean between the main pixel point and the mean vector, where d_mean = (r − u_r)^2 + (g − u_g)^2 + (b − u_b)^2, u_r denoting the mean of the R component values of all main pixel points, u_g the mean of the G component values, and u_b the mean of the B component values;
respectively calculating the mean μ_mean of d_mean, the mean μ_d of d_p, the variance σ_mean of d_mean, and the variance σ_d of d_p;
calculating the probability P(gray|p) that the main pixel point belongs to a mixed color according to P(gray|p) = exp(−(d_p − μ_d)^2/(2σ_d^2)) · exp(−(d_mean − μ_mean)^2/(2σ_mean^2));
and judging the main pixel points with P(gray|p) greater than the probability threshold to be main pixel points belonging to a mixed color.
P(gray|p) being greater than the probability threshold means P(gray|p) > Th_p, where the probability threshold Th_p can be set to a value such as 80%, obtained by statistics over all mixed-color pixels when the two training sample libraries are constructed. Owing to the structural complexity of target objects such as vehicles and to background-noise interference, a target object to be detected usually contains both mixed-color pixels and single-frequency-color pixels. The judgment can therefore be made by the following rule: if the proportion of mixed-color pixels exceeds 80%, the target color of the vehicle is considered to be a mixed color; otherwise, it is considered to be a single-frequency color.
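The pixel-tone test can be sketched as follows. The exact functional form of P(gray|p) is not preserved in the text, which names only the distances d_p and d_mean and their means and variances; the two-factor Gaussian score used here is an assumption consistent with those quantities.

```python
import numpy as np

def gray_scores(r, g, b):
    """Per-pixel score P(gray|p) from d_p (distance to the gray axis in
    normalized RGB) and d_mean (distance to the region mean), each turned into
    a Gaussian factor using its own mean and variance (assumed form)."""
    r, g, b = (np.asarray(v, dtype=float) for v in (r, g, b))
    s = r + g + b
    s[s == 0] = 1.0                                    # guard black pixels
    ir, ig, ib = r / s, g / s, b / s                   # I_R, I_G, I_B
    d_p = (ir - 0.333) ** 2 + (ig - 0.333) ** 2 + (ib - 0.333) ** 2
    ur, ug, ub = r.mean(), g.mean(), b.mean()
    d_mean = (r - ur) ** 2 + (g - ug) ** 2 + (b - ub) ** 2
    mu_d, sig_d = d_p.mean(), d_p.std() + 1e-12        # avoid division by zero
    mu_m, sig_m = d_mean.mean(), d_mean.std() + 1e-12
    return np.exp(-(d_p - mu_d) ** 2 / (2 * sig_d ** 2)
                  - (d_mean - mu_m) ** 2 / (2 * sig_m ** 2))

def is_mixed_color(r, g, b, th_p=0.8):
    """Target is mixed-color when more than 80% of pixels exceed Th_p."""
    p = gray_scores(r, g, b)
    return np.mean(p > th_p) > 0.8
```

A uniformly gray region scores 1.0 on every pixel and is judged mixed-color by the 80% rule.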
As an embodiment, taking a vehicle as an example and considering the vehicle colors collected in the bayonet system, vehicle colors can generally be divided into two major categories: mixed colors and single-frequency colors; this division is called pixel tone discrimination. Training sample libraries are then constructed for the two categories, namely a mixed-color training sample library and a single-frequency-color training sample library; the two libraries share no samples. In a first setting, the vehicle color categories include: black, white, silver, dark blue, dark green, yellow, red, green and blue.
As shown in FIG. 5, the vehicle color recognition process may first perform pixel tone discrimination on the target pixel points of the vehicle to judge whether the main pixel points belong to a mixed color. If they do, mixed-color feature extraction is performed on the main pixel points, which are then input into the mixed-color classifier (SVM) for color recognition; the mixed colors include black, white, silver, dark blue and dark green. If they belong to a single-frequency color, single-frequency-color feature extraction is performed on the main pixel points, which are then input into the single-frequency-color classifier (SVM) for color recognition; the single-frequency colors include yellow, red, green and blue.
The image processing method for color recognition provided by the invention can determine the background target codebook from a plurality of videos in the video acquisition system, identify the foreground target pixel points corresponding to the target object in the target image using this background target codebook, eliminate from them the pixel points corresponding to the reflective parts that easily interfere with color recognition, and then perform color recognition on the main pixel points, which accurately represent the color of the target object, so as to determine the color of the target object. The accuracy of the recognized color is thereby effectively improved.
In an embodiment, before color recognition of the target object is performed according to the main pixel points, the method may further include:
dividing the main pixel points into M pixel sub-regions, a single region being denoted T_i'; selecting one frame image R as a construction frame, and acquiring the sub-region of the image corresponding to the construction frame R as R_i;
setting h_i(p) = α_i·R_i(p) + β_i, where p is any pixel point in T_i', μ(T_i') and μ(R_i) are respectively the pixel mean of T_i' and the pixel mean of R_i, σ(T_i') and σ(R_i) are respectively the pixel variance of T_i' and the pixel variance of R_i, and h_i(p) is the color mapping function corresponding to the i-th sub-region;
calculating the values of α_i and β_i by minimizing the energy function E(α_i, β_i) = (1/|T_i'|)·Σ_(p∈T_i') (α_i·R_i(p) + β_i − T_i'(p))^2, which yields α_i = (σ(T_i')/σ(R_i))^(1/2) and β_i = μ(T_i') − α_i·μ(R_i); |T_i'| denotes the pixel capacity of the pixel sub-region;
setting the weight w_i(p) of h_i(p) for pixel point p as w_i(p) = 1/(‖p − c_i‖ + 1), where c_i is the center of the i-th sub-region and the operator ‖a − b‖ represents the Euclidean distance between a and b;
calculating the normalized weights ŵ_i(p) = w_i(p)/Σ_(j=1..M) w_j(p);
determining the weighted global color correction function H(p) from ŵ_i(p) and h_i(p), where H(p) = Σ_(i=1..M) ŵ_i(p)·h_i(p);
And substituting the main pixel points into a weighted global color correction function H (p) for correction.
In this embodiment, in order to overcome color acquisition differences caused by factors such as illumination conditions and shooting angles, color mapping and correction may be performed on the main pixel points, taking the main pixel points with the reflective part removed as the input image data. Since real scene images are very complex in composition, the color mapping must be performed region by region. In the first step, the open-source tool provided by Felzenszwalb can be used to divide the input image T' (the main pixel points) into M regions, a single region being denoted T_i'; in addition, a frame image R with good imaging quality is randomly selected as the construction frame, its foreground target pixels are removed, and the image sub-region corresponding to the construction frame R is denoted R_i. Meanwhile, the corresponding pixel coordinates must be kept constant during the construction of the mapping function. In the second step, the mapping is performed in the LAB color space in the following way:
h_i(p) = α_i·R_i(p) + β_i, where p is any pixel point in sub-region T_i', μ(T_i') and μ(R_i) are respectively the mean of the input image data T_i' and the mean of the construction-frame image data R_i, σ(T_i') and σ(R_i) are respectively their variances, and h_i(p) is the color mapping function corresponding to the i-th sub-region; the parameters α_i and β_i are obtained by minimizing the energy function E(α_i, β_i) = (1/|T_i'|)·Σ_(p∈T_i') (α_i·R_i(p) + β_i − T_i'(p))^2.
Setting the first derivatives of the right-hand expression with respect to the parameters α_i and β_i to zero yields α_i = (σ(T_i')/σ(R_i))^(1/2) and β_i = μ(T_i') − α_i·μ(R_i), where |T_i'| denotes the pixel capacity of the sub-region.
In the third step, the global color correction function H(p) is constructed from the sub-region mapping functions. The weight w_i(p) of the sub-region mapping function h_i() for pixel p is defined as w_i(p) = 1/(‖p − c_i‖ + 1), where c_i is the center of the i-th sub-region and the operator ‖a − b‖ represents the Euclidean distance between a and b. For the M regions there are M partial mapping functions h_i(); to ensure consistency of the weights computed for different colors, the weight function must be normalized: ŵ_i(p) = w_i(p)/Σ_(j=1..M) w_j(p). The weighted global color correction function is then H(p) = Σ_(i=1..M) ŵ_i(p)·h_i(p).
and substituting the main pixel point into a weighted global color correction function H (p) for correction.
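The per-region mapping and the weighted global correction H(p) can be sketched as follows. The closed form for α_i and β_i (standard-deviation ratio, i.e., the square root of the variance ratio, plus mean matching) and the inverse-distance weight are reconstructions; the text preserves only the quantities they are built from.

```python
import numpy as np

def region_mapping(t_vals, r_vals):
    """alpha_i, beta_i for one sub-region: match the construction frame's
    statistics to the input region's (std ratio and mean offset, assumed form)."""
    t = np.asarray(t_vals, dtype=float)
    r = np.asarray(r_vals, dtype=float)
    alpha = t.std() / (r.std() + 1e-12)    # sqrt of the variance ratio
    beta = t.mean() - alpha * r.mean()
    return alpha, beta

def corrected_value(p_xy, p_val, regions):
    """H(p) = sum_i w_hat_i(p) * h_i(p), with regions given as
    [(centroid, alpha, beta)] and w_i(p) an inverse-distance weight."""
    weights, mapped = [], []
    for centroid, alpha, beta in regions:
        d = np.linalg.norm(np.asarray(p_xy, dtype=float)
                           - np.asarray(centroid, dtype=float))
        weights.append(1.0 / (d + 1e-6))   # closer regions dominate
        mapped.append(alpha * p_val + beta)
    w = np.asarray(weights)
    w /= w.sum()                           # normalized weights w_hat_i(p)
    return float(np.dot(w, mapped))
```

With a single region the normalized weight is 1 and H(p) reduces to that region's linear map h_i(p).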
Referring to fig. 6, fig. 6 is a schematic structural diagram of an image processing system for color recognition according to an embodiment, including:
the selection module 10 is configured to select a plurality of videos from a video acquisition system, select a plurality of frame images from each video, construct a training set, and identify a background target codebook of the video acquisition system according to the training set;
the identification module 20 is configured to identify a foreground target pixel point corresponding to a target object in a target image according to the background target codebook;
the eliminating module 30 is configured to eliminate a pixel point corresponding to a target object reflection part in the foreground target pixel points to obtain a main pixel point;
and the determining module 40 is configured to perform color identification on the target object according to the main pixel point, and determine the color of the target object.
The image processing system for color recognition provided by the present invention corresponds one to one with the image processing method for color recognition provided by the present invention; the technical features and beneficial effects described in the embodiments of the image processing method for color recognition are all applicable to the embodiments of the image processing system for color recognition, and are not repeated here.
A vehicle color identification method, comprising the steps of:
acquiring a vehicle monitoring video from a bayonet system; identifying a target vehicle in a vehicle monitoring video, wherein a vehicle body of the target vehicle is used as a main body part, and a vehicle window of the target vehicle is used as a light reflecting part;
and recognizing the color of the vehicle body according to the image processing method for recognizing the color of the target object, and determining the color of the target vehicle according to the recognized color of the vehicle body.
The bayonet system includes checkpoint systems at locations such as urban roads or community access-control gates. The target vehicle is the target object in the image processing method for color recognition described above; color recognition of the vehicle body (the main body part) is performed with that image processing method so as to determine the corresponding vehicle color, which improves the accuracy of the recognized vehicle color.
According to the vehicle color identification method provided by the present invention, a plurality of vehicle videos can be selected from the bayonet system to determine the background target codebook; the foreground target pixel points corresponding to the target vehicle are identified with this background target codebook; the pixel points corresponding to the vehicle windows, which easily interfere with color identification, are removed from the foreground target pixel points; and color identification is then performed on the remaining vehicle body pixel points, which accurately represent the vehicle color, so that the identified vehicle color has higher accuracy.
A vehicle color identification system, comprising:
the acquisition module is used for acquiring a vehicle monitoring video from the bayonet system; selecting a target vehicle in the vehicle monitoring video, wherein the body of the target vehicle is taken as a main body part, and the window of the target vehicle is taken as a light reflecting part;
and the vehicle color determining module is used for identifying the color of the vehicle body according to the image processing system for identifying the color of the target object and determining the color of the target vehicle according to the identified color of the vehicle body.
The vehicle color recognition system provided by the present invention corresponds one to one with the vehicle color recognition method provided by the present invention; the technical features and beneficial effects described in the embodiments of the vehicle color recognition method are all applicable to the embodiments of the vehicle color recognition system, and are not repeated here.
The technical features of the embodiments described above may be combined arbitrarily. For the sake of brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination of technical features contains no contradiction, it should be considered to fall within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the inventive concept, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. An image processing method for color recognition, comprising the steps of:
selecting a plurality of videos from a video acquisition system, respectively selecting a plurality of frame images from each video, constructing a training set, and identifying a background target codebook of the video acquisition system according to the training set;
identifying foreground target pixel points corresponding to target objects in the target image according to the background target codebook;
eliminating pixel points corresponding to the light reflecting part of the target object in the foreground target pixel points to obtain main pixel points corresponding to the main part;
and identifying the color of the target object according to the main pixel points, and determining the color of the target object.
2. The image processing method for color identification according to claim 1, wherein the step of identifying foreground target pixel points corresponding to target objects in the target image according to the background target codebook comprises:
and identifying all code words in the background target codebook, and searching non-background pixel points in pixel points corresponding to the target image to obtain foreground target pixel points.
3. The image processing method for color identification according to claim 2, wherein the step of searching for non-background pixels among pixels corresponding to the target image comprises:
acquiring the RGB value of any pixel point p in the target image, and calculating the lightness I of the pixel point p according to the RGB value, wherein I = sqrt(r² + g² + b²), r is the R component value of the pixel point p, g is the G component value of the pixel point p, and b is the B component value of the pixel point p;
judging the pixel points which do not satisfy the following condition A or condition B in the target image as non-background pixel points:
condition a: color _ distance (p, C)B) Less than or equal to, wherein p represents a pixel point in the target image, CBA codeword in the background target codebook, Color _ distance () represents a distance of a solved Color, and represents a preset Color threshold;
condition B: indicating the most corresponding of a pixel point in a target imageThe brightness of the light beam is small,representing the maximum brightness of a pixel point in the target image. The brightness function brightness is defined as:wherein,min { } means taking the minimum value, m1Is a preset first lightness parameter, m2Is a preset second brightness parameter.
4. The image processing method for color recognition according to claim 1, wherein the target object is an axisymmetric figure; the light reflecting parts of the target object are symmetrically distributed above the main body part;
the step of eliminating the pixel points corresponding to the light reflecting part of the target object in the foreground target pixel points to obtain the main pixel points comprises the following steps:
identifying a main axis of a target object corresponding to the foreground target pixel point;
and determining the foreground target pixel points below the main axis as main pixel points.
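The below-axis filtering step of this claim can be sketched as follows, assuming image coordinates in which y grows downward, so that "below the main axis" means a larger y value:

```python
def main_body_pixels(points, axis_y):
    """Keep the foreground target pixel points below the main axis; for a
    vehicle this drops the reflective window region above the axis.
    Image y-coordinates grow downward, so 'below' means y > axis_y."""
    return [(x, y) for (x, y) in points if y > axis_y]

# window pixels around y = 15..18, body pixels around y = 45..55, axis at y = 30
body = main_body_pixels([(5, 15), (6, 18), (5, 45), (6, 55)], 30)
# body == [(5, 45), (6, 55)]
```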
5. The image processing method for color recognition according to claim 4, wherein the process of determining, from the gravity center point of the target object corresponding to the foreground target pixel points, that the distribution state of the foreground target pixel points is horizontal distribution comprises:
the central moment is set as follows:wherein,TBbefore showingScene target pixel point, (x, y) ∈ TBRepresenting that the pixel points x and y belong to foreground target pixel points;
calculating the orientation angle θ according to the orientation angle calculation formula θ = (1/2)·arctan( 2μ(1,1) / (μ(2,0) − μ(0,2)) ), wherein μ(1,1), μ(2,0) and μ(0,2) are the central moments defined above;
if the orientation angle θ lies in the interval (−π/4, π/4), judging that the distribution state of the foreground target pixel points is horizontal distribution.
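The orientation test of this claim can be sketched as follows; the use of atan2 (which also handles the degenerate case μ(2,0) = μ(0,2)) and the open interval (−π/4, π/4) are assumptions where the original formula is garbled:

```python
import math

def orientation_angle(points):
    """theta = 0.5 * atan2(2*mu11, mu20 - mu02), computed from the
    second-order central moments of the foreground pixel set."""
    n = len(points)
    xc = sum(x for x, _ in points) / n          # gravity center (centroid)
    yc = sum(y for _, y in points) / n
    mu11 = sum((x - xc) * (y - yc) for x, y in points)
    mu20 = sum((x - xc) ** 2 for x, _ in points)
    mu02 = sum((y - yc) ** 2 for _, y in points)
    return 0.5 * math.atan2(2 * mu11, mu20 - mu02)

def is_horizontal(points):
    """Horizontal distribution if theta falls within (-pi/4, pi/4)."""
    return abs(orientation_angle(points)) < math.pi / 4
```

A blob elongated along the x axis yields θ near 0 (horizontal); a blob elongated along the y axis yields θ near ±π/2 and is rejected.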
6. The image processing method for color recognition according to claim 1, wherein the process of performing color recognition of a target object according to the subject pixel point comprises:
carrying out normalized calculation on the RGB value of the main pixel point to obtain a normalized R component value IR, G component value IG and B component value IB, wherein IR = r/(r+g+b), IG = g/(r+g+b), IB = b/(r+g+b), r is the R component value of the main pixel point, g is the G component value of the main pixel point, and b is the B component value of the main pixel point;
judging, according to IR, IG and IB, whether the main pixel point belongs to a mixed color;
if the main pixel point belongs to the mixed color, inputting the main pixel point into a mixed color classifier for color identification;
and if the main pixel point does not belong to the mixed color, inputting the main pixel point into a single-frequency color classifier for color identification.
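The normalization and classifier routing of this claim can be sketched as follows; the classifier callables are placeholders, since the patent does not specify their internals:

```python
def normalize_rgb(r, g, b):
    """I_R = r/(r+g+b), I_G = g/(r+g+b), I_B = b/(r+g+b); the normalized
    components sum to 1, so a perfectly gray pixel maps to (1/3, 1/3, 1/3),
    the mixed-color central axis."""
    s = float(r + g + b)
    return r / s, g / s, b / s

def route_pixel(pixel, is_mixed, mixed_classifier, single_classifier):
    """Send the main pixel point to the mixed-color classifier or to the
    single-frequency color classifier depending on the mixed-color test."""
    return mixed_classifier(pixel) if is_mixed(pixel) else single_classifier(pixel)
```

Normalization removes overall intensity, so that a dark gray and a light gray pixel both land near (1/3, 1/3, 1/3) and are routed to the same classifier.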
7. The image processing method for color recognition according to claim 6, wherein the process of judging, according to IR, IG and IB, whether the main pixel point belongs to a mixed color comprises:
calculating the Euclidean distance dp between the main pixel point and the mixed-color central axis vector, wherein dp = (IR − 0.333)² + (IG − 0.333)² + (IB − 0.333)²;
calculating the Euclidean distance dmean between the main pixel point and the mixed mean vector, wherein dmean = (r − ur)² + (g − ug)² + (b − ub)², ur represents the mean of the R component values of all main pixel points, ug represents the mean of the G component values of all main pixel points, and ub represents the mean of the B component values of all main pixel points;
respectively calculating the mean μmean of dmean, the mean μd of dp, the variance σmean of dmean and the variance σd of dp;
calculating, according to dp, dmean, μmean, μd, σmean and σd, the probability P(gray|p) that the main pixel point belongs to a mixed color;
judging the main pixel points for which P(gray|p) is greater than a probability threshold as main pixel points belonging to the mixed color.
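The two distances defined in this claim, together with a Gaussian-style score for P(gray|p), can be sketched as follows; the exact probability expression is an assumption here, since the patent only states that P(gray|p) is computed from these statistics:

```python
import math

def axis_distance(i_r, i_g, i_b):
    """d_p: squared distance of the normalized color from the mixed-color
    central axis vector (0.333, 0.333, 0.333)."""
    return (i_r - 0.333) ** 2 + (i_g - 0.333) ** 2 + (i_b - 0.333) ** 2

def mean_distance(r, g, b, u_r, u_g, u_b):
    """d_mean: squared distance from the mean color (u_r, u_g, u_b) of all
    main pixel points."""
    return (r - u_r) ** 2 + (g - u_g) ** 2 + (b - u_b) ** 2

def gray_probability(d_p, d_mean, mu_d, sigma_d, mu_mean, sigma_mean):
    """Assumed Gaussian-style combination of the two distances; the score
    is 1 when both distances equal their means and decays as they deviate."""
    z_p = (d_p - mu_d) ** 2 / (2.0 * sigma_d ** 2)
    z_m = (d_mean - mu_mean) ** 2 / (2.0 * sigma_mean ** 2)
    return math.exp(-(z_p + z_m))
```

A pixel whose normalized color sits exactly on the gray axis gives axis_distance = 0, the strongest evidence for the mixed-color class.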
8. An image processing system for color recognition, comprising:
the selection module is used for selecting a plurality of videos from the video acquisition system, respectively selecting a plurality of frame images from each video, constructing a training set, and identifying a background target codebook of the video acquisition system according to the training set;
the identification module is used for identifying foreground target pixel points corresponding to target objects in the target images according to the background target codebook;
the eliminating module is used for eliminating pixel points corresponding to the light reflecting parts of the target objects in the foreground target pixel points to obtain main pixel points;
and the determining module is used for identifying the color of the target object according to the main pixel point and determining the color of the target object.
9. A vehicle color recognition method is characterized by comprising the following steps:
acquiring a vehicle monitoring video from a bayonet system; identifying a target vehicle in a vehicle monitoring video, wherein a vehicle body of the target vehicle is used as a main body part, and a vehicle window of the target vehicle is used as a light reflecting part;
identifying the color of the vehicle body according to the image processing method for color recognition of a target object according to any one of claims 1 to 7, and determining the color of the target vehicle according to the identified color of the vehicle body.
10. A vehicle color identification system, comprising:
the acquisition module is used for acquiring a vehicle monitoring video from the bayonet system; selecting a target vehicle in the vehicle monitoring video, wherein the body of the target vehicle is taken as a main body part, and the window of the target vehicle is taken as a light reflecting part;
a vehicle color determining module, used for identifying the color of the vehicle body according to the image processing system for color recognition of a target object of claim 8, and determining the color of the target vehicle according to the identified color of the vehicle body.
CN201610893880.9A 2016-10-13 2016-10-13 Image procossing, vehicle color identification method and system for color identification Active CN106503638B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610893880.9A CN106503638B (en) 2016-10-13 2016-10-13 Image procossing, vehicle color identification method and system for color identification

Publications (2)

Publication Number Publication Date
CN106503638A true CN106503638A (en) 2017-03-15
CN106503638B CN106503638B (en) 2019-09-13

Family

ID=58293946

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610893880.9A Active CN106503638B (en) 2016-10-13 2016-10-13 Image procossing, vehicle color identification method and system for color identification

Country Status (1)

Country Link
CN (1) CN106503638B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6088482A (en) * 1998-10-22 2000-07-11 Symbol Technologies, Inc. Techniques for reading two dimensional code, including maxicode
CN101436252A (en) * 2008-12-22 2009-05-20 北京中星微电子有限公司 Method and system for recognizing vehicle body color in vehicle video image
CN102034080A (en) * 2009-09-24 2011-04-27 北京汉王智通科技有限公司 Vehicle color identification method and device
CN103530366A (en) * 2013-10-12 2014-01-22 湖北微模式科技发展有限公司 Vehicle searching method and system based on user-defined features
CN105005766A (en) * 2015-07-01 2015-10-28 深圳市迈科龙电子有限公司 Vehicle body color identification method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DAVID SETIABUDI ET AL.: "Digital Color Classification for Colorful Cross Stitch Threads Using RGB+Euclidean Distance and LAB+CIE94", 《2016 INTERNATIONAL CONFERENCE ON INFORMATION,COMMUNICATION TECHNOLOGY AND SYSTEM》 *
赵占杰 等: "一种基于码本背景模型的运动目标检测方法", 《北京石油化工学院学报》 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107154032A (en) * 2017-04-20 2017-09-12 腾讯科技(深圳)有限公司 A kind of image processing method and device
CN110519489A (en) * 2019-06-20 2019-11-29 杭州海康威视数字技术股份有限公司 Image-pickup method and device
CN110751053A (en) * 2019-09-26 2020-02-04 高新兴科技集团股份有限公司 Vehicle color identification method, device, equipment and storage medium
CN110751053B (en) * 2019-09-26 2022-02-22 高新兴科技集团股份有限公司 Vehicle color identification method, device, equipment and storage medium
CN111340896A (en) * 2020-02-21 2020-06-26 北京迈格威科技有限公司 Object color identification method and device, computer equipment and storage medium
CN111340896B (en) * 2020-02-21 2023-10-27 北京迈格威科技有限公司 Object color recognition method, device, computer equipment and storage medium
CN111626188A (en) * 2020-05-26 2020-09-04 西南大学 Indoor uncontrollable open fire monitoring method and system
CN111626188B (en) * 2020-05-26 2022-05-06 西南大学 Indoor uncontrollable open fire monitoring method and system
CN112016433A (en) * 2020-08-24 2020-12-01 高新兴科技集团股份有限公司 Vehicle color identification method based on deep neural network
CN113570597A (en) * 2021-09-01 2021-10-29 南通中煌工具有限公司 Method and system for judging dirt degree of mud-headed vehicle carriage based on artificial intelligence
CN118334021A (en) * 2024-06-12 2024-07-12 中铁七局集团第三工程有限公司 Large-diameter pressure steel pipe coating color difference detection method based on machine vision

Also Published As

Publication number Publication date
CN106503638B (en) 2019-09-13

Similar Documents

Publication Publication Date Title
CN106503638B (en) Image procossing, vehicle color identification method and system for color identification
US7218759B1 (en) Face detection in digital images
CN110298297B (en) Flame identification method and device
CN105404884B (en) Image analysis method
CN109670515A (en) Method and system for detecting building change in unmanned aerial vehicle image
CN110598560A (en) Night monitoring and identifying method and system based on neural network enhancement
CN107610114A (en) Optical satellite remote sensing image cloud snow mist detection method based on SVMs
CN105279772B (en) A kind of trackability method of discrimination of infrared sequence image
CN108491498A (en) A kind of bayonet image object searching method based on multiple features detection
CN113139521A (en) Pedestrian boundary crossing monitoring method for electric power monitoring
CN112149476B (en) Target detection method, device, equipment and storage medium
CN112241692B (en) Channel foreign matter intelligent detection and classification method based on aerial image super-pixel texture
CN113111806B (en) Method and system for target recognition
US8462211B2 (en) Method of detecting objects
CN111259756A (en) Pedestrian re-identification method based on local high-frequency features and mixed metric learning
CN105740844A (en) Insulator cracking fault detection method based on image identification technology
CN109544535B (en) Peeping camera detection method and system based on optical filtering characteristics of infrared cut-off filter
CN108182691B (en) Method and device for identifying speed limit sign and vehicle
CN107045630B (en) RGBD-based pedestrian detection and identity recognition method and system
CN115841633A (en) Power tower and power line associated correction power tower and power line detection method
US9953238B2 (en) Image processing method and system for extracting distorted circular image elements
CN109635679B (en) Real-time target paper positioning and loop line identification method
CN111582278B (en) Portrait segmentation method and device and electronic equipment
CN113221603A (en) Method and device for detecting shielding of monitoring equipment by foreign matters
JP3962517B2 (en) Face detection method and apparatus, and computer-readable medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant