CN110460813A - Container image acquisition device and acquisition method based on video stream - Google Patents
- Publication number: CN110460813A
- Application number: CN201910740684.1A
- Authority
- CN
- China
- Prior art keywords
- image
- container
- truck
- video
- lane
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/14—Image acquisition
- G06V30/148—Segmentation of character regions
- G06V30/153—Segmentation of character regions using recognition of characters or words
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/66—Remote control of cameras or camera parts, e.g. by remote control devices
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
Abstract
The present invention provides a container image acquisition device based on a video stream, arranged on a lane through which container trucks pass, comprising: a truck position acquisition device, a control device, an image acquisition and processing device, and high-definition network cameras. The invention is mainly applied at container yard, customs or harbour container entrances to acquire container images, and is characterized by video-stream signal acquisition, a small amount of required equipment, strong adaptability, accurate acquisition and low cost. The invention also discloses a container image acquisition method based on a video stream.
Description
Technical Field
The invention relates to the technical field of automatic control, in particular to a container image acquisition device and an acquisition method based on video streaming.
Background
The collection and recording of container numbers at container goods yards, customs and port entrances is one of the important tasks of container management and tracking. At present there are two main implementations: manual recording of the container numbers, and collection by a container-number identification system. In manual recording, an operator on site writes down the number of each container carried by a truck passing through the channel; this wastes manpower and carries certain safety hazards during the work. A container-number identification system collects the numbers with cameras installed on both sides of a lane; such a scheme generally requires 4-8 cameras, 4-6 pairs of infrared correlation switches and other control equipment per lane to achieve vehicle positioning and container image collection. It therefore needs a large amount of equipment at high cost, can only collect partial images of the container, has difficulty discriminating the container type (single box, double box or long box), and yields a low recognition rate for the container number.
Disclosure of Invention
The embodiments of the invention provide a container image acquisition device and acquisition method based on a video stream, solving the problems that existing container freight schemes need a large amount of equipment, are costly, and can only acquire partial images of the container.
A video-stream-based container image capture device, placed on a lane through which container trucks pass, comprising a truck position acquisition device, a control device, an image acquisition and processing device, and high-definition network cameras, wherein:
the truck position acquisition device comprises a ground induction coil arranged at the start of the lane, a vehicle detector connected to the coil, and two pairs of infrared correlation switches, A1-A2 and A3-A4, arranged in sequence along the direction of travel. The distance between the two pairs of switches is greater than the length of the truck head but less than the length of the whole truck. The ground induction coil is connected through the vehicle detector to the PLC (programmable logic controller) of the control device and detects whether a truck has entered the lane: when a truck passes, the coil generates a trigger signal, which the vehicle detector forwards to the PLC. The two pairs of infrared correlation switches are also connected to the PLC. When no truck is in the lane, the switches are in the connected state; as a truck passes, each switch is first blocked by the vehicle and then reconnects. The control device collects these infrared signal changes, and the position information of the container truck is derived from the switch states;
the control device comprises a PLC and a network switch; according to the truck position information collected by the ground induction coil and the infrared correlation switches, the PLC determines through its control logic whether the cameras should start or stop recording, outputs the corresponding instruction and transmits it to the image acquisition and processing device, which forwards it to the high-definition network cameras to start or stop capturing the video signal;
the image acquisition and processing device mainly comprises an image acquisition and processing server and an image display terminal; the server is connected to the network switch of the control device by network cable, receives the recording control instructions sent by the control device, controls the cameras to capture the video signal, stores the video on the server and performs the subsequent video-stream-based image processing; the image display terminal is connected to the server through the switch and displays the complete container image and the recognized container number produced by the server;
the high-definition network cameras C1 and C2 are symmetrically mounted at the tops of the two sides of the lane, between the two pairs of infrared correlation switches. They receive video-recording control instructions from the image acquisition and processing device and record video of the container carried by trucks passing through the lane.
Optionally, in the truck position acquisition device, when the truck just enters the lane, the lane ground induction coil is triggered and the infrared correlation switch A1-A2 changes from connected to disconnected. As the truck advances, because the truck head is far shorter than the container body, the front of the head has not yet reached A3-A4 (i.e. A3-A4 is still connected) when A1-A2 has changed back from disconnected to connected, indicating that the head has passed A1-A2 but not yet reached A3-A4. When A1-A2 is broken again after the head has passed, the container body has begun to pass A1-A2. As the truck continues, A3-A4 breaks while A1-A2 reconnects, indicating that the container body has passed A1-A2 and entered A3-A4. When A1-A2 and A3-A4 are connected at the same time, the container has left the area monitored by the infrared correlation switches. The travelling position of the truck is thus judged from the state changes of the switches.
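The switch-state logic described above can be sketched as a small state machine. The following is a hypothetical illustration only (the function name, event strings and exact promotion of states to events are invented for this sketch and are not part of the patent):

```python
def gate_events(states):
    """Map a sequence of (a12, a34) beam states to recording commands.

    True = beam connected (unbroken), False = blocked. Per the described
    logic: the first A1-A2 break is the truck head; the second break is
    the container body entering, which starts recording; recording stops
    once both pairs are connected again after the container has passed.
    """
    events = []
    a12_breaks = 0
    recording = False
    prev = (True, True)
    for a12, a34 in states:
        if prev[0] and not a12:              # A1-A2 just became blocked
            a12_breaks += 1
            if a12_breaks == 2 and not recording:
                events.append("start_recording")
                recording = True
        # both pairs reconnected after the container passed -> stop
        if recording and a12 and a34 and not (prev[0] and prev[1]):
            events.append("stop_recording")
            recording = False
        prev = (a12, a34)
    return events
```

Feeding it the sequence head-blocks / head-passes / body-blocks / body-passes yields one start and one stop event.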
Optionally, in the image acquisition and processing device, when the truck passes through the intelligent gate lane, the ground induction coil and infrared correlation switches of the truck position acquisition device are triggered, and the control device PLC sends a recording control command to the image acquisition and processing device according to its logic. The device controls the two high-definition network cameras to capture video signals. To obtain a panorama of the whole container, key frame images in the captured video streams are extracted using the difference between adjacent frames, then registered and stitched, finally forming a complete panorama of the container body. The image recognition processing identifies the container number, and the number and image are displayed on the image display terminal computer.
A container image acquisition method based on video streaming comprises the following steps:
The first step: when the truck enters the lane, the lane ground induction coil is triggered and, blocked by the truck head, the infrared correlation switch A1-A2 changes from connected to disconnected. As the truck advances, because the head is shorter than the container body, the front of the head has not yet reached A3-A4 (A3-A4 is still connected) when A1-A2 has changed back from disconnected to connected, indicating that the head has passed A1-A2 but not yet reached A3-A4;
The second step: after the head has passed A1-A2, when A1-A2 is broken again, the container body has begun to pass A1-A2. The state change of A1-A2 is transmitted through the control device PLC to the image acquisition and processing device, which commands cameras C1 and C2 to start recording video;
The third step: the truck continues to travel; A3-A4 breaks while A1-A2 reconnects, indicating that the container body has passed A1-A2 and entered A3-A4;
The fourth step: when A1-A2 and A3-A4 are connected at the same time, the container and the truck have passed through the infrared detection area. The state signals of A1-A2 and A3-A4 are transmitted through the control device PLC to the image acquisition and processing device, which commands C1 and C2 to stop recording;
The fifth step: from the start of recording by C1 and C2 to the stop of recording, two complete videos of the container body are formed, and the video signals are stored via the switch on the image acquisition and processing device server;
The sixth step: with the capture frame rate (fps) of the camera set, the captured video signal yields a video sequence I1(x,y), I2(x,y), …, In(x,y);

The seventh step: compute for the video sequence I1(x,y), I2(x,y), …, In(x,y) the texture feature vector

V = [f1, f2, f3]

where f1 is the second-order-moment feature, f2 the entropy feature and f3 the local-stationarity feature;
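As an illustration of one component of the texture vector, the entropy feature f2 can be computed from the grey-level histogram. This is a minimal sketch under stated assumptions: the helper name `entropy_feature` is invented, and f1 and f3 would in practice be derived from a grey-level co-occurrence matrix, which is omitted here:

```python
import math

def entropy_feature(gray_values):
    """Shannon entropy of the grey-level histogram, a common form of
    the entropy texture feature f2. gray_values is a flat iterable of
    pixel grey levels."""
    hist = {}
    for v in gray_values:
        hist[v] = hist.get(v, 0) + 1
    n = len(gray_values)
    # -sum p * log2(p) over the observed grey levels
    return -sum((c / n) * math.log2(c / n) for c in hist.values())
```

A uniform patch gives entropy 0; a patch split evenly between two grey levels gives entropy 1 bit.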
The eighth step: extract key frames. Let i be a selected key frame in the video sequence and j the next key frame to be selected; the similarity of the two frame images is computed and compared against a threshold [formula not legible in the source], where i and j are the sequence indices of the frames in the video and T is the threshold for the similarity condition. To improve picture quality, the first frame and the last frame are always key frames;
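Because the similarity formula itself is not legible in this text, the sketch below substitutes a simple normalized absolute-difference similarity as a stand-in; only the shape of the procedure (threshold T, advancing from the last chosen key frame, forcing the first and last frames to be key frames) follows the step above:

```python
def select_keyframes(frames, threshold):
    """Pick key-frame indices from a sequence of per-frame feature
    vectors. A frame becomes a key frame when its similarity to the
    previous key frame drops below `threshold`. The similarity measure
    here is a stand-in, not the patent's formula."""
    def similarity(u, v):
        diff = sum(abs(a - b) for a, b in zip(u, v))
        total = sum(abs(a) + abs(b) for a, b in zip(u, v)) or 1
        return 1 - diff / total        # 1 = identical, 0 = disjoint

    keys = [0]                          # first frame is always a key frame
    for j in range(1, len(frames)):
        if similarity(frames[keys[-1]], frames[j]) < threshold:
            keys.append(j)
    if keys[-1] != len(frames) - 1:     # last frame is always a key frame
        keys.append(len(frames) - 1)
    return keys
```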
The ninth step: fuse the key frames using the weighted-average method. Suppose f1 and f2 are the two images to be stitched; in the region where f1 and f2 spatially overlap, the fused image pixel f(x,y) can be expressed as

f(x,y) = d1·f1(x,y) + d2·f2(x,y)

where d1 and d2 are weights with d1 + d2 = 1, 0 ≤ d1 ≤ 1, 0 ≤ d2 ≤ 1. Across the overlap region d1 changes from 1 to 0 and d2 from 0 to 1, realizing a smooth transition from f1 to f2;
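A minimal sketch of the weighted-average fusion over one overlapping image row, with d1 falling from 1 to 0 and d2 = 1 − d1 rising across the overlap as the step describes (the linear ramp is an assumption; the text only requires a monotone change):

```python
def blend_overlap(row1, row2):
    """Linearly blend the overlapping columns of two image rows:
    f = d1*f1 + d2*f2 with d1 + d2 = 1, d1 ramping 1 -> 0."""
    n = len(row1)
    out = []
    for k in range(n):
        d2 = k / (n - 1) if n > 1 else 1.0
        d1 = 1.0 - d2
        out.append(d1 * row1[k] + d2 * row2[k])
    return out
```

At the left edge of the overlap the result equals f1, at the right edge f2, with a smooth ramp in between.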
The tenth step: stitch based on the video-stream key frames to form the container panoramic image f(x,y) and store it on the image acquisition and processing device server;
The eleventh step: graying. Gray-scale processing in the image recognition processing device enhances the contrast between characters and the background colors, filters out color features irrelevant to recognition, and facilitates edge detection on the container panorama. Using the weighted-average method, each image pixel is assigned the weighted mean of its three color components:

I = (PR·R + PG·G + PB·B)/3

where PR, PG and PB are the weighting coefficients of the R, G and B components of each pixel point; the resulting grayed image is f'(x,y). In this embodiment the weighting coefficients are 0.299, 0.587 and 0.114 respectively;
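A one-pixel sketch of the weighted-average graying. Note that the `/3` in the printed formula conflicts with weights that already sum to 1 and appears to be a leftover from the unweighted mean, so the conventional weighted form is assumed here:

```python
def to_gray(pixel, weights=(0.299, 0.587, 0.114)):
    """Weighted grayscale conversion of one (R, G, B) pixel using the
    embodiment's coefficients. The division by 3 from the printed
    formula is omitted on the assumption it is a typesetting remnant."""
    r, g, b = pixel
    pr, pg, pb = weights
    return pr * r + pg * g + pb * b
```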
The twelfth step: the image recognition processing device denoises the grayed panorama f'(x,y) with a second-order zero-mean Gaussian filter function G(x,y):

f''(x,y) = f'(x,y) * G(x,y)

where G(x,y) = (1/(2πσ²))·exp(−(x² + y²)/(2σ²)) is the second-order zero-mean Gaussian filter function, σ is the standard deviation representing the blur factor in the image detection process, r is the blur radius, and (x,y) are the pixel coordinates;
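The Gaussian kernel of the twelfth step can be built directly from the formula. A sketch assuming a square kernel of the given blur radius, normalized so the weights sum to 1:

```python
import math

def gaussian_kernel(radius, sigma):
    """Discretize G(x, y) = exp(-(x^2 + y^2) / (2*sigma^2)) / (2*pi*sigma^2)
    over a (2*radius + 1) square window and normalize the weights."""
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          / (2 * math.pi * sigma * sigma)
          for x in range(-radius, radius + 1)]
         for y in range(-radius, radius + 1)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]
```

Convolving f'(x,y) with this kernel gives the denoised f''(x,y).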
The thirteenth step: local gradient computation based on the Canny operator.

The magnitude g(x,y) and direction θg of the local gradient at each image point (x,y) are computed based on the Canny operator as:

g(x,y) = sqrt(gx² + gy²), θg = arctan(gx/gy)

where gx and gy are the horizontal and vertical first-order partial derivatives of f''(x,y) at (x,y).
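A small sketch of the thirteenth step. `atan2` is used in place of the printed `arctan(gx/gy)` so that the direction remains defined when gy = 0; the argument order mirrors the patent's formula:

```python
import math

def local_gradient(gx, gy):
    """Return (magnitude, direction) of the local gradient from its
    horizontal and vertical components."""
    g = math.hypot(gx, gy)          # sqrt(gx^2 + gy^2)
    theta = math.atan2(gx, gy)      # arctan(gx / gy), quadrant-safe
    return g, theta
```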
The fourteenth step: non-maximum suppression based on the Canny operator.

To keep only the points where the local gradient strength is maximal along the gradient direction, non-maximum suppression is applied to the local gradient magnitude g(x,y) and direction θg. Define the gradient values of the two sub-pixel points along the gradient direction at coordinate (x,y) as gt1 and gt2; then:

gt1 = g2 + tanθg × (g1 − g2)
gt2 = g4 + tanθg × (g3 − g4)

where g1, g2, g3 and g4 are the gradient values at the corresponding neighbouring pixel points, and θg is the gradient direction at coordinate (x,y). After determining gt1 and gt2, the gradient value g(x,y) at (x,y) is compared with gt1 and gt2: if g(x,y) is the maximum, the pixel value at (x,y) is set to 1; otherwise it is suppressed to 0. Repeating this for all positions (x,y) in f''(x,y) completes the non-maximum suppression, converting f''(x,y) into a binary edge profile image GT(x,y).
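The sub-pixel interpolation test of the fourteenth step, sketched for a single pixel. The mapping of g1..g4 onto concrete neighbouring pixels depends on the gradient sector and is left abstract here; the caller supplies the four neighbour magnitudes:

```python
import math

def nms_keep(g_center, g1, g2, g3, g4, theta_g):
    """Non-maximum suppression test at one pixel: interpolate the two
    sub-pixel gradient magnitudes gt1, gt2 along the gradient direction
    and return 1 if the centre magnitude dominates, else 0."""
    t = math.tan(theta_g)
    gt1 = g2 + t * (g1 - g2)
    gt2 = g4 + t * (g3 - g4)
    return 1 if g_center >= gt1 and g_center >= gt2 else 0
```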
The fifteenth step: threshold-method edge search and connection based on the Canny operator.

To eliminate noise without omitting edge-pixel information, edge search and connection are performed on the non-maximum-suppressed result f''(x,y) using a double threshold [Th1, Th2]. Pixels of f''(x,y) whose gradient value is greater than Th2 are taken as strong edge pixels, forming the image GT2(x,y); pixels whose gradient value lies between Th1 and Th2 are taken as weak edge pixels, forming the image GT1(x,y).

Because GT2(x,y) uses the higher threshold, it removes most of the noise but also loses much correct edge-pixel information; GT1(x,y), with its lower search threshold, retains complete edge information but contains a large amount of environmental noise. GT1(x,y) is therefore used to patch GT2(x,y), which is low in noise but incomplete in edges, filtering most of the noise while recovering all single-pixel edges. Concretely, for each point in GT2(x,y), a weak edge pixel of GT1(x,y) is connected if it lies within M(x,y), the set of the 8 pixel coordinates in the neighbourhood of the coordinate (x,y). The image GT2(x,y) after this threshold-method edge search and edge connection is the processing result for the detected image f''(x,y).
The sixteenth step: extract the character target regions. The image is divided into different connected domains according to the edge information. Each connected domain is scanned row by row and column by column, the pixel points in the region are collected, and the average pixel width Wi and average pixel height Hi of the i-th connected domain are computed. From these the aspect ratio of the i-th connected region is calculated and, together with the height and width M × N of the target region, the region matching degree e is computed. The connected domains are then evaluated with (−e, e) as the admissible range, finally determining the character target regions;
The seventeenth step: character segmentation. Using a character segmentation method based on row-column scanning, the located image is first scanned in the 0° direction; when the number of foreground points in the searched area exceeds a given threshold Tj, the region is taken as a character row region and its row boundaries are determined. The region is then scanned by columns in the 90° direction; when the number of foreground points in the region exceeds a given threshold Ti, the region is preliminarily identified as a character region and its column boundaries are determined. Combining the row and column boundaries, the corresponding characters are segmented and located;
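The row/column projection scan of the seventeenth step reduces to finding runs in a projection profile whose foreground-pixel count exceeds a threshold. A sketch over one profile (the helper name `find_runs` is invented); applying it first to row sums and then to column sums yields the row and column boundaries:

```python
def find_runs(counts, threshold):
    """Locate bands in a projection profile: consecutive indices whose
    foreground-pixel count exceeds `threshold` form one band, returned
    as inclusive (start, end) pairs."""
    runs, start = [], None
    for i, c in enumerate(counts):
        if c > threshold and start is None:
            start = i                       # band opens
        elif c <= threshold and start is not None:
            runs.append((start, i - 1))     # band closes
            start = None
    if start is not None:                   # band reaches the profile end
        runs.append((start, len(counts) - 1))
    return runs
```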
The eighteenth step: image recognition. The container number characters are recognized with a weighted template-matching algorithm followed by a classifier, and the container panoramic photo and the recognized container number are finally displayed on the image display terminal;
The nineteenth step: redundancy check and box-type judgment. The panoramic images captured by the two cameras C1 and C2 are each recognized by the image acquisition and processing software; after the box numbers are identified, a redundancy check is performed to obtain the valid container number. If one group of data is identified, the truck carries either a 40-foot long box or a 20-foot single box; if two groups of numbers are identified, the truck carries two 20-foot boxes (a double box). If A1-A2 and A3-A4 were ever disconnected at the same time, the carried container is a long box or a double box; combining these conditions, the load type of the truck (single box, long box or double box) is comprehensively judged.
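The box-type judgment of the nineteenth step can be summarized as a small decision rule. A hypothetical helper (the function name, parameter names and return strings are invented for this sketch):

```python
def classify_load(box_numbers, both_pairs_broken):
    """Box-type judgment: `box_numbers` is the list of validated
    container numbers after the redundancy check; `both_pairs_broken`
    records whether A1-A2 and A3-A4 were ever disconnected at the same
    time (which implies a long or double box)."""
    if len(box_numbers) == 2:
        return "double 20ft"
    if len(box_numbers) == 1:
        return "40ft long box" if both_pairs_broken else "single 20ft"
    return "unknown"
```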
The invention aims to provide a container image acquisition device based on a video stream in which one lane needs only 2 cameras and 2 pairs of infrared correlation switches. The design is simplified, the amount of equipment is reduced and the construction cost is lowered. The video-stream signal of a truck passing through the lane is captured by the cameras, and the images are stitched with a video-stream key-frame stitching technique into a panorama of the container, improving image acquisition quality and correspondingly the recognition rate of the container number. When a container truck passes through the lane, the ground induction coil installed in the lane detects that the truck has entered, the travelling position of the truck is detected by the infrared devices on the two sides of the lane, and the cameras are triggered to record the container video; recording stops after the truck has passed the detection area. Finally, the container panorama is extracted from the key frame images in the video stream, and the container number is recognized by OCR.
The invention is mainly applied at container goods yard, customs or port container entrances to acquire container images, and is characterized by video-stream signal acquisition, a small amount of required equipment, strong adaptability, accurate acquisition and low cost.
Drawings
Fig. 1 is a block diagram of the video-stream-based container image capture device at an intelligent container gate;
fig. 2 is a block diagram of a video stream based container image capture device.
Detailed Description
Features and exemplary embodiments of various aspects of the present invention will be described in detail below. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present invention by illustrating examples of the present invention. The present invention is in no way limited to any specific configuration and algorithm set forth below, but rather covers any modification, replacement or improvement of elements, components or algorithms without departing from the spirit of the invention. In the following description, well-known structures and techniques are not shown in order to avoid unnecessarily obscuring the present invention.
Example embodiments will now be described with reference to the accompanying drawings, which may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
As shown in figs. 1-2, the present invention provides a video-stream-based container image capture device, disposed on a lane through which container trucks pass, comprising a truck position acquisition device, a control device, an image acquisition and processing device, and high-definition network cameras, wherein:
the truck position acquisition device comprises a ground induction coil arranged at the start of the lane, a vehicle detector connected to the coil, and two pairs of infrared correlation switches, A1-A2 and A3-A4, arranged in sequence along the direction of travel. The distance between the two pairs of switches is greater than the length of the truck head but less than the length of the whole truck. The ground induction coil is connected through the vehicle detector to the PLC (programmable logic controller) of the control device and detects whether a truck has entered the lane: when a truck passes, the coil generates a trigger signal, which the vehicle detector forwards to the PLC. The two pairs of infrared correlation switches are also connected to the PLC. When no truck is in the lane, the switches are in the connected state; as a truck passes, each switch is first blocked by the vehicle and then reconnects. The control device collects these infrared signal changes, and the position information of the container truck is derived from the switch states;
the control device comprises a PLC and a network switch; according to the truck position information collected by the ground induction coil and the infrared correlation switches, the PLC determines through its control logic whether the cameras should start or stop recording, outputs the corresponding instruction and transmits it to the image acquisition and processing device, which forwards it to the high-definition network cameras to start or stop capturing the video signal;
the image acquisition and processing device mainly comprises an image acquisition and processing server and an image display terminal; the server is connected to the network switch of the control device by network cable, receives the recording control instructions sent by the control device, controls the cameras to capture the video signal, stores the video on the server and performs the subsequent video-stream-based image processing; the image display terminal is connected to the server through the switch and displays the complete container image and the recognized container number produced by the server;
the high-definition network cameras C1 and C2 are symmetrically mounted at the tops of the two sides of the lane, between the two pairs of infrared correlation switches. They receive video-recording control instructions from the image acquisition and processing device and record video of the container carried by trucks passing through the lane.
Optionally, in the truck position acquisition device, when the truck just enters the lane, the lane ground induction coil is triggered and the infrared correlation switch A1-A2 changes from connected to disconnected. As the truck advances, because the truck head is far shorter than the container body, the front of the head has not yet reached A3-A4 (i.e. A3-A4 is still connected) when A1-A2 has changed back from disconnected to connected, indicating that the head has passed A1-A2 but not yet reached A3-A4. When A1-A2 is broken again after the head has passed, the container body has begun to pass A1-A2. As the truck continues, A3-A4 breaks while A1-A2 reconnects, indicating that the container body has passed A1-A2 and entered A3-A4. When A1-A2 and A3-A4 are connected at the same time, the container has left the area monitored by the infrared correlation switches. The travelling position of the truck is thus judged from the state changes of the switches.
Optionally, in the image acquisition and processing device: when a truck passes through the intelligent gate lane, the ground induction coil and the infrared correlation switches of the truck position acquisition device are triggered, and the control device PLC sends a photographing control command to the image acquisition and processing device according to its logic judgment. The device controls the two high-definition network cameras to acquire video signals. To obtain a panoramic image of the whole container, the difference between adjacent frames of the video stream signals is used to extract, register and splice key frame images from the acquired video streams, finally forming a complete panoramic image of the container body. The image recognition processing then identifies the container number, and the number and the image are displayed on the image display terminal computer.
A container image acquisition method based on video streaming comprises the following steps:
The first step: when the truck enters the lane, the lane ground induction coil is triggered and, blocked by the truck head, the infrared correlation switch A1-A2 changes from connected to disconnected. As the truck advances, because the truck head is shorter than the container body, A3-A4 is still connected before the front of the truck reaches it, while A1-A2 has changed back from disconnected to connected; this indicates that the truck head has passed A1-A2 but not yet reached A3-A4;
The second step: after the truck head has passed A1-A2, when A1-A2 becomes disconnected again, the container body has started passing A1-A2. The state change of A1-A2 is transmitted to the image acquisition and processing device through the control device PLC, which controls cameras C1 and C2 to start recording video;
The third step: the truck continues to travel; when A3-A4 is broken while A1-A2 is connected, the container body has passed A1-A2 and entered A3-A4;
The fourth step: when A1-A2 and A3-A4 are connected at the same time, the container and the truck have passed through the infrared detection area. The state signals of A1-A2 and A3-A4 are transmitted to the image acquisition and processing device through the control device PLC, which controls C1 and C2 to stop recording;
The fifth step: from the moment C1 and C2 start recording to the moment they stop, two complete videos of the container body are formed, and the video signals are stored in the image acquisition and processing server through the switch;
The sixth step: according to the camera's capture frame rate (fps), a video sequence I1(x, y), I2(x, y), …, In(x, y) is obtained from the captured video signals;
The seventh step: computing a video sequence I1(x,y),I2(x,y),…,In(x, y) texture feature vector
V=[f1,f2,f3]
Wherein:
f1for second order matrix features, f2As an entropy feature, f3Local stationary features;
The eighth step: extract key frames. Let i be a selected key frame in the video sequence and j the candidate next key frame, where i and j are the sequence numbers of frames in the video; a similarity measure between the two container images is computed and compared against the threshold T that defines when two frames are considered similar. In the key frame selection, to improve picture quality, the first frame and the last frame are always taken as key frames;
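The exact similarity formula is not reproduced in this text. As a stand-in, the sketch below selects key frames using a mean absolute frame difference against the threshold T, keeping the first and last frames as the step requires; the function names are illustrative:

```python
def mean_abs_diff(frame_a, frame_b):
    """Mean absolute grayscale difference between two equal-sized frames
    (frames are lists of rows of pixel values)."""
    n = len(frame_a) * len(frame_a[0])
    return sum(abs(a - b) for ra, rb in zip(frame_a, frame_b)
               for a, b in zip(ra, rb)) / n

def select_key_frames(frames, T):
    """Keep the first and last frames; in between, keep a frame only if
    it differs from the last selected key frame by more than T."""
    if not frames:
        return []
    keys = [0]
    for j in range(1, len(frames) - 1):
        if mean_abs_diff(frames[keys[-1]], frames[j]) > T:
            keys.append(j)
    if len(frames) > 1:
        keys.append(len(frames) - 1)
    return keys
```

Any monotone dissimilarity measure (histogram distance, texture-vector distance as in the seventh step) can be substituted for `mean_abs_diff`.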
The ninth step: fuse key frames. The key-frame images are fused with a weighted average method. Let f1, f2 be the two images to be spliced; where f1 and f2 overlap spatially, the fused image pixel can be expressed as

f(x, y) = d1·f1(x, y) + d2·f2(x, y)

where d1, d2 are weights with d1 + d2 = 1, 0 ≤ d1 ≤ 1, 0 ≤ d2 ≤ 1. Across the overlap region, d1 changes from 1 to 0 and d2 changes from 0 to 1, realizing a smooth transition from f1 to f2;
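The linear weight ramp across one overlapping scanline can be sketched as follows (the function name is illustrative):

```python
def fuse_overlap(row1, row2):
    """Blend one overlapping scanline of two key frames with weight
    d1 ramping from 1 to 0 and d2 = 1 - d1 across the overlap width."""
    w = len(row1)
    out = []
    for k, (p1, p2) in enumerate(zip(row1, row2)):
        d1 = 1.0 - k / (w - 1) if w > 1 else 0.5
        d2 = 1.0 - d1
        out.append(d1 * p1 + d2 * p2)
    return out
```

Applying this to every row of the overlap region yields the smooth f1-to-f2 transition described above.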
the tenth step: splicing based on the video stream key frames to form a container panoramic image f (x, y), and storing the container panoramic image f (x, y) to an image acquisition and processing device server;
The eleventh step: gray-scale processing by the image recognition processing device enhances the contrast between the characters and the background colours and filters out colour features irrelevant to recognition, facilitating edge detection on the container panoramic image. Using a weighted average method, each image pixel is assigned the weighted value I of its three colour components:

I = (P_R·R + P_G·G + P_B·B)/3

where P_R, P_G, P_B are the weighting coefficients of the R, G, B components of each pixel (0.299, 0.587 and 0.114 respectively in this embodiment), and the resulting grayed image is f'(x, y);
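A per-pixel sketch of this conversion, following the text verbatim. Note that with weights summing to 1 the extra division by 3 compresses the gray range to roughly 0–85; many implementations omit it, but it is kept here as stated:

```python
def to_gray(pixel, pr=0.299, pg=0.587, pb=0.114):
    """Weighted grayscale value of an (R, G, B) pixel as in the
    embodiment: I = (P_R*R + P_G*G + P_B*B) / 3."""
    r, g, b = pixel
    return (pr * r + pg * g + pb * b) / 3
```

Mapping `to_gray` over every pixel of the panoramic image produces f'(x, y).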
The twelfth step: the image recognition processing device denoises the grayed panoramic image f'(x, y) with a second-order zero-mean Gaussian filter function G(x, y):

f''(x, y) = f'(x, y) * G(x, y)

where G(x, y) = exp(−r²/(2σ²)) / (2πσ²) is the second-order zero-mean Gaussian filter function, σ is the standard deviation representing the blur factor in the image detection process, r is the blur radius (r² = x² + y²), and (x, y) are the pixel coordinates;
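A sampled, normalized version of this kernel can be built as follows (sampling radius and normalization are standard practical choices, not specified in the text):

```python
import math

def gaussian_kernel(radius, sigma):
    """Sampled 2-D zero-mean Gaussian G(x, y) =
    exp(-(x^2 + y^2) / (2*sigma^2)) / (2*pi*sigma^2),
    normalized so the discrete kernel sums to 1."""
    k = [[math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
          / (2.0 * math.pi * sigma * sigma)
          for x in range(-radius, radius + 1)]
         for y in range(-radius, radius + 1)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]
```

Convolving f'(x, y) with this kernel yields the denoised image f''(x, y).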
The thirteenth step: local gradient calculation based on the Canny operator. The magnitude g(x, y) and direction θg of the local gradient at each point (x, y) of the image are calculated as

g(x, y) = √(gx² + gy²)
θg = arctan(gx / gy)

where gx and gy are the first-order partial derivatives of f''(x, y) in the x and y directions;
The fourteenth step: non-maximum suppression based on the Canny operator. To keep only the points whose local gradient strength is maximal along the gradient direction, non-maximum suppression is applied to the local gradient magnitude g(x, y) and direction θg. The gradient values of the two sub-pixel points along the gradient direction at coordinate (x, y) are defined as gt1 and gt2:

gt1 = g2 + tan θg × (g1 − g2)
gt2 = g4 + tan θg × (g3 − g4)

where g1, g2, g3 and g4 are the gradient values at the corresponding neighbouring pixels, and θg is the gradient direction at (x, y) that determines the two sub-pixel gradient values gt1 and gt2. The gradient value g(x, y) is then compared with gt1 and gt2: if g(x, y) is the maximum, the pixel value at (x, y) is set to 1; otherwise it is suppressed to 0. Repeating this for every position (x, y) in f''(x, y) completes the non-maximum suppression, converting f''(x, y) into a binary edge contour image GT(x, y).
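The per-pixel suppression test can be sketched as below. The choice of neighbour pixels g1..g4 and the use of |tan θg| are simplifying assumptions of this sketch:

```python
import math

def nms_keep(g, g1, g2, g3, g4, theta_g):
    """Non-maximum suppression test at one pixel: interpolate the two
    sub-pixel gradient values gt1, gt2 along direction theta_g and keep
    the pixel (return 1) only if g is the local maximum, else 0."""
    t = abs(math.tan(theta_g))
    gt1 = g2 + t * (g1 - g2)
    gt2 = g4 + t * (g3 - g4)
    return 1 if g >= gt1 and g >= gt2 else 0
```

Calling this at every coordinate of f''(x, y) produces the binary edge contour image GT(x, y).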
The fifteenth step: threshold value method edge search and connection based on Canny operator
To ensure to eliminate noiseEdge search and connection are performed on the edge result f' (x, y) with non-maximum value suppressed by using a threshold method without missing edge pixel information while sounding, and a threshold [ T ] is seth1,Th2]The gradient value in the image f' (x, y) obtained by suppressing the non-maximum value is larger than Th2The pixel points of (A) are taken as strong edge pixels to form an image GT2(x, y) setting the gradient value at Th1And Th2The pixels in between are taken as weak edge pixels to form an image GT1(x, y), namely:
image GT2(x, y)) a higher threshold is set so that the image loses much of the correct edge pixel information while most of the noise is removed, and image G has lost much of the correct edge pixel informationT1(x, y)) is set to be lower, so that the image G which contains a large amount of environmental noise Canny operators and has low search threshold and complete edge information at the same time of retaining correct edge pixel informationT1(x, y) to patch image G which is low in noise but incomplete in edge informationT2(x, y) to achieve the filtering of most of the noise while obtaining all the edges of a single pixel, i.e. for image GT2Each point in (x, y) has:
in the formula: m (x, y) is a set of 8 pixel coordinates in the neighborhood of coordinates (x, y), and the image G is subjected to edge search and edge connection by a threshold methodT2And (x, y) is the processing result of the image f' (x, y) to be detected.
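The dual-threshold linking can be sketched as an iterated 8-neighbourhood promotion of weak pixels; iterating to a fixed point is an implementation choice of this sketch:

```python
def hysteresis(grad, t_lo, t_hi):
    """Dual-threshold edge linking: pixels with gradient > t_hi are
    strong edges; weak pixels (t_lo < g <= t_hi) become edges only if
    an 8-neighbour is already an edge (repeated to a fixed point)."""
    h, w = len(grad), len(grad[0])
    edge = [[1 if grad[y][x] > t_hi else 0 for x in range(w)]
            for y in range(h)]
    changed = True
    while changed:
        changed = False
        for y in range(h):
            for x in range(w):
                if edge[y][x] or not (t_lo < grad[y][x] <= t_hi):
                    continue
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and edge[ny][nx]:
                            edge[y][x] = 1
                            changed = True
                            break
                    if edge[y][x]:
                        break
    return edge
```

Pixels below Th1 are discarded as noise; only weak pixels connected (possibly transitively) to strong edges survive.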
The sixteenth step: extract the character target region. The image is divided into connected domains according to the edge information. Each connected domain is scanned row by row and column by column, the pixel points in each region are collected, and the average pixel width wi and average pixel height Hi of each connected domain are calculated. The aspect ratio of the i-th connected domain is then computed and, together with the height and width M × N of the target region, used to calculate the region matching degree e; connected domains are evaluated with (−e, e) as the matching range, finally determining the character target region;
The seventeenth step: character segmentation. A character segmentation method based on row-column scanning is used. The localized image is first scanned in the 0° direction; when the number of character foreground points in the searched area exceeds a given threshold Tj, the area is considered a character row region and its row boundaries are determined. The region is then scanned in the 90° (column) direction; when the number of foreground points in a region exceeds a given threshold Ti, the region is preliminarily identified as a character region and its column boundaries are determined. Combining the row and column boundaries, the corresponding characters are segmented and located;
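The row-then-column projection scan can be sketched as follows on a binary image (the run-finding helper and function names are illustrative):

```python
def find_runs(counts, threshold):
    """Return (start, end) index pairs of consecutive positions whose
    foreground-point count exceeds the threshold."""
    runs, start = [], None
    for i, c in enumerate(counts + [0]):  # sentinel closes a final run
        if c > threshold and start is None:
            start = i
        elif c <= threshold and start is not None:
            runs.append((start, i - 1))
            start = None
    return runs

def segment(img, t_row, t_col):
    """Row scan (0 deg) to find character rows, then a column scan
    (90 deg) inside each row band to find individual character boxes;
    returns (row0, row1, col0, col1) tuples."""
    row_counts = [sum(r) for r in img]
    boxes = []
    for r0, r1 in find_runs(row_counts, t_row):
        col_counts = [sum(img[y][x] for y in range(r0, r1 + 1))
                      for x in range(len(img[0]))]
        for c0, c1 in find_runs(col_counts, t_col):
            boxes.append((r0, r1, c0, c1))
    return boxes
```

Each returned box bounds one candidate character for the recognition step.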
The eighteenth step: image recognition. The container number characters are recognized with a weighted template matching algorithm followed by a classifier, and the container panoramic photo and the recognized container number are finally displayed on the image display terminal;
The nineteenth step: redundancy check and box-type judgment. The panoramic images captured by cameras C1 and C2 are recognized separately by the image acquisition and processing software. After the container numbers are recognized, a redundancy check is performed to obtain the valid container number. If the recognized numbers form one group of data, the truck carries a 40-foot long container or a single 20-foot container; if two groups of numbers are recognized, it carries two 20-foot containers. If A1-A2 and A3-A4 were disconnected at the same time, the load can be judged to be a long container or a double container; combining these cues, the single-container, long-container or double-container type of the load is judged comprehensively.
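The cross-check can be sketched as below. Treating the redundancy check as a simple union of the numbers read by the two cameras is an assumption of this sketch, not the patent's stated procedure:

```python
def box_type(numbers_c1, numbers_c2, both_pairs_broken):
    """Merge the container numbers recognized from cameras C1 and C2
    and judge the load type, following the logic of the 19th step.
    both_pairs_broken: True if A1-A2 and A3-A4 were ever disconnected
    at the same time while the truck passed."""
    valid = sorted(set(numbers_c1) | set(numbers_c2))  # redundancy check
    if len(valid) >= 2:
        return valid, "two 20-foot containers"
    if len(valid) == 1:
        # One number: a 40-foot long box when the switch pairs were
        # broken simultaneously, otherwise a single 20-foot box.
        kind = "40-foot container" if both_pairs_broken \
            else "single 20-foot container"
        return valid, kind
    return valid, "no container recognized"
```

The container-number strings used below are placeholders, not real codes.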
The invention aims to provide a container image acquisition device based on video streaming. Only 2 cameras and 2 pairs of infrared correlation switches are needed per lane, which simplifies the design, reduces the amount of equipment and lowers the construction cost. Video stream signals of a truck passing through the lane are acquired by the cameras, and a video-stream key-frame splicing technique stitches the frames into a panoramic image of the container, improving image acquisition quality and correspondingly raising the container number recognition rate. When a container truck passes through the lane, the ground induction coil in the lane detects its entry, the infrared devices on the two sides of the lane track its travel position, and the cameras are triggered to record the container video; recording stops after the truck leaves the detection area. Finally, the container panoramic image is extracted from the key frame images of the video stream, and the container number is recognized by OCR technology.
The invention is mainly applied to container image acquisition at container yards, customs checkpoints and port container gates, and features video stream signal acquisition, a small amount of required equipment, strong adaptability, accurate acquisition and low cost.
In this embodiment, the specification and model of each component device is as follows:
it will be appreciated by persons skilled in the art that the above embodiments are illustrative and not restrictive. Different features which are present in different embodiments may be combined to advantage. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art upon studying the specification and the claims. In the claims, the term "comprising" does not exclude other means or steps; the indefinite article "a" does not exclude a plurality; the terms "first" and "second" are used to denote a name and not to denote any particular order.
Claims (4)
1. A video-stream-based container image capture device for installation on a container truck lane, comprising a truck position acquisition device, a control device, an image acquisition and processing device, and high-definition network cameras, wherein
The truck position acquisition device comprises a ground induction coil arranged at the start of the lane, a vehicle detector connected to the induction coil, and two pairs of infrared correlation switches A1-A2 and A3-A4 arranged in sequence along the travel direction of the lane, the distance between the two pairs of switches being greater than the length of the truck head but less than the length of the whole container truck. The lane ground induction coil is connected to the PLC of the control device through the vehicle detector and detects whether a truck enters the lane: when a truck passes, the ground induction coil generates a trigger signal and transmits it to the vehicle detector, which passes the signal to the PLC. The two pairs of infrared correlation switches are also connected to the PLC; when no container truck is passing, they are in the connected state, and when a truck passes, they are first blocked by the container and then reconnected. The control device collects these changes of the infrared signals, through which the position information of the container truck is obtained;
The control device comprises a PLC and a network switch. According to the truck position information acquired from the ground induction coil and the infrared correlation switches, the PLC judges through its control logic whether the cameras should start or stop shooting, outputs the corresponding instruction, and transmits it to the image acquisition and processing device, which sends it to the high-definition network cameras to control them to start or stop capturing video signals;
The image acquisition and processing device mainly comprises an image acquisition and processing server and an image display terminal. The server is connected to the network switch of the control device through a network cable; it receives the photographing control instructions sent by the control device, controls the cameras to capture video signals, stores the video signals, and performs the subsequent video-stream-based image processing. The image display terminal is connected to the server through the switch and displays the complete container image and the recognized container number produced by the server;
The high-definition network cameras C1 and C2 are symmetrically mounted at the tops of the two sides of the lane, between the two pairs of infrared correlation switches; they receive video shooting control instructions from the image acquisition and processing device and record video of the container carried by a truck passing through the lane.
2. The image capturing apparatus of claim 1,
in the truck position acquisition device: when the truck has just entered the lane, the lane ground induction coil is triggered and the infrared correlation switch A1-A2 changes from connected to disconnected. As the truck advances, because the truck head is far shorter than the container body, A3-A4 is still connected while A1-A2 has changed back from disconnected to connected; this indicates that the truck head has passed A1-A2 but not yet reached A3-A4. When A1-A2 becomes disconnected again after the head has passed, the container body has started passing A1-A2. As the truck continues, A3-A4 is broken while A1-A2 is connected, indicating the container body has passed A1-A2 and entered A3-A4. When A1-A2 and A3-A4 are both connected again, the container has left the area monitored by the infrared correlation switches; the travel position of the truck is thus judged from the state changes of the infrared correlation switches.
3. The image capturing apparatus of claim 1,
in the image acquisition and processing device: when a truck passes through the intelligent gate lane, the ground induction coil and the infrared correlation switches of the truck position acquisition device are triggered, and the control device PLC sends a photographing control command to the image acquisition and processing device according to its logic judgment. The device controls the two high-definition network cameras to acquire video signals. To obtain a panoramic image of the whole container, the difference between adjacent frames of the video stream signals is used to extract, register and splice key frame images from the acquired video streams, finally forming a complete panoramic image of the container body. The image recognition processing then identifies the container number, and the number and the image are displayed on the image display terminal computer.
4. A container image acquisition method based on video streaming comprises the following steps:
The first step: when the truck enters the lane, the lane ground induction coil is triggered and, blocked by the truck head, the infrared correlation switch A1-A2 changes from connected to disconnected. As the truck advances, because the truck head is shorter than the container body, A3-A4 is still connected before the front of the truck reaches it, while A1-A2 has changed back from disconnected to connected; this indicates that the truck head has passed A1-A2 but not yet reached A3-A4;
The second step: after the truck head has passed A1-A2, when A1-A2 becomes disconnected again, the container body has started passing A1-A2. The state change of A1-A2 is transmitted to the image acquisition and processing device through the control device PLC, which controls cameras C1 and C2 to start recording video;
The third step: the truck continues to travel; when A3-A4 is broken while A1-A2 is connected, the container body has passed A1-A2 and entered A3-A4;
The fourth step: when A1-A2 and A3-A4 are connected at the same time, the container and the truck have passed through the infrared detection area. The state signals of A1-A2 and A3-A4 are transmitted to the image acquisition and processing device through the control device PLC, which controls C1 and C2 to stop recording;
The fifth step: from the moment C1 and C2 start recording to the moment they stop, two complete videos of the container body are formed, and the video signals are stored in the image acquisition and processing server through the switch;
The sixth step: according to the camera's capture frame rate (fps), a video sequence I1(x, y), I2(x, y), …, In(x, y) is obtained from the captured video signals;
The seventh step: computing a video sequence I1(x,y),I2(x,y),…,In(x, y) texture feature vector
V=[f1,f2,f3]
Wherein:
f1for second order matrix features, f2As an entropy feature, f3Local stationary features;
The eighth step: extract key frames. Let i be a selected key frame in the video sequence and j the candidate next key frame, where i and j are the sequence numbers of frames in the video; a similarity measure between the two container images is computed and compared against the threshold T that defines when two frames are considered similar. In the key frame selection, to improve picture quality, the first frame and the last frame are always taken as key frames;
The ninth step: fuse key frames. The key-frame images are fused with a weighted average method. Let f1, f2 be the two images to be spliced; where f1 and f2 overlap spatially, the fused image pixel can be expressed as

f(x, y) = d1·f1(x, y) + d2·f2(x, y)

where d1, d2 are weights with d1 + d2 = 1, 0 ≤ d1 ≤ 1, 0 ≤ d2 ≤ 1. Across the overlap region, d1 changes from 1 to 0 and d2 changes from 0 to 1, realizing a smooth transition from f1 to f2;
the tenth step: splicing based on the video stream key frames to form a container panoramic image f (x, y), and storing the container panoramic image f (x, y) to an image acquisition and processing device server;
The eleventh step: gray-scale processing by the image recognition processing device enhances the contrast between the characters and the background colours and filters out colour features irrelevant to recognition, facilitating edge detection on the container panoramic image. Using a weighted average method, each image pixel is assigned the weighted value I of its three colour components:

I = (P_R·R + P_G·G + P_B·B)/3

where P_R, P_G, P_B are the weighting coefficients of the R, G, B components of each pixel (0.299, 0.587 and 0.114 respectively in this embodiment), and the resulting grayed image is f'(x, y);
The twelfth step: the image recognition processing device denoises the grayed panoramic image f'(x, y) with a second-order zero-mean Gaussian filter function G(x, y):

f''(x, y) = f'(x, y) * G(x, y)

where G(x, y) = exp(−r²/(2σ²)) / (2πσ²) is the second-order zero-mean Gaussian filter function, σ is the standard deviation representing the blur factor in the image detection process, r is the blur radius (r² = x² + y²), and (x, y) are the pixel coordinates;
The thirteenth step: local gradient calculation based on the Canny operator. The magnitude g(x, y) and direction θg of the local gradient at each point (x, y) of the image are calculated as

g(x, y) = √(gx² + gy²)
θg = arctan(gx / gy)

where gx and gy are the first-order partial derivatives of f''(x, y) in the x and y directions;
The fourteenth step: non-maximum suppression based on the Canny operator. To keep only the points whose local gradient strength is maximal along the gradient direction, non-maximum suppression is applied to the local gradient magnitude g(x, y) and direction θg. The gradient values of the two sub-pixel points along the gradient direction at coordinate (x, y) are defined as gt1 and gt2:

gt1 = g2 + tan θg × (g1 − g2)
gt2 = g4 + tan θg × (g3 − g4)

where g1, g2, g3 and g4 are the gradient values at the corresponding neighbouring pixels, and θg is the gradient direction at (x, y) that determines the two sub-pixel gradient values gt1 and gt2. The gradient value g(x, y) is then compared with gt1 and gt2: if g(x, y) is the maximum, the pixel value at (x, y) is set to 1; otherwise it is suppressed to 0. Repeating this for every position (x, y) in f''(x, y) completes the non-maximum suppression, converting f''(x, y) into a binary edge contour image GT(x, y).
The fifteenth step: threshold value method edge search and connection based on Canny operator
In order to ensure that no edge pixel information is omitted while noise is eliminated, edge search and connection are performed on the edge result f' (x, y) with non-maximum suppression by using a thresholding method, and a threshold value [ T ] is seth1,Th2]The gradient value in the image f' (x, y) obtained by suppressing the non-maximum value is larger than Th2The pixel points of (A) are taken as strong edge pixels to form an image GT2(x, y) setting the gradient value at Th1And Th2The pixels in between are taken as weak edge pixels to form an image GT1(x, y), namely:
image GT2(x, y)) a higher threshold is set so that the image loses much of the correct edge pixel information while most of the noise is removed, and image G has lost much of the correct edge pixel informationT1(x, y)) is set to be lower, so that the image G which contains a large amount of environmental noise Canny operators and has low search threshold and complete edge information at the same time of retaining correct edge pixel informationT1(x, y) to patch image G which is low in noise but incomplete in edge informationT2(x, y) to achieve the filtering of most of the noise while obtaining all the edges of a single pixel, i.e. for image GT2Each point in (x, y) has:
in the formula: m (x, y) is a set of 8 pixel coordinates in the neighborhood of coordinates (x, y), and the image G is subjected to edge search and edge connection by a threshold methodT2And (x, y) is the processing result of the image f' (x, y) to be detected.
The sixteenth step: extract the character target region. The image is divided into connected domains according to the edge information. Each connected domain is scanned row by row and column by column, the pixel points in each region are collected, and the average pixel width wi and average pixel height Hi of each connected domain are calculated. The aspect ratio of the i-th connected domain is then computed and, together with the height and width M × N of the target region, used to calculate the region matching degree e; connected domains are evaluated with (−e, e) as the matching range, finally determining the character target region;
The seventeenth step: character segmentation. A character segmentation method based on row-column scanning is used. The localized image is first scanned in the 0° direction; when the number of character foreground points in the searched area exceeds a given threshold Tj, the area is considered a character row region and its row boundaries are determined. The region is then scanned in the 90° (column) direction; when the number of foreground points in a region exceeds a given threshold Ti, the region is preliminarily identified as a character region and its column boundaries are determined. Combining the row and column boundaries, the corresponding characters are segmented and located;
The eighteenth step: image recognition. The container number characters are recognized with a weighted template matching algorithm followed by a classifier, and the container panoramic photo and the recognized container number are finally displayed on the image display terminal;
The nineteenth step: redundancy check and box-type judgment. The panoramic images captured by cameras C1 and C2 are recognized separately by the image acquisition and processing software. After the container numbers are recognized, a redundancy check is performed to obtain the valid container number. If the recognized numbers form one group of data, the truck carries a 40-foot long container or a single 20-foot container; if two groups of numbers are recognized, it carries two 20-foot containers. If A1-A2 and A3-A4 were disconnected at the same time, the load can be judged to be a long container or a double container; combining these cues, the single-container, long-container or double-container type of the load is judged comprehensively.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910740684.1A CN110460813A (en) | 2019-08-12 | 2019-08-12 | A kind of container representation acquisition device and acquisition method based on video flowing |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110460813A true CN110460813A (en) | 2019-11-15 |
Family
ID=68485969
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110942461A (en) * | 2019-12-20 | 2020-03-31 | 上海撬动网络科技有限公司 | Intelligent testing and viewing system for fixed-scene container |
CN112102215A (en) * | 2020-09-03 | 2020-12-18 | 广州南沙联合集装箱码头有限公司 | Image fast splicing method based on error statistics |
CN114157808A (en) * | 2021-12-13 | 2022-03-08 | 北京国泰星云科技有限公司 | Efficient container gate image acquisition system and method |
CN114339159A (en) * | 2021-12-31 | 2022-04-12 | 深圳市平方科技股份有限公司 | Image acquisition method and device, electronic equipment and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101556692A (en) * | 2008-04-09 | 2009-10-14 | 西安盛泽电子有限公司 | Image mosaic method based on neighborhood Zernike pseudo-matrix of characteristic points |
CN102426649A (en) * | 2011-10-13 | 2012-04-25 | 石家庄开发区冀科双实科技有限公司 | Simple high-accuracy steel seal digital automatic identification method |
CN104376322A (en) * | 2014-12-01 | 2015-02-25 | 上海海事大学 | Intelligent detecting and evaluating method for container number preprocessing quality of containers |
CN105469055A (en) * | 2015-11-26 | 2016-04-06 | 上海斐讯数据通信技术有限公司 | Cloud computing-based license plate recognition system and method |
CN106372641A (en) * | 2016-08-30 | 2017-02-01 | 北京华力兴科技发展有限责任公司 | Container number identification method and device for carrying vehicle and carrying vehicle |
CN107038683A (en) * | 2017-03-27 | 2017-08-11 | 中国科学院自动化研究所 | The method for panoramic imaging of moving target |
CN107452007A (en) * | 2017-07-05 | 2017-12-08 | 国网河南省电力公司 | A kind of visible ray insulator method for detecting image edge |
Non-Patent Citations (2)
Title |
---|
GAO YANPENG, LI XIAOPING, ZHANG XIAOKANG, SUN YANCHUN: "Research on Panoramic Image Acquisition of Containers at the Intelligent Gate of a Railway Freight Yard", Logistics Sci-Tech (《物流科技》) * |
GAO YANPENG: "Research and Design of an Intelligent Gate for Railway Container Freight Yards Based on Panoramic Images", China Master's Theses Full-text Database, Engineering Science & Technology II (《中国优秀硕士学位论文全文数据库工程科技Ⅱ辑》) * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110942461A (en) * | 2019-12-20 | 2020-03-31 | 上海撬动网络科技有限公司 | Intelligent testing and viewing system for fixed-scene container |
CN112102215A (en) * | 2020-09-03 | 2020-12-18 | 广州南沙联合集装箱码头有限公司 | Image fast splicing method based on error statistics |
CN114157808A (en) * | 2021-12-13 | 2022-03-08 | 北京国泰星云科技有限公司 | Efficient container gate image acquisition system and method |
CN114157808B (en) * | 2021-12-13 | 2022-11-29 | 北京国泰星云科技有限公司 | Efficient container gate image acquisition system and method |
CN114339159A (en) * | 2021-12-31 | 2022-04-12 | 深圳市平方科技股份有限公司 | Image acquisition method and device, electronic equipment and storage medium |
CN114339159B (en) * | 2021-12-31 | 2023-06-27 | 深圳市平方科技股份有限公司 | Image acquisition method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110460813A (en) | A kind of container representation acquisition device and acquisition method based on video flowing | |
CN109460709B (en) | RTG visual barrier detection method based on RGB and D information fusion | |
US8290213B2 (en) | Method of locating license plate of moving vehicle | |
JP6904614B2 (en) | Object detection device, prediction model creation device, object detection method and program | |
CN101089875A (en) | Face authentication apparatus, face authentication method, and entrance and exit management apparatus | |
US20180173963A1 (en) | Detection of an Object in a Distorted Image | |
CN101937614A (en) | Plug and play comprehensive traffic detection system | |
CN109447090B (en) | Shield door obstacle detection method and system | |
CN110008771B (en) | Code scanning system and code scanning method | |
US20050123201A1 (en) | Image processing apparatus for detecting and recognizing mobile object | |
CN115953726B (en) | Machine vision container face damage detection method and system | |
CN112766046B (en) | Target detection method and related device | |
CN106874897A (en) | A kind of licence plate recognition method and device | |
CN110084171B (en) | Detection device and detection method for foreign matters on top of subway train | |
JP2017030380A (en) | Train detection system and train detection method | |
CN107729814A (en) | A kind of method and device for detecting lane line | |
JP5306124B2 (en) | Number plate reading apparatus and number plate reading method | |
CN106340031A (en) | Method and device for detecting moving object | |
CN116682268A (en) | Portable urban road vehicle violation inspection system and method based on machine vision | |
US7346193B2 (en) | Method for detecting object traveling direction | |
CN110688876A (en) | Lane line detection method and device based on vision | |
JP2001283374A (en) | Traffic flow measuring system | |
Ugliano et al. | Automatically detecting changes and anomalies in unmanned aerial vehicle images | |
Chaiyawatana et al. | Robust object detection on video surveillance | |
CN114166132A (en) | Vehicle height snapshot measuring method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20191115 |