CN111968181A - Real-time parcel position detection method and system based on image processing - Google Patents
- Publication number: CN111968181A
- Application number: CN202010847660.9A
- Authority
- CN
- China
- Prior art keywords
- parcel
- images
- target
- cameras
- real
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06Q10/083—Logistics: Shipping
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
- G06T5/70—Denoising; Smoothing
- G06T7/11—Region-based segmentation
- G06T7/12—Edge-based segmentation
- G06T7/13—Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
- G06T7/33—Determination of transform parameters for the alignment of images (image registration) using feature-based methods
- G06T7/64—Analysis of geometric attributes of convexity or concavity
- G06T2207/10004—Still image; Photographic image
- G06T2207/20221—Image combination; Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Business, Economics & Management (AREA)
- Economics (AREA)
- Quality & Reliability (AREA)
- Operations Research (AREA)
- Strategic Management (AREA)
- Tourism & Hospitality (AREA)
- Marketing (AREA)
- General Business, Economics & Management (AREA)
- Human Resources & Organizations (AREA)
- Entrepreneurship & Innovation (AREA)
- Development Economics (AREA)
- Geometry (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to the technical field of intelligent logistics sorting, and particularly discloses a real-time parcel position detection method based on image processing, comprising the following steps: step S1, acquiring parcel images shot by a plurality of cameras in real time; step S2, carrying out image splicing on the acquired parcel images; step S3, performing double-background modeling on the spliced images to extract the foreground targets in the spliced images; and step S4, detecting the parcel positions of the foreground targets so as to output the parcel position information in real time. The invention also discloses a real-time parcel position detection system based on image processing. The method can detect and locate parcels on the belt surface in real time, providing a guarantee that a subsequent motor control system can sort parcels faster and better.
Description
Technical Field
The invention relates to the technical field of logistics intelligent sorting, in particular to a real-time parcel position detection method and system based on image processing.
Background
In recent years, with the explosive growth of e-commerce, the business volume of the express delivery industry has increased geometrically, and the efficiency of express sorting directly affects both the speed of logistics distribution and the effectiveness of enterprise management.
The level of automation in the existing express sorting process is still low: manual sorting remains the dominant mode, and low efficiency is common. To guarantee parcel sorting efficiency, detecting parcel position information in real time with a fast, simple, and stable method, and thereby providing reliable information for controlling parcel delivery, has become a problem that urgently needs to be solved.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a real-time parcel position detection method and system based on image processing, which can detect and locate parcel positions on the belt surface in real time, providing a guarantee that a subsequent motor control system can sort parcels faster and better.
As a first aspect of the present invention, there is provided a real-time parcel location detection method based on image processing, including:
step S1, acquiring parcel images shot by a plurality of cameras in real time;
step S2, image splicing is carried out according to the acquired parcel images;
step S3, performing double-background modeling according to the spliced images to extract foreground targets in the spliced images;
and step S4, detecting the package position of the foreground object so as to output the package position information in real time.
Further, the step S2 includes:
step S21, horizontally correcting the parcel images captured by the plurality of cameras;
step S22, completing mapping transformation between the cameras in the horizontal direction and the vertical direction by using a homography transformation matrix among parcel images shot by the cameras so as to realize image splicing of common area images shot by the cameras;
and step S23, carrying out pixel fusion on the overlapped region in the spliced image, using alpha-weighted fusion: the fusion weight at the central pixel is 1, and the fusion weight of an edge pixel decreases gradually as its distance from the central pixel increases.
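For illustration, the alpha-weighted fusion of step S23 can be sketched as a linear feathering across the overlap. This is a minimal stand-in, not the patent's exact weighting; the overlap width and pixel values are assumed.

```python
def feather_blend(left_px, right_px, x, overlap_w):
    """Blend one pixel of the overlap region: the left image's weight is 1
    at its own side of the overlap (x = 0) and decays linearly to 0 at
    x = overlap_w; the right image gets the complementary weight."""
    w_left = 1.0 - x / overlap_w
    return w_left * left_px + (1.0 - w_left) * right_px
```

Applied per pixel across the overlap, this removes the visible seam between adjacent camera images.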
Further, in step S22, the parameter calibration of the multiple cameras needs to be performed in advance, which includes:
step S221, for a plurality of cameras to be spliced, placing checkerboard grids in a common area of the plurality of cameras, and collecting a plurality of groups of checkerboard images;
step S222, performing fast corner detection on each group of collected checkerboard images so as to obtain a matching point set of images to be spliced of a plurality of cameras;
step S223, calculating a homography transformation matrix among the parcel images shot by the cameras based on a RANSAC algorithm, so that the parcel images shot by the cameras are converted into a unified coordinate, the registration of the parcel images shot by the cameras is completed, and parameters of the homography transformation matrix are stored for image registration in an actual algorithm;
and S224, fusing the pixel values of the overlapped area of the registered images according to a certain strategy, and finally completing the seamless splicing of the images.
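In practice, steps S221-S223 would rely on library routines (e.g. OpenCV's chessboard corner detection and RANSAC-based homography estimation). The sketch below shows only the final registration step that the stored parameters enable: applying a 3x3 homography to map a pixel into the unified splicing coordinate frame. The matrix values in the usage are hypothetical.

```python
def warp_point(H, x, y):
    """Map pixel (x, y) through the 3x3 homography H (row-major nested
    lists) into the unified coordinate frame of the spliced image."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w  # homogeneous normalisation
```

For a pure translation H = [[1, 0, 10], [0, 1, -5], [0, 0, 1]], the point (2, 3) maps to (12, -2), which is the simplest case of registering one camera's image into the shared frame.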
Further, the step S3 includes:
step S31, carrying out seamless splicing on the parcel images shot by the cameras according to the acquired parameters;
step S32, respectively establishing an adaptive Gaussian mixture background model and a K-nearest-neighbor background model from the spliced parcel images of the previous N frames;
step S33, respectively carrying out foreground object detection on the parcel images of the subsequent frames according to the established double-background model;
and step S34, performing morphological filtering processing on the detected foreground target, performing connected domain detection, and removing a noise target existing in the foreground target by using area constraint.
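The MOG2 and KNN models of step S32 are available in OpenCV as `createBackgroundSubtractorMOG2`/`createBackgroundSubtractorKNN`. As a self-contained stand-in, the toy per-pixel running-Gaussian model below illustrates the underlying idea on a flat list of grey values; the learning rate and deviation threshold are illustrative, not values from the patent.

```python
class RunningGaussianBG:
    """Toy background subtractor: one running mean/variance per pixel;
    a pixel is foreground when it deviates by more than k standard
    deviations. The model is updated with every frame for simplicity."""

    def __init__(self, alpha=0.05, k=2.5):
        self.alpha, self.k = alpha, k
        self.mean = None
        self.var = None

    def apply(self, frame):
        if self.mean is None:
            # First frame initialises the model; nothing is foreground yet.
            self.mean = [float(p) for p in frame]
            self.var = [15.0 ** 2] * len(frame)
            return [0] * len(frame)
        mask = []
        for i, p in enumerate(frame):
            d = p - self.mean[i]
            mask.append(255 if d * d > self.k ** 2 * self.var[i] else 0)
            self.mean[i] += self.alpha * d
            self.var[i] = (1 - self.alpha) * self.var[i] + self.alpha * d * d
        return mask
```

A parcel entering the belt raises pixel values far above the learned background mean, so those pixels appear in the foreground mask, which steps S33-S34 then clean up morphologically.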
Further, the step S4 includes:
step S41, performing image segmentation on the first foreground target extracted by the adaptive Gaussian mixture background model, and locating the parcel positions;
step S42, removing the parcels already detected in the first foreground target from the second foreground target extracted by the K-nearest-neighbor background model, performing image segmentation on the remaining foreground targets, and locating the parcel positions;
and step S43, fusing the position information of the first foreground target and the second foreground target, and outputting the final parcel position information.
Further, the step S41 includes:
step S411, binarizing the first foreground target and extracting the contour information in it;
step S412, traversing all contours of the first foreground target, and removing noise interference targets from the first foreground target according to the area constraint;
and step S413, sequentially traversing the target contours remaining after noise removal and preliminarily judging, from the aspect-ratio constraint, whether each contour is a stuck multi-parcel target; if so, performing multi-level image segmentation on the first foreground target; otherwise, saving it as a valid parcel target and acquiring the position information of the parcel target.
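The aspect-ratio screening in step S413 can be sketched as follows; the ratio limit of 2.5 is an assumed value for illustration, not one specified in the patent.

```python
def looks_like_single_parcel(w, h, max_ratio=2.5):
    """Heuristic behind step S413: a bounding box far from square is
    presumed to contain several stuck parcels and is sent on to the
    multi-level segmentation cascade."""
    ratio = max(w, h) / min(w, h)
    return ratio <= max_ratio
```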
Further, the step S413 includes:
step S4131, acquiring a region of interest (ROI) in the spliced and fused image according to the positions of the remaining target contours;
step S4132, performing OTSU adaptive threshold segmentation on the ROI of the spliced and fused image; if every segmented first target image satisfies the aspect-ratio constraint of a single parcel, the segmentation of the stuck parcels is finished; otherwise, the next-stage segmentation continues;
step S4133, continuing with watershed threshold segmentation of the image to be segmented; if every segmented second target image satisfies the aspect-ratio constraint of a single parcel, the segmentation of the stuck parcels is finished; otherwise, the next-stage segmentation continues;
step S4134, continuing with a second watershed threshold segmentation of the image to be segmented; if every segmented third target image satisfies the aspect-ratio constraint of a single parcel, the segmentation of the stuck parcels is finished; otherwise, the next-stage segmentation continues;
step S4135, performing concavity detection on the parcel contours of the image to be segmented in the previous step and re-segmenting using the contour concavities; if every segmented fourth target image satisfies the aspect-ratio constraint of a single parcel, the segmentation of the stuck parcels is finished; otherwise, the next-stage segmentation continues for the contours that do not meet the requirement;
and step S4136, performing triangle adaptive threshold segmentation on the image to be segmented in the previous step until the segmentation of the stuck parcels is finished.
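The first stage of this cascade, OTSU thresholding (step S4132), picks the grey level that maximises the between-class variance of the ROI histogram. A minimal pure-Python version, operating on a flat list of 8-bit grey values, is sketched below (a library implementation such as OpenCV's `THRESH_OTSU` would normally be used):

```python
def otsu_threshold(pixels):
    """Return the threshold maximising between-class variance over a
    flat list of grey values in [0, 255]."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0.0           # background weight and grey-level sum
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b        # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b        # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a bimodal ROI (dark belt vs. bright parcel surfaces), the returned threshold falls between the two modes, separating parcels whose grey levels differ.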
Further, the step S43 includes:
and fusing the first foreground target and the second foreground target, merging identical regions, and letting the two models mutually compensate for foreground targets missed by a single model.
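A minimal sketch of the fusion in step S43: keep every box from the first model, and add any box from the second model that does not overlap a box already kept. Boxes are (x, y, w, h) tuples; the simple rectangle-intersection test is an assumption, not the patent's exact merging rule.

```python
def rects_overlap(a, b):
    """True when axis-aligned boxes (x, y, w, h) intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def fuse_detections(first, second):
    """Mutual compensation: second-model boxes fill in parcels the first
    model missed; same-region boxes are merged by keeping the first
    model's box."""
    fused = list(first)
    for box in second:
        if not any(rects_overlap(box, kept) for kept in fused):
            fused.append(box)
    return fused
```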
Further, the steps S1-S4 are executed repeatedly until the plurality of cameras stop working; finally, the position information of each detected foreground target is converted into the user-defined world coordinate system, and the parcel position information is output in real time.
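The final conversion to the user-defined world frame can be as simple as an offset plus a scale; the origin and millimetres-per-pixel factor below are hypothetical calibration values used only for illustration.

```python
def pixel_to_world(px, py, origin=(0, 0), mm_per_px=1.0):
    """Map a spliced-image pixel to belt-surface coordinates, with the
    world origin placed at the upper-left corner of the detection area."""
    return ((px - origin[0]) * mm_per_px, (py - origin[1]) * mm_per_px)
```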
As a second aspect of the present invention, there is provided an image processing-based parcel location real-time detection system, comprising:
the acquisition module is used for acquiring parcel images shot by a plurality of cameras in real time;
the splicing module is used for splicing images according to the acquired parcel images;
the extraction module is used for carrying out double-background modeling according to the spliced images so as to extract foreground targets in the spliced images;
and the detection module is used for detecting the package position of the foreground target so as to output the package position information in real time.
The parcel position real-time detection method and system based on image processing provided by the invention can detect and locate parcels on the belt surface in real time, providing a guarantee that a subsequent motor control system can sort parcels faster and better.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention.
Fig. 1 is a flow chart of the real-time parcel location detection method based on image processing according to the present invention.
Fig. 2 is a schematic diagram of the detection result of the parcel location according to the present invention.
Detailed Description
To further illustrate the technical means and effects of the present invention adopted to achieve the predetermined objects, the following detailed description will be given to the detailed implementation, structure, features and effects of the method and system for real-time detecting parcel location based on image processing according to the present invention with reference to the accompanying drawings and preferred embodiments. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without any inventive step, are within the scope of the present invention.
In this embodiment, a parcel position real-time detection method based on image processing is provided, and as shown in fig. 1, the parcel position real-time detection method based on image processing includes:
step S1, acquiring parcel images shot by a plurality of cameras in real time;
step S2, image splicing is carried out according to the acquired parcel images;
step S3, performing double-background modeling according to the spliced images to extract foreground targets in the spliced images;
and step S4, detecting the package position of the foreground object so as to output the package position information in real time.
Preferably, the step S2 includes:
step S21, considering the angle deviation of the cameras during the installation process, so it is necessary to horizontally correct the parcel images taken by the multiple cameras;
step S22, completing mapping transformation between the cameras in the horizontal direction and the vertical direction by using a homography transformation matrix among the parcel images shot by the cameras to realize image splicing of common area images shot by the cameras, so that the camera images of multiple visual angles are spliced into an image with a larger visual field;
step S23, carrying out pixel fusion on the overlapped area of the spliced image to achieve a seamless splicing effect; alpha-weighted fusion is mainly used: the fusion weight at the central pixel is 1, and the fusion weight of an edge pixel decreases gradually as its distance from the central pixel increases.
Preferably, in step S22, the parameter calibration of multiple cameras needs to be performed in advance, including:
step S221, for a plurality of cameras to be spliced, placing checkerboard grids in a common area of the plurality of cameras, and collecting a plurality of groups of checkerboard images;
step S222, performing fast corner detection on each group of collected checkerboard images so as to obtain a matching point set of images to be spliced of a plurality of cameras;
step S223, calculating a homography transformation matrix among the parcel images shot by the cameras based on a RANSAC algorithm, so that the parcel images shot by the cameras are converted into a unified coordinate, the registration of the parcel images shot by the cameras is completed, and parameters of the homography transformation matrix are stored for image registration in an actual algorithm;
and S224, fusing the pixel values of the overlapped area of the registered images according to a certain strategy, and finally completing the seamless splicing of the images.
Preferably, the step S3 includes:
step S31, carrying out seamless splicing on the parcel images shot by the cameras according to the acquired parameters, thereby acquiring an image with a larger view field;
step S32, respectively establishing an adaptive Gaussian mixture background model (MOG2) and a K-nearest-neighbor background model (KNN) from the spliced parcel images of the previous N frames;
step S33, respectively carrying out foreground target detection on the parcel images of subsequent frames with the established double-background model; at this stage the extracted foreground targets still contain considerable noise interference, so to support the subsequent parcel segmentation and detection, the extracted foreground targets need further image processing to reduce the influence of noise on target extraction;
and step S34, performing morphological filtering on the detected foreground targets, carrying out connected-domain detection, and eliminating noise targets from the foreground with the area constraint, thereby ensuring the stability of subsequent foreground target detection.
Preferably, the step S4 includes:
step S41, performing image segmentation on the first foreground target extracted by the adaptive Gaussian mixture background model, and locating the parcel positions;
step S42, removing the parcels already detected in the first foreground target from the second foreground target extracted by the K-nearest-neighbor background model, performing image segmentation on the remaining foreground targets, and locating the parcel positions;
and step S43, fusing the position information of the first foreground target and the second foreground target, and outputting the final parcel position information.
Preferably, the step S41 includes:
step S411, binarizing the first foreground target and extracting the contour information in it;
step S412, traversing all contours of the first foreground target, and removing noise interference targets from the first foreground target according to the area constraint;
and step S413, sequentially traversing the target contours remaining after noise removal and preliminarily judging, from the aspect-ratio constraint, whether each contour is a stuck multi-parcel target; if so, performing multi-level image segmentation on the first foreground target; otherwise, saving it as a valid parcel target and acquiring the position information of the parcel target.
Preferably, the step S413 includes:
step S4131, acquiring a region of interest (ROI) in the spliced and fused image according to the positions of the remaining target contours;
step S4132, performing OTSU adaptive threshold segmentation on the ROI of the spliced and fused image; if every segmented first target image satisfies the aspect-ratio constraint of a single parcel, the segmentation of the stuck parcels is finished; otherwise, the next-stage segmentation continues;
step S4133, continuing with watershed threshold segmentation of the image to be segmented; if every segmented second target image satisfies the aspect-ratio constraint of a single parcel, the segmentation of the stuck parcels is finished; otherwise, the next-stage segmentation continues;
step S4134, continuing with a second watershed threshold segmentation of the image to be segmented; if every segmented third target image satisfies the aspect-ratio constraint of a single parcel, the segmentation of the stuck parcels is finished; otherwise, the next-stage segmentation continues;
step S4135, performing concavity detection on the parcel contours of the image to be segmented in the previous step and re-segmenting using the contour concavities; if every segmented fourth target image satisfies the aspect-ratio constraint of a single parcel, the segmentation of the stuck parcels is finished; otherwise, the next-stage segmentation continues for the contours that do not meet the requirement;
and step S4136, performing triangle adaptive threshold segmentation on the image to be segmented in the previous step until the segmentation of the stuck parcels is finished, thereby effectively resolving the adhesion of multiple parcels.
Preferably, the step S43 includes:
and fusing the first foreground target and the second foreground target, merging identical regions, and letting the two models mutually compensate for foreground targets missed by a single model, so that the advantages of the dual models are exploited and a highly robust parcel detection result is finally ensured.
Preferably, the steps S1-S4 are executed repeatedly until the plurality of cameras stop working; finally, the position information of each detected foreground target is converted into the user-defined world coordinate system, and the parcel position information is output in real time.
FIG. 2 is a schematic diagram of the parcel position detection result of the present invention. The invention can acquire in real time the x and y position of each parcel relative to the coordinate origin at the upper-left corner of the detection area, together with the number of parcels, and can simultaneously obtain the relative ordering of the parcels (S1-S5). The ordering refers to the y coordinate of each parcel's lower-right corner, i.e. each parcel's distance to the outlet, and arranges the parcels from near to far.
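The near-to-far ordering described above can be sketched as a sort on the lower-right-corner y coordinate. Whether a larger y means closer to the outlet depends on camera orientation; the direction chosen here is an assumption for illustration.

```python
def sort_parcels_by_exit(boxes):
    """boxes: list of (x, y, w, h). Sort by the y coordinate of the
    lower-right corner (y + h); assumed convention: larger y = nearer
    the outlet, so nearest parcels come first."""
    return sorted(boxes, key=lambda b: b[1] + b[3], reverse=True)
```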
As another embodiment of the present invention, there is provided an image processing-based parcel location real-time detection system, including:
the acquisition module is used for acquiring parcel images shot by a plurality of cameras in real time;
the splicing module is used for splicing images according to the acquired parcel images;
the extraction module is used for carrying out double-background modeling according to the spliced images so as to extract foreground targets in the spliced images;
and the detection module is used for detecting the package position of the foreground target so as to output the package position information in real time.
The method and system for real-time parcel position detection based on image processing provided by the invention fuse the parcel image data shot by a plurality of cameras, extract the parcel information on the belt surface with a double-background modeling method, perform multi-stage segmentation of stuck parcels using the grey-level characteristics of the image, and finally detect and locate the parcel positions on the belt surface in real time, providing a guarantee that a subsequent motor control system can sort parcels faster and better.
Although the present invention has been described with reference to a preferred embodiment, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. A parcel position real-time detection method based on image processing is characterized by comprising the following steps:
step S1, acquiring parcel images shot by a plurality of cameras in real time;
step S2, image splicing is carried out according to the acquired parcel images;
step S3, performing double-background modeling according to the spliced images to extract foreground targets in the spliced images;
and step S4, detecting the package position of the foreground object so as to output the package position information in real time.
2. The real-time parcel location detecting method based on image processing as claimed in claim 1, wherein said step S2 includes:
step S21, horizontally correcting the parcel images captured by the plurality of cameras;
step S22, completing mapping transformation between the cameras in the horizontal direction and the vertical direction by using a homography transformation matrix among parcel images shot by the cameras so as to realize image splicing of common area images shot by the cameras;
and step S23, carrying out pixel fusion on the overlapped region in the spliced image, wherein alpha weighted fusion is used, namely the fusion weight at the central pixel is 1, and the fusion weight value of the edge pixel is gradually reduced along with the increase of the distance between the edge pixel and the central pixel.
3. The method for real-time detecting the parcel position based on image processing as claimed in claim 2, wherein in step S22, the parameter calibration of multiple cameras is required in advance, which includes:
step S221, for a plurality of cameras to be spliced, placing checkerboard grids in a common area of the plurality of cameras, and collecting a plurality of groups of checkerboard images;
step S222, performing fast corner detection on each group of collected checkerboard images so as to obtain a matching point set of images to be spliced of a plurality of cameras;
step S223, calculating a homography transformation matrix among the parcel images shot by the cameras based on a RANSAC algorithm, so that the parcel images shot by the cameras are converted into a unified coordinate, the registration of the parcel images shot by the cameras is completed, and parameters of the homography transformation matrix are stored for image registration in an actual algorithm;
and S224, fusing the pixel values of the overlapped area of the registered images according to a certain strategy, and finally completing the seamless splicing of the images.
4. The real-time parcel location detection method based on image processing as claimed in claim 3, wherein said step S3 includes:
step S31, carrying out seamless splicing on the parcel images shot by the cameras according to the acquired parameters;
step S32, respectively establishing an adaptive Gaussian mixture background model and a K-nearest-neighbor background model from the spliced parcel images of the previous N frames;
step S33, respectively carrying out foreground object detection on the parcel images of the subsequent frames according to the established double-background model;
and step S34, performing morphological filtering processing on the detected foreground target, performing connected domain detection, and removing a noise target existing in the foreground target by using area constraint.
5. The real-time parcel position detection method based on image processing as claimed in claim 4, wherein said step S4 comprises:
step S41, performing image segmentation on the first foreground object extracted by the adaptive Gaussian mixture background model, and locating parcel positions;
step S42, removing the parcels detected in the first foreground object from the second foreground object extracted by the K-nearest-neighbor background model, performing image segmentation on the remaining foreground objects, and locating parcel positions;
and step S43, fusing the position information of the first foreground object and the second foreground object, and outputting the final parcel position information.
6. The real-time parcel position detection method based on image processing as claimed in claim 5, wherein said step S41 comprises:
step S411, binarizing the first foreground object and extracting the contour information therein;
step S412, traversing all contours of the first foreground object, and removing noise-interference objects according to an area constraint;
and step S413, sequentially traversing the target contours remaining after noise removal, preliminarily judging according to an aspect ratio constraint whether each is an adhered multi-parcel target; if so, performing multi-level image segmentation on the first foreground object; otherwise, saving it as a valid parcel target and acquiring the position information of the parcel target.
7. The real-time parcel position detection method based on image processing as claimed in claim 6, wherein said step S413 comprises:
step S4131, acquiring a region of interest (ROI) in the stitched and fused image according to the positions of the remaining target contours;
step S4132, performing OTSU adaptive threshold segmentation on the ROI of the stitched and fused image; if the segmented first target images all meet the aspect ratio constraint of a single parcel, ending the segmentation of the adhered parcels; otherwise, proceeding to the next level of segmentation;
step S4133, performing watershed threshold segmentation on the image to be segmented; if the segmented second target images all meet the aspect ratio constraint of a single parcel, ending the segmentation of the adhered parcels; otherwise, proceeding to the next level of segmentation;
step S4134, performing a second watershed threshold segmentation on the image to be segmented; if the segmented third target images all meet the aspect ratio constraint of a single parcel, ending the segmentation of the adhered parcels; otherwise, proceeding to the next level of segmentation;
step S4135, performing concavity detection on the parcel contours of the image to be segmented in the previous step and re-segmenting using the contour concavities; if the segmented fourth target images all meet the aspect ratio constraint of a single parcel, ending the segmentation of the adhered parcels; otherwise, proceeding to the next level of segmentation for the contours that do not meet the constraint;
and step S4136, performing triangle adaptive threshold segmentation on the image to be segmented in the previous step, until the segmentation of the adhered parcels is completed.
8. The real-time parcel position detection method based on image processing as claimed in claim 5, wherein said step S43 comprises:
fusing the first foreground object and the second foreground object, merging identical regions, and mutually compensating for the foreground objects missed by either single model.
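The fusion of step S43 can be sketched as box-level deduplication: detections from the two models that overlap strongly are treated as the same parcel, while boxes unique to the second model compensate for misses by the first. The IoU threshold is an illustrative choice, not from the patent.

```python
def box_iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def fuse_detections(first, second, iou_thr=0.5):
    """Step S43: merge regions the two models agree on, and keep
    second-model boxes the first model missed (mutual compensation).
    iou_thr is an illustrative value."""
    fused = list(first)
    for box in second:
        if all(box_iou(box, f) < iou_thr for f in fused):
            fused.append(box)  # compensation: missed by the first model
    return fused
```

Merging at the box level rather than the pixel level keeps the fusion cheap enough for per-frame use.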
9. The real-time parcel position detection method based on image processing as claimed in claim 1, wherein steps S1-S4 are executed repeatedly until the plurality of cameras stop working, the position information of the detected foreground objects is converted into a user-defined world coordinate system, and the parcel position information is output in real time.
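The conversion into a user-defined world coordinate system in claim 9 is a planar homography from stitched-image pixels to coordinates on the conveyor plane. The matrix values below are illustrative; in practice they would come from calibration against known marks on the belt.

```python
import numpy as np

# Illustrative pixel-to-world homography: uniform scale of 0.5 units
# per pixel plus a translation of the world origin.
H_world = np.array([[0.5, 0.0, 10.0],
                    [0.0, 0.5, 20.0],
                    [0.0, 0.0, 1.0]])

def pixel_to_world(H, uv):
    """Map an (N, 2) array of pixel coordinates into world coordinates
    via homogeneous coordinates."""
    uv1 = np.hstack([uv, np.ones((len(uv), 1))])
    xyw = uv1 @ H.T
    return xyw[:, :2] / xyw[:, 2:3]
```

Because the parcels lie on the belt plane, a single homography suffices; no full 3D camera model is needed for position output.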
10. A real-time parcel position detection system based on image processing, comprising:
an acquisition module, for acquiring in real time the parcel images captured by a plurality of cameras;
a stitching module, for stitching the acquired parcel images;
an extraction module, for performing dual background modeling on the stitched images to extract the foreground objects therein;
and a detection module, for detecting parcel positions from the foreground objects and outputting the parcel position information in real time.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010847660.9A CN111968181B (en) | 2020-08-21 | 2020-08-21 | Real-time parcel position detection method and system based on image processing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111968181A true CN111968181A (en) | 2020-11-20 |
CN111968181B CN111968181B (en) | 2022-04-15 |
Family
ID=73389937
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010847660.9A Active CN111968181B (en) | 2020-08-21 | 2020-08-21 | Real-time parcel position detection method and system based on image processing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111968181B (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102034240A (en) * | 2010-12-23 | 2011-04-27 | 北京邮电大学 | Method for detecting and tracking static foreground |
CN106296677A (en) * | 2016-08-03 | 2017-01-04 | 浙江理工大学 | A kind of remnant object detection method of double mask context updates based on double-background model |
CN106650638A (en) * | 2016-12-05 | 2017-05-10 | 成都通甲优博科技有限责任公司 | Abandoned object detection method |
CN107507221A (en) * | 2017-07-28 | 2017-12-22 | 天津大学 | With reference to frame difference method and the moving object detection and tracking method of mixed Gauss model |
CN108564597A (en) * | 2018-03-05 | 2018-09-21 | 华南理工大学 | A kind of video foreground target extraction method of fusion gauss hybrid models and H-S optical flow methods |
CN109035295A (en) * | 2018-06-25 | 2018-12-18 | 广州杰赛科技股份有限公司 | Multi-object tracking method, device, computer equipment and storage medium |
CN109064404A (en) * | 2018-08-10 | 2018-12-21 | 西安电子科技大学 | It is a kind of based on polyphaser calibration panorama mosaic method, panoramic mosaic system |
CN110017773A (en) * | 2019-05-09 | 2019-07-16 | 福建(泉州)哈工大工程技术研究院 | A kind of package volume measuring method based on machine vision |
CN110858392A (en) * | 2018-08-22 | 2020-03-03 | 北京航天长峰科技工业集团有限公司 | Monitoring target positioning method based on fusion background model |
CN111062273A (en) * | 2019-12-02 | 2020-04-24 | 青岛联合创智科技有限公司 | Tracing detection and alarm method for left-over articles |
CN111127486A (en) * | 2019-12-25 | 2020-05-08 | Oppo广东移动通信有限公司 | Image segmentation method, device, terminal and storage medium |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112581369A (en) * | 2020-12-24 | 2021-03-30 | 中国银联股份有限公司 | Image splicing method and device |
CN112668681A (en) * | 2020-12-24 | 2021-04-16 | 杭州海康机器人技术有限公司 | Method, system and device for determining package information and camera |
CN112668681B (en) * | 2020-12-24 | 2022-07-01 | 杭州海康机器人技术有限公司 | Method, system and device for determining package information and camera |
CN113034619A (en) * | 2021-04-23 | 2021-06-25 | 中科微至智能制造科技江苏股份有限公司 | Package information measuring method, device and storage medium |
CN113194308A (en) * | 2021-05-24 | 2021-07-30 | 浙江大华技术股份有限公司 | Method and device for determining blocked area of transmission equipment |
CN113194308B (en) * | 2021-05-24 | 2023-02-24 | 浙江大华技术股份有限公司 | Method and device for determining blocked area of transmission equipment |
CN113554706A (en) * | 2021-07-29 | 2021-10-26 | 中科微至智能制造科技江苏股份有限公司 | Trolley package position detection method based on deep learning |
CN113554706B (en) * | 2021-07-29 | 2024-02-27 | 中科微至科技股份有限公司 | Trolley parcel position detection method based on deep learning |
CN114663284A (en) * | 2022-04-01 | 2022-06-24 | 优利德科技(中国)股份有限公司 | Infrared thermal imaging panoramic image processing method, system and storage medium |
CN115375549A (en) * | 2022-08-30 | 2022-11-22 | 金锋馥(滁州)科技股份有限公司 | Multi-camera image splicing algorithm design for multi-wrapping separation system |
Also Published As
Publication number | Publication date |
---|---|
CN111968181B (en) | 2022-04-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111968181B (en) | Real-time parcel position detection method and system based on image processing | |
CN109886896B (en) | Blue license plate segmentation and correction method | |
CN111178236A (en) | Parking space detection method based on deep learning | |
CN110852173B (en) | Visual positioning method and system for fuzzy weld joint | |
CN108537099A (en) | A kind of licence plate recognition method of complex background | |
CN106780526A (en) | A kind of ferrite wafer alligatoring recognition methods | |
CN107004266A (en) | Method for detecting defects on the surface of a tyre | |
CN111027538A (en) | Container detection method based on instance segmentation model | |
CN116109637B (en) | System and method for detecting appearance defects of turbocharger impeller based on vision | |
CN109781737B (en) | Detection method and detection system for surface defects of hose | |
CN110674812B (en) | Civil license plate positioning and character segmentation method facing complex background | |
CN115760820A (en) | Plastic part defect image identification method and application | |
CN110060239B (en) | Defect detection method for bottle opening of bottle | |
CN117746165A (en) | Method and device for identifying tire types of wheel type excavator | |
CN112863194B (en) | Image processing method, device, terminal and medium | |
CN113971681A (en) | Edge detection method for belt conveyor in complex environment | |
CN112926694A (en) | Method for automatically identifying pigs in image based on improved neural network | |
CN109724988A (en) | A kind of pcb board defect positioning method based on multi-template matching | |
CN118155176B (en) | Automatic control method and system for transfer robot based on machine vision | |
CN113538500B (en) | Image segmentation method and device, electronic equipment and storage medium | |
CN115587966A (en) | Method and system for detecting whether parts are missing or not under condition of uneven illumination | |
KR102436943B1 (en) | A method of recognizing logistics box of RGB-Depth image based on machine learning. | |
CN115619796A (en) | Method and device for obtaining photovoltaic module template and nonvolatile storage medium | |
CN114926332A (en) | Unmanned aerial vehicle panoramic image splicing method based on unmanned aerial vehicle mother vehicle | |
CN115018751A (en) | Crack detection method and system based on Bayesian density analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP03 | Change of name, title or address | ||
Address after: No. 979, Antai Third Road, Xishan District, Wuxi City, Jiangsu Province, 214000
Patentee after: Zhongke Weizhi Technology Co.,Ltd.
Address before: No. 299, Dacheng Road, Xishan District, Wuxi City, Jiangsu Province
Patentee before: Zhongke Weizhi intelligent manufacturing technology Jiangsu Co.,Ltd.