
CN113592390A - Warehousing digital twin method and system based on multi-sensor fusion - Google Patents


Info

Publication number
CN113592390A
CN113592390A (application CN202110784814.9A)
Authority
CN
China
Prior art keywords
warehousing
image
loading
task
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110784814.9A
Other languages
Chinese (zh)
Other versions
CN113592390B (en)
Inventor
李岩
陈刚国
杨秀彬
张成威
卢迪
陈冰沁
阮润赓
孙俊杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiaxing Hengchuang Electric Power Group Co ltd Bochuang Material Branch
Jiaxing Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Original Assignee
Jiaxing Hengchuang Electric Power Group Co ltd Bochuang Material Branch
Jiaxing Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiaxing Hengchuang Electric Power Group Co ltd Bochuang Material Branch, Jiaxing Power Supply Co of State Grid Zhejiang Electric Power Co Ltd filed Critical Jiaxing Hengchuang Electric Power Group Co ltd Bochuang Material Branch
Priority to CN202110784814.9A
Publication of CN113592390A
Application granted
Publication of CN113592390B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/08: Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q10/087: Inventory or stock management, e.g. order filling, procurement or balancing against orders
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06K: GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00: Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10: Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/10009: Methods or arrangements for sensing record carriers by electromagnetic radiation, sensing by radiation using wavelengths larger than 0.1 mm, e.g. radio-waves or microwaves
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06K: GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00: Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10: Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14: Methods or arrangements for sensing record carriers by electromagnetic radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404: Methods for optical code recognition
    • G06K7/1408: Methods for optical code recognition, the method being specifically adapted for the type of code
    • G06K7/1417: 2D bar codes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Artificial Intelligence (AREA)
  • Toxicology (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Economics (AREA)
  • Biophysics (AREA)
  • Electromagnetism (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a warehousing digital twin method and a warehousing digital twin system based on multi-sensor fusion.

Description

Warehousing digital twin method and system based on multi-sensor fusion
Technical Field
The application belongs to the field of warehousing management and digital twinning, and particularly relates to a warehousing digital twinning method and system based on multi-sensor fusion.
Background
At present, warehousing operates in an era that mixes automated equipment with manual work, so the production-safety problem in the warehousing process can no longer be neglected. Because manual operation is mixed in, existing warehousing lacks digitization and data accumulation of the operation flow, which makes it difficult to optimize the warehousing workflow, evaluate personnel performance, and calculate the efficiency of operating equipment.
A Digital Twin is a simulation process that integrates multiple disciplines, physical quantities, scales and probabilities by fully exploiting physical models, sensor updates, operation history and other data; the mapping is completed in virtual space so as to reflect the full life cycle of the corresponding physical equipment. Digital twinning is a beyond-reality concept that can be viewed as a digital mapping system of one or more important, interdependent equipment systems.
To support better warehousing management, digital twin technology is applied in warehousing more and more, but most existing warehousing digital twins provide only a monitoring function and do not fully exploit existing sensor technology to deepen the application of digitization.
Disclosure of Invention
The application aims to provide a warehousing digital twin method and system based on multi-sensor fusion, thereby realizing a digital twin of warehousing operations.
In order to achieve the purpose, the technical scheme adopted by the application is as follows:
a multi-sensor fusion based warehousing digital twinning method, comprising:
step 1, acquiring a current task document from a WMS system, wherein the task document comprises materials, workers, loading and unloading equipment and material flow directions related to a warehousing task;
step 2, acquiring the current position of a worker related to the warehousing task based on a UWB indoor positioning system, determining a shooting area to which the worker belongs according to the current position of the worker, and calling a camera in the shooting area to acquire an image;
step 3, performing saliency detection on the acquired image, dividing the image into a saliency region and a background region, performing target identification on the saliency region in the image by adopting a YOLOv5 deep learning neural network, and inputting the category and the number of the targets obtained by identification into a pre-trained first fully-connected neural network to obtain a current operation scene which is output by the first fully-connected neural network and corresponds to the image;
step 4, if the current operation scene has not changed, continuing to time the current operation scene; if it has changed, recording the duration of the operation scene before the change, binding the operation scene, its duration, and the materials and workers in that scene together, and starting to time the new operation scene; the materials are bound to the loading container through a two-dimensional-code scanning gun carried by the worker, and the handling equipment identifies the materials in the operation scene via an RFID reader through the RFID tag bound to the loading container;
step 5, acquiring position information of workers and loading and unloading equipment related to the warehousing task in real time based on the UWB indoor positioning system, acquiring the running state of the loading and unloading equipment related to the warehousing task from the WCS system, controlling three-dimensional models of the workers and the loading and unloading equipment corresponding to the virtual scene to synchronously move according to the actual position information and the running state, and accumulating the running time of the loading and unloading equipment;
step 6, judging whether the warehousing task in the acquired task document is executed completely, if not, returning to the step 2 to continue execution, and if so, executing the step 7;
and step 7, recording the total operation duration of the current task document according to the durations of the operation scenes, and compiling statistics on the data in the warehousing digital twin over a preset time period, the statistics covering: materials' warehouse entry and exit, the time required for each entry and exit, the time workers spend in the various operation scenes, and the total running time of the handling equipment.

Several alternatives are provided below, not as additional limitations on the general solution above but merely as further additions or preferences; absent technical or logical contradiction, each alternative may be combined individually with the general solution or with other alternatives.
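The control flow of steps 1 to 7 can be sketched as a polling loop. For illustration only: every callable below (locate, area_of, capture, classify, tick) is an invented stand-in for the WMS, UWB, camera and classifier components, whose real interfaces this application does not specify.

```python
def run_task(task, locate, area_of, capture, classify, tick):
    """Observe every worker each cycle until the task document is done (step 6)."""
    log = []                                         # (worker, scene) observations
    while not task["done"]:
        for w in task["workers"]:
            pos = locate(w)                          # step 2: UWB position
            scene = classify(capture(area_of(pos)))  # steps 2-3: camera image -> scene
            log.append((w, scene))                   # step 4 would bind scene/duration here
        tick(task)                                   # step 5: sync the twin, advance state
    return log                                       # step 7 statistics derive from this log

# Stub-driven usage: one worker observed for two cycles.
task = {"done": False, "workers": ["w1"], "cycles": 0}
def tick(t):
    t["cycles"] += 1
    t["done"] = t["cycles"] >= 2
log = run_task(task, lambda w: (0.0, 0.0), lambda p: 0, lambda a: "img",
               lambda img: "carrying", tick)
```

The loop deliberately mirrors the numbered steps rather than any particular implementation; the real system would also bind materials and accumulate equipment run time inside the cycle.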
Preferably, the detecting the saliency of the acquired image, and the segmenting the image into a saliency region and a background region, includes:
3.1, performing feature extraction on the image with a ResNet-101 neural network using convolution kernels of five sizes (128 × 128, 64 × 64, 32 × 32, 16 × 16 and 8 × 8) to obtain image bottom-layer features at five scales;
step 3.2, inputting the bottom layer features of the images of the five scales into a conversion module respectively for dimension reshaping, and reshaping the bottom layer features of the images of the five scales into consistent dimensions;
3.3, respectively inputting the bottom layer characteristics of the image with the five scales after the dimensionality is reshaped into a two-stage polishing module;
step 3.4, respectively inputting the five-scale image bottom-layer features output by the two-stage polishing module into a conversion module for dimension reshaping, reshaping them into consistent dimensions;
step 3.5, inputting the bottom-layer features of the five-scale image reshaped in the step 3.4 into a feature fusion module to obtain fused features;
and 3.6, inputting the fused features into a second fully-connected neural network to obtain an image which is input by the second fully-connected neural network and is divided into a saliency region and a background region, and completing saliency detection.
Preferably, the two-stage polishing module comprises two identical polishing modules connected in series. The input features of each polishing module are defined as F = {f_k, k = 1, 2, ..., N} and its output features as F′ = {f′_k, k = 1, 2, ..., N} (the equation image defining F′ did not survive extraction), wherein:

c_j = ReLU(BN(Conv(f_j)))

[equation image defining the intermediate feature u_j: not recoverable from the source]

p_k = ReLU(BN(Conv(u_k + u_(k+1) + ... + u_N)))

[equation image defining the output f′_k: not recoverable from the source]

where ReLU() is the activation function (its defining equation image is not recoverable; conventionally ReLU(x) = max(0, x)); BN(Conv()) means that, through normalization, the distribution of the input value f_j at any neuron of each network layer is forcibly pulled back to the standard normal distribution with mean 0 and variance 1; Upsample() denotes an upsampling function; and N is 5.
Preferably, the operation scenes include four types: unloading, carrying, warehousing and inventory.
The application also provides a warehousing digital twin system based on multi-sensor fusion, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps of the warehousing digital twin method based on multi-sensor fusion when executing the computer program.
The warehousing digital twin method and system based on multi-sensor fusion of the present application realize a digital twin of warehousing operations through sensors such as cameras, UWB and RFID, applying technologies such as multi-sensor fusion, computer vision and artificial intelligence, combined with the warehouse management system, the warehousing control system and the panoramic monitoring system.
Drawings
FIG. 1 is a flow chart of a method of multi-sensor fusion based warehousing digital twinning of the present application;
FIG. 2 is a schematic connection diagram of an embodiment of a warehousing digital twin system based on multi-sensor fusion according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
In one embodiment, a warehousing digital twin method based on multi-sensor fusion is provided, and warehousing management and digital twin are combined to achieve warehousing digital twin.
First, the application sets the basic hardware equipment and software foundation in the warehouse management as follows:
1) The warehouse management system (WMS) is a real-time computer software system that manages information, resources, behavior, inventory and distribution operations according to operation rules and algorithms, improving efficiency; it covers receiving, put-away management, picking, dock management, replenishment, in-warehouse operations, cross-docking, cycle counting, RF operations, processing management, matrix charging, and the like.
2) The warehousing control system (WCS) is the bridge between the WMS and the handling equipment; it coordinates and schedules the various bottom-layer handling devices so that they execute the warehousing business process exactly according to the program's preset flow.
3) Ultra-wideband positioning system (UWB indoor positioning system): the UWB indoor positioning principle is similar to satellite positioning and relies on four UWB base stations at known coordinates; workers and handling equipment carry UWB tags. By installing the UWB base stations in the warehouse area and keeping occlusion between base stations and tags low, positioning accuracy of about 15 cm is achieved in the area.
4) The panoramic monitoring system arranges a camera array in the warehouse area so that the cameras cover the area without blind spots, and each camera number is bound to its shooting area.
5) In the RFID system, the pallets and container frames used to load materials in the storage area are bound to RFID tags, and the handling equipment is fitted with RFID readers.
6) Workers are equipped with two-dimensional-code scanning guns and can bind goods to pallets and container frames, so that materials are bound to RFID tags and the handling equipment can ultimately identify the materials it operates on.
7) And constructing a virtual scene corresponding to the warehouse through 3D scanning, and reconstructing three-dimensional models of operation scenes, workers, loading and unloading equipment, materials and the like in the virtual scene.
It should be noted that the WMS system, the WCS system, the UWB indoor positioning system, the panoramic monitoring system, the RFID system, the two-dimensional code scanning gun, and the like are all existing devices or software systems, and may be arranged as needed. And the virtual scene corresponding to the actual scene is constructed through 3D scanning as a basic step in the digital twin, and the method is realized based on the existing scene construction method, and the description is not expanded here.
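The UWB positioning described in item 3) above relies on ranges to four base stations at known coordinates. As a hedged illustration only (the application does not disclose its solver), one standard way to turn such tag-to-station ranges into a 2-D position is linearized least squares:

```python
import math

def multilaterate(anchors, dists):
    """Estimate (x, y) from base-station coordinates and measured ranges.

    Linearize by subtracting the first range equation from the others:
    2(xi-x1)x + 2(yi-y1)y = d1^2 - di^2 + xi^2 - x1^2 + yi^2 - y1^2
    then solve the 2x2 normal equations A^T A p = A^T b.
    """
    (x1, y1), d1 = anchors[0], dists[0]
    rows, rhs = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        rows.append((2 * (xi - x1), 2 * (yi - y1)))
        rhs.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(2)] for i in range(2)]
    atb = [sum(r[i] * b for r, b in zip(rows, rhs)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x = (atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det
    y = (atb[1] * ata[0][0] - atb[0] * ata[1][0]) / det
    return x, y

# Noiseless check: ranges from a tag at (3, 4) to four corner base stations.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
est = multilaterate(anchors, [math.dist(a, (3.0, 4.0)) for a in anchors])
```

With noiseless ranges the linearized system is exact; a real UWB deployment would add noise filtering, which this sketch omits.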
Based on the hardware device and the software foundation, as shown in fig. 1, the warehousing digital twin method based on multi-sensor fusion provided by the embodiment includes the following steps:
Step 1, acquiring the current task document from the WMS system, the task document comprising the materials, workers, handling equipment (such as forklifts and AGVs) and material flow directions (e.g., materials conveyed from a certain vehicle to a certain storage location) involved in the warehousing task. The task document is generated by the WMS according to the actual warehousing flow; this embodiment does not concern the WMS's internal workflow, so how the task document is generated is not described further.
And 2, acquiring the current position of a worker related to the warehousing task based on the UWB indoor positioning system, determining a shooting area to which the worker belongs according to the current position of the worker, and calling a camera in the shooting area to acquire an image.
In this embodiment, the warehouse area is divided into a plurality of shooting areas in advance, with one or more cameras installed in each shooting area, so that each area is monitored by its own cameras.
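The position-to-area lookup can be illustrated with a simple grid partition. The application does not specify the partition scheme, so the cell size, grid layout and camera table below are invented for the example.

```python
def shooting_area(pos, cell=10.0, cols=4):
    """Return the area id for position (x, y) on a grid of `cell`-metre squares."""
    x, y = pos
    return int(y // cell) * cols + int(x // cell)

# Illustrative area -> camera-id table; real deployments come from commissioning.
CAMERAS = {0: ["cam-0a", "cam-0b"], 1: ["cam-1a"], 5: ["cam-5a"]}

def cameras_for(pos):
    """Cameras covering the worker's current shooting area (step 2)."""
    return CAMERAS.get(shooting_area(pos), [])
```

Any spatial index would do here; a uniform grid merely keeps the UWB-position-to-camera mapping constant-time.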
It should be noted that one or more workers are involved in the warehousing task; if there are several, the subsequent recognition is performed per worker. To improve the accuracy of operation-scene recognition, this embodiment preferably installs several cameras in each shooting area and uses the scene recognized from the camera whose image captures the targets most completely, or takes as the final operation scene the one identified identically by the largest number of cameras.
And 3, performing saliency detection on the acquired image, dividing the image into a saliency region and a background region, performing target identification on the saliency region in the image by adopting a YOLOv5 deep learning neural network, and inputting the category and the number of the targets obtained by identification into a pre-trained first fully-connected neural network to obtain a current operation scene which is output by the first fully-connected neural network and corresponds to the image.
In this embodiment, a stepwise-feature polishing network is trained, image saliency is detected by deep learning, and the image is segmented into a saliency region and a background region. The deep network comprises a backbone network, a two-stage feature polishing module, two conversion modules and a fusion module.
Specifically, the saliency detection performed in this embodiment includes the following steps:
and 3.1, performing feature extraction on the image by using a ResNet-101 neural network based on convolution kernel with five sizes of 128 × 128, 64 × 64, 32 × 32, 16 × 16 and 8 × 8 to obtain bottom-layer features of the image with five scales.
And 3.2, respectively inputting the bottom layer features of the images with the five scales into a conversion module for dimension reshaping, and reshaping the bottom layer features of the images with the five scales into consistent dimensions, such as 256 dimensions.
Step 3.3, respectively inputting the five dimension-reshaped image bottom-layer features into the two-stage polishing module. The two-stage polishing module comprises two identical polishing modules connected in series; the input features of each polishing module are defined as F = {f_k, k = 1, 2, ..., N} and its output features as F′ = {f′_k, k = 1, 2, ..., N} (the equation image defining F′ did not survive extraction), wherein:

c_j = ReLU(BN(Conv(f_j)))

[equation image defining the intermediate feature u_j: not recoverable from the source]

p_k = ReLU(BN(Conv(u_k + u_(k+1) + ... + u_N)))

[equation image defining the output f′_k: not recoverable from the source]

where ReLU() is the activation function (its defining equation image is not recoverable; conventionally ReLU(x) = max(0, x)); BN(Conv()) means that, through normalization, the distribution of the input value f_j at any neuron of each network layer is forcibly pulled back to the standard normal distribution with mean 0 and variance 1; Upsample() denotes an upsampling function; and N is the number of convolution-kernel scales used in step 3.1, i.e., N = 5 in this embodiment.
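The surviving recurrences of the polishing module (c_j and p_k) can be followed in a toy scalar form. This is not the trained module: Conv, BN and Upsample are identity placeholders here, the assumption u_j = Upsample(c_j) fills in for an equation image lost from the source, and the true output definition f′_k also did not survive, so the sketch simply returns the p_k values.

```python
def relu(x):
    """Conventional ReLU activation: max(0, x)."""
    return max(0.0, x)

def polish(features, conv=lambda x: x, bn=lambda x: x, upsample=lambda x: x):
    """Scalar walk-through of one polishing module's data flow (assumptions noted above)."""
    n = len(features)
    c = [relu(bn(conv(f))) for f in features]           # c_j = ReLU(BN(Conv(f_j)))
    u = [upsample(cj) for cj in c]                      # ASSUMED: u_j = Upsample(c_j)
    p = [relu(bn(conv(sum(u[k:])))) for k in range(n)]  # p_k = ReLU(BN(Conv(u_k + ... + u_N)))
    return p                                            # output definition f'_k lost; returning p
```

The real module operates on multi-channel feature maps and learned convolution weights; only the suffix-sum structure of p_k is taken from the text.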
And 3.4, respectively inputting the five-scale image bottom-layer features output by the two-stage polishing module into a conversion module for dimension reshaping (for example, using a reshape function), reshaping them into consistent dimensions, e.g., 32 dimensions.
And 3.5, inputting the bottom-layer features of the five-scale image reshaped in the step 3.4 into a feature fusion module to obtain fused features.
And 3.6, inputting the fused features into a second fully-connected neural network to obtain an image which is input by the second fully-connected neural network and is divided into a saliency region and a background region, and completing saliency detection.
After saliency detection is completed, operation-scene recognition is performed on the saliency region. The first fully-connected neural network of this embodiment can be trained on manually analyzed operation-scene samples: given the target categories and counts in an operation scene as input, it infers the operation scene, which falls into four classes, namely unloading, carrying, warehousing and inventory. The targets output by target recognition on the saliency region may be vehicles, handling equipment, shelves, workers, and the like.
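The mapping from target counts to a scene class can be illustrated with a linear scorer. This is a stand-in for the first fully-connected network: the scene list matches the text, but the weight matrix below is invented for the example, whereas the real network's weights come from training on labelled scene samples.

```python
SCENES = ("unloading", "carrying", "warehousing", "inventory")

# Invented weights over counts of (vehicle, handling equipment, shelf, worker).
W = [
    [0.9, 0.1, 0.0, 0.2],  # unloading: vehicles dominate
    [0.1, 0.9, 0.0, 0.2],  # carrying: handling equipment dominates
    [0.0, 0.3, 0.8, 0.2],  # warehousing: shelves plus equipment
    [0.0, 0.0, 0.5, 0.9],  # inventory: workers at shelves
]

def classify_scene(counts):
    """Score each scene class against the detected-target count vector; pick the best."""
    scores = [sum(w * c for w, c in zip(row, counts)) for row in W]
    return SCENES[max(range(len(SCENES)), key=scores.__getitem__)]
```

A trained fully-connected network replaces this hand-set matrix with learned hidden layers; the input/output contract (count vector in, scene class out) is the same.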
It is easy to understand that, in order to improve the accuracy of neural network identification, the YOLOv5 deep learning neural network, the first fully-connected neural network and the second fully-connected neural network used in the present embodiment are all trained neural networks.
Step 4, if the current operation scene has not changed, continue timing the current operation scene; if it has changed, record the duration of the operation scene before the change, bind the operation scene, its duration, and the materials and workers in that scene together, and start timing the new operation scene; the materials are bound to the loading container through a two-dimensional-code scanning gun carried by the worker, and the handling equipment identifies the materials in the operation scene via an RFID reader through the RFID tag bound to the loading container.
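The timing-and-binding logic of step 4 amounts to a small per-worker state machine. The class below is illustrative only (the application does not name such a component), and the binding of materials and containers is omitted to keep the sketch to the timing behaviour.

```python
class SceneTimer:
    """Tracks the current scene per worker and accumulates durations (step 4)."""

    def __init__(self):
        self.current = {}   # worker -> (scene, start_time)
        self.records = []   # (worker, scene, duration) bound records

    def observe(self, worker, scene, now):
        prev = self.current.get(worker)
        if prev is None:
            self.current[worker] = (scene, now)          # first observation: start timing
        elif prev[0] != scene:                           # scene changed: close old, open new
            self.records.append((worker, prev[0], now - prev[1]))
            self.current[worker] = (scene, now)
        # same scene: keep timing, nothing to record

    def finish(self, now):
        """Close all open timers when the task document completes (step 6)."""
        for worker, (scene, start) in self.current.items():
            self.records.append((worker, scene, now - start))
        self.current.clear()
```

The step 7 statistics can then be computed entirely from `records`, which is what makes the scene durations queryable after the fact.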
And 5, acquiring the position information of the workers and the handling equipment related to the warehousing task in real time based on the UWB indoor positioning system, acquiring the running state of the handling equipment related to the warehousing task from the WCS system, controlling the three-dimensional models of the workers and the handling equipment corresponding to the virtual scene to synchronously move according to the actual position information and the running state, and accumulating the running time of the handling equipment.
And 6, judging whether the warehousing task in the acquired task document is executed completely, if not, returning to the step 2 to continue execution, and if so, executing the step 7.
In this embodiment, whether a warehousing task is complete is understood as whether the materials have been transported along the preset flow direction. Because one task document may involve several operation scenes, and the scenes tied to the same material flow have a fixed order, one warehousing task may involve one or more workers; when there are several workers, some may be recognized as idle because the material flow has not yet reached their operation scene.
Therefore, scene recognition is performed for each worker (i.e., step 3 is executed per worker), and the operation scene bound to the warehousing task is that of the worker earliest recognized as being in one of the four operation-scene classes; that is, the operation scene in step 4 does not include the idle state, but is the operation scene of a worker recognized as being in one of the four classes.

Step 7, recording the total operation duration of the current task document according to the durations of the operation scenes, and compiling statistics on the data in the warehousing digital twin over a preset time period, the statistics covering: materials' warehouse entry and exit (determined from the material flow direction), the time required for each entry and exit (determined from the total operation duration of the task document), the time workers spend in the various operation scenes, and the total running time of the handling equipment.
In the statistical analysis, the preset period may be a month, a quarter, a year, etc. Statistics on materials' entry and exit and the time each entry or exit requires can later be used to adjust storage locations according to access frequency, placing high-frequency materials near the exit; statistics on the time workers spend in the various operation scenes support subsequent comprehensive evaluation of participation time and efficiency; and the total running time of the handling equipment can be used to balance running time across similar equipment, preventing premature aging from over-use.
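The step 7 aggregation can be sketched directly over the bound records, assuming (as an illustration, not a disclosed data model) that step 4 yields (worker, scene, duration) tuples and that equipment run times arrive as (equipment, duration) pairs; the aggregation period (month, quarter, year) is chosen by the caller upstream.

```python
from collections import defaultdict

def summarize(scene_records, equipment_records):
    """Aggregate one period's twin data into the step 7 statistics."""
    worker_scene = defaultdict(float)   # (worker, scene) -> total seconds
    equipment = defaultdict(float)      # equipment id -> total running seconds
    for worker, scene, dur in scene_records:
        worker_scene[(worker, scene)] += dur
    for eq, dur in equipment_records:
        equipment[eq] += dur
    return {
        "total_operation_time": sum(worker_scene.values()),
        "worker_scene_time": dict(worker_scene),
        "equipment_run_time": dict(equipment),
    }
```

The entry/exit counts per material would be aggregated the same way from the material-flow records; they are omitted here only to keep the sketch short.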
This embodiment achieves a fine-grained digital twin of the material warehousing workflow through multi-sensor fusion, improving warehousing control efficiency, digitizing warehousing operations, and laying a foundation for higher-level applications such as simulation and decision-making. Simulation can match new task documents against the closest historical operations by material and quantity to estimate the required operation time; decision-making can build on the statistical-analysis applications mentioned above.
In another embodiment, a system for warehousing digital twinning based on multi-sensor fusion is provided, namely a computer device, which may be a terminal. The computer device comprises a processor (e.g. an image analysis server, a database server), a memory, a network interface, a display screen (e.g. for presenting statistical data in the form of web pages; the three-dimensional model part of the virtual scene is presented using a threejs (webgl) library) and input means connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method for warehousing digital twinning based on multi-sensor fusion. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
As shown in fig. 2, the computer device of the present embodiment is connected to a WMS system, a WCS system, a UWB indoor positioning system (including a UWB data server), a panoramic monitoring system, and an RFID system by wire or wirelessly to acquire data.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination contains no contradiction, it should be considered to fall within the scope of this specification.
The above embodiments express only several implementations of the present application, and although their description is relatively specific and detailed, they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (5)

1. A multi-sensor fusion based warehousing digital twinning method, comprising:
step 1, acquiring a current task document from the WMS system, wherein the task document comprises the materials, workers, loading and unloading equipment and material flow directions involved in a warehousing task;
step 2, acquiring the current position of a worker related to the warehousing task from the UWB indoor positioning system, determining the shooting area to which the worker belongs according to the worker's current position, and calling a camera in that shooting area to acquire an image;
step 3, performing saliency detection on the acquired image to divide the image into a salient region and a background region, performing target recognition on the salient region of the image with a YOLOv5 deep learning neural network, and inputting the categories and numbers of the recognized targets into a pre-trained first fully-connected neural network to obtain the current operation scene corresponding to the image, as output by the first fully-connected neural network;
step 4, if the current operation scene has not changed, continuing to time it; if the current operation scene has changed, counting the duration of the operation scene before the change, binding that operation scene, its duration, and the materials and workers in that operation scene together, and starting to time the new operation scene; wherein the materials are bound to their loading containers by a two-dimensional-code scanning gun carried by the worker, and the loading and unloading equipment identifies the materials in the operation scene through an RFID card reader reading the RFID tags bound to the loading containers;
step 5, acquiring the position information of the workers and loading and unloading equipment related to the warehousing task in real time from the UWB indoor positioning system, acquiring the running state of the loading and unloading equipment related to the warehousing task from the WCS system, controlling the corresponding three-dimensional models of the workers and the loading and unloading equipment in the virtual scene to move synchronously according to the actual position information and running state, and accumulating the running time of the loading and unloading equipment;
step 6, judging whether the warehousing task in the acquired task document has been completed; if not, returning to step 2 to continue execution, and if so, executing step 7;
and step 7, recording the total operation duration of the current task document according to the durations of the operation scenes, and compiling statistics on the data in the warehousing digital twin at a preset period, the statistics covering: material inbound and outbound records, the time required for each inbound and outbound operation, the time workers spend in each category of operation scene, and the total running duration of the loading and unloading equipment.
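Step 3's final mapping, from the categories and counts of detected targets to an operation scene, can be sketched as the forward pass of a small fully-connected network. This is a minimal illustration only: the hidden-layer size, the number of object categories, and the random weights are stand-ins for the pre-trained parameters the claim refers to.

```python
import numpy as np

# The four scene categories named in claim 4.
SCENES = ["unloading", "carrying", "warehousing", "inventory"]

def relu(x):
    return np.maximum(x, 0.0)

def classify_scene(object_counts, w1, b1, w2, b2):
    """Forward pass of a small fully-connected network that maps a
    vector of per-category object counts to one of the four scenes."""
    h = relu(object_counts @ w1 + b1)   # hidden layer
    logits = h @ w2 + b2                # one logit per scene
    return SCENES[int(np.argmax(logits))]

rng = np.random.default_rng(0)
n_categories = 8                        # e.g. forklift, pallet, worker, truck...
w1 = rng.normal(size=(n_categories, 16))
b1 = np.zeros(16)
w2 = rng.normal(size=(16, len(SCENES)))
b2 = np.zeros(len(SCENES))

counts = np.array([2, 1, 3, 0, 0, 1, 0, 0], dtype=float)
scene = classify_scene(counts, w1, b1, w2, b2)
```

In the patented method the weights would come from supervised training on labelled warehouse footage; here they only demonstrate the input/output shape of the classifier.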
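Step 4's transition handling, keeping the timer running while the scene is unchanged and binding the finished scene to its duration on a transition, can be sketched as a small state machine. The class and method names are illustrative, not from the patent.

```python
import time

class SceneTimer:
    """Accumulates the duration of each operation scene and emits a
    (scene, duration, materials, workers) record on every transition."""
    def __init__(self):
        self.current = None      # scene currently being timed
        self.started = None      # timestamp when it began
        self.records = []        # bound (scene, duration, materials, workers)

    def observe(self, scene, materials, workers, now=None):
        now = time.monotonic() if now is None else now
        if self.current is None:
            self.current, self.started = scene, now
        elif scene != self.current:
            # scene changed: bind the finished scene to its duration
            self.records.append((self.current, now - self.started,
                                 materials, workers))
            self.current, self.started = scene, now

    def total_duration(self):
        return sum(r[1] for r in self.records)

t = SceneTimer()
t.observe("unloading", ["m1"], ["w1"], now=0.0)
t.observe("unloading", ["m1"], ["w1"], now=5.0)   # same scene: keep timing
t.observe("carrying",  ["m1"], ["w1"], now=12.0)  # transition at t = 12
```

After the third observation, `t.records` holds one bound record for the 12-second unloading scene and the timer has restarted for the carrying scene.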
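Step 5's synchronization, driving each three-dimensional model toward the latest UWB fix while accumulating equipment running time from the WCS state, can be sketched as follows. The exponential smoothing and the update rate are assumptions added for illustration; the patent does not specify how the models are interpolated between position updates.

```python
from dataclasses import dataclass

@dataclass
class TwinEntity:
    """Mirror of one worker or one piece of loading/unloading equipment."""
    pos: tuple = (0.0, 0.0, 0.0)
    running_seconds: float = 0.0

    def sync(self, uwb_pos, running, dt, alpha=0.5):
        # Exponential smoothing toward the UWB fix keeps the 3D model
        # from jittering between discrete position updates (assumption).
        self.pos = tuple(p + alpha * (u - p) for p, u in zip(self.pos, uwb_pos))
        if running:                       # WCS reports the equipment as running
            self.running_seconds += dt    # accumulate running time

forklift = TwinEntity()
forklift.sync(uwb_pos=(4.0, 2.0, 0.0), running=True, dt=0.1)
forklift.sync(uwb_pos=(4.0, 2.0, 0.0), running=True, dt=0.1)
```

Two updates move the model halfway toward the fix each time, while the running-time counter advances by each time step.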
2. The method for warehousing digital twinning based on multi-sensor fusion as claimed in claim 1, wherein performing saliency detection on the acquired image and dividing the image into a salient region and a background region comprises:
step 3.1, performing feature extraction on the image with a ResNet-101 neural network using convolution kernels at five sizes of 128 × 128, 64 × 64, 32 × 32, 16 × 16 and 8 × 8 to obtain bottom-layer image features at five scales;
step 3.2, inputting the bottom-layer image features of the five scales into a conversion module for dimension reshaping, so that the features of all five scales are reshaped to consistent dimensions;
step 3.3, inputting the dimension-reshaped bottom-layer image features of the five scales into a two-stage polishing module;
step 3.4, inputting the five-scale bottom-layer image features output by the two-stage polishing module into a conversion module for dimension reshaping, so that the features of all five scales are again reshaped to consistent dimensions;
step 3.5, inputting the five-scale features reshaped in step 3.4 into a feature fusion module to obtain fused features;
and step 3.6, inputting the fused features into a second fully-connected neural network to obtain the image, divided into a salient region and a background region, output by the second fully-connected neural network, thereby completing the saliency detection.
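Shape-wise, steps 3.2 to 3.5 bring five feature maps of different scales to a common size and then fuse them. A minimal NumPy sketch follows; the common size (32), the channel count (16), nearest-neighbour resizing, and fusion by element-wise summation are all assumptions, since the patent does not fix the internals of the conversion and fusion modules.

```python
import numpy as np

def reshape_to_common(feat, size=32):
    """Nearest-neighbour resize of a (C, H, W) feature map so that all
    five scales share the same (C, size, size) dimensions."""
    c, h, w = feat.shape
    ys = np.arange(size) * h // size    # source row for each target row
    xs = np.arange(size) * w // size    # source column for each target column
    return feat[:, ys][:, :, xs]

scales = [128, 64, 32, 16, 8]           # the five feature-map sizes of step 3.1
feats = [np.random.rand(16, s, s) for s in scales]

common = [reshape_to_common(f) for f in feats]   # step 3.2/3.4 analogue
fused = np.sum(common, axis=0)                   # step 3.5 analogue: fuse scales
```

The point of the sketch is only the dimensional bookkeeping: whatever the real modules compute, the five scales must end up with identical dimensions before they can be fused.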
3. The method for warehousing digital twinning based on multi-sensor fusion as claimed in claim 2, wherein the two-stage polishing module comprises two identical polishing modules connected in series, each having an input feature set defined as F = {f_k, k = 1, 2, ..., N} and an output feature set defined as P = {p_k, k = 1, 2, ..., N}, where:
c_j = ReLU(BN(Conv(f_j)))
u_k = [formula rendered only as image FDA0003158824310000023 in the source]
p_k = ReLU(BN(Conv(u_k + u_{k+1} + ... + u_N)))
[formula rendered only as image FDA0003158824310000024 in the source]
wherein ReLU() is an activation function [defined in image FDA0003158824310000025 in the source]; BN(Conv()) means that the input value f_j of any neuron in each layer of the neural network is forcibly pulled back, by a normalization method, to a standard normal distribution with a mean of 0 and a variance of 1; Upsample() denotes an upsampling function; and N = 5.
4. The multi-sensor fusion based warehousing digital twin method according to claim 1, wherein the operation scene comprises four categories of unloading, carrying, warehousing and inventory.
5. A multi-sensor fusion based warehousing digital twinning system comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the multi-sensor fusion based warehousing digital twinning method of any of claims 1-4.
CN202110784814.9A 2021-07-12 2021-07-12 Storage digital twin method and system based on multi-sensor fusion Active CN113592390B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110784814.9A CN113592390B (en) 2021-07-12 2021-07-12 Storage digital twin method and system based on multi-sensor fusion


Publications (2)

Publication Number Publication Date
CN113592390A true CN113592390A (en) 2021-11-02
CN113592390B CN113592390B (en) 2024-08-02

Family

ID=78246925

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110784814.9A Active CN113592390B (en) 2021-07-12 2021-07-12 Storage digital twin method and system based on multi-sensor fusion

Country Status (1)

Country Link
CN (1) CN113592390B (en)


Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005033448A (en) * 2003-07-11 2005-02-03 Casio Comput Co Ltd Work state management system and program
CN101980248A (en) * 2010-11-09 2011-02-23 西安电子科技大学 Improved visual attention model-based method of natural scene object detection
CN106547880A (en) * 2016-10-26 2017-03-29 重庆邮电大学 A kind of various dimensions geographic scenes recognition methodss of fusion geographic area knowledge
CN107274432A (en) * 2017-06-10 2017-10-20 北京航空航天大学 A kind of common scene intelligent video monitoring method of view-based access control model conspicuousness and depth own coding
CN109101908A (en) * 2018-07-27 2018-12-28 北京工业大学 Driving procedure area-of-interest detection method and device
CN109218619A (en) * 2018-10-12 2019-01-15 北京旷视科技有限公司 Image acquiring method, device and system
KR20190125569A (en) * 2018-04-30 2019-11-07 연세대학교 산학협력단 Method and Apparatus for Generating Scene Situation Information of Video Using Differentiation of Image Feature and Supervised Learning
CN110543867A (en) * 2019-09-09 2019-12-06 北京航空航天大学 crowd density estimation system and method under condition of multiple cameras
CN111242173A (en) * 2019-12-31 2020-06-05 四川大学 RGBD salient object detection method based on twin network
KR102127657B1 (en) * 2020-04-24 2020-06-29 한화시스템 주식회사 Method of artifical intelligence target learning and target identification for next generation naval ship using digital twin
CN111565286A (en) * 2020-07-14 2020-08-21 之江实验室 Video static background synthesis method and device, electronic equipment and storage medium
CN111768375A (en) * 2020-06-24 2020-10-13 海南大学 Asymmetric GM multi-mode fusion significance detection method and system based on CWAM
CN111860900A (en) * 2020-08-14 2020-10-30 中国能源建设集团广东省电力设计研究院有限公司 BIM-based digital twin intelligent machine room management method, device, equipment and medium
CN112053085A (en) * 2020-09-16 2020-12-08 四川大学 Airport scene operation management system and method based on digital twin
CN112131964A (en) * 2020-08-31 2020-12-25 南京汽车集团有限公司 Visual perception system of road operation vehicle and use method thereof
CN112256751A (en) * 2020-10-10 2021-01-22 天津航天机电设备研究所 Warehouse logistics visualization system based on twin data and construction method thereof
CN112418764A (en) * 2020-11-24 2021-02-26 上海治云智能科技有限公司 5G visual warehouse pipe system
CN112581446A (en) * 2020-12-15 2021-03-30 影石创新科技股份有限公司 Method, device and equipment for detecting salient object of image and storage medium
CN112699855A (en) * 2021-03-23 2021-04-23 腾讯科技(深圳)有限公司 Image scene recognition method and device based on artificial intelligence and electronic equipment
WO2021113268A1 (en) * 2019-12-01 2021-06-10 Iven Connary Systems and methods for generating of 3d information on a user display from processing of sensor data
CN112990820A (en) * 2021-03-12 2021-06-18 广东工业大学 Storage management system based on digital twin
CN113065000A (en) * 2021-03-29 2021-07-02 泰瑞数创科技(北京)有限公司 Multisource heterogeneous data fusion method based on geographic entity


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
贺付亮: "Research on PCNN Models for Infrared Human Target Segmentation in Complex Environments", China Doctoral Dissertations Full-text Database, Information Science and Technology Series, no. 04, pages 135 - 5 *
魏荣耀: "Personnel Face Recognition and Behavior Early-Warning System in Industrial Monitoring", China Masters' Theses Full-text Database, Information Science and Technology Series, no. 05, pages 138 - 1253 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113850242A (en) * 2021-11-30 2021-12-28 北京中超伟业信息安全技术股份有限公司 Storage abnormal target detection method and system based on deep learning algorithm
CN114545877A (en) * 2022-02-08 2022-05-27 燕山大学 Bulk cargo-oriented multi-engineering mechanical digital twin online monitoring system and method
CN114545877B (en) * 2022-02-08 2024-04-05 燕山大学 Multi-working-procedure mechanical digital twin on-line monitoring system and method for bulk cargo
CN115529201A (en) * 2022-05-31 2022-12-27 青岛海尔智能家电科技有限公司 Method, system, device, server and storage medium for generating family environment panorama based on digital twinning
CN118504847A (en) * 2024-07-19 2024-08-16 贵州交建信息科技有限公司 Intelligent beam field management method and system based on digital twin technology
CN118504847B (en) * 2024-07-19 2024-09-17 贵州交建信息科技有限公司 Intelligent beam field management method and system based on digital twin technology

Also Published As

Publication number Publication date
CN113592390B (en) 2024-08-02

Similar Documents

Publication Publication Date Title
CN113592390B (en) Storage digital twin method and system based on multi-sensor fusion
JP2022091875A (en) Semi-automatic labeling of data set
CN102792332B (en) Image management apparatus, image management method and integrated circuit
US9121751B2 (en) Weighing platform with computer-vision tracking
CN112100425B (en) Label labeling method and device based on artificial intelligence, electronic equipment and medium
CN105469029A (en) System and method for object re-identification
US20210304295A1 (en) Utilizing machine learning to generate augmented reality vehicle information for a vehicle captured by cameras in a vehicle lot
CN111881958A (en) License plate classification recognition method, device, equipment and storage medium
CN109910819A (en) A kind of environment inside car setting method, device, readable storage medium storing program for executing and terminal device
CN116187718A (en) Intelligent goods identification and sorting method and system based on computer vision
CN111124863A (en) Intelligent equipment performance testing method and device and intelligent equipment
CN110650170A (en) Method and device for pushing information
CN109523793A (en) The methods, devices and systems of intelligent recognition information of vehicles
CN115690545B (en) Method and device for training target tracking model and target tracking
CN113496148A (en) Multi-source data fusion method and system
CN115690514A (en) Image recognition method and related equipment
CN113689475A (en) Cross-border head trajectory tracking method, equipment and storage medium
Panahi et al. Automated Progress Monitoring in Modular Construction Factories Using Computer Vision and Building Information Modeling
CN111783528A (en) Method, computer and system for monitoring items on a shelf
CN111750965B (en) Commodity self-service charging method, device and system
US20230033011A1 (en) Methods for action localization, electronic device and non-transitory computer-readable storage medium
US11715297B2 (en) Utilizing computer vision and machine learning models for determining utilization metrics for a space
CN112529038B (en) Method and device for identifying main board material and storage medium
Fudholi et al. YOLO-based Small-scaled Model for On-Shelf Availability in Retail
Blenk Investigating Machine Vision Dataset Quality for Near-Real Time Detection and Tracking on Unmanned Aerial Vehicles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant