CN118552136B - Big data-based supply chain intelligent inventory management system and method - Google Patents
- Publication number
- CN118552136B CN118552136B CN202411008265.6A CN202411008265A CN118552136B CN 118552136 B CN118552136 B CN 118552136B CN 202411008265 A CN202411008265 A CN 202411008265A CN 118552136 B CN118552136 B CN 118552136B
- Authority
- CN
- China
- Prior art keywords
- label
- network
- loss value
- image
- cargo
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/08—Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
- G06Q10/087—Inventory or stock management, e.g. order filling, procurement or balancing against orders
- G06Q10/0875—Itemisation or classification of parts, supplies or services, e.g. bill of materials
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V10/7753—Incorporation of unlabelled data, e.g. multiple instance learning [MIL]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention relates to the field of data processing, and in particular to a big data-based supply chain intelligent inventory management system and method. The method comprises the following steps: acquiring a first cargo image, a second cargo image, a first segmentation network, a second segmentation network and a label generation network, and generating a first feasible label and a second feasible label with the label generation network; calculating a first loss value from the first feasible label; calculating a second loss value from the second feasible label and the second segmentation network; updating the parameters of the second segmentation network with the first cargo image; calculating a third loss value from the second feasible label and the updated second segmentation network; calculating a fourth loss value from the first, second and third loss values; completing network training with the fourth loss value and generating labels for the second cargo image to assist inventory management. Automatically labeling the cargo images reduces the labor cost of label annotation.
Description
Technical Field
The present invention relates generally to the field of data processing. More particularly, the present invention relates to a big data based supply chain intelligent inventory management system and method.
Background
Some goods in supply chain inventory, such as fruit, are difficult to preserve: damage can occur when they are stored in an improper environment or for too long. Spoiled goods that are not handled in time can cause other goods to be damaged, so when spoiled goods appear they must be promptly identified and dealt with accordingly.
A neural network can accurately identify damaged goods, but it must first be trained, and training requires a large number of labeled cargo images. Completing the labeling of so many cargo images manually is complex and labor-intensive, so how to achieve intelligent labeling is a problem to be solved urgently.
The prior patent application CN117370902A discloses a multi-classification, multi-label data labeling method and device, in which the same item is labeled by multiple people and each person's label is weighted according to its consistency with the others to determine the final label. That method can reduce the error rate of manual labeling and improve labeling accuracy.
However, it does not reduce the manual labeling workload; on the contrary, it requires more manpower, so it is not suitable for the problem addressed by the present invention. How to reduce the labeling workload while achieving intelligent, high-quality labeling is therefore the research focus of the invention.
Disclosure of Invention
The invention provides a big data-based supply chain intelligent inventory management system and method, aiming to solve the problem of intelligent, high-quality labeling.
In a first aspect, the present invention provides a supply chain intelligent inventory management method based on big data, which adopts the following technical scheme:
the intelligent inventory management method of the supply chain based on the big data comprises the following steps:
Acquiring a goods image with a label and a goods image without the label, and respectively marking the goods image as a first goods image and a second goods image;
Acquiring a first segmentation network, a second segmentation network and a label generation network which are trained in advance, and initializing parameters in the label generation network by utilizing random values;
Sequentially inputting the first goods image and the second goods image into a label generating network to obtain respective output results, and respectively marking the output results as a first feasible label and a second feasible label; calculating a cross entropy loss value of the first feasible label and the self-contained label, and marking the cross entropy loss value as a first loss value; taking the second feasible label as a label, inputting a second cargo image into the second segmentation network to obtain a loss value, and marking the loss value as a second loss value; inputting a first cargo image into the second segmentation network and updating parameters in the second segmentation network; taking the second feasible label as a label, inputting a second cargo image into a second segmentation network after updating parameters to obtain a loss value, and marking the loss value as a third loss value;
Calculating a fourth loss value, which is positively correlated with the first loss value and negatively correlated with the difference between the third loss value and the second loss value; training the label generation network with the fourth loss value; and generating labels with the trained label generation network to assist inventory management.
According to the method, a label generation network is constructed to automatically label the unlabeled cargo images, which effectively reduces the labor cost of label annotation. To ensure that the label generation network generates accurate labels, a suitable loss function is constructed to calculate loss values, so that training of the label generation network is completed on the basis of those loss values and the accuracy of its generated labels improves. Further, the method exploits the fact that training a network on accurately labeled cargo images improves that network's accuracy: a loss function built on this property yields loss values that effectively evaluate how accurate the generated labels are, enabling effective training of the label generation network.
Preferably, the calculating the fourth loss value includes:
L4 = exp(L1) / (L3 - L2 + a);
wherein L1 represents the first loss value, L3 represents the third loss value, L2 represents the second loss value, a represents the preset zero-proof coefficient, and L4 represents the fourth loss value.
According to the method, the exponential of the first loss value is introduced into the fourth loss value, using the manually annotated accurate labels to verify the accuracy of the labels produced by the label generation network, which provides a basis for its effective training. Meanwhile, the change in the loss value calculated from labels generated for the unlabeled cargo images, before and after the network update, indicates how well the generated labels conform to the property that training on accurately labeled cargo images improves network accuracy; this effectively verifies the accuracy of the generated labels and again provides a basis for effective training. Finally, because the loss function draws on information from both the labeled and the unlabeled cargo images, it prevents the label generation network from being trained on the relatively scarce labeled images alone and thus overfitting them, further improving its accuracy.
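The relationship above (the fourth loss rises with the exponential of the first loss value and falls as the gap between the third and second loss values grows, with a preset zero-proof coefficient guarding the denominator) can be sketched in Python; the function name and the exact exponential/ratio form are illustrative assumptions, not language from the patent:

```python
import math

def fourth_loss(l1: float, l2: float, l3: float, a: float = 1e-6) -> float:
    """Combine the three loss values into the fourth loss value.

    l1: first loss (cross entropy between generated and manual labels)
    l2: second loss (second segmentation network before its parameter update)
    l3: third loss (second segmentation network after its parameter update)
    a:  preset zero-proof coefficient keeping the denominator away from zero
    """
    # exp(l1) grows with l1, giving the positive correlation with the
    # first loss value; dividing by (l3 - l2 + a) makes the result shrink
    # as the loss difference grows, giving the negative correlation.
    return math.exp(l1) / (l3 - l2 + a)
```

Minimizing such a quantity rewards a label generation network whose first feasible labels match the manual labels (small l1) while shaping the second feasible labels through the behavior of the second segmentation network across its update.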
Preferably, the method for acquiring the first segmentation network trained in advance includes:
Constructing a first segmentation network;
The first cargo images are sequentially input into the first segmentation network to complete the first-stage training of the first segmentation network.
Preferably, the method for acquiring the pre-trained second partition network includes:
sequentially inputting the second cargo image into a first segmentation network which is trained in the first stage to obtain a segmented image of the second cargo image, adding 0.5 to each pixel value in the segmented image, and then rounding downwards to obtain a rounded segmented image, wherein the rounded segmented image is used as an initial label image of the second cargo image;
And taking the initial label image of the second cargo image as a label, inputting the second cargo image into a second segmentation network to complete training of the second segmentation network, and obtaining a second segmentation network with the training completed in the first stage.
According to the invention, the second segmentation network is trained in the first stage using only the second cargo images, which prevents information from the first cargo images from being contained in the second segmentation network after the first-stage training. This ensures that, when the second segmentation network is later trained with the first cargo images, the comparison of its outputs before and after that training reflects only the effect of the first cargo images, providing a basis for the subsequent analysis of the loss change around that update.
Preferably, the method for acquiring the pre-constructed label generating network includes:
a label generating network comprising 5 convolution layers, 2 downsampling layers, 2 deconvolution layers and 7 activation functions is constructed, and the cross entropy loss function is used as the loss function of the label generating network.
Preferably, the initializing the parameter in the tag generation network by using the random value includes:
Generating, for each parameter in the label generation network, a random value between b1 and b2 and taking it as that parameter's initial value, wherein b1 and b2 represent the lower and upper limits of the interval, respectively.
The invention initializes the network parameter values with random numbers, which is simple to perform, diversifies the parameter values, and prevents the low training efficiency that uniform network parameters would cause.
Preferably, the method for assisting inventory management includes:
Sequentially inputting the second cargo image into a label generating network after training to obtain an output result, and taking the output result as a final label of the second cargo image;
Taking the final label of the second cargo image as a label, continuing training the first segmentation network which is trained in the first stage by utilizing all the second cargo images, and marking the first segmentation network which is trained as the first segmentation network which is finally trained;
inputting a newly acquired cargo image into the finally trained first segmentation network to obtain a segmentation result, adding 0.5 to the data in the segmentation result and rounding down to obtain rounded data, and picking out the goods corresponding to pixels whose rounded data equal 1.
According to the method, the labels of the second cargo images are generated with the trained label generation network, so no manual labeling is required, effectively saving labor cost.
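The inference flow of the auxiliary inventory step, assuming the same add-0.5-then-floor rounding used when building the initial label images, might look like this in outline (a pure-Python stand-in for the network output; names are illustrative):

```python
import math

def pick_damaged_pixels(segmentation):
    """Return (row, col) coordinates of pixels classified as damaged goods.

    `segmentation` is a 2-D list of per-pixel scores in [0, 1], standing in
    for the finally trained first segmentation network's output; adding 0.5
    and rounding down maps each score to its nearest class label (0 or 1).
    """
    picked = []
    for r, row in enumerate(segmentation):
        for c, score in enumerate(row):
            if math.floor(score + 0.5) == 1:  # class 1: damaged goods
                picked.append((r, c))
    return picked

# Example: only the pixel scoring above 0.5 is flagged for picking.
result = pick_damaged_pixels([[0.1, 0.8], [0.3, 0.2]])  # -> [(0, 1)]
```

Pixels mapped to class 1 mark damaged goods, so their coordinates indicate which items to pick out of inventory.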
In a second aspect, the present invention provides a supply chain intelligent inventory management system based on big data, which adopts the following technical scheme:
A big data-based supply chain intelligent inventory management system, comprising a processor and a memory, wherein the memory stores computer program instructions which, when executed by the processor, implement the big data-based supply chain intelligent inventory management method described above.
By adopting the above technical scheme, the big data-based supply chain intelligent inventory management method is embodied as a computer program stored in the memory and loaded and executed by the processor, so that a terminal device built from the memory and the processor is convenient to use.
The invention has the following beneficial effects:
the label generating network is constructed to automatically label the goods images without labels, so that the labor cost of label labeling is effectively reduced;
further, in order to ensure that the label generating network can generate accurate labels, a proper loss function is constructed to calculate loss values, so that training of the label generating network is completed based on the loss values, and the accuracy of label generating of the label generating network is improved;
further, training a network on accurately labeled cargo images improves that network's accuracy; a loss function constructed on this property yields loss values that effectively evaluate the accuracy of the labels generated by the label generation network, enabling its effective training and improving the accuracy of the labels it generates.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. In the drawings, embodiments of the invention are illustrated by way of example and not by way of limitation, and like reference numerals refer to similar or corresponding parts and in which:
FIG. 1 is a flow chart of steps of a big data based supply chain intelligent inventory management method in accordance with an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a label generating network according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The embodiment of the invention discloses a big data-based supply chain intelligent inventory management method; referring to FIG. 1, the method comprises the following steps S1 to S5:
S1: a labeled cargo image and an unlabeled cargo image are acquired and respectively noted as a first cargo image and a second cargo image.
Training the network needs a large number of labeled cargo images, and the labeling work is tedious and complicated, so only part of the cargo images are labeled in order to reduce the labeling workload.
Specifically, N cargo images are acquired, where N represents a preset number.
Selecting a% of the cargo images, wherein a% represents a preset ratio, and labeling the selected cargo images: pixels corresponding to damaged goods are labeled 1, and pixels corresponding to undamaged goods are labeled 0; the labeled images are recorded as first cargo images. In this embodiment, N is taken as 10000 and a as 5; other embodiments may take other values, which this embodiment does not specifically limit.
The remaining cargo image is noted as a second cargo image.
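Step S1 can be organized as a simple split of the N acquired images into a labeled pool and an unlabeled pool; the 5% proportion follows this embodiment, while the function name, shuffling, and seed are illustrative choices:

```python
import random

def split_cargo_images(images, ratio=0.05, seed=0):
    """Select ratio * len(images) images for manual labeling (first cargo
    images); the remainder stay unlabeled (second cargo images)."""
    rng = random.Random(seed)        # fixed seed only for reproducibility
    shuffled = images[:]
    rng.shuffle(shuffled)
    k = int(len(shuffled) * ratio)
    return shuffled[:k], shuffled[k:]

# N = 10000 images, a = 5 percent labeled, per this embodiment.
first, second = split_cargo_images(list(range(10000)), ratio=0.05)
```

With N = 10000 and a = 5, this yields 500 first cargo images to annotate manually and 9500 second cargo images left for the label generation network.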
S2: and acquiring a first segmentation network, a second segmentation network and a label generation network which are trained in advance, and initializing parameters in the label generation network by using random values.
S20: a first segmentation network and a second segmentation network which are trained in advance are obtained.
Preferably, as an example, acquiring the first segmentation network and the second segmentation network trained in advance includes:
The first segmentation network and the second segmentation network are constructed; in this embodiment, both adopt the VGG16 network structure, and other embodiments may adopt other network structures, which the invention does not specifically limit;
The first cargo images are sequentially input into the first segmentation network to complete the first-stage training of the first segmentation network.
Sequentially inputting the second cargo images into a first segmentation network after the first stage training is completed to obtain segmented images of the second cargo images;
It should be noted that, because the labeled first cargo images are few, the segmentation precision of the first segmentation network after the first-stage training is limited, and therefore the segmented images it produces for the second cargo images are labeled with limited precision.
It should be noted that the value of each pixel in the segmented image lies in the interval from 0 to 1, and the closer a pixel value is to a class label, the greater the probability that the pixel belongs to that class. For example, a pixel value of 0.1 is relatively close to 0, the class corresponding to undamaged goods, so that pixel is most likely undamaged goods; a pixel value of 0.8 is relatively close to 1, the class corresponding to damaged goods, so that pixel is most likely damaged goods.
And adding 0.5 to each pixel value in the segmented image, then rounding downwards to obtain a rounded segmented image, and taking the rounded segmented image as an initial label image of the second cargo image.
It should be noted that adding 0.5 to each pixel value in the segmented image and then rounding down is equivalent to rounding the pixel value to the nearest integer, i.e., thresholding it at 0.5.
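The note above can be checked directly; this illustrative snippet (not from the patent) maps a segmented image's per-pixel probabilities to the hard 0/1 initial label image:

```python
import math

def to_label_image(segmented):
    """Map each pixel probability in [0, 1] to a hard 0/1 label
    by adding 0.5 and rounding down (i.e., thresholding at 0.5)."""
    return [[math.floor(p + 0.5) for p in row] for row in segmented]

# 0.1 -> 0, 0.8 -> 1, and the boundary value 0.5 rounds up to 1.
labels = to_label_image([[0.1, 0.8, 0.5]])
```

The resulting label image serves as the initial label for the second cargo image when training the second segmentation network.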
And taking the initial label image of the second cargo image as a label, inputting the second cargo image into a second segmentation network to complete training of the second segmentation network, and obtaining a second segmentation network with the training completed in the first stage.
It should be noted that the initial label images of the second cargo images are obtained from imprecisely segmented images, so they are not accurate enough; since the second segmentation network after the first-stage training is trained on these initial label images, its segmentation results are likewise not accurate enough, and it cannot yet be used directly to segment cargo images.
S21: and initializing parameters in the label generation network by utilizing the random value.
It should be noted that, in order to acquire the tag of the second cargo image, a tag generation network needs to be constructed.
Preferably, as an example, obtaining a pre-built tag generation network includes:
The network structure shown in the structural schematic diagram of the label generation network in fig. 2 is constructed, comprising 5 convolution layers, 2 downsampling layers, 2 deconvolution layers and 7 activation functions, with one activation function after each convolution layer; in this embodiment the activation function is the ReLU activation function, and the loss function of the label generation network is the cross entropy loss function.
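The layer counts above constrain how spatial resolution flows through the network. This sketch only traces feature-map side length under assumed strides (3x3 stride-1 "same" convolutions, stride-2 downsampling and deconvolution, and an assumed layer order; the patent states none of these), confirming that such a layout can return to the input resolution needed for per-pixel labels:

```python
def trace_side_length(side):
    """Trace feature-map side length through an assumed layout:
    conv-conv-pool-conv-pool-conv-deconv-conv-deconv
    (5 convolutions, 2 downsampling layers, 2 deconvolutions)."""
    sizes = [side]
    for op in ["conv", "conv", "pool", "conv", "pool",
               "conv", "deconv", "conv", "deconv"]:
        if op == "pool":      # stride-2 downsampling halves the side
            side //= 2
        elif op == "deconv":  # stride-2 deconvolution doubles the side
            side *= 2
        # stride-1 "same" convolutions keep the side unchanged
        sizes.append(side)
    return sizes

trace = trace_side_length(64)  # e.g. a 64x64 crop of a cargo image
```

The side length returns to its starting value after the two deconvolutions, which is what allows the network's output to assign a label to every input pixel.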
Preferably, as an example, initializing parameters in the tag generation network with random values includes:
generating, for each parameter in the label generation network, a random value between b1 and b2 and taking it as that parameter's initial value.
b1 and b2 represent the lower and upper limit values, respectively; in this embodiment, b1 is taken as 0 and b2 as 100, and other embodiments may take other values, which this embodiment does not specifically limit.
It should be noted that the invention initializes the network parameter values with random numbers, which is simple to perform and diversifies the parameter values, thereby preventing the low training efficiency that uniform network parameters would cause.
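Interval-random initialization as described (b1 = 0, b2 = 100 in this embodiment) can be sketched as follows; the parameter count and seed are illustrative, and such a wide positive range would be unusual for real neural-network initialization, so treat the interval as the embodiment's stated choice rather than a recommendation:

```python
import random

def init_parameters(n_params, b1=0.0, b2=100.0, seed=0):
    """Draw an independent random initial value in [b1, b2] for each of
    the label generation network's n_params parameters."""
    rng = random.Random(seed)  # fixed seed only for reproducibility
    return [rng.uniform(b1, b2) for _ in range(n_params)]

params = init_parameters(1000)
```

Every drawn value stays inside the interval, and the values are diversified rather than uniform, which is the property the embodiment relies on.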
S3: sequentially inputting the first goods image and the second goods image into a label generating network to obtain respective output results, and respectively marking the output results as a first feasible label and a second feasible label; calculating a cross entropy loss value of the first feasible label and the self-contained label, and marking the cross entropy loss value as a first loss value; taking the second feasible label as a label, inputting a second cargo image into the second segmentation network to obtain a loss value, and marking the loss value as a second loss value; inputting a first cargo image into the second segmentation network and updating parameters in the second segmentation network; and taking the second feasible label as a label, inputting the second cargo image into a second segmentation network after updating the parameters to obtain a loss value, and marking the loss value as a third loss value.
It should be noted that, because the parameters of the label generation network are generated randomly, the labels fitted from these randomly generated parameters have poor accuracy, so the label generation network needs to be trained, and training it requires an appropriate loss function.
S30: sequentially inputting the first goods image and the second goods image into a label generating network to obtain respective output results, and respectively marking the output results as a first feasible label and a second feasible label; and calculating a cross entropy loss value of the first feasible label and the self-contained label, and marking the cross entropy loss value as a first loss value.
It should be noted that, since the network parameters of the tag generation network are obtained through random initialization, the tag generated by using the network parameters in the current tag generation network is not accurate yet, and thus needs to be trained. And to achieve accurate training, an appropriate loss function is set.
It should be further noted that, since the labels carried by the first cargo images were obtained by manual annotation, their accuracy is relatively high, so they can be used to supervise the labels produced by the label generation network; a loss function can therefore be constructed to calculate a loss value from the difference between the generated labels and the images' own labels.
Preferably, calculating the cross entropy loss value of the first feasible label and the self-carrying label, and marking the cross entropy loss value as a first loss value, including:
And calculating the cross entropy loss value of the first feasible label and the self-carrying label by using the cross entropy loss function, and marking the cross entropy loss value as a first loss value.
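Per-pixel binary cross entropy over flattened label maps, the quantity used here as the first loss value, can be sketched as follows (the clipping constant is an added numerical safeguard, not part of the patent; labels follow the damaged = 1 / undamaged = 0 convention):

```python
import math

def binary_cross_entropy(pred, target, eps=1e-7):
    """Mean binary cross entropy between predicted per-pixel probabilities
    and 0/1 ground-truth labels (both given as flat sequences)."""
    total = 0.0
    for p, t in zip(pred, target):
        p = min(max(p, eps), 1.0 - eps)  # clip to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(pred)

# Loss shrinks as the first feasible label approaches the manual label.
loss = binary_cross_entropy([0.9, 0.1, 0.8], [1, 0, 1])
```

A confident, correct prediction yields a loss near zero, while disagreement with the manual label drives the first loss value up, which is exactly the supervision signal described above.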
S31: and taking the second feasible label as a label, and inputting a second cargo image into the second segmentation network to obtain a loss value, and recording the loss value as a second loss value.
Because the labels of the labeled cargo images are highly accurate, training the second segmentation network with the labeled cargo images improves its accuracy, and this property can be used to verify the accuracy of the labels produced by the label generation network. In other words, the loss value calculated from the generated labels and the second segmentation network before the update should be large, and the loss value calculated from the generated labels and the network after the update should be reduced; the loss function of the label generation network can therefore be set by analyzing how the loss value, computed from the generated labels, changes across the update of the second segmentation network.
Preferably, as an example, taking the second feasible label as a label, inputting the second cargo image into the second segmentation network to obtain a loss value, and recording the loss value as the second loss value, includes:
Inputting a second cargo image into a second segmentation network after the first-stage training is completed to obtain an output result, and marking the output result as a second output result; and calculating a loss value by using a loss function in the second segmentation network based on the second output result and the second feasible label, and recording the loss value as a second loss value.
S32: the first cargo image is input into the second segmentation network and parameters in the second segmentation network are updated.
Preferably, as an example, inputting the first cargo image into the second segmentation network and updating parameters in the second segmentation network includes:
Inputting the first cargo image into the second segmentation network after the first-stage training to obtain an output result, calculating a loss value by using the loss function of the second segmentation network based on the label of the first cargo image and the corresponding output result, and reversely updating the network parameters in the second segmentation network based on the loss value.
It should be noted that reversely updating the network parameters in the second segmentation network based on the loss value is an existing method, which is not described herein.
S33: Taking the second feasible label as a label, inputting the second cargo image into the second segmentation network after updating the parameters to obtain a loss value, and marking the loss value as a third loss value.
Preferably, as an example, taking the second feasible label as a label, inputting the second cargo image into the second segmentation network after updating the parameters to obtain a loss value, and marking the loss value as the third loss value, includes:
Inputting the second cargo image into the second segmentation network after updating the parameters to obtain an output result, marking the output result as a third output result, calculating a loss value by using the loss function of the second segmentation network based on the second feasible label and the third output result, and marking the loss value as the third loss value.
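The sequence S31–S33 can be illustrated with a toy stand-in for the second segmentation network: a single logistic unit applied per pixel, which is an assumption made purely for demonstration and not the patented architecture. The images and labels are placeholder lists:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce(preds, targets, eps=1e-7):
    """Binary cross-entropy, as used by the second segmentation network."""
    s = 0.0
    for p, t in zip(preds, targets):
        p = min(max(p, eps), 1 - eps)
        s += t * math.log(p) + (1 - t) * math.log(1 - p)
    return -s / len(preds)

def predict(img, w, b):
    # Toy stand-in for the second segmentation network: one logistic unit per pixel.
    return [sigmoid(w * x + b) for x in img]

first_img  = [2.0, -2.0, 1.5, -1.5]   # labeled (first) cargo image, placeholder values
first_lbl  = [1.0, 0.0, 1.0, 0.0]     # accurate manual label
second_img = [1.8, -1.7, 1.2, -2.2]   # unlabeled (second) cargo image
feasible   = [1.0, 0.0, 1.0, 0.0]     # second feasible label from the label generation network

w, b = 0.1, 0.0

# S31: second loss -- evaluate the network before the update on the generated label.
second_loss = bce(predict(second_img, w, b), feasible)

# S32: update the parameters with the accurately labeled first cargo image
# (a few steps of gradient descent on the cross-entropy loss).
lr = 0.5
for _ in range(20):
    preds = predict(first_img, w, b)
    grad_w = sum((p - t) * x for p, t, x in zip(preds, first_lbl, first_img)) / len(first_img)
    grad_b = sum(p - t for p, t in zip(preds, first_lbl)) / len(first_img)
    w -= lr * grad_w
    b -= lr * grad_b

# S33: third loss -- the same evaluation after the update.
third_loss = bce(predict(second_img, w, b), feasible)
```

When the feasible label agrees with the accurate manual labels, the loss measured on it drops after the update, so the third loss value comes out smaller than the second loss value.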
S4: a fourth loss value is calculated that is positively correlated with the first loss value and negatively correlated with the difference between the third loss value and the second loss value.
Preferably, as an example, the fourth loss value satisfies the relation:
L4 = L1 / (L2 - L3 + ε);
wherein L1 represents the first loss value: the smaller this value, the smaller the difference between the self-carried label of the first cargo image and the first feasible label; since the self-carried label of the first cargo image is manually annotated and therefore relatively accurate, the first loss value is small when the first feasible label generated by the label generation network is relatively accurate. L3 represents the third loss value and L2 represents the second loss value; the difference L2 - L3 reflects the change of the loss value of the second segmentation network before and after its parameter update, and the larger this difference, the more the loss value has decreased. Because the second segmentation network has its parameters updated with the first cargo images, whose labels are relatively accurate, its segmentation accuracy after the parameter update should be higher than before; in other words, when the feasible label is relatively accurate, the third loss value should be smaller than the second loss value. Therefore, to make the generated labels more accurate, the difference between the second loss value and the third loss value should be as large as possible. ε represents a preset anti-zero coefficient; this embodiment takes ε = 0.0001 as an example for description, other embodiments may take other values, and the embodiment is not particularly limited. L4 represents the fourth loss value.
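Since the published formula is not fully recoverable from this translation, the form L4 = L1 / (L2 - L3 + ε) used in the sketch below is an assumption chosen to satisfy the stated correlations: it grows with the first loss value and shrinks as the gap between the second and third loss values widens, with the anti-zero coefficient ε keeping the denominator nonzero:

```python
def fourth_loss(first_loss, second_loss, third_loss, eps=1e-4):
    """Hypothetical fourth loss: grows with the first loss value and shrinks
    as the gap (second_loss - third_loss) widens; eps is the anti-zero
    coefficient preventing division by zero."""
    return first_loss / (second_loss - third_loss + eps)

# A larger drop in the second segmentation network's loss (second - third)
# yields a smaller fourth loss value, rewarding accurate generated labels.
l4_small_gap = fourth_loss(0.2, 0.50, 0.45)   # gap 0.05
l4_big_gap   = fourth_loss(0.2, 0.50, 0.20)   # gap 0.30
```

Minimizing this quantity simultaneously pulls the generated labels toward the manual annotations (small L1) and pushes the loss reduction of the second segmentation network (L2 - L3) upward.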
When the loss function is constructed in this way, information from both the labeled and the unlabeled cargo images is introduced. This prevents the label generation network from being trained only on the relatively few labeled cargo images, which would cause it to overfit the labeled images, and thus further improves the accuracy of the label generation network.
It should be further noted that the calculation formula of the fourth loss value is the loss function of the label generation network.
S5: Finishing training the label generation network by using the fourth loss value, and generating labels by using the trained label generation network so as to assist in inventory management.
S50: training of the tag generation network is completed by using the fourth loss value.
Preferably, as an example, completing the training of the label generation network using the fourth loss value and generating labels with the trained label generation network includes:
Using the fourth loss value to reversely update the parameters in the label generation network.
It should be noted that reversely updating the parameters in a network by using a loss value is prior art, and this embodiment does not specifically limit it.
According to the method in steps S2-S4, the parameters in the label generation network are updated by using each first cargo image and each second cargo image in turn until the fourth loss value converges, yielding the trained label generation network.
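The iterate-until-convergence step can be sketched generically; `train_step` here is a placeholder returning a synthetic decaying loss and stands in for one pass of steps S2-S4 over the cargo images:

```python
def train_until_converged(train_step, tol=1e-4, max_iters=1000):
    """Repeat train_step until the fourth loss value changes by less than tol."""
    prev = float("inf")
    for i in range(max_iters):
        loss = train_step(i)
        if abs(prev - loss) < tol:
            return loss, i
        prev = loss
    return prev, max_iters

# Placeholder train_step: a synthetic loss decaying toward 0.1; in the method it
# would update the label generation network with the fourth loss value over all
# first and second cargo images.
final_loss, iters = train_until_converged(lambda i: 0.1 + 0.9 * (0.8 ** i))
```

The tolerance and maximum iteration count are illustrative; the patent only requires that training stop once the fourth loss value converges.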
S51: Assist in inventory management.
Preferably, as an example, assisting in inventory management includes:
Sequentially inputting the second cargo images into the trained label generation network to obtain output results, and taking each output result as the final label of the corresponding second cargo image.
Taking the final labels of the second cargo images as labels, continuing to train the first segmentation network that completed the first-stage training by utilizing all the second cargo images, and marking the resulting network as the finally trained first segmentation network.
Inputting a newly acquired cargo image into the finally trained first segmentation network to obtain a segmentation result, adding 0.5 to each value in the segmentation result and rounding to obtain rounded data, and picking out the cargo corresponding to the pixels whose rounded data equals 1.
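The add-0.5-and-round rule above amounts to thresholding the segmentation output at 0.5 (interpreting the rounding as round-half-up, which is an assumption given the translation); a minimal sketch with placeholder values:

```python
seg_result = [[0.91, 0.12],
              [0.49, 0.73]]   # placeholder segmentation probabilities

# Add 0.5 and truncate: values >= 0.5 become 1, others 0 -- i.e. a simple
# 0.5 threshold on the network output (assumed interpretation of the rule).
rounded = [[int(v + 0.5) for v in row] for row in seg_result]

# Pixels equal to 1 are the ones picked out as cargo.
cargo_pixels = [(r, c) for r, row in enumerate(rounded)
                for c, v in enumerate(row) if v == 1]
```

Each coordinate in `cargo_pixels` marks a pixel the finally trained first segmentation network classifies as cargo.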
The embodiment of the invention also discloses a big data-based supply chain intelligent inventory management system, which comprises a processor and a memory, wherein the memory stores computer program instructions which, when executed by the processor, implement the big data-based supply chain intelligent inventory management method according to the invention.
The above system further comprises other components well known to those skilled in the art, such as a communication bus and a communication interface, the arrangement and function of which are known in the art and therefore are not described in detail herein.
In the context of this patent, the foregoing memory may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, the computer readable storage medium may be any suitable magnetic or magneto-optical storage medium, such as, for example, resistance change memory, dynamic random access memory, static random access memory, enhanced dynamic random access memory, high bandwidth memory, hybrid storage cube, etc., or any other medium that can be used to store the desired information and that can be accessed by an application, a module, or both. Any such computer storage media may be part of, or accessible by, or connectable to, the device.
While various embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Many modifications, changes, and substitutions will now occur to those skilled in the art without departing from the spirit and scope of the invention. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention.
The above embodiments are not intended to limit the scope of the present invention; all equivalent changes in structure, shape and principle of the invention should be covered within the scope of protection of the invention.
Claims (6)
1. The intelligent inventory management method of the supply chain based on the big data is characterized by comprising the following steps:
Acquiring a cargo image with a label and a cargo image without a label, and respectively marking them as a first cargo image and a second cargo image;
Acquiring a pre-trained first segmentation network, comprising: constructing a first segmentation network; sequentially inputting the first cargo images into the first segmentation network to complete the first-stage training of the first segmentation network; acquiring a second segmentation network whose first-stage training is completed, comprising the following steps: sequentially inputting the second cargo images into the first segmentation network whose first-stage training is completed to obtain a segmented image of each second cargo image, adding 0.5 to each pixel value in the segmented image and then rounding downwards to obtain a rounded segmented image, the rounded segmented image being used as an initial label image of the second cargo image; taking the initial label image of each second cargo image as a label, inputting the second cargo images into the second segmentation network to complete its training and obtain the second segmentation network whose first-stage training is completed; and acquiring a pre-constructed label generation network and initializing the parameters in the label generation network with random values;
Sequentially inputting the first cargo image and the second cargo image into the label generation network to obtain respective output results, respectively marked as a first feasible label and a second feasible label; calculating a cross entropy loss value of the first feasible label and the self-carried label, and marking the cross entropy loss value as a first loss value; taking the second feasible label as a label, inputting the second cargo image into the second segmentation network to obtain a loss value, and marking the loss value as a second loss value, comprising the following steps: inputting the second cargo image into the second segmentation network after the first-stage training is completed to obtain an output result, and marking the output result as a second output result; calculating a loss value by using a loss function in the second segmentation network based on the second output result and the second feasible label, and recording the loss value as the second loss value; inputting the first cargo image into the second segmentation network and updating parameters in the second segmentation network; taking the second feasible label as a label, inputting the second cargo image into the second segmentation network after updating the parameters to obtain a loss value, and marking the loss value as a third loss value, comprising the following steps: inputting the second cargo image into the second segmentation network after updating the parameters to obtain an output result, marking the output result as a third output result, calculating a loss value by using the loss function of the second segmentation network based on the second feasible label and the third output result, and marking the loss value as the third loss value;
Calculating a fourth loss value, which is positively correlated with the first loss value and negatively correlated with the difference between the second loss value and the third loss value; training the label generation network by using the fourth loss value; and generating labels by using the trained label generation network to assist in inventory management.
2. The big data based supply chain intelligent inventory management method of claim 1, wherein the calculating a fourth loss value includes:
L4 = L1 / (L2 - L3 + ε);
wherein L1 represents the first loss value, L3 represents the third loss value, L2 represents the second loss value, ε represents the preset anti-zero coefficient, and L4 represents the fourth loss value.
3. The big data based supply chain intelligent inventory management method of claim 1, wherein the pre-built tag generation network acquisition method comprises:
a label generating network comprising 5 convolution layers, 2 downsampling layers, 2 deconvolution layers and 7 activation functions is constructed, and the cross entropy loss function is used as the loss function of the label generating network.
4. The big data based supply chain intelligent inventory management method of claim 1, wherein initializing parameters in a tag generation network with random values comprises:
Generating a random value between b1 and b2 and taking the random value as the initial value of each parameter in the label generation network, wherein b1 and b2 respectively represent the upper and lower limit values of the interval.
5. The big data based supply chain intelligent inventory management method of claim 1, wherein the step of assisting inventory management includes:
Sequentially inputting the second cargo image into a label generating network after training to obtain an output result, and taking the output result as a final label of the second cargo image;
Taking the final label of the second cargo image as a label, continuing training the first segmentation network which is trained in the first stage by utilizing all the second cargo images, and marking the first segmentation network which is trained as the first segmentation network which is finally trained;
inputting the newly acquired cargo image into a first segmentation network with final training to obtain a segmentation result, adding 0.5 to the data in the segmentation result, rounding up to obtain rounded data, and picking out cargoes corresponding to pixels with the rounded data equal to 1.
6. Big data based supply chain intelligent inventory management system, characterized by comprising: a processor and a memory storing computer program instructions that when executed by the processor implement the big data based supply chain intelligent inventory management method of any of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411008265.6A CN118552136B (en) | 2024-07-26 | 2024-07-26 | Big data-based supply chain intelligent inventory management system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118552136A CN118552136A (en) | 2024-08-27 |
CN118552136B true CN118552136B (en) | 2024-10-25 |
Family
ID=92454988
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202411008265.6A Active CN118552136B (en) | 2024-07-26 | 2024-07-26 | Big data-based supply chain intelligent inventory management system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118552136B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023202596A1 (en) * | 2022-04-19 | 2023-10-26 | 华为技术有限公司 | Semi-supervised model training method and system, and related device |
CN117095251A (en) * | 2023-06-12 | 2023-11-21 | 商汤人工智能研究中心(深圳)有限公司 | Training and image segmentation method, device and equipment of image segmentation network |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110414428A (en) * | 2019-07-26 | 2019-11-05 | 厦门美图之家科技有限公司 | A method of generating face character information identification model |
JP2022086893A (en) * | 2020-11-30 | 2022-06-09 | ブラザー工業株式会社 | Acquisition method, acquisition system, and computer program |
CN113113119A (en) * | 2021-03-23 | 2021-07-13 | 中国科学院深圳先进技术研究院 | Training method of semantic segmentation network, image processing method and equipment thereof |
CN113705769B (en) * | 2021-05-17 | 2024-09-13 | 华为技术有限公司 | Neural network training method and device |
CN113221837B (en) * | 2021-06-01 | 2024-06-07 | 北京金山云网络技术有限公司 | Object segmentation method, training method and device of object segmentation model |
CN115661558A (en) * | 2021-07-08 | 2023-01-31 | 华为技术有限公司 | Image generation method, training method for generating countermeasure network and related equipment |
CN113705772A (en) * | 2021-07-21 | 2021-11-26 | 浪潮(北京)电子信息产业有限公司 | Model training method, device and equipment and readable storage medium |
CN113822428A (en) * | 2021-08-06 | 2021-12-21 | 中国工商银行股份有限公司 | Neural network training method and device and image segmentation method |
CN113850826B (en) * | 2021-09-27 | 2024-07-19 | 平安科技(深圳)有限公司 | Image segmentation-based heart image processing method, device, equipment and medium |
KR102406287B1 (en) * | 2021-12-31 | 2022-06-08 | 주식회사 에스아이에이 | super resolution imaging method using collaborative learning |
CN114998681A (en) * | 2022-05-31 | 2022-09-02 | 上海商汤智能科技有限公司 | Network training method based on affinity coefficient |
JP2024527444A (en) * | 2022-06-02 | 2024-07-25 | ▲騰▼▲訊▼科技(深▲セン▼)有限公司 | Image processing method and device, computer device, storage medium, and computer program |
CN115147687A (en) * | 2022-07-07 | 2022-10-04 | 浙江啄云智能科技有限公司 | Student model training method, device, equipment and storage medium |
CN116189884B (en) * | 2023-04-24 | 2023-07-25 | 成都中医药大学 | Multi-mode fusion traditional Chinese medicine physique judging method and system based on facial vision |
CN117612206B (en) * | 2023-11-27 | 2024-09-17 | 深圳市大数据研究院 | Pedestrian re-recognition network model generation method, device, computer equipment and medium |
CN118097326A (en) * | 2023-12-30 | 2024-05-28 | 深圳云天励飞技术股份有限公司 | Target detection model training method and device, electronic equipment and storage medium |
- 2024-07-26: CN202411008265.6A patent/CN118552136B/en, status: Active
Also Published As
Publication number | Publication date |
---|---|
CN118552136A (en) | 2024-08-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111950638B (en) | Image classification method and device based on model distillation and electronic equipment | |
CN111741330B (en) | Video content evaluation method and device, storage medium and computer equipment | |
CN106650928A (en) | Neural network optimization method and device | |
CN112990478B (en) | Federal learning data processing system | |
CN111915555B (en) | 3D network model pre-training method, system, terminal and storage medium | |
CN114897136B (en) | Multi-scale attention mechanism method and module and image processing method and device | |
CN114663662B (en) | Hyper-parameter searching method, device, computer equipment and storage medium | |
CN114022697A (en) | Vehicle re-identification method and system based on multitask learning and knowledge distillation | |
CN118196410A (en) | Remote sensing image semantic segmentation method, system, equipment and storage medium | |
CN117036843A (en) | Target detection model training method, target detection method and device | |
CN112860847A (en) | Video question-answer interaction method and system | |
CN118154867A (en) | Semi-supervised remote sensing image semantic segmentation method and system | |
CN113449878B (en) | Data distributed incremental learning method, system, equipment and storage medium | |
CN111144168A (en) | Crop growth cycle identification method, equipment and system | |
CN111325212A (en) | Model training method and device, electronic equipment and computer readable storage medium | |
CN118552136B (en) | Big data-based supply chain intelligent inventory management system and method | |
CN112381147B (en) | Dynamic picture similarity model establishment and similarity calculation method and device | |
CN117590241A (en) | Lithium battery health state monitoring method based on state attenuation and interactive learning | |
CN116091784A (en) | Target tracking method, device and storage medium | |
CN117011219A (en) | Method, apparatus, device, storage medium and program product for detecting quality of article | |
CN112182422A (en) | Skill recommendation method, skill recommendation device, electronic identification and medium | |
CN113569852B (en) | Training method and device of semantic segmentation model, electronic equipment and storage medium | |
CN118734947B (en) | Knowledge graph completion method and device based on attention penalty and noise sampling | |
CN114638365B (en) | Machine reading understanding reasoning method and device, electronic equipment and storage medium | |
CN113436199B (en) | Semi-supervised video target segmentation method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |