CN118555216A - Multi-granularity sampling-based multi-dimensional resource joint prediction method and system for computing power network

Info

Publication number
CN118555216A
CN118555216A
Authority
CN
China
Prior art keywords: resource, sampling, aggregation, sequence, same
Prior art date 2024-07-26
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202411008898.7A
Other languages
Chinese (zh)
Other versions
CN118555216B (en)
Inventor
杜军平
杨九钊
邵蓥侠
李昂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN202411008898.7A
Publication of CN118555216A
Application granted
Publication of CN118555216B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14 Network analysis or design
    • H04L 41/147 Network analysis or design for predicting network behaviour
    • H04L 41/16 Arrangements for maintenance, administration or management of data switching networks using machine learning or artificial intelligence
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S 10/00 Systems supporting electrical power generation, transmission or distribution
    • Y04S 10/50 Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a multi-granularity sampling-based multi-dimensional resource joint prediction method and system for a computing power network, relating to the technical field of computing power networks, and comprising the following steps: for each computing node in the computing power network, constructing a historical state information sequence of each resource based on historical state data of that resource at a plurality of historical time points; sampling the historical state information sequence of each resource at a plurality of sampling intervals to construct a plurality of sampling sequences for each resource; performing attention aggregation on the sampling sequences of the same resource to obtain a first aggregation vector for each sampling sequence; performing attention aggregation on the first aggregation vectors corresponding to sampling sequences of different resources sampled at the same sampling interval to obtain a second aggregation vector corresponding to each first aggregation vector; and inputting the second aggregation vectors corresponding to the sampling sequences of the same resource into a preset prediction model to obtain the predicted resource state of that resource.

Description

Multi-granularity sampling-based multi-dimensional resource joint prediction method and system for computing power network
Technical Field
The invention relates to the technical field of computational power networks, in particular to a multi-granularity sampling-based computational power network multi-dimensional resource joint prediction method and system.
Background
A computing power network takes the network as its center to match computing tasks with multi-dimensional resources. Through collaborative awareness of multi-dimensional heterogeneous resources such as data resources, computing resources and network resources, computing tasks are scheduled on demand to suitable computing nodes, realizing unified arrangement, unified operation and unified optimization of the multi-dimensional resources and, ultimately, dynamic real-time scheduling of the computing power network. Therefore, how to jointly predict the load states of multi-dimensional resources such as data resources, computing resources and network resources, so as to assist the collaborative allocation and optimization of these resources, is the key to using the multi-dimensional resources of the computing power network efficiently.
To achieve the most effective allocation of tasks, every resource of every node in the computing power network needs to be predicted. However, prediction methods in the prior art usually predict each individual resource using only that resource's own data, ignore the correlation among resources, and therefore achieve poor prediction accuracy.
Disclosure of Invention
In view of this, embodiments of the present invention provide a multi-granularity sampling-based joint prediction method for multi-dimensional resources of a computing power network, so as to eliminate or improve one or more drawbacks existing in the prior art.
The invention provides a multi-granularity sampling-based computational power network multi-dimensional resource joint prediction method, which comprises the following steps of:
for each computing node in the computing network, acquiring historical state data of each resource at a plurality of historical time points, and constructing a historical state information sequence corresponding to each resource based on the historical state data of each resource at the plurality of historical time points;
Sampling the historical state information sequence of each resource by adopting a plurality of sampling intervals, and constructing a plurality of sampling sequences for each resource;
Performing attention aggregation on sampling sequences of the same resource to obtain a first aggregation vector corresponding to each sampling sequence;
Performing attention aggregation on first aggregation vectors corresponding to sampling sequences of different resources sampled at the same sampling interval to obtain second aggregation vectors corresponding to each first aggregation vector;
And inputting a second aggregation vector corresponding to the sampling sequence of the same resource into a preset prediction model to obtain the predicted resource state of that resource.
According to this scheme, historical state data of each resource at a plurality of historical time points is first acquired for each computing node, and attention aggregation is performed twice on this data, once based on resource type and once based on sampling interval. In the aggregation based on sampling interval, the values of each dimension correspond to the same time points, yet the first aggregation vectors of different resources are aggregated together. Because using one resource typically also consumes another in actual operation, as with CPU (central processing unit) and memory resources, this aggregation takes the correlation among different resources into account and improves the final prediction accuracy.
In some embodiments of the present invention, in the step of constructing a history state information sequence corresponding to each resource based on history state data of each resource at a plurality of history time points, the history state data of each resource at a plurality of history time points is normalized, and the normalized history state data of each resource is combined to construct the history state information sequence of each resource.
In some embodiments of the present invention, in the step of sampling the historical state information sequence of each resource at a plurality of sampling intervals, and constructing a plurality of sampling sequences for each resource, for the same sampling interval, different sampling start positions are used to sample the historical state information sequence of each resource, so that the historical state information sequence of the same resource at the same sampling interval corresponds to the plurality of sampling sequences.
In some embodiments of the present invention, in the step of sampling the historical state information sequence of each resource using different sampling start positions for the same sampling interval, the historical state values whose positions in the historical state information sequence are less than or equal to the sampling interval are taken one by one, in their order in the sequence, as the sampling start positions.
In some embodiments of the present invention, in the step of performing attention aggregation on sampling sequences of the same resource to obtain a first aggregation vector corresponding to each sampling sequence, each sampling sequence is first processed to the same length by an Embedding layer, and attention aggregation is then performed on the Embedding-layer outputs of the sampling sequences of the same resource.
In some embodiments of the present invention, in the step of performing attention aggregation on sampling sequences of the same resource to obtain a first aggregate vector corresponding to each sampling sequence and performing attention aggregation on first aggregate vectors corresponding to sampling sequences of different resources sampled at the same sampling interval to obtain a second aggregate vector corresponding to each first aggregate vector, a multi-head self-attention mechanism is used for aggregation.
In some embodiments of the invention, the softmax function is used in the step of aggregation using a multi-headed self-attention mechanism.
In some embodiments of the present invention, in the step of inputting the second aggregation vectors corresponding to the sampling sequences of the same kind of resource into a preset prediction model to obtain the predicted resource state of that resource, the second aggregation vectors corresponding to the sampling sequences of the same kind of resource are spliced and input into the prediction model, and the prediction model outputs the predicted resource state of the resource.
In some embodiments of the present invention, in the step of inputting a second aggregate vector corresponding to a sampling sequence of the same kind of resource into a preset prediction model to obtain a predicted resource state of the kind of resource, the prediction model outputs the predicted resource state through a linear layer.
The second aspect of the present invention also provides a multi-granularity sampling-based multi-dimensional resource joint prediction system for a computing power network. The system comprises a computer device including a processor and a memory; the memory stores computer instructions, and the processor is configured to execute the computer instructions stored in the memory. When the computer instructions are executed by the processor, the system implements the steps of the method described above.
The third aspect of the present invention also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps performed by the aforementioned multi-granular sampling based method for joint prediction of multi-dimensional resources of a computing power network.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
It will be appreciated by those skilled in the art that the objects and advantages that can be achieved with the present invention are not limited to the above-described specific ones, and that the above and other objects that can be achieved with the present invention will be more clearly understood from the following detailed description.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate and together with the description serve to explain the application.
FIG. 1 is a schematic diagram of one embodiment of a multi-granularity sampling-based multi-dimensional resource joint prediction method for a computing power network of the present invention;
FIG. 2 is a schematic diagram of the overall architecture of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following embodiments and the accompanying drawings, in order to make the objects, technical solutions and advantages of the present invention more apparent. The exemplary embodiments of the present invention and the descriptions thereof are used herein to explain the present invention, but are not intended to limit the invention.
It should be noted here that, in order to avoid obscuring the present invention due to unnecessary details, only structures and/or processing steps closely related to the solution according to the present invention are shown in the drawings, while other details not greatly related to the present invention are omitted.
Description of the Prior Art
In the first prior art, prediction is performed using conventional machine learning algorithms, simple convolution models, or multi-layer perceptron models, for example differential regression models, exponential smoothing models, and other linear prediction algorithms, which can be applied well to stationary time series;
however, as the complexity and size of computing power network resource clusters increase, the randomness and non-linearity of the historical data become apparent. The first prior art is far from able to handle such highly complex and volatile data. In addition, these methods usually do not take the correlation between different computing resources into account and predict each resource only in isolation.
The second prior art explores combining multiple prediction models with different weights so as to exploit the advantages of two or more models and overcome the shortcomings of conventional models. Historical data are trained and modeled using machine learning and deep learning techniques, and the optimal parameters of the network model are found adaptively by combining swarm intelligence and heuristic algorithms, thereby improving prediction accuracy;
However, the structural innovation of the models in the second prior art is relatively limited, and a certain amount of redundancy is introduced, which increases the computation cost. In addition, these methods also predict individual resources only in isolation.
As shown in fig. 1 and 2, the present invention proposes a multi-granularity sampling-based multi-dimensional resource joint prediction method for a computing power network, which includes the following steps:
step S100, for each computing node in the computing network, acquiring historical state data of each resource at a plurality of historical time points, and constructing a historical state information sequence corresponding to each resource based on the historical state data of each resource at the plurality of historical time points;
In a specific implementation, a continuous historical state information sequence over a time window T is given, containing the multi-dimensional resource states of the computing power network at T time points. These resource states are presented in the form of multi-dimensional vectors, each dimension representing different resource information in the computing power network.
In a specific implementation process, the historical state values in the historical state information sequence are arranged based on time sequence.
In a specific implementation process, the computing power network is a novel information infrastructure for distributing and flexibly scheduling computing resources, storage resources and network resources among clouds, networks and edges according to service requirements. The cloud network fusion technology, SDN/NFV and other novel network technologies are utilized to deeply fuse edge computing nodes, cloud computing nodes and various network resources, and an integral computing power service comprising computation, storage and connection is provided for clients through a centralized control or distributed scheduling method.
The computing power network has the following remarkable characteristics:
Resource abstraction and unified scheduling: the computing power network abstracts computing resources, storage resources, network resources and the like, offers them to clients as components of its products, and schedules them uniformly according to service requirements to achieve optimal resource utilization.
Service grade division: service classes are divided according to clients' service demands rather than simply by region; clients are promised service-level agreements (SLAs) covering network performance, computing power and the like, and underlying differences are shielded.
Intelligent and elastic expansion: task execution efficiency and computation quality are improved through technical means such as algorithm optimization and intelligent distribution; service traffic is monitored in real time and computing power resources are adjusted dynamically, realizing elastic expansion and contraction of resources.
Wide application scenarios: the computing power network can be applied to many fields such as scientific computing, data processing, image processing, virtual reality, intelligent manufacturing and smart cities, providing powerful, efficient and reliable computing support for various applications.
In summary, as a new generation of information infrastructure, the computing power network is leading the new trends of network computing and computing networking and accelerating digital transformation and industrial upgrading.
In a specific implementation process, the resources in the computing power node comprise computing resources and storage resources, and the computing resources comprise processor resources and graphic processor resources;
Processor (CPU): the central processing unit is the core computing unit of a computing power node and is responsible for executing various computing tasks;
graphics Processor (GPU): GPUs perform well in parallel processing large amounts of data, often used for high performance computing scenarios such as deep learning, image processing, and video rendering.
The storage resources comprise memory resources and hard disk resources;
Memory (RAM): the method is used for temporarily storing the data and instructions being processed by the CPU, and improves the calculation efficiency.
Hard disk/solid state disk (HDD/SSD): for long-term storage of data and programs, SSDs have faster read and write speeds than HDDs.
In a specific implementation process, in the step of acquiring historical state data of each resource at a plurality of historical time points, the historical state data of each resource at the plurality of historical time points is the consumed resource amount or the residual resource amount of each resource at the plurality of historical time points.
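As an illustration of step S100, the following Python sketch builds a z-score-normalized historical state information sequence for each resource from raw readings at a plurality of historical time points. It is a minimal example under assumed inputs, not the patented implementation; the function name, resource names and sample values are hypothetical.

    import numpy as np

    def build_state_sequences(raw_history):
        """raw_history maps a resource name (e.g. 'cpu', 'memory') to its consumed-resource
        readings at T historical time points, oldest first. Returns a z-score-normalized
        historical state information sequence per resource, preserving time order."""
        sequences = {}
        for resource, values in raw_history.items():
            x = np.asarray(values, dtype=np.float64)
            sequences[resource] = (x - x.mean()) / (x.std() + 1e-8)  # avoid division by zero
        return sequences

    # Example: CPU and memory utilisation over T = 8 historical time points.
    history = {
        "cpu":    [30.0, 35.0, 40.0, 55.0, 60.0, 58.0, 62.0, 70.0],
        "memory": [50.0, 52.0, 55.0, 60.0, 63.0, 61.0, 66.0, 72.0],
    }
    normalized = build_state_sequences(history)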
Step S200, sampling the historical state information sequence of each resource by adopting a plurality of sampling intervals, and constructing a plurality of sampling sequences for each resource;
In the specific implementation process, in the step of sampling the historical state information sequence of each resource at a plurality of sampling intervals, a sampling interval may be either a time interval or a number of values in the historical state information sequence. Specifically, if the interval is a time interval, one sample is taken every such span of time; if the interval is defined over the values in the historical state information sequence, one value is taken every that many values.
In a specific implementation process, the look-back window length T, that is, the length of the input historical data, and the prediction length τ, which indicates how many future time steps the scheme predicts, need to be considered. In addition, the sampling intervals s1, s2, …, sN also need to be determined, where N represents the number of sampling intervals, which specify the different time granularities used for prediction.
The scheme performs standardization processing on the historical state of the multidimensional resource, then obtains sampling results at different sampling intervals by utilizing multi-granularity sampling, and converts the sampling results into embedded representations in a representation space. The original sample lengths are mapped to hidden vectors in the embedding layer and the embedded representations at different sampling intervals are concatenated together to form a comprehensive embedded representation. Such embedded representations capture resource status information at different sampling intervals and provide a more comprehensive and rich input for subsequent attention-aggregation and prediction steps.
Step S300, performing attention aggregation on sampling sequences of the same resource to obtain a first aggregation vector corresponding to each sampling sequence;
In a specific implementation, the attention aggregation mechanism is a common technique in deep learning, which dynamically gives different attention to different parts according to the input information. In implementing attention aggregation, a function is typically used to calculate the attention weight, thereby implementing the aggregation.
The Softmax function is one of the most commonly used functions in attention aggregation. It converts raw scores into a probability distribution, used as attention weights, through a normalized exponential function. In the attention aggregation formula, the Softmax function is generally applied to the (scaled) similarity scores between the query vectors and the key vectors, and the normalized weights are then multiplied by the value vectors to obtain the final attention aggregation result.
The attention scoring function (Attention Scoring Function) calculates the similarity between a query vector and a key vector, and its outputs are fed into the Softmax function to compute the attention weights. Different attention scoring functions lead to different attention aggregation operations;
additive attention scoring function (Additive Attention): the query and the key are concatenated and input into a multi-layer perceptron (MLP); a scalar similarity score is computed through the hidden layer of the MLP (usually with a tanh activation function);
scaled dot-product attention scoring function (Scaled Dot-Product Attention): the dot product of the query and the key is computed directly and divided by the square root of the key-vector dimension (i.e., the scaling factor) to avoid vanishing gradients caused by excessively large dot products.
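As a concrete illustration of the scaled dot-product scoring and Softmax weighting described above, the following Python sketch uses the generic textbook formulation; it is an assumption for illustration, not necessarily the exact formula adopted by the invention.

    import math
    import torch

    def scaled_dot_product_attention(q, k, v):
        """q, k, v: tensors of shape (num_items, d). Scores are dot products of queries and
        keys divided by sqrt(d); Softmax turns them into weights that aggregate the values."""
        d = q.size(-1)
        scores = q @ k.transpose(-2, -1) / math.sqrt(d)   # similarity with scaling factor
        weights = torch.softmax(scores, dim=-1)           # normalized attention weights
        return weights @ v                                # attention aggregation result

    q = k = v = torch.randn(5, 16)   # e.g. self-attention over 5 embedded sampling sequences
    aggregated = scaled_dot_product_attention(q, k, v)    # shape (5, 16)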
Step S400, performing attention aggregation on first aggregation vectors corresponding to sampling sequences of different resources sampled at the same sampling interval to obtain second aggregation vectors corresponding to each first aggregation vector;
In the specific implementation process, the scheme uses a multi-head self-attention mechanism to aggregate information among different sampling sequences. Meanwhile, a multi-head self-attention mechanism is also used for information interaction across the sampled sequences, aggregating information through the implicit correlations among resources of different dimensions;
In the cross-dimension attention process, the scheme uses a multi-head self-attention mechanism to aggregate information among different resource dimensions sampled at the same interval;
In the cross-sample attention process, the scheme uses a multi-head self-attention mechanism to aggregate information among the different sampling sequences of the same resource dimension.
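A minimal sketch of the two aggregation stages under assumed tensor shapes (R resources, K sampling sequences per resource, hidden size d); torch.nn.MultiheadAttention stands in for the multi-head self-attention described above, and all names and shapes are illustrative rather than the patented network.

    import torch
    import torch.nn as nn

    R, K, d = 4, 6, 32          # resources, sampling sequences per resource, hidden size
    x = torch.randn(R, K, d)    # embedded sampling sequences, one row per resource

    cross_sample = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)
    cross_dim    = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)

    # Stage 1 (cross-sample): attend over the K sampling sequences of each resource,
    # producing the first aggregation vectors.
    first_agg, _ = cross_sample(x, x, x)            # (R, K, d)

    # Stage 2 (cross-dimension): for each sampling sequence position, attend over the
    # R resources, producing the second aggregation vectors.
    y = first_agg.transpose(0, 1)                   # (K, R, d): group by sampling sequence
    second_agg, _ = cross_dim(y, y, y)              # (K, R, d)
    second_agg = second_agg.transpose(0, 1)         # back to (R, K, d)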
Step S500, inputting the second aggregation vectors corresponding to the sampling sequences of the same kind of resource into a preset prediction model to obtain the predicted resource state of that resource.
In an implementation, the predicted resource state may be a predicted resource state at a plurality of points in time, which may be an amount of remaining resources or an amount of consumed resources.
In the specific implementation process, the scheme utilizes a multi-granularity sampling and attention mechanism to effectively aggregate and express the continuous historical state information sequence. Finally, by transformation of the linear layer, a predicted sequence of resource states within the future time window τ is obtained. The joint prediction method can better capture the relevance and dynamic change between multidimensional resources and improve the accuracy and the robustness of resource state prediction.
In the specific implementation process, in order to obtain the final prediction result, the predicted values output by the prediction model need to be inverse-normalized (de-standardized). Inverse normalization is the process of converting normalized data back to the original data so as to obtain a prediction result corresponding to the original data. Specifically, the inverse normalization operation may be performed using computational steps and formulas that invert the normalization process. Through inverse normalization, the prediction results are restored to their original scale and range.
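For example, if z-score normalization was applied per resource in step S100, the inverse operation simply restores the original scale; the sketch below assumes that choice of normalization and uses illustrative numbers.

    import numpy as np

    def inverse_normalize(pred_norm, mean, std):
        """Undo z-score normalization so predicted resource states are expressed in the
        original units (e.g. percent CPU or MB of memory)."""
        return np.asarray(pred_norm) * std + mean

    pred = inverse_normalize([0.3, 0.8, 1.1], mean=55.0, std=12.0)
    # -> array([58.6, 64.6, 68.2])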
According to this scheme, historical state data of each resource at a plurality of historical time points is first acquired for each computing node, and attention aggregation is performed twice on this data, once based on resource type and once based on sampling interval. In the aggregation based on sampling interval, the values of each dimension correspond to the same time points, yet the first aggregation vectors of different resources are aggregated together. Because using one resource typically also consumes another in actual operation, as with CPU (central processing unit) and memory resources, this aggregation takes the correlation among different resources into account and improves the final prediction accuracy.
In some embodiments of the present invention, in the step of constructing a history state information sequence corresponding to each resource based on history state data of each resource at a plurality of history time points, the history state data of each resource at a plurality of history time points is normalized, and the normalized history state data of each resource is combined to construct the history state information sequence of each resource.
In some embodiments of the present invention, in the step of sampling the historical state information sequence of each resource at a plurality of sampling intervals, and constructing a plurality of sampling sequences for each resource, for the same sampling interval, different sampling start positions are used to sample the historical state information sequence of each resource, so that the historical state information sequence of the same resource at the same sampling interval corresponds to the plurality of sampling sequences.
In some embodiments of the present invention, in the step of sampling the historical state information sequence of each resource using different sampling start positions for the same sampling interval, the historical state values whose positions in the historical state information sequence are less than or equal to the sampling interval are taken one by one, in their order in the sequence, as the sampling start positions.
In the specific implementation process, the historical state values whose positions in the historical state information sequence are less than or equal to the sampling interval are used one by one as sampling start positions, ensuring that every value in the original sequence is sampled; specifically, if the historical state information sequence is ABCDEFG and the sampling interval is 3, the sampling start positions are A, B and C respectively.
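A minimal Python sketch of this sampling rule (illustrative names only, not the patented code): for a chosen interval s, every element at a position no greater than s serves once as a start position, and each start position yields one sub-sequence taken every s elements.

    def multi_granularity_sampling(sequence, intervals):
        """For each sampling interval s, produce s sampling sequences, one per start
        position 0..s-1, so every element of the original sequence is covered."""
        return {s: [sequence[start::s] for start in range(s)] for s in intervals}

    # The example from the description: sequence ABCDEFG with sampling interval 3
    # gives start positions A, B and C.
    print(multi_granularity_sampling(list("ABCDEFG"), intervals=[3]))
    # {3: [['A', 'D', 'G'], ['B', 'E'], ['C', 'F']]}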
By adopting this scheme, the historical state information sequences of the multiple resources are sampled at different sampling intervals. Specifically, the sampling sequences obtained by sampling different resources at the same sampling interval and the same sampling start position correspond to each other; because resources used at the same moment are correlated in actual use, this correspondence between resources at identical time points is preserved.
In some embodiments of the present invention, in the step of performing attention aggregation on sampling sequences of the same resource to obtain a first aggregation vector corresponding to each sampling sequence, each sampling sequence is first processed to the same length by an Embedding layer, and attention aggregation is then performed on the Embedding-layer outputs of the sampling sequences of the same resource.
In practice, in deep learning and Natural Language Processing (NLP), the Embedding layer is a very important concept for converting discrete inputs (such as words, characters, or labels) into continuous, dense vector representations. This transformation enables the model to capture semantic relationships between inputs and to make efficient calculations and inferences in continuous space.
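The following PyTorch sketch shows one possible way, assumed here for illustration only, to pad sampling sequences of different lengths and project them through a linear Embedding layer so that every sequence becomes a hidden vector of the same dimension before attention aggregation.

    import torch
    import torch.nn as nn

    class SampleEmbedding(nn.Module):
        """Pads each sampling sequence to max_len and projects it to d_model, so that
        sequences sampled at different intervals share one representation size."""
        def __init__(self, max_len, d_model):
            super().__init__()
            self.max_len = max_len
            self.proj = nn.Linear(max_len, d_model)

        def forward(self, sequences):                 # list of 1-D tensors of varying length
            padded = torch.zeros(len(sequences), self.max_len)
            for i, s in enumerate(sequences):
                padded[i, :s.numel()] = s
            return self.proj(padded)                  # (num_sequences, d_model)

    embed = SampleEmbedding(max_len=8, d_model=32)
    seqs = [torch.randn(8), torch.randn(4), torch.randn(3)]   # e.g. intervals 1, 2 and 3
    hidden = embed(seqs)                                       # (3, 32)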
In some embodiments of the present invention, in the step of performing attention aggregation on sampling sequences of the same resource to obtain a first aggregate vector corresponding to each sampling sequence and performing attention aggregation on first aggregate vectors corresponding to sampling sequences of different resources sampled at the same sampling interval to obtain a second aggregate vector corresponding to each first aggregate vector, a multi-head self-attention mechanism is used for aggregation.
In some embodiments of the invention, the softmax function is used in the step of aggregation using a multi-headed self-attention mechanism.
In some embodiments of the present invention, in the step of inputting the second aggregation vectors corresponding to the sampling sequences of the same kind of resource into a preset prediction model to obtain the predicted resource state of that resource, the second aggregation vectors corresponding to the sampling sequences of the same kind of resource are spliced and input into the prediction model, and the prediction model outputs the predicted resource state of the resource.
In some embodiments of the present invention, in the step of inputting a second aggregate vector corresponding to a sampling sequence of the same kind of resource into a preset prediction model to obtain a predicted resource state of the kind of resource, the prediction model outputs the predicted resource state through a linear layer.
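Continuing the same illustrative assumptions, a sketch of the prediction step: the second aggregation vectors of one resource are spliced (concatenated) and a linear layer maps them to τ future resource-state values.

    import torch
    import torch.nn as nn

    K, d, tau = 6, 32, 12   # sampling sequences per resource, hidden size, prediction length

    class PredictionHead(nn.Module):
        """Splices the K second aggregation vectors of one resource and maps them to a
        tau-step predicted resource-state sequence through a linear layer."""
        def __init__(self, K, d, tau):
            super().__init__()
            self.linear = nn.Linear(K * d, tau)

        def forward(self, second_agg_one_resource):      # (K, d)
            spliced = second_agg_one_resource.reshape(-1) # (K * d,)
            return self.linear(spliced)                   # (tau,) still on the normalized scale

    head = PredictionHead(K, d, tau)
    prediction = head(torch.randn(K, d))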
In this scheme, in order to make full use of the associations among the multi-dimensional resources and to mine the temporal patterns in the historical information, multi-granularity sampling and joint prediction are carried out on the historical information of the multi-dimensional resources of the computing power network. The time series formed by the resource history information contains trend information of different patterns, and different trends appear under different sampling granularities. A finer sampling granularity better captures local trend changes, while a coarser sampling granularity grasps the variation of the multi-dimensional resources from a global view. Through multi-granularity sampling, the scheme captures temporal patterns at different granularities, exchanges information by exploiting the correlation among different dimensions, fully mines and utilizes trend information and correlation information, and improves the accuracy of predicting the future states of the multi-dimensional resources.
The beneficial effect of this scheme includes:
1. The scheme provides a multi-granularity sampling method to extract trend features of different modes in a time sequence so as to improve the accuracy of prediction, wherein trend information of the state of the multi-dimensional resource changing along with time can be captured from different granularities;
2. The scheme provides a cross-sampling and cross-dimension attention aggregation method, which can effectively utilize the implicit interrelationship between time sequence information and dimensions in a multi-dimensional resource history state.
The embodiment of the invention also provides a multi-granularity sampling-based multi-dimensional resource joint prediction system for a computing power network. The system comprises a computer device including a processor and a memory; the memory stores computer instructions, and the processor is configured to execute the computer instructions stored in the memory. When the computer instructions are executed by the processor, the system implements the steps implemented by the method described above.
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored which, when executed by a processor, implements the steps implemented by the aforementioned multi-granularity sampling-based multi-dimensional resource joint prediction method for a computing power network. The computer readable storage medium may be a tangible storage medium such as random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a floppy disk, a hard disk, a removable memory disk, a CD-ROM, or any other form of storage medium known in the art.
Those of ordinary skill in the art will appreciate that the various illustrative components, systems, and methods described in connection with the embodiments disclosed herein can be implemented as hardware, software, or a combination of both. Whether a particular implementation uses hardware or software depends on the specific application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. When implemented in hardware, it may be, for example, an electronic circuit, an application-specific integrated circuit (ASIC), suitable firmware, a plug-in, or a function card. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted over a transmission medium or communication link by a data signal carried in a carrier wave.
It should be understood that the invention is not limited to the particular arrangements and instrumentality described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. The method processes of the present invention are not limited to the specific steps described and shown, but various changes, modifications and additions, or the order between steps may be made by those skilled in the art after appreciating the spirit of the present invention.
In this disclosure, features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, and various modifications and variations can be made to the embodiments of the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. The multi-granularity sampling-based multi-dimensional resource joint prediction method for the computing power network is characterized by comprising the following steps of:
for each computing node in the computing network, acquiring historical state data of each resource at a plurality of historical time points, and constructing a historical state information sequence corresponding to each resource based on the historical state data of each resource at the plurality of historical time points;
Sampling the historical state information sequence of each resource by adopting a plurality of sampling intervals, and constructing a plurality of sampling sequences for each resource;
Performing attention aggregation on sampling sequences of the same resource to obtain a first aggregation vector corresponding to each sampling sequence;
Performing attention aggregation on first aggregation vectors corresponding to sampling sequences of different resources sampled at the same sampling interval to obtain second aggregation vectors corresponding to each first aggregation vector;
And inputting a second aggregation vector corresponding to the sampling sequence of the same resource into a preset prediction model to obtain the predicted resource state of that resource.
2. The multi-granularity sampling-based multi-dimensional resource joint prediction method of the computing power network according to claim 1, wherein in the step of constructing a history state information sequence corresponding to each resource based on the history state data of each resource at a plurality of history time points, the history state data of each resource at a plurality of history time points is normalized, and the history state data of each resource after normalization is combined to construct the history state information sequence of each resource.
3. The multi-granularity sampling-based multi-dimensional resource joint prediction method of the computing power network according to claim 1, wherein in the step of sampling the historical state information sequence of each resource by adopting a plurality of sampling intervals, constructing a plurality of sampling sequences for each resource, sampling the historical state information sequence of each resource by adopting different sampling starting positions for the same sampling interval, so that the historical state information sequence of the same resource under the same sampling interval corresponds to the plurality of sampling sequences.
4. The multi-granularity sampling-based multi-dimensional resource joint prediction method of the computing power network according to claim 3, wherein in the step of sampling the historical state information sequence of each resource by adopting different sampling starting positions, the historical state values smaller than or equal to the sampling interval in the historical state information sequence are used as the sampling starting positions one by one according to the sequence in the historical state information sequence.
5. The multi-granularity sampling-based multi-dimensional resource joint prediction method of the computing power network according to claim 1, wherein in the step of performing attention aggregation on sampling sequences of the same resource to obtain a first aggregate vector corresponding to each sampling sequence, each sampling sequence is processed into the same length by Embedding layers, and then attention aggregation is performed on the sampling sequences processed by Embedding layers of the same resource.
6. The multi-granularity sampling-based multi-dimensional resource joint prediction method of the computing power network according to claim 1, wherein in the step of performing attention aggregation on sampling sequences of the same resource to obtain first aggregation vectors corresponding to each sampling sequence and performing attention aggregation on first aggregation vectors corresponding to sampling sequences of different resources sampled at the same sampling interval to obtain second aggregation vectors corresponding to each first aggregation vector, a multi-head self-attention mechanism is adopted for aggregation.
7. The multi-granularity sampling-based multi-dimensional resource joint prediction method of the computing network of claim 6, wherein in the step of aggregating using a multi-headed self-attention mechanism, a softmax function is used.
8. The multi-granularity sampling-based multi-dimensional resource joint prediction method of the computing power network according to any one of claims 1 to 7, wherein in the step of inputting a second aggregation vector corresponding to a sampling sequence of the same kind of resource into a preset prediction model to obtain the predicted resource state of the resource, the second aggregation vectors corresponding to the sampling sequences of the same kind of resource are spliced and input into the prediction model, and the prediction model outputs the predicted resource state of the resource.
9. The multi-granularity sampling-based multi-dimensional resource joint prediction method of the computing power network according to claim 1, wherein in the step of inputting a second aggregation vector corresponding to a sampling sequence of the same resource into a preset prediction model to obtain a predicted resource state of the resource, the prediction model outputs the predicted resource state through a linear layer.
10. A multi-granularity sampling-based multi-dimensional resource joint prediction system of a computing power network, characterized in that the system comprises a computer device, the computer device comprises a processor and a memory, the memory stores computer instructions, the processor is used for executing the computer instructions stored in the memory, and when the computer instructions are executed by the processor, the system realizes the steps realized by the method according to any one of claims 1 to 9.
CN202411008898.7A 2024-07-26 2024-07-26 Multi-granularity sampling-based multi-dimensional resource joint prediction method and system for computing power network Active CN118555216B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411008898.7A CN118555216B (en) 2024-07-26 2024-07-26 Multi-granularity sampling-based multi-dimensional resource joint prediction method and system for computing power network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411008898.7A CN118555216B (en) 2024-07-26 2024-07-26 Multi-granularity sampling-based multi-dimensional resource joint prediction method and system for computing power network

Publications (2)

Publication Number Publication Date
CN118555216A true CN118555216A (en) 2024-08-27
CN118555216B CN118555216B (en) 2024-09-20

Family

ID=92455079

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411008898.7A Active CN118555216B (en) 2024-07-26 2024-07-26 Multi-granularity sampling-based multi-dimensional resource joint prediction method and system for computing power network

Country Status (1)

Country Link
CN (1) CN118555216B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022110444A1 (en) * 2020-11-30 2022-06-02 中国科学院深圳先进技术研究院 Dynamic prediction method and apparatus for cloud native resources, computer device and storage medium
CN114943456A (en) * 2022-05-31 2022-08-26 北京邮电大学 Resource scheduling method and device, electronic equipment and storage medium
CN116300928A (en) * 2023-03-17 2023-06-23 北京百度网讯科技有限公司 Data processing method for vehicle and training method for data processing model
CN116578436A (en) * 2023-05-11 2023-08-11 西安电子科技大学 Real-time online detection method based on asynchronous multielement time sequence data


Also Published As

Publication number Publication date
CN118555216B (en) 2024-09-20

Similar Documents

Publication Publication Date Title
KR102482122B1 (en) Method for processing tasks in paralall, device and storage medium
Karim et al. BHyPreC: a novel Bi-LSTM based hybrid recurrent neural network model to predict the CPU workload of cloud virtual machine
US9720738B2 (en) Datacenter scheduling of applications using machine learning techniques
CN109478144A (en) A kind of data processing equipment and method
CN114862656B (en) Multi-GPU-based acquisition method for training cost of distributed deep learning model
CN114915629A (en) Information processing method, device, system, electronic equipment and storage medium
Gupta et al. A joint feature selection framework for multivariate resource usage prediction in cloud servers using stability and prediction performance
JP7243203B2 (en) Optimization device, optimization system, optimization method, and program
US20190034806A1 (en) Monitor-mine-manage cycle
CN118555216B (en) Multi-granularity sampling-based multi-dimensional resource joint prediction method and system for computing power network
Dong et al. Damage forecasting based on multi-factor fuzzy time series and cloud model
Zhang et al. Two-level task scheduling with multi-objectives in geo-distributed and large-scale SaaS cloud
Zhang et al. Monitoring-based task scheduling in large-scale SaaS cloud
CN116701091A (en) Method, electronic device and computer program product for deriving logs
US20240314046A1 (en) Control apparatus, control method and program
CN117667606B (en) High-performance computing cluster energy consumption prediction method and system based on user behaviors
Rossi et al. Clustering-Based Numerosity Reduction for Cloud Workload Forecasting
CN115496054B (en) Multidisciplinary design optimization method, multidisciplinary design optimization system, electronic equipment and storage medium
CN117539948B (en) Service data retrieval method and device based on deep neural network
US11900047B1 (en) Systems, methods and software for improving the energy footprint of an electronic document
US11836531B2 (en) Method, device, and program product for managing computing system
Wu et al. SchedP: I/O-aware Job Scheduling in Large-Scale Production HPC Systems
Siper et al. TABot–A Distributed Deep Learning Framework for Classifying Price Chart Images
Xia et al. Deep&Cross Network for Software-Intensive System Fault Prediction
Ade Energy Optimization Techniques for Virtual Machines Using AI

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant