
CN107506932A - Power grid risk scenes in parallel computational methods and system - Google Patents


Info

Publication number
CN107506932A
CN107506932A (application CN201710756256.9A)
Authority
CN
China
Prior art keywords
computing
grid risk
data
parallel
power grid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710756256.9A
Other languages
Chinese (zh)
Inventor
莫文雄
章磊
胡金星
张志亮
王莉
何兵
孙煜华
冯圣中
吴永欢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Guangzhou Power Supply Bureau Co Ltd
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Guangzhou Power Supply Bureau Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS, Guangzhou Power Supply Bureau Co Ltd filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201710756256.9A priority Critical patent/CN107506932A/en
Publication of CN107506932A publication Critical patent/CN107506932A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0635 Risk analysis of enterprise or organisation activities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06 Energy or water supply


Abstract

The invention relates to a parallel computing method and system for power grid risk scenarios. A parallel computing pool is constructed, the pool comprising multiple computing nodes and each node comprising multiple computing engines; multiple sets of power grid risk scenario data are obtained, and each computing engine is assigned its corresponding scenario data; each set of scenario data is transmitted to its corresponding engine, the engines compute the scenario data in parallel, and the parallel computing results are obtained. In this scheme the scenario data cover large-scale branch-outage cases; by distributing the scenario data and transmitting each set to its corresponding computing engine for parallel computation, grid risk scenario computing tasks that are computation-heavy, iteration-intensive, and subject to tight real-time requirements can be completed efficiently.

Description

Parallel computing method and system for power grid risk scenarios

Technical field

The invention relates to the field of power grid technology, and in particular to a parallel computing method and system for power grid risk scenarios.

Background art

As economies grow and populations increase, power grids keep expanding in scale, and society places ever higher demands on their safety and reliability. To keep the grid operating normally, researchers have proposed indicators and calculation methods for assessing grid operating risk. Because the number of contingency (anticipated-fault) scenarios to consider is very large, and each scenario requires power flow analysis, load shedding, and similar computations, the computational burden is enormous.

To improve the computational efficiency of grid risk assessment, the usual approach is to purchase high-performance servers or workstations, but their high price and maintenance costs generally put them out of reach. Algorithmic optimization has also been attempted, but because the techniques involved are broad and deep, the efficiency gains from optimized algorithms are negligible. As a result, grid risk assessment remains computationally inefficient.

Summary of the invention

In view of this, it is necessary to provide a parallel computing method and system for power grid risk scenarios to address the low computational efficiency of traditional grid risk assessment.

A parallel computing method for power grid risk scenarios comprises the following steps:

constructing a parallel computing pool, the pool comprising multiple computing nodes, each computing node comprising multiple computing engines;

obtaining multiple sets of power grid risk scenario data and assigning each computing engine its corresponding scenario data;

transmitting each set of scenario data to its corresponding computing engine, and computing the scenario data in parallel through the engines;

obtaining the parallel computing results.
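The patent's implementation platform is MATLAB (see the detailed description below); purely as a language-neutral sketch of the four claimed steps, the same flow can be expressed in Python. `ThreadPoolExecutor` stands in for the parallel computing pool (real MATLAB engines are separate worker processes), and `evaluate_scenario` is a toy placeholder for the risk calculation, not the patent's actual computation:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def evaluate_scenario(scenario):
    # Placeholder for the per-scenario risk calculation
    # (power flow, load shedding, etc. in the real system).
    return sum(branch_load ** 2 for branch_load in scenario)

def run_risk_scenarios(scenarios):
    # Step 1: build the "parallel computing pool", one engine per CPU core.
    workers = os.cpu_count() or 1
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Steps 2 and 3: assign each scenario to an engine and compute in parallel.
        results = pool.map(evaluate_scenario, scenarios)
        # Step 4: obtain the parallel computing results.
        return list(results)

print(run_risk_scenarios([[1.0, 2.0], [3.0], [0.5, 0.5, 0.5]]))
# [5.0, 9.0, 0.75]
```

The map/collect shape mirrors the claim structure: distribution, parallel execution, and result collection are the only moving parts; everything scenario-specific lives in the per-scenario function.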

A parallel computing system for power grid risk scenarios comprises the following modules:

a parallel computing pool construction module, configured to construct a parallel computing pool comprising multiple computing nodes, each node comprising multiple computing engines;

a scenario data allocation module, configured to obtain multiple sets of power grid risk scenario data, assign each computing engine its corresponding scenario data, and transmit each set to its corresponding engine;

a parallel computing module, configured to compute each set of scenario data in parallel through the computing engines;

a result collection module, configured to obtain the parallel computing results.

With the above method and system of the invention, multiple computing nodes are used to construct a parallel computing pool, multiple sets of grid scenario data are distributed to the computing engines on each node, the engines compute the grid risk scenario data in parallel, and the results of each engine are collected. By distributing the scenario data and transmitting each set to its corresponding computing engine for parallel computation, grid risk scenario computing tasks that are computation-heavy, iteration-intensive, and subject to tight real-time requirements can be completed efficiently, improving the computational efficiency of grid risk scenarios.

A readable storage medium stores an executable program which, when executed by a processor, implements the steps of the above parallel computing method for power grid risk scenarios.

A computing device comprises a memory, a processor, and an executable program stored on the memory and runnable on the processor; when the processor executes the program, the steps of the above method are implemented.

In accordance with the above parallel computing method, the invention thus further provides a readable storage medium and a computing device that implement the method in software.

Brief description of the drawings

FIG. 1 is a schematic flowchart of a parallel computing method for power grid risk scenarios in one embodiment of the invention;

FIG. 2 is a schematic flowchart of the step of constructing a parallel computing pool in another embodiment;

FIG. 3 is a schematic flowchart of the step of allocating scenario data in another embodiment;

FIG. 4 is a schematic structural diagram of a parallel computing system for power grid risk scenarios in another embodiment;

FIG. 5 is a schematic flowchart of a parallel computing method for power grid risk scenarios in another embodiment;

FIG. 6 is a schematic flowchart of a parallel computing method for power grid risk scenarios in another embodiment;

FIG. 7 is a parallel computing architecture diagram of a parallel computing method for power grid risk scenarios in another embodiment.

Detailed description

To make the objectives, technical solutions, and advantages of the invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the invention and do not limit its scope of protection.

Referring to FIG. 1, which is a schematic flowchart of a parallel computing method for power grid risk scenarios in one embodiment of the invention, the method of this embodiment comprises the following steps:

Step S110: construct a parallel computing pool, the pool comprising multiple computing nodes, each node comprising multiple computing engines.

In this step, a computing node may be a computer device with a multi-core processor and storage; a parallel computing pool is built by connecting the processors and storage of multiple computing nodes.

Step S120: obtain multiple sets of power grid risk scenario data and assign each computing engine its corresponding scenario data.

In this step, distributing the scenario data across the computing engines adjusts the computational load placed on each engine.

Step S130: transmit each set of scenario data to its corresponding computing engine, and compute the scenario data in parallel through the engines.

In this step, once the allocation is complete, each set of scenario data is transmitted, according to the allocation result, to its assigned engine on its computing node for parallel computation.

Step S140: obtain the parallel computing results.

In this step, because the individual results still reside in the computing engines after the parallel computation finishes, the results must be returned by each engine.

In this embodiment, multiple computing nodes form a parallel computing pool, multiple sets of grid scenario data are distributed to the engines on each node, the engines compute the grid risk scenario data in parallel, and the results are collected from each engine. By distributing the scenario data and transmitting each set to its corresponding computing engine for parallel computation, grid risk scenario computing tasks that are computation-heavy, iteration-intensive, and subject to tight real-time requirements can be completed efficiently.

Optionally, the grid risk scenario data may include branch-outage data for a large-scale grid; by organizing the outage data of the individual branches, multiple sets of grid risk scenario data are formed.

Referring to FIG. 2, which shows the flow of constructing the parallel computing pool in one embodiment, the construction comprises the following steps:

Step S111: obtain multiple computing nodes, add each node to a cluster, and create task management for the cluster.

Step S112: configure the number of computing engines on each node, add the cluster to the initial cluster configuration file to obtain the target cluster configuration file, and create the parallel computing pool from the target cluster configuration file.

In this embodiment, the nodes that will participate in parallel computing are obtained and added to the same cluster; after task management is created and the number of engines per node is configured, the cluster is added to the initial cluster configuration file, and the resulting target configuration file can be used to create the parallel computing pool. In practice, a dedicated cluster needs to be created for each kind of computing task. Because cluster systems span many nodes, cluster configuration is intricate; in large systems especially, configuring nodes by hand is inefficient, and using a cluster configuration file improves the efficiency of creating and managing the parallel pool.
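The patent does not disclose the format of the cluster configuration file (in MATLAB this would typically be a cluster profile). Purely as an illustration of the idea, a cluster description could be kept in a small JSON document and used to derive the pool size when the pool is created; the schema, field names, and host names below are assumptions, not the patent's:

```python
import json

# Hypothetical target cluster configuration file: one entry per
# computing node, with the number of computing engines on that node.
TARGET_CLUSTER_CONFIG = """
{
  "cluster": "grid-risk",
  "nodes": [
    {"host": "node-1", "engines": 8},
    {"host": "node-2", "engines": 8},
    {"host": "node-3", "engines": 4}
  ]
}
"""

def pool_size_from_config(config_text):
    # The parallel pool spans every engine on every node in the cluster,
    # so the pool size is the sum of the per-node engine counts.
    config = json.loads(config_text)
    return sum(node["engines"] for node in config["nodes"])

print(pool_size_from_config(TARGET_CLUSTER_CONFIG))  # 20
```

Driving pool creation from such a file, rather than adding nodes by hand, is exactly the efficiency argument the paragraph above makes.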

Optionally, these operations can be performed on any one computing node of the parallel pool: the nodes available to participate in parallel computing are located by host IP address or host name, the nodes found are added to a cluster, and the cluster's task management is used to manage them uniformly.

In one embodiment, configuring the number of computing engines on each node comprises the following step:

setting the number of computing engines on the current node equal to the number of CPU cores of that node.

In this embodiment, the computation over grid risk scenario data is data-intensive: there are essentially no I/O operations during the computation, so I/O blocking is rare. On the other hand, a single CPU core serves multiple processes by queuing them, and each computing engine corresponds to one process; setting the number of engines on each node equal to that node's CPU core count therefore reduces the performance loss caused by system scheduling and makes full use of each node's computing resources. In one embodiment, all computing nodes are on the same local area network, and data is transferred between nodes over that network.

In this embodiment, because real grid risk scenario data involves very large data volumes and the scenario data must be transmitted to the individual computing engines for parallel computation, high data transmission rates are required. The computing nodes of the parallel pool are networked over dedicated transmission media, offering high transfer rates and low latency; this speeds up scenario-data transfer within the pool and, in turn, the parallel computation itself.

In one embodiment, after the parallel computing pool is constructed, any one computing node in the pool is selected as the client, and all grid risk scenario data is transmitted to the client; the client then assigns each computing engine its corresponding scenario data.

Transmitting the data to the corresponding computing engines comprises the following step:

transmitting the data to the corresponding computing engines through the client.

In this embodiment, computing the grid risk scenario data involves a very large number of operations; letting the client allocate the computing tasks partitions them better and improves the efficiency of the parallel computation.

Referring to FIG. 3, which shows the flow of assigning each computing engine its corresponding grid risk scenario data in one embodiment, the assignment comprises the following steps:

Step S121: obtain the total number of computing engines in the parallel computing pool and the serial number of each engine.

Step S122: number the grid risk scenario data sets sequentially to obtain scenario numbers.

Step S123: take the current scenario number modulo the total number of computing engines and add 1 to obtain the target engine serial number for that scenario; assign the scenario data bearing the current scenario number to the computing engine bearing the target serial number.

In this embodiment, with this allocation method every scenario data set is assigned a computing engine responsible for computing it, and each engine receives essentially the same amount of scenario data. The parallel computing pool is thus load-balanced, which helps raise CPU utilization, minimizes task idle time, and improves parallel computing efficiency.
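Step S123's rule fits in a few lines. A sketch in Python (with 1-based scenario and engine numbers, matching the text) shows that the remainder-plus-one rule spreads scenarios evenly across the engines:

```python
def target_engine(scenario_no, total_engines):
    # Step S123: current scenario number mod total engines, plus 1,
    # gives the serial number of the engine that computes the scenario.
    return scenario_no % total_engines + 1

# With 4 engines, scenarios 1..8 land on engines 2, 3, 4, 1, 2, 3, 4, 1:
assignment = [target_engine(n, 4) for n in range(1, 9)]
print(assignment)  # [2, 3, 4, 1, 2, 3, 4, 1]

# Each engine receives the same number of scenarios (load balance):
counts = {e: assignment.count(e) for e in range(1, 5)}
print(counts)  # {1: 2, 2: 2, 3: 2, 4: 2}
```

When the scenario count is not an exact multiple of the engine count, the per-engine load still differs by at most one scenario, which is the "basically the same amount" property claimed above.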

In one embodiment, the parallel computing platform is MATLAB, and the platform's single-program multiple-data facility is used to compute the grid risk scenario data in parallel.

In this embodiment, the volume of grid risk scenario data is large, but the computation for every scenario is similar; only the data differ. SPMD (Single Program Multiple Data) denotes a parallel computing model in which one program runs against multiple data streams. Running the parallel program in MATLAB, the same program is executed on multiple computing engines, with corresponding code written for the different data; each engine applies the one program to its own grid risk scenario data. The program contains the necessary branching logic, so an engine may execute only some of the program's statements rather than the whole program, and the returned results are stored as composite-type objects. This improves parallel computing efficiency.
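In MATLAB's `spmd` model, one program body runs on every engine, and each engine reads its own index (`labindex`) to decide which statements to execute on which data. A loose Python analog of that idea follows; threads stand in for MATLAB engines, and the worker logic is purely illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def spmd_body(engine_index, my_data):
    # One program for all engines; branching on the engine index decides
    # which statements run, as with labindex inside a MATLAB spmd block.
    if engine_index == 1:
        return ("summary", sum(my_data))          # engine 1 summarizes
    return ("detail", [x * x for x in my_data])   # others do per-element work

def run_spmd(data_per_engine):
    with ThreadPoolExecutor(max_workers=len(data_per_engine)) as pool:
        # One result per engine, akin to MATLAB's composite return object.
        futures = [pool.submit(spmd_body, i + 1, d)
                   for i, d in enumerate(data_per_engine)]
        return [f.result() for f in futures]

print(run_spmd([[1, 2, 3], [4, 5]]))
# [('summary', 6), ('detail', [16, 25])]
```

The point mirrored here is the one the paragraph makes: the program is shared, the data are partitioned, and per-engine branching lets each engine execute only the part of the program relevant to it.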

In one embodiment, the parallel computation comprises the following steps:

the current computing engine transmits its assigned grid risk scenario data to the GPU associated with that engine;

wherein the type of the received scenario data is determined on the GPU side: if the received data is dense matrix data, the gpuArray() function is used to transfer the dense matrix into GPU memory for parallel computation;

if the received data is sparse matrix data, MEX functions and CUDA computing libraries are used to compute the sparse matrix data in parallel;

if the received data comprises multiple values that are scalars, a custom function is written for the arrayfun function, and the custom function is applied to compute the received scenario data in parallel.

In this embodiment, after receiving its assigned scenario data, the computing engine transmits the data to a GPU (Graphics Processing Unit) for parallel computation. The GPU is the core component of a computer's display hardware; each GPU contains many stream processors which, for graphics workloads, are designed to operate in parallel. CUDA (Compute Unified Device Architecture) is a general-purpose parallel computing architecture for GPUs that lets them be applied to floating-point computation. Exploiting the GPU's floating-point performance, which far exceeds the CPU's, together with its high memory bandwidth and cost-effectiveness, applying the CUDA architecture to the parallel computing tasks of large-scale grid risk scenarios helps improve computational efficiency.

Matrix computation is a basic building block of grid risk scenario calculation. If a matrix has far more zero-valued elements than nonzero ones, and the nonzero elements are irregularly distributed, the matrix is called sparse; conversely, if nonzero elements make up the majority, the matrix is called dense.

To raise the parallel computing efficiency on the scenario data, dense-matrix scenario data is copied into GPU memory with the gpuArray() function, producing matrices and vectors with the gpuArray attribute whose computations the GPU then performs automatically. For sparse-matrix scenario data, using the same approach as for dense matrices would consume large amounts of GPU memory and limit the achievable problem size; instead, MEX functions and CUDA computing libraries are used to compute the sparse matrix data in parallel, the MEX functions being able to call various CUDA libraries for mixed GPU parallel computation. Specifically, the CUSPARSE and CUSOLVER sparse-matrix numerical libraries provided with CUDA can be used for the GPU computation. The case in which the received scenario data comprises multiple scalar values covers situations with more than one input or output variable.
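The three GPU paths above are MATLAB-specific (gpuArray for dense matrices, MEX plus CUSPARSE/CUSOLVER for sparse matrices, arrayfun for batches of scalars); only the type-based dispatch itself can be sketched portably. A minimal Python classification of the incoming data, using the path names as labels (the list-of-rows representation, the majority-zeros sparsity test, and the label strings are all assumptions made for illustration):

```python
def choose_gpu_path(data):
    # data: either a flat list of scalars, or a matrix as a list of rows.
    if not data or not isinstance(data[0], list):
        return "arrayfun"            # many scalar values -> custom arrayfun
    total = sum(len(row) for row in data)
    zeros = sum(1 for row in data for v in row if v == 0)
    # Zeros far outnumbering nonzeros -> sparse path (MEX + CUDA libraries);
    # otherwise dense path (gpuArray copy into GPU memory).
    return "mex_cuda" if zeros > total - zeros else "gpuArray"

print(choose_gpu_path([[0, 0, 0], [0, 5, 0]]))   # mex_cuda
print(choose_gpu_path([[1, 2], [3, 4]]))         # gpuArray
print(choose_gpu_path([1.5, 2.5, 3.5]))          # arrayfun
```

The design point is that the dispatch is decided per scenario-data set, so dense, sparse, and scalar workloads each reach the GPU through the path that suits their memory footprint.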

In one embodiment, obtaining the parallel computing results comprises the following step:

receiving the computation results returned by each computing node, where each node collects the results of its own computing engines with the gather function.

In this embodiment, because the grid risk scenario data was distributed across the computing engines for parallel computation, the results returned by the engines must be collected. Each computing node first uses the gather function to retrieve its results from GPU memory back into main memory, and then returns them. Calling the gather function speeds up retrieval of the results from the computing engines.

In accordance with the above parallel computing method, the invention further provides a parallel computing system for power grid risk scenarios; embodiments of the system are described in detail below.

Referring to FIG. 4, which is a schematic structural diagram of a parallel computing system for power grid risk scenarios in one embodiment of the invention, the system comprises:

a parallel computing pool construction module 210, configured to construct a parallel computing pool comprising multiple computing nodes, each node comprising multiple computing engines;

a scenario data allocation module 220, configured to obtain multiple sets of grid risk scenario data, assign each computing engine its corresponding scenario data, and transmit each set to its corresponding engine;

a parallel computing module 230, configured to compute each set of grid risk scenario data in parallel through the computing engines;

a result collection module 240, configured to obtain the parallel computing results.

在其中的一个实施例中,并行计算池构建模块210获取多个计算节点,将各计算节点添加至集群中,对集群创建任务管理;配置各计算节点的计算引擎数量,添加集群至初始集群配置文件,获得目标集群配置文件,根据目标集群配置文件创建并行计算池。In one of the embodiments, the parallel computing pool construction module 210 acquires multiple computing nodes, adds each computing node to the cluster, creates task management for the cluster; configures the number of computing engines of each computing node, and adds the cluster to the initial cluster configuration file, obtain the target cluster configuration file, and create a parallel computing pool based on the target cluster configuration file.

In one embodiment, the parallel computing pool construction module 210 sets the number of computing engines on a computing node equal to the number of CPU cores of that node.

In one embodiment, the parallel computing pool construction module 210 places all computing nodes on the same local area network, so that the nodes exchange data over the LAN.

In one embodiment, the scenario data distribution module 220 selects any computing node in the pool as a client and transmits the scenario data sets to it; the client assigns a corresponding data set to each computing engine and forwards each data set to its engine.

In one embodiment, the scenario data distribution module 220 obtains the total number of computing engines in the pool and the index of each engine, numbers the scenario data sets sequentially to obtain scenario numbers, and computes the target engine index for a scenario as the scenario number modulo the engine total, plus one; the scenario data set is then assigned to the engine with that index.

In one embodiment, the parallel computing pool construction module 210 builds the pool on the MATLAB platform, and the parallel computing module 230 computes the scenario data in parallel using MATLAB's single-program-multiple-data (SPMD) method.

In one embodiment, the scenario data distribution module 220 transmits, through the current computing engine, the corresponding scenario data to the GPU associated with that engine. The GPU determines the type of the received data: when it is dense matrix data, the gpuArray() function transfers it to GPU memory for parallel computation;

when the received data is sparse matrix data, a MEX function together with the CUDA computing libraries performs the parallel computation;

when the received data consists of multiple scalar values, a custom function generated through the arrayfun function computes them in parallel.

In one embodiment, the result collection module 240 receives the computation results returned by the computing nodes, where each node gathers the results of its own computing engines through the gather function.

The parallel computing system of the present invention corresponds one-to-one with the parallel computing method of the present invention; the technical features and benefits described in the method embodiments above apply equally to the system embodiments.

Based on the above method, embodiments of the present invention further provide a readable storage medium and a computing device. The storage medium stores an executable program which, when executed by a processor, implements the steps of the method; the computing device includes a memory, a processor, and an executable program stored in the memory and runnable on the processor, the processor implementing the steps of the method when it executes the program.

In a specific embodiment, the parallel computing method for power grid risk scenarios includes the following steps:

Build a parallel computing pool with MATLAB, the pool including a plurality of computing nodes and each node including a plurality of computing engines. Building the pool involves the following steps:

Acquire a plurality of computing nodes, add MATLAB to each node's system firewall, install the MATLAB Distributed Computing Server on each node, and start the service. All nodes sit on the same local area network and exchange data over it.

Add each node to a cluster through Admin Center, an interface provided by MATLAB, and create task management for the cluster in the MATLAB Job Scheduler interface.

Set each node's engine count to its number of CPU cores, add the cluster to the initial cluster configuration file to obtain the target cluster configuration file, create Monitor Jobs for that configuration to watch the cluster's task status, and create the parallel computing pool from the target cluster configuration file.
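The pool-construction steps above can be sketched in a language-neutral way. The sketch below (in Python, with hypothetical names — the actual pool lives inside MATLAB's distributed computing server) builds an engine list in which each node contributes one engine per CPU core:

```python
import os

def build_pool(node_names):
    """Assign one computing engine per CPU core on every node, as the
    configuration step above prescribes. Assumes every node has the same
    core count as the local machine; a real cluster would query each node."""
    cores_per_node = os.cpu_count() or 1
    pool, engine_no = [], 1
    for node in node_names:
        for _ in range(cores_per_node):
            pool.append({"engine": engine_no, "node": node})
            engine_no += 1  # engine indices are global and 1-based
    return pool
```

With two nodes, the pool holds twice the local core count of engines, numbered consecutively across nodes.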

Acquire a plurality of power grid risk scenario data sets and assign a corresponding data set to each computing engine.

After the pool is built, any computing node in it is selected as the client; the scenario data sets are transmitted to the client, which forwards each data set to its computing engine.

Assigning the data sets to the engines involves the following steps:

Obtain the total number of computing engines in the pool, using the labindex() function to return each engine's index;

Number the scenario data sets sequentially to obtain scenario numbers;

Take the scenario number modulo the engine total and add one to obtain the target engine index, then assign the scenario data set to the engine with that index.
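The remainder-plus-one rule above can be checked with a few lines of Python (illustrative only — in MATLAB the engine index would come from labindex):

```python
def target_engine(scene_number, total_engines):
    """Remainder of the scenario number over the engine total, plus 1,
    as described above; engine indices are 1-based like labindex."""
    return scene_number % total_engines + 1

def assign_scenarios(total_engines, scene_numbers):
    """Group scenario numbers by the engine that will compute them."""
    assignment = {}
    for n in scene_numbers:
        assignment.setdefault(target_engine(n, total_engines), []).append(n)
    return assignment

# Six scenarios over four engines wrap around the pool round-robin.
print(assign_scenarios(4, [1, 2, 3, 4, 5, 6]))
# → {2: [1, 5], 3: [2, 6], 4: [3], 1: [4]}
```

Note that every engine whose index divides evenly into a scenario number receives the wrap-around scenarios, so the load stays balanced when the scenario count exceeds the engine count.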

Perform the parallel computation with the Parallel Computing Toolbox provided by MATLAB, as follows:

Transmit each scenario data set to its computing engine and compute the data sets in parallel through the engines, using the single-program-multiple-data method provided by the MATLAB platform. The computation proceeds as follows:

The current computing engine transmits its scenario data to the GPU associated with it. If the received data is a dense matrix, the gpuArray() function transfers it to GPU memory for parallel computation; if it is a sparse matrix, a MEX function together with the CUDA computing libraries performs the parallel computation; if it consists of multiple scalar values, a custom function generated through the arrayfun function computes them in parallel.
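The three-way branch can be summarized as a small dispatch table; the sketch below is illustrative Python, not the MATLAB API — the string labels merely stand in for the gpuArray, MEX/CUDA, and arrayfun paths named above:

```python
def gpu_path(kind):
    """Pick the computation path for one scenario data set by its type,
    mirroring the three branches described above."""
    paths = {
        "dense": "gpuArray",         # built-in GPU functions on dense matrices
        "sparse": "mex_cuda",        # MEX gateway into the CUDA sparse libraries
        "scalar_batch": "arrayfun",  # element-wise custom function
    }
    try:
        return paths[kind]
    except KeyError:
        raise ValueError(f"unknown data kind: {kind}") from None
```

Keeping the dispatch explicit makes it easy to route each scenario to the cheapest suitable GPU path before any data is moved off the host.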

Obtain the parallel computing results. Each computing node gathers its engines' results through the gather function into a Composite-type variable; the client then receives the results returned by the nodes and reads each node's result by indexing with the engine index.
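The run-and-gather pattern — every engine executes the same program on its own slice, and the client reads results back by engine index — can be imitated with a thread pool. This is a plain-Python stand-in for MATLAB's spmd/Composite machinery, and sum() is only a placeholder for the actual risk computation:

```python
from concurrent.futures import ThreadPoolExecutor

def risk_calc(data):
    # Placeholder for the per-scenario risk computation.
    return sum(data)

def spmd_run(per_engine_data):
    """Run the same function on every engine's data and gather the results
    into a mapping keyed by engine index, the way a Composite returned by
    an spmd block is read back on the client."""
    with ThreadPoolExecutor() as pool:
        futures = {eng: pool.submit(risk_calc, d)
                   for eng, d in per_engine_data.items()}
        return {eng: f.result() for eng, f in futures.items()}

# Results come back indexed by engine number, as with Composite indexing.
print(spmd_run({1: [10, 20], 2: [3, 4]}))  # → {1: 30, 2: 7}
```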

Referring to FIG. 5, a schematic flowchart of a parallel computing method for power grid risk scenarios according to another embodiment of the present invention; the method in this embodiment includes the following steps:

Step S310: build a parallel computing pool, comprising the following steps:

Install the same version of MATLAB on every computing node participating in the parallel computation and add MATLAB to the firewall. Using the same version avoids compatibility problems between releases; and since the nodes of the pool exchange data over a local area network in later steps, the firewall must be configured in advance so that this traffic is not blocked.

Install the MATLAB Distributed Computing Server on each node as administrator and start the service.

Place the nodes on the same local area network, then add them to one cluster with Admin Center and create task management in the MATLAB Job Scheduler. Configure the number of computing engines for each node; preferably, each node's engine count equals its number of CPU cores.

Add the cluster to the cluster configuration file and create Monitor Jobs for that file to watch the cluster's task status.

Create the parallel computing pool from the cluster configuration file.

Step S320: acquire a plurality of power grid risk scenario data sets and assign a corresponding data set to each computing engine, comprising the following steps:

Obtain the scenario data sets to be computed and number the scenarios sequentially;

Take the scenario number modulo the number of computing engines and add one; the result is the index of the engine responsible for that scenario.

Step S330: transmit each scenario data set to its computing engine and compute the data sets in parallel through the engines, comprising the following steps:

Run the computation with MATLAB's Parallel Computing Toolbox, opening the parallel section with an spmd ... end block, which marks the statements to be executed with the SPMD (single program, multiple data) method. The toolbox solves computational and data-intensive problems on multicore processors, GPUs, and computer clusters; through high-level constructs such as parallel for-loops, special array types, and parallelized numerical algorithms, MATLAB applications can be parallelized without CUDA or MPI programming.

Use the labindex() function to return the index of the current computing engine;

According to the engine indices computed from the scenario numbers above, transmit each scenario to the engine with the matching index for parallel computation.

Each computing engine corresponds to one GPU and selects it with the gpuDevice() function. The computation then proceeds by data type:

Dense matrices: MATLAB provides built-in functions that run directly on the GPU, improving the efficiency of dense-matrix computation. The gpuArray() function copies the scenario data into GPU memory, after which the computation runs on the GPU automatically.

Sparse matrices: because MATLAB's built-in functions do not support direct GPU computation on sparse matrices, the CUSPARSE and CUSOLVER numerical libraries provided free with CUDA are used, combined with MATLAB's GPU-capable MEX interface, for mixed CPU/GPU parallel computation.

Custom functions: the arrayfun function, which supports user-defined functions, computes in parallel over scenario data with more than one input or output where the inputs are scalars.

In this step the GPU acts as a coprocessor, relieving the CPU of highly parallel, data-intensive jobs with simple logic. The GPU's parallel computing capability is considerable: it has a fast internal memory system, and its hardware can manage thousands of parallel threads, all created and managed by the GPU itself without any programming or management by the developer.

Step S340: obtain the parallel computing results, comprising the following steps:

Each computing engine gathers its results with the gather function and returns them to the CPU, then resets the GPU's memory;

Because each engine's return value under the SPMD method is stored as a Composite, the results must be returned into a Composite-type object. Optionally, the Composite object can be created and initialized before the parallel computation begins.

Read each computing node's result by indexing with the engine index.

In this embodiment, the overall flow is shown in FIG. 6. The parallel computing environment of FIG. 7 is built with MATLAB's distributed computing server, and the CUDA libraries are called through the MATLAB Parallel Computing Toolbox to complete risk scenario computations that are large, heavily iterative, and time-critical. The risk scenario tasks can be computed in parallel on multiple machines with good fault tolerance; the approach combines the simplicity of MATLAB's own parallel computing with the GPU's suitability for complex computation, improving the efficiency of power grid risk scenario computation and parallelizing large-scale workloads.

The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations are described; however, any combination of these features that contains no contradiction should be considered within the scope of this specification.

The above embodiments express only several implementations of the present invention, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art can make modifications and improvements without departing from the concept of the present invention, and these fall within its scope of protection. The scope of protection of this patent is therefore defined by the appended claims.

Claims (10)

1. A parallel computing method for power grid risk scenarios, comprising the following steps: building a parallel computing pool, the pool comprising a plurality of computing nodes, each computing node comprising a plurality of computing engines; acquiring a plurality of power grid risk scenario data sets and assigning a corresponding data set to each computing engine; transmitting each data set to its computing engine and computing the data sets in parallel through the engines; and obtaining the parallel computing results.

2. The method according to claim 1, wherein building the parallel computing pool comprises: acquiring a plurality of computing nodes, adding each node to a cluster, and creating task management for the cluster; and configuring the number of computing engines on each node, adding the cluster to an initial cluster configuration file to obtain a target cluster configuration file, and creating the parallel computing pool according to the target cluster configuration file.

3. The method according to claim 2, wherein configuring the number of computing engines on each node comprises setting the engine count of the current node to its number of CPU cores.

4. The method according to claim 2, wherein the computing nodes reside on the same local area network and transmit data between one another over it.

5. The method according to claim 2, further comprising: selecting any computing node in the pool as a client and transmitting the scenario data sets to the client, wherein the client assigns a corresponding data set to each computing engine; and wherein transmitting each data set to its engine is performed through the client.

6. The method according to claim 1, wherein assigning a corresponding data set to each computing engine comprises: obtaining the total number of computing engines in the pool and the index of each engine; numbering the scenario data sets sequentially to obtain scenario numbers; and taking the current scenario number modulo the engine total, plus one, as the target engine index, and assigning the scenario data set to the engine with that index.

7. The method according to claim 1, wherein the parallel computing platform is MATLAB and the scenario data is computed in parallel using the platform's single-program-multiple-data method.

8. The method according to claim 7, wherein the parallel computation comprises: transmitting, through the current computing engine, the corresponding scenario data to the GPU associated with that engine; wherein the GPU determines the type of the received data; if the data is a dense matrix, the gpuArray() function transfers it to GPU memory for parallel computation; if it is a sparse matrix, a MEX function and the CUDA computing libraries compute it in parallel; and if it consists of multiple scalar values, a custom function generated through the arrayfun function computes it in parallel.

9. The method according to claim 8, wherein obtaining the parallel computing results comprises receiving the results returned by the computing nodes, each node gathering the results of its computing engines through the gather function.

10. A parallel computing system for power grid risk scenarios, comprising the following modules: a parallel computing pool construction module for building a parallel computing pool, the pool comprising a plurality of computing nodes, each node comprising a plurality of computing engines; a scenario data distribution module for acquiring a plurality of power grid risk scenario data sets, assigning a corresponding data set to each computing engine, and transmitting each data set to its engine; a parallel computing module for computing the scenario data sets in parallel through the engines; and a result collection module for obtaining the parallel computing results.
CN201710756256.9A 2017-08-29 2017-08-29 Power grid risk scenes in parallel computational methods and system Pending CN107506932A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710756256.9A CN107506932A (en) 2017-08-29 2017-08-29 Power grid risk scenes in parallel computational methods and system

Publications (1)

Publication Number Publication Date
CN107506932A true CN107506932A (en) 2017-12-22

Family

ID=60694126

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710756256.9A Pending CN107506932A (en) 2017-08-29 2017-08-29 Power grid risk scenes in parallel computational methods and system

Country Status (1)

Country Link
CN (1) CN107506932A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108599173A (en) * 2018-06-21 2018-09-28 清华大学 A kind of method for solving and device of batch trend
CN111181914A (en) * 2019-09-29 2020-05-19 腾讯云计算(北京)有限责任公司 Method, device and system for monitoring internal data security of local area network and server

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030195938A1 (en) * 2000-06-26 2003-10-16 Howard Kevin David Parallel processing systems and method
CN102983996A (en) * 2012-11-21 2013-03-20 浪潮电子信息产业股份有限公司 Dynamic allocation method and system for high-availability cluster resource management
CN103617494A (en) * 2013-11-27 2014-03-05 国家电网公司 Wide-area multi-stage distributed parallel power grid analysis system
CN103870338A (en) * 2014-03-05 2014-06-18 国家电网公司 Distributive parallel computing platform and method based on CPU (central processing unit) core management

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
蔡勇: "Matlab的图形处理器并行计算及其在拓扑优化中的应用", 《计算机应用》 *


Similar Documents

Publication Publication Date Title
Wang et al. Blink: Fast and generic collectives for distributed ml
CN107688492B (en) Resource control method and device and cluster resource management system
CN103229146B (en) Computer cluster for handling calculating task is arranged and its operating method
KR101262679B1 (en) Device to allocate resource effectively for cloud computing
CN104536937B (en) Big data all-in-one machine realization method based on CPU GPU isomeric groups
US9092266B2 (en) Scalable scheduling for distributed data processing
CN109117252B (en) Method and system for task processing based on container and container cluster management system
Li et al. An effective scheduling strategy based on hypergraph partition in geographically distributed datacenters
CN106656525B (en) Data broadcasting system, data broadcasting method and equipment
Meng et al. Simulation and optimization of HPC job allocation for jointly reducing communication and cooling costs
Wang et al. Dependency-aware network adaptive scheduling of data-intensive parallel jobs
Jiang et al. The limit of horizontal scaling in public clouds
CN107506932A (en) Power grid risk scenes in parallel computational methods and system
US10467046B2 (en) Fast and greedy scheduling machine based on a distance matrix
Sugiarto et al. Optimized task graph mapping on a many-core neuromorphic supercomputer
CN107872527B (en) A LVC integrated remote mode cloud service system and method
Park et al. Cloud computing platform for GIS image processing in U-city
Solt et al. Scalable, fault-tolerant job step management for high-performance systems
Li et al. Building an HPC-as-a-service toolkit for user-interactive HPC services in the cloud
Sukhoroslov et al. Towards a general framework for studying resource management in large scale distributed systems
Megino et al. PanDA: Exascale Federation of Resources for the ATLAS Experiment at the LHC
US10630957B2 (en) Scalable distributed computation framework for data-intensive computer vision workloads
Zhou et al. Improving batch scheduling on blue Gene/Q by relaxing network allocation constraints
Hung et al. Architectures for cloud-based hpc in data centers
Balashov et al. Resource Management in Private Multi-Service Cloud Environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20171222)