
CN112714016B - Electric power Internet of things big data edge analysis method - Google Patents

Electric power Internet of things big data edge analysis method

Info

Publication number
CN112714016B
CN112714016B (application CN202011559413.5A)
Authority
CN
China
Prior art keywords
edge
data
stream data
network
performs
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011559413.5A
Other languages
Chinese (zh)
Other versions
CN112714016A (en)
Inventor
刘明硕
郑涛
赵梦瑶
王新颖
刘成龙
吴军英
姜丹
常永娟
陈曦
彭娇
贺月
张博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
North China Electric Power University
Information and Telecommunication Branch of State Grid Hebei Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
North China Electric Power University
Information and Telecommunication Branch of State Grid Hebei Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, North China Electric Power University, Information and Telecommunication Branch of State Grid Hebei Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202011559413.5A priority Critical patent/CN112714016B/en
Publication of CN112714016A publication Critical patent/CN112714016A/en
Application granted granted Critical
Publication of CN112714016B publication Critical patent/CN112714016B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/12 - Discovery or management of network topologies
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004 - Server selection for load balancing
    • H04L67/1008 - Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Supply And Distribution Of Alternating Current (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The embodiment of the specification discloses an electric power Internet of things big data edge analysis method. The method comprises the following steps: a data acquisition system monitors the data sources of terminal devices and aggregates and forwards stream data to a buffer system; the buffer system buffers the stream data and then sends it to a computing system; the computing system performs block layout optimization on each network node and performs edge computation on the stream data; and the computing system stores the edge-processed stream data. The method improves the topological structure of the network side and constructs an edge architecture based on HDFS block layout optimization: streaming data is computed in the edge cluster and the results are fed back to the Internet of things devices, so that the computing capability for streaming data is moved from the traditional cloud computing center to the network edge and end-to-end delay is effectively reduced while the throughput requirement is still met.

Description

Electric power Internet of things big data edge analysis method
Technical Field
The application relates to the technical fields of the Internet of things and computers, and in particular to a big data edge analysis method for the electric power Internet of things.
Background
Thousands of Internet of things devices deployed in the power industry generate a large amount of data. This data volume means that Internet of things applications demand greater big data processing capability and faster response times. In previous years the usual solution was to transmit the big data to a cloud computing platform; although this approach does provide the computation the Internet of things applications need, as the data volume grows the network load increases sharply, the network may become congested, and processing incurs a noticeable delay.
Disclosure of Invention
In view of the above, the embodiments of the present application provide an electric power Internet of things big data edge analysis method that effectively reduces end-to-end delay while meeting the throughput requirement.
In order to solve the above technical problems, the embodiments of the present specification are implemented as follows:
the embodiment of the specification provides a big data edge analysis method of an electric power internet of things, which comprises the following steps:
the data acquisition system monitors a data source of the terminal equipment and gathers and forwards stream data to the buffer system;
the buffer system buffers the stream data and then sends the stream data to a computing system;
the computing system performs block layout optimization on each network node and performs edge computation on the stream data;
the computing system stores the edge-processed stream data.
Optionally, the data acquisition system is a distributed log aggregation system. It monitors and receives data from the terminal devices and forwards it onward, and during aggregation and forwarding it preprocesses the acquired data according to feedback information, so as to achieve preliminary data screening.
Optionally, the buffer system generates feedback information through an intelligent optimization algorithm according to the usage of the stream data and returns it to the data acquisition system; the data acquisition system adjusts the data acquisition frequency based on the feedback information, forming a positive-feedback virtuous circle.
Optionally, the edge computation specifically includes: the edge device performs edge prediction of the transmission efficiency of the streaming data and, according to the prediction result, coordinates the task allocation across the edge nodes so that no edge node has to bear an excessive computing load.
Optionally, the edge computation adopts a Storm framework, and the result of the edge computation has three transmission directions: first, it is transmitted upward to a cloud server for further computation; second, it is returned to the terminal device, which acts on the information; third, it is transmitted directly to a remote management platform, bypassing the cloud server, where an operator issues an instruction through the remote management platform and the instruction reaches the terminal device via an edge node.
Optionally, the block layout optimization specifically includes:
Constructing an edge cloud topological structure: assume that there are two special vertices in the Internet of things, one representing the central vertex of the Internet of things and the other the client vertex accessing the HDFS data. Two types of clusters are set, data centers and edge clusters, and each cluster comprises at least one gateway node providing connectivity between the cluster and the network;
the clusters are sorted according to their utilization rate and selected in order of decreasing reliability.
Optionally, the storing the stream data after edge processing specifically includes:
The deep learning framework is used to store or process the edge computation results in different ways: it judges whether a result should be stored or forwarded and, if forwarded, whether it should be transmitted to the remote management platform or the cloud server; it can also judge whether an instruction should be sent to the terminal device, thereby sharing the cloud server's workload.
Optionally, the computing system includes: cloud server and edge device;
The computing system performs block layout optimization on each network node, and performs edge computation on the stream data, and specifically includes:
The edge device performs partial edge computation on the stream data, and the partial computation results together with part of the stream data are transmitted to the cloud server, where the stream data are recomputed.
Optionally, the edge device and the cloud server adopt the following communication modes: cloud communication, local area network communication, Wi-Fi communication, Bluetooth communication, cellular network communication, ZigBee data communication or wide area network communication.
Optionally, the terminal device denotes the device body used in the Internet of things architecture to manage data or sense operating states; it can monitor, collect or sense the operating state of power grid equipment, is responsible for collecting stream data, and aggregates the collected data and transmits it to the edge devices. The edge device refers to a computing gateway at the edge of the power equipment or network data source, which can perform edge computation and processing on the data nearby.
At least one of the technical schemes adopted in the embodiments of this specification can achieve the following beneficial effect:
The method improves the topological structure of the network side and constructs an edge architecture based on HDFS block layout optimization: streaming data is computed in the edge cluster and the results are fed back to the Internet of things devices, so that the computing capability for streaming data is moved from the traditional cloud computing center to the network edge and end-to-end delay is effectively reduced while the throughput requirement is met.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
Fig. 1 is a schematic flow chart of a method for analyzing big data edges of an electric power internet of things according to an embodiment of the present disclosure;
fig. 2 is an overall architecture diagram of edge deployment of the internet of things based on a block layout according to an embodiment of the present disclosure;
FIG. 3 is a diagram of the electric power Internet of things edge analysis framework based on block layout optimization provided in an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a Storm cluster architecture according to an embodiment of the disclosure;
fig. 5 is a schematic diagram of an edge calculation implementation procedure according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Edge computation refers to processing and analyzing data at network edge nodes. An edge node is any node between a data generation source and a cloud server that has computing resources and network resources. Through edge computation, request response time can be reduced, network bandwidth consumption can be lowered, and data security and privacy can be better protected.
The method improves the topological structure of the network side and constructs an edge architecture based on HDFS block layout optimization: stream data is computed in the edge cluster and the results are fed back to the Internet of things devices, so that the computing capability for stream data is moved from the traditional cloud computing center to the network edge and end-to-end delay is effectively reduced while the throughput requirement is met.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a method for analyzing big data edges of an electric power internet of things according to an embodiment of the present disclosure. From the program perspective, the execution subject of the flow may be a program or an application client that is installed on an application server.
As shown in fig. 1, the process may include the steps of:
step 110: the data acquisition system monitors a data source of the terminal equipment and gathers and forwards stream data to the buffer system;
step 120: the buffer system buffers the stream data and then sends the stream data to a computing system;
step 130: the computing system performs block layout optimization on each network node and performs edge computation on the stream data;
step 140: the computing system stores the edge-processed stream data.
Based on the method of fig. 1, the examples of the present specification also provide some specific implementations of the method, as described below.
The big data edge analysis method based on block layout adopts a 4-layer management architecture across the overall physical network deployment: a cloud management layer, a network channel management layer, an edge calculation layer and an edge node optimization layer. The overall architecture is shown in fig. 2.
(1) Cloud management layer: a number of cloud micro-servers can be deployed in the cloud server to receive stream data from the device side or network side of the power grid, and a remote management platform is responsible for sending instructions to the terminal devices and for managing the edge devices and terminal devices.
(2) Network channel management layer: its function is to transmit stream data; data communication between the edge devices and the cloud server is realized through a data channel.
(3) Edge calculation layer: this layer includes the edge devices, the edge computation and the terminal devices. The terminal devices manage data or sense operating states in the Internet of things architecture; they can monitor, collect or sense the operating state of power grid equipment, are responsible for collecting stream data, and aggregate the collected data and transmit it to the edge devices. Edge devices are computing gateways at the edge of the power equipment or network data sources that can perform edge computation and processing on the data nearby. Edge computation distributes computing tasks to the edge nodes; because the streaming data is real-time, the edge device performs edge prediction of the streaming data's transmission efficiency and, according to the prediction result, coordinates the task allocation across the edge nodes so that no edge node has to bear an excessive computing load (a rough sketch of this idea is given after this list of layers). Edge computation mainly uses the Storm framework. The edge computation result has three transmission directions: first, it is transmitted upward to the cloud server for further computation; second, it is returned to the terminal device, which acts on the information; third, it is transmitted directly to the remote management platform, bypassing the cloud server, where an operator issues an instruction through the remote management platform and the instruction reaches the terminal device via an edge node.
(4) Edge node optimization layer: the electric power Internet of things connects a very large number of devices, and when computing nodes and networks are interconnected to form edge nodes, directly connecting and aggregating the nodes degrades the overall network transmission performance. Edge node optimization is therefore needed: an edge cloud topological structure is constructed to host the servers carrying the HDFS blocks, which improves the availability and optimality of HDFS.
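Before turning to the five-process framework, the load-balancing idea described in the edge calculation layer (3) above can be pictured with a short Python sketch: each edge node's transmission efficiency is predicted with a simple moving average, and an incoming task is assigned to the best-performing node that still has spare capacity. The node names, capacities and the moving-average predictor are illustrative assumptions, not taken from the patent.

    from collections import deque

    class EdgeNode:
        """Illustrative edge node with a capacity limit and a short history of observed throughput."""
        def __init__(self, name, capacity):
            self.name = name
            self.capacity = capacity          # maximum task load this node should carry
            self.load = 0                     # currently assigned load
            self.history = deque(maxlen=5)    # recent throughput samples (MB/s)

        def record_throughput(self, mbps):
            self.history.append(mbps)

        def predicted_efficiency(self):
            # Simple moving-average "edge prediction" of transmission efficiency.
            return sum(self.history) / len(self.history) if self.history else 0.0

    def assign_task(nodes, task_load):
        """Place a task on the node with the best predicted efficiency that still has spare capacity."""
        candidates = [n for n in nodes if n.load + task_load <= n.capacity]
        if not candidates:
            return None  # no edge node can take the task; defer it or send it to the cloud
        best = max(candidates, key=lambda n: n.predicted_efficiency())
        best.load += task_load
        return best.name

    # Example: three hypothetical edge nodes with different observed throughputs.
    nodes = [EdgeNode("edge-1", 10), EdgeNode("edge-2", 8), EdgeNode("edge-3", 12)]
    for n, samples in zip(nodes, [[40, 42], [55, 50], [30, 28]]):
        for s in samples:
            n.record_throughput(s)
    print(assign_task(nodes, task_load=3))   # picks "edge-2", the node with the highest predicted efficiency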
The edge computing design framework based on block layout optimization mainly comprises five processes: data acquisition, data access, block layout optimization, edge computation and data storage, as shown in fig. 3. The framework is based on a log-processing model: first the acquisition system monitors the data sources, then the data is aggregated and forwarded to the buffer system. The buffer system buffers the data on its way to the computing system and coordinates the data collection rate with the computing system's processing rate. Edge computation mainly performs block layout optimization on each network node and computes the stream data, and an in-memory database can be used to exchange data. The computation results are analyzed by the deep learning framework, which decides whether they should be stored or transmitted.
The individual processes are analyzed as follows:
(1) Data acquisition
The streaming data in the power grid mainly comes from smart meters, PMUs and various sensors; it is complex in type, large in scale and real-time. The acquisition system is a distributed, reliable and highly available system for aggregating massive logs. It can monitor and receive data from clients and forward it onward; when a node fails, the log files are transmitted to other nodes rather than being lost, which guarantees data integrity. At the same time, during aggregation and forwarding the acquired data is preprocessed according to the feedback information, achieving preliminary data screening.
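The failover behaviour described above can be illustrated with a minimal Python sketch: a collector screens a batch according to the feedback information, tries its primary aggregation node, and falls back to the remaining nodes if that node is unreachable, so the records are not lost. The node addresses, the send stub and the screening rule are assumptions for illustration only.

    # Hypothetical aggregation nodes; a real deployment would point at the log-aggregation agents.
    AGGREGATION_NODES = ["edge-agg-1:4141", "edge-agg-2:4141", "edge-agg-3:4141"]
    DOWN = {"edge-agg-1:4141"}          # pretend the first node has failed

    def send(node, batch):
        # Stand-in for a network send; raises when the target node is unreachable.
        if node in DOWN:
            raise ConnectionError(node + " unreachable")
        print("forwarded %d records to %s" % (len(batch), node))
        return True

    def screen(record, feedback):
        # Preliminary screening driven by feedback, e.g. drop readings below a reporting threshold.
        return record["value"] >= feedback.get("min_value", 0)

    def forward_with_failover(batch, feedback):
        batch = [r for r in batch if screen(r, feedback)]   # preprocessing before forwarding
        for node in AGGREGATION_NODES:                      # try nodes in order; skip failed ones
            try:
                return send(node, batch)
            except ConnectionError:
                continue
        raise RuntimeError("all aggregation nodes unavailable; batch kept for retry")

    forward_with_failover([{"value": 3.2}, {"value": 0.1}], feedback={"min_value": 1.0})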
(2) Data access
The speed of data collection and the speed of data processing are not necessarily synchronized, so the data needs to be stored and cached. Since an edge device does not have the large storage space of a cloud server, the cache hit rate of the edge device must be increased. For the collected data, after the stream computation an intelligent optimization algorithm generates feedback information from the usage of the streaming data and returns it to the acquisition system; the acquisition system adjusts its data collection based on this feedback, forming a positive-feedback virtuous circle.
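A minimal sketch of this feedback loop follows, under the assumption that the "usage of the streaming data" is summarized by the buffer's fill ratio: when the buffer runs hot the acquisition interval is lengthened, and when it runs nearly empty the interval is shortened. The thresholds and scaling factor are illustrative, not taken from the patent.

    def adjust_acquisition_interval(current_interval_s, buffer_fill_ratio,
                                    low=0.2, high=0.8, step=1.5,
                                    min_s=0.1, max_s=10.0):
        """Return a new sampling interval based on how full the stream buffer is."""
        if buffer_fill_ratio > high:          # consumer falling behind: slow the producers down
            new_interval = current_interval_s * step
        elif buffer_fill_ratio < low:         # plenty of headroom: sample more often
            new_interval = current_interval_s / step
        else:
            new_interval = current_interval_s
        return max(min_s, min(max_s, new_interval))

    # The buffer system would compute this and send it back to the acquisition system.
    print(adjust_acquisition_interval(1.0, buffer_fill_ratio=0.9))   # 1.5 -> collect less often
    print(adjust_acquisition_interval(1.0, buffer_fill_ratio=0.1))   # ~0.67 -> collect more often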
(3) Block layout optimization
To optimize the edge nodes, an edge cloud topology is constructed. Assume that there are two special vertices in the Internet of things: one represents the central vertex of the Internet of things and the other is the client vertex accessing the HDFS data. Servers are grouped into server architectures, which are further grouped into clusters. Two types of clusters are set: data centers and edge clusters. Each cluster contains at least one gateway node that provides connectivity between the cluster and the network. Each edge connecting nodes in the network is associated with a positive threshold value representing the probability that the edge fails within a given time frame; in general, the connections are considered independent of each other. From the threshold values of the edges along a path, a failure value of the path between any two points, i.e. the probability that the path breaks, can be computed.
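Under the stated independence assumption, the failure value of a path is one minus the product of the per-edge survival probabilities. The sketch below applies this to a hypothetical three-edge path; the concrete probabilities are examples only.

    from math import prod

    def path_failure_probability(edge_failure_probs):
        """Probability that a path breaks, given independent per-edge failure probabilities."""
        survival = prod(1.0 - p for p in edge_failure_probs)
        return 1.0 - survival

    # Hypothetical path: client -> edge gateway -> data-center gateway, with per-edge failure rates.
    print(path_failure_probability([0.01, 0.05, 0.02]))   # approximately 0.078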
The block layout optimization algorithm comprises the following steps:
First, the clusters are sorted according to their utilization rate and selected in order of decreasing reliability.
Then, if the replication factor is less than 3, one copy is placed in the cluster with the lowest utilization and the other copy in the cluster with the second-lowest utilization. If the replication factor is greater than or equal to 3, the first two replicas are placed according to a policy that prioritizes data centers over edge clouds; the placement principle is as follows: 1) placed in two architectures of a data center with available space; 2) placed in two separate data centers; 3) placed in two separate clusters; 4) placed in the cluster with the lowest utilization.
The block layout optimization algorithm is realized as follows:
Algorithm 1: cluster selection for HDFS block layout optimization
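The listing of Algorithm 1 is not reproduced here. The following Python sketch merely restates the placement rules described above (sort clusters by utilization, prefer higher reliability, place replicas in the least-used clusters, and prioritize data centers when the replication factor is at least 3); the cluster attributes and tie-breaking details are assumptions, not the patent's exact algorithm.

    def select_clusters(clusters, replication_factor):
        """Pick target clusters for block replicas following the rules described above.

        Each cluster is a dict like {"name": ..., "utilization": 0.4,
                                     "reliability": 0.99, "is_data_center": True}.
        """
        # Sort by utilization (ascending), breaking ties by decreasing reliability.
        ordered = sorted(clusters, key=lambda c: (c["utilization"], -c["reliability"]))

        if replication_factor < 3:
            # One copy in the least-utilized cluster, one in the second least-utilized.
            return [c["name"] for c in ordered[:replication_factor]]

        # Replication factor >= 3: place the first two replicas with data centers preferred,
        # then fall back to the least-utilized remaining clusters.
        data_centers = [c for c in ordered if c["is_data_center"]]
        preferred = data_centers[:2] if len(data_centers) >= 2 else ordered[:2]
        rest = [c for c in ordered if c not in preferred]
        return [c["name"] for c in preferred + rest][:replication_factor]

    clusters = [
        {"name": "dc-1",   "utilization": 0.55, "reliability": 0.999, "is_data_center": True},
        {"name": "edge-a", "utilization": 0.20, "reliability": 0.95,  "is_data_center": False},
        {"name": "dc-2",   "utilization": 0.40, "reliability": 0.999, "is_data_center": True},
        {"name": "edge-b", "utilization": 0.35, "reliability": 0.97,  "is_data_center": False},
    ]
    print(select_clusters(clusters, replication_factor=2))   # ['edge-a', 'edge-b']
    print(select_clusters(clusters, replication_factor=3))   # ['dc-2', 'dc-1', 'edge-a']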
(4) Edge computation
Storm is a standalone distributed streaming data processing framework designed to handle large volumes of streaming data in a fault-tolerant and horizontally scalable way, with a very high data ingestion rate. Although Storm is stateless, it can manage the distributed environment and cluster state through Apache ZooKeeper. Inside Storm there is one master control node and several working nodes. The master node is responsible for distributing code within the cluster, assigning computing tasks to machines and monitoring the cluster status. Each working node supervises the worker processes assigned to its machine by the master control node, starting or stopping them as required. A worker process consists of several executors, and each executor corresponds to one or more tasks. To ensure the stability of the Storm system, ZooKeeper is introduced to coordinate task scheduling and allocation between the master control node and the working nodes. The working architecture is shown in fig. 4.
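To make the division of labour between stream components concrete without relying on the Storm API, the following is a thread-and-queue analogue in Python: a "spout" thread emits measurements into a queue that stands in for the tuple stream, and a "bolt" thread consumes and aggregates them. This is a conceptual sketch only; an actual deployment would define a Storm topology and run it on the master and working nodes described above.

    import queue, threading

    stream = queue.Queue(maxsize=100)         # stands in for the tuple stream between components
    SENTINEL = None

    def spout(samples):
        """Emit raw measurements into the stream (the role of a Storm spout)."""
        for s in samples:
            stream.put(s)
        stream.put(SENTINEL)

    def bolt():
        """Consume measurements and keep a running maximum per meter (the role of a Storm bolt)."""
        peaks = {}
        while True:
            item = stream.get()
            if item is SENTINEL:
                break
            meter, value = item
            peaks[meter] = max(value, peaks.get(meter, float("-inf")))
        print("per-meter peak load:", peaks)

    samples = [("meter-1", 3.2), ("meter-2", 4.1), ("meter-1", 5.0)]
    producer = threading.Thread(target=spout, args=(samples,))
    consumer = threading.Thread(target=bolt)
    producer.start(); consumer.start()
    producer.join(); consumer.join()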
(5) Data output and storage. After the edge computation results are obtained, they are stored or processed in different ways using a deep learning framework, according to the different requirements of the monitoring data in the power grid, such as anomaly detection, abnormal electricity usage analysis, electricity consumption behavior analysis and short-term load forecasting. The deep learning framework determines whether a result should be stored or forwarded and, if forwarded, whether it should be transmitted to the remote management platform or to the cloud server. It can also judge whether an instruction should be sent to the terminal device, thereby sharing the cloud server's workload.
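The store-or-forward decision can be sketched as a small dispatcher. The classifier call, the label names and the destinations below are placeholders standing in for whatever the deep learning framework actually outputs, since the text does not specify them.

    def dispatch(result, classify):
        """Route one edge-computation result according to a classifier's decision.

        `classify` is a placeholder for the deep learning framework; it is assumed to
        return one of: "store", "to_platform", "to_cloud", "instruct_device".
        """
        decision = classify(result)
        if decision == "store":
            return ("local_store", result)
        if decision == "to_platform":
            return ("remote_management_platform", result)
        if decision == "to_cloud":
            return ("cloud_server", result)
        if decision == "instruct_device":
            return ("terminal_device", {"command": "adjust", "basis": result})
        raise ValueError("unknown decision: " + str(decision))

    # Toy classifier: treat anything flagged as anomalous as worth forwarding to the platform.
    toy_classify = lambda r: "to_platform" if r.get("anomaly") else "store"
    print(dispatch({"meter": "meter-7", "anomaly": True}, toy_classify))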
3. Electric power Internet of things big data edge analysis implementation process based on block layout optimization
For the edge deployment architecture based on block layout optimization, each layer has its own implementation.
(1) Edge calculation layer: the core of the edge device layer is the implementation of the edge device. Due to the different geographical locations of the terminal equipment distribution, when edge calculation is performed, the distributed terminals are calculated through mobile edge calculation (mobile edge computing, MEC). And through MEC technology, network control, calculation and storage are carried out on the running state of the terminal equipment at the network edge, and regional barriers are crossed. In practical application, the MEC node is usually arranged near a large base station or a wireless network controller, so that the node is also positioned in a wireless area network used by a user, and data transmission in the network is more convenient. The edge calculation implementation is shown in fig. 5.
The edge computing equipment in this layer comprises an acquisition unit that collects various kinds of information from the low-voltage distribution network and a computing unit that performs computation on the collected distribution network information. The microcontroller of the acquisition unit is an ATMega P with multiple interfaces, more specifically 14 GPIO interfaces, 6 PWM interfaces, 12-bit ADC interfaces, a UART serial port, 1 SPI interface and 1 I2C interface. The device is connected to the controller in serial mode through an external module such as ZigBee or an RS485-RS232 adapter. The computing unit uses a quad-core BCM2837 running at 1.2 GHz; its processor core board is 64-bit ARMv8, its external interfaces are BCM43143 WiFi and a low-power Bluetooth interface, and it provides 40 I/O channels, 4 USB ports, 1 Ethernet port, 1 HDMI port and so on. The processor also runs an embedded Linux system and has outstanding data processing capability.
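As a small illustration of how the computing unit could read frames from the acquisition unit over the serial link, the sketch below uses the pyserial library; the port name, baud rate and frame format are assumptions that depend on the external module actually wired in, and the code needs the physical device present to run.

    import serial  # pyserial; reads the acquisition unit's frames over the serial link

    # Port name and baud rate are assumptions for illustration; they depend on the external
    # module (ZigBee or RS485-RS232 adapter) actually connected to the computing unit.
    with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1.0) as port:
        raw = port.readline()                       # one measurement frame, e.g. b"meter-1,3.2\n"
        if raw:
            meter, value = raw.decode().strip().split(",")
            print(meter, float(value))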
(2) Cloud management layer implementation: a cloud server is deployed to recompute or store the stream data. The edge computing layer performs edge computation on part of the stream data, but the edge device layer has only partial computing capability, so part of the computation results and part of the stream data are transmitted through the informationized management layer into the cloud management layer, where the data are recomputed.
(3) Informationized management layer implementation: the informationized management layer provides various kinds of communication, such as cloud communication, local area network communication, Wi-Fi communication, Bluetooth communication, cellular network communication, ZigBee data communication and wide area network communication. Through these different forms of data communication, access with multiple communication interfaces can be realized, giving stronger compatibility.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (1)

1. An electric power Internet of things big data edge analysis method, characterized by comprising the following steps:
the data acquisition system monitors a data source of the terminal equipment and gathers and forwards stream data to the buffer system;
the buffer system buffers the stream data and then sends the stream data to a computing system;
the computing system performs block layout optimization on each network node and performs edge computation on the stream data;
the computing system stores the stream data after edge processing;
the data acquisition system is a distributed log aggregation system that monitors and receives data from the terminal devices and forwards it onward, and during aggregation and forwarding it preprocesses the acquired data according to feedback information to achieve preliminary data screening;
the buffer system generates feedback information through an intelligent optimization algorithm according to the usage of the stream data and returns it to the data acquisition system, and the data acquisition system adjusts the data acquisition frequency based on the feedback information, forming a positive-feedback virtuous circle;
the edge computation specifically comprises: the edge device performs edge prediction of the transmission efficiency of the streaming data and, according to the prediction result, coordinates the task allocation across the edge nodes so that no edge node has to bear an excessive computing load;
the edge computation adopts a Storm framework, and the result of the edge computation has three transmission directions: first, it is transmitted upward to a cloud server for further computation; second, it is returned to the terminal device, which acts on the information; third, it is transmitted directly to a remote management platform, bypassing the cloud server, where an operator issues an instruction through the remote management platform and the instruction reaches the terminal device via an edge node;
The block layout optimization specifically comprises the following steps:
constructing an edge cloud topological structure: assume that there are two special vertices in the Internet of things, one representing the central vertex of the Internet of things and the other the client vertex accessing the HDFS data; two types of clusters are set, data centers and edge clusters, and each cluster comprises at least one gateway node providing connectivity between the cluster and the network;
each edge connecting nodes in the network is associated with a positive threshold value representing the probability that the edge fails within a given time frame; from the threshold values of the edges along a path, a failure value of the path between any two points, i.e. the probability that the path breaks, is computed;
the block layout optimization algorithm comprises the following steps: first, the clusters are sorted according to their utilization rate and selected in order of decreasing reliability;
then, if the replication factor is less than 3, one copy is placed in the cluster with the lowest utilization and the other copy in the cluster with the second-lowest utilization; if the replication factor is greater than or equal to 3, the first two replicas are placed according to a policy that prioritizes data centers over edge clouds, and the placement principle is as follows: 1) placed in two architectures of a data center with available space; 2) placed in two separate data centers; 3) placed in two separate clusters; 4) placed in the cluster with the lowest utilization;
the storing of the edge-processed stream data specifically includes:
the deep learning framework is used to store or process the edge computation results in different ways: it judges whether a result should be stored or forwarded and, if forwarded, whether it should be transmitted to the remote management platform or the cloud server, and it can also judge whether an instruction should be sent to the terminal device, thereby sharing the cloud server's workload;
the computing system includes: cloud server and edge device;
The computing system performs block layout optimization on each network node, and performs edge computation on the stream data, and specifically includes:
the edge device performs partial edge computation on the stream data, and the partial computation results together with part of the stream data are transmitted to the cloud server, where the stream data are recomputed;
the edge device and the cloud server adopt the following communication modes: cloud communication, local area network communication, Wi-Fi communication, Bluetooth communication, cellular network communication, ZigBee data communication or wide area network communication;
the terminal device is used in the Internet of things architecture to manage data or sense operating states; it can monitor, collect or sense the operating state of power grid equipment, is responsible for collecting stream data, and aggregates and forwards the collected data to the edge devices; the edge device refers to a computing gateway at the edge of the power equipment or network data source, which can perform edge computation and processing on the data nearby.
CN202011559413.5A 2020-12-25 2020-12-25 Electric power Internet of things big data edge analysis method Active CN112714016B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011559413.5A CN112714016B (en) 2020-12-25 2020-12-25 Electric power Internet of things big data edge analysis method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011559413.5A CN112714016B (en) 2020-12-25 2020-12-25 Electric power Internet of things big data edge analysis method

Publications (2)

Publication Number Publication Date
CN112714016A CN112714016A (en) 2021-04-27
CN112714016B (en) 2024-09-27

Family

ID=75546170

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011559413.5A Active CN112714016B (en) 2020-12-25 2020-12-25 Electric power Internet of things big data edge analysis method

Country Status (1)

Country Link
CN (1) CN112714016B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115955406A (en) * 2022-12-08 2023-04-11 山东鲁软数字科技有限公司 Power grid model self-management method and system based on edge computing framework

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111212106A (en) * 2019-12-09 2020-05-29 中国科学院计算机网络信息中心 Edge computing task processing and scheduling method and device in industrial internet environment
CN112015718A (en) * 2020-08-25 2020-12-01 阳光保险集团股份有限公司 HBase cluster balancing method and device, electronic equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9848041B2 (en) * 2015-05-01 2017-12-19 Amazon Technologies, Inc. Automatic scaling of resource instance groups within compute clusters
CN110377577B (en) * 2018-04-11 2022-03-04 北京嘀嘀无限科技发展有限公司 Data synchronization method, device, system and computer readable storage medium
CN110719209B (en) * 2019-10-31 2022-06-10 北京浪潮数据技术有限公司 Cluster network configuration method, system, equipment and readable storage medium
CN112073461B (en) * 2020-08-05 2022-08-12 烽火通信科技股份有限公司 Industrial Internet system based on cloud edge cooperation
CN111884347B (en) * 2020-08-28 2021-07-13 国网山东省电力公司郯城县供电公司 Power data centralized control system for multi-source power information fusion

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111212106A (en) * 2019-12-09 2020-05-29 中国科学院计算机网络信息中心 Edge computing task processing and scheduling method and device in industrial internet environment
CN112015718A (en) * 2020-08-25 2020-12-01 阳光保险集团股份有限公司 HBase cluster balancing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112714016A (en) 2021-04-27

Similar Documents

Publication Publication Date Title
CN105205231B (en) A kind of power distribution network Digital Simulation System based on DCOM
CN103152393B (en) A kind of charging method of cloud computing and charge system
CN102904794A (en) Method and device for mapping virtual network
CN104038540A (en) Method and system for automatically selecting application proxy server
Munir et al. Intelligent service fulfillment for software defined networks in smart city
Liu et al. A survey on virtual machine scheduling in cloud computing
WO2020119060A1 (en) Method and system for scheduling container resources, server, and computer readable storage medium
Kumar et al. Design and implementation of fault tolerance technique for internet of things (iot)
CN110717664A (en) CPS production system for service-oriented production process based on mobile edge calculation
CN113157459A (en) Load information processing method and system based on cloud service
Ping Load balancing algorithms for big data flow classification based on heterogeneous computing in software definition networks
Qiu et al. A packet buffer evaluation method exploiting queueing theory for wireless sensor networks
CN105553872A (en) Multipath data traffic load equalizing method
CN102510403B (en) Receive and the cluster distributed system and method for real-time analysis for vehicle data
CN112714016B (en) Electric power Internet of things big data edge analysis method
Ali et al. Probabilistic normed load monitoring in large scale distributed systems using mobile agents
Chatziliadis et al. Efficient Placement of Decomposable Aggregation Functions for Stream Processing over Large Geo-Distributed Topologies
Wei et al. SDN-based multi-controller optimization deployment strategy for satellite network
Mohamed et al. Dynamic resource allocation in cloud computing based on software-defined networking framework
Fang et al. Latency aware online tasks scheduling policy for edge computing system
CN103246497A (en) Real-time parallel data processing method based on data partitioning
Ning et al. Research on distributed computing method for coordinated cooperation of distributed energy and multi-devices
Liu et al. An adaptive failure recovery mechanism based on asymmetric routing for data center networks
Hussain et al. Computational viability of fog methodologies in IoT enabled smart city architectures-a smart grid case study
CN118381788B (en) Modularized upgrading method, medium and electronic device for electric energy meter

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant