
CN118349363A - Data processing method and system based on lightweight data center - Google Patents

Data processing method and system based on lightweight data center

Info

Publication number
CN118349363A
CN118349363A
Authority
CN
China
Prior art keywords
data
data set
feature
connection
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410776423.6A
Other languages
Chinese (zh)
Inventor
柳哲宇
欧江平
杨圣
陈兴望
蒋鑫
王焜阳
李红见
罗迅
陈宇
彭海棣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunjing Cultural Tourism Technology Co ltd
Original Assignee
Yunjing Cultural Tourism Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunjing Cultural Tourism Technology Co ltd filed Critical Yunjing Cultural Tourism Technology Co ltd
Priority to CN202410776423.6A priority Critical patent/CN118349363A/en
Publication of CN118349363A publication Critical patent/CN118349363A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/505 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/10 Pre-processing; Data cleansing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/602 Providing cryptographic facilities or services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5066 Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5072 Grid computing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of data processing and discloses a data processing method and system based on a lightweight data center, used to improve the efficiency and accuracy of data processing in such a setting. The method comprises the following steps: collecting raw data from a plurality of data sources to obtain a heterogeneous data set, and preprocessing the heterogeneous data set to obtain a processed data set; transmitting the processed data set to the cloud, performing feature analysis on it there to obtain feature description data, and performing decision strategy analysis on the feature description data to obtain decision strategy data; and matching terminal devices related to the decision strategy data to obtain a terminal device set, matching a set of edge computing nodes according to the terminal device set, and transmitting the decision strategy data to each terminal device in the terminal device set through the edge computing node set.

Description

Data processing method and system based on lightweight data center
Technical Field
The invention relates to the technical field of data processing, in particular to a data processing method and system based on a lightweight data center.
Background
In the development of smart cities, the combination of edge computing and cloud computing greatly eases data processing. Edge computing nodes can perform preliminary processing near the data source to reduce transmission delay and bandwidth consumption, while cloud computing offers powerful compute and storage capacity for deep analysis of large volumes of data. This edge-cloud cooperative model plays an important role in smart-city application scenarios such as traffic management and environmental monitoring.
However, when existing combined edge-cloud methods process multi-modal data, they face problems such as low data transmission efficiency, insufficient data security, and uneven allocation of computing resources. Bandwidth limits and latency remain issues during transmission, especially in large-scale applications. Existing methods lack effective compression and encryption strategies for data, which makes leakage likely. In addition, the scheduling and load-balancing mechanisms for edge computing nodes and cloud computing resources are incomplete, leading to low resource utilization and unstable system performance.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a data processing method and system based on a lightweight data center, used to improve the efficiency and accuracy of data processing in such a setting.
The invention provides a data processing method based on a lightweight data center, comprising the following steps: collecting raw data from a plurality of data sources to obtain a heterogeneous data set, and preprocessing the heterogeneous data set to obtain a processed data set, which specifically comprises: collecting raw data from a plurality of data sources to obtain a heterogeneous data set, and performing data cleaning and missing-value filling on it to obtain a filled data set; encoding the filled data set to obtain an encoded data set; performing data type conversion on the encoded data set to obtain a converted data set; encoding the converted data set to obtain an encoded data set; performing data connection integration on the encoded data set to obtain an integrated data set, which specifically comprises: performing primary key identification on the encoded data set to obtain a corresponding primary key set, wherein each primary key comprises one or more of timestamps, device IDs, and geographic location data; performing connection-mode matching on the encoded data set according to the primary key set to obtain a connection-mode set comprising one or more of: inner join, left join, right join, full join, and cross join; joining the encoded data set through the connection-mode set and the primary key set to obtain a connected data set; eliminating redundant data from the connected data set to obtain a data set to be processed; performing data consistency verification on the data set to be processed, and generating the integrated data set when verification passes; and performing multi-modal feature alignment on the integrated data set to obtain the processed data set;
transmitting the processed data set to the cloud, performing feature analysis on it there to obtain feature description data, and performing decision strategy analysis on the feature description data to obtain decision strategy data;
and matching terminal devices related to the decision strategy data to obtain a terminal device set, matching a set of edge computing nodes according to the terminal device set, and transmitting the decision strategy data to each terminal device in the terminal device set through the edge computing node set.
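To make the three-stage flow above concrete, here is a minimal, hypothetical sketch in Python; all function names, record fields, and the toy decision rule are illustrative assumptions, not part of the patent.

```python
# Illustrative sketch of the three stages: preprocess -> cloud analysis -> dispatch.
# Every name and threshold here is a hypothetical stand-in.

def preprocess(raw_records):
    """Stage 1: clean and convert heterogeneous raw data (processed data set)."""
    cleaned = [r for r in raw_records if r.get("value") is not None]  # drop missing
    for r in cleaned:
        r["value"] = float(r["value"])  # data type conversion
    return cleaned

def analyze_in_cloud(processed):
    """Stage 2: feature analysis then decision strategy analysis (toy rule)."""
    mean_value = sum(r["value"] for r in processed) / len(processed)
    features = {"mean": mean_value}                      # feature description data
    return {"action": "throttle" if features["mean"] > 0.5 else "idle"}

def dispatch(strategy, devices, edge_nodes):
    """Stage 3: route the decision strategy to devices via edge computing nodes."""
    return {d: (edge_nodes[i % len(edge_nodes)], strategy)
            for i, d in enumerate(devices)}

raw = [{"value": "0.2"}, {"value": None}, {"value": "0.9"}]
processed = preprocess(raw)
strategy = analyze_in_cloud(processed)
routes = dispatch(strategy, ["dev-a", "dev-b"], ["edge-1"])
```

The three functions mirror the three claim steps; in a real system each would wrap substantial subsystems.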
In the present invention, the step of encoding the converted data set to obtain an encoded data set comprises:
extracting variable data from the converted data set to separate variable data from non-variable data;
performing numerical encoding and unit conversion on the variable data to obtain first encoded data;
and hash-encoding the non-variable data to obtain second encoded data, then combining the first and second encoded data into the encoded data set.
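A minimal sketch of the two-branch encoding step above, assuming Python dictionaries as records; the field names, the unit-conversion factors, and the use of SHA-256 truncated to 32 bits for hash encoding are illustrative choices, not specified by the patent.

```python
import hashlib

def encode_record(record, unit_factors):
    """Split a record into variable (numeric) and non-variable fields,
    numerically encode the former and hash-encode the latter."""
    first, second = {}, {}
    for key, value in record.items():
        if isinstance(value, (int, float)):
            # numerical encoding with unit conversion (e.g. cm -> m)
            first[key] = value * unit_factors.get(key, 1.0)
        else:
            # hash encoding: a stable numeric token for categorical data
            digest = hashlib.sha256(str(value).encode("utf-8")).hexdigest()
            second[key] = int(digest[:8], 16)
    return {**first, **second}  # first and second encoded data combined

encoded = encode_record({"temp_c": 21.5, "device": "cam-07"},
                        unit_factors={"temp_c": 1.0})
```

Hash encoding is deterministic, so the same categorical value always maps to the same token across data sources.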
In the invention, the steps of transmitting the processed data set to the cloud, performing feature analysis there to obtain feature description data, and performing decision strategy analysis on the feature description data to obtain decision strategy data comprise:
performing compression ratio matching on the processed data set to obtain a target compression ratio;
compressing the processed data set according to the target compression ratio to obtain a compressed data set;
encrypting the compressed data set with a symmetric encryption algorithm to obtain an encrypted data set, and transmitting the encrypted data set to the cloud;
and performing feature analysis on the processed data set in the cloud to obtain feature description data, and performing decision strategy analysis on the feature description data to obtain decision strategy data.
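The compress-then-encrypt steps above can be sketched as follows. This is an assumption-laden illustration: zlib stands in for the compression codec, "compression ratio matching" is read as picking the level with the smallest output, and the keystream cipher is a stdlib-only placeholder that is NOT real cryptography (the description names AES, which would come from a dedicated crypto library).

```python
import hashlib
import itertools
import zlib

def match_compression_level(data: bytes, levels=(1, 6, 9)) -> int:
    """Pick the zlib level yielding the smallest output (the 'target ratio')."""
    return min(levels, key=lambda lv: len(zlib.compress(data, lv)))

def keystream(key: bytes, n: int) -> bytes:
    """Toy symmetric keystream from SHA-256 counters; NOT real crypto."""
    out = bytearray()
    for counter in itertools.count():
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        if len(out) >= n:
            return bytes(out[:n])

def compress_and_encrypt(data: bytes, key: bytes) -> bytes:
    level = match_compression_level(data)
    compressed = zlib.compress(data, level)
    return bytes(a ^ b for a, b in zip(compressed, keystream(key, len(compressed))))

def decrypt_and_decompress(blob: bytes, key: bytes) -> bytes:
    plain = bytes(a ^ b for a, b in zip(blob, keystream(key, len(blob))))
    return zlib.decompress(plain)

payload = b"sensor reading " * 100
key = b"shared-secret"   # hypothetical pre-shared key
blob = compress_and_encrypt(payload, key)
```

Compressing before encrypting matters: ciphertext is effectively incompressible, so the reverse order would forfeit the bandwidth savings the description targets.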
In the invention, the step of performing feature analysis on the processed data set in the cloud to obtain feature description data and performing decision strategy analysis on it to obtain decision strategy data comprises:
in the cloud, partitioning the processed data set into an image data set and a text data set;
feeding the image data set into a convolutional neural network for image feature extraction to obtain an image feature set;
feeding the text data set into a recurrent neural network for text feature extraction to obtain a text feature set;
feeding the image feature set and the text feature set into an attention mechanism for feature weight matching to obtain a first weight set for the image features and a second weight set for the text features;
performing multi-modal feature fusion of the image and text feature sets through the first and second weight sets to obtain multi-modal fusion features;
and generating the feature description data from the multi-modal fusion features, then performing decision strategy analysis on the feature description data to obtain decision strategy data.
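A toy illustration of the attention-weighted fusion step, using plain Python lists in place of CNN/RNN feature tensors; the softmax weighting and concatenation shown here are one plausible reading of "feature weight matching" and "multi-modal feature fusion", not the patent's exact mechanism.

```python
import math

def softmax(scores):
    """Turn raw attention scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(image_feats, text_feats, image_scores, text_scores):
    """Weight each modality's features by its attention weights, then
    concatenate into one multi-modal fusion feature vector."""
    w_img = softmax(image_scores)   # first weight set (image features)
    w_txt = softmax(text_scores)    # second weight set (text features)
    fused_img = [w * f for w, f in zip(w_img, image_feats)]
    fused_txt = [w * f for w, f in zip(w_txt, text_feats)]
    return fused_img + fused_txt

# Uniform scores -> each feature weighted 0.5 within its modality.
fused = fuse([1.0, 2.0], [3.0, 4.0], [0.0, 0.0], [0.0, 0.0])
```

In a real system the feature vectors would come from trained CNN and RNN encoders and the scores from learned attention parameters.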
In the invention, the step of matching the terminal devices related to the decision strategy data to obtain a terminal device set, matching a set of edge computing nodes according to the terminal device set, and transmitting the decision strategy data to each terminal device through the edge computing node set comprises:
performing execution-node matching on the decision strategy data to obtain an execution node set;
matching terminal devices related to the decision strategy data through the execution node set to obtain a terminal device set;
matching a set of edge computing nodes according to the terminal device set, and performing node state matching on each edge computing node in the set to obtain its current node state;
performing load analysis on each edge computing node according to its current state to obtain its load data;
and constructing a node load strategy from the load data of each edge computing node, then transmitting the decision strategy data to each terminal device in the terminal device set through the node load strategy and the edge computing node set.
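The node-state matching, load analysis, and load-strategy steps above might look like the following sketch; the node fields ("load", "reachable") and the least-loaded-first policy are illustrative assumptions, not the patent's specified scheduler.

```python
def build_load_policy(edge_nodes):
    """Order edge nodes by current load (a simple 'node load strategy')."""
    return sorted(edge_nodes, key=lambda n: n["load"])

def dispatch_strategy(strategy, devices, edge_nodes):
    """Deliver the decision strategy to each terminal device via the
    least-loaded edge node that can reach it."""
    plan = {}
    for device in devices:
        candidates = [n for n in build_load_policy(edge_nodes)
                      if device in n["reachable"]]
        node = candidates[0]        # least-loaded reachable node
        node["load"] += 1           # account for the newly assigned delivery
        plan[device] = (node["name"], strategy)
    return plan

nodes = [{"name": "edge-1", "load": 2, "reachable": {"dev-a", "dev-b"}},
         {"name": "edge-2", "load": 0, "reachable": {"dev-b"}}]
plan = dispatch_strategy({"action": "green-wave"}, ["dev-a", "dev-b"], nodes)
```

Because the load counter is updated per assignment, later devices see refreshed loads, giving the basic load-balancing behavior the description calls for.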
The invention also provides a data processing system based on the lightweight data center, comprising:
an acquisition module for collecting raw data from a plurality of data sources to obtain a heterogeneous data set, and preprocessing the heterogeneous data set to obtain a processed data set, specifically comprising: collecting raw data from a plurality of data sources to obtain a heterogeneous data set, and performing data cleaning and missing-value filling on it to obtain a filled data set; encoding the filled data set to obtain an encoded data set; performing data type conversion on the encoded data set to obtain a converted data set; encoding the converted data set to obtain an encoded data set; performing data connection integration on the encoded data set to obtain an integrated data set, specifically comprising: performing primary key identification on the encoded data set to obtain a corresponding primary key set, wherein each primary key comprises one or more of timestamps, device IDs, and geographic location data; performing connection-mode matching on the encoded data set according to the primary key set to obtain a connection-mode set comprising one or more of: inner join, left join, right join, full join, and cross join; joining the encoded data set through the connection-mode set and the primary key set to obtain a connected data set; eliminating redundant data from the connected data set to obtain a data set to be processed; performing data consistency verification on the data set to be processed, and generating the integrated data set when verification passes; and performing multi-modal feature alignment on the integrated data set to obtain the processed data set;
an analysis module for transmitting the processed data set to the cloud, performing feature analysis on it there to obtain feature description data, and performing decision strategy analysis on the feature description data to obtain decision strategy data;
and a matching module for matching the terminal devices related to the decision strategy data to obtain a terminal device set, matching a set of edge computing nodes according to the terminal device set, and transmitting the decision strategy data to each terminal device in the terminal device set through the edge computing node set.
According to the technical scheme provided by the invention, heterogeneous data sets can be effectively processed and integrated by collecting raw data from a plurality of data sources and performing data preprocessing, which includes data cleaning, missing-value filling, data encoding, data type conversion, and data connection integration. This systematic preprocessing not only improves the consistency and quality of the data but also lays a solid foundation for subsequent analysis. For cleaning and missing-value filling, techniques such as outlier detection, noise removal, and data interpolation safeguard the integrity and accuracy of the data; the encoding and type-conversion steps standardize data so that different sources can be seamlessly integrated, providing a uniform format for subsequent feature extraction and analysis; and data connection integration, via primary key identification and connection-mode matching, ensures efficient and consistent merging so that data of different modalities can be processed and analyzed within the same data set. During transmission to the cloud, compressing and encrypting the processed data set effectively improves transmission efficiency and security: compression techniques such as Gzip and LZ4 significantly reduce data volume and bandwidth requirements, increasing transmission speed, while encryption technologies such as the Advanced Encryption Standard (AES) and Transport Layer Security (TLS) preserve the privacy and integrity of data in transit and prevent leakage and tampering.
In this process, sensible settings for parameters such as the compression ratio and encryption key length further optimize transmission efficiency and security, so that large-scale transfers remain both efficient and reliable. The powerful computing resources of the cloud are then used for deep feature extraction and multi-modal feature alignment: deep learning models such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) extract high-quality feature representations from multi-modal data such as images and text. In image feature extraction, parameters such as the number of convolutional layers, filter sizes, and pooling strategy are tuned to ensure accurate extraction; in text feature extraction, sensible configuration of the number of hidden layers, hidden units, and time steps improves the representation of sequential data. Multi-modal feature alignment fuses the features of different modalities through positional and matching attention mechanisms, ensuring highly consistent and expressive feature description data. The result of this step is feature description data that accurately reflects the characteristics of the original data, providing a reliable basis for subsequent decision strategy analysis.
Based on the feature description data, high-quality decision strategy data is generated through the training and inference of a deep learning model. After generation, the decision strategy is further optimized so that it remains adaptable and efficient in dynamic environments; the optimized strategy can better cope with the changes and challenges of practical applications, ensuring effective implementation in complex scenarios.
Finally, the terminal device set is obtained by matching the terminal devices related to the decision strategy data, and edge computing nodes are matched according to those devices so that the decision strategy data can be delivered to each terminal device efficiently. Device matching uses factors such as device type, location, communication capability, and current state to produce an optimized terminal device set, while edge computing node matching uses state matching and load analysis to ensure reasonable scheduling and load balancing across nodes, further improving resource utilization and overall system performance. Through the node load strategy, the decision strategy data is transmitted to the terminal devices efficiently, ensuring accurate execution of the strategy in the real environment.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in their description are briefly introduced below. The drawings described here show only some embodiments of the invention; other drawings can be derived from them by a person skilled in the art without inventive effort.
FIG. 1 is a flowchart of a data processing method based on a lightweight data center in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a data processing system based on a lightweight data center in an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention is made clearly and fully with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the invention.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In addition, the technical features of the different embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
For ease of understanding, refer to FIG. 1, a flowchart of a data processing method based on a lightweight data center according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
S101, collecting raw data from a plurality of data sources to obtain a heterogeneous data set, and preprocessing the heterogeneous data set to obtain a processed data set;
The method specifically comprises: collecting raw data from a plurality of data sources to obtain a heterogeneous data set, and performing data cleaning and missing-value filling on it to obtain a filled data set; encoding the filled data set to obtain an encoded data set; performing data type conversion on the encoded data set to obtain a converted data set; encoding the converted data set to obtain an encoded data set; performing data connection integration on the encoded data set to obtain an integrated data set, which specifically comprises: performing primary key identification on the encoded data set to obtain a corresponding primary key set, wherein each primary key comprises one or more of timestamps, device IDs, and geographic location data; performing connection-mode matching on the encoded data set according to the primary key set to obtain a connection-mode set comprising one or more of: inner join, left join, right join, full join, and cross join; joining the encoded data set through the connection-mode set and the primary key set to obtain a connected data set; eliminating redundant data from the connected data set to obtain a data set to be processed; performing data consistency verification on the data set to be processed, and generating the integrated data set when verification passes; and performing multi-modal feature alignment on the integrated data set to obtain the processed data set;
Specifically, raw data is collected from a plurality of data sources, including intelligent traffic lights, environmental monitoring sensors, video surveillance cameras, and the like. The collected raw data forms a heterogeneous data set with differing formats and characteristics: traffic light data may include timestamps, signal states, and durations; environmental sensor data may include temperature, humidity, and air quality indices; and camera data consists of image and video streams. To ensure high data quality and consistency, the heterogeneous data set first undergoes data cleaning and missing-value filling. Cleaning removes noise and outliers, for example by filtering obvious outliers in sensor data with a reasonable threshold range, or by removing noise and enhancing image quality in video data with image processing algorithms. Missing-value filling preserves data integrity; common methods include mean filling, median filling, and machine-learning-based predictive filling. For instance, a missing traffic light reading can be filled from the signal states at adjacent time points, preserving the continuity and completeness of the data. Once cleaning and filling are complete, a high-quality filled data set is produced. The filled data set is then encoded to standardize its format and improve the efficiency of subsequent processing; encoding can be divided into numerical encoding and categorical encoding.
For example, the temperature and humidity readings of environmental sensors can be normalized into the range 0 to 1, while the object detection results of video surveillance data can be categorically encoded, converting detected objects such as vehicles and pedestrians into class labels. These encoding steps ensure standardization and uniformity, producing an encoded data set. Next, data type conversion is applied so that all data is consistent in format and type: all timestamps can be unified into ISO 8601 format to guarantee comparability across sources, and images can be converted to a uniform resolution and color space to facilitate subsequent processing and feature extraction. The result is a converted data set with a uniform format and type. To further optimize processing efficiency, the converted data set is encoded again to normalize and simplify its structure: variable data undergoes numerical encoding and unit conversion so that all units are consistent and differences between sources are eliminated, while non-variable data is hash-encoded into unique numerical representations, reducing the complexity of storage and processing. These steps yield an optimized encoded data set. After encoding, the encoded data sets are integrated through data connection, merging data from different sources via primary key identification and connection-mode matching.
Primary key identification determines the primary key fields of each data source, such as timestamps, device IDs, and geographic location data, which are used for matching and integration. Connection-mode matching selects an appropriate join, such as an inner join, left join, right join, full join, or cross join, to connect data from different sources. For example, traffic light data and environmental sensor data can be joined on timestamp and geographic location as primary keys to produce a connected data set containing multi-modal information. After connection integration, redundant data elimination and consistency verification are performed: redundancy elimination removes duplicate records so that each record contains only necessary fields, and consistency verification checks that field formats and units agree and that the primary keys of all records match correctly. When the check passes, a high-quality integrated data set is produced. Finally, multi-modal feature alignment is applied to the integrated data set to generate the final processed data set: features of different modalities are aligned and fused through positional and matching attention mechanisms. For example, image features can be aligned with time-series features, with the correlations and dependencies between modalities captured by attention, yielding a composite feature representation.
Through multi-modal feature alignment, high consistency and high expressiveness of the data are ensured, a processed data set is generated, and a reliable basis is provided for subsequent feature analysis and decision strategy generation. In the traffic management system of a smart city, the method can realize efficient processing and analysis of traffic light, environmental monitoring sensor, and video monitoring camera data. The system first collects data from the different data sources and performs preprocessing to generate a processed data set. The data is then compressed, encrypted, and transmitted to the cloud, where depth feature analysis and decision strategy generation are carried out. Finally, the decision strategy data is transmitted to the terminal devices by matching terminal devices with edge computing nodes, realizing real-time traffic management optimization. The data processing method based on a lightweight data center not only improves data transmission efficiency and security, but also ensures efficient and stable operation of the system through reasonable resource scheduling and load balancing, providing reliable technical support and guarantees for various applications in smart cities.
It should be noted that primary key identification, connection mode matching, data connection, redundant data elimination, and data consistency verification are performed on the encoded data set, finally generating an integrated data set. First, the raw data collected from a plurality of data sources is preprocessed, encoded, and converted to obtain an encoded data set. Next, primary key identification is performed, which is the basic step of data connection. Primary key identification aims to determine the primary key fields in each data source that uniquely identify each data record. In a smart city traffic management system, common primary keys include the timestamp, device ID, and geographic location data. For example, the data of a traffic light may use the timestamp and device ID as its primary key, the data of an environmental monitoring sensor may use the timestamp and geographic location, and the video monitoring data may use a combination of timestamp, device ID, and geographic location. Through these primary keys, effective matching of data from different data sources at the same time and place can be ensured. Once the set of primary keys is determined, connection mode matching follows. Connection mode matching selects a proper data join type according to the primary key set; common join types include the inner join, left join, right join, full join, and cross join.
In a smart city traffic management system, the inner join is suited to preserving only records that match in all data sources; the left join preserves all records in the left data source, even when there is no matching record in the right data source; the right join is the reverse, preserving all records in the right data source; the full join preserves all records in both data sources; and the cross join returns the Cartesian product of the two data sources. Assume that there are three data sources: traffic light data, environmental monitoring sensor data, and video monitoring data. The primary key of the traffic light data is the timestamp and device ID, the primary key of the environmental monitoring sensor data is the timestamp and geographic location, and the primary key of the video monitoring data is the timestamp, device ID, and geographic location. The inner join is selected to match the data sources because only the data that has matching records in all data sources is needed. Through the inner join, a connection mode set can be obtained that includes the inner join as the main connection mode. The encoded data set is then connected through the connection mode set and the primary key set. In the data connection process, each data source is matched according to its primary key fields, and the data is combined according to the join type. For example, the data of the traffic lights may be matched with the video monitoring data by timestamp and device ID, and the data of the environmental monitoring sensors may be matched with the data of the other two sources by timestamp and geographic location. In this way, a connection data set containing all critical data is obtained. After the connection data set is obtained, the next step is redundant data elimination.
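The inner join on a composite primary key described above can be sketched in plain Python; the field names and sample records below are hypothetical:

```python
def inner_join(left, right, key_fields):
    """Keep only records whose primary-key fields match in both sources."""
    index = {tuple(r[k] for k in key_fields): r for r in right}
    joined = []
    for row in left:
        key = tuple(row[k] for k in key_fields)
        if key in index:
            merged = dict(row)        # combine the fields of both records
            merged.update(index[key])
            joined.append(merged)
    return joined

# Hypothetical traffic light and video monitoring records.
signals = [{"ts": "2024-01-01T08:00:00Z", "device_id": "TL-01", "state": "green"}]
video = [
    {"ts": "2024-01-01T08:00:00Z", "device_id": "TL-01", "vehicles": 12},
    {"ts": "2024-01-01T08:05:00Z", "device_id": "TL-02", "vehicles": 7},
]

# Only the (timestamp, device ID) pair present in both sources survives.
connected = inner_join(signals, video, ("ts", "device_id"))
```

A left, right, or full join would differ only in which unmatched rows are retained (padded with missing fields) rather than dropped.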
Redundant data is repeated and unnecessary information in the data set; it occupies storage space and degrades data processing efficiency. To eliminate redundant data, the connection data set is deduplicated and each record is guaranteed to contain only the necessary fields. For example, if multiple sensors at the same time and place recorded the same ambient temperature in the merged data set, only one record is kept. Through redundant data elimination, an optimized data set to be processed is obtained. Then, a data consistency check is performed on the data set to be processed to ensure the integrity and consistency of the data. The data consistency check includes checking whether the formats and units of all fields are consistent and whether the primary keys of all records match correctly. For example, it is checked whether all timestamp fields are unified in the ISO 8601 format, all temperature data is unified in degrees Celsius, and all geographic location data is unified in a longitude/latitude format. In addition, it must be ensured that the values of the primary key fields neither conflict nor repeat between different data sources. Through the consistency check, high quality and reliability of the data are ensured. After the data consistency check passes, a high-quality integrated data set is finally generated. In the smart city traffic management system, the integrated data set includes the status data of traffic lights, the speed and flow data of vehicles, the temperature and humidity data of environmental monitoring sensors, the image data of video monitoring cameras, and the like. The integrated data set not only improves the consistency and quality of the data, but also provides a reliable basis for subsequent feature analysis and decision strategy generation. For example, in practice, the traffic management department may use this integrated data set for real-time traffic monitoring and optimal scheduling.
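The redundant data elimination and consistency check can be sketched as follows; the ISO 8601 pattern, the field names, and the plausible Celsius range are assumptions made for illustration:

```python
import re

def deduplicate(records, key_fields):
    """Drop records whose primary key has already been seen."""
    seen, unique = set(), []
    for r in records:
        key = tuple(r[k] for k in key_fields)
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

# Simple ISO 8601 UTC timestamp pattern (assumed sufficient for this sketch).
ISO_8601 = re.compile(r"^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z$")

def check_consistency(records):
    """Verify timestamp format and a plausible Celsius range, as described above."""
    for r in records:
        if not ISO_8601.match(r["ts"]):
            return False
        if not (-50.0 <= r["temp_c"] <= 60.0):  # assumed plausible range
            return False
    return True

rows = [
    {"ts": "2024-01-01T08:00:00Z", "sensor": "S1", "temp_c": 21.5},
    {"ts": "2024-01-01T08:00:00Z", "sensor": "S1", "temp_c": 21.5},  # duplicate
]
clean = deduplicate(rows, ("ts", "sensor"))
ok = check_consistency(clean)
```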
By analyzing the state data of the traffic signal lamps and the traffic flow data in the integrated data set, traffic jam points can be identified, and the time interval of the traffic signal lamps can be adjusted in real time so as to optimize the traffic flow; by analyzing the data of the environment monitoring sensor, the air quality and the temperature change on the road can be monitored, and corresponding measures can be taken in time; by analyzing the data of the video monitoring camera, abnormal events on the road, such as traffic accidents or illegal behaviors, can be detected, and related departments can be timely notified to process. The data processing method based on the lightweight data center not only improves the efficiency and accuracy of data processing, but also ensures the high-efficiency and stable operation of the system through reasonable resource scheduling and load balancing, and provides reliable technical support and guarantee for various applications in smart cities. Through the integrated data set, the intelligent urban traffic management system can realize more intelligent and efficient traffic control, improves road traffic efficiency, reduces traffic jams, and improves travel experience of urban residents.
S102, transmitting the processing data set to a cloud end, carrying out feature analysis on the processing data set at the cloud end to obtain feature description data, and carrying out decision strategy analysis on the feature description data to obtain decision strategy data;
S103, terminal equipment related to the decision strategy data is matched to obtain a terminal equipment set, an edge calculation node set is matched according to the terminal equipment set, and the decision strategy data is transmitted to each terminal equipment in the terminal equipment set through the edge calculation node set.
It should be noted that the raw data is collected from a plurality of data sources to form a heterogeneous data set, and the heterogeneous data set is preprocessed, including data cleaning, missing value filling, data encoding, data type conversion, and data connection integration, to finally obtain a processed data set. In the data acquisition link, the data sources in a smart city may include intelligent traffic lights, environmental monitoring sensors, video monitoring cameras, and the like. The raw data generated by these data sources may have different formats and characteristics: for example, traffic signal data may include timestamps and signal status, environmental monitoring sensor data may include temperature, humidity, and air quality index, and video monitoring camera data consists of image and video streams. To effectively process the heterogeneous data, data cleaning is first performed to remove noise and abnormal values. For example, for environmental monitoring sensor data, obvious outlier data points can be filtered out by setting a reasonable threshold range; for video data, image processing algorithms may be used to remove noise and enhance image quality. Next, missing value filling is performed to ensure the integrity of the data. Common filling methods include mean filling, median filling, and machine learning based predictive filling; for example, missing values in traffic signal data can be filled using the signal states at adjacent time points before and after.
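A minimal sketch of the threshold-based cleaning and the neighbor-based missing value filling described above; the threshold bounds and sensor readings are illustrative:

```python
def filter_outliers(values, lo, hi):
    """Replace readings outside [lo, hi] with None so they can be re-filled."""
    return [v if (v is not None and lo <= v <= hi) else None for v in values]

def fill_missing_with_neighbors(values):
    """Fill each missing value with the mean of its nearest known neighbors,
    mirroring the 'adjacent time points before and after' strategy above."""
    filled = list(values)
    for i, v in enumerate(filled):
        if v is None:
            prev = next((filled[j] for j in range(i - 1, -1, -1)
                         if filled[j] is not None), None)
            nxt = next((filled[j] for j in range(i + 1, len(filled))
                        if filled[j] is not None), None)
            known = [x for x in (prev, nxt) if x is not None]
            filled[i] = sum(known) / len(known) if known else None
    return filled

raw = [20.0, 21.0, 999.0, 23.0]  # 999.0 is an obvious sensor glitch
cleaned = fill_missing_with_neighbors(filter_outliers(raw, -20.0, 50.0))
```

The glitch reading is first nulled by the threshold filter and then replaced by the mean of its neighbors, 22.0.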
After data cleaning and missing value filling, the data is encoded and type-converted. Data encoding includes numerical coding and classification coding: for example, numerical data such as the temperature and humidity from environmental monitoring sensors is standardized, and the target detection results of the video monitoring data are classification-coded. Type conversion ensures that all data is in a consistent format, e.g., unifying all timestamp data into the ISO 8601 format and converting image data to a uniform resolution and color space. After coding and type conversion are completed, data connection integration is carried out, and data from different sources is effectively integrated through primary key identification and connection mode matching. For example, the data of the traffic lights is joined with the data of the environmental monitoring sensors, using the timestamp and geographic location as primary keys, to generate an integrated data set containing multi-modal information.
Next, the processed data set is transmitted to the cloud. To improve transmission efficiency and data security, the data is compressed and encrypted. Using a compression algorithm such as Gzip or LZ4, the data volume can be significantly reduced and the bandwidth requirements lowered. Meanwhile, the data is encrypted using the Advanced Encryption Standard (AES) or the Transport Layer Security protocol (TLS), ensuring the privacy and integrity of the data during transmission. For example, when transmitting an integrated data set including traffic signal and environmental monitoring data, the data may first be compressed, then encrypted using the AES algorithm to generate an encrypted compressed data set, and finally transmitted to the cloud server through the HTTPS protocol.
At the cloud, the received encrypted data is first decrypted and decompressed to restore the original processed data set. Then, the powerful computing resources of the cloud are used to perform feature analysis on the data set. Feature analysis includes feature extraction and multi-modal feature alignment. For image data, Convolutional Neural Networks (CNNs) are used to extract image features, such as vehicle and pedestrian detections in traffic surveillance video; for time series data, a Recurrent Neural Network (RNN) is used to extract features such as the change patterns of traffic lights and the trends in environmental monitoring sensor data. After feature extraction, the features of different modalities are aligned and fused using attention mechanisms: the image features and the time series features are effectively combined through position attention and matching attention mechanisms, generating high-quality feature description data.
Decision strategy analysis is then carried out based on the feature description data. For example, in an intelligent traffic management system, the feature data may be analyzed using a deep learning model, such as a Transformer, to generate a traffic signal optimization strategy. The training parameters of the model, such as the learning rate, batch size, and number of training rounds, are optimized to ensure efficient training and accurate prediction. After the preliminary decision strategy is generated, the strategy is further optimized through a reinforcement learning algorithm such as Q-learning, so that it remains adaptive and efficient in a dynamic traffic environment.
The generated decision strategy data needs to be transmitted to the relevant terminal equipment. Firstly, relevant terminal equipment is matched according to decision strategy data, and a terminal equipment set is generated. The matching process takes into account the device type, location, communication capabilities, and current state. For example, in traffic management applications, terminal devices may include traffic light controllers, roadside sensors, on-board systems, and the like. And generating an optimized terminal equipment set through matching through a terminal equipment database.
Corresponding edge computing nodes are then matched according to the terminal equipment set. The edge computing node database records the detailed information of each node, including computing power, location, and network connection condition. Through state matching and load analysis, reasonable scheduling and load balancing of each edge computing node are ensured, and an edge computing node set is generated. Finally, the decision strategy data is efficiently transmitted to the terminal devices through the edge computing nodes. For example, the optimized traffic signal control strategy is transmitted to each traffic light controller through the edge computing nodes, realizing real-time signal control optimization and improving road traffic efficiency.
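A simplified stand-in for the state matching and load analysis described above; the node fields, region model, and load values are assumptions:

```python
def match_edge_node(device, nodes):
    """Pick the least-loaded edge node that can reach the device's region,
    a minimal sketch of state matching followed by load balancing."""
    candidates = [n for n in nodes if device["region"] in n["regions"]]
    if not candidates:
        return None
    return min(candidates, key=lambda n: n["load"])

# Hypothetical edge computing node database entries.
nodes = [
    {"id": "edge-A", "regions": {"north"}, "load": 0.7},
    {"id": "edge-B", "regions": {"north", "south"}, "load": 0.2},
]
device = {"id": "TL-01", "region": "north"}  # a traffic light controller
chosen = match_edge_node(device, nodes)
```

Both nodes cover the device's region, so the lighter-loaded node is selected; a production scheduler would also weigh computing power and network condition, as the text notes.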
In practical application, for example in an intelligent traffic management system, the method can realize efficient processing and analysis of traffic light, environmental monitoring sensor, and video monitoring camera data. The system first collects data from the different data sources and performs preprocessing to generate a processed data set. The data is then compressed, encrypted, and transmitted to the cloud, where depth feature analysis and decision strategy generation are carried out. Finally, the decision strategy data is transmitted to the terminal devices by matching terminal devices with edge computing nodes, realizing real-time traffic management optimization. The data processing method based on a lightweight data center not only improves data transmission efficiency and security, but also ensures efficient and stable operation of the system through reasonable resource scheduling and load balancing, providing reliable technical support and guarantees for various applications in smart cities.
By performing the above steps, heterogeneous data sets can be efficiently processed and integrated by collecting raw data from multiple data sources and performing data preprocessing. This includes the steps of data cleaning, missing value filling, data encoding, data type conversion, and data connection integration. The systematic data preprocessing process not only improves the consistency and quality of the data, but also lays a solid foundation for subsequent data analysis. In data cleaning and missing value filling, advanced algorithms and techniques such as outlier detection, noise removal, and data interpolation are adopted, ensuring the integrity and accuracy of the data; the data encoding and type conversion steps enable data from different sources to be seamlessly integrated by standardizing the data, providing a uniform data format for subsequent feature extraction and analysis; and data connection integration ensures the efficient integration and consistency of the data through primary key identification and connection mode matching, so that data of different modalities can be processed and analyzed in the same data set. In the process of transmitting the data to the cloud, compressing and encrypting the processed data set effectively improves data transmission efficiency and security. Data compression techniques such as Gzip and LZ4 can significantly reduce the volume of data and lower bandwidth requirements, thereby increasing the data transmission speed. Meanwhile, data encryption technologies such as the Advanced Encryption Standard (AES) and the Transport Layer Security protocol (TLS) ensure the privacy and integrity of the data during transmission and prevent data leakage and tampering.
In this process, reasonable settings of parameters such as the data compression ratio and the encryption key length further optimize the efficiency and security of data transmission, ensuring high efficiency and reliability in large-scale data transmission. Depth feature extraction and multi-modal feature alignment are performed on the processed data set using the powerful computing resources of cloud computing, and high-quality feature representations are extracted from multi-modal data such as images and text using deep learning models such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). In the feature extraction process, parameters such as the number of convolution layers, the filter size, and the pooling strategy are optimally set, ensuring accurate extraction of image features; in text feature extraction, reasonable configuration of parameters such as the number of hidden layers, the number of hidden units, and the time step improves the representation of time series data. Multi-modal feature alignment effectively fuses the features of different modalities through a position attention mechanism and a matching attention mechanism, ensuring high consistency and high expressiveness of the feature description data. The result of this step is feature description data that accurately reflects the features of the original data, providing a reliable basis for subsequent decision strategy analysis.
Based on the feature description data, high-quality decision strategy data is generated through training and reasoning of a deep learning model. After the decision strategy is generated, the decision strategy is further optimized, so that the decision strategy has high adaptability and high efficiency in a dynamic environment. The optimized decision strategy can better cope with various changes and challenges in practical application, and ensures effective implementation in complex application scenes.
Finally, terminal equipment sets are obtained by matching terminal equipment related to the decision strategy data, and edge computing nodes are matched according to the terminal equipment, so that the decision strategy data can be transmitted to each terminal equipment with high efficiency. In the process of terminal equipment matching, the factors such as equipment type, position, communication capacity, current state and the like are used for carrying out accurate matching, so that an optimized terminal equipment set is generated. And the edge computing node matching ensures reasonable scheduling and load balancing of each node through state matching and load analysis, and further optimizes the resource utilization rate and the overall performance of the system. And through the node load strategy, the decision strategy data is efficiently transmitted to the terminal equipment, so that the accurate execution of the strategy in the actual environment is ensured.
In a specific embodiment, the process of performing the data encoding step on the converted data set may specifically include the following steps:
(1) Extracting variable data from the conversion data set to obtain variable data and non-variable data in the conversion data set;
(2) Performing numerical coding and unit conversion on variable data to obtain first coded data;
(3) And carrying out hash coding on the non-variable data to obtain second coded data, and combining the first coded data and the second coded data into a coded data set.
Specifically, variable data and non-variable data are extracted from the conversion data set. Variable data are typically those that can be represented numerically and have actual physical meaning, such as traffic flow, ambient temperature and humidity, and the like. In intelligent traffic systems, variable data may include time intervals of traffic lights, speed and flow of vehicles, temperature, humidity, and air quality index of environmental monitoring sensors, etc. These data require further processing to ensure that they are consistent and comparable between different data sources.
Once the variable data is extracted, it is then necessary to numerically encode and unit-convert the data. Numerical encoding is to convert all variable data into a unified numerical representation for subsequent processing and analysis. For example, traffic flow may be expressed in terms of vehicles per hour, temperature may be expressed in degrees celsius, and air quality index may be expressed in terms of AQI values. By means of numerical coding, it is ensured that variable data of different data sources are compared and analyzed on the same scale. The unit conversion is to ensure that the units of all data are consistent, eliminating unit differences between different data sources. For example, if the temperature data of some environmental monitoring sensors is expressed in degrees Fahrenheit and the data of other sensors is expressed in degrees Celsius, all the temperature data needs to be converted into a unified unit (e.g., degrees Celsius) for storage and processing.
Taking smart city traffic management as an example, assume that time interval data of traffic lights, vehicle speed data, and ambient temperature data are collected. In this example, the time interval data of the traffic signal is in seconds, the vehicle speed data is in kilometers per hour, and the ambient temperature data is in both degrees celsius and degrees fahrenheit. First, the time interval data of the traffic signal and the vehicle speed data are numerically encoded to ensure that they are on a uniform numerical scale. The degrees fahrenheit in the ambient temperature data is then converted to degrees celsius, ensuring that all temperature data is represented in the same unit.
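The unit conversion step in this example can be sketched as follows; the record field names are assumptions:

```python
def fahrenheit_to_celsius(f):
    """Standard conversion: C = (F - 32) * 5/9."""
    return (f - 32.0) * 5.0 / 9.0

def normalize_units(record):
    """Unify the temperature unit to Celsius, leaving Celsius records untouched."""
    out = dict(record)
    if out.get("temp_unit") == "F":
        out["temp"] = fahrenheit_to_celsius(out["temp"])
        out["temp_unit"] = "C"
    return out

# A hypothetical sensor record reported in degrees Fahrenheit.
r = normalize_units({"temp": 77.0, "temp_unit": "F"})
```

77 °F converts to exactly 25 °C, so after this step all temperature data shares one unit regardless of the reporting sensor.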
Meanwhile, the non-variable data needs to be subjected to hash encoding processing. Non-variable data are typically those that cannot be directly represented numerically, such as the status of traffic lights (red, green, yellow), vehicle type (car, truck, bus, etc.), and the geographic location of environmental monitoring sensors. Hash coding is used to convert these non-variable data into a digital representation, facilitating subsequent data processing and analysis. By hash encoding, each non-variable data is assigned a unique numerical identifier, thereby reducing the storage space and computational complexity of the data.
In the smart city traffic management system, it is assumed that the status data of traffic lights, vehicle type data, and the geographic location data of the environmental monitoring sensors are collected. First, the status data of the traffic signals is hash-coded, with "red light", "green light", and "yellow light" encoded as the values 0, 1, and 2, respectively. Next, the vehicle type data is hash-coded, with "car", "van", and "bus" encoded as the values 0, 1, and 2, respectively. Finally, the geographic location data of the environmental monitoring sensors is hash-coded, and each geographic location is assigned a unique numerical identifier.
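The hash coding of non-variable data can be sketched as follows. Small closed category sets use the fixed mappings from the text, while open-ended values such as coordinates get a truncated SHA-256 identifier; that truncation is an illustrative choice, not a scheme mandated by the method:

```python
import hashlib

# Fixed enumerations for small, known category sets (values from the text).
SIGNAL_STATES = {"red light": 0, "green light": 1, "yellow light": 2}
VEHICLE_TYPES = {"car": 0, "van": 1, "bus": 2}

def hash_code(value, digits=8):
    """Derive a stable numeric identifier for open-ended values such as
    geographic coordinates, by truncating a SHA-256 digest."""
    digest = hashlib.sha256(value.encode("utf-8")).hexdigest()
    return int(digest[:digits], 16)

state_code = SIGNAL_STATES["yellow light"]
geo_code = hash_code("30.5728,104.0668")  # assumed "lat,lon" string format
```

The digest truncation keeps identifiers compact; with 8 hex digits, collisions are unlikely for the modest number of sensor sites in one city, though a real deployment would size this to its data volume.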
Through the above steps, the first encoded data and the second encoded data are obtained, respectively. The first encoded data includes variable data, such as time interval data of traffic lights, vehicle speed data, and ambient temperature data, which are numerically encoded and unit-converted; the second encoded data includes hash-encoded non-variable data such as traffic light status data, vehicle type data, and geographic location data of the environmental monitoring sensor. Next, the first encoded data and the second encoded data are combined to generate a unified encoded data set.
In a smart city traffic management system, this coded data set will contain all critical data and be stored and processed in a standardized format. For example, the encoded data set includes time interval data (in seconds) of traffic lights, vehicle speed data (in kilometers per hour), ambient temperature data (in degrees celsius), status data (hash-encoded) of traffic lights, vehicle type data (hash-encoded), and geographic position data (hash-encoded) of an environmental monitoring sensor. The encoded data set not only improves the consistency and quality of the data, but also provides a reliable basis for subsequent data analysis and decision making.
The complex multi-source data is successfully converted into a standardized coded data set by performing variable data extraction, numerical coding, unit conversion, hash coding and other steps on the converted data set. The data processing method based on the lightweight data center not only improves the processing efficiency and accuracy of data, but also ensures the efficient and stable operation of the system through reasonable resource scheduling and load balancing, and provides reliable technical support and guarantee for various applications in smart cities. For example, in the intelligent urban traffic management system, by the data processing method, the key data such as traffic lights, vehicle speeds, environment temperatures and the like can be efficiently analyzed and optimally scheduled, the road traffic efficiency is improved, the traffic jam is reduced, and the intelligent level of urban traffic management is improved.
In a specific embodiment, the process of executing step S102 may specifically include the following steps:
(1) Performing compression ratio matching on the processed data set to obtain a target compression ratio;
(2) Performing data compression on the processed data set according to the target compression ratio to obtain a compressed data set;
(3) Data encryption is carried out on the compressed data set through a symmetric encryption algorithm, so that an encrypted data set is obtained, and the encrypted data set is transmitted to a cloud;
(4) And carrying out feature analysis on the processed data set at the cloud to obtain feature description data, and carrying out decision strategy analysis on the feature description data to obtain decision strategy data.
Specifically, raw data collected from a plurality of data sources is preprocessed, encoded, and converted to generate a processed data set. Next, compression ratio matching is required for this processed data set to determine the optimal compression ratio parameters. The compression ratio matching is to select a proper compression ratio according to the characteristics of the processed data set and the limitation of transmission bandwidth. The compression ratio is a ratio of the size of data after compression to the size of data before compression. For a smart city traffic management system, the compression ratio needs to find a balance point between compression efficiency and data fidelity. If the compression ratio is too high, although the data volume can be significantly reduced, the accuracy of part of the data may be lost; if the compression ratio is too low, the data volume is large and the transmission efficiency is not high. By analyzing the data characteristics and evaluating the transmission bandwidth, a target compression ratio can be determined, so that the data after compression still has high quality, and the transmission efficiency is improved.
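As a minimal illustration of compression ratio matching, the sketch below uses Python's standard gzip module to pick the lowest compression level whose ratio meets a target; the target ratio and the sample payload are assumed values:

```python
import gzip

def compression_ratio(data, level):
    """Ratio of compressed size to original size at a given gzip level."""
    return len(gzip.compress(data, compresslevel=level)) / len(data)

def match_compression_level(data, max_ratio):
    """Pick the lowest gzip level that meets the target ratio, trading
    CPU cost against output size; falls back to the maximum level."""
    for level in range(1, 10):
        if compression_ratio(data, level) <= max_ratio:
            return level
    return 9

# Highly repetitive telemetry compresses extremely well.
sample = b"timestamp,device,state\n" * 1000
level = match_compression_level(sample, max_ratio=0.05)
ratio = compression_ratio(sample, level)
```

A lower level reached sooner means less CPU spent per record, which matters on a lightweight data center; real data is less repetitive than this sample, so the achievable ratio would be higher.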
Once the target compression ratio is determined, the processed data set is data compressed to obtain a compressed data set. The data compression may employ various algorithms such as Gzip, LZ4, etc. These algorithms enable efficient compression of data according to a target compression ratio. For example, in a smart city traffic management system, the data of traffic lights, vehicle speeds, and environmental monitoring sensors may be compressed using the Gzip algorithm. The Gzip algorithm reduces the data volume by finding the repeated pattern and redundant information in the data, compressing it into smaller data blocks. The compressed data is smaller in size, key characteristics of the original data are still reserved, and accurate recovery after transmission and decompression is ensured. Next, in order to ensure the security of the data in the transmission process, the compressed data set is data-encrypted by a symmetric encryption algorithm, so as to obtain an encrypted data set. A symmetric encryption algorithm is an algorithm that encrypts and decrypts using the same key, and common symmetric encryption algorithms include Advanced Encryption Standard (AES) and Data Encryption Standard (DES). In the smart city traffic management system, the compressed data may be encrypted using the AES algorithm. The AES algorithm has the characteristics of high efficiency and safety, and can effectively prevent data from being stolen or tampered in the transmission process. An encrypted data set is obtained through encryption by an AES algorithm, is unreadable in the transmission process, and can be restored only after the cloud uses the same key for decryption.
The encrypted data set is transmitted to the cloud end through a secure transmission protocol. Common transport protocols include HTTPS, MQTT and WebSocket, which all have good security and transport efficiency. In a smart city traffic management system, the HTTPS protocol may be used to transmit the encrypted data set. The HTTPS protocol is based on the SSL/TLS protocol, can provide encryption and authentication functions for data transmission, and ensures the safety and the integrity of data in the transmission process.
After reaching the cloud, the encrypted data set is decrypted and decompressed first, and the original processed data set is restored. The decryption process uses the same symmetric encryption algorithm and key as encryption, ensuring that the data can be recovered accurately. The decompression process uses the same algorithm as compression, such as Gzip, to ensure that the data is restored to the original state. After decryption and decompression, a complete processed data set is obtained.
At the cloud, feature analysis is performed on the processed data set to obtain feature description data. Feature analysis extracts useful features from the processed data set through deep learning and machine learning algorithms. For example, in a smart city traffic management system, a Convolutional Neural Network (CNN) may be used to extract image features from the video monitoring data, generating feature description data from the extracted vehicle and pedestrian features. Meanwhile, a Recurrent Neural Network (RNN) is used to extract features from time series data, such as traffic light status and environmental monitoring sensor data, generating feature description data by analyzing the temporal variation patterns of these data.
Next, decision strategy analysis is performed on the feature description data to obtain decision strategy data. Decision strategy analysis takes the feature description data and, through machine learning models and algorithms, generates an optimization strategy for a specific application scenario. In a smart city traffic management system, a deep reinforcement learning algorithm, such as Q-learning or a Deep Q-Network (DQN), may be used to optimize the traffic light control strategy. By analyzing the feature description data, an optimized traffic light control strategy is generated, improving road traffic efficiency and reducing congestion. For example, by analyzing vehicle speed and flow data, the switching times of the traffic lights can be optimized so that vehicles pass through intersections quickly during peak hours, reducing waiting time.
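As a minimal sketch of the reinforcement-learning idea, the toy below runs tabular Q-learning over two invented queue states and two signal actions; the states, actions, and reward values are assumptions for illustration only, and a real system would use a DQN over much richer state.

```python
import random

# States: traffic queue level on the main road; actions available to the
# signal controller. Rewards below are invented for illustration.
STATES = ["low_queue", "high_queue"]
ACTIONS = ["extend_green", "switch_phase"]
REWARD = {
    ("high_queue", "extend_green"): 1.0,   # clears the backlog
    ("high_queue", "switch_phase"): -1.0,  # queue keeps growing
    ("low_queue", "extend_green"): -0.5,   # starves cross traffic
    ("low_queue", "switch_phase"): 0.5,    # serves the waiting side road
}

def train(episodes=2000, alpha=0.1, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)                       # observed queue state
        if rng.random() < epsilon:                   # epsilon-greedy explore
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        r = REWARD[(s, a)]
        # Myopic Q-update (discount factor 0 for simplicity): move the
        # estimate toward the observed immediate reward.
        q[(s, a)] += alpha * (r - q[(s, a)])
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
assert policy["high_queue"] == "extend_green"   # learned: extend green when queued
assert policy["low_queue"] == "switch_phase"    # learned: serve cross traffic
```

The learned policy extends the green phase when the main road is congested and switches phases otherwise, matching the peak-hour behavior described above.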
In a specific embodiment, the process of performing decision strategy analysis on the feature description data to obtain decision strategy data may specifically include the following steps:
(1) At the cloud end, carrying out data division on the processing data set to obtain an image data set and a text data set;
(2) Inputting the image data set into a convolutional neural network for image feature extraction to obtain an image feature set;
(3) Inputting the text data set into a recurrent neural network for text feature extraction to obtain a text feature set;
(4) Inputting the image feature set and the text feature set into an attention mechanism for feature weight matching to obtain a first weight set of the image feature set and a second weight set of the text feature set;
(5) Carrying out multi-modal feature fusion on the image feature set and the text feature set through the first weight set and the second weight set to obtain multi-modal fusion features;
(6) Generating feature description data according to the multi-modal fusion features, and carrying out decision strategy analysis on the feature description data to obtain decision strategy data.
Specifically, after the raw data collected from multiple data sources has been preprocessed, encoded, compressed, and transmitted to the cloud, the processed data set needs to be divided so that different types of data can be handled separately. The processed data set is divided into an image data set and a text data set. The image data set typically contains images and video from surveillance cameras, while the text data set includes time-series data such as traffic light status, vehicle speed, and environmental sensor readings.
Once the data partitioning is complete, the image data set is input into a convolutional neural network (CNN) for image feature extraction. CNNs excel at processing image data: through multiple layers of convolution and pooling, useful features such as vehicles, pedestrians, and road conditions can be extracted from traffic surveillance video. In this process, parameters such as the number of convolution layers, the filter size, and the pooling strategy are tuned to ensure that the extracted image features are of high quality and expressive power. After a series of convolution and pooling operations, a high-dimensional image feature set is obtained that reflects the key information in the image data.
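The convolution and pooling operations can be sketched in pure Python on a toy grayscale "frame"; the image values and edge-detecting kernel are invented for illustration, and a real CNN would learn its kernels from data.

```python
def conv2d(image, kernel):
    # Valid (no-padding) 2D convolution of a grayscale image with a kernel.
    kh, kw = len(kernel), len(kernel[0])
    out_h, out_w = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def max_pool2x2(fmap):
    # 2x2 max pooling with stride 2, halving each spatial dimension.
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

# 5x5 toy "frame" with a bright vertical stripe in the middle column.
frame = [[0, 0, 1, 0, 0]] * 5
edge_kernel = [[-1, 0, 1]] * 3        # responds to vertical edges

fmap = conv2d(frame, edge_kernel)     # 3x3 feature map
pooled = max_pool2x2(fmap)            # 1x1 after pooling

assert len(fmap) == 3 and len(fmap[0]) == 3
assert pooled[0][0] == 3              # strong response at the stripe's edge
```

The pooled value keeps the strongest edge response while discarding position detail, which is the dimensionality-reduction role pooling plays in the extraction pipeline above.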
At the same time, the text data set is input into a recurrent neural network (RNN) for text feature extraction. RNNs are particularly suited to processing time-series data, as their recurrent structure can capture the temporal dependencies in the data. In a smart city traffic management system, RNNs may be used to extract temporal features from traffic light status, vehicle speed, and environmental sensor data. For example, analyzing the traffic light status data reveals the pattern of signal changes; analyzing the vehicle speed data reveals trends in traffic flow; and analyzing the environmental sensor data reveals how temperature, humidity, and air quality vary over time. After a series of recurrent units, a high-dimensional text feature set is obtained that reflects the key information in the time-series data.
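The recurrent structure can be illustrated with a single-unit Elman RNN forward pass; the speed readings and weights below are invented (untrained) values chosen only to show how the hidden state accumulates the temporal trend.

```python
import math

def rnn_forward(sequence, w_in, w_rec, bias):
    # Single-unit Elman RNN: h_t = tanh(w_in * x_t + w_rec * h_{t-1} + bias).
    # The hidden state h carries information forward across time steps.
    h = 0.0
    hidden_states = []
    for x in sequence:
        h = math.tanh(w_in * x + w_rec * h + bias)
        hidden_states.append(h)
    return hidden_states

# Toy scaled speed readings; weights are illustrative, not trained.
speeds = [0.2, 0.4, 0.8, 0.9]
states = rnn_forward(speeds, w_in=1.0, w_rec=0.5, bias=0.0)

assert len(states) == len(speeds)           # one hidden state per time step
assert all(-1.0 < h < 1.0 for h in states)  # tanh keeps states bounded
assert states[-1] > states[0]               # rising speeds push the state up
```

Because each hidden state depends on the previous one, the final state summarizes the whole sequence, which is what makes the resulting feature set reflect temporal patterns rather than isolated readings.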
Next, the image feature set and the text feature set are input into an attention mechanism for feature weight matching. The attention mechanism dynamically assigns weights to different features, capturing the correlations between them. In this process, the attention mechanism first computes the importance of each feature for the image feature set and the text feature set separately, generating a first weight set for the image features and a second weight set for the text features. These weight sets reflect the share of each feature in the overall feature set. For example, key image features in the traffic surveillance video, such as vehicle and pedestrian detections, may receive higher weights; likewise, key temporal features in the text data, such as peak-hour traffic flow, may be weighted more heavily.
The image feature set and the text feature set are then fused into multi-modal fusion features using the first and second weight sets. The purpose of multi-modal feature fusion is to link different types of data features effectively, so as to capture the complex associations and dependencies between them. In this process, the image features and text features are combined through a weighted sum into a unified feature representation. These multi-modal fusion features contain not only the spatial information from the image data but also the temporal information from the text data, providing a more comprehensive and accurate feature representation.
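The weight matching and weighted fusion just described can be sketched with dot-product attention in pure Python; the feature vectors and query are toy values invented for illustration, and the concatenation of the two fused vectors is one of several possible fusion designs.

```python
import math

def softmax(xs):
    # Numerically stable softmax: normalizes scores into weights summing to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_weights(features, query):
    # Score each feature vector by dot product with a query vector,
    # then normalize the scores into attention weights.
    scores = [sum(f * q for f, q in zip(feat, query)) for feat in features]
    return softmax(scores)

def fuse(features, weights):
    # Weighted sum of feature vectors -> a single fused vector.
    dim = len(features[0])
    return [sum(w * feat[i] for w, feat in zip(weights, features))
            for i in range(dim)]

# Toy 3-dimensional features (values invented for illustration).
image_feats = [[0.9, 0.1, 0.0], [0.2, 0.8, 0.1]]  # e.g. vehicle, pedestrian
text_feats = [[0.5, 0.5, 0.3], [0.1, 0.2, 0.9]]   # e.g. speed trend, light phase
query = [1.0, 0.5, 0.2]                           # shared attention query

w_img = attention_weights(image_feats, query)     # "first weight set"
w_txt = attention_weights(text_feats, query)      # "second weight set"
fused = fuse(image_feats, w_img) + fuse(text_feats, w_txt)  # concatenation

assert abs(sum(w_img) - 1.0) < 1e-9 and abs(sum(w_txt) - 1.0) < 1e-9
assert len(fused) == 6   # 3 image dims + 3 text dims
```

Each weight set sums to one, so the fused vector is a convex combination of each modality's features, with the query steering which features dominate.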
Feature description data is then generated from the multi-modal fusion features. The feature description data reflects the key information in the processed data set and provides a reliable basis for subsequent decision strategy analysis. In the smart city traffic management system, the feature description data may include traffic light optimization strategies, vehicle flow predictions, environmental monitoring and early-warning information, and the like. By performing decision strategy analysis on the feature description data, a specific optimized decision strategy can be generated. For example, analyzing traffic light status and vehicle flow data may yield a set of optimized traffic signal control strategies that improve road traffic efficiency; analyzing the environmental sensor data may yield a set of environmental monitoring and early-warning strategies for responding promptly to changes in air quality.
In a specific application, for example at a busy urban intersection, this method enables comprehensive analysis and optimization of traffic light status, vehicle speed, and environmental monitoring data. First, the image data acquired by the surveillance cameras is input into the convolutional neural network to extract image features of vehicles and pedestrians; then, the time-series data of traffic light status and vehicle speed is input into the recurrent neural network to extract temporal features. Next, the image features and temporal features are weight-matched and fused through the attention mechanism to generate multi-modal fusion features. Feature description data is generated from these features, and decision strategy analysis yields an optimized traffic signal control strategy.
For example, during the morning peak period, an optimized signal control strategy is generated by analyzing vehicle flow and traffic light status data: the green time of the trunk road is extended, vehicle waiting time is reduced, and road throughput is improved. Meanwhile, analysis of the environmental sensor data detects changes in air quality and generates an environmental monitoring early-warning strategy, promptly reminding the relevant departments to take measures to improve air quality. The data processing method based on the lightweight data center thus not only improves the efficiency and accuracy of data processing but also raises the intelligence level of traffic management through multi-modal feature fusion and decision optimization, ensuring efficient operation of urban traffic and providing strong technical support for the development of smart cities.
In a specific embodiment, the process of executing step S103 may specifically include the following steps:
(1) Performing data execution node matching on the decision strategy data to obtain an execution node set;
(2) Matching, through the execution node set, the terminal devices related to the decision strategy data to obtain a terminal device set;
(3) Matching an edge computing node set according to the terminal equipment set, and carrying out node state matching on each edge computing node in the edge computing node set to obtain the current node state of each edge computing node;
(4) According to the current node state of each edge computing node, load analysis is carried out on each edge computing node to obtain load data of each edge computing node;
(5) Constructing a node load strategy according to the load data of each edge computing node, and transmitting the decision strategy data to each terminal device in the terminal device set through the node load strategy and the edge computing node set.
Specifically, after the decision strategy data is generated, data execution node matching is performed to determine which nodes will be responsible for executing the decision strategies. Execution node matching selects suitable execution nodes according to the characteristics and requirements of the decision strategy data. The nodes may be traffic light controllers, roadside sensors, camera controllers, and the like. In a smart city traffic management system, for example, an optimized traffic light control strategy needs to be transmitted to each traffic light controller, while an environmental monitoring and early-warning strategy needs to be transmitted to the corresponding environmental sensor controllers. By analyzing the type and application scenario of the decision strategy data, an execution node set is obtained that contains all nodes required to receive and execute the decision strategy data.
Next, by executing the node set, the terminal devices associated with the decision policy data are matched. The terminal devices are the actual devices that specifically implement these policies, e.g., traffic lights, vehicle detection sensors, air quality monitoring devices, etc. In a smart city traffic management system, the matching process of terminal devices needs to consider the type, location and communication capability of the devices. For example, for traffic light control strategies, it is desirable to match all relevant traffic light devices and ensure that these devices have the ability to receive and execute the strategy. By this matching, a set of terminal devices can be generated, ensuring that all devices that need to execute policies are included.
Next, an edge computing node set is matched according to the terminal device set. Edge computing nodes are the key intermediate link connecting the cloud and the terminal devices, with the ability to process data, store data, and execute computing tasks. The matching process must consider the computing power, storage capacity, network conditions, and geographic location of each edge computing node. For example, a traffic light controller may need to communicate with a nearby edge computing node to ensure low latency and efficient processing. Through this matching, an edge computing node set is generated, ensuring that every terminal device can transmit and process data through an appropriate edge computing node.
And then, carrying out node state matching on each node in the edge computing node set to obtain the current node state of each edge computing node. The node status includes current computational load, storage utilization, network bandwidth, connection quality, and the like. This state information is an important basis for load analysis and optimal scheduling. For example, excessive computational load on an edge computing node may cause data processing delays, while insufficient network bandwidth may affect data transmission speed. The state of each node is monitored and evaluated in real time, so that the load and resource utilization conditions of the nodes are ensured to be in a controllable range.
And respectively carrying out load analysis on each node according to the current node state of each edge computing node to obtain the load data of each node. The load analysis is to evaluate the resource utilization condition and the load capacity of each node, so that the system can not overload operation while operating efficiently. For example, the load data of an edge computing node may indicate that its CPU utilization is 80%, memory utilization is 70%, and network bandwidth utilization is 60%, which information may be helpful in evaluating whether the node is able to receive new tasks or needs load balancing.
Based on the load analysis result, a node load strategy is constructed, and balance and efficient operation of the system are ensured. The node load strategy comprises load balancing, task scheduling, resource optimization and the like. For example, if the load of a certain edge computing node is too high, part of tasks can be transferred to a node with lower load, and in this way, dynamic scheduling and optimal utilization of resources are realized. In particular, given that a traffic light controller requires a large amount of computing resources to handle the optimization strategy, while the current edge computing node is too loaded, this task can be distributed to another node in the vicinity that is less loaded, ensuring the overall performance of the system.
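A minimal sketch of this load analysis and node-selection policy follows; the node metrics, field names, weighting factors, and overload threshold are hypothetical values invented for illustration, not parameters specified by the method.

```python
# Hypothetical edge-node states; in practice these come from real-time
# monitoring of CPU, memory, and network bandwidth utilization.
nodes = {
    "edge-A": {"cpu": 0.80, "mem": 0.70, "bw": 0.60},
    "edge-B": {"cpu": 0.30, "mem": 0.40, "bw": 0.20},
    "edge-C": {"cpu": 0.55, "mem": 0.50, "bw": 0.45},
}

def load_score(state, w_cpu=0.5, w_mem=0.3, w_bw=0.2):
    # Weighted utilization: a higher score means a more heavily loaded node.
    return w_cpu * state["cpu"] + w_mem * state["mem"] + w_bw * state["bw"]

def build_load_policy(nodes, overload=0.7):
    scores = {name: load_score(s) for name, s in nodes.items()}
    eligible = {n: sc for n, sc in scores.items() if sc < overload}
    # Route new strategy-distribution tasks to the least-loaded eligible node.
    target = min(eligible, key=eligible.get)
    return scores, target

devices = ["signal-ctrl-01", "signal-ctrl-02", "air-sensor-07"]
scores, target = build_load_policy(nodes)
assignment = {dev: target for dev in devices}  # simplest policy: one node

assert scores["edge-A"] > 0.7        # edge-A is overloaded and excluded
assert target == "edge-B"            # least-loaded node receives the tasks
```

Here the overloaded node "edge-A" is skipped and the work shifts to the lightly loaded "edge-B", mirroring the task-transfer example in the text.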
And finally, transmitting decision strategy data to each terminal device in the terminal device set through the node load strategy and the edge calculation node set. This process requires consideration of the efficiency and reliability of data transmission, ensuring that policy data can be quickly and accurately transmitted to all relevant devices. For example, by optimizing network transmission protocols and data encryption techniques, it is ensured that data is not lost or tampered with during transmission. In a specific smart city traffic management system, by the method, efficient control and management of traffic lights, vehicle detection sensors and air quality monitoring devices can be achieved.
For example, in a busy urban intersection, dynamic adjustment of traffic lights can be realized through generation and transmission of decision strategy data, and vehicle passing efficiency is improved. Specifically, firstly, performing data execution node matching, and determining all traffic signal lamp controllers needing to execute an optimization strategy; then, matching all relevant traffic light equipment by executing the node set, and ensuring that the equipment can receive and execute the strategy; then, matching proper edge computing nodes to ensure low delay and efficient processing; the state of each edge computing node is monitored in real time and the load analysis is carried out, so that the nodes are ensured to operate in a controllable load range; and finally, transmitting decision strategy data to all traffic signal lamp equipment by constructing a load strategy, so as to realize dynamic adjustment and optimal control of the signal lamp.
The embodiment of the invention also provides a data processing system based on the lightweight data center, as shown in fig. 2, the data processing system based on the lightweight data center specifically comprises:
The acquisition module 201 is configured to acquire raw data from a plurality of data sources, obtain a heterogeneous data set, and perform data preprocessing on the heterogeneous data set to obtain a processed data set, and specifically includes: collecting original data from a plurality of data sources to obtain a heterogeneous data set, and carrying out data cleaning and missing value filling on the heterogeneous data set to obtain a filled data set; performing data encoding on the filling data set to obtain an encoded data set; performing data type conversion on the coded data set to obtain a converted data set; performing data encoding on the converted data set to obtain an encoded data set; carrying out data connection integration on the coded data set to obtain an integrated data set, which specifically comprises the following steps: performing primary key identification on the coded data set to obtain a primary key set corresponding to the coded data set, wherein each primary key comprises one or more of the following: time stamps, device IDs, and geographic position data; according to the primary key set, performing connection mode matching on the coded data set to obtain a connection mode set, wherein the connection mode set comprises one or more of the following: internal connection, left connection, right connection, full connection, and cross connection; the coded data set is subjected to data connection through the connection mode set and the primary key set to obtain a connection data set; redundant data elimination is carried out on the connection data set, and a data set to be processed is obtained; performing data consistency verification on the data set to be processed, and generating the integrated data set when verification passes; performing multi-modal feature alignment on the integrated data set to obtain the processing data set;
the analysis module 202 is configured to transmit the processed data set to a cloud end, perform feature analysis on the processed data set at the cloud end to obtain feature description data, and perform decision strategy analysis on the feature description data to obtain decision strategy data;
and the matching module 203 is configured to match terminal devices related to the decision policy data to obtain a terminal device set, match an edge computing node set according to the terminal device set, and transmit the decision policy data to each terminal device in the terminal device set through the edge computing node set.
By the cooperative work of the modules, the heterogeneous data sets can be effectively processed and integrated by collecting the original data from a plurality of data sources and performing data preprocessing. This includes the steps of data cleansing, missing value padding, data encoding, data type conversion, and data connection integration. The systematic data preprocessing process not only improves the consistency and quality of the data, but also lays a solid foundation for subsequent data analysis. In terms of data cleaning and missing value filling, advanced algorithms and technologies such as outlier detection, noise removal and data interpolation are adopted, so that the integrity and the accuracy of data are ensured; the data coding and type conversion steps enable data from different sources to be seamlessly integrated by carrying out standardized processing on the data, and a uniform data format is provided for subsequent feature extraction and analysis; and the data connection integration ensures the high-efficiency integration and consistency of the data by the main key identification and the connection mode matching, so that the data of different modes can be processed and analyzed in the same data set. In the process of data transmission to the cloud, the data transmission efficiency and safety are effectively improved by compressing and encrypting the processed data set. Data compression techniques, such as Gzip and LZ4, can significantly reduce the volume of data, reduce bandwidth requirements, and thereby increase the data transmission speed. Meanwhile, the data encryption technology, such as Advanced Encryption Standard (AES) and transport layer security protocol (TLS), ensures the privacy and integrity of the data during transmission, and prevents data leakage and tampering. 
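The primary-key identification and connection-mode matching performed during data connection integration can be sketched as follows; the record fields, composite key, and join helper are illustrative assumptions, not the patented implementation.

```python
# Hypothetical records from two sources sharing a composite primary key
# (timestamp, device ID); field names are invented for illustration.
signals = [
    {"ts": "08:00", "device_id": "sig-01", "light": "green"},
    {"ts": "08:05", "device_id": "sig-01", "light": "red"},
]
speeds = [
    {"ts": "08:00", "device_id": "sig-01", "speed_kmh": 42},
    {"ts": "08:10", "device_id": "sig-01", "speed_kmh": 55},
]

def key(row):
    # Primary-key identification: composite key of timestamp + device ID.
    return (row["ts"], row["device_id"])

def join(left_rows, right_rows, mode="inner"):
    # Inner join keeps only rows whose keys appear in both sources;
    # left join also keeps unmatched left rows, padded with nulls.
    index = {key(r): r for r in right_rows}
    out = []
    for row in left_rows:
        match = index.get(key(row))
        if match is not None:
            out.append({**row, **match})
        elif mode == "left":
            out.append({**row, "speed_kmh": None})
    return out

inner = join(signals, speeds, mode="inner")
left = join(signals, speeds, mode="left")

assert len(inner) == 1 and inner[0]["speed_kmh"] == 42
assert len(left) == 2 and left[1]["speed_kmh"] is None
```

Choosing the connection mode per key set, as described above, amounts to selecting which of these join behaviors applies to each pair of coded data sets before redundancy elimination and consistency verification.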
In this process, the reasonable setting of parameters such as the data compression ratio and the encryption key length further optimizes the efficiency and security of data transmission, ensuring high efficiency and reliability in large-scale data transmission. The powerful computing resources of the cloud are used to perform deep feature extraction and multi-modal feature alignment on the processed data set, and deep learning models such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) extract high-quality feature representations from multi-modal data such as images and text. In the image feature extraction process, parameters such as the number of convolution layers, the filter size, and the pooling strategy are tuned to ensure accurate extraction of image features; in the text feature extraction, the reasonable configuration of parameters such as the number of hidden layers, the number of hidden units, and the time step improves the representation of time-series data. The multi-modal feature alignment effectively fuses the features of different modalities through a position attention mechanism and a matching attention mechanism, ensuring high consistency and expressiveness of the feature description data. The result of this step is feature description data that accurately reflects the characteristics of the original data, providing a reliable basis for subsequent decision strategy analysis.
Based on the feature description data, high-quality decision strategy data is generated through training and reasoning of a deep learning model. After the decision strategy is generated, the decision strategy is further optimized, so that the decision strategy has high adaptability and high efficiency in a dynamic environment. The optimized decision strategy can better cope with various changes and challenges in practical application, and ensures effective implementation in complex application scenes.
Finally, terminal equipment sets are obtained by matching terminal equipment related to the decision strategy data, and edge computing nodes are matched according to the terminal equipment, so that the decision strategy data can be transmitted to each terminal equipment with high efficiency. In the process of terminal equipment matching, the factors such as equipment type, position, communication capacity, current state and the like are used for carrying out accurate matching, so that an optimized terminal equipment set is generated. And the edge computing node matching ensures reasonable scheduling and load balancing of each node through state matching and load analysis, and further optimizes the resource utilization rate and the overall performance of the system. And through the node load strategy, the decision strategy data is efficiently transmitted to the terminal equipment, so that the accurate execution of the strategy in the actual environment is ensured.
The above embodiments are only for illustrating the technical aspects of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the embodiments, it should be understood by those of ordinary skill in the art that: modifications and equivalents may be made to the specific embodiments of the invention without departing from the spirit and scope of the invention, which is intended to be covered by the scope of the claims.

Claims (6)

1. The data processing method based on the lightweight data center is characterized by comprising the following steps:
Collecting original data from a plurality of data sources to obtain a heterogeneous data set, and carrying out data preprocessing on the heterogeneous data set to obtain a processed data set, wherein the method specifically comprises the following steps of: collecting original data from a plurality of data sources to obtain a heterogeneous data set, and carrying out data cleaning and missing value filling on the heterogeneous data set to obtain a filled data set; performing data encoding on the filling data set to obtain an encoded data set; performing data type conversion on the coded data set to obtain a converted data set; performing data encoding on the converted data set to obtain an encoded data set; carrying out data connection integration on the coded data set to obtain an integrated data set, which specifically comprises the following steps: performing primary key identification on the coded data set to obtain a primary key set corresponding to the coded data set, wherein each primary key comprises one or more of the following: time stamps, device IDs, and geographic position data; according to the primary key set, performing connection mode matching on the coded data set to obtain a connection mode set, wherein the connection mode set comprises one or more of the following: internal connection, left connection, right connection, full connection, and cross connection; the coded data set is subjected to data connection through the connection mode set and the primary key set to obtain a connection data set; redundant data elimination is carried out on the connection data set, and a data set to be processed is obtained; performing data consistency verification on the data set to be processed, and generating the integrated data set when verification passes; performing multi-modal feature alignment on the integrated data set to obtain the processing data set;
Transmitting the processing data set to a cloud end, carrying out feature analysis on the processing data set at the cloud end to obtain feature description data, and carrying out decision strategy analysis on the feature description data to obtain decision strategy data;
And matching terminal equipment related to the decision strategy data to obtain a terminal equipment set, matching an edge calculation node set according to the terminal equipment set, and transmitting the decision strategy data to each terminal equipment in the terminal equipment set through the edge calculation node set.
2. The method for processing data based on a lightweight data center according to claim 1, wherein the step of data-encoding the converted data set to obtain an encoded data set includes:
Extracting variable data from the conversion data set to obtain variable data and non-variable data in the conversion data set;
performing numerical coding and unit conversion on the variable data to obtain first coded data;
and carrying out hash coding on the non-variable data to obtain second coded data, and combining the first coded data and the second coded data into a coded data set.
3. The data processing method based on the lightweight data center as claimed in claim 1, wherein the steps of transmitting the processed data set to a cloud end, performing feature analysis on the processed data set at the cloud end to obtain feature description data, and performing decision strategy analysis on the feature description data to obtain decision strategy data include:
performing compression ratio matching on the processed data set to obtain a target compression ratio;
Carrying out data compression on the processed data set according to the target compression ratio to obtain a compressed data set;
carrying out data encryption on the compressed data set through a symmetric encryption algorithm to obtain an encrypted data set, and transmitting the encrypted data set to the cloud;
and carrying out feature analysis on the processing data set at the cloud to obtain feature description data, and carrying out decision strategy analysis on the feature description data to obtain decision strategy data.
4. The data processing method based on the lightweight data center as claimed in claim 1, wherein the step of performing feature analysis on the processed data set at the cloud to obtain feature description data, and performing decision strategy analysis on the feature description data to obtain decision strategy data includes:
At the cloud end, carrying out data division on the processing data set to obtain an image data set and a text data set;
Inputting the image data set into a convolutional neural network for image feature extraction to obtain an image feature set;
Inputting the text data set into a recurrent neural network for text feature extraction to obtain a text feature set;
inputting the image feature set and the text feature set into an attention mechanism for feature weight matching to obtain a first weight set of the image feature set and a second weight set of the text feature set;
Carrying out multi-modal feature fusion on the image feature set and the text feature set through the first weight set and the second weight set to obtain multi-modal fusion features;
and generating the feature description data according to the multi-mode fusion features, and carrying out decision strategy analysis on the feature description data to obtain decision strategy data.
5. The data processing method based on the lightweight data center as claimed in claim 4, wherein the steps of matching terminal devices related to the decision policy data to obtain a terminal device set, matching an edge calculation node set according to the terminal device set, and transmitting the decision policy data to each terminal device in the terminal device set through the edge calculation node set, include:
performing data execution node matching on the decision strategy data to obtain an execution node set;
Matching terminal equipment related to the decision strategy data through the executing node set to obtain a terminal equipment set;
Matching an edge computing node set according to the terminal equipment set, and performing node state matching on each edge computing node in the edge computing node set to obtain the current node state of each edge computing node;
according to the current node state of each edge computing node, load analysis is carried out on each edge computing node to obtain load data of each edge computing node;
And constructing a node load strategy according to the load data of each edge computing node, and transmitting the decision strategy data to each terminal device in the terminal device set through the node load strategy and the edge computing node set.
6. A data processing system based on the lightweight data center, for performing the data processing method based on the lightweight data center as claimed in any one of claims 1 to 5, comprising:
The acquisition module is used for acquiring original data from a plurality of data sources to obtain a heterogeneous data set, and carrying out data preprocessing on the heterogeneous data set to obtain a processed data set, and specifically comprises the following steps: collecting original data from a plurality of data sources to obtain a heterogeneous data set, and carrying out data cleaning and missing value filling on the heterogeneous data set to obtain a filled data set; performing data encoding on the filling data set to obtain an encoded data set; performing data type conversion on the coded data set to obtain a converted data set; performing data encoding on the converted data set to obtain an encoded data set; carrying out data connection integration on the coded data set to obtain an integrated data set, which specifically comprises the following steps: performing primary key identification on the coded data set to obtain a primary key set corresponding to the coded data set, wherein each primary key comprises one or more of the following: time stamps, device IDs, and geographic position data; according to the primary key set, performing connection mode matching on the coded data set to obtain a connection mode set, wherein the connection mode set comprises one or more of the following: internal connection, left connection, right connection, full connection, and cross connection; the coded data set is subjected to data connection through the connection mode set and the primary key set to obtain a connection data set; redundant data elimination is carried out on the connection data set, and a data set to be processed is obtained; performing data consistency verification on the data set to be processed, and generating the integrated data set when verification passes; performing multi-modal feature alignment on the integrated data set to obtain the processing data set;
The analysis module is used for transmitting the processing data set to the cloud, performing feature analysis on the processing data set in the cloud to obtain feature description data, and performing decision strategy analysis on the feature description data to obtain decision strategy data;
The matching module is used for matching the terminal devices related to the decision strategy data to obtain a terminal device set, matching an edge computing node set according to the terminal device set, and transmitting the decision strategy data to each terminal device in the terminal device set through the edge computing node set.
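The matching module's fan-out through edge nodes can be sketched as below. The device registry, the device-to-edge-node mapping, and the grouping rule (deliver once per edge node, then fan out) are assumptions for illustration and are not details recited by the claim.

```python
# Hypothetical distribution sketch; registry contents are invented.
from collections import defaultdict

DEVICE_EDGE_NODE = {"cam-1": "edge-A", "cam-2": "edge-A", "sensor-9": "edge-B"}

def match_terminal_devices(strategy, registry):
    """Terminal devices related to the decision strategy data
    (here, simply every device with a known edge node)."""
    return sorted(registry)

def plan_distribution(devices, registry):
    """Group the terminal device set under its edge computing node so the
    decision strategy data is delivered once per node, then fanned out."""
    plan = defaultdict(list)
    for device in devices:
        plan[registry[device]].append(device)
    return dict(plan)
```

With the registry above, the plan sends one copy of the strategy to `edge-A` (serving two cameras) and one to `edge-B`, rather than one transmission per device.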
CN202410776423.6A 2024-06-17 2024-06-17 Data processing method and system based on lightweight data center Pending CN118349363A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410776423.6A CN118349363A (en) 2024-06-17 2024-06-17 Data processing method and system based on lightweight data center

Publications (1)

Publication Number Publication Date
CN118349363A 2024-07-16

Family

ID=91814298

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410776423.6A Pending CN118349363A (en) 2024-06-17 2024-06-17 Data processing method and system based on lightweight data center

Country Status (1)

Country Link
CN (1) CN118349363A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118736847A (en) * 2024-09-04 2024-10-01 四川华体照明科技股份有限公司 Edge computing gateway data processing method and gateway

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110677300A (en) * 2019-10-09 2020-01-10 国家电网有限公司 Electric power safety management video intelligent shunting device and method based on mobile edge calculation
CN116739389A (en) * 2023-08-14 2023-09-12 广东创能科技股份有限公司 Smart city management method and system based on cloud computing
CN117149416A (en) * 2023-08-22 2023-12-01 云南大学 Mobility unloading method based on graph neural network
CN117240887A * 2023-10-13 2023-12-15 山东平安电气集团有限公司 Smart IoT energy management platform system
CN117574313A (en) * 2023-11-27 2024-02-20 中国工商银行股份有限公司 Data processing method, device and storage medium based on multi-mode customer feedback
CN118012850A (en) * 2024-04-08 2024-05-10 北京市农林科学院智能装备技术研究中心 Intelligent irrigation multisource information-oriented database construction system, method and equipment
CN118312861A (en) * 2024-04-25 2024-07-09 北京谷器数据科技有限公司 File export method and system based on AI cloud computing

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LU Xinyu; XIAO Bing; HUANG Minsheng; HE Yan; LI Xinran; YIN Chao; LENG Pei'en: "Water quality and mosquito larva breeding in the tailwater wetland pond of the Shanghai East District Water Purification Plant", Journal of East China Normal University (Natural Science), no. 06, 25 November 2018 (2018-11-25) *
GAO Jianping: "Design and Implementation of Data Analysis for a University Campus Card System", China Master's Theses Full-text Database, 15 July 2020 (2020-07-15), pages 4 - 5 *

Similar Documents

Publication Publication Date Title
US20210389293A1 (en) Methods and Systems for Water Area Pollution Intelligent Monitoring and Analysis
Yang et al. Machine-learning-enabled cooperative perception for connected autonomous vehicles: Challenges and opportunities
CN112435462B (en) Method, system, electronic device and storage medium for short-time traffic flow prediction
KR102169754B1 (en) Method and system for merging environment sensor data with communication data and usage of the system
CN112804188B (en) Scalable vision computing system
CN204156985U High-definition camera device with a face recognition and comparison function, and monitoring system
CN112114533A (en) Internet of things data processing method and device, computer equipment and storage medium
CN204155449U License plate recognition and comparison high-definition camera and monitoring system
CN118349363A (en) Data processing method and system based on lightweight data center
CN103287359A Energy consumption detection method for a battery electric vehicle (BEV)
CN112688822A (en) Edge computing fault or security threat monitoring system and method based on multi-point cooperation
CN117596755B (en) Intelligent control method and system for street lamp of Internet of things
CN112804348A Method for a cloud monitoring center to judge the repeatability of data reported by edge computing nodes
CN113160604A (en) Bus management method and system based on artificial intelligence
CN118075427A (en) Intelligent monitoring management method and system
CN117312465A (en) Big data classification and clustering method based on ensemble learning
CN114363563B (en) Distribution network monitoring system and method based on 5G ultra-high definition video monitoring
CN117132002A (en) Multi-mode space-time track prediction method, device, equipment and medium
CN214338041U (en) Intelligent city monitoring system based on 5G Internet of things
CN117201501B (en) Intelligent engineering sharing management system and operation method
CN110471988B (en) Three-section five-layer artificial intelligence system based on modularization
CN115567563B (en) Comprehensive transportation hub monitoring and early warning system based on end edge cloud and control method thereof
CN115426363B (en) Data acquisition method and terminal of intelligent plate processing factory
CN114374710B (en) Distribution network monitoring method and system for 5G ultra-high definition video and Internet of things monitoring
CN117112039A (en) Transmission optimization system and operation method of data center

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination