
CN113505520A - Method, device and system for supporting heterogeneous federated learning - Google Patents

Method, device and system for supporting heterogeneous federated learning

Info

Publication number
CN113505520A
CN113505520A CN202110536547.3A
Authority
CN
China
Prior art keywords
federal learning
initiator
participant
learning
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110536547.3A
Other languages
Chinese (zh)
Inventor
张德
陈行
李安杰
彭南博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Technology Holding Co Ltd
Original Assignee
Jingdong Technology Holding Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Technology Holding Co Ltd filed Critical Jingdong Technology Holding Co Ltd
Priority to CN202110536547.3A priority Critical patent/CN113505520A/en
Publication of CN113505520A publication Critical patent/CN113505520A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation
    • G06F30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Embodiments of the present disclosure disclose methods, apparatus, and systems for supporting heterogeneous federated learning. One embodiment of the method comprises: receiving a federal learning request sent by an initiator; generating at least two federal learning subtasks according to the federal learning request; sending the at least two federal learning subtasks to the initiator and at least one participant, respectively, according to the state of the at least one participant corresponding to the federal learning request; and, in response to receiving training feedback data sent by the initiator and the at least one participant, generating a training result of the federal learning task indicated by the federal learning request based on the training feedback data, wherein the training feedback data is generated based on intermediate result data transmitted between the initiator and the at least one participant through an adaptation interface. This implementation achieves data interfacing between heterogeneous federated learning participants.

Description

Method, device and system for supporting heterogeneous federated learning
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular, to a method and apparatus for supporting heterogeneous federated learning and a system for heterogeneous federated learning.
Background
With the development of machine learning technology, federal learning (Federated Learning) is being applied more and more widely, because it can effectively help multiple organizations use data and build machine learning models jointly while meeting the requirements of user privacy protection, data security and government regulation.
However, in the prior art, all participants must use the same set of federal learning framework to cooperate in federal learning. In this scenario, if the required federal learning framework differs from a participant's local architecture, it brings additional deployment cost and low resource utilization.
Disclosure of Invention
Embodiments of the present disclosure propose methods, apparatuses, devices, media and systems for supporting heterogeneous federated learning.
In a first aspect, an embodiment of the present disclosure provides a method for supporting heterogeneous federated learning, the method including: receiving a federal learning request sent by an initiator; generating at least two federal learning subtasks according to the federal learning request; respectively sending at least two federal learning subtasks to an initiator and at least one participant according to the state of the at least one participant corresponding to the federal learning request; and in response to receiving training feedback data sent by the initiator and the at least one participant, generating a training result of the federal learning task indicated by the federal learning request based on the training feedback data, wherein the training feedback data is generated based on intermediate result data transmitted between the initiator and the at least one participant through the adaptive interface.
In some embodiments, the federal learning request further includes generic interface definition information, where the generic interface definition information is used to indicate at least one of the following: communication message attribute, communication identification and a data structure to be transmitted; and the adaptation interface is determined based on the generic interface definition information.
In some embodiments, the federal learning request further includes task allocation granularity information; and the generating at least two federal learning subtasks according to the federal learning request comprises: performing configuration analysis on the task indicated by the federal learning request to generate a pipeline model; and generating federal learning subtasks consistent with the task allocation granularity information according to the generated pipeline model.
In some embodiments, the sending the at least two federal learning subtasks to the initiator and the at least one participant respectively according to the state of the at least one participant corresponding to the federal learning request includes: in response to determining that the states of the at least one participant corresponding to the federated learning request are all trainable states, sending the at least two federated learning subtasks to the initiator and the at least one participant, respectively.
In some embodiments, the generating the training result of the federal learning task indicated by the federal learning request based on the training feedback data includes: updating a target state table according to the training feedback data, wherein the target state table is used for recording training process data related to the federal learning task; and generating a new federal learning subtask according to the target state table until the federal learning task is completed.
In some embodiments, the method further comprises: in response to determining that the federal learning task failed to train, determining a training starting point according to the target state table; and re-executing the federal learning task from the training starting point.
In some embodiments, the receiving the federal learning request sent by the initiator includes: and receiving a federal learning request sent by the initiator in response to the determination that the initiator is authenticated, wherein the initiator belongs to the registered user.
In a second aspect, an embodiment of the present disclosure provides an apparatus for supporting heterogeneous federated learning, the apparatus including: a receiving unit configured to receive a federal learning request sent by an initiator; the first generation unit is configured to generate at least two federal learning subtasks according to the federal learning request; a distribution unit configured to send at least two federal learning subtasks to the initiator and at least one participant, respectively, according to a state of the at least one participant corresponding to the federal learning request; and a second generating unit configured to generate a training result of the federal learning task indicated by the federal learning request based on the training feedback data in response to receiving the training feedback data sent by the initiator and the at least one participant, wherein the training feedback data is generated based on intermediate result data transmitted between the initiator and the at least one participant through the adaptive interface.
In some embodiments, the federal learning request further includes generic interface definition information, where the generic interface definition information is used to indicate at least one of the following: communication message attribute, communication identification and a data structure to be transmitted; and the adaptation interface is determined based on the generic interface definition information.
In some embodiments, the federal learning request further includes task allocation granularity information; and the first generating unit is further configured to: configuring and analyzing a task indicated by the federal learning request to generate a pipeline model; and generating a federal learning subtask consistent with the task distribution granularity information according to the generated pipeline model.
In some embodiments, the above-mentioned distribution unit is further configured to: in response to determining that the states of the at least one participant corresponding to the federated learning request are all trainable states, send the at least two federated learning subtasks to the initiator and the at least one participant, respectively.
In some embodiments, the second generating unit is further configured to: updating a target state table according to the training feedback data, wherein the target state table is used for recording training process data related to the federal learning task; and generating a new federal learning subtask according to the target state table until the federal learning task is completed.
In some embodiments, the apparatus is further configured to: in response to determining that the federal learning task failed to train, determine a training starting point according to the target state table; and re-execute the federal learning task from the training starting point.
In some embodiments, the receiving unit is further configured to: and receiving a federal learning request sent by the initiator in response to the determination that the initiator is authenticated, wherein the initiator belongs to the registered user.
In a third aspect, an embodiment of the present application provides a system for heterogeneous federated learning, where the system includes: the initiator is configured to send a federal learning request to the server; training based on the received federal learning subtask and local data to generate first intermediate result data; sending the first intermediate result data to a participant corresponding to the initiator through an adaptive interface; a participant configured to train based on the received federated learning subtask and local data, generating second intermediate result data; sending the second intermediate result data to the initiator through the adaptive interface; a server configured to perform a method as described in any implementation manner of the first aspect.
In some embodiments, the server is further configured to support a role of a coordinator in a preset federal learning algorithm.
In a fourth aspect, an embodiment of the present application provides a server, where the server includes: one or more processors; a storage device having one or more programs stored thereon; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method as described in any implementation of the first aspect.
In a fifth aspect, the present application provides a computer-readable medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method as described in any implementation manner of the first aspect.
According to the method, the apparatus, the server and the medium for supporting heterogeneous federated learning provided by the embodiments of the present disclosure, the received federal learning request is converted into federal learning subtasks distributed to each participant, and the communication data and communication flow are standardized through the adaptation interface. This provides a technical basis for data communication in heterogeneous federated learning in which the participants adopt different federal learning architectures, so that each participant can quickly access cross-platform cooperation by converting between its local communication data format and the standard data format, realizing data interfacing between heterogeneous federated learning participants.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for supporting heterogeneous federated learning in accordance with the present disclosure;
FIG. 3 is a schematic diagram of one application scenario of a method for supporting heterogeneous federated learning, in accordance with an embodiment of the present disclosure;
FIG. 4 is a flow diagram of yet another embodiment of a method for supporting heterogeneous federated learning in accordance with the present disclosure;
FIG. 5 is a schematic structural diagram illustrating one embodiment of an apparatus for supporting heterogeneous federated learning in accordance with the present disclosure;
FIG. 6 is a timing diagram of interactions between various devices in one embodiment of a system for heterogeneous federated learning according to the present application.
FIG. 7 is a schematic block diagram of an electronic device suitable for use in implementing embodiments of the present application.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary architecture 100 to which the disclosed method for supporting heterogeneous federated learning or apparatus for supporting heterogeneous federated learning may be applied.
As shown in fig. 1, system architecture 100 may include a federal learned initiator 101, a federal learned participant 102, a network 103, and a server 104. Network 103 is used to provide a medium for communication links between the federal learned sponsor 101, federal learned participant 102, and server 104. Network 103 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The initiator 101 of federal learning and the participants 102 of federal learning interact with each other, and with the server 104, over the network 103 to receive or send messages and the like. Various communication client applications, such as communication applications for supporting federal learning, may be installed on the initiator 101 and the participants 102 of federal learning.
The initiator of federal learning 101 and the participants of federal learning 102 may be hardware or software. When the initiator of federal learning 101 and the participant of federal learning 102 are hardware, they can be various electronic devices having display screens and supporting federal learning, including but not limited to smart phones, tablets, laptop portable computers, desktop computers, cloud servers, and the like. When the initiator 101 of federal learning and the participant 102 of federal learning are software, they can be installed in the electronic devices listed above. It may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.
Server 104 may be a server that provides various services, such as a backend server that provides support for machine learning model training class applications on both the initiator 101 and participants 102 of federal learning. The background server can analyze and process the received federal learning request, and send the generated federal learning subtask to the federal learning initiator 101 and the federal learning participant 102, and can also generate a training result of the federal learning task according to data fed back by the federal learning initiator 101 and the federal learning participant 102.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be noted that the method for supporting heterogeneous federated learning provided by the embodiments of the present disclosure is generally performed by the server 104, and accordingly, the apparatus for supporting heterogeneous federated learning is generally disposed in the server 104.
It should be understood that the number of federal learned initiators, federal learned participants, networks, and servers in fig. 1 are merely illustrative. There may be any number of federal learned initiators, federal learned participants, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method for supporting heterogeneous federated learning in accordance with the present disclosure is shown. The method for supporting heterogeneous federated learning includes the following steps:
step 201, receiving a federal learning request sent by an initiator.
In this embodiment, an executing entity (such as the server 104 shown in fig. 1) of the method for supporting heterogeneous federated learning may receive the federal learning request sent by the initiator through a wired connection or a wireless connection. The federal learning request can be used for indicating the initiation of a federal learning task. The initiator is typically the party that initiates federal learning and, typically, the party holding the label data. The above federal learning request may include, for example, general information of the training data (e.g., names of the data tables storing the training data) for subsequently generating the data configurations in the at least two federal learning subtasks.
In some optional implementation manners of this embodiment, the federal learning request may further include common interface definition information. The above-mentioned generic interface definition information may be used to indicate at least one of: communication message attributes, communication identifiers and data structures to be transmitted.
In these implementations, the generic interface definition information may include definitions for various federal learning procedures that involve data that needs to be exchanged. Wherein the generic interface definition information may be used to indicate at least one of: communication message attributes, communication identifiers and data structures to be transmitted.
As an example, the above-described common interface definition information may be used to indicate a message attribute in gRPC (Google Remote Procedure Call). The common interface definition information may also be used to indicate how the communication variables at each step of federal learning are named as communication identifiers. For example, the concatenation "/data type/current step/task id/variable name" is used as the identifier by which the recipient recognizes a variable. The common interface definition information can also be used to indicate the data format to be transmitted. For example, [task id, [homomorphically encrypted value, exponent for the floating-point-to-integer mapping], ...] is used as the format for data transfer between the initiator and a participant.
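For illustration only, the following Python sketch shows how a party might assemble such a standardized communication identifier and transfer payload under a generic interface definition of this kind; all identifiers (make_identifier, Payload, the field names) are assumptions introduced for the example and are not part of this disclosure.

```python
# Illustrative sketch: building a standardized communication identifier and
# transfer payload under a generic interface definition like the one above.
# All names (make_identifier, Payload, field names) are assumptions.
from dataclasses import dataclass
from typing import List, Tuple


def make_identifier(data_type: str, current_step: str, task_id: str, variable_name: str) -> str:
    """Concatenate '/data type/current step/task id/variable name' so the
    recipient can recognize which variable a message carries."""
    return f"/{data_type}/{current_step}/{task_id}/{variable_name}"


@dataclass
class Payload:
    """One transfer unit: the task id plus a list of
    (homomorphically encrypted value, exponent of the float-to-integer mapping) pairs."""
    task_id: str
    encrypted_values: List[Tuple[int, int]]


# Example: an encrypted gradient exchanged during round 3 of task "t-001".
identifier = make_identifier("gradient", "train_round_3", "t-001", "encrypted_grad")
payload = Payload(task_id="t-001", encrypted_values=[(982134871, -13), (120398441, -13)])
```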
Based on the optional implementation mode, the scheme provides that the initiator realizes the customization of the standardized communication interface and the communication flow through the general interface definition information included by the federal learning request, improves the richness of the supported communication interface, and is beneficial to reducing the resource overhead caused by the unmatched interface between the federal learning participants.
In some optional implementations of this embodiment, the federal learning request may further include task allocation granularity information. The task allocation granularity information may be used to indicate granularity of task allocation, for example, dividing subtasks by the granularity of training rounds, variable update times, and the like.
In some optional implementations of this embodiment, the execution principal may receive a federal learning request sent by the initiator in response to determining that the initiator is authenticated. Wherein the initiator generally belongs to a registered user.
Based on the optional implementation mode, the scheme can support functions such as registration and authentication of all the participants in federal learning. Therefore, the security of the system is ensured while supporting federal learning among multiple participants.
And 202, generating at least two federal learning subtasks according to the federal learning request.
In this embodiment, the execution subject may generate at least two federal learning subtasks in various ways according to the federal learning request received in step 201. As an example, the execution subject may split the federal learning task indicated by the federal learning request into at least two federal learning subtasks according to a preset rule. The preset rule can be, for example, splitting the task by the training rounds indicated by the federal learning request and generating federal learning subtasks consistent with the number of participants.
In some optional implementation manners of this embodiment, based on the task allocation granularity information included in the federal learning request, the executing entity may further generate at least two federal learning subtasks according to the following steps:
firstly, a task indicated by the federated learning request is configured and analyzed, and a pipeline model is generated.
In these implementations, the executing entity may perform configuration analysis on the task indicated by the federal learning request received in step 201 to generate a pipeline model in various ways. By way of example, the execution agent may use various existing federal learning modeling management tools to perform configuration analysis on the tasks to generate the pipeline model.
And secondly, generating a federal learning subtask consistent with the task distribution granularity information according to the generated pipeline model.
In these implementations, according to the pipeline model generated in the first step, the execution subject may split or integrate the flows indicated in the pipeline model, so as to generate a federal learning subtask consistent with the task allocation granularity information.
Based on the optional implementation mode, the scheme can change the scheduling granularity of training tasks by generating federal learning subtasks consistent with the task allocation granularity information. Thus, control, monitoring or statistics of the internal training process of the federal learning algorithm can be supported at a finer granularity than the existing scheduling by training rounds.
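As a rough illustration of the configuration analysis and granularity-based splitting described above, the following Python sketch parses a hypothetical request into a simple pipeline model and emits one subtask per training round; the stage names and fields are assumptions, not the disclosure's actual configuration format.

```python
# Illustrative sketch (not the disclosure's implementation): configuration
# analysis of a request into a pipeline model, then splitting into subtasks
# at the requested allocation granularity.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class PipelineStage:
    name: str          # e.g. "data_alignment", "train_round", "evaluate"
    rounds: int = 1    # how many repetitions this stage contains


def build_pipeline(request: Dict) -> List[PipelineStage]:
    """Configuration analysis: map the task described in the request to an
    ordered pipeline of stages."""
    return [PipelineStage("data_alignment"),
            PipelineStage("train_round", rounds=request.get("num_rounds", 10)),
            PipelineStage("evaluate")]


def split_subtasks(pipeline: List[PipelineStage], granularity: str) -> List[Dict]:
    """Generate subtasks consistent with the task allocation granularity,
    e.g. one subtask per training round."""
    subtasks: List[Dict] = []
    for stage in pipeline:
        if granularity == "per_round" and stage.name == "train_round":
            subtasks += [{"stage": stage.name, "round": r} for r in range(stage.rounds)]
        else:
            subtasks.append({"stage": stage.name})
    return subtasks


subtasks = split_subtasks(build_pipeline({"num_rounds": 3}), granularity="per_round")
```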
And step 203, respectively sending at least two federal learning subtasks to the initiator and at least one participant according to the state of at least one participant corresponding to the federal learning request.
In this embodiment, the executing entity may send the at least two federal learning subtasks generated in step 202 to the initiator and the at least one participant, respectively, in various manners according to the state of the at least one participant corresponding to the federal learning request. As an example, among the at least two federal learning subtasks, the executing entity may send only those subtasks whose corresponding participant's state meets a preset condition to the corresponding initiator and participant. As yet another example, the executing entity may send the at least two federal learning subtasks to the initiator and to the at least one participant that is not in a down state, respectively.
In some optional implementations of the embodiment, in response to determining that the states of the at least one participant corresponding to the federal learning request are all trainable states, the executing entity may send the at least two federal learning subtasks to the initiator and the at least one participant, respectively.
Based on the optional implementation mode, whether each participant is in a trainable state is verified before the federal learning subtasks are issued, so that the issued federal learning subtasks can be executed promptly, improving the success rate of training.
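The dispatch condition described above can be pictured with the following minimal Python sketch, which sends the subtasks only when every participant reports a trainable state; the function and state names are illustrative assumptions.

```python
# Minimal sketch of the dispatch condition: subtasks are sent to the initiator
# and every participant only once all participants report a trainable state.
# Party names, state values and the send callback are illustrative assumptions.
from typing import Callable, Dict, List

TRAINABLE = "trainable"


def dispatch(subtasks: List[dict], initiator: str, participants: List[str],
             states: Dict[str, str], send: Callable[[str, dict], None]) -> bool:
    """Send the subtasks only if every participant is currently trainable;
    otherwise defer dispatch and report False."""
    if any(states.get(p) != TRAINABLE for p in participants):
        return False                          # at least one party cannot train yet
    for party in [initiator, *participants]:
        for subtask in subtasks:
            send(party, subtask)              # e.g. an RPC or message-queue call
    return True
```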
And step 204, in response to receiving the training feedback data sent by the initiator and the at least one participant, generating a training result of the federal learning task indicated by the federal learning request based on the training feedback data.
In this embodiment, in response to receiving the training feedback data sent by the initiator and the at least one participant, the executive body may generate the training result of the federal learning task indicated by the federal learning request in various ways based on the training feedback data. Wherein the training feedback data is generally generated based on intermediate result data transmitted between the initiator and the at least one participant via an adaptation interface.
It should be noted that, in the federal learning process, each participant trains by using local data, and then generates intermediate result data. The intermediate result data may include various data that does not expose the original data and can reflect the training situation. The intermediate result data can be transmitted between the participants through the adaptive interface so as to update the parameters of the local models of the participants. Each participant may generate training feedback data based on the training condition of the local data and the received intermediate result data of the other participants. The training feedback data may be used to indicate a training state, a training index, and the like. Each participant may also send the training feedback data to the executive.
Optionally, the intermediate result data may also include data obtained by private set intersection (PSI) in the data alignment process, indices used for evaluating the model, and the like.
In this embodiment, in response to receiving the training feedback data sent by the initiator and the at least one participant, as an example, the executive body may determine whether the federal learning task is completed according to a training state, a training index, and the like indicated by the training feedback data, so as to generate a training result of the federal learning task indicated by the federal learning request.
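A simplified Python sketch of this decision step is given below; the feedback fields (status, round) and the completion rule are assumptions chosen for the example rather than the disclosure's exact logic.

```python
# Simplified sketch of deriving a training result from the parties' feedback.
# The feedback fields ("status", "round") and the stopping rule are assumptions.
from typing import Dict, List


def training_result(feedback: List[Dict], target_rounds: int) -> str:
    """Decide whether the federal learning task indicated by the request is
    finished, based on the training state and round each party reported."""
    if any(item.get("status") == "failed" for item in feedback):
        return "training_failed"
    finished_round = min((item.get("round", 0) for item in feedback), default=0)
    if finished_round >= target_rounds:
        return "training_completed"
    return f"continue_from_round_{finished_round + 1}"
```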
In some optional implementations of this embodiment, based on common interface definition information that may be included in the federal learning request, the adaptive interface may be determined based on the common interface definition information. Therefore, the initiator realizes the self-definition of the standardized communication interface and the communication flow between the federal learning participants through the general interface definition information included in the federal learning request, improves the richness of the supported communication interface, and is beneficial to reducing the resource overhead caused by the unmatched interface between the federal learning participants.
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of a method for supporting heterogeneous federated learning, in accordance with an embodiment of the present disclosure. In the application scenario of fig. 3, a user may use device 301 to send a federated learning request 303 to server 302 to initiate federated learning. Server 302 may generate at least two federated learning subtasks 304 based on federated learning request 303. The server 302 may send the at least two federal learning subtasks 304 to the initiator (e.g., the device 301) and the at least one participant, respectively, according to the status of the at least one participant (e.g., the devices 306, 307) corresponding to the federal learning request. The initiator and the at least one participant may then perform federal learning training using their respective local data to generate intermediate result data. The initiator and the at least one participant can then transmit the generated intermediate result data through the adaptation interface. The initiator and the at least one participant may also send training feedback data generated based on the intermediate result data to the server 302. The server 302 may generate training results (e.g., proceed to the 2nd iteration, training completed, etc.) for the federated learning task indicated by the federated learning request based on the received training feedback data.
At present, in one of the prior arts, the same set of federal learning framework is usually required for federal learning cooperation among all participants; in this scenario, if the required federal learning framework differs from a participant's local architecture, it brings additional deployment cost and low resource utilization. In the method provided by the embodiment of the disclosure, the received federal learning request is converted into federal learning subtasks distributed to each participant, and the communication data and communication flow are standardized through the adaptation interface. This provides a technical basis for data communication in heterogeneous federal learning in which the participants adopt different federal learning architectures, so that each participant can quickly access cross-platform cooperation by converting between its local communication data format and the standard data format, realizing data interfacing between heterogeneous federated learning participants.
With further reference to fig. 4, a flow 400 of yet another embodiment of a method for supporting heterogeneous federated learning is illustrated. The flow 400 of the method for supporting heterogeneous federated learning includes the steps of:
step 401, receiving a federal learning request sent by an initiator.
In some optional implementations of this embodiment, the execution principal may receive a federal learning request sent by the initiator in response to determining that the initiator is authenticated. Wherein the initiator generally belongs to a registered user.
And step 402, generating at least two federal learning subtasks according to the federal learning request.
And step 403, respectively sending at least two federal learning subtasks to the initiator and at least one participant according to the state of at least one participant corresponding to the federal learning request.
Step 404, in response to receiving the training feedback data sent by the initiator and the at least one participant, updating a target state table according to the training feedback data; and generating a new federal learning subtask according to the target state table until the federal learning task is completed.
In this embodiment, in response to receiving the training feedback data sent by the initiator and the at least one participant, the executing entity may update the target state table according to the training feedback data. Wherein the training feedback data may be generated based on intermediate result data transmitted between the initiator and the at least one participant through the adaptation interface. The target state table may be used to record training process data associated with the federal learning task. The training process data may include, but is not limited to, at least one of the following: training rounds, training steps, model evaluation metrics, and the like. Then, according to the target state table, the executing entity can continue to schedule the federal learning task and generate new federal learning subtasks until the federal learning task is completed.
In some optional implementations of this embodiment, the executing body may further continue to perform the following steps:
in a first step, in response to determining that the federal learning task failed to train, a training start point is determined according to a target state table.
In these implementations, the executive may first determine whether the federal learning task failed training through various methods. As an example, the executive may determine that the federal learning task failed training in response to determining that the received training feedback data indicates a failure to train. As another example, the executing entity may determine that the federal learning task training failed in response to detecting a failure of the network or a downtime of a participant of the federal learning task.
In response to determining that the federated learning task failed, the executive may determine a training starting point from the goal state table in various ways. As an example, the execution agent may determine the training starting point according to a preset rule (e.g., a latest time point before the abnormal condition indicated by the target state table).
And secondly, re-executing the federal learning task from the training starting point.
In these implementations, the executive may re-execute the federal learning task based on the training starting point determined in the first step.
Based on the optional implementation manner, the scheme can resume training from the training starting point determined according to the target state table, reducing the time cost and resource consumption caused by retraining from scratch.
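The following Python sketch illustrates one possible shape of such a target state table, with an update method fed by training feedback and a method that picks the training starting point after a failure; the schema and method names are assumptions for illustration.

```python
# Illustrative target state table: records per-round progress reported in the
# training feedback and, on failure, picks the latest round completed by all
# parties as the training starting point. The schema is an assumption.
from typing import Dict, List


class TargetStateTable:
    def __init__(self) -> None:
        self.rows: List[Dict] = []   # e.g. {"round": 2, "step": "update", "metric": 0.71, "ok": True}

    def update(self, feedback: Dict) -> None:
        """Append training process data (round, step, evaluation index, status)."""
        self.rows.append(feedback)

    def training_starting_point(self) -> int:
        """Latest round that finished successfully before the failure."""
        ok_rounds = [row["round"] for row in self.rows if row.get("ok")]
        return max(ok_rounds, default=0)


table = TargetStateTable()
table.update({"round": 1, "step": "update", "metric": 0.65, "ok": True})
table.update({"round": 2, "step": "update", "metric": 0.71, "ok": False})
resume_from = table.training_starting_point()   # re-execute the task from round 1
```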
Step 401, step 402, and step 403 are respectively consistent with step 201, step 202, step 203, and their optional implementations in the foregoing embodiments, and the above description on step 201, step 202, step 203, and their optional implementations also applies to step 401, step 402, and step 403, which is not described herein again.
As can be seen from fig. 4, the flow 400 of the method for supporting heterogeneous federated learning in this embodiment highlights the step of updating the target state table according to the training feedback data and the step of generating new federal learning subtasks according to the target state table. Therefore, the scheme described in this embodiment can record various data produced in the federal learning training process and coordinate the scheduling of multiple participants accordingly, realizing functions such as cooperative task execution, abnormality monitoring and troubleshooting of failure causes based on the training steps.
With further reference to fig. 5, as an implementation of the method shown in the above-mentioned figures, the present disclosure provides an embodiment of an apparatus for supporting heterogeneous federated learning, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2 or fig. 4, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 5, the apparatus 500 for supporting heterogeneous federated learning provided by the present embodiment includes a receiving unit 501, a first generating unit 502, a distributing unit 503, and a second generating unit 504. The receiving unit 501 is configured to receive a federal learning request sent by an initiator; a first generating unit 502 configured to generate at least two federal learning subtasks according to the federal learning request; a distribution unit 503 configured to send at least two federal learning subtasks to the initiator and at least one participant, respectively, according to a state of the at least one participant corresponding to the federal learning request; a second generating unit 504, configured to generate, in response to receiving training feedback data sent by the initiator and the at least one participant, a training result of the federal learning task indicated by the federal learning request based on the training feedback data, where the training feedback data is generated based on intermediate result data transmitted between the initiator and the at least one participant through the adaptation interface.
In this embodiment, in the apparatus 500 for supporting heterogeneous federated learning: the specific processing of the receiving unit 501, the first generating unit 502, the distributing unit 503 and the second generating unit 504 and the technical effects thereof can refer to the related descriptions of step 201, step 202, step 203 and step 204 in the corresponding embodiment of fig. 2, which are not described herein again.
In some optional implementation manners of this embodiment, the federal learning request may further include common interface definition information. The above-mentioned generic interface definition information may be used to indicate at least one of: communication message attributes, communication identifiers and data structures to be transmitted. The above-mentioned adaptation interface may be determined based on the common interface definition information.
In some optional implementations of this embodiment, the federal learning request may further include task allocation granularity information. The first generating unit 502 may be further configured to: configuring and analyzing a task indicated by the federal learning request to generate a pipeline model; and generating a federal learning subtask consistent with the task distribution granularity information according to the generated pipeline model.
In some optional implementations of this embodiment, the distributing unit 503 may be further configured to: in response to determining that the states of the at least one participant corresponding to the federated learning request are all trainable states, send the at least two federated learning subtasks to the initiator and the at least one participant, respectively.
In some optional implementations of the present embodiment, the second generating unit 504 may be further configured to: updating a target state table according to the training feedback data, wherein the target state table is used for recording training process data related to the federal learning task; and generating a new federal learning subtask according to the target state table until the federal learning task is completed.
In some optional implementations of this embodiment, the apparatus 500 for supporting heterogeneous federated learning described above may be further configured to: in response to determining that the federal learning task failed to train, determine a training starting point according to the target state table; and re-execute the federal learning task from the training starting point.
In some optional implementations of this embodiment, the receiving unit 501 may be further configured to: and receiving a federal learning request sent by the initiator in response to the determination that the initiator is authenticated, wherein the initiator belongs to the registered user.
In the apparatus provided by the above embodiment of the disclosure, the federal learning request received by the receiving unit 501 is converted by the first generating unit 502 into federal learning subtasks that the distributing unit 503 distributes to each participant, and the communication data and communication flow are standardized through the adaptation interface. This provides a technical basis for data communication in heterogeneous federal learning in which the participants adopt different federal learning architectures, so that each participant can quickly access cross-platform cooperation by converting between its local communication data format and the standard data format, realizing data interfacing between heterogeneous federated learning participants.
With further reference to FIG. 6, a timing sequence 600 of interactions between various devices in one embodiment of a system for heterogeneous federated learning is illustrated. The system for heterogeneous federal learning can include: an initiator (e.g., device 101 shown in fig. 1) configured to send a federated learning request to a server; training based on the received federal learning subtask and local data to generate first intermediate result data; sending the first intermediate result data to a participant corresponding to the initiator through an adaptive interface; a participant (e.g., device 102 shown in fig. 1) configured to train based on the received federal learning subtasks and local data, generating second intermediate result data; sending the second intermediate result data to the initiator through the adaptive interface; a server configured to perform an implementation of the method for supporting heterogeneous federated learning as described in the foregoing embodiments.
As shown in fig. 6, in step 601, the initiator sends a federal learning request to the server.
In this embodiment, the initiator of federal learning may send a federal learning request to a server (e.g., the executing body of the aforementioned method for supporting heterogeneous federal learning). The above federal learning request may be consistent with the corresponding description in step 201 in the foregoing embodiment, and is not described herein again.
In step 602, the server receives a federal learning request sent by the initiator.
In step 603, the server generates at least two federal learning subtasks according to the federal learning request.
In step 604, the server side sends at least two federal learning subtasks to the initiator and at least one participant respectively according to the state of at least one participant corresponding to the federal learning request.
In step 605, training is performed based on the received federal learning subtasks and local data, and the initiator generates first intermediate result data.
In this embodiment, the initiator may be trained using local data and then generate intermediate result data that can be used for transmission between the participants. The intermediate result data may include various data that does not expose the original data and can reflect the training situation.
In step 606, training is performed based on the received federated learning subtasks and local data, and the participants generate second intermediate result data.
In this embodiment, the participants may train with local data and then generate intermediate result data that can be used for transmission between the participants. The intermediate result data may include various data that does not expose the original data and can reflect the training situation.
Steps 605 and 606 may be executed in either order (step 605 first and then step 606, or step 606 first and then step 605), or may be executed in parallel at substantially the same time; the order is not limited herein.
In step 607, the initiator sends the first intermediate result data to the participant corresponding to the initiator through the adaptive interface.
In this embodiment, the initiator may send the first intermediate result data generated in step 605 to the participant corresponding to the initiator through the adaptation interface. The adaptation interface may include various data interfaces capable of supporting data transmission between each participant and the initiator.
In step 608, the participant sends the second intermediate result data to the initiator through the adaptation interface.
In this embodiment, the participant may send the second intermediate result data generated in step 606 to the initiator through the adaptation interface. The adaptation interface may include various data interfaces capable of supporting data transmission between each participant and the initiator.
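A minimal Python sketch of an adaptation interface of this kind is shown below: it converts a local framework's intermediate result into the agreed standard transfer format before sending and back again on receipt. The class, its methods and the JSON-based encoding are assumptions for illustration only.

```python
# Minimal adaptation-interface sketch: convert the local framework's
# intermediate result into the agreed standard transfer format before sending,
# and back again on receipt. Names and the JSON encoding are assumptions.
import json
from typing import Callable, List, Tuple


class AdaptationInterface:
    def __init__(self, task_id: str, send_fn: Callable[[str, str], None]) -> None:
        self.task_id = task_id
        self.send_fn = send_fn        # transport, e.g. a gRPC stub or HTTP client

    def to_standard(self, local_values: List[Tuple[int, int]]) -> str:
        """Local format -> standard transfer format [task id, [value, exponent], ...]."""
        return json.dumps([self.task_id, local_values])

    def from_standard(self, message: str) -> List[Tuple[int, int]]:
        """Standard transfer format -> the local framework's representation."""
        task_id, values = json.loads(message)
        assert task_id == self.task_id
        return [tuple(v) for v in values]

    def send_intermediate(self, peer: str, local_values: List[Tuple[int, int]]) -> None:
        self.send_fn(peer, self.to_standard(local_values))
```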
And step 609, in response to receiving the training feedback data sent by the initiator and the at least one participant, generating a training result of the federal learning task indicated by the federal learning request based on the training feedback data.
Step 602, step 603, step 604, and step 609 are respectively consistent with step 201, step 202, step 203, step 204, and optional implementations thereof in the foregoing embodiment, and the description above for step 201, step 202, step 203, step 204, and optional implementations thereof also applies to step 602, step 603, step 604, and step 609, and is not repeated here.
In some optional implementations of this embodiment, the server may further support functions including initiator and participant registration, authentication, and the like. Optionally, the server may further support functions such as federal learning training scheduling, training index statistics and training state monitoring. Optionally, the server may also take the role of coordinator (Arbiter) in some algorithms (e.g., the logistic regression algorithm).
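As a hedged illustration of the coordinator (Arbiter) duty mentioned above, the following Python sketch shows the kind of aggregation step a coordinator commonly performs in federated logistic regression, decrypting and combining the parties' encrypted gradient fragments; this follows common practice and is not a detail spelled out in this disclosure.

```python
# Hedged sketch of a coordinator (Arbiter) aggregation step, following common
# federated logistic regression practice rather than a detail of this disclosure:
# decrypt each party's encrypted gradient fragment and combine them.
from typing import Callable, List


def arbiter_aggregate(encrypted_fragments: List[int],
                      decrypt: Callable[[int], float]) -> float:
    """Return the combined plaintext gradient used to update the model."""
    return sum(decrypt(fragment) for fragment in encrypted_fragments)
```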
In the system for heterogeneous federated learning provided in the above embodiment of the present application, the server converts the federal learning request sent by the initiator into federal learning subtasks distributed to each participant, and the initiator and the participants train based on local data and the received federal learning subtasks. The communication data and communication flow between the participants and the initiator are standardized through the adaptation interface, which provides a technical basis for data communication between heterogeneous federated learning systems that adopt different federal learning architectures; each participant can quickly access cross-platform cooperation by converting between its local communication data format and the standard data format, realizing data interfacing between heterogeneous federated learning participants. The server generates the training result of the federal learning task indicated by the federal learning request according to the training feedback data obtained by the initiator and the participants from the exchanged intermediate results, providing unified scheduling of heterogeneous federated learning, keeping the technical boundaries of the roles in the system clear, and facilitating error localization when subsequent training abnormalities occur. Moreover, with cloud service technology, an extensible and highly available heterogeneous federated learning system can be provided, improving the compatibility of federated learning platforms.
Referring now to FIG. 7, shown is a schematic diagram of an electronic device (e.g., the server 104 of FIG. 1) 700 suitable for use in implementing embodiments of the present application. The server shown in fig. 7 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in fig. 7, electronic device 700 may include a processing means (e.g., central processing unit, graphics processor, etc.) 701 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage device 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the electronic device 700 are also stored. The processing device 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Generally, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, etc.; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device 700 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 7 may represent one device or may represent multiple devices as desired.
In particular, according to embodiments of the application, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication means 709, or may be installed from the storage means 708, or may be installed from the ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of the embodiments of the present application.
It should be noted that the computer readable medium described in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (Radio Frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately and not be assembled into the server. The computer readable medium carries one or more programs which, when executed by the server, cause the server to: receiving a federal learning request sent by an initiator; generating at least two federal learning subtasks according to the federal learning request; respectively sending at least two federal learning subtasks to an initiator and at least one participant according to the state of the at least one participant corresponding to the federal learning request; and in response to receiving training feedback data sent by the initiator and the at least one participant, generating a training result of the federal learning task indicated by the federal learning request based on the training feedback data, wherein the training feedback data is generated based on intermediate result data transmitted between the initiator and the at least one participant through the adaptive interface.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as "C", Python, or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor including a receiving unit, a first generating unit, a distributing unit, and a second generating unit. In some cases, the names of these units do not constitute a limitation on the units themselves; for example, the receiving unit may also be described as "a unit that receives a federal learning request sent by an initiator".
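As a toy illustration of that unit structure (the unit names follow the text above; everything else here is assumed), the four units might be written as plain callables:

```python
# Hypothetical sketch of the four units named above; not taken from the disclosure.
from typing import Callable, Dict, List


class ReceivingUnit:
    def __call__(self, message: Dict) -> Dict:
        # Receives the federal learning request sent by the initiator.
        return message["request"]


class FirstGeneratingUnit:
    def __call__(self, request: Dict) -> List[Dict]:
        # Generates at least two federal learning subtasks from the request.
        return [{"target": party, "config": request["config"]} for party in request["parties"]]


class DistributingUnit:
    def __call__(self, subtasks: List[Dict], send: Callable[[str, Dict], None]) -> None:
        # Sends each subtask to the initiator or participant it is addressed to.
        for subtask in subtasks:
            send(subtask["target"], subtask)


class SecondGeneratingUnit:
    def __call__(self, feedback: Dict[str, Dict]) -> Dict[str, float]:
        # Aggregates the training feedback into the training result of the overall task.
        return {party: data["loss"] for party, data in feedback.items()}
```

Wired together in that order, the four callables reproduce the receive, generate, distribute, and aggregate flow described for the processor above.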
The foregoing description presents only the preferred embodiments of the present disclosure and illustrates the technical principles employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (12)

1. A method for supporting heterogeneous federated learning, comprising:
receiving a federal learning request sent by an initiator;
generating at least two federal learning subtasks according to the federal learning request;
sending, according to a state of at least one participant corresponding to the federal learning request, the at least two federal learning subtasks to the initiator and the at least one participant, respectively;
in response to receiving training feedback data sent by the initiator and the at least one participant, generating a training result of the federated learning task indicated by the federated learning request based on the training feedback data, wherein the training feedback data is generated based on intermediate result data transmitted between the initiator and the at least one participant through an adaptation interface.
2. The method of claim 1, wherein the federal learning request further includes generic interface definition information indicating at least one of: a communication message attribute, a communication identification, and a data structure to be transmitted; and
the adaptation interface is determined based on the generic interface definition information.
3. The method of claim 1, wherein task allocation granularity information is further included in the federated learning request; and
the generating at least two federal learning subtasks according to the federal learning request comprises:
performing configuration analysis on the task indicated by the federal learning request to generate a pipeline model;
and generating, according to the generated pipeline model, a federal learning subtask consistent with the task allocation granularity information.
4. The method of claim 1, wherein the sending the at least two federated learning subtasks to the initiator and the at least one participant, respectively, according to a state of the at least one participant corresponding to the federated learning request comprises:
in response to determining that the state of each of the at least one participant corresponding to the federated learning request is a trainable state, sending the at least two federated learning subtasks to the initiator and the at least one participant, respectively.
5. The method of claim 1, wherein the generating training results for the federated learning task indicated by the federated learning request based on the training feedback data comprises:
updating a target state table according to the training feedback data, wherein the target state table is used for recording training process data related to the federal learning task;
and generating a new federal learning subtask according to the target state table until the federal learning task is completed.
6. The method of claim 5, wherein the method further comprises:
in response to determining that training of the federated learning task has failed, determining a training starting point according to the target state table; and
re-executing the federated learning task from the training starting point.
7. The method according to one of claims 1 to 6, wherein the receiving of the federal learning request sent by the initiator comprises:
in response to determining that the initiator passes authentication, receiving the federal learning request sent by the initiator, wherein the initiator is a registered user.
8. An apparatus for supporting heterogeneous federated learning, comprising:
a receiving unit configured to receive a federal learning request sent by an initiator;
a first generating unit configured to generate at least two federal learning subtasks according to the federal learning request;
a distribution unit configured to send the at least two federal learning subtasks to the initiator and the at least one participant, respectively, according to a state of the at least one participant corresponding to the federal learning request;
a second generating unit configured to, in response to receiving training feedback data sent by the initiator and the at least one participant, generate a training result of the federal learning task indicated by the federal learning request based on the training feedback data, wherein the training feedback data is generated based on intermediate result data transmitted between the initiator and the at least one participant through an adaptation interface.
9. A system for heterogeneous federal learning, comprising:
an initiator configured to: send a federal learning request to a server; train based on a received federal learning subtask and local data to generate first intermediate result data; and send the first intermediate result data, through an adaptation interface, to a participant corresponding to the initiator;
the participant configured to: train based on a received federal learning subtask and local data to generate second intermediate result data; and send the second intermediate result data to the initiator through the adaptation interface; and
the server configured to implement the method according to any one of claims 1-7.
10. The system of claim 9, wherein the server is further configured to assume a coordinator role in a preset federal learning algorithm.
11. A server, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-7.
12. A computer-readable medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-7.
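To make the adaptation-interface idea of claims 2 and 9 concrete, here is a minimal Python sketch; the field names, the JSON wire format, and the loopback demo are assumptions chosen for illustration, not part of the claims.

```python
# Hypothetical sketch; field names and the serialization format are assumptions.
import json
from dataclasses import asdict, dataclass
from typing import Callable, Dict


@dataclass
class GenericMessage:
    message_attribute: str     # e.g. "gradient" or "encrypted_histogram"
    communication_id: str      # identifies the training round / channel
    payload: Dict              # the data structure to be transmitted


class AdaptationInterface:
    """Bridges a framework-specific intermediate result into a common message format."""

    def __init__(self, send_fn: Callable[[bytes], None], recv_fn: Callable[[], bytes]):
        self._send = send_fn    # transport primitive supplied by the local framework
        self._recv = recv_fn

    def send_intermediate(self, attribute: str, comm_id: str, payload: Dict) -> None:
        message = GenericMessage(attribute, comm_id, payload)
        self._send(json.dumps(asdict(message)).encode("utf-8"))

    def receive_intermediate(self) -> GenericMessage:
        return GenericMessage(**json.loads(self._recv().decode("utf-8")))


if __name__ == "__main__":
    # Loopback demo: the initiator's first intermediate result is read back as the participant would.
    buffer = []
    adapter = AdaptationInterface(buffer.append, buffer.pop)
    adapter.send_intermediate("gradient", "round-1", {"values": [0.1, 0.2]})
    print(adapter.receive_intermediate())
```

In this sketch both sides share one message schema, which is the role the generic interface definition information (message attribute, communication identification, data structure to be transmitted) plays in claim 2.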
CN202110536547.3A 2021-05-17 2021-05-17 Method, device and system for supporting heterogeneous federated learning Pending CN113505520A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110536547.3A CN113505520A (en) 2021-05-17 2021-05-17 Method, device and system for supporting heterogeneous federated learning

Publications (1)

Publication Number Publication Date
CN113505520A true CN113505520A (en) 2021-10-15

Family

ID=78008511

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110536547.3A Pending CN113505520A (en) 2021-05-17 2021-05-17 Method, device and system for supporting heterogeneous federated learning

Country Status (1)

Country Link
CN (1) CN113505520A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200364608A1 (en) * 2019-05-13 2020-11-19 International Business Machines Corporation Communicating in a federated learning environment
WO2021008017A1 (en) * 2019-07-17 2021-01-21 深圳前海微众银行股份有限公司 Federation learning method, system, terminal device and storage medium
CN110874649A (en) * 2020-01-16 2020-03-10 支付宝(杭州)信息技术有限公司 State machine-based federal learning method, system, client and electronic equipment
CN111461874A (en) * 2020-04-13 2020-07-28 浙江大学 Credit risk control system and method based on federal mode
CN112000473A (en) * 2020-08-12 2020-11-27 中国银联股份有限公司 Distributed training method and device for deep learning model
CN112001500A (en) * 2020-08-13 2020-11-27 星环信息科技(上海)有限公司 Model training method, device and storage medium based on longitudinal federated learning system
CN112101536A (en) * 2020-08-30 2020-12-18 西南电子技术研究所(中国电子科技集团公司第十研究所) Lightweight distributed multi-task collaboration framework
CN112270597A (en) * 2020-11-10 2021-01-26 恒安嘉新(北京)科技股份公司 Business processing and credit evaluation model training method, device, equipment and medium
CN112685159A (en) * 2020-12-30 2021-04-20 深圳致星科技有限公司 Federal learning calculation task processing scheme based on FPGA heterogeneous processing system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
范永泰: "电子商务物流", 30 September 2020, 北京理工大学出版社, pages: 34 - 35 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113779613A (en) * 2021-11-05 2021-12-10 深圳致星科技有限公司 Data management method and device for secure data network for federal learning
WO2023088465A1 (en) * 2021-11-22 2023-05-25 华为技术有限公司 Model training method and related device
CN114202018A (en) * 2021-11-29 2022-03-18 新智我来网络科技有限公司 Modular joint learning method and system
CN114328432A (en) * 2021-12-02 2022-04-12 京信数据科技有限公司 Big data federal learning processing method and system
WO2023116466A1 (en) * 2021-12-20 2023-06-29 杭州趣链科技有限公司 Privacy computing method and apparatus, and electronic device and computer-readable storage medium
CN114611712A (en) * 2022-05-10 2022-06-10 富算科技(上海)有限公司 Prediction method based on heterogeneous federated learning, model generation method and device
CN114925072A (en) * 2022-06-13 2022-08-19 深圳致星科技有限公司 Data management method, apparatus, system, device, medium, and program product

Similar Documents

Publication Publication Date Title
CN113505520A (en) Method, device and system for supporting heterogeneous federated learning
EP3762882B1 (en) System and method for establishing common request processing
CN110546606A (en) Tenant upgrade analysis
CN111488995B (en) Method, device and system for evaluating joint training model
CN113268336A (en) Service acquisition method, device, equipment and readable medium
CN113626002A (en) Service execution method and device
US11748081B2 (en) System and method for application release orchestration and deployment
CN109828830B (en) Method and apparatus for managing containers
CN111338834B (en) Data storage method and device
CN110737655B (en) Method and device for reporting data
CN110059064B (en) Log file processing method and device and computer readable storage medium
CN109840072B (en) Information processing method and device
CN109840109B (en) Method and apparatus for generating software development toolkit
CN112825525A (en) Method and apparatus for processing transactions
CN110022323A (en) A kind of method and system of the cross-terminal real-time, interactive based on WebSocket and Redux
US11537592B1 (en) Metadata management through blockchain technology
CN111598544A (en) Method and apparatus for processing information
CN103729451B (en) A kind of information input method of database, apparatus and system
CN111324470A (en) Method and device for generating information
CN109462491B (en) System, method and apparatus for testing server functionality
CN111626802A (en) Method and apparatus for processing information
CN110532115B (en) System, method and apparatus for developing smart contracts
CN114677138B (en) Data processing method, device and computer readable storage medium
CN115022328A (en) Server cluster, server cluster testing method and device and electronic equipment
CN115705256A (en) Request facilitation for agreement on service transactions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination