
CN109408134B - Model file processing method, device and system and processing equipment

Info

Publication number
CN109408134B
Authority
CN
China
Prior art keywords
model file
service
service instance
memory
original
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710703918.6A
Other languages
Chinese (zh)
Other versions
CN109408134A (en)
Inventor
韩陆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201710703918.6A priority Critical patent/CN109408134B/en
Publication of CN109408134A publication Critical patent/CN109408134A/en
Application granted granted Critical
Publication of CN109408134B publication Critical patent/CN109408134B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44521Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a method, a device and a system for processing a model file, and processing equipment. The method comprises: obtaining a model file loaded into a memory by a service instance of a system service, wherein the model file comprises an original model file and a new model file, and a first reference relationship exists between the original model file in the memory and the service instance; and after receiving a hot replacement instruction, converting the first reference relationship into a second reference relationship, wherein the second reference relationship is the reference relationship between the new model file in the memory and the service instance. The invention solves the technical problems that existing system services cannot hot-replace the model and cannot maintain service consistency.

Description

Model file processing method, device and system and processing equipment
Technical Field
The invention relates to the field of intelligent interaction services, in particular to a method, a device and a system for processing a model file and processing equipment.
Background
With the rapid development of internet technology, the modern society is facing a new era of diversified development, and intelligent interactive services increasingly permeate into the daily life of people.
An intelligent interactive service provides interaction according to the logic of a particular tenant. During service startup, the service obtains the model file of the specified tenant and loads it into memory. After the service is started, a user can request interactive information through an external application programming interface; the service passes the request to the model through the algorithm module, obtains the best result, and returns it to the user.
In intelligent interactive services, however, the algorithm model is typically loaded at startup and, once loaded, does not change until the service is stopped and restarted. When a tenant's model is updated, or models of other tenants must be loaded additionally at runtime, the existing loading approach cannot achieve hot replacement, and restarting causes service unavailability or service inconsistency.
In view of the above-mentioned problems that the existing system service cannot implement the hot swapping of the model and cannot maintain the service consistency, no effective solution has been proposed at present.
Disclosure of Invention
The embodiments of the invention provide a method, a device, a system and processing equipment for processing a model file, which at least solve the technical problems that existing system services cannot hot-replace the model and cannot maintain service consistency.
According to one aspect of the embodiments of the present invention, there is provided a method for processing a model file, including: obtaining a model file loaded into a memory by a service instance of a system service, wherein the model file comprises an original model file and a new model file, and a first reference relationship exists between the original model file in the memory and the service instance; and after receiving a hot replacement instruction, converting the first reference relationship into a second reference relationship, wherein the second reference relationship is the reference relationship between the new model file in the memory and the service instance.
According to another aspect of the embodiments of the present invention, there is also provided a device for processing a model file, including: an obtaining module, configured to obtain a model file loaded into a memory by a service instance of a system service, where the model file comprises an original model file and a new model file, and a first reference relationship exists between the original model file in the memory and the service instance; and a receiving module, configured to convert the first reference relationship into a second reference relationship after receiving a hot replacement instruction, wherein the second reference relationship is the reference relationship between the new model file in the memory and the service instance.
According to another aspect of the embodiments of the present invention, there is also provided a system for processing a model file, including: a service terminal that initiates a processing request, wherein the processing request is used to call the model file corresponding to a service instance, and the model file comprises an original model file and a new model file; and a server, in communication with the service terminal, configured to create a first reference relationship between the original model file and the service instance when the system service is started, and to convert the first reference relationship into a second reference relationship after receiving a hot replacement instruction, wherein the second reference relationship is the reference relationship between the new model file in the memory and the service instance. After the service instance of the server finishes loading the model file into the memory, the received processing request is blocked until the first reference relationship has been converted into the second reference relationship, at which point execution of the processing request starts and the execution result is returned to the service terminal.
According to another aspect of the embodiments of the present invention, there is also provided a processing apparatus for a model file, including: the storage is used for storing a model file loaded into the memory by a service instance of the system service, wherein the model file comprises: an original model file and a new model file; and the processor is used for creating a first reference relation between the original model file and the service instance when the system service is started, and converting the first reference relation into a second reference relation after a hot replacement instruction is received, wherein the second reference relation is the reference relation between the new model file and the service instance in the memory.
According to another aspect of the embodiments of the present invention, there is also provided a processing apparatus for a model file, including: a memory for storing a model file loaded by a service instance of a system service, wherein the model file comprises an original model file and a new model file; and a processor configured to create a first reference relationship between the original model file and the service instance when the system service is started, and to convert the first reference relationship into a second reference relationship after a hot replacement instruction is received, wherein the second reference relationship is the reference relationship between the new model file and the service instance.
According to another aspect of the embodiments of the present invention, there is also provided a method for processing a model file, including: obtaining a model file loaded by a service instance of a system service, wherein the model file comprises an original model file and a new model file, and a first reference relationship exists between the original model file and the service instance; and after receiving a hot replacement instruction, converting the first reference relationship into a second reference relationship, wherein the second reference relationship is the reference relationship between the new model file and the service instance.
In the embodiments of the present invention, an approach based on a message mechanism and a reference mechanism is adopted: a model file loaded into a memory by a service instance of a system service is obtained, and after a hot replacement instruction is received, a first reference relationship is converted into a second reference relationship, where the model file comprises an original model file and a new model file, the first reference relationship exists between the original model file in the memory and the service instance, and the second reference relationship is the reference relationship between the new model file in the memory and the service instance. The purpose of notifying model file changes through the message mechanism and replacing the model file through the reference mechanism is thereby achieved, the technical effects of hot-replacing the model file while keeping the service consistent are realized, and the technical problems that existing system services cannot hot-replace the model and cannot maintain service consistency are solved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of a model file processing device according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of processing a model file according to an embodiment of the invention;
FIG. 3 is a schematic illustration of a preferred hot swap according to embodiments of the present invention;
FIG. 4 is a flow diagram of an alternative method of processing a model file in accordance with an embodiment of the present invention;
FIG. 5 is a flow diagram of an alternative method of processing a model file in accordance with an embodiment of the present invention;
FIG. 6 is a flowchart of a preferred method of thermal loading according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a model file processing apparatus according to an embodiment of the present invention;
FIG. 8 is a block diagram of a model file processing system according to an embodiment of the present invention;
FIG. 9 is a block diagram of a computer terminal according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a model file processing device according to an embodiment of the present invention; and
FIG. 11 is a flow chart of a method of processing a model file according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some terms appearing in the description of the embodiments of the present application are explained as follows:
1. Tenant: a tenant on the private cloud (i.e. a user who pays for use and can only use the service within a specified time limit) is the underlying logical unit of an intelligent interaction service (e.g. a voice interaction service). The underlying layer of one intelligent interaction service corresponds to one tenant model, and the service is regarded as that tenant's service.
2. User: refers to the end user of the tenant service.
3. Model: a file trained from a certain tenant's data in machine learning or deep learning; the model serves as the logical basis for the algorithm's decisions during computation.
4. Hot replacement: using a new model and discarding the old model without stopping the service.
5. Service inconsistency: the phenomenon that, when the same tenant service has different instances, the processing logic of each instance is not exactly the same for the same request.
6. Decoupling: the process of removing the mutual influence between two or more variables and enhancing the ability of each to exist independently, greatly reducing, though not completely eliminating, the existing degree of coupling.
Example 1
Before describing further details of the embodiments of the present application, FIG. 1 will be referenced to describe a suitable model file processing device that may be used to implement the principles of the present application.
Fig. 1 is a schematic diagram of a model file processing device according to an embodiment of the present invention. The depicted structure is only one example of a suitable environment, given for description purposes, and does not limit the scope of use or functionality of the present application. Nor should the processing device be interpreted as having any dependency on, or requirement relating to, any one or combination of the components illustrated in FIG. 1.
The system embodiment provided in Embodiment 1 of the application can be widely applied to private clouds. With the rapid development of internet technology, many private-cloud-based question-answering service products (e.g. voice robots) have appeared in people's daily life. These products respond to the voice, text, video and other information input by the user. For example, when a user interacts with a robot by voice and asks "What time is it?", the robot, after receiving the voice input, converts the voice into an instruction it can recognize, finds the answer corresponding to the instruction through the private cloud, and feeds the answer back to the user by voice, image or other means. However, when the information fed back to the user is abnormal, a worker needs to examine the background device or background program, and a common method is to inspect the log files. Generally, when an abnormality occurs, a background processor generates a log file corresponding to the abnormality, and the specific location of the abnormality can be confirmed by calling the log files (for example, by chain-calling them). Because the robot both receives user instructions and outputs feedback information, when only the feedback information is wrong, the staff can eliminate the abnormality by updating the model file in the system service rented by the tenant.
In the existing method for updating the model file, in the process of starting the service, the model file of the designated tenant is obtained and loaded into the memory. After the service is started, the user can request for interactive information through an external application programming interface, the service transmits the request to the model through the algorithm module, and the optimal result information is obtained and returned to the user.
However, in the prior art, the algorithmic model is typically loaded at startup and will not change once loaded until the service is stopped and restarted. In the face of tenant model updating and additionally loading models of other tenants during running, the existing loading mode cannot achieve hot replacement, and restarting can cause service unavailability or inconsistent service.
The approach of notifying model changes through a message mechanism and hot-replacing the model through a reference mechanism can effectively solve the above problems. Specifically, the address of the tenant's model file is mapped to a variable by variable reference, so that the model file is decoupled from the system service. When the model file needs to be updated, the address of the new model file is mapped to the variable and the new model file is loaded into the memory. In addition, to prevent inconsistency among instance services during hot replacement, the present application adopts request blocking: after all service instances have finished loading, each service instance blocks its current request-processing thread and wakes the blocked thread up after the hot replacement is completed. Using status message notifications together with request blocking avoids the problem of service inconsistency.
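For illustration only (this sketch is not part of the patent text), the variable-reference idea described above can be expressed in Java roughly as follows; the class ModelHolder, its nested Model type and its method names are hypothetical.

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: the service instance never holds the model object directly;
// it holds a reference variable that can be re-pointed at a new model.
public class ModelHolder {

    // Placeholder for a model file loaded into memory (an assumption, not from the patent).
    public static class Model {
        final String path;
        Model(String path) { this.path = path; }
    }

    // The "variable" of the reference mechanism: re-pointing it is the hot replacement.
    private final AtomicReference<Model> current = new AtomicReference<>();

    // First reference relationship: original model file <-> service instance.
    public void loadOriginal(String originalPath) {
        current.set(new Model(originalPath));
    }

    // Second reference relationship: new model file <-> service instance.
    public void hotReplace(String newPath) {
        current.set(new Model(newPath));
        // The old Model object is no longer referenced and will be reclaimed
        // by periodic garbage collection, releasing its memory.
    }

    // Request processing always reads the model through the current reference.
    public Model model() {
        return current.get();
    }
}
```

Because request threads always dereference the same holder, swapping the reference is enough to make every subsequent request use the new model, while the old model object becomes unreachable and is reclaimed by garbage collection.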
A storage 101, configured to store a model file loaded into a memory by a service instance of a system service, where the model file includes: an original model file and a new model file.
The processor 103 is configured to create a first reference relationship between the original model file and the service instance when the system service is started, and convert the first reference relationship into a second reference relationship after receiving the hot replacement instruction, where the second reference relationship is a reference relationship between a new model file in the memory and the service instance.
In an alternative embodiment, the system service may be, but is not limited to, an intelligent interaction service (e.g. an intelligent question-and-answer service); the system service includes at least one service instance, and each service instance can load its corresponding model file into the memory for use by a tenant leasing the system service. Suppose the tenants leasing the system service are four tenants A, B, C and D, which use the system service in different environments: tenant A uses it in an entertainment question-and-answer system, tenant B in a sports question-and-answer system, tenant C in a military question-and-answer system, and tenant D in a daily question-and-answer system, so the model files used by the four tenants in the same system service are different. When a tenant modifies its model file, a new model file is obtained and the model file needs to be reloaded; the model file here is the new model file, for example when tenant A adds a new element to its model file. When a user of a tenant service, that is, a user corresponding to one of the above four tenants, uses the intelligent question-and-answer system, a service instance is generated; for example, the process in which user A' of tenant A asks a question through the entertainment question-and-answer system is a service instance, and the question asked by user A' then needs to be queried against the new model file.
It should be noted that a reference in a programming language means declaring a variable and assigning the address of an object to that variable; assigning a new object to the variable essentially changes the address the variable refers to. In a programming language that runs on a virtual machine, such as Java, an object that is no longer referenced is destroyed during periodic garbage collection, and the corresponding memory space is released. Thus, the first reference relationship between the original model file and the service instance can be implemented through a variable. In addition, establishing the first reference relationship between the original model file and the service instance by variable reference can be achieved by interfacing a C++ algorithm through JNA (Java Native Access, which provides dynamic access to native libraries).
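For illustration only (the patent does not give code), a JNA binding of the kind mentioned above might be sketched as follows; the native library name "algo" and its three functions are hypothetical assumptions, not an API defined by the patent.

```java
import com.sun.jna.Library;
import com.sun.jna.Native;
import com.sun.jna.Pointer;

// Hypothetical JNA mapping of a native C++ algorithm library named "algo".
// The library name and function names are illustrative assumptions, not the patent's API.
public interface AlgoLibrary extends Library {
    AlgoLibrary INSTANCE = Native.load("algo", AlgoLibrary.class);

    // Loads a model file from disk and returns an opaque native handle.
    Pointer load_model(String modelPath);

    // Runs the algorithm against the model referenced by the handle.
    String query(Pointer modelHandle, String request);

    // Releases the native memory of a model handle that is no longer referenced.
    void free_model(Pointer modelHandle);
}
```

On the Java side, the service instance would keep only the handle returned by load_model in its reference variable; hot replacement would point that variable at the handle of the newly loaded model and release the old handle.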
In another optional embodiment, after modifying the original model file to obtain a new model file, the tenant needs to replace the original model file with the new model file, and at this time, the tenant issues a hot replacement command. In the process of replacing the model file by the system service, the reference relationship between the service instance and the model file is also changed, namely the reference relationship between the new model file and the service instance is a second reference relationship, wherein the second reference relationship can also be established by a variable reference method.
Specifically, after the tenant issues the hot replacement instruction, the system service maps the model file loaded into the memory to a variable in the system service by variable reference, so that the model file is decoupled from the system service; the change of the model file is learned through the message mechanism, the new model file is dynamically loaded while the system service runs, and finally, after the nodes of all service instances in the system service are ready, the original model file is directly replaced by the new model file, completing the hot replacement of the model file.
As can be seen from the above, the storage stores the model file loaded into the memory by the service instance of the system service, and the processor creates a first reference relationship between the original model file and the service instance when starting the system service and converts the first reference relationship into a second reference relationship after receiving the hot replacement instruction, where the model file comprises the original model file and the new model file, the first reference relationship exists between the original model file in the memory and the service instance, and the second reference relationship is the reference relationship between the new model file in the memory and the service instance. It is easy to notice that, because the variable-reference method is adopted, the original model file is mapped to the service instance in the system service through a variable; when the original model file needs to be hot-replaced, only the new model file needs to be mapped to the service instance through the variable. This achieves the purpose of notifying model file changes through the message mechanism and hot-replacing the model file through the reference mechanism, realizes the technical effects of hot-replacing the model file while keeping the service consistent, and solves the technical problems that existing system services cannot hot-replace the model and cannot maintain service consistency.
In addition, in another alternative embodiment, the first reference relationship is established by assigning the address of the original model file to a variable, that is, the address of the original model file is stored in the memory; the second reference relationship assigns the address of the new model file to the variable. During hot switching, the original model file and the new model file exist in the service system at the same time. Therefore, to avoid inconsistency among the service instances during hot replacement, after all service instances have finished loading, each service instance receives a new-model-ready message and blocks its current request-processing thread. After the first reference relationship has been completely switched to the second reference relationship, each service instance sends a new-model-replacement-ready message, subscribes to that message, and wakes up the blocked thread; at that moment the service instance uses the new model file, and the hot replacement of the system service is completed.
It should be noted that the loading process of the new model and the starting process of the service system are two parallel and independent processes, which are not affected by each other.
In another alternative embodiment, when the service system receives a hot replacement instruction and converts the first reference relationship into the second reference relationship, that is, after the hot replacement process is completed, the service system, which includes a plurality of service instances, contains both the new model file and the original model file. After all service instances of the service system are loaded and ready, the processor blocks the current request-processing thread of each service instance once that instance has received the load-ready message. For example, after tenant A loads the new model file into the memory, the system service contains both the original model file and the new model file corresponding to tenant A; to ensure that all instances of tenant A use the new model file, all request-processing threads in tenant A are blocked at this point, that is, all service instances are in a waiting state, and the requests of tenant A's service instances are processed only after the first reference relationship has been converted into the second reference relationship.
Furthermore, after the first reference relationship of each service instance has been converted into the second reference relationship, each service instance sends a new-model-ready message, and once every service instance has sent its corresponding new-model-ready message, the server wakes up the blocked threads.
Specifically, after the first reference relationship of each service instance has been converted into the second reference relationship, the hot replacement of the model file has been completed inside each service instance, and each service instance sends a new-model-ready message. For example, after all instances of tenant A have sent new-model-ready messages, each service instance subscribes to the new-model-ready messages and wakes up the threads that were blocked. Thereafter, each service instance corresponding to tenant A processes requests using the new model file.
It should be noted that service inconsistency is avoided by the new-model-ready messages, and the above process is very short and asynchronous; therefore the method not only effectively avoids service inconsistency but also has no influence on the availability of the service.
In a preferred embodiment, the processor performs the hot swap of the model file as follows. After the system service receives the new-model-load-ready messages of all instances, the processor makes each service instance of the tenant block its currently processed request thread and, once the current processing requests are blocked, completes the reference switch of the variable inside each service instance. Specifically, the address of the original model file was assigned to a variable M to establish the reference relationship between the original model file and the service instance; when switching model files, the address of the new model file is assigned to the variable M, that is, the switch from the original model file to the new model file is completed through variable reference. Furthermore, after completing the reference switch of the variable, each service instance sends a new-model-replacement-ready message to the message center of the system service, indicating that the original model file has been replaced by the new model file, i.e. that the hot replacement is complete. After the message center receives the new-model-replacement-ready messages, each service instance subscribes to them; after subscribing, each service instance wakes up the blocked request threads and resumes them, and thereafter each service instance processes request messages using the new model file.
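As a minimal sketch of the block-swap-wake sequence described above (assuming Java and hypothetical class names; the patent itself does not prescribe an implementation), one service instance could be organized as follows:

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of one service instance during hot replacement.
// The three methods correspond to: blocking request threads after the
// "new model load ready" message, switching the reference variable M,
// and waking the blocked threads after the "replacement ready" message.
public class ServiceInstance {

    private final AtomicReference<Object> modelRef = new AtomicReference<>(); // variable "M"
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition replaced = lock.newCondition();
    private boolean swapping = false;

    // All instances have loaded the new model; block request processing.
    public void blockRequests() {
        lock.lock();
        try { swapping = true; } finally { lock.unlock(); }
    }

    // Assign the new model to the reference variable M.
    public void switchReference(Object newModel) {
        modelRef.set(newModel);
    }

    // The "new model replacement ready" message has been subscribed; wake threads.
    public void wakeRequests() {
        lock.lock();
        try {
            swapping = false;
            replaced.signalAll();
        } finally { lock.unlock(); }
    }

    // Request-processing threads obtain the model through the reference variable.
    public Object currentModel() throws InterruptedException {
        lock.lock();
        try {
            while (swapping) {
                replaced.await();   // blocked until the hot replacement completes
            }
            return modelRef.get();
        } finally { lock.unlock(); }
    }
}
```

Here blockRequests, switchReference and wakeRequests would be driven by the load-ready, reference-switch and replacement-ready stages respectively; request threads calling currentModel wait only while a swap is in progress.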
In another optional embodiment, when the model file is an original model file, before the model file loaded into the memory by the service instance of the system service is obtained, the processor may be further configured to obtain, when starting the system service, the original model file corresponding to the service instance, load the original model file into the memory, and create the first reference relationship between the original model file and the service instance.
Specifically, the service instance obtains metadata from the configuration center, where the metadata at least includes model information, such as the type of the model and the creation time of the model. After obtaining the metadata, the service instance uses the metadata to obtain the original model file from the storage; for example, the service instance queries the storage for the original model file matching the model type according to the model information in the metadata. The service instance then subscribes to the change messages of the model files related to the current tenant from the message center, loads the original model file into the memory, and finally obtains the original model file through variable reference. The startup of the system service is thus completed, and after startup the hot replacement steps can be executed.
Optionally, when the model file is a new model file, before the model file loaded into the memory by the service instance of the system service is obtained, the processor may be further configured to complete the loading of the model file.
Specifically, the service instance obtains the model change message from the message center, where the new model file is obtained through training and the model change message is generated when a new model file has been trained. When the service instance detects that the model change message indicates that a new model file has been generated, it acquires model information (for example, the type of the model and the creation time of the model) from the configuration center and obtains the corresponding metadata from that information. The service instance then uses the metadata to acquire the trained new model file from the storage and loads the new model file into the memory through variable reference. After loading the new model file into the memory, the service instance subscribes to the model change messages from the message center and sends a new-model-load-ready message to the message center; finally, after sending the new-model-load-ready message, the service instance subscribes to the new-model-load-ready messages of all service instances from the message center.
It should be noted that the training of the new model is performed offline, and after the new model is trained, the offline program sends a model change message to the message center. In addition, hot loading and the startup of the system service are two parallel, independent processes that do not interfere with each other.
Example 2
There is also provided, in accordance with an embodiment of the present invention, an embodiment of a method for processing a model file, it being noted that the steps illustrated in the flowchart of the figure may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than here.
The present application provides an embodiment of a method for processing a model file, and fig. 2 is a flowchart of a method for processing a model file according to an embodiment of a method for processing a model file provided by the present invention, as shown in fig. 2, the method includes the following steps:
step S202, obtaining a model file loaded into a memory by a service instance of the system service, wherein the model file comprises an original model file and a new model file, and a first reference relationship exists between the original model file in the memory and the service instance.
In step S202, the system service may be, but is not limited to, an intelligent interaction service (e.g., an intelligent question and answer service), the system service includes at least one service instance, and each service instance may load its corresponding model file into a memory for use by a tenant renting the system service.
In an alternative embodiment, the tenants leasing the system service are four tenants A, B, C and D, which use the system service in different environments: for example, tenant A uses it in an entertainment question-and-answer system, tenant B in a sports question-and-answer system, tenant C in a military question-and-answer system, and tenant D in a daily question-and-answer system, so the model files used by the four tenants in the same system service are different. When a tenant modifies its model file, a new model file is obtained and the model file needs to be reloaded; the model file here is the new model file, for example when tenant A adds a new element to its model file. When a user of a tenant service, that is, a user corresponding to one of the above four tenants, uses the intelligent question-and-answer system, a service instance is generated; for example, the process in which user A' of tenant A asks a question through the entertainment question-and-answer system is a service instance, and the question asked by user A' then needs to be queried against the new model file.
It should be noted that a reference in a programming language means declaring a variable and assigning the address of an object to that variable; assigning a new object to the variable essentially changes the address the variable refers to. In a programming language that runs on a virtual machine, such as Java, an object that is no longer referenced is destroyed during periodic garbage collection, and the corresponding memory space is released. Thus, the first reference relationship between the original model file and the service instance can be implemented through a variable. In addition, establishing the first reference relationship between the original model file and the service instance by variable reference can be achieved by interfacing a C++ algorithm through JNA (Java Native Access, which provides dynamic access to native libraries).
Through the above step S202, the corresponding relationship between the original model file and the service instance may be established by a variable referencing method, and then the starting process of the system service may be completed according to the corresponding relationship.
Step S204, after receiving the hot replacement instruction, converting the first reference relationship into a second reference relationship, where the second reference relationship is a reference relationship between the new model file in the memory and the service instance.
In step S204, after modifying the original model file to obtain a new model file, the tenant needs to replace the original model file with the new model file, and at this time, the tenant may issue a hot replacement command. In the process of replacing the model file by the system service, the reference relationship between the service instance and the model file is also changed, namely the reference relationship between the new model file and the service instance is a second reference relationship, wherein the second reference relationship can also be established by a variable reference method.
In an optional embodiment, after the tenant issues the hot replacement instruction, the system service maps the model file loaded into the memory to a variable in the system service by variable reference, so that the model file is decoupled from the system service; the change of the model file is learned through the message mechanism, the new model file is dynamically loaded while the system service runs, and finally, after the nodes of all service instances in the system service are ready, the original model file is directly replaced by the new model file, completing the hot replacement of the model file.
It should be noted that after the hot replacement of the model file is completed, each service instance sends a new-model-replacement-ready message, and thereafter each instance processes requests using the new model file.
The availability and stability of the system service can be ensured through the step S204, and the hot replacement effect of the model file can be realized without stopping the system service.
Based on the solution defined in steps S202 to S204 of the above embodiment, a model file loaded into the memory by a service instance of the system service is obtained, and after a hot replacement instruction is received the first reference relationship is converted into a second reference relationship, where the model file comprises the original model file and the new model file, the first reference relationship exists between the original model file in the memory and the service instance, and the second reference relationship is the reference relationship between the new model file in the memory and the service instance. It is easy to notice that, because the variable-reference method is adopted, the original model file is mapped to the service instance in the system service through a variable; when the original model file needs to be hot-replaced, only the new model file needs to be mapped to the service instance through the variable. This achieves the purpose of notifying model file changes through the message mechanism and hot-replacing the model file through the reference mechanism, realizes the technical effects of hot-replacing the model file while keeping the service consistent, and solves the technical problems that existing system services cannot hot-replace the model and cannot maintain service consistency.
Optionally, the system service includes a plurality of service instances. After each service instance finishes loading the model file into the memory, its request-processing thread is blocked until the first reference relationship has been converted into the second reference relationship, after which processing of the request starts, where the processing request is used to call the model file corresponding to the service instance.
Specifically, when the service system receives a hot replacement instruction and converts the first reference relationship into the second reference relationship, that is, when the hot replacement process of replacing the original model file with the new model file is completed, the service system contains both the new model file and the original model file; after all service instances of the service system are loaded and ready, each service instance receives the load-ready message and blocks its current request-processing thread.
In an optional embodiment, after the tenant a loads the new model file into the memory, the system service includes the original model file and the new model file corresponding to the tenant a, and in order to ensure that all instances in the tenant a use the new model file, at this time, all processing request threads in the tenant a are blocked, that is, all service instances are in a waiting state, and after the first reference relationship is converted into the second reference relationship, the request of the service instance in the tenant a is processed.
In another alternative embodiment, after the first reference relationship of each service instance is converted into the second reference relationship, each service instance sends a new model ready message, and in the case that each service instance sends a corresponding new model ready message, the thread that has been blocked will be woken up.
Specifically, after the first reference relationship of each service instance has been converted into the second reference relationship, the hot replacement of the model file has been completed inside each service instance, and each service instance sends a new-model-ready message. For example, after all instances of tenant A have sent new-model-ready messages, each service instance subscribes to the new-model-ready messages and wakes up the threads that were blocked. Thereafter, each service instance corresponding to tenant A processes requests using the new model file.
It should be noted that service inconsistency is avoided by the new-model-ready messages, and the above process is very short and asynchronous; therefore the method not only effectively avoids service inconsistency but also has no influence on the availability of the service.
In a preferred embodiment, fig. 3 is a schematic diagram illustrating a preferred hot swap. As shown in fig. 3, the hot swap process specifically includes the following steps (an illustrative code sketch of this flow follows the list):
S1, the system service receives the new-model-load-ready messages of all instances; at this point, all instances of the service corresponding to the tenant have been loaded successfully, and step S2 is then executed;
S2, each service instance of the tenant blocks its currently processed request thread, to prevent service-instance inconsistency during the hot replacement; after the current processing requests are blocked, step S3 is executed;
S3, the reference switch of the variable is completed inside each service instance, so that the variable refers to the new model file. Specifically, the address corresponding to the original model file (i.e. address 1 in fig. 3) was assigned to the variable M to establish the reference relationship between the original model file and the service instance; when switching model files, the address corresponding to the new model file (i.e. address 2 in fig. 3) is assigned to the variable M, that is, the switch from the original model file to the new model file is completed through variable reference;
S4, after the reference switch of the variable is completed, the service instance sends a new-model-replacement-ready message to the message center of the system service, indicating that the original model file has been replaced by the new model file, i.e. that the hot replacement is complete;
S5, after the message center receives the new-model-replacement-ready messages, each service instance subscribes to them;
S6, after each service instance has subscribed to the new-model-replacement-ready message, it wakes up the blocked request threads and resumes them; thereafter, each service instance processes request messages using the new model file.
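The steps above assume a message center with publish/subscribe semantics. As an illustration only, under the assumption of an in-process implementation (the patent does not specify one), such a message center might look like this; the class name and topic strings are hypothetical:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Hypothetical in-process message center with topics such as
// "new-model-load-ready" and "new-model-replacement-ready".
public class MessageCenter {

    private final Map<String, List<Consumer<String>>> subscribers = new ConcurrentHashMap<>();

    // A service instance subscribes to a topic (e.g. step S5 of the hot-swap flow).
    public void subscribe(String topic, Consumer<String> listener) {
        subscribers.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(listener);
    }

    // A service instance publishes a status message (e.g. step S4 of the hot-swap flow).
    public void publish(String topic, String payload) {
        for (Consumer<String> listener : subscribers.getOrDefault(topic, List.of())) {
            listener.accept(payload);
        }
    }
}
```

A service instance would, for example, publish on a "new-model-replacement-ready" topic at step S4 and wake its blocked threads from the listener it registered at step S5.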
Optionally, fig. 4 shows a flowchart of an optional method of processing a model file. As shown in fig. 4, when the model file is an original model file, before the model file loaded into the memory by the service instance of the system service is obtained, the processing method of the model file further includes the following steps:
step S402, when starting system service, obtaining an original model file corresponding to a service instance;
step S404, the original model file is loaded into the memory, and a first reference relationship between the original model file and the service instance is created.
In a preferred embodiment, as shown in fig. 5, a preferred system service starting flowchart specifically includes the following steps:
step S51, the service instance obtains metadata;
specifically, the service instance obtains metadata from the configuration center, wherein the metadata at least includes model information, such as a type of the model and a creation time of the model.
Step S53, the service instance obtains the model file from the storage;
specifically, after obtaining the metadata, the service instance obtains the original model file from the storage using the metadata, for example, the service instance queries the original model file matching the model type from the storage according to the model information (e.g., the model type) in the metadata.
Step S55, subscribing the model change message by the service instance;
specifically, the service instance subscribes to the change information of the model file related to the current tenant from the message center, and after obtaining the change information of the model file, step S57 is executed.
Step S57, the service instance loads the original model file, namely, the service instance loads the original model file to the memory;
step S59, the service instance obtains the original model file through variable reference;
specifically, a variable is used to reference an original model file stored in a memory, and a first reference relationship is created, where an assignment of the variable is a storage address of the original model file, for example, the storage address of the original model file is 0x000011, and if the variable is M, then M is 0x 000011.
Thus, the system service start-up process is completed, and after the system service start-up is completed, the hot replacement process shown in fig. 3 is performed.
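For illustration only, the startup flow of fig. 5 (steps S51 to S59) might be sketched in Java as follows, reusing the hypothetical ModelHolder and MessageCenter classes from the earlier sketches; ConfigCenter and ModelStorage are stand-in interfaces introduced here and are not defined by the patent.

```java
// Hypothetical sketch of the startup flow in fig. 5 (steps S51-S59).
public class StartupFlow {

    public static Object start(ConfigCenter config, ModelStorage storage,
                               MessageCenter messages, ModelHolder holder,
                               String tenantId) {
        // S51: obtain metadata (model type, creation time, ...) from the configuration center.
        String metadata = config.metadataFor(tenantId);

        // S53: use the metadata to fetch the original model file from the storage.
        String originalModelPath = storage.lookup(metadata);

        // S55: subscribe to model change messages for the current tenant.
        messages.subscribe("model-changed:" + tenantId, msg -> { /* trigger hot loading (fig. 6) */ });

        // S57/S59: load the original model into memory and hold it by variable reference
        // (the first reference relationship).
        holder.loadOriginal(originalModelPath);
        return holder.model();
    }

    // Minimal stand-in interfaces so the sketch is self-contained.
    interface ConfigCenter { String metadataFor(String tenantId); }
    interface ModelStorage { String lookup(String metadata); }
}
```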
It should be noted that, after the service instance loads the original model file into the memory, the service instance subscribes model change information from the message subscription center, where the model change information is used to represent whether to generate a new model file, and the service instance acquires the model change information from the message center under the condition of generating the new model file.
In an optional embodiment, fig. 6 shows a flowchart of a preferred hot loading method, and as shown in fig. 6, in the case that the model file is a new model file, before obtaining the model file loaded into the memory by the service instance of the system service, the processing method of the model file further includes the following steps:
step S602, the service instance obtains a model change message from a message center;
specifically, the new model file may be obtained through training, and in the case that the new model file is obtained through training, the model change information may be generated, where the model change information may be used to represent whether a new model file is generated.
It should be noted that the training of the new model is offline, and after the new model is trained, the offline program sends a model change message to the message center.
Step S604, the service instance obtains metadata from the configuration center; that is, when the service instance detects the model change message, it obtains metadata from the configuration center, where the metadata at least includes model information.
Specifically, the service instance detects that the model change message indicates that a new model file has been generated; at this time, the service instance acquires model information (for example, the type of the model and the creation time of the model) from the configuration center and then obtains the corresponding metadata from that information.
Step S606, the service instance obtains the model file from the storage, namely the service instance obtains the new model file from the storage by using the metadata;
Specifically, the service instance searches the storage for the new model file matching the model information, such as the type of the model and the creation time of the model.
In step S608, the service instance loads the new model file into the memory; as shown in fig. 6, the new model file is loaded into address 2.
Step S610, after the service instance loads the new model file into the memory, the service instance subscribes to the model change messages from the message center, where a model change message is used to indicate whether a new model file has been generated again.
Specifically, step S610 includes:
step S6102, the service instance sends a new model loading ready message to the message center;
step S6104, after the service instance sends a new model load ready message to the message center, the service instance subscribes the new model load ready message for all service instances from the message center.
In an alternative embodiment, after the service instance receives the model change message from the message center, the steps shown in fig. 5 are re-executed, and then a new model load ready message is sent to the message center, and all service instance ready messages are subscribed from the message center. At this point, the memories of all node instances of the tenant have two model files, namely a new model file and an original model file, and finally, a message that all instances are ready is triggered.
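For illustration only, the hot-loading flow of fig. 6 (steps S602 to S610) might be sketched as follows, again reusing the hypothetical ModelHolder, MessageCenter, ConfigCenter and ModelStorage stand-ins from the earlier sketches; all names and topic strings are assumptions.

```java
// Hypothetical sketch of the hot-loading flow in fig. 6 (steps S602-S610).
public class HotLoadingFlow {

    public static Object onModelChanged(StartupFlow.ConfigCenter config,
                                        StartupFlow.ModelStorage storage,
                                        MessageCenter messages,
                                        String tenantId, String instanceId) {
        // S604: the model change message has been detected; fetch metadata from the configuration center.
        String metadata = config.metadataFor(tenantId);

        // S606: use the metadata to fetch the trained new model file from the storage.
        String newModelPath = storage.lookup(metadata);

        // S608: load the new model into memory alongside the original one
        // ("address 2" in fig. 6); the original reference is left untouched for now.
        Object newModel = new ModelHolder.Model(newModelPath);

        // S610: announce readiness, then subscribe to the load-ready messages of all instances;
        // once every instance is ready, the hot-swap steps S1-S6 are carried out.
        messages.publish("new-model-load-ready:" + tenantId, instanceId);
        messages.subscribe("new-model-load-ready:" + tenantId,
                msg -> { /* count ready instances; when all are ready, start the hot swap */ });

        // The handle is later handed to the reference switch (step S3 of the hot swap).
        return newModel;
    }
}
```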
It should be noted that the hot loading and system service starting process is two parallel and independent processes, and the two processes do not interfere with each other.
Example 3
According to an embodiment of the present invention, there is further provided an embodiment of a device for processing a model file, as shown in fig. 7, the device includes: an obtaining module 701 and a receiving module 703.
An obtaining module 701, configured to obtain a model file loaded into a memory by a service instance of a system service, where the model file comprises an original model file and a new model file, and a first reference relationship exists between the original model file in the memory and the service instance.
In the obtaining module 701, the system service may be, but is not limited to, an intelligent interaction service (e.g., an intelligent question and answer service), the system service includes at least one service instance, and each service instance may load a model file corresponding to the service instance into a memory for a tenant renting the system service.
In an alternative embodiment, the tenants leasing the system service are four tenants A, B, C and D, which use the system service in different environments: for example, tenant A uses it in an entertainment question-and-answer system, tenant B in a sports question-and-answer system, tenant C in a military question-and-answer system, and tenant D in a daily question-and-answer system, so the model files used by the four tenants in the same system service are different. When a tenant modifies its model file, a new model file is obtained and the model file needs to be reloaded; the model file here is the new model file, for example when tenant A adds a new element to its model file. When a user of a tenant service, that is, a user corresponding to one of the above four tenants, uses the intelligent question-and-answer system, a service instance is generated; for example, the process in which user A' of tenant A asks a question through the entertainment question-and-answer system is a service instance, and the question asked by user A' then needs to be queried against the new model file.
It should be noted that a reference in a programming language means declaring a variable and assigning the address of an object to that variable; assigning a new object to the variable essentially changes the address the variable refers to. In a programming language that runs on a virtual machine, such as Java, an object that is no longer referenced is destroyed during periodic garbage collection, and the corresponding memory space is released. Thus, the first reference relationship between the original model file and the service instance can be implemented through a variable. In addition, establishing the first reference relationship between the original model file and the service instance by variable reference can be achieved by interfacing a C++ algorithm through JNA (Java Native Access, which provides dynamic access to native libraries).
With the obtaining module 701, a correspondence between the original model file and the service instance can be established by variable reference, and the starting process of the system service can then be completed according to that correspondence.
The receiving module 703 is configured to, after receiving the hot replacement instruction, convert the first reference relationship into a second reference relationship, where the second reference relationship is a reference relationship between a new model file in the memory and the service instance.
In the receiving module 703, after the tenant modifies the original model file and obtains a new model file, the original model file needs to be replaced with the new one, and the tenant may send a hot replacement instruction at this point. While the system service replaces the model file, the reference relationship between the service instance and the model file also changes: the reference relationship between the new model file and the service instance is the second reference relationship, which can likewise be established by variable reference.
In an optional embodiment, after the tenant sends the hot replacement instruction, the system service maps the model file loaded into the memory to a variable in the system service by variable reference, so that the model file is decoupled from the system service. The system service learns of model file changes through a message mechanism, dynamically loads the new model file while running, and finally, after the nodes of all service instances in the system service are ready, replaces the original model file directly with the new model file, completing the hot replacement of the model file.
It should be noted that after completing the hot replacement of the model file, each service instance sends a message that the new model replacement is ready; thereafter, each instance processes requests using the new model file.
The receiving module 703 can ensure the availability and stability of the system service, and further realize the effect of hot replacement of the model file without stopping the system service.
As can be seen from the above, the model file loaded into the memory by the service instance of the system service is obtained, and after the hot replacement instruction is received, the first reference relationship is converted into the second reference relationship, where the model file includes the original model file and the new model file, the first reference relationship exists between the original model file in the memory and the service instance, and the second reference relationship is the reference relationship between the new model file in the memory and the service instance. Because a variable reference method is adopted, the original model file is mapped to the service instance of the system service through a variable; when the original model file needs to be hot-replaced, only the new model file needs to be mapped to the service instance through the variable. The change of the model file is thus announced through a message mechanism and the model file is hot-replaced through the reference mechanism, achieving the technical effects of hot-replacing the model file while keeping the service consistent, and solving the technical problems that existing system services cannot hot-swap the model and cannot keep the service consistent.
It should be noted that the obtaining module 701 and the receiving module 703 correspond to steps S202 to S204 in embodiment 2; the examples and application scenarios implemented by the two modules are the same as those of the corresponding steps, but are not limited to the disclosure of embodiment 2.
Optionally, the system service includes a plurality of service instances. After each service instance finishes loading the model file into the memory, it blocks the thread used to execute processing requests, and only once the first reference relationship has been converted into the second reference relationship does it start executing the processing request, where the processing request is used to call the model file corresponding to the service instance.
Optionally, after the first reference relationship of each service instance has been converted into the second reference relationship, each service instance sends a new-model-ready message, and once every service instance has sent its new-model-ready message, the blocked threads are woken up. A sketch of such gating of request threads is given below.
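One way to realize the blocking and wake-up just described is a latch that request threads wait on while the swap is in flight. The sketch below is an assumption about the mechanism, not the claimed implementation; the method names are illustrative.

import java.util.concurrent.CountDownLatch;
import java.util.function.Function;

public class RequestGate {

    // An already-released latch means the gate is open and requests pass straight through.
    private volatile CountDownLatch gate = new CountDownLatch(0);

    // Called by the service instance once it has loaded the new model file into memory.
    public void blockRequests() {
        gate = new CountDownLatch(1);
    }

    // Called when every service instance has sent its new-model-ready message.
    public void wakeBlockedThreads() {
        gate.countDown();
    }

    // Wraps the thread that executes a processing request against the current model.
    public String handle(String question, Function<String, String> queryModel)
            throws InterruptedException {
        gate.await();                     // blocks only while a hot replacement is pending
        return queryModel.apply(question);
    }
}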
Optionally, in the case that the model file is the original model file, the processing apparatus of the model file further includes a starting module and a loading module. The starting module is used for acquiring the original model file corresponding to the service instance when the system service is started; the loading module is used for loading the original model file into the memory and creating the first reference relationship between the original model file and the service instance.
It should be noted that the starting module and the loading module correspond to steps S402 to S404 in embodiment 2; the examples and application scenarios implemented by the two modules are the same as those of the corresponding steps, but are not limited to the disclosure of embodiment 2.
Optionally, the starting module includes a data acquisition module and a model acquisition module. The data acquisition module is used for the service instance to acquire metadata from the configuration center, where the metadata at least includes model information; the model acquisition module is used for the service instance to acquire the original model file from storage using the metadata.
It should be noted that the data acquisition module and the model acquisition module correspond to steps S51 to S53 in embodiment 2; the examples and application scenarios implemented by the two modules are the same as those of the corresponding steps, but are not limited to the disclosure of embodiment 2.
Optionally, the loading module includes a first loading module and a creating module. The first loading module is used for the service instance to load the original model file into the memory; the creating module is used for creating the first reference relationship by using a variable to reference the original model file stored in the memory, where the value assigned to the variable is the storage address of the original model file. A hedged JNA sketch of this address-valued variable follows.
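Because the document mentions interfacing a C++ algorithm through JNA, the address-valued variable could look roughly like the sketch below. The native library name "modelengine" and the load_model/free_model entry points are hypothetical assumptions; only Library, Native.load and Pointer are real JNA APIs.

import com.sun.jna.Library;
import com.sun.jna.Native;
import com.sun.jna.Pointer;

public class NativeModelReference {

    // Hypothetical C++ model engine exposed through JNA.
    public interface ModelEngine extends Library {
        ModelEngine INSTANCE = Native.load("modelengine", ModelEngine.class);

        Pointer load_model(String path);   // returns the storage address of the loaded model
        void free_model(Pointer model);    // releases the native memory behind that address
    }

    // The variable's assignment is literally the storage address of the model file.
    private volatile Pointer currentModel;

    // First reference relationship, created while the system service starts.
    public void createFirstReference(String originalModelPath) {
        currentModel = ModelEngine.INSTANCE.load_model(originalModelPath);
    }

    // Second reference relationship, created during hot replacement.
    public void hotReplace(String newModelPath) {
        Pointer oldModel = currentModel;
        currentModel = ModelEngine.INSTANCE.load_model(newModelPath);
        ModelEngine.INSTANCE.free_model(oldModel); // native memory is not garbage collected
    }

    public Pointer current() {
        return currentModel;
    }
}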
It should be noted that the first loading module and the creating module correspond to steps S57 to S59 in embodiment 2; the examples and application scenarios implemented by the two modules are the same as those of the corresponding steps, but are not limited to the disclosure of embodiment 2.
Optionally, after the service instance loads the original model file into the memory, the service instance subscribes model change information from the message subscription center, where the model change information is used to represent whether to generate a new model file.
Optionally, in the case that the model file is the new model file, the processing apparatus of the model file further includes a first obtaining module, a second obtaining module, and a second loading module. Specifically, the first obtaining module is configured so that, if the service instance detects model change information, the service instance obtains metadata from the configuration center, where the model change information is generated when a new model file is obtained by training, and the metadata at least includes model information; the second obtaining module is used for the service instance to obtain the new model file from storage using the metadata; and the second loading module is used for the service instance to load the new model file into the memory. A sketch of this metadata-driven fetch is given below.
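A rough sketch of the metadata-driven fetch performed by these modules is shown below; the ConfigCenter and ModelStorage interfaces and the fields inside ModelMetadata are assumptions about what "metadata containing model information" might hold, not structures defined by the patent.

public class NewModelLoader {

    // Assumed shape of the metadata obtained from the configuration center.
    public record ModelMetadata(String tenantId, String modelVersion, String storagePath) {}

    // Hypothetical clients for the configuration center and the model storage.
    public interface ConfigCenter {
        ModelMetadata fetchMetadata(String tenantId);
    }

    public interface ModelStorage {
        byte[] read(String storagePath);
    }

    private final ConfigCenter configCenter;
    private final ModelStorage storage;

    public NewModelLoader(ConfigCenter configCenter, ModelStorage storage) {
        this.configCenter = configCenter;
        this.storage = storage;
    }

    // Invoked after the service instance detects a model change message.
    public byte[] fetchNewModel(String tenantId) {
        ModelMetadata meta = configCenter.fetchMetadata(tenantId); // first obtaining module
        return storage.read(meta.storagePath());                   // second obtaining module
    }
}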
It should be noted that the first obtaining module, the second obtaining module, and the second loading module correspond to steps S604 to S608 in embodiment 2; the examples and application scenarios implemented by the three modules are the same as those of the corresponding steps, but are not limited to the disclosure of embodiment 2.
Optionally, after the service instance loads the new model file into the memory, the service instance subscribes model change information from the message subscription center, where the model change information is used to represent whether to generate the new model file again.
Example 4
According to an embodiment of the present invention, there is further provided an embodiment of a system for processing a model file, as shown in fig. 8, which includes: a service terminal 801 and a server 803.
The service terminal 801 is configured to initiate a processing request, where the processing request is used to invoke a model file corresponding to a service instance, where the model file includes: an original model file and a new model file;
the server 803 communicates with the service terminal and is configured to create the first reference relationship between the original model file and the service instance when the system service is started, and to convert the first reference relationship into the second reference relationship after receiving the hot replacement instruction, where the second reference relationship is the reference relationship between the new model file in the memory and the service instance. After the service instance of the server finishes loading the model file into the memory, execution of the received processing request is blocked; only once the first reference relationship has been converted into the second reference relationship does execution of the processing request begin, and the execution result is returned to the service terminal.
In an alternative embodiment, the service terminal 801 includes, but is not limited to, an intelligent interactive terminal (e.g., an intelligent question-and-answer system) installed on an intelligent mobile terminal (e.g., a smartphone, a tablet, a wearable device, etc.) or a computer. When a user asks a question by voice or text on the intelligent question-and-answer system of an intelligent mobile terminal, the intelligent question-and-answer system receives the user's processing request and routes it to the corresponding tenant of the system service, where each tenant of the system service corresponds to a tenant model (namely a model file). The server corresponding to the system service receives the processing request sent by the intelligent question-and-answer system and establishes the first reference relationship between the original model file and the service instance by variable reference. When a tenant needs the system service to load a new model file, the tenant sends a hot replacement instruction to the system service; the system service maps the model file loaded into the memory to a variable in the system service by variable reference, so that the model file is decoupled from the system service, learns of the model file change through a message mechanism, dynamically loads the new model file while running, and finally, after the nodes of all service instances in the system service are ready, replaces the original model file directly with the new model file, completing the hot replacement of the model file.
It should be noted that the first reference relationship is established by assigning the address of the original model file to a variable, that is, the variable stores the in-memory address of the original model file; the second reference relationship assigns the address of the new model file to the variable. During hot switching the original model file and the new model file exist in the service system at the same time, so to avoid inconsistency among the service instances during the hot replacement, after all service instances have finished loading, each service instance receives the new-model-ready message and blocks the thread currently processing requests. Once the first reference relationship has been completely switched to the second reference relationship, each service instance sends a new-model-replacement-ready message; on receiving that subscribed message, the blocked threads are woken up, and from that point the service instances use the new model file, which completes the hot replacement process of the system service.
In addition, it should be noted that the loading process of the new model and the starting process of the service system are two parallel, independent processes that do not affect each other.
As can be seen from the above, the service terminal initiates a processing request, and the server in communication with the service terminal creates the first reference relationship between the original model file and the service instance when the system service is started and converts the first reference relationship into the second reference relationship after receiving the hot replacement instruction, where the model file includes the original model file and the new model file, and the second reference relationship is the reference relationship between the new model file in the memory and the service instance. After the service instance of the server finishes loading the model file into the memory, execution of the received processing request is blocked; only once the first reference relationship has been converted into the second reference relationship does execution begin, and the execution result is returned to the service terminal. Because a variable reference method is adopted, the original model file is mapped to the service instance of the system service through a variable; when the original model file needs to be hot-replaced, only the new model file needs to be mapped to the service instance through the variable. The change of the model file is thus announced through a message mechanism and the model file is hot-replaced through the reference mechanism, achieving the technical effects of hot-replacing the model file while keeping the service consistent, and solving the technical problems that existing system services cannot hot-swap the model and cannot keep the service consistent.
Optionally, the server is further configured to obtain an original model file corresponding to the service instance when starting the system service; and loading the original model file into a memory, and creating a first reference relation between the original model file and the service instance.
Optionally, the server is further configured to obtain, by the service instance, metadata from the configuration center, where the metadata at least includes: model information; the service instance uses the metadata to retrieve the original model file from memory.
Optionally, the server is further configured to load the original model file into the memory by the service instance; and using a variable to refer to the original model file stored in the memory, and creating a first reference relation, wherein the assignment of the variable is the storage address of the original model file.
Optionally, the server is further configured so that, if the service instance detects the model change information, the service instance obtains metadata from the configuration center, where the model change information is generated when a new model file is obtained through training, and the metadata at least includes: model information; the service instance uses the metadata to acquire the trained new model file from storage; and the service instance loads the new model file into the memory.
Example 5
The embodiment of the invention can provide a computer terminal which can be any computer terminal device in a computer terminal group. Optionally, in this embodiment, the computer terminal may also be replaced with a terminal device such as a mobile terminal.
Optionally, in this embodiment, the computer terminal may be located in at least one network device of a plurality of network devices of a computer network.
In this embodiment, the computer terminal may execute the program code of the following steps in the processing method of the model file: obtaining a model file loaded into a memory by a service instance of a system service, where the model file includes an original model file and a new model file, and a first reference relationship exists between the original model file in the memory and the service instance; and after receiving the hot replacement instruction, converting the first reference relationship into a second reference relationship, where the second reference relationship is the reference relationship between the new model file in the memory and the service instance.
Alternatively, fig. 9 is a block diagram of a computer terminal according to an embodiment of the present invention. As shown in fig. 9, the computer terminal 9 may include one or more processors 91 (shown as 91a, 91b, ..., 91n in the figure; the processor 91 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 93 for storing data, and a transmission module 95 for communication functions. In addition, it may also include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power source, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 9 is only an illustration and does not limit the structure of the electronic device. For example, the computer terminal 9 may also include more or fewer components than shown in fig. 9, or have a different configuration than shown in fig. 9.
It should be noted that the one or more processors 91 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Furthermore, the data processing circuit may be a single stand-alone processing module, or incorporated in whole or in part into any of the other elements in the computer terminal 9 (or mobile device). As referred to in the embodiments of the application, the data processing circuit acts as a processor control (e.g. selection of a variable resistance termination path connected to the interface).
The memory 93 may be used to store software programs and modules of application software, such as the program instructions/data storage devices corresponding to the processing method of the model file in the embodiment of the present invention, and the processor 91 executes various functional applications and data processing by running the software programs and modules stored in the memory 93, that is, implements the processing method of the model file described above. The memory 93 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 93 may further include memory located remotely from the processor 91, which may be connected to the computer terminal 9 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 95 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 9. In one example, the transmission device 95 includes a Network adapter (NIC) that can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 95 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 9 (or mobile device).
It should be noted here that in some alternative embodiments, the computer terminal 9 shown in fig. 9 may include hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium), or a combination of both. It should also be noted that fig. 9 is only one example of a specific implementation and is intended to illustrate the types of components that may be present in the computer terminal described above.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: obtaining a model file loaded into a memory by a service instance of a system service, where the model file includes an original model file and a new model file, and a first reference relationship exists between the original model file in the memory and the service instance; and after receiving the hot replacement instruction, converting the first reference relationship into a second reference relationship, where the second reference relationship is the reference relationship between the new model file in the memory and the service instance.
Optionally, the processor may further execute the program code of the following steps: when starting system service, obtaining an original model file corresponding to a service instance; and loading the original model file into a memory, and creating a first reference relation between the original model file and the service instance.
Optionally, the processor may further execute the program code of the following steps: the service instance obtains metadata from the configuration center, wherein the metadata at least comprises: model information; the service instance uses the metadata to retrieve the original model file from memory.
Optionally, the processor may further execute the program code of the following steps: the service instance loads an original model file to a memory; and using a variable to refer to the original model file stored in the memory, and creating a first reference relation, wherein the assignment of the variable is the storage address of the original model file.
Optionally, the processor may further execute the program code of the following steps: if the service instance detects the model change information, the service instance acquires metadata from the configuration center, where the model change information is generated when a new model file is obtained through training, and the metadata at least includes: model information; the service instance uses the metadata to acquire the trained new model file from storage; and the service instance loads the new model file into the memory.
By adopting the embodiment of the invention, a method for processing a model file is provided: the model file loaded into the memory by the service instance of the system service is obtained, and after the hot replacement instruction is received, the first reference relationship is converted into the second reference relationship, where the model file includes the original model file and the new model file, the first reference relationship exists between the original model file in the memory and the service instance, and the second reference relationship is the reference relationship between the new model file in the memory and the service instance. This fulfills the aim of announcing model file changes through a message mechanism and replacing the model file through the reference mechanism, achieves the technical effects of hot-replacing the model file while keeping the service consistent, and solves the technical problems that existing system services cannot hot-swap the model and cannot keep the service consistent.
It can be understood by those skilled in the art that the structure shown in fig. 9 is only an illustration, and the computer terminal may also be a terminal device such as a smartphone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 9 does not limit the structure of the above electronic device. For example, the computer terminal 9 may also include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in fig. 9, or have a different configuration than shown in fig. 9.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
Example 6
The embodiment of the invention also provides a storage medium. Optionally, in this embodiment, the storage medium may be configured to store a program code executed by the processing method of the model file provided in embodiment 2.
Optionally, in this embodiment, the storage medium may be located in any one computer terminal in a computer terminal group in a computer network, or in any one mobile terminal in a mobile terminal group.
Optionally, in this embodiment, the storage medium is configured to store program codes for performing the following steps: obtaining a model file loaded into a memory by a service instance of a system service, where the model file includes an original model file and a new model file, and a first reference relationship exists between the original model file in the memory and the service instance; and after receiving the hot replacement instruction, converting the first reference relationship into a second reference relationship, where the second reference relationship is the reference relationship between the new model file in the memory and the service instance.
Optionally, in this embodiment, the storage medium is further configured to store program code for performing the following steps: when starting system service, obtaining an original model file corresponding to a service instance; and loading the original model file into a memory, and creating a first reference relation between the original model file and the service instance.
Optionally, in this embodiment, the storage medium is configured to store program codes for performing the following steps: the service instance obtains metadata from the configuration center, wherein the metadata at least comprises: model information; the service instance uses the metadata to retrieve the original model file from memory.
Optionally, in this embodiment, the storage medium is configured to store program codes for performing the following steps: the service instance loads an original model file to a memory; and using a variable to refer to the original model file stored in the memory, and creating a first reference relation, wherein the assignment of the variable is the storage address of the original model file.
Optionally, in this embodiment, the storage medium is configured to store program codes for performing the following steps: if the service instance detects the model change information, the service instance acquires metadata from the configuration center, where the model change information is generated when a new model file is obtained through training, and the metadata at least includes: model information; the service instance uses the metadata to acquire the trained new model file from storage; and the service instance loads the new model file into the memory.
Example 7
An embodiment of the present invention further provides an embodiment of a processing apparatus for model files. Fig. 10 is a schematic structural diagram of a processing apparatus for model files according to an embodiment of the present invention; for descriptive purposes, the depicted structure is only one example of a suitable environment and does not set any limitation on the scope of use or functionality of the present application. Neither should the processing apparatus be interpreted as having any dependency on or requirement relating to any one or combination of the components illustrated in fig. 10. As shown in fig. 10, the processing apparatus includes: a memory 1001 and a processor 1003.
The memory 1001 is configured to store the model file loaded by a service instance of a system service, where the model file includes: an original model file and a new model file; the processor 1003 is configured to create a first reference relationship between the original model file and the service instance when the system service is started, and to convert the first reference relationship into a second reference relationship after the hot replacement instruction is received, where the second reference relationship is the reference relationship between the new model file and the service instance.
It should be noted that the model file may be, but is not limited to, a model file stored in a memory, and the system service may be, but is not limited to, an intelligent interaction service (e.g., an intelligent question and answer service). In addition, the first reference relationship between the original model file and the service instance can be established, according to the variable reference method, by interfacing with a C++ algorithm through JNA. Like the first reference relationship, the second reference relationship may also be established by the variable reference method.
As can be seen from the above, the storage stores the model file loaded into the memory by the service instance of the system service, and the processor creates the first reference relationship between the original model file and the service instance when starting the system service and converts the first reference relationship into the second reference relationship after receiving the hot replacement instruction, where the model file includes the original model file and the new model file, the first reference relationship exists between the original model file in the memory and the service instance, and the second reference relationship is the reference relationship between the new model file in the memory and the service instance. Because a variable reference method is adopted, the original model file is mapped to the service instance of the system service through a variable; when the original model file needs to be hot-replaced, only the new model file needs to be mapped to the service instance through the variable. The change of the model file is thus announced through a message mechanism and the model file is hot-replaced through the reference mechanism, achieving the technical effects of hot-replacing the model file while keeping the service consistent, and solving the technical problems that existing system services cannot hot-swap the model and cannot keep the service consistent.
Example 8
According to an embodiment of the present invention, there is further provided an embodiment of a method for processing a model file, which can be executed in the processing apparatus for model files provided in embodiment 7. Fig. 11 is a flowchart of a method for processing a model file according to an embodiment of the present invention; as shown in fig. 11, the method includes the following steps:
step S1102, obtaining a model file loaded by a service instance of the system service, where the model file includes an original model file and a new model file, and a first reference relationship exists between the original model file and the service instance;
step S1104, after receiving the hot replacement instruction, converts the first reference relationship into a second reference relationship, where the second reference relationship is a reference relationship between the new model file and the service instance.
It should be noted that the system service may be, but is not limited to, an intelligent interaction service (e.g., an intelligent question and answer service), the system service includes at least one service instance, and each service instance may load its corresponding model file into a memory for use by a tenant renting the system service. In addition, the model file may be, but is not limited to, a model file stored in a memory.
In an alternative embodiment, the storage in the system service stores the model file loaded into the memory by the service instance, and when the system service is started, the processor in the system service creates the reference relationship between the model file and the service instance according to the model file. After the tenant sends out the hot replacement instruction, the processor in the system service receives the hot replacement instruction and maps the model file loaded into the memory to a variable in the system service by variable reference, so that the model file is decoupled from the system service; the system service learns of the change of the model file through a message mechanism, dynamically loads the new model file while running, and finally, after the nodes of all service instances in the system service are ready, replaces the original model file directly with the new model file, completing the hot replacement of the model file. After the hot replacement of the model file is completed, the first reference relationship between the original model file and the service instance has been switched to the second reference relationship between the new model file and the service instance.
Based on the solution defined in steps S1102 to S1104 of the foregoing embodiment, the model file loaded by the service instance of the system service is obtained, and after the hot replacement instruction is received, the first reference relationship is converted into the second reference relationship, where the model file includes the original model file and the new model file, the first reference relationship exists between the original model file and the service instance, and the second reference relationship is the reference relationship between the new model file and the service instance. Because a variable reference method is adopted, the original model file is mapped to the service instance of the system service through a variable; when the original model file needs to be hot-replaced, only the new model file needs to be mapped to the service instance through the variable. The change of the model file is thus announced through a message mechanism and the model file is hot-replaced through the reference mechanism, achieving the technical effects of hot-replacing the model file while keeping the service consistent, and solving the technical problems that existing system services cannot hot-swap the model and cannot keep the service consistent.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various improvements and modifications can be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (13)

1. An apparatus for processing model files, comprising:
the storage is used for storing a model file loaded into the memory by a service instance of the system service, wherein the model file comprises: an original model file and a new model file;
the processor is used for creating a first reference relation between the original model file and the service instance when the system service is started, and converting the first reference relation into a second reference relation after a hot replacement instruction is received, wherein the second reference relation is the reference relation between a new model file in the memory and the service instance;
wherein the system service comprises a plurality of service instances; after each service instance finishes loading the model file into the memory, the service instance blocks the thread used for executing a processing request, and execution of the processing request starts only after the first reference relation is converted into the second reference relation, wherein the processing request is used for calling the model file corresponding to the service instance.
2. A method for processing a model file, comprising:
obtaining a model file loaded into a memory by a service instance of a system service, wherein the model file comprises: an original model file and a new model file, and a first reference relationship exists between the original model file in the memory and the service instance;
after receiving a hot replacement instruction, converting the first reference relationship into a second reference relationship, wherein the second reference relationship is a reference relationship between the new model file in the memory and the service instance;
wherein the system service comprises a plurality of service instances; after each service instance finishes loading the model file into the memory, the service instance blocks the thread used for executing a processing request, and execution of the processing request starts only after the first reference relationship is converted into the second reference relationship, wherein the processing request is used for calling the model file corresponding to the service instance.
3. The method of claim 2, wherein each of the service instances sends a new model ready message after the first reference relationship of each of the service instances is converted to the second reference relationship, and wherein an already blocked thread will be woken up if each of the service instances sends a corresponding new model ready message.
4. The method according to any one of claims 2 to 3, wherein, in the case that the model file is the original model file, before obtaining the model file loaded into the memory by the service instance of the system service, the method further comprises:
when the system service is started, acquiring an original model file corresponding to the service instance;
and loading the original model file into the memory, and creating the first reference relation between the original model file and the service instance.
5. The method of claim 4, wherein obtaining the original model file corresponding to the service instance when the system service is started comprises:
the service instance obtains metadata from a configuration center, wherein the metadata at least comprises: model information;
the service instance retrieves the original model file from memory using the metadata.
6. The method of claim 4, wherein loading the original model file into the memory and creating the first reference relationship between the original model file and the service instance comprises:
the service instance loads the original model file to the memory;
and using a variable to reference the original model file stored in the memory, and creating the first reference relationship, wherein the assignment of the variable is the storage address of the original model file.
7. The method of claim 5, wherein after the service instance loads the original model file into the memory, the service instance subscribes to model change information from a message subscription center, wherein the model change information is used to characterize whether to generate the new model file.
8. The method according to any one of claims 2 to 3, wherein, in the case that the model file is the new model file, before obtaining the model file loaded into the memory by the service instance of the system service, the method further comprises:
if the service instance detects the model change information, the service instance acquires metadata from a configuration center, wherein the model change information is generated under the condition that the new model file is obtained through training, and the metadata at least comprises: model information;
the service instance acquires the new model file obtained by training from storage by using the metadata;
and the service instance loads the new model file to the memory.
9. The method of claim 8, wherein the service instance subscribes to the model change information from a message subscription center after the service instance loads the new model file into the memory, wherein the model change information is used to characterize whether to regenerate the new model file.
10. An apparatus for processing a model file, comprising:
an obtaining module, configured to obtain a model file loaded into a memory by a service instance of a system service, wherein the model file comprises: an original model file and a new model file, and a first reference relationship exists between the original model file in the memory and the service instance;
the receiving module is used for converting the first reference relationship into a second reference relationship after receiving the hot replacement instruction, wherein the second reference relationship is the reference relationship between the new model file in the memory and the service instance;
wherein the system service comprises a plurality of service instances; after each service instance finishes loading the model file into the memory, the service instance blocks the thread used for executing a processing request, and execution of the processing request starts only after the first reference relationship is converted into the second reference relationship, wherein the processing request is used for calling the model file corresponding to the service instance.
11. A system for processing model files, comprising:
the service terminal is used for initiating a processing request, wherein the processing request is used for calling a model file corresponding to a service instance, and the model file comprises: an original model file and a new model file;
the server is communicated with the business terminal and used for creating a first reference relation between the original model file and the service instance when system service is started, and converting the first reference relation into a second reference relation after a hot replacement instruction is received, wherein the second reference relation is the reference relation between a new model file in the memory and the service instance;
after the service instance of the server finishes loading the model file into the memory, execution of the received processing request is blocked; execution of the processing request starts only after the first reference relation is converted into the second reference relation, and the execution result is returned to the service terminal.
12. An apparatus for processing model files, comprising:
a memory for storing a model file loaded by a service instance of a system service, wherein the model file comprises: an original model file and a new model file;
the processor is used for creating a first reference relation between the original model file and the service instance when the system service is started, and converting the first reference relation into a second reference relation after a hot replacement instruction is received, wherein the second reference relation is the reference relation between the new model file and the service instance;
wherein the system service comprises a plurality of service instances; after each service instance finishes loading the model file, the service instance blocks the thread used for executing a processing request, and execution of the processing request starts only after the first reference relation is converted into the second reference relation, wherein the processing request is used for calling the model file corresponding to the service instance.
13. A method for processing a model file, comprising:
obtaining a model file loaded by a service instance of a system service, wherein the model file comprises: an original model file and a new model file, and a first reference relationship exists between the original model file and the service instance;
after receiving a hot replacement instruction, converting the first reference relationship into a second reference relationship, wherein the second reference relationship is a reference relationship between the new model file and the service instance;
wherein the system service comprises a plurality of service instances; after each service instance finishes loading the model file, the service instance blocks the thread used for executing a processing request, and execution of the processing request starts only after the first reference relationship is converted into the second reference relationship, wherein the processing request is used for calling the model file corresponding to the service instance.
CN201710703918.6A 2017-08-16 2017-08-16 Model file processing method, device and system and processing equipment Active CN109408134B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710703918.6A CN109408134B (en) 2017-08-16 2017-08-16 Model file processing method, device and system and processing equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710703918.6A CN109408134B (en) 2017-08-16 2017-08-16 Model file processing method, device and system and processing equipment

Publications (2)

Publication Number Publication Date
CN109408134A CN109408134A (en) 2019-03-01
CN109408134B true CN109408134B (en) 2022-04-08

Family

ID=65454680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710703918.6A Active CN109408134B (en) 2017-08-16 2017-08-16 Model file processing method, device and system and processing equipment

Country Status (1)

Country Link
CN (1) CN109408134B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7886287B1 (en) * 2003-08-27 2011-02-08 Avaya Inc. Method and apparatus for hot updating of running processes
CN102436373A (en) * 2011-09-13 2012-05-02 上海普元信息技术股份有限公司 Method for realizing resource loading and resource hot updating in enterprise distributed application system
EP2701060A1 (en) * 2012-08-24 2014-02-26 CA, Inc. Method for managing the versioning (update and rollback) of an Agent instrumenting Java application
CN103984582A (en) * 2014-06-04 2014-08-13 网易(杭州)网络有限公司 Method and device for hot updating
CN104461625A (en) * 2014-12-04 2015-03-25 上海斐讯数据通信技术有限公司 Hot patch realization method and system
CN105677415A (en) * 2016-01-06 2016-06-15 网易(杭州)网络有限公司 Hot updating method and device
CN106156186A (en) * 2015-04-21 2016-11-23 阿里巴巴集团控股有限公司 A kind of data model managing device, server and data processing method
CN106502751A (en) * 2016-11-15 2017-03-15 努比亚技术有限公司 Heat deployment apparatus and method
CN106528225A (en) * 2016-11-03 2017-03-22 北京像素软件科技股份有限公司 Hot update method and apparatus for game server

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6880086B2 (en) * 2000-05-20 2005-04-12 Ciena Corporation Signatures for facilitating hot upgrades of modular software components
TWI242725B (en) * 2001-06-11 2005-11-01 Oce Tech Bv A method for executing a hot migrate operation through an incremental roll-over process that uses migration plug-in means for conversion during an upgrade transition, and a multiprocessing system and a system arranged for implementing such method
US7818736B2 (en) * 2005-09-14 2010-10-19 International Business Machines Corporation Dynamic update mechanisms in operating systems
US8561048B2 (en) * 2005-12-29 2013-10-15 Sap Ag Late and dynamic binding of pattern components
US8495351B2 (en) * 2010-10-13 2013-07-23 International Business Machines Corporation Preparing and preserving a system configuration during a hot upgrade
CN102650953B (en) * 2011-02-28 2014-05-07 北京航空航天大学 Concurrently-optimized BPMN (Business Process Modeling Notation) combined service execution engine and method
CN103902319A (en) * 2012-12-30 2014-07-02 青岛海尔软件有限公司 Hot deployment method based on server-side javascript
US9558010B2 (en) * 2013-03-14 2017-01-31 International Business Machines Corporation Fast hot boot of a computer system
CN104657158B (en) * 2013-11-20 2018-02-23 北京先进数通信息技术股份公司 The method and apparatus of business processing in a kind of operation system
US9477461B1 (en) * 2014-03-12 2016-10-25 Cloud Linux Zug GmbH Systems and methods for generating and applying operating system live updates
CN104516760B (en) * 2014-12-12 2018-01-09 华为技术有限公司 A kind of method, device and mobile terminal of operating system hot-swap
CN106201566B (en) * 2015-05-07 2019-08-23 阿里巴巴集团控股有限公司 Benefit wins the hot upgrade method of big special software and equipment
CN106250199B (en) * 2016-07-26 2019-06-21 北京北森云计算股份有限公司 A kind of the dynamic micro services call method and device of multilingual cloud compiling

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7886287B1 (en) * 2003-08-27 2011-02-08 Avaya Inc. Method and apparatus for hot updating of running processes
CN102436373A (en) * 2011-09-13 2012-05-02 上海普元信息技术股份有限公司 Method for realizing resource loading and resource hot updating in enterprise distributed application system
EP2701060A1 (en) * 2012-08-24 2014-02-26 CA, Inc. Method for managing the versioning (update and rollback) of an Agent instrumenting Java application
CN103984582A (en) * 2014-06-04 2014-08-13 网易(杭州)网络有限公司 Method and device for hot updating
CN104461625A (en) * 2014-12-04 2015-03-25 上海斐讯数据通信技术有限公司 Hot patch realization method and system
CN106156186A (en) * 2015-04-21 2016-11-23 阿里巴巴集团控股有限公司 A kind of data model managing device, server and data processing method
CN105677415A (en) * 2016-01-06 2016-06-15 网易(杭州)网络有限公司 Hot updating method and device
CN106528225A (en) * 2016-11-03 2017-03-22 北京像素软件科技股份有限公司 Hot update method and apparatus for game server
CN106502751A (en) * 2016-11-15 2017-03-15 努比亚技术有限公司 Heat deployment apparatus and method

Also Published As

Publication number Publication date
CN109408134A (en) 2019-03-01

Similar Documents

Publication Publication Date Title
US20220179682A1 (en) Task processing method, apparatus, and system based on distributed system
US10110549B2 (en) Method, server and electronic devices of synchronizing notification messages for electronic devices
US20190052476A1 (en) Smart appliance control method and smart appliance
US10455542B2 (en) Method of synchronizing notification messages for electronic devices and electronic devices
CN105187266B (en) information monitoring method and device
CN109246220B (en) Message pushing system and method
CN106021005B (en) Method and device for providing application service and electronic equipment
EP3211527A1 (en) Multi-screen sharing based application management method and device, and storage medium
CN112181677B (en) Service processing method and device, storage medium and electronic device
US10191732B2 (en) Systems and methods for preventing service disruption during software updates
WO2021102691A1 (en) Resource subscription method and apparatus, computer device, and storage medium
CN114691390A (en) User mode program processing method and device, storage medium and processor
CN113296871A (en) Method, equipment and system for processing container group instance
CN113312083B (en) Application generation method, device and equipment
CN111930565B (en) Process fault self-healing method, device and equipment for components in distributed management system
CN114637549B (en) Data processing method, system and storage medium for service grid-based application
CN110958287B (en) Operation object data synchronization method, device and system
WO2022206231A1 (en) Kubernetes cluster load balance handling method and apparatus, and storage medium
CN109408134B (en) Model file processing method, device and system and processing equipment
CN104699535B (en) A kind of information processing method and electronic equipment
WO2022222968A1 (en) Conference call recovery method, apparatus and system, electronic device, and readable storage medium
CN115102999B (en) DevOps system, service providing method, storage medium and electronic device
CN106230878B (en) Equipment service calling method and device based on AllJoyn framework
CN105577525A (en) Converged communication interaction method, device and system
CN114157627B (en) Group processing method, device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant