
CN109117625B - Method and device for determining safety state of AI software system - Google Patents

Method and device for determining safety state of AI software system

Info

Publication number
CN109117625B
Authority
CN
China
Prior art keywords
target object
real-time monitoring
module
software system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710481711.9A
Other languages
Chinese (zh)
Other versions
CN109117625A (en)
Inventor
张建永
孙少杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201710481711.9A
Priority to PCT/CN2018/092027 (WO2018233638A1)
Publication of CN109117625A
Application granted
Publication of CN109117625B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30: Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/44: Program or device authentication
    • G06F 21/50: Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 9/00: Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L 9/32: Cryptographic mechanisms or cryptographic arrangements including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials
    • H04L 9/3236: Cryptographic mechanisms or cryptographic arrangements including means for verifying the identity or authority of a user of the system or for message authentication, using cryptographic hash functions
    • H04L 9/3239: Cryptographic mechanisms or cryptographic arrangements including means for verifying the identity or authority of a user of the system or for message authentication, using cryptographic hash functions involving non-keyed hash functions, e.g. modification detection codes [MDCs], MD5, SHA or RIPEMD
    • H04L 9/3247: Cryptographic mechanisms or cryptographic arrangements including means for verifying the identity or authority of a user of the system or for message authentication, involving digital signatures
    • H04L 9/3263: Cryptographic mechanisms or cryptographic arrangements including means for verifying the identity or authority of a user of the system or for message authentication, involving certificates, e.g. public key certificate [PKC] or attribute certificate [AC]; Public key infrastructure [PKI] arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Storage Device Security (AREA)

Abstract

The application discloses a method and a device for determining the safety state of an AI software system, and belongs to the technical field of artificial intelligence. The method comprises the following steps: a monitoring agent module in the AI software system determines a first digest value of a target object and reports the first digest value to a real-time monitoring service module, and the real-time monitoring service module performs security authentication on the target object, thereby implementing security protection of the target object. Because the target object is placed in the REE for execution, the deployment of the software framework of the AI software system remains relatively centralized, which facilitates platformization of the AI software system. In addition, a critical component placed in the REE can fully utilize the abundant computing resources on the REE side, so the critical component is protected without sacrificing its computing capability.

Description

Method and device for determining safety state of AI software system
Technical Field
The present disclosure relates to the field of Artificial Intelligence (AI), and in particular, to a method and an apparatus for determining a security status of an AI software system.
Background
The operating system on a terminal provides a platform on which application software runs, that is, the application software implements its functions through modules deployed at the various layers of the operating system. The system formed by these modules is called the software system of the application software, for example an AI software system. AI software generally processes the user's personal private data while it runs, yet the main execution environment of current operating systems is the open rich execution environment (REE). As a result, critical components of an AI software system deployed in the operating system may face malware threats during data processing. In practical applications, therefore, the security state of the critical components of the AI software system, that is, the security state of the AI software system, needs to be determined so that the AI software system can be protected.
To address the security problems caused by the REE, the GlobalPlatform (GP) organization has proposed the trusted execution environment (TEE), so that two parallel execution environments exist on the operating system: the open REE and the relatively closed TEE. Because a program executing in the TEE must be signed and hash-checked by the TEE, the program executing in the TEE can be secured. Therefore, in the related art, the critical components of the AI software system are placed in the TEE for execution while the other components are placed in the REE, and the security state of the AI software system is determined from the verification result of the critical components executed in the TEE, thereby protecting the AI software system. For example, fig. 1 shows an AI software system based on an AI software framework in the related art. The AI software system includes an AI framework application programming interface (API), a model and key data file, an AI framework body, a hardware abstraction layer (HAL), an algorithm support library, and computation acceleration engines such as a central processing unit (CPU), a graphics processing unit (GPU), and a digital signal processor (DSP). The model, the key data file, and the AI framework body are the critical components of the AI software system, so they are placed in the TEE for execution while the other components execute in the REE, in order to protect the AI software system based on the AI software framework. However, placing the critical components of the AI software system in the TEE while the other components remain in the REE results in a relatively decentralized deployment of the software framework that makes up the AI software system.
Disclosure of Invention
To solve the problem in the related art that the software framework of an AI software system becomes relatively decentralized when the AI software system is protected, this application provides a method and a device for determining the safety state of the AI software system. The technical solutions are as follows:
in a first aspect, a method for determining a safety state of an AI software system is provided, where the method includes:
a monitoring agent module in the AI software system determines a first digest value of a target object in the AI software system, where the first digest value is used to indicate security authentication information of the target object, the execution environment of the operating system in which the AI software system is deployed comprises a rich execution environment (REE) and a trusted execution environment (TEE), the target object and the monitoring agent module are placed in the REE, and the target object is any module, among the plurality of modules of the AI software system deployed on the operating system, that is to undergo security authentication;
the monitoring agent module reports the first digest value to a real-time monitoring service module in the AI software system, the real-time monitoring service module being placed in the TEE;
the real-time monitoring service module receives the first digest value;
and the real-time monitoring service module performs security authentication on the target object according to the first digest value to obtain an authentication result, where the authentication result is used to indicate the security state of the target object.
In this application, the target object to be securely authenticated is placed in the REE, and security authentication of the target object is performed through the monitoring agent module placed in the REE and the real-time monitoring service module placed in the TEE, so as to protect the target object, that is, to protect the AI software system. Because the target object is placed in the REE for execution, the deployment of the software framework of the software system remains relatively centralized without compromising security.
Optionally, the performing, by the real-time monitoring service module, security authentication on the target object according to the first digest value to obtain an authentication result includes:
the real-time monitoring service module acquires a second digest value preset for the target object from a security key storage module in the AI software system, and the security key storage module is arranged in the TEE;
judging whether the first digest value and the second digest value are consistent to obtain an authentication result;
if the first digest value is consistent with the second digest value, the authentication result is that the security authentication passes;
and if the first digest value is not consistent with the second digest value, the authentication result is that the security authentication fails.
Specifically, in the present application, the real-time monitoring service module performs security authentication on the target object by determining whether a first digest value of the target object reported by the monitoring agent module is consistent with a second digest value preset for the target object.
Optionally, before the real-time monitoring service module obtains the second digest value preset for the target object from the security key storage module in the AI software system, the method further includes:
the real-time monitoring service module acquires a digital certificate preset for the target object from the security key storage module;
the real-time monitoring service module verifies whether the digital certificate is valid according to the verification information in the digital certificate;
and when the digital certificate is valid, the real-time monitoring service module triggers the operation of acquiring the second digest value preset for the target object from the security key storage module.
In addition, in order to further enhance the security of the target object, the real-time monitoring service module may check the validity of the digital certificate preset for the target object before determining whether the first digest value of the target object reported by the monitoring agent module is consistent with the second digest value preset for the target object.
Optionally, the method further comprises:
the security key storage module is stored with a plurality of digital certificates, and the monitoring agent module reports the identification of the digital certificate to the real-time monitoring service module when reporting the first digest value to the real-time monitoring service module;
correspondingly, the acquiring, by the real-time monitoring service module, the digital certificate preset for the target object from the security key storage module includes:
and the real-time monitoring service module acquires the digital certificate corresponding to the identifier from the plurality of digital certificates stored in the security key storage module.
When a plurality of digital certificates are stored in the security key storage module, the monitoring agent module reports the identifier of the digital certificate of the target object to the real-time monitoring service module, so that the real-time monitoring service module can accurately acquire the digital certificate preset for the target object from the security key storage module according to that identifier.
Optionally, after the real-time monitoring service module performs security authentication on the target object according to the first digest value to obtain an authentication result, the method further includes:
when the authentication result indicates that the security authentication fails, the real-time monitoring service module sends an alarm request to a trusted user interface (TUI) in the AI software system, the TUI being placed in the TEE;
the TUI receives the alarm request;
the TUI displays alarm information used to indicate to the user that the security authentication of the target object has failed.
Further, to allow the user to learn the safety state of the AI software system in time, when the security authentication fails, the real-time monitoring service module sends an alarm request to the TUI so that the user can learn the security state of the target object from the alarm information displayed by the TUI.
Optionally, after the real-time monitoring service module performs security authentication on the target object according to the first digest value to obtain an authentication result, the method further includes:
and the real-time monitoring service module sends the authentication result to the monitoring agent module.
Further, after determining the authentication result of the target object, the real-time monitoring service module may also feed back the authentication result to the monitoring agent module disposed in the REE.
Optionally, after the real-time monitoring service module sends the authentication result to the monitoring agent module, the method further includes:
and when the authentication result indicates that the security authentication fails, the monitoring agent module sends a termination request to a preset control module, where the termination request is used to instruct the preset control module to terminate the process of the target object, and the preset control module is a module that is deployed in the operating system but does not belong to the AI software system.
Further, after receiving the authentication result fed back by the real-time monitoring service module, the monitoring agent module may perform a corresponding operation according to the authentication result, so as to protect the target object.
Optionally, the method further comprises:
the monitoring agent module acquires a digital certificate of the upgraded target object from a cloud server, where the digital certificate comprises a digital signature and a digest value of the target object as upgraded by the cloud server;
the monitoring agent module sends the digital certificate of the upgraded target object to a security key storage module in the AI software system;
and the safety key storage module replaces the stored digital certificate of the target object with the digital certificate of the target object after the upgrade.
Because the data of the target object may change after an upgrade, the security key storage module can update the stored digital certificate of the target object by the above method, so that the subsequent security authentication performed by the real-time monitoring service module is not carried out against the pre-upgrade digest value of the target object, which would cause the security authentication to fail.
Optionally, the target object includes a model and a key data file in the AI software system.
Specifically, for an AI software system whose modules are deployed in an operating system, the model and the key data file are particularly exposed to malware threats; therefore, in this application, the model and the key data file in the AI software system can be used as the target objects so that they are protected.
In a second aspect, a device for determining the safety state of an AI software system is provided. The device has the functions required to implement the behavior of the method for determining the safety state of the AI software system in the first aspect, and comprises at least one module, where the at least one module is used to implement the method provided by the first aspect.
In a third aspect, a device for determining the safety state of an AI software system is provided. The device structurally includes a processor and a memory, where the memory is used to store a program that supports the device in executing the method for determining the safety state of the AI software system provided in the first aspect, and to store data used to implement that method. The processor is configured to execute the programs stored in the memory. The device may further comprise a communication bus for establishing a connection between the processor and the memory.
In a fourth aspect, a computer-readable storage medium is provided, which has instructions stored therein that, when run on a computer, cause the computer to execute the method for determining the safety state of the AI software system according to the first aspect.
In a fifth aspect, a computer program product containing instructions is provided, which when run on a computer causes the computer to perform the method for determining the safety status of an AI software system according to the first aspect.
The technical effects obtained by the above second, third, fourth and fifth aspects are similar to the technical effects obtained by the corresponding technical means in the first aspect, and are not described herein again.
The technical solutions provided in this application bring at least the following beneficial effects:
In this application, the monitoring agent module in the AI software system determines the first digest value of the target object and reports it to the real-time monitoring service module, and the real-time monitoring service module performs security authentication on the target object, thereby protecting the target object. Because the target object is placed in the REE for execution, the deployment of the software framework of the software system remains relatively centralized without compromising security.
Drawings
Fig. 1 is a schematic diagram of an AI software system based on an AI software framework provided in the related art;
fig. 2 is a schematic diagram of a platform architecture of an operating system based on Trustzone technology according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an AI software system according to an embodiment of the invention;
fig. 4 is a schematic diagram of an intelligent dynamic behavior protection system according to an embodiment of the present invention;
fig. 5 is a block diagram of a device for determining a safety state of an AI software system according to an embodiment of the present invention.
Fig. 6 is a flowchart of a method for determining a security status of an AI software system according to an embodiment of the present invention;
fig. 7 is a flowchart of another method for determining the safety status of the AI software system according to the embodiment of the present invention;
fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Before the embodiments of the present invention are explained in detail, the terms involved in the embodiments of the present invention are explained first.
A digest value is obtained by computing specified data with a preset function to obtain a value that can uniquely represent the specified data; the obtained value is the digest value of the specified data. When the specified data changes, recomputing the preset function over the changed data yields a digest value that is inconsistent with the digest value of the data before the change. The digest value can therefore be used to indicate security authentication information of the specified data, that is, whether the specified data has been modified can be judged from its digest value. The preset function may be a preset hash function, that is, the specified data is computed with the preset hash function, and the obtained hash value is referred to as the digest value of the specified data.
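For illustration only (not part of the original disclosure), the following Python sketch computes a digest value with a standard hash function (SHA-256 is assumed here) and shows that modifying the specified data changes the digest, which is why the digest value can serve as security authentication information.

```python
import hashlib

def digest_value(data: bytes) -> str:
    """Compute a value that uniquely represents the specified data."""
    return hashlib.sha256(data).hexdigest()

original = b"model and key data file contents"
modified = b"model and key data file contents (tampered)"

# Any change to the specified data yields a different digest value, which is
# why the digest value can indicate whether the data has been modified.
assert digest_value(original) == digest_value(original)
assert digest_value(original) != digest_value(modified)
```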
A digital certificate refers to a series of data used to indicate the identity information of the communicating parties in internet communication, and is usually issued by an authority such as a certificate authority (CA). For example, after a software developer releases new software, the CA issues a digital certificate for the software to indicate the identity information of the software.
A digital signature is obtained by encrypting the digest value of specified data with the private key of an asymmetric key pair; the encrypted information is the digital signature of the specified data, and a device receiving the digital signature can decrypt it with the public key of the asymmetric key pair to recover the digest value of the specified data.
Security authentication refers to a method for determining the security state of a specified object; performing security authentication on the specified object generally means verifying whether the data corresponding to the specified object has been modified or tampered with. Specifically, security authentication of the specified object can be achieved by verifying whether the data corresponding to the specified object is consistent with the data the software developer originally released, that is, by verifying the integrity of the specified object.
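A minimal sketch of signing and verification, using the third-party Python "cryptography" package; the key size, padding scheme, and variable names are assumptions rather than details from the patent. The sign/verify API hashes the data internally, which corresponds to the description above of encrypting the digest value with the private key.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# The signer (e.g. the software developer or the CA) holds the private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

specified_data = b"contents of the target object"

# Produce the digital signature of the specified data with the private key.
signature = private_key.sign(specified_data, padding.PKCS1v15(), hashes.SHA256())

# The receiver verifies the signature with the public key; verification fails
# if either the data or the signature has been modified.
try:
    public_key.verify(signature, specified_data, padding.PKCS1v15(), hashes.SHA256())
    print("integrity check passed: the data is unmodified")
except InvalidSignature:
    print("integrity check failed: the data or the signature was modified")
```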
It should be noted that, in the present application, the operating environment of the operating system in which the AI software system is deployed includes REE and TEE, and therefore, before describing the AI software system provided in the embodiment of the present invention, a platform architecture of the operating system in which the operating environment includes REE and TEE is described.
For a platform architecture of an operating system whose execution environment includes the REE and the TEE, ARM (Advanced RISC Machines) provides the TrustZone technology, which supplies the terminal with such a platform architecture.
Fig. 2 is a schematic diagram of a platform architecture 200 of an operating system based on the TrustZone technology according to an embodiment of the present invention. As shown in fig. 2, the platform architecture 200 includes a secure world (Secure World) and a non-secure world (Normal World), where the execution environment corresponding to the secure world is the TEE and the execution environment corresponding to the non-secure world is the REE. The hardware and software resources of the operating system are divided into secure resources and ordinary resources; the secure resources are placed in the secure world, and the ordinary resources are placed in the non-secure world.
The following describes in detail the architecture of the AI software system provided by the embodiment of the present invention. It should be noted that the operating system for deploying the AI software system provided in the embodiment of the present invention is the operating system shown in fig. 2, that is, the AI software system provided in the embodiment of the present invention is a software system based on TrustZone technology.
As shown in fig. 3, the AI software system 300 includes a non-secure world, whose operating environment is the REE and which corresponds to the left-hand system in fig. 3, and a secure world, whose operating environment is the TEE and which corresponds to the right-hand system in fig. 3. An AI framework API, a model and key data file, an AI framework body, a HAL layer, an algorithm support library, a first kernel layer (Kernel), and a monitoring agent module (monitor agent) are deployed in the non-secure world. A real-time monitoring service module (Real-time monitor service), a trusted user interface (TUI), a TEE internal API, and a second kernel layer (Trust OS kernel) are deployed in the secure world. That is, the execution environment of the operating system in which the AI software system is deployed includes the REE and the TEE, the monitoring agent module is placed in the REE, and the real-time monitoring service module and the TUI are placed in the TEE.
The first kernel layer is deployed with virtual devices corresponding to the general-purpose CPU, GPU, DSP and the like, and with a first communication driver module (Communication driver); the second kernel layer is deployed with a second communication driver module and a secure key storage module (Key storage). Each module in the first kernel layer and the second kernel layer is a driver software module. The first communication driver module and the second communication driver module are used to implement communication between the non-secure world and the secure world; that is, any component in the non-secure world that wants to communicate with a component in the secure world must do so through the first communication driver module and the second communication driver module.
It should be noted that the AI framework API, the model and the key data file, the AI framework body, the HAL layer, the algorithm support library, and the first kernel layer deployed in the non-secure world are identical to the corresponding components in the software system of the related art shown in fig. 1, that is, in the embodiment of the present invention, the components included in the AI software system shown in fig. 1 are all placed in the REE. That is, compared with the AI software system shown in fig. 1, the AI software system shown in fig. 3 may not change the deployment of the software framework of the AI software system, but adds a monitoring agent module, a real-time monitoring service module, a security key storage module, a TUI, a first communication driver module, and a second communication driver module on the basis of the AI software system shown in fig. 1.
Since each component of the AI software system shown in fig. 1 is placed in the REE, the critical components of that system still need to be protected. For convenience of description, any module in the AI software system, among the plurality of modules deployed on the operating system, that needs to be securely authenticated is referred to as a target object, and the target object is placed in the REE.
The target object is protected through the monitoring agent module deployed in the non-secure world and the real-time monitoring service module deployed in the secure world; that is, the monitoring agent module and the real-time monitoring service module are used to protect the target object. The implementation process of protecting the target object through these two modules is described in detail in the embodiments provided below and is not elaborated here.
In addition, the TUI deployed in the secure world is used to display the security status of the target object. The security key storage module deployed in the secure world is used for storing information required for performing security authentication on the target object, such as a digest value of the target object. The TEE internal APIs deployed in the secure world are used to provide an interface for communications between the non-secure world and the secure world.
As shown in fig. 3, compared with the related art that places the critical components of the AI software system into the TEE for execution, the deployment of the software framework of the AI software system provided by the embodiment of the present invention is relatively centralized, which facilitates platformization of the AI software system. In addition, the embodiment of the present invention can protect a critical component while keeping it in the REE for execution; for example, the AI framework body is placed in the REE for execution, so that it makes full use of the abundant computing resources on the REE side and its computing capability is not affected by being placed in the TEE. The security of the AI software system is thus ensured while the computing capability is preserved and platform deployment is facilitated.
For example, when the AI software is an intelligent dynamic behavior protection software, the embodiment of the present invention provides an intelligent dynamic behavior protection system based on the AI software system shown in fig. 3, and the following embodiment will describe the architecture of the intelligent dynamic behavior protection system in detail.
Fig. 4 is a schematic diagram of an intelligent dynamic behavior protection system 400 according to an embodiment of the present invention. As shown in fig. 4, the intelligent dynamic behavior protection system also includes a non-secure world and a secure world; the operating environment of the non-secure world is the REE, and the operating environment of the secure world is the TEE. An application framework (Application framework), runtime libraries (Runtime libraries), a browser engine (WEBKIT), an observer (Observer), a model file (Model File), class libraries and binary files (Bin & lib files), an analyzer (Analyzer), a monitoring agent module, and a first kernel layer are deployed in the non-secure world. A real-time monitoring service module, a TUI, a TEE internal API, and a second kernel layer are deployed in the secure world.
The modules deployed in the first kernel layer are the same as the modules deployed in the first kernel layer in the AI software system shown in fig. 3, and will not be described in detail here. The modules deployed in the second kernel layer are the same as those deployed in the second kernel layer in the AI software system shown in fig. 3, and are not described in detail here again.
It should be noted that the application framework, the runtime library, the browser engine (WEBKIT), the viewer, the model file, the class library and the binary file, the analyzer and the first kernel layer deployed in the non-secure world are identical to the corresponding components in the existing intelligent dynamic behavior protection system, and reference may be made to the functional description of the related modules in the related art. That is, in the embodiment of the present invention, each component included in the existing intelligent dynamic behavior protection system is placed in the REE, and then, on the basis of the existing intelligent dynamic behavior protection system, a monitoring agent module, a real-time monitoring service module, a security key storage module, a TUI, a first communication driver module, and a second communication driver module are newly added to perform security protection on any component deployed in the non-secure world.
That is, in the intelligent dynamic behavior protection system shown in fig. 4, the target object may be any one of the class library, the binary file, and the model file, or may include all of them.
The method for performing security protection on the target object by using the intelligent dynamic behavior protection system shown in fig. 4 may refer to the method for performing security protection on the target object by using the AI software system shown in fig. 3, and will not be described in detail here.
Based on the AI software system shown in fig. 3 and the intelligent dynamic behavior protection system shown in fig. 4, an embodiment of the invention provides a device 500 for determining the security state of the AI software system. As shown in fig. 5, the device 500 includes a monitoring agent module 501, a real-time monitoring service module 502, a TUI 503, and a security key storage module 504, which correspond to the corresponding modules in fig. 3 or fig. 4. Thus, the device 500 of fig. 5 may be regarded as part of the software system of fig. 3 or fig. 4.
The monitoring agent module 501 is the monitoring agent module in the AI software system shown in fig. 3 or the intelligent dynamic behavior protection system shown in fig. 4. The real-time monitoring service module 502 is the real-time monitoring service module in the AI software system shown in fig. 3 or the intelligent dynamic behavior protection system shown in fig. 4. The TUI 503 is the TUI in the AI software system shown in fig. 3 or the intelligent dynamic behavior protection system shown in fig. 4. The security key storage module 504 is the security key storage module in the AI software system shown in fig. 3 or the intelligent dynamic behavior protection system shown in fig. 4.
Specifically, the monitoring agent module 501, the real-time monitoring service module 502, the TUI 503, and the security key storage module 504 are configured to execute the corresponding steps in the following embodiments; that is, these modules provide the method for determining the security state of the AI software system according to the embodiment of the present invention by executing those steps. Their functions are therefore not described in detail here.
It should be noted that the division into functional modules of the device for determining the safety state of the AI software system is only an example; in practical applications, the functions may be allocated to different functional modules as needed. In addition, the device for determining the safety state of the AI software system and the following method embodiments belong to the same concept, and the specific implementation process is described in the following method embodiments and is not repeated here.
Next, a detailed description will be given of the determination process of the safety state of the AI software system by the AI software system safety state determination device shown in fig. 5. That is, the determination method of the safety state of the AI software system provided in the following embodiments is a method based on the determination device of the safety state of the AI software system shown in fig. 5.
It should be noted that, in the embodiment of the present invention, the real-time monitoring service module may perform security authentication on the target object in two ways: directly according to the first digest value of the target object, or according to the first digest value of the target object together with a digital certificate preset for the target object. The following embodiments explain these two cases in detail.
Fig. 6 is a flowchart of a method for determining the security state of the AI software system according to an embodiment of the present invention. The method is applied to the device 500 shown in fig. 5 and is used in the scenario where the real-time monitoring service module 502 performs security authentication on the target object directly according to the first digest value of the target object. Referring to fig. 6, the method includes the following steps.
Step 601: the monitoring agent module 501 in the AI software system determines a first digest value of a target object in the AI software system.
The first digest value is used to indicate the security authentication information of the target object, that is, whether the target object is modified or not can be determined by the first digest value. In addition, the target object refers to any module to be subjected to security authentication in a plurality of modules deployed on the operating system in the AI software system.
In one possible implementation, determining the first digest value of the target object may be: the monitoring agent module 501 determines data of a target object, performs hash calculation on the data of the target object according to a preset hash function to obtain a hash value of the target object, and determines the hash value of the target object as a first digest value of the target object.
It should be noted that, in the embodiment of the present invention, the monitoring agent module 501 may also determine the digest value of the target object in other manners, as long as it is ensured that the obtained digest value can be used to determine whether the target object is modified.
In addition, the timing of the monitoring agent module 501 determining the first digest value of the target object may be divided into the following two cases.
(1) To avoid the monitoring agent module 501 having to determine the data of the target object in real time and process an excessive amount of data, the monitoring agent module 501 may determine the data of the target object periodically, that is, determine the first digest value of the target object once every preset period, where the preset period is a preconfigured time interval.
(2) The monitoring agent module 501 determines a first digest value of the target object when receiving a security authentication instruction for the target object. The security authentication instruction for the target object may be triggered by a user through a preset operation, that is, the user may actively initiate security authentication for the target object through the preset operation. In addition, the security authentication instruction for the target object may also be triggered by the monitoring agent module 501 when detecting that the AI application software corresponding to the AI software system has a service abnormality, or may also be triggered by the monitoring agent module 501 when detecting that the AI application software corresponding to the AI software system is upgraded.
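For illustration only, the following Python sketch shows how a monitoring agent might implement case (1): it recomputes the digest of a file-backed target object once per preset period and reports it. The file-based target object, the period value, and the report_to_tee() transport are assumptions; the real REE-to-TEE path goes through the two communication driver modules and is platform specific.

```python
import hashlib
import time
from pathlib import Path

PRESET_PERIOD_SECONDS = 60  # assumed value of the preset period

def first_digest_value(target_path: Path) -> str:
    """Hash the current data of the target object (here a file) to get its digest."""
    return hashlib.sha256(target_path.read_bytes()).hexdigest()

def report_to_tee(target_id: str, digest: str) -> None:
    """Placeholder for the REE-to-TEE path through the two communication driver
    modules; the real transport is platform specific and not modeled here."""
    print(f"report: target={target_id}, first_digest_value={digest}")

def monitoring_agent_loop(target_id: str, target_path: Path) -> None:
    # Case (1): determine and report the first digest value once per preset period.
    # For case (2), first_digest_value() would instead be called when a security
    # authentication instruction for the target object is received.
    while True:
        report_to_tee(target_id, first_digest_value(target_path))
        time.sleep(PRESET_PERIOD_SECONDS)
```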
It should be noted that, for the AI software system shown in fig. 3, the model and the key data file deployed in the non-secure world are usually the key components in the AI software system, so that the model and the key data file can be set as the target objects. Of course, in the embodiment of the present invention, the target object may also be other components in the AI software system, and is not specifically limited herein.
Specifically, for the intelligent dynamic behavior protection system shown in fig. 4, the target object may be at least one of a class library and a binary file and a model file.
Step 602: the monitoring agent module 501 reports the first digest value to the real-time monitoring service module 502 in the AI software system.
Since security authentication needs to be performed by a program executing in the TEE, the monitoring agent module 501, after determining the first digest value of the target object, reports it to the real-time monitoring service module 502 placed in the TEE, so that the real-time monitoring service module 502 performs security authentication on the target object and the security state of the target object can be determined.
Specifically, as shown in fig. 3 or fig. 4, the monitoring agent module 501 reports the first digest value to the real-time monitoring service module 502 through a first communication driver module deployed in the first kernel layer and a second communication driver module deployed in the second kernel layer.
In addition, when the monitoring agent module 501 reports the first digest value to the real-time monitoring service module 502, the monitoring agent module may also report the identifier of the target object to the real-time monitoring service module 502. The implementation process of reporting the identifier of the target object to the real-time monitoring service module 502 by the monitoring agent module 501 is basically the same as the implementation process of reporting the first digest value to the real-time monitoring service module 502 by the monitoring agent module 501.
Step 603: the real-time monitoring service module 502 receives the first digest value.
The real-time monitoring service module 502 receives, through the TEE internal API, the first digest value of the target object reported by the monitoring agent module 501.
When the monitoring agent module 501 also reports the identifier of the target object, the real-time monitoring service module 502 likewise receives the identifier of the target object through the TEE internal API.
When the real-time monitoring service module 502 receives the first digest value, the real-time monitoring service module 502 performs security authentication on the target object according to the first digest value to obtain an authentication result, and the authentication result is used for indicating the security state of the target object. Specifically, the real-time monitoring service module 502 performs security authentication on the target object to obtain an authentication result, which is implemented by the following step 604.
Step 604: the real-time monitoring service module 502 obtains a second digest value preset for the target object from the security key storage module 504 in the AI software system, and determines whether the first digest value and the second digest value are consistent to obtain an authentication result.
If the first digest value is consistent with the second digest value, the authentication result is that the security authentication passes, that is, the AI software system is determined to be in a secure state; if the first digest value is not consistent with the second digest value, the authentication result is that the security authentication fails, that is, the AI software system is determined to be in an insecure state.
After obtaining the authentication result, the real-time monitoring service module 502 may record the authentication result by using a boolean variable, that is, when the authentication result is in a security authentication passing state, the authentication result is recorded as 1, and when the authentication result is in a security authentication failing state, the authentication result is recorded as 0.
In addition, the second digest value is a digest value configured for the target object in the security key storage module 504 in advance, that is, a digest value preset for the target object.
It should be noted that the second digest value preset for the target object is generally stored in the digital certificate preset for the target object, that is, the digital certificate preset for the target object includes the second digest value preset for the target object.
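A minimal sketch of step 604, assuming the security key storage module 504 is modeled as an in-memory mapping from a target-object identifier to a certificate record holding the preset second digest value; the identifier, the field names, and the placeholder digest are assumptions. In the flow of fig. 7, a check of the digital certificate's validity would precede this comparison.

```python
# Hypothetical in-memory stand-in for the security key storage module 504: it maps
# a target-object identifier to the digital certificate holding the preset second
# digest value (the digest shown is only a placeholder).
security_key_storage = {
    "model_and_key_data_file": {
        "second_digest_value": "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
    }
}

def security_authentication(target_id: str, first_digest_value: str) -> int:
    """Step 604: compare the reported first digest value with the preset second
    digest value; return 1 for 'security authentication passed' and 0 for
    'security authentication failed', matching the Boolean recording above."""
    certificate = security_key_storage.get(target_id)
    if certificate is None:
        return 0
    return 1 if first_digest_value == certificate["second_digest_value"] else 0
```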
It should be noted that, after the AI application software corresponding to the AI software system is first installed, the software developer may upgrade the AI application software, and the data of the target object may change during the upgrade. To prevent the real-time monitoring service module 502 from subsequently performing security authentication against the pre-upgrade information of the target object, which would cause the security authentication to fail, the AI software system needs to update the stored information of the target object.
When the second digest value preset for the target object is stored in the digital certificate of the target object, the AI software system may update the stored information of the target object as follows: the monitoring agent module 501 acquires the digital certificate of the upgraded target object from the cloud server, where the digital certificate comprises the digital signature and the digest value of the target object as upgraded by the cloud server, that is, the digital certificate is determined from the data of the upgraded target object; the monitoring agent module 501 transmits the digital certificate of the upgraded target object to the security key storage module 504 in the AI software system; and the security key storage module 504 replaces the stored digital certificate of the target object with the digital certificate of the upgraded target object.
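A minimal sketch of this certificate-update flow, reusing the dictionary-style stand-in for the security key storage module; the certificate field names and identifiers are assumptions, not details from the patent.

```python
# Hypothetical stand-in for the security key storage module holding the
# pre-upgrade certificate of the target object.
security_key_storage = {
    "model_and_key_data_file": {"second_digest_value": "<pre-upgrade digest>"}
}

def update_certificate_after_upgrade(storage: dict,
                                     target_id: str,
                                     upgraded_certificate: dict) -> None:
    """Replace the stored digital certificate of the target object with the one
    issued for the upgraded target object, so that later security authentication
    is performed against the post-upgrade digest value."""
    storage[target_id] = upgraded_certificate

# Hypothetical certificate fetched by the monitoring agent from the cloud server
# after the upgrade; the field names are assumptions for this example.
upgraded_certificate = {
    "second_digest_value": "<digest of the upgraded target object>",
    "digital_signature": "<signature produced by the cloud server>",
}
update_certificate_after_upgrade(security_key_storage, "model_and_key_data_file",
                                 upgraded_certificate)
```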
Optionally, after the real-time monitoring service module 502 performs security authentication on the target object, the target object may also be protected by executing a corresponding policy; specifically, this may be implemented by the following step 605 and/or step 606.
Step 605: the real-time monitoring service module 502 performs security protection on the target object through the TUI503 of the AI software system.
Specifically, when the authentication result indicates that the security authentication fails, the real-time monitoring service module 502 sends an alarm request to the TUI 503 in the AI software system, and the TUI 503 receives the alarm request and displays alarm information used to indicate to the user that the security authentication of the target object has failed.
Of course, the real-time monitoring service module 502 may also send the authentication result directly to the TUI 503. When the TUI 503 receives the authentication result, it performs a corresponding operation according to the authentication result: when the authentication result indicates that the security authentication fails, the TUI 503 displays the alarm information; when the authentication result indicates that the security authentication passes, it displays the authentication result so that the user knows that the target object is currently in a secure state.
In addition, when receiving the authentication result, the TUI 503 may display it using a preset identifier: a first preset identifier when the security authentication fails, and a second preset identifier when the security authentication passes. For example, a red-light identifier is used when the security authentication fails, and a green-light identifier is used when the security authentication passes.
It should be noted that, when the real-time monitoring service module 502 records the authentication result using a Boolean variable, the authentication result sent by the real-time monitoring service module 502 to the TUI 503 is the authentication result recorded with that Boolean variable. That is, when the TUI 503 receives the authentication result sent by the real-time monitoring service module 502, the TUI 503 determines that the security authentication passed when the value is 1 and that it failed when the value is 0.
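A minimal sketch of how the TUI-side mapping from the Boolean authentication result to the two preset identifiers might look; the console output stands in for the red-light and green-light identifiers and is an assumption for the example.

```python
def display_authentication_result(result_flag: int) -> None:
    """Map the Boolean authentication result (1 = passed, 0 = failed) to the two
    preset identifiers; console text stands in for the red-light and green-light
    identifiers described above."""
    if result_flag == 1:
        print("[green light] the target object passed security authentication")
    else:
        print("[red light] ALERT: security authentication of the target object failed")
```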
Step 606: the real-time monitoring service module 502 performs security protection on the target object through the monitoring agent module 501 of the AI software system.
After the real-time monitoring service module 502 obtains the authentication result, the real-time monitoring service module 502 may send the authentication result to the monitoring agent module 501, that is, the real-time monitoring service module 502 sends the authentication result to the monitoring agent module 501 through the second communication driver module deployed in the second kernel layer and the first communication driver module deployed in the first kernel layer.
When receiving the authentication result fed back by the real-time monitoring service module 502, the monitoring agent module 501 may protect the target object through a preset operation when the authentication result indicates that the security authentication fails. The preset operation may be: the monitoring agent module 501 sends a termination request to a preset control module, where the termination request is used to instruct the preset control module to terminate the process of the target object, and the preset control module is a module that is deployed in the operating system but does not belong to the AI software system. When the preset control module receives the termination request, it terminates the current process of the target object so as to protect the target object.
In addition, when the preset control module receives the termination request, it may also uninstall the target object as a further protection measure.
It should also be noted that, when the real-time monitoring service module 502 records the authentication result using a boolean variable, the authentication result fed back to the monitoring agent module 501 is likewise the value recorded in that boolean variable.
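As a rough sketch of the protection path in step 606, assuming the target object runs as an ordinary operating-system process and using hypothetical function names (a real preset control module would rely on the operating system's own process-management and package-management interfaces):

```python
# Minimal sketch (assumptions: the target object runs as a separate process
# identified by a PID; 1 = authentication passed, 0 = authentication failed).
import os
import signal


def on_authentication_result(auth_result: int, target_pid: int) -> None:
    # Role of the monitoring agent module: forward a failed result to the
    # preset control module as a termination request.
    if auth_result == 0:
        send_termination_request(target_pid)


def send_termination_request(target_pid: int) -> None:
    # Stand-in for the preset control module, which is deployed in the
    # operating system and does not belong to the AI software system.
    # It terminates the process of the target object; it could additionally
    # uninstall the target object as a further protection measure.
    try:
        os.kill(target_pid, signal.SIGTERM)
    except ProcessLookupError:
        pass  # the target object's process has already exited
```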
In the embodiment of the present invention, the monitoring agent module 501 in the AI software system determines the first digest value of the target object and reports it to the real-time monitoring service module 502, which performs security authentication on the target object, thereby protecting it. Because the target object is placed in the REE for execution, the software framework of the system remains relatively centralized without compromising security, which facilitates turning the AI software system into a platform. In addition, compared with the related art in which critical components of the AI software system are placed in the TEE for execution, the embodiment of the present invention allows a critical component, for example the AI framework body, to execute in the REE while still being protected. The AI framework body can therefore make full use of the rich computing resources on the REE side, and its computing capability is not limited by being placed in the TEE, so the security of the AI software system is ensured while preserving computing capability and easing platform deployment.
Fig. 7 is a flowchart illustrating another method for determining the safety state of the AI software system. The method is applied to the apparatus for determining the safety state of the AI software system illustrated in fig. 5, in a scenario where the real-time monitoring service module 502 performs security authentication on a target object according to the first digest value of the target object and a digital certificate preset for the target object. Referring to fig. 7, the method includes the following steps.
Step 701: the monitoring agent module 501 in the AI software system determines a first digest value of a target object in the AI software system.
The implementation process of step 701 may refer to the implementation process of step 601 shown in fig. 6, and will not be elaborated here.
Step 702: the monitoring agent module 501 reports the first digest value to the real-time monitoring service module 502 in the AI software system.
The implementation process of step 702 may refer to the implementation process of step 602 shown in fig. 6, and will not be described in detail here.
Step 703: the real-time monitoring service module 502 receives the first digest value.
The implementation process of step 703 may refer to the implementation process of step 603 shown in fig. 6, and will not be described in detail here.
When the real-time monitoring service module 502 receives the first digest value, it performs security authentication on the target object according to the first digest value to obtain an authentication result, which again indicates the security state of the target object. Specifically, this security authentication is implemented through the following steps 704 and 705.
Step 704: the real-time monitoring service module 502 obtains a digital certificate preset for the target object from the security key storage module 504, and verifies whether the digital certificate is legal according to the verification information in the digital certificate.
That is, before determining whether the first digest value and the second digest value are consistent, the real-time monitoring service module 502 first verifies the digital certificate of the target object; only if the digital certificate is legal does it perform the security authentication through the following step 705 to obtain the authentication result.
The real-time monitoring service module 502 may verify whether the digital certificate is legal according to the verification information in the digital certificate as follows: it separately determines whether the root public key and the digital signature in the verification information are legal, and only when both are legal does it determine that the digital certificate preset for the target object is legal.
The real-time monitoring service module 502 may determine whether the root public key in the verification information is legal as follows: it computes the hash value of the root public key in the digital certificate of the target object and determines whether this hash value is consistent with the hash value of the pre-stored root public key. If the two hash values are inconsistent, the root public key is determined to be illegal; if they are consistent, the root public key is determined to be legal.
The real-time monitoring service module 502 may determine whether the digital signature in the verification information is legal as follows: it verifies the digital signature in the digital certificate using the root public key in the digital certificate preset for the target object. If the verification passes, the digital signature is determined to be legal; if it fails, the digital signature is determined to be illegal. In the embodiment of the present invention, verifying the digital signature against the root public key may follow standard public key infrastructure (PKI) certificate verification techniques, which are not described in detail herein.
The pre-stored root public key is generally stored in a one-time programmable (OTP) device of the terminal. The OTP device is a write-once device used to store the root public key and is provided in any terminal that supports secure boot.
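The two checks above can be summarized in the following sketch. It assumes SHA-256 as the hash function and represents the PKI signature verification only as a stubbed helper; none of these names or choices come from the patent itself.

```python
# Minimal sketch (assumptions: SHA-256 hashing; the pre-stored root public
# key comes from the terminal's one-time programmable device; PKI signature
# verification is only stubbed out here).
import hashlib
import hmac


def root_public_key_is_legal(cert_root_public_key: bytes,
                             pre_stored_root_public_key: bytes) -> bool:
    # Compare the hash value of the root public key carried in the digital
    # certificate with the hash value of the pre-stored root public key.
    cert_hash = hashlib.sha256(cert_root_public_key).digest()
    stored_hash = hashlib.sha256(pre_stored_root_public_key).digest()
    return hmac.compare_digest(cert_hash, stored_hash)


def digital_signature_is_legal(certificate_body: bytes, signature: bytes,
                               cert_root_public_key: bytes) -> bool:
    # Placeholder: in practice this is a standard PKI verification of the
    # digital signature using the root public key (e.g. an RSA or ECDSA
    # verify operation); it is not implemented in this sketch.
    raise NotImplementedError


def certificate_is_legal(certificate_body: bytes, signature: bytes,
                         cert_root_public_key: bytes,
                         pre_stored_root_public_key: bytes) -> bool:
    # The digital certificate is legal only if both checks succeed.
    return (root_public_key_is_legal(cert_root_public_key, pre_stored_root_public_key)
            and digital_signature_is_legal(certificate_body, signature, cert_root_public_key))
```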
It should be noted that, in practical applications, a software developer usually presets one digital certificate for a piece of application software, but that software may contain multiple objects that need to be protected. The digital certificate preset for a target object may therefore also contain digest values preset for other objects. To distinguish the digest values preset for different objects, the software developer sets a corresponding identifier for each object; that is, the digital certificate stores the identifiers of the multiple objects that need to be protected together with the digest value preset for each object. The identifier of an object uniquely identifies that object; for example, the identifiers may be object 1, object 2, object 3, …, object n.
For example, Table 1 shows a digital certificate format according to an embodiment of the present invention. As shown in Table 1, the digital certificate includes the software version number of the AI application software corresponding to the AI software system, the root public key of the digital certificate, the identifiers of the objects that need to be protected, the hash value preset for each of those objects, and the digital signature of the digital certificate, together with the length of each of these fields.
TABLE 1
(The body of Table 1 is reproduced as an image in the original publication. Its fields, each accompanied by a length field, are: the software version number, the root public key of the digital certificate, the identifier of each object to be protected together with the hash value preset for that object, and the digital signature of the digital certificate.)
In this case, the real-time monitoring service module 502 may obtain the second digest value preset for the target object from the security key storage module 504 by looking up, in the digital certificate, the digest value corresponding to the received identifier of the target object and using that digest value as the second digest value.
In addition, when a plurality of digital certificates are stored in the security key storage module 504, the monitoring agent module 501 also reports the identifier of the digital certificate when reporting the first digest value, so that the real-time monitoring service module 502 can accurately obtain the digital certificate preset for the target object. When receiving this identifier, the real-time monitoring service module 502 obtains the corresponding digital certificate from the plurality of digital certificates stored in the security key storage module 504; that certificate is the digital certificate preset for the target object, and the identifier is the identifier of that certificate.
That is, for each of the plurality of digital certificates, the security key storage module 504 stores the correspondence between the digital certificate and its identifier, so that when the real-time monitoring service module 502 receives the identifier of the digital certificate preset for the target object, it can obtain that certificate according to the correspondence and the received identifier.
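Purely for illustration, the correspondence described above can be modelled as nested mappings keyed first by the certificate identifier and then by the object identifier; the field names and structure below are assumptions and do not reproduce the actual storage format of the security key storage module.

```python
# Minimal sketch (hypothetical structure): digital certificates keyed by
# their identifiers; inside each certificate, the hash values preset for
# the objects to be protected, keyed by the object identifiers (cf. Table 1).
from typing import Optional

security_key_storage = {
    "certificate-001": {
        "software_version": "1.0.0",
        "root_public_key": b"...",
        "object_digests": {
            "object 1": "9f86d081884c7d65...",  # hash value preset for object 1
            "object 2": "60303ae22b998861...",  # hash value preset for object 2
        },
        "digital_signature": b"...",
    },
}


def get_second_digest(certificate_id: str, object_id: str) -> Optional[str]:
    # Select the digital certificate by the identifier reported together with
    # the first digest value, then the digest value preset for the target object.
    certificate = security_key_storage.get(certificate_id)
    if certificate is None:
        return None
    return certificate["object_digests"].get(object_id)
```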
It is also worth noting that, when the target object is upgraded, the AI software system needs to update the stored information of the target object, i.e., the stored digital certificate of the target object. The process of updating the stored digital certificate of the target object may refer to step 604 in fig. 6 and is not described in detail here.
Step 705: when the digital certificate is legal, the real-time monitoring service module 502 obtains a second digest value preset for the target object from the security key storage module 504 in the AI software system, and determines whether the first digest value and the second digest value are consistent to obtain an authentication result.
The implementation process of step 705 may refer to the implementation process of step 604 shown in fig. 6, and will not be elaborated here.
That is, in the embodiment of the present invention, the real-time monitoring service module 502 obtains the digital certificate preset for the target object from the security key storage module 504 and checks whether it is legal according to the verification information in the digital certificate; only when the digital certificate is legal does the real-time monitoring service module 502 trigger the operation of step 604 in fig. 6.
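A minimal sketch of the comparison performed in step 705, under the assumption that the digest values are SHA-256 hashes of the target object's file contents encoded as hex strings (the helper names are not from the patent):

```python
# Minimal sketch (assumption: digest values are SHA-256 hashes of the target
# object's file contents, encoded as lowercase hex strings).
import hashlib
import hmac


def compute_first_digest(target_object_path: str) -> str:
    # Determined on the REE side by the monitoring agent module.
    with open(target_object_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def authenticate(first_digest: str, second_digest: str) -> bool:
    # True  -> security authentication passed (digest values are consistent)
    # False -> security authentication failed (digest values differ)
    return hmac.compare_digest(first_digest, second_digest)
```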
Likewise, after the real-time monitoring service module 502 performs security authentication on the target object, the target object may be protected by executing a corresponding policy; specifically, this may be implemented through the following step 706 and/or step 707.
Step 706: the real-time monitoring service module 502 performs security protection on the target object through the TUI503 of the AI software system.
The implementation process of step 706 may refer to the implementation process of step 605 shown in fig. 6, and will not be described in detail here.
Step 707: the real-time monitoring service module 502 performs security protection on the target object through the monitoring agent module 501 of the AI software system.
The implementation process of step 707 may refer to the implementation process of step 606 shown in fig. 6, and will not be elaborated here.
In the embodiment of the present invention, the monitoring agent module 501 in the AI software system determines the first digest value of the target object and reports it to the real-time monitoring service module 502, which performs security authentication on the target object, thereby protecting it. Because the target object is placed in the REE for execution, the software framework of the system remains relatively centralized without compromising security, which facilitates turning the AI software system into a platform. In addition, compared with the related art in which critical components of the AI software system are placed in the TEE for execution, the embodiment of the present invention allows a critical component, for example the AI framework body, to execute in the REE while still being protected. The AI framework body can therefore make full use of the rich computing resources on the REE side, and its computing capability is not limited by being placed in the TEE, so the security of the AI software system is ensured while preserving computing capability and easing platform deployment.
In addition to the AI software system and the intelligent dynamic behavior protection system in the above embodiments, the present application also provides a terminal on which the operating system shown in fig. 2 and the AI software system shown in fig. 3 or the intelligent dynamic behavior protection system shown in fig. 4 are deployed, so that the terminal can execute the method for determining the safety state of the AI software system shown in fig. 6 or fig. 7.
Fig. 8 is a schematic structural diagram of a terminal 800 according to an embodiment of the present invention. The AI software system shown in fig. 3 and the intelligent dynamic behavior protection system shown in fig. 4 can be implemented by the terminal 800 shown in fig. 8. Referring to fig. 8, the terminal comprises at least one processor 801, a communication bus 802, a memory 803 and at least one communication interface 804.
The processor 801 may be a CPU, a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of programs in accordance with the teachings of the present application.
The communication bus 802 may include a path that conveys information between the aforementioned components.
The memory 803 may be, but is not limited to, a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 803 may be self-contained and coupled to the processor 801 via the communication bus 802, or it may be integrated with the processor 801.
The communication interface 804 may be any device, such as a transceiver, for communicating with other devices or with communication networks such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
In a specific implementation, as an embodiment, the processor 801 may include one or more CPUs, for example a CPU corresponding to the virtual device in fig. 3 or fig. 4, and may also include a GPU, a DSP, or the like.
The terminal may be a general-purpose computer device or a special-purpose computer device. In a specific implementation, the computer device may be a desktop computer, a laptop computer, a web server, a Personal Digital Assistant (PDA), a mobile phone, a tablet computer, a wireless terminal device, a communication device, or an embedded device. The embodiment of the invention does not limit the type of the computer equipment.
The memory 803 is used to store the program code for executing the above method or software system embodiments of the present application, and the processor 801 executes this code. The program code may implement the apparatus or the AI software system mentioned in the previous embodiments. For example, the memory 803 provides a storage area for modules such as the security key storage module 504 in the AI software system shown in fig. 3 or the intelligent dynamic behavior protection system shown in fig. 4. The processor 801 is configured to execute the program code stored in the memory 803, and the program code may include one or more software modules, for example those described in fig. 5. The AI software system shown in fig. 3 or the intelligent dynamic behavior protection system shown in fig. 4 may then determine the security state of the corresponding software system through the processor 801 and the one or more software modules in the program code in the memory 803.
In the above-described embodiments, all or part of the AI software system may be implemented in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, or microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a digital versatile disc (DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)).
The above embodiments are not intended to limit the present application; any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall fall within the protection scope of the present application.

Claims (18)

1. A method for determining a safety state of an artificial intelligence (AI) software system, characterized by comprising the following steps:
a monitoring agent module in the AI software system determines a first digest value of a target object in the AI software system, wherein the first digest value is used for indicating security authentication information of the target object, a running environment of an operating system on which the AI software system is deployed comprises a rich execution environment (REE) and a trusted execution environment (TEE), the target object and the monitoring agent module are placed in the REE, and the target object is any module, to be subjected to security authentication, among a plurality of modules of the AI software system that are deployed on the operating system;
the monitoring agent module reports the first digest value to a real-time monitoring service module in the AI software system, wherein the real-time monitoring service module is placed in the TEE;
the real-time monitoring service module receives the first digest value; and
the real-time monitoring service module performs security authentication on the target object according to the first digest value to obtain an authentication result, wherein the authentication result is used for indicating the security state of the target object.
2. The method of claim 1, wherein the performing, by the real-time monitoring service module, security authentication on the target object according to the first digest value to obtain an authentication result comprises:
the real-time monitoring service module acquires a second digest value preset for the target object from a security key storage module in the AI software system, and the security key storage module is arranged in the TEE;
judging whether the first digest value and the second digest value are consistent to obtain the authentication result;
if the first digest value is consistent with the second digest value, the authentication result is the security authentication passed state;
and if the first digest value is not consistent with the second digest value, the authentication result is the security authentication failure state.
3. The method of claim 2, wherein before the real-time monitoring service module obtains the second digest value preset for the target object from the security key storage module in the AI software system, the method further comprises:
the real-time monitoring service module acquires a digital certificate preset for the target object from the security key storage module;
the real-time monitoring service module verifies whether the digital certificate is legal or not according to the verification information in the digital certificate;
and when the digital certificate is legal, the real-time monitoring service module triggers and executes the operation of acquiring a second digest value preset for the target object from the security key storage module.
4. The method of claim 3, wherein the method further comprises:
a plurality of digital certificates are stored in the security key storage module, and the monitoring agent module reports an identifier of the digital certificate to the real-time monitoring service module when reporting the first digest value to the real-time monitoring service module;
correspondingly, the acquiring, by the real-time monitoring service module, the digital certificate preset for the target object from the security key storage module includes:
and the real-time monitoring service module acquires the digital certificate corresponding to the identifier from the plurality of digital certificates stored in the security key storage module.
5. The method according to any one of claims 1 to 4, wherein after the real-time monitoring service module performs security authentication on the target object according to the first digest value to obtain an authentication result, the method further comprises:
when the authentication result is the state that the safety authentication is not passed, the real-time monitoring service module sends an alarm request to a trusted user interface TUI in the AI software system, and the TUI is placed in the TEE;
the TUI receives the alarm request; and
the TUI displays alarm information for indicating to a user that the security authentication of the target object has not passed.
6. The method according to any one of claims 1 to 4, wherein after the real-time monitoring service module performs security authentication on the target object according to the first digest value to obtain an authentication result, the method further comprises:
and the real-time monitoring service module sends the authentication result to the monitoring agent module.
7. The method of claim 6, wherein after the real-time monitoring service module sends the authentication result to the monitoring agent module, the method further comprises:
and when the authentication result is the state that the safety authentication fails, the monitoring agent module sends a termination request to a preset control module, wherein the termination request is used for indicating the preset control module to terminate the process of the target object, and the preset control module is a module which is deployed in the operating system and does not belong to the AI software system.
8. The method of any of claims 1 to 4, and 7, further comprising:
the monitoring agent module acquires a digital certificate of an upgraded target object from a cloud server, wherein the digital certificate comprises a digital signature and a digest value of the target object after the target object is upgraded by the cloud server;
the monitoring agent module sends the digital certificate of the upgraded target object to a security key storage module in the AI software system;
and the safety key storage module replaces the stored digital certificate of the target object with the digital certificate of the target object after the upgrade.
9. The method of any of claims 1 to 4 and 7, wherein the target objects include models and key data files in the AI software system.
10. An apparatus for determining safety state of artificial intelligence AI software system, the apparatus comprising: the monitoring agent module and the real-time monitoring service module;
the monitoring agent module is configured to determine a first digest value of a target object in the AI software system and report the first digest value to the real-time monitoring service module, wherein the first digest value is used to indicate security authentication information of the target object, a running environment of an operating system on which the AI software system is deployed comprises a rich execution environment (REE) and a trusted execution environment (TEE), the target object and the monitoring agent module are placed in the REE, the target object is any module, to be subjected to security authentication, among a plurality of modules of the AI software system that are deployed on the operating system, and the real-time monitoring service module is placed in the TEE;
the real-time monitoring service module is configured to receive the first digest value and perform security authentication on the target object according to the first digest value to obtain an authentication result, wherein the authentication result is used for indicating the security state of the target object.
11. The apparatus of claim 10, wherein the real-time monitoring service module is specifically configured to:
acquiring a second digest value preset for the target object from a security key storage module in the AI software system, wherein the security key storage module is arranged in the TEE;
judging whether the first digest value and the second digest value are consistent to obtain the authentication result;
if the first digest value is consistent with the second digest value, the authentication result is the security authentication passed state;
and if the first digest value is not consistent with the second digest value, the authentication result is the security authentication failure state.
12. The apparatus of claim 11, wherein the real-time monitoring service module is further configured to:
acquiring a digital certificate preset for the target object from the security key storage module;
verifying whether the digital certificate is legal or not according to the verification information in the digital certificate;
and when the digital certificate is legal, triggering and executing the operation of acquiring a second digest value preset for the target object from the security key storage module.
13. The apparatus of claim 12,
the security key storage module is used for storing a plurality of digital certificates, and the monitoring agent module is also used for reporting an identifier of the digital certificate to the real-time monitoring service module when reporting the first digest value to the real-time monitoring service module;
correspondingly, the real-time monitoring service module is specifically configured to: and acquiring the digital certificate corresponding to the identifier from a plurality of digital certificates stored in the security key storage module.
14. The apparatus of any of claims 10 to 13,
the real-time monitoring service module is further configured to send an alarm request to a Trusted User Interface (TUI) in the AI software system when the authentication result is that the security authentication fails, the TUI being placed in the TEE;
and the TUI is used for receiving the alarm request and displaying alarm information, wherein the alarm information is used for indicating to a user that the security authentication of the target object has not passed.
15. The apparatus according to any one of claims 10 to 13, wherein the real-time monitoring service module is further configured to send the authentication result to the monitoring agent module.
16. The apparatus of claim 15, wherein the monitoring agent module is further configured to:
and when the authentication result is the state that the safety authentication is not passed, sending a termination request to a preset control module, wherein the termination request is used for indicating the preset control module to terminate the process of the target object, and the preset control module is a module which is deployed in the operating system and does not belong to the AI software system.
17. The apparatus of any of claims 10 to 13, and 16, wherein the monitoring agent module is further configured to:
acquiring, from a cloud server, a digital certificate of the target object after upgrading, wherein the digital certificate comprises a digital signature and a digest value of the target object after the target object is upgraded by the cloud server;
sending the digital certificate of the upgraded target object to a security key storage module in the AI software system;
accordingly, the secure key storage module is configured to replace the stored digital certificate of the target object with the digital certificate of the target object after the upgrade.
18. The apparatus of any of claims 10 to 13 and 16, wherein the target objects comprise models and key data files in the AI software system.
CN201710481711.9A 2017-06-22 2017-06-22 Method and device for determining safety state of AI software system Active CN109117625B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710481711.9A CN109117625B (en) 2017-06-22 2017-06-22 Method and device for determining safety state of AI software system
PCT/CN2018/092027 WO2018233638A1 (en) 2017-06-22 2018-06-20 Method and apparatus for determining security state of ai software system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710481711.9A CN109117625B (en) 2017-06-22 2017-06-22 Method and device for determining safety state of AI software system

Publications (2)

Publication Number Publication Date
CN109117625A CN109117625A (en) 2019-01-01
CN109117625B true CN109117625B (en) 2020-11-06

Family

ID=64732802

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710481711.9A Active CN109117625B (en) 2017-06-22 2017-06-22 Method and device for determining safety state of AI software system

Country Status (2)

Country Link
CN (1) CN109117625B (en)
WO (1) WO2018233638A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111949986B (en) * 2020-02-19 2023-10-03 华控清交信息科技(北京)有限公司 Service processing method, system and storage medium
US11947444B2 (en) 2020-11-06 2024-04-02 International Business Machines Corporation Sharing insights between pre and post deployment to enhance cloud workload security

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2746981A1 (en) * 2012-12-19 2014-06-25 ST-Ericsson SA Trusted execution environment access control rules derivation
CN105468969A (en) * 2015-11-19 2016-04-06 中科创达软件股份有限公司 Method and system for promoting security of antivirus application program
CN105653978A (en) * 2015-12-29 2016-06-08 北京握奇智能科技有限公司 Method and system for improving TEE command execution speed
CN105656890A (en) * 2015-12-30 2016-06-08 深圳数字电视国家工程实验室股份有限公司 FIDO (Fast Identity Online) authenticator, system and method based on TEE (Trusted Execution Environment) and wireless confirmation
CN106547618A (en) * 2016-10-19 2017-03-29 沈阳微可信科技有限公司 Communication system and electronic equipment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101729960B1 (en) * 2013-10-21 2017-04-25 한국전자통신연구원 Method and Apparatus for authenticating and managing an application using trusted platform module
CN105608344A (en) * 2014-10-31 2016-05-25 江苏威盾网络科技有限公司 Application program safety management system and method
US20160379212A1 (en) * 2015-06-26 2016-12-29 Intel Corporation System, apparatus and method for performing cryptographic operations in a trusted execution environment
WO2017039241A1 (en) * 2015-08-28 2017-03-09 Samsung Electronics Co., Ltd. Payment information processing method and apparatus of electronic device
CN105447406B (en) * 2015-11-10 2018-10-19 华为技术有限公司 A kind of method and apparatus for accessing memory space
CN107077565B (en) * 2015-11-25 2019-11-26 华为技术有限公司 A kind of configuration method and equipment of safety instruction information
CN106603487B (en) * 2016-11-04 2020-05-19 中软信息系统工程有限公司 Method for improving security of TLS protocol processing based on CPU space-time isolation mechanism

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2746981A1 (en) * 2012-12-19 2014-06-25 ST-Ericsson SA Trusted execution environment access control rules derivation
CN105468969A (en) * 2015-11-19 2016-04-06 中科创达软件股份有限公司 Method and system for promoting security of antivirus application program
CN105653978A (en) * 2015-12-29 2016-06-08 北京握奇智能科技有限公司 Method and system for improving TEE command execution speed
CN105656890A (en) * 2015-12-30 2016-06-08 深圳数字电视国家工程实验室股份有限公司 FIDO (Fast Identity Online) authenticator, system and method based on TEE (Trusted Execution Environment) and wireless confirmation
CN106547618A (en) * 2016-10-19 2017-03-29 沈阳微可信科技有限公司 Communication system and electronic equipment

Also Published As

Publication number Publication date
CN109117625A (en) 2019-01-01
WO2018233638A1 (en) 2018-12-27

Similar Documents

Publication Publication Date Title
US11516011B2 (en) Blockchain data processing methods and apparatuses based on cloud computing
CN111082940B (en) Internet of things equipment control method and device, computing equipment and storage medium
CN110113167B (en) Information protection method and system of intelligent terminal and readable storage medium
CN110414268B (en) Access control method, device, equipment and storage medium
CN103843303B (en) The management control method and device of virtual machine, system
EP2876568B1 (en) Permission management method and apparatus, and terminal
JP5522307B2 (en) System and method for remote maintenance of client systems in electronic networks using software testing with virtual machines
US9270467B1 (en) Systems and methods for trust propagation of signed files across devices
US11252193B2 (en) Attestation service for enforcing payload security policies in a data center
CN104461683B (en) A kind of method of calibration that virtual machine illegally configures, apparatus and system
WO2016109955A1 (en) Software verifying method and device
CN104715183A (en) Trusted verifying method and equipment used in running process of virtual machine
CN111414640B (en) Key access control method and device
CN112422595A (en) Vehicle-mounted system safety protection method and device
CN109117625B (en) Method and device for determining safety state of AI software system
US12026561B2 (en) Dynamic authentication and authorization of a containerized process
CN111400771A (en) Target partition checking method and device, storage medium and computer equipment
CN117610083A (en) File verification method and device, electronic equipment and computer storage medium
US11520771B2 (en) Measurement update method, apparatus, system, storage media, and computing device
CN113868628A (en) Signature verification method and device, computer equipment and storage medium
CN114282208A (en) Secure software workload provisioning to trusted execution environment
CN114879980B (en) Vehicle-mounted application installation method and device, computer equipment and storage medium
CN117494232B (en) Method, device, system, storage medium and electronic equipment for executing firmware
WO2023066055A1 (en) Orchestration and deployment method and device, and readable storage medium
US20220210198A1 (en) System and method for certificate-less security management of interconnected hybrid resources

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant