
CN113778608A - Development, container deployment, identification, operation method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113778608A
CN113778608A (application CN202010526823.3A)
Authority
CN
China
Prior art keywords
scene
interface
algorithm
calling
portable container
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010526823.3A
Other languages
Chinese (zh)
Inventor
林世勤
孙卓金
熊健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202010526823.3A
Publication of CN113778608A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 — Arrangements for executing specific programs
    • G06F 9/455 — Emulation; interpretation; software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 — Hypervisors; virtual machine monitors
    • G06F 9/45558 — Hypervisor-specific management and integration aspects
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 — Arrangements for software engineering
    • G06F 8/60 — Software deployment

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)

Abstract

Embodiments of the invention provide a development method, a container deployment method, an identification method, an operation method, corresponding apparatuses, an electronic device, and a storage medium. The development method includes the following steps: determining a scene development interface set configured for a plurality of intelligent application scenes in a portable container; and developing a target intelligent application scene among the plurality of intelligent application scenes in the portable container based on the scene development interface set. Because the scene development interface set is applicable to the plurality of intelligent application scenes, developing the target intelligent application scene based on it improves the development efficiency of the scene.

Description

Development, container deployment, identification, operation method, device, electronic equipment and storage medium
Technical Field
Embodiments of the present invention relate to the field of computer technology, and in particular to a development method, a container deployment method, an identification method, an operation method, corresponding apparatuses, an electronic device, and a storage medium.
Background
Various deep learning frameworks provide complete, efficient, and convenient platforms for developing, training, managing, and deploying deep learning models. At the same time, the demand for high-performance inference in the deep learning field has driven the emergence of inference engines built on top of these frameworks. Such inference engines introduce various optimization methods, so the inference performance of a deep learning model is greatly improved compared with running it directly in a deep learning framework, making the model better suited for deployment in specific applications.
In addition to online model deployment, as artificial intelligence evolves, more and more offline learning models are also deployed in embedded devices. The advent of various front-end inference frameworks has enabled applications to achieve efficient inference by working with front-end inference engines on devices such as embedded devices. However, when developing for deep learning scenes, there is still room to improve development efficiency.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a development method, a container deployment method, an identification method, an operation method, corresponding apparatuses, an electronic device, and a storage medium, so as to solve or at least alleviate the above problems.
According to a first aspect of embodiments of the present invention, there is provided a development method, including: determining a scene development interface set configured for a plurality of intelligent application scenes in a portable container; and developing a target intelligent application scene in the plurality of intelligent application scenes in the portable container based on the scene development interface set.
According to a second aspect of embodiments of the present invention, there is provided a container deployment method, comprising: in the portable container, a scene algorithm calling interface is configured for an application program; and deploying the portable container to a target operating system running the application program so as to realize the functions of the application program by calling the scene algorithm calling interface.
According to a third aspect of the embodiments of the present invention, there is provided an identification method applied to an identification application program, where the identification application program and a portable container run in the same embedded operating system. The method includes: acquiring an object to be identified; and calling an identification scene algorithm calling interface configured for the identification application program in the portable container to identify the object to be identified, obtaining an identification result.
According to a fourth aspect of the embodiments of the present invention, there is provided an operation method including: determining a scene algorithm calling interface of a portable container running in the same operating system as an application program, wherein the scene algorithm calling interface is configured for the application program; and realizing the function of the application program by calling the scene algorithm calling interface.
According to a fifth aspect of embodiments of the present invention, there is provided an operating method, including: determining an image recognition scene algorithm calling interface of a portable container that runs in the same embedded operating system as an image recognition application program, wherein the image recognition scene algorithm calling interface is configured for the image recognition application program; and implementing the function of the image recognition application program by calling the image recognition scene algorithm calling interface.
According to a sixth aspect of embodiments of the present invention, there is provided an operating method, including: determining a speech recognition scene algorithm calling interface of a portable container that runs in the same embedded operating system as a speech recognition application program, wherein the speech recognition scene algorithm calling interface is configured for the speech recognition application program; and implementing the function of the speech recognition application program by calling the speech recognition scene algorithm calling interface.
According to a seventh aspect of the embodiments of the present invention, there is provided a development apparatus including: the determining module is used for determining a scene development interface set configured for a plurality of intelligent application scenes in the portable container; and the development module is used for developing a target intelligent application scene in the plurality of intelligent application scenes in the portable container based on the scene development interface set.
According to an eighth aspect of the embodiments of the present invention, there is provided an identification apparatus, in which an identification application and a portable container run in a same embedded operating system, the apparatus including: the acquisition module acquires an object to be identified; and the identification module calls an identification scene algorithm calling interface configured for the identification application program in the portable container to identify the object to be identified to obtain an identification result.
According to a ninth aspect of embodiments of the present invention, there is provided a container deployment apparatus comprising: the configuration module is used for configuring a scene algorithm calling interface for the application program in the portable container; and the deployment module is used for deploying the portable container to a target operating system running the application program so as to realize the function of the application program by calling the scene algorithm calling interface.
According to a tenth aspect of the embodiments of the present invention, there is provided an electronic apparatus including: the determining module is used for determining a scene algorithm calling interface of a portable container which runs in the same operating system with an application program, wherein the scene algorithm calling interface is configured for the application program; and the calling module is used for calling the scene algorithm calling interface to realize the function of the application program.
According to an eleventh aspect of embodiments of the present invention, there is provided an electronic apparatus, including: one or more processors; a storage medium configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the method according to any one of the first to third aspects.
According to a twelfth aspect of embodiments of the present invention, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the method according to any one of the first to third aspects.
The scheme of the embodiment of the invention can determine the scene development interface set configured for a plurality of intelligent application scenes in the portable container; and developing a target intelligent application scene in the plurality of intelligent application scenes in the portable container based on the scene development interface set. The scene development interface set is suitable for a plurality of intelligent application scenes, so that the target intelligent application scene is developed based on the scene development interface set, and the development efficiency of the scene can be improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and a person skilled in the art can obtain other drawings based on them.
FIG. 1 is a schematic diagram of intelligent application scene development to which the development, container deployment, and operation methods and apparatuses of embodiments of the present invention are applicable;
FIG. 2A is a schematic flow chart diagram of a method of developing another embodiment of the present invention;
FIG. 2B is a schematic view of a container framework according to another embodiment of the present invention;
FIG. 3A is a schematic view of a container framework according to another embodiment of the present invention;
FIG. 3B is a schematic view of a container framework according to another embodiment of the present invention;
FIG. 4A is a schematic flow chart diagram of a container deployment method of another embodiment of the present invention;
FIG. 4B is a schematic view of a container framework according to another embodiment of the present invention;
FIG. 4C is a schematic flow chart diagram of an identification method according to another embodiment of the present invention;
FIG. 5A is a schematic flow chart diagram of a method of operation of another embodiment of the present invention;
FIG. 5B is a schematic view of a container framework according to another embodiment of the present invention;
FIG. 6 is a schematic block diagram of a development device of another embodiment of the present invention;
FIG. 7A is a schematic block diagram of a container deployment device of another embodiment of the present invention;
FIG. 7B is a schematic block diagram of an identification device of another embodiment of the present invention;
FIG. 8 is a schematic block diagram of an electronic device of another embodiment of the invention;
FIG. 9 is a schematic block diagram of an electronic device of another embodiment of the present invention;
FIG. 10 is a schematic diagram of a hardware configuration of an electronic device according to another embodiment of the present invention.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions in the embodiments of the present invention, these solutions are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention shall fall within the protection scope of the embodiments of the present invention.
The following further describes specific implementations of the embodiments of the present invention with reference to the drawings. FIG. 1 is a schematic diagram of intelligent application scene development to which the development, container deployment, and operation methods and apparatuses of the present invention are applicable. As shown, a deep learning application or an artificial intelligence (AI) application (hereinafter referred to as an intelligent application) may be implemented using one compute engine or several compute engines. For example, multiple compute engines may be organized as the flow shown in FIG. 1. In this example, the application includes four compute engines: a data engine 10, a pre-processing engine 20, a model inference engine 30, and a post-processing engine 40. It should be understood that other examples may include other engines. For a computer vision scene, the data engine 10 may obtain data from sources such as cameras and files. The pre-processing engine 20 may directly invoke a provided interface, such as a digital visual pre-processing interface, to implement media pre-processing capabilities, including video and image decoding, image cropping, scaling, and the like. The model inference engine 30 invokes an interface provided by a deep learning model management module to load an offline model file and complete the inference computation of the model. Finally, the post-processing engine 40 post-processes the output data of the model inference.
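The four-engine flow above can be sketched as a simple pipeline. This is an illustrative sketch only; the class names, methods, and placeholder computations are hypothetical and do not reflect the patent's actual interfaces.

```python
# Hypothetical sketch of the four-engine pipeline (data -> pre-processing
# -> model inference -> post-processing). All names are illustrative.

class DataEngine:
    """Obtains raw data from a source (e.g. camera frames, files)."""
    def read(self):
        return [1, 2, 3]  # stand-in for raw sensor/file data

class PreprocessingEngine:
    """Media pre-processing: decoding, cropping, scaling, etc."""
    def run(self, data):
        return [x * 2 for x in data]  # placeholder transform

class ModelInferenceEngine:
    """Loads an offline model file and performs inference computation."""
    def run(self, data):
        return sum(data)  # placeholder "inference"

class PostprocessingEngine:
    """Post-processes the model's raw output."""
    def run(self, output):
        return {"score": output}

def run_pipeline():
    data = DataEngine().read()
    data = PreprocessingEngine().run(data)
    out = ModelInferenceEngine().run(data)
    return PostprocessingEngine().run(out)
```

Because each stage is an independent object, any engine can be replaced or custom-developed without touching the others, which is the decoupling the next paragraph describes.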
The flow of the four compute engines described above decouples the computation of different tasks in an application, so each compute engine can perform its function relatively independently. For example, the pre-processing engine can be responsible for pre-processing using a digital visual pre-processing module, and the model inference engine can be responsible for loading and executing the offline model. In addition, compute engines can be custom-developed by developers.
In addition, in application development, one or more types of applications or programs may be developed based on the number of applications that the hardware platform runs simultaneously. Each application may start multiple threads to complete inference tasks, or may start inference tasks in a single thread.
In addition, the offline model workflow can be divided into a model generation phase, an application compilation phase, and an application deployment phase. In the model generation phase, the model may be parsed and converted into a custom computational-graph intermediate representation. In a development environment, an offline model file is generated by, for example, an offline model generator after kernel fusion and processing such as operator optimization. The offline model can then be deployed into a container for further optimization together with an inference engine, and subsequently deployed to the operating system of the production environment; alternatively, the offline model may be deployed directly into the operating system of the production environment.
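The generation-then-deployment flow above can be illustrated as two steps. This is a hypothetical sketch: the function names and the string placeholders standing in for the intermediate representation and the offline model file are invented for illustration.

```python
# Illustrative sketch of the offline-model phases described above:
# parse -> intermediate representation -> kernel fusion / operator
# optimization -> offline model file -> deployment target.

def generate_offline_model(model_source):
    ir = f"IR({model_source})"          # parse into a computational-graph IR
    fused = f"fused({ir})"              # kernel fusion + operator optimization
    return f"offline_model[{fused}]"    # offline model file

def deploy(offline_model, target="container"):
    # The offline model may go into a portable container (for further
    # optimization with an inference engine) or directly into the
    # production operating system.
    return (target, offline_model)
```

For example, `deploy(generate_offline_model("resnet"))` yields a container deployment of the generated model file, while `deploy(model, target="os")` stands in for direct deployment.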
FIG. 2A is a schematic flow chart of a development method according to another embodiment of the present invention. The development method of FIG. 2A may be performed by any suitable electronic device having data processing capabilities, including but not limited to servers, mobile terminals (such as mobile phones and tablets), and PCs. The method includes the following steps:
210: a set of scenario development interfaces configured for a plurality of intelligent application scenarios in a portable container is determined.
It should be understood that the scene development interface set includes, but is not limited to, at least one of a data engine interface, a pre-processing engine interface, a model inference engine interface, and a post-processing engine interface. The scene development interface set may further include other interfaces, for example at least one of a scene-related model configuration interface, memory management interface, and event management interface. Preferably, the scene development interface set includes a data pre-processing interface, a model inference interface, and a data post-processing interface. For example, in a computer vision scene, the scene development interface set includes an image data pre-processing interface, an image recognition model inference interface, and an image data post-processing interface. As another example, in a speech recognition scene, the scene development interface set includes a speech data pre-processing interface, a speech recognition model inference interface, and a speech data post-processing interface.
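The preferred three-interface set can be sketched as an abstract base class that concrete scenes implement. This is a hypothetical sketch; the patent does not specify signatures, and the example scene's behavior is a placeholder.

```python
# Minimal sketch of a "scene development interface set" as three abstract
# interfaces (pre-processing, model inference, post-processing).
from abc import ABC, abstractmethod

class SceneInterfaceSet(ABC):
    @abstractmethod
    def preprocess(self, raw): ...
    @abstractmethod
    def infer(self, inputs): ...
    @abstractmethod
    def postprocess(self, outputs): ...

    def run(self, raw):
        # The common execution order shared by all intelligent scenes.
        return self.postprocess(self.infer(self.preprocess(raw)))

class ImageRecognitionScene(SceneInterfaceSet):
    """Placeholder implementation for one target scene."""
    def preprocess(self, raw):
        return raw.lower()            # stand-in for decode/crop/scale
    def infer(self, inputs):
        return f"label({inputs})"     # stand-in for model inference
    def postprocess(self, outputs):
        return {"result": outputs}
```

A speech recognition scene would subclass the same interface set with its own three implementations, which is why one interface set serves multiple scenes.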
It should also be understood that the intelligent application scenes here include, but are not limited to, deep learning application scenes, artificial intelligence application scenes, machine learning application scenes, and the like. These scenes include, but are not limited to, computer vision application scenes such as image recognition, as well as speech recognition, natural language processing, machine translation, and the like. Scene development here may be the development of an intelligent scene algorithm, an intelligent scene model, or an application module for an intelligent scene; it may be developed as part (e.g., a module) of an application, or as a separate application. In particular, computer vision application scenes may further include a face recognition scene, an intelligent image cropping scene, an icon recognition scene, an object recognition scene such as garbage classification, a gesture recognition scene, and the like. That is, each of the above scenes or sub-scenes can serve as an intelligent application scene of an embodiment of the present invention.
It should also be understood that portable containers include, but are not limited to, containers that can be deployed across operating systems, container engines such as Docker, and the like. Operating systems include, but are not limited to, embedded operating systems, desktop operating systems, server operating systems, and real-time operating systems. A portable container may span two or more operating systems. The portable container may be configured in any manner; for example, an image may be pulled from an image repository to configure multiple abstraction layers in the portable container. The portable container may include a configuration layer for decoupling the container's program code from an operating system (e.g., the operating system in which the container is deployed, or the container's host operating system).
220: and developing a target intelligent application scene in a plurality of intelligent application scenes in the portable container based on the scene development interface set.
It should be understood that the target intelligent application scene may be at least one of the plurality of intelligent application scenes. For example, the scene development interface set may be provided in the portable container in the form of a scene abstraction layer. The scene abstraction layer may be integrated with at least one of scene management, model configuration, memory management, and event management; alternatively, it may be configured separately from the framework layer that performs the above scene management, model configuration, memory management, and event management. The scene abstraction layer may also be used for orchestrating the various interfaces in the scene development interface set. For example, the scene abstraction layer may hold multiple orchestration configurations, and a target orchestration configuration among them may be obtained to develop the target intelligent application scene.
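The idea of multiple orchestration configurations, one of which is selected for the target scene, can be sketched as a lookup table of ordered interface calls. The configuration names and step lists below are invented for illustration.

```python
# Hypothetical orchestration configurations held by a scene abstraction
# layer. Each configuration is an ordered list of interface calls.

ORCHESTRATIONS = {
    "face_recognition":   ["decode", "crop", "infer_face", "nms"],
    "speech_recognition": ["resample", "fbank", "infer_asr", "decode_text"],
}

def get_orchestration(target_scene):
    """Return the target orchestration configuration for a scene."""
    if target_scene not in ORCHESTRATIONS:
        raise KeyError(f"no orchestration configured for {target_scene!r}")
    return ORCHESTRATIONS[target_scene]
```

Developing a new target scene then amounts to selecting (or adding) the corresponding configuration rather than rebuilding the interface wiring from scratch.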
Because the scene development interface set is applicable to multiple intelligent application scenes, developing the target intelligent application scene based on it improves the development efficiency of the scene. In addition, because the portable container shields the underlying configuration, rapid deployment is achieved when the portable container is deployed in an electronic device.
In another implementation manner of the present invention, the scene development interface set includes a data pre-processing interface, a model inference interface, and a data post-processing interface. Developing the target intelligent application scene among the plurality of intelligent application scenes in the portable container based on the scene development interface set includes: implementing a target scene algorithm of the target intelligent application scene based on the data pre-processing interface, the model inference interface, and the data post-processing interface by calling a data pre-processing algorithm library, a data post-processing algorithm library, and an inference engine library, respectively.
In one example, the scene development interface set may be implemented by the scene abstraction layer described above. The scene abstraction layer highly abstracts an intelligent application scene, for example into data pre-processing, model inference, and data post-processing. In general, an intelligent application scene is realized by implementing the interfaces defined for these three phases and registering them with the scene management module. Because the scene abstraction defines the interfaces an intelligent application scene must implement, it provides a unified specification for scene implementation: a scene developer only needs to implement the interfaces for a specific scene and register them with scene management to complete the development of the intelligent application scene.
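The "implement the interfaces, then register with scene management" pattern above can be sketched with a registry and a registration decorator. The registry design and scene names are hypothetical.

```python
# Sketch of registering a scene implementation with a scene management
# module. All names are illustrative.

SCENE_REGISTRY = {}

def register_scene(name):
    """Decorator: register a scene implementation under a scene name."""
    def wrap(cls):
        SCENE_REGISTRY[name] = cls
        return cls
    return wrap

@register_scene("gesture_recognition")
class GestureScene:
    # The three interfaces defined by the scene abstraction.
    def preprocess(self, raw):
        return raw
    def infer(self, inputs):
        return "gesture"              # placeholder inference result
    def postprocess(self, outputs):
        return outputs
```

Once registered, scene management can instantiate any scene by name, so the developer's only obligation is the three interface implementations.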
In another implementation manner of the present invention, implementing the target scene algorithm of the target intelligent application scene based on the data pre-processing interface, the model inference interface, and the data post-processing interface by calling the data pre-processing algorithm library, the data post-processing algorithm library, and the inference engine library respectively includes: determining a scene management framework configured for the plurality of intelligent application scenes in the portable container; calling the data pre-processing algorithm library, the data post-processing algorithm library, and the inference engine library through the scene management framework to obtain a target data pre-processing algorithm, a target data post-processing algorithm, and a target inference engine; and implementing, based on the target data pre-processing algorithm, the target data post-processing algorithm, and the target inference engine, the target scene algorithm of the target intelligent application scene corresponding to the data pre-processing interface, the model inference interface, and the data post-processing interface.
In one example, the data pre-processing algorithm library, the data post-processing algorithm library, and the inference engine library may be included in a core library. The core library contains the data pre-processing algorithm library, data post-processing algorithm library, and inference engine library required during intelligent application scene development; for example, the data pre-processing algorithm library may be an image algorithm library, a semantic algorithm library, or the like. That is, the core library layer combines the image/speech algorithm libraries, inference engine, and other modules required by intelligent application scenes, ensuring the completeness of the libraries on which scene development and operation depend.
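The framework's job of resolving a target pre-processing algorithm, post-processing algorithm, and inference engine from the core library, then composing them into a scene algorithm, can be sketched as below. The library contents are placeholders; real entries would be actual algorithm and engine implementations.

```python
# Hypothetical core-library lookup by a scene management framework.
# Each "library" maps a scene kind to a placeholder callable.

PREPROCESS_LIB  = {"image": lambda x: f"resized({x})"}
POSTPROCESS_LIB = {"image": lambda y: f"top1({y})"}
ENGINE_LIB      = {"image": lambda x: f"logits({x})"}

def build_scene_algorithm(kind):
    """Resolve target algorithms/engine and compose the scene algorithm."""
    pre = PREPROCESS_LIB[kind]
    post = POSTPROCESS_LIB[kind]
    engine = ENGINE_LIB[kind]

    def scene_algorithm(raw):
        # pre-processing -> model inference -> post-processing
        return post(engine(pre(raw)))

    return scene_algorithm
```

Adding a new scene kind only requires registering its three entries in the core library; the composition logic stays unchanged.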
In another implementation manner of the present invention, the scenario management framework is further configured to implement at least one of model configuration, memory management, and event management on the target scenario algorithm.
In one example, the scene management framework may be implemented as a framework layer. The framework layer provides basic management services for intelligent application scenes, including but not limited to scene management, model configuration management, memory management, and event management. In other words, the framework layer handles the parts other than scene development itself, so an intelligent application scene developer can focus on the specific service logic. That is, the framework layer implements the management modules related to intelligent application scenes, such as scene management, model configuration, event management, and memory management, in a development language such as C/C++, allowing a scene developer to concentrate on their own service logic and reducing the complexity of implementing an intelligent application scene.
In another implementation manner of the present invention, the scene management framework, the data pre-processing algorithm library, the data post-processing algorithm library, and the inference engine library may be configured based on the same development language or based on different development languages. For example, the scene management framework may be configured in a first development language, and the data pre-processing algorithm library, the data post-processing algorithm library, and the inference engine library may be configured in a second development language; this example is not intended to be limiting. As one example, the scene management framework and all three libraries are configured based on the same development language.
For example, the plurality of intelligent application scenes may be a plurality of computer vision application scenes. Target intelligent application scenes include, but are not limited to, a face recognition scene, an intelligent image cropping scene, an icon recognition scene, an object recognition scene such as garbage classification, a gesture recognition scene, and the like. The scene development interface set may include a computer vision data pre-processing algorithm library, a computer vision data post-processing algorithm library, and a computer vision inference engine library.
For example, the plurality of smart application scenarios may be a plurality of audio recognition application scenarios, the target smart application scenarios including, but not limited to, speech recognition, track recognition such as song recognition, machine translation, language recognition, and the like. The scene development interface set can comprise an audio data preprocessing algorithm library, an audio data post-processing algorithm library and an audio reasoning engine library.
For example, the plurality of intelligent application scenes may be a plurality of spatiotemporal recognition application scenes, with the target intelligent application scenes including, but not limited to, posture recognition, location recognition, gesture recognition, expression recognition, eye movement recognition, and the like. The scene development interface set may include a spatiotemporal data pre-processing algorithm library, a spatiotemporal data post-processing algorithm library, and a spatiotemporal inference engine library.
Because intelligent application scenes share algorithmic similarity, development efficiency is improved by calling the corresponding data pre-processing algorithm library, data post-processing algorithm library, and inference engine library during development.
In the example of an image recognition scene, the scene development interface set may include a data pre-processing algorithm library, a data post-processing algorithm library, and an inference engine library. For example, an image recognition scene algorithm of the image recognition scene based on the data pre-processing interface, the model inference interface, and the data post-processing interface can be developed by calling the data pre-processing algorithm library, the data post-processing algorithm library, and the inference engine library, respectively. Specifically, the scene management framework calls these libraries to obtain an image pre-processing algorithm, an image post-processing algorithm, and an image recognition inference engine; based on them, the image recognition scene algorithm of the image recognition scene corresponding to the data pre-processing interface, the model inference interface, and the data post-processing interface is implemented.
After development of the image recognition scene algorithm is completed, the portable container may run on the same embedded operating system as the image recognition application. An image recognition scene algorithm calling interface may be configured in the portable container for the image recognition application, and the function of the image recognition application can be realized by calling this interface. It should be understood that the image recognition application may be a face recognition application, an object recognition and classification application (e.g., garbage classification, picture classification), or the like.
When performing image recognition, the image recognition application obtains a target image and recognizes it by calling the image recognition scene algorithm calling interface, obtaining an image recognition result.
In the example of a speech recognition scenario, the scene development interface set may likewise include a data preprocessing algorithm library, a data post-processing algorithm library, and an inference engine library. For example, a speech recognition scene algorithm based on the data preprocessing interface, the model inference interface, and the data post-processing interface can be developed by calling these three libraries respectively. For example, the data preprocessing algorithm library, the data post-processing algorithm library, and the inference engine library can be called through the scene management framework to obtain a speech preprocessing algorithm, a speech post-processing algorithm, and a speech recognition inference engine, and based on these, the speech recognition scene algorithm corresponding to the data preprocessing interface, the model inference interface, and the data post-processing interface is implemented.
After development of the speech recognition scene algorithm is completed, the portable container may run on the same embedded operating system as the speech recognition application. A speech recognition scene algorithm calling interface may be configured in the portable container for the speech recognition application, and the function of the speech recognition application can be realized by calling this interface. It should be understood that the speech recognition application may be, for example, a voice assistant application, a voice command application, or the like.
When performing speech recognition, the speech recognition application obtains target speech, and the target speech is recognized by calling the speech recognition scene algorithm calling interface to obtain a speech recognition result.
Fig. 2B is a schematic view of a container framework according to another embodiment of the present invention. As shown, the portable container includes a scene development interface set, with which intelligent application scenes 1 through 4 can be developed. Four scenes are shown in this example, but it should be understood that other examples may include more or fewer scenes.
Fig. 3A is a schematic view of a container framework according to another embodiment of the present invention. As shown, in this example, the portable container is configured with an input interface and an output interface (the scene algorithm calling interface) for the application program. After development of the target scene is completed, the application program can integrate the intelligent application scene: for example, it may call the scene directly through the container's input interface and obtain the scene output result through the container's output interface.
In addition, the portable container may be provided with a compilation configuration layer (not shown). If the intelligent application scene is ported from one operating system to another, an application based on the other operating system can realize its functions simply by calling the input interface of the intelligent application scene; in other words, fast migration is achieved. It should be understood that the input interface and the output interface may be configured at the scene management layer, the scene abstraction layer, or other layers not shown in this example.
Fig. 3B is a schematic view of a container framework according to another embodiment of the present invention. As shown, this example includes a compilation configuration layer, a core library layer, a scene abstraction layer, and an intelligent application scene layer.
The core library layer may comprise the data preprocessing algorithm library, data post-processing algorithm library, and inference engine library required during development of intelligent application scenes. For example, the data preprocessing algorithm library may be an image algorithm library, a semantic algorithm library, or the like. That is, the core library layer combines the image/voice algorithm libraries, inference engines, and other modules required by intelligent application scenes, ensuring the completeness of the libraries on which scene development and operation depend.
In addition, the framework layer provides basic management services for intelligent application scenes, including but not limited to scene management, model configuration management, memory management, and event management. In other words, the framework layer handles the parts other than scene development, so that an intelligent application scene developer can focus on specific service logic. That is, the framework layer implements the management modules related to intelligent application scenes, such as scene management, model configuration, event management, and memory management, in a development language such as C/C++, allowing developers to concentrate on their own service logic and reducing the complexity of implementing an intelligent application scene.
Further, the scene abstraction layer may be used to abstract intelligent application scenes at a high level, for example into data preprocessing, model inference, and data post-processing stages. Generally, an intelligent application scene is implemented by realizing the interfaces defined for these three stages and registering them with the scene management module. The scene abstraction thus defines the implementation interfaces of intelligent application scenes and provides a unified specification: developers only need to implement the interfaces for a specific scene and register them with scene management to complete development.
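The implement-then-register pattern described above can be sketched as follows. This is a minimal, hypothetical Python sketch; `SceneBase`, `SceneManager`, and `EchoScene` are illustrative names, not interfaces actually defined by this patent.

```python
# Hypothetical sketch of the scene abstraction: each scene implements the three
# stage interfaces and registers itself with a scene management module.

import abc


class SceneBase(abc.ABC):
    """The three stage interfaces defined by the scene abstraction layer."""

    @abc.abstractmethod
    def preprocess(self, data): ...

    @abc.abstractmethod
    def infer(self, data): ...

    @abc.abstractmethod
    def postprocess(self, data): ...


class SceneManager:
    """Minimal stand-in for the scene management module."""

    def __init__(self):
        self._scenes = {}

    def register(self, name, scene):
        # Registering completes development of an intelligent application scene.
        self._scenes[name] = scene

    def call(self, name, raw_input):
        # Run the three stages of the registered scene in order.
        scene = self._scenes[name]
        return scene.postprocess(scene.infer(scene.preprocess(raw_input)))


class EchoScene(SceneBase):
    """A trivial concrete scene, for illustration only."""

    def preprocess(self, data):
        return data.strip()

    def infer(self, data):
        return data.upper()

    def postprocess(self, data):
        return {"result": data}


manager = SceneManager()
manager.register("echo", EchoScene())
print(manager.call("echo", " hello "))  # → {'result': 'HELLO'}
```

A new scene only implements `SceneBase` and registers itself; the manager and the calling interface stay unchanged, which is the unified specification the text describes.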
Further, the compilation configuration layer may be used to ensure that the container code is decoupled from the operating system; in other words, the entire container code can be compiled on various operating systems. Furthermore, in addition to the internal architecture, the intelligent application container may also provide a compact external calling interface. For example, an external application or another module that needs to integrate an intelligent application scene can complete the integration quickly simply by calling the input and output interfaces of the intelligent application scene container.
Fig. 4A is a schematic flow chart of a container deployment method according to another embodiment of the present invention. The container deployment method of fig. 4A may be performed by any suitable electronic device having data processing capabilities, including but not limited to: server, mobile terminal (such as mobile phone, PAD, etc.), PC, etc. The method comprises the following steps:
410: in the portable container, a scene algorithm calling interface is configured for an application program;
420: and deploying the portable container into a target operating system running the application program so as to realize the functions of the application program by calling a scene algorithm calling interface.
In the embodiment of the invention, the application program and the portable container are deployed in the target operating system, and the scene algorithm calling interface is configured for the application program. The portable container can be decoupled from the target operating system, so that the scene function of the scene algorithm calling interface is decoupled from the application program.
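Steps 410 and 420 can be illustrated with a small sketch. The `PortableContainer` and `TargetOS` stand-ins below are hypothetical names invented for this example; the point is that the application reaches the scene algorithm only through the configured calling interface.

```python
# Hypothetical sketch of the deployment flow: configure a scene algorithm
# calling interface in the portable container (step 410), then deploy the
# container into the target operating system (step 420).

class PortableContainer:
    def __init__(self):
        self._algorithms = {}

    def configure_call_interface(self, name, algorithm):
        # Step 410: expose a scene algorithm through a calling interface.
        self._algorithms[name] = algorithm

    def call(self, name, *args):
        # The application realizes its function through this interface alone.
        return self._algorithms[name](*args)


class TargetOS:
    """Stand-in for the target operating system hosting the container."""

    def __init__(self):
        self.container = None

    def deploy(self, container):
        # Step 420: deploy the portable container into the operating system.
        self.container = container


container = PortableContainer()
container.configure_call_interface("classify", lambda x: "cat" if x > 0 else "dog")

target_os = TargetOS()
target_os.deploy(container)

# The application knows only the calling interface, not the algorithm internals,
# so the scene function is decoupled from the application.
print(target_os.container.call("classify", 1))  # → cat
```

Because the application depends only on the interface name, the same container can be redeployed to a different operating system without changing the application code, which is the decoupling claimed above.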
In another implementation of the invention, the method further comprises: configuring, in the portable container, a scene development interface set for a plurality of intelligent application scenes. Configuring the scene algorithm calling interface for the application program in the portable container then comprises: determining a target scene algorithm obtained by developing a target intelligent application scene with the scene development interface set; and configuring, in the portable container, a scene algorithm calling interface for calling the target scene algorithm for the application program.
In another implementation of the present invention, a portable container comprises a unified compilation configuration configured for a plurality of operating systems, wherein deploying the portable container into a target operating system running an application comprises: determining a target operating system from a plurality of operating systems; the portable container is deployed into the target operating system based on the unified compilation configuration.
It should be understood that the unified compilation configuration may be implemented by a compilation configuration layer, which ensures that the container code is decoupled from the operating system; in other words, the entire container code can be compiled on various operating systems. Furthermore, in addition to the internal architecture, the intelligent application container may also provide a compact external calling interface. For example, an external application or another module that needs to integrate an intelligent application scene can complete the integration quickly simply by calling the input and output interfaces of the intelligent application scene container.
Fig. 4B is a schematic view of a container framework according to another embodiment of the present invention. As shown, in this example, an application program and a portable container run in (or are installed in) the operating system. One application and one portable container are shown, but it should be understood that in other examples the numbers of applications and portable containers may be arbitrary. In addition, the portable container in the figure includes four scene algorithms and one scene algorithm calling interface, though the numbers of scene algorithms and calling interfaces may also be arbitrary. In one example, multiple scene algorithms share one scene algorithm calling interface; in another example, multiple scene algorithms correspond to multiple calling interfaces, respectively. Further, the scene algorithms may include computer vision algorithms such as face recognition, object classification, and gesture recognition. In one example, each application corresponds to a respective scene algorithm; for example, the applications may include a face recognition application, an object classification application, and a gesture recognition application, so that each application calls only one scene algorithm. In another example, one application may call multiple scene algorithms.
For example, in the scenario of an internet of things device, an application in the device has a voice recognition function and a face recognition function. The application may invoke speech recognition algorithms and face recognition algorithms. It should be understood that the speech recognition algorithm and the face recognition algorithm may be developed based on the same set of scene algorithm interfaces, or may be developed based on different sets of scene algorithm interfaces. In one example, the application invokes a speech recognition algorithm interface and a face recognition algorithm interface. In another example, the application obtains the speech recognition algorithm and the face recognition algorithm by calling the unified algorithm interface.
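The unified-interface variant mentioned above, where one application obtains several scene algorithms through a single calling interface, might look like the following sketch. `UnifiedAlgorithmInterface` and the registered scene names are hypothetical, chosen only to illustrate the dispatch idea.

```python
# Hypothetical sketch: one unified calling interface dispatching to several
# scene algorithms (here, stand-ins for speech and face recognition).

class UnifiedAlgorithmInterface:
    def __init__(self, algorithms):
        # Map scene names to their scene algorithms.
        self._algorithms = dict(algorithms)

    def invoke(self, scene, data):
        # Single entry point: the application passes the scene name and input.
        return self._algorithms[scene](data)


iface = UnifiedAlgorithmInterface({
    "speech": lambda audio: f"text:{len(audio)} samples",
    "face": lambda image: "face detected" if image else "no face",
})

print(iface.invoke("speech", [0.1, 0.2, 0.3]))  # → text:3 samples
print(iface.invoke("face", [[1, 2], [3, 4]]))   # → face detected
```

The alternative design in the text, one calling interface per algorithm, would simply expose each entry of the dictionary as its own interface.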
For another example, in the scenario of a smart device for garbage classification, an application in the device may implement an object recognition function by calling an object recognition algorithm interface. For example, one or more artificial intelligence applications may be installed in a mobile terminal, such as a cell phone, for uses such as face recognition, age recognition, object recognition, orientation recognition, time determination, color recognition, voice recognition, and language translation. All or some of these functions can be developed in one or more containers using the development method of the embodiment of the invention; preferably, they are developed based on the same intelligent application scene development interface in the same container. One or more such containers may be installed in the terminal device, which simplifies installation of the applications. In addition, a large number of terminal devices can be deployed uniformly based on one or a few development containers.
Fig. 4C is a schematic flow chart of an identification method according to another embodiment of the invention. The recognition method of fig. 4C is applied to a recognition application, where the recognition application and the portable container run in the same embedded operating system, and the recognition method includes:
430: acquiring an object to be identified;
440: and calling an identification scene algorithm calling interface configured for identifying the application program in the portable container, and identifying the object to be identified to obtain an identification result.
Fig. 5A is a schematic flow chart of a method of operation of another embodiment of the present invention. The method of operation of fig. 5A may be performed by any suitable electronic device having data processing capabilities, including but not limited to: server, mobile terminal (such as mobile phone, PAD, etc.), PC, etc. The method comprises the following steps:
510: and determining a scene algorithm calling interface of a portable container running in the same operating system with the application program, wherein the scene algorithm calling interface is configured for the application program.
520: and calling the scene algorithm to call the interface to realize the function of the application program.
In the embodiment of the invention, the application program and the portable container run in the same operating system, and the scene algorithm calling interface is configured for the application program. The portable container can be decoupled from the operating system, so that the scene function of the scene algorithm calling interface is decoupled from the application program.
In one example, an operation method of an embodiment of the present invention includes: determining an image recognition scene algorithm calling interface of a portable container that runs in the same embedded operating system as an image recognition application, wherein the calling interface is configured for the image recognition application; and realizing the function of the image recognition application by calling the image recognition scene algorithm calling interface.
In another example, an operation method of an embodiment of the present invention includes: determining a speech recognition scene algorithm calling interface of a portable container that runs in the same embedded operating system as a speech recognition application, wherein the calling interface is configured for the speech recognition application; and realizing the function of the speech recognition application by calling the speech recognition scene algorithm calling interface.
In another implementation manner of the present invention, the function of the application program is implemented by calling a scene algorithm calling interface, which includes: and executing the target scene algorithm developed in the portable container by calling the scene algorithm calling interface so as to realize the functions of the application program.
In another implementation manner of the present invention, a scene development interface set configured for a plurality of intelligent application scenes is configured in the portable container, wherein the target scene algorithm is obtained by developing the target intelligent application scene based on the scene development interface set.
Fig. 5B is a schematic view of a container framework according to another embodiment of the present invention. As shown, in this example, four applications and one portable container are installed in the operating system. It should be understood that in other examples a different number of applications and portable containers may be installed. In this example, multiple applications call the same scene algorithm calling interface, but the portable container may also include different scene algorithm calling interfaces, in which case multiple applications may call different interfaces, or each application may call multiple interfaces.
For example, in a multi-application smart device, using a portable container to provide a unified calling interface for multiple applications can improve the device's configuration efficiency. Because the algorithms required by the applications are stored in the portable container, the portable container scheme of the embodiment of the invention enables quick pre-installation of applications before the smart device leaves the factory. In addition, because the portable container can include algorithm interfaces for multiple scenes, rapid development and rapid deployment of algorithms are achieved; and because the portable container works across operating systems, deployment on different operating systems is also made more efficient.
Fig. 6 is a schematic block diagram of a development apparatus of another embodiment of the present invention. The development device of fig. 6 may be any suitable electronic device having data processing capabilities, including but not limited to: development devices such as development boards, deployment devices, storage devices, development devices such as servers, mobile terminals (e.g., cell phones, PADs, etc.), and PCs, etc. The device includes:
a determination module 610, configured to determine, in a portable container, a scene development interface set configured for a plurality of intelligent application scenes; and
a development module 620, configured to develop, in the portable container, a target intelligent application scene of the plurality of intelligent application scenes based on the scene development interface set.
The scene development interface set is suitable for a plurality of intelligent application scenes, so developing the target intelligent application scene based on it can improve development efficiency. In addition, because the portable container shields the underlying configuration, rapid deployment is achieved when it is deployed in an electronic device.
In another implementation manner of the present invention, the scene development interface set includes a data preprocessing interface, a model inference interface, and a data postprocessing interface, wherein the development module is specifically configured to: and a target scene algorithm of the target intelligent application scene based on the data preprocessing interface, the model reasoning interface and the data postprocessing interface is realized by calling the data preprocessing algorithm library, the data postprocessing algorithm library and the reasoning engine library respectively.
In another implementation of the present invention, the development module is specifically configured to: determining a scene management framework configured for a plurality of intelligent application scenes in a portable container; calling a data preprocessing algorithm library, a data post-processing algorithm library and a reasoning engine library through a scene management framework to obtain a target data preprocessing algorithm, a target data post-processing algorithm and a target reasoning engine; and based on a target data preprocessing algorithm, a target data postprocessing algorithm and a target reasoning engine, realizing a target scene algorithm of which the target intelligent application scene corresponds to the data preprocessing interface, the model reasoning interface and the data postprocessing interface.
In another implementation manner of the present invention, the scenario management framework is further configured to implement at least one of model configuration, memory management, and event management on the target scenario algorithm.
In another implementation of the present invention, the scene management framework, the data preprocessing algorithm library, the data post-processing algorithm library, and the inference engine library are configured based on the same development language.
The apparatus of this embodiment is used to implement the corresponding method in the foregoing method embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein again. In addition, the functional implementation of each module in the apparatus of this embodiment can refer to the description of the corresponding part in the foregoing method embodiment, and is not described herein again.
Fig. 7A is a schematic block diagram of a container deployment device of another embodiment of the present invention. The container deployment apparatus of fig. 7A may be any suitable electronic device with data processing capabilities, including but not limited to: development devices such as development boards, deployment devices, storage devices, deployment devices such as servers, mobile terminals (e.g., cell phones, PADs, etc.), and PCs, and the like. The device includes:
a configuration module 710, configured to configure, in the portable container, a scene algorithm calling interface for an application program; and
a deployment module 720, configured to deploy the portable container into a target operating system running the application program, so as to realize the function of the application program by calling the scene algorithm calling interface.
In the embodiment of the invention, the application program and the portable container are deployed in the target operating system, and the scene algorithm calling interface is configured for the application program. The portable container can be decoupled from the target operating system, so that the scene function of the scene algorithm calling interface is decoupled from the application program.
In another implementation of the present invention, the configuration module is further configured to: in a portable container, a scene development interface set is configured for a plurality of intelligent application scenes, wherein a configuration module is specifically used for: determining a target scene algorithm obtained by developing a target intelligent application scene by using a scene development interface set; and in the portable container, configuring a scene algorithm calling interface for calling a target scene algorithm for the application program.
In another implementation of the present invention, the portable container comprises a unified compilation configuration configured for a plurality of operating systems, wherein the deployment module is specifically configured to: determining a target operating system from a plurality of operating systems; the portable container is deployed into the target operating system based on the unified compilation configuration.
The apparatus of this embodiment is used to implement the corresponding method in the foregoing method embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein again. In addition, the functional implementation of each module in the apparatus of this embodiment can refer to the description of the corresponding part in the foregoing method embodiment, and is not described herein again.
Fig. 7B is a schematic block diagram of a recognition device according to another embodiment of the present invention. The recognition device of fig. 7B runs the recognition application and the portable container in the same embedded operating system, and comprises:
an obtaining module 730, configured to obtain an object to be recognized; and
a recognition module 740, configured to call a recognition scene algorithm calling interface configured in the portable container for the recognition application, to recognize the object to be recognized and obtain a recognition result.
Fig. 8 is a schematic block diagram of an electronic device of another embodiment of the present invention. The electronic device of fig. 8 may be any suitable electronic device having data processing capabilities, including but not limited to: server, mobile terminal (such as mobile phone, PAD, etc.), PC, etc. The apparatus comprises:
a determining module 810, configured to determine a scene algorithm calling interface of a portable container running in the same operating system as the application program, wherein the scene algorithm calling interface is configured for the application program; and
a calling module 820, configured to call the scene algorithm calling interface to realize the function of the application program.
In the embodiment of the invention, the application program and the portable container run in the same operating system, and the scene algorithm calling interface is configured for the application program. The portable container can be decoupled from the operating system, so that the scene function of the scene algorithm calling interface is decoupled from the application program.
In another implementation manner of the present invention, the calling module is specifically configured to: and executing the target scene algorithm developed in the portable container by calling the scene algorithm calling interface so as to realize the functions of the application program.
In another implementation manner of the present invention, a scene development interface set configured for a plurality of intelligent application scenes is configured in the portable container, wherein the target scene algorithm is obtained by developing the target intelligent application scene based on the scene development interface set.
The apparatus of this embodiment is configured to implement a corresponding method in the foregoing method embodiments, and has the beneficial effects of the corresponding method embodiment, which are not described herein again. In addition, the functional implementation of each module in the apparatus of this embodiment can refer to the description of the corresponding part in the foregoing method embodiment, and is not described herein again.
Fig. 9 is a schematic structural diagram of an electronic device according to another embodiment of the invention; the electronic device may include:
one or more processors 901;
a storage medium 902, which may be configured to store one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the methods described in the embodiments above.
Fig. 10 is a hardware configuration of an electronic apparatus according to another embodiment of the present invention; as shown in fig. 10, the hardware structure of the electronic device may include: a processor 1001, a communication interface 1002, a storage medium 1003, and a communication bus 1004;
wherein the processor 1001, the communication interface 1002, and the storage medium 1003 communicate with one another via the communication bus 1004;
alternatively, the communication interface 1002 may be an interface of a communication module;
the processor 1001 may be specifically configured to: determining a scene development interface set configured for a plurality of intelligent application scenes in a portable container; developing, in the portable container, a target smart application scenario of the plurality of smart application scenarios based on the set of scenario development interfaces, or,
in the portable container, a scene algorithm calling interface is configured for an application program; deploying the portable container to a target operating system running the application program so as to realize the function of the application program by calling the scene algorithm calling interface, or,
determining a scene algorithm calling interface of a portable container running in the same operating system as an application program, wherein the scene algorithm calling interface is configured for the application program; and realizing the function of the application program by calling the scene algorithm calling interface.
Alternatively, the electronic device is applied to an identification application program, the identification application program and the portable container run in the same embedded operating system, and the processor 1001 may be specifically configured to: acquiring an object to be identified; and calling an identification scene algorithm calling interface configured for the identification application program in the portable container, and identifying the object to be identified to obtain an identification result.
The processor 1001 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present invention may be implemented or performed by it. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
The storage medium 1003 may be, but is not limited to, a Random Access Memory (RAM), a Read Only Memory (ROM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), and the like.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a storage medium, the computer program comprising program code configured to perform the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section, and/or installed from a removable medium. When executed by a Central Processing Unit (CPU), the computer program performs the above-described functions defined in the method of the present invention. It should be noted that the storage medium of the present invention may be a computer readable signal medium, a computer readable storage medium, or any combination of the two. The storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium other than a computer readable storage medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code configured to carry out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In the above embodiments, specific precedence relationships are provided, but these precedence relationships are only exemplary; in particular implementations, there may be fewer or more steps, or the execution order may be modified. That is, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or by combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or by hardware. In some cases, the names of these modules do not constitute a limitation on the modules themselves.
As another aspect, the present invention also provides a storage medium having stored thereon a computer program which, when executed by a processor, implements the method as described in the above embodiments.
As another aspect, the present invention also provides a storage medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The storage medium carries one or more programs that, when executed by the apparatus, cause the apparatus to: determine a scene development interface set configured for a plurality of intelligent application scenes in a portable container, and develop, in the portable container, a target intelligent application scene among the plurality of intelligent application scenes based on the scene development interface set; or,
configure, in the portable container, a scene algorithm calling interface for an application program, and deploy the portable container to a target operating system running the application program, so that the function of the application program is realized by calling the scene algorithm calling interface; or,
determine a scene algorithm calling interface of a portable container running in the same operating system as an application program, wherein the scene algorithm calling interface is configured for the application program, and realize the function of the application program by calling the scene algorithm calling interface.
Alternatively, the apparatus is applied to an identification application program, where the identification application program and the portable container run in the same embedded operating system, and the storage medium carries one or more programs that, when executed by the apparatus, cause the apparatus to: acquire an object to be identified, call an identification scene algorithm calling interface configured for the identification application program in the portable container, and identify the object to be identified to obtain an identification result.
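The identification flow recited above (acquire an object, call the identification scene algorithm calling interface, obtain an identification result) can be sketched in Python as follows. All function names and the toy scoring logic are illustrative assumptions, not part of this application; in the described arrangement the interface would be served by the portable container running in the same embedded operating system:

```python
# Hypothetical sketch; names and the stand-in "model" are invented for
# illustration only.

def acquire_object():
    # Stand-in for acquiring an object to be identified (e.g. a camera frame).
    return {"pixels": [0.1, 0.9, 0.8]}

def identification_interface(obj):
    # Identification scene algorithm calling interface configured for the
    # identification application inside the portable container.
    score = sum(obj["pixels"]) / len(obj["pixels"])
    label = "cat" if score > 0.5 else "unknown"
    return {"label": label, "score": round(score, 2)}

def identification_application():
    obj = acquire_object()                 # step 1: acquire object to identify
    return identification_interface(obj)   # step 2: call the container interface

result = identification_application()
```

The application itself carries no algorithm code; swapping the container's algorithm (say, from image to voice identification) would leave the two-step application flow unchanged.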
The expressions "first", "second", "said first" or "said second" used in various embodiments of the present disclosure may modify various components regardless of order and/or importance, and these expressions do not limit the corresponding components. They are used only to distinguish one element from another. For example, a first user equipment and a second user equipment represent different user equipments, although both are user equipments. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure.
When an element (e.g., a first element) is referred to as being (operably or communicatively) "coupled" or "connected" to another element (e.g., a second element), it is to be understood that the element may be directly connected to the other element, or indirectly connected to the other element via yet another element (e.g., a third element). In contrast, when an element (e.g., a first element) is referred to as being "directly connected" or "directly coupled" to another element (e.g., a second element), no intervening element (e.g., a third element) is present between them.
The foregoing description is only a description of the preferred embodiments of the invention and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention is not limited to technical solutions formed by the specific combination of the above-mentioned features, and also encompasses other technical solutions formed by any combination of the above-mentioned features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with features having similar functions disclosed in (but not limited to) the present invention.

Claims (20)

1. A method of development, comprising:
determining a scene development interface set configured for a plurality of intelligent application scenes in a portable container;
and developing a target intelligent application scene in the plurality of intelligent application scenes in the portable container based on the scene development interface set.
2. The method of claim 1, wherein the scene development interface set comprises a data preprocessing interface, a model inference interface and a data post-processing interface, and wherein
the developing, in the portable container, a target intelligent application scene in the plurality of intelligent application scenes based on the scene development interface set comprises:
calling a data preprocessing algorithm library, a data post-processing algorithm library and an inference engine library, respectively, to realize a target scene algorithm of the target intelligent application scene based on the data preprocessing interface, the model inference interface and the data post-processing interface.
3. The method of claim 2, wherein the calling a data preprocessing algorithm library, a data post-processing algorithm library and an inference engine library, respectively, to realize a target scene algorithm of the target intelligent application scene based on the data preprocessing interface, the model inference interface and the data post-processing interface comprises:
determining a scene management framework configured for the plurality of intelligent application scenes in the portable container;
calling the data preprocessing algorithm library, the data post-processing algorithm library and the inference engine library through the scene management framework to obtain a target data preprocessing algorithm, a target data post-processing algorithm and a target inference engine;
and realizing, based on the target data preprocessing algorithm, the target data post-processing algorithm and the target inference engine, the target scene algorithm of the target intelligent application scene corresponding to the data preprocessing interface, the model inference interface and the data post-processing interface.
4. The method of claim 3, wherein the scene management framework is further configured to implement at least one of model configuration, memory management, and event management for the target scene algorithm.
5. The method of claim 3, wherein the scene management framework, the data preprocessing algorithm library, the data post-processing algorithm library, and the inference engine library are implemented based on the same development language.
6. An identification method applied to an identification application program, wherein the identification application program and a portable container run in the same embedded operating system, the method comprising:
acquiring an object to be identified;
and calling an identification scene algorithm calling interface configured for the identification application program in the portable container, and identifying the object to be identified to obtain an identification result.
7. A container deployment method comprising:
configuring, in a portable container, a scene algorithm calling interface for an application program;
and deploying the portable container to a target operating system running the application program so as to realize the functions of the application program by calling the scene algorithm calling interface.
8. The method of claim 7, wherein the method further comprises:
configuring, in the portable container, a scene development interface set for a plurality of intelligent application scenes,
and wherein the configuring, in the portable container, a scene algorithm calling interface for an application program comprises:
determining a target scene algorithm obtained by developing a target intelligent application scene by using the scene development interface set;
and configuring a scene algorithm calling interface for calling the target scene algorithm for the application program in the portable container.
9. The method of claim 7, wherein the portable container comprises a unified compilation configuration set for a plurality of operating systems, wherein the deploying the portable container into a target operating system running the application comprises:
determining the target operating system from the plurality of operating systems;
deploying the portable container into the target operating system based on the unified compilation configuration.
10. A method of operation, comprising:
determining a scene algorithm calling interface of a portable container running in the same operating system as an application program, wherein the scene algorithm calling interface is configured for the application program;
and realizing the function of the application program by calling the scene algorithm calling interface.
11. The method of claim 10, wherein said implementing the functionality of the application by invoking the scene algorithm invocation interface comprises:
and executing the target scene algorithm developed in the portable container by calling the scene algorithm calling interface so as to realize the function of the application program.
12. The method of claim 10, wherein a scene development interface set configured for a plurality of intelligent application scenes is configured in the portable container, and a target scene algorithm is obtained by developing a target intelligent application scene based on the scene development interface set.
13. A method of operation, comprising:
determining an image recognition scene algorithm calling interface of a portable container which runs in the same embedded operating system as an image recognition application program, wherein the image recognition scene algorithm calling interface is configured for the image recognition application program;
and realizing the function of the image recognition application program by calling the image recognition scene algorithm calling interface.
14. A method of operation, comprising:
determining a voice recognition scene algorithm calling interface of a portable container running in the same embedded operating system as a voice recognition application program, wherein the voice recognition scene algorithm calling interface is configured for the voice recognition application program;
and realizing the function of the voice recognition application program by calling the voice recognition scene algorithm calling interface.
15. A development apparatus, comprising:
the determining module is used for determining a scene development interface set configured for a plurality of intelligent application scenes in the portable container;
and the development module is used for developing a target intelligent application scene in the plurality of intelligent application scenes in the portable container based on the scene development interface set.
16. An identification apparatus, wherein an identification application program and a portable container run in the same embedded operating system, the apparatus comprising:
the acquisition module, used for acquiring an object to be identified;
and the identification module, used for calling an identification scene algorithm calling interface configured for the identification application program in the portable container to identify the object to be identified to obtain an identification result.
17. A container deployment apparatus comprising:
the configuration module is used for configuring a scene algorithm calling interface for the application program in the portable container;
and the deployment module is used for deploying the portable container to a target operating system running the application program so as to realize the function of the application program by calling the scene algorithm calling interface.
18. An electronic device, comprising:
the determining module is used for determining a scene algorithm calling interface of a portable container which runs in the same operating system with an application program, wherein the scene algorithm calling interface is configured for the application program;
and the calling module is used for calling the scene algorithm calling interface to realize the function of the application program.
19. An electronic device, the device comprising:
one or more processors;
a storage medium configured to store one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-14.
20. A storage medium having stored thereon a computer program which, when executed by a processor, carries out the method of any one of claims 1 to 14.
CN202010526823.3A 2020-06-09 2020-06-09 Development, container deployment, identification, operation method, device, electronic equipment and storage medium Pending CN113778608A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010526823.3A CN113778608A (en) 2020-06-09 2020-06-09 Development, container deployment, identification, operation method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010526823.3A CN113778608A (en) 2020-06-09 2020-06-09 Development, container deployment, identification, operation method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113778608A true CN113778608A (en) 2021-12-10

Family

ID=78834975

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010526823.3A Pending CN113778608A (en) 2020-06-09 2020-06-09 Development, container deployment, identification, operation method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113778608A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0315493A2 (en) * 1987-11-06 1989-05-10 VIsystems, Inc Virtual interface system and method for enabling software applications to be environment-independent
CN1831762A * 2005-03-08 2006-09-13 Microsoft Corp. Development framework for mixing semantics-driven and state driven dialog
CN108920259A (en) * 2018-03-30 2018-11-30 华为技术有限公司 Deep learning job scheduling method, system and relevant device
US20180357047A1 (en) * 2016-01-27 2018-12-13 Bonsai AI, Inc. Interface for working with simulations on premises
CN109324819A (en) * 2018-09-28 2019-02-12 中国平安财产保险股份有限公司 Code server dispositions method, device, server apparatus and storage medium
US20190318240A1 (en) * 2018-04-16 2019-10-17 Kazuhm, Inc. Training machine learning models in distributed computing systems
CN110458598A (en) * 2019-07-04 2019-11-15 阿里巴巴集团控股有限公司 Scene adaptation method, device and electronic equipment
CN110688142A (en) * 2019-10-10 2020-01-14 星环信息科技(上海)有限公司 Method, device and storage medium for publishing application programming interface
WO2020077522A1 (en) * 2018-10-16 2020-04-23 华为技术有限公司 Method for identifying environment scene, chip, and terminal
CN111062521A (en) * 2019-11-29 2020-04-24 微民保险代理有限公司 Online prediction method, system and server
US20200174834A1 (en) * 2018-12-03 2020-06-04 Salesforce.Com, Inc. Reasoning engine for automated operations management

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Zhaochen, Luo Tiejian: "CCI: A Container-Based Continuous Integration System", Journal of University of Chinese Academy of Sciences, vol. 35, no. 04, 31 July 2018 (2018-07-31) *

Similar Documents

Publication Publication Date Title
CN109542399B (en) Software development method and device, terminal equipment and computer readable storage medium
US10268549B2 (en) Heuristic process for inferring resource dependencies for recovery planning
US11699073B2 (en) Network off-line model processing method, artificial intelligence processing device and related products
CN112036577A (en) Method and device for application machine learning based on data form and electronic equipment
WO2023065746A1 (en) Algorithm application element generation method and apparatus, electronic device, computer program product and computer readable storage medium
CN110781180B (en) Data screening method and data screening device
US20220374219A1 (en) Deployment of service
CN115600676A (en) Deep learning model reasoning method, device, equipment and storage medium
CN112783614A (en) Object processing method, device, equipment, storage medium and program product
CN102402455A (en) Method and device for calling dynamic link library
CN115826972A (en) Face recognition method and device, computer equipment and storage medium
CN113778608A (en) Development, container deployment, identification, operation method, device, electronic equipment and storage medium
CN117389647A (en) Plug-in generation method, application development method, device, equipment and medium
CN109005163B (en) HTTP dynamic request service calling method
CN114449063B (en) Message processing method, device and equipment
CN116841560A (en) RISC-V AIoT-oriented high-customization operating system construction method and system
CN115237457A (en) AI application operation method and related product
CN115480771A (en) Method for deploying algorithms and related device
CN114546793A (en) Log generation method and device and computer readable storage medium
CN117519850B (en) AI model arranging method and device, electronic equipment and medium
CN111158704B (en) Model building method, deployment flow generating method, device and electronic equipment
CN113469364B (en) Reasoning platform, method and device
CN117056052A (en) Timing task processing method, device, equipment and storage medium thereof
CN117827320A (en) Service execution method, device, equipment and storage medium thereof
US20200394532A1 (en) Detaching Social Media Content Creation from Publication

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination