
CN113986162A - Layer composition method, device and computer readable storage medium


Info

Publication number
CN113986162A
Authority
CN
China
Prior art keywords
layer, synthesis, synthesized, HWC, layers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111109621.XA
Other languages
Chinese (zh)
Other versions
CN113986162B (English)
Inventor
林泰良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Glory Smart Technology Development Co., Ltd.
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co., Ltd.
Priority to CN202111109621.XA
Publication of CN113986162A
Application granted
Publication of CN113986162B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14: Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F 3/1407: General aspects irrespective of display type, e.g. determination of decimal point position, display with fixed or driving decimal point, suppression of non-significant zeros

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a layer synthesis method, a layer synthesis device, and a computer-readable storage medium, belonging to the field of display technologies. The method comprises the following steps: when only some of the layers to be synthesized have changed relative to the layers in the previous frame image and the other layers are unchanged, the layer management module may obtain the changed layers, send only the changed layers to the HWC, and the HWC synthesizes the changed layers with the result of its previous synthesis. The layer management module therefore does not need to send all the layers to be synthesized to the HWC, which reduces the number of layers to synthesize and improves synthesis efficiency and flexibility. Moreover, because this method synthesizes fewer layers, it can ensure to a certain extent that the number of layers sent to the HWC does not exceed the number of layers the HWC supports synthesizing, thereby saving synthesis time and improving the utilization rate of the HWC.

Description

Layer composition method, device and computer readable storage medium
Technical Field
The present application relates to the field of display technologies, and in particular, to a layer composition method and apparatus, and a computer-readable storage medium.
Background
With the development of electronic technology, more and more electronic devices with an image display function, such as mobile phones and tablet computers, are available. The display interface of an electronic device is generally composed of a plurality of layers (surfaces); that is, a plurality of layers overlap to form the display interface. Referring to fig. 1, the display interface of the mobile phone may be composed of 3 layers: the status bar 11 at the top, the navigation bar 12 at the bottom, and the application interface 13 in the middle, where the application interface 13 belongs to the application currently running in the foreground.
Currently, there are two layer composition methods: Graphics Processing Unit (GPU) composition and hardware composer (hwcomposer, HWC) composition. The HWC is a dedicated image processing device; compared with GPU composition, HWC composition has advantages such as higher performance and faster composition speed. During HWC composition, each layer to be synthesized occupies one transmission channel of the HWC, so the number of layers the HWC supports synthesizing is limited by the transmission channels it has. When the number of layers to be synthesized exceeds the number of layers the HWC supports synthesizing, the GPU must first composite a part of the layers to be synthesized into one synthesized layer; the synthesized layer and the remaining layers are then sent together to the HWC, which performs synthesis and sends the result for display, so that the frame of image synthesized from the layers can be displayed on the screen.
In this layer synthesis process, if the number of layers to be synthesized exceeds the number of layers the HWC supports synthesizing, the HWC must wait for GPU synthesis to finish before it can synthesize the GPU's result with the remaining layers. This lengthens the synthesis period, lowers synthesis efficiency and flexibility, and may make it impossible to complete layer synthesis and display sending within one frame display period, causing stutter and frame loss.
Disclosure of Invention
The application provides a layer synthesis method, a device and a computer readable storage medium, which can solve the problems of long synthesis period, low synthesis efficiency and low flexibility in the related technology.
To achieve the above purpose, the following technical solutions are adopted:
in a first aspect, a layer composition method is provided. The method is applied to an electronic device that includes a layer management module and an HWC, and includes:
the layer management module obtains, from the layers to be synthesized, the newly created layers and the layers whose layer information has changed as first layers, and sends the obtained first layers to the HWC; the HWC synthesizes the obtained first layers with a first synthesized layer to obtain a second synthesized layer, where the first synthesized layer is the layer synthesized by the HWC before the second synthesized layer.
That is, when only some of the layers to be synthesized have changed relative to the layers in the previous frame image and the other layers are unchanged, the layer management module may obtain the changed layers and the result of the HWC's previous synthesis, and send only the changed layers to the HWC to be synthesized with that previous result; it does not need to send all the layers to be synthesized to the HWC.
In this way, the number of layers to synthesize is reduced, and synthesis efficiency and flexibility are improved. Moreover, because fewer layers are synthesized in this mode, it can be ensured to a certain extent that the number of layers sent to the HWC does not exceed the number of layers the HWC supports synthesizing, so the layers can be synthesized directly by the HWC without first passing through the GPU. This saves synthesis time and improves HWC utilization, and it avoids the situation in which the HWC must wait for the GPU to finish before synthesizing the GPU's result with the remaining layers, which may prevent layer synthesis and display sending from completing within one frame display period and cause stutter and frame loss.
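To make this first aspect concrete, the following C++ sketch outlines the decision flow under stated assumptions: Layer, CompositeLayer, and the hwc* functions are hypothetical stand-ins introduced here, not names from the patent or from any real Android API, and the preset synthesis conditions described later are elided.
```cpp
#include <cstdint>
#include <optional>
#include <vector>

// Hypothetical stand-ins for the patent's concepts (not a real HWC API).
struct Layer {
    uint64_t id;
    bool isNew;        // newly created in this frame
    bool infoChanged;  // layer attributes or display data changed
};
struct CompositeLayer { uint64_t frameId; };

// Stubbed HWC calls assumed by this sketch.
std::optional<CompositeLayer> hwcGetLastComposite() { return std::nullopt; }
CompositeLayer hwcCompose(const std::vector<Layer>& /*changed*/,
                          const CompositeLayer& /*previous*/) { return {1}; }
CompositeLayer hwcComposeAll(const std::vector<Layer>& /*all*/) { return {1}; }

// Outline of the first aspect: pick out the changed layers ("first
// layers") and, when possible, compose only those on top of the HWC's
// previous result instead of sending every layer.
CompositeLayer composeFrame(const std::vector<Layer>& toCompose) {
    std::vector<Layer> firstLayers;
    for (const Layer& l : toCompose)
        if (l.isNew || l.infoChanged) firstLayers.push_back(l);

    std::optional<CompositeLayer> previous = hwcGetLastComposite();
    if (previous && firstLayers.size() < toCompose.size())
        return hwcCompose(firstLayers, *previous);  // incremental path

    return hwcComposeAll(toCompose);  // fall back to full composition
}
```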
The layer management module implements functions such as layer creation, control, and management. The layer management module may be SurfaceFlinger, which is a system service. In the display system of an electronic device, layer composition can be realized by SurfaceFlinger. For example, after an application is started, SurfaceFlinger may create a layer for the application. While the terminal runs, SurfaceFlinger may acquire the layers to be displayed by the applications running on the terminal and synthesize the acquired layers through the GPU and/or the HWC.
The layers to be synthesized are the layers to be displayed, that is, the layers that will appear in the next frame shown on the display screen. The layers to be synthesized may include newly created layers and layers whose layer information has changed, and may also include layers whose layer information has not changed.
When the Vsync signal arrives, the layer management module may take the layers to be displayed in the next frame as the layers to be synthesized, and then pick out from them the newly created layers and the layers whose layer information has changed, obtaining one or more first layers. That is, SurfaceFlinger's layer composition flow may be triggered by the Vsync signal.
The layer information includes layer attributes or display data. The layer attributes include a position area, which indicates the position and size of the layer. The position area may be represented by the pixel coordinates of the layer's end points on the display screen; for example, it may be represented by the pixel coordinates of the layer's 4 edges: left, top, right, and bottom. In addition, the layer attributes may also include attributes such as the hierarchy.
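As a concrete illustration of these attributes, here is a minimal C++ sketch; Rect and LayerAttributes are hypothetical names introduced only for this example.
```cpp
#include <cstdint>

// Position area: pixel coordinates of the layer's left, top, right, and
// bottom edges on the display screen.
struct Rect {
    int32_t left, top, right, bottom;
};

// Layer attributes as described above: a position area plus a hierarchy
// (Z-order); a larger zOrder means the layer sits closer to the front.
struct LayerAttributes {
    Rect area;
    int32_t zOrder;
};
```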
A change in layer information means that the layer information of a layer to be synthesized in the picture to be displayed has changed relative to that layer's information in the synthesized layer of the previous frame; that is, the layer's information in the current Vsync period differs from its information in the previous Vsync period. Such a layer is referred to as a layer whose layer information has changed.
The layer management module may record in advance the layer information that each created layer had in the synthesized layer of the previous frame. After obtaining the layers to be synthesized, it may compare their layer information with the recorded layer information to determine whether the layer information has changed.
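A minimal sketch of this record-and-compare step follows; LayerInfo, LayerInfoRecorder, and the hash standing in for the display data are assumptions of this example, not names from the patent.
```cpp
#include <cstdint>
#include <unordered_map>

struct Rect {
    int32_t left, top, right, bottom;
    bool operator==(const Rect& o) const {
        return left == o.left && top == o.top &&
               right == o.right && bottom == o.bottom;
    }
};

struct LayerInfo {
    Rect area;                 // layer attribute: position area
    uint64_t displayDataHash;  // stand-in for comparing display data
    bool operator==(const LayerInfo& o) const {
        return area == o.area && displayDataHash == o.displayDataHash;
    }
};

// Records each layer's info from the previously synthesized frame, keyed
// by layer ID, and flags a layer as changed when the current info differs.
class LayerInfoRecorder {
    std::unordered_map<uint64_t, LayerInfo> lastFrame_;
public:
    bool hasChanged(uint64_t layerId, const LayerInfo& current) const {
        auto it = lastFrame_.find(layerId);
        return it == lastFrame_.end() || !(it->second == current);
    }
    void record(uint64_t layerId, const LayerInfo& info) {
        lastFrame_[layerId] = info;
    }
};
```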
In a possible implementation manner, before sending the obtained first layers to the HWC, the layer management module may determine whether the obtained first layers meet a preset synthesis condition; if they do, it sends the obtained first layers to the HWC.
The preset synthesis condition may include: the hierarchy of every obtained first layer is greater than or equal to the maximum hierarchy of the remaining layers, where the remaining layers are the layers to be synthesized other than the obtained first layers.
The hierarchy describes the front-to-back order of layers in the direction perpendicular to the display screen plane: the larger a layer's hierarchy, the closer it is to the front, that is, the higher it sits among the layers. If the hierarchy of every obtained first layer is greater than or equal to the maximum hierarchy of the remaining layers, the obtained first layers lie above the remaining layers.
When the obtained first layers lie above the remaining layers, they also lie above the other layers in the synthesis result of all layers to be synthesized; and if they are synthesized with the synthesized layer of the previous frame, they likewise end up above that synthesized layer. This ensures that the result of synthesizing the obtained first layers with the synthesized layer is consistent with the result of synthesizing all layers to be synthesized. In this case, layer synthesis may proceed by synthesizing the obtained first layers with the synthesized layer; that is, the obtained first layers are determined to satisfy the preset synthesis condition.
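The hierarchy condition can be sketched as follows; the Layer struct and the function name are hypothetical, introduced only to illustrate the check.
```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Layer { uint64_t id; int32_t zOrder; bool changed; };

// Preset condition: every changed ("first") layer must sit at or above the
// highest remaining (unchanged) layer, so that composing it on top of the
// previous composite matches composing all layers from scratch.
bool firstLayersAreOnTop(const std::vector<Layer>& toCompose) {
    int32_t maxRemaining = INT32_MIN;
    for (const Layer& l : toCompose)
        if (!l.changed) maxRemaining = std::max(maxRemaining, l.zOrder);
    for (const Layer& l : toCompose)
        if (l.changed && l.zOrder < maxRemaining) return false;
    return true;
}
```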
In addition, the preset synthesis condition may further include: if the obtained first layers include a first layer whose layer information has changed, the position area of that first layer after the change must completely cover its position area before the change, where the position area before the change refers to the first layer's position area in the synthesized layer of the previous frame.
As an example, after the first layers among the layers to be synthesized are obtained, it may further be determined whether they include a first layer whose layer information has changed. If so, it is judged whether the position area of that first layer completely covers its position area before the change. If it does, the obtained first layers are determined to meet the preset synthesis condition, and they can be synthesized with the synthesized layer of the previous frame.
When the obtained first layers include a first layer whose layer information has changed, the synthesized layer still contains that first layer as it was before the change. If the changed layer's position area completely covers the pre-change position area, then, when the obtained first layers are synthesized with the synthesized layer, the changed first layer also completely covers its pre-change version in the synthesized layer. This prevents any part of the pre-change layer from showing through in the new synthesis result, so the result of synthesizing the obtained first layers with the synthesized layer is consistent with the result of synthesizing all layers to be synthesized, and the layer synthesis effect is improved.
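This coverage condition reduces to rectangle containment; a minimal sketch, assuming the position area is an edge-coordinate Rect as above:
```cpp
#include <cstdint>

struct Rect { int32_t left, top, right, bottom; };

// Preset condition: the changed layer's new position area must fully cover
// its old position area in the previous composite; otherwise stale pixels
// of the old area would remain visible around the new one.
bool coversOldArea(const Rect& newArea, const Rect& oldArea) {
    return newArea.left   <= oldArea.left  &&
           newArea.top    <= oldArea.top   &&
           newArea.right  >= oldArea.right &&
           newArea.bottom >= oldArea.bottom;
}
```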
In a possible implementation manner, before sending the obtained first layers to the HWC, the layer management module may first obtain the first synthesized layer that the HWC has synthesized, then send the obtained first layers and the first synthesized layer to the HWC, and the HWC synthesizes them.
As one example, the layer management module may call a callback function through which the first synthesized layer that the HWC has synthesized is obtained. The callback function is used to obtain the synthesized layer that the HWC has synthesized.
The callback function can be obtained through prior registration by the layer management module. For example, the layer management module may call a callback function registration interface and register the callback function with the HWC through it, so that the synthesized layer synthesized by the HWC can subsequently be obtained by calling the callback function.
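One possible shape of this registration-and-callback interaction is sketched below; the Hwc class, CompositeQuery alias, and method names are assumptions of this example, not the patent's or Android's actual interfaces.
```cpp
#include <cstdint>
#include <functional>
#include <optional>

struct CompositeLayer { uint64_t frameId = 0; };

using CompositeQuery = std::function<std::optional<CompositeLayer>()>;

// The layer manager registers once and receives a callback that later
// returns whatever the HWC composed last.
class Hwc {
    std::optional<CompositeLayer> last_;
public:
    CompositeQuery registerCompositeCallback() {
        return [this] { return last_; };
    }
    void present(const CompositeLayer& result) { last_ = result; }
};

int main() {
    Hwc hwc;
    CompositeQuery getLastComposite = hwc.registerCompositeCallback();
    hwc.present(CompositeLayer{42});     // HWC composes and presents a frame
    auto previous = getLastComposite();  // later: fetch the previous result
    return previous ? 0 : 1;
}
```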
In a possible implementation manner, when sending the obtained first layers to the HWC, the layer management module may send a layer synthesis request carrying the obtained first layers; the request asks the HWC to obtain the first synthesized layer it has already synthesized and to synthesize the obtained first layers with it. After receiving the layer synthesis request, the HWC may obtain the first synthesized layer and then synthesize the obtained first layers with the first synthesized layer.
Generally, the total number of the obtained first layers plus the first synthesized layer is less than the total number of layers to be synthesized, and generally does not exceed the number of layers the HWC supports synthesizing; the obtained first layers, or the obtained first layers together with the first synthesized layer, can therefore be sent to the HWC for synthesis.
However, it cannot be ruled out that the total number of the obtained first layers and the first synthesized layer exceeds the number of layers the HWC supports synthesizing. To ensure that the number of layers the HWC must synthesize does not exceed this limit, before sending the obtained first layers to the HWC, a synthesis policy obtaining request may first be sent to the HWC to request the synthesis policy for the obtained first layers and the first synthesized layer.
In a possible implementation manner, before the layer management module sends the obtained first layers and the first synthesized layer to the HWC, it may send a first synthesis policy obtaining request to the HWC, carrying related information of the obtained first layers. The HWC receives the request, determines a first synthesis policy according to the related information of the obtained first layers and of the first synthesized layer, and sends the first synthesis policy to the layer management module. The first synthesis policy specifies a synthesis mode, either GPU synthesis or HWC synthesis, for each of the obtained first layers and the first synthesized layer. If the layer management module determines from the first synthesis policy that the synthesis mode of every layer is HWC synthesis, it executes the step of sending the obtained first layers to the HWC.
The related information of the obtained first layers may include one or more of: the number of obtained first layers, the identity (ID) of each layer, and the layer attributes. The related information of the first synthesized layer may include one or more of its layer ID and layer attributes; it may be sent by the layer management module or determined by the HWC, which is not limited in the embodiments of the application.
As an example, the HWC may determine the first synthesis policy according to the related information of the obtained first layers and the first synthesized layer together with its own hardware performance. The hardware performance of the HWC at least includes the number of layers it supports synthesizing, but may also include other capabilities.
For example, if the HWC determines from the related information and its hardware performance that the total number of the obtained first layers and the first synthesized layer is less than or equal to the number of layers it supports synthesizing, it may set the synthesis mode of every layer to HWC synthesis.
For another example, if the total number of the obtained first layers and the first synthesized layer is greater than the number of layers the HWC supports synthesizing, the synthesis mode of one part of the layers may be set to GPU synthesis and that of the other part to HWC synthesis.
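These two cases amount to a simple channel-count check; a sketch under the assumption that the excess layers are diverted to the GPU, whose single output layer then joins the HWC batch (all names hypothetical):
```cpp
#include <cstddef>
#include <vector>

enum class Mode { Gpu, Hwc };

// Decide a per-layer synthesis mode. If the total layer count fits the
// HWC's channels, everything is HWC-composed; otherwise enough layers are
// diverted to the GPU that its one composite output plus the remaining
// HWC layers fit within hwcMaxLayers.
std::vector<Mode> decidePolicy(size_t totalLayers, size_t hwcMaxLayers) {
    std::vector<Mode> policy(totalLayers, Mode::Hwc);
    if (totalLayers > hwcMaxLayers) {
        size_t toGpu = totalLayers - hwcMaxLayers + 1;
        for (size_t i = 0; i < toGpu; ++i) policy[i] = Mode::Gpu;
    }
    return policy;
}
```
For example, with 3 layers and an HWC limit of 2 (as in fig. 3 below), two layers go to the GPU, and the GPU's one output plus the remaining layer make exactly 2 for the HWC.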
As an example, the first synthesis policy obtaining request may carry the related information of both the obtained first layers and the first synthesized layer, which may include one or more of: the total number of the obtained first layers and the first synthesized layer, the layer ID of each layer, and the layer attributes.
In addition, if the layer management module determines from the first synthesis policy that the synthesis modes include both GPU synthesis and HWC synthesis, the layers whose synthesis mode is GPU synthesis may first be synthesized by the GPU to obtain a third synthesized layer. The layers whose synthesis mode is HWC synthesis are then sent to the HWC together with the third synthesized layer, and the HWC synthesizes them to obtain a fourth synthesized layer.
In addition, if the obtained first layers do not satisfy the preset synthesis condition, the layer management module may send a second synthesis policy obtaining request to the HWC, carrying related information of the layers to be synthesized. The HWC receives the request and determines a second synthesis policy for the layers to be synthesized according to that information; the second synthesis policy specifies a synthesis mode, either GPU synthesis or HWC synthesis, for each layer to be synthesized. The HWC sends the second synthesis policy to the layer management module, which synthesizes the layers to be synthesized according to it.
As an example, synthesizing the layers to be synthesized according to the second synthesis policy includes: if the layer management module determines from the second synthesis policy that the synthesis modes include both GPU synthesis and HWC synthesis, the layers whose synthesis mode is GPU synthesis are first synthesized to obtain a fifth synthesized layer. The fifth synthesized layer and the layers whose synthesis mode is HWC synthesis are then sent to the HWC, which synthesizes them to obtain a sixth synthesized layer.
As another example: if the layer management module determines from the second synthesis policy that the synthesis mode of every layer to be synthesized is HWC synthesis, it sends the layers to be synthesized to the HWC, which synthesizes them to obtain a seventh synthesized layer.
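The mixed path shared by these strategies can be sketched as follows; the types and the stubbed gpuCompose/hwcCompose calls are hypothetical stand-ins for the two compositors.
```cpp
#include <cstdint>
#include <vector>

enum class Mode { Gpu, Hwc };
struct Layer { uint64_t id; Mode mode; };
struct CompositeLayer { uint64_t frameId; };

// Stubs standing in for the two compositors.
CompositeLayer gpuCompose(const std::vector<Layer>&) { return {1}; }
CompositeLayer hwcCompose(const std::vector<Layer>&) { return {2}; }

// Mixed path: GPU-tagged layers are composed first into one intermediate
// layer, which is then sent to the HWC together with the HWC-tagged layers.
CompositeLayer applyPolicy(const std::vector<Layer>& layers) {
    std::vector<Layer> forGpu, forHwc;
    for (const Layer& l : layers)
        (l.mode == Mode::Gpu ? forGpu : forHwc).push_back(l);

    if (!forGpu.empty()) {
        CompositeLayer intermediate = gpuCompose(forGpu);
        forHwc.push_back({intermediate.frameId, Mode::Hwc});
    }
    return hwcCompose(forHwc);
}
```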
In a second aspect, a layer composition apparatus is provided, which has the function of implementing the behavior of the layer composition method in the first aspect. The layer composition apparatus includes at least one module, and the at least one module is configured to implement the layer composition method provided in the first aspect.
In a third aspect, a layer composition device is provided, whose structure includes a processor and a memory. The memory is used to store a program that supports the device in executing the layer composition method provided in the first aspect, as well as the data used to implement that method. The processor is configured to execute the program stored in the memory. The layer composition device may further include a communication bus for establishing a connection between the processor and the memory.
In a fourth aspect, a computer-readable storage medium is provided, in which instructions are stored, and when the instructions are executed on a computer, the instructions cause the computer to execute the layer composition method according to the first aspect.
In a fifth aspect, a computer program product is provided, which comprises instructions that, when run on a computer, cause the computer to perform the layer composition method according to the first aspect.
The technical effects obtained by the second, third, fourth and fifth aspects are similar to the technical effects obtained by the corresponding technical means in the first aspect, and are not described herein again.
Drawings
Fig. 1 is a schematic diagram of a display interface of a mobile phone according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a sequence of multiple surfaces in the Z-axis according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a layer composition process provided in the related art;
FIG. 4 is a schematic diagram of another layer composition flow provided by the related art;
fig. 5 is a schematic diagram illustrating a display flow performed based on a Vsync signal according to the related art;
fig. 6 is a schematic diagram of an image layer composition process according to an embodiment of the present application;
fig. 7 is a schematic diagram illustrating a display process performed based on a Vsync signal according to an embodiment of the present application;
fig. 8 is a schematic block diagram of a system architecture of an electronic device according to an embodiment of the present application;
FIG. 9 is a flowchart of a layer composition method according to an embodiment of the present application;
FIG. 10 is a flowchart of another layer composition method according to an embodiment of the present application;
FIG. 11 is a flowchart of another layer composition method according to an embodiment of the present application;
FIG. 12 is a schematic diagram of a third composite image provided by an embodiment of the present application;
FIG. 13 is a diagram illustrating a fourth composite image provided by an embodiment of the present application;
FIG. 14 is a schematic diagram of a fifth composite image provided by an embodiment of the present application;
fig. 15 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application clearer, the technical solutions in the embodiments of the present application are described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments herein, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. "And/or" herein merely describes an association between associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more.
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature.
To facilitate understanding of the present application, terms referred to in the embodiments of the present application will be explained first.
(1) Layer (surface): each application may correspond to one or more graphical interfaces, each of which may be referred to as a surface. The display interface of the electronic device is generally synthesized by a plurality of surfaces.
Referring to fig. 1, fig. 1 is a schematic view of a display interface of a mobile phone according to an embodiment of the present disclosure. As shown in fig. 1, the display screen of the mobile phone is composed of 3 layers, namely, a top status bar 11, a bottom navigation bar 12 and a middle application interface 13. It should be understood that the display screen of the terminal device may also be synthesized by other layers, and the embodiment of the present application is only described with reference to fig. 1 as an example.
The status bar 11 at the top is the layer corresponding to the status bar application. The navigation bar 12 at the bottom is the layer corresponding to the navigation bar application. The application interface 13 is the layer corresponding to the current application, where the current application refers to the application currently running in the foreground.
The status bar 11 indicates the status of the mobile phone and may include status icons such as the network status, battery status, device connection status, settings status, or time. The navigation bar 12 is used for interface switching and may include navigation icons such as a return key, a home key, and a running-program view key.
Each surface in the display interface has its position, size, and content to be displayed on the screen. During the running process of the application, the position, the size or the displayed content of the surface corresponding to the application may be changed.
Each surface has a corresponding layer attribute, and the layer attribute may include information such as a position and a size of a corresponding layer. In addition, each layer also has a corresponding buffer queue (buffer queue), and the buffer queue is used for storing display data of the corresponding layer, where the display data is used for indicating display content of the corresponding layer. The display data of the layer may be an image obtained by rendering the layer.
In addition, multiple surfaces in the display interface may overlap, and the stacking relationship between the surfaces may be described by a hierarchy. The hierarchy is used for describing the front-back sequence of the surfaces in the vertical direction of the plane of the display screen, namely describing the up-down coverage relation among the surfaces. The larger the hierarchy of surfaces, the further forward the surface is in a direction perpendicular to the plane of the display screen.
For example, a hierarchy may be described in Z-order. A Z axis exists in the vertical direction of the screen plane, and all the surfaces determine the front-back sequence according to the coordinates on the Z axis, namely, the upper-lower covering relation among the surfaces is described. This order on the Z-axis may be referred to as Z-order.
Referring to fig. 2, fig. 2 is a schematic diagram of the order of multiple surfaces on the Z-axis according to an embodiment of the present disclosure. The screen displays surface0, surface1, and surface2 stacked from bottom to top: surface2 partially covers surface1, and surface1 partially covers surface0. The Z-orders of surface0, surface1, and surface2 are 0, 1, and 2, respectively; that is, the hierarchies of surface0, surface1, and surface2 are 0, 1, and 2.
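The fig. 2 ordering can be reproduced by sorting on Z-order; a small illustrative C++ snippet (names hypothetical):
```cpp
#include <algorithm>
#include <string>
#include <vector>

struct Surface { std::string name; int zOrder; };

// Painting proceeds from the smallest Z to the largest, so the surface
// with the largest Z (surface2) ends up on top.
int main() {
    std::vector<Surface> surfaces = {
        {"surface2", 2}, {"surface0", 0}, {"surface1", 1}};
    std::sort(surfaces.begin(), surfaces.end(),
              [](const Surface& a, const Surface& b) {
                  return a.zOrder < b.zOrder;
              });
    // surfaces is now {surface0, surface1, surface2}: back to front.
    return 0;
}
```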
The software related to layer composition in the terminal device includes, but is not limited to, the layer management module (SurfaceFlinger). The hardware related to layer composition in the terminal device includes, but is not limited to, the GPU and the HWC, which provide hardware support for SurfaceFlinger.
(2) SurfaceFlinger: SurfaceFlinger is a system service, mainly used to implement functions such as the creation, control, and management of surfaces.
In the display system of an electronic device, layer composition can be realized by SurfaceFlinger. For example, after an application is started, SurfaceFlinger may create a layer for the application. While the terminal runs, SurfaceFlinger may acquire the layers to be displayed by the applications running on the terminal and synthesize the acquired layers through the GPU and/or the HWC.
As an example, an application may send a layer creation request to SurfaceFlinger after starting. After receiving the request, SurfaceFlinger creates a corresponding layer for the application and returns the layer ID of the created layer to the application. The created layer has corresponding layer attributes, which may include information such as the size and position of the layer.
In addition, SurfaceFlinger may also allocate a corresponding buffer queue to the created layer, which is used to store the layer's display data. After the application renders its layer according to a view update request and obtains the layer's display data, it may store the display data in the buffer queue corresponding to the layer.
(3) GPU synthesis: The GPU is a general-purpose image processing device that also completes image processing tasks other than layer composition. GPU synthesis refers to the layer composition mode in which layers are synthesized by the GPU.
(4) HWC synthesis: The HWC is a dedicated image processing device for layer composition and display. The HWC may be a stand-alone device or integrated in a system-on-chip. HWC synthesis refers to the layer composition mode in which layers are synthesized by the HWC. The number of layers the HWC supports synthesizing is limited, because each layer occupies one transmission channel of the HWC, and the HWC's transmission channels are limited.
The synthesis mode comparison between HWC synthesis and GPU synthesis may be as shown in table 1 below:
TABLE 1
Synthesis mode | Power consumption | Performance | Other limitations
GPU synthesis  | High              | Low         | No limit on the number of layers supported
HWC synthesis  | Low               | High        | Limited number of layers supported
As can be seen from table 1, HWC synthesis has advantages of low power consumption, high performance, and fast synthesis speed, compared with GPU synthesis.
However, since the number of layers the HWC supports synthesizing is limited, when the number of layers to be synthesized exceeds it, the GPU must first combine a part of the layers to be synthesized; the GPU's synthesis result and the remaining layers are then sent to the HWC, which synthesizes them to obtain the layer to be displayed and sends it for display, so that the frame of image synthesized from the layers can be shown on the screen.
Referring to fig. 3, fig. 3 is a schematic diagram of a layer composition process provided in the related art. As shown in fig. 3, suppose the to-be-displayed interface of the mobile phone includes 3 surfaces, namely the top status bar 11, the bottom navigation bar 12, and the middle application interface 13 of the first application, and the HWC supports synthesizing 2 layers. To synthesize the 3 surfaces of the interface to be displayed, SurfaceFlinger may send two layers, the status bar 11 and the application interface 13, to the GPU, which synthesizes them to obtain the synthesized layer 14. The synthesized layer 14 and the remaining navigation bar 12 are then sent to the HWC, which synthesizes them to obtain the synthesized layer 15. After layer synthesis completes, the HWC sends the synthesized layer 15 to the display screen of the mobile phone, and the display screen displays it.
Merging layers means acquiring the display data of each layer and merging that display data. The display data of a layer can be obtained from the buffer queue corresponding to the layer.
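A minimal sketch of such merging, assuming opaque pixel buffers and layers already sorted back to front by Z-order (a real compositor would also alpha-blend):
```cpp
#include <cstdint>
#include <vector>

// A layer's display data: a pixel buffer plus its on-screen offset.
struct LayerData {
    int width = 0, height = 0;     // buffer size in pixels
    int x = 0, y = 0;              // top-left position on screen
    std::vector<uint32_t> pixels;  // fetched from the layer's buffer queue
};

// Draw each layer's pixels into the frame buffer (sized frameW * frameH)
// back to front; later layers overwrite earlier ones where they overlap.
void merge(std::vector<uint32_t>& frame, int frameW, int frameH,
           const std::vector<LayerData>& layersBackToFront) {
    for (const LayerData& l : layersBackToFront)
        for (int row = 0; row < l.height; ++row)
            for (int col = 0; col < l.width; ++col) {
                int fx = l.x + col, fy = l.y + row;
                if (fx < 0 || fy < 0 || fx >= frameW || fy >= frameH)
                    continue;
                frame[fy * frameW + fx] = l.pixels[row * l.width + col];
            }
}
```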
Referring to fig. 4, fig. 4 is a schematic diagram of another layer composition flow provided in the related art. As shown in fig. 4, suppose the interface to be displayed includes 3 layers, namely the top status bar, the bottom navigation bar, and the middle application interface of the first application, and the HWC supports synthesizing 2 layers. To synthesize the 3 layers of the interface to be displayed, SurfaceFlinger may first obtain the display data of the status bar, of the first application's interface, and of the navigation bar from their respective buffer queues. Because the number of layers to be synthesized exceeds the number the HWC supports synthesizing, SurfaceFlinger may send the display data of the status bar and of the application interface to the GPU, which synthesizes them into the display data of a synthesized layer and stores it in that layer's buffer queue. SurfaceFlinger then sends the synthesized layer's display data and the navigation bar's display data to the HWC, which combines them into the frame image to be displayed, and the display screen shows the frame image.
In addition, the display pipeline of an electronic device's display system mainly includes three processes: application (APP) rendering, layer composition, and hardware display sending. That is, before a frame of image appears on the display screen, the display system must sequentially execute the application rendering process, the SurfaceFlinger composition process, and the hardware display sending process. The application rendering process renders the application's layers according to view update requests. The layer composition process synthesizes the layers produced by the rendering process into a synthesized layer, which is the frame image to be displayed. The hardware display sending process performs hardware display processing on the synthesized layer generated by the composition process and pushes it to the display screen.
To avoid display stutter and improve the visual experience, the display system synchronizes the stages of the display pipeline via the Vertical Synchronization (Vsync) signal. The Vsync signal is generated by SurfaceFlinger; that is, SurfaceFlinger is also responsible for generating and distributing the Vsync signal.
For a certain frame image displayed based on the Vsync signal, the display system executes the application rendering process in the Nth Vsync period, the layer composition process in the (N+1)th Vsync period, and the hardware display sending process in the (N+2)th Vsync period. That is, at least 2 Vsync periods elapse for one frame of image from application rendering to hardware display.
Generally, a Vsync signal may trigger the application rendering, layer composition, and hardware display sending processes simultaneously. For example, fig. 5 is a schematic diagram illustrating a display flow performed based on the Vsync signal according to the related art. As shown in fig. 5, the display system may perform the application rendering of the 1st frame image in the Nth Vsync period, its layer composition in the (N+1)th Vsync period, and its hardware display sending in the (N+2)th Vsync period. When the (N+1)th Vsync signal arrives, that is, when the (N+1)th Vsync period starts, the application rendering of the 2nd frame image may begin; its layer composition is executed in the (N+2)th Vsync period and its hardware display sending in the (N+3)th Vsync period. Likewise, when the (N+2)th Vsync period starts, the application rendering of the 3rd frame image may begin; its layer composition is executed in the (N+3)th Vsync period and its hardware display sending in the (N+4)th Vsync period. In the related art, the display flow of each frame image is performed independently; there is no correlation between the frame images.
However, in the above layer synthesis process, where part of the layers to be synthesized are synthesized by the GPU and the HWC then synthesizes the GPU's result with the remaining layers, the HWC must wait for GPU synthesis to finish, so synthesis efficiency and flexibility are low. In addition, the GPU's lower performance and longer processing time lengthen the total synthesis period, which may prevent layer synthesis and display sending from completing within one frame display period and cause stutter and frame loss.
In the embodiment of the application, to save layer synthesis time and improve layer synthesis efficiency and flexibility, after SurfaceFlinger acquires the layers to be synthesized, it may pick from them the newly created first layers and the first layers whose layer information has changed, and send the obtained first layers to the HWC; the HWC synthesizes the obtained first layers with the first synthesized layer to obtain a second synthesized layer, where the first synthesized layer is the layer synthesized by the HWC before the second synthesized layer.
The newly created layers and the layers whose layer information has changed are exactly the layers to be synthesized that changed relative to the layers in the previous frame image. The first synthesized layer was synthesized from all the layers of the previous frame image, so it already contains the layers to be synthesized that did not change. Synthesizing the obtained first layers with the first synthesized layer therefore covers both the changed and the unchanged layers; that is, the result of synthesizing the obtained first layers with the first synthesized layer is, in display form, the same as the result of synthesizing all the layers to be synthesized.
That is, when only part of the layers to be synthesized have changed relative to the layers in the previous frame image and the other layers are unchanged, SurfaceFlinger may acquire the changed layers and the result of the HWC's previous synthesis, and send only these to the HWC for synthesis, without sending all the layers to be synthesized to the HWC.
In this way, the number of layers to synthesize is reduced, and synthesis efficiency and flexibility are improved. Because fewer layers are synthesized in this mode, it can be ensured to a certain extent that the number of layers sent to the HWC does not exceed the number of layers the HWC supports synthesizing, so the layers can be synthesized directly by the HWC without first passing through the GPU. This saves synthesis time, improves HWC utilization, and avoids the case in which the HWC waits for the GPU to finish before synthesizing the GPU's result with the remaining layers, which may prevent layer synthesis and display sending from completing within one frame display period and cause stutter and frame loss.
Referring to fig. 6, fig. 6 is a schematic diagram of a layer composition process according to an embodiment of the present disclosure. As shown in fig. 6, suppose the interface to be displayed of the mobile phone includes 3 layers, namely the top status bar 11, the bottom navigation bar 12, and the middle application interface 13 of the first application, and the HWC supports synthesizing 2 layers. SurfaceFlinger synthesizes the status bar 11 and the application interface 13 through the GPU to obtain the synthesized layer 14, then synthesizes the synthesized layer 14 and the remaining navigation bar 12 through the HWC to obtain the synthesized layer 15, and displays the synthesized layer 15 on the display screen. If, after the synthesized layer 15 is displayed, the display data of the middle application interface 13 changes (the application interface with changed display data is called application interface 16) while the top status bar 11 and the bottom navigation bar 12 are unchanged, SurfaceFlinger can obtain the HWC's last synthesis result, namely the synthesized layer 15, and take the application interface 16 and the synthesized layer 15 as the layers to be synthesized. Because the number of layers to be synthesized does not exceed the 2 layers the HWC supports synthesizing, SurfaceFlinger can send the application interface 16 and the synthesized layer 15 directly to the HWC, which synthesizes them to obtain the synthesized layer 17 and pushes it to the display screen of the mobile phone for display. The synthesized layer 17, synthesized from the application interface 16 and the synthesized layer 15, is the same as the result of synthesizing the status bar 11, the navigation bar 12, and the application interface 16, and it is the display image of the frame following the synthesized layer 15.
That is, when only some of the 3 layers (the status bar 11, the navigation bar 12, and the application interface 13) change, only the changed layer and the HWC's last synthesis result need to be sent to the HWC as the layers to be synthesized; it is unnecessary to synthesize the status bar 11, the navigation bar 12, and the changed application interface 16 all over again. This reduces the number of layers sent to the HWC and improves synthesis efficiency and flexibility; it also ensures, to a certain extent, that the number of layers sent to the HWC does not exceed the number the HWC supports synthesizing, so the layers can be synthesized directly by the HWC, saving synthesis time and improving HWC utilization.
For example, fig. 7 is a schematic diagram illustrating a display process performed based on the Vsync signal according to an embodiment of the present application. As shown in fig. 7, the display system may perform the application rendering of the 1st frame image in the Nth Vsync period, its layer composition in the (N+1)th Vsync period, and its hardware display sending in the (N+2)th Vsync period. When the (N+1)th Vsync period starts, the application rendering of the 2nd frame image may begin; its layer composition is executed in the (N+2)th Vsync period and its hardware display sending in the (N+3)th Vsync period. However, the layer composition of the 2nd frame image in the (N+2)th Vsync period may be performed based on the application rendering result of the (N+1)th period and the layer composition result of the 1st frame image produced in the (N+1)th period; that is, the layer composition of the (N+2)th period may use the composition result of the (N+1)th period. Similarly, when the (N+2)th Vsync period starts, the application rendering of the 3rd frame image may begin; its layer composition is executed in the (N+3)th Vsync period and its hardware display sending in the (N+4)th Vsync period. The layer composition of the 3rd frame image in the (N+3)th period may be performed based on the application rendering result of the (N+2)th period and the layer composition result of the 2nd frame image produced in the (N+2)th period; that is, the layer composition of the (N+3)th period may use the composition result of the (N+2)th period. Therefore, in the embodiment of the present application, the display flows of the frame images are related to each other, and the layer composition of each frame image may use the layer composition result of the previous frame image.
The layer synthesis method provided by the embodiment of the application can be applied to an electronic device, which may be a terminal, a server, or other electronic equipment. The terminal may be a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), or another terminal, and the wearable device may be a smart watch or a smart bracelet; this is not limited in the embodiments of the present application.
Fig. 8 is a schematic block diagram of a system architecture of an electronic device 800 according to an embodiment of the present application. As shown in FIG. 8, the system of electronic device 800 includes an application layer 810, a framework layer 820, a kernel layer 830, and a hardware layer 840.
As shown in fig. 8, the application layer 810 may include a series of application programs 811 and a drawing and rendering thread 812. For example, the application layer 810 may include applications such as a desktop (launcher), a Media Player, and a Browser. An application 811 in the application layer 810 may call the drawing and rendering thread 812 to render the layer corresponding to the application 811.
The framework layer 820 provides an Application Programming Interface (API) and a programming framework for the applications of the application layer 810. The framework layer, also referred to as the application framework layer or the system services framework layer, includes a number of predefined functions.
As shown in fig. 8, the framework layer 820 may include a layer management module 821, which is used for functions such as the creation, control, and management of layers. For example, the layer management module 821 may be SurfaceFlinger. SurfaceFlinger is a system service and may create a corresponding layer for an application in the application layer 810. While an application program in the application layer 810 runs, SurfaceFlinger may further obtain the layers rendered by the application program and synthesize the obtained layers, for example through the GPU and/or the HWC.
The kernel layer 830 is a layer between hardware and software. The kernel layer 830 includes at least driver modules such as a CPU driver 831, a GPU driver 832, an HWC driver 833, and a display driver 834. Other architectures of the kernel layer 830 will be described in detail in another example system below, and will not be described here. The CPU driver 831 is configured to drive the CPU, the GPU driver 832 is configured to drive the GPU, the HWC driver 833 is configured to drive the HWC, and the display driver 834 is configured to drive the display screen.
In other embodiments, the system of the electronic device 800 may further include a system library layer, which is located between the framework layer and the kernel layer, without limitation. The system library may include a plurality of functional modules. For example, the system libraries may include surface managers (surface managers), media libraries (media libraries), three-dimensional graphics engines (e.g., OpenGL ES), and 2D graphics engines.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications. The media library supports playback and recording in a variety of commonly used audio and video formats, as well as static graphic files, and the like. The three-dimensional graphic engine is used for realizing three-dimensional graphic drawing, image rendering, synthesis, graphic layer processing and the like. The 2D graphics engine is a drawing engine for 2D drawing.
The hardware layer 840 includes physical devices such as a CPU 841, a GPU 842, an HWC 843, and a display screen 844. The display screen 844 may be a liquid crystal display (LCD), a light-emitting diode (LED) display screen, a cathode ray tube (CRT) display screen, a holographic display screen, a projector, or the like.
Based on the system framework of the electronic device 800 shown in fig. 8, the following describes, with reference to an interaction flow diagram, the interaction between the components of the system of the electronic device 800 when the electronic device 800 implements the layer synthesis method of the present application, together with the specific functions of those components in the system architecture shown in fig. 8.
Fig. 9 is a flowchart of a layer synthesis method according to an embodiment of the present application. The method is applied to the electronic device 800 shown in fig. 8 and, as shown in fig. 9, includes the following steps:
step 901: the SurfaceFlinger obtains a first layer in the layers to be synthesized, where the first layer is a newly created layer or a layer whose layer information has changed, and the layer information includes layer attributes and/or display data.
The layers to be synthesized are the layers to be displayed, that is, the layers that will appear in the next frame shown on the display screen. The layers to be synthesized may include newly created layers and layers whose layer information has changed, and may also include layers whose layer information has not changed.
When the Vsync signal arrives, the SurfaceFlinger may take the layers to be displayed in the next frame as the layers to be synthesized, and then pick out from them the newly created layers and the layers whose layer information has changed, obtaining one or more first layers. That is, the layer synthesis flow of the SurfaceFlinger may be triggered by the Vsync signal.
The layer attributes include a position area, which indicates the position and size of the layer. The position area may be represented by the pixel coordinates of the layer's endpoints on the display screen, for example by the coordinates of its four edges: left, top, right, and bottom. In addition, the layer attributes may also include attributes such as the hierarchy.
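As a rough illustration only, these attributes might be modeled as follows; this is a hypothetical C++ sketch, and the struct and field names are invented rather than taken from this application.

```cpp
#include <cstdint>

// Hypothetical model of the layer attributes described above. A position
// area is the pixel rectangle a layer occupies on screen, given by its
// four edges.
struct PositionArea {
    int32_t left;
    int32_t top;
    int32_t right;
    int32_t bottom;
};

struct LayerAttributes {
    uint64_t layerId;    // identifier the SurfaceFlinger returns to the app
    PositionArea area;   // position and size on the display screen
    int32_t hierarchy;   // z-order: larger values sit further to the front
};
```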
A change in layer information means that the layer information of a layer to be synthesized has changed relative to the layer information the same layer had in the synthesized layer of the previous frame, that is, its layer information in the current Vsync period differs from its layer information in the previous Vsync period. Such a layer is referred to as a layer whose layer information has changed.
Here, a change in layer information means that the layer attributes or the display data have changed. For example, if the position area of a layer to be synthesized has changed relative to its position area in the previous frame image, the layer is determined to be a first layer. Likewise, if the display data of a layer to be synthesized has changed relative to its display data in the previous frame image, the layer is determined to be a first layer.
The SurfaceFlinger may record in advance, for each created layer, the layer information that the layer had in the synthesized layer of the previous frame. After obtaining the layers to be synthesized, it may compare the layer information of each layer to be synthesized with the recorded layer information to determine whether that layer's information has changed.
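A minimal sketch of that comparison, building on the hypothetical structures above; the bufferGeneration counter stands in for whatever mechanism actually marks new display data:

```cpp
#include <unordered_map>
#include <vector>

// A layer's recorded state: its attributes plus a counter that is bumped
// whenever the application queues new display data for the layer.
struct LayerState {
    LayerAttributes attrs;
    uint64_t bufferGeneration;
};

// Pick out the "first layers": layers that are new, or whose attributes
// or display data changed since the previous frame's synthesized layer.
std::vector<uint64_t> collectFirstLayers(
        const std::vector<LayerState>& toSynthesize,
        const std::unordered_map<uint64_t, LayerState>& previousFrame) {
    std::vector<uint64_t> firstLayers;
    for (const auto& layer : toSynthesize) {
        auto it = previousFrame.find(layer.attrs.layerId);
        bool isNew = (it == previousFrame.end());
        bool changed = !isNew &&
            (layer.bufferGeneration != it->second.bufferGeneration ||
             layer.attrs.area.left   != it->second.attrs.area.left   ||
             layer.attrs.area.top    != it->second.attrs.area.top    ||
             layer.attrs.area.right  != it->second.attrs.area.right  ||
             layer.attrs.area.bottom != it->second.attrs.area.bottom);
        if (isNew || changed) firstLayers.push_back(layer.attrs.layerId);
    }
    return firstLayers;
}
```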
As shown in fig. 6, it is assumed that the current display interface of the handset is composed of 3 layers: a status bar 11 at the top, a navigation bar 12 at the bottom, and an application interface 13 of a first application in the middle. While this interface is displayed, the first application receives a view update request and calls the drawing and rendering process to re-render its application interface 13, so the display data of the application interface 13 changes while its position area does not. For ease of explanation, the re-rendered layer of the application interface 13 is referred to as application interface 14. The status bar application and the navigation bar application receive no view update request and do not need to re-render the status bar 11 and the navigation bar 12, that is, the layer information of the status bar 11 and the navigation bar 12 does not change.
That is, the interface to be displayed next includes the 3 layers status bar 11, navigation bar 12, and application interface 14, and the SurfaceFlinger may acquire these 3 layers as the layers to be synthesized. The layers to be synthesized thus include the status bar 11 and the navigation bar 12, whose layer information is unchanged, and the application interface 14, whose display data has changed.
Step 902: the SurfaceFlinger judges whether the acquired first layer meets a preset synthesis condition.
The preset synthesis condition may include: the hierarchy of every one of the obtained first layers is greater than or equal to the maximum hierarchy of the remaining layers, where the remaining layers are the layers to be synthesized other than the obtained first layers.
The hierarchy of a layer describes the front-to-back order of the layers along the direction perpendicular to the display screen plane: the larger the hierarchy, the closer the layer is to the front, i.e., the higher it sits among the layers. If the hierarchy of every obtained first layer is greater than or equal to the maximum hierarchy of the remaining layers, the obtained first layers lie above the remaining layers.
When the obtained first layers are above the remaining layers, they are also above the other layers in the synthesis result of the layers to be synthesized; and if the obtained first layers are synthesized with the synthesized layer of the previous frame, they are likewise above that synthesized layer. This guarantees that synthesizing the obtained first layers with the synthesized layer gives the same result as synthesizing all the layers to be synthesized. In this case, layer synthesis may be performed by synthesizing the obtained first layers with the synthesized layer, that is, the obtained first layers are determined to meet the preset synthesis condition.
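A sketch of this first condition under the same hypothetical types; an empty set of remaining layers trivially satisfies it:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// First preset synthesis condition: every acquired first layer must sit at
// or above the highest remaining (unchanged) layer, so that compositing it
// over the previous frame's result equals compositing everything afresh.
bool firstLayersAreOnTop(const std::vector<LayerState>& firstLayers,
                         const std::vector<LayerState>& remaining) {
    int32_t maxRemaining = INT32_MIN;
    for (const auto& l : remaining)
        maxRemaining = std::max(maxRemaining, l.attrs.hierarchy);
    return std::all_of(firstLayers.begin(), firstLayers.end(),
                       [&](const LayerState& l) {
                           return l.attrs.hierarchy >= maxRemaining;
                       });
}
```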
In addition, the preset synthesis condition may further include: if the obtained first layers include a first layer whose layer information has changed, the position area of that first layer must completely cover its position area before the change. The position area before the change refers to the position area the first layer occupied in the synthesized layer of the previous frame.
As an example, after the first layers in the layers to be synthesized are obtained, it may further be determined whether they include a first layer whose layer information has changed. If so, it is judged whether the position area of that first layer completely covers its position area before the change. If it does, the obtained first layers are determined to meet the preset synthesis condition, and they can be synthesized with the synthesized layer of the previous frame.
When the acquired first layers include a first layer whose layer information has changed, the synthesized layer of the previous frame contains that first layer as it was before the change. If the changed first layer's position area completely covers its position area before the change, then when the acquired first layers are synthesized with the synthesized layer, the changed first layer also completely covers its pre-change version in the synthesized layer. This prevents any part of the pre-change layer from showing through in the new synthesis result, so the result of synthesizing the acquired first layers with the synthesized layer matches the result of synthesizing all the layers to be synthesized, improving the layer synthesis effect.
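The second condition reduces to a rectangle-containment test; a sketch under the same assumptions:

```cpp
// Second preset synthesis condition: the changed layer's new position area
// must fully cover the area it occupied in the previous synthesized frame,
// otherwise stale pixels of the old area would show through the result.
bool fullyCovers(const PositionArea& now, const PositionArea& before) {
    return now.left  <= before.left  && now.top    <= before.top &&
           now.right >= before.right && now.bottom >= before.bottom;
}
```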
Step 903: if the SurfaceFlinger determines that the acquired first layer meets the preset synthesis condition, it acquires the first synthesized layer already synthesized by the HWC.
If it is determined that the acquired first layer satisfies the preset synthesis condition, the first synthesis layer synthesized by the HWC may be acquired, so that the acquired first layer and the first synthesis layer synthesized by the HWC are sent to the HWC together and synthesized by the HWC.
The first synthesized layer is the synthesized layer of the frame preceding the layers to be synthesized, that is, the layer synthesized immediately before them. For example, the HWC may synthesize a plurality of second layers to obtain the first synthesized layer. Each layer in the layers to be synthesized may then be any one of the plurality of second layers, or a newly created layer outside them. Correspondingly, a first layer in the layers to be synthesized is either a second layer whose layer information has changed, or a newly created layer outside the plurality of second layers.
As one example, to enable the SurfaceFlinger to retrieve a synthesized layer that the HWC has already synthesized, the SurfaceFlinger may call a callback function registration interface in advance and register a callback function for the HWC through it; the callback function is used to retrieve the synthesized layer that the HWC has synthesized.
When the SurfaceFlinger needs to acquire the first synthesized layer synthesized by the HWC, it may call the callback function registered for the HWC and acquire the first synthesized layer through that callback function.
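A minimal sketch of this callback mechanism; the interface below is invented for illustration and is not the real HWC HAL API:

```cpp
#include <functional>
#include <memory>

struct FrameHandle {};  // opaque stand-in for a synthesized frame buffer

using GetSynthesizedFrameFn = std::function<std::shared_ptr<FrameHandle>()>;

// SurfaceFlinger registers the callback with the HWC in advance; invoking
// it later hands back the HWC's most recent composition result.
class HwcProxy {
public:
    void registerCallback(GetSynthesizedFrameFn fn) {
        getFrame_ = std::move(fn);
    }
    std::shared_ptr<FrameHandle> lastSynthesizedFrame() const {
        return getFrame_ ? getFrame_() : nullptr;
    }
private:
    GetSynthesizedFrameFn getFrame_;
};
```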
It should be noted that, in this embodiment, the SurfaceFlinger performs the step of acquiring the first synthesized layer synthesized by the HWC when it determines that the obtained first layer meets the preset synthesis condition. In other embodiments, the SurfaceFlinger may acquire the first synthesized layer at other times, for example when the layers to be synthesized are acquired, or when the Vsync signal arrives and triggers the SurfaceFlinger's layer synthesis flow. This is not limited in the embodiments of the present application.
After it is determined that the acquired first layer meets the preset synthesis condition and the first synthesized layer has been acquired, the SurfaceFlinger may send the acquired first layer and the first synthesized layer together to the HWC, and the HWC synthesizes them.
Generally, the total number of the acquired first layers plus the first synthesized layer is smaller than the total number of layers to be synthesized, and usually does not exceed the number of layers that the HWC can synthesize, so the acquired first layers and the first synthesized layer can be sent to the HWC together for synthesis.
However, it cannot be ruled out that this total may exceed the number of layers the HWC can synthesize. Therefore, to ensure that the number of layers sent to the HWC stays within that limit, a synthesis policy acquisition request may first be sent to the HWC, requesting the synthesis policy for the acquired first layers and the first synthesized layer, before they are sent to the HWC.
Step 904: the SurfaceFlinger sends a first synthesis policy acquisition request to the HWC, where the request carries related information of the acquired first layer and the first synthesized layer.
The first synthesis policy acquisition request is used to request the synthesis policy for the acquired first layer and the first synthesized layer. The related information may include one or more of the total number of these layers, the layer ID of each layer, and the layer attributes.
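Its payload might look like this; a hypothetical shape, for illustration only:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical payload of a synthesis-policy acquisition request: the
// total layer count plus the ID and attributes of each layer, which is
// what the HWC needs to choose GPU or HWC composition per layer.
struct PolicyRequest {
    std::size_t totalLayers;
    std::vector<LayerAttributes> layers;  // layer ID, position area, hierarchy
};
```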
As one example, if the HWC has been started, the SurfaceFlinger may send the first synthesis policy acquisition request to it directly. If the HWC has not been started, the SurfaceFlinger may first call the kernel layer to start the HWC driver, start the HWC through the HWC driver, and then send the request.
Step 905: the HWC receives the first synthesis policy acquisition request and determines, according to the related information of the acquired first layer and the first synthesized layer, a first synthesis policy for them. The first synthesis policy includes the synthesis mode corresponding to each of these layers, the synthesis mode being GPU synthesis or HWC synthesis.
As an example, the HWC may determine the first synthesis policy according to the related information of the acquired first layer and the first synthesized layer together with the hardware capability of the HWC. The hardware capability of the HWC includes at least the number of layers the HWC can synthesize, and may include other capabilities as well.
For example, if, according to the related information and the hardware capability of the HWC, the total number of the acquired first layers plus the first synthesized layer is less than or equal to the number of layers the HWC can synthesize, the synthesis mode of every one of these layers may be determined as HWC synthesis.
For another example, if that total is greater than the number of layers the HWC can synthesize, the synthesis mode of one part of the acquired first layers and first synthesized layer may be determined as GPU synthesis, and the synthesis mode of the other part as HWC synthesis.
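A sketch of this capacity check, assuming maxHwcLayers >= 1 and ignoring the format, scaling, and rotation constraints a real HWC would also weigh:

```cpp
#include <cstddef>
#include <vector>

enum class Composition { kHwc, kGpu };

// Decide a per-layer policy from the submitted layer count and the number
// of layers the HWC can composite in hardware (assumed >= 1).
std::vector<Composition> decidePolicy(std::size_t layerCount,
                                      std::size_t maxHwcLayers) {
    std::vector<Composition> policy(layerCount, Composition::kHwc);
    if (layerCount > maxHwcLayers) {
        // The GPU's output is itself one layer the HWC must composite, so
        // at most maxHwcLayers - 1 original layers can stay on the HWC.
        std::size_t gpuCount = layerCount - (maxHwcLayers - 1);
        for (std::size_t i = 0; i < gpuCount; ++i)
            policy[i] = Composition::kGpu;
    }
    return policy;
}
```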
It should be noted that this embodiment is described only with the first synthesis policy acquisition request carrying related information of both the acquired first layer and the first synthesized layer. In other embodiments, the request may carry only the related information of the first layer; after receiving it, the HWC determines the related information of the first synthesized layer it has already synthesized, and then determines the first synthesis policy from the related information of both.
Step 906: the HWC sends the first synthesis policy to the SurfaceFlinger.
Step 907: if the SurfaceFlinger determines, according to the first synthesis policy, that the synthesis mode of every layer among the acquired first layer and the first synthesized layer is HWC synthesis, it sends the acquired first layer and the first synthesized layer to the HWC.
Step 908: the HWC synthesizes the acquired first layer and the first synthesized layer to obtain a second synthesized layer.
After receiving the acquired first layer and first synthesis layer, the HWC may synthesize the acquired first layer and first synthesis layer to obtain a second synthesis layer.
When the obtained first layer and the first synthesized layer are synthesized, they may be synthesized with the first synthesized layer placed at the bottom.
That is, the obtained first layers may be superimposed on the first synthesized layer. For example, the obtained first layers may first be stacked in order of their hierarchies, and the stacked result then superimposed on the first synthesized layer.
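A sketch of this incremental composition, reusing the hypothetical types above; blit() stands in for whatever per-layer copy or blend the hardware actually performs:

```cpp
#include <algorithm>
#include <vector>

// Place the previous synthesis result at the bottom, then blend the
// acquired first layers over it in ascending hierarchy order.
void composeIncrement(FrameHandle& base,  // previous synthesis result
                      std::vector<LayerState> firstLayers,
                      void (*blit)(FrameHandle&, const LayerState&)) {
    std::sort(firstLayers.begin(), firstLayers.end(),
              [](const LayerState& a, const LayerState& b) {
                  return a.attrs.hierarchy < b.attrs.hierarchy;
              });
    for (const auto& layer : firstLayers)
        blit(base, layer);  // lowest hierarchy first, topmost last
}
```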
It should be noted that this embodiment is described only with the SurfaceFlinger first acquiring the first synthesized layer already synthesized by the HWC and then sending the obtained first layer and the first synthesized layer together to the HWC for synthesis. In other embodiments, the SurfaceFlinger may instead send only the obtained first layer to the HWC, and the HWC itself fetches the synthesized first synthesized layer and synthesizes the two.
For example, the SurfaceFlinger may send a layer synthesis request to the HWC that carries the obtained first layer and requests the HWC to fetch the synthesized first synthesized layer and synthesize it with the obtained first layer. After receiving the request, the HWC may acquire the first synthesized layer and then synthesize the obtained first layer with it.
Step 909: the HWC displays the second synthesized layer.
After the HWC synthesizes the acquired first layer and first synthesized layer to obtain a second synthesized layer, the HWC may perform hardware display processing on the second synthesized layer and push the second synthesized layer to a display screen.
Step 910: the display screen displays the second synthesized layer.
The display screen may display a second composite layer presented by the HWC.
In addition, after the layer management module receives the first synthesis policy sent by the HWC, if it determines from that policy that the synthesis modes of the acquired first layer and the first synthesized layer include both GPU synthesis and HWC synthesis, the layers whose synthesis mode is GPU synthesis may be synthesized by the GPU first, and the GPU's result may then be sent to the HWC together with the remaining layers for the HWC to synthesize.
For example, if the SurfaceFlinger determines from the first synthesis policy that the synthesis modes of the acquired first layer and the first synthesized layer include both GPU synthesis and HWC synthesis, the following steps 911 to 917 may be performed.
Step 911: if the SurfaceFlinger determines from the first synthesis policy that the synthesis modes of the acquired first layer and the first synthesized layer include both GPU synthesis and HWC synthesis, it sends the layers whose synthesis mode is GPU synthesis to the GPU.
As an example, if the GPU has been started, the SurfaceFlinger may directly send to the GPU those of the acquired first layer and first synthesized layer whose synthesis mode is GPU synthesis. If the GPU has not been started, the SurfaceFlinger may first call the kernel layer to start the GPU driver, start the GPU through the GPU driver, and then send those layers to the GPU.
Step 912: the GPU receives the layers whose synthesis mode is GPU synthesis and synthesizes them to obtain a third synthesized layer.
Step 913: the GPU sends the third synthesized layer to the SurfaceFlinger.
Step 914: the SurfaceFlinger sends the layers among the acquired first layer and the first synthesized layer whose synthesis mode is HWC synthesis, together with the third synthesized layer, to the HWC.
Step 915: the HWC synthesizes those layers and the third synthesized layer to obtain a fourth synthesized layer.
Step 916: the HWC displays the fourth synthesized layer.
Step 917: the display screen displays the fourth synthesized layer.
In this embodiment of the application, after obtaining the layers to be synthesized, the SurfaceFlinger may pick out from them the newly created layers and the layers whose layer attributes or display data changed, acquire the first synthesized layer already synthesized by the HWC, and send the picked layers together with the first synthesized layer to the HWC for synthesis. In other words, only the layers that changed relative to the previous frame of the display screen and the result of the HWC's last synthesis need to be sent to the HWC, instead of all the layers to be synthesized. This reduces the number of layers involved in synthesis and improves synthesis efficiency and flexibility. Moreover, because fewer layers need to be synthesized, it can be guaranteed to some extent that the number of layers sent to the HWC does not exceed the number the HWC can synthesize, so the layers can be synthesized directly by the HWC without first going through the GPU. This saves synthesis time and improves the utilization of the HWC, and it avoids the situation where the HWC must wait for the GPU to finish before synthesizing the GPU's result with the remaining layers, which may prevent the HWC from completing layer synthesis and display within one frame period and cause stutter or frame loss.
In addition, after step 902 in the embodiment of fig. 9, if the SurfaceFlinger determines that the obtained first layer does not meet the preset synthesis condition, the layers to be synthesized may be synthesized in the conventional way provided in the related art, for example by the following steps 1003 to 1012.
Referring to fig. 10, fig. 10 is a flowchart of another layer composition method according to an embodiment of the present application, where the method is applied to the electronic device 800 shown in fig. 8, and as shown in fig. 10, the method includes the following steps:
step 1001: the SurfaceFlinger obtains a first layer in the layers to be synthesized, where the first layer is a newly created layer or a layer whose layer information has changed, and the layer information includes layer attributes and/or display data.
Step 1002: the SurfaceFlinger judges whether the acquired first layer meets the preset synthesis condition.
Steps 1001 to 1002 are the same as steps 901 to 902 in the embodiment of fig. 9; for the specific implementation, refer to the description of steps 901 to 902, which is not repeated here.
Step 1003: if the SurfaceFlinger determines that the acquired first layer does not meet the preset synthesis condition, it sends a second synthesis policy acquisition request to the HWC, where the request carries related information of the layers to be synthesized.
The related information of the layers to be synthesized may include one or more of the total number of layers of the layers to be synthesized, the layer ID of each layer, and the layer attribute.
Step 1004: the HWC receives a second synthesis strategy obtaining request, and determines a second synthesis strategy of the layer to be synthesized according to the relevant information of each layer in the layer to be synthesized, wherein the second synthesis strategy comprises a synthesis mode corresponding to each layer in the layer to be synthesized.
The synthesis mode corresponding to each image layer may be GPU synthesis or HWC synthesis.
As one example, the HWC may determine the second synthesis policy based on the related information of the layers to be synthesized and the hardware capability of the HWC. The hardware capability of the HWC includes at least the number of layers the HWC can synthesize, and may include other capabilities as well.
For example, if it is determined that the total number of layers to be synthesized is less than or equal to the number of layers that the HWC supports synthesis according to the related information of the layers to be synthesized and the hardware performance of the HWC, the synthesis modes corresponding to the layers in the layers to be synthesized may be determined to be HWC synthesis.
For another example, if it is determined that the total number of layers of the layers to be synthesized is greater than the number of layers that the HWC supports synthesis according to the related information of the layers to be synthesized and the hardware performance of the HWC, the synthesis mode corresponding to a part of the layers to be synthesized may be determined as GPU synthesis, and the synthesis mode corresponding to another part of the layers may be determined as HWC synthesis.
Step 1005: the HWC sends the second synthesis policy to the SurfaceFlinger.
After receiving the second synthesis policy, the SurfaceFlinger may synthesize the layers to be synthesized in the synthesis modes indicated by that policy.
For example, if the second synthesis policy indicates that the synthesis mode of one part of the layers to be synthesized is GPU synthesis and that of the other part is HWC synthesis, the first part may be synthesized by the GPU, and the GPU's result may then be sent to the HWC together with the other part for synthesis; see steps 1006 to 1012 below.
For another example, if the second synthesis policy indicates that the synthesis mode of every layer to be synthesized is HWC synthesis, the layers to be synthesized may be sent directly to the HWC for synthesis; see steps 1013 to 1016 below.
Step 1006: if the SurfaceFlinger determines from the second synthesis policy that the synthesis modes of the layers to be synthesized include both GPU synthesis and HWC synthesis, it sends the layers whose synthesis mode is GPU synthesis to the GPU.
Step 1007: the GPU synthesizes the layers whose synthesis mode is GPU synthesis to obtain a fifth synthesized layer.
Step 1008: the GPU sends the fifth synthesized layer to the SurfaceFlinger.
Step 1009: the SurfaceFlinger sends the fifth synthesized layer, together with the layers to be synthesized whose synthesis mode is HWC synthesis, to the HWC.
Step 1010: the HWC synthesizes the fifth synthesized layer and the layers whose synthesis mode is HWC synthesis to obtain a sixth synthesized layer.
Step 1011: the HWC displays the sixth synthesized layer.
Step 1012: the display screen displays the sixth synthesized layer.
In addition, after step 1005, if the layer management module determines from the second synthesis policy that the synthesis mode of every layer to be synthesized is HWC synthesis, the following steps 1013 to 1016 may be performed.
Step 1013: if the SurfaceFlinger determines from the second synthesis policy that the synthesis mode of every layer to be synthesized is HWC synthesis, it sends the layers to be synthesized to the HWC.
Step 1014: the HWC synthesizes the layers to be synthesized to obtain a seventh synthesized layer.
Step 1015: the HWC displays the seventh synthesized layer.
Step 1016: the display screen displays the seventh synthesized layer.
For ease of understanding, based on the system framework of the electronic device 800 shown in fig. 8, the following illustrates the interaction between the system components of the electronic device 800 as it implements the layer synthesis method provided by the embodiments of the present application.
Fig. 11 is a flowchart of another layer composition method according to an embodiment of the present application, where the method is applied to the electronic device 800 shown in fig. 8, and as shown in fig. 11, the method includes the following steps:
step 1101: the electronic device is powered on, and the status bar application sends a first layer creation request to the SurfaceFlinger.
After the electronic device is powered on, the status bar application starts. Once started, it sends the first layer creation request to the SurfaceFlinger to request that a corresponding layer be created for it.
Step 1102: after receiving the first layer creation request, the SurfaceFlinger creates a layer 1 for the status bar application.
For example, the pixel coordinates of the position area of layer 1 may be: left 0, top 0, right 1200, bottom 200. In addition, the hierarchy of layer 1 may be 0, where hierarchy 0 is the lowest layer in the display interface.
In addition, after the SurfaceFlinger creates layer 1 for the status bar application, it may also allocate a corresponding buffer queue 1 for layer 1; buffer queue 1 is used to store the display data of layer 1.
Step 1103: the SurfaceFlinger sends the layer ID of layer 1 to the status bar application.
After creating layer 1 for the status bar application, the SurfaceFlinger may also allocate a corresponding layer ID for layer 1 and return it to the status bar application, so that the status bar application identifies layer 1 by its layer ID.
In addition, after the corresponding buffer queue 1 is allocated for the layer 1, a corresponding buffer queue ID may also be allocated for the buffer queue 1, and the buffer queue ID of the buffer queue 1 is sent to the status bar application.
Step 1104: when the Nth Vsync signal arrives, the status bar application calls a rendering thread and renders layer 1 through it to obtain the display data of layer 1.
Here N is a positive integer greater than or equal to 1. After the SurfaceFlinger creates layer 1 for the status bar application, the status bar application may trigger its application rendering flow based on the Vsync signal, that is, its drawing and rendering thread is invoked when the Vsync signal arrives. For ease of explanation, the Vsync signal that triggers the status bar application's rendering flow is referred to as the Nth Vsync signal.
Wherein, the application rendering process comprises the following steps: and calling a rendering thread, and rendering the layer 1 through the rendering thread to obtain the display data of the layer 1. The rendering thread is a thread in the application layer for performing rendering processing.
Step 1105: and the status bar application stores the display data of the layer 1 in the cache queue 1 corresponding to the layer 1.
After layer 1 is rendered by the rendering thread and its display data obtained, the display data may be stored in buffer queue 1 corresponding to layer 1, so that when the next Vsync signal arrives, layer synthesis is performed on layer 1 using the display data buffered in buffer queue 1.
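The producer-consumer pattern this describes might look roughly as follows; a hypothetical sketch in which buffer contents are reduced to a byte vector, whereas the real queue is considerably more involved:

```cpp
#include <cstdint>
#include <deque>
#include <mutex>
#include <optional>
#include <vector>

// The render thread queues a finished buffer; the SurfaceFlinger acquires
// it at the next Vsync for composition. An empty queue at composition time
// is how unchanged layers show up (no new frame was produced).
class BufferQueue {
public:
    void queueBuffer(std::vector<uint8_t> displayData) {
        std::lock_guard<std::mutex> lock(mutex_);
        buffers_.push_back(std::move(displayData));
    }
    std::optional<std::vector<uint8_t>> acquireBuffer() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (buffers_.empty()) return std::nullopt;
        std::vector<uint8_t> data = std::move(buffers_.front());
        buffers_.pop_front();
        return data;
    }
private:
    std::mutex mutex_;
    std::deque<std::vector<uint8_t>> buffers_;
};
```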
Step 1106: and starting the electronic equipment, and sending a second layer creation request to the SurfaceFlinger by the navigation bar application.
After the electronic device is powered on, the navigation bar application also starts. Once started, it may send the second layer creation request to the SurfaceFlinger to request that a corresponding layer be created for it.
It should be noted that, after the electronic device is powered on, the status bar application and the navigation bar application may start at the same time and then each send a layer creation request to the SurfaceFlinger. Of course, they may also start one after the other; the embodiments of the present application do not limit their starting order or the order in which they send layer creation requests to the SurfaceFlinger.
Step 1107: after receiving the second layer creation request, the SurfaceFlinger creates a layer 2 for the navigation bar application.
For example, the pixel coordinates of the position area of layer 2 may be: left 0, top 2000, right 1200, bottom 2200. In addition, the hierarchy of layer 2 may be 0.
In addition, after the SurfaceFlinger creates layer 2 for the navigation bar application, it may also allocate a corresponding buffer queue 2 for layer 2; buffer queue 2 is used to store the display data of layer 2.
Step 1108: the SurfaceFlinger sends the layer ID of layer 2 to the navigation bar application.
After the layer 2 is created for the navigation bar application, a layer ID may also be allocated for the layer 2, and the layer ID of the layer 2 is returned to the navigation bar application, so that the navigation bar application identifies the layer 2 according to the layer ID of the layer 2.
In addition, after buffer queue 2 is allocated for layer 2, a buffer queue ID may also be allocated for it and sent to the navigation bar application.
Step 1109: when the Nth Vsync signal arrives, the navigation bar application calls a rendering thread and renders layer 2 through it to obtain the display data of layer 2.
After the SurfaceFlinger creates layer 2 for the navigation bar application, the navigation bar application may trigger its application rendering flow based on the Vsync signal, that is, its drawing and rendering thread is invoked when the Vsync signal arrives.
It should be noted that, in this embodiment, the Vsync signal that triggers the navigation bar application's rendering flow is also taken to be the Nth Vsync signal, that is, the Nth Vsync signal may trigger the rendering flows of the status bar application and the navigation bar application simultaneously. Of course, in other embodiments the two rendering flows may be triggered by different Vsync signals, for example the (N-1)th Vsync signal triggering the navigation bar application and the Nth Vsync signal triggering the status bar application; this is not limited in the embodiments of the present application.
Step 1110: the navigation bar application stores the display data of the layer 2 in the cache queue 2 corresponding to the layer 2.
When the rendering processing is performed on the layer 2 through the rendering thread to obtain the display data of the layer 2, the display data of the layer 2 may be first stored in the buffer queue 2 corresponding to the layer 2, so that when a next Vsync signal arrives, the layer synthesis is performed on the layer 2 according to the display data buffered in the buffer queue 2.
Step 1111: the user opens the WeChat application.
After the electronic equipment is started, a user can open the WeChat application installed in the electronic equipment according to the requirement.
Step 1112: in response to the user's operation, the WeChat application starts and sends a third layer creation request to the SurfaceFlinger.
After the WeChat application starts, it sends the third layer creation request to the SurfaceFlinger to request that a corresponding layer be created for it.
It should be understood that this embodiment uses the electronic device starting the WeChat application in response to the user's operation, and subsequently synthesizing the layers of the WeChat application, only as an example; in other embodiments the electronic device may start other applications, such as a video application or a news application, which is not limited in the embodiments of the present application.
Step 1113: after receiving the third layer creation request, the SurfaceFlinger creates a layer 3 for the WeChat application.
For example, the pixel coordinates of each endpoint of the position area of layer 3 may be: left 0, top 201, right 1200, bottom 1999. In addition, the layer level of layer 3 may be 0.
In addition, after the SurfaceFlinger creates layer 3 for the WeChat application, it may also allocate a corresponding buffer queue 3 for layer 3; buffer queue 3 is used to store the display data of layer 3.
Step 1114: the SurfaceFlinger sends the layer ID of layer 3 to the WeChat application.
Layer 3 created for the WeChat application has a corresponding layer ID. After creating layer 3, the SurfaceFlinger may return the layer ID of layer 3 to the WeChat application, so that the WeChat application identifies layer 3 by its layer ID.
In addition, buffer queue 3 allocated for layer 3 has a corresponding buffer queue ID, and the SurfaceFlinger may also send the buffer queue ID of buffer queue 3 to the WeChat application.
Step 1115: when the (N+M)th Vsync signal arrives, the WeChat application calls a rendering thread and renders layer 3 through it to obtain the display data of layer 3.
After the SurfaceFlinger creates layer 3 for the WeChat application, the WeChat application may trigger its application rendering flow based on the Vsync signal, that is, its drawing and rendering thread is invoked when the Vsync signal arrives. Here N and M are both positive integers.
It should be noted that this embodiment only takes the Vsync signal triggering the rendering flows of the status bar and navigation bar applications to be the Nth Vsync signal, and the one triggering the WeChat application's rendering flow to be the (N+M)th Vsync signal. In other embodiments, other Vsync signals may be used, for example the (N-1)th Vsync signal triggering the navigation bar application, the Nth triggering the status bar application, and the (N+1)th triggering the WeChat application; this is not limited in the embodiments of the present application.
Step 1116: the WeChat application stores the display data of layer 3 in the buffer queue 3 corresponding to layer 3.
When the rendering processing is performed on the layer 3 through the rendering thread to obtain the display data of the layer 3, the display data of the layer 3 may be stored in the buffer queue 3 corresponding to the layer 3, so that when a next Vsync signal arrives, the layer synthesis is performed on the layer 3 according to the display data buffered in the buffer queue 3.
Step 1117: when the (N+M+1)th Vsync signal arrives, the SurfaceFlinger acquires, from the buffer queue corresponding to each of layer 1, layer 2, and layer 3, the display data of that layer.
After the (N+M)th Vsync signal triggers the application rendering flow of layer 3, the (N+M+1)th Vsync signal may trigger the layer synthesis flow of layer 1, layer 2, and layer 3. When executing this flow, the SurfaceFlinger may first obtain the display data of layer 1 from buffer queue 1, the display data of layer 2 from buffer queue 2, and the display data of layer 3 from buffer queue 3, so as to synthesize the three layers from the obtained display data.
For example, step 1117 may include the following steps:
step 1117-1, when the (N+M+1)th Vsync signal arrives, the SurfaceFlinger acquires the display data of layer 1 from the buffer queue corresponding to layer 1.
Step 1117-2, when the (N+M+1)th Vsync signal arrives, the SurfaceFlinger acquires the display data of layer 2 from the buffer queue corresponding to layer 2.
Step 1117-3, when the (N+M+1)th Vsync signal arrives, the SurfaceFlinger acquires the display data of layer 3 from the buffer queue corresponding to layer 3.
Step 1118: the SurfaceFlinger sends a third synthesis policy acquisition request to the HWC, where the request carries related information of layer 1, layer 2, and layer 3.
The third synthesis policy acquisition request is used to request the layer synthesis policy for layer 1, layer 2, and layer 3. The related information of these layers may include one or more of their total layer count, the layer ID of each layer, and the layer attributes, and may also include other related information, which is not limited in the embodiments of the present application.
After the display data of layers 1, 2, and 3 is acquired from their buffer queues, a third synthesis policy acquisition request may first be sent to the HWC to request the layer synthesis policy for the three layers, so as to avoid sending the HWC more layers than it can synthesize.
Step 1119: the HWC receives the third synthesis policy acquisition request and determines, according to the related information of layer 1, layer 2, and layer 3, a third synthesis policy for them, in which the synthesis mode of layer 2 and layer 3 is GPU synthesis and the synthesis mode of layer 1 is HWC synthesis.
In this embodiment, assume the HWC supports synthesizing 2 layers. After receiving the third synthesis policy acquisition request, if the HWC determines from the related information that the total number of layers 1, 2, and 3 is 3, exceeding the number it can synthesize, it may set the synthesis mode of part of the layers to GPU synthesis and that of the others to HWC synthesis; for example, GPU synthesis for layer 2 and layer 3 and HWC synthesis for layer 1.
For example, the third synthesis strategy may be as shown in table 2 below:
TABLE 2

Layer     GPU synthesis    HWC synthesis
Layer 1   No               Yes
Layer 2   Yes              No
Layer 3   Yes              No
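Applying the decidePolicy() sketch from earlier to this scenario reproduces the split in Table 2; which specific layers are offloaded is the vendor's choice, and the sketch simply picks the first ones submitted:

```cpp
#include <cassert>
#include <cstddef>

int main() {
    // Three layers submitted, an HWC that supports compositing two; the
    // GPU result occupies one HWC slot, so two layers go to the GPU and
    // one stays on the HWC, matching the split shown in Table 2.
    auto policy = decidePolicy(/*layerCount=*/3, /*maxHwcLayers=*/2);
    std::size_t gpu = 0, hwc = 0;
    for (Composition p : policy) (p == Composition::kGpu ? gpu : hwc)++;
    assert(gpu == 2 && hwc == 1);
    return 0;
}
```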
Step 1120: the HWC sends the third synthesis policy to the SurfaceFlinger.
Step 1121: the SurfaceFlinger receives the third synthesis policy and, according to it, sends the display data of layer 2 and layer 3, whose synthesis mode is GPU synthesis, to the GPU.
According to the third synthesis policy, the SurfaceFlinger has layer 2 and layer 3, whose synthesis mode is GPU synthesis, synthesized by the GPU. Specifically, it may send the display data of layer 2 and layer 3 to the GPU, and the GPU synthesizes that display data.
Step 1122: and the GPU synthesizes the display data of the layer 2 and the layer 3 to obtain a first synthetic image.
The GPU may superimpose the display data of layer 2 and layer 3 according to their hierarchies to obtain the first composite image.
Step 1123: the GPU sends the first composite image to the SurfaceFlinger.
Step 1124: the SurfaceFlinger sends the first composite image, together with the display data of layer 1, whose synthesis mode is HWC synthesis, to the HWC.
After layer 2 and layer 3 have been synthesized by the GPU, their synthesis result may be synthesized by the HWC with layer 1, whose synthesis mode is HWC synthesis. Specifically, the synthesis result of layer 2 and layer 3 and the display data of layer 1 may be sent to the HWC, and the HWC performs the synthesis.
Step 1125: the HWC synthesizes the first synthetic image and the display data of the layer 1 to obtain a second synthetic image, and stores the second synthetic image in the memory.
After obtaining the synthesis result, the HWC may first store it in memory; for example, the second composite image may be stored in a frame buffer.
Step 1126: the (N+M+2)th Vsync signal arrives, and the HWC presents the second composite image.
That is, after the HWC synthesizes the first composite image with the display data of layer 1 to obtain the second composite image, the next Vsync signal, namely the (N+M+2)th, may trigger the hardware display flow of the second composite image, that is, trigger the HWC to present the second composite image so that the display screen displays it.
Step 1127: the display screen displays the second composite image.
For example, the second composite image may be as shown in fig. 1. As shown in fig. 1, the second composite image is synthesized from the status bar 11 (layer 1), the navigation bar 12 (layer 2), and the WeChat interface 13 (layer 3).
Step 1128: the user calls out a WeChat widget (WeChat mini program).
While the display screen displays the second composite image, the user can perform various operations on the WeChat application; for example, the user may call out a WeChat widget of the WeChat application. The WeChat widget, also called a WeChat applet, is a type of mini program: an application that can be used without being downloaded and installed.
Step 1129: in response to the operation of the user, the WeChat application sends a fourth layer creation request to the SurfaceFlinger.
And the fourth layer creation request is used for requesting the SurfaceFlinger to create a corresponding layer for the WeChat widget.
Step 1130: after receiving the fourth layer creation request, the SurfaceFlinger creates a layer 4 for the WeChat widget.
For example, the pixel coordinates of the position area of layer 4 may be: left 500, top 500, right 1000, bottom 1000. In addition, the hierarchy of layer 4 may be 1, that is, greater than the hierarchies of layer 1, layer 2, and layer 3, so during composition layer 4 is placed above them and covers them where they overlap.
In addition, after the SurfaceFlinger creates layer 4 for the WeChat widget, it may also allocate a corresponding buffer queue 4 for layer 4; buffer queue 4 is used to store the display data of layer 4.
Step 1131: the SurfaceFlinger sends the layer ID of layer 4 to the WeChat application.
After the SurfaceFlinger creates layer 4 for the WeChat widget, it may also allocate a corresponding layer ID for layer 4 and send it to the WeChat application, so that the WeChat application identifies layer 4 by its layer ID.
In addition, after the corresponding buffer queue 4 is allocated for the layer 4, a corresponding buffer queue ID may also be allocated for the buffer queue 4, and the buffer queue ID of the buffer queue 4 is sent to the wechat application.
Step 1132: when the (N+M+1)th Vsync signal arrives, the WeChat application calls a rendering thread and renders layer 4 through it to obtain the display data of layer 4.
The (N+M+1)th Vsync signal may simultaneously trigger the layer synthesis flow of layer 1, layer 2, and layer 3 and the application rendering flow of the next frame. If layer 1, layer 2, and layer 3 are unchanged when the (N+M+1)th Vsync signal arrives and only layer 4 has been newly created by the WeChat application, only the WeChat application's rendering of layer 4 needs to be triggered; the status bar application's rendering of layer 1, the navigation bar application's rendering of layer 2, and the WeChat application's rendering of layer 3 need not be.
That is, when the (N+M+1)th Vsync signal arrives, the SurfaceFlinger is triggered to synthesize layer 1, layer 2, and layer 3, while the WeChat application is triggered to call its rendering thread and render layer 4.
Step 1133: the WeChat application stores the display data of layer 4 in the buffer queue 4 corresponding to layer 4.
When the layer 4 is rendered through the rendering thread to obtain the display data of the layer 4, the display data of the layer 4 may be stored in the buffer queue 4 corresponding to the layer 4, so that when the next Vsync signal arrives, the layer 4 is synthesized according to the display data buffered in the buffer queue 4.
In addition, since the rendering process is not performed on layer 1, layer 2, and layer 3 in the N + M +1 th Vsync period, the buffer queues corresponding to layer 1, layer 2, and layer 3 are empty, that is, the corresponding display data is not stored.
Step 1134: when the (N+M+2)th Vsync signal arrives, the SurfaceFlinger acquires the display data of layer 4 from the buffer queue 4 corresponding to layer 4.
The (N+M+2)th Vsync signal may simultaneously trigger the hardware display flow of the second composite image and the layer synthesis flow of the next frame.
When executing the layer synthesis flow of the next frame, the SurfaceFlinger may determine that layer 1, layer 2, layer 3, and layer 4 are the layers to be displayed in the next frame and hence the layers to be synthesized, and then try to obtain the display data of each layer from its buffer queue. However, since the buffer queues of layer 1, layer 2, and layer 3 are empty, only the display data of layer 4 can be acquired, from buffer queue 4.
Step 1135: the SurfaceFlinger determines that the hierarchy of layer 4 is greater than that of layer 3 and acquires the second composite image already synthesized by the HWC.
If the SurfaceFlinger determines that the hierarchy of layer 4 is greater than that of layer 3, and the total number of layer 4 plus the second composite image does not exceed the number of layers the HWC can synthesize, it may determine that layer 4 meets the preset synthesis condition and then acquire the second composite image already synthesized by the HWC.
For example, the SurfaceFlinger may call the callback function of the HWC and obtain, through it, the second composite image that the HWC has synthesized.
It should be noted that this embodiment is described only with the step of obtaining the second composite image performed when layer 4 is determined to meet the preset synthesis condition; in other embodiments, the SurfaceFlinger may obtain the second composite image at other times, for example when the HWC finishes synthesizing it, which is not limited in the embodiments of the present application.
Step 1136: the SurfaceFlinger sends the display data of layer 4 and the second composite image to the HWC.
In addition, to avoid sending the HWC more layers than it can synthesize, before sending the display data of layer 4 and the second composite image, the SurfaceFlinger may first send the HWC a fourth synthesis policy acquisition request carrying related information of layer 4 and the second composite image, so that the HWC returns a fourth synthesis policy for them according to that information. The fourth synthesis policy includes the synthesis mode corresponding to layer 4 and to the second composite image.
SurfaceFlinger receives the fourth synthesis policy returned by the HWC; if the policy indicates that the composition mode for both layer 4 and the second composite image is HWC composition, SurfaceFlinger may send them to the HWC, which then performs the composition.
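A minimal sketch of this policy round trip, with invented types (PolicyRequest, CompositionMode, queryPolicy) standing in for the unspecified interface between SurfaceFlinger and the HWC, might look like this:

#include <vector>

enum class CompositionMode { Hwc, Gpu };

struct LayerRef { int id; };
struct PolicyRequest { std::vector<LayerRef> items; };  // layer 4 + composite
struct PolicyReply { std::vector<CompositionMode> modes; };

// Stand-in for the HWC's policy decision: this toy version accepts every
// item for HWC composition; a real device would weigh formats, transforms,
// and its layer limit before answering.
PolicyReply queryPolicy(const PolicyRequest& req) {
    return PolicyReply{std::vector<CompositionMode>(req.items.size(),
                                                    CompositionMode::Hwc)};
}

// SurfaceFlinger-side check before sending buffers: proceed with HWC
// composition only if every item came back marked for the HWC.
bool allHwc(const PolicyReply& reply) {
    for (CompositionMode m : reply.modes) {
        if (m != CompositionMode::Hwc) return false;
    }
    return true;
}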
Step 1137: the HWC synthesizes the display data of layer 4 with the second composite image to obtain a third composite image.
Step 1138: the (N+M+3)th Vsync signal arrives, and the HWC displays the third composite image.
Step 1139: the display screen displays the third composite image.
Referring to fig. 12, fig. 12 is a schematic diagram of the third composite image according to an embodiment of the present application. As shown in fig. 12, the third composite image includes the status bar 11 (layer 1), the navigation bar 12 (layer 2), the application interface 13 (layer 3) of the WeChat application, and the control interface 18 (layer 4) of the WeChat widget. The control interface 18 includes small controls such as themes, flashlight, and notifications, and the flashlight control is shown in the flashlight-off state.
In addition, if the user taps the flashlight control in the control interface 18 while the third composite image is displayed, the control interface 18 is refreshed. In this case, SurfaceFlinger need not create a new layer for the WeChat widget; instead, the WeChat application directly calls the rendering process to re-render the control interface 18, producing new display data for layer 4 in which the flashlight control is switched to the flashlight-on state. Then, when the (N+M+3)th Vsync signal arrives, SurfaceFlinger may acquire the third composite image already synthesized by the HWC along with the new display data of layer 4, send both to the HWC, and have the HWC compose them into a fourth composite image. When the (N+M+4)th Vsync signal arrives, the HWC displays the fourth composite image, so that the display screen shows the fourth composite image.
Referring to fig. 13, fig. 13 is a schematic diagram of the fourth composite image according to an embodiment of the present application. As shown in fig. 13, the fourth composite image includes the status bar 11 (layer 1), the navigation bar 12 (layer 2), the application interface 13 (layer 3) of the WeChat application, and the control interface 19 (layer 4) of the WeChat widget. The control interface 19 includes small controls such as themes, flashlight, and notifications, and the flashlight control is shown in the flashlight-on state.
In addition, suppose that while the third composite image is displayed, the user operates on the control interface 18 so that its position area changes and the changed control interface 18 cannot completely cover the control interface 18 before the change, for example by shrinking the control interface 18. In this case, SurfaceFlinger may create a new layer 5 for the WeChat widget, and the WeChat application invokes the rendering process to render layer 5 and obtain its display data. Then, when the (N+M+3)th Vsync signal arrives, SurfaceFlinger may synthesize layer 1, layer 2, layer 3, and layer 5 in the manner used for layer 1, layer 2, and layer 3 in steps 1117 to 1127 above, obtaining a fifth composite image. When the (N+M+4)th Vsync signal arrives, the HWC displays the fifth composite image, so that the display screen shows the fifth composite image.
Referring to fig. 14, fig. 14 is a schematic diagram of the fifth composite image according to an embodiment of the present application. As shown in fig. 14, the fifth composite image includes the status bar 11 (layer 1), the navigation bar 12 (layer 2), the application interface 13 (layer 3) of the WeChat application, and the control interface 20 (layer 5) of the WeChat widget. The position area of the control interface 20 is smaller than that of the control interface 18 in fig. 12 and cannot completely cover it.
That is, if the position area of a layer whose layer information has changed remains unchanged, or if it changes but the new position area completely covers the position area before the change, the changed layer and the previous frame image already synthesized by the HWC may be composed directly according to the method provided in the embodiments of the present application. However, if the position area changes and the changed layer cannot completely cover the position area before the change, the layers to be synthesized may instead be composed in the manner provided in the related art.
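This decision rule can be condensed into a short C++ sketch; Rect, covers, and choosePath are illustrative names, and modeling full coverage as rectangle containment is an assumption the text does not spell out:

// Axis-aligned position area of a layer; an assumed simplification.
struct Rect {
    int left, top, right, bottom;
};

// Does area a fully cover area b?
bool covers(const Rect& a, const Rect& b) {
    return a.left <= b.left && a.top <= b.top &&
           a.right >= b.right && a.bottom >= b.bottom;
}

enum class Path { IncrementalWithHwc, FullRecomposition };

// Reuse the incremental method only when the changed layer's position is
// unchanged or its new area completely covers the old one; otherwise fall
// back to the composition flow of the related art.
Path choosePath(bool positionChanged, const Rect& newArea, const Rect& oldArea) {
    if (!positionChanged || covers(newArea, oldArea)) return Path::IncrementalWithHwc;
    return Path::FullRecomposition;
}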
Referring to fig. 15, fig. 15 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present disclosure. The electronic device 100 may be a terminal or a server. The terminal may be a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), and other terminals, and the wearable device may be a smart watch or a smart bracelet, which is not limited in the embodiment of the present application.
As shown in fig. 15, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K via an I2C interface, such that the processor 110 and the touch sensor 180K communicate via an I2C bus interface to implement the touch functionality of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. It may also be used to connect earphones and play audio through them, or to connect other terminals such as AR devices.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and is not limited to the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the terminal through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of the electronic device 100 is coupled to the mobile communication module 150 and antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
In some possible cases, the camera 193 may be used as a gesture capture module to capture gesture operations of the user. The camera 193 may include a front camera and/or a rear camera.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 implements various functional applications and data processing of the electronic device 100 by executing the instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data (such as audio data and a phone book) created during use of the electronic device 100. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc. The keys 190 of the electronic device 100 may include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc. The SIM card interface 195 is used to connect a SIM card.
In addition, the electronic device 100 may include various sensors, such as a pressure sensor 180A for sensing a pressure signal, which may be converted to an electrical signal. The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. The air pressure sensor 180C is used to measure air pressure. The magnetic sensor 180D includes a hall sensor. The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, taking a picture of a scene, electronic device 100 may utilize range sensor 180F to range for fast focus. The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer an incoming call with the fingerprint, and so on. The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. The bone conduction sensor 180M may acquire a vibration signal. And a temperature sensor 180J for detecting temperature. For example, the temperature sensor 180J may be a non-contact infrared temperature sensor that can measure the temperature of an object using infrared rays.
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one example embodiment or technology disclosed herein. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.
The present disclosure also relates to an operating device for performing the method. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application-specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, each of which may be coupled to a computer system bus. Further, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
The processes and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform one or more method steps. The structure for a variety of these systems is discussed in the description that follows. In addition, any particular programming language sufficient to implement the techniques and embodiments disclosed herein may be used. Various programming languages may be used to implement the present disclosure as discussed herein.
Moreover, the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the disclosed subject matter. Accordingly, the present disclosure is intended to be illustrative, but not limiting, of the scope of the concepts discussed herein.
Finally, it should be noted that: the above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A layer composition method, applied to an electronic device, wherein the electronic device comprises a layer management module and a hardware composer (HWC), and the method comprises:
the layer management module acquires a first layer in layers to be synthesized, wherein the first layer is a newly-built layer or a layer with changed layer information, and the layer information comprises layer attributes or display data;
the layer management module sends the acquired first layer to the HWC;
and the HWC synthesizes the obtained first layer and the first synthesis layer to obtain a second synthesis layer, wherein the first synthesis layer is a layer synthesized by the HWC before the second synthesis layer.
2. The method according to claim 1, wherein before the layer management module sends the obtained first layer to the HWC, the method further comprises:
the layer management module determines whether the acquired first layer meets a preset synthesis condition;
and if the layer management module determines that the acquired first layer meets the preset synthesis condition, the layer management module executes a step of sending the acquired first layer to the HWC.
3. The method of claim 2, wherein the preset synthesis conditions comprise:
the hierarchy of any one of the obtained first layers is greater than or equal to the maximum hierarchy of the rest layers, and the rest layers refer to layers except the obtained first layers in the layers to be synthesized.
4. The method of claim 3, wherein the preset synthesis conditions further comprise:
if the obtained first layer includes a first layer with changed layer information, the position area of the first layer with changed layer information can completely cover the position area of the first layer before change.
5. The method according to any of claims 1 to 4, wherein before the layer management module sends the obtained first layer to the HWC, the method further includes:
the layer management module acquires the first synthesized layer synthesized by the HWC;
the layer management module sends the acquired first layer to the HWC, including:
and the layer management module sends the acquired first layer and the first synthesized layer to the HWC.
6. The method of claim 5, wherein obtaining the first synthesized layer synthesized by the HWC by the layer management module comprises:
the layer management module calls a callback function of the HWC, and the callback function is used for acquiring the synthesized layer synthesized by the HWC;
and the layer management module acquires the first synthesized layer synthesized by the HWC through the callback function.
7. The method of claim 6, wherein before the layer management module obtains the first synthesized layer synthesized by the HWC, the method further comprises:
and the layer management module calls a callback function registration interface, and registers the callback function for the HWC through the callback function registration interface.
8. The method according to any of claims 1 to 4, wherein the sending, by the layer management module, the obtained first layer to the HWC includes:
the layer management module sends a layer synthesis request to the HWC, where the layer synthesis request carries the obtained first layer, and the layer synthesis request is used to request the HWC to obtain the synthesized first synthesis layer, and synthesize the obtained first layer and the first synthesis layer;
before the HWC synthesizes the obtained first layer and first synthesized layer, the method further includes:
the HWC receives the layer composition request;
and the HWC acquires the synthesized first synthesis layer according to the layer synthesis request.
9. The method according to any one of claims 1 to 8, wherein before the layer management module sends the obtained first layer and the first synthesized layer to the HWC, the method further includes:
the layer management module sends a first synthesis strategy obtaining request to the HWC, wherein the first synthesis strategy obtaining request carries the obtained related information of the first layer;
the HWC receives the first synthesis policy acquisition request, and determines a first synthesis policy for the acquired first layer and the first synthesis layer according to the acquired related information of the first layer and the related information of the first synthesis layer, where the first synthesis policy includes a synthesis mode corresponding to each of the acquired first layer and the first synthesis layer, and the synthesis mode is Graphics Processing Unit (GPU) synthesis or HWC synthesis;
the HWC sends the first synthesis strategy to the layer management module;
and if the layer management module determines that the synthesis modes corresponding to the acquired first layer and each layer in the first synthesis layer are both HWC synthesis according to the first synthesis strategy, the layer management module executes a step of sending the acquired first layer to the HWC.
10. The method of claim 9, further comprising:
if the layer management module determines, according to the first synthesis policy, that the synthesis modes corresponding to the obtained first layer and the first synthesized layer include both GPU synthesis and HWC synthesis, synthesizing, by using a GPU, the layers whose corresponding synthesis mode is GPU synthesis among the obtained first layer and the first synthesized layer, to obtain a third synthesized layer;
the layer management module sends the obtained first layer and the layer in the first synthesized layer, the synthesis mode of which is HWC synthesis, and the third synthesized layer to the HWC;
and the HWC synthesizes the obtained first layer, the layer which is synthesized by the HWC in the corresponding synthesis mode in the first synthesis layer and the third synthesis layer to obtain a fourth synthesis layer.
11. The method according to any one of claims 2-4, further comprising:
if the layer management module determines that the obtained first layer does not meet the preset synthesis condition, the layer management module sends a second synthesis policy obtaining request to the HWC, where the second synthesis policy obtaining request carries the related information of the layer to be synthesized;
the HWC receives the second synthesis strategy acquisition request, and determines a second synthesis strategy of the layer to be synthesized according to the related information of the layer to be synthesized, wherein the second synthesis strategy comprises synthesis modes corresponding to all layers in the layer to be synthesized, and the synthesis modes are GPU synthesis or HWC synthesis;
the HWC sends the second synthesis strategy to the layer management module;
and the layer management module synthesizes the layer to be synthesized according to the second synthesis strategy.
12. The method according to claim 11, wherein the layer management module synthesizes the layer to be synthesized according to the second synthesis policy, including:
if the layer management module determines, according to the second synthesis strategy, that the synthesis modes of the layers to be synthesized include both GPU synthesis and HWC synthesis, synthesizing, by using the GPU, the layers to be synthesized whose corresponding synthesis mode is GPU synthesis, to obtain a fifth synthesized layer;
the layer management module sends the fifth synthesized layer and the layer to be synthesized, which is synthesized by the HWC in the corresponding synthesis manner, to the HWC;
and the HWC synthesizes the fifth synthesis layer and the layer to be synthesized, which is synthesized by the HWC in the corresponding synthesis mode, in the layer to be synthesized, so as to obtain a sixth synthesis layer.
13. The method according to claim 11, wherein the layer management module synthesizes the layer to be synthesized according to the second synthesis policy, including:
if the layer management module determines that the synthesis modes corresponding to the layers to be synthesized are HWC synthesis according to the second synthesis strategy, the layer management module sends the layers to be synthesized to the HWC;
and the HWC synthesizes the layers to be synthesized to obtain a seventh synthesized layer.
14. An electronic device, characterized in that the electronic device comprises: one or more processors; one or more memories; the one or more memories store one or more programs that, when executed by the one or more processors, cause the electronic device to perform the image layer synthesis method of any of claims 1-13.
15. A computer-readable storage medium having stored thereon instructions that, when executed on a computer, cause the computer to perform the layer composition method of any of claims 1 to 13.