
WO2024159950A1 - Display method and apparatus, electronic device, and storage medium - Google Patents


Info

Publication number: WO2024159950A1
Authority: WO (WIPO, PCT)
Prior art keywords: display, processing, thread, timestamp, threads
Application number: PCT/CN2023/139711
Other languages: French (fr), Chinese (zh)
Inventor: 朱术新
Original assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date: (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2024159950A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to an output unit, e.g. interface arrangements
    • G06F 3/14 - Digital output to a display device; cooperation and interconnection of the display device with other functional units
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality
    • G06T 2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T 2200/04 - Indexing scheme involving 3D image data

Definitions

  • the present application belongs to the field of device control technology, and in particular, relates to a display method, apparatus, electronic device, and storage medium.
  • VR: virtual reality
  • VST: video see-through
  • multiple threads in the VR display device need to work together to process image data and generate VR composite images. If one thread experiences a delay fluctuation while processing image data, the threads can no longer synchronize their processing trigger moments for the next frame of image data, which easily leads to frame loss; the final output picture is then not smooth, reducing the user's immersion and degrading the viewing experience.
  • the embodiments of the present application provide a display method, apparatus, electronic device and computer-readable storage medium, which can solve a problem of the existing display technology: because a VR display device requires multiple threads to collaborate when displaying VR composite images, a delay fluctuation in any thread's processing of image data is likely to cause frame loss and low smoothness of the output picture.
  • an embodiment of the present application provides a display method, which is applied to a virtual reality (VR) display device, wherein the VR display device includes multiple threads for generating VR composite images; the display method includes:
  • the timestamps of the respective display cycles are sent to a plurality of threads respectively, so that the threads calibrate the processing triggering moment of the threads in the N+Pth display cycle according to the timestamps of the Nth display cycle; the processing triggering moment of the threads is synchronized with the timestamps of the synchronization signals of the N+Pth display cycle; N and P are positive integers greater than 0;
  • the threads are triggered in sequence to process image data at the processing triggering moments corresponding to the threads to generate VR composite images.
  • when receiving the first operation, the VR display device generates a synchronization signal in each display cycle and records the timestamp at which the synchronization signal is generated; the recorded timestamp of each display cycle is sent to the multiple threads that synthesize the VR composite image, and each thread can determine its processing trigger moment for the next display cycle according to the timestamp.
  • since the processing trigger moment at which each thread processes image data in the next display cycle is synchronized with the timestamp of the synchronization signal of that cycle, the processing trigger moments of the multiple threads are guaranteed to be synchronized with one another; the threads are then triggered in their corresponding processing order to process the image data at their trigger moments, thereby generating a VR composite image.
  • in this way, the processing trigger moment of each thread is synchronized with the synchronization signal of the next display cycle, and the processing trigger moments of the multiple threads are synchronized with one another, thereby realizing orderly control of the threads' collaborative processing of image data, reducing the probability of frame loss, ensuring the smoothness of the output picture, and improving the user's immersion and viewing experience.
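A minimal C++ sketch of the flow just described: record the synchronization-signal timestamp of each display cycle and fan it out to the worker threads, each of which calibrates its own processing trigger moment for cycle N+P. All names (Worker, onVsync) are illustrative assumptions; the patent discloses no code.

```cpp
#include <chrono>
#include <vector>

using Timestamp = std::chrono::steady_clock::time_point;

// Hypothetical interface for each thread that helps generate the VR
// composite image (camera thread, rendering threads, and so on).
struct Worker {
    virtual ~Worker() = default;
    // Given the timestamp of display cycle N, align this thread's processing
    // trigger moment with the synchronization signal of cycle N + P.
    virtual void calibrate(Timestamp cycleN, int P) = 0;
};

// Called once per display cycle: record the timestamp of the synchronization
// signal and send it to every worker thread for self-calibration.
void onVsync(const std::vector<Worker*>& workers, int P) {
    const Timestamp now = std::chrono::steady_clock::now();  // recorded timestamp
    for (Worker* w : workers) {
        w->calibrate(now, P);
    }
}
```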
  • the multiple threads include a camera thread; the processing trigger moment of the camera thread is an exposure moment;
  • the sending the timestamps of the respective display cycles to a plurality of threads respectively so that the threads calibrate the processing triggering moment of the threads in the N+Pth display cycle according to the timestamps of the Nth display cycle comprises:
  • the camera thread sends a first start instruction to the camera framework in the hardware abstraction layer; the first start instruction includes the timestamp;
  • in response to the first startup instruction, the camera framework sends a second startup instruction to the camera module;
  • the camera module feeds back exposure parameters of a preview image to the camera framework, where the preview image is captured by the camera module based on the second startup instruction, and the exposure parameters include the exposure start moment of the preview image;
  • the camera framework calculates the time deviation between the timestamp and the exposure start moment;
  • the exposure moment of the camera thread in the N+Pth display cycle is determined according to the timestamp of the Nth display cycle and the time deviation.
  • the camera module includes a master camera module and at least one slave camera module; the master camera module and the slave camera module complete hardware synchronization at startup through the second startup instruction sent by the camera framework.
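The exposure calibration above amounts to measuring how far the actual exposure start drifts from the vsync timestamp and compensating for that drift P cycles later. A hedged sketch follows; the sign of the compensation is an assumption, since the patent states only that the exposure moment is derived from the cycle-N timestamp and the deviation.

```cpp
#include <chrono>

using Clock = std::chrono::steady_clock;
using Timestamp = Clock::time_point;
using Duration = Clock::duration;

// Deviation between the vsync timestamp of cycle N and the exposure start
// moment that the camera module reported for its preview image.
Duration exposureDeviation(Timestamp vsyncN, Timestamp exposureStartN) {
    return exposureStartN - vsyncN;
}

// Target exposure moment for cycle N + P: advance the cycle-N timestamp by
// P display periods, then subtract the measured deviation so that the actual
// exposure lands on the synchronization signal of cycle N + P (assumption:
// the deviation is compensated by shifting the trigger earlier).
Timestamp calibratedExposureMoment(Timestamp vsyncN, Duration displayPeriod,
                                   int P, Duration deviation) {
    return vsyncN + P * displayPeriod - deviation;
}
```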
  • the multiple threads include: a graphics processing thread
  • the sending the timestamps of the respective display cycles to a plurality of threads respectively so that the threads calibrate the processing triggering moment of the threads in the N+Pth display cycle according to the timestamps of the Nth display cycle comprises:
  • the graphics processing thread calculates a timestamp of an N+Pth display cycle; the timestamp of the N+Pth display cycle is determined according to the timestamp of the Nth display cycle and a processing frame rate of the graphics processing thread;
  • the graphics processing thread determines a processing completion time of the Nth display cycle; the processing completion time is a time corresponding to when the graphics processing thread completes processing of the image data of the Nth display cycle;
  • the graphics processing thread calculates a sleep duration; the sleep duration is calculated based on the processing completion time and the timestamp of the N+Pth display cycle;
  • the graphics processing thread is set to a sleep state, and the graphics processing thread is awakened when the duration of the sleep state reaches the sleep duration.
  • the graphics processing thread includes a secondary rendering thread
  • the graphics processing thread determines the processing completion time of the Nth display cycle, including:
  • the secondary rendering thread determines a second rendering completion time of the Nth display cycle
  • the graphics processing thread calculates the sleep duration, including:
  • the secondary rendering thread calculates the time difference between the timestamp of the N+Pth display cycle and the second rendering completion time, and uses the time difference as a second sleep duration.
  • the graphics processing thread determines the processing completion time of the Nth display cycle, including:
  • the graphics processing thread calculates the sleep duration, including:
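The sleep-based calibration in the preceding steps reduces to "compute the cycle N+P timestamp, then sleep from the completion of cycle N until that moment". A minimal sketch, assuming the thread runs this at the end of each cycle; function and parameter names are invented:

```cpp
#include <chrono>
#include <thread>

using Clock = std::chrono::steady_clock;
using Timestamp = Clock::time_point;

// processingFrameRate is the graphics processing thread's frame rate in Hz.
void sleepUntilNextTrigger(Timestamp vsyncN, double processingFrameRate, int P) {
    using namespace std::chrono;
    // Timestamp of cycle N + P, derived from the cycle-N timestamp and the
    // thread's processing frame rate.
    const auto period =
        duration_cast<Clock::duration>(duration<double>(1.0 / processingFrameRate));
    const Timestamp vsyncNP = vsyncN + P * period;

    // Processing completion time of cycle N: "now", right after the thread
    // finished handling the cycle-N image data.
    const Timestamp done = Clock::now();

    // Sleep duration = cycle N + P timestamp minus the completion time; the
    // wake-up moment is the calibrated processing trigger moment.
    if (vsyncNP > done) {
        std::this_thread::sleep_until(vsyncNP);
    }
}
```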
  • the processing chip generates the synchronization signal of each display period based on a preset display frame rate
  • the service process in the application framework layer reads the timestamp of the synchronization signal and stores it in a time variable corresponding to the service process.
  • the display processing unit (DPU) generates the synchronization signal at the display frame rate.
  • the processing chip generates the synchronization signal of each display period based on a preset display frame rate, including:
  • the display driver chip sends a feedback signal to the central processing unit CPU, so that the CPU generates the synchronization signal when receiving the feedback signal; the feedback signal is generated when the display driver chip refreshes a frame of the VR composite image.
  • sending the timestamps of the display periods to multiple threads respectively includes:
  • the runtime application running in the application layer reads the timestamp recorded in the time variable; the runtime application communicates with the service process in the application framework layer through a preset interface;
  • P is determined based on a ratio between a display period of the VR synthetic image and a processing period of the thread.
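One plausible end-to-end path for distributing the timestamp, following the layering described above (kernel device node, service process, runtime application). The node path and the period-based formula for P are assumptions for illustration:

```cpp
#include <cstdint>
#include <fstream>

// Hypothetical device node in which the kernel layer stores the nanosecond
// timestamp of the most recent synchronization signal (the patent names a
// device node but not a path).
constexpr const char* kVsyncNode = "/sys/kernel/vr_display/vsync_ts";

// Service-process side: read the timestamp from the device node into a time
// variable; the runtime application then fetches it through a preset
// interface and forwards it to the worker threads.
int64_t readVsyncTimestampNs() {
    std::ifstream node(kVsyncNode);
    int64_t ns = 0;
    node >> ns;
    return ns;
}

// P as a ratio of periods: e.g. a 90 Hz display (~11.1 ms period) feeding a
// thread that processes at 30 fps (~33.3 ms period) gives P = 3, i.e. the
// thread calibrates its trigger moment three display cycles ahead.
int computeP(int64_t displayPeriodNs, int64_t threadPeriodNs) {
    return static_cast<int>(threadPeriodNs / displayPeriodNs);
}
```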
  • an embodiment of the present application provides a display device, including:
  • a timestamp recording unit configured to record, in response to a first operation, a timestamp of a synchronization signal generated in each display period;
  • a timestamp sending unit is used to send the timestamp of each display cycle to multiple threads respectively, so that the threads calibrate the processing triggering moment of the threads in the N+Pth display cycle according to the timestamp of the Nth display cycle; the processing triggering moment of the threads is synchronized with the timestamp of the synchronization signal of the N+Pth display cycle; N and P are positive integers greater than 0; the multiple threads are used to generate VR synthetic images;
  • the multiple threads include a camera thread; the processing trigger moment of the camera thread is an exposure moment; and the timestamp sending unit includes:
  • a first startup instruction transmission unit used for the camera thread to send a first startup instruction to the camera framework in the hardware abstraction layer; the first startup instruction includes the timestamp;
  • a second startup instruction transmission unit configured to cause the camera framework to send a second startup instruction to the camera module in response to the first startup instruction
  • an exposure parameter feedback unit configured for the camera module to feed back the exposure parameters of the preview image to the camera framework, wherein the preview image is obtained by the camera module based on the second start instruction, and the exposure parameters include the start exposure moment of the preview image;
  • a time deviation calculation unit, used for the camera framework to calculate the time deviation between the timestamp and the exposure start moment;
  • the exposure time determination unit is used to determine the exposure time of the camera thread in the N+Pth display period according to the timestamp of the Nth display period and the time deviation.
  • the camera module includes a master camera module and at least one slave camera module; the master camera module and the slave camera module complete hardware synchronization at startup through the second startup instruction sent by the camera framework.
  • the multiple threads include a graphics processing thread; and the timestamp sending unit includes:
  • a timestamp calculation unit used for the graphics processing thread to calculate the timestamp of the N+Pth display cycle; the timestamp of the N+Pth display cycle is determined according to the timestamp of the Nth display cycle and the processing frame rate of the graphics processing thread;
  • a rendering completion time determination unit used for the graphics processing thread to determine the processing completion time of the Nth display cycle; the processing completion time is the time corresponding to the completion of the processing of the image data by the graphics processing thread in the Nth display cycle;
  • a sleep duration calculation unit used for the graphics processing thread to calculate the sleep duration; the sleep duration is calculated based on the processing completion time and the timestamp of the N+Pth display cycle;
  • the sleep state triggering unit is used to set the graphics processing thread to a sleep state and wake up the graphics processing thread when the duration of the sleep state reaches the sleep duration.
  • the graphics processing thread includes a secondary rendering thread
  • the rendering completion time determination unit comprises:
  • a second rendering time determination unit used for the secondary rendering thread to determine the second rendering completion time of the Nth display cycle
  • the sleep duration calculation unit comprises:
  • the second sleep calculation unit is used for the secondary rendering thread to calculate the time difference between the timestamp of the N+Pth display cycle and the rendering completion time, and use the time difference as the second sleep duration.
  • the graphics processing thread includes a primary rendering thread; a time difference between an expected triggering moment of the primary rendering thread and an expected triggering moment of the secondary rendering thread is a preset interval duration;
  • the rendering completion time determination unit comprises:
  • a first rendering time determination unit used for the primary rendering thread to determine a first rendering completion time of an Nth display cycle
  • the sleep duration calculation unit comprises:
  • the first sleep calculation unit is used for the primary rendering thread to calculate a first sleep duration; the first sleep duration is determined according to the timestamp of the N+Pth display cycle, the preset interval duration, and the first rendering completion time.
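One plausible reading of the first sleep duration, sketched below: the primary rendering thread wakes a preset interval ahead of the secondary rendering thread (which wakes at the cycle N+P timestamp), so it sleeps from its cycle-N rendering completion until that earlier moment. The direction of the offset is an assumption; the patent states only that the duration is derived from the three quantities.

```cpp
#include <chrono>

using Clock = std::chrono::steady_clock;
using Timestamp = Clock::time_point;
using Duration = Clock::duration;

// First sleep duration for the primary rendering thread (assumption: the
// primary thread's expected trigger moment precedes the secondary thread's
// by presetInterval).
Duration firstSleepDuration(Timestamp vsyncNP, Duration presetInterval,
                            Timestamp firstRenderDone) {
    return (vsyncNP - presetInterval) - firstRenderDone;
}
```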
  • the timestamp recording unit includes:
  • a synchronization signal generating unit configured to generate the synchronization signal of each display period based on a preset display frame rate by a processing chip
  • a device node storage unit configured to record the timestamp of generating the synchronization signal based on the generation time of the synchronization signal, and store the timestamp in a device node in the kernel layer;
  • the time variable storage unit is used for the service process in the application framework layer to read the timestamp of the synchronization signal and store it in the time variable corresponding to the service process.
  • the synchronization signal generating unit includes:
  • the first synchronization signal generating unit is used for the display processing unit (DPU) to generate the synchronization signal at the display frame rate.
  • the synchronization signal generating unit includes:
  • the second synchronization signal generating unit is used for the display driver chip to send a feedback signal to the central processing unit CPU, so that the CPU generates the synchronization signal when receiving the feedback signal; the feedback signal is generated when the display driver chip refreshes a frame of the VR composite image.
  • the timestamp sending unit includes:
  • a runtime application reading unit used for a runtime application running in the application layer to read the timestamp recorded in the time variable; the runtime application communicates with the service process in the application framework layer through a preset interface;
  • a runtime application sending unit is used for the runtime application to send the timestamp to the multiple threads.
  • P is determined based on a ratio between a display period of the VR synthetic image and a processing period of the thread.
  • an embodiment of the present application provides a display device, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the display method described in any one of the first aspects when executing the computer program.
  • an embodiment of the present application provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and wherein the computer program, when executed by a processor, implements the display method described in any one of the first aspects above.
  • an embodiment of the present application provides a computer program product.
  • when the computer program product is run on an electronic device, the electronic device executes the display method described in any one of the first aspects above.
  • an embodiment of the present application provides a chip system, including a processor, the processor being coupled to a memory, the processor executing a computer program stored in the memory to implement a display method as described in any one of the first aspects.
  • FIG. 1 is a schematic diagram of an implementation of displaying a VR composite screen by an electronic device provided in an embodiment of the present application.
  • FIG. 2 is a schematic diagram of the structure of an electronic device provided in an embodiment of the present application.
  • FIG. 3 is a software structure block diagram of an electronic device according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a screen based on the VST display technology provided in an embodiment of the present application.
  • FIG. 5 is a data flow diagram of a VR composite image based on VST technology provided in an embodiment of the present application.
  • FIG. 6 is a data flow diagram of a VR composite image based on VST technology provided in another embodiment of the present application.
  • FIG. 7 is a flowchart of an implementation of a display method provided in an embodiment of the present application.
  • FIG. 8 is a flowchart of a specific implementation of S701 provided in an embodiment of the present application.
  • FIG. 9 is a flowchart of a specific implementation of S701 based on a software framework according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram comparing processing trigger moments before and after calibration according to an embodiment of the present application.
  • FIG. 11 is a flowchart of an implementation of the camera thread calibrating its processing trigger moment provided in an embodiment of the present application.
  • FIG. 13 is a schematic diagram of the calibration of the exposure moment provided in an embodiment of the present application.
  • FIG. 15 is a control timing diagram of a graphics processing thread provided in an embodiment of the present application.
  • FIG. 16 is a timing diagram of different graphics processing threads occupying the GPU according to an embodiment of the present application.
  • FIG. 17 is a flowchart of an implementation of each graphics processing thread determining its trigger moment according to an embodiment of the present application.
  • FIG. 18 is a schematic diagram of dividing a processing order provided in an embodiment of the present application.
  • FIG. 19 is a timing diagram of the various stages in a process of generating multiple frames of VR composite images by an electronic device provided in an embodiment of the present application.
  • FIG. 20 is a structural block diagram of a display device provided in an embodiment of the present application.
  • FIG. 21 is a structural block diagram of an electronic device provided in an embodiment of the present application.
  • the term "if" can be interpreted as "when" or "once" or "in response to determining" or "in response to detecting", depending on the context.
  • the phrase "if it is determined" or "if [the described condition or event] is detected" can be interpreted as "once it is determined" or "in response to determining" or "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]", depending on the context.
  • references to "one embodiment" or "some embodiments" etc. in the specification of this application mean that one or more embodiments of the present application include the specific features, structures or characteristics described in conjunction with that embodiment. Therefore, the statements "in one embodiment", "in some embodiments", "in some other embodiments", etc., appearing in different places in this specification do not necessarily refer to the same embodiment, but rather mean "one or more but not all embodiments", unless otherwise specifically emphasized.
  • the terms “including”, “comprising”, “having” and their variations all mean “including but not limited to”, unless otherwise specifically emphasized in other ways.
  • the display method provided in the embodiments of the present application can be applied to electronic devices such as mobile phones, tablet computers, augmented reality (AR)/virtual reality (VR) display devices, laptop computers, ultra-mobile personal computers (UMPCs), netbooks, and personal digital assistants (PDAs).
  • the display method can be applied to electronic devices that can realize VR display, or electronic devices that are externally connected to VR display devices.
  • the embodiments of the present application do not impose any restrictions on the specific types of electronic devices.
  • FIG. 1 shows a schematic diagram of an implementation of displaying a VR composite screen by an electronic device provided in an embodiment of the present application.
  • the electronic device can be a wearable VR display device.
  • the VR display device has a built-in processor 11 and a camera module 12.
  • the processor 11 includes: a central processing unit (CPU) and a graphics processing unit (GPU), etc.
  • the image data is processed and synthesized by the processor.
  • the camera module 12 can be used to obtain the environmental image of the scene where the wearer (i.e., user) is located.
  • the processor 11 can synthesize the environmental image with the virtual picture to generate a VR synthetic image based on VST technology.
  • the electronic device may be a smart phone, and the smart phone 13 may establish a communication connection with a wearable pair of glasses 14.
  • the communication connection may be a wired communication connection or a wireless communication connection; for example, the smart phone 13 may be connected to the wearable pair of glasses 14 via a serial port; if the wearable pair of glasses 14 is equipped with a wireless communication module, such as a Bluetooth module or a WIFI module, the smart phone 13 may establish a communication connection with the wearable pair of glasses 14 via the wireless communication module.
  • the wearable pair of glasses 14 may also be equipped with a camera module, which captures environmental images and feeds them back to the smart phone 13.
  • the smart phone 13 may synthesize the environmental images with the virtual images through the built-in processor, generate a VR synthesized image based on the VST technology, and feed it back to the wearable pair of glasses, and output the VR synthesized image through the wearable pair of glasses.
  • the electronic device can be a station (ST) in a WLAN, a cellular phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, a personal digital assistant (PDA) device, a handheld device with a wireless communication function, a computing device or other processing device connected to a wireless modem, a computer, a laptop computer, a handheld communication device, a handheld computing device, and/or other devices for communicating on a wireless system and a next-generation communication system, such as a mobile terminal in a 5G network or a mobile terminal in a future evolved public land mobile network (PLMN), etc.
  • FIG. 2 shows a schematic structural diagram of the electronic device 100.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, and a subscriber identification module (SIM) card interface 195, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than shown in the figure, or combine some components, or split some components, or arrange the components differently.
  • the components shown in the figure may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units, for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, a display processing unit (DPU), and/or a neural-network processing unit (NPU). Different processing units may be independent devices or integrated into one or more processors.
  • the controller can generate operation control signals according to the instruction operation code and timing signal to complete the control of instruction fetching and execution.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory may store instructions or data that the processor 110 has just used or cyclically used. If the processor 110 needs to use the instruction or data again, it may be directly called from the memory. This avoids repeated access, reduces the waiting time of the processor 110, and thus improves the efficiency of the system.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple groups of I2C buses.
  • the processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces.
  • the processor 110 may be coupled to the touch sensor 180K through the I2C interface, so that the processor 110 communicates with the touch sensor 180K through the I2C bus interface, thereby realizing the touch function of the electronic device 100.
  • the I2S interface can be used for audio communication.
  • the processor 110 can include multiple I2S buses.
  • the processor 110 can be coupled to the audio module 170 via the I2S bus to achieve communication between the processor 110 and the audio module 170.
  • the audio module 170 can transmit an audio signal to the wireless communication module 160 via the I2S interface to achieve the function of answering a call through a Bluetooth headset.
  • the PCM interface can also be used for audio communication, sampling, quantizing and encoding analog signals.
  • the audio module 170 and the wireless communication module 160 can be coupled via a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 via the PCM interface to realize the function of answering calls via a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus for asynchronous communication.
  • the bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 110 and the wireless communication module 160.
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 can transmit an audio signal to the wireless communication module 160 through the UART interface to implement the function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193.
  • the MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), etc.
  • the processor 110 and the camera 193 communicate via the CSI interface to implement the shooting function of the electronic device 100.
  • the processor 110 and the display screen 194 communicate via the DSI interface to implement the display function of the electronic device 100.
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, etc.
  • the GPIO interface can also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, etc.
  • the USB interface 130 is an interface that complies with the USB standard specification, and specifically can be a Mini USB interface, a Micro USB interface, a USB Type C interface, etc.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and a peripheral device. It can also be used to connect headphones to play audio through the headphones.
  • the interface can also be used to connect other electronic devices, such as AR devices, etc.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is only a schematic illustration and does not constitute a structural limitation on the electronic device 100.
  • the electronic device 100 may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
  • the charging management module 140 is used to receive charging input from a charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from a wired charger through the USB interface 130.
  • the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. While the charging management module 140 is charging the battery 142, it may also power the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle number, battery health status (leakage, impedance), etc.
  • the power management module 141 can also be set in the processor 110.
  • the power management module 141 and the charging management module 140 can also be set in the same device.
  • the wireless communication function of the electronic device 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor and the baseband processor.
  • Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve the utilization of antennas.
  • antenna 1 can be reused as a diversity antenna for a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide solutions for wireless communications including 2G/3G/4G/5G, etc., applied to the electronic device 100.
  • the mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), etc.
  • the mobile communication module 150 may receive electromagnetic waves from the antenna 1, and perform filtering, amplification, and other processing on the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 may also amplify the signal modulated by the modulation and demodulation processor, and convert it into electromagnetic waves for radiation through the antenna 1.
  • at least some of the functional modules of the mobile communication module 150 may be arranged in the processor 110.
  • at least some of the functional modules of the mobile communication module 150 may be arranged in the same device as at least some of the modules of the processor 110.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor outputs a sound signal through an audio device (not limited to a speaker 170A, a receiver 170B, etc.), or displays an image or video through a display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and be set in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) network), bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) and the like applied to the electronic device 100.
  • the wireless communication module 160 can be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the frequency of the electromagnetic wave signal and performs filtering processing, and sends the processed signal to the processor 110.
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, modulate the frequency of the signal, amplify the signal, and convert it into electromagnetic waves for radiation through the antenna 2.
  • the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a Beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS) and/or a satellite based augmentation system (SBAS).
  • the electronic device 100 implements the display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, which connects the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information. It should be noted that the GPU can perform abnormal identification on the storage unit associated with the controlled display screen 194 through the display method provided in this embodiment.
  • the GPU can transfer the image data to be displayed to the storage unit in the display screen 194 for storage for subsequent display. If the electronic device is a smart phone, the electronic device can be connected to external wearable glasses through a serial interface or a wireless communication interface, and the display function is implemented through the wearable glasses when it is in VR display mode.
  • the display screen 194 is used to display images, videos, etc.
  • the display screen 194 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), etc.
  • the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the display screen 194 may include a touch panel and other input devices.
  • the display screen 194 may be associated with one or more storage units, which are used to cache image data displayed on the display screen 194.
  • the electronic device 100 can realize the shooting function through ISP, camera 193, video codec, GPU, display screen 194 and application processor.
  • ISP is used to process the data fed back by camera 193. For example, when taking a photo, the shutter is opened, and the light is transmitted to the camera photosensitive element through the lens. The light signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to ISP for processing and converts it into an image visible to the naked eye. ISP can also perform algorithm optimization on the noise, brightness, and skin color of the image. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene. In some embodiments, ISP can be set in camera 193.
  • the camera 193 is used to capture still images or videos.
  • the object generates an optical image through the lens and projects it onto the photosensitive element.
  • the photosensitive element can be a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to be converted into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • the DSP converts the digital image signal into an image signal in a standard RGB, YUV or other format.
  • the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • the digital signal processor is used to process digital signals, and can process not only digital image signals but also other digital signals. For example, when the electronic device 100 is selecting a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy.
  • Video codecs are used to compress or decompress digital videos.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record videos in a variety of coding formats, such as Moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
  • NPU is a neural network (NN) computing processor.
  • through the NPU, intelligent cognition applications of the electronic device 100 can be realized, such as image recognition, face recognition, voice recognition, and text understanding.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music and videos are saved in the external memory card.
  • the display method in the embodiment of the present application can manage the storage space in the external memory card.
  • the internal memory 121 can be used to store computer executable program codes, which include instructions.
  • the internal memory 121 may include a program storage area and a data storage area.
  • the program storage area may store an operating system, an application required for at least one function (such as a sound playback function, an image playback function, etc.), etc.
  • the data storage area may store data created during the use of the electronic device 100 (such as audio data, a phone book, etc.), etc.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one disk storage device, a flash memory device, a universal flash storage (UFS), etc.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by running instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
  • the electronic device 100 can implement audio functions such as music playing and recording through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone jack 170D, and the application processor.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signals.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 can be arranged in the processor 110, or some functional modules of the audio module 170 can be arranged in the processor 110.
  • the speaker 170A, also called a "loudspeaker", is used to convert an audio electrical signal into a sound signal.
  • the electronic device 100 can listen to music or listen to a hands-free call through the speaker 170A.
  • the speaker 170A can be used to output prompt information to inform the user of the part that needs to be touched by the electronic scale.
  • the receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • the voice can be received by placing the receiver 170B close to the human ear.
  • the microphone 170C, also called a "mike" or a "mic", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C.
  • the electronic device 100 can be provided with at least one microphone 170C. In other embodiments, the electronic device 100 can be provided with two microphones 170C, which can not only collect sound signals but also realize noise reduction function. In other embodiments, the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify the sound source, realize directional recording function, etc.
  • the earphone interface 170D is used to connect a wired earphone.
  • the earphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A can be set on the display screen 194.
  • the electronic device can obtain the user's weight through the pressure sensor 180A.
  • the capacitive pressure sensor may include at least two parallel plates with conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The electronic device 100 determines the intensity of the pressure based on the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the touch operation intensity according to the pressure sensor 180A.
  • the electronic device 100 can also calculate the touch position according to the detection signal of the pressure sensor 180A.
  • touch operations acting on the same touch position but with different touch operation intensities can correspond to different operation instructions. For example: when a touch operation with a touch operation intensity less than the first pressure threshold acts on the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to a first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
  • the gyro sensor 180B can be used to determine the motion posture of the electronic device 100.
  • the angular velocity of the electronic device 100 around three axes (i.e., the x, y, and z axes) can be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for anti-shake shooting. For example, when the shutter is pressed, the gyro sensor 180B detects the angle of the electronic device 100 shaking, calculates the distance that the lens module needs to compensate based on the angle, and allows the lens to offset the shaking of the electronic device 100 through reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude through the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 can use the magnetic sensor 180D to detect the opening and closing of the flip leather case.
  • when the electronic device 100 is a flip phone, the electronic device 100 can detect the opening and closing of the flip cover according to the magnetic sensor 180D. Features such as automatic unlocking upon opening can then be set according to the detected opening and closing state of the leather case or of the flip cover.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in all directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of the electronic device and is applied to applications such as horizontal and vertical screen switching and pedometers.
  • the distance sensor 180F is used to measure the distance.
  • the electronic device 100 can measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 can use the distance sensor 180F to measure the distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 100 emits infrared light outward through the light emitting diode.
  • the electronic device 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 can determine that there is no object near the electronic device 100.
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in leather case mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense the brightness of the ambient light.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access application locks, fingerprint photography, fingerprint call answering, etc.
  • the temperature sensor 180J is used to detect temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown caused by low temperature. In still other embodiments, when the temperature is below yet another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
  • the touch sensor 180K is also called a "touch control device".
  • the touch sensor 180K can be set on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a "touchscreen".
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K can also be set on the surface of the electronic device 100, which is different from the position of the display screen 194.
  • the bone conduction sensor 180M can obtain a vibration signal. In some embodiments, the bone conduction sensor 180M can obtain a vibration signal of a vibrating bone block of the vocal part of the human body. The bone conduction sensor 180M can also contact the human pulse to receive a blood pressure beat signal. In some embodiments, the bone conduction sensor 180M can also be set in an earphone and combined into a bone conduction earphone.
  • the audio module 170 can parse out a voice signal based on the vibration signal of the vibrating bone block of the vocal part obtained by the bone conduction sensor 180M to realize a voice function.
  • the application processor can parse the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 180M to realize a heart rate detection function.
  • the key 190 includes a power key, a volume key, etc.
  • the key 190 may be a mechanical key or a touch key.
  • the electronic device 100 may receive key input and generate key signal input related to user settings and function control of the electronic device 100.
  • Motor 191 can generate vibration prompts.
  • Motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback.
  • touch operations acting on different applications can correspond to different vibration feedback effects.
  • touch operations acting on different areas of the display screen 194 can also correspond to different vibration feedback effects.
  • different application scenarios (for example, time reminders, receiving messages, alarm clocks, and games) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • Indicator 192 may be an indicator light, which may be used to indicate charging status, power changes, messages, missed calls, notifications, etc.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be connected to and separated from the electronic device 100 by inserting it into the SIM card interface 195 or pulling it out from the SIM card interface 195.
  • the electronic device 100 can support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, and the like. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards can be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 can also be compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communications.
  • the electronic device 100 uses an eSIM, i.e., an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture.
  • the Android system of the layered architecture is taken as an example to exemplify the software structure of the electronic device 100.
  • FIG. 3 is a software structure block diagram of the electronic device according to an embodiment of the present application.
  • the layered architecture divides the software into several layers, each with clear roles and division of labor.
  • the layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, from top to bottom: the application layer, the application framework layer, the Android runtime (Android runtime) and system layer, and the kernel layer.
  • the application layer can include a series of application packages.
  • the application package may include applications such as camera, calendar, map, WLAN, Bluetooth, music, video, short message, mailbox, WeChat, WPS, etc.
  • the application framework layer provides application programming interface (API) and programming framework for the applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
  • the window manager is used to manage window programs.
  • the window manager can obtain the display screen size, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • Content providers are used to store and retrieve data and make it accessible to applications.
  • the data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying images, etc.
  • the view system can be used to build applications.
  • a display interface can be composed of one or more views.
  • a display interface including a text notification icon can include a view for displaying text and a view for displaying images.
  • the phone manager is used to provide communication functions for electronic devices, such as the management of call status (including answering, hanging up, etc.).
  • the resource manager provides various resources for applications, such as localized strings, icons, images, layout files, video files, and so on.
  • the notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages and can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager can also be a notification that appears in the system top status bar in the form of a chart or scroll bar text, such as notifications of applications running in the background, or a notification that appears on the screen in the form of a dialog window. For example, a text message is displayed in the status bar, a prompt sound is emitted, an electronic device vibrates, an indicator light flashes, etc.
  • Android Runtime includes core libraries and virtual machines. Android runtime is responsible for scheduling and management of the Android system.
  • the core library consists of two parts: one part is the function that needs to be called by the Java language, and the other part is the Android core library.
  • the application layer and the application framework layer run in a virtual machine.
  • the virtual machine executes the Java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system layer can include multiple functional modules, such as surface manager, media library, 3D graphics processing library (such as OpenGL ES), 2D graphics engine (such as SGL), etc.
  • the surface manager is used to manage the display subsystem and provide the fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as static image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, synthesis and layer processing, etc.
  • a 2D graphics engine is a drawing engine for 2D drawings.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • When the touch sensor 180K receives a touch operation, the corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes the touch operation into a raw input event (including touch coordinates, timestamp of the touch operation, and other information).
  • the raw input event is stored in the kernel layer.
  • the application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the input event. For example, if the touch operation is a touch single-click operation and the control corresponding to the single-click operation is the control of the camera application icon, the camera application calls the interface of the application framework layer to start the camera application, and then starts the camera driver by calling the kernel layer to capture static images or videos through the camera 193.
  • VR display technology has become one of the mainstream display technologies today.
  • Traditional VR display technology provides users with an immersive viewing experience.
  • the above traditional VR display technology has been applied to many fields such as games and movies.
  • traditional VR display technology builds one or more virtual graphics through the GPU in the processor and generates purely fictitious virtual images by combining these virtual graphics; the authenticity of such images is low, which reduces the user's sense of immersion.
  • VST technology has emerged.
  • VST display technology collects the real environment image of the user's scene through the camera module configured on the VR display device, and superimposes a synthetic virtual image on the basis of the real environment image, that is, a VR synthetic image is obtained based on VST technology. Since the background image in the VR synthetic image is generated based on the real environment image, the authenticity of the image is improved, which in turn improves the user's sense of immersion when watching the VR synthetic image.
  • FIG4 shows a schematic diagram of a screen based on VST display technology provided by an embodiment of the present application.
  • the user wears a VR display device, which may be the VR display device shown in FIG1 (a).
  • the VR display device is equipped with a camera module, through which the environmental image within the user's field of vision can be obtained; for example, objects such as a television and a clock can be photographed, and the VR display device can synthesize the desired virtual image into the above-mentioned environmental image.
  • By comparing FIG4 (a) with FIG4 (b), it can be found that the TV is in a turned-off state in the real scene, that is, no screen content is output.
  • the specified video image frame can be added to the area where the TV is located in the environmental image, realizing the combination of the virtual image and the real image, improving the immersion of the output image, and then improving the user's viewing experience.
  • Although VST display technology can improve the authenticity of VR synthetic images and thus enhance the user's immersive viewing experience, it also introduces new challenges for VR display devices, namely, a large amount of data to be processed and a long processing flow.
  • VR synthetic images need to combine the real and the virtual; that is, the camera thread needs to control the camera module to shoot the real scene, which involves processes such as image exposure and image transmission.
  • the combination of virtual images requires the rendering thread to perform operations such as virtual image analysis and synthesis, and the environmental image needs to be overlaid and rendered with the virtual image, which requires the collaboration of multiple threads.
  • the latency of different threads differs during the image data processing of VR display devices, and the latency also fluctuates with the real-time processing capability of the display device. For example, when the display device processes high-precision video decoding, it occupies more processing resources, so other threads are allocated fewer processing resources, which reduces their processing rate and thereby increases their processing latency.
  • Embodiment 1:
  • the present application provides a display method
  • the execution subject of the display method is an electronic device
  • the electronic device includes but is not limited to: a smart phone, a tablet computer, a computer, a laptop, a VR display device, etc., which can synthesize VR synthetic images based on VST technology.
  • the electronic device can have a built-in VR display module, or it can be externally connected to a wearable VR display device, and the above-mentioned VR synthetic image is output through the external wearable VR display device.
  • the VR display device needs to call multiple different threads to process the image data in the process of generating VR synthetic images.
  • the processing trigger moment of multiple threads can be synchronized to achieve the purpose of orderly controlling multiple threads, thereby reducing the probability of frame loss, so as to improve the smoothness of the output picture, and then make the predicted posture more consistent with the actual picture, improve the authenticity of the VR synthetic image, reduce the dizziness of the user when watching, and enhance the user's viewing experience.
  • FIG5 shows a data flow diagram of a VR synthetic image based on VST technology provided by an embodiment of the present application.
  • the process of generating a VR synthetic image by an electronic device specifically includes the following multiple stages:
  • Stage 1 Image exposure stage
  • the image signal processing stage can be divided into the processing stage through the image front-end processor (Image Front End, IFE) (such as 0.5ms) and the processing stage through the image processor (Image processing engine, IPE) (such as 8.5ms).
  • the above-mentioned image exposure stage is completed through the camera module side.
  • the processed image data needs to be transmitted to the relevant threads at the system layer for processing.
  • Since the camera module is at the kernel layer, the image data transmitted to the relevant threads at the system layer needs to go through the hardware abstraction layer for data conversion before being handed over to the GPU for processing. This will consume a certain amount of transmission time, such as 6.5ms.
  • Stage 3 Primary rendering and video parsing stage
  • When an electronic device generates a VR composite image, it needs to use a rendering application at the application layer to complete the task of rendering virtual objects, such as rendering a virtual keyboard or a virtual cartoon image, which requires a primary rendering stage.
  • this rendering stage can be performed in parallel with the above-mentioned image exposure stage, that is, while the image is exposed and transmitted, the electronic device can perform a rendering process, which also introduces a certain processing delay.
  • the video decoder in the application framework layer needs to parse the video data frame by frame, and the above process also requires a certain amount of processing time.
  • the virtual image data fusion also requires the introduction of processing time, for example, 0.6ms.
  • This stage needs to complete the fusion of virtual and real, that is, the fusion of the environment image and the virtual screen, which involves the image frame selection stage (such as 10.2ms) and the rendering stage (such as 21.3ms) to generate a VR synthetic image.
  • the delay of this stage is relatively long. In some implementation scenarios, the average delay of the secondary rendering stage can reach 21.3ms.
  • Stage 7 Image display stage
  • When the display screen obtains a VR composite image, it needs to scan row by row or column by column to output the image.
  • FIG6 shows a data flow diagram of VR synthetic images based on VST technology provided by another embodiment of the present application.
  • the relationship between each of the above stages and the corresponding modules can be:
  • the original environment image is mainly collected through the camera module, and the original environment image is initially processed through IFE, where the processing is specifically used to distinguish the gaze area where the user's line of sight is focused in the original environment image, and the background area where the user is not focused.
  • the background area can be blurred, and the gaze area is subjected to image enhancement and other related processing.
  • the image data after the initial processing is then transmitted to IPE for secondary processing, such as performing image processing tasks such as hardware noise reduction, image cropping, color processing, detail enhancement, etc., to generate image data after secondary processing.
  • the framework that completes the data transmission from the camera module to the upper layer in the hardware abstraction layer is the camera framework CAM X.
  • CAM X can provide an interface to the camera module at the kernel layer, through which the relevant image data sent by the camera module is received and passed to the system layer.
  • the decoder located in the application framework layer can decode the required video data, and the decoded video image frame can be forwarded to the runtime application located in the application layer for subsequent secondary rendering.
  • In the primary rendering phase, when the rendering application at the application layer needs to render a virtual object, it can render the virtual object through the rendering framework in the application framework layer (such as XR Plugin) and transmit the rendered virtual object to the runtime application in the application layer.
  • the runtime application in the application layer can call related threads to perform image fusion on virtual objects and video image frames to obtain virtual image data.
  • the runtime application located in the application layer can call related threads to fuse the environmental image data and the virtual image data.
  • For the secondary rendering, it is necessary to call the GPU of the kernel layer to complete it.
  • the image data can be forwarded through the interface provided by the 3D graphics processing library in the system layer.
  • the GPU will feed back the VR composite image after secondary rendering to the runtime application.
  • the runtime application in the application layer then sends the synthesized VR composite image for display.
  • the corresponding VR composite images are output through the VR display screen and binocular screen respectively.
  • the electronic device can generate a synchronization signal through the processor and send it to the relevant threads in the above stages, and synchronize and calibrate the processing trigger time of the threads in each stage to ensure that the processing trigger time of each stage is aligned with the synchronization signal, so as to achieve the purpose of orderly controlling each thread and reduce the probability of frame loss.
  • FIG7 shows an implementation flow chart of the display method provided by an embodiment of the present application.
  • When the user needs to play a VR picture, the user can perform a first operation on the electronic device, and the first operation is used to indicate that the user needs to view the VR picture.
  • When the electronic device detects the user's first operation, the process of generating a VR composite image can be executed.
  • a start button may be provided in the VR display device, and the first operation may be that the user touches (which may be a click, a double click, a long press, etc.) the start button.
  • For example, a smart phone can be connected to wearable VR glasses to output VR composite images.
  • the user can start a related VR application in the smart phone, and a start control can be configured in the VR application.
  • the first operation can be that the user touches (can be clicked, double-clicked, long pressed, etc.) the start control displayed on the screen of the smart phone.
  • When the electronic device detects the first operation of the user, it can generate a synchronization signal based on a preset display cycle.
  • the processor of the electronic device may include a controller, and the controller generates an operation control signal based on the instruction operation code and the timing signal to complete the control of fetching and executing instructions. Therefore, based on the display cycle, the synchronization signal can be generated by the controller in the processor.
  • the electronic device can also record the timestamp of the synchronization signal generated in each display cycle. The timestamp of the synchronization signal can be determined based on the system time of the electronic device.
  • Fig. 8 shows a specific implementation flow chart of S701 provided in an embodiment of the present application.
  • S701 specifically includes S7011 to S7013.
  • FIG9 shows a specific implementation flowchart of S701 based on the software framework provided in an embodiment of the present application.
  • the processing chip generates a synchronization signal for each display period based on a preset display frame rate.
  • the electronic device can generate a synchronization signal through one or more built-in processing chips.
  • a controller can be configured in the processing chip. Based on a preset display frame rate, the processing chip can calculate the corresponding period interval length. For example, if the display frame rate is F, the interval length of the display cycle is 1/F, and the synchronization signal is generated according to the interval length.
  • the processing chip may be a DPU or a display driver chip.
  • Depending on the type of processing chip, the generation method may also differ, and may specifically include the following two methods:
  • Method 1: the electronic device can generate a synchronization signal at a preset display frame rate through the hardware module inside the DPU. That is, the electronic device can generate a synchronization signal at a preset display frame rate through the DPU in the CPU.
  • Method 2: the display driver chip can generate a feedback signal and send the feedback signal to the CPU.
  • When the CPU receives the feedback signal, it can generate a synchronization signal.
  • the generation timing of the synchronization signal is the refresh timing of a frame of VR synthetic image. The synchronization of the two timings can ensure that the generation cycle of the synchronization signal is consistent with the display frame rate, and the subsequent threads can also calibrate the processing trigger time according to the timestamp of the synchronization signal.
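  • To make the timing concrete, the following is a minimal C sketch of Method 1 under stated assumptions: the frame rate value, the use of clock_nanosleep, and printing as a stand-in for signal emission are all illustrative, not the actual chip or driver code. A loop emits one synchronization signal per display cycle of length 1/F and records its timestamp.

```c
#include <stdio.h>
#include <time.h>

#define DISPLAY_FRAME_RATE 90   /* assumed preset display frame rate F */

int main(void) {
    const long long period_ns = 1000000000LL / DISPLAY_FRAME_RATE; /* cycle = 1/F */
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;) {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        /* Record the timestamp at which this cycle's sync signal is emitted. */
        long long ts = (long long)now.tv_sec * 1000000000LL + now.tv_nsec;
        printf("sync signal, timestamp = %lld ns\n", ts);

        /* Advance the deadline by exactly one display cycle; an absolute
         * deadline keeps the signal period from drifting. */
        next.tv_sec  += (time_t)(period_ns / 1000000000LL);
        next.tv_nsec += (long)(period_ns % 1000000000LL);
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}
```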
  • a timestamp of the generation of the synchronization signal is recorded, and the timestamp is stored in a device node in the kernel layer.
  • When the processing chip generates a synchronization signal, the system time corresponding to the generation of the synchronization signal is recorded at the same time, and a timestamp corresponding to the synchronization signal is created according to the system time.
  • the driver of the processing chip in the kernel layer records the timestamp in the device node.
  • the device nodes in the kernel layer can be used to record related information of different signals in the device system, including the timestamp of the synchronization signal.
  • the timestamp of each display cycle is recorded in the same file in the device node of the kernel layer, that is, when the processing chip generates a corresponding synchronization signal in the next display cycle, the timestamp of the synchronization signal will overwrite the timestamp of the synchronization signal generated in the previous display cycle, so that each time the subsequent interface reads the file corresponding to the timestamp, the timestamp obtained is the latest recorded timestamp.
  • the path corresponding to the file storing the timestamp can be: /sys/class/graphics/fb0/vsync_timestamp.
  • the service process in the application framework layer reads the timestamp of the synchronization signal and stores it in a time variable corresponding to the service process.
  • a service process is configured in the application framework layer, and the service process can read the files in the device node in the kernel layer at intervals through a polling mechanism. Since the above timestamp is stored in the device node, when the service process reads the file content in the device node based on the polling mechanism, it will read the file where the above timestamp is located and store the timestamp in the corresponding time variable.
  • In this way, the processing chip can generate a synchronization signal based on a preset display frame rate and record its timestamp in the device node at the kernel layer, and the service process in the application framework layer can then read the timestamp and store it in the time variable, realizing the recording and reading of the timestamp. Since the timestamp is exposed through the device node, it is convenient for the system layer and subsequent software layers such as the application framework layer to read and use it, thereby reducing delay.
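  • As a rough illustration of the read path, the C sketch below polls the device node path named above and parses the latest timestamp; the parsing format and error handling are assumptions, not the actual service-process code.

```c
#include <stdio.h>

#define VSYNC_NODE "/sys/class/graphics/fb0/vsync_timestamp"

/* Service-process side: read the latest timestamp from the device node.
 * The driver overwrites this file once per display cycle, so each read
 * returns the most recently recorded timestamp. */
long long read_vsync_timestamp(void) {
    long long ts = -1;
    FILE *f = fopen(VSYNC_NODE, "r");
    if (f != NULL) {
        if (fscanf(f, "%lld", &ts) != 1)
            ts = -1;                  /* parse failure */
        fclose(f);
    }
    return ts;                        /* -1 on failure */
}
```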
  • the timestamps of the respective display cycles are sent to a plurality of threads respectively, so that the threads calibrate the processing triggering moment of the threads in the N+Pth display cycle according to the timestamp of the Nth display cycle; the processing triggering moment of the threads is synchronized with the timestamp of the synchronization signal of the N+Pth display cycle; N and P are positive integers greater than 0; and the plurality of threads are used to generate VR composite images.
  • the timestamp can be sent to each thread so that subsequent threads can calibrate the processing trigger moment according to the timestamp to facilitate orderly processing of image data.
  • the processing cycle of the thread is not necessarily synchronized with the display cycle of the VR composite image, that is, some threads can reuse the image data generated in the previous display cycle to generate the VR composite image.
  • the processing frame rate of the rendering thread of the virtual object can be reduced, that is, the processing cycle corresponding to the rendering thread is extended, and the duration of a processing cycle is the duration corresponding to P display cycles. Therefore, after the thread processes the image data in the Nth display cycle, the next processing trigger moment is the N+Pth display cycle.
  • the above-mentioned S702 specifically may include S7021 to S7022.
  • the specific implementation process is as follows:
  • the runtime application running in the application layer reads the timestamp recorded in the time variable; the runtime application communicates with the service process in the application framework layer through a preset interface.
  • the runtime application sends the timestamp to the multiple threads.
  • a runtime application (i.e., Runtime APP) may be run in the application layer of the electronic device, and the application layer may configure a process to run the runtime application.
  • the service process in the application framework layer allocates a preset interface to the runtime application, and the runtime application may read the timestamp from the service process through the preset interface.
  • the runtime application is configured in an event trigger mode.
  • In some embodiments, when the synchronization signal of the current display cycle is generated, a trigger instruction for triggering the runtime application can also be generated.
  • When the runtime application detects the trigger instruction, the operation of S7021 can be executed, that is, the value of the time variable is read from the service process in the application framework layer through a preset interface to determine the timestamp corresponding to the synchronization signal of the current display cycle.
  • the runtime application can send it to multiple threads so that the threads can perform subsequent steps according to the timestamp. It should be noted that the runtime application can send the timestamp to each thread or to a specified thread, and forward the timestamp to other threads through the specified thread, which is not limited here.
  • the runtime application can send the timestamp to the secondary rendering thread, and the primary rendering thread and the video parsing thread can obtain the timestamp through the secondary rendering thread.
  • the runtime application can also send the timestamp to the secondary rendering thread, the primary rendering thread, and the video parsing thread respectively, which can be set according to actual conditions.
  • timestamp acquisition and timestamp distribution are performed through a runtime application, and a process dedicated to managing multiple threads for VR composite images can be configured in the application layer to improve the accuracy of timing control and thereby improve the smoothness of subsequent VR image generation.
  • After receiving the timestamp, the threads can correct their processing triggering moment for the next display cycle, thereby aligning the processing triggering moment with the generation moment of the synchronization signal of the next display cycle (i.e., the timestamp of the next display cycle). After the synchronization signal is generated in the kernel layer, a certain transmission time is required to transmit it to each thread in the application layer; by the time the signal reaches a thread, there is already a deviation from the generation moment, so the thread is not aligned with the generation moment of the synchronization signal. In addition, the transmission time is uncontrollable and can be long or short, which causes the processing triggering moments of different display cycles to fall at arbitrary times within the display cycle, making the actual processing time available in each display cycle inconsistent.
  • FIG10 shows a comparative schematic diagram before and after the calibration processing trigger moment provided by an embodiment of the present application.
  • the thread is specifically a camera thread.
  • the camera thread is specifically used to control the camera module to capture images.
  • the electronic device will cyclically output 4 frames of different images in one display cycle, namely, image 1 to image 4.
  • the synchronization signal can be transmitted to the camera thread at any time within the display cycle. Therefore, when the exposure moment is not aligned with the synchronization signal, the captured image can be any image of image 1 to image 4.
  • the processing trigger moment of the camera thread is synchronized with the generation moment of the synchronization signal, since the synchronization signal is generated at a fixed time in each display cycle, such as the initial moment of the display cycle, the processing trigger moment and the initial moment of each display cycle will be synchronized and aligned.
  • the image captured by the camera module is the frame of image output at the initial moment of the display cycle, that is, Image 1.
  • By comparing the intervals between processing trigger moments before and after calibration, it can be determined that the interval between different processing trigger moments before calibration is random, while the interval between the processing trigger moments after calibration is fixed, thereby ensuring that the processing time of the thread in each display cycle after calibration is consistent and avoiding situations where related image processing tasks cannot be completed because the processing time is too short.
  • FIG11 shows a flowchart of the implementation of the camera thread calibration processing trigger moment provided by an embodiment of the present application.
  • the camera thread implements the process of calibrating the processing trigger moment (for the camera thread, the processing trigger moment is the exposure moment at which the camera module is controlled to capture an image), including S1101 to S1105.
  • FIG12 shows an interaction flow chart of the camera thread calibration processing moment provided by an embodiment of the present application.
  • the timestamp of the camera thread is sent by the runtime application, and when the runtime application detects the synchronization signal, the timestamp is obtained from the time variable in the application framework layer.
  • the camera thread sends a first start instruction to the camera framework in the hardware abstraction layer; the first start instruction includes the timestamp.
  • the camera thread can specifically run in the application layer. Of course, it can run in other layers within the software framework according to actual conditions, which is not limited here.
  • the runtime application can send the timestamp to the camera thread through the corresponding interface.
  • After receiving the timestamp, the camera thread can generate a first startup instruction, and the first startup instruction can carry the received timestamp.
  • the first startup instruction is specifically used to notify the camera framework to start.
  • In response to the first start-up instruction, the camera framework sends a second start-up instruction to the camera module.
  • When the camera framework at the hardware abstraction layer receives the above-mentioned first startup instruction, it indicates that the camera module needs to be turned on, so a second startup instruction will be sent to the interface of the kernel layer. After receiving the second startup instruction, the camera driver at the kernel layer will control the camera module to turn on. When the camera module receives the second startup instruction, it will start to obtain a preview image, which is specifically used to capture the environmental image within the current user's line of sight.
  • the camera module specifically includes a main camera module and at least one slave camera module.
  • the main camera module is closer to the output interface of the second startup instruction, that is, when the second startup instruction is transmitted, it will first pass through the main camera module, and then pass through the slave camera module.
  • the main camera module will start first, and the slave camera module will start when the second startup instruction is transmitted to it. That is, both the main camera module and the slave camera module are turned on when the relevant hardware receives the second startup instruction; the main camera module and the slave camera module are thus configured in a hardware synchronization relationship, and hardware synchronization can be completed according to the second startup instruction.
  • hardware synchronization has the advantages of low latency and high stability, which can improve the synchronization of the acquisition time of environmental images acquired by different camera modules in the subsequent image synthesis process, improve the accuracy of subsequent image synthesis, and reduce phase deviation.
  • the camera module feeds back the exposure parameters of the preview image to the camera framework, wherein the preview image is obtained by the camera module based on the second start-up instruction, and the exposure parameters include the start-up exposure moment of the preview image.
  • the preview image is fed back to the camera framework located at the hardware abstraction layer. Since the image information of the preview image records the exposure parameters when the preview image is shot, the exposure parameters may include information such as exposure time, exposure duration, and ISO sensitivity. Since the above preview image is an image obtained when the camera module is started, the exposure time corresponding to the preview image is used as the start-up exposure time of the camera module.
  • the camera framework can read the exposure parameters in the above preview image and extract the start-up exposure time carried therein.
  • both the main camera module and the slave camera module can capture preview images, that is, the preview images fed back include the main preview image captured by the main camera module and the slave preview image captured by the slave camera module. Since the transmission path of the main camera module is shorter than the transmission path of the slave camera module, the camera framework can extract the start exposure moment of the main camera module, and perform subsequent time deviation calculation based on the start exposure moment of the main camera module.
  • the camera framework calculates the time deviation between the timestamp and the exposure start time.
  • the camera framework can calculate the difference between the start exposure time and the timestamp corresponding to the first start instruction received, and use the difference as the above time deviation.
  • For example, if the start exposure time is t1 and the above timestamp is ts, then the time deviation is Δt = t1 − ts.
  • the exposure time of the camera thread in the N+Pth display period is determined according to the timestamp of the Nth display period and the time deviation.
  • the camera thread can send a shooting instruction to the camera framework according to the exposure time, and then the camera framework can control the camera module to collect the environmental image at the corresponding exposure time.
  • If the shooting frame rate of the environment image is consistent with the display frame rate of the VR synthetic image, the above P is 1.
  • FIG13 shows a schematic diagram of the calibration of the exposure moments provided in an embodiment of the present application.
  • the dotted lines mark the exposure moments before calibration, namely t1 to tN.
  • t1 is the exposure moment corresponding to when the camera module is started, that is, the above-mentioned start-up exposure moment.
  • ts is the timestamp corresponding to the display period when the first start-up instruction is sent to the camera module, and the generation moment ts corresponding to the timestamp is earlier than the moment when the first start-up instruction is generated, that is, earlier than t1.
  • the time difference between the two is Δt.
  • the time deviation between the start exposure moment and the timestamp is determined, and the exposure moments corresponding to each subsequent display cycle are adjusted, so that the exposure moment of the camera module is aligned with the timestamp of the synchronization signal, thereby ensuring the accuracy of the camera thread's control over the camera module.
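  • The calibration logic can be summarized in a short C sketch; the function names and nanosecond units are illustrative assumptions, but the arithmetic follows the description above: the target exposure moment is the sync timestamp of cycle N+P, reached by shifting the camera's uncalibrated schedule earlier by the measured deviation Δt = t1 − ts.

```c
/* Predicted sync timestamp of cycle N+P: the target exposure moment. */
long long target_exposure_ns(long long t_n_ns,      /* timestamp of cycle N */
                             int p,                 /* P cycles ahead */
                             long long period_ns)   /* display cycle, 1/F */
{
    return t_n_ns + (long long)p * period_ns;       /* t(N+P) */
}

/* Shift the camera's uncalibrated exposure schedule earlier by the measured
 * deviation delta = t1 - ts, so exposures land on the sync timestamps. */
long long calibrate_exposure_ns(long long scheduled_exposure_ns,
                                long long delta_ns)
{
    return scheduled_exposure_ns - delta_ns;
}
```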
  • Fig. 14 shows a flowchart of the triggering moment of the calibration process of the graphics processing thread provided by an embodiment of the present application. Referring to Fig. 14, the calibration process specifically includes S1401 to S1404.
  • the graphics processing thread calculates the timestamp of the N+Pth display cycle; the timestamp of the N+Pth display cycle is determined according to the timestamp of the Nth display cycle and the processing frame rate of the graphics processing thread.
  • the graphics processing thread can process the image data to be processed according to the corresponding processing frame rate.
  • the specific processing frame rate can be determined according to the thread type of the graphics processing thread and the processing power of the GPU, and is not limited here.
  • the electronic device can send the timestamp to the graphics processing thread through the runtime application. Since the synchronization signal is generated once for each display cycle, the time interval between the synchronization signals is predictable. Therefore, when the graphics processing thread receives the timestamp corresponding to the Nth display cycle, it can predict the timestamp of the synchronization signal corresponding to the N+Pth display cycle. If the processing frame rate of the graphics processing thread is consistent with the display frame rate of the VR synthetic image, the above P is 1, that is, the display frame rate is the same as the processing frame rate, and the graphics processing thread performs an image data processing operation once for each display cycle.
  • the timestamp corresponding to the N+Pth display cycle is: t(N+P) = t(N) + 1/f, where t(N+P) is the timestamp corresponding to the N+Pth display period, t(N) is the timestamp corresponding to the Nth display period, and f is the processing frame rate of the graphics processing thread.
  • the graphics processing thread determines the processing completion time of the Nth display cycle; the processing completion time is the time corresponding to the time when the graphics processing thread completes the processing of the image data in the Nth display cycle.
  • the processing trigger moment of the graphics processing thread is specifically the moment of switching from the sleep state to the running state. Since a certain transmission time is required for the synchronization signal to be transmitted from the kernel layer to the graphics processing thread in the application layer, in order to align the processing trigger moment of the graphics processing thread with the generation moment of the synchronization signal (i.e., the moment corresponding to the timestamp), the electronic device can determine the moment of switching from the sleep state to the running state (i.e., the processing trigger moment) by setting a sleep duration. After determining the timestamp of the display cycle corresponding to the next image processing, i.e., the end time of the sleep state, it remains to determine the start time of the sleep state.
  • the condition for the graphics processing thread to enter the dormant state is that the relevant processing tasks of the thread have been completed. Therefore, when the graphics processing thread completes the image data processing corresponding to the Nth display cycle, the system time will be obtained, and the system time corresponding to the completion of the image data processing will be used as the above-mentioned processing completion time.
  • the graphics processing thread calculates the sleep duration; the sleep duration is calculated based on the processing completion time and the timestamp of the N+Pth display cycle.
  • When the graphics processing thread completes the image data processing task of the Nth display cycle, it will enter the sleep state and set the corresponding sleep timer.
  • the graphics processing thread will detect whether the count value of the sleep timer is greater than or equal to the sleep duration calculated above. If so, the graphics processing thread will be awakened; otherwise, if the count value of the sleep timer is less than the sleep duration, the graphics processing thread will be kept in the sleep state.
  • FIG15 shows a control timing diagram of a graphics processing thread provided by an embodiment of the present application.
  • the processing frame rate of the graphics processing thread is consistent with the display frame rate of the VR synthetic image, so one display cycle corresponds to one processing cycle of the graphics processing thread.
  • the graphics processing thread can receive the corresponding timestamp t(N) of the current display cycle fed back by the runtime application at any time during the process of processing image data.
  • the graphics processing thread can calculate the timestamp corresponding to the next display cycle according to the processing frame rate and t(N), that is, t(N+1).
  • When the graphics processing thread completes processing of the image data, it will record the corresponding processing completion time t(fin), enter the dormant state, and set the dormant timer. The graphics processing thread will detect whether the count value of the dormant timer has reached t(N+1)−t(fin); if so, the graphics processing thread will be awakened. At this point, the synchronization signal corresponding to the next display cycle is generated, and the runtime application will feed back the corresponding timestamp to the graphics processing thread at a certain moment; the above steps are repeated, so that each wake-up of the graphics processing thread is aligned with the generation moment of the synchronization signal.
  • the timing of waking up the graphics processing thread can be aligned with the generation time of the synchronization signal of the corresponding display cycle, thereby achieving the purpose of orderly controlling the graphics processing thread to process image data and reducing the probability of frame loss.
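  • A minimal C sketch of the sleep-based alignment, assuming timestamps are CLOCK_MONOTONIC nanoseconds (an assumption; the actual time base is device-specific): sleeping to an absolute deadline t(N+P) is equivalent to sleeping for t(N+P) − t(fin) without re-reading the clock.

```c
#include <time.h>

/* Sleep until the predicted sync timestamp of cycle N+P, so the wake-up
 * (processing trigger moment) aligns with the synchronization signal. */
void sleep_until_cycle(long long t_n_ns,      /* timestamp of cycle N */
                       int p,                 /* cycles per processing cycle */
                       long long period_ns)   /* display cycle, 1/f */
{
    long long t_next = t_n_ns + (long long)p * period_ns;  /* t(N+P) */
    struct timespec wake = {
        .tv_sec  = (time_t)(t_next / 1000000000LL),
        .tv_nsec = (long)(t_next % 1000000000LL)
    };
    /* Equivalent to sleeping for t(N+P) - t(fin), where t(fin) is "now". */
    clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &wake, NULL);
}
```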
  • the graphics processing thread includes: a rendering thread and a decoding thread, and the rendering thread may further include a primary rendering thread and a secondary rendering thread. Since video decoding, primary rendering, and secondary rendering all need to occupy the GPU for processing, and the GPU can only perform one task at a time, the above multiple graphics processing threads need to time-share the GPU to process their corresponding tasks. In order to use GPU resources effectively, corresponding processing priorities can be configured for different graphics processing threads.
  • Since the secondary rendering process merges the virtual synthetic image with the real environment image, it is the last image synthesis stage and its result is directly output to the display screen; its processing importance is therefore higher, and it can be configured with a higher processing priority.
  • The primary rendering process mainly renders virtual objects, and virtual objects may not appear in some scenes, so its processing priority is lower than that of the secondary rendering thread. For the decoding thread, the time required for video data decoding is short, so its processing priority can be set higher than that of the secondary rendering thread. The processing priority relationship of the above three graphics processing threads can therefore be: video decoding > secondary rendering > primary rendering.
  • Figure 16 shows a timing diagram of different graphics processing threads occupying the GPU provided by an embodiment of the present application.
  • For example, within one display cycle, the video decoding thread decodes the video image of the 3rd frame, the primary rendering thread renders the virtual object for the 3rd frame of the VR composite image, and the secondary rendering thread performs image fusion using the 2nd-frame environmental image generated in the previous display cycle, the 2nd-frame video image, and the 2nd-frame virtual object, that is, completes the secondary rendering operation.
  • the secondary rendering thread since the processing priority of the secondary rendering thread is higher than that of the primary rendering thread, if the secondary rendering thread occupies the GPU for the secondary rendering task during the processing of the primary rendering thread, it will interrupt the primary rendering task of the primary rendering thread being processed, and the task of the primary rendering thread may be delayed to the next display cycle.
  • When the secondary rendering thread completes its processing, the primary rendering thread will re-occupy the GPU for processing, thereby delaying the task of decoding the 4th frame of video; the 4th video image frame may then be lost, making the final output VR composite image less smooth.
  • the electronic device can configure different processing time slots for different graphics processing threads in the same display cycle, and configure corresponding processing priorities for different graphics processing threads, so as to realize orderly control of each graphics processing thread to call the GPU.
  • The electronic device can, at the moment the synchronization signal of the display cycle is generated, trigger the video decoding thread and the secondary rendering thread to process their corresponding tasks, and set the priority of the video decoding thread higher than that of the secondary rendering thread, so the GPU will first process the task corresponding to the video decoding thread.
  • Since the video decoding thread takes a shorter time, when it completes the decoding task, the GPU will be occupied by the secondary rendering thread to execute the corresponding secondary rendering task. After a preset interval, the primary rendering thread is triggered to process its corresponding task; since its priority is lower than that of the secondary rendering thread, it will occupy the GPU only after the secondary rendering thread completes its task. In the next display cycle, the GPU can again be occupied in turn in the above manner, thereby reducing the situations where graphics processing threads are interrupted during task processing, lowering the probability of frame loss, and improving the smoothness of VR synthetic images.
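  • One way to realize this ordering, sketched in C under the assumption that plain OS scheduling priorities stand in for the device's actual GPU arbitration, is to give the three threads SCHED_FIFO priorities in the order video decoding > secondary rendering > primary rendering; the numeric values are illustrative.

```c
#include <pthread.h>
#include <sched.h>

/* Assign SCHED_FIFO priorities so the GPU clients run in the intended
 * order; SCHED_FIFO typically requires elevated privileges. */
static int set_prio(pthread_t t, int prio)
{
    struct sched_param sp = { .sched_priority = prio };
    return pthread_setschedparam(t, SCHED_FIFO, &sp);
}

void order_gpu_clients(pthread_t decode, pthread_t second, pthread_t first)
{
    set_prio(decode, 30);  /* short task, released at the sync timestamp */
    set_prio(second, 20);  /* runs after decoding finishes */
    set_prio(first, 10);   /* released after the preset offset; yields to both */
}
```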
  • the runtime application in the application layer can provide an interface for all graphics processing threads to obtain timestamps.
  • the interface can be directly assigned to the secondary rendering thread.
  • When the primary rendering thread and the video decoding thread need to obtain the timestamp, they can obtain it from the secondary rendering thread.
  • Figure 17 shows an implementation flow chart of each graphics processing thread determining the trigger moment provided by an embodiment of the present application. Referring to Figure 17, specifically, the way different threads determine the processing trigger moment is as follows:
  • Step 2.1.1 The video decoding thread calculates the timestamp of the N+1th display cycle; the timestamp of the N+1th display cycle is determined based on the timestamp of the Nth display cycle and the processing frame rate of the video decoding thread; wherein the decoding frame rate of the video decoding thread is the same as the display frame rate of the VR synthetic image.
  • Step 2.1.2 The video decoding thread determines the processing completion time of the Nth display cycle; the processing completion time is the time corresponding to the time when the video decoding thread completes the processing of the image data in the Nth display cycle.
  • Step 2.1.3 The video decoding thread calculates the third sleep duration; the third sleep duration is calculated based on the processing completion time and the timestamp of the N+1th display cycle.
  • Step 2.1.4 sets the video decoding thread to a sleep state, and wakes up the video decoding thread to decode the video data when the duration of the sleep state reaches a third sleep duration.
  • Step 2.2.1 The secondary rendering thread calculates the timestamp of the N+1th display cycle; the timestamp of the N+1th display cycle is determined based on the timestamp of the Nth display cycle and the processing frame rate of the secondary rendering thread; wherein the rendering frame rate of the secondary rendering thread is the same as the display frame rate of the VR composite image.
  • Step 2.2.2 The secondary rendering thread determines the processing completion time of the Nth display cycle; the processing completion time is the time corresponding to the completion of the processing of the image data by the secondary rendering thread in the Nth display cycle.
  • Step 2.2.3 The secondary rendering thread calculates the second sleep duration; the second sleep duration is calculated based on the processing completion time and the timestamp of the N+1th display cycle.
  • Step 2.2.4 sets the secondary rendering thread to a sleep state, and wakes up the secondary rendering thread to perform secondary rendering when the duration of the sleep state reaches the second sleep duration.
  • wherein the preset interval duration used in the following steps may be determined by a preset coefficient α and the display frame rate F2 (for example, as α/F2), where α can be any value greater than 0 and less than 1, for example 0.65, and F2 is the display frame rate.
  • Step 2.3.1 The primary rendering thread calculates the timestamp of the N+Pth display cycle; the timestamp of the N+Pth display cycle is determined based on the timestamp of the Nth display cycle and the processing frame rate of the primary rendering thread; wherein the rendering frame rate of the primary rendering thread may be less than or equal to the display frame rate of the VR synthetic image.
  • Step 2.3.2 The primary rendering thread determines the processing completion time of the Nth display cycle; the processing completion time is the time corresponding to the completion of the processing of the image data by the primary rendering thread in the Nth display cycle.
  • Step 2.3.3 The primary rendering thread calculates a first sleep duration; the first sleep duration is determined based on the timestamp of the N+Pth display cycle, the preset interval duration, and the first rendering completion time.
  • the specific calculation process of the first sleep time is: t(N+P)+offset-t(fin), wherein offset is the above-mentioned preset interval time.
  • Step 2.3.4 sets the primary rendering thread to a sleep state, and wakes up the primary rendering thread to perform primary rendering when the duration of the sleep state reaches the first sleep duration.
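  • A small C sketch of the first sleep duration from Step 2.3.3; the offset form α/F2 is an assumption built from the coefficient α and display frame rate F2 described above, not a formula confirmed by the source.

```c
/* First sleep duration from Step 2.3.3: t(N+P) + offset - t(fin).
 * The offset form alpha/F2 is an assumption, not confirmed by the source. */
long long first_sleep_ns(long long t_np_ns,   /* timestamp of cycle N+P */
                         long long t_fin_ns,  /* processing completion time */
                         double alpha,        /* e.g. 0.65, 0 < alpha < 1 */
                         double f2_hz)        /* display frame rate F2 */
{
    long long offset_ns = (long long)(alpha * 1.0e9 / f2_hz); /* preset interval */
    return t_np_ns + offset_ns - t_fin_ns;
}
```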
  • the threads are triggered to process image data at the processing triggering moments corresponding to the threads in turn to generate a VR composite image.
  • the electronic device can divide the entire generation process into multiple processing stages according to the processing content and processing time of different threads when generating VR synthetic images, and each processing stage corresponds to a processing order.
  • the electronic device can start each thread in sequence according to the above processing order, and trigger each thread to process image data according to the processing trigger time, so as to synthesize the corresponding VR synthetic image.
  • the process of generating VR synthetic images can refer to the relevant descriptions of the above stages, which will not be repeated here.
  • FIG18 shows a schematic diagram of the division of a processing order provided by an embodiment of the present application.
  • the exposure stage is the first stage and can occupy two display cycles; the video decoding stage and the primary rendering stage correspond to the second stage and occupy one display cycle; the secondary rendering stage requires the content output by the preceding stages, so it is the third stage and occupies one display cycle.
  • the display stage is the fourth stage, which is specifically used to display the VR composite image output by the secondary rendering thread.
  • FIG19 shows a timing diagram of each stage in the process of generating a multi-frame VR composite image by an electronic device provided by an embodiment of the present application.
  • The process of displaying multi-frame VR composite images is a process of data stream processing.
  • each stage can be executed in order, and the corresponding processing trigger time is synchronized with the timestamp of the synchronization signal.
  • the primary rendering thread, the video decoding thread, and the secondary rendering thread can time-share the GPU in the same display cycle and have corresponding processing time slots, thereby reducing the loss of a frame of image due to task interruption.
  • a display method provided by an embodiment of the present application can generate a synchronization signal in each display cycle when a first operation is received, and record the timestamp of generating the synchronization signal; send the timestamp of each display cycle to multiple threads used to synthesize VR synthetic images, and each thread can determine the processing trigger moment of the next display cycle according to the timestamp.
  • each thread can synchronize the processing trigger moment of processing image data in the next display cycle with the timestamp of the synchronization signal of the next display cycle, it can ensure that the processing trigger moments of multiple threads are synchronized with each other, and then trigger the threads to process the image data in turn at the processing trigger moment according to the processing order corresponding to each thread, so as to generate a VR synthetic image.
  • the processing trigger moment of each thread can be synchronized with the synchronization signal of the next display cycle, so as to achieve the synchronization of the processing trigger moments between multiple threads, thereby achieving orderly control of multiple threads to collaboratively process image data, reducing the probability of frame loss, ensuring the smoothness of the output picture, and then improving the user's immersion and viewing experience.
  • Embodiment 2:
  • FIG20 shows a structural block diagram of the display device provided in the embodiment of the present application. For the sake of convenience, only the part related to the embodiment of the present application is shown.
  • the display device includes:
  • a timestamp recording unit 201 configured to record a timestamp of a synchronization signal generated in each display period in response to a first operation
  • the timestamp sending unit 202 is used to send the timestamp of each display cycle to multiple threads respectively, so that the threads calibrate the processing triggering moment of the threads in the N+Pth display cycle according to the timestamp of the Nth display cycle; the processing triggering moment of the threads is synchronized with the timestamp of the synchronization signal of the N+Pth display cycle; N and P are positive integers greater than 0; the multiple threads are used to generate VR synthetic images;
  • the image synthesis unit 203 is used to trigger the threads, in the preset processing order, to process image data at the processing triggering moments corresponding to the threads, so as to generate the VR composite image; an ordered-triggering sketch follows.
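The following C++ sketch illustrates one way the ordered triggering could behave; the types, the sleep-based waiting, and the field names are assumptions, not the claimed implementation.

```cpp
// Illustrative sketch of ordered triggering of the stage threads.
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <functional>
#include <thread>
#include <vector>

struct StageThread {
    int order;                      // preset processing order (1 = earliest)
    int64_t triggerNs;              // calibrated processing trigger moment
    std::function<void()> process;  // processes this stage's image data
};

void triggerInOrder(std::vector<StageThread>& threads, int64_t nowNs) {
    // Sort by the preset processing order, then fire each thread once its
    // calibrated trigger moment arrives.
    std::sort(threads.begin(), threads.end(),
              [](const StageThread& a, const StageThread& b) {
                  return a.order < b.order;
              });
    for (StageThread& t : threads) {
        if (t.triggerNs > nowNs) {
            std::this_thread::sleep_for(
                std::chrono::nanoseconds(t.triggerNs - nowNs));
            nowNs = t.triggerNs;
        }
        t.process();  // each stage consumes the previous stage's output
    }
}
```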
  • the multiple threads include a camera thread; the processing trigger moment of the camera thread is an exposure moment; and the timestamp issuing unit 202 includes:
  • a first startup instruction transmission unit used for the camera thread to send a first startup instruction to the camera framework in the hardware abstraction layer; the first startup instruction includes the timestamp;
  • a second startup instruction transmission unit configured to cause the camera framework to send a second startup instruction to the camera module in response to the first startup instruction
  • an exposure parameter feedback unit configured for the camera module to feed back the exposure parameters of the preview image to the camera framework, wherein the preview image is obtained by the camera module based on the second start instruction, and the exposure parameters include the start exposure moment of the preview image;
  • a time deviation calculation unit, used for the camera framework to calculate the time deviation between the timestamp and the start exposure moment;
  • the exposure time determination unit is used to determine the exposure moment of the camera thread in the N+Pth display period according to the timestamp of the Nth display period and the time deviation, as in the sketch below.
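The deviation-based exposure calibration just described can be sketched as follows. The nanosecond units and the sign convention (subtracting the measured deviation) are assumptions, since the embodiment only states that the exposure moment is determined from the timestamp of cycle N and the time deviation.

```cpp
// Sketch of the deviation-based exposure calibration; illustrative only.
#include <cstdint>

// Deviation between the sync-signal timestamp and the start exposure moment
// that the camera module fed back for the preview image.
int64_t exposureDeviationNs(int64_t vsyncTsNs, int64_t startExposureNs) {
    return startExposureNs - vsyncTsNs;
}

// Exposure moment for cycle N+P: the timestamp of cycle N advanced by P
// display periods, corrected by the measured deviation so that the actual
// exposure lands on the sync-signal boundary.
int64_t calibratedExposureNs(int64_t vsyncTsOfCycleN, int64_t periodNs, int p,
                             int64_t deviationNs) {
    return vsyncTsOfCycleN + static_cast<int64_t>(p) * periodNs - deviationNs;
}
```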
  • the camera module includes a main camera module and at least one slave camera module; the main camera module and the slave camera module complete hardware synchronization through a second startup instruction sent by the camera framework when starting.
  • the multiple threads include: a graphics processing thread; and the timestamp issuing unit 202 includes:
  • a timestamp calculation unit used for the graphics processing thread to calculate the timestamp of the N+Pth display cycle; the timestamp of the N+Pth display cycle is determined according to the timestamp of the Nth display cycle and the processing frame rate of the graphics processing thread;
  • a rendering completion time determination unit used for the graphics processing thread to determine the processing completion time of the Nth display cycle; the processing completion time is the time corresponding to the completion of the processing of the image data by the graphics processing thread in the Nth display cycle;
  • a sleep duration calculation unit used for the graphics processing thread to calculate the sleep duration; the sleep duration is calculated based on the processing completion time and the timestamp of the N+Pth display cycle;
  • the sleep state triggering unit is used to set the graphics processing thread to a sleep state and wake the graphics processing thread when the duration of the sleep state reaches the sleep duration; the sleep-based calibration is sketched below.
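A minimal sketch of the sleep-based calibration for a graphics processing thread follows, assuming steady-clock timestamps in nanoseconds and a processing frame rate in Hz; the names are illustrative.

```cpp
// Sleep-based trigger calibration for a graphics processing thread.
#include <chrono>
#include <cstdint>
#include <thread>

int64_t nowNs() {
    using namespace std::chrono;
    return duration_cast<nanoseconds>(
               steady_clock::now().time_since_epoch()).count();
}

void calibrateBySleep(int64_t vsyncTsOfCycleN, int p, double frameRateHz) {
    // Timestamp of cycle N+P, derived from cycle N and the processing frame rate.
    const auto periodNs = static_cast<int64_t>(1e9 / frameRateHz);
    const int64_t tsNPlusP =
        vsyncTsOfCycleN + static_cast<int64_t>(p) * periodNs;

    // Processing completion moment of cycle N (taken as "now" in this sketch).
    const int64_t completionNs = nowNs();

    // Sleep duration = timestamp of cycle N+P minus the completion moment;
    // the thread sleeps and is woken exactly at its next trigger moment.
    const int64_t sleepNs = tsNPlusP - completionNs;
    if (sleepNs > 0) {
        std::this_thread::sleep_for(std::chrono::nanoseconds(sleepNs));
    }
    // On waking, the thread starts processing cycle N+P's image data.
}
```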
  • the graphics processing thread includes a secondary rendering thread
  • the rendering completion time determination unit comprises:
  • a second rendering time determination unit used for the secondary rendering thread to determine the second rendering completion time of the Nth display cycle
  • the sleep duration calculation unit comprises:
  • the second sleep calculation unit is used for the secondary rendering thread to calculate the time difference between the timestamp of the N+Pth display cycle and the rendering completion time, and use the time difference as the second sleep duration.
  • the graphics processing thread includes a primary rendering thread; the time difference between the expected triggering moment of the primary rendering thread and the expected triggering moment of the secondary rendering thread is a preset interval duration;
  • the rendering completion time determination unit comprises:
  • a first rendering time determination unit used for the primary rendering thread to determine a first rendering completion time of an Nth display cycle
  • the sleep duration calculation unit comprises:
  • the first sleep calculation unit is used for the primary rendering thread to calculate a first sleep duration; the first sleep duration is determined according to the timestamp of the N+Pth display cycle, the preset interval duration, and the first rendering completion time; both sleep durations are sketched below.
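The two sleep durations can be sketched as follows, assuming the primary rendering thread is expected to trigger presetIntervalNs ahead of the secondary one; the subtraction convention is an assumption, since the embodiment only names the three inputs.

```cpp
// Sketch of the second and first sleep durations; illustrative only.
#include <cstdint>

// Secondary rendering thread: sleep straight to the sync timestamp of N+P.
int64_t secondSleepNs(int64_t tsOfCycleNPlusP, int64_t secondRenderDoneNs) {
    return tsOfCycleNPlusP - secondRenderDoneNs;
}

// Primary rendering thread: wake presetIntervalNs earlier, so its output is
// ready when the secondary rendering thread is triggered.
int64_t firstSleepNs(int64_t tsOfCycleNPlusP, int64_t presetIntervalNs,
                     int64_t firstRenderDoneNs) {
    return (tsOfCycleNPlusP - presetIntervalNs) - firstRenderDoneNs;
}
```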
  • the timestamp recording unit 201 includes:
  • a synchronization signal generating unit configured to generate the synchronization signal of each display period based on a preset display frame rate by a processing chip
  • a device node storage unit used for generating the timestamp for recording the synchronization signal based on the generation time of the synchronization signal, and storing the timestamp in the device node in the kernel layer;
  • the time variable storage unit is used for the service process in the application framework layer to read the timestamp of the synchronization signal and store it in the time variable corresponding to the service process; a user-space sketch follows.
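A hypothetical user-space counterpart of this path is sketched below: the service process reads the sync-signal timestamp that the kernel layer exposed through a device node and stores it in its time variable. The node path and the plain-text format are assumptions, not disclosed by the embodiment.

```cpp
// Hypothetical sketch: read the kernel-exposed sync-signal timestamp.
#include <cstdint>
#include <fstream>
#include <string>

int64_t g_vsyncTimestampNs = 0;  // the service process's time variable

bool refreshVsyncTimestamp(const std::string& nodePath) {
    std::ifstream node(nodePath);  // e.g. a sysfs node written by the driver
    int64_t ts = 0;
    if (node >> ts) {              // kernel wrote the timestamp as decimal text
        g_vsyncTimestampNs = ts;   // store into the time variable
        return true;
    }
    return false;
}
```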
  • the synchronization signal generating unit includes:
  • the first synchronization signal generating unit is used for the distributed processing unit (DPU) of the central processor to generate the synchronization signal at the display frame rate.
  • the synchronization signal generating unit includes:
  • the second synchronization signal generating unit is used for the display driver chip to send a feedback signal to the central processing unit CPU, so that the CPU generates the synchronization signal when receiving the feedback signal; the feedback signal is generated when the display driver chip refreshes a frame of the VR composite image.
  • the timestamp issuing unit 202 includes:
  • a runtime application reading unit used for a runtime application running in the application layer to read the timestamp recorded in the time variable; the runtime application communicates with the service process in the application framework layer through a preset interface;
  • a runtime application sending unit is used for the runtime application to send the timestamp to the multiple threads.
  • P is determined based on the ratio between the display period of the VR composite image and the processing period of the thread, as illustrated below.
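For instance, a thread running at 45 Hz on a 90 Hz display has a processing period equal to two display periods, so it calibrates its trigger moment P = 2 cycles ahead. A sketch of that derivation, with integer division of the two periods as an assumption:

```cpp
// Sketch of deriving P from the period ratio; illustrative only.
#include <cstdint>

int computeP(int64_t threadProcessingPeriodNs, int64_t displayPeriodNs) {
    const int64_t ratio = threadProcessingPeriodNs / displayPeriodNs;
    return ratio > 0 ? static_cast<int>(ratio) : 1;  // P is at least 1
}
```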
  • the display device provided in the embodiments of the present application can likewise generate a synchronization signal in each display cycle when the first operation is received, and record the timestamp at which the synchronization signal is generated; the timestamp of each display cycle is sent to the multiple threads used to generate the VR composite image, and each thread can determine its processing trigger moment for the next display cycle according to that timestamp. Since each thread synchronizes the processing trigger moment at which it processes image data in the next display cycle with the timestamp of the synchronization signal of that cycle, the processing trigger moments of the multiple threads are kept synchronized with one another; the threads are then triggered in turn, at their processing trigger moments and in the processing order corresponding to each thread, to process the image data and generate the VR composite image.
  • because the processing trigger moment of each thread can be synchronized with the synchronization signal of the next display cycle, the processing trigger moments of the multiple threads are mutually synchronized, so the multiple threads are controlled in an orderly way to cooperatively process the image data, which reduces the probability of frame loss, ensures the smoothness of the output picture, and thereby improves the user's immersion and viewing experience.
  • FIG21 is a schematic diagram of the structure of an electronic device provided by an embodiment of the present application.
  • the electronic device 21 of this embodiment includes: at least one processor 210 (only one processor is shown in FIG21), a memory 211, and a computer program 212 stored in the memory 211 and executable on the at least one processor 210; when the processor 210 executes the computer program 212, the steps in any of the above display method embodiments are implemented.
  • the electronic device 21 may be a computing device such as a desktop computer, a notebook, a PDA, and a cloud server.
  • the electronic device may include, but is not limited to, a processor 210 and a memory 211.
  • FIG. 21 is merely an example of the electronic device 21 and does not constitute a limitation on the electronic device 21.
  • the electronic device 21 may include more or fewer components than shown in the figure, may combine certain components or use different components, and may also include, for example, input and output devices, network access devices, and the like.
  • the processor 210 may be a central processing unit (CPU), or other general-purpose processors, digital signal processors (DSP), application-specific integrated circuits (ASIC), field-programmable gate arrays (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general-purpose processor may be a microprocessor or any conventional processor, etc.
  • the memory 211 may be an internal storage unit of the electronic device 21, such as a hard disk or memory of the electronic device 21. In other embodiments, the memory 211 may also be an external storage device of the electronic device 21, such as a plug-in hard disk, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash card (Flash Card), etc. equipped on the electronic device 21. Further, the memory 211 may also include both an internal storage unit of the electronic device 21 and an external storage device.
  • the memory 211 is used to store an operating system, an application program, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 211 may also be used to temporarily store data that has been output or is to be output.
  • those skilled in the art can clearly understand that, for convenience and simplicity of description, the division into the above functional units and modules is merely used as an example for illustration.
  • in practical applications, the above functions can be allocated to different functional units and modules as needed; that is, the internal structure of the device can be divided into different functional units or modules to complete all or part of the functions described above.
  • the functional units and modules in the embodiment can be integrated in a processing unit, or each unit can exist physically separately, or two or more units can be integrated in one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or in the form of software functional units.
  • An embodiment of the present application also provides an electronic device, which includes: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor, wherein the processor implements the steps of any of the above-mentioned method embodiments when executing the computer program.
  • An embodiment of the present application further provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in the above-mentioned method embodiments can be implemented.
  • An embodiment of the present application also provides a computer program product; when the computer program product runs on a mobile terminal, the mobile terminal is enabled to implement the steps in the above method embodiments.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • All or part of the processes in the methods of the above embodiments of the present application can be implemented by instructing the relevant hardware through a computer program.
  • The computer program can be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the above method embodiments can be implemented.
  • The computer program includes computer program code, which can be in source code form, object code form, an executable file, or some intermediate form.
  • the computer-readable medium may at least include: any entity or device capable of carrying the computer program code to the camera/electronic device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, and a software distribution medium, for example, a USB flash drive, a mobile hard disk, a magnetic disk, or an optical disk.
  • in some jurisdictions, according to legislation and patent practice, the computer-readable medium cannot be an electric carrier signal or a telecommunication signal.
  • in the embodiments provided in the present application, it should be understood that the disclosed device/network equipment and method can be implemented in other ways.
  • the device/network equipment embodiments described above are merely illustrative.
  • the division of the modules or units is only a logical function division; there may be other division methods in actual implementation, for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed.
  • in addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.

Abstract

The present application is applicable to the technical field of device control, and provides a display method and apparatus, an electronic device, and a storage medium. The method comprises: in response to a first operation, recording a timestamp of a synchronization signal generated in each display period; and respectively sending the timestamp in each display period to a plurality of threads, so that the threads calibrate the processing trigger moments thereof in the (N+P)th display period according to the timestamp in the Nth display period, wherein the processing trigger moments of the threads are synchronized with the timestamp of the synchronization signal in the (N+P)th display period, and N and P are positive integers greater than 0; and according to a preset processing order, sequentially triggering the threads to process image data at the processing trigger moments corresponding to the threads to generate a VR composite image. According to the technical solution provided by the present application, the probability of frame loss can be reduced, and the fluency of output pictures can be ensured, thereby increasing the viewing immersion of the user, and improving the viewing experience of the user.

Description

Display method, device, electronic device and storage medium
This application claims priority to the Chinese patent application filed with the State Intellectual Property Office on January 31, 2023, with application number 202310115473.5 and application name “A display method, device, electronic device and storage medium”, all contents of which are incorporated by reference in this application.
Technical Field
The present application belongs to the field of device control technology, and in particular, relates to a display method, device, electronic device and storage medium.
Background Art
Virtual reality (VR) technology, as one of the mainstream display technologies today, has an increasingly wide range of applications. Compared with traditional two-dimensional display technology, VR display technology is highly immersive. Therefore, how VR display devices can display smooth images directly affects the user's sense of immersion when viewing multimedia data through VR display devices. Video see through (VST), as one of the important branches of VR display technology, uses a camera module to capture the environmental image of the user's scene, and uses VR technology to overlay virtual content in the environmental image, allowing users to simultaneously view virtual content and the real environment consistent with reality, further enhancing the user's sense of immersion in the viewing process.
In order to realize the combination of the above-mentioned real scene and virtual content, multiple threads in the VR display device need to work together to process the image data and generate VR composite images. If a thread has a delay fluctuation when processing image data, it will cause all threads to be unable to synchronize the processing trigger time of the next frame of image data, which is prone to frame loss, resulting in the final output picture being unsmooth, reducing the user's immersion and affecting the user's viewing experience.
Summary of the Invention
The embodiments of the present application provide a display method, a display device, an electronic device, and a computer-readable storage medium, which can solve the problem in the existing display technology that, because a VR display device requires multiple threads to process collaboratively when displaying a VR composite image, frame loss is likely to occur when a delay fluctuation appears in one thread's processing of image data, so that the smoothness of the picture output is low.
In a first aspect, an embodiment of the present application provides a display method, which is applied to a virtual reality (VR) display device, wherein the VR display device includes multiple threads for generating VR composite images; the display method includes:
in response to a first operation, recording a timestamp of a synchronization signal generated in each display period;
sending the timestamps of the respective display cycles to a plurality of threads respectively, so that the threads calibrate the processing triggering moment of the threads in the N+Pth display cycle according to the timestamp of the Nth display cycle; the processing triggering moment of the threads is synchronized with the timestamp of the synchronization signal of the N+Pth display cycle; N and P are positive integers greater than 0;
according to a preset processing order, triggering the threads in sequence to process image data at the processing triggering moments corresponding to the threads, to generate a VR composite image.
The implementation of the embodiments of the present application has the following beneficial effects: when receiving the first operation, the VR display device generates a synchronization signal in each display cycle and records the timestamp of generating the synchronization signal; the timestamp recorded for each display cycle is sent to the multiple threads used to synthesize the VR composite image, and each thread can determine the processing trigger moment of the next display cycle according to the timestamp. Since the processing trigger moment at which each thread processes image data in the next display cycle is synchronized with the timestamp of the synchronization signal of the next display cycle, the processing trigger moments of the multiple threads are guaranteed to be synchronized with each other; then, according to the processing order corresponding to each thread, the threads are triggered in sequence at their processing trigger moments to process the image data, thereby generating a VR composite image. Compared with the existing display technology, by sending the timestamp of the synchronization signal to the multiple threads, the processing trigger moment of each thread can be synchronized with the synchronization signal of the next display cycle, and the processing trigger moments of the multiple threads are synchronized with each other, thereby realizing orderly control of the multiple threads to collaboratively process image data, reducing the probability of frame loss, ensuring the smoothness of the output picture, and thus improving the user's immersion and viewing experience.
In a possible implementation manner of the first aspect, the multiple threads include a camera thread, and the processing triggering moment of the camera thread is an exposure moment;
the sending the timestamps of the respective display cycles to a plurality of threads respectively, so that the threads calibrate the processing triggering moment of the threads in the N+Pth display cycle according to the timestamp of the Nth display cycle, includes:
the camera thread sends a first startup instruction to the camera framework in the hardware abstraction layer; the first startup instruction includes the timestamp;
in response to the first startup instruction, the camera framework sends a second startup instruction to the camera module;
the camera module feeds back the exposure parameters of the preview image to the camera framework, wherein the preview image is obtained by the camera module based on the second startup instruction, and the exposure parameters include the start exposure moment of the preview image;
the camera framework calculates the time deviation between the timestamp and the start exposure moment;
the exposure moment of the camera thread in the N+Pth display period is determined according to the timestamp of the Nth display period and the time deviation.
In a possible implementation manner of the first aspect, the camera module includes a main camera module and at least one slave camera module; the main camera module and the slave camera module complete hardware synchronization through the second startup instruction sent by the camera framework when starting.
In a possible implementation manner of the first aspect, the multiple threads include: a graphics processing thread;
the sending the timestamps of the respective display cycles to a plurality of threads respectively, so that the threads calibrate the processing triggering moment of the threads in the N+Pth display cycle according to the timestamp of the Nth display cycle, includes:
the graphics processing thread calculates a timestamp of the N+Pth display cycle; the timestamp of the N+Pth display cycle is determined according to the timestamp of the Nth display cycle and a processing frame rate of the graphics processing thread;
the graphics processing thread determines a processing completion time of the Nth display cycle; the processing completion time is the time corresponding to when the graphics processing thread completes processing of the image data in the Nth display cycle;
the graphics processing thread calculates a sleep duration; the sleep duration is calculated based on the processing completion time and the timestamp of the N+Pth display cycle;
the graphics processing thread is set to a sleep state, and the graphics processing thread is awakened when the duration of the sleep state reaches the sleep duration.
In a possible implementation manner of the first aspect, the graphics processing thread includes a secondary rendering thread;
the determining, by the graphics processing thread, the processing completion time of the Nth display cycle includes:
the secondary rendering thread determines a second rendering completion time of the Nth display cycle;
the calculating, by the graphics processing thread, the sleep duration includes:
the secondary rendering thread calculates the time difference between the timestamp of the N+Pth display cycle and the rendering completion time, and uses the time difference as a second sleep duration.
In a possible implementation manner of the first aspect, the graphics processing thread includes a primary rendering thread; the time difference between the expected triggering moment of the primary rendering thread and the expected triggering moment of the secondary rendering thread is a preset interval duration;
the determining, by the graphics processing thread, the processing completion time of the Nth display cycle includes:
the primary rendering thread determines a first rendering completion time of the Nth display cycle;
the calculating, by the graphics processing thread, the sleep duration includes:
the primary rendering thread calculates a first sleep duration; the first sleep duration is determined according to the timestamp of the N+Pth display cycle, the preset interval duration, and the first rendering completion time.
In a possible implementation manner of the first aspect, the responding to a display instruction and sequentially recording the timestamp of the synchronization signal generated in each display period includes:
the processing chip generates the synchronization signal of each display period based on a preset display frame rate;
based on the generation moment of the synchronization signal, the timestamp of generating the synchronization signal is recorded, and the timestamp is stored in a device node in the kernel layer;
the service process in the application framework layer reads the timestamp of the synchronization signal and stores it in a time variable corresponding to the service process.
In a possible implementation manner of the first aspect, the generating, by the processing chip, the synchronization signal of each display period based on the preset display frame rate includes:
the distributed processing unit (DPU) of the central processor generates the synchronization signal at the display frame rate.
In a possible implementation manner of the first aspect, the generating, by the processing chip, the synchronization signal of each display period based on the preset display frame rate includes:
the display driver chip sends a feedback signal to the central processing unit (CPU), so that the CPU generates the synchronization signal when receiving the feedback signal; the feedback signal is generated when the display driver chip refreshes a frame of the VR composite image.
In a possible implementation manner of the first aspect, the sending the timestamps of the respective display periods to the multiple threads respectively includes:
a runtime application running in the application layer reads the timestamp recorded in the time variable; the runtime application communicates with the service process in the application framework layer through a preset interface;
the runtime application sends the timestamp to the multiple threads.
In a possible implementation manner of the first aspect, P is determined based on a ratio between a display period of the VR composite image and a processing period of the thread.
In a second aspect, an embodiment of the present application provides a display device, including:
a timestamp recording unit, configured to record, in response to a first operation, a timestamp of a synchronization signal generated in each display period;
a timestamp issuing unit, configured to send the timestamp of each display cycle to multiple threads respectively, so that the threads calibrate the processing triggering moment of the threads in the N+Pth display cycle according to the timestamp of the Nth display cycle; the processing triggering moment of the threads is synchronized with the timestamp of the synchronization signal of the N+Pth display cycle; N and P are positive integers greater than 0; the multiple threads are used to generate VR composite images;
an image synthesis unit, configured to trigger, according to a preset processing order, the threads to process image data at the processing triggering moments corresponding to the threads in sequence, so as to generate a VR composite image.
In a possible implementation manner of the second aspect, the multiple threads include a camera thread; the processing trigger moment of the camera thread is an exposure moment; and the timestamp issuing unit includes:
a first startup instruction transmission unit, used for the camera thread to send a first startup instruction to the camera framework in the hardware abstraction layer; the first startup instruction includes the timestamp;
a second startup instruction transmission unit, configured to cause the camera framework to send a second startup instruction to the camera module in response to the first startup instruction;
an exposure parameter feedback unit, used for the camera module to feed back the exposure parameters of the preview image to the camera framework, wherein the preview image is obtained by the camera module based on the second startup instruction, and the exposure parameters include the start exposure moment of the preview image;
a time deviation calculation unit, used for the camera framework to calculate the time deviation between the timestamp and the start exposure moment;
an exposure time determination unit, configured to determine the exposure moment of the camera thread in the N+Pth display period according to the timestamp of the Nth display period and the time deviation.
In a possible implementation manner of the second aspect, the camera module includes a main camera module and at least one slave camera module; the main camera module and the slave camera module complete hardware synchronization through the second startup instruction sent by the camera framework when starting.
In a possible implementation manner of the second aspect, the multiple threads include: a graphics processing thread; and the timestamp issuing unit includes:
a timestamp calculation unit, used for the graphics processing thread to calculate the timestamp of the N+Pth display cycle; the timestamp of the N+Pth display cycle is determined according to the timestamp of the Nth display cycle and the processing frame rate of the graphics processing thread;
a rendering completion time determination unit, used for the graphics processing thread to determine the processing completion time of the Nth display cycle; the processing completion time is the time corresponding to the completion of the processing of the image data by the graphics processing thread in the Nth display cycle;
a sleep duration calculation unit, used for the graphics processing thread to calculate the sleep duration; the sleep duration is calculated based on the processing completion time and the timestamp of the N+Pth display cycle;
a sleep state triggering unit, configured to set the graphics processing thread to a sleep state and wake up the graphics processing thread when the duration of the sleep state reaches the sleep duration.
In a possible implementation manner of the second aspect, the graphics processing thread includes a secondary rendering thread;
the rendering completion time determination unit includes:
a second rendering time determination unit, used for the secondary rendering thread to determine the second rendering completion time of the Nth display cycle;
the sleep duration calculation unit includes:
a second sleep calculation unit, used for the secondary rendering thread to calculate the time difference between the timestamp of the N+Pth display cycle and the rendering completion time, and use the time difference as the second sleep duration.
In a possible implementation manner of the second aspect, the graphics processing thread includes a primary rendering thread; the time difference between the expected triggering moment of the primary rendering thread and the expected triggering moment of the secondary rendering thread is a preset interval duration;
the rendering completion time determination unit includes:
a first rendering time determination unit, used for the primary rendering thread to determine the first rendering completion time of the Nth display cycle;
the sleep duration calculation unit includes:
a first sleep calculation unit, used for the primary rendering thread to calculate a first sleep duration; the first sleep duration is determined according to the timestamp of the N+Pth display cycle, the preset interval duration, and the first rendering completion time.
In a possible implementation manner of the second aspect, the timestamp recording unit includes:
a synchronization signal generating unit, used for the processing chip to generate the synchronization signal of each display period based on a preset display frame rate;
a device node storage unit, configured to record, based on the generation moment of the synchronization signal, the timestamp of generating the synchronization signal, and store the timestamp in a device node in the kernel layer;
a time variable storage unit, used for the service process in the application framework layer to read the timestamp of the synchronization signal and store it in the time variable corresponding to the service process.
In a possible implementation manner of the second aspect, the synchronization signal generating unit includes:
a first synchronization signal generating unit, used for the distributed processing unit (DPU) of the central processor to generate the synchronization signal at the display frame rate.
In a possible implementation manner of the second aspect, the synchronization signal generating unit includes:
a second synchronization signal generating unit, used for the display driver chip to send a feedback signal to the central processing unit (CPU), so that the CPU generates the synchronization signal when receiving the feedback signal; the feedback signal is generated when the display driver chip refreshes a frame of the VR composite image.
In a possible implementation manner of the second aspect, the timestamp issuing unit includes:
a runtime application reading unit, used for a runtime application running in the application layer to read the timestamp recorded in the time variable; the runtime application communicates with the service process in the application framework layer through a preset interface;
a runtime application sending unit, used for the runtime application to send the timestamp to the multiple threads.
In a possible implementation manner of the second aspect, P is determined based on a ratio between a display period of the VR composite image and a processing period of the thread.
In a third aspect, an embodiment of the present application provides a display device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the display method described in any one of the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the display method described in any one of the first aspect is implemented.
In a fifth aspect, an embodiment of the present application provides a computer program product; when the computer program product runs on an electronic device, the electronic device executes the display method described in any one of the first aspect.
In a sixth aspect, an embodiment of the present application provides a chip system, including a processor, where the processor is coupled to a memory, and the processor executes a computer program stored in the memory to implement the display method described in any one of the first aspect.
It can be understood that, for the beneficial effects of the second to sixth aspects, reference may be made to the relevant description of the first aspect, and details are not repeated here.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG1 is a schematic diagram of an implementation of displaying a VR composite screen by an electronic device provided by an embodiment of the present application;
FIG2 is a schematic diagram of the structure of an electronic device provided in an embodiment of the present application;
FIG3 is a software structure block diagram of an electronic device according to an embodiment of the present application;
FIG4 is a schematic diagram of a screen based on the VST display technology provided by an embodiment of the present application;
FIG5 is a data flow diagram of a VR composite image based on the VST technology provided by an embodiment of the present application;
FIG6 is a data flow diagram of a VR composite image based on the VST technology provided by another embodiment of the present application;
FIG7 is a flowchart of an implementation of a display method provided in an embodiment of the present application;
FIG8 is a flowchart of a specific implementation of S701 provided in an embodiment of the present application;
FIG9 is a flowchart of a specific implementation of S701 under the software framework according to an embodiment of the present application;
FIG10 is a schematic diagram of a comparison before and after calibration of the processing trigger moment provided by an embodiment of the present application;
FIG11 is a flowchart of the implementation of the camera thread calibrating the processing trigger moment provided by an embodiment of the present application;
FIG12 is an interaction flowchart of the camera thread calibrating the processing moment provided by an embodiment of the present application;
FIG13 is a schematic diagram of the calibration of the exposure moment provided by an embodiment of the present application;
FIG14 is a flowchart of the implementation of the graphics processing thread calibrating the processing trigger moment provided by an embodiment of the present application;
FIG15 is a control timing diagram of graphics processing threads provided by an embodiment of the present application;
FIG16 is a timing diagram of different graphics processing threads occupying the GPU according to an embodiment of the present application;
FIG17 is a flowchart of an implementation of each graphics processing thread determining its triggering moment according to an embodiment of the present application;
FIG18 is a schematic diagram of a division of a processing order provided by an embodiment of the present application;
FIG19 is a timing diagram of various stages in the process of generating multi-frame VR composite images by an electronic device provided by an embodiment of the present application;
FIG20 is a structural block diagram of a display device provided in an embodiment of the present application;
FIG21 is a structural block diagram of an electronic device provided in an embodiment of the present application.
DETAILED DESCRIPTION
In the following description, specific details such as specific system structures and technologies are provided for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it should be clear to those skilled in the art that the present application may also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits, and methods are omitted to prevent unnecessary details from obstructing the description of the present application.
It should be understood that, when used in the present specification and the appended claims, the term “comprising” indicates the presence of the described features, wholes, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components and/or combinations thereof.
It should also be understood that the term “and/or” used in the specification and appended claims of the present application refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in the specification and appended claims of this application, the term “if” can be interpreted as “when” or “once” or “in response to determining” or “in response to detecting”, depending on the context. Similarly, the phrase “if it is determined” or “if [the described condition or event] is detected” can be interpreted as meaning “once it is determined” or “in response to determining” or “once [the described condition or event] is detected” or “in response to detecting [the described condition or event]”, depending on the context.
In addition, in the description of the specification and the appended claims of the present application, the terms “first”, “second”, “third”, etc. are only used to distinguish the descriptions, and cannot be understood as indicating or implying relative importance.
References to “one embodiment” or “some embodiments” described in the specification of this application mean that one or more embodiments of the present application include specific features, structures or characteristics described in conjunction with the embodiment. Therefore, the statements “in one embodiment”, “in some embodiments”, “in some other embodiments”, “in still other embodiments”, etc. that appear in different places in this specification do not necessarily refer to the same embodiment, but mean “one or more but not all embodiments”, unless otherwise specifically emphasized. The terms “including”, “comprising”, “having” and their variations all mean “including but not limited to”, unless otherwise specifically emphasized.
The display method provided in the embodiments of the present application can be applied to electronic devices such as mobile phones, tablet computers, augmented reality (AR)/virtual reality (VR) display devices, laptop computers, ultra-mobile personal computers (UMPC), netbooks, and personal digital assistants (PDA). In particular, the display method can be applied to an electronic device capable of VR display, or an electronic device externally connected to a VR display apparatus. The embodiments of the present application do not impose any restrictions on the specific type of the electronic device.
Exemplarily, FIG1 shows a schematic diagram of an implementation of displaying a VR composite screen by an electronic device provided in an embodiment of the present application.
As shown in (a) of FIG1, the electronic device can be a wearable VR display device. The VR display device has a built-in processor 11 and a camera module 12. The processor 11 includes a central processing unit (CPU), a graphics processing unit (GPU), and the like; the image data is processed and synthesized by the processor. The camera module 12 can be used to obtain an environmental image of the scene where the wearer (i.e., the user) is located, and the processor 11 can synthesize the environmental image with a virtual picture to generate a VR composite image based on the VST technology.
As shown in (b) of FIG1, the electronic device may be a smartphone, and the smartphone 13 may establish a communication connection with wearable glasses 14. The communication connection may be a wired communication connection or a wireless communication connection; for example, the smartphone 13 may be connected to the wearable glasses 14 via a serial port; if the wearable glasses 14 are equipped with a wireless communication module, such as a Bluetooth module or a WIFI module, the smartphone 13 may establish a communication connection with the wearable glasses 14 via the wireless communication module. The wearable glasses 14 may also be equipped with a camera module, which captures environmental images and feeds them back to the smartphone 13; the smartphone 13 may synthesize the environmental images with virtual pictures through its built-in processor, generate a VR composite image based on the VST technology, and feed it back to the wearable glasses, which output the VR composite image.
For example, the electronic device can be a station (ST) in a WLAN, a cellular phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, a personal digital assistant (PDA) device, a handheld device with wireless communication function, a computing device or other processing device connected to a wireless modem, a computer, a laptop computer, a handheld communication device, a handheld computing device, and/or other devices for communicating on a wireless system and a next-generation communication system, such as a mobile terminal in a 5G network or a mobile terminal in a future evolved public land mobile network (PLMN), etc.
图2示出了电子设备100的一种结构示意图。FIG. 2 shows a schematic structural diagram of the electronic device 100 .
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, antenna 1, antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It can be understood that the structure illustrated in this embodiment of the present invention does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown, combine some components, split some components, or use a different component arrangement. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, a data processing unit (DPU), and/or a neural-network processing unit (NPU). The different processing units may be independent devices or may be integrated into one or more processors.
The controller can generate operation control signals according to instruction operation codes and timing signals, to control instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The cache may store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use those instructions or data again, it can call them directly from the cache. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, and the like.
The I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may include multiple groups of I2C buses. The processor 110 may be separately coupled to the touch sensor 180K, a charger, a flash, the camera 193, and the like through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 communicates with the touch sensor 180K through the I2C bus interface to implement the touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may include multiple groups of I2S buses. The processor 110 may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the I2S interface, to implement the function of answering calls through a Bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing, and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit an audio signal to the wireless communication module 160 through the PCM interface, to implement the function of answering calls through a Bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus that converts the data to be transmitted between serial and parallel form. In some embodiments, the UART interface is usually used to connect the processor 110 and the wireless communication module 160. For example, the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the UART interface, to implement the function of playing music through a Bluetooth headset.
The MIPI interface may be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193. The MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), and the like. In some embodiments, the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the electronic device 100. The processor 110 and the display screen 194 communicate through a DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, the GPIO interface may be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, or the like.
The USB interface 130 is an interface that complies with the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, to transfer data between the electronic device 100 and peripheral devices, or to connect headphones and play audio through them. The interface may also be used to connect other electronic devices, such as AR devices.
It can be understood that the interface connection relationships between the modules illustrated in this embodiment of the present invention are merely illustrative and do not constitute a structural limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt interface connection methods different from those in the foregoing embodiment, or a combination of multiple interface connection methods.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger through the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. While charging the battery 142, the charging management module 140 may also supply power to the electronic device through the power management module 141.
The power management module 141 is configured to connect the battery 142, the charging management module 140, and the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, and battery health status (leakage, impedance). In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may also be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented through antenna 1, antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single communication frequency band or multiple communication frequency bands. Different antennas may also be reused to improve antenna utilization. For example, antenna 1 may be reused as a diversity antenna of a wireless local area network. In some other embodiments, an antenna may be used in combination with a tuning switch.
The mobile communication module 150 may provide solutions for wireless communication, including 2G/3G/4G/5G, applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 150 may receive electromagnetic waves through antenna 1, perform processing such as filtering and amplification on the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor and convert it into electromagnetic waves through antenna 1 for radiation. In some embodiments, at least some functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some functional modules of the mobile communication module 150 may be disposed in the same device as at least some modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used to modulate a low-frequency baseband signal to be sent into a medium-high frequency signal. The demodulator is used to demodulate a received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, and the like), or displays an image or video through the display screen 194. In some embodiments, the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 110 and be disposed in the same device as the mobile communication module 150 or other functional modules.
The wireless communication module 160 may provide solutions for wireless communication applied to the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite systems (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be sent from the processor 110, perform frequency modulation and amplification on it, and convert it into electromagnetic waves through antenna 2 for radiation.
In some embodiments, antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The electronic device 100 implements the display function through the GPU, the display screen 194, the application processor, and the like. The GPU is a microprocessor for image processing and connects the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information. It should be noted that the GPU can perform anomaly identification on the storage unit associated with the controlled display screen 194 through the display method provided in this embodiment. The GPU can transfer the image data to be displayed to a storage unit in the display screen 194 for storage and subsequent display. If the electronic device is a smartphone, the electronic device can be connected to external wearable glasses through a serial interface or a wireless communication interface; when in VR display mode, the display function is implemented through the wearable glasses.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1. The display screen 194 may include a touch panel and other input devices. The display screen 194 may be associated with one or more storage units, which are used to buffer the image data displayed on the display screen 194.
The electronic device 100 can implement the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when taking a photo, the shutter is opened and light is transmitted to the camera's photosensitive element through the lens. The light signal is converted into an electrical signal, and the camera's photosensitive element passes the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also apply algorithmic optimization to the noise, brightness, and skin tones of the image, and can optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be disposed in the camera 193.
The camera 193 is used to capture still images or videos. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and passes the electrical signal to the ISP, which converts it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
The digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs, so that it can play or record videos in multiple coding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example, the transfer mode between neurons in the human brain, it processes input information quickly and can also continuously self-learn. Applications such as intelligent cognition of the electronic device 100, for example, image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement data storage functions, for example, saving files such as music and videos on the external memory card. Likewise, the display method in the embodiments of the present application can manage the storage space in the external memory card.
The internal memory 121 may be used to store computer-executable program code, where the executable program code includes instructions. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, applications required for at least one function (such as a sound playback function and an image playback function), and so on. The data storage area may store data created during use of the electronic device 100 (such as audio data and a phone book). In addition, the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). The processor 110 executes the various functional applications and data processing of the electronic device 100 by running instructions stored in the internal memory 121 and/or instructions stored in a memory disposed in the processor.
The electronic device 100 can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and the like.
The audio module 170 is used to convert digital audio information into an analog audio signal output, and is also used to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert an audio electrical signal into a sound signal. The electronic device 100 can play music or carry hands-free calls through the speaker 170A. In particular, the speaker 170A may be used to output prompt information for notifying the user of the body part that needs to contact the electronic scale.
The receiver 170B, also called an "earpiece", is used to convert an audio electrical signal into a sound signal. When the electronic device 100 answers a call or a voice message, the voice can be heard by placing the receiver 170B close to the ear.
The microphone 170C, also called a "mouthpiece" or "mic", is used to convert a sound signal into an electrical signal. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into it. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the electronic device 100 may be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement directional recording functions, and so on.
The headset jack 170D is used to connect wired headphones. The headset jack 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense pressure signals and can convert a pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194; for example, the electronic device can obtain the user's weight through the pressure sensor 180A. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates of conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the intensity of the pressure based on the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation through the pressure sensor 180A. The electronic device 100 can also calculate the touch position based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations acting on the same touch position but with different touch intensities may correspond to different operation instructions. For example, when a touch operation with an intensity less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation with an intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed, as sketched below.
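As an illustration of the threshold comparison just described, the following is a minimal sketch assuming a normalized pressure value (such as the one Android's MotionEvent.getPressure() reports); the threshold value and action names are hypothetical, not values from this application.

```java
// Minimal sketch of intensity-dependent dispatch on the SMS icon.
// FIRST_PRESSURE_THRESHOLD and the action names are assumptions for illustration.
public class PressureDispatchSketch {
    private static final float FIRST_PRESSURE_THRESHOLD = 0.5f; // assumed normalized value

    /** Returns the operation instruction for a touch of the given intensity. */
    public static String dispatchSmsIconTouch(float pressure) {
        if (pressure < FIRST_PRESSURE_THRESHOLD) {
            return "VIEW_SHORT_MESSAGE";   // light press: view the short message
        }
        return "CREATE_SHORT_MESSAGE";     // firm press: create a new short message
    }

    public static void main(String[] args) {
        System.out.println(dispatchSmsIconTouch(0.2f)); // VIEW_SHORT_MESSAGE
        System.out.println(dispatchSmsIconTouch(0.8f)); // CREATE_SHORT_MESSAGE
    }
}
```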
The gyroscope sensor 180B may be used to determine the motion posture of the electronic device 100. In some embodiments, the angular velocities of the electronic device 100 around three axes (namely, the x, y, and z axes) may be determined through the gyroscope sensor 180B. The gyroscope sensor 180B can be used for image stabilization during shooting. For example, when the shutter is pressed, the gyroscope sensor 180B detects the angle at which the electronic device 100 shakes, calculates from that angle the distance the lens module needs to compensate, and lets the lens counteract the shaking of the electronic device 100 through reverse movement, thereby achieving stabilization. The gyroscope sensor 180B can also be used in navigation and motion-sensing game scenarios.
The barometric pressure sensor 180C is used to measure air pressure. In some embodiments, the electronic device 100 calculates the altitude from the air pressure value measured by the barometric pressure sensor 180C, to assist positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 can use the magnetic sensor 180D to detect the opening and closing of a flip leather case. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 can detect the opening and closing of the flip cover through the magnetic sensor 180D, and then set features such as automatic unlocking upon opening according to the detected open or closed state of the leather case or the flip cover.
The acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of the electronic device, and is applied to landscape/portrait switching, pedometers, and other applications.
The distance sensor 180F is used to measure distance. The electronic device 100 can measure distance by infrared or laser. In some embodiments, in a shooting scenario, the electronic device 100 can use the distance sensor 180F to measure distance for fast focusing.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 100 emits infrared light outward through the light-emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100; when insufficient reflected light is detected, the electronic device 100 can determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear during a call, so as to automatically turn off the screen and save power. The proximity light sensor 180G can also be used for automatic unlocking and screen locking in leather case mode and pocket mode.
The ambient light sensor 180L is used to sense ambient light brightness. The electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness. The ambient light sensor 180L can also be used to automatically adjust the white balance when taking photos, and can cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket, to prevent accidental touches.
The fingerprint sensor 180H is used to collect fingerprints. The electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, application lock access, fingerprint photography, fingerprint-based call answering, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing policy based on the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, in order to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to prevent the low temperature from causing the electronic device 100 to shut down abnormally. In still other embodiments, when the temperature is lower than yet another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
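The three-threshold policy above can be sketched as follows; all temperature values and helper method names are hypothetical assumptions, since the text does not give concrete numbers.

```java
// Illustrative sketch of the temperature processing policy described above.
// The thresholds and helper names are assumptions, not from this application.
public class ThermalPolicySketch {
    static final float HIGH_TEMP_C = 45.0f;      // assumed throttling threshold
    static final float LOW_TEMP_C = 0.0f;        // assumed battery-heating threshold
    static final float VERY_LOW_TEMP_C = -10.0f; // assumed voltage-boost threshold

    void onTemperatureReported(float celsius) {
        if (celsius > HIGH_TEMP_C) {
            throttleNearbyProcessor();    // reduce performance for thermal protection
        } else if (celsius < VERY_LOW_TEMP_C) {
            boostBatteryOutputVoltage();  // avoid abnormal shutdown at very low temperature
        } else if (celsius < LOW_TEMP_C) {
            heatBattery();                // keep the battery warm enough to operate
        }
    }

    void throttleNearbyProcessor() { /* platform-specific */ }
    void heatBattery() { /* platform-specific */ }
    void boostBatteryOutputVoltage() { /* platform-specific */ }
}
```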
The touch sensor 180K is also called a "touch device". The touch sensor 180K may be disposed on the display screen 194; the touch sensor 180K and the display screen 194 form a touchscreen, also called a "touch screen". The touch sensor 180K is used to detect touch operations acting on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation can be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a position different from that of the display screen 194.
The bone conduction sensor 180M can acquire vibration signals. In some embodiments, the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone of the human vocal part. The bone conduction sensor 180M can also contact the human pulse and receive the blood pressure beat signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset, forming a bone conduction headset. The audio module 170 can parse out a voice signal based on the vibration signal of the vocal-part vibrating bone acquired by the bone conduction sensor 180M, to implement a voice function. The application processor can parse heart rate information based on the blood pressure beat signal acquired by the bone conduction sensor 180M, to implement a heart rate detection function.
The buttons 190 include a power button, volume buttons, and the like. The buttons 190 may be mechanical buttons or touch buttons. The electronic device 100 can receive button input and generate key signal input related to user settings and function control of the electronic device 100.
The motor 191 can generate vibration prompts. The motor 191 can be used for incoming-call vibration prompts as well as touch vibration feedback. For example, touch operations acting on different applications (such as photographing and audio playback) may correspond to different vibration feedback effects. Touch operations acting on different areas of the display screen 194 may also correspond to different vibration feedback effects of the motor 191. Different application scenarios (for example, time reminders, receiving messages, alarm clocks, games, and so on) may also correspond to different vibration feedback effects. The touch vibration feedback effect can also be customized.
The indicator 192 may be an indicator light, which may be used to indicate the charging status and changes in battery level, and may also be used to indicate messages, missed calls, notifications, and so on.
The SIM card interface 195 is used to connect a SIM card. A SIM card can be brought into contact with or separated from the electronic device 100 by being inserted into or pulled out of the SIM card interface 195. The electronic device 100 can support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 can support a Nano SIM card, a Micro SIM card, a SIM card, and so on. Multiple cards can be inserted into the same SIM card interface 195 at the same time; the types of the multiple cards may be the same or different. The SIM card interface 195 is also compatible with different types of SIM cards, and with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 uses an eSIM, namely an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
The software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiments of the present invention take the Android system with a layered architecture as an example to describe the software structure of the electronic device 100.
FIG. 3 is a block diagram of a software structure of the electronic device according to an embodiment of the present application.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, the application layer, the application framework layer, the Android runtime and system layer, and the kernel layer.
The application layer may include a series of application packages.
As shown in FIG. 3, the application packages may include applications such as Camera, Calendar, Maps, WLAN, Bluetooth, Music, Video, Messages, Email, WeChat, and WPS.
The application framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer. The application framework layer includes some predefined functions.
As shown in FIG. 3, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the display screen size, determine whether there is a status bar, lock the screen, capture the screen, and so on.
The content provider is used to store and retrieve data and make the data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, and so on.
The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system can be used to build applications. A display interface may consist of one or more views. For example, a display interface including a short message notification icon may include a view for displaying text and a view for displaying pictures.
The telephony manager is used to provide the communication functions of the electronic device, for example, management of call states (including connected, hung up, and so on).
The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files.
The notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages, which can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify download completion, message reminders, and so on. The notification manager may also present notifications in the status bar at the top of the system in the form of a chart or scroll-bar text, such as notifications from applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt sound is emitted, the electronic device vibrates, or an indicator light flashes.
The Android runtime includes core libraries and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core libraries consist of two parts: one part is the functions that the Java language needs to call, and the other part is the core libraries of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system layer may include multiple functional modules, for example, a surface manager, media libraries, a three-dimensional graphics processing library (for example, OpenGL ES), and a 2D graphics engine (for example, SGL).
The surface manager is used to manage the display subsystem and provides fusion of 2D and 3D layers for multiple applications.
The media libraries support playback and recording of a variety of commonly used audio and video formats, as well as static image files. The media libraries can support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphics processing library is used to implement three-dimensional graphics drawing, image rendering, composition, layer processing, and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is the layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
The following exemplarily describes the workflow of the software and hardware of the electronic device 100 in conjunction with a photo capture scenario.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the timestamp of the touch operation). The raw input event is stored in the kernel layer. The application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking an example in which the touch operation is a single-tap operation and the control corresponding to the single tap is the camera application icon, the camera application calls an interface of the application framework layer to start the camera application, then starts the camera driver by calling the kernel layer, and captures still images or videos through the camera 193.
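As a rough illustration of this event flow, the following is a simplified sketch; the class and method names are hypothetical, and the real Android input pipeline is considerably more involved than shown here.

```java
// Simplified sketch of the flow described above: kernel-produced raw input
// event -> framework-layer hit testing -> launching the camera application.
public class InputDispatchSketch {
    static class RawInputEvent {
        final int x, y;          // touch coordinates recorded by the kernel layer
        final long timestampNs;  // timestamp attached to the touch operation
        RawInputEvent(int x, int y, long t) { this.x = x; this.y = y; this.timestampNs = t; }
    }

    /** Framework layer: resolve which control the event hits and react. */
    void onRawInputEvent(RawInputEvent ev) {
        String control = hitTest(ev.x, ev.y);   // identify the control under the touch
        if ("camera_app_icon".equals(control)) {
            startCameraApp();                   // would start the camera app and driver
        }
    }

    String hitTest(int x, int y) { return "camera_app_icon"; } // placeholder hit testing
    void startCameraApp() { System.out.println("camera started"); }

    public static void main(String[] args) {
        new InputDispatchSketch().onRawInputEvent(
                new RawInputEvent(120, 340, System.nanoTime()));
    }
}
```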
With the continuous development of display technology, VR display technology has become one of today's mainstream display technologies. Traditional VR display technology provides users with an immersive viewing experience and has been applied in many fields such as games and movies. However, because traditional VR display technology builds one or more virtual graphics through the GPU in the processor and generates a purely fictitious virtual picture by combining the virtual graphics, its realism is low, which reduces the user's sense of immersion. To further improve the user's viewing experience and enhance immersion, VST technology emerged. Compared with the purely virtual pictures output by traditional VR display technology, VST display technology can collect a real environment image of the user's scene through the camera module configured on the VR display device, and superimpose a synthesized virtual picture on top of the real environment image, obtaining a VR composite image based on VST technology. Since the background in the VR composite image is usually generated from the real environment image, the realism of the picture is improved, which in turn enhances the user's immersion while watching the VR composite picture.
Exemplarily, FIG. 4 shows a schematic diagram of a picture based on VST display technology provided by an embodiment of the present application. As shown in (a) of FIG. 4, the user wears a VR display device, which may be the VR display device shown in (a) of FIG. 1. The VR display device has a built-in camera module, through which an environment image within the user's field of vision can be obtained; for example, objects such as a television and a clock can be captured, and the VR display device can composite the virtual image to be synthesized into the environment image, as shown in the picture in (b) of FIG. 4. Comparing (a) of FIG. 4 with (b) of FIG. 4, it can be seen that in the real scene the television is powered off, that is, it is not outputting any picture content. When the VR display device performs image composition and generates the corresponding VR composite image, a specified video image frame can be added to the region of the environment image where the television is located, combining the virtual picture with the real picture, improving the immersion of the output picture, and thus improving the user's viewing experience.
However, although VST display technology improves the realism of VR composite images and thus the user's immersion, it also introduces a new challenge for VR display devices: the amount of data to be processed is large, and the processing pipeline is long. Since the VR composite image combines the virtual with the real, the camera thread needs to control the camera module to capture pictures of the real scene, which involves processes such as image exposure and image transfer; combining the virtual picture requires the rendering thread to parse and compose the virtual picture; and the environment image must also be overlaid and rendered with the virtual picture. That is, multiple threads must cooperate. During image data processing on a VR display device, different threads have different latencies, and those latencies fluctuate with the device's real-time processing capability. For example, when the display device performs high-precision video decoding, that task occupies a large share of the processing resources, leaving other threads with less, which reduces their processing rate and increases their processing latency. When the processing latencies of different threads are inconsistent and cannot be synchronized, some frames of the VR composite image are lost, and the predicted poses of some display objects in the picture become inaccurate; for example, the display position of part of the virtual picture may be misaligned with its actual display position (such as a video picture displayed on the television extending beyond the television's frame). This reduces the realism of the VR composite image and makes the picture unsmooth, causing dizziness for the viewing user and degrading the viewing experience.
Embodiment 1:
Therefore, to solve the foregoing problems of the display technology, the present application provides a display method. The execution subject of the display method is an electronic device, including but not limited to a smartphone, a tablet computer, a desktop computer, a laptop computer, a VR display device, or any other electronic device capable of compositing VST-based VR composite images. The electronic device may have a built-in VR display module, or may be externally connected to a wearable VR display device, through which the above VR composite image is output. In the process of generating a VR composite image, the VR display device needs to invoke multiple different threads to process image data. Through this display method, the processing trigger times of the multiple threads can be synchronized, achieving orderly control over the multiple threads. This reduces the probability of frame loss, improves the smoothness of the output picture, makes the predicted pose fit the actual picture better, improves the realism of the VR composite image, reduces the user's dizziness while watching, and enhances the viewing experience.
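Before walking through the concrete data flow, the following is a minimal sketch, assuming a fixed set of cooperating worker threads, of the general idea of triggering multiple threads from a shared per-frame timestamp. It illustrates the concept only; the class and field names are hypothetical, and this is not the concrete algorithm claimed by this application.

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

// Conceptual sketch: all pipeline threads (camera, rendering, composition, ...)
// block at a common barrier, and a shared timestamp is taken when the barrier
// trips, so every thread starts the next frame from the same trigger time.
public class FrameSyncSketch {
    private volatile long frameTimestampNs;  // shared trigger timestamp for the frame
    private final CyclicBarrier barrier;

    public FrameSyncSketch(int threadCount) {
        this.barrier = new CyclicBarrier(threadCount,
                () -> frameTimestampNs = System.nanoTime()); // stamp the new frame once
    }

    /** Each worker thread calls this once per frame before processing image data. */
    public long awaitNextFrame() throws InterruptedException, BrokenBarrierException {
        barrier.await();           // wait until every thread is ready for the next frame
        return frameTimestampNs;   // process this frame relative to the shared timestamp
    }
}
```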
In some embodiments, FIG. 5 shows a data flow diagram of a VST-based VR composite image provided by an embodiment of the present application. As shown in FIG. 5, the process in which the electronic device generates the VR composite picture specifically includes the following stages:
Stage 1: image exposure stage
Since an environment image of the scene where the user is located needs to be obtained, the image must be captured through the camera module. Capturing the environment image through the camera module involves exposure (for example, 11 milliseconds (ms)), image data readout (8.5 ms), and the image signal processing (ISP) stage. The image signal processing stage can be divided into a processing stage through the image front end (IFE) processor (for example, 0.5 ms) and a processing stage through the image processing engine (IPE) (for example, 8.5 ms). All of these stages must be completed on the camera module side before the output is delivered to the CPU, so there is a certain processing latency.
Stage 2: image transfer stage
The image exposure stage above is completed on the camera module side. When the camera module completes the shooting operation, the processed image data needs to be transferred to the relevant threads at the system layer for processing. Since the camera module is at the kernel layer, the data transferred to the relevant threads at the system layer must pass through the hardware abstraction layer for data conversion and then be handed to the GPU for processing, which consumes a certain transfer time, for example, 6.5 ms.
Stage 3: first-pass rendering stage and video parsing stage
When the electronic device generates a VR composite image, the task of rendering virtual objects, such as a virtual keyboard or a virtual cartoon character, is completed by a rendering application at the application layer; this requires a first-pass rendering stage. The first-pass rendering stage can be executed in parallel with the image exposure stage above; that is, while the image is being exposed and transferred, the electronic device can perform the first-pass rendering, which also introduces a certain processing latency.
Similarly, if already-generated video data needs to be inserted into the picture, the video decoder in the application framework layer must parse the video data frame by frame, which also takes a certain processing time.
Stage 4: virtual picture fusion
Since there are virtual objects generated in the first-pass rendering stage and video image frames obtained in the video parsing stage, the virtual objects need to be fused with the video image frames. This virtual image data fusion also introduces a processing duration, for example, 0.6 ms.
Stage 5: second-pass rendering stage
This stage completes the fusion of the virtual and the real, that is, fusing the environment image with the virtual picture. It involves an image frame selection stage (for example, 10.2 ms) and a rendering stage (for example, 21.3 ms), thereby generating the VR composite image. The latency of this stage is relatively long; in some implementation scenarios, the average latency of the second-pass rendering stage can reach 21.3 ms.
Stage 6: composite image transfer stage
The VR composite image needs to be output to the kernel layer for image display, which also requires a certain transfer duration, including steps such as layer composition and transfer.
Stage 7: image display stage
When the display screen obtains the VR composite image, it needs to scan it row by row or column by column to output the image.
In order to further understand the generation process of a VST-based VR composite image and determine the correspondence between the modules in the electronic device and the above stages, FIG. 6 shows a data flow diagram of a VST-based VR composite image provided by another embodiment of the present application. With reference to FIG. 5 and FIG. 6, the relationship between the above stages and the corresponding modules can be as follows:
In the image exposure stage, the original environment image is collected through the camera module and initially processed through the IFE. This processing is specifically used to distinguish, in the original environment image, the gaze region on which the user's line of sight is focused from the background region on which the user is not focused; the background region can be blurred, while the gaze region undergoes image enhancement and other related processing. The initially processed image data is then transmitted to the IPE for secondary processing, e.g., hardware noise reduction, image cropping, color processing, and detail enhancement, to generate secondarily processed image data.
In the image transmission stage, since the above IFE and IPE are at the kernel layer, the image data generated by the camera module needs to be transmitted through the hardware abstraction layer to the relevant threads of the runtime application in the application layer for subsequent processing. The framework in the hardware abstraction layer that completes the data transmission from the camera module to the upper layers is the camera framework CAM X. CAM X can interface with the camera module at the kernel layer to receive the relevant image data sent by the camera module toward the system layer.
In the video decoding stage, the decoder in the application framework layer can decode the required video data, and the decoded video image frames can be forwarded to the runtime application in the application layer for subsequent secondary rendering.
In the primary rendering stage, when the rendering application at the application layer needs to render a virtual object, it can render the virtual object through a rendering framework (e.g., XR Plugin) in the application framework layer and transmit the rendered virtual object to the runtime application in the application layer.
In the virtual picture fusion stage, the runtime application in the application layer can call related threads to perform image fusion on the virtual objects and the video image frames to obtain virtual image data.
In the secondary rendering stage, the runtime application in the application layer can call related threads to fuse the environment image data with the virtual image data. Secondary rendering needs to be completed by calling the GPU at the kernel layer. In order to transmit the relevant image data obtained in the runtime application to the GPU, the image data can be forwarded through an interface provided by the three-dimensional graphics processing library in the system layer, and the GPU feeds the secondarily rendered VR composite image back to the runtime application.
In the composite image transmission stage, the runtime application sends the synthesized VR composite image for display.
In the image display stage, the corresponding VR composite images are output separately through the binocular screens of the VR display.
It can be seen that when generating a VST-based VR composite image, the process from generating the image data to the final VR composite image goes through multiple stages; the different stages work cooperatively and their delays differ. If the stages are not controlled in an orderly manner through time synchronization, frame loss is likely to occur, affecting the smoothness of the output picture. Based on this, the electronic device can generate a synchronization signal through the processor and send it to the relevant threads in the above stages, and synchronously calibrate the processing trigger moments of the threads in each stage, so as to ensure that the processing trigger moment of each stage is aligned with the synchronization signal, achieving the purpose of controlling the threads in an orderly manner and reducing the probability of frame loss.
The following specifically describes the implementation process of the display method provided by the embodiments of the present application. FIG. 7 shows an implementation flowchart of the display method provided by an embodiment of the present application. When a user needs to play a VR picture, a display instruction can be initiated on the electronic device. When the electronic device receives the display instruction initiated by the user, it can execute the process of generating a VR composite image, i.e., execute steps S701 to S703 shown in FIG. 7. The specific implementation process is detailed as follows:
In S701, in response to a first operation, a timestamp of the synchronization signal generated in each display period is recorded.
In this embodiment, when the user needs to play a VR picture, the user can perform a first operation on the electronic device, where the first operation is used to indicate that the user needs to view the VR picture. When the electronic device detects the user's first operation, it can execute the process of generating a VR composite image.
In some embodiments, if the electronic device is a wearable VR display device, a start button may be provided on the VR display device, and the first operation may be the user touching (e.g., clicking, double-clicking, or long-pressing) the start button.
In some embodiments, if the electronic device is a smartphone, the smartphone can be externally connected to a pair of wearable glasses, through which the VR composite images are output. The user can start a related VR application on the smartphone, and a start control can be configured in the VR application. The first operation may be the user touching (e.g., clicking, double-clicking, or long-pressing) the start control displayed on the smartphone screen.
In this embodiment, when the electronic device detects the user's first operation, the electronic device can generate the synchronization signal based on a preset display period. For example, the processor of the electronic device may include a controller; the controller generates operation control signals based on instruction opcodes and timing signals to control instruction fetching and execution. Therefore, based on the display period, the synchronization signal can be generated by the controller in the processor. The electronic device can also record the timestamp of the synchronization signal generated in each display period. The timestamp of the synchronization signal can be determined based on the system time of the electronic device.
FIG. 8 shows a specific implementation flowchart of S701 provided by an embodiment of the present application. Referring to FIG. 8, S701 specifically includes S7011 to S7013.
In order to further determine the relationship between steps S7011 to S7013 and the layers in the software framework of the electronic device, FIG. 9 shows a specific implementation flowchart of S701 based on the software framework provided by an embodiment of the present application.
With reference to FIG. 8 and FIG. 9, the specific implementation process of S7011 to S7013 is as follows:
In S7011, the processing chip generates the synchronization signal of each display period based on a preset display frame rate.
In this embodiment, the electronic device can generate the synchronization signal through one or more built-in processing chips. A controller can be configured in the processing chip. Based on the preset display frame rate, the processing chip can calculate the corresponding period interval duration; for example, if the display frame rate is F, the interval duration of the display period is 1/F, and the synchronization signal is generated according to this interval duration.
The above processing chip may be a DPU or a display driver chip. Depending on the type of processing chip that generates the synchronization signal, the generation manner may also differ, and may specifically include the following two manners:
As in Manner 1 in FIG. 9, if a DPU is configured in the electronic device, the synchronization signal can be generated periodically by a hardware module inside the DPU, i.e., the electronic device can generate the synchronization signal at the preset display frame rate through the DPU in the CPU.
As in Manner 2 in FIG. 9, the generation of the synchronization signal is kept consistent with the display frame rate of the VR composite image, so that the multiple threads used to generate the VR composite image can determine the processing trigger moment of the next display period according to the synchronization signal. Based on this, when the display driver chip completes the output of one frame of the VR composite image, it can generate a feedback signal and send it to the CPU; when the CPU receives the feedback signal, it can generate a synchronization signal. The generation timing of the synchronization signal is thus the refresh timing of one frame of the VR composite image, and synchronizing these two timings ensures that the generation period of the synchronization signal is consistent with the display frame rate, so that subsequent threads can calibrate their processing trigger moments according to the timestamp of the synchronization signal.
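As an illustration of Manner 1, the following is a minimal sketch of a timer-driven synchronization signal generator that emits one signal per display period (period = 1/F) and records its timestamp. The loop structure, clock choice, and printed output are illustrative assumptions, not the chip's actual firmware interface:

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

// Hypothetical sketch: emit one synchronization signal per display period
// (period = 1/F for a display frame rate F) and record its timestamp.
int main() {
    const double F = 60.0;                              // preset display frame rate
    const auto period = std::chrono::duration_cast<std::chrono::steady_clock::duration>(
        std::chrono::duration<double>(1.0 / F));        // display period = 1/F
    auto next = std::chrono::steady_clock::now();
    for (int n = 0; n < 5; ++n) {                       // a few periods for illustration
        next += period;
        std::this_thread::sleep_until(next);            // wait for the next period boundary
        auto ts = std::chrono::steady_clock::now().time_since_epoch();
        // Record the timestamp of this period's synchronization signal.
        std::printf("vsync %d at %lld ns\n", n,
                    (long long)std::chrono::duration_cast<std::chrono::nanoseconds>(ts).count());
    }
}
```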
In S7012, based on the generation moment of the synchronization signal, the timestamp of generating the synchronization signal is recorded, and the timestamp is stored in a device node in the kernel layer.
In this embodiment, when the processing chip generates the synchronization signal, the system time corresponding to the generation of the synchronization signal is recorded at the same time, and a timestamp corresponding to the synchronization signal is created according to the system time. The driver of the processing chip in the kernel layer records the timestamp in the device node.
In this embodiment, the device node in the kernel layer can be used to record information related to different signals in the device system, including the timestamp of the synchronization signal.
In some implementations, the timestamp of each display period is recorded in the same file in the device node of the kernel layer, i.e., when the processing chip generates the corresponding synchronization signal in the next display period, the timestamp of that synchronization signal overwrites the timestamp of the synchronization signal generated in the previous display period, so that each time a subsequent interface reads the file corresponding to the timestamp, the timestamp obtained is the most recently recorded one. Exemplarily, the path of the file storing the timestamp can be: /sys/class/graphics/fb0/vsync_timestamp.
In S7013, the service process in the application framework layer reads the timestamp of the synchronization signal and stores it in a time variable corresponding to the service process.
In this embodiment, a service process is configured in the application framework layer, and the service process can read the file in the device node in the kernel layer at intervals through a polling mechanism. Since the above timestamp is stored in the device node, when the service process reads the file content in the device node based on the polling mechanism, it reads the file where the timestamp is located and stores the timestamp in the corresponding time variable.
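A minimal sketch of such a polling reader is given below, using the file path given above; the 1 ms polling interval, the variable name vsync_timestamp, and the assumption that the file contains a single decimal number are illustrative assumptions:

```cpp
#include <chrono>
#include <fstream>
#include <string>
#include <thread>

// Hypothetical polling loop: periodically read the latest vsync timestamp
// from the kernel device node and cache it in a time variable.
int main() {
    long long vsync_timestamp = 0;  // the service process's time variable
    for (;;) {
        std::ifstream node("/sys/class/graphics/fb0/vsync_timestamp");
        std::string line;
        if (node && std::getline(node, line) && !line.empty()) {
            vsync_timestamp = std::stoll(line);  // latest timestamp overwrites the old one
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(1));  // assumed polling interval
    }
}
```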
In the embodiments of the present application, the processing chip can generate the synchronization signal based on the preset display frame rate and record it in the device node in the kernel layer, and then the service process in the application framework layer is called to read the timestamp and store it in the time variable, which realizes the recording and reading of the timestamp. Since the timestamp is recorded at the hardware abstraction layer, it is convenient for software layers such as the system layer and the subsequent application framework layer to read and use it, reducing delay.
In S702, the timestamp of each display period is respectively sent to multiple threads, so that each thread calibrates its processing trigger moment in the (N+P)-th display period according to the timestamp of the N-th display period; the processing trigger moment of the thread is synchronized with the timestamp of the synchronization signal of the (N+P)-th display period; N and P are positive integers greater than 0; and the multiple threads are used to generate the VR composite image.
In this embodiment, when the electronic device obtains the timestamp of the synchronization signal generated in each display period, it can send the timestamp to each thread, so that each thread can subsequently calibrate its processing trigger moment according to the timestamp, facilitating orderly processing of the image data.
In some implementations, the processing period of a thread is not necessarily synchronized with the display period of the VR composite image, i.e., some threads can reuse the image data generated in the previous display period to generate the VR composite image. For example, for a static virtual object, its shape and pose may not change; in this case, the processing frame rate of the rendering thread for the virtual object can be reduced, i.e., the processing period corresponding to the rendering thread is extended, and the duration of one processing period is the duration corresponding to P display periods. Therefore, after the thread processes the image data in the N-th display period, the next processing trigger moment is the (N+P)-th display period.
Further, continuing with the embodiments shown in FIG. 8 and FIG. 9, if S701 specifically adopts the manner of S7011 to S7013 to store the timestamp of each display period, i.e., the timestamp is stored in the time variable of the service process in the hardware abstraction layer, then the above S702 may specifically include S7021 to S7022. The specific implementation process is as follows:
In S7021, the runtime application running at the application layer reads the timestamp recorded in the time variable; the runtime application communicates with the service process in the application framework layer through a preset interface.
In S7022, the runtime application sends the timestamp to the multiple threads.
In this embodiment, a runtime application (i.e., Runtime APP) may run at the application layer of the electronic device, and the application layer configures a process to run the runtime application. The service process in the application framework layer allocates a preset interface to the runtime application, through which the runtime application can read the timestamp from the service process.
In some implementations, the runtime application is configured in an event-triggered mode. When the processing chip generates the synchronization signal, in addition to recording the corresponding timestamp, it can also generate a trigger instruction for triggering the runtime application. When the runtime application detects the trigger instruction, it can perform the operation of S7021, i.e., read the value of the time variable from the service process in the application framework layer through the preset interface to determine the timestamp corresponding to the synchronization signal of the current display period.
In this embodiment, after reading the timestamp, the runtime application can send it to the multiple threads so that the threads perform subsequent operations according to the timestamp. It should be noted that the runtime application may send the timestamp to each thread directly, or may send it to a designated thread which forwards the timestamp to the other threads; this is not limited here.
Exemplarily, the runtime application can send the timestamp to the secondary rendering thread, and the primary rendering thread and the video parsing thread can obtain the timestamp through the secondary rendering thread. Of course, the runtime application can also send the timestamp separately to the secondary rendering thread, the primary rendering thread, and the video parsing thread, which can be set according to the actual situation.
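The embodiments leave the delivery mechanism open (direct sending or forwarding through a designated thread). The following is one minimal sketch using a shared atomic variable that the runtime application publishes to and the worker threads read from; all names (g_vsync_ts, on_vsync, latest_vsync) are illustrative assumptions:

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical distribution sketch: the runtime application publishes the
// latest vsync timestamp; each worker thread reads it when calibrating.
std::atomic<int64_t> g_vsync_ts{0};

// Called by the runtime application when it reads a new timestamp (S7021/S7022).
void on_vsync(int64_t ts) { g_vsync_ts.store(ts, std::memory_order_release); }

// Called by a rendering/decoding thread to fetch the timestamp it calibrates against.
int64_t latest_vsync() { return g_vsync_ts.load(std::memory_order_acquire); }
```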
In the embodiments of the present application, acquiring the timestamp and delivering it through the runtime application makes it possible to configure, at the application layer, a process dedicated to managing the multiple threads that generate the VR composite image, which can improve the accuracy of timing control and thereby improve the smoothness of subsequent VR picture generation.
In this embodiment, when a thread receives the timestamp, the thread can correct its processing trigger moment of the next display period through the timestamp, thereby aligning the processing trigger moment with the generation moment of the synchronization signal of the next display period (i.e., the timestamp of the next display period). Since a certain transmission time is needed after the synchronization signal is generated at the kernel layer before it reaches each thread at the application layer, when the synchronization signal arrives at a thread, there is actually already a certain deviation from the generation moment, i.e., the arrival is not aligned with the generation moment of the synchronization signal. Moreover, the transmission time is uncontrollable and can be long or short, which also causes the processing trigger moments of different display periods to fall at arbitrary moments within the display period, making the actual processing duration available in each display period inconsistent.
Exemplarily, FIG. 10 shows a comparison diagram before and after calibrating the processing trigger moment provided by an embodiment of the present application. The thread here is specifically a camera thread, which is used to control the camera module to capture images. The electronic device cyclically outputs 4 different frames of images, i.e., image 1 to image 4, within one display period. Before the processing trigger moment of the camera thread is calibrated, the camera thread controls the camera module to perform the shooting action at the moment the synchronization signal is transmitted to the camera thread, and the synchronization signal can arrive at the camera thread at any moment within the display period; therefore, when the exposure moment is not aligned with the synchronization signal, the captured image can be any one of image 1 to image 4.
If the processing trigger moment of the camera thread is synchronized with the generation moment of the synchronization signal, then since the synchronization signal is generated at a fixed moment in each display period, e.g., at the initial moment of the display period, the processing trigger moment is synchronously aligned with the initial moment of each display period. In this case, the image collected by the camera module is the frame output at the initial moment of the display period, i.e., image 1.
Moreover, from the intervals between processing trigger moments before and after calibration, it can be determined that the intervals between different processing trigger moments before calibration are random, while the intervals between processing trigger moments after calibration are fixed, which ensures that the processing duration available to the thread in each display period after calibration is consistent, avoiding the inability to complete the related image processing tasks due to an excessively short processing duration.
Different types of threads also differ in the manner of correcting the processing trigger moment. Specifically, the following cases may be included:
Case 1: For the camera thread
FIG. 11 shows an implementation flowchart of the camera thread calibrating the processing trigger moment provided by an embodiment of the present application. Referring to FIG. 11, the process by which the camera thread calibrates the processing trigger moment (for the camera thread, the processing trigger moment is the exposure moment at which the camera module is controlled to capture an image) specifically includes S1101 to S1105.
For ease of understanding and to determine the interaction flow between the camera thread and the camera module, FIG. 12 shows an interaction flowchart of the camera thread calibrating the processing moment provided by an embodiment of the present application. The timestamp of the camera thread is sent by the runtime application; when the runtime application detects the synchronization signal, it obtains the timestamp from the time variable in the application framework layer.
With reference to the contents of FIG. 11 and FIG. 12, the specific implementation process of S1101 to S1105 is as follows:
In S1101, the camera thread sends a first start instruction to the camera framework in the hardware abstraction layer; the first start instruction contains the timestamp.
In this embodiment, the camera thread may run at the application layer; of course, according to the actual situation, it may run at another layer within the software framework, which is not limited here. If the camera thread runs at the application layer, the runtime application can send the timestamp to the camera thread through the corresponding interface. When the camera thread receives the timestamp fed back by the runtime application, it can generate a first start instruction, which can carry the received timestamp. The first start instruction is specifically used to notify the camera framework to start.
In S1102, in response to the first start instruction, the camera framework sends a second start instruction to the camera module.
In this embodiment, when the camera framework at the hardware abstraction layer receives the above first start instruction, this indicates that the camera module needs to be turned on, so it sends a second start instruction to the interface with the kernel layer. After receiving the second start instruction, the camera driver at the kernel layer controls the camera module to turn on. When the camera module receives the second start instruction, it starts to obtain a preview image, which is specifically used to collect the environment image within the current range of the user's line of sight.
In some implementations, the above camera module specifically includes a master camera module and at least one slave camera module. Compared with the slave camera module, the master camera module is closer to the output interface of the second start instruction, i.e., when the second start instruction is transmitted, it first passes through the master camera module and then through the slave camera module; the master camera module starts first, and then when the second start instruction is transmitted to the slave camera module, the slave camera module starts. That is, both the master camera module and the slave camera module are turned on when the relevant hardware receives the second start instruction, i.e., the master camera module and the slave camera module are configured in a hardware synchronization relationship and can complete hardware synchronization according to the second start instruction. Compared with software synchronization, hardware synchronization has the advantages of low delay and high stability, which can improve the synchronization of the collection times of the environment images collected by different camera modules in the subsequent image composition process, improve the accuracy of subsequent image composition, and reduce phase deviation.
In S1103, the camera module feeds back exposure parameters of the preview image to the camera framework, where the preview image is obtained by the camera module based on the second start instruction, and the exposure parameters include the start exposure moment of the preview image.
In this embodiment, when the camera module collects the preview image, it feeds the preview image back to the camera framework at the hardware abstraction layer. The image information of the preview image records the exposure parameters used when shooting the preview image, which may include information such as the exposure moment, the exposure duration, and the ISO sensitivity. Since the above preview image is the image obtained when the camera module starts, the exposure moment corresponding to the preview image is taken as the start exposure moment of the camera module. The camera framework can read the exposure parameters of the above preview image and extract the start exposure moment carried therein.
In some embodiments, both the master camera module and the slave camera module collect preview images, i.e., the fed-back preview images include the master preview image collected by the master camera module and the slave preview image collected by the slave camera module. Since the transmission path of the master camera module is shorter than that of the slave camera module, the camera framework can extract the start exposure moment of the master camera module and perform the subsequent time deviation calculation according to the start exposure moment of the master camera module.
In S1104, the camera framework calculates the time deviation between the timestamp and the start exposure moment.
In this embodiment, the camera framework can calculate the difference between the start exposure moment and the timestamp corresponding to the received first start instruction, and take the difference as the above time deviation. For example, if the start exposure moment is t1 and the above timestamp is ts, the corresponding time deviation is Δt = t1 - ts.
In S1105, the exposure moment of the camera thread in the (N+P)-th display period is determined according to the timestamp of the N-th display period and the time deviation.
In this embodiment, since there is a certain time deviation between the start moment of the camera module and the generation moment of the synchronization signal, if the camera module captures environment images at its preset shooting frame rate, the exposure moment of each environment image after the camera module starts will also have a certain time deviation from the timestamp of the synchronization signal. Therefore, the exposure moment corresponding to each display period may need to be adjusted according to the above time deviation. The camera thread can send a shooting instruction to the camera framework according to the exposure moment, and the camera framework can then control the camera module to collect the environment image at the corresponding exposure moment.
In some embodiments, if the shooting frame rate of the environment images is consistent with the display frame rate of the VR composite image, the above P is 1.
Exemplarily, FIG. 13 shows a schematic diagram of calibrating the exposure moments provided by an embodiment of the present application. Referring to FIG. 13, the dotted lines mark the exposure moments before calibration, i.e., t1 to tN, where t1 is the exposure moment corresponding to the start of the camera module, i.e., the above start exposure moment. ts is the timestamp of the display period corresponding to the sending of the first start instruction to the camera module; the generation moment ts corresponding to this timestamp is earlier than the moment at which the first start instruction is generated, i.e., earlier than t1. The time difference between the two is Δt. In order to align each exposure moment with the timestamp of the synchronization signal, each exposure moment needs to be advanced by Δt, yielding the exposure moments marked by the solid lines, i.e., t2' to tN'.
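The following is a minimal sketch of this calibration under the assumptions of FIG. 13: a fixed display period 1/F, P = 1 (shooting rate equal to display rate), and a measured deviation Δt; the concrete time values are illustrative:

```cpp
#include <cstdio>

// Hypothetical sketch: given the timestamp ts of the sync signal and the
// measured deviation dt = t1 - ts, schedule the calibrated exposure moments
// so that they align with the sync timestamps (advance each one by dt).
int main() {
    const double F  = 60.0;        // display frame rate (assumed equal to shooting rate, P = 1)
    const double T  = 1.0 / F;     // display period
    const double ts = 0.0;         // timestamp of the sync signal (illustrative origin)
    const double t1 = 0.004;       // start exposure moment reported by the camera module
    const double dt = t1 - ts;     // time deviation computed in S1104

    for (int k = 1; k <= 4; ++k) {
        double uncalibrated = t1 + k * T;        // exposure would stay offset by dt every period
        double calibrated   = uncalibrated - dt; // advanced by dt: now lands on ts + k*T
        std::printf("frame %d: %.4f s -> %.4f s\n", k, uncalibrated, calibrated);
    }
}
```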
In the embodiments of the present application, when the camera module starts, the time deviation between the start exposure moment and the timestamp is determined, and the exposure moments corresponding to the subsequent display periods are adjusted, which can align the exposure moments of the camera module with the timestamps of the synchronization signal, ensuring the accuracy of the camera thread's control of the camera module.
Case 2: For graphics processing threads that need to call the GPU
FIG. 14 shows an implementation flowchart of a graphics processing thread calibrating the processing trigger moment provided by an embodiment of the present application. Referring to FIG. 14, the above calibration process specifically includes S1401 to S1404.
In S1401, the graphics processing thread calculates the timestamp of the (N+P)-th display period; the timestamp of the (N+P)-th display period is determined according to the timestamp of the N-th display period and the processing frame rate of the graphics processing thread.
In this embodiment, the graphics processing thread can process the image data to be processed according to its corresponding processing frame rate. The display frame rate corresponding to the display period is an integer multiple of the processing frame rate, i.e., the processing frame rate F1 and the display frame rate F2 satisfy F2 = M*F1, where M is any positive integer. For example, if the display frame rate is 60 fps, the processing frame rate can be 60 fps or 30 fps. The specific processing frame rate can be determined according to the thread type of the graphics processing thread and the processing capability of the GPU, which is not limited here.
In this embodiment, the electronic device can send the timestamp to the graphics processing thread through the runtime application. Since one synchronization signal is generated per display period, the time interval between synchronization signals is predictable; therefore, when the graphics processing thread receives the timestamp corresponding to the N-th display period, it can predict the timestamp of the synchronization signal corresponding to the (N+P)-th display period. If the processing frame rate of the graphics processing thread is consistent with the display frame rate of the VR composite image, the above P is 1, i.e., the display frame rate is the same as the processing frame rate, and the graphics processing thread performs one image data processing operation in each display period.
If the processing frame rate of the graphics processing thread is f, the timestamp corresponding to the (N+P)-th display period is:
t(N+P) = t(N) + 1/f, where t(N+P) is the timestamp corresponding to the (N+P)-th display period and t(N) is the timestamp corresponding to the N-th display period.
In S1402, the graphics processing thread determines the processing completion moment of the N-th display period; the processing completion moment is the moment at which the graphics processing thread completes the processing of the image data for the N-th display period.
In this embodiment, the processing trigger moment of the graphics processing thread is specifically the moment of switching from the sleep state to the running state. Since a certain transmission time is needed for the synchronization signal to be transmitted from the kernel layer to the graphics processing thread at the application layer, in order to align the processing trigger moment of the graphics processing thread with the generation moment of the synchronization signal (i.e., the moment corresponding to the timestamp), the electronic device can determine the timing of switching from the sleep state to the running state (i.e., the processing trigger moment) by setting a sleep duration. After the timestamp of the display period corresponding to the next image processing, i.e., the end point of the sleep state, has been determined, the start point of the sleep state then needs to be determined.
The condition for the graphics processing thread to enter the sleep state is that the thread has completed its related processing tasks. Therefore, when the graphics processing thread completes the image data processing corresponding to the N-th display period, it obtains the system time and takes the system time corresponding to the completion of the image data processing as the above processing completion moment.
In S1403, the graphics processing thread calculates the sleep duration; the sleep duration is calculated according to the processing completion moment and the timestamp of the (N+P)-th display period.
In this embodiment, when the graphics processing thread has determined the start time of the sleep state (i.e., the processing completion moment) and the end time (i.e., the timestamp corresponding to the (N+P)-th display period), it can calculate the time difference between the two moments and take the time difference as the above sleep duration.
In S1404, the graphics processing thread is set to the sleep state, and the graphics processing thread is awakened when the duration of the sleep state reaches the sleep duration.
In this embodiment, when the graphics processing thread completes the image data processing task of the N-th display period, the graphics processing thread enters the sleep state and sets a corresponding sleep timer. The graphics processing thread detects whether the count value of the sleep timer is greater than or equal to the sleep duration calculated above; if so, the graphics processing thread is awakened; conversely, if the count value of the sleep timer is less than the sleep duration, the graphics processing thread remains in the sleep state.
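A minimal sketch of S1401 to S1404 is shown below; the clock choice, function name, and loop structure are illustrative assumptions rather than the actual implementation:

```cpp
#include <chrono>
#include <thread>

using Clock = std::chrono::steady_clock;

// Hypothetical worker loop for a graphics processing thread: after finishing
// the Nth period's work, sleep exactly until the predicted (N+P)th timestamp.
void graphics_thread_loop(Clock::time_point t_n, double processing_fps) {
    const auto period = std::chrono::duration_cast<Clock::duration>(
        std::chrono::duration<double>(1.0 / processing_fps));  // 1/f
    for (;;) {
        // ... process the image data of the current period here (S1402 ends) ...
        auto t_fin     = Clock::now();        // processing completion moment
        auto t_next    = t_n + period;        // predicted timestamp t(N+P) = t(N) + 1/f (S1401)
        auto sleep_dur = t_next - t_fin;      // sleep duration (S1403)
        if (sleep_dur > Clock::duration::zero())
            std::this_thread::sleep_for(sleep_dur);  // S1404: wake aligned with the sync signal
        t_n = t_next;                         // the wake moment becomes the new trigger moment
    }
}
```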
Exemplarily, FIG. 15 shows a control timing diagram of the graphics processing thread provided by an embodiment of the present application. Referring to FIG. 15, the processing frame rate of the graphics processing thread is consistent with the display frame rate of the VR composite image, so one display period corresponds to one processing period of the graphics processing thread. The graphics processing thread can receive, at any moment during the processing of the image data, the timestamp t(N) of the current display period fed back by the runtime application, and can calculate the timestamp corresponding to the next display period, i.e., t(N+1), according to the processing frame rate and t(N). When the graphics processing thread finishes processing the image data, it records the corresponding processing completion moment t(fin), enters the sleep state, and sets the sleep timer. The graphics processing thread detects whether the count value of the sleep timer reaches t(N+1) - t(fin); if so, the graphics processing thread is awakened. At this point, the synchronization signal of the next display period is generated, and at some moment the runtime application feeds back the corresponding timestamp to the graphics processing thread, and the above steps are repeated, so that each wake-up moment of the graphics processing thread is aligned with the generation moment of the synchronization signal.
In the embodiments of the present application, by setting the sleep duration, the timing of waking up the graphics processing thread can be aligned with the generation moment of the synchronization signal of the corresponding display period, which can achieve the purpose of controlling the graphics processing thread to process image data in an orderly manner and reduce the probability of frame loss.
In some embodiments, the graphics processing threads include a rendering thread and a decoding thread, and the rendering thread may further include a primary rendering thread and a secondary rendering thread. Since video decoding, primary rendering, and secondary rendering all need to occupy the GPU for processing, and the GPU can only execute one task at a time, the above multiple graphics processing threads need to occupy the GPU in a time-shared manner to process their corresponding tasks. In order to use GPU resources effectively, corresponding processing priorities can be configured for the different graphics processing threads. Exemplarily, since secondary rendering merges the virtual composite picture with the real environment image, and is the final image composition stage whose output is directly sent to the display screen, its processing importance is high and it can be configured with a higher processing priority; primary rendering mainly renders virtual objects, which may not appear in some scenes, so its processing priority can be lower than that of the secondary rendering thread; and for the decoding thread, the time required for video data decoding is short, so its processing priority can be set higher than that of the primary rendering thread. The processing priority relationship of the above three graphics processing threads can thus be: video decoding ≥ secondary rendering > primary rendering.
When the processing trigger moments of the graphics processing threads are not synchronized with the synchronization signal, the above three types of graphics processing threads call the GPU at arbitrary moments within a display period. Exemplarily, FIG. 16 shows a timing diagram of different graphics processing threads occupying the GPU provided by an embodiment of the present application. Referring to (a) in FIG. 16, within a certain display period on the GPU, the video thread decodes the video image of the 3rd frame, the primary rendering thread renders the virtual objects for the 3rd frame of the VR composite image, and the secondary rendering thread performs image fusion on the 2nd-frame environment image, the 2nd-frame video image, and the 2nd-frame virtual objects generated in the previous display period, i.e., completes the secondary rendering operation. If the above three different graphics processing threads trigger their tasks at arbitrary moments within the display period, then since the processing priority of the secondary rendering thread is higher than that of the primary rendering thread, if the secondary rendering thread occupies the GPU for its secondary rendering task while the primary rendering thread is processing, the primary rendering task being processed will be interrupted, and the task of the primary rendering thread may be delayed to the next display period for execution. When the secondary rendering thread completes its processing, the primary rendering thread re-occupies the GPU for processing, thereby delaying the decoding task of the 4th video frame, which may cause the 4th video image frame to be dropped, resulting in the final output VR composite image being not smooth.
Therefore, in order to reduce the frame loss caused by the different graphics processing threads being frequently interrupted by other threads while processing their corresponding tasks, the electronic device can configure different processing time slots for the different graphics processing threads within the same display period, and configure corresponding processing priorities for the different graphics processing threads, so as to control each graphics processing thread's calls to the GPU in an orderly manner. Referring to (b) in FIG. 16, the electronic device can be set so that, at the generation moment of the synchronization signal corresponding to the display period, the video decoding thread and the secondary rendering thread are triggered to process their corresponding tasks, with the priority of the video decoding thread set higher than that of the secondary rendering thread. Therefore, the GPU first processes the task corresponding to the video decoding thread; since video decoding takes a short time, when the video decoding thread completes the decoding task, the GPU is occupied by the secondary rendering thread to execute the corresponding secondary rendering task. After a preset interval duration, the primary rendering thread is triggered to process its corresponding task; since the priority of the primary rendering thread is lower than that of the secondary rendering thread, it waits until the secondary rendering thread completes its task before occupying the GPU for processing. In the next display period, the GPU can likewise be occupied in sequence in the above manner, thereby reducing the interruption of different graphics processing threads during task processing, reducing the probability of frame loss, and improving the smoothness of the VR composite image.
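A minimal sketch of this staggered-trigger scheme is given below. The use of sleep_until-based staggering and the thread bodies are illustrative assumptions; a real system would additionally set OS-level scheduling priorities to realize the priority relationship described above:

```cpp
#include <chrono>
#include <thread>

using std::chrono::steady_clock;
using std::chrono::milliseconds;

// Hypothetical staggered triggering within one display period: the decoding
// and secondary rendering threads start at the sync timestamp; the primary
// rendering thread starts after a preset interval (offset) so it is not
// interrupted mid-task by the higher-priority secondary rendering thread.
void run_display_period(steady_clock::time_point vsync_ts, milliseconds offset) {
    std::thread decode([&] {
        std::this_thread::sleep_until(vsync_ts);   // triggered at the sync signal
        /* decode one video frame (highest priority) */
    });
    std::thread second_render([&] {
        std::this_thread::sleep_until(vsync_ts);   // also triggered at the sync signal
        /* fuse last period's environment image with the virtual picture */
    });
    std::thread first_render([&] {
        std::this_thread::sleep_until(vsync_ts + offset);  // delayed time slot
        /* render this period's virtual objects (lowest priority) */
    });
    decode.join(); second_render.join(); first_render.join();
}
```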
It should be noted that the runtime application at the application layer can provide an interface for all graphics processing threads to obtain the timestamp. The interface can be directly allocated to the secondary rendering thread; in this case, when the primary rendering thread and the video decoding thread need to obtain the timestamp, they can obtain it from the secondary rendering thread. Exemplarily, FIG. 17 shows an implementation flowchart of each graphics processing thread determining its trigger moment provided by an embodiment of the present application. Referring to FIG. 17, specifically, the manners in which the different threads determine their processing trigger moments are as follows:
Case 2.1: For the video decoding thread
Step 2.1.1: The video decoding thread calculates the timestamp of the (N+1)-th display period; the timestamp of the (N+1)-th display period is determined according to the timestamp of the N-th display period and the processing frame rate of the video decoding thread, where the decoding frame rate of the video decoding thread is the same as the display frame rate of the VR composite image.
Step 2.1.2: The video decoding thread determines the processing completion moment of the N-th display period; the processing completion moment is the moment at which the video decoding thread completes the processing of the image data for the N-th display period.
Step 2.1.3: The video decoding thread calculates a third sleep duration; the third sleep duration is calculated according to the processing completion moment and the timestamp of the (N+1)-th display period.
Step 2.1.4: The video decoding thread is set to the sleep state, and when the duration of the sleep state reaches the third sleep duration, the video decoding thread is awakened to decode the video data.
Since the implementation process of the above Step 2.1.1 to Step 2.1.4 is the same as the specific implementation process of S1401 to S1404, for a specific description, reference may be made to the related description of S1401 to S1404, which is not repeated here.
Case 2.2: For the secondary rendering thread
Step 2.2.1: The secondary rendering thread calculates the timestamp of the (N+1)-th display period; the timestamp of the (N+1)-th display period is determined according to the timestamp of the N-th display period and the processing frame rate of the secondary rendering thread, where the rendering frame rate of the secondary rendering thread is the same as the display frame rate of the VR composite image.
Step 2.2.2: The secondary rendering thread determines the processing completion moment of the N-th display period; the processing completion moment is the moment at which the secondary rendering thread completes the processing of the image data for the N-th display period.
Step 2.2.3: The secondary rendering thread calculates a second sleep duration; the second sleep duration is calculated according to the processing completion moment and the timestamp of the (N+1)-th display period.
Step 2.2.4: The secondary rendering thread is set to the sleep state, and when the duration of the sleep state reaches the second sleep duration, the secondary rendering thread is awakened to perform secondary rendering.
Since the implementation process of the above Step 2.2.1 to Step 2.2.4 is the same as the specific implementation process of S1401 to S1404, for a specific description, reference may be made to the related description of S1401 to S1404, which is not repeated here.
Case 2.3: the primary rendering thread
Within the same display cycle there is a fixed time difference, namely a preset interval duration, between the expected trigger moment of the primary rendering thread and that of the secondary rendering thread; this prevents the primary rendering thread from being interrupted by the secondary rendering thread while it is using the GPU to process its task. The preset interval duration can be determined from the length of the display cycle, i.e., offset = α · (1/F2), where α is a preset coefficient that may be any value greater than 0 and less than 1 (for example 0.65) and F2 is the display frame rate. On this basis, the primary rendering thread calculates the first sleep duration as follows (a sketch of the computation is given after these steps):
Step 2.3.1: The primary rendering thread calculates the timestamp of the (N+P)-th display cycle. This timestamp is determined from the timestamp of the N-th display cycle and the processing frame rate of the primary rendering thread; here, the rendering frame rate of the primary rendering thread may be less than or equal to the display frame rate of the VR composite image.
Step 2.3.2: The primary rendering thread determines the processing completion moment of the N-th display cycle, that is, the first rendering completion moment at which the primary rendering thread finishes processing the image data of the N-th display cycle.
Step 2.3.3: The primary rendering thread calculates the first sleep duration, which is determined from the timestamp of the (N+P)-th display cycle, the preset interval duration, and the first rendering completion moment.
Because the preset interval duration must be taken into account, the first sleep duration is computed as t(N+P) + offset - t(fin), where offset is the preset interval duration described above and t(fin) is the first rendering completion moment.
Step 2.3.4: Set the primary rendering thread to the sleep state, and wake it to perform the primary rendering once it has been asleep for the first sleep duration.
Since steps 2.3.1 to 2.3.4 are implemented in the same way as S1401 to S1404, refer to the description of S1401 to S1404 for details; they are not repeated here.
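The following hedged C++ sketch spells out the formula t(N+P) + offset - t(fin) from case 2.3; the variable names are ours, and α = 0.65 is only the example coefficient given in the text.

```cpp
#include <chrono>
#include <thread>

using Clock = std::chrono::steady_clock;

// t_n: timestamp of display cycle N; render_fps: the primary rendering
// thread's frame rate F1; display_fps: the display frame rate F2.
void primary_render_sleep(Clock::time_point t_n, double render_fps, double display_fps) {
    // One processing period of the primary rendering thread spans P display
    // cycles, so t(N+P) = t(N) + 1/F1.
    const auto processing_period = std::chrono::duration_cast<Clock::duration>(
        std::chrono::duration<double>(1.0 / render_fps));
    const auto offset = std::chrono::duration_cast<Clock::duration>(
        std::chrono::duration<double>(0.65 / display_fps));   // alpha * (1/F2)
    const Clock::time_point t_np  = t_n + processing_period;  // timestamp of cycle N+P
    const Clock::time_point t_fin = Clock::now();             // first rendering completion moment
    const auto first_sleep = (t_np + offset) - t_fin;         // t(N+P) + offset - t(fin)
    if (first_sleep > Clock::duration::zero()) {
        std::this_thread::sleep_for(first_sleep);
    }
    // Woken here: begin the primary rendering for cycle N+P.
}
```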
In S703, according to a preset processing order, the threads are triggered, each at its corresponding processing trigger moment, to process image data and generate the VR composite image.
In this embodiment, based on what each thread processes and how long the processing takes when generating a VR composite image, the electronic device can divide the whole generation process into multiple processing stages, each corresponding to one position in the processing order. The electronic device starts the threads in that order and triggers each one to process image data at its processing trigger moment, thereby synthesizing the corresponding VR composite image; for the generation process, refer to the descriptions of the individual stages above, which are not repeated here.
As an example, Figure 18 shows a schematic division of the processing order provided by an embodiment of this application. Referring to Figure 18, the exposure stage is the first stage and can occupy two display cycles; the video decoding stage and the primary rendering stage correspond to the second stage and occupy one display cycle; the secondary rendering stage requires the output of several preceding stages, so it forms the third stage and occupies one display cycle; and the display stage is the fourth stage, used specifically to display the VR composite image output by the secondary rendering thread.
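Purely as an illustrative encoding of this stage division (the types below are our own, not anything from the patent), the four stages and their display-cycle budgets could be written as:

```cpp
enum class Stage { Exposure, DecodeAndPrimaryRender, SecondaryRender, Display };

struct StageSlot {
    Stage stage;
    int   display_cycles;  // how many display cycles the stage occupies
};

// Exposure spans two display cycles; decoding and primary rendering share the
// next; secondary rendering takes the one after; the last cycle displays the frame.
constexpr StageSlot kPipeline[] = {
    {Stage::Exposure, 2},
    {Stage::DecodeAndPrimaryRender, 1},
    {Stage::SecondaryRender, 1},
    {Stage::Display, 1},
};
```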
As an example, Figure 19 shows a timing diagram, provided by an embodiment of this application, of the stages an electronic device goes through when generating multiple frames of VR composite images. Displaying multiple frames of VR composite images is a data-stream process: after calibration against the synchronization signal, the stages execute in order, and every processing trigger moment is synchronized with the timestamp of the synchronization signal. The primary rendering thread, the video decoding thread, and the secondary rendering thread can time-share the GPU within the same display cycle, each with its own processing time slot, which reduces the loss of an image frame caused by task interruption.
As can be seen from the above, with the display method provided by the embodiments of this application, when the first operation is received the VR display device generates a synchronization signal in each display cycle and records the timestamp at which the signal was generated, and the timestamp of each display cycle is sent to the multiple threads used to synthesize the VR composite image, from which each thread can determine its processing trigger moment for the next display cycle. Because each thread's trigger moment for processing image data in the next display cycle is synchronized with the timestamp of that cycle's synchronization signal, the processing trigger moments of the multiple threads are guaranteed to be mutually synchronized; the threads are then triggered, each at its trigger moment and in their corresponding processing order, to process the image data and generate the VR composite image. Compared with existing display technology, delivering the synchronization signal's timestamp to the multiple threads synchronizes each thread's processing trigger moment with the next display cycle's synchronization signal, so the trigger moments of the threads are mutually synchronized; this allows the multiple threads to be controlled to process image data cooperatively and in order, reduces the probability of frame loss, and guarantees the smoothness of the output picture, thereby improving the user's sense of immersion and viewing experience.
Embodiment 2:
Corresponding to the display method described in Embodiment 1 above, Figure 20 shows a structural block diagram of the display apparatus provided by an embodiment of this application; for ease of description, only the parts related to this embodiment are shown.
Referring to Figure 20, the display apparatus includes:
a timestamp recording unit 201, configured to record, in response to a first operation, the timestamp of the synchronization signal generated in each display cycle;
a timestamp delivery unit 202, configured to send the timestamp of each display cycle to multiple threads, so that each thread calibrates, according to the timestamp of the N-th display cycle, its processing trigger moment in the (N+P)-th display cycle, where the thread's processing trigger moment is synchronized with the timestamp of the synchronization signal of the (N+P)-th display cycle, N and P are positive integers greater than 0, and the multiple threads are used to generate VR composite images; and
an image synthesis unit 203, configured to trigger the threads, in a preset processing order and each at its corresponding processing trigger moment, to process image data and generate a VR composite image.
Optionally, the multiple threads include a camera thread, whose processing trigger moment is an exposure moment; the timestamp delivery unit 202 includes:
a first start-instruction transmission unit, used by the camera thread to send a first start instruction, containing the timestamp, to the camera framework in the hardware abstraction layer;
a second start-instruction transmission unit, used by the camera framework to send, in response to the first start instruction, a second start instruction to the camera module;
an exposure-parameter feedback unit, used by the camera module to feed back the exposure parameters of a preview image to the camera framework, where the preview image is captured by the camera module based on the second start instruction and the exposure parameters include the starting exposure moment of the preview image;
a time-deviation calculation unit, used by the camera framework to calculate the time deviation between the timestamp and the starting exposure moment; and
an exposure-moment determination unit, configured to determine the exposure moment of the camera thread in the (N+P)-th display cycle from the timestamp of the N-th display cycle and the time deviation.
Optionally, the camera module includes a master camera module and at least one slave camera module; at start-up, the master camera module and the slave camera module complete hardware synchronization through the second start instruction sent by the camera framework.
Optionally, the multiple threads include a graphics processing thread; the timestamp delivery unit 202 includes:
a timestamp calculation unit, used by the graphics processing thread to calculate the timestamp of the (N+P)-th display cycle, which is determined from the timestamp of the N-th display cycle and the processing frame rate of the graphics processing thread;
a rendering-completion-moment determination unit, used by the graphics processing thread to determine the processing completion moment of the N-th display cycle, that is, the moment at which the graphics processing thread finishes processing the image data of the N-th display cycle;
a sleep-duration calculation unit, used by the graphics processing thread to calculate a sleep duration, which is computed from the processing completion moment and the timestamp of the (N+P)-th display cycle; and
a sleep-state trigger unit, configured to set the graphics processing thread to the sleep state and wake it when it has been asleep for the sleep duration.
Optionally, the graphics processing thread includes a secondary rendering thread;
the rendering-completion-moment determination unit includes:
a second-rendering-moment determination unit, used by the secondary rendering thread to determine the second rendering completion moment of the N-th display cycle; and
the sleep-duration calculation unit includes:
a second sleep calculation unit, used by the secondary rendering thread to calculate the time difference between the timestamp of the (N+P)-th display cycle and the rendering completion moment, taking that time difference as the second sleep duration.
Optionally, the graphics processing thread includes a primary rendering thread; the time difference between the expected trigger moment of the primary rendering thread and that of the secondary rendering thread is a preset interval duration;
the rendering-completion-moment determination unit includes:
a first-rendering-moment determination unit, used by the primary rendering thread to determine the first rendering completion moment of the N-th display cycle; and
the sleep-duration calculation unit includes:
a first sleep calculation unit, used by the primary rendering thread to calculate a first sleep duration, which is determined from the timestamp of the (N+P)-th display cycle, the preset interval duration, and the first rendering completion moment.
Optionally, the timestamp recording unit 201 includes:
a synchronization-signal generation unit, used by the processing chip to generate the synchronization signal of each display cycle based on a preset display frame rate;
a device-node storage unit, configured to generate, based on the moment at which the synchronization signal is generated, the timestamp recording the synchronization signal, and to store the timestamp in a device node in the kernel layer; and
a time-variable storage unit, used by a service process in the application framework layer to read the timestamp of the synchronization signal and store it in the time variable corresponding to the service process.
Optionally, the synchronization-signal generation unit includes:
a first synchronization-signal generation unit, used by the distributed unit (DPU) of the central processor to generate the synchronization signal at the display frame rate.
Optionally, the synchronization-signal generation unit includes:
a second synchronization-signal generation unit, used by the display driver chip to send a feedback signal to the central processing unit (CPU), so that the CPU generates the synchronization signal upon receiving the feedback signal; the feedback signal is generated when the display driver chip refreshes one frame of the VR composite image.
Optionally, the timestamp delivery unit 202 includes:
a runtime-application reading unit, used by a runtime application running in the application layer to read the timestamp recorded in the time variable, where the runtime application communicates with the service process in the application framework layer through a preset interface; and
a runtime-application sending unit, used by the runtime application to send the timestamp to the multiple threads.
Optionally, P is determined from the ratio between the display cycle of the VR composite image and the processing cycle of the thread.
Therefore, with the display apparatus provided by the embodiments of this application, when the first operation is received the VR display device likewise generates a synchronization signal in each display cycle and records the timestamp at which the signal was generated, and the timestamp of each display cycle is sent to the multiple threads used to synthesize the VR composite image, from which each thread can determine its processing trigger moment for the next display cycle. Because each thread's trigger moment for processing image data in the next display cycle is synchronized with the timestamp of that cycle's synchronization signal, the processing trigger moments of the multiple threads are guaranteed to be mutually synchronized; the threads are then triggered, each at its trigger moment and in their corresponding processing order, to process the image data and generate the VR composite image. Compared with existing display technology, delivering the synchronization signal's timestamp to the multiple threads synchronizes each thread's processing trigger moment with the next display cycle's synchronization signal, so the trigger moments of the threads are mutually synchronized; this allows the multiple threads to be controlled to process image data cooperatively and in order, reduces the probability of frame loss, and guarantees the smoothness of the output picture, thereby improving the user's sense of immersion and viewing experience.
Figure 21 is a schematic structural diagram of an electronic device provided by an embodiment of this application. As shown in Figure 21, the electronic device 21 of this embodiment includes: at least one processor 210 (only one is shown in Figure 21), a memory 211, and a computer program 212 stored in the memory 211 and runnable on the at least one processor 210; when the processor 210 executes the computer program 212, the steps of any of the display method embodiments above are implemented.
The electronic device 21 may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud server. The electronic device may include, but is not limited to, the processor 210 and the memory 211. Those skilled in the art will understand that Figure 21 is merely an example of the electronic device 21 and does not limit it; the device may include more or fewer components than shown, combine certain components, or use different components, and may also include, for example, input/output devices and network access devices.
The processor 210 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor or any conventional processor.
In some embodiments, the memory 211 may be an internal storage unit of the electronic device 21, such as its hard disk or memory. In other embodiments, the memory 211 may be an external storage device of the electronic device 21, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card fitted to the electronic device 21. Further, the memory 211 may include both an internal storage unit and an external storage device of the electronic device 21. The memory 211 is used to store the operating system, application programs, a boot loader, data, and other programs, such as the program code of the computer program; it may also be used to temporarily store data that has been output or is about to be output.
It should be noted that, because the information exchange and execution processes between the above apparatus/units are based on the same conception as the method embodiments of this application, their specific functions and technical effects can be found in the method embodiment section and are not repeated here.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division into the functional units and modules above is only an example; in practical applications, the functions may be assigned to different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, may exist physically separately, or two or more of them may be integrated into one unit; the integrated unit may be implemented in hardware or as a software functional unit. In addition, the specific names of the functional units and modules are only for ease of distinguishing them and do not limit the protection scope of this application. For the specific working processes of the units and modules in the above system, refer to the corresponding processes in the method embodiments above, which are not repeated here.
An embodiment of this application further provides an electronic device including: at least one processor, a memory, and a computer program stored in the memory and runnable on the at least one processor; when the processor executes the computer program, the steps of any of the method embodiments above are implemented.
An embodiment of this application further provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps of the method embodiments above are implemented.
An embodiment of this application provides a computer program product; when the computer program product runs on a mobile terminal, the mobile terminal, upon execution, implements the steps of the method embodiments above.
If the integrated unit is implemented as a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of this application may be completed by instructing the relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor it can implement the steps of the method embodiments above. The computer program includes computer program code, which may be in source-code form, object-code form, an executable file, certain intermediate forms, and so on. The computer-readable medium may include at least: any entity or apparatus capable of carrying the computer program code to the photographing apparatus/electronic device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc. In some jurisdictions, in accordance with legislation and patent practice, computer-readable media may not be electrical carrier signals or telecommunication signals.
In the above embodiments, each embodiment is described with its own emphasis; for parts not detailed or recorded in one embodiment, refer to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the particular application and design constraints of the technical solution. Skilled artisans may implement the described functions differently for each particular application, but such implementations should not be considered beyond the scope of this application.
In the embodiments provided in this application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the apparatus/network device embodiments described above are merely illustrative; for instance, the division into modules or units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or substitute equivalents for some of their technical features; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application, and shall all fall within the protection scope of this application.

Claims (14)

  1. A display method, characterized by comprising:
    in response to a first operation, recording the timestamp of a synchronization signal generated in each display cycle;
    sending the timestamp of each of the display cycles to multiple threads respectively, so that each of the threads calibrates, according to the timestamp of the N-th display cycle, its processing trigger moment in the (N+P)-th display cycle; the processing trigger moment of the thread is synchronized with the timestamp of the synchronization signal of the (N+P)-th display cycle; N and P are positive integers greater than 0; and the multiple threads are used to generate VR composite images; and
    triggering the threads, according to a preset processing order and each at its corresponding processing trigger moment, to process image data and generate a VR composite image.
  2. The display method according to claim 1, characterized in that the multiple threads comprise a camera thread, and the processing trigger moment of the camera thread is an exposure moment;
    the sending the timestamp of each of the display cycles to multiple threads respectively, so that each of the threads calibrates, according to the timestamp of the N-th display cycle, its processing trigger moment in the (N+P)-th display cycle, comprises:
    the camera thread sending a first start instruction to a camera framework in a hardware abstraction layer, the first start instruction containing the timestamp;
    in response to the first start instruction, the camera framework sending a second start instruction to a camera module;
    the camera module feeding back exposure parameters of a preview image to the camera framework, wherein the preview image is captured by the camera module based on the second start instruction, and the exposure parameters include a starting exposure moment of the preview image;
    the camera framework calculating the time deviation between the timestamp and the starting exposure moment; and
    determining, from the timestamp of the N-th display cycle and the time deviation, the exposure moment of the camera thread in the (N+P)-th display cycle.
  3. The display method according to claim 2, characterized in that the camera module comprises a master camera module and at least one slave camera module; at start-up, the master camera module and the slave camera module complete hardware synchronization through the second start instruction sent by the camera framework.
  4. The display method according to claim 1, characterized in that the multiple threads comprise a graphics processing thread;
    the sending the timestamp of each of the display cycles to multiple threads respectively, so that each of the threads calibrates, according to the timestamp of the N-th display cycle, its processing trigger moment in the (N+P)-th display cycle, comprises:
    the graphics processing thread calculating the timestamp of the (N+P)-th display cycle, the timestamp of the (N+P)-th display cycle being determined from the timestamp of the N-th display cycle and a processing frame rate of the graphics processing thread;
    the graphics processing thread determining a processing completion moment of the N-th display cycle, the processing completion moment being the moment at which the graphics processing thread finishes processing the image data of the N-th display cycle;
    the graphics processing thread calculating a sleep duration, the sleep duration being computed from the processing completion moment and the timestamp of the (N+P)-th display cycle; and
    setting the graphics processing thread to a sleep state, and waking the graphics processing thread when the duration of the sleep state reaches the sleep duration.
  5. The display method according to claim 4, characterized in that the graphics processing thread comprises a secondary rendering thread;
    the graphics processing thread determining a processing completion moment of the N-th display cycle comprises:
    the secondary rendering thread determining a second rendering completion moment of the N-th display cycle; and
    the graphics processing thread calculating a sleep duration comprises:
    the secondary rendering thread calculating the time difference between the timestamp of the (N+P)-th display cycle and the rendering completion moment, and taking the time difference as a second sleep duration.
  6. The display method according to claim 5, characterized in that the graphics processing thread comprises a primary rendering thread; the time difference between an expected trigger moment of the primary rendering thread and an expected trigger moment of the secondary rendering thread is a preset interval duration;
    the graphics processing thread determining a processing completion moment of the N-th display cycle comprises:
    the primary rendering thread determining a first rendering completion moment of the N-th display cycle; and
    the graphics processing thread calculating a sleep duration comprises:
    the primary rendering thread calculating a first sleep duration, the first sleep duration being determined from the timestamp of the (N+P)-th display cycle, the preset interval duration, and the first rendering completion moment.
  7. The display method according to any one of claims 1 to 6, characterized in that the recording, in response to the first operation, the timestamp of the synchronization signal generated in each display cycle comprises:
    a processing chip generating the synchronization signal of each of the display cycles based on a preset display frame rate;
    recording, based on the generation moment of the synchronization signal, the timestamp of generating the synchronization signal, and storing the timestamp in a device node in a kernel layer; and
    a service process in an application framework layer reading the timestamp of the synchronization signal and storing it in a time variable corresponding to the service process.
  8. The display method according to claim 7, characterized in that the processing chip generating the synchronization signal of each of the display cycles based on a preset display frame rate comprises:
    the distributed unit (DPU) of the central processor generating the synchronization signal at the display frame rate.
  9. The display method according to claim 7, characterized in that the processing chip generating the synchronization signal of each of the display cycles based on a preset display frame rate comprises:
    a display driver chip sending a feedback signal to a central processing unit (CPU), so that the CPU generates the synchronization signal upon receiving the feedback signal; the feedback signal is generated when the display driver chip refreshes one frame of the VR composite image.
  10. The display method according to claim 7, characterized in that the sending the timestamp of each of the display cycles to multiple threads respectively comprises:
    a runtime application running in an application layer reading the timestamp recorded in the time variable, the runtime application communicating with the service process in the application framework layer through a preset interface; and
    the runtime application sending the timestamp to the multiple threads.
  11. The display method according to any one of claims 1 to 6, characterized in that P is determined from the ratio between the display cycle of the VR composite image and the processing cycle of the thread.
  12. A display apparatus, characterized by comprising:
    a timestamp recording unit, configured to record, in response to a first operation, the timestamp of a synchronization signal generated in each display cycle;
    a timestamp delivery unit, configured to send the timestamp of each of the display cycles to multiple threads respectively, so that each of the threads calibrates, according to the timestamp of the N-th display cycle, its processing trigger moment in the (N+P)-th display cycle; the processing trigger moment of the thread is synchronized with the timestamp of the synchronization signal of the (N+P)-th display cycle; N and P are positive integers greater than 0; and the multiple threads are used to generate VR composite images; and
    an image synthesis unit, configured to trigger the threads, according to a preset processing order and each at its corresponding processing trigger moment, to process image data and generate a VR composite image.
  13. An electronic device, characterized in that the electronic device comprises a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 11.
  14. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 11.
PCT/CN2023/139711 2023-01-31 2023-12-19 Display method and apparatus, electronic device, and storage medium WO2024159950A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202310115473.5 2023-01-31
CN202310115473.5A CN118426722A (en) 2023-01-31 2023-01-31 Display method, display device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2024159950A1

Family

ID=92027386

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/139711 WO2024159950A1 (en) 2023-01-31 2023-12-19 Display method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN118426722A (en)
WO (1) WO2024159950A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180302569A1 (en) * 2017-04-14 2018-10-18 Facebook, Inc. Three-dimensional, 360-degree virtual reality camera live preview
CN114579075A (en) * 2022-01-30 2022-06-03 荣耀终端有限公司 Data processing method and related device
CN115048012A (en) * 2021-09-30 2022-09-13 荣耀终端有限公司 Data processing method and related device


Also Published As

Publication number Publication date
CN118426722A (en) 2024-08-02


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23919512

Country of ref document: EP

Kind code of ref document: A1