
CN117221549A - Coding method of multipath video and electronic equipment

Info

Publication number
CN117221549A
Authority
CN
China
Prior art keywords: video, image data, instruction, pause, recording
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210886609.8A
Other languages
Chinese (zh)
Inventor
王拣贤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd
Publication of CN117221549A

Landscapes

  • Television Signal Processing For Recording (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method for encoding multi-channel video and an electronic device, the electronic device including an encoder and a camera. The method includes: in response to a first operation by a user, the electronic device starts video recording and displays a first interface, the first interface displaying a preview picture captured by the camera; during recording, the electronic device starts encoding camera image data with the encoder to obtain encoded image data; when the electronic device obtains a pause-recording signal, camera image data captured after the pause is not encoded by the encoder; when the electronic device obtains a resume-recording signal, the encoder continues encoding camera image data captured after recording resumes; when the electronic device obtains an end-recording signal, the encoder finishes encoding the camera image data; and the electronic device generates a video based on the encoded image data. Embodiments of this application are used to prevent frame freezing and frame corruption in multi-channel video.

Description

Coding method of multipath video and electronic equipment
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a method for encoding a multi-channel video and an electronic device.
Background
Terminal devices that support video shooting, such as mobile phones, can now provide an automatic tracking shooting mode. When recording video, the terminal device receives the user's selection of a protagonist (the main subject to be tracked). The terminal device then keeps tracking that protagonist during subsequent recording, producing a close-up video whose center is always the selected protagonist. During such recording, the images captured by the camera must be encoded; because consecutive encoded frames reference one another, segmenting or reordering the encoded image data can cause frame corruption or frame freezing.
Disclosure of Invention
Embodiments of this application provide a method for encoding multi-channel video and an electronic device, used to prevent frame freezing and frame corruption in multi-channel video.
In a first aspect, an embodiment of the present application provides a method for encoding multi-channel video, applied to an electronic device that includes an encoder and a camera. The method includes: in response to a first operation by a user, the electronic device starts video recording and displays a first interface, where the first interface displays a preview picture captured by the camera and includes a large window and a small window, the picture content of the large window containing the picture content of the small window; during recording, the electronic device starts encoding camera image data with the encoder to obtain encoded image data, where the camera image data is unencoded image data of the large window or the small window; when the electronic device obtains a pause-recording signal, the encoder does not encode camera image data captured after the pause; when the electronic device obtains a resume-recording signal, the encoder continues encoding camera image data captured after recording resumes; when the electronic device obtains an end-recording signal, the encoder finishes encoding the camera image data; and the electronic device generates a video based on the encoded image data, where the video is a video of the large window or the small window.
In other words, the encoder encodes the camera image data captured before recording is paused, and does not encode the camera image data captured between the pause and the resume.
In this embodiment, during multi-channel video encoding the electronic device preserves the correlation between encoded image frames across a pause and resume, preventing frame corruption and frame freezing in the packaged video. In addition, the incoming camera image data is filtered at the encoder so that frames from the pause period are excluded from encoding; no filtering at the application layer is required, and the filtered image data does not need to be passed up to the application layer for processing, which saves processing resources and reduces energy consumption.
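As a rough illustration of this idea, the Kotlin sketch below (all names are assumptions, not the patent's implementation) gates frames at the encoder input: frames that arrive while recording is paused are dropped before they ever reach the codec, so the encoded stream stays contiguous and no application-layer filtering is needed.

```kotlin
import java.util.concurrent.atomic.AtomicBoolean

// Hypothetical frame type: a raw camera image plus its shooting timestamp.
data class CameraFrame(val shootTimestampUs: Long, val pixels: ByteArray)

// Gates frames at the encoder input instead of filtering at the application layer.
class GatedEncoderInput(private val encodeFrame: (CameraFrame) -> Unit) {
    private val paused = AtomicBoolean(false)

    fun onPauseRecordingSignal() = paused.set(true)
    fun onResumeRecordingSignal() = paused.set(false)

    // Called once per camera frame, in capture order. Frames arriving during
    // a pause never reach the codec, so consecutive encoded frames keep
    // referencing each other correctly and the muxed video stays clean.
    fun onFrame(frame: CameraFrame) {
        if (!paused.get()) encodeFrame(frame)
    }
}
```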
In one possible implementation, the electronic device includes a camera mode module. When the electronic device obtains a pause-recording signal, not encoding the camera image data captured after the pause with the encoder specifically includes: when the electronic device obtains a pause-recording signal through the camera mode module, the encoder obtains a pause-encoding instruction and does not encode camera image data whose shooting timestamp is at or after the pause timestamp, where each frame in the camera image data carries a shooting timestamp and the pause-encoding instruction includes the pause timestamp. When the electronic device obtains a resume-recording signal, continuing to encode the camera image data captured after recording resumes with the encoder specifically includes: when the electronic device obtains a resume-recording signal through the camera mode module, the encoder obtains a resume-encoding instruction and continues encoding camera image data whose shooting timestamp is at or after the resume timestamp, where the resume-encoding instruction includes the resume timestamp.
That is, the encoder encodes camera image data whose shooting timestamp is before the pause timestamp, and does not encode camera image data whose shooting timestamp falls between the pause timestamp and the resume timestamp.
In this embodiment, the encoder can take the pause timestamp and the resume timestamp directly and exclude from encoding the camera image data that falls between the two timestamps; all other images are encoded in order during recording. This keeps the order of the images deterministic before encoding and preserves the correlation of the images after encoding, avoiding frame freezing and frame corruption during multi-channel recording.
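A minimal sketch of that timestamp comparison (assumed names; frames are assumed to arrive in capture order): frames whose shooting timestamp is at or after the pause timestamp are skipped until a resume timestamp arrives, after which frames at or after the resume timestamp are encoded again.

```kotlin
// Tracks the currently active pause window, if any.
class TimestampGate {
    private var pauseTsUs = Long.MAX_VALUE    // set by a pause-encoding instruction
    private var resumeTsUs = Long.MIN_VALUE   // set by a resume-encoding instruction

    fun onPauseEncoding(pauseTimestampUs: Long) { pauseTsUs = pauseTimestampUs }

    fun onResumeEncoding(resumeTimestampUs: Long) {
        resumeTsUs = resumeTimestampUs
        pauseTsUs = Long.MAX_VALUE            // the pause window is now closed
    }

    // Encode a frame unless its shooting timestamp falls inside the pause
    // window [pauseTs, resumeTs). Because frames arrive in capture order,
    // all pre-pause frames are already encoded when a resume arrives.
    fun shouldEncode(shootTimestampUs: Long): Boolean =
        shootTimestampUs < pauseTsUs && shootTimestampUs >= resumeTsUs
}
```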
In one possible implementation, the camera image data includes first image data and second image data, the first image data being image data of the large window and the second image data being image data of the small window; the encoder includes a first encoder that encodes the first image data and a second encoder that encodes the second image data.
In this embodiment, multi-channel recording uses multiple encoders: the first encoder encodes the large-window image and the second encoder encodes the small-window image. Encoding is thus completed per stream rather than routed one-to-one through multiple application-layer channels, and the separately encoded results of the multiple streams can be packaged correspondingly at the application layer without separate processing, avoiding wasted processing and energy resources.
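For concreteness, two independent encoder instances might be created on Android with MediaCodec as follows (one per window); the codec, resolutions, and bitrates here are illustrative assumptions, not values from the patent.

```kotlin
import android.media.MediaCodec
import android.media.MediaCodecInfo
import android.media.MediaFormat

fun createAvcEncoder(width: Int, height: Int, bitRate: Int): MediaCodec {
    val format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height).apply {
        setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface)
        setInteger(MediaFormat.KEY_BIT_RATE, bitRate)
        setInteger(MediaFormat.KEY_FRAME_RATE, 30)
        setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1)
    }
    return MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC).apply {
        configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
    }
}

// First encoder for the large window, second for the small (close-up) window.
val firstEncoder = createAvcEncoder(1920, 1080, 12_000_000)
val secondEncoder = createAvcEncoder(1080, 1920, 8_000_000)
```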
In one possible implementation, the electronic device further includes an encoding control module. When the camera image data is the first image data, starting to encode the camera image data with the encoder during recording to obtain encoded image data specifically includes: in response to the first operation, the electronic device obtains a first start-recording signal through the camera mode module and sends a first start-recording instruction to the encoding control module; when the encoding control module obtains the first start-recording instruction, the electronic device, through the encoding control module, controls the first encoder to start encoding the first image data based on the first start-recording instruction to obtain third image data. When the electronic device obtains a pause-recording signal through the camera mode module, obtaining a pause-encoding instruction through the encoder and not encoding camera image data whose shooting timestamp is at or after the pause timestamp specifically includes: when the camera mode module obtains a first pause-recording signal, the electronic device sends a first pause-recording instruction to the encoding control module through the camera mode module; when the encoding control module obtains the first pause-recording instruction, the electronic device, through the encoding control module, determines a pause timestamp based on the first pause-recording instruction to generate a first pause-encoding instruction, and sends the first pause-encoding instruction to the first encoder; the electronic device, through the first encoder, does not encode first image data whose shooting timestamp is at or after the pause timestamp, based on the first pause-encoding instruction; the first pause-recording instruction indicates pausing the encoding of the large-window image, and the first pause-encoding instruction includes the pause timestamp. When the electronic device obtains a resume-recording signal through the camera mode module, obtaining a resume-encoding instruction through the encoder and continuing to encode camera image data whose shooting timestamp is at or after the resume timestamp specifically includes: when the camera mode module obtains a first resume-recording signal, the electronic device sends a first resume-recording instruction to the encoding control module through the camera mode module; when the encoding control module obtains the first resume-recording instruction, the electronic device, through the encoding control module, determines a resume timestamp based on the first resume-recording instruction to generate a first resume-encoding instruction, and sends the first resume-encoding instruction to the first encoder; the electronic device, through the first encoder, continues encoding first image data whose shooting timestamp is at or after the resume timestamp, based on the first resume-encoding instruction; the first resume-recording instruction indicates resuming the encoding of the large-window image, and the first resume-encoding instruction includes the resume timestamp. When the electronic device obtains an end-recording signal, finishing encoding the camera image data through the encoder specifically includes: when the camera mode module obtains an end-recording signal, the electronic device sends a first end-recording instruction to the encoding control module through the camera mode module; when the encoding control module obtains the first end-recording instruction, the electronic device, through the encoding control module, controls the first encoder to finish encoding the first image data based on the first end-recording instruction. Generating a video based on the encoded image data specifically includes: the electronic device encapsulates the third image data through the editing control module to generate a first file, where the first file is the video file of the large window.
Here, the first start-recording signal corresponds to the start-large-window-recording signal in the embodiments of this application, and the first start-recording instruction to the start-large-window-recording instruction; the first pause-recording signal corresponds to the pause-large-window-recording signal, and the first pause-recording instruction to the pause-large-window-recording instruction; the first resume-recording signal corresponds to the resume-large-window-recording signal, and the first resume-recording instruction to the resume-large-window-recording instruction; the first end-recording signal corresponds to the end-large-window-recording signal, and the first end-recording instruction to the end-large-window-recording instruction.
In this embodiment, the first encoder can discard the first image data after the pause timestamp based on the first pause-encoding instruction, and continue encoding after receiving the first resume-encoding instruction, that is, encode the image frames after the resume timestamp. This avoids frame corruption and frame freezing in the large-window video data obtained during protagonist-mode recording. In addition, during multi-channel recording only the encoders are duplicated for the large and small windows; the remaining modules are shared by both. That is, the same encoding control module controls both windows' encoders, and the generated large-window and small-window videos are each packaged separately. On the one hand, splitting the encoder module into two encoders ensures timely encoding, i.e., the encoding speed can stay ahead of the rate at which image data is produced; on the other hand, the other modules need not be duplicated per video channel, which reduces the occupation of processing resources and keeps the processing simpler and more effective.
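The final encapsulation step might look like the following on Android, where each window's encoded samples are written into its own container file (MediaMuxer shown purely for illustration; the file paths and surrounding flow are assumptions).

```kotlin
import android.media.MediaCodec
import android.media.MediaFormat
import android.media.MediaMuxer
import java.nio.ByteBuffer

// Wraps one output file per video channel (one for the large window,
// one for the small window).
class ChannelMuxer(outputPath: String, trackFormat: MediaFormat) {
    private val muxer = MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)
    private val track = muxer.addTrack(trackFormat)

    init { muxer.start() }

    // Called with each encoded buffer drained from the corresponding encoder.
    fun writeSample(encoded: ByteBuffer, info: MediaCodec.BufferInfo) {
        muxer.writeSampleData(track, encoded, info)
    }

    fun finish() { muxer.stop(); muxer.release() }
}
```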
In one possible implementation, after the electronic device starts recording and displays the first interface, the method further includes: in response to a second operation by the user, the electronic device pauses recording of the large window, displays a second interface, and obtains the first pause-recording signal through the camera mode module, where the first interface includes a large-window pause-recording control, the second operation acts on the large-window pause-recording control, and the pause timestamp in the first pause-encoding instruction is the time point at which the electronic device detected the second operation; in response to a third operation by the user, the electronic device resumes recording of the large window, displays a third interface, and obtains the first resume-recording signal through the camera mode module, where the second interface includes a large-window resume-recording control, the third operation acts on the large-window resume-recording control, and the resume timestamp in the first resume-encoding instruction is the time point at which the electronic device detected the third operation; and in response to a fourth operation by the user, the electronic device ends recording of the large window and obtains a first end-recording signal through the camera mode module, where the third interface includes a large-window end-recording control and the fourth operation acts on the large-window end-recording control.
In this embodiment, during large-window encoding the camera mode module determines which operation the user performed and under what conditions, so that large-window recording can be started, paused, resumed, and ended accordingly. This ensures that the recording time points are determined accurately and that the pause and resume timestamps are obtained reasonably.
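The dispatch performed by the camera mode module could be sketched as below (control identifiers and signal names are hypothetical): each recognized user operation is turned into a recording signal stamped with the time the operation was detected, which later becomes the pause or resume timestamp.

```kotlin
sealed interface RecordingSignal { val timestampUs: Long }
data class StartLargeWindow(override val timestampUs: Long) : RecordingSignal
data class PauseLargeWindow(override val timestampUs: Long) : RecordingSignal   // pause timestamp
data class ResumeLargeWindow(override val timestampUs: Long) : RecordingSignal  // resume timestamp
data class EndLargeWindow(override val timestampUs: Long) : RecordingSignal

// Maps a tap on a recording control to the corresponding signal.
fun onControlTapped(controlId: String, detectedAtUs: Long): RecordingSignal? =
    when (controlId) {
        "large_window_start"  -> StartLargeWindow(detectedAtUs)
        "large_window_pause"  -> PauseLargeWindow(detectedAtUs)
        "large_window_resume" -> ResumeLargeWindow(detectedAtUs)
        "large_window_end"    -> EndLargeWindow(detectedAtUs)
        else                  -> null
    }
```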
In one possible implementation, the electronic device further includes an encoding control module. When the camera image data is the second image data, starting to encode the camera image data with the encoder during recording to obtain encoded image data specifically includes: in response to the first operation, the electronic device obtains a second start-recording signal through the camera mode module and sends a second start-recording instruction to the encoding control module; when the encoding control module obtains the second start-recording instruction, it controls the second encoder to start encoding the second image data based on the second start-recording instruction to obtain fourth image data. When the electronic device obtains a pause-recording signal through the camera mode module, obtaining a pause-encoding instruction through the encoder and not encoding camera image data whose shooting timestamp is at or after the pause timestamp specifically includes: when the camera mode module obtains a second pause-recording signal, the electronic device sends a second pause-recording instruction to the encoding control module through the camera mode module; when the encoding control module obtains the second pause-recording instruction, the electronic device, through the encoding control module, determines a pause timestamp based on the second pause-recording instruction to generate a second pause-encoding instruction, and sends the second pause-encoding instruction to the second encoder; the electronic device, through the second encoder, does not encode second image data whose shooting timestamp is at or after the pause timestamp, based on the second pause-encoding instruction; the second pause-recording instruction indicates pausing the encoding of the small-window image, and the second pause-encoding instruction includes the pause timestamp. When the electronic device obtains a resume-recording signal through the camera mode module, obtaining a resume-encoding instruction through the encoder and continuing to encode camera image data whose shooting timestamp is at or after the resume timestamp specifically includes: when the camera mode module obtains a second resume-recording signal, the electronic device sends a second resume-recording instruction to the encoding control module through the camera mode module; when the encoding control module obtains the second resume-recording instruction, the electronic device, through the encoding control module, determines a resume timestamp based on the second resume-recording instruction to generate a second resume-encoding instruction, and sends the second resume-encoding instruction to the second encoder; the electronic device, through the second encoder, continues encoding second image data whose shooting timestamp is at or after the resume timestamp, based on the second resume-encoding instruction; the second resume-recording instruction indicates resuming the encoding of the small-window image, and the second resume-encoding instruction includes the resume timestamp. When the electronic device obtains an end-recording signal, finishing encoding the camera image data through the encoder specifically includes: when the camera mode module obtains an end-recording signal, the electronic device sends a second end-recording instruction to the encoding control module through the camera mode module; when the encoding control module obtains the second end-recording instruction, the electronic device, through the encoding control module, controls the second encoder to finish encoding the second image data based on the second end-recording instruction. Generating a video based on the encoded image data specifically includes: the electronic device encapsulates the fourth image data through the editing control module to generate a second file, where the second file is the video file of the small window.
Here, the second start-recording signal corresponds to the start-small-window-recording signal in the embodiments of this application, and the second start-recording instruction to the start-small-window-recording instruction; the second pause-recording signal corresponds to the pause-small-window-recording signal, and the second pause-recording instruction to the pause-small-window-recording instruction; the second resume-recording signal corresponds to the resume-small-window-recording signal, and the second resume-recording instruction to the resume-small-window-recording instruction; the second end-recording signal corresponds to the end-small-window-recording signal, and the second end-recording instruction to the end-small-window-recording instruction. The second file is the video file of the close-up video in the embodiments of this application.
In this embodiment, the second encoder can discard the second image data after the pause timestamp based on the second pause-encoding instruction, and continue encoding after receiving the second resume-encoding instruction, that is, encode the image frames after the resume timestamp. This avoids frame corruption and frame freezing in the small-window video data obtained during protagonist-mode recording. In addition, during multi-channel recording only the encoders are duplicated for the large and small windows; the remaining modules are shared by both: the same encoding control module controls both windows' encoders, and the generated large-window and small-window videos are each packaged separately. On the one hand, splitting the encoder module into two encoders ensures timely encoding, i.e., the encoding speed can stay ahead of the rate at which image data is produced; on the other hand, the other modules need not be duplicated per video channel, which reduces the occupation of processing resources and keeps the processing simpler and more effective.
In one possible implementation, the recording target of the small window is a subject in the large-window picture. When the electronic device detects that the recording target has disappeared, the method further includes: the electronic device obtains, through the camera mode module, a second pause-recording signal based on the time at which the recording target disappeared, where the pause timestamp is a first time point after the disappearance time. When the electronic device re-detects the disappeared recording target, the method further includes: the electronic device obtains, through the camera mode module, a second resume-recording signal based on the time at which the recording target was re-detected, where the resume timestamp is the time point of re-detection.
In this embodiment, the protagonist may move around the picture, and the electronic device sometimes switches frequently between states in which the protagonist can and cannot be detected. If the small window disappeared the moment detection was lost, it would repeatedly appear and vanish, making the picture flicker. The small window is therefore dismissed only after a delay of a first duration, reducing how often it toggles and improving the user's visual experience. In addition, pausing the small-window recording when the target disappears and resuming it when the target is detected again ensures that the close-up video stays aimed at the chosen target and that the target appears correctly in the close-up picture.
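A sketch of this debounce behavior (the grace period stands in for the unspecified "first duration"; all names are assumptions): the small-window encode pauses shortly after the target is lost and resumes the moment the target is detected again, so brief detection dropouts do not toggle the window.

```kotlin
class CloseUpTargetTracker(
    private val pauseEncoder: (pauseTsMs: Long) -> Unit,
    private val resumeEncoder: (resumeTsMs: Long) -> Unit,
    private val graceMs: Long = 500,   // assumed "first duration"
) {
    private var lostAtMs: Long? = null
    private var paused = false

    // Called once per detection result from the tracking pipeline.
    fun onDetection(targetVisible: Boolean, nowMs: Long) {
        if (targetVisible) {
            if (paused) { resumeEncoder(nowMs); paused = false }  // resume at re-detection time
            lostAtMs = null
        } else {
            val lost = lostAtMs ?: nowMs.also { lostAtMs = it }
            if (!paused && nowMs - lost >= graceMs) {
                pauseEncoder(lost + graceMs)  // first time point after the disappearance
                paused = true
            }
        }
    }
}
```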
In one possible implementation, the method further includes: in response to a second operation by the user, the electronic device pauses recording of the large window, displays a second interface, and obtains a second end-recording signal through the camera mode module, where the first interface includes a large-window pause-recording control and the second operation acts on that control; or, in response to a fourth operation, the electronic device ends recording of the large window and obtains a second end-recording signal through the camera mode module, where the first interface includes a large-window end-recording control and the fourth operation acts on that control; or, in response to a fifth operation, the electronic device ends recording of the small window and obtains a second end-recording signal through the camera mode module, where the first interface includes a small-window end-recording control and the fifth operation acts on that control.
In this embodiment, because large-window and small-window recording proceed simultaneously, small-window recording is ended not only by the control operation that ends the small window but also by an operation that pauses or ends the large window. The small-window video cannot exist independently of the large-window video: the small-window image is derived from the large-window image, so the small window cannot be encoded on its own. This matches the requirements of a practical recording scheme and ensures the scheme's rationality and effectiveness.
In a second aspect, an embodiment of the present application provides an electronic device, including one or more processors and a memory. The memory is coupled to the one or more processors and stores computer program code comprising computer instructions. The electronic device includes an encoder and a camera. The one or more processors invoke the computer instructions to cause the electronic device to perform: in response to a first operation by a user, starting video recording and displaying a first interface, where the first interface displays a preview picture captured by the camera and includes a large window and a small window, the picture content of the large window containing the picture content of the small window; during recording, starting to encode camera image data with the encoder to obtain encoded image data, where the camera image data is unencoded image data of the large window or the small window; when a pause-recording signal is obtained, not encoding camera image data captured after the pause with the encoder; when a resume-recording signal is obtained, continuing to encode camera image data captured after recording resumes with the encoder; when an end-recording signal is obtained, finishing encoding the camera image data with the encoder; and generating a video based on the encoded image data, where the video is a video of the large window or the small window.
In this embodiment, during multi-channel video encoding the electronic device preserves the correlation between encoded image frames across a pause and resume, preventing frame corruption and frame freezing in the packaged video. In addition, the incoming camera image data is filtered at the encoder so that frames from the pause period are excluded from encoding; no filtering at the application layer is required, and the filtered image data does not need to be passed up to the application layer for processing, which saves processing resources and reduces energy consumption.
In one possible implementation, the electronic device includes a camera mode module. When the electronic device obtains a pause-recording signal, for not encoding the camera image data captured after the pause with the encoder, the electronic device specifically performs: when a pause-recording signal is obtained through the camera mode module, obtaining a pause-encoding instruction through the encoder and not encoding camera image data whose shooting timestamp is at or after the pause timestamp, where each frame in the camera image data carries a shooting timestamp and the pause-encoding instruction includes the pause timestamp. When the electronic device obtains a resume-recording signal, for continuing to encode the camera image data captured after recording resumes with the encoder, the electronic device specifically performs: when a resume-recording signal is obtained through the camera mode module, obtaining a resume-encoding instruction through the encoder and continuing to encode camera image data whose shooting timestamp is at or after the resume timestamp, where the resume-encoding instruction includes the resume timestamp.
In this embodiment, the encoder can take the pause timestamp and the resume timestamp directly and exclude from encoding the camera image data that falls between the two timestamps; all other images are encoded in order during recording. This keeps the order of the images deterministic before encoding and preserves the correlation of the images after encoding, avoiding frame freezing and frame corruption during multi-channel recording.
In one possible implementation, the camera image data includes first image data and second image data, the first image data being image data of the large window and the second image data being image data of the small window; the encoder includes a first encoder that encodes the first image data and a second encoder that encodes the second image data.
In this embodiment, multi-channel recording uses multiple encoders: the first encoder encodes the large-window image and the second encoder encodes the small-window image. Encoding is thus completed per stream rather than routed one-to-one through multiple application-layer channels, and the separately encoded results of the multiple streams can be packaged correspondingly at the application layer without separate processing, avoiding wasted processing and energy resources.
In one possible implementation, the electronic device further includes an encoding control module. When the camera image data is the first image data, for starting to encode the camera image data with the encoder during recording to obtain encoded image data, the electronic device specifically performs: in response to the first operation, obtaining a first start-recording signal through the camera mode module and sending a first start-recording instruction to the encoding control module; when the encoding control module obtains the first start-recording instruction, controlling, through the encoding control module, the first encoder to start encoding the first image data based on the first start-recording instruction to obtain third image data. When the electronic device obtains a pause-recording signal through the camera mode module, for obtaining a pause-encoding instruction through the encoder and not encoding camera image data whose shooting timestamp is at or after the pause timestamp, the electronic device specifically performs: when the camera mode module obtains a first pause-recording signal, sending a first pause-recording instruction to the encoding control module through the camera mode module; when the encoding control module obtains the first pause-recording instruction, determining, through the encoding control module, a pause timestamp based on the first pause-recording instruction to generate a first pause-encoding instruction, and sending the first pause-encoding instruction to the first encoder; not encoding, through the first encoder, first image data whose shooting timestamp is at or after the pause timestamp, based on the first pause-encoding instruction; the first pause-recording instruction indicates pausing the encoding of the large-window image, and the first pause-encoding instruction includes the pause timestamp. When the electronic device obtains a resume-recording signal through the camera mode module, for obtaining a resume-encoding instruction through the encoder and continuing to encode camera image data whose shooting timestamp is at or after the resume timestamp, the electronic device specifically performs: when the camera mode module obtains a first resume-recording signal, sending a first resume-recording instruction to the encoding control module through the camera mode module; when the encoding control module obtains the first resume-recording instruction, determining, through the encoding control module, a resume timestamp based on the first resume-recording instruction to generate a first resume-encoding instruction, and sending the first resume-encoding instruction to the first encoder; continuing to encode, through the first encoder, first image data whose shooting timestamp is at or after the resume timestamp, based on the first resume-encoding instruction; the first resume-recording instruction indicates resuming the encoding of the large-window image, and the first resume-encoding instruction includes the resume timestamp. When the electronic device obtains an end-recording signal, for finishing encoding the camera image data through the encoder, the electronic device specifically performs: when the camera mode module obtains an end-recording signal, sending a first end-recording instruction to the encoding control module through the camera mode module; when the encoding control module obtains the first end-recording instruction, controlling, through the encoding control module, the first encoder to finish encoding the first image data based on the first end-recording instruction. For generating a video based on the encoded image data, the electronic device specifically performs: encapsulating the third image data through the editing control module to generate a first file, where the first file is the video file of the large window.
Here, the first start-recording signal corresponds to the start-large-window-recording signal in the embodiments of this application, and the first start-recording instruction to the start-large-window-recording instruction; the first pause-recording signal corresponds to the pause-large-window-recording signal, and the first pause-recording instruction to the pause-large-window-recording instruction; the first resume-recording signal corresponds to the resume-large-window-recording signal, and the first resume-recording instruction to the resume-large-window-recording instruction; the first end-recording signal corresponds to the end-large-window-recording signal, and the first end-recording instruction to the end-large-window-recording instruction.
In this embodiment, the first encoder can discard the first image data after the pause timestamp based on the first pause-encoding instruction, and continue encoding after receiving the first resume-encoding instruction, that is, encode the image frames after the resume timestamp. This avoids frame corruption and frame freezing in the large-window video data obtained during protagonist-mode recording. In addition, during multi-channel recording only the encoders are duplicated for the large and small windows; the remaining modules are shared by both: the same encoding control module controls both windows' encoders, and the generated large-window and small-window videos are each packaged separately. On the one hand, splitting the encoder module into two encoders ensures timely encoding, i.e., the encoding speed can stay ahead of the rate at which image data is produced; on the other hand, the other modules need not be duplicated per video channel, which reduces the occupation of processing resources and keeps the processing simpler and more effective.
In one possible implementation, after the electronic device starts recording and displays the first interface, the electronic device further performs: in response to a second operation by the user, pausing recording of the large window, displaying a second interface, and obtaining the first pause-recording signal through the camera mode module, where the first interface includes a large-window pause-recording control, the second operation acts on the large-window pause-recording control, and the pause timestamp in the first pause-encoding instruction is the time point at which the electronic device detected the second operation; in response to a third operation by the user, resuming recording of the large window, displaying a third interface, and obtaining the first resume-recording signal through the camera mode module, where the second interface includes a large-window resume-recording control, the third operation acts on the large-window resume-recording control, and the resume timestamp in the first resume-encoding instruction is the time point at which the electronic device detected the third operation; and in response to a fourth operation by the user, ending recording of the large window and obtaining a first end-recording signal through the camera mode module, where the third interface includes a large-window end-recording control and the fourth operation acts on the large-window end-recording control.
In this embodiment, during large-window encoding the camera mode module determines which operation the user performed and under what conditions, so that large-window recording can be started, paused, resumed, and ended accordingly. This ensures that the recording time points are determined accurately and that the pause and resume timestamps are obtained reasonably.
In one possible implementation, the electronic device further includes an encoding control module. When the camera image data is the second image data, for starting to encode the camera image data with the encoder during recording to obtain encoded image data, the electronic device specifically performs: in response to the first operation, obtaining a second start-recording signal through the camera mode module and sending a second start-recording instruction to the encoding control module; when the encoding control module obtains the second start-recording instruction, controlling, through the encoding control module, the second encoder to start encoding the second image data based on the second start-recording instruction to obtain fourth image data. When the electronic device obtains a pause-recording signal through the camera mode module, for obtaining a pause-encoding instruction through the encoder and not encoding camera image data whose shooting timestamp is at or after the pause timestamp, the electronic device specifically performs: when the camera mode module obtains a second pause-recording signal, sending a second pause-recording instruction to the encoding control module through the camera mode module; when the encoding control module obtains the second pause-recording instruction, determining, through the encoding control module, a pause timestamp based on the second pause-recording instruction to generate a second pause-encoding instruction, and sending the second pause-encoding instruction to the second encoder; not encoding, through the second encoder, second image data whose shooting timestamp is at or after the pause timestamp, based on the second pause-encoding instruction; the second pause-recording instruction indicates pausing the encoding of the small-window image, and the second pause-encoding instruction includes the pause timestamp. When the electronic device obtains a resume-recording signal through the camera mode module, for obtaining a resume-encoding instruction through the encoder and continuing to encode camera image data whose shooting timestamp is at or after the resume timestamp, the electronic device specifically performs: when the camera mode module obtains a second resume-recording signal, sending a second resume-recording instruction to the encoding control module through the camera mode module; when the encoding control module obtains the second resume-recording instruction, determining, through the encoding control module, a resume timestamp based on the second resume-recording instruction to generate a second resume-encoding instruction, and sending the second resume-encoding instruction to the second encoder; continuing to encode, through the second encoder, second image data whose shooting timestamp is at or after the resume timestamp, based on the second resume-encoding instruction; the second resume-recording instruction indicates resuming the encoding of the small-window image, and the second resume-encoding instruction includes the resume timestamp. When the electronic device obtains an end-recording signal, for finishing encoding the camera image data through the encoder, the electronic device specifically performs: when the camera mode module obtains an end-recording signal, sending a second end-recording instruction to the encoding control module through the camera mode module; when the encoding control module obtains the second end-recording instruction, controlling, through the encoding control module, the second encoder to finish encoding the second image data based on the second end-recording instruction. For generating a video based on the encoded image data, the electronic device specifically performs: encapsulating the fourth image data through the editing control module to generate a second file, where the second file is the video file of the small window.
Here, the second start-recording signal corresponds to the start-small-window-recording signal in the embodiments of this application, and the second start-recording instruction to the start-small-window-recording instruction; the second pause-recording signal corresponds to the pause-small-window-recording signal, and the second pause-recording instruction to the pause-small-window-recording instruction; the second resume-recording signal corresponds to the resume-small-window-recording signal, and the second resume-recording instruction to the resume-small-window-recording instruction; the second end-recording signal corresponds to the end-small-window-recording signal, and the second end-recording instruction to the end-small-window-recording instruction. The second file is the video file of the close-up video in the embodiments of this application.
In this embodiment, the second encoder can discard the second image data after the pause timestamp based on the second pause-encoding instruction, and continue encoding after receiving the second resume-encoding instruction, that is, encode the image frames after the resume timestamp. This avoids frame corruption and frame freezing in the small-window video data obtained during protagonist-mode recording. In addition, during multi-channel recording only the encoders are duplicated for the large and small windows; the remaining modules are shared by both: the same encoding control module controls both windows' encoders, and the generated large-window and small-window videos are each packaged separately. On the one hand, splitting the encoder module into two encoders ensures timely encoding, i.e., the encoding speed can stay ahead of the rate at which image data is produced; on the other hand, the other modules need not be duplicated per video channel, which reduces the occupation of processing resources and keeps the processing simpler and more effective.
In one possible implementation, the recording target of the small window is a subject in the large-window picture. When the electronic device detects that the recording target has disappeared, the electronic device further performs: obtaining, through the camera mode module, a second pause-recording signal based on the time at which the recording target disappeared, where the pause timestamp is a first time point after the disappearance time. When the electronic device re-detects the disappeared recording target, the electronic device further performs: obtaining, through the camera mode module, a second resume-recording signal based on the time at which the recording target was re-detected, where the resume timestamp is the time point of re-detection.
In this embodiment, the protagonist may move around the picture, and the electronic device sometimes switches frequently between states in which the protagonist can and cannot be detected. If the small window disappeared the moment detection was lost, it would repeatedly appear and vanish, making the picture flicker. The small window is therefore dismissed only after a delay of a first duration, reducing how often it toggles and improving the user's visual experience. In addition, pausing the small-window recording when the target disappears and resuming it when the target is detected again ensures that the close-up video stays aimed at the chosen target and that the target appears correctly in the close-up picture.
In one possible implementation, the electronic device further performs: in response to a second operation by the user, pausing recording of the large window, displaying a second interface, and obtaining a second end-recording signal through the camera mode module, where the first interface includes a large-window pause-recording control and the second operation acts on that control; or, in response to a fourth operation, ending recording of the large window and obtaining a second end-recording signal through the camera mode module, where the first interface includes a large-window end-recording control and the fourth operation acts on that control; or, in response to a fifth operation, ending recording of the small window and obtaining a second end-recording signal through the camera mode module, where the first interface includes a small-window end-recording control and the fifth operation acts on that control.
In this embodiment, because large-window and small-window recording proceed simultaneously, small-window recording is ended not only by the control operation that ends the small window but also by an operation that pauses or ends the large window. The small-window video cannot exist independently of the large-window video: the small-window image is derived from the large-window image, so the small window cannot be encoded on its own. This matches the requirements of a practical recording scheme and ensures the scheme's rationality and effectiveness.
In a third aspect, an embodiment of the present application provides an electronic device, including: a touch screen, a camera, one or more processors, and one or more memories. The one or more processors are coupled to the touch screen, the camera, and the one or more memories; the one or more memories store computer program code comprising computer instructions which, when executed by the one or more processors, cause the electronic device to perform the method for encoding multi-channel video according to the first aspect or any possible implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a chip system applied to an electronic device. The chip system includes one or more processors configured to invoke computer instructions to cause the electronic device to perform the method for encoding multi-channel video according to the first aspect or any possible implementation of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product comprising instructions which, when run on an electronic device, cause the electronic device to perform the method for encoding multi-channel video according to the first aspect or any possible implementation of the first aspect.
In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium comprising instructions which, when executed on an electronic device, cause the electronic device to perform the method for encoding multi-channel video according to the first aspect or any possible implementation of the first aspect.
Drawings
Fig. 1 is a schematic view of a shooting scene in protagonist mode according to an embodiment of the present application;
Figs. 2A-2M are a set of user interface diagrams in protagonist mode according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
Fig. 4 is a schematic diagram of a software structure of an electronic device according to an embodiment of the present application;
Fig. 5 is a schematic diagram of a software structure of another electronic device according to an embodiment of the present application;
Figs. 6A-6C are a flow chart of a method for video data processing in protagonist mode according to an embodiment of the present application;
Figs. 7A-7G are a set of user interface diagrams for pausing and resuming recording in protagonist mode according to an embodiment of the present application;
Figs. 8A and 8B are schematic diagrams of a set of time selections for recording close-up video according to an embodiment of the present application;
Figs. 9A-9F are another set of user interface diagrams for pausing and resuming recording in protagonist mode according to an embodiment of the present application;
Fig. 10 is a schematic diagram of a set of time options for recording close-up video according to an embodiment of the present application;
Fig. 11 is a schematic flow chart of a method for encoding multi-channel video according to an embodiment of the present application;
Fig. 12 is a flow chart of another method for encoding multi-channel video according to an embodiment of the present application;
Fig. 13 is a flow chart of another method for encoding multi-channel video according to an embodiment of the present application;
Fig. 14 is a schematic diagram of an encoding process according to an embodiment of the present application.
Detailed Description
The terminology used in the following embodiments of the application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in this application refers to and encompasses any and all possible combinations of one or more of the listed items.
In one embodiment of the present application, a terminal device with shooting and image processing functions, such as a mobile phone or a tablet computer (hereinafter uniformly referred to as the electronic device), may identify a plurality of objects in an image in a multi-object scene, automatically track an object specified by the user, and generate and save a close-up video of that object. Meanwhile, the electronic device can also save the original video.
The original video is composed of the original images acquired by the camera. The close-up video is obtained by cropping the original images with the principal angle as the center, so that it is a video in which the principal angle is always the shooting center. Thus, after the principal angle is selected, the user can shoot a close-up video centered on the principal angle and simultaneously obtain the original video consisting of the original images acquired by the camera.
The electronic device is not limited to a mobile phone or a tablet computer; it may also be a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular telephone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, a vehicle-mounted device, a smart home device, and/or a smart city device. The specific type of the electronic device is not particularly limited by the embodiments of the present application.
In addition, in the present application, focusing is the process of making the photographed object image clearly by changing the distance between the lens and the imaging surface (image sensor) through the camera's focusing mechanism. Auto-focusing uses the principle of light reflection from the object: an image sensor (CCD or CMOS) on the camera in the mobile phone receives the reflected light to obtain an original image, and an electric focusing device is driven to focus based on calculations performed on that image. This is essentially a set of data calculation methods integrated in the mobile phone's ISP (image signal processor). When the viewfinder captures the original image, the image data is transmitted to the ISP as raw data; the ISP analyzes it to obtain the distance by which the lens needs to be adjusted, and then drives the voice coil motor to make the adjustment so that the image becomes clear. As seen by the mobile phone user, this is the auto-focusing process. In mobile phone auto-focusing, the lens is held in the voice coil motor, and the position of the lens can be changed by driving the voice coil motor.
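The closed loop described above (measure sharpness, compute a lens adjustment, drive the voice coil motor) can be illustrated with a minimal contrast-based auto-focus sketch. The FocusActuator and Frame types below are hypothetical stand-ins for the voice-coil-motor driver and the ISP-side sharpness metric; this is an illustration only, not the focusing algorithm of any particular ISP.

```java
// Hypothetical sketch of a contrast-based auto-focus loop.
public final class ContrastAutoFocus {
    public interface FocusActuator {
        void moveTo(int lensPosition);   // drive the voice coil motor
        Frame capture();                 // read a raw frame from the sensor
    }

    public interface Frame {
        double sharpness();              // e.g. a gradient-based contrast metric computed by the ISP
    }

    /** Simple hill climb: sample lens positions and settle on the sharpest one. */
    public static int focus(FocusActuator actuator, int minPos, int maxPos, int step) {
        int bestPos = minPos;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int pos = minPos; pos <= maxPos; pos += step) {
            actuator.moveTo(pos);
            double score = actuator.capture().sharpness();
            if (score > bestScore) {
                bestScore = score;
                bestPos = pos;
            }
        }
        actuator.moveTo(bestPos);        // return the lens to the sharpest position
        return bestPos;
    }
}
```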
The application scenario related to the embodiment of the application is described below.
In daily life, people often use electronic devices such as smartphones, tablet computers, etc. to take photographs of people. Fig. 1 is a schematic view of a shooting scene in a main angle mode exemplarily disclosed by the application. As shown in fig. 1, a user records a person 1 and a person 2 in a scene using an electronic device that displays a user interface as shown in fig. 1. The user interface may include a preview image 100 (large window 100), a home screen 110 (small window 110), a capture mode menu 120, a conversion camera control 131, a capture (video) control 132, an album 133, and a tools menu 140 (including a settings control 141, a filter switch 142, a flash switch 143, and a home mode switch 144). Wherein:
The preview image 100 (large window 100) is the image of the shooting scene acquired by the electronic device through the camera in real time. In fig. 1, the preview image 100 shows the images of person 1 and person 2 captured by the electronic device via the camera.
The main angle screen 110 (small window 110) is a screen formed from the principal angle in the large window 100 when the electronic device is in the principal angle mode. The electronic device may display the small window 110 in the preview image 100 in picture-in-picture form and display a close-up image of person 1 in the small window. The close-up image is obtained by cropping and otherwise processing the original image captured by the camera (the image displayed in the preview image 100) around the selected principal angle. In fig. 1, the main angle screen 110 displays the image of person 1 in the preview image 100. In the embodiment of the present application, the principal angle is not limited to a person; it may also be an animal, a plant, or another object, without limitation.
The shooting mode menu 120 may include options for multiple camera modes, such as portrait, photo, video, and night scene; different camera modes implement different shooting functions. The camera mode pointed to by the triangle in the shooting mode menu 120 indicates the initial or user-selected camera mode. As shown in fig. 1, the "triangle" points to "video", indicating that the camera is currently in the video recording mode.
The conversion camera control 131 is used for switching the camera for collecting the image between the front camera and the rear camera.
A shooting control 132, configured to cause the electronic device, in response to a user operation, to record the currently shot picture as a video.
Album 133 for viewing the pictures and videos taken by the user.
Setting control 141 is used for setting various parameters when capturing images.
A filter switch 142 for turning on or off the filter.
A flash switch 143 for turning on or off the flash.
A main angle mode switch 144 for turning on or off the main angle mode.
1. Entering the principal angle mode and selecting a principal angle in the preview state (fig. 2A-2E).
Fig. 2A-2E illustrate a set of user interface diagrams for a user to initiate a principal angle mode and select a principal angle.
Fig. 2A schematically illustrates a user interface of an electronic device. The user opens his own electronic device such that the display of the electronic device displays the desktop of the electronic device, i.e. the user interface 201. As shown in fig. 2A, the user interface 201 may include icons for at least one application (e.g., weather, calendar, mail, settings, application store, notes, photo album, phone, short message, browser, camera 2011, etc.). The positions of the icons of the application programs and the names of the corresponding application programs can be adjusted according to the preference of the user, which is not limited by the embodiment of the application.
It should be noted that, the interface schematic diagram of the electronic device shown in fig. 2A is an exemplary illustration of the embodiment of the present application, and the interface schematic diagram of the electronic device may be in other types, which is not limited in the embodiment of the present application.
In fig. 2A, a user may click on the camera control 2011 in the user interface 201, and after receiving an operation on the camera control 2011, the electronic device may display the user interface shown in fig. 2B. Fig. 2B is a user interface of one type of photographing exemplarily shown. As shown in fig. 2B, the user photographs person 1 and person 2 in the scene using the electronic device, i.e., the electronic device is currently in a photographing mode. The description of the specific control or switch meaning of fig. 2B may refer to the related description of fig. 1, which is not repeated.
In the present application, for convenience of describing the recording process in the principal angle mode, the user interface is described in landscape mode; however, the user may shoot or record video in either landscape or portrait orientation, without limitation.
In fig. 2B, the user may click on the video control 2021 in the user interface 202, and after receiving an operation on the video control 2021, the electronic device may display the user interface shown in fig. 2C. Fig. 2C is a user interface of a video recording exemplarily shown. As shown in fig. 2C, in the case where the electronic apparatus is in the video recording mode during photographing, the electronic apparatus may display the main angle mode switch 2031. The specific control or switch of fig. 2C may refer to the related description of fig. 1, which is not repeated.
In fig. 2C, the user may click on the principal mode switch 2031 in the user interface 203, and after receiving an operation on the principal mode switch 2031, the electronic apparatus may display the user interface shown in fig. 2D.
Fig. 2D illustrates a user interface 204 for the electronic device to take a photograph in the principal angle mode.
After the principal angle mode is selected, the electronic device may perform image content recognition (object recognition) on the image captured by the camera to identify the objects included in the image. Such objects include, but are not limited to, humans, animals, plants, and the like. The following description of the embodiments of the present application mainly uses persons as examples. While the electronic device displays the image captured by the camera in the preview window 2041, the electronic device may also display a selection box on each of the identified objects.
Referring to the user interface 204, the image acquired by the camera at a certain moment includes a person 1 and a person 2. After the electronic device receives the image acquired and generated by the camera, the electronic device can identify the object included in the image by using a preset object identification algorithm before displaying the image. Here, the object recognition algorithm may include a face recognition algorithm, a human body recognition algorithm. At this time, the electronic device can recognize 2 objects including the person 1 and the person 2 in the image by using the object recognition algorithm.
Of course, in some examples, the electronic device also supports identifying objects of animal, plant type, not limited to the characters 1, 2 described in the user interface 204 above. Accordingly, the object recognition algorithm described above further includes a recognition algorithm for one or more animals, and a recognition algorithm for one or more plants, to which embodiments of the present application are not limited.
In one aspect, the electronic device may display the images described above including person 1, person 2 in preview window 2041. On the other hand, before displaying the image, the electronic device may determine a selection frame corresponding to each of the objects. When the above-described image is displayed, the electronic apparatus may display a selection frame corresponding to each object, for example, a selection frame 20401 corresponding to person 1, a selection frame 20402 corresponding to person 2. At this time, the user can confirm the video main angle through the above selection frame.
At the same time, the user interface 204 may also display a prompt 2042, such as "please click on the principal-angle character to start automatic focus-tracking video". The prompt 2042 prompts the user to determine the video principal angle. The user may click on any of the above selection boxes according to the prompt 2042. The object corresponding to the selection box on which the user's click operation acts is the video principal angle determined by the user.
The user interface 204 (principal angle mode shooting interface) may also include a beauty control 2043. The beauty control 2043 can be used to adjust the face image of a person in the image. After detecting a user operation on the beauty control 2043, the electronic device can perform beautification processing on the person in the image and display the processed image in the preview window. The user interface 204 (principal angle mode shooting interface) may also include a focus control (not shown in fig. 2D), which may be used to set the focal length of the camera to adjust the camera's viewing range. When the viewing range of the camera changes, the image displayed in the preview window changes accordingly. The user interface 204 may also display other shooting controls, which are not enumerated here.
While the user interface 204 shown in fig. 2D is displayed, the electronic device may detect a user operation acting on any of the selection boxes. The user may click on a selection frame 20403 in the user interface 204, and after receiving an operation on the selection frame 20403, the electronic device may determine that an object corresponding to the selection frame is a principal angle.
For example, referring to the user interface 204 shown in fig. 2D, the electronic device may detect a user operation on the selection box 20401. In response to the above operation, the electronic apparatus may determine that the person 1 corresponding to the selection frame 20401 is the shooting principal angle.
The electronic device may then display a small window in the preview window 2041 in picture-in-picture form and a close-up image of person 1 in the small window. The close-up image is an image obtained by performing operations such as cutting around a selected principal angle on the basis of an original image (an image displayed in a preview window) acquired by a camera.
Fig. 2E illustrates a user interface 205 in which the electronic device displays a widget and displays a close-up image of character 1 in the widget.
As shown in FIG. 2E, a widget 2052 may be included in the preview window 2051 of the user interface 205. At this time, a close-up image of person 1 may be displayed in the small window 2052. As the image displayed in the preview window 2051 changes, the image displayed in the small window 2052 also changes accordingly. The small window 2052 always displays an image centered on the person 1. In this way, the video composed of the image displayed in the small window 2052 is a close-up video of the person 1.
Optionally, the close-up image displayed in the small window 2052 may also come from a different camera than the original image displayed in the preview window 2051. For example, the close-up image displayed in the small window 2052 may come from an image captured by a telephoto camera, and the original image displayed in the preview window 2051 may come from an image captured by a wide-angle camera. The telephoto camera and the wide-angle camera can collect images at the same time, and the images they acquire at the same moment correspond to each other. In this way, the user can view a larger range of scenery in the preview window 2051 while viewing a more detailed principal-angle image in the small window 2052.
After the shooting principal angle is determined in the principal angle mode, the selection box of the principal angle among the selection boxes in the preview window 2051 is distinguished from the selection boxes of non-principal angles; that is, the pre-selection box may become a selected box. For example, after person 1 is determined to be the shooting principal angle, the selection box 20401 in fig. 2D corresponding to person 1 may change to the form shown by the selection box 20501 in fig. 2E. The user may identify the selected shooting principal angle through the selection box 20501. Not limited to the selection box 20501 shown in the user interface 205, the electronic device may also display other styles of icons to indicate that person 1 is selected as the principal angle. For example, the color of the pre-selection box may change, or its border may be thickened, to form the selected box, without limitation.
Optionally, the window 2052 (widget 2052) for presenting a close-up image may also include a close control 20521 and a transpose control 20522. A close control 20521 can be used to close window 2052. The transpose control can be used to resize window 2052.
In some examples, the electronic device can cancel the previously determined principal angle (person 1) after closing the small window 2052 according to a user operation acting on the close control 20521. The electronic device may then instruct the user to re-select a shooting principal angle among the identified objects. At that point, the electronic device may again display the small window 2052 in the preview window 2051 based on the re-determined principal angle. The small window 2052 then displays a close-up image obtained by processing the original image with the new principal angle (which may be the same as or different from the previous close-up object) as the center.
In some examples, the close control 20521 may also be used to pause recording a close-up video after starting recording the video. At this point, the electronic device does not cancel the previously determined principal angle. After suspending recording, the close control 20521 may be replaced with an initiate control. After detecting the user operation on the start control, the electronic device may continue recording the close-up video centered around the principal angle.
In other examples, after closing the small window 2052, the electronic device merely stops displaying the small window, i.e., no longer displays the close-up image of the previously determined principal angle (person 1), but still maintains the previously determined principal angle. At this point, the preview window 2051 is not obscured by the small window 2052 showing the principal-angle close-up image, and the user can better monitor the image content of the original video, thereby obtaining an original video of higher quality. At this time, the user may cancel the selected principal angle, person 1, by clicking on the selection box 20501, and then select a new principal angle among the recognized objects.
Alternatively, after determining the principal angle, the electronic device may first generate a small window with a 9:16 aspect ratio (a vertical window) for presenting the close-up image, referring to the small window 2052 in fig. 2E. The above aspect ratio is exemplary; the vertical window includes, but is not limited to, a 9:16 aspect ratio. Upon detecting a user operation on the transpose control 20522, the electronic device can change the original vertical window to a horizontal window with an aspect ratio of 16:9. Of course, the electronic device may also generate the horizontal window by default and then adjust it to a vertical window according to a user operation; the embodiment of the present application is not limited in this respect. In this way, the user can adjust the video content of the close-up video with the transpose control 20522 to meet personalized needs.
Alternatively, the electronic device may display a widget showing a close-up image fixedly at the lower left (or lower right, upper left, upper right) of the screen. In some examples, the small window may also adjust the display position according to the position of the main angle in the preview window, so as to avoid blocking the main angle in the preview window.
Furthermore, the electronic device can also adjust the position and size of the small window according to user operations. In some examples, the electronic device may detect a long-press operation and a drag operation on the small window 2052; in response, the electronic device may move the small window to the position where the user's drag operation last stopped. In other examples, the electronic device may detect a double-click operation on the small window 2052; in response, the electronic device may zoom the small window 2052 in or out. Not limited to the long-press, drag, and double-click operations described above, the electronic device may also control the position and size of the small window through gesture recognition and voice recognition. For example, the electronic device may recognize, through the image captured by the camera, that the user has made a fist-clenching gesture, and in response, the electronic device may shrink the small window 2052. The electronic device may recognize, through the image captured by the camera, that the user has made a hand-opening gesture, and in response, the electronic device may enlarge the small window 2052.
2. Selecting a focus-tracking target, and retrieving a lost focus-tracking target, during preview (fig. 2E-2H).
Fig. 2F-2H are diagrams of user interfaces illustrating exemplary loss of principal angle for a set of video previews.
At some point after the principal angle mode is turned on (before video recording has started), the principal angle initially selected by the user may leave the viewing range of the electronic device's camera (e.g., the principal angle is not included in the preview window 2061 in fig. 2F). Referring to the user interface 206 shown in fig. 2F, the objects identifiable in the preview window 2061 include person 2 but do not include the aforementioned user-selected principal angle: person 1.
At this time, as shown in fig. 2F, the electronic device may display a prompt 2062, such as "principal angle lost, tracking exits after 5 seconds" or "principal angle lost, please aim at the principal angle" (not shown in the figure), to prompt the user that the principal angle is lost and that a close-up image of the principal angle cannot be determined. After 5 seconds, the electronic device may close the small window 2063 that displays the principal-angle close-up image. Referring to fig. 2G, at this point the electronic device may display the user interface 207, in which no small window is included in the preview window 2071.
In response to the prompt 2062, the user may adjust the camera position so that the main angle is within the view of the camera, so that the camera may re-capture an image including the main angle. Referring to the user interface 205 shown in fig. 2E, at this point, character 1 (principal angle) is re-detected in the preview window 2051, and the electronic device may then re-generate the widget 2052 and display the current principal angle-centered close-up image in the widget 2052.
In some embodiments, the electronic device may wait a few frames (or a period of time) before deciding whether to close the small window. As shown in fig. 2G, the electronic device closes the small window only if the principal angle is still not detected after an interval of a few frames (or a period of time). Between the disappearance of the principal angle and the confirmation that the small window should be closed, the electronic device may determine the image content displayed in the small window during this period using the cropping region of the last frame before the principal angle disappeared. Optionally, after the moment shown in the user interface 206 of fig. 2F (no principal angle detected), the electronic device may continue to detect the N frames of images following this frame, during which the small window 2063 of the user interface 206 may display the image of the last frame in which the principal angle was detected. If none of the N frames includes the principal angle, the electronic device closes the small window 2063. Alternatively, after the moment shown in the user interface 208 of fig. 2H (no principal angle detected), the electronic device may continue to detect the N frames (or a period of time, e.g., 5 s) of images following that frame. At this point, the small window 2083 of the user interface 208 may display the image detected in the preview window 2081.
It can be understood that the electronic equipment automatically exits the focus tracking after the focus tracking target is lost for 5 seconds. As shown in fig. 2G, the electronic device may display a user interface 207. The user interface 207 is an interface displayed after the electronic device exits the focus tracking. In this case, the user may select the focus tracking target again and click on the corresponding pre-selected box, or wait for the original focus tracking target to reenter the view range of the electronic device.
It is understood that the time interval from the loss of the focus-tracking target to the electronic device exiting tracking can also take other values, for example, 2 s, 3 s, or 6 s. This duration may be set according to actual needs, and the present application is not limited in this respect.
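The loss-tolerance behavior described above (reuse the last crop region for up to N frames or a time budget, then close the small window) can be summarized in a short sketch. This is a minimal illustration under the assumption of a per-frame detection callback; FocusTracker, onFrame, and Rect are hypothetical names, not APIs of the scheme.

```java
// Illustrative sketch of the defocus tolerance described above: keep showing
// the last crop region for up to N frames after the principal angle is lost,
// then signal that the small window should be closed.
public final class FocusTracker {
    /** Minimal stand-in for a crop rectangle such as android.graphics.Rect. */
    public static final class Rect {
        public int left, top, right, bottom;
    }

    private final int maxLostFrames;     // N frames, or a time budget such as 5 s
    private int lostFrames = 0;
    private Rect lastCropRegion = null;  // crop region of the last frame containing the principal angle

    public FocusTracker(int maxLostFrames) {
        this.maxLostFrames = maxLostFrames;
    }

    /** Returns the crop region for the small window, or null if the window should close. */
    public Rect onFrame(Rect detectedPrincipal) {
        if (detectedPrincipal != null) {   // principal angle found in this frame
            lostFrames = 0;
            lastCropRegion = detectedPrincipal;
            return lastCropRegion;
        }
        if (++lostFrames <= maxLostFrames) {
            return lastCropRegion;         // reuse the last crop region while waiting
        }
        return null;                       // N consecutive frames without the principal angle
    }
}
```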
In some embodiments of the present application, after the electronic device loses the focus tracking target and exits the focus tracking, the electronic device may retrieve the focus tracking target and redisplay the preview widget without the user switching the focus tracking target.
As shown in fig. 2E, the electronic device may display a user interface 205. Based on the image displayed in the preview window 2051, the person 1 reappears within the viewing range of the electronic device. At this time, the electronic device may again detect the focus tracking target and display the widget 2052 and the selection box 20501.
3. Recording, pausing, saving, and viewing video in the focus-tracking-successful state (fig. 2I-2L).
The electronic device may detect a user operation on a start video control 2092 included in the user interface 209 shown in fig. 2I. In response to the user operation, the electronic device may begin recording video.
As shown in fig. 2J, the electronic device may display a user interface 210. The user interface 210 is specifically referred to above and will not be described in detail.
The user interface 210 may display the video recording time corresponding to the preview window 2101, i.e., the recording time of the original video. As shown in fig. 2J, the video recording time corresponding to the preview window 2101 is "00:01", i.e., the recording time of the original video is 1 s.
It can be appreciated that the widget 2103 can hide the control 20521 and the control 20522 after the electronic device begins recording video in the main angle mode. As shown in FIG. 2J, the widget 2103 may include a recording end control 21031 and a recording time (e.g., 00:01). The small window recording time indicates that the recording time of the close-up video is 1s.
It should be noted that, regarding the related meaning of the original video and the close-up video, reference is made to the above, and no further description is given here.
The recording end control 21031 is used to end recording of the video corresponding to the small window 2103 (i.e., the close-up video). The large window recording control 2102 includes a (large window) recording end control 21021, which is used to end recording of the video corresponding to the preview window 2101 (i.e., the original video). The large window recording control 2102 also includes a recording pause control 21022 for pausing the recording of the original video.
In some embodiments, once the electronic device pauses recording the original video, recording of the close-up video is paused. Accordingly, once the electronic device continues to record the original video, the close-up video also continues to record.
For example, the electronic device may detect a user operation acting on pause video control 21022. In response to the user operation, the electronic device may display a user interface 211 as shown in fig. 2K. The large window video controls 2112 of the user interface 211 may include a resume video control 21122. Resume video control 21122 is used to continue recording the original video. As shown in fig. 2K, the video recording time displayed in the preview box 2111 included in the user interface 211 remains "00:05". This means that the original video pauses recording and the close-up video ends recording.
In other embodiments, if the electronic device pauses recording the original video, the recording of the close-up video is not paused.
In still other embodiments, if the electronic device pauses recording the original video, the recording of the close-up video will also pause.
In some embodiments, widget 2103 in user interface 210 may also include a pause video control (not shown). The record pause control may be used to pause recording of a close-up video. In this case, once the electronic device detects a user operation on a pause video control included in the widget 2103, the electronic device may pause recording of the close-up video in response to the user operation. Correspondingly, the electronic device may continue recording the original video.
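The linkage between the two recordings in the embodiments above can be sketched as a simple state holder: pausing the large-window (original) recording also suspends the small-window (close-up) recording, while the close-up recording can be paused on its own without affecting the original recording. The class and method names below are illustrative assumptions, not part of the scheme.

```java
// Illustrative sketch of the recording-state linkage described above.
public final class LinkedRecordingState {
    private boolean originalPaused = false;
    private boolean closeUpPaused = false;

    public void pauseOriginal() {
        originalPaused = true;
        closeUpPaused = true;     // the close-up cannot run while the original is paused
    }

    public void resumeOriginal() {
        originalPaused = false;
        closeUpPaused = false;    // resuming the original also resumes the close-up
    }

    public void pauseCloseUp() {
        closeUpPaused = true;     // the original recording continues
    }

    public void resumeCloseUp() {
        closeUpPaused = false;
    }

    public boolean isOriginalRecording() { return !originalPaused; }
    public boolean isCloseUpRecording()  { return !originalPaused && !closeUpPaused; }
}
```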
The electronic device may detect a user operation acting on the user interface 210 to end the video recording control 21021. In response to the user operation, the electronic device may display a user interface 205 as shown in fig. 2E. The controls included in user interface 205 are substantially identical to the controls included in user interface 210. The album shortcut control 2054 included in the user interface 205 may display a thumbnail of the first frame image in the original video saved by the electronic device.
In some embodiments, if the electronic device ends recording the original video, the recording of the close-up video does not end. In this case, once the electronic device detects a user operation on the end video control 21031 in the user interface 210, the electronic device may end recording the close-up video in response to the user operation.
The electronic device can detect a user operation on the album shortcut control 2054 included in the user interface 205. In response to the user operation, the electronic device may display a user interface 212 as shown in fig. 2L. The user interface 212 may include a display area 2121 and a display area 2122. The electronic device can detect a user operation acting on the display area 2121. In response to the user operation, the electronic device may play the original video, and correspondingly, the electronic device may display a picture in the played original video. The electronic device can detect a user operation acting on the display area 2122. In response to the user operation, the electronic device may play the close-up video, and correspondingly, the electronic device may display a screen in the play close-up video.
4. Switching the focus-tracking target during recording.
(1) Switching the focus-tracking target while the original video and the close-up video are being recorded simultaneously (fig. 2M).
The user interface 210 illustrated in fig. 2J may include a preselection box 21041 for person 1 and a preselection box 21042 for person 2, both detectable by the electronic device. The user clicks on the preselection box 21042 of person 2. In response to this user operation, the electronic device may display a user interface 213 as shown in fig. 2M. The user interface 213 may include a preselection box 21341 for person 1 and a selection box 21342 for person 2; the selection box 21342 is used to frame the focus-tracking target (the principal angle), and the small window 2133 in the user interface 213 displays a close-up image of person 2. As shown in fig. 2M, the preselection box 21341 frames person 1. That is, the user has switched the focus-tracking target from person 1 to person 2, and the electronic device has successfully tracked the new target.
(2) Switching the focus-tracking target when recording of the close-up video has ended but the original video is still being recorded.
The electronic device can detect a user operation on the end of recording control 21031 included in the user interface 210 as shown in fig. 2J. In response to the user operation, the electronic device may display a user interface 211 as shown in fig. 2K. At this point, the electronic device has finished recording the close-up video with character 1 as the focus target, but is still recording the original video.
The electronic device can detect a user operation acting on a preselection box 21142 included in the user interface 211 as shown in fig. 2K. In response to the user operation, the electronic apparatus may newly determine the character selected by the preselection box 21142 (i.e., character 2) as the focus target (principal angle). The electronic device may record a close-up video of character 2.
It should be noted that the above user interfaces are only examples provided by the present application, and should not be construed as limiting the present application.
The following describes the apparatus according to the embodiment of the present application.
Fig. 3 is a schematic hardware structure of an electronic device according to an embodiment of the present application.
The electronic device may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (Universal Serial Bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (Subscriber Identification Module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device. In other embodiments of the application, the electronic device may include more or less components than illustrated, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (Application Processor, AP), a modem processor, a graphics processor (Graphics Processing unit, GPU), an image signal processor (Image Signal Processor, ISP), a controller, a memory, a video codec, a digital signal processor (Digital Signal Processor, DSP), a baseband processor, and/or a Neural network processor (Neural-network Processing Unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors. The controller can be a neural center and a command center of the electronic device. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution. A memory may also be provided in the processor 110 for storing instructions and data.
In the embodiment provided by the present application, the electronic device may perform the photographing method through the processor 110.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the electronic device and data processing by executing instructions stored in the internal memory 121.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc. applied on an electronic device. The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (Wireless Local Area Networks, WLAN) (e.g., wireless fidelity (Wireless Fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (Global Navigation Satellite System, GNSS), frequency modulation (Frequency Modulation, FM), near field wireless communication technology (Near Field Communication, NFC), infrared technology (IR), etc., as applied to electronic devices.
The electronic device implements display functions via a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device may implement the acquisition function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image or video visible to naked eyes. ISP can also optimize the noise, brightness and color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (Charge Coupled Device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to an ISP to be converted into a digital image or video signal. The ISP outputs digital image or video signals to the DSP for processing. The DSP converts digital image or video signals into standard RGB, YUV, etc. format image or video signals. In some embodiments, the electronic device may include a plurality of cameras 193.
Video codecs are used to compress or decompress digital video. The electronic device may support one or more video codecs. In this way, the electronic device may play or record video in a variety of encoding formats, such as: dynamic picture experts group (Moving Picture Experts Group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
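For illustration only: on Android, a hardware video encoder of the kind described here is typically reached through the MediaCodec API. The following is a minimal sketch of creating and configuring an H.264 (AVC) encoder fed through an input Surface; the resolution, bit rate, frame rate, and I-frame interval are illustrative assumptions, not values prescribed by the present application.

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;

// Minimal sketch: configure an H.264 (AVC) video encoder via MediaCodec.
public final class EncoderFactory {
    public static MediaCodec createVideoEncoder() throws Exception {
        MediaFormat format = MediaFormat.createVideoFormat(
                MediaFormat.MIMETYPE_VIDEO_AVC, 1920, 1080);        // illustrative resolution
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 10_000_000);    // illustrative 10 Mbps
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);          // illustrative 30 fps
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);     // one I-frame per second

        MediaCodec encoder = MediaCodec.createEncoderByType(
                MediaFormat.MIMETYPE_VIDEO_AVC);
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        return encoder;
    }

    // Called after configure() and before start(); camera image data rendered
    // into this Surface is consumed by the encoder.
    public static Surface inputSurfaceOf(MediaCodec encoder) {
        return encoder.createInputSurface();
    }
}
```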
The gyro sensor 180B may be used to determine a motion gesture of the electronic device. In some embodiments, the angular velocity of the electronic device about three axes (i.e., x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device in various directions (typically three axes). The magnitude and direction of gravity can be detected when the electronic device is stationary. The electronic equipment gesture recognition method can also be used for recognizing the gesture of the electronic equipment, and is applied to horizontal and vertical screen switching, pedometers and other applications.
The touch sensor 180K, also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is for detecting a touch operation acting thereon or thereabout. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device at a different location than the display 194.
It should be noted that, the function of other modules not mentioned in the electronic device shown in fig. 3 may refer to related art documents, which are not described in the present application.
Fig. 4 is a schematic software structure of an electronic device according to an embodiment of the present application.
As shown in fig. 4, the software framework of the electronic device according to the present application may include an application layer, an application framework layer (FWK), a system library, an Android Runtime, a hardware abstraction layer (HAL), and a kernel layer (kernel).
The application layer may include a series of application packages (also referred to as applications) such as cameras, gallery, calendar, talk, map, navigation, WLAN, bluetooth, music, video, short messages, etc. Among other things, camera applications may be used to acquire images and video.
As shown in fig. 4, the camera application may include a camera mode module, a stream management module, an encoding control module, and a storage module. The camera mode module may be used to monitor user operations and determine the mode of the camera. Modes of the camera may include, but are not limited to: a photographing mode, a video preview mode, a video recording mode, a portrait mode, a night scene mode, and the like. The video preview mode may include the video preview mode in the principal angle mode, and the video recording mode may include the video recording mode in the principal angle mode. The stream management module is used for data stream management, for example, the delivery of data stream configuration information (referred to simply as stream configuration information). The stream management module may include areas for data stream buffering, for example, Video-surface and Video-Track-surface, which may store the two paths of image data returned by the camera HAL. The encoding control module may include a mixer, a video encoder, and an audio encoder. The video encoder is used to encode the images acquired by the camera; the audio encoder is used to encode the collected audio; and the mixer is used to combine the encoded images and the encoded audio into a video file. The storage module is used to store the original video and the close-up video.
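As an illustration, the mixer described above corresponds, on Android, to the MediaMuxer API, which interleaves an encoded video track and an encoded audio track into one file. The following is a minimal sketch under that assumption; the SimpleMixer class and its method names are illustrative, and the track formats are taken from the encoders' output.

```java
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import java.nio.ByteBuffer;

// Minimal sketch: merge one encoded video track and one encoded audio track
// into a single MP4 file, as the mixer in the encoding control module does.
public final class SimpleMixer {
    private final MediaMuxer muxer;
    private int videoTrack = -1;
    private int audioTrack = -1;

    public SimpleMixer(String outputPath) throws Exception {
        muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    }

    // Called once with the MediaFormat produced by each encoder.
    public void addTracks(MediaFormat videoFormat, MediaFormat audioFormat) {
        videoTrack = muxer.addTrack(videoFormat);
        audioTrack = muxer.addTrack(audioFormat);
        muxer.start();
    }

    // Called for every encoded buffer drained from the video encoder.
    public void writeVideo(ByteBuffer buf, MediaCodec.BufferInfo info) {
        muxer.writeSampleData(videoTrack, buf, info);
    }

    // Called for every encoded buffer drained from the audio encoder.
    public void writeAudio(ByteBuffer buf, MediaCodec.BufferInfo info) {
        muxer.writeSampleData(audioTrack, buf, info);
    }

    public void finish() {
        muxer.stop();
        muxer.release();
    }
}
```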
The application framework layer provides an application programming interface (Application Programming Interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 4, the application framework layer may include a camera FWK (framework) and a media FWK (framework). The camera FWK may provide an API interface for an application (e.g., the camera application) to call; it receives requests from the application, maintains the business logic of the request flow internally, sends the request to the Camera Service for processing through the camera AIDL cross-process interface, waits for the Camera Service to return a result, and then sends the final result back to the camera application. AIDL stands for Android Interface Definition Language. Similarly, the media FWK may provide an API interface for the corresponding application (e.g., the camera application) to call, thereby receiving requests from the application, passing them downward, and returning the results to the application.
It is to be appreciated that the application framework layer can also include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like. For specific meaning, reference is made to the related art documents, and description thereof is not given here.
The runtime is responsible for scheduling and management of the system. The runtime includes a core library and a virtual machine. The core library comprises two parts: one part is the functions that the programming language (e.g., the Java language) needs to call, and the other part is the core library of the system.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes the programming files (e.g., java files) of the application layer and the application framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface Manager (Surface Manager), media library (Media Libraries), three-dimensional graphics processing library (e.g., openGL ES), two-dimensional graphics engine (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of two-Dimensional (2D) and three-Dimensional (3D) layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing 3D graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The hardware abstraction layer (HAL) is an interface layer located between the operating system kernel and the upper-layer software; its purpose is to abstract the hardware. The hardware abstraction layer is an abstraction interface driven by the device kernel, and it provides higher-level Java API frameworks with application programming interfaces for access to the underlying devices. The HAL contains a plurality of library modules, such as the camera HAL, the vendor library, the display screen, Bluetooth, and audio. Each library module implements an interface for a particular type of hardware component. It is understood that the camera HAL may provide an interface for the camera FWK to access hardware components such as the camera, and the vendor library may provide an interface for the media FWK to access hardware components such as the encoder. When a system framework layer API requires access to the hardware of the portable device, the Android operating system loads the corresponding library module for that hardware component.
The kernel layer is the basis of the Android operating system, and the final functions of the Android operating system are completed through the kernel layer. The inner core layer at least comprises a display driver, a camera driver, an audio driver, a sensor driver, an audio driver and the like.
It should be noted that the software structure of the electronic device shown in fig. 4 is only an example provided by the present application, and the specific module division in the different layers of the Android operating system is not limited thereto; for details, reference may be made to the description of the software structure of the Android operating system in the conventional technology. In addition, the shooting method provided by the present application can also be implemented based on other operating systems, which are not enumerated one by one in the present application.
Based on the software and hardware structures of the electronic device shown in fig. 3 and fig. 4, fig. 5 is a schematic diagram of the software structure of another electronic device according to the embodiment of the present application, and the shooting method provided by the embodiment of the present application is described in conjunction with fig. 5 from the perspective of software and hardware collaboration.
The camera mode control may monitor user operations on the principal angle mode control, determine that the current camera mode has changed to the video preview mode in the principal angle mode, and notify the stream management module. Correspondingly, the stream management module can issue the stream configuration information of the two data streams to the camera FWK. After receiving the stream configuration information, the camera FWK may forward it to the camera HAL. The camera HAL processes the image data collected by the camera to obtain two paths of image data with different effects and returns them to the camera FWK. Correspondingly, the camera FWK returns the two paths of image data to the camera application, where they are stored in the camera application's storage areas Video-surface and Video-Track-surface, respectively.
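As a purely illustrative sketch: with the Camera2 API, delivering two output targets of this kind amounts to creating a capture session with two Surfaces, for example a preview Surface and an encoder input Surface. The wiring below is a minimal sketch under that assumption (error handling and the HAL-side effect processing are omitted), not the actual stream configuration of the scheme.

```java
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.os.Handler;
import android.view.Surface;
import java.util.Arrays;

// Minimal sketch: one camera device feeding two output surfaces, comparable to
// the two data streams (Video-surface and Video-Track-surface) described above.
public final class TwoStreamSession {
    public static void open(final CameraDevice device, final Surface previewSurface,
                            final Surface encoderSurface, final Handler handler) throws Exception {
        device.createCaptureSession(
                Arrays.asList(previewSurface, encoderSurface),
                new CameraCaptureSession.StateCallback() {
                    @Override public void onConfigured(CameraCaptureSession session) {
                        try {
                            CaptureRequest.Builder builder =
                                    device.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
                            builder.addTarget(previewSurface);   // first data stream
                            builder.addTarget(encoderSurface);   // second data stream
                            session.setRepeatingRequest(builder.build(), null, handler);
                        } catch (Exception e) {
                            throw new RuntimeException(e);
                        }
                    }
                    @Override public void onConfigureFailed(CameraCaptureSession session) {
                        // configuration failed; a real implementation would report this
                    }
                },
                handler);
    }
}
```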
The camera mode control can monitor the user operation on the video recording start control, so that the current camera mode is determined to be changed into a video recording mode under a main angle mode, and the coding control module is notified. The encoding control module controls the encoder to encode two paths of image data in the Video-surface and the Video-Track-surface, so that two or more videos are obtained.
Other modules in fig. 5 may refer to the related description of fig. 4, and are not described in detail.
A specific implementation of the foregoing embodiment is described below in conjunction with fig. 6A-6C.
Fig. 6A-6C are schematic flow diagrams of a method for processing video data in a set of principal angle modes according to an embodiment of the present application.
It is understood that the user may click on the camera application icon. Correspondingly, the camera mode module monitors the user operation, determines the camera start and sends a camera start message to the camera. The camera start message is used for requesting to start the camera. After the camera receives the camera starting message, the camera is started to acquire images.
1. Configuring the data streams (steps S601-S609, as shown in fig. 6A)
The user clicks the principal angle mode control (control that initiates principal angle mode). Accordingly, the camera mode module monitors the user operation and determines that the electronic device needs to enter the principal angle mode.
S601: the camera mode module sends a principal angle mode initiation message to the stream management module and the storage module.
It will be appreciated that the principal angle mode initiation message is used to notify the other modules in the camera application that a preview in the principal angle mode needs to be initiated.
Accordingly, the stream management module and the storage module may receive a principal angle mode initiation message from the camera mode module.
S602: the stream management module configures the first path of data stream and the second path of data stream based on the principal angle mode initiation message.
After the main angle mode starting message is acquired by the stream management module, the first path of data stream and the second path of data stream are configured, and stream identifiers of the first path of data stream and the second path of data stream are determined to be a first stream identifier and a second stream identifier respectively.
It is understood that the configuration information of the first path data stream and the second path data stream may be the same. The flow identification may be used to distinguish between different data flows. The flow management module can distinguish the data flow corresponding to the first path of data flow and the second path of data flow by setting the flow identification. In particular, the first stream identifier is used to mark the first path of data stream, and the second stream identifier is used to mark the second path of data stream.
In some embodiments, the first stream is identified as video-surface-1 and the second stream is identified as video-surface-2.
It should be noted that the stream management module includes a storage area that can store the first path of data stream and the second path of data stream. For convenience of description, a storage area in the stream management module for storing the first path of data stream is referred to as a first storage area, and a storage area in the stream management module for storing the second path of data stream is referred to as a second storage area. As shown in fig. 6A, the Video-surface in the stream management module may be a first storage area for storing a first path of data stream. The stream identifier corresponding to the Video-surface is the first stream identifier. And the Video-Track-surface in the stream management module may be a second storage area for storing a second path of data stream. The stream identifier corresponding to the Video-Track-surface is the second stream identifier.
S603: the storage module generates a first file and a second file based on the current system time, respectively.
The time stamps of the first file and the second file are respectively a first time and a second time.
After the storage module acquires the main angle mode starting message, a first file with a time stamp of a first moment and a second file with a time stamp of a second moment are respectively generated based on the current system time.
It is understood that the initial states of the first file and the second file respectively generated by the storage module based on the current system time are empty (i.e., no data).
In some embodiments, the second time is no earlier than the first time.
The execution order of S602 and S603 is not limited.
S604: the storage module transmits the file information W1 to the encoding control module.
Wherein the file information W1 includes file names of the first file and the second file.
In some embodiments, the file information W1 may further include time stamps (i.e., first time and second time) of the first file and the second file. The file information W1 may further include a correspondence of file names and time stamps.
After the storage module generates the first file and the second file, file information W1 is transmitted to the encoding control module.
It will be appreciated that the file information W1 may also include other content related to the first file and the second file, which the present application is not limited to. In addition, the file information W1 may be in the form of text, numerals, character strings, etc., which the present application is not limited to.
By way of example, the file information W1 may be video-1-time-1-video-2-time-2. See Table 1, video-1 corresponds to time-1. video-1 is the file name of the first file. time-1 is the timestamp of the first file (i.e., the first time). video-2 corresponds to time-2. video-2 is the file name of the second file. time-2 is the timestamp of the second file (i.e., the second time).
TABLE 1

File name | Timestamp
video-1   | time-1 (first time)
video-2   | time-2 (second time)
As another example, the file information W1 may be 10987662098767. Bit 1 and bit 8 of the string represent file names. Bits 2 to 7 represent the timestamp of the file named by bit 1, and bits 9 to 14 represent the timestamp of the file named by bit 8.
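As a purely illustrative decoding of this fixed-width layout, the sketch below parses the example string into file-name/timestamp pairs. The field widths (1 digit of file name followed by 6 digits of timestamp) are those of the example above and are not mandated by the scheme.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: decode file information W1 laid out as
// [1-digit file name][6-digit timestamp][1-digit file name][6-digit timestamp].
public final class FileInfoParser {
    public static Map<String, String> parse(String w1) {
        Map<String, String> nameToTimestamp = new LinkedHashMap<>();
        for (int i = 0; i + 7 <= w1.length(); i += 7) {
            String fileName = w1.substring(i, i + 1);        // e.g. "1" or "2"
            String timestamp = w1.substring(i + 1, i + 7);   // e.g. "098766"
            nameToTimestamp.put(fileName, timestamp);
        }
        return nameToTimestamp;
    }

    public static void main(String[] args) {
        // Prints {1=098766, 2=098767} for the example string in the text.
        System.out.println(parse("10987662098767"));
    }
}
```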
In some embodiments, the first file may be named with its timestamp.
Correspondingly, the encoding control module can receive the file information W1 sent by the storage module.
S605: the stream management module sends stream identification parameter information to the encoding control module.
After the stream management module configures the first path of data stream and the second path of data stream, stream identification parameter information can be sent to the coding control module.
The stream identification parameter information includes the first stream identifier and the second stream identifier.
It will be appreciated that the stream identification parameter information may be in the form of text, numbers, strings, etc., as the application is not limited in this regard.
For example, the flow identification parameter information may be 1, 2. For another example, the stream identification parameter information may be video-surface-1-video-surface-2.
Correspondingly, the coding control module can receive the stream identification parameter information sent by the stream management module.
S606: the encoding control module creates a first encoder and a second encoder based on the stream identification parameter information and the file information W1, and associates a first file, a first data stream and the first encoder; the second file, the second data stream, and the second encoder are associated.
The encoding control module may create a first encoder and a second encoder based on the stream identification parameter information and the file information W1, and associate the first path data stream, the second path data stream, the first file, and the second file with the two encoders: the first path data stream serves as the input of the first encoder, and the second path data stream serves as the input of the second encoder; the first file stores the video encoded by the first encoder, and the second file stores the video encoded by the second encoder. The first encoder encodes and generates the video corresponding to the preview large window, and the second encoder encodes and generates the video corresponding to the preview small window.
It can be understood that after receiving the stream identification parameter information, the encoding control module may parse the stream identification parameter information to determine that the number of data streams is 2, and may also determine a correspondence between the stream identification and the data streams. Similarly, after receiving the file information W1, the encoding control module may parse it to determine the number of files as 2, and file names. Accordingly, the encoding control module creates two encoders (i.e., a first encoder and a second encoder) and associates two data streams with the two encoders according to the correspondence of the stream identification and the data stream. The encoding control module may also associate two files with two encoders based on file names.
Specifically, the encoding control module may associate the first stream identifier and the first file with the first encoder, respectively, and use a data stream corresponding to the first stream identifier as an input of the first encoder, and use the first file as a storage file of output data of the first encoder. Similarly, the encoding control module may associate the second stream identifier and the second file with the second encoder, respectively, and use a data stream corresponding to the second stream identifier as an input of the second encoder, and use the second file as a storage file of output data of the second encoder.
In some embodiments of the present application, the first encoder is video-codec-1 and the second encoder is video-codec-2.
Illustratively, the stream identification parameter information is video-surface-1-video-surface-2. The encoding control module parses the stream identification parameter information and determines that the stream management module has configured two data streams, whose stream identifiers are video-surface-1 and video-surface-2 respectively. As shown in Table 2, the encoding control module may associate the data stream whose stream identifier is video-surface-1 (i.e., the first path data stream) with video-codec-1, and video-codec-1 is used to encode the video corresponding to the preview large window (the original video). This means that the data stream identified by video-surface-1 (i.e., the first path data stream) is the data stream corresponding to the preview large window. Similarly, as shown in Table 2, the encoding control module may associate the data stream whose stream identifier is video-surface-2 (i.e., the second path data stream) with video-codec-2, and video-codec-2 is used to encode the video corresponding to the preview small window (the close-up video). This means that the data stream identified by video-surface-2 (i.e., the second path data stream) is the data stream corresponding to the preview small window.
Illustratively, the file information W1 is video-1-time-1-video-2-time-2. The encoding control module parses the file information W1 and determines that the storage module has generated two files, whose file names are video-1 and video-2 respectively. As shown in Table 2, the encoding control module may associate the file named video-1 (i.e., the first file) with the first encoder, and the first encoder is used to encode and generate the video corresponding to the preview large window. This means that the file named video-1 (i.e., the first file) is used to store the video corresponding to the preview large window. Similarly, as shown in Table 2, the encoding control module may associate the file named video-2 (i.e., the second file) with the second encoder, and the second encoder is used to encode and generate the video corresponding to the preview small window. This means that the file named video-2 (i.e., the second file) is used to store the video corresponding to the preview small window.
TABLE 2
Encoder Stream identification File name Time stamp
video-codec-1 video-surface-1 video-1 time-1
video-codec-2 video-surface-2 video-2 time-2
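Table 2 is essentially an association record. In code, the encoding control module could keep one such record per encoder; the sketch below is a hypothetical illustration (class and field names are assumptions), with one instance per row of Table 2, e.g. new EncoderBinding("video-codec-1", "video-surface-1", "video-1", "time-1").

```java
// Hypothetical association record kept by the encoding control module,
// mirroring one row of Table 2: which stream feeds which encoder, and
// which file stores that encoder's output.
public class EncoderBinding {
    final String encoderName; // e.g., "video-codec-1"
    final String streamId;    // e.g., "video-surface-1" (encoder input)
    final String fileName;    // e.g., "video-1" (storage file for encoder output)
    final String timestamp;   // e.g., "time-1"

    public EncoderBinding(String encoderName, String streamId,
                          String fileName, String timestamp) {
        this.encoderName = encoderName;
        this.streamId = streamId;
        this.fileName = fileName;
        this.timestamp = timestamp;
    }
}
```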
S607: the encoding control module sends an encoder initialization request to the encoder through the media FWK.
After the encoding control module performs S606, an encoder initialization request may be transmitted to the encoder through the media FWK.
Wherein the encoder initialization request includes the encoding parameters of the first encoder and the encoding parameters of the second encoder.
It is understood that the encoding parameters of the encoder may include format and size.
Accordingly, the encoder may receive an encoder initialization request from the encoding control module via the media FWK.
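On Android, the "format and size" carried in such an initialization request map naturally onto a MediaFormat passed to MediaCodec.configure(). The sketch below shows one plausible initialization of a video encoder via the public MediaCodec API; the AVC MIME type, bitrate, and frame-rate values are illustrative assumptions rather than values from the patent.

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import java.io.IOException;

public class EncoderFactory {
    // Creates and configures an H.264/AVC encoder for the given picture size.
    public static MediaCodec createVideoEncoder(int width, int height) throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat(
                MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 10_000_000); // illustrative value
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);       // illustrative value
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        // For camera streams the input Surface would be obtained here via
        // encoder.createInputSurface() and handed to the capture pipeline.
        return encoder;
    }
}
```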
S608: the stream management module sends stream identification parameter information to the camera HAL.
After the stream management module configures the first path data stream and the second path data stream based on the principal angle mode start message, it may send the stream identification parameter information to the camera HAL.
In particular, the stream management module may send the stream identification parameter information to the camera FWK, which then sends the stream identification parameter information to the camera HAL. The description of the flow identification parameter information may be referred to above, and will not be repeated here.
Accordingly, the camera HAL may receive the stream identification parameter information from the stream management module.
S609: the camera HAL analyzes the stream identification parameter information to obtain a first stream identification and a second stream identification, and the first stream identification and the second stream identification are respectively matched with a first image effect and a second image effect.
The first image effect is an image effect corresponding to an image displayed in a preview large window, and the second image effect is an image effect corresponding to an image displayed in a preview small window.
It is understood that the image effects may include a clipping range, an image size, a filter effect, a beauty effect, and the like.
Illustratively, the first image effect includes: an image size of 1920 px × 1080 px, an added filter effect, and an added beauty effect.
The stream identification parameter information may be, for example, video-surface-1-video-surface-2. The camera HAL analyzes the stream identification parameter information and determines that the stream management module configures two paths of data streams. The flow identifiers of the two paths of data flows are respectively: video-surface-1 and video-surface-2. The camera HAL may match the data stream identifying the stream as video-surface-1 (i.e., the first path of data stream) with the first image effect and the data stream identifying the stream as video-surface-2 (i.e., the second path of data stream) with the second image effect.
It will be appreciated that the present application is not limited to the order of step S602 and step S604. The present application does not limit the order of S605 and S608.
2. The big and small windows initiate recording (step S610-step S616 shown in fig. 6B).
When focus tracking succeeds and the preview small window is displayed, the user can click the start recording control on the interface. Correspondingly, the camera mode module detects the user operation and determines that the electronic device needs to record the videos corresponding to the preview small window and the preview large window.
S610: the camera mode module sends a video recording start message to the stream management module and the coding control module.
When the camera mode module detects the operation of clicking the start recording control, it can send a video recording start message to the stream management module and the encoding control module.
It will be appreciated that the video recording start message is used to prompt other modules in the camera application that the user needs to begin recording the preview large window and the preview small window.
Correspondingly, the stream management module and the coding control module can receive the video start message sent by the camera mode module.
S611: the stream management module sends a dynamic request D1 to the camera HAL through the camera FWK.
Wherein, the dynamic request D1 is used for requesting to return the first path of data stream and the second path of data stream, and the dynamic request D1 may include a first stream identifier and a second stream identifier.
It is understood that the stream management module may send the dynamic request D1 to the camera FWK, which then sends the dynamic request D1 to the camera HAL.
Accordingly, the camera HAL may receive the dynamic request D1 sent by the stream management module. The camera HAL may then receive raw data from the camera.
S612: the camera HAL processes the raw image data based on the dynamic request D1 to obtain first image data and second image data.
It will be appreciated that after the camera HAL receives the dynamic request D1, it may parse the request to obtain the first stream identifier and the second stream identifier, and may determine the image effects matching those identifiers according to the previously established matching between stream identifiers and image effects. Specifically, the camera HAL may determine that the image effect matching the first stream identifier is the first image effect, and process the raw data according to the first image effect to obtain the first image data; the stream identifier of the first image data is the first stream identifier. Similarly, the camera HAL may determine that the image effect matching the second stream identifier is the second image effect, and process the raw data according to the second image effect to obtain the second image data; the stream identifier of the second image data is the second stream identifier.
S613: the camera HAL sends the first image data and the second image data to the stream management module via the camera FWK.
It is understood that the camera HAL may return the first image data to the stream management module as a first path data stream and the second image data to the stream management module as a second path data stream.
Accordingly, the stream management module may receive the first image data and the second image data transmitted by the camera HAL through the camera FWK and put them into the corresponding storage areas according to the stream identifications of the first image data and the second image data.
It is understood that the stream management module may store the first image data in the first storage area since the stream identification of the first image data is identical to the stream identification of the first storage area. Similarly, since the stream identification of the second image data coincides with the stream identification of the second storage area, the stream management module may store the second image data in the second storage area.
For example, the stream management module may store the first image data in the Video-surface and the second image data in the Video-Track-surface.
S614: the encoding control module sends an encoding command B1 to the first encoder and the second encoder via the media FWK.
The encoding command B1 is used to start the first encoder and the second encoder, which encode the image data corresponding to the preview large window and the image data corresponding to the preview small window respectively, thereby obtaining the original video and the close-up video.
It is understood that after the encoding control module receives the video recording start message sent by the camera mode module, it may start the first encoder and the second encoder. Specifically, the encoding control module may send the encoding command B1 to the media FWK, which then sends the encoding command B1 to the encoders. After receiving the encoding command B1, the respective encoders (i.e., the first encoder and the second encoder) may be started according to the command.
Illustratively, the encoding command B1 may include: codec-1.start and codec-2.start.
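The command strings can be read as a named encoder plus an action. A hypothetical dispatcher for such commands might look as follows; the command format "<encoderName>.<action>" is inferred from the examples above, and the use of MediaCodec here is an assumption.

```java
import android.media.MediaCodec;
import java.util.Map;

public class EncoderCommandDispatcher {
    // Dispatches commands of the form "<encoderName>.<action>",
    // e.g., "codec-1.start" or "codec-2.stop", to the matching codec instance.
    public static void dispatch(String command, Map<String, MediaCodec> encoders) {
        int dot = command.lastIndexOf('.');
        String name = command.substring(0, dot);    // e.g., "codec-1"
        String action = command.substring(dot + 1); // e.g., "start"
        MediaCodec codec = encoders.get(name);
        if ("start".equalsIgnoreCase(action)) {
            codec.start(); // begin consuming input and producing encoded output
        } else if ("stop".equalsIgnoreCase(action)) {
            codec.stop();  // end encoding for this path
        }
    }
}
```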
S615: the first encoder acquires first image data and encodes the first image to obtain third image data.
It will be appreciated that after the first encoder is started, the first image data may be obtained from the first storage area (e.g., Video-surface) of the stream management module, and then the first image data may be encoded to obtain the third image data. The encoder may then write the third image data into the first file of the storage module.
It is understood that the third image data may be the image in the original video mentioned above.
S616: the second encoder acquires second image data and encodes the second image to obtain fourth image data.
It will be appreciated that after the second encoder is started, the second image data may be obtained from the second storage area (e.g., Video-Track-surface) of the stream management module, and then the second image data may be encoded to obtain the fourth image data. The encoder may then write the fourth image data into the second file of the storage module.
It is understood that the fourth image data may be an image in the close-up video mentioned above.
It will be appreciated that the present application is not limited to the order of steps S611 and S614. The present application is not limited to the order of step S615 and step S616.
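Steps S615 and S616 each amount to draining an encoder's output queue and appending the encoded buffers, in order, to the corresponding file. A minimal drain step over the public MediaCodec/MediaMuxer API might look as follows; this is a simplified sketch (track setup on INFO_OUTPUT_FORMAT_CHANGED and end-of-stream handling are omitted).

```java
import android.media.MediaCodec;
import android.media.MediaMuxer;
import java.nio.ByteBuffer;

public class EncoderDrainer {
    // Drains whatever encoded buffers are ready and writes them, in order,
    // into the file behind the muxer (e.g., third image data -> first file).
    public static void drainOnce(MediaCodec encoder, MediaMuxer muxer, int trackIndex) {
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int outIndex = encoder.dequeueOutputBuffer(info, 10_000 /* us */);
        while (outIndex >= 0) {
            ByteBuffer encoded = encoder.getOutputBuffer(outIndex);
            boolean isConfig = (info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0;
            if (!isConfig && info.size > 0 && encoded != null) {
                muxer.writeSampleData(trackIndex, encoded, info); // sequential append
            }
            encoder.releaseOutputBuffer(outIndex, false);
            outIndex = encoder.dequeueOutputBuffer(info, 0);
        }
    }
}
```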
3. The large window and the small window end recording (step S617-step S623 as shown in fig. 6C).
Under the condition that the preview small window and the preview large window are normally recorded, a user can click a video recording ending control corresponding to the preview large window. Correspondingly, the camera mode module monitors the user operation and determines that the electronic equipment needs to finish recording the large preview window and the small preview window.
S617: the camera mode module sends a recording ending message to the stream management module, the storage module and the coding control module.
It will be appreciated that the end recording message is used to prompt other modules in the camera application that the user needs to end recording of the preview large window and the preview small window.
Correspondingly, the stream management module, the storage module and the coding control module can receive the recording ending message sent by the camera mode module.
S618: the flow management module deletes the first flow identification and the second flow identification included in the dynamic request D1.
After receiving the end recording message sent by the camera mode module, the stream management module may delete the first stream identifier and the second stream identifier from the original dynamic request (i.e., the dynamic request D1).
The stream management module may also generate a dynamic request D2. It is understood that after the stream management module generates a new dynamic request, the new dynamic request may be issued continuously.
S619: the encoding control module sends an encoding command B2 to the first encoder and the second encoder via the media FWK.
Wherein the encoding command B2 is used to stop the first encoder and the second encoder.
It is understood that after the encoding control module receives the end recording message sent by the camera mode module, the encoding control module may stop the first encoder and the second encoder.
Correspondingly, the first encoder and the second encoder may each receive the encoding command B2 from the encoding control module through the media FWK.
S620: the first encoder (encoder 1) ends encoding the first image data.
After the first encoder receives the encoding command B2, it may end (stop) the current encoding of the first image data.
Illustratively, the encoding command B2 may include: codec-1.stop.
S621: the second encoder (encoder 2) ends encoding the second image data.
After the second encoder receives the encoding command B2, it may end (stop) the current encoding of the second image data.
Illustratively, the encoding command B2 may include: codec-2.stop.
S622: the encoding control module encapsulates the third image data into a first file and encapsulates the fourth image data into a second file.
The third image data is the data obtained by encoding the first image data, and the fourth image data is the data obtained by encoding the second image data. The first file and the second file are video files, in which the encoded image data are encapsulated in order.
S623: the storage module stores the first file and the second file.
After the encoding control module encapsulates the first file and the second file, the first file and the second file may be stored in the storage module. That is, the storage module stores the first file and the second file together with their timestamps, where the timestamps may be the current system time.
Illustratively, after the first file and the third file are saved as described above, the correspondence between the file names and the timestamps is shown in Table 3:
TABLE 3 Table 3
File name Time stamp
video-1 time-4
video-3 time-3
It will be appreciated that the present application does not limit the order of steps S618 and S619.
In fig. 6A to 6C, the encoder 1 is the first encoder, and the encoder 2 is the second encoder. The methods in the embodiments build on one another, and terms carry the same meaning throughout.
In the present application, two (or more) videos (the video of the preview frame and the video of the small window) need to be recorded separately during the recording process, so the encoders need to encode and encapsulate the acquired audio and video data separately.
During video recording, the original images acquired by the camera (or the image data after effect processing) need to be encoded, and the encoded images are then encapsulated and stored. For example, the encoder may compress the color-space (luminance-chrominance, YUV) data collected by the camera module into an MPEG-4 or Advanced Video Coding (AVC) video stream, and the video stream is sent to the file muxer module to be packaged into a video file that the player can play, such as a 3GPP or MP4 file (carrying, e.g., H.264 video and AAC audio).
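On Android this encode-then-package path corresponds to MediaCodec (compression) feeding MediaMuxer (the file muxer). The sketch below illustrates only the packaging side, assuming an AVC stream muxed into an MP4 container; it is a simplified illustration, not the patent's implementation.

```java
import android.media.MediaFormat;
import android.media.MediaMuxer;
import java.io.IOException;

public class VideoFilePackager {
    private final MediaMuxer muxer;
    private int videoTrack = -1;

    // Packages an encoded video stream into an MP4 file a player can open.
    public VideoFilePackager(String outputPath) throws IOException {
        muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    }

    // Called once the encoder reports its actual output format
    // (INFO_OUTPUT_FORMAT_CHANGED); that format becomes the muxer track.
    public int addVideoTrackAndStart(MediaFormat encodedFormat) {
        videoTrack = muxer.addTrack(encodedFormat);
        muxer.start();
        return videoTrack;
    }

    public void finish() {
        muxer.stop();
        muxer.release();
    }
}
```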
Because both the large window and the small window are recorded, the two video paths each need to support pausing and resuming recording independently. In the current OpenGL-based recording method, each original video path acquired from the camera module is pre-processed at the application layer according to the pause/resume requirement (image data falling within the paused period is removed) and then sent to the encoder for encoding. However, since in the present application the two encoded video streams are obtained through the camera FWK and the camera HAL, the image data to be encoded cannot be intercepted at the application layer, so the OpenGL encoding flow cannot be applied to the two-path encoding scheme of the present application. In addition, in the MediaRecorder-based recording method, when the application layer generates video data, one video corresponds to one audio, and the encoded audio and video must be encapsulated one-to-one; the two video streams and the single audio stream in the present application (only one audio stream needs to be encoded) break that original encapsulation relationship, so MediaRecorder is likewise unsuitable for the two-path video scenario of the present application.
Furthermore, the encoder's encoding process is essentially compression encoding of the original video data. Such compression achieves its high ratio precisely through the specific relationship between successive frames in the video: for example, only a small part of the picture changes between adjacent frames, and the encoding process records only the change between them. If the encoded video were cut and re-packaged afterwards to implement pause and resume, the inter-frame encoding relationships formed during encoding would be destroyed, breaking the dependency between preceding and following frames and causing corrupted frames and stuck frames.
To address the above problems, the present application proposes an encoding method for multi-path video recording: during the recording of a given preview window, if recording of that window is paused, the encoder of that path pauses encoding based on the pause time, i.e., image data after the pause time is not encoded while image data before it is; if recording of that window is resumed, the encoder of that path continues encoding based on the resume time, i.e., image data before the resume time is not encoded while image data after it is. In this way, in multi-path video encoding, the electronic device preserves the dependency between encoded image frames across pause and resume, preventing corrupted frames or stuck frames in the packaged video.
In the principal angle mode, the two video paths can each be paused and resumed independently. Pause and resume of large-window recording and of small-window recording are described separately below with reference to the video encoding processes of fig. 6A to 6C.
For pause and resume of small-window recording: when the principal angle cannot be detected in the preview window (focus tracking fails), the electronic device pauses the recording, and resumes the recording when the principal angle is found again.
In some embodiments, after the electronic device starts recording, if the principal angle is no longer detected, the electronic device may continue recording for a first duration (e.g., 5 s) or a first frame count (e.g., 10 frames), and pause the small-window recording once the first duration or first frame count has elapsed. After the principal angle is found again, or a new principal angle is selected, in the preview frame, the electronic device resumes recording.
In other embodiments, after the electronic device starts recording, it may pause the recording from the moment the principal angle is no longer detected. After a period of time, once the principal angle is found again or switched in the preview box, the electronic device resumes recording.
In still other embodiments, the recording window of the small window may be provided with pause and resume recording controls. After the electronic device starts recording, the user clicks the pause recording control of the small window, and the electronic device pauses the small-window recording. After a period of time, the user clicks the resume recording control, and the electronic device resumes the small-window recording.
In still other embodiments, after the electronic device starts recording, the user may pause both the large-window and small-window recordings by clicking the pause recording control of the large window. After a period of time, the user clicks the large-window resume recording control, and the electronic device resumes both the large-window and small-window recordings.
It should be noted that, in the embodiment of the present application, the timing of suspending video recording and resuming of the small window of the electronic device is merely an exemplary illustration, and other cases are possible, and the application is not limited thereto.
Fig. 7A to 7F exemplarily show user interface diagrams for recording pause and resume in a set of main angle modes.
At time T1, the electronic device begins recording video.
In some embodiments, the user clicks the start recording control of the large window to start recording of both the large window and the small window. The electronic device is already on the principal angle mode interface; referring to fig. 7A, the electronic device displays a user interface 710. The principal angle selected by the user is person 1 of persons 1 and 2. The user clicks the start recording control 7103, and the electronic device may start recording in response to this operation. The electronic device can immediately start recording of both the large window (the preview box described above) and the small window. After recording starts, referring to the user interface 720 in fig. 7B, the small window 7102 of the electronic device may record person 1.
At time T2, the principal angle in the small window of the electronic device is lost.
Referring to fig. 7C, when the principal angle is not detected in the preview box 7301, i.e., the principal angle is lost, the electronic device may display the user interface 730.
At time T3, the electronic device pauses the small-window recording.
In some embodiments, when the principal angle is lost, the electronic device continues to display and record the small window for the first duration or first frame count, and pauses the small-window recording if the principal angle is still not detected after the first duration or first frame count. The picture recorded in the small window while the principal angle is lost may be the same picture as the large window, or a partial picture of the large window at the position where the principal angle was lost; this is not limited.
Illustratively, referring to fig. 7D, the electronic device displays a user interface 740, and within 5 s (the first duration) the electronic device can record the pictures of the small window 7402 and the preview box 7401. By the 5th second (00:15 is 5 s after 00:10), referring to fig. 7E, if the preview box 7501 of the electronic device has not detected the principal angle within the 5 s, the electronic device may add a mask layer (shown in gray in fig. 7E) over the last frame captured in the small window 7502 within the 5 s and display it; after the 5 s, the electronic device may pause the recording.
During small-window recording, the principal angle may move in and out of the picture, so the electronic device may switch frequently between a state in which the principal angle is detected and a state in which it is not. If the small window disappeared immediately upon loss of the principal angle, it would repeatedly appear and disappear, causing the picture to flicker. Dismissing the small window only after the delay of the first duration or first frame count therefore reduces how often the small window appears and disappears, improving the user's visual experience.
In some embodiments, the small-window recording is paused immediately when the principal angle in the small window is lost, i.e., the small-window recording is paused at time T2; in this case, T2 and T3 are the same time.
In some embodiments, the small window includes a pause recording control for pausing the small-window recording. The user clicks the pause recording control, and the electronic device pauses the small-window recording at time T2.
At time T4, the electronic device resumes the small-window recording.
In some embodiments, if the principal angle is not detected within the first duration or first frame count and the small-window recording is paused, the electronic device may keep detecting the lost principal angle; once the principal angle is detected, the small window displays its close-up picture and recording resumes.
For example, referring to fig. 7F, the electronic device can display a user interface 760. The principal angle (person 1) is found again 8 s after the target was lost (assuming the first duration is 5 s); the small window of the electronic device displays a close-up of person 1 and resumes recording the video at 00:15.
In some embodiments, after the electronic device loses the principal angle, the user clicks a person in the current preview box, and the small-window recording of the electronic device may record a close-up of the new principal angle. For example, if the electronic device currently displays the user interface 750, the user may click person 2, and the electronic device may switch the principal angle to person 2 and resume recording.
In some embodiments, the small window includes a resume recording control for resuming the small-window recording. The user clicks the resume recording control, and the electronic device resumes the small-window recording at time T4.
At time T5, the electronic device finishes the recording of the small window.
In some embodiments, the user clicks the end recording control of the small window, and the electronic device ends the small-window recording.
For example, referring to fig. 7G, the electronic device may display a user interface 770. The user clicks the small-window end recording control 77021 (the end recording control of the small window 7702), and the electronic device can close the currently displayed small window 7702 and stop the recording of the small window 7702.
In some embodiments, the user clicks the end video control of the big window, and the electronic device ends the recording of the big window and the small window.
For example, referring to fig. 7G, the electronic device may display a user interface 770. The user clicks the end recording control 7703 for the big window and the electronic device may close the big window and the small window that are currently displayed and stop the recording of the big window and the small window.
In some embodiments, the user clicks a pause video control for the large window, and the electronic device pauses the recording of the large window and ends the recording of the small window.
Based on the above cases of small-window (close-up picture) recording, the recording results obtained by the electronic device are exemplarily described below:
Fig. 8A and 8B schematically illustrate the timing of a set of recorded close-up videos. In some examples, referring to fig. 8A, during the recording period T1-T5, the electronic device may detect an operation to start small-window recording at T1, detect loss of the principal angle in the small-window picture at T2, pause the small-window recording at T3, detect an operation of clicking a selection box to select a principal angle at T4 (or re-detect the lost principal angle, i.e., reconfirm the principal angle), and detect an operation of closing the small window (ending the small-window recording) at T5. In this case, the electronic device may obtain close-up video 1 from time T1 to time T3, and close-up video 2 from time T4 to time T5. Alternatively, the electronic device may package and store the two close-up videos as one close-up video.
In other examples, referring to fig. 8B, loss of the principal angle in the small window is detected at T2, and the small-window recording is paused until the principal angle reappears at time T4 and recording resumes. In this case, the electronic device may obtain close-up video 3 from time T1 to time T2, and close-up video 4 from time T4 to time T5. Alternatively, the electronic device may package and save close-up video 3 and close-up video 4 as one close-up video.
For pause and resume of large-window recording: the electronic device pauses the large-window recording when the user clicks to pause it, and resumes the large-window recording when the user clicks to resume it.
Fig. 9A-9F schematically illustrate user interface diagrams for recording pause and resume in another set of principal angle modes.
At time T6, the electronic device starts recording the large window.
In some embodiments, the electronic device may be able to begin both the big-window and the small-window recordings by the user clicking on the start recording control for the big window.
Illustratively, referring to fig. 9A, the user may click on a start recording control 9103 for a large window and the electronic device starts recording for the large window 9101 and recording for the small window 9102.
At time T7, the electronic device pauses the recording of the large window.
In some embodiments, the user clicks the pause recording control of the large window, and the electronic device pauses the large-window recording. At the same time, the electronic device may end the small-window recording or pause the small-window recording.
Illustratively, referring to fig. 9B, the user may click the large-window pause recording control 9201, and the electronic device pauses the recording of the large window 9201 and ends the recording of the small window 9202. Referring to fig. 9C, the electronic device may display a user interface 930; the recording of the large window 9301 is paused at the pause time 00:05, and since the small-window recording has ended, the small window is no longer displayed.
At time T8, the electronic device resumes recording the large window.
In some embodiments, the electronic device may resume the large window recording by the user clicking on the resume recording control for the large window.
Illustratively, referring to fig. 9D, the user may click the resume recording control 9403 of the large window, and the electronic device resumes recording of the large window 9401, whose recording time continues from 00:05. If the user again selects person 1 as the close-up target, referring to fig. 9E, the electronic device may display the user interface 950 and resume the small-window display and recording.
At the time T9, the electronic equipment finishes recording the large window.
In some embodiments, the user clicks the end recording control of the big window, and the electronic device may end the big window recording.
For example, referring to fig. 9F, the electronic device may display a user interface 960, the user may click on the end recording control 9603 for the big window, the electronic device ends the recording for the big window 9601 and the recording for the small window 9602, and the recording time for the big window 9601 ends at 00:10.
Based on the above-described case of large window video recording, the video recording result obtained by the electronic device is exemplarily described below:
Fig. 10 schematically illustrates the timing of a set of recorded large-window videos. In some examples, referring to fig. 10, during the recording period T6-T9, the electronic device may detect an operation to start large-window recording at T6, detect pausing of the large-window recording at T7, detect a click to resume the large-window recording at T8, and detect an operation to end the large-window recording at T9. In this case, the electronic device may obtain large-window video 1 from time T6 to time T7, and large-window video 2 from time T8 to time T9. Alternatively, the electronic device may package and store the two large-window videos as one large-window video.
It should be noted that the periods T1-T5 and T6-T9 are not related in time; they are used merely for distinction.
The encoding process of the multi-path video is specifically described below with reference to fig. 6A to 6C and fig. 7A to fig. 10.
Fig. 11 is a flowchart of a method for encoding multi-path video according to an embodiment of the present application, built on the flow of shooting in the principal angle mode by the electronic device. The processing flow by which the electronic device implements the pause-and-resume encoding method in the principal angle mode is specifically described below with reference to fig. 11.
The electronic device may include a camera mode module (abbreviated as the mode module), an encoding control module, a storage module, a media framework (FWK) layer, an encoder, and the like; for details, refer to the related descriptions of fig. 4 and fig. 5, which are not repeated.
The encoder of the present application may be either the encoder 1 (the first encoder) or the encoder 2 (the second encoder) in fig. 6A to 6C described above; this is not limited.
S1101: the camera mode module acquires a start video signal.
Reference may be made specifically to fig. 7A and fig. 9A, and the related description in fig. 2I, which are not repeated.
For example, the camera mode module may acquire the start video signal by acquiring an operation of clicking the start video control.
S1102: the camera mode module sends a video recording starting instruction to the coding control module.
The instruction for starting video recording includes a recording start instruction for a large window or a small window, and reference may be made to step S610 in fig. 6B.
S1103: the encoding control module controls the corresponding encoder to encode the camera image data based on the video recording starting instruction, and encoded image data is obtained.
The camera image data may be the first image data or the second image data of fig. 6B, and the corresponding encoded image data may be the third image data or the fourth image data of fig. 6B.
The step S1103 may specifically refer to the descriptions of S614, S615, and S616 in fig. 6B, which are not described in detail.
After encoding starts, the encoder may begin sending the encoded video to the encoding control module as encoding completes, and the encoding control module may begin encapsulating it; refer to S1117.
S1104: the camera mode module acquires a pause video signal.
For small-window recording, the pause recording signal may be obtained when the principal angle is not detected, or a first duration or first frame count after the principal angle ceases to be detected, or through a pause operation by the user on the small window; see the descriptions of fig. 7A to 7G, which are not repeated.
For large-window recording, the pause recording signal may be obtained through a recording pause operation on the large window; see the operations of fig. 9A to 9F.
Accordingly, the pause video recording signal may be obtained through a pause operation by the user or by a target-detection result; the specific manner differs across recording scenarios and is not limited.
S1105: the camera mode module sends a pause video recording instruction to the coding control module.
The pause video recording instruction is used to pause the recording of one video path, namely that of the large window or of the small window, and corresponds to an encoder command for pausing encoding.
Under the condition that the camera mode module acquires the pause video recording signal, the electronic equipment can send a pause video recording instruction to the coding control module.
S1106: the encoding control module generates a pause time stamp based on the pause video instruction and generates a pause encoding instruction based on the pause time stamp.
The pause encoding instruction includes a pause timestamp and a suspend instruction. It instructs the encoder to discard the original video frames captured after the pause timestamp, i.e., to encode only the original video frames captured no later than the pause timestamp. The encoder keeps receiving the input original frames, but from the pause timestamp onward it pauses encoding them and discards them. The suspend instruction instructs the encoder to stop encoding the original video after the pause timestamp; it may be represented, for example, as drop-input-frames=1, where 1 means the encoder is suspended.
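The drop-input-frames representation matches the public Android MediaCodec parameter PARAMETER_KEY_SUSPEND (whose key string is "drop-input-frames"); paired with PARAMETER_KEY_SUSPEND_TIME (API 29+), it suspends a surface-input encoder from a given timestamp. The sketch below shows how a pause might be issued through that API; whether the patent's encoder is driven exactly this way is an assumption.

```java
import android.media.MediaCodec;
import android.os.Bundle;

public class PauseController {
    // Asks a surface-input encoder to drop (not encode) input frames whose
    // capture timestamps are at or after pauseTimeUs.
    public static void pauseEncoding(MediaCodec encoder, long pauseTimeUs) {
        Bundle params = new Bundle();
        params.putInt(MediaCodec.PARAMETER_KEY_SUSPEND, 1); // "drop-input-frames" = 1
        params.putLong(MediaCodec.PARAMETER_KEY_SUSPEND_TIME, pauseTimeUs); // API 29+
        encoder.setParameters(params);
    }
}
```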
S1107: the coding control module sends a coding pause instruction to the corresponding coder through the media FMK.
After the encoding control module generates the pause encoding instruction, the pause encoding instruction may be sent to the media frame. Correspondingly, the media frame receives a pause encoding instruction from the encoding control module. After the media frame receives the pause encoding instruction from the encoding control module, the pause encoding instruction may be sent to the encoder. Correspondingly, the encoding control module may obtain a pause encoding instruction from the encoding control module.
S1108: the encoder pauses encoding of the camera image data based on the pause encoding instruction.
After the encoder receives the pause encoding instruction, the pause encoding instruction pauses encoding of the camera image data. The camera image data may specifically be the first image data and the second image data in fig. 6B described above. For example, video data in YUV format.
The encoding operates on the input original video, and the camera image data comprises one or more frames. The electronic device performs encoding based on the continuous frame images to obtain the encoded image data. Each frame in the camera image data carries a capture timestamp indicating when that frame was captured. The camera image data is video data that has not yet been encoded.
In the embodiment of the present application, the encoder keeps receiving camera image data; after acquiring the pause encoding instruction, it decides per frame whether to encode the current frame based on the pause timestamp. Specifically, the encoder may compare the capture timestamp of the first image frame (the current frame) with the pause timestamp: if the capture timestamp is earlier than or equal to the pause timestamp, the encoder encodes the first image frame; if the capture timestamp is later than the pause timestamp, the encoder does not encode the first image frame, i.e., pauses encoding (discards the first image frame). The first image frame is a frame of image in the camera image data.
Once the encoder finds a capture timestamp later than the pause timestamp, it no longer evaluates subsequent frames against that pause timestamp; they are simply discarded until a resume encoding instruction or an end encoding instruction is acquired.
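The per-frame decision described above can be written as a tiny gate over capture timestamps. The following is a conceptual sketch of that logic, not the actual encoder implementation; all names are illustrative.

```java
// Conceptual sketch of the pause/resume gating an encoder applies per frame.
// Timestamps are capture times in microseconds.
public class FrameGate {
    private long pauseTimeUs = Long.MAX_VALUE; // no pause pending
    private long resumeTimeUs = 0;             // encoding active from the start

    public void onPause(long tsUs) {
        pauseTimeUs = tsUs;
    }

    public void onResume(long tsUs) {
        resumeTimeUs = tsUs;
        pauseTimeUs = Long.MAX_VALUE; // clear the pending pause
    }

    // Encode a frame only if it was captured while recording was active:
    // no later than the pending pause timestamp and no earlier than the last resume.
    public boolean shouldEncode(long captureTsUs) {
        return captureTsUs >= resumeTimeUs && captureTsUs <= pauseTimeUs;
    }
}
```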
In the case of encoding the camera image data, the encoder outputs encoded image data (e.g., the third image data and the fourth image data in fig. 6B). Encoded image data refers to the image data after encoding.
S1109: the camera mode module acquires the resume video signal.
The resume video signal is used to indicate that the current video recording is resumed. The current video includes a large window video or a small window video.
For small-window recording, the resume recording signal may be obtained when the principal angle is re-detected after being lost, when the user switches the principal angle, or when the user clicks the resume recording control during recording; see fig. 7A to 7G, which are not repeated.
For large-window recording, when the user clicks the large-window resume recording control, the electronic device may acquire the resume recording signal; see the contents of fig. 9A to 9F, which are not repeated.
S1110: the camera mode module sends a video restoration instruction to the coding control module.
The video restoration instruction is used for indicating video restoration. The resume video signal is an encoder command for resuming encoding.
After the camera mode module acquires the video restoration signal, a video restoration instruction can be sent to the coding control module.
S1111: the encoding control module generates a recovery time stamp based on the recovery video instruction and generates a recovery encoding instruction based on the recovery time stamp.
The resume encoding instruction includes a resume timestamp and a start instruction. It instructs the encoder to discard frames in the camera image data captured earlier than the resume timestamp, i.e., to encode only the frames captured no earlier than the resume timestamp. The encoder keeps receiving the input frames (camera image data) and, according to each frame's capture time, switches from paused encoding back to active encoding with the resume timestamp as the dividing point: frames of the original video are encoded again from the resume timestamp onward. The start instruction instructs the encoder to start encoding the camera image data after the resume timestamp; it may be represented, for example, as drop-input-frames=0, where 0 means the encoder is started (resumed).
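The resume is the mirror image of the pause: drop-input-frames is cleared from the resume timestamp onward. A corresponding sketch using the same MediaCodec parameters (again, PARAMETER_KEY_SUSPEND_TIME requires API 29+, and this usage is an assumption, not the patent's code):

```java
import android.media.MediaCodec;
import android.os.Bundle;

public class ResumeController {
    // Asks a suspended surface-input encoder to resume: frames captured at or
    // after resumeTimeUs are encoded again; earlier frames remain dropped.
    public static void resumeEncoding(MediaCodec encoder, long resumeTimeUs) {
        Bundle params = new Bundle();
        params.putInt(MediaCodec.PARAMETER_KEY_SUSPEND, 0); // "drop-input-frames" = 0
        params.putLong(MediaCodec.PARAMETER_KEY_SUSPEND_TIME, resumeTimeUs); // API 29+
        encoder.setParameters(params);
    }
}
```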
S1112: the coding control module sends a coding restoration instruction to the coder through the media FMK.
The recovery coding instruction is used for instructing the encoder to recover the coding of the video.
Here, S1112 may refer to the specific description of S1107 described above, and will not be described in detail.
S1113: the encoder resumes encoding of the camera image data based on the resume encoding instruction.
After the encoder receives the resume encoding instruction, it resumes encoding of the camera image data according to the instruction.
In the embodiment of the present application, the encoder keeps receiving camera image data; after acquiring the resume encoding instruction, it decides per frame whether to encode the current frame based on the resume timestamp. Specifically, the encoder may compare the capture timestamp of the second image frame (the current frame) with the resume timestamp: if the capture timestamp is earlier than the resume timestamp, the encoder does not encode the second image frame and directly discards it; if the capture timestamp is later than or equal to the resume timestamp, the encoder encodes the second image frame, i.e., the encoder is started again. The second image frame is a frame of image in the camera image data.
S1114: the camera mode module acquires the end video signal.
The small-window recording may end when the user clicks the small-window end recording control, clicks the large-window end recording control, or switches the principal angle. See the related descriptions of fig. 7A to 7G, which are not repeated.
The large-window recording ends when the user clicks the large-window end recording control; see the related descriptions of fig. 9A to 9F, which are not repeated.
The process of acquiring the ending video signal may refer to the related description before S617 in fig. 6C, which is not repeated.
S1115: the camera mode module sends a video recording ending instruction to the coding control module.
The end video recording instruction is used to indicate the end of recording of the current video.
The specific process of S1115 may refer to the description of S617 in fig. 6C, which is not repeated.
S1116: the encoding control module controls the corresponding encoder to finish encoding the camera image data based on the video recording finishing instruction.
The specific process of S1116 may refer to S1103 and descriptions of S619 to S621 in fig. 6C, which are not repeated.
S1117: the encoding control module encapsulates the encoded image data based on the end video instruction.
S1117 may refer to the description of S622 in fig. 6C, which is not repeated.
After S1117, the encoding control module may store the data encapsulated into a complete video in the storage module in which the corresponding file has been created; refer to S603 in fig. 6A.
It should be noted that the above encoding process encodes a single video: the first encoded data, the second encoded data, and the third encoded data are all written into the same video file.
Furthermore, if the above encoding process encodes the large-window video (the preview-frame video), the encoder corresponds to codec-1 described above; if it encodes the small-window video, the encoder corresponds to codec-2. Throughout the process, either the large-window video or the small-window video is encoded.
It should be noted that, because the encoding control module continuously acquires encoded image data during encoding, the process of encapsulating the pictures into the video file is continuous: whenever the encoding control module acquires encoded image data, it encapsulates it, i.e., sequentially writes the encoded image data into the same video file.
S1118: The encoding control module correspondingly stores the encapsulated file to the storage module.
S1118 may refer to the descriptions of S622 and S623 in fig. 6C, and is not repeated.
The stored file corresponds to the file generated by the storage module. For example, the video data encapsulating the third image data in fig. 6A to 6C is stored in the first file, and the video data encapsulating the fourth image data is stored in the second file.
It should be further noted that, in the above processing, the electronic device may pause and resume encoding one or more times; this is not limited by the present application.
In the encoding process of the multi-path video, when pause and resume occur during the encoding of a given path, the electronic device does not encode the image frames falling within the paused period, according to the pause timestamp and the resume timestamp. Frames that do not need to be encoded are thus filtered out at encoding time, the dependency between the encoded images is preserved, and corrupted frames or stuck frames in the packaged video are prevented.
The above describes one possible pause-and-resume encoding flow. Large-window and small-window recording are illustrated separately below:
1. Start, pause, resume, and end of large-window recording (step S1201 to step S1220 shown in fig. 12).
Fig. 12 is a flowchart of another encoding method of multiplexing video according to an embodiment of the present application. As shown in fig. 12, the encoding method of the large window video may include, but is not limited to, the following steps:
s1201: the camera mode module sends a message for starting the large window video to the stream management module.
Here, S1201 may refer to the related description of S610, which is not described in detail.
S1202: the camera mode module sends a command for starting the large window video to the coding control module.
Wherein, S1202 may refer to the related description of S1102, which is not repeated.
The execution order of S1201 and S1202 is not limited.
S1203: the encoding control module sends a first start encoding instruction to the first encoder through the media FMK.
The first start-up encoding instruction may refer to the encoding command B1 in S614, and the first start-up encoding instruction is used to start up the first encoder. And encoding the image data corresponding to the preview big window, thereby obtaining the original video.
Accordingly, the first encoder (encoder 1) may receive a first start-up encoding instruction from the encoding control module through the media FMK.
S1204: the stream management module sends a dynamic request D2 to the camera HAL via the camera FMK.
Wherein the dynamic request D2 is used for requesting to return to the first path of data stream, and the dynamic request D2 may include the first stream identifier.
It is understood that the stream management module may send the dynamic request D2 to the camera FWK, which then sends the dynamic request D2 to the camera HAL.
Accordingly, the camera HAL may receive the dynamic request D2 sent by the stream management module through the camera FWK. The camera HAL may then receive raw data from the camera.
Herein, S1204 may refer to the related description of S611, which is not described in detail.
S1205: the camera HAL processes the raw image data based on the dynamic request D2, resulting in first image data.
The dynamic request D1 may include the dynamic request D2, i.e., D2 requests a subset (the first path data stream only) of what D1 requests.
The description of S1205 may specifically be related to S612, which is not described herein.
S1206: the camera HAL sends the first image data to the stream management module via the camera FWK.
Here, S1206 may refer to the related description of S613, which is not described in detail.
S1207: the first encoder encodes the first image data based on the first start-up encoding instruction to obtain third image data.
The first encoder may receive a start-up encoding instruction from the encoding control module, and then encode the first image data based on the start-up encoding instruction to obtain third image data.
In S1207, reference may be made to S615 and description related to S1103, which are not repeated.
S1208: the camera mode module sends a pause window video recording message to the stream management module.
And under the condition that the camera mode module acquires the operation of clicking the control for suspending the large window video, a message for suspending the large window video can be sent to the stream management module.
It will be appreciated that the pause window video message is used to prompt the stream administration module user in the camera application that the recording of the preview window needs to be paused.
Correspondingly, the stream management module can receive the pause large window video recording message sent by the camera mode module.
S1209: the camera mode module sends a command for suspending the large window video to the coding control module.
When the camera mode module detects the operation of clicking the control for pausing the large window video, it may also send a pause large window video instruction to the encoding control module.
It will be appreciated that the pause large window video instruction is used to notify the encoding control module that the user of the camera application has requested to pause recording of the preview large window.
Correspondingly, the coding control module can receive a pause large window video recording instruction sent by the camera mode module.
Wherein, S1209 may refer to the related description of S1105, which is not described in detail.
S1210: the encoding control module sends a first pause encoding instruction to the first encoder through the media FMK.
After receiving the large window video pause instruction, the encoding control module can send a first pause encoding instruction to the first encoder through the media FMK.
Wherein the first pause encoding instruction is an instruction for pausing the first encoder. The pause encoding instruction may include a pause timestamp; for details, reference may be made to the descriptions of S1106 and S1107, which are not repeated.
S1211: the first encoder pauses encoding of the first image data based on the first pause encoding instruction.
The description of S1108 may be referred to for S1211, which is not described in detail.
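A minimal sketch of the pause side of this behavior is given below, assuming the pause encoding instruction delivers a microsecond pause timestamp; the class and method names are invented for illustration and are not the module names of the present application.

```java
// Illustrative sketch: record the pause timestamp from the first pause
// encoding instruction and drop every frame captured at or after it.
final class PauseState {
    private volatile long pauseTsUs = Long.MAX_VALUE; // MAX means "not paused"

    // Called when the first pause encoding instruction arrives.
    void onPauseEncoding(long pauseTimestampUs) {
        pauseTsUs = pauseTimestampUs;
    }

    // Consulted before queueing a captured frame to the encoder.
    boolean shouldEncode(long captureTsUs) {
        return captureTsUs < pauseTsUs; // at/after the pause stamp: dropped
    }
}
```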
S1212: the camera mode module sends a message for recovering the large window video to the stream management module.
When the camera mode module detects the operation of clicking the control for resuming the large window video, it may send a resume large window video message to the stream management module.
It can be appreciated that the resume large window video message is used to notify the stream management module that the user of the camera application has requested to resume recording of the preview large window.
Correspondingly, the stream management module can receive the large window video restoration message sent by the camera mode module.
S1213: the camera mode module sends a command for restoring the large window video to the coding control module.
When the camera mode module detects the operation of clicking the control for resuming the large window video, it may also send a resume large window video instruction to the encoding control module.
It can be appreciated that the resume large window video instruction is used to notify the encoding control module that the user of the camera application has requested to resume recording of the preview large window.
Correspondingly, the coding control module can receive a large window video restoration instruction sent by the camera mode module.
The description of S1110 may be referred to for S1213, which is not described in detail.
S1214: the encoding control module sends a first recovery encoding instruction to the first encoder through the media FMK.
The first recovery encoding instruction includes a recovery timestamp.
In S1214, reference may be made to descriptions of S1111 and S1112 specifically, which are not described in detail.
S1215: the first encoder resumes encoding of the first image data based on the first resume encoding instruction.
The description of S1113 may be referred to for S1215, which is not described in detail.
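Extending the pause sketch above with the resume side yields one possible complete gate, again with invented names. The presentation-timestamp offset in presentationTs() is an added assumption: the flow above only specifies which frames are encoded, and subtracting the paused interval is one common way to keep the output timeline free of a gap.

```java
// Illustrative pause/resume gate for one encoder. Frames captured in
// [pauseTs, resumeTs) are dropped; everything else is encoded.
final class PauseResumeGate {
    private long pauseTsUs = Long.MAX_VALUE; // gate closes at this capture time
    private long resumeFloorUs = 0;          // frames before this stay dropped
    private long pausedGapUs = 0;            // total paused time so far

    synchronized void onPauseEncoding(long tsUs) {
        pauseTsUs = tsUs;
    }

    synchronized void onResumeEncoding(long tsUs) {
        if (pauseTsUs != Long.MAX_VALUE) {
            pausedGapUs += tsUs - pauseTsUs; // length of this pause interval
        }
        resumeFloorUs = tsUs;
        pauseTsUs = Long.MAX_VALUE; // gate reopens
    }

    synchronized boolean shouldEncode(long captureTsUs) {
        return captureTsUs < pauseTsUs && captureTsUs >= resumeFloorUs;
    }

    // Hypothetical gap compensation, not mandated by the flow above.
    synchronized long presentationTs(long captureTsUs) {
        return captureTsUs - pausedGapUs;
    }
}
```

In this sketch a frame is encoded only if it was captured before the current pause stamp and at or after the most recent resume stamp, which matches the behavior of S1211 and S1215.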
S1216: the camera mode module sends a message of ending the large window video to the stream management module.
Wherein, S1216 may refer to the related description of S617, and is not described in detail.
S1217: the camera mode module sends a command for ending the large window video to the coding control module.
Here, S1217 may refer to the related description of S1115, which is not described in detail.
S1218: the encoding control module sends a first end encoding instruction to the first encoder through the media FMK.
Here, S1218 may refer to the related descriptions of S619 and S1116, which are not described in detail.
S1219: the first encoder ends encoding of the first image data based on the first end encoding instruction.
After acquiring the first end encoding instruction, the first encoder ends encoding of the first image data based on that instruction.
Here, S1219 may refer to the related descriptions of S620 and S1116, which are not described in detail.
S1220: and the coding control module finishes packaging the third image data based on the instruction of finishing the large window video recording, and stores the packaged first file into the storage module.
In particular, S1220 may refer to S1117 and S1118, and S622 and S623, which are not described herein.
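The end-of-recording path of S1218 to S1220 can be sketched as draining the encoder and closing a muxer. The sketch below assumes Android's MediaCodec and MediaMuxer; it also assumes the muxer was already started and the video track added when the encoder first reported its output format, which is left to the caller here.

```java
import android.media.MediaCodec;
import android.media.MediaMuxer;
import java.nio.ByteBuffer;

// Illustrative sketch: end encoding and encapsulate the encoded data.
// Assumes 'muxer' was already started and 'trackIndex' added earlier.
final class RecordingFinisher {
    static void finish(MediaCodec encoder, MediaMuxer muxer, int trackIndex) {
        encoder.signalEndOfInputStream(); // end encoding (surface-input mode)
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        while (true) {
            int index = encoder.dequeueOutputBuffer(info, 10_000);
            if (index < 0) continue; // ignore TRY_AGAIN / format changes here
            ByteBuffer buf = encoder.getOutputBuffer(index);
            if (info.size > 0) {
                muxer.writeSampleData(trackIndex, buf, info); // encapsulate
            }
            encoder.releaseOutputBuffer(index, false);
            if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) break;
        }
        encoder.stop();
        encoder.release();
        muxer.stop();   // the first file is now complete on disk
        muxer.release();
    }
}
```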
The first image data and the third image data in fig. 12 have the same meanings as those described with reference to fig. 6A to 6C. The method of fig. 12 covers only the encoding process during the pause and resume of large-window recording in fig. 11.
In addition, for the camera mode module in fig. 12, reference may be made to fig. 11 and the descriptions related to the large window in fig. 7A to 7G; for example, the camera mode module may acquire the start large window video signal (see S1101), the pause large window video signal (see S1104), the resume large window video signal (see S1109), and the end large window video signal (see S1114).
In the embodiment of the present application, the encoder can discard the first image data captured after the pause timestamp based on the first pause encoding instruction, and continue encoding after receiving the first resume encoding instruction, that is, encode the image frames captured at and after the resume timestamp. Therefore, flower frames (image artifacts) and card frames (stutter) can be avoided in the large window video obtained during recording in the principal angle mode.
2. Starting, pausing, resuming, and ending recording of the small window (steps S1301 to S1320 shown in fig. 13).
Fig. 13 is a flowchart of another encoding method of a multi-channel video according to an embodiment of the present application. As shown in fig. 13, the coding method of the small window video may include, but is not limited to, the following steps:
s1301: the camera mode module sends a message for starting the small window video to the stream management module.
Herein, S1301 may refer to the related description of S610, which is not repeated.
S1302: the camera mode module sends a command for starting the small window video to the coding control module.
Wherein, S1302 may refer to the related description of S1102, which is not described in detail.
The execution order of S1301 and S1302 is not limited.
S1303: the encoding control module sends a second start encoding instruction to the second encoder through the media FMK.
The second start encoding instruction may refer to the encoding command B1 in S614 and is used to start the second encoder, which encodes the image data corresponding to the preview small window, thereby obtaining the close-up video.
Accordingly, the second encoder (encoder 2) may receive the second start encoding instruction from the encoding control module through the media FMK.
S1304: the stream management module sends a dynamic request D2 to the camera HAL via the camera FMK.
Wherein the dynamic request D2 is used for requesting to return to the second path of data stream, the dynamic request D2 may include the second stream identifier.
It is understood that the stream management module may send the dynamic request D2 to the camera FWK, which then sends the dynamic request D2 to the camera HAL.
Accordingly, the camera HAL may receive the dynamic request D2 sent by the stream management module through the camera FWK. The camera HAL may then receive raw data from the camera head.
Herein, S1304 may refer to the related description of S611, which is not described in detail.
S1305: the camera HAL processes the raw image data based on the dynamic request D3 to obtain second image data.
Wherein dynamic request D1 may include dynamic request D3.
The description of S1305 may specifically be that of S612, which is not described herein.
S1306: the camera HAL sends the second image data to the stream management module via the camera FWK.
Here, S1306 may refer to the related description of S613, which is not described in detail.
S1307: the second encoder encodes the second image data based on the second start-up encoding instruction to obtain fourth image data.
The second encoder may receive the second start encoding instruction from the encoding control module and then encode the second image data based on it to obtain the fourth image data.
In S1307, reference may be made specifically to S616 and the description related to S1103, which are not repeated.
S1308: the camera mode module sends a pause window video message to the stream management module.
And under the condition that the camera mode module acquires the operation of clicking the control for suspending the small window video, a small window video suspension message can be sent to the stream management module.
It will be appreciated that the pause window video message is used to prompt the stream administration module user in the camera application that the recording of the preview window needs to be paused.
Correspondingly, the stream management module can receive the pause window video recording message sent by the camera mode module.
S1309: the camera mode module sends a command for suspending small window video recording to the coding control module.
When the camera mode module detects the operation of clicking the control for pausing the small window video, it may also send a pause small window video instruction to the encoding control module.
It will be appreciated that the pause small window video instruction is used to notify the encoding control module that the user of the camera application has requested to pause recording of the preview small window.
Correspondingly, the coding control module can receive a pause small window video recording instruction sent by the camera mode module.
Here, S1309 may refer to the related description of S1105, which is not described in detail.
S1310: the encoding control module sends a second pause encoding instruction to the second encoder via the media FMK.
After receiving the small window video pause instruction, the encoding control module can send a second pause encoding instruction to a second encoder through the media FMK.
Wherein the second pause encoding instruction is an instruction for pausing the second encoder. The pause encoding instruction may include a pause timestamp; for details, reference may be made to the descriptions of S1106 and S1107, which are not repeated.
S1311: the second encoder pauses encoding of the second image data based on the second pause encoding instruction.
In S1311, reference may be specifically made to the description of S1108, which is not repeated.
S1312: the camera mode module sends a restore widget video message to the stream management module.
And under the condition that the camera mode module acquires the operation of clicking the control for recovering the small window video, a small window video recovering message can be sent to the stream management module.
It can be appreciated that the resume widget video message is used to prompt the stream management module user in the camera application that the recording of the preview widget needs to be resumed.
Correspondingly, the stream management module can receive the recovery small window video message sent by the camera mode module.
S1313: the camera mode module sends a command for recovering the small window video to the coding control module.
When the camera mode module detects the operation of clicking the control for resuming the small window video, it may also send a resume small window video instruction to the encoding control module.
It can be appreciated that the resume small window video instruction is used to notify the encoding control module that the user of the camera application has requested to resume recording of the preview small window.
Correspondingly, the encoding control module can receive the resume small window video instruction sent by the camera mode module.
In S1313, reference may be specifically made to the description of S1110, which is not repeated.
S1314: the encoding control module sends a second resume encoding instruction to the second encoder via the media FMK.
The second recovery encoding instruction includes a recovery timestamp.
The descriptions of S1111 and S1112 may be referred to for S1314, and are not described in detail.
S1315: the second encoder recovery-encodes the second image data based on the second recovery-encoding instruction.
In S1315, reference may be made to the description of S1113, which is not repeated.
S1316: the camera mode module sends a small window video recording ending message to the stream management module.
S1316 may refer to the description of S617, which is not repeated.
S1317: the camera mode module sends a command for ending the small window video recording to the coding control module.
Here, S1317 may refer to the related description of S1115, which is not described in detail.
S1318: the encoding control module sends a second end encoding instruction to the second encoder through the media FMK.
Here, S1318 may refer to the related descriptions of S619 and S1116, which are not described in detail.
S1319: the second encoder ends encoding of the second image data based on the second end encoding instruction.
After acquiring the second end encoding instruction, the second encoder ends encoding of the second image data based on that instruction.
Here, S1319 may refer to the related descriptions of S621 and S1116, which are not described in detail.
S1320: and the coding control module finishes packaging the fourth image data based on the small window video finishing instruction, and stores the packaged second file into the storage module.
In particular, S1320 may refer to S1117 and S1118, and S622 and S623, which are not described in detail.
The second image data and the fourth image data in fig. 13 have the same meanings as those described with reference to fig. 6A to 6C. The method of fig. 13 covers only the encoding process during the pause and resume of small-window recording in fig. 11.
In addition, for the camera mode module in fig. 13, reference may be made to fig. 11 and the descriptions related to the small window in fig. 7A to 7G; for example, the camera mode module may acquire the start small window video signal (see S1101), the pause small window video signal (see S1104), the resume small window video signal (see S1109), and the end small window video signal (see S1114).
In the above embodiments, the pause timestamps in fig. 12 and fig. 13 may have the same time or different times; this is not limited. Similarly, the resume timestamps in fig. 12 and fig. 13 may have the same time or different times.
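Because each window drives its own encoder, a per-window gate keeps the two recordings independent: pausing the small window at one timestamp leaves the large window's encoding untouched. The toy below illustrates this with invented window names and timestamps.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration: each window keeps its own pause/resume stamps, so
// the small window can be paused while the large window keeps encoding.
public final class PerWindowGates {
    private final Map<String, long[]> stamps = new HashMap<>(); // {pauseTs, resumeTs}

    void pause(String window, long ts) {
        stamps.put(window, new long[]{ts, Long.MAX_VALUE});
    }

    void resume(String window, long ts) {
        stamps.computeIfAbsent(window, k -> new long[]{Long.MAX_VALUE, 0})[1] = ts;
    }

    boolean shouldEncode(String window, long ts) {
        long[] s = stamps.get(window);
        if (s == null) return true;     // this window was never paused
        return ts < s[0] || ts >= s[1]; // outside the paused interval
    }

    public static void main(String[] args) {
        PerWindowGates gates = new PerWindowGates();
        gates.pause("small", 5);
        gates.resume("small", 8);
        System.out.println(gates.shouldEncode("large", 6)); // true: large never paused
        System.out.println(gates.shouldEncode("small", 6)); // false: inside [5, 8)
    }
}
```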
In the embodiment of the present application, the encoder can discard the second image data captured after the pause timestamp based on the second pause encoding instruction, and continue encoding after receiving the second resume encoding instruction, that is, encode the image frames captured at and after the resume timestamp. Therefore, flower frames (image artifacts) and card frames (stutter) can be avoided in the small window video obtained during recording in the principal angle mode, and the video effect can be ensured.
Based on the above image encoding process, fig. 14 is a schematic diagram of an encoding process provided by the present application. As shown in fig. 14, after the encoder receives the start encoding instruction at time Tx, it may start encoding the corresponding image data. When a pause timestamp is received (in the pause encoding instruction), image data whose shooting timestamp is after the pause timestamp is not encoded, that is, encoding is paused. When a resume timestamp is received (in the resume encoding instruction), encoding starts again from the image data whose shooting timestamp is after the resume timestamp. After a period of time, the encoder receives the end encoding instruction, ends the current encoding, and the encoded image data is encapsulated into a file. That is, the encoder does not need to encode the images between the pause timestamp and the resume timestamp.
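Applying that rule to a fig. 14 style timeline, a frame is encoded exactly when its shooting timestamp is before the pause stamp or at/after the resume stamp. The standalone toy below walks a ten-frame capture with an invented pause stamp t=4 and resume stamp t=7.

```java
// Standalone toy: which frames of a ten-frame capture are encoded when
// recording pauses at t=4 and resumes at t=7 (invented time units).
public final class EncodeTimelineDemo {
    public static void main(String[] args) {
        final long pauseTs = 4;
        final long resumeTs = 7;
        for (long t = 0; t < 10; t++) {
            boolean encode = t < pauseTs || t >= resumeTs;
            System.out.println("frame t=" + t + (encode ? ": encoded" : ": dropped"));
        }
        // Output: frames 0..3 encoded, 4..6 dropped, 7..9 encoded; the
        // encoder never touches the images inside the paused interval.
    }
}
```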
In the embodiment of the present application, the encoder can discard the original image frames captured after the pause timestamp based on the pause encoding instruction, and continue encoding after receiving the resume encoding instruction, that is, encode the image frames captured at and after the resume timestamp. Therefore, it can be guaranteed that neither flower frames (image artifacts) nor card frames (stutter) occur in the obtained video data during recording in the principal angle mode.
In order to describe the video recording method provided by the embodiment of the present application, the definitions of terms used in the embodiment of the present application are described below. The principal angle mode may be understood as a mode in which a person-tracking video is additionally generated when the terminal device records video. The person in the person-tracking video may be understood as the "principal angle" that the user focuses on, and the video corresponding to the "principal angle" may be generated by cropping the video content corresponding to the principal angle out of the video conventionally recorded by the terminal device. It is understood that the principal angle mode of the terminal device may provide a preview mode and a recording mode. In the preview mode, a preview interface can be displayed on the display screen of the terminal device. In the recording mode, a recording interface can be displayed on the display screen of the terminal device.
It should be noted that the interfaces displayed by the terminal device in the preview mode (before recording) and in the recording mode (during recording) may both be referred to as preview interfaces; the pictures displayed in the preview interface of the preview mode (before recording) are not generated or saved, whereas the pictures displayed in the preview interface of the recording mode (during recording) can be generated and saved. For ease of distinction, hereinafter the preview interface of the preview mode (before recording) is referred to as the preview interface, and the preview interface of the recording mode (during recording) is referred to as the recording interface.
The preview interface may include a large window and a small window. The large window may be a window with a specification equal to or slightly smaller than that of the display screen, the large window may display an image obtained by the camera, and an image displayed by the large window in the preview mode may be defined as a preview screen of the large window. The small window may be a window having a smaller specification than the large window, the small window may display an image of the focus tracking object selected by the user, the terminal device may select the focus tracking object based on the tracking identification associated with the focus tracking object, and the image displayed by the small window in the preview mode may be defined as a preview screen of the small window. It can be understood that in the preview mode, the terminal device may display the image acquired by the camera based on the large window, and the small window may display the image of the focus tracking object, but the terminal device may not generate a video, or may not store the contents displayed by the large window and the small window.
The recording interface may include a large window and a small window. The large window can be a window with the specification equal to or slightly smaller than that of the display screen, the large window can display images obtained by the camera, and the images displayed by the large window in the recording mode can be defined as recording pictures of the large window. The small window can be a window with a specification smaller than that of the large window, the small window can display an image of the focus tracking object selected by a user, and the image displayed by the small window in the recording mode can be defined as a recording picture of the small window. It can be understood that in the recording mode, the terminal device not only can display the recording picture of the large window and the recording picture of the small window, but also can generate the large window video and the small window video which are recorded after the recording mode is started, and can save the video generated in the large window when the recording of the large window is finished, and save the video generated in the small window when the recording of the small window is finished. The embodiment of the application does not limit the naming of the preview mode and the recording mode.
It should be noted that the preview interface described in the embodiment of the present application may be understood as the interface displayed when the camera application of the terminal device is in the preview mode of the principal angle mode, and the recording interface may be understood as the interface displayed when the camera application of the terminal device is in the recording mode of the principal angle mode. Details are not repeated below.
Depending on the context, the term "when …" used in the above embodiments may be interpreted as "if …", "after …", "in response to determining …", or "in response to detecting …". Similarly, depending on the context, the phrase "when it is determined …" or "if (a stated condition or event) is detected" may be interpreted as "if it is determined …", "in response to determining …", "when (a stated condition or event) is detected", or "in response to detecting (a stated condition or event)".
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk), among others.
Those of ordinary skill in the art will appreciate that all or part of the processes of the above-described method embodiments may be accomplished by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium, and when executed, may include the processes of the above-described method embodiments. The aforementioned storage medium includes various media that can store program code, such as a ROM, a random access memory (RAM), a magnetic disk, or an optical disk.

Claims (10)

1. A method for encoding a multiplex video, the method being applied to an electronic device, the electronic device comprising an encoder and a camera, the method comprising:
responding to a first operation of a user, starting video recording by the electronic equipment, and displaying a first interface, wherein the first interface displays a preview picture acquired through the camera, the first interface comprises a big window and a small window, and the picture content of the big window comprises the picture content of the small window;
in the video recording process of the electronic equipment, the camera image data is started to be encoded through the encoder to obtain encoded image data, wherein the camera image data is uncoded image data of the big window or the small window;
Under the condition that the electronic equipment acquires a pause video signal, the encoder does not encode the camera image data after the pause video;
under the condition that the electronic equipment acquires the recovery video signal, the encoder is used for continuing to encode the camera image data after recovery video;
the electronic equipment finishes encoding the camera image data through the encoder under the condition that the electronic equipment acquires a video recording ending signal;
the electronic equipment generates video based on the encoded image data, wherein the video is of the big window or the small window.
2. The method according to claim 1, wherein the electronic device includes a camera mode module, and the electronic device does not encode the camera image data after the pause video recording by the encoder in case the pause video recording signal is acquired, specifically comprising:
under the condition that the electronic equipment acquires a pause video signal through the camera mode module, the electronic equipment acquires a pause coding instruction through the encoder, and does not code camera image data with shooting time stamps at and after the pause time stamp, each frame of image in the camera image data comprises a shooting time stamp, and the pause coding instruction comprises the pause time stamp;
Under the condition that the electronic equipment acquires the recovery video signal, the encoder is used for continuing to encode the camera image data after the recovery video, and the method specifically comprises the following steps:
under the condition that the electronic equipment acquires a recovery video signal through a camera mode module, the electronic equipment acquires a recovery coding instruction through the encoder, and continuously codes camera image data of which the shooting time stamp is a recovery time stamp and the later, wherein the recovery coding instruction comprises the recovery time stamp.
3. The method of claim 2, wherein the camera image data comprises first image data and second image data, the first image data being image data of the large window and the second image data being image data of the small window; the encoder includes a first encoder that encodes the first image data and a second encoder that encodes the second image data.
4. The method according to claim 3, wherein the electronic device further comprises an encoding control module, and in the case that the camera image data is the first image data, the electronic device starts encoding the camera image data by the encoder during the recording process, so as to obtain encoded image data, and specifically includes:
The electronic equipment responds to the first operation, acquires a first video starting signal through the camera mode module, and sends a first video starting instruction to the coding control module; under the condition that the encoding control module acquires the first video recording starting instruction, the electronic equipment controls the first encoder to start encoding the first image data based on the first video recording starting instruction through the encoding control module to acquire third image data;
under the condition that the electronic equipment acquires a pause video signal through the camera mode module, the electronic equipment acquires a pause coding instruction through the encoder and does not code camera image data of which the shooting time stamp is in the pause time stamp and after the pause time stamp, and the method specifically comprises the following steps:
under the condition that the camera mode module acquires a first pause video recording signal, the electronic equipment sends a first pause video recording instruction to the coding control module through the camera mode module; under the condition that the coding control module acquires a first pause video recording instruction, the electronic equipment determines a pause time stamp based on the first pause video recording instruction through the coding control module so as to generate a first pause coding instruction, and sends the first pause coding instruction to the first encoder; the electronic equipment does not encode first image data with shooting time stamps at and after the pause time stamp based on the first pause encoding instruction through the first encoder; the first pause video recording instruction indicates to pause the encoding of the large window image, and the first pause encoding instruction comprises the pause time stamp;
Under the condition that the electronic equipment acquires a recovery video signal through the camera mode module, the electronic equipment acquires a recovery coding instruction through the encoder and continues to code the camera image data of which the shooting time stamp is at the recovery time stamp and after, and the method specifically comprises the following steps:
under the condition that the camera mode module acquires a first recovery video signal, the electronic equipment sends a first recovery video instruction to the coding control module through the camera mode module; under the condition that the coding control module acquires a first recovery video instruction, the electronic equipment determines a recovery time stamp based on the first recovery video instruction through the coding control module so as to generate a first recovery coding instruction, and sends the first recovery coding instruction to the first encoder; the electronic equipment continues to encode the first image data with the shooting time stamp at the recovery time stamp and after the shooting time stamp based on the first recovery encoding instruction through the first encoder; the first recovery video recording instruction indicates to recover the encoding of the large window image, and the first recovery encoding instruction comprises the recovery time stamp;
The electronic device, when obtaining the video signal ending, finishes encoding the camera image data through the encoder, and specifically includes:
under the condition that the camera mode module acquires a video ending signal, the electronic equipment sends a first video ending instruction to the coding control module through the camera mode module; when the coding control module acquires the first video ending instruction, the electronic equipment controls the first encoder to finish coding the first image data based on the first video ending instruction through the coding control module;
the electronic device generates video based on the encoded image data, and specifically includes:
and the electronic equipment encapsulates the third image data through the encoding control module to generate a first file, wherein the first file is a video file of the large window.
5. The method of claim 4, wherein the electronic device starts recording, and wherein after displaying the first interface, the method further comprises:
responding to a second operation of a user, suspending video recording of the large window by the electronic equipment, displaying a second interface, and acquiring a first suspended video recording signal through the camera mode module, wherein the first interface comprises a large window suspended recording control, the second operation is an operation acting on the large window suspended recording control, and the suspended time stamp included in the first suspended coding instruction is a time point when the second operation is detected by the electronic equipment;
Responding to a third operation of a user, the electronic equipment restores video of the large window, displays a third interface, and acquires a first restoring video signal through the camera mode module, wherein the second interface comprises a large window restoring recording control, the third operation is an operation acting on the large window restoring recording control, and a restoring time stamp included in the first restoring coding instruction is a time point when the electronic equipment detects the third operation;
and responding to a fourth operation of a user, ending the video recording of the large window by the electronic equipment, and acquiring a first video recording ending signal through the camera mode module, wherein the third interface comprises a large window ending recording control, and the fourth operation is an operation acting on the large window ending recording control.
6. The method according to any one of claims 3-5, wherein the electronic device further includes an encoding control module, and in the case that the camera image data is the second image data, the electronic device starts encoding the camera image data by the encoder during the recording process, to obtain encoded image data, and specifically includes:
The electronic equipment responds to the first operation, acquires a second video starting signal through the camera mode module, and sends a second video starting instruction to the coding control module; under the condition that the coding control module acquires the second video recording starting instruction, the electronic equipment controls the second encoder to start coding the second image data based on the second video recording starting instruction through the coding control module to acquire fourth image data;
under the condition that the electronic equipment acquires a pause video signal through the camera mode module, the electronic equipment acquires a pause coding instruction through the encoder and does not code camera image data of which the shooting time stamp is in the pause time stamp and after the pause time stamp, and the method specifically comprises the following steps:
under the condition that the camera mode module acquires a second pause video recording signal, the electronic equipment sends a second pause video recording instruction to the coding control module through the camera mode module; under the condition that the coding control module acquires a second pause video recording instruction, the electronic equipment determines a pause time stamp based on the second pause video recording instruction through the coding control module so as to generate a second pause coding instruction, and sends the second pause coding instruction to the second coder; the electronic equipment does not encode second image data with shooting time stamps at and after the pause time stamp based on the second pause encoding instruction through the second encoder; the second pause video recording instruction indicates to pause the encoding of the small window image, and the second pause encoding instruction comprises the pause time stamp;
Under the condition that the electronic equipment acquires a recovery video signal through the camera mode module, the electronic equipment acquires a recovery coding instruction through the encoder and continues to code the camera image data of which the shooting time stamp is at the recovery time stamp and after, and the method specifically comprises the following steps:
when the camera mode module acquires a second recovery video signal, the electronic device sends a second recovery video instruction to the coding control module through the camera mode module, and when the coding control module acquires the second recovery video instruction, the electronic device determines a recovery timestamp based on the second recovery video instruction through the coding control module so as to generate a second recovery coding instruction and sends the second recovery coding instruction to the second encoder; the electronic equipment continues to encode second image data with shooting time stamps at and after the recovery time stamp based on the second recovery encoding instruction through the second encoder; the second recovery video recording instruction indicates the code of recovering the small window image, and the second recovery coding instruction comprises the recovery time stamp;
The electronic device, when obtaining the video signal ending, finishes encoding the camera image data through the encoder, and specifically includes:
under the condition that the camera mode module acquires a video ending signal, the electronic equipment sends a second video ending instruction to the coding control module through the camera mode module; when the coding control module acquires the second video ending instruction, the electronic equipment controls the second encoder to finish coding the second image data based on the second video ending instruction through the coding control module;
the electronic device generates video based on the encoded image data, and specifically includes:
and the electronic equipment encapsulates the fourth image data through the encoding control module to generate a second file, wherein the second file is a video file of the small window.
7. The method of claim 6, wherein the video recording target of the small window is a photographic object in the picture of the large window, and wherein, in the case that the electronic device detects that the video recording target disappears, the method further comprises:
The electronic equipment obtains a second pause video signal based on the vanishing time of the video target through the camera mode module, wherein the pause time stamp is a first time point after the vanishing time of the video target;
in the event that the electronic device re-detects the disappeared video recording target, the method further comprises:
and the electronic equipment acquires a second recovery video signal based on the time when the video target is re-detected through the camera mode module, wherein the recovery time stamp is the time point when the video target is re-detected.
8. The method of claim 6, wherein the method further comprises:
responding to a second operation of a user, suspending video recording of the large window by the electronic equipment, displaying a second interface, and acquiring a second video recording ending signal through the camera mode module, wherein the first interface comprises a large window suspended recording control, and the second operation is an operation acting on the large window suspended recording control; or
responding to a fourth operation, wherein the electronic equipment finishes video recording of the large window, and acquires a second video recording finishing signal through the camera mode module, the first interface comprises a large window finishing recording control, and the fourth operation is an operation acting on the large window finishing recording control; or
And responding to a fifth operation, wherein the electronic equipment finishes video recording of the small window, and acquires a second video recording finishing signal through the camera mode module, the first interface comprises a small window finishing recording control, and the fifth operation is an operation acting on the small window finishing recording control.
9. An electronic device, comprising: one or more processors and one or more memories; the one or more processors being coupled with the one or more memories, the one or more memories being configured to store computer program code, the computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform the method of any of claims 1-8.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, implements the method according to any of claims 1-8.
CN202210886609.8A 2022-06-02 2022-07-26 Coding method of multipath video and electronic equipment Pending CN117221549A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210623379 2022-06-02
CN2022106233796 2022-06-02

Publications (1)

Publication Number Publication Date
CN117221549A (en) 2023-12-12

Family

ID=89033993

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210886609.8A Pending CN117221549A (en) 2022-06-02 2022-07-26 Coding method of multipath video and electronic equipment

Country Status (1)

Country Link
CN (1) CN117221549A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination