Disclosure of Invention
Embodiments of the present application provide a method, an apparatus, a device, and a storage medium for panoramic video condensation, so as to improve the accuracy of condensing a panoramic video.
In a first aspect, an embodiment of the present application provides a method for panoramic video condensation, including: acquiring a preselected motion trajectory of each preselected moving target in a first video, wherein the preselected motion trajectory of each preselected moving target passes through a position corresponding to a dividing line, and the first video is obtained by dividing a panoramic video along the dividing line; acquiring, according to the preselected motion trajectories of the preselected moving targets, a motion feature of each preselected moving target when it passes through the position, and merging the preselected motion trajectories corresponding to a same moving target according to the motion features to obtain merged motion trajectories; and acquiring the condensed video of the panoramic video according to the merged motion trajectories.
With reference to the first aspect, in a possible implementation manner of the first aspect, the motion feature of each preselected moving target when it passes through the position is any one of the following: gradually approaching the position along a first direction, gradually departing from the position along the first direction, gradually approaching the position along a second direction, and gradually departing from the position along the second direction; wherein gradually approaching the position along the first direction matches gradually departing from the position along the second direction, gradually departing from the position along the first direction matches gradually approaching the position along the second direction, and the first direction is opposite to the second direction.
With reference to the first aspect, in a possible implementation manner of the first aspect, merging the preselected motion trajectories corresponding to a same moving target according to the motion features includes: merging the preselected motion trajectories corresponding to the same moving target according to the motion features, the time at which each preselected moving target passes through the position, and the coordinates of each preselected moving target.
With reference to the first aspect, in a possible implementation manner of the first aspect, merging the preselected motion trajectories corresponding to a same moving target according to the motion features, the time at which each preselected moving target passes through the position, and the coordinates of each preselected moving target includes: for a first preselected moving target among the preselected moving targets, determining, as a first moving target group, the preselected moving targets whose motion features match the motion feature of the first preselected moving target, which pass through the position at the same time, and whose coordinates match a first coordinate of the first preselected moving target; determining, by using a pedestrian re-identification algorithm, the preselected moving target in the first moving target group that is the same moving target as the first preselected moving target; and merging the preselected motion trajectories corresponding to the same moving target.
With reference to the first aspect, in a possible implementation manner of the first aspect, the abscissa of a coordinate that matches the first coordinate is located within a preset range, and the absolute value of the difference between the ordinate of that coordinate and the ordinate of the first coordinate is less than or equal to a preset threshold.
In a second aspect, an embodiment of the present application provides an apparatus for panoramic video condensation, including: an acquisition module, configured to acquire a preselected motion trajectory of each preselected moving target in a first video, wherein the preselected motion trajectory of each preselected moving target passes through a position corresponding to a dividing line, and the first video is obtained by dividing a panoramic video along the dividing line; and a merging module, configured to acquire, according to the preselected motion trajectories of the preselected moving targets, a motion feature of each preselected moving target when it passes through the position, and to merge the preselected motion trajectories corresponding to a same moving target according to the motion features to obtain merged motion trajectories; wherein the acquisition module is further configured to acquire the condensed video of the panoramic video according to the merged motion trajectories.
With reference to the second aspect, in a possible implementation manner of the second aspect, the motion feature of each preselected moving target when it passes through the position is any one of the following: gradually approaching the position along a first direction, gradually departing from the position along the first direction, gradually approaching the position along a second direction, and gradually departing from the position along the second direction; wherein gradually approaching the position along the first direction matches gradually departing from the position along the second direction, gradually departing from the position along the first direction matches gradually approaching the position along the second direction, and the first direction is opposite to the second direction.
With reference to the second aspect, in a possible implementation manner of the second aspect, the merging module is specifically configured to merge the preselected motion trajectories corresponding to the same moving target according to the motion features, the time at which each preselected moving target passes through the position, and the coordinates of each preselected moving target.
With reference to the second aspect, in a possible implementation manner of the second aspect, the merging module is specifically configured to: for a first preselected moving target among the preselected moving targets, determine, as a first moving target group, the preselected moving targets whose motion features match the motion feature of the first preselected moving target, which pass through the position at the same time, and whose coordinates match a first coordinate of the first preselected moving target; determine, by using a pedestrian re-identification algorithm, the preselected moving target in the first moving target group that is the same moving target as the first preselected moving target; and merge the preselected motion trajectories corresponding to the same moving target.
With reference to the second aspect, in a possible implementation manner of the second aspect, the abscissa of a coordinate that matches the first coordinate is located within a preset range, and the absolute value of the difference between the ordinate of that coordinate and the ordinate of the first coordinate is less than or equal to a preset threshold.
In a third aspect, an embodiment of the present application provides a device for panoramic video condensation, including a processor and a memory.
The memory is for storing computer-executable instructions.
The processor is configured to execute the computer-executable instructions stored in the memory, so as to perform the method for panoramic video condensation according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer storage medium having computer-executable instructions stored therein, the computer-executable instructions, when executed by a processor, implementing the method for panoramic video condensation according to the first aspect.
In a fifth aspect, the present application provides a computer program product including computer-executable instructions which, when executed by a processor, implement the method for panoramic video condensation according to the first aspect.
According to the method and the apparatus, the preselected motion trajectory of each preselected moving target in the first video, which is obtained by dividing the panoramic video along the dividing line, is acquired, where each preselected motion trajectory passes through the position corresponding to the dividing line; the motion feature of each preselected moving target when it passes through that position is acquired according to its preselected motion trajectory; the preselected motion trajectories corresponding to a same moving target are merged to obtain merged motion trajectories; and video condensation is performed according to the merged motion trajectories. In this way, the motion trajectories of a same moving target separated by the dividing line in the panoramic video are merged, which solves the problem that the motion trajectory of one moving target is tracked as a plurality of motion trajectories of a plurality of moving targets, and improves the accuracy of panoramic video condensation.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Specifically, in the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes the association relationship of the associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone, where A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "At least one of the following" or similar expressions refers to any combination of these items, including any combination of singular or plural items. For example, at least one of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may be single or multiple. The terms "first," "second," and the like in this application are used to distinguish between similar objects and do not necessarily describe a particular order or sequence.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The panoramic camera according to the embodiments of the present application is a camera in which a plurality of camera lenses are installed around a fixed point. For example, six camera lenses may be installed around the fixed point, one lens every 60 degrees, so that the fields of view of the plurality of lenses together form a panoramic field of view.
The video output by the panoramic camera is formed by fusing the content captured by the lenses; the captured content can be projected onto a sphere for display, and the video projected onto the sphere may be called a panoramic video. The panoramic video is divided along a dividing line to obtain a first video; that is, the first video is a video obtained by unwrapping the panoramic video onto a two-dimensional plane. In the first video, a moving target that passes through the position corresponding to the dividing line is recognized as two different moving targets, so two motion trajectories are obtained for that target, which affects the accuracy of video condensation.
The panoramic camera related to the present application may be a common fixed-focus panoramic camera, a 3D fixed-focus panoramic camera, or a square zoom camera, which is not limited herein.
Fig. 1 is a schematic view of a video obtained by dividing a panoramic video along a dividing line. As shown in Fig. 1, after the panoramic video is divided along the dividing line, one of the moving targets is split into two moving targets, and accordingly that moving target has two motion trajectories.
In order to solve the problem of insufficient accuracy when existing video condensation technology is applied directly to a panoramic video, the present application provides a method for panoramic video condensation: the preselected motion trajectories of the preselected moving targets in a first video are acquired, wherein each preselected motion trajectory passes through the position corresponding to a dividing line, and the first video is obtained by dividing the panoramic video along the dividing line; the motion feature of each preselected moving target when it passes through the position corresponding to the dividing line is acquired according to its preselected motion trajectory, and the preselected motion trajectories corresponding to a same moving target are merged according to the motion features to obtain merged motion trajectories; and the condensed video of the panoramic video is acquired according to the merged motion trajectories.
The technical solution of the present application will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 2 is a flowchart of a method for panoramic video condensation according to an embodiment of the present application. The method is executed by a video condensation apparatus, which may be part or all of a smart device such as a computer, a tablet computer, a notebook computer, or a server. As shown in Fig. 2, the method includes the following steps:
s201: and acquiring a preselected motion track of each preselected motion target in the first video, wherein the preselected motion track of each preselected motion target passes through the position corresponding to the dividing line, and the first video is obtained after the panoramic video is divided by the dividing line.
S202: and according to the preselected motion trail of each preselected motion target, obtaining the motion characteristics of each preselected motion target when passing through the position corresponding to the segmentation line, and according to the motion characteristics, combining the preselected motion trails corresponding to the same motion target to obtain a combined motion trail.
S203: and acquiring the video after the panoramic video is concentrated according to the combined motion trail.
The following description is made with respect to step S201:
Before acquiring the preselected motion trajectories of the preselected moving targets in the first video, the video condensation apparatus divides the panoramic video along a dividing line to obtain the first video, where the first video is the video obtained by unwrapping the panoramic video onto a two-dimensional plane.
After the first video is acquired, the video condensation apparatus acquires the background of the first video, which can be obtained by background modeling; the background is the static image of the first video without motion information. The background can be obtained with an existing algorithm, for example a Gaussian mixture background modeling algorithm. Gaussian mixture background modeling performs foreground/background classification of the pixels in a video frame, and a background model is obtained by gathering statistics of the pixel values at each point of the video images.
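As a rough illustration of the idea, the sketch below replaces Gaussian mixture modeling with a per-pixel temporal median, which likewise suppresses transient moving objects; the function names and the threshold value are hypothetical, not part of the described method.

```python
import numpy as np

def estimate_background(frames):
    """Per-pixel temporal median over a stack of frames: a simplified
    stand-in for Gaussian mixture background modeling."""
    return np.median(np.stack(frames, axis=0), axis=0)

def foreground_mask(frame, background, threshold=25):
    """Mark a pixel as foreground when it deviates from the background
    model by more than `threshold` grey levels."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold
```

With a mostly static scene, the median background stays clean even when a moving target briefly occupies a pixel, so the mask isolates the moving regions for later detection.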
Further, the video condensation apparatus performs foreground target detection on the first video to obtain the moving targets in the first video; the moving targets obtained in this embodiment may be referred to as original moving targets. A tracking algorithm is then used to track the original moving targets in the first video and obtain the motion trajectory of each original moving target.
Specifically, foreground target detection can be implemented with an existing algorithm, for example the You Only Look Once (YOLO) algorithm. YOLO is an end-to-end real-time target detection algorithm based on deep learning that integrates target region prediction and target category prediction into a single neural network model, achieving fast target detection and recognition with high accuracy.
Further, the motion trajectory of each original moving target can be obtained with existing algorithms, for example a Kalman filter combined with the Hungarian algorithm. Specifically, after a moving target is detected, its feature information, including its centroid and circumscribed rectangle, is computed and used to initialize a Kalman filter (for example, the state may be initialized to 0). The Kalman filter predicts the corresponding target region in the next frame, and when the next frame arrives, the Hungarian algorithm performs target matching within the predicted region, yielding the motion trajectory of each original moving target.
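The predict-then-match loop above can be sketched in miniature. This is not the Kalman/Hungarian implementation of the embodiment: the prediction is reduced to a constant-velocity extrapolation and the assignment to a greedy nearest-neighbour pairing, with all names hypothetical.

```python
def predict_next(track, dt=1.0):
    """Constant-velocity prediction of a track's next centroid,
    a minimal stand-in for the Kalman filter's predict step."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    return (x1 + (x1 - x0) * dt, y1 + (y1 - y0) * dt)

def match_detections(tracks, detections, max_dist=50.0):
    """Greedily pair each predicted track position with the nearest
    unclaimed detection (a simplification of Hungarian matching)."""
    pairs, used = {}, set()
    for ti, track in enumerate(tracks):
        px, py = predict_next(track)
        best, best_d = None, max_dist
        for di, (dx, dy) in enumerate(detections):
            d = ((px - dx) ** 2 + (py - dy) ** 2) ** 0.5
            if di not in used and d < best_d:
                best, best_d = di, d
        if best is not None:
            pairs[ti] = best
            used.add(best)
    return pairs
```

Each matched detection is appended to its track, extending the motion trajectory frame by frame; unmatched detections would start new tracks.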
Further, after the motion trajectories of the original moving targets in the first video are obtained, the preselected motion trajectory of each preselected moving target is selected from them, where each preselected motion trajectory passes through the position corresponding to the dividing line. In other words, an original moving target whose motion trajectory passes through the position corresponding to the dividing line is a preselected moving target.
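Since the dividing line corresponds to the left and right borders of the unwrapped first video, the selection step can be sketched as a filter on trajectories that enter an edge region; the function name and the margin width are hypothetical.

```python
def is_preselected(trajectory, width, margin=20):
    """A trajectory is preselected when it enters the edge regions of
    the first video that correspond to the dividing line of the
    unwrapped panoramic video (the left or right border)."""
    return any(x <= margin or x >= width - margin for x, _ in trajectory)
```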
The following description is made with respect to step S202:
After the preselected motion trajectories of the preselected moving targets are acquired, the motion feature of each preselected moving target when it passes through the position corresponding to the dividing line is acquired according to its preselected motion trajectory. In one manner, the motion feature may be any one of the following: gradually approaching the position corresponding to the dividing line along a first direction, gradually departing from that position along the first direction, gradually approaching that position along a second direction, and gradually departing from that position along the second direction; wherein approaching the position along the first direction matches departing from the position along the second direction, departing from the position along the first direction matches approaching the position along the second direction, and the first direction is opposite to the second direction.
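One possible reading of the four motion features and their matching rule can be sketched as follows. The mapping of each feature to a (border, motion direction) pair is an assumption of this sketch, not the claim's authoritative definition; only the matching rule (first-direction approach pairs with second-direction departure, and vice versa) is taken from the text.

```python
from enum import Enum

class Feature(Enum):
    """The four motion features, under an assumed mapping: the
    'position' is the seam, which appears at both the left and right
    borders of the unwrapped first video."""
    APPROACH_ALONG_FIRST = 1   # right border, moving right (toward the seam)
    DEPART_ALONG_SECOND = 2    # left border, moving right (away from the seam)
    APPROACH_ALONG_SECOND = 3  # left border, moving left (toward the seam)
    DEPART_ALONG_FIRST = 4     # right border, moving left (away from the seam)

def motion_feature(trajectory, width, margin=20):
    """Classify a preselected trajectory by which border it touches and
    which way it is moving there (assumed convention)."""
    (x0, _), (x1, _) = trajectory[-2], trajectory[-1]
    moving_right = x1 > x0
    if x1 >= width - margin:   # right border region
        return Feature.APPROACH_ALONG_FIRST if moving_right else Feature.DEPART_ALONG_FIRST
    if x1 <= margin:           # left border region
        return Feature.DEPART_ALONG_SECOND if moving_right else Feature.APPROACH_ALONG_SECOND
    return None

def features_match(f, g):
    """Approaching along the first direction matches departing along
    the second direction, and vice versa."""
    return {f, g} in ({Feature.APPROACH_ALONG_FIRST, Feature.DEPART_ALONG_SECOND},
                      {Feature.DEPART_ALONG_FIRST, Feature.APPROACH_ALONG_SECOND})
```

Under this convention, a target exiting the right border moving right matches a target entering at the left border moving right, which is the physical continuation across the seam.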
After the motion features of the preselected moving targets when passing through the position corresponding to the dividing line are obtained, the preselected motion trajectories corresponding to a same moving target can be merged according to those motion features to obtain merged motion trajectories.
In one manner, merging the preselected motion trajectories corresponding to a same moving target according to the motion features includes: merging the preselected motion trajectories corresponding to the same moving target according to the motion features, the time at which each preselected moving target passes through the position corresponding to the dividing line, and the coordinates of each preselected moving target.
The preselected moving targets corresponding to a same moving target can be determined according to their motion features when passing through the position corresponding to the dividing line, the time at which they pass through that position, and their coordinates; the preselected motion trajectories of the preselected moving targets corresponding to the same moving target are then merged. This can be realized through the following steps (1) to (3):
(1) For a first preselected moving target among the preselected moving targets: determining, as a first moving target group, the preselected moving targets whose motion features match that of the first preselected moving target, which pass through the position corresponding to the dividing line at the same time, and whose coordinates match the coordinates of the first preselected moving target.
Specifically, the preselected moving targets are grouped by their motion feature when passing through the position corresponding to the dividing line, yielding four groups.
For any first group among the four groups, a second group is acquired whose motion feature matches that of the preselected moving targets in the first group. For any first preselected moving target in the first group, a second preselected moving target is determined from the second group that passes through the position corresponding to the dividing line at the same time as the first preselected moving target and whose coordinates match the first coordinate of the first preselected moving target. It can be understood that there is at least one second preselected moving target, and the second preselected moving targets constitute the first moving target group. A coordinate matching the first coordinate of the first preselected moving target satisfies the following condition: its abscissa is located within a preset range, and the absolute value of the difference between its ordinate and the ordinate of the first coordinate is less than or equal to a preset threshold.
It can be understood that the preset range includes a first preset range and a second preset range. The abscissae in the first preset range are those of the points in a first region of the first video, the first region being a region close to a first edge of the first video; the abscissae in the second preset range are those of the points in a second region of the first video, the second region being a region close to a second edge of the first video. The first edge may be the right border of the first video, and the second edge may be the left border.
If the motion feature of the first preselected moving target when passing through the position corresponding to the dividing line is gradually departing from the position along the first direction or gradually approaching the position along the second direction, the abscissa of the second preselected moving target should be within the first preset range; if the motion feature is gradually approaching the position along the first direction or gradually departing from the position along the second direction, the abscissa of the second preselected moving target should be within the second preset range.
In one possible manner, for the first preselected moving target, at least one first moving target whose motion feature matches that of the first preselected moving target when passing through the position corresponding to the dividing line is acquired; from these, at least one second moving target whose abscissa is within the preset range is acquired; and from these, at least one third moving target is acquired whose ordinate differs from that of the first preselected moving target by an absolute value less than or equal to the preset threshold. Each third moving target is a second preselected moving target, and the third moving targets constitute the first moving target group.
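The coordinate-matching condition above reduces to a simple predicate; a minimal sketch follows, with the function name, range representation, and default threshold chosen for illustration.

```python
def coordinates_match(first, candidate, x_range, y_threshold=10):
    """True when the candidate's abscissa lies in the preset range and
    its ordinate differs from the first target's ordinate by at most
    the preset threshold."""
    _, fy = first
    cx, cy = candidate
    lo, hi = x_range
    return lo <= cx <= hi and abs(cy - fy) <= y_threshold
```

For a target exiting on one side, `x_range` would be the edge region on the opposite border of the frame, per the first/second preset ranges described above.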
With continued reference to Fig. 1, in practice, the moving target 11 and the moving target 12 separated by the dividing line in the first video correspond to the same moving target, and both are preselected moving targets. As can be seen from Fig. 1, the motion feature of the moving target 11 when passing through the position corresponding to the dividing line is gradually departing from the position along the first direction, while that of the moving target 12 is gradually approaching the position along the second direction, so the two motion features match. Then, when the moving target 11 is taken as the first preselected moving target, the moving target 12 is a second preselected moving target in the first moving target group.
(2) Determining, by using a pedestrian re-identification algorithm, the preselected moving target in the first moving target group that is the same moving target as the first preselected moving target.
Specifically, the first preselected moving target is compared with the moving targets of the first moving target group: features are extracted with a pedestrian re-identification technique, and the first moving target group is searched according to the motion features of the preselected moving targets when passing through the position corresponding to the dividing line, the time at which they pass through that position, and their coordinates; the preselected moving target with the highest similarity is the same moving target as the first preselected moving target.
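The highest-similarity search can be sketched as a nearest-neighbour lookup over appearance feature vectors. In a real system these vectors come from a trained pedestrian re-identification network; here plain tuples stand in for them, and the function names are hypothetical.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two appearance feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def most_similar(query_feature, candidate_features):
    """Index of the candidate most similar to the query feature."""
    return max(range(len(candidate_features)),
               key=lambda i: cosine_similarity(query_feature, candidate_features[i]))
```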
(3) Merging the preselected motion trajectories of the preselected moving targets corresponding to the same moving target.
Specifically, the preselected motion trajectories of the preselected moving targets of the same moving target are merged to obtain a merged motion trajectory.
The motion trajectories can be merged with existing methods, which are not described here again.
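One assumed way to stitch two fragments split at the seam is to concatenate them while shifting the second fragment's abscissae by the frame width, so the merged trajectory is spatially continuous; this is an illustration only, and the trajectory point layout `(t, x, y)` is hypothetical.

```python
def merge_tracks(track_a, track_b, width):
    """Concatenate two fragments of the same target that were split at
    the dividing line; the second fragment's abscissae are shifted by
    the frame width so the merged trajectory is continuous."""
    merged = list(track_a)
    merged += [(t, x + width, y) for (t, x, y) in track_b]
    return sorted(merged)  # order by timestamp
```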
The following description is made with respect to step S203:
After the preselected motion trajectories of the preselected moving targets corresponding to the same moving target are merged into merged motion trajectories, the condensed video of the panoramic video is acquired according to the merged motion trajectories. Specifically, this includes: acquiring the condensed video of the panoramic video according to the merged motion trajectories, first motion trajectories, and the background, where the first motion trajectories are the motion trajectories of the original moving targets other than the preselected moving targets, and the background is obtained from the first video in step S201.
Specifically, after step S202, the complete motion trajectory of each moving target in the panoramic video is available, and an energy-function algorithm can then complete the condensation of the panoramic video. The energy-function algorithm arranges the motion trajectories densely in time on the premise that spatial positions are unchanged and collisions between the motion trajectories of moving targets are avoided as much as possible. The static background image obtained in step S201 is used as the background, the rearranged motion trajectories are placed at their original spatial positions, and the target regions are fused with the background image to obtain the condensed panoramic video.
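The dense rearrangement can be illustrated with a crude greedy stand-in for the energy-function optimization: each trajectory "tube" keeps its spatial box and is shifted to the earliest start time at which it collides with no previously placed tube. The tube representation `(t_start, t_end, box)` and all names are assumptions of this sketch.

```python
def tubes_collide(a, b, shift_a, shift_b):
    """Two tubes collide when their shifted time intervals overlap and
    their spatial boxes intersect. A tube is (t_start, t_end, box)."""
    (s0, e0, box0), (s1, e1, box1) = a, b
    time_overlap = s0 + shift_a <= e1 + shift_b and s1 + shift_b <= e0 + shift_a
    x0, y0, x1, y1 = box0
    u0, v0, u1, v1 = box1
    space_overlap = x0 <= u1 and u0 <= x1 and y0 <= v1 and v0 <= y1
    return time_overlap and space_overlap

def arrange_tubes(tubes):
    """Greedily shift each tube to the earliest start time at which it
    collides with no previously placed tube - a crude stand-in for the
    energy-function optimization used for video condensation."""
    shifts = []
    for i, tube in enumerate(tubes):
        shift = -tube[0]  # try to start at time 0
        while any(tubes_collide(tube, tubes[j], shift, shifts[j]) for j in range(i)):
            shift += 1
        shifts.append(shift)
    return shifts
```

Tubes in disjoint spatial regions can be overlapped in time, so the condensed video is much shorter than the original while every target remains visible at its original position.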
In this embodiment, the preselected motion trajectory of each preselected moving target in the first video, which is obtained by dividing the panoramic video along the dividing line, is acquired, where each preselected motion trajectory passes through the position corresponding to the dividing line; the motion feature of each preselected moving target when passing through that position is acquired according to its preselected motion trajectory; the preselected motion trajectories corresponding to a same moving target are merged to obtain merged motion trajectories; and video condensation is performed according to the merged motion trajectories. In this way, the motion trajectories of a same moving target separated by the dividing line in the panoramic video are merged, which solves the problem that the motion trajectory of one moving target is tracked as a plurality of motion trajectories of a plurality of moving targets, and improves the accuracy of panoramic video condensation.
The panoramic video concentration apparatus of the present application is described below with reference to specific embodiments.
Fig. 3 is a schematic diagram of an apparatus for panoramic video concentration according to an embodiment of the present application. This embodiment provides a video concentration apparatus, which may be part or all of an intelligent device such as a computer, a tablet computer, or a notebook computer. As shown in fig. 3, the apparatus includes:
the obtaining module 310, configured to obtain a preselected motion trajectory of each preselected motion target in a first video, where the preselected motion trajectory of each preselected motion target passes through a position corresponding to a dividing line, and the first video is obtained after a panoramic video is divided by the dividing line.
And a merging module 320, configured to obtain motion characteristics of each preselected motion target when passing through the position according to the preselected motion trajectory of each preselected motion target, and merge preselected motion trajectories corresponding to the same motion target according to the motion characteristics to obtain a merged motion trajectory.
The obtaining module 310 is further configured to obtain a concentrated video of the panoramic video according to the combined motion trajectory.
Optionally, as an embodiment, the motion characteristic of each preselected motion target when passing through the position is any one of the following: gradually approaching the position along a first direction, gradually departing from the position along the first direction, gradually approaching the position along a second direction, and gradually departing from the position along the second direction; wherein gradually approaching the position along the first direction matches gradually departing from the position along the second direction, gradually departing from the position along the first direction matches gradually approaching the position along the second direction, and the first direction and the second direction are opposite.
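The matching rule above can be written out as a small lookup table. The string labels (`"d1"`/`"d2"` for the first and second directions) are an illustrative encoding, not named in the source.

```python
# The four motion characteristics of a preselected motion target passing
# through the position corresponding to the dividing line.
APPROACH_D1, DEPART_D1 = "approach_d1", "depart_d1"
APPROACH_D2, DEPART_D2 = "approach_d2", "depart_d2"

# A track approaching the position along one direction matches a track
# departing from it along the opposite direction.
MATCHING = {
    APPROACH_D1: DEPART_D2,
    DEPART_D1: APPROACH_D2,
    APPROACH_D2: DEPART_D1,
    DEPART_D2: APPROACH_D1,
}

def characteristics_match(a, b):
    """True when two track segments can belong to the same motion target."""
    return MATCHING.get(a) == b
```

Because the table is symmetric, `characteristics_match(a, b)` and `characteristics_match(b, a)` always agree.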
Optionally, as an embodiment, the merging module 320 is specifically configured to: combine the preselected motion trajectories corresponding to the same motion target according to the motion characteristics, the time at which each preselected motion target passes through the position, and the coordinates of each preselected motion target when passing through the position.
Optionally, as an embodiment, the merging module 320 is specifically configured to: for a first preselected motion target among the preselected motion targets, determine, as a first motion target group, the preselected motion targets whose motion characteristics match that of the first preselected motion target, which pass through the position at the same time, and whose coordinates match a first coordinate of the first preselected motion target; determine, by using a pedestrian re-identification algorithm, the preselected motion target in the first motion target group that is the same motion target as the first preselected motion target; and combine the preselected motion trajectories corresponding to the same motion target.
Optionally, as an embodiment, the abscissa of a coordinate that matches the first coordinate is located within a preset range, and the absolute value of the difference between the ordinate of that coordinate and the ordinate of the first coordinate is smaller than or equal to a preset threshold.
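The candidate-group construction described in the two embodiments above can be sketched as follows. The field names, the `MATCHING` table encoding, and the tolerance parameters are illustrative assumptions; the re-identification step that then selects the same physical target from the group is not shown.

```python
from dataclasses import dataclass

@dataclass
class PreselectedTrack:
    characteristic: str  # one of "approach_d1", "depart_d1", "approach_d2", "depart_d2"
    cross_time: int      # frame index at which the track passes the position
    x: float             # abscissa when passing the position
    y: float             # ordinate when passing the position

# Opposite-direction characteristics match each other.
MATCHING = {"approach_d1": "depart_d2", "depart_d1": "approach_d2",
            "approach_d2": "depart_d1", "depart_d2": "approach_d1"}

def first_target_group(first, tracks, x_range, y_threshold):
    """Candidate group for `first`: matching motion characteristic, same
    crossing time, abscissa within the preset range, and ordinate
    difference within the preset threshold."""
    lo, hi = x_range
    return [t for t in tracks
            if MATCHING[first.characteristic] == t.characteristic
            and t.cross_time == first.cross_time
            and lo <= t.x <= hi
            and abs(t.y - first.y) <= y_threshold]
```

A pedestrian re-identification model would then compare `first` against each member of the returned group and keep only the one that is the same physical target.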
The video concentration apparatus provided in the embodiment of the present application may be specifically configured to execute the above video concentration method; for the implementation principles and technical effects, reference may be made to the method embodiments, and details are not described herein again.
Fig. 4 is a schematic diagram of a panoramic video concentration apparatus according to an embodiment of the present application. As shown in fig. 4, an embodiment of the present application provides a video concentration apparatus, including:
a memory 410 for storing computer executable instructions.
a processor 420, configured to execute the computer-executable instructions stored in the memory to implement the video concentration method described above.
Optionally, the video concentration apparatus further comprises: and a transceiver 430 for enabling communication with other network devices or terminal devices.
The video concentration apparatus provided in the embodiment of the present application may be specifically configured to execute the above video concentration method; for the implementation principles and technical effects, reference may be made to the method embodiments, and details are not described herein again.
The embodiment of the present application further provides a computer-readable storage medium storing computer-executable instructions, where the computer-executable instructions, when executed by a processor, are configured to implement any one of the video concentration methods described above.
Embodiments of the present application further provide a computer program product, including computer-executable instructions, where the computer-executable instructions, when executed by a processor, implement any one of the video concentration methods described above.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is merely a logical functional division, and there may be other division manners in actual implementation. In addition, the shown or discussed mutual couplings or direct couplings or communication connections may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be completed by program instructions in combination with related hardware. The program may be stored in a computer-readable storage medium and, when executed by a processor, performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.