
CN110322471B - Method, device and equipment for concentrating panoramic video and storage medium - Google Patents

Method, device and equipment for concentrating panoramic video and storage medium

Info

Publication number
CN110322471B
CN110322471B (application CN201910648517.4A)
Authority
CN
China
Prior art keywords
motion
preselected
target
video
moving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910648517.4A
Other languages
Chinese (zh)
Other versions
CN110322471A (en)
Inventor
刘琼
华婉钰
杨铀
喻莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology
Priority to CN201910648517.4A
Publication of CN110322471A
Application granted
Publication of CN110322471B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30241 Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract


Figure 201910648517

Embodiments of the present application provide a method, apparatus, device, and storage medium for panoramic video condensation. The method includes: acquiring the preselected motion trajectory of each preselected moving target in a first video, where each preselected motion trajectory passes through the position corresponding to a dividing line and the first video is obtained by dividing a panoramic video along that dividing line; obtaining, according to the preselected motion trajectories, the motion characteristic of each preselected moving target as it passes through that position, and merging, according to the motion characteristics, the preselected motion trajectories that correspond to the same moving target to obtain merged motion trajectories; and obtaining the condensed video of the panoramic video according to the merged motion trajectories. The technical solution provided by the embodiments of the present application avoids tracking the motion trajectory of a single moving target as multiple motion trajectories of multiple moving targets during panoramic video condensation, and thus improves the accuracy of panoramic video condensation.


Description

Method, device and equipment for concentrating panoramic video and storage medium
Technical Field
The present application relates to the field of video surveillance technologies, and in particular, to a method, an apparatus, a device, and a storage medium for panoramic video condensation.
Background
With the rapid development of computer networks and digital video technologies, video surveillance based on digital networks is widely applied to security in public places, important facilities, and the like, for example in banking, electric power, transportation, security inspection, and military settings. As the scope of security monitoring expands, the number of monitoring devices grows at a remarkable pace and produces massive volumes of surveillance video, which is characterized by large storage volumes, long storage cycles, and heavy storage-space consumption. The traditional approach of searching for clues by manually screening video consumes large amounts of manpower, material resources, and time, and is extremely inefficient. In a video surveillance system, video condensation technology can therefore greatly reduce the storage space required for massive video, improve the efficiency of analyzing massive surveillance video, and fully exploit its value.
Compared with ordinary video, panoramic video shot by a panoramic camera has a wider viewing angle and can monitor a larger scene globally. However, trajectories in panoramic video are discontinuous across the different lenses of the panoramic camera. If existing video condensation techniques are applied directly, the motion trajectory of a single moving target is tracked as multiple trajectories of multiple moving targets, so the condensed video does not match the original video and the accuracy of panoramic video condensation suffers.
Disclosure of Invention
The embodiments of the present application provide a method, an apparatus, a device, and a storage medium for panoramic video condensation, so as to improve the accuracy of panoramic video condensation.
In a first aspect, an embodiment of the present application provides a method for panoramic video condensation, including: obtaining the preselected motion trajectory of each preselected moving target in a first video, where each preselected motion trajectory passes through the position corresponding to a dividing line and the first video is obtained by dividing a panoramic video along the dividing line; obtaining, according to the preselected motion trajectories, the motion characteristic of each preselected moving target as it passes through the position, and merging, according to the motion characteristics, the preselected motion trajectories that correspond to the same moving target to obtain merged motion trajectories; and obtaining the condensed video of the panoramic video according to the merged motion trajectories.
With reference to the first aspect, in a possible implementation manner of the first aspect, the motion characteristic of each preselected moving target when passing through the position is any one of the following: gradually approaching the position along a first direction, gradually moving away from the position along the first direction, gradually approaching the position along a second direction, and gradually moving away from the position along the second direction; gradually approaching the position along the first direction matches gradually moving away from the position along the second direction, gradually moving away from the position along the first direction matches gradually approaching the position along the second direction, and the first direction is opposite to the second direction.
With reference to the first aspect, in a possible implementation manner of the first aspect, merging the preselected motion trajectories corresponding to the same moving target according to the motion characteristics includes: merging the preselected motion trajectories corresponding to the same moving target according to the motion characteristics, the time at which each preselected moving target passes through the position, and the coordinates of each preselected moving target.
With reference to the first aspect, in a possible implementation manner of the first aspect, merging the preselected motion trajectories corresponding to the same moving target according to the motion characteristics, the time at which each preselected moving target passes through the position, and the coordinates of each preselected moving target includes: for a first preselected moving target among the preselected moving targets, determining, as a first moving target group, the preselected moving targets whose motion characteristics match that of the first preselected moving target, that pass through the position at the same time, and whose coordinates match the first coordinate of the first preselected moving target; determining, with a pedestrian re-identification algorithm, the preselected moving target in the first moving target group that is the same moving target as the first preselected moving target; and merging the preselected motion trajectories corresponding to the same moving target.
With reference to the first aspect, in a possible implementation manner of the first aspect, the abscissa of a coordinate that matches the first coordinate lies within a preset range, and the absolute value of the difference between its ordinate and the ordinate of the first coordinate is less than or equal to a preset threshold.
In a second aspect, an embodiment of the present application provides an apparatus for panoramic video condensation, including: an acquisition module, configured to obtain the preselected motion trajectory of each preselected moving target in a first video, where each preselected motion trajectory passes through the position corresponding to a dividing line and the first video is obtained by dividing a panoramic video along the dividing line; and a merging module, configured to obtain, according to the preselected motion trajectories, the motion characteristic of each preselected moving target as it passes through the position, and to merge, according to the motion characteristics, the preselected motion trajectories corresponding to the same moving target to obtain merged motion trajectories; the acquisition module is further configured to obtain the condensed video of the panoramic video according to the merged motion trajectories.
With reference to the second aspect, in a possible implementation manner of the second aspect, the motion characteristic of each preselected moving target when passing through the position is any one of the following: gradually approaching the position along a first direction, gradually moving away from the position along the first direction, gradually approaching the position along a second direction, and gradually moving away from the position along the second direction; gradually approaching the position along the first direction matches gradually moving away from the position along the second direction, gradually moving away from the position along the first direction matches gradually approaching the position along the second direction, and the first direction is opposite to the second direction.
With reference to the second aspect, in a possible implementation manner of the second aspect, the merging module is specifically configured to merge the preselected motion trajectories corresponding to the same moving target according to the motion characteristics, the time at which each preselected moving target passes through the position, and the coordinates of each preselected moving target.
With reference to the second aspect, in a possible implementation manner of the second aspect, the merging module is specifically configured to: for a first preselected moving target among the preselected moving targets, determine, as a first moving target group, the preselected moving targets whose motion characteristics match that of the first preselected moving target, that pass through the position at the same time, and whose coordinates match the first coordinate of the first preselected moving target; determine, with a pedestrian re-identification algorithm, the preselected moving target in the first moving target group that is the same moving target as the first preselected moving target; and merge the preselected motion trajectories corresponding to the same moving target.
With reference to the second aspect, in a possible implementation manner of the second aspect, the abscissa of a coordinate that matches the first coordinate lies within a preset range, and the absolute value of the difference between its ordinate and the ordinate of the first coordinate is less than or equal to a preset threshold.
In a third aspect, an embodiment of the present application provides a device for panoramic video condensation, including a processor and a memory.
The memory is for storing computer-executable instructions.
The processor is configured to execute the computer-executable instructions stored in the memory, so that the processor performs the method for panoramic video condensation described in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer storage medium storing computer-executable instructions that, when executed by a processor, implement the method for panoramic video condensation of the first aspect.
In a fifth aspect, the present application provides a computer program product comprising computer-executable instructions that, when executed by a processor, implement the method for panoramic video condensation of the first aspect.
In the embodiments of the present application, the preselected motion trajectory of each preselected moving target in the first video (obtained by dividing the panoramic video along the dividing line) is acquired, where each preselected motion trajectory passes through the position corresponding to the dividing line; the motion characteristic of each preselected moving target as it passes through that position is derived from its preselected motion trajectory; the preselected motion trajectories corresponding to the same moving target are merged according to the motion characteristics to obtain merged motion trajectories; and video condensation is performed according to the merged motion trajectories. In this way, the motion trajectories of a single moving target that are separated by the dividing line in the panoramic video are merged, the problem of tracking the motion trajectory of one moving target as multiple trajectories of multiple moving targets is avoided, and the accuracy of panoramic video condensation is improved.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present application, and other drawings can be derived by those skilled in the art from these drawings without creative effort.
Fig. 1 is a schematic view of a video obtained by dividing a panoramic video by a dividing line according to an embodiment of the present application;
fig. 2 is a flowchart of a method for panoramic video condensation according to an embodiment of the present application;
fig. 3 is a schematic diagram of an apparatus for panoramic video condensation according to an embodiment of the present application;
fig. 4 is a schematic diagram of a device for panoramic video condensation according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Specifically, in the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple. The terms "first," "second," and the like in this application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The panoramic camera in the embodiments of the present application has a plurality of camera lenses installed around a fixed point; for example, six lenses may be installed around the fixed point, one every 60 degrees, so that the fields of view of all the lenses together form a panoramic field of view.
The video output by the panoramic camera is formed by fusing the content captured by the individual lenses; this content can be projected onto a sphere for display, and the video projected onto the sphere may be called a panoramic video. The panoramic video is divided along a dividing line to obtain a first video, i.e., the first video is the panoramic video unfolded onto a two-dimensional plane. In the first video, a moving target passing through the position corresponding to the dividing line is recognized as two different moving targets, so two motion trajectories are obtained for that target, which affects the accuracy of video condensation.
The panoramic camera related in the present application may be a common fixed focus panoramic camera, a 3D fixed focus panoramic camera, or a square zoom camera, which is not limited herein.
Fig. 1 is a schematic view of a video obtained by dividing a panoramic video along a dividing line. As shown in fig. 1, after the panoramic video is divided along the dividing line, one of the moving targets is split into two moving targets and accordingly has two motion trajectories.
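The splitting in fig. 1 can be reproduced numerically. When the panoramic video is unfolded onto a plane, a trajectory that crosses the dividing line shows a jump of almost the full frame width in its planar abscissa, so a continuity-based tracker records two separate tracks. A minimal illustrative sketch (the coordinates and threshold below are hypothetical, not values from this application):

```python
def split_at_seam(xs, jump_thresh):
    """Split a sequence of planar x coordinates into segments wherever
    consecutive points jump by more than jump_thresh (a seam crossing)."""
    segments = [[xs[0]]]
    for prev, cur in zip(xs, xs[1:]):
        if abs(cur - prev) > jump_thresh:
            segments.append([])  # discontinuity: start a new track segment
        segments[-1].append(cur)
    return segments

# A target moving smoothly through the seam of a 360-pixel-wide plane:
panoramic_x = [356, 358, 359, 1, 3, 5]      # wraps around at the dividing line
tracks = split_at_seam(panoramic_x, jump_thresh=180)
# The single moving target is recorded as two separate track segments.
```

This is exactly the failure mode the merging step of the present method is designed to undo.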
To address the insufficient accuracy of applying existing video condensation techniques directly to panoramic video, the present application provides a method for panoramic video condensation: the preselected motion trajectory of each preselected moving target in the first video is obtained, where each preselected motion trajectory passes through the position corresponding to the dividing line and the first video is obtained by dividing the panoramic video along the dividing line; the motion characteristic of each preselected moving target as it passes through the position corresponding to the dividing line is obtained from its preselected motion trajectory, and the preselected motion trajectories corresponding to the same moving target are merged according to the motion characteristics to obtain merged motion trajectories; and the condensed video of the panoramic video is obtained according to the merged motion trajectories.
The technical solution of the present application will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 2 is a flowchart of a method for panoramic video condensation according to an embodiment of the present application. The method is executed by a video condensation apparatus, which may be part or all of an intelligent device such as a computer, a tablet computer, a notebook computer, or a server. As shown in fig. 2, the method comprises the following steps:
s201: and acquiring a preselected motion track of each preselected motion target in the first video, wherein the preselected motion track of each preselected motion target passes through the position corresponding to the dividing line, and the first video is obtained after the panoramic video is divided by the dividing line.
S202: and according to the preselected motion trail of each preselected motion target, obtaining the motion characteristics of each preselected motion target when passing through the position corresponding to the segmentation line, and according to the motion characteristics, combining the preselected motion trails corresponding to the same motion target to obtain a combined motion trail.
S203: and acquiring the video after the panoramic video is concentrated according to the combined motion trail.
The following description is made with respect to step S201:
before the preselected motion trajectories of the preselected moving targets in the first video are obtained, the video condensation apparatus divides the panoramic video along a dividing line to obtain the first video, i.e., the panoramic video unfolded onto a two-dimensional plane.
After the first video is acquired, the video condensation apparatus obtains the background of the first video through background modeling; the background is the static image of the first video, without motion information. A prior-art algorithm can be used, for example Gaussian mixture background modeling. The Gaussian mixture algorithm performs a foreground/background two-classification of the pixels in each video frame, and the background model is obtained from the statistics of the pixel values at every point of the video image.
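For illustration, the per-pixel idea behind Gaussian mixture background modeling can be sketched with a single running Gaussian per pixel (mean and variance), which already yields the foreground/background two-classification described above; a full mixture model keeps several such Gaussians per pixel. The learning rate, deviation factor, and synthetic frames below are illustrative assumptions, not values from this application:

```python
import numpy as np

class SingleGaussianBackground:
    """Per-pixel running Gaussian: a simplified stand-in for
    mixture-of-Gaussians background modeling."""
    def __init__(self, first_frame, lr=0.05, k=2.5, init_var=30.0):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full(first_frame.shape, init_var)
        self.lr, self.k = lr, k

    def apply(self, frame):
        frame = frame.astype(np.float64)
        d = frame - self.mean
        fg = d * d > (self.k ** 2) * self.var  # pixel deviates too far: foreground
        # update the model toward the new frame (the background adapts slowly)
        self.mean += self.lr * d
        self.var += self.lr * (d * d - self.var)
        return fg                               # boolean foreground mask

# Static 8x8 background with a bright 2x2 moving object appearing in one frame:
bg = np.zeros((8, 8))
model = SingleGaussianBackground(bg)
for _ in range(20):                 # let the model settle on the background
    model.apply(bg)
frame = bg.copy()
frame[2:4, 2:4] = 200.0             # the moving object
mask = model.apply(frame)           # True exactly at the object's pixels
```

The statistics (`mean`, `var`) play the role of the pixel-value statistics mentioned above.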
Further, the video condensation apparatus performs foreground object detection on the first video to obtain the moving targets in the first video; the moving targets obtained in this embodiment may be called original moving targets. A tracking algorithm is then applied to the original moving targets to obtain the motion trajectory of each original moving target in the first video.
Specifically, foreground object detection can be implemented with a prior-art algorithm, for example the You Only Look Once (YOLO) algorithm. YOLO is an end-to-end, deep-learning-based real-time object detection algorithm that integrates target-region prediction and target-category prediction into a single neural network model, achieving fast object detection and recognition at high accuracy.
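Running YOLO itself requires a trained network; as a weights-free stand-in that illustrates what foreground object detection produces here (one bounding box per moving target), connected foreground pixels can be grouped on the mask produced by background modeling. The mask below is synthetic:

```python
import numpy as np
from collections import deque

def detect_objects(mask):
    """Group foreground pixels into 4-connected components and return one
    bounding box (x0, y0, x1, y1) per detected moving target."""
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                q = deque([(sy, sx)])       # BFS over one connected component
                seen[sy, sx] = True
                ys, xs = [sy], [sx]
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            ys.append(ny); xs.append(nx)
                            q.append((ny, nx))
                boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes

mask = np.zeros((10, 10), dtype=bool)
mask[1:3, 1:4] = True       # first moving target
mask[6:9, 6:8] = True       # second moving target
boxes = detect_objects(mask)   # two boxes, one per target
```

Each box corresponds to the circumscribed rectangle used in the tracking step below.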
Further, the motion trajectory of each original moving target can be obtained with prior-art algorithms, for example a Kalman filter combined with the Hungarian algorithm. Specifically, after a moving target is obtained, its feature information, including the motion centroid and the circumscribed rectangle, is computed and used to initialize a Kalman filter (for example, the state may be initialized to 0). The Kalman filter predicts the corresponding target region in the next frame; when the next frame arrives, the Hungarian algorithm performs target matching within the predicted region, yielding the motion trajectory of each original moving target.
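The prediction-then-matching loop can be sketched as follows, with a constant-velocity extrapolation standing in for the Kalman prediction step and greedy nearest-neighbour matching standing in for the Hungarian algorithm (both are deliberate simplifications of the algorithms named above; the gate value and coordinates are illustrative assumptions):

```python
def predict(track):
    """Constant-velocity prediction of a track's next centroid, i.e. the core
    of the Kalman prediction step without the covariance bookkeeping."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    return (2 * x1 - x0, 2 * y1 - y0)

def associate(tracks, detections, gate=10.0):
    """Greedily match each predicted track position to the nearest unclaimed
    detection (a simplification of Hungarian assignment) and extend tracks."""
    free = list(detections)
    for track in tracks:
        if not free:
            break
        px, py = predict(track)
        best = min(free, key=lambda d: (d[0] - px) ** 2 + (d[1] - py) ** 2)
        if (best[0] - px) ** 2 + (best[1] - py) ** 2 <= gate ** 2:
            track.append(best)      # detection accepted within the gate
            free.remove(best)
    return tracks

tracks = [[(0, 0), (2, 0)], [(0, 5), (0, 7)]]   # two targets, two past frames
detections = [(1, 9), (4, 0)]                   # centroids in the new frame
associate(tracks, detections)                   # each track gains one point
```

A production tracker would use the full Kalman update and an optimal assignment, but the data flow is the same.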
Further, after the motion trajectories of the original moving targets in the first video are obtained, the preselected motion trajectories of the preselected moving targets are selected from them: a preselected motion trajectory is one that passes through the position corresponding to the dividing line. In other words, any original moving target whose motion trajectory passes through the position corresponding to the dividing line is a preselected moving target.
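The selection of preselected moving targets can be sketched as a filter over the original trajectories: keep those that come near the position corresponding to the dividing line, i.e. near either vertical edge of the unfolded first video. The width, margin, and trajectories below are illustrative assumptions:

```python
def preselect(trajectories, width, margin):
    """Keep only the original trajectories that pass through the position
    corresponding to the dividing line, i.e. come within `margin` pixels of
    either vertical edge of the unfolded first video."""
    selected = {}
    for target_id, traj in trajectories.items():
        if any(x <= margin or x >= width - margin for x, _ in traj):
            selected[target_id] = traj
    return selected

trajectories = {
    "t1": [(358, 40), (359, 41)],   # ends at the right edge (the seam)
    "t2": [(100, 10), (120, 12)],   # never near the dividing line
    "t3": [(1, 70), (4, 71)],       # starts at the left edge (the seam)
}
preselected = preselect(trajectories, width=360, margin=5)   # keeps t1 and t3
```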
The following description is made with respect to step S202:
and after the preselected motion trajectories are obtained, the motion characteristic of each preselected moving target as it passes through the position corresponding to the dividing line is derived from its preselected motion trajectory. In one approach, the motion characteristic may be any of: gradually approaching the position corresponding to the dividing line along a first direction, gradually moving away from it along the first direction, gradually approaching it along a second direction, or gradually moving away from it along the second direction. Gradually approaching the position along the first direction matches gradually moving away from it along the second direction, gradually moving away from the position along the first direction matches gradually approaching it along the second direction, and the first direction is opposite to the second direction.
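The four motion characteristics and the stated pairing rule can be written down directly as a small predicate; the tuple encoding below is an illustrative assumption, not notation from this application:

```python
# The four motion characteristics a preselected moving target can have at the
# position corresponding to the dividing line.
APPROACH_D1 = ("approach", "first direction")
LEAVE_D1    = ("leave",    "first direction")
APPROACH_D2 = ("approach", "second direction")
LEAVE_D2    = ("leave",    "second direction")

def features_match(f1, f2):
    """Per the rule above: approaching along the first direction matches
    leaving along the second direction, and vice versa (the two directions
    are opposite), so both the trend and the direction must differ."""
    trend1, dir1 = f1
    trend2, dir2 = f2
    return trend1 != trend2 and dir1 != dir2
```

For example, `features_match(APPROACH_D1, LEAVE_D2)` holds, while two targets that both approach the position (in either direction) never match.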
After the motion characteristics of each preselected motion target passing through the position corresponding to the dividing line are obtained, the preselected motion tracks corresponding to the same motion target can be combined according to the motion characteristics of each preselected motion target passing through the position corresponding to the dividing line, and the combined motion tracks are obtained.
In one approach, merging the preselected motion trajectories corresponding to the same moving target according to the motion characteristics includes: merging the preselected motion trajectories corresponding to the same moving target according to the motion characteristics of each preselected moving target when passing through the position corresponding to the dividing line, the time at which each preselected moving target passes through that position, and the coordinates of each preselected moving target.
The preselected moving targets that correspond to the same moving target can be determined from these motion characteristics, crossing times, and coordinates, and their preselected motion trajectories then merged. This can be realized through the following steps (1) to (3):
(1) for a first preselected moving target among the preselected moving targets: determine, as a first moving target group, the preselected moving targets whose motion characteristics match that of the first preselected moving target, that pass through the position corresponding to the dividing line at the same time, and whose coordinates match the coordinates of the first preselected moving target.
Specifically, the preselected moving targets are grouped by their motion characteristic when passing through the position corresponding to the dividing line, yielding 4 groups;
for any first group among the 4 groups, the second group whose motion characteristic matches that of the first group is obtained. For any first preselected moving target in the first group, the second preselected moving targets are determined from the second group: those that pass through the position corresponding to the dividing line at the same time as the first preselected moving target and whose coordinates match the first coordinate of the first preselected moving target. There is at least one second preselected moving target, and the second preselected moving targets constitute the first moving target group. A coordinate matches the first coordinate when it satisfies the following conditions: its abscissa lies within a preset range, and the absolute value of the difference between its ordinate and the ordinate of the first coordinate is less than or equal to a preset threshold.
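The coordinate-matching condition just stated can be expressed as a small predicate; the concrete range and threshold values below are illustrative assumptions:

```python
def coords_match(candidate, first_coord, x_range, y_thresh):
    """Coordinate-matching condition: the candidate's abscissa must lie within
    the preset range, and the absolute difference of the ordinates must not
    exceed the preset threshold."""
    (cx, cy), (_, fy) = candidate, first_coord
    x_lo, x_hi = x_range
    return x_lo <= cx <= x_hi and abs(cy - fy) <= y_thresh

# A first preselected target exits near the right edge; a candidate at the
# left edge at almost the same height matches:
first = (358, 120)
left_edge_range = (0, 5)          # illustrative preset range near the left edge
matched = coords_match((2, 118), first, left_edge_range, y_thresh=5)
```

A candidate at the wrong height or away from the edge fails one of the two conditions and is rejected.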
It is understood that the preset range includes a first preset range and a second preset range. The abscissas in the first preset range are those of the points in a first area of the first video, the first area being adjacent to a first edge of the first video; the abscissas in the second preset range are those of the points in a second area of the first video, the second area being adjacent to a second edge of the first video. The first edge may be the right border of the first video, and the second edge may be the left border of the first video.
If the motion characteristic of the first preselected moving object at the position corresponding to the dividing line is gradually moving away from that position along the first direction, or gradually approaching it along the second direction, the abscissa of the second preselected moving object should lie within the first preset range; if the motion characteristic is gradually approaching that position along the first direction, or gradually moving away from it along the second direction, the abscissa of the second preselected moving object should lie within the second preset range.
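The coordinate-matching condition can be sketched as a single predicate. The edge margin and vertical threshold below are illustrative assumptions; the patent only specifies "a preset range" and "a preset threshold".

```python
# Minimal sketch of the coordinate-matching test. `edge_margin` and
# `y_threshold` are assumed example values, not values from the patent.

def matches_first_coordinate(candidate, first, motion_feature,
                             frame_width, edge_margin=40, y_threshold=15):
    """Return True if `candidate` (x, y) matches the first coordinate.

    motion_feature of the first preselected target is one of:
      'away_dir1', 'toward_dir2'  -> candidate must lie in the first
                                     preset range (near the right edge)
      'toward_dir1', 'away_dir2'  -> candidate must lie in the second
                                     preset range (near the left edge)
    """
    x, y = candidate
    _, fy = first

    # Select the preset abscissa range from the motion characteristic.
    if motion_feature in ('away_dir1', 'toward_dir2'):
        in_range = x >= frame_width - edge_margin   # near right border
    else:
        in_range = x <= edge_margin                 # near left border

    # The ordinate difference must not exceed the preset threshold.
    return in_range and abs(y - fy) <= y_threshold
```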
In one possible implementation, for a first preselected moving object, at least one first moving object whose motion characteristic matches that of the first preselected moving object at the position corresponding to the dividing line is acquired; from these, at least one second moving object whose abscissa lies within the preset range is acquired; and from these in turn, at least one third moving object is acquired whose ordinate differs from that of the first preselected moving object by no more than the preset threshold in absolute value. Each third moving object is a second preselected moving object, and the third moving objects constitute the first moving object group.
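The three-stage filtering just described can be sketched as a pipeline of list filters. The record field names (`feature`, `matching_feature`, `coord`) are assumptions made for this sketch.

```python
# Illustrative three-stage filter: match the motion feature, then the
# abscissa range, then the ordinate threshold. Targets are plain dicts.

def find_second_preselected(first_target, candidates,
                            x_range, y_threshold=15):
    # Stage 1: keep candidates whose feature matches that of the first
    # preselected target (the "first moving objects").
    matched = [c for c in candidates
               if c['feature'] == first_target['matching_feature']]
    # Stage 2: keep those whose abscissa lies in the preset range
    # (the "second moving objects").
    in_range = [c for c in matched
                if x_range[0] <= c['coord'][0] <= x_range[1]]
    # Stage 3: keep those whose ordinate is close enough to that of the
    # first preselected target (the "third moving objects").
    return [c for c in in_range
            if abs(c['coord'][1] - first_target['coord'][1]) <= y_threshold]
```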
With continued reference to fig. 1, the moving object 11 and the moving object 12, separated by the dividing line in the first video, in fact correspond to the same moving object, and both are preselected moving objects. As can be seen from fig. 1, the motion characteristic of the moving object 11 at the position corresponding to the dividing line is gradually moving away from that position along the first direction, while that of the moving object 12 is gradually approaching that position along the second direction; these two motion characteristics match. Thus, when the moving object 11 is the first preselected moving object, the moving object 12 is a second preselected moving object in the first moving object group.
(2) Using a pedestrian re-identification algorithm, determine the preselected moving object in the first moving object group that is the same moving object as the first preselected moving object.
Specifically, the first preselected moving object is compared with the moving objects of the first moving object group: features are extracted using a pedestrian re-identification technique, and the first moving object group is searched according to the motion characteristics of the preselected moving objects at the position corresponding to the dividing line, the times at which they pass through that position, and their coordinates. The preselected moving object with the highest similarity is the same moving object as the first preselected moving object.
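The similarity-retrieval step can be sketched as follows. The feature extractor here is a deliberate stand-in (a grayscale histogram); an actual pedestrian re-identification network would supply learned appearance embeddings instead, but the compare-and-pick-the-most-similar logic is the same.

```python
import numpy as np

# Extract an appearance feature for each target crop and keep the most
# similar candidate. The histogram feature is a stand-in assumption; a
# real re-identification model would provide learned embeddings.

def extract_feature(crop):
    """crop: H x W x 3 uint8 image patch of a moving target."""
    hist, _ = np.histogram(crop, bins=32, range=(0, 256))
    hist = hist.astype(np.float64)
    return hist / (np.linalg.norm(hist) + 1e-12)   # unit-normalize

def most_similar(query_crop, candidate_crops):
    """Return (index, similarity) of the best-matching candidate."""
    q = extract_feature(query_crop)
    sims = [float(np.dot(q, extract_feature(c))) for c in candidate_crops]
    best = int(np.argmax(sims))
    return best, sims[best]
```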
(3) Combine the preselected motion trajectories of the preselected moving targets corresponding to the same moving target.
Specifically, the preselected motion trajectories of the preselected motion targets of the same motion target are combined to obtain a combined motion trajectory.
The method for combining the motion trajectories can refer to the method in the prior art, and is not described herein again.
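As an illustration only (the patent defers to existing merging methods), one simple way to join two trajectory segments split at the seam is to unwrap the abscissas of the second segment by the frame width and order the samples by frame index. The unwrapping convention and sample format are assumptions of this sketch.

```python
# Minimal merge of two trajectory segments split by the dividing line.
# Each trajectory is a list of (frame, x, y) samples; the x-coordinates
# of the second segment are shifted by the frame width so the merged
# track is spatially continuous across the seam.

def merge_trajectories(traj_a, traj_b, frame_width):
    shifted_b = [(t, x + frame_width, y) for (t, x, y) in traj_b]
    return sorted(traj_a + shifted_b)   # order samples by frame index
```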
The following description is made with respect to step S203:
After the preselected motion trajectories of the preselected moving targets corresponding to the same moving target are combined to obtain a combined motion trajectory, the concentrated video of the panoramic video is acquired according to the combined motion trajectory. Acquiring the concentrated video of the panoramic video according to the combined motion trajectory specifically includes: acquiring the concentrated video according to the combined motion trajectory, a first motion trajectory, and a background, where the first motion trajectory is the motion trajectory of the moving targets among the original moving targets other than the preselected moving targets, and the background is obtained from the first video in step S201.
Specifically, after step S202, the complete motion trajectory of each moving target in the panoramic video has been obtained. At this point, an energy function algorithm may be used to complete the concentration of the panoramic video: the energy function algorithm arranges the motion trajectories densely in time on the premise that spatial positions are unchanged and collisions between the motion trajectories of moving targets are avoided as far as possible. The static background image obtained in step S201 is used as the background, the rearranged motion trajectories are placed at their original spatial positions, and the target areas are fused with the background image to obtain the concentrated panoramic video.
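The final composition, placing rearranged target regions at their original spatial positions on the static background, might be sketched as follows. The tube data format and field names are assumptions of this sketch, and the energy-function rearrangement itself (which assigns each tube its output start time) is not shown.

```python
import numpy as np

# Paste each moving target's patch back at its original spatial position
# on the static background, at the output time assigned to its tube.

def compose_condensed_frame(background, tubes, out_frame_idx):
    """background: H x W x 3 array.
    tubes: list of dicts with keys 'start' (assigned output frame
    offset) and 'samples' (list of (x, y, patch) per local frame)."""
    frame = background.copy()
    for tube in tubes:
        local = out_frame_idx - tube['start']
        if 0 <= local < len(tube['samples']):
            x, y, patch = tube['samples'][local]
            h, w = patch.shape[:2]
            frame[y:y + h, x:x + w] = patch   # original spatial position
    return frame
```

A production system would blend the patch boundary with the background (e.g. feathering or Poisson blending) rather than overwrite pixels directly.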
In this embodiment, the preselected motion trajectories of the preselected moving targets in the first video, obtained after the panoramic video is divided by the dividing line, pass through the position corresponding to the dividing line. From these preselected motion trajectories, the motion characteristics of the preselected moving targets when passing through that position are obtained, the preselected motion trajectories corresponding to the same moving target are combined into a combined motion trajectory, and video concentration is performed according to the combined motion trajectory. In this way, the motion trajectories of a single moving target that is split by the dividing line in the panoramic video are combined, which solves the problem of the motion trajectory of one moving target being tracked as several trajectories of several moving targets, and improves the accuracy of panoramic video concentration.
The apparatus for concentrating a panoramic video of the present application is described below with reference to specific embodiments.
Fig. 3 is a schematic diagram of an apparatus for concentrating a panoramic video according to an embodiment of the present disclosure. This embodiment provides a video concentration apparatus, which may be part or all of an intelligent device such as a computer, a tablet computer, or a notebook computer. As shown in fig. 3, the apparatus includes:
the obtaining module 310 is configured to obtain a preselected motion trajectory of each preselected motion object in a first video, where the preselected motion trajectory of each preselected motion object passes through a position corresponding to a partition line, and the first video is obtained by dividing a panoramic video by the partition line.
And a merging module 320, configured to obtain motion characteristics of each preselected motion target when passing through the position according to the preselected motion trajectory of each preselected motion target, and merge preselected motion trajectories corresponding to the same motion target according to the motion characteristics to obtain a merged motion trajectory.
The obtaining module 310 is further configured to obtain a concentrated video of the panoramic video according to the combined motion trajectory.
Optionally, as an embodiment, the motion characteristic of each preselected moving target passing through the position is any one of the following: gradually approaching the position along a first direction, gradually moving away from the position along the first direction, gradually approaching the position along a second direction, and gradually moving away from the position along the second direction; where gradually approaching the position along the first direction matches gradually moving away from the position along the second direction, gradually moving away from the position along the first direction matches gradually approaching the position along the second direction, and the first direction is opposite to the second direction.
Optionally, as an embodiment, the merging module 320 is specifically configured to: and combining the preselected motion tracks corresponding to the same motion target according to the motion characteristics, the time of each preselected motion target passing through the position and the coordinates of each preselected motion target.
Optionally, as an embodiment, the merging module 320 is specifically configured to: for one first preselected moving object of each preselected moving object: determining a preselected moving object matched with the moving characteristics of the first preselected moving object, having the same time passing through the position and the coordinate matched with the first coordinate of the first preselected moving object as a first moving object group; determining a preselected moving target which is the same moving target as the first preselected moving target in the first moving target group by adopting a pedestrian re-recognition algorithm; and combining the preselected motion tracks corresponding to the same motion target.
Optionally, as an embodiment, the abscissa of a coordinate matched with the first coordinate lies within a preset range, and the absolute value of the difference between its ordinate and the ordinate of the first coordinate is less than or equal to a preset threshold.
The video concentration apparatus provided in this embodiment of the present application may be specifically configured to execute the video concentration method described above; for the implementation principles and effects, reference may be made to the method embodiments, which are not described herein again.
Fig. 4 is a schematic diagram of a device for concentrating a panoramic video according to an embodiment of the present application. As shown in fig. 4, an embodiment of the present application provides a video concentration device including:
a memory 410 for storing computer executable instructions.
A processor 420 for executing the computer-executable instructions stored in the memory to implement the video concentration method described above.
Optionally, the video concentration apparatus further comprises: and a transceiver 430 for enabling communication with other network devices or terminal devices.
The video concentration device provided in this embodiment of the present application may be specifically configured to execute the video concentration method described above; for the implementation principles and effects, reference may be made to the method embodiments, which are not described herein again.
The embodiment of the present application further provides a computer-readable storage medium in which computer-executable instructions are stored; when executed by a processor, the computer-executable instructions implement any one of the video concentration methods described above.
Embodiments of the present application further provide a computer program product, which includes computer-executable instructions; the computer-executable instructions are executed by a processor to implement any one of the video concentration methods described above.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The computer program may be stored in a computer readable storage medium. The computer program, when executed by a processor, performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (7)

1. A method for concentrating a panoramic video, comprising: acquiring a preselected motion trajectory of each preselected moving target in a first video, the preselected motion trajectory of each preselected moving target passing through a position corresponding to a dividing line, the first video being obtained by dividing a panoramic video along the dividing line; acquiring, according to the preselected motion trajectory of each preselected moving target, a motion characteristic of each preselected moving target when passing through the position, and merging, according to the motion characteristics, the times at which the preselected moving targets pass through the position, and the coordinates of the preselected moving targets, the preselected motion trajectories corresponding to the same moving target, to obtain a merged motion trajectory; and acquiring a concentrated video of the panoramic video according to the merged motion trajectory; wherein the motion characteristic of each preselected moving target when passing through the position is any one of the following: gradually approaching the position along a first direction, gradually moving away from the position along the first direction, gradually approaching the position along a second direction, and gradually moving away from the position along the second direction; wherein gradually approaching the position along the first direction matches gradually moving away from the position along the second direction, gradually moving away from the position along the first direction matches gradually approaching the position along the second direction, and the first direction is opposite to the second direction.

2. The method according to claim 1, wherein merging, according to the motion characteristics, the times at which the preselected moving targets pass through the position, and the coordinates of the preselected moving targets, the preselected motion trajectories corresponding to the same moving target comprises: for a first preselected moving target among the preselected moving targets: determining, as a first moving target group, the preselected moving targets whose motion characteristics match that of the first preselected moving target, which pass through the position at the same time, and whose coordinates match first coordinates of the first preselected moving target; determining, by a pedestrian re-identification algorithm, the preselected moving target in the first moving target group that is the same moving target as the first preselected moving target; and merging the preselected motion trajectories corresponding to the same moving target.

3. The method according to claim 2, wherein the abscissa of a coordinate matching the first coordinates lies within a preset range, and the absolute value of the difference between its ordinate and the ordinate of the first coordinates is less than or equal to a preset threshold.

4. An apparatus for video concentration, comprising: an acquiring module, configured to acquire a preselected motion trajectory of each preselected moving target in a first video, the preselected motion trajectory of each preselected moving target passing through a position corresponding to a dividing line, the first video being obtained by dividing a panoramic video along the dividing line; and a merging module, configured to acquire, according to the preselected motion trajectory of each preselected moving target, a motion characteristic of each preselected moving target when passing through the position, and to merge, according to the motion characteristics, the times at which the preselected moving targets pass through the position, and the coordinates of the preselected moving targets, the preselected motion trajectories corresponding to the same moving target, to obtain a merged motion trajectory; wherein the motion characteristic of each preselected moving target when passing through the position is any one of the following: gradually approaching the position along a first direction, gradually moving away from the position along the first direction, gradually approaching the position along a second direction, and gradually moving away from the position along the second direction; wherein gradually approaching the position along the first direction matches gradually moving away from the position along the second direction, gradually moving away from the position along the first direction matches gradually approaching the position along the second direction, and the first direction is opposite to the second direction; the acquiring module being further configured to acquire a concentrated video of the panoramic video according to the merged motion trajectory.

5. The apparatus according to claim 4, wherein the merging module is specifically configured to: for a first preselected moving target among the preselected moving targets: determine, as a first moving target group, the preselected moving targets whose motion characteristics match that of the first preselected moving target, which pass through the position at the same time, and whose coordinates match first coordinates of the first preselected moving target; determine, by a pedestrian re-identification algorithm, the preselected moving target in the first moving target group that is the same moving target as the first preselected moving target; and merge the preselected motion trajectories corresponding to the same moving target.

6. A device for video concentration, comprising a processor and a memory, the memory being configured to store computer-executable instructions, so that the processor executes the computer-executable instructions to implement the method for concentrating a panoramic video according to any one of claims 1-3.

7. A computer storage medium, comprising computer-executable instructions for implementing the method for concentrating a panoramic video according to any one of claims 1-3.
CN201910648517.4A 2019-07-18 2019-07-18 Method, device and equipment for concentrating panoramic video and storage medium Active CN110322471B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910648517.4A CN110322471B (en) 2019-07-18 2019-07-18 Method, device and equipment for concentrating panoramic video and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910648517.4A CN110322471B (en) 2019-07-18 2019-07-18 Method, device and equipment for concentrating panoramic video and storage medium

Publications (2)

Publication Number Publication Date
CN110322471A CN110322471A (en) 2019-10-11
CN110322471B (en) 2021-02-19

Family

ID=68123960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910648517.4A Active CN110322471B (en) 2019-07-18 2019-07-18 Method, device and equipment for concentrating panoramic video and storage medium

Country Status (1)

Country Link
CN (1) CN110322471B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113689331B (en) * 2021-07-20 2023-06-23 中国铁路设计集团有限公司 A Method of Panoramic Image Stitching under Complex Background

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101751677A (en) * 2008-12-17 2010-06-23 中国科学院自动化研究所 Target continuous tracking method based on multi-camera
CN102930061A (en) * 2012-11-28 2013-02-13 安徽水天信息科技有限公司 Video abstraction method and system based on moving target detection

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102256065B (en) * 2011-07-25 2012-12-12 中国科学院自动化研究所 Automatic video condensing method based on video monitoring network
CN102708182B (en) * 2012-05-08 2014-07-02 浙江捷尚视觉科技有限公司 Rapid video concentration abstracting method
CN103686095B (en) * 2014-01-02 2017-05-17 中安消技术有限公司 Video concentration method and system
CN107770484A (en) * 2016-08-19 2018-03-06 杭州海康威视数字技术股份有限公司 A kind of video monitoring information generation method, device and video camera


Also Published As

Publication number Publication date
CN110322471A (en) 2019-10-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant