CN114139016A - Data processing method and system for intelligent cell - Google Patents
- Publication number
- CN114139016A (application number CN202111312691.5A)
- Authority
- CN
- China
- Prior art keywords
- monitoring
- video
- screened
- frames
- monitoring video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/73—Querying
- G06F16/735—Filtering based on additional data, e.g. user or group profiles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7837—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/16—Real estate
- G06Q50/163—Real estate management
Abstract
The invention provides a data processing method and system for an intelligent cell, and relates to the technical field of data processing. In the invention, a to-be-processed monitoring video sent by a monitoring terminal device is obtained, the to-be-processed monitoring video being captured by the monitoring terminal device at the entrance and exit position of a target cell; the to-be-processed monitoring video is processed to obtain at least one corresponding to-be-screened monitoring video, wherein the monitoring objects corresponding to any two to-be-screened monitoring video frames belonging to the same to-be-screened monitoring video are the same; and for each to-be-screened monitoring video in the at least one to-be-screened monitoring video, the multiple to-be-screened monitoring video frames included in it are screened to obtain the corresponding multiple target monitoring video frames, from which the corresponding target monitoring video is obtained. On this basis, the problem in the prior art of a poor screening effect on the monitoring video can be solved.
Description
Technical Field
The invention relates to the technical field of data processing, in particular to a data processing method and system for an intelligent cell.
Background
With the continuous development of computer and internet technology and the increasing demand for cell security, smart cells are in great demand. An important technical means in implementing a smart cell is monitoring, for example video monitoring realized through image acquisition. After a surveillance video is acquired, it can be screened for subsequent applications; however, in the prior art, the surveillance video is generally screened by directly removing duplicate or highly similar frames, which leads to a poor screening effect on the surveillance video.
Disclosure of Invention
In view of the above, the present invention provides a data processing method and system for an intelligent cell to solve the problem of poor screening effect of surveillance videos in the prior art.
In order to achieve the above purpose, the embodiment of the invention adopts the following technical scheme:
a data processing method of an intelligent cell is applied to a data processing server, the data processing server is in communication connection with monitoring terminal equipment, the monitoring terminal equipment is deployed at the entrance and exit position of a target cell, and the data processing method of the intelligent cell comprises the following steps:
acquiring a to-be-processed monitoring video sent by the monitoring terminal equipment, wherein the to-be-processed monitoring video is obtained by carrying out image acquisition on the entrance and exit position of the target cell based on the monitoring terminal equipment, and the to-be-processed monitoring video comprises a plurality of frames of to-be-processed monitoring video frames;
processing the to-be-processed monitoring video to obtain at least one to-be-screened monitoring video corresponding to the to-be-processed monitoring video, wherein each to-be-screened monitoring video in the at least one to-be-screened monitoring video comprises multiple frames of to-be-screened monitoring video frames, monitoring objects corresponding to any two to-be-screened monitoring video frames belonging to the same to-be-screened monitoring video are the same, and monitoring objects corresponding to any two to-be-screened monitoring video frames belonging to different two to-be-screened monitoring videos are different;
and for each to-be-screened monitoring video in the at least one to-be-screened monitoring video, screening multiple frames of to-be-screened monitoring video frames included in the to-be-screened monitoring video to obtain multiple frames of target monitoring video frames corresponding to the to-be-screened monitoring video, and obtaining the target monitoring video corresponding to the to-be-screened monitoring video based on the multiple frames of target monitoring video frames.
In some preferred embodiments, in the data processing method for an intelligent cell, the step of acquiring a to-be-processed monitoring video sent by the monitoring terminal device includes:
judging whether monitoring starting request information sent by target user terminal equipment is acquired or not, wherein the monitoring starting request information is generated by the target user terminal equipment in response to a monitoring starting request operation carried out by a corresponding target management user;
if the monitoring starting request information sent by the target user terminal equipment is acquired, determining the current time, acquiring corresponding current time information, and judging whether the current time information reaches the preset target time information or not;
if the current time information reaches the target time information, generating monitoring starting notification information, and sending the monitoring starting notification information to the monitoring terminal equipment, wherein the monitoring terminal equipment is used for acquiring images of the entrance and exit positions of the target cell after receiving the monitoring starting notification information to obtain a to-be-processed monitoring video;
and acquiring the to-be-processed monitoring video acquired and sent by the monitoring terminal equipment based on the monitoring starting notification information.
In some preferred embodiments, in the data processing method for an intelligent cell, the step of acquiring the to-be-processed monitoring video acquired and sent by the monitoring terminal device based on the monitoring start notification information includes:
acquiring video monitoring precision condition information which is configured in advance for a current time period, analyzing the video monitoring precision condition information, and obtaining unit time length information corresponding to the current time period, wherein the unit time length information is determined based on historical monitoring object flow information of a historical time period corresponding to the current time period at the entrance and exit position of a target cell, and a negative correlation relationship is formed between the unit time length information and the historical monitoring object flow information;
sending the unit duration information to the monitoring terminal equipment, wherein the monitoring terminal equipment is used for sending a currently acquired monitoring video to be processed to the data processing server based on the unit duration information, and the video length of the monitoring video to be processed is the unit duration information;
and acquiring the to-be-processed monitoring video acquired and sent by the monitoring terminal equipment based on the unit time length information and the monitoring starting notification information.
In some preferred embodiments, in the data processing method for an intelligent cell, the step of processing the to-be-processed monitoring video to obtain at least one to-be-screened monitoring video corresponding to the to-be-processed monitoring video includes:
determining whether a monitoring object exists in the monitored video frame to be processed or not aiming at each monitored video frame to be processed in the monitored video to be processed, and determining the monitored video frame to be processed as a first monitored video frame to be processed when the monitoring object exists in the monitored video frame to be processed;
for each frame of the first to-be-processed monitoring video frame, determining the number of monitoring objects existing in the first to-be-processed monitoring video frame; when the number of monitoring objects existing in the first to-be-processed monitoring video frame is equal to 1, determining the first to-be-processed monitoring video frame as a to-be-screened monitoring video frame; when the number of monitoring objects existing in the first to-be-processed monitoring video frame is greater than 1, splitting the first to-be-processed monitoring video frame based on the number of monitoring objects existing in it to obtain a corresponding number of sub-monitoring video frames, and determining each sub-monitoring video frame as a to-be-screened monitoring video frame, wherein, for each frame of the first to-be-processed monitoring video frame, the corresponding sub-monitoring video frames can be spliced back into the first to-be-processed monitoring video frame, each sub-monitoring video frame contains one monitoring object, and any two of the sub-monitoring video frames contain different monitoring objects;
identifying the monitored video frames to be screened based on the monitored objects existing in the monitored video frames to be screened aiming at each determined frame of the monitored video frames to be screened to obtain object identification information corresponding to the monitored video frames to be screened, wherein the object identification information corresponding to the monitored video frames to be screened with the same monitored object in any two frames is the same, and the object identification information corresponding to the monitored video frames to be screened with different monitored objects in any two frames is different;
and clustering the determined surveillance video frames to be screened based on whether the corresponding object identification information is the same or not to obtain at least one corresponding video frame set, and determining the surveillance video frames to be screened included in the video frame set as a surveillance video to be screened aiming at each video frame set in the at least one video frame set, wherein each video frame set in the at least one video frame set includes multiple frames of surveillance video frames to be screened.
In some preferred embodiments, in the data processing method for a smart cell, the step of clustering the determined surveillance video frames to be screened based on whether the corresponding object identification information is the same to obtain at least one corresponding video frame set, and determining, for each video frame set in the at least one video frame set, the surveillance video frames to be screened included in the video frame set as a surveillance video to be screened includes:
clustering the determined monitoring video frames to be screened based on whether the corresponding object identification information is the same or not to obtain at least one corresponding video frame set;
and aiming at each frame of video frame set in the at least one video frame set, determining video frame time sequence information of each frame of monitoring video frame to be screened, which is included in the video frame set, and sequencing each frame of monitoring video frame to be screened, which is included in the video frame set, based on the video frame time sequence information to form a corresponding video frame sequence so as to obtain the monitoring video to be screened, which corresponds to the video frame set.
In some preferred embodiments, in the data processing method for an intelligent cell, the step of, for each to-be-screened surveillance video in the at least one to-be-screened surveillance video, performing screening processing on multiple frames of to-be-screened surveillance video frames included in the to-be-screened surveillance video to obtain multiple frames of target surveillance video frames corresponding to the to-be-screened surveillance video, and obtaining a target surveillance video corresponding to the to-be-screened surveillance video based on the multiple frames of target surveillance video frames includes:
determining a plurality of frames of surveillance video to be screened with continuous video frame time sequences as candidate surveillance video frames corresponding to the surveillance video to be screened from a plurality of frames of surveillance video to be screened included in the surveillance video to be screened aiming at each surveillance video to be screened in the at least one surveillance video to be screened;
and aiming at each monitoring video to be screened in the at least one monitoring video to be screened, carrying out duplicate removal screening processing on the multi-frame candidate monitoring video frame corresponding to the monitoring video to be screened to obtain a target monitoring video corresponding to the monitoring video to be screened.
In some preferred embodiments, in the data processing method for an intelligent cell, the step of performing duplicate removal screening processing on the multiple candidate surveillance video frames corresponding to a surveillance video to be screened to obtain the target surveillance video corresponding to the surveillance video to be screened includes:
for each to-be-screened monitoring video in the at least one to-be-screened monitoring video, performing similarity calculation processing on every two adjacent candidate monitoring video frames in the multiple candidate monitoring video frames corresponding to the to-be-screened monitoring video to obtain video frame similarity between every two adjacent candidate monitoring video frames in the to-be-screened monitoring video;
for each monitoring video to be screened in the at least one monitoring video to be screened, determining the relative size relationship between the video frame similarity between every two adjacent candidate monitoring video frames in the monitoring video to be screened and a preset video frame similarity threshold value, and, whenever the video frame similarity between two adjacent candidate monitoring video frames is greater than the video frame similarity threshold value, screening out the candidate monitoring video frame of the two that is later in the video frame time sequence;
and for each monitoring video to be screened in the at least one monitoring video to be screened, obtaining the corresponding target monitoring video based on each monitoring video frame to be screened, other than the candidate monitoring video frames, included in the monitoring video to be screened, and each candidate monitoring video frame that has not been screened out.
The embodiment of the present invention further provides a data processing system of an intelligent cell, which is applied to a data processing server, wherein the data processing server is communicatively connected with a monitoring terminal device, the monitoring terminal device is deployed at an entrance and exit position of a target cell, and the data processing system of the intelligent cell includes:
the monitoring video acquisition module is used for acquiring a to-be-processed monitoring video sent by the monitoring terminal equipment, wherein the to-be-processed monitoring video is obtained by carrying out image acquisition on the entrance and exit position of the target cell based on the monitoring terminal equipment, and the to-be-processed monitoring video comprises a plurality of frames of to-be-processed monitoring video frames;
the monitoring video processing module is used for processing the to-be-processed monitoring video to obtain at least one to-be-screened monitoring video corresponding to the to-be-processed monitoring video, wherein each to-be-screened monitoring video in the at least one to-be-screened monitoring video comprises multiple frames of to-be-screened monitoring video frames, monitoring objects corresponding to any two to-be-screened monitoring video frames belonging to the same to-be-screened monitoring video are the same, and monitoring objects corresponding to any two to-be-screened monitoring video frames belonging to different two to-be-screened monitoring videos are different;
and the monitoring video screening module is used for screening multiple frames of monitoring video frames to be screened, which are included in the monitoring video to be screened, aiming at each monitoring video to be screened in the at least one monitoring video to be screened, so as to obtain multiple frames of target monitoring video frames corresponding to the monitoring video to be screened, and obtaining the target monitoring video corresponding to the monitoring video to be screened based on the multiple frames of target monitoring video frames.
In some preferred embodiments, in the data processing system of the smart cell, the surveillance video processing module is specifically configured to:
determining whether a monitoring object exists in the monitored video frame to be processed or not aiming at each monitored video frame to be processed in the monitored video to be processed, and determining the monitored video frame to be processed as a first monitored video frame to be processed when the monitoring object exists in the monitored video frame to be processed;
for each frame of the first to-be-processed monitoring video frame, determining the number of monitoring objects existing in the first to-be-processed monitoring video frame; when the number of monitoring objects existing in the first to-be-processed monitoring video frame is equal to 1, determining the first to-be-processed monitoring video frame as a to-be-screened monitoring video frame; when the number of monitoring objects existing in the first to-be-processed monitoring video frame is greater than 1, splitting the first to-be-processed monitoring video frame based on the number of monitoring objects existing in it to obtain a corresponding number of sub-monitoring video frames, and determining each sub-monitoring video frame as a to-be-screened monitoring video frame, wherein, for each frame of the first to-be-processed monitoring video frame, the corresponding sub-monitoring video frames can be spliced back into the first to-be-processed monitoring video frame, each sub-monitoring video frame contains one monitoring object, and any two of the sub-monitoring video frames contain different monitoring objects;
identifying the monitored video frames to be screened based on the monitored objects existing in the monitored video frames to be screened aiming at each determined frame of the monitored video frames to be screened to obtain object identification information corresponding to the monitored video frames to be screened, wherein the object identification information corresponding to the monitored video frames to be screened with the same monitored object in any two frames is the same, and the object identification information corresponding to the monitored video frames to be screened with different monitored objects in any two frames is different;
and clustering the determined surveillance video frames to be screened based on whether the corresponding object identification information is the same or not to obtain at least one corresponding video frame set, and determining the surveillance video frames to be screened included in the video frame set as a surveillance video to be screened aiming at each video frame set in the at least one video frame set, wherein each video frame set in the at least one video frame set includes multiple frames of surveillance video frames to be screened.
In some preferred embodiments, in the data processing system of the smart cell, the surveillance video screening module is specifically configured to:
for each to-be-screened monitoring video in the at least one to-be-screened monitoring video, performing similarity calculation processing on every two adjacent candidate monitoring video frames in the multiple candidate monitoring video frames corresponding to the to-be-screened monitoring video to obtain video frame similarity between every two adjacent candidate monitoring video frames in the to-be-screened monitoring video;
determining, for each monitoring video to be screened in the at least one monitoring video to be screened, the relative size relationship between the video frame similarity between every two adjacent candidate monitoring video frames in the monitoring video to be screened and a preset video frame similarity threshold value, and, whenever the video frame similarity between two adjacent candidate monitoring video frames is greater than the video frame similarity threshold value, screening out the candidate monitoring video frame of the two that is later in the video frame time sequence;
and for each monitoring video to be screened in the at least one monitoring video to be screened, obtaining the corresponding target monitoring video based on each monitoring video frame to be screened, other than the candidate monitoring video frames, included in the monitoring video to be screened, and each candidate monitoring video frame that has not been screened out.
According to the data processing method and system for the intelligent cell provided by the embodiments of the invention, after the to-be-processed monitoring video sent by the monitoring terminal device is obtained, the to-be-processed monitoring video is first processed to obtain at least one corresponding to-be-screened monitoring video, and then, for each to-be-screened monitoring video, the multiple to-be-screened monitoring video frames included in it are screened to obtain the corresponding multiple target monitoring video frames. Because the monitoring objects corresponding to any two to-be-screened monitoring video frames belonging to the same to-be-screened monitoring video are the same, and the monitoring objects corresponding to any two to-be-screened monitoring video frames belonging to two different to-be-screened monitoring videos are different, the screening processing performed on each to-be-screened monitoring video has a better screening effect, thereby solving the problem in the prior art of a poor screening effect on the monitoring video.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
Fig. 1 is a block diagram of a data processing server according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating steps included in a data processing method for an intelligent cell according to an embodiment of the present invention.
Fig. 3 is a schematic diagram illustrating modules included in a data processing system of an intelligent cell according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides a data processing server, which may include a memory and a processor.
In detail, the memory and the processor are electrically connected directly or indirectly to realize data transmission or interaction. For example, they may be electrically connected to each other via one or more communication buses or signal lines. The memory can have stored therein at least one software function (computer program) which can be present in the form of software or firmware. The processor may be configured to execute the executable computer program stored in the memory, so as to implement the data processing method for the smart cell according to the embodiment of the present invention (for details, refer to the following description).
Optionally, in an alternative implementation, the memory may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
Optionally, in an alternative implementation, the Processor may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), a System on Chip (SoC), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components.
Alternatively, in an alternative implementation, the structure shown in fig. 1 is only an illustration, and the data processing server may further include more or fewer components than those shown in fig. 1, or have a different configuration from that shown in fig. 1, for example, may include a communication unit for information interaction with other devices (e.g., a monitoring terminal device such as a camera).
With reference to fig. 2, an embodiment of the present invention further provides a data processing method for an intelligent cell, which can be applied to the data processing server. The method steps defined by the flow of the data processing method of the intelligent cell can be realized by the data processing server; the data processing server is in communication connection with a monitoring terminal device, and the monitoring terminal device is deployed at the entrance and exit position of the target cell. The specific process shown in fig. 2 is described in detail below.
Step S110, acquiring the to-be-processed monitoring video sent by the monitoring terminal equipment.
In the embodiment of the present invention, the data processing server may obtain the to-be-processed monitoring video sent by the monitoring terminal device. The to-be-processed monitoring video is obtained by carrying out image acquisition on the entrance and exit position of the target cell based on the monitoring terminal equipment, and the to-be-processed monitoring video comprises multiple frames of to-be-processed monitoring video frames.
Step S120, the surveillance video to be processed is processed, and at least one surveillance video to be screened corresponding to the surveillance video to be processed is obtained.
In the embodiment of the present invention, the data processing server may process the to-be-processed monitoring video to obtain at least one to-be-screened monitoring video corresponding to the to-be-processed monitoring video. Each monitoring video to be screened in the at least one monitoring video to be screened comprises a plurality of frames of monitoring video to be screened, monitoring objects corresponding to any two frames of monitoring video to be screened belonging to the same monitoring video to be screened are the same, and monitoring objects corresponding to any two frames of monitoring video to be screened belonging to different two monitoring videos are different.
Step S130, for each to-be-screened surveillance video in the at least one to-be-screened surveillance video, performing screening processing on multiple frames of to-be-screened surveillance video frames included in the to-be-screened surveillance video to obtain multiple frames of target surveillance video frames corresponding to the to-be-screened surveillance video, and obtaining a target surveillance video corresponding to the to-be-screened surveillance video based on the multiple frames of target surveillance video frames.
In the embodiment of the present invention, the data processing server may perform, for each to-be-screened surveillance video in the at least one to-be-screened surveillance video, screening processing on multiple frames of to-be-screened surveillance video frames included in the to-be-screened surveillance video to obtain multiple frames of target surveillance video frames corresponding to the to-be-screened surveillance video, and then may obtain the target surveillance video corresponding to the to-be-screened surveillance video based on the multiple frames of target surveillance video frames.
Based on the above steps (step S110, step S120 and step S130), after the to-be-processed monitoring video transmitted by the monitoring terminal device is acquired, the to-be-processed monitoring video is first processed to obtain at least one corresponding to-be-screened monitoring video, and then, for each to-be-screened monitoring video, the multiple to-be-screened monitoring video frames included in it are screened to obtain the corresponding multiple target monitoring video frames. Because the monitoring objects corresponding to any two to-be-screened monitoring video frames belonging to the same to-be-screened monitoring video are the same, and the monitoring objects corresponding to any two to-be-screened monitoring video frames belonging to two different to-be-screened monitoring videos are different, the screening processing performed on each to-be-screened monitoring video has a better screening effect, thereby solving the problem in the prior art of a poor screening effect on the monitoring video.
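For ease of understanding, a minimal Python sketch of how the three steps above could be organized is given below. The function and object names are illustrative assumptions introduced here and are not defined by the present embodiment.

```python
# Illustrative sketch only: names mirror steps S110-S130 described above and are
# assumptions, not components specified by the patent.

def process_surveillance_pipeline(data_server, terminal_device):
    # Step S110: obtain the to-be-processed monitoring video from the terminal device.
    pending_video = data_server.acquire_pending_video(terminal_device)

    # Step S120: split the pending video into per-object "to-be-screened" videos,
    # so every frame inside one to-be-screened video shows the same monitoring object.
    videos_to_screen = data_server.group_frames_by_monitored_object(pending_video)

    # Step S130: screen each to-be-screened video down to its target frames and
    # rebuild a target monitoring video from them.
    target_videos = []
    for video in videos_to_screen:
        target_frames = data_server.screen_frames(video)
        target_videos.append(data_server.build_target_video(target_frames))
    return target_videos
```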
Optionally, in an alternative implementation manner, the step of obtaining the to-be-processed monitoring video sent by the monitoring terminal device, that is, the step S110, may include:
firstly, judging whether monitoring starting request information sent by target user terminal equipment is acquired or not, wherein the monitoring starting request information is generated by the target user terminal equipment in response to a monitoring starting request operation carried out by the corresponding target management user (such as a property security guard);
secondly, if the monitoring starting request information sent by the target user terminal equipment is obtained, determining the current time, obtaining the corresponding current time information, and judging whether the current time information reaches the preset target time information or not;
then, if the current time information reaches the target time information, generating monitoring starting notification information, and sending the monitoring starting notification information to the monitoring terminal device, wherein the monitoring terminal device is used for acquiring images of the entrance and exit positions of the target cell after receiving the monitoring starting notification information to obtain a to-be-processed monitoring video;
and then, acquiring the to-be-processed monitoring video acquired and sent by the monitoring terminal device based on the monitoring starting notification information.
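As an illustration of the request-and-notification flow above, a hedged Python sketch follows; poll_start_request, send and the notification message format are hypothetical names introduced only for clarity.

```python
from datetime import datetime

# Hypothetical helper illustrating the flow above; the server API used here is an
# assumption for illustration, not an interface defined by this embodiment.

def handle_monitoring_start(server, terminal_device, target_time: datetime) -> bool:
    request = server.poll_start_request()   # monitoring starting request from the user terminal
    if request is None:
        return False                         # no request received yet
    if datetime.now() < target_time:         # preset target time not yet reached
        return False
    # Target time reached: notify the monitoring terminal device to start capturing.
    server.send(terminal_device, {"type": "monitoring_start_notification"})
    return True                              # terminal will now capture and send the pending video
```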
Optionally, on the basis of the foregoing implementation manner, in an alternative implementation manner, the step of acquiring the to-be-processed monitoring video acquired and sent by the monitoring terminal device based on the monitoring start notification information may include:
firstly, acquiring video monitoring precision condition information configured in advance for a current time period, and analyzing the video monitoring precision condition information to obtain unit time length information corresponding to the current time period, wherein the unit time length information is determined based on historical monitoring object flow information of a historical time period corresponding to the current time period at an entrance and exit position of a target cell, and the unit time length information and the historical monitoring object flow information have a negative correlation relationship (namely, the larger the historical monitoring object flow information is, the smaller the corresponding unit time length information is);
secondly, the unit duration information is sent to the monitoring terminal equipment, wherein the monitoring terminal equipment is used for sending the currently acquired to-be-processed monitoring video to the data processing server based on the unit duration information, and the video length of the to-be-processed monitoring video equals the unit duration (namely, the video captured within each unit duration forms one to-be-processed monitoring video);
and then, acquiring the to-be-processed monitoring video acquired and sent by the monitoring terminal equipment based on the unit time length information and the monitoring starting notification information.
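The negative correlation between the unit duration and the historical monitoring object flow could, for example, be realized as sketched below; the base and minimum values are illustrative assumptions, not values specified by this embodiment.

```python
# A possible mapping from historical traffic to unit duration, assuming the negative
# correlation described above; constants are placeholders for illustration only.

def unit_duration_seconds(historical_object_flow: float,
                          base_seconds: float = 600.0,
                          min_seconds: float = 30.0) -> float:
    """The busier the entrance historically was in this time period, the shorter the
    unit duration, so each to-be-processed video covers a finer time slice."""
    return max(min_seconds, base_seconds / (1.0 + historical_object_flow))
```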
Optionally, in an alternative implementation manner, the step of processing the to-be-processed monitoring video to obtain at least one to-be-screened monitoring video corresponding to the to-be-processed monitoring video, that is, the step S120 may include:
firstly, determining whether a monitoring object exists in each to-be-processed monitoring video frame in the to-be-processed monitoring video, and determining the to-be-processed monitoring video frame as a first to-be-processed monitoring video frame when the monitoring object exists in the to-be-processed monitoring video frame;
secondly, for each frame of the first to-be-processed monitoring video frame, determining the number of monitoring objects existing in the first to-be-processed monitoring video frame; when the number of monitoring objects existing in the first to-be-processed monitoring video frame is equal to 1, determining the first to-be-processed monitoring video frame as a to-be-screened monitoring video frame; when the number of monitoring objects existing in the first to-be-processed monitoring video frame is greater than 1, splitting the first to-be-processed monitoring video frame based on the number of monitoring objects existing in it to obtain a corresponding number of sub-monitoring video frames, and determining each sub-monitoring video frame as a to-be-screened monitoring video frame, wherein, for each frame of the first to-be-processed monitoring video frame, the corresponding sub-monitoring video frames can be spliced back into the first to-be-processed monitoring video frame, each sub-monitoring video frame contains one monitoring object, and any two of the sub-monitoring video frames contain different monitoring objects;
then, for each determined frame of the monitored video frame to be screened, identifying the monitored video frame to be screened based on the monitored object existing in the monitored video frame to be screened to obtain object identification information corresponding to the monitored video frame to be screened, wherein the object identification information corresponding to the monitored video frame to be screened with the same monitored object in any two frames is the same, and the object identification information corresponding to the monitored video frame to be screened with different monitored objects in any two frames is different;
and finally, clustering the determined surveillance video frames to be screened based on whether the corresponding object identification information is the same or not to obtain at least one corresponding video frame set, and determining the surveillance video frames to be screened included in the video frame set as a surveillance video to be screened aiming at each video frame set in the at least one video frame set, wherein each video frame set in the at least one video frame set includes multiple frames of surveillance video frames to be screened.
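A possible per-frame treatment matching the splitting rule above is sketched below, assuming an object detector that returns one bounding box per monitoring object; detect_objects and the crop method are placeholders introduced for illustration, not components defined by this embodiment.

```python
# Sketch of the per-frame handling above: keep single-object frames as-is, split
# multi-object frames into one sub-frame per detected object, drop empty frames.

def to_screening_frames(pending_frame, detect_objects):
    boxes = detect_objects(pending_frame)      # monitoring objects found in the frame (assumed detector)
    if len(boxes) == 0:
        return []                              # frame without a monitoring object is not kept
    if len(boxes) == 1:
        return [pending_frame]                 # exactly one object: the frame itself is a to-be-screened frame
    # More than one object: split into sub-frames, one object per sub-frame, so that
    # the sub-frames together cover (can be spliced back into) the original frame.
    return [pending_frame.crop(box) for box in boxes]
```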
Optionally, on the basis of the foregoing implementation, in an alternative implementation, the clustering, based on whether the corresponding object identification information is the same, the determined surveillance video frames to be screened to obtain at least one corresponding video frame set, and for each frame video frame set in the at least one video frame set, the step of determining the surveillance video frames to be screened included in the video frame set as a surveillance video to be screened may include:
firstly, clustering the determined monitoring video frames to be screened based on whether the corresponding object identification information is the same or not to obtain at least one corresponding video frame set;
secondly, determining the video frame time sequence information of each frame of to-be-screened monitoring video frame included in the video frame set aiming at each frame of video frame set in the at least one video frame set, and sequencing each frame of to-be-screened monitoring video frame included in the video frame set based on the video frame time sequence information to form a corresponding video frame sequence so as to obtain the to-be-screened monitoring video corresponding to the video frame set.
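The clustering by object identification information and the subsequent time ordering can be summarized by the following sketch, assuming each to-be-screened frame is represented as an (object_id, timestamp, frame) tuple; this representation is an assumption made for illustration.

```python
from collections import defaultdict

# Frames carrying the same object identification information are grouped together,
# and each group is then ordered by its video frame time sequence information.

def build_videos_to_screen(tagged_frames):
    groups = defaultdict(list)
    for object_id, timestamp, frame in tagged_frames:
        groups[object_id].append((timestamp, frame))
    # One to-be-screened monitoring video per monitored object, ordered by timestamp.
    return {obj_id: [f for _, f in sorted(pairs, key=lambda tf: tf[0])]
            for obj_id, pairs in groups.items()}
```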
Optionally, in an alternative implementation manner, the step of, for each to-be-screened surveillance video in the at least one to-be-screened surveillance video, performing screening processing on multiple frames of to-be-screened surveillance video frames included in the to-be-screened surveillance video to obtain multiple frames of target surveillance video frames corresponding to the to-be-screened surveillance video, and obtaining a target surveillance video corresponding to the to-be-screened surveillance video based on the multiple frames of target surveillance video frames, that is, the step S130 may include:
firstly, for each monitored video to be screened in the at least one monitored video to be screened, determining multiple monitored video frames to be screened with continuous video frame time sequences, from the multiple monitored video frames to be screened included in the monitored video to be screened, as candidate monitored video frames corresponding to the monitored video to be screened (namely, the multiple candidate monitored video frames form a continuous segment of the monitored video to be screened);
secondly, for each surveillance video to be screened in the at least one surveillance video to be screened, performing duplicate removal screening processing on multiple candidate surveillance video frames corresponding to the surveillance video to be screened to obtain a target surveillance video corresponding to the surveillance video to be screened.
Optionally, on the basis of the foregoing implementation manner, in an alternative implementation manner, the step of determining, for each to-be-screened monitoring video in the at least one to-be-screened monitoring video, multiple to-be-screened monitoring video frames with consecutive video frame time sequences from the multiple to-be-screened monitoring video frames included in the to-be-screened monitoring video as candidate monitoring video frames corresponding to the to-be-screened monitoring video may include (taking one of the to-be-screened monitoring videos as an example):
firstly, calculating a pixel discrete value of each monitored video frame to be screened in the monitored video to be screened, wherein the pixel discrete value is used for representing the dispersion of the pixel value of each pixel point in the corresponding monitored video frame to be screened;
secondly, determining the relative size relation between the pixel discrete value of each frame of the surveillance video frame to be screened and a preset pixel dispersion threshold value aiming at each frame of the surveillance video frame to be screened, and determining the surveillance video frame to be screened as a first surveillance video frame to be screened when the pixel discrete value of the surveillance video frame to be screened is greater than or equal to the pixel dispersion threshold value;
then, based on the first surveillance video frame to be screened, the surveillance video to be screened is segmented to obtain at least one corresponding surveillance video frame segment, wherein the at least one surveillance video frame segment does not include the first surveillance video frame to be screened;
then, counting the number of the surveillance video frames to be screened included in the surveillance video frame segment aiming at each surveillance video frame segment in the at least one surveillance video frame segment to obtain the number of the video frames corresponding to the surveillance video frame segment, and determining the relative size relation between the number of the video frames and a preset threshold value of the number of the video frames;
further, for each of the at least one monitored video frame segment, if the number of video frames corresponding to the monitored video frame segment is greater than or equal to the threshold value of the number of video frames, determining the monitored video frame segment as a first monitored video frame segment;
further, for each first surveillance video frame segment, determining whether a first surveillance video frame to be screened exists before the first surveillance video frame segment; when no first surveillance video frame to be screened exists before the first surveillance video frame segment, determining the first surveillance video frame segment as a second surveillance video frame segment; when a first surveillance video frame to be screened exists before the first surveillance video frame segment, calculating a similarity mean value between each surveillance video frame to be screened included in the first surveillance video frame segment and at least one first surveillance video frame to be screened before the first surveillance video frame segment, and when the similarity mean value is greater than or equal to a preset similarity mean value threshold, determining the first surveillance video frame segment as a second surveillance video frame segment;
and finally, respectively counting the number of the surveillance video frames to be screened, which are included in each second surveillance video frame segment, and taking the multiple frames of the surveillance video frames to be screened, which are included in the second surveillance video frame segment with the largest number, as candidate surveillance video frames corresponding to the surveillance video to be screened.
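The candidate-frame selection described above can be sketched as follows. The thresholds, the use of the standard deviation as the pixel discrete value, the choice of the nearest preceding first to-be-screened frame as the similarity reference, and the similarity function itself are all assumptions made for illustration.

```python
import numpy as np

# Hedged sketch of the candidate-frame selection; frames are assumed to be numpy
# arrays of pixel values, and `similarity` is an assumed pairwise comparison function.

def select_candidate_frames(frames, dispersion_thr, min_len, similarity, sim_mean_thr):
    # 1) Pixel discrete value: dispersion of pixel values within each frame.
    dispersion = [float(np.std(f)) for f in frames]
    is_boundary = [d >= dispersion_thr for d in dispersion]   # "first to-be-screened" frames

    # 2) Segment the video at boundary frames; segments do not include boundary frames.
    segments, current = [], []
    for idx in range(len(frames)):
        if is_boundary[idx]:
            if current:
                segments.append(current)
            current = []
        else:
            current.append(idx)
    if current:
        segments.append(current)

    # 3) Keep segments with enough frames, then check the similarity mean against
    #    the preceding boundary frame (nearest one used here as an assumption).
    second_segments = []
    for seg in segments:
        if len(seg) < min_len:
            continue
        preceding = [i for i in range(seg[0]) if is_boundary[i]]
        if not preceding:
            second_segments.append(seg)
            continue
        ref = frames[preceding[-1]]
        mean_sim = float(np.mean([similarity(frames[i], ref) for i in seg]))
        if mean_sim >= sim_mean_thr:
            second_segments.append(seg)

    # 4) The largest qualifying segment supplies the candidate monitoring video frames.
    if not second_segments:
        return []
    best = max(second_segments, key=len)
    return [frames[i] for i in best]
```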
Optionally, on the basis of the foregoing implementation manner, in an alternative implementation manner, the step of performing duplicate removal screening processing on the multiple candidate surveillance video frames corresponding to a surveillance video to be screened to obtain the target surveillance video corresponding to the surveillance video to be screened may include:
firstly, aiming at each monitoring video to be screened in the at least one monitoring video to be screened, carrying out similarity calculation processing on every two adjacent candidate monitoring video frames in a plurality of candidate monitoring video frames corresponding to the monitoring video to be screened to obtain the video frame similarity between every two adjacent candidate monitoring video frames in the monitoring video to be screened;
secondly, for each monitoring video to be screened in the at least one monitoring video to be screened, determining the relative size relationship between the video frame similarity between every two adjacent candidate monitoring video frames in the monitoring video to be screened and a preset video frame similarity threshold value, and, whenever the video frame similarity between two adjacent candidate monitoring video frames is greater than the video frame similarity threshold value, screening out the candidate monitoring video frame of the two that is later in the video frame time sequence;
then, for each to-be-screened monitoring video in the at least one to-be-screened monitoring video, obtaining the corresponding target monitoring video based on each to-be-screened monitoring video frame, other than the candidate monitoring video frames, included in the to-be-screened monitoring video, and each candidate monitoring video frame that has not been screened out.
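A minimal sketch of this duplicate-removal screening is given below, reading "adjacent" as adjacency in the original candidate order; the similarity function (for example a histogram or structural-similarity comparison) is assumed to be supplied and is not specified by this embodiment.

```python
# Whenever two adjacent candidate frames are more similar than the threshold,
# the later frame of the pair is screened out; earlier frames are kept.

def deduplicate_candidates(candidate_frames, similarity, sim_threshold):
    if not candidate_frames:
        return []
    kept = [candidate_frames[0]]
    for prev, curr in zip(candidate_frames, candidate_frames[1:]):
        if similarity(prev, curr) > sim_threshold:
            continue            # too similar to its predecessor: discard the later frame
        kept.append(curr)
    return kept
```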
With reference to fig. 3, an embodiment of the present invention further provides a data processing system of an intelligent cell, which can be applied to the data processing server. Wherein, the data processing system of the smart cell may include:
the monitoring video acquisition module is used for acquiring a to-be-processed monitoring video sent by the monitoring terminal equipment, wherein the to-be-processed monitoring video is obtained by carrying out image acquisition on the entrance and exit position of the target cell based on the monitoring terminal equipment, and the to-be-processed monitoring video comprises a plurality of frames of to-be-processed monitoring video frames;
the monitoring video processing module is used for processing the to-be-processed monitoring video to obtain at least one to-be-screened monitoring video corresponding to the to-be-processed monitoring video, wherein each to-be-screened monitoring video in the at least one to-be-screened monitoring video comprises multiple frames of to-be-screened monitoring video frames, monitoring objects corresponding to any two to-be-screened monitoring video frames belonging to the same to-be-screened monitoring video are the same, and monitoring objects corresponding to any two to-be-screened monitoring video frames belonging to different two to-be-screened monitoring videos are different;
and the monitoring video screening module is used for screening multiple frames of monitoring video frames to be screened, which are included in the monitoring video to be screened, aiming at each monitoring video to be screened in the at least one monitoring video to be screened, so as to obtain multiple frames of target monitoring video frames corresponding to the monitoring video to be screened, and obtaining the target monitoring video corresponding to the monitoring video to be screened based on the multiple frames of target monitoring video frames.
Optionally, on the basis of the foregoing implementation manner, in an alternative implementation manner, the surveillance video processing module may be specifically configured to:
determining whether a monitoring object exists in the monitored video frame to be processed or not aiming at each monitored video frame to be processed in the monitored video to be processed, and determining the monitored video frame to be processed as a first monitored video frame to be processed when the monitoring object exists in the monitored video frame to be processed;
for each frame of the first to-be-processed monitoring video frame, determining the number of monitoring objects existing in the first to-be-processed monitoring video frame; when the number of monitoring objects existing in the first to-be-processed monitoring video frame is equal to 1, determining the first to-be-processed monitoring video frame as a to-be-screened monitoring video frame; when the number of monitoring objects existing in the first to-be-processed monitoring video frame is greater than 1, splitting the first to-be-processed monitoring video frame based on the number of monitoring objects existing in it to obtain a corresponding number of sub-monitoring video frames, and determining each sub-monitoring video frame as a to-be-screened monitoring video frame, wherein, for each frame of the first to-be-processed monitoring video frame, the corresponding sub-monitoring video frames can be spliced back into the first to-be-processed monitoring video frame, each sub-monitoring video frame contains one monitoring object, and any two of the sub-monitoring video frames contain different monitoring objects;
identifying the monitored video frames to be screened based on the monitored objects existing in the monitored video frames to be screened aiming at each determined frame of the monitored video frames to be screened to obtain object identification information corresponding to the monitored video frames to be screened, wherein the object identification information corresponding to the monitored video frames to be screened with the same monitored object in any two frames is the same, and the object identification information corresponding to the monitored video frames to be screened with different monitored objects in any two frames is different;
and clustering the determined surveillance video frames to be screened based on whether the corresponding object identification information is the same or not to obtain at least one corresponding video frame set, and determining the surveillance video frames to be screened included in the video frame set as a surveillance video to be screened aiming at each video frame set in the at least one video frame set, wherein each video frame set in the at least one video frame set includes multiple frames of surveillance video frames to be screened.
Optionally, on the basis of the foregoing implementation, in an alternative implementation, the surveillance video screening module may be specifically configured to:
for each to-be-screened monitoring video in the at least one to-be-screened monitoring video, performing similarity calculation processing on every two adjacent candidate monitoring video frames in the multiple candidate monitoring video frames corresponding to the to-be-screened monitoring video to obtain video frame similarity between every two adjacent candidate monitoring video frames in the to-be-screened monitoring video;
determining, for each monitoring video to be screened in the at least one monitoring video to be screened, the relative size relationship between the video frame similarity between every two adjacent candidate monitoring video frames in the monitoring video to be screened and a preset video frame similarity threshold value, and, whenever the video frame similarity between two adjacent candidate monitoring video frames is greater than the video frame similarity threshold value, screening out the candidate monitoring video frame of the two that is later in the video frame time sequence;
and for each monitoring video to be screened in the at least one monitoring video to be screened, obtaining the corresponding target monitoring video based on each monitoring video frame to be screened, other than the candidate monitoring video frames, included in the monitoring video to be screened, and each candidate monitoring video frame that has not been screened out.
In summary, according to the data processing method and system for the intelligent cell provided by the present invention, after the to-be-processed monitoring video sent by the monitoring terminal device is acquired, the to-be-processed monitoring video is first processed to obtain at least one corresponding to-be-screened monitoring video, and then, for each to-be-screened monitoring video, the multiple to-be-screened monitoring video frames included in it are screened to obtain the corresponding multiple target monitoring video frames. Because the monitoring objects corresponding to any two to-be-screened monitoring video frames belonging to the same to-be-screened monitoring video are the same, and the monitoring objects corresponding to any two to-be-screened monitoring video frames belonging to two different to-be-screened monitoring videos are different, the screening processing performed on each to-be-screened monitoring video has a better screening effect, thereby solving the problem in the prior art of a poor screening effect on the monitoring video.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A data processing method of an intelligent cell is characterized in that the data processing method is applied to a data processing server, the data processing server is in communication connection with monitoring terminal equipment, the monitoring terminal equipment is deployed at the entrance and exit position of a target cell, and the data processing method of the intelligent cell comprises the following steps:
acquiring a to-be-processed monitoring video sent by the monitoring terminal equipment, wherein the to-be-processed monitoring video is obtained by carrying out image acquisition on the entrance and exit position of the target cell based on the monitoring terminal equipment, and the to-be-processed monitoring video comprises a plurality of frames of to-be-processed monitoring video frames;
processing the to-be-processed monitoring video to obtain at least one to-be-screened monitoring video corresponding to the to-be-processed monitoring video, wherein each to-be-screened monitoring video in the at least one to-be-screened monitoring video comprises multiple frames of to-be-screened monitoring video frames, monitoring objects corresponding to any two to-be-screened monitoring video frames belonging to the same to-be-screened monitoring video are the same, and monitoring objects corresponding to any two to-be-screened monitoring video frames belonging to different two to-be-screened monitoring videos are different;
and for each to-be-screened monitoring video in the at least one to-be-screened monitoring video, screening multiple frames of to-be-screened monitoring video frames included in the to-be-screened monitoring video to obtain multiple frames of target monitoring video frames corresponding to the to-be-screened monitoring video, and obtaining the target monitoring video corresponding to the to-be-screened monitoring video based on the multiple frames of target monitoring video frames.
2. The data processing method of the intelligent cell according to claim 1, wherein the step of obtaining the to-be-processed monitoring video transmitted by the monitoring terminal device comprises:
judging whether monitoring start request information sent by a target user terminal device has been acquired, wherein the monitoring start request information is generated by the target user terminal device in response to a monitoring start request operation performed by a corresponding target management user;
if the monitoring starting request information sent by the target user terminal equipment is acquired, determining the current time, acquiring corresponding current time information, and judging whether the current time information reaches the preset target time information or not;
if the current time information reaches the target time information, generating monitoring starting notification information, and sending the monitoring starting notification information to the monitoring terminal equipment, wherein the monitoring terminal equipment is used for acquiring images of the entrance and exit positions of the target cell after receiving the monitoring starting notification information to obtain a to-be-processed monitoring video;
and acquiring the to-be-processed monitoring video acquired and sent by the monitoring terminal equipment based on the monitoring starting notification information.
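As a rough illustration of the acquisition trigger in claim 2 (not part of the patent text), the sketch below checks the two conditions — a start request has been received and the current time has reached the preconfigured target time — before a start notification would be sent; the function name and the representation of the target time are assumptions.

```python
from datetime import datetime, time


def should_start_monitoring(request_received: bool,
                            now: datetime,
                            target_start: time) -> bool:
    """True once a monitoring start request has been received and the current
    time has reached the preconfigured target time information."""
    return request_received and now.time() >= target_start


# Example: a start request arrived and the configured target time is 07:00.
if should_start_monitoring(True, datetime(2021, 11, 8, 7, 30), time(7, 0)):
    print("send monitoring start notification to the monitoring terminal device")
```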
3. The data processing method of the intelligent cell according to claim 2, wherein the step of acquiring the to-be-processed monitoring video acquired and transmitted by the monitoring terminal device based on the monitoring start notification information includes:
acquiring video monitoring precision condition information which is configured in advance for a current time period, analyzing the video monitoring precision condition information, and obtaining unit time length information corresponding to the current time period, wherein the unit time length information is determined based on historical monitoring object flow information of a historical time period corresponding to the current time period at the entrance and exit position of a target cell, and a negative correlation relationship is formed between the unit time length information and the historical monitoring object flow information;
sending the unit duration information to the monitoring terminal equipment, wherein the monitoring terminal equipment is used for sending a currently acquired monitoring video to be processed to the data processing server based on the unit duration information, and the video length of the monitoring video to be processed is the unit duration information;
and acquiring the to-be-processed monitoring video acquired and sent by the monitoring terminal equipment based on the unit time length information and the monitoring starting notification information.
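Claim 3 only states that the unit duration is negatively correlated with the historical monitored-object flow of the matching historical time period; one possible mapping, chosen purely for illustration, is sketched below (the formula and the constants are assumptions).

```python
def unit_duration_seconds(historical_flow_per_min: float,
                          base_seconds: float = 600.0,
                          min_seconds: float = 30.0) -> float:
    """Map the historical monitored-object flow of the matching time period to
    a per-upload video length: the higher the flow, the shorter the unit
    duration (negative correlation)."""
    if historical_flow_per_min <= 0:
        return base_seconds
    return max(min_seconds, base_seconds / (1.0 + historical_flow_per_min))


# A quiet period yields a longer unit duration than a busy one.
print(unit_duration_seconds(0.5))   # 400.0 seconds
print(unit_duration_seconds(20.0))  # ~28.6, clamped to 30.0 seconds
```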
4. The data processing method of the intelligent cell according to claim 1, wherein the step of processing the surveillance video to be processed to obtain at least one surveillance video to be screened corresponding to the surveillance video to be processed comprises:
determining whether a monitoring object exists in the monitored video frame to be processed or not aiming at each monitored video frame to be processed in the monitored video to be processed, and determining the monitored video frame to be processed as a first monitored video frame to be processed when the monitoring object exists in the monitored video frame to be processed;
determining, for each frame of the first to-be-processed monitoring video frame, the number of monitoring objects present in the first to-be-processed monitoring video frame; when the number of monitoring objects present in the first to-be-processed monitoring video frame is equal to 1, determining the first to-be-processed monitoring video frame as a to-be-screened monitoring video frame; when the number of monitoring objects present in the first to-be-processed monitoring video frame is greater than 1, splitting the first to-be-processed monitoring video frame based on that number to obtain a corresponding number of sub-monitoring video frames, and determining each sub-monitoring video frame as a to-be-screened monitoring video frame, wherein, for each frame of the first to-be-processed monitoring video frame, the corresponding sub-monitoring video frames, when spliced together, form that first to-be-processed monitoring video frame, each of the corresponding sub-monitoring video frames contains one monitoring object, and any two of the corresponding sub-monitoring video frames contain different monitoring objects;
identifying, for each determined frame of the monitoring video frames to be screened, the monitoring video frame to be screened based on the monitoring object present in it, to obtain object identification information corresponding to that monitoring video frame to be screened, wherein any two monitoring video frames to be screened containing the same monitoring object have the same object identification information, and any two monitoring video frames to be screened containing different monitoring objects have different object identification information;
and clustering the determined monitoring video frames to be screened based on whether their corresponding object identification information is the same, to obtain at least one corresponding video frame set, and, for each video frame set in the at least one video frame set, determining the monitoring video frames to be screened included in that video frame set as one monitoring video to be screened, wherein each video frame set in the at least one video frame set includes multiple frames of monitoring video frames to be screened.
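The frame-splitting step in claim 4 could, for example, be realised by cropping one sub-frame per detected monitoring object; the bounding-box representation used in the sketch below is an assumption, since the claim does not say how the split is performed.

```python
import numpy as np
from typing import List, Tuple

# (top, left, bottom, right) pixel coordinates; a hypothetical detector output.
BoundingBox = Tuple[int, int, int, int]


def split_frame(frame: np.ndarray, detections: List[BoundingBox]) -> List[np.ndarray]:
    """Return the frame itself when it holds at most one monitoring object,
    otherwise one cropped sub-frame per detected monitoring object."""
    if len(detections) <= 1:
        return [frame]
    return [frame[top:bottom, left:right].copy()
            for top, left, bottom, right in detections]


# Example: a 100x200 frame with two detected objects.
frame = np.zeros((100, 200), dtype=np.uint8)
subs = split_frame(frame, [(10, 20, 60, 80), (30, 120, 90, 190)])
print([s.shape for s in subs])  # [(50, 60), (60, 70)]
```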
5. The data processing method of the smart cell as claimed in claim 4, wherein the step of clustering the determined surveillance video frames to be filtered based on whether the corresponding object identification information is the same or not to obtain at least one corresponding video frame set, and for each video frame set in the at least one video frame set, determining the surveillance video frames to be filtered included in the video frame set as a surveillance video to be filtered comprises:
clustering the determined monitoring video frames to be screened based on whether the corresponding object identification information is the same or not to obtain at least one corresponding video frame set;
and aiming at each frame of video frame set in the at least one video frame set, determining video frame time sequence information of each frame of monitoring video frame to be screened, which is included in the video frame set, and sequencing each frame of monitoring video frame to be screened, which is included in the video frame set, based on the video frame time sequence information to form a corresponding video frame sequence so as to obtain the monitoring video to be screened, which corresponds to the video frame set.
6. The method for processing data of an intelligent cell according to any one of claims 1 to 5, wherein the step of, for each of the at least one monitored video to be screened, screening multiple frames of monitored video frames to be screened included in the monitored video to be screened to obtain multiple target monitored video frames corresponding to the monitored video to be screened, and obtaining a target monitored video corresponding to the monitored video to be screened based on the multiple frames of target monitored video frames comprises:
determining, for each monitoring video to be screened in the at least one monitoring video to be screened, from the multiple frames of monitoring video frames to be screened included in the monitoring video to be screened, the multiple frames of monitoring video frames to be screened whose video frame time sequences are continuous as the candidate monitoring video frames corresponding to the monitoring video to be screened;
and aiming at each monitoring video to be screened in the at least one monitoring video to be screened, carrying out duplicate removal screening processing on the multi-frame candidate monitoring video frame corresponding to the monitoring video to be screened to obtain a target monitoring video corresponding to the monitoring video to be screened.
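Claim 6 selects, as candidate frames, the monitoring video frames to be screened whose video frame time sequences are continuous; one reading of this, sketched below, keeps only frames whose sequence numbers form consecutive runs of at least two frames, which is an interpretation rather than something the claim states explicitly.

```python
from typing import List


def candidate_indices(frame_indices: List[int]) -> List[int]:
    """Keep the frames whose time-sequence numbers form consecutive runs of
    length >= 2; isolated frames are not treated as candidate frames."""
    ordered = sorted(frame_indices)
    candidates: List[int] = []
    run: List[int] = ordered[:1]
    for idx in ordered[1:]:
        if idx == run[-1] + 1:
            run.append(idx)
        else:
            if len(run) >= 2:
                candidates.extend(run)
            run = [idx]
    if len(run) >= 2:
        candidates.extend(run)
    return candidates


print(candidate_indices([3, 4, 5, 9, 11, 12]))  # [3, 4, 5, 11, 12]
```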
7. The data processing method of the intelligent cell according to claim 6, wherein the step of, for each monitoring video to be screened in the at least one monitoring video to be screened, performing de-duplication screening processing on the multiple frames of candidate monitoring video frames corresponding to the monitoring video to be screened to obtain the target monitoring video corresponding to the monitoring video to be screened comprises:
for each monitoring video to be screened in the at least one monitoring video to be screened, performing similarity calculation processing on every two adjacent candidate monitoring video frames in the multiple frames of candidate monitoring video frames corresponding to the monitoring video to be screened, to obtain the video frame similarity between every two adjacent candidate monitoring video frames in the monitoring video to be screened;
for each monitoring video to be screened in the at least one monitoring video to be screened, comparing the video frame similarity between every two adjacent candidate monitoring video frames in the monitoring video to be screened with a preset video frame similarity threshold, and, whenever the video frame similarity between two adjacent candidate monitoring video frames is greater than the video frame similarity threshold, screening out the candidate monitoring video frame of that pair that is later in the video frame time sequence;
and for each monitoring video to be screened in the at least one monitoring video to be screened, obtaining the corresponding target monitoring video based on the monitoring video frames to be screened included in the monitoring video to be screened that are not candidate monitoring video frames and the candidate monitoring video frames that were not screened out.
8. A data processing system of an intelligent cell, characterized in that it is applied to a data processing server, the data processing server is in communication connection with a monitoring terminal device, the monitoring terminal device is deployed at the entrance and exit position of a target cell, and the data processing system of the intelligent cell comprises:
a monitoring video acquisition module, configured to acquire a to-be-processed monitoring video sent by the monitoring terminal device, wherein the to-be-processed monitoring video is obtained by the monitoring terminal device performing image acquisition on the entrance and exit position of the target cell, and the to-be-processed monitoring video comprises multiple frames of to-be-processed monitoring video frames;
the monitoring video processing module is used for processing the to-be-processed monitoring video to obtain at least one to-be-screened monitoring video corresponding to the to-be-processed monitoring video, wherein each to-be-screened monitoring video in the at least one to-be-screened monitoring video comprises multiple frames of to-be-screened monitoring video frames, monitoring objects corresponding to any two to-be-screened monitoring video frames belonging to the same to-be-screened monitoring video are the same, and monitoring objects corresponding to any two to-be-screened monitoring video frames belonging to different two to-be-screened monitoring videos are different;
and the monitoring video screening module is used for screening multiple frames of monitoring video frames to be screened, which are included in the monitoring video to be screened, aiming at each monitoring video to be screened in the at least one monitoring video to be screened, so as to obtain multiple frames of target monitoring video frames corresponding to the monitoring video to be screened, and obtaining the target monitoring video corresponding to the monitoring video to be screened based on the multiple frames of target monitoring video frames.
9. The data processing system of an intelligent cell of claim 8, wherein the surveillance video processing module is specifically configured to:
determining whether a monitoring object exists in the monitored video frame to be processed or not aiming at each monitored video frame to be processed in the monitored video to be processed, and determining the monitored video frame to be processed as a first monitored video frame to be processed when the monitoring object exists in the monitored video frame to be processed;
determining, for each frame of the first to-be-processed monitoring video frame, the number of monitoring objects present in the first to-be-processed monitoring video frame; when the number of monitoring objects present in the first to-be-processed monitoring video frame is equal to 1, determining the first to-be-processed monitoring video frame as a to-be-screened monitoring video frame; when the number of monitoring objects present in the first to-be-processed monitoring video frame is greater than 1, splitting the first to-be-processed monitoring video frame based on that number to obtain a corresponding number of sub-monitoring video frames, and determining each sub-monitoring video frame as a to-be-screened monitoring video frame, wherein, for each frame of the first to-be-processed monitoring video frame, the corresponding sub-monitoring video frames, when spliced together, form that first to-be-processed monitoring video frame, each of the corresponding sub-monitoring video frames contains one monitoring object, and any two of the corresponding sub-monitoring video frames contain different monitoring objects;
identifying, for each determined frame of the monitoring video frames to be screened, the monitoring video frame to be screened based on the monitoring object present in it, to obtain object identification information corresponding to that monitoring video frame to be screened, wherein any two monitoring video frames to be screened containing the same monitoring object have the same object identification information, and any two monitoring video frames to be screened containing different monitoring objects have different object identification information;
and clustering the determined monitoring video frames to be screened based on whether their corresponding object identification information is the same, to obtain at least one corresponding video frame set, and, for each video frame set in the at least one video frame set, determining the monitoring video frames to be screened included in that video frame set as one monitoring video to be screened, wherein each video frame set in the at least one video frame set includes multiple frames of monitoring video frames to be screened.
10. The data processing system of an intelligent cell of claim 8, wherein the surveillance video screening module is specifically configured to:
for each monitoring video to be screened in the at least one monitoring video to be screened, performing similarity calculation processing on every two adjacent candidate monitoring video frames in the multiple frames of candidate monitoring video frames corresponding to the monitoring video to be screened, to obtain the video frame similarity between every two adjacent candidate monitoring video frames in the monitoring video to be screened;
for each monitoring video to be screened in the at least one monitoring video to be screened, comparing the video frame similarity between every two adjacent candidate monitoring video frames in the monitoring video to be screened with a preset video frame similarity threshold, and, whenever the video frame similarity between two adjacent candidate monitoring video frames is greater than the video frame similarity threshold, screening out the candidate monitoring video frame of that pair that is later in the video frame time sequence;
and for each monitoring video to be screened in the at least one monitoring video to be screened, obtaining the corresponding target monitoring video based on the monitoring video frames to be screened included in the monitoring video to be screened that are not candidate monitoring video frames and the candidate monitoring video frames that were not screened out.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111312691.5A CN114139016A (en) | 2021-11-08 | 2021-11-08 | Data processing method and system for intelligent cell |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111312691.5A CN114139016A (en) | 2021-11-08 | 2021-11-08 | Data processing method and system for intelligent cell |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114139016A (en) | 2022-03-04 |
Family
ID=80393092
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111312691.5A CN114139016A (en), Withdrawn | Data processing method and system for intelligent cell | 2021-11-08 | 2021-11-08 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114139016A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114418555A (en) * | 2022-03-28 | 2022-04-29 | 四川高速公路建设开发集团有限公司 | Project information management method and system applied to intelligent construction |
CN114418555B (en) * | 2022-03-28 | 2022-06-07 | 四川高速公路建设开发集团有限公司 | Project information management method and system applied to intelligent construction |
CN114581856A (en) * | 2022-05-05 | 2022-06-03 | 广东邦盛北斗科技股份公司 | Agricultural unit motion state identification method and system based on Beidou system and cloud platform |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114140713A (en) | Image recognition system and image recognition method | |
CN114581856B (en) | Agricultural unit motion state identification method and system based on Beidou system and cloud platform | |
CN115018840B (en) | Method, system and device for detecting cracks of precision casting | |
CN114140712A (en) | Automatic image recognition and distribution system and method | |
CN114139016A (en) | Data processing method and system for intelligent cell | |
CN114155459A (en) | Smart city monitoring method and system based on data analysis | |
CN114697618A (en) | Building control method and system based on mobile terminal | |
CN114140710A (en) | Monitoring data transmission method and system based on data processing | |
CN114189535A (en) | Service request method and system based on smart city data | |
CN113949881A (en) | Service processing method and system based on smart city data | |
CN113902412A (en) | Environment monitoring method based on data processing | |
CN113902993A (en) | Environmental state analysis method and system based on environmental monitoring | |
CN115620243B (en) | Pollution source monitoring method and system based on artificial intelligence and cloud platform | |
CN115457467A (en) | Building quality hidden danger positioning method and system based on data mining | |
CN115330140A (en) | Building risk prediction method based on data mining and prediction system thereof | |
CN114139017A (en) | Safety protection method and system for intelligent cell | |
CN115375886A (en) | Data acquisition method and system based on cloud computing service | |
CN114095734A (en) | User data compression method and system based on data processing | |
CN114720812A (en) | Method and system for processing and positioning faults of power distribution network | |
CN113780126A (en) | Security protection method and device based on RFID (radio frequency identification) | |
CN114418555B (en) | Project information management method and system applied to intelligent construction | |
CN115471767A (en) | Building quality prediction method and system based on data mining | |
CN114156495B (en) | Laminated battery assembly processing method and system based on big data | |
CN115294513A (en) | Security monitoring method and system | |
CN114827538B (en) | Construction progress monitoring method and system for intelligent building site |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20220304 |