
CN114139017A - Safety protection method and system for intelligent cell - Google Patents

Safety protection method and system for intelligent cell

Info

Publication number
CN114139017A
CN114139017A (application CN202111312692.XA)
Authority
CN
China
Prior art keywords
target
video
surveillance video
monitoring
target surveillance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111312692.XA
Other languages
Chinese (zh)
Inventor
钟丹
宋早虎
孙家平
秦永部
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202111312692.XA priority Critical patent/CN114139017A/en
Publication of CN114139017A publication Critical patent/CN114139017A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7837Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/735Filtering based on additional data, e.g. user or group profiles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Alarm Systems (AREA)

Abstract

The invention provides a security protection method and system for an intelligent cell, and relates to the technical field of data processing. In the invention, an obtained to-be-processed surveillance video is screened to obtain at least one corresponding target surveillance video, wherein each target surveillance video comprises multiple frames of target surveillance video frames, and any two target surveillance video frames belonging to the same target surveillance video correspond to the same monitored object. The target surveillance video frames included in each target surveillance video are analyzed to obtain an initial video analysis result corresponding to that target surveillance video, and the initial video analysis result corresponding to each of the at least one target surveillance video is then corrected to obtain the corresponding target video analysis result. On this basis, the poor reliability of video surveillance in the prior art can be remedied.

Description

Safety protection method and system for intelligent cell
Technical Field
The invention relates to the technical field of data processing, and in particular to a security protection method and system for an intelligent cell (a smart residential community).
Background
With the continuous development of computer and internet technology and the growing demand for community security, smart cells are being widely deployed. An important technical means in implementing a smart cell is monitoring, for example acquiring images to realize video surveillance. After a surveillance video is acquired, it may be screened for the convenience of subsequent applications, and the screened surveillance video can then be analyzed to obtain a result indicating whether the behavior of each monitored object meets preset conditions. In the prior art, however, the behavior of each monitored object is generally analyzed in isolation to obtain its own result, so the reliability of video surveillance (of the analysis results) may be poor.
Disclosure of Invention
In view of the above, the present invention provides a security protection method and system for an intelligent cell, so as to improve upon the poor reliability of video surveillance in the prior art.
In order to achieve the above purpose, the embodiment of the invention adopts the following technical scheme:
a safety precaution method of an intelligent cell is applied to a data processing server, the data processing server is in communication connection with monitoring terminal equipment, the monitoring terminal equipment is deployed at the entrance and exit position of a target cell, and the safety precaution method of the intelligent cell comprises the following steps:
after a to-be-processed monitoring video sent by the monitoring terminal equipment is obtained, screening the to-be-processed monitoring video to obtain at least one target monitoring video corresponding to the to-be-processed monitoring video, wherein the to-be-processed monitoring video is obtained by carrying out image acquisition on the entrance and exit position of a target cell based on the monitoring terminal equipment, the to-be-processed monitoring video comprises multiple frames of to-be-processed monitoring video frames, each target monitoring video in the at least one target monitoring video comprises multiple frames of target monitoring video frames, monitoring objects corresponding to any two frames of target monitoring video frames belonging to the same target monitoring video are the same, and monitoring objects corresponding to any two frames of target monitoring video frames belonging to different two target monitoring videos are different;
analyzing a target monitoring video frame included in the target monitoring video aiming at each target monitoring video in the at least one target monitoring video to obtain an initial video analysis result corresponding to the target monitoring video, wherein the initial video analysis result is used for representing whether a behavior which does not meet a preset condition exists at an entrance and exit position of a corresponding monitoring object in the target cell or not;
and correcting the initial video analysis result corresponding to the target monitoring video aiming at each target monitoring video in the at least one target monitoring video to obtain a target video analysis result corresponding to the target monitoring video, wherein the target video analysis result is used for representing whether behaviors which do not meet preset conditions exist at the entrance and exit positions of the corresponding monitoring object in the target cell.
In some preferred embodiments, in the safety precaution method for an intelligent cell, the analyzing, for each target surveillance video of the at least one target surveillance video, a target surveillance video frame included in the target surveillance video to obtain an initial video analysis result corresponding to the target surveillance video includes:
for each target surveillance video in the at least one target surveillance video, performing action recognition processing on each frame of target surveillance video frame included in the target surveillance video to obtain an action recognition result corresponding to each frame of target surveillance video frame included in the target surveillance video;
and for each target surveillance video in the at least one target surveillance video, performing result fusion processing based on the action recognition result corresponding to each frame of target surveillance video included in the target surveillance video to obtain an initial video analysis result corresponding to the target surveillance video.
In some preferred embodiments, in the safety precaution method for an intelligent cell, the step of performing, for each target surveillance video of the at least one target surveillance video, action recognition processing on each target surveillance video frame included in the target surveillance video to obtain an action recognition result corresponding to each target surveillance video frame included in the target surveillance video includes:
determining video frame acquisition frequency information of the to-be-processed monitoring video acquired by the monitoring terminal equipment, and determining target sampling information based on the video frame acquisition frequency information, wherein the target sampling information and the video frame acquisition frequency information have positive correlation;
for each target monitoring video in the at least one target monitoring video, sampling target monitoring video frames included in the target monitoring video based on the target sampling information to obtain multi-frame sampling monitoring video frames corresponding to the target monitoring video;
and aiming at each target surveillance video in the at least one target surveillance video, performing action identification processing on each frame of sampling surveillance video frame included in the target surveillance video to obtain an action identification result corresponding to each frame of sampling surveillance video frame corresponding to the target surveillance video.
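The sampling step above can be sketched in code. The patent states only that the target sampling information is positively correlated with the video frame acquisition frequency; the linear mapping, the base values, and the function names below are illustrative assumptions, not part of the disclosure:

```python
def target_sampling_count(capture_fps, base_fps=10.0, base_count=5):
    """Derive how many frames to sample per target surveillance video.

    Only a positive correlation with the capture frequency is required by
    the method; a linear mapping with assumed base values is used here.
    """
    return max(1, round(base_count * capture_fps / base_fps))

def sample_frames(frames, capture_fps):
    """Uniformly sample frames from one target surveillance video."""
    if not frames:
        return []
    n = min(target_sampling_count(capture_fps), len(frames))
    step = len(frames) / n
    # Take evenly spaced frames so the whole video is covered.
    return [frames[int(i * step)] for i in range(n)]
```

Action recognition is then applied only to the sampled frames, which reduces the per-video processing cost at higher capture frequencies.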
In some preferred embodiments, in the safety precaution method for an intelligent cell, the step of performing, for each target surveillance video of the at least one target surveillance video, result fusion processing based on the action recognition result corresponding to each frame of target surveillance video frame included in the target surveillance video to obtain the initial video analysis result corresponding to the target surveillance video includes:
for each target surveillance video in the at least one target surveillance video, if an action recognition result corresponding to a target surveillance video frame corresponding to the target surveillance video is a first action recognition result, determining the target surveillance video frame as a first type of target surveillance video frame, and if an action recognition result corresponding to a target surveillance video frame corresponding to the target surveillance video is a second action recognition result, determining the target surveillance video frame as a second type of target surveillance video frame, wherein the first action recognition result is used for representing that a behavior which does not meet a preset condition exists at an entrance and exit position of a target cell in a corresponding monitored object, and the second action recognition result is used for representing that a behavior which does not meet the preset condition does not exist at the entrance and exit position of the target cell in the corresponding monitored object;
for each target surveillance video in the at least one target surveillance video, if the target surveillance video includes at least one frame of the first type target surveillance video frame, determining an initial video analysis result corresponding to the target surveillance video as that a behavior that does not satisfy a preset condition exists at an entrance and exit position of the target cell for a corresponding surveillance object, and if the target surveillance video does not include the first type target surveillance video frame, determining an initial video analysis result corresponding to the target surveillance video as that a behavior that does not satisfy a preset condition does not exist at an entrance and exit position of the target cell for a corresponding surveillance object.
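This first fusion rule — report a violation if and only if at least one frame of the first type is present — can be sketched as follows, assuming the per-frame action recognition results are already available as the strings "violation" and "normal" (an illustrative encoding, not from the patent):

```python
def fuse_any_violation(frame_results):
    """Initial video analysis result under the first fusion rule.

    A behavior that does not satisfy the preset condition is reported iff
    at least one frame was recognized as the first (violating) kind.
    """
    return any(r == "violation" for r in frame_results)
```

This rule is maximally sensitive: a single violating frame determines the whole video's initial result.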
In some other preferred embodiments, in the safety precaution method for an intelligent cell, the step of performing, for each target surveillance video of the at least one target surveillance video, result fusion processing based on the action recognition result corresponding to each frame of target surveillance video frame included in the target surveillance video to obtain the initial video analysis result corresponding to the target surveillance video may alternatively include:
for each target surveillance video in the at least one target surveillance video, if an action recognition result corresponding to a target surveillance video frame corresponding to the target surveillance video is a first action recognition result, determining the target surveillance video frame as a first type of target surveillance video frame, and if an action recognition result corresponding to a target surveillance video frame corresponding to the target surveillance video is a second action recognition result, determining the target surveillance video frame as a second type of target surveillance video frame, wherein the first action recognition result is used for representing that a behavior which does not meet a preset condition exists at an entrance and exit position of a target cell in a corresponding monitored object, and the second action recognition result is used for representing that a behavior which does not meet the preset condition does not exist at the entrance and exit position of the target cell in the corresponding monitored object;
for each target surveillance video in the at least one target surveillance video, counting the number of the first type of target surveillance video frames included in the target surveillance video to obtain the number of first video frames corresponding to the target surveillance video, and counting the number of the second type of target surveillance video frames included in the target surveillance video to obtain the number of second video frames corresponding to the target surveillance video;
acquiring a first weight coefficient configured in advance for the first video frame number and acquiring a second weight coefficient configured in advance for the second video frame number, wherein the first weight coefficient is larger than the second weight coefficient;
for each target surveillance video in the at least one target surveillance video, calculating a product between the number of first video frames corresponding to the target surveillance video and the first weight coefficient to obtain a first product value corresponding to the target surveillance video, calculating a product between the number of second video frames corresponding to the target surveillance video and the second weight coefficient to obtain a second product value corresponding to the target surveillance video, and determining a size relationship between the first product value and the second product value;
for each target surveillance video in the at least one target surveillance video, if the first product value corresponding to the target surveillance video is greater than or equal to the corresponding second product value, determining the initial video analysis result corresponding to the target surveillance video as that the corresponding monitored object has a behavior which does not satisfy the preset condition at the entrance and exit position of the target cell, and if the first product value corresponding to the target surveillance video is smaller than the corresponding second product value, determining the initial video analysis result corresponding to the target surveillance video as that the corresponding monitored object does not have a behavior which does not satisfy the preset condition at the entrance and exit position of the target cell.
In some preferred embodiments, in the safety precaution method for an intelligent cell, the step of correcting, for each target surveillance video of the at least one target surveillance video, the initial video analysis result corresponding to the target surveillance video to obtain a target video analysis result corresponding to the target surveillance video includes:
for each target surveillance video in the at least one target surveillance video, determining whether an associated target surveillance video having an association relation with the target surveillance video exists among the other target surveillance videos;
for each target surveillance video in the at least one target surveillance video, if the initial video analysis result corresponding to the target surveillance video is that the corresponding surveillance object has a behavior which does not satisfy the preset condition at the entrance and exit position of the target cell, determining the target video analysis result corresponding to the target surveillance video as that the corresponding surveillance object has a behavior which does not satisfy the preset condition at the entrance and exit position of the target cell;
for each target surveillance video in the at least one target surveillance video, if the initial video analysis result corresponding to the target surveillance video indicates that the corresponding surveillance object does not have a behavior which does not satisfy the preset condition at the entrance and exit position of the target cell, and the target surveillance video does not have a related target surveillance video, determining the target video analysis result corresponding to the target surveillance video as the corresponding surveillance object does not have a behavior which does not satisfy the preset condition at the entrance and exit position of the target cell;
for each target surveillance video in the at least one target surveillance video, if the initial video analysis result corresponding to the target surveillance video indicates that the corresponding surveillance object does not have a behavior which does not meet the preset condition at the entrance/exit position of the target cell, and the target surveillance video has a related target surveillance video, determining a target video analysis result corresponding to the target surveillance video based on the initial video analysis result corresponding to the related target surveillance video.
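The three correction branches above can be condensed into a short sketch. The patent does not specify how multiple associated videos' initial results are combined when deciding the final branch, so an "any violation" combination is assumed here:

```python
def correct_result(initial_violation, associated_initial_results):
    """Correct one target video's initial analysis result (step three).

    - A positive initial result (violation present) is kept as-is.
    - A negative result with no associated videos is also kept.
    - Otherwise the associated videos' initial results decide; combining
      them with "any" is an assumption, not stated by the patent.
    """
    if initial_violation:
        return True
    if not associated_initial_results:
        return False
    return any(associated_initial_results)
```

The effect is that a monitored object initially judged as normal can still be flagged when an object that appears in exactly the same frames was judged to violate the preset condition.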
In some preferred embodiments, in the safety precaution method for an intelligent cell, the step of determining, for each target surveillance video in the at least one target surveillance video, whether an associated target surveillance video having an association relationship with the target surveillance video exists in other target surveillance videos except the target surveillance video includes:
for each target surveillance video in the at least one target surveillance video, determining whether the video frame time sequence of the target surveillance video frames corresponding to the target surveillance video is completely the same as that of any other target surveillance video;
and for each target surveillance video in the at least one target surveillance video, if other target surveillance videos with completely the same video frame time sequence as the target surveillance video frames corresponding to the target surveillance video exist, determining the other target surveillance videos as associated target surveillance videos with associated relations with the target surveillance video.
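The association test reduces to comparing frame time sequences for exact equality; a minimal sketch, assuming each target surveillance video's frames carry timestamps (or frame indices) from the source video:

```python
def are_associated(frame_times_a, frame_times_b):
    """Two target surveillance videos are associated iff their frame time
    sequences are exactly the same, i.e. the two monitored objects appear
    in precisely the same frames of the to-be-processed video."""
    return list(frame_times_a) == list(frame_times_b)
```

Identical time sequences mean the two monitored objects were present together for the whole duration, which is why one object's violation casts doubt on the other's "normal" result.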
The embodiment of the present invention further provides a security system for an intelligent cell, which is applied to a data processing server, wherein the data processing server is communicatively connected with a monitoring terminal device, the monitoring terminal device is deployed at an entrance/exit position of a target cell, and the security system for the intelligent cell includes:
the surveillance video screening module is used for, after a to-be-processed surveillance video sent by the monitoring terminal device is obtained, screening the to-be-processed surveillance video to obtain at least one target surveillance video corresponding to the to-be-processed surveillance video, wherein the to-be-processed surveillance video is obtained by the monitoring terminal device through image acquisition at the entrance/exit of the target cell; the to-be-processed surveillance video comprises multiple frames of to-be-processed surveillance video frames; each target surveillance video in the at least one target surveillance video comprises multiple frames of target surveillance video frames; any two target surveillance video frames belonging to the same target surveillance video correspond to the same monitored object, and any two target surveillance video frames belonging to two different target surveillance videos correspond to different monitored objects;
the monitoring video analysis module is used for analyzing a target monitoring video frame included in the target monitoring video aiming at each target monitoring video in the at least one target monitoring video to obtain an initial video analysis result corresponding to the target monitoring video, wherein the initial video analysis result is used for representing whether a behavior which does not meet a preset condition exists at the entrance and exit position of the corresponding monitoring object in the target cell;
and the analysis result correction module is used for correcting the initial video analysis result corresponding to each target monitoring video in the at least one target monitoring video to obtain a target video analysis result corresponding to the target monitoring video, wherein the target video analysis result is used for representing whether a behavior which does not meet a preset condition exists at the entrance and exit position of the corresponding monitoring object in the target cell.
In some preferred embodiments, in the security system for an intelligent cell, the surveillance video parsing module is specifically configured to:
for each target surveillance video in the at least one target surveillance video, performing action recognition processing on each frame of target surveillance video frame included in the target surveillance video to obtain an action recognition result corresponding to each frame of target surveillance video frame included in the target surveillance video;
and for each target surveillance video in the at least one target surveillance video, performing result fusion processing based on the action recognition result corresponding to each frame of target surveillance video included in the target surveillance video to obtain an initial video analysis result corresponding to the target surveillance video.
In some preferred embodiments, in the security system for smart cells, the analysis result correction module is specifically configured to:
determining whether an associated target surveillance video having an association relation with the target surveillance video exists in other target surveillance videos except the target surveillance video aiming at each target surveillance video in the at least one target surveillance video;
for each target surveillance video in the at least one target surveillance video, if the initial video analysis result corresponding to the target surveillance video is that the corresponding surveillance object has a behavior which does not satisfy the preset condition at the entrance and exit position of the target cell, determining the target video analysis result corresponding to the target surveillance video as that the corresponding surveillance object has a behavior which does not satisfy the preset condition at the entrance and exit position of the target cell;
for each target surveillance video in the at least one target surveillance video, if the initial video analysis result corresponding to the target surveillance video indicates that the corresponding surveillance object does not have a behavior which does not satisfy the preset condition at the entrance and exit position of the target cell, and the target surveillance video does not have a related target surveillance video, determining the target video analysis result corresponding to the target surveillance video as the corresponding surveillance object does not have a behavior which does not satisfy the preset condition at the entrance and exit position of the target cell;
for each target surveillance video in the at least one target surveillance video, if the initial video analysis result corresponding to the target surveillance video indicates that the corresponding surveillance object does not have a behavior which does not meet the preset condition at the entrance/exit position of the target cell, and the target surveillance video has a related target surveillance video, determining a target video analysis result corresponding to the target surveillance video based on the initial video analysis result corresponding to the related target surveillance video.
According to the security protection method and system for an intelligent cell provided by the embodiments of the invention, after the obtained to-be-processed surveillance video is screened to obtain at least one corresponding target surveillance video, the target surveillance video frames included in each target surveillance video can be analyzed to obtain the corresponding initial video analysis result, and the initial video analysis result corresponding to each target surveillance video can then be corrected to obtain the corresponding target video analysis result.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
Fig. 1 is a block diagram of a data processing server according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating steps included in a security method for an intelligent cell according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of modules included in a security system for an intelligent cell according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides a data processing server. Wherein the data processing server may include a memory and a processor.
In detail, the memory and the processor are electrically connected, directly or indirectly, to realize data transmission or interaction; for example, they may be electrically connected to each other via one or more communication buses or signal lines. The memory stores at least one software functional module (computer program), which may exist in the form of software or firmware. The processor executes the executable computer program stored in the memory, so as to implement the security protection method for an intelligent cell provided by the embodiment of the present invention (described in detail below).
Alternatively, in an alternative implementation, the memory may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
Optionally, in an alternative implementation, the processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), a System on Chip (SoC), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
Alternatively, in an alternative implementation, the structure shown in fig. 1 is only an illustration, and the data processing server may further include more or fewer components than those shown in fig. 1, or have a different configuration from that shown in fig. 1, for example, may include a communication unit for information interaction with other devices (e.g., a monitoring terminal device such as a camera).
With reference to Fig. 2, an embodiment of the present invention further provides a security protection method for an intelligent cell, which can be applied to the above data processing server; the method steps in the related flow can be performed by the data processing server. The data processing server is communicatively connected with a monitoring terminal device, and the monitoring terminal device is deployed at the entrance/exit of the target cell. The specific flow shown in Fig. 2 is described in detail below.
Step S100, after the to-be-processed monitoring video sent by the monitoring terminal equipment is obtained, screening the to-be-processed monitoring video to obtain at least one target monitoring video corresponding to the to-be-processed monitoring video.
In the embodiment of the present invention, after acquiring the to-be-processed surveillance video sent by the monitoring terminal device, the data processing server may screen the to-be-processed surveillance video to obtain at least one corresponding target surveillance video. The to-be-processed surveillance video is obtained by the monitoring terminal device through image acquisition at the entrance/exit of the target cell and comprises multiple frames of to-be-processed surveillance video frames; each target surveillance video comprises multiple frames of target surveillance video frames; any two target surveillance video frames belonging to the same target surveillance video correspond to the same monitored object, and any two target surveillance video frames belonging to two different target surveillance videos correspond to different monitored objects.
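The screening of step S100 amounts to grouping the source frames by the monitored object they depict; a minimal sketch, assuming each frame already carries an object identifier (how that identifier is derived, e.g. by detection and tracking, is not specified by the patent):

```python
from collections import defaultdict

def screen_surveillance_video(labeled_frames):
    """Group to-be-processed frames into per-object target surveillance videos.

    `labeled_frames` is a list of (object_id, frame) pairs; the object_id
    labeling is assumed to exist upstream and is not part of the disclosure.
    """
    videos = defaultdict(list)
    for object_id, frame in labeled_frames:
        videos[object_id].append(frame)
    # One target surveillance video per monitored object, so any two frames
    # in the same video share an object and frames in different videos do not.
    return list(videos.values())
```

This grouping guarantees the property required by the method: frames within one target surveillance video always correspond to the same monitored object.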
Step S200, analyzing the target monitoring video frames included in the target monitoring video aiming at each target monitoring video in the at least one target monitoring video to obtain an initial video analysis result corresponding to the target monitoring video.
In the embodiment of the present invention, the data processing server may analyze, for each target surveillance video in the at least one target surveillance video, a target surveillance video frame included in the target surveillance video to obtain an initial video analysis result corresponding to the target surveillance video. And the initial video analysis result is used for representing whether behaviors which do not meet preset conditions exist in the entrance and exit positions of the corresponding monitored objects in the target cell or not.
Step S300, aiming at each target monitoring video in the at least one target monitoring video, correcting the initial video analysis result corresponding to the target monitoring video to obtain a target video analysis result corresponding to the target monitoring video.
In the embodiment of the present invention, the data processing server may correct the initial video analysis result corresponding to each target surveillance video of the at least one target surveillance video, so as to obtain a target video analysis result corresponding to the target surveillance video. The target video analysis result is used for representing whether a behavior which does not meet the preset condition (such as an illegal behavior like fighting) exists at the entrance and exit position of the target cell for the corresponding monitored object.
Based on the above steps (e.g., step S100, step S200, and step S300), after the obtained to-be-processed surveillance video is screened to obtain at least one corresponding target surveillance video, the target surveillance video frames included in the target surveillance video may be analyzed to obtain a corresponding initial video analysis result, and then the initial video analysis result corresponding to the target surveillance video is corrected to obtain a corresponding target video analysis result, so that by configuring a mechanism for correcting the initial video analysis result, the reliability of the obtained target video analysis result may be improved, the reliability of video surveillance is guaranteed, and the problem of poor reliability of video surveillance in the prior art is solved.
Optionally, in an alternative implementation manner, the step of obtaining at least one target surveillance video corresponding to the to-be-processed surveillance video by performing screening processing on the to-be-processed surveillance video after obtaining the to-be-processed surveillance video sent by the surveillance terminal device, that is, the step S100 may include step S110, step S120, and step S130.
Step S110, acquiring the to-be-processed monitoring video sent by the monitoring terminal equipment.
In the embodiment of the present invention, the data processing server may obtain the to-be-processed monitoring video sent by the monitoring terminal device. The to-be-processed monitoring video is obtained by carrying out image acquisition on the entrance and exit position of the target cell based on the monitoring terminal equipment, and the to-be-processed monitoring video comprises multiple frames of to-be-processed monitoring video frames.
Step S120, the surveillance video to be processed is processed, and at least one surveillance video to be screened corresponding to the surveillance video to be processed is obtained.
In the embodiment of the present invention, the data processing server may process the to-be-processed surveillance video to obtain at least one to-be-screened surveillance video corresponding to the to-be-processed surveillance video. Each to-be-screened surveillance video in the at least one to-be-screened surveillance video comprises multiple frames of to-be-screened surveillance video frames; the monitored objects corresponding to any two to-be-screened surveillance video frames belonging to the same to-be-screened surveillance video are the same, and the monitored objects corresponding to any two to-be-screened surveillance video frames belonging to two different to-be-screened surveillance videos are different.
Step S130, for each to-be-screened surveillance video in the at least one to-be-screened surveillance video, performing screening processing on multiple frames of to-be-screened surveillance video frames included in the to-be-screened surveillance video to obtain multiple frames of target surveillance video frames corresponding to the to-be-screened surveillance video, and obtaining a target surveillance video corresponding to the to-be-screened surveillance video based on the multiple frames of target surveillance video frames.
In the embodiment of the present invention, the data processing server may perform, for each to-be-screened surveillance video in the at least one to-be-screened surveillance video, screening processing on multiple frames of to-be-screened surveillance video frames included in the to-be-screened surveillance video to obtain multiple frames of target surveillance video frames corresponding to the to-be-screened surveillance video, and then may obtain the target surveillance video corresponding to the to-be-screened surveillance video based on the multiple frames of target surveillance video frames.
Based on the above steps (such as step S110, step S120 and step S130), after the to-be-processed surveillance video sent by the monitoring terminal device is acquired, the to-be-processed surveillance video may first be processed to obtain at least one corresponding to-be-screened surveillance video, and then, for each to-be-screened surveillance video, the multiple frames of to-be-screened surveillance video frames included in that video are screened to obtain the corresponding multiple frames of target surveillance video frames. Because the monitored objects corresponding to any two to-be-screened surveillance video frames belonging to the same to-be-screened surveillance video are the same, and the monitored objects corresponding to any two to-be-screened surveillance video frames belonging to different to-be-screened surveillance videos are different, the screening processing performed on each to-be-screened surveillance video has a better screening effect, thereby solving the problem of poor screening effect on surveillance videos in the prior art.
Optionally, in an alternative implementation manner, the step of obtaining the to-be-processed monitoring video sent by the monitoring terminal device, that is, the step S110, may include:
firstly, judging whether monitoring start request information sent by a target user terminal device is acquired, wherein the monitoring start request information is generated by the target user terminal device in response to a monitoring start request operation performed by a corresponding target management user (such as a property security guard);
secondly, if the monitoring starting request information sent by the target user terminal equipment is obtained, determining the current time, obtaining the corresponding current time information, and judging whether the current time information reaches the preset target time information or not;
then, if the current time information reaches the target time information, generating monitoring starting notification information, and sending the monitoring starting notification information to the monitoring terminal device, wherein the monitoring terminal device is used for acquiring images of the entrance and exit positions of the target cell after receiving the monitoring starting notification information to obtain a to-be-processed monitoring video;
and then, acquiring the to-be-processed monitoring video acquired and sent by the monitoring terminal device based on the monitoring starting notification information.
Optionally, on the basis of the foregoing implementation manner, in an alternative implementation manner, the step of acquiring the to-be-processed monitoring video acquired and sent by the monitoring terminal device based on the monitoring start notification information may include:
firstly, acquiring video monitoring precision condition information configured in advance for a current time period, and analyzing the video monitoring precision condition information to obtain unit time length information corresponding to the current time period, wherein the unit time length information is determined based on historical monitoring object flow information of a historical time period corresponding to the current time period at an entrance and exit position of a target cell, and the unit time length information and the historical monitoring object flow information have a negative correlation relationship (namely, the larger the historical monitoring object flow information is, the smaller the corresponding unit time length information is);
secondly, sending the unit duration information to the monitoring terminal device, wherein the monitoring terminal device is used for sending the currently acquired to-be-processed surveillance video to the data processing server based on the unit duration information, and the video length of the to-be-processed surveillance video equals the unit duration information (namely, every unit duration of acquired footage forms one to-be-processed surveillance video);
and then, acquiring the to-be-processed monitoring video acquired and sent by the monitoring terminal equipment based on the unit time length information and the monitoring starting notification information.
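The negative correlation between the unit duration and the historical monitored-object flow described above can be sketched as follows; the function name `unit_duration_seconds`, the 60-second base, and the 5-second floor are illustrative assumptions — the embodiment only fixes the direction of the correlation.

```python
def unit_duration_seconds(historical_flow, base_seconds=60, min_seconds=5):
    """Map historical monitored-object flow (e.g. persons/hour) at the
    entrance/exit position to the length of one to-be-processed video chunk.

    Higher historical flow -> shorter chunks, so busy periods are delivered
    and analyzed at a finer granularity (the negative correlation required
    by the embodiment). All constants are illustrative.
    """
    if historical_flow <= 0:
        return base_seconds
    # Inverse mapping with a lower bound so chunks never become degenerate.
    return max(min_seconds, base_seconds // (1 + historical_flow // 10))
```

With these constants, a quiet period (flow 0) yields 60-second chunks while a busy period (flow 100) yields 5-second chunks.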
Optionally, in an alternative implementation manner, the step of processing the to-be-processed monitoring video to obtain at least one to-be-screened monitoring video corresponding to the to-be-processed monitoring video, that is, the step S120 may include:
firstly, determining whether a monitoring object exists in each to-be-processed monitoring video frame in the to-be-processed monitoring video, and determining the to-be-processed monitoring video frame as a first to-be-processed monitoring video frame when the monitoring object exists in the to-be-processed monitoring video frame;
secondly, for each frame of first to-be-processed surveillance video frame, determining the number of monitored objects present in the first to-be-processed surveillance video frame; when that number equals 1, determining the first to-be-processed surveillance video frame as a to-be-screened surveillance video frame; when that number is greater than 1, splitting the first to-be-processed surveillance video frame based on the number of monitored objects present in it to obtain a corresponding number of sub-surveillance video frames, and determining each sub-surveillance video frame as a to-be-screened surveillance video frame, wherein, for each frame of first to-be-processed surveillance video frame, the corresponding sub-surveillance video frames can be spliced back to form that first to-be-processed surveillance video frame, each of the sub-surveillance video frames contains exactly one monitored object, and any two of the sub-surveillance video frames contain different monitored objects;
then, for each determined frame of the monitored video frame to be screened, identifying the monitored video frame to be screened based on the monitored object existing in the monitored video frame to be screened to obtain object identification information corresponding to the monitored video frame to be screened, wherein the object identification information corresponding to the monitored video frame to be screened with the same monitored object in any two frames is the same, and the object identification information corresponding to the monitored video frame to be screened with different monitored objects in any two frames is different;
and finally, clustering the determined surveillance video frames to be screened based on whether the corresponding object identification information is the same or not to obtain at least one corresponding video frame set, and determining the surveillance video frames to be screened included in the video frame set as a surveillance video to be screened aiming at each video frame set in the at least one video frame set, wherein each video frame set in the at least one video frame set includes multiple frames of surveillance video frames to be screened.
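The identification and clustering steps above can be sketched as follows, assuming each raw frame has already been split so that every entry carries exactly one monitored object and its object identification information; the tuple layout and the name `cluster_frames_by_object` are illustrative, not from the embodiment.

```python
from collections import defaultdict

def cluster_frames_by_object(frames):
    """Group per-object frames into one to-be-screened video per object.

    `frames` is a list of (timestamp, object_id, frame_data) tuples.
    Frames sharing an object_id are clustered into one video frame set and
    ordered by timestamp, mirroring the clustering and the time-sequence
    sorting described in this step.
    """
    groups = defaultdict(list)
    for ts, obj_id, data in frames:
        groups[obj_id].append((ts, data))
    # One to-be-screened video (time-ordered frame list) per monitored object.
    return {obj_id: [d for _, d in sorted(seq, key=lambda x: x[0])]
            for obj_id, seq in groups.items()}
```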
Optionally, on the basis of the foregoing implementation, in an alternative implementation, the step of clustering the determined to-be-screened surveillance video frames based on whether the corresponding object identification information is the same to obtain at least one corresponding video frame set, and, for each video frame set in the at least one video frame set, determining the to-be-screened surveillance video frames included in the video frame set as one to-be-screened surveillance video, may include:
firstly, clustering the determined monitoring video frames to be screened based on whether the corresponding object identification information is the same or not to obtain at least one corresponding video frame set;
secondly, for each video frame set in the at least one video frame set, determining the video frame time sequence information of each to-be-screened surveillance video frame included in the video frame set, and sorting the to-be-screened surveillance video frames included in the video frame set based on the video frame time sequence information to form a corresponding video frame sequence, so as to obtain the to-be-screened surveillance video corresponding to the video frame set.
Optionally, in an alternative implementation manner, the step of, for each to-be-screened surveillance video in the at least one to-be-screened surveillance video, performing screening processing on multiple frames of to-be-screened surveillance video frames included in the to-be-screened surveillance video to obtain multiple frames of target surveillance video frames corresponding to the to-be-screened surveillance video, and obtaining a target surveillance video corresponding to the to-be-screened surveillance video based on the multiple frames of target surveillance video frames, that is, the step S130 may include:
firstly, for each to-be-screened surveillance video in the at least one to-be-screened surveillance video, determining multiple to-be-screened surveillance video frames with consecutive video frame time sequences among the multiple frames of to-be-screened surveillance video frames included in the to-be-screened surveillance video as the candidate surveillance video frames corresponding to the to-be-screened surveillance video (namely, the multiple candidate surveillance video frames form one continuous segment of the to-be-screened surveillance video);
secondly, for each surveillance video to be screened in the at least one surveillance video to be screened, performing duplicate removal screening processing on multiple candidate surveillance video frames corresponding to the surveillance video to be screened to obtain a target surveillance video corresponding to the surveillance video to be screened.
Optionally, on the basis of the foregoing implementation manner, in an alternative implementation manner, the step of determining, for each to-be-screened surveillance video in the at least one to-be-screened surveillance video, multiple to-be-screened surveillance video frames with consecutive video frame time sequences among the multiple frames of to-be-screened surveillance video frames included in the to-be-screened surveillance video as the candidate surveillance video frames corresponding to the to-be-screened surveillance video may include (the following sub-steps are described for one of the to-be-screened surveillance videos):
firstly, calculating a pixel discrete value of each monitored video frame to be screened in the monitored video to be screened, wherein the pixel discrete value is used for representing the dispersion of the pixel value of each pixel point in the corresponding monitored video frame to be screened;
secondly, determining the relative size relation between the pixel discrete value of each frame of the surveillance video frame to be screened and a preset pixel dispersion threshold value aiming at each frame of the surveillance video frame to be screened, and determining the surveillance video frame to be screened as a first surveillance video frame to be screened when the pixel discrete value of the surveillance video frame to be screened is greater than or equal to the pixel dispersion threshold value;
then, based on the first surveillance video frame to be screened, the surveillance video to be screened is segmented to obtain at least one corresponding surveillance video frame segment, wherein the at least one surveillance video frame segment does not include the first surveillance video frame to be screened;
then, counting the number of the surveillance video frames to be screened included in the surveillance video frame segment aiming at each surveillance video frame segment in the at least one surveillance video frame segment to obtain the number of the video frames corresponding to the surveillance video frame segment, and determining the relative size relation between the number of the video frames and a preset threshold value of the number of the video frames;
further, for each of the at least one monitored video frame segment, if the number of video frames corresponding to the monitored video frame segment is greater than or equal to the threshold value of the number of video frames, determining the monitored video frame segment as a first monitored video frame segment;
further, for each first surveillance video frame segment, determining whether a first to-be-screened surveillance video frame exists before the first surveillance video frame segment; when no first to-be-screened surveillance video frame exists before the first surveillance video frame segment, determining the first surveillance video frame segment as a second surveillance video frame segment; when a first to-be-screened surveillance video frame exists before the first surveillance video frame segment, calculating the similarity mean value between each to-be-screened surveillance video frame included in the first surveillance video frame segment and the at least one first to-be-screened surveillance video frame before the first surveillance video frame segment, and, when the similarity mean value is greater than or equal to a preset similarity mean value threshold, determining the first surveillance video frame segment as a second surveillance video frame segment;
and finally, respectively counting the number of the surveillance video frames to be screened, which are included in each second surveillance video frame segment, and taking the multiple frames of the surveillance video frames to be screened, which are included in the second surveillance video frame segment with the largest number, as candidate surveillance video frames corresponding to the surveillance video to be screened.
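A minimal sketch of the segment-selection logic above, assuming per-frame pixel-dispersion values (e.g. pixel variance) have already been computed; the similarity-mean check against preceding first to-be-screened frames is omitted for brevity, and the name `longest_stable_segment` and all thresholds are illustrative.

```python
def longest_stable_segment(dispersions, disp_thresh, min_len):
    """Pick candidate frames as the longest run of low-dispersion frames.

    Frames whose pixel-dispersion value is at or above `disp_thresh` act as
    segment boundaries (first to-be-screened frames) and are excluded from
    every segment; surviving runs shorter than `min_len` frames are
    discarded; the longest remaining run is returned as half-open
    (start, end) frame indices, or None if no run qualifies.
    """
    segments, start = [], None
    # Append a sentinel boundary so the final run is flushed.
    for i, d in enumerate(dispersions + [disp_thresh]):
        if d >= disp_thresh:
            if start is not None and i - start >= min_len:
                segments.append((start, i))
            start = None
        elif start is None:
            start = i
    if not segments:
        return None
    return max(segments, key=lambda s: s[1] - s[0])
```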
Optionally, on the basis of the foregoing implementation manner, in an alternative implementation manner, the step of performing, for each to-be-screened surveillance video in the at least one to-be-screened surveillance video, duplicate-removal screening processing on the multiple candidate surveillance video frames corresponding to the to-be-screened surveillance video to obtain the target surveillance video corresponding to the to-be-screened surveillance video may include:
firstly, aiming at each monitoring video to be screened in the at least one monitoring video to be screened, carrying out similarity calculation processing on every two adjacent candidate monitoring video frames in a plurality of candidate monitoring video frames corresponding to the monitoring video to be screened to obtain the video frame similarity between every two adjacent candidate monitoring video frames in the monitoring video to be screened;
secondly, for each to-be-screened surveillance video in the at least one to-be-screened surveillance video, determining the relative size relationship between the video frame similarity of every two adjacent candidate surveillance video frames in the to-be-screened surveillance video and a preset video frame similarity threshold, and, when the video frame similarity between two adjacent candidate surveillance video frames is greater than the video frame similarity threshold, screening out one of the two candidate surveillance video frames (for example, the one with the later video frame time sequence);
then, for each to-be-screened surveillance video in the at least one to-be-screened surveillance video, obtaining the corresponding target surveillance video based on each to-be-screened surveillance video frame, other than the candidate surveillance video frames, included in the to-be-screened surveillance video, together with each candidate surveillance video frame that is not screened out.
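One plausible reading of the duplicate-removal step can be sketched as follows: a candidate frame too similar to the last kept frame is screened out, keeping the earlier of the pair (the embodiment only states that one of the two is screened out). The `similarity` callback and threshold are assumptions.

```python
def dedup_candidates(frames, similarity, sim_thresh):
    """Remove near-duplicate adjacent candidate surveillance video frames.

    `similarity(a, b)` is any pairwise score in [0, 1], e.g. 1 minus a
    normalized pixel difference. A frame is kept only if its similarity to
    the most recently kept frame does not exceed `sim_thresh`.
    """
    if not frames:
        return []
    kept = [frames[0]]
    for f in frames[1:]:
        if similarity(kept[-1], f) <= sim_thresh:
            kept.append(f)  # sufficiently different: keep this frame
    return kept
```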
Optionally, in an alternative implementation manner, the step of analyzing, for each target surveillance video in the at least one target surveillance video, a target surveillance video frame included in the target surveillance video to obtain an initial video analysis result corresponding to the target surveillance video, that is, the step S200 may include:
firstly, for each target surveillance video in the at least one target surveillance video, performing action recognition processing (for example, recognition may be performed based on a neural network model obtained through training) on each frame of target surveillance video frame included in the target surveillance video to obtain an action recognition result corresponding to each frame of target surveillance video frame included in the target surveillance video;
secondly, for each target surveillance video in the at least one target surveillance video, performing result fusion processing based on the action recognition result corresponding to each frame of target surveillance video frame included in the target surveillance video to obtain an initial video analysis result corresponding to the target surveillance video.
Optionally, on the basis of the foregoing implementation, in an alternative implementation, the step of performing, for each target surveillance video in the at least one target surveillance video, action recognition processing on each frame of target surveillance video frame included in the target surveillance video to obtain an action recognition result corresponding to each frame of target surveillance video frame included in the target surveillance video includes:
firstly, determining video frame acquisition frequency information of the to-be-processed monitoring video acquired by the monitoring terminal equipment, and determining target sampling information based on the video frame acquisition frequency information, wherein the target sampling information and the video frame acquisition frequency information have positive correlation;
secondly, for each target monitoring video in the at least one target monitoring video, sampling target monitoring video frames included in the target monitoring video based on the target sampling information to obtain multi-frame sampling monitoring video frames corresponding to the target monitoring video;
then, for each target surveillance video in the at least one target surveillance video, performing action recognition processing on each frame of sampled surveillance video frame included in the target surveillance video to obtain an action recognition result corresponding to each frame of sampled surveillance video frame corresponding to the target surveillance video.
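A sketch of the sampling-before-recognition step, where the stride grows with the capture frame rate so that the target sampling information and the acquisition frequency have the positive correlation stated above; `base_fps` and the stride formula are illustrative.

```python
def sample_frames(frames, fps, base_fps=10):
    """Subsample a target surveillance video before action recognition.

    A higher capture frame rate yields a larger stride, so roughly
    `base_fps` frames per second reach the recognizer regardless of camera
    settings. Constants are illustrative, not from the embodiment.
    """
    stride = max(1, fps // base_fps)
    return frames[::stride]
```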
Optionally, on the basis of the foregoing implementation manner, in an alternative implementation manner, the step of performing, for each target surveillance video of the at least one target surveillance video, result fusion processing based on the action recognition result corresponding to each frame of target surveillance video frame included in the target surveillance video to obtain the initial video analysis result corresponding to the target surveillance video may include:
firstly, for each target surveillance video in the at least one target surveillance video, if an action recognition result corresponding to a target surveillance video frame corresponding to the target surveillance video is a first action recognition result, determining the target surveillance video frame as a first type of target surveillance video frame, and if an action recognition result corresponding to a target surveillance video frame corresponding to the target surveillance video is a second action recognition result, determining the target surveillance video frame as a second type of target surveillance video frame, wherein the first action recognition result is used for representing that a behavior which does not meet a preset condition exists at an entrance and exit position of a corresponding monitored object in the target cell, and the second action recognition result is used for representing that the behavior which does not meet the preset condition does not exist at the entrance and exit position of the corresponding monitored object in the target cell;
secondly, for each target surveillance video in the at least one target surveillance video, if the target surveillance video includes at least one frame of the first type of target surveillance video frame, determining an initial video analysis result corresponding to the target surveillance video as that a behavior which does not satisfy a preset condition exists at an entrance and exit position of the target cell in a corresponding surveillance object, and if the target surveillance video does not include the first type of target surveillance video frame, determining an initial video analysis result corresponding to the target surveillance video as that a behavior which does not satisfy a preset condition does not exist at the entrance and exit position of the target cell in the corresponding surveillance object.
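This first fusion variant reduces to a single predicate over the per-frame results; `recognition_results` marks first-type (violating) frames as `True` and is an illustrative representation.

```python
def fuse_any(recognition_results):
    """First fusion variant: one frame whose action-recognition result is a
    first action recognition result (a violating behavior) is enough to mark
    the whole target surveillance video as containing a behavior that does
    not satisfy the preset condition."""
    return any(recognition_results)
```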
Optionally, on the basis of the foregoing implementation manner, in another alternative implementation manner, the step of performing, for each target surveillance video of the at least one target surveillance video, result fusion processing based on the action recognition result corresponding to each frame of target surveillance video frame included in the target surveillance video to obtain the initial video analysis result corresponding to the target surveillance video includes:
firstly, for each target surveillance video in the at least one target surveillance video, if an action recognition result corresponding to a target surveillance video frame corresponding to the target surveillance video is a first action recognition result, determining the target surveillance video frame as a first type of target surveillance video frame, and if an action recognition result corresponding to a target surveillance video frame corresponding to the target surveillance video is a second action recognition result, determining the target surveillance video frame as a second type of target surveillance video frame, wherein the first action recognition result is used for representing that a behavior which does not meet a preset condition exists at an entrance and exit position of a corresponding monitored object in the target cell, and the second action recognition result is used for representing that the behavior which does not meet the preset condition does not exist at the entrance and exit position of the corresponding monitored object in the target cell;
secondly, counting the number of the first type of target monitoring video frames included in the target monitoring video aiming at each target monitoring video in the at least one target monitoring video to obtain the number of first video frames corresponding to the target monitoring video, and counting the number of the second type of target monitoring video frames included in the target monitoring video to obtain the number of second video frames corresponding to the target monitoring video;
then, acquiring a first weight coefficient configured in advance for the first video frame number, and acquiring a second weight coefficient configured in advance for the second video frame number, wherein the first weight coefficient is greater than the second weight coefficient;
then, for each target surveillance video in the at least one target surveillance video, calculating a product between the number of the first video frames corresponding to the target surveillance video and the first weight coefficient to obtain a first product value corresponding to the target surveillance video, calculating a product between the number of the second video frames corresponding to the target surveillance video and the second weight coefficient to obtain a second product value corresponding to the target surveillance video, and determining a magnitude relationship between the first product value and the second product value (i.e., whether the first product value is greater than or equal to the second product value);
finally, for each target surveillance video in the at least one target surveillance video, if the first product value corresponding to the target surveillance video is greater than or equal to the corresponding second product value, determining an initial video analysis result corresponding to the target surveillance video as that a behavior which does not satisfy a preset condition exists at the entrance and exit position of the corresponding surveillance object in the target cell, and if the first product value corresponding to the target surveillance video is smaller than the corresponding second product value, determining an initial video analysis result corresponding to the target surveillance video as that a behavior which does not satisfy a preset condition does not exist at the entrance and exit position of the corresponding surveillance object in the target cell.
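The weighted fusion variant above can be sketched as follows; `recognition_results` marks first-type (violating) frames as `True`, and the weight values are illustrative — the embodiment only requires the first weight coefficient to exceed the second.

```python
def fuse_results(recognition_results, w_violation=2.0, w_normal=1.0):
    """Weighted fusion of per-frame action-recognition results.

    The count of violating frames is weighted by `w_violation` and the count
    of normal frames by `w_normal` (with w_violation > w_normal), so even a
    minority of violating frames can flip the video-level result. Returns
    True when the video is judged to contain a violating behavior.
    """
    n_violation = sum(recognition_results)
    n_normal = len(recognition_results) - n_violation
    return n_violation * w_violation >= n_normal * w_normal
```

With these weights, 1 violating frame out of 3 flags the video, while 1 out of 5 does not.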
Optionally, in an alternative implementation manner, the step of correcting the initial video analysis result corresponding to each target surveillance video in the at least one target surveillance video to obtain the target video analysis result corresponding to the target surveillance video, that is, the step S300 may include:
firstly, for each target surveillance video in the at least one target surveillance video, determining whether an associated target surveillance video having an association relationship with the target surveillance video exists among the other target surveillance videos;
secondly, for each target surveillance video in the at least one target surveillance video, if the initial video analysis result corresponding to the target surveillance video indicates that the corresponding monitored object exhibits a behavior that does not satisfy the preset condition at the entrance/exit position of the target cell, determining the target video analysis result corresponding to the target surveillance video as indicating that such a behavior exists;
then, for each target surveillance video in the at least one target surveillance video, if the initial video analysis result corresponding to the target surveillance video indicates that no such behavior exists, and no associated target surveillance video exists for the target surveillance video, determining the target video analysis result corresponding to the target surveillance video as indicating that no such behavior exists;
finally, for each target surveillance video in the at least one target surveillance video, if the initial video analysis result corresponding to the target surveillance video indicates that no such behavior exists, but an associated target surveillance video does exist for the target surveillance video, determining the target video analysis result corresponding to the target surveillance video based on the initial video analysis result corresponding to the associated target surveillance video.
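The four correction steps above can be condensed into a small Python sketch. Note that the description does not pin down exactly how an associated video's initial result is folded in; the `any(...)` rule below is an assumption for illustration.

```python
# Hedged sketch of the correction step: a flagged initial result is kept;
# a clean, unassociated video stays clean; a clean video with associated
# videos inherits their verdict (assumed here: any flagged associate
# flags this video too).

def correct_result(initial_flagged, associated_flags):
    """Return the corrected (target) analysis result.

    initial_flagged  -- initial result of this video (True = violation)
    associated_flags -- initial results of associated videos; an empty
                        list means no associated video exists
    """
    if initial_flagged:
        return True                 # flagged results are never downgraded
    if not associated_flags:
        return False                # clean and unassociated stays clean
    return any(associated_flags)    # assumption: inherit any violation
```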
Optionally, building on the foregoing implementation, the step of determining, for each target surveillance video in the at least one target surveillance video, whether an associated target surveillance video having an association relationship with the target surveillance video exists among the other target surveillance videos may include:
firstly, for each target surveillance video in the at least one target surveillance video, determining whether the video frame time sequence of the target surveillance video frames of that video is completely identical to that of each of the other target surveillance videos (namely, the corresponding monitored objects are fully synchronized, entering the monitored position and leaving the monitored position at the same time);
secondly, for each target surveillance video in the at least one target surveillance video, if another target surveillance video exists whose video frame time sequence is completely identical to that of the target surveillance video frames of the target surveillance video, determining that other target surveillance video as an associated target surveillance video having an association relationship with the target surveillance video.
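Under the exact-synchrony criterion above, association reduces to comparing frame time sequences. A minimal sketch (the function and variable names are illustrative):

```python
# Two videos are associated when their frame time sequences are
# identical, i.e. the monitored objects enter and leave the monitored
# position in full synchrony.

def find_associated(videos):
    """Map each video id to the ids of its associated videos.

    videos: {video_id: sequence of frame timestamps}
    """
    assoc = {}
    for vid, ts in videos.items():
        assoc[vid] = [other for other, other_ts in videos.items()
                      if other != vid and list(other_ts) == list(ts)]
    return assoc
```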
With reference to fig. 3, an embodiment of the present invention further provides a security protection system for an intelligent cell, which can be applied to the data processing server. The security protection system for an intelligent cell may include:
the monitoring terminal equipment is used for acquiring a to-be-processed surveillance video and obtaining at least one target surveillance video corresponding to the to-be-processed surveillance video, wherein the to-be-processed surveillance video is acquired by performing image acquisition at the entrance/exit position of the target cell via the monitoring terminal equipment; the to-be-processed surveillance video comprises multiple frames of to-be-processed surveillance video frames, and each target surveillance video in the at least one target surveillance video comprises multiple frames of target surveillance video frames; the monitored objects corresponding to any two target surveillance video frames belonging to the same target surveillance video are the same, and the monitored objects corresponding to any two target surveillance video frames belonging to two different target surveillance videos are different;
the monitoring video analysis module is used for analyzing a target monitoring video frame included in the target monitoring video aiming at each target monitoring video in the at least one target monitoring video to obtain an initial video analysis result corresponding to the target monitoring video, wherein the initial video analysis result is used for representing whether a behavior which does not meet a preset condition exists at the entrance and exit position of the corresponding monitoring object in the target cell;
and the analysis result correction module is used for correcting the initial video analysis result corresponding to each target monitoring video in the at least one target monitoring video to obtain a target video analysis result corresponding to the target monitoring video, wherein the target video analysis result is used for representing whether a behavior which does not meet a preset condition exists at the entrance and exit position of the corresponding monitoring object in the target cell.
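As a rough sketch of how the components above might cooperate (all names are illustrative; `recognise_frame` stands in for whatever per-frame action-recognition model is actually deployed, and the fusion and correction rules are the simple variants described elsewhere in this document, chosen as assumptions):

```python
# End-to-end sketch: parse each target video from per-frame recognition
# results, then correct clean verdicts using fully time-synchronised
# (associated) videos. Names and rules are illustrative assumptions.

def run_system(target_videos, recognise_frame):
    """target_videos: {video_id: (frame_timestamps, frames)}.
    Returns {video_id: corrected result} (True = violation present).
    """
    # parsing module: any non-compliant frame flags the whole video
    initial = {vid: any(recognise_frame(f) for f in frames)
               for vid, (ts, frames) in target_videos.items()}
    # correction module: inherit a flag from any associated video
    corrected = {}
    for vid, (ts, _) in target_videos.items():
        if initial[vid]:
            corrected[vid] = True
            continue
        associated = [other for other, (other_ts, _) in target_videos.items()
                      if other != vid and list(other_ts) == list(ts)]
        corrected[vid] = any(initial[a] for a in associated)
    return corrected
```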
Optionally, on the basis of the foregoing implementation, in an alternative implementation, the surveillance video parsing module may be specifically configured to:
for each target surveillance video in the at least one target surveillance video, performing action recognition processing on each frame of target surveillance video frame included in the target surveillance video to obtain an action recognition result corresponding to each frame of target surveillance video frame included in the target surveillance video;
and for each target surveillance video in the at least one target surveillance video, performing result fusion processing based on the action recognition result corresponding to each frame of target surveillance video included in the target surveillance video to obtain an initial video analysis result corresponding to the target surveillance video.
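The two operations of this module, per-frame action recognition followed by result fusion, can be sketched as follows, assuming the simple fusion rule in which a single non-compliant frame flags the whole video (`recognise_frame` is a stand-in for the deployed recognition model):

```python
# Per-frame recognition, then fusion: the initial analysis result is
# True (violation) as soon as any frame is recognised as non-compliant.

def parse_video(frames, recognise_frame):
    """Return the initial video analysis result for one target video."""
    per_frame_results = [recognise_frame(f) for f in frames]  # recognition
    return any(per_frame_results)                             # fusion
```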
Optionally, on the basis of the foregoing implementation, in an alternative implementation, the analysis result correction module may be specifically configured to:
determining whether an associated target surveillance video having an association relation with the target surveillance video exists in other target surveillance videos except the target surveillance video aiming at each target surveillance video in the at least one target surveillance video;
for each target surveillance video in the at least one target surveillance video, if the initial video analysis result corresponding to the target surveillance video is that the corresponding surveillance object has a behavior which does not satisfy the preset condition at the entrance and exit position of the target cell, determining the target video analysis result corresponding to the target surveillance video as that the corresponding surveillance object has a behavior which does not satisfy the preset condition at the entrance and exit position of the target cell;
for each target surveillance video in the at least one target surveillance video, if the initial video analysis result corresponding to the target surveillance video indicates that the corresponding monitored object does not exhibit a behavior which does not satisfy the preset condition at the entrance/exit position of the target cell, and no associated target surveillance video exists for the target surveillance video, determining the target video analysis result corresponding to the target surveillance video as indicating that the corresponding monitored object does not exhibit such a behavior;
for each target surveillance video in the at least one target surveillance video, if the initial video analysis result corresponding to the target surveillance video indicates that the corresponding surveillance object does not have a behavior which does not meet the preset condition at the entrance/exit position of the target cell, and the target surveillance video has a related target surveillance video, determining a target video analysis result corresponding to the target surveillance video based on the initial video analysis result corresponding to the related target surveillance video.
In summary, according to the security protection method and system for an intelligent cell provided by the present invention, the obtained to-be-processed surveillance video is first screened to obtain at least one corresponding target surveillance video; the target surveillance video frames included in each target surveillance video are then analyzed to obtain a corresponding initial video analysis result; finally, the initial video analysis result is corrected to obtain a corresponding target video analysis result. By configuring a mechanism for correcting the initial video analysis result, the reliability of the obtained target video analysis result can be improved, the reliability of video surveillance can be ensured, and the problem of poor video surveillance reliability in the prior art can be alleviated.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within its protection scope.

Claims (10)

1. A security protection method for an intelligent cell, applied to a data processing server, wherein the data processing server is communicatively connected to a monitoring terminal device, and the monitoring terminal device is deployed at the entrance/exit position of a target cell; the security protection method for the intelligent cell comprises:
after a to-be-processed surveillance video sent by the monitoring terminal device is obtained, screening the to-be-processed surveillance video to obtain at least one target surveillance video corresponding to the to-be-processed surveillance video, wherein the to-be-processed surveillance video is obtained by performing image acquisition at the entrance/exit position of the target cell via the monitoring terminal device; the to-be-processed surveillance video comprises multiple frames of to-be-processed surveillance video frames, and each target surveillance video in the at least one target surveillance video comprises multiple frames of target surveillance video frames; the monitored objects corresponding to any two target surveillance video frames belonging to the same target surveillance video are the same, and the monitored objects corresponding to any two target surveillance video frames belonging to two different target surveillance videos are different;
analyzing a target monitoring video frame included in the target monitoring video aiming at each target monitoring video in the at least one target monitoring video to obtain an initial video analysis result corresponding to the target monitoring video, wherein the initial video analysis result is used for representing whether a behavior which does not meet a preset condition exists at an entrance and exit position of a corresponding monitoring object in the target cell or not;
and correcting the initial video analysis result corresponding to the target monitoring video aiming at each target monitoring video in the at least one target monitoring video to obtain a target video analysis result corresponding to the target monitoring video, wherein the target video analysis result is used for representing whether behaviors which do not meet preset conditions exist at the entrance and exit positions of the corresponding monitoring object in the target cell.
2. The method for safety precaution of an intelligent cell according to claim 1, wherein the step of analyzing, for each target surveillance video of the at least one target surveillance video, a target surveillance video frame included in the target surveillance video to obtain an initial video analysis result corresponding to the target surveillance video includes:
for each target surveillance video in the at least one target surveillance video, performing action recognition processing on each frame of target surveillance video frame included in the target surveillance video to obtain an action recognition result corresponding to each frame of target surveillance video frame included in the target surveillance video;
and for each target surveillance video in the at least one target surveillance video, performing result fusion processing based on the action recognition result corresponding to each frame of target surveillance video included in the target surveillance video to obtain an initial video analysis result corresponding to the target surveillance video.
3. The method for safety precaution of an intelligent cell according to claim 2, wherein the step of performing, for each target surveillance video of the at least one target surveillance video, an action recognition process on each target surveillance video frame included in the target surveillance video to obtain an action recognition result corresponding to each target surveillance video frame included in the target surveillance video includes:
determining video frame acquisition frequency information of the to-be-processed monitoring video acquired by the monitoring terminal equipment, and determining target sampling information based on the video frame acquisition frequency information, wherein the target sampling information and the video frame acquisition frequency information have positive correlation;
for each target monitoring video in the at least one target monitoring video, sampling target monitoring video frames included in the target monitoring video based on the target sampling information to obtain multi-frame sampling monitoring video frames corresponding to the target monitoring video;
and aiming at each target surveillance video in the at least one target surveillance video, performing action identification processing on each frame of sampling surveillance video frame included in the target surveillance video to obtain an action identification result corresponding to each frame of sampling surveillance video frame corresponding to the target surveillance video.
4. The method for safety precaution of an intelligent cell according to claim 2, wherein the step of performing result fusion processing based on the action recognition result corresponding to each frame of target surveillance video included in the target surveillance video for each target surveillance video in the at least one target surveillance video to obtain the initial video parsing result corresponding to the target surveillance video comprises:
for each target surveillance video in the at least one target surveillance video, if an action recognition result corresponding to a target surveillance video frame corresponding to the target surveillance video is a first action recognition result, determining the target surveillance video frame as a first type of target surveillance video frame, and if an action recognition result corresponding to a target surveillance video frame corresponding to the target surveillance video is a second action recognition result, determining the target surveillance video frame as a second type of target surveillance video frame, wherein the first action recognition result is used for representing that a behavior which does not meet a preset condition exists at an entrance and exit position of a target cell in a corresponding monitored object, and the second action recognition result is used for representing that a behavior which does not meet the preset condition does not exist at the entrance and exit position of the target cell in the corresponding monitored object;
for each target surveillance video in the at least one target surveillance video, if the target surveillance video includes at least one frame of the first type target surveillance video frame, determining an initial video analysis result corresponding to the target surveillance video as that a behavior that does not satisfy a preset condition exists at an entrance and exit position of the target cell for a corresponding surveillance object, and if the target surveillance video does not include the first type target surveillance video frame, determining an initial video analysis result corresponding to the target surveillance video as that a behavior that does not satisfy a preset condition does not exist at an entrance and exit position of the target cell for a corresponding surveillance object.
5. The method for safety precaution of an intelligent cell according to claim 2, wherein the step of performing result fusion processing based on the action recognition result corresponding to each frame of target surveillance video included in the target surveillance video for each target surveillance video in the at least one target surveillance video to obtain the initial video parsing result corresponding to the target surveillance video comprises:
for each target surveillance video in the at least one target surveillance video, if an action recognition result corresponding to a target surveillance video frame corresponding to the target surveillance video is a first action recognition result, determining the target surveillance video frame as a first type of target surveillance video frame, and if an action recognition result corresponding to a target surveillance video frame corresponding to the target surveillance video is a second action recognition result, determining the target surveillance video frame as a second type of target surveillance video frame, wherein the first action recognition result is used for representing that a behavior which does not meet a preset condition exists at an entrance and exit position of a target cell in a corresponding monitored object, and the second action recognition result is used for representing that a behavior which does not meet the preset condition does not exist at the entrance and exit position of the target cell in the corresponding monitored object;
counting the number of the first type of target monitoring video frames included in the target monitoring video to obtain the number of first video frames corresponding to the target monitoring video, and counting the number of the second type of target monitoring video frames included in the target monitoring video to obtain the number of second video frames corresponding to the target monitoring video;
acquiring a first weight coefficient configured in advance for the first video frame number and acquiring a second weight coefficient configured in advance for the second video frame number, wherein the first weight coefficient is larger than the second weight coefficient;
for each target surveillance video in the at least one target surveillance video, calculating a product between the number of first video frames corresponding to the target surveillance video and the first weight coefficient to obtain a first product value corresponding to the target surveillance video, calculating a product between the number of second video frames corresponding to the target surveillance video and the second weight coefficient to obtain a second product value corresponding to the target surveillance video, and determining a size relationship between the first product value and the second product value;
for each target surveillance video in the at least one target surveillance video, if the first product value corresponding to the target surveillance video is greater than or equal to the corresponding second product value, determining the initial video analysis result corresponding to the target surveillance video as indicating that the corresponding monitored object exhibits a behavior which does not satisfy the preset condition at the entrance/exit position of the target cell; and if the first product value corresponding to the target surveillance video is smaller than the corresponding second product value, determining the initial video analysis result as indicating that no such behavior exists.
6. The method for safety precaution of an intelligent cell according to any one of claims 1 to 5, wherein the step of correcting the initial video parsing result corresponding to each target surveillance video of the at least one target surveillance video to obtain the target video parsing result corresponding to the target surveillance video comprises:
determining whether an associated target surveillance video having an association relation with the target surveillance video exists in other target surveillance videos except the target surveillance video aiming at each target surveillance video in the at least one target surveillance video;
for each target surveillance video in the at least one target surveillance video, if the initial video analysis result corresponding to the target surveillance video is that the corresponding surveillance object has a behavior which does not satisfy the preset condition at the entrance and exit position of the target cell, determining the target video analysis result corresponding to the target surveillance video as that the corresponding surveillance object has a behavior which does not satisfy the preset condition at the entrance and exit position of the target cell;
for each target surveillance video in the at least one target surveillance video, if the initial video analysis result corresponding to the target surveillance video indicates that the corresponding monitored object does not exhibit a behavior which does not satisfy the preset condition at the entrance/exit position of the target cell, and no associated target surveillance video exists for the target surveillance video, determining the target video analysis result corresponding to the target surveillance video as indicating that the corresponding monitored object does not exhibit such a behavior;
for each target surveillance video in the at least one target surveillance video, if the initial video analysis result corresponding to the target surveillance video indicates that the corresponding surveillance object does not have a behavior which does not meet the preset condition at the entrance/exit position of the target cell, and the target surveillance video has a related target surveillance video, determining a target video analysis result corresponding to the target surveillance video based on the initial video analysis result corresponding to the related target surveillance video.
7. The method for security protection of an intelligent cell according to claim 6, wherein the step of determining, for each target surveillance video of the at least one target surveillance video, whether an associated target surveillance video associated with the target surveillance video exists in other target surveillance videos except the target surveillance video comprises:
determining whether video frame time sequences between the target surveillance video and target surveillance video frames corresponding to other target surveillance videos except the target surveillance video are completely the same or not aiming at each target surveillance video in the at least one target surveillance video;
and for each target surveillance video in the at least one target surveillance video, if other target surveillance videos with completely the same video frame time sequence as the target surveillance video frames corresponding to the target surveillance video exist, determining the other target surveillance videos as associated target surveillance videos with associated relations with the target surveillance video.
8. A security protection system for an intelligent cell, applied to a data processing server, wherein the data processing server is communicatively connected to a monitoring terminal device, and the monitoring terminal device is deployed at the entrance/exit position of a target cell; the security protection system for the intelligent cell comprises:
the monitoring terminal device, used for acquiring a to-be-processed surveillance video and obtaining at least one target surveillance video corresponding to the to-be-processed surveillance video, wherein the to-be-processed surveillance video is acquired by performing image acquisition at the entrance/exit position of the target cell via the monitoring terminal device; the to-be-processed surveillance video comprises multiple frames of to-be-processed surveillance video frames, and each target surveillance video in the at least one target surveillance video comprises multiple frames of target surveillance video frames; the monitored objects corresponding to any two target surveillance video frames belonging to the same target surveillance video are the same, and the monitored objects corresponding to any two target surveillance video frames belonging to two different target surveillance videos are different;
the monitoring video analysis module is used for analyzing a target monitoring video frame included in the target monitoring video aiming at each target monitoring video in the at least one target monitoring video to obtain an initial video analysis result corresponding to the target monitoring video, wherein the initial video analysis result is used for representing whether a behavior which does not meet a preset condition exists at the entrance and exit position of the corresponding monitoring object in the target cell;
and the analysis result correction module is used for correcting the initial video analysis result corresponding to each target monitoring video in the at least one target monitoring video to obtain a target video analysis result corresponding to the target monitoring video, wherein the target video analysis result is used for representing whether a behavior which does not meet a preset condition exists at the entrance and exit position of the corresponding monitoring object in the target cell.
9. The system of claim 8, wherein the surveillance video resolution module is specifically configured to:
for each target surveillance video in the at least one target surveillance video, performing action recognition processing on each frame of target surveillance video frame included in the target surveillance video to obtain an action recognition result corresponding to each frame of target surveillance video frame included in the target surveillance video;
and for each target surveillance video in the at least one target surveillance video, performing result fusion processing based on the action recognition result corresponding to each frame of target surveillance video included in the target surveillance video to obtain an initial video analysis result corresponding to the target surveillance video.
10. The system of claim 8, wherein the analysis result calibration module is specifically configured to:
determining whether an associated target surveillance video having an association relation with the target surveillance video exists in other target surveillance videos except the target surveillance video aiming at each target surveillance video in the at least one target surveillance video;
for each target surveillance video in the at least one target surveillance video, if the initial video analysis result corresponding to the target surveillance video is that the corresponding surveillance object has a behavior which does not satisfy the preset condition at the entrance and exit position of the target cell, determining the target video analysis result corresponding to the target surveillance video as that the corresponding surveillance object has a behavior which does not satisfy the preset condition at the entrance and exit position of the target cell;
for each target surveillance video in the at least one target surveillance video, if the initial video analysis result corresponding to the target surveillance video indicates that the corresponding monitored object does not exhibit a behavior which does not satisfy the preset condition at the entrance/exit position of the target cell, and no associated target surveillance video exists for the target surveillance video, determining the target video analysis result corresponding to the target surveillance video as indicating that the corresponding monitored object does not exhibit such a behavior;
for each target surveillance video in the at least one target surveillance video, if the initial video analysis result corresponding to the target surveillance video indicates that the corresponding surveillance object does not have a behavior which does not meet the preset condition at the entrance/exit position of the target cell, and the target surveillance video has a related target surveillance video, determining a target video analysis result corresponding to the target surveillance video based on the initial video analysis result corresponding to the related target surveillance video.
CN202111312692.XA 2021-11-08 2021-11-08 Safety protection method and system for intelligent cell Withdrawn CN114139017A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111312692.XA CN114139017A (en) 2021-11-08 2021-11-08 Safety protection method and system for intelligent cell


Publications (1)

Publication Number Publication Date
CN114139017A true CN114139017A (en) 2022-03-04

Family

ID=80393093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111312692.XA Withdrawn CN114139017A (en) 2021-11-08 2021-11-08 Safety protection method and system for intelligent cell

Country Status (1)

Country Link
CN (1) CN114139017A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220304