
CN114399535B - Multi-behavior recognition device and method based on artificial intelligence algorithm - Google Patents


Info

Publication number
CN114399535B
CN114399535B (application CN202210050131.5A; published as CN114399535A)
Authority
CN
China
Prior art keywords
area, region, determining, preset, motion
Prior art date
Legal status (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Active
Application number
CN202210050131.5A
Other languages
Chinese (zh)
Other versions
CN114399535A (en)
Inventor
海拉提·恰凯
杨柳
黎红
王涛
郭江涛
李志刚
孙博文
柳瑞
魏乐
Current Assignee (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
State Grid Xinjiang Electric Power Corporation Information & Telecommunication Co., Ltd.
Original Assignee
State Grid Xinjiang Electric Power Corporation Information & Telecommunication Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Xinjiang Electric Power Corporation Information & Telecommunication Co., Ltd.
Priority to CN202210050131.5A
Publication of CN114399535A
Application granted
Publication of CN114399535B
Status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/215Motion-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of behavior recognition, and in particular discloses a multi-person behavior recognition device and method based on an artificial intelligence algorithm. The method comprises: acquiring a region image containing a heat source layer, and determining a motion region and a reference region in the region image according to the heat source layer; calculating the region range of each motion region, and determining independent regions and aggregate regions according to the region range; segmenting each aggregate region to obtain sub-regions; marking feature points according to the independent regions and sub-regions; and extracting motion tracks based on the marked feature points and determining the risk value of each feature point according to its motion track. The technical scheme performs region recognition on the region images, determines the feature points of each region, derives motion tracks from the feature points, and determines behavior risk values from the tracks, thereby extending the reach of existing recognition techniques, particularly to images in which multiple people aggregate.

Description

Multi-behavior recognition device and method based on artificial intelligence algorithm
Technical Field
The invention relates to the technical field of behavior recognition, in particular to a multi-behavior recognition device and method based on an artificial intelligence algorithm.
Background
With the development of computer technology, computer-based determination of human behavior has been widely applied, for example in intelligent video surveillance, patient monitoring systems and smart homes; how a computer can accurately determine human behavior has therefore become a popular research topic.
Existing multi-person behavior recognition methods mainly recognize individual human-body regions; when people aggregate within a region, these methods are prone to error. Solving this problem would improve recognition capability.
Disclosure of Invention
The invention aims to provide a multi-behavior recognition device and method based on an artificial intelligence algorithm, so as to solve the problems in the background technology.
In order to achieve the above purpose, the present invention provides the following technical solutions:
A multi-person behavior recognition device based on an artificial intelligence algorithm, the device comprising:
a region determining module, used for acquiring a region image containing a heat source layer and determining a motion region and a reference region in the region image according to the heat source layer; wherein the reference area is the mapping of the reference heat source of the area in the area image; the regional image takes a time item as an index;
The range detection module is used for calculating the region range of the motion region and determining an independent region and a collection region according to the region range;
The region segmentation module is used for carrying out content recognition on the collection region and segmenting the collection region according to a content recognition result to obtain a sub-region;
The characteristic marking module is used for determining characteristic points according to the independent areas and the sub-areas, obtaining the position information of the characteristic points, determining distribution information according to the position information, and marking the characteristic points according to the distribution information;
the track determining module is used for extracting the region images of the preset time period based on the marked feature points, determining the motion track of the feature points according to the region images of different moments, and determining the risk value of the feature points according to the motion track.
As a further scheme of the invention: the range detection module includes:
A total number calculation unit, configured to determine a contour curve in the heat source layer according to a preset heat value, and calculate a total number of pixel points in the contour curve;
The first marking unit is used for comparing the total number of the pixel points with a preset total number threshold value, and marking the motion area as an independent area when the total number of the pixel points is in a preset total number range;
and the second marking unit is used for marking the motion area as an aggregation area when the total number of the pixel points exceeds a preset total number range.
As a further scheme of the invention: the region segmentation module comprises:
The contour recognition unit is used for carrying out contour recognition on the aggregate region according to a preset tolerance and determining a target region according to a contour recognition result;
The assignment unit is used for determining the center point of the target area, counting the color values in the target area, calculating the color value mean value, and assigning values to the center point according to the color value mean value;
the central dot matrix generating unit is used for counting the assigned central dots and generating a central dot matrix which has a mapping relation with the aggregation area;
The processing execution unit is used for determining an ear area in the central lattice according to a preset feature framework, and segmenting the collection area according to the ear area to obtain a sub-area.
As a further scheme of the invention: the process execution unit includes:
the content recognition subunit is used for carrying out content recognition on the ear area and determining the ear outline;
The position determining subunit is used for determining the position of an image acquisition end according to the position of the reference area and determining orientation information according to the ear contour and the position of the acquisition end;
and the segmentation subunit is used for segmenting the collection area according to the orientation information and the contour recognition result.
As a further scheme of the invention: the feature labeling module comprises:
The width generation unit is used for sequentially reading the maximum pixel number of the independent area and the sub-area in the preset direction to serve as the width;
the theoretical point determining unit is used for obtaining the total number of the pixel points of the independent area and the sub-area and calculating the theoretical points according to the total number of the pixel points and the width;
The detection unit is used for detecting a pixel point by taking the theoretical point as a center according to a preset incremental detection radius, and taking the pixel point as a characteristic point when the pixel point is detected;
The array generating unit is used for generating a coordinate system according to the reference area, acquiring the position information of each characteristic point based on the coordinate system and generating a position lattice; calculating the position difference value of adjacent feature points in the direction of a coordinate system, and generating a difference value array which takes a position lattice as a mapping relation; wherein the difference value array is a two-dimensional array;
and the marking unit is used for inputting the difference value array into a trained analysis model to obtain discrete values of the characteristic points, and marking the characteristic points in the position lattice according to the discrete values.
As a further scheme of the invention: the track determination module includes:
The position extraction unit is used for extracting the region images of the preset time period based on the marked feature points and extracting the positions of the feature points in the region images at different moments;
the curve inserting unit is used for inserting the positions of the characteristic points in the area images at different moments into a preset background image and generating a motion curve in the background image;
And the inflection point identification unit is used for carrying out inflection point identification on the motion curve and determining the risk value of the feature point according to an inflection point identification result.
As a further scheme of the invention: the inflection point identifying unit includes:
The sampling point determining subunit is used for sequentially determining sampling points on the motion curve according to a preset detection step length;
The curvature calculating subunit is used for calculating curve curvature in a preset detection radius by taking the sampling point as a center;
the comparison subunit is used for comparing the curve curvature with a preset curvature threshold, taking the sampling point as an inflection point when the curve curvature reaches the preset curvature threshold, and assigning a value to the inflection point according to the curve curvature;
And the calculating subunit is used for determining the risk value of the feature point according to the assigned inflection point.
The technical scheme of the invention also provides a multi-behavior recognition method based on the artificial intelligence algorithm, which comprises the following steps:
acquiring an area image containing a heat source layer, and determining a motion area and a reference area in the area image according to the heat source layer; wherein the reference area is the mapping of the reference heat source of the area in the area image; the regional image takes a time item as an index;
Calculating the region range of the motion region, and determining an independent region and a collection region according to the region range;
performing content identification on the collection area, and segmenting the collection area according to a content identification result to obtain a subarea;
Determining feature points according to the independent areas and the sub-areas, acquiring position information of the feature points, determining distribution information according to the position information, and marking the feature points according to the distribution information;
and extracting region images of a preset time period based on the marked feature points, determining the motion trail of the feature points according to the region images of different moments, and determining the risk value of the feature points according to the motion trail.
As a further scheme of the invention: the step of calculating the region range of the motion region and determining the independent region and the integrated region according to the region range comprises the following steps:
Determining a contour curve in the heat source layer according to a preset heat value, and calculating the total number of pixel points in the contour curve;
comparing the total number of the pixel points with a preset total number threshold value, and marking the motion area as an independent area when the total number of the pixel points is in a preset total number range;
And when the total number of the pixel points exceeds a preset total number range, marking the motion area as an aggregation area.
As a further scheme of the invention: the step of carrying out content recognition on the collection area, and segmenting the collection area according to a content recognition result to obtain a sub-area comprises the following steps:
performing contour recognition on the set region according to a preset tolerance, and determining a target region according to a contour recognition result;
Determining a center point of the target area, counting color values in the target area, calculating a color value mean value, and assigning a value to the center point according to the color value mean value;
Counting the assigned center points, and generating a center dot matrix which is in a mapping relation with the collection area;
and determining an ear area in the central lattice according to a preset feature framework, and segmenting the collection area according to the ear area to obtain a subarea.
Compared with the prior art, the invention has the beneficial effects that: the technical scheme performs region recognition on the region images, determines the feature points of each region, derives motion tracks from the feature points, and determines behavior risk values from the tracks, thereby extending the reach of existing recognition techniques, particularly to images in which multiple people aggregate.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the following description will briefly introduce the drawings that are needed in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the present invention.
Fig. 1 is a block diagram of the constitution of a multi-person behavior recognition device based on an artificial intelligence algorithm.
Fig. 2 is a block diagram of the structure of a range detection module in the multi-behavior recognition device based on the artificial intelligence algorithm.
Fig. 3 is a block diagram of the structure of the region segmentation module in the multi-behavior recognition device based on the artificial intelligence algorithm.
Fig. 4 is a block diagram of the composition and structure of a feature marking module in the multi-behavior recognition device based on the artificial intelligence algorithm.
Fig. 5 is a block diagram of the composition and structure of a track determining module in the multi-behavior recognition device based on the artificial intelligence algorithm.
FIG. 6 is a flow chart of a multi-person behavior recognition method based on an artificial intelligence algorithm.
Detailed Description
In order to make the technical problems, technical schemes and beneficial effects to be solved more clear, the invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Example 1
Fig. 1 shows a block diagram of a multi-person behavior recognition device based on an artificial intelligence algorithm. In an embodiment of the present invention, the device 10 includes:
A region determining module 11, configured to acquire a region image containing a heat source layer, and determine a motion region and a reference region in the region image according to the heat source layer; wherein the reference area is the mapping of the reference heat source of the area in the area image; the regional image takes a time item as an index;
A range detection module 12, configured to calculate a region range of the motion region, and determine an independent region and a collective region according to the region range;
The region segmentation module 13 is used for carrying out content recognition on the collection region, and segmenting the collection region according to a content recognition result to obtain a sub-region;
A feature marking module 14, configured to determine feature points according to the independent areas and the sub-areas, obtain location information of the feature points, determine distribution information according to the location information, and mark feature points according to the distribution information;
the track determining module 15 is configured to extract a region image of a preset time period based on the marked feature points, determine a motion track of the feature points according to the region images at different moments, and determine a risk value of the feature points according to the motion track.
The purpose of the region determining module 11 is to acquire a region image containing a heat source layer. This may be done by two separate cameras whose images are then fused, or by a single camera with two modes, one capturing temperature information and the other capturing the region image. Since the behavior recognition of the present invention depends on a time-related quantity, a plurality of time-ordered region images is usually required to determine one behavior; a time item is therefore attached to each region image.
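As an illustration of the time-indexed region image described above, the following is a minimal sketch; the record type `RegionImage` and its field names are assumptions for illustration, not part of the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical sketch: a region image pairing a visible frame with its
# heat-source layer, indexed by a time item as the modules expect.
@dataclass
class RegionImage:
    timestamp: float                            # the time item used as index
    pixels: List[List[Tuple[int, int, int]]]    # visible RGB rows
    heat_layer: List[List[float]]               # per-pixel heat values

def sort_by_time(frames):
    """Order frames by their time item so behaviors can be followed over time."""
    return sorted(frames, key=lambda f: f.timestamp)

frames = [RegionImage(2.0, [], []), RegionImage(1.0, [], [])]
ordered = sort_by_time(frames)
```

A fused thermal-plus-visible frame from either acquisition setup would populate both fields of one record.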
The range detection module 12 and the region segmentation module 13 recognize the behavior regions in the region image. These regions are mainly human-body contours, either individual complete contours or contours aggregated together, and the two are recognized differently: individual complete contours are compared and recognized directly, whereas aggregated contours must first be segmented and then recognized.
The feature marking module 14 identifies problem contours, judging whether each contour is a risk contour; once a risk contour is extracted, its motion track is determined according to the time items of the region images, and the behavior is thereby recognized.
Fig. 2 is a block diagram of the structure of a range detection module in the multi-behavior recognition device based on the artificial intelligence algorithm, where the range detection module 12 includes:
a total number calculating unit 121, configured to determine a contour curve in the heat source layer according to a preset heat value, and calculate a total number of pixels in the contour curve;
A first marking unit 122, configured to compare the total number of pixels with a preset total number threshold, and mark the motion area as an independent area when the total number of pixels is within a preset total number range;
And a second marking unit 123, configured to mark the motion area as a collection area when the total number of pixels exceeds a preset total number range.
The range detection module 12 is further defined above: the number of pixels within the determined contour curve is counted; if this number is too large, the region is an aggregate region, and if it is too small, the contour is likely not a human body. The two sides of the comparison are therefore the total number of pixels and the preset total-number range.
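The threshold comparison just described can be sketched as follows; the function name, the heat threshold, and the three-way labels are assumptions for illustration, mirroring the total-number-range logic rather than reproducing the patent's exact implementation.

```python
# Illustrative sketch: classify a motion region by counting heat-layer
# pixels that reach the preset heat value, then comparing the total
# against the preset total-number range.
def classify_region(heat_layer, heat_threshold, total_range):
    total = sum(1 for row in heat_layer for v in row if v >= heat_threshold)
    lo, hi = total_range
    if total < lo:
        return total, "not_human"    # too few pixels: likely not a body
    if total <= hi:
        return total, "independent"  # within range: one complete contour
    return total, "aggregate"        # too many pixels: people clustered

layer = [[0.1, 0.9, 0.8], [0.7, 0.95, 0.2]]
total, label = classify_region(layer, heat_threshold=0.6, total_range=(3, 10))
```

In a real pipeline the count would be taken inside the contour curve rather than over the whole layer.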
Fig. 3 is a block diagram of the structure of a region segmentation module in the multi-behavior recognition device based on the artificial intelligence algorithm, where the region segmentation module 13 includes:
the contour recognition unit 131 is configured to perform contour recognition on the aggregate area according to a preset tolerance, and determine a target area according to a contour recognition result;
The assignment unit 132 is configured to determine a center point of the target area, count color values in the target area, calculate a color value average value, and assign a value to the center point according to the color value average value;
A central lattice generating unit 133, configured to count the assigned central points, and generate a central lattice with a mapping relationship with the aggregation area;
the processing execution unit 134 is configured to determine an ear area in the central lattice according to a preset feature architecture, and segment the collection area according to the ear area to obtain a sub-area.
The purpose of the region segmentation module 13 is to split the aggregate region into recognizable sub-regions by performing contour recognition on it. It should be noted that the tolerance is generally set relatively large, for example fifty, because only the large contours within the aggregate region need to be recognized. An aggregate region arises when a group of people gathers together; the division between bodies is then obvious, and regions such as hair and clothing have distinctive color values and are easy to recognize, so by recognizing these contents the aggregate region is converted into a central dot matrix. It should also be noted that the feature architecture may, for example, treat a contour adjacent to a hair area as an ear area.
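The assignment step of the region segmentation module (compute a target region's center point and assign it the mean color value) can be sketched as below; the data layout, a list of coordinate/color pairs per target region, is an assumption for illustration.

```python
# Hedged sketch of the assignment unit: for one target region, determine
# its center point and assign it the mean of the region's color values,
# yielding one entry of the central dot matrix.
def center_dot(region):
    """region: list of ((x, y), color_value) pairs for one target area."""
    n = len(region)
    cx = sum(x for (x, _), _ in region) / n
    cy = sum(y for (_, y), _ in region) / n
    mean_color = sum(c for _, c in region) / n
    return (cx, cy), mean_color

region = [((0, 0), 10), ((2, 0), 20), ((0, 2), 30), ((2, 2), 40)]
center, color = center_dot(region)
```

Collecting one such assigned center per target region produces the central dot matrix that maps back onto the aggregate region.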
Further, the processing execution unit includes:
the content recognition subunit is used for carrying out content recognition on the ear area and determining the ear outline;
The position determining subunit is used for determining the position of an image acquisition end according to the position of the reference area and determining orientation information according to the ear contour and the position of the acquisition end;
and the segmentation subunit is used for segmenting the collection area according to the orientation information and the contour recognition result.
With the ear area determined, the position information of the image acquisition end is obtained and the orientation information is then determined. The ear contour alone can only indicate whether a person faces left or right relative to the image acquisition position; the absolute compass direction (east, south, west or north) is determined by combining this with the position of the acquisition end.
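One way the left/right cue could be combined with the acquisition end's bearing is sketched below; this is speculative, and the 90-degree offset, function name and parameters are all assumptions, the patent only states that the two pieces of information are combined.

```python
# Speculative sketch: the ear contour gives only a left/right facing
# relative to the camera; adding the camera's compass bearing resolves
# an absolute orientation (0 = north, 90 = east, and so on).
def absolute_orientation(camera_bearing_deg, ear_side):
    """ear_side: 'left' or 'right' as seen from the image acquisition end."""
    offset = -90 if ear_side == "left" else 90
    return (camera_bearing_deg + offset) % 360

heading = absolute_orientation(camera_bearing_deg=0, ear_side="right")
```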
Fig. 4 is a block diagram of the structure of a feature labeling module in the multi-behavior recognition device based on the artificial intelligence algorithm, where the feature labeling module 14 includes:
A width generating unit 141, configured to sequentially read the maximum pixel number of the independent area and the sub-area in the preset direction as a width;
a theoretical point determining unit 142, configured to obtain the total number of pixels in the independent area and the sub-area, and calculate a theoretical point according to the total number of pixels and the width;
a detecting unit 143, configured to detect a pixel point with the theoretical point as a center according to a preset incremental detection radius, and when the pixel point is detected, use the pixel point as a feature point;
an array generating unit 144, configured to generate a coordinate system according to the reference area, obtain location information of each feature point based on the coordinate system, and generate a location lattice; calculating the position difference value of adjacent feature points in the direction of a coordinate system, and generating a difference value array which takes a position lattice as a mapping relation; wherein the difference value array is a two-dimensional array;
and the marking unit 145 is configured to input the difference value array into a trained analysis model, obtain discrete values of each feature point, and mark the feature points in the position lattice according to the discrete values.
The purpose of the feature marking module 14 is to mark important information in the independent regions and sub-regions, which, as described above, are human-body regions. The theoretical point of each region is determined by the simple mathematical principle of the center of gravity; the pixel closest to the theoretical point is then found and taken as the feature point, and the feature points are finally marked according to their discrete values. It is worth noting that if one region image contains both independent regions and aggregate regions, the feature points of the independent regions are the more likely to be marked.
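The theoretical-point and feature-point steps above can be sketched as follows; searching outward with an incremental detection radius is equivalent, for this illustration, to picking the region pixel nearest the center of gravity, and all names here are assumptions.

```python
# Illustrative sketch: the theoretical point is the center of gravity of a
# region's pixels; the feature point is the region pixel reached by
# searching outward from it (here reduced to a nearest-pixel search).
def feature_point(pixels):
    n = len(pixels)
    tx = sum(x for x, _ in pixels) / n
    ty = sum(y for _, y in pixels) / n
    return min(pixels, key=lambda p: (p[0] - tx) ** 2 + (p[1] - ty) ** 2)

# a ring-like region whose center of gravity falls off any pixel
ring = [(0, 0), (4, 0), (0, 4), (4, 4), (2, 1)]
fp = feature_point(ring)
```

The nearest-pixel step matters because the center of gravity of a human contour often lands on a background pixel, so a real region pixel must be substituted.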
Fig. 5 is a block diagram of the composition and structure of a track determining module in the multi-behavior recognition device based on the artificial intelligence algorithm, where the track determining module 15 includes:
a position extraction unit 151, configured to extract the region images of a preset time period based on the marked feature points, and to extract the positions of the feature points in the region images at different moments;
a curve inserting unit 152, configured to insert the positions of the feature points in the area images at different times into a preset background image, and generate a motion curve in the background image;
and the inflection point identifying unit 153 is configured to identify an inflection point of the motion curve, and determine a risk value of the feature point according to an inflection point identification result.
Specifically, the inflection point identifying unit includes:
The sampling point determining subunit is used for sequentially determining sampling points on the motion curve according to a preset detection step length;
The curvature calculating subunit is used for calculating curve curvature in a preset detection radius by taking the sampling point as a center;
the comparison subunit is used for comparing the curve curvature with a preset curvature threshold, taking the sampling point as an inflection point when the curve curvature reaches the preset curvature threshold, and assigning a value to the inflection point according to the curve curvature;
And the calculating subunit is used for determining the risk value of the feature point according to the assigned inflection point.
The track determining module 15 is specifically defined above; its purpose is to determine the relationship between a feature point and time. The feature points in each region image are in fact different. Taking one region image as an example: if a feature point is detected in it, the region images of the preceding or following period can be extracted according to that feature point, the track of the feature point determined, and a track curve generated; inflection-point recognition on that curve then yields the risk value of the feature point. The inflection-point recognition process obtains the points whose curvature reaches a certain degree, together with their curvature values; to compute a risk value, the total curvature value can be accumulated, so that a relationship between the number of inflection points and their curvature values can be fitted.
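The sampling, curvature and accumulation steps above can be sketched as follows; estimating curvature from three consecutive samples via the circumcircle radius, and summing the above-threshold curvatures as the risk value, are assumptions for illustration — the patent only states that risk is determined from the assigned inflection points.

```python
import math

# Hedged sketch: sample the motion curve at a preset step, estimate the
# curvature at each sample as 1/R of the circle through three consecutive
# samples, keep samples whose curvature reaches the preset threshold as
# inflection points, and accumulate their curvatures as the risk value.
def risk_value(points, step, threshold):
    inflections = []
    for i in range(step, len(points) - step, step):
        a, b, c = points[i - step], points[i], points[i + step]
        # twice the signed triangle area; zero means collinear (curvature 0)
        area2 = (b[0]-a[0]) * (c[1]-a[1]) - (b[1]-a[1]) * (c[0]-a[0])
        if area2 == 0:
            continue
        ab, bc, ca = math.dist(a, b), math.dist(b, c), math.dist(c, a)
        k = 2 * abs(area2) / (ab * bc * ca)   # curvature = 1/R
        if k >= threshold:
            inflections.append((i, k))        # assigned inflection point
    return sum(k for _, k in inflections), inflections

straight = [(float(x), 0.0) for x in range(7)]
risk, flags = risk_value(straight, step=1, threshold=0.1)
```

A straight-line track yields no inflection points and zero risk, while a sharp turn in the track produces a high-curvature inflection and a correspondingly larger risk value.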
It should be noted that performing the above recognition process on every region image could compute the same motion track repeatedly. In practice, region images are extracted at certain time intervals and feature-point recognition is performed on them; according to the recognition result, the region images of a period are extracted and recognized, a track is generated, and the recognition steps are then repeated with a new region image as the base.
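The interval-sampling schedule just described can be sketched as below; the scheduler, its parameters, and the stand-in `detect` callback are assumptions for illustration.

```python
# Speculative sketch of the sampling schedule: region images are inspected
# only every `interval` frames; when a feature point is found, the window
# of frames around it is handed to track extraction, and the scan resumes
# from a new base image past that window, avoiding repeated tracks.
def schedule(frames, interval, window, detect):
    tracks = []
    i = 0
    while i < len(frames):
        if detect(frames[i]):
            lo = max(0, i - window)
            hi = min(len(frames), i + window + 1)
            tracks.append(list(range(lo, hi)))  # frame indices to track over
            i = hi                              # new base image
        else:
            i += interval
    return tracks

# frames are stand-ins; a frame "contains" a feature point when truthy
frames = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
tracks = schedule(frames, interval=1, window=2, detect=bool)
```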
Example 2
Fig. 6 is a flow chart of a multi-behavior recognition method based on an artificial intelligence algorithm, and in an embodiment of the invention, the method includes:
Step S100: acquiring an area image containing a heat source layer, and determining a motion area and a reference area in the area image according to the heat source layer; wherein the reference area is the mapping of the reference heat source of the area in the area image; the regional image takes a time item as an index;
step S200: calculating the region range of the motion region, and determining an independent region and a collection region according to the region range;
Step S300: performing content identification on the collection area, and segmenting the collection area according to a content identification result to obtain a subarea;
Step S400: determining feature points according to the independent areas and the sub-areas, acquiring position information of the feature points, determining distribution information according to the position information, and marking the feature points according to the distribution information;
Step S500: and extracting region images of a preset time period based on the marked feature points, determining the motion trail of the feature points according to the region images of different moments, and determining the risk value of the feature points according to the motion trail.
Further, the step of calculating the area range of the motion area and determining independent areas and aggregation areas according to the area range includes:
determining a contour curve in the heat source layer according to a preset heat value, and calculating the total number of pixel points within the contour curve;
comparing the total number of pixel points with a preset total number range, and marking the motion area as an independent area when the total number falls within the preset range;
marking the motion area as an aggregation area when the total number of pixel points exceeds the preset range.
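A minimal sketch of this classification rule, assuming an illustrative preset total-number range (the patent does not disclose concrete endpoints):

```python
def classify_region(pixel_total, low=40, high=400):
    """Classify a motion region by the total number of pixel points
    inside its contour curve. The range endpoints are illustrative
    placeholders, not values from the patent."""
    if low <= pixel_total <= high:
        return "independent"   # pixel total within the preset range
    if pixel_total > high:
        return "aggregation"   # pixel total exceeds the preset range
    return None                # below the range: case not addressed here
```

A region whose contour encloses, say, 100 pixel points would be marked independent, while one enclosing 1000 would be marked as an aggregation of several overlapping heat sources.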
Specifically, performing content recognition on the aggregation area and segmenting the aggregation area according to the content recognition result to obtain sub-areas includes:
performing contour recognition on the aggregation area according to a preset tolerance, and determining target areas according to the contour recognition result;
determining the center point of each target area, counting the color values within the target area, calculating the color value mean, and assigning the mean to the center point;
counting the assigned center points, and generating a center dot matrix that is in a mapping relation with the aggregation area;
determining an ear area in the center dot matrix according to a preset feature framework, and segmenting the aggregation area according to the ear area to obtain sub-areas.
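The center-point assignment that produces the dot matrix can be sketched as follows, assuming each target area is given as a list of (x, y, color value) pixels; this representation is an assumption made for illustration, not one stated in the patent:

```python
def center_lattice(target_regions):
    """For each target area, determine its center point, compute the
    mean of the color values in the area, and assign that mean to the
    center point, yielding one (cx, cy, mean value) entry per area."""
    lattice = []
    for pixels in target_regions:
        xs = [p[0] for p in pixels]
        ys = [p[1] for p in pixels]
        values = [p[2] for p in pixels]
        cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)  # center point
        mean_value = sum(values) / len(values)         # color value mean
        lattice.append((cx, cy, mean_value))
    return lattice
```

The resulting lattice keeps a one-to-one mapping back to the target areas it was built from, which is what the subsequent ear-area search in the feature framework operates on.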
The functions of the multi-behavior recognition method based on the artificial intelligence algorithm are all performed by computer equipment. The computer equipment comprises one or more processors and one or more memories; at least one piece of program code is stored in the one or more memories, and the program code is loaded and executed by the one or more processors to realize the functions of the multi-behavior recognition method based on the artificial intelligence algorithm.
The processor fetches instructions from the memory one by one, decodes them, and performs the corresponding operations according to the instruction requirements, generating a series of control commands that make all parts of the computer act automatically, continuously, and cooperatively as an organic whole, thereby realizing the input of programs and data, the execution of operations, and the output of results; arithmetic and logic operations arising in this process are performed by the arithmetic unit. The memory comprises a read-only memory (Read-Only Memory, ROM) for storing a computer program, and a protection device is arranged outside the memory.
For example, the computer program may be divided into one or more modules, which are stored in the memory and executed by the processor to implement the present invention. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, the instruction segments describing the execution of the computer program in the terminal device.
It will be appreciated by those skilled in the art that the foregoing description of the service device is merely an example and is not limiting; the device may include more or fewer components than described, combine certain components, or use different components, and may, for example, include input-output devices, network access devices, buses, and the like.
The processor may be a central processing unit (Central Processing Unit, CPU), another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or the like. The general purpose processor may be a microprocessor, or the processor may be any conventional processor; it is the control center of the terminal device described above and connects the various parts of the entire user terminal using various interfaces and lines.
The memory may be used for storing computer programs and/or modules, and the processor implements the various functions of the terminal device by running or executing the computer programs and/or modules stored in the memory and invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required for at least one function (such as an information acquisition template display function, a product information release function, etc.); the data storage area may store data created according to the use of the system (e.g., product information acquisition templates corresponding to different product types, product information to be released by different product providers, etc.). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash card (Flash Card), at least one disk storage device, a flash memory device, or other non-volatile solid-state storage device.
The modules/units integrated in the terminal device may, if implemented in the form of software functional units and sold or used as separate products, be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the modules/units in the systems of the above-described embodiments by instructing the relevant hardware through a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, implements the functions of the respective system embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing description covers only the preferred embodiments of the present invention and is not intended to limit the scope of the invention; any equivalent structural or process transformation made using the contents of this specification, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (10)

1. A multi-behavior recognition device based on an artificial intelligence algorithm, the device comprising:
The device comprises a region determining module, a heat source layer processing module and a control module, wherein the region determining module is used for acquiring a region image containing the heat source layer and determining a motion region and a reference region in the region image according to the heat source layer; wherein the reference area is the mapping of the reference heat source of the area in the area image; the regional image takes a time item as an index;
The range detection module is used for calculating the region range of the motion region and determining an independent region and a collection region according to the region range;
The region segmentation module is used for carrying out content recognition on the collection region and segmenting the collection region according to a content recognition result to obtain a sub-region;
The characteristic marking module is used for determining characteristic points according to the independent areas and the sub-areas, obtaining the position information of the characteristic points, determining distribution information according to the position information, and marking the characteristic points according to the distribution information;
The track determining module is used for extracting the region images of the preset time period based on the marked feature points, determining the motion track of the feature points according to the region images of different moments, and determining the risk value of the feature points according to the motion track.
2. The multi-behavior recognition device based on an artificial intelligence algorithm of claim 1, wherein the range detection module includes:
A total number calculation unit, configured to determine a contour curve in the heat source layer according to a preset heat value, and calculate a total number of pixel points in the contour curve;
The first marking unit is used for comparing the total number of the pixel points with a preset total number threshold value, and marking the motion area as an independent area when the total number of the pixel points is in a preset total number range;
and the second marking unit is used for marking the motion area as an aggregation area when the total number of the pixel points exceeds a preset total number range.
3. The multi-behavior recognition device based on an artificial intelligence algorithm of claim 1, wherein the region segmentation module includes:
The contour recognition unit is used for carrying out contour recognition on the aggregate region according to a preset tolerance and determining a target region according to a contour recognition result;
The assignment unit is used for determining the center point of the target area, counting the color values in the target area, calculating the color value mean value, and assigning values to the center point according to the color value mean value;
the central dot matrix generating unit is used for counting the assigned central dots and generating a central dot matrix which has a mapping relation with the aggregation area;
The processing execution unit is used for determining an ear area in the central lattice according to a preset feature framework, and segmenting the collection area according to the ear area to obtain a sub-area.
4. The multi-behavior recognition device based on an artificial intelligence algorithm of claim 3, wherein the processing execution unit includes:
the content recognition subunit is used for carrying out content recognition on the ear area and determining the ear outline;
The position determining subunit is used for determining the position of an image acquisition end according to the position of the reference area and determining orientation information according to the ear contour and the position of the acquisition end;
and the segmentation subunit is used for segmenting the collection area according to the orientation information and the contour recognition result.
5. The multi-behavior recognition device based on an artificial intelligence algorithm of claim 1, wherein the feature marking module comprises:
The width generation unit is used for sequentially reading the maximum pixel number of the independent area and the sub-area in the preset direction to serve as the width;
the theoretical point determining unit is used for obtaining the total number of the pixel points of the independent area and the sub-area and calculating the theoretical points according to the total number of the pixel points and the width;
The detection unit is used for detecting a pixel point by taking the theoretical point as a center according to a preset incremental detection radius, and taking the pixel point as a characteristic point when the pixel point is detected;
The array generating unit is used for generating a coordinate system according to the reference area, acquiring the position information of each characteristic point based on the coordinate system and generating a position lattice; calculating the position difference value of adjacent feature points in the direction of a coordinate system, and generating a difference value array which takes a position lattice as a mapping relation; wherein the difference value array is a two-dimensional array;
and the marking unit is used for inputting the difference value array into a trained analysis model to obtain discrete values of the characteristic points, and marking the characteristic points in the position lattice according to the discrete values.
6. The multi-behavior recognition device based on an artificial intelligence algorithm of claim 1, wherein the trajectory determination module includes:
The position extraction unit is used for extracting the region images of the preset time period based on the marked feature points and extracting the positions of the feature points in the region images at different moments;
the curve inserting unit is used for inserting the positions of the characteristic points in the area images at different moments into a preset background image and generating a motion curve in the background image;
And the inflection point identification unit is used for carrying out inflection point identification on the motion curve and determining the risk value of the feature point according to an inflection point identification result.
7. The multi-behavior recognition device based on an artificial intelligence algorithm of claim 6, wherein the inflection point identification unit includes:
The sampling point determining subunit is used for sequentially determining sampling points on the motion curve according to a preset detection step length;
The curvature calculating subunit is used for calculating curve curvature in a preset detection radius by taking the sampling point as a center;
the comparison subunit is used for comparing the curve curvature with a preset curvature threshold, taking the sampling point as an inflection point when the curve curvature reaches the preset curvature threshold, and assigning a value to the inflection point according to the curve curvature;
And the calculating subunit is used for determining the risk value of the feature point according to the assigned inflection point.
8. A multi-behavior recognition method based on an artificial intelligence algorithm, the method comprising:
acquiring an area image containing a heat source layer, and determining a motion area and a reference area in the area image according to the heat source layer; wherein the reference area is the mapping of the reference heat source of the area in the area image; the regional image takes a time item as an index;
Calculating the region range of the motion region, and determining an independent region and a collection region according to the region range;
performing content identification on the collection area, and segmenting the collection area according to a content identification result to obtain a subarea;
Determining feature points according to the independent areas and the sub-areas, acquiring position information of the feature points, determining distribution information according to the position information, and marking the feature points according to the distribution information;
and extracting region images of a preset time period based on the marked feature points, determining the motion trail of the feature points according to the region images of different moments, and determining the risk value of the feature points according to the motion trail.
9. The multi-behavior recognition method based on an artificial intelligence algorithm according to claim 8, wherein calculating the region range of the motion region and determining the independent region and the aggregation region according to the region range comprises:
Determining a contour curve in the heat source layer according to a preset heat value, and calculating the total number of pixel points in the contour curve;
comparing the total number of the pixel points with a preset total number threshold value, and marking the motion area as an independent area when the total number of the pixel points is in a preset total number range;
And when the total number of the pixel points exceeds a preset total number range, marking the motion area as an aggregation area.
10. The multi-behavior recognition method based on an artificial intelligence algorithm according to claim 9, wherein performing content recognition on the aggregation area and segmenting the aggregation area according to the content recognition result to obtain the sub-areas comprises:
performing contour recognition on the set region according to a preset tolerance, and determining a target region according to a contour recognition result;
Determining a center point of the target area, counting color values in the target area, calculating a color value mean value, and assigning a value to the center point according to the color value mean value;
Counting the assigned center points, and generating a center dot matrix which is in a mapping relation with the collection area;
and determining an ear area in the central lattice according to a preset feature framework, and segmenting the collection area according to the ear area to obtain a subarea.
CN202210050131.5A 2022-01-17 2022-01-17 Multi-behavior recognition device and method based on artificial intelligence algorithm Active CN114399535B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210050131.5A CN114399535B (en) 2022-01-17 2022-01-17 Multi-behavior recognition device and method based on artificial intelligence algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210050131.5A CN114399535B (en) 2022-01-17 2022-01-17 Multi-behavior recognition device and method based on artificial intelligence algorithm

Publications (2)

Publication Number Publication Date
CN114399535A CN114399535A (en) 2022-04-26
CN114399535B true CN114399535B (en) 2024-07-23

Family

ID=81230268

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210050131.5A Active CN114399535B (en) 2022-01-17 2022-01-17 Multi-behavior recognition device and method based on artificial intelligence algorithm

Country Status (1)

Country Link
CN (1) CN114399535B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103854027A (en) * 2013-10-23 2014-06-11 北京邮电大学 Crowd behavior identification method
CN106127814A (en) * 2016-07-18 2016-11-16 四川君逸数码科技股份有限公司 A kind of wisdom gold eyeball identification gathering of people is fought alarm method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190371144A1 (en) * 2018-05-31 2019-12-05 Henry Shu Method and system for object motion and activity detection
CN111860383B (en) * 2020-07-27 2023-11-10 苏州市职业大学 Group abnormal behavior identification method, device, equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103854027A (en) * 2013-10-23 2014-06-11 北京邮电大学 Crowd behavior identification method
CN106127814A (en) * 2016-07-18 2016-11-16 四川君逸数码科技股份有限公司 A kind of wisdom gold eyeball identification gathering of people is fought alarm method and device

Also Published As

Publication number Publication date
CN114399535A (en) 2022-04-26

Similar Documents

Publication Publication Date Title
WO2021047232A1 (en) Interaction behavior recognition method, apparatus, computer device, and storage medium
CN110443210B (en) Pedestrian tracking method and device and terminal
CN110197146B (en) Face image analysis method based on deep learning, electronic device and storage medium
US8379920B2 (en) Real-time clothing recognition in surveillance videos
CN110781859B (en) Image annotation method and device, computer equipment and storage medium
CN110163864B (en) Image segmentation method and device, computer equipment and storage medium
CN108960412B (en) Image recognition method, device and computer readable storage medium
CN110837580A (en) Pedestrian picture marking method and device, storage medium and intelligent device
CN112801236B (en) Image recognition model migration method, device, equipment and storage medium
CN113095441A (en) Pig herd bundling detection method, device, equipment and readable storage medium
CN114758249A (en) Target object monitoring method, device, equipment and medium based on field night environment
CN111523387A (en) Method and device for detecting hand key points and computer device
CN111932545A (en) Image processing method, target counting method and related device thereof
WO2021169642A1 (en) Video-based eyeball turning determination method and system
Yang et al. Fusion of retinaface and improved facenet for individual cow identification in natural scenes
CN114399535B (en) Multi-behavior recognition device and method based on artificial intelligence algorithm
CN115905733B (en) Mask wearing abnormality detection and track tracking method based on machine vision
CN113496162B (en) Parking specification identification method, device, computer equipment and storage medium
CN111325106A (en) Method and device for generating training data
CN112232272B (en) Pedestrian recognition method by fusing laser and visual image sensor
CN105095834A (en) Method and device for identifying mark text of sports participant
CN113887384A (en) Pedestrian trajectory analysis method, device, equipment and medium based on multi-trajectory fusion
CN112203053A (en) Intelligent supervision method and system for subway constructor behaviors
CN113850207B (en) Micro-expression classification method and device based on artificial intelligence, electronic equipment and medium
CN113780116B (en) Invoice classification method, invoice classification device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant