
CN117889858B - Positioning method, device, system and medium for multiple fire targets - Google Patents


Info

Publication number: CN117889858B
Application number: CN202311863420.8A
Authority: CN (China)
Prior art keywords: fire, target, list, information, targets
Legal status: Active (the listed status is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN117889858A
Inventors: 罗除, 孙义哲, 张翊晨
Original and Current Assignee: Greater Bay Area University In Preparation
Application filed by Greater Bay Area University In Preparation
Priority to: CN202311863420.8A
Published as application CN117889858A; granted and published as CN117889858B

Classifications

    • G06T 7/70 — Image analysis; determining position or orientation of objects or cameras
    • G01C 21/005 — Navigation; correlation of navigation data from several sources, e.g. map or contour matching
    • G01C 21/20 — Navigation; instruments for performing navigational calculations
    • G01S 17/06 — Lidar systems; systems determining position data of a target
    • G01S 17/86 — Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G06T 7/60 — Image analysis; analysis of geometric attributes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Fire Alarms (AREA)

Abstract

The invention discloses a positioning method, device, system and medium for multiple fire targets, applied in the technical field of fire positioning. The method comprises the following steps: when a plurality of devices are installed in, or travel into, a place where a fire has occurred, each device locates itself, obtains its position information, and sends that information to the other devices; each device then observes the fire targets and the site environment according to its position information, obtains a corresponding observation data list, and sends the list to the other devices; among the devices, those whose hardware computing power is greater than a computing-power threshold determine the azimuth information and shape information of the fire targets from their own observation data lists and the lists sent by the other devices. The invention uses at least three devices with lidar sensing and machine vision functions to quickly and accurately locate multiple fire targets within a given range and identify their shapes, effectively improving fire-positioning precision and efficiency, with high availability.

Description

Positioning method, device, system and medium for multiple fire targets
Technical Field
The invention relates to the technical field of fire positioning, and in particular to a positioning method, device, system and medium for multiple fire targets.
Background
In industries such as logistics and manufacturing, monitoring a fire scene is essential to emergency response and fire-fighting work, particularly judging the position and shape of each fire target. Such industries typically operate in large sites where fires tend to be complex, involving multiple fire targets and multiple combustibles. Currently, the related art relies on a single sensing technology on a single device to locate a single fire target. In an actual fire scene, however, a single sensing technology on a single device is easily disturbed by the environment, so the sensing data fluctuates widely and carries errors; moreover, the related art tends to locate only a single fire target and cannot sense multiple fire targets within a given range at the same time. It is therefore difficult to quickly and accurately locate the multiple positions where fire occurs and to identify the shapes of the fire targets.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems existing in the prior art to a certain extent.
Therefore, the invention aims to provide a method, a device, a system and a medium for positioning multiple fire targets.
In order to achieve this technical purpose, the technical scheme adopted by the embodiments of the invention is as follows:
In one aspect, the embodiment of the invention provides a method for positioning multiple fire targets, which comprises the following steps:
When a plurality of devices are installed in, or travel into, a place where a fire has occurred, each device performs positioning processing on itself, obtains its corresponding position information, and sends that information to the other devices;
Each device observes the plurality of fire targets and the site environment according to its corresponding position information, obtains a corresponding observation data list, and sends the list to the other devices; the observation data list comprises a fire target visual information list and a background point cloud list, wherein the fire target visual information list stores the first azimuth angle, second azimuth angle and aspect ratio information of each fire target, and the background point cloud list stores a plurality of detection point clouds and their position information;
Among the plurality of devices, each device whose hardware computing power is greater than a computing-power threshold determines the azimuth information and shape information of the plurality of fire targets according to its own observation data list and the observation data lists sent by the other devices.
In addition, the method for positioning multiple fire targets according to the embodiment of the present invention may further have the following additional technical features:
Further, in an embodiment of the present invention, the observing the environments of the fire targets and sites according to the corresponding location information to obtain a corresponding observed data list includes:
Creating a blank fire target visual information list and a background point cloud list;
Measuring each fire target through a pre-configured machine vision detection module to obtain the first azimuth angle, second azimuth angle and aspect ratio information of each fire target, and storing the information as triples into the fire target visual information list;
The first azimuth angle is a left side edge horizontal azimuth angle, and the second azimuth angle is a right side edge horizontal azimuth angle;
scanning the environment of a place through a pre-configured laser radar module to obtain a plurality of initial point clouds;
According to the first azimuth angle and the second azimuth angle of each fire target and combining the position information of the equipment, processing a plurality of initial point clouds to obtain a plurality of detection point clouds and the position information thereof, and storing the detection point clouds and the position information thereof into the background point cloud list;
And obtaining an observation data list through the fire target visual information list and the background point cloud list.
Further, in an embodiment of the present invention, the processing, according to the first azimuth angle and the second azimuth angle of each fire target and in combination with the location information of the device, the plurality of initial point clouds to obtain a plurality of detection point clouds and location information thereof includes:
Searching a first azimuth angle and a second azimuth angle of each fire target from the fire target visual information list;
Deleting point clouds positioned between a first azimuth angle and a second azimuth angle of each fire target from a plurality of initial point clouds to obtain a plurality of detection point clouds;
and carrying out three-dimensional space coordinate conversion processing on each detection point cloud according to the distance and angle of each detection point cloud relative to the equipment and the position information of the equipment, and obtaining the position information of each detection point cloud in the place.
Further, in one embodiment of the present invention, the device whose hardware computing power is greater than the computing power threshold is defined as a computing device; the device with hardware computing power larger than the computing power threshold value determines azimuth information and shape information of a plurality of fire targets according to the corresponding observation data list and the observation data list sent by other devices, and comprises:
Each operation device selects a preset number of devices as the corresponding devices to be operated, and for the observation data list of all the corresponding devices to be operated, each operation device executes the following operation steps:
creating a blank edge point list, a fire target vertex list and a target list of suspected fire targets;
Processing a background point cloud list and a fire target visual information list of all equipment to be operated to obtain edge point clouds of each fire target, storing the edge point clouds into the edge point list, and numbering the edge point clouds of each fire target in the edge point list;
Screening a plurality of edge point clouds corresponding to a preset distance threshold value from the edge point list to serve as vertexes, and storing the vertexes into the vertex list;
determining a plurality of vertexes and coordinate extremums corresponding to each fire disaster target according to the position information of all vertexes in the vertex list; the coordinate extremum comprises a maximum value and a minimum value of an abscissa, a maximum value and a minimum value of an ordinate and a maximum value and a minimum value of a depth coordinate;
Correcting the coordinate extremum of each fire target to obtain the corrected coordinate extremum of each fire target;
Performing de-duplication treatment on all fire targets according to the corrected coordinate extremum of each fire target to obtain a plurality of reserved fire targets;
Determining azimuth information and shape information of each reserved fire target through a plurality of vertexes and position information thereof corresponding to each reserved fire target, storing the azimuth information and the shape information into the target list, and renumbering each fire target in the target list;
When all the operation devices execute the operation steps on the observation data lists of all the corresponding devices to be operated, each operation device sequentially outputs the corresponding target list and sends the target list to the terminal where the user is located.
Further, in an embodiment of the present invention, the processing the background point cloud list and the fire target visual information list of all the devices to be operated to obtain edge point clouds of each fire target includes:
Adding the equipment number of each equipment to be operated to the information of all detection point clouds of each equipment to be operated, merging the background point cloud lists of all equipment to be operated, and obtaining a merged background point cloud list;
According to the position information of each device to be operated and the fire target visual information list, searching to obtain a first azimuth angle and a second azimuth angle of each fire target;
And screening out detection point clouds positioned between the first azimuth angle and the second azimuth angle of each fire target from the combined background point cloud list to serve as edge point clouds of each fire target.
Further, in an embodiment of the present invention, the correcting the coordinate extremum of each fire target to obtain the corrected coordinate extremum of each fire target includes:
correcting the maximum value of the depth coordinate of each fire target according to the height-width ratio information of each fire target to obtain the maximum value of the corrected depth coordinate of each fire target;
and taking the maximum value and the minimum value of the abscissa, the maximum value and the minimum value of the ordinate, the minimum value of the depth coordinate and the maximum value of the corrected depth coordinate of each fire target as corrected coordinate extremum of each fire target.
Further, in an embodiment of the present invention, the performing a deduplication process on all fire targets according to the corrected coordinate extremum of each fire target to obtain a plurality of reserved fire targets includes:
Determining a three-axis coordinate range of each fire target according to the corrected coordinate extremum of the plurality of fire targets;
Traversing the three-axis coordinate range of each fire target, and randomly deleting one fire target and reserving the other fire target for the two fire targets with the coincidence degree larger than the coincidence threshold value in the three-axis coordinate range, so as to obtain a plurality of reserved fire targets.
In another aspect, an embodiment of the present invention provides a positioning device for multiple fire targets, including:
the positioning module is used for performing positioning processing on the device itself to obtain corresponding position information and to send that information to the other devices;
The observation module is used for observing the environments of a plurality of fire targets and places according to the corresponding position information, obtaining a corresponding observation data list and sending the corresponding observation data list to other equipment; the observation data list comprises a fire target visual information list and a background point cloud list;
And the multi-target processing module is used for determining the azimuth information and the shape information of a plurality of fire targets according to the corresponding observation data list and the observation data list sent by other equipment when the hardware calculation force is larger than the calculation force threshold value.
In yet another aspect, an embodiment of the present invention provides a positioning system for multiple fire targets, including:
At least one processor;
At least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to perform a multi-fire target localization method as previously described.
In yet another aspect, an embodiment of the present invention provides a storage medium in which a processor-executable program is stored, which when executed by a processor, is configured to implement a method for locating a multi-fire target as described above.
The beneficial effects of the invention are as follows: the positioning method, device, system and medium for multiple fire targets use at least three devices with lidar sensing and machine vision functions to locate multiple fire targets within a given range and identify their shapes. This effectively reduces the interference of the fire environment with data sensing and improves the accuracy of the sensed data, which in turn improves the precision and efficiency of fire positioning, with high availability.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
FIG. 1 is a flow chart of a method for locating multiple fire targets provided by the invention;
FIG. 2 is a flow chart of a single device sensing phase provided by the present invention;
FIG. 3 is an exemplary view of a first azimuth angle provided by the present invention;
FIG. 4 is an exemplary view of a second azimuth angle provided by the present invention;
FIG. 5 is a flow chart for determining the shape and orientation of a plurality of fire targets provided by the present invention;
FIG. 6 is a flowchart of the operational steps performed by a single computing device provided by the present invention;
Fig. 7 is a block diagram of a positioning device for multiple fire targets according to the present invention.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The application will be further described with reference to the drawings and specific examples. The described embodiments should not be taken as limitations of the present application, and all other embodiments that would be obvious to one of ordinary skill in the art without making any inventive effort are intended to be within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
In industries such as logistics and manufacturing, monitoring a fire scene is essential to emergency response and fire-fighting work, particularly judging the position and shape of each fire target. Such industries typically operate in large sites where fires tend to be complex, involving multiple fire targets and multiple combustibles. Currently, the related art relies on a single sensing technology on a single device to locate a single fire target.
In an actual fire scene, however, a single sensing technology on a single device is easily disturbed by the environment, so the sensing data fluctuates widely and carries errors; moreover, the related art tends to locate only a single fire target and cannot sense multiple fire targets within a given range at the same time. It is therefore difficult to quickly and accurately locate the multiple positions where fire occurs and to identify the shapes of the fire targets.
Aiming at the problems and defects existing in the related art, the embodiment of the invention provides a method, a device, a system and a medium for positioning multiple fire targets, which aim to quickly and accurately position and identify the shapes of the multiple fire targets within a certain range by utilizing at least three devices with laser radar sensing function and machine vision function, are suitable for detecting the positions and the shapes of the multiple fire targets in large places, and can provide reference basis for emergency and fire fighting of complex fire conditions.
It should be noted that, the device of the present invention is an electronic hardware with communication function, laser radar sensing capability and machine vision capability. The point cloud data acquired through the laser radar sensing capability is regarded as disordered point cloud, and the functional module with the machine vision capability can be a common optical camera or an infrared thermal imaging camera. The specific hardware capabilities of each device may be different; each device may be autonomous or may be installed at a fixed location within the venue, the number of devices necessarily being three or more.
Furthermore, the present invention is applicable to a three-dimensional space place in which the coordinate system thereof includes the horizontal axis and the vertical axis of the horizontal plane and the depth axis representing the height. In a three-dimensional space place, a fire target is a certain number (zero or any positive integer) of closed three-dimensional areas, the areas comprise a plurality of vertexes to be positioned, the plurality of vertexes and coordinate information thereof can describe the shape and azimuth information of the fire target, and the shape and azimuth information are data to be solved by the invention.
Firstly, the implementation steps of a method for locating multiple fire targets according to the embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
The method provided by the embodiment of the invention can be applied to the terminal, the server, software running in the terminal or the server and the like. The terminal may be, but is not limited to, a tablet computer, a notebook computer, a desktop computer, etc. The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, basic cloud computing services such as big data and artificial intelligence platforms, and the like. In addition, the server may also be a node server in a blockchain network, but is not limited thereto. The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like.
Referring to fig. 1, fig. 1 is a flowchart of a method for locating multiple fire targets according to the present invention, which mainly includes the following steps S100-S300.
And S100, when a plurality of devices are installed or run into a place where a fire disaster occurs, each device performs positioning processing on the device, obtains corresponding position information and sends the position information to other devices.
In this step, each device may be autonomously mobile or may be installed at a fixed location within the venue. After a plurality of devices are installed in, or have driven into, the fire site, each device locates itself. If the device can move autonomously, it can position itself using an existing positioning method, thereby obtaining a positioning result. If the device is installed at a fixed location in a predetermined site, the device may directly adopt that fixed location as its positioning result. For device d_i, numbered i, its location information is denoted (x_i, y_i, z_i).
And S200, each device observes the environments of a plurality of fire targets and places according to the corresponding position information, obtains a corresponding observation data list and sends the corresponding observation data list to other devices.
The observation data list includes a fire target visual information list and a background point cloud list. The fire target visual information list is used for storing first azimuth angle, second azimuth angle and height-width ratio information of each fire target, and the background point cloud list is used for storing a plurality of detection point clouds and position information thereof.
In this step, for each device, after the device locates itself, the device will keep its location unchanged, and use the pre-configured lidar module and machine vision detection module to sense multiple fire targets and fire scene environments, so as to perform a single device sensing phase, thus obtaining a corresponding observation data list, and share this data to other devices.
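To make the shared data concrete, the sketch below models a device's observation data list with Python dataclasses. All class and field names are illustrative assumptions; the patent specifies only the list contents (azimuth/aspect-ratio triples and detection point clouds with positions), not any particular representation.

```python
from dataclasses import dataclass, field

@dataclass
class FireTargetVisualInfo:
    """One triple from the fire target visual information list."""
    first_azimuth_deg: float   # left-edge horizontal azimuth (0 = north, 90 = east)
    second_azimuth_deg: float  # right-edge horizontal azimuth
    aspect_ratio: float        # height-to-width ratio of the observed target

@dataclass
class ObservationData:
    """Observation data list shared by one device."""
    device_id: int
    device_position: tuple[float, float, float]  # (x_i, y_i, z_i) in the site frame
    visual_info: list[FireTargetVisualInfo] = field(default_factory=list)
    # Detection point clouds as site-frame (x, y, z) tuples.
    background_cloud: list[tuple[float, float, float]] = field(default_factory=list)
```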
S300, among the plurality of devices, the device with the hardware computing power larger than the computing power threshold value determines azimuth information and shape information of a plurality of fire targets according to the corresponding observation data list and the observation data list sent by other devices.
It should be noted that the calculation force threshold may be set according to practical situations, which is not particularly limited in the present invention.
In this step, after each device completes the single device sensing phase, a device whose hardware calculation force is greater than a calculation force threshold is selected from among a plurality of devices as calculation devices for performing calculation, and the number of calculation devices is two or more. Each computing device will locate a plurality of fire targets and determine shape information for each fire target based on its own observed data list in combination with other observed data lists observed by other devices.
In some embodiments of the present invention, before step S100, the method may further include the steps of:
Each device establishes its communication connection with the other devices.
In this step, after a plurality of devices are installed or driven into a fire place, the respective devices are networked to construct a communication connection between each device. Optionally, the networking mode may be a wireless network mode such as WiFi or bluetooth, or a wired network mode such as a network cable or an optical fiber, which is not limited in particular by the present invention.
In some embodiments of the present invention, the implementation of step S100 may include, but is not limited to, the following steps S110-S120 for a single device.
S110, when the device is fixedly installed at a place where a fire occurs, the device uses its installation position in the place as position information corresponding to the device.
In this step, if the device is installed at a fixed location on a predetermined site, the device may directly employ the fixed location as a positioning result and share the positioning result to other devices.
S120, when the device travels into the place where the fire occurs, the device travels to a position for observing a plurality of fire targets, and the position for observing a plurality of fire targets is set as position information corresponding to the device.
In this step, if the device can move autonomously, the device enters the seek phase. Specifically, for a single device, the device needs to move and search for a better location and angle to observe as many or all fire targets as possible while continually locating itself and sharing its location results to other devices. It should be noted that the device needs to ensure that it does not itself obstruct the view of other devices observing the fire.
Optionally, the positioning mode is one or more of satellite positioning, CSI (Channel State Information) signal-strength positioning over WiFi, and lidar terrain matching, which is not specifically limited by the present invention.
In some embodiments of the invention, the single device sensing phase is entered after all devices have completed locating themselves. Referring to fig. 2 to 4, the implementation procedure of step S200 mainly includes steps S210 to S250 for a single device.
S210, creating a blank fire target visual information list and a background point cloud list.
S220, measuring each fire target through a pre-configured machine vision detection module to obtain the first azimuth angle, second azimuth angle and aspect ratio information of each fire target, and storing the information as triples into the fire target visual information list.
It should be noted that, the north direction is a horizontal direction of 0 degrees, the south direction is a horizontal direction of 180 degrees, the east direction is a horizontal direction of 90 degrees, the west direction is a horizontal direction of 270 degrees, and the first azimuth angle is a horizontal azimuth angle of the left side edge, as shown in fig. 3; the second azimuth angle is the right side edge horizontal azimuth angle, as shown in fig. 4.
Alternatively, the machine vision detection module may be a common optical camera or an infrared thermal imaging camera, which is not particularly limited in the present invention.
In the step, the equipment measures each fire target which can be observed in a machine vision mode of a common optical camera or an infrared thermal imaging camera to obtain left side edge horizontal azimuth angle, right side edge horizontal azimuth angle and height-width ratio information of each fire target. Then, the device stores the left side edge horizontal azimuth angle, the right side edge horizontal azimuth angle and the aspect ratio information of each fire target into the fire target visual information list according to the arrangement mode of the triples.
S230, scanning the environment of the place through a pre-configured laser radar module to obtain a plurality of initial point clouds.
In the step, the equipment scans the place 360 degrees through a laser radar module which is pre-configured, and a plurality of initial point clouds are obtained through scanning.
S240, processing a plurality of initial point clouds according to the first azimuth angle and the second azimuth angle of each fire target and combining the position information of the equipment to obtain a plurality of detection point clouds and the position information thereof, and storing the detection point clouds and the position information thereof into a background point cloud list.
In this step, for each fire target, the device performs screening processing on a plurality of initial point clouds by using the left edge horizontal azimuth angle, the right edge horizontal azimuth angle, and the aspect ratio information of the fire target, thereby obtaining a plurality of detection point clouds and position information of each detection point cloud, and storing the data in a background point cloud list.
More specifically, first, the first azimuth angle and the second azimuth angle of each fire target are searched from the fire target visual information list. Then, deleting the point cloud positioned between the first azimuth angle and the second azimuth angle of each fire target from the plurality of initial point clouds to obtain a plurality of detection point clouds. And then, according to the distance and the angle of each detection point cloud relative to the equipment, combining the position information of the equipment, and carrying out three-dimensional space coordinate conversion processing on each detection point cloud to obtain the position information of each detection point cloud in the place.
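A minimal Python sketch of this screening and conversion step follows. The per-return input format (range, azimuth, elevation) and the handling of azimuth windows that wrap through north are assumptions; the patent states only that points between the two azimuths of each fire target are deleted and that the remaining points are converted to three-dimensional site coordinates from their distance and angle relative to the device plus the device's position.

```python
import math

def filter_and_localize(initial_points, targets, device_pos):
    """Drop lidar returns inside any fire target's azimuth window, then
    convert the surviving returns to site coordinates.

    initial_points: iterable of (distance_m, azimuth_deg, elevation_deg)
                    relative to the device; azimuth uses 0 = north, 90 = east.
    targets:        iterable of (first_azimuth_deg, second_azimuth_deg) windows.
    device_pos:     (x, y, z) of the device in the site frame.
    """
    def in_window(az, lo, hi):
        # Handle windows that wrap through north (e.g. 350..10 degrees).
        return lo <= az <= hi if lo <= hi else az >= lo or az <= hi

    detected = []
    for dist, az, el in initial_points:
        if any(in_window(az, lo, hi) for lo, hi in targets):
            continue  # point lies on a fire target's bearing; not background
        # Spherical-to-Cartesian conversion with 0 deg = north (+y), 90 deg = east (+x).
        horiz = dist * math.cos(math.radians(el))
        x = device_pos[0] + horiz * math.sin(math.radians(az))
        y = device_pos[1] + horiz * math.cos(math.radians(az))
        z = device_pos[2] + dist * math.sin(math.radians(el))
        detected.append((x, y, z))
    return detected
```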
S250, obtaining an observation data list through the fire target visual information list and the background point cloud list.
In this step, the device obtains an observation data list composed of a fire target visual information list and a background point cloud list.
In some embodiments of the invention, the multi-device distributed computing phase is entered after each device completes the single device sensing phase. The multi-device distributed computing stage is performed by a part of devices with stronger hardware performance, and the devices participating in the computing stage are called as computing devices in the invention. In order to prevent the problems of excessive calculation amount and data accumulation errors, in the multi-device distributed calculation stage, a single computing device comprehensively analyzes data of only a plurality of devices (such as two or three devices), rather than comprehensively analyzing data of all devices.
Specifically, referring to fig. 5, in step S300, the implementation process of determining azimuth information and shape information of a plurality of fire targets may include, but is not limited to, the following steps S310 to S320.
S310, each operation device selects a preset number of devices as the corresponding devices to be operated, and each operation device executes an operation step once for the observation data list of all the corresponding devices to be operated.
It should be noted that the number of devices selected by each computing device may be set according to practical situations, which is not particularly limited in the present invention.
In this step, each computing device selects two or more devices to be processed and acquires an observation data list of the selected devices. Then, each computing device performs a computing step on the observation data list of all the devices to be computed corresponding thereto, the computing step being explained in the following embodiments.
It should be noted that the computing power of each computing device needs to meet the computing power requirements of the data of the device selected by the computing device.
S320, when all the operation devices execute the operation steps on the observation data lists of all the corresponding devices to be operated, each operation device sequentially outputs the corresponding target list and sends the target list to the terminal where the user is located.
Each target list records azimuth information and shape information of a plurality of finally determined fire targets. Wherein the shape information of the finally determined fire target is characterized by the plurality of vertices of the finally determined fire target, and the azimuth information of the finally determined fire target is characterized by the position information of the plurality of vertices of the finally determined fire target.
As a further embodiment, referring to fig. 6, the operation step may include, but is not limited to, the following steps S311-S316.
S311, creating a blank edge point list, a fire target vertex list and a target list of suspected fire targets.
In this step, the computing device creates a blank edge point list of suspected fire targets, a fire target vertex list, and a target list corresponding to the edge point list, so as to facilitate subsequent data storage, processing, and sorting.
Alternatively, the computing device may also preset a target number variable, which counts from the beginning. However, it should be noted that the rules of the target number variables preset by all the computing devices are the same.
S312, processing the background point cloud list and the fire target visual information list of all the equipment to be operated to obtain edge point clouds of each fire target, storing the edge point clouds into the edge point list, and numbering the edge point clouds of each fire target in the edge point list.
Specifically, in this step, first, the device number of each device to be operated is added to the information of all the detection point clouds of each device to be operated, and the background point cloud lists of all the devices to be operated are combined, so as to obtain a combined background point cloud list. Then, according to the position information of each device to be operated and the fire target visual information list, searching to obtain a first azimuth angle and a second azimuth angle of each fire target, namely a left side edge horizontal azimuth angle and a right side edge horizontal azimuth angle of each fire target. And screening out detection point clouds positioned between the left side edge horizontal azimuth angle and the right side edge horizontal azimuth angle of each fire target from the combined background point cloud list, and storing the detection point clouds as edge point clouds of each fire target into an edge point list of a suspected fire target. And finally, numbering the point clouds by using a preset target numbering variable.
It should be noted that, because the different devices have not yet distinguished which observations correspond to the same physical fire, each observed fire target must be given a distinct target number: the target number variable is incremented for every fire target entry in every device's fire target visual information list, even when several devices record the same fire target at the same time.
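The sketch below illustrates this merge-and-screen step under stated assumptions: background point clouds are site-frame (x, y, z) tuples, bearings are computed with 0° = north and 90° = east as defined in step S220, and a fresh target number is issued per observed target, mirroring the numbering rule above.

```python
import math

def collect_edge_points(devices):
    """Merge the background point cloud lists of the devices to be operated,
    tagging each point with its device number, then keep the points whose
    bearing from some *other* device falls inside one of that device's
    fire-target azimuth windows. Returns {target_number: [(device_id, x, y, z)]}.
    """
    merged = [(d.device_id, p) for d in devices for p in d.background_cloud]

    def bearing_deg(origin, point):
        dx, dy = point[0] - origin[0], point[1] - origin[1]
        return math.degrees(math.atan2(dx, dy)) % 360.0  # 0 = north, 90 = east

    edge_points, target_no = {}, 0
    for d in devices:
        for info in d.visual_info:          # one entry per fire target seen by d
            lo, hi = info.first_azimuth_deg, info.second_azimuth_deg
            hits = []
            for src_id, (x, y, z) in merged:
                if src_id == d.device_id:
                    continue  # that device already deleted its own fire-bearing points
                az = bearing_deg(d.device_position, (x, y, z))
                inside = lo <= az <= hi if lo <= hi else az >= lo or az <= hi
                if inside:
                    hits.append((src_id, x, y, z))
            edge_points[target_no] = hits   # a fresh number per observed target
            target_no += 1
    return edge_points
```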
S313, selecting a plurality of edge point clouds corresponding to a preset distance threshold from the edge point list as vertexes and storing the vertexes into the vertex list.
In this step, with the distance threshold as a reference, a plurality of pairs of edge point clouds are found in the edge point list, where each pair comprises two adjacent edge point clouds originating from different devices. These paired edge point clouds are then taken as vertices and stored in the fire target vertex list.
Alternatively, the distance threshold may be set according to practical situations, which is not particularly limited in the embodiment of the present invention. Preferably, the distance threshold is 0.1 meters.
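Under the same assumptions, the pairing step might look like the following sketch, where each edge point carries the device number added during the merge:

```python
import math

def find_vertices(edge_points, distance_threshold=0.1):
    """From one target's edge point list, keep pairs of points that come from
    different devices and lie within distance_threshold (metres) of each
    other; every point in such a pair is treated as a vertex of the target.

    edge_points: list of (device_id, x, y, z).
    """
    vertices = set()
    for i, (dev_a, *pa) in enumerate(edge_points):
        for dev_b, *pb in edge_points[i + 1:]:
            if dev_a == dev_b:
                continue  # a pair must mix observations from different devices
            if math.dist(pa, pb) <= distance_threshold:
                vertices.add((dev_a, *pa))
                vertices.add((dev_b, *pb))
    return list(vertices)
```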
S314, determining a plurality of vertexes and coordinate extremums corresponding to each fire target according to the position information of all vertexes in the vertex list.
The coordinate extremum includes the maximum and minimum of the abscissa, the maximum and minimum of the ordinate, and the maximum and minimum of the depth coordinate. For the j-th fire target, the maximum of the abscissa is the maximum x-axis coordinate value x_j-max and the minimum is the minimum x-axis coordinate value x_j-min; the maximum of the ordinate is the maximum y-axis coordinate value y_j-max and the minimum is the minimum y-axis coordinate value y_j-min; the maximum of the depth coordinate is the maximum z-axis coordinate value z_j-max and the minimum is the minimum z-axis coordinate value z_j-min.
In this step, first, the vertexes having the same number are grouped to obtain a plurality of vertex groups. The plurality of vertex groups are in one-to-one correspondence with the plurality of fire targets, the plurality of vertices in each vertex group can be characterized as a plurality of vertices corresponding to the fire targets corresponding to each vertex group, and all the vertices in a single vertex group can describe the shape of a single fire target. And then, determining a plurality of vertexes corresponding to each fire target and coordinate extremum according to the position information of all vertexes of each vertex group.
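As a sketch, the grouping and extremum extraction can be expressed as follows (the (number, x, y, z) tuple layout is an illustrative assumption):

```python
def coordinate_extremum(vertex_list):
    """Group vertices by target number and compute each group's coordinate
    extremum as (x_min, x_max, y_min, y_max, z_min, z_max).

    vertex_list: list of (target_number, x, y, z).
    """
    groups = {}
    for number, x, y, z in vertex_list:
        groups.setdefault(number, []).append((x, y, z))

    extremum = {}
    for number, pts in groups.items():
        xs, ys, zs = zip(*pts)
        extremum[number] = (min(xs), max(xs), min(ys), max(ys), min(zs), max(zs))
    return extremum
```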
S315, correcting the coordinate extremum of each fire target to obtain the corrected coordinate extremum of each fire target.
In this step, the laser radar may be interfered by smoke of the fire when scanning the environment in the height direction, so that a certain sensing error may exist in the coordinate extremum of each fire target, and thus the maximum value of the depth coordinate of each fire target needs to be corrected.
Specifically, first, the maximum value of the depth coordinate of each fire target is corrected based on the aspect ratio information of each fire target, and the maximum value of the corrected depth coordinate of each fire target is obtained. Then, the maximum value and the minimum value of the abscissa, the maximum value and the minimum value of the ordinate, the minimum value of the depth coordinate, and the maximum value of the corrected depth coordinate of each fire target are taken as corrected coordinate extremum of each fire target.
The maximum value of the corrected depth coordinate satisfies a correction formula in which z'_j-max denotes the corrected maximum depth coordinate of the j-th fire target and R_j denotes the aspect ratio information of the j-th fire target.
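As a hedged illustration only, one correction consistent with the variables defined above — assuming R_j (height/width) converts the larger horizontal extent of the target into a height estimate — is:

$$z'_{j\text{-}max} = z_{j\text{-}min} + R_j \cdot \max\left(x_{j\text{-}max} - x_{j\text{-}min},\; y_{j\text{-}max} - y_{j\text{-}min}\right)$$

This is a reconstruction under the stated assumptions, not necessarily the exact formula of the granted patent.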
S316, performing de-duplication treatment on all fire targets according to the corrected coordinate extremum of each fire target to obtain a plurality of reserved fire targets.
In this step, since different fire targets are not distinguished in different devices, it is necessary to perform a deduplication process on a plurality of fire targets, that is, delete duplicate fire targets.
Specifically, first, the three-axis coordinate range of each fire target, that is, the coordinate ranges of the x-axis, y-axis, and z-axis, is determined based on the maximum and minimum values of the abscissa, the maximum and minimum values of the ordinate, the minimum values of the depth coordinates, and the maximum values of the corrected depth coordinates of each fire target. Then, traversing the three-axis coordinate range of each fire target, and randomly deleting one fire target and reserving the other fire target for two fire targets with the coincidence degree of the three-axis coordinate ranges being larger than the coincidence threshold value. And traversing the three-axis coordinate ranges of all the fire targets to obtain a plurality of reserved fire targets, namely the fire targets which are finally determined.
Alternatively, the coincidence threshold may be set according to practical situations, which is not particularly limited in the embodiment of the present invention.
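The sketch below shows one way to implement the de-duplication. The coincidence measure is an assumption (intersection volume over the smaller target's volume); the patent says only that, of two targets whose three-axis coordinate ranges coincide beyond the threshold, one is deleted and the other reserved.

```python
def deduplicate(extremum, overlap_threshold=0.5):
    """Drop one of any two targets whose corrected three-axis coordinate
    ranges overlap by more than overlap_threshold.

    extremum: {target_number: (x_min, x_max, y_min, y_max, z_min, z_max)}.
    """
    def volume(b):
        return max(0.0, b[1] - b[0]) * max(0.0, b[3] - b[2]) * max(0.0, b[5] - b[4])

    def intersection(a, b):
        return (max(a[0], b[0]), min(a[1], b[1]),
                max(a[2], b[2]), min(a[3], b[3]),
                max(a[4], b[4]), min(a[5], b[5]))

    kept = {}
    for number, box in extremum.items():
        duplicate = False
        for other in kept.values():
            inter = volume(intersection(box, other))
            smaller = min(volume(box), volume(other)) or 1e-9
            if inter / smaller > overlap_threshold:
                duplicate = True  # same physical fire recorded twice; keep the first
                break
        if not duplicate:
            kept[number] = box
    return kept
```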
S317, determining azimuth information and shape information of each reserved fire target through a plurality of vertexes and position information thereof corresponding to each reserved fire target, storing the azimuth information and the shape information into a target list, and renumbering each fire target in the target list.
In this step, the shape information of each reserved fire target is depicted by the plurality of vertices of each reserved fire target, and the azimuth information of each reserved fire target is depicted by the position information of the plurality of vertices of each reserved fire target, so that the azimuth information and the shape information of each reserved fire target are determined, and are stored in the target list. And then renumbering all fire targets in the target list according to a preset target numbering variable, so as to obtain a final target list.
Still further, before step S317, the method further includes the steps of:
For each reserved fire target, searching for a vertex with a depth coordinate value greater than the maximum value of the corrected depth coordinate, and replacing the value of the depth coordinate of the searched vertex with the maximum value of the corrected depth coordinate.
In this step, if there is a vertex whose z-axis coordinate value is greater than the corrected maximum z-axis coordinate value of the fire target for all vertices of the fire target, the z-axis coordinate value of the vertex is set to the corrected maximum z-axis coordinate value.
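A short sketch of this clamping step, assuming vertices are (x, y, z) tuples:

```python
def clamp_vertices(vertices, z_max_corrected):
    """Cap every vertex's depth coordinate at the target's corrected maximum
    depth coordinate (smoke can inflate raw lidar heights)."""
    return [(x, y, min(z, z_max_corrected)) for x, y, z in vertices]
```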
In summary, the embodiment of the invention utilizes at least three devices with laser radar sensing function and machine vision function to locate and identify the shape of a plurality of fire targets within a certain range, can effectively reduce the interference of fire environment on data sensing, and improves the accuracy of sensing data, thereby being beneficial to improving the accuracy and efficiency of fire location, being applicable to the detection of the positions and the shapes of a plurality of fire targets in large-scale places and having high availability.
In addition, referring to fig. 7, the embodiment of the present invention further provides a positioning device for multiple fire targets, which mainly includes:
The positioning module 101 has the following function: performing positioning processing on the device itself to obtain corresponding position information and sending that information to the other devices.
The observation module 102 functions as: and observing the environments of a plurality of fire targets and places according to the corresponding position information, obtaining a corresponding observation data list and sending the corresponding observation data list to other equipment.
The multi-target processing module 103 functions to: and when the hardware calculation force is larger than the calculation force threshold value, determining the azimuth information and the shape information of a plurality of fire targets according to the corresponding observation data list and the observation data list sent by other equipment.
Further, the observation module 102 mainly includes:
The machine vision detection module has the functions that: and measuring each fire target to obtain the first azimuth angle, the second azimuth angle and the height-width ratio information of each fire target.
The laser radar module has the functions that: and scanning the environment of the place to obtain a plurality of initial point clouds.
The content in the method embodiment is applicable to the embodiment of the device, and the functions specifically realized by the embodiment of the device are the same as those of the method embodiment, and the obtained beneficial effects are the same as those of the method embodiment.
In addition, the embodiment of the invention also provides a positioning system for multiple fire targets, which comprises the following steps:
At least one processor;
At least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to perform a multi-fire target localization method as previously described.
The content in the method embodiment is applicable to the system embodiment, the functions specifically realized by the system embodiment are the same as those of the method embodiment, and the achieved beneficial effects are the same as those of the method embodiment.
Finally, an embodiment of the present invention provides a computer-readable storage medium in which a processor-executable program is stored, which when executed by a processor is used to implement a method for locating multiple fire targets as described above.
Similarly, the content in the above method embodiment is applicable to the present storage medium embodiment, and the specific functions of the present storage medium embodiment are the same as those of the above method embodiment, and the achieved beneficial effects are the same as those of the above method embodiment.
In some alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed, and in which sub-operations described as part of a larger operation are performed independently.
Furthermore, while the invention is described in the context of functional modules, it should be appreciated that, unless otherwise indicated, one or more of the functions and/or features may be integrated in a single physical device and/or software module or may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary to an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be apparent to those skilled in the art from consideration of their attributes, functions and internal relationships. Accordingly, one of ordinary skill in the art can implement the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative and are not intended to be limiting upon the scope of the invention, which is to be defined in the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in the form of a software product stored in a storage medium, including several programs for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable programs for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with a program execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the programs from the program execution system, apparatus, or device and execute the programs. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the program execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable program execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
In the foregoing description of the present specification, reference to the terms "one embodiment/example", "another embodiment/example", "certain embodiments/examples", and the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiment of the present invention has been described in detail, the present invention is not limited to the embodiments described above, and various equivalent modifications and substitutions can be made by those skilled in the art without departing from the spirit of the present invention, and these equivalent modifications and substitutions are intended to be included in the scope of the present invention as defined in the appended claims.

Claims (10)

1. The method for positioning the multi-fire target is characterized by comprising the following steps of:
When a plurality of devices are installed or run into a place where a fire disaster occurs, each device performs positioning processing on the device, obtains corresponding position information and sends the position information to other devices;
Each device observes the plurality of fire targets and the environment of the place according to its corresponding position information, obtains a corresponding observation data list, and sends the observation data list to the other devices; the observation data list comprises a fire target visual information list and a background point cloud list, wherein the fire target visual information list stores the first azimuth angle, second azimuth angle and aspect ratio information of each fire target, and the background point cloud list stores a plurality of detection point clouds and their position information;
Among the plurality of devices, each device whose hardware computing power is greater than a computing power threshold determines the azimuth information and shape information of the plurality of fire targets according to its corresponding observation data list and the observation data lists sent by the other devices.
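For illustration only, and not part of the claimed subject matter: a minimal Python sketch of the per-device data exchange described in claim 1. Every class, field, and function name here is an assumption introduced for readability; the claim fixes what the lists contain, not how they are represented.

```python
# All names below are assumptions for illustration; the claim prescribes the
# lists' contents but no concrete data layout.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ObservationDataList:
    device_id: int
    device_position: Tuple[float, float, float]  # from the device's self-positioning
    # fire target visual information list: one (first azimuth angle,
    # second azimuth angle, aspect ratio) triple per detected fire target
    visual_info: List[Tuple[float, float, float]] = field(default_factory=list)
    # background point cloud list: detection point clouds in site coordinates
    background_points: List[Tuple[float, float, float]] = field(default_factory=list)

def is_operation_device(hardware_flops: float, flops_threshold: float) -> bool:
    """Last step of claim 1: only devices whose hardware computing power
    exceeds the threshold fuse the broadcast observation data lists."""
    return hardware_flops > flops_threshold
```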
2. The method for locating multiple fire targets according to claim 1, wherein observing the plurality of fire targets and the environment of the place according to the corresponding position information to obtain a corresponding observation data list comprises:
Creating a blank fire target visual information list and a background point cloud list;
Measuring each fire target through a pre-configured machine vision detection module to obtain the first azimuth angle, second azimuth angle and aspect ratio information of each fire target, and storing the information as triples into the fire target visual information list;
The first azimuth angle is the horizontal azimuth angle of the fire target's left edge, and the second azimuth angle is the horizontal azimuth angle of its right edge;
Scanning the environment of the place through a pre-configured laser radar module to obtain a plurality of initial point clouds;
According to the first azimuth angle and second azimuth angle of each fire target, and in combination with the position information of the device, processing the plurality of initial point clouds to obtain a plurality of detection point clouds and their position information, and storing them into the background point cloud list;
And obtaining the observation data list from the fire target visual information list and the background point cloud list.
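A minimal sketch, for illustration only, of how one device could assemble its observation data list per claim 2. The interfaces of the vision and lidar modules are assumptions (the claim names the modules but not their APIs); the azimuth filtering itself is sketched after claim 3.

```python
from typing import Callable, Iterable, List, Tuple

def build_observation_data_list(
    detections: Iterable[Tuple[float, float, float]],
    initial_points: list,
    device_position: Tuple[float, float, float],
    filter_points: Callable[[list, list, tuple], list],
) -> dict:
    """detections: (first azimuth, second azimuth, aspect ratio) per target,
    as returned by a hypothetical machine vision detection module;
    initial_points: raw returns of a hypothetical laser radar scan;
    filter_points: the claim-3 step (sketched after claim 3 below)."""
    visual_info: List[Tuple[float, float, float]] = []  # blank visual info list
    for az_left, az_right, aspect in detections:
        visual_info.append((az_left, az_right, aspect))  # one triple per target
    # blank background point cloud list, filled with filtered, localized points
    background = filter_points(initial_points, visual_info, device_position)
    return {"device_position": device_position,
            "visual_info": visual_info,
            "background_points": background}
```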
3. The method for locating multiple fire targets according to claim 2, wherein processing the plurality of initial point clouds according to the first azimuth angle and second azimuth angle of each fire target and the position information of the device to obtain a plurality of detection point clouds and their position information comprises:
Searching the fire target visual information list for the first azimuth angle and second azimuth angle of each fire target;
Deleting, from the plurality of initial point clouds, the point clouds located between the first azimuth angle and the second azimuth angle of each fire target, to obtain a plurality of detection point clouds;
And performing three-dimensional spatial coordinate conversion on each detection point cloud according to its distance and angle relative to the device and the position information of the device, to obtain the position information of each detection point cloud in the place.
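An illustrative Python sketch of the claim-3 steps: azimuth filtering followed by the device-frame-to-site-frame conversion. The spherical return convention (distance, azimuth, elevation) is an assumption, and azimuth wraparound is ignored for brevity.

```python
import math
from typing import Iterable, List, Tuple

def filter_and_localize(
    initial_points: Iterable[Tuple[float, float, float]],
    visual_info: Iterable[Tuple[float, float, float]],
    device_position: Tuple[float, float, float],
) -> List[Tuple[float, float, float]]:
    """initial_points: (distance, azimuth, elevation) per lidar return, an
    assumed spherical convention. Returns detection points in site coordinates."""
    dx, dy, dz = device_position
    fire_windows = [(az_l, az_r) for az_l, az_r, _aspect in visual_info]
    detection_points = []
    for distance, azimuth, elevation in initial_points:
        # delete returns lying between a target's first and second azimuths
        if any(az_l <= azimuth <= az_r for az_l, az_r in fire_windows):
            continue
        # spherical -> Cartesian in the device frame, then shift by the
        # device's position to obtain site (place) coordinates
        horiz = distance * math.cos(elevation)
        detection_points.append((dx + horiz * math.cos(azimuth),
                                 dy + horiz * math.sin(azimuth),
                                 dz + distance * math.sin(elevation)))
    return detection_points
```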
4. The method for locating multiple fire targets according to claim 1, wherein a device whose hardware computing power is greater than the computing power threshold is defined as an operation device, and wherein the step in which the device whose hardware computing power is greater than the computing power threshold determines the azimuth information and shape information of the plurality of fire targets according to its corresponding observation data list and the observation data lists sent by other devices comprises:
Each operation device selects a preset number of devices as its corresponding devices to be operated, and, for the observation data lists of all its corresponding devices to be operated, executes the following operation steps:
Creating a blank edge point list, a blank fire target vertex list, and a blank target list of suspected fire targets;
Processing the background point cloud lists and fire target visual information lists of all the devices to be operated to obtain the edge point clouds of each fire target, storing them into the edge point list, and numbering the edge point clouds of each fire target in the edge point list;
Screening, from the edge point list, a plurality of edge point clouds that satisfy a preset distance threshold to serve as vertexes, and storing the vertexes into the vertex list;
Determining a plurality of vertexes and a coordinate extremum corresponding to each fire target according to the position information of all the vertexes in the vertex list, the coordinate extremum comprising the maximum and minimum values of the abscissa, the maximum and minimum values of the ordinate, and the maximum and minimum values of the depth coordinate;
Correcting the coordinate extremum of each fire target to obtain a corrected coordinate extremum of each fire target;
Performing de-duplication processing on all fire targets according to the corrected coordinate extremum of each fire target to obtain a plurality of reserved fire targets;
Determining the azimuth information and shape information of each reserved fire target through the plurality of vertexes corresponding to that target and their position information, storing the information into the target list, and renumbering each fire target in the target list;
When all the operation devices have executed the above operation steps on the observation data lists of all their corresponding devices to be operated, each operation device sequentially outputs its corresponding target list and sends it to the terminal where the user is located.
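Claim 4 chains the steps detailed in claims 5 to 7. As one concrete, well-defined piece, here is an illustrative sketch of the coordinate-extremum step; the dictionary keys are assumptions reused by the claim-6 and claim-7 sketches below.

```python
from typing import Dict, Iterable, Tuple

def coordinate_extremum(
    vertices: Iterable[Tuple[float, float, float]]
) -> Dict[str, float]:
    """Maxima and minima of the abscissa, ordinate and depth coordinate over
    one fire target's vertexes (site coordinates)."""
    xs, ys, zs = zip(*vertices)
    return {"x_min": min(xs), "x_max": max(xs),
            "y_min": min(ys), "y_max": max(ys),
            "z_min": min(zs), "z_max": max(zs)}
```

For example, coordinate_extremum([(0, 0, 0), (2, 1, 3)]) yields the ranges [0, 2], [0, 1] and [0, 3] for the three axes.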
5. The method for locating multiple fire targets according to claim 4, wherein processing the background point cloud lists and fire target visual information lists of all the devices to be operated to obtain the edge point clouds of each fire target comprises:
Adding the device number of each device to be operated to the information of all its detection point clouds, and merging the background point cloud lists of all the devices to be operated to obtain a merged background point cloud list;
Searching, according to the position information of each device to be operated and its fire target visual information list, for the first azimuth angle and second azimuth angle of each fire target;
And screening out, from the merged background point cloud list, the detection point clouds located between the first azimuth angle and the second azimuth angle of each fire target to serve as the edge point clouds of that fire target.
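An illustrative sketch of claim 5, under stated assumptions: each observation is a dict with the keys used in the earlier sketches, and the observing device's own points are skipped because claim 3 already deleted its fire-direction returns (that skip is an assumption, not claim language).

```python
import math
from typing import Dict, List, Tuple

def extract_edge_points(observation_lists: List[dict]) -> Dict[tuple, list]:
    """observation_lists: dicts with keys 'device_id', 'device_position',
    'visual_info' (azimuth triples) and 'background_points' (site coords)."""
    # Step 1: tag every detection point with its device number and merge.
    merged: List[Tuple[int, Tuple[float, float, float]]] = []
    for obs in observation_lists:
        for point in obs["background_points"]:
            merged.append((obs["device_id"], point))
    # Steps 2-3: for each fire target seen by each device, keep the merged
    # points whose horizontal azimuth from that device lies between the
    # target's first and second azimuth angles.
    edge_points: Dict[tuple, list] = {}
    for obs in observation_lists:
        ox, oy, _oz = obs["device_position"]
        for t_idx, (az_left, az_right, _aspect) in enumerate(obs["visual_info"]):
            bucket = edge_points.setdefault((obs["device_id"], t_idx), [])
            for src_id, (x, y, z) in merged:
                if src_id == obs["device_id"]:
                    continue  # this device deleted its own fire-direction returns
                if az_left <= math.atan2(y - oy, x - ox) <= az_right:
                    bucket.append((x, y, z))
    return edge_points
```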
6. The method for locating multiple fire targets according to claim 4, wherein correcting the coordinate extremum of each fire target to obtain the corrected coordinate extremum of each fire target comprises:
Correcting the maximum value of the depth coordinate of each fire target according to the aspect ratio information of that fire target, to obtain a corrected maximum value of the depth coordinate;
And taking the maximum and minimum values of the abscissa, the maximum and minimum values of the ordinate, the minimum value of the depth coordinate, and the corrected maximum value of the depth coordinate of each fire target as the corrected coordinate extremum of that fire target.
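The claim states that the depth maximum is corrected from the aspect ratio but gives no formula in this text, so the sketch below is purely an assumed reading: treat the ordinate as the vertical axis, infer the flame width from height / aspect_ratio, and cap the lidar-occluded depth extent at that width measured from the visible near side.

```python
def correct_extremum(extremum: dict, aspect_ratio: float) -> dict:
    """Claim-6 sketch with an ASSUMED correction formula; extremum uses the
    keys from the claim-4 sketch, with the ordinate taken as height."""
    height = extremum["y_max"] - extremum["y_min"]
    width = height / aspect_ratio if aspect_ratio > 0 else 0.0
    corrected = dict(extremum)
    corrected["z_max"] = extremum["z_min"] + width  # corrected depth maximum
    return corrected
```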
7. The method for locating multiple fire targets according to claim 4, wherein performing de-duplication processing on all fire targets according to the corrected coordinate extremum of each fire target to obtain a plurality of reserved fire targets comprises:
Determining the three-axis coordinate range of each fire target according to the corrected coordinate extrema of the plurality of fire targets;
Traversing the three-axis coordinate ranges of the fire targets and, for any two fire targets whose degree of coincidence within the three-axis coordinate range is greater than a coincidence threshold, randomly deleting one of them and reserving the other, thereby obtaining a plurality of reserved fire targets.
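An illustrative sketch of the claim-7 de-duplication. The claim does not define "degree of coincidence"; here it is assumed to be the intersection volume of the two axis-aligned boxes over the smaller box's volume, and which duplicate is deleted is arbitrary, matching the claim's "randomly deleting one".

```python
from typing import List

def interval_overlap(lo1: float, hi1: float, lo2: float, hi2: float) -> float:
    """Length of the intersection of two 1-D coordinate ranges."""
    return max(0.0, min(hi1, hi2) - max(lo1, lo2))

def deduplicate(targets: List[dict], overlap_threshold: float = 0.8) -> List[dict]:
    """targets: corrected extremum dicts (keys as in the claim-4 sketch).
    Returns the reserved fire targets."""
    kept: List[dict] = []
    for t in targets:
        is_duplicate = False
        for k in kept:
            inter, vol_t, vol_k = 1.0, 1.0, 1.0
            for lo, hi in (("x_min", "x_max"), ("y_min", "y_max"),
                           ("z_min", "z_max")):
                inter *= interval_overlap(t[lo], t[hi], k[lo], k[hi])
                vol_t *= t[hi] - t[lo]
                vol_k *= k[hi] - k[lo]
            smaller = min(vol_t, vol_k)
            if smaller > 0 and inter / smaller > overlap_threshold:
                is_duplicate = True
                break
        if not is_duplicate:
            kept.append(t)
    return kept
```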
8. A multi-fire target positioning device, comprising:
a positioning module, configured to perform positioning processing on the device itself, obtain corresponding position information, and send the position information to other devices;
an observation module, configured to observe a plurality of fire targets and the environment of the place according to the corresponding position information, obtain a corresponding observation data list, and send it to other devices, the observation data list comprising a fire target visual information list and a background point cloud list;
and a multi-target processing module, configured to determine, when the hardware computing power of the device is greater than the computing power threshold, the azimuth information and shape information of the plurality of fire targets according to the corresponding observation data list and the observation data lists sent by other devices.
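For illustration only: a minimal sketch of claim 8's module decomposition as a composing class. The module interfaces (locate, observe, fuse) are assumptions; each attribute mirrors one clause of the claim.

```python
class MultiFireTargetPositioningDevice:
    """Claim-8 sketch: the device as three cooperating modules."""
    def __init__(self, positioning_module, observation_module,
                 multi_target_module, flops: float, flops_threshold: float):
        self.positioning_module = positioning_module    # self-localization + broadcast
        self.observation_module = observation_module    # builds the observation data list
        self.multi_target_module = multi_target_module  # fuses lists into target info
        self.is_operation_device = flops > flops_threshold

    def step(self, received_lists):
        position = self.positioning_module.locate()
        own_list = self.observation_module.observe(position)
        if self.is_operation_device:
            return self.multi_target_module.fuse([own_list, *received_lists])
        return None
```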
9. A multi-fire target positioning system, comprising:
At least one processor;
At least one memory for storing at least one program;
When the at least one program is executed by the at least one processor, the at least one processor is caused to carry out the method for locating multiple fire targets according to any one of claims 1 to 7.
10. A computer-readable storage medium in which a processor-executable program is stored, wherein the processor-executable program, when executed by a processor, implements the method for locating multiple fire targets according to any one of claims 1 to 7.
CN202311863420.8A 2023-12-29 2023-12-29 Positioning method, device, system and medium for multiple fire targets Active CN117889858B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311863420.8A CN117889858B (en) 2023-12-29 2023-12-29 Positioning method, device, system and medium for multiple fire targets

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311863420.8A CN117889858B (en) 2023-12-29 2023-12-29 Positioning method, device, system and medium for multiple fire targets

Publications (2)

Publication Number Publication Date
CN117889858A (en) 2024-04-16
CN117889858B (en) 2024-07-16

Family

ID=90645282

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311863420.8A Active CN117889858B (en) 2023-12-29 2023-12-29 Positioning method, device, system and medium for multiple fire targets

Country Status (1)

Country Link
CN (1) CN117889858B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117970360B (en) * 2023-12-29 2024-10-11 大湾区大学(筹) Fire disaster positioning method, system and medium based on laser radar collaborative sensing

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117970360A (en) * 2023-12-29 2024-05-03 大湾区大学(筹) Fire disaster positioning method, system and medium based on laser radar collaborative sensing

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112435437A (en) * 2020-12-12 2021-03-02 浙江工业大学之江学院 Intelligent fire early warning system and method based on visual perception three-dimensional reconstruction technology
CN113436338A (en) * 2021-07-14 2021-09-24 中德(珠海)人工智能研究院有限公司 Three-dimensional reconstruction method and device for fire scene, server and readable storage medium
CN116109759A (en) * 2021-11-08 2023-05-12 中德(珠海)人工智能研究院有限公司 Fire scene three-dimensional reconstruction method and device for laser camera and spherical screen camera
CN114202880B (en) * 2021-12-13 2023-06-20 哈尔滨工业大学(深圳) Fire detection method, system, intelligent terminal and storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117970360A (en) * 2023-12-29 2024-05-03 大湾区大学(筹) Fire disaster positioning method, system and medium based on laser radar collaborative sensing

Also Published As

Publication number Publication date
CN117889858A (en) 2024-04-16

Similar Documents

Publication Publication Date Title
CN109275093B (en) Positioning method based on UWB positioning and laser map matching and mobile terminal
CN112526513A (en) Millimeter wave radar environment map construction method and device based on clustering algorithm
US11113896B2 (en) Geophysical sensor positioning system
CN109557532B (en) Tracking method before detection based on three-dimensional Hough transform and radar target detection system
CN117889858B (en) Positioning method, device, system and medium for multiple fire targets
CN111308500B (en) Obstacle sensing method and device based on single-line laser radar and computer terminal
CN112305559A (en) Power transmission line distance measuring method, device and system based on ground fixed-point laser radar scanning and electronic equipment
CN113687429B (en) Device and method for determining boundary of millimeter wave radar monitoring area
CN111736167B (en) Method and device for obtaining laser point cloud density
CN114488026B (en) Underground parking garage passable space detection method based on 4D millimeter wave radar
CN113759928B (en) Mobile robot high-precision positioning method for complex large-scale indoor scene
CN111742242A (en) Point cloud processing method, system, device and storage medium
CN117970360B (en) Fire disaster positioning method, system and medium based on laser radar collaborative sensing
CN109459723B (en) Pure orientation passive positioning method based on meta-heuristic algorithm
CN114543808B (en) Indoor repositioning method, device, equipment and storage medium
CN114648618B (en) Indoor space three-dimensional topological relation construction method and system
Abadi et al. Manhattan World Constraint for Indoor Line-based Mapping Using Ultrasonic Scans
Yamada et al. Probability-Based LIDAR–Camera Calibration Considering Target Positions and Parameter Evaluation Using a Data Fusion Map
Wu et al. Point cloud registration algorithm based on the volume constraint
Gomez et al. Localization Exploiting Semantic and Metric Information in Non-static Indoor Environments
CN116824068B (en) Real-time reconstruction method, device and equipment for point cloud stream in complex dynamic scene
CN117968851B (en) Fire disaster positioning method, device and medium based on infrared thermal imaging collaborative sensing
CN118215124B (en) Asset positioning method, device, equipment and storage medium based on Bluetooth
CN115994942B (en) Symmetrical extraction method, device, equipment and storage medium of three-dimensional model
CN113960554A (en) Millimeter-wave radar-based method and device for positioning traffic target in tunnel

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant