CN113781520A - Local image identification method and system based on AR intelligent glasses
- Publication number: CN113781520A (application CN202110464722.2A)
- Authority: CN (China)
- Prior art keywords: information, moving body, tracking, body mark, equipment
- Prior art date: 2021-04-28
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T2207/10016—Video; Image sequence
- G06T2207/30196—Human being; Person
- G06T2207/30241—Trajectory
Abstract
The invention discloses a local image identification method based on AR smart glasses, which comprises the following steps: acquiring dynamic image information within the visual field of the device in real time, and performing motion characteristic analysis on the image information to generate moving body marks; performing high-definition magnified sampling of the local image according to the moving body mark to generate characteristic information of the moving body mark; performing cross-comparison analysis between the characteristic information of the moving body mark and preset object information to generate a tracking instruction; and sending the tracking instruction to cooperative devices through an intranet to track the moving body. The method realizes an AR-glasses-based system for automatic, multi-device collaborative identification and tracking of objects, greatly improves the efficiency with which relevant personnel identify and track a target in crowded public places, and resolves several problems of existing tracking approaches.
Description
Technical Field
The invention relates to the field of AR smart devices, and in particular to a local image identification method and system based on AR smart glasses.
Background
AR (Augmented Reality) is a technology that fuses the real world with virtual information, that is, a technology that enhances the real world with virtual information. Under the coordination of technical means such as multimedia, three-dimensional models, intelligent interaction and sensors, virtual information and real-world information supplement and correct each other, better supporting both work and entertainment.
With the continuous development of AR technology in recent years, more and more AR-related technologies and devices have emerged, and their functions and usage increasingly target consumer applications. Some AR devices provide creator-support functions that broaden how creators work and stimulate their creative potential and ambition. From the AR devices and technologies in the prior art, it is not difficult to see that AR devices are gradually positioned to replace today's mobile devices as the next generation of intelligent terminals; at the same time, other potential applications of AR technology, still immature at the present stage, remain a very broad development opportunity.
Combining existing AR smart-glasses technology with image recognition can be used to track and locate moving objects, and can effectively assist the work of relevant agencies. In the prior art, the tracking of moving objects, and of persons in particular, is mostly performed manually by tracking personnel, who find it difficult to effectively identify and track a person of interest within a distant crowd; the monitoring systems that assist such tracking also suffer from blind spots and poor responsiveness.
Disclosure of Invention
The invention aims to provide a local image identification method and system based on AR smart glasses, so as to solve the problems raised in the background art above.
In order to achieve the purpose, the invention provides the following technical scheme:
a local image identification method based on AR intelligent glasses comprises the following steps:
acquiring dynamic image information in a visual field range of equipment in real time, and performing three-dimensional motion characteristic analysis on the image information to generate a moving body mark;
dynamic image acquisition is carried out on the moving body mark, and the characteristic information of the moving body mark is extracted;
carrying out cross comparison analysis on the characteristic information of the moving body mark and preset object information to generate a tracking instruction;
and sending a tracking instruction to the cooperative equipment through the intranet, and tracking the moving body.
As a further scheme of the invention: the dynamic image information is composed of a plurality of consecutive static images captured at a preset time interval; the preset time interval reflects the angular velocity at which the device's field of view moves and is set in inverse proportion to that angular velocity.
As a further scheme of the invention: the dynamic image information also comprises distance data and equipment position information, and the distance data is used for representing the linear distance between each image point on the dynamic image data and the equipment; the step of performing three-dimensional motion characteristic analysis on the image information to generate a moving body mark specifically comprises the following steps:
recording and storing the position information of the equipment in real time, establishing a three-dimensional space model and generating an equipment motion track;
reading static image information in the dynamic image information frame by frame, and acquiring distance information and equipment position information in the static image information;
establishing a real-time object model in the three-dimensional space model according to the distance information in the static image information corresponding to the equipment position information, and storing the real-time object model;
generating an object model motion track according to the continuously updated real-time object model;
and marking the moving body of the object model in the moving state according to the motion trail of the object model.
As a further scheme of the invention: the step of collecting dynamic images of the moving body mark and extracting the characteristic information of the moving body mark specifically comprises the following steps:
acquiring relative position and distance data of the moving body mark according to the real-time object model;
carrying out high-definition real-time sampling on a local image according to the relative position and distance data of the moving body mark to obtain a local dynamic high-definition image;
and performing feature extraction on the local dynamic high-definition image to generate the feature information of the moving body mark.
As a further scheme of the invention: the method is characterized in that preset object information is preset, the preset object information is characteristic information of a tracking object which is collected in advance, the characteristic information of a moving body mark and the preset object information are subjected to cross comparison analysis, and a tracking instruction is generated, and the method specifically comprises the following steps:
establishing a feature comparison model according to preset object information, wherein the feature comparison model is used for representing the contact ratio with the preset object information;
obtaining the characteristic information of the moving body mark, and comparing and analyzing it against the feature comparison model to generate a comparison result;
and if the comparison result indicates high coincidence, generating prompt information and outputting the tracking instruction as tracking.
As a further scheme of the invention: the step of sending a tracking instruction to the cooperative device through the intranet and tracking the moving body specifically includes:
when the tracking instruction is tracking, extracting a corresponding moving body mark and the tracking instruction;
sending the moving body mark and the tracking instruction to the cooperative equipment;
and continuously tracking in the three-dimensional space model according to the corresponding moving body mark information.
As a further scheme of the invention: the method further comprises the following steps:
receiving a moving body marking and tracking instruction from the cooperative equipment;
searching for and identifying the moving body mark in the three-dimensional space model according to the received moving body mark;
and if the moving body mark in the three-dimensional space model is consistent with the received moving body mark, continuing the tracking in the three-dimensional space model.
In a second aspect, an embodiment of the present invention is directed to a local image recognition system based on AR smart glasses, where the local image recognition system specifically includes:
the global construction module is used for acquiring dynamic image information in the visual field range of the equipment in real time, analyzing the motion characteristics of the image information and generating a moving body mark;
the sampling identification module is used for carrying out high-definition amplification sampling on the local image according to the moving body mark to generate the characteristic information of the moving body mark;
the tracking identification module is used for performing cross comparison analysis on the characteristic information of the moving body mark and preset object information to generate a tracking instruction;
and the cooperative tracking module is used for sending a tracking instruction to the cooperative equipment through the intranet and tracking the moving body.
As a further scheme of the invention: the global building block specifically includes:
an environment image acquisition unit, used for acquiring dynamic image information within the visual field of the equipment in real time;
The space motion model unit is used for recording and storing the position information of the equipment in real time, establishing a three-dimensional space model and generating an equipment motion track; reading static image information in the dynamic image information frame by frame, and acquiring distance information and equipment position information in the static image information; establishing a real-time object model in the three-dimensional space model according to the distance information in the static image information corresponding to the equipment position information, and storing the real-time object model; generating an object model motion track according to the continuously updated real-time object model;
and the moving body identification unit is used for marking the moving body of the object model in the moving state according to the motion trail of the object model.
As a further scheme of the invention: the sampling identification module specifically comprises:
the individual relative positioning module is used for acquiring the relative position and distance data of the moving body mark according to the real-time object model;
the individual image acquisition unit is used for carrying out high-definition real-time sampling on a local image according to the relative position and the distance data of the moving body mark to obtain a local dynamic high-definition image;
and the individual characteristic acquisition unit is used for extracting features from the local dynamic high-definition image and generating the feature information of the moving body mark.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides a local image identification method and system based on AR intelligent glasses, which realize an automatic identification and tracking system of a multi-device cooperative object based on the AR glasses, greatly and effectively improve the identification and tracking efficiency of related personnel to be tracked on occasions with more public crowds, and solve various problems of the existing tracking mode.
Drawings
Fig. 1 is a flow chart of a local image recognition method based on AR smart glasses.
Fig. 2 is a block diagram of specific process steps for generating a moving object mark in a local image recognition method based on AR smart glasses.
Fig. 3 is a block diagram of specific process steps for generating feature information in a local image recognition method based on AR smart glasses.
Fig. 4 is a block diagram illustrating specific steps of a process for outputting a tracking command in a local image recognition method based on AR smart glasses.
Fig. 5 is a block diagram of a local image recognition system based on AR smart glasses.
Fig. 6 is a block diagram of a global building block in a local image recognition system based on AR smart glasses.
Fig. 7 is a block diagram of a sampling identification module in a local image identification system based on AR smart glasses.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The technical solutions of the present invention are described in further detail below with reference to specific embodiments.
As shown in fig. 1, a local image recognition method based on AR smart glasses according to an embodiment of the present invention includes the following steps:
and S200, acquiring dynamic image information in the visual field range of the equipment in real time, and performing three-dimensional motion characteristic analysis on the image information to generate a moving body mark.
S400, dynamic image acquisition is carried out on the moving body mark, and the characteristic information of the moving body mark is extracted.
S600, cross comparison analysis is carried out on the characteristic information of the moving body mark and preset object information, and a tracking instruction is generated.
And S800, sending a tracking instruction to the cooperative equipment through the intranet, and tracking the moving body.
In the embodiment of the present invention, in step S200, the device refers to a wearable, portable AR device such as AR glasses; AR glasses are used as the example in the following description. A scenario using the method includes at least one pair of AR glasses capable of executing the method, and a better effect is obtained when several AR devices are used simultaneously. When relevant personnel wear the AR glasses to observe the surrounding environment and a target person or object appears and needs to be tracked, the worn AR glasses continuously capture images along the line of sight and process them to detect and mark the moving objects in the images, which facilitates local image acquisition and identification. In step S400, the AR glasses then perform local high-definition image acquisition on every marked moving object (the moving body marks described in the method) and run feature analysis on the captured images to extract the feature information of each moving object. After the feature information is obtained, step S600 compares it with the preset feature information of the person to be tracked; if they match, the tracked person has been found and is tracked continuously so that the target is not lost. Step S800 sends the tracking instruction to the cooperative devices, i.e., the AR devices carried by other personnel on the same duty. Through data exchange among multiple devices, the device group covers a larger scanning range, which raises the success rate of finding the person to be tracked and reduces target loss during tracking; even if one device loses the target, the other cooperative devices can still provide the target's relative geographic position.
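To make the control flow of steps S200 through S800 concrete, the following Python sketch stubs out each step; every name in it (capture_dynamic_images, analyze_motion, and so on) is a hypothetical placeholder for illustration, not an actual device API.

```python
# Minimal end-to-end sketch of S200-S800; all functions are illustrative stubs.

def capture_dynamic_images():                # S200: camera + sensors (stub)
    return ["frame-0", "frame-1"]

def analyze_motion(frames):                  # S200: mark moving bodies (stub)
    return ["mark-17"]

def extract_mark_features(mark):             # S400: local HD sampling (stub)
    return {"mark": mark, "vector": [0.9, 0.1]}

def matches_preset(features):                # S600: cross comparison (stub)
    return features["vector"][0] > 0.8

def run_once(coop_inboxes):
    frames = capture_dynamic_images()
    for mark in analyze_motion(frames):
        features = extract_mark_features(mark)
        if matches_preset(features):
            for inbox in coop_inboxes:       # S800: notify cooperative devices
                inbox.append((mark, "TRACK"))

inboxes = [[]]
run_once(inboxes)
print(inboxes)  # [[('mark-17', 'TRACK')]]
```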
As another preferred embodiment of the present invention, the dynamic image information is composed of a plurality of consecutive static images captured at a preset time interval; the preset time interval reflects the angular velocity at which the device's field of view moves and is set in inverse proportion to that angular velocity.
In the embodiment of the present invention, the dynamic image information of step S200 is explained. A dynamic image is a set of static images captured continuously at a preset time interval, and this interval is not constant: a speed sensor in the device senses the rotational angular velocity of the device, that is, of the AR glasses on the wearer's head relative to the neck. When the wearer's head rotates slowly or is nearly still, the environment in the real-world picture captured by the AR glasses changes slowly, so the preset time interval is longer; this reduces the computing load of the device, improves detection accuracy, and allows the collected moving body marks to be identified more reliably. When the wearer turns the head very quickly, the real-world scene sweeps rapidly through the visible range of the AR glasses, so the AR glasses need a shorter preset interval to capture enough static images to form the dynamic image information and to complete the construction of the three-dimensional space model in the subsequent steps.
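As a minimal sketch of this inverse-proportional relation, the interval between static-image captures can be computed from the measured angular velocity as below; the proportionality constant and the clamping bounds are illustrative assumptions, not values given in the patent.

```python
# Capture interval shrinks as head angular velocity grows (inverse proportion).

def sampling_interval(angular_velocity_deg_s: float,
                      k: float = 30.0,            # assumed proportionality constant
                      min_interval_s: float = 0.02,
                      max_interval_s: float = 0.5) -> float:
    """Return the static-image capture interval for a given angular velocity."""
    if angular_velocity_deg_s <= 0:               # head still: slowest sampling
        return max_interval_s
    interval = k / angular_velocity_deg_s         # inverse-proportional relation
    return max(min_interval_s, min(interval, max_interval_s))

print(sampling_interval(10.0))    # 0.5 s  (slow head turn, clamped)
print(sampling_interval(300.0))   # 0.1 s  (fast head turn)
print(sampling_interval(3000.0))  # 0.02 s (very fast turn, clamped)
```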
As another preferred embodiment of the present invention, as shown in fig. 2, the dynamic image information further includes distance data and device position information, the distance data is used to represent the straight-line distance of each image point on the dynamic image data from the device; the step of performing three-dimensional motion characteristic analysis on the image information to generate a moving body mark specifically includes:
s202, recording and storing the position information of the equipment in real time, establishing a three-dimensional space model and generating an equipment motion track.
S203, reading the static image information in the dynamic image information frame by frame, and acquiring the distance information and the device position information in the static image information.
And S204, establishing a real-time object model in the three-dimensional space model according to the distance information in the static image information corresponding to the equipment position information, and storing the real-time object model.
And S205, generating an object model motion track according to the continuously updated real-time object model.
And S206, marking the moving body of the object model in the moving state according to the motion track of the object model.
In the embodiment of the present invention, step S200 is described in further detail. Step S200 first establishes a three-dimensional space model on its own, and then populates it using the distance data and position information in the captured dynamic image information together with the motion trajectory of the device itself; here the position information represents the angle relative to the forward viewing direction of the AR glasses, and the distance data is the distance from the AR glasses. As the dynamic image information is updated in real time, the surrounding environment is roughly described in the three-dimensional space model, which separates objects that are static relative to the space from objects that keep moving. The objects moving relative to the space are then marked, so that local sampling and feature extraction can be performed in the subsequent steps, while static objects are treated as environment content and left unprocessed.
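A minimal numpy sketch of steps S202 through S206 follows, under strong simplifying assumptions (a 2-D world plane and one point per object): each image point is lifted into world coordinates from its viewing angle, its measured distance and the device pose, and an object is marked as a moving body when its world position drifts across frames; the 0.5 m threshold is an assumption.

```python
import numpy as np

def to_world(device_pos, yaw_rad, rel_angle_rad, distance):
    """Lift one image point into the (x, y) world plane from pose + distance."""
    bearing = yaw_rad + rel_angle_rad          # angle relative to view direction
    return device_pos + distance * np.array([np.cos(bearing), np.sin(bearing)])

def mark_moving_bodies(tracks, threshold=0.5):
    """tracks: {object_id: [world position per frame]} -> ids that moved."""
    moving = []
    for obj_id, positions in tracks.items():
        drift = np.linalg.norm(np.asarray(positions[-1]) - np.asarray(positions[0]))
        if drift > threshold:                  # moved relative to the space
            moving.append(obj_id)
    return moving

# A pedestrian drifts about 2 m across frames while a lamppost stays put.
tracks = {
    "pedestrian": [to_world(np.zeros(2), 0.0, 0.1, d) for d in (5.0, 6.0, 7.0)],
    "lamppost":   [to_world(np.zeros(2), 0.0, -0.3, 4.0)] * 3,
}
print(mark_moving_bodies(tracks))  # ['pedestrian']
```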
As shown in fig. 3, as another preferred embodiment of the present invention, the step of performing dynamic image acquisition on the moving object marker and extracting feature information of the moving object marker specifically includes:
s401, obtaining the relative position and distance data of the moving body mark according to the real-time object model.
And S402, carrying out high-definition real-time sampling on the local image according to the relative position and the distance data of the moving body mark, and acquiring a local dynamic high-definition image.
And S403, performing feature extraction on the local dynamic high-definition image to generate the feature information of the moving body mark.
In the embodiment of the invention, the feature-information extraction step is described in detail. When an object carries a moving body mark, several different sets of its relative position and distance information have already been collected, from which its position relative to the AR glasses can be determined. A mechanism on the AR glasses with a high-definition image acquisition function can then magnify and focus on that relative position to capture a high-definition image of the object, that is, a high-definition image of a local region within the overall viewing angle of the AR glasses. Feature extraction on this image yields the feature information of the moving body mark.
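Assuming frames arrive as numpy arrays, steps S401 through S403 can be sketched as a crop around the mark's image position followed by feature extraction; the crop stands in for the zoom-and-focus capture, and a gray-level histogram stands in for whatever feature extractor the device actually uses.

```python
import numpy as np

def sample_local_region(frame, center_xy, half_size=32):
    """Crop a patch around the mark's image position (stand-in for HD zoom)."""
    x, y = center_xy
    h, w = frame.shape[:2]
    x0, x1 = max(0, x - half_size), min(w, x + half_size)
    y0, y1 = max(0, y - half_size), min(h, y + half_size)
    return frame[y0:y1, x0:x1]

def extract_features(patch, bins=16):
    """Collapse the patch into a normalized intensity histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

frame = (np.random.rand(480, 640) * 255).astype(np.uint8)   # fake camera frame
patch = sample_local_region(frame, center_xy=(320, 240))
print(extract_features(patch).shape)  # (16,)
```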
As shown in fig. 4, as another preferred embodiment of the present invention, the method is preset with preset object information, where the preset object information is feature information of a tracking object collected in advance, and the step of performing cross-contrast analysis on the feature information of the moving object marker and the preset object information to generate a tracking instruction specifically includes:
s601, establishing a feature comparison model according to the preset object information, wherein the feature comparison model is used for representing the contact ratio with the preset object information.
And S602, acquiring the characteristic information of the moving body mark, and comparing and analyzing the characteristic information of the moving body mark according to the characteristic comparison model to generate a comparison result.
S603, if the comparison result is high coincidence, prompt information is generated, and a tracking instruction is output as tracking.
In the embodiment of the invention, the comparison and analysis of the feature information is described; the comparison itself relies on prior-art techniques. After the comparison is completed, if the object is the tracking target, the wearer is notified by flashing the AR glasses display or by a voice prompt, and the object is then tracked continuously.
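One way to model the "coincidence degree" of the feature comparison model is cosine similarity between fixed-length feature vectors, as in the sketch below; both the vector representation and the 0.9 threshold for "high coincidence" are assumptions for illustration.

```python
import numpy as np

def coincidence(features, preset_features):
    """Cosine similarity as a stand-in for the coincidence degree."""
    a = np.asarray(features, dtype=float)
    b = np.asarray(preset_features, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def compare_and_decide(features, preset_features, threshold=0.9):
    """Return a 'TRACK' instruction and prompt on high coincidence."""
    if coincidence(features, preset_features) >= threshold:
        print("PROMPT: tracked target found")   # flash display / voice prompt
        return "TRACK"
    return None

print(compare_and_decide([1.0, 0.0, 0.2], [1.0, 0.1, 0.1]))  # TRACK
print(compare_and_decide([0.0, 1.0, 0.0], [1.0, 0.1, 0.1]))  # None
```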
As another preferred embodiment of the present invention, the step of sending a tracking instruction to the cooperative device through the intranet, and tracking the moving object specifically includes:
when the tracking instruction is tracking, extracting the corresponding moving body mark and the tracking instruction.
And sending the moving body mark and the tracking instruction to the cooperative equipment.
And continuously tracking in the three-dimensional space model according to the corresponding moving body mark information.
Specifically, the method further comprises:
and receiving a moving body marking and tracking instruction from the cooperative equipment.
And searching and identifying the moving body mark in the three-dimensional space model according to the received moving body mark.
And if the moving body mark in the three-dimensional space model is consistent with the received moving body mark, continuously tracking in the three-dimensional space model.
In the embodiment of the invention, the cooperation between multiple devices is described briefly: when one device detects the tracked object, the other devices synchronously track the same object through data exchange. This greatly improves tracking efficiency, enlarges the visual coverage through multi-device cooperation, and prevents the target from being lost.
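The handoff between devices can be sketched with plain Python data structures standing in for the intranet and for each device's three-dimensional space model: the detecting device broadcasts the mark together with a track instruction, and a peer resumes tracking only if the same mark exists in its own space model, matching the consistency check described above.

```python
def broadcast(mark_id, peer_inboxes):
    """Send the moving body mark and a track instruction to every peer."""
    for inbox in peer_inboxes:
        inbox.append((mark_id, "TRACK"))

def handle_inbox(inbox, local_space_model):
    """Resume tracking only when the received mark matches a local mark."""
    for mark_id, instruction in inbox:
        if instruction == "TRACK" and mark_id in local_space_model:
            print(f"device continues tracking {mark_id}")

peer_inbox = []
broadcast("mark-17", [peer_inbox])
handle_inbox(peer_inbox, local_space_model={"mark-17": "object model"})
# -> device continues tracking mark-17
```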
As shown in fig. 5, the present invention further provides a local image recognition system based on AR smart glasses, which comprises:
and S100, a global construction module is used for acquiring dynamic image information in the visual field range of the equipment in real time, analyzing the motion characteristics of the image information and generating a moving body mark.
And S300, a sampling identification module is used for carrying out high-definition amplification sampling on the local image according to the moving body mark to generate the characteristic information of the moving body mark.
And S500, a tracking identification module is used for performing cross comparison analysis on the characteristic information of the moving body mark and the preset object information to generate a tracking instruction.
And S700, the cooperative tracking module is used for sending a tracking instruction to the cooperative equipment through the intranet and tracking the moving body.
As another preferred embodiment of the present invention, the global building block specifically includes:
and S101, an environment image acquisition unit is used for acquiring dynamic image information in the visual field range of the equipment in real time.
S102, a space motion model unit is used for recording and storing equipment position information in real time, establishing a three-dimensional space model and generating an equipment motion track; reading static image information in the dynamic image information frame by frame, and acquiring distance information and equipment position information in the static image information; establishing a real-time object model in the three-dimensional space model according to the distance information in the static image information corresponding to the equipment position information, and storing the real-time object model; and generating the motion trail of the object model according to the continuously updated real-time object model.
And S103, a moving body identification unit for marking the moving body of the object model in the moving state according to the motion track of the object model.
In the embodiment of the present invention, the environment image acquisition unit includes a plurality of distance sensors, an angle sensor, a fixed-focal-length wide-angle imaging lens, a CMOS sensor, and the like.
As another preferred embodiment of the present invention, the sampling identification module specifically includes:
s301, an individual relative positioning module is used for acquiring the relative position and distance data of the moving body mark according to the real-time object model.
And S302, the individual image acquisition unit is used for carrying out high-definition real-time sampling on the local images according to the relative position and distance data of the moving body mark and acquiring local dynamic high-definition images.
And S303, the individual feature acquisition unit is used for extracting features from the local dynamic high-definition image and generating the feature information of the moving body mark.
In an embodiment of the present invention, the individual image capturing unit includes an imaging lens with a variable focal length and adjustable focus, a CMOS sensor, and the like.
It should be understood that, although the steps in the flowcharts of the embodiments of the present invention are displayed in the sequence indicated by the arrows, these steps are not necessarily performed in that sequence. Unless explicitly stated herein, the execution of these steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least some of the steps in the various embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different times; their order of execution is also not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium and which, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (10)
1. A local image recognition method based on AR intelligent glasses is characterized by comprising the following steps:
acquiring dynamic image information in a visual field range of equipment in real time, and performing three-dimensional motion characteristic analysis on the image information to generate a moving body mark;
dynamic image acquisition is carried out on the moving body mark, and the characteristic information of the moving body mark is extracted;
carrying out cross comparison analysis on the characteristic information of the moving body mark and preset object information to generate a tracking instruction;
and sending a tracking instruction to the cooperative equipment through the intranet, and tracking the moving body.
2. The local image recognition method based on AR smart glasses according to claim 1, wherein the dynamic image information is composed of a plurality of consecutive static images captured at a preset time interval, the preset time interval reflects the angular velocity at which the device's field of view moves, and the preset time interval is set in inverse proportion to that angular velocity.
3. The local image recognition method based on AR smart glasses according to claim 2, wherein the dynamic image information further comprises distance data and device location information, the distance data is used for representing a straight-line distance of each image point on the dynamic image data from the device; the step of performing three-dimensional motion characteristic analysis on the image information to generate a moving body mark specifically comprises the following steps:
recording and storing the position information of the equipment in real time, establishing a three-dimensional space model and generating an equipment motion track;
reading static image information in the dynamic image information frame by frame, and acquiring distance information and equipment position information in the static image information;
establishing a real-time object model in the three-dimensional space model according to the distance information in the static image information corresponding to the equipment position information, and storing the real-time object model;
generating an object model motion track according to the continuously updated real-time object model;
and marking the moving body of the object model in the moving state according to the motion trail of the object model.
4. The local image recognition method based on AR smart glasses according to claim 3, wherein the step of performing dynamic image acquisition on the moving object markers and extracting feature information of the moving object markers specifically comprises:
acquiring relative position and distance data of the moving body mark according to the real-time object model;
carrying out high-definition real-time sampling on a local image according to the relative position and distance data of the moving body mark to obtain a local dynamic high-definition image;
and performing feature extraction on the local dynamic high-definition image to generate the feature information of the moving body mark.
5. The local image recognition method based on the AR smart glasses according to claim 1, wherein the method is preset with preset object information, the preset object information is feature information of a tracking object collected in advance, and the step of performing cross-contrast analysis on the feature information of the moving object marker and the preset object information to generate a tracking command specifically includes:
establishing a feature comparison model according to preset object information, wherein the feature comparison model is used for representing the contact ratio with the preset object information;
obtaining the characteristic information of the moving body mark, and comparing and analyzing it against the feature comparison model to generate a comparison result;
and if the comparison result indicates high coincidence, generating prompt information and outputting the tracking instruction as tracking.
6. The local image recognition method based on AR smart glasses according to claim 5, wherein the step of sending a tracking instruction to the cooperative device through an intranet and tracking the moving object specifically comprises:
when the tracking instruction is tracking, extracting a corresponding moving body mark and the tracking instruction;
sending the moving body mark and the tracking instruction to the cooperative equipment;
and continuously tracking in the three-dimensional space model according to the corresponding moving body mark information.
7. The local image recognition method based on AR smart glasses according to claims 3 and 6, wherein the method further comprises:
receiving a moving body marking and tracking instruction from the cooperative equipment;
searching for and identifying the moving body mark in the three-dimensional space model according to the received moving body mark;
and if the moving body mark in the three-dimensional space model is consistent with the received moving body mark, continuing the tracking in the three-dimensional space model.
8. A local image recognition system based on AR intelligent glasses is characterized by specifically comprising:
the global construction module is used for acquiring dynamic image information in the visual field range of the equipment in real time, analyzing the motion characteristics of the image information and generating a moving body mark;
the sampling identification module is used for carrying out high-definition amplification sampling on the local image according to the moving body mark to generate the characteristic information of the moving body mark;
the tracking identification module is used for performing cross comparison analysis on the characteristic information of the moving body mark and preset object information to generate a tracking instruction;
and the cooperative tracking module is used for sending a tracking instruction to the cooperative equipment through the intranet and tracking the moving body.
9. The local image recognition system based on AR smart glasses according to claim 8, wherein the global construction module specifically comprises:
an environment image acquisition unit, used for acquiring dynamic image information within the visual field of the equipment in real time;
The space motion model unit is used for recording and storing the position information of the equipment in real time, establishing a three-dimensional space model and generating an equipment motion track; reading static image information in the dynamic image information frame by frame, and acquiring distance information and equipment position information in the static image information; establishing a real-time object model in the three-dimensional space model according to the distance information in the static image information corresponding to the equipment position information, and storing the real-time object model; generating an object model motion track according to the continuously updated real-time object model;
and the moving body identification unit is used for marking the moving body of the object model in the moving state according to the motion trail of the object model.
10. The local image recognition system based on AR smart glasses according to claim 9, wherein the sampling recognition module specifically comprises:
the individual relative positioning module is used for acquiring the relative position and distance data of the moving body mark according to the real-time object model;
the individual image acquisition unit is used for carrying out high-definition real-time sampling on a local image according to the relative position and the distance data of the moving body mark to obtain a local dynamic high-definition image;
and the individual characteristic acquisition unit is used for extracting features from the local dynamic high-definition image and generating the feature information of the moving body mark.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202110464722.2A | 2021-04-28 | 2021-04-28 | Local image identification method and system based on AR intelligent glasses
Publications (1)
Publication Number | Publication Date
---|---
CN113781520A (en) | 2021-12-10
Family
ID=78835715
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110464722.2A (publication CN113781520A, pending) | Local image identification method and system based on AR intelligent glasses | 2021-04-28 | 2021-04-28
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113781520A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004201231A (en) * | 2002-12-20 | 2004-07-15 | Victor Co Of Japan Ltd | Monitoring video camera system |
CN106445173A (en) * | 2016-11-25 | 2017-02-22 | 四川赞星科技有限公司 | Method and device for converting objective state |
WO2018210305A1 (en) * | 2017-05-18 | 2018-11-22 | 腾讯科技(深圳)有限公司 | Image identification and tracking method and device, intelligent terminal and readable storage medium |
CN109086726A (en) * | 2018-08-10 | 2018-12-25 | 陈涛 | A kind of topography's recognition methods and system based on AR intelligent glasses |
CN109240496A (en) * | 2018-08-24 | 2019-01-18 | 中国传媒大学 | A kind of acousto-optic interactive system based on virtual reality |
WO2020045837A1 (en) * | 2018-08-28 | 2020-03-05 | 김영대 | Method for smart-remote lecturing using automatic scene-transition technology having artificial intelligence function in virtual and augmented reality lecture room |
CN112101269A (en) * | 2020-09-23 | 2020-12-18 | 上海工艺美术职业学院 | Information processing method, device and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108965687B (en) | Shooting direction identification method, server, monitoring method, monitoring system and camera equipment | |
CN111830953B (en) | Vehicle self-positioning method, device and system | |
JP4717760B2 (en) | Object recognition device and video object positioning device | |
CN109934848B (en) | A method for precise positioning of moving objects based on deep learning | |
CN110458025B (en) | A target recognition and localization method based on binocular camera | |
KR101634966B1 (en) | Image tracking system using object recognition information based on Virtual Reality, and image tracking method thereof | |
EP3552388B1 (en) | Feature recognition assisted super-resolution method | |
CN107665505B (en) | Method and device for realizing augmented reality based on plane detection | |
CN106233371A (en) | Select the panoramic picture for the Annual distribution shown | |
US20230063939A1 (en) | Electro-hydraulic varifocal lens-based method for tracking three-dimensional trajectory of object by using mobile robot | |
CN107665507B (en) | Method and device for realizing augmented reality based on plane detection | |
CN112207821B (en) | Target searching method of visual robot and robot | |
WO2021209981A1 (en) | Artificial intelligence and computer vision powered driving-performance assessment | |
CN116403139A (en) | A Visual Tracking and Localization Method Based on Target Detection | |
CN111062971B (en) | Deep learning multi-mode-based mud head vehicle tracking method crossing cameras | |
CN113256731A (en) | Target detection method and device based on monocular vision | |
CN115035626A (en) | An AR-based intelligent inspection system and method for scenic spots | |
CN111783675A (en) | Intelligent city video self-adaptive HDR control method based on vehicle semantic perception | |
Teepe et al. | EarlyBird: Early-fusion for multi-view tracking in the bird's eye view | |
WO2023160722A1 (en) | Interactive target object searching method and system and storage medium | |
CN112598739B (en) | Infrared target tracking method, system and storage medium for mobile robot based on spatio-temporal feature aggregation network | |
CN109977853A (en) | A kind of mine group overall view monitoring method based on more identifiers | |
CN113781520A (en) | Local image identification method and system based on AR intelligent glasses | |
CN111246116B (en) | Method for intelligent framing display on screen and mobile terminal | |
CN114913470B (en) | Event detection method and device |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |