
WO2007140523A1 - Target recognition method - Google Patents

Target recognition method

Info

Publication number
WO2007140523A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature
target
feature vector
vector set
image
Prior art date
Application number
PCT/AU2007/000786
Other languages
English (en)
Inventor
Mark A. Cronan
Original Assignee
Operational Solutions Management Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2006903041A external-priority patent/AU2006903041A0/en
Application filed by Operational Solutions Management Pty Ltd filed Critical Operational Solutions Management Pty Ltd
Priority to AU2007257324A priority Critical patent/AU2007257324A1/en
Publication of WO2007140523A1 publication Critical patent/WO2007140523A1/fr

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/255 Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/42 Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V 10/422 Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation, for representing the structure of the pattern or shape of an object therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/05 Recognition of patterns representing particular kinds of hidden objects, e.g. weapons, explosives, drugs

Definitions

  • the present invention relates to the field of image recognition and methods for identifying objects.
  • AIS transmitters, which broadcast identification information, are fitted to ships.
  • such a system is open to abuse, however, since the AIS transmitter can be tampered with to transmit false identification information.
  • visual cross-check identification therefore still needs to be employed as verification, and it is desirable for this visual recognition to be conducted automatically by computers.
  • Feature finding involves complex methods with significant processing overheads. These methods produce models or parameterised elements that are used to search, attempting to derive feature measurements that can be fitted against stored feature measurements of known objects.
  • prior feature extraction methods have required approximately 55 seconds per image. This delay between when the image is taken and when the characteristics of the features are identified can be crucial in security applications where, for example, image recognition is used to identify moving objects such as vehicles, aircraft or vessels.
  • US Patent No. 6,118,886 discloses an "Automatic target recognition apparatus and method". This approach details the processing involved in achieving the features needed for classification.
  • the classification technique uses a fractal dimension value to measure target edge contortion in an attempt to classify land mines.
  • US Patent Application No. 2003/0138146 discloses a method for determining the attributes of features within a captured image, for example, determining the readings of various instrument indicators from an image of a cockpit instrument panel.
  • the method relies upon the use of a template to extract the known location of sub-images, i.e. each instrument, from the overall captured image to extract the required feature, i.e. the instrument indicator. Processing of the extracted feature allows the instrument reading to be determined.
  • because the method presupposes that the cockpit instrument panel is known and conforms to the template used, it is not suitable for identification of an unknown object.
  • the invention provides a method of creating a multi-dimensional search space that can be used to categorise noisy data.
  • the method first defines a description language for the features whose separation allows for expected noise variance from real life readings. This is done such that each feature type maps to one or more dimensions (arranged in an orthogonal basis).
  • the set of features results in a vector representing the image or data set.
  • a method of recognising the identity of a target object from a plurality of known objects, including the steps of: obtaining a digital visual image of said target object; applying a feature extraction method to said image to extract one or more visual features of said target object; for each extracted visual feature: establishing one or more values for one or more predetermined feature categorisation parameters for said extracted visual feature, and creating one or more target feature vectors based upon said established value(s); collating all created target feature vectors to form a target feature vector set; comparing said target feature vector set with feature vector sets of known objects; and providing an indication of one or more known objects which have a feature vector set substantially matching said target feature vector set.
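The claimed steps can be sketched in a few lines of Python. Everything below is an illustrative placeholder rather than the patent's implementation: the categorisation parameters are toy lambdas, and "substantially matching" is reduced to an exact set comparison.

```python
# Hypothetical sketch of the claimed pipeline: categorise each extracted
# feature into a vector, collate the vectors into a set, and compare that
# set against the vector sets of known objects.

def categorise(feature, parameters):
    """Establish a value per categorisation parameter, giving one target
    feature vector for the feature."""
    return tuple(param(feature) for param in parameters)

def recognise(target_features, known_objects, parameters):
    """Collate target feature vectors and indicate known objects whose
    feature vector set matches (here: exact set equality)."""
    target_set = {categorise(f, parameters) for f in target_features}
    return [name for name, vector_set in known_objects.items()
            if vector_set == target_set]

# Toy data: features carry a relative position and a shape label.
parameters = [
    lambda f: round(f["x"], 1),   # quantised relative x position
    lambda f: round(f["y"], 1),   # quantised relative y position
    lambda f: f["shape"],         # shape category
]
target = [{"x": 0.11, "y": 0.14, "shape": "squarish"}]
known = {"HMAS Example": {(0.1, 0.1, "squarish")}}
print(recognise(target, known, parameters))  # → ['HMAS Example']
```

Note how the quantisation in the parameters absorbs measurement noise: 0.11 and 0.14 both collapse onto the stored 0.1 bin.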
  • the establishing step further includes: establishing one or more values for one or more alternative categorisation parameters for the respective extracted visual feature; and creating one or more alternative target feature vectors based upon said further established value(s).
  • all the created alternative target feature vectors are collated to form an alternative target feature vector set; whereby the method further includes: comparing said alternative target feature vector set with feature vector sets of known objects; and providing an indication of known object(s) which have a feature vector set substantially matching said alternative target feature vector set.
  • a target image is analysed by feature extraction methods to identify groups of pixels that are related in some way. This could be by region colour, edge detection or other means to determine the boundaries and extent of a feature.
  • Each feature is categorised using chosen categorisations.
  • the result is a vector representing the feature.
  • using fuzzy logic, multiple vectors can be created for the same feature, representing different possibilities of categorising the feature using different categorisations.
  • a search space is populated with the positions of all known object feature vectors. Each target image vector is then checked against the search space to determine which object vectors are within a defined distance (nearest neighbours). The categorisation ensures that those within the defined distance categorise to the same vector point. The matching object vectors represent candidates for classifying the input image or data.
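Because categorisation collapses nearby measurements onto the same vector point, the "nearest neighbour" check can reduce to an exact lookup. A minimal sketch, assuming a hash map from vector point to the objects whose signatures contain a feature at that point:

```python
# Illustrative search space: a dict keyed by categorised feature vector.
from collections import defaultdict

search_space = defaultdict(set)

def populate(object_name, feature_vectors):
    """Add every feature vector of a known object's signature."""
    for v in feature_vectors:
        search_space[v].add(object_name)

def candidates(target_vector):
    """Objects with a feature at the same categorised vector point."""
    return search_space.get(target_vector, set())

populate("Ship A", [(1, 2, "tall"), (3, 4, "wide")])
populate("Ship B", [(1, 2, "tall")])
print(candidates((1, 2, "tall")))  # both ships share this vector point
```

This is one possible data structure consistent with the text, not the patent's prescribed one; a tree index would serve equally well.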
  • the total feature fit is assessed for the target giving an overall probability of target identification.
  • Preferred implementations of the present invention provide the advantage of ease of calculation and speed at which object recognition can be performed concurrently against extremely large datasets of known object feature vectors.
  • the search time is logarithmic with respect to the number of signatures.
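The logarithmic claim can be illustrated with a sorted index of vector points, where membership is resolved by binary search in O(log n). This structure is an assumption for illustration, not the patent's stated implementation:

```python
# Binary search over a sorted list of feature vector points.
import bisect

signatures = sorted([(1, 1, 0), (2, 5, 1), (3, 2, 2), (7, 0, 1)])

def contains(point):
    """O(log n) membership test via bisect."""
    i = bisect.bisect_left(signatures, point)
    return i < len(signatures) and signatures[i] == point

print(contains((3, 2, 2)))  # → True
print(contains((4, 4, 4)))  # → False
```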
  • Fig. 1 illustrates a captured image of a target object in the form of a ship;
  • Fig. 2 is an exploded view of a feature of the ship of Fig. 1, being the ship's funnel;
  • Fig. 3 illustrates an example of a search space for a feature vector.
  • the present invention can be implemented on common computing platforms such as a PC, Apple, IBM, Sun, or HP utilising commonly known operating systems such as Windows, HPUX, Solaris, Linux, BSD Unix, or Mac OS and in common programming languages such as Visual Basic, C++, C#, Pascal and Java. It will be appreciated, however, that the present invention is not specific to any particular hardware or software implementation, and is at a conceptual level above specifics of implementation. It is to be understood that various other embodiments and variations of the invention may be produced without departing from the spirit or scope of the invention. The following is provided to assist in understanding the practical implementation of one embodiment of the invention.
  • A multi-dimensional search space: a multi-dimensional search space is defined by each of the category types selected. Each dimension of the multi-dimensional space should represent an orthogonal category type. The full set of categories to be used defines a vector for a feature. Each vector defines a feature in some way.
  • A target signature: a collection of features that identify an object. More than one signature may be applicable to the same object.
  • A data set of object signatures: the total data set of known targets. Each target may have one or more signatures. Each signature has one or more feature vectors.
  • the target signature result data set links the object identification, signature identifier and the results of matching the object signature features against the target image features.
  • Feature categorisation definition: feature measurements are first defined into categorisations. These categorisations could be relative position, colour, density, shape or some other type of value describing the object. The choice of the coarseness of the categorisations, and of the method determining which set of values equates to a categorisation, needs to be made dependent on the application domain. It is important to choose domain-specific categorisations that separate different objects rather than categorisations that apply to almost all objects. For example, a human face categorisation of "Has Eyebrows" (everyone has at least one) would not greatly differentiate between most humans, whereas "Eye Colour" would. Finally, the categorisations are ordered where possible to allow distance or fuzzy vectors to match.
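A coarse, ordered categorisation as described above might look like the following sketch; the bin edges are arbitrary choices for the example, not values from the patent:

```python
# Map a continuous measurement (aspect ratio) onto ordered categories so
# that expected noise stays within one bin and inter-category distance is
# meaningful for fuzzy matching: tall (0) < squarish (1) < wide (2).

def categorise_aspect(width, height):
    ratio = width / height
    if ratio < 0.8:
        return 0        # tall
    elif ratio <= 1.25:
        return 1        # squarish
    else:
        return 2        # wide

print(categorise_aspect(10, 30))  # → 0 (tall)
print(categorise_aspect(21, 20))  # → 1 (squarish)
print(categorise_aspect(50, 10))  # → 2 (wide)
```

Returning integers rather than labels keeps the categories ordered, which is what allows a distance-based or fuzzy comparison later.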
  • the target signature result data set is initially cleared.
  • the search space is populated with the features contained in the data set of object signatures. This can be the full set or a selected group of signatures. Feature vectors from the selected target signatures identify their position in the search space and the identity of the target. Images from cameras are typically arranged as a two-dimensional matrix of pixel colours. These colours could be represented as a binary value (e.g. 0-255) or a colour value (e.g. RGB). Each pixel represents a single colour at a location on the image. Using different techniques, a subset of the pixels representing the target can be selected from the full set of image pixels (segmentation). This is the target image.
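Segmentation in the sense just described can be sketched with a toy threshold rule; real segmentation would use region colour, edge detection or similar, so treat this purely as an illustration of selecting a pixel subset:

```python
# Select the subset of pixels whose greyscale value (0-255) exceeds a
# threshold; the result, a set of (row, col) positions, is the "target
# image" subset of the full pixel matrix.

def segment(pixels, threshold=128):
    return {(r, c)
            for r, row in enumerate(pixels)
            for c, value in enumerate(row)
            if value > threshold}

image = [
    [  0,   0, 200],
    [  0, 255, 210],
]
print(sorted(segment(image)))  # → [(0, 2), (1, 1), (1, 2)]
```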
  • the identified features in a target image are categorised by the predefined categorisation methods, providing a set of feature vectors. This categorisation is performed in exactly the same manner as for the known object features. Each feature identifies a unique point in the vector space.
  • Each object feature vector's position in proximity to the target feature vector's position is assessed and a total measure of fit established. This measure is then used to update the target signature result data set. If the target signature is not in the target signature result data set, its details are added from the data set of object signatures. The distance from a measured point to these vectors provides a goodness of fit of the classification. Targets in the target signature result data set with the best measure of fit can then be considered as candidates for identification.
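The fit assessment above can be sketched as an accumulation over signatures. The scoring rule here (matched features normalised by the larger of the two feature counts) is an assumption for illustration, not the patent's formula:

```python
# Accumulate, per known signature, how many target feature vectors land
# on one of its vector points, and normalise into a rough fit score; the
# best-scoring entries become identification candidates.

def assess(target_vectors, object_signatures):
    results = {}  # plays the role of the target signature result data set
    for name, signature in object_signatures.items():
        matched = sum(1 for v in target_vectors if v in signature)
        results[name] = matched / max(len(signature), len(target_vectors))
    return results

signatures = {
    "Ship A": {(1, 1), (2, 2), (3, 3)},
    "Ship B": {(1, 1), (9, 9)},
}
scores = assess([(1, 1), (2, 2)], signatures)
print(max(scores, key=scores.get))  # best-fit candidate
```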
  • Categorisation further helps reduce the size of signatures by eliminating common elements between objects.
  • the objects that are being assessed may nearly all have the same categorisation for certain features.
  • All ships have a large area on the bottom called a hull, and therefore categorising the hull as a feature does not assist in the search process (i.e. all ships have hulls, so all objects in the data set would have a hull feature).
  • the vector space allows multiple points to be identified as belonging to the same object. This allows differences to be categorised to multiple vector points, assisting in the determination of fit during the matching phase.
  • the method can provide significant separation between total feature vectors allowing for significant differentiation to be identified between objects.
  • Judicious selection of the categories used within the signatures can make them resilient to variations in object size, image aspect, scene lighting and sensor sensitivity.
  • the resultant target signatures can be used to train the object signature data sets.
  • Figure 1 is a segmented black and white image of a ship 10 at sea. This is a target image in the context of the method. The target image is scanned and the sub features of the target image are determined. For example, the rectangle 12 located on the ship's funnel 14 may be selected as a feature as shown in Figure 2.
  • if the categorisation methods chosen include relative position of the feature as a percentage of the vessel length and height, then this feature is located 10% from the left and 15% from the top.
  • the categorisation of shape as tall, squarish or wide would categorise this feature as squarish.
  • the three elements of the vector are the three dimensions in the search space, as shown in Figure 3.
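The worked example above can be written out as code: the funnel rectangle's three categorised values form one feature vector, i.e. one point in the three-dimensional search space. The percentages and shape label follow the text; the 5% quantisation step is an assumption for illustration:

```python
# Build the three-element feature vector for the funnel rectangle:
# (percent from left, percent from top, shape category).

def feature_vector(pct_from_left, pct_from_top, shape):
    quantise = lambda p: 5 * round(p / 5)   # snap positions to 5% bins
    return (quantise(pct_from_left), quantise(pct_from_top), shape)

funnel_mark = feature_vector(10, 15, "squarish")
print(funnel_mark)  # → (10, 15, 'squarish')
```

A slightly noisy measurement such as (12%, 14%) would quantise to the same point, which is exactly what makes the later search-space lookup tolerant of real-world readings.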
  • the target feature vector identifies a position in the vector space.
  • a search of the vector space for the feature near that location may find a set of features from different ships. For every ship that the target feature vector matches, the probability of a unique identification is increased. This process continues until all features in the target image have been assessed.
  • the set of ships that have been selected is then checked for goodness of fit. This can be achieved by comparing the number of features expected on the vessel and the number of features in the target image against the number that matched. This provides an overall measure of the probability that each selected ship matches the image.
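One possible reading of that goodness-of-fit check, comparing the matched count against both the features expected on the vessel and those found in the image; the product formula is an assumption, not from the patent:

```python
# Combine two ratios: how many image features are explained by the
# vessel's signature, and how many expected vessel features were seen.

def match_probability(matched, expected_on_vessel, found_in_image):
    precision = matched / found_in_image      # image features explained
    recall = matched / expected_on_vessel     # vessel features observed
    return precision * recall

print(match_probability(8, 10, 8))  # → 0.8
```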
  • a sequence of ship images can be assessed using this method to build confidence in the ship's class and/or the individual ship name.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention concerns a method of recognising the identity of a target object from among a plurality of known objects. As a first step, a digital visual image of the target object is obtained. A feature extraction method is applied to the image to extract one or more visual features of the target object. For each extracted visual feature, one or more values are established for one or more predetermined feature categorisation parameters for the extracted visual feature, and one or more target feature vectors are created based upon the established value(s). All created target feature vectors are collated to form a target feature vector set. The target feature vector set is compared with feature vector sets of known objects. Based upon the comparison, an indication is provided of one or more known objects which have a feature vector set substantially matching the target feature vector set.
PCT/AU2007/000786 2006-06-05 2007-06-04 Target recognition method WO2007140523A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2007257324A AU2007257324A1 (en) 2006-06-05 2007-06-04 Target recognition method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2006903041 2006-06-05
AU2006903041A AU2006903041A0 (en) 2006-06-05 Target Recognition Method

Publications (1)

Publication Number Publication Date
WO2007140523A1 true WO2007140523A1 (fr) 2007-12-13

Family

ID=38800958

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2007/000786 WO2007140523A1 (fr) 2006-06-05 2007-06-04 Target recognition method

Country Status (2)

Country Link
AU (1) AU2007257324A1 (fr)
WO (1) WO2007140523A1 (fr)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2471647A (en) * 2009-06-16 2011-01-12 Aptina Imaging Corp Object Detection Using Feature Responses
US8345125B2 (en) 2009-06-16 2013-01-01 Aptina Imaging Corporation Object detection using an in-sensor detector
CN102855493A (zh) * 2012-08-02 2013-01-02 成都众合云盛科技有限公司 Object recognition system
CN103544505A (zh) * 2013-07-31 2014-01-29 天津大学 Ship recognition system and method for UAV aerial images
US8675099B2 (en) 2009-06-16 2014-03-18 Aptina Imaging Corporation Efficient system and method for face tracking
CN106022280A (zh) * 2016-05-26 2016-10-12 西安空间无线电技术研究所 Typical ship target recognition method based on hierarchical invariant features
US9659380B1 (en) 2016-01-07 2017-05-23 International Business Machines Corporation Object position tracking using motion estimation
CN108229342A (zh) * 2017-12-18 2018-06-29 西南技术物理研究所 Automatic detection method for sea-surface ship targets

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030138146A1 (en) * 2002-01-23 2003-07-24 Honeywell Inc. Methods, functional data, and systems for image feature translation
US6614917B1 (en) * 1999-10-22 2003-09-02 Lockheed Martin Corporation Dynamic process for identifying objects in multi-dimensional data
US20040208372A1 (en) * 2001-11-05 2004-10-21 Boncyk Wayne C. Image capture and identification system and process
US20040252882A1 (en) * 2000-04-13 2004-12-16 Microsoft Corporation Object recognition using binary image quantization and Hough kernels
WO2006001525A1 (fr) * 2004-06-28 2006-01-05 Canon Kabushiki Kaisha Method and apparatus for object recognition
WO2006002299A2 (fr) * 2004-06-22 2006-01-05 Sarnoff Corporation Method and apparatus for three-dimensional object recognition

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6614917B1 (en) * 1999-10-22 2003-09-02 Lockheed Martin Corporation Dynamic process for identifying objects in multi-dimensional data
US20040252882A1 (en) * 2000-04-13 2004-12-16 Microsoft Corporation Object recognition using binary image quantization and Hough kernels
US20040208372A1 (en) * 2001-11-05 2004-10-21 Boncyk Wayne C. Image capture and identification system and process
US20030138146A1 (en) * 2002-01-23 2003-07-24 Honeywell Inc. Methods, functional data, and systems for image feature translation
WO2006002299A2 (fr) * 2004-06-22 2006-01-05 Sarnoff Corporation Method and apparatus for three-dimensional object recognition
WO2006001525A1 (fr) * 2004-06-28 2006-01-05 Canon Kabushiki Kaisha Method and apparatus for object recognition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ALVES ET AL.: "Robust Recognition of Ship Types from an Infrared Silhouette", COMMAND AND CONTROL RESEARCH AND TECHNOLOGY SYMPOSIUM, SAN DIEGO, CA, June 2004 (2004-06-01), pages 1 - 18, XP008090710, Retrieved from the Internet <URL:http://www.cs.nps.navy.mil/people/faculty/rowe/oldstudents/infrared_ccrts_04.htm> *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2471647A (en) * 2009-06-16 2011-01-12 Aptina Imaging Corp Object Detection Using Feature Responses
US8306337B2 (en) 2009-06-16 2012-11-06 Aptina Imaging Corporation Use of Z-order data in an image sensor
US8345125B2 (en) 2009-06-16 2013-01-01 Aptina Imaging Corporation Object detection using an in-sensor detector
US8675099B2 (en) 2009-06-16 2014-03-18 Aptina Imaging Corporation Efficient system and method for face tracking
GB2471647B (en) * 2009-06-16 2016-03-23 Aptina Imaging Corp Use of Z-order data in an image sensor
CN102855493A (zh) * 2012-08-02 2013-01-02 成都众合云盛科技有限公司 Object recognition system
CN103544505A (zh) * 2013-07-31 2014-01-29 天津大学 Ship recognition system and method for UAV aerial images
US9659380B1 (en) 2016-01-07 2017-05-23 International Business Machines Corporation Object position tracking using motion estimation
US10255504B2 (en) 2016-01-07 2019-04-09 International Business Machines Corporation Object position tracking using motion estimation
CN106022280A (zh) * 2016-05-26 2016-10-12 西安空间无线电技术研究所 Typical ship target recognition method based on hierarchical invariant features
CN108229342A (zh) * 2017-12-18 2018-06-29 西南技术物理研究所 Automatic detection method for sea-surface ship targets
CN108229342B (zh) * 2017-12-18 2021-10-26 西南技术物理研究所 Automatic detection method for sea-surface ship targets

Also Published As

Publication number Publication date
AU2007257324A1 (en) 2007-12-13

Similar Documents

Publication Publication Date Title
Saha et al. Unsupervised deep change vector analysis for multiple-change detection in VHR images
Islam et al. Real time hand gesture recognition using different algorithms based on American sign language
CN107423690B (zh) Face recognition method and device
WO2007140523A1 (fr) Target recognition method
EP3633605A1 (fr) Information processing device, information processing method, and program
CN101609504B (zh) Infrared image sea-surface target detection, recognition and localisation method
US20180349716A1 (en) Apparatus and method for recognizing traffic signs
JP2006146626A (ja) Pattern recognition method and apparatus
Mousavian et al. Semantic image based geolocation given a map
CN111460884A (zh) Multi-face recognition method based on human body tracking
TR201614118A1 (tr) A marine vessel recognition method
CN108073940B (zh) Method for 3D object instance detection in unstructured environments
CN114821358A (zh) Maritime ship target extraction and recognition method for optical remote sensing images
CN108257179B (zh) Image processing method
US4323880A (en) Automatic target screening
CN116109936B (zh) Target detection and recognition method based on optical remote sensing
CN110942473A (zh) Moving target tracking and detection method based on gridded feature point matching
WO2008118706A1 (fr) Identifying whether a candidate object is from an object class
Feng et al. 3D feature point extraction from LiDAR data using a neural network
CN116563591A (zh) Optical smoke detection method under a sea-sky background based on feature extraction
Perrotton et al. Automatic object detection on aerial images using local descriptors and image synthesis
Khuspe et al. Robust image forgery localization and recognition in copy-move using bag of features and SVM
Truong et al. Combining multi-threshold saliency with transfer learning for ship detection and information extraction from optical satellite images
Thoreau et al. SaRNet: A dataset for deep learning assisted search and rescue with satellite imagery
Xu et al. Coin recognition method based on SIFT algorithm

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07719029

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2007257324

Country of ref document: AU

ENP Entry into the national phase

Ref document number: 2007257324

Country of ref document: AU

Date of ref document: 20070604

Kind code of ref document: A

122 Ep: pct application non-entry in european phase

Ref document number: 07719029

Country of ref document: EP

Kind code of ref document: A1