
WO2014152923A1 - Apparatus and method for producing anomaly images - Google Patents

Apparatus and method for producing anomaly images

Info

Publication number
WO2014152923A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
object class
ordinary
class
vehicle
Application number
PCT/US2014/028258
Other languages
French (fr)
Inventor
Kevin M. Holt
Original Assignee
Varian Medical Systems, Inc.
Application filed by Varian Medical Systems, Inc. filed Critical Varian Medical Systems, Inc.
Publication of WO2014152923A1 publication Critical patent/WO2014152923A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0004 - Industrial image inspection
    • G06T7/0008 - Industrial image inspection checking presence/absence
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10116 - X-ray image
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20224 - Image subtraction
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30108 - Industrial image inspection

Definitions

  • This invention relates generally to image processing and more particularly to identifying anomalies in a given image relative to reference information.
  • X-ray images are sometimes employed to facilitate examining an object of interest such as an automobile, delivery van, truck, trailer, mobile shipping container, suitcase, or the like. To a very large extent these efforts are aimed at identifying the otherwise-hidden presence of specific objects or materials of concern. Examples of such objects include but are not limited to weapons of various kinds, smuggled goods, and manufacturing defects. Examples of such materials of concern include but are not limited to known threatening materials such as explosive and/or radioactive materials, materials known to represent a defect in context, such as carbon deposits in aluminum castings, or to materials that may or may not represent a defect in context, such as any non-aluminum particles in an aluminum casting.
  • Weapons can assume any of a wide variety of form factors and materials. Image-based identification of a hidden weapon can be further complicated when the weapon has been broken down into two or more parts that are separated from one another. Smuggled goods, too, can take a variety of forms and sizes, drugs may be molded into arbitrary shapes, and an inspector may not know ahead of time what types of objects he or she may be looking for.
  • FIG. 1 comprises a flow diagram as configured in accordance with various embodiments of the invention
  • FIG. 2 comprises a flow diagram as configured in accordance with various embodiments of the invention.
  • FIG. 3 comprises a plurality of images as configured in accordance with various embodiments of the invention.
  • FIG. 4 comprises a plurality of images as configured in accordance with various embodiments of the invention.
  • FIG. 5 comprises a block diagram as configured in accordance with various embodiments of the invention.
  • FIG. 6 comprises a flow diagram as configured in accordance with various embodiments of the invention.
  • a control circuit accesses information that relates to an ordinary set (of images) for a particular object class wherein this ordinary set defines a range of images that represent a presumed non-extraordinary state for the object class.
  • These teachings are highly flexible and will accommodate wide variation as to the nature of the object class.
  • Non-limiting examples include object classes that constitute only a single specific vehicle, a particular vehicle make and model (for a given year), a particular cargo manifest, or a particular manufactured item type, and so forth.
  • X-ray images that are regularly captured of a given vehicle (such as, for example, a vehicle that daily enters an embassy compound via an entry gate) will capture various images of that vehicle in differing states that are all nevertheless non-extraordinary in that the differences in the vehicle from one day to the next do not represent anything out of the ordinary in terms of constituting or suggesting a threat to security, or a smuggling event.
  • As one simple example, on one day a seat back may be up while on another day that same seat back may be folded down. Although different configuration states, both of these states are non-extraordinary. The ordinary set for this vehicle, therefore, could accommodate both configuration states for that seat back as being "ordinary."
  • the present teachings will accommodate accessing an object class record that contains information related to the aforementioned ordinary set for the object class. That data can then be used to project an image of the particular unique object onto the ordinary set for the object class to provide corresponding projection information. An anomaly image can then be produced, at least in part, by identifying differences between the image of the particular object and the projection information.
  • the anomaly image need not identify either the seat back being up or down as being an "anomaly.” Accordingly, the foregoing analysis need not identify a present configuration for the seat back as being an anomaly even when one configuration might occur far more frequently than another.
  • these teachings permit an automated analysis to identify differences that are in fact different from a variety of different views of what is ordinary for a given object class.
  • a corresponding report in these regards can significantly help security personnel to focus their attention on extraordinary differences to determine whether such differences are, in fact, a security threat.
  • the control circuit accesses data representing an image of a particular unique object.
  • the control circuit accesses a memory to retrieve this data.
  • These teachings will accommodate a variety of image types and modalities. By one approach that can be particularly useful in many varied application settings the image comprises an x-ray image or other image formed, at least in part, by the use of penetrating energy. For many application settings it will suffice that the control circuit so access only a single two-dimensional image of the particular unique object.
  • these teachings will accommodate accessing, for example, a three-dimensional image (such as a computed tomography image) of the particular unique object and/or multiple such images of the particular unique object (such as, for example, images of the particular unique object captured from different angles).
  • the above-described activity could comprise capturing a side-elevational x-ray image of a given vehicle upon that vehicle having entered a secure access area of an embassy compound. That said, these teachings will also readily accommodate essentially any field of view as may be useful in a given application setting. For example, in many cases it may be useful to mount the x-ray source above the object to be imaged and the detector(s) in the floor (or vice versa). These teachings will accommodate using automatically-captured images if desired.
  • the control circuit identifies an object class as corresponds to the particular unique object for which the foregoing data has been accessed.
  • This identification can comprise a partially or wholly automatic activity as desired.
  • the specific nature of this identification can vary widely with the application setting. For example, when the objects are specific particular vehicles (such as the automobiles driven each day by the employees of an embassy to their workplace) the object class may be identified by associating one object class with each unique automatically-read license plate (and hence each unique vehicle).
  • The control circuit can identify the object class for the particular unique vehicle by reading a currently-captured image of the vehicle's license plate (which currently-captured image may be captured using a different camera than, say, the above-inferred x-ray imaging system).
  • When the object is a vehicle, these teachings will also accommodate associating one object class with each vehicle make and model (and presumably a corresponding model year or range of model years as well).
  • When the object is cargo (contained, for example, within a van, truck, trailer, or the like), these teachings will accommodate deriving the object class from the object's cargo manifest. For example, all shipments with manifests declaring a shipment of frozen broccoli could be lumped together into one object class. Alternatively, all shipments declared to contain automobile parts shipping between Ohio and Nuevo Leon could be lumped together into another object class.
  • When the object is an item within a cargo shipment, these teachings will accommodate associating one object class with a collection of all items within the cargo shipment (which can be particularly useful when such items are all generally of a like type, form factor, and material). And when the object is a manufactured item (such as a refrigerator, a water pump, a computer, an engine block, a turbine blade, castings, and so forth), these teachings will accommodate associating one object class with the manufactured item type (which can, if desired, be as specific, say, as referring to a particular model number for a given manufactured item).
  • The image itself may be used to determine the object class. For example, when the object class is based on vehicle make and model, that make and model could be estimated from the image data itself. As another example, when the object is a manufactured item, the model number could be derived from the image data by performing a first-pass analysis of the image data.
  • By way of illustration, if a casting house produces three models of engine blocks and two models of transmission cases, there would be five object classes, and the exact object class could be determined for each scan by analyzing the overall shape of the object in the scan data to find the matching part model.
  • the control circuit uses the aforementioned identified object class to access an object class record.
  • This object class record contains (exclusively, if desired) information related to an ordinary set for the object class. That is, a range of images that represent a non-extraordinary state for the object class.
  • "non-extraordinary" will be understood to refer to a subjective state assigned by the user but which serves to identify any state for the object that is not a state associated with any level of concern or any state that warrants further notice or inspection.
  • the non-extraordinary state for a given object can subsume any number of different configurations which, although different from one another, do not rise to a level of concern. In fact, one or more of these non-extraordinary states may not be what one might typically think of as "ordinary" in the sense of representing a common state. Rareness alone, however, need not necessarily equate with being extraordinary.
  • By way of a simple illustration, various non-extraordinary states for a given particular vehicle can pertain to different configurations for a given seat back. For example, that seat back may be fully up or fully down, and in practice may be fully up ninety-five percent of the time. To the extent that the user deems both configurations to be non-extraordinary, however, the relative rareness of the down position does not alter that non-extraordinary status.
  • fuel or other fluid levels may be different for every scan, so the ordinary set may include images with a variety of fluid levels (including, perhaps, all possible combinations of all possible fluid levels).
  • the ordinary set for a particular model of engine block may include images of that engine block with a variety of different porosities or density variations in a number of different locations, each known to be innocuous.
  • Such an ordinary set for the object class can itself be formed and/or maintained and updated using any of a variety of approaches.
  • the ordinary set is often continuous and thus infinite in size, and therefore may exist only in concept, rather than being stored directly as a set of images. For example, and referring again to the embassy employee use case presented earlier, a daily x-ray image of each employee's vehicle can be used to build, over time, a growing documented understanding of what is non-extraordinary for each such vehicle.
  • each time a new non-extraordinary image is acquired for a vehicle that image (or information from that image) is added to the object class record, so that that object state is effectively added to the ordinary set for future scans of that object class.
  • Adding information from a single image may also extend the ordinary set by not just what is in that image, but also combinations of features from that image and existing images in the ordinary set. For example, if the ordinary set has an image of a car with its seat up and fuel tank nearly empty, and an image is added with the seat down and fuel tank full, then the new ordinary set may include all four combinations (seat up, tank empty; seat up, tank full; seat down, tank empty; and seat down, tank full) even though two of those combinations were never actually observed. Similarly, if the object class record for a particular car includes an image with a full fuel tank and an image with an empty fuel tank, the ordinary set may include images with all possible fuel levels (i.e., all possible combinations of full and empty).
  • the control circuit uses the aforementioned data (and, by one approach, the object class record) to project the image of the particular unique object onto the aforementioned ordinary set for the object class to thereby provide projection information. This step may also optionally incorporate a geometric transformation to align the data to a canonical ordinary orientation and location of the object.
  • the control circuit then produces an anomaly image by identifying differences between the image of the particular object (or the geometrically transformed object) and the projection information (which anomaly image can then be presented, if desired, via a display of choice as suggested by optional block 106). (In some application settings it may also be useful to display an image of the object as projected onto the ordinary set (i.e., the image with its anomalies removed).)
  • FIG. 3 provides a simple example in these regards that can help to illustrate certain general principles set forth herein.
  • the above-mentioned object class record comprises, at least in part, information related to at least one ordinary-mode image, which is a particular image that is a member of the ordinary set for that object class.
  • the first x-ray image 301 is a recently-captured image of a particular vehicle.
  • the second x-ray image 302 (which may, if desired, be a virtual image corresponding to an object that never actually physically existed) represents, in a single example, an ordinary-mode image that represents a non-extraordinary state for that same vehicle.
  • this second image 302 constitutes a composite of sorts reflecting the use of various previous images of the vehicle in non-extraordinary states.
  • this ordinary-mode image is stored a priori and accessed as needed.
  • this ordinary-mode image may be reconstructed from a plurality of stored previous images (also taking into account the particular image for which the ordinary-mode image is requested) each time the process requires the ordinary-mode image.
  • the third image 303 constitutes a raw anomaly view as per the foregoing.
  • the third image 303 in this illustrative example presents only portions of the first image 301 that are anomalous as compared to the second image 302.
  • the fourth image 304 then shows a cleaned anomaly image that shows only substantial regions from the third image 303, discarding minor differences.
  • the fifth image 305 constitutes an anomaly overlay image that shows the original image 301 while highlighting anomalies from the anomaly image 304. This overlay may be performed by, for example, highlighting any anomalies in a special color, by drawing boxes around them, and/or by flashing their pixels.
  • FIG. 3 readily illustrates how much easier it is for the observer to identify possible areas to inspect in the fourth image 304 or fifth image 305 as compared to the first image 301.
  • FIG. 6 illustrates how the process may appear from the user's perspective.
  • Optionally, before scanning online, a user may initialize one or more object classes ahead of time by scanning exemplar objects (objects that have been validated by other means and are known to be in an ordinary state). For each object class, the control circuit makes a new object class record containing information derived from the corresponding exemplar scans. When this optional process is finished, online scanning can begin.
  • When a new object is scanned (the scan itself might be manually initiated by a user, or it might be automatically triggered when the object enters the scan chamber, say by triggering a light curtain), the control circuit tries to identify an object class for it. If an object class is successfully found, the control circuit generates an anomaly image (this anomaly image or a corresponding anomaly overlay image may optionally be displayed to the user, or may be hidden from view at this point). The control circuit then detects non-trivial blobs in the anomaly image.
  • If no non-trivial blobs are detected, the image is deemed ordinary. By one approach, the image is then added to the object class record (and hence absorbed into the ordinary set) for future scans of this object class; by another approach, the image is not added to the object record. In either case, the object can then be automatically released. In such cases, the object can pass through the entire scanner without any human intervention whatsoever.
  • If non-trivial blobs are detected, the anomaly image (or anomaly image overlay) is displayed to the user, and the user is prompted for a decision. At this point the user manually inspects the image (and perhaps the physical object) to make a determination. If the user decides that the object is in an ordinary state, the image is added to the object record for this object class. Otherwise, the image is extraordinary and requires corrective action (such as alerting a customs agent, a quality engineer, or the like), and is not added to the object record as an ordinary scan (though it may be logged for other reasons).
  • If no existing object class can be identified, the system can take special steps to create a new class. In this case, there is no a priori knowledge as to what is ordinary or extraordinary, so the user is prompted to analyze the object image. In the same way as if an anomaly was detected, the user must inspect the image (and perhaps the physical object) to make a determination. If the user decides that the object is in an ordinary state, a new object class record is created, containing the information from this object scan. If the user decides that the object is in an extraordinary state, corrective action is required.
  • The process illustrated in FIG. 6 is semi-automated. In the early stages of scanning a new object class, the system requires some feedback from a user to guide it to understand what is ordinary versus extraordinary. However, once the system has gained sufficient knowledge of the range of states considered ordinary for the object class, future scanning can often be performed with no user involvement. If many objects are scanned, this allows very high throughput scanning by removing the bottleneck of human image analysis.
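By way of a non-limiting illustration, the decision flow just described for FIG. 6 can be summarized in the following Python sketch. The helper callables (identify_class, compute_anomaly, find_blobs, ask_user_is_ordinary) are hypothetical stand-ins for the class-identification, projection, blob-detection, and operator-review steps described elsewhere herein; the sketch is illustrative only and is not the claimed implementation.

```python
def handle_scan(image, class_records, identify_class, compute_anomaly,
                find_blobs, ask_user_is_ordinary):
    """One pass of the semi-automated loop sketched in FIG. 6.

    `class_records` maps a class id to a list of reference images.
    The four callables are placeholders for the class-identification,
    projection/anomaly, blob-detection, and operator-review steps.
    """
    class_id = identify_class(image)
    if class_id not in class_records:
        # New class: no prior notion of "ordinary", so the user must decide.
        if ask_user_is_ordinary(image):
            class_records[class_id] = [image]          # seed a new record
            return "released"
        return "corrective action required"

    anomaly = compute_anomaly(image, class_records[class_id])
    if not find_blobs(anomaly):
        # No non-trivial blobs: the scan is ordinary and may be absorbed.
        class_records[class_id].append(image)
        return "released (no user involvement)"

    # Non-trivial anomalies: show the overlay and ask the operator.
    if ask_user_is_ordinary(anomaly):
        class_records[class_id].append(image)
        return "released (user cleared anomaly)"
    return "corrective action required"
```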
  • FIG. 2 provides a more-specific example as regards the foregoing projection activity.
  • the object class record is presumed to comprise data representing images (such as (and perhaps exclusively) x-ray images) of other objects within the object class.
  • the control circuit defines a measure of compactness for an ordinary set, i.e. a measure that indicates whether the ordinary set is smaller (in which case the measure is smaller) or larger (in which case the measure is larger). For example, if the ordinary set includes only a single image of a car with the seat up, the set is very compact (and hence has a small compactness measure). If the ordinary set includes a car with the seat up and seat down, the set is slightly less compact (and hence has a slightly larger compactness measure).
  • the compactness measure for an ordinary set is calculated as a measurement on a particular set of ordinary-mode images.
  • the control circuit defines a type of parameterized geometric transformation, such as a rigid translation, a rigid transformation (i.e. translation and rotation), a similarity transformation (i.e. rigid transformation with scaling and reflection), an affine transformation (i.e. similarity transform with additional skew), planar homography (i.e. mapping a 3D object from its projection on one viewing plane onto its projection on a different viewing plane), curved-surface homography (i.e. mapping a 3D object from its projection on a curved surface to its projection on another (possibly planar) surface), or non-rigid warping with cubic B-splines.
  • block 202 can be omitted, or, equivalently, the geometric transformation is chosen (at least implicitly) to be the identity transform (i.e. a transform that has no effect).
  • the control circuit defines a difference measure (preferably an outlier-resilient difference measure) that measures a total difference of an image (typically a geometrically transformed image) relative to an ordinary-mode image.
  • the control circuit defines a total projection cost that combines the aforementioned measure of compactness for a set of ordinary images, the difference measure of the (typically geometrically transformed) object image relative to its corresponding ordinary-mode image, and the total of all difference measures of each reference image relative to its own corresponding ordinary-mode image.
  • the control circuit determines a set of ordinary-mode images that minimizes the total projection cost.
  • a raw anomaly image is calculated as the difference between the object image and the corresponding ordinary-mode image for that object image. This calculation may be performed explicitly, i.e. literally after block 205 has completed. Or the calculation may be produced as a side-effect of an earlier step.
  • the raw anomaly image may be post-processed to produce a cleaned anomaly image with reduced false alarms, as will be discussed in further detail below.
  • these teachings can serve to consider a collection of N images as points in some N-dimensional space where there is some range of ordinary images that lie in some M-dimensional space, where M is sometimes less than N (sometimes considerably less than N). In such a case, the contents of a given current image that do not lie in that M-dimensional space are fairly viewed as being anomalies. Furthermore, in some cases (in particular when N is larger than M) it can be possible to learn the M-dimensional space directly from the collection of N images.
  • In this formulation, X is the evaluation image (or set of images), Y is a set of ordinary images that is similar to X, E is some set of sparse features (i.e. anomalies), and T is some parameterized geometric transformation.
  • PCA (Principal Component Analysis)
  • SVD (Singular Value Decomposition)
  • FIG. 4 provides illustrative examples in these regards.
  • FIG. 4 comprises twelve basis images labeled from “1" to "12." These twelve images collectively compose a hypothetical ordinary set for the object class where the object class is associated with one particular automobile.
  • the first image, labeled "1,” is the first basis and captures the largest variations in X (from zero) (making this first basis somewhat akin to the mean of X).
  • the second image, labeled "2,” is the second basis and captures the largest variations in X after removing the aforementioned first basis.
  • the third image, labeled "3” is the third basis and captures the largest variations in X after removing the first and second bases.
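By way of a non-limiting illustration, the following Python sketch (assuming NumPy and a stack of equally sized, registered grayscale reference images) shows how such basis images can be obtained with a plain singular value decomposition, and how an evaluation image can be projected onto their span. The function names are illustrative assumptions, and num_bases must not exceed the number of reference images.

```python
import numpy as np

def learn_basis(reference_images, num_bases):
    """Learn `num_bases` basis images from a stack of reference images.

    reference_images: array of shape (N, H, W); returns (num_bases, H, W).
    A plain SVD is used here purely for illustration; the document also
    discusses robust (RPCA) and dictionary-based alternatives.
    """
    n, h, w = reference_images.shape
    x = reference_images.reshape(n, h * w)       # one flattened image per row
    # Rows of vt are orthonormal directions ordered by captured variation,
    # so the first row plays the role of basis image "1" in FIG. 4.
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return vt[:num_bases].reshape(num_bases, h, w)

def project_onto_basis(image, bases):
    """Project an image onto the span of the (orthonormal) basis images."""
    b = bases.reshape(bases.shape[0], -1)        # (M, H*W)
    v = image.reshape(-1)
    coeffs = b @ v
    return (b.T @ coeffs).reshape(image.shape)
```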
  • RPCA (Robust Principal Component Analysis)
  • This foregoing RPCA process can be viewed as a type of batch processing as the evaluation image and all the reference images (where "reference images" are the images stored in a particular object class record) are all processed in one big batch.
  • These teachings will also accommodate online processing by, for example, pre-computing much of the work with respect to the reference images.
  • Such an approach can comprise, for example, performing the full RPCA algorithm on the reference images during a pre-computing stage and then, during an online stage, performing the RPCA algorithm as set forth above with the exclusion of steps 2 and 3 on the evaluation image.
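By way of a non-limiting illustration, the following Python sketch implements a generic principal component pursuit (RPCA) via an inexact augmented Lagrangian, with each column of the input matrix holding one flattened image (the evaluation image together with the reference images). This is a standard textbook formulation offered only as a stand-in; it does not reproduce the specific enumerated steps of the algorithm referenced above.

```python
import numpy as np

def rpca(x, lam=None, tol=1e-7, max_iter=500):
    """Decompose x into a low-rank part L and a sparse part S (x ~ L + S).

    Each column of x holds one flattened image; L then captures the
    ordinary content and S the sparse anomalies. Illustrative only.
    """
    m, n = x.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    norm_two = np.linalg.norm(x, 2)              # spectral norm
    y = x / max(norm_two, np.abs(x).max() / lam) # dual variable
    mu, rho = 1.25 / norm_two, 1.5
    s = np.zeros_like(x)
    for _ in range(max_iter):
        # Low-rank update: singular value thresholding.
        u, sig, vt = np.linalg.svd(x - s + y / mu, full_matrices=False)
        sig = np.maximum(sig - 1.0 / mu, 0.0)
        low_rank = (u * sig) @ vt
        # Sparse update: elementwise soft thresholding.
        resid = x - low_rank + y / mu
        s = np.sign(resid) * np.maximum(np.abs(resid) - lam / mu, 0.0)
        # Dual ascent and penalty growth.
        gap = x - low_rank - s
        y = y + mu * gap
        mu = min(mu * rho, 1e7)
        if np.linalg.norm(gap) / np.linalg.norm(x) < tol:
            break
    return low_rank, s
```

In an online setting of the kind described above, the reference columns can be factored once during a pre-computing stage and only the evaluation column re-processed at scan time.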
  • the present teachings are highly flexible in practice and will accommodate various approaches to achieving such registration.
  • Such approaches can include, for example, both pre-registration and integrated registration approaches.
  • Pre-registration looks for certain features of the object (such as a vehicle's wheels, bumpers, body outline, and so forth) and uses those identified features to perform a pre-registration of the images before performing the foregoing RPCA (where T is omitted).
  • Integrated registration (used in lieu of the foregoing or in combination therewith) adds another step to the RPCA algorithm to update the registration. Using this approach one continually updates the registration while searching for a low-rank decomposition.
  • RASL (Robust Alignment by Sparse and Low-rank decomposition)
  • each basis is presumed to be global. That is, N ordinary-mode images can each be represented as a different weighted sum of M different basis images, each of the basis images being a full-size image.
  • These teachings will also accommodate, however, using local bases.
  • These local bases can be in the form of tiles or dictionaries. While there is room for overlap as regards a tiles-based and a dictionary-based approach, essentially the tiled approach uses a global approach in conjunction with (only) smaller rectangles (of the respective image) that may or may not overlap with one another. For example, a given image may be parsed into 4, 12, or 100 tiles or some other number of choice.
  • the bases are spatially variant; that is, a tile in the upper left of the image will be composed of a different set of basis tiles than a tile in the lower right of the image.
  • With this approach one would typically choose fairly large tiles, typically covering the image with at most several tens of tiles in each direction.
  • the dictionary can be used anywhere in the image (i.e. the same set of atoms are available in the upper left portion of the image as the lower right portion of the image).
  • the dictionary may be over-complete. That is, there may be many different combinations of different atoms that all lead to the same eventual image. This approach can therefore benefit from a tiered dictionary that is organized as a collection of atom-groups, each of which is itself a collection of atoms, together with a two-level sparseness constraint.
  • the dictionary method merits two separate terms, one measuring the inter-group compactness of the dictionary (i.e. how many groups of atoms are required to represent an image) and the other measuring the intra-group compactness (i.e. how many atoms are necessary from each group), where typically both measures will be based on an L1 norm or nuclear norm.
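By way of a non-limiting illustration, the following Python sketch shows the spatially variant, tile-based flavor of local bases: a separate small basis is learned for each tile position from the reference images. The tiered (atom-group) dictionary with a two-level sparseness constraint described above is beyond this sketch; the tile size, basis count, and plain-SVD fit are illustrative assumptions only.

```python
import numpy as np

def learn_tile_bases(reference_images, tile=64, num_bases=4):
    """Learn a separate small basis for each (tile, tile) block.

    reference_images: array of shape (N, H, W). Returns a dict mapping the
    (row, col) tile index to an array of shape (num_bases, tile*tile).
    Spatially variant: the upper-left tile gets a different basis than the
    lower-right tile. num_bases must not exceed N.
    """
    n, h, w = reference_images.shape
    bases = {}
    for i in range(0, h - tile + 1, tile):
        for j in range(0, w - tile + 1, tile):
            patch = reference_images[:, i:i + tile, j:j + tile]
            flat = patch.reshape(n, -1)
            _, _, vt = np.linalg.svd(flat, full_matrices=False)
            bases[(i // tile, j // tile)] = vt[:num_bases]
    return bases
```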
  • Dictionaries (or tile bases) can be learned online in a manner similar to the approaches described above.
  • By one approach, instead of the aforementioned anomaly image, the system can display a conventional radiography image of the object in conjunction with an indication that there is not enough scanning history for this object.
  • the history for an object (such as a given vehicle or vehicle make/model) can be initialized as having no history and no anomaly images are shown for the first few scans (or, if desired, an anomaly image can be provided along with a visual or auditory caution that the anomaly image is based on insufficient information and should not be relied upon).
  • the history can be initialized with at least one vetted scan.
  • To provide a vetted scan, for example, an operator can manually inspect the object (physically and/or by careful inspection of the full radiograph).
  • the vetted image can then be added as a reference image.
  • a certain number of vetted images may be required before any anomaly images are provided to the user.
  • these teachings will also accommodate permitting (or requiring) reference images to expire.
  • these teachings will permit limiting each vehicle's history to some number (such as, for example, twenty or thirty or some other number as may be desired) of images, or some time period (such as, for example, two weeks, three months, or some other duration of choice), or some storage limitation (such as, for example, thirty megabytes, one gigabyte, or some other limitation of choice), or some combination of these (for example, a maximum of three months, but never falling below ten images).
  • these teachings will accommodate deleting that information from memory or retaining that information for archival purposes.
  • the system decides which image expires based on some measure of the information provided by that image. For example, the system could be configured to delete the image that has the smallest projection cost, since that image can be best represented by the other images in the training set and thus adds little. Alternatively, the system could be configured to delete the image that has the largest projection cost, since that is the least "ordinary" image in the training set and may thus represent a rare case. Or the system could use some measure other than projection cost; for example, by deleting the image that appears most similar to the rest of the training set.
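By way of a non-limiting illustration, the following Python sketch shows both expiration policies mentioned above, given precomputed projection costs for the images in an object class record. The function name and the form of its inputs are assumptions for illustration only.

```python
import numpy as np

def expire_one(reference_images, projection_costs, policy="smallest_cost"):
    """Drop one reference image when the history exceeds its limit.

    `projection_costs[i]` is the (precomputed) projection cost of reference
    image i against the rest of the set.
    """
    costs = np.asarray(projection_costs)
    if policy == "smallest_cost":
        # Best represented by the others, so it adds the least information.
        drop = int(np.argmin(costs))
    else:
        # Least "ordinary" image in the set; may represent a rare case.
        drop = int(np.argmax(costs))
    kept = [im for k, im in enumerate(reference_images) if k != drop]
    return kept, drop
```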
  • the control circuit discards anomaly pixels in E that are very close (but not necessarily equal) to zero; typically this can be effected by simple thresholding.
  • By another approach, regions that are smaller in area than some predetermined threshold are discarded; typically this can be effected using morphological processing. By another approach, thin anomalous regions are discarded where artifacts are more likely due to imperfect registration; typically this can be effected using a combination of edge detection and morphological processing. By one approach, a system will employ more than one of these or other techniques in combination, as warranted by the particular application.
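By way of a non-limiting illustration, the following Python sketch (assuming NumPy and SciPy) applies two of the cleanup steps just described to a raw anomaly image E: near-zero pixels are discarded by thresholding, and connected regions smaller than an area threshold are removed. The threshold values are arbitrary illustrative choices.

```python
import numpy as np
from scipy import ndimage

def clean_anomaly(raw_anomaly, amp_thresh=0.05, min_area=50):
    """Turn a raw anomaly image E into a cleaned anomaly image.

    1) discard pixels whose magnitude is very close to zero (thresholding);
    2) discard connected regions smaller than `min_area` pixels.
    """
    mask = np.abs(raw_anomaly) > amp_thresh
    labels, count = ndimage.label(mask)
    if count:
        areas = ndimage.sum(mask, labels, index=np.arange(1, count + 1))
        keep = np.isin(labels, 1 + np.flatnonzero(areas >= min_area))
    else:
        keep = mask
    return np.where(keep, raw_anomaly, 0.0)
```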
  • Some illustrative rendering options include (where X represents the evaluation image, Y represents an ordinary-mode image (i.e. low rank typicalized image), and E represents the anomaly image (which can be either raw or cleaned, though for most applications, the cleaned anomaly image will generally be preferred)):
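By way of a non-limiting illustration, one simple rendering option is an anomaly overlay in which the evaluation image X is shown in grayscale and the non-zero pixels of the (cleaned) anomaly image E are painted in a highlight color; boxes or flashing, as also mentioned above, are other options. The following Python sketch assumes NumPy arrays of matching size; the color and normalization are illustrative choices.

```python
import numpy as np

def anomaly_overlay(x, e, color=(1.0, 0.0, 0.0)):
    """Render X in grayscale with anomaly pixels from E highlighted."""
    gray = (x - x.min()) / (np.ptp(x) + 1e-12)   # normalize to [0, 1]
    rgb = np.repeat(gray[..., None], 3, axis=2)  # (H, W) -> (H, W, 3)
    rgb[e != 0] = color                          # paint anomaly pixels
    return rgb
```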
  • the enabling apparatus 500 includes a control circuit 501 that operably couples to a memory 502, an application interface 503, and a display 504.
  • a control circuit 501 can comprise a fixed-purpose hard-wired platform or can comprise a partially or wholly-programmable platform. These architectural options are well known and understood in the art and require no further description here.
  • This control circuit 501 is configured (for example, by using corresponding programming as will be well understood by those skilled in the art) to carry out one or more of the steps, actions, and/or functions described herein.
  • the memory 502 may be integral to the control circuit 501 or can be physically discrete (in whole or in part) from the control circuit 501 as desired. This memory 502 can also be local with respect to the control circuit 501 (where, for example, both share a common circuit board, chassis, power supply, and/or housing) or can be partially or wholly remote with respect to the control circuit 501 (where, for example, the memory 502 is physically located in another facility, metropolitan area, or even country as compared to the control circuit 501).
  • the scanner might upload its record to a remote central database to thereby distribute the aforementioned memory block over several locations.
  • The aforementioned reviewing stations might be distributed one-for-one for each image capture platform, or there might be a central reviewing room somewhere (almost like a call center) where one group of people do all the analysis for all 20 scanners. It will be understood that the specifics of this example are intended to serve to illustrate an approach to a distributed network and are not intended to suggest any particular limitations in these regards.
  • This memory 502 can serve, for example, to non-transitorily store the computer instructions that, when executed by the control circuit 501, cause the control circuit 501 to behave as described herein.
  • this reference to "non-transitorily" will be understood to refer to a non-ephemeral state for the stored contents (and hence excludes when the stored contents merely constitute signals or waves) rather than volatility of the storage media itself and hence includes both non-volatile memory (such as read-only memory (ROM)) as well as volatile memory (such as an erasable programmable read-only memory (EPROM)).
  • the user interface 503 can comprise any of a variety of user-input mechanisms (such as, but not limited to, keyboards and keypads, cursor-control devices, touch-sensitive displays, speech-recognition interfaces, gesture-recognition interfaces, and so forth) to facilitate receiving information and/or instructions from a user.
  • the display 504 can similarly comprise any of a wide variety of display mechanisms. As such user interfaces 503 and displays 504 are very well known in the art, and as the present teachings are not overly sensitive to any particular selections in these regards, further elaboration in these regards is not provided here for the sake of brevity.
  • The control circuit 501 also operably couples to one or more network interfaces 505. Such an interface can serve to communicatively connect the control circuit 501 to one or more other devices via any of a variety of local and non-local networks (including but certainly not limited to the extranet known as the Internet).
  • this image capture platform comprises a mechanism that employs high energy beams (such as x-rays) to capture images of the object of interest.
  • these teachings provide a convenient, reliable, automated, relatively fast, and effective way to help assess whether a given object presents itself in any extraordinary state at a time of need or interest. These results are readily used in an intuitive manner by the user to determine whether and how to conduct a further inspection of the object as regards any extraordinary circumstance. These teachings can be leveraged in various ways. By one approach the aforementioned anomalies can help to identify security risks. By another approach (when used, for example, to inspect items coming off an assembly line) identifying and presenting such anomalies can help to identify manufacturing defects and hence serve as a form of nondestructive testing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

A control circuit accesses information that relates to an ordinary set (of images) for a particular object class wherein this ordinary set includes (exclusively, if desired) a range of images that represent a non-extraordinary state for the object class. By then accessing data representing an image of a particular unique object and identifying an object class of that particular unique object, the present teachings will accommodate accessing an object class record that contains information related to the aforementioned ordinary set for the object class. That data can then be used to project an image of the particular unique object onto the ordinary set for the object class to provide corresponding projection information. An anomaly image can then be produced, at least in part, by identifying differences between the image of the particular object and the projection information.

Description

APPARATUS AND METHOD FOR PRODUCING ANOMALY IMAGES
Related Application(s)
[0001] This application claims the benefit of U.S. Provisional application number
61/783,285, filed March 14, 2013, which is incorporated by reference in its entirety herein.
Technical Field
[0002] This invention relates generally to image processing and more particularly to identifying anomalies in a given image relative to reference information.
Background
[0003] X-ray images are sometimes employed to facilitate examining an object of interest such as an automobile, delivery van, truck, trailer, mobile shipping container, suitcase, or the like. To a very large extent these efforts are aimed at identifying the otherwise-hidden presence of specific objects or materials of concern. Examples of such objects include but are not limited to weapons of various kinds, smuggled goods, and manufacturing defects. Examples of such materials of concern include but are not limited to known threatening materials such as explosive and/or radioactive materials, materials known to represent a defect in context, such as carbon deposits in aluminum castings, or to materials that may or may not represent a defect in context, such as any non-aluminum particles in an aluminum casting.
[0004] Notwithstanding much progress as regards the foregoing, existing solutions nevertheless do not always wholly meet the requirements of all application settings.
Weapons, for example, can assume any of a wide variety of form factors and materials. Image-based identification of a hidden weapon can be further complicated when the weapon has been broken down into two or more parts that are separated from one another. Smuggled goods, too, can take a variety of forms and sizes, drugs may be molded into arbitrary shapes, and an inspector may not know ahead of time what types of objects he or she may be looking for.
Brief Description of the Drawings
[0005] The above needs are at least partially met through provision of the apparatus and method for producing anomaly images described in the following detailed description, particularly when studied in conjunction with the drawings, wherein:
[0006] FIG. 1 comprises a flow diagram as configured in accordance with various embodiments of the invention;
[0007] FIG. 2 comprises a flow diagram as configured in accordance with various embodiments of the invention;
[0008] FIG. 3 comprises a plurality of images as configured in accordance with various embodiments of the invention;
[0009] FIG. 4 comprises a plurality of images as configured in accordance with various embodiments of the invention;
[0010] FIG. 5 comprises a block diagram as configured in accordance with various embodiments of the invention; and
[0011] FIG. 6 comprises a flow diagram as configured in accordance with various embodiments of the invention.
[0012] Elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions and/or relative positioning of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of various embodiments of the present invention. Also, common but well-understood elements that are useful or necessary in a commercially feasible
embodiment are often not depicted in order to facilitate a less obstructed view of these various embodiments of the present invention. Certain actions and/or steps may be described or depicted in a particular order of occurrence while those skilled in the art will understand that such specificity with respect to sequence is not actually required. The terms and expressions used herein have the ordinary technical meaning as is accorded to such terms and expressions by persons skilled in the technical field as set forth above except where different specific meanings have otherwise been set forth herein.
Detailed Description
[0013] Generally speaking, pursuant to these various embodiments a control circuit accesses information that relates to an ordinary set (of images) for a particular object class wherein this ordinary set defines a range of images that represent a presumed non-extraordinary state for the object class. These teachings are highly flexible and will accommodate wide variation as to the nature of the object class. Non-limiting examples include object classes that constitute only a single specific vehicle, a particular vehicle make and model (for a given year), a particular cargo manifest, or a particular manufactured item type, and so forth.
[0014] As one illustrative example and without intending any limitations in these regards, X-ray images that are regularly captured of a given vehicle (such as, for example, a vehicle that daily enters an embassy compound via an entry gate) will capture various images of that vehicle in differing states that are all nevertheless non-extraordinary in that the differences in the vehicle from one day to the next do not represent anything out of the ordinary in terms of constituting or suggesting a threat to security, or a smuggling event. As one simple example, on one day a seat back may be up while on another day that same seat back may be folded down. Although different configuration states, both of these states are non-extraordinary. The ordinary set for this vehicle, therefore, could accommodate both configuration states for that seat back as being "ordinary."
[0015] By then accessing data representing an image of a particular unique object
(such as, for example, an X-ray image for a specific vehicle captured at the present time) and identifying an object class of that particular unique object, the present teachings will accommodate accessing an object class record that contains information related to the aforementioned ordinary set for the object class. That data can then be used to project an image of the particular unique object onto the ordinary set for the object class to provide corresponding projection information. An anomaly image can then be produced, at least in part, by identifying differences between the image of the particular object and the projection information.
[0016] So configured, and to continue the previous example, the anomaly image need not identify either the seat back being up or down as being an "anomaly." Accordingly, the foregoing analysis need not identify a present configuration for the seat back as being an anomaly even when one configuration might occur far more frequently than another.
[0017] So configured, these teachings permit an automated analysis to identify differences that are in fact different from a variety of different views of what is ordinary for a given object class. A corresponding report in these regards can significantly help security personnel to focus their attention on extraordinary differences to determine whether such differences are, in fact, a security threat.
[0018] Differences that are not of substance can be added, if desired, to the ordinary set for the object class to permit that ordinary set to grow and adapt over time to
accommodate evolving this record of what is ordinary for the object class. As one simple example in these regards, when a vehicle that comprises an object class incurs a small dent in a parking lot, that dent can be initially identified as an anomaly. Upon an authorized person clearing the anomaly alert after making a visual inspection, that dent can become a part of the record of what is ordinary for that vehicle. Furthermore, old images can optionally be phased out of the ordinary set. By way of example, when a vehicle incurs a small dent old images taken from before the dent occurred could be phased out as more images are acquired after the dent occurred (since after the dent, the dentless vehicle might no longer be considered ordinary).
[0019] These teachings can be applied in a variety of application settings. For example, these teachings are readily applied to note extraordinary states for a given item such as a vehicle, for a given model and make of vehicle, for a given cargo manifest, and so forth. [0020] These and other benefits may become clearer upon making a thorough review and study of the following detailed description. Referring now to the drawings, and in particular to FIG. 1, an illustrative process 100 that is compatible with many of these teachings will now be presented. For the sake of an illustrative example this description presumes that a control circuit of choice carries out the described activities. Further description regarding such a control circuit appears further herein.
[0021] At block 101 the control circuit accesses data representing an image of a particular unique object. By one approach the control circuit accesses a memory to retrieve this data. These teachings will accommodate a variety of image types and modalities. By one approach that can be particularly useful in many varied application settings the image comprises an x-ray image or other image formed, at least in part, by the use of penetrating energy. For many application settings it will suffice that the control circuit so access only a single two-dimensional image of the particular unique object. If desired, however, these teachings will accommodate accessing, for example, a three-dimensional image (such as a computed tomography image) of the particular unique object and/or multiple such images of the particular unique object (such as, for example, images of the particular unique object captured from different angles).
[0022] As a specific illustrative example in these regards, and without intending any limitations, the above-described activity could comprise capturing a side-elevational x-ray image of a given vehicle upon that vehicle having entered a secure access area of an embassy compound. That said, these teachings will also readily accommodate essentially any field of view as may be useful in a given application setting. For example, in many cases it may be useful to mount the x-ray source above the object to be imaged and the detector(s) in the floor (or vice versa). These teachings will accommodate using automatically-captured images if desired.
[0023] At block 102 the control circuit identifies an object class as corresponds to the particular unique object for which the foregoing data has been accessed. This identification can comprise a partially or wholly automatic activity as desired. The specific nature of this identification can vary widely with the application setting. For example, when the objects are specific particular vehicles (such as the automobiles driven each day by the employees of an embassy to their workplace) the object class may be identified by associating one object class with each unique automatically-read license plate (and hence each unique vehicle). So configured, at block 102 the control circuit can identify the object class for the particular unique vehicle by reading a currently-captured image of the vehicle's license plate (which currently-captured image may be captured using a different camera than, say, the above-inferred x-ray imaging system).
[0024] As another illustrative example, when the object is a vehicle these teachings will also accommodate associating one object class with each vehicle make and model (and presumably a corresponding model year or range of model years as well). As yet another illustrative example, when the object is cargo (contained, for example, within a van, truck, trailer, or the like) these teachings will accommodate deriving the object class from the object's cargo manifest. For example, all shipments with manifests declaring a shipment of frozen broccoli could be lumped together into one object class. Alternatively, all shipments declared to contain automobile parts shipping between Ohio and Nuevo Leon could be lumped together into another object class.
[0025] As yet another illustrative example, when the object is an item within a cargo shipment, these teachings will accommodate associating one object class with a collection of all items within the cargo shipment (which can be particularly useful when such items are all generally of a like type, form factor, and material). And as yet another illustrative example, when the object is a manufactured item (such as a refrigerator, a water pump, a computer, an engine block, a turbine blade, castings, and so forth) these teachings will accommodate associating one object class with the manufactured item type (which can, if desired, be as specific, say, as referring to a particular model number for a given manufactured item).
[0026] As another illustrative example, the image itself may be used to determine the object class. For example, when the object class is based on vehicle make and model, that make and model could be estimated from the image data itself. As another example, when the object is a manufactured item, the model number could be derived from the image data by performing a first-pass analysis of the image data. By way of illustration, if a casting house produces three models of engine blocks and two models of transmission cases, there would be five object classes, and the exact object class could be determined for each scan by analyzing the overall shape of the object in the scan data to find the matching part model.
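By way of a non-limiting illustration, the following Python sketch shows such a first-pass, shape-based assignment of an object class: the scan is compared against one representative template per known part model and the closest match is chosen. The template dictionary, the equal-size assumption, and the mean-squared-difference comparison are illustrative simplifications only.

```python
import numpy as np

def classify_by_shape(scan, templates):
    """Pick the object class whose template best matches the scan.

    `templates` maps a class name (e.g. "engine block model A") to a
    representative scan of the same size. A real system would use a more
    robust shape comparison; this is a sketch only.
    """
    errors = {name: float(np.mean((scan - tmpl) ** 2))
              for name, tmpl in templates.items()}
    return min(errors, key=errors.get)
```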
[0027] At block 103 the control circuit uses the aforementioned identified object class to access an object class record. This object class record contains (exclusively, if desired) information related to an ordinary set for the object class. That is, a range of images that represent a non-extraordinary state for the object class. As used herein, "non-extraordinary" will be understood to refer to a subjective state assigned by the user but which serves to identify any state for the object that is not a state associated with any level of concern or any state that warrants further notice or inspection. Accordingly, the non-extraordinary state for a given object can subsume any number of different configurations which, although different from one another, do not rise to a level of concern. In fact, one or more of these non-extraordinary states may not be what one might typically think of as "ordinary" in the sense of representing a common state. Rareness alone, however, need not necessarily equate with being extraordinary.
[0028] By way of a simple illustration, various non-extraordinary states for a given particular vehicle can pertain to different configurations for a given seat back. For example, that seat back may be fully up or fully down. And in practice that seat back may be fully up ninety-five percent of the time. To the extent that the user deems both configurations to be non-extraordinary, however, the relative rareness of the down position does not alter that non-extraordinary status. As another example, in a typical vehicle, fuel or other fluid levels may be different for every scan, so the ordinary set may include images with a variety of fluid levels (including, perhaps, all possible combinations of all possible fluid levels). As another example, in the context of analyzing metal castings, the ordinary set for a particular model of engine block may include images of that engine block with a variety of different porosities or density variations in a number of different locations, each known to be innocuous. [0029] Such an ordinary set for the object class can itself be formed and/or maintained and updated using any of a variety of approaches. Furthermore, the ordinary set is often continuous and thus infinite in size, and therefore may exist only in concept, rather than being stored directly as a set of images. For example, and referring again to the embassy employee use case presented earlier, a daily x-ray image of each employee's vehicle can be used to build, over time, a growing documented understanding of what is non-extraordinary for each such vehicle. Initially, security personnel may make a thorough inspection of the vehicle before approving the vehicle for entry into the compound. Succeeding days will yield succeeding additional images that present other views of the vehicles, which other views (coupled presumably with corresponding security inspections and approvals) add over time to various specific image-based examples of what is non-extraordinary for each such vehicle.
[0030] By one approach, each time a new non-extraordinary image is acquired for a vehicle, that image (or information from that image) is added to the object class record, so that that object state is effectively added to the ordinary set for future scans of that object class. Adding information from a single image may also extend the ordinary set by not just what is in that image, but also combinations of features from that image and existing images in the ordinary set. For example, if the ordinary set has an image of a car with its seat up and fuel tank nearly empty, and an image is added with the seat down and fuel tank full, then the new ordinary set may include all four combinations (seat up, tank empty; seat up, tank full; seat down, tank empty; and seat down, tank full) even though two of those combinations were never actually observed. Similarly, if the object class record for a particular car includes an image with a full fuel tank and an image with an empty fuel tank, the ordinary set may include images with all possible fuel levels (i.e., all possible combinations of full and empty).
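By way of a non-limiting illustration, the following Python sketch shows, with synthetic one-dimensional attenuation profiles, why an ordinary set spanned by an empty-tank image and a full-tank image can also absorb fuel levels that were never actually observed (assuming the fuel's contribution adds approximately linearly), while a localized anomaly is not absorbed. All of the profile values are fabricated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

car = rng.uniform(0.5, 1.0, size=200)        # attenuation of the car itself
tank = np.zeros(200)
tank[60:90] = 0.4                            # extra attenuation of a full tank

empty = car                                  # reference 1: tank empty
full = car + tank                            # reference 2: tank full
refs = np.stack([empty, full], axis=1)       # one reference image per column

def residual(image):
    """Distance from `image` to the span of the reference images."""
    coeffs, *_ = np.linalg.lstsq(refs, image, rcond=None)
    return np.linalg.norm(image - refs @ coeffs)

half_full = car + 0.5 * tank                 # never actually observed
weapon = car.copy()
weapon[120:130] += 0.8                       # a localized anomaly

print(residual(half_full))   # ~0: the half-full tank is absorbed by the span
print(residual(weapon))      # large: the anomaly is not in the ordinary set
```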
[0031] There are various ways by which such an ordinary set can be specifically built and more particularly represented to ease processing and comparison requirements. Further description in these regards appears below.
[0032] At block 104 the control circuit uses the aforementioned data (and, by one approach, the object class record) to project the image of the particular unique object onto the aforementioned ordinary set for the object class to thereby provide projection information. This step may also optionally incorporate a geometric transformation to align the data to a canonical ordinary orientation and location of the object. At block 105 the control circuit then produces an anomaly image by identifying differences between the image of the particular object (or the geometrically transformed object) and the projection information (which anomaly image can then be presented, if desired, via a display of choice as suggested by optional block 106). (In some application settings it may also be useful to display an image of the object as projected onto the ordinary set (i.e., the image with its anomalies removed).)
[0033] FIG. 3 provides a simple example in these regards that can help to illustrate certain general principles set forth herein.
[0034] By one approach the above-mentioned object class record comprises, at least in part, information related to at least one ordinary-mode image, which is a particular image that is a member of the ordinary set for that object class. In FIG. 3 the first x-ray image 301 is a recently-captured image of a particular vehicle. The second x-ray image 302 (which may, if desired, be a virtual image corresponding to an object that never actually physically existed) represents, in a single example, an ordinary-mode image that represents a non-extraordinary state for that same vehicle. By one approach this second image 302 constitutes a composite of sorts reflecting the use of various previous images of the vehicle in non-extraordinary states. By one approach this ordinary-mode image is stored a priori and accessed as needed. By another approach, if desired, this ordinary-mode image may be reconstructed from a plurality of stored previous images (also taking into account the particular image for which the ordinary-mode image is requested) each time the process requires the ordinary-mode image.
[0035] The third image 303, in turn, constitutes a raw anomaly view as per the foregoing.
Accordingly, the third image 303 in this illustrative example presents only portions of the first image 301 that are anomalous as compared to the second image 302.
[0036] The fourth image 304 then shows a cleaned anomaly image that shows only substantial regions from the third image 303, discarding minor differences. Lastly, the fifth image 305 constitutes an anomaly overlay image that shows the original image 301 while highlighting anomalies from the anomaly image 304. This overlay may be performed by, for example, highlighting any anomalies in a special color, by drawing boxes around them, and/or by flashing their pixels.
[0037] FIG. 3 readily illustrates how much easier it is for the observer to identify possible areas to inspect in the fourth image 304 or fifth image 305 as compared to the first image 301.
[0038] FIG. 6 illustrates how the process may appear from the user's perspective. Optionally, before scanning online, a user may initialize one or more object classes ahead of time by scanning exemplar objects (objects that have been validated by other means and are known to be in an ordinary state). For each object class, the control circuit makes a new object class record containing information derived from the corresponding exemplar scans. When this optional process is finished, online scanning can begin.
[0039] When a new object is scanned (the scan itself might be manually initiated by a user, or it might be automatically triggered when the object enters the scan chamber, say by triggering a light curtain) the control circuit tries to identify an object class for it. If an object class is successfully found, the control circuit generates an anomaly image (this anomaly image or a corresponding anomaly overlay image may optionally be displayed to the user, or may be hidden from view at this point). The control circuit then detects non-trivial blobs in the anomaly image.
[0040] If no blobs are found, the image is deemed ordinary. At this point, if the system has been configured to record all images, the image is added to the object class record (and hence absorbed into the ordinary set) for future scans of this object class. Alternatively, if the system has been configured for only explicit recording, the image is not added to the object record. In either case, the object can then be automatically released. In such cases, the object can pass through the entire scanner without any human intervention whatsoever.
[0041] Alternatively, if a non-trivial blob is found in the anomaly image, the anomaly image (or anomaly image overlay) is displayed to the user, and the user is prompted for a decision. At this point the user manually inspects the image (and perhaps the physical object) to make a determination. If the user decides that the object is in an ordinary state, the image is added to the object record for this object class. Otherwise, the image is extraordinary and requires corrective action (such as alerting a customs agent, a quality engineer, or the like), and is not added to the object record as an ordinary scan (though it may be logged for other reasons).
[0042] If the system was unable to find an existing object class for this object (either because no exemplar objects were ever scanned, or this object is from a different class than the exemplar objects that were scanned), then the system can take special steps to create a new class. In this case, there is no a priori knowledge as to what is ordinary or extraordinary, so the user is prompted to analyze the object image. In the same way as if an anomaly was detected, the user must inspect the image (and perhaps the physical object) to make a determination. If the user decides that the object is in an ordinary state, a new object class record is created, containing the information from this object scan. If the user decides that the object is in an extraordinary state, corrective action is required.
[0043] The process illustrated in FIG. 6 is semi-automated. In the early stages of scanning a new object class, the system requires some feedback from a user to guide it to understand what is ordinary versus extraordinary. However, once the system has gained sufficient knowledge of the range of states considered ordinary for the object class, future scanning can often be performed with no user involvement. If many objects are scanned, this allows very high throughput scanning by removing the bottleneck of human image analysis.
[0044] FIG. 2 provides a more-specific example as regards the foregoing projection activity. In this more specific example the object class record is presumed to comprise data representing images (such as (and perhaps exclusively) x-ray images) of other objects within the object class.
[0045] At block 201 the control circuit defines a measure of compactness for an ordinary set, i.e. a measure that indicates whether the ordinary set is smaller (in which case the measure is smaller) or larger (in which case the measure is larger). For example, if the ordinary set includes only a single image of a car with the seat up, the set is very compact (and hence has a small compactness measure). If the ordinary set includes a car with the seat up and seat down, the set is slightly less compact (and hence has a slightly larger
compactness measure). If the set includes the seat up and seat down, and the fuel tank at any fill-state, the set is even less compact (with even larger compactness measure). There are many possible quantitative measures of compactness (such as matrix rank or nuclear norm), some of which are discussed below. By one approach, the compactness measure for an ordinary set is calculated as a measurement on a particular set of ordinary-mode images.
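By way of a purely illustrative sketch (not part of the original disclosure), the following Python/NumPy snippet computes two of the quantitative compactness measures mentioned above, namely matrix rank and nuclear norm, for an ordinary set represented as a stack of flattened ordinary-mode images. The image size, the synthetic data, and the function name are assumptions made only for illustration.
```python
import numpy as np

def compactness_measures(images, tol=1e-6):
    """Stack flattened images as columns and report two compactness measures.

    A smaller rank / nuclear norm indicates a more compact ordinary set."""
    X = np.column_stack([im.ravel() for im in images])
    s = np.linalg.svd(X, compute_uv=False)   # singular values of the image stack
    rank = int(np.sum(s > tol * s.max()))    # numerical matrix rank
    nuclear_norm = float(s.sum())            # sum of singular values
    return rank, nuclear_norm

# Illustrative use with synthetic 64x64 "scans" (assumed data):
rng = np.random.default_rng(0)
base = rng.random((64, 64))
seat_up = base.copy()
seat_down = base.copy()
seat_down[40:50, 10:20] += 0.5               # hypothetical seat-back region
print(compactness_measures([seat_up, seat_up]))    # very compact: rank 1
print(compactness_measures([seat_up, seat_down]))  # less compact: rank 2
```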
[0046] At block 202, in many cases the control circuit defines a type of parameterized geometric transformation, such as a rigid translation, a rigid transformation (i.e. translation and rotation), a similarity transformation (i.e. rigid transformation with scaling and reflection), an affine transformation (i.e. similarity transform with additional skew), planar homography (i.e. mapping a 3D object from its projection on one viewing plane onto its projection on a different viewing plane), curved-surface homography (i.e. mapping a 3D object from its projection on a curved surface to its projection on another (possibly planar) surface), or non-rigid warping with cubic B-splines. In some cases, no geometric transformation is required. In this case, block 202 can be omitted, or, equivalently, the geometric transformation is chosen (at least implicitly) to be the identity transform (i.e. a transform that has no effect).
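By way of a purely illustrative sketch (not part of the original disclosure), the following snippet applies one such parameterized geometric transformation, a similarity transform built from a rotation angle, an isotropic scale factor, and a translation, using scipy.ndimage. The library choice, parameter values, and function name are assumptions for illustration only.
```python
import numpy as np
from scipy.ndimage import affine_transform

def apply_similarity(image, scale=1.0, angle_deg=0.0, shift=(0.0, 0.0)):
    """Apply a similarity transform (rotation + isotropic scale + translation)
    about the image center."""
    theta = np.deg2rad(angle_deg)
    A = scale * np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])
    A_inv = np.linalg.inv(A)                  # affine_transform maps output coords to input coords
    center = (np.array(image.shape) - 1) / 2.0
    offset = center - A_inv @ (center + np.asarray(shift))
    return affine_transform(image, A_inv, offset=offset, order=1, mode='nearest')

# Example: small alignment adjustment of a hypothetical 128x256 scan
scan = np.random.default_rng(1).random((128, 256))
aligned = apply_similarity(scan, scale=1.02, angle_deg=0.5, shift=(0.0, 3.0))
```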
[0047] At block 203, the control circuit defines a difference measure (preferably an outlier-resilient difference measure) that measures a total difference of an image (typically a geometrically transformed image) relative to an ordinary-mode image.
[0048] At block 204, then, the control circuit defines a total projection cost that combines the aforementioned measure of compactness for a set of ordinary images, the difference measure of the (typically geometrically transformed) object image relative to its corresponding ordinary-mode image, and the total of all difference measures of each (typically geometrically transformed) image from the object class record relative to its corresponding ordinary-mode image.
[0049] At block 205 the control circuit then determines a set of ordinary-mode images (and typically a set of geometric transformation parameters) by, at least in part, finding the ordinary-mode images (and typically the geometric transformation parameters) that minimize the projection cost for the set of ordinary-mode images.
[0050] At block 206, a raw anomaly image is calculated as the difference between the object image and the corresponding ordinary-mode image for that object image. This calculation may be performed explicitly, i.e. literally after block 205 has completed. Or the calculation may be produced as a side-effect of an earlier step. Optionally, at block 207, the raw anomaly image may be post-processed to produce a cleaned anomaly image with reduced false alarms, as will be discussed in further detail below.
[0051] Generally speaking, for many application settings these teachings can serve to consider a collection of N images as points spanning some N-dimensional space, where the range of ordinary images lies in some M-dimensional subspace with M sometimes less than N (and sometimes considerably less than N). In such a case, the contents of a given current image that do not lie in that M-dimensional subspace are fairly viewed as being anomalies. Furthermore, in some cases (in particular when N is larger than M) it can be possible to learn the M-dimensional subspace directly from the collection of N images.
[0052] To express the foregoing using matrices, let:
[0053] X = the set of input images;
[0054] Y = a set of ordinary images that is similar to X;
[0055] E = some set of sparse features (i.e. anomalies);
[0056] T is some parameterized geometric transformation.
[0057] Then X typically has rank N, Y typically has rank M, and T(X) = Y + E.
[0058] In some cases it can be beneficial to omit the geometric transformation (i.e. assume T is the identity transformation), in which case X = Y + E.
[0059] The goal is therefore to recover both Y and E, given only X. There is a natural (and perhaps difficult) tradeoff between the compactness of Y and the amount of content in E. Most users may wish for Y (which is unknown) to be very compact, and E (also unknown) to be mostly (but not entirely) zeros. In other words, one may typically wish to avoid unwanted extremes. For example, when Y=0 (which is as compact as possible), then E=X and everything in the current image is considered to be an anomaly. As another example of a possibly unhelpful extreme, when E=0 then Y=X and nothing in the current image constitutes an anomaly.
[0060] Something in between such extremes may be preferable. An alternate viewpoint is that one will in general seek the most parsimonious explanation of the data, i.e. the simplest explanation possible. Encouraging E to be near zero encourages Y to represent X more closely, while encouraging Y to be compact pushes more content into E as anomalies. Simultaneously encouraging both Y to be compact and E to be near zero therefore encourages Y to represent the features common across different images in X, while leaving only anomalies in E.
[0061] As a simple example, one can average all images in X and copy the result into each column of Y, then choose E=X-Y. This is computationally and conceptually simple, but does not allow for any object variation. In other words, the ordinary set consists of a single image, which is the same as the average of all images.
[0062] A slightly better method (at least for many application settings) involves Principal Component Analysis (PCA) using the Singular Value Decomposition (SVD). In this case, one performs an SVD to find matrices U and V and diagonal matrix S such that X = USV^T. One then thresholds S, setting all small values to zero, to get a thresholded singular value matrix Ŝ. Then Y = UŜV^T, and E = X - Y.
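By way of a purely illustrative sketch (not part of the original disclosure), the PCA/SVD decomposition of paragraph [0062] can be carried out as follows, where X holds one flattened image per column; the threshold rule (keeping singular values above a fraction of the largest) is an assumption chosen only for illustration.
```python
import numpy as np

def pca_split(X, keep_fraction=0.1):
    """Split an image stack X (one flattened image per column) into a
    low-rank part Y (ordinary content) and a residual E (anomalies)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_kept = np.where(s >= keep_fraction * s[0], s, 0.0)  # zero out small singular values
    Y = (U * s_kept) @ Vt                                 # Y = U * diag(s_kept) * V^T
    E = X - Y
    return Y, E
```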
[0063] FIG. 4 provides illustrative examples in these regards. FIG. 4 comprises twelve basis images labeled from "1" to "12." These twelve images collectively compose a hypothetical ordinary set for the object class where the object class is associated with one particular automobile. The first image, labeled "1," is the first basis and captures the largest variations in X (from zero) (making this first basis somewhat akin to the mean of X). The second image, labeled "2," is the second basis and captures the largest variations in X after removing the aforementioned first basis. Similarly, the third image, labeled "3," is the third basis and captures the largest variations in X after removing the first and second bases. The same progression applies for each succeeding image such that the Nth basis captures the largest variations in X after removing the first through (N-1)th bases (where N is an integer greater than "1"). For example, we might choose to keep only the first three bases, so that each image in Y is made of a weighted sum of these three basis images. However, while relatively simple computationally, this approach has the downside of correlating object features that we would prefer be kept separate.
[0064] In FIG. 4, note that the basis in image "3" includes variation both in seat position and in the presence of a box in the trunk. A practical consequence of this is that raising the seat of a car in this object class may create a false positive for the appearance of an anomalous box in the trunk of the car. To avoid such problems, it can be quite beneficial to use nonlinear projection schemes.
[0065] By one approach one can employ a sparsity-promoting technique known in the art as Robust Principal Component Analysis (RPCA) that automatically uncovers Y and E from X, and does so in a way that discovers a good tradeoff between a compact Y and a mostly zero E.
[0066] To use RPCA, we choose the matrix nuclear norm of Y as our ordinary-set compactness measure, the vector 1-norm of E as our difference measure, and some weighting parameter λ. We then search for the values of Y and E that minimize the total projection cost C = ||Y||_* + λ||E||_1 under the constraint that Y + E = X. Mathematically it has been shown that this tends to result in a Y that is low rank and an E whose entries are mostly zeros. For a rigorous description of RPCA, see E. Candes, X. Li, Y. Ma, and J. Wright, Robust Principal Component Analysis (2009). Roughly, the steps to performing RPCA (i.e. to minimize the above total projection cost) can be thought of as follows (an illustrative sketch appears after the list):
[0067] 1. Initialize Y=0, E=0;
[0068] 2. Perform a singular value decomposition to determine a set of basis images that fully represent (X-E);
[0069] 3. Threshold the singular values to discard bases with little effect on (X-E);
[0070] 4. Update Y = the projection of X on the remaining bases;
[0071] 5. Update E = X - Y;
[0072] 6. Threshold to discard pixels where E is small;
[0073] 7. Update the thresholds; and
[0074] 8. Repeat from step 2.
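By way of a purely illustrative sketch (not part of the original disclosure), the numbered steps above can be approximated by alternately shrinking the singular values of (X - E) to update Y and soft-thresholding the entries of (X - Y) to update E. This is a simplified alternating-thresholding scheme rather than a faithful implementation of the convex solver of Candes et al.; the default weighting and step-size heuristics are assumptions.
```python
import numpy as np

def soft_threshold(M, tau):
    """Shrink values toward zero by tau (proximal operator of the L1 norm)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca_sketch(X, lam=None, tau=None, n_iter=100):
    """Simplified principal component pursuit on X (one flattened image per column)."""
    m, n = X.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))                     # common default weighting
    if tau is None:
        tau = 0.1 * np.linalg.svd(X, compute_uv=False)[0]  # heuristic shrinkage amount
    Y = np.zeros_like(X)
    E = np.zeros_like(X)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X - E, full_matrices=False)
        Y = (U * soft_threshold(s, tau)) @ Vt              # steps 2-4: threshold singular values
        E = soft_threshold(X - Y, lam * tau)               # steps 5-6: threshold residual pixels
    return Y, E
```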
[0075] This foregoing RPCA process can be viewed as a type of batch processing, as the evaluation image and all the reference images (where "reference images" are the images stored in a particular object class record) are processed in one big batch. Alternatively, one can utilize online processing by, for example, pre-computing much of the work with respect to the reference images. Such an approach can comprise, for example, performing the full RPCA algorithm on the reference images during a pre-computing stage and then, during an online stage, performing the RPCA algorithm as set forth above, with the exclusion of steps 2 and 3, on the evaluation image.
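Continuing the purely illustrative sketch above (and again not part of the original disclosure), the online stage can be approximated by assuming the pre-computing stage has already reduced the reference images to an orthonormal basis matrix U_ref; the evaluation image is then iterated through only the projection, residual, and thresholding steps. The parameter values are assumptions.
```python
import numpy as np

def online_anomaly(x_new, U_ref, shrink=0.05, n_iter=20):
    """Project a new flattened evaluation image onto fixed ordinary bases U_ref
    (columns orthonormal) while iteratively peeling off anomalies."""
    e = np.zeros_like(x_new)
    for _ in range(n_iter):
        y = U_ref @ (U_ref.T @ (x_new - e))                   # step 4: project onto retained bases
        r = x_new - y                                         # step 5: residual
        e = np.sign(r) * np.maximum(np.abs(r) - shrink, 0.0)  # step 6: discard small pixels
    return y, e
```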
[0076] In addition to the foregoing it can be useful in many application settings to register images. For example, each time a vehicle passes through a scanner the vehicle may be positioned at a slightly different location when the relevant image is captured. Without registration these variations can lead to the appearance of random horizontal shifts from image to image. As another example, the vehicle may drive through the portal at different speeds. Such differences in velocity can lead to the appearance of random horizontal stretching that changes from image to image, or even within a given image if the vehicle's speed is not constant. And as yet another example, each time a vehicle pulls into a scanner the vehicle may do so with a slight port or starboard shift or skew that can lead to a slight vertical magnification difference from one image to the next.
[0077] Confounding such concerns is the fact that the objects being registered may themselves change. Ideally, one can successfully register a convertible vehicle with the top down against a convertible with the top up, or a loaded truck against an empty truck.
[0078] The present teachings are highly flexible in practice and will accommodate various approaches to achieving such registration. Such approaches can include, for example, both pre-registration and integrated registration approaches. Pre-registration looks for certain features of the object (such as a vehicle's wheels, bumpers, body outline, and so forth) and uses those identified features to perform a pre-registration of the images before performing the foregoing RPCA (where T is omitted).
[0079] Integrated registration (used in lieu of the foregoing or in combination therewith) adds another step to the RPCA algorithm to update the registration. Using this approach one continually updates the registration while searching for a low-rank representation of the image set. In this case, the total projection cost is the same as above, C = ||Y||_* + λ||E||_1, but the constraint is now that Y + E = T(X), and we search not only for Y and E, but also for T.
[0080] One known approach in these regards is the "Robust Alignment by Sparse and Low-rank Decomposition" (RASL) algorithm. This approach is typically more computationally intensive than pre-registration but can allow for much greater differences between the objects being registered, as it does not rely on detecting any specific a priori object features. This fully automatic registration feature also makes it easier to establish new object classes without special knowledge of the object structure. For details, see Yigang Peng, Arvind Ganesh, John Wright, Wenli Xu and Yi Ma, "RASL: Robust Alignment via Sparse and Low-Rank Decomposition for Linearly Correlated Images", IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2010. Note that while Peng et al. discuss similarity, affine, and planar homography geometric transforms, their method can also easily be extended to support other transformations as well, such as curved-surface homography (which can be more appropriate for some high energy scanners, which often have curved detector arrays).
[0081] Additionally, during analysis of future object scans, it can be desirable to avoid the computational waste of re-registering existing reference scans to the same canonical ordinary object location every time. As such, once a new object has been registered and projected, it can be desirable to add T(X) to the object record, rather than simply adding X to the object record. In such an approach, one might still use RASL, with the expectation that the amount of geometric transformation that will be applied to the reference images will be small. Alternatively, one might disable geometric transformations for reference images, so one in effect uses the RPCA constraint, X = Y + E, for the reference images but the RASL constraint, T(X) = Y + E, for the object image.
[0082] In the foregoing discussion each basis is presumed to be global. That is, N ordinary-mode images can each be represented as a different weighted sum of M different basis images, each of the basis images being a full-size image.
[0083] These teachings will also accommodate, however, using local bases. These local bases can be in the form of tiles or dictionaries. While there is room for overlap between a tile-based and a dictionary-based approach, the tiled approach essentially applies the global approach to smaller rectangles (tiles) of the respective image, which may or may not overlap with one another. For example, a given image may be parsed into 4, 12, or 100 tiles or some other number of choice.
[0084] By one approach, the bases are spatially variant; that is, a tile in the upper left of the image will be composed of a different set of basis tiles than a tile in the lower right of the image. In this approach, one would typically choose fairly large tiles, typically covering the image with at most several tens of tiles in each direction.
[0085] Dictionary methods, on the other hand, tend to use a much larger number (such as, for example, many thousands) of "atoms" (i.e., the dictionary elements, which are essentially small local bases) to describe an image. By one approach, for a dictionary-based method, the same dictionary (i.e. the same set of atoms) can be used anywhere in the image (i.e. the same set of atoms are available in the upper left portion of the image as in the lower right portion of the image). Additionally, by one approach, the dictionary may be over-complete. That is, there may be many different combinations of different atoms that all lead to the same eventual image. This approach can therefore benefit from a tiered dictionary that is organized as a collection of atom-groups, each of which is itself a collection of atoms, together with a two-level sparseness constraint.
[0086] Whereas in conventional RPCA the ordinary-set compactness measure is the matrix nuclear norm, leading to a single term in the total projection cost, the dictionary method merits two separate terms: one measuring the inter-group compactness of the dictionary (i.e. how many groups of atoms are required to represent an image) and one measuring the intra-group compactness (i.e. how many atoms are necessary from each group), where typically both measures will be based on the L-1 norm or the nuclear norm.
[0087] The dictionaries (or tile bases) can be learned online in a manner similar to the RPCA approach discussed above.
[0088] Alternatively one can build part or all of the dictionary(s) offline. A hybrid approach may in fact be useful for at least some application settings. These teachings will accommodate, for example, building an online-learned dictionary of the object (using RPCA on the reference images) and an offline-learned dictionary of specialty items (say, for each person working at an embassy, one scans their briefcase in a luggage scanner to build the offline dictionary). The foregoing can then be aggregated and used as a combined dictionary for the processing of the evaluation image.
[0089] For many application settings these teachings offer better results when some given number of reference images (or a trained dictionary) are already available. Such, however, may not always be the case. A new employee at an embassy compound, for example, or a new vehicle that an existing employee drives to work, are examples in these regards. When there are not a sufficient number of existing reference images, by one approach these teachings will accommodate providing to the user (in lieu of the aforementioned anomaly image) a conventional radiography image of the object in conjunction with an indication that there is not enough scanning history for this object. As another example, the history for an object (such as a given vehicle or vehicle make/model) can be initialized as having no history, and no anomaly images are shown for the first few scans (or, if desired, an anomaly image can be provided along with a visual or auditory caution that the anomaly image is based on insufficient information and should not be relied upon).
[0090] By one approach the history can be initialized with at least one vetted scan. In a vetted scan, for example, an operator can manually inspect the object (physically and/or by careful inspection of the full radiograph). The vetted image can then be added as a reference image. By one approach a certain number of vetted images may be required before any anomaly images are provided to the user.
[0091] These teachings will readily accommodate yet other approaches in these regards, in lieu of the foregoing or in combination therewith. By one approach, for example, one can automatically add every evaluation image to the reference history for a given object. Alternatively, these teachings will accommodate having the user make a confidence decision as to the non-extraordinary state of the object in a particular image. That confidence decision can be based upon a purely subjective standard if desired or can be partially or wholly based upon some objective standard (such as a checklist of specific circumstances to confirm).
[0092] These teachings will also accommodate permitting (or requiring) reference images to expire. For example, these teachings will permit limiting each vehicle's history to some number (such as, for example, twenty or thirty or some other number as may be desired) of images, or some time period (such as, for example, two weeks, three months, or some other duration of choice), or some storage limitation (such as, for example, thirty megabytes, one gigabyte, or some other limitation of choice), or some combination of these (for example, a maximum of three months, but never falling below ten images). When an image expires, these teachings will accommodate deleting that information from memory or retaining that information for archival purposes.
[0093] By one approach, when a quota has been reached (say, we have exceeded a threshold of 20 images, or 1 Gb storage), the oldest image expires. By another approach, when a quota has been reached, the system decides which image expires based on some measure of the information provided by that image. For example, the system could be configured to delete the image that has the smallest projection cost, since that image can be best represented by the other images in the training set and thus adds little. Alternatively, the system could be configured to delete the image that has the largest projection cost, since that is the least "ordinary" image in the training set and may thus represent a rare case. Or the system could use some measure other than projection cost; for example, by deleting the image that appears most similar to the rest of the training set.
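By way of a purely illustrative sketch (not part of the original disclosure), one possible quota-based expiry policy along the lines described above might look as follows; the record structure, the quota, and the externally supplied projection_cost function are all assumptions.
```python
def expire_references(record, max_images=20, policy="oldest", projection_cost=None):
    """Trim a reference list (each entry a dict with 'image' and 'timestamp' keys)
    down to a quota, expiring either the oldest entry or the least informative one."""
    while len(record) > max_images:
        if policy == "oldest":
            victim = min(range(len(record)), key=lambda i: record[i]["timestamp"])
        elif policy == "least_informative":
            # smallest projection cost: best explained by the other references
            victim = min(range(len(record)), key=lambda i: projection_cost(record[i], record))
        else:
            raise ValueError("unknown expiry policy: %s" % policy)
        del record[victim]
    return record
```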
[0094] Once the raw anomaly image E is produced (as described above), it may also be useful to post-process this image to remove any false anomalies, resulting in a cleaned anomaly image with reduced false alarms. There are a number of useful techniques to produce a cleaned anomaly image. By one approach, the control circuit discards anomaly pixels in E that are very close (but not necessarily equal) to zero; typically this can be effected by simple thresholding. By another approach, regions that are smaller in area than some predetermined threshold are discarded; typically this can be effected using morphological processing. By another approach, thin anomalous regions, where artifacts are more likely due to imperfect registration, are discarded; typically this can be effected using a combination of edge detection and morphological processing. By one approach, a system will employ more than one of these or other techniques in combination, as warranted by the particular application.
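By way of a purely illustrative sketch (not part of the original disclosure), two of the clean-up techniques named above, discarding near-zero pixels by thresholding and discarding small regions by connected-component area filtering, can be combined as follows using scipy.ndimage; the threshold values are assumptions.
```python
import numpy as np
from scipy import ndimage

def clean_anomaly_image(E, amp_thresh=0.05, min_area=50):
    """Suppress near-zero pixels in E, then remove connected regions smaller than min_area."""
    mask = np.abs(E) > amp_thresh                       # drop pixels very close to zero
    labels, n = ndimage.label(mask)                     # connected-component labeling
    if n:
        areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
        keep_ids = np.nonzero(areas >= min_area)[0] + 1
        keep = np.isin(labels, keep_ids)
    else:
        keep = mask
    return np.where(keep, E, 0.0)
```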
[0095] There are many different possibilities for rendering the processing results.
Some illustrative rendering options include (where X represents the evaluation image, Y represents an ordinary-mode image (i.e. low rank typicalized image), and E represents the anomaly image (which can be either raw or cleaned, though for most applications, the cleaned anomaly image will generally be preferred)):
[0096] Showing a conventional radiograph image to include highlighting for any nonzero portions in E by:
[0097] - showing the E=0 portion of the radiograph in grayscale, and the non-zero E portions in color;
[0098] - drawing a box around the highlighted portions; and/or
[0099] - showing the radiograph normally (grayscale or color) but fading out everything not in the non-zero portions of E; or
[00100] Showing the non-zero E portions directly (essentially with Y peeled away from X).
[00101] In any of the foregoing use cases it may be useful to provide the user with an ability to selectively toggle between the original X and the anomaly-highlight image.
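By way of a purely illustrative sketch (not part of the original disclosure), the first highlighting option above (a grayscale radiograph with the non-zero portions of E shown in color) might be rendered as an RGB array as follows; the highlight color and blending weights are assumptions.
```python
import numpy as np

def anomaly_overlay(X, E, highlight=(1.0, 0.2, 0.2)):
    """Return an RGB image: grayscale radiograph where E == 0,
    blended toward the highlight color where E is non-zero."""
    gray = (X - X.min()) / (X.max() - X.min() + 1e-12)  # normalize to [0, 1]
    rgb = np.stack([gray, gray, gray], axis=-1)
    mask = E != 0
    for c in range(3):
        channel = rgb[..., c]
        channel[mask] = 0.5 * channel[mask] + 0.5 * highlight[c]  # blend toward highlight color
    return rgb
```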
[00102] The above-described processes are readily enabled using any of a wide variety of available and/or readily configured platforms, including partially or wholly programmable platforms as are known in the art or dedicated purpose platforms as may be desired for some applications. Referring now to FIG. 5, an illustrative approach to such a platform will now be provided.
[00104] In this example the enabling apparatus 500 includes a control circuit 501 that operably couples to a memory 502, an application interface 503, and a display 504. Such a control circuit 501 can comprise a fixed-purpose hard- wired platform or can comprise a partially or wholly-programmable platform. These architectural options are well known and understood in the art and require no further description here. This control circuit 501 is configured (for example, by using corresponding programming as will be well understood by those skilled in the art) to carry out one or more of the steps, actions, and/or functions described herein.
[00105] The memory 502 may be integral to the control circuit 501 or can be physically discrete (in whole or in part) from the control circuit 501 as desired. This memory 502 can also be local with respect to the control circuit 501 (where, for example, both share a common circuit board, chassis, power supply, and/or housing) or can be partially or wholly remote with respect to the control circuit 501 (where, for example, the memory 502 is physically located in another facility, metropolitan area, or even country as compared to the control circuit 501).
[00106] These teachings will also readily accommodate implementation via a distributed network. Consider, for example, a system having 20 scanners scattered around the country. Each scanner has a local control circuit that handles the processing of vehicle-based images. When one such station sees a certain license plate, that station can access the other records that correspond to that license plate taken on other scanners, either by getting that information directly from the local storage of the other scanners, or from some central database. That scanner/control circuit could then do its processing locally or, alternatively, it could just send its image to a remote high-powered processing cluster that will do the projection and return the anomaly image.
[00107] Similarly, after completing the scan (either immediately, or during a lull period, like overnight) the scanner might upload its record to a remote central database to thereby distribute the aforementioned memory block over several locations. The aforementioned reviewing stations might be distributed one-for-one for each image capture platform, or there might be a central reviewing room somewhere (almost like a call center) where one group of people does all the analysis for all 20 scanners. It will be understood that the specifics of this example are intended to serve to illustrate an approach to a distributed network and are not intended to suggest any particular limitations in these regards.
[00108] This memory 502 can serve, for example, to non-transitorily store the computer instructions that, when executed by the control circuit 501, cause the control circuit 501 to behave as described herein. (As used herein, this reference to "non-transitorily" will be understood to refer to a non-ephemeral state for the stored contents (and hence excludes when the stored contents merely constitute signals or waves) rather than volatility of the storage media itself, and hence includes both non-volatile memory (such as read-only memory (ROM)) as well as volatile memory (such as an erasable programmable read-only memory (EPROM)).)
[00109] The user interface 503 can comprise any of a variety of user-input mechanisms (such as, but not limited to, keyboards and keypads, cursor-control devices, touch-sensitive displays, speech-recognition interfaces, gesture-recognition interfaces, and so forth) to facilitate receiving information and/or instructions from a user. The display 504 can similarly comprise any of a wide variety of display mechanisms. As such user interfaces 503 and displays 504 are very well known in the art, and as the present teachings are not overly sensitive to any particular selections in these regards, further elaboration in these regards is not provided here for the sake of brevity.
[00110] By one approach the control circuit 501 also operably couples to one or more network interfaces 505. Such an interface can serve to communicatively connect the control circuit 501 to one or more other devices via any of a variety of local and non-local networks (including but certainly not limited to the extranet known as the Internet).
[00111] The present teachings will also accommodate operably coupling the control circuit 501 to one or more image capture platforms 506 that capture some or all of the images referred to herein. Accordingly, by one approach this image capture platform comprises a mechanism that employs high energy beams (such as x-rays) to capture images of the object of interest.
[00112] So configured, these teachings provide a convenient, reliable, automated, relatively fast, and effective way to help assess whether a given object presents itself in any extraordinary state at a time of need or interest. These results are readily used in an intuitive manner by the user to determine whether and how to conduct a further inspection of the object as regards any extraordinary circumstance.
[00113] These teachings can be leveraged in various ways. By one approach the aforementioned anomalies can help to identify security risks. By another approach (when used, for example, to inspect items coming off an assembly line) identifying and presenting such anomalies can help to identify manufacturing defects and hence serve as a form of nondestructive testing.
[00114] Those skilled in the art will recognize that a wide variety of modifications, alterations, and combinations can be made with respect to the above described embodiments without departing from the scope of the invention, and that such modifications, alterations, and combinations are to be viewed as being within the ambit of the inventive concept.

Claims

What is claimed is:
1. An apparatus comprising:
a memory having stored therein:
data representing an image of a particular unique object;
an object class record that contains information related to an ordinary set for an object class, wherein the ordinary set includes a range of images that represent a non-extraordinary state for the object class;
a control circuit operably coupled to the memory and configured to:
identify an object class of the particular unique object;
use the object class to access the object class record;
use the data to project the image of the particular unique object onto the ordinary set for the object class to provide projection information;
produce an anomaly image by, at least in part, identifying differences between the image of the particular object and the projection information.
2. The apparatus of claim 1 wherein the control circuit is configured to identify the object class by at least one of:
when the object is a vehicle, associating one object class with each unique license plate;
when the object is a vehicle, associating one object class with each vehicle make and model;
when the object is cargo, associating one object class with each cargo manifest;
when the object is an item within a cargo shipment, associating the one object class with a collection of all items within the cargo shipment;
when the object is a manufactured item, associating one object class with the manufactured item type.
3. The apparatus of claim 1 wherein the object class record comprises, at least in part, information related to at least one ordinary-mode image.
4. The apparatus of claim 3 wherein the ordinary-mode image includes image content that is common to all of the plurality of different views of the particular unique object in a non-extraordinary state.
5. The apparatus of claim 1 further comprising:
a display operably coupled to the control circuit;
wherein the control circuit is configured to present the anomaly image via the display.
6. The apparatus of claim 1 wherein the control circuit is configured to facilitate identifying differences between an x-ray image of the particular unique vehicle and a normalized record of the particular unique vehicle by, at least in part, identifying at least one of:
at least one vehicle part that is physically reconfigured;
a cargo item.
7. The apparatus of claim 1 wherein the control circuit is configured to employ at least one threshold value to distinguish between a minor difference and a non-minor difference between the x-ray image of the particular unique vehicle and a normalized record of the particular unique vehicle.
8. The apparatus of claim 1 wherein the data representing an image of a particular unique object comprises x-ray data.
9. A method comprising:
by a control circuit:
accessing data representing an image of a particular unique object;
identifying an object class of the particular unique object;
accessing an object class record that contains information related to an ordinary set for the object class, wherein the ordinary set includes a range of images that represent a non-extraordinary state for the object class;
using the data to project the image of the particular unique object onto the ordinary set for the object class to provide projection information;
producing an anomaly image by, at least in part, identifying differences between the image of the particular object and the projection information.
10. The method of claim 9 wherein the object class is identified by at least one of:
when the object is a vehicle, associating one object class with each unique license plate;
when the object is a vehicle, associating one object class with each vehicle make and model;
when the object is cargo, associating one object class with each cargo manifest;
when the object is an item within a cargo shipment, associating the one object class with a collection of all items within the cargo shipment;
when the object is a manufactured item, associating one object class with the manufactured item type.
11. The method of claim 9 wherein the object class record comprises, at least in part, information related to at least one ordinary-mode image.
12. The method of claim 11 wherein the ordinary-mode image includes image content that is common to all of the plurality of different views of the particular unique object in a non-extraordinary state.
13. The method of claim 11 wherein the object class record further comprises at least one delta tile that can be combined with the ordinary mode image to yield a projection of the image of the particular unique object onto the non-extraordinary state.
14. The method of claim 13 where at least one of the at least one delta tile is a delta image.
15. The method of claim 9 wherein the object class record comprises data representing images of other objects within the object class, and projecting the image of the particular unique object onto the ordinary set for the object class comprises:
defining a measure of compactness for an ordinary set;
defining an outlier-resilient difference measure that measures a total difference between an image and the projection of the image onto the ordinary set;
defining a total projection cost that combines the measure of compactness for the ordinary set with the total of all outlier-resilient differences of each image from the object class record of other objects within the object class;
determining the ordinary set of the object class by, at least in part, finding a set that minimizes the total projection cost.
16. The method of claim 15 wherein the outlier-resilient difference measure for each image also comprises a geometric transformation for the image, and determining the ordinary set comprises also determining the geometric transformations to spatially register each image with respect to the ordinary set for the image class.
17. The method of claim 9 further comprising:
presenting the anomaly image.
18. The method of claim 9 wherein the object class record represents a rolling window of captured views of objects within the object class.
19. The method of claim 9 wherein producing an anomaly image comprises, at least in part, identifying at least one of:
at least one portion of the object that is missing relative to the object class record;
at least one portion of the object that is new relative to the object class record;
at least one portion of the object that has moved relative to the object class record.
20. The method of claim 9 wherein identifying differences between the image of the particular object and the projection information comprises employing at least one threshold value to distinguish between a minor difference and a non-minor difference.
PCT/US2014/028258 2013-03-14 2014-03-14 Apparatus and method for producing anomaly images WO2014152923A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361783285P 2013-03-14 2013-03-14
US61/783,285 2013-03-14

Publications (1)

Publication Number Publication Date
WO2014152923A1 true WO2014152923A1 (en) 2014-09-25

Family

ID=51581356

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/028258 WO2014152923A1 (en) 2013-03-14 2014-03-14 Apparatus and method for producing anomaly images

Country Status (1)

Country Link
WO (1) WO2014152923A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11941716B2 (en) 2020-12-15 2024-03-26 Selex Es Inc. Systems and methods for electronic signature tracking

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030185340A1 (en) * 2002-04-02 2003-10-02 Frantz Robert H. Vehicle undercarriage inspection and imaging method and system
WO2004019616A1 (en) * 2002-08-20 2004-03-04 Scyron Limited Safety method and apparatus
US20050264796A1 (en) * 2003-09-10 2005-12-01 Shaw Eugene L Non-destructive testing and imaging
US20120140079A1 (en) * 2005-02-23 2012-06-07 Millar Christopher A Entry Control Point Device, System and Method
KR20120128110A (en) * 2011-05-16 2012-11-26 이에이디에스 도이치란트 게엠베하 Image analysis for disposal of explosive ordinance and safety inspections

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14769705

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14769705

Country of ref document: EP

Kind code of ref document: A1