
EP1064613A2 - Energy minimization for classification, pattern recognition, sensor fusion, data compression, network reconstruction and signal processing - Google Patents

Energy minimization for classification, pattern recognition, sensor fusion, data compression, network reconstruction and signal processing

Info

Publication number
EP1064613A2
EP1064613A2
Authority
EP
European Patent Office
Prior art keywords
data
matrices
space output
source space
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP98966467A
Other languages
German (de)
French (fr)
Other versions
EP1064613A4 (en)
Inventor
Jeff B. Glickman
Abel Wolman
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of EP1064613A2
Publication of EP1064613A4

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2137Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on criteria of topology preservation, e.g. multidimensional scaling or self-organising maps

Definitions

  • An appendix of computer program source code is included and comprises 22 sheets.
  • the present invention relates to recognition, analysis, and classification of patterns in data from real world sources, events and processes. Patterns exist throughout the real world. Patterns also exist in the data used to represent or convey or store information about real world objects or events or processes. As information systems process more real world data, there are mounting requirements to build more sophisticated, capable and reliable pattern recognition systems.
  • Existing pattern recognition systems include statistical, syntactic and neural systems. Each of these systems has certain strengths which lends it to specific applications. Each of these systems has problems which limit its effectiveness.
  • Some real world patterns are purely statistical in nature. Statistical and probabilistic pattern recognition works by expecting data to exhibit statistical patterns. Pattern recognition by this method alone is limited. Statistical pattern recognizers cannot see beyond the expected statistical pattern. Only the expected statistical pattern can be detected.
  • Syntactic pattern recognizers function by expecting data to exhibit structure. While syntactic pattern recognizers are an improvement over statistical pattern recognizers, perception is still narrow and the system cannot perceive beyond the expected structures. While some real world patterns are structural in nature, the extraction of structure is unreliable.
  • Pattern recognition systems that rely upon neural pattern recognizers are an improvement over statistical and syntactic recognizers. Neural recognizers operate by storing training patterns as synaptic weights. Later stimulation retrieves these patterns and classifies the data.
  • the fixed structure of neural pattern recognizers limits their scope of recognition. While a neural system can learn on its own, it can only find the patterns that its fixed structure allows it to see. The difficulties with this fixed structure are illustrated by the well-known problem that the number of hidden layers in a neural network strongly affects its ability to learn and generalize. Additionally, neural pattern recognition results are often not reproducible. Neural nets are also sensitive to training order, often require redundant data for training, can be slow learners and sometimes never learn. Most importantly, as with statistical and syntactic pattern recognition systems, neural pattern recognition systems are incapable of discovering truly new knowledge.
  • an analyzer/classifier process for data comprises using energy minimization with one or more input matrices.
  • the data to be analyzed/classified is processed by an energy minimization technique such as individual differences multidimensional scaling (IDMDS) to produce at least a rate of change of stress/energy.
  • Using the rate of change of stress/energy and possibly other IDMDS output, the data are analyzed or classified through patterns recognized within the data.
  • FIG. 1 is a diagram illustrating components of an analyzer according to the first embodiment of the invention.
  • FIG. 2 through FIG. 10 relate to examples illustrating use of an embodiment of the invention for data classification, pattern recognition, and signal processing.
  • the method and apparatus in accordance with the present invention provide an analysis tool with many applications.
  • This tool can be used for data classification, pattern recognition, signal processing, sensor fusion, data compression, network reconstruction, and many other purposes.
  • the invention relates to a general method for data analysis based on energy minimization and least energy deformations.
  • the invention uses energy minimization principles to analyze one to many data sets.
  • energy is a convenient descriptor for concepts which are handled similarly mathematically. Generally, the physical concept of energy is not intended by use of this term but the more general mathematical concept.
  • individual data sets are characterized by their deformation under least energy merging.
  • a number of methods for producing energy minimization and least energy merging and extraction of deformation information have been identified; these include, the finite element method (FEM), simulated annealing, and individual differences multidimensional scaling (IDMDS).
  • the presently preferred embodiment of the invention utilizes individual differences multidimensional scaling (IDMDS).
  • Multidimensional scaling is a class of automated, numerical techniques for converting proximity data into geometric data.
  • IDMDS is a generalization of MDS, which converts multiple sources of proximity data into a common geometric configuration space, called the common space, and an associated vector space called the source space. Elements of the source space encode deformations of the common space specific to each source of proximity data.
  • MDS and IDMDS were developed for psychometric research, but are now standard tools in many statistical software packages. MDS and IDMDS are often described as data visualization techniques. This description emphasizes only one aspect of these algorithms.
  • p is a measure of proximity between objects in S. The goal of MDS is then to construct a mapping f from S into a metric space (X, d).
  • X is usually assumed to be n-dimensional Euclidean space R^n, with n sufficiently small.
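The construction of f can be illustrated with classical (Torgerson) metric MDS. This is not the PROXSCAL algorithm used below, only a minimal sketch of how a proximity matrix becomes a configuration in R^n; all function names are illustrative:

```python
import numpy as np

def classical_mds(d, n_dims=2):
    """Embed a symmetric distance matrix d into R^n_dims via
    Torgerson double centering (classical metric MDS)."""
    m = d.shape[0]
    j = np.eye(m) - np.ones((m, m)) / m          # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                  # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)               # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_dims]        # keep the top n_dims components
    l = np.clip(vals[idx], 0, None)
    return vecs[:, idx] * np.sqrt(l)

# Distances among three collinear points 0, 3, 5 on a line:
d = np.array([[0., 3., 5.],
              [3., 0., 2.],
              [5., 2., 0.]])
x = classical_mds(d, n_dims=1)
# The recovered 1-D configuration reproduces the original distances.
rec = np.abs(x[:, 0][:, None] - x[:, 0][None, :])
print(np.allclose(rec, d))  # True
```

Because these proximities are exact Euclidean distances, one dimension suffices and the stress of the recovered configuration is zero.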
  • IDMDS is a method for representing many points of view.
  • the different proximities p k can be viewed as giving the proximity perceptions of different judges.
  • IDMDS accommodates these different points of view by finding different maps f k for each judge.
  • These individual maps, or their image configurations, are deformations of a common configuration space whose interpoint distances represent the common or merged point of view.
  • MDS and IDMDS can be further broken down into so-called metric and nonmetric versions.
  • In metric MDS or IDMDS the transformations f (f k ) are parametric functions of the proximities p (p k ).
  • Nonmetric MDS or IDMDS generalizes the metric approach by allowing arbitrary admissible transformations f (f k ), where admissible means the association between proximities and transformed proximities (also called disparities in this context) is weakly monotone:
  • p ij < p kl implies f(p ij ) ≤ f(p kl ).
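The weak monotonicity (admissibility) condition can be checked directly on a list of proximities and their disparities; a minimal sketch with an illustrative function name:

```python
from itertools import product

def weakly_monotone(prox, disp):
    """Check the admissibility condition for nonmetric (I)MDS:
    p_ij < p_kl must imply f(p_ij) <= f(p_kl)."""
    pairs = list(zip(prox, disp))
    return all(d1 <= d2
               for (p1, d1), (p2, d2) in product(pairs, pairs)
               if p1 < p2)

# Tied proximities may map to different disparities, but order is never inverted:
proximities = [1.0, 2.0, 2.0, 4.0]
disparities = [0.5, 1.5, 1.2, 1.5]
print(weakly_monotone(proximities, disparities))          # True
print(weakly_monotone([1.0, 2.0, 3.0], [0.3, 0.2, 0.9]))  # False
```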
  • For PROXSCAL, see Commandeur, J. and Heiser, W., "Mathematical derivations in the proximity scaling (PROXSCAL) of symmetric data matrices," Tech. report no.
  • PROXSCAL is a least squares, constrained majorization algorithm for IDMDS. We now summarize this algorithm, following closely the above reference.
  • PROXSCAL is a least squares approach to IDMDS which minimizes the objective function
  • σ(f, X) = Σ (f(p ij ) − d ij (X))², where the sum runs over all sources and all pairs of objects.
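Taking f to be the identity transformation and a single source, the objective reduces to a sum of squared residuals between proximities and configuration distances; a minimal numpy sketch (names are illustrative):

```python
import numpy as np

def raw_stress(prox, X):
    """sigma(X) = sum over i < j of (p_ij - d_ij(X))^2, with f = identity.
    prox: hollow symmetric proximity matrix; X: configuration (m x n)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # d_ij(X)
    iu = np.triu_indices(len(prox), k=1)                        # pairs i < j
    return float(((prox[iu] - d[iu]) ** 2).sum())

prox = np.array([[0., 1., 2.],
                 [1., 0., 1.],
                 [2., 1., 0.]])
X = np.array([[0., 0.], [1., 0.], [2., 0.]])  # exact embedding on a line
print(raw_stress(prox, X))  # 0.0
```

Minimizing this quantity over configurations X (and, in the full algorithm, over admissible transformations f) is the energy minimization step.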
  • ALSCAL also produces common and source spaces, but these spaces are computed through alternating least squares without explicit use of constraints. Either form of IDMDS can be used in the present invention.
  • MDS and IDMDS have proven useful for many kinds of analyses. However, it is believed that prior utilizations of these techniques have not extended the use of these techniques to further possible uses for which MDS and IDMDS have particular utility and provide exceptional results. Accordingly, one benefit of the present invention is to incorporate MDS or IDMDS as part of a platform in which aspects of these techniques are extended. A further benefit is to provide an analysis technique, part of which uses IDMDS, that has utility as an analytic engine applicable to problems in classification, pattern recognition, signal processing, sensor fusion, and data compression, as well as many other kinds of data analytic applications.
  • Step 110 is a front end for data transformation.
  • Step 120 is a process step implementing energy minimization and deformation computations — in the presently preferred embodiment, this process step is implemented through the IDMDS algorithm.
  • Step 130 is a back end which interprets or decodes the output of the process step 120.
  • step 110 may be configured as first code,
  • step 120 may be configured as second code, and
  • step 130 may be configured as third code, with each code comprising a plurality of machine readable steps or operations for performing the specified operations. While step 110, step 120 and step 130 have been shown as three separate elements, their functionality can be combined and/or distributed. It is to be further understood that "medium" is intended to broadly include any suitable medium, including analog or digital, hardware or software, now in use or developed in the future.
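The front end / process / back end decomposition of steps 110, 120 and 130 can be sketched as a composable pipeline. The stand-ins below (a toy matrix transform, a placeholder summary in place of IDMDS, a trivial decoder) are hypothetical, for illustration only:

```python
import numpy as np

def tool(front_end, process, back_end):
    """Compose the three steps: data -> matrices (step 110) ->
    energy minimization output (step 120) -> interpretation (step 130)."""
    def run(sources):
        matrices = [front_end(s) for s in sources]   # step 110: front end
        output = process(matrices)                   # step 120: e.g. IDMDS
        return back_end(output)                      # step 130: back end
    return run

# Toy stand-ins (hypothetical, for illustration only):
front = lambda s: np.abs(np.subtract.outer(s, s))   # hollow symmetric matrix
proc = lambda ms: [float(m.mean()) for m in ms]     # placeholder summary, not IDMDS
back = lambda out: sorted(range(len(out)), key=lambda i: out[i])

run = tool(front, proc, back)
print(run([np.array([0., 1., 2.]), np.array([0., 10., 20.])]))  # [0, 1]
```

The point of the composition is the one made in the text: the three elements are separable, so any of them can be swapped, combined, or distributed.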
  • Step 110 of the tool 100 is the transformation of the data into matrix form.
  • the only constraint on this transformation for the illustrated embodiment is that the resulting matrices be square.
  • the type of transformation used depends on the data to be processed and the goal of the analysis. In particular, it is not required that the matrices be proximity matrices in the traditional sense associated with IDMDS.
  • time series and other sequential data may be transformed into source matrices through straight substitution into entries of symmetric matrices of sufficient dimensionality (this transformation will be discussed in more detail in an example below).
  • Time series or other signal processing data may also be Fourier or otherwise analyzed and then transformed to matrix form.
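The entry-wise encoding of sequential data can be sketched with the absolute-difference rule used in the later examples (each matrix entry is the absolute difference of two samples); names are illustrative:

```python
import numpy as np

def series_to_source_matrix(x):
    """Encode a 1-D series as a hollow symmetric dissimilarity matrix:
    M[i, j] = |x_i - x_j| (zero diagonal, symmetric)."""
    x = np.asarray(x, dtype=float)
    return np.abs(x[:, None] - x[None, :])

m = series_to_source_matrix([1, 3, 6])
print(m)
# [[0. 2. 5.]
#  [2. 0. 3.]
#  [5. 3. 0.]]
print(np.allclose(m, m.T), np.all(np.diag(m) == 0))  # True True
```

Because the result is hollow and symmetric, it can be fed to IDMDS directly as a dissimilarity matrix even though the input was not proximity data in the traditional sense.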
  • Step 120 of the tool 100 implements energy minimization and extraction of deformation information through IDMDS.
  • the stress function ⁇ defines an energy functional over configurations and transformations.
  • the weight vectors diag( W k ) are the contextual signature, with respect to the common space, of the k-th input source. Interpretation of σ as an energy functional is fundamental; it greatly expands the applicability of MDS as an energy minimization engine for data classification and analysis.
  • Step 130 consists of both visual and analytic methods for decoding and interpreting the source space W from step 120. Unlike traditional applications of IDMDS, tool 100 often produces high dimensional output. Among other things, this makes visual interpretation and decoding of the source space problematic. Possible analytic methods for understanding the high dimensional spaces include, but are not limited to, linear programming techniques for hyperplane and decision surface estimation, cluster analysis techniques, and generalized gravitational model computations. A source space dye-dropping or tracer technique has been developed for both source space visualization and analytic postprocessing. Step 130 may also consist in recording stress/energy, or the rate of change of stress/energy, over multiple dimensions.
  • the graph of energy (or the rate of change of stress/energy) against dimension can be used to determine network and dynamical system dimensionality.
  • the graph of stress/energy against dimensionality is traditionally called a scree plot.
  • the use and purpose of the scree plot is greatly extended in the present embodiment of the tool 100.
  • Step 110 of the tool 100 converts each S k ∈ S to matrix form M(S k ), where M(S k ) is a p-dimensional real hollow symmetric matrix. Hollow means the diagonal entries of M(S k ) are zero. As indicated above, M(S k ) need not be symmetric or hollow, but for simplicity of exposition these additional restrictions are adopted. Note also that the matrix dimensionality p is a function of the data S and the goal of the analysis. Since M(S k ) is hollow symmetric, it can be interpreted and processed in IDMDS as a proximity (dissimilarity) matrix.
  • H p (R) is the set of p-dimensional hollow real symmetric matrices.
  • M depends on the type of data in S, and the purpose of the analysis. For example, if S contains time series data, then M might entail the straightforward entry-wise encoding mentioned above. If S consists of optical character recognition data, or some other kind of geometric data, then M(S k ) may be a standard distance matrix whose ij-th entry is the Euclidean distance between "on" pixels i and j. M can also be combined with other transformations to form the composite (M ∘ F)(S k ), where F, for example, is a fast Fourier transform (FFT).
  • M(S k ) e M(S) is an input source for IDMDS.
  • the IDMDS output is a common space Z ⊂ R^n and a source space W.
  • low dimensional output spaces are essential. In the case of network reconstruction, system dimensionality is discovered by the invention itself. IDMDS can be thought of as a constrained energy minimization process.
  • IDMDS attempts to find the lowest stress or energy configurations X k which also satisfy the constraint equation.
  • Configurations X k most similar to the source matrices M(S k ) have the lowest energy.
  • each X k is required to match the common space Z up to deformation defined by the weight matrices W k .
  • the common space serves as a characteristic, or reference object.
  • the weight space signatures are contextual; they are defined with respect to the reference object Z.
  • the contextual nature of the source deformation signature is fundamental.
  • Z-contextuality of the signature allows the tool 100 to display integrated unsupervised machine learning and generalization.
  • the analyzer/classifier learns seamlessly and invisibly.
  • Z-contextuality also allows the tool 100 to operate without a priori data models.
  • the analyzer/classifier constructs its own model of the data, the common space Z.
  • Step 130, the back end of the tool 100, decodes and interprets the source or classification space output W from IDMDS. Since this output can be high dimensional, visualization techniques must be supplemented by analytic methods of interpretation.
  • a dye-dropping or tracer technique has been developed for both visual and analytic postprocessing. This entails differential marking or coloring of source space output.
  • the specification of the dye-dropping is contingent upon the data and overall analysis goals. For example, dye-dropping may be two-color or binary allowing separating hyperplanes to be visually or analytically determined.
  • For an analytic approach to separating hyperplanes using binary dye-dropping, see Bosch, R. and Smith, J., "Separating hyperplanes and the authorship of the disputed federalist papers," American Mathematical Monthly, Vol. 105, 1998.
  • Discrete dye-dropping allows the definition of generalized gravitational clustering measures of the form
  • A denotes a subset of W (indicated by dye-dropping)
  • ⁇ A (x) is the characteristic function on A
  • d(-,-) is a distance function
  • p ∈ R is an exponent
  • Such measures may be useful for estimating missing values in data bases.
  • Dye-dropping can be defined continuously, as well, producing a kind of height function on W. This allows the definition of decision surfaces or volumetric discriminators.
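The text does not reproduce the explicit form of the generalized gravitational measure. One plausible reading of the symbols listed above is g_p(A, x) = Σ over w in A of d(x, w)^(-p); the sketch below pairs this assumed form with the two-class decision rule used later in Example E. Formula, names, and data are all assumptions:

```python
import math

def g(A, x, W, p=2.0):
    """Hypothetical gravitational measure: sum of d(x, w)^(-p) over the
    dyed points w in A (a subset of source space W). The exact form in
    the text is not given; this is one plausible reading of the symbols."""
    total = 0.0
    for w in W:
        if w in A:                      # chi_A(w) = 1 only for dyed points
            dist = math.dist(x, w)      # d(., .): Euclidean distance
            if dist > 0:
                total += dist ** (-p)
    return total

# Illustrative source-space points dyed into two classes:
W = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0), (6.0, 5.0)]
A1 = {(0.0, 0.0), (1.0, 0.0)}
A2 = {(5.0, 5.0), (6.0, 5.0)}
x = (0.5, 0.5)                          # unlabeled point to classify
label = 1 if g(A1, x, W) > g(A2, x, W) else 2
print(label)  # 1
```

The unlabeled point is pulled most strongly by the nearer dyed cluster, which is the intended clustering behavior of a gravitational measure.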
  • the source space W is also analyzable using standard cluster analytic techniques.
  • the precise clustering metric depends on the specifications and conditions of the IDMDS analysis in question.
  • the stress/energy and rate of change of stress/energy can be used as postprocessing tools.
  • Minima or kinks in a plot of energy, or the rate of change of energy, over dimension can be used to determine the dimensionality of complex networks and general dynamical systems for which only partial output information is available. In fact, this technique often allows dimensionality to be inferred from only a single data stream or time series of observed data.
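Locating the stress/energy minimum across candidate dimensions is a one-line operation once the per-dimension values are available; a sketch with illustrative (not measured) stress values, echoing Example F below, where the minimum falls at dimension 4:

```python
def network_dimensionality(stress_by_dim):
    """Return the dimension with minimum stress/energy from a
    {dimension: stress} mapping (cf. the scree plot discussion)."""
    return min(stress_by_dim, key=stress_by_dim.get)

# Hypothetical stress values for dimensions 1-6 (illustrative numbers only):
stress = {1: 0.30, 2: 0.12, 3: 0.05, 4: 0.001, 5: 0.004, 6: 0.006}
print(network_dimensionality(stress))  # 4
```

Detecting a kink rather than a strict minimum would instead compare successive differences (the rate of change of stress/energy) across dimensions.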
  • each polygon S k was divided into 60 equal segments with the segment endpoints ordered clockwise from a fixed initial endpoint.
  • a turtle application was then applied to each polygon to compute the Euclidean distance from each segment endpoint to every other segment endpoint (initial endpoint included). Let x i k denote the i-th endpoint of polygon S k ; then the mapping M is defined by
  • the individual column vectors d_ have intrinsic interest. When plotted as functions of arc length they represent a geometric signal which contains both frequency and spatial information.
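The perimeter-sampling preprocessing of this example can be sketched as follows, using 8 segment endpoints instead of 60 and a unit square in place of the study polygons; function names are illustrative:

```python
import numpy as np

def perimeter_points(vertices, n):
    """Place n equally spaced points (by arc length) along the closed
    polygon whose ordered vertices are given."""
    v = np.asarray(vertices, dtype=float)
    edges = np.roll(v, -1, axis=0) - v
    lengths = np.linalg.norm(edges, axis=1)
    cum = np.concatenate([[0.0], np.cumsum(lengths)])
    pts = []
    for t in np.arange(n) * cum[-1] / n:
        i = np.searchsorted(cum, t, side="right") - 1  # edge containing t
        frac = (t - cum[i]) / lengths[i]
        pts.append(v[i] + frac * edges[i])
    return np.array(pts)

def distance_matrix(pts):
    """All pairwise Euclidean distances between segment endpoints:
    the role played by the mapping M in this example."""
    return np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
pts = perimeter_points(square, 8)        # 8 endpoints instead of 60
M = distance_matrix(pts)
print(M.shape, np.allclose(M, M.T))      # (8, 8) True
```

The resulting matrix is hollow and symmetric, so it can serve directly as a source matrix for IDMDS, exactly as the 60 x 60 matrices do in the example.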
  • The 16, 60 x 60 distance matrices were input into a publicly distributed version of PROXSCAL.
  • PROXSCAL was run with the following technical specifications: sources- 16, objects- 60, dimension- 4, model- weighted, initial configuration- Torgerson, conditionality- unconditional, transformations- numerical, rate of convergence- 0.0, number of iterations- 500, and minimum stress- 0.0.
  • FIG. 2 and FIG. 3 show the four dimensional common and source space output.
  • the common space configuration appears to be a multifaceted representation of the original polygons. It forms a simple closed path in four dimensions which, when viewed from different angles, or, what is essentially the same thing, when deformed by the weight matrices, produces a best, in the sense of minimal energy, representation of each of the two dimensional polygonal figures. The most successful such representation appears to be that of the triangle projected onto the plane determined by dimensions 2 and 4.
  • the different types of polygons are arranged, and hence, classified, along different radii. Magnitudes within each such radial classification indicate polygon size or scale with the smaller polygons located nearer the origin.
  • the contextual nature of the polygon classification is embodied in the common space configuration.
  • this configuration looks like a single, carefully bent wire loop.
  • this loop of wire looks variously like a triangle, a square, a pentagon, or a hexagon.
  • Example B Classification of non-regular polygons
  • the polygons in Example A were regular.
  • the perimeter of each figure S k was divided into 30 equal segments with the preprocessing transformation M computed as in Example A. This produced 6, 30 x 30 source matrices which were input into PROXSCAL with technical specifications the same as those above except for the number of sources, 6, and objects, 30.
  • FIG. 4 and FIG. 5 show the three dimensional common and source space outputs.
  • the common space configuration again, has a "holographic" or faceted quality; when illuminated from different angles, it represents each of the polygonal figures.
  • this change of viewpoint is encoded in the source space weight vectors. While the weight vectors encoding triangles and rectangles are no longer radially arranged, they can clearly be separated by a hyperplane and are thus accurately classified by the analysis tool as presently embodied.
  • This example relates to signal processing and demonstrates the analysis tool's invariance with respect to phase and frequency modification of time series data. It also demonstrates an entry-wise approach to computing the preprocessing transformation M.
  • the set S = {S 1 ,...,S 12 } consisted of sine, square, and sawtooth waveforms.
  • This preprocessing produced 12, 9 x 9 source matrices which were input to PROXSCAL with the following technical specifications: sources- 12, objects- 9, dimension- 8, model- weighted, initial configuration- Torgerson, conditionality- unconditional, transformations- ordinal, approach to ties- secondary, rate of convergence- 0.0, number of iterations- 500, and minimum stress- 0.0.
  • the data, while metric or numeric, was transformed as if it were ordinal or nonmetric.
  • the use of nonmetric IDMDS has been greatly extended in the present embodiment of the tool 100.
  • FIG. 6 shows the eight dimensional source space output for the time series data.
  • the data set S = {S 1 ,...,S 9 } in this example consisted of nine sequences with ten elements each; they are shown in Table 1, FIG. 8. Sequences 1-3 are constant, arithmetic, and Fibonacci sequences respectively. Sequences 4-6 are these same sequences with some error or noise introduced. Sequences 7-9 are the same as 1-3, but the negative 1's indicate that these elements are missing or unknown.
  • the resulting 10 x 10 source matrices were input to PROXSCAL configured as follows: sources- 9, objects- 10, dimension- 8, model- weighted, initial configuration- simplex, conditionality- unconditional, transformations- numerical, rate of convergence- 0.0, number of iterations- 500, and minimum stress- 0.0.
  • FIG. 9 shows dimensions 5 and 6 of the eight dimensional source space output.
  • the sequences are clustered, hence classified, according to whether they are constant, arithmetic, or Fibonacci based. Note that in this projection, the constant sequence and the constant sequence with missing element coincide, therefore only two versions of the constant sequence are visible.
  • Example E Missing value estimation for bridges
  • This example extends the previous result to demonstrate the applicability of the analysis tool to missing value estimation on noisy, real-world data.
  • the data set consisted of nine categories of bridge data from the National Bridge Inventory (NBI) of the Federal Highway Administration. One of these categories, bridge material (steel or concrete), was removed from the database. The goal was to repopulate this missing category using the technique of the presently preferred embodiment to estimate the missing values.
  • One hundred bridges were arbitrarily chosen from the NBI. Each bridge defined an eight dimensional vector of data with components the NBI categories. These vectors were preprocessed as in Example D, creating one hundred 8 x 8 source matrices. The matrices were submitted to PROXSCAL with specifications: sources- 100, objects- 8, dimension- 7, model- weighted, initial configuration- simplex, conditionality- unconditional, transformations- numerical, rate of convergence- 0.0, number of iterations- 500, and minimum stress- 0.00001.
  • the seven dimensional source space output was partially labeled by bridge material — an application of dye-dropping — and analyzed using the following function
  • a bridge was determined to be steel (concrete) if g p (A 1 ,x) > g p (A 2 ,x)
  • Example F Network dimensionality for a 4-node network
  • This example demonstrates the use of stress/energy minima to determine network dimensionality from partial network output data.
  • Dimensionality in this example, means the number of nodes in a network.
  • a four-node network was constructed as follows: generator nodes 1 to 3 were defined by the sine function sin(2x) and two phase-shifted copies of it; node 4 was the sum of nodes 1 through 3. The output of node 4 was sampled at 32 equal intervals between 0 and 2π.
  • the data from node 4 was preprocessed in the manner of Example D: the ij-th entry of the source matrix for node 4 was defined to be the absolute value of the difference between the i-th and j-th samples of the node 4 time series.
  • a second, reference, source matrix was defined using the same preprocessing technique, now applied to thirty two equal interval samples of the function sin(x) for 0 ≤ x ≤ 2π.
  • the resulting 2, 32 x 32 source matrices were input to PROXSCAL with technical specification: sources- 2, objects- 32, dimension- 1 to 6, model- weighted, initial configuration- simplex, conditionality- conditional, transformations- numerical, rate of convergence- 0.0, number of iterations- 500, and minimum stress- 0.0.
  • the dimension specification had a range of values, 1 to 6.
  • the dimension resulting in the lowest stress/energy is the dimensionality of the underlying network.
  • Table 2, FIG. 10, shows dimension and corresponding stress/energy values from the analysis by the tool 100 of the 4-node network. The stress/energy minimum is achieved in dimension 4, hence the tool 100 has correctly determined network dimensionality.
  • Similar experiments were run with more sophisticated dynamical systems and networks. Each of these experiments resulted in the successful determination of system degrees of freedom or dimensionality. These experiments included the determination of the dimensionality of a linear feedback shift register. These devices generate pseudo-random bit streams and are designed to conceal their dimensionality.
  • the illustrated embodiment of the present invention provides a method and apparatus for classifying input data.
  • Input data are received and formed into one or more matrices.
  • the matrices are processed using IDMDS to produce a stress/energy value, a rate of change of stress/energy value, a source space and a common space.
  • An output or back end process uses analytical or visual methods to interpret the source space and the common space.
  • the technique in accordance with the present invention therefore avoids limitations associated with statistical pattern recognition techniques, which are limited to detecting only the expected statistical pattern, and syntactical pattern recognition techniques, which cannot perceive beyond the expected structures.
  • the tool in accordance with the present invention is not limited to the fixed structure of neural pattern recognizers.
  • the technique in accordance with the present invention locates patterns in data without interference from preconceptions of models or users about the data.
  • the pattern recognition method in accordance with the present invention uses energy minimization to allow data to self-organize, causing structure to emerge. Furthermore, the technique in accordance with the present invention determines the dimension of dynamical systems from partial data streams measured on those systems through calculation of stress/energy or rate of change of stress/energy across dimensions.
  • PROXSCAL may be replaced by other IDMDS routines which are commercially available or are proprietary. It is therefore intended in the appended claims to cover all such changes and modifications which fall within the true spirit and scope of the invention.
  • MakeDissMissVal::usage = "MakeDissMissVal[R,form,metric,mv,prnt] creates (no. of columns) source dissimilarity matrices from matrix R with possible missing values.
  • R is assumed to have the form: objects-by-categories.
  • Hybrid::usage = "Hybrid[L] creates hybrid dissimilarity matrices from list of data vectors L."
  • outmat = Flatten[Map[Sym[Augment[#]]&, M], 1];
  • Lp::usage = "Lp[C,p] calculates Minkowski distance with exponent p on sets C of configurations of points."
  • Print["Test data: "]; test = {{1, 2, 3}, {1, 5, 9}}; test // TableForm
  • Mathematica code for step 2 of the invention processing.
  • TMat::usage = "TMat[A] defines the T matrix which normalizes common space Z."
  • Diagonal::usage = "Diagonal[M] creates a diagonal matrix from main diagonal of M."
  • UnDiagonal::usage = "UnDiagonal[M] turns a diagonal matrix into a vector."
  • IDMDSALS::usage = "IDMDSALS[prox_,proxwts_,dim_,epsilon_,iterations_,seed_] computes IDMDS for proximity matrices prox."
  • Begin["`Private`"] IDMDSALS[prox_, proxwts_, dim_, epsilon_, iterations_, seed_] := Module[
  • {X0, Z0, A0} = InitialConfig[numscs, numobj, dim]; Print["Number of sources: ", numscs]; Print["Number of objects: ", numobj];
  • A0norm = Map[Inverse[T0] . #&, Map[DiagonalMatrix, A0]];
  • X0 = Map[Z0norm . #&, A0norm];
  • V = VMat[proxwts];
  • Zconstrain = (1 / numscs) * (Vp . Plus @@ MapThread[#1 . #2 &, {numobj * Xupdate, A0norm}]);
  • Zt = Transpose[Zconstrain];
  • Aconstrain = Map[Inverse[Diagonal[Zt . V . Zconstrain]] . #&, Map[Diagonal, Map[Zt . #&, numobj * Xupdate]]];
  • T = TMat[Map[UnDiagonal, Aconstrain]];
  • Print["The common space coordinates are: ", Z0norm // MatrixForm]; Print["The source space coordinates are: ", Map[MatrixForm, Chop[Aconstrain]]]; {Z0norm, Chop[Aconstrain]}
  • BeginPackage["DistanceMatrix`"] DistanceMatrix::usage = "DistanceMatrix[config] produces a distance matrix from configuration matrix config."
  • DiagMatNorm::usage = "DiagMatNorm[A] normalizes the list of weight vectors A."
  • norm = Table[norm[k], {k, Length[A[[1]]]}];
  • NormalizeG::usage = "NormalizeG[A] gives matrix which normalizes the common space given the list of weight vectors A." Begin["`Private`"]
  • norm = Table[norm[k], {k, Length[A[[1]]]}]; N[DiagonalMatrix[norm]]
  • BMatrix[delta, EM] is part of the Guttman transform." Begin["`Private`"]
  • GuttmanTransform::usage = "The GuttmanTransform[B, X] updates the configuration X through multiplication by the BMatrix B."
  • AGWStress::usage = "The AGWStress[dissimilarity, distance] is the loss function for multidimensional scaling." Begin["`Private`"]
  • NormStress::usage = "NormStress[dissimilarity] normalizes the stress loss function." Begin["`Private`"]
  • Ave::usage = "Ave[M] finds the average of the list of matrices M and produces a list of the same length with every element the average of M."
  • start startlist start configurations.

Abstract

An analyzer/classifier process for data comprises using energy minimization with one or more input matrices. The data to be analyzed/classified is processed by an energy minimization technique such as individual differences multidimensional scaling (IDMDS) (FIG. 1, 120) to produce at least a rate of change of stress/energy. Using the rate of change of stress/energy and possibly other IDMDS output, the data are analyzed or classified through patterns recognized within the data.

Description

ENERGY MINIMIZATION FOR CLASSIFICATION, PATTERN
RECOGNITION, SENSOR FUSION, DATA COMPRESSION, NETWORK RECONSTRUCTION AND SIGNAL PROCESSING
CROSS REFERENCE TO RELATED APPLICATIONS This application claims priority of U.S. provisional application serial number 60/071,592, filed December 29, 1997.
COPYRIGHT NOTICE
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the U.S. Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
APPENDIX
An appendix of computer program source code is included and comprises 22 sheets.
The Appendix is hereby expressly incorporated herein by reference, and contains material which is subject to copyright protection as set forth above.
BACKGROUND OF THE INVENTION
The present invention relates to recognition, analysis, and classification of patterns in data from real world sources, events and processes. Patterns exist throughout the real world. Patterns also exist in the data used to represent or convey or store information about real world objects or events or processes. As information systems process more real world data, there are mounting requirements to build more sophisticated, capable and reliable pattern recognition systems.
Existing pattern recognition systems include statistical, syntactic and neural systems. Each of these systems has certain strengths which lend it to specific applications, and each has problems which limit its effectiveness. Some real world patterns are purely statistical in nature. Statistical and probabilistic pattern recognition works by expecting data to exhibit statistical patterns. Pattern recognition by this method alone is limited: statistical pattern recognizers cannot see beyond the expected statistical pattern, so only the expected statistical pattern can be detected.
Syntactic pattern recognizers function by expecting data to exhibit structure. While syntactic pattern recognizers are an improvement over statistical pattern recognizers, perception is still narrow and the system cannot perceive beyond the expected structures. While some real world patterns are structural in nature, the extraction of structure is unreliable.
Pattern recognition systems that rely upon neural pattern recognizers are an improvement over statistical and syntactic recognizers. Neural recognizers operate by storing training patterns as synaptic weights. Later stimulation retrieves these patterns and classifies the data. However, the fixed structure of neural pattern recognizers limits their scope of recognition. While a neural system can learn on its own, it can only find the patterns that its fixed structure allows it to see. The difficulties with this fixed structure are illustrated by the well-known problem that the number of hidden layers in a neural network strongly affects its ability to learn and generalize. Additionally, neural pattern recognition results are often not reproducible. Neural nets are also sensitive to training order, often require redundant data for training, can be slow learners and sometimes never learn. Most importantly, as with statistical and syntactic pattern recognition systems, neural pattern recognition systems are incapable of discovering truly new knowledge.
Accordingly, there is a need for an improved method and apparatus for pattern recognition, analysis, and classification which is not encumbered by preconceptions about data or models. BRIEF SUMMARY OF THE INVENTION
By way of illustration only, an analyzer/classifier process for data comprises using energy minimization with one or more input matrices. The data to be analyzed/classified is processed by an energy minimization technique such as individual differences multidimensional scaling (IDMDS) to produce at least a rate of change of stress/energy. Using the rate of change of stress/energy and possibly other IDMDS output, the data are analyzed or classified through patterns recognized within the data. The foregoing discussion of one embodiment has been presented only by way of introduction. Nothing in this section should be taken as a limitation on the following claims, which define the scope of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagram illustrating components of an analyzer according to the first embodiment of the invention; and
FIG. 2 through FIG. 10 relate to examples illustrating use of an embodiment of the invention for data classification, pattern recognition, and signal processing.
DETAILED DESCRIPTION OF THE PRESENTLY PREFERRED EMBODIMENTS The method and apparatus in accordance with the present invention provide an analysis tool with many applications. This tool can be used for data classification, pattern recognition, signal processing, sensor fusion, data compression, network reconstruction, and many other purposes. The invention relates to a general method for data analysis based on energy minimization and least energy deformations. The invention uses energy minimization principles to analyze one to many data sets. As used herein, energy is a convenient descriptor for concepts which are handled similarly mathematically. Generally, the physical concept of energy is not intended by use of this term but the more general mathematical concept. Within multiple data sets, individual data sets are characterized by their deformation under least energy merging. This is a contextual characterization which allows the invention to exhibit integrated unsupervised learning and generalization. A number of methods for producing energy minimization and least energy merging and extraction of deformation information have been identified; these include the finite element method (FEM), simulated annealing, and individual differences multidimensional scaling (IDMDS). The presently preferred embodiment of the invention utilizes individual differences multidimensional scaling (IDMDS).
Multidimensional scaling (MDS) is a class of automated, numerical techniques for converting proximity data into geometric data. IDMDS is a generalization of MDS, which converts multiple sources of proximity data into a common geometric configuration space, called the common space, and an associated vector space called the source space. Elements of the source space encode deformations of the common space specific to each source of proximity data. MDS and IDMDS were developed for psychometric research, but are now standard tools in many statistical software packages. MDS and IDMDS are often described as data visualization techniques. This description emphasizes only one aspect of these algorithms.
Broadly, the goal of MDS and IDMDS is to represent proximity data in a low dimensional metric space. This has been expressed mathematically by others (see, for example, de Leeuw, J. and Heiser, W., "Theory of multidimensional scaling," in P. R. Krishnaiah and L. N. Kanal, eds., Handbook of Statistics, Vol. 2, North-Holland, New York, 1982) as follows. Let S be a nonempty finite set and p a real valued function on S × S,

p : S × S → R.

p is a measure of proximity between objects in S. Then the goal of MDS is to construct a mapping f from S into a metric space (X, d),

f : S → X,

such that p(i, j) = p_ij ≈ d(f(i), f(j)), that is, such that the proximity of object i to object j in S is approximated by the distance in X between f(i) and f(j). X is usually assumed to be n dimensional Euclidean space R^n, with n sufficiently small.
IDMDS generalizes MDS by allowing multiple sources. For k = 1, ..., m let S_k be a finite set with proximity measure p_k; then IDMDS constructs maps

f_k : S_k → X

such that p_k(i, j) = p_ij^k ≈ d(f_k(i), f_k(j)), for k = 1, ..., m.
Intuitively, IDMDS is a method for representing many points of view. The different proximities pk can be viewed as giving the proximity perceptions of different judges. IDMDS accommodates these different points of view by finding different maps fk for each judge. These individual maps, or their image configurations, are deformations of a common configuration space whose interpoint distances represent the common or merged point of view.
MDS and IDMDS can equivalently be described in terms of transformation functions. Let P = (p_ij) be the matrix defined by the proximity p on S × S. Then MDS defines a transformation function

f : p_ij ↦ d_ij(X),

where d_ij(X) = d(f(i), f(j)), with f also denoting the induced mapping from S to X. Here, by abuse of notation, X = f(S) also denotes the image of S under f. The transformation function f should be optimal in the sense that the distances f(p_ij) give the best approximation to the proximities p_ij. This optimization criterion is described in more detail below. IDMDS is similarly re-expressed; the single transformation f is replaced by m transformations f_k. Note, these f_k need not be distinct. In the following, the image of S_k under f_k will be written X_k.
MDS and IDMDS can be further broken down into so-called metric and nonmetric versions. In metric MDS or IDMDS, the transformations f (f_k) are parametric functions of the proximities p_ij (p_ij^k). Nonmetric MDS or IDMDS generalizes the metric approach by allowing arbitrary admissible transformations f (f_k), where admissible means the association between proximities and transformed proximities (also called disparities in this context) is weakly monotone:

p_ij < p_kl implies f(p_ij) ≤ f(p_kl).
Beyond the metric-nonmetric distinction, algorithms for MDS and IDMDS are distinguished by their optimization criteria and numerical optimization routines. One particularly elegant and publicly available IDMDS algorithm is PROXSCAL. See Commandeur, J. and Heiser, W., "Mathematical derivations in the proximity scaling (PROXSCAL) of symmetric data matrices," Tech. report no. RR-93-03, Department of Data Theory, Leiden University, Leiden, The Netherlands. PROXSCAL is a least squares, constrained majorization algorithm for IDMDS. We now summarize this algorithm, following closely the above reference.
PROXSCAL is a least squares approach to IDMDS which minimizes the objective function

σ(f_1, ..., f_m, X_1, ..., X_m) = Σ_{k=1}^m Σ_{i<j} w_ijk (f_k(p_ij^k) − d_ij(X_k))².

σ is called the stress and measures the goodness-of-fit of the configuration distances d_ij(X_k) to the transformed proximities f_k(p_ij^k). This is the most general form for the objective function. MDS can be interpreted as an energy minimization process and stress can be interpreted as an energy functional. The w_ijk are proximity weights. For simplicity, it is assumed in what follows that w_ijk = 1 for all i, j, k.
The PROXSCAL majorization algorithm for MDS with transformations is summarized as follows.

1. Choose a (possibly random) initial configuration X^0.
2. Find optimal transformations f(p_ij) for fixed distances d_ij(X^0).
3. Compute the initial stress

σ(f, X^0) = Σ_{i<j} (f(p_ij) − d_ij(X^0))².

4. Compute the Guttman transform X̄ of X^0 with the transformed proximities f(p_ij). This is the majorization step.
5. Replace X^0 with X̄ and find optimal transformations f(p_ij) for fixed distances d_ij(X^0).
6. Compute σ(f, X^0).
7. Go to step 4 if the difference between the current and previous stress is not less than ε, some previously defined number. Stop otherwise.
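The majorization loop above can be sketched in a few lines of Python. This is an illustrative simplification, not the PROXSCAL implementation: it performs ratio MDS, taking the optimal transformation f to be the identity and all proximity weights equal to one; the function names (`distances`, `guttman_update`, `mds`) are ours.

```python
import numpy as np

def distances(X):
    # Euclidean distance matrix of a configuration X (n points x dim)
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2))

def stress(delta, X):
    # raw stress: sum over i < j of (delta_ij - d_ij(X))^2
    D = distances(X)
    iu = np.triu_indices_from(D, k=1)
    return ((delta[iu] - D[iu]) ** 2).sum()

def guttman_update(delta, X):
    # majorization step: X <- (1/n) B(X) X
    n = len(X)
    D = distances(X)
    with np.errstate(divide="ignore", invalid="ignore"):
        B = np.where(D > 0, -delta / D, 0.0)
    np.fill_diagonal(B, 0.0)
    np.fill_diagonal(B, -B.sum(axis=1))  # b_ii = -sum of off-diagonal row
    return B @ X / n

def mds(delta, dim=2, eps=1e-9, max_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((len(delta), dim))  # step 1: random start
    s_old = stress(delta, X)
    for _ in range(max_iter):
        X = guttman_update(delta, X)            # step 4
        s_new = stress(delta, X)                # step 6
        if s_old - s_new < eps:                 # step 7: stopping rule
            break
        s_old = s_new
    return X, s_new
```

Majorization guarantees the stress never increases from one iteration to the next, which is the key property of the Guttman transform.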
For multiple sources of proximity data, restrictions are imposed on the configurations X_k associated to each source of proximity data in the form of the constraint equation X_k = Z W_k. This equation defines a common configuration space Z and diagonal weight matrices W_k. Z represents a merged or common version of the input sources, while the W_k define the deformation of the common space required to produce the individual configurations X_k. The vectors defined by diag(W_k), the diagonal entries of the weight matrices W_k, form the source space W associated to the common space Z.
The PROXSCAL constrained majorization algorithm for IDMDS with transformations is summarized as follows. To simplify the discussion, so-called unconditional IDMDS is described. This means the m transformation functions are the same: f_1 = f_2 = ··· = f_m.

1. Choose constrained initial configurations X_k^0.
2. Find optimal transformations f(p_ij^k) for fixed distances d_ij(X_k^0).
3. Compute the initial stress

σ(f, X_1^0, ..., X_m^0) = Σ_{k=1}^m Σ_{i<j} (f(p_ij^k) − d_ij(X_k^0))².

4. Compute unconstrained updates X̄_k of X_k^0 using the Guttman transform with transformed proximities f(p_ij^k). This is the unconstrained majorization step.
5. Solve the metric projection problem by finding X_k minimizing

h(X_1, ..., X_m) = Σ_{k=1}^m tr((X_k − X̄_k)'(X_k − X̄_k))

subject to the constraints X_k = Z W_k. This step constrains the updated configurations from step 4.
6. Replace X_k^0 with X_k and find optimal transformations f(p_ij^k) for fixed distances d_ij(X_k^0).
7. Compute σ(f, X_1^0, ..., X_m^0).
8. Go to step 4 if the difference between the current and previous stress is not less than ε, some previously defined number. Stop otherwise.

Here, tr(A) and A' denote, respectively, the trace and transpose of matrix A. It should be pointed out that other IDMDS routines do not contain an explicit constraint condition. For example, ALSCAL (see Takane, Y., Young, F., and de Leeuw, J., "Nonmetric individual differences multidimensional scaling: an alternating least squares method with optimal scaling features," Psychometrika, Vol. 42, 1977) minimizes a different energy expression (sstress) over transformations, configurations, and weighted Euclidean metrics. ALSCAL also produces common and source spaces, but these spaces are computed through alternating least squares without explicit use of constraints. Either form of IDMDS can be used in the present invention.
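Step 5, the fit to the constraint X_k = Z W_k, can be approximated by alternating least squares over Z and the diagonal weights. The sketch below is a simplified, unweighted illustration, not PROXSCAL's actual projection (it ignores the V weight matrix and the transformation step); the function and variable names are ours.

```python
import numpy as np

def project_constraint(Xbars, n_iter=50):
    """Least-squares fit Xbar_k ~ Z @ diag(w_k) by alternating updates.

    Xbars: list of m unconstrained updates, each an (n x dim) array.
    Returns the common space Z and the (m x dim) source space weights W.
    """
    m = len(Xbars)
    n, dim = Xbars[0].shape
    Z = np.mean(Xbars, axis=0)   # initial common space: plain average
    W = np.ones((m, dim))        # rows are the weight vectors diag(W_k)
    for _ in range(n_iter):
        # with Z fixed, each weight is a 1-D least-squares coefficient
        for k in range(m):
            for s in range(dim):
                zs = Z[:, s]
                W[k, s] = zs @ Xbars[k][:, s] / (zs @ zs)
        # with the weights fixed, each column of Z separates
        for s in range(dim):
            num = sum(W[k, s] * Xbars[k][:, s] for k in range(m))
            den = sum(W[k, s] ** 2 for k in range(m))
            Z[:, s] = num / den
    return Z, W
```

Note the scale indeterminacy: Z scaled by a constant and W by its inverse fit equally well, which is why PROXSCAL normalizes the common space.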
MDS and IDMDS have proven useful for many kinds of analyses. However, it is believed that prior utilizations of these techniques have not extended the use of these techniques to further possible uses for which MDS and IDMDS have particular utility and provide exceptional results. Accordingly, one benefit of the present invention is to incorporate MDS or IDMDS as part of a platform in which aspects of these techniques are extended. A further benefit is to provide an analysis technique, part of which uses IDMDS, that has utility as an analytic engine applicable to problems in classification, pattern recognition, signal processing, sensor fusion, and data compression, as well as many other kinds of data analytic applications.
Referring now to FIG. 1, it illustrates an operational block diagram of a data analysis/classifier tool 100. The least energy deformation analyzer/classifier is a three-step process. Step 110 is a front end for data transformation. Step 120 is a process step implementing energy minimization and deformation computations — in the presently preferred embodiment, this process step is implemented through the IDMDS algorithm. Step 130 is a back end which interprets or decodes the output of the process step 120. These three steps are illustrated in FIG. 1.
It is to be understood that the steps forming the tool 100 may be implemented in a computer usable medium or in a computer system as computer executable software code. In such an embodiment, step 110 may be configured as first code, step 120 may be configured as second code and step 130 may be configured as third code, with each code comprising a plurality of machine readable steps or operations for performing the specified operations. While step 110, step 120 and step 130 have been shown as three separate elements, their functionality can be combined and/or distributed. It is to be further understood that "medium" is intended to broadly include any suitable medium, including analog or digital, hardware or software, now in use or developed in the future.
Step 110 of the tool 100 is the transformation of the data into matrix form. The only constraint on this transformation for the illustrated embodiment is that the resulting matrices be square. The type of transformation used depends on the data to be processed and the goal of the analysis. In particular, it is not required that the matrices be proximity matrices in the traditional sense associated with IDMDS. For example, time series and other sequential data may be transformed into source matrices through straight substitution into entries of symmetric matrices of sufficient dimensionality (this transformation will be discussed in more detail in an example below). Time series or other signal processing data may also be Fourier or otherwise analyzed and then transformed to matrix form.
Step 120 of the tool 100 implements energy minimization and extraction of deformation information through IDMDS. In the IDMDS embodiment of the tool 100, the stress function σ defines an energy functional over configurations and transformations. The configurations are further restricted to those which satisfy the constraint equations X_k = Z W_k. For each configuration X_k, the weight vectors diag(W_k) are the contextual signature, with respect to the common space, of the k-th input source. Interpretation of σ as an energy functional is fundamental; it greatly expands the applicability of MDS as an energy minimization engine for data classification and analysis.
Step 130 consists of both visual and analytic methods for decoding and interpreting the source space W from step 120. Unlike traditional applications of IDMDS, tool 100 often produces high dimensional output. Among other things, this makes visual interpretation and decoding of the source space problematic. Possible analytic methods for understanding the high dimensional spaces include, but are not limited to, linear programming techniques for hyperplane and decision surface estimation, cluster analysis techniques, and generalized gravitational model computations. A source space dye-dropping or tracer technique has been developed for both source space visualization and analytic postprocessing. Step 130 may also consist in recording stress/energy, or the rate of change of stress/energy, over multiple dimensions. The graph of energy (rate of change of stress/energy) against dimension can be used to determine network and dynamical system dimensionality. The graph of stress/energy against dimensionality is traditionally called a scree plot. The use and purpose of the scree plot is greatly extended in the present embodiment of the tool 100.
Let S = {S_k} be a collection of data sets or sources S_k for k = 1, ..., m.
Step 110 of the tool 100 converts each S_k ∈ S to matrix form M(S_k) where M(S_k) is a p dimensional real hollow symmetric matrix. Hollow means the diagonal entries of M(S_k) are zero. As indicated above, M(S_k) need not be symmetric or hollow, but for simplicity of exposition these additional restrictions are adopted. Note also that the matrix dimensionality p is a function of the data S and the goal of the analysis. Since M(S_k) is hollow symmetric, it can be interpreted and processed in IDMDS as a proximity (dissimilarity) matrix. Step 110 can be represented by the map

M : S → H_p(R),
S_k ↦ M(S_k)

where H_p(R) is the set of p dimensional hollow real symmetric matrices. The precise rule for computing M depends on the type of data in S, and the purpose of the analysis. For example, if S contains time series data, then M might entail the straightforward entry-wise encoding mentioned above. If S consists of optical character recognition data, or some other kind of geometric data, then M(S_k) may be a standard distance matrix whose ij-th entry is the Euclidean distance between "on" pixels i and j. M can also be combined with other transformations to form the composite (M ∘ F)(S_k), where F, for example, is a fast Fourier transform (FFT) on signal data S_k. To make this more concrete, in the examples below M will be explicitly calculated in a number of different ways. It should also be pointed out that for certain data collections S it is possible to analyze the conjugate or transpose S' of S. For instance, in data mining applications, it is useful to transpose records (clients) and fields (client attributes) thus allowing analysis of attributes as well as clients. The mapping M is simply applied to the transposed data.
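For the geometric case just described, the map M reduces to a Euclidean distance matrix over the "on" pixels of an image. A minimal illustrative sketch (the function name is ours):

```python
import numpy as np

def pixel_distance_matrix(image):
    """M(S_k) for geometric data: pairwise Euclidean distances between
    'on' pixels, yielding a hollow symmetric source matrix."""
    coords = np.argwhere(np.asarray(image) > 0)   # (row, col) of each on pixel
    diff = coords[:, None, :] - coords[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2).astype(float))
```

The result is hollow (zero diagonal) and symmetric, so it can be fed directly to an IDMDS routine as a dissimilarity matrix.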
Step 120 of the presently preferred embodiment of the tool 100 is the application of IDMDS to the set of input matrices M(S) = {M(S_k)}. Each M(S_k) ∈ M(S) is an input source for IDMDS. As described above, the IDMDS output is a common space Z ⊂ R^n and a source space W. The dimensionality n of these spaces depends on the input data S and the goal of the analysis. For signal data, it is often useful to set n = p − 1 or even n = |S_k| where |S_k| denotes the cardinality of S_k. For data compression, low dimensional output spaces are essential. In the case of network reconstruction, system dimensionality is discovered by the invention itself. IDMDS can be thought of as a constrained energy minimization process.
As discussed above, the stress σ is an energy functional defined over transformations and configurations in R^n; the constraints are defined by the constraint equation X_k = Z W_k. IDMDS attempts to find the lowest stress or energy configurations X_k which also satisfy the constraint equation. (MDS is the special case when each W_k = I, the identity matrix.) Configurations X_k most similar to the source matrices M(S_k) have the lowest energy. At the same time, each X_k is required to match the common space Z up to the deformation defined by the weight matrices W_k. The common space serves as a characteristic, or reference, object. Differences between individual configurations are expressed in terms of this characteristic object with these differences encoded in the weight matrices W_k. The deformation information contained in the weight matrices, or, equivalently, in the weight vectors defined by their diagonal entries, becomes the signature of the configurations X_k and hence the sources S_k (through M(S_k)). The source space may be thought of as a signature classification space.
The weight space signatures are contextual; they are defined with respect to the reference object Z. The contextual nature of the source deformation signature is fundamental. As the polygon classification example below will show, Z-contextuality of the signature allows the tool 100 to display integrated unsupervised machine learning and generalization. The analyzer/classifier learns seamlessly and invisibly. Z-contextuality also allows the tool 100 to operate without a priori data models. The analyzer/classifier constructs its own model of the data, the common space Z.
Step 130, the back end of the tool 100, decodes and interprets the source or classification space output W from IDMDS. Since this output can be high dimensional, visualization techniques must be supplemented by analytic methods of interpretation. A dye-dropping or tracer technique has been developed for both visual and analytic postprocessing. This entails differential marking or coloring of source space output. The specification of the dye-dropping is contingent upon the data and overall analysis goals. For example, dye-dropping may be two-color or binary allowing separating hyperplanes to be visually or analytically determined. For an analytic approach to separating hyperplanes using binary dye-dropping see Bosch, R. and Smith, J., "Separating hyperplanes and the authorship of the disputed federalist papers," American Mathematical Monthly, Vol. 105, 1998. Discrete dye-dropping allows the definition of generalized gravitational clustering measures of the form
Σ_{y≠x} χ_A(y) exp(p · d(x, y)) / Σ_{y≠x} exp(p · d(x, y)).
Here, A denotes a subset of W (indicated by dye-dropping), χ_A(x) is the characteristic function on A, d(·,·) is a distance function, and p ∈ R. Such measures may be useful for estimating missing values in databases. Dye-dropping can be defined continuously, as well, producing a kind of height function on W. This allows the definition of decision surfaces or volumetric discriminators. The source space W is also analyzable using standard cluster analytic techniques. The precise clustering metric depends on the specifications and conditions of the IDMDS analysis in question.
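Under one plausible reading of the measure above (the garbled original leaves the argument of the exponential ambiguous; p · d(x, y) is assumed here, with p < 0 giving distance decay), a discrete dye-dropping score can be computed as follows. All names are illustrative:

```python
import math

def gravitational_measure(x, points, dyed, p=-1.0):
    """Share of exponential 'mass' near x carried by dyed points.

    points: list of coordinate tuples in the source space W.
    dyed:   set of indices belonging to the dyed subset A.
    p:      real exponent; negative p weights nearby points most heavily.
    """
    num = den = 0.0
    for i, y in enumerate(points):
        if y == x:          # exclude x itself from both sums
            continue
        w = math.exp(p * math.dist(x, y))
        den += w
        if i in dyed:       # chi_A(y) = 1 exactly when y is dyed
            num += w
    return num / den
```

With a strongly negative p the score approaches a nearest-neighbor vote, which is how such a measure could estimate a missing categorical value.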
Finally, as mentioned earlier, the stress/energy and rate of change of stress/energy can be used as postprocessing tools. Minima or kinks in a plot of energy, or the rate of change of energy, over dimension can be used to determine the dimensionality of complex networks and general dynamical systems for which only partial output information is available. In fact, this technique often allows dimensionality to be inferred from only a single data stream of time series of observed data. A number of examples are presented below to illustrate the method and apparatus in accordance with the present invention. These examples are illustrative only and in no way limit the scope of the method or apparatus.

Example A: Classification of regular polygons

The goal of this experiment was to classify a set of regular polygons. The collection S = {S_1, ..., S_16} comprises data sets S_1 - S_4, equilateral triangles; S_5 - S_8, squares; S_9 - S_12, pentagons; and S_13 - S_16, hexagons. Within each subset of distinct polygons, the size of the figures is increasing with the subscript. The perimeter of each polygon S_k was divided into 60 equal segments with the segment endpoints ordered clockwise from a fixed initial endpoint. A turtle application was then applied to each polygon to compute the Euclidean distance from each segment endpoint to every other segment endpoint (initial endpoint included). Let x_i^{S_k} denote the i-th endpoint of polygon S_k; then the mapping M is defined by
M : S → H_60(R),

S_k ↦ [d_1^{S_k} | d_2^{S_k} | ··· | d_60^{S_k}],

where the columns

d_i^{S_k} = (d(x_i^{S_k}, x_1^{S_k}), ..., d(x_i^{S_k}, x_60^{S_k}))'.
The individual column vectors d_i^{S_k} have intrinsic interest. When plotted as functions of arc length they represent a geometric signal which contains both frequency and spatial information.
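The turtle computation described above — sampling 60 equally spaced endpoints along the perimeter and forming the 60 × 60 distance matrix — might be sketched as follows. This is a reconstruction for illustration, not the appendix code; the function names are ours.

```python
import numpy as np

def sample_perimeter(vertices, n=60):
    """n equally spaced points along a polygon's perimeter, starting at
    the first vertex and walking the edges in the given order."""
    verts = np.asarray(vertices, dtype=float)
    closed = np.vstack([verts, verts[:1]])        # repeat first vertex
    seg = np.diff(closed, axis=0)
    seg_len = np.linalg.norm(seg, axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])
    total = cum[-1]
    targets = np.arange(n) * total / n            # equal arc-length steps
    pts = []
    for t in targets:
        i = np.searchsorted(cum, t, side="right") - 1
        frac = (t - cum[i]) / seg_len[i]
        pts.append(closed[i] + frac * seg[i])
    return np.array(pts)

def perimeter_distance_matrix(vertices, n=60):
    """Hollow symmetric source matrix of endpoint-to-endpoint distances."""
    pts = sample_perimeter(vertices, n)
    diff = pts[:, None, :] - pts[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2))

# unit square -> a 60 x 60 source matrix ready for IDMDS
M = perimeter_distance_matrix([(0, 0), (1, 0), (1, 1), (0, 1)], 60)
```

Each row of the resulting matrix is one of the column vectors d_i^{S_k} discussed above.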
The 16, 60 x 60 distance matrices were input into a publicly distributed version of PROXSCAL. PROXSCAL was run with the following technical specifications: sources- 16, objects- 60, dimension- 4, model- weighted, initial configuration- Torgerson, conditionality- unconditional, transformations- numerical, rate of convergence- 0.0, number of iterations- 500, and minimum stress- 0.0.
FIG. 2 and FIG. 3 show the four dimensional common and source space output. The common space configuration appears to be a multifaceted representation of the original polygons. It forms a simple closed path in four dimensions which, when viewed from different angles, or, what is essentially the same thing, when deformed by the weight matrices, produces a best, in the sense of minimal energy, representation of each of the two dimensional polygonal figures. The most successful such representation appears to be that of the triangle projected onto the plane determined by dimensions 2 and 4.
In the source space, the different types of polygons are arranged, and hence, classified, along different radii. Magnitudes within each such radial classification indicate polygon size or scale with the smaller polygons located nearer the origin.
The contextual nature of the polygon classification is embodied in the common space configuration. Intuitively, this configuration looks like a single, carefully bent wire loop. When viewed from different angles, as encoded by the source space vectors, this loop of wire looks variously like a triangle, a square, a pentagon, or a hexagon.
Example B: Classification of non-regular polygons

The polygons in Example A were regular. In this example, irregular polygons S = {S_1, ..., S_6} are considered, where S_1 - S_3 are triangles and S_4 - S_6 rectangles. The perimeter of each figure S_k was divided into 30 equal segments with the preprocessing transformation M computed as in Example A. This produced 6, 30 x 30 source matrices which were input into PROXSCAL with technical specifications the same as those above except for the number of sources, 6, and objects, 30.
FIG. 4 and FIG. 5 show the three dimensional common and source space outputs. The common space configuration, again, has a "holographic" or faceted quality; when illuminated from different angles, it represents each of the polygonal figures. As before, this change of viewpoint is encoded in the source space weight vectors. While the weight vectors encoding triangles and rectangles are no longer radially arranged, they can clearly be separated by a hyperplane and are thus accurately classified by the analysis tool as presently embodied.
It is notable that two dimensional IDMDS outputs were not sufficient to classify these polygons in the sense that source space separating hyperplanes did not exist in two dimensions.

Example C: Time series data
This example relates to signal processing and demonstrates the analysis tool's invariance with respect to phase and frequency modification of time series data. It also demonstrates an entry-wise approach to computing the preprocessing transformation M.
The set S = {S_1, ..., S_12} consisted of sine, square, and sawtooth waveforms.
Four versions of each waveform were included, each modified for frequency and phase content. Indices 1-4 indicate sine, 5-8 square, and 9-12 sawtooth frequency and phase modified waveforms. All signals had unit amplitude and were sampled at 32 equal intervals x_i for 0 < x ≤ 2π.
Each time series S_k was mapped into a symmetric matrix as follows. First, an "empty" nine dimensional, lower triangular matrix T_k = (t_ij^k) = T(S_k) was created. "Empty" meant that T_k had no entries below the diagonal and zeros everywhere else. Nine dimensions were chosen since nine is the smallest positive integer m satisfying the inequality m(m − 1)/2 ≥ 32, and m(m − 1)/2 is the number of entries below the diagonal in an m dimensional matrix. The empty entries in T_k were then filled in, from upper left to lower right, column by column, by reading in the time series data from S_k. Explicitly, the first sample in S_k was written in the second row, first column of T_k; the second sample in S_k was written in the third row, first column of T_k, and so on. Since there were only 32 signal samples for 36 empty slots in T_k, the four remaining entries were designated missing by writing -2 in these positions (these entries are then ignored when calculating the stress). Finally, a hollow symmetric matrix was defined by setting

M(S_k) = T_k + T_k'.
This preprocessing produced 12, 9 x 9 source matrices which were input to PROXSCAL with the following technical specifications: sources- 12, objects- 9, dimension- 8, model- weighted, initial configuration- Torgerson, conditionality- unconditional, transformations- ordinal, approach to ties- secondary, rate of convergence- 0.0, number of iterations- 500, and minimum stress- 0.0. Note that the data, while metric or numeric, was transformed as if it were ordinal or nonmetric. The use of nonmetric IDMDS has been greatly extended in the present embodiment of the tool 100.
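The entry-wise encoding just described can be sketched as follows. The function name and the NumPy representation are ours, but the column-by-column fill order, the 9 x 9 dimensionality, and the missing-value code -2 follow the text:

```python
import numpy as np

MISSING = -2.0  # sentinel for unused slots, ignored in the stress

def series_to_matrix(samples, m=9):
    """Pack a time series into an m x m hollow symmetric matrix by filling
    the below-diagonal slots column by column, top to bottom, then padding
    leftover slots with the missing-value code and symmetrizing."""
    assert m * (m - 1) // 2 >= len(samples), "matrix too small for the series"
    T = np.zeros((m, m))
    # below-diagonal slots in column-major order: (1,0), (2,0), ..., (2,1), ...
    slots = [(i, j) for j in range(m) for i in range(j + 1, m)]
    for (i, j), v in zip(slots, samples):
        T[i, j] = v
    for (i, j) in slots[len(samples):]:   # slots beyond the data are missing
        T[i, j] = MISSING
    return T + T.T                        # M(S_k) = T_k + T_k'
```

Applied to a 32-sample signal this yields one of the 12 source matrices; the diagonal is zero, so the result is a valid hollow symmetric proximity matrix.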
FIG. 6 shows the eight dimensional source space output for the time series data. The projection in dimensions seven and eight, as detailed in FIG. 7, shows the input signals are separated by hyperplanes into sine, square, and sawtooth waveform classes independent of the frequency or phase content of the signals.
Example D: Sequences. Fibonacci, etc.
The data set S = {S1,...,S9} in this example consisted of nine sequences with ten elements each; they are shown in Table 1, FIG. 8. Sequences 1-3 are constant, arithmetic, and Fibonacci sequences respectively. Sequences 4-6 are these same sequences with some error or noise introduced. Sequences 7-9 are the same as 1-3, but the negative 1's indicate that these elements are missing or unknown.
The nine source matrices M(Sk) = (m_ij^k) were defined by

m_ij^k = | s_i^k - s_j^k |,

the absolute value of the difference of the i-th and j-th elements in sequence Sk.
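This entry-wise definition is what the appendix's MakeDissMat package computes; a one-line Python sketch (illustrative naming):

```python
def abs_diff_matrix(seq):
    """Hollow symmetric source matrix with entries m_ij = |s_i - s_j|."""
    return [[abs(a - b) for b in seq] for a in seq]
```

Note that a constant sequence yields the zero matrix, which is why, up to noise, all variants of the constant sequence map to essentially the same source matrix.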
The resulting 10 x 10 source matrices were input to PROXSCAL configured as follows: sources- 9, objects- 10, dimension- 8, model- weighted, initial configuration- simplex, conditionality- unconditional, transformations- numerical, rate of convergence- 0.0, number of iterations- 500, and minimum stress- 0.0.
FIG. 9 shows dimensions 5 and 6 of the eight dimensional source space output. The sequences are clustered, hence classified, according to whether they are constant, arithmetic, or Fibonacci based. Note that in this projection, the constant sequence and the constant sequence with missing element coincide, therefore only two versions of the constant sequence are visible. This result demonstrates that the tool 100 of the presently preferred embodiment can function on noisy, error containing, or partially known sequential data sets.
Example E: Missing value estimation for bridges
This example extends the previous result to demonstrate the applicability of the analysis tool to missing value estimation on noisy, real-world data. The data set consisted of nine categories of bridge data from the National Bridge Inventory (NBI) of the Federal Highway Administration. One of these categories, bridge material (steel or concrete), was removed from the database. The goal was to repopulate this missing category using the technique of the presently preferred embodiment to estimate the missing values.
One hundred bridges were arbitrarily chosen from the NBI. Each bridge defined an eight dimensional vector of data whose components were the NBI categories. These vectors were preprocessed as in Example D, creating one hundred 8 x 8 source matrices. The matrices were submitted to PROXSCAL with specifications: sources- 100, objects- 8, dimension- 7, model- weighted, initial configuration- simplex, conditionality- unconditional, transformations- numerical, rate of convergence- 0.0, number of iterations- 500, and minimum stress- 0.00001.
The seven dimensional source space output was partially labeled by bridge material (an application of dye-dropping) and analyzed using the following function

gp(A_i, x) = ( Σ_{y≠x} χ_{A_i}(y) d(x, y)^p ) / ( Σ_{y≠x} d(x, y)^p ),

where p is an empirically determined negative number, d(x, y) is Euclidean distance on the source space, and χ_{A_i} is the characteristic function on material set A_i, i = 1, 2, where A_1 is steel and A_2 is concrete. (For the bridge data, no two bridges had the same source space coordinates, hence gp was well-defined.) A bridge was determined to be steel (concrete) if gp(A_1, x) > gp(A_2, x) (respectively, gp(A_1, x) < gp(A_2, x)). The result was indeterminate in case of equality.
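The decision rule can be sketched in Python (the labeled-point representation and function names here are illustrative; the patent's own Mathematica versions appear as g1-g3 in the postprocessing appendix):

```python
import math

def gp(x, labeled, p=-2.0):
    """Inverse-distance-weighted share of each label near point x.
    labeled: list of (coords, label) pairs; p is negative, so nearby
    labeled points dominate the score."""
    num, den = {}, 0.0
    for y, label in labeled:
        dist = math.dist(x, y)
        if dist == 0.0:
            continue  # x itself; source space coordinates assumed distinct
        w = dist ** p
        num[label] = num.get(label, 0.0) + w
        den += w
    return {lab: v / den for lab, v in num.items()}

def classify(x, labeled, p=-2.0):
    """Label with the largest gp score; None when the result is a tie."""
    scores = gp(x, labeled, p)
    top = max(scores.values())
    winners = [lab for lab, s in scores.items() if s == top]
    return winners[0] if len(winners) == 1 else None
```

Because the scores for all labels share one denominator, they sum to 1 and behave like a soft class membership for the unlabeled point.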
The tool 100 illustrated in FIG. 1 estimated bridge construction material with 90 percent accuracy.
Example F: Network dimensionality for a 4-node network
This example demonstrates the use of stress/energy minima to determine network dimensionality from partial network output data. Dimensionality, in this example, means the number of nodes in a network.
A four-node network was constructed as follows: generator nodes 1 to 3 were defined by phase-shifted sine functions sin(2x), sin(2x + φ1), and sin(2x + φ2), for fixed phase offsets φ1 and φ2; node 4 was the sum of nodes 1 through 3. The output of node 4 was sampled at 32 equal intervals between 0 and 2π.
The data from node 4 was preprocessed in the manner of Example D: the ij-th entry of the source matrix for node 4 was defined to be the absolute value of the difference between the i-th and j-th samples of the node 4 time series. A second, reference, source matrix was defined using the same preprocessing technique, now applied to thirty-two equal interval samples of the function sin(x) for 0 ≤ x ≤ 2π. The resulting two 32 x 32 source matrices were input to PROXSCAL with technical specification: sources- 2, objects- 32, dimension- 1 to 6, model- weighted, initial configuration- simplex, conditionality- conditional, transformations- numerical, rate of convergence- 0.0, number of iterations- 500, and minimum stress- 0.0. The dimension specification had a range of values, 1 to 6. The dimension resulting in the lowest stress/energy is the dimensionality of the underlying network. Table 2, FIG. 10, shows dimension and corresponding stress/energy values from the analysis by the tool 100 of the 4-node network. The stress/energy minimum is achieved in dimension 4, hence the tool 100 has correctly determined network dimensionality. Similar experiments were run with more sophisticated dynamical systems and networks. Each of these experiments resulted in the successful determination of system degrees of freedom or dimensionality. These experiments included the determination of the dimensionality of a linear feedback shift register. These devices generate pseudo-random bit streams and are designed to conceal their dimensionality.
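The dimension sweep itself can be sketched with a minimal SMACOF-style minimizer (iterated Guttman transform, unit weights, a single source, random restarts). This is a deliberate simplification of the weighted individual-differences model PROXSCAL uses, intended only to show stress/energy being compared across candidate dimensions:

```python
import math
import random

def smacof(delta, dim, iters=300, seed=0):
    """Metric MDS by iterated Guttman transform with unit weights.
    Returns (configuration, raw stress) for one random start."""
    rng = random.Random(seed)
    n = len(delta)
    X = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        D = [[math.dist(X[i], X[j]) for j in range(n)] for i in range(n)]
        # B(X): off-diagonal -delta_ij/d_ij, diagonal = -row sum
        B = [[-delta[i][j] / D[i][j] if i != j and D[i][j] > 0 else 0.0
              for j in range(n)] for i in range(n)]
        for i in range(n):
            B[i][i] = -sum(B[i][j] for j in range(n) if j != i)
        # Guttman update: X <- (1/n) B(X) X
        X = [[sum(B[i][k] * X[k][a] for k in range(n)) / n
              for a in range(dim)] for i in range(n)]
    D = [[math.dist(X[i], X[j]) for j in range(n)] for i in range(n)]
    stress = sum((delta[i][j] - D[i][j]) ** 2
                 for i in range(n) for j in range(i + 1, n))
    return X, stress

def best_dimension(delta, dims, restarts=5):
    """Embed delta in each candidate dimension, keep the best restart,
    and report the dimension achieving the lowest stress/energy."""
    best = {d: min(smacof(delta, d, seed=s)[1] for s in range(restarts))
            for d in dims}
    return min(best, key=best.get), best
```

On dissimilarities derived from data that genuinely lives in k dimensions, the stress curve drops sharply up to k and flattens thereafter, mirroring the minimum at dimension 4 reported in Table 2.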
From the foregoing, it can be seen that the illustrated embodiment of the present invention provides a method and apparatus for classifying input data.
Input data are received and formed into one or more matrices. The matrices are processed using IDMDS to produce a stress/energy value, a rate of change of stress/energy value, a source space and a common space. An output or back end process uses analytical or visual methods to interpret the source space and the common space. The technique in accordance with the present invention therefore avoids limitations associated with statistical pattern recognition techniques, which are limited to detecting only the expected statistical pattern, and syntactical pattern recognition techniques, which cannot perceive beyond the expected structures. Further, the tool in accordance with the present invention is not limited to the fixed structure of neural pattern recognizers. The technique in accordance with the present invention locates patterns in data without interference from preconceptions of models or users about the data. The pattern recognition method in accordance with the present invention uses energy minimization to allow data to self-organize, causing structure to emerge. Furthermore, the technique in accordance with the present invention determines the dimension of dynamical systems from partial data streams measured on those systems through calculation of stress/energy or rate of change of stress/energy across dimensions.
While a particular embodiment of the present invention has been shown and described, modifications may be made. For example, PROXSCAL may be replaced by other IDMDS routines which are commercially available or are proprietary. It is therefore intended in the appended claims to cover all such changes and modifications which fall within the true spirit and scope of the invention.
APPENDIX
ENERGY MINIMIZATION
FOR
CLASSIFICATION, PATTERN RECOGNITION, SENSOR FUSION, DATA COMPRESSION, NETWORK RECONSTRUCTION, AND SIGNAL PROCESSING
SOURCE CODE FOR PRESENTLY PREFERRED EMBODIMENT OF THE INVENTION
(c) 1998 Abel Wolman and Jeff Glickman
STEP 1. PREPROCESSING SOURCE CODE
Mathematica code for step 1 of the invention: preprocessing.
Mathematica packages:

<< LinearAlgebra`MatrixManipulation`

Packages for preprocessing:

(* Creates dissimilarity matrix from list of data vectors. No missing values. *)
BeginPackage["MakeDissMat`"]
MakeDissMat::usage = "MakeDissMat[M] creates dissimilarity matrices from list of lists M."
Begin["`Private`"]
MakeDissMat[M_] := Module[{DissMat},
  DissMat[L_] := Module[{len = Length[L]},
    Table[Abs[L[[i]] - L[[j]]], {i, len}, {j, len}]
  ];
  If[MatrixQ[M],
    Print["Number of sources: ", Length[M]];
    Print["Number of objects: ", Length[M[[1]]]];
    Flatten[DissMat /@ M, 1],
    Print["Number of sources: 1"];
    Print["Number of objects: ", Length[M]];
    DissMat[M]]
]
End[]
EndPackage[]
(* Creates dissimilarity matrices from ratings data with alternative distances and allowance for missing data *)
BeginPackage["MakeDissMissVal`"]
MakeDissMissVal::usage = "MakeDissMissVal[R,form,metric,mv,prnt] creates (no. of columns) source dissimilarity matrices from matrix R with possible missing values. Set mv=1 to indicate missing values; mv=0 otherwise; metric specifies the distance function to be calculated. R is assumed to have the form: objects-by-categories. Form specifications are: form=1, list of dissimilarity matrices; form=2, vector form, a single dissimilarity matrix; form=3, stacked dissimilarity matrices. prnt=1 means print source and object count."
Begin["`Private`"]
MakeDissMissVal[R_, form_, metric_, mv_, prnt_] :=
 Module[{Rt, numobj, numsource, dissims, vectform, stackform, temp = 0},
  numobj = Length[R];
  If[Length[R[[1]]] == 0, Rt = Transpose[Map[List, R]], Rt = Transpose[R]];
  numsource = Length[Rt];
  If[prnt == 1,
   Print["Number of sources: ", numsource];
   Print["Number of objects: ", numobj]];
  dissims = Table[Table[
     If[(Rt[[k, i]] < 0 || Rt[[k, j]] < 0) && mv == 1 && i != j, -1,
      Which[
       metric == 1, If[(temp = Abs[Rt[[k, i]] - Rt[[k, j]]]) > 10^(-5), temp, 0],
       metric == 2, If[(temp = Log[Abs[Rt[[k, i]] - Rt[[k, j]]] + 1]) > 10^(-5), temp, 0],
       True, 0]],
     {i, numobj}, {j, numobj}], {k, numsource}];
  vectform = Table[Sqrt[(R[[i]] - R[[j]]) . (R[[i]] - R[[j]])], {i, numobj}, {j, numobj}];
  stackform = Join @@ dissims;
  Which[form == 1, dissims // N, form == 2, vectform // N, form == 3, stackform // N]
 ]
End[]
EndPackage[]
(* Creates hybrid dissimilarity matrix *)
BeginPackage["Hybrid`"]
Needs["LinearAlgebra`MatrixManipulation`"];
Needs["MakeDissMissVal`"];
(* need to use this package since need -1 in border of matrix *)
Hybrid::usage = "Hybrid[L] creates hybrid dissimilarity matrices from list of data vectors L."
Begin["`Private`"]
Hybrid[L_] := Module[{output, toprows, restofrows},
  toprows = Map[{Join[{0}, #]} &, L];
  restofrows = MapThread[AppendRows[#1, #2] &,
    {Map[Transpose[{#}] &, L], MakeDissMissVal[Transpose[L], 1, 1, 1, 0]}];
  output = Flatten[MapThread[Join[#1, #2] &, {toprows, restofrows}], 1];
  Print["Number of sources: ", Length[L]];
  Print["Number of objects: ", Length[output[[1]]]];
  output
]
End[]
EndPackage[]
(* Creates Toeplitz proximity matrices *)
BeginPackage["MakeToeplitz`"]
MakeToeplitz::usage = "MakeToeplitz[L] creates a Toeplitz proximity matrix from the list L."
Begin["`Private`"]
MakeToeplitz[M_] := Module[{Toeplitz},
  Toeplitz[L_] := Module[{len = Length[L], size},
    size = 1 + len (len - 1)/2;
    For[i = 1, i <= size, i++,
      For[j = 1, j <= size, j++,
        If[i == j, a[i, j] = 0.0, a[i, j] = L[[Abs[i - j]]]]
      ];
    ];
    Table[a[i, j], {i, size}, {j, size}]
  ];
  If[MatrixQ[M],
    Print["Number of sources: ", Length[M]];
    Print["Number of objects: ", Length[M[[1]]] + 1];
    Flatten[Toeplitz /@ M, 1],
    Print["Number of sources: 1"];
    Print["Number of objects: ", Length[M] + 1];
    Toeplitz[M]]
]
End[]
EndPackage[]
(* Creates proximity matrices by populating symmetric matrix of appropriate size *)
BeginPackage["MakeSymMat`"]
MakeSymMat::usage = "MakeSymMat[M] creates a set of symmetric matrices from the list of data vectors M through entrywise substitution into a symmetric matrix of appropriate size."
Begin["`Private`"]
MakeSymMat[M_] := Module[{},
  Print["Number of sources: ", Length[M]];
  Sym[V_] := Module[{outmat = {}, symlen = Length[V], k = 1, symsize},
    symsize = (Sqrt[8 symlen + 1] + 1)/2;
    For[i = 1, i <= symsize, i++,
      For[j = 1, j < i, j++,
        a[i, j] = V[[k]]; a[j, i] = V[[k]]; k++;
      ];
    ];
    Table[If[i == j, 0, a[i, j]], {i, symsize}, {j, symsize}]
  ];
  Augment[L_] := Module[{auglen, n, temp},
    auglen = Length[L]; n = auglen; temp = 1;
    While[2 auglen - n^2 - n <= 0, temp = n + 1; n--];
    Flatten[Join[L, Table[-1, {(temp (temp - 1)/2) - auglen}]]]
  ];
  outmat = Flatten[Map[Sym[Augment[#]] &, M], 1];
  Print["Number of objects: ", Length[outmat[[1]]]];
  outmat
]
End[]
EndPackage[]
(* Creates distance matrices from sets of configurations of points *)
BeginPackage["Lp`"]
Lp::usage = "Lp[C,p] calculates Minkowski distance with exponent p on sets C of configurations of points."
Begin["`Private`"]
Lp[C_, p_] := Module[{metric},
  metric[X_, a_] := Module[{len = Length[X]},
    (Partition[Plus @@ Transpose[
        Flatten[Table[Abs[(X[[i]] - X[[j]])]^a, {i, len}, {j, len}], 1]], len])^(1/a) // N
  ];
  If[MatrixQ[C[[1]]],
    Print["Number of sources: ", Length[C]];
    Print["Number of objects: ", Length[C[[1]]]];
    Flatten[Map[metric[#, p] &, C], 1],
    Print["Number of sources: 1"];
    Print["Number of objects: ", Length[C]];
    metric[C, p]]
]
End[]
EndPackage[]
(* Output from preprocessing packages *)
Print["Test data: "];
test = {{1, 2, 3}, {1, 5, 9}};
test // TableForm
Print["MakeDissMat output on test data:"];
MakeDissMat[test] // TableForm
Print["MakeDissMissVal output on test data:"]
MakeDissMissVal[Transpose[test], 3, 1, 1, 1] // TableForm
Print["Hybrid output on test data:"]
Hybrid[test] // TableForm
Print["MakeToeplitz output on test data:"]
MakeToeplitz[test] // TableForm
Print["MakeSymMat output on test data:"]
MakeSymMat[test] // TableForm
Print["Lp output on test data:"]
Lp[test, 2] // TableForm
Test data:
1 2 3
1 5 9
MakeDissMat output on test data:
Number of sources: 2
Number of objects: 3
0 1 2
1 0 1
2 1 0
0 4 8
4 0 4
8 4 0
MakeDissMissVal output on test data:
Number of sources: 2
Number of objects: 3
0 1. 2.
1. 0 1.
2. 1. 0
0 4. 8.
4. 0 4.
8. 4. 0
Hybrid output on test data:
Number of sources: 2
Number of objects: 4
0 1 2 3
1 0 1. 2.
2 1. 0 1.
3 2. 1. 0
0 1 5 9
1 0 4. 8.
5 4. 0 4.
9 8. 4. 0
MakeToeplitz output on test data:
Number of sources: 2
Number of objects: 4
0. 1 2 3
1 0. 1 2
2 1 0. 1
3 2 1 0.
0. 1 5 9
1 0. 1 5
5 1 0. 1
9 5 1 0.
MakeSymMat output on test data:
Number of sources: 2
Number of objects: 3
0 1 2
1 0 3
2 3 0
0 1 5
1 0 9
5 9 0
Lp output on test data:
Number of sources: 1
Number of objects: 2
0 6.7082
6.7082 0
STEP 2. PROCESSING SOURCE CODE
Mathematica code for step 2 of the invention: processing.
Mathematica code for IDMDS via Alternating Least Squares and Singular Value Decomposition.
IDMDS via Alternating Least Squares (ALS)
Mathematica packages:

<< LinearAlgebra`Cholesky`
<< Graphics`Graphics3D`
<< Graphics`Graphics`
<< Graphics`MultipleListPlot`

Subpackages for IDMDS via ALS:

(* distance matrix package *)
BeginPackage["Dmatrix`"]
Dmatrix::usage = "Dmatrix[C] calculates the Euclidean interpoint distances of input configuration C."
Begin["`Private`"]
Dmatrix[C_] := Module[{numobj},
  numobj = Length[C];
  Table[Sqrt[(C[[i]] - C[[j]]) . (C[[i]] - C[[j]])], {i, numobj}, {j, numobj}] // N
]
End[]
EndPackage[]
(* Stress package *)
BeginPackage["Stress`"]
Needs["Dmatrix`"];
Stress::usage = "Stress[P,W,X] calculates Kruskal's stress."
Begin["`Private`"]
Stress[P_, W_, X_] := Module[{},
  Plus @@ Flatten[(Map[W * # &, P] - Map[Dmatrix, X])^2]
]
End[]
EndPackage[]
(* Guttman package *)
BeginPackage["Guttman`"]
Needs["Dmatrix`"];
Guttman::usage = "Guttman[P,X,W] computes the update configuration via the Guttman transform."
Begin["`Private`"]
Guttman[P_, X_, W_] := Module[{B, D, d, dim},
  D = Dmatrix[X];
  dim = Length[X];
  d = Length[P];
  B = Table[
    If[i != j && D[[i, j]] != 0, -(W[[i, j]] * P[[i, j]])/D[[i, j]], 0],
    {i, d}, {j, d}];
  N[(1/dim) * (B - Sum[DiagonalMatrix[B[[i]]], {i, d}]) . X]
]
End[]
EndPackage[]
(* VMat package *)
BeginPackage["VMat`"]
VMat::usage = "VMat[W] computes the p.s.d. V matrix from the weight matrix W."
Begin["`Private`"]
VMat[W_] := Module[{dim = Length[W], V},
  V = Table[If[i != j, -W[[i, j]], 0], {i, dim}, {j, dim}];
  V + Sum[DiagonalMatrix[(-1) * V[[i]]], {i, dim}]
]
End[]
EndPackage[]
(* UnitNorm package *)
BeginPackage["UnitNorm`"]
UnitNorm::usage = "UnitNorm[A] takes the list of diagonals A and unit normalizes them so that (1/n) Sum A*A = 1."
Begin["`Private`"]
UnitNorm[A_] := Module[{},
  Map[Sqrt[Plus @@ (A^2)]^(-1) * # &, A] * Sqrt[Length[A]] // N
]
End[]
EndPackage[]
(* TMat package *)
BeginPackage["TMat`"]
TMat::usage = "TMat[A] defines the T matrix which normalizes common space Z."
Begin["`Private`"]
TMat[A_] := Module[{},
  DiagonalMatrix[Sqrt[Plus @@ (A^2)] * ((Sqrt[Length[A]])^(-1))] // N
]
End[]
EndPackage[]
(* InitialConfig package *)
BeginPackage["InitialConfig`"]
InitialConfig::usage = "InitialConfig[ns, no, d] creates ns = number of sources, no = number of objects by d-dimensional constrained random start configurations."
Begin["`Private`"]
InitialConfig[ns_, no_, d_] := Module[{numsources = ns, numobs = no, dimens = d, i, j, k, G, W},
  G = N[Table[(*SeedRandom[i*j];*) Random[], {i, numobs}, {j, dimens}]];
  W = N[Table[(*SeedRandom[k*j];*) Random[], {k, numsources}, {j, dimens}]];
  {Table[G . DiagonalMatrix[W[[k]]], {k, numsources}], G, W}
]
End[]
EndPackage[]
(* Diagonal package *)
BeginPackage["Diagonal`"]
Diagonal::usage = "Diagonal[M] creates a diagonal matrix from main diagonal of M."
Begin["`Private`"]
Diagonal[M_] := Module[{dim},
  dim = Length[M];
  Table[If[i == j, M[[i, j]], 0], {i, dim}, {j, dim}] // N
]
End[]
EndPackage[]
(* UnDiagonal package *)
BeginPackage["UnDiagonal`"]
UnDiagonal::usage = "UnDiagonal[M] turns diagonal matrix into a vector."
Begin["`Private`"]
UnDiagonal[M_] := Module[{dim},
  dim = Length[M];
  Table[M[[i, i]], {i, dim}] // N
]
End[]
EndPackage[]
IDMDS via ALS:

(* IDMDS via ALS *)
BeginPackage["IDMDSALS`"]
Needs["Stress`"];
Needs["InitialConfig`"];
Needs["UnitNorm`"];
Needs["VMat`"];
Needs["TMat`"];
Needs["Guttman`"];
Needs["Diagonal`"];
Needs["UnDiagonal`"];
IDMDSALS::usage = "IDMDSALS[prox_,proxwts_,dim_,epsilon_,iterations_,seed_] computes IDMDS for proximity matrices prox."
Begin["`Private`"]
IDMDSALS[prox_, proxwts_, dim_, epsilon_, iterations_, seed_] := Module[
  {Aconstrain, A0, A0norm, deltastress, deltastresslist, numits, numobj, numscs,
   stresslist, T, T0, tempstress, tempstressprev, V, Vinv, Xupdate, X0,
   Zconstrain, Zt, Z0, Z0norm},
  numobj = Length[prox[[1]]];
  numscs = Length[prox];
  numits = 0;
  SeedRandom[seed];
  {X0, Z0, A0} = InitialConfig[numscs, numobj, dim];
  Print["Number of sources: ", numscs];
  Print["Number of objects: ", numobj];
  T0 = TMat[A0];
  Z0norm = Z0 . T0;
  A0norm = Map[Inverse[T0] . # &, Map[DiagonalMatrix, A0]];
  X0 = Map[Z0norm . # &, A0norm];
  V = VMat[proxwts];
  Vinv = PseudoInverse[V];
  tempstress = Stress[prox, proxwts, X0];
  stresslist = {tempstress};
  deltastress = tempstress;
  deltastresslist = {};
  While[deltastress > epsilon && numits <= iterations,
    numits++;
    Xupdate = MapThread[Guttman[#1, #2, proxwts] &, {prox, X0}];
    For[i = 1, i <= 1, i++,
      Zconstrain = (1/numscs) * (Vinv . Plus @@ MapThread[#1 . #2 &, {numobj * Xupdate, A0norm}]);
      Zt = Transpose[Zconstrain];
      Aconstrain = Map[Inverse[Diagonal[Zt . V . Zconstrain]] . # &,
        Map[Diagonal, Map[Zt . # &, numobj * Xupdate]]];
      T = TMat[Map[UnDiagonal, Aconstrain]];
      A0norm = Map[Inverse[T] . # &, Aconstrain];
      Z0norm = Zconstrain . T;
    ];
    X0 = Map[Z0norm . # &, A0norm];
    tempstressprev = tempstress;
    tempstress = Stress[prox, proxwts, X0];
    stresslist = Append[stresslist, tempstress];
    deltastress = tempstressprev - tempstress;
    deltastresslist = Append[deltastresslist, deltastress];
  ];
  Print["Final stress is: ", tempstress];
  Print["Stress record is: ", stresslist];
  Print["Stress differences: ", deltastresslist];
  Print["Number of iterations: ", numits];
  Print["The common space coordinates are: ", Z0norm // MatrixForm];
  Print["The source space coordinates are: ", Map[MatrixForm, Chop[Aconstrain]]];
  {Z0norm, Chop[Aconstrain]}
]
End[]
EndPackage[]
IDMDS via Singular Value Decomposition (SVD)
Mathematica packages:

<< Graphics`Graphics3D`
<< Graphics`Graphics`

Subpackages for IDMDS via SVD:

(* DistanceMatrix package *)
BeginPackage["DistanceMatrix`"]
DistanceMatrix::usage = "DistanceMatrix[config] produces a distance matrix from configuration matrix config."
Begin["`Private`"]
DistanceMatrix[config_] := Module[{c, d, m, one},
  d = Length[config];
  m = config . Transpose[config];
  c = {Table[m[[i, i]], {i, d}]};
  one = {Table[1, {i, d}]};
  N[Sqrt[Transpose[one] . c + Transpose[c] . one - 2 m]]
]
End[]
EndPackage[]
(* DiagMatNorm package *)
BeginPackage["DiagMatNorm`"]
DiagMatNorm::usage = "DiagMatNorm[A] normalizes the list of weight vectors A."
Begin["`Private`"]
DiagMatNorm[A_List] := Module[{newA, norm, sumnorm = 0},
  newA = Table[0, {Length[A]}, {Length[A[[1]]]}];
  For[i = 1, i <= Length[A[[1]]], i++,
    For[j = 1, j <= Length[A], j++,
      sumnorm += A[[j, i]]^2;
    ];
    norm[i] = Sqrt[sumnorm];
    sumnorm = 0;
  ];
  norm = Table[norm[k], {k, Length[A[[1]]]}];
  For[i = 1, i <= Length[A[[1]]], i++,
    For[j = 1, j <= Length[A], j++,
      newA[[j, i]] = A[[j, i]]/norm[[i]];
    ];
  ];
  N[newA]
]
End[]
EndPackage[]
(* NormalizeG package *)
BeginPackage["NormalizeG`"]
NormalizeG::usage = "NormalizeG[A] gives matrix which normalizes the common space given the list of weight vectors A."
Begin["`Private`"]
NormalizeG[A_List] := Module[{newA, norm, sumnorm = 0},
  newA = Table[0, {Length[A]}, {Length[A[[1]]]}];
  For[i = 1, i <= Length[A[[1]]], i++,
    For[j = 1, j <= Length[A], j++,
      sumnorm += A[[j, i]]^2;
    ];
    norm[i] = Sqrt[sumnorm];
    sumnorm = 0;
  ];
  norm = Table[norm[k], {k, Length[A[[1]]]}];
  N[DiagonalMatrix[norm]]
]
End[]
EndPackage[]
(* BMatrix package *)
BeginPackage["BMatrix`"]
BMatrix::usage = "BMatrix[delta, DM] is part of the Guttman transform."
Begin["`Private`"]
BMatrix[delta_, DM_] := Module[{b, bdiag, d, i, j, k},
  d = Length[delta];
  b = Table[0, {d}, {d}] // N;
  For[i = 1, i <= d, i++,
    For[j = 1, j <= d, j++,
      If[i != j && DM[[i, j]] != 0,
        b[[i, j]] = b[[i, j]] - delta[[i, j]]/DM[[i, j]]];
    ];
  ];
  bdiag = Sum[DiagonalMatrix[b[[k]]], {k, d}];
  N[b - bdiag]
]
End[]
EndPackage[]
(* GuttmanTransform package *)
BeginPackage["GuttmanTransform`"]
GuttmanTransform::usage = "GuttmanTransform[B, X] updates the configuration X through multiplication by the BMatrix B."
Begin["`Private`"]
GuttmanTransform[B_, X_] := Module[{dim},
  dim = Length[X];
  (* Guttman update: X <- (1/n) B.X *)
  N[(1/dim) B . X]
]
End[]
EndPackage[]
(* AGWStress package *)
BeginPackage["AGWStress`"]
AGWStress::usage = "AGWStress[dissimilarity, distance] is the loss function for multidimensional scaling."
Begin["`Private`"]
AGWStress[dissimilarity_, distance_] := Module[{S, dim, i, j},
  dim = Length[dissimilarity];
  S = 0;
  For[j = 1, j <= dim, j++,
    For[i = 1, i < j, i++,
      S += (dissimilarity[[i, j]] - distance[[i, j]])^2
    ];
  ];
  N[S]
]
End[]
EndPackage[]
(* NormStress package *)
BeginPackage["NormStress`"]
NormStress::usage = "NormStress[dissimilarity] normalizes the stress loss function."
Begin["`Private`"]
NormStress[dissimilarity_List] := Module[{norm, numobjects, i, j},
  numobjects = Length[dissimilarity[[1]]];
  norm = 0.0;
  For[j = 1, j <= numobjects, j++,
    For[i = 1, i < j, i++,
      norm += dissimilarity[[i, j]]^2
    ];
  ];
  N[norm]
]
End[]
EndPackage[]
(* Initialconfig2 package *)
BeginPackage["Initialconfig2`"]
Initialconfig2::usage = "Initialconfig2[ns, no, d] creates ns = number of sources, no = number of objects by d-dimensional constrained random start configurations."
Begin["`Private`"]
Initialconfig2[ns_, no_, d_] := Module[{numsources = ns, numobs = no, dimens = d, i, j, k, G, W},
  G = N[Table[(*SeedRandom[i*j];*) Random[], {i, numobs}, {j, dimens}]];
  W = N[Table[(*SeedRandom[k*j];*) Random[], {k, numsources}, {j, dimens}]];
  Table[G . DiagonalMatrix[W[[k]]], {k, numsources}]
]
End[]
EndPackage[]
(* Torgerson package *)
BeginPackage["Torgerson`"]
Torgerson::usage = "Torgerson[D,d] computes classical (d dimensional) scaling solution for dissimilarity matrix D."
Begin["`Private`"]
Torgerson[D_, d_] := Module[{Dsq, u, v, w, Bdelta, J, One, n = Length[D]},
  One = {Table[1, {i, n}]};
  J = IdentityMatrix[n] - (1/n) Transpose[One] . One;
  Dsq = N[Map[#^2 &, D, 1]];
  Bdelta = N[(-0.5) J . Dsq . J];
  {u, w, v} = SingularValues[Bdelta, Tolerance -> 0];
  Transpose[Table[v[[i]], {i, d}]] . DiagonalMatrix[Table[w[[i]], {i, d}]] // Chop
]
End[]
EndPackage[]
(* Ave package *)
BeginPackage["Ave`"]
Ave::usage = "Ave[M] finds the average of the list of matrices M and produces a list of the same length with every element the average of M."
Begin["`Private`"]
Ave[M_List] := Module[{average},
  average = N[(1/Length[M]) * Apply[Plus, M]];
  Table[average, {i, Length[M]}]
]
End[]
EndPackage[]
(* SVDConstrain package *)
BeginPackage["SVDConstrain`"]
SVDConstrain::usage = "SVDConstrain[configs, d] imposes constraints on the list of configurations configs."
Begin["`Private`"]
SVDConstrain[configs_List, d_] := Module[
  {numsrcs = Length[configs], numobjects = Length[configs[[1]]], dim = d,
   y, u, v, w, uhold, vhold, G, weights},
  For[i = 1, i <= dim, i++,
    y[i] = Transpose[Table[Map[Transpose, configs][[k, i]], {k, numsrcs}]];
  ];
  Print[Table[y[i], {i, dim}]];
  For[i = 1, i <= dim, i++,
    {uhold[i], w[i], vhold[i]} = N[SingularValues[N[y[i]], Tolerance -> 0] // Chop];
  ];
  u = Table[uhold[i], {i, dim}];
  v = Table[vhold[i], {i, dim}];
  G = N[Transpose[Table[u[[i, 1]], {i, dim}]]];
  weights = N[Partition[Flatten[Table[w[k][[1]] * v[[k, 1, j]], {j, numsrcs}, {k, dim}]], dim]];
  Table[G . Map[DiagonalMatrix, weights][[i]], {i, numsrcs}]
]
End[]
EndPackage[]
IDMDS via SVD:

(* IDMDS via SVD on constraints with implicit normalization of stress. *)
BeginPackage["IdmdsSVD`"]
Needs["DistanceMatrix`"];
Needs["BMatrix`"];
Needs["GuttmanTransform`"];
Needs["AGWStress`"];
Needs["NormStress`"];
Needs["DiagMatNorm`"];
Needs["NormalizeG`"];
Needs["Initialconfig2`"];
Needs["Torgerson`"];
Needs["SVDConstrain`"];
Needs["Ave`"];
Needs["Graphics`Graphics`"];
Needs["Graphics`Graphics3D`"];
IdmdsSVD::usage = "IdmdsSVD[diss, dim, iterations, rate, start] is an IDMDS package for an arbitrary number of sources. Enter a list of dissimilarity matrices = diss; the dimension = dim of the common space; the number of iterations required = iterations; the rate of convergence = rate. Has random starts, start=1; Torgerson, start=2; averaged Torgerson, start=3; or user-provided, start=startlist, start configurations."
Begin["`Private`"]
IdmdsSVD[diss_, dim_, iterations_, rate_, start_] := Module[{
   d = dim, e = rate, G, Gnorm, gt, holdstress, iterate, konstant,
   numit = iterations, numobjects, numsrcs = Length[diss], sdr, sr, stress,
   stressdiffrecord, stressrecord, temp, tempstress, u, utemp, v, vtemp, w,
   weightrecord, weights, weightsnorm, wr, X, y},
  numobjects = Length[diss[[1]]];
  tempstress = 0.0; holdstress = 0.0; stress = 0.0; konstant = 0.0; iterate = 0;
  For[i = 1, i <= numsrcs, i++,
    konstant += NormStress[diss[[i]]];
  ];
  If[NumberQ[start] && start == 1, temp = Initialconfig2[numsrcs, numobjects, d]];
  If[NumberQ[start] && start == 2, temp = SVDConstrain[Map[Torgerson[#, d] &, diss], d]];
  If[NumberQ[start] && start == 3, temp = SVDConstrain[Map[Torgerson[#, d] &, Ave[diss]], d]];
  If[NumberQ[start] == False, temp = start];
  For[i = 1, i <= numsrcs, i++,
    holdstress += AGWStress[diss[[i]], DistanceMatrix[temp[[i]]]];
  ];
  If[konstant != 0, stress = holdstress/konstant, stress = holdstress];
  Print["Initial stress is: ", stress];
  sdr = {0.0}; sr = {stress}; wr = {};
  While[iterate <= numit,
    For[i = 1, i <= numsrcs, i++,
      gt[i] = GuttmanTransform[BMatrix[diss[[i]], DistanceMatrix[temp[[i]]]], temp[[i]]];
    ];
    X = Table[gt[i], {i, numsrcs}];
    For[i = 1, i <= d, i++,
      y[i] = Transpose[Table[Map[Transpose, X][[k, i]], {k, numsrcs}]];
    ];
    For[i = 1, i <= d, i++,
      {utemp[i], w[i], vtemp[i]} = N[SingularValues[y[i]]];
    ];
    u = Table[utemp[i], {i, d}];
    v = Table[vtemp[i], {i, d}];
    G = N[Transpose[Table[u[[i, 1]], {i, d}]]];
    (*Print["G is: ", MatrixForm[G]];*)
    weights = N[Partition[Flatten[Table[w[k][[1]] * v[[k, 1, j]], {j, numsrcs}, {k, d}]], d]];
    temp = Table[G . Map[DiagonalMatrix, weights][[i]], {i, numsrcs}];
    tempstress = 0.0;
    For[i = 1; holdstress = 0.0, i <= numsrcs, i++,
      holdstress += AGWStress[diss[[i]], DistanceMatrix[temp[[i]]]];
    ];
    If[konstant != 0, tempstress = holdstress/konstant, tempstress = holdstress];
    weightrecord = AppendTo[wr, weights];
    stressrecord = AppendTo[sr, tempstress];
    stressdiffrecord = AppendTo[sdr, stress - tempstress];
    If[stress - tempstress <= e,
      Break[],
      stress = tempstress; iterate++;
    ]; (* end If *)
  ]; (* end While *)
  Gnorm = G . NormalizeG[weights];
  weightsnorm = DiagMatNorm[weights];
  For[j = 1, j <= numsrcs, j++,
    For[i = 1, i <= d, i++,
      If[weightsnorm[[j, i]] < 0,
        weightsnorm = ReplacePart[weightsnorm, -weightsnorm[[j, i]], {j, i}]];
    ];
  ];
  Print["Number of iterations was: ", iterate];
  Print["Final stress: ", tempstress];
  Print["Stress difference record: ", stressdiffrecord];
  Print["Stress record: ", stressrecord];
  Print["Coordinates of the common space:"];
  Print[MatrixForm[Gnorm]];
  Print["Coordinates of the weight space: "];
  Print[MatrixForm[weightsnorm]];
  (*Print["Coordinates of the private spaces: "];
  Print[Map[MatrixForm, Map[G . # &, Map[DiagonalMatrix, weightsnorm]]]];*)
  Print["Plot of common space:"];
  Which[
    d == 3, ScatterPlot3D[Gnorm, PlotStyle -> {{AbsolutePointSize[5]}}],
    d == 2, TextListPlot[Gnorm],
    d == 1, ListPlot[Flatten[Gnorm], PlotJoined -> True, Axes -> False, Frame -> True]
  ];
  Print["Plot of the source space:"];
  Which[
    d == 3, ScatterPlot3D[weightsnorm, PlotStyle -> {{AbsolutePointSize[5]}},
      ViewPoint -> {2.857, 0.884, 1.600}],
    d == 2, TextListPlot[weightsnorm, PlotRange -> All, Axes -> False, Frame -> True],
    d == 1, ListPlot[Flatten[weightsnorm], PlotJoined -> True, Axes -> False, Frame -> True]
  ];
]
End[]
EndPackage[]
STEP 3. POSTPROCESSING SOURCE CODE
Mathematica code for step 3 of the invention: postprocessing.
Subfunctions:

(* Euclidean distance between vectors u and v *)
d[u_, v_] := Sqrt[(u - v) . (u - v)] // N;

(* Characteristic function *)
char[u_, x_] := If[x === u, 1, 0];

Clustering functions for back-end analysis of classifying space of invention:

g1[obj_, dyeval_, dyelist_, L1_, L2_, power_] :=
  Plus @@ Table[If[i == obj, 0, char[dyeval, dyelist[[i]]] d[L1[[obj]], L2[[i]]]^power], {i, Length[dyelist]}] /
  Plus @@ Table[If[i == obj, 0, d[L1[[obj]], L2[[i]]]^power], {i, Length[L2]}];

g2[obj_, dyeval_, dyelist_, L2_, power_] :=
  Plus @@ Table[If[i == obj, 0, char[dyeval, dyelist[[i]]] d[L2[[obj]], L2[[i]]]^power], {i, Length[dyelist]}] /
  Plus @@ Table[If[i == obj, 0, d[L2[[obj]], L2[[i]]]^power], {i, Length[L2]}];

g3[obj_, dyeval_, dyelist_, L2_, power_] :=
  Plus @@ Table[If[i == obj, 0, char[dyeval, dyelist[[i]]] Exp[d[L2[[obj]], L2[[i]]]]^power], {i, Length[dyelist]}] /
  Plus @@ Table[If[i == obj, 0, Exp[d[L2[[obj]], L2[[i]]]]^power], {i, Length[L2]}];

Claims

1. A method for classifying data, the method comprising the steps of: receiving input data for classification; defining one or more transformations of the input data; applying energy minimization to the one or more transformations of the input data; producing at least a rate of change in energy value in response to energy minimization; and classifying the input data using at least the rate of change in energy value.
2. The method of claim 1 wherein the step of applying energy minimization comprises using individual differences multidimensional scaling applied to the input data.
3. The method of claim 1 wherein the step of applying energy minimization comprises using a finite element method analysis applied to the input data.
4. The method of claim 1 wherein the step of applying energy minimization comprises using simulated annealing applied to the input data.
5. The method of claim 2 further comprising the steps of producing a source space output and classifying the input data using the source space output.
6. The method of claim 2 further comprising the steps of producing a common space output and classifying the input data using the common space output.
7. A classifier process for data comprising: using individual differences multidimensional scaling with one or more input proximity matrices into which the data to be classified has been converted to produce at least a source space output; and using the source space output to classify the data.
8. The invention of claim 7 further comprising the step of: prior to the step of using individual differences multidimensional scaling, producing the one or more proximity matrices from the data to be classified.
9. The invention of claim 7 wherein said step of using individual differences multidimensional scaling also produces a common space output, and wherein the classifier process further comprises the step of: additionally using the common space output to classify the data.
10. The invention of claim 7 wherein said step of using the source space output to classify the data is further characterized as comprising the step of: searching for clustering.
11. The invention of claim 7 wherein said step of using the source space output to classify the data is further characterized as comprising the step of: searching for hyperplane discriminators.
12. The invention of claim 7 wherein said step of using the source space output to classify the data is further characterized as comprising the step of: searching for decision surfaces.
13. The invention of claim 7 wherein said step of using the source space output to classify the data is further characterized as comprising the step of: searching for classifying structures.
14. A classifier process for data comprising: using individual differences multidimensional scaling with one or more input proximity matrices into which the data to be classified has been converted to produce at least a source space output; and using the source space output for pattern recognition.
15. A classifier process for data comprising: using individual differences multidimensional scaling with one or more input proximity matrices into which the data to be classified has been converted to produce at least a source space output; and using the source space output for sensor fusion.
16. A method for optical character recognition comprising: using individual differences multidimensional scaling with one or more input proximity matrices into which the data including the characters to be recognized has been converted to produce at least a source space output; and using the source space output for optical character recognition.
17. A method for data compression comprising: using individual differences multidimensional scaling with one or more input proximity matrices into which the data to be compressed has been converted to produce at least a source space output; and using the source space output for data compression.
18. A method for data compression comprising: producing the one or more proximity matrices including the data to be compressed; using individual differences multidimensional scaling upon the one or more input proximity matrices to produce a source space output and a common space output; and using the source space output and the common space output as a compressed representation of the data.
19. A data compression method for data comprising: using individual differences multidimensional scaling with one or more input proximity matrices into which the data to be compressed has been converted to produce a common space output and a source space output; and transferring the common space output and the source space output as a compressed representation of the data.
20. A program for classifying data comprised of: a first program portion that uses individual differences multidimensional scaling with one or more input proximity matrices into which the data to be classified has been converted to produce at least a source space output; and a second program portion that uses the source space output to classify the data.
21. A program for classifying data comprised of: a first program portion that uses individual differences multidimensional scaling with one or more input proximity matrices into which the data to be classified has been converted to produce at least a source space output; a second program portion that performs an analysis of the source space output; and a third program portion that classifies the data based upon the analysis performed by the second program portion.
22. Computer executable software code stored on a computer readable medium, the code for classifying input data, the code comprising: first code that receives the input data and forms one or more matrices using the input data; second code that applies individual differences multidimensional scaling to the one or more matrices and produces at least a source space; and third code that uses the source space to classify the input data according to one or more predetermined criteria and produce output data representative of data classification.
23. The computer executable software code of claim 22 wherein the first code forms one or more square matrices.
24. The computer executable software code of claim 22 wherein the first code forms one or more hollow, symmetric matrices.
25. The computer executable software code of claim 22 wherein the input data are time series data and wherein each element of the one or more matrices is a datum from the time series data.
26. The computer executable software code of claim 22 wherein the second code further produces a common space, the third code using both the source space and the common space for classifying the input data.
27. The computer executable software code of claim 22 wherein the second code performs an energy minimization process.
28. The computer executable software code of claim 27 wherein the second code defines a stress σ over configurations of the input data and finds a configuration Xk having a lowest stress.
29. The computer executable software code of claim 28 wherein the second code defines a constraint equation Xk = ZWk and wherein the second code finds the configuration Xk which also satisfies the constraint equation.
30. The computer executable software code of claim 22 wherein the third code searches for clustering of elements of the source space.
31. The computer executable software code of claim 22 wherein the third code searches for hyperplane discriminators of the source space.
32. A signal processing method comprising the steps of: receiving input data representative of time varying signals; mapping the input data into one or more matrices; applying individual differences multidimensional scaling to the one or more matrices to produce a source space output; and processing the input data using the source space output.
33. The signal processing method of claim 32 wherein processing the input data comprises separating elements of the source space output using hyperplanes.
34. A signal processing method comprising the steps of: receiving input data representative of time varying signals; mapping the input data into one or more matrices; applying individual differences multidimensional scaling to the one or more matrices to produce a common space output; and processing the input data using the common space output.
35. The signal processing method of claim 34 wherein processing the input data comprises separating elements of the common space output using hyperplanes.
36. A signal processing method comprising the steps of: receiving input data representative of time varying signals; mapping the input data into one or more matrices; applying individual differences multidimensional scaling to the one or more matrices to produce a rate of change of stress/energy; and processing the input data using the rate of change of stress/energy.
37. A method for determining dimensionality of a network, the dimensionality corresponding to a number of degrees of freedom in the network, the method comprising the steps of: sampling data from one or more nodes of the network; mapping the data into one or more matrices; applying individual differences multidimensional scaling to the one or more matrices to produce a stress/energy; and processing the stress/energy to determine the dimensionality of the network.
38. A method for determining dimensionality of a network, the dimensionality corresponding to a number of degrees of freedom in the network, the method comprising the steps of: sampling data from one or more nodes of the network; mapping the data into one or more matrices; applying individual differences multidimensional scaling to the one or more matrices to produce a rate of change of stress/energy output and processing the rate of change of stress/energy output to determine the dimensionality of the network.
39. A method for determining dimensionality of a network, the dimensionality corresponding to a number of degrees of freedom in the network, the method comprising the steps of: sampling data from one or more nodes of the network; mapping the data into one or more matrices; applying individual differences multidimensional scaling to the one or more matrices to produce a common space output; and processing the common space output to determine the dimensionality of the network.
40. A method for reconstructing a network, the method comprising the steps of sampling data from one or more nodes of the network; mapping the data into one or more matrices; applying individual differences multidimensional scaling to the one or more matrices to produce a source space output; from the source space output, determining the dimensionality of the network; using free nodes to recreate and reconstruct individual nodes through the use of matrices containing missing values; and establishing node connectivity through the use of lowest-energy connections constrained by dimensionality.
41. A method for determining dimensionality of a dynamical system from partial data, the dimensionality corresponding to a number of degrees of freedom in the dynamical system, the method comprising the steps of: sampling data from the dynamical system; mapping the data into one or more matrices; applying individual differences multidimensional scaling to the one or more matrices to produce a stress/energy; processing the stress/energy to determine dimensionality of the dynamical system.
42. A method for determining dimensionality of a dynamical system from partial data, the dimensionality corresponding to a number of degrees of freedom in the dynamical system, the method comprising the steps of: sampling data from the dynamical system; mapping the data into one or more matrices; applying individual differences multidimensional scaling to the one or more matrices to produce rate of change of stress/energy output; processing the rate of change of stress/energy output to determine dimensionality of the dynamical system.
43. A method for determining dimensionality of a dynamical system from partial data, the dimensionality corresponding to a number of degrees of freedom in the dynamical system, the method comprising the steps of: sampling data from the dynamical system; mapping the data into one or more matrices; applying individual differences multidimensional scaling to the one or more matrices to produce a common space output; processing the common space output to determine dimensionality of the dynamical system.
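The claims above trace a single pipeline: map the data into one or more hollow, symmetric proximity matrices, apply an energy-minimizing embedding, and read classification structure or dimensionality out of the resulting configuration and its stress/energy. The stress-versus-dimension signature that claims 37-43 use to determine dimensionality can be sketched in Python; note that this uses classical (Torgerson) MDS as a simple, self-contained stand-in for the individual differences multidimensional scaling the claims actually recite, and all function names here are ours:

```python
import numpy as np

def classical_mds(D, dim):
    """Classical (Torgerson) MDS: embed a hollow, symmetric distance
    matrix D into `dim` dimensions via the double-centered Gram matrix.
    A stand-in for the energy-minimization/INDSCAL step of the claims."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                  # double centering
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]           # top `dim` eigenpairs
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))

def stress(D, X):
    """Raw stress: squared mismatch between the input distances and the
    distances reproduced by the configuration X."""
    Dx = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.sum((D - Dx) ** 2)

# Data genuinely living in 2-D: stress collapses to ~0 at dim = 2 and
# stays there, which is how dimensionality is read off the stress curve.
rng = np.random.default_rng(0)
P = rng.normal(size=(20, 2))
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
stresses = [stress(D, classical_mds(D, k)) for k in (1, 2, 3)]
```

The embedding dimension at which stress (or its rate of change) first drops to near zero is taken as the number of degrees of freedom of the underlying network or dynamical system.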
EP98966467A 1997-12-29 1998-12-23 Energy minimization for classification, pattern recognition, sensor fusion, data compression, network reconstruction and signal processing Withdrawn EP1064613A4 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US7159297P 1997-12-29 1997-12-29
US71592P 1997-12-29
PCT/US1998/027374 WO1999034316A2 (en) 1997-12-29 1998-12-23 Energy minimization for classification, pattern recognition, sensor fusion, data compression, network reconstruction and signal processing

Publications (2)

Publication Number Publication Date
EP1064613A2 true EP1064613A2 (en) 2001-01-03
EP1064613A4 EP1064613A4 (en) 2002-01-02

Family

ID=22102313

Family Applications (1)

Application Number Title Priority Date Filing Date
EP98966467A Withdrawn EP1064613A4 (en) 1997-12-29 1998-12-23 Energy minimization for classification, pattern recognition, sensor fusion, data compression, network reconstruction and signal processing

Country Status (4)

Country Link
EP (1) EP1064613A4 (en)
AU (1) AU2307099A (en)
CA (1) CA2315814A1 (en)
WO (1) WO1999034316A2 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6993186B1 (en) 1997-12-29 2006-01-31 Glickman Jeff B Energy minimization for classification, pattern recognition, sensor fusion, data compression, network reconstruction and signal processing
AU3864599A (en) * 1998-12-23 2000-07-31 Jeff B. Glickman Energy minimization for data merging and fusion
US7222126B2 (en) 2002-07-30 2007-05-22 Abel Wolman Geometrization for pattern recognition, data analysis, data merging, and multiple criteria decision making
US7557805B2 (en) 2003-04-01 2009-07-07 Battelle Memorial Institute Dynamic visualization of data streams
US8279709B2 (en) * 2007-07-18 2012-10-02 Bang & Olufsen A/S Loudspeaker position estimation

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5181259A (en) * 1990-09-25 1993-01-19 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration General method of pattern classification using the two domain theory
US5422961A (en) * 1992-04-03 1995-06-06 At&T Corp. Apparatus and method for improving recognition of patterns by prototype transformation
US5602938A (en) * 1994-05-20 1997-02-11 Nippon Telegraph And Telephone Corporation Method of generating dictionary for pattern recognition and pattern recognition method using the same
US5802207A (en) * 1995-06-30 1998-09-01 Industrial Technology Research Institute System and process for constructing optimized prototypes for pattern recognition using competitive classification learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BUSING F M T A ET AL: "PROXSCAL: a multidimensional scaling program for individual differences scaling with constraints" PROCEEDINGS SOFTSTAT. CONFERENCE ON THE SCIENTIFIC USE OF STATISTICAL SOFTWARE, XX, XX, 1997, pages 67-74, XP002112130 *
COX T F ET AL: "DISCRIMINANT ANALYSIS USING NON-METRIC MULTIDIMENSIONAL SCALING" PATTERN RECOGNITION, PERGAMON PRESS INC. ELMSFORD, N.Y, US, vol. 26, no. 1, 1993, pages 145-153, XP000355500 ISSN: 0031-3203 *
See also references of WO9934316A2 *

Also Published As

Publication number Publication date
AU2307099A (en) 1999-07-19
WO1999034316A3 (en) 1999-10-28
CA2315814A1 (en) 1999-07-08
WO1999034316A2 (en) 1999-07-08
EP1064613A4 (en) 2002-01-02
WO1999034316A9 (en) 1999-09-23

Similar Documents

Publication Publication Date Title
US7912290B2 (en) Energy minimization for classification, pattern recognition, sensor fusion, data compression, network reconstruction and signal processing
Chen et al. Graph multiview canonical correlation analysis
Kamvar et al. Interpreting and extending classical agglomerative clustering algorithms using a model-based approach
Ding et al. Linearized cluster assignment via spectral ordering
Agrafiotis Stochastic proximity embedding
Kargupta et al. Distributed clustering using collective principal component analysis
EP1038261B1 (en) Visualization and self-organization of multidimensional data through equalized orthogonal mapping
Duin The dissimilarity representation for pattern recognition: foundations and applications
US8055677B2 (en) Geometrization for pattern recognition data analysis, data merging and multiple criteria decision making
Watanabe et al. A new pattern representation scheme using data compression
CN108415883A (en) Convex non-negative matrix factorization method based on subspace clustering
EP1064613A2 (en) Energy minimization for classification, pattern recognition, sensor fusion, data compression, network reconstruction and signal processing
US20220414108A1 (en) Classification engineering using regional locality-sensitive hashing (lsh) searches
Martin et al. Impact of embedding view on cross mapping convergence
Lee et al. Student modeling using principal component analysis of SOM clusters
Soukup et al. Robust object recognition under partial occlusions using NMF
Mithal et al. Change detection from temporal sequences of class labels: Application to land cover change mapping
Tilton et al. Compression experiments with AVHRR data
Laub Non-metric pairwise proximity data
WO2000039705A1 (en) Energy minimization for data merging and fusion
Tran-Luu Mathematical concepts and novel heuristic methods for data clustering and visualization
Siedlecki et al. Mapping techniques for exploratory pattern analysis
Zhang et al. Integrating Color Vector Quantization and Curvelet Transform for Image Retrieval
Niemel et al. A novel description of handwritten characters for use with generalised Fourier descriptors
da Fontoura Costa The Coincidence Similarity Index under Rotation

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20000626

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

A4 Supplementary search report drawn up and despatched

Effective date: 20011115

AK Designated contracting states

Kind code of ref document: A4

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

RIC1 Information provided on ipc code assigned before grant

Free format text: 7G 06K 1/00 A, 7G 06K 9/62 B

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Withdrawal date: 20020304