Article

Visualising Departures from Symmetry and Bowker’s X2 Statistic

by Eric J. Beh 1,2,*,† and Rosaria Lombardo 3,†

1 National Institute for Applied Statistics Research Australia (NIASRA), University of Wollongong, Wollongong, NSW 2522, Australia
2 Department of Statistics and Actuarial Science, Stellenbosch University, Stellenbosch 7602, South Africa
3 Department of Economics, University of Campania, 81043 Capua, CE, Italy
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Symmetry 2022, 14(6), 1103; https://doi.org/10.3390/sym14061103
Submission received: 8 April 2022 / Revised: 10 May 2022 / Accepted: 12 May 2022 / Published: 27 May 2022
(This article belongs to the Special Issue Advances in Quasi-Symmetry Models)

Abstract:
Sometimes, the same categorical variable is studied over different time periods or across different cohorts at the same time. One may consider, for example, a study of voting behaviour of different age groups across different elections, or the study of the same variable exposed to a child and a parent. For such studies, it is interesting to investigate how similar, or different, the variable is between the two time points or cohorts and so a study of the departure from symmetry of the variable is important. In this paper, we present a method of visualising any departures from symmetry using correspondence analysis. Typically, correspondence analysis uses Pearson’s chi-squared statistic as the foundation for all of its numerical and visual features. In the case of studying the symmetry of a variable, Bowker’s chi-squared statistic, presented in 1948, provides a simple numerical means of assessing symmetry. Therefore, this paper shall discuss how a correspondence analysis can be performed to study the symmetry (or lack thereof) of a categorical variable when Bowker’s statistic is considered. The technique presented here provides an extension to the approach developed by Michael Greenacre in 2000.

1. Introduction

Studying the departure from symmetry between categorical variables that are cross-classified to form a contingency table has been a topic of much discussion over the past few decades. There is a wealth of literature now available that examines the measuring, modelling and application of symmetric categorical variables including, but not confined to, the contributions of Agresti [1] (Section 10.4), Anderson [2] (Section 10.2), Bove [3], De Falguerolles and van der Heijden [4], Iki, Yamamoto and Tomizawa [5], Tomizawa [6], Yamamoto [7] and Yamamoto, Shimada and Tomizawa [8]. Most recently, Arellano-Valle, Contreras-Reyes and Stehlik [9] and Nishisato [10] provide wide-ranging discussions on symmetry from the perspective of quantification theory. Such techniques are generally applied to an $S \times S$ contingency table, denoted here by $\mathbf{N}$, where a categorical variable may be examined over, for example, two time periods or between two cohorts. When interest lies in assessing departures from complete or perfect symmetry in $\mathbf{N}$, further information on the symmetric nature (or lack thereof) of the variables may be gained by partitioning $\mathbf{N}$ so that
$$\mathbf{N} = \mathbf{Y} + \mathbf{K} = \frac{1}{2}\left(\mathbf{N} + \mathbf{N}^T\right) + \frac{1}{2}\left(\mathbf{N} - \mathbf{N}^T\right) \qquad (1)$$
where Y is the matrix reflecting the symmetric part of N while K is the skew-symmetric part of the matrix. This partition was considered by Bove [3], Constantine and Gower [11] (Section 3), Gower [12] and Greenacre [13] for visualising row and column categories that deviate from the null hypothesis of perfect symmetry. Greenacre [13] shows how (1) can be used when performing a correspondence analysis on N . His approach yields two visual summaries—one reflecting the symmetric structure (by performing a correspondence analysis on Y ) and another that visualises that part of the structure that is not symmetric (through a correspondence analysis of K ).
In this paper, we also consider how one may perform a correspondence analysis on a contingency table formed from the cross-classification of symmetric variables. Our strategy is different from, but not completely independent of, Greenacre's [13] method. The similarities between our approach and that of Greenacre [13] include basing our analysis on a weighted version of $\mathbf{K}$ that yields pairs of equivalent principal inertia values. However, the key difference of our technique when compared with Greenacre's [13] is that we show how the approach outlined below can be performed when the underlying measure of the departure from perfect symmetry is Bowker's [14] chi-squared statistic. By doing so, we show, for example, that there exists a link between the principal coordinates and singular values that are obtained and Bowker's statistic.
For our discussion, this paper consists of seven further sections. Section 2 provides an overview of correspondence analysis, which is typically used for visually exploring departures from complete independence between the categorical variables. In this section, we describe the application of singular value decomposition (SVD) to a transformation of a standard two-way contingency table, we define the principal coordinates needed to construct the visual summary and describe other key features. We then turn our attention to presenting an overview of the test of symmetry between the two categorical variables and the role that Bowker's chi-squared statistic plays in such a test; see Section 3. Section 4 then shows how Bowker's statistic can be used as the core global measure for assessing the variable's departure from symmetry when providing a visual summary of such sources of departure using correspondence analysis. Section 5 illustrates some features of correspondence analysis when applied to Bowker's statistic. Two examples are then presented that demonstrate the applicability of the technique. Section 6 applies this method of correspondence analysis to an artificially constructed $4 \times 4$ contingency table that exhibits perfect symmetry when a constant $C = 0$ is added to the $(2, 1)$th cell frequency. Changes in $C$ then provide a means of visualising the extent to which the rows and columns deviate from what is expected when the two variables are perfectly symmetric. Our second example, studied in Section 7, considers the data presented in Grover and Srinivasan [15], which looks at differences in the purchase of five brands of decaffeinated coffee across two time periods. Section 8 provides some final comments on the technique, including possible extensions and emendations that may be considered for future research. Three appendices are also included.
The first two appendices derive the singular values for a contingency table of size $2 \times 2$ and $3 \times 3$ (both of which always yield an optimal display that consists of two dimensions). The third appendix provides a description of the R function bowkerca.exe() that performs the necessary calculations of the approach described in this paper.

2. On the Classical Approach

Before we outline our approach to the correspondence analysis of a contingency table using Bowker's chi-squared statistic, it is worth providing a broad overview of the classical technique; one may also consider, for example, Beh and Lombardo [16] for a detailed historical, methodological, practical and computational discussion of a range of correspondence analysis methods. To do so, we consider a contingency table, $\mathbf{N}$, of size $I \times J$ so that there are $I$ row categories and $J$ column categories. The $(i, j)$th cell frequency of $\mathbf{N}$ is denoted by $n_{ij}$ so that the total sample size is $n = \sum_{i=1}^{I}\sum_{j=1}^{J} n_{ij}$. Denote the matrix of relative proportions by $\mathbf{P}$, where the $(i, j)$th value is $p_{ij} = n_{ij}/n$ so that the sum of these proportions across all $IJ$ cells of the table is 1. We also define the $i$th row marginal proportion by $p_{i\cdot} = \sum_{j=1}^{J} p_{ij}$ so that it is the $i$th element of the vector $\mathbf{r}$ and the $(i, i)$th element of the diagonal matrix $\mathbf{D}_R$. Similarly, the $j$th column marginal proportion is denoted by $p_{\cdot j} = \sum_{i=1}^{I} p_{ij}$, so that it is the $j$th element of the vector $\mathbf{c}$ and the $(j, j)$th element of the diagonal matrix $\mathbf{D}_C$.
To test whether the observed set of proportions, $p_{ij}$, differs from what is expected under some model with an expected value of $\hat{p}_{ij}$, Pearson's chi-squared statistic for such a test takes the form
$$X^2 = n\sum_{i=1}^{S}\sum_{j=1}^{S} \frac{\left(p_{ij} - \hat{p}_{ij}\right)^2}{\hat{p}_{ij}}. \qquad (2)$$
For example, under the hypothesis of complete independence, $\hat{p}_{ij} = p_{i\cdot}p_{\cdot j}$ so that (2) becomes
$$X_I^2 = n\sum_{i=1}^{I}\sum_{j=1}^{J} r_{ij}^2 \qquad (3)$$
where
$$r_{ij} = \frac{p_{ij} - p_{i\cdot}p_{\cdot j}}{\sqrt{p_{i\cdot}p_{\cdot j}}}$$
is the $(i, j)$th standardised (Pearson) residual and $X_I^2$ is a chi-squared random variable with $(I - 1)(J - 1)$ degrees of freedom. If there is a statistically significant association between the row and column variables, this association can be visualised using correspondence analysis. This is achieved by first performing an SVD on the matrix of standardised residuals, such that
$$\mathbf{D}_R^{-1/2}\left(\mathbf{P} - \mathbf{r}\mathbf{c}^T\right)\mathbf{D}_C^{-1/2} = \mathbf{A}\mathbf{D}_\lambda\mathbf{B}^T.$$
Here, $\mathbf{A}$ and $\mathbf{B}$ are the column matrices containing the left and right singular vectors, respectively, and are constrained such that $\mathbf{A}^T\mathbf{A} = \mathbf{I}_M$ and $\mathbf{B}^T\mathbf{B} = \mathbf{I}_M$, where $M = \min\left(I, J\right) - 1$; here $\mathbf{I}_M$ is an $M \times M$ identity matrix. The $(m, m)$th element of the diagonal matrix $\mathbf{D}_\lambda$ is the $m$th singular value, and these values are arranged in descending order so that $1 > \lambda_1 > \cdots > \lambda_M > 0$.
A visual depiction of the association between the row and column variables can be made by simultaneously projecting the row principal coordinates and the column principal coordinates onto the same low-dimensional space, which optimally consists of $M$ dimensions; such a display is commonly referred to as a correspondence plot and typically consists of the first two dimensions (for ease of visualisation). The matrices of row and column principal coordinates are defined by
$$\mathbf{F} = \mathbf{D}_R^{-1/2}\mathbf{A}\mathbf{D}_\lambda, \qquad \mathbf{G} = \mathbf{D}_C^{-1/2}\mathbf{B}\mathbf{D}_\lambda,$$
respectively.
Since (3) shows that Pearson's chi-squared statistic is linearly related to the sample size, $n$, correspondence analysis uses $X^2/n$ as the measure of association, which is termed the total inertia of the contingency table. This measure is directly related to the principal coordinates and singular values, such that
$$\frac{X^2}{n} = \operatorname{trace}\left(\mathbf{D}_\lambda^2\right) = \operatorname{trace}\left(\mathbf{F}^T\mathbf{D}_R\mathbf{F}\right) = \operatorname{trace}\left(\mathbf{G}^T\mathbf{D}_C\mathbf{G}\right).$$
Therefore, points located at a great distance from the origin provide a visual indication of the importance that a category plays in the association structure of the variables, while the origin is the position of all of the row and column principal coordinates if there is complete independence between the variables.
We shall not provide a comprehensive account of all of the features, and related methods, of correspondence analysis. Instead, the interested reader is directed to Beh and Lombardo [16], for example, for more information.
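The classical decomposition just described can be sketched numerically. The following Python snippet is an illustrative sketch only (the paper's own software is the R function described in Appendix C, and the table below is hypothetical): it applies the SVD to the standardised residuals of a small table and verifies that the total inertia equals the sum of the squared singular values.

```python
import numpy as np

# Hypothetical 3 x 3 contingency table N (illustrative only)
N = np.array([[20.0, 10.0, 5.0],
              [10.0, 25.0, 15.0],
              [5.0, 10.0, 30.0]])
n = N.sum()
P = N / n                              # matrix of relative proportions
r = P.sum(axis=1)                      # row marginal proportions
c = P.sum(axis=0)                      # column marginal proportions

# SVD of the matrix of standardised (Pearson) residuals
Z = np.diag(1 / np.sqrt(r)) @ (P - np.outer(r, c)) @ np.diag(1 / np.sqrt(c))
A, lam, Bt = np.linalg.svd(Z)
M = min(N.shape) - 1                   # M = min(I, J) - 1
lam = lam[:M]

# Row and column principal coordinates, F and G
F = np.diag(1 / np.sqrt(r)) @ A[:, :M] * lam
G = np.diag(1 / np.sqrt(c)) @ Bt.T[:, :M] * lam

# Total inertia X^2/n equals the sum of the squared singular values
X2 = n * ((P - np.outer(r, c)) ** 2 / np.outer(r, c)).sum()
assert np.isclose((lam ** 2).sum(), X2 / n)
assert np.isclose(np.trace(F.T @ np.diag(r) @ F), X2 / n)
```

The two assertions confirm the trace identities stated above for this hypothetical table.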

3. On Studying the Symmetry of a Categorical Variable

Suppose we now wish to study the departure from complete symmetry of two categorical variables that share a similar structure. Let $S$ be the number of categories contained in each variable so that the contingency table is now of size $S \times S$. Then, a test of symmetry of the variable may be undertaken by defining the null hypothesis by
$$H_0: p_{ij} = p_{ji} \quad \text{for all } i, j.$$
Sometimes, a study of symmetry in a contingency table may be undertaken by assessing the marginal homogeneity of the table, where the null hypothesis is
$$H_0: p_{i\cdot} = p_{\cdot i} \quad \text{for all } i,$$
but we shall say very little on this issue in the following sections.
When assessing whether there is any evidence of symmetry between the row and column variable of a contingency table, the most appropriate choice of p ^ i j is
$$\hat{p}_{ij} = \frac{p_{ij} + p_{ji}}{2}; \qquad (4)$$
see, for example, Agresti [1] (p. 424) and Anderson [2] (p. 321). Substituting (4) into (2), and denoting the resulting statistic by $X_S^2$, yields
$$X_S^2 = \frac{n}{2}\sum_{i=1}^{S}\sum_{j=1}^{S} \frac{\left(p_{ij} - p_{ji}\right)^2}{p_{ij} + p_{ji}} \qquad (5)$$
$$= n\sum_{i>j}^{S} \frac{\left(p_{ij} - p_{ji}\right)^2}{p_{ij} + p_{ji}} \qquad (6)$$
and is the chi-squared statistic derived by Bowker [14] for testing the symmetry between a row and column variable of a contingency table; it has $S\left(S - 1\right)/2$ degrees of freedom. When $S = 2$, (6) simplifies to McNemar's [17] statistic for testing the symmetry of two cross-classified dichotomous variables.
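Bowker's statistic (6) is straightforward to compute directly from the cell counts, since $n\left(p_{ij} - p_{ji}\right)^2/\left(p_{ij} + p_{ji}\right) = \left(n_{ij} - n_{ji}\right)^2/\left(n_{ij} + n_{ji}\right)$. The sketch below (illustrative only; the tables are hypothetical) implements (6) and checks the McNemar special case for $S = 2$.

```python
import numpy as np

def bowker_statistic(N):
    """Return (X_S^2, df) for a square contingency table N, using (6):
    the sum over i > j of (n_ij - n_ji)^2 / (n_ij + n_ji)."""
    N = np.asarray(N, dtype=float)
    S = N.shape[0]
    X2, df = 0.0, S * (S - 1) // 2
    for i in range(S):
        for j in range(i):               # sum over pairs with i > j
            tot = N[i, j] + N[j, i]
            if tot > 0:
                X2 += (N[i, j] - N[j, i]) ** 2 / tot
    return X2, df

# McNemar special case (S = 2): X^2 = (n21 - n12)^2 / (n21 + n12)
X2, df = bowker_statistic([[30, 10], [25, 35]])
assert np.isclose(X2, (25 - 10) ** 2 / (25 + 10))
assert df == 1
```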
The simplicity of using (6) has gained wide appeal and was considered in the classic texts of Agresti [1] (p. 424), Bishop, Fienberg and Holland [18] (p. 283), Lancaster [19] (p. 236) and Plackett [20] (p. 59). However, Agresti [1] (p. 425) does point out that “it rarely fits well. When the marginal distributions differ substantially, it fits poorly”. The concern here about the poor fit when the margins of the contingency table differ substantially assumes that one is also interested in testing the independence between the categories. This is because when one assumes independence in the context of symmetry (4) becomes
$$\hat{p}_{ij} = \frac{p_{i\cdot}p_{\cdot j} + p_{j\cdot}p_{\cdot i}}{2}$$
and requires that $p_{i\cdot} = p_{\cdot i}$ for all $i = 1, 2, \ldots, S$, which we need not impose for testing symmetry. Perhaps what helps to clarify this point is that Lancaster [19] (p. 237) says that "more difficulties arise if it is desired to test the homogeneity of the margins since $p_{ij}$ may not be equal to $p_{i\cdot}p_{\cdot j}$". Since we are not concerned with testing for marginal homogeneity in this paper, this "difficulty" is of no concern to us. We are also not greatly concerned with the inferential aspects of the statistic since (6) is used as a numerical basis on which correspondence analysis lies; in doing so, we are assuming, or have a priori tested and confirmed, that there is a departure from symmetry and we wish only to visualise the potential sources of this departure.
One may also consider the log-likelihood ratio statistic
$$G_S^2 = 2n\sum_{i=1}^{I}\sum_{j=1}^{I} p_{ij}\ln\left(\frac{p_{ij}}{\hat{p}_{ij}}\right) = 2n\sum_{i \neq j} p_{ij}\ln\left(\frac{2p_{ij}}{p_{ij} + p_{ji}}\right)$$
as an alternative to $X_S^2$; see Bishop, Fienberg and Holland [18] (p. 283). We shall not discuss how this statistic can be used for studying departures from symmetry using correspondence analysis. However, the log-likelihood statistic, like Pearson's statistic of (3), is a special case of the Cressie–Read family of divergence statistics [21], and Beh and Lombardo [22] provide an overview of how one may perform correspondence analysis using this divergence statistic. Future research can certainly be undertaken to study the role of $G_S^2$ in correspondence analysis when studying the symmetry of a categorical variable.

4. On Bowker’s Residuals and Departures from Symmetry

Let $\mathbf{S}$ be the matrix of the Bowker residuals, where the $(i, j)$th element is
$$s_{ij} = \frac{1}{\sqrt{2}}\,\frac{p_{ij} - p_{ji}}{\sqrt{p_{ij} + p_{ji}}} \qquad (7)$$
so that Bowker's statistic, $X_S^2$, defined by (5), can be expressed as
$$X_S^2 = n\sum_{i=1}^{S}\sum_{j=1}^{S} s_{ij}^2 = n\,\operatorname{trace}\left(\mathbf{S}^T\mathbf{S}\right) = n\,\operatorname{trace}\left(\mathbf{S}\mathbf{S}^T\right). \qquad (8)$$
A feature of the matrix $\mathbf{S}$ (and $\mathbf{K}$ in (1)) is that it is an anti-symmetric, or skew-symmetric, matrix so that $\mathbf{S}^T = -\mathbf{S}$. Therefore, the left and right singular vectors, and the singular values, of the matrix of the Bowker residuals, $\mathbf{S}$, can be obtained from the eigen-decomposition of $\mathbf{S}^T\mathbf{S}$ or, equivalently, of $-\mathbf{S}^2$. If $S$ is odd then there will always be a zero eigen-value and $S - 1$ positive eigen-values; see Ward and Gray [23] and Gower [12] (p. 113). Ward and Gray [23] also present an algorithm that can perform the necessary eigen-decomposition, and in Appendix A and Appendix B we provide a derivation that leads to closed-form solutions of the eigen-values of a $2 \times 2$ and $3 \times 3$ matrix of the Bowker residuals. Constantine and Gower [11] note that, since $\mathbf{S}^T\mathbf{S} = -\mathbf{S}^2$, there will be pairs of identical eigen-values. This is a feature that is also demonstrated in the appendices and described in Section 6 and Section 7.
Suppose we denote the $(i, j)$th element of $\mathbf{Y}$ and $\mathbf{K}$ from (1) by $y_{ij}$ and $k_{ij}$, respectively. Then, the $(i, j)$th Bowker residual (see (7)) can be alternatively expressed as
$$s_{ij} = \frac{1}{2}\left(p_{ij} - p_{ji}\right) \div \sqrt{\frac{1}{2}\left(p_{ij} + p_{ji}\right)} = \frac{k_{ij}}{\sqrt{y_{ij}}}. \qquad (9)$$
Therefore, Bowker's residuals do assess, for each cell of the contingency table, where departures from symmetry exist, but they do so relative to the amount of symmetry that exists between the two variables. This is in contrast to Greenacre's [13] approach, which considers a residual where $k_{ij}$ is divided by the mean of the row and column marginal proportions; for the $i$th such proportion this is $\left(p_{i\cdot} + p_{\cdot i}\right)/2$ for $i = 1, 2, \ldots, S$.
In addition, since the numerator of $\mathbf{S}$ can be expressed in terms of $\mathbf{K}$, Bowker's residuals are centred at zero under perfect symmetry (a property shared with Greenacre's [13] approach). This is important since it means that the principal coordinates that we derive in Section 5.3 are centred at the origin of the optimal correspondence plot.
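The identity (9) and the skew-symmetry of $\mathbf{S}$ can be checked numerically. The sketch below (illustrative only; the table is hypothetical and is not drawn from the paper) constructs $\mathbf{Y}$, $\mathbf{K}$ and $\mathbf{S}$, verifies that $s_{ij} = k_{ij}/\sqrt{y_{ij}}$, that $n\,\operatorname{trace}\left(\mathbf{S}^T\mathbf{S}\right)$ recovers Bowker's statistic, and that the singular values of a $3 \times 3$ skew-symmetric matrix come as an identical pair plus a zero.

```python
import numpy as np

# Hypothetical 3 x 3 table (illustrative only)
N = np.array([[100.0, 20.0, 35.0],
              [40.0, 80.0, 15.0],
              [30.0, 25.0, 90.0]])
n = N.sum()
P = N / n

Y = (P + P.T) / 2                        # symmetric part of P
K = (P - P.T) / 2                        # skew-symmetric part of P
S = (P - P.T) / np.sqrt(2 * (P + P.T))   # matrix of Bowker residuals (7)

assert np.allclose(S, -S.T)              # S is skew-symmetric
assert np.allclose(S, K / np.sqrt(Y))    # the identity s_ij = k_ij / sqrt(y_ij)

# n * trace(S'S) recovers Bowker's statistic (8)
X2 = sum((N[i, j] - N[j, i]) ** 2 / (N[i, j] + N[j, i])
         for i in range(3) for j in range(i))
assert np.isclose(n * np.trace(S.T @ S), X2)

# Paired singular values, with one zero value since S (= 3) is odd
lam = np.linalg.svd(S, compute_uv=False)
assert np.isclose(lam[0], lam[1]) and np.isclose(lam[2], 0.0)
```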

5. Correspondence Analysis and Bowker’s Statistic

5.1. On the SVD of the Matrix of the Bowker Residuals

Visually detecting departures from symmetry can be undertaken using correspondence analysis. This can be achieved by first applying an SVD to $\mathbf{S}$ such that
$$\mathbf{S} = \mathbf{A}\mathbf{D}_\lambda\mathbf{B}^T. \qquad (10)$$
Here, the $S \times M$ column matrix $\mathbf{A}$ contains the left singular vectors of $\mathbf{S}$ and has the property $\mathbf{A}^T\mathbf{A} = \mathbf{I}_M$. Similarly, $\mathbf{B}$ is an $S \times M$ column matrix of the right singular vectors of $\mathbf{S}$ such that $\mathbf{B}^T\mathbf{B} = \mathbf{I}_M$. While $M = \min\left(S, S\right) - 1 = S - 1$ for the classical approach to correspondence analysis, this is not the case for the analysis of the matrix of the Bowker residuals. Since $\mathbf{S}$ is a skew-symmetric matrix, there will be $S$ singular values when $S$ is even and $S - 1$ such values when $S$ is odd. Therefore,
$$M = \begin{cases} S, & \text{if } S \text{ is even} \\ S - 1, & \text{if } S \text{ is odd}. \end{cases}$$
Gower [12] showed that, since $\mathbf{S}$ is skew-symmetric, the SVD of the matrix, (10), is equivalent to
$$\mathbf{S} = \mathbf{A}\mathbf{D}_\lambda\mathbf{J}_M\mathbf{A}^T \qquad (11)$$
so that
$$\mathbf{B} = \mathbf{A}\mathbf{J}_M^T. \qquad (12)$$
Here $\mathbf{J}_M$ is an $M \times M$ block diagonal and orthogonal skew-symmetric matrix so that
$$\mathbf{J}_M^T\mathbf{J}_M = \mathbf{J}_M\mathbf{J}_M^T = \mathbf{I}_M \qquad (13)$$
where $\mathbf{I}_M$ is an $M \times M$ identity matrix. When $M = 2$,
$$\mathbf{J}_2 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$$
while
$$\mathbf{J}_4 = \begin{pmatrix} 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{pmatrix}.$$
If $S$ is odd then the $(S, S)$th element is 1 and the rest of its row/column consists of zeros, so that
$$\mathbf{J}_S = \begin{pmatrix} \mathbf{J}_{S-1} & \mathbf{0}_{S-1} \\ \mathbf{0}_{S-1}^T & 1 \end{pmatrix}$$
where $\mathbf{0}_{S-1}$ is a zero vector of length $S - 1$. For example,
$$\mathbf{J}_3 = \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
The $M \times M$ diagonal matrix $\mathbf{D}_\lambda$ contains the singular values of $\mathbf{S}$, and these are arranged in consecutive pairs of values so that $1 > \lambda_1 = \lambda_2 > \lambda_3 = \lambda_4 > \cdots > 0$. This feature will be discussed in the next section, but it is worth noting that the SVD presented in (11) makes use of the Murnaghan canonical form of $\mathbf{S}$; see Murnaghan and Wintner [24] and Paardekooper [25] for further information on this decomposition.
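The matrix $\mathbf{J}_M$ can be generated programmatically. The sketch below (illustrative only; sign conventions for $\mathbf{J}_M$ differ across authors, and the one assumed here matches the displays above) builds $\mathbf{J}_M$ for even and odd $M$ and verifies the orthogonality property (13).

```python
import numpy as np

def make_J(M):
    """J_M: 2 x 2 skew blocks [[0, 1], [-1, 0]] down the diagonal, with a
    trailing 1 in the (M, M) position when M is odd (assumed convention)."""
    J = np.zeros((M, M))
    for k in range(0, M - 1, 2):
        J[k, k + 1] = 1.0
        J[k + 1, k] = -1.0
    if M % 2 == 1:
        J[M - 1, M - 1] = 1.0
    return J

J2, J3 = make_J(2), make_J(3)
assert np.allclose(J2 @ J2.T, np.eye(2))   # orthogonality, as in (13)
assert np.allclose(J3 @ J3.T, np.eye(3))
assert np.allclose(J2, -J2.T)              # J_2 is skew-symmetric
```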

5.2. The Total Inertia

Quantifying the departure from symmetry can be undertaken by calculating the total inertia of $\mathbf{S}$, which is just Bowker's statistic divided by the sample size, $X_S^2/n$. Since $\mathbf{A}^T\mathbf{A} = \mathbf{B}^T\mathbf{B} = \mathbf{I}_M$, using (8) and (10), the total inertia can be expressed as the sum-of-squares of the singular values such that
$$\phi_S^2 = \frac{X_S^2}{n} = \operatorname{trace}\left(\left(\mathbf{A}\mathbf{D}_\lambda\mathbf{B}^T\right)^T\left(\mathbf{A}\mathbf{D}_\lambda\mathbf{B}^T\right)\right) = \operatorname{trace}\left(\mathbf{B}\mathbf{D}_\lambda^2\mathbf{B}^T\right) = \operatorname{trace}\left(\mathbf{D}_\lambda^2\right).$$
When visualising departures from symmetry, one may construct a correspondence plot consisting of at most $M$ dimensions, where the principal inertia (or weight) of the $m$th dimension is $\lambda_m^2$. Therefore, expressing the total inertia in terms of the matrix $\mathbf{D}_\lambda^2$ (or the elements $\lambda_m^2$) provides a means of determining, for each dimension, the percentage of the total departure from symmetry that exists between the row and column variables. Such a percentage is calculated by
$$100 \times \frac{\lambda_m^2}{\phi_S^2}.$$
Since $\lambda_1 = \lambda_2$ is the largest pair of singular values, the first and second dimensions visually provide the best (and an equivalent quality) depiction of any departure from symmetry that exists in $\mathbf{N}$. For example, when $S = 5$, say, the first and second dimensions will always provide the same (and most) amount of information to the visual display, while the third and fourth dimensions will be equally weighted and display less of the departure. These four dimensions will provide an optimal display of the departure from symmetry since the fifth dimension will have a zero principal inertia value.
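This pairing of the principal inertias can be seen numerically. The sketch below (illustrative only; an arbitrary random skew-symmetric matrix stands in for $\mathbf{S}$) shows that, for $S = 5$, dimensions one and two carry identical percentages of the total inertia, as do dimensions three and four, while the fifth singular value is zero.

```python
import numpy as np

# An arbitrary 5 x 5 skew-symmetric matrix standing in for S (illustrative)
rng = np.random.default_rng(0)
X = rng.random((5, 5))
S = (X - X.T) / 2

lam = np.linalg.svd(S, compute_uv=False)
phi2 = (lam ** 2).sum()                # total inertia
pct = 100 * lam ** 2 / phi2            # percentage of inertia per dimension

assert np.isclose(pct[0], pct[1])      # dimensions 1 and 2 carry equal weight
assert np.isclose(pct[2], pct[3])      # as do dimensions 3 and 4
assert np.isclose(lam[4], 0.0)         # S = 5 is odd: one zero singular value
```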

5.3. The Principal Coordinates

When calculating Bowker's statistic, only the cell proportions $p_{ij}$ and $p_{ji}$ are used, not the marginal proportions; see (5) and, equivalently, (6). The cloud-of-points generated for the rows and columns can be obtained by aggregating these cell proportions across the row and column variables, yielding two spaces that have the same metric; this metric being $\hat{\mathbf{D}} = \left(\mathbf{D}_R + \mathbf{D}_C\right)/2$. Such a property is consistent with, but slightly different from, the traditional approach to correspondence analysis, which assesses departures from independence, not symmetry, and so assumes that the row space and column space have a metric based on the aggregation of $p_{ij}$ across the two variables. That is, the row space has the metric $\mathbf{D}_R$ while the column metric is $\mathbf{D}_C$. Thus, the metric $\hat{\mathbf{D}}$ is the average of the row and column metrics and is consistent with the metric adopted by Greenacre [13].
To obtain a visual depiction of the departures from symmetry, one may simultaneously plot the row and column principal coordinates, which are defined in terms of the above matrices by
$$\mathbf{F} = \hat{\mathbf{D}}^{-1/2}\mathbf{A}\mathbf{D}_\lambda \qquad (14)$$
$$\mathbf{G} = \hat{\mathbf{D}}^{-1/2}\mathbf{B}\mathbf{D}_\lambda \qquad (15)$$
respectively.

5.4. On the Origin and Transition Formulae

The total inertia of the two-way contingency table can be expressed in terms of the principal coordinates defined by (14) and (15). For example, for the row coordinates,
$$\operatorname{trace}\left(\mathbf{F}^T\hat{\mathbf{D}}\mathbf{F}\right) = \operatorname{trace}\left(\left(\hat{\mathbf{D}}^{-1/2}\mathbf{A}\mathbf{D}_\lambda\right)^T\hat{\mathbf{D}}\left(\hat{\mathbf{D}}^{-1/2}\mathbf{A}\mathbf{D}_\lambda\right)\right) = \operatorname{trace}\left(\mathbf{D}_\lambda\mathbf{A}^T\mathbf{A}\mathbf{D}_\lambda\right) = \operatorname{trace}\left(\mathbf{D}_\lambda^2\right) = \frac{X_S^2}{n}.$$
This shows that the origin coincides with the location of all row coordinates when there is perfect symmetry between variables of the contingency table. By following a similar derivation, it can also be shown that
$$\frac{X_S^2}{n} = \operatorname{trace}\left(\mathbf{G}^T\hat{\mathbf{D}}\mathbf{G}\right).$$
These expressions of the total inertia also show that points located far from the origin identify categories that deviate from what is expected if there was perfect symmetry.
Alternative expressions of the matrices of row and column principal coordinates can also be obtained. For example, suppose we consider (14). Then, since post-multiplying both sides of (10) by $\mathbf{B}$ gives $\mathbf{S}\mathbf{B} = \mathbf{A}\mathbf{D}_\lambda$, (14) simplifies to
$$\mathbf{F} = \hat{\mathbf{D}}^{-1/2}\mathbf{S}\mathbf{B}.$$
Thus, the $(i, m)$th element of $\mathbf{F}$ is
$$f_{im} = \sqrt{\frac{2}{p_{i\cdot} + p_{\cdot i}}}\sum_{j=1}^{S} s_{ij}b_{jm} = \sqrt{\frac{2}{p_{i\cdot} + p_{\cdot i}}}\sum_{j=1}^{S} \frac{1}{\sqrt{2}}\,\frac{p_{ij} - p_{ji}}{\sqrt{p_{ij} + p_{ji}}}\,b_{jm} = \sum_{j=1}^{S} \frac{p_{ij} - p_{ji}}{\sqrt{\left(p_{i\cdot} + p_{\cdot i}\right)\left(p_{ij} + p_{ji}\right)}}\,b_{jm}.$$
Therefore, a principal coordinate will lie at the origin of the correspondence plot if there is perfect symmetry between the $i$th row and $i$th column; that is, when $p_{ij} = p_{ji}$ for all $j = 1, 2, \ldots, S$, thus verifying the expression of the total inertia in terms of the matrix of row principal coordinates. Any departure from symmetry of the $i$th row from the $i$th column will result in the $i$th row principal coordinate moving away from the origin. This implies that the $i$th column category will also move away from the origin, but it will do so in a different direction from the row category. Such a feature can be verified by showing that the row and column principal coordinates are linked through the following transition formulae. By substituting (12) into (15), the matrix of column principal coordinates can be alternatively expressed as
$$\mathbf{G} = \hat{\mathbf{D}}^{-1/2}\mathbf{A}\mathbf{J}_M^T\mathbf{D}_\lambda = \hat{\mathbf{D}}^{-1/2}\mathbf{A}\mathbf{D}_\lambda\mathbf{J}_M^T.$$
Thus, the matrix of column principal coordinates can be expressed in terms of the matrix of row principal coordinates such that
$$\mathbf{G} = \mathbf{F}\mathbf{J}_M^T. \qquad (16)$$
Similarly, post-multiplying both sides of (16) by J M and using (13) leads to
$$\mathbf{F} = \mathbf{G}\mathbf{J}_M \qquad (17)$$
showing how the matrix of row principal coordinates can be expressed in terms of the matrix of column principal coordinates. Thus, for the $i$th row and column categories, $\left(f_{i1}, f_{i2}\right) = \left(-g_{i2}, g_{i1}\right)$, showing that departures from symmetry between these two categories will position their principal coordinates on opposite sides of the correspondence plot, unless there is perfect symmetry, in which case both will lie at the origin.
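The coordinate definitions (14) and (15) and the total-inertia identities can be verified numerically. The sketch below is illustrative only (the table is hypothetical, not from the paper); note that a numerical SVD fixes the signs of the singular vectors arbitrarily, so rather than comparing against a particular $\mathbf{J}_M$, the transition formulae are checked through their consequence that each row and column category lies at the same distance from the origin.

```python
import numpy as np

# Hypothetical 4 x 4 table (illustrative only)
N = np.array([[55.0, 12.0, 8.0, 20.0],
              [30.0, 60.0, 10.0, 5.0],
              [14.0, 25.0, 70.0, 9.0],
              [10.0, 15.0, 22.0, 80.0]])
P = N / N.sum()
S = (P - P.T) / np.sqrt(2 * (P + P.T))       # Bowker residuals (7)

# Averaged metric D-hat = (D_R + D_C) / 2
d = (P.sum(axis=1) + P.sum(axis=0)) / 2
Dhat = np.diag(d)

A, lam, Bt = np.linalg.svd(S)
F = np.diag(1 / np.sqrt(d)) @ A * lam        # row principal coordinates (14)
G = np.diag(1 / np.sqrt(d)) @ Bt.T * lam     # column principal coordinates (15)

# Both sets of coordinates reproduce the total inertia X_S^2 / n
phi2 = np.trace(S.T @ S)
assert np.isclose(np.trace(F.T @ Dhat @ F), phi2)
assert np.isclose(np.trace(G.T @ Dhat @ G), phi2)

# Each row/column category pair is equidistant from the origin, as implied
# by the transition formulae (16) and (17)
assert np.allclose(np.linalg.norm(F, axis=1), np.linalg.norm(G, axis=1))
```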

5.5. Intra-Variable Distances

Suppose we are interested in the distance between the $i$th and $i'$th row principal coordinates in a correspondence plot. It can be shown that the squared Euclidean distance between $f_{im}$ and $f_{i'm}$ is
$$d_I^2\left(i, i'\right) = \sum_{j=1}^{S} c_{ij}\left(\frac{p_{ij} + p_{ji}}{2} - \frac{p_{i'j} + p_{ji'}}{2}\right)^2 \qquad (18)$$
$$= \sum_{j=1}^{S} c_{ij}\left(\frac{p_{ij} - p_{i'j}}{2} + \frac{p_{ji} - p_{ji'}}{2}\right)^2 \qquad (19)$$
where
$$c_{ij} = \frac{p_{j\cdot} + p_{\cdot j}}{\left(p_{ij} + p_{ji}\right)\left(p_{i'j} + p_{ji'}\right)}. \qquad (20)$$
Thus, if $p_{ij} + p_{ji} = p_{i'j} + p_{ji'}$ or, equivalently, $\hat{p}_{ij} = \hat{p}_{i'j}$ for all $j = 1, 2, \ldots, S$, then the $i$th and $i'$th row principal coordinates will be located in the same position in the optimal correspondence plot. Such a feature arises when the expected cell frequency (under perfect symmetry) is the same for the $i$th and $i'$th rows. Alternatively, $f_{im} = f_{i'm}$ for all $m = 1, 2, \ldots, S$ when $p_{ij} - p_{ji} = p_{i'j} - p_{ji'}$. This feature does not imply that there must be perfect symmetry between the $i$th and $i'$th row categories, but it does arise when any departure from perfect symmetry is the same for these categories.

6. Example 1: Artificial Data

6.1. The Data

To examine how to perform a correspondence analysis on a contingency table using Bowker's statistic (6), suppose we consider the artificial data presented in Table 1. Here, $C \geq 0$ in the $(2, 1)$th cell is a non-negative integer ensuring that Table 1 maintains the features of a contingency table. For this table, Bowker's statistic is
$$X_S^2 = \sum_{i>j}^{4} \frac{\left(n_{ij} - n_{ji}\right)^2}{n_{ij} + n_{ji}} = \frac{\left(n_{21} - n_{12}\right)^2}{n_{21} + n_{12}} + \frac{\left(n_{31} - n_{13}\right)^2}{n_{31} + n_{13}} + \frac{\left(n_{41} - n_{14}\right)^2}{n_{41} + n_{14}} + \frac{\left(n_{32} - n_{23}\right)^2}{n_{32} + n_{23}} + \frac{\left(n_{42} - n_{24}\right)^2}{n_{42} + n_{24}} + \frac{\left(n_{43} - n_{34}\right)^2}{n_{43} + n_{34}}$$
$$= \frac{C^2}{40 + C} \qquad (21)$$
and is a chi-squared random variable with $4\left(4 - 1\right)/2 = 6$ degrees of freedom. Therefore, (21) shows that when $C = 0$, Bowker's chi-squared statistic is zero and Table 1 is perfectly symmetric.

6.2. Preliminary Examination of the Departure from Symmetry

Departures from symmetry for Table 1 can be assessed by determining the minimum value of $C$ so that $X_S^2 > \chi_\alpha^2\left(6\right)$. That is, when
$$\frac{C^2}{40 + C} > \chi_\alpha^2\left(6\right).$$
From this result, one may obtain the following quadratic inequality (in terms of $C$),
$$C^2 - \chi_\alpha^2\left(6\right) \times C - 40\chi_\alpha^2\left(6\right) > 0,$$
which only has one valid solution,
$$C = \frac{\chi_\alpha^2\left(6\right) + \sqrt{\left(\chi_\alpha^2\left(6\right)\right)^2 + 160\chi_\alpha^2\left(6\right)}}{2},$$
that ensures that $C$ is non-negative. For example, when testing symmetry using $\alpha = 0.05$, $\chi_{0.05}^2\left(6\right) = 12.59$ so that $C$ must exceed 29.59 for the departure from symmetry to be statistically significant. For practical purposes, $C$ must be an integer of at least 30 to ensure that the $(2, 1)$th cell remains a positive integer. Hence, this cell frequency must be at least 50 for there to be a statistically significant lack of symmetry in Table 1. Thus, to demonstrate how the correspondence analysis technique described in Section 5 can be applied to Table 1, we shall consider the following values of $C$: 50, 75, 100 and 150. These values of $C$ give a Bowker statistic of 27.78, 48.91, 71.43 and 118.42, respectively, all of which have a $p$-value that is less than 0.001.

6.3. Features of Correspondence Analysis & Symmetry

Interestingly, for Table 1, closed-form expressions exist for many of the features that come from the correspondence analysis of a contingency table when examining departures from symmetry. To show this, suppose we consider again the matrix of the Bowker residuals, $\mathbf{S}$. For Table 1, this matrix is
$$\mathbf{S} = \begin{pmatrix} 0 & -\dfrac{C}{\sqrt{2n\left(40 + C\right)}} & 0 & 0 \\ \dfrac{C}{\sqrt{2n\left(40 + C\right)}} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$
The structure of this $4 \times 4$ matrix is identical to that of a $2 \times 2$ matrix when removing the vectors of zeros in the last two rows and columns of $\mathbf{S}$. In Appendix A we show that when $\mathbf{S}$ is of size $2 \times 2$, applying an SVD to it yields the singular values
$$\lambda_1 = \lambda_2 = \frac{C}{\sqrt{2n\left(40 + C\right)}} = s_{21} = -s_{12}$$
thereby producing two non-trivial, and identical, singular values; the remaining singular values of the $4 \times 4$ matrix are zero. Thus, the sum-of-squares of these singular values gives, for Table 1, a total inertia of
$$\phi_S^2 = \lambda_1^2 + \lambda_2^2 = \frac{C^2}{n\left(40 + C\right)}.$$
Note that this is equivalent to (21) divided by the sample size $n$. Since $n = 680 + C$, the total inertia can be expressed in terms of $C$ by
$$\phi_S^2\left(C\right) = \frac{C^2}{\left(680 + C\right)\left(40 + C\right)} \qquad (22)$$
so that, for Table 1, $0 \leq \phi_S^2\left(C\right) < 1$ for $C \geq 0$.
Since there are only two, equivalent, non-trivial singular values, a two-dimensional correspondence plot will produce an optimal display of the departure from symmetry of Table 1 regardless of the value of $C$; the principal inertia of these two dimensions is $\lambda_1^2$ each. Thus, both dimensions of the optimal display will visually describe exactly half of the departure from symmetry, which is quantified by $\phi_S^2\left(C\right)$. Note that from (22) the first two singular values of $\mathbf{S}$ from Table 1 can be expressed as
$$\lambda_1 = \lambda_2 = \frac{C}{\sqrt{2\left(680 + C\right)\left(40 + C\right)}}$$
so that
$$\mathbf{D}_\lambda = \frac{C}{\sqrt{2\left(680 + C\right)\left(40 + C\right)}}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
while the first two non-trivial left and right singular vectors of $\mathbf{S}$ are
$$\mathbf{A} = \begin{pmatrix} 0 & -1 \\ -1 & 0 \\ 0 & 0 \\ 0 & 0 \end{pmatrix} \quad \text{and} \quad \mathbf{B} = \mathbf{A}\mathbf{J}_2^T = \begin{pmatrix} -1 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{pmatrix},$$
respectively. The matrix D ^ is
D ^ = 200 + C 2 680 + C 0 0 0 0 400 + C 2 680 + C 0 0 0 0 150 680 + C 0 0 0 0 230 680 + C
so that
D ^ 1 / 2 = 680 + C 2 200 + C 0 0 0 0 2 400 + C 0 0 0 0 1 150 0 0 0 0 1 230 .
Therefore, after some matrix manipulation, the matrix containing the row principal coordinates in the optimal two-dimensional correspondence plot can be expressed in terms of $C$ and some constants (dependent on the cell frequencies of Table 1) such that
$$\mathbf{F} = \frac{C}{\sqrt{40 + C}}\begin{pmatrix} 0 & -\dfrac{1}{\sqrt{200 + C}} \\ -\dfrac{1}{\sqrt{400 + C}} & 0 \\ 0 & 0 \\ 0 & 0 \end{pmatrix}. \qquad (23)$$
By following a similar derivation, the matrix of column principal coordinates is
$$\mathbf{G} = \frac{C}{\sqrt{40 + C}}\begin{pmatrix} -\dfrac{1}{\sqrt{200 + C}} & 0 \\ 0 & \dfrac{1}{\sqrt{400 + C}} \\ 0 & 0 \\ 0 & 0 \end{pmatrix}. \qquad (24)$$
Thus, (23) and (24) satisfy (16) and (17). The zeros in the third and fourth rows of $\mathbf{F}$ and $\mathbf{G}$ arise because there is perfect symmetry between the third and fourth rows and columns of Table 1. So, the positions of these points in the correspondence plot lie at the origin, irrespective of the choice of $C \geq 0$. When $C = 0$, all principal coordinates for the row and column categories will lie at the origin since Table 1 exhibits perfect symmetry. Therefore, changes in $C$ will lead to changes in the configuration depicted in the correspondence plot. For example, suppose $C = 50$ so that, for Table 1, $n_{21} = 70$. Then, the key features of the correspondence analysis, when an examination of the departures from symmetry is being made using Bowker's statistic, can be simply calculated as follows:
$$X_S^2 = \frac{50^2}{40 + 50} = 27.778$$
$$\lambda_1 = \lambda_2 = \frac{50}{\sqrt{2\left(680 + 50\right)\left(40 + 50\right)}} = 0.138$$
$$f_{12} = -\frac{50}{\sqrt{\left(40 + 50\right)\left(200 + 50\right)}} = -0.333$$
$$f_{21} = -\frac{50}{\sqrt{\left(40 + 50\right)\left(400 + 50\right)}} = -0.248$$
$$g_{11} = -\frac{50}{\sqrt{\left(40 + 50\right)\left(200 + 50\right)}} = -0.333$$
$$g_{22} = \frac{50}{\sqrt{\left(40 + 50\right)\left(400 + 50\right)}} = 0.248.$$
A summary of these quantities for C = 50 , 75 , 100 and 150 is given in Table 2.
Supplementing the numerical summaries that appear in Table 2 are the two-dimensional correspondence plots of Figure 1, Figure 2, Figure 3 and Figure 4. These figures, and their accompanying numerical features, can be obtained using the R function bowkerca.exe() that is described in Appendix C of this paper. Figure 1 shows such a plot for $C = 50$, Figure 2 is the correspondence plot for $C = 75$, Figure 3 is the plot for $C = 100$, while the two-dimensional correspondence plot of Table 1 when $C = 150$ is given by Figure 4. Based on the statements we made above, it should be of no surprise to see that, in these four plots, the two dimensions each visually describe exactly half of any departure from symmetry that exists between the variables of Table 1. Thus, each of the four correspondence plots is optimal in its visual depiction of such departures, since $\lambda_1$ and $\lambda_2$ are equivalent for all values of $C$. These correspondence plots also show that C3, C4, R3 and R4 all share the same position at the origin since there is no departure from perfect symmetry for these categories. However, the positions of R1, R2, C1 and C2 lie further from the origin as $C$ increases since these categories increasingly deviate from perfect symmetry as $C$ increases.
While R1 and C2 themselves involve no change in cell frequency as C varies, since symmetry is assessed by considering the difference between the (2, 1)th and (1, 2)th cell frequencies (or proportions), any departure from symmetry does impact their relative position from each other in the correspondence plot. This can be observed by noting that, along the second dimension,
$$\frac{f_{12}}{g_{22}} = -\sqrt{\frac{400 + C}{200 + C}}$$
is not independent of C. For C > 0, f12 and g22 will always remain on opposite sides of the first dimension, with f12 being at most √2 times further from the origin than g22; as C → ∞, f12/g22 → −1.
We can also gain an understanding of how the positions of R1 and C1, say, in the optimal (two-dimensional) correspondence plot compare for C > 0. Since f12 < 0 and g11 < 0, R1 will lie along the second dimension, while C1 will lie along the first dimension, and their distance from each other increases as C increases; note that we are not interpreting this row/column proximity in terms of a quantifiable distance measure. However, we can quantify how the relative positions of R1 and C1 change by noting that the ratio of these two coordinates is f12/g11 = 1 for all values of C. Therefore, R1 and C1 will move the same number of units away from the origin as C increases.

7. Example 2 on the Purchase of Decaffeinated Coffee

We now focus our attention on a 5 × 5 contingency table considered by Agresti [26] (Table 8.5), whose original data came from Grover and Srinivasan [15]; see Table 3. For these data, 541 individuals were surveyed about their purchase of five brands of decaffeinated coffee. Each participant was asked to record the brand they bought on their first and subsequent purchase. If every participant of the study had bought the same brand of coffee on their first and second purchase, then the contingency table would exhibit perfect symmetry. However, this was not the case, and so one may investigate where the departures from symmetry lie. In doing so, one can identify brands that had a similar purchasing pattern on the first and second purchase and those that did not.
A test of symmetry can be performed and doing so yields a Bowker’s statistic of 20.412. With 5(5 − 1)/2 = 10 degrees of freedom, this statistic has a p-value of 0.026, showing that there are departures from symmetry present in the data; see also the output in Appendix C. An evaluation of where the departures from symmetry lie in Table 3 can be made by observing the skew-symmetric matrix S, which is
$$S = \begin{pmatrix} 0 & 0.048 & 0.105 & 0.008 & 0.000 \\ -0.048 & 0 & 0 & -0.061 & 0.042 \\ -0.105 & 0 & 0 & 0 & 0 \\ -0.008 & 0.061 & 0 & 0 & 0 \\ 0.000 & -0.042 & 0 & 0 & 0 \end{pmatrix}.$$
Observing the relative sizes of the s_ij values of this matrix shows that the greatest source of departure from symmetry appears to be for the coffee brand “High Point” while “Brim” is the coffee brand that deviates least from symmetry (although not perfectly). A visual depiction of the departures from symmetry that are present in Table 3 can be made by performing the correspondence analysis approach described above. Appendix C shows how the R function bowkerca.exe() can be used to perform this analysis on Table 3. This analysis gives the following two pairs of non-trivial singular values
$$\lambda_1 = \lambda_2 = 0.1215 \quad \text{and} \quad \lambda_3 = \lambda_4 = 0.0641$$
so that their sum-of-squares gives the total inertia
$$\frac{X_S^2}{n} = 0.1215^2 + 0.1215^2 + 0.0641^2 + 0.0641^2 = 0.0377.$$
A visual depiction of the departure from symmetry is given by Figure 5. The quality of this two-dimensional correspondence plot is excellent, accounting for
$$100 \times \frac{0.1215^2 + 0.1215^2}{0.0377} = 78.242\%$$
of the departure from symmetry that exists in Table 3. The row and column principal coordinates depicted in Figure 5 are
$$F = \begin{pmatrix} 0 & -0.213 \\ 0.186 & 0.016 \\ 0.155 & 0 \\ 0.044 & -0.140 \\ -0.007 & 0.075 \end{pmatrix}, \qquad G = \begin{pmatrix} -0.213 & 0 \\ 0.016 & -0.186 \\ 0 & -0.155 \\ -0.140 & -0.044 \\ 0.075 & 0.007 \end{pmatrix},$$
respectively, and satisfy (16) and (17).
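As an independent numerical cross-check of the singular values and inertia quoted above, the following minimal sketch in Python with numpy (an alternative to the paper's R function of Appendix C) repeats the calculation for Table 3:

```python
import numpy as np

# Table 3: brand on first purchase (rows) by brand on second purchase (columns)
N = np.array([[93., 17.,  44.,  7., 10.],
              [ 9., 46.,  11.,  0.,  9.],
              [17., 11., 155.,  9., 12.],
              [ 6.,  4.,   9., 15.,  2.],
              [10.,  4.,  12.,  2., 27.]])
n = N.sum()                                          # 541 classifications
P = N / n
S = (P - (P + P.T) / 2) / np.sqrt((P + P.T) / 2)     # Bowker residuals of (7)
lam = np.linalg.svd(S, compute_uv=False)
print(np.round(lam, 4))                  # two equal pairs of singular values, plus a zero
print(round(n * (lam ** 2).sum(), 3))    # Bowker's X^2: 20.412
```

Because S is skew-symmetric, its singular values arrive in identical pairs with one zero value for this odd-sized table, and n times the sum of their squares recovers Bowker’s statistic.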
The following points can be made from the configuration in Figure 5 regarding departures from symmetry in Table 3, keeping in mind that this correspondence plot captures slightly more than three-quarters of those departures:
  • the purchase of the five coffee brands is different across the first and second purchases and so reflects the departure from symmetry that Bowker’s statistic shows,
  • the greatest departure from symmetry is for the coffee brand “High Point” since “HP” and “hp” lie further from the origin than any of the four remaining brands. Thus, it is this brand that has undergone the greatest difference in purchasing preference over the two time periods,
  • the coffee brands are ordered as follows based on the greatest to least departure from perfect symmetry: “High Point”, “Taster’s Choice”, ”Sanka”, “Nescafé” and “Brim”,
  • therefore, “Brim” is the coffee brand that has the most similar purchasing pattern across the two time periods when the brands were purchased.  
Furthermore, Figure 5 shows that
  • the purchasing preferences of the brands “Sanka” and “Taster’s Choice” are very similar on their first purchase as well as on their second purchase. This can be seen because of the close proximity of “sa” and “tc” on the left of the plot, and “SA” and “TC” on the right of the plot,
  • the purchasing preferences of the brands “High Point” and “Nescafé” are similar (although not as similar as “SA” and “TC”) within each of the two purchases.

8. Discussion

When studying departures from symmetry between the categorical variables of a two-way contingency table there are many different techniques that can be considered. A common thread amongst many of them (especially over the past few decades) has been to partition the contingency table into a symmetric part ( Y ) and an asymmetric, or skew-symmetric, part ( K ) as (1) shows. While such a partition has appeared in the correspondence analysis literature, to the best of our knowledge the above technique is the first to formally link a meaningful measure of asymmetry to the visualisation of departures from symmetry. Specifically, this paper has shown how Bowker’s statistic [14] plays a pivotal role in quantifying such departures in the context of correspondence analysis. Importantly, we also showed that by using Bowker’s statistic, we are able to capture departures from perfect symmetry relative to the amount of symmetry that lies between the variables.
In preparing this paper, we considered metrics that differ from those we described above. Like Greenacre [13], we adopted a metric involving the mean row-column marginal proportion. However, since Bowker’s statistic is independent of the row and column marginal information, consideration was given to P̂ = (P + Pᵀ)/2. While using such a metric does not lead to the exact total inertia, it does provide an excellent approximation to it in some cases. It also provides additional features not available with the metric adopted above, and so this is an interesting avenue to pursue in the future.
There are further extensions of the technique described above that can be considered at a later time. One such extension, and one that we raised at the end of Section 3, is to investigate the role of the Cressie–Read family of divergence statistics [21] for visualising departures from perfect symmetry using correspondence analysis. Doing so then will mean that one can consider “symmetry” (as opposed to “independence”) versions of the special cases of this family of divergence statistics, such as G S 2 , the Freeman–Tukey statistic [27] and other association measures, as alternatives to Bowker’s X 2 statistic. One can then consider measures of accuracy such as those described by Hubert and Arabie [28] for assessing how different members of this family compare.
Another possible avenue for future research is to extend the above technique in the case where one categorical variable is defined as a predictor variable and the other is its response variable. Such an approach provides a visual means of identifying departures from perfect symmetry using non-symmetrical correspondence analysis. While the approach described above is confined to examining departures from perfect symmetry between two cross-classified categorical variables, another natural extension is to consider adapting the above technique to analyse multi-way contingency tables. There is scope to investigate how this can be achieved in the context of multiple and multi-way correspondence analysis [6]. However, this extension, and the others we described, will be left for consideration at a later date.   

Author Contributions

Conceptualization: E.J.B.; Methodology: E.J.B. and R.L.; Software: E.J.B. and R.L.; Artificial and Applied Data Analysis: E.J.B. and R.L.; Writing—original draft preparation: E.J.B.; Writing—review and editing: E.J.B. and R.L.; Visualization: E.J.B. and R.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data in Table 1 are artificial and unique to this paper. The data in Table 3 are from Grover and Srinivasan [15] and also appear in Agresti [26] (Table 8.5).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviation is used in this manuscript:
SVD: Singular value decomposition

Appendix A. The Singular Values of a 2 × 2 S Matrix

Suppose we have the following generic 2 × 2 skew-symmetric matrix
$$S = \begin{pmatrix} 0 & a \\ -a & 0 \end{pmatrix}.$$
Since S = 2, there will be two singular values for us to determine. To derive them we shall consider the eigen-decomposition of S^T S by noting that
$$S^\mathrm{T} S = \begin{pmatrix} 0 & a \\ -a & 0 \end{pmatrix}^\mathrm{T} \begin{pmatrix} 0 & a \\ -a & 0 \end{pmatrix} = \begin{pmatrix} 0 & -a \\ a & 0 \end{pmatrix} \begin{pmatrix} 0 & a \\ -a & 0 \end{pmatrix} = \begin{pmatrix} a^2 & 0 \\ 0 & a^2 \end{pmatrix}.$$
Since S^T S is a diagonal matrix with identical diagonal elements, it has two identical eigen-values, which are
$$\lambda_1^2 = a^2 \quad \text{and} \quad \lambda_2^2 = a^2.$$
In the context of the matrix of the Bowker residuals,
$$a = \frac{1}{\sqrt{2}} \, \frac{p_{21} - p_{12}}{\sqrt{p_{21} + p_{12}}}.$$
Therefore,
$$\lambda_1^2 = \lambda_2^2 = \frac{1}{2} \, \frac{\left( p_{21} - p_{12} \right)^2}{p_{21} + p_{12}}$$
and these are the two non-trivial, and identical, eigen-values of S^T S. Thus, the two non-trivial singular values of S are
$$\lambda_1 = \lambda_2 = \frac{1}{\sqrt{2}} \, \frac{\left| p_{21} - p_{12} \right|}{\sqrt{p_{21} + p_{12}}}$$
so that the total inertia of the 2 × 2 matrix containing the Bowker residuals is
$$\phi^2 = \lambda_1^2 + \lambda_2^2 = \frac{\left( p_{21} - p_{12} \right)^2}{p_{21} + p_{12}}$$
and is equivalent to McNemar’s [17] statistic divided by the sample size. This shows that when performing a correspondence analysis for assessing the departure from symmetry of a 2 × 2 contingency table, each dimension contributes equally, accounting for exactly half of the total inertia.
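The equivalence between the total inertia of a 2 × 2 table and McNemar’s statistic divided by n can be checked directly. Below is a small Python sketch using numpy; the 2 × 2 counts are arbitrary and purely illustrative:

```python
import numpy as np

# A hypothetical 2 x 2 table of matched-pair counts (values are illustrative only)
N = np.array([[30., 12.],
              [25., 40.]])
n = N.sum()
P = N / n
p12, p21 = P[0, 1], P[1, 0]
S = (P - (P + P.T) / 2) / np.sqrt((P + P.T) / 2)   # 2 x 2 matrix of Bowker residuals
lam = np.linalg.svd(S, compute_uv=False)
# Both singular values equal (1/sqrt(2)) |p21 - p12| / sqrt(p21 + p12)
closed_form = abs(p21 - p12) / np.sqrt(2 * (p21 + p12))
print(np.allclose(lam, closed_form))               # True
# The total inertia equals McNemar's statistic divided by the sample size
mcnemar = (N[1, 0] - N[0, 1]) ** 2 / (N[1, 0] + N[0, 1])
print(np.isclose((lam ** 2).sum(), mcnemar / n))   # True
```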

Appendix B. The Singular Values of a 3 × 3 S Matrix

Suppose we now have the generic 3 × 3 skew-symmetric matrix
$$S = \begin{pmatrix} 0 & a & b \\ -a & 0 & c \\ -b & -c & 0 \end{pmatrix}.$$
Since S = 3 , this matrix has exactly two positive singular values and one zero singular value. Here we derive these values in terms of the elements of S by considering the eigen-decomposition of S T S . In doing so
$$S^\mathrm{T} S = \begin{pmatrix} 0 & -a & -b \\ a & 0 & -c \\ b & c & 0 \end{pmatrix} \begin{pmatrix} 0 & a & b \\ -a & 0 & c \\ -b & -c & 0 \end{pmatrix} = \begin{pmatrix} a^2 + b^2 & bc & -ac \\ bc & a^2 + c^2 & ab \\ -ac & ab & b^2 + c^2 \end{pmatrix}.$$
The eigen-values of S^T S in this case are determined by solving the characteristic equation
$$\left| S^\mathrm{T} S - \lambda^2 I_3 \right| = 0$$
where I 3 is a 3 × 3 identity matrix. Thus
$$\left| S^\mathrm{T} S - \lambda^2 I_3 \right| = \begin{vmatrix} a^2 + b^2 - \lambda^2 & bc & -ac \\ bc & a^2 + c^2 - \lambda^2 & ab \\ -ac & ab & b^2 + c^2 - \lambda^2 \end{vmatrix}.$$
Expanding this determinant along its first row and collecting terms gives
$$\left| S^\mathrm{T} S - \lambda^2 I_3 \right| = -\lambda^6 + 2\lambda^4 \left( a^2 + b^2 + c^2 \right) - \lambda^2 \left( a^4 + b^4 + c^4 + 2a^2 b^2 + 2a^2 c^2 + 2b^2 c^2 \right).$$
Therefore, setting this sextic equation of λ to zero gives
$$-\lambda^2 \left[ \lambda^4 - 2\lambda^2 \left( a^2 + b^2 + c^2 \right) + \left( a^4 + b^4 + c^4 + 2a^2 b^2 + 2a^2 c^2 + 2b^2 c^2 \right) \right] = 0$$
which is a perfect square so that
$$-\lambda^2 \left[ \lambda^4 - 2\lambda^2 \left( a^2 + b^2 + c^2 \right) + \left( a^2 + b^2 + c^2 \right)^2 \right] = 0$$
or
$$-\lambda^2 \left[ \lambda^2 - \left( a^2 + b^2 + c^2 \right) \right]^2 = 0.$$
Therefore, there are two positive eigen-values and one zero eigen-value of S T S (when S = 3 ) and they are
$$\lambda_1^2 = \lambda_2^2 = a^2 + b^2 + c^2 \quad \text{and} \quad \lambda_3^2 = 0.$$
This then confirms that the two largest eigen-values of S T S , and hence singular values of S , are identical with a zero third value. In the context of S ,
$$a = \frac{1}{\sqrt{2}} \, \frac{p_{21} - p_{12}}{\sqrt{p_{21} + p_{12}}}, \qquad b = \frac{1}{\sqrt{2}} \, \frac{p_{31} - p_{13}}{\sqrt{p_{31} + p_{13}}} \quad \text{and} \quad c = \frac{1}{\sqrt{2}} \, \frac{p_{32} - p_{23}}{\sqrt{p_{32} + p_{23}}}$$
so that the principal inertia values associated with the dimensions of the optimal correspondence plot are
$$\lambda_1^2 = \lambda_2^2 = \frac{1}{2} \left[ \frac{\left( p_{21} - p_{12} \right)^2}{p_{21} + p_{12}} + \frac{\left( p_{31} - p_{13} \right)^2}{p_{31} + p_{13}} + \frac{\left( p_{32} - p_{23} \right)^2}{p_{32} + p_{23}} \right] = \frac{X_S^2}{2n}$$
and
$$\lambda_3^2 = 0.$$
Thus, for a 3 × 3 contingency table, the optimal correspondence plot will consist of two dimensions and each will account for exactly 50% of the total inertia. It is then not surprising that the sum of the three principal inertia values gives the total inertia, since
$$\phi^2 = \lambda_1^2 + \lambda_2^2 + \lambda_3^2 = \frac{X_S^2}{2n} + \frac{X_S^2}{2n} + 0 = \frac{X_S^2}{n}.$$
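The result above is easy to confirm numerically for any choice of a, b and c. A minimal Python sketch using numpy, with arbitrary illustrative values:

```python
import numpy as np

# Arbitrary illustrative values of a, b and c
a, b, c = 0.7, -0.3, 0.5
S = np.array([[ 0.,  a,  b],
              [-a,  0.,  c],
              [-b, -c,  0.]])
lam = np.linalg.svd(S, compute_uv=False)
root = np.sqrt(a**2 + b**2 + c**2)
# The singular values are sqrt(a^2 + b^2 + c^2), twice, and zero
print(np.allclose(lam, [root, root, 0.0]))   # True
```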

Appendix C. R Code

This appendix contains the R function bowkerca.exe() that performs a correspondence analysis on an S × S contingency table where the departure from perfect symmetry is assessed using Bowker’s X² statistic—see (6). The arguments of the function are
  • N—the two-way contingency table of size S × S , where S > 2 ,
  • scaleplot—rescales the limit of the axes used to construct the two-dimensional correspondence plot. By default, scaleplot = 1.2,
  • dim1—the first dimension of the correspondence plot. By default, dim1 = 1 so that the first dimension is depicted horizontally, and
  • dim2—the second dimension of the correspondence plot. By default, dim2 = 2 so that the second dimension is depicted vertically
bowkerca.exe <- function(N, scaleplot = 1.2, dim1 = 1, dim2 = 2){
                       
 S <- nrow(N)        # Number of rows & columns of the table
 Inames <- dimnames(N)[1]  # Row category names
 Jnames <- dimnames(N)[2]  # Column category names
                       
 n <- sum(N)   # Total number of classifications in the table
 p <- N * (1/n) # Matrix of joint relative proportions
                       
 pidot <- apply(p, 1, sum) # Row marginal proportions
 pdotj <- apply(p, 2, sum) # Column marginal proportions
                       
 dI <- diag(pidot, nrow = S, ncol = S)
 dJ <- diag(pdotj, nrow = S, ncol = S)
 dIJ <- 0.5*(dI + dJ)
                       
 # Constructing the matrix of Bowker residuals
                       
 s <- matrix(0, nrow = S, ncol = S)
                       
 for (i in 1:S){
  for (j in 1:S){
   s[i,j] <- (p[i,j]-(p[i,j]+p[j,i])/2)/sqrt((p[i,j]+p[j,i])/2)
  }
 }
                       
 dimnames(s) <- list(paste(Inames[[1]]), paste(Jnames[[1]]))
                       
 # Applying a singular value decomposition (SVD) to the matrix of
 # Bowker residuals
                       
 sva <- svd(s)
                       
 d <- sva$d
 dmu <- diag(sva$d)
                       
 ##########################################################
 #                                                        #
 # Principal Coordinates                                  #
 #                                                        #
 ##########################################################
                       
 # Row principal coordinates
                       
 f <- solve(dIJ^0.5) %*% sva$u %*% dmu
 dimnames(f) <- list(paste(Inames[[1]]), paste(1:S))
                       
 # Column principal coordinates
                       
 g <- solve(dIJ^0.5) %*% sva$v %*% dmu
 dimnames(g) <- list(paste(Jnames[[1]]), paste(1:S))
                       
 ##################################################################
 #                                                                #
 # Calculating the total inertia, Bowker’s chi-squared            #
 # statistic, its p-value and the percentage contribution         #
 # of the axes to the inertia                                     #
 #                                                                #
 ##################################################################
                       
 Principal.Inertia <- diag(t(f) %*% dIJ %*% f)
 Total.Inertia <- sum(Principal.Inertia)
 Bowker.X2 <- n * Total.Inertia  # Bowker’s Chi-squared statistic
 Perc.Inertia <- (Principal.Inertia/Total.Inertia) * 100
 Cumm.Inertia <- cumsum(Perc.Inertia)
 Inertia <- cbind(Principal.Inertia, Perc.Inertia, Cumm.Inertia)
 dimnames(Inertia)[1] <- list(paste("Axis", 1:S, sep = " "))
 p.value <- 1 - pchisq(Bowker.X2, S * (S - 1)/2)
                       
 ##########################################################
 #                                                        #
 # Here we construct the 2-D correspondence plot          #
 #                                                        #
 ##########################################################
                       
 par(pty = "s")
 plot(0, 0, pch = " ",
   xlim = scaleplot*range(f[, dim1], f[, dim2], g[, dim1], g[, dim2]),
   ylim = scaleplot*range(f[, dim1], f[, dim2], g[, dim1], g[, dim2]),
   xlab = paste("Principal Axis", dim1, "(",round(Perc.Inertia[dim1],
    digits = 2), "%)"),
   ylab = paste("Principal Axis", dim2, "(", round(Perc.Inertia[dim2],
    digits = 2), "%)")
 )
                       
 points(f[, dim1], f[, dim2], pch = "+", col = "red")
 text(f[, dim1], f[, dim2], labels = Inames[[1]], pos = 4, col = "red")
                       
 points(g[, dim1], g[, dim2], pch = "*", col = "blue")
 text(g[, dim1], g[, dim2], labels = Jnames[[1]], pos = 2, col = "blue")
                       
 abline(h = 0, v = 0)
                       
 list(N = N,
   s = round(s, digits = 3),
   f = round(f, digits = 3),
   g = round(g, digits = 3),
   Bowker.X2 = round(Bowker.X2, digits = 3),
   P.Value = round(p.value, digits = 3),
   Total.Inertia = round(Total.Inertia, digits = 3),
   Inertia = round(Inertia, digits = 3)
 )
}
The numerical summaries that are produced from this function are
  • the contingency table under investigation, N,
  • the matrix of Bowker residuals, s, where the elements are defined by (7),
  • the matrix of row principal coordinates, f, and column principal coordinates, g, defined by (14) and (15), respectively,
  • Bowker’s chi-squared statistic defined by (6), Bowker.X2, and its p-value, P.Value, and 
  • the principal inertia value for each of the M dimensions, Principal.Inertia, the percentage of the total inertia accounted for by each of these dimensions, Perc.Inertia, and the cumulative percentage of the M principal inertia values, Cumm.Inertia.  
Therefore, when coffee.dat is the R object assigned to Table 3 so that
> coffee.dat <- matrix(c(93, 9, 17, 6, 10, 17, 46, 11, 4, 4, 44, 11, 155,
+ 9, 12, 7, 0, 9, 15, 2, 10, 9, 12, 2, 27), nrow = 5)
> dimnames(coffee.dat) <- list(paste(c("HP", "TC", "SA", "NE", "BR")),
+ paste(c("hp", "tc", "sa", "ne", "br")))
>
then the function produces the correspondence plot of Figure 5 and the following numerical summaries
> bowkerca.exe(coffee.dat)
$N
    hp tc  sa ne br
HP  93 17  44  7 10
TC   9 46  11  0  9
SA  17 11 155  9 12
NE   6  4   9 15  2
BR  10  4  12  2 27
                       
$s
       hp     tc     sa     ne    br
HP  0.000  0.048  0.105  0.008 0.000
TC -0.048  0.000  0.000 -0.061 0.042
SA -0.105  0.000  0.000  0.000 0.000
NE -0.008  0.061  0.000  0.000 0.000
BR  0.000 -0.042  0.000  0.000 0.000

$f
        1      2      3      4 5
HP  0.000 -0.213  0.000  0.043 0
TC  0.186  0.016 -0.135  0.023 0
SA  0.155  0.000  0.059  0.000 0
NE  0.044 -0.140 -0.020 -0.193 0
BR -0.007  0.075  0.017  0.103 0

$g
        1      2      3      4 5
hp -0.213  0.000 -0.043  0.000 0
tc  0.016 -0.186 -0.023 -0.135 0
sa  0.000 -0.155  0.000  0.059 0
ne -0.140 -0.044  0.193 -0.020 0
br  0.075  0.007 -0.103  0.017 0
                       
$Bowker.X2
[1] 20.412
                       
$P.Value
[1] 0.026
                       
$Total.Inertia
[1] 0.038
                       
$Inertia
       Principal.Inertia Perc.Inertia Cumm.Inertia
Axis 1             0.015       39.121       39.121
Axis 2             0.015       39.121       78.242
Axis 3             0.004       10.879       89.121
Axis 4             0.004       10.879      100.000
Axis 5             0.000        0.000      100.000
                       
>

References

  1. Agresti, A. Categorical Data Analysis, 2nd ed.; Wiley: New York, NY, USA, 2002. [Google Scholar]
  2. Anderson, E.B. The Statistical Analysis of Categorical Data; Springer: Berlin/Heidelberg, Germany, 1991. [Google Scholar]
  3. Bove, G. Asymmetrical multidimensional scaling and correspondence analysis for square tables. Stat. Appl. 1992, 4, 587–598. [Google Scholar]
  4. De Falguerolles, A.; van der Heijden, P.G.M. Reduced rank quasi-symmetry and quasi-skew symmetry: A generalized bi-linear model approach. Ann. Fac. Sci. Toulouse 2002, 11, 507–524. [Google Scholar] [CrossRef]
  5. Iki, K.; Yamamoto, K.; Tomizawa, S. Quasi-diagonal exponent symmetry model for square contingency tables with ordered categories. Stat. Probab. Lett. 2014, 92, 33–38. [Google Scholar] [CrossRef] [Green Version]
  6. Tomizawa, S. Two kinds of measures of departure from symmetry in square contingency tables having nominal categories. Stat. Sin. 1994, 4, 325–334. [Google Scholar]
  7. Yamamoto, H. A measure of departure from symmetry for multi-way contingency tables with nominal categories. Jpn. J. Biom. 2004, 25, 69–88. [Google Scholar] [CrossRef] [Green Version]
  8. Yamamoto, K.; Shimada, F.; Tomizawa, S. Measure of departure from symmetry for the analysis of collapsed square contingency tables with ordered categories. J. Appl. Stat. 2015, 42, 866–875. [Google Scholar] [CrossRef]
  9. Arellano-Valle, R.B.; Contreras-Reyes, J.E.; Stehlik, M. Generalized skew-normal negentropy and its application to fish condition factor time series. Entropy 2017, 19, 528. [Google Scholar] [CrossRef] [Green Version]
  10. Nishisato, S. Optimal Quantification and Symmetry; Springer: Singapore, 2022. [Google Scholar]
  11. Constantine, A.G.; Gower, J.C. Graphical representation of asymmetry. Appl. Stat. 1978, 27, 297–304. [Google Scholar] [CrossRef]
  12. Gower, J.C. The analysis of asymmetry and orthogonality. In Recent Developments in Statistics; Barra, J.R., Brodeau, F., Romer, G., van Cutsem, B., Eds.; North-Holland: Amsterdam, The Netherlands, 1977; pp. 109–123. [Google Scholar]
  13. Greenacre, M. Correspondence analysis of square asymmetric matrices. J. R. Stat. Soc. Ser. C Appl. Stat. 2000, 49, 297–310. [Google Scholar] [CrossRef]
  14. Bowker, A.H. A test for symmetry in contingency tables. J. Am. Stat. Assoc. 1948, 43, 572–598. [Google Scholar] [CrossRef] [PubMed]
  15. Grover, R.; Srinivasan, V. A simultaneous approach to market segmentation and market structuring. J. Mark. Res. 1987, 24, 129–153. [Google Scholar] [CrossRef]
  16. Beh, E.J.; Lombardo, R. Correspondence Analysis: Theory, Practice and New Strategies; Wiley: Chichester, UK, 2014. [Google Scholar]
  17. McNemar, Q. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika 1947, 12, 153–157. [Google Scholar] [CrossRef] [PubMed]
  18. Bishop, Y.M.; Fienberg, S.E.; Holland, P.W. Discrete Multivariate Analysis: Theory and Practice; MIT Press: Cambridge, MA, USA, 1975. [Google Scholar]
  19. Lancaster, H.O. The Chi-squared Distribution; Wiley: Sydney, Australia, 1969. [Google Scholar]
  20. Plackett, R.L. The Analysis of Categorical Data; Charles Griffin and Company, Limited: London, UK, 1974. [Google Scholar]
  21. Cressie, N.A.C.; Read, T.R.C. Multinomial goodness-of-fit tests. J. R. Stat. Soc. Ser. B 1984, 46, 440–464. [Google Scholar] [CrossRef]
  22. Beh, E.J.; Lombardo, R. Correspondence Analysis and the Cressie–Read Family of Divergence Statistics. National Institute for Applied Statistics Research Australia (NIASRA) Working Paper Series. 2022. Available online: https://www.uow.edu.au/niasra/publications/ (accessed on 19 May 2022).
  23. Ward, R.C.; Gray, L.J. Eigensystem computation for skew-symmetric matrices and a class of symmetric matrices. ACM Trans. Math. Softw. 1978, 4, 278–285. [Google Scholar] [CrossRef]
  24. Murnaghan, F.D.; Wintner, A. A canonical form for real matrices under orthogonal transformations. Proc. Natl. Acad. Sci. USA 1931, 17, 417–420. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Paardekooper, M.H.C. An eigenvalue algorithm for skew-symmetric matrices. Numer. Math. 1971, 17, 189–202. [Google Scholar] [CrossRef]
  26. Agresti, A. An Introduction to Categorical Data Analysis, 3rd ed.; Wiley: New York, NY, USA, 2019. [Google Scholar]
  27. Freeman, M.F.; Tukey, J.W. Transformations related to the angular and square root. Ann. Math. Stat. 1950, 21, 607–611. [Google Scholar] [CrossRef]
  28. Hubert, L.; Arabie, P. Comparing partitions. J. Classif. 1985, 2, 193–218. [Google Scholar] [CrossRef]
Figure 1. Correspondence plot that visually examines the departure from symmetry for Table 1; C = 50.
Figure 2. Correspondence plot that visually examines the departure from symmetry for Table 1; C = 75.
Figure 3. Correspondence plot that visually examines the departure from symmetry for Table 1; C = 100.
Figure 4. Correspondence plot that visually examines the departure from symmetry for Table 1; C = 150.
Figure 5. Correspondence plot that visually examines the departure from symmetry for Table 3.
Table 1. A near-symmetric artificial contingency table where C is a non-negative integer.

         C1    C2    C3    C4
R1       10    20    30    40
R2   20 + C    50    60    70
R3       30    60    20    40
R4       40    70    40    80
Table 2. Select output from the correspondence analysis of Table 1 when studying departures from symmetry; C = 50, 75, 100 and 150.

                           C
Output        50       75      100      150
X_S^2     27.778   48.913   71.429  118.421
phi^2      0.038    0.065    0.092    0.143
lambda_1   0.138    0.180    0.214    0.267
f_12      -0.333   -0.422   -0.488   -0.582
f_21      -0.248   -0.321   -0.378   -0.464
g_11      -0.333   -0.422   -0.488   -0.582
g_22       0.248    0.321    0.378    0.464
Table 3. A simple 5 × 5 table where we test for symmetry using CA.

                                     Second Purchase
First Purchase          High Pt  Taster’s  Sanka  Nescafé  Brim  Total
                           (hp)      (tc)   (sa)     (ne)  (br)
High Point (HP)              93        17     44        7    10    171
Taster’s Choice (TC)          9        46     11        0     9     75
Sanka (SA)                   17        11    155        9    12    204
Nescafé                       6         4      9       15     2     36
Brim                         10         4     12        2    27     55
Total                       135        82    231       33    60    541