Article

Fair Max–Min Diversity Maximization in Streaming and Sliding-Window Models †

1 School of Data Science and Engineering, East China Normal University, Shanghai 200062, China
2 Spotify, 08000 Barcelona, Spain
3 Department of Computer Science, University of Helsinki, 00560 Helsinki, Finland
* Author to whom correspondence should be addressed.
† This paper is an extended version of our paper published in Proceedings of the IEEE 38th International Conference on Data Engineering (ICDE 2022), pp. 41–53.
Entropy 2023, 25(7), 1066; https://doi.org/10.3390/e25071066
Submission received: 19 June 2023 / Revised: 12 July 2023 / Accepted: 13 July 2023 / Published: 14 July 2023
(This article belongs to the Special Issue Advances in Information Sciences and Applications II)
Figure 1. Comparison of (a) max–sum dispersion (MSD) and (b) max–min dispersion (MMD) for diversity maximization on a dataset of one hundred points. We use circles and crossmarks to denote all points in the dataset and the points selected based on MSD and MMD.
Figure 2. Comparison of (a) unconstrained max–min diversity maximization and (b) fair max–min diversity maximization. We have a set of individuals, each described by two attributes, partitioned into two disjoint groups of red and blue, respectively. Fair diversity maximization returns a subset of size 10 that maximizes diversity in terms of attributes and contains an equal number (i.e., $k_i = 5$) of elements from both groups.
Figure 3. Illustration of the SFDM1 algorithm. During stream processing, one group-blind and two group-specific candidates are maintained for each guess $\mu$ of $\mathrm{OPT}_f$. Then, a subset of group-blind candidates is selected for post-processing by adding the elements from the under-filled group before deleting the elements from the over-filled one.
Figure 4. Illustration of post-processing in SFDM2. For each $\mu \in \mathcal{U}'$, an initial $S'_\mu$ is first extracted from $S_\mu$ by removing the elements from over-filled groups. Then, the elements in all candidates are divided into clusters. The final $S'_\mu$ is augmented from the initial solution by adding new elements from under-filled groups based on matroid intersection.
Figure 5. Illustration of the framework of sliding-window algorithms. During stream processing, two candidate solutions $A_{\lambda,\mu}$ and $B_{\lambda,\mu}$, along with their backups $A'_{\lambda,\mu}$ and $B'_{\lambda,\mu}$, are maintained for each guess $\lambda, \mu$ of $\mathrm{OPT}[W]$. Then, during post-processing, the elements in $B_{\lambda,\mu}$ and $A_{\lambda,\mu}$ (or the non-expired elements in $A'_{\lambda,\mu}$ if $A_{\lambda,\mu}$ has expired) are passed to an existing algorithm for solution computation.
Figure 6. Performance of SFDM1 and SFDM2 with varying parameter $\varepsilon$ on (a) Adult (Sex, $m = 2$), (b) CelebA (Sex, $m = 2$), (c) Census (Sex, $m = 2$), and (d) Lyrics (Genre, $m = 15$) when $k = 20$.
Figure 7. Solution quality of different algorithms in the streaming setting with varying solution sizes, $k$. The diversity values of GMM are plotted as gray lines to illustrate the "price of fairness", i.e., the losses in diversity caused by incorporating the fairness constraints.
Figure 8. Update time of different algorithms in the streaming setting with varying solution sizes, $k$.
Figure 9. Solution quality and update time on synthetic datasets in the streaming setting with varying dataset sizes, $n$, and numbers of groups, $m$ ($k = 20$).
Figure 10. Comparison of different algorithms on Adult for equal representation (ER) and proportional representation (PR) when $k = 20$.
Figure 11. Solution quality of different algorithms in the sliding-window setting with varying solution size $k$ ($w = 25$k for Adult and $100$k for others). The diversity values of GMM are also plotted as gray lines to illustrate the "price of fairness".
Figure 12. Update time of different algorithms in the sliding-window setting with varying solution size $k$ ($w = 25$k for Adult and $100$k for others).
Figure 13. Solution quality and update time on synthetic datasets in the sliding-window setting with varying window size $w$ and number of groups $m$ ($k = 20$).

Abstract

Diversity maximization is a fundamental problem with broad applications in data summarization, web search, and recommender systems. Given a set $X$ of $n$ elements, the problem asks for a subset $S$ of $k \ll n$ elements with maximum diversity, as quantified by the dissimilarities among the elements in $S$. In this paper, we study diversity maximization with fairness constraints in streaming and sliding-window models. Specifically, we focus on the max–min diversity maximization problem, which selects a subset $S$ that maximizes the minimum distance (dissimilarity) between any pair of distinct elements within it. Assuming that the set $X$ is partitioned into $m$ disjoint groups by a specific sensitive attribute, e.g., sex or race, ensuring fairness requires that the selected subset $S$ contains $k_i$ elements from each group $i \in [m]$. Although diversity maximization has been extensively studied, existing algorithms for fair max–min diversity maximization are inefficient for data streams. To address the problem, we first design efficient approximation algorithms for this problem in the (insert-only) streaming model, where data arrive one element at a time, and a solution should be computed based on the elements observed in one pass. Furthermore, we propose approximation algorithms for this problem in the sliding-window model, where only the latest $w$ elements in the stream are considered for computation to capture the recency of the data. Experimental results on real-world and synthetic datasets show that our algorithms provide solutions of comparable quality to the state-of-the-art offline algorithms while running several orders of magnitude faster in the streaming and sliding-window settings.

1. Introduction

Data summarization is a common approach to tackling the challenges of a large volume of data in data-intensive applications. That is because, rather than performing high-complexity analyses on the whole dataset, it is often beneficial to perform them on a representative and significantly smaller summary of the dataset, thus reducing the processing costs in terms of both running time and space usage. Typical techniques for data summarization [1] include sampling, sketching, coresets, and diverse data selection.
In this paper, we focus on diversity-aware data summarization, which finds application in a wide range of real-world problems. For example, in database query processing [2,3], web search [4,5], and recommender systems [6], the output might be too large to be presented to the user in its entirety, even after filtering the results by relevance. One feasible solution, then, is to present the user with a small but diverse subset that is easy to process and representative of the complete results. As another example, when training machine learning models on massive data, feature and subset selection is a standard method to improve efficiency. As indicated by [7,8], selecting diverse features or subsets can lead to a better balance between efficiency and accuracy. A key technical problem in such cases is diversity maximization [9,10,11,12,13,14,15,16,17,18,19,20].
In more detail, for a given set X of elements in some metric space and a size constraint k, diversity maximization asks for a subset of k elements with maximum diversity. Formally, diversity is quantified by a function that captures how well a subset spans the range of elements in X, and is typically defined in terms of distances or dissimilarities among elements in the subset. Prior studies [3,4,6,12] have suggested many different objectives of this kind. Two of the most popular ones are max–sum dispersion, which aims to maximize the sum of the distances between all pairs of elements in the selected subset S, and max–min dispersion, which aims to maximize the minimum distance between any pair of distinct elements in S. Figure 1 illustrates a selection of the 10 most diverse points from a two-dimensional point set with each of the two objectives for diversity maximization. As shown in Figure 1, max–sum dispersion tends to select “outliers” and may include highly similar elements in the solution, making it unsuitable for applications requiring more uniform coverage of the span of data. Therefore, we focus on diversity maximization with the objective of max–min dispersion, referred to as max–min diversity maximization, in this paper.
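Both objectives are simple to evaluate for a given subset. The following Python sketch (ours, assuming `dist` implements the metric $d$ and $|S| \geq 2$) computes the two quantities; max–sum and max–min diversity maximization then ask for the size-$k$ subset maximizing the respective value.

```python
from itertools import combinations

def max_sum_dispersion(S, dist):
    """Sum of all pairwise distances in S (the MSD objective)."""
    return sum(dist(x, y) for x, y in combinations(S, 2))

def max_min_dispersion(S, dist):
    """Minimum pairwise distance in S (the MMD objective, i.e., div(S))."""
    return min(dist(x, y) for x, y in combinations(S, 2))
```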
In addition to diversity, fairness in data summarization is also attracting increasing attention [8,21,22,23,24,25,26,27]. Several studies reveal that biases with respect to (w.r.t.) sensitive attributes, such as sex, race, or age, in underlying datasets can be retained in the summaries and could lead to unfairness in data-driven social computational systems such as education, recruitment, and banking [8,23,26]. One of the most common notions of fairness in data summarization is group fairness [8,21,22,23,27], which partitions the dataset into $m$ disjoint groups based on a specific sensitive attribute and introduces a fairness constraint that limits the number of elements from group $i$ in the data summary to $k_i$ for every group $i \in [m]$ (see Figure 2 for an illustrative example). However, most existing methods for diversity maximization cannot easily be adapted to satisfy such fairness constraints. Moreover, the few methods that can deal with fairness constraints are specific to max–sum diversity maximization [9,11,13]. To the best of our knowledge, the methods in [17,20] are the only ones for max–min diversity maximization with fairness constraints.
Furthermore, since many applications of diversity maximization are in the realm of massive data analysis, it is essential to design efficient algorithms for processing large-scale datasets. The (insert-only) streaming and sliding-window models are well-recognized frameworks for big data processing. In the streaming model, an algorithm is only permitted to process each element in the dataset sequentially in one pass, is allowed to take time and space that are sublinear in or even independent of the dataset size, and is required to provide solutions of comparable quality to those returned by the offline algorithms. In the sliding-window model, the computation is further restricted to the latest $w$ elements in the stream, and an algorithm is required to find good solutions in time and space sublinear w.r.t. the window size. However, the only known algorithms [17,20] for fair max–min diversity maximization are designed for the offline setting and are very inefficient in the streaming and sliding-window models.
  • Our Contributions: In this paper, we propose novel streaming and sliding-window algorithms for the max–min diversity maximization problem with fairness constraints. Our main contributions are summarized as follows:
  • We formally define the problem of fair max–min diversity maximization (FDM) in metric spaces. Then, we describe the existing streaming and sliding-window algorithms for (unconstrained) max–min diversity maximization [14]. In particular, we improve the approximation ratio of the existing streaming algorithm from $\frac{1-\varepsilon}{5}$ to $\frac{1-\varepsilon}{2}$ for any parameter $\varepsilon \in (0, 1)$ by refining the analysis of [14].
  • We propose two novel streaming algorithms for FDM. Our first algorithm, called SFDM1, is $\frac{1-\varepsilon}{4}$-approximate for FDM when there are two groups in the dataset. It takes $O(\frac{k \log \Delta}{\varepsilon})$ time per element in the stream processing, where $\Delta$ is the ratio of the maximum and minimum distances between any pair of elements, spends $O(\frac{k^2 \log \Delta}{\varepsilon})$ time for post-processing, and stores $O(\frac{k \log \Delta}{\varepsilon})$ elements in memory. Our second algorithm, called SFDM2, is $\frac{1-\varepsilon}{3m+2}$-approximate for FDM with an arbitrary number $m$ of groups. SFDM2 also takes $O(\frac{k \log \Delta}{\varepsilon})$ time per element in the stream processing but requires a longer $O(\frac{k^2 m \log \Delta}{\varepsilon} \cdot (m + \log^2 k))$ time for post-processing and stores $O(\frac{km \log \Delta}{\varepsilon})$ elements in memory.
  • We further extend our two streaming algorithms to the sliding-window model. The extended SWFDM1 and SWFDM2 algorithms achieve approximation factors of $\Theta(1)$ and $\Theta(m^{-1})$ for FDM with $m = 2$ and an arbitrary $m$, respectively, when any $\Theta(1)$-approximation algorithm for unconstrained max–min diversity maximization is used for post-processing. Additionally, their time and space complexities increase by a factor of $O(\frac{\log \Delta}{\varepsilon})$ compared with SFDM1 and SFDM2, respectively.
  • Finally, we evaluate the performance of our proposed algorithms against the state-of-the-art algorithms on several real-world and synthetic datasets. The results demonstrate that our algorithms provide solutions of comparable quality for FDM to those returned by the state-of-the-art algorithms while running several orders of magnitude faster in the streaming and sliding-window settings.
A preliminary version of this paper was published in [28]. In this extended version, we make the following novel contributions with respect to [28]: (1) We propose two novel algorithms for FDM in the sliding-window model along with the implementation of an existing algorithm for unconstrained max–min diversity maximization in the sliding-window model [14]. Moreover, we analyze the approximation factors and complexities of the two algorithms for fair sliding-window diversity maximization; (2) We conduct more comprehensive examinations of our streaming algorithms by implementing and comparing them with a new offline baseline called FairGreedyFlow [20], which achieves a better approximation factor than previous offline algorithms. The additional results further confirm the superior performance of our streaming algorithms; (3) We conduct new experiments for FDM in the sliding-window setting to evaluate the performance of our sliding-window algorithms compared with the existing offline algorithms. The new experimental results validate their efficiency, effectiveness, and scalability.
  • Paper Organization: The rest of this paper is organized as follows. The related work is reviewed in Section 2. In Section 3, we introduce the basic concepts and formally define the FDM problem. In Section 4, we first propose our streaming algorithms for FDM. In Section 5, we further design our sliding-window algorithms for FDM. Our experimental setup and results are described in Section 6. Finally, we conclude the paper in Section 7.

2. Related Work

Diversity maximization has been extensively studied over the last two decades. Existing studies mainly focus on two popular objectives, i.e., max–sum dispersion [11,12,13,14,15,16,29,30,31] and max–min dispersion [12,14,16,17,18,20,31], as well as their variants [12,32].
An early study [33] proved that both the max–sum and max–min diversity maximization problems are NP-hard even in metric spaces. The classic approaches to both problems are the greedy algorithms [34,35], which achieve the best possible approximation ratio of $\frac{1}{2}$ unless P = NP. Indyk et al. [12] proposed composable coreset-based approximation algorithms for diversity maximization. Aghamolaei et al. [31] improved the approximation ratios in [12]. Ceccarello et al. [16] proposed coreset-based approximation algorithms for diversity maximization in MapReduce and streaming settings where the metric space has a bounded doubling dimension. Borassi et al. [14] proposed sliding-window algorithms for diversity maximization. Epasto et al. [36] further proposed improved sliding-window algorithms for diversity maximization specific to the Euclidean space. Drosou and Pitoura [18] studied max–min diversity maximization on dynamic data. They proposed a $\frac{b-1}{2b^2}$-approximation algorithm using a cover tree of base $b$. Bauckhage et al. [15] proposed an adiabatic quantum computing solution for max–sum diversification. Zhang and Gionis [19] extended diversity maximization to clustered data. Nevertheless, all the above methods only consider diversity maximization problems without fairness constraints.
There have been several studies on diversity maximization under matroid constraints, of which the fairness constraints are special cases. Abbassi et al. [11] proposed a $(\frac{1}{2} - \varepsilon)$-approximation local search algorithm for max–sum diversification under matroid constraints. Borodin et al. [9] proposed a $(\frac{1}{2} - \varepsilon)$-approximation algorithm for maximizing the sum of a submodular function and a max–sum dispersion function. Cevallos et al. [30] extended the local search algorithm to distances of a negative type. They also proposed a PTAS for this problem via convex programming [29]. Bhaskara et al. [37] proposed a $\frac{1}{8}$-approximation algorithm for sum–min diversity maximization under matroid constraints using linear relaxations. Ceccarello et al. [13] proposed a coreset-based approach to matroid-constrained max–sum diversification in metric spaces of bounded doubling dimension. Nevertheless, the above methods are still not applicable to the max–min dispersion problem. The only known algorithms for fair max–min diversity maximization [17,20,38] are offline algorithms that are inefficient for data streams. We will compare our proposed algorithms with these, both theoretically and empirically. To the best of our knowledge, there has not been any previous streaming or sliding-window algorithm for fair max–min diversity maximization.
In addition to diversity maximization, fairness has also been considered in many other data summarization problems, such as k-center [21,22,23], determinantal point processes [8], coresets for k-means clustering [24,25], and submodular maximization [26,27]. However, since their optimization objectives differ from diversity maximization, the proposed algorithms for their fair variants cannot be directly used for our problem.

3. Preliminaries

In this section, we introduce the basic concepts and formally define the fair max–min diversity maximization problem.
Let $X$ be a set of $n$ elements from a metric space with a distance function $d(\cdot, \cdot)$ capturing the dissimilarities among elements. Recall that $d(\cdot, \cdot)$ is nonnegative, symmetric, and satisfies the triangle inequality, i.e., $d(x, y) + d(y, z) \geq d(x, z)$ for any $x, y, z \in X$. Note that all the algorithms and analyses in this paper are general for any distance metric. We further generalize the notion of distance to an element $x$ and a set $S$ as the distance between $x$ and its nearest neighbor in $S$, i.e., $d(x, S) = \min_{y \in S} d(x, y)$.
Our focus in this paper is to find a small subset of the most diverse elements from $X$. Given a subset $S \subseteq X$, its diversity $div(S)$ is defined as the minimum of the pairwise distances between any two distinct elements in $S$, i.e., $div(S) = \min_{x, y \in S, x \neq y} d(x, y)$. The unconstrained version of diversity maximization (DM) asks for a subset $S \subseteq X$ of $k$ elements maximizing $div(S)$, i.e., $S^* = \arg\max_{S \subseteq X : |S| = k} div(S)$. We use $\mathrm{OPT} = div(S^*)$ to denote the diversity of the optimal solution $S^*$ for DM. This problem has been proven to be NP-complete [33], and no polynomial-time algorithm can achieve an approximation factor better than $\frac{1}{2}$ unless P = NP. One approach to DM in the offline setting is the $\frac{1}{2}$-approximation greedy algorithm [34,39] (known as GMM).
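To illustrate how GMM proceeds, the following Python sketch implements the greedy farthest-point strategy. It is a minimal illustration under our own naming, not the authors' code; it assumes `X` is a non-empty list of hashable elements and `dist` is the metric $d$.

```python
def gmm(X, dist, k):
    """Greedy 1/2-approximation for offline max-min diversity maximization:
    start from an arbitrary element and repeatedly add the element that is
    farthest from the current solution."""
    S = [X[0]]                                  # arbitrary first element
    d_to_S = {x: dist(x, S[0]) for x in X}      # d(x, S) for every x
    while len(S) < k:
        x_star = max(X, key=lambda x: d_to_S[x])   # farthest element
        S.append(x_star)
        for x in X:                             # maintain d(x, S) incrementally
            d_to_S[x] = min(d_to_S[x], dist(x, x_star))
    return S
```

Each of the $k$ iterations scans $X$ once, so the sketch runs in $O(nk)$ time.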
We introduce fairness to diversity maximization when $X$ is composed of several demographic groups defined by a certain sensitive attribute, e.g., sex or race. Formally, suppose that $X$ is divided into $m$ disjoint groups $\{1, \ldots, m\}$ ($[m]$ for short) and a function $c : X \to [m]$ maps each element $x \in X$ to its group. Let $X_i = \{x \in X : c(x) = i\}$ be the subset of elements from group $i$ in $X$. Obviously, we have $\bigcup_{i=1}^{m} X_i = X$ and $X_i \cap X_j = \emptyset$ for any $i \neq j$. The fairness constraint assigns a positive integer $k_i$ to each of the $m$ groups and restricts the number of elements from group $i$ in the solution to $k_i$. We assume that $\sum_{i=1}^{m} k_i = k$. The fair max–min diversity maximization problem is defined as follows:
Definition 1
(FDM). Given a set $X$ of $n$ elements with $X = \bigcup_{i=1}^{m} X_i$ and $m$ size constraints $k_1, \ldots, k_m \in \mathbb{Z}^+$, find a subset $S$ that contains $k_i$ elements from $X_i$ and maximizes $div(S)$, i.e., $S_f^* = \arg\max_{S \subseteq X : |S \cap X_i| = k_i, \forall i \in [m]} div(S)$.
We use $\mathrm{OPT}_f = div(S_f^*)$ to denote the diversity of the optimal solution $S_f^*$ for FDM. Since DM is a special case of FDM when $m = 1$, FDM is also NP-hard, and the $\frac{1}{2}$ barrier on polynomial-time approximation carries over. In addition, our FDM problem is closely related to the concept of a matroid [40] in combinatorics. Given a ground set $V$, a matroid is a pair $\mathcal{M} = (V, \mathcal{I})$, where $\mathcal{I}$ is a family of subsets of $V$ (called independent sets) with the following properties: (i) $\emptyset \in \mathcal{I}$; (ii) for each $A \subseteq B \subseteq V$, if $B \in \mathcal{I}$ then $A \in \mathcal{I}$ (hereditary); and (iii) if $A \in \mathcal{I}$, $B \in \mathcal{I}$, and $|A| > |B|$, then there exists $x \in A \setminus B$ such that $B \cup \{x\} \in \mathcal{I}$ (augmentation). An independent set is maximal if it is not a proper subset of any other independent set. A basic property of $\mathcal{M}$ is that all its maximal independent sets have the same size, called the matroid's rank. As is easy to verify, our fairness constraint is a case of rank-$k$ partition matroids, where the ground set is partitioned into disjoint groups and the independent sets are exactly the sets in which, for each group, the number of elements from this group is, at most, the group capacity. Our algorithms for general $m$ in Section 4 and Section 5 will be built on matroids.
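As a concrete illustration of this connection, the following Python sketch tests independence in the partition matroid induced by the fairness constraint; the names `group` and `caps` are ours, chosen for illustration only.

```python
from collections import Counter

def is_independent(S, group, caps):
    """Independence test for a partition matroid: a set S is independent
    iff it contains at most caps[i] (= k_i) elements from each group i,
    where group(x) returns the group of element x."""
    counts = Counter(group(x) for x in S)
    return all(counts[i] <= caps[i] for i in counts)
```

Under this view, a fair solution for FDM is exactly a maximal independent set, namely one containing $k_i$ elements from every group $i$.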
In this paper, we first consider FDM in the streaming setting, where the elements in $X$ arrive one at a time. Here, we use $t(x)$ to denote the time when an element $x$ is observed and $X(T) = \{x \in X : t(x) \leq T\}$ to denote the subset of elements observed from $X$ until time $T$. A streaming algorithm should process each element sequentially in one pass using limited space (typically independent of $n$) and return a valid approximate solution $S$ (if it exists) for FDM on $X(T)$ at any time $T$. We further study FDM in the sliding-window setting, where the window $W(T)$ always contains the last $w$ elements observed from $X$ until time $T$, i.e., $W(T) = \{x \in X : T - w + 1 \leq t(x) \leq T\}$. A sliding-window algorithm should provide a valid approximate solution $S$ (if it exists) for FDM on $W(T)$ at any time $T$.

4. Streaming Algorithms

As has been shown in Section 3, FDM is NP-hard. Thus, we focus on efficient approximation algorithms for FDM. In this section, we first describe the existing algorithms for unconstrained diversity maximization in the streaming model on which our streaming algorithms will be built. We then propose a $\frac{1-\varepsilon}{4}$-approximation streaming algorithm for FDM in the special case that there are only two groups in the dataset. Finally, we propose a $\frac{1-\varepsilon}{3m+2}$-approximation streaming algorithm for FDM on a dataset with an arbitrary number $m$ of groups.

4.1. (Unconstrained) Streaming Algorithm

We first present the streaming algorithm of [14] for (unconstrained) diversity maximization in Algorithm 1. Let $d_{min} = \min_{x, y \in X, x \neq y} d(x, y)$, $d_{max} = \max_{x, y \in X, x \neq y} d(x, y)$, and $\Delta = \frac{d_{max}}{d_{min}}$. Obviously, it always holds that $\mathrm{OPT} \in [d_{min}, d_{max}]$. First, the algorithm maintains a sequence $\mathcal{U}$ of values for guessing $\mathrm{OPT}$ within a relative error of $1 - \varepsilon$ and initializes an empty solution $S_\mu$ for each $\mu \in \mathcal{U}$ before processing the stream (Lines 1 and 2). Then, for each $x \in X$ and each $\mu \in \mathcal{U}$, if $S_\mu$ contains fewer than $k$ elements and the distance between $x$ and $S_\mu$ is at least $\mu$, it will add $x$ to $S_\mu$ (Lines 3–6). After processing all elements in $X$, the candidate solution that contains $k$ elements and maximizes the diversity is returned as the solution $S$ for DM (Line 7). Algorithm 1 is proven to be a $\frac{1-\varepsilon}{5}$-approximation algorithm for max–min diversity maximization [14]. In Theorem 1, its approximation ratio is improved to $\frac{1-\varepsilon}{2}$ by refining the analysis of [14].
Algorithm 1 SDM
Input: Stream $X$, distance metric $d(\cdot, \cdot)$, parameter $\varepsilon \in (0, 1)$, solution size $k \in \mathbb{Z}^+$
Output: A set $S \subseteq X$ with $|S| = k$
1: $\mathcal{U} = \{d_{min} \cdot (1-\varepsilon)^{-j} : j \in \mathbb{Z}_0^+ \wedge (1-\varepsilon)^{-j} \cdot d_{min} \leq d_{max}\}$
2: Initialize $S_\mu = \emptyset$ for each $\mu \in \mathcal{U}$
3: for all $x \in X$ do
4:     for all $\mu \in \mathcal{U}$ do
5:         if $|S_\mu| < k$ and $d(x, S_\mu) \geq \mu$ then
6:             $S_\mu \leftarrow S_\mu \cup \{x\}$
7: return $S \leftarrow \arg\max_{\mu \in \mathcal{U} : |S_\mu| = k} div(S_\mu)$
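The following Python sketch mirrors Algorithm 1. It is a minimal illustration rather than the authors' implementation: the names are ours, $d_{min}$ and $d_{max}$ are assumed to be known in advance, and it returns the full candidate with the largest guess, which already attains the bound proven below.

```python
def sdm(stream, dist, k, eps, d_min, d_max):
    """One-pass streaming sketch for max-min diversity maximization
    (Algorithm 1): keep one greedy candidate per guess mu of OPT."""
    # Geometric grid of guesses for OPT over [d_min, d_max].
    guesses = []
    mu = d_min
    while mu <= d_max:
        guesses.append(mu)
        mu /= 1.0 - eps
    candidates = {mu: [] for mu in guesses}

    for x in stream:                        # a single pass over the stream
        for mu, s in candidates.items():
            # Add x if the candidate is not full and x is mu-far from it.
            if len(s) < k and all(dist(x, y) >= mu for y in s):
                s.append(x)

    # Any full candidate S_mu satisfies div(S_mu) >= mu; take the largest.
    full = [mu for mu, s in candidates.items() if len(s) == k]
    return candidates[max(full)] if full else None
```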
Theorem 1.
Algorithm 1 is a $\frac{1-\varepsilon}{2}$-approximation algorithm for max–min diversity maximization.
Proof. 
For each $\mu \in \mathcal{U}$, there are two cases for $S_\mu$ after processing all elements in $X$: (1) if $|S_\mu| = k$, the condition of Line 5 guarantees that $div(S_\mu) \geq \mu$; (2) if $|S_\mu| < k$, it holds that $d(x, S_\mu) < \mu$ for every $x \in X \setminus S_\mu$, since the only reason that $x$ was not added to $S_\mu$ while $|S_\mu| < k$ is that $d(x, S_\mu) < \mu$. Let us consider a candidate solution $S_\mu$ with $|S_\mu| < k$. Suppose that $S^* = \{s_1^*, \ldots, s_k^*\}$ is the optimal solution for DM on $X$. We define a function $f : S^* \to S_\mu$ that maps each element in $S^*$ to its nearest neighbor in $S_\mu$. As shown above, $d(s^*, f(s^*)) < \mu$ for each $s^* \in S^*$. Because $|S_\mu| < k$ and $|S^*| = k$, two distinct elements $s_a^*, s_b^* \in S^*$ with $f(s_a^*) = f(s_b^*)$ must exist. For such $s_a^*, s_b^*$, we have
$d(s_a^*, s_b^*) \leq d(s_a^*, f(s_a^*)) + d(s_b^*, f(s_b^*)) < 2\mu$
according to the triangle inequality. Thus, $\mathrm{OPT} = div(S^*) \leq d(s_a^*, s_b^*) < 2\mu$ if $|S_\mu| < k$. Let $\mu^+$ be the smallest $\mu \in \mathcal{U}$ with $|S_\mu| < k$. We obtain $div(S^*) < 2\mu^+$ from the above results. Additionally, for $\mu' = (1-\varepsilon)\mu^+$, we must have $|S_{\mu'}| = k$ and $div(S_{\mu'}) \geq \mu'$. Therefore, we have $div(S) \geq \mu' = (1-\varepsilon)\mu^+ > \frac{1-\varepsilon}{2} \cdot div(S^*)$.    □
In terms of complexity, Algorithm 1 stores $O(\frac{k \log \Delta}{\varepsilon})$ elements and takes $O(\frac{k \log \Delta}{\varepsilon})$ time per element, since it makes $O(\frac{\log \Delta}{\varepsilon})$ guesses for $\mathrm{OPT}$, keeps, at most, $k$ elements in each candidate, and requires, at most, $k$ distance computations to decide whether to add an element to a candidate.

4.2. Fair Streaming Algorithm for m = 2

The procedure of our streaming algorithm in the case of $m = 2$, called SFDM1, is described in Algorithm 2 and illustrated in Figure 3. In general, the algorithm runs in two phases: stream processing and post-processing. In the stream processing (Lines 1–6), for each guess $\mu \in \mathcal{U}$ of $\mathrm{OPT}_f$, it utilizes Algorithm 1 to keep a group-blind candidate $S_\mu$ with size constraint $k$ and two group-specific candidates $S_{\mu,1}$ and $S_{\mu,2}$ with size constraints $k_1$ and $k_2$ for $X_1$ and $X_2$, respectively. The only difference from Algorithm 1 is that the elements are filtered by group to maintain $S_{\mu,1}$ and $S_{\mu,2}$. After processing all elements of $X$ in one pass, it will post-process the group-blind candidates to make them satisfy the fairness constraint (Lines 7–15). The post-processing is only performed on the subset $\mathcal{U}'$ of $\mathcal{U}$ where $S_\mu$ contains $k$ elements and $S_{\mu,i}$ contains $k_i$ elements for each group $i \in \{1, 2\}$. For each $\mu \in \mathcal{U}'$, $S_\mu$ either already satisfies the fairness constraint or has one over-filled group $i_o$ and one under-filled group $i_u$. If $S_\mu$ is not yet a fair solution, it will be balanced for fairness by first adding $k_{i_u} - k'_{i_u}$ elements, where $k'_{i_u} = |S_\mu \cap X_{i_u}|$, from $S_{\mu,i_u}$ to $S_\mu$, and then removing the same number of elements from $S_\mu \cap X_{i_o}$. The elements to be added and removed are selected greedily, as in GMM [39], to minimize the loss in diversity: the element in $S_{\mu,i_u}$ that is furthest from $S_\mu \cap X_{i_u}$ is picked for each insertion, and the element in $S_\mu \cap X_{i_o}$ that is closest to $S_\mu \cap X_{i_u}$ is picked for each deletion. Finally, the fair candidate with the maximum diversity after post-processing is returned as the final solution for FDM (Line 16). Next, we theoretically analyze the approximation ratio and complexity of SFDM1.
Algorithm 2 SFDM1
Input: Stream $X = X_1 \cup X_2$, distance metric $d(\cdot, \cdot)$, parameter $\varepsilon \in (0, 1)$, size constraints $k_1, k_2 \in \mathbb{Z}^+$ ($k = k_1 + k_2$)
Output: A set $S \subseteq X$ s.t. $|S \cap X_i| = k_i$ for $i \in \{1, 2\}$
Stream processing
1: $\mathcal{U} = \{d_{min} \cdot (1-\varepsilon)^{-j} : j \in \mathbb{Z}_0^+ \wedge (1-\varepsilon)^{-j} \cdot d_{min} \leq d_{max}\}$
2: Initialize $S_\mu, S_{\mu,i} = \emptyset$ for every $\mu \in \mathcal{U}$ and $i \in \{1, 2\}$
3: for all $x \in X$ do
4:     Run Lines 3–6 of Algorithm 1 to update $S_\mu$ w.r.t. $x$
5:     if $c(x) = i$ then
6:         Run Lines 3–6 of Algorithm 1 to update $S_{\mu,i}$ w.r.t. $x$ with size constraint $k_i$
Post-processing
7: $\mathcal{U}' = \{\mu \in \mathcal{U} : |S_\mu| = k \wedge |S_{\mu,i}| = k_i, \forall i \in \{1, 2\}\}$
8: for all $\mu \in \mathcal{U}'$ do
9:     if $|S_\mu \cap X_i| < k_i$ for some $i \in \{1, 2\}$ then
10:         while $|S_\mu \cap X_i| < k_i$ do
11:             $x^+ \leftarrow \arg\max_{x \in S_{\mu,i}} d(x, S_\mu \cap X_i)$
12:             $S_\mu \leftarrow S_\mu \cup \{x^+\}$
13:         while $|S_\mu| > k$ do
14:             $x^- \leftarrow \arg\min_{x \in S_\mu \setminus X_i} d(x, S_\mu \cap X_i)$
15:             $S_\mu \leftarrow S_\mu \setminus \{x^-\}$
16: return $S \leftarrow \arg\max_{\mu \in \mathcal{U}'} div(S_\mu)$
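As a concrete rendering of the post-processing loop (Lines 8–15), consider the following Python sketch. It is a hedged illustration under our own naming, not the paper's implementation: `S_grp[i]` plays the role of $S_{\mu,i}$, `group(x)` returns 1 or 2, and `caps[i]` is $k_i$.

```python
def balance_two_groups(S, S_grp, group, caps, dist):
    """Swap-based balancing of one full size-k candidate S so that it
    contains exactly caps[i] elements from each of the two groups."""
    S = list(S)
    count = {i: sum(1 for x in S if group(x) == i) for i in (1, 2)}
    if all(count[i] == caps[i] for i in (1, 2)):
        return S                              # already fair
    under = 1 if count[1] < caps[1] else 2
    filled = [x for x in S if group(x) == under]
    # Greedily insert the farthest elements of the under-filled group.
    while len(filled) < caps[under]:
        x_add = max((x for x in S_grp[under] if x not in filled),
                    key=lambda x: min((dist(x, y) for y in filled),
                                      default=float("inf")))
        S.append(x_add)
        filled.append(x_add)
    # Greedily delete over-filled elements closest to the filled group.
    while len(S) > caps[1] + caps[2]:
        x_del = min((x for x in S if group(x) != under),
                    key=lambda x: min(dist(x, y) for y in filled))
        S.remove(x_del)
    return S
```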
  • Theoretical Analysis: We prove that SFDM1 achieves an approximation ratio of $\frac{1-\varepsilon}{4}$ for FDM, where $\varepsilon \in (0, 1)$, in Theorem 2. The proof is based on (i) the existence of $\mu \in \mathcal{U}'$ such that $\mu \geq \frac{1-\varepsilon}{2} \cdot \mathrm{OPT}_f$ (Lemma 1) and (ii) $div(S_\mu) \geq \frac{\mu}{2}$ for each $\mu \in \mathcal{U}'$ after post-processing (Lemma 2). Then, we analyze the complexity of SFDM1 in Theorem 3.
Lemma 1.
Let $\mu^*$ be the largest $\mu \in \mathcal{U}'$. It holds that $\mu^* \geq \frac{1-\varepsilon}{2} \cdot \mathrm{OPT}_f$, where $\mathrm{OPT}_f$ is the optimal diversity of FDM on $X$.
Proof. 
First of all, we have $\mathrm{OPT}_f \leq \mathrm{OPT}$, where $\mathrm{OPT}$ is the optimal diversity of unconstrained DM with $k = k_1 + k_2$ on $X$, since any valid solution for FDM must also be a valid solution for DM. Moreover, it holds that $\mathrm{OPT}_f \leq \mathrm{OPT}_{k_i}$, where $\mathrm{OPT}_{k_i}$ is the optimal diversity of unconstrained DM with size constraint $k_i$ on $X_i$ for both $i \in \{1, 2\}$, because the optimal solution must contain $k_i$ elements from $X_i$ and $div(\cdot)$ is a monotonically non-increasing function, i.e., $div(S \cup \{x\}) \leq div(S)$ for any $S \subseteq X$ and $x \in X \setminus S$. Therefore, $\mathrm{OPT}_f \leq div(S_f^* \cap X_i) \leq \mathrm{OPT}_{k_i}$.
Then, according to the results of Theorem 1, we have $\mathrm{OPT} < 2\mu$ if $|S_\mu| < k$ and $\mathrm{OPT}_{k_i} < 2\mu$ if $|S_{\mu,i}| < k_i$ for each $i \in \{1, 2\}$. Note that $\mu^*$ is the largest $\mu \in \mathcal{U}$ such that $|S_\mu| = k$, $|S_{\mu,1}| = k_1$, and $|S_{\mu,2}| = k_2$ after stream processing. For $\mu'' = \frac{\mu^*}{1-\varepsilon} \in \mathcal{U}$, we have either $|S_{\mu''}| < k$ or $|S_{\mu'',i}| < k_i$ for some $i \in \{1, 2\}$. Therefore, it holds that $\mathrm{OPT}_f < 2\mu'' = \frac{2}{1-\varepsilon} \cdot \mu^*$, and we conclude the proof.    □
Lemma 2.
For each $\mu \in \mathcal{U}'$, the candidate solution $S_\mu$ must satisfy $div(S_\mu) \geq \frac{\mu}{2}$ and $|S_\mu \cap X_i| = k_i$ for both $i \in \{1, 2\}$ after post-processing.
Proof. 
The candidate $S_\mu$ before post-processing has exactly $k = k_1 + k_2$ elements but may not contain $k_1$ elements from $X_1$ and $k_2$ elements from $X_2$. If $S_\mu$ has exactly $k_1$ elements from $X_1$ and $k_2$ elements from $X_2$, and thus the post-processing is skipped, we have $div(S_\mu) \geq \mu$ according to Theorem 1. Otherwise, assume w.l.o.g. that $|S_\mu \cap X_1| = k_1' < k_1$; we will add $k_1 - k_1'$ elements from $S_{\mu,1}$ to $S_\mu$ and remove $k_1 - k_1'$ elements from $S_\mu \cap X_2$ to ensure the fairness constraint. In Line 11, all the $k_1$ elements in $S_{\mu,1}$ can be selected for insertion. Since the minimum distance between any pair of elements in $S_{\mu,1}$ is at least $\mu$, each $y \in S_\mu \cap X_1$ can have $d(x, y) < \frac{\mu}{2}$ for at most one element $x \in S_{\mu,1}$. This means that there are at least $k_1 - k_1'$ elements in $S_{\mu,1}$ whose distances to all the existing elements in $S_\mu \cap X_1$ are at least $\frac{\mu}{2}$. Accordingly, after adding $k_1 - k_1'$ elements from $S_{\mu,1}$ to $S_\mu$ greedily, it still holds that $d(x, y) \geq \frac{\mu}{2}$ for any $x, y \in S_\mu \cap X_1$. In Line 14, for each element $x \in S_\mu \cap X_2$, there is at most one (newly added) element $y \in S_\mu \cap X_1$ such that $d(x, y) < \frac{\mu}{2}$. Meanwhile, it is guaranteed that $y$ is the nearest neighbor of $x$ in $S_\mu$ in this case. Therefore, in Line 14, every $x \in S_\mu \cap X_2$ with $d(x, S_\mu \cap X_1) < \frac{\mu}{2}$ is removed, since there are at most $k_1 - k_1'$ such elements and the one with the smallest $d(x, S_\mu \cap X_1)$ is removed at each step. Therefore, $S_\mu$ contains $k_1$ elements from $X_1$ and $k_2$ elements from $X_2$, and $div(S_\mu) \geq \frac{\mu}{2}$ after post-processing.    □
Theorem 2.
SFDM1 returns a $\frac{1-\varepsilon}{4}$-approximate solution for FDM.
Proof. 
According to the results of Lemmas 1 and 2, we have $div(S) \geq div(S_{\mu^*}) \geq \frac{\mu^*}{2} \geq \frac{1-\varepsilon}{4} \cdot \mathrm{OPT}_f$, where $\mu^* = \max_{\mu \in \mathcal{U}'} \mu$.    □
Theorem 3.
SFDM1 stores $O(\frac{k \log \Delta}{\varepsilon})$ elements in memory, takes $O(\frac{k \log \Delta}{\varepsilon})$ time per element for stream processing, and $O(\frac{k^2 \log \Delta}{\varepsilon})$ time for post-processing.
Proof. 
SFDM1 keeps three candidates for each $\mu \in \mathcal{U}$ and $O(k)$ elements in each candidate. Hence, the total number of stored elements is $O(\frac{k \log \Delta}{\varepsilon})$, since $|\mathcal{U}| = O(\frac{\log \Delta}{\varepsilon})$. The stream processing performs, at most, $O(\frac{k \log \Delta}{\varepsilon})$ distance computations per element. Finally, for each $\mu \in \mathcal{U}'$ in the post-processing, at most $k_i (k_i - k_i')$ distance computations are performed to select the elements in $S_{\mu,i}$ that are to be added to $S_\mu$, and at most $k (k_i - k_i')$ distance computations are needed to find the elements that are to be removed. Thus, the time complexity of post-processing is $O(\frac{k^2 \log \Delta}{\varepsilon})$, as $|\mathcal{U}'| = O(\frac{\log \Delta}{\varepsilon})$.    □
  • Comparison with Prior Art: The idea of finding a solution and balancing it for fairness in SFDM1 has also been used in FairSwap [17]. However, FairSwap only works in the offline setting, which keeps the dataset in memory and requires random access for computation, whereas SFDM1 works in the streaming setting, which scans the dataset in one pass and uses only the elements in the candidates for post-processing. Compared with FairSwap, SFDM1 reduces the space complexity from $O(n)$ to $O(\frac{k \log \Delta}{\varepsilon})$ and the time complexity from $O(nk)$ to $O(\frac{k^2 \log \Delta}{\varepsilon})$ at the expense of lowering the approximation ratio by a factor of $1 - \varepsilon$.

4.3. Fair Streaming Algorithm for General m

The detailed procedure of our streaming algorithm for an arbitrary $m \geq 2$, called SFDM2, is presented in Algorithm 3. Similar to SFDM1, it also has two phases: stream processing and post-processing. In the stream processing (Lines 1–7), it utilizes Algorithm 1 to keep a group-blind candidate $S_\mu$ and $m$ group-specific candidates $S_{\mu,1}, \ldots, S_{\mu,m}$ for all the $m$ groups. The difference from SFDM1 is that the size constraint of the group-specific candidate for each group $i$ is $k$ instead of $k_i$. Then, after processing all elements in $X$, a post-processing scheme is required to ensure the fairness of the candidates. Nevertheless, the post-processing procedure is totally different from that of SFDM1, since the swap-based balancing strategy cannot guarantee the validity of the solution with any theoretical bound. Like SFDM1, the post-processing is performed on a subset $\mathcal{U}'$, where $S_\mu$ has $k$ elements and $S_{\mu,i}$ has at least $k_i$ elements for each group $i$ (Line 8). For each $\mu \in \mathcal{U}'$, it initializes with a subset $S_\mu'$ of $S_\mu$ (Line 10). For an over-filled group $i$, i.e., $|S_\mu \cap X_i| > k_i$, $S_\mu'$ contains $k_i$ arbitrary elements of group $i$ from $S_\mu$. For an under-filled or exactly filled group $i$, i.e., $|S_\mu \cap X_i| \leq k_i$, $S_\mu'$ contains all $k_i' = |S_\mu \cap X_i|$ elements of group $i$ from $S_\mu$. Next, new elements from under-filled groups should be added to $S_\mu'$ so that $S_\mu'$ is a fair solution. To find the elements to be added, the set $S_{all}$ of elements in all candidates is divided into a set $\mathcal{C}$ of clusters, which guarantees that $d(x, y) \geq \frac{\mu}{m+1}$ for any $x \in C_a$ and $y \in C_b$ (Lines 12–15), where $C_a$ and $C_b$ are two different clusters in $\mathcal{C}$. Then, $S_\mu'$ is limited to contain, at most, one element from each cluster after new elements are added so that $div(S_\mu') \geq \frac{\mu}{m+1}$. Meanwhile, $S_\mu'$ should still satisfy the fairness constraint. To meet both requirements, the problem of adding new elements to $S_\mu'$ is formulated as an instance of matroid intersection [41,42,43], as will be discussed subsequently (Line 17). Finally, it returns the $S_\mu'$ containing $k$ elements with maximum diversity after post-processing as the final solution for FDM (Line 18). An illustration of the post-processing procedure of SFDM2 is given in Figure 4.
Algorithm 3 SFDM2
Input: Stream $X = \bigcup_{i=1}^{m} X_i$, distance metric $d$, parameter $\varepsilon \in (0, 1)$, size constraints $k_1, \ldots, k_m \in \mathbb{Z}^+$ ($k = \sum_{i=1}^{m} k_i$)
Output: A set $S \subseteq X$ s.t. $|S \cap X_i| = k_i$, $\forall i \in [m]$
Stream processing
1: $\mathcal{U} = \{d_{min} \cdot (1-\varepsilon)^{-j} : j \in \mathbb{Z}_0^+ \wedge (1-\varepsilon)^{-j} \cdot d_{min} \leq d_{max}\}$
2: Initialize $S_\mu, S_{\mu,i} = \emptyset$ for every $\mu \in \mathcal{U}$ and $i \in [m]$
3: for all $x \in X$ do
4:     for all $\mu \in \mathcal{U}$ and $i \in [m]$ do
5:         Run Lines 3–6 of Algorithm 1 to update $S_\mu$ w.r.t. $x$
6:         if $c(x) = i$ then
7:             Run Lines 3–6 of Algorithm 1 to update $S_{\mu,i}$ w.r.t. $x$
Post-processing
8: $\mathcal{U}' = \{\mu \in \mathcal{U} : |S_\mu| = k \wedge |S_{\mu,i}| \geq k_i, \forall i \in [m]\}$
9: for all $\mu \in \mathcal{U}'$ do
10:     For each group $i \in [m]$, pick $\min(k_i, |S_\mu \cap X_i|)$ elements arbitrarily from $S_\mu$ as $S_\mu'$
11:     Let $S_{all} = (\bigcup_{i=1}^{m} S_{\mu,i}) \cup S_\mu$ and $l = |S_{all}|$
12:     Create $l$ clusters $\mathcal{C} = \{C_1, \ldots, C_l\}$, each of which contains one element in $S_{all}$
13:     while there exist $C_a, C_b \in \mathcal{C}$ s.t. $d(x, y) < \frac{\mu}{m+1}$ for some $x \in C_a$ and $y \in C_b$ do
14:         Merge $C_a, C_b$ into a new cluster $C = C_a \cup C_b$
15:         $\mathcal{C} \leftarrow \mathcal{C} \setminus \{C_a, C_b\} \cup \{C\}$
16:     Let $\mathcal{M}_1 = (S_{all}, \mathcal{I}_1)$ and $\mathcal{M}_2 = (S_{all}, \mathcal{I}_2)$ be two matroids, where $S \in \mathcal{I}_1$ iff $|S \cap X_i| \leq k_i$, $\forall i \in [m]$, and $S \in \mathcal{I}_2$ iff $|S \cap C| \leq 1$, $\forall C \in \mathcal{C}$
17:     Run Algorithm 4 to augment $S_\mu'$ such that $S_\mu'$ is a maximum cardinality set in $\mathcal{I}_1 \cap \mathcal{I}_2$
18: return $S \leftarrow \arg\max_{\mu \in \mathcal{U}' : |S_\mu'| = k} div(S_\mu')$
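The clustering step (Lines 12–15) is essentially single-linkage clustering with the threshold $\frac{\mu}{m+1}$. The following Python sketch shows a straightforward quadratic rendering of it; the function name and the representation of clusters as lists are ours.

```python
def cluster_candidates(S_all, dist, mu, m):
    """Merge clusters while two of them contain elements closer than
    mu / (m + 1), as in Lines 12-15 of Algorithm 3."""
    thresh = mu / (m + 1)
    clusters = [[x] for x in S_all]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if any(dist(x, y) < thresh
                       for x in clusters[i] for y in clusters[j]):
                    clusters[i].extend(clusters[j])   # merge C_b into C_a
                    del clusters[j]
                    merged = True
                    break
            if merged:
                break
    return clusters
```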
  • Matroid Intersection: Next, we describe how to use matroid intersection for solution augmentation in SFDM2. We define the first rank-$k$ matroid $\mathcal{M}_1 = (V, \mathcal{I}_1)$ based on the fairness constraint, where the ground set $V$ is $S_{all}$ and $S \in \mathcal{I}_1$ iff $|S \cap X_i| \leq k_i$, $\forall i \in [m]$. Intuitively, a set $S$ is fair if it is a maximal independent set in $\mathcal{I}_1$. Moreover, we define the second rank-$l$ ($l = |\mathcal{C}|$) matroid $\mathcal{M}_2 = (V, \mathcal{I}_2)$ on the set $\mathcal{C}$ of clusters, where the ground set $V$ is also $S_{all}$ and $S \in \mathcal{I}_2$ iff $|S \cap C| \leq 1$, $\forall C \in \mathcal{C}$. Accordingly, the problem of adding new elements to $S_\mu'$ to ensure fairness is an instance of the matroid intersection problem, which aims to find a maximum cardinality set $S \in \mathcal{I}_1 \cap \mathcal{I}_2$ for $\mathcal{M}_1 = (S_{all}, \mathcal{I}_1)$ and $\mathcal{M}_2 = (S_{all}, \mathcal{I}_2)$. Here, we adopt Cunningham's algorithm [41], a well-known solution for the matroid intersection problem based on the augmentation graph in Definition 2.
Definition 2
(Augmentation Graph [41]). Given two matroids $\mathcal{M}_1 = (V, \mathcal{I}_1)$ and $\mathcal{M}_2 = (V, \mathcal{I}_2)$, a set $S \subseteq V$ such that $S \in \mathcal{I}_1 \cap \mathcal{I}_2$, and two sets $V_1 = \{x \in V \setminus S : S \cup \{x\} \in \mathcal{I}_1\}$ and $V_2 = \{x \in V \setminus S : S \cup \{x\} \in \mathcal{I}_2\}$, an augmentation graph is a digraph $G = (V \cup \{a, b\}, E)$, where $a, b \notin V$. There is an edge $(a, x) \in E$ for each $x \in V_1$. There is an edge $(x, b) \in E$ for each $x \in V_2$. There is an edge $(y, x) \in E$ for each $x \in V \setminus S$, $y \in S$ such that $S \cup \{x\} \notin \mathcal{I}_1$ and $(S \cup \{x\}) \setminus \{y\} \in \mathcal{I}_1$. There is an edge $(x, y) \in E$ for each $x \in V \setminus S$, $y \in S$ such that $S \cup \{x\} \notin \mathcal{I}_2$ and $(S \cup \{x\}) \setminus \{y\} \in \mathcal{I}_2$.
Specifically, Cunningham's algorithm [41] is initialized with $S = \emptyset$ (or any $S \in \mathcal{I}_1 \cap \mathcal{I}_2$). At each step, it builds an augmentation graph $G$ for $\mathcal{M}_1$, $\mathcal{M}_2$, and $S$. If there is no directed path from $a$ to $b$ in $G$, then $S$ is already a maximum cardinality set. Otherwise, it finds the shortest path $P^*$ from $a$ to $b$ in $G$ and augments $S$ according to $P^*$: for each $x \in P^* \setminus S$, except $a$ and $b$, add $x$ to $S$; for each $x \in P^* \cap S$, remove $x$ from $S$. We adapt Cunningham's algorithm [41] to our problem, as shown in Algorithm 4. Our algorithm is initialized with $S_\mu'$ instead of $\emptyset$. In addition, to reduce the cost of building $G$ and to maximize the diversity, it first adds the elements in $V_1 \cap V_2$ greedily to $S_\mu'$ until $V_1 \cap V_2 = \emptyset$. This is because a shortest path $P^* = \langle a, x, b \rangle$ in $G$ exists for any $x \in V_1 \cap V_2$, which is easy to verify from Definition 2. Finally, if $|S| < k$ after the above procedures, the standard Cunningham's algorithm will be used to augment $S$ to ensure the maximality of $S$.
Algorithm 4 Matroid Intersection
Input: Two matroids $\mathcal{M}_1 = (V, \mathcal{I}_1)$, $\mathcal{M}_2 = (V, \mathcal{I}_2)$, distance metric $d$, initial set $S_0 \subseteq V$
Output: A maximum cardinality set $S \subseteq V$ in $\mathcal{I}_1 \cap \mathcal{I}_2$
1: Initialize $S \leftarrow S_0$, $V_1 = \{x \in V \setminus S : S \cup \{x\} \in \mathcal{I}_1\}$, and $V_2 = \{x \in V \setminus S : S \cup \{x\} \in \mathcal{I}_2\}$
2: while $V_1 \cap V_2 \neq \emptyset$ do
3:     $x^* \leftarrow \arg\max_{x \in V_1 \cap V_2} d(x, S)$ and $S \leftarrow S \cup \{x^*\}$
4:     for all $x \in V_1$ do
5:         $V_1 \leftarrow V_1 \setminus \{x\}$ if $S \cup \{x\} \notin \mathcal{I}_1$
6:     for all $x \in V_2$ do
7:         $V_2 \leftarrow V_2 \setminus \{x\}$ if $S \cup \{x\} \notin \mathcal{I}_2$
8: Build an augmentation graph $G$ for $S$
9: while there is a directed path from $a$ to $b$ in $G$ do
10:     Let $P^*$ be a shortest path from $a$ to $b$ in $G$
11:     for all $x \in P^* \setminus \{a, b\}$ do
12:         if $x \notin S$ then $S \leftarrow S \cup \{x\}$
13:         else $S \leftarrow S \setminus \{x\}$
14:     Rebuild $G$ for the updated $S$
15: return $S$
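The greedy phase (Lines 1–7) admits a compact rendering. The sketch below is ours, under assumed oracle signatures: `indep1` and `indep2` take a set and return True iff it is independent in $\mathcal{M}_1$ and $\mathcal{M}_2$, respectively; the shortest-path augmentation phase (Lines 8–14) is omitted.

```python
def greedy_matroid_intersection(S0, V, indep1, indep2, dist):
    """Greedy phase of Algorithm 4: while some element can be added
    without violating either matroid, add the one farthest from the
    current set to favor diversity."""
    S = set(S0)
    while True:
        # Elements addable in both matroids (V1 intersect V2).
        addable = [x for x in V - S if indep1(S | {x}) and indep2(S | {x})]
        if not addable:
            break
        x_star = max(addable,
                     key=lambda x: min((dist(x, y) for y in S),
                                       default=float("inf")))
        S.add(x_star)
    return S
```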
  • Theoretical Analysis: We prove that SFDM2 achieves an approximation ratio of $\frac{1-\varepsilon}{3m+2}$ for FDM. The high-level idea of the proof is to connect the clustering procedure in post-processing with the notion of a matroid and then to utilize the geometric properties of the clusters and the theoretical results of matroid intersection for approximation. Next, we first show that the set $\mathcal{C}$ of clusters has several important properties (Lemma 3). Then, we prove that Algorithm 4 can return a fair solution for a specific $\mu$ based on the properties of $\mathcal{C}$ (Lemma 4). Finally, we analyze the time and space complexities of SFDM2 in Theorem 5.
Lemma 3.
The set $\mathcal{C}$ of clusters has the following properties: (i) for any $x \in C_a$ and $y \in C_b$ ($a \neq b$), $d(x, y) \geq \frac{\mu}{m+1}$; (ii) each cluster $C$ contains, at most, one element from $S_\mu$ and from $S_{\mu,i}$ for any $i \in [m]$; (iii) for any $x, y \in C$, $d(x, y) < \frac{m}{m+1} \cdot \mu$.
Proof. 
First of all, Property (i) holds from Lines 12–15 of Algorithm 3, since all clusters that do not satisfy it have been merged. Then, we prove Property (ii) by contradiction. Let us construct an undirected graph $G = (V, E)$ for a cluster $C \in \mathcal{C}$, where $V$ is the set of elements in $C$ and there exists an edge $(x, y) \in E$ iff $d(x, y) < \frac{\mu}{m+1}$. Based on Algorithm 3, for any $x \in C$, there must exist some $y \in C$ ($x \neq y$) such that $d(x, y) < \frac{\mu}{m+1}$. Therefore, $G$ is a connected graph. Suppose that $C$ contains more than one element from $S_\mu$ or $S_{\mu,i}$ for some $i \in [m]$. Let $P_{x,y} = \langle x, \ldots, y \rangle$ be the shortest path of $G$ between $x$ and $y$, where $x$ and $y$ are both from $S_\mu$ or $S_{\mu,i}$. Next, we show that the length of $P_{x,y}$ is, at most, $m + 1$. If the length of $P_{x,y}$ were longer than $m + 1$, there would be a sub-path $P_{x',y'}$ of $P_{x,y}$ whose endpoints $x'$ and $y'$ are both from $S_\mu$ or the same $S_{\mu,i}$, and this violates the fact that $P_{x,y}$ is the shortest such path. Since the length of $P_{x,y}$ is, at most, $m + 1$, we have $d(x, y) < (m + 1) \cdot \frac{\mu}{m+1} = \mu$, which contradicts the fact that $d(x, y) \geq \mu$, as both are from $S_\mu$ or $S_{\mu,i}$. Finally, Property (iii) is a natural extension of Property (ii): since each cluster $C$ contains, at most, one element from $S_\mu$ and from each $S_{\mu,i}$, $C$ has, at most, $m + 1$ elements. Therefore, for any two elements $x, y \in C$, the length of the path between them in $G$ is, at most, $m$, and $d(x, y) < m \cdot \frac{\mu}{m+1} = \frac{m}{m+1} \cdot \mu$.    □
Lemma 4.
If $\mathrm{OPT}_f \geq \frac{3m+2}{m+1} \cdot \mu$, then Algorithm 4 returns a size-$k$ subset $S_\mu'$ such that $S_\mu' \in \mathcal{I}_1 \cap \mathcal{I}_2$ and $div(S_\mu') \geq \frac{\mu}{m+1}$.
Proof. 
First of all, the initial $S_\mu'$ is a subset of $S_\mu$. According to Property (ii) of Lemma 3, all elements of $S_\mu'$ are in different clusters of $\mathcal{C}$, and thus $S_\mu' \in \mathcal{I}_1 \cap \mathcal{I}_2$. The theoretical results in [41] guarantee that Algorithm 4 can find a size-$k$ set in $\mathcal{I}_1 \cap \mathcal{I}_2$ as long as it exists. Next, we will show that such a set exists when $\mathrm{OPT}_f \geq \frac{3m+2}{m+1} \cdot \mu$. To verify this, we need to identify $k_i$ clusters of $\mathcal{C}$ that contain at least one element from $X_i$ for each $i \in [m]$ and show that all $k = \sum_{i=1}^{m} k_i$ clusters are distinct. Here, we consider two cases for each group $i \in [m]$.
  • Case 1: For each $i \in [m]$ such that $k_i \leq |S_{\mu,i}| < k$, we have $d(x, S_{\mu,i}) < \mu$ for each $x \in X_i$. Given the optimal solution $S_f^*$, we define a function $f$ that maps each $x^* \in S_f^*$ from such a group to its nearest neighbor in $S_{\mu,i}$. For two elements $x_a^*, x_b^* \in S_f^*$ in these groups, we have $d(x_a^*, f(x_a^*)) < \mu$, $d(x_b^*, f(x_b^*)) < \mu$, and $d(x_a^*, x_b^*) \geq \mathrm{OPT}_f = div(S_f^*)$. Therefore, $d(f(x_a^*), f(x_b^*)) > \mathrm{OPT}_f - 2\mu$. Since $\mathrm{OPT}_f \geq \frac{3m+2}{m+1} \cdot \mu$, we have $d(f(x_a^*), f(x_b^*)) > \frac{3m+2}{m+1} \cdot \mu - 2\mu = \frac{m}{m+1} \cdot \mu$. According to Property (iii) of Lemma 3, it is guaranteed that $f(x_a^*)$ and $f(x_b^*)$ are in different clusters. By identifying all the clusters that contain $f(x^*)$ for all $x^* \in S_f^*$, we find $k_i$ clusters for each group $i \in [m]$ such that $k_i \leq |S_{\mu,i}| < k$. All the clusters found are guaranteed to be distinct.
  • Case 2: For each $i \in [m]$ such that $|S_{\mu,i}| = k$, we are able to find $k$ clusters, each containing one element from $S_{\mu,i}$, based on Property (ii) of Lemma 3. For such a group $i$, even if $k - k_i$ clusters have been identified for all other groups, there are still at least $k_i$ clusters available for selection. Therefore, we can always find $k_i$ clusters that are distinct from all the clusters identified by any other group for such a group $X_i$.
Considering both cases, we have proven the existence of a size-$k$ set in $\mathcal{I}_1 \cap \mathcal{I}_2$. Finally, for any set $S \in \mathcal{I}_2$, we have $div(S) \geq \frac{\mu}{m+1}$ according to Property (i) of Lemma 3.    □
Theorem 4.
SFDM2 is a $\frac{1-\varepsilon}{3m+2}$-approximation algorithm for FDM.
Proof. 
Let $\mu^+$ be the smallest $\mu \in \mathcal{U} \setminus \mathcal{U}'$. It holds that $\mu^+ > \frac{\mathrm{OPT}_f}{2}$ (see the proof of Lemma 1). Thus, there is some $\mu < \mu^+$ in $\mathcal{U}'$ such that $\mu \in [\frac{(m+1)(1-\varepsilon)}{3m+2} \cdot \mathrm{OPT}_f, \frac{m+1}{3m+2} \cdot \mathrm{OPT}_f]$, as $\frac{m+1}{3m+2} < \frac{1}{2}$ for any $m \in \mathbb{Z}^+$. Therefore, SFDM2 provides a fair solution $S$ such that $div(S) \geq div(S_\mu') \geq \frac{\mu}{m+1} \geq \frac{1-\varepsilon}{3m+2} \cdot \mathrm{OPT}_f$.    □
Theorem 5.
SFDM2 keeps $O(\frac{km \log \Delta}{\varepsilon})$ elements in memory, takes $O(\frac{k \log \Delta}{\varepsilon})$ time per element in the stream processing, and spends $O(\frac{k^2 m \log \Delta}{\varepsilon} \cdot (m + \log^2 k))$ time for post-processing.
Proof. 
SFDM2 keeps $m + 1$ candidates for each $\mu \in \mathcal{U}$ and $O(k)$ elements in each candidate. So, the total number of elements stored by SFDM2 is $O(\frac{km \log \Delta}{\varepsilon})$. Only two candidates, $S_\mu$ and $S_{\mu,c(x)}$, are checked for each element in the stream processing, and thus $O(\frac{k \log \Delta}{\varepsilon})$ distance computations are needed. In the post-processing for each $\mu$, we need $O(k)$ time to obtain the initial solution, $O(k^2 m^2)$ time to cluster $S_{all}$, and $O(k^2 m)$ time to augment the candidate using Lines 2–7 of Algorithm 4. The time complexity of Cunningham's algorithm is $O(k^2 m \log^2 k)$ according to [42,43]. In sum, the overall time complexity of post-processing is $O(\frac{k^2 m \log \Delta}{\varepsilon} \cdot (m + \log^2 k))$.    □
  • Comparison with Prior Art: Existing methods have aimed to find a fair solution based on matroid intersection for fair k-center [21,22,44] and fair max–min diversity maximization [17]. SFDM2 adopts a similar method to FairFlow [17] to construct the clusters and matroids. However, FairFlow solves matroid intersection as a max-flow problem on a directed graph. Its solution is of poor quality in practice, particularly when m is large. Therefore, SFDM2 uses a different method for matroid intersection based on Cunningham’s algorithm, which initializes with a partial solution instead of an empty set for higher efficiency and adds elements greedily like GMM [39] for higher diversity. Hence, SFDM2 has a significantly higher solution quality than FairFlow in practice, though it has a slightly lower approximation ratio.

5. Sliding-Window Algorithms

In this section, we extend our streaming algorithms, i.e., SFDM1 and SFDM2, to the sliding-window model. In Section 5.1, we first present the existing sliding-window algorithm for (unconstrained) diversity maximization [14]. In Section 5.2, we propose our extended sliding-window algorithms for FDM based on the algorithms in Section 4 and Section 5.1.

5.1. (Unconstrained) Sliding-Window Algorithm

The unconstrained sliding-window algorithm is shown in Algorithm 5 and illustrated in Figure 5. First of all, it keeps two sequences $\Lambda, \mathcal{U}$, both ranging from $d_{min}$ to $d_{max}$, to guess the optimum $\mathrm{OPT}[W]$ of DM on the window $W$ (Line 1). For each combination of $\lambda \in \Lambda$ and $\mu \in \mathcal{U}$, it initializes two candidate solutions $A_{\lambda,\mu}$ and $B_{\lambda,\mu}$, each of which will be maintained by Algorithm 1 on two consecutive sub-sequences of $X$. Two maps $A'_{\lambda,\mu}$ and $B'_{\lambda,\mu}$, which store the replacements of the elements in $A_{\lambda,\mu}$ and $B_{\lambda,\mu}$ in case they fall out of the sliding window, are also initialized as empty (Lines 2 and 3). Then, for each element $x \in X$, it adds $x$ to each $B_{\lambda,\mu}$ using the same method as Algorithm 1. Once $x$ is added to $B_{\lambda,\mu}$, it is set as its own replacement in $B'_{\lambda,\mu}$ (Lines 7 and 8). Otherwise, it checks whether the distance between $x$ and any existing element in $B_{\lambda,\mu}$ is smaller than $\mu$ and assigns $x$ as the replacement of the closest such element in $B'_{\lambda,\mu}$ (Lines 9 and 10). Similarly, it also checks whether $x$ can replace any element in $A_{\lambda,\mu}$ and performs the assignment in $A'_{\lambda,\mu}$ if so (Lines 11 and 12). After that, if the diversity of any candidate $B_{\lambda,\mu}$ with $|B_{\lambda,\mu}| = k$ exceeds $\lambda$, it will remove $x$ from $B_{\lambda,\mu}, B'_{\lambda,\mu}$ and set them as $A_{\lambda,\mu}, A'_{\lambda,\mu}$, and then re-initialize a new $B_{\lambda,\mu}$ and $B'_{\lambda,\mu}$ with $x$ (Lines 13–16). We describe the post-processing procedure for the window $W$ containing the last $w$ elements in $X$, which can be easily extended to any window $W(T)$ at time $T$, in Lines 17–23. It considers two cases for different values of $\lambda, \mu$: (i) when $A_{\lambda,\mu} \subseteq W$, it runs any algorithm ALG for (centralized) max–min diversity maximization on $A_{\lambda,\mu} \cup B_{\lambda,\mu}$ to find a size-$k$ candidate solution $S_{\lambda,\mu}$ (Line 20); (ii) when $B_{\lambda,\mu} \subseteq W$ but $A_{\lambda,\mu} \not\subseteq W$, ALG is run on $(W \cap A'_{\lambda,\mu}) \cup B_{\lambda,\mu}$, i.e., the non-expired replacements together with $B_{\lambda,\mu}$, instead (Line 22). Finally, the best solution found after post-processing all candidates is returned as the solution $S$ for the window $W$ (Line 23).
Algorithm 5 SWDM
Input: Stream $X$, distance metric $d(\cdot, \cdot)$, window size $w \in \mathbb{Z}^+$, parameter $\varepsilon \in (0, 1)$, solution size $k \in \mathbb{Z}^+$
Output: A set $S \subseteq W$ with $|S| = k$
1: $\Lambda, \mathcal{U} = \{d_{min} \cdot (1-\varepsilon)^{-j} : j \in \mathbb{Z}_0^+ \wedge (1-\varepsilon)^{-j} \cdot d_{min} \leq d_{max}\}$
2: for all $\lambda \in \Lambda$ and $\mu \in \mathcal{U}$ do
3:     Initialize $A_{\lambda,\mu}, A'_{\lambda,\mu} = \emptyset$ and $B_{\lambda,\mu}, B'_{\lambda,\mu} = \emptyset$
4: for all $x \in X$ do
5:     for all $\lambda \in \Lambda$ do
6:         for all $\mu \in \mathcal{U}$ do
7:             if $|B_{\lambda,\mu}| < k$ and $d(x, B_{\lambda,\mu}) \geq \mu$ then
8:                 $B_{\lambda,\mu} \leftarrow B_{\lambda,\mu} \cup \{x\}$, $B'_{\lambda,\mu}[x] \leftarrow x$
9:             else if $d(x, B_{\lambda,\mu}) < \mu$ then
10:                 $y \leftarrow \arg\min_{y \in B_{\lambda,\mu}} d(x, y)$, $B'_{\lambda,\mu}[y] \leftarrow x$
11:             if $A_{\lambda,\mu} \neq \emptyset$ and $d(x, A_{\lambda,\mu}) < \mu$ then
12:                 $y \leftarrow \arg\min_{y \in A_{\lambda,\mu}} d(x, y)$, $A'_{\lambda,\mu}[y] \leftarrow x$
13:         if $\max_{\mu \in \mathcal{U} : |B_{\lambda,\mu}| = k} div(B_{\lambda,\mu}) > \lambda$ then
14:             Remove $x$ from each $B_{\lambda,\mu}$, $B'_{\lambda,\mu}$
15:             $A_{\lambda,\mu}, A'_{\lambda,\mu} \leftarrow B_{\lambda,\mu}, B'_{\lambda,\mu}$ for each $\mu \in \mathcal{U}$
16:             $B_{\lambda,\mu}, B'_{\lambda,\mu} \leftarrow \{x\}$ for each $\mu \in \mathcal{U}$
Post-processing
17: $W \leftarrow \{x \in X : \max\{1, |X| - w + 1\} \leq t(x) \leq |X|\}$
18: for all $\lambda \in \Lambda$ and $\mu \in \mathcal{U}$ do
19:     if $A_{\lambda,\mu} \subseteq W$ then
20:         $S_{\lambda,\mu} \leftarrow \mathrm{ALG}(k, A_{\lambda,\mu} \cup B_{\lambda,\mu})$
21:     else if $B_{\lambda,\mu} \subseteq W$ then
22:         $S_{\lambda,\mu} \leftarrow \mathrm{ALG}(k, (W \cap A'_{\lambda,\mu}) \cup B_{\lambda,\mu})$
23: return $S \leftarrow \arg\max_{\lambda \in \Lambda, \mu \in \mathcal{U} : |S_{\lambda,\mu}| = k} div(S_{\lambda,\mu})$
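For intuition, the per-element bookkeeping of one $(\lambda, \mu)$ pair can be sketched as follows in Python. This is a simplified, hedged rendering with our own names (`st` is any object holding the candidates `A`, `B` and the replacement maps `A_rep`, `B_rep`); in Algorithm 5, the promotion test of Lines 13–16 is actually shared across all $\mu$ for a given $\lambda$, which we fold into a single pair here.

```python
from itertools import combinations

def swdm_update(x, t, st, k, lam, mu, dist):
    """Sketch of Lines 5-16 of Algorithm 5 for one (lambda, mu) pair:
    maintain candidate B (and the frozen candidate A) plus replacement
    maps so expired elements can be substituted at query time."""
    if len(st.B) < k and all(dist(x, y) >= mu for y in st.B):
        st.B.append(x)
        st.B_rep[x] = (x, t)            # x is its own replacement
    else:
        y = min(st.B, key=lambda y: dist(x, y))
        if dist(x, y) < mu:
            st.B_rep[y] = (x, t)        # x can stand in for y if y expires
    if st.A:
        y = min(st.A, key=lambda y: dist(x, y))
        if dist(x, y) < mu:
            st.A_rep[y] = (x, t)
    # Once B is a full, lambda-diverse solution, freeze it as A and
    # restart B from x (Lines 13-16).
    if len(st.B) == k and min((dist(u, v) for u, v in combinations(st.B, 2)),
                              default=float("inf")) > lam:
        if x in st.B:
            st.B.remove(x)
            st.B_rep.pop(x, None)
        st.A, st.A_rep = st.B, st.B_rep
        st.B, st.B_rep = [x], {x: (x, t)}
```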

5.2. Fair Sliding-Window Algorithms

Generally, to extend SFDM1 and SFDM2 so that they can work in the sliding-window model, we need to modify them in two aspects: (i) the stream processing should follow the procedure of Algorithm 5 instead of Algorithm 1 to maintain the candidate solutions for the case when old elements are deleted from the window W; (ii) the post-processing should be adjusted for the candidate solutions kept by Algorithm 5 during stream processing with theoretical guarantees.
Specifically, the procedures of our extended algorithms, i.e., SWFDM1 and SWFDM2, are presented in Algorithm 6. Here, we put the descriptions of both algorithms together because they share many common subroutines and inherit some others from Algorithms 2–5. Following the procedure of Algorithm 5, they initialize the candidate solutions for different guesses $\lambda, \mu$ of $\mathrm{OPT}[W]$ in the sequences $\Lambda$ and $\mathcal{U}$. In the stream processing (Lines 1–11), SWFDM1 and SWFDM2 adopt the same method as used in Algorithm 5 to maintain the unconstrained candidate solutions as well as the monochromatic candidate solutions for each group $i \in [m]$. The only difference is the solution size of each monochromatic candidate, which is $k_i$ for $i \in \{1, 2\}$ in SWFDM1 but $k$ for each $i \in [m]$ in SWFDM2.
The following theorem indicates the approximation factor of Algorithm 5.
Theorem 6.
Algorithm 5 is a $\frac{\xi(1-\varepsilon)}{5}$-approximation algorithm for max–min diversity maximization when a $\xi$-approximation algorithm ALG for (centralized) max–min diversity maximization is used for post-processing.
We refer readers to Lemma 4.7 in [14] for the proof of Theorem 6. Here, if GMM [39], which is $\frac{1}{2}$-approximate for max–min diversity maximization, is used as ALG, the approximation factor of Algorithm 5 will be $\frac{1-\varepsilon}{10}$. In terms of complexity, Algorithm 5 stores $O(\frac{k \log^2 \Delta}{\varepsilon^2})$ elements, takes $O(\frac{k \log^2 \Delta}{\varepsilon^2})$ time per element for stream processing, and spends $O(\frac{k^2 \log^2 \Delta}{\varepsilon^2})$ time for post-processing.
Algorithm 6 SWFDM
Input: Stream X = ⋃_{i=1}^{m} X_i, distance metric d(·,·), parameter ε ∈ (0,1), window size w ∈ ℤ⁺, size constraints k_1, …, k_m (k = Σ_{i=1}^{m} k_i)
Output: A set S ⊆ W s.t. |S ∩ X_i| = k_i for each i ∈ [m]
Stream processing
 1: Λ, U = {d_min·(1−ε)^{−j} : j ∈ ℤ≥0 ∧ (1−ε)^{−j}·d_min ≤ d_max}
 2: for all λ ∈ Λ, μ ∈ U do
 3:     Initialize A_{λ,μ}, A′_{λ,μ}, B_{λ,μ}, B′_{λ,μ} = ∅
 4:     for all i ∈ [m] do
 5:         Initialize A^{(i)}_{λ,μ}, A′^{(i)}_{λ,μ}, B^{(i)}_{λ,μ}, B′^{(i)}_{λ,μ} = ∅
 6: for all x ∈ X do
 7:     Run Lines 5–16 of Algorithm 5 to update A_{λ,μ}, A′_{λ,μ}, B_{λ,μ}, and B′_{λ,μ} w.r.t. x
 8:     if m = 2 ∧ c(x) = i and ‘SWFDM1’ is used then
 9:         Run Lines 5–16 of Algorithm 5 to update A^{(i)}_{λ,μ}, A′^{(i)}_{λ,μ}, B^{(i)}_{λ,μ}, and B′^{(i)}_{λ,μ} w.r.t. x under size constraint k_i
10:     else if c(x) = i and ‘SWFDM2’ is used then
11:         Run Lines 5–16 of Algorithm 5 to update A^{(i)}_{λ,μ}, A′^{(i)}_{λ,μ}, B^{(i)}_{λ,μ}, and B′^{(i)}_{λ,μ} w.r.t. x under size constraint k
Post-processing
12: W ← {x ∈ X : max{1, |X| − w + 1} ≤ t(x) ≤ |X|}
13: for all λ ∈ Λ and μ ∈ U do
14:     if A_{λ,μ} ⊆ W then
15:         S_{λ,μ} ← ALG(k, A_{λ,μ} ∪ B_{λ,μ})
16:     else if B_{λ,μ} ⊆ W then
17:         S_{λ,μ} ← ALG(k, (W ∩ A′_{λ,μ}) ∪ B_{λ,μ})
18:     if m = 2 and ‘SWFDM1’ is used then
19:         if |S_{λ,μ}| = k ∧ |S_{λ,μ} ∩ X_i| < k_i then
20:             if A^{(i)}_{λ,μ} ⊆ W then
21:                 S^{(i)}_{λ,μ} ← ALG(k_i, A^{(i)}_{λ,μ} ∪ B^{(i)}_{λ,μ})
22:             else if B^{(i)}_{λ,μ} ⊆ W then
23:                 S^{(i)}_{λ,μ} ← ALG(k_i, (W ∩ A′^{(i)}_{λ,μ}) ∪ B^{(i)}_{λ,μ})
24:             Run Lines 10–15 of Algorithm 2 using S_{λ,μ} and S^{(i)}_{λ,μ} as input to find a fair solution S_{λ,μ}
25:     else if ‘SWFDM2’ is used then
26:         for i ∈ [m] do
27:             if A^{(i)}_{λ,μ} ⊆ W then
28:                 S^{(i)}_{λ,μ} ← ALG(k, A^{(i)}_{λ,μ} ∪ B^{(i)}_{λ,μ})
29:             else if B^{(i)}_{λ,μ} ⊆ W then
30:                 S^{(i)}_{λ,μ} ← ALG(k, (W ∩ A′^{(i)}_{λ,μ}) ∪ B^{(i)}_{λ,μ})
31:         Run Lines 10–17 (with d(x, y) < ξμ/(m+1) in Line 13) of Algorithm 3 using S_{λ,μ} and S_all = ⋃_{i=1}^{m} S^{(i)}_{λ,μ} ∪ S_{λ,μ} as input to find a fair solution S_{λ,μ}
32: return S ← argmax_{λ ∈ Λ, μ ∈ U : |S_{λ,μ}| = k} div(S_{λ,μ})
The post-processing steps of both algorithms for the window W containing the last w elements of X are shown in Lines 12–31. Note that these steps can be trivially applied to any window W(T) based on the intermediate candidate solutions at time T. First, an unconstrained solution S_{λ,μ} is computed for each λ ∈ Λ and μ ∈ U from the (unconstrained) candidates kept during stream processing, following Algorithm 5. For SWFDM1, it next checks whether S_{λ,μ} contains k elements and whether an under-filled group exists in S_{λ,μ}. If |S_{λ,μ}| < k, the post-processing procedure is skipped because S_{λ,μ} cannot produce any valid solution. Moreover, if |S_{λ,μ}| = k and the fairness constraint is already satisfied, no further post-processing is required. Otherwise, it computes a group-specific solution S^{(i_u)}_{λ,μ} of size k_{i_u} from the candidates maintained for the under-filled group i_u and performs the procedure of Lines 10–15 of Algorithm 2, greedily swapping elements from S^{(i_u)}_{λ,μ} into S_{λ,μ} and elements of the over-filled group i_o out of S_{λ,μ} until S_{λ,μ} becomes a fair solution. For SWFDM2, it computes a group-specific solution S^{(i)}_{λ,μ} of size k from the group-specific candidates of each i ∈ [m]; S_{λ,μ}, together with every S^{(i)}_{λ,μ}, constitutes the set S_all used for post-processing. Then, using the same method as Algorithm 3, it picks an initial subset of S_{λ,μ}, divides S_all into clusters, and augments the subset via matroid intersection into the new solution S_{λ,μ}. Both algorithms return the fair solution with maximum diversity after post-processing as the final solution for FDM on the window W.
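To picture the SWFDM1 balancing step, the following is a schematic Python sketch for m = 2. The greedy farthest-first choice of which elements to add and drop is our own simplification, and the function and variable names are hypothetical rather than a transcription of Algorithm 2.

```python
def balance_two_groups(S, S_under, group, k_under, k, dist):
    """Schematic fairness repair for m = 2: grow the under-filled group
    from the group-specific solution S_under, then shrink the over-filled
    group back to size k. The farthest-first/closest-first choices below
    are our own simplification of the swapping in Lines 10-15 of Algorithm 2.
    """
    S = set(S)
    # For this sketch, assume group(x) == 0 marks the under-filled group.
    under = {x for x in S if group(x) == 0}
    while len(under) < k_under:
        # Add the candidate from S_under farthest from the current solution.
        cand = max((x for x in S_under if x not in S),
                   key=lambda x: min(dist(x, y) for y in S))
        S.add(cand)
        under.add(cand)
    while len(S) > k:
        # Drop the over-filled-group element closest to the rest of S.
        over = [x for x in S if group(x) == 1]
        victim = min(over, key=lambda x: min(dist(x, y) for y in S if y != x))
        S.remove(victim)
    return S
```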
  • Theoretical Analysis: Subsequently, we will analyze the theoretical soundness and complexities of the extended SWFDM1 and SWFDM2 algorithms for FDM in the sliding-window model by generalizing the analyses for SFDM1 and SFDM2 in Section 4.
Theorem 7.
SWFDM1 is a ((1−ε)ξ/10)-approximation algorithm for FDM in the sliding-window model when a ξ-approximation algorithm is used for post-processing. It keeps O(k·log²Δ/ε²) elements, takes O(k·log²Δ/ε²) time per element in stream processing, and spends O(k²·log²Δ/ε²) time on post-processing.
Proof. 
First, based on the analyses in [14], when μ ≤ OPT[W]/5, there exists λ ∈ Λ such that div(S_{λ,μ}) ≥ ξμ. Let μ be the value in U such that μ ∈ [(1−ε)·OPT_f[W]/5, OPT_f[W]/5], where OPT_f[W] is the optimal diversity for FDM on the window W. Obviously, OPT_f[W] ≤ OPT[W]. Accordingly, we can find values λ ∈ Λ and μ ∈ U with div(S_{λ,μ}) ≥ ξμ. Then, Lemma 2 guarantees that div(S_{λ,μ}) ≥ ξμ/2 after the post-processing procedure. Combining the above results, we have div(S) ≥ div(S_{λ,μ}) ≥ ((1−ε)ξ/10)·OPT_f[W], where S is the solution for FDM on W returned by SWFDM1. Finally, since the number of candidates increases from O(logΔ/ε) to O(log²Δ/ε²) and the complexities of the remaining steps are unchanged, the time and space complexities of SWFDM1 grow by a factor of logΔ/ε compared with SFDM1. □
Theorem 8.
SWFDM2 is a ((1−ε)ξ/(15m+10))-approximation algorithm for FDM in the sliding-window model when a ξ-approximation algorithm is used for post-processing. It keeps O(km·log²Δ/ε²) elements in memory, takes O(k·log²Δ/ε²) time per element in stream processing, and spends O(k²m·log²Δ/ε²·(m + log²k)) time on post-processing.
Proof. 
Similar to the proof of Theorem 7, we find values λ ∈ Λ and μ ∈ U such that μ ∈ [(1−ε)·OPT_f[W]/5, OPT_f[W]/5] and div(S_{λ,μ}) ≥ ξμ, where OPT_f[W] is the optimal diversity value for FDM on W. Then, Lemmas 3 and 4 guarantee that div(S_{λ,μ}) ≥ ξμ/(3m+2) after the post-processing procedure. Combining the above results, we have div(S) ≥ div(S_{λ,μ}) ≥ ((1−ε)ξ/(15m+10))·OPT_f[W], where S is the solution for FDM on W returned by SWFDM2. Since the number of candidates increases from O(logΔ/ε) to O(log²Δ/ε²) and the complexities of the remaining steps are unchanged, the time and space complexities of SWFDM2 grow by a factor of logΔ/ε compared with SFDM2. □
Finally, since the approximation factor ξ of the algorithm ALG we use is Θ(1), e.g., ξ = 1/2 for GMM [39], the approximation factors of SWFDM1 and SWFDM2 can be written as Θ(1) and Θ(m⁻¹), respectively, for simplicity.

6. Experiments

In this section, we evaluate the performance of our proposed algorithms on several real-world and synthetic datasets. We first introduce the experimental setup in Section 6.1, and then present the experimental results in the streaming setting in Section 6.2 and in the sliding-window setting in Section 6.3.

6.1. Experimental Setup

  • Datasets: Our experiments are conducted on four publicly available real-world datasets, as follows:
  • Adult (https://archive.ics.uci.edu/dataset/2/adult, accessed on 12 July 2023) is a collection of 48,842 records from the 1994 US Census database. We select six numeric attributes as features and normalize each of them to have zero mean and unit standard deviation. The Euclidean distance is used as the distance metric. The groups are generated from two demographic attributes: sex and race. By using them individually and in combination, there are two (sex), five (race), and ten (sex + race) groups, respectively.
  • CelebA (https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html, accessed on 12 July 2023) is a set of 202,599 images of human faces. We use 41 pre-trained class labels as features and the Manhattan distance as the distance metric. We generate two groups from sex {‘female’, ‘male’}, two groups from age {‘young’, ‘not young’}, and four groups from their combination, respectively.
  • Census (https://archive.ics.uci.edu/dataset/116/us+census+data+1990, accessed on 12 July 2023) is a set of 2,426,116 records from the 1990 US Census data. We take 25 (normalized) numeric attributes as features and use the Manhattan distance as the distance metric. We generate 2, 7, and 14 groups from sex, age, and both of them, respectively.
  • Lyrics (http://millionsongdataset.com/musixmatch, accessed on 12 July 2023) is a set of 122,448 documents, each of which is the lyrics of a song. We train a topic model with 50 topics using LDA [45] implemented in Gensim (https://radimrehurek.com/gensim, accessed on 12 July 2023). Each document is represented as a 50-dimensional vector and the angular distance is used as the distance metric. We generate 15 groups based on the primary genres of songs.
We also generate different synthetic datasets with varying n and m for scalability tests. In each synthetic dataset, we generate ten two-dimensional Gaussian isotropic blobs with random centers in [ 10 , 10 ] 2 and identity covariance matrices. We assign points to groups uniformly at random. The Euclidean distance is used as the distance metric. The number n of points varies from 10 3 to 10 7 with fixed m = 2 or 10. The number m of groups varies from 2 to 20 with fixed n = 10 5 . The statistics of all datasets are summarized in Table 1.
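For reproducibility, data of this kind can be generated along the following lines; this sketch assumes scikit-learn's make_blobs and NumPy, and the function name and seed handling are ours:

```python
import numpy as np
from sklearn.datasets import make_blobs

def synthetic_dataset(n, m, seed=0):
    """Ten 2-D Gaussian blobs with random centers in [-10, 10]^2 and
    identity covariance; group labels are assigned uniformly at random."""
    X, _ = make_blobs(n_samples=n, n_features=2, centers=10,
                      cluster_std=1.0, center_box=(-10.0, 10.0),
                      random_state=seed)
    rng = np.random.default_rng(seed)
    groups = rng.integers(0, m, size=n)   # uniform group assignment
    return X, groups

X, groups = synthetic_dataset(n=10**5, m=10)
```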
  • Algorithms: We compare our streaming algorithms, i.e., SFDM1 and SFDM2, and our sliding-window algorithms, i.e., SWFDM1 and SWFDM2, with four existing offline FDM algorithms: the 1/(3m−1)-approximation FairFlow algorithm for an arbitrary m, the 1/5-approximation FairGMM algorithm for small k and m, and the 1/4-approximation FairSwap algorithm specific to m = 2, all from [17], as well as the (1−ε)/(m+1)-approximation FairGreedyFlow algorithm for an arbitrary m from [20]. Since no implementation of the algorithms in [17,20] is available, we implemented them ourselves, following the descriptions in the original papers. All algorithms are implemented in Python 3. All experiments were run on a desktop with an Intel® Core i5-9500 3.0 GHz processor and 32 GB RAM running Ubuntu 20.04.3 LTS. Each algorithm was run on a single thread.
For a given solution size k, the group-specific size constraint k_i for each group i ∈ [m] is set based on equal representation, which has been widely used in the literature [21,22,23,27]: if k is divisible by m, then k_i = k/m for each i ∈ [m]; otherwise, k_i = ⌈k/m⌉ for some groups and k_i = ⌊k/m⌋ for the others, while ensuring Σ_{i=1}^{m} k_i = k. We also compare the performance of different algorithms for proportional representation [8,26,27], another popular notion of fairness, which requires that the proportion of elements from each group in the solution roughly preserve the proportion of that group in the dataset; see the sketch below this paragraph.
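Concretely, the size constraints under the two notions can be derived as in the following sketch; the function names are ours, counts[i] denotes the number of elements of group i in the dataset, and the rounding-repair loop is one of several reasonable choices:

```python
def equal_representation(k, m):
    """Split k as evenly as possible: some groups get ceil(k/m), the rest floor(k/m)."""
    base, extra = divmod(k, m)
    return [base + 1 if i < extra else base for i in range(m)]

def proportional_representation(k, counts):
    """Give each group a share of k proportional to its share of the dataset."""
    n = sum(counts)
    ks = [max(1, round(k * c / n)) for c in counts]
    # Fix rounding drift so the sizes sum to exactly k.
    while sum(ks) > k:
        ks[ks.index(max(ks))] -= 1
    while sum(ks) < k:
        ks[ks.index(min(ks))] += 1
    return ks

print(equal_representation(20, 3))                  # [7, 7, 6]
print(proportional_representation(20, [670, 330]))  # [13, 7]
```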
  • Performance Metrics: The performance of each algorithm is evaluated in terms of efficiency, quality, and space usage. Efficiency is measured as the average update time, i.e., the average wall-clock time used to compute a solution for each arriving element in the stream. Quality is measured by the value of the diversity function for the solution returned by an algorithm. Since computing the optimal diversity OPT_f of FDM is infeasible, we run GMM [39] for unconstrained diversity maximization to estimate an upper bound on OPT_f for comparison. Space usage is measured by the number of distinct elements stored by each algorithm. We report the numbers of stored elements only for our proposed algorithms, because the offline algorithms must keep all elements in memory for random access, so their space usage always equals the dataset (or window) size. We run each experiment 10 times with different permutations of the same dataset and report the average of each measure over the 10 runs.
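Since GMM is 1/2-approximate and OPT_f ≤ OPT, the quantity 2·div(S_GMM) upper-bounds OPT_f. The following compact sketch shows this estimation, assuming points stored as NumPy arrays under the Euclidean distance; the function names are ours:

```python
import numpy as np

def div(S):
    """Max-min diversity: smallest pairwise distance within S."""
    S = np.asarray(S)
    d = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=-1)
    return d[np.triu_indices(len(S), k=1)].min()

def gmm(X, k):
    """Gonzalez's greedy: repeatedly pick the point farthest from those chosen."""
    chosen = [0]                                   # arbitrary first point
    d = np.linalg.norm(X - X[0], axis=1)           # distances to the chosen set
    for _ in range(k - 1):
        nxt = int(d.argmax())
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return X[chosen]

X = np.random.default_rng(0).normal(size=(1000, 2))
upper_bound_opt_f = 2 * div(gmm(X, 20))  # valid since GMM is 1/2-approximate
```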

6.2. Results in Streaming Setting

  • Effect of Parameter ε: Figure 6 illustrates the performance of SFDM1 and SFDM2 with different values of ε when k = 20. We range the value of ε from 0.05 to 0.25 on Adult, CelebA, and Census and from 0.02 to 0.1 on Lyrics. Since the angular distance between any two vectors is at most π/2, larger values of ε (e.g., >0.1) lead to greater estimation errors for OPT_f and thus significantly lower solution quality on Lyrics. Generally, SFDM1 has higher efficiency and smaller space usage than SFDM2 for all values of ε, but SFDM2 exhibits better solution quality. Furthermore, the running time and the number of stored elements of both algorithms decrease significantly as ε increases. This is consistent with our analyses in Section 4, because the number of guesses for OPT_f, and thus the number of candidates maintained by both algorithms, is O(logΔ/ε). A slightly surprising result is that the diversity values of the solutions do not degrade noticeably even when ε = 0.25. This can be explained by the fact that both algorithms return the best solution among all candidates after post-processing, so they provide good solutions as long as some μ ∈ U is close to OPT_f; we infer that such a μ still exists when ε = 0.25. Nevertheless, the chance of finding an appropriate value of μ shrinks as ε grows, which leads to less stable solution quality. Therefore, in the experiments for the streaming setting, we always use ε = 0.1 for both algorithms on all datasets except Lyrics, where ε is set to 0.05. The impact of ε on the performance of SWFDM1 and SWFDM2 is generally similar to that on SFDM1 and SFDM2. However, since the number of candidate solutions is quadratic in 1/ε, we use a larger ε = 0.25 for SWFDM1 and SWFDM2 on all datasets except Lyrics, where ε is set to 0.1.
  • Overview: Table 2 presents the performance of different algorithms for FDM in the streaming setting on four real-world datasets with different group partitions when the solution size k is fixed to 20. FairGMM is not included because it needs to enumerate up to C(km, k) = O((em)^k) candidates for solution computation and cannot scale to k > 10 and m > 5. First, compared with the unconstrained solutions returned by GMM, all fair solutions are less diverse because of the additional fairness constraints. Since GMM is a 1/2-approximation algorithm and OPT_f ≤ OPT, 2·div_GMM is an upper bound on OPT_f, from which we observe that all five fair algorithms return solutions with much better approximation ratios than their worst-case bounds. In the case of m = 2, SFDM1 runs the fastest among all five algorithms, achieving speed-ups of two to four orders of magnitude over FairSwap, FairFlow, and FairGreedyFlow. At the same time, its solution quality is close or equal to that of FairSwap in most cases. SFDM2 shows lower efficiency than SFDM1 due to the higher cost of post-processing; it is nevertheless much more efficient than the offline algorithms by taking advantage of stream processing. In addition, the solution quality of SFDM2 benefits from the greedy selection procedure in Algorithm 4: it is not only consistently better than that of SFDM1 but also better than that of FairSwap on the Adult and Census datasets. In the case of m > 2, SFDM1 and FairSwap are no longer applicable. In addition, FairGreedyFlow cannot finish within one day on the Census dataset, and the corresponding results are omitted. SFDM2 shows significant advantages over FairFlow and FairGreedyFlow in terms of both solution quality and efficiency: it provides up to 3.4 times more diverse solutions while running several orders of magnitude faster. In terms of space usage, both SFDM1 and SFDM2 store very small portions of the elements (<0.1% on Census) on all datasets. SFDM2 stores slightly more elements than SFDM1 because the capacity of each group-specific candidate for group i is set to k instead of k_i. For SFDM2, the number of stored elements increases nearly linearly with m, since the number of candidates is linear in m.
  • Effect of Solution Size k: The impact of the solution size k on the performance of different algorithms in the streaming setting is shown in Figure 7 and Figure 8. Here, we vary k in [5, 50] when m ≤ 5, in [10, 50] when 5 < m ≤ 10, and in [15, 50] when m > 10, since we require that an algorithm pick at least one element from each group. For each algorithm, the diversity value drops with k, as the diversity function is monotonically non-increasing, while the update time grows with k, as the time complexities are linear or quadratic w.r.t. k. Compared with the solutions of GMM, all fair solutions are slightly less diverse when m = 2, and the gaps in diversity values become more apparent when m is larger. Although FairGMM achieves slightly higher solution quality than any other algorithm when k ≤ 10 and m = 2, it is not scalable to larger k and m due to the enormous cost of enumeration. The solution qualities of FairSwap, SFDM1, and SFDM2 are close to each other when m = 2 and better than those of FairFlow and FairGreedyFlow, while the efficiencies of SFDM1 and SFDM2 are orders of magnitude higher than those of the offline algorithms. Furthermore, when m > 2, SFDM2 outperforms FairFlow and FairGreedyFlow in terms of both efficiency and effectiveness across all values of k. However, since the time complexity of SFDM2 is quadratic w.r.t. both k and m, its update time increases drastically with k and might approach that of FairFlow when k and m are large.
  • Scalability: We evaluate the scalability of each algorithm in the streaming setting on synthetic datasets by varying the dataset size n from 10³ to 10⁷ and the number of groups m from 2 to 20. The results regarding solution quality and update time for different values of n and m when k = 20 are presented in Figure 9. First of all, SFDM2 shows much better scalability than FairFlow and FairGreedyFlow w.r.t. m in terms of solution quality: the diversity value of the solution of SFDM2 decreases only slightly with m and is up to 3 times higher than those of FairFlow and FairGreedyFlow when m > 10. However, its update time increases more rapidly with m due to the quadratic dependence on m. Moreover, the diversity values of the different algorithms grow slightly with n but remain close to each other for all values of n when m = 2. Finally, the running time of the offline algorithms is linear in n, whereas the update times of SFDM1 and SFDM2 are almost independent of n, as analyzed in Section 4.
  • Equal vs. Proportional Representation: Figure 10 compares the solution quality and running time of different algorithms for two popular notions of fairness, i.e., equal representation (ER) and proportional representation (PR), when k = 20 on Adult, whose groups are highly skewed: 67% of the records are for males and 87% are for Whites. The diversity value of the solution of each algorithm is slightly higher for PR than for ER, as the solution for PR is closer to the unconstrained one. The running times of SFDM1 and SFDM2 are slightly shorter for PR than for ER, since fewer swapping and augmentation steps are performed on each candidate during post-processing. The results for SWFDM1 and SWFDM2 are similar and thus omitted.

6.3. Results in Sliding-Window Setting

  • Overview: Table 3 shows the performance of different algorithms for sliding-window FDM on four real-world datasets with different group settings when the solution size k is fixed to 20 and the window size w is set to 25,000 on Adult (whose size is smaller than 100,000) and 100,000 on the other datasets. FairGMM is again omitted from Table 3 due to its high complexity. Compared with the streaming setting, the “price of fairness” becomes higher in the sliding-window setting, for two possible reasons. First, the approximation factors of our proposed algorithms are lower. Second, when the value of m is large, some minority groups contain barely more than k_i elements in the window, so the selection of elements from such groups is heavily restricted by the fairness constraint. Nevertheless, we still find that all fair algorithms provide solutions with much better approximations than their worst-case bounds.
We observe that SWFDM2 runs the fastest of all five algorithms, achieving 5–150× speedups over FairSwap, FairFlow, and FairGreedyFlow. Moreover, SWFDM1 and SWFDM2 have slightly lower solution quality than FairSwap when m = 2. Nevertheless, SWFDM2 shows significant advantages over FairFlow and FairGreedyFlow in terms of both solution quality and efficiency when m > 2. Unlike in the streaming setting, SWFDM2 is more efficient than SWFDM1. This is because SWFDM2 maintains group-specific solutions with size constraint k, instead of k_i as in SWFDM1, during stream processing. Consequently, its group-specific solutions often expire (i.e., A^{(i)}_{λ,μ} ⊈ W) and thus are not eligible for post-processing. However, such efficiency improvements come at the expense of less diverse solutions. In terms of space usage, both SWFDM1 and SWFDM2 store very small portions of the elements (at most 3.2% of w) across all datasets. SWFDM2 keeps slightly more elements than SWFDM1, again because the capacity of each group-specific solution is k instead of k_i.
  • Effect of Solution Size k: The impact of the solution size k on the performance of different algorithms in the sliding-window setting is illustrated in Figure 11 and Figure 12. We use the same values of k as in the streaming setting; the window size w is set to 25,000 for Adult and 100,000 for the other datasets. For each algorithm, the diversity value drops with k, as the diversity function is monotonically non-increasing, while the update time grows with k, as the time complexities are linear or quadratic w.r.t. k. The gaps in diversity values between unconstrained and fair solutions are much larger than those in the streaming setting, for the reasons explained in the previous paragraph. The solution quality of SWFDM1 and SWFDM2 is slightly lower than that of FairSwap when m = 2 but still better than that of FairFlow and FairGreedyFlow, whereas their efficiencies are always much higher than those of the offline algorithms. Finally, when m > 2, SWFDM2 outperforms FairFlow and FairGreedyFlow in terms of both efficiency and effectiveness across all values of k.
  • Scalability: We evaluate the scalability of each algorithm in the sliding-window setting on synthetic datasets by varying the number of groups m from 2 to 20 and the window size w from 10³ to 10⁶. The results regarding solution quality and update time for different values of w and m when k = 20 are presented in Figure 13. First of all, SWFDM2 shows much better scalability than FairFlow and FairGreedyFlow w.r.t. m in terms of solution quality: the diversity value of its solution decreases only slightly with m, whereas the diversity values of FairFlow and FairGreedyFlow drop drastically with m. Nevertheless, the update time of SWFDM2 increases more rapidly with m, since its time complexity is quadratic w.r.t. m. Furthermore, the results for the diversity values of different algorithms with varying w are similar to those with varying k. As expected, the running time of the offline algorithms is nearly linear in w. However, unlike in the streaming setting, the update times of SWFDM1 and SWFDM2 increase with w because, for a larger value of w, more candidates are non-expired and thus considered in post-processing.

7. Conclusions

In this paper, we studied the diversity maximization problem with fairness constraints in the streaming and sliding-window settings. We first proposed a ((1−ε)/4)-approximation streaming algorithm for this problem when there are two groups in the dataset and a ((1−ε)/(3m+2))-approximation streaming algorithm for an arbitrary number m of groups. Moreover, we extended the two streaming algorithms to the sliding-window model while maintaining approximation factors of Θ(1) and Θ(m⁻¹), respectively. Extensive experiments on real-world and synthetic datasets confirmed the efficiency, effectiveness, and scalability of our proposed algorithms.
In future work, we would like to improve the approximation ratios of the proposed algorithms. It would also be interesting to consider diversity maximization problems with other objective functions and fairness constraints defined on multiple sensitive attributes.

Author Contributions

Conceptualization, Y.W. and M.M.; methodology, Y.W.; software, Y.W., F.F. and J.L.; validation, M.M.; data curation, F.F. and J.L.; writing—original draft preparation, Y.W.; writing—review and editing, Y.W., F.F. and M.M.; visualization, Y.W. and J.L.; supervision, Y.W. and M.M.; funding acquisition, Y.W. and M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the MLDB project of the Academy of Finland (decision number: 322046) and the National Natural Science Foundation of China under grant number 62202169.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Real-world datasets that we use in our experiments are publicly available. The code for generating synthetic data and for our experiments is available at https://github.com/yhwang1990/code-FDM (accessed on 12 July 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ahmed, M. Data summarization: A survey. Knowl. Inf. Syst. 2019, 58, 249–273.
  2. Qin, L.; Yu, J.X.; Chang, L. Diversifying Top-K Results. Proc. VLDB Endow. 2012, 5, 1124–1135.
  3. Zheng, K.; Wang, H.; Qi, Z.; Li, J.; Gao, H. A survey of query result diversification. Knowl. Inf. Syst. 2017, 51, 1–36.
  4. Gollapudi, S.; Sharma, A. An axiomatic approach for result diversification. In Proceedings of the 18th International Conference on World Wide Web, Madrid, Spain, 20–24 April 2009; pp. 381–390.
  5. Rafiei, D.; Bharat, K.; Shukla, A. Diversifying web search results. In Proceedings of the 19th International Conference on World Wide Web, Raleigh, NC, USA, 26–30 April 2010; pp. 781–790.
  6. Kunaver, M.; Pozrl, T. Diversity in recommender systems—A survey. Knowl. Based Syst. 2017, 123, 154–162.
  7. Zadeh, S.A.; Ghadiri, M.; Mirrokni, V.S.; Zadimoghaddam, M. Scalable Feature Selection via Distributed Diversity Maximization. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 2876–2883.
  8. Celis, L.E.; Keswani, V.; Straszak, D.; Deshpande, A.; Kathuria, T.; Vishnoi, N.K. Fair and Diverse DPP-Based Data Summarization. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 715–724.
  9. Borodin, A.; Lee, H.C.; Ye, Y. Max-Sum diversification, monotone submodular functions and dynamic updates. In Proceedings of the 31st ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, Scottsdale, AZ, USA, 21–23 May 2012; pp. 155–166.
  10. Drosou, M.; Pitoura, E. DisC diversity: Result diversification based on dissimilarity and coverage. Proc. VLDB Endow. 2012, 6, 13–24.
  11. Abbassi, Z.; Mirrokni, V.S.; Thakur, M. Diversity maximization under matroid constraints. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Chicago, IL, USA, 11–14 August 2013; pp. 32–40.
  12. Indyk, P.; Mahabadi, S.; Mahdian, M.; Mirrokni, V.S. Composable core-sets for diversity and coverage maximization. In Proceedings of the 33rd ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, Dallas, TX, USA, 15–17 May 2014; pp. 100–108.
  13. Ceccarello, M.; Pietracaprina, A.; Pucci, G. Fast Coreset-based Diversity Maximization under Matroid Constraints. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, Marina Del Rey, CA, USA, 5–9 February 2018; pp. 81–89.
  14. Borassi, M.; Epasto, A.; Lattanzi, S.; Vassilvitskii, S.; Zadimoghaddam, M. Better Sliding Window Algorithms to Maximize Subadditive and Diversity Objectives. In Proceedings of the 38th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, Amsterdam, The Netherlands, 1–3 July 2019; pp. 254–268.
  15. Bauckhage, C.; Sifa, R.; Wrobel, S. Adiabatic Quantum Computing for Max-Sum Diversification. In Proceedings of the 2020 SIAM International Conference on Data Mining, Cincinnati, OH, USA, 7–9 May 2020; pp. 343–351.
  16. Ceccarello, M.; Pietracaprina, A.; Pucci, G.; Upfal, E. MapReduce and Streaming Algorithms for Diversity Maximization in Metric Spaces of Bounded Doubling Dimension. Proc. VLDB Endow. 2017, 10, 469–480.
  17. Moumoulidou, Z.; McGregor, A.; Meliou, A. Diverse Data Selection under Fairness Constraints. In Proceedings of the 24th International Conference on Database Theory, Nicosia, Cyprus, 23–26 March 2021; pp. 13:1–13:25.
  18. Drosou, M.; Pitoura, E. Diverse Set Selection Over Dynamic Data. IEEE Trans. Knowl. Data Eng. 2014, 26, 1102–1116.
  19. Zhang, G.; Gionis, A. Maximizing diversity over clustered data. In Proceedings of the 2020 SIAM International Conference on Data Mining, Cincinnati, OH, USA, 7–9 May 2020; pp. 649–657.
  20. Addanki, R.; McGregor, A.; Meliou, A.; Moumoulidou, Z. Improved Approximation and Scalability for Fair Max-Min Diversification. In Proceedings of the 25th International Conference on Database Theory, Online, 29 March–1 April 2022; pp. 7:1–7:21.
  21. Chiplunkar, A.; Kale, S.; Ramamoorthy, S.N. How to Solve Fair k-Center in Massive Data Models. In Proceedings of the 37th International Conference on Machine Learning, Virtual, 13–18 July 2020; pp. 1877–1886.
  22. Jones, M.; Nguyen, H.; Nguyen, T. Fair k-Centers via Maximum Matching. In Proceedings of the 37th International Conference on Machine Learning, Virtual, 13–18 July 2020; pp. 4940–4949.
  23. Kleindessner, M.; Awasthi, P.; Morgenstern, J. Fair k-Center Clustering for Data Summarization. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 3448–3457.
  24. Schmidt, M.; Schwiegelshohn, C.; Sohler, C. Fair Coresets and Streaming Algorithms for Fair k-means. In Proceedings of the 17th International Workshop on Approximation and Online Algorithms, Munich, Germany, 12–13 September 2019; pp. 232–251.
  25. Huang, L.; Jiang, S.H.; Vishnoi, N.K. Coresets for Clustering with Fairness Constraints. Adv. Neural Inf. Process. Syst. 2019, 32, 7587–7598.
  26. El Halabi, M.; Mitrović, S.; Norouzi-Fard, A.; Tardos, J.; Tarnawski, J.M. Fairness in Streaming Submodular Maximization: Algorithms and Hardness. Adv. Neural Inf. Process. Syst. 2020, 33, 13609–13622.
  27. Wang, Y.; Fabbri, F.; Mathioudakis, M. Fair and Representative Subset Selection from Data Streams. In Proceedings of the Web Conference 2021, Ljubljana, Slovenia, 19–23 April 2021; pp. 1340–1350.
  28. Wang, Y.; Fabbri, F.; Mathioudakis, M. Streaming Algorithms for Diversity Maximization with Fairness Constraints. In Proceedings of the 38th IEEE International Conference on Data Engineering, Kuala Lumpur, Malaysia, 9–12 May 2022; pp. 41–53.
  29. Cevallos, A.; Eisenbrand, F.; Zenklusen, R. Max-Sum Diversity via Convex Programming. In Proceedings of the 32nd International Symposium on Computational Geometry, Boston, MA, USA, 14–18 June 2016; pp. 26:1–26:14.
  30. Cevallos, A.; Eisenbrand, F.; Zenklusen, R. Local Search for Max-Sum Diversification. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, Barcelona, Spain, 16–19 January 2017; pp. 130–142.
  31. Aghamolaei, S.; Farhadi, M.; Zarrabi-Zadeh, H. Diversity Maximization via Composable Coresets. In Proceedings of the 27th Canadian Conference on Computational Geometry, Kingston, ON, Canada, 10–12 August 2015; pp. 38–48.
  32. Chandra, B.; Halldórsson, M.M. Approximation Algorithms for Dispersion Problems. J. Algorithms 2001, 38, 438–465.
  33. Erkut, E. The discrete p-dispersion problem. Eur. J. Oper. Res. 1990, 46, 48–60.
  34. Ravi, S.S.; Rosenkrantz, D.J.; Tayi, G.K. Heuristic and Special Case Algorithms for Dispersion Problems. Oper. Res. 1994, 42, 299–310.
  35. Hassin, R.; Rubinstein, S.; Tamir, A. Approximation algorithms for maximum dispersion. Oper. Res. Lett. 1997, 21, 133–137.
  36. Epasto, A.; Mahdian, M.; Mirrokni, V.; Zhong, P. Improved Sliding Window Algorithms for Clustering and Coverage via Bucketing-Based Sketches. In Proceedings of the 2022 ACM-SIAM Symposium on Discrete Algorithms, Virtual, 9–12 January 2022; pp. 3005–3042.
  37. Bhaskara, A.; Ghadiri, M.; Mirrokni, V.S.; Svensson, O. Linear Relaxations for Finding Diverse Elements in Metric Spaces. Adv. Neural Inf. Process. Syst. 2016, 29, 4098–4106.
  38. Wang, Y.; Mathioudakis, M.; Li, J.; Fabbri, F. Max-Min Diversification with Fairness Constraints: Exact and Approximation Algorithms. In Proceedings of the 2023 SIAM International Conference on Data Mining (SDM), Minneapolis, MN, USA, 27–29 April 2023; pp. 91–99.
  39. Gonzalez, T.F. Clustering to Minimize the Maximum Intercluster Distance. Theor. Comput. Sci. 1985, 38, 293–306.
  40. Korte, B.; Vygen, J. Combinatorial Optimization: Theory and Algorithms; Springer: Berlin/Heidelberg, Germany, 2012.
  41. Cunningham, W.H. Improved Bounds for Matroid Partition and Intersection Algorithms. SIAM J. Comput. 1986, 15, 948–957.
  42. Chakrabarty, D.; Lee, Y.T.; Sidford, A.; Singla, S.; Wong, S.C. Faster Matroid Intersection. In Proceedings of the 60th IEEE Annual Symposium on Foundations of Computer Science, Baltimore, MD, USA, 9–12 November 2019; pp. 1146–1168.
  43. Nguyen, H.L. A note on Cunningham’s algorithm for matroid intersection. arXiv 2019, arXiv:1904.04129.
  44. Chen, D.Z.; Li, J.; Liang, H.; Wang, H. Matroid and Knapsack Center Problems. Algorithmica 2016, 75, 27–52.
  45. Blei, D.M.; Ng, A.Y.; Jordan, M.I. Latent Dirichlet Allocation. J. Mach. Learn. Res. 2003, 3, 993–1022.
Figure 1. Comparison of (a) max–sum dispersion (MSD) and (b) max–min dispersion (MMD) for diversity maximization on a dataset of one hundred points. We use circles and crossmarks to denote all points in the dataset and the points selected based on MSD and MMD.
Figure 2. Comparison of (a) unconstrained max–min diversity maximization and (b) fair max–min diversity maximization. We have a set of individuals, each described by two attributes, partitioned into two disjoint groups of red and blue, respectively. Fair diversity maximization returns a subset of size 10 that maximizes diversity in terms of attributes and contains an equal number (i.e., k_i = 5) of elements from both groups.
Figure 3. Illustration of the SFDM1 algorithm. During stream processing, one group-blind and two group-specific candidates are maintained for each guess μ of OPT_f. Then, a subset of group-blind candidates is selected for post-processing by adding the elements from the under-filled group before deleting the elements from the over-filled one.
Figure 4. Illustration of post-processing in SFDM2. For each μ ∈ U′, an initial S′_μ is first extracted from S_μ by removing the elements from over-filled groups. Then, the elements in all candidates are divided into clusters. The final S′_μ is augmented from the initial solution by adding new elements from under-filled groups based on matroid intersection.
Figure 5. Illustration of the framework of sliding-window algorithms. During stream processing, two candidate solutions A_{λ,μ} and B_{λ,μ}, along with their replacement sets A′_{λ,μ} and B′_{λ,μ}, are maintained for each guess (λ, μ) of OPT[W]. Then, during post-processing, the elements in B_{λ,μ} and A_{λ,μ} (or the non-expired elements in A′_{λ,μ} if A_{λ,μ} has expired) are passed to an existing algorithm for solution computation.
Figure 6. Performance of SFDM1 and SFDM2 with varying parameter ε on (a) Adult (Sex, m = 2), (b) CelebA (Sex, m = 2), (c) Census (Sex, m = 2), and (d) Lyrics (Genre, m = 15) when k = 20.
Figure 7. Solution quality of different algorithms in the streaming setting with varying solution sizes, k. The diversity values of GMM are plotted as gray lines to illustrate the “price of fairness”, i.e., the losses in diversity caused by incorporating the fairness constraints.
Figure 8. Update time of different algorithms in the streaming setting with varying solution sizes, k.
Figure 9. Solution quality and update time on synthetic datasets in the streaming setting with varying dataset sizes, n, and numbers of groups, m (k = 20).
Figure 10. Comparison of different algorithms on Adult for equal representation (ER) and proportional representation (PR) when k = 20.
Figure 11. Solution quality of different algorithms in the sliding-window setting with varying solution size k (w = 25,000 for Adult and 100,000 for the others). The diversity values of GMM are also plotted as gray lines to illustrate the “price of fairness”.
Figure 12. Update time of different algorithms in the sliding-window setting with varying solution size k (w = 25,000 for Adult and 100,000 for the others).
Figure 13. Solution quality and update time on synthetic datasets in the sliding-window setting with varying window size w and number of groups m (k = 20).
Table 1. Statistics of datasets used in the experiments.

| Dataset | Group | n | m | #Features | Distance Function |
|---|---|---|---|---|---|
| Adult | Sex | 48,842 | 2 | 6 | Euclidean |
| Adult | Race | 48,842 | 5 | 6 | Euclidean |
| Adult | S + R | 48,842 | 10 | 6 | Euclidean |
| CelebA | Sex | 202,599 | 2 | 41 | Manhattan |
| CelebA | Age | 202,599 | 2 | 41 | Manhattan |
| CelebA | S + A | 202,599 | 4 | 41 | Manhattan |
| Census | Sex | 2,426,116 | 2 | 25 | Manhattan |
| Census | Age | 2,426,116 | 7 | 25 | Manhattan |
| Census | S + A | 2,426,116 | 14 | 25 | Manhattan |
| Lyrics | Genre | 122,448 | 15 | 50 | Angular |
| Synthetic | – | 10³–10⁷ | 2–20 | 2 | Euclidean |
Table 2. Overview of the performance of different algorithms in the streaming setting (k = 20). “Div.” denotes the diversity value, “Time” is in seconds, “#Elem” is the number of stored elements, and “–” marks a setting where an algorithm is inapplicable or did not finish.

| Dataset | Group | GMM Div. | FairSwap Div. | FairSwap Time | FairFlow Div. | FairFlow Time | FairGreedyFlow Div. | FairGreedyFlow Time | SFDM1 Div. | SFDM1 Time | SFDM1 #Elem | SFDM2 Div. | SFDM2 Time | SFDM2 #Elem |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Adult | Sex | 5.0226 | 4.1485 | 7.06 | 3.1190 | 5.45 | 2.1315 | 59.99 | 3.9427 | 0.0256 | 90.2 | 4.1710 | 0.0965 | 120.4 |
| Adult | Race | 5.0226 | – | – | 1.3702 | 5.82 | 1.1681 | 58.59 | – | – | – | 3.1373 | 1.0175 | 312.3 |
| Adult | S + R | 5.0226 | – | – | 1.0049 | 6.55 | 0.8490 | 60.92 | – | – | – | 2.9182 | 3.0914 | 620.6 |
| CelebA | Sex | 13.0 | 11.4 | 35.13 | 8.4 | 22.97 | 5.0 | 705.4 | 9.8 | 0.0188 | 87.2 | 10.9 | 0.0410 | 122.3 |
| CelebA | Age | 13.0 | 11.4 | 31.69 | 7.2 | 23.06 | 5.0 | 657.4 | 10.4 | 0.0225 | 94.6 | 10.8 | 0.0591 | 128.0 |
| CelebA | S + A | 13.0 | – | – | 6.3 | 24.17 | 3.5 | 312.8 | – | – | – | 10.4 | 0.1124 | 193.1 |
| Census | Sex | 35.0 | 27.0 | 372.3 | 17.5 | 254.6 | – | – | 27.0 | 0.0321 | 121.5 | 31.0 | 0.0931 | 163.0 |
| Census | Age | 35.0 | – | – | 8.5 | 294.0 | – | – | – | – | – | 21.0 | 0.8671 | 676.0 |
| Census | S + A | 35.0 | – | – | 5.0 | 347.5 | – | – | – | – | – | 19.0 | 3.7539 | 1276.0 |
| Lyrics | Genre | 1.5476 | – | – | 0.4244 | 14.84 | 0.6732 | 302.6 | – | – | – | 1.4463 | 2.6785 | 677.2 |
Table 3. Overview of the performance of different algorithms in the sliding-window setting (k = 20; w = 25,000 for Adult and w = 100,000 for the other datasets). “Div.” denotes the diversity value, “Time” is in seconds, “#Elem” is the number of stored elements, and “–” marks a setting where an algorithm is inapplicable or did not finish.

| Dataset | Group | GMM Div. | FairSwap Div. | FairSwap Time | FairFlow Div. | FairFlow Time | FairGreedyFlow Div. | FairGreedyFlow Time | SWFDM1 Div. | SWFDM1 Time | SWFDM1 #Elem | SWFDM2 Div. | SWFDM2 Time | SWFDM2 #Elem |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Adult | Sex | 4.9598 | 4.0568 | 6.11 | 3.0660 | 5.47 | 2.0501 | 13.95 | 3.5445 | 0.8704 | 31.5 | 3.4052 | 0.4895 | 51.8 |
| Adult | Race | 4.9598 | – | – | 1.2364 | 5.87 | 0.4162 | 7.71 | – | – | – | 2.5212 | 1.0566 | 16.1 |
| Adult | S + R | 4.9598 | – | – | 0.9105 | 6.43 | 0.3276 | 4.15 | – | – | – | 1.7843 | 1.0277 | 99.6 |
| CelebA | Sex | 12.0 | 11.3 | 28.12 | 7.7 | 23.22 | 6.0 | 138.4 | 9.7 | 2.5264 | 27.7 | 8.7 | 0.8755 | 60.6 |
| CelebA | Age | 12.0 | 10.6 | 26.99 | 7.3 | 23.11 | 6.0 | 121.5 | 10.5 | 2.9664 | 18.5 | 8.7 | 0.9765 | 37.2 |
| CelebA | S + A | 12.0 | – | – | 6.2 | 24.10 | 4.0 | 112.1 | – | – | – | 8.8 | 2.1198 | 64.1 |
| Census | Sex | 32.0 | 30.0 | 26.33 | 18.0 | 21.49 | 11.0 | 109.9 | 29.0 | 2.5933 | 77.0 | 28.0 | 1.6143 | 97.0 |
| Census | Age | 32.0 | – | – | 5.0 | 22.49 | 2.0 | 76.15 | – | – | – | 13.0 | 2.5686 | 46.0 |
| Census | S + A | 32.0 | – | – | 2.0 | 24.11 | 5.0 | 128.4 | – | – | – | 13.0 | 3.0777 | 96.0 |
| Lyrics | Genre | 1.5586 | – | – | 0.2522 | 20.12 | 0.6432 | 133.4 | – | – | – | 1.2166 | 4.6941 | 132.4 |
