1 Introduction

We are living in the information era, where almost all data are digitised. Many different techniques of data mining, machine learning and artificial intelligence can be applied to extract useful information in many different contexts. Education is one such context, where the intelligent analysis of this information, applied to academic activities, has led to the emergence of what is known as Educational Data Mining (EDM) [1]. Currently, many approaches have shown the benefits of knowledge extraction techniques to forecast students’ performance [2], forecast student dropout [3], or recommend educational activities [4], among others. Roughly speaking, the advantage of using computational learning techniques to study these problems is that they can rely on large amounts of data and are therefore able to extract relevant knowledge about the patterns of students’ academic behaviour in a wide variety of situations. This extracted knowledge makes it possible to devise different strategies to improve the academic success of students.

Most of the artificial intelligence techniques that have been applied in Educational Data Mining belong to the class of supervised machine learning [5]. The most popular techniques have been deep neural networks (deep learning) [6] and decision trees (random forests) [7], although other methods have also been applied successfully, including continuous and logistic regression [8], support vector machines [9], association rules [10] and evolutionary algorithms [11].

We will use formal concept analysis (FCA) [12] as our tool to extract knowledge from a dataset containing students’ results in a mathematics course. Specifically, we propose the use of FCA to model the knowledge structure of an academic course and the dependencies among its different units.

Knowledge Space Theory (KST) [13] proposes the evaluation of a person’s knowledge state with respect to a given knowledge domain. The reference knowledge domain is a set of questions or items to be solved in a course, and the knowledge state is the set of items the person is able to solve. The links between KST and FCA, and how to explore KST in terms of FCA, are shown in [14]; concretely, these authors state that “the FCA community developed a bundle of remarkably fast algorithms which could be exploited” in the field of Technology-Enhanced Learning.

Classical FCA extracts knowledge from a binary dataset relating objects and attributes: if the cell in row i, column j of the dataset contains a one or a cross, then the object \(O_i\) is related to the attribute \(A_j\). In its general approach, FCA uses only the positive information in the dataset, i.e., the ones in the binary relation.

Our proposal extends this standard approach in FCA by incorporating mixed attributes, that is, considering positive and negative information in the same framework. If the cell in row k, column l of the dataset does not contain a one or a cross, then the object \(O_k\) is not related to the attribute \(A_l\). This information is overlooked by the classical approach, but we use it in our framework. In the educational context, this means that we use information about both passed and failed exams, not only about failed ones, which is the framework established in [15]. Thus, the objective is to identify patterns, seen as relationships between course units, representing relevant hidden knowledge.

In addition, FCA provides a formal logic system based on implications that determine dependencies in the dataset. Therefore, FCA becomes a suitable tool to formally reason with attributes related to the acquisition of concepts, skills and competencies, as well as for the study of academic performance and the knowledge structure of a course, as proposed in the present work. FCA has recently been used to cluster students according to their academic behaviour to guide them in their academic path [16] or to build a decision support system capable of identifying students at risk of dropping out in Massive Open Online Courses (MOOCs) [17].

One of the advantages of using a logical approach to this problem is the possibility of explaining and interpreting the results obtained and, more importantly, the model built. The methods mentioned above in Educational Data Mining, such as neural networks or random forests, are based on the iterated application of statistical techniques and, in some cases, are black boxes, where the model built is not interpretable by the user in a simple way. With FCA, interpretability and explainability are guaranteed because the knowledge is expressed in the form of logical rules, which allow for traceability.

From a theoretical point of view, the contributions of this work are related to mixed formal contexts. Specifically, we establish a relation between the concept lattice of the mixed context and the concept lattices that consider only positive or only negative information. To extract more granular knowledge from the mixed context, we will compute the minimal generators [18] of all closed sets from the set of implications computed using FCA.

Minimal generators [19] play a major role in different areas such as databases, graph theory, data mining, etc.

We emphasize that, in the theoretical framework, the crisp background about minimal generators presented in [18] is extended to the mixed paradigm in this paper. Relationships between the minimal generators of mixed contexts and the minimal generators of the positive and negative contexts are presented here. These relationships prove that more knowledge can be retrieved than by analysing the individual contexts separately.

As mentioned above, to put the proposed theoretical framework into practice, we have applied it to the specific case of a high school Mathematics course in Spain. From a binary relation with the results of students in a set of relevant items in the course, we extract the implications using FCA techniques [20] and all the minimal generators and their closed sets using Simplification Logic [21]. Simplification Logic is the reasoning tool that we use to draw conclusions from the relationships in the data, remove redundancy in the implications [22], and compute the minimal generators [18]. In our case study, we show the use of minimal generators as a means of extracting the core items in the course, as well as answering the question “Is it possible for a student who has failed a certain number of units to pass the course?”.

The paper is structured as follows. In Sect. 2, we present the basic notions needed for readers from different backgrounds to fully understand the content of the article. Later, in Sect. 3 we present the technical details of our proposal and the main theoretical contributions. In Sect. 4, the case study is presented, and an in-depth analysis is performed using the mentioned tools. The conclusions of this work and the proposal for future research directions are given in Sect. 5.

2 Background

In this section, we present the basic notions of formal concept analysis and Simplification Logic, which are used to find the minimal generators of the knowledge space.

2.1 Formal Concept Analysis

Formal concept analysis (FCA) is a helpful tool to extract knowledge from a dataset (called a formal context). A formal context is defined as a triple \({\mathbb {K}}= (G, M, I)\) where G is a set of objects, M is a set of attributes, and I is a relation between G and M (called the incidence relation) with the following interpretation: if the pair \((g,m)\in I\), then we say that the object g has the attribute m. The incidence relation is usually represented by a table where the rows are objects and the columns are attributes. A cross in a cell means that the object corresponding to the row has the attribute corresponding to the column.

FCA is closely related to Galois connections, which are two maps \(\phi : P \rightarrow Q\) and \(\psi : Q \rightarrow P\) between two ordered sets \((P, \le )\) and \((Q, \le )\) satisfying:

  1. \(\phi\) is antitone, i.e., if \(p_1 \le p_2\), then \(\phi (p_1) \ge \phi (p_2)\);

  2. \(\psi\) is antitone, i.e., if \(q_1 \le q_2\), then \(\psi (q_1) \ge \psi (q_2)\);

  3. for all \(p\in P\) and \(q\in Q\) we have that:

     $$\begin{aligned} p\le \psi (q) \iff q\le \phi (p). \end{aligned}$$

Given a formal context \({\mathbb {K}}= (G,M,I)\), we can define a Galois Connection between the set of attributes and objects as follows. The first map, \(^\uparrow : 2^G \rightarrow 2^M\), is defined as \(A^{\uparrow }= \{m \in M \mid (g,m) \in I \quad \forall g \in A\}\). That is, given a set of objects \(A\subseteq G\), \(A^{\uparrow }\) is the set of all the attributes shared by all the objects in A. The second map of the Galois Connection, denoted by \(^\downarrow\), is defined as \(^\downarrow : 2^M \rightarrow 2^G\) with \(B^{\downarrow }= \{g \in G \mid (g,m) \in I \quad \forall m \in B\}\). In other words, given a set of attributes \(B\subseteq M\), \(B^{\downarrow }\) is the set of all the objects that have all the attributes in B. The pair \((^\uparrow ,\,^\downarrow )\) forms a Galois Connection [12, 20]. Hence, the compositions \(^\uparrow \circ ^\downarrow\) and \(^\downarrow \circ ^\uparrow\) are closure operators. For the sake of the presentation, hereafter, we omit the symbol \(\circ\) to denote such a composition; i.e., we write \(^{\uparrow \downarrow }\) and \(^{\downarrow \uparrow }\), respectively. A set C is said to be closed under the Galois connection \((^\uparrow ,\,^\downarrow )\) if \(C^{\uparrow \downarrow }=C\).
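These derivation operators translate directly into a few lines of Python. The context below (objects o1–o3, attributes a–c) is a hypothetical toy example, not the paper's data:

```python
# Hypothetical toy formal context: membership in I means "object has attribute".
G = ["o1", "o2", "o3"]
M = ["a", "b", "c"]
I = {("o1", "a"), ("o1", "b"), ("o2", "b"), ("o2", "c"), ("o3", "b")}

def up(A):
    """A^(up-arrow): attributes shared by all objects in A."""
    return {m for m in M if all((g, m) in I for g in A)}

def down(B):
    """B^(down-arrow): objects having all attributes in B."""
    return {g for g in G if all((g, m) in I for m in B)}
```

For instance, `up(down({"a"}))` computes the closure \(\{a\}^{\downarrow \uparrow }\) of the attribute set \(\{a\}\).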

Once the mappings \(^\uparrow\) and \(^\downarrow\) have been introduced, we can define the notion of formal concept, which is a pair \((A,B)\subseteq G\times M\) such that \(A^{\uparrow }=B\) and \(B^{\downarrow }= A\). The subset A is said to be the extent of the formal concept and B is said to be the intent of the formal concept. Given a formal concept \((A,B)\), all the objects in A share all the attributes in B and do not share any other attributes. Moreover, we can define an order relation between formal concepts: given two formal concepts \((A,B)\) and \((C,D)\), we say that \((A,B) \le (C,D)\) if and only if \(A \subseteq C\) (or equivalently, if and only if \(D \subseteq B\)). Indeed, this order relation defines a structure of complete lattice on the set of formal concepts, where the supremum and infimum are given by:

$$\begin{aligned} \sup _{j \in J} (A_j,B_j)=\Big ((\bigcup _{j \in J} A_j)^{\uparrow \downarrow },\bigcap _{j\in J} B_j\Big ) \qquad \inf _{j \in J} (A_j,B_j)=\Big (\bigcap _{j \in J} A_j, (\bigcup _{j\in J} B_j)^{\downarrow \uparrow }\Big ) \end{aligned}$$

for any family of formal concepts \(\{(A_j,B_j)\ :j\in J\}\). The complete lattice defined by this order is called the Concept Lattice of the formal context \({\mathbb {K}} = (G,M,I)\) and we denote it by \(\underline{{\mathbb {B}}}({\mathbb {K}})\). In addition, it can be proved that every complete lattice L can be seen as a concept lattice of a certain formal context [23, Chapters 3 and 7].

Throughout the paper, we will use the term formal concept for pairs \((A,B)\subseteq G\times M\), as well as for subsets of attributes which are closed under the closure operator \(^{\downarrow \uparrow }\); that is, we identify formal concepts with their intents.
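Since every intent is the closure of some attribute set, the whole concept lattice of a small context can be enumerated by brute force. A sketch on a hypothetical toy context (illustrative only, not the paper's data):

```python
from itertools import combinations

# Hypothetical toy formal context.
G = ["o1", "o2", "o3"]
M = ["a", "b", "c"]
I = {("o1", "a"), ("o1", "b"), ("o2", "b"), ("o2", "c"), ("o3", "b")}

def up(A):
    return frozenset(m for m in M if all((g, m) in I for g in A))

def down(B):
    return frozenset(g for g in G if all((g, m) in I for m in B))

# Every intent is the closure B^(down up) of some B; brute force over 2^M.
intents = set()
for r in range(len(M) + 1):
    for B in combinations(M, r):
        intents.add(up(down(frozenset(B))))

# Each intent, paired with its extent, is a formal concept.
concepts = [(down(B), B) for B in intents]
```

This exponential enumeration is only for illustration; practical FCA algorithms (e.g. NextClosure) avoid visiting all of \(2^M\).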

2.2 Simplification Logic and Minimal Generators

One crucial advantage of FCA is that the implications arising from a formal context \({\mathbb {K}}=(G, M, I)\) can be treated using formal logic. Consequently, we may say that FCA provides a suitable mathematical background to express, represent, reason, deduce and explain the pieces of knowledge extracted from a dataset.

Let us begin by describing the syntax and semantics of the logic that models the dependencies between attributes in a formal context \({\mathbb {K}}=(G, M, I)\). Syntactically, given two sets of attributes \(A, B\subseteq M\), we define an implication “A implies B” and denote it by \(A\rightarrow B\). Semantically, we say that an implication \(A \rightarrow B\) is true if all objects with all the attributes in A have all the attributes in B. Formally, we say that an implication \(A \rightarrow B\) is valid in a context \({\mathbb {K}}\) if and only if \(B \subseteq A^{\downarrow \uparrow }\), which is equivalent to \(A^{\downarrow } \subseteq B^{\downarrow }.\) Moreover, we say that a context \({\mathbb {K}}\) is a model of a set of implications \(\Sigma\) if every implication in \(\Sigma\) is valid in \({\mathbb {K}}\).
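The validity condition \(A^{\downarrow } \subseteq B^{\downarrow }\) can be checked mechanically. A minimal sketch, assuming a hypothetical toy context:

```python
# Hypothetical toy formal context (illustrative only).
G = ["o1", "o2", "o3"]
M = ["a", "b", "c"]
I = {("o1", "a"), ("o1", "b"), ("o2", "b"), ("o2", "c"), ("o3", "b")}

def down(B):
    """Objects having every attribute in B."""
    return {g for g in G if all((g, m) in I for m in B)}

def is_valid(A, B):
    """A -> B is valid iff every object with all of A also has all of B,
    i.e. A^(down) is contained in B^(down)."""
    return down(A) <= down(B)
```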

The benefit of using a logic system to represent dependencies between attributes is that we can perform (syntactic) inferences and define logical consequences. Accordingly, we say that a formula \(A \rightarrow B\) is a logical consequence of a set of implications \(\Sigma\) (denoted \(\Sigma \models A\rightarrow B\)) if every model of \(\Sigma\) is also a model of \(A\rightarrow B\). In other words, logical consequences are implications that must be true if we assume a set of valid implications. Hence, and thanks to formal logic, we may differentiate between correct conclusions (i.e., logical consequences) and fallacious conclusions.

The (syntactic) inferences require the use of axioms and inference rules. It is well known that we can manage this kind of implication through Armstrong’s Axioms [24]. In this paper, we consider Simplification Logic instead; this logic is equivalent to Armstrong’s Axioms but is more appropriate for designing automatic reasoning methods [21]. Simplification Logic considers “Inclusion” as an axiom

  • [Inc] Inclusion: \(\vdash _{{\mathcal {S}}} A B \rightarrow A\).

and three inference rules called “fragmentation”, “composition” and “simplification”, respectively:

  • [Frag] Fragmentation: \(A \rightarrow BC \vdash _{{\mathcal {S}}} A \rightarrow B\).

  • [Comp] Composition: \(A \rightarrow B, C \rightarrow D \vdash _{{\mathcal {S}}} AC \rightarrow BD\).

  • [Simp] Simplification: \(A \rightarrow B, C \rightarrow D \vdash _{{\mathcal {S}}} A (C-B) \rightarrow D\).
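These inference rules are plain set manipulations. A sketch, with implications represented as pairs of sets (an encoding of our own, not taken from the paper):

```python
# An implication A -> B is encoded as the pair (A, B) of attribute sets.

def frag(impl, keep):
    """[Frag]: from A -> B we may derive A -> B' for any B' contained in B."""
    A, B = impl
    assert keep <= B, "the kept part must lie inside the conclusion"
    return (A, keep)

def comp(i1, i2):
    """[Comp]: from A -> B and C -> D derive A∪C -> B∪D."""
    (A, B), (C, D) = i1, i2
    return (A | C, B | D)

def simp(i1, i2):
    """[Simp]: from A -> B and C -> D derive A ∪ (C − B) -> D."""
    (A, B), (C, D) = i1, i2
    return (A | (C - B), D)
```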

Simplification Logic is sound and complete; i.e., given a set of implications \(\Sigma\), any inferred implication is a logical consequence of \(\Sigma\) and, conversely, any logical consequence of \(\Sigma\) can be deduced through Simplification Logic. It is worth mentioning that syntactic inferences allow obtaining logical consequences much more straightforwardly than applying the definition. Therefore, Simplification Logic is a powerful tool for performing correct deductions.

Another point that supports the use of the previously described logic is that we can represent the knowledge in a context through a set of implications, which is much easier to interpret. That is, given a context \({\mathbb {K}}=(G, M, I)\), we aim at determining a set of implications \(\Sigma\) such that (1) \({\mathbb {K}}\) is a model of \(\Sigma\) and (2) for every implication \(A\rightarrow B\) valid in \({\mathbb {K}}\), we have \(\Sigma \vdash _{{\mathcal {S}}} A\rightarrow B\). A set of implications satisfying these properties is called complete. Hereinafter, every set of implications is assumed to be complete.

Given a set of implications \(\Sigma\) and a set of attributes \(A\subseteq M\), we define the logical closure of A as the maximum (with respect to inclusion) set \(Y\subseteq M\) such that \(A\rightarrow Y\) can be derived from \(\Sigma\) using the Armstrong Axioms or, equivalently, Simplification Logic. This set is denoted by \(A^+_\Sigma\). See [21] for an efficient method using Simplification Logic to compute the closure of a set of attributes. Given a complete set of implications \(\Sigma\), and since Simplification Logic is sound and complete, the logical closure operator induced by \(\Sigma\) coincides with the closure operator of FCA, i.e., for any \(A\subseteq M\), we have \(A^+_\Sigma = A^{\downarrow \uparrow }\). Therefore, we can use either Simplification Logic or the FCA derivation operators to compute the same closure.

Then, given a closed set of attributes \(A\subseteq M\), we say that \(C\subseteq M\) is a minimal generator of A if

  • \(C^+_{\Sigma }=A\) and

  • if \(D^+_{\Sigma }=A\) for certain \(D\subseteq C\), then \(D = C\).

Therefore, given a context \({\mathbb {K}}=(G, M, I)\), a minimal generator of M determines a minimal set of attributes from which we can infer the rest of M. In other words, minimal generators compress all the knowledge into only a few attributes and, in a sense, are more valuable than the others. Note that this definition of minimal generator via the logical closure associated with a set of implications is equivalent to using the concept-forming operators of FCA; that is, C is a minimal generator of \(A\subseteq M\) if and only if \(C^{\downarrow \uparrow } = A\) and, for all \(D\subseteq C\) with \(D^{\downarrow \uparrow } = A\), we have \(D = C\).

Example 1

Let us consider the set of attributes \(M=\{a,b,c,d,e,f\}\) and the following set of implications:

$$\begin{aligned} \Sigma =\big \{ \{a,b\}\rightarrow \{c\}; \{a,d\}\rightarrow \{e,f\}; \{b,c,e\}\rightarrow \{d,f\} \big \} \end{aligned}$$

Then, it is easy to prove that both sets \(\{a,b,d\}\) and \(\{a,b,e\}\) are minimal generators of M (note that we may have different minimal generators of the same set). In this respect, we can ensure that any object with the attributes a, b and d necessarily also has the rest of the attributes, due to the dependencies given by \(\Sigma\). To point out the importance of minimal generators, if we were in a context of employee selection and M were the attributes we are interested in, then we could just focus on verifying the attributes a, b and d (respectively, a, b and e) of applicants, since the rest are consequences of those. \(\square\)
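The claims of this example can be verified with a short brute-force script: a naive fixpoint computation of the logical closure (not the efficient method of [21]) plus an exhaustive check of all proper subsets:

```python
from itertools import combinations

M = frozenset("abcdef")
# The set of implications Sigma from Example 1.
sigma = [
    (frozenset("ab"), frozenset("c")),
    (frozenset("ad"), frozenset("ef")),
    (frozenset("bce"), frozenset("df")),
]

def closure(A):
    """Naive fixpoint computation of the logical closure A+ under sigma."""
    closed = set(A)
    changed = True
    while changed:
        changed = False
        for prem, concl in sigma:
            if prem <= closed and not concl <= closed:
                closed |= concl
                changed = True
    return frozenset(closed)

def is_minimal_generator(C, target):
    """C+ must equal target, and no proper subset of C may have closure target."""
    if closure(C) != target:
        return False
    return all(closure(frozenset(D)) != target
               for r in range(len(C))
               for D in combinations(C, r))
```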

2.3 Relationship Between FCA and Knowledge Space Theory

Knowledge Space Theory (KST) is based on the idea that the knowledge acquired by a group of persons about a specific discipline may be represented by a set of questions that they are capable of solving [15]. Accordingly, given a set of questions Q and a subset \({\mathcal {K}}\) of \(2^Q\) (i.e., the powerset of Q), we say that the pair \((Q,{\mathcal {K}})\) is a knowledge space if

  • [KST1] \(Q\in {\mathcal {K}}\) and \(\varnothing \in {\mathcal {K}}\);

  • [KST2] \({\mathcal {K}}\) is closed under union.

Each element in \({\mathcal {K}}\) is called a knowledge state. Roughly speaking, each knowledge state represents the knowledge acquired by a subgroup of people. The condition [KST1] states that the full and the null knowledge of a discipline must be knowledge states. On the other hand, [KST2] states that if two groups of people are capable of solving the questions of two knowledge states \(K_1\) and \(K_2\), then the union of both groups is capable of solving the questions in the knowledge state \(K_1\cup K_2\).
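Both axioms can be checked by brute force on a small family of states; the domain and states below are a hypothetical illustration:

```python
# Hypothetical knowledge domain and family of knowledge states.
Q = frozenset({"q1", "q2", "q3"})
K = {frozenset(), frozenset({"q1"}), frozenset({"q1", "q2"}),
     frozenset({"q1", "q3"}), Q}

def is_knowledge_space(Q, K):
    """Check [KST1] and [KST2] exhaustively."""
    # [KST1]: the null state and the full domain Q must be knowledge states.
    if frozenset() not in K or Q not in K:
        return False
    # [KST2]: K must be closed under union.
    return all(k1 | k2 in K for k1 in K for k2 in K)
```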

Additionally, a knowledge space \((Q,{\mathcal {K}})\) is called quasi ordinal if \({\mathcal {K}}\) is closed under intersection. Quasi ordinal knowledge spaces are proposed in [13] as the reference structure. They prove that, if we impose some dependencies among questions (in the sense that if an individual answers question a, then the individual can answer question b as well), knowledge spaces have the structure of a lattice under the natural ordering in \(2^Q\). In this line, [15] relates knowledge spaces and FCA by means of the notion of Knowledge Context, which is a formal context \({\mathbb {K}} = (P, Q, I)\) such that P is a set of students, Q is a set of questions/evaluations, and the relation I is such that \((p, q)\in I\) means that student p failed to solve problem q. The complements of the intents of the knowledge context \((P, Q, I)\) form a knowledge space on Q, whose knowledge states represent the tested knowledge of the persons in P [15]. Reciprocally, given a knowledge space \((Q, {\mathcal {K}})\), one can easily define the context \(({\mathcal {K}}, Q, \not \ni )\) that expresses all the knowledge states and is isomorphic to the knowledge context \((P, Q, I)\).

This characterisation of knowledge spaces by means of formal contexts allows us to work with FCA techniques to analyse and express the hidden structure of the knowledge. Then, FCA provides two main tools for mining knowledge: the concept lattice \(\underline{{\mathbb {B}}}({\mathbb {K}})\), which is isomorphic to the associated knowledge space \((Q, {\mathcal {K}})\), and the set of valid implications in the formal context, which determines, in this case, the dependencies among questions. Notably, in [15], it is stated that an irredundant description of a quasi ordinal knowledge space is given by an implication basis of the corresponding knowledge context. Thus, by studying the implication bases, we can obtain deeper insight into the knowledge structure of a field.

3 Proposal

The aim of this section is to show that bases of implications and minimal generators in mixed contexts give more information than those in positive, or negative, contexts. This is due to the fact that mixed contexts provide a finer granularity than the classical ones. This section presents a set of theoretical results that support this statement. Specifically, we prove that all the implications in the positive and negative contexts can be derived from the basis of implications of the mixed context, but the converse is not true, i.e., there are implications of the mixed context that cannot be derived from the implications in the positive or negative one; and likewise for minimal generators. To illustrate the practical advantages of retrieving information with mixed contexts, we will apply these results to a real-life example in Sect. 4. Next, we briefly introduce the terminology of mixed contexts.

Classical FCA considers a binary relation in which, if an object g is related to an attribute m, then the pair \((g,m)\) appears in the relation (positive information). Taking the proposal a step further, we propose in this article the use of an FCA extension that considers not only the positive information but also the negative one, that is, when an object is not related to an attribute. Recently, some works have appeared in the literature that deal with positive and negative information [25]. To the best of our knowledge, none of them considers the computation of minimal generators taking into account positive and negative information in a dataset.

Our approach follows the line of [26], which uses positive and negative attributes. Given a formal context \({\mathbb {K}}=(G,M,I)\), we define the set of negative attributes as \({\overline{M}}=\{{\overline{m}}\mid m\in M\}\) and construct a new formal context \(({\mathbb {K}}\mid ~\overline{{\mathbb {K}}})=(G,M\cup {\overline{M}},I^*)\) where the incidence relation \(I^*\) is defined as:

  • \((g,m)\in I^*\) if and only if \((g,m)\in I\) for all \(m \in M\) and

  • \((g,{{\overline{m}}})\in I^*\) if and only if \((g,m)\notin I\) for all \({{\overline{m}}} \in {{\overline{M}}}\).

Hence, the new incidence takes the opposite value to the value of m in the original incidence for all \({\overline{m}} \in {\overline{M}}\). Hereafter, we refer to \({\mathbb {K}}\) and \(\overline{{\mathbb {K}}}\) as the positive and the negative contexts, respectively.
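Building \(({\mathbb {K}}\mid \overline{{\mathbb {K}}})\) amounts to appending one negated column per attribute. A sketch, using a hypothetical boolean table and the string `"not m"` as our own encoding of \({\overline{m}}\):

```python
# Hypothetical positive context as a boolean table (rows = objects).
M = ["a", "b", "c"]
table = {
    "o1": {"a": True, "b": False, "c": True},
    "o2": {"a": False, "b": True, "c": True},
}

def appose(table, M):
    """Build (K | K_bar): for each attribute m, add a column 'not m'
    holding the opposite truth value (the incidence relation I*)."""
    extended = {}
    for g, row in table.items():
        extended[g] = dict(row)
        for m in M:
            extended[g]["not " + m] = not row[m]
    return extended

kkbar = appose(table, M)
```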

However, this approach duplicates the number of columns in the context and, as a consequence, it increases the algorithmic and computational cost of working with the dataset. In this work, we follow the line in [26] of defining a new Galois connection over \({\mathbb {K}}\) that captures both the positive and negative information without the need to duplicate the columns. The new connection is denoted by \(^\Uparrow\) and \(^\Downarrow\) to differentiate them from those of the original context \({\mathbb {K}}\). In the rest of this paper, the context \({\mathbb {K}}\) equipped with the new Galois connection will be referred to as the mixed context. The new operators \(^{\Uparrow }:2^{G}\rightarrow 2^{M\cup {\overline{M}}}\) and \(^{\Downarrow }:2^{M\cup {\overline{M}}}\rightarrow 2^{G}\) are defined as follows:

$$\begin{aligned} X^{\Uparrow }= & {} \{m \in M \mid (g,m) \in I \, \, \, \, \forall \, \, g \in X\} \cup \{{\overline{m}} \in {\overline{M}} \mid (g,m) \not \in I \, \, \, \, \forall \, \, g \in X\} \\ Y^{\Downarrow }= & {} \{g \in G \mid (g,m) \in I \, \, \, \, \forall \, \, m \in Y \cap M\} \cap \{g \in G \mid (g,m) \not \in I \, \, \, \, \forall \, \, {\overline{m}} \in Y \cap {\overline{M}}\} \end{aligned}$$

Since these two operators form a Galois connection, their composition is a closure operator. Therefore, they induce a concept lattice over the mixed context of \({\mathbb {K}}\). Let us denote by \(\underline{{\mathbb {B}}}^{\#}({\mathbb {K}})\) the lattice formed by using the derivation operators \(^{\Uparrow }\) and \(^{\Downarrow }\), in contrast to the concept lattice \(\underline{{\mathbb {B}}}({\mathbb {K}})\) built using the concept-forming operators \(^{\uparrow }\) and \(^{\downarrow }\). Note that with this definition of the new derivation operators, we have that \(\underline{{\mathbb {B}}}^{\#}({\mathbb {K}}) = \underline{{\mathbb {B}}}({\mathbb {K}} \mid \overline{{\mathbb {K}}})\). This means that handling positive and negative information is more efficient since there is no need to duplicate the number of columns in the context.

The following functions are introduced to capture the positive and negative information related to a given set of attributes A:

$$\begin{aligned} {\mathrm {Pos}}(A) =&\, A \cap M\\ {\mathrm {Neg}}(A) =&\, {\overline{A}} \cap M \end{aligned}$$

where \({\overline{A}} = \{{\overline{m}}\in {\overline{M}}: m \in A\} \cup \{m\in M: {\overline{m}}\in A\}\), the set that transforms m to \({\overline{m}}\) and \({\overline{m}}\) to m for all \(m \in M\).
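The operators \(^\Uparrow\), \(^\Downarrow\), \({\mathrm {Pos}}\) and \({\mathrm {Neg}}\) translate directly into code without duplicating columns. A sketch on a hypothetical toy context, again encoding \({\overline{m}}\) as the string `"not m"`:

```python
# Hypothetical toy context; 'not m' encodes the negative attribute m-bar.
G = ["o1", "o2"]
M = ["a", "b", "c"]
I = {("o1", "a"), ("o1", "c"), ("o2", "b"), ("o2", "c")}

def Up(X):
    """X^(double up): positive attributes shared by every object in X,
    plus 'not m' for every m that no object in X has."""
    pos = {m for m in M if all((g, m) in I for g in X)}
    neg = {"not " + m for m in M if all((g, m) not in I for g in X)}
    return pos | neg

def Down(Y):
    """Y^(double down): objects having every positive attribute of Y and
    lacking every negated attribute of Y."""
    pos = {y for y in Y if not y.startswith("not ")}
    neg = {y[4:] for y in Y if y.startswith("not ")}
    return {g for g in G
            if all((g, m) in I for m in pos)
            and all((g, m) not in I for m in neg)}

def Pos(A):
    """Positive part of a mixed attribute set."""
    return {y for y in A if not y.startswith("not ")}

def Neg(A):
    """Attributes of M whose negation appears in A."""
    return {y[4:] for y in A if y.startswith("not ")}
```

Note that `Pos(Up(X))` reproduces the classical derivation \(X^{\uparrow }\), as stated in Lemma 1 below.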

The two mappings defined above, namely \(^\Uparrow\) and \(^\Downarrow\), are related to the standard derivation operators \(^\uparrow\) and \(^\downarrow\). This is illustrated in the following result.

Lemma 1

[26] For a formal context \({\mathbb {K}}\) and its complement \(\overline{{\mathbb {K}}}\), the following statements hold:

  1. If \(A\subseteq M\), then \(A^{\Downarrow } = A^{\downarrow }\) (in \({\mathbb {K}}\)).

  2. If \(A\subseteq {\overline{M}}\), then \(A^{\Downarrow } = A^{\downarrow }\) (in \(\overline{{\mathbb {K}}}\)).

  3. If \(B \subseteq G\), then \({\mathrm {Pos}}(B^{\Uparrow }) = B^{\uparrow }\) (in \({\mathbb {K}}\)) and \({\mathrm {Neg}}(B^{\Uparrow }) = B^{\uparrow }\) (in \(\overline{{\mathbb {K}}}\)).

The example below is a running example that will illustrate the results presented throughout the paper.

Example 2

Let us consider the formal context \({\mathbb {K}}\) in Table 1 (a). The apposition (concatenation by columns) of \({\mathbb {K}}\) and its complement \({\overline{\mathbb {K}}}\) is in Table 1 (b). We follow the notation \({\mathbb {K}} \mid {\overline{\mathbb {K}}}\) for the concatenated formal context. In addition, we define \(M = \{{\mathrm {a}}, {\mathrm {b}}, {\mathrm {c}}, {\mathrm {d}}\}\) and \({\overline{M}} = \{ \overline{{\mathrm {a}}}, \overline{{\mathrm {b}}}, \overline{{\mathrm {c}}}, \overline{{\mathrm {d}}}\}\).

Table 1 Formal contexts for the example

We have defined over \({\mathbb {K}}\) two Galois connections: \((^\uparrow ,\,^\downarrow )\) and \((^\Uparrow ,\,^\Downarrow )\). We can show the different information they capture with a simple example:

$$\begin{aligned} \{{\mathrm {c}},\,{\mathrm {d}}\}^{\downarrow \uparrow }&= \left( \{{\mathrm {c}},\,{\mathrm {d}}\}^{\downarrow }\right) ^{\uparrow } = \{\mathrm {o5}\}^{\uparrow } = \{{\mathrm {b}},\,{\mathrm {c}},\,{\mathrm {d}}\}\\ \{{\mathrm {c}},\,{\mathrm {d}}\}^{\Downarrow \Uparrow }&= \left( \{{\mathrm {c}},\,{\mathrm {d}}\}^{\Downarrow }\right) ^{\Uparrow } = \{\mathrm {o5}\}^{\Uparrow } = \{{\mathrm {b}},\,{\mathrm {c}},\,{\mathrm {d}},\,\overline{{\mathrm {a}}}\} \end{aligned}$$

In this case, we can check the results of Lemma 1, since \(\{{\mathrm {c}},\,{\mathrm {d}}\}^{\Downarrow } = \{{\mathrm {c}},\,{\mathrm {d}}\}^{\downarrow } = \{\mathrm {o5}\}\); that is, the operators \(^\downarrow\) and \(^\Downarrow\) coincide when they are applied to a set of positive attributes. However, the operator \(^\Uparrow\) differs in general from \(^\uparrow\), since \(\{\mathrm {o5}\}^{\Uparrow }=\{{\mathrm {b}},\,{\mathrm {c}},\,{\mathrm {d}},\,\overline{{\mathrm {a}}}\} \ne \{{\mathrm {b}},\,{\mathrm {c}},\,{\mathrm {d}}\}= \{\mathrm {o5}\}^{\uparrow }\). Despite this difference, they can be related by means of the operator \({\mathrm {Pos}}\) (see Lemma 1), which keeps only the positive attributes from the result obtained by applying \(^\Uparrow\). That is: \({\mathrm {Pos}}(\{\mathrm {o5}\}^{\Uparrow }) = {\mathrm {Pos}}(\{{\mathrm {b}},\,{\mathrm {c}},\,{\mathrm {d}},\,\overline{{\mathrm {a}}}\}) = \{{\mathrm {b}},\,{\mathrm {c}},\,{\mathrm {d}}\} = \{\mathrm {o5}\}^{\uparrow }\).

Notice also that \(^{\downarrow }\) is defined on \(2^{M}\) whereas \(^{\Downarrow }\) is defined on \(2^{M\cup {\overline{M}}}\), so it does not make sense to write \(\{{\mathrm {c}},\,\overline{{\mathrm {b}}}\}^{\downarrow }\). Instead, we have to rely on \(^{\Downarrow }\) to compute the desired extent using mixed attributes: \(\{{\mathrm {c}},\,\overline{{\mathrm {b}}}\}^{\Downarrow } = \{\mathrm {o4},\,\mathrm {o6}\}\). \(\square\)

The use of mixed contexts, with positive and negative attributes, provides richer information than that obtained by using a context where only positive (or only negative) attributes are considered. In the theoretical results shown below, we can see how the use of mixed contexts extends the information obtained from simple contexts. Specifically, we present a few properties that relate the closed sets, the concept lattice and the minimal generators of the mixed context with those of both simple contexts (i.e., the positive and the negative ones). The work on mixed contexts was initiated in [26], where some interesting results concerning this framework were proved. The result below shows that the concept lattices of the positive (or negative) context are embedded in the mixed concept lattice. As a result, the mixed context gives at least as much information as the positive or negative contexts.

Theorem 1

[26] The maps \(\pi _1: \underline{{\mathbb {B}}}^{\#}({\mathbb {K}})\rightarrow \underline{{\mathbb {B}}}({\mathbb {K}})\), \(\pi _2: \underline{{\mathbb {B}}}^{\#}({\mathbb {K}})\rightarrow \underline{{\mathbb {B}}}(\overline{{\mathbb {K}}})\) such that

$$\begin{aligned} \pi _1(X,Y):= & {} ({\mathrm {Pos}}(Y)^{\Downarrow },\, {\mathrm {Pos}}(Y))\\ \pi _2(X,Y):= & {} (\overline{{\mathrm {Neg}}(Y)}^{\Downarrow },\, \overline{{\mathrm {Neg}}(Y)}) \end{aligned}$$

are join-preserving and surjective.

As a consequence, besides the isomorphism between the knowledge space \((Q, {\mathcal {K}})\) and the concept lattice of the knowledge context \({\mathbb {K}} = (P, Q, I)\) given by [15], we can say that the concept lattice of the knowledge context \({\mathbb {K}}\) is embedded in \(\underline{{\mathbb {B}}}^{\#}({\mathbb {K}})\) employing the \(\pi _1\) and \(\pi _2\) projection operators. Accordingly, we may consider more information using the positive and negative attributes, i.e., representing the two faces of the same phenomenon.

Example 3

We continue with the same contexts from Example 2. In Fig. 1, we show the concept lattices of the positive (\(\underline{{\mathbb {B}}}({\mathbb {K}})\)), negative (\(\underline{{\mathbb {B}}}(\overline{{\mathbb {K}}})\)) and mixed (\(\underline{{\mathbb {B}}}^{\#}({\mathbb {K}})\)) formal contexts. We have used a colour code to represent the relationships and the embedding of \(\underline{{\mathbb {B}}}({\mathbb {K}})\) and \(\underline{{\mathbb {B}}}(\overline{{\mathbb {K}}})\) into \(\underline{{\mathbb {B}}}^{\#}({\mathbb {K}})\). The backprojection of the concepts in \(\underline{{\mathbb {B}}}({\mathbb {K}})\) via \(\pi _1\) is shown in blue and gray, and the backprojection of \(\underline{{\mathbb {B}}}(\overline{{\mathbb {K}}})\) via \(\pi _2\) in orange and gray. Some of the concepts in \(\underline{{\mathbb {B}}}^{\#}({\mathbb {K}})\) are shown in gray because they are backprojections of both positive and negative concepts. Note that, although a concept belongs to the positive concept lattice \(\underline{{\mathbb {B}}}({\mathbb {K}})\), it may be the projection of a concept with negative attributes in the mixed concept lattice \(\underline{{\mathbb {B}}}^{\#}({\mathbb {K}})\). For example, the concept \(\{a,c\}\in \underline{{\mathbb {B}}}({\mathbb {K}})\) is the projection of \(\{a,c,{\overline{b}},{\overline{d}}\}\in \underline{{\mathbb {B}}}^{\#}({\mathbb {K}})\). As a result, we can assert that the mixed concept lattice contains the concepts of the other two contexts and provides additional information and granularity. Indeed, note that, in Fig. 1, none of the concepts coloured in white in \(\underline{{\mathbb {B}}}^{\#}({\mathbb {K}})\) can be obtained by considering only the positive or only the negative context. \(\square\)

Fig. 1

Concept lattices associated with (a) the positive context \({\mathbb {K}}\) in Table 1 (a); (b) the mixed context; (c) the negative formal context \(\overline{{\mathbb {K}}}\)

At this point, we could consider applying Simplification Logic to the extended context \({\mathbb {K}}\mid \overline{{\mathbb {K}}}\) as described in Sect. 2.2. However, in that case we would not capture the relationship between opposite attributes, since in \({\mathbb {K}}\mid \overline{{\mathbb {K}}}\) the opposite attributes a and \({{\overline{a}}}\) are unrelated, which is a drawback. For this reason, the following definition introduces a semantics for implications involving positive and negative attributes.

Definition 1

[26] Let \({\mathbb {K}}\) be a mixed formal context and let \(A\rightarrow B\) with \(A,B\subseteq M\cup {\overline{M}}\) be an implication. We say that \(A\rightarrow B\) is valid in \({\mathbb {K}}\) (denoted by \({\mathbb {K}}\models A\rightarrow B\)) if and only if \(B\subseteq A^{\Downarrow \Uparrow }\).
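This validity test is directly computable: an implication holds exactly when its consequent is contained in the mixed closure of its antecedent. A minimal sketch in Python (hypothetical toy context over \(M\cup {\overline{M}}\), not the paper's Table 1; "not_" is our encoding of negation):

```python
# Sketch of Definition 1: A -> B is valid in a mixed context iff B ⊆ A^{⇓⇑}.
G = ["o1", "o2"]
ATTRS = ["a", "not_a", "b", "not_b"]
# Mixed incidence: each object has exactly one of m / not_m.
I = {("o1", "a"), ("o1", "not_b"), ("o2", "a"), ("o2", "b")}

def closure(A):
    """A^{⇓⇑}: extent of A, then common attributes of that extent."""
    ext = [g for g in G if all((g, m) in I for m in A)]
    return {m for m in ATTRS if all((g, m) in I for g in ext)}

def is_valid(A, B):
    """K |= A -> B  iff  B ⊆ A^{⇓⇑}."""
    return B <= closure(A)

print(is_valid({"not_b"}, {"a"}))   # True: the only not_b object has a
print(is_valid({"a"}, {"b"}))       # False: o1 has a but not b
```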

As mentioned above, this semantics allows us to establish new relations between implications that do not appear in the standard logic presented in Sect. 2.2. In particular, the authors in [26] proposed an axiomatic system to capture the relationship between opposite attributes, named Simplification Logic for Mixed Attributes, a sound and complete logic system for implications on mixed formal contexts.

  • [Ref] Reflexivity: \(\vdash _{{\mathcal {S}}} A \rightarrow A\).

  • [Simp] Simplification: \(A \rightarrow B, C \rightarrow D \vdash _{{\mathcal {S}}} A(C\smallsetminus B) \rightarrow D\).

  • [Key] Key: \(A \rightarrow \{b\} \vdash _{{\mathcal {S}}} A \cup \left\{ {\overline{b}}\right\} \rightarrow M \cup {\overline{M}}\).

  • [Inky] Inverse key: \(A \cup \{b\} \rightarrow M \cup {\overline{M}} \vdash _{{\mathcal {S}}} A \rightarrow \left\{ {\overline{b}}\right\}\).

  • [Red] Reduction: \(A \cup \{b\} \rightarrow C, A \cup \left\{ {\overline{b}}\right\} \rightarrow C \vdash _{{\mathcal {S}}} A \rightarrow C\).

In addition, it is convenient to present the following inference rules, which are derived from the previous logic system, since they will be used later to explain some results.

  • [Cont] Contradiction: \(\vdash _{{\mathcal {S}}} \{a,{{\overline{a}}}\}\rightarrow M\cup {\overline{M}}\).

  • [Rft] Reflection: \(A\cup \{a\} \rightarrow \{b\} \vdash _{{\mathcal {S}}} A\cup \left\{ {{\overline{b}}}\right\} \rightarrow \{{{\overline{a}}}\}\).

  • [Frag] Fragmentation: \(A \rightarrow BC\) \(\vdash _{{\mathcal {S}}} A \rightarrow B\).

  • [Comp] Composition: \(A \rightarrow B, C \rightarrow D \vdash _{{\mathcal {S}}} AC \rightarrow BD\).

Let us briefly comment on the details of the specific rules of this logic, [Cont] and [Rft], since they are the ones showing the relationship between positive and negative attributes. The first inference rule, called Contradiction, is a form of the well-known Ex contradictione quodlibet, i.e., we can infer all the attributes from a set of contradictory attributes. The second one, Reflection, is an inference rule that allows us to interchange attributes from the antecedent to the consequent (and vice versa) simply by negating them.

Example 4

We now study the behaviour of Simplification Logic on the implications retrieved from the positive, negative and mixed contexts of our running example started in Example 2.

On the one hand, the bases of implications for both simple formal contexts, \({\mathbb {K}}\) and \(\overline{{\mathbb {K}}}\), in Example 2 are the following:

$$\begin{aligned} \begin{array}{rcl@{\qquad }rcl} \multicolumn {3}{c}{\text {For } {\mathbb {K}}} & \multicolumn {3}{c}{\text {For } \overline{{\mathbb {K}}}} \\ \left\{ {\mathrm {c}},\, {\mathrm {d}}\right\} &\rightarrow &\left\{ {\mathrm {b}}\right\} &\left\{ {{\overline{{\mathrm {b}}}}},\, {{\overline{{\mathrm {c}}}}},\, {{\overline{{\mathrm {d}}}}}\right\} &\rightarrow &\left\{ {{\overline{{\mathrm {a}}}}}\right\} \\ \left\{ {\mathrm {a}},\, {\mathrm {b}},\, {\mathrm {d}}\right\} &\rightarrow &\left\{ {\mathrm {c}}\right\} &\left\{ {{\overline{{\mathrm {a}}}}},\, {{\overline{{\mathrm {c}}}}},\, {{\overline{{\mathrm {d}}}}}\right\} &\rightarrow &\left\{ {{\overline{{\mathrm {b}}}}}\right\} \\ \left\{ {\mathrm {a}},\, {\mathrm {b}},\, {\mathrm {c}}\right\} &\rightarrow &\left\{ {\mathrm {d}}\right\} &\left\{ {{\overline{{\mathrm {a}}}}},\, {{\overline{{\mathrm {b}}}}}\right\} &\rightarrow &\left\{ {{\overline{{\mathrm {d}}}}}\right\} \\ \end{array} \end{aligned}$$

On the other hand, the basis of implications for the formal context \({\mathbb {K}} \mid {\overline{\mathbb {K}}}\) in Table 1 (b) is:

$$\begin{aligned} \begin{array}{rrcl@{\qquad }rrcl} 1{:} &\left\{ \overline{{\mathrm {c}}},\, \overline{{\mathrm {d}}}\right\} &\rightarrow &\left\{ {\mathrm {a}},\, {\mathrm {b}}\right\} & 11{:} &\left\{ {\mathrm {c}},\, \overline{{\mathrm {b}}}\right\} &\rightarrow &\left\{ \overline{{\mathrm {d}}}\right\} \\ 2{:} &\left\{ \overline{{\mathrm {b}}},\, \overline{{\mathrm {d}}}\right\} &\rightarrow &\left\{ {\mathrm {c}}\right\} & 12{:} &\left\{ {\mathrm {c}},\, {\mathrm {d}}\right\} &\rightarrow &\left\{ {\mathrm {b}},\, \overline{{\mathrm {a}}}\right\} \\ 3{:} &\left\{ \overline{{\mathrm {b}}},\, \overline{{\mathrm {c}}}\right\} &\rightarrow &\left\{ {\mathrm {a}},\, {\mathrm {d}}\right\} & 13{:} &\left\{ {\mathrm {b}},\, \overline{{\mathrm {b}}}\right\} &\rightarrow &\left\{ {\mathrm {a}},\, {\mathrm {c}},\, {\mathrm {d}},\, \overline{{\mathrm {a}}},\, \overline{{\mathrm {c}}},\, \overline{{\mathrm {d}}}\right\} \\ 4{:} &\left\{ \overline{{\mathrm {a}}},\, \overline{{\mathrm {d}}}\right\} &\rightarrow &\left\{ {\mathrm {c}}\right\} & 14{:} &\left\{ {\mathrm {b}},\, {\mathrm {d}}\right\} &\rightarrow &\left\{ \overline{{\mathrm {a}}}\right\} \\ 5{:} &\left\{ \overline{{\mathrm {a}}},\, \overline{{\mathrm {c}}}\right\} &\rightarrow &\left\{ {\mathrm {b}},\, {\mathrm {d}}\right\} & 15{:} &\left\{ {\mathrm {b}},\, {\mathrm {c}}\right\} &\rightarrow &\left\{ \overline{{\mathrm {a}}}\right\} \\ 6{:} &\left\{ \overline{{\mathrm {a}}},\, \overline{{\mathrm {b}}}\right\} &\rightarrow &\left\{ {\mathrm {c}},\, \overline{{\mathrm {d}}}\right\} & 16{:} &\left\{ {\mathrm {a}},\, \overline{{\mathrm {a}}}\right\} &\rightarrow &\left\{ {\mathrm {b}},\, {\mathrm {c}},\, {\mathrm {d}},\, \overline{{\mathrm {b}}},\, \overline{{\mathrm {c}}},\, \overline{{\mathrm {d}}}\right\} \\ 7{:} &\left\{ {\mathrm {d}},\, \overline{{\mathrm {d}}}\right\} &\rightarrow &\left\{ {\mathrm {a}},\, {\mathrm {b}},\, {\mathrm {c}},\, \overline{{\mathrm {a}}},\, \overline{{\mathrm {b}}},\, \overline{{\mathrm {c}}}\right\} & 17{:} &\left\{ {\mathrm {a}},\, {\mathrm {d}}\right\} &\rightarrow &\left\{ \overline{{\mathrm {b}}},\, \overline{{\mathrm {c}}}\right\} \\ 8{:} &\left\{ {\mathrm {d}},\, \overline{{\mathrm {b}}}\right\} &\rightarrow &\left\{ {\mathrm {a}},\, \overline{{\mathrm {c}}}\right\} & 18{:} &\left\{ {\mathrm {a}},\, {\mathrm {c}}\right\} &\rightarrow &\left\{ \overline{{\mathrm {b}}},\, \overline{{\mathrm {d}}}\right\} \\ 9{:} &\left\{ {\mathrm {d}},\, \overline{{\mathrm {a}}}\right\} &\rightarrow &\left\{ {\mathrm {b}}\right\} & 19{:} &\left\{ {\mathrm {a}},\, {\mathrm {b}}\right\} &\rightarrow &\left\{ \overline{{\mathrm {c}}},\, \overline{{\mathrm {d}}}\right\} \\ 10{:} &\left\{ {\mathrm {c}},\, \overline{{\mathrm {c}}}\right\} &\rightarrow &\left\{ {\mathrm {a}},\, {\mathrm {b}},\, {\mathrm {d}},\, \overline{{\mathrm {a}}},\, \overline{{\mathrm {b}}},\, \overline{{\mathrm {d}}}\right\} & & & & \\ \end{array} \end{aligned}$$

These implications form a minimal set from which all other valid implications in the context \({\mathbb {K}} \mid \overline{{\mathbb {K}}}\) can be deduced. However, thanks to the Simplification Logic for Mixed Attributes, we can obtain a smaller set of implications that is equivalent to the 19 implications above:

$$\begin{aligned} \begin{array}{rrcl} {\text {i:}} &{} \{\overline{{\mathrm {c}}},\,\overline{{\mathrm {d}}}\} &{} \rightarrow &{} \{{\mathrm {a}},\, {\mathrm {b}} \} \\ {\text {ii:}} &{} \{\overline{{\mathrm {a}}},\,\overline{{\mathrm {b}}}\} &{} \rightarrow &{} \{{\mathrm {c}},\,\overline{{\mathrm {d}}}\} \\ {\text {iii:}} &{} \{{\mathrm {c}},\,{\mathrm {d}}\} &{} \rightarrow &{} \{{\mathrm {b}},\,\overline{{\mathrm {a}}}\} \\ {\text {iv:}} &{} \{{\mathrm {a}},\, {\mathrm {b}} \} &{} \rightarrow &{} \{\overline{{\mathrm {c}}},\,\overline{{\mathrm {d}}}\} \\ \end{array} \end{aligned}$$

We stress that from these four implications we can derive the 19 in the basis of the duplicated context \({\mathbb {K}} \mid \overline{{\mathbb {K}}}\); i.e., they condense exactly the same knowledge (they are equivalent) with less redundancy. The Simplification Logic for Mixed Attributes thus allows us to reason completely and efficiently from a smaller set of implications.
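The derivability check underlying this equivalence can be sketched computationally. Assuming our own string encoding of attributes ("not_" for negation), the following Python snippet forward-chains over the four-rule basis and adds a simple refutation step that mimics the [Key]/[Inky] rules (in general, the full logic may require nested case analysis via [Red]; one level suffices here):

```python
# Sketch of reasoning with the four-rule basis i–iv.
def neg(x):
    return x[4:] if x.startswith("not_") else "not_" + x

def fc(X, rules, universe):
    """Forward chaining with [Cont]: a contradiction yields everything."""
    X = set(X)
    changed = True
    while changed:
        changed = False
        if any(neg(x) in X for x in X):
            return set(universe)
        for A, B in rules:
            if A <= X and not B <= X:
                X |= B
                changed = True
    return X

def mixed_logical_closure(X, rules, universe):
    X = fc(X, rules, universe)
    changed = True
    while changed:
        changed = False
        for y in universe - X:
            # if assuming neg(y) is contradictory, infer y ([Key]/[Inky])
            if fc(X | {neg(y)}, rules, universe) == universe:
                X = fc(X | {y}, rules, universe)
                changed = True
                break
    return X

rules = [({"not_c", "not_d"}, {"a", "b"}),      # i
         ({"not_a", "not_b"}, {"c", "not_d"}),  # ii
         ({"c", "d"}, {"b", "not_a"}),          # iii
         ({"a", "b"}, {"not_c", "not_d"})]      # iv
U = {m for m in "abcd"} | {"not_" + m for m in "abcd"}

# Implication 14 of the 19-rule basis, {b, d} -> {not_a}, is derivable:
print(sorted(mixed_logical_closure({"b", "d"}, rules, U)))   # ['b', 'd', 'not_a']
```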

Furthermore, it should be noted that the knowledge extracted in the form of rules in the mixed context can be used to deduce the rules of the simple positive and negative contexts:

  • Implication \(\left\{ {\mathrm {c}},\, {\mathrm {d}}\right\} \rightarrow \left\{ {\mathrm {b}}\right\}\) from \({\mathbb {K}}\) can be obtained using the inference rule [Frag] with implication iii. Analogously, we obtain \(\left\{ {{\overline{{\mathrm {a}}}}},\, {{\overline{{\mathrm {b}}}}}\right\} \rightarrow \left\{ {{\overline{{\mathrm {d}}}}}\right\}\) (valid in \(\overline{{\mathbb {K}}}\)) from implication ii using the same inference rule.

  • Implication \(\left\{ {\mathrm {a}},\, {\mathrm {b}},\, {\mathrm {d}}\right\} \rightarrow \left\{ {\mathrm {c}}\right\}\) from \({\mathbb {K}}\) is deduced using the following inference chain:

    (a) \(\{{\mathrm {d}},\,\overline{{\mathrm {d}}}\} \rightarrow M\cup {\overline{M}}\) is valid, using axiom [Cont].

    (b) From (a), using [Frag], we obtain \(\{{\mathrm {d}},\,\overline{{\mathrm {d}}}\}\rightarrow \{{\mathrm {c}}\}\).

    (c) Using [Frag] on implication iv, \(\{{\mathrm {a}},\, {\mathrm {b}} \}\rightarrow \{\overline{{\mathrm {c}}},\,\overline{{\mathrm {d}}}\}\), we get \(\{{\mathrm {a}},\, {\mathrm {b}} \}\rightarrow \{\overline{{\mathrm {d}}}\}\).

    (d) By axiom [Ref], we have \(\{{\mathrm {d}}\}\rightarrow \{{\mathrm {d}}\}\).

    (e) Applying the inference rule [Comp] to (c) and (d), we obtain the implication \(\{{\mathrm {a}},\,{\mathrm {b}},\,{\mathrm {d}}\}\rightarrow \{{\mathrm {d}},\,\overline{{\mathrm {d}}}\}\).

    (f) Using [Simp] on (e) and (b), we infer \(\left\{ {\mathrm {a}},\, {\mathrm {b}},\, {\mathrm {d}}\right\} \rightarrow \left\{ {\mathrm {c}}\right\}\).

    The rest of the implications in \({\mathbb {K}}\) and \(\overline{{\mathbb {K}}}\) are deduced similarly.

In summary, we have shown two facts. First, the set of implications of the mixed formal context contains more information than the union of the two bases of implications of the positive and negative contexts; that is, there are relations between attributes that neither of the two simple contexts can capture. Second, the information captured by the two implication bases of the positive and negative contexts is contained in the set of implications of the mixed context. \(\square\)

The notions of logical closure and minimal generator are defined as in Sect. 2.2. In this case, since the two simple lattices \(\underline{{\mathbb {B}}}({\mathbb {K}})\) and \(\underline{{\mathbb {B}}}(\overline{{\mathbb {K}}})\) and the mixed lattice \(\underline{{\mathbb {B}}}^{\#}({\mathbb {K}})\) arise from different closure operators, each of them has an associated set of minimal generators.

Notation. From now on, let us denote by \(\mathrm {Gen}({\mathbb {K}})\) the set of minimal generators for a given formal context \({\mathbb {K}}\), using the derivation operators \(^{\uparrow }\) and \(^{\downarrow }\), and by \(\mathrm {Gen}^{\#}({\mathbb {K}})\) the minimal generators associated to the operators \(^{\Uparrow }\) and \(^{\Downarrow }\). Note that, since \(\underline{{\mathbb {B}}}^{\#}({{\mathbb {K}}})=\underline{{\mathbb {B}}}({\mathbb {K}} \mid \overline{{\mathbb {K}}})\), we have \(\mathrm {Gen}^{\#}({\mathbb {K}}) = \mathrm {Gen}({\mathbb {K}} \mid \overline{{\mathbb {K}}})\).

In this work, the minimal generators of the concepts in \(\underline{{\mathbb {B}}}^{\#}({\mathbb {K}})\) are used to characterise the structure of the knowledge space. The following theoretical result states that, using these generators, we also account for the generators of the original knowledge space.

Proposition 2

Let \({\mathbb {K}} = (G, M, I)\) be a formal context and let \(\overline{{\mathbb {K}}}\) be its complemented context. The following holds:

  1. If \(A\subseteq M\) is a minimal generator of a set \(C\subseteq M\) in \(\underline{{\mathbb {B}}}({\mathbb {K}})\), then A is also a minimal generator in \(\underline{{\mathbb {B}}}^{\#}({\mathbb {K}})\) such that \({\mathrm {Pos}}(A^{\Downarrow \Uparrow }) = C\).

  2. If \(A\subseteq {\overline{M}}\) is a minimal generator of a set \(C\subseteq {\overline{M}}\) in \(\underline{{\mathbb {B}}}(\overline{{\mathbb {K}}})\), then A is also a minimal generator in \(\underline{{\mathbb {B}}}^{\#}({\mathbb {K}})\) such that \({\mathrm {Neg}}(A^{\Downarrow \Uparrow }) = C\).

Proof

Let A be a minimal generator of \(C\subseteq M\) in \(\underline{{\mathbb {B}}}({\mathbb {K}})\) and let \(Z\subseteq A\subseteq M\) be such that \(Z^{\Downarrow \Uparrow } = A^{\Downarrow \Uparrow }\). We will show that \(Z=A\), which implies that A is also a minimal generator in \(\underline{{\mathbb {B}}}^{\#}({\mathbb {K}})\). Since \(Z^{\Downarrow \Uparrow } = A^{\Downarrow \Uparrow }\), in particular we have that \({\mathrm {Pos}}(Z^{\Downarrow \Uparrow }) = {\mathrm {Pos}}(A^{\Downarrow \Uparrow })\). Then, by Lemma 1, we have

$$\begin{aligned} Z^{\downarrow \uparrow }=(Z^{\Downarrow })^{\uparrow }={\mathrm {Pos}}(Z^{\Downarrow \Uparrow })={\mathrm {Pos}}(A^{\Downarrow \Uparrow }) = (A^{\Downarrow })^{\uparrow } = A^{\downarrow \uparrow } = C. \end{aligned}$$

That is, Z is a generator of \(C\subseteq M\), and by the minimality of A we conclude that \(Z=A\). Note that the same chain of equalities proves that \({\mathrm {Pos}}(A^{\Downarrow \Uparrow }) = A^{\downarrow \uparrow } = C\).

The second statement is proved analogously. \(\square\)

As a direct consequence of the previous result, we have the following corollary, which states that the minimal generators of the mixed context contain the minimal generators of both simple contexts.

Corollary 1

Let \({\mathbb {K}} = (G, M, I)\) be a formal context and let \(\overline{{\mathbb {K}}}\) be its complemented context. Then:

$$\begin{aligned} \mathrm {Gen}({\mathbb {K}}) \cup \mathrm {Gen}(\overline{{\mathbb {K}}}) \subseteq \mathrm {Gen}^{\#}({\mathbb {K}}). \end{aligned}$$

Thus, when we use positive and negative information together, we capture at least all the generators of the two simple contexts, i.e., both the positive and the negative ones.

A question arises from the last corollary: are there elements in \(\mathrm {Gen}^{\#}({\mathbb {K}})\) which are neither in \(\mathrm {Gen}({\mathbb {K}})\) nor in \(\mathrm {Gen}(\overline{{\mathbb {K}}})\)? The answer is affirmative, and the remainder of this section is devoted to showing it.

Proposition 3

Let \({\mathbb {K}} = (G, M, I)\) be a formal context and \(\overline{{\mathbb {K}}}\) its complement. Let \(a\in M\cup {\overline{M}}\). Then:

  (1) If \(\{a\}^{\downarrow } = \varnothing\), then \(\{a\}\) is a minimal generator of the closed set \(M\cup {\overline{M}}\) in the concept lattice \(\underline{{\mathbb {B}}}^{\#}({\mathbb {K}})\).

  (2) If \(\{a\}^{\downarrow } = G\), then \(\{{\overline{a}}\}\) is a minimal generator of the closed set \(M \cup {\overline{M}}\) in the concept lattice \(\underline{{\mathbb {B}}}^{\#}({\mathbb {K}})\).

  (3) If \(\varnothing \ne \{a\}^{\downarrow } \ne G\), then \(A = \{a, {\overline{a}}\}\) is a minimal generator of the closed set \(M\cup {\overline{M}}\) in the concept lattice \(\underline{{\mathbb {B}}}^{\#}({\mathbb {K}})\).

Proof

  (1) If \(\{a\}^{\downarrow } = \varnothing\), then \(\{a\}^{\Downarrow \Uparrow } = (\{a\}^{\downarrow })^{\Uparrow } = \varnothing ^{\Uparrow } = M \cup {\overline{M}}\). Moreover, since \(I^*\ne \varnothing\), we have \(\varnothing ^{\Downarrow \Uparrow }\ne M \cup {\overline{M}}\); hence \(\{a\}\) is necessarily minimal.

  (2) If \(\{a\}^{\downarrow } = G\), then \(\{{\overline{a}}\}^{\downarrow } = \varnothing\) in \(\overline{{\mathbb {K}}}\). Analogously to (1), \(\{{\overline{a}}\}\) is a minimal generator of \(M \cup {\overline{M}}\).

  (3) Suppose that \(\varnothing \ne \{a\}^{\downarrow } \ne G\), and let us show that neither \(\{a\}\) nor \(\{{\overline{a}}\}\) generates the closed set \(M\cup {\overline{M}}\). Since \(\{a\}^{\downarrow } \ne G\), we have \(\{{\overline{a}}\}^{\downarrow }\ne \varnothing\) in \(\overline{{\mathbb {K}}}\). This means that there exist \(g_+\in \{a\}^{\downarrow }\) and \(g_-\in \{{\overline{a}}\}^{\downarrow }\) (using the derivation operators of the corresponding contexts). Then \((g_+, a)\in I\) or, equivalently, \((g_+, {\overline{a}})\not \in I^*\). Thus, \({\overline{a}}\not \in (\{a\}^{\downarrow })^{\Uparrow } = \{a\}^{\Downarrow \Uparrow }\). Analogously, using \(g_-\), we can show that \(a\not \in \{{\overline{a}}\}^{\Downarrow \Uparrow }\). Hence, neither \(\{a\}^{\Downarrow \Uparrow }\) nor \(\{{\overline{a}}\}^{\Downarrow \Uparrow }\) equals \(M\cup {\overline{M}}\). In this case, \(A = \{a, {\overline{a}}\}\) is a minimal generator, since by [Cont] we have \(A^{\Downarrow \Uparrow } = M\cup {\overline{M}}\) and no proper subset of A has \(M\cup {\overline{M}}\) as its closure.

\(\square\)
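Proposition 3 can be checked computationally on a small example. The following Python sketch (with a hypothetical toy mixed context, again using our "not_" encoding; the paper's actual computations use fcaR) enumerates minimal generators by brute force and verifies case (3):

```python
from itertools import chain, combinations

# Toy mixed context (hypothetical, for illustration only).
G = ["o1", "o2", "o3"]
ATTRS = ["a", "not_a", "b", "not_b"]
I = {("o1", "a"), ("o1", "not_b"),
     ("o2", "not_a"), ("o2", "b"),
     ("o3", "a"), ("o3", "b")}

def closure(A):
    """Mixed closure A^{⇓⇑} via the two derivation steps."""
    ext = [g for g in G if all((g, m) in I for m in A)]
    return frozenset(m for m in ATTRS if all((g, m) in I for g in ext))

def proper_subsets(A):
    return chain.from_iterable(combinations(A, r) for r in range(len(A)))

def is_minimal_generator(A):
    c = closure(A)
    return all(closure(frozenset(B)) != c for B in proper_subsets(A))

# Here ∅ ≠ {a}↓ = {o1, o3} ≠ G, so {a, not_a} should be a minimal
# generator of the top closed set ATTRS (Proposition 3, case (3)):
print(is_minimal_generator(frozenset({"a", "not_a"})))   # True
```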

The next theoretical result identifies a relation between the minimal generators of the concept lattice \(\underline{{\mathbb {B}}}^{\#}({\mathbb {K}})\) and those of \(\underline{{\mathbb {B}}}({\mathbb {K}})\) and \(\underline{{\mathbb {B}}}(\overline{{\mathbb {K}}})\).

Proposition 4

Let \({\mathbb {K}} = (G, M, I)\) be a formal context. The following statements are true:

  (1) If A is a minimal generator in the mixed context, with \(A\subseteq M\) (i.e., A consists only of positive attributes), whose closure is \(A^{\Downarrow \Uparrow } = C\subseteq M\cup {\overline{M}}\), then A is also a minimal generator in \({\mathbb {K}}\) and \(A^{\downarrow \uparrow } = {\mathrm {Pos}}(C)\).

  (2) If A is a minimal generator in the mixed context, with \(A\subseteq {\overline{M}}\) (i.e., A consists only of negative attributes), whose closure is \(A^{\Downarrow \Uparrow } = C\subseteq M\cup {\overline{M}}\), then A is also a minimal generator in \(\overline{{\mathbb {K}}}\) and \(A^{\downarrow \uparrow } = \overline{{\mathrm {Neg}}(C)}\).

Proof

We will prove (1), since (2) follows by analogous reasoning. Let \(A\subseteq M\) be a minimal generator in the mixed context, and let \(B\subseteq A\) be such that \(B^{\downarrow \uparrow } = A^{\downarrow \uparrow }\); we will show that \(B = A\), which means that \(A\in \mathrm {Gen}({\mathbb {K}})\). As \(B^{\downarrow \uparrow } = A^{\downarrow \uparrow }\), applying the operator \(^{\downarrow }\), we obtain \(B^{\downarrow \uparrow \downarrow } = A^{\downarrow \uparrow \downarrow }\). Using that \(X^{\downarrow \uparrow \downarrow } = X^{\downarrow }\) for any \(X\subseteq M\) (see Proposition 1 in [27]), we arrive at \(B^{\downarrow } = A^{\downarrow }\). Applying Lemma 1, we obtain \(B^{\Downarrow } = A^{\Downarrow }\) and, applying the derivation operator \(^{\Uparrow }\), we have \(B^{\Downarrow \Uparrow } = A^{\Downarrow \Uparrow }\). Since A is a minimal generator in the mixed context and \(B\subseteq A\) has the same mixed closure, it must be \(B=A\). Therefore, A is a minimal generator in \(\underline{{\mathbb {B}}}({\mathbb {K}})\). Moreover, \(A^{\downarrow \uparrow } = {\mathrm {Pos}}(C)\) is a direct consequence of Lemma 1. \(\square\)

In what follows, our purpose is to characterise the structure of the minimal generators of \(\underline{{\mathbb {B}}}^{\#}({\mathbb {K}})\). To this end, we introduce the following notation:

Definition 2

Let us consider a formal context \({\mathbb {K}} = (G, M, I)\). We denote:

$$\begin{aligned} \mathrm {Gen}^+({\mathbb {K}})&= \,\, \mathrm {Gen}^{\#}({\mathbb {K}}) \cap 2^M \\ \mathrm {Gen}^-({\mathbb {K}})&= \,\, \mathrm {Gen}^{\#}({\mathbb {K}}) \cap 2^{{\overline{M}}} \\ \mathrm {Gen}^{\pm }({\mathbb {K}})&= \left\{ A\in \mathrm {Gen}^{\#}({\mathbb {K}}): A\cap M \ne \varnothing ,\, A\cap {\overline{M}} \ne \varnothing \right\} \end{aligned}$$

\(\mathrm {Gen}^+({\mathbb {K}})\) (resp. \(\mathrm {Gen}^-({\mathbb {K}})\)) consists of the minimal generators composed of only positive (resp. negative) attributes. On the other hand, \(\mathrm {Gen}^{\pm }({\mathbb {K}})\) consists of those minimal generators that have both positive and negative attributes. Note that \(\mathrm {Gen}^{\pm }({\mathbb {K}}) \ne \varnothing\) if there exists \(m\in M\) such that \(\varnothing \ne \{m\}^{\downarrow }\ne G\), by Proposition 3. We can then prove the following result:

Lemma 2

Let \({\mathbb {K}} = (G, M, I)\) be a formal context. Then \(\mathrm {Gen}({\mathbb {K}}) = \mathrm {Gen}^+({\mathbb {K}})\) and \(\mathrm {Gen}(\overline{{\mathbb {K}}}) = \mathrm {Gen}^-({\mathbb {K}})\).

Proof

It is sufficient to use Propositions 2 and 4. \(\square\)

With this result, we can determine a partition of the set of minimal generators of a mixed context:

Corollary 2

Let \({\mathbb {K}} = (G, M, I)\) be a formal context. Then:

$$\begin{aligned} \mathrm {Gen}^{\#}({\mathbb {K}}) = \mathrm {Gen}({\mathbb {K}}) \cup \mathrm {Gen}(\overline{{\mathbb {K}}}) \cup \mathrm {Gen}^{\pm }({\mathbb {K}}) \end{aligned}$$

This means that the mixed context yields not only the generators of the positive and negative contexts, but also new ones arising from the combination of positive and negative attributes.
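The three-way split of Definition 2 amounts to a sign classification of each generator. A minimal sketch in Python (hypothetical generator list in the spirit of Example 5, with "not_" marking negation):

```python
# Partitioning minimal generators into Gen+, Gen- and Gen± by attribute sign.
from collections import defaultdict

gens = [{"c", "d"}, {"not_a", "not_b"}, {"d", "not_a"}, {"c", "not_b"}]

def sign(A):
    pos = any(not x.startswith("not_") for x in A)
    neg = any(x.startswith("not_") for x in A)
    return "mixed" if pos and neg else ("positive" if pos else "negative")

parts = defaultdict(list)
for A in gens:
    parts[sign(A)].append(sorted(A))

print(dict(parts))
```

The "mixed" bucket corresponds to \(\mathrm {Gen}^{\pm }({\mathbb {K}})\), the generators that cannot appear in either simple context.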

Example 5

We now study the minimal generators of the formal contexts of Example 2 and check that the previous theoretical results hold in this running example. For the sake of simplicity, we only consider non-trivial minimal generators, that is, minimal generators that differ from the closed set they generate. For example, in the positive lattice \(\underline{{\mathbb {B}}}({\mathbb {K}})\), the minimal generator \(\{a,c\}\) is trivial because its closure is \(\{a,c\}\) itself. The list of non-trivial minimal generators for \(\underline{{\mathbb {B}}}({\mathbb {K}})\) and \(\underline{{\mathbb {B}}}(\overline{{\mathbb {K}}})\) is the following:

$$\begin{aligned} \begin{array}{rcl@{\qquad }rcl} \multicolumn {3}{c}{\text {For } \underline{{\mathbb {B}}}({\mathbb {K}})} & \multicolumn {3}{c}{\text {For } \underline{{\mathbb {B}}}(\overline{{\mathbb {K}}})} \\ \left\{ {\mathrm {b}},\, {\mathrm {c}},\, {\mathrm {d}}\right\} &:&\left\{ {\mathrm {c}},\, {\mathrm {d}}\right\} &\left\{ {{\overline{{\mathrm {a}}}}},\, {{\overline{{\mathrm {b}}}}},\, {{\overline{{\mathrm {d}}}}}\right\} &:&\left\{ {{\overline{{\mathrm {a}}}}},\, {{\overline{{\mathrm {b}}}}}\right\} \\ \left\{ {\mathrm {a}},\, {\mathrm {b}},\, {\mathrm {c}},\, {\mathrm {d}}\right\} &:&\left\{ {\mathrm {a}},\, {\mathrm {b}},\, {\mathrm {d}}\right\} ,\, \left\{ {\mathrm {a}},\, {\mathrm {b}},\, {\mathrm {c}}\right\} ,\, \left\{ {\mathrm {a}},\, {\mathrm {c}},\, {\mathrm {d}}\right\} &\left\{ {{\overline{{\mathrm {a}}}}},\, {{\overline{{\mathrm {b}}}}},\, {{\overline{{\mathrm {c}}}}},\, {{\overline{{\mathrm {d}}}}}\right\} &:&\left\{ {{\overline{{\mathrm {b}}}}},\, {{\overline{{\mathrm {c}}}}},\, {{\overline{{\mathrm {d}}}}}\right\} ,\, \left\{ {{\overline{{\mathrm {a}}}}},\, {{\overline{{\mathrm {c}}}}},\, {{\overline{{\mathrm {d}}}}}\right\} ,\, \left\{ {{\overline{{\mathrm {a}}}}},\, {{\overline{{\mathrm {b}}}}},\, {{\overline{{\mathrm {c}}}}}\right\} \\ \end{array} \end{aligned}$$

Notice that there are only four non-trivial minimal generators in each simple context. In contrast, in the mixed context, we can find new patterns relating positive and negative attributes that do not appear in the list above, as is shown below.

$$\begin{aligned} \begin{array}{rcl@{\qquad }rcl} \multicolumn {3}{c}{\text {Concepts : minimal generators}} & \multicolumn {3}{c}{\text {Concepts : minimal generators}} \\ \left\{ {\mathrm {c}},\, \overline{{\mathrm {a}}},\, \overline{{\mathrm {d}}}\right\} &:&\left\{ \overline{{\mathrm {a}}},\, \overline{{\mathrm {d}}}\right\} &\left\{ {\mathrm {b}},\, {\mathrm {d}},\, \overline{{\mathrm {a}}},\, \overline{{\mathrm {c}}}\right\} &:&\left\{ \overline{{\mathrm {a}}},\, \overline{{\mathrm {c}}}\right\} \\ \left\{ {\mathrm {c}},\, \overline{{\mathrm {a}}},\, \overline{{\mathrm {b}}},\, \overline{{\mathrm {d}}}\right\} &:&\left\{ \overline{{\mathrm {a}}},\, \overline{{\mathrm {b}}}\right\} &\left\{ {\mathrm {c}},\, \overline{{\mathrm {b}}},\, \overline{{\mathrm {d}}}\right\} &:&\left\{ \overline{{\mathrm {b}}},\, \overline{{\mathrm {d}}}\right\} ,\, \left\{ {\mathrm {c}},\, \overline{{\mathrm {b}}}\right\} \\ \left\{ {\mathrm {b}},\, {\mathrm {c}},\, {\mathrm {d}},\, \overline{{\mathrm {a}}}\right\} &:&\left\{ {\mathrm {c}},\, {\mathrm {d}}\right\} &\left\{ {\mathrm {b}},\, {\mathrm {d}},\, \overline{{\mathrm {a}}}\right\} &:&\left\{ {\mathrm {d}},\, \overline{{\mathrm {a}}}\right\} ,\, \left\{ {\mathrm {b}},\, {\mathrm {d}}\right\} \\ \left\{ {\mathrm {b}},\, {\mathrm {c}},\, \overline{{\mathrm {a}}}\right\} &:&\left\{ {\mathrm {b}},\, {\mathrm {c}}\right\} &\left\{ {\mathrm {a}},\, {\mathrm {d}},\, \overline{{\mathrm {b}}},\, \overline{{\mathrm {c}}}\right\} &:&\left\{ \overline{{\mathrm {b}}},\, \overline{{\mathrm {c}}}\right\} ,\, \left\{ {\mathrm {d}},\, \overline{{\mathrm {b}}}\right\} ,\, \left\{ {\mathrm {a}},\, {\mathrm {d}}\right\} \\ \left\{ {\mathrm {a}},\, {\mathrm {c}},\, \overline{{\mathrm {b}}},\, \overline{{\mathrm {d}}}\right\} &:&\left\{ {\mathrm {a}},\, {\mathrm {c}}\right\} &\left\{ {\mathrm {a}},\, {\mathrm {b}},\, \overline{{\mathrm {c}}},\, \overline{{\mathrm {d}}}\right\} &:&\left\{ \overline{{\mathrm {c}}},\, \overline{{\mathrm {d}}}\right\} ,\, \left\{ {\mathrm {a}},\, {\mathrm {b}}\right\} \\ \end{array} \end{aligned}$$

In addition, the top concept \(M\cup \overline{M}\) has the following minimal generators: \(\left\{ \overline{{\mathrm {b}}},\, \overline{{\mathrm {c}}},\, \overline{{\mathrm {d}}}\right\}\), \(\left\{ \overline{{\mathrm {a}}},\, \overline{{\mathrm {c}}},\, \overline{{\mathrm {d}}}\right\}\), \(\left\{ \overline{{\mathrm {a}}},\, \overline{{\mathrm {b}}},\, \overline{{\mathrm {c}}}\right\}\), \(\left\{ {\mathrm {d}},\, \overline{{\mathrm {a}}},\, \overline{{\mathrm {b}}}\right\}\), \(\left\{ {\mathrm {d}},\, \overline{{\mathrm {d}}}\right\}\), \(\left\{ {\mathrm {c}},\, {\mathrm {d}},\, \overline{{\mathrm {b}}}\right\}\), \(\left\{ {\mathrm {c}},\, \overline{{\mathrm {c}}}\right\}\), \(\left\{ {\mathrm {a}},\, {\mathrm {c}},\, {\mathrm {d}}\right\}\), \(\left\{ {\mathrm {b}},\, \overline{{\mathrm {b}}}\right\}\), \(\left\{ {\mathrm {a}},\, {\mathrm {b}},\, {\mathrm {d}}\right\}\), \(\left\{ {\mathrm {a}},\, {\mathrm {b}},\, {\mathrm {c}}\right\}\) and \(\left\{ {\mathrm {a}},\, \overline{{\mathrm {a}}}\right\}\).

Next, the Hasse diagrams of the minimal generators for the mixed and simple contexts are depicted in Fig. 2, using the same colour code as in Fig. 1. For the sake of presentation, we have not included in the figure those generators that explicitly describe a contradiction: \(\left\{ {\mathrm {a}},\, {{\overline{{\mathrm {a}}}}}\right\}\), \(\left\{ {\mathrm {b}},\, {{\overline{{\mathrm {b}}}}}\right\}\), \(\left\{ {\mathrm {c}},\, {{\overline{{\mathrm {c}}}}}\right\}\) and \(\left\{ {\mathrm {d}},\, {{\overline{{\mathrm {d}}}}}\right\}\). Note that in Fig. 2 the reader can easily visualise the new knowledge extracted by using mixed attributes: first, it is visually clear that none of the minimal generators of the mixed context coloured white can be obtained from the minimal generators of the positive and negative contexts (coloured blue and orange, respectively); and second, all the minimal generators of the positive and negative contexts are embedded in the set of minimal generators of the mixed context. \(\square\)

Fig. 2

Hasse diagrams of non-trivial minimal generators for (a) the positive context \({\mathbb {K}}\); (b) the mixed formal context; (c) the negative context \(\overline{{\mathbb {K}}}\)

With the theoretical results and the running example in this section, we have illustrated that a greater level of granularity in the knowledge extracted from the context and a more exhaustive exploration of the relationship between attributes can be achieved by working with mixed contexts. New hidden patterns arise with the computation of the minimal generators for mixed formal contexts.

4 Case Study and Results

In this section, we present a case study where we use the previous theoretical results and show their advantages with respect to the classical approach. For the sake of reproducibility, all the material (dataset and code, as well as a script to replicate the results) has been collected and presented in a public GitHub repository at https://github.com/Malaga-FCA-group/demo-elearning.

4.1 Introduction to the Case

The object of study is a class of 47 students in the third year of compulsory secondary education in Spain (3\(^{\circ }\) ESO). The analysis considers the marks obtained by these students in the subject of Mathematics in all the partial exams and coursework completed during the 2020/2021 academic year. These exams are grouped into three terms (see Table 2). The analysis aims to exploit the information obtained through the use of mixed contexts (with positive and negative attributes) in two ways:

  • On the one hand, we propose the use of the mixed lattice \(\underline{{\mathbb {B}}}^{\#}({\mathbb {K}})\) to establish the possible learning paths of a student;

  • On the other hand, we describe the knowledge space generated by the students employing the minimal generators.

Table 2 Exams and coursework
Fig. 3

Subsemilattice of the mixed context considering those concepts containing the attributes P and F

The analysis has been carried out in R. The package used for the study is fcaR, which was developed by our team and is available in the CRAN package repository [28]. This package allows the user to compute the main operations of FCA, such as concept lattices and minimal generators. The dataset, provided by the teacher in a spreadsheet, is turned into a formal context where the students are the objects and each unit is an attribute. A student is related to an attribute if they passed the exam of the respective unit. Note that, by considering the mixed context, we also have the negated attributes, which indicate that a student has failed the respective exam. The names of the (positive) attributes are specified in Table 2 next to the corresponding unit, and each negated attribute is represented by an overline; e.g., the attributes I and \({\overline{I}}\) represent “the student has passed the exam of Integers and fractions” and “the student has failed the exam of Integers and fractions”, respectively.
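The binarisation step can be sketched as follows. The Python snippet below (with hypothetical marks and an assumed pass threshold of 5; the actual analysis in the paper is done in R with the fcaR package) turns a marks table into a mixed context:

```python
# Sketch: binarising a marks table into a mixed formal context
# (hypothetical marks; pass threshold of 5 is an assumption).
marks = {"student1": {"I": 7.0, "P": 3.5},
         "student2": {"I": 4.0, "P": 6.0}}

PASS = 5.0
context = {}
for student, row in marks.items():
    attrs = set()
    for unit, mark in row.items():
        # positive attribute if passed, negated attribute otherwise
        attrs.add(unit if mark >= PASS else "not_" + unit)
    context[student] = attrs

print(sorted(context["student1"]))   # ['I', 'not_P']
```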

4.2 First Analysis: Exploration of the Knowledge Space

For the first study, the mixed lattice \(\underline{{\mathbb {B}}}^{\#}({\mathbb {K}})\) has been constructed using the NextClosure algorithm [27], yielding a total of 403 rules in the implication basis and 1769 concepts. This lattice can be navigated to determine the path that a learner has followed during the course to either pass or fail an exam. For example, let us pose a practical question: what has to happen for a student who has failed the polynomials and functions exams to pass the course. In Fig. 3, we show the subsemilattice formed by those concepts (from \(\underline{{\mathbb {B}}}^{\#}({\mathbb {K}})\)) that contain the attributes \(\{\overline{\mathrm {P}},\, \overline{\mathrm {F}}\}\); i.e., the attributes stating that a student failed the exams related to the units of Polynomials and Functions. Note that the top concept also contains \(\{\overline{\mathrm {2{nd}T}}\}\), indicating that all the students that have failed those two exams have also failed the second term. The same colour code as in Example 3 has been used to mark those concepts that also appear in the lattice \(\underline{{\mathbb {B}}}(\overline{{\mathbb {K}}})\). To improve the readability of this graph, only those attributes that do not appear in the nodes immediately above are shown in each node. A node marked with \(\star\) indicates that its attributes are exactly those formed by the union of its upper neighbours. To reinforce this fact, the arrows have been labelled with symbols \(+\) and \(\cup\) indicating that its immediate subconcepts (below) contain either new attributes (specified in boxes) or that the new concept is the union of its upper neighbours, respectively. 
For example, the concept at the bottom, marked with \(\star\), is exactly the concept containing the attributes \(\{\overline{\mathrm {P}},\, \overline{\mathrm {F}}, \, \overline{\mathrm {2{nd}T}}, \, \overline{\mathrm {I}},\, \overline{{\mathrm {d}}},\, \overline{\mathrm {1{st}T}},\, \overline{\mathrm {3{rd}T}}, \, \overline{\mathrm {M}}, \, \overline{\mathrm {Final}},\, \overline{\mathrm {G}},\, \mathrm {Cwk},\, \mathrm {S}\}\). To answer the posed question, we can observe in Fig. 3 that there is a path, marked in blue, corresponding to students that reach a knowledge state (concept) in which the attribute \(\mathrm {Final}\) is present, i.e., students that pass the course. This learning path includes passing the coursework (\(\mathrm {Cwk}\)), the Statistics exam (\(\mathrm {S}\)) and the third term (\(\mathrm {3{rd}T}\)), among other possibilities. In contrast, we can see that many other paths end with the attribute \(\overline{\mathrm {Final}}\), indicating that the corresponding students have not completed the course successfully. In all these paths, the attribute \(\overline{\mathrm {3{rd}T}}\) appears, indicating that students who fail the third term also fail the course.
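The subsemilattice of Fig. 3 is obtained by restricting the lattice to the concepts whose intents contain the chosen attributes. As a rough illustration of the underlying idea only (not the NextClosure algorithm that fcaR actually uses), the following brute-force Python sketch enumerates the formal concepts of a toy context that contain a given attribute set:

```python
from itertools import combinations

def all_attrs(ctx):
    return set().union(*ctx.values())

def intent(objects, ctx):
    """Attributes common to all given objects (the ' operator on extents)."""
    sets = [ctx[o] for o in objects]
    return set.intersection(*sets) if sets else all_attrs(ctx)

def extent(attrs, ctx):
    """Objects having all given attributes (the ' operator on intents)."""
    return {o for o, a in ctx.items() if attrs <= a}

def concepts_containing(attrs, ctx):
    """All formal concepts (extent, intent) whose intent includes `attrs`.

    Exponential brute force: try every object subset, keep the closed pairs.
    Only suitable for toy contexts."""
    found = []
    objs = sorted(ctx)
    for r in range(len(objs) + 1):
        for combo in combinations(objs, r):
            ext = set(combo)
            itt = intent(ext, ctx)
            if extent(itt, ctx) == ext and attrs <= itt:   # closed pair
                found.append((frozenset(ext), frozenset(itt)))
    return found

toy = {"s1": {"a", "b"}, "s2": {"a"}, "s3": {"b", "c"}}
sub = concepts_containing({"b"}, toy)   # the subsemilattice below attribute b
```

The concepts returned for a fixed attribute set form a meet-subsemilattice of the whole concept lattice, which is exactly the structure displayed in Fig. 3.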

This exhaustive analysis would not have been possible using only the positive and negative contexts, which highlights the wealth of knowledge that can be extracted from the mixed context. Note that using only \(\underline{{\mathbb {B}}}({\mathbb {K}})\) or \(\underline{{\mathbb {B}}}(\overline{{\mathbb {K}}})\), we could never have inferred a dependency relationship or learning path between two attributes with opposite signs (as in this case).

4.3 Second Analysis: Minimal Generators

In the second analysis, we focus on minimal sets of exams or terms that lead students to fail the whole course. In this case, the mathematical entities of our framework that can help us face this question are the minimal generators. In particular, our analysis aims at providing answers to the following question: “Is it possible for a student that has failed a certain number of units to pass the course?” or, equivalently, “Which are the minimal sets of exams or terms that lead the students to fail the module?”.

Notice that, in a well-structured subject, there should be neither a unit nor a small set of units whose passing implies passing the whole course. For this reason, we do not focus our analysis on passing the whole course, but on failing it. Along this line, since Mathematics is a hierarchical subject, in the sense that one unit is usually necessary to understand the following one, we aim to answer whether a student who has failed some exams would necessarily fail all the subsequent exams. If this is not the case, we could support the idea that the contents of one unit can be retaken (without a specific exam) during the rest of the course. This would mean that, even with a rough start, hard work pays off.

The computation of all the minimal generators of the mixed context produces a total of 24,975 generators. For the sake of readability, we have filtered this large set to keep only the generators whose support is greater than 10%, that is, generators consisting of combinations of attributes that appear in at least 10% of the cases. Since the minimal generators and their closed sets can be unambiguously represented by a set of implications, we will use that representation. Thus, every implication in what follows has a minimal generator as its left-hand side. This allows us to use the Simplification Logic to further reduce the redundancies in the implications. After this process, we restrict the results to those implications relevant to the question posed above and obtain 20 representative implications related to failing the course:
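To make the notion concrete, the following naive Python sketch enumerates, over a toy context, the minimal generators whose support exceeds a threshold; this is an illustration of the definition, not the algorithm implemented in fcaR:

```python
from itertools import combinations

def closure(attrs, ctx):
    """Closure A'' in the context: intent of the extent of A."""
    ext = [a for a in ctx.values() if attrs <= a]
    return set.intersection(*ext) if ext else set().union(*ctx.values())

def minimal_generators(ctx, min_support=0.1):
    """Minimal attribute sets generating their closure, filtered by support.

    A set C is a minimal generator if no proper subset has the same closure;
    by monotonicity, checking the subsets C - {x} suffices."""
    attrs = sorted(set().union(*ctx.values()))
    n = len(ctx)
    gens = []
    for r in range(1, len(attrs) + 1):
        for cand in combinations(attrs, r):
            c = set(cand)
            support = sum(c <= a for a in ctx.values()) / n
            if support < min_support:
                continue                      # the 10% filter of the paper
            cl = closure(c, ctx)
            if all(closure(c - {x}, ctx) != cl for x in c):
                gens.append((frozenset(c), frozenset(cl), support))
    return gens

toy = {"s1": {"a", "b"}, "s2": {"a"}}
gens = minimal_generators(toy)
```

Each pair (generator, closure) corresponds to one implication "generator → closure minus generator", which is the representation used in the list below.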

$$\begin{aligned} \begin{array}{cccc}
\text {1: } &{} \{\overline{\mathrm {2ndT}}, \overline{\mathrm {3rdT}} \} \rightarrow \{\overline{\mathrm {Final}}, \overline{\mathrm {M}} \} &{} \text {11: } &{} \{\overline{\mathrm {d}}, \overline{\mathrm {F}}, \overline{\mathrm {3rdT}} \} \rightarrow \{\overline{\mathrm {I}}, \overline{\mathrm {P}}, \overline{\mathrm {1stT}}, \overline{\mathrm {2ndT}}, \overline{\mathrm {Final}}, \overline{\mathrm {M}} \}\\
\text {2: } &{} \{\overline{\mathrm {F}}, \overline{\mathrm {2ndT}}, \overline{\mathrm {M}} \} \rightarrow \{\overline{\mathrm {3rdT}}, \overline{\mathrm {Final}} \} &{} \text {12: } &{} \{\overline{\mathrm {d}}, \overline{\mathrm {F}}, \overline{\mathrm {2ndT}}, \overline{\mathrm {M}} \} \rightarrow \{\overline{\mathrm {I}}, \overline{\mathrm {P}}, \overline{\mathrm {1stT}} \}\\
\text {3: } &{} \{\overline{\mathrm {F}}, \overline{\mathrm {2ndT}}, \overline{\mathrm {G}} \} \rightarrow \{\overline{\mathrm {3rdT}}, \overline{\mathrm {Final}}, \overline{\mathrm {M}} \} &{} \text {13: } &{} \{\overline{\mathrm {I}}, \overline{\mathrm {d}}, \overline{\mathrm {P}}, \overline{\mathrm {3rdT}}, \overline{\mathrm {M}} \} \rightarrow \{\overline{\mathrm {F}} \}\\
\text {4: } &{} \{\overline{\mathrm {1stT}}, \overline{\mathrm {3rdT}}, \overline{\mathrm {M}} \} \rightarrow \{\overline{\mathrm {2ndT}}, \overline{\mathrm {Final}} \} &{} \text {14: } &{} \{\overline{\mathrm {P}}, \overline{\mathrm {3rdT}}, \overline{\mathrm {M}} \} \rightarrow \{\overline{\mathrm {1stT}}, \overline{\mathrm {2ndT}}, \overline{\mathrm {Final}} \}\\
\text {5: } &{} \{\overline{\mathrm {1stT}}, \overline{\mathrm {F}} \} \rightarrow \{\overline{\mathrm {I}}, \overline{\mathrm {d}}, \overline{\mathrm {P}}, \overline{\mathrm {2ndT}}, \overline{\mathrm {3rdT}}, \overline{\mathrm {Final}}, \overline{\mathrm {M}} \} &{} \text {15: } &{} \{\overline{\mathrm {P}}, \overline{\mathrm {F}}, \overline{\mathrm {M}} \} \rightarrow \{\overline{\mathrm {I}}, \overline{\mathrm {d}}, \overline{\mathrm {1stT}}, \overline{\mathrm {2ndT}}, \overline{\mathrm {3rdT}}, \overline{\mathrm {Final}} \}\\
\text {6: } &{} \{\mathrm {d}, \overline{\mathrm {2ndT}}, \overline{\mathrm {M}} \} \rightarrow \{\overline{\mathrm {Final}} \} &{} \text {16: } &{} \{\overline{\mathrm {I}}, \overline{\mathrm {P}}, \overline{\mathrm {F}} \} \rightarrow \{\overline{\mathrm {d}}, \overline{\mathrm {1stT}}, \overline{\mathrm {2ndT}}, \overline{\mathrm {3rdT}}, \overline{\mathrm {Final}}, \overline{\mathrm {M}} \}\\
\text {7: } &{} \{\overline{\mathrm {P}}, \overline{\mathrm {2ndT}}, \overline{\mathrm {3rdT}} \} \rightarrow \{\overline{\mathrm {1stT}} \} &{} \text {17: } &{} \{\overline{\mathrm {d}}, \overline{\mathrm {P}}, \overline{\mathrm {F}} \} \rightarrow \{\overline{\mathrm {I}}, \overline{\mathrm {1stT}}, \overline{\mathrm {3rdT}}, \overline{\mathrm {Final}}, \overline{\mathrm {M}} \}\\
\text {8: } &{} \{\overline{\mathrm {d}}, \overline{\mathrm {2ndT}}, \overline{\mathrm {3rdT}} \} \rightarrow \{\overline{\mathrm {1stT}} \} &{} \text {18: } &{} \{\overline{\mathrm {I}}, \overline{\mathrm {F}}, \overline{\mathrm {2ndT}} \} \rightarrow \{\overline{\mathrm {d}}, \overline{\mathrm {P}}, \overline{\mathrm {1stT}}, \overline{\mathrm {3rdT}}, \overline{\mathrm {Final}}, \overline{\mathrm {M}} \}\\
\text {9: } &{} \{\overline{\mathrm {I}}, \overline{\mathrm {d}}, \overline{\mathrm {P}}, \overline{\mathrm {2ndT}}, \overline{\mathrm {3rdT}} \} \rightarrow \{\overline{\mathrm {F}} \} &{} \text {19: } &{} \{\overline{\mathrm {I}}, \overline{\mathrm {2ndT}}, \overline{\mathrm {G}} \} \rightarrow \{\overline{\mathrm {Final}}, \overline{\mathrm {M}} \}\\
\text {10: } &{} \{\overline{\mathrm {P}}, \overline{\mathrm {F}}, \overline{\mathrm {3rdT}} \} \rightarrow \{\overline{\mathrm {I}}, \overline{\mathrm {d}}, \overline{\mathrm {1stT}}, \overline{\mathrm {2ndT}}, \overline{\mathrm {Final}}, \overline{\mathrm {M}} \} &{} \text {20: } &{} \{\overline{\mathrm {I}}, \overline{\mathrm {d}}, \overline{\mathrm {F}}, \overline{\mathrm {M}} \} \rightarrow \{\overline{\mathrm {P}}, \overline{\mathrm {1stT}}, \overline{\mathrm {2ndT}}, \overline{\mathrm {3rdT}}, \overline{\mathrm {Final}} \}
\end{array} \end{aligned}$$

Firstly, note that the left-hand side of each implication is a minimal generator of a concept containing \(\{\overline{\mathrm {Final}}\}\). Note also that \(\{\overline{\mathrm {Final}}\}\) does not appear in the right-hand side of some of these implications. This is due to the simplification performed to improve legibility. For instance, the consequent of rule 7 does not mention \(\{\overline{\mathrm {Final}}\}\) explicitly but, if we compute the logical closure of its antecedent \(\{\overline{\mathrm {P}}, \overline{\mathrm {2ndT}}, \overline{\mathrm {3rdT}} \}\), we obtain \(\{\overline{\mathrm {P}}, \overline{\mathrm {1stT}}, \overline{\mathrm {2ndT}}, \overline{\mathrm {3rdT}}, \overline{\mathrm {Final}}, \overline{\mathrm {M}} \}\). It suffices to observe that we can apply [Comp] to rules 1 and 7 to infer this logical closure.
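The logical closure used here can be computed by repeatedly firing every implication whose antecedent is already satisfied. The following Python sketch (with plain-string names such as `not_P` standing in for the overlined attributes) reproduces the closure of the antecedent of rule 7 from rules 1 and 7:

```python
def implication_closure(attrs, implications):
    """Smallest superset of `attrs` closed under the implications:
    repeatedly fire every rule whose left-hand side is contained in
    the current set, until nothing changes (forward chaining)."""
    closed = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in implications:
            if lhs <= closed and not rhs <= closed:
                closed |= rhs
                changed = True
    return closed

# Rules 1 and 7 from the list above, "not_" marking a negated attribute:
rules = [
    ({"not_2ndT", "not_3rdT"}, {"not_Final", "not_M"}),   # rule 1
    ({"not_P", "not_2ndT", "not_3rdT"}, {"not_1stT"}),    # rule 7
]
closed = implication_closure({"not_P", "not_2ndT", "not_3rdT"}, rules)
# -> {"not_P", "not_1stT", "not_2ndT", "not_3rdT", "not_Final", "not_M"}
```

This agrees with the closure stated in the text; in fcaR, the Simplification Logic machinery performs this computation over the full implication basis.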

Secondly, for the sake of understanding, let us explain in detail the information represented by some of these implications. Let us start with rule 1; it states that every student who has failed the second and third terms has also failed the mean of the marks of the whole course and, hence, the course itself. This is an obvious piece of information from the teaching point of view: a student who fails two out of three terms will hardly ever pass the module (in this particular case, this situation does not arise even once). When an algorithm outputs such trivial information, it can be seen as a sign that the approach is coherent and trustworthy. However, not all the information given by this set of implications is trivial. For instance, rule 5 states that if a student has failed the first term and the Functions exam, then the student has also failed the Integers exam, the Decimals exam, the Polynomials exam, the second term, the third term and the final mark. In other words, every student who has failed the first term and Functions, a single unit in the second term, has failed the course. This somehow emphasises that Mathematics has a pyramidal structure, where a lack of basic knowledge, namely failing the first term, makes learning new concepts significantly harder. In fact, the set of implications highlights the importance of the Functions unit for passing the course: it appears not only in the previously described implication 5 but also in the antecedents (i.e., in the minimal generators) of implications 2, 3, 10, 11, 12, 15, 16, 17, 18 and 20.

On the other hand, we have the information reported by implication 6, whose antecedent is the only mixed set of attributes; specifically, it contains the positive attribute \(\mathrm {d}\) and the negative attributes \(\overline{\mathrm {2ndT}}\) and \(\overline{\mathrm {M}}\). Note that this implication could not be obtained using only the positive or the negative context. It states that every student who has passed the Decimals unit but failed the second term and the mean of the module has failed the whole subject. Failing the average mark of the module is often a strong enough condition to fail the entire module, but this is not true in every single case: some students have a failing average mark but passed the course anyway. This is due to some hidden evaluation criteria applied by the teacher that depend on information not explicitly present in the dataset, e.g., behaviour in class, punctuality, submitting homework on time, etc. In any case, note that FCA is capable of revealing this hidden evaluation method, since the singleton set of attributes \(\{\overline{\mathrm {M}}\}\) is not a minimal generator.

Finally, concerning the posed question (which are the minimal sets of exams or terms that lead the students to fail the module?), we focus on implication 16, which says that every student who has failed the Integers, Polynomials and Functions exams has also failed Decimals, the three terms, the mean and, therefore, the whole subject. Hence, there exists a set of units whose failure implies failing the whole module. Note that this set is not unique; for example, implication 17 says that every student who has failed the Decimals, Polynomials and Functions exams has failed the whole module.

A final remark: one could argue that a similar analysis could be done using only positive or only negative attributes, to detect, for instance, how failing some items leads to failing the whole course. However, we must emphasise that the combination of positive and negative attributes provides a level of granularity that cannot be achieved otherwise. To illustrate this point, note that, out of the 24,975 minimal generators, 24,023 are mixed, 509 are purely positive and 443 are purely negative; that is, more than 96% of the generators are mixed. Thus, using only one of the simple contexts would not allow us to reach the level of expressiveness and granularity that we achieve with the mixed context.

To complete this remark and demonstrate that the mixed information (not purely positive or negative) is not incidental, a more comprehensive listing of mixed minimal generators related to this problem is presented in Appendix A, showing the richness of the information retrieved by Formal Concept Analysis in the mixed context.

The reader can observe that the use of mixed attributes, along with the proposed algebraic and logic tools, allows us to determine the structure of the knowledge obtained by the students during the course. Note that some pieces of that knowledge are not directly visible to the naked eye from the raw data, i.e., they cannot be derived without the use of formal tools.

5 Conclusions and Future Work

In this paper, we have recalled the basic notions of FCA, Simplification Logic and the relationship of those theories with Knowledge Space Theory. Furthermore, we have presented some new theoretical results about the minimal generators in FCA using positive and negative attributes. The most remarkable result states that the minimal generators of a mixed context contain not only the minimal generators of the positive and negative contexts but also new minimal generators not purely positive or negative. Consequently, we have shown the advantages of using the mixed context instead of using the context with just positive or negative information.

In addition, we have presented a case study. Specifically, we have analysed the students’ marks in a Mathematics course at a high school in Andalusia and constructed the corresponding knowledge space. We have extended the approach of [15] by using the information on whether a student fails or passes each exam. We have computed the minimal generators of the formal context and studied those that lead students to fail the whole module. We have recovered some expected information, but we have also collected non-trivial information. In fact, we have explicitly shown that the use of mixed attributes provides deeper insight than the standard FCA approach. This information might be a helpful reference for teachers when planning the organisation of the course.

As future work, we plan to extend this study to the fuzzy framework. This will allow us to capture finer-grained information and a higher level of detail when modelling the knowledge space. In the long term, an analysis of the subjects of a whole course would be helpful to find dependencies between different subjects, not only in Mathematics.