Abstract
The hesitant fuzzy \(\beta \)-covering rough set offers stronger representational capabilities than earlier hesitant fuzzy rough sets, and its flexibility makes it well suited to hesitant fuzzy multi-attribute decision-making (MADM). As a result, it has become a popular research focus in decision analysis and has drawn significant attention from scholars. However, the existing hesitant fuzzy \(\beta \)-covering rough sets based on t-norms cannot handle the overlap and correlation between hesitant information well. To address this problem, we propose the hesitant fuzzy overlap function and, based on it, the hesitant fuzzy \(\beta \)-covering \(({\mathcal {I}},\) \({\mathcal {O}})\) rough set (HF\(\beta \)CIORS) models. First, we define the hesitant fuzzy overlap function and the representable hesitant fuzzy overlap function with respect to a partial order relation. Based on these definitions, we give examples of representable and unrepresentable hesitant fuzzy overlap functions and provide a detailed proof for the unrepresentable one. Second, we construct four types of HF\(\beta \)CIORS models and prove some of their important properties. Third, we integrate the HF\(\beta \)CIORS models with the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method and apply them to MADM problems. The validity of the proposed method is demonstrated through a practical application, and its stability and effectiveness are confirmed via sensitivity and comparative analyses. Based on these validations, our method proves effective in addressing MADM problems and offers reliable decision-making support.
1 Introduction
In practice, decision-making is a complex and uncertain process, as the evaluative information from decision-makers (DMs) often involves cognitive uncertainty and fuzziness. Cognitive differences among DMs may result in varying responses to the same issue. To address this problem, research on hesitant fuzzy sets (HFSs) has increased in recent years. Unlike traditional fuzzy sets, HFSs represent membership degrees by a set of possible values rather than a single fuzzy value. This representation is highly beneficial for modeling uncertain information in real-world problems, as it more accurately reflects the DMs’ level of uncertainty. HFSs were initially introduced by Torra [21, 22], expanding the concept of fuzzy sets to allow for more comprehensive information representation and to reduce personal biases. Xia and Xu [27] formalized the mathematical expression of HFSs in 2011 and developed the related hesitant fuzzy (HF) aggregation operators. Later, Zhu et al. [43] introduced dual hesitant fuzzy sets (DHFSs) in 2012, exploring their basic operations and properties in depth. Liao et al. [13] applied HF linguistic preference relations in decision-making processes, while Faizi extended HFSs theoretically, applying them to the characteristic object method and demonstrating their effectiveness in addressing uncertainty in decision-making problems. Xian et al. [28] proposed a model using Z hesitant fuzzy linguistic term sets to address uncertainty and fuzziness. Xin and Ying [29] developed a comprehensive hesitant fuzzy entropy. The development of HFSs has effectively addressed the differences of opinion and uncertainty among DMs, helping experts express their hesitation or ambiguity. This flexibility makes HFSs an ideal tool for constructing decision support systems. By integrating the uncertainty and hesitation of experts, HFSs provide a richer information base, leading to more accurate decision-making [1, 20, 34, 35].
Similar to traditional fuzzy sets, it is important to equip HFSs with related fuzzy logic. Fuzzy logic studies the logical properties of fuzzy propositions and connectives, as well as their inferential relationships, extending classical logic, and fuzzy logic connectives are its key components. Logic systems based on t-norms have long dominated fuzzy logic theory, especially in the field of aggregation operators: the authors of [5] constructed generalized geometric aggregation operators based on t-norms, and those of [19] constructed picture fuzzy aggregation operators based on the Frank t-norm. However, since t-norms must satisfy the associative law, they face limitations in certain application scenarios. Overlap functions, which are closely linked to t-norms, are not restricted by the associative law and have thus emerged as new non-associative fuzzy logic connectives, gradually gaining attention in both practical applications and theoretical research. Bustince et al. [3] were the first to introduce overlap functions and applied them to image processing and classification. Gómez et al. [6] defined n-dimensional overlap functions and demonstrated their axiomatization. Zhang et al. [39] introduced pseudo-overlap functions by eliminating commutativity and showed practical applications. Wang [24] created new overlap functions on bounded lattices, and Qiao [18] proposed quasi-overlap functions and their generalizations. Paiva et al. [15] defined quasi-overlap functions on lattices, exploring properties like transitivity, uniformity, idempotence, and the cancellation law. Building on these advances, Qiao [17] developed the \((I_O,O)\)-fuzzy rough set model, extending rough approximation operators to overlap functions and pioneering new directions. Han and Qiao [7] introduced the \((G_O,O)\) fuzzy rough set model based on overlap and grouping functions, analyzing its characteristics and topological properties. Zhang et al. [40] proposed a variable precision fuzzy rough set model based on an overlap function, investigated its properties, and demonstrated its effectiveness in tumor classification. Han et al. [8] introduced overlap function-based fuzzy probabilistic rough sets and multigranulation fuzzy probabilistic rough sets, showcasing their effectiveness and superior classification performance over t-norm-based models through examples and experiments.
On the other hand, with the rapid increase in information and the growing societal demand for complete information, researchers have focused on how to obtain such information. To address this issue, Pawlak [16] first introduced rough set theory in 1982, which has since become an important tool in the field of uncertainty mathematics. However, the equivalence relations required by rough sets are not easily obtained in practice. Consequently, related theories have been continuously proposed, covering various research directions such as [26, 32, 33]. Zakowski [33] introduced the concept of covering rough sets (CRSs), which replaces equivalence relations with coverings, retaining the original advantages of rough sets and significantly enhancing their practicality. Covering rough sets have been widely applied in decision analysis and other fields [2, 25], but they still fall short in some practical scenarios. Specifically, when dealing with real-world problems involving various attribute values, such as hesitant fuzzy (HF) numbers [21], further consideration is required. To address this issue, Yang et al. [31] proposed HF rough set theory, which was subsequently extended by Zhang et al. [36, 37]. Liang [11] introduced HFSs into HF decision-theoretic rough sets and studied their decision mechanisms. In the context of covering rough sets involving HFSs, Zhou and Li [42] proposed four HF \(\beta \)-neighborhood operators, while Fu et al. [4] introduced the HF \(\beta \)-covering \(({\mathcal {T}},\) \({\mathcal {I}})\) rough set (HF\(\beta \)CTIRS) model.
Since the HF t-norm must satisfy the associative law, it is limited in some application scenarios. In particular, as the complexity and relevance of information continue to grow, the limitations of existing HF-\(\beta \) covering rough sets based on the HF t-norm become apparent. These models face challenges in effectively addressing overlapping and interrelated hesitant information, which is crucial for accurately representing and analyzing complex data relationships. In addition, there is currently no definition or example of a representable HF t-norm, which also limits their applicability.
The overlap function is not constrained by the associative law, can better handle the overlap between pieces of information in practical applications, and has broader application prospects. Therefore, it is natural to study overlap functions in the HFS setting, introduce new concepts and examples, and establish a new HF \(\beta \)-covering rough set model based on the proposed overlap function. This expands the application of overlap functions to new fields and also provides a new method for processing and analyzing hesitant fuzzy data. To better understand the relevant concepts mentioned in this paper, Fig. 1 illustrates the connections between them.
Based on the above research, the main research contents of this paper are as follows:
-
Extend the existing HF rough set model. Given the limited research on representable HF t-norms, this paper proposes representable HF t-norms based on the work of Xia and Xu [27], enriching the theory of HF t-norms.
-
The aggregation operator in the existing HF \(\beta \)-covering rough set model is limited to the t-norm, while the HF t-norm is not well suited to handling the overlap and correlation between hesitant information. Therefore, this paper introduces the HF overlap function and illustrates it through examples.
-
Based on the HF overlap function and HF implication, a new HF\(\beta \)CIORS model is proposed, and its key properties are explored.
-
The HF\(\beta \)CIORS model is combined with the TOPSIS method and applied to the hesitant fuzzy multi-attribute decision-making (HFMADM) problem, and the results are illustrated through worked examples. Sensitivity and comparative analyses are conducted to verify the stability and effectiveness of the proposed method.
2 Preliminaries
In this section, some fundamental concepts are reviewed.
2.1 HFSs
Definition 1
[21] Consider a non-empty and finite set \(\Omega .\) A HFS E on \(\Omega \) is expressed as:
$$\begin{aligned} E=\{\langle \omega , h_E(\omega )\rangle \mid \omega \in \Omega \}, \end{aligned}$$
where \( h_E(\omega )\subseteq [0,1]\) indicates the possible membership degrees of the element \(\omega \) to E. To facilitate the subsequent exposition, \( h_E(\omega )\) is termed a hesitant fuzzy element (HFE), and \({\mathcal {H}}\) denotes the set of all HFEs. The set of all HFSs on \(\Omega \) is denoted by \(HF(\Omega ).\) Some special HFSs have also been proposed:
\(\forall \omega \in \Omega ,\) an empty HFS is characterized by \(h(\omega )=0_{\mathcal {H}}=\{0\},\) and it is represented as \(\varnothing .\) \(\forall \omega \in \Omega ,\) a full HFS is characterized by \(h(\omega )=1_{\mathcal {H}}=\{1\},\) and it is represented as \(\Omega .\)
Definition 2
[27] \(\forall h_A\in {\mathcal {H}},\) the score function of \(h_A\) is expressed as
$$\begin{aligned} s(h_A)=\frac{1}{l_{h_A}}\sum _{\gamma \in h_A}\gamma , \end{aligned}$$
where \(l_{h_A}\) denotes the number of values in \(h_A.\) For two HFEs \(h_A\) and \(h_B,\) if \(s(h_A) > s(h_B)\) \((s(h_A) < s(h_B)),\) then \(h_A > h_B\) \((h_A < h_B);\) if \(s(h_A) = s(h_B),\) then \(h_A = h_B.\)
It is noteworthy that the number of values contained in different HFEs may differ, and these values are not necessarily arranged in a specific order. To address this concern, [27] put forth the following two assumptions.
-
1.
Values in each HFE h are ordered in descending sequence. Let \(h^{\delta (s)}\) represent the s-th largest value in h.
-
2.
\(\forall h_A, h_B \in {\mathcal {H}}\) , if \(l_{h_A}> l_{h_B},\) \(h_B\) should be extended to be as long as \(h_A.\)
To achieve this goal, we extend \(h_B\) by adding the minimum value to it until \(l_{h_A} = l_{h_B},\) following the extension rules presented by Xu and Zhang [30].
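To make these conventions concrete, the following Python sketch (our own illustration, not part of the original formulation) sorts HFE values in descending order, extends the shorter HFE by repeating its minimum value, and computes the average-based score function of Definition 2; printed values are shown up to rounding.

```python
def normalize(h_a, h_b):
    """Sort both HFEs in descending order and extend the shorter one
    by repeating its minimum value (extension rule of Xu and Zhang)."""
    a = sorted(h_a, reverse=True)
    b = sorted(h_b, reverse=True)
    k = max(len(a), len(b))
    a += [min(a)] * (k - len(a))
    b += [min(b)] * (k - len(b))
    return a, b

def score(h):
    """Average-based score function (Definition 2)."""
    return sum(h) / len(h)

h_A = [0.8, 0.6, 0.4]
h_B = [0.8, 0.7, 0.5, 0.3]
print(normalize(h_A, h_B))      # ([0.8, 0.6, 0.4, 0.4], [0.8, 0.7, 0.5, 0.3])
print(score(h_A), score(h_B))   # ≈ 0.6  0.575
```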
Based on the above, the basic operations between HFSs are as follows.
Definition 3
[12] Consider a non-empty and finite set \(\Omega .\) \(\forall \) \(\alpha , \beta \in HF(\Omega )\) and \(\omega \in \Omega ,\)
-
(1)
\(h_{\alpha \sqcup \beta }(\omega )=h_\alpha (\omega ) \curlyvee h_\beta (\omega )=\{h_\alpha ^{\delta (s)}(\omega )\vee h_\beta ^{\delta (s)}(\omega ) \mid s=1,2, \ldots , k\};\)
-
(2)
\(h_{\alpha \sqcap \beta }(\omega )=h_\alpha (\omega ) \curlywedge h_\beta (\omega )=\{h_\alpha ^{\delta (s)}(\omega ) \wedge h_\beta ^{\delta (s)}(\omega ) \mid s=1,2, \ldots , k\};\)
-
(3)
\(h_{\alpha \boxplus \beta }(\omega )=h_\alpha (\omega ) \oplus h_\beta (\omega )=\{h_\alpha ^{\delta (s)}(\omega )+h_\beta ^{\delta (s)}(\omega )-h_\alpha ^{\delta (s)}(\omega ) h_\beta ^{\delta (s)}(\omega ) \mid s=1,2, \ldots , k\} \)
-
(4)
\(h_{\alpha \boxtimes \beta }(\omega )=h_\alpha (\omega ) \otimes h_\beta (\omega )= \{h_\alpha ^{\delta (s)}(\omega ) h_\beta ^{\delta (s)}(\omega ) \mid s=1,2, \ldots , k\}\)
-
(5)
\(h_{\alpha \boxdot \beta }(\omega )=h_\alpha (\omega ) \oslash h_\beta (\omega )=\{\overline{\gamma }^{\delta (s)} \mid s=1,2, \ldots , k\},\) where
$$\begin{aligned} \overline{\gamma }^{\delta (s)}=\left\{ \begin{array}{ll} \frac{h_\alpha ^{\delta (s)}(\omega )}{h_\beta ^{\delta (s)}(\omega )}, & \quad h_\alpha ^{\delta (s)}(\omega ) \le h_\beta ^{\delta (s)}(\omega ), h_\beta ^{\delta (s)}(\omega ) \ne 0 \\ 1, & \quad \text {others} \end{array}\right. \end{aligned}$$ -
(6)
\(h_{\sim \alpha }(\omega )={\mathcal {N}}(h_\alpha (\omega ))=\{1-h_\alpha ^{\delta (s)}(\omega ) \mid s=1,2, \ldots , k\},\) where \({\mathcal {N}}\) is a HF standard negator;
-
(7)
\(\lambda (h_\alpha (\omega ))=\{1-(1-h_\alpha ^{\delta (s)}(\omega ))^\lambda \mid s=1,2, \ldots , k\},\) where \(\lambda >0.\)
In (6) and (7), \(k=l_{h_\alpha (\omega )};\) in (1)–(5), \(k=\max (l_{h_\alpha (\omega )}, l_{h_\beta (\omega )}).\)
Example 1
Consider two HFEs given by \(h_A=\{0.8,0.6,0.4\}\) and \(h_B=\{0.8,0.7,0.5,0.3\}.\) Based on assumption (2), since \(l_{h_A}=3<l_{h_B}=4,\) we extend \(h_A\) to \(h_A=\{0.8,0.6,0.4,0.4\}.\) Next,
The remaining operations can also be obtained using Definition 3.
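As a small illustration of Definition 3, the following Python sketch (ours) implements operations (2)–(4) componentwise on the extended HFEs of Example 1; values in the comments are shown up to rounding.

```python
def hfe_meet(a, b):    # (2) componentwise minimum
    return [min(x, y) for x, y in zip(a, b)]

def hfe_plus(a, b):    # (3) probabilistic sum x + y - x*y
    return [x + y - x * y for x, y in zip(a, b)]

def hfe_times(a, b):   # (4) componentwise product
    return [x * y for x, y in zip(a, b)]

# Example 1: h_A extended to the length of h_B
h_A = [0.8, 0.6, 0.4, 0.4]
h_B = [0.8, 0.7, 0.5, 0.3]
print(hfe_meet(h_A, h_B))    # ≈ [0.8, 0.6, 0.4, 0.3]
print(hfe_plus(h_A, h_B))    # ≈ [0.96, 0.88, 0.7, 0.58]
print(hfe_times(h_A, h_B))   # ≈ [0.64, 0.42, 0.2, 0.12]
```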
Definition 4
[41] \(\forall h_A, h_B \in {\mathcal {H}},\) a partial order \(\le _{{\mathcal {H}}}\) is defined as follows:
$$\begin{aligned} h_A \le _{{\mathcal {H}}} h_B \Longleftrightarrow h_A^{\delta (s)} \le h_B^{\delta (s)}, \end{aligned}$$
where \(s=1,2, \ldots , k\) and \(k=\max (l_{h_A}, l_{h_B}).\) The pair \(({\mathcal {H}}, \le _{{\mathcal {H}}})\) forms a bounded lattice, where the smallest element is \(0_{{\mathcal {H}}}=\{0\}\) and the largest element is \(1_{{\mathcal {H}}}=\{1\}.\)
Definition 5
[38] Consider a non-empty and finite set \(\Omega .\) \(\forall \alpha , \beta \in H F(\Omega ),\) when \(h_{\alpha }(\omega ) \le _{{\mathcal {H}}} h_{\beta }(\omega )\) holds \(\forall \omega \in \Omega ,\) \(\alpha \) is termed as a HF subset of \(\beta ,\) this relationship is denoted as \(\alpha \Subset \beta .\)
Specifically, \(\alpha \) and \(\beta \) are considered equal, if \(\forall \omega \in \Omega ,\) satisfying \(h_\alpha (\omega )=h_\beta (\omega )(h_\alpha (\omega )=h_\beta (\omega ) \Longleftrightarrow h_\alpha ^{\delta (s)}(\omega )=h_\beta ^{\delta (s)}(\omega ), s=1,2, \ldots , k).\)
2.2 HF Logical Operators
This subsection reviews HF logic operators, which generalize fuzzy logic operators to the HF environment.
Definition 6
[41] A HF t-norm is defined as a mapping \({\mathcal {T}}: {\mathcal {H}}^2 \rightarrow {\mathcal {H}},\) satisfying:
-
(i)
\({\mathcal {T}}(1_{{\mathcal {H}}}, h_A)=h_A\) (border condition);
-
(ii)
\({\mathcal {T}}(h_A, h_B)={\mathcal {T}}(h_B, h_A)\) (commutativity);
-
(iii)
\({\mathcal {T}}(h_A, {\mathcal {T}}(h_B, h_C))={\mathcal {T}}({\mathcal {T}}(h_A, h_B), h_C)\) (associativity);
-
(iv)
If \(h_A \le _{{\mathcal {H}}} h_C\) and \(h_B \le _{{\mathcal {H}}} h_D,\) then \({\mathcal {T}}(h_A, h_B) \le _{{\mathcal {H}}} {\mathcal {T}}(h_C, h_D)\) (monotonicity), where \(h_i \in {\mathcal {H}}(i=A,B,C,D).\)
A HF t-conorm is defined as a mapping \({\mathcal {S}}: {\mathcal {H}}^2 \rightarrow {\mathcal {H}},\) exhibiting the following properties:
-
(i)
\({\mathcal {S}}(0_{{\mathcal {H}}}, h_A)=h_A\) (border condition);
-
(ii)
\({\mathcal {S}}(h_A, h_B)={\mathcal {S}}(h_B, h_A)\) (commutativity);
-
(iii)
\({\mathcal {S}}(h_A, {\mathcal {S}}(h_B, h_C))={\mathcal {S}}({\mathcal {S}}(h_A, h_B), h_C)\) (associativity);
-
(iv)
If \(h_A \le _{{\mathcal {H}}} h_C\) and \(h_B \le _{{\mathcal {H}}} h_D,\) then \({\mathcal {S}}(h_A, h_B) \le _{{\mathcal {H}}} {\mathcal {S}}(h_C, h_D)\) (monotonicity), where \(h_i \in {\mathcal {H}}(i=A,B,C,D).\)
Three typical HF t-norms and HF t-conorms are shown below:
-
(1)
\({\mathcal {T}}_M (h_A, h_B)=h_A \curlywedge h_B=\{h_A^{\delta (s)} \wedge h_B^{\delta (s)} \mid s=1,2, \ldots , k\}; S_M(h_A, h_B)=h_A \curlyvee h_B=\{h_A^{\delta (s)} \vee h_B^{\delta (s)} \mid s=1,2, \ldots , k\} \)
-
(2)
\({\mathcal {T}}_P(h_A, h_B)=h_A \otimes h_B=\{h_A^{\delta (s)} h_B^{\delta (s)} \mid s=1,2, \ldots , k\}; S_P(h_A, h_B)=h_A \oplus h_B=\{h_A^{\delta (s)}+h_B^{\delta (s)}-h_A^{\delta (s)} h_B^{\delta (s)} \mid s=1,2, \ldots , k\} \)
-
(3)
\({\mathcal {T}}_L(h_A, h_B)=\{(h_A^{\delta (s)}+h_B^{\delta (s)}-1) \vee 0 \mid s=1,2, \ldots , k\}; S_L(h_A, h_B)=\{(h_A^{\delta (s)}+h_B^{\delta (s)}) \wedge 1 \mid s=1,2, \ldots , k\}.\)
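All three pairs act componentwise on equally long HFEs; the following Python sketch (ours) makes this explicit, with the printed values shown up to rounding.

```python
def T_M(a, b): return [min(x, y) for x, y in zip(a, b)]
def S_M(a, b): return [max(x, y) for x, y in zip(a, b)]
def T_P(a, b): return [x * y for x, y in zip(a, b)]
def S_P(a, b): return [x + y - x * y for x, y in zip(a, b)]
def T_L(a, b): return [max(x + y - 1, 0) for x, y in zip(a, b)]
def S_L(a, b): return [min(x + y, 1) for x, y in zip(a, b)]

h_A, h_B = [0.8, 0.6, 0.4], [0.8, 0.7, 0.5]
print(T_M(h_A, h_B), T_P(h_A, h_B), T_L(h_A, h_B))
# ≈ [0.8, 0.6, 0.4]  [0.64, 0.42, 0.2]  [0.6, 0.3, 0.0]
```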
Definition 7
[14] A HF implicator is defined as a mapping \({\mathcal {I}}: {\mathcal {H}}^2 \rightarrow {\mathcal {H}},\) exhibiting the following properties:
-
(i)
\({\mathcal {I}}(0_{\mathcal {H}}, 0_{\mathcal {H}}) = {\mathcal {I}}(0_{\mathcal {H}}, 1_{\mathcal {H}}) = {\mathcal {I}}(1_{\mathcal {H}}, 1_{\mathcal {H}}) = 1_{\mathcal {H}},\)
-
(ii)
\({\mathcal {I}}(1_{\mathcal {H}}, 0_{\mathcal {H}}) = 0_{\mathcal {H}}.\)
If \(h_A \le _{\mathcal {H}} h_B \Rightarrow {\mathcal {I}}(h_A, h_C) \ge _{\mathcal {H}} {\mathcal {I}}(h_B, h_C),\) then \({\mathcal {I}}\) is left monotonic decreasing; If \(h_A \le _{\mathcal {H}} h_B \Rightarrow {\mathcal {I}}(h_C, h_A) \le _{\mathcal {H}} {\mathcal {I}}(h_C, h_B),\) then \({\mathcal {I}}\) is right monotonic increasing.
2.3 HF \(\beta \)-Covering Approximation Space
This subsection reviews the concepts of the HF \(\beta \)-covering approximation space (HF\(\beta \)-CAS).
Definition 8
[42] Consider a non-empty and finite set \(\Omega ,\) and let \(C=\{C_1, C_2, \ldots , C_m\},\) where \(C_i \in HF(\Omega )\) for \(i=1,2, \ldots , m.\) For any HFE \(\beta \in {\mathcal {H}},\) C is a HF \(\beta \)-covering of \(\Omega \) if \(h_{\sqcup _{i=1}^m C_i}(\omega ) \ge _{\mathcal {H}} \beta \) holds for any \(\omega \in \Omega .\) \((\Omega , C)\) is then called a HF\(\beta \)-CAS.
Definition 9
[42] Consider a HF\(\beta \)-CAS \((\Omega ,C).\) \(\forall \) \(\omega \in \Omega ,\) a HF \(\beta \)-neighborhood (HF\(\beta \)-N) of \(\omega \) is defined as:
Definition 10
[42] Consider a HF\(\beta \)-CAS \((\Omega ,C).\) \(\forall \omega \in \Omega ,\) a HF complementary \(\beta \)-neighborhood of \(\omega \) is defined as:
where \(h_{N_{2, \omega }^{\beta , C}}(y)=h_{N_{1, y}^{\beta , C}}(\omega ).\) Construct two HF\(\beta \)-N operators by the union and the intersection between \(N_{1, \omega }^{\beta , C}\) and \(N_{2, \omega }^{\beta , C}\): \(\forall \omega \in \Omega \)
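For readers who prefer an algorithmic view, the following Python sketch (ours) builds the HF \(\beta \)-neighborhood \(N_{1,\omega }^{\beta ,C}\) as the intersection of all covering elements whose membership at \(\omega \) dominates \(\beta \), which is the construction of [42] as used later in Example 9; all HFEs are assumed to be already normalized to a common length and sorted in descending order.

```python
def hfe_geq(a, b):
    """Partial order of Definition 4 on equally long, descending HFEs."""
    return all(x >= y for x, y in zip(a, b))

def hfe_meet(a, b):
    return [min(x, y) for x, y in zip(a, b)]

def neighborhood_1(x, covering, beta):
    """HF beta-neighborhood of object x.
    covering: dict  C_name -> {object -> HFE};  returns dict  object -> HFE.
    Assumes at least one covering element dominates beta at x."""
    selected = [C for C in covering.values() if hfe_geq(C[x], beta)]
    objects = next(iter(covering.values())).keys()
    neigh = {}
    for y in objects:
        h = selected[0][y]
        for C in selected[1:]:
            h = hfe_meet(h, C[y])
        neigh[y] = h
    return neigh
```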
3 HF Overlap Functions
Regarding the HF t-norms reviewed above, we observe that more HF t-norms satisfy the definition than just the one given by \({\mathcal {T}}(h_A, h_B)=\{T(h_A^{\delta (s)},h_B^{\delta (s)}) \mid s=1,2, \ldots , k\}.\) Therefore, the definition of HF t-norms can be extended.
Example 2
Consider two HFEs \(h_A=\{0.8,0.6,0.4\}\) and \(h_B=\) \(\{0.8,0.7,0.5\}.\) Apply \(T_M\) to the pairs \((h_A^{\delta (1)}, h_B^{\delta (1)})\) and \((h_A^{\delta (2)}, h_B^{\delta (2)}),\) and apply \(T_P\) to the pair \((h_A^{\delta (3)}, h_B^{\delta (3)}).\) We refer to the t-norm defined by this operational rule as \({\mathcal {T}}_1\):
Then \({\mathcal {T}}_1(h_A, h_B)=\{0.8,0.6,0.2\}.\) It is easy to prove \({\mathcal {T}}_1\) satisfies the definition of HF t-norm.
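The construction of Example 2 can be sketched in Python as follows (our illustration): a possibly different classical t-norm is applied at each position.

```python
def mixed_t_norm(a, b, t_norms):
    """Apply a (possibly different) t-norm at each position s."""
    return [t(x, y) for t, x, y in zip(t_norms, a, b)]

t_M = min                      # minimum t-norm
t_P = lambda x, y: x * y       # product t-norm

h_A, h_B = [0.8, 0.6, 0.4], [0.8, 0.7, 0.5]
print(mixed_t_norm(h_A, h_B, [t_M, t_M, t_P]))  # [0.8, 0.6, 0.2]
```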
Therefore, the concept of representable HF t-norms can be provided next.
Definition 11
The representable HF t-norm \(({\mathcal {T}}: {\mathcal {H}}^2 \rightarrow {\mathcal {H}})\) has the following form:
$$\begin{aligned} {\mathcal {T}}(h_A, h_B)=\{T_s(h_A^{\delta (s)},h_B^{\delta (s)}) \mid s=1,2, \ldots , k\}, \end{aligned}$$
where \(T_1\le T_2\le \cdots \le T_k.\)
Example 3
Let \(h_A=\{h_A^{\delta (1)}, h_A^{\delta (2)} \}\) and \(h_B=\{h_B^{\delta (1)}, h_B^{\delta (2)} \}\) be two HFEs. So, we have the following representable HF t-norm:
Then, we give the definition of overlap function:
Definition 12
[3] A bivariate function O : \([0, 1]^2 \rightarrow [0, 1]\) is defined as an overlap function if satisfying:
-
(1)
\(O(\nu _1, \nu _2) =O(\nu _2, \nu _1);\)
-
(2)
\(O(\nu _1, \nu _2) =0\) iff \(\nu _1=0\) or \(\nu _2=0;\)
-
(3)
\(O(\nu _1, \nu _2) =1\) iff \(\nu _1=\nu _2=1;\)
-
(4)
O is increasing;
-
(5)
O is continuous.
Example 4
[3] The following are some operations of overlap function, where p is positive:
-
Minimum-Maximum Overlap Function:
\(O_{n m}(\nu _1, \nu _2)= \min (\nu _1, \nu _2) \max (\nu _1^2, \nu _2^2);\)
-
Product Overlap Function:
\(O_p(\nu _1, \nu _2)=\nu _1^p \nu _2^p;\)
-
Minimum-Power Overlap Function:
\(O_{m p}(\nu _1, \nu _2)=\min (\nu _1^p, \nu _2^p);\)
-
Maximum-Power Overlap Function:
\(O_{M p}(\nu _1, \nu _2)=1-\max ((1-\nu _1)^p,(1-\nu _2)^p);\)
-
Dubois and Prade’s Overlap Function:
\(O_{D B}(\nu _1, \nu _2)= {\left\{ \begin{array}{ll}\frac{2 \nu _1 \nu _2}{\nu _1+\nu _2}, & \text{ if } \nu _1+\nu _2 \ne 0, \\ 0, & \text{ if } \nu _1+\nu _2=0.\end{array}\right. }\)
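These five operators translate directly into Python; the short sketch below (ours) uses p = 2 for the parameterized ones, and the printed values are shown up to rounding.

```python
def O_nm(x, y):        return min(x, y) * max(x * x, y * y)
def O_p(x, y, p=2):    return (x * y) ** p
def O_mp(x, y, p=2):   return min(x, y) ** p
def O_Mp(x, y, p=2):   return 1 - max((1 - x) ** p, (1 - y) ** p)
def O_DB(x, y):        return 2 * x * y / (x + y) if x + y != 0 else 0.0

print(O_nm(0.5, 0.8), O_p(0.5, 0.8), O_mp(0.5, 0.8), O_Mp(0.5, 0.8), O_DB(0.5, 0.8))
# ≈ 0.32  0.16  0.25  0.75  0.615
```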
Next, let’s discuss the partial order problem related to overlap functions.
Definition 13
Let \(f_A\) and \(f_B\) be two overlap functions.
-
(i)
We say that \(f_A \preceq f_B,\) iff \(f_A(\nu _1, \nu _2) \le f_B(\nu _1, \nu _2)\) holds, \(\forall \) \(\nu _1, \nu _2 \in \) [0, 1].
-
(ii)
We say that \(f_A \prec f_B\) iff \(f_A \preceq f_B\) and \(f_A \ne f_B.\)
Example 5
According to the overlap functions mentioned in Example 4, we have
-
\(O_{n m} \preceq O_{m p},\) where \(0<p \le 1;\)
-
\(O_{m p} \preceq O_{n m},\) where \(p \ge 3;\)
-
\(O_p \preceq O_{D B},\) where \(p \ge 1;\)
-
\(O_p \preceq O_{m p}.\)
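The orderings of Example 5 can be spot-checked numerically; the following self-contained sketch (ours) verifies \(O_p \preceq O_{mp}\) on a uniform grid for p = 2.

```python
# Spot-check O_p <= O_mp pointwise on a grid (Definition 13, Example 5).
def O_p(x, y, p=2):   return (x * y) ** p
def O_mp(x, y, p=2):  return min(x, y) ** p

grid = [i / 100 for i in range(101)]
assert all(O_p(x, y) <= O_mp(x, y) for x in grid for y in grid)
print("O_p <= O_mp holds on the grid")
```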
However, if we want to establish overlap functions on hesitant fuzzy sets, it is more reasonable to define overlap functions on a lattice rather than to rely on the original unit-interval overlap functions.
Definition 14
[24] Consider a bounded lattice \(({\mathcal {L}}, \le , 0, 1),\) where 1 is the greatest element and 0 is the smallest element. A binary operator \(O: {\mathcal {L}}^2 \rightarrow {\mathcal {L}}\) is defined as an overlap function on \({\mathcal {L}}\) if the following conditions are met for any \( \nu _1,\nu _2 \in {\mathcal {L}}\):
-
(1)
\(O(\nu _1, \nu _2) = O(\nu _2, \nu _1);\)
-
(2)
\(O(\nu _1, \nu _2)=0\) iff \(\nu _1=0\) or \(\nu _2=0;\)
-
(3)
\(O(\nu _1, \nu _2)=1\) iff \(\nu _1=\nu _2=1;\)
-
(4)
O preserves directed sups and filtered infs in each variable.
Definition 15
[10] A function \(O: {\mathcal {L}}^2 \rightarrow {\mathcal {L}}\) is defined as an overlap function on the complete lattice \({\mathcal {L}}\) if it satisfies the following conditions for any \(\nu _1, \nu _2 \in {\mathcal {L}}\) and any \(\{z_i: i \in \Lambda \} \subseteq {\mathcal {L}}\):
-
(1)
\(O(\nu _1, \nu _2)=O(\nu _2, \nu _1);\)
-
(2)
\(O(\nu _1, \nu _2)=0_{{\mathcal {L}}}\) iff \(\nu _1=0_{{\mathcal {L}}}\) or \(\nu _2=0_{{\mathcal {L}}};\)
-
(3)
\(O(\nu _1, \nu _2)=1_{{\mathcal {L}}}\) iff \(\nu _1=\nu _2=1_{{\mathcal {L}}};\)
-
(4)
\(O(\nu _1, \nu _2) \le O(\nu _1, z)\) if \(\nu _2 \le z;\)
-
(5)
\(O(\nu _1, \vee _{i \in \Lambda } z_i)=\vee _{i \in \Lambda } O(\nu _1, z_i);\)
-
(6)
\(O(\nu _1, \wedge _{i \in \Lambda } z_i)=\wedge _{i \in \Lambda } O(\nu _1, z_i);\)
where \(1_{\mathcal {L}}\) and \(0_{\mathcal {L}}\) denote the greatest and smallest element.
Based on the definition of overlap function, the concept of HF overlap function is defined as follows:
Definition 16
A HF overlap function is a mapping \({\mathcal {O}}: {\mathcal {H}} \times {\mathcal {H}} \rightarrow {\mathcal {H}}\) satisfying for any \(h_1, h_2, h_3 \in {\mathcal {H}}\):
\(({\mathcal {O}} 1)\) Commutativity: \({\mathcal {O}}(h_1, h_2)={\mathcal {O}}(h_2, h_1);\)
\(({\mathcal {O}} 2)\) Boundary condition: \({\mathcal {O}}(h_1, h_2)=0_{{\mathcal {H}}}\) iff \(h_1=0_{{\mathcal {H}}}\) or \(h_2=0_{{\mathcal {H}}};\)
\(({\mathcal {O}} 3)\) Boundary condition: \({\mathcal {O}}(h_1, h_2)=1_{{\mathcal {H}}}\) iff \(h_1=h_2=1_{{\mathcal {H}}};\)
\(({\mathcal {O}} 4)\) Monotonicity: \({\mathcal {O}}(h_1, h_2) \le _{{\mathcal {H}}} {\mathcal {O}}(h_1, h_3)\) if \(h_2 \le _{{\mathcal {H}}} h_3;\)
\(({\mathcal {O}} 5)\) Continuity: \({\mathcal {O}}\) is continuous, i.e., \(\forall i \in \Lambda , h_i \in {\mathcal {H}}, {\mathcal {O}}(h, \vee _{i \in \Lambda } h_i)=\vee _{i \in \Lambda } {\mathcal {O}}(h, h_i)\) and \({\mathcal {O}}(h, \wedge _{i \in \Lambda } h_i)=\wedge _{i \in \Lambda } {\mathcal {O}}(h, h_i).\)
Some typical examples of HF overlap functions are shown below:
-
(1)
\( {\mathcal {O}}_p(h_1, h_2)=\{(h_1^{\delta (s)})^p (h_2^{\delta (s)})^p \mid s=1,2, \ldots , k\} \)
-
(2)
\({\mathcal {O}}_{nm}(h_1, h_2)=\{ ( h_1^{\delta (s)}\wedge h_2^{\delta (s)} ) ( (h_1^{\delta (s)})^2 \vee (h_2^{\delta (s)})^2 ) \mid s=1,2, \ldots , k\} \)
-
(3)
\({\mathcal {O}}_{mp}(h_1, h_2)=\{(h_1^{\delta (s)})^p \wedge (h_2^{\delta (s)})^p \mid s=1,2, \ldots , k\} \)
-
(4)
\({\mathcal {O}}_{Mp}(h_1, h_2)=\{1-((1-h_1^{\delta (s)})^p \vee (1-h_2^{\delta (s)})^p) \mid s=1,2, \ldots , k\}.\)
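Each of these operators acts componentwise on equally long HFEs, as in the following Python sketch (ours); the printed values are shown up to rounding.

```python
def HF_O_p(h1, h2, p=2):
    return [(x * y) ** p for x, y in zip(h1, h2)]

def HF_O_nm(h1, h2):
    return [min(x, y) * max(x * x, y * y) for x, y in zip(h1, h2)]

def HF_O_mp(h1, h2, p=2):
    return [min(x, y) ** p for x, y in zip(h1, h2)]

def HF_O_Mp(h1, h2, p=2):
    return [1 - max((1 - x) ** p, (1 - y) ** p) for x, y in zip(h1, h2)]

h1, h2 = [0.8, 0.6, 0.4], [0.7, 0.5, 0.5]
print(HF_O_p(h1, h2))    # ≈ [0.3136, 0.09, 0.04]
print(HF_O_nm(h1, h2))   # ≈ [0.448, 0.18, 0.1]
```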
Similarly, we can also define representable HF overlap functions.
Definition 17
The representable HF overlap function \(({\mathcal {O}}: {\mathcal {H}}^2 \rightarrow {\mathcal {H}})\) has the following form:
$$\begin{aligned} {\mathcal {O}}(h_1, h_2)=\{O_s(h_1^{\delta (s)},h_2^{\delta (s)}) \mid s=1,2, \ldots , k\}, \end{aligned}$$
where \(O_1\le O_2\le \cdots \le O_k.\)
Example 6
Let \(h_1=\{h_1^{\delta (1)}, h_1^{\delta (2)} \}\) and \(h_2=\{h_2^{\delta (1)}, h_2^{\delta (2)} \}\) be two HFEs, where \(0 < p \le 1.\) So, we have the following representable HF overlap functions:
-
(1)
\({\mathcal {O}}_a(h_1, h_2)=\{(h_1^{\delta (1)})^p (h_2^{\delta (1)})^p , (h_1^{\delta (2)})^p \wedge (h_2^{\delta (2)})^p \} \)
-
(2)
\({\mathcal {O}}_b(h_1, h_2)=\{( h_1^{\delta (1)}\wedge h_2^{\delta (1)})( (h_1^{\delta (1)})^2 \vee (h_2^{\delta (1)})^2 ) , (h_1^{\delta (2)})^p \wedge (h_2^{\delta (2)})^p\}.\)
However, not all HF overlap functions are representable; the following is an example of an unrepresentable HF overlap function.
Example 7
Let \(h_1=\{h_1^{\delta (1)}, h_1^{\delta (2)} \}\) and \(h_2=\{h_2^{\delta (1)}, h_2^{\delta (2)} \}\) be two HFEs. The HF overlap function
$$\begin{aligned} {\mathcal {O}}(h_1, h_2)=\{ 0.5 h_1^{\delta (1)}h_2^{\delta (1)} + 0.5\max (0,h_1^{\delta (1)} + h_2^{\delta (1)} - 1 ), 1-\min (1, 2-h_1^{\delta (2)} - h_2^{\delta (1)}, 2-h_1^{\delta (1)} - h_2^{\delta (2)} ) \} \end{aligned}$$
is an unrepresentable HF overlap function.
Proof
First, for all \(h_1, h_2 \in {\mathcal {H}},\) \(h_1=\{h_1^{\delta (1)}, h_1^{\delta (2)} \}\) and \(h_2=\{h_2^{\delta (1)}, h_2^{\delta (2)} \},\) where \(h_1^{\delta (1)} \ge h_1^{\delta (2)}\) and \(h_2^{\delta (1)} \ge h_2^{\delta (2)},\) we need to prove that \({\mathcal {O}}(h_1, h_2)\) is a HFE, i.e., that \(O_1(h_1^{\delta (1)},h_2^{\delta (1)})\ge O_2(h_1^{\delta (2)},h_2^{\delta (2)}),\) where \(O_1(h_1^{\delta (1)},h_2^{\delta (1)})=0.5 h_1^{\delta (1)}h_2^{\delta (1)} + 0.5\max (0,h_1^{\delta (1)} + h_2^{\delta (1)} - 1 )\) and \(O_2(h_1^{\delta (2)},h_2^{\delta (2)})=1-\min (1, 2-h_1^{\delta (2)} - h_2^{\delta (1)}, 2-h_1^{\delta (1)} - h_2^{\delta (2)}).\) For a clearer presentation, the proof that \({\mathcal {O}}(h_1, h_2)\) is a HFE is given in Table 1.
Then, prove that it is a HF overlap functions \((\forall h_1, h_2, h_3 \in {\mathcal {H}})\).
\(({\mathcal {O}} 1)\) Commutativity: \({\mathcal {O}}(h_2,h_1)=\{ 0.5 h_2^{\delta (1)}h_1^{\delta (1)} + 0.5\max (0,h_2^{\delta (1)} + h_1^{\delta (1)} - 1 ), 1-\min (1, 2-h_2^{\delta (2)} - h_1^{\delta (1)}, 2-h_2^{\delta (1)} - h_1^{\delta (2)} )\}={\mathcal {O}}(h_1,h_2)\)
\(({\mathcal {O}} 2)\) Boundary condition: \({\mathcal {O}}(h_1,h_2)=0_{\mathcal {H}}=(0,0)\Leftrightarrow h_1 = 0_{\mathcal {H}}=(0,0)\) or \(h_2=0_{\mathcal {H}} =(0,0).\)
\(({\mathcal {O}} 3)\) Boundary condition: \({\mathcal {O}}(h_1,h_2)=1_{\mathcal {H}}=(1,1)\Leftrightarrow h_1 = 1_{\mathcal {H}}=(1,1)\) and \(h_2=1_{\mathcal {H}} =(1,1).\)
\(({\mathcal {O}} 4)\) Monotonicity: If \(h_2 \le _{\mathcal {H}} h_3,\) \(i.e. h_2^{\delta (1)} \le h_3^{\delta (1)},h_2^{\delta (2)} \le h_3^{\delta (2)}. \) Then, \(0.5 h_1^{\delta (1)}h_2^{\delta (1)} + 0.5\max (0,h_1^{\delta (1)} + h_2^{\delta (1)} - 1 ) \le 0.5 h_1^{\delta (1)}h_3^{\delta (1)} + 0.5\max (0,h_1^{\delta (1)} + h_3^{\delta (1)} - 1 )\) and \(1-\min (1, 2-h_1^{\delta (2)} - h_2^{\delta (1)}, 2-h_1^{\delta (1)} - h_2^{\delta (2)})\le 1-\min (1, 2-h_1^{\delta (2)} - h_3^{\delta (1)}, 2-h_1^{\delta (1)} - h_3^{\delta (2)}).\) Consequently, \({\mathcal {O}}(h_1,h_2) \le {\mathcal {O}}(h_1,h_3).\)
\(({\mathcal {O}} 5)\) Continuity: First, we prove left continuity, i.e., \( {\mathcal {O}}(h_1, \vee _{i \in \Lambda } h_i) = \vee _{i \in \Lambda }{\mathcal {O}}(h_1,h_i).\)
Let \(O_1(h_1^{\delta (1)},h_2^{\delta (1)})=0.5 h_1^{\delta (1)}h_2^{\delta (1)} + 0.5\max (0,h_1^{\delta (1)} + h_2^{\delta (1)} - 1 ),\) \(O_2(h_1^{\delta (2)},h_2^{\delta (2)})=1-\min (1, 2-h_1^{\delta (2)} - h_2^{\delta (1)}, 2-h_1^{\delta (1)} - h_2^{\delta (2)}).\) Because \(O_1,O_2\) are continuous, \(O_1(h_1^{\delta (1)}, \vee _{i \in \Lambda } h_i^{\delta (1)})=\vee _{i \in \Lambda } O_1(h_1^{\delta (1)},h_i^{\delta (1)})\) and \(O_2(h_1^{\delta (2)}, \vee _{i \in \Lambda } h_i^{\delta (2)}) = \vee _{i \in \Lambda }O_2(h_1^{\delta (2)},h_i^{\delta (2)})\) are holding. It can be obtained that
Therefore, \({\mathcal {O}}\) is left continuous. Similarly, it can be obtained that \({\mathcal {O}}(h_1, \wedge _{i \in \Lambda } h_i) = \wedge _{i \in \Lambda }{\mathcal {O}}(h_1,h_i).\) Hence \({\mathcal {O}}\) is continuous. It remains to show that \({\mathcal {O}}\) is unrepresentable.
Suppose, to the contrary, that \({\mathcal {O}}\) is representable; then its second coordinate must take the form \(O_2(h_1^{\delta (2)},h_2^{\delta (2)}),\) with the first components acting as constants. Let \(O_1(h_1^{\delta (1)},h_2^{\delta (1)})=0.5 h_1^{\delta (1)}h_2^{\delta (1)} + 0.5\max (0,h_1^{\delta (1)} + h_2^{\delta (1)} - 1 )\) and \(O_2(h_1^{\delta (2)},h_2^{\delta (2)})=1-\min (1, 2-h_1^{\delta (2)} - h_2^0, 2-h_1^0 - h_2^{\delta (2)}),\) where \(h_1^0,h_2^0 \in [0,1]\) are constants.
Because \(O_2(1,1) =1-\min (1, 1-h_2^0,1-h_1^0),\) we have
Obviously, \(O_2(1,1)\ne 1\) in general, so \(O_2\) does not satisfy the definition of an overlap function. Thus, the HF overlap function \({\mathcal {O}}(h_1, h_2)=\{ 0.5 h_1^{\delta (1)}h_2^{\delta (1)} + 0.5\max (0,h_1^{\delta (1)} + h_2^{\delta (1)} - 1 ), 1-\min (1, 2-h_1^{\delta (2)} - h_2^{\delta (1)}, 2-h_1^{\delta (1)} - h_2^{\delta (2)} ) \}\) is an unrepresentable HF overlap function.
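The unrepresentability can also be seen computationally: the second output coordinate of \({\mathcal {O}}\) depends on the first components of its arguments, so it cannot be written as a function \(O_2(h_1^{\delta (2)}, h_2^{\delta (2)})\) of the second components alone. The following Python sketch (ours) exhibits two input pairs with identical second components but different second output coordinates.

```python
def O_unrep(h1, h2):
    """The HF overlap function of Example 7 on length-2 HFEs (h[0] >= h[1])."""
    a1, a2 = h1
    b1, b2 = h2
    first  = 0.5 * a1 * b1 + 0.5 * max(0.0, a1 + b1 - 1)
    second = 1 - min(1.0, 2 - a2 - b1, 2 - a1 - b2)
    return [first, second]

# Same second components (0.5, 0.5), different first components:
print(O_unrep([0.9, 0.5], [0.8, 0.5]))  # ≈ [0.71, 0.4]
print(O_unrep([0.6, 0.5], [0.7, 0.5]))  # ≈ [0.36, 0.2]
# The second output coordinate differs (0.4 vs 0.2), so it cannot be recovered
# from the second components alone; hence O is not representable.
```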
Example 8
Let \(h_1=\{h_1^{\delta (1)}, h_1^{\delta (2)}, h_1^{\delta (3)}, h_1^{\delta (4)}, \ldots , h_1^{\delta (k)}\}\) and \(h_2=\{h_2^{\delta (1)}, h_2^{\delta (2)}, h_2^{\delta (3)}, h_2^{\delta (4)} \ldots , h_2^{\delta (k)} \}\) be two HFEs.
The overlap functions
is an unrepresentable HF overlap function, where
Proof
The proof is similar to that of Example 7.
Remark 1
HF overlap functions and HF t-norms are not mutually inclusive concepts. For example, the HF overlap function
is not a HF t-norm. The HF t-norm \( {\mathcal {T}}_H(h_1, h_2)=\{( T_H(h_1^{\delta (s)},h_2^{\delta (s)})) \mid s=1,2, \ldots , k\}\) is not a HF overlap function, where \(T_H(\nu _1, \nu _2) = \frac{\nu _1 \cdot \nu _2}{p + (1 - p) \cdot (\nu _1 + \nu _2 - \nu _1 \cdot \nu _2)}.\)
4 HF \(\beta \)-Covering \(({\mathcal {I}},{\mathcal {O}})\) Rough Set Models
Four types of HF \(\beta \)-covering \(({\mathcal {I}},{\mathcal {O}})\) rough set (HF\(\beta \)CIORS) models using HF logic operators and HF\(\beta \)-Ns are defined. Additionally, we explore the fundamental properties of these models and investigate the connections between them.
Definition 18
Consider a continuous HF overlap function \({\mathcal {O}}\) and a HF implicator \({\mathcal {I}}\) on \({\mathcal {H}}.\) Suppose \((\Omega , C)\) represents a HF\(\beta \)-CAS. \(\forall \) \(A \in HF(\Omega ),\) the r-th \((r = 1, 2, 3, 4)\) type of HF \(\beta \)-covering \({\mathcal {O}}\)-upper and \({\mathcal {I}}\)-lower approximation operators of A are defined as:
where
The pair \((\overline{R}_r^{\beta , C}(A), \underline{R}_r^{\beta , C}(A))\) is defined as the r-th type of \({\textrm{HF}} \beta {\textrm{CIORS}}\) of A (r-HF \(\beta \) CIORS).
Example 9
Consider a set \(\Omega =\{\omega _1, \omega _2, \omega _3, \omega _4\}.\) A set of HFSs \(C=\{C_1, C_2, C_3, C_4\}\) on \(\Omega \) is shown in Table 2. Taking \(\beta =\{0.5,0.4,0.3\},\) C is a HF \(\beta \)-covering of \(\Omega .\) The calculation results of \(N_{1, \omega _i}^{\beta , C}(i=1,2,3,4)\) are listed in Table 3. When \(r=1,\) there are \(N_{1, \omega _1}^{\beta , C}=C_1 \sqcap C_4, N_{1, \omega _2}^{\beta , C}=\) \(C_1 \sqcap C_3, N_{1, \omega _3}^{\beta , C}=C_2 \sqcap C_3 \sqcap C_4, N_{1, \omega _4}^{\beta , C}=C_3 \sqcap C_4.\)
Let \( A=\{\langle \omega _1,\{0.6,0.4,0.2\}\rangle ,\langle \omega _2,\{0.5,0.2,0.1\}\rangle ,\) \( \langle \omega _3,\{0.7,0.5,0.3\}\rangle , \langle \omega _4,\{0.4,0.2\}\rangle \}. \)
Assuming \({\mathcal {O}}={\mathcal {O}}_{p}\ (p=2)\) and \({\mathcal {I}}(h_1,h_2)=\{ (1-h_1^{\delta (s)}+h_2^{\delta (s)}) \wedge 1 \mid s=1,2, \ldots , k\},\) then by Definition 18, there are
The 2-HF\(\beta \)CIORS, 3-HF\(\beta \)CIORS and 4-HF\(\beta \)CIORS of A can be calculated in a similar way. The basic properties of the HF\(\beta \)CIORS models are then analyzed in the following theorems.
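Before turning to the theorems, we give a computational illustration of Definition 18: the following Python sketch (ours, using hypothetical neighborhood and HFS values rather than those of Tables 2 and 3) evaluates the \({\mathcal {O}}\)-upper and \({\mathcal {I}}\)-lower approximations at a single object with \({\mathcal {O}}_p\) (p = 2) and the implicator of Example 9.

```python
def O_P(h1, h2, p=2):
    return [(x * y) ** p for x, y in zip(h1, h2)]

def I_Luk(h1, h2):
    # HF Lukasiewicz-style implicator used in Example 9.
    return [min(1.0, 1 - x + y) for x, y in zip(h1, h2)]

def hfe_join(a, b): return [max(x, y) for x, y in zip(a, b)]
def hfe_meet(a, b): return [min(x, y) for x, y in zip(a, b)]

def upper_lower_at(neigh, A):
    """neigh, A: dicts  y -> HFE (all equally long, descending order).
    Returns (upper, lower) approximation values at the fixed object."""
    ys = list(A)
    up = O_P(neigh[ys[0]], A[ys[0]])
    lo = I_Luk(neigh[ys[0]], A[ys[0]])
    for y in ys[1:]:
        up = hfe_join(up, O_P(neigh[y], A[y]))
        lo = hfe_meet(lo, I_Luk(neigh[y], A[y]))
    return up, lo

# Hypothetical data for one object's neighborhood (not Table 3):
neigh = {'w1': [0.6, 0.4, 0.2], 'w2': [0.5, 0.3, 0.1]}
A     = {'w1': [0.6, 0.4, 0.2], 'w2': [0.5, 0.2, 0.1]}
print(upper_lower_at(neigh, A))
```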
Theorem 1
Consider a HF\( \beta \)-CAS \((\Omega , C)\) and an index set \(\Lambda .\) \(\forall \) \(A, B \in H F(\Omega ),\) the following hold:
-
(1)
\(\overline{R}_r^{\beta , C}(\varnothing )=\varnothing .\)
-
(2)
If \(A \Subset B,\) \(\overline{R}_r^{\beta , C}(A) \Subset \overline{R}_r^{\beta , C}(B).\)
-
(3)
\(\overline{R}_r^{\beta , C}(\sqcap _{i \in \Lambda } A_i) \Subset \sqcap _{i \in \Lambda } \overline{R}_r^{\beta , C}(A_i).\)
-
(4)
\(\overline{R}_r^{\beta , C}(\sqcup _{i \in \Lambda } A_i)=\sqcup _{i \in \Lambda } \overline{R}_r^{\beta , C}(A_i).\)
-
(5)
If \(\beta _1 \le _{{\mathcal {H}}} \beta _2(\beta _1, \beta _2 \in {\mathcal {H}}),\) \(\overline{R}_r^{\beta _1, C}(A) \Subset \) \(\overline{R}_r^{\beta _2, C}(A).\)
Proof
(1) Since \({\mathcal {O}}(h_1, h_2)=0_{{\mathcal {H}}}\) iff \(h_1=0_{{\mathcal {H}}}\) or \(h_2=0_{{\mathcal {H}}},\) for any \(\omega \in \Omega \) there is
Hence, \( \overline{R}_r^{\beta , C}(\varnothing )=\varnothing .\)
(2) Since \({\mathcal {O}}\) is monotonic increasing and \(A \Subset \) B, then \(\forall \) \(\omega \in \Omega ,\)
Hence, \(\overline{R}_r^{\beta , C}(A) \Subset \overline{R}_r^{\beta , C}(B).\)
(3) Since \({\mathcal {O}}(h, \curlywedge _{i \in \Lambda } h_i)=\curlywedge _{i \in \Lambda } {\mathcal {O}}(h, h_i),\) then \(\forall \) \(\omega \in \Omega ,\)
Hence, \(\overline{R}_r^{\beta , C}(\sqcap _{i \in \Lambda } A_i) \Subset \sqcap _{i \in \Lambda } \overline{R}_r^{\beta , C}(A_i)\)
(4) Since \({\mathcal {O}}(h, \curlyvee _{i \in \Lambda } h_i)=\curlyvee _{i \in \Lambda } {\mathcal {O}}(h, h_i),\) then \(\forall \) \(\omega \in \Omega ,\)
Hence, \(\overline{R}_r^{\beta , C}(\sqcup _{i \in \Lambda } A_i)=\) \(\sqcup _{i \in \Lambda } \overline{R}_r^{\beta , C}(A_i).\)
(5) If \(\beta _1\le _{{\mathcal {H}}}\beta _2\), \(\forall \) \(\omega \in \Omega ,\) \(N_{r,\omega }^{\beta _1, C}\Subset N_{r,\omega }^{\beta _2, C},\) since \({\mathcal {O}}\) is monotonic increasing, then
Hence, \(\overline{R}_r^{\beta _1, C}(A)\Subset \overline{R}_r^{\beta _2, C}(A).\)
Theorem 2
Consider a HF\( \beta \)-CAS \((\Omega , C)\) and an index set \(\Lambda .\) \(\forall \) \(A, B \in H F(\Omega ),\) the following hold:
-
(1)
\(\underline{R}_r^{\beta , C}(\Omega )=\Omega ,\) if \({\mathcal {I}}\) is left monotonic decreasing.
-
(2)
If \(A \Subset B\) and \({\mathcal {I}}\) is right monotonic increasing, \(\underline{R}_r^{\beta , C}(A) \Subset \underline{R}_r^{\beta , C}(B).\)
-
(3)
\(\underline{R}_r^{\beta , C}(\sqcap _{i \in \Lambda } A_i)=\sqcap _{i \in \Lambda } \underline{R}_r^{\beta , C}(A_i),\) if \({\mathcal {I}}(h, \curlywedge _{i \in \Lambda } h_i)=\curlywedge _{i \in \Lambda } {\mathcal {I}}(h, h_i).\)
-
(4)
\(\underline{R}_r^{\beta , C}(\sqcup _{i \in \Lambda } A_i) \Supset \sqcup _{i \in \Lambda } \underline{R}_r^{\beta , C}(A_i),\) if \({\mathcal {I}}\) is right monotonic increasing.
-
(5)
Assume that \({\mathcal {I}}\) is left monotonic decreasing. If \(\beta _1 \le _{{\mathcal {H}}} \beta _2(\beta _1, \beta _2 \in {\mathcal {H}} ),\) then \(\underline{R}_r^{\beta _1, C}(A) \Supset \) \(\underline{R}_r^{\beta _2, C}(A).\)
Proof
The proofs are similar to that of Theorem 1.
Theorem 3
Consider a HF\( \beta \)-CAS \((\Omega , C).\) \(\forall \) \(A \in H F(\Omega ),\) if \(N_{r, \omega }^{\beta , C}\) is reflexive (i.e., \(h_{N_{r, \omega }^{\beta , C}}(\omega )= 1_{{\mathcal {H}}})\) and \({\mathcal {O}}(1_{\mathcal {H}},h_A(\omega ))\ge _{\mathcal {H}} h_A(\omega )\) for any \(\omega \in \Omega ,\) then \(\underline{R}_r^{\beta , C}(A) \Subset A \Subset \overline{R}_r^{\beta , C}(A),\) provided that \({{\mathcal {O}}}(1_{{\mathcal {H}}}, h)=h\) for all \(h \in {\mathcal {H}}.\)
Proof
\(\forall \) \(\omega \in \Omega ,\) there are
Hence, it can be obtained that \(h_{\underline{R}_r^{\beta , C}(A)}(\omega ) \le _{\mathcal {H}}\) \(h_A(\omega ) \le _{\mathcal {H}} h_{\overline{R}_r^{\beta , C}(A)}(\omega ).\) It means that \(\underline{R}_r^{\beta , C}(A) \Subset A \Subset \overline{R}_r^{\beta , C}(A).\)
Theorem 4
Consider a non-empty and finite set \(\Omega \) and \(A \in H F(\Omega ).\) Assume that C and \(C^{\prime }\) are two HF \(\beta \) coverings of \(\Omega ,\) where \(C=\{C_1, C_2, \ldots , C_m\}, C^{\prime }=\) \(\{C_1^{\prime }, C_2^{\prime }, \ldots , C_n^{\prime }\}\) and \(\beta \in {\mathcal {H}}.\) If \({\mathcal {I}}\) is left monotonic decreasing and \(N_{r, \omega }^{\beta , C} \Subset N_{r, \omega }^{\beta , C^{\prime }}\) for any \(\omega \in \Omega ,\) \(\underline{R}_r^{\beta , C}(A) \Supset \underline{R}_r^{\beta , C^{\prime }}(A)\) and \(\overline{R}_r^{\beta , C}(A) \Subset \overline{R}_r^{\beta , C^{\prime }}(A).\)
Proof
Since \(N_{r, \omega }^{\beta , C} \Subset N_{r, \omega }^{\beta , C^{\prime }}\) for any \(\omega \in \Omega ,\) we have \(h_{N_{r, \omega }^{\beta , C}}(y) \le _{\mathcal {H}} h_{N_{r, \omega }^{\beta , C^{\prime }}}(y)\) for all \(y \in \Omega .\) Then
Hence, \(\overline{R}_r^{\beta , C}(A) \Subset \overline{R}_r^{\beta , C^{\prime }}(A).\) The proof of the other one is the same.
The relationships among the four types of \({\textrm{HF}} \beta \)CIORS models are explored as follows.
Theorem 5
Consider a HF \(\beta \)CAS \((\Omega , C).\) \(\forall \) \(A \in H F(\Omega ),\) the following hold:
-
(1)
If \({\mathcal {I}}\) is left monotonic decreasing, \(\underline{R}_3^{\beta , C}(A) \Subset \underline{R}_1^{\beta , C}(A) \Subset \underline{R}_4^{\beta , C}(A).\)
-
(2)
If \({\mathcal {I}}\) is left monotonic decreasing, \(\underline{R}_3^{\beta , C}(A) \Subset \underline{R}_2^{\beta , C}(A) \Subset \underline{R}_4^{\beta , C}(A).\)
-
(3)
\(\overline{R}_4^{\beta , C}(A) \Subset \overline{R}_1^{\beta , C}(A) \Subset \overline{R}_3^{\beta , C}(A).\)
-
(4)
\(\overline{R}_4^{\beta , C}(A) \Subset \overline{R}_2^{\beta , C}(A) \Subset \overline{R}_3^{\beta , C}(A).\)
Proof
-
(1)
Based on Definitions 9 and 10, we have \(h_{N_{4, \omega }^{\beta , C}}(y) \le _{\mathcal {H}} h_{N_{1, \omega }^{\beta , C}}(y) \le _{\mathcal {H}} h_{N_{3, \omega }^{\beta , C}}(y),\) \(\forall \omega , y \in \Omega .\) Because \({\mathcal {I}}\) is left monotonic decreasing, it can be obtained that \({\mathcal {I}}(h_{N_{3, \omega }^{\beta , C}}(y), h_A(y)) \le _{\mathcal {H}}\) \({\mathcal {I}}(h_{N_{1, \omega }^{\beta , C}}(y), h_A(y)) \le _{\mathcal {H}} {\mathcal {I}}(h_{N_{4, \omega }^{\beta , C}}(y), h_A(y)). \) Then, \(\forall \omega \in \Omega ,\) there are
$$\begin{aligned} h_{\underline{R}_3^{\beta , C}(A)}(\omega ) & =\curlywedge _{y \in \Omega } {\mathcal {I}}(h_{N_{3, \omega }^{\beta , C}}(y), h_A(y)) \\ & \le _{\mathcal {H}} \curlywedge _{y \in \Omega } {\mathcal {I}}(h_{N_{1, \omega }^{\beta , C}}(y), h_A(y)) \\ & =h_{\underline{R}_1^{\beta , C}(A)}(\omega ), \end{aligned}$$$$\begin{aligned} h_{\underline{R}_1^{\beta , C}(A)}(\omega ) & =\curlywedge _{y \in \Omega } {\mathcal {I}}(h_{N_{1, \omega }^{\beta , C}}(y), h_A(y)) \\ & \le _{\mathcal {H}} \curlywedge _{y \in \Omega } {\mathcal {I}}(h_{N_{4, \omega }^{\beta , C}}(y), h_A(y)) \\ & =h_{\underline{R}_4^{\beta , C}(A)}(\omega ) . \end{aligned}$$Hence, it can be obtained that \(h_{\underline{R}_3^{\beta , C}(A)}(\omega ) \le _{\mathcal {H}}\) \(h_{\underline{R}_1^{\beta , C}(A)}(\omega ) \le _{\mathcal {H}} h_{\underline{R}_4^{\beta , C}(A)}(\omega ).\) It means that \(\underline{R}_3^{\beta , C}(A) \Subset \underline{R}_1^{\beta , C}(A) \Subset \underline{R}_4^{\beta , C}(A).\)
-
(2)
The proof is similar to the proof of (1).
-
(3)
\(\forall \omega \in \Omega ,\) there are
$$\begin{aligned} h_{\overline{R}_4^{\beta , C}(A)}(\omega ) & =\curlyvee _{y \in \Omega } {\mathcal {O}}(h_{N_{4, \omega }^{\beta , C}}(y), h_A(y)) \\ & \le _{\mathcal {H}} \curlyvee _{y \in \Omega } {\mathcal {O}}(h_{N_{1, \omega }^{\beta , C}}(y), h_A(y))\\ & =h_{\overline{R}_1^{\beta , C}(A)}(\omega ), \end{aligned}$$$$\begin{aligned} h_{\overline{R}_1^{\beta , C}(A)}(\omega ) & =\curlyvee _{y \in \Omega } {\mathcal {O}}(h_{N_{1, \omega }^{\beta , C}}(y), h_A(y)) \\ & \le _{\mathcal {H}} \curlyvee _{y \in \Omega } {\mathcal {O}}(h_{N_{3, \omega }^{\beta , C}}(y), h_A(y))\\ & =h_{\overline{R}_3^{\beta , C}(A)}(\omega ). \end{aligned}$$Hence, it can be obtained that \(h_{\overline{R}_4^{\beta , C}}(\omega ) \le _{\mathcal {H}}\) \(h_{\overline{R}_1^{\beta , C}(A)}(\omega ) \le _{\mathcal {H}} h_{\overline{R}_3^{\beta , C}(A)}(\omega ).\) It means that \(\overline{R}_4^{\beta , C}(A) \Subset \overline{R}_1^{\beta , C}(A) \Subset \overline{R}_3^{\beta , C}(A).\)
-
(4)
The proof is similar to the proof of (3).
5 The Applications of HF\(\beta \)CIORS Models in MADM
In the introduction, it is highlighted that HFMADM has become increasingly prominent in the realm of decision-making. Scientifically grounded decision-making methods are crucial for decision makers to mitigate the risks associated with erroneous choices. Consequently, this section introduces a novel approach tailored to HFMADM problems.
In the context of HFMADM, define a system with the following elements: a set \(\Omega \) of n alternatives and a set C of m criteria with an associated weight vector \(d = (d_1,d_2,\ldots ,d_m)^{{\textbf {T}}},\) where the weights sum to 1. The evaluation information is captured in the set \(F=\{C_j(\omega _i)|i=1,2,\ldots ,n; j=1,2,\ldots ,m \},\) in which \(C_j(\omega _i)\) is the HFE evaluation of alternative \(\omega _i\) with respect to criterion \(C_j.\) This forms the HF information system \((\Omega , C, F, d).\)
We introduce a novel approach to address HFMADM challenges by integrating the 1-HF\(\beta \) CIORS model based on TOPSIS. The specifics of this methodology are elaborated below.
Different from traditional decision-making strategies, rough-set-based methods focus primarily on constructing decision objects. There are two prevalent techniques for this: the pre-determined approach and the ideal solution method. In this research, we employ the ideal solution method to derive both the optimal and the least desirable decision objects.
First of all, based on TOPSIS technique, construct a HF positive ideal solution (HFPIS) \(A^{+}\) and a HF negative ideal solution (HFNIS) \(A^{-}.\)
where \(k=\max (l_{h_{C_j(\omega _i)}}).\)
Next, denote by k the maximum length of the HFEs under each attribute. Assume all attributes are benefit attributes.
Then, to evaluate how closely an option \(\omega _i\) aligns with both the ideal \(A^{+}\) and the least desirable \(A^{-}\) solutions, a novel similarity measure for HFSs is defined as:
Definition 19
Consider a set \(U=\{y_1, y_2, \ldots , y_m\}\) paired with weights \(\omega =(\omega _1, \omega _2, \ldots , \omega _m)^{{\textbf {T}}}.\) The sum of these weights equals 1. Consider any two elements A, B in the HFSs of U, expressed as \(A,B\in HF(U).\) The similarity between A and B is defined as
In decision analysis, particularly in HFMADM, the similarity measure is an important tool for quantifying the degree of similarity between two choices. However, due to the inherent ambiguity and uncertainty in evaluation data, using a precise number within the range of [0, 1] to represent similarity in this context is challenging. As highlighted in Definition 19, the HF similarity measure, S(A, B), emerges as a more informative alternative than a mere numerical value in the [0, 1] range, as it is essentially a HFE.
Furthermore, this similarity measure satisfies [23]:
-
(1)
\(S(U, \varnothing )=0_{{\mathcal {H}}}\) indicates zero similarity when compared to an empty set.
-
(2)
\(S(A, A)=1_{{\mathcal {H}}}\) reflects maximum similarity when an element is compared with itself.
-
(3)
\(S(A, B)=S(B, A)\) ensures the measure is symmetric.
-
(4)
In the scenario where \(A \Subset B \Subset C\) for any \(A, B, C \in HF(U),\) it holds that \(S(A, C) \le _{{\mathcal {H}}} S(A, B) \curlywedge S(B, C),\) suggesting a relational hierarchy in similarity measures.
In the subsequent step, we compute \(S_i^{+}=S(\omega _i, A^{+})\) and \(S_i^{-}=S(\omega _i, A^{-})\) for each alternative \(\omega _i.\) \(S_i^{+},\) \(S_i^{-}\) represent the similarity of \(\omega _i\) to the HFPIS \(A^{+}\) and the HFNIS \(A^{-},\) respectively. Based on these similarity values, define the optimal decision object \({\textrm{H}}^{+}\) and the worst decision object \({\textrm{H}}^{-}\):
The next step in the HFMADM process involves determining the upper and lower approximations (ULA) for both \({\textrm{H}}^{+}\) and \({\textrm{H}}^{-},\) computed based on the 1-HF \(\beta \)CIORS model.
We now illustrate the concept of the ULA of \({\textrm{H}}^{+}.\) The lower approximation of \({\textrm{H}}^{+}\) consists of the objects that are certainly contained in \({\textrm{H}}^{+},\) embodying a pessimistic decision criterion; the upper approximation of \({\textrm{H}}^{+}\) consists of the objects that definitely or possibly belong to \({\textrm{H}}^{+},\) embodying an optimistic decision criterion. However, DMs often exhibit a mix of optimism and pessimism in real life. To address this, a risk preference coefficient is introduced to merge the ULA of \({\textrm{H}}^{+}\) and \({\textrm{H}}^{-}\):
where \(\alpha \in [0,1]\) is the risk preference coefficient. When \(\alpha \) takes the values 0, 0.5, and 1, the DMs are risk-averse, risk-neutral, and risk-seeking, respectively.
Proceeding further in the HFMADM methodology, we define the relative closeness coefficient for each alternative \(\omega _i\) in relation to \(\delta ^{+}\) and \(\delta ^{-}.\) This coefficient is expressed mathematically as:
where \(r(\omega _i)\) is an HFE and therefore cannot be used directly to order the alternatives. To address this, we calculate the score value \(s(r(\omega _i))\) using the score function outlined in Definition 2.
Ultimately, the ranking of all alternatives is determined based on the computed score value \(s(r(\omega _i)).\) It’s important to note that a lower score value \(s(r(\omega _i))\) means a higher affiliation of the alternative \(\omega _i\) with the optimal decision object \(H^{+},\) implying that lower scores correspond to more favorable alternatives. This final step effectively concludes the decision-making process, providing a clear and quantifiable ranking of the available choices.
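A minimal ranking sketch in Python (ours), using hypothetical relative closeness HFEs rather than the values of Table 9 and assuming the average-based score function of Definition 2; alternatives are sorted by ascending score because lower scores are more favorable here.

```python
def score(h):                      # average-based score (Definition 2)
    return sum(h) / len(h)

# Hypothetical relative closeness coefficients (not the values of Table 9):
r = {'w1': [0.55, 0.40], 'w2': [0.70, 0.52],
     'w3': [0.35, 0.30], 'w4': [0.62, 0.45]}

ranking = sorted(r, key=lambda w: score(r[w]))   # ascending: best first
print(ranking)  # ['w3', 'w1', 'w4', 'w2']
```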
For practical implementation, the method is broken down into specific steps:
In the context of the HFMADM problem involving n alternatives and m attributes, the computational complexity of the proposed method can be analyzed step by step. In Step 1, the time complexity is \(\hat{O}(2mn)\). Similarly, Step 2 also exhibits a time complexity of \(\hat{O}(2mn),\) as it involves comparable computations. Step 3, requiring minimal operations, has a constant time complexity of \(\hat{O}(1)\). For Step 4, the complexity rises to \(\hat{O}(n^2 + mn)\). Steps 5 through 7, focusing on finalizing results, operate with a linear time complexity of \(\hat{O}(n)\). Considering these steps together, the overall time complexity of the proposed decision-making method is dominated by \(\hat{O}(n^2 + mn)\).
This structured methodology enables a systematic and thorough evaluation of alternatives in HF decision-making environments.
Additionally, the process of the proposed HFMADM method is shown in Fig. 2.
6 Illustrative Examples
In this section, we apply our newly developed decision-making approach to a practical scenario: an enterprise project investment problem, as referenced from source [9]. This application serves to demonstrate the real-world utility of the method in a business context.
The procedure involves several key steps:
Problem Application: Implementing the method to address the specific challenges of the enterprise project investment problem.
Comparative Analysis: To verify the efficacy and benefits of our approach, we undertake a comparative analysis against existing decision-making methods. This comparison will highlight the distinct advantages our method offers.
Sensitivity Analysis: Conducting a sensitivity analysis is crucial to assess the robustness and reliability of the method. This analysis examines how changes in input parameters (like the value of \(\alpha \) or weights of attributes) impact the outcomes.
Through these steps, the section aims to underscore not just the theoretical soundness of the method, but also its practical applicability in handling complex decision-making scenarios in business environments.
6.1 An Enterprise Project Investment Problem
Enterprise project investment decisions significantly impact an enterprise’s operation, and DMs must make well-informed choices to enhance the economic benefits of the enterprise. Imagine an enterprise considering various investment projects, including a business project \((\omega _1),\) a technology project \((\omega _2),\) a medical project \((\omega _3),\) and an education project \((\omega _4).\) To evaluate these projects, four attributes are employed: policy support \((C_1),\) market benefit \(( C_2),\) urban constructiveness \((C_3),\) and public expectations \((C_4).\) All these attributes are benefit attributes, and higher values indicate better prospects. To address potential inconsistencies in expert opinions, HFEs are utilized to represent the evaluation of each alternative under the different attributes. The assessment results are compiled in Table 4, and the weighting of the attributes, determined by the experts, is represented by the vector \(d=(0.2,0.4,0.2,0.2 )^{{\textbf {T}}}.\)
The subsequent sections detail each step of this decision-making process.
Step 1: Construct the HFPIS \(A^{+}\) and the HFNIS \(A^{-}\) for the enterprise project investment problem; the results are presented in Table 5.
Step 2: In the decision-making process, compute the similarity between each project alternative and both \(A^{+}\) and \(A^{-}.\) The calculation results are presented in Table 6.
Step 3: Calculate \({\textrm{H}}^{+}\) and \({\textrm{H}}^{-}.\)
Taking \(\beta =\{0.5,0.4,0.3\},\) the calculation results of \(N_{1, \omega _i}^{\beta , C}\) are presented in Table 7.
Subsequently, various cases arise based on the different HF logic operators in the 1-HF \(\beta \)CIORS model.
Case 1: Let \({\mathcal {I}}(h_1,h_2)=\{ (1-h_1^{\delta (s)}+h_2^{\delta (s)}) \wedge 1 \mid s=1,2, \ldots , k\}, {\mathcal {O}}={\mathcal {O}}_{nm},\) and \(\alpha =0.5.\)
Step 4: According to the calculation method mentioned above, calculate the ULA of \({\textrm{H}}^{+}\) and \({\textrm{H}}^{-}.\)
Step 5: Based on the calculation method mentioned above, calculate \(\delta ^{+}\) and \(\delta ^{-}.\) The values of \(\delta ^{+}\) and \(\delta ^{-}\) for each alternative are presented in Table 8.
Step 6: Then, for each project alternative \(\omega _i\) \((i=1,2,3,4),\) compute the relative closeness coefficient \(r(\omega _i).\) The results of these calculations are presented in Table 9.
Step 7: In accordance with Definition 2, the score value \(s(r(\omega _i))\) for each \(r(\omega _i)\) (where \(i=1,2, \ldots , 4)\) is determined. These score values are included in Table 9.
Finally, the ranking of all the project alternatives is based on the scoring values calculated in the previous step. The rankings, which determine the most to least favorable projects based on the evaluated criteria, are displayed in Table 9.
Case 2: Let \({\mathcal {I}}(h_1,h_2)=\{ (1-h_1^{\delta (s)})\vee (h_1^{\delta (s)} \wedge h_2^{\delta (s)}) \mid s=1,2, \ldots , k\}, {\mathcal {O}}={\mathcal {O}}_{p}\ (p=2),\) and \(\alpha =0.5.\) Based on the methods introduced above, we can perform the relevant calculations for Case 2 (Table 10) and obtain the following results:
Based on the ranking results of all alternatives in Table 11, \(\omega _3\) stands out as the most optimal project. Below are the final results:
\(\omega _3 \succ \omega _1 \succ \omega _4 \succ \omega _2\)
Case 3: Let \({\mathcal {I}}(h_1,h_2)=\{ (1-h_1^{\delta (s)})\vee h_2^{\delta (s)} \mid s=1,2, \ldots , k\}, {\mathcal {O}}={\mathcal {O}}_{mp}\ (p=1),\) and \(\alpha =0.5.\)
Because of the similarity in computational procedures between Case 3 and Case 1, only the final results are provided below.
\(\omega _3 \succ \omega _1 \succ \omega _4 \succ \omega _2\)
Based on the ranking results above, the optimal project identified is \(\omega _3.\)
Remark 2
The results of Case 1, Case 2 and Case 3 reveal a significant aspect of decision-making with HF logic operators: although varying these operators influences the ordering of the options, the top choice remains stable. This consistency underscores the correctness and flexibility of the proposed decision-making method. Significantly, it allows DMs to select from a spectrum of HF logic operators and to adjust parameter settings. This flexibility allows the decision-making process to be customized to specific needs and criteria, demonstrating the method’s adaptability to different scenarios and preferences.
6.2 Comparative Analysis
For addressing HFMADM problems, some decision-making techniques have been introduced by researchers, including the HF-TOPSIS method [30], HF-VIKOR method [30], HFWA method [27], HFOWA method [27], and the approach proposed by Fu et al. [4]. To highlight how well our proposed method works, we compare it with the five methods listed above, using the enterprise project investment problem from Sect. 6.1. The results of this comparison are displayed in Table 12 below. Additionally, Fig. 3 provides a more intuitive view of the ranking results across different decision-making methods. By examining both Table 12 and Fig. 3, we can draw the following conclusions.
From the perspective of the optimal solution, the best alternative identified by our proposed method and by the other five methods is consistent, showing that \(\omega _3\) should be selected as the investment project. This outcome verifies the effectiveness of our proposed method. To better demonstrate the scientific validity and rationality of our method, the Spearman rank correlation coefficient (SRCC) is introduced for analysis. The SRCC is a statistical technique used to describe the correlation between two variables, so we use it to evaluate the relationship between the ranking results of the seven methods discussed earlier. Figure 4 shows the SRCCs between the ranking results obtained by the seven methods, computed from the data in Table 13. Generally, when the SRCC between two methods surpasses 0.8, the correlation between them is considered significant. The HF-TOPSIS and HF-VIKOR methods are well-established for effectively addressing HFMADM problems, making them suitable benchmarks for comparison. If the SRCCs between our method’s ranking results and those of these two methods are high, it confirms that our method is both reasonable and valid.
Now let’s look at the ranking results. Table 12 shows that our method and the other five methods produce different rankings, mainly because each method follows different ranking principles. However, the correlation between our method (Case 1 and Case 2) and the HF-TOPSIS, HFWA, and HF-VIKOR methods is 0.8, which suggests that our method is both valid and reliable. In contrast, the SRCC between our method (Case 1) and the HFOWA method is 0.4, and the SRCCs between the HFOWA method and our method (Case 2), HF-TOPSIS, HFWA, and HF-VIKOR methods are all below 0.6. This indicates that our method (Case 1 and Case 2) is more effective than the HFOWA method.
As for Fu et al.'s method [4], its model is based on an HF rough set built with HF t-norms. An HF t-norm must satisfy associativity, whereas the HF overlap function, as an extension of the HF t-norm, is not required to be associative. Moreover, the existing HF \(\beta \)-covering rough set based on HF t-norms cannot effectively handle the overlap and correlation between hesitant information. When hesitant information overlaps, the HF \(\beta \)-covering rough set based on the HF overlap function is a better choice. Therefore, the model proposed in this paper, which is based on the HF overlap function, is effective and of practical value.
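As a quick numerical illustration of the point about associativity, consider the overlap function \(O_{p}(x,y)=(xy)^{p}\) with \(p=2,\) a standard example from the overlap-function literature; the sketch below shows that it is neither associative nor has 1 as a neutral element, so it cannot be a t-norm.

```python
# Numerical check that an overlap function need not be associative and need not
# have 1 as a neutral element (hence need not be a t-norm), using O(x, y) = (x*y)**p.

def O(x, y, p=2):
    return (x * y) ** p

x, y, z = 0.5, 0.5, 1.0
print(O(O(x, y), z))  # 0.00390625
print(O(x, O(y, z)))  # 0.015625      -> associativity fails
print(O(x, 1.0))      # 0.25 != 0.5   -> 1 is not a neutral element
```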
The comparative analysis above validates the effectiveness and reliability of our method. To further highlight its advantages, we present the following three examples.
Example 10
To demonstrate the applicability of our proposed method, we consider another corporate project investment decision problem. Let \(\Omega =\{\omega _1, \omega _2, \omega _3, \omega _4, \omega _5\}\) denote the five candidate projects and \(C=\{C_1, C_2, C_3, C_4\}\) the four evaluation attributes. The attribute weight vector given by the experts is \(d = (0.2197, 0.2064, 0.3543, 0.2196)^T.\) The evaluation values for each attribute are shown in Table 14. We then solve the problem with our proposed method, the HFWA method, and the HFOWA method; the comparison results are shown in Table 15.
As shown in Table 15, the HFWA and HFOWA methods fail to produce valid rankings, as all alternatives are evaluated as approximately equal \((\omega _5 \approx \omega _2 \approx \omega _1 \approx \omega _4 \approx \omega _3 )\). This result demonstrates the limitations of these methods in effectively distinguishing and ranking alternatives in this case. In contrast, our proposed method successfully generates a complete and meaningful ranking \(( \omega _5 \succ \omega _2 \succ \omega _1 \succ \omega _4 \succ \omega _3 ),\) clearly differentiating the alternatives. This comparison highlights the robustness and effectiveness of our method in addressing scenarios where traditional methods are inadequate for providing useful rankings.
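To make this failure mode concrete, the sketch below implements the classical HFWA operator of Xia and Xu [27] together with the usual score function (the mean of the membership values of the aggregated HFE). The weight vector is the one from Example 10, but the HFEs are hypothetical placeholders rather than the entries of Table 14; with data such as that of Table 14, the aggregated scores of the alternatives nearly coincide, which is why HFWA (and, analogously, HFOWA) cannot separate them.

```python
# Sketch of the HFWA operator of Xia and Xu [27] and the mean-based score function,
# applied to hypothetical HFEs (not the entries of Table 14).
from itertools import product

def hfwa(hfes, weights):
    """HFWA(h_1, ..., h_n) = { 1 - prod_j (1 - g_j)**w_j : g_j in h_j }."""
    aggregated = set()
    for combo in product(*hfes):
        val = 1.0
        for g, w in zip(combo, weights):
            val *= (1.0 - g) ** w
        aggregated.add(round(1.0 - val, 6))
    return sorted(aggregated)

def score(hfe):
    """Score of an HFE: the mean of its membership values."""
    return sum(hfe) / len(hfe)

weights = [0.2197, 0.2064, 0.3543, 0.2196]       # attribute weights from Example 10
alt_a = [[0.3, 0.5], [0.6], [0.4, 0.7], [0.5]]   # hypothetical HFE evaluations
alt_b = [[0.5], [0.4, 0.6], [0.7], [0.3, 0.5]]   # hypothetical HFE evaluations

print(score(hfwa(alt_a, weights)), score(hfwa(alt_b, weights)))
# Alternatives whose scores (nearly) coincide cannot be ranked by HFWA alone.
```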
Example 11
Suppose \(\Omega =\{\omega _1, \omega _2, \omega _3, \omega _4, \omega _5\}\) is a set of 5 malls and \(C = \{C_1, C_2, C_3, C_4\}\) is a set of 4 attributes. The weight vector of the four attributes is \(d = (0.3, 0.3, 0.2, 0.2)^T.\) The evaluation values of each attribute for each mall are shown in Table 16.
The ranking results of all alternatives obtained by the three decision-making methods (our method, HF-TOPSIS, and HF-VIKOR) are shown in Table 17.
As can be seen from Table 17, the HF-VIKOR method cannot rank the alternatives in this example due to algorithmic limitations. Although the HF-TOPSIS method identifies the optimal solution \(\omega _4,\) it cannot distinguish the priorities of the other four alternatives. Our method ranks all alternatives except \(\omega _2\) and \(\omega _3\) and identifies \(\omega _1\) as the optimal solution.
Therefore, based on the results of Examples 10 and 11, we conclude that our method is more effective than the other four methods and applicable to a wider range of scenarios.
Example 12
To demonstrate the applicability of our proposed method when the number of alternatives is larger, we consider another corporate project investment decision problem. Let \(\Omega =\{\omega _1, \omega _2, \ldots \}\) denote the candidate projects and \(C=\{C_1, C_2, C_3, C_4\}\) the four evaluation attributes. The attribute weight vector given by the experts is \(d = (0.253, 0.248, 0.251, 0.248)^T.\) The evaluation values for each attribute are shown in Table 18.
We then solve the problem with our proposed method and Fu et al.'s method; the comparison results are shown in Table 19 and Fig. 5.
By comparing the results, we observe that our method still performs well when there are many alternatives. However, when there is significant overlap among the hesitant information, Fu et al.'s method struggles to distinguish and rank some of the alternatives, such as \(\omega _1,\) \(\omega _7,\) \(\omega _8,\) and \(\omega _9.\) This indicates that the proposed model retains good classification ability even under significant overlap in hesitant information, suggesting a broader range of applications.
Based on the discussion above, we summarize the advantages of our method as follows:
- The HF similarity measure used in our method captures the hesitant characteristics in HFMADM better than traditional real-number representations (see the sketch following this list). This allows a more accurate reflection of the DM's uncertain or ambiguous preferences, which are common in real-world decision-making.
- Compared with the five methods discussed in this paper (see Table 12), our approach provides a more precise analysis aligned with the DM's preferences, because it takes the DM's risk preferences into account and therefore offers a solution that better matches actual situations.
- Our method combines the strengths of the HF\(\beta \)CIORS model and the TOPSIS method, allowing us to address some HFMADM problems that traditional methods cannot solve well. This is evident in Examples 10 and 11, where our approach performs more effectively in complex decision-making environments.
- Compared with Fu et al.'s method, our approach still performs well even with a large number of alternatives. In particular, when hesitant information overlaps significantly, Fu et al.'s method struggles to distinguish and rank certain alternatives, whereas our model, which incorporates the HF overlap function, can rank alternatives that Fu et al.'s method cannot. This shows that our method has better classification ability, especially in situations with considerable overlapping hesitant information, and hence a wider range of applications in complex decision-making scenarios.
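As referenced in the first point above, the following is a minimal sketch of one commonly used HF similarity measure, namely the complement of the normalized Hamming distance between length-normalized, sorted HFEs. Whether this exact form coincides with the measure used in Sect. 6 is an assumption; the sketch only illustrates how set-valued memberships are compared directly rather than through a single real number.

```python
# Minimal sketch of a commonly used HF similarity measure: the complement of the
# normalized Hamming distance between two HFEs of equal length (values sorted).

def hf_similarity(h1, h2):
    assert len(h1) == len(h2), "normalize the HFEs to a common length first"
    k = len(h1)
    distance = sum(abs(a - b) for a, b in zip(sorted(h1), sorted(h2))) / k
    return 1.0 - distance

print(hf_similarity([0.2, 0.5, 0.7], [0.3, 0.4, 0.9]))  # ~0.8667
```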
6.3 Sensitivity Analysis
In Sect. 6.1, \(\alpha \) denotes a risk preference coefficient determined subjectively by the DMs. This subsection investigates the influence of the parameter \(\alpha \) on the decision-making outcomes in the two cases of the enterprise project investment problem. The parameter \(\alpha \) is varied systematically from 0 to 1 in increments of 0.1, and for each value of \(\alpha \) the rankings in each case are computed with the proposed framework. The resulting rankings are presented in Fig. 6 and Table 20. This examination provides a comprehensive picture of how shifts in the risk preference coefficient \(\alpha \) affect the decision-making process in enterprise project investment scenarios.
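Procedurally, the sweep can be summarized by the short sketch below; rank_alternatives is a hypothetical placeholder for the full HF\(\beta \)CIORS-TOPSIS pipeline of Sect. 6.1 and is not defined here.

```python
# Sketch of the sensitivity sweep: vary alpha from 0 to 1 in steps of 0.1 and record
# the ranking produced for each value. rank_alternatives(alpha) is a hypothetical
# placeholder for the decision pipeline of Sect. 6.1.

def sensitivity_sweep(rank_alternatives):
    rankings = {}
    for step in range(11):                           # alpha = 0.0, 0.1, ..., 1.0
        alpha = step / 10
        rankings[alpha] = rank_alternatives(alpha)   # e.g. ['w3', 'w4', 'w1', 'w2']
    return rankings

# Usage sketch:
# results = sensitivity_sweep(rank_alternatives)
# stable_best = len({r[0] for r in results.values()}) == 1  # optimal choice unchanged?
```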
In Fig. 6, we can see that the optimal solution for Case 1 does not change with variations in \(\alpha ,\) indicating that our proposed method is stable. When \(\alpha \) ranges from 0 to 0.6, the ranking order is \(\omega _3 \succ \omega _4 \succ \omega _1 \succ \omega _2,\) while for \(\alpha \) ranging from 0.7 to 1, the ranking order shifts to \(\omega _3 \succ \omega _4 \succ \omega _2 \succ \omega _1.\) On the other hand, as observed from Fig. 7 for Case 2, both the best and the worst solutions remain constant regardless of changes in \(\alpha .\) When \(\alpha \) is between 0 and 0.2, the ranking order is \(\omega _3 \succ \omega _4 \succ \omega _1 \succ \omega _2,\) and when \(\alpha \) ranges from 0.3 to 1, the ranking order changes to \(\omega _3 \succ \omega _1 \succ \omega _4 \succ \omega _2.\)
To sum up, in both Case 1 and Case 2, although \(\alpha \) affects the ranking results, the optimal choice remains unchanged. In other words, while the risk preferences of the DMs may influence the ordering of the remaining alternatives, they do not change the optimal outcome. Thus, the method proposed in this paper is stable.
7 Conclusion
In this study, we proposed the hesitant fuzzy overlap function and gave several related examples. Since overlap functions can better handle the overlap and correlation between pieces of information, this extends the application scope of the overlap function to the hesitant fuzzy setting. We then proposed the HF\(\beta \)CIORS models and studied their basic properties. In addition, we integrated HF\(\beta \)CIORS with the TOPSIS method and applied the combination to MADM problems, verifying its feasibility through practical examples. Finally, sensitivity and comparative analyses validated the stability and effectiveness of the proposed method.
Our proposed method offers a more precise analysis, aligns more closely with the decision maker's preferences, and addresses certain HFMADM problems that traditional methods struggle to resolve, particularly in scenarios with significant overlapping hesitant information. However, the accuracy of our method depends on a careful choice of the overlap function, which plays a critical role in capturing the specific characteristics of the hesitant information.
In future work, we plan to explore and select more suitable overlap functions for experimental analysis. In addition, we will investigate attribute reduction based on the HF\(\beta \)CIORS models and variable precision fuzzy rough sets based on hesitant fuzzy overlap functions.
Data Availability
No datasets were generated or analysed during the current study.
References
Bai, W., Zhang, C., Zhai, Y., Sangaiah, A.K., Wang, B., Li, W.: Cognitive analysis of medical decision-making: an extended MULTIMOORA-based multigranulation probabilistic model with evidential reasoning. Cogn. Comput. 16(6), 3149–3167 (2024)
Bu, H., Wang, J., Shao, S., Zhang, X.: A novel model of fuzzy rough sets based on grouping functions and its application. Comput. Appl. Math. 44(1), 1–31 (2025)
Bustince, H., Fernandez, J., Mesiar, R., Montero, J., Orduna, R.: Overlap functions. Nonlinear Anal.: Theory Methods Appl. 72(3–4), 1488–1499 (2010)
Fu, C., Qin, K., Yang, L., Hu, Q.: Hesitant fuzzy \(\beta \)-covering (t, i) rough set models: an application to multi-attribute decision-making. J. Intell. Fuzzy Syst. 44(6), 10005–10025 (2023)
Garg, H., Rani, D.: Generalized geometric aggregation operators based on t-norm operations for complex intuitionistic fuzzy sets and their application to decision-making. Cogn. Comput. 12(3), 679–698 (2020)
Gómez, D., Rodriguez, J.T., Montero, J., Bustince, H., Barrenechea, E.: n-Dimensional overlap functions. Fuzzy Sets Syst. 287, 57–75 (2016)
Han, N., Qiao, J.: On (go, o)-fuzzy rough sets derived from overlap and grouping functions. J. Intell. Fuzzy Syst. 43(3), 3173–3187 (2022)
Han, N., Qiao, J., Li, T., Ding, W.: Multigranulation fuzzy probabilistic rough sets induced by overlap functions and their applications. Fuzzy Sets Syst. 481, 108893 (2024)
Jia, F., Liu, P.: A novel three-way decision model under multiple-criteria environment. Inf. Sci. 471, 29–51 (2019)
Jiang, H., Hu, B.Q.: On (o, g)-fuzzy rough sets based on overlap and grouping functions over complete lattices. Int. J. Approx. Reason. 144, 18–50 (2022)
Liang, D., Liu, D.: A novel risk decision making based on decision-theoretic rough sets under hesitant fuzzy information. IEEE Trans. Fuzzy Syst. 23(2), 237–247 (2014)
Liao, H., Xu, Z.: A VIKOR-based method for hesitant fuzzy multi-criteria decision making. Fuzzy Optim. Decis. Mak. 12(4), 373–392 (2013)
Liao, H., Xu, Z., Herrera-Viedma, E., Herrera, F.: Hesitant fuzzy linguistic term set and its application in decision making: a state-of-the-art survey. Int. J. Fuzzy Syst. 20, 2084–2110 (2018)
Matzenauer, M., Reiser, R., Santos, H., Bedregal, B., Bustince, H.: Strategies on admissible total orders over typical hesitant fuzzy implications applied to decision making problems. Int. J. Intell. Syst. 36(5), 2144–2182 (2021)
Paiva, R., Santiago, R., Bedregal, B., Palmeira, E.: Lattice-valued overlap and quasi-overlap functions. Inf. Sci. 562, 180–199 (2021)
Pawlak, Z.: Rough sets. Int. J. Comput. Inf. Sci. 11, 341–356 (1982)
Qiao, J.: On (io, o)-fuzzy rough sets based on overlap functions. Int. J. Approx. Reason. 132, 26–48 (2021)
Qiao, J.: Constructions of quasi-overlap functions and their generalized forms on bounded partially ordered sets. Fuzzy Sets Syst. 446, 68–92 (2022)
Seikh, M.R., Mandal, U.: Some picture fuzzy aggregation operators based on Frank t-norm and t-conorm: application to MADM process. Informatica 45(3), 447-461 (2021)
Su, Z., Xu, Z., Zhang, S.: Multi-attribute decision-making method based on probabilistic hesitant fuzzy entropy. In: Hesitant Fuzzy and Probabilistic Information Fusion: Theory and Applications, pp. 73–98. Springer Nature Singapore, Editor: Li, X (2024)
Torra, V.: Hesitant fuzzy sets. Int. J. Intell. Syst. 25(6), 529–539 (2010)
Torra, V., Narukawa, Y.: On hesitant fuzzy sets and decision. In: 2009 IEEE International Conference on Fuzzy Systems, pp. 1378–1382. IEEE, Piscataway, NJ, USA (2009)
Wang, D.-G., Meng, Y.-P., Li, H.-X.: A fuzzy similarity inference method for fuzzy reasoning. Comput. Math. Appl. 56(10), 2445–2454 (2008)
Wang, H.: Constructions of overlap functions on bounded lattices. Int. J. Approx. Reason. 125, 203–217 (2020)
Wang, J., Zhang, X.: Intuitionistic fuzzy granular matrix: novel calculation approaches for intuitionistic fuzzy covering-based rough sets. Axioms 13(6), 411 (2024)
Wu, W.-Z., Zhang, W.-X.: Neighborhood operator systems and approximations. Inf. Sci. 144(1–4), 201–217 (2002)
Xia, M., Xu, Z.: Hesitant fuzzy information aggregation in decision making. Int. J. Approx. Reason. 52(3), 395–407 (2011)
Xian, S., Ma, D., Feng, X.: Z hesitant fuzzy linguistic term set and their applications to multi-criteria decision making problems. Expert Syst. Appl. 238, 121786 (2024)
Xin, G., Ying, L.: Multi-attribute decision-making based on comprehensive hesitant fuzzy entropy. Expert Syst. Appl. 237, 121459 (2024)
Xu, Z., Zhang, X.: Hesitant fuzzy multi-attribute decision making based on TOPSIS with incomplete weight information. Knowl. Based Syst. 52, 53–64 (2013)
Yang, X., Song, X., Qi, Y., Yang, J.: Constructive and axiomatic approaches to hesitant fuzzy rough set. Soft Comput. 18, 1067–1077 (2014)
Yao, Y.: Constructive and algebraic methods of the theory of rough sets. Inf. Sci. 109(1–4), 21–47 (1998)
Zakowski, W.: Approximations in the space \((U, \Pi )\). Demonstratio Mathematica 16(3), 761–770 (1983)
Zhang, C., Li, D., Liang, J.: Hesitant fuzzy linguistic rough set over two universes model and its applications. Int. J. Mach. Learn. Cybern. 9, 577–588 (2018)
Zhang, C., Li, D., Liang, J.: Multi-granularity three-way decisions with adjustable hesitant fuzzy linguistic multigranulation decision-theoretic rough sets over two universes. Inf. Sci. 507, 665–683 (2020)
Zhang, C., Li, D., Zhai, Y., Yang, Y.: Multigranulation rough set model in hesitant fuzzy information systems and its application in person-job fit. Int. J. Mach. Learn. Cybern. 10, 717–729 (2019)
Zhang, H., Shu, L., Liao, S., Xiawu, C.: Dual hesitant fuzzy rough set and its application. Soft Comput. 21, 3287–3305 (2017)
Zhang, H., Shu, L., Xiong, L.: On novel hesitant fuzzy rough sets. Soft Comput. 23, 11357–11371 (2019)
Zhang, X., Liang, R., Bustince, H., Bedregal, B., Fernandez, J., Li, M., Ou, Q.: Pseudo overlap functions, fuzzy implications and pseudo grouping functions with applications. Axioms 11(11), 593 (2022)
Zhang, X., Ou, Q., Wang, J.: Variable precision fuzzy rough sets based on overlap functions with application to tumor classification. Inf. Sci. 666, 120451 (2024)
Zhao, N., Xu, Z., Ren, Z.: On typical hesitant fuzzy prioritized “or’’ operator in multi-attribute decision making. Int. J. Intell. Syst. 31(1), 73–100 (2016)
Zhou, J.-J., Li, X.-Y.: Hesitant fuzzy \(\beta \) covering rough sets and applications in multi-attribute decision making. J. Intell. Fuzzy Syst. 41(1), 2387–2402 (2021)
Zhu, B., Xu, Z., Xia, M.: Dual hesitant fuzzy sets. J. Appl. Math. 2012(1), 879629 (2012)
Acknowledgements
This work is supported by the National Natural Science Foundation of China (Grant No. 12271319), Natural Science Basic Research Program of Shaanxi Province (Grant No. 2023-JC-QN-0046), Natural Science Foundation of Zhejiang Province (Grant No. LY20A010012) and Scientific research plan projects of Shaanxi Education Department (Grant No. 22JK0299).
Author information
Contributions
Conceptualization, X.Z.; methodology, X.Z., S.S. and X.M.; Validation, J.W. and X.Z.; writing original draft preparation, J.W., S.S., M.X. and X.Z.; writing, reviewing, and editing, J.W., S.S., M.X. and X.Z.; project assessment, X.Z. All authors have reviewed and agreed to the published version of the manuscript.
Ethics declarations
Conflict of Interest
The authors declare no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.