1 Introduction

In practice, decision-making is a complex and uncertain process, as the evaluative information provided by decision-makers (DMs) often involves cognitive uncertainty and fuzziness. Cognitive differences among DMs may result in varying responses to the same issue. To address this problem, research on hesitant fuzzy sets (HFSs) has grown steadily in recent years. Unlike traditional fuzzy sets, HFSs represent membership degrees by a set of possible values rather than a single fuzzy value. This is highly beneficial for modeling uncertain information in real-world problems, as it more accurately reflects the DMs' level of uncertainty. HFSs were initially introduced by Torra [21, 22], expanding the concept of fuzzy sets to allow for more comprehensive information representation and reducing personal biases. Xia and Xu [27] formalized the mathematical expression of HFSs in 2011 and developed the related hesitant fuzzy (HF) aggregation operators. Later, Zhu et al. [43] introduced dual hesitant fuzzy sets (DHFSs) in 2012, exploring their basic operations and properties in depth. Liao et al. [13] applied HF linguistic preference relations in decision-making processes, while Faizi extended HFSs theoretically, applying them to the characteristic object method and demonstrating their effectiveness in addressing uncertainty in decision-making problems. Xian et al. [28] proposed a model using Z hesitant fuzzy linguistic term sets to address uncertainty and fuzziness. Xin and Ying [29] developed a comprehensive hesitant fuzzy entropy. The development of HFSs has effectively addressed differences of opinion and uncertainty among DMs, helping experts express their hesitation or ambiguity. This flexibility makes HFSs an ideal tool for constructing decision support systems. By integrating the uncertainty and hesitation of experts, HFSs provide a richer information base, leading to more accurate decision-making [1, 20, 34, 35].

Similar to the case of traditional fuzzy sets, it is important to develop the related fuzzy logic for HFSs. Fuzzy logic studies the logical properties of fuzzy propositions and connectives, as well as their inferential relationships, extending classical logic; fuzzy logic connectives are its key components. Logic systems based on t-norms have long dominated fuzzy logic theory, especially in the field of aggregation operators: generalized geometric aggregation operators were constructed from t-norms in [5], and picture fuzzy aggregation operators based on the Frank t-norm in [19]. However, since t-norms must satisfy the associative law, they face limitations in certain application scenarios. Overlap functions, which are closely linked to t-norms but not restricted by the associative law, have emerged as new non-associative fuzzy logic connectives and are gradually gaining attention in both practical applications and theoretical research. Bustince et al. [3] were the first to introduce overlap functions and applied them to image processing and classification. Gómez et al. [6] defined n-dimensional overlap functions and presented their axiomatization. Zhang et al. [39] introduced pseudo-overlap functions by dropping commutativity and showed practical applications. Wang [24] constructed new overlap functions on bounded lattices, and Qiao [18] proposed quasi-overlap functions and their generalizations. Paiva et al. [15] defined quasi-overlap functions on lattices, exploring properties such as transitivity, uniformity, idempotence, and the cancellation law. Building on these advances, Qiao [17] developed the \((I_O,O)\)-fuzzy rough set model, extending rough approximation operators to overlap functions and pioneering new directions. Han and Qiao [7] introduced the \((G_O,O)\) fuzzy rough set model based on overlap and grouping functions, analyzing its characteristics and topological properties. Zhang et al. [40] proposed a variable precision fuzzy rough set model based on an overlap function, investigated its properties, and demonstrated its effectiveness in tumor classification. Han et al. [8] introduced overlap function-based fuzzy probabilistic rough sets and multigranulation fuzzy probabilistic rough sets, showing their effectiveness and superior classification performance over t-norm-based models through examples and experiments.

On the other hand, with the rapid increase in information and the growing societal demand for complete information, researchers have focused on how to obtain such information. To address this issue, Pawlak [16] introduced rough set theory in 1982, which has since become an important tool in uncertainty mathematics. However, the equivalence relations required by rough sets are not easily obtained in practice. Consequently, related generalizations have been continuously proposed, covering various lines of research such as [26, 32, 33]. Zakowski [33] introduced the concept of covering rough sets (CRSs), which replaces equivalence relations with coverings, retaining the original advantages of rough sets while significantly enhancing their practicality. Covering rough sets have been widely applied in decision analysis and other fields [2, 25], but they remain imperfect in some practical scenarios. Specifically, real-world problems involving various types of attribute values, such as hesitant fuzzy (HF) numbers [21], require further consideration. To address this issue, Yang et al. [31] proposed HF rough set theory, which was subsequently extended by Zhang et al. [36, 37]. Liang [11] introduced HFSs into HF decision-theoretic rough sets and studied their decision mechanisms. In the context of covering rough sets involving HFSs, Zhou and Li [42] proposed four HF-\(\beta \) neighborhood operators, while Fu et al. [4] introduced the HF-\(\beta \) covering \(({\mathcal {T}},{\mathcal {I}})\) rough set (HF\(\beta \)CTIRS) model.

Since the HF t-norm must satisfy the associative law, it is limited in some application scenarios. In particular, as the complexity and relevance of information continue to grow, the limitations of existing HF-\(\beta \) covering rough sets based on HF t-norms become apparent. These models face challenges in effectively handling overlapping and interrelated hesitant information, which is crucial for accurately representing and analyzing complex data relationships. In addition, there is currently no definition or example of a representable HF t-norm, which further limits the applicability of such models.

The overlap function is not constrained by the associative law, can better handle the overlap between pieces of information in practical applications, and thus has wider application prospects. Therefore, it is natural to study overlap functions in the HFS setting, introduce new concepts and examples, and establish a new HF-\(\beta \) covering rough set model based on the proposed overlap function. This expands the application of overlap functions to a new field and also provides a new method for processing and analyzing hesitant fuzzy data. To better understand the relevant concepts mentioned in this paper, Fig. 1 shows the connections between them.

Based on the above research, the main research contents of this paper are as follows:

  • Extend the existing HF rough set model. Given the limited research on representable HF t-norms, this paper proposes representable HF t-norms based on the work of Xia and Xu [27], enriching the theory of HF t-norms.

  • The aggregation operator in the existing HF-\(\beta \) covering rough set model is limited to the t-norm, and the HF t-norm is not suitable for dealing with the overlap and correlation between hesitant pieces of information. Therefore, this paper introduces the HF overlap function and illustrates it through examples.

  • Based on the HF overlap function and HF implication, a new HF\(\beta \)CIORS model is proposed, and its key properties are explored.

  • The HF\(\beta \)CIORS model is combined with the TOPSIS method and applied to the hesitant fuzzy multi-attribute decision-making (HFMADM) problem, and the results are computed and analyzed through examples. Sensitivity and comparative analyses are performed to verify the stability and effectiveness of the proposed method.

Fig. 1 The connections between the fundamental concepts mentioned in the paper

2 Preliminaries

In this section, some fundamental concepts are reviewed.

2.1 HFSs

Definition 1

[21] Consider a non-empty and finite set \(\Omega .\) A HFS E on \(\Omega \) is expressed as:

$$\begin{aligned} E=\{\langle \omega , h_E(\omega )\rangle \mid \omega \in \Omega \} \end{aligned}$$

where \( h_E(\omega )\subseteq [0,1]\) indicates the possible membership degrees of the element \(\omega \) to E. To facilitate subsequent expressions, \( h_E(\omega )\) is termed a hesitant fuzzy element (HFE).

In what follows, \({\mathcal {H}}\) denotes the set of all HFEs, and the set of all HFSs on \(\Omega \) is referred to as \(HF(\Omega ).\) Some special HFSs have also been proposed:

\(\forall \omega \in \Omega ,\) an empty HFS is characterized by \(h(\omega )=0_{\mathcal {H}}=\{0\},\) and it is represented as \(\varnothing .\) \(\forall \omega \in \Omega ,\) a full HFS is characterized by \(h(\omega )=1_{\mathcal {H}}=\{1\},\) and it is represented as \(\Omega .\)

Definition 2

[27] \(\forall h_A\in {\mathcal {H}},\) the score function of \(h_A\) is expressed as

$$\begin{aligned} s(h_A)=\frac{1}{l_{h_A}}\sum _{\eta \in h_A}\eta , \end{aligned}$$

where \(l_{h_A}\) denotes the number of values in \(h_A.\) For two HFEs \(h_A\) and \(h_B,\) if \(s(h_A) > s(h_B)\) \((s(h_A) < s(h_B)),\) then \(h_A > h_B\) \((h_A < h_B);\) if \(s(h_A) = s(h_B),\) then \(h_A = h_B.\)

It is noteworthy that the number of values contained in different HFEs may differ, and these values are not necessarily arranged in a specific order. To address this concern, [27] put forth the following two assumptions.

  1. 1.

    Values in each HFE h are ordered in descending sequence. Let \(h^{\delta (s)}\) represent the s-th largest value in h.

  2. 2.

    \(\forall h_A, h_B \in {\mathcal {H}}\) , if \(l_{h_A}> l_{h_B},\) \(h_B\) should be extended to be as long as \(h_A.\)

To achieve this goal, we extend \(h_B\) by adding the minimum value to it until \(l_{h_A} = l_{h_B},\) following the extension rules presented by Xu and Zhang [30].
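As a minimal computational illustration (our own sketch; the function names are ours, not from [27] or [30]), the extension rule and the score function of Definition 2 can be realized as follows:

```python
def normalize(h_a, h_b):
    """Sort two HFEs in descending order and pad the shorter one with
    its minimum value until both have equal length (extension rule)."""
    a, b = sorted(h_a, reverse=True), sorted(h_b, reverse=True)
    k = max(len(a), len(b))
    a, b = a + [a[-1]] * (k - len(a)), b + [b[-1]] * (k - len(b))
    return a, b

def score(h):
    """Score function of Definition 2: the mean of the values in h."""
    return sum(h) / len(h)

h_A, h_B = [0.8, 0.6, 0.4], [0.8, 0.7, 0.5, 0.3]
print(normalize(h_A, h_B)[0])  # [0.8, 0.6, 0.4, 0.4]
print(score(h_A))              # 0.6 (up to floating-point rounding)
```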

Based on the above, the basic operations between HFSs are as follows.

Definition 3

[12] Consider a non-empty and finite set \(\Omega .\) \(\forall \) \(\alpha , \beta \in HF(\Omega )\) and \(\omega \in \Omega ,\)

  1. (1)

    \(h_{\alpha \sqcup \beta }(\omega )=h_\alpha (\omega ) \curlyvee h_\beta (\omega )=\{h_\alpha ^{\delta (s)}(\omega )\vee h_\beta ^{\delta (s)}(\omega ) \mid s=1,2, \ldots , k\};\)

  2. (2)

    \(h_{\alpha \sqcap \beta }(\omega )=h_\alpha (\omega ) \curlywedge h_\beta (\omega )=\{h_\alpha ^{\delta (s)}(\omega ) \wedge h_\beta ^{\delta (s)}(\omega ) \mid s=1,2, \ldots , k\};\)

  3. (3)

    \(h_{\alpha \boxplus \beta }(\omega )=h_\alpha (\omega ) \oplus h_\beta (\omega )=\{h_\alpha ^{\delta (s)}(\omega )+h_\beta ^{\delta (s)}(\omega )-h_\alpha ^{\delta (s)}(\omega ) h_\beta ^{\delta (s)}(\omega ) \mid s=1,2, \ldots , k\} \)

  4. (4)

    \(h_{\alpha \boxtimes \beta }(\omega )=h_\alpha (\omega ) \otimes h_\beta (\omega )= \{h_\alpha ^{\delta (s)}(\omega ) h_\beta ^{\delta (s)}(\omega ) \mid s=1,2, \ldots , k\}\)

  5. (5)

    \(h_{\alpha \boxdot \beta }(\omega )=h_\alpha (\omega ) \oslash h_\beta (\omega )=\{\overline{\gamma }^{\delta (s)} \mid s=1,2, \ldots , k\},\) where

    $$\begin{aligned} \overline{\gamma }^{\delta (s)}=\left\{ \begin{array}{ll} \frac{h_\alpha ^{\delta (s)}(\omega )}{h_\beta ^{\delta (s)}(\omega )}, & \quad h_\alpha ^{\delta (s)}(\omega ) \le h_\beta ^{\delta (s)}(\omega ), h_\beta ^{\delta (s)}(\omega ) \ne 0 \\ 1, & \quad \text {others} \end{array}\right. \end{aligned}$$
  6. (6)

    \(h_{\sim \alpha }(\omega )={\mathcal {N}}(h_\alpha (\omega ))=\{1-h_\alpha ^{\delta (s)}(\omega ) \mid s=1,2, \ldots , k\},\) where \({\mathcal {N}}\) is a HF standard negator;

  7. (7)

    \(\lambda (h_\alpha (\omega ))=\{1-(1-h_\alpha ^{\delta (s)}(\omega ))^\lambda \mid s=1,2, \ldots , k\},\) where \(\lambda >0.\)

In (6) and (7), \(k=l_{h_\alpha (\omega )};\) in (1)–(5), \(k=\max (l_{h_\alpha (\omega )}, l_{h_\beta (\omega )}).\)

Example 1

Consider two HFEs given by \(h_A=\{0.8,0.6,0.4\}\) and \(h_B=\{0.8,0.7,0.5,0.3\}.\) Based on the assumption (2), since \(l_{h_A}=3<l_{h_B}=4,\) we can obtain \(h_A=\{0.8,0.6,0.4,0.4\}.\) Next,

$$\begin{aligned} {\mathcal {N}}(h_A)= & \{1-0.8,1-0.6,1-0.4,1-0.4\} \\= & \{0.6,0.6,0.4,0.2\}\\ h_A \curlyvee h_B= & \{0.8 \vee 0.8,0.6 \vee 0.7,0.4 \vee 0.5,0.4 \vee 0.3\} \\= & \{0.8,0.7,0.5,0.4\} . \end{aligned}$$

The remaining operations can also be obtained using Definition 3.
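The componentwise operations of Definition 3 translate directly into code. The sketch below (illustrative only; HFEs are assumed already extended to equal length and sorted in descending order) reproduces the two computations of Example 1:

```python
def hf_union(a, b):         # h_a ⋎ h_b, Definition 3 (1)
    return [max(x, y) for x, y in zip(a, b)]

def hf_intersection(a, b):  # h_a ⋏ h_b, Definition 3 (2)
    return [min(x, y) for x, y in zip(a, b)]

def hf_sum(a, b):           # h_a ⊕ h_b, Definition 3 (3)
    return [x + y - x * y for x, y in zip(a, b)]

def hf_product(a, b):       # h_a ⊗ h_b, Definition 3 (4)
    return [x * y for x, y in zip(a, b)]

def hf_negation(a):         # standard negator, Definition 3 (6), re-sorted
    return sorted((1 - x for x in a), reverse=True)

h_A = [0.8, 0.6, 0.4, 0.4]  # h_A after extension, as in Example 1
h_B = [0.8, 0.7, 0.5, 0.3]
print(hf_negation(h_A))     # [0.6, 0.6, 0.4, 0.2]
print(hf_union(h_A, h_B))   # [0.8, 0.7, 0.5, 0.4]
```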

Definition 4

[41] \(\forall h_A, h_B \in {\mathcal {H}},\) a partial order \(\le _{{\mathcal {H}}}\) is defined as follows:

$$\begin{aligned} h_A \le _{{\mathcal {H}}} h_B \Longleftrightarrow h_A^{\delta (s)} \le h_B^{\delta (s)} \end{aligned}$$

where \(s=1,2, \ldots , k\) and \(k=\max (l_{h_A}, l_{h_B}).\) The pair \(({\mathcal {H}}, \le _{{\mathcal {H}}})\) forms a bounded lattice, where the smallest element is \(0_{{\mathcal {H}}}=\{0\}\) and the largest element is \(1_{{\mathcal {H}}}=\{1\}.\)
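A small sketch of this componentwise comparison (ours; it assumes both HFEs are already extended to the same length):

```python
def leq_H(a, b):
    """Partial order <=_H: componentwise comparison of equally long,
    descending-ordered HFEs."""
    return all(x <= y for x, y in zip(a, b))

print(leq_H([0.6, 0.4], [0.7, 0.4]))  # True
print(leq_H([0.6, 0.4], [0.7, 0.3]))  # False: the HFEs are incomparable
```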

Definition 5

[38] Consider a non-empty and finite set \(\Omega .\) \(\forall \alpha , \beta \in H F(\Omega ),\) if \(h_{\alpha }(\omega ) \le _{{\mathcal {H}}} h_{\beta }(\omega )\) holds \(\forall \omega \in \Omega ,\) then \(\alpha \) is termed a HF subset of \(\beta ;\) this relationship is denoted as \(\alpha \Subset \beta .\)

Specifically, \(\alpha \) and \(\beta \) are considered equal, if \(\forall \omega \in \Omega ,\) satisfying \(h_\alpha (\omega )=h_\beta (\omega )(h_\alpha (\omega )=h_\beta (\omega ) \Longleftrightarrow h_\alpha ^{\delta (s)}(\omega )=h_\beta ^{\delta (s)}(\omega ), s=1,2, \ldots , k).\)

2.2 HF Logical Operators

This subsection reviews the logical operators of HF theory, which extend fuzzy logic operators to the HF environment.

Definition 6

[41] A HF t-norm is defined as a mapping \({\mathcal {T}}: {\mathcal {H}}^2 \rightarrow {\mathcal {H}},\) satisfying:

  1. (i)

    \({\mathcal {T}}(1_{{\mathcal {H}}}, h_A)=h_A\) (border condition);

  2. (ii)

    \({\mathcal {T}}(h_A, h_B)={\mathcal {T}}(h_B, h_A)\) (commutativity);

  3. (iii)

    \({\mathcal {T}}(h_A, {\mathcal {T}}(h_B, h_C))={\mathcal {T}}({\mathcal {T}}(h_A, h_B), h_C)\) (associativity);

  4. (iv)

    If \(h_A \le _{{\mathcal {H}}} h_C\) and \(h_B \le _{{\mathcal {H}}} h_D,\) then \({\mathcal {T}}(h_A, h_B) \le _{{\mathcal {H}}} {\mathcal {T}}(h_C, h_D)\) (monotonicity), where \(h_i \in {\mathcal {H}}(i=A,B,C,D).\)

A HF t-conorm is defined as a mapping \({\mathcal {S}}: {\mathcal {H}}^2 \rightarrow {\mathcal {H}},\) exhibiting the following properties:

  1. (i)

    \({\mathcal {S}}(0_{{\mathcal {H}}}, h_A)=h_A\) (border condition);

  2. (ii)

    \({\mathcal {S}}(h_A, h_B)={\mathcal {S}}(h_B, h_A)\) (commutativity);

  3. (iii)

    \({\mathcal {S}}(h_A, {\mathcal {S}}(h_B, h_C))={\mathcal {S}}({\mathcal {S}}(h_A, h_B), h_C)\) (associativity);

  4. (iv)

    If \(h_A \le _{{\mathcal {H}}} h_C\) and \(h_B \le _{{\mathcal {H}}} h_D,\) then \({\mathcal {S}}(h_A, h_B) \le _{{\mathcal {H}}} {\mathcal {S}}(h_C, h_D)\) (monotonicity), where \(h_i \in {\mathcal {H}}(i=A,B,C,D).\)

Three typical HF t-norms and HF t-conorms are shown as follows (a computational sketch is given after the list):

  1. (1)

    \({\mathcal {T}}_M (h_A, h_B)=h_A \curlywedge h_B=\{h_A^{\delta (s)} \wedge h_B^{\delta (s)} \mid s=1,2, \ldots , k\}; {\mathcal {S}}_M(h_A, h_B)=h_A \curlyvee h_B=\{h_A^{\delta (s)} \vee h_B^{\delta (s)} \mid s=1,2, \ldots , k\};\)

  2. (2)

    \({\mathcal {T}}_P(h_A, h_B)=h_A \otimes h_B=\{h_A^{\delta (s)} h_B^{\delta (s)} \mid s=1,2, \ldots , k\}; {\mathcal {S}}_P(h_A, h_B)=h_A \oplus h_B=\{h_A^{\delta (s)}+h_B^{\delta (s)}-h_A^{\delta (s)} h_B^{\delta (s)} \mid s=1,2, \ldots , k\};\)

  3. (3)

    \({\mathcal {T}}_L(h_A, h_B)=\{(h_A^{\delta (s)}+h_B^{\delta (s)}-1) \vee 0 \mid s=1,2, \ldots , k\}; {\mathcal {S}}_L(h_A, h_B)=\{(h_A^{\delta (s)}+h_B^{\delta (s)}) \wedge 1 \mid s=1,2, \ldots , k\}.\)
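The three pairs above act componentwise, so they can be coded directly (an illustrative sketch under the usual equal-length, descending-order assumption):

```python
def T_M(a, b):  # minimum HF t-norm
    return [min(x, y) for x, y in zip(a, b)]

def T_P(a, b):  # product HF t-norm
    return [x * y for x, y in zip(a, b)]

def T_L(a, b):  # Lukasiewicz HF t-norm
    return [max(x + y - 1.0, 0.0) for x, y in zip(a, b)]

def S_M(a, b):  # maximum HF t-conorm
    return [max(x, y) for x, y in zip(a, b)]

def S_P(a, b):  # probabilistic-sum HF t-conorm
    return [x + y - x * y for x, y in zip(a, b)]

def S_L(a, b):  # bounded-sum HF t-conorm
    return [min(x + y, 1.0) for x, y in zip(a, b)]

h_A, h_B = [0.8, 0.6, 0.4], [0.8, 0.7, 0.5]
print(T_L(h_A, h_B))  # [0.6, 0.3, 0.0]
print(S_P(h_A, h_B))  # [0.96, 0.88, 0.7]
```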

Definition 7

[14] A HF implicator is defined as a mapping \({\mathcal {I}}: {\mathcal {H}}^2 \rightarrow {\mathcal {H}},\) exhibiting the following properties:

  1. (i)

    \({\mathcal {I}}(0_{\mathcal {H}}, 0_{\mathcal {H}}) = {\mathcal {I}}(0_{\mathcal {H}}, 1_{\mathcal {H}}) = {\mathcal {I}}(1_{\mathcal {H}}, 1_{\mathcal {H}}) = 1_{\mathcal {H}},\)

  2. (ii)

    \({\mathcal {I}}(1_{\mathcal {H}}, 0_{\mathcal {H}}) = 0_{\mathcal {H}}.\)

If \(h_A \le _{\mathcal {H}} h_B \Rightarrow {\mathcal {I}}(h_A, h_C) \ge _{\mathcal {H}} {\mathcal {I}}(h_B, h_C),\) then \({\mathcal {I}}\) is left monotonic decreasing; If \(h_A \le _{\mathcal {H}} h_B \Rightarrow {\mathcal {I}}(h_C, h_A) \le _{\mathcal {H}} {\mathcal {I}}(h_C, h_B),\) then \({\mathcal {I}}\) is right monotonic increasing.

2.3 HF \(\beta \)-Covering Approximation Space

This subsection reviews concepts of the HF \(\beta \)-covering approximation space (HF\(\beta \)-CAS).

Definition 8

[42] Consider a non-empty and finite set \(\Omega ,\) and let \(C=\{C_1, C_2, \ldots , C_m\},\) where \(C_i \in HF(\Omega )\) for \(i=1,2, \ldots , m.\) \(\forall \) HFE \(\beta \in {\mathcal {H}},\) C is a HF \(\beta \)-covering of \(\Omega \) if \(h_{\sqcup _{i=1}^m C_i}(\omega ) \ge _{\mathcal {H}} \beta \) holds for any \(\omega \in \Omega .\) \((\Omega , C)\) is then called a HF\(\beta \)-CAS.

Definition 9

[42] Consider a HF\(\beta \)-CAS \((\Omega ,C).\) \(\forall \) \(\omega \in \Omega ,\) a HF \(\beta \)-neighborhood (HF\(\beta \)-N) of \(\omega \) is defined as:

$$\begin{aligned} N_{1, \omega }^{\beta , C}=\sqcap \{C_i \in C \mid h_{C_i}(\omega ) \ge _{\mathcal {H}} \beta , i=1,2, \ldots , m\} . \end{aligned}$$

Definition 10

[42] Consider a HF\(\beta \)-CAS \((\Omega ,C).\) \(\forall \omega \in \Omega ,\) a HF complementary \(\beta \)-neighborhood of \(\omega \) is defined as:

$$\begin{aligned} N_{2, \omega }^{\beta , C}=\left\{ \langle y, h_{N_{2, \omega }^{\beta , C}}(y)\rangle \mid y \in \Omega \right\} , \end{aligned}$$

where \(h_{N_{2, \omega }^{\beta , C}}(y)=h_{N_{1, y}^{\beta , C}}(\omega ).\) Construct two HF\(\beta \)-N operators by the union and the intersection between \(N_{1, \omega }^{\beta , C}\) and \(N_{2, \omega }^{\beta , C}\): \(\forall \omega \in \Omega \)

$$\begin{aligned} & N_{3, \omega }^{\beta , C}=N_{1, \omega }^{\beta , C} \sqcup N_{2, \omega }^{\beta , C},\\ & N_{4, \omega }^{\beta , C}=N_{1, \omega }^{\beta , C} \sqcap N_{2, \omega }^{\beta , C} . \end{aligned}$$
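As a computational illustration (our own sketch, not code from [42]; each HFS in the covering is stored as a dict from objects to equally long, descending-ordered HFEs), the four neighborhood operators can be computed as follows:

```python
from functools import reduce

def leq_H(a, b):   # componentwise partial order <=_H
    return all(x <= y for x, y in zip(a, b))

def hf_meet(a, b):
    return [min(x, y) for x, y in zip(a, b)]

def hf_join(a, b):
    return [max(x, y) for x, y in zip(a, b)]

def N1(omega, cover, beta):
    """Definition 9: intersect all C_i with h_{C_i}(omega) >=_H beta.
    Assumes the covering condition holds, so at least one C_i qualifies."""
    chosen = [C for C in cover if leq_H(beta, C[omega])]
    return {y: reduce(hf_meet, (C[y] for C in chosen)) for y in cover[0]}

def N2(omega, cover, beta):
    """Definition 10: h_{N2,omega}(y) = h_{N1,y}(omega)."""
    return {y: N1(y, cover, beta)[omega] for y in cover[0]}

def N3(omega, cover, beta):  # union of N1 and N2
    n1, n2 = N1(omega, cover, beta), N2(omega, cover, beta)
    return {y: hf_join(n1[y], n2[y]) for y in n1}

def N4(omega, cover, beta):  # intersection of N1 and N2
    n1, n2 = N1(omega, cover, beta), N2(omega, cover, beta)
    return {y: hf_meet(n1[y], n2[y]) for y in n1}
```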

3 HF Overlap Functions

Regarding the HF t-norms reviewed above, we can observe that more HF t-norms satisfy Definition 6 than just those of the form \({\mathcal {T}}(h_A, h_B)=\{T(h_A^{\delta (s)},h_B^{\delta (s)}) \mid s=1,2, \ldots , k\}.\) Therefore, the notion of HF t-norms can be extended.

Example 2

Consider two HFEs \(h_A=\{0.8,0.6,0.4\}\) and \(h_B=\{0.8,0.7,0.5\}.\) Apply \(T_M\) between \(h_A^{\delta (1)}\) and \(h_B^{\delta (1)}\) and between \(h_A^{\delta (2)}\) and \(h_B^{\delta (2)},\) and apply \(T_P\) between \(h_A^{\delta (3)}\) and \(h_B^{\delta (3)}.\) We refer to the HF t-norm that follows this operational rule as \({\mathcal {T}}_1\):

$$\begin{aligned} {\mathcal {T}}_1(h_A,h_B)=\{T_M(h_A^{\delta (1)},h_B^{\delta (1)}), T_M(h_A^{\delta (2)},h_B^{\delta (2)}), T_P(h_A^{\delta (3)},h_B^{\delta (3)})\}. \end{aligned}$$

Then \({\mathcal {T}}_1(h_A, h_B)=\{0.8,0.6,0.2\}.\) It is easy to verify that \({\mathcal {T}}_1\) satisfies the definition of a HF t-norm.
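A one-line sketch of \({\mathcal {T}}_1\) (ours; both HFEs are assumed to have length 3 and descending order):

```python
def T1(h_a, h_b):
    """T_M on the first two positions, T_P on the third (Example 2)."""
    ops = [min, min, lambda x, y: x * y]
    return [op(x, y) for op, x, y in zip(ops, h_a, h_b)]

print(T1([0.8, 0.6, 0.4], [0.8, 0.7, 0.5]))  # [0.8, 0.6, 0.2]
```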

Therefore, the concept of representable HF t-norms can be provided next.

Definition 11

A representable HF t-norm is a mapping \({\mathcal {T}}: {\mathcal {H}}^2 \rightarrow {\mathcal {H}}\) of the following form:

$$\begin{aligned} {\mathcal {T}}(h_A, h_B)=\{T_1(h_A^{\delta (1)},h_B^{\delta (1)}), T_2(h_A^{\delta (2)},h_B^{\delta (2)}),\ldots , T_k(h_A^{\delta (k)},h_B^{\delta (k)})\} \end{aligned}$$

where \(T_1\ge T_2\ge \cdots \ge T_k\) pointwise, which guarantees that the resulting values remain in descending order, so that \({\mathcal {T}}(h_A, h_B)\) is a valid HFE (cf. Example 2, where \(T_M \ge T_P\)).

Example 3

Let \(h_A=\{h_A^{\delta (1)}, h_A^{\delta (2)} \}\) and \(h_B=\{h_B^{\delta (1)}, h_B^{\delta (2)} \}\) be two HFEs. Then the following is a representable HF t-norm:

$$\begin{aligned} {\mathcal {T}}(h_A, h_B)=\{ h_A^{\delta (1)} h_B^{\delta (1)},\ (h_A^{\delta (2)} + h_B^{\delta (2)} - 1) \vee 0 \}. \end{aligned}$$

Next, we recall the definition of the overlap function:

Definition 12

[3] A bivariate function O : \([0, 1]^2 \rightarrow [0, 1]\) is defined as an overlap function if satisfying:

  1. (1)

    \(O(\nu _1, \nu _2) =O(\nu _2, \nu _1);\)

  2. (2)

    \(O(\nu _1, \nu _2) =0\) iff \(\nu _1=0\) or \(\nu _2=0;\)

  3. (3)

    \(O(\nu _1, \nu _2) =1\) iff \(\nu _1=\nu _2=1;\)

  4. (4)

    O is increasing;

  5. (5)

    O is continuous.

Example 4

[3] The following are some typical overlap functions, where p is positive (a computational sketch is given after the list):

  • Minimum-Maximum Overlap Function:

    \(O_{n m}(\nu _1, \nu _2)= \min (\nu _1, \nu _2) \max (\nu _1^2, \nu _2^2);\)

  • Product Overlap Function:

    \(O_p(\nu _1, \nu _2)=\nu _1^p \nu _2^p;\)

  • Minimum-Power Overlap Function:

    \(O_{m p}(\nu _1, \nu _2)=\min (\nu _1^p, \nu _2^p);\)

  • Maximum-Power Overlap Function:

    \(O_{M p}(\nu _1, \nu _2)=1-\max ((1-\nu _1)^p,(1-\nu _2)^p);\)

  • Dubois and Prade’s Overlap Function:

    \(O_{D B}(\nu _1, \nu _2)= {\left\{ \begin{array}{ll}\frac{2 \nu _1 \nu _2}{\nu _1+\nu _2}, & \text{ if } \nu _1+\nu _2 \ne 0, \\ 0, & \text{ if } \nu _1+\nu _2=0.\end{array}\right. }\)
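For reference, the five operations listed above can be written directly in code (an illustrative sketch; the parameter p is assumed positive):

```python
def O_nm(x, y):         # minimum-maximum overlap function
    return min(x, y) * max(x * x, y * y)

def O_p(x, y, p=2.0):   # product overlap function
    return (x ** p) * (y ** p)

def O_mp(x, y, p=2.0):  # minimum-power overlap function
    return min(x ** p, y ** p)

def O_Mp(x, y, p=2.0):  # maximum-power overlap function
    return 1.0 - max((1.0 - x) ** p, (1.0 - y) ** p)

def O_DB(x, y):         # Dubois and Prade's overlap function
    return 0.0 if x + y == 0 else 2.0 * x * y / (x + y)

# property (2) of Definition 12: zero iff one argument is zero
assert O_nm(0.0, 0.7) == 0.0 and O_nm(0.3, 0.7) > 0.0
```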

Next, we discuss the partial order on overlap functions.

Definition 13

Let \(f_A\) and \(f_B\) be two overlap functions.

  1. (i)

    We say that \(f_A \preceq f_B,\) iff \(f_A(\nu _1, \nu _2) \le f_B(\nu _1, \nu _2)\) holds, \(\forall \) \(\nu _1, \nu _2 \in \) [0, 1].

  2. (ii)

    We say that \(f_A \prec f_B\) iff \(f_A \preceq f_B\) and \(f_A \ne f_B.\)

Example 5

According to the overlap functions mentioned in Example 4, we have

  • \(O_{n m} \preceq O_{m p},\) where \(0<p \le 1;\)

  • \(O_{m p} \preceq O_{n m},\) where \(p \ge 3;\)

  • \(O_p \preceq O_{D B},\) where \(p \ge 1;\)

  • \(O_p \preceq O_{m p}.\)

However, if we want to establish overlap functions in the hesitant setting, it is more reasonable to build overlap functions on a lattice rather than on the unit interval.

Definition 14

[24] Consider a bounded lattice \(({\mathcal {L}}, \le , 0, 1),\) where 1 is the greatest element and 0 is the smallest element. A binary operator \(O: {\mathcal {L}}^2 \rightarrow {\mathcal {L}}\) is defined as an overlap function on \({\mathcal {L}}\) if the following conditions are met for any \( \nu _1,\nu _2 \in {\mathcal {L}}\):

  1. (1)

    \(O(\nu _1, \nu _2) = O(\nu _2, \nu _1);\)

  2. (2)

    \(O(\nu _1, \nu _2)=0\) iff \(\nu _1=0\) or \(\nu _2=0;\)

  3. (3)

    \(O(\nu _1, \nu _2)=1\) iff \(\nu _1=\nu _2=1;\)

  4. (4)

    O preserves directed sups and filtered infs in each variable.

Definition 15

[10] A function \(O: {\mathcal {L}}^2 \rightarrow {\mathcal {L}}\) is defined as an overlap function on the complete lattice \({\mathcal {L}}\) if it satisfies the following conditions for any \(\nu _1, \nu _2 \in {\mathcal {L}}\) and any \(\{z_i: i \in \Lambda \} \subseteq {\mathcal {L}}\):

(1):

\(O(\nu _1, \nu _2)=O(\nu _2, \nu _1);\)

(2):

\(O(\nu _1, \nu _2)=0_{{\mathcal {L}}}\) iff \(\nu _1=0_{{\mathcal {L}}}\) or \(\nu _2=0_{{\mathcal {L}}};\)

(3):

\(O(\nu _1, \nu _2)=1_{{\mathcal {L}}}\) iff \(\nu _1=\nu _2=1_{{\mathcal {L}}};\)

(4):

\(O(\nu _1, \nu _2) \le O(\nu _1, z)\) if \(\nu _2 \le z;\)

(5):

\(O(\nu _1, \vee _{i \in \Lambda } z_i)=\vee _{i \in \Lambda } O(\nu _1, z_i)\)

(6):

\(O(\nu _1, \wedge _{i \in \Lambda } z_i)=\wedge _{i \in \Lambda } O(\nu _1, z_i);\)

where \(1_{\mathcal {L}}\) and \(0_{\mathcal {L}}\) denote the greatest and the smallest elements of \({\mathcal {L}},\) respectively.

Based on the definition of overlap function, the concept of HF overlap function is defined as follows:

Definition 16

A HF overlap function is a mapping \({\mathcal {O}}: {\mathcal {H}} \times {\mathcal {H}} \rightarrow {\mathcal {H}}\) satisfying for any \(h_1, h_2, h_3 \in {\mathcal {H}}\):

\(({\mathcal {O}} 1)\) Commutativity: \({\mathcal {O}}(h_1, h_2)={\mathcal {O}}(h_2, h_1);\)

\(({\mathcal {O}} 2)\) Boundary condition: \({\mathcal {O}}(h_1, h_2)=0_{{\mathcal {H}}}\) iff \(h_1=0_{{\mathcal {H}}}\) or \(h_2=0_{{\mathcal {H}}};\)

\(({\mathcal {O}} 3)\) Boundary condition: \({\mathcal {O}}(h_1, h_2)=1_{{\mathcal {H}}}\) iff \(h_1=h_2=1_{{\mathcal {H}}};\)

\(({\mathcal {O}} 4)\) Monotonicity: \({\mathcal {O}}(h_1, h_2) \le _{{\mathcal {H}}} {\mathcal {O}}(h_1, h_3)\) if \(h_2 \le _{{\mathcal {H}}} h_3;\)

\(({\mathcal {O}} 5)\) Continuity: \({\mathcal {O}}\) is continuous, i.e., \(\forall i \in \Lambda , h_i \in {\mathcal {H}},\) \({\mathcal {O}}(h, \vee _{i \in \Lambda } h_i)=\vee _{i \in \Lambda } {\mathcal {O}}(h, h_i)\) and \({\mathcal {O}}(h, \wedge _{i \in \Lambda } h_i)=\wedge _{i \in \Lambda } {\mathcal {O}}(h, h_i).\)

Some typical examples of HF overlap functions are shown below (a computational sketch follows the list):

  1. (1)

    \( {\mathcal {O}}_p(h_1, h_2)=\{(h_1^{\delta (s)})^p (h_2^{\delta (s)})^p \mid s=1,2, \ldots , k\} \)

  2. (2)

    \({\mathcal {O}}_{nm}(h_1, h_2)=\{ ( h_1^{\delta (s)}\wedge h_2^{\delta (s)} ) ( (h_1^{\delta (s)})^2 \vee (h_2^{\delta (s)})^2 ) \mid s=1,2, \ldots , k\} \)

  3. (3)

    \({\mathcal {O}}_{mp}(h_1, h_2)=\{(h_1^{\delta (s)})^p \wedge (h_2^{\delta (s)})^p \mid s=1,2, \ldots , k\} \)

  4. (4)

    \({\mathcal {O}}_{Mp}(h_1, h_2)=\{1-((1-h_1^{\delta (s)})^p \vee (1-h_2^{\delta (s)})^p) \mid s=1,2, \ldots , k\}.\)
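These four HF overlap functions apply a single overlap function componentwise, so they can be sketched as follows (ours; equal-length, descending-ordered HFEs assumed):

```python
def hf_O_p(h1, h2, p=2.0):   # HF product overlap
    return [(x ** p) * (y ** p) for x, y in zip(h1, h2)]

def hf_O_nm(h1, h2):         # HF minimum-maximum overlap
    return [min(x, y) * max(x * x, y * y) for x, y in zip(h1, h2)]

def hf_O_mp(h1, h2, p=2.0):  # HF minimum-power overlap
    return [min(x ** p, y ** p) for x, y in zip(h1, h2)]

def hf_O_Mp(h1, h2, p=2.0):  # HF maximum-power overlap
    return [1.0 - max((1 - x) ** p, (1 - y) ** p) for x, y in zip(h1, h2)]

print(hf_O_p([0.8, 0.6], [0.7, 0.5]))  # [0.3136, 0.09]
```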

Similarly, we can also define representable HF overlap functions.

Definition 17

A representable HF overlap function is a mapping \({\mathcal {O}}: {\mathcal {H}}^2 \rightarrow {\mathcal {H}}\) of the following form:

$$\begin{aligned} {\mathcal {O}}(h_1, h_2)=\{O_1(h_1^{\delta (1)},h_2^{\delta (1)}), O_2(h_1^{\delta (2)},h_2^{\delta (2)}),\ldots ,O_k(h_1^{\delta (k)},h_2^{\delta (k)})\} \end{aligned}$$

where \(O_1\ge O_2\ge \cdots \ge O_k\) pointwise, which guarantees that the resulting values remain in descending order (cf. the proof of Example 7).

Example 6

Let \(h_1=\{h_1^{\delta (1)}, h_1^{\delta (2)} \}\) and \(h_2=\{h_2^{\delta (1)}, h_2^{\delta (2)} \}\) be two HFEs, where \(0 < p \le 1.\) Then the following are representable HF overlap functions:

  1. (1)

    \({\mathcal {O}}_a(h_1, h_2)=\{(h_1^{\delta (1)})^p \wedge (h_2^{\delta (1)})^p , (h_1^{\delta (2)})^p (h_2^{\delta (2)})^p \};\)

  2. (2)

    \({\mathcal {O}}_b(h_1, h_2)=\{(h_1^{\delta (1)})^p \wedge (h_2^{\delta (1)})^p , ( h_1^{\delta (2)}\wedge h_2^{\delta (2)})( (h_1^{\delta (2)})^2 \vee (h_2^{\delta (2)})^2 )\}.\)

However, not all HF overlap functions are representable; the following is an example of an unrepresentable HF overlap function.

Example 7

Let \(h_1=\{h_1^{\delta (1)}, h_1^{\delta (2)} \}\) and \(h_2=\{h_2^{\delta (1)}, h_2^{\delta (2)} \}\) be two HFEs. The HF overlap function

$$\begin{aligned} {\mathcal {O}}(h_1, h_2)=\{0.5 h_1^{\delta (1)}h_2^{\delta (1)} + 0.5 \max (0,h_1^{\delta (1)} + h_2^{\delta (1)} - 1 ),1- \min (1, 2-h_1^{\delta (2)} - h_2^{\delta (1)} , 2-h_1^{\delta (1)} - h_2^{\delta (2)} ) \} \end{aligned}$$

is unrepresentable.

Proof

First, for all \(h_1, h_2 \in {\mathcal {H}}\) with \(h_1=\{h_1^{\delta (1)}, h_1^{\delta (2)} \}\) and \(h_2=\{h_2^{\delta (1)}, h_2^{\delta (2)} \},\) where \(h_1^{\delta (1)} \ge h_1^{\delta (2)}\) and \(h_2^{\delta (1)} \ge h_2^{\delta (2)},\) we need to prove that \({\mathcal {O}}(h_1, h_2)\) is a HFE, i.e., that \(O_1(h_1^{\delta (1)},h_2^{\delta (1)})\ge O_2(h_1^{\delta (2)},h_2^{\delta (2)}),\) where \(O_1(h_1^{\delta (1)},h_2^{\delta (1)})=0.5 h_1^{\delta (1)}h_2^{\delta (1)} + 0.5\max (0,h_1^{\delta (1)} + h_2^{\delta (1)} - 1 )\) and \(O_2(h_1^{\delta (2)},h_2^{\delta (2)})=1-\min (1, 2-h_1^{\delta (2)} - h_2^{\delta (1)}, 2-h_1^{\delta (1)} - h_2^{\delta (2)}).\) For a clearer presentation, the proof that \({\mathcal {O}}(h_1, h_2)\) is a HFE is given in Table 1.

Table 1 The proof process of Example 7

Then, we prove that it is a HF overlap function \((\forall h_1, h_2, h_3 \in {\mathcal {H}}).\)

\(({\mathcal {O}} 1)\) Commutativity: \({\mathcal {O}}(h_2,h_1)=\{ 0.5 h_2^{\delta (1)}h_1^{\delta (1)} + 0.5\max (0,h_2^{\delta (1)} + h_1^{\delta (1)} - 1 ), 1-\min (1, 2-h_2^{\delta (2)} - h_1^{\delta (1)}, 2-h_2^{\delta (1)} - h_1^{\delta (2)} )\}={\mathcal {O}}(h_1,h_2)\)

\(({\mathcal {O}} 2)\) Boundary condition: \({\mathcal {O}}(h_1,h_2)=0_{\mathcal {H}}=\{0,0\}\Leftrightarrow h_1 = 0_{\mathcal {H}}=\{0,0\}\) or \(h_2=0_{\mathcal {H}} =\{0,0\}.\)

\(({\mathcal {O}} 3)\) Boundary condition: \({\mathcal {O}}(h_1,h_2)=1_{\mathcal {H}}=\{1,1\}\Leftrightarrow h_1 = 1_{\mathcal {H}}=\{1,1\}\) and \(h_2=1_{\mathcal {H}} =\{1,1\}.\)

\(({\mathcal {O}} 4)\) Monotonicity: If \(h_2 \le _{\mathcal {H}} h_3,\) i.e., \(h_2^{\delta (1)} \le h_3^{\delta (1)}\) and \(h_2^{\delta (2)} \le h_3^{\delta (2)},\) then \(0.5 h_1^{\delta (1)}h_2^{\delta (1)} + 0.5\max (0,h_1^{\delta (1)} + h_2^{\delta (1)} - 1 ) \le 0.5 h_1^{\delta (1)}h_3^{\delta (1)} + 0.5\max (0,h_1^{\delta (1)} + h_3^{\delta (1)} - 1 )\) and \(1-\min (1, 2-h_1^{\delta (2)} - h_2^{\delta (1)}, 2-h_1^{\delta (1)} - h_2^{\delta (2)})\le 1-\min (1, 2-h_1^{\delta (2)} - h_3^{\delta (1)}, 2-h_1^{\delta (1)} - h_3^{\delta (2)}).\) Consequently, \({\mathcal {O}}(h_1,h_2) \le _{\mathcal {H}} {\mathcal {O}}(h_1,h_3).\)

\(({\mathcal {O}} 5)\) Continuity: First, we prove left continuity, i.e., \( {\mathcal {O}}(h_1, \vee _{i \in \Lambda } h_i) = \vee _{i \in \Lambda }{\mathcal {O}}(h_1,h_i).\)

Let \(O_1(h_1^{\delta (1)},h_2^{\delta (1)})=0.5 h_1^{\delta (1)}h_2^{\delta (1)} + 0.5\max (0,h_1^{\delta (1)} + h_2^{\delta (1)} - 1 )\) and \(O_2(h_1^{\delta (2)},h_2^{\delta (2)})=1-\min (1, 2-h_1^{\delta (2)} - h_2^{\delta (1)}, 2-h_1^{\delta (1)} - h_2^{\delta (2)}).\) Because \(O_1\) and \(O_2\) are continuous, \(O_1(h_1^{\delta (1)}, \vee _{i \in \Lambda } h_i^{\delta (1)})=\vee _{i \in \Lambda } O_1(h_1^{\delta (1)},h_i^{\delta (1)})\) and \(O_2(h_1^{\delta (2)}, \vee _{i \in \Lambda } h_i^{\delta (2)}) = \vee _{i \in \Lambda }O_2(h_1^{\delta (2)},h_i^{\delta (2)})\) hold. It can be obtained that

$$\begin{aligned} {\mathcal {O}}(h_1, \vee _{i \in \Lambda } h_i)= & \{ O_1(h_1^{\delta (1)}, \vee _{i \in \Lambda } h_i^{\delta (1)}),O_2(h_1^{\delta (2)}, \vee _{i \in \Lambda } h_i^{\delta (2)})\}\\= & \{ \vee _{i \in \Lambda } O_1(h_1^{\delta (1)},h_i^{\delta (1)}), \vee _{i \in \Lambda } O_2(h_1^{\delta (2)}, h_i^{\delta (2)}) \}\\= & \vee _{i \in \Lambda } \{ O_1(h_1^{\delta (1)},h_i^{\delta (1)}) , O_2(h_1^{\delta (2)},h_i^{\delta (2)})\}\\= & \vee _{i \in \Lambda }{\mathcal {O}}(h_1,h_i). \end{aligned}$$

Therefore, \({\mathcal {O}}\) is left continuous. Similarly, it can be obtained that \({\mathcal {O}}(h_1, \wedge _{i \in \Lambda } h_i) = \wedge _{i \in \Lambda }{\mathcal {O}}(h_1,h_i).\) Hence \({\mathcal {O}}\) is continuous, so \({\mathcal {O}}\) is a HF overlap function. It remains to show that it is not representable.

Suppose, on the contrary, that \({\mathcal {O}}\) were representable. Then its second component could be written as \(O_2(h_1^{\delta (2)},h_2^{\delta (2)})=1-\min (1, 2-h_1^{\delta (2)} - h_2^0, 2-h_1^0 - h_2^{\delta (2)}),\) where \(h_1^0,h_2^0 \in [0,1]\) are constants (the fixed first components).

Because \(O_2(1,1) =1-\min (1, 1-h_2^0,1-h_1^0),\) we have

$$\begin{aligned} O_2(1,1) = {\left\{ \begin{array}{ll} h_2^0, & \text {if } h_2^0 \geqslant h_1^0 \\ h_1^0, & \text {if } h_1^0 \geqslant h_2^0. \end{array}\right. } \end{aligned}$$

Obviously, \(O_2(1,1)\ne 1\) in general, so \(O_2\) does not satisfy the definition of an overlap function. Thus, the HF overlap function \({\mathcal {O}}(h_1, h_2)=\{ 0.5 h_1^{\delta (1)}h_2^{\delta (1)} + 0.5\max (0,h_1^{\delta (1)} + h_2^{\delta (1)} - 1 ), 1-\min (1, 2-h_1^{\delta (2)} - h_2^{\delta (1)}, 2-h_1^{\delta (1)} - h_2^{\delta (2)} ) \}\) is unrepresentable.

Example 8

Let \(h_1=\{h_1^{\delta (1)}, h_1^{\delta (2)}, h_1^{\delta (3)}, h_1^{\delta (4)}, \ldots , h_1^{\delta (k)}\}\) and \(h_2=\{h_2^{\delta (1)}, h_2^{\delta (2)}, h_2^{\delta (3)}, h_2^{\delta (4)}, \ldots , h_2^{\delta (k)} \}\) be two HFEs.

The HF overlap function

$$\begin{aligned} {\mathcal {O}}(h_1, h_2)=\{P_1(h_1^{\delta (1)},h_2^{\delta (1)}), P_2(h_1^{\delta (2)},h_2^{\delta (2)}),P_3(h_1^{\delta (3)},h_2^{\delta (3)}), P_4(h_1^{\delta (4)},h_2^{\delta (4)}), \ldots , P_k(h_1^{\delta (k)},h_2^{\delta (k)})\} \end{aligned}$$

is unrepresentable, where

$$\begin{aligned} P_1(h_1^{\delta (1)},h_2^{\delta (1)})&= 0.5 h_1^{\delta (1)}h_2^{\delta (1)} + 0.5\max (0,h_1^{\delta (1)}+h_2^{\delta (1)} - 1 ),\\ P_2(h_1^{\delta (2)},h_2^{\delta (2)})&= 1-\min (1, 2-h_1^{\delta (2)} - h_2^{\delta (1)} , 2-h_1^{\delta (1)} - h_2^{\delta (2)} ), \\ P_3(h_1^{\delta (3)},h_2^{\delta (3)})&= 1-\min (1, 2-h_1^{\delta (2)} - h_2^{\delta (3)} ,2-h_1^{\delta (1)} - h_2^{\delta (3)} ), \\ P_4(h_1^{\delta (4)},h_2^{\delta (4)})&= 1-\min (1, 2-h_1^{\delta (2)} - h_2^{\delta (4)} ,2-h_1^{\delta (1)} - h_2^{\delta (4)} ),\\ &\ \ \vdots \\ P_k(h_1^{\delta (k)},h_2^{\delta (k)})&= 1-\min (1, 2-h_1^{\delta (2)} - h_2^{\delta (k)} , 2-h_1^{\delta (1)} - h_2^{\delta (k)} ). \end{aligned}$$

Proof

The proof is similar to that of Example 7.

Remark 1

HF overlap functions and HF t-norms are not mutually inclusive concepts. For example, the HF overlap function

$$\begin{aligned} {\mathcal {O}}_p(h_1, h_2)=\{(h_1^{\delta (s)})^p (h_2^{\delta (s)})^p \mid s=1,2, \ldots , k\}\quad (p>1) \end{aligned}$$

is not a HF t-norm. Conversely, the HF t-norm \( {\mathcal {T}}_H(h_1, h_2)=\{ T_H(h_1^{\delta (s)},h_2^{\delta (s)}) \mid s=1,2, \ldots , k\}\) is not a HF overlap function, where \(T_H(\nu _1, \nu _2) = \frac{\nu _1 \nu _2}{p + (1 - p) (\nu _1 + \nu _2 - \nu _1 \nu _2)}.\)

4 HF \(\beta \)-Covering \(({\mathcal {I}},{\mathcal {O}})\) Rough Set Models

Four types of HF \(\beta \)-covering \(({\mathcal {I}},{\mathcal {O}})\) rough set (HF\(\beta \)CIORS) models are defined using HF logical operators and HF\(\beta \)-Ns. Additionally, we explore the fundamental properties of the models and investigate the connections between them.

Definition 18

Consider a continuous HF overlap function \({\mathcal {O}}\) and a HF implicator \({\mathcal {I}}\) on \({\mathcal {H}}.\) Suppose \((\Omega , C)\) represents a HF\(\beta \)-CAS. \(\forall \) \(A \in HF(\Omega ),\) the r-th \((r = 1, 2, 3, 4)\) type of HF \(\beta \)-covering \({\mathcal {O}}\)-upper and \({\mathcal {I}}\)-lower approximation operators of A are defined as:

$$\begin{aligned} & \overline{R}_r^{\beta , C}(A)=\{\langle \omega , h_{\overline{R}_r^{\beta , C}(A)}(\omega )\rangle \mid \omega \in \Omega \},\\ & \underline{R}_r^{\beta , C}(A)=\{\langle \omega , h_{\underline{R}_r^{\beta , C}(A)}(\omega )\rangle \mid \omega \in \Omega \}, \end{aligned}$$

where

$$\begin{aligned}&h_{\overline{R}_r^{\beta , C}(A)}(\omega )=\curlyvee _{y \in \Omega } {\mathcal {O}}(h_{N_{r, \omega }^{\beta , C}}(y), h_A(y)), \\&h_{\underline{R}_r^{\beta , C}(A)}(\omega )=\curlywedge _{y \in \Omega } {\mathcal {I}}(h_{N_{r, \omega }^{\beta , C}}(y), h_A(y)). \end{aligned}$$

The pair \((\overline{R}_r^{\beta , C}(A), \underline{R}_r^{\beta , C}(A))\) is defined as the r-th type of \({\textrm{HF}} \beta {\textrm{CIORS}}\) of A (r-HF \(\beta \) CIORS).
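The two operators reduce to a join (resp. meet) over the universe, which the following sketch makes explicit (ours, not the authors' implementation; A maps objects to HFEs, N maps each object to its neighborhood dict, and all HFEs are extended to a common length):

```python
from functools import reduce

def hf_join(a, b):
    return [max(x, y) for x, y in zip(a, b)]

def hf_meet(a, b):
    return [min(x, y) for x, y in zip(a, b)]

def upper_approx(A, N, O):
    """O-upper approximation: h(w) = join over y of O(h_{N_w}(y), h_A(y))."""
    return {w: reduce(hf_join, (O(Nw[y], A[y]) for y in A))
            for w, Nw in N.items()}

def lower_approx(A, N, I):
    """I-lower approximation: h(w) = meet over y of I(h_{N_w}(y), h_A(y))."""
    return {w: reduce(hf_meet, (I(Nw[y], A[y]) for y in A))
            for w, Nw in N.items()}

# componentwise operators used in Example 9: O = O_p with p = 2 and
# the implicator I(h1, h2) = {1 ∧ (1 - h1 + h2)}
O = lambda h1, h2: [(x * y) ** 2 for x, y in zip(h1, h2)]
I = lambda h1, h2: [min(1.0, 1.0 - x + y) for x, y in zip(h1, h2)]
```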

Example 9

Consider a set \(\Omega =\{\omega _1, \omega _2, \omega _3, \omega _4\}.\) A family of HFSs \(C=\{C_1, C_2, C_3, C_4\}\) on \(\Omega \) is shown in Table 2. Taking \(\beta =\{0.5,0.4,0.3\},\) C is a HF \(\beta \)-covering of \(\Omega .\) The calculation results of \(N_{1, \omega _i}^{\beta , C}\ (i=1,2,3,4)\) are listed in Table 3. When \(r=1,\) there are \(N_{1, \omega _1}^{\beta , C}=C_1 \sqcap C_4,\) \(N_{1, \omega _2}^{\beta , C}=C_1 \sqcap C_3,\) \(N_{1, \omega _3}^{\beta , C}=C_2 \sqcap C_3 \sqcap C_4,\) and \(N_{1, \omega _4}^{\beta , C}=C_3 \sqcap C_4.\)

Table 2 The HF \(\beta \)-covering C
Table 3 The computation results of \(N_{1, \omega _i}^{\beta , C}\)

Let \( A=\{\langle \omega _1,\{0.6,0.4,0.2\}\rangle ,\langle \omega _2,\{0.5,0.2,0.1\}\rangle ,\) \( \langle \omega _3,\{0.7,0.5,0.3\}\rangle , \langle \omega _4,\{0.4,0.2\}\rangle \}. \)

Assuming \({\mathcal {O}}={\mathcal {O}}_{p}\) with \(p=2\) and \({\mathcal {I}}(h_1,h_2)=\{ 1\wedge (1-h_1^{\delta (s)}+h_2^{\delta (s)}) \mid s=1,2, \ldots , k\},\) then by Definition 18, we obtain

$$\begin{aligned}&\overline{R}_1^{\beta , C}(A)= \{\langle \omega _1,\{0.1764,0.0256,0.0036\}\rangle , \langle \omega _2,\{0.1764, 0.0225, 0.0036\}\rangle , \langle \omega _3,\{0.2401,0.0625,0.0144\}\rangle , \langle \omega _4,\{0.0784,0.0144,0.0064\}\rangle \},\\&\underline{R}_1^{\beta , C}(A)= \{\langle \omega _1,\{0.5,0.4,0.2\}\rangle , \langle \omega _2,\{0.5,0.4,0.2\}\rangle , \langle \omega _3,\{0.6,0.4,0.3\}\rangle , \langle \omega _4,\{0.6,0.3,0.2\}\rangle \}. \end{aligned}$$

The 2-HF\(\beta \)CIORS, 3-HF\(\beta \)CIORS, and 4-HF\(\beta \)CIORS of A are calculated in a similar way. Next, the basic properties of the HF\(\beta \)CIORS models are analyzed.

Theorem 1

Consider a HF\(\beta \)-CAS \((\Omega , C)\) and an index set \(\Lambda .\) \(\forall \) \(A, B \in H F(\Omega ),\) the following hold:

  1. (1)

    \(\overline{R}_r^{\beta , C}(\varnothing )=\varnothing .\)

  2. (2)

    If \(A \Subset B,\) then \(\overline{R}_r^{\beta , C}(A) \Subset \overline{R}_r^{\beta , C}(B).\)

  3. (3)

    \(\overline{R}_r^{\beta , C}(\sqcap _{i \in \Lambda } A_i) \Subset \sqcap _{i \in \Lambda } \overline{R}_r^{\beta , C}(A_i).\)

  4. (4)

    \(\overline{R}_r^{\beta , C}(\sqcup _{i \in \Lambda } A_i)=\sqcup _{i \in \Lambda } \overline{R}_r^{\beta , C}(A_i).\)

  5. (5)

    If \(\beta _1 \le _{{\mathcal {H}}} \beta _2\ (\beta _1, \beta _2 \in {\mathcal {H}}),\) then \(\overline{R}_r^{\beta _1, C}(A) \Subset \) \(\overline{R}_r^{\beta _2, C}(A).\)

Proof

(1) Since \({\mathcal {O}}(h_1, h_2)=0_{{\mathcal {H}}}\) iff \(h_1=0_{{\mathcal {H}}}\) or \(h_2=0_{{\mathcal {H}}},\) \(\forall \) \(\omega \in \Omega \) we have

$$\begin{aligned} h_{\overline{R}_r^{\beta , C}(\varnothing )}(\omega )= & \curlyvee _{y \in \Omega } {\mathcal {O}}(h_{N_{r, \omega }^{\beta , C}}(y), h_{\varnothing }(y)) \\= & \curlyvee _{y \in \Omega } {\mathcal {O}}(h_{N_{r, \omega }^{\beta , C}}(y), 0_{{\mathcal {H}}}) \\= & 0_{{\mathcal {H}}}. \end{aligned}$$

Hence, \( \overline{R}_r^{\beta , C}(\varnothing )=\varnothing .\)

(2) Since \({\mathcal {O}}\) is monotonic increasing and \(A \Subset B,\) \(\forall \) \(\omega \in \Omega \) we have

$$\begin{aligned} h_{\overline{R}_r^{\beta , C}(A)}(\omega ) & =\curlyvee _{y \in \Omega } {\mathcal {O}}(h_{N_{r, \omega }^{\beta , C}}(y), h_A(y)) \\ & \le _{\mathcal {H}} \curlyvee _{y \in \Omega } {\mathcal {O}}(h_{N_{r, \omega }^{\beta , C}}(y), h_B(y)) \\ & =h_{\overline{R}_r^{\beta , C}(B)}(\omega ). \end{aligned}$$

Hence, \(\overline{R}_r^{\beta , C}(A) \Subset \overline{R}_r^{\beta , C}(B).\)

(3) Since \({\mathcal {O}}(h, \curlywedge _{i \in \Lambda } h_i)=\curlywedge _{i \in \Lambda } {\mathcal {O}}(h, h_i),\) \(\forall \) \(\omega \in \Omega \) we have

$$\begin{aligned} h_{\overline{R}_r^{\beta , C}(\sqcap _{i \in \Lambda } A_i)}(\omega ) & =\curlyvee _{y \in \Omega } {\mathcal {O}}(h_{N_{r, \omega }^{\beta , C}}(y), h_{\sqcap _{i \in \Lambda } A_i}(y)) \\ & =\curlyvee _{y \in \Omega } {\mathcal {O}}(h_{N_{r, \omega }^{\beta , C}}(y), \curlywedge _{i \in \Lambda } h_{A_i}(y)) \\ & =\curlyvee _{y \in \Omega } \curlywedge _{i \in \Lambda } {\mathcal {O}}(h_{N_{r, \omega }^{\beta , C}}(y), h_{A_i}(y)) \\ & \le _{{\mathcal {H}}}\curlywedge _{i \in \Lambda } \curlyvee _{y\in \Omega } {\mathcal {O}}(h_{N_{r, \omega }^{\beta , C}}(y), h_{A_i}(y))\\ & = \curlywedge _{i \in \Lambda }h_{\overline{R}_r^{\beta , C}(A_i)}(\omega ). \end{aligned}$$

Hence, \(\overline{R}_r^{\beta , C}(\sqcap _{i \in \Lambda } A_i) \Subset \sqcap _{i \in \Lambda } \overline{R}_r^{\beta , C}(A_i).\)

(4) Since \({\mathcal {O}}(h, \curlyvee _{i \in \Lambda } h_i)=\curlyvee _{i \in \Lambda } {\mathcal {O}}(h, h_i),\) then \(\forall \) \(\omega \in \Omega ,\)

$$\begin{aligned} h_{\overline{R}_r^{\beta , C}(\sqcup _{i \in \Lambda } A_i)}(\omega )= & \curlyvee _{y \in \Omega } {\mathcal {O}}(h_{N_{r, \omega }^{\beta , C}}(y), h_{\sqcup _{i \in \Lambda } A_i}(y)) \\= & \curlyvee _{y \in \Omega } {\mathcal {O}}(h_{N_{r, \omega }^{\beta , C}}(y), \curlyvee _{i \in \Lambda } h_{A_i}(y)) \\= & \curlyvee _{y \in \Omega } \curlyvee _{i \in \Lambda } {\mathcal {O}}(h_{N_{r, \omega }^{\beta , C}}(y), h_{A_i}(y)) \\= & \curlyvee _{i \in \Lambda } \curlyvee _{y \in \Omega } {\mathcal {O}}(h_{N_{r, \omega }^{\beta , C}}(y), h_{A_i}(y)) \\= & \curlyvee _{i \in \Lambda } h_{\overline{R}_r^{\beta , C}(A_i)}(\omega ) . \end{aligned}$$

Hence, \(\overline{R}_r^{\beta , C}(\sqcup _{i \in \Lambda } A_i)=\) \(\sqcup _{i \in \Lambda } \overline{R}_r^{\beta , C}(A_i).\)

(5) If \(\beta _1\le _{{\mathcal {H}}}\beta _2,\) then \(N_{r,\omega }^{\beta _1, C}\Subset N_{r,\omega }^{\beta _2, C}\) for all \(\omega \in \Omega .\) Since \({\mathcal {O}}\) is monotonic increasing,

$$\begin{aligned} h_{\overline{R}_r^{\beta _1, C}(A)}(\omega ) & = \curlyvee _{y \in \Omega } {\mathcal {O}}(h_{N_{r, \omega }^{\beta _1, C}}(y), h_A(y)) \\ & \le _{\mathcal {H}} \curlyvee _{y \in \Omega } {\mathcal {O}}(h_{N_{r, \omega }^{\beta _2, C}}(y), h_A(y)) \\ & =h_{\overline{R}_r^{\beta _2, C}(A)}(\omega ). \end{aligned}$$

Hence, \(\overline{R}_r^{\beta _1, C}(A)\Subset \overline{R}_r^{\beta _2, C}(A).\)

Theorem 2

Consider a HF\(\beta \)-CAS \((\Omega , C)\) and an index set \(\Lambda .\) \(\forall \) \(A, B \in H F(\Omega ),\) the following hold:

  1. (1)

    \(\underline{R}_r^{\beta , C}(\Omega )=\Omega ,\) if \({\mathcal {I}}\) is left monotonic decreasing.

  2. (2)

    If \(A \Subset B\) and \({\mathcal {I}}\) is right monotonic increasing, \(\underline{R}_r^{\beta , C}(A) \Subset \underline{R}_r^{\beta , C}(B).\)

  3. (3)

    \(\underline{R}_r^{\beta , C}(\sqcap _{i \in \Lambda } A_i)=\sqcap _{i \in \Lambda } \underline{R}_r^{\beta , C}(A_i),\) if \({\mathcal {I}}(h, \curlywedge _{i \in \Lambda } h_i)=\curlywedge _{i \in \Lambda } {\mathcal {I}}(h, h_i).\)

  4. (4)

    \(\underline{R}_r^{\beta , C}(\sqcup _{i \in \Lambda } A_i) \Supset \sqcup _{i \in \Lambda } \underline{R}_r^{\beta , C}(A_i),\) if \({\mathcal {I}}\) is right monotonic increasing.

  5. (5)

    Assume that \({\mathcal {I}}\) is left monotonic decreasing. If \(\beta _1 \le _{{\mathcal {H}}} \beta _2(\beta _1, \beta _2 \in {\mathcal {H}} ),\) then \(\underline{R}_r^{\beta _1, C}(A) \Supset \) \(\underline{R}_r^{\beta _2, C}(A).\)

Proof

The proofs are similar to those of Theorem 1.

Theorem 3

Consider a HF\(\beta \)-CAS \((\Omega , C).\) \(\forall \) \(A \in H F(\Omega ),\) if \(N_{r, \omega }^{\beta , C}\) is reflexive (i.e., \(h_{N_{r, \omega }^{\beta , C}}(\omega )= 1_{{\mathcal {H}}})\) and \({\mathcal {O}}(1_{\mathcal {H}},h_A(\omega ))\ge _{\mathcal {H}} h_A(\omega )\) for any \(\omega \in \Omega ,\) then \(\underline{R}_r^{\beta , C}(A) \Subset A \Subset \overline{R}_r^{\beta , C}(A),\) provided that \({\mathcal {I}}(1_{{\mathcal {H}}}, h)=h\) for all \(h \in {\mathcal {H}}.\)

Proof

\(\forall \) \(\omega \in \Omega ,\) we have

$$\begin{aligned} h_{\underline{R}_r^{\beta , C}(A)}(\omega ) & =\curlywedge _{y \in \Omega } {\mathcal {I}}(h_{N_{r, \omega }^{\beta , C}}(y), h_A(y))\\ & \le _{\mathcal {H}} {\mathcal {I}}(h_{N_{r, \omega }^{\beta , C}}(\omega ), h_A(\omega ))\\ & ={\mathcal {I}}(1_{{\mathcal {H}}}, h_A(\omega )) \quad (\text {by } {\mathcal {I}}(1_{{\mathcal {H}}}, h)=h)\\ & =h_A(\omega ),\\ h_{\overline{R}_r^{\beta , C}(A)}(\omega ) & = \curlyvee _{y \in \Omega } {\mathcal {O}}(h_{N_{r, \omega }^{\beta , C}}(y), h_A(y))\\ & \ge _{\mathcal {H}} {\mathcal {O}}(h_{N_{r, \omega }^{\beta , C}}(\omega ), h_A(\omega ))\\ & ={\mathcal {O}}(1_{{\mathcal {H}}}, h_A(\omega ))\\ & \ge _{\mathcal {H}} h_A(\omega ). \end{aligned}$$

Hence, it can be obtained that \(h_{\underline{R}_r^{\beta , C}(A)}(\omega ) \le _{\mathcal {H}}\) \(h_A(\omega ) \le _{\mathcal {H}} h_{\overline{R}_r^{\beta , C}(A)}(\omega ).\) It means that \(\underline{R}_r^{\beta , C}(A) \Subset A \Subset \overline{R}_r^{\beta , C}(A).\)

Theorem 4

Consider a non-empty and finite set \(\Omega \) and \(A \in H F(\Omega ).\) Assume that C and \(C^{\prime }\) are two HF \(\beta \)-coverings of \(\Omega ,\) where \(C=\{C_1, C_2, \ldots , C_m\},\) \(C^{\prime }=\{C_1^{\prime }, C_2^{\prime }, \ldots , C_n^{\prime }\}\) and \(\beta \in {\mathcal {H}}.\) If \({\mathcal {I}}\) is left monotonic decreasing and \(N_{r, \omega }^{\beta , C} \Subset N_{r, \omega }^{\beta , C^{\prime }}\) for any \(\omega \in \Omega ,\) then \(\underline{R}_r^{\beta , C}(A) \Supset \underline{R}_r^{\beta , C^{\prime }}(A)\) and \(\overline{R}_r^{\beta , C}(A) \Subset \overline{R}_r^{\beta , C^{\prime }}(A).\)

Proof

Since \(N_{r, \omega }^{\beta , C} \Subset N_{r, \omega }^{\beta , C^{\prime }}\) for all \(\omega \in \Omega ,\) we have \(h_{N_{r, \omega }^{\beta , C}}(y) \le _{\mathcal {H}} h_{N_{r, \omega }^{\beta , C^{\prime }}}(y)\) for all \(y \in \Omega .\) Then

$$\begin{aligned} h_{\underline{R}_r^{\beta , C}(A)}(\omega ) & =\curlywedge _{y \in \Omega } {\mathcal {I}}(h_{N_{r, \omega }^{\beta , C}}(y), h_A(y))\\ & \ge _{\mathcal {H}} \curlywedge _{y \in \Omega } {\mathcal {I}}(h_{N_{r, \omega }^{\beta , C^{\prime }}}(y), h_A(y))\\ & =h_{\underline{R}_r^{\beta , C^{\prime }}(A)}(\omega ) . \end{aligned}$$

Similarly, for the upper approximation,

$$\begin{aligned} h_{\overline{R}_r^{\beta , C}(A)}(\omega ) & =\curlyvee _{y \in \Omega } {\mathcal {O}}(h_{N_{r, \omega }^{\beta , C}}(y), h_A(y))\\ & \le _{\mathcal {H}} \curlyvee _{y \in \Omega } {\mathcal {O}}(h_{N_{r, \omega }^{\beta , C^{\prime }}}(y), h_A(y))\\ & =h_{\overline{R}_r^{\beta , C^{\prime }}(A)}(\omega ) . \end{aligned}$$

Hence, \(\underline{R}_r^{\beta , C}(A) \Supset \underline{R}_r^{\beta , C^{\prime }}(A)\) and \(\overline{R}_r^{\beta , C}(A) \Subset \overline{R}_r^{\beta , C^{\prime }}(A).\)

The relationships among the four types of HF\(\beta \)CIORS models are explored as follows.

Theorem 5

Consider a HF\(\beta \)-CAS \((\Omega , C).\) \(\forall \) \(A \in H F(\Omega ),\) the following hold:

  1. (1)

    If \({\mathcal {I}}\) is left monotonic decreasing,  \(\underline{R}_3^{\beta , C}(A) \Subset \underline{R}_1^{\beta , C}(A) \Subset \underline{R}_4^{\beta , C}(A).\)

  2. (2)

    If \({\mathcal {I}}\) is left monotonic decreasing,  \(\underline{R}_3^{\beta , C}(A) \Subset \underline{R}_2^{\beta , C}(A) \Subset \underline{R}_4^{\beta , C}(A).\)

  3. (3)

    \(\overline{R}_4^{\beta , C}(A) \Subset \overline{R}_1^{\beta , C}(A) \Subset \overline{R}_3^{\beta , C}(A).\)

  4. (4)

    \(\overline{R}_4^{\beta , C}(A) \Subset \overline{R}_2^{\beta , C}(A) \Subset \overline{R}_3^{\beta , C}(A).\)

Proof

  1. (1)

    Based on Definition 10, we have \(h_{N_{4, \omega }^{\beta , C}}(y) \le _{\mathcal {H}} h_{N_{1, \omega }^{\beta , C}}(y) \le _{\mathcal {H}} h_{N_{3, \omega }^{\beta , C}}(y),\) \(\forall \omega , y \in \Omega .\) Because \({\mathcal {I}}\) is left monotonic decreasing, it can be obtained that \({\mathcal {I}}(h_{N_{3, \omega }^{\beta , C}}(y), h_A(y)) \le _{\mathcal {H}}\) \({\mathcal {I}}(h_{N_{1, \omega }^{\beta , C}}(y), h_A(y)) \le _{\mathcal {H}} {\mathcal {I}}(h_{N_{4, \omega }^{\beta , C}}(y), h_A(y)).\) Then, \(\forall \omega \in \Omega ,\) there are

    $$\begin{aligned} h_{\underline{R}_3^{\beta , C}(A)}(\omega ) & =\curlywedge _{y \in \Omega } {\mathcal {I}}(h_{N_{3, \omega }^{\beta , C}}(y), h_A(y)) \\ & \le _{\mathcal {H}} \curlywedge _{y \in \Omega } {\mathcal {I}}(h_{N_{1, \omega }^{\beta , C}}(y), h_A(y)) \\ & =h_{\underline{R}_1^{\beta , C}(A)}(\omega ), \end{aligned}$$
    $$\begin{aligned} h_{\underline{R}_1^{\beta , C}(A)}(\omega ) & =\curlywedge _{y \in \Omega } {\mathcal {I}}(h_{N_{1, \omega }^{\beta , C}}(y), h_A(y)) \\ & \le _{\mathcal {H}} \curlywedge _{y \in \Omega } {\mathcal {I}}(h_{N_{4, \omega }^{\beta , C}}(y), h_A(y)) \\ & =h_{\underline{R}_4^{\beta , C}(A)}(\omega ) . \end{aligned}$$

    Hence, it can be obtained that \(h_{\underline{R}_3^{\beta , C}(A)}(\omega ) \le _{\mathcal {H}}\) \(h_{\underline{R}_1^{\beta , C}(A)}(\omega ) \le _{\mathcal {H}} h_{\underline{R}_4^{\beta , C}(A)}(\omega ).\) It means that \(\underline{R}_3^{\beta , C}(A) \Subset \underline{R}_1^{\beta , C}(A) \Subset \underline{R}_4^{\beta , C}(A).\)

  2. (2)

    The proof is similar to the proof of (1).

  3. (3)

    \(\forall \omega \in \Omega ,\) there are

    $$\begin{aligned} h_{\overline{R}_4^{\beta , C}(A)}(\omega ) & =\curlyvee _{y \in \Omega } {\mathcal {O}}(h_{N_{4, \omega }^{\beta , C}}(y), h_A(y)) \\ & \le _{\mathcal {H}} \curlyvee _{y \in \Omega } {\mathcal {O}}(h_{N_{1, \omega }^{\beta , C}}(y), h_A(y))\\ & =h_{\overline{R}_1^{\beta , C}(A)}(\omega ), \end{aligned}$$
    $$\begin{aligned} h_{\overline{R}_1^{\beta , C}(A)}(\omega ) & =\curlyvee _{y \in \Omega } {\mathcal {O}}(h_{N_{1, \omega }^{\beta , C}}(y), h_A(y)) \\ & \le _{\mathcal {H}} \curlyvee _{y \in \Omega } {\mathcal {O}}(h_{N_{3, \omega }^{\beta , C}}(y), h_A(y))\\ & =h_{\overline{R}_3^{\beta , C}(A)}(\omega ). \end{aligned}$$

    Hence, it can be obtained that \(h_{\overline{R}_4^{\beta , C}(A)}(\omega ) \le _{\mathcal {H}}\) \(h_{\overline{R}_1^{\beta , C}(A)}(\omega ) \le _{\mathcal {H}} h_{\overline{R}_3^{\beta , C}(A)}(\omega ).\) It means that \(\overline{R}_4^{\beta , C}(A) \Subset \overline{R}_1^{\beta , C}(A) \Subset \overline{R}_3^{\beta , C}(A).\)

  4. (4)

    The proof is similar to the proof of (3).

5 The Applications of HF\(\beta \)CIORS Models in MADM

In the introduction, it is highlighted that HFMADM has become increasingly prominent in the realm of decision-making. Scientifically grounded decision-making methods are crucial for decision makers to mitigate the risks associated with erroneous choices. Consequently, this section is dedicated to introducing a novel approach tailored for addressing HFMADM problems.

In the context of HFMADM, define a system with the following elements: a set \(\Omega \) of n alternatives, and a set C of m criteria with associated weights given by the vector \(d = (d_1,d_2,\ldots ,d_m)^{{\textbf {T}}},\) where the weights sum to 1. The evaluation information is captured in the set \(F=\{C_j(\omega _i)|i=1,2,\ldots ,n; j=1,2,\ldots ,m \},\) in which \(C_j(\omega _i)\) indicates the HFE-based evaluation of alternative \(\omega _i\) with respect to criterion \(C_j.\) This forms the HF information system \((\Omega , C, F, d).\)

We introduce a novel approach to address HFMADM challenges by integrating the 1-HF\(\beta \)CIORS model with the TOPSIS method. The specifics of this methodology are elaborated below.

Unlike traditional decision-making strategies, rough-set-based methods focus primarily on constructing decision objects. There are two prevalent techniques for this: the pre-determined approach and the ideal solution method. In this research, we employ the ideal solution method to derive both the optimal and the least desirable decision objects.

First of all, based on TOPSIS technique, construct a HF positive ideal solution (HFPIS) \(A^{+}\) and a HF negative ideal solution (HFNIS) \(A^{-}.\)

$$\begin{aligned} A^{+}= & \left\{ \langle C_j,\left\{ \max _{1 \le i \le n} h_{C_j(\omega _i)}^{\delta (s)} \mid s=1,2, \ldots , k\right\} \rangle |j=1,2, \ldots , m\right\} \\ A^{-}= & \left\{ \langle C_j,\left\{ \min _{1 \le i \le n} h_{C_j(\omega _i)}^{\delta (s)} \mid s=1,2, \ldots , k\right\} \rangle |j=1,2, \ldots , m\right\} \end{aligned}$$

where \(k=\max _{1 \le i \le n} l_{h_{C_j(\omega _i)}},\) i.e., the maximum length of the HFEs under each attribute.

All attributes are assumed to be benefit attributes.

Then, to evaluate how closely an option \(\omega _i\) aligns with both the ideal \(A^{+}\) and the least desirable \(A^{-}\) solutions, a novel similarity measure for HFSs is defined as:

Definition 19

Consider a set \(U=\{y_1, y_2, \ldots , y_m\}\) paired with weights \(w=(w_1, w_2, \ldots , w_m)^{{\textbf {T}}},\) where the weights sum to 1. Consider any two HFSs A and B on U, expressed as \(A,B\in HF(U).\) The similarity between A and B is defined as

$$\begin{aligned} S(A, B)=\bigoplus _{j=1}^m w_j[(h_A(y_j) \curlywedge h_B(y_j)) \oslash (h_A(y_j) \curlyvee h_B(y_j))] . \end{aligned}$$

In decision analysis, particularly in HFMADM, the similarity measure is an important tool for quantifying the degree of similarity between two objects. However, due to the inherent ambiguity and uncertainty in evaluation data, representing similarity by a precise number in the range [0, 1] is challenging in this context. As highlighted in Definition 19, the HF similarity measure S(A, B) is essentially a HFE and thus carries more information than a single numerical value in [0, 1].

Furthermore, this similarity measure satisfies [23]:

  1. (1)

    \(S(U, \varnothing )=0_{{\mathcal {H}}}\) indicates zero similarity when compared to an empty set.

  2. (2)

    \(S(A, A)=1_{{\mathcal {H}}}\) reflects maximum similarity when an element is compared with itself.

  3. (3)

    \(S(A, B)=S(B, A)\) ensures the measure is symmetric.

  4. (4)

    If \(A \Subset B \Subset C\) for any \(A, B, C \in HF(U),\) then \(S(A, C) \le _{{\mathcal {H}}} S(A, B) \curlywedge S(B, C),\) suggesting a relational hierarchy in similarity measures.

In the subsequent step, we compute \(S_i^{+}=S(\omega _i, A^{+})\) and \(S_i^{-}=S(\omega _i, A^{-})\) for each alternative \(\omega _i.\) \(S_i^{+},\) \(S_i^{-}\) represent the similarity of \(\omega _i\) to the HFPIS \(A^{+}\) and the HFNIS \(A^{-},\) respectively. Based on these similarity values, define the optimal decision object \({\textrm{H}}^{+}\) and the worst decision object \({\textrm{H}}^{-}\):

$$\begin{aligned} & {\textrm{H}}^{+}=\{\langle \omega _1, S_1^{+}\rangle ,\langle \omega _2, S_2^{+}\rangle , \ldots ,\langle \omega _n, S_n^{+}\rangle \},\\ & {\textrm{H}}^{-}=\{\langle \omega _1, S_1^{-}\rangle ,\langle \omega _2, S_2^{-}\rangle , \ldots ,\langle \omega _n, S_n^{-}\rangle \} . \end{aligned}$$

The next step in the HFMADM process involves determining the upper and lower approximations (ULA) of both \({\textrm{H}}^{+}\) and \({\textrm{H}}^{-},\) computed based on the 1-HF\(\beta \)CIORS model.

We now illustrate the meaning of the ULA of \({\textrm{H}}^{+}.\) The lower approximation of \({\textrm{H}}^{+}\) denotes the objects that are certainly contained in \({\textrm{H}}^{+},\) embodying a pessimistic decision criterion; the upper approximation of \({\textrm{H}}^{+}\) denotes the objects that are definitely or possibly part of \({\textrm{H}}^{+},\) embodying an optimistic decision criterion. However, DMs often exhibit a mix of optimism and pessimism in real life. To address this, a risk preference coefficient is introduced to merge the ULA of \({\textrm{H}}^{+}\) and \({\textrm{H}}^{-}\):

$$\begin{aligned} \delta ^{+}=\left\{ \begin{array}{ll} \underline{R}_1^{\beta , C}({\textrm{H}}^{+}), & \quad \alpha =0, \\ \alpha \overline{R}_1^{\beta , C}({\textrm{H}}^{+}) \boxplus (1-\alpha ) \underline{R}_1^{\beta , C}({\textrm{H}}^{+}), & \quad 0<\alpha <1, \\ \overline{R}_1^{\beta , C}({\textrm{H}}^{+}), & \quad \alpha =1, \end{array}\right. \end{aligned}$$
$$\begin{aligned} \delta ^{-}=\left\{ \begin{array}{ll} \underline{R}_1^{\beta , C}({\textrm{H}}^{-}), & \quad \alpha =0, \\ \alpha \overline{R}_1^{\beta , C}({\textrm{H}}^{-}) \boxplus (1-\alpha ) \underline{R}_1^{\beta , C}({\textrm{H}}^{-}), & \quad 0<\alpha <1, \\ \overline{R}_1^{\beta , C}({\textrm{H}}^{-}), & \quad \alpha =1, \end{array}\right. \end{aligned}$$

where \(\alpha \in [0,1]\) is the risk preference coefficient. Values of \(\alpha \) equal to 0, 0.5, and 1 correspond to risk-averse, risk-neutral, and risk-seeking DMs, respectively.

Proceeding further in the HFMADM methodology, we define the relative closeness coefficient for each alternative \(\omega _i\) in relation to \(\delta ^{+}\) and \(\delta ^{-}.\) This coefficient is expressed mathematically as:

$$\begin{aligned} r(\omega _i)=\{h_{\delta ^{-}}(\omega _i) \oslash (h_{\delta ^{+}}(\omega _i) \oplus h_{\delta ^{-}}(\omega _i)), \quad i=1,2, \ldots , n\} \end{aligned}$$

where \(r(\omega _i)\) is an HFE and therefore cannot be used directly to order the alternatives. To address this, we calculate the score value \(s(r(\omega _i))\) using the score function outlined in Definition 2.

Ultimately, the ranking of all alternatives is determined by the computed score values \(s(r(\omega _i)).\) Note that a lower score value \(s(r(\omega _i))\) indicates a stronger affiliation of the alternative \(\omega _i\) with the optimal decision object \(H^{+};\) that is, lower scores correspond to more favorable alternatives. This final step concludes the decision-making process, providing a clear and quantifiable ranking of the available choices.
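A minimal sketch of these last two steps follows, assuming (i) the \(\oslash \) and \(\oplus \) in \(r(\omega _i)\) act position by position on equal-length HFEs and (ii) the score function of Definition 2 is the arithmetic mean of an HFE's values. Both are illustrative assumptions, and the \(\delta ^{\pm }\) values below are hypothetical.

```python
# A sketch of the closeness-and-ranking step under the stated assumptions.

def closeness(d_plus, d_minus):
    """r = d_minus / (d_plus + d_minus), positionwise on equal-length HFEs."""
    return [m / (p + m) for p, m in zip(d_plus, d_minus)]

def score(hfe):
    """Assumed score function: the arithmetic mean of the HFE's values."""
    return sum(hfe) / len(hfe)

delta_plus  = {"w1": [0.80, 0.80, 0.77], "w2": [0.81, 0.80, 0.63]}  # hypothetical
delta_minus = {"w1": [0.80, 0.80, 0.71], "w2": [0.81, 0.80, 0.67]}  # hypothetical

scores = {w: score(closeness(delta_plus[w], delta_minus[w])) for w in delta_plus}
ranking = sorted(scores, key=scores.get)   # lower score = closer to H+
print(scores, ranking)
```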

For practical implementation, the method is broken down into specific steps:

[Algorithm: the step-by-step procedure of the proposed HFMADM method]

In the context of the HFMADM problem involving n alternatives and m attributes, the computational complexity of the proposed method can be analyzed step by step. In Step 1, the time complexity is \(O(2mn)\). Similarly, Step 2 also exhibits a time complexity of \(O(2mn),\) as it involves comparable computations. Step 3, requiring minimal operations, has a constant time complexity of \(O(1)\). For Step 4, the complexity rises to \(O(n^2 + mn)\). Steps 5 through 7, which finalize the results, operate with a linear time complexity of \(O(n)\). Taken together, the overall time complexity of the proposed decision-making method is dominated by \(O(n^2 + mn)\).

This structured methodology enables a systematic and thorough evaluation of alternatives in HF decision-making environments.

Additionally, the overall process of the proposed HFMADM method is shown in Fig. 2.

Fig. 2 The structure of the proposed decision-making method

6 Illustrative Examples

In this section, we apply the newly developed decision-making approach to a practical scenario: an enterprise project investment problem adapted from [9]. This application demonstrates the real-world utility of the method in a business context.

The procedure involves several key steps:

Problem Application: Implementing the method to address the specific challenges of the enterprise project investment problem.

Comparative Analysis: To verify the efficacy and benefits of our approach, we undertake a comparative analysis against existing decision-making methods. This comparison will highlight the distinct advantages our method offers.

Sensitivity Analysis: Conducting a sensitivity analysis is crucial to assess the robustness and reliability of the method. This analysis examines how changes in input parameters (like the value of \(\alpha \) or weights of attributes) impact the outcomes.

Through these steps, the section aims to underscore not just the theoretical soundness of the method, but also its practical applicability in handling complex decision-making scenarios in business environments.

6.1 An Enterprise Project Investment Problem

Enterprise project investment decisions significantly impact an enterprise’s operation, so DMs must make well-informed choices to enhance the economic benefits of the enterprise. Imagine an enterprise considering various investment projects: a business project \((\omega _1),\) a technology project \((\omega _2),\) a medical project \((\omega _3),\) and an education project \((\omega _4).\) To evaluate these projects, four attributes are employed: policy support \((C_1),\) market benefit \((C_2),\) urban constructiveness \((C_3),\) and public expectations \((C_4).\) All these attributes are benefit-type, so higher values indicate better prospects. To address potential inconsistencies in expert opinions, HFEs are used to represent the evaluation of each alternative under the different attributes. The assessment results are compiled in Table 4, and the attribute weights, determined by the experts, form the vector \(d=(0.2,0.4,0.2,0.2)^{{\textbf {T}}}.\)

Table 4 The HF \(\beta \)-covering C
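As a concrete illustration of the input format, the sketch below encodes an HF evaluation table and the weight vector in Python. The HFE entries are hypothetical placeholders, since Table 4's values are not reproduced here; only the weight vector is taken from the text.

```python
# A sketch of the input data layout. Each cell of the table is an HFE (a list
# of possible membership degrees); rows are alternatives w1..w4 and columns
# are attributes C1..C4. The HFE values are hypothetical placeholders.

table = {
    "w1": [[0.6, 0.4, 0.3], [0.7, 0.5, 0.3], [0.5, 0.4, 0.2], [0.8, 0.6, 0.5]],
    "w2": [[0.7, 0.6, 0.4], [0.4, 0.3, 0.2], [0.6, 0.5, 0.4], [0.5, 0.4, 0.3]],
    "w3": [[0.5, 0.4, 0.3], [0.8, 0.7, 0.6], [0.7, 0.5, 0.4], [0.6, 0.5, 0.3]],
    "w4": [[0.6, 0.5, 0.4], [0.5, 0.4, 0.3], [0.4, 0.3, 0.2], [0.7, 0.6, 0.4]],
}
weights = [0.2, 0.4, 0.2, 0.2]   # the attribute weight vector d from the text
```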

The subsequent sections detail each step of this decision-making process.

Step 1: In the decision-making process for the enterprise project investment, Table 5 presents the construction of both the HFPIS \(A^{+}\) and the HFNIS \(A^{-}.\)

Table 5 The HFPIS \(A^{+}\) and the HFNIS \(A^{-}\)

Step 2: Compute the similarity between each project alternative and both \(A^{+}\) and \(A^{-}.\) The calculation results are presented in Table 6.

Table 6 The similarity \(S(\omega _i, A^{+})\) and \(S(\omega _i, A^{-})\)

Step 3: Calculate \({\textrm{H}}^{+}\) and \({\textrm{H}}^{-}.\)

$$\begin{aligned} {\textrm{H}}^{+}&= \{\langle \omega _1,\{0.6453, 0.4512, 0.3812\}\rangle , \langle \omega _2,\{0.5563, 0.5008, 0.4656\}\rangle , \\&\qquad \langle \omega _3,\{0.6568, 0.6544, 0.6480\}\rangle , \langle \omega _4,\{0.5885, 0.4342, 0.3719\}\rangle \}.\\ {\textrm{H}}^{-}&= \{\langle \omega _1,\{0.6192, 0.5095, 0.4929\}\rangle , \langle \omega _2,\{0.6080,0.5761,0.4528\}\rangle , \\&\qquad \langle \omega _3,\{0.4651, 0.3196, 0.3116\}\rangle , \langle \omega _4,\{0.6339, 0.5968, 0.5504\}\rangle \}. \end{aligned}$$

Taking \(\beta =\{0.5,0.4,0.3\},\) the calculation results of \(N_{1, \omega _i}^{\beta , C}\) are presented in Table 7.

Table 7 The representation of \(N_{1, \omega _i}^{\beta , C}(i=1,2, \ldots , 4)\)

Subsequently, various cases arise based on the different HF logic operators in the 1-HF \(\beta \)CIORS model.

Case 1: Let \({\mathcal {I}}(h_1,h_2)=\{ 1\wedge (1-h_1^{\delta (s)}+h_2^{\delta (s)}) \mid s=1,2, \ldots , k\},\ {\mathcal {O}}={\mathcal {O}}_{nm},\) and \(\alpha =0.5.\)
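This implication can be sketched directly from its formula: it applies the Łukasiewicz-style form \(1 \wedge (1-a+b)\) to the \(s\)-th values of two equal-length HFEs, here taken (as is customary for the \(\delta (s)\) indexing) in descending order.

```python
# A sketch of the Case 1 HF implication: 1 /\ (1 - h1^(s) + h2^(s)) applied
# position by position, with HFE values taken in descending order.

def hf_implication(h1, h2):
    a = sorted(h1, reverse=True)
    b = sorted(h2, reverse=True)
    return [min(1.0, 1.0 - x + y) for x, y in zip(a, b)]

print(hf_implication([0.6, 0.4, 0.3], [0.5, 0.5, 0.2]))  # [0.9, 1.0, 0.9]
```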

Step 4: According to the calculation method mentioned above, calculate the ULA of \({\textrm{H}}^{+}\) and \({\textrm{H}}^{-}.\)

$$\begin{aligned} \overline{R}_1^{\beta , C}({\textrm{H}}^{+})&= \{\langle \omega _1,\{0.3162, 0.1285, 0.0840\}\rangle , \langle \omega _2,\{0.3560, 0.1285, 0.0867\}\rangle , \\&\qquad \langle \omega _3,\{0.3218, 0.2141, 0.1680\}\rangle , \langle \omega _4,\{0.3218, 0.2141, 0.1680\}\rangle \}. \\ \underline{R}_1^{\beta , C}({\textrm{H}}^{+})&= \{\langle \omega _1,\{1.0000, 1.0000, 0.9453\}\rangle , \langle \omega _2,\{1.0000, 1.0000, 0.7563\}\rangle , \\&\qquad \langle \omega _3,\{1.0000, 1.0000, 0.8885\}\rangle , \langle \omega _4,\{1.0000, 0.8885, 0.8342\}\rangle \}. \\ \overline{R}_1^{\beta , C}({\textrm{H}}^{-})&= \{\langle \omega _1,\{0.3034, 0.1038, 0.0729\}\rangle , \langle \omega _2,\{0.3891, 0.1659, 0.0820\}\rangle , \\&\qquad \langle \omega _3,\{0.3106, 0.0799, 0.0499\}\rangle , \langle \omega _4,\{0.3106, 0.2148, 0.0909\}\rangle \}. \\ \underline{R}_1^{\beta , C}({\textrm{H}}^{-})&= \{\langle \omega _1,\{1.0000, 1.0000, 0.8651\}\rangle , \langle \omega _2,\{1.0000, 1.0000, 0.8080\}\rangle , \\&\qquad \langle \omega _3,\{0.9116, 0.8196, 0.7651\}\rangle , \langle \omega _4,\{0.9116, 0.8196, 0.7651\}\rangle \}. \end{aligned}$$

Step 5: Based on the calculation method mentioned above, calculate \(\delta ^{+}\) and \(\delta ^{-}.\) The resulting merged decision objects \(\delta ^{+}\) and \(\delta ^{-}\) for each alternative are presented in Table 8.

$$\begin{aligned} \delta ^{+}= & \{\langle \omega _1,\{0.8051 , 0.8034 , 0.7717 \}\rangle , \langle \omega _2,\{ 0.8051 , 0.8035 , 0.6332 \}\rangle ,\\ & \langle \omega _3,\{0.8086 , 0.8067 , 0.7294 \}\rangle , \langle \omega _4,\{ 0.8067, 0.7294 , 0.6816 \}\rangle \}. \end{aligned}$$
$$\begin{aligned} \delta ^{-}= & \{\langle \omega _1,\{ 0.8042 , 0.8029 , 0.7108 \}\rangle , \langle \omega _2,\{ 0.8066 , 0.8033 , 0.6739 \}\rangle ,\\ & \langle \omega _3,\{0.7320 , 0.6612 , 0.6362\}\rangle , \langle \omega _4,\{0.7342 , 0.6705 , 0.6362\}\rangle \}. \end{aligned}$$
Table 8 The merged decision objects \(\delta ^{+}\) and \(\delta ^{-}\) for each alternative (Case 1)

Step 6: Then, for each project alternative \(\omega _i\) \((i=1,2, \ldots , 4),\) compute the relative closeness coefficient \(r(\omega _i).\) The results of these calculations are presented in Table 9.

Step 7: In accordance with Definition 2, the score value \(s(r(\omega _i))\) for each \(r(\omega _i)\) (where \(i=1,2, \ldots , 4)\) is determined. These score values are included in Table 9.

Finally, the ranking of all the project alternatives is based on the scoring values calculated in the previous step. The rankings, which determine the most to least favorable projects based on the evaluated criteria, are displayed in Table 9.

Table 9 The calculation results of Case 1

Case 2: Let \({\mathcal {I}}(h_1,h_2)=\{ (1-h_1^{\delta (s)})\vee (h_1^{\delta (s)} \wedge h_2^{\delta (s)}) \mid s=1,2, \ldots , k\},\ {\mathcal {O}}={\mathcal {O}}_{p}\ (p=2),\) and \(\alpha =0.5.\) Based on the methods introduced above, we can perform the corresponding calculations for Case 2 (Table 10) and obtain the following results:

$$\begin{aligned} \overline{R}_1^{\beta , C}({\textrm{H}}^{+})&= \{\langle \omega _1,\{0.2040, 0.0385, 0.0195\}\rangle , \langle \omega _2,\{0.1981, 0.0627, 0.0347\}\rangle , \\&\qquad \langle \omega _3,\{0.2114, 0.1071, 0.0672\}\rangle , \langle \omega _4,\{0.2114, 0.1071, 0.0672\}\rangle \}. \\ \underline{R}_1^{\beta , C}({\textrm{H}}^{+})&= \{\langle \omega _1,\{0.5563, 0.6000, 0.7000\}\rangle , \langle \omega _2,\{0.5000, 0.5000, 0.6000\}\rangle , \\&\qquad \langle \omega _3,\{0.5000, 0.5000, 0.6000\}\rangle , \langle \omega _4,\{0.5000, 0.4342, 0.6000\}\rangle \}. \\ \overline{R}_1^{\beta , C}({\textrm{H}}^{-})&= \{\langle \omega _1,\{0.1879, 0.0415, 0.0219\}\rangle , \langle \omega _2,\{0.2366, 0.0830, 0.0328\}\rangle , \\&\qquad \langle \omega _3,\{0.1969, 0.0255, 0.0155\}\rangle , \langle \omega _4,\{0.1969, 0.1282, 0.0273\}\rangle \}. \\ \underline{R}_1^{\beta , C}({\textrm{H}}^{-})&= \{\langle \omega _1,\{0.4651, 0.6000, 0.7000\}\rangle , \langle \omega _2,\{0.4651, 0.5000, 0.6000\}\rangle , \\&\qquad \langle \omega _3,\{0.4651, 0.5000, 0.6000\}\rangle , \langle \omega _4,\{0.4651, 0.5000, 0.6000\}\rangle \}. \\ \delta ^{+}&= \{\langle \omega _1,\{0.2859, 0.2021, 0.2208\}\rangle , \langle \omega _2,\{0.2678, 0.1873, 0.1999\}\rangle , \\&\qquad \langle \omega _3,\{0.2758, 0.2137, 0.2186\}\rangle , \langle \omega _4,\{0.2758, 0.1954, 0.2186\}\rangle \}. \\ \delta ^{-}&= \{\langle \omega _1,\{0.2527, 0.2038, 0.2221\}\rangle , \langle \omega _2,\{0.2820, 0.1994, 0.1988\}\rangle , \\&\qquad \langle \omega _3,\{0.2581, 0.1652, 0.1889\}\rangle , \langle \omega _4,\{0.2581, 0.2263, 0.1956\}\rangle \}. \end{aligned}$$
Table 10 The merged decision objects \(\delta ^{+}\) and \(\delta ^{-}\) for each alternative (Case 2)
Table 11 The calculation results of Case 2

Based on the ranking results of all alternatives in Table 11, \(\omega _3\) stands out as the most optimal project. Below are the final results:

\(\omega _3 \succ \omega _1 \succ \omega _4 \succ \omega _2\)

Case 3: Let \({\mathcal {I}}(h_1,h_2)=\{ (1-h_1^{\delta (s)})\vee h_2^{\delta (s)} \mid s=1,2, \ldots , k\},\ {\mathcal {O}}={\mathcal {O}}_{mp}\ (p=1),\) and \(\alpha =0.5.\)

Because of the similarity in computational procedures between Case 3 and Case 1, only the final results are provided below.

\(\omega _3 \succ \omega _1 \succ \omega _4 \succ \omega _2\)

Based on the ranking results above, the optimal project identified is \(\omega _3.\)

Remark 2

The results of Cases 1, 2, and 3 reveal a significant aspect of the decision-making process involving HF logic operators: although variations in these operators influence the ordering of the options, the top choice remains stable. This consistency underscores the reliability and flexibility of the proposed decision-making method. Significantly, it allows DMs to select from a spectrum of HF logic operators and to adjust parameter settings, so the decision-making process can be tailored to specific needs and criteria, demonstrating the method’s adaptability to different scenarios and preferences.

6.2 Comparative Analysis

Researchers have introduced several decision-making techniques for HFMADM problems, including the HF-TOPSIS method [30], the HF-VIKOR method [30], the HFWA method [27], the HFOWA method [27], and the approach proposed by Fu et al. [4]. To demonstrate the effectiveness of our proposed method, we compare it with these five methods on the enterprise project investment problem from Sect. 6.1. The results of this comparison are displayed in Table 12, and Fig. 3 provides a more intuitive view of the ranking results across the different decision-making methods. By examining Table 12 and Fig. 3, we can draw the following conclusions.

From the perspective of the optimal solution, the best alternative identified by our proposed method coincides with that of the other five methods: \(\omega _3\) should be selected as the investment project. This outcome verifies the effectiveness of our proposed method. To further demonstrate the scientific validity and rationality of our method, the Spearman rank correlation coefficient (SRCC), a statistical technique for describing the correlation between two variables, is introduced. We use the SRCC to evaluate the relationship between the ranking results of the seven methods discussed above. Fig. 4 shows the SRCCs between the ranking results of the seven methods, computed from the data in Table 13. Generally, when the SRCC between two methods exceeds 0.8, the correlation between them is considered significant. The HF-TOPSIS and HF-VIKOR methods are well-established for addressing HFMADM problems, making them suitable benchmarks for comparison. High SRCCs between our method’s ranking results and those of these two methods therefore confirm that our method is both reasonable and valid.
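For reference, the SRCC between two rankings can be computed as in the following sketch, where the rank vectors are hypothetical stand-ins for the positions of \(\omega _1, \ldots , \omega _4\) under two methods.

```python
# A sketch of the SRCC check between two ranking results. The rank vectors
# are hypothetical: entry i is the position of alternative w_i in a ranking.

from scipy.stats import spearmanr

ours      = [2, 4, 1, 3]   # encodes the ranking w3 > w1 > w4 > w2
benchmark = [2, 3, 1, 4]   # a hypothetical benchmark ranking

rho, _ = spearmanr(ours, benchmark)
print(round(rho, 2))       # 0.8; values above 0.8 indicate strong agreement
```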

Turning to the ranking results, Table 12 shows that our method and the other five methods produce different rankings, mainly because each method follows different ranking principles. However, the SRCC between our method (Cases 1 and 2) and the HF-TOPSIS, HFWA, and HF-VIKOR methods is 0.8, which suggests that our method is both valid and reliable. In contrast, the SRCC between our method (Case 1) and the HFOWA method is 0.4, and the SRCCs between the HFOWA method and our method (Case 2), HF-TOPSIS, HFWA, and HF-VIKOR are all below 0.6. This indicates that our method (Cases 1 and 2) is more effective than the HFOWA method.

Fu et al.’s method is built on an HF rough set with an HF t-norm. An HF t-norm must satisfy associativity, whereas the HF overlap function is an extension of the HF t-norm that is not bound by associativity. Moreover, the existing HF \(\beta \)-covering rough set based on HF t-norms cannot effectively handle the overlap and correlation between hesitant information. When hesitant information overlaps, the HF \(\beta \)-covering rough set based on the hesitant overlap function is a better choice. Therefore, the model proposed in this paper, based on the HF overlap function, possesses effectiveness and practical value.

The comparative analysis above validates the effectiveness and reliability of our method. To further highlight its advantages, we present the following three examples.

Table 12 The comparison between different models
Table 13 The comparison between different models (Expression 2)
Fig. 3 The comparison between different models

Example 10

To demonstrate the applicability of the proposed method, we introduce another corporate project investment decision problem. Let \(\Omega =\{\omega _1, \omega _2, \omega _3, \omega _4, \omega _5\}\) represent the five candidate projects and \(C=\{C_1, C_2, C_3, C_4\}\) represent the four evaluation attributes. After calculation, the weight vector given by the experts is \(d = (0.2197, 0.2064, 0.3543, 0.2196)^T.\) The specific evaluation values of each attribute are shown in Table 14. We then solve the problem with our proposed method, the HFWA method, and the HFOWA method; the comparison results are shown in Table 15.

Table 14 The HF evaluation information table (Example 10)
Table 15 The ranking results of Example 10

As shown in Table 15, the HFWA and HFOWA methods fail to produce valid rankings, as all alternatives are evaluated as approximately equal \((\omega _5 \approx \omega _2 \approx \omega _1 \approx \omega _4 \approx \omega _3 )\). This result demonstrates the limitations of these methods in effectively distinguishing and ranking alternatives in this case. In contrast, our proposed method successfully generates a complete and meaningful ranking \(( \omega _5 \succ \omega _2 \succ \omega _1 \succ \omega _4 \succ \omega _3 ),\) clearly differentiating the alternatives. This comparison highlights the robustness and effectiveness of our method in addressing scenarios where traditional methods are inadequate for providing useful rankings.

Fig. 4 The SRCCs of ranking results for different cases

Example 11

Suppose \(\Omega =\{\omega _1, \omega _2, \omega _3, \omega _4, \omega _5\}\) is a set of 5 malls, and \(C = \{C_1, C_2, C_3, C_4\}\) represents a set of 4 attributes. The weight vector of the four attributes is \(d = (0.3, 0.3, 0.2, 0.2)^T.\) The evaluation values of each mall under each attribute are shown in Table 16.

Table 16 The HF evaluation information table (Example 11)

Applying the three decision-making methods above, the ranking results of all alternatives are shown in Table 17.

Table 17 The ranking results of Example 11

As can be seen from Table 17, in this example the HF-VIKOR method cannot rank the alternatives due to algorithmic limitations. In addition, although the HF-TOPSIS method identifies the optimal solution \(\omega _4,\) it cannot distinguish the priorities of the other four alternatives. Our method can rank all the alternatives except \(\omega _2\) and \(\omega _3,\) and concludes that the optimal solution is \(\omega _1.\)

Therefore, based on the results of Examples 10 and 11, we can conclude that our method is more effective than the other four methods and has a wider range of applicable scenarios.

Example 12

To demonstrate the applicability of the proposed method with a larger number of alternatives, we introduce another corporate project investment decision problem. Let \(\Omega =\{\omega _1, \omega _2, \ldots , \omega _9\}\) represent the nine candidate projects, and \(C=\{C_1, C_2, C_3, C_4\}\) represent the four evaluation attributes. After calculation, the weight vector given by the experts is \(d = (0.253, 0.248, 0.251, 0.248)^T.\) The specific evaluation values of each attribute are shown in Table 18.

Table 18 The HF \(\beta \)-covering C (Example 12)

We then solve the problem with our proposed method and Fu et al.’s method; the comparison results are shown in Table 19 and Fig. 5.

Table 19 The ranking results of Example 12

By comparing the results, we observe that our method still performs well when there are many alternatives. However, when there is significant overlap among the hesitant information, Fu et al.’s method struggles to distinguish and rank some of the options, such as \(\omega _1,\) \(\omega _7,\) \(\omega _8,\) and \(\omega _9.\) This indicates that the proposed model retains good classification ability even under significant overlap in hesitant information, suggesting a broader range of applications.

Based on the discussion above, we summarize the advantages of our method as follows:

  • The HF similarity measure used in our method is better at capturing the hesitant characteristics in HFMADM compared to traditional real-number representations. This allows for a more accurate reflection of the DM’s uncertain or ambiguous preferences, which are common in real-world decision-making.

  • Compared to the five methods mentioned in this paper (see Table 12), our approach provides a more precise analysis aligned with the DM’s preferences. This is because our method considers the risk preferences of the DM, offering a solution that better matches actual situations.

  • Our method combines the strengths of the HF\(\beta \)CIORS model and the TOPSIS method, allowing us to address some HFMADM problems that traditional methods cannot solve well. This is evident in Examples 10 and 11, where our approach performs more effectively in complex decision-making environments.

  • When compared to Fu et al.’s method, our approach still performs well even with a large number of options. In particular, when hesitant information overlaps significantly, Fu et al.’s method struggles to distinguish and rank certain options. In contrast, our model, which incorporates the HF overlap function, can sort options that Fu et al.’s method cannot. This shows that our method has a better classification ability, especially in situations with considerable overlapping hesitant information. As a result, our model has a wider range of applications and can handle more complex decision-making scenarios.

Fig. 5 The comparison between different models

6.3 Sensitivity Analysis

In Sect. 6.1, \(\alpha \) was introduced as a risk preference coefficient, determined subjectively by DMs. This subsection investigates the influence of the parameter \(\alpha \) on the decision-making outcomes of the two cases in the enterprise project investment problem. Methodologically, \(\alpha \) is varied systematically from 0 to 1 in increments of 0.1. Using the framework proposed in this study, the rankings for each case under the various values of \(\alpha \) are calculated; the resulting rankings are presented in Figs. 6 and 7 and Table 20. This examination provides a comprehensive understanding of how shifts in the risk preference coefficient \(\alpha \) affect decision-making in enterprise project investment scenarios.
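The sweep itself is simple to script, as in the following sketch; it reuses the positionwise merging, closeness, and score assumptions from the earlier sketches, with hypothetical approximation values.

```python
# A sketch of the sensitivity sweep over alpha (0 to 1 in steps of 0.1),
# reusing the positionwise assumptions from the earlier sketches. The
# upper/lower approximation values are hypothetical placeholders.

def score_at(alpha, up_p, lo_p, up_m, lo_m):
    dp = [alpha * u + (1 - alpha) * l for u, l in zip(up_p, lo_p)]
    dm = [alpha * u + (1 - alpha) * l for u, l in zip(up_m, lo_m)]
    r = [m / (p + m) for p, m in zip(dp, dm)]
    return sum(r) / len(r)

# per-alternative tuples: (upper/lower approx. of H+, upper/lower of H-)
data = {
    "w1": ([0.32, 0.13], [1.00, 0.95], [0.30, 0.10], [1.00, 0.87]),
    "w2": ([0.36, 0.13], [1.00, 0.76], [0.39, 0.17], [1.00, 0.81]),
}

for i in range(11):
    alpha = round(0.1 * i, 1)
    ranking = sorted(data, key=lambda w: score_at(alpha, *data[w]))
    print(alpha, ranking)   # watch whether the top choice stays stable
```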

Fig. 6 The ranking results with different values of \(\alpha \) under Case 1

Fig. 7 The ranking results with different values of \(\alpha \) under Case 2

Table 20 The ranking results with different values of \(\alpha \) under Case 1 and Case 2

In Fig. 6, we can see that the optimal solution for Case 1 does not change with variations in \(\alpha ,\) indicating that our proposed method is stable. When \(\alpha \) ranges from 0 to 0.6, the ranking order is \(\omega _3 \succ \omega _4 \succ \omega _1 \succ \omega _2,\) while for \(\alpha \) ranging from 0.7 to 1, the ranking order shifts to \(\omega _3 \succ \omega _4 \succ \omega _2 \succ \omega _1.\) On the other hand, as observed from Fig. 7 for Case 2, both the best and the worst solutions remain constant regardless of changes in \(\alpha .\) When \(\alpha \) is between 0 and 0.2, the ranking order is \(\omega _3 \succ \omega _4 \succ \omega _1 \succ \omega _2,\) and when \(\alpha \) ranges from 0.3 to 1, the ranking order changes to \(\omega _3 \succ \omega _1 \succ \omega _4 \succ \omega _2.\)

To sum up, in both Case 1 and Case 2, although variations in \(\alpha \) affect the ranking results, the optimal choice remains unchanged. In other words, while the risk preference of DMs may alter parts of the ranking, it does not change the optimal outcome. Thus, the method proposed in this paper is stable.

7 Conclusion

In this study, we proposed the hesitant fuzzy overlap function and gave several related examples. Since the overlap function better handles the overlap and correlation between pieces of information, the application scope of the hesitant fuzzy overlap function is expanded. We then proposed the HF\(\beta \)CIORS models and studied their basic properties. Additionally, we integrated HF\(\beta \)CIORS with the TOPSIS method and applied the combination to MADM problems; the feasibility of this method was verified through several practical examples. Finally, sensitivity analysis and comparative analysis validated the stability and effectiveness of the proposed method.

Our proposed method offers more precise analysis, aligns more closely with the DM’s preferences, and addresses certain HFMADM problems that traditional methods struggle to resolve, particularly in scenarios with significantly overlapping hesitant information. However, the method achieves greater accuracy only when an appropriate overlap function is carefully selected; the choice of overlap function plays a critical role in capturing the specific characteristics of the hesitant information.

In future work, we plan to explore and select more suitable overlap functions for experimental analysis. Additionally, we will investigate attribute reduction based on the HF\(\beta \)CIORS models and variable precision fuzzy rough sets based on hesitant fuzzy overlap functions.