
Search Results (88)

Search Parameters:
Keywords = Bayes theory

16 pages, 1955 KiB  
Article
Adaptive Recognition and Control of Shield Tunneling Machine in Soil Layers Containing Plastic Drainage Boards
by Qiuping Wang, Wanli Li, Zhikuan Xu and Yougang Sun
Actuators 2024, 13(12), 470; https://doi.org/10.3390/act13120470 - 22 Nov 2024
Viewed by 548
Abstract
Underground plastic vertical drains (PVDs) pose a significant problem for shield machines in tunneling construction. At present, the main way to deal with PVDs is to adjust the shield machine's parameters manually. To enable a shield machine to autonomously recognize soil layers containing PVDs and adjust its control accordingly, this study constructs a state-space model of the shield machine's advance and rotation systems and applies Bayesian decision theory to judge excavation conditions. A Bayesian model predictive control (Bayes-MPC) method for the shield machine is then proposed and analyzed in simulation. Finally, a validation experiment is conducted on a Singapore subway project. The proposed method outperforms traditional methods in simulation and demonstrates effectiveness and accuracy in experiments. The research outcomes can serve as a reference for adaptive assistance systems for shield machines excavating through underground obstacles.
(This article belongs to the Section Actuators for Surface Vehicles)
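The Bayesian judgment of excavation conditions can be sketched as a two-class posterior update from a noisy cutterhead-torque reading. This is an illustrative sketch only, not the paper's state-space model; the torque means, spreads, and prior below are hypothetical placeholders.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Likelihood of an observation under a Gaussian sensor model."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def posterior_condition(torque, prior_pvd=0.1):
    """Posterior probability that the cutterhead is in a PVD-bearing layer,
    given one torque reading. Class-conditional means and spreads are
    illustrative placeholders, not values from the paper."""
    like_soil = gaussian_pdf(torque, mu=2.0, sigma=0.4)  # normal soil
    like_pvd = gaussian_pdf(torque, mu=3.2, sigma=0.5)   # PVD layer: higher torque
    evidence = like_pvd * prior_pvd + like_soil * (1 - prior_pvd)
    return like_pvd * prior_pvd / evidence

p_low = posterior_condition(2.0)   # typical torque: PVD unlikely
p_high = posterior_condition(3.5)  # elevated torque: PVD likely
```

A controller would then switch between parameter sets (or MPC cost weights) once the posterior crosses a decision threshold.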
Figures:
Figure 1. (a) Distribution of plastic drainage plates in a tunnel project; (b) longitudinal geological section schematic.
Figure 2. Advance and rotation system of shield machine.
Figure 3. Simplified control model for a single hydraulic cylinder.
Figure 4. Bayes-MPC method for shield machine.
Figure 5. Simulation outputs.
Figure 6. Simulation inputs.
Figure 7. Experimental situation.
Figure 8. (a) Cutterhead speed control performance; (b) excavation speed control performance.
Figure 9. (a) Comparison of cutterhead torque; (b) comparison of thrust.
17 pages, 2716 KiB  
Article
Disentangled Representation Learning for Robust Radar Inter-Pulse Modulation Feature Extraction and Recognition
by Luyao Zhang, Mengtao Zhu, Ziwei Zhang and Yunjie Li
Remote Sens. 2024, 16(19), 3585; https://doi.org/10.3390/rs16193585 - 26 Sep 2024
Viewed by 915
Abstract
Modern Multi-Function Radars (MFRs) are sophisticated sensors capable of flexibly adapting the control parameters of their transmitted pulse sequences. In complex electromagnetic environments, efficiently and accurately recognizing the inter-pulse modulations of non-cooperative radar pulse sequences is a key step for modern Electronic Support (ES) systems. Existing recognition methods focus on algorithmic designs, such as neural network architectures, to improve recognition performance. However, in open electromagnetic environments with increased flexibility in radar transmission, these methods suffer performance degradation due to domain shifts between training and testing datasets. To address this issue, this study proposes a robust radar inter-pulse modulation feature extraction and recognition method based on disentangled representation learning. First, inspired by Representation Learning Theory (RLT), received radar pulse sequences are disentangled into three explanatory factors related to (i) modulation types, (ii) modulation parameters, and (iii) measurement characteristics, such as measurement noise. Then, an explainable radar pulse sequence disentanglement network is proposed based on auto-encoding variational Bayes. The features extracted by the proposed method effectively represent the key latent factors related to recognition tasks and maintain performance under domain shift. Experiments in both ideal and non-ideal situations demonstrate the effectiveness, robustness, and superiority of the proposed method in comparison with other methods.
(This article belongs to the Special Issue Recent Advances in Nonlinear Processing Technique for Radar Sensing)
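Auto-encoding variational Bayes, which the disentanglement network builds on, rests on two standard pieces: the reparameterization trick for sampling latents, and a closed-form KL term for a diagonal-Gaussian posterior against a standard-normal prior. A minimal sketch (dimensions and values are arbitrary, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps with eps ~ N(0, I): the reparameterization
    trick that makes the sampling step differentiable in mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL(N(mu, sigma^2) || N(0, I)), summed over latent dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

mu = np.array([0.5, -0.3])
log_var = np.array([0.0, -1.0])
z = reparameterize(mu, log_var)           # one latent sample
kl = kl_to_standard_normal(mu, log_var)   # regularization term of the ELBO
kl_zero = kl_to_standard_normal(np.zeros(2), np.zeros(2))  # posterior == prior
```

The ELBO then combines this KL with a reconstruction likelihood; partitioning the latent vector into separate groups per explanatory factor is the disentanglement idea.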
Figures:
Graphical abstract
Figure 1. Different inter-pulse modulation types for PRI. (a) Constant. (b) Stagger. (c) Jittered. (d) Sliding. (e) Dwell and switch. (f) Periodic.
Figure 2. The procedure of radar PRI sequence disentanglement through the LiSTaR method.
Figure 3. Network structure in LiSTaR.
Figure 4. T-SNE visualization of learned representations for four methods. (a) LiSTaR. (b) AE. (c) VAE. (d) LSTMAE.
Figure 5. Confusion matrix of the recognition performance for four methods in three different data domain shift scenarios.
Figure 6. Radar PRI modulation recognition performance in non-ideal situations for four methods.
38 pages, 1053 KiB  
Article
Thompson Sampling for Stochastic Bandits with Noisy Contexts: An Information-Theoretic Regret Analysis
by Sharu Theresa Jose and Shana Moothedath
Entropy 2024, 26(7), 606; https://doi.org/10.3390/e26070606 - 17 Jul 2024
Cited by 2 | Viewed by 1192
Abstract
We study stochastic linear contextual bandits (CBs) in which the agent observes a noisy version of the true context through a noise channel with unknown channel parameters. Our objective is to design an action policy that can "approximate" that of a Bayesian oracle with access to the reward model and the noise channel parameters. We introduce a modified Thompson sampling algorithm and analyze its Bayesian cumulative regret with respect to the oracle action policy via information-theoretic tools. For Gaussian bandits with Gaussian context noise, our information-theoretic analysis shows that, under certain conditions on the prior variance, the Bayesian cumulative regret scales as Õ(mT), where m is the dimension of the feature vector and T is the time horizon. We also consider the setting where the agent observes the true context with some delay after receiving the reward, and show that delayed true contexts lead to lower regret. Finally, we empirically demonstrate the performance of the proposed algorithms against baselines.
(This article belongs to the Special Issue Information Theoretic Learning with Its Applications)
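The Thompson sampling idea can be illustrated with a plain Gaussian linear contextual bandit: maintain a Gaussian posterior over the reward parameter, sample from it each round, and act greedily on the sample. This sketch omits the paper's noise channel and delayed-context machinery, and the dimensions and noise levels are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
m, K, T = 3, 5, 2000                  # feature dim, arms per round, horizon
theta_true = rng.standard_normal(m)   # unknown reward parameter
sigma = 0.5                           # known reward-noise std

# Conjugate Gaussian posterior over theta, prior N(0, I),
# tracked via precision matrix and precision-weighted mean.
prec = np.eye(m)
b = np.zeros(m)

picked_best = 0
for t in range(T):
    contexts = rng.standard_normal((K, m))        # feature vector per arm
    cov = np.linalg.inv(prec)
    mean = cov @ b
    theta_s = rng.multivariate_normal(mean, cov)  # posterior sample
    a = int(np.argmax(contexts @ theta_s))        # act greedily on the sample
    x = contexts[a]
    r = x @ theta_true + sigma * rng.standard_normal()
    prec += np.outer(x, x) / sigma**2             # Bayesian linear-model update
    b += x * r / sigma**2
    picked_best += a == int(np.argmax(contexts @ theta_true))

best_rate = picked_best / T  # fraction of rounds the oracle arm was chosen
```

With noisy contexts, the paper's modification replaces `contexts @ theta_s` with a predicted reward that marginalizes over the unknown true context; that step is not shown here.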
Figures:
Figure 1. Comparison of Bayesian regret of proposed algorithms with baselines as a function of the number of iterations. (Left) Gaussian bandits with K = 40, σ_n² = σ_γ² = 1.1; (Center) logistic bandits with K = 40, σ_n² = 2, σ_γ² = 2.5; (Right) MovieLens dataset with added Gaussian context noise and Gaussian prior, parameters set as σ_n² = 0.1, σ_γ² = 0.6.
Figure A1. Bayesian cumulative regret of Algorithm 1 as a function of iterations over varying number K of actions.
13 pages, 428 KiB  
Article
A Possibilistic Formulation of Autonomous Search for Targets
by Zhijin Chen, Branko Ristic and Du Yong Kim
Entropy 2024, 26(6), 520; https://doi.org/10.3390/e26060520 - 17 Jun 2024
Cited by 1 | Viewed by 909
Abstract
Autonomous search is an ongoing cycle of sensing, statistical estimation, and motion control whose objective is to find and localise targets in a designated search area. Traditionally, the theoretical framework for autonomous search combines sequential Bayesian estimation with information-theoretic motion control. This paper formulates autonomous search in the framework of possibility theory. Although the possibilistic formulation is slightly more involved than the traditional method, it provides a means for quantitative modelling and reasoning in the presence of epistemic uncertainty. This feature is demonstrated in the context of a partially known probability of detection, expressed as an interval value. The paper presents an elegant Bayes-like solution to sequential estimation, with the reward function for motion control defined to take the epistemic uncertainty into account. The advantages of the proposed search algorithm are demonstrated by numerical simulations.
(This article belongs to the Special Issue Advances in Uncertain Information Fusion)
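A Bayes-like possibilistic update with an interval-valued probability of detection might look as follows. This is a simplified illustration of possibilistic conditioning with maxitive normalisation, not the paper's exact recursion, and the interval [0.4, 0.9] is hypothetical.

```python
import numpy as np

# Possibility of target presence/absence over a 1-D grid of cells
# (1.0 = fully possible; possibility maps are maxitive, not additive).
pi_presence = np.ones(10)
pi_absence = np.ones(10)

d_lo, d_hi = 0.4, 0.9  # probability of detection, known only as an interval

def update_no_detection(pi_presence, pi_absence, cell):
    """Bayes-like possibilistic update after scanning `cell` with no detection.
    With P_d only known to lie in [d_lo, d_hi], the possibility of 'no
    detection given presence' is taken as the most favourable value 1 - d_lo
    (the upper envelope over the interval)."""
    lik_presence = np.ones_like(pi_presence)
    lik_presence[cell] = 1.0 - d_lo       # a miss is still possible
    pi_p = pi_presence * lik_presence
    pi_a = pi_absence                     # absence fully explains no detection
    norm = max(pi_p.max(), pi_a.max())    # maxitive normalisation: sup = 1
    return pi_p / norm, pi_a / norm

pi_presence, pi_absence = update_no_detection(pi_presence, pi_absence, cell=3)
```

After repeated misses in a cell, presence possibility there decays while absence possibility stays at 1, mirroring the presence/absence map pair in the paper's figures.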
Figures:
Figure 1. Simulation setup: the cyan stars indicate the true targets; the blue dotted line is the trajectory of the searching agent up to k = 140 steps; the red dots indicate detections at k = 140.
Figure 2. The imprecise model of the probability of detection D used in simulations. The true D is plotted with the blue solid line.
Figure 3. Posterior possibility maps at time k = 140: (a) target presence map Π_k^1; (b) target absence map Π_k^0 (white colour implies zero possibility).
Figure 4. Output of the search algorithm: estimated target locations (indicated by red asterisks).
Figure 5. Mean OSPA errors obtained from 100 Monte Carlo runs. The scenario involves 80 targets placed at (a) uniformly random locations; (b) uniformly random locations within two diagonal squares in the search area.
19 pages, 11229 KiB  
Article
Several Feature Extraction Methods Combined with Near-Infrared Spectroscopy for Identifying the Geographical Origins of Milk
by Xiaohong Wu, Yixuan Wang, Chengyu He, Bin Wu, Tingfei Zhang and Jun Sun
Foods 2024, 13(11), 1783; https://doi.org/10.3390/foods13111783 - 6 Jun 2024
Cited by 4 | Viewed by 1350
Abstract
Milk is a dairy product of high nutritive value. Tracing the origin of milk upholds the interests of consumers as well as the stability of the dairy market. In this study, fuzzy direct linear discriminant analysis (FDLDA) is proposed to extract near-infrared spectral information from milk by combining fuzzy set theory with direct linear discriminant analysis (DLDA). First, spectral data of the milk samples were collected by a portable NIR spectrometer. Then, the data were preprocessed by Savitzky–Golay (SG) smoothing and standard normal variate (SNV) transformation to reduce noise, and the dimensionality of the spectral data was decreased by principal component analysis (PCA). Furthermore, linear discriminant analysis (LDA), DLDA, and FDLDA were employed to transform the spectral data into feature space. Finally, the k-nearest neighbor (KNN), extreme learning machine (ELM), and naïve Bayes classifiers were used for classification. The results showed that the classification accuracy of FDLDA was higher than that of DLDA when the KNN classifier was used; the highest recognition accuracies of FDLDA, DLDA, and LDA reached 97.33%, 94.67%, and 94.67%, respectively. The classification accuracy of FDLDA was also higher than that of DLDA with the ELM and naïve Bayes classifiers, but the highest recognition accuracies were 88.24% and 92.00%, respectively. The KNN classifier therefore outperformed the ELM and naïve Bayes classifiers. This study demonstrates that combining FDLDA, DLDA, and LDA with NIR spectroscopy is an effective method for determining the origin of milk.
(This article belongs to the Section Food Analytical Methods)
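The SNV pretreatment and KNN classification steps of the pipeline can be sketched on synthetic spectra. This is a minimal illustration with made-up spectral shapes, not the study's milk data, and it omits the SG smoothing, PCA, and fuzzy discriminant stages.

```python
import numpy as np

rng = np.random.default_rng(2)

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum individually,
    removing baseline offset and multiplicative scatter effects."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Synthetic NIR-like spectra for two hypothetical origins: the same underlying
# shapes distorted by random gain/offset, which SNV should remove.
wav = np.linspace(0.0, 1.0, 100)
shape_a, shape_b = np.sin(2 * np.pi * wav), np.cos(2 * np.pi * wav)

def simulate(shape, n):
    gain = rng.uniform(0.5, 2.0, (n, 1))
    offset = rng.uniform(-1.0, 1.0, (n, 1))
    return gain * shape + offset + 0.01 * rng.standard_normal((n, 100))

X_train = snv(np.vstack([simulate(shape_a, 20), simulate(shape_b, 20)]))
y_train = np.array([0] * 20 + [1] * 20)
X_test = snv(np.vstack([simulate(shape_a, 5), simulate(shape_b, 5)]))
y_test = np.array([0] * 5 + [1] * 5)

def knn_predict(X_train, y_train, X, k=3):
    """Minimal k-nearest-neighbour classifier (Euclidean distance)."""
    d = np.linalg.norm(X[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(y_train[idx]).argmax() for idx in nearest])

accuracy = (knn_predict(X_train, y_train, X_test) == y_test).mean()
```

Without SNV, the random gain and offset would dominate the Euclidean distances; with it, KNN separates the two shapes cleanly.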
Figures:
Figure 1. The spectra of milk samples from 5 different provinces: (a) original spectrogram; (b) average spectrogram.
Figure 2. NIR spectra of milk samples under different pretreatment methods: (a) multiplicative scatter correction (MSC); (b) standard normal variate (SNV); (c) Savitzky–Golay (SG); (d) mean centering (MC); (e) SG + MSC; (f) SG + SNV; (g) SG + MC.
Figure 3. The data distribution after PCA.
Figure 4. The distribution of training samples after direct linear discriminant analysis (DLDA) under the same pretreatment methods (a)–(g) as in Figure 2. (There are 45 training samples for each type of commercial milk, totaling 225 training samples for the 5 kinds of commercial milk.)
Figure 5. Fuzzy membership values of milk samples.
Figure 6. The distribution of training samples after fuzzy direct linear discriminant analysis (FDLDA) under the same pretreatment methods (a)–(g) as in Figure 2. (There are 45 training samples for each type of commercial milk, totaling 225 training samples for the 5 kinds of commercial milk.)
Figure 7. Classification results of FDLDA with different values of the weight coefficient m.
19 pages, 3930 KiB  
Article
Enhancing Probabilistic Solar PV Forecasting: Integrating the NB-DST Method with Deterministic Models
by Tawsif Ahmad, Ning Zhou, Ziang Zhang and Wenyuan Tang
Energies 2024, 17(10), 2392; https://doi.org/10.3390/en17102392 - 16 May 2024
Cited by 2 | Viewed by 1132
Abstract
Accurate quantification of uncertainty in solar photovoltaic (PV) generation forecasts is imperative for the efficient and reliable operation of the power grid. In this paper, a data-driven, non-parametric probabilistic method based on the Naïve Bayes (NB) classification algorithm and the Dempster–Shafer theory (DST) of evidence is proposed for day-ahead probabilistic PV power forecasting. This NB-DST method extends traditional deterministic solar PV forecasting methods by quantifying their forecast uncertainty, estimating the cumulative distribution functions (CDFs) of both the forecast errors and the forecast variables. The statistical performance of the method is compared with the analog ensemble and persistence ensemble methods under three different weather conditions using real-world data. The results reveal that the proposed NB-DST method, coupled with an artificial neural network model, outperforms the other methods in that its estimated CDFs have lower spread, higher reliability, and sharper probabilistic forecasts with better accuracy.
(This article belongs to the Section A2: Solar Energy and Photovoltaic Systems)
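The NB stage of the pipeline can be sketched as a Gaussian Naive Bayes classifier mapping weather features to probabilities over forecast-error bins. The features, bins, and training data here are invented for illustration, and the DST evidence-combination step is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical training set: weather features (irradiance proxy, cloud cover)
# labelled with the bin that the day-ahead forecast error fell into.
X = np.vstack([rng.normal([0.8, 0.1], 0.1, (50, 2)),   # bin 0: small error
               rng.normal([0.4, 0.6], 0.1, (50, 2))])  # bin 1: large error
y = np.array([0] * 50 + [1] * 50)

def fit_gaussian_nb(X, y):
    """Per-class feature means, variances, and priors for Gaussian Naive Bayes."""
    classes = np.unique(y)
    return [(X[y == c].mean(0), X[y == c].var(0), (y == c).mean())
            for c in classes]

def predict_proba(stats, x):
    """Class probabilities: product of per-feature Gaussian likelihoods and
    the class prior, normalised in log space for numerical stability."""
    joint = []
    for mu, var, prior in stats:
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        joint.append(np.log(prior) + log_lik)
    joint = np.array(joint)
    p = np.exp(joint - joint.max())
    return p / p.sum()

stats = fit_gaussian_nb(X, y)
proba = predict_proba(stats, np.array([0.75, 0.15]))  # clear-day-like features
```

In the full method, such per-bin probabilities become DST basic belief assignments, which are combined into a CDF over the forecast error.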
Figures:
Figure 1. Segmentation of the historical dataset for implementing the NB-DST method.
Figure 2. NB classifiers and corresponding probability outputs for forecast intervals at hour h.
Figure 3. Framework of the proposed NB-DST method.
Figure 4. PCC of weather variables with the hourly solar generation data.
Figure 5. The 95% PIs obtained from all the probabilistic solar forecasting methods on a particular "clear" day (19 May 2022): (a) SVR-NB-DST, (b) ANN-NB-DST, (c) QR-NB-DST, (d) AnEn, and (e) PerEn. The red dotted lines denote the actual hourly observations of solar power, and the shaded areas denote the 95% PIs of the forecasted solar generation obtained from the corresponding methods.
Figure 6. Estimated CDFs of the PV power using all the probabilistic forecasting methods at 10:00 AM on (a) a "clear" day (19 May 2022), (b) an "overcast" day (18 December 2022), and (c) a "partially cloudy" day (9 July 2022). The solid black line represents the CDF of the actual observation.
Figure 7. The 95% PIs obtained from all the probabilistic solar forecasting methods on a particular "overcast" day (28 December 2022): (a) SVR-NB-DST, (b) ANN-NB-DST, (c) QR-NB-DST, (d) AnEn, and (e) PerEn.
Figure 8. The 95% PIs obtained from all the probabilistic solar forecasting methods on a particular "partially cloudy" day (9 July 2022): (a) SVR-NB-DST, (b) ANN-NB-DST, (c) QR-NB-DST, (d) AnEn, and (e) PerEn.
Figure 9. Comparative improvement of the ANN-NB-DST method over the widely used benchmark, the PerEn method.
27 pages, 1287 KiB  
Article
Exploring Trust Dynamics in Online Social Networks: A Social Network Analysis Perspective
by Stavroula Kridera and Andreas Kanavos
Math. Comput. Appl. 2024, 29(3), 37; https://doi.org/10.3390/mca29030037 - 15 May 2024
Cited by 3 | Viewed by 3078
Abstract
This study explores trust dynamics within online social networks, blending social science theories with advanced machine-learning (ML) techniques. We examine trust's multifaceted nature (definitions, types, and mechanisms for its establishment and maintenance) and analyze social network structures through graph theory. Employing a diverse array of ML models (e.g., KNN, SVM, Naive Bayes, Gradient Boosting, and Neural Networks), we predict connection strengths on Facebook, focusing on model performance metrics such as accuracy, precision, recall, and F1-score. Our methodology, executed in Python using the Anaconda distribution, unveils insights into trust formation and sustainability on social media, highlighting the potent application of ML in understanding these dynamics. Challenges, including the complexity of modeling social behaviors and concerns about the ethical use of data, are discussed, emphasizing the need for continued innovation. Our findings contribute to the discourse on trust in social networks and suggest future research directions, including the application of our methodologies to other platforms and the study of online trust over time. This work not only advances the academic understanding of digital social interactions but also offers practical implications for developers, policymakers, and online communities.
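The four performance metrics used to compare the models follow directly from the confusion-matrix counts. A minimal sketch with made-up labels (the binary strong-tie/weak-tie encoding is an assumption for illustration, not the paper's exact setup):

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for a binary connection-strength
    label (1 = strong tie, 0 = weak tie)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    accuracy = np.mean(y_true == y_pred)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Toy predictions for 8 candidate ties (labels are made up).
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
accuracy, precision, recall, f1 = classification_metrics(y_true, y_pred)
```

Comparing models on all four metrics, rather than accuracy alone, matters when strong and weak ties are imbalanced in the network.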
Figures:
Figure 1. Illustration of the Web of Trust concept, depicting users as nodes connected by links of varying colors and sizes that represent different levels and types of trust relationships.
Figure 2. Application of the Ant Colony Optimization (ACO) algorithm for leveraging social trust in recommendation systems.
Figure 3. The complexity of trust interactions within a network, using circles of varying colors and sizes to represent different user groups and trust intensities. Each color signifies a distinct trust group, while the size indicates the group's influence or trust level within the network, offering insights into how trust propagates in social media settings.
Figure 4. Application of Convolutional Graph Neural Networks (GCNN) in the modeling of trust relationships within a social network [33].
23 pages, 5003 KiB  
Article
Active Data Selection and Information Seeking
by Thomas Parr, Karl Friston and Peter Zeidman
Algorithms 2024, 17(3), 118; https://doi.org/10.3390/a17030118 - 12 Mar 2024
Cited by 3 | Viewed by 3353
Abstract
Bayesian inference typically focuses upon two issues. The first is estimating the parameters of some model from data, and the second is quantifying the evidence for alternative hypotheses, formulated as alternative models. This paper focuses upon a third issue: the selection of data, either through sampling subsets of a large dataset or through optimising experimental design, based upon the models we have of how those data are generated. Optimising data selection ensures we can achieve good inference with fewer data, saving on computational and experimental costs. This paper aims to unpack the principles of active sampling of data by drawing from neurobiological research on animal exploration and from the theory of optimal experimental design. We offer an overview of the salient points from these fields and illustrate their application in simple toy examples, ranging from function approximation with basis sets to inference about processes that evolve over time. Finally, we consider how this approach to data selection could be applied to the design of (Bayes-adaptive) clinical trials.
(This article belongs to the Special Issue Bayesian Networks and Causal Reasoning)
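Expected information gain, the quantity driving sample selection throughout this paper, is the mutual information between parameters and outcomes under a candidate design. A minimal discrete sketch (the two candidate "designs" are invented for illustration):

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_information_gain(prior, likelihoods):
    """EIG of a design = H(prior predictive) - E_theta[H(y | theta)],
    i.e. the mutual information between parameter and outcome.
    likelihoods[i, j] = p(y = j | theta = i) under that design."""
    predictive = prior @ likelihoods
    expected_cond = np.sum(prior * np.array([entropy(row) for row in likelihoods]))
    return entropy(predictive) - expected_cond

prior = np.array([0.5, 0.5])
# Design A: the outcome depends strongly on theta; design B: barely at all.
design_a = np.array([[0.9, 0.1], [0.1, 0.9]])
design_b = np.array([[0.55, 0.45], [0.45, 0.55]])
eig_a = expected_information_gain(prior, design_a)
eig_b = expected_information_gain(prior, design_b)
best = "A" if eig_a > eig_b else "B"
```

Active sampling simply picks the next datum (or design) maximising this quantity, optionally minus a sampling cost, which reproduces the terminate-when-uninformative behaviour in the later figures.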
Show Figures

Figure 1

Figure 1
<p>The active sampling cycle. This graphic illustrates the basic idea behind this paper. Analogous with action–perception cycles of the sort found in biological systems, it shows reciprocal interactions between the process of sampling data (in biology, through acting upon the world to solicit new sensations) and drawing inferences about these data (akin to perceptual inference).</p>
Full article ">Figure 2
<p>A worked example. This figure offers a worked example of how we compute expected information gain. It draws from the message passing perspective set out in Equations (3) and (4) which detail the analytic computations that lead us to expected information gain as a function of our choices, <span class="html-italic">π</span>, for this generative model. The colours in the upper left plot represent different values of <span class="html-italic">ϴ</span>. The red arrows in the lower right factor graph indicate the directions of messages being passed.</p>
Full article ">Figure 3
<p>Function approximation with random sampling. The upper part of this figure illustrates the generative model schematically in terms of a Bayesian network. Here, a set of 16 parameters with zero-centred normally distributed priors weight the columns of a matrix of Gaussian basis functions (only three are shown in the schematic). The action we choose when generating our data selects which row of the matrix is used to generate that outcome. The middle-left panel shows the result of taking 200 or so samples from the underlying function that we seek to approximate. The middle-right panel shows the predictive distribution following the final (28th) sequential sample—shown as a blue dot—in terms of mode and 90% credible intervals. The lower-left panel shows the posterior parameter modes and 90% credible intervals, while the lower-right panel shows the sequence of choices (from top to bottom) aligned with the <span class="html-italic">x</span>-axis of the plot above. The red circle samples illustrate redundant or inefficient sampling under this random scheme.</p>
Full article ">Figure 4
<p>Function approximation with intelligent sampling. This figure uses the same format as <a href="#algorithms-17-00118-f003" class="html-fig">Figure 3</a>, but now each choice is sampled to maximise anticipated information gain. Note the more accurate approximation to the region between −100 and −50 in the middle-right panel. The α (i.e., precision or inverse temperature) parameter here is set to 64 to ensure almost deterministic sampling of the most informative locations.</p>
Full article ">Figure 5
<p>Function approximation with cost of sampling. Using the same format as in <a href="#algorithms-17-00118-f003" class="html-fig">Figure 3</a>, we here see the effect of equipping sampling with a cost. In the absence of any potential information gain, there is a preference for not sampling. The result of this is that the process of gathering data terminates after a smaller number of acquired samples. This termination occurs once the potential information gain falls below the cost of acquiring further samples. This means that the quality of the function approximation will depend upon the cost associated with sampling. In this example, a reasonable approximation to the function has been attained—inferior to that of <a href="#algorithms-17-00118-f003" class="html-fig">Figure 3</a> and <a href="#algorithms-17-00118-f004" class="html-fig">Figure 4</a> but still capturing a broad outline of the function.</p>
Figure 6">
Figure 6
<p>Function approximation with unresolvable uncertainty. This figure deals with the situation in which the variance associated with the likelihood distribution is heterogeneous along the <span class="html-italic">x</span>-axis. Specifically, it increases quadratically with distance from zero. The result of this is that less information gain is available at the extremes of the <span class="html-italic">x</span>-axis. This leads to termination of sampling despite the relatively larger uncertainty about the function in these extremes. This is optimal in this context as there is a limit to the amount of additional uncertainty that can be resolved.</p>
Figure 7">
Figure 7
<p>Dynamic information seeking. This figure deals with the situation in which the function we are trying to map out changes with time. The generative model is summarised in the upper part of this figure as the product of a matrix of coefficients with two matrices of basis functions. The data generated now depend both upon our choice and upon time. The plot of sample data shows data we could have sampled from the function at the final time point by taking the final row of the plot in the lower left and generating samples from this for each possible sampling location. This is to afford a comparison between a relatively exhaustive sampling of the data and the inferred shape of the function in the prediction plot based upon only one sample from this time point. The prediction plot shows the predicted data at this time point. The plot of posterior modes shows the estimated coefficients with 90% credible intervals at the final time point. The plot of choices is like that shown in previous figures. However, each choice occurs eight time steps after the previous choice. The lower two plots show both the time evolution of the underlying function generating the data and the inferred (expected) values of this function at each time point. We see that the shape of this function at different time points, despite only the sparse sampling of the data, is well captured by this scheme.</p>
Figure 8">
Figure 8
<p>Trajectory of beliefs. This figure captures several snapshots of the time-evolving inference process summarised in <a href="#algorithms-17-00118-f007" class="html-fig">Figure 7</a>. The first row shows the inferences made following the first data sample. The second row shows the inferences made at time step 50. The third shows the inferences made by the end of the time period. Note the improvement in predictive precision in the second and third rows, corresponding to the use of prior information from previous time steps.</p>
Figure 9">
Figure 9
<p>Random sampling for a Randomised Control Trial. The upper part of this figure shows the form of the generative model as a Bayesian network with the factors shown below. The underlying dynamics of the simulated trial are shown in the four plots on the middle left. These show the expected survival curves for each combination of demographics and treatment group. The letter T indicates the treatment group, the letter P indicates the placebo group, the letter F indicates females, and the letter M indicates males. The middle-right plot shows predicted survival from the time of enrolment (the units of the time axis are arbitrary). The thin red and blue lines give the true survival curves based upon the underlying generative process. The predictions are shown as thicker lines with accompanying 90% credible intervals. Below this is a plot of choices which takes the same format as in previous figures but now also reports the choice randomization ratio. This is shown by the size of the markers for each choice. There are three sizes of marker representing, from small to large, 2/3 allocated to a treatment group, 1/2 allocated to a treatment group, and 1/3 allocated to a treatment group. The lower-right plot shows the cumulative number of people allocated to the treatment (blue) and placebo (red) groups. Finally, the lower-left plot shows the posterior estimates of the parameters, with parameter five representing the effect of demographic on survival and parameter six representing the effect of treatment on survival.</p>
Figure 10">
Figure 10
<p>Intelligent sampling for a Randomised Control Trial. Using the same format as <a href="#algorithms-17-00118-f009" class="html-fig">Figure 9</a>, this figure shows the result of active sampling that maximises (subject to some random noise) the expected information gain about the parameters. This leads to a slightly better predicted survival curve, which appears to be due to a tendency to select later follow-up times. Later times tend to have greater potential information gain as survival to a later time is informative about previous survival probabilities.</p>
Figure 11">
Figure 11
<p>Sampling with preference for long-term survival. Again, using the format of <a href="#algorithms-17-00118-f009" class="html-fig">Figure 9</a>, this figure demonstrates the effect of preferring long-term survival. As might be expected, this biases the follow-up times towards the end of the trial period. Perhaps more interestingly, although initially randomising to both treatment arms, once it becomes apparent that long-term survival is compromised in the treatment group, these preferences bias randomization towards the placebo group.</p>
">
22 pages, 442 KiB  
Article
On Estimation of Shannon’s Entropy of Maxwell Distribution Based on Progressively First-Failure Censored Data
by Kapil Kumar, Indrajeet Kumar and Hon Keung Tony Ng
Stats 2024, 7(1), 138-159; https://doi.org/10.3390/stats7010009 - 8 Feb 2024
Cited by 3 | Viewed by 2151
Abstract
Shannon’s entropy is a fundamental concept in information theory that quantifies the uncertainty or information in a random variable or data set. This article addresses the estimation of Shannon’s entropy for the Maxwell lifetime model based on progressively first-failure-censored data from both classical and Bayesian points of view. In the classical perspective, the entropy is estimated using maximum likelihood estimation and bootstrap methods. For Bayesian estimation, two approximation techniques, including the Tierney-Kadane (T-K) approximation and the Markov Chain Monte Carlo (MCMC) method, are used to compute the Bayes estimate of Shannon’s entropy under the linear exponential (LINEX) loss function. We also obtained the highest posterior density (HPD) credible interval of Shannon’s entropy using the MCMC technique. A Monte Carlo simulation study is performed to investigate the performance of the estimation procedures and methodologies studied in this manuscript. A numerical example is used to illustrate the methodologies. This paper aims to provide practical values in applied statistics, especially in the areas of reliability and lifetime data analysis. Full article
(This article belongs to the Section Reliability Engineering)
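As a minimal complement to the abstract above, the closed-form differential entropy of the Maxwell distribution, H(a) = ln(a√(2π)) + γ − 1/2 with γ the Euler–Mascheroni constant, can be checked against a plug-in maximum likelihood estimate on complete (uncensored) data. This is only a sketch: the paper's estimators for progressively first-failure-censored data are not reproduced here.

```python
import numpy as np
from scipy.stats import maxwell

rng = np.random.default_rng(0)
a_true = 2.0
x = maxwell.rvs(scale=a_true, size=5000, random_state=rng)

# MLE of the Maxwell scale on complete data: a_hat^2 = sum(x^2) / (3n),
# since E[X^2] = 3 a^2 for the Maxwell distribution.
a_hat = np.sqrt(np.sum(x ** 2) / (3 * len(x)))

# Closed-form differential entropy: H(a) = ln(a * sqrt(2*pi)) + gamma - 1/2
H = lambda a: np.log(a * np.sqrt(2 * np.pi)) + np.euler_gamma - 0.5

print(a_hat, H(a_hat), maxwell.entropy(scale=a_hat))
```

The plug-in entropy estimate H(â) agrees with SciPy's `maxwell.entropy`, which evaluates the same quantity.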
Figure 1
<p>Schematic diagram of PFFC scheme.</p>
Figure 2">
Figure 2
<p>TTT plots under consideration of real data set.</p>
Figure 3">
Figure 3
<p>Empirical and fitted Maxwell distribution plots for real data.</p>
Figure 4">
Figure 4
<p>MCMC diagnostic plots for different PFFC samples in <a href="#stats-07-00009-t008" class="html-table">Table 8</a>.</p>
Figure 5">
Figure 5
<p>Simulated posterior predictive densities for the Bayesian estimates based on different PFFC samples in <a href="#stats-07-00009-t008" class="html-table">Table 8</a>.</p>
">
20 pages, 393 KiB  
Review
Dynamical Asymmetries, the Bayes’ Theorem, Entanglement, and Intentionality in the Brain Functional Activity
by David Bernal-Casas and Giuseppe Vitiello
Symmetry 2023, 15(12), 2184; https://doi.org/10.3390/sym15122184 - 11 Dec 2023
Cited by 1 | Viewed by 1797
Abstract
We discuss the asymmetries of dynamical origin that are relevant to functional brain activity. The brain is permanently open to its environment, and its dissipative dynamics is indeed characterized by the asymmetries under time translation transformations and time-reversal transformations, which manifest themselves in the irreversible “arrow of time”. Another asymmetry of dynamical origin arises from the breakdown of the rotational symmetry of molecular electric dipoles, triggered by incoming stimuli, which manifests in long-range dipole-dipole correlations favoring neuronal correlations. In the dissipative model, neurons, glial cells, and other biological components are classical structures. The dipole vibrational fields are quantum variables. We review the quantum field theory model of the brain proposed by Ricciardi and Umezawa and its subsequent extension to dissipative dynamics. We then show that Bayes’ theorem in probability theory is intrinsic to the structure of the brain states and discuss its strict relation with entanglement phenomena and free energy minimization. The brain estimates the action with a higher Bayes probability to be taken to produce the aimed effect. Bayes’ rule provides the formal basis of the intentionality in brain activity, which we also discuss in relation to mind and consciousness. Full article
(This article belongs to the Special Issue The Study of Brain Asymmetry)
19 pages, 379 KiB  
Article
Bayes Inference of Structural Safety under Extreme Wind Loads Based upon a Peak-Over-Threshold Process of Exceedances
by Elio Chiodo, Fabio De Angelis, Bassel Diban and Giovanni Mazzanti
Math. Comput. Appl. 2023, 28(6), 111; https://doi.org/10.3390/mca28060111 - 30 Nov 2023
Viewed by 1676
Abstract
In the present paper, the process of estimating the important statistical properties of extreme wind loads on structures is investigated by considering the effect of large variability. In fact, for the safety design and operating conditions of structures such as the ones characterizing tall buildings, wind towers, and offshore structures, it is of interest to obtain the best possible estimates of extreme wind loads on structures, the recurrence frequency, the return periods, and other stochastic properties, given the available statistical data. In this paper, a Bayes estimation of extreme load values is investigated in the framework of structural safety analysis. The evaluation of extreme values of the wind loads on the structures is performed via a combined employment of a Poisson process model for the peak-over-threshold characterization and an adequate characterization of the parent distribution which generates the base wind load values. In particular, the present investigation is based upon a key parameter for assessing the safety of structures, i.e., a proper safety index referred to a given extreme value of wind speed. The attention is focused upon the estimation process, for which the presented procedure proposes an adequate Bayesian approach based upon prior assumptions regarding (1) the Weibull probability that wind speed is higher than a prefixed threshold value, and (2) the frequency of the Poisson process of gusts. In the last part of the investigation, a large set of numerical simulations is analyzed to evaluate the feasibility and efficiency of the above estimation method and with the objective to analyze and compare the presented approach with the classical Maximum Likelihood method. Moreover, the robustness of the proposed Bayes estimation is also investigated with successful results, both with respect to the assumed parameter prior distributions and with respect to the Weibull distribution of the wind speed values. Full article
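The peak-over-threshold setup described in the abstract (a Poisson process of threshold exceedances combined with a Weibull parent distribution for the wind speeds) admits a compact closed form for the probability of exceeding a design level within a given period. The rate, shape, scale, and threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical parameters: Poisson gust rate and Weibull distribution
# of the excess wind speed above the threshold.
lam = 8.0          # mean number of threshold exceedances per year
k, c = 2.0, 6.0    # Weibull shape and scale of the excess wind speed (m/s)
u = 25.0           # threshold (m/s)

def p_exceed(v, T=50.0):
    """P(at least one gust above level v within T years) under the POT model.

    Exceedances of u arrive as a Poisson process with rate lam; each excess
    over u is Weibull(k, c). Gusts above v > u therefore occur as a thinned
    Poisson process with rate lam * P(excess > v - u)."""
    surv = np.exp(-((np.maximum(v - u, 0.0) / c) ** k))  # Weibull survival
    return 1.0 - np.exp(-lam * T * surv)

print(p_exceed(35.0, T=50.0))
```

The return level for a target safety index follows by inverting `p_exceed` in `v`; in the paper the parameters themselves carry Bayesian priors rather than being fixed as here.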
16 pages, 1564 KiB  
Article
Is Drug Delivery System a Deterministic or Probabilistic Approach? A Theoretical Model Based on the Sequence: Electrodynamics–Diffusion–Bayes
by Huber Nieto-Chaupis
Mathematics 2023, 11(21), 4528; https://doi.org/10.3390/math11214528 - 3 Nov 2023
Viewed by 1075
Abstract
Commonly, it is accepted that oncology treatment would yield outcomes with a certain determinism without any quantitative support or mathematical model that establishes such determinations. Nowadays, with the advent of nanomedicine, the targeting drug delivery scheme has emerged, whose central objective is the uptake of nanoparticles by tumors. Once they are injected into the bloodstream, it is unclear which process governs the directing of nanoparticles towards the desired target, deterministic or stochastic. In any scenario, an optimal outcome, small toxicity and minimal dispersion of drugs is expected. Commonly, it is expected that an important fraction of them can be internalized into the tumor. In this manner, due to the fraction of nanoparticles that fail to be taken up, the success of the drug delivery scheme might be at risk. In this paper, a theory based on the sequence electrodynamics–diffusion–Bayes theorem is presented. The Bayesian probability that emerges at the end of the sequence might be telling us that dynamical processes based on the injection of electrically charged nanoparticles might be dictated by stochastic formalism. Thus, rather than expecting a deterministic process, the chain of events would make the drug delivery scheme dependent on a sequence of conditional probabilities. Full article
(This article belongs to the Special Issue Mathematical Modelling in Biology)
Graphical abstract
Figure 1">
Figure 1
<p>Sketch of a three-phase process for nanoparticles in the bloodstream: (a) injection, (b) diffusion and transit, (c) arrival and internalization. A hybrid view is hypothesized, by which the entire process consists of electrodynamics, diffusion, and global probability. The circle indicates that while angiogenesis has started, nanoparticles can also travel through the created vessels.</p>
Figure 2">
Figure 2
<p>The theoretical efficiency of arrival for 3 different critical distances. As these distances increase, the efficiencies are reduced as a consequence of the scattering of nanoparticles in the bloodstream. Thus, one observes a transition from Weibull (sc = 3.5 and sc = 4.0) to Gaussian distributions (sc = 5.0).</p>
Figure 3">
Figure 3
<p>Contour plots of Equation (<a href="#FD42-mathematics-11-04528" class="html-disp-formula">42</a>) as function of argument <math display="inline"><semantics> <msqrt> <mrow> <mi>v</mi> <mo>/</mo> <mi>D</mi> <mo>×</mo> <mi>s</mi> </mrow> </msqrt> </semantics></math>. (<b>Left-side</b>): the case when <math display="inline"><semantics> <mrow> <mi>v</mi> <mo>=</mo> <mi>s</mi> <mo>/</mo> <mi>t</mi> <mo>≈</mo> <mn>0</mn> </mrow> </semantics></math> displaying the maximum efficiency of order of 12%. (<b>Right-side</b>): the case with the approximation <math display="inline"><semantics> <mi>sin</mi> </semantics></math>(<math display="inline"><semantics> <mrow> <mi>κ</mi> <mi>s</mi> </mrow> </semantics></math>) is applied (see text below) displaying zones of a null efficiency due Coulomb effects at the injected nanoparticles. Arrows are indicating the transition of a null efficiency to one of order of 20%. Plots were done with package of Ref. [<a href="#B37-mathematics-11-04528" class="html-bibr">37</a>].</p>
Figure 4">
Figure 4
<p>Sketch of the probabilistic interpretation of Equation (<a href="#FD42-mathematics-11-04528" class="html-disp-formula">42</a>), by which it is argued that the events of internalization and rejection might be dictated by Bayes’ theorem. It should be noted that the hypoxic region is analogous to the case where nanoparticles do not arrive at the tumor.</p>
Figure 5">
Figure 5
<p>Contour plots of Equation (<a href="#FD47-mathematics-11-04528" class="html-disp-formula">47</a>) showing the apparition of nonlinearities despite the linear relation between time and space traveled by nanoparticles. Left-side: the case where the exponential was approximated to be sinusoidal sin(x). Here it is noted a linearity between the relation space-time (dashed line). Right-side: the case where it was opted by the sin<math display="inline"><semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics></math>(x) expressing nonlinearity between space and time as consequence of electrical forces hypothetically due to either rejection or attraction. Plots were done with package of Ref. [<a href="#B37-mathematics-11-04528" class="html-bibr">37</a>].</p>
">
19 pages, 755 KiB  
Article
Samaritan Israelites and Jews under the Shadow of Rome: Reading John 4:4–45 in Ephesus
by Laura J. Hunt
Religions 2023, 14(9), 1149; https://doi.org/10.3390/rel14091149 - 8 Sep 2023
Viewed by 2441
Abstract
Genealogies, knowledge, and purity all can provide separate identities with the means for competing self-definition. This article assumes a social location near Ephesus with Samaritan Israelites and Judeans in a Jesus-believing network. Rather than providing an analysis in which divisions are transcended, this reading suggests that a negotiation in John 4:4–45 of these three characteristics navigates divisions to create a complex, merged superordinate identity. Full article
14 pages, 441 KiB  
Article
Shrinking the Variance in Experts’ “Classical” Weights Used in Expert Judgment Aggregation
by Gayan Dharmarathne, Gabriela F. Nane, Andrew Robinson and Anca M. Hanea
Forecasting 2023, 5(3), 522-535; https://doi.org/10.3390/forecast5030029 - 23 Aug 2023
Cited by 1 | Viewed by 1888
Abstract
Mathematical aggregation of probabilistic expert judgments often involves weighted linear combinations of experts’ elicited probability distributions of uncertain quantities. Experts’ weights are commonly derived from calibration experiments based on the experts’ performance scores, where performance is evaluated in terms of the calibration and the informativeness of the elicited distributions. This is referred to as Cooke’s method, or the classical model (CM), for aggregating probabilistic expert judgments. The performance scores are derived from experiments, so they are uncertain and, therefore, can be represented by random variables. As a consequence, the experts’ weights are also random variables. We focus on addressing the underlying uncertainty when calculating experts’ weights to be used in a mathematical aggregation of expert elicited distributions. This paper investigates the potential of applying an empirical Bayes development of the James–Stein shrinkage estimation technique on the CM’s weights to derive shrinkage weights with reduced mean squared errors. We analyze 51 professional CM expert elicitation studies. We investigate the differences between the classical and the (new) shrinkage CM weights and the benefits of using the new weights. In theory, the outcome of a probabilistic model using the shrinkage weights should be better than that obtained when using the classical weights because shrinkage estimation techniques reduce the mean squared errors of estimators in general. In particular, the empirical Bayes shrinkage method used here reduces the assigned weights for those experts with larger variances in the corresponding sampling distributions of weights in the experiment. We measure improvement of the aggregated judgments in a cross-validation setting using two studies that can afford such an approach. Contrary to expectations, the results are inconclusive. 
However, in practice, we can use the proposed shrinkage weights to increase the reliability of derived weights when only small-sized experiments are available. We demonstrate the latter on 49 post-2006 professional CM expert elicitation studies. Full article
(This article belongs to the Special Issue Feature Papers of Forecasting 2023)
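The James–Stein-style shrinkage discussed in this abstract pulls each expert's weight towards the grand mean, more strongly when its sampling variance is large. The estimator below is a generic sketch of that idea with made-up weights and variances, not the paper's exact empirical Bayes procedure.

```python
import numpy as np

def shrink_weights(w, var):
    """Empirical-Bayes (James-Stein-style) shrinkage of experts' weights.

    w   : classical CM weights, one per expert
    var : sampling variance of each weight estimate
    Each weight is pulled towards the grand mean, more strongly when its
    sampling variance is large relative to the spread of the weights."""
    w, var = np.asarray(w, float), np.asarray(var, float)
    m = w.mean()
    s2 = ((w - m) ** 2).sum()
    shrink = np.clip(1.0 - (len(w) - 3) * var / s2, 0.0, 1.0)
    w_js = m + shrink * (w - m)
    return w_js / w_js.sum()          # renormalise to a convex combination

# Four hypothetical experts; the last has a high-variance weight estimate.
w = shrink_weights([0.50, 0.30, 0.15, 0.05], [0.02, 0.02, 0.02, 0.10])
print(w)
```

The high-variance expert's weight moves furthest towards the mean, which is the stabilising effect the study exploits for small calibration experiments.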
Figure 1
<p>Normalized classical and shrinkage CM weights.</p>
Figure 2">
Figure 2
<p>Mean and maximum absolute differences between the classical and shrinkage CM weights.</p>
Figure 3">
Figure 3
<p>Particular studies with large differences between the classical and shrinkage CM weights.</p>
Figure 4">
Figure 4
<p>Particular study with the largest absolute differences between the classical and shrinkage CM weights.</p>
Figure 5">
Figure 5
<p>The DMs’ performance when using classical and shrinkage weights.</p>
">
34 pages, 1583 KiB  
Article
A Study to Identify Long-Term Care Insurance Using Advanced Intelligent RST Hybrid Models with Two-Stage Performance Evaluation
by You-Shyang Chen, Ying-Hsun Hung and Yu-Sheng Lin
Mathematics 2023, 11(13), 3010; https://doi.org/10.3390/math11133010 - 6 Jul 2023
Cited by 1 | Viewed by 1866
Abstract
With the motivation of long-term care 2.0 plans, forecasting models to identify potential customers of long-term care insurance (LTCI) are an important and interesting issue. From the limited literature, most past researchers emphasize traditional statistics techniques to address this issue; however, these are lacking in some areas. For example, intelligent hybrid models for LTCI are lacking, performance measurement of components for hybrid models is lacking, and research results for interpretative capacities are lacking, resulting in a black box scenario and difficulty in making decisions, and the gap between identifying potential customers and constructing hybrid models is unbridged. To solve the shortcomings mentioned above, this study proposes some advanced intelligent single and hybrid models; the study object is LTCI customers. The proposed hybrid models were used on the experimental dataset collected from real insurance data and possess the following advantages: (1) The feature selection technique was used to simplify variables for the purpose of improving model performance. (2) The performance of hybrid models was evaluated against some machine learning methods, including rough set theory, decision trees, multilayer perceptron, support vector machine, genetic algorithm, random forest, logistic regression, and naive Bayes, and sensitivity analysis was performed in terms of accuracy, coverage, rules number, and standard deviation. (3) We used the C4.5 algorithm of decision trees and the LEM2 algorithm of rough sets to extract and provide valuably comprehensible decisional rules as decision-making references for the interested parties for their varied benefits. (4) We used post hoc testing to verify the significant difference in groups. Conclusively, this study effectively identifies potential customers for their key attributes and creates a decision rule set of knowledge for use as a reference when solving practical problems by forming a structured solution. 
This study is a new trial in the LTCI application field and realizes novel creative application values. Such a hybrid model is rarely seen in identifying LTCI potential customers; thus, the study has sufficient application contribution and managerial benefits to attract much concern from the interested parties. Full article
(This article belongs to the Special Issue Industrial Mathematics in Management and Engineering)
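Among the classifiers benchmarked in this abstract is naive Bayes, which can be shown in a few lines: fit per-class feature means and variances, then classify by the larger log-posterior. The two synthetic features below are hypothetical stand-ins for insurance attributes, not the study's real data or its hybrid pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic two-class data standing in for insurance records
# (hypothetical features, e.g. age and annual premium).
n = 500
X0 = rng.normal([40.0, 1.0], [8.0, 0.3], size=(n, 2))   # non-buyers
X1 = rng.normal([55.0, 2.0], [8.0, 0.3], size=(n, 2))   # LTCI buyers
X, y = np.vstack([X0, X1]), np.repeat([0, 1], n)

# Gaussian naive Bayes: per-class feature means and variances,
# assuming conditional independence of features and equal priors.
mu = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
var = np.array([X[y == c].var(axis=0) for c in (0, 1)])

def predict(x):
    # Log-likelihood of x under each class's diagonal Gaussian
    logp = -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var).sum(axis=1)
    return int(np.argmax(logp))

acc = np.mean([predict(x) == t for x, t in zip(X, y)])
print(f"training accuracy: {acc:.2f}")
```

Unlike the rule lists extracted by C4.5 or LEM2 in the paper, this model explains its decisions only through the fitted class statistics, which is the interpretability gap the hybrid approach targets.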
Figure 1
<p>The processing flow of the proposed AIHLCIM.</p>
Figure 2">
Figure 2
<p>Rule results for DT-C4.5 in 66%/34% after feature selection and data discretization techniques.</p>
Figure 3">
Figure 3
<p>Rule results for DT-C4.5 in 66%/34% without feature selection and data discretization techniques.</p>
Figure 4">
Figure 4
<p>Rule results of a set for RS-LEM2 for 66%/34% after feature selection and data discretization.</p>
">