Search Results (28)

Search Parameters:
Keywords = cardinality constraints

27 pages, 1888 KiB  
Article
On the Game-Based Approach to Optimal Design
by Vladimir Kobelev
Eng 2024, 5(4), 3212-3238; https://doi.org/10.3390/eng5040169 - 4 Dec 2024
Viewed by 377
Abstract
A game problem of structural design is defined as a problem of playing against external circumstances. There are two classes of players, namely the “ordinal” and “cardinal” players. The ordinal players, designated as the “operator” and “nature”, endeavor to, respectively, minimize or maximize the payoff function, operating within the constraints of limited resources. The fundamental premise of this study is that the action of player “nature” is a priori unknown. Statistical decision theory addresses decision-making scenarios where these probabilities, whether or not they are known, must be considered. The solution to the substratum game is expressed as the value of the game “against nature”. The structural optimization extension of the game treats the value of the game “against nature” as a function of certain parameters; thus, the value of the game is contingent upon the design parameters. The cardinal players, the “designers”, choose the design parameters. There are two formulations of optimization. For a single cardinal player, the pursuit of the maximum and minimum values of the game reduces to a problem of optimal design. In the second formulation, there are multiple cardinal players with conflicting objectives. Accordingly, a superstratum game emerges, which addresses the interests of the superstratum players. Finally, optimal design problems for games with closed forms are presented. The game formulations could be applied to optimal design with uncertain loading, considering “nature” as the source of uncertainty.
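The two-level structure described here — an inner game against nature whose value depends on the design, and an outer choice of design parameters — can be sketched compactly. A minimal Python illustration with a hypothetical payoff matrix and pure-strategy play (the paper itself works with statistical decision theory and closed-form games):

```python
import numpy as np

def game_value(payoff):
    # Operator (rows) minimizes, nature (columns) maximizes: the value in
    # pure strategies is the operator's best worst-case payoff.
    return payoff.max(axis=1).min()

def optimal_design(designs, payoff_for):
    # Cardinal player: pick the design parameter whose substratum game
    # "against nature" has the smallest value.
    values = {d: game_value(payoff_for(d)) for d in designs}
    return min(values, key=values.get), values

# Hypothetical payoff: rows = operator actions, columns = nature's load cases,
# parameterized by a design variable d (e.g., a shape factor).
def payoff_for(d):
    loads = np.array([1.0, 2.0, 3.0])        # nature's choices
    actions = np.array([0.5, 1.0, 1.5])      # operator's choices
    return np.abs(loads[None, :] - d * actions[:, None])

best, values = optimal_design([0.8, 1.0, 1.2], payoff_for)
print(best, values)
```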
(This article belongs to the Special Issue Feature Papers in Eng 2024)
Figure 1: Shapes of beams for the Nash equilibrium states from Equation (57).
Figure 2: Dimensionless function Ψ(k) from Equation (62).
Figure 3: Eigenvalue 𝕃 as a function of length l and shape factor k.
Figure 4: Optimal shapes of twisted rods for the Nash equilibrium states.
Figure 5: Pareto fronts for the states of Nash equilibrium for different shape factors.
25 pages, 2301 KiB  
Article
Cryptocurrency Portfolio Allocation under Credibilistic CVaR Criterion and Practical Constraints
by Hossein Ghanbari, Emran Mohammadi, Amir Mohammad Larni Fooeik, Ronald Ravinesh Kumar, Peter Josef Stauvermann and Mostafa Shabani
Risks 2024, 12(10), 163; https://doi.org/10.3390/risks12100163 - 11 Oct 2024
Viewed by 1159
Abstract
The cryptocurrency market offers attractive but risky investment opportunities, characterized by rapid growth, extreme volatility, and uncertainty. Traditional risk management models, which rely on probabilistic assumptions and historical data, often fail to capture the market’s unique dynamics and unpredictability. In response to these challenges, this paper introduces a novel portfolio optimization model tailored for the cryptocurrency market, leveraging a credibilistic CVaR framework. CVaR was chosen as the primary risk measure because it is a downside risk measure that focuses on extreme losses, making it particularly effective in managing the heightened risk of significant downturns in volatile markets like cryptocurrencies. The model employs credibility theory and trapezoidal fuzzy variables to more accurately capture the high levels of uncertainty and volatility that characterize digital assets. Unlike traditional probabilistic approaches, this model provides a more adaptive and precise risk management strategy. The proposed approach also incorporates practical constraints, including cardinality and floor and ceiling constraints, ensuring that the portfolio remains diversified, balanced, and aligned with real-world considerations such as transaction costs and regulatory requirements. Empirical analysis demonstrates the model’s effectiveness in constructing well-diversified portfolios that balance risk and return, offering significant advantages for investors in the rapidly evolving cryptocurrency market. This research contributes to the field of investment management by advancing the application of sophisticated portfolio optimization techniques to digital assets, providing a robust framework for managing risk in an increasingly complex financial landscape.
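The credibilistic machinery underlying such models is compact. A minimal sketch of the credibility measure of a trapezoidal fuzzy variable, following the standard definition from credibility theory (the return parameters below are illustrative, not from the paper):

```python
def trapezoidal_membership(x, a, b, c, d):
    # Membership function of a trapezoidal fuzzy variable (a, b, c, d).
    if x < a or x > d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

def credibility_leq(r, a, b, c, d):
    # Cr{xi <= r} = (Pos{xi <= r} + Nec{xi <= r}) / 2, the average of the
    # possibility and necessity measures (Liu's credibility theory).
    pos = 1.0 if r >= b else trapezoidal_membership(r, a, b, c, d)
    nec = 0.0 if r <= c else 1.0 - trapezoidal_membership(r, a, b, c, d)
    return 0.5 * (pos + nec)

# Example: a fuzzy weekly return described by (-2%, 0%, 1%, 3%).
print(credibility_leq(-0.01, -0.02, 0.0, 0.01, 0.03))  # -> 0.25
```

A credibilistic CVaR then averages such tail credibilities over loss levels beyond the chosen confidence level.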
(This article belongs to the Special Issue Cryptocurrency Pricing and Trading)
Figure 1: A triangular fuzzy number.
Figure 2: A trapezoidal fuzzy number.
Figure 3: Credibility of a triangular fuzzy variable.
Figure 4: Credibility of a trapezoidal fuzzy variable.
Figure 5: Portfolios under different scenarios (source: authors’ own estimation).
18 pages, 2101 KiB  
Review
Robust Portfolio Mean-Variance Optimization for Capital Allocation in Stock Investment Using the Genetic Algorithm: A Systematic Literature Review
by Diandra Chika Fransisca, Sukono, Diah Chaerani and Nurfadhlina Abdul Halim
Computation 2024, 12(8), 166; https://doi.org/10.3390/computation12080166 - 18 Aug 2024
Viewed by 2030
Abstract
Traditional mean-variance (MV) models, considered effective in stable conditions, often prove inadequate in uncertain market scenarios. Therefore, there is a need for more robust portfolio optimization methods to handle the fluctuations and uncertainties in asset returns and covariances. This study performs a Systematic Literature Review (SLR) on robust portfolio mean-variance (RPMV) optimization in stock investment utilizing genetic algorithms (GAs). The SLR covered studies from 1995 to 2024, allowing a thorough analysis of the evolution and effectiveness of robust portfolio optimization methods over time. The SLR followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The result of the SLR presented a novel strategy combining robust optimization methods with a GA to enhance RPMV. The uncertainty parameters, cardinality constraints, optimization constraints, risk-aversion parameters, robust covariance estimators, and relative and absolute robustness formulations adopted were, on their own, unable to produce portfolios capable of maintaining performance despite market uncertainties. This led to the inclusion of GAs to solve the complex optimization problems associated with RPMV efficiently, as well as to fine-tune parameters to improve solution accuracy. In three papers, the empirical validation of the results was conducted using historical data from different global capital markets, such as the Hang Seng (Hong Kong), DAX 100 (Germany), the Financial Times Stock Exchange (FTSE) 100 (U.K.), S&P 100 (USA), Nikkei 225 (Japan), and the Indonesia Stock Exchange (IDX), and the results showed that the RPMV model optimized with a GA was more stable and provided higher returns compared with traditional MV models. Furthermore, the reviewed methods effectively mitigated market uncertainties, making them a valuable tool for investors aiming to optimize portfolios under uncertain conditions. The implications of this study relate to handling uncertainty in asset returns, dynamic portfolio parameters, and the effectiveness of GAs in solving portfolio optimization problems under uncertainty, providing near-optimal solutions with relatively low computational time.
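As a rough illustration of the combination the review surveys — a GA searching long-only weights against a worst-case mean-variance objective — here is a minimal sketch assuming box uncertainty on expected returns (operators and parameters are illustrative, not taken from any reviewed paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def robust_mv_fitness(w, mu, delta, cov, lam=3.0):
    # Worst-case mean-variance fitness under box uncertainty mu_i +/- delta_i:
    # with long-only weights, the worst case puts every return at its lower bound.
    return w @ (mu - delta) - lam * (w @ cov @ w)

def normalize(w):
    # Repair to the long-only simplex (non-negative weights summing to one).
    w = np.clip(w, 0.0, None)
    s = w.sum()
    return w / s if s > 0 else np.full_like(w, 1.0 / len(w))

def ga_optimize(mu, delta, cov, pop=60, gens=200, mut=0.05):
    n = len(mu)
    P = np.array([normalize(rng.random(n)) for _ in range(pop)])
    for _ in range(gens):
        fit = np.array([robust_mv_fitness(w, mu, delta, cov) for w in P])
        i, j = rng.integers(pop, size=(2, pop))          # binary tournament
        parents = P[np.where(fit[i] > fit[j], i, j)]
        alpha = rng.random((pop, 1))                     # arithmetic crossover
        kids = alpha * parents + (1 - alpha) * parents[rng.permutation(pop)]
        kids += mut * rng.standard_normal(kids.shape)    # Gaussian mutation
        P = np.array([normalize(w) for w in kids])
    fit = np.array([robust_mv_fitness(w, mu, delta, cov) for w in P])
    return P[fit.argmax()]

# Illustrative 4-asset instance.
mu = np.array([0.08, 0.12, 0.10, 0.07])
delta = np.array([0.02, 0.06, 0.03, 0.01])
cov = np.diag([0.04, 0.10, 0.06, 0.02])
print(ga_optimize(mu, delta, cov))
```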
(This article belongs to the Special Issue Quantitative Finance and Risk Management Research: 2nd Edition)
Figure 1: PRISMA stages.
Figure 2: Categorization of Dataset 1 by publication year and citation counts (generated with https://www.r-project.org/).
Figure 3: Commonly appearing words across Dataset 1 (generated with https://www.vosviewer.com/).
Figure 4: Mapping of the themes (generated with https://www.r-project.org/).
Figure 5: Visualization of the evolution of themes (generated with https://www.r-project.org/).
37 pages, 4204 KiB  
Article
MFC-RMA (Matrix Factorization and Constraints-Role Mining Algorithm): An Optimized Role Mining Algorithm
by Fubao Zhu, Chenguang Yang, Liang Zhu, Hongqiang Zuo and Jingzhong Gu
Symmetry 2024, 16(8), 1008; https://doi.org/10.3390/sym16081008 - 7 Aug 2024
Viewed by 657
Abstract
Role-based access control (RBAC) is a widely adopted access control model in various domains for defining security management. Role mining is closely related to role-based access control, as the latter employs role assignments to offer a flexible and scalable approach to managing permissions within an organization. The edge role mining problem (Edge RMP), a variant of the role mining problem (RMP), has long been recognized as an effective strategy for role assignment. Role mining, which groups users with similar access permissions into the same role, bears some resemblance to symmetry. Symmetry categorizes objects or graphics with identical characteristics into one group. Both involve a certain form of “classification” or “induction”. Edge-RMP reduces the associations between users and permissions, thereby lowering the security risks faced by the system. While an algorithm based on Boolean matrix factorization exists for this problem, it fails to further refine the resulting user–role assignment (UA) and role–permission assignment (PA) relationships. Additionally, this algorithm does not address constraint-related issues, such as cardinality constraints, user exclusion constraints, and user capabilities. Furthermore, it demonstrates significant redundancy of roles when handling large datasets, leaving room for further optimization of Edge-RMP results. To address these concerns, this paper proposes the MFC-RMA algorithm based on Boolean matrix factorization. The method achieves significant optimization of Edge-RMP results by handling relationships between roles possessing various permissions. Furthermore, this paper clusters, compresses, modifies, and optimizes the original data based on the similarity between users, ensuring its usability for role mining. Both theoretical and practical considerations are taken into account for different types of constraints, and algorithms are devised to reallocate roles incorporating these constraints, thereby generating UA and PA matrices. The proposed approach yields an optimal number of generated roles and a minimal total number of generated edges, addressing the aforementioned issues. Experimental results demonstrate that the algorithm reduces management overhead, provides efficient execution results, and ensures the accuracy of generated roles.
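The core object in role mining is the Boolean factorization of the user-permission matrix into UA and PA, and the Edge-RMP objective counts the user-role plus role-permission edges the factorization creates. A minimal sketch with a hypothetical 4-user, 5-permission instance:

```python
import numpy as np

def boolean_product(UA, PA):
    # Boolean matrix product: user u holds permission p iff some role r has
    # UA[u, r] = 1 and PA[r, p] = 1.
    return ((UA @ PA) > 0).astype(int)

def edge_count(UA, PA):
    # Edge-RMP objective: total user-role plus role-permission assignments.
    return int(UA.sum() + PA.sum())

# Hypothetical 4-user, 5-permission instance decomposed into 2 roles.
UPA = np.array([[1, 1, 0, 0, 1],
                [1, 1, 0, 0, 1],
                [0, 0, 1, 1, 1],
                [0, 0, 1, 1, 1]])
UA = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])
PA = np.array([[1, 1, 0, 0, 1],
               [0, 0, 1, 1, 1]])
assert (boolean_product(UA, PA) == UPA).all()   # exact decomposition
print(edge_count(UA, PA), "edges vs", int(UPA.sum()), "direct assignments")
```

Constraints such as role-user cardinality (RUC) or role-permission cardinality (RPC) then bound the row and column sums of UA and PA during the reallocation step.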
(This article belongs to the Section Computer)
Figure 1: Illustration of the basic role mining process: (a) original user-permission assignment (UPA) matrix and (b) decomposed user-role (UA) and role-permission (PA) matrices.
Figure 2: Illustration of Edge RMP processing results.
Figure 3: A role permission set that is a superset of another role permission set.
Figure 4: Two roles with a common permission set.
Figure 5: Illustration of preliminary experimental results: (a) Edge-RMP results and (b) the results of removing redundant rows and columns.
Figure 6: Optimized Edge-RMP algorithm results.
Figure 7: Initial RUC data.
Figure 8: The refined result of the RUC algorithm.
Figure 9: Initial RPC data.
Figure 10: The refined result of the RPC algorithm.
Figure 11: Diagram illustrating the role generation algorithm.
Figure 12: Graph illustrating the summation of edges.
Figure 13: Role-user assignment optimization for role-user cardinality (RUC).
Figure 14: Role-permission assignment optimization for role-permission cardinality (RPC).
Figures 15 and 16: Performance of RMC in the healthcare dataset with different MUC_role and different ρ.
Figures 17 and 18: Performance of RMC in the APJ dataset with different MUC_role and different ρ.
26 pages, 1467 KiB  
Article
A Novel Improved Genetic Algorithm for Multi-Period Fractional Programming Portfolio Optimization Model in Fuzzy Environment
by Chenyang Hu, Yuelin Gao and Eryang Guo
Mathematics 2024, 12(11), 1694; https://doi.org/10.3390/math12111694 - 29 May 2024
Viewed by 687
Abstract
Financial markets present complex historical data and an uncertain future, while investors always expect the least risk and the greatest return. This study presents a multi-period fractional portfolio model in a fuzzy environment, taking into account the limitations of asset quantity, asset position, transaction cost, and inter-period investment. This is a mixed-integer programming NP-hard problem. To solve it, an improved genetic algorithm (IGA) is presented. The IGA contribution mostly involves the following three points: (i) a cardinality constraint processing approach is presented for the cardinality constraint conditions in the model; (ii) logistic chaotic mapping is implemented to boost the initial population diversity; (iii) an adaptive golden-section variation probability formula is developed to strike the right balance between exploration and exploitation. To test the model’s logic and the performance of the proposed algorithm, this study picks stock data from the Shanghai Stock Exchange 50 for simulated investing and examines portfolio strategies under various constraints. In addition, the numerical results of simulated investment are compared and analyzed; the results show that the established models are in line with the actual market situation, the designed algorithm is effective, and the probability of obtaining the optimal value is more than 37.5% higher than with other optimization algorithms.
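Two of the three IGA ingredients are easy to illustrate. A minimal sketch of logistic chaotic initialization and a simple cardinality-repair rule (the repair rule here — keep the k largest weights and renormalize — is an illustrative stand-in for the paper’s processing approach):

```python
import numpy as np

def logistic_chaotic_population(pop_size, n_assets, x0=0.7, mu=4.0):
    # Logistic chaotic map x_{k+1} = mu * x_k * (1 - x_k): for mu = 4 the
    # iterates wander over (0, 1), giving a well-spread initial population.
    # (Avoid seeds on fixed points such as x0 = 0.25, 0.5, or 0.75.)
    x, pop = x0, np.empty((pop_size, n_assets))
    for i in range(pop_size):
        for j in range(n_assets):
            x = mu * x * (1.0 - x)
            pop[i, j] = x
    return pop

def repair_cardinality(w, k):
    # Illustrative repair: keep the k largest weights, renormalize to sum to 1.
    idx = np.argsort(w)[::-1][:k]
    out = np.zeros_like(w)
    out[idx] = w[idx]
    return out / out.sum()

pop = logistic_chaotic_population(pop_size=50, n_assets=10)
print(repair_cardinality(pop[0], k=4))   # a portfolio holding at most 4 assets
```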
Figure 1: Full-text frame diagram.
Figure 2: Flow chart of the improved IGA.
Figure 3: Frequency distribution histogram.
17 pages, 4881 KiB  
Article
Effects of Limiting the Number of Different Cross-Sections Used in Statically Loaded Truss Sizing and Shape Optimization
by Nenad Kostić, Nenad Petrović, Vesna Marjanović, Ružica R. Nikolić, Janusz Szmidla, Nenad Marjanović and Robert Ulewicz
Materials 2024, 17(6), 1390; https://doi.org/10.3390/ma17061390 - 18 Mar 2024
Viewed by 852
Abstract
This research aims to show the effects of adding cardinality constraints to limit the number of different cross-sections used in simultaneous sizing and shape optimization of truss structures. The optimal solutions for sizing and shape optimized trusses generally use an impractically high number of different cross-sections. This paper presents the influence of constraining the number of different cross-sections on the optimal results, to bring the scientific results closer to applicable ones. The savings achieved using the cardinality constraint are expected to manifest not just in the minimization of weight but in all the other aspects of truss construction, such as labor, assembly time, total weld length, surface area to be treated, transport, logistics, and so on. It is expected that the optimal weight of the structures will be greater than without this constraint; however, it will still be below that of conventionally sized structures, with the added benefits derived from the simplicity and elegance of the solution. The results of standard test examples for each cardinality constraint value are shown and compared to the same examples using only a single cross-section on all bars and to the overall optimal solution, which does not have the cardinality constraint. An additional comparison is made with the sizing-only optimization results from previously published research in which the authors first used the same cardinality constraint.
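A minimal sketch of how such a cardinality constraint can enter a sizing objective, here as a penalty on the number of distinct cross-section areas (the penalty form and all numbers are illustrative, not the paper’s formulation):

```python
import numpy as np

def distinct_sections(areas, decimals=6):
    # Number of different cross-section areas used in a candidate design.
    return len(set(np.round(areas, decimals)))

def penalized_weight(areas, lengths, density, max_distinct):
    # Truss weight with a penalty when the design uses more distinct
    # cross-sections than the cardinality constraint allows.
    weight = density * float(np.dot(areas, lengths))
    violation = max(0, distinct_sections(areas) - max_distinct)
    return weight * (1.0 + 10.0 * violation)  # illustrative penalty factor

# 10-bar example: areas in cm^2, lengths in m, at most 3 distinct sections.
areas = np.array([30.0, 30.0, 7.5, 7.5, 7.5, 30.0, 12.0, 12.0, 12.0, 7.5])
lengths = np.full(10, 9.144)
print(distinct_sections(areas), penalized_weight(areas, lengths, 7.85e-3, 3))
```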
Figure 1: General example of constructing the variable sets.
Figure 2: Planar 10-bar truss layout with labeled bars (1–10) and nodes ((1) to (6)).
Figure 3: Planar 17-bar truss layout with labeled bars (1–17) and nodes ((1) to (9)).
Figure 4: Spatial 25-bar truss layout with labeled bars (1–25) and nodes ((1) to (10)).
Figures 5–7: Comparison of results for sizing optimization [14] and sizing-shape optimization from this research for different numbers of cross-sections of the 10-bar, 17-bar, and 25-bar (cross-section groups) truss problems, respectively.
Figure 8: Optimal solutions of the 10-bar truss problem where (a) 8 and (b) 3 different cross-sections are used.
Figure 9: Optimal solutions of the 17-bar truss problem where (a) 6 and (b) 3 different cross-sections are used.
Figure 10: Optimal solutions of the 25-bar spatial truss problem where (a) 6 and (b) 3 different cross-sections are used in groups.
Figure 11: Differences from the optimal solutions based on the number of different cross-sections used for the 10-, 17- and 25-bar examples.
26 pages, 9562 KiB  
Article
Hyperspectral Anomaly Detection with Auto-Encoder and Independent Target
by Shuhan Chen, Xiaorun Li and Yunfeng Yan
Remote Sens. 2023, 15(22), 5266; https://doi.org/10.3390/rs15225266 - 7 Nov 2023
Cited by 3 | Viewed by 2052
Abstract
As an unsupervised data representation neural network, the auto-encoder (AE) has shown great potential in denoising, dimensionality reduction, and data reconstruction. Many AE-based background (BKG) modeling methods have been developed for hyperspectral anomaly detection (HAD). However, their performance is subject to their unbiased reconstruction of BKG and target pixels. This article presents a rather different low-rank and sparse matrix decomposition (LRaSMD) method based on AE, named auto-encoder and independent target (AE-IT), for hyperspectral anomaly detection. First, the encoder weight matrix, obtained by a designed AE network, is utilized to construct a projector for generating a low-rank component in the encoder subspace. By adaptively and reasonably determining the number of neurons in the latent layer, the designed AE-based method can promote the reconstruction of BKG. Second, to ensure independence and representativeness, the component in the encoder orthogonal subspace is sphered and then subjected to unsupervised target finding to construct an anomaly space. In order to mitigate the influence of noise on anomaly detection, a sparse cardinality (SC) constraint is enforced on the component in the anomaly space to obtain the sparse anomaly component. Finally, an anomaly detector is constructed by combining the Mahalanobis distance with multiple components, including the encoder component and the sparse anomaly component, to detect anomalies. The experimental results demonstrate that AE-IT performs competitively compared to LRaSMD-based models and AE-based approaches.
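The projector-plus-Mahalanobis idea can be sketched generically. Assuming an already-trained encoder weight matrix W, the following splits pixels into an encoder-subspace component and an orthogonal residual and scores the residual (a simplified stand-in for the full AE-IT pipeline, which additionally spheres the residual, finds independent targets, and enforces the sparse cardinality constraint):

```python
import numpy as np

def encoder_projector(W):
    # Orthogonal projector onto the column space of the encoder weights W
    # (shape: bands x latent neurons): P = W (W^T W)^{-1} W^T.
    return W @ np.linalg.solve(W.T @ W, W.T)

def anomaly_scores(X, W):
    # X: (pixels, bands). Split each pixel into an encoder-subspace
    # (background) component and its orthogonal residual, then score the
    # residual with a Mahalanobis distance.
    P = encoder_projector(W)
    residual = X - X @ P                      # P is symmetric
    mu = residual.mean(axis=0)
    cov = np.cov(residual, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    diff = residual - mu
    return np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)

# Toy usage with random data standing in for a flattened hyperspectral cube.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))
W = rng.standard_normal((20, 5))              # a trained encoder would go here
print(anomaly_scores(X, W).shape)             # (500,)
```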
(This article belongs to the Section Remote Sensing Image Processing)
Figure 1: A graphic diagram of the AE-IT algorithm.
Figure 2: The five datasets (I: HYDICE Urban Scene; II: Pavia Scene; III: Hyperion Scene; IV: San Diego Airport Scene; V: Gulfport Scene): (a) pseudo-color image; (b) ground-truth map; (c) mean spectrum of target and BKG.
Figure 3: The criteria curves versus p for the HYDICE Urban Scene.
Figures 4–8: The detection results of AE-IT with its six detectors for the HYDICE Urban, Pavia City, Hyperion, San Diego Airport, and Gulfport Scenes, respectively.
Figures 9, 11, 13, 15, and 17: The detection results for the HYDICE Urban, Pavia City, Hyperion, San Diego Airport, and Gulfport Scenes using different methods.
Figures 10, 12, 14, 16, and 18: The 3D-ROC and three 2D-ROC curves for the same scenes using different methods.
17 pages, 540 KiB  
Article
Carousel Greedy Algorithms for Feature Selection in Linear Regression
by Jiaqi Wang, Bruce Golden and Carmine Cerrone
Algorithms 2023, 16(9), 447; https://doi.org/10.3390/a16090447 - 19 Sep 2023
Viewed by 1775
Abstract
The carousel greedy algorithm (CG) was proposed several years ago as a generalized greedy algorithm. In this paper, we implement CG to solve linear regression problems with a cardinality constraint on the number of features. More specifically, we introduce a default version of CG that has several novel features. We compare its performance against stepwise regression and more sophisticated approaches using integer programming, and the results are encouraging. For example, CG consistently outperforms stepwise regression (from our preliminary experiments, we see that CG improves upon stepwise regression in 10 of 12 cases), but it is still computationally inexpensive. Furthermore, we show that the approach is applicable to several more general feature selection problems.
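The carousel greedy paradigm is simple to state for this problem: build a greedy solution, then repeatedly retire the oldest feature and greedily add the best replacement, keeping the best subset seen. A minimal sketch (the paper’s default CG has additional controls, e.g., a partial tear-down parameter β):

```python
import numpy as np

def rss(X, y, S):
    # Residual sum of squares of least squares restricted to feature subset S.
    if not S:
        return float(y @ y)
    beta, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
    r = y - X[:, S] @ beta
    return float(r @ r)

def greedy(X, y, k):
    S = []
    for _ in range(k):
        S.append(min((j for j in range(X.shape[1]) if j not in S),
                     key=lambda j: rss(X, y, S + [j])))
    return S

def carousel_greedy(X, y, k, alpha=3):
    # After a greedy start, cycle alpha*k times: drop the oldest feature,
    # greedily add the best replacement, and keep the best subset seen.
    S = greedy(X, y, k)
    best_S, best_val = list(S), rss(X, y, S)
    for _ in range(alpha * k):
        S.pop(0)
        S.append(min((j for j in range(X.shape[1]) if j not in S),
                     key=lambda j: rss(X, y, S + [j])))
        val = rss(X, y, S)
        if val < best_val:
            best_S, best_val = list(S), val
    return best_S, best_val

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))
y = X[:, [2, 7, 11]] @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(200)
print(carousel_greedy(X, y, k=3))
```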
(This article belongs to the Special Issue Bio-Inspired Algorithms)
Figure 1: An illustration of head and tail for k = 5, β = 0.
Figure 2: Flowchart of default CG.
Figure 3: Improvements from carousel greedy with stepwise initialization for the CT slice dataset; the RSS of stepwise regression is at x = 0.
Figure 4: Improvements from carousel greedy with stepwise initialization for the Building dataset.
Figure 5: Improvements from carousel greedy with stepwise initialization for the Insurance dataset.
Figure 6: Improvements from carousel greedy with random initialization for the Insurance dataset.
22 pages, 1297 KiB  
Article
The Hypervolume Newton Method for Constrained Multi-Objective Optimization Problems
by Hao Wang, Michael Emmerich, André Deutz, Víctor Adrián Sosa Hernández and Oliver Schütze
Math. Comput. Appl. 2023, 28(1), 10; https://doi.org/10.3390/mca28010010 - 9 Jan 2023
Viewed by 3560
Abstract
Recently, the Hypervolume Newton Method (HVN) has been proposed as a fast and precise indicator-based method for solving unconstrained bi-objective optimization problems with twice continuously differentiable objective functions. The HVN is defined on the space of (vectorized) fixed-cardinality sets of decision-space vectors for a given multi-objective optimization problem (MOP) and seeks to maximize the hypervolume indicator, adopting the Newton–Raphson method for deterministic numerical optimization. To extend its scope to non-convex optimization problems, the HVN method was hybridized with a multi-objective evolutionary algorithm (MOEA), which resulted in a competitive solver for continuous unconstrained bi-objective optimization problems. In this paper, we extend the HVN to constrained MOPs with, in principle, any number of objectives. As in the original variant, the first- and second-order derivatives of the involved functions have to be given either analytically or numerically. We demonstrate the applicability of the extended HVN on a set of challenging benchmark problems and show that the new method can readily solve equality constraints with high precision and, to some extent, also inequalities. We finally use HVN as a local search engine within an MOEA and show the benefit of this hybrid method on several benchmark problems.
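The quantity being maximized is the hypervolume indicator; in the bi-objective case it is the area dominated by the approximation set relative to a reference point. A minimal sketch of that objective for minimization problems (HVN itself then takes Newton–Raphson steps using the indicator’s exact gradient and Hessian, which this sketch does not attempt):

```python
import numpy as np

def hypervolume_2d(points, ref):
    # Area dominated by a set of bi-objective minimization points,
    # measured against the reference point ref.
    ref = np.asarray(ref, dtype=float)
    pts = sorted((np.asarray(p, float) for p in points
                  if np.all(np.asarray(p, float) < ref)),
                 key=lambda p: p[0])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:            # non-dominated points form a staircase
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

print(hypervolume_2d([(1, 3), (2, 2), (3, 1)], ref=(4, 4)))  # -> 6.0
```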
Figure 1: Example of a hypervolume indicator Hessian computation in three-dimensional objective space with a collection of points {y^(1), y^(2), y^(3)} and reference point r.
Figure 2: On problem P1, convergence of the HVN method for three initializations of the starting approximation set (μ = 50) — linear (top row), logistic (middle), and logit spacing (bottom). Each row shows the final approximation set (left; green stars), the corresponding objective points (middle; green stars), and the evolution of the HV value and ‖G(X, λ)‖ (right).
Figure 3: On problem P2 with a spherical constraint, results for three approximation-set sizes (μ ∈ {20, 40, 60}; top to bottom), with the same three-column layout as Figure 2. The initial points are sampled uniformly at random in the convex hull of (1, 1, 0)^T, (1, −1, 0)^T, and (−1, 0, 0)^T.
Figure 4: On problem P3 with a spherical constraint, results for μ ∈ {20, 40, 60} with the same layout, showing the evolution of the HV value and ‖G(X)‖. The initial decision points are sampled uniformly at random in the feasible part of [0, 4] × [−4, 4]^2.
Figure 5: On the Eq-DTLZ1-3 problems, the HVN method starts from a small local perturbation (black crosses) of the Pareto set, X* + 0.02 U(0, 1), where X* (of size 200) is sampled uniformly at random on the Pareto set; the final approximation set is shown as green points. Only the first three search dimensions are shown for the decision space.
Figure 6: On the Eq-DTLZ2 (a) and Eq-IDTLZ1 (b) problems, the HVN + NSGA-III hybrid is compared with standalone NSGA-III at roughly the same budget: the hybrid runs NSGA-III (μ = 200) for 1000 iterations and then 10 HVN iterations (ca. 270 s CPU on an Intel Core i5-8257U, corresponding to ca. 4.8 × 10^5 function evaluations), while the standalone NSGA-III runs 3400 (= 4.8 × 10^5 / 200 + 1000) iterations with the same hyperparameters. The decision space is [0, 1]^11, and the HVN reference point is (1, 1, 1)^T.
12 pages, 300 KiB  
Article
Solving Constrained Mean-Variance Portfolio Optimization Problems Using Spiral Optimization Algorithm
by Werry Febrianti, Kuntjoro Adji Sidarto and Novriana Sumarti
Int. J. Financial Stud. 2023, 11(1), 1; https://doi.org/10.3390/ijfs11010001 - 20 Dec 2022
Cited by 3 | Viewed by 3144
Abstract
Portfolio optimization is an activity for balancing return and risk. In this paper, we used mean-variance (M-V) portfolio models with buy-in threshold and cardinality constraints. This model can be formulated as a mixed integer nonlinear programming (MINLP) problem. To solve this constrained mean-variance portfolio optimization problem, we propose the use of a modified spiral optimization algorithm (SOA). Then, we use Bartholomew-Biggs and Kane’s data to validate our proposed algorithm. The results show that our proposed algorithm can be an efficient tool for solving this portfolio optimization problem.
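The basic spiral optimization update rotates and contracts every search point around the current best. A minimal sketch of the unconstrained n-dimensional update (the paper’s modified SOA additionally handles the MINLP structure — buy-in thresholds and cardinality — which is omitted here):

```python
import numpy as np

def rotation_matrix(n, theta):
    # Composition of plane rotations over all coordinate pairs, as in the
    # n-dimensional spiral optimization model.
    R = np.eye(n)
    for i in range(n - 1):
        for j in range(i + 1, n):
            G = np.eye(n)
            G[i, i] = G[j, j] = np.cos(theta)
            G[i, j], G[j, i] = -np.sin(theta), np.sin(theta)
            R = R @ G
    return R

def spiral_step(X, center, r=0.95, theta=np.pi / 4):
    # Rotate and contract every search point around the current best point.
    R = rotation_matrix(X.shape[1], theta)
    return center + r * (X - center) @ R.T

# Demo: minimize the sphere function with 30 points in 4 dimensions.
rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(30, 4))
for _ in range(100):
    costs = (X ** 2).sum(axis=1)
    X = spiral_step(X, X[np.argmin(costs)])
print(X[np.argmin((X ** 2).sum(axis=1))])   # approaches the origin
```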
23 pages, 508 KiB  
Article
Making Improvisations, Reconfiguring Livelihoods: Surviving the COVID-19 Lockdown by Urban Residents in Uganda
by Esther K. Nanfuka and David Kyaddondo
COVID 2022, 2(12), 1666-1688; https://doi.org/10.3390/covid2120120 - 26 Nov 2022
Cited by 4 | Viewed by 3272
Abstract
The declaration of the coronavirus disease 2019 (COVID-19) pandemic led to the enforcement of national lockdowns in several countries. While lockdowns are generally effective in containing the spread of infectious diseases, they are associated with negative impacts on livelihoods. Although evidence suggests that urban informal sector populations in low-resource settings bore the brunt of the adverse economic effects of COVID-19 lockdowns, little is known about how they survived. The article provides insights into the survival mechanisms of urban informal sector populations during a COVID-19 lockdown. Data are from narrative interviews with 30 residents of Kampala City and surrounding areas. We found that the COVID-19 lockdown chiefly jeopardized the livelihoods of urban residents through job loss and reduced incomes. Affected individuals and households primarily survived by making improvisations such as adjusting expenditures and reconfiguring their livelihoods. The cardinal elements of the informal sector, such as limited regulation, served as both a facilitator of and a constraint to survival. Therefore, the informal sector is an important buffer against livelihood shocks in situations of crisis. However, its inherent limitations imply that promoting livelihood resilience among urban residents during lockdowns and similar shocks may necessitate harnessing both formal and informal safety nets.
Figure 1: Themes and sub-themes.
29 pages, 4423 KiB  
Article
A Synthesis of Pulse Influenza Vaccination Policies Using an Efficient Controlled Elitism Non-Dominated Sorting Genetic Algorithm (CENSGA)
by Asma Khalil Alkhamis and Manar Hosny
Electronics 2022, 11(22), 3711; https://doi.org/10.3390/electronics11223711 - 13 Nov 2022
Cited by 5 | Viewed by 1534
Abstract
Seasonal influenza (also known as flu) is responsible for considerable morbidity and mortality across the globe. The three recognized pathogens that cause epidemics during the winter season are influenza A, B and C. The influenza virus is particularly dangerous due to its mutability. Vaccines are an effective tool in preventing seasonal influenza, and their formulas are updated yearly according to the WHO recommendations. However, in order to facilitate decision-making in the planning of the intervention, policymakers need information on the projected costs and quantities related to introducing the influenza vaccine, helping governments obtain an optimal allocation of the vaccine each year. In this paper, an approach based on a Controlled Elitism Non-Dominated Sorting Genetic Algorithm (CENSGA) model is introduced to optimize the allocation of the influenza vaccination. A bi-objective model is formulated to control the infection volume and reduce the unit cost of the vaccination campaign. An SIR (Susceptible–Infected–Recovered) model is employed for representing a potential epidemic. The model constraints are based on the epidemiological model, time management and vaccine quantity. A two-phase optimization process is proposed: guardian control followed by contingent controls. The proposed approach is an evolutionary metaheuristic multi-objective optimization algorithm with a local search procedure based on a hash table. Moreover, in order to optimize the scheduling of a set of policies over a predetermined time to form a complete campaign, an extended CENSGA is introduced with a variable-length chromosome (VLC) along with mutation and crossover operations. To validate the applicability of the proposed CENSGA, it is compared with the classical Non-Dominated Sorting Genetic Algorithm (NSGA-II). The results indicate that optimal vaccination campaigns with compromise tradeoffs between the two conflicting objectives can be designed effectively using CENSGA, providing policymakers with a number of alternatives to accommodate the best strategies. The results are analyzed using graphical and statistical comparisons in terms of cardinality, convergence, distribution and spread quality metrics, illustrating that the proposed CENSGA is effective and useful for determining the optimal vaccination allocation campaigns.
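The epidemic substrate is the standard SIR model, and a pulse vaccination policy moves a fraction of susceptibles to the recovered class at chosen times. A minimal forward-Euler sketch with illustrative parameters (the paper optimizes over such pulse schedules with the bi-objective CENSGA):

```python
def sir_step(S, I, R, beta, gamma, dt):
    # Forward-Euler step of dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I,
    # dR/dt = gamma*I (population fractions).
    new_inf, rec = beta * S * I * dt, gamma * I * dt
    return S - new_inf, I + new_inf - rec, R + rec

def simulate(S0=0.99, I0=0.01, beta=0.4, gamma=0.1, days=100, dt=0.1, pulses=()):
    # pulses: (day, v) pairs; each moves a fraction v of susceptibles to R.
    S, I, R = S0, I0, 0.0
    for step in range(int(days / dt)):
        for day, v in pulses:
            if abs(step * dt - day) < dt / 2:
                S, R = S * (1 - v), R + S * v
        S, I, R = sir_step(S, I, R, beta, gamma, dt)
    return S, I, R

print(simulate())                      # uncontrolled epidemic
print(simulate(pulses=[(10, 0.3)]))    # 30% pulse vaccination on day 10
```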
(This article belongs to the Special Issue Applications of Artificial Intelligence for Health)
Figure 1: General outline of the SIR model.
Figure 2: Anatomy of the optimizer’s main components.
Figure 3: Chromosome representation.
Figure 4: Selection strategy of CENSGA.
Figure 5: Illustration of the controlled elitism procedure.
Figure 6: Main effects plot for the SN ratio of NSGA-II in (A) round 1 and (B) round 2.
Figure 7: Main effects plot for the SN ratio of CENSGA.
Figure 8: Non-dominated Pareto front of the guardian phase of the problem; black circles are solutions on the front, and the red star marks the selected solution (Δt_gc = 8.3335, v_gc = 0.8752). (A) NSGA-II, (B) CENSGA.
Figure 9: Image set of all solutions obtained across all executions of the evolutionary algorithms for the contingent phase presented in Equation (6). (A) NSGA-II, (B) CENSGA.
Figure 10: Non-dominated Pareto front of the contingent phase problem. (A) NSGA-II, (B) CENSGA.
Figure 11: SIR behavior under different situations: (A) general behavior; (B) behavior of the selected NSGA-II solution; (C) behavior of the selected CENSGA solution.
Figure 12: Mean number of vaccinations for (A) solution 8 from the NSGA-II non-dominated set and (B) solution 10 from the CENSGA non-dominated set.
Figure 13: Mean values of P_C and P_RE.
Figure A1: Flowchart of the CENSGA algorithm.
Figure A2: Performance measures of all mean experiment cases: (A) error ratio; (B) generational distance; (C) ε-indicator; (D) hypervolume.
19 pages, 469 KiB  
Article
Efficient Streaming Algorithms for Maximizing Monotone DR-Submodular Function on the Integer Lattice
by Bich-Ngan T. Nguyen, Phuong N. H. Pham, Van-Vang Le and Václav Snášel
Mathematics 2022, 10(20), 3772; https://doi.org/10.3390/math10203772 - 13 Oct 2022
Viewed by 1804
Abstract
In recent years, the issue of maximizing submodular functions has attracted much interest from research communities. However, most submodular functions are specified as set functions. Meanwhile, recent advancements have studied maximizing a diminishing-return submodular (DR-submodular) function on the integer lattice, since plenty of publications show that the DR-submodular function has wide applications in optimization problems such as sensor placement, optimal budget allocation, social networks, and especially machine learning. In this research, we propose two main streaming algorithms for the problem of maximizing a monotone DR-submodular function under a cardinality constraint. Our two algorithms, called StrDRS1 and StrDRS2, have approximation ratios of (1/2 − ϵ) and (1 − 1/e − ϵ), with O((n/ϵ) log(log B/ϵ) log k) and O((n/ϵ) log B) query complexity, respectively. We conducted several experiments to investigate the performance of our algorithms based on the budget allocation problem over the bipartite influence model, an instance of the monotone submodular function maximization problem over the integer lattice. The experimental results indicate that our proposed algorithms not only provide solutions with a high value of the objective function, but also outperform the state-of-the-art algorithms in terms of both the number of queries and the running time.
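The flavor of such threshold-based streaming algorithms is easy to convey in the simpler set-function setting (the paper works on the integer lattice, where an element may be taken multiple times up to a box bound B). A one-pass sketch that assumes a guess v of the optimum; in practice, a geometric grid of guesses is maintained in parallel:

```python
def threshold_streaming(stream, f, k, v):
    # One pass with a guessed optimum v: take element e while |S| < k and its
    # marginal gain f(S + e) - f(S) clears (v/2 - f(S)) / (k - |S|); this gives
    # a (1/2 - eps)-approximation when v is within a (1 +/- eps) factor of OPT.
    S = []
    for e in stream:
        if len(S) < k:
            gain = f(S + [e]) - f(S)
            if gain >= (v / 2 - f(S)) / (k - len(S)):
                S.append(e)
    return S

# Example: a coverage function (monotone submodular) over small sets.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}, 4: {"a", "c", "d"}}
f = lambda S: len(set().union(*[sets[i] for i in S])) if S else 0
print(threshold_streaming([1, 2, 3, 4], f, k=2, v=4))  # -> [1, 2], coverage 3
```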
(This article belongs to the Special Issue Complex Network Modeling: Theory and Applications)
Figure 1: The results of the experimental comparison of algorithms on the datasets.
16 pages, 2137 KiB  
Review
Impact of Constraint-Induced Movement Therapy (CIMT) on Functional Ambulation in Stroke Patients—A Systematic Review and Meta-Analysis
by Ravi Shankar Reddy, Kumar Gular, Snehil Dixit, Praveen Kumar Kandakurti, Jaya Shanker Tedla, Ajay Prashad Gautam and Devika Rani Sangadala
Int. J. Environ. Res. Public Health 2022, 19(19), 12809; https://doi.org/10.3390/ijerph191912809 - 6 Oct 2022
Cited by 8 | Viewed by 5646
Abstract
Constraint-induced movement therapy (CIMT) has been delivered in the stroke population to improve lower-extremity functions. However, its efficacy on prime components of functional ambulation, such as gait speed, balance, and cardiovascular outcomes, is ambiguous. The present review aims to delineate the effect of various lower-extremity CIMT (LECIMT) protocols on gait speed, balance, and cardiovascular outcomes. Materials and methods: the databases used to collect relevant articles were EBSCO, PubMed, PEDro, Science Direct, Scopus, MEDLINE, CINAHL, and Web of Science. For this analysis, clinical trials involving stroke populations in different stages of recovery, >18 years old, and treated with LECIMT were considered. Only ten studies fulfilled the inclusion criteria and were included in this review. The effect of CIMT on gait speed and balance outcomes was assessed using a random- or fixed-effect model. CIMT, when compared to control interventions, showed superior or similar effects. The effects of LECIMT on gait speed and balance were non-significant, with standardized mean differences (SMDs) of 0.13 and 4.94 and 95% confidence intervals (CIs) of (−0.18–0.44) and (−2.48–12.37), respectively. In this meta-analysis, we observed that although several trials claimed the efficacy of LECIMT in improving lower-extremity functions, gait speed and balance did not demonstrate a significant effect size favoring LECIMT. Therefore, CIMT treatment protocols should consider the patient’s functional requirements, the cardinal principles of CIMT, and cardiorespiratory parameters.
Figure 1: Flowchart depicting the process of synthesis of included studies for this review.
Figure 2: Details of risk of bias among the included studies.
Figure 3: Gait speed: post-treatment and post-follow-up.
Figure 4: Balance: post-treatment and post-follow-up.
20 pages, 1165 KiB  
Review
Environmental DNA Metabarcoding: A Novel Contrivance for Documenting Terrestrial Biodiversity
by Shahnawaz Hassan, Sabreena, Peter Poczai, Bashir Ah Ganai, Waleed Hassan Almalki, Abdul Gafur and R. Z. Sayyed
Biology 2022, 11(9), 1297; https://doi.org/10.3390/biology11091297 - 31 Aug 2022
Cited by 17 | Viewed by 7581
Abstract
The dearth of cardinal data on species presence, dispersion, abundance, and habitat prerequisites, besides the threats imposed by escalating human pressure, has enormously affected biodiversity conservation. The innovative concept of eDNA has been introduced as a way of overcoming many of the difficulties of rigorous conventional investigations, and is hence becoming a prominent and novel method for assessing biodiversity. Recently, the demand for eDNA in ecology and conservation has expanded exceedingly, despite the lack of coordinated development in appreciation of its strengths and limitations. Therefore, it is pertinent and indispensable to evaluate the extent and significance of eDNA-based investigations in terrestrial habitats and to classify and recognize the critical considerations that need to be accounted for before using such an approach. Presented here is a brief review summarizing the prospects and constraints of utilizing eDNA in terrestrial ecosystems, which have not been explored and exploited in great depth and detail. Given these obstacles, we focused primarily on compiling the most current research findings from accessible journals on eDNA analysis that discuss terrestrial ecosystems (2012–2022). In the current evaluation, we also review advancements and limitations related to the eDNA technique.
(This article belongs to the Special Issue Macro-Ecology, Macro-Evolution and Conservation of Animals and Plants)
Figure 1: Fate and mobility of DNA in a terrestrial environment.
Figure 2: Environmental DNA in terrestrial ecosystems for biodiversity characterization.