Discriminant Analysis Based on the Patch Length and Crack Depth to Determine the Convergence of Global–Local Non-Intrusive Analysis with 1D-to-3D Coupling
Figure 1. Degrees of freedom per node for 1D frame elements (using SAP2000) and 3D tetrahedron elements (using Code_Aster).
Figure 2. Reference mechanical problem (domain of the structure Ω_R).
Figure 3. Summary of the geometrical configurations for all cases.
Figure 4. Original and cracked section of a wide-flange steel section.
Figure 5. Contour lines for the number of iterations for all 39 cases analyzed.
Figure 6. Linear discriminant analysis for the 39 cases.
Figure 7. Confusion matrix of LDA.
Figure 8. Quadratic discriminant analysis for the 39 cases.
Figure 9. Confusion matrix of QDA.
Figure 10. SAP2000 3-story building.
Figure 11. Linear discriminant analysis with building cases.
Figure 12. Quadratic discriminant analysis with building cases.
Figure 13. Linear and quadratic discriminant analyses with building cases and safety factors.
Abstract
1. Introduction
- Prepare the database: The data used to build the models are organized into input variables (features) and output variables (labels, categories, or classes). In the case of structures, geometric dimensions and material properties can be treated as features, whereas resistance and deflection are used as labels. In this step, it is important to perform a classification analysis in order to identify the main features among different experiments and to group large amounts of data, considering the variables that adequately explain the analyzed behaviors. The characteristics of the initial data and the performance of the learning algorithm both affect the accuracy of ML models.
- Learn: In this step, one or more existing ML algorithms are trained using the data prepared in the previous step.
- Evaluate the model: Once the ML model is trained, its performance is evaluated using a loss function as a performance indicator. A minimal sketch of this three-step workflow is given after this list.
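As a rough illustration of the workflow, the sketch below uses scikit-learn with a handful of (crack depth, patch length, convergence) observations taken from the results table later in this paper; the train/test split, the classifier, and the loss function are illustrative choices, not the study's actual pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import log_loss, accuracy_score

# 1) Prepare the database: features = [crack depth (mm), patch length (mm)],
#    label = convergence status (1 = converged, 0 = not converged); values from a few table rows.
X = np.array([[25, 500], [50, 650], [75, 750], [25, 1000], [90, 500], [140, 1250]], dtype=float)
y = np.array([0, 1, 1, 1, 0, 0])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33,
                                                    random_state=0, stratify=y)

# 2) Learn: train an existing ML algorithm on the prepared data.
model = LinearDiscriminantAnalysis().fit(X_train, y_train)

# 3) Evaluate the model: use a loss function / score as the performance indicator.
print("log-loss :", log_loss(y_test, model.predict_proba(X_test), labels=[0, 1]))
print("accuracy :", accuracy_score(y_test, model.predict(X_test)))
```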
2. Methodology
2.1. Global–Local Non-Intrusive Analysis with 1D-to-3D Coupling
- The equation systems for the global problem are solved using SAP2000, returning the displacements of the global domain, as presented in Equation (1).
- Since the software allows the nodal reaction forces of an embedded substructure to be obtained, the auxiliary problem is solved, returning the reaction forces in the interface zone. If a given software package does not provide the nodal reaction forces directly, Equation (2) must be used. The interface displacements of the auxiliary domain can be obtained from the global model displacements, as expressed in Equation (3).
- A non-linear local problem is solved using the Code_Aster software, imposing displacements onto the nodes of the interface. After the non-linear problem is solved, i.e., a number of propagation steps are completed, the final displacements of the local model are obtained using Equation (4). The reaction forces of the local model at the interface are obtained using Equation (5).
- The local reaction forces are then integrated and transformed into three equivalent forces and three moments, consistent with the six degrees of freedom per node used in SAP2000. This is performed using the PROJ_CHAMP operator in Code_Aster.
- With the local and auxiliary forces calculated, the correction forces for the next iteration are obtained using Equation (6). A schematic sketch of this iterative coupling loop is given after this list.
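The staggered iteration described above can be summarized by the schematic driver below. It is only a sketch: solve_global, solve_auxiliary, solve_local, and project_to_6dof are hypothetical callables standing in for the SAP2000 and Code_Aster interfaces (not real APIs), the sign convention of the reaction mismatch and the stopping test are generic choices, and the 50-iteration cap simply mirrors the cap reported in the results tables.

```python
import numpy as np

def global_local_coupling(solve_global, solve_auxiliary, solve_local, project_to_6dof,
                          max_iter=50, tol=1e-6):
    """Schematic non-intrusive global-local iteration (solver callables supplied by the user)."""
    correction = 0.0  # accumulated correction loads on the interface nodes (zero at the first iteration)
    for iteration in range(1, max_iter + 1):
        u_global = solve_global(correction)                # global 1D frame problem (e.g., SAP2000)
        reac_aux, u_interface = solve_auxiliary(u_global)  # auxiliary reactions and interface displacements
        reac_local = solve_local(u_interface)              # non-linear 3D patch (e.g., Code_Aster)
        reac_local_6dof = project_to_6dof(reac_local)      # integrate 3D reactions into forces/moments (PROJ_CHAMP-like)

        residual = reac_local_6dof - reac_aux              # reaction mismatch at the interface
        if np.linalg.norm(residual) < tol * max(np.linalg.norm(reac_aux), 1.0):
            return u_global, iteration                     # converged
        correction = correction + residual                 # accumulate correction forces for the next global solve
    return u_global, max_iter                              # iteration cap reached (reported as non-convergence)
```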
2.2. Machine Learning Models
2.2.1. Classification Models
- Linear Discriminant Analysis and Quadratic Discriminant Analysis: These two discriminant classifiers have linear and quadratic decision boundaries, respectively. They are supervised learning techniques used for classification and dimensionality reduction, and their objective is to find a linear combination of features that maximizes the separation between classes within a dataset. These classification models are attractive because their closed-form solutions are easy to calculate, they have been shown to work well with small training datasets, and there are no hyperparameters to adjust [59]. They perform well, for example, when the classes are linearly or quadratically separable and the class covariances are equal. Interpretability is another strength, since the coefficients of the resulting linear combinations indicate which characteristics are most relevant for separating the classes. Among their disadvantages is sensitivity to the equality-of-covariances assumption, since LDA assumes that the covariance matrices of the classes are equal. Additionally, the results are only valid within the domain of the independent variables used for training [60].
- Support Vector Machines: Support vector machines are powerful supervised learning algorithms used in both classification and regression tasks. Their main objective is to find an optimal separating hyperplane that maximizes the margin between the classes. Their benefits include efficiency in high-dimensional spaces, since they can handle datasets with a large number of attributes; robustness, since they generalize well and are less prone to overfitting; and flexibility in the kernel, allowing the use of different kernel functions such as linear, polynomial, and radial basis functions (RBF), which makes them adaptable to a wide range of problems and data structures. Their disadvantages include the need to select appropriate hyperparameters, inefficiency on large datasets (training can become computationally expensive), and limited interpretability, since the model parameters can be less intuitive to interpret [61].
- Nearest Neighbors Classification: K-nearest neighbor (K-NN) classification is a supervised machine learning method used to address classification problems. Its main idea is to assign a class label to a data instance based on the class labels of the closest training instances in a feature space. Its benefits include conceptual simplicity, as it is easy to understand and implement; adaptability, as it handles nonlinear and multiclass classification problems relatively well; suitability for small datasets; and its nonparametric nature, as it makes no assumptions about the functional form of the data. Its disadvantages are that the choice of the number of neighbors (K) is critical and can significantly affect the performance of K-NN, and that it is sensitive to feature scales, so adequate normalization is important before applying it [62].
- Decision Tree Classifier: This is a supervised machine learning method whose main objective is to build a tree model that partitions the feature space into decision nodes, where each node represents a region or subset of the data with a specific class label. The benefits of this model include its interpretability, which allows the model’s decision-making process to be visualized and understood. In addition, it can handle datasets that include both numerical and categorical characteristics. Moreover, it requires little data preparation, since it does not need data normalization, it can handle missing values efficiently, and it can capture non-linear relationships and detect interactions between features. The disadvantages include a tendency to overfit the training data if its growth is not adequately controlled. Furthermore, it can exhibit instability, since it is sensitive to small variations in the training data, which can result in different trees for similar datasets. Finally, it has limitations when classes overlap or a very complex class separation is required, and it tends to be biased toward dominant classes, as it may have difficulty handling unbalanced datasets [63].
- Random Forest: This is a decision tree-based machine learning algorithm used for both classification and regression problems. Unlike a single decision tree, a random forest creates a collection of trees and combines their predictions for more robust and accurate results. Its benefits include high predictive accuracy on a variety of datasets and problems; robustness against overfitting, since combining multiple trees reduces the tendency to overfit; and its ability to handle missing data and outliers effectively, without the need for extensive preprocessing. Its disadvantages include low interpretability, because although each tree within a random forest is interpretable, the combination of many trees can make the model difficult to interpret. Another disadvantage is the need to configure hyperparameters, such as the number of trees and the maximum depth, which must be tuned to optimize performance. Finally, computational cost and running time are disadvantages, owing to the construction of multiple trees and the combination of their predictions [64]. A brief scikit-learn sketch of these classifiers is given after this list.
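To make the library calls concrete, the following sketch fits the classifiers discussed above using scikit-learn on a small subset of the (crack depth, patch length, convergence) data reported later in the results table; hyperparameters such as the RBF kernel, K = 3, the tree depth, and the number of trees are arbitrary illustrative choices, not the settings used in the study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Features: [crack depth (mm), patch length (mm)]; label: 1 = converged, 0 = not converged.
X = np.array([[25, 500], [50, 650], [75, 750], [25, 1000], [90, 500],
              [140, 1250], [60, 500], [30, 750], [90, 1000], [80, 1100]], dtype=float)
y = np.array([0, 1, 1, 1, 0, 0, 1, 1, 0, 1])

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf")),          # SVMs are scale-sensitive
    "K-NN (K=3)": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3)),
    "Decision tree": DecisionTreeClassifier(max_depth=3, random_state=0),
    "Random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, clf in classifiers.items():
    clf.fit(X, y)
    print(f"{name}: training accuracy = {clf.score(X, y):.2f}")
```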
2.2.2. Discriminant Analysis
- One variable is considered categorical and the others are numerical intervals or values, independent of the categorical variables.
- A minimum of two cases is needed in order to generate at least two clusters or groups.
- The number of discriminant variables must be less than the number of cases minus two (i.e., fewer than n − 2 variables, where n is the number of observations or cases).
- No discriminant variable can be a linear combination of two or more other discriminant variables.
- The covariance matrices of each group must be approximately equal.
- All continuous variables must comply with a multivariate normal distribution.
- Linear discriminant analysis (LDA): In this case, the posterior is calculated using Equation (9).
- Quadratic discriminant analysis (QDA): In this case, the posterior is calculated as expressed in Equation (10). The standard Gaussian forms behind both posteriors are recalled after this list.
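For reference, the standard Gaussian forms behind these posteriors are recalled below; the symbols used here (class k, prior π_k, mean μ_k, covariance Σ_k, feature dimension d) are generic textbook notation and need not match the symbols of the paper's Equations (9) and (10):

$$
P(y=k \mid x) \;=\; \frac{\pi_k \, f_k(x)}{\sum_{l} \pi_l \, f_l(x)},
\qquad
f_k(x) \;=\; \frac{\exp\!\big(-\tfrac{1}{2}(x-\mu_k)^{\top}\Sigma_k^{-1}(x-\mu_k)\big)}{(2\pi)^{d/2}\,\lvert \Sigma_k \rvert^{1/2}}.
$$

QDA evaluates this posterior with a separate covariance matrix Σ_k for each class, which yields quadratic decision boundaries; LDA constrains all classes to share a single covariance matrix (Σ_k = Σ), which reduces the decision boundaries to linear ones.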
3. Results
- The convergence strongly depended on the patch length. According to Saint Venant’s principle, the discontinuity (crack position) must be far from the interface to neglect the non-linear effects at the end of the local model.
- The convergence status of the method was observed to depend on the length of the overall beam. This is because a very flexible patch in relation to the auxiliary model can negatively affect the convergence of the method.
- Micro-Score: This metric is calculated by summing the contributions of all the classes and then computing the metric once over the pooled dataset. This is the score presented in Figure 6. A short computation sketch using scikit-learn's averaging options is given after the score values below.
- Macro-Score: This metric is calculated for each class separately, and the per-class values are then averaged without weighting. Each class therefore contributes equally to the macro-score, independent of its size.
- Weighted Score: For this case, the size of each class is considered when calculating the weighted average of each metric. This implies that larger classes have a bigger impact on the final score.
For the linear discriminant analysis (LDA):
- Micro-score: 0.7692
- Macro-score: 0.7594
- Weighted score: 0.7647

For the quadratic discriminant analysis (QDA):
- Micro-score: 0.7948
- Macro-score: 0.7824
- Weighted score: 0.7915
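The three averaging schemes can be reproduced with scikit-learn's averaging options, as sketched below; the use of the F1 score as the underlying metric and the label vectors themselves are illustrative assumptions, not the paper's exact computation.

```python
from sklearn.metrics import f1_score

# Toy ground truth and predictions for a binary convergence label (1 = converged, 0 = not converged).
y_true = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0]
y_pred = [1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0]

# Micro: pool all classes' true/false positives and negatives, then compute the metric once.
print("micro   :", f1_score(y_true, y_pred, average="micro"))
# Macro: compute the metric per class, then take the unweighted mean (each class counts equally).
print("macro   :", f1_score(y_true, y_pred, average="macro"))
# Weighted: per-class metrics averaged with weights proportional to class size.
print("weighted:", f1_score(y_true, y_pred, average="weighted"))
```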
- A three-story steel structure with a height between floors of 3 m.
- A span length of 10 m with rigid supports on the column bases.
- Beam and column sections corresponding to a wide flange, with H = 300 mm, B = 200 mm, tf = 10 mm, and tw = 6 mm.
- The uncracked inertia, uncracked area, and complete beam length were 95,109,333 mm⁴, 5680 mm², and 10,000 mm, respectively.
- The cracked inertia and cracked area were 20,010,062 mm⁴ and 3440 mm², respectively (a short sketch reproducing the uncracked properties and the cracked area is given after this list).
- Four patches were analyzed with local lengths of 1000, 1500, 2000, and 2500 mm.
- Each local model was analyzed with an initial crack length of 50 mm and three propagation steps.
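As a small worked check of the listed section properties (assuming a doubly symmetric wide-flange section and a crack that removes the bottom flange and part of the web), the following sketch reproduces the 5680 mm² area, the 95,109,333 mm⁴ major-axis inertia, and the 3440 mm² cracked area; the cracked inertia additionally requires relocating the centroid of the reduced section and is not repeated here.

```python
def wide_flange_properties(H, B, tf, tw):
    """Area (mm^2) and major-axis inertia (mm^4) of an uncracked wide-flange section."""
    area = 2 * B * tf + (H - 2 * tf) * tw
    inertia = B * H**3 / 12 - (B - tw) * (H - 2 * tf)**3 / 12
    return area, inertia

def cracked_area(H, B, tf, tw, a):
    """Remaining area when a crack of depth a (mm) cuts through the bottom flange and into the web."""
    area, _ = wide_flange_properties(H, B, tf, tw)
    removed = B * min(a, tf) + max(a - tf, 0) * tw
    return area - removed

# Building beam/column section H300 x 200 x 10 x 6, initial crack depth 50 mm.
A, I = wide_flange_properties(300, 200, 10, 6)
print(A, round(I))                         # 5680, 95109333
print(cracked_area(300, 200, 10, 6, 50))   # 3440
```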
3.1. Convergence Analysis of the Building Cases
3.2. Summary of the Results
4. Discussion
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- ANSI/AISC 360–10; Specification for Structural Steel Buildings. American Institute of Steel Construction: Chicago, IL, USA, 2010.
- ANSI/AISC 341–16; Seismic Provision for Structural Steel Buildings. American Institute of Steel Construction: Chicago, IL, USA, 2016.
- Juhászová, T.; Miarka, P.; Jindra, D.; Kala, Z.; Seitl, S. Evaluation of Fatigue Crack Growth Rates in an IPE Beam Made of AISI 304 under Various Stress Ratios. Procedia Struct. Integr. 2023, 43, 172–177. [Google Scholar] [CrossRef]
- Lou, T.; Wang, W.; Li, J. Seismic behaviour of a self-centring steel connection with replaceable energy-dissipation components. Eng. Struct. 2023, 274, 115204. [Google Scholar] [CrossRef]
- Machado, W.G.; da Silva, A.R.; das Neves, F.d.A. Dynamic analysis of composite beam and floors with deformable connection using plate, bar and interface elements. Eng. Struct. 2019, 184, 247–256. [Google Scholar] [CrossRef]
- Dexter, R.J.; Connor, R.J.; Mahmoud, H. Review of steel bridges with fracture-critical elements. Transp. Res. Rec. 2005, 1928, 74–82. [Google Scholar] [CrossRef]
- Frangopol, D.M.; Soliman, M. Life-cycle of structural systems: Recent achievements and future directions. In Structures and Infrastructure Systems; Routledge: London, UK, 2019; pp. 46–65. [Google Scholar]
- Moës, N.; Dolbow, J.; Belytschko, T. A finite element method for crack growth without remeshing. Int. J. Numer. Methods Eng. 1999, 46, 131–150. [Google Scholar] [CrossRef]
- Belytschko, T.; Black, T. Elastic crack growth in finite elements with minimal remeshing. Int. J. Numer. Methods Eng. 1999, 45, 601–620. [Google Scholar] [CrossRef]
- Khoei, A.R. Extended Finite Element Method: Theory and Applications; John Wiley & Sons: Hoboken, NJ, USA, 2014. [Google Scholar]
- Valipour, H.R.; Foster, S.J. Nonlocal Damage Formulation for a Flexibility-Based Frame Element. J. Struct. Eng. 2009, 135, 1213–1221. [Google Scholar] [CrossRef]
- Farhat, C.; Roux, F.X. A method of finite element tearing and interconnecting and its parallel solution algorithm. Int. J. Numer. Methods Eng. 1991, 32, 1205–1227. [Google Scholar]
- Whitcomb, J.D. Iterative global/local finite element analysis. Comput. Struct. 1991, 40, 1027–1031. [Google Scholar] [CrossRef]
- Pebrel, J.; Rey, C.; Gosselet, P. A Nonlinear Dual-Domain Decomposition Method: Application to Structural Problems with Damage. Int. J. Multiscale Comput. Eng. 2008, 6, 251–262. [Google Scholar] [CrossRef]
- Hinojosa, J.; Allix, O.; Guidault, P.A.; Cresta, P. Domain decomposition methods with nonlinear localization for the buckling and post-buckling analyses of large structures. Adv. Eng. Softw. 2014, 70, 13–24. [Google Scholar] [CrossRef]
- Guidault, P.A.; Allix, O.; Champaney, L.; Navarro, J.P. A two-scale approach with homogenization for the computation of cracked structures. Comput. Struct. 2007, 85, 1360–1371. [Google Scholar] [CrossRef]
- Kerfriden, P.; Allix, O.; Gosselet, P. A three-scale domain decomposition method for the 3D analysis of debonding in laminates. Comput. Mech. 2009, 44, 343–362. [Google Scholar] [CrossRef]
- Oumaziz, P.; Gosselet, P.; Boucard, P.A.; Guinard, S. A non-invasive implementation of a mixed domain decomposition method for frictional contact problems. Comput. Mech. 2017, 60, 797–812. [Google Scholar] [CrossRef]
- Allix, O.; Gosselet, P. Non intrusive global/local coupling techniques in solid mechanics: An introduction to different coupling strategies and acceleration techniques. In Modeling in Engineering Using Innovative Numerical Methods for Solids and Fluids; Springer: Berlin/Heidelberg, Germany, 2020; pp. 203–220. [Google Scholar]
- Duval, M.; Passieux, J.C.; Salaün, M.; Guinard, S. Non-intrusive Coupling: Recent Advances and Scalable Nonlinear Domain Decomposition. Arch. Comput. Methods Eng. 2016, 23, 17–38. [Google Scholar] [CrossRef]
- Passieux, J.C.; Réthoré, J.; Gravouil, A.; Baietto, M.C. Local/global non-intrusive crack propagation simulation using a multigrid X-FEM solver. Comput. Mech. 2013, 52, 1381–1393. [Google Scholar] [CrossRef]
- Noii, N.; Aldakheel, F.; Wick, T.; Wriggers, P. An adaptive global–local approach for phase-field modeling of anisotropic brittle fracture. Comput. Methods Appl. Mech. Eng. 2020, 361, 112744. [Google Scholar] [CrossRef]
- Blanchard, M.; Allix, O.; Gosselet, P.; Desmeure, G. Space/time global/local noninvasive coupling strategy: Application to viscoplastic structures. Finite Elem. Anal. Des. 2019, 156, 1–12. [Google Scholar] [CrossRef]
- Fuenzalida-Henriquez, I.; Oumaziz, P.; Castillo-Ibarra, E.; Hinojosa, J. Global-Local non intrusive analysis with robin parameters: Application to plastic hardening behavior and crack propagation in 2D and 3D structures. Comput. Mech. 2022, 69, 965–978. [Google Scholar] [CrossRef]
- Jaque-Zurita, M.; Hinojosa, J.; Fuenzalida-Henríquez, I. Global–Local Non Intrusive Analysis with 1D to 3D Coupling: Application to Crack Propagation and Extension to Commercial Software. Mathematics 2023, 11, 2540. [Google Scholar] [CrossRef]
- Duarte, C.A.; Kim, D.J. Analysis and applications of a generalized finite element method with global-local enrichment functions. Comput. Methods Appl. Mech. Eng. 2008, 197, 487–504. [Google Scholar] [CrossRef]
- Fonseca, G.M.; Barros, F.B.; de Oliveira, T.S.; Monteiro, H.A.; Novelli, L.; Pitangueira, R.L. 2-D Crack propagation analysis using stable generalized finite element method with global-local enrichments. Eng. Anal. Bound. Elem. 2020, 118, 70–83. [Google Scholar] [CrossRef]
- Malekan, M.; Barros, F.B. Well-conditioning global–local analysis using stable generalized/extended finite element method for linear elastic fracture mechanics. Comput. Mech. 2016, 58, 819–831. [Google Scholar] [CrossRef]
- Thai, H.T. Machine learning for structural engineering: A state-of-the-art review. Structures 2022, 38, 448–491. [Google Scholar]
- Abambres, M.; Lantsoght, E.O. Neural network-based formula for shear capacity prediction of one-way slabs under concentrated loads. Eng. Struct. 2020, 211, 110501. [Google Scholar] [CrossRef]
- Mohammadhassani, M.; Saleh, A.; Suhatril, M.; Safa, M. Fuzzy modelling approach for shear strength prediction of RC deep beams. Smart Struct. Syst. 2015, 16, 497–519. [Google Scholar] [CrossRef]
- Chou, J.S.; Ngo, N.T.; Pham, A.D. Shear strength prediction in reinforced concrete deep beams using nature-inspired metaheuristic support vector regression. J. Comput. Civ. Eng. 2016, 30, 04015002. [Google Scholar] [CrossRef]
- Kotsovou, G.M.; Cotsovos, D.M.; Lagaros, N.D. Assessment of RC exterior beam-column Joints based on artificial neural networks and other methods. Eng. Struct. 2017, 144, 1–18. [Google Scholar] [CrossRef]
- Sarothi, S.Z.; Ahmed, K.S.; Khan, N.I.; Ahmed, A.; Nehdi, M.L. Predicting bearing capacity of double shear bolted connections using machine learning. Eng. Struct. 2022, 251, 113497. [Google Scholar] [CrossRef]
- Sakla, S.S. Neural network modeling of the load-carrying capacity of eccentrically-loaded single-angle struts. J. Constr. Steel Res. 2004, 60, 965–987. [Google Scholar] [CrossRef]
- Djerrad, A.; Fan, F.; Zhi, X.D.; Wu, Q.J. Artificial neural networks (ANN) based compressive strength prediction of afrp strengthened steel tube. Int. J. Steel Struct. 2020, 20, 156–174. [Google Scholar] [CrossRef]
- Xu, Y.; Zhang, M.; Zheng, B. Design of cold-formed stainless steel circular hollow section columns using machine learning methods. Structures 2021, 33, 2755–2770. [Google Scholar]
- Cascardi, A.; Micelli, F.; Aiello, M.A. An Artificial Neural Networks model for the prediction of the compressive strength of FRP-confined concrete circular columns. Eng. Struct. 2017, 140, 199–208. [Google Scholar] [CrossRef]
- Raza, A.; Shah, S.A.R.; Ul Haq, F.; Arshad, H.; Raza, S.S.; Farhan, M.; Waseem, M. Prediction of axial load-carrying capacity of GFRP-reinforced concrete columns through artificial neural networks. Structures 2020, 28, 1557–1571. [Google Scholar]
- Bakouregui, A.S.; Mohamed, H.M.; Yahia, A.; Benmokrane, B. Explainable extreme gradient boosting tree-based prediction of load-carrying capacity of FRP-RC columns. Eng. Struct. 2021, 245, 112836. [Google Scholar] [CrossRef]
- Tran-Ngoc, H.; Khatir, S.; De Roeck, G.; Bui-Tien, T.; Wahab, M.A. An efficient artificial neural network for damage detection in bridges and beam-like structures by improving training parameters using cuckoo search algorithm. Eng. Struct. 2019, 199, 109637. [Google Scholar] [CrossRef]
- Hasni, H.; Alavi, A.H.; Jiao, P.; Lajnef, N. Detection of fatigue cracking in steel bridge girders: A support vector machine approach. Arch. Civ. Mech. Eng. 2017, 17, 609–622. [Google Scholar] [CrossRef]
- Huang, H.; Burton, H.V. Classification of in-plane failure modes for reinforced concrete frames with infills using machine learning. J. Build. Eng. 2019, 25, 100767. [Google Scholar] [CrossRef]
- Lei, Y.; Zhang, Y.; Mi, J.; Liu, W.; Liu, L. Detecting structural damage under unknown seismic excitation by deep convolutional neural network with wavelet-based transmissibility data. Struct. Health Monit. 2021, 20, 1583–1596. [Google Scholar] [CrossRef]
- Naderpour, H.; Mirrashid, M. Classification of failure modes in ductile and non-ductile concrete joints. Eng. Fail. Anal. 2019, 103, 361–375. [Google Scholar] [CrossRef]
- Hadi, M.N. Neural networks applications in concrete structures. Comput. Struct. 2003, 81, 373–381. [Google Scholar] [CrossRef]
- Tashakori, A.; Adeli, H. Optimum design of cold-formed steel space structures using neural dynamics model. J. Constr. Steel Res. 2002, 58, 1545–1566. [Google Scholar] [CrossRef]
- Horton, T.A.; Hajirasouliha, I.; Davison, B.; Ozdemir, Z. Accurate prediction of cyclic hysteresis behaviour of RBS connections using deep learning neural networks. Eng. Struct. 2021, 247, 113156. [Google Scholar] [CrossRef]
- Truong, V.H.; Vu, Q.V.; Thai, H.T.; Ha, M.H. A robust method for safety evaluation of steel trusses using Gradient Tree Boosting algorithm. Adv. Eng. Softw. 2020, 147, 102825. [Google Scholar] [CrossRef]
- Sun, H.; Burton, H.V.; Huang, H. Machine learning applications for building structural design and performance assessment: State-of-the-art review. J. Build. Eng. 2021, 33, 101816. [Google Scholar] [CrossRef]
- Murphy, K.P. Machine Learning: A Probabilistic Perspective; MIT Press: Cambridge, MA, USA, 2012. [Google Scholar]
- Ghojogh, B.; Crowley, M. Linear and quadratic discriminant analysis: Tutorial. arXiv 2019, arXiv:1906.02590. [Google Scholar]
- Khan, A.; Kim, H.S. Classification and prediction of multidamages in smart composite laminates using discriminant analysis. Mech. Adv. Mater. Struct. 2022, 29, 230–240. [Google Scholar] [CrossRef]
- Janeliukstis, R.; Rucevskis, S.; Chate, A. Classification-based damage localization in composite plate using strain field data. J. Phys. 2018, 1106, 012022. [Google Scholar] [CrossRef]
- Yu, J. Machinery fault diagnosis using joint global and local/nonlocal discriminant analysis with selective ensemble learning. J. Sound Vib. 2016, 382, 340–356. [Google Scholar] [CrossRef]
- Angra, S.; Ahuja, S. Machine learning and its applications: A review. In Proceedings of the 2017 International Conference on Big Data Analytics and Computational Intelligence (ICBDAC), Chirala, India, 23–25 March 2017; pp. 57–60. [Google Scholar] [CrossRef]
- Castillo-Ibarra, E.; Alsina, M.A.; Astudillo, C.A.; Fuenzalida-Henríquez, I. PFA-Nipals: An Unsupervised Principal Feature Selection Based on Nonlinear Estimation by Iterative Partial Least Squares. Mathematics 2023, 11, 4154. [Google Scholar] [CrossRef]
- James, G.; Witten, D.; Hastie, T.; Tibshirani, R. Classification. In An Introduction to Statistical Learning: With Applications in R; Springer: New York, NY, USA, 2021; pp. 129–195. [Google Scholar] [CrossRef]
- Ledoit, O.; Wolf, M. Honey, I Shrunk the Sample Covariance Matrix. SSRN Electron. J. 2003. [Google Scholar] [CrossRef]
- Jombart, T.; Devillard, S.; Balloux, F. Discriminant analysis of principal components: A new method for the analysis of genetically structured populations. BMC Genet. 2010, 11, 94. [Google Scholar] [CrossRef]
- Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
- Cover, T.; Hart, P. Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 1967, 13, 21–27. [Google Scholar] [CrossRef]
- Alpaydin, E. Introduction to Machine Learning, 2nd ed.; The MIT Press: Cambridge, MA, USA, 2010. [Google Scholar]
- Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
- Hastie, T.; Tibshirani, R.; Friedman, J. Linear Methods for Classification. In The Elements of Statistical Learning: Data Mining, Inference, and Prediction; Springer: New York, NY, USA, 2009; pp. 101–137. [Google Scholar] [CrossRef]
- Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
- Tada, H.; Paris, P.C.; Irwin, G.R. The Stress Analysis of Cracks Handbook, 3rd ed.; ASME Press: New York, NY, USA, 2000. [Google Scholar] [CrossRef]
- Anderson, T.L. Fracture Mechanics: Fundamentals and Applications, 4th ed.; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar] [CrossRef]
- EDF. Code Aster/Salome-Meca Module 2: Advanced Training; EDF: Paris, France, 2017. [Google Scholar]
Case | Section * (H × B × tf × tw in mm) | Original Crack Depth (mm) | Beam Patch Length (mm) | Complete Beam Length (mm) | Uncracked Area (mm²) | Uncracked Inertia (mm⁴) |
---|---|---|---|---|---|---|
1 | H200 × 200 × 8 × 6 | 25 | 500 | 3000 | 4304 | 32,623,018 |
2 | H200 × 200 × 8 × 6 | 50 | 500 | 3000 | 4304 | 32,623,018 |
3 | H200 × 200 × 8 × 6 | 75 | 500 | 3000 | 4304 | 32,623,018 |
4 | H200 × 200 × 8 × 6 | 50 | 650 | 3000 | 4304 | 32,623,018 |
5 | H200 × 200 × 8 × 6 | 75 | 650 | 3000 | 4304 | 32,623,018 |
6 | H200 × 200 × 8 × 6 | 25 | 750 | 3000 | 4304 | 32,623,018 |
7 | H200 × 200 × 8 × 6 | 50 | 750 | 3000 | 4304 | 32,623,018 |
8 | H200 × 200 × 8 × 6 | 75 | 750 | 3000 | 4304 | 32,623,018 |
9 | H200 × 200 × 8 × 6 | 75 | 900 | 3000 | 4304 | 32,623,018 |
10 | H200 × 200 × 8 × 6 | 25 | 1000 | 3000 | 4304 | 32,623,018 |
11 | H200 × 200 × 8 × 6 | 50 | 1000 | 3000 | 4304 | 32,623,018 |
12 | H200 × 200 × 8 × 6 | 75 | 1000 | 3000 | 4304 | 32,623,018 |
13 | H300 × 150 × 8 × 6 | 40 | 1250 | 3000 | 4104 | 62,624,352 |
14 | H300 × 150 × 8 × 6 | 60 | 1250 | 3000 | 4104 | 62,624,352 |
15 | H300 × 150 × 8 × 6 | 100 | 1250 | 3000 | 4104 | 62,624,352 |
16 | H300 × 150 × 8 × 6 | 140 | 1250 | 3000 | 4104 | 62,624,352 |
17 | H400 × 300 × 16 × 10 | 80 | 900 | 3000 | 13,280 | 395,629,227 |
18 | H400 × 300 × 16 × 10 | 160 | 900 | 3000 | 13,280 | 395,629,227 |
19 | H400 × 300 × 16 × 10 | 80 | 1100 | 3000 | 13,280 | 395,629,227 |
20 | H400 × 300 × 16 × 10 | 120 | 1100 | 3000 | 13,280 | 395,629,227 |
21 | H400 × 300 × 16 × 10 | 160 | 1100 | 3000 | 13,280 | 395,629,227 |
Case | Section * (H × B × tf × tw in mm) | Original Crack Depth (mm) | Beam Patch Length (mm) | Complete Beam Length (mm) | Uncracked Area (mm²) | Uncracked Inertia (mm⁴) |
---|---|---|---|---|---|---|
22 | H200 × 200 × 8 × 6 | 30 | 500 | 5000 | 4304 | 32,623,018 |
23 | H200 × 200 × 8 × 6 | 60 | 500 | 5000 | 4304 | 32,623,018 |
24 | H200 × 200 × 8 × 6 | 90 | 500 | 5000 | 4304 | 32,623,018 |
25 | H200 × 200 × 8 × 6 | 30 | 750 | 5000 | 4304 | 32,623,018 |
26 | H200 × 200 × 8 × 6 | 60 | 750 | 5000 | 4304 | 32,623,018 |
27 | H200 × 200 × 8 × 6 | 90 | 750 | 5000 | 4304 | 32,623,018 |
28 | H200 × 200 × 8 × 6 | 30 | 1000 | 5000 | 4304 | 32,623,018 |
29 | H200 × 200 × 8 × 6 | 60 | 1000 | 5000 | 4304 | 32,623,018 |
30 | H200 × 200 × 8 × 6 | 90 | 1000 | 5000 | 4304 | 32,623,018 |
Case | Section * (H × B × tf × tw in mm) | Original Crack Depth (mm) | Beam Patch Length (mm) | Complete Beam Length (mm) | Uncracked Area (mm²) | Uncracked Inertia (mm⁴) |
---|---|---|---|---|---|---|
31 | H200 × 200 × 8 × 6 | 25 | 500 | 7000 | 4304 | 32,623,018 |
32 | H200 × 200 × 8 × 6 | 50 | 500 | 7000 | 4304 | 32,623,018 |
33 | H200 × 200 × 8 × 6 | 75 | 500 | 7000 | 4304 | 32,623,018 |
34 | H200 × 200 × 8 × 6 | 25 | 750 | 7000 | 4304 | 32,623,018 |
35 | H200 × 200 × 8 × 6 | 50 | 750 | 7000 | 4304 | 32,623,018 |
36 | H200 × 200 × 8 × 6 | 75 | 750 | 7000 | 4304 | 32,623,018 |
37 | H200 × 200 × 8 × 6 | 25 | 1000 | 7000 | 4304 | 32,623,018 |
38 | H200 × 200 × 8 × 6 | 50 | 1000 | 7000 | 4304 | 32,623,018 |
39 | H200 × 200 × 8 × 6 | 75 | 1000 | 7000 | 4304 | 32,623,018 |
Section Case | Section * (H × B × tf × tw in mm) | Original Crack Depth (mm) | Area of Cracked Section (mm²) | Cracked Inertia in Minor Axis (mm⁴) | Cracked Inertia in Major Axis (mm⁴) |
---|---|---|---|---|---|
S1 | H200 × 200 × 8 × 6 | 25 | 2602 | 5,336,339 | 7,054,597 |
S2 | H200 × 200 × 8 × 6 | 30 | 2572 | 5,336,249 | 6,503,006 |
S3 | H200 × 200 × 8 × 6 | 50 | 2452 | 5,335,889 | 4,567,420 |
S4 | H200 × 200 × 8 × 6 | 60 | 2392 | 5,335,709 | 3,754,370 |
S5 | H200 × 200 × 8 × 6 | 75 | 2302 | 5,335,439 | 2,715,291 |
S6 | H200 × 200 × 8 × 6 | 90 | 2212 | 5,335,169 | 1,878,233 |
S7 | H300 × 150 × 8 × 6 | 40 | 2712 | 2,254,536 | 19,314,453 |
S8 | H300 × 150 × 8 × 6 | 60 | 2592 | 2,254,176 | 15,529,984 |
S9 | H300 × 150 × 8 × 6 | 100 | 2352 | 2,253,456 | 9,422,895 |
S10 | H300 × 150 × 8 × 6 | 140 | 2112 | 2,252,736 | 5,078,668 |
S11 | H400 × 300 × 16 × 10 | 80 | 7840 | 36,025,333 | 71,161,800 |
S12 | H400 × 300 × 16 × 10 | 120 | 7440 | 36,022,000 | 48,818,746
S13 | H400 × 300 × 16 × 10 | 160 | 7040 | 36,018,667 | 31,461,314
Case | Section | Crack Depth | Patch Length (mm) | Categorical Label | Status | Iterations Until Convergence |
---|---|---|---|---|---|---|
1 | H200 | 25 | 500 | 0 | Non Conv. | 50 |
2 | H200 | 50 | 500 | 0 | Non Conv. | 50 |
3 | H200 | 75 | 500 | 0 | Non Conv. | 50 |
4 | H200 | 50 | 650 | 1 | Conv. | 44 |
5 | H200 | 75 | 650 | 1 | Conv. | 32 |
6 | H200 | 25 | 750 | 1 | Conv. | 27 |
7 | H200 | 50 | 750 | 1 | Conv. | 24 |
8 | H200 | 75 | 750 | 1 | Conv. | 37 |
9 | H200 | 75 | 900 | 1 | Conv. | 49 |
10 | H200 | 25 | 1000 | 1 | Conv. | 25 |
11 | H200 | 50 | 1000 | 1 | Conv. | 23 |
12 | H200 | 75 | 1000 | 1 | Conv. | 28 |
13 | H300 | 40 | 1250 | 1 | Conv. | 20 |
14 | H300 | 60 | 1250 | 1 | Conv. | 25 |
15 | H300 | 100 | 1250 | 1 | Conv. | 18 |
16 | H300 | 140 | 1250 | 0 | Non Conv. | 50 |
17 | H400 | 80 | 900 | 0 | Non Conv. | 50 |
18 | H400 | 160 | 900 | 0 | Non Conv. | 50 |
19 | H400 | 80 | 1100 | 1 | Conv. | 24 |
20 | H400 | 120 | 1100 | 1 | Conv. | 32 |
21 | H400 | 160 | 1100 | 0 | Non Conv. | 50 |
22 | H200 | 30 | 500 | 1 | Conv. | 31 |
23 | H200 | 60 | 500 | 1 | Conv. | 19 |
24 | H200 | 90 | 500 | 0 | Non Conv. | 50 |
25 | H200 | 30 | 750 | 1 | Conv. | 13 |
26 | H200 | 60 | 750 | 1 | Conv. | 38 |
27 | H200 | 90 | 750 | 0 | Non Conv. | 50 |
28 | H200 | 30 | 1000 | 1 | Conv. | 28 |
29 | H200 | 60 | 1000 | 0 | Non Conv. | 50 |
30 | H200 | 90 | 1000 | 0 | Non Conv. | 50 |
31 | H200 | 25 | 500 | 1 | Conv. | 30 |
32 | H200 | 50 | 500 | 1 | Conv. | 36 |
33 | H200 | 75 | 500 | 0 | Non Conv. | 50 |
34 | H200 | 25 | 750 | 0 | Non Conv. | 50 |
35 | H200 | 50 | 750 | 1 | Conv. | 34
36 | H200 | 75 | 750 | 1 | Conv. | 25
37 | H200 | 25 | 1000 | 1 | Conv. | 23 |
38 | H200 | 50 | 1000 | 1 | Conv. | 25 |
39 | H200 | 75 | 1000 | 0 | Non Conv. | 16 |
Building Section Case | Section | X Feature (Dimensionless) | Y Feature (Dimensionless) | Local Patch Length (mm) | Status |
---|---|---|---|---|---|
BS1 | H300 × 200 × 10 × 6 | 0.16667 | 0.154652891 | 1000 | Non-Conv. |
BS2 | H300 × 200 × 10 × 6 | 0.16667 | 0.24070462 | 1500 | Conv. |
BS3 | H300 × 200 × 10 × 6 | 0.16667 | 0.325220802 | 2000 | Conv. |
BS4 | H300 × 200 × 10 × 6 | 0.16667 | 0.409051684 | 2500 | Conv. |
Building Section Case | Iter. for Conv. | Degrees of Freedom | Execution Times (s) | Y Feature | Ratio | Ratio |
---|---|---|---|---|---|---|
BS1 | – | 30159 | – | 0.1546 | 0.97 | 0.92 |
BS2 | 19 | 47703 | 391 | 0.2407 | 1.51 | 1.44 |
BS3 | 20 | 55434 | 442 | 0.3252 | 2.04 | 1.95 |
BS4 | 18 | 65781 | 455 | 0.4091 | 2.57 | 2.45 |