Shape Optimization of a Diffusive High-Pressure Turbine Vane Using Machine Learning Tools
Figure 1: Baseline numerical domain.
Figure 2: Baseline design: (a) endwall profile with control points, and (b) vane profile with control points.
Figure 3: DOE: (a) endwall profiles generated using LHS, and (b) vane profiles generated using LHS.
Figure 4: Mesh: (a) entire domain, (b) mid-span plane, and (c) inflation layers.
Figure 5: Optimal ANN models training history: (a) model used to optimize η, and (b) model used to optimize Θ.
Figure 6: ANN parity plots: (a) training (R² = 0.98, OF = η), (b) validation (R² = 0.98, OF = η), (c) test (R² = 0.98, OF = η), (d) training (R² = 0.99, OF = Θ), (e) validation (R² = 0.99, OF = Θ), and (f) test (R² = 0.99, OF = Θ).
Figure 7: RF parity plots: (a) training (R² = 0.98), (b) validation (R² = 0.88), and (c) test (R² = 0.89).
Figure 8: Optimal random forest model trained for different dataset sizes.
Figure 9: Evolution of the objective function with GA generations: (a) optimization of η, and (b) optimization of Θ.
Figure 10: Baseline and optimized geometries: (a) diffusive endwalls, and (b) vane profile.
Figure 11: Helicity: (a) baseline, (b) OPT-1, (c) OPT-3, (d) baseline flow structures, (e) OPT-1 flow structures, and (f) OPT-3 flow structures.
Figure 12: Outlet conditions: (a) pressure loss coefficient, (b) outlet yaw angle, (c) outlet pitch angle, and (d) outlet Mach number.
Figure 13: Mach contour: (a) baseline, (b) OPT-1, (c) OPT-3, (d) baseline isentropic Mach distribution, (e) OPT-1 isentropic Mach distribution, and (f) OPT-3 isentropic Mach distribution.
Figure 14: Pressure contours at the mid-span: (a) baseline, (b) OPT-1, (c) OPT-3.
Figure 15: Velocity contours at the mid-span: (a) baseline, (b) OPT-1, (c) OPT-3.
Abstract
1. Introduction
2. Parametric Design and Data Collection
3. Numerical Methodology
4. Machine Learning Approaches
4.1. Artificial Neural Network
- The number of neurons in each hidden layer.
- The activation function:
  - Hyperbolic tangent function (tanh): tanh(z) = (e^z − e^(−z)) / (e^z + e^(−z)).
  - Rectified Linear Units (ReLu): ReLu(z) = max(0, z).
  - Leaky Rectified Linear Units (Leaky ReLu): Leaky ReLu(z) = z for z > 0, and αz otherwise, with a small slope α.
  - Sigmoid: σ(z) = 1 / (1 + e^(−z)).
- The loss function optimizer:
  - Adam: an algorithm for first-order gradient-based optimization characterized by adaptive moment estimation, introduced by Kingma and Ba [30]. This method computes individual adaptive learning rates for different parameters from estimates of the first and second moments of the gradients. During the optimization process, weights are updated inversely proportionally to the scaled norm of past gradients.
  - Adamax: an extension of Adam in which the update rule uses the infinity norm of past gradients instead of the L2 norm.
  - Nadam: combines the Adam optimization algorithm with the idea of Nesterov Accelerated Gradient (NAG) [31].
  - RMSprop: uses an adaptive learning rate computed through a moving average of the squared gradient of each weight, which typically converges faster than plain stochastic gradient descent.
- The dropout rate: the percentage of neurons that are randomly removed from each layer of the network. The resulting model is simpler and less prone to overfitting [32].
- The regularization coefficient: regularization is a strategy to prevent overfitting through manipulation of the loss function. L2 regularization, also known as "Ridge regularization", adds a penalty term to the loss function that is proportional to the squared magnitude of the weight coefficients (Equation (20)). In this way, large coefficients (i.e., large weights) are penalized more strongly, and the influence of a single strong coefficient is spread across multiple weaker coefficients.
- The batch size: the number of samples in each subset of the training data processed between consecutive weight updates.
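As a concrete illustration (not taken from the paper's implementation), the candidate activation functions and the Ridge penalty described above can be sketched in NumPy; the function names and the `l2_penalized_loss` helper are our own:

```python
import numpy as np

def tanh(z):
    # Hyperbolic tangent: saturates at -1 and +1
    return np.tanh(z)

def relu(z):
    # Rectified Linear Unit: zero for negative inputs
    return np.maximum(0.0, z)

def leaky_relu(z, alpha=0.01):
    # Leaky ReLu: small slope alpha for negative inputs instead of zero
    return np.where(z > 0, z, alpha * z)

def sigmoid(z):
    # Logistic sigmoid: maps any input to (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def l2_penalized_loss(mse, weights, lam):
    # Ridge (L2) regularization: add lam times the sum of squared weights
    return mse + lam * sum(np.sum(w ** 2) for w in weights)
```

A dropout layer would simply zero a random fraction of each layer's activations during training, which is why larger dropout rates yield simpler effective networks.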
4.2. Random Forest
- Number of estimators: the number of decision trees that make up the forest.
- Maximum depth of the tree.
- Minimum sample split: the minimum number of samples required to split a decision node.
- Minimum sample leaf: the minimum number of samples required to be at a leaf node.
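The paper does not state which library implements its random forest; assuming a scikit-learn-style workflow, the four hyperparameters above can be set using the optimal values reported later in the paper (401 trees, maximum depth 16, minimum samples split 3, minimum samples leaf 2). The 16-parameter synthetic dataset below is purely illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for the DOE dataset: 16 geometric parameters -> one objective
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 16))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

rf = RandomForestRegressor(
    n_estimators=401,      # number of decision trees in the forest
    max_depth=16,          # maximum depth of each tree
    min_samples_split=3,   # minimum samples required to split a decision node
    min_samples_leaf=2,    # minimum samples required at a leaf node
    random_state=0,
)
rf.fit(X, y)
print(f"Training R^2 = {rf.score(X, y):.3f}")
```

Limiting tree depth and raising the minimum split/leaf counts are the forest's main levers against overfitting, analogous to dropout and L2 regularization in the ANN.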
4.3. Performance Metrics and Loss Function
4.4. Hyperparameters Optimization
5. Optimization
6. CFD Results
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| RDC | Rotating Detonation Combustor |
| HPT | High-Pressure Turbine |
| LHS | Latin Hypercube Sampling |
| DOE | Design of Experiments |
| ANN | Artificial Neural Network |
| RF | Random Forest |
| GA | Genetic Algorithm |
| OF | Objective Function |
| ReLu | Rectified Linear Units |
| NAG | Nesterov Accelerated Gradient |
| CR | Contraction Ratio |
| SB | Separation Bubble |
| PSHL | Pressure Side Horseshoe Leg |
| SSHL | Suction Side Horseshoe Leg |
Nomenclature

| Symbol | Description |
|---|---|
| N | Number of samples |
| z | Weighted sum of the inputs |
| w | Weight |
| b | Bias |
| x | Input variable |
| L | Loss function |
|  | Regularized loss function |
| y | True value |
| ŷ | Model prediction |
| ṁ | Mass flow rate |
| T | Temperature |
| P | Pressure |
| h | Enthalpy |
| U | Mean velocity |
| u | Velocity fluctuation |
| k | Turbulent kinetic energy |
|  | Momentum source term |
|  | Energy source term |
|  | Buoyancy term in the turbulent kinetic energy transport equation |
|  | Buoyancy term in the turbulent dissipation rate transport equation |
|  | SST blending function |
| A | Area |
| Ma | Mach number |
| R | Gas constant |
| c_p | Specific heat at constant pressure |
|  | Total pressure loss coefficient |
| X | Lateral direction |
| Y | Vertical direction |
| Z | Axial direction |
| C | Chord of the vane |
| k | Cross-validation fold index |

Greek

| Symbol | Description |
|---|---|
| η | Vane efficiency |
|  | Root Squared Index |
|  | Vane exit yaw angle |
|  | Vane exit pitch angle |
| ρ | Density |
| μ | Dynamic viscosity |
| μ_t | Turbulent viscosity |
| ν | Kinematic viscosity |
|  | Turbulent dissipation rate |
|  | Thermal conductivity |
| τ | Stress tensor |
| γ | Specific heat ratio |
|  | Regularization coefficient |

Subscripts

| Subscript | Meaning |
|---|---|
| train | Training dataset |
| test | Test dataset |
| val | Validation dataset |
| 1 | Inlet of the vane |
| 2 | Outlet of the vane |
| is | Isentropic |
| ax | Axial |
| th | Throat |
References
- Li, J.; Du, X.; Martins, J.R. Machine learning in aerodynamic shape optimization. Prog. Aerosp. Sci. 2022, 134, 100849. [Google Scholar] [CrossRef]
- Queipo, N.V.; Haftka, R.T.; Shyy, W.; Goel, T.; Vaidyanathan, R.; Kevin Tucker, P. Surrogate-based analysis and optimization. Prog. Aerosp. Sci. 2005, 41, 1–28. [Google Scholar] [CrossRef]
- Rai, M.M.; Madavan, N.K. Aerodynamic Design Using Neural Networks. AIAA J. 2000, 38, 173–182. [Google Scholar] [CrossRef]
- Renganathan, S.A.; Maulik, R.; Ahuja, J. Enhanced data efficiency using deep neural networks and Gaussian processes for aerodynamic design optimization. Aerosp. Sci. Technol. 2021, 111, 106522. [Google Scholar] [CrossRef]
- Li, J.; Cai, J.; Qu, K. Surrogate-based aerodynamic shape optimization with the active subspace method. Struct. Multidiscip. Optim. 2018, 59, 403–419. [Google Scholar] [CrossRef]
- Zhang, C.; Janeway, M. Optimization of Turbine Blade Aerodynamic Designs Using CFD and Neural Network Models. Int. J. Turbomach. Propuls. Power 2022, 7, 20. [Google Scholar] [CrossRef]
- Mengistu, T.; Ghaly, W. Aerodynamic optimization of turbomachinery blades using evolutionary methods and ANN-based surrogate models. Optim. Eng. 2007, 9, 239–255. [Google Scholar] [CrossRef]
- Zhang, Y.; Sung, W.J.; Mavris, D. Application of Convolutional Neural Network to Predict Airfoil Lift Coefficient. arXiv 2018, arXiv:1712.10082. [Google Scholar]
- Du, Q.; Li, Y.; Yang, L.; Liu, T.; Zhang, D.; Xie, Y. Performance prediction and design optimization of turbine blade profile with deep learning method. Energy 2022, 254, 124351. [Google Scholar] [CrossRef]
- Dasari, S.K.; Cheddad, A.; Andersson, P. Random Forest Surrogate Models to Support Design Space Exploration in Aerospace Use-Case. In Artificial Intelligence Applications and Innovations; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; pp. 532–544. [Google Scholar] [CrossRef]
- Bergstra, J.; Bengio, Y. Random search for hyper-parameter optimization. J. Mach. Learn. Res. 2012, 13, 281–305. [Google Scholar]
- Kramer, O. Genetic Algorithms; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
- Giannakoglou, K.C.; Papadimitriou, D.I. Adjoint Methods for Shape Optimization. In Optimization and Computational Fluid Dynamics; Springer: Berlin/Heidelberg, Germany, 2008; pp. 79–108. [Google Scholar] [CrossRef]
- Jameson, A. Aerodynamic Shape Optimization Using the Adjoint Method; Lectures at the Von Karman Institute; Von Karman Institute: Brussels, Belgium, 2003. [Google Scholar]
- Salvadori, S.; Insinna, M.; Martelli, F. Unsteady Flows and Component Interaction in Turbomachinery. Int. J. Turbomach. Propuls. Power 2024, 9, 15. [Google Scholar] [CrossRef]
- Hishida, M.; Fujiwara, T.; Wolanski, P. Fundamentals of rotating detonations. Shock Waves 2009, 19, 1–10. [Google Scholar] [CrossRef]
- Liu, Z.; Braun, J.; Paniagua, G. Integration of a transonic high-pressure turbine with a rotating detonation combustor and a diffuser. Int. J. Turbo Jet-Engines 2023, 40, 1–10. [Google Scholar] [CrossRef]
- Grasa, S.; Paniagua, G. Design, Multi-Point Optimization, and Analysis of Diffusive Stator Vanes to Enable Turbine Integration into Rotating Detonation Engines. J. Turbomach. 2024, 146, 111002. [Google Scholar] [CrossRef]
- Gallis, P.; Salvadori, S.; Misul, D.A. Numerical Analysis of a Flow Control System for High-Pressure Turbine Vanes Subject to Highly Oscillating Inflow Conditions. In Turbo Expo: Power for Land, Sea, and Air, Volume 5: Cycle Innovations; American Society of Mechanical Engineers: New York, NY, USA, 2024; Volume 87974, p. V005T06A021. [Google Scholar] [CrossRef]
- Sieverding, C.; Arts, T.; Dénos, R.; Martelli, F. Investigation of the flow field downstream of a turbine trailing edge cooled nozzle guide vane. In Proceedings of the International Gas Turbine and Aeroengine Congress and Exposition, The Hague, The Netherlands, 13–16 June 1994; American Society of Mechanical Engineers: New York, NY, USA, 1994; Volume 1. [Google Scholar] [CrossRef]
- Denos, R.; Sieverding, C.; Arts, T.; Brouckaert, J.; Paniagua, G.; Michelassi, V. Experimental investigation of the unsteady rotor aerodynamics of a transonic turbine stage. Proc. Inst. Mech. Eng. Part A J. Power Energy 1999, 213, 327–338. [Google Scholar]
- McKay, M.D.; Beckman, R.J.; Conover, W.J. Comparison of Three Methods for Selecting Values of Input Variables in the Analysis of Output from a Computer Code. Technometrics 1979, 21, 239–245. [Google Scholar] [CrossRef]
- Roache, P.J. Verification of Codes and Calculations. AIAA J. 1998, 36, 696–702. [Google Scholar] [CrossRef]
- Menter, F.R. Two-equation eddy-viscosity turbulence models for engineering applications. AIAA J. 1994, 32, 1598–1605. [Google Scholar]
- ANSYS, Inc. ANSYS CFX-Solver Theory Guide; ANSYS, Inc.: Canonsburg, PA, USA, 2009. [Google Scholar]
- Lau, M.M.; Hann Lim, K. Review of Adaptive Activation Function in Deep Neural Network. In Proceedings of the 2018 IEEE-EMBS Conference on Biomedical Engineering and Sciences (IECBES), Sarawak, Malaysia, 3–6 December 2018; IEEE: Piscataway, NJ, USA, 2018. [Google Scholar] [CrossRef]
- Bejani, M.M.; Ghatee, M. A systematic review on overfitting control in shallow and deep neural networks. Artif. Intell. Rev. 2021, 54, 6391–6438. [Google Scholar] [CrossRef]
- Dubey, A.K.; Jain, V. Comparative Study of Convolution Neural Network’s Relu and Leaky-Relu Activation Functions. In Applications of Computing, Automation and Wireless Systems in Electrical Engineering; Springer: Singapore, 2019; pp. 873–880. [Google Scholar] [CrossRef]
- Xu, B.; Huang, R.; Li, M. Revise Saturated Activation Functions. arXiv 2016, arXiv:1602.05980. [Google Scholar]
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
- Dozat, T. Incorporating nesterov momentum into adam. In Proceedings of the 4th International Conference on Learning Representations, San Juan, Puerto Rico, 2–4 May 2016; pp. 1–4. [Google Scholar]
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
- Biau, G.; Scornet, E. A random forest guided tour. TEST 2016, 25, 197–227. [Google Scholar] [CrossRef]
- Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
- Lee, T.H.; Ullah, A.; Wang, R. Bootstrap Aggregating and Random Forest. In Macroeconomic Forecasting in the Era of Big Data; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; pp. 389–429. [Google Scholar] [CrossRef]
- Gad, A.F. PyGAD: An Intuitive Genetic Algorithm Python Library. Multimed. Tools Appl. 2023, 83, 58029–58042. [Google Scholar]
| Boundary condition | Value |
|---|---|
| Inlet total pressure | 161,600 Pa |
| Inlet total temperature | 440 K |
| Wall conditions | Adiabatic (no-slip) |
| Periodic surfaces | Angular periodicity |
| Outlet static pressure | 83,289 Pa |
|  | GCI (C-M) | GCI (M-F) | Asymptotic Range of Convergence |
|---|---|---|---|
| Hyperparameter | Range | Optimal Model (η) | Optimal Model (Θ) |
|---|---|---|---|
| Number of neurons: first hidden layer | 4–128 | 76 | 61 |
| Number of neurons: second hidden layer | 4–128 | 81 | 88 |
| Number of neurons: third hidden layer | 4–128 | 69 | 47 |
| (L2 regularization): first hidden layer | 0– |  |  |
| (L2 regularization): second hidden layer | 0– |  |  |
| (L2 regularization): third hidden layer | 0– |  |  |
| Dropout rate: first hidden layer | 0–30% |  |  |
| Dropout rate: second hidden layer | 0–30% |  |  |
| Dropout rate: third hidden layer | 0–30% |  |  |
| Batch size | 8, 16, 32, 64 | 64 | 32 |
| Activation function | tanh, ReLu, Sigmoid, Leaky ReLu | Leaky ReLu | Leaky ReLu |
| Optimizer | Adam, rmsprop, Adamax, Nadam | Adamax | Adamax |
| k-Fold | k = 1 | k = 2 | k = 3 | k = 4 | k = 5 |
|---|---|---|---|---|---|
| (η) |  |  |  |  |  |
| (Θ) |  |  |  |  |  |
| Hyperparameter | Range | Optimal Model |
|---|---|---|
| Number of decision trees | 5–500 | 401 |
| Maximum depth | 1–20 | 16 |
| Min samples split | 2–10 | 3 |
| Min samples leaf | 1–6 | 2 |
| k-Fold | k = 1 | k = 2 | k = 3 | k = 4 | k = 5 |
|---|---|---|---|---|---|
| Case ID | Model | Prediction | CFD | Error |
|---|---|---|---|---|
| Baseline | - |  |  | - |
| OPT-1 | ANN |  |  | <1% |
| OPT-2 | RF |  |  | −1.1% |
| OPT-3 | ANN |  |  | <1% |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Nastasi, R.; Labrini, G.; Salvadori, S.; Misul, D.A. Shape Optimization of a Diffusive High-Pressure Turbine Vane Using Machine Learning Tools. Energies 2024, 17, 5642. https://doi.org/10.3390/en17225642