Beam-Selection for 5G/B5G Networks Using Machine Learning: A Comparative Study
Abstract
1. Introduction
- We provide an effective way to produce synthetic data for a common dataset found in the literature.
- We propose an ensemble learning method composed of shallow base learners to obtain more accurate results with low complexity.
- We show that the optimal beam pair can be obtained using our approach with an accuracy of up to about 94%.
2. Related Work
3. Preliminaries
4. Performance Metrics
- 1. Accuracy is defined as the fraction of correct predictions: Accuracy = (number of correct predictions) / (total number of predictions).
- 2. Top-k Accuracy can be considered a generalization of accuracy. The difference is that a prediction is regarded as correct if the true label is among the k classes with the highest predicted scores.
- 3. Precision, which in multiclass classification is macro-averaged over the N classes: Precision = (1/N) Σ_i TP_i / (TP_i + FP_i), where TP_i and FP_i are the true and false positives of class i.
- 4. Recall is expressed analogously as Recall = (1/N) Σ_i TP_i / (TP_i + FN_i), where FN_i denotes the false negatives of class i.
- 5. F1 aggregates Precision and Recall through their harmonic mean, F1 = 2 · Precision · Recall / (Precision + Recall), and can be considered a weighted average of Precision and Recall.
- 6. Area Under the Curve (AUC) measures a classifier’s ability to differentiate between classes by computing the area under the receiver operating characteristic (ROC) curve, summarizing the curve in a single number: the higher the AUC, the better the model separates the classes. In multiclass classification problems, a common solution is the one-vs-one algorithm, which averages the AUC over all possible pairwise class combinations. The definition of this multiclass AUC metric, weighted uniformly, is given in [25].
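All of the metrics above are available in scikit-learn. As a concrete illustration (the labels and scores below are toy values, not the paper's data), the sketch evaluates accuracy, top-k accuracy, macro-averaged precision/recall/F1, and the one-vs-one multiclass AUC on a three-class example:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, top_k_accuracy_score,
                             precision_score, recall_score, f1_score,
                             roc_auc_score)

# Toy ground truth and predicted class-probability matrix (3 classes).
y_true = np.array([0, 1, 2, 2, 1, 0])
y_score = np.array([[0.7, 0.2, 0.1],
                    [0.1, 0.6, 0.3],
                    [0.2, 0.3, 0.5],
                    [0.1, 0.5, 0.4],   # wrong argmax, but correct in top-2
                    [0.2, 0.7, 0.1],
                    [0.6, 0.3, 0.1]])
y_pred = y_score.argmax(axis=1)

acc = accuracy_score(y_true, y_pred)
top2 = top_k_accuracy_score(y_true, y_score, k=2, labels=[0, 1, 2])
prec = precision_score(y_true, y_pred, average='macro')
rec = recall_score(y_true, y_pred, average='macro')
f1 = f1_score(y_true, y_pred, average='macro')
# One-vs-one multiclass AUC (Hand & Till [25]), uniformly weighted.
auc = roc_auc_score(y_true, y_score, multi_class='ovo', average='macro')
```

Note that top-k accuracy is computed from the full score matrix rather than the hard predictions, which is why classifiers without probability outputs (e.g., LinearSVC) have no top-k entries in the tables below.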
5. Numerical Results
- Initial shallow-learning analysis with several different ML learners, including the ML models of the original work [7].
- Data augmentation with SMOTEN, to increase the samples of each class to at least 10. After that, we could analyze the dataset with the stratified 10-fold CV method.
- Optimal hyperparameters search using the GridSearch algorithm.
- Deep Learning analysis with both classification and regression tasks.
5.1. Shallow Learning
5.2. Data Augmentation
Algorithm 1: Pseudocode for SMOTE-NC samples separation.
5.3. Optimal Hyperparameters Search
5.3.1. GridSearchCV
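As an illustration (the actual grids and estimators used in the paper may differ), GridSearchCV exhaustively scores a hyperparameter grid, here the decision-tree parameters abbreviated later as mss, msl, and mln, under stratified 10-fold cross-validation:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the beam-selection dataset.
X, y = make_classification(n_samples=300, n_classes=3, n_informative=6,
                           random_state=0)

# Grid over the tree hyperparameters abbreviated in the Abbreviations list
# (mss = min_samples_split, msl = min_samples_leaf, mln = max_leaf_nodes).
param_grid = {
    'min_samples_split': [2, 5, 10],
    'min_samples_leaf': [1, 2, 4],
    'max_leaf_nodes': [None, 50, 100],
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
search = GridSearchCV(DecisionTreeClassifier(random_state=42),
                      param_grid, cv=cv, scoring='accuracy', n_jobs=-1)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 4))
```

Stratified folds keep the per-class proportions in every split, which matters here because several beam-pair classes have only around 10 samples after augmentation.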
5.3.2. Stratified Cross-Validation Results
5.4. DNN Analysis
5.4.1. DNN Classification
5.4.2. DNN Regression
6. Discussion
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
CNN | Convolutional Neural Network |
DNN | Deep Neural Network |
DT | Decision Trees |
ET | Extra Trees |
lr | learning_rate |
MAE | Mean Absolute Error |
ML | Machine Learning |
mln | max_leaf_nodes |
MLP | Multi-layer Perceptrons |
MAPE | Mean Absolute Percentage Error |
msl | min_samples_leaf |
mss | min_samples_split |
NN | Neural Network |
QoS | Quality of Service |
ReLU | Rectified Linear Unit |
RF | Random Forest |
SCC | Sparse Categorical Crossentropy |
SGD | Stochastic Gradient Descent |
vs | var_smoothing |
References
- MacCartney, G.R.; Yan, H.; Sun, S.; Rappaport, T.S. A flexible wideband millimeter-wave channel sounder with local area and NLOS to LOS transition measurements. In Proceedings of the 2017 IEEE International Conference on Communications (ICC), Paris, France, 21–25 May 2017; pp. 1–7.
- Ali, A.; Rahim, H.A.; Pasha, M.F.; Dowsley, R.; Masud, M.; Ali, J.; Baz, M. Security, Privacy, and Reliability in Digital Healthcare Systems Using Blockchain. Electronics 2021, 10, 2034.
- Noor-A-Rahim, M.; Liu, Z.; Lee, H.; Khyam, M.O.; He, J.; Pesch, D.; Moessner, K.; Saad, W.; Poor, H.V. 6G for Vehicle-to-Everything (V2X) Communications: Enabling Technologies, Challenges, and Opportunities. Proc. IEEE 2022, 110, 712–734.
- Roh, W.; Seol, J.Y.; Park, J.; Lee, B.; Lee, J.; Kim, Y.; Cho, J.; Cheun, K.; Aryanfar, F. Millimeter-wave beamforming as an enabling technology for 5G cellular communications: Theoretical feasibility and prototype results. IEEE Commun. Mag. 2014, 52, 106–113.
- Salehi, B.; Reus-Muns, G.; Roy, D.; Wang, Z.; Jian, T.; Dy, J.; Ioannidis, S.; Chowdhury, K. Deep Learning on Multimodal Sensor Data at the Wireless Edge for Vehicular Network. IEEE Trans. Veh. Technol. 2022, 71, 7639–7655.
- González-Prelcic, N.; Ali, A.; Va, V.; Heath, R.W. Millimeter-Wave Communication with Out-of-Band Information. IEEE Commun. Mag. 2017, 55, 140–146.
- Klautau, A.; Batista, P.; González-Prelcic, N.; Wang, Y.; Heath, R.W. 5G MIMO Data for Machine Learning: Application to Beam-Selection Using Deep Learning. In Proceedings of the 2018 Information Theory and Applications Workshop (ITA), San Diego, CA, USA, 11–16 February 2018; pp. 1–9.
- Elhalawany, B.; Hashima, S.; Hatano, K.; Wu, K.; Mohamed, E. Leveraging Machine Learning for Millimeter Wave Beamforming in Beyond 5G Networks. IEEE Syst. J. 2022, 16, 1739–1750.
- Chawla, N.; Bowyer, K.; Hall, L.; Kegelmeyer, W. SMOTE: Synthetic minority over-sampling technique. J. Artif. Intell. Res. 2002, 16, 321–357.
- Khan, M.; Hossain, S.; Mozumdar, P.; Akter, S.; Ashique, R. A review on machine learning and deep learning for various antenna design applications. Heliyon 2022, 8, e09317.
- Tan, K.; Bremner, D.; Le Kernec, J.; Zhang, L.; Imran, M. Machine learning in vehicular networking: An overview. Digit. Commun. Netw. 2022, 8, 18–24.
- Tang, F.; Mao, B.; Kato, N.; Gui, G. Comprehensive survey on machine learning in vehicular network: Technology, applications and challenges. IEEE Commun. Surv. Tutor. 2021, 23, 2027–2057.
- Tang, F.; Kawamoto, Y.; Kato, N.; Liu, J. Future Intelligent and Secure Vehicular Network Toward 6G: Machine-Learning Approaches. Proc. IEEE 2020, 108, 292–307.
- Ozpoyraz, B.; Dogukan, A.; Gevez, Y.; Altun, U.; Basar, E. Deep Learning-Aided 6G Wireless Networks: A Comprehensive Survey of Revolutionary PHY Architectures. IEEE Open J. Commun. Soc. 2022, 3, 1749–1809.
- Eclipse. SUMO. Available online: https://www.eclipse.org/sumo/ (accessed on 14 August 2022).
- Dias, M.; Klautau, A.; González-Prelcic, N.; Heath, R.W. Position and LIDAR-Aided mmWave Beam Selection using Deep Learning. In Proceedings of the 2019 IEEE 20th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Cannes, France, 2–5 July 2019; pp. 1–5.
- Mashhadi, M.B.; Jankowski, M.; Tung, T.Y.; Kobus, S.; Gündüz, D. Federated mmWave Beam Selection Utilizing LIDAR Data. IEEE Wirel. Commun. Lett. 2021, 10, 2269–2273.
- Mukhtar, H.; Erol-Kantarci, M. Machine Learning-Enabled Localization in 5G using LIDAR and RSS Data. In Proceedings of the 2021 IEEE Symposium on Computers and Communications (ISCC), Athens, Greece, 5–8 September 2021; pp. 1–6.
- Zecchin, M.; Mashhadi, M.B.; Jankowski, M.; Gündüz, D.; Kountouris, M.; Gesbert, D. LIDAR and Position-Aided mmWave Beam Selection With Non-Local CNNs and Curriculum Training. IEEE Trans. Veh. Technol. 2022, 71, 2979–2990.
- Zhang, Z.; Yang, R.; Zhang, X.; Li, C.; Huang, Y.; Yang, L. Backdoor Federated Learning-Based mmWave Beam Selection. IEEE Trans. Commun. 2022, 70, 6563–6578.
- Elbir, A.; Coleri, S. Federated Learning for Channel Estimation in Conventional and RIS-Assisted Massive MIMO. IEEE Trans. Wirel. Commun. 2022, 21, 4255–4268.
- Gao, F.; Lin, B.; Bian, C.; Zhou, T.; Qian, J.; Wang, H. FusionNet: Enhanced Beam Prediction for mmWave Communications Using Sub-6 GHz Channel and a Few Pilots. IEEE Trans. Commun. 2021, 69, 8488–8500.
- Grandini, M.; Bagli, E.; Visani, G. Metrics for Multi-Class Classification: An Overview. arXiv 2020, arXiv:2008.05756.
- Sokolova, M.; Lapalme, G. A systematic analysis of performance measures for classification tasks. Inf. Process. Manag. 2009, 45, 427–437.
- Hand, D.; Till, R. A Simple Generalisation of the Area Under the ROC Curve for Multiple Class Classification Problems. Mach. Learn. 2001, 45, 171–186.
- Ahmad, N.; Ghadi, Y.; Adnan, M.; Ali, M. Load Forecasting Techniques for Power System: Research Challenges and Survey. IEEE Access 2022, 10, 71054–71090.
- Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; Available online: http://www.deeplearningbook.org (accessed on 25 February 2023).
- Zhang, C.; Ma, Y. Ensemble Machine Learning: Methods and Applications; Springer Publishing Company: Berlin/Heidelberg, Germany, 2012.
- Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32.
- Breiman, L. Bagging Predictors. Mach. Learn. 1996, 24, 123–140.
- Fan, R.E.; Chang, K.W.; Hsieh, C.J.; Wang, X.R.; Lin, C.J. LIBLINEAR: A Library for Large Linear Classification. J. Mach. Learn. Res. 2008, 9, 1871–1874.
- Freund, Y.; Schapire, R. A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. J. Comput. Syst. Sci. 1997, 55, 119–139.
- Moore II, D.H. Classification and regression trees, by Leo Breiman, Jerome H. Friedman, Richard A. Olshen, and Charles J. Stone. Brooks/Cole Publishing, Monterey, 1984, 358 pages. Cytometry 1987, 8, 534–535.
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
- Zhang, H. The Optimality of Naive Bayes. In Proceedings of the Seventeenth International Florida Artificial Intelligence Research Society Conference (FLAIRS 2004), Miami Beach, FL, USA, 17–19 May 2004; Barr, V., Markov, Z., Eds.; AAAI Press: Washington, DC, USA, 2004.
- Christodoulou, C.; Georgiopoulos, M. Applications of Neural Networks in Electromagnetics; Artech House: Norwood, MA, USA, 2001.
- Altman, N.S. An Introduction to Kernel and Nearest-Neighbor Nonparametric Regression. Am. Stat. 1992, 46, 175–185.
- Chatzoglou, E.; Kambourakis, G.; Kolias, C.; Smiliotopoulos, C. Pick Quality Over Quantity: Expert Feature Selection and Data Preprocessing for 802.11 Intrusion Detection Systems. IEEE Access 2022, 10, 64761–64784.
- Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.Y. LightGBM: A Highly Efficient Gradient Boosting Decision Tree. In Advances in Neural Information Processing Systems 30; Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2017; pp. 3146–3154.
- Geurts, P.; Ernst, D.; Wehenkel, L. Extremely randomized trees. Mach. Learn. 2006, 63, 3–42.
- Oliveira, A. 5gm-beam-selection. Available online: https://github.com/lasseufpa/5gm-beam-selection (accessed on 14 August 2022).
Classifiers | Accuracy (%) in [7] | Accuracy (%) in This Study |
---|---|---|
LinearSVC | 33.20 | 35.62 |
AdaBoost | 55.00 | 54.96 |
Decision Trees | 55.50 | 52.06 |
Random Forest | 63.20 | 57.88 |
DNN | 63.20 | 62.20 |
Naive Bayes | – | 8.70 |
Artificial Neural Net (ANN) | – | 40.50 |
kNN | – | 43.04 |
SGDClassifier | – | 37.80 |
LightGBM | – | 26.66 |
Logistic Regression | – | 43.76 |
Extra Trees (ET) | – | 52.66 |
Classifiers | AUC (%) | Accuracy (%) | Top-2 (%) | Top-5 (%) | Top-10 (%) |
---|---|---|---|---|---|
Naive Bayes | 64.18 | 4.79 | 10.44 | 23.11 | 29.44 |
Decision Trees | 84.85 | 92.94 | 94.39 | 95.16 | 95.60 |
Random Forest | 74.45 | 87.44 | 96.09 | 99.16 | 99.33 |
AdaBoost | 50.20 | 60.28 | 70.81 | 86.36 | 93.87 |
LinearSVC | 74.42 | 79.25 | – | – | – |
NN | 67.00 | 88.96 | 95.89 | 98.64 | 99.18 |
kNN | 76.73 | 87.68 | 94.44 | 97.37 | 97.47 |
SGDClassifier | 70.46 | 59.52 | – | – | – |
LightGBM | 49.97 | 40.49 | 51.28 | 51.52 | 52.40 |
Logistic Regression | 63.49 | 76.83 | 90.73 | 97.23 | 98.58 |
ET | 50.00 | 58.52 | 69.23 | 85.90 | 93.53 |
Classifiers | AUC (%) | Accuracy (%) | Top-2 (%) | Top-5 (%) | Top-10 (%) |
---|---|---|---|---|---|
DT | 83.67 | 93.59 | 94.62 | 94.64 | 94.75 |
Random Forest | 66.49 | 86.96 | 96.34 | 99.48 | 99.73 |
NN | 84.12 | 92.35 | 94.44 | 99.04 | 99.48 |
kNN | 76.73 | 87.68 | 94.44 | 97.37 | 97.47 |
Logistic Regression | 80.16 | 85.37 | 95.83 | 99.21 | 99.57 |
Naive Bayes | 79.14 | 14.12 | 30.97 | 66.20 | 84.15 |
AdaBoost | 50.20 | 60.28 | 70.78 | 86.35 | 93.87 |
LinearSVC | 76.53 | 83.61 | – | – | – |
SGDClassifier | 72.63 | 82.86 | – | – | – |
LightGBM | 85.38 | 95.10 | 98.37 | 99.52 | 99.71 |
ET | 68.74 | 85.95 | 96.22 | 99.17 | 99.69 |
Classifier | AUC (%) | Accuracy (%) | Precision (%) | Recall (%) | F1 (%) | Top-2 (%) | Top-5 (%) | Top-10 (%) |
---|---|---|---|---|---|---|---|---|
DT | 93.77 | 94.56 | 89.00 | 87.69 | 87.54 | 95.37 | 95.41 | 95.54 |
RF | 85.22 | 87.91 | 83.67 | 70.89 | 74.57 | 96.63 | 99.52 | 99.80 |
NN | 93.15 | 93.67 | 89.60 | 86.49 | 87.00 | 97.89 | 99.41 | 99.64 |
kNN | 90.66 | 91.71 | 90.07 | 81.57 | 84.42 | 90.66 | 96.39 | 98.32 |
Logistic Regression | 91.39 | 85.87 | 90.25 | 83.24 | 85.58 | 96.19 | 99.33 | 99.69 |
Naive Bayes | 79.64 | 17.22 | 40.88 | 71.38 | 44.74 | 36.88 | 68.33 | 84.95 |
AdaBoost | 50.74 | 58.71 | 3.42 | 3.18 | 2.55 | 68.99 | 84.35 | 91.84 |
LinearSVC | 89.55 | 84.48 | 9.83 | 79.64 | 83.12 | – | – | – |
SGDClassifier | 87.72 | 83.31 | 85.11 | 76.01 | 78.16 | – | – | – |
LightGBM | 93.90 | 95.64 | 92.08 | 87.93 | 89.18 | 98.61 | 99.53 | 99.77 |
ET | 83.41 | 85.85 | 82.46 | 67.33 | 71.54 | 95.98 | 99.24 | 99.74 |
Voting-4 | 94.50 | 95.86 | 91.89 | 87.13 | 89.66 | 98.79 | 99.58 | 99.76 |
Voting-8 | 94.12 | 95.40 | 92.10 | 88.39 | 89.47 | 98.22 | 99.59 | 99.87 |
Bagging-DT | 92.06 | 94.00 | 91.57 | 84.32 | 86.78 | 98.74 | 99.60 | 99.68 |
Bagging-RF | 87.07 | 89.55 | 87.30 | 74.54 | 78.54 | 97.93 | 99.64 | 99.81 |
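The Voting and Bagging rows in the table above combine shallow base learners. As a hedged sketch in scikit-learn (the exact estimator membership of Voting-4 is an assumption here, chosen from the individually strong learners in the table), a soft-voting ensemble and a bagged decision tree could be built as follows:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (VotingClassifier, BaggingClassifier,
                              RandomForestClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the beam-selection dataset.
X, y = make_classification(n_samples=300, n_classes=3, n_informative=6,
                           random_state=0)

# Soft voting averages predicted probabilities, which is what makes
# top-k accuracy and AUC computable for the ensemble.
voting4 = VotingClassifier(
    estimators=[('dt', DecisionTreeClassifier(random_state=0)),
                ('rf', RandomForestClassifier(random_state=0)),
                ('knn', KNeighborsClassifier()),
                ('lr', LogisticRegression(max_iter=1000))],
    voting='soft')

# Bagging with a decision tree as the base estimator (cf. the Bagging-DT row).
bagging_dt = BaggingClassifier(DecisionTreeClassifier(random_state=0),
                               n_estimators=50, random_state=0)

for name, model in [('Voting-4', voting4), ('Bagging-DT', bagging_dt)]:
    score = cross_val_score(model, X, y, cv=5, scoring='accuracy').mean()
    print(name, round(score, 4))
```

Voting averages out the weaknesses of individual learners, which is consistent with the Voting-4 row edging out every single classifier on AUC and accuracy above.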
Parameters | MLP | Autoencoder |
---|---|---|
Activator | ReLU | ReLU |
Output activator | Softmax | Softmax |
Initializer | He_uniform | – |
Optimizer | SGD | SGD |
Momentum | 0.9 | 0.9 |
Dropout | 0.15/0.1 | 0.1 |
Learning rate | 0.01 | 0.01 |
Loss | SCC | SCC |
Batch normalization | Yes | Yes |
Layers | 11 | 16 |
Nodes (per layer) | 500/450/400/ 350/300/250/ 200/160/120/ 60/56 | 300/250/200/ 100/100/50/25/ 10/25/50/100/ 100/200/250/300/ 56 |
Batch size | 32 | 32 |
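A minimal Keras sketch of the MLP column in the table above could look as follows. This is not the authors' released code: the input dimension is a placeholder, the 56-way softmax output is inferred from the last layer width in the table, and the dropout rate is fixed at 0.15 although the table lists 0.15/0.1:

```python
import tensorflow as tf

NUM_FEATURES = 23  # placeholder: actual input dimension depends on the dataset
NUM_CLASSES = 56   # the final width in the table, one unit per output class

widths = [500, 450, 400, 350, 300, 250, 200, 160, 120, 60]

model = tf.keras.Sequential([tf.keras.Input(shape=(NUM_FEATURES,))])
for w in widths:
    model.add(tf.keras.layers.Dense(w, activation='relu',
                                    kernel_initializer='he_uniform'))
    model.add(tf.keras.layers.BatchNormalization())
    model.add(tf.keras.layers.Dropout(0.15))  # table lists 0.15/0.1

model.add(tf.keras.layers.Dense(NUM_CLASSES, activation='softmax'))
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01,
                                                momentum=0.9),
              loss='sparse_categorical_crossentropy',  # SCC in the table
              metrics=['accuracy'])
```

Sparse categorical crossentropy lets the integer beam-pair indices be used directly as labels, without one-hot encoding.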
Model | AUC | Accuracy | Precision | Recall | F1 | Epochs |
---|---|---|---|---|---|---|
MLP | 93.07 | 94.18 | 85.90 | 86.29 | 85.17 | 50.6 |
Autoencoder | 87.28 | 91.68 | 74.78 | 74.79 | 73.27 | 62.5 |
Parameters | CNN |
---|---|
Activator | ReLU |
Output activator | Linear |
Optimizer | SGD |
Momentum | 0.9 |
Dropout | 0.8 |
Learning rate | 0.01 |
Loss | MAE |
Batch normalization | Yes |
Layers | 6 |
Nodes (per layer) | Conv1D(128,10)/ Conv1D(64,12)/ Conv1D(32,10)/ Dense(100)/ Flatten()/ Dense(1) |
Batch size | 32 |
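The regression network in the table above can be sketched in Keras as follows. The input length is a placeholder, and `padding='same'` is an assumption (with valid padding, the three listed kernel sizes would shrink a short input below zero length); the layer order follows the table, with Dense(100) applied per time step before flattening:

```python
import tensorflow as tf
from tensorflow.keras import layers

SEQ_LEN = 64  # placeholder input length

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, 1)),
    layers.Conv1D(128, 10, padding='same', activation='relu'),
    layers.BatchNormalization(),
    layers.Conv1D(64, 12, padding='same', activation='relu'),
    layers.BatchNormalization(),
    layers.Conv1D(32, 10, padding='same', activation='relu'),
    layers.Dropout(0.8),
    layers.Dense(100, activation='relu'),
    layers.Flatten(),
    layers.Dense(1, activation='linear'),  # single linear regression output
])

model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01,
                                                momentum=0.9),
              loss='mae')
```

The single linear output unit with an MAE loss frames beam selection as regressing the beam-pair index rather than classifying over 56 classes, as in Section 5.4.2.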
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Chatzoglou, E.; Goudos, S.K. Beam-Selection for 5G/B5G Networks Using Machine Learning: A Comparative Study. Sensors 2023, 23, 2967. https://doi.org/10.3390/s23062967