DOI: 10.1145/3538641.3561484

Deep reinforcement learning based secondary user transmit power control for underlay cognitive radio networks

Published: 20 October 2022

Abstract

To improve spectral utilization efficiency, underlay cognitive radio networks have been extensively investigated in recent years. The underlay paradigm allows secondary users to operate as long as the interference they cause to the primary user stays below a given threshold. The transmit power control problem for the secondary user is critical and challenging in underlay cognitive radio networks, especially when the network scenario is dynamic. In this work, we propose a deep reinforcement learning based secondary user transmit power control scheme for underlay cognitive radio networks. The proposed scheme dynamically controls the transmit power of a mobile secondary user to improve the system's spectral utilization efficiency while satisfying the required SINR (signal-to-interference-plus-noise ratio) for primary users. The performance of the proposed scheme in terms of interference ratio and throughput is validated by extensive simulations.
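The abstract describes a learning loop in which the secondary user picks a transmit power, is rewarded for its own throughput, and is penalized when the primary user's SINR falls below the threshold. The paper's actual scheme uses deep reinforcement learning; as a rough illustration only, the sketch below substitutes a simpler tabular Q-learning agent over a discretized power set. All numbers here (power levels, channel model, SINR threshold, reward shape) are invented assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup -- NOT the paper's model. The SU chooses one of a few
# discrete transmit powers; the PU is protected by a linear SINR threshold.
POWER_LEVELS = np.array([0.01, 0.05, 0.1, 0.5, 1.0])  # SU transmit powers (W)
PU_POWER = 1.0       # primary user's transmit power (W)
NOISE = 1e-3         # noise power (W)
SINR_MIN = 5.0       # required PU SINR (linear), an assumed threshold
N_STATES = 4         # coarse quantization of the SU->PU cross-channel gain
ALPHA, EPS = 0.1, 0.1

# Q-table: one row per quantized channel state, one column per power level.
Q = np.zeros((N_STATES, len(POWER_LEVELS)))

def pu_sinr(g_su_pu, p_su):
    # PU SINR degraded by SU interference over cross-channel gain g_su_pu
    return PU_POWER / (NOISE + g_su_pu * p_su)

def state_of(g):
    # Map a gain in (0, 1) to one of N_STATES buckets
    return min(int(g * N_STATES), N_STATES - 1)

for episode in range(5000):
    g = rng.uniform(0.01, 1.0)   # mobile SU => random cross gain each step
    s = state_of(g)
    # Epsilon-greedy action selection over power levels
    a = rng.integers(len(POWER_LEVELS)) if rng.random() < EPS else int(Q[s].argmax())
    p = POWER_LEVELS[a]
    # Reward: SU rate if the PU is protected, a heavy penalty otherwise
    reward = np.log2(1.0 + p / NOISE) if pu_sinr(g, p) >= SINR_MIN else -10.0
    # One-step (contextual-bandit style) value update
    Q[s, a] += ALPHA * (reward - Q[s, a])

# Learned policy: preferred transmit power per channel state
policy = POWER_LEVELS[Q.argmax(axis=1)]
print(policy)
```

The intended behavior is that the agent learns to transmit at higher power when the cross-channel gain to the primary user is weak and to back off when it is strong; the paper's DRL scheme pursues the same trade-off in a dynamic scenario with a richer state.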


Published In

RACS '22: Proceedings of the Conference on Research in Adaptive and Convergent Systems
October 2022
208 pages
ISBN: 9781450393980
DOI: 10.1145/3538641

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. deep reinforcement learning
  2. transmit power control
  3. underlay cognitive radio

Qualifiers

  • Research-article

Acceptance Rates

Overall Acceptance Rate 393 of 1,581 submissions, 25%

