Task Scheduling in Vehicular Networks: A Multi-Agent Reinforcement Learning Based Reverse Auction Mechanism
Publisher
Association for Computing Machinery
New York, NY, United States
Qualifiers
- Research article