Abstract
Smart Cities promise their residents quick journeys in a clean and sustainable environment. Despite the benefits accrued by the introduction of traffic management solutions (e.g. improved travel times, maximisation of throughput), these solutions usually fall short of ensuring environmental sustainability around the implementation areas. This is because the environmental dimension (e.g. vehicle emissions) is usually absent from the optimisation methodologies adopted for traffic management strategies. Nonetheless, since environmental performance constitutes a primary goal of contemporary mobility planning, solutions that can safeguard air quality are of great significance. This study presents an advanced Artificial Intelligence (AI)-based signal control framework able to incorporate environmental considerations into the core of the signal optimisation process. More specifically, a highly flexible Reinforcement Learning (RL) algorithm has been developed in order to identify efficient but, more importantly, environmentally friendly signal control strategies. The methodology is deployed on a large-scale micro-simulation environment able to realistically represent urban traffic conditions. Alternative signal control strategies are designed, applied, and evaluated against their achieved traffic efficiency and environmental footprint. Based on the results obtained from the application of the methodology on a core part of the urban road network of Nicosia, Cyprus, the best strategy achieved a 4.8% increase in network throughput, a 17.7% decrease in average queue length, and a remarkable 34.2% decrease in delay, while considerably reducing CO emissions by 8.1%. The encouraging results showcase the ability of RL-based traffic signal control to ensure improved air-quality conditions for the residents of dense urban areas.
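To make the idea of emission-aware signal control concrete, the sketch below shows, under loose assumptions, how an environmental term can be folded into the reward of a tabular Q-learning controller. The environment stub (`SignalEnvStub`), the CO proxy, and the weight `w_env` are hypothetical placeholders introduced for illustration only; they do not reproduce the paper's micro-simulation setup, state/action definitions, or its actual RL formulation.

```python
import random
from collections import defaultdict

# Hypothetical stand-in for a micro-simulated intersection: it only
# illustrates the interface an RL signal controller would interact with.
class SignalEnvStub:
    PHASES = [0, 1]  # e.g. north-south green vs. east-west green

    def reset(self):
        self.queues = [random.randint(0, 10) for _ in range(4)]
        return tuple(self.queues)

    def step(self, phase):
        # Toy dynamics: approaches served by the green phase discharge,
        # the remaining approaches accumulate vehicles.
        served = (0, 1) if phase == 0 else (2, 3)
        for i in range(4):
            if i in served:
                self.queues[i] = max(0, self.queues[i] - random.randint(2, 5))
            else:
                self.queues[i] += random.randint(0, 3)
        delay = sum(self.queues)                              # proxy for vehicle delay
        co_emissions = 0.4 * delay + 1.5 * len(self.queues)   # toy CO proxy
        return tuple(self.queues), delay, co_emissions

# Reward trading off traffic efficiency against environmental footprint;
# the weight w_env is an assumption, not a value reported in the paper.
def reward(delay, co_emissions, w_env=0.5):
    return -(delay + w_env * co_emissions)

def train(episodes=200, steps=100, alpha=0.1, gamma=0.95, epsilon=0.1):
    env, q = SignalEnvStub(), defaultdict(float)
    for _ in range(episodes):
        state = env.reset()
        for _ in range(steps):
            # epsilon-greedy phase selection
            if random.random() < epsilon:
                action = random.choice(env.PHASES)
            else:
                action = max(env.PHASES, key=lambda a: q[(state, a)])
            next_state, delay, co = env.step(action)
            best_next = max(q[(next_state, a)] for a in env.PHASES)
            # standard one-step Q-learning update with the emission-aware reward
            q[(state, action)] += alpha * (
                reward(delay, co) + gamma * best_next - q[(state, action)]
            )
            state = next_state
    return q

if __name__ == "__main__":
    q_table = train()
    print(f"Learned {len(q_table)} state-action values")
```

In a realistic deployment the same reward structure would be fed by delays and emission estimates measured inside the micro-simulator rather than by the toy dynamics used here.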
About this paper
Cite this paper
Ballis, H., Dimitriou, L. (2021). Evaluating the Performance of Reinforcement Learning Signalling Strategies for Sustainable Urban Road Networks. In: Nathanail, E.G., Adamos, G., Karakikes, I. (eds) Advances in Mobility-as-a-Service Systems. CSUM 2020. Advances in Intelligent Systems and Computing, vol 1278. Springer, Cham. https://doi.org/10.1007/978-3-030-61075-3_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-61074-6
Online ISBN: 978-3-030-61075-3