GitHub - Xubin1011/route_planning_dqn: Constrained Route Planning of Electric Vehicles with Recharging Based on Deep Reinforcement Learning

Over the past few years, the increasing concern about climate change and governmental initiatives have led to a rapid rise in the number of electric vehicles. However, the limited driving range and sparse charging infrastructure can pose constraints on the widespread deployment of electric vehicles. Particularly during long-distance journeys, making incorrect charging decisions can significantly impact travel time, and in the worst case, result in an inability to reach the destination due to insufficient energy.

Existing route planning solutions for electric vehicles have primarily focused on minimizing travel time, distance, and monetary cost through charging decisions. These approaches often neglect drivers' rest requirements during long trips and pay little attention to the success rate of generating feasible routes during deployment. In response to these challenges, a two-layer route planning model based on reinforcement learning has been developed to approximate an optimal solution.

The first layer of the model covers the training phase and introduces a reward method that balances driving time, charging time, and rest time, so that maximizing the cumulative reward minimizes the total weighted trip time. In the second layer, the trained Q-Network is deployed to evaluate feasible actions and, in combination with a Take Steps Back method, to select actions that do not violate the constraints. The two layers are distinct modules.
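The reward idea described above can be sketched as a weighted time penalty per step. This is a minimal illustration, not the repository's actual implementation: the function name `step_reward` and the weights `w_drive`, `w_charge`, and `w_rest` are assumptions.

```python
# Hypothetical sketch of a reward that balances driving, charging,
# and rest time: each step receives a negative reward proportional
# to the weighted time spent, so maximizing the cumulative reward
# minimizes the total weighted trip time. Weight values are assumed.
def step_reward(drive_time_h, charge_time_h, rest_time_h,
                w_drive=1.0, w_charge=0.6, w_rest=0.3):
    """Return the (negative) reward for one step, in weighted hours."""
    return -(w_drive * drive_time_h
             + w_charge * charge_time_h
             + w_rest * rest_time_h)

# Example step: 2 h driving, 30 min charging, 45 min rest
r = step_reward(drive_time_h=2.0, charge_time_h=0.5, rest_time_h=0.75)
```

Under this scheme the relative weights encode the trade-off the paper describes: a higher `w_charge` steers the agent toward fewer or shorter charging stops, while a nonzero `w_rest` keeps rest breaks from being free.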

The experiments demonstrate the model's effectiveness in addressing electric vehicle route planning problems with multiple constraints. Furthermore, the trained Q-Network can be applied to other maps without retraining, and when combined with the Take Steps Back method in the deployment phase, the success rate is improved.

The training model (upper) and the deployment model (lower)
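The deployment loop with the Take Steps Back idea can be sketched as follows. This is a hedged illustration of the described behavior, not the repository's code: all names (`plan_route`, `q_values`, `feasible`, `transition`) are assumptions, and "taking a step back" is modeled as popping the last state and banning the action that led into a dead end.

```python
# Hypothetical sketch of the deployment phase: greedily follow the
# trained Q-network over constraint-feasible actions; when no feasible
# action remains, step back and try the next-best action instead.
def plan_route(q_values, feasible, state0, transition, is_goal,
               max_steps=100, max_backtracks=10):
    path, banned, backtracks = [state0], {}, 0
    while len(path) <= max_steps:
        s = path[-1]
        if is_goal(s):
            return path
        actions = [a for a in feasible(s) if a not in banned.get(s, set())]
        if not actions:
            if len(path) == 1 or backtracks >= max_backtracks:
                return None  # no feasible route found
            path.pop()       # take a step back
            backtracks += 1
            continue
        a = max(actions, key=lambda act: q_values(s, act))  # greedy w.r.t. Q
        banned.setdefault(s, set()).add(a)  # don't retry this action after backtracking
        path.append(transition(s, a))
    return None

# Toy example: state 1 is a dead end, so the planner must step back
graph = {0: [1, 2], 1: [], 2: [3], 3: []}
route = plan_route(
    q_values=lambda s, a: 10 if (s == 0 and a == 1) else 1,
    feasible=lambda s: graph[s],
    state0=0,
    transition=lambda s, a: a,
    is_goal=lambda s: s == 3,
)
```

In the toy example the Q-values initially prefer the dead-end state 1; the backtracking step recovers and the planner still reaches the goal via state 2, which mirrors the success-rate improvement the text attributes to the Take Steps Back method.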

Data processing in the training phase
