
A Survey of Machine Learning-Based Ride-Hailing Planning

Published: 10 May 2024

Abstract

Ride-hailing is a sustainable transportation paradigm in which riders access door-to-door travel services through a mobile phone application, and it has attracted enormous usage. There are two major planning tasks in a ride-hailing system: 1) matching, i.e., assigning available vehicles to pick up riders; and 2) repositioning, i.e., proactively relocating vehicles to certain locations to balance the supply of and demand for ride-hailing services. Recently, many studies of ride-hailing planning that leverage machine learning techniques have emerged. In this article, we present a comprehensive overview of the latest developments in machine learning-based ride-hailing planning. To offer a clear and structured review, we introduce a taxonomy into which we fit the categories of related work according to their planning tasks and solution schemes: collective matching, distributed matching, collective repositioning, distributed repositioning, and joint matching and repositioning. We further shed light on the many real-world data sets and simulators that are indispensable for empirical studies of machine learning-based ride-hailing planning strategies. Finally, we propose several promising research directions for this rapidly growing research and practical field.
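
To make the matching task above concrete, the following is a minimal, hypothetical sketch (not taken from any work covered by the survey) that casts one matching round as a bipartite assignment problem: idle vehicles are paired with waiting riders so that the total pickup distance is minimized, using plain Euclidean distances and SciPy's Hungarian-style assignment solver.

```python
# Toy illustration of one ride-hailing matching round as bipartite assignment.
# Assumptions (not from the survey): 2-D coordinates, Euclidean pickup cost,
# equal numbers of idle vehicles and waiting riders, and a single batched round.
import numpy as np
from scipy.optimize import linear_sum_assignment


def match_vehicles_to_riders(vehicle_xy: np.ndarray, rider_xy: np.ndarray):
    """Return (vehicle_index, rider_index) pairs minimizing total pickup distance."""
    # Pairwise Euclidean distances form the assignment cost matrix.
    cost = np.linalg.norm(vehicle_xy[:, None, :] - rider_xy[None, :, :], axis=-1)
    veh_idx, rider_idx = linear_sum_assignment(cost)  # Hungarian-style solver
    return list(zip(veh_idx.tolist(), rider_idx.tolist()))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vehicles = rng.uniform(0, 10, size=(5, 2))  # 5 idle vehicles, (x, y) positions
    riders = rng.uniform(0, 10, size=(5, 2))    # 5 waiting riders, (x, y) positions
    print(match_vehicles_to_riders(vehicles, riders))
```

In many of the learning-based matching approaches the survey reviews, such a hand-crafted distance cost is replaced by learned estimates (e.g., a value function over vehicle states), while the underlying assignment structure remains similar.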

Published In

IEEE Transactions on Intelligent Transportation Systems, Volume 25, Issue 6
June 2024
1533 pages

Publisher

IEEE Press

Qualifiers

  • Research-article
