motis-project/motis

VDV AUS support #840

Draft · mority wants to merge 54 commits into master

Conversation

@mority (Contributor) commented Apr 29, 2025

No description provided.

@mority (Contributor, Author) commented Apr 29, 2025
unsupported additional runs: 369
cancelled runs: 0
total stops: 421555
resolved stops: 408217
unknown stops: 13338
unsupported additional stops: 193
no transport found at stop: 1613
searches on incomplete runs: 1889
found runs: 10542
multiple matches: 2485
total runs: 24014
matched runs: 18333
unmatchable runs: 2274
runs without stops: 1739
skipped vdv stops: 3584
excess vdv stops: 1074
updated events: 356292
propagated delays: 69528

multiple matches: 2485 indicates cases in which we find two runs that match with the exact same score due to duplicates in the timetable. While we cannot remove the duplicates, we could update both runs, which would not be wrong considering that they are duplicates.
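
For illustration, updating all tied best matches could look roughly like this (a minimal sketch with hypothetical names, none of which exist in motis/nigiri):

// Minimal sketch (not motis/nigiri API): apply an incoming VDV AUS update to
// every candidate run that ties for the best match score, so that duplicated
// timetable entries all receive the same, equally valid update.
#include <algorithm>
#include <cstdint>
#include <vector>

using run_idx_t = std::uint32_t;  // hypothetical run handle
struct vdv_run {};                // hypothetical parsed VDV AUS run

// Hypothetical hook into the existing single-run update path.
void apply_update(run_idx_t, vdv_run const&) {}

struct match_candidate {
  run_idx_t run_;
  int score_;
};

void update_best_matches(std::vector<match_candidate> const& candidates,
                         vdv_run const& upd) {
  if (candidates.empty()) {
    return;
  }
  auto const best_score =
      std::max_element(candidates.begin(), candidates.end(),
                       [](match_candidate const& a, match_candidate const& b) {
                         return a.score_ < b.score_;
                       })
          ->score_;
  for (auto const& c : candidates) {
    if (c.score_ == best_score) {
      apply_update(c.run_, upd);  // duplicates get identical updates
    }
  }
}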

I am worried about searches on incomplete runs: 1889, which indicates that the server is sending updates for runs of which we have not yet seen a complete version. According to the protocol specification, the first update for each run has to include all stops of the run, i.e., be a complete run. Maybe the server ignores our hourly unsubscribe/resubscribe and presumes that we still know all the runs that were transmitted before the resubscription?

I guess we could keep the mapping [VDV AUS run --> nigiri run] longer and not scrub it every time we resubscribe. Instead, we could run a cleanup routine once per day and drop yesterday's mappings, or something like that.
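
A rough sketch of that idea, again with hypothetical names and types rather than the actual code:

// Minimal sketch of keeping the [VDV AUS run --> nigiri run] mapping across
// resubscribes and only expiring old entries once per day.
#include <chrono>
#include <string>
#include <unordered_map>

struct mapping_entry {
  std::string nigiri_run_;                       // placeholder for the resolved run
  std::chrono::system_clock::time_point added_;  // when the mapping was created
};

struct run_mapping {
  void add(std::string const& vdv_run_id, std::string const& nigiri_run) {
    map_[vdv_run_id] = {nigiri_run, std::chrono::system_clock::now()};
  }

  // Run once per day (e.g. from a cleanup timer) instead of on every resubscribe.
  void drop_older_than(std::chrono::hours const max_age) {
    auto const now = std::chrono::system_clock::now();
    for (auto it = map_.begin(); it != map_.end();) {
      if (now - it->second.added_ > max_age) {
        it = map_.erase(it);
      } else {
        ++it;
      }
    }
  }

  std::unordered_map<std::string, mapping_entry> map_;
};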

Alternatively, we could increment the VDV AUS AboID every time we resubscribe. But that is just a guess.

@derhuerst

Just jumping in here and leaving some remarks & recommendations because I recently implemented a VDV-453/-454 client (which implements the older VDV-453 v2.4.0 & VDV-454 v2.0 and doesn't fully adhere to them yet). 🙈

  • As it is currently implemented, the client does not do the "alive handling" documented in VDV-453 v3.1.0 chapters 5.1.7 & 5.1.8. With VBB's VDV-453 API, I had to implement the detection of StartDienstZst resets to make sure the client re-creates all subscriptions (see the first sketch after this list).
    • With the VBB API, checking the VDV server's health via StatusAntwort's Status & StartDienstZst also turned out to be a very helpful tool in detecting its unavailability.
  • Currently, the client does not seem to handle WeitereDaten=true in DatenAbrufenAntwort. (It doesn't seem to fetch data at all?) The spec allows the server to deliver its data in chunks using WeitereDaten=true, and VBB's VDV API uses this (as it sends at most a few hundred IstFahrts per chunk); see the second sketch after this list.
  • The client does not fetch new data on datenbereit.xml requests. While this is not necessary, immediately fetching with DatensatzAlle=false upon such a request (instead of just periodically) reduces the delay until new realtime data is available. Note that such requests may come while a data fetching "iteration" is already running.
  • I had to learn the hard way (it is obvious in hindsight) that the client should either a) delete all active subscriptions at start, or b) persist the set of created subscriptions locally. When I had a crash (or server reboot), my client would create another (set of) subscriptions even though the old ones were still active on the server, leading to higher amounts of realtime data transferred, at some point to the extent where fetching in chunks (see above) never caught up.
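
To illustrate the alive-handling point, here is a minimal C++ sketch (hypothetical types; the field names mirror the VDV-453 XML elements) of watching StartDienstZst and re-creating subscriptions on a reset:

// Sketch: watch StartDienstZst in StatusAntwort replies and re-create all
// subscriptions when it changes, i.e. when the data hub restarted and lost them.
#include <chrono>
#include <optional>

struct status_antwort {
  bool ok_;                                                  // Status element
  std::chrono::system_clock::time_point start_dienst_zst_;   // StartDienstZst
};

struct vdv_client {
  void on_status_antwort(status_antwort const& r) {
    if (last_start_dienst_zst_.has_value() &&
        r.start_dienst_zst_ != *last_start_dienst_zst_) {
      recreate_subscriptions();  // server state was reset -> subscribe again
    }
    last_start_dienst_zst_ = r.start_dienst_zst_;
  }

  void recreate_subscriptions() { /* delete + re-create all Abos */ }

  std::optional<std::chrono::system_clock::time_point> last_start_dienst_zst_;
};

And a sketch of draining a chunked DatenAbrufenAntwort stream; the helper functions are made up and only stand in for the actual request/parsing code:

// Sketch: keep sending DatenAbrufenAnfrage until the server stops setting
// WeitereDaten=true in its DatenAbrufenAntwort.
struct daten_abrufen_antwort {
  bool weitere_daten_;  // WeitereDaten: true -> more chunks are queued
  // ... IstFahrt payload omitted in this sketch
};

// Stubs standing in for the actual HTTP request and IstFahrt processing.
daten_abrufen_antwort send_daten_abrufen_anfrage(bool /*datensatz_alle*/) {
  return {false};
}
void process(daten_abrufen_antwort const&) {}

void fetch_all(bool const datensatz_alle) {
  auto more = true;
  while (more) {
    auto const reply = send_daten_abrufen_anfrage(datensatz_alle);
    process(reply);
    more = reply.weitere_daten_;
  }
}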

@derhuerst commented Apr 30, 2025

According to the protocol specification, the first update for each run has to include all stops of the run, i.e., be a complete run. Maybe the server ignores our hourly unsubscribe/resubscribe and presumes that we still know all the runs that were transmitted before the resubscription?

I witnessed this sometimes too with VBB's system. I assume it indirectly depends on the behaviour of the system supplying data to the VDV "Datendrehscheibe".

I ended up keeping, for each FahrtBezeichner, the REF-AUS SollFahrt (if received), the AUS Komplettfahrt=true IstFahrt (if received) and the AUS "partial" IstFahrts (if received). Whenever I receive any of these, I merge them together and match them with the GTFS schedule data, best-effort style. I haven't tested yet how well this works, though.
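
A sketch of that per-FahrtBezeichner state (hypothetical types; the merge and matching themselves are stubbed out):

// Sketch of keeping, per FahrtBezeichner, the REF-AUS SollFahrt, the
// Komplettfahrt=true IstFahrt and the partial IstFahrts, and re-merging
// whenever any of them arrives.
#include <optional>
#include <string>
#include <unordered_map>
#include <vector>

struct fahrt {};  // placeholder for a parsed (Soll/Ist)Fahrt

struct fahrt_state {
  std::optional<fahrt> soll_fahrt_;        // REF-AUS SollFahrt, if received
  std::optional<fahrt> komplett_fahrt_;    // AUS IstFahrt with Komplettfahrt=true
  std::vector<fahrt> partial_ist_fahrts_;  // AUS "partial" IstFahrts
};

fahrt merge(fahrt_state const&) { return {}; }  // best-effort merge, not shown
void match_with_schedule(fahrt const&) {}       // match against GTFS schedule data

void on_partial_ist_fahrt(
    std::unordered_map<std::string, fahrt_state>& states,
    std::string const& fahrt_bezeichner, fahrt const& f) {
  auto& state = states[fahrt_bezeichner];
  state.partial_ist_fahrts_.push_back(f);
  match_with_schedule(merge(state));  // re-match after every received update
}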

Alternatively, we could increment the VDV AUS AboID every time we resubscribe. But that is just a guess.

I generate it randomly in order to prevent collisions.
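
For illustration, a minimal sketch of that approach (not my actual client code; the ID range is an arbitrary assumption):

// Sketch: pick a random AboID at startup so that new subscriptions do not
// collide with ones left over from a previous run.
#include <cstdint>
#include <random>

std::uint32_t random_abo_id() {
  auto rd = std::random_device{};
  auto gen = std::mt19937{rd()};
  return std::uniform_int_distribution<std::uint32_t>{1, 999'999'999}(gen);
}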

@felixguendling (Member)

Thank you very much! We'll look into it :)
