[animebytes] Found no results while trying to browse this tracker (Test) #3008
Comments
Same here. Jackett Version 0.8.886.0
Same.
Same here. Tried to delete the tracker and add it again. Even that failed.
It seems like they are blocking scraping tools like Jackett again. Please start a discussion in their forums regarding this.
This is the response I got from the support staff:
Is it the search API that's failing or the login API?
Looks like another cookie has been added, without which it throws the 403 Forbidden error.
Looks like the staff member misinterpreted your question. Yes, we are blocking scraping tools from performing queries against the search page by means of a dynamically created cookie in JavaScript. While you can obviously work around this, it will achieve nothing except a cat-and-mouse game, and it will show that you are willing to ignore the wishes of staff, as the message is quite clear that we do not want Jackett or other scraping tools. In the future we may consider opening a search API accessible similarly to the RSS feeds via passkey authentication, and in that case Jackett would be free to use it. For now, however, we would like to request the removal of AnimeBytes support from Jackett.
What kind of API? Jackett requests HTML pages and operates on DOM elements to create requests, just as if you had clicked in a browser - there is no API there.
Sorry, the wording came across wrong. I meant to ask whether the login was failing or the search. Turns out it's the search. So, basically, since the RSS feed doesn't have more than 50 items at the moment, we can only perform manual searches on AB? This is a huge inconvenience, but that's fine for now. Hoping for the search API to come soon. Also, is there any reason only the search page is blocked unless the dynamic cookie is present, while the login and main pages are allowed?
We still want to allow users to use the site without JavaScript - this check is performed only on the search page as a way of weighing the ability to use the site without JavaScript against blocking any scraper tool.
By all means you should use custom RSS feeds (Power User+) if you want to fetch new releases. An alternative is to fetch the global RSS feed and perform the check in the client, or to use something like autodl-irssi. We see no reason to implement a special feature that would allow externally searching the site; despite this, we obviously see this form of browsing gaining some traction (which I personally don't understand, but let's not discuss why Sonarr/Radarr even exist in the first place here), and an external search API (a Torznab API in the future, when we get to reworking the search engine) is on the TODO list. There are many reasons why we don't want scraping tools like Jackett, but the most important ones are here:
Thanks for the reasoning behind the decision, @proton-ab. Hopefully, after seeing this, people won't raise any more related issues, but I wouldn't count on it.
@proton-ab
It seems like you removed/disabled the ajax.php API which is usually provided by many Gazelle-based trackers. May I ask why? If you could enable it again, we would change the indexer to use it (like we already do for many other trackers). That would eliminate any torrents.php/torrents2.php scraping. Alternatively, I would suggest a temporary hybrid solution: we query the RSS feed if there are no search keywords and fall back to scraping if there are search keywords. While we would lose some information that isn't available via RSS in this case, it should be better than nothing. Once there's an API providing search functionality, we would completely migrate to it. I believe it's better to work together; I definitely won't start a cat-and-mouse game with you. But I believe the time invested into implementing the JavaScript cookie mechanism could have been better invested into something more useful.
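The hybrid fallback proposed above is simple control flow. A minimal sketch (Jackett itself is written in C#; this is only an illustration, and the helper functions are hypothetical stand-ins for the real indexer code):

```python
def fetch_rss_feed():
    # Hypothetical stand-in: in a real indexer this would parse the
    # tracker's RSS feed for the latest releases.
    return ["latest-release-1", "latest-release-2"]

def scrape_search_page(terms):
    # Hypothetical stand-in: in a real indexer this would scrape the
    # search results page for the given keywords.
    return ["result-for-" + t for t in terms]

def fetch_releases(search_terms=None):
    """Hybrid strategy: use the RSS feed for 'latest releases' queries
    (no keywords) and fall back to scraping only for keyword searches."""
    if not search_terms:
        # No keywords: the RSS feed is enough for "recent torrents".
        return fetch_rss_feed()
    # Keyword search: the RSS feed cannot answer this, so scrape.
    return scrape_search_page(search_terms)
```

The point of the split is that the high-volume "anything new?" polling from clients like Sonarr never touches the HTML pages, so scraping only happens on explicit user searches.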
First of all, thank you for continuing this thread, we would definitely like to find a common solution to this issue.
That would definitely be a huge step forward, especially considering the number of 'hacks' using leaked passwords that trackers are seeing; however, it won't change anything for AnimeBytes specifically. torrents.php won't be available to Jackett anymore, but alternatives are coming (see below for details). Regarding encryption of secrets, I don't have any specific suggestions, and I didn't really look at the code to see how it's implemented. Assuming you are checking permissions, that should be enough to satisfy me for now.
Regardless of the reason it's asked for, it's plainly bad, as people will likely not update it afterwards, and others will start using it as a way of reducing possible bugs after seeing such suggestions.
Sadly, we're using a non-standard Gazelle search engine. The ajax API you mention returned only group IDs. We plan to open scrape.php soon, which will return JSON objects of groups and the torrents inside those groups, accessible entirely via passkey, which will eliminate the need for storing passwords, 2FA, cookies, or any of the other issues that come with them. I cannot give you any estimate on it; sadly, this implementation requires working with legacy code, and our priorities are set on reworking the legacy Gazelle code into the Tentacles framework. But I can assure you that it will come, and we will announce it on the Dev Blog and here.
Actually, the JavaScript cookie mechanism is so simple that it takes a single line of code in JavaScript and 2 lines of code on the PHP side (including a newline).
Sadly, this is scheduled to be done whenever we get to reworking the search engine, which implies migrating from Sphinx to something better and rewriting a LOT of legacy code. It can't be done until smaller parts of the rewrite are done; otherwise we would be stuck rewriting half of the site at once, which would quickly turn into a never-finished mess.
@proton-ab thank you for explaining it. Seems like you have to deal with a lot of legacy stuff. What about my suggestion of allowing limited scraping (use RSS for the latest releases and scraping for search requests)? Would that be an acceptable temporary solution until you finish the new API?
You should be able to use https://animebytes.tv/scrape.php?torrent_pass=[:passkey]&type=[music,anime] now. The rest of the parameters mirror the torrents.php ones, with the exception of action=advanced, which is implied and hence not required. Result limiting is set to 50. There is no need to log in or authenticate in any way other than providing the passkey. Additionally, such scraping is exempt from any form of VPN ban, as it does not trigger a browse action on the user account (similarly to RSS feeds or downloading a torrent file).
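Building a request against this endpoint is just URL construction: the passkey and type are the only required parameters, and any further filters are said to mirror torrents.php. A minimal sketch (the `searchstr` keyword filter is an assumption based on standard Gazelle torrents.php parameters, not confirmed above):

```python
from urllib.parse import urlencode

def build_scrape_url(passkey, media_type, extra_params=None):
    """Build an AnimeBytes scrape.php URL as described above.

    media_type must be 'anime' or 'music'; extra filters are assumed
    to mirror the torrents.php parameters (action=advanced is implied
    and therefore omitted).
    """
    if media_type not in ("anime", "music"):
        raise ValueError("type must be 'anime' or 'music'")
    params = {"torrent_pass": passkey, "type": media_type}
    if extra_params:
        params.update(extra_params)
    return "https://animebytes.tv/scrape.php?" + urlencode(params)
```

Because authentication is passkey-only, a client using this never needs to store the account password, handle 2FA, or maintain session cookies.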
Thank you so much for the fast turnaround, @proton-ab. Two observations:
@proton-ab thank you for the quick solution. I'll try to find some time to migrate the Jackett implementation to the scrape.php API tomorrow.
Done
type=[anime,music] is equivalent to torrents.php and torrents2.php - it's the general type, where anime is everything non-music. You can further narrow categories within each type by using the same parameters as with torrents.php.
@proton-ab I almost finished the Jackett update, but it's currently useless because the API doesn't include the group title.
Nice catch, that's a bug obviously. I'll also split them into separate fields. |
@kaso17 No way to email you, so leaving this here in the hopes that you see it before anyone else. Animebytes.cs has someone's (hopefully not yours) passkey publicly visible. |
The passkey has now been revoked. The bug with names should now be fixed. I've also split the name into parts; however, do note that for type=anime, GroupName will contain something like 'TV Series' while the actual name will be under 'SeriesName', whereas for type=music, GroupName will contain the actual album name. This is how we actually store the data right now, so there's not much we can do about it.
@darthShadow Thank you for letting me know. It was my key, very embarrassing. Jackett v0.8.929 now contains a working AnimeBytes indexer again. @proton-ab: thank you for making this happen.
@kaso17 @proton-ab, thanks for all the effort you guys have put in so far. Jackett was updated to the new version, but unfortunately, when I try to test the indexer after pasting my passkey, I get a parse error. The original error is in Dutch (because of my OS language), but it roughly translates to: "Object reference not set to an instance of an object."
@NewBlueMew Can you try deleting and adding the indexer again? I am using it myself without any issues so far.
@darthShadow, tried it several times; it didn't solve it for me. Also tried selecting some of the checkboxes (include RAW, add E0, etc.), but none resolved the parse error.
Reinstalled Jackett, and AnimeBytes is working flawlessly now :). Thanks for the help!
@kaso17 Heads up - we now require the 'username' param to be present along with 'torrent_pass'. It should contain a string matching the username of the account that the passkey in 'torrent_pass' belongs to. The check is case-sensitive.
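Under this new requirement, a client building the scrape.php URL must send the account's username (case-sensitive) alongside the passkey. A minimal sketch of the updated request, assuming everything else stays as described earlier in the thread:

```python
from urllib.parse import urlencode

def build_authenticated_scrape_url(username, passkey, media_type):
    """scrape.php now requires a 'username' parameter whose value must
    exactly match (case-sensitively) the account that owns the passkey
    passed in 'torrent_pass'."""
    params = {
        "username": username,       # case-sensitive account name
        "torrent_pass": passkey,    # the account's passkey
        "type": media_type,         # 'anime' or 'music'
    }
    return "https://animebytes.tv/scrape.php?" + urlencode(params)
```

Tying the passkey to the username means a leaked passkey alone (as happened earlier in this thread) is no longer enough to query the endpoint.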
Getting "Found no results while trying to browse this tracker" for animebytes
log.txt
Jackett version: 0.8.886.0
Mono version (if not using Windows): Mono JIT compiler version 5.2.0.224 (tarball Tue Oct 3 19:51:58 UTC 2017)