SubsetSampler not available. · Issue #1337 · pytorch/pytorch · GitHub

SubsetSampler not available. #1337


Closed

shubhamjain0594 opened this issue Apr 23, 2017 · 7 comments

Comments
@shubhamjain0594
Contributor

In torch.utils.data.sampler there is RandomSampler and SubsetRandomSampler, but no SubsetSampler for cases where you don't want random sampling from the indices. Is there a reason for this? If it is just a missing feature, I can send a PR.
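For context, a minimal sketch of the distinction (the dataset setup here is illustrative, not from the thread): SubsetRandomSampler visits the given indices in a random permutation each epoch, and the request is for a variant that visits them in the given order.

import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.sampler import SubsetRandomSampler

dataset = TensorDataset(torch.arange(10))
# Visits indices 0, 3, 5 in a random order each epoch; there is
# no built-in sampler that visits them in the order given.
loader = DataLoader(dataset, sampler=SubsetRandomSampler([0, 3, 5]))
for (batch,) in loader:
    print(batch)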

@soumith
Member
soumith commented Apr 23, 2017

You don't need a separate SubsetSampler; just giving an iterable of subset indices is sufficient.

@shubhamjain0594
Contributor Author

I am sorry, but I don't quite understand. What should we give the iterable of subset indices to?

@soumith
Member
soumith commented Apr 23, 2017

You are wondering if we can just have a SubsetSequentialSampler, like in this PR: 4d4336e

To the sampler, you will give a subset of indices (say, a list of indices) and it will sample from them.
I'm saying that you don't need a separate SubsetSampler that takes in indices and makes the data loader iterate over them; you can just give a list of indices to the DataLoader as the sampler (a list of indices also has __iter__ and __len__ defined on it).
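A minimal sketch of the suggestion (the dataset setup is illustrative; note the version caveat raised in the later comments):

import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(10))
# A plain list of indices defines __iter__ and __len__, which is
# all the suggestion relies on; the DataLoader then visits the
# dataset at exactly these indices, in this order.
loader = DataLoader(dataset, sampler=[0, 3, 5])
for (batch,) in loader:
    print(batch)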

@soumith soumith closed this as completed Apr 23, 2017
@shubhamjain0594
Contributor Author

Ohh okay. Sorry I didn't realize that. Thanks for helping out.

@josh-gleason

@soumith This doesn't seem to be true in either PyTorch 0.4.1 or 1.3.1. In both cases the following code gives an error.

loader = DataLoader(range(10), sampler=[0,1,2])

which results in

ValueError: sampler should be an instance of torch.utils.data.Sampler, but got sampler=[0, 1, 2]

@javierbg
javierbg commented May 4, 2020

(Quoting @josh-gleason's comment and code above.)

This is true for 1.5.0 too; it seems that DataLoader checks that sampler is an instance of torch.utils.data.Sampler, so the previous solution is no longer valid.
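For versions where that check is enforced, a minimal workaround (a sketch, not something proposed in the thread) is to wrap the indices in a Sampler subclass, which is essentially the SubsetSampler the issue asked for:

import torch
from torch.utils.data import DataLoader, Sampler, TensorDataset

class SubsetSampler(Sampler):
    # Yields the given indices sequentially, in the order provided.
    def __init__(self, indices):
        self.indices = indices

    def __iter__(self):
        return iter(self.indices)

    def __len__(self):
        return len(self.indices)

dataset = TensorDataset(torch.arange(10))
loader = DataLoader(dataset, sampler=SubsetSampler([0, 1, 2]))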

@ssnl
Collaborator
ssnl commented May 13, 2020

@josh-gleason @javierbg this is being relaxed in #38403

facebook-github-bot pushed a commit that referenced this issue May 14, 2020
Summary:
Since the check was added in #6249, one cannot pass an iterable as a sampler to the data loader anymore, which was a very handy feature (e.g., #1337). I think the check should be removed for two reasons:
1. It is too strict. There is no reason that it should not be a general iterable.
2. It is inconsistent. In `DataLoader` (the main place where people use samplers), you can pass a general iterable as `batch_sampler` but not `sampler` due to this check.
Pull Request resolved: #38403

Differential Revision: D21555958

Pulled By: soumith

fbshipit-source-id: c7267bb99a31edd8f2750689205d6edc5dab5cff
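To illustrate the inconsistency the summary describes (the dataset setup is illustrative): batch_sampler already accepted a plain iterable before this change, where each element is the index list for one batch.

import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(10))
# Each element of the iterable is the list of indices for one batch;
# this worked even while sampler=[...] was rejected by the check.
loader = DataLoader(dataset, batch_sampler=[[0, 1], [4, 5, 6]])
for (batch,) in loader:
    print(batch)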