Description
Automatically add `__all__` (AAA)
I have been playing around with using Python's built-in tokenizer to build a big sorted table of which `torch` modules import which modules and symbols from `torch` (summary at end).
Seeing this issue about adding `__all__`, it strikes me that I could quite easily modify my code to automatically add or update `__all__` in any existing `.py` file, if there were any interest.
Right now there are 1303 `.py` files below `torch/` that do not contain `__all__`, and 695 that do.
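A minimal sketch of what AAA could do, assuming we only collect top-level public definitions and skip files that already declare `__all__` (the real tool would use the tokenizer to respect comments and docstrings, and would update an existing `__all__` in place):

```python
import ast
from pathlib import Path


def public_names(source: str) -> list[str]:
    """Collect top-level public names defined in a module."""
    names = []
    for node in ast.parse(source).body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            names.append(node.name)
        elif isinstance(node, ast.Assign):
            names += [t.id for t in node.targets if isinstance(t, ast.Name)]
    return sorted(n for n in set(names) if not n.startswith("_"))


def add_all(path: Path) -> None:
    source = path.read_text()
    if "__all__" in source:
        return  # updating an existing __all__ needs more care; skipped here
    lines = source.splitlines(keepends=True)
    # Naive placement: right after the last top-level import.
    last_import = max(
        (i for i, line in enumerate(lines) if line.startswith(("import ", "from "))),
        default=-1,
    )
    lines.insert(last_import + 1, f"__all__ = {public_names(source)!r}\n")
    path.write_text("".join(lines))
```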
all_linter
An `all_linter` would check that every symbol one module imports from another appears in `__all__` for the module it comes from.
Given AAA, it would be easy to naïvely run it over every Python file on each commit, but that takes about 13 seconds even on a fast machine, which is a bit slow. Making it work incrementally is a better idea; probably not hard, but it needs more design.
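A minimal sketch of the core check, assuming a hypothetical `module_source` mapping from dotted module names to their source text (real module resolution, relative imports, and re-exports are elided):

```python
import ast
from pathlib import Path


def declared_all(source: str) -> set[str] | None:
    """Return a module's literal __all__, or None if it has none."""
    for node in ast.parse(source).body:
        if (
            isinstance(node, ast.Assign)
            and any(isinstance(t, ast.Name) and t.id == "__all__" for t in node.targets)
            and isinstance(node.value, (ast.List, ast.Tuple))
        ):
            return {
                elt.value
                for elt in node.value.elts
                if isinstance(elt, ast.Constant) and isinstance(elt.value, str)
            }
    return None


def lint_file(path: Path, module_source: dict[str, str]) -> list[str]:
    """Report `from X import y` where y is missing from X's __all__."""
    errors = []
    for node in ast.walk(ast.parse(path.read_text())):
        if isinstance(node, ast.ImportFrom) and node.module in module_source:
            exported = declared_all(module_source[node.module])
            if exported is None:
                continue  # no __all__ to check against
            for alias in node.names:
                if alias.name != "*" and alias.name not in exported:
                    errors.append(f"{path}: {alias.name} not in {node.module}.__all__")
    return errors
```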
Appendix: the most imported modules in torch
My original goal was to figure out which modules and symbols were the best candidates for adding typing and documentation, by seeing which were imported the most from code within `torch/`. I have excerpts from a run of https://github.com/rec/test/blob/master/python/importer_counter.py below.
I note an `experimental` module near the top. 😁
The full "report" (it's JSON) goes into increasing levels of detail and is about 45k lines as of this writing.
"torch": 1808,
"torch._inductor.pattern_matcher": 486,
"torch._dynamo.utils": 339,
"torch._inductor.utils": 323,
"torch.utils._pytree": 272,
"torch.fx": 179,
"torch.fx.experimental.symbolic_shapes": 177,
"torch.nn": 173,
"torch.optim.optimizer": 165,
"torch.testing._internal.common_utils": 164,
"torch._prims_common": 161,
"torch._dynamo.source": 149,
...
```
{
  "torch": {
    "(module)": 1121,
    "Tensor": 183,
    "config": 84,
    "_C": 38,
    "ir": 35,
    "variables": 19,
    "SymInt": 14,
    "_dtypes_impl": 11,
  "torch._inductor.pattern_matcher": {
    "CallFunction": 32,
    "KeywordArg": 30,
    "Arg": 29,
    "CallFunctionVarArgs": 27,
    "Ignored": 26,
    "ListOf": 26,
....
```
"torch._inductor.pattern_matcher": {
"compute_mutation_region_ids": [
"torch._functorch.compile_utils"
],
"same_mutation_regions": [
"torch._functorch.compile_utils"
],
"Arg": [
"torch._inductor.fx_passes.b2b_gemm",
"torch._inductor.fx_passes.binary_folding",
"torch._inductor.fx_passes.decompose_mem_bound_mm",
"torch._inductor.fx_passes.mkldnn_fusion",
"torch._inductor.fx_passes.post_grad",
"torch._inductor.fx_passes.quantization",
"torch._inductor.fx_passes.serialized_patterns._sfdp_pattern_1",
"torch._inductor.fx_passes.serialized_patterns._sfdp_pattern_10",
"torch._inductor.fx_passes.serialized_patterns._sfdp_pattern_11",
"torch._inductor.fx_passes.serialized_patterns._sfdp_pattern_12",
"torch._inductor.fx_passes.serialized_patterns._sfdp_pattern_13",
"torch._inductor.fx_passes.serialized_patterns._sfdp_pattern_14",
"torch._inductor.fx_passes.serialized_patterns._sfdp_pattern_15",
...
Alternatives
No response
Additional context
No response