-
Hi. I've tried to make a rough estimate of how many prunable timm models there are among all 964 models in the current version, v0.6.13. The key difficulty is that each timm model can require model-specific input, and some models require extra dependencies to be installed. The evaluation code is the following:
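The original script is not reproduced here; as a reference point, below is a minimal single-model sketch of such a check. The model name and sparsity value are illustrative, and `timm.data.resolve_data_config` is used as one possible way (an assumption, not necessarily the original approach) of handling model-specific input sizes:

```python
import torch
import timm
import torch_pruning as tp
from timm.data import resolve_data_config

# Illustrative single-model check: does the model survive one structured pruning step?
model_name = "resnet18"  # hypothetical choice; any name from timm.list_models() works
model = timm.create_model(model_name, pretrained=False).eval()

# Resolve the model-specific input resolution instead of hard-coding 224x224.
data_config = resolve_data_config({}, model=model)
example_inputs = torch.randn(1, *data_config["input_size"])

imp = tp.importance.MagnitudeImportance(p=2, group_reduction="mean")
pruner = tp.pruner.MagnitudePruner(
    model,
    example_inputs,
    importance=imp,
    iterative_steps=1,
    ch_sparsity=0.05,  # small sparsity, just enough to test structural compatibility
    ignored_layers=[],
)
pruner.step()

out = model(example_inputs)  # a forward pass after pruning confirms the graph is still consistent
print(model_name, "pruned, output shape:", tuple(out.shape))
```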
-
Hello @Serjio42, I wanted to express my great appreciation for the work you've done. It's indeed not easy to prepare inputs for different cases individually. I will try your script and update the benchmark soon. It seems like there is still some work to be done to ensure compatibility. Thank you again for your work!
-
Hi @Serjio42, I found a solution for the varied input sizes. In timm, we can fetch the input size from `model.default_cfg['input_size']`:

```python
import torch
import timm
import torch_pruning as tp
import gc

timm_models = timm.list_models()
imp = tp.importance.MagnitudeImportance(p=2, group_reduction="mean")

prunable_list = []
unprunable_list = []

for i, model_name in enumerate(timm_models):
    print("Pruning %s..." % model_name)
    device = 'cuda' if torch.cuda.is_available() else 'cpu'

    # The pruning process gets stuck with these architectures - skip them.
    if 'rexnet' in model_name or 'sequencer' in model_name or 'botnet' in model_name:
        unprunable_list.append(model_name)
        continue

    try:
        model = timm.create_model(model_name, pretrained=False, no_jit=True).eval().to(device)
    except Exception:  # e.g. CUDA out of memory - fall back to CPU
        model = timm.create_model(model_name, pretrained=False, no_jit=True).eval()
        device = 'cpu'

    # Fetch the model-specific input resolution from the default config.
    input_size = model.default_cfg['input_size']
    example_inputs = torch.randn(1, *input_size).to(device)
    test_output = model(example_inputs)

    prunable = True
    try:
        pruner = None
        pruner = tp.pruner.MagnitudePruner(
            model,
            example_inputs,
            global_pruning=False,  # if False, a uniform sparsity is assigned to each layer
            importance=imp,        # importance criterion for parameter selection
            iterative_steps=1,     # number of iterations to reach the target sparsity
            ch_sparsity=0.05,
            ignored_layers=[],
        )
        pruner.step()
        test_output = model(example_inputs)
    except Exception as e:
        prunable = False

    if prunable:
        prunable_list.append(model_name)
    else:
        unprunable_list.append(model_name)

    # Running tally after each model.
    print("Prunable: %d models, \n %s\n" % (len(prunable_list), prunable_list))
    print("Unprunable: %d models, \n %s\n" % (len(unprunable_list), unprunable_list))

    # Optional per-iteration cleanup to limit memory growth:
    # del model
    # if pruner:
    #     del pruner
    # gc.collect()
```
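As a possible follow-up, the same kind of check could also record how much each prunable model shrinks. The sketch below is illustrative and assumes that `tp.utils.count_ops_and_params` is available in the installed Torch-Pruning release:

```python
import torch
import timm
import torch_pruning as tp

# Hedged sketch: measure the parameter/MACs reduction of one model after a single pruning step.
# Assumes tp.utils.count_ops_and_params exists in the installed Torch-Pruning release.
model = timm.create_model("resnet50", pretrained=False).eval()
example_inputs = torch.randn(1, *model.default_cfg["input_size"])

imp = tp.importance.MagnitudeImportance(p=2, group_reduction="mean")
pruner = tp.pruner.MagnitudePruner(
    model, example_inputs, importance=imp,
    iterative_steps=1, ch_sparsity=0.05, ignored_layers=[],
)

base_macs, base_params = tp.utils.count_ops_and_params(model, example_inputs)
pruner.step()
pruned_macs, pruned_params = tp.utils.count_ops_and_params(model, example_inputs)
print("params: %.2fM -> %.2fM, MACs: %.2fG -> %.2fG" % (
    base_params / 1e6, pruned_params / 1e6, base_macs / 1e9, pruned_macs / 1e9))
```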
-
We have established a compatibility benchmark for Torchvision 0.13.1, with a compatibility rate of 85% (73 out of 85 models supported). Our thanks go to @Serjio42 for the insightful discussion in Issue #119.
In the future, we plan to expand the benchmark to Timm, a popular model zoo for classification. This page is open for further discussion on that topic.