Conversation
Fix gp tuner (microsoft#1592)
Fix compressor op_types (microsoft#1670)
Filter prune algo implementation (microsoft#1655)
document the dispatcher working dir (microsoft#1866)
else:
    inputs.append(input_name)
if input_name in output_to_node:
    for predecessor_node in output_to_node[input_name]:
The element in `output_to_node` is a list? This means one output is generated by more than one node?
An assertion has been added to ensure the list has at most one element.
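A minimal sketch of what such a check might look like, assuming `output_to_node` maps each output name to the list of nodes producing it (the names come from the diff above; the actual NNI code may differ):

```python
# Hedged sketch, not NNI's actual code: each output name should be produced
# by at most one node, so assert before traversing the predecessors.
output_to_node = {'conv1.out': ['conv1']}   # hypothetical mapping
input_name = 'conv1.out'

predecessor_nodes = output_to_node.get(input_name, [])
assert len(predecessor_nodes) <= 1, \
    'output %s is produced by more than one node' % input_name
for predecessor_node in predecessor_nodes:
    print('predecessor of %s: %s' % (input_name, predecessor_node))
```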
@@ -177,7 +177,6 @@ def channel_prune(model):
    pruner.compress()
    pruner.export_model(model_path=MODEL_FILE, mask_path=MASK_FILE)

@unittest.skipIf(torch.__version__ >= '1.6.0', 'not supported')
Could you check if the test cases work with PyTorch 1.6+ in the following scripts?
- test_graph_utils.py
- test_compression_utils.py
- test_pruners.py
- test_dependecy_aware.py
PyTorch 1.6+ is now turned on for test_compression_utils.py, test_pruners.py, and test_dependecy_aware.py.
For test_graph_utils.py, there are some expected files that were generated by Yuge; they may need to be upgraded for PyTorch 1.6+.
@ultmaster, would you please help upgrade the protobuf test cases in test_graph_utils.py for PyTorch 1.6+?
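For reference, a hedged sketch of how such a version gate can look with `unittest.skipIf`; the class and test names here are hypothetical, and the string comparison mirrors the decorator shown in the diff above:

```python
import unittest
import torch

# Hedged sketch: skip a test case on older PyTorch, the reverse of the
# @unittest.skipIf decorator shown in the diff above.
@unittest.skipIf(torch.__version__ < '1.6.0', 'requires PyTorch 1.6+')
class HypotheticalGraphUtilsTest(unittest.TestCase):
    def test_torch_version(self):
        # placeholder body; the real tests exercise graph_utils
        self.assertGreaterEqual(torch.__version__, '1.6.0')

if __name__ == '__main__':
    unittest.main()
```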
    node_group.append(predecessor_node)
    node_queue.put(predecessor_node)
else:
    inputs.add(input_name)
If `predecessor_node` is already merged into this node group, then I guess its output should not be taken as an input of this node group?
A check has been added, please review again.
I guess I didn't explain myself clearly, sorry about that. What I want to say is that maybe we can remove the `else` part of the following code:
if not self._is_key_func(predecessor_node):
    node_group.append(predecessor_node)
    node_queue.put(predecessor_node)
else:
    inputs.add(input_name)
Because whether `input_name` should be added to the inputs (`inputs`) of this node group is only decided by whether `predecessor_node` is in `nodes`? It has nothing to do with whether `predecessor_node` is a key node/func? I guess? Please correct me if I'm wrong. Thanks~
Removing the `else` does not work, as I tested. If the predecessor node is a key func, it will be merged into another node; there is only one key func node within one merged node.
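To make that boundary case concrete, here is a hedged sketch of the traversal being discussed. The variable names are taken from the diff, but the function itself is hypothetical, not NNI's actual implementation:

```python
from queue import Queue

def expand_node_group(start_node, node_inputs, output_to_node, is_key_func):
    """Hypothetical sketch: merge non-key predecessors into one node group.

    node_inputs: maps a node to the names of its inputs.
    output_to_node: maps an output name to the list of nodes producing it
                    (at most one element, as asserted above).
    """
    node_group = [start_node]
    inputs = set()            # input names on the boundary of the group
    node_queue = Queue()
    node_queue.put(start_node)
    while not node_queue.empty():
        curr_node = node_queue.get()
        for input_name in node_inputs[curr_node]:
            predecessors = output_to_node.get(input_name, [])
            assert len(predecessors) <= 1
            if not predecessors:
                # produced by no node (e.g. a graph input): it is an
                # input of the group
                inputs.add(input_name)
                continue
            predecessor_node = predecessors[0]
            if not is_key_func(predecessor_node):
                # non-key predecessor: merge it and keep expanding from it
                node_group.append(predecessor_node)
                node_queue.put(predecessor_node)
            else:
                # key-func predecessor: it stays in another merged node, so
                # its output is an input boundary of this group (this is why
                # the `else` branch cannot simply be removed)
                inputs.add(input_name)
    return node_group, inputs
```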
Got it! Thanks~
Something changed in PyTorch v1.6 that breaks graph_utils:
- `torch.onnx.set_training` is removed.
- `prim::Constant` nodes can be shared as inputs of multiple nodes in torch 1.6. For example, in the traced graph of MyModel2 we found that `conv1` and `conv2` share a few `prim::Constant` input nodes, and `bn1` and `bn2` also share a few `prim::Constant` nodes as input; this breaks graph_utils.
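For illustration, a hedged sketch of a model in the spirit of the MyModel2 mentioned above (its exact definition is an assumption here) and how the traced graph can be inspected for shared `prim::Constant` nodes:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the MyModel2 mentioned above; the real model
# definition may differ.
class MyModel2(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3)
        self.bn1 = nn.BatchNorm2d(16)
        self.conv2 = nn.Conv2d(16, 16, kernel_size=3)
        self.bn2 = nn.BatchNorm2d(16)

    def forward(self, x):
        x = self.bn1(self.conv1(x))
        x = self.bn2(self.conv2(x))
        return x

model = MyModel2().eval()
traced = torch.jit.trace(model, torch.randn(1, 3, 32, 32))
# With PyTorch >= 1.6, the printed graph shows prim::Constant nodes feeding
# both conv1/conv2 (and both bn1/bn2) instead of one constant per node.
print(traced.graph)
```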