
TPC attach 2 framework #1296

Open · wants to merge 1 commit into main

Conversation

@ofirgo (Collaborator) commented on Dec 18, 2024:

Pull Request Description:

Implementing the TPC Attach2Fw module to attach a given TPC model to framework-specific operators.
The attachment is based on a hardcoded mapping from the TPC schema's OperatorSetNames enum to each framework's layers and ops (Keras/PyTorch).

Note that the current TPCs do not use this module, since they will be removed and replaced with a simpler initialization framework in the future. This module is therefore not yet used in MCT; it is a preparation for future modifications.
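
For illustration, a minimal sketch of the hardcoded-mapping idea, assuming Keras as the target framework; the operator set names and layer lists below are illustrative placeholders, not the exact contents of this PR:

import tensorflow as tf
from tensorflow.keras.layers import Conv2D, Dense, ReLU

# Illustrative mapping from TPC operator set names to framework-specific
# layers/ops (the key names here are assumptions).
opset2layer = {
    "Conv": [Conv2D],
    "FullyConnected": [Dense],
    "Relu": [ReLU, tf.nn.relu],
}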

Checklist before requesting a review:

  • I set the appropriate labels on the pull request.
  • I have added/updated the release note draft (if necessary).
  • I have updated the documentation to reflect my changes (if necessary).
  • All functions and files are well documented.
  • All functions and classes have type hints.
  • There is a license in all files.
  • The function and variable names are informative.
  • I have checked for code duplications.
  • I have added new unittests (if necessary).

@@ -61,7 +61,6 @@ class OperatorSetNames(Enum):
OPSET_DROPOUT = "Dropout"
OPSET_SPLIT = "Split"
OPSET_CHUNK = "Chunk"
OPSET_UNBIND = "Unbind"
ofirgo (Author) commented:
@lior-dikstein ignore this, since you already removed it in your PR.

}

if FOUND_SONY_CUSTOM_LAYERS:
    self._opset2layer[OperatorSetNames.OPSET_POST_PROCESS] = [SSDPostProcess]
ofirgo (Author) commented:
@lior-dikstein assuming this is the operator set name that is going to be added to the enum for post-processing.

OperatorSetNames.OPSET_SUB.value: [tf.subtract, Subtract],
OperatorSetNames.OPSET_MUL.value: [tf.math.multiply, Multiply],
OperatorSetNames.OPSET_DIV.value: [tf.math.divide, tf.math.truediv],
OperatorSetNames.OPSET_MIN_MAX.value: [tf.math.minimum, tf.math.maximum, Minimum, Maximum],
Reviewer (Collaborator) commented:
Split this into two operator sets: min & max.
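
A hypothetical version of that split, assuming separate OPSET_MIN and OPSET_MAX entries are added to the enum (they are not part of this diff):

OperatorSetNames.OPSET_MIN.value: [tf.math.minimum, Minimum],
OperatorSetNames.OPSET_MAX.value: [tf.math.maximum, Maximum],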

@@ -9,3 +9,4 @@
OperatorSetConcat = schema.OperatorSetConcat
Fusing = schema.Fusing
TargetPlatformModel = schema.TargetPlatformModel
Reviewer (Collaborator) commented:
I added this in my PR

OperationsSetToLayers


class TpcAttach2Fw:
Reviewer (Collaborator) commented:
I prefer the name "AttachTPModelToFramework" or "AttachTpModelToFw", with "AttachTpModelToPytorch"/"AttachTpModelToKeras" for the framework-specific subclasses.

# operation set are provided separately in the mapping.
self._opset2attr_mapping = None

def attach(self, tpc_model: TargetPlatformModel,
Reviewer (Collaborator) commented:
why "tpc_model" and not "tp_model"?

tpc = TargetPlatformCapabilities(tpc_model)

with tpc:
    for opset_name, operators in self._opset2layer.items():
Reviewer (Collaborator) commented:
don't we want to eliminate the context manager functionality in this new implementation?

# its framework-specific attribute name.
# note that a DefaultDict should be provided if not all the layer types in the
# operation set are provided separately in the mapping.
self._opset2attr_mapping = None
Reviewer (Collaborator) commented:
Add comments:

self._opset2layer = None  # Mapping of operation sets to their corresponding framework-specific layers

# A mapping that associates each layer type in the operation set (with weight attributes and a quantization configuration in the target platform model) to its framework-specific attribute name. If not all layer types in the operation set are provided in the mapping, a DefaultDict should be supplied to handle missing entries.

if attr_mapping is None:
    OperationsSetToLayers(opset_name, operators)
else:
    OperationsSetToLayers(opset_name, operators, attr_mapping=attr_mapping)
Reviewer (Collaborator) commented:
Why do you need the "if"? Just do:
OperationsSetToLayers(opset_name, operators, attr_mapping=attr_mapping)

def attach(self, tpc_model: TargetPlatformModel,
           custom_opset2layer: Dict[str, Tuple[List[Any], Optional[Dict[str, DefaultDict]]]] = None
           ) -> TargetPlatformCapabilities:

Reviewer (Collaborator) commented:
Need to add documentation
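
A possible docstring for attach(), sketched from the signature above; the wording is an assumption, not the PR's actual documentation:

def attach(self, tpc_model: TargetPlatformModel,
           custom_opset2layer: Dict[str, Tuple[List[Any], Optional[Dict[str, DefaultDict]]]] = None
           ) -> TargetPlatformCapabilities:
    """
    Attach a target platform model to framework-specific operators.

    Args:
        tpc_model: The target platform model to attach.
        custom_opset2layer: Optional mapping from custom operator set names to a tuple of
            (framework operators, optional attribute mapping).

    Returns:
        A TargetPlatformCapabilities object built from the attached mapping.
    """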


def __init__(self):
    self._opset2layer = None

Reviewer (Collaborator) commented:
Why not get it as an input and eliminate the context manager functionality? Then you could also document it with a type hint and a docstring.
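
A sketch of that suggestion, assuming the mapping is passed to the constructor instead of being set by framework-specific subclasses (the parameter name is illustrative):

from typing import Any, Dict, List

class TpcAttach2Fw:
    def __init__(self, opset2layer: Dict[str, List[Any]]):
        # Mapping of operation set names to framework-specific layers/ops,
        # provided by the caller rather than hardcoded in a subclass.
        self._opset2layer = opset2layer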
