Commit 37f9536

Update model analysis documentation

chandrasekaranpradeep authored and github-actions[bot] committed Feb 12, 2025
1 parent 5b71d92 commit 37f9536
Showing 241 changed files with 9,251 additions and 31,547 deletions.
1,118 changes: 529 additions & 589 deletions model_analysis_docs/ModelsInfo.md
156 changes: 56 additions & 100 deletions model_analysis_docs/models/albert/pt_albert_base_v1_mlm_hf.md
150 changes: 53 additions & 97 deletions model_analysis_docs/models/albert/pt_albert_base_v1_token_cls_hf.md
156 changes: 56 additions & 100 deletions model_analysis_docs/models/albert/pt_albert_base_v2_mlm_hf.md
150 changes: 53 additions & 97 deletions model_analysis_docs/models/albert/pt_albert_base_v2_token_cls_hf.md
156 changes: 56 additions & 100 deletions model_analysis_docs/models/albert/pt_albert_large_v1_mlm_hf.md
150 changes: 53 additions & 97 deletions model_analysis_docs/models/albert/pt_albert_large_v1_token_cls_hf.md
156 changes: 56 additions & 100 deletions model_analysis_docs/models/albert/pt_albert_large_v2_mlm_hf.md
150 changes: 53 additions & 97 deletions model_analysis_docs/models/albert/pt_albert_large_v2_token_cls_hf.md
146 changes: 51 additions & 95 deletions model_analysis_docs/models/albert/pt_albert_xlarge_v1_mlm_hf.md
140 changes: 48 additions & 92 deletions model_analysis_docs/models/albert/pt_albert_xlarge_v1_token_cls_hf.md
146 changes: 51 additions & 95 deletions model_analysis_docs/models/albert/pt_albert_xlarge_v2_mlm_hf.md
140 changes: 48 additions & 92 deletions model_analysis_docs/models/albert/pt_albert_xlarge_v2_token_cls_hf.md
156 changes: 56 additions & 100 deletions model_analysis_docs/models/albert/pt_albert_xxlarge_v1_mlm_hf.md
150 changes: 53 additions & 97 deletions model_analysis_docs/models/albert/pt_albert_xxlarge_v1_token_cls_hf.md
156 changes: 56 additions & 100 deletions model_analysis_docs/models/albert/pt_albert_xxlarge_v2_mlm_hf.md
150 changes: 53 additions & 97 deletions model_analysis_docs/models/albert/pt_albert_xxlarge_v2_token_cls_hf.md

(Large diffs for the files above are not rendered by default.)

@@ -211,7 +211,7 @@
<th>18</th>
<td>MaxPool2d</td>
<td>Operand(type=Activation, shape=(1, 256, 13, 13), dtype=float32)</td>
-<td>kernel_size : 3<br>stride : 2<br>padding : [0, 0, 0, 0]<br>dilation : 1<br>ceil_mode : False<br>max_pool_add_sub_surround : False<br>max_pool_add_sub_surround_value : 1.0<br>channel_last : 0</td>
+<td>kernel_size : 3<br>stride : 2<br>padding : [0, 0, 0, 0]<br>dilation : 1<br>ceil_mode : False<br>channel_last : 0</td>
<td>&#x2705;</td>
<td>&#x2705;</td>
<td>&#x2705;</td>
Expand All @@ -222,7 +222,7 @@
<th>19</th>
<td>MaxPool2d</td>
<td>Operand(type=Activation, shape=(1, 64, 55, 55), dtype=float32)</td>
-<td>kernel_size : 3<br>stride : 2<br>padding : [0, 0, 0, 0]<br>dilation : 1<br>ceil_mode : False<br>max_pool_add_sub_surround : False<br>max_pool_add_sub_surround_value : 1.0<br>channel_last : 0</td>
+<td>kernel_size : 3<br>stride : 2<br>padding : [0, 0, 0, 0]<br>dilation : 1<br>ceil_mode : False<br>channel_last : 0</td>
<td>&#x2705;</td>
<td>&#x2705;</td>
<td>&#x2705;</td>
Expand All @@ -233,7 +233,7 @@
<th>20</th>
<td>MaxPool2d</td>
<td>Operand(type=Activation, shape=(1, 192, 27, 27), dtype=float32)</td>
-<td>kernel_size : 3<br>stride : 2<br>padding : [0, 0, 0, 0]<br>dilation : 1<br>ceil_mode : False<br>max_pool_add_sub_surround : False<br>max_pool_add_sub_surround_value : 1.0<br>channel_last : 0</td>
+<td>kernel_size : 3<br>stride : 2<br>padding : [0, 0, 0, 0]<br>dilation : 1<br>ceil_mode : False<br>channel_last : 0</td>
<td>&#x2705;</td>
<td>&#x2705;</td>
<td>&#x2705;</td>
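For readers scanning these tables, the parameter columns map one-to-one onto a standard torch.nn.MaxPool2d call. A minimal sketch, assuming the 1x256x13x13 activation from row 18 above (the shape and attributes are taken from the table; everything else is illustrative):

```python
import torch
import torch.nn as nn

# kernel_size, stride, padding, dilation and ceil_mode map directly onto the
# torch.nn.MaxPool2d constructor; channel_last : 0 corresponds to NCHW layout.
pool = nn.MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)

x = torch.randn(1, 256, 13, 13, dtype=torch.float32)  # Activation operand from row 18
y = pool(x)
print(y.shape)  # torch.Size([1, 256, 6, 6]) -- floor rounding since ceil_mode=False
```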
@@ -117,7 +117,7 @@
<td>&#x274C;</td>
<td>&#x274C;</td>
<td></td>
<td>[MLIR][mlir::AffineMap collapsedLinearAffineMap] python: /__w/tt-forge-fe/tt-forge-fe/third_party/tt-mlir/lib/Dialect/TT/IR/TTOpsTypes.cpp:445: mlir::AffineMap collapsedLinearAffineMap(::mlir::MLIRContext *, ::llvm::ArrayRef<int64_t>, ::llvm::ArrayRef<int64_t>, ::llvm::ArrayRef<std::pair<std::int64_t, std::int64_t>>): Assertion `found && "Dim does not participate in AffineMap RHS"' failed.</td>
<td>[MLIR][mlir::AffineMap collapsedLinearAffineMap] python: /__w/tt-forge-fe/tt-forge-fe/third_party/tt-mlir/lib/Dialect/TT/IR/TTOpsTypes.cpp:460: mlir::AffineMap collapsedLinearAffineMap(::mlir::MLIRContext *, ::llvm::ArrayRef<int64_t>, ::llvm::ArrayRef<int64_t>, ::llvm::ArrayRef<std::pair<std::int64_t, std::int64_t>>): Assertion `found && "Dim does not participate in AffineMap RHS"' failed.</td>
</tr>
<tr>
<th>10</th>
Expand All @@ -128,7 +128,7 @@
<td>&#x274C;</td>
<td>&#x274C;</td>
<td></td>
<td>[MLIR][mlir::AffineMap collapsedLinearAffineMap] python: /__w/tt-forge-fe/tt-forge-fe/third_party/tt-mlir/lib/Dialect/TT/IR/TTOpsTypes.cpp:445: mlir::AffineMap collapsedLinearAffineMap(::mlir::MLIRContext *, ::llvm::ArrayRef<int64_t>, ::llvm::ArrayRef<int64_t>, ::llvm::ArrayRef<std::pair<std::int64_t, std::int64_t>>): Assertion `found && "Dim does not participate in AffineMap RHS"' failed.</td>
<td>[MLIR][mlir::AffineMap collapsedLinearAffineMap] python: /__w/tt-forge-fe/tt-forge-fe/third_party/tt-mlir/lib/Dialect/TT/IR/TTOpsTypes.cpp:460: mlir::AffineMap collapsedLinearAffineMap(::mlir::MLIRContext *, ::llvm::ArrayRef<int64_t>, ::llvm::ArrayRef<int64_t>, ::llvm::ArrayRef<std::pair<std::int64_t, std::int64_t>>): Assertion `found && "Dim does not participate in AffineMap RHS"' failed.</td>
</tr>
<tr>
<th>11</th>
@@ -233,7 +233,7 @@
<th>20</th>
<td>MaxPool2d</td>
<td>Operand(type=Activation, shape=(1, 96, 54, 54), dtype=float32)</td>
-<td>kernel_size : 3<br>stride : 2<br>padding : [0, 0, 0, 0]<br>dilation : 1<br>ceil_mode : True<br>max_pool_add_sub_surround : False<br>max_pool_add_sub_surround_value : 1.0<br>channel_last : 0</td>
+<td>kernel_size : 3<br>stride : 2<br>padding : [0, 0, 0, 0]<br>dilation : 1<br>ceil_mode : True<br>channel_last : 0</td>
<td>&#x2705;</td>
<td>&#x2705;</td>
<td>&#x274C;</td>
Expand All @@ -244,7 +244,7 @@
<th>21</th>
<td>MaxPool2d</td>
<td>Operand(type=Activation, shape=(1, 256, 27, 27), dtype=float32)</td>
-<td>kernel_size : 3<br>stride : 2<br>padding : [0, 0, 0, 0]<br>dilation : 1<br>ceil_mode : True<br>max_pool_add_sub_surround : False<br>max_pool_add_sub_surround_value : 1.0<br>channel_last : 0</td>
+<td>kernel_size : 3<br>stride : 2<br>padding : [0, 0, 0, 0]<br>dilation : 1<br>ceil_mode : True<br>channel_last : 0</td>
<td>&#x2705;</td>
<td>&#x2705;</td>
<td>&#x2705;</td>
Expand All @@ -255,7 +255,7 @@
<th>22</th>
<td>MaxPool2d</td>
<td>Operand(type=Activation, shape=(1, 256, 13, 13), dtype=float32)</td>
-<td>kernel_size : 3<br>stride : 2<br>padding : [0, 0, 0, 0]<br>dilation : 1<br>ceil_mode : True<br>max_pool_add_sub_surround : False<br>max_pool_add_sub_surround_value : 1.0<br>channel_last : 0</td>
+<td>kernel_size : 3<br>stride : 2<br>padding : [0, 0, 0, 0]<br>dilation : 1<br>ceil_mode : True<br>channel_last : 0</td>
<td>&#x2705;</td>
<td>&#x2705;</td>
<td>&#x2705;</td>
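The ceil_mode : True rows above differ from the earlier floor-mode entries only in how the output spatial size is rounded. A small sketch of that difference, assuming the 1x96x54x54 activation from row 20 (purely illustrative):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 96, 54, 54, dtype=torch.float32)

pool_ceil = nn.MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
pool_floor = nn.MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)

print(pool_ceil(x).shape)   # torch.Size([1, 96, 27, 27]) -- window count rounded up
print(pool_floor(x).shape)  # torch.Size([1, 96, 26, 26]) -- window count rounded down
```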
@@ -112,7 +112,7 @@
<th>9</th>
<td>MaxPool2d</td>
<td>Operand(type=Activation, shape=(1, 16, 28, 28), dtype=float32)</td>
-<td>kernel_size : 2<br>stride : 2<br>padding : [0, 0, 0, 0]<br>dilation : 1<br>ceil_mode : False<br>max_pool_add_sub_surround : False<br>max_pool_add_sub_surround_value : 1.0<br>channel_last : 0</td>
+<td>kernel_size : 2<br>stride : 2<br>padding : [0, 0, 0, 0]<br>dilation : 1<br>ceil_mode : False<br>channel_last : 0</td>
<td>&#x2705;</td>
<td>&#x2705;</td>
<td>&#x2705;</td>
Expand All @@ -123,7 +123,7 @@
<th>10</th>
<td>MaxPool2d</td>
<td>Operand(type=Activation, shape=(1, 4, 14, 14), dtype=float32)</td>
-<td>kernel_size : 2<br>stride : 2<br>padding : [0, 0, 0, 0]<br>dilation : 1<br>ceil_mode : False<br>max_pool_add_sub_surround : False<br>max_pool_add_sub_surround_value : 1.0<br>channel_last : 0</td>
+<td>kernel_size : 2<br>stride : 2<br>padding : [0, 0, 0, 0]<br>dilation : 1<br>ceil_mode : False<br>channel_last : 0</td>
<td>&#x2705;</td>
<td>&#x2705;</td>
<td>&#x274C;</td>
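The output sizes implied by these rows follow the usual pooling shape rule, floor((H + 2*pad - dilation*(k - 1) - 1) / stride) + 1, with ceil substituted for floor when ceil_mode is true. A small helper to sanity-check the shapes (simplified; it ignores the corner case where a ceil-mode window would start entirely in the padding):

```python
import math

def maxpool2d_out_size(h, kernel, stride, pad=0, dilation=1, ceil_mode=False):
    """Spatial output size of a 2D max pool, mirroring the PyTorch shape rule."""
    span = (h + 2 * pad - dilation * (kernel - 1) - 1) / stride + 1
    return math.ceil(span) if ceil_mode else math.floor(span)

# Rows 9 and 10 above: 28 -> 14 and 14 -> 7 with a 2x2, stride-2 pool.
print(maxpool2d_out_size(28, kernel=2, stride=2))  # 14
print(maxpool2d_out_size(14, kernel=2, stride=2))  # 7
```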
