Logical Layout #416

Merged
merged 16 commits into tlc-pack:main from feat/logical_layout2 on Aug 25, 2021

Conversation

@vinx13 (Collaborator) commented Jul 22, 2021

TODO:

  • Add more test cases

@vinx13 force-pushed the feat/logical_layout2 branch 3 times, most recently from 83f59fb to afb5fb3 on July 26, 2021 at 22:02
@vinx13 changed the title from [WIP] Logical Layout to Logical Layout on Jul 27, 2021
@vinx13 force-pushed the feat/logical_layout2 branch from 3c2f7c1 to 0d47d84 on July 29, 2021 at 17:11
@vinx13 force-pushed the feat/logical_layout2 branch from 97da808 to 63c122b on August 23, 2021 at 18:48
@vinx13 force-pushed the feat/logical_layout2 branch from 56bfe08 to 2c8f2df on August 23, 2021 at 19:18
@vinx13 merged commit 5135eb9 into tlc-pack:main on Aug 25, 2021
@MasterJH5574 (Collaborator)

Hi @vinx13! I just ran the unit test test_tir_transform_lower_logical_layout.py on the latest main branch. But surprisingly it failed... Could you take a look at the test to see what is going wrong 😳?

P.S. Here is the error message:

Traceback (most recent call last):
  File "unittest/test_tir_transform_lower_logical_layout.py", line 240, in <module>
    test_non_bijective()
  File "unittest/test_tir_transform_lower_logical_layout.py", line 230, in test_non_bijective
    _check_fail(cache_read_non_bijective)
  File "unittest/test_tir_transform_lower_logical_layout.py", line 37, in _check_fail
    mod = tvm.tir.transform.LowerLogicalLayout()(mod)
  File "/home/rhlai/lrh-tensorir/python/tvm/ir/transform.py", line 161, in __call__
    return _ffi_transform_api.RunPass(self, mod)
  File "/home/rhlai/lrh-tensorir/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
ValueError: Traceback (most recent call last):
  [bt] (8) /home/rhlai/lrh-tensorir/build/libtvm.so(tvm::tir::StmtFunctor<tvm::tir::Stmt (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&)+0xb4) [0x7f0461b62aaa]
  [bt] (7) /home/rhlai/lrh-tensorir/build/libtvm.so(tvm::NodeFunctor<tvm::tir::Stmt (tvm::runtime::ObjectRef const&, tvm::tir::StmtFunctor<tvm::tir::Stmt (tvm::tir::Stmt const&)>*)>::operator()(tvm::runtime::ObjectRef const&, tvm::tir::StmtFunctor<tvm::tir::Stmt (tvm::tir::Stmt const&)>*) const+0x148) [0x7f0461b66a10]
  [bt] (6) /home/rhlai/lrh-tensorir/build/libtvm.so(tvm::tir::StmtFunctor<tvm::tir::Stmt (tvm::tir::Stmt const&)>::InitVTable()::{lambda(tvm::runtime::ObjectRef const&, tvm::tir::StmtFunctor<tvm::tir::Stmt (tvm::tir::Stmt const&)>*)#14}::_FUN(tvm::runtime::ObjectRef const&, tvm::tir::StmtFunctor<tvm::tir::Stmt (tvm::tir::Stmt const&)>*)+0x40) [0x7f0461b662ba]
  [bt] (5) /home/rhlai/lrh-tensorir/build/libtvm.so(tvm::tir::StmtFunctor<tvm::tir::Stmt (tvm::tir::Stmt const&)>::InitVTable()::{lambda(tvm::runtime::ObjectRef const&, tvm::tir::StmtFunctor<tvm::tir::Stmt (tvm::tir::Stmt const&)>*)#14}::operator()(tvm::runtime::ObjectRef const&, tvm::tir::StmtFunctor<tvm::tir::Stmt (tvm::tir::Stmt const&)>*) const+0x59) [0x7f0461b6625b]
  [bt] (4) /home/rhlai/lrh-tensorir/build/libtvm.so(tvm::tir::LogicalLayoutMutator::VisitStmt_(tvm::tir::BufferStoreNode const*)+0x8b5) [0x7f0462764fef]
  [bt] (3) /home/rhlai/lrh-tensorir/build/libtvm.so(tvm::tir::LogicalLayoutMutator::GetInverseAffineIterMap(tvm::runtime::Array<tvm::PrimExpr, void> const&, std::unordered_set<tvm::tir::Var, tvm::runtime::ObjectPtrHash, tvm::runtime::ObjectPtrEqual, std::allocator<tvm::tir::Var> >&, tvm::runtime::Array<tvm::PrimExpr, void> const&, tvm::runtime::String const&)+0x30a) [0x7f046276666c]
  [bt] (2) /home/rhlai/lrh-tensorir/build/libtvm.so(tvm::runtime::detail::LogFatal::~LogFatal()+0x36) [0x7f0461b2ea2a]
  [bt] (1) /home/rhlai/lrh-tensorir/build/libtvm.so(tvm::runtime::detail::LogFatal::Entry::Finalize()+0x4a) [0x7f0461b2eb9e]
  [bt] (0) /home/rhlai/lrh-tensorir/build/libtvm.so(tvm::runtime::Backtrace[abi:cxx11]()+0x35) [0x7f046338bf13]
  File "/home/rhlai/lrh-tensorir/src/tir/transforms/lower_logical_layout.cc", line 295
ValueError: Check failed: iter_map.size() == loop_vars.size() (0 vs. 1) : Logical layout warp.non_bijective should be bijective.
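
For context, the check that fires at lower_logical_layout.cc:295 requires the logical-to-physical index mapping to be invertible. A rough plain-Python illustration of why a non-bijective mapping has no inverse (illustrative only; the pass itself works on TIR iterator expressions, and the mapping below is hypothetical):

# Hypothetical logical->physical index map that is NOT bijective:
# logical indices 0..7 collapse onto physical indices 0..3.
def warp_non_bijective(i):
    return i % 4

physical = [warp_non_bijective(i) for i in range(8)]
# Distinct logical indices collide on the same physical index, so no
# inverse map exists -- mirroring the failed check
# iter_map.size() == loop_vars.size() (0 vs. 1) in the traceback.
assert len(set(physical)) < 8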

@vinx13 (Collaborator, Author) commented Aug 29, 2021 via email

@MasterJH5574 (Collaborator)

@vinx13 I see. I hadn't looked into it before, but I just noticed that the test catches tvm.TVMError while the pass actually throws a ValueError. Catching ValueError instead makes the test pass, so I can fix it directly. Thanks for letting me know!
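
A minimal sketch of that fix, assuming the helper structure visible in the traceback above (the pytest.raises form and the IRModule wrapping are assumptions; the original _check_fail may differ):

import pytest
import tvm

def _check_fail(func):
    # Wrap the TIR PrimFunc in an IRModule and run the pass on it.
    mod = tvm.IRModule.from_expr(func)
    # LowerLogicalLayout raises ValueError (not tvm.TVMError) when a
    # logical layout is not bijective, so ValueError is what to expect.
    with pytest.raises(ValueError):
        tvm.tir.transform.LowerLogicalLayout()(mod)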

@MasterJH5574 (Collaborator)

> Hi @vinx13! I just ran the unit test test_tir_transform_lower_logical_layout.py on the latest main branch. But surprisingly it failed... Could you take a look at the test to see what is going wrong 😳?

Fixed in #460 :-)
