Fix TVM compilation with USE_LLVM=OFF #194
Closed
Conversation
* [µTVM] Specify loader for yaml.load: pass an explicit loader to yaml.load, since calling it without one has been deprecated since PyYAML 5.1 and throws a noisy warning. For details, see: https://github.com/yaml/pyyaml/wiki/PyYAML-yaml.load(input)-Deprecation Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org> * [µTVM] Avoid using tvm.target.create: it is deprecated; use tvm.target.Target directly instead. Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
* Added Ops * Regular * Remove copy * Remove copy * Tests * Black Co-authored-by: Ubuntu <ubuntu@ip-172-31-27-149.us-east-2.compute.internal> Co-authored-by: Ubuntu <ubuntu@ip-172-31-19-34.us-east-2.compute.internal>
* [FIX] Remove leftovers from check_correctness * remove unused numpy import
…checking (apache#7278) * Correct handling of call node attrs to handle non-operator calls (attrs may be undefined) * Linting fix
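The defensiveness that commit describes might look roughly like the sketch below. It is illustrative only: the types and names here (CallRecord, AttrsRecord, HasOpAttrs) are hypothetical stand-ins, not the actual TVM classes; the point is simply to treat attrs as optional.

```cpp
// Hypothetical sketch: treat a call node's attrs as optional, so non-operator
// calls (which may leave attrs undefined) do not crash attribute checks.
#include <memory>
#include <string>

struct AttrsRecord {
  std::string op_name;  // illustrative payload only
};

struct CallRecord {
  std::shared_ptr<AttrsRecord> attrs;  // may be null for non-operator calls
};

// Only inspect attrs when they are actually defined.
bool HasOpAttrs(const CallRecord& call, const std::string& expected_op) {
  if (!call.attrs) {
    return false;  // non-operator call: nothing to check
  }
  return call.attrs->op_name == expected_op;
}
```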
* Add MicroTVM support for the STM32F746 Discovery board Signed-off-by: Tom Gall <tom.gall@linaro.org> * Add reference to the discovery board in the docs Signed-off-by: Tom Gall <tom.gall@linaro.org>
* Add if pattern
  commit 1ee052fd494a5bdd881c242c3ea0c95cf2a613e5 Author: Masahiro Masuda <masahi129@gmail.com> Date: Sat Dec 26 22:19:17 2020 +0900 add comment
  commit c846a6999e9c9e48fbc019780e705a990f46cb22 Author: Masahiro Masuda <masahi129@gmail.com> Date: Sat Dec 26 21:14:20 2020 +0900 max_out_size rewrite added to the test
  commit 2c7c7fbd0e6563aba694e7fb6baa7bda8e4fadca Author: Masahiro Masuda <masahi129@gmail.com> Date: Sat Dec 26 20:57:55 2020 +0900 max_out_size rewrite working
  commit 319e930acb8162c1ec4a5d4fb71d134580a68f13 Author: Masahiro Masuda <masahi129@gmail.com> Date: Sat Dec 26 20:43:16 2020 +0900 refactor dyn strided slice pattern
  commit fb6917b703440748800bde624bc20efaf5798b8a Author: Masahiro Masuda <masahi129@gmail.com> Date: Sat Dec 26 11:21:33 2020 +0900 update NMS pattern following frontend change
  commit 255a98f1da8f300d4fe417cce3587c0d71e38ed3 Author: Masahiro Masuda <masahi129@gmail.com> Date: Thu Dec 24 05:19:31 2020 +0900 add some comment to explain the pattern
  commit 52cea1cc2bff533ca60acfc2416477fc8b058428 Author: Masahiro Masuda <masahi129@gmail.com> Date: Wed Dec 23 08:35:14 2020 +0900 revert tutorial change
  commit d3e0e0d7e2427c40067d6ad2680ec5b3f0076223 Author: Masahiro Masuda <masahi129@gmail.com> Date: Wed Dec 23 08:02:29 2020 +0900 test fixed by setting force_surpress=False
  commit 2fa1a574f932001be2d8f601338a342dab92f79c Author: Masahiro Masuda <masahi129@gmail.com> Date: Wed Dec 23 07:22:32 2020 +0900 fixed coord_start
  commit 6ba88f27dec1bdb0b0ba746c268591a59264088e Author: Masahiro Masuda <masahi129@gmail.com> Date: Wed Dec 23 06:50:46 2020 +0900 add doc
  commit 8d386b6a1c92ce4fe3349ff20e320199a1b5b310 Author: Masahiro Masuda <masahi129@gmail.com> Date: Wed Dec 23 05:27:26 2020 +0900 updated tutorial
  commit 3206b49ecfdd874e0ff8feb0fa586c4c4282f705 Author: Masahiro Masuda <masahi129@gmail.com> Date: Wed Dec 23 05:04:44 2020 +0900 update object detection test to add rewrite
  commit 74bebb2f4376aeb67d8c4aad395f9f2661fe6b3e Author: Masahiro Masuda <masahi129@gmail.com> Date: Wed Dec 23 05:02:15 2020 +0900 add a pattern to rewrite nms to batched nms
  commit f410e6dde0ed949b90312c5a7ddbb6c234f9acc1 Author: Masahiro Masuda <masahi129@gmail.com> Date: Sat Dec 26 22:20:16 2020 +0900 add comment
  commit f1e078b0724bd22e7be0a812055e1c7c650d94da Author: Masahiro Masuda <masahi129@gmail.com> Date: Sat Dec 26 19:54:22 2020 +0900 Add if pattern
* add doc
* add test
* doc formatting
* cpplint fix
…for Cuda & X86 (apache#7148) * [Frontend][Tensorflow] Sparse_Dense Op CSR scheduling issue resolved for both cuda & x86 * [1] Review comments handled * [2] Review comments handled * [3] Review comments handled
…#7254) Currently the tutorial script 'micro_tflite.py' assumes that all boards with target STM32F746 are Nucleo boards. As a consequence, once that target is selected the script automatically defaults to the Nucleo board. However, the STM32F746 is also used on Discovery Kit boards (aka disco), which are quite similar but have some differences, so the Nucleo config and final image don't work on the disco boards. This commit adds a way to select a different dev board and adds comments accordingly, explaining how to use the script with STM32F746 disco boards. Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
…7210) * made TShapeDataDependant array * add stub * dyn strided slice working * reshape also working * remove log * works on maskrcnn * lint fix * fix cpp test * remove stale pop back * add more doc * dependant -> dependent * remove redundant check * remove data_dependent_
This commit mainly introduces a byoc c-source module example to uTVM. Moreover, it carries certain modifications to the example codegen_c external module generator code to generate utvm friendly c-source. Change-Id: I09f3a42017d518dd5b6c89e3fe0a0332b80088b0
* Add fix and unit test for const autoconvert dtype. * formatting * Address review comment, casting input value to int32 * Fix failing test * Augment unit test
…uirement (apache#7294) * this test currently sets a requirement of "uses_gpu", which causes it to fail on a CPU-only machine * this patch changes it to "requires_tensorcore", as per discussion on issue apache#7277
…eld (apache#7306) [TIR][REFACTOR] ForNode update - Remove deprecated device_api. - Add ThreadBinding for_type. - Add additional annotations. More style consistency refactor to make the ForNode to be consistent with rest of the codebase. - ForType => ForKind - Add constant prefix k to enum consts per Google C style - Introduce ForKind to the python side.
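As a rough illustration of the rename described above (a sketch, not a copy of the actual TIR header; the exact members and values are assumptions), the enum might end up looking like this after the ForType => ForKind rename, the Google-style "k" prefix, and the new ThreadBinding kind:

```cpp
// Sketch of a post-refactor loop-kind enum: ForType becomes ForKind, the
// constants gain a "k" prefix, and a ThreadBinding kind is added.
enum class ForKind : int {
  kSerial = 0,         // ordinary sequential loop
  kParallel = 1,       // iterations may run in parallel
  kVectorized = 2,     // loop body is vectorized
  kUnrolled = 3,       // loop is unrolled
  kThreadBinding = 4,  // loop bound to a thread axis (the new kind)
};
```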
… during testing (apache#7300) Co-authored-by: Josh Fromm <jwfromm@uw.edu>
* improve scatter 4d init * do not launch sorting based scatter for small input * do not use hard coded num threads * separate sort based implementation * register scatter as autotvm task * add missing import * fix strategy * add dedicated schedule and dummy flop * add test tuning script * try adding dummy knob * skip random_fill when a tuning workload is from scatter This reverts commit 1fed883. * cleanup memcpy ir * remove scatter tuning script * make sure zero init arguments * add comment on why skip random init for scatter * restore ctx sync Co-authored-by: masa <masa@pop-os.localdomain>
Added an ability to infer argument shapes if shapes are not present in TFLite files. The set of networks on which the patch was tested is internal to Arm. Any help with creating unit tests would be appreciated.
* Use correct default value of False for is_ascend * Add unit test for default topk is_ascend value
…conv when groups>1 (apache#7595) * trt num_outputs * asdf * fix lint Co-authored-by: Leyuan Wang <leyuan.wang@bytedance.com>
* Fix negative axis in gather * Clang Format * Black * Empty Commit Co-authored-by: Ubuntu <ubuntu@ip-172-31-42-251.us-east-2.compute.internal>
Co-authored-by: Wuwei Lin <wuwei@apache.org>
* Fix autotuning, broken in apache#7337 * retrigger CI, because I don't understand how it passed
…e#7588) * init * fix * fix
apache#7539 Co-authored-by: guoweijun <guoweijun@baidu.com>
…pache#7313) * Add sparse dense tuning tutorial * Add sparse input fusion * Update the dag to support output fusion * Update * Add task input to search_task * Update * Add search_inputs to measure * Lint fix * Lint fix * Update * Update * Update * Update * Add file save load support * Update * Update * Update * Remove add_task_inputs API * Update * Update * Update * Lint fix * Lint fix * Lint fix * Lint fix * Update * Add example ci_log * Update * retrigger ci * Update * Update * Update * Lint fix * Lint fix * Lint fix
…ache#7603) * Move SimplifyConvPad to a new pass and don't enable it by default * rename pass * move files * fix lint * adjust test tolerance
…ecutor (apache#7604) * properly return and unflatten outputs from GraphExecutor * lint * cleaner approach, not sure what I was thinking before * remove unused import * forgot copyto cpu * make solution even cleaner using iterator
* [Torch] support hardsigmoid * qhswish first impl * add qhardsigmoid but the result is not correct * add qmv3 to test * comment fix
…pache#7602) * [TE] Fix bug in AutoInlineElemWise and implement AutoInlineBroadcast * [TE] Add AutoInlineBroadcast API to schedule_pass.h
* add ShapeFunc for tanh * _schedule_dense_small_batch turn autotvm off when dense's inner dim is unknown * fix CI pylint
* [Relay] Fix relay op strategy for cuda dense int8 * Remove uint8 && Add autotvm task extraction test for relay graph that contains dense op (int8 * int8 -> int32) * Reformat the code of test case
* [Relay] add ShapeFunc for one_hot op * fix pylint * add test for shapefunc of one_hot op
…#7607) * sort started to working * static size sort seems to be working * test sort on vulkan * add nvptx to sort test too
apache#7398 introduced a compilation error when building with USE_LLVM=OFF. Fixed by adding a preprocessor #if guard.
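A minimal sketch of that kind of guard is shown below. It assumes the build defines a macro such as TVM_LLVM_VERSION only when USE_LLVM is enabled; the function name is made up for illustration and does not correspond to the actual code touched by this PR.

```cpp
// Illustrative guard: compile the LLVM-dependent path only when the build
// defines TVM_LLVM_VERSION (assumed to be set when USE_LLVM=ON), and fall
// back to a runtime error when built with USE_LLVM=OFF.
#include <stdexcept>

#ifdef TVM_LLVM_VERSION
void CompileWithLLVM() {
  // ... code that calls into LLVM APIs ...
}
#else
void CompileWithLLVM() {
  throw std::runtime_error(
      "LLVM support was not enabled; rebuild with USE_LLVM=ON to use this path");
}
#endif
```

Guarding at compile time like this keeps the LLVM-dependent code entirely out of a USE_LLVM=OFF build, instead of leaving unresolved references that break compilation or linking.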