forked from apache/tvm
pull #2
Merged
* [TENSORFLOW]Sparse2Dense support * Formatting issues fixed
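The TensorFlow op being mapped here, `tf.sparse_to_dense`, fills a dense tensor with a default value and scatters the given values at the given indices. A minimal numpy sketch of those semantics (the function name and argument layout are illustrative, not TVM's API):

```python
import numpy as np

def sparse_to_dense(indices, values, default_value, output_shape):
    # Start from a dense tensor filled with the default value...
    out = np.full(output_shape, default_value)
    # ...and scatter each value at its (possibly multi-dimensional) index.
    for idx, val in zip(indices, values):
        out[tuple(idx)] = val
    return out

dense = sparse_to_dense([[0, 0], [1, 2]], [5, 7], 0, (2, 3))
```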
* Add pre transpose support for layout rewrite * Update * Bug fix * Bug fix * Update * Bug fix * CI Fix * Update * Update * Re-trigger CI * Update * Update test_auto_scheduler_layout_rewrite.py * Update test_auto_scheduler_layout_rewrite.py * Update task_scheduler ut, re-trigger CI Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>
… swap Mutex for RwLock (#6815)
…6711) * Improve depthwise convolution through smlal/smlal2 intrinsic - Added an intrinsic to load a single int16x8 vector and produce two int32x4 output vectors through smlal/smlal2 instructions - Changed the NHWC depthwise schedule to accommodate the aforementioned intrinsic Change-Id: I347c3bf98fa8dd87057304dcda0d78e558424c57 * Address review comments * Rebasing - 2 * Rebasing - 3 * Rebasing - 3 * Fix linting
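For context on what the smlal/smlal2 pair computes: each instruction takes one half of an int16x8 vector, multiplies it element-wise with the corresponding half of another, widens the products to int32, and accumulates them into an int32x4 register. A scalar numpy model of that behavior (names are illustrative, assuming in-order lanes):

```python
import numpy as np

def smlal_pair(acc_lo, acc_hi, a, b):
    # Widening multiply: int16 * int16 -> int32, so the product cannot overflow.
    prod = a.astype(np.int32) * b.astype(np.int32)
    # smlal accumulates the low 4 lanes, smlal2 the high 4 lanes.
    return acc_lo + prod[:4], acc_hi + prod[4:]

a = np.arange(8, dtype=np.int16)
b = np.full(8, 2, dtype=np.int16)
lo, hi = smlal_pair(np.zeros(4, np.int32), np.zeros(4, np.int32), a, b)
```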
When stride < 0, the slicing range for the whole dimension should be [-1, -(dim+1))
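Python's own slicing shows why those bounds are right: with a negative stride, begin = -1 and end = -(dim+1) walk the entire axis in reverse, whereas an end of -dim stops one element short:

```python
a = [10, 20, 30, 40]
dim = len(a)

# Full reverse: begin = -1, end = -(dim + 1), stride = -1.
full = a[-1:-(dim + 1):-1]
# Off-by-one bound: end = -dim stops before the first element.
short = a[-1:-dim:-1]
```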
…of any size (including zero) (#6826) * Fix Annotate Target * Add Test Cases * Formatting * Comments C++ * Remove Unnecessary test cases * typo * annotate_target Co-authored-by: Ubuntu <ubuntu@ip-172-31-27-149.us-east-2.compute.internal>
Also fixed a sphinx warning in pytorch.
The signature of FTVMAnnotateTarget changed to runtime::TypedPackedFunc<bool(const Expr& expr)>, which allows utilising extra information from the passed expr argument.
* [CI] Disable flaky tests * format
* [Relay][Frontend] SparseTensorDenseMatMul support for Tensorflow * Lint error resolved * [1] Review comments handled * [2] Review comments handled
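tf.sparse.sparse_dense_matmul multiplies a COO-format sparse matrix (indices, values, shape) by a dense matrix. A reference implementation of the math in numpy (illustrative, not the frontend code):

```python
import numpy as np

def sparse_dense_matmul(indices, values, sparse_shape, dense):
    # For every nonzero (r, c) with value v, out[r, :] += v * dense[c, :].
    out = np.zeros((sparse_shape[0], dense.shape[1]), dtype=dense.dtype)
    for (r, c), v in zip(indices, values):
        out[r] += v * dense[c]
    return out

dense = np.array([[1.0, 2.0], [3.0, 4.0]])
result = sparse_dense_matmul([[0, 0], [1, 1]], [2.0, 3.0], (2, 2), dense)
```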
* debugging * added three shape funcs * fix lint * address comment * resolve conflicts * resolve conflicts * resolve conflicts * resolve conflicts * resolve conflicts
* [TopHub] Update version * trigger ci
* add test * test working * uncomment other tests * remove redundant visit * test double nesting * support nested tuple in CallNode's return type * Revert "support nested tuple in CallNode's return type" This reverts commit 66225ed.
Co-authored-by: Mikael Sevenier <mikael.sevenier@sima.ai>
* Split transport classes into transport package. * Introduce transport timeouts. * black format * Add metadata-only artifacts * Simplify utvm rpc server API and ease handling of short packets. * add zephyr test against qemu * Add qemu build config * fix typo * cleanup zephyr main * fix nonblocking piping on some linux kernels * don't double-open transport * validate FD are in non-blocking mode * gitignore test debug files * cleanup zephyr compiler * re-comment serial until added * remove logging * add zephyr exclusions to check_file_type * add asf header * lint * black format * more pylint * kill utvm rpc_server bindings, which don't work anymore and fail pylint * fix compiler warning * fixes related to pylint * clang-format again * more black format * add qemu regression * Fix paths for qemu/ dir * fix typo * fix SETFL logic * export SessionTerminatedError and update except after moving * fix test_micro_artifact * retrigger staging CI * fix jenkins syntax hopefully * one last syntax error * Add microTVM VM setup scripts * obliterate USE_ANTLR from cmake.config * add poetry deps to pyproject.toml - mainly taken from output of `pip freeze` in ci-gpu and ci-lint * initial attempt at setup.py + autodetect libtvm_runtime SO path * hack to hardcode in build * make pyproject lock * Add ci_qemu to Jenkinsfile * build in qemu * checkpoint * create diff for jared * add missing stuff * address liangfu comments * fix new bug with list passing * release v0.0.2 * works on hardware * switch to pytest for zephyr tests * add missing import * fix option parsing * remove extraneous changes * lint * asf lint, somehow local pass didn't work * file type lint * black-format * try to fix ARMTargetParser.h #include in LLVM < 8.0 * rm misspelled deamon lines * move to apps/microtvm-vm * fetch keys from kitware server * fix path exclusions in check_file_type * retrigger CI * reorganize vm, add tutorial * fixes for reorganization - enable vagrant ssh * update ssh instructions * rm commented code * standardize reference VM release process, add prerelease test * remove -mfpu from this change * fix exit code of test_zephyr * rm unneeded files, update check_file_type * add asf header * git-black * git-black against main * git-black with docker * fixes for virtualbox * black format * install python3.8, for zephyr gdb * timestamp zephyr vm name, permits launching multiple VMs * log warning when initial vagrant destroy fails * revert changes moved into #6789 * address leandron@ comments * black format * black format * add --skip-build to test subcommand, detach device from other VMs * black format * address leandron@ comments * don't rm release test when building only 1 provider * revert pyproject.toml * remove need to copy pyproject.toml to root * this often contributes to erroneous changes to that file
* WIP * WIP * WIP * WIP * Disable WASM and fix rebase * Work on finishing tests * Make entire object system printable * Write some more tests for IRModule * All tests pass * Format * Restore module.cc * Bump syn
* enable scatter gpu test on cuda * adding update_func arg * pytorch scatter_add gpu tests working * update 3d and 4d scatter * enable scatter_add gpu test Co-authored-by: masa <masa@pop-os.localdomain>
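scatter_add differs from plain scatter in that duplicate indices accumulate rather than overwrite, which is what makes a naive GPU implementation race-prone. A numpy model of the 1-D case (function name is illustrative):

```python
import numpy as np

def scatter_add_1d(data, indices, updates):
    out = data.copy()
    # np.add.at is an unbuffered += : repeated indices accumulate,
    # matching torch.Tensor.scatter_add_ semantics.
    np.add.at(out, indices, updates)
    return out

result = scatter_add_1d(np.zeros(4), [0, 1, 1, 3], [1.0, 2.0, 3.0, 4.0])
```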
* If operator support in ONNX. * Small tweak. * Added uses_gpu tag. * Disable test on GPU until onnxruntime version is updated. * Use parametrize_target to specify CPU only. * Just don't use onnxruntime for now, I guess.
* add dynamic dequantize * register quantize and dequantize as opaque * make tests better * black * remove main fn * fix black again * move tests * fix import * fix import again * try again * fix import
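For reference, QNN's affine dequantize computes real = scale * (quantized - zero_point); making the op dynamic means scale and zero_point arrive as runtime tensors rather than compile-time constants. A minimal sketch of the arithmetic:

```python
import numpy as np

def dequantize(q, scale, zero_point):
    # Affine mapping back to float: real = scale * (q - zero_point).
    # Cast before subtracting so uint8 arithmetic cannot wrap around.
    return scale * (q.astype(np.float32) - zero_point)

real = dequantize(np.array([0, 128, 255], dtype=np.uint8), 0.5, 128)
```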
* remove get_valid_counts from pytorch nms * fix pytorch nms for negative score * merge reset by -1 * move max_out_size handling to triangle loop * update torch nms test * fuse the last two kernels * parallelize the first kernel * merge first and last kernel * remove unnecessary cases * fix typo * revert pytorch frontend change * fuse rearrange step with triangle loop * fix max_output_size handling * check if already suppressed * fix topi vision test by wrapping tir const around int argument * fix for num anchors = 0 case * fix missing zero init of num valid boxes when the input is empty * add some comments and missing doc * typo fix * add a guard against zero dim grid / thread block inside ir_builder * typo fix * trigger CI
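The "triangle loop" above is the pairwise suppression step of NMS: after sorting by score, a box ranked below a kept box is suppressed when its IoU with that box exceeds the threshold. A reference sketch of the sequential algorithm the fused kernels parallelize (note scores may be negative, so no score threshold is applied here):

```python
import numpy as np

def iou(a, b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold):
    order = np.argsort(-np.asarray(scores))  # descending by score
    suppressed = [False] * len(boxes)
    keep = []
    for pos, i in enumerate(order):
        if suppressed[i]:
            continue
        keep.append(int(i))
        # "Triangle loop": only boxes ranked below i can be suppressed by it.
        for j in order[pos + 1:]:
            if iou(boxes[i], boxes[j]) > iou_threshold:
                suppressed[j] = True
    return keep

kept = nms([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]],
           [0.9, 0.8, -0.1], 0.5)
```

The second box overlaps the first heavily and is suppressed; the third box survives despite its negative score.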
* Created CSourceMetaData module for model metadata * Currently, there is a MetaData module to capture constants conditionally if the runtime modules implement const init PackedFuncs. However, this one relies on a load process in which the metadata is created on volatile memory that may not be usable in uTVM environments. * There is a need for model level metadata that is valid across all runtime modules such as the func registry when creating a system-lib. * This commit implements a CSourceMetaData module to hold the func registry that collects function names from the runtime module and generates a C source file to be linked with the final artifact. * Modified and added export_library for utvm Change-Id: Ie2e8e2aea1a66520f03fe8af7cc5bdf27339ea10 * Created CSourceMetaData module for model metadata * fixed llvm_module to return null pfs for get_symbol and get_const_vars Change-Id: I84810e0695d4d6fb314af2469117f965eed71b51 * Created CSourceMetaData module for model metadata * fixed bundle_deploy tests Change-Id: I0d1332a4abbb6830531784c59264021bbbd7148a * Created CSourceMetaData module for model metadata * fixed export_library not to insert "options" when targeting tar * fixed unit tests Change-Id: Ia1686889498b71af66f1a0311a059154ad3c2c3e * Created CSourceMetaData module for model metadata * enable wasm to support csource metadata module * disabled non DSOExportables from using csource metadata module Change-Id: Ie09beaad35cbc2ef738d1d24d91e249b5e099569 * Created CSourceMetaData module for model metadata * changed const pfs to be called only on external modules or DSOExportable modules Change-Id: I6ad28f166c0fc27a2548c851bf9287ec805550d1 * Created CSourceMetaData module for model metadata * CSourceMetadata module wrapper is only created for c/llvm targets Change-Id: I13cb4140c17e2e1f91d495b15a1ff7eeab9fb14d * Created CSourceMetaData module for model metadata * target should be defined to use CSourceMetaData module Change-Id: Id8e55b23d0007a79c550334de2c0fec63d40171f * Created CSourceMetaData module for model metadata * reinstate llvm func registry Change-Id: I53e0754b6fb533637f08b25e98064d8c04092de4 * Created CSourceMetaData module for model metadata * addressed comments and fixed bugs Change-Id: I26401685dc803aeaf7642c865df88d683419e859 * Created CSourceMetaData module for model metadata * addressed a missed comment Change-Id: I65e65c30bc780a946f3f1b8372c40a49a5c20582 * Created CSourceMetaData module for model metadata * te build interface should only include c-source metadata if targeting "c" Change-Id: Ie23cb8c6231c1f2de6d2827084774e3510288098 * Created CSourceMetaData module for model metadata * c_source modules should be created only if they are non-DSO exportable Change-Id: I53f2f8e9caa41f133446f8881b9dc541ebeee8cc * Created CSourceMetaData module for model metadata * documentation misalignment in source_module.cc Change-Id: I83e2c29b1f2980ca65a694304720dc58a5cb7879 * Created CSourceMetaData module for model metadata * typo: same object file written as a dependency in the Makefile Change-Id: I8becc4196d286cfb6372768687b3c836799dcb78 * Created CSourceMetaData module for model metadata * removed unused param from a brief Change-Id: Ie4db2aca3b7ea147bd8c65ef5d1cc2146f530e76 * Created CSourceMetaData module for model metadata * made export library use c as the format for c source modules Change-Id: Ie2fd6204414f0fa43988a8082d18af7a3225e237 * Created CSourceMetaData module for model metadata * addressed a nit Change-Id: I6084b8c06ddfaaece295439dbab589e6e202b664
* [Auto Scheduler] Mali Support * Fix doc * fix lint * address comments * fix doc
…el_vm()` (#7134) * Add div_ and is_floating_point operators * Add handling of exprs to op, update tests * add test + supporting functions * Revert whitespace changes * Properly assign dtype to random integers * Reformat with black * Switched default dtype logic, removed extra line
pack operation now accepts constant arguments
ACL codegen now uses AnnotateTarget pass with include_non_call_ops = False to prevent promoting non-call ops under the target of its arguments. Squeezenet unit test added.
* impl isobjectref for array * array test * cargo fmt
* Add a FunctionPattern, remove unused attributes in CallPattern * update docs
* Use self.dag in Python object * Add sch to ComputeDAG * address comment
* [AutoScheduler] Support string processing to records * doc * remove log
* complex reduce * fix * fix * fix
* sort refactor initial import * sort test working * scatter 1d with positive indices working * remove negative indices, using extern for now * minor fix * minor fix * add sort by key test * revert scatter change * add document * fix py format Co-authored-by: masa <masa@pop-os.localdomain>
* add * make it work * format * add policy * comment * move test * format * fix ci * Delete useless old code Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>
… on CPU (#7161) * [AutoScheduler] Add layout rewrite for dense and batch_matmul * Fix test & Address comments * Fix shape inference * fix test
* [Relay] Add fast_softmax * fix * fix
* Fix pytorch nms conversion for negative scores * updated mask rcnn test to verify outputs and also run cuda target * set rpn_post_nms_top_n_test to 200 * fix parameter name * dump output box information * simplifying
* [AutoScheduler] Improve tutorials * fix lint * address comments
…v3d (#7168) * [AutoScheduler] Enable winograd for conv2d & Enable layout rewrite for conv3d * fix test * fix test * update tutorials
Co-authored-by: zhangfucheng <zhangfucheng.jason@bytedance.com>
…7156) * Add layout rewrite options for measure * Update schedule for inserted transform stage * Set layout rewrite when tuning for network * Update the log version
* Add platform timer to microTVM. * Address liangfu comments * cppformat * clang-format Co-authored-by: Liangfu Chen <liangfu@apache.org>
Xuxue1 pushed a commit that referenced this pull request on Dec 29, 2020
* Change onnx importer to use dynamic upsampling3d (#3) fix pylint * Refactor ONNX frontend to be dynamic Make OneHot dynamic Support BatchMatMul with dynamically shaped inputs fix dynamic broadcast Add null checks to broadcast_to rel functions fail more isolated broadcast_to test use StructuralEqual instead of pointer comparisons in dynamic_to_static pass add an optional weight freeze argument to onnx importer convert onnx resize to dynamic op add dynamic expand to onnx importer add a shape_func for power fix BERTSquad, lint handle onnx graph initializer parameters more intelligently * Dynamic ONNX importer: Upsampling and Pad (#2) fix lint fix Call reference fix a type issue with expand fix a bad test refactor respond to review comments, fix batch matmul tests * black format * fix batch matmul test * add dynamic strided slice to the onnx importer * fix clip importer * fix qnn tutorial * fix bad merge, respond to review comments * add a simple dynamic model test * Add dynamic-shaped autopadding to convolution and pooling ops * fix dynamic issues in a few ops * fix pylint * disable tests onnxrt doesn't support * fix pytorch test * respond to review comments * add documentation about partially supporting dynamic shapes Co-authored-by: Lily Orth-Smith <lorthsmith@octoml.ai>