Merge branch 'main' into aluo/onnx/sceloss
* main:
  [UnitTests][Contrib] Enable contrib tensorrt/coreml unit tests (apache#8902)
  [BUG] DataType Bug In SplitRel (apache#8899)
  Enable python debug runtime for exported network libraries (apache#8793)
  Set default value of p in LpPool as 2 (apache#8866)
  [Community] @Hzfengsy -> Committer (apache#8908)
  Trivial uTVM -> microTVM "spelling" fix to align with branding. (apache#8905)
  [Vulkan][Topi] Parametrizing additional topi tests, marking vulkan failures (apache#8904)
  Move to new style issue template system (apache#8898)
  [Onnx] Support Negative Log Loss (apache#8872)
  [ROCm][TVMC] Add ROCm to the TVMC driver (apache#8896)
  fix error report on Store (apache#8895)
  [Docker] Re-enabled automatic --tty flag when running bash. (apache#8861)
AndrewZhaoLuo committed Sep 2, 2021
2 parents 3371073 + aac0754 commit 4e43297
Showing 36 changed files with 1,602 additions and 1,450 deletions.
27 changes: 27 additions & 0 deletions .github/ISSUE_TEMPLATE/bug-report.md
@@ -0,0 +1,27 @@
---
name: "\U0001F41B Bug report"
about: To help the developer act on the issues, please include a description of your environment, preferably a minimum script to reproduce the problem.
title: "[Bug] "
labels: "type: bug"

---

Thanks for participating in the TVM community! We use https://discuss.tvm.ai for any general usage questions and discussions. The issue tracker is used for actionable items such as feature proposals discussion, roadmaps, and bug tracking. You are always welcomed to post on the forum first :smile_cat:

Issues that are inactive for a period of time may get closed. We adopt this policy so that we won't lose track of actionable issues that may fall at the bottom of the pile. Feel free to reopen a new one if you feel there is an additional problem that needs attention when an old one gets closed.

### Expected behavior

What you were expecting

### Actual behavior

What actually happened

### Environment

Any environment details, such as: Operating System, TVM version, etc

### Steps to reproduce

Preferably a minimal script to cause the issue to occur.
29 changes: 29 additions & 0 deletions .github/ISSUE_TEMPLATE/ci-image.md
@@ -0,0 +1,29 @@
---
name: "\U0001F40B Update CI Docker Image"
about: Provide information on CI Docker Images requiring updates
title: "[CI Image] "

---

Thanks for participating in the TVM community! We use https://discuss.tvm.ai for any general usage questions and discussions. The issue tracker is used for actionable items such as feature proposals discussion, roadmaps, and bug tracking. You are always welcomed to post on the forum first :smile_cat:

Issues that are inactive for a period of time may get closed. We adopt this policy so that we won't lose track of actionable issues that may fall at the bottom of the pile. Feel free to reopen a new one if you feel there is an additional problem that needs attention when an old one gets closed.

- [ ] S0. Reason: For example, a blocked PR or a feature issue

- [ ] S1. Tag of nightly build: TAG. Docker hub: https://hub.docker.com/layers/tlcpackstaging/ci_cpu/...

- [ ] S2. The nightly is built on TVM commit: TVM_COMMIT. Detailed info can be found here: https://ci.tlcpack.ai/blue/organizations/jenkins/docker-images-ci%2Fdaily-docker-image-rebuild/detail/daily-docker-image-rebuild/....

- [ ] S3. Testing the nightly image on ci-docker-staging: https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/ci-docker-staging/...

- [ ] S4. Retag TAG to VERSION:
```
docker pull tlcpackstaging/IMAGE_NAME:TAG
docker tag tlcpackstaging/IMAGE_NAME:TAG tlcpack/IMAGE_NAME:VERSION
docker push tlcpack/IMAGE_NAME:VERSION
```

- [ ] S5. Check if the new tag is really there: https://hub.docker.com/u/tlcpack

- [ ] S6. Submit a PR updating the IMAGE_NAME version on Jenkins
22 changes: 22 additions & 0 deletions .github/ISSUE_TEMPLATE/ci-problem.md
@@ -0,0 +1,22 @@
---
name: "\U0000274C CI Problem"
about: To help the developers act on these problems, please give us as many details of the CI failure as possible.
title: "[CI Problem] "

---

Thanks for participating in the TVM community! We use https://discuss.tvm.ai for any general usage questions and discussions. The issue tracker is used for actionable items such as feature proposals discussion, roadmaps, and bug tracking. You are always welcomed to post on the forum first :smile_cat:

Issues that are inactive for a period of time may get closed. We adopt this policy so that we won't lose track of actionable issues that may fall at the bottom of the pile. Feel free to reopen a new one if you feel there is an additional problem that needs attention when an old one gets closed.

### Branch/PR Failing

Please provide a link to the PR that has failed to run CI.

### Jenkins Link

Provide a link to the specific run that has failed.

### Flakiness

Have you seen this multiple times in this branch or in other branches?
5 changes: 5 additions & 0 deletions .github/ISSUE_TEMPLATE/config.yml
@@ -0,0 +1,5 @@
blank_issues_enabled: false # default: true
contact_links:
- name: 💬 Discourse
url: https://discuss.tvm.apache.org/
about: Thanks for participating in the TVM community! We use https://discuss.tvm.ai for any general usage questions and discussions. The issue tracker is used for actionable items such as feature proposals discussion, roadmaps, and bug tracking. You are always welcomed to post on the forum first 😺
@@ -1,7 +1,14 @@
Thanks for participating in the TVM community! We use https://discuss.tvm.ai for any general usage questions and discussions. The issue tracker is used for actionable items such as feature proposals discussion, roadmaps, and bug tracking. You are always welcomed to post on the forum first :)
---
name: "\U0001F527 Feature Tracking"
about: List clear, small actionable items so we can track the progress of the change.
title: "[Tracking Issue] "
labels: type:rfc-tracking

Issues that are inactive for a period of time may get closed. We adopt this policy so that we won't lose track of actionable issues that may fall at the bottom of the pile. Feel free to reopen a new one if you feel there is an additional problem that needs attention when an old one gets closed.
---

Thanks for participating in the TVM community! We use https://discuss.tvm.ai for any general usage questions and discussions. The issue tracker is used for actionable items such as feature proposals discussion, roadmaps, and bug tracking. You are always welcomed to post on the forum first :smile_cat:

For bug reports, to help the developer act on the issues, please include a description of your environment, preferably a minimum script to reproduce the problem.
Issues that are inactive for a period of time may get closed. We adopt this policy so that we won't lose track of actionable issues that may fall at the bottom of the pile. Feel free to reopen a new one if you feel there is an additional problem that needs attention when an old one gets closed.

For feature proposals, list clear, small actionable items so we can track the progress of the change.
### This issue is to track progress for FEATURE NAME
- [ ] P1. Title of this piece of the feature (PR link if available)
1 change: 1 addition & 0 deletions CONTRIBUTORS.md
@@ -35,6 +35,7 @@ We do encourage everyone to work anything they are interested in.
- [Tianqi Chen](https://github.com/tqchen) (PMC): @tqchen - topi, compiler, relay, docs
- [Wei Chen](https://github.com/wweic): @wweic - runtime, relay, vm
- [Zhi Chen](https://github.com/zhiics) (PMC): @zhiics - relay, quantization, pass manager
- [Siyuan Feng](https://github.com/Hzfengsy): @Hzfengsy - tir
- [Josh Fromm](https://github.com/jwfromm): @jwfromm - frontends, quantization, topi
- [Yuwei Hu](https://github.com/Huyuwei): @Huyuwei - topi, frontends
- [Nick Hynes](https://github.com/nhynes): @nhynes: - sgx, rust
2 changes: 1 addition & 1 deletion cmake/config.cmake
@@ -108,7 +108,7 @@ set(USE_GRAPH_EXECUTOR_CUDA_GRAPH OFF)
# Whether to enable the profiler for the graph executor and vm
set(USE_PROFILER ON)

# Whether enable uTVM standalone runtime
# Whether enable microTVM standalone runtime
set(USE_MICRO_STANDALONE_RUNTIME OFF)

# Whether build with LLVM support
5 changes: 3 additions & 2 deletions docker/bash.sh
@@ -38,7 +38,7 @@ set -euo pipefail

function show_usage() {
cat <<EOF
Usage: docker/bash.sh [-i|--interactive] [--net=host]
Usage: docker/bash.sh [-i|--interactive] [--net=host] [-t|--tty]
[--mount MOUNT_DIR] [--repo-mount-point REPO_MOUNT_POINT]
[--dry-run]
<DOCKER_IMAGE_NAME> [--] [COMMAND]
@@ -95,7 +95,7 @@ DOCKER_IMAGE_NAME
COMMAND
The command to be run inside the docker container. If this is set
to "bash", both the --interactive and --net=host flags are set.
to "bash", the --interactive, --tty and --net=host flags are set.
If no command is specified, defaults to "bash". If the command
contains dash-prefixed arguments, the command should be preceded
by -- to indicate arguments that are not intended for bash.sh.
@@ -235,6 +235,7 @@ fi

if [[ ${COMMAND[@]+"${COMMAND[@]}"} = bash ]]; then
INTERACTIVE=true
TTY=true
USE_NET_HOST=true
fi

20 changes: 18 additions & 2 deletions docs/dev/debugger.rst
@@ -123,12 +123,12 @@ Example of loading the parameters
How to use Debugger?
***************************************

1. In ``config.cmake`` set the ``USE_GRAPH_EXECUTOR_DEBUG`` flag to ``ON``
1. In ``config.cmake`` set the ``USE_PROFILER`` flag to ``ON``

::

# Whether enable additional graph debug functions
set(USE_GRAPH_EXECUTOR_DEBUG ON)
set(USE_PROFILER ON)

2. Do 'make' tvm, so that it will make the ``libtvm_runtime.so``

@@ -148,6 +148,22 @@ How to use Debugger?
m.run()
tvm_out = m.get_output(0, tvm.nd.empty(out_shape, dtype)).numpy()

4. If the network was previously exported to an external library (a shared
   object/dynamically linked library) using ``lib.export_library("network.so")``,
   the initialization of the debug runtime will be slightly different

::

lib = tvm.runtime.load_module("network.so")
m = graph_executor.create(lib["get_graph_json"](), lib, dev, dump_root="/tmp/tvmdbg")
# set inputs
m.set_input('data', tvm.nd.array(data.astype(dtype)))
m.set_input(**params)
# execute
m.run()
tvm_out = m.get_output(0, tvm.nd.empty(out_shape, dtype)).numpy()


The outputs are dumped to a temporary folder in ``/tmp`` folder or the
folder specified while creating the runtime.

4 changes: 3 additions & 1 deletion python/tvm/driver/tvmc/runner.py
@@ -50,7 +50,7 @@ def add_run_parser(subparsers):
# like 'webgpu', etc (@leandron)
parser.add_argument(
"--device",
choices=["cpu", "cuda", "cl", "metal", "vulkan"],
choices=["cpu", "cuda", "cl", "metal", "vulkan", "rocm"],
default="cpu",
help="target device to run the compiled module. Defaults to 'cpu'",
)
@@ -394,6 +394,8 @@ def run_module(
dev = session.metal()
elif device == "vulkan":
dev = session.vulkan()
elif device == "rocm":
dev = session.rocm()
else:
assert device == "cpu"
dev = session.cpu()
23 changes: 6 additions & 17 deletions python/tvm/relay/frontend/onnx.py
@@ -37,21 +37,9 @@
from .. import random as _random
from .. import ty as _ty
from .. import vision as _vision
from .common import (
AttrCvt,
Renamer,
fold_constant,
get_name,
get_relay_op,
gru_cell,
infer_channels,
infer_shape,
infer_type,
infer_value,
lstm_cell,
new_var,
unbind,
)
from .common import (AttrCvt, Renamer, fold_constant, get_name, get_relay_op,
gru_cell, infer_channels, infer_shape, infer_type,
infer_value, lstm_cell, new_var, unbind)

__all__ = ["from_onnx"]

@@ -909,8 +897,9 @@ def _impl_v1(cls, inputs, attr, params):
else:
attr["layout"] = onnx_default_layout(dims=(len(input_shape) - 2), op_name="LpPool")

p = _expr.const(attr["p"], dtype)
reci_p = _expr.const(1.0 / attr["p"], dtype)
p_value = attr.get("p", 2)
p = _expr.const(p_value, dtype)
reci_p = _expr.const(1.0 / p_value, dtype)
data = _op.power(data, p)

out = AttrCvt(
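The LpPool change above reads ``p`` with a default of 2 when the attribute is absent, per the ONNX operator spec. For intuition, Lp pooling computes ``(sum(|x|^p))^(1/p)`` over each window — a minimal NumPy sketch of that formula (illustrative only, not TVM's implementation):

```python
import numpy as np

def lp_pool_1d(x, kernel, p=2):
    # p defaults to 2, matching the ONNX LpPool default this commit adopts.
    return np.array([
        np.power(np.sum(np.power(np.abs(x[i : i + kernel]), p)), 1.0 / p)
        for i in range(len(x) - kernel + 1)
    ])

print(lp_pool_1d(np.array([3.0, 4.0, 0.0]), kernel=2))  # [5. 4.]
```

With ``p=2`` the first window is ``(3^2 + 4^2)^0.5 = 5.0``; passing ``p=1`` instead reduces it to a plain sum of absolute values.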
4 changes: 4 additions & 0 deletions python/tvm/rpc/client.py
@@ -217,6 +217,10 @@ def metal(self, dev_id=0):
"""Construct Metal device."""
return self.device(8, dev_id)

def rocm(self, dev_id=0):
"""Construct ROCm device."""
return self.device(10, dev_id)

def ext_dev(self, dev_id=0):
"""Construct extension device."""
return self.device(12, dev_id)
2 changes: 1 addition & 1 deletion python/tvm/script/parser.py
@@ -536,7 +536,7 @@ def transform_SubscriptAssign(self, node):
if len(indexes) != 1:
self.report_error(
f"Store is only allowed with one index, but {len(indexes)} were provided.",
tvm.ir.Span.union([x.span for x in indexes]),
node.params[1].span,
)
# Store
return tvm.tir.Store(
1 change: 1 addition & 0 deletions python/tvm/testing/__init__.py
@@ -24,6 +24,7 @@
from .utils import known_failing_targets, requires_cuda, requires_cudagraph
from .utils import requires_gpu, requires_llvm, requires_rocm, requires_rpc
from .utils import requires_tensorcore, requires_metal, requires_micro, requires_opencl
from .utils import requires_package
from .utils import identity_after, terminate_self

from ._ffi_api import nop, echo, device_test, run_check_signal, object_use_count
44 changes: 44 additions & 0 deletions python/tvm/testing/utils.py
@@ -774,7 +774,51 @@ def requires_rpc(*args):
return _compose(args, _requires_rpc)


def requires_package(*packages):
"""Mark a test as requiring python packages to run.
If the packages listed are not available, tests marked with
`requires_package` will appear in the pytest results as being skipped.
This is equivalent to using ``foo = pytest.importorskip('foo')`` inside
the test body.
Parameters
----------
packages : List[str]
The python packages that should be available for the test to
run.
Returns
-------
mark: pytest mark
The pytest mark to be applied to unit tests that require this
"""

def has_package(package):
try:
__import__(package)
return True
except ImportError:
return False

marks = [
pytest.mark.skipif(not has_package(package), reason=f"Cannot import '{package}'")
for package in packages
]

def wrapper(func):
for mark in marks:
func = mark(func)
return func

return wrapper
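As a usage sketch: ``requires_package`` stacks one ``skipif`` mark per package, so a test runs only when every listed import succeeds. The standalone helper below mirrors that availability probe (the decorator usage shown in the comment is hypothetical test code, not part of this commit):

```python
def has_package(package):
    # Same availability probe requires_package uses: attempt the import,
    # report whether it succeeded.
    try:
        __import__(package)
        return True
    except ImportError:
        return False

# In a test file you would then write (names hypothetical):
#
#   @tvm.testing.requires_package("onnx", "onnxruntime")
#   def test_onnx_frontend():
#       ...
#
# which skips the test unless both imports succeed.
print(has_package("json"))                # True: stdlib module
print(has_package("surely_missing_pkg"))  # False: would trigger a skip
```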


def parametrize_targets(*args):

"""Parametrize a test over a specific set of targets.
Use this decorator when you want your test to be run over a
11 changes: 7 additions & 4 deletions src/relay/op/tensor/transform.cc
@@ -2829,15 +2829,18 @@ bool SplitRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
}
reporter->Assign(types[1], TupleType(Array<Type>(fields)));
} else {
auto indices = Downcast<Array<ObjectRef>>(param->indices_or_sections);
Array<IndexExpr> indices;
for (auto i : Downcast<Array<Integer>>(param->indices_or_sections)) {
indices.push_back(IntImm(DataType::Int(32), i.as<IntImmNode>()->value));
}
auto begin = IndexExpr(tir::make_zero(DataType::Int(32)));
std::vector<Type> fields;
for (unsigned int i = 0; i < indices.size(); ++i) {
ICHECK(reporter->Assert(Downcast<IndexExpr>(indices[i]) > begin))
ICHECK(reporter->Assert(indices[i] > begin))
<< "indices_or_sections need to be a sorted ascending list";
std::vector<IndexExpr> oshape(data->shape.begin(), data->shape.end());
oshape[axis] = Downcast<IndexExpr>(indices[i]) - begin;
begin = Downcast<IndexExpr>(indices[i]);
oshape[axis] = indices[i] - begin;
begin = indices[i];
auto vec_type = TensorType(oshape, data->dtype);
fields.push_back(vec_type);
}
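The SplitRel fix above normalizes ``indices_or_sections`` to 32-bit ``IndexExpr`` values before computing each output shape. The semantics being type-checked are an axis-wise split at ascending indices — shown here with NumPy purely as an analogue (not TVM code):

```python
import numpy as np

x = np.arange(12).reshape(3, 4)
# Split the columns at indices 1 and 3 -> pieces of width 1, 2, and 1,
# mirroring the per-piece output shapes SplitRel assigns along `axis`.
parts = np.split(x, indices_or_sections=[1, 3], axis=1)
print([p.shape for p in parts])  # [(3, 1), (3, 2), (3, 1)]
```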
4 changes: 4 additions & 0 deletions src/runtime/graph_executor/graph_executor_factory.cc
@@ -53,6 +53,10 @@ PackedFunc GraphExecutorFactory::GetFunction(
}
*rv = this->ExecutorCreate(devices);
});
} else if (name == "get_graph_json") {
return PackedFunc(
[sptr_to_self, this](TVMArgs args, TVMRetValue* rv) { *rv = this->graph_json_; });

} else if (name == "debug_create") {
return PackedFunc([sptr_to_self, this](TVMArgs args, TVMRetValue* rv) {
ICHECK_GE(args.size(), 2);
3 changes: 3 additions & 0 deletions src/target/spirv/spirv_support.cc
@@ -72,6 +72,9 @@ SPIRVSupport::SPIRVSupport(tvm::Target target) {
if (target->GetAttr<Bool>("supports_float16")) {
supports_float16 = target->GetAttr<Bool>("supports_float16").value();
}
if (target->GetAttr<Bool>("supports_float64")) {
supports_float64 = target->GetAttr<Bool>("supports_float64").value();
}
if (target->GetAttr<Bool>("supports_int8")) {
supports_int8 = target->GetAttr<Bool>("supports_int8").value();
}
