Commit

Merge branch 'main' into jcw/bump-c-17
gramalingam authored Oct 18, 2023
2 parents 92d32af + 5f908a9 commit c3fe91e
Showing 18 changed files with 48 additions and 910 deletions.
10 changes: 9 additions & 1 deletion .azure-pipelines/Linux-CI.yml
@@ -81,7 +81,7 @@ jobs:
- script: |
source venv/bin/activate
- pytest -sv
+ pytest -sv --cov=onnx --cov-report=xml --cov-append --cov-branch --junit-xml pytest.xml
if [ $? -ne 0 ]; then
echo "pytest failed"
exit 1
@@ -97,6 +97,14 @@ jobs:
displayName: 'Run ONNX tests'
+ - script: |
+     curl -Os https://uploader.codecov.io/latest/linux/codecov
+     chmod +x codecov
+     ./codecov
+   continueOnError: true
+   displayName: 'Upload to codecov'
- script: |
source venv/bin/activate
python onnx/backend/test/cmd_tools.py generate-data --clean
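The CI script keeps an explicit exit-status guard after the pytest invocation (`if [ $? -ne 0 ]`). A minimal Python sketch of the same guard, using a trivial stand-in command rather than the repo's real pytest step:

```python
import subprocess
import sys

# Mirror the CI guard: run the test command, then exit nonzero if it failed.
# `sys.executable -c "pass"` stands in for the real pytest invocation.
result = subprocess.run([sys.executable, "-c", "pass"], check=False)
if result.returncode != 0:
    print("pytest failed")
    raise SystemExit(1)
print("tests passed")
```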
3 changes: 3 additions & 0 deletions .reuse/dep5
@@ -13,6 +13,9 @@ Files: .azure-pipelines/*.yml
COPYRIGHT: Copyright (c) ONNX Project Contributors
License: Apache-2.0

+ Files: codecov.yml
+ COPYRIGHT: Copyright (c) ONNX Project Contributors
+ License: Apache-2.0

Files: .github/**/*.md .github/pull_request_template.md
COPYRIGHT: Copyright (c) ONNX Project Contributors
10 changes: 5 additions & 5 deletions README.md
@@ -43,12 +43,12 @@ ONNX is [widely supported](http://onnx.ai/supported-tools) and can be found in m

# Contribute

- ONNX is a community project and the open governance model is described [here](community/readme.md). We encourage you to join the effort and contribute feedback, ideas, and code. You can participate in the [Special Interest Groups](community/sigs.md) and [Working Groups](community/working-groups.md) to shape the future of ONNX.
+ ONNX is a community project and the open governance model is described [here](https://github.com/onnx/onnx/blob/main/community/readme.md). We encourage you to join the effort and contribute feedback, ideas, and code. You can participate in the [Special Interest Groups](https://github.com/onnx/onnx/blob/main/community/sigs.md) and [Working Groups](https://github.com/onnx/onnx/blob/main/community/working-groups.md) to shape the future of ONNX.

- Check out our [contribution guide](/CONTRIBUTING.md) to get started.
+ Check out our [contribution guide](https://github.com/onnx/onnx/blob/main/CONTRIBUTING.md) to get started.

If you think some operator should be added to ONNX specification, please read
- [this document](docs/AddNewOp.md).
+ [this document](https://github.com/onnx/onnx/blob/main/docs/AddNewOp.md).

# Community meetings

@@ -280,7 +280,7 @@ For full list refer to CMakeLists.txt
* `USE_MSVC_STATIC_RUNTIME` should be 1 or 0, not ON or OFF. When set to 1 onnx links statically to runtime library.
**Default**: `USE_MSVC_STATIC_RUNTIME=0`
- * `DEBUG` should be 0 or 1. When set to 1 onnx is built in debug mode. For debug versions of the dependencies, you need to open the [CMakeLists file](CMakeLists.txt) and append a letter `d` at the end of the package name lines. For example, `NAMES protobuf-lite` would become `NAMES protobuf-lited`.
+ * `DEBUG` should be 0 or 1. When set to 1 onnx is built in debug mode. For debug versions of the dependencies, you need to open the [CMakeLists file](https://github.com/onnx/onnx/blob/main/CMakeLists.txt) and append a letter `d` at the end of the package name lines. For example, `NAMES protobuf-lite` would become `NAMES protobuf-lited`.
**Default**: `Debug=0`
### CMake variables
@@ -321,7 +321,7 @@ pytest

# Development

- Check out the [contributor guide](/CONTRIBUTING.md) for instructions.
+ Check out the [contributor guide](https://github.com/onnx/onnx/blob/main/CONTRIBUTING.md) for instructions.

# License

8 changes: 8 additions & 0 deletions codecov.yml
@@ -0,0 +1,8 @@
+ coverage:
+   status:
+     project:
+       default:
+         informational: true
+     patch:
+       default:
+         informational: true
2 changes: 1 addition & 1 deletion docs/Operators.md
@@ -23094,7 +23094,7 @@ expect(
node,
inputs=[data, axes],
outputs=[reduced],
- name="test_reduce_sum_empty_set",
+ name="test_reduce_sum_empty_set_non_reduced_axis_zero",
)
```

2 changes: 1 addition & 1 deletion docs/TestCoverage.md
@@ -15681,7 +15681,7 @@ expect(
node,
inputs=[data, axes],
outputs=[reduced],
- name="test_reduce_sum_empty_set",
+ name="test_reduce_sum_empty_set_non_reduced_axis_zero",
)
```

2 changes: 1 addition & 1 deletion onnx/backend/test/case/node/reducesum.py
@@ -242,5 +242,5 @@ def export_non_reduced_axis_zero() -> None:
node,
inputs=[data, axes],
outputs=[reduced],
- name="test_reduce_sum_empty_set",
+ name="test_reduce_sum_empty_set_non_reduced_axis_zero",
)
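The renamed test exercises ReduceSum over an empty slice when a non-reduced axis has length zero. A hypothetical NumPy illustration of that semantics, not the exact tensors used by the ONNX test:

```python
import numpy as np

# Data with a zero-length non-reduced axis: shape (2, 0, 4).
data = np.empty((2, 0, 4), dtype=np.float32)
axes = (2,)  # reduce the last axis; axis 1 stays zero-length

# Summing over axis 2 with keepdims yields an empty (2, 0, 1) result.
reduced = np.sum(data, axis=axes, keepdims=True)
print(reduced.shape)  # (2, 0, 1)
```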
Binary file modified onnx/backend/test/data/node/test_reduce_sum_empty_set/model.onnx
Binary file not shown.
6 changes: 3 additions & 3 deletions onnx/checker.cc
@@ -190,11 +190,11 @@ void check_tensor(const TensorProto& tensor, const CheckerContext& ctx) {
}
std::string data_path = path_join(ctx.get_model_dir(), relative_path);
// use stat64 to check whether the file exists
- #if defined(__APPLE__) || defined(__wasm__)
-   struct stat buffer; // APPLE does not have stat64
+ #if defined(__APPLE__) || defined(__wasm__) || !defined(__GLIBC__)
+   struct stat buffer; // APPLE, wasm and non-glibc stdlibs do not have stat64
if (stat((data_path).c_str(), &buffer) != 0) {
#else
-   struct stat64 buffer; // All POSIX except APPLE have stat64
+   struct stat64 buffer; // All POSIX under glibc except APPLE and wasm have stat64
if (stat64((data_path).c_str(), &buffer) != 0) {
#endif
fail_check(
Expand Down
38 changes: 16 additions & 22 deletions onnx/defs/traditionalml/defs.cc
@@ -382,28 +382,6 @@ ONNX_ML_OPERATOR_SET_SCHEMA(
fail_shape_inference(
"At least one of values_tensor, values_strings, values_int64s, values_floats must be set.");
}

- int default_length, default_type;
- std::tie(default_type, default_length) = getAttributeElementTypeAndLength(
-     ctx, {"default_tensor", "default_string", "default_int64", "default_float"});
- if (default_type != TensorProto::UNDEFINED) {
-   if (value_type != default_type) {
-     fail_shape_inference(
-         "The value type ",
-         value_type,
-         " and the default type ",
-         default_type,
-         " are different, which is not permitted for LabelEncoders.");
-   }
-
-   // Ensure default_tensor is a singleton if set
-   const AttributeProto* default_tensor = ctx.getAttribute("default_tensor");
-   if (default_tensor != nullptr &&
-       (default_tensor->t().dims_size() != 1 || default_tensor->t().dims(0) != 1)) {
-     fail_shape_inference("default_tensor must be a singleton if set.");
-   }
- }

if (value_length != key_length) {
fail_shape_inference(
"The number of keys ",
@@ -413,6 +391,22 @@ ONNX_ML_OPERATOR_SET_SCHEMA(
" must be the same in the LabelEncoder.");
}

+ auto default_attr = ctx.getAttribute("default_tensor");
+ if (nullptr != default_attr && default_attr->has_t() && default_attr->t().has_data_type() &&
+     default_attr->t().data_type() != TensorProto_DataType_UNDEFINED) {
+   auto default_tensor = default_attr->t();
+   if (default_tensor.data_type() != value_type) {
+     fail_shape_inference(
+         "The default tensor type ",
+         default_tensor.data_type(),
+         " and the value type ",
+         value_type,
+         " must be the same in the LabelEncoder.");
+   }
+   if (1 != default_tensor.dims_size() || 1 != default_tensor.dims(0)) {
+     fail_shape_inference("The default tensor must be a singleton 1D tensor.");
+   }
+ }
// Propagate shape from input type and assign output type based on value type
ctx.getOutputType(0)->mutable_tensor_type()->set_elem_type(value_type);
propagateShapeFromInputToOutput(ctx, 0, 0);
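The relocated default-tensor validation can be paraphrased in Python. This is an illustrative restatement of the shape-inference rules with hypothetical names, not ONNX's actual implementation:

```python
def check_label_encoder_default(default_dims, default_type, value_type):
    """Mimic the LabelEncoder checks above: the default tensor's element
    type must match the values' element type, and the tensor must be a
    singleton, i.e. one-dimensional with exactly one element."""
    if default_type != value_type:
        raise ValueError(
            f"The default tensor type {default_type} and the value type "
            f"{value_type} must be the same in the LabelEncoder."
        )
    if len(default_dims) != 1 or default_dims[0] != 1:
        raise ValueError("The default tensor must be a singleton 1D tensor.")

# A matching singleton default passes; a two-element default is rejected.
check_label_encoder_default([1], "FLOAT", "FLOAT")
try:
    check_label_encoder_default([2], "FLOAT", "FLOAT")
except ValueError as err:
    print(err)  # The default tensor must be a singleton 1D tensor.
```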