
[Vulkan][Runtime] Uniform buffer bugfix, minor cleanup #7966

Merged
masahi merged 4 commits into apache:main from Lunderberg:ubo_fixes on May 4, 2021

Conversation

Lunderberg (Contributor)

  • Bugfix: missing decoration on uniform buffer arguments.
    This caused a segfault when running on NVIDIA GPUs with models that require uniform buffer arguments for constants (more than 128 bytes of constants).

  • Updated test_target_codegen_spirv.py to use @tvm.testing.requires_vulkan.
    Previously, these tests reported success even when USE_VULKAN=OFF; now they are correctly reported as skipped.

  • Explicitly require int64 support at device creation time, since the TVM-generated shaders require it.

  • Allocate an appropriately sized descriptor pool for the buffer arguments, covering both uniform and storage buffers, when the push_descriptor extension is not in use (see the sketch after this list).
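A minimal sketch, using only standard Vulkan API calls and not the TVM runtime's actual code, of what the descriptor pool point amounts to: the pool must reserve uniform-buffer descriptors in addition to storage-buffer descriptors, otherwise a set containing the constant uniform buffer cannot be allocated. The helper name CreateArgDescriptorPool and its count parameters are hypothetical.

```cpp
#include <vulkan/vulkan.h>
#include <vector>

// Hypothetical helper (not TVM's code): create a descriptor pool large enough
// for one descriptor set holding both storage-buffer and uniform-buffer
// bindings, used when VK_KHR_push_descriptor is not available.
VkDescriptorPool CreateArgDescriptorPool(VkDevice device,
                                         uint32_t num_storage_buffers,
                                         uint32_t num_uniform_buffers) {
  std::vector<VkDescriptorPoolSize> pool_sizes;
  if (num_storage_buffers > 0) {
    pool_sizes.push_back({VK_DESCRIPTOR_TYPE_STORAGE_BUFFER, num_storage_buffers});
  }
  if (num_uniform_buffers > 0) {
    // Without this entry, sets that contain the >128-byte constant block
    // (passed as a uniform buffer) cannot be allocated from the pool.
    pool_sizes.push_back({VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, num_uniform_buffers});
  }

  VkDescriptorPoolCreateInfo info{};
  info.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO;
  info.maxSets = 1;  // one set per launched kernel in this sketch
  info.poolSizeCount = static_cast<uint32_t>(pool_sizes.size());
  info.pPoolSizes = pool_sizes.data();

  VkDescriptorPool pool = VK_NULL_HANDLE;
  vkCreateDescriptorPool(device, &info, nullptr, &pool);
  return pool;
}
```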

@Lunderberg (Contributor, Author)

Potential reviewers: @masahi @tmoreau89

Lunderberg added 3 commits May 3, 2021 11:39

- Bugfix, missing decoration on uniform buffer arguments.
  Caused a segfault when running on NVIDIA GPUs with models that required
  uniform buffer arguments for constants.
- Updated test_target_codegen_spirv.py to use @tvm.testing.requires_vulkan.
  Previously, these tests would show success if USE_VULKAN=OFF; now they are
  correctly shown as skipped.
- Minor cleanup on the Vulkan runtime.
  - Explicitly require int64 support at device creation time, since the
    TVM-generated shaders require it (a hedged device-creation sketch follows
    this list).
  - Allocate an appropriate pool size for the buffer inputs, including both
    uniform and storage buffers.
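The device-creation sketch referenced above: a minimal, hedged example of checking for and enabling the shaderInt64 feature when the logical device is created, assuming plain Vulkan 1.x. The function CreateDeviceWithInt64 and its parameters are hypothetical and this is not the code from the PR.

```cpp
#include <vulkan/vulkan.h>

// Hypothetical helper (not TVM's code): refuse to create a logical device
// unless shaderInt64 is supported, and enable it when it is, since the
// TVM-generated SPIR-V shaders use 64-bit integers.
VkDevice CreateDeviceWithInt64(VkPhysicalDevice phys, uint32_t queue_family_index) {
  VkPhysicalDeviceFeatures supported{};
  vkGetPhysicalDeviceFeatures(phys, &supported);
  if (supported.shaderInt64 != VK_TRUE) {
    return VK_NULL_HANDLE;  // device cannot run shaders that require int64
  }

  VkPhysicalDeviceFeatures enabled{};
  enabled.shaderInt64 = VK_TRUE;

  float priority = 1.0f;
  VkDeviceQueueCreateInfo queue_info{};
  queue_info.sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
  queue_info.queueFamilyIndex = queue_family_index;
  queue_info.queueCount = 1;
  queue_info.pQueuePriorities = &priority;

  VkDeviceCreateInfo device_info{};
  device_info.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
  device_info.queueCreateInfoCount = 1;
  device_info.pQueueCreateInfos = &queue_info;
  device_info.pEnabledFeatures = &enabled;

  VkDevice device = VK_NULL_HANDLE;
  vkCreateDevice(phys, &device_info, nullptr, &device);
  return device;
}
```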
@tmoreau89 (Contributor) left a comment

Thank you for the bug fix, @Lunderberg! I'll wait for @masahi to do a more thorough pass, but at first glance the changes LGTM!

@masahi self-assigned this May 3, 2021
@masahi (Member) left a comment

Great, thanks!

@masahi (Member) commented May 3, 2021

I confirmed that this patch works on AMD too.

@masahi merged commit fb631a2 into apache:main on May 4, 2021
@masahi (Member) commented May 4, 2021

Thanks @Lunderberg @tmoreau89

umangyadav pushed a commit to umangyadav/tvm that referenced this pull request May 5, 2021
* Bugfix, missing decoration on uniform buffer arguments.

Caused a segfault when running on NVIDIA GPUs with models that required
uniform buffer arguments for constants.

* Updated test_target_codegen_spirv.py to use @tvm.testing.requires_vulkan

Previously, these tests would show success if USE_VULKAN=OFF.  Now,
they correctly show that they are skipped instead.

* Minor cleanup on the vulkan runtime.

- Explicitly require int64 support at device creation time, since the
  TVM-generated shaders require it.

- Allocate an appropriate pool size for the buffer inputs, including
  both uniform and storage buffers.

* [Vulkan][Tests] Merged test_target_codegen_spirv.py into test_target_codegen_vulkan.py

Co-authored-by: Eric Lunderberg <elunderberg@octoml.ai>
trevor-m pushed a commit to trevor-m/tvm that referenced this pull request May 6, 2021
trevor-m pushed a commit to trevor-m/tvm that referenced this pull request May 6, 2021
trevor-m pushed a commit to trevor-m/tvm that referenced this pull request May 6, 2021
@Lunderberg deleted the ubo_fixes branch May 7, 2021 20:14
trevor-m pushed a commit to neo-ai/tvm that referenced this pull request May 11, 2021
Lunderberg added a commit to Lunderberg/tvm that referenced this pull request May 20, 2021
spvValidate found the bug that was fixed in apache#7966, along with a few
other issues with missing capability/extension declarations.  Now that
all unit tests pass with it enabled, we would like to enable it by default.
masahi pushed a commit that referenced this pull request May 21, 2021
…ion (#8098)

spvValidate found the bug that was fixed in #7966, along with a few
other issues with missing capability/extension declarations.  Now that
all unit tests pass with it enabled, we would like to enable it by default.

Co-authored-by: Eric Lunderberg <elunderberg@octoml.ai>
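For context on the follow-up above: a generated SPIR-V module can be validated with the SPIRV-Tools library (the library behind spirv-val/spvValidate). The sketch below is an illustrative example of that API, not the code added in #8098; the function name ValidateShader is hypothetical.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

#include "spirv-tools/libspirv.hpp"

// Validate a SPIR-V binary (as 32-bit words) against the Vulkan 1.0
// environment and print any diagnostics from the validator.
bool ValidateShader(const std::vector<uint32_t>& words) {
  spvtools::SpirvTools tools(SPV_ENV_VULKAN_1_0);
  tools.SetMessageConsumer([](spv_message_level_t, const char*,
                              const spv_position_t&, const char* message) {
    std::cerr << "spirv-val: " << message << "\n";
  });
  return tools.Validate(words);
}
```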
trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Jun 17, 2021
trevor-m pushed a commit to neo-ai/tvm that referenced this pull request Jun 17, 2021