

Change to use MXNet's topk for CPUs in inference #506

Closed
wants to merge 1 commit

Conversation

ciyongch

Since MXNet's topk outperforms the numpy version as of
PR apache/mxnet#12085, switch to MXNet's topk for the
CPU device when doing inference to leverage this performance boost.
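To illustrate what this change replaces: a minimal sketch (a hypothetical helper, not Sockeye's actual code) of the argpartition-based pattern a numpy topk typically uses. The MXNet path this PR switches to would instead be a single operator call such as `mx.nd.topk(scores, k=k, ret_typ='both')` on an NDArray, avoiding the copy out to numpy.

```python
import numpy as np

def numpy_topk(scores, k):
    """Return (values, indices) of the k largest entries along the last axis."""
    # argpartition finds the top-k candidates in O(n), but in arbitrary order
    idx = np.argpartition(scores, -k, axis=-1)[..., -k:]
    vals = np.take_along_axis(scores, idx, axis=-1)
    # sort just the k candidates in descending order of score
    order = np.argsort(-vals, axis=-1)
    idx = np.take_along_axis(idx, order, axis=-1)
    vals = np.take_along_axis(vals, order, axis=-1)
    return vals, idx

scores = np.array([[0.1, 0.7, 0.3, 0.9]])
vals, idx = numpy_topk(scores, 2)
# vals -> [[0.9, 0.7]], idx -> [[3, 1]]
```

When scores live in an MXNet NDArray on the target device, calling the framework's own topk skips the host round-trip entirely, which is where the reported CPU speedup comes from.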

Pull Request Checklist

  • Changes are complete (if posting work-in-progress code, prefix your pull request title with '[WIP]'
    until you can check this box).
  • Unit tests pass (pytest)
  • Were system tests modified? If so did you run these at least 5 times to account for the variation across runs?
  • System tests pass (pytest test/system)
  • Passed code style checking (./style-check.sh)
  • You have considered writing a test
  • Updated major/minor version in sockeye/__init__.py. Major version bump if this is a backwards incompatible change.
  • Updated CHANGELOG.md

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

@fhieber
Contributor

fhieber commented Aug 13, 2018

We are already preparing Sockeye for upcoming MXNet 1.3 in the 'blocks' branch, which already switches to mxnet topk for both GPU and CPU: https://github.com/awslabs/sockeye/blob/blocks/sockeye/inference.py

I think, as long as mxnet==1.3 is not released, we should stay with the current best option (Sockeye/master always depends on released mxnet versions, not mxnet/master).

@pengzhao-intel

@fhieber thanks for the information; it's fine to wait for the 1.3 release.
Have you had a chance to try the performance of the new topk?
Feel free to let us know if you find any performance issues in your case.

@ciyongch
Author

@fhieber @pengzhao-intel Since there's already another plan to enable the new MXNet topk, I would like to close this PR.

@ciyongch ciyongch closed this Aug 14, 2018
@xinyu-intel
Contributor

@fhieber Since MXNet 1.3.0 has been released, do you have any plans to switch to MXNet's topk for the
CPU device when doing inference?

@fhieber
Contributor

fhieber commented Sep 12, 2018

Sure, I'll work on getting the 'blocks' branch up to date and merged into master while updating the requirements; probably this week or next.
