
Update NEWS, README and website for 1.7.0 #19047

Merged
merged 7 commits into from Sep 2, 2020

Conversation

ciyongch
Contributor

Update NEWS, README, and website for the 1.7.0 release.

@szha @TaoLv @leezu

@mxnet-bot

Hey @ciyongch, thanks for submitting the PR.
All tests are already queued to run once. If tests fail, you can trigger one or more tests again with the following commands:

  • To trigger all jobs: @mxnet-bot run ci [all]
  • To trigger specific jobs: @mxnet-bot run ci [job1, job2]

CI supported jobs: [unix-cpu, centos-gpu, miscellaneous, windows-cpu, unix-gpu, windows-gpu, edge, clang, website, sanity, centos-cpu]


Note:
Only the following 3 categories can trigger CI: PR Author, MXNet Committer, Jenkins Admin.
All CI tests must pass before the PR can be merged.
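
For example, to rerun only the Windows pipelines after a flaky failure, the PR author could comment (job names taken from the supported list above):

@mxnet-bot run ci [windows-cpu, windows-gpu]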

@TaoLv
Member

TaoLv commented Aug 30, 2020

It's strange. I remember I changed the download table to use mirror links in #16501.

Contributor

@leezu leezu left a comment

@ciyongch
Contributor Author

> It's strange. I remember I changed the download table to use mirror links in #16501.

They were changed by #18487.

@ciyongch
Contributor Author

> Must use archive.apache.org for all but the latest release. See https://infra.apache.org/release-publishing.html

Sure, I will update the download links according to this spec. Thanks!
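
For reference, under that policy only the newest release keeps mirror-backed links in the download table, while every earlier release points at the permanent archive. A minimal sketch of the resulting pattern (illustrative paths, not verified against the actual download table):

{% highlight bash %}
# Latest release: served from the mirror-backed downloads host
wget https://downloads.apache.org/incubator/mxnet/1.7.0/apache-mxnet-src-1.7.0-incubating.tar.gz

# Older releases: served from the permanent archive
wget https://archive.apache.org/dist/incubator/mxnet/1.6.0/apache-mxnet-src-1.6.0-incubating.tar.gz
{% endhighlight %}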

@ciyongch
Contributor Author

Hi @leezu, I've updated the download links; please take a look.

- [1.7.0](#170)
  - [New features](#new-features)
    - [MXNet Extensions: custom operators, partitioning, and graph passes](#mxnet-extensions-custom-operators-partitioning-and-graph-passes)
    - [OpPerf utility enabled in the binary distribution](#opperf-utility-enabled-in-the-binary-distribution)
Contributor

which PR did this?

Contributor Author

Hi @ChaiBapchya, it probably wasn't driven by any PR in the current scope, but was included in the binary release process. cc @szha to see if there are any comments.
When installing the mxnet 1.7.0 release wheel via pip, benchmark/opperf is now installed as well. Your original proposal is listed in the 1.7.0 roadmap.
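
For readers following along, a minimal sketch of what shipping benchmark/opperf in the wheel enables (module path and flags are taken from the opperf README and may differ between releases):

{% highlight bash %}
# Run the operator benchmark suite from an installed wheel
# (sketch; assumes the benchmark package is importable from site-packages)
python -m benchmark.opperf.opperf --output-format json --output-file op_results.json
{% endhighlight %}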

@ciyongch
Contributor Author

ciyongch commented Sep 1, 2020

Hi @leezu @szha @marcoabreu @ChaiBapchya @TaoLv @pengzhao-intel, please take a look and comment if anything needs to be updated, thanks!

Contributor

@leezu leezu left a comment

Let's use the official mxnet.apache.org domain?

{% highlight bash %}
pip install mxnet
{% endhighlight %}

Start from 1.7.0 release, MKL-DNN is enabled in pip packages by default. Which are
optimized for Intel hardware. You can find performance numbers
in the <a href="https://mxnet.io/api/faq/perf#intel-cpu">MXNet tuning guide</a>.
Contributor

Link to mxnet.apache.org/

Contributor Author

The perf link under mxnet.apache.org contains the version number, like "https://mxnet.apache.org/versions/1.6/api/faq/perf.html#intel-cpu". Is there a base link without the version number that redirects to the latest version of the page?
Please check my latest update.

{% highlight bash %}
pip install mxnet
{% endhighlight %}

Start from 1.7.0 release, MKL-DNN is enabled in pip packages by default. Which are
Contributor

oneDNN?

{% highlight bash %}
pip install mxnet
{% endhighlight %}

Start from 1.7.0 release, MKL-DNN is enabled in pip packages by default. Which are
optimized for Intel hardware. You can find performance numbers
Contributor

Elaborate on the "optimization"? For example: "oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform performance library of basic building blocks for deep learning applications. The library is optimized for Intel Architecture Processors, Intel Processor Graphics and Xe architecture-based Graphics. Support for other architectures such as Arm* 64-bit Architecture (AArch64) and OpenPOWER* Power ISA (PPC64) is experimental." (from the oneDNN repo).

Please mention that the mxnet-native release is available, which comes without oneDNN.

Contributor Author

Thank you for the valuable input, @leezu. I will update the description here.
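
As background for the description update: users can confirm whether an installed wheel was built with MKL-DNN (oneDNN) via the runtime feature API. A minimal sketch, assuming the mxnet.runtime.Features API available in MXNet 1.x:

{% highlight bash %}
# Install the release wheel and check the MKLDNN build flag
pip install mxnet==1.7.0
python -c "from mxnet.runtime import Features; print(Features().is_enabled('MKLDNN'))"
{% endhighlight %}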

Contributor Author

BTW, I couldn't find an available mxnet-native pip wheel via pip install mxnet-native; is it still WIP, or is it published under a different name?

Member

Yes, it's WIP.

@ciyongch
Contributor Author

ciyongch commented Sep 2, 2020

@mxnet-bot run ci [windows-gpu]

@mxnet-bot

Jenkins CI successfully triggered: [windows-gpu]

@leezu leezu merged commit f2e90a2 into apache:master Sep 2, 2020