
[RFC] v1.8.0 release #18800

Open
samskalicky opened this issue Jul 27, 2020 · 54 comments
Labels
RFC, Roadmap

Comments

samskalicky (Contributor) commented Jul 27, 2020

Hi MXNet community,

Now that development for the 1.7.0 release has closed (and the release process is underway), I wanted to start a discussion around another 1.x-based release. Many users will continue to use 1.x for the foreseeable future while the community transitions to 2.x, for example those using toolkits (e.g. GluonCV/GluonNLP) that are pinned to a 1.x version of MXNet.

Are there features that we want to make available in a 1.8.0 release while the 2.0 transition is ongoing?

Feature freeze (code freeze) was September 18th.

Deferred items

Completed items

samskalicky (Contributor Author) commented:

@mxnet-label-bot update [RFC, Roadmap]

lanking520 added the RFC and Roadmap labels and removed the Bug and needs triage labels Jul 27, 2020
Kh4L (Contributor) commented Jul 27, 2020

These PRs, related to the Partition API changes for Gluon support, could also be added:

stu1130 (Contributor) commented Jul 27, 2020

I would like to include the BatchNorm performance improvement PR for axis != 1 in 1.8.
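
To illustrate the case in question (a sketch, not code from the PR): BatchNorm defaults to normalizing over axis 1, the channel axis of NCHW data, so channels-last inputs must pass a different axis, which is the code path the PR speeds up.

import mxnet as mx

# BatchNorm over the channel axis of NHWC (channels-last) input, i.e. the
# axis != 1 path targeted by the performance improvement above.
bn = mx.gluon.nn.BatchNorm(axis=3)
bn.initialize()
x = mx.nd.random.uniform(shape=(2, 224, 224, 3))  # NHWC
y = bn(x)
print(y.shape)  # (2, 224, 224, 3)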

@pengzhao-intel pengzhao-intel pinned this issue Jul 29, 2020
DickJC123 (Contributor) commented:

A major feature of CUDA 11 and cuDNN 8.0 is support for the new A100 GPU and its TensorFloat-32 (TF32) mode of computation. I would like to include PR #18694, "Unittest tolerance handling improvements", which allows MXNet to use TF32 effectively. The PR also makes sensible adjustments to the unittest tolerances based on device context and dtype, ensuring A100 compatibility with our unittest suite.
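As a rough illustration of the tolerance issue (a hedged sketch, not the PR's actual helper): TF32 keeps float32's exponent range but only about 10 mantissa bits, so float32 results computed on an A100 need looser tolerances than true float32 arithmetic.

import numpy as np

# Hypothetical dtype- and mode-aware tolerance selection; the cutoffs are
# illustrative assumptions, not values taken from PR #18694.
def pick_tolerances(dtype, tf32=False):
    if dtype == np.float16:
        return {"rtol": 1e-2, "atol": 1e-4}
    if dtype == np.float32 and tf32:
        return {"rtol": 1e-3, "atol": 1e-5}  # looser: TF32 truncates the mantissa
    if dtype == np.float32:
        return {"rtol": 1e-5, "atol": 1e-7}
    return {"rtol": 1e-7, "atol": 1e-9}      # float64

np.testing.assert_allclose(1.0005, 1.0, **pick_tolerances(np.float32, tf32=True))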

With cuDNN 8.0 also comes compatibility with CUDA graph capture. I would like to include a PR (near complete, but not yet submitted) that enables CUDA graph use. This will permit MXNet to bypass much of the CPU work needed to launch identical kernel sequences, which are common in deep learning training and inference.
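A sketch of how this might be used (hedged: the PR was unsubmitted when this comment was written, so the MXNET_ENABLE_CUDA_GRAPHS gate below is an assumption based on the feature as it later shipped; graph capture needs a static, replayable kernel sequence):

import os
# Assumed env-var gate for the feature; set before mxnet is imported.
os.environ["MXNET_ENABLE_CUDA_GRAPHS"] = "1"

import mxnet as mx

net = mx.gluon.model_zoo.vision.resnet50_v1()
net.initialize(ctx=mx.gpu(0))
# Static shapes/allocations give a fixed kernel sequence that can be captured
# once and replayed, skipping most per-launch CPU overhead.
net.hybridize(static_alloc=True, static_shape=True)
y = net(mx.nd.zeros((1, 3, 224, 224), ctx=mx.gpu(0)))
y.wait_to_read()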

wkcn (Member) commented Aug 5, 2020

bartekkuncer (Contributor) commented:

I would like to include the update of the oneDNN version to v1.6: #18867.

samskalicky (Contributor Author) commented:

Hey @wkcn, can you create a PR to backport this change to v1.x?

wkcn (Member) commented Aug 12, 2020

Hi @samskalicky, I created PR #18910 to backport it : )

HahTK commented Aug 18, 2020

I would also like to see the "Duplicate subgraph input and output" fix in MXNet 1.8.
Specifically, this PR: #16131.

If I am not mistaken, that PR had some test failures.
However, I have a different implementation that is passing.
Can we make this a goal for MXNet 1.8 as well?

I can open a PR, or fix the old one, if we approve it for 1.8.

stu1130 (Contributor) commented Aug 20, 2020

I would like to include one more PR, #18218, which helps us build MXNet on Windows.

leezu (Contributor) commented Aug 20, 2020

@stu1130 the PR you reference updates the CI configuration to work around a bug in NVIDIA CUDA. Unfortunately NVIDIA will not fix the bug in CUDA 10, so just backporting the PR may not help users building MXNet from source. It may be more helpful to add a paragraph to the installation docs recommending that Windows users build with CUDA 11, or manually patch their CUDA 10 installation to backport the Thrust changes included in CUDA 11. But ideally NVIDIA would fix the bug in CUDA 10 as well, given that it affects all CUDA 10 users on Windows (not just MXNet): NVIDIA/thrust#1090

samskalicky (Contributor Author) commented:

@leezu There is a CUDA 11 update planned for 1.8. @DickJC123 is planning to do this work. Would we want #18218 anyway?

leezu (Contributor) commented Aug 20, 2020

@samskalicky generally MXNet supports the last two CUDA versions, so MXNet 1.8 would support CUDA 11 and CUDA 10.2.

stu1130 (Contributor) commented Aug 20, 2020

@leezu @samskalicky I agree we should add documentation for this and will raise the PR later. But I would still prefer to include the code change so that users don't need to patch manually, and so that the DJL team doesn't forget to apply the patch, unless it causes other problems.

samskalicky (Contributor Author) commented Aug 20, 2020

Just to summarize: we're all on board with including #18218 in v1.8.

Just to clarify, this PR makes the CI more stable, but it doesn't help users who hit the same problem after installing MXNet on Windows, since Thrust is a dependency that comes with whatever version of CUDA a user has set up on their machine. If we can document this issue too, that would be helpful, but it is separate from getting #18218 backported to v1.x.

So @stu1130 go ahead and create the backport PR and we'll track that for v1.8. Thanks!

pengzhao-intel (Contributor) commented Aug 21, 2020

@samskalicky what's the timeline for 1.8?

Is anyone working on updating the wiki?
https://cwiki.apache.org/confluence/display/MXNET/Project+Proposals+for+next+MXNet+Release

samskalicky (Contributor Author) commented:

Hi @pengzhao-intel, we haven't set any dates for the 1.8 release yet. Thanks for bringing this up. I have been holding off while waiting for the vote on general@ to complete for the v1.7 release. But we should start to formalize the 1.8 plans now that we're moving forward with backporting PRs and have consensus from the community on having another 1.x release before 2.0.

I would like to propose a feature freeze one month from today, on September 18th. Does that work for those who intend to submit PRs for the v1.8 release, @DickJC123 @stu1130 @Kh4L and others?

Once we decide on a feature freeze date we'll work backwards from there, and I'll formally set up the release plan with @ChaiBapchya and @josephevans, who will be co-managing the v1.8 release with me.

Kh4L (Contributor) commented Aug 21, 2020

@samskalicky that works for me and the TensorRT related PRs I am backporting.

szha (Member) commented Aug 23, 2020

GitHub milestones seem to be a suitable tool for tracking release progress. I created one and added all the items mentioned so far, so we can try it out: https://github.com/apache/incubator-mxnet/milestone/5

leezu mentioned this issue Sep 9, 2020
leezu (Contributor) commented Sep 11, 2020

I'd like to include a fix for NaiveEngine::PushAsync #19122

kohillyang commented:

@szha I found that training with mx.mod.Module with MXNET_BACKWARD_DO_MIRROR set to 1 takes more GPU memory than a Gluon HybridBlock. Note that when MXNET_BACKWARD_DO_MIRROR is set to 1, MXNET_USE_FUSION must also be set to 0, because relu appears to get fused otherwise. Does this mean that Gluon does not need MXNET_BACKWARD_DO_MIRROR? Or that we can't generate a Symbol from a HybridBlock and must write the network with the pure symbol API?

I tested the memory consumption with the following code:

import mxnet as mx
import mxnet.autograd as ag


class NaiveDataset(object):
    def __len__(self):
        return 10000

    def __getitem__(self, idx):
        if idx % 2 ==0:
            label = mx.nd.zeros(shape=(1000, ))
            label[0] = 1
            return mx.nd.array(mx.nd.zeros(shape=(3, 224, 224))), label
        else:
            label = mx.nd.zeros(shape=(1000, ))
            label[1] = 1
            return mx.nd.array(mx.nd.ones(shape=(3, 224, 224))), label


def train_gluon_model_with_module():
    # Train via the symbolic Module API; uncomment the env vars below to test
    # backward mirroring (they must be set before any graph is bound).
    import os
    # os.environ["MXNET_BACKWARD_DO_MIRROR"] = "1"
    # os.environ["MXNET_USE_FUSION"] = "0"
    ctx_list = [mx.gpu(0)]
    # The original used a local ResNet-50 v1b implementation; the model zoo
    # variant below keeps the example self-contained.
    # from models.backbones.resnet._resnetv1b import resnet50_v1b
    # net = resnet50_v1b(pretrained=False)
    net = mx.gluon.model_zoo.vision.resnet50_v1(pretrained=False)
    net.initialize()
    _ = net(mx.nd.zeros(shape=(1, 3, 224, 224)))
    arg_params = {}
    aux_params = {}
    arg_params_collected = net.collect_params()
    for k in arg_params_collected:
        arg_params[k] = arg_params_collected[k].data(mx.cpu())
    for k in arg_params_collected:
        # collect_params() mixes arg and aux parameters, so both dicts get all
        # entries; allow_extra=True in init_params() below tolerates this.
        aux_params[k] = arg_params_collected[k].data(mx.cpu())

    data = mx.sym.var(name="data")
    sym = net(data)
    module = mx.mod.Module(sym, data_names=['data'], label_names=[], context=ctx_list)
    module.bind(data_shapes=[("data", (len(ctx_list) * 2, 3, 224, 224))])
    module.init_params(arg_params=arg_params, aux_params=aux_params, allow_missing=False, allow_extra=True)
    module.init_optimizer(force_init=True)
    train_loader = mx.gluon.data.DataLoader(dataset=NaiveDataset(), batch_size=100,
                                            num_workers=8, last_batch="discard", shuffle=True,
                                            thread_pool=False)
    for data_batch in train_loader:
        module_data_batch = mx.io.DataBatch(data=[data_batch[0], ], label=None)
        module.forward(module_data_batch, is_train=True)
        y_hat = module.get_outputs(merge_multi_context=True)
        label_list = mx.gluon.utils.split_and_load(data_batch[1], ctx_list=ctx_list, batch_axis=0)
        preds_list = mx.gluon.utils.split_and_load(y_hat[0], ctx_list=ctx_list, batch_axis=0)
        pred_grad_list = []
        for pred, label in zip(preds_list, label_list):  # type: mx.nd.NDArray, mx.nd.NDArray
            pred.attach_grad()
            label.attach_grad()
            with ag.record():
                pred_log_softmax = mx.nd.log_softmax(pred,  axis=1)
                loss = pred_log_softmax * label * -1
            loss.backward()
            pred_grad_list.append(pred.grad)
        pred_gradients = mx.nd.concatenate(pred_grad_list, axis=0)
        module.backward([pred_gradients])
        module.update()
        print(loss.sum().asnumpy())
        mx.nd.waitall()


def train_gluon_model_with_gluon():
    # Train the same network through Gluon's hybridized imperative path.
    ctx_list = [mx.gpu(0)]
    net = mx.gluon.model_zoo.vision.resnet50_v1(pretrained=False)
    net.initialize()
    net.collect_params().reset_ctx(ctx_list)
    net.hybridize(static_alloc=True)
    trainer = mx.gluon.Trainer(
        net.collect_params(),  # fix batchnorm, fix first stage, etc...
        'sgd',
        {
            'learning_rate':1e-2
         },
    )

    train_loader = mx.gluon.data.DataLoader(dataset=NaiveDataset(), batch_size=100,
                                            num_workers=8, last_batch="discard", shuffle=True,
                                            thread_pool=False)
    for data_batch in train_loader:
        data_list = mx.gluon.utils.split_and_load(data_batch[0], ctx_list=ctx_list, batch_axis=0)
        label_list = mx.gluon.utils.split_and_load(data_batch[1], ctx_list=ctx_list, batch_axis=0)
        losses = []
        for data, label in zip(data_list, label_list):  # type: mx.nd.NDArray, mx.nd.NDArray
            with ag.record():
                y_hat = net(data)
                pred_log_softmax = mx.nd.log_softmax(y_hat,  axis=1)
                loss = pred_log_softmax * label * -1
            losses.append(loss)
        ag.backward(losses)
        trainer.step(1)
        print(loss.sum().asnumpy())
        mx.nd.waitall()


if __name__ == '__main__':
    # train_gluon_model_with_module()
    train_gluon_model_with_gluon()

By default, train_gluon_model_with_module and train_gluon_model_with_gluon need almost the same amount of GPU memory, but with MXNET_BACKWARD_DO_MIRROR set to 1 and MXNET_USE_FUSION set to 0, train_gluon_model_with_module fails with an OOM exception.
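
For anyone reproducing this, one way to observe the difference (a sketch; the helper name report_gpu_memory is mine, but mx.context.gpu_memory_info is a real 1.x API returning free/total bytes for a device):

import mxnet as mx

def report_gpu_memory(device_id=0):
    # gpu_memory_info returns (free, total) in bytes for the given GPU.
    free, total = mx.context.gpu_memory_info(device_id)
    print("GPU %d: %.1f/%.1f GiB used"
          % (device_id, (total - free) / 1024**3, total / 1024**3))

# Call after mx.nd.waitall() inside the training loop to sample usage.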

samskalicky (Contributor Author) commented:

@kohillyang thanks for the update. Would you mind opening a separate issue to track debugging? If you do end up opening a PR with a fix please post back here so we can track that for the release. Thank you!

szha (Member) commented Sep 13, 2020

@kohillyang mirror and fusion are two orthogonal techniques that help save memory, and Gluon (specifically CachedOp) still needs to implement them. I can elaborate more once you open a tracking issue.
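
For reference, both switches are read from the environment when the symbolic graph is bound, so they must be set beforehand; a minimal sketch (whether mirroring actually lowers peak memory depends on the network, as the discussion above shows):

import os
# Set before binding: mirroring recomputes cheap activations in the backward
# pass, and per the discussion above fusion must be off for it to take effect.
os.environ["MXNET_BACKWARD_DO_MIRROR"] = "1"
os.environ["MXNET_USE_FUSION"] = "0"

import mxnet as mx

data = mx.sym.var("data")
sym = mx.sym.FullyConnected(mx.sym.relu(data), num_hidden=10)
mod = mx.mod.Module(sym, data_names=["data"], label_names=[], context=mx.cpu())
mod.bind(data_shapes=[("data", (2, 32))], for_training=True)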

kohillyang commented:

@samskalicky @szha Thank you very much. I created an issue for this: #19133.

bgawrych (Contributor) commented:

Can we add the fix for ElemwiseSum too? #19200

samskalicky (Contributor Author) commented:

Hi @bgawrych, #19200 is already merged into the v1.8.x branch. We're planning to tag the release candidate on Friday and start the vote on Saturday 9/26, so your PR should be included.

r3stl355 (Contributor) commented:

Can someone review this simple one please? It's a backport of a bug fix first reported for v1.6: #19095

bartekkuncer (Contributor) commented:

Please include the fix for GCC 8 with oneDNN v1.6.4: #19251

Neutron3529 (Contributor) commented:

@samskalicky It seems that my merged bug fix (for KLDivLoss, #18423) is not included in 1.8.0-rc1.

samskalicky (Contributor Author) commented Oct 13, 2020

> @samskalicky It seems that my merged bug fix (for KLDivLoss, #18423) is not included in 1.8.0-rc1.

Hi @Neutron3529 it looks like #18423 was merged to the master branch. The v1.8.0 release (and rc0/rc1) branched off of the v1.x branch and since the PR was not backported it is not included in this release.

Neutron3529 (Contributor) commented:

> > @samskalicky It seems that my merged bug fix (for KLDivLoss, #18423) is not included in 1.8.0-rc1.
>
> Hi @Neutron3529 it looks like #18423 was merged to the master branch. The v1.8.0 release (and rc0/rc1) branched off of the v1.x branch and since the PR was not backported it is not included in this release.

How do I merge the PR to 1.x?
I manually opened a new PR (#19356).
Is that enough to fix the bug (if all checks pass)?

m-ky commented Nov 25, 2020

Can #19410 be merged?

bartekkuncer (Contributor) commented:

Is the fix for the missing mkldnn headers (#18608) included?

leezu (Contributor) commented Nov 25, 2020

@bartekkuncer I think so: 2aa2702

bartekkuncer (Contributor) commented:

Please include the FindMKL.cmake fix (#19152) and the update of mkldnn to v1.7.0 (#19560).

hassonofer commented:

Any timeline updates?
The tables at https://cwiki.apache.org/confluence/display/MXNET/1.8.0+Release+Plan+and+Status seem out of date.

samskalicky (Contributor Author) commented Jan 8, 2021 via email

hassonofer commented:

Thanks for that

lgg commented Mar 5, 2021

@samskalicky @stu1130 @szha Hello, can you share any approximate plans for releasing 1.8.0 to pip?

samskalicky (Contributor Author) commented:

Hi @lgg, there's currently no date for pip. We're still working on making the pip packaging compliant with ASF policy. We'll send out an update on dev@ when we have more info.

fhieber (Contributor) commented Apr 4, 2021

@samskalicky is there a timeline for mac wheels being available on pip? I noticed the post0 release of 1.8 for Linux on 3/30, but Mac wheels are still missing.

samskalicky (Contributor Author) commented:

> @samskalicky is there a timeline for mac wheels being available on pip? I noticed the post0 release of 1.8 for Linux on 3/30, but Mac wheels are still missing.

@fhieber currently we only release Linux wheels as part of our release process. The Windows and Mac wheels are currently built by other community members (@yajiedesign for Windows, @szha for Mac) outside of the normal release process. At some point in the future we hope to integrate building these into the release process, but we're not there yet.

fhieber (Contributor) commented Apr 5, 2021

> > @samskalicky is there a timeline for mac wheels being available on pip? I noticed the post0 release of 1.8 for Linux on 3/30, but Mac wheels are still missing.
>
> @fhieber currently we only release Linux wheels as part of our release process. The Windows and Mac wheels are currently built by other community members (@yajiedesign for Windows, @szha for Mac) outside of the normal release process. At some point in the future we hope to integrate building these into the release process, but we're not there yet.

I see, thanks. It would be helpful to make this clear on the website, where selecting macOS/Python/{CPU,GPU} still shows the pip option.
What is the recommendation for downstream projects like Sockeye on what to build against by default in their CI systems: only Linux?

szha (Member) commented Apr 5, 2021

I think we have been supporting Mac and Windows so far; it's just that @yajiedesign and I are, in effect, part of the release process. I'm currently working on producing the Mac wheels for 1.8.

By the way, I believe @access2rohit is currently making the CD for mac available in the main repo in #19957.

szha (Member) commented Apr 5, 2021

Mac 1.8 wheels should be available now.

Pagey commented May 2, 2021

> I think we have been supporting Mac and Windows so far; it's just that @yajiedesign and I are, in effect, part of the release process. I'm currently working on producing the Mac wheels for 1.8.
>
> By the way, I believe @access2rohit is currently making the CD for mac available in the main repo in #19957.

@yajiedesign hey, any idea when the 1.8 wheels for Windows will be available?

szha unpinned this issue May 3, 2021