
out of memory issue while using mxnet with sockeye #18662

Closed
MrRaghav opened this issue Jul 5, 2020 · 9 comments

@MrRaghav

MrRaghav commented Jul 5, 2020

Description

When I run the sockeye.train command with mxnet 1.6.0, the logs show two messages:

  1. mxnet.base.MXNetError: [09:58:26] src/storage/./pooled_storage_manager.h:161: cudaMalloc retry failed: out of memory
  2. learning rate from lr_scheduler has been overwritten by learning_rate in optimizer.

Basically, I submit sockeye.train as a job on my server and its output comes back as Run time 00:06:03, FAILED, ExitCode 1.

The software versions are as follows:

[username]@[server]:/username/sockeye/dir1$ pip3 list | grep mxnet
mxnet 1.6.0
mxnet-cu101mkl 1.6.0
mxnet-mkl 1.6.0
[username]@[server]:/username/sockeye/dir1$ pip3 list | grep sockeye
sockeye 2.1.7

Error Message

[username]@[server]:~/username/sockeye$ tail -30 77233.out
File "/home/username/.local/lib/python3.7/site-packages/sockeye/train.py", line 997, in
main()
File "/home/username/.local/lib/python3.7/site-packages/sockeye/train.py", line 764, in main
train(args)
File "/home/username/.local/lib/python3.7/site-packages/sockeye/train.py", line 992, in train
training_state = trainer.fit(train_iter=train_iter, validation_iter=eval_iter, checkpoint_decoder=cp_decoder)
File "/home/username/.local/lib/python3.7/site-packages/sockeye/training.py", line 242, in fit
self._step(batch=train_iter.next())
File "/home/username/.local/lib/python3.7/site-packages/sockeye/training.py", line 346, in step
loss_func.metric.update(loss_value.asscalar(), num_samples.asscalar())
File "/home/username/.local/lib/python3.7/site-packages/mxnet/ndarray/ndarray.py", line 2553, in asscalar
return self.asnumpy()[0]
File "/home/username/.local/lib/python3.7/site-packages/mxnet/ndarray/ndarray.py", line 2535, in asnumpy
ctypes.c_size_t(data.size)))
File "/home/username/.local/lib/python3.7/site-packages/mxnet/base.py", line 255, in check_call
raise MXNetError(py_str(LIB.MXGetLastError()))
mxnet.base.MXNetError: [09:58:26] src/storage/./pooled_storage_manager.h:161: cudaMalloc retry failed: out of memory
Stack trace:
[bt] (0) /home/username/.local/lib/python3.7/site-packages/mxnet/libmxnet.so(+0x6d554b) [0x7f6c5b3d054b]
[bt] (1) /home/username/.local/lib/python3.7/site-packages/mxnet/libmxnet.so(+0x41a0c72) [0x7f6c5ee9bc72]
[bt] (2) /home/username/.local/lib/python3.7/site-packages/mxnet/libmxnet.so(+0x41a694f) [0x7f6c5eea194f]
[bt] (3) /home/username/.local/lib/python3.7/site-packages/mxnet/libmxnet.so(+0x3972e10) [0x7f6c5e66de10]
[bt] (4) /home/username/.local/lib/python3.7/site-packages/mxnet/libmxnet.so(+0x39730c7) [0x7f6c5e66e0c7]
[bt] (5) /home/username/.local/lib/python3.7/site-packages/mxnet/libmxnet.so(mxnet::imperative::PushFCompute(std::function<void (nnvm::NodeAttrs const&, mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&)> const&, nnvm::Op const*, nnvm::NodeAttrs const&, mxnet::Context const&, std::vector<mxnet::engine::Var*, std::allocator<mxnet::engine::Var*> > const&, std::vector<mxnet::engine::Var*, std::allocator<mxnet::engine::Var*> > const&, std::vector<mxnet::Resource, std::allocator<mxnet::Resource> > const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, std::vector<unsigned int, std::allocator<unsigned int> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&)::{lambda(mxnet::RunContext)#1}::operator()(mxnet::RunContext) const+0x281) [0x7f6c5e66e4d1]
[bt] (6) /home/username/.local/lib/python3.7/site-packages/mxnet/libmxnet.so(+0x3896f19) [0x7f6c5e591f19]
[bt] (7) /home/username/.local/lib/python3.7/site-packages/mxnet/libmxnet.so(+0x38a3c31) [0x7f6c5e59ec31]
[bt] (8) /home/username/.local/lib/python3.7/site-packages/mxnet/libmxnet.so(+0x38a7170) [0x7f6c5e5a2170]

 learning rate from lr_scheduler has been overwritten by learning_rate in optimizer.

To Reproduce

sockeye 2.1.7 calls mxnet 1.6.0 (installed for CUDA).

Steps to reproduce

python3 -m sockeye.train -d training_data -vs dev.BPE.de -vt dev.BPE.en --shared-vocab -o model

What have you tried to solve it?

  1. Referred to "MXNetError: cudaMalloc failed: out of memory" (deepinsight/insightface#257) and tried to reduce the batch size (--batch-size) for sockeye.train from 4096 to 64, 70, 100, etc., but it kept failing; see the example below.

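For example, one of the reduced-batch-size variants tried looked roughly like this (the batch-size value is just one of the values listed above; the other flags are as in the command under "Steps to reproduce"):

python3 -m sockeye.train -d training_data -vs dev.BPE.de -vt dev.BPE.en --shared-vocab --batch-size 64 -o model
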
Environment

We recommend using our script for collecting the diagnostic information. Run the following command and paste the output below:

curl --retry 10 -s https://raw.githubusercontent.com/dmlc/gluon-nlp/master/tools/diagnose.py | python

# paste outputs here

username@server:~/username/sockeye$ curl --retry 10 -s https://raw.githubusercontent.com/dmlc/gluon-nlp/master/tools/diagnose.py | python
----------Python Info----------
('Version :', '2.7.16')
('Compiler :', 'GCC 8.3.0')
('Build :', ('default', 'Oct 10 2019 22:02:15'))
('Arch :', ('64bit', 'ELF'))
------------Pip Info-----------
('Version :', '20.1.1')
('Directory :', '/home/username/.local/lib/python2.7/site-packages/pip')
----------MXNet Info-----------
No MXNet installed.
----------System Info----------
('Platform :', 'Linux-4.19.0-9-amd64-x86_64-with-debian-10.4')
('system :', 'Linux')
('node :', 'server')
('release :', '4.19.0-9-amd64')
('version :', '#1 SMP Debian 4.19.118-2 (2020-04-29)')
----------Hardware Info----------
('machine :', 'x86_64')
('processor :', '')
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
Stepping: 1
CPU MHz: 1200.726
CPU max MHz: 2900.0000
CPU min MHz: 1200.0000
BogoMIPS: 4400.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 30720K
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts flush_l1d
----------Network Test----------
Setting timeout: 10
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0057 sec, LOAD: 0.4408 sec.
Timing for D2L: http://d2l.ai, DNS: 0.0010 sec, LOAD: 0.0191 sec.
Timing for FashionMNIST: https://repo.mxnet.io/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0009 sec, LOAD: 0.6619 sec.
Error open Conda: https://repo.continuum.io/pkgs/free/, HTTP Error 403: Forbidden, DNS finished in 0.00109004974365 sec.
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0012 sec, LOAD: 0.7398 sec.
Timing for GluonNLP: http://gluon-nlp.mxnet.io, DNS: 0.0012 sec, LOAD: 0.3613 sec.
Timing for D2L (zh-cn): http://zh.d2l.ai, DNS: 0.0011 sec, LOAD: 0.0085 sec.
Timing for GluonNLP GitHub: https://github.com/dmlc/gluon-nlp, DNS: 0.0000 sec, LOAD: 1.2439 sec.

MrRaghav added the Bug label Jul 5, 2020
@MrRaghav
Author

MrRaghav commented Jul 5, 2020

Thank you @szha for the suggestion in the following link: #16487

I hope I have provided all the required information.

@szha
Member

szha commented Jul 6, 2020

@MrRaghav thanks for creating the issue. What model of GPU are you using? What's the GPU memory size?
Also, have you tried using export MXNET_GPU_MEM_POOL_TYPE=Round? https://mxnet.apache.org/api/faq/env_var#memory-options

Round: A memory pool that always rounds the requested memory size and allocates memory of the rounded size. MXNET_GPU_MEM_POOL_ROUND_LINEAR_CUTOFF defines how to round up a memory size. Caching and allocating buffered memory works in the same way as the naive memory pool.
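For example, a minimal sketch of how this could be set up before launching training (the cutoff value is simply the documented default, shown here only for illustration; the training flags are the ones from your reproduction command):

# use the rounding GPU memory pool instead of the naive one
export MXNET_GPU_MEM_POOL_TYPE=Round
# optionally tune the rounding cutoff; 24 is the documented default
export MXNET_GPU_MEM_POOL_ROUND_LINEAR_CUTOFF=24
python3 -m sockeye.train -d training_data -vs dev.BPE.de -vt dev.BPE.en --shared-vocab -o model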

@MrRaghav
Author

MrRaghav commented Jul 6, 2020

Hello, please find the information in the following points:

  1. I am using an RTX 2080 Ti.

  2. To run Sockeye, I used 3 GPUs and specified the device IDs. The memory of the GPUs is as follows:

    username@server:~/username/sockeye$ nvidia-smi --format=csv --query-gpu=memory.total
    memory.total [MiB]
    11019 MiB
    11019 MiB
    11019 MiB

  3. Regarding the export command, I tried running sockeye.train as below:

    export MXNET_GPU_MEM_POOL_TYPE=Round

    python3 -m sockeye.train -s trained.BPE.de -t trained.BPE.en -vs dev.BPE.de -vt dev.BPE.en --shared-vocab
    --device-ids -3 --max-checkpoints 3 -o model

But I still got the same error.

@szha
Member

szha commented Jul 6, 2020

@fhieber do you have a recommendation on how to run sockeye on the above GPU?

@fhieber
Contributor

fhieber commented Jul 7, 2020

Lowering the batch size should definitely allow you to train a model. You could also try lowering the model size (--transformer-model-size and --num-embed) or reducing the number of layers (--num-layers).

I am also somewhat concerned about your output of pip3 list | grep mxnet: to my knowledge it is not advisable to have three different versions of MXNet installed.
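For illustration, a reduced-footprint invocation along those lines might look like the sketch below (all values are examples rather than specific recommendations, and model_small is just a placeholder output folder):

python3 -m sockeye.train -d training_data -vs dev.BPE.de -vt dev.BPE.en --shared-vocab \
    --num-layers 4:4 --transformer-model-size 256 --num-embed 256 \
    --batch-type sentence --batch-size 128 \
    -o model_small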

@MrRaghav
Author

MrRaghav commented Jul 8, 2020

Hello, thank you for your suggestion. Actually, I started working on machine translation just a few days ago and wanted to try all the possible scenarios before replying to you.
Before contacting the developers, I referred to deepinsight/insightface#257 and had already tried reducing the default batch size from 4096 to 2048, 1024, 512 and several other values (multiples of the 2-3 GPUs I used to allot for the job). In all these cases, sockeye.train failed after 2-3 minutes of running.

But yesterday I found one combination which 'seems' to have fixed the out-of-memory issue. Because of this, I haven't uninstalled the other versions of mxnet (as you suggested) for the time being.

  1. I tried with 5 GPUs and reduced the batch size to 200

  2. The following parameters of sockeye.train worked okay: --shared-vocab --num-embed 512 --batch-type sentence --batch-size 200 --num-layers 6:6 --transformer-model-size 512 --device-ids -5 --max-checkpoints 3, and it ran for ~33 minutes.

  3. It didn't report any memory issue, but it raised a new error:
    [ERROR:root] Uncaught exception
    Traceback (most recent call last):
    File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "main", mod_spec)
    File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
    File "/home/ username/.local/lib/python3.7/site-packages/sockeye/train.py", line 997, in
    main()
    File "/home/ username/.local/lib/python3.7/site-packages/sockeye/train.py", line 764, in main
    train(args)
    File "/home/ username/.local/lib/python3.7/site-packages/sockeye/train.py", line 992, in train
    training_state = trainer.fit(train_iter=train_iter, validation_iter=eval_iter, checkpoint_decoder=cp_decoder)
    File "/home/ username/.local/lib/python3.7/site-packages/sockeye/training.py", line 264, in fit
    val_metrics = self._evaluate(self.state.checkpoint, validation_iter, checkpoint_decoder)
    File "/home/ username/.local/lib/python3.7/site-packages/sockeye/training.py", line 378, in _evaluate
    decoder_metrics = checkpoint_decoder.decode_and_evaluate(output_name=output_name)
    File "/home/ username/.local/lib/python3.7/site-packages/sockeye/checkpoint_decoder.py", line 176, in decode_and_evaluate
    references=self.target_sentences),
    File "/home/ username/.local/lib/python3.7/site-packages/sockeye/evaluate.py", line 57, in raw_corpus_chrf
    return sacrebleu.corpus_chrf(hypotheses, references, order=sacrebleu.CHRF_ORDER, beta=sacrebleu.CHRF_BETA,
    AttributeError: module 'sacrebleu' has no attribute 'CHRF_ORDER'
    learning rate from lr_scheduler has been overwritten by learning_rate in optimizer.

  4. I have checked it and it doesn't seem to be related to out of memory. However, a similar issue is mentioned in a PyTorch-based project: "I got AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER'" (facebookresearch/fairseq#2049).

  5. I have the following versions of sacrebleu, sockeye and mxnet:
    sacrebleu 1.4.10
    sockeye 2.1.7
    mxnet 1.6.0
    mxnet-cu101mkl 1.6.0
    mxnet-mkl 1.6.0

  6. I don't think opening random issues in every repository is a good idea, but I can't find any such issue/solution in the issue sections of sockeye, mxnet, or sacrebleu.

I would request you to spare a few minutes and let me know if I missed anything.

@szha
Member

szha commented Jul 8, 2020

On point 1:

    I tried with 5 GPUs and reduced the batch size to 200

Due to the hardware and programming model design in CUDA, it's a good idea to always use a multiple of 32 as the batch size.

On point 3: that looks like an integration and compatibility issue.
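
Applied to the configuration that worked above, that would mean something like --batch-size 192 or 224 instead of 200. An illustrative sketch (values are examples only):

python3 -m sockeye.train -s trained.BPE.de -t trained.BPE.en -vs dev.BPE.de -vt dev.BPE.en --shared-vocab \
    --num-embed 512 --batch-type sentence --batch-size 192 --num-layers 6:6 \
    --transformer-model-size 512 --device-ids -5 --max-checkpoints 3 -o model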

@fhieber
Contributor

fhieber commented Jul 8, 2020

Point 3: sacrebleu 1.4.10 requires a newer version of Sockeye. We recently published a newer version on PyPI that is compatible with sacrebleu 1.4.10.
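
One way to align the versions would be something like the following sketch (pick one of the two options):

# either upgrade Sockeye to a release that supports sacrebleu 1.4.10 ...
pip3 install --upgrade sockeye
# ... or pin sacrebleu to an older release that works with sockeye 2.1.7
pip3 install 'sacrebleu==1.4.3'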

@MrRaghav
Author

Hello, sorry for the late reply. I was working on your suggestions and used sacrebleu version 1.4.3 to get a successful model with sockeye 2.1.7.

The machine translation model was built successfully. I was able to run the sockeye.translate command, but the translation results are not up to the mark. I will work on it.

Thank you so much for your time. I am closing this issue.
