Out of memory issue while using MXNet with Sockeye #18662
@MrRaghav thanks for creating the issue. What model of GPU are you using, and how much GPU memory does it have? For reference, from MXNet's environment-variable documentation: "Round: a memory pool that always rounds the requested memory size and allocates memory of the rounded size. MXNET_GPU_MEM_POOL_ROUND_LINEAR_CUTOFF defines how to round up a memory size. Caching and allocating buffered memory works in the same way as the naive memory pool."
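The rounding behaviour quoted above can be sketched in a few lines. This is an illustrative model only (not MXNet source), assuming the documented behaviour: requested sizes below 2**cutoff are rounded up to the next power of two, and larger sizes are rounded up to a multiple of 2**cutoff (the cutoff is MXNET_GPU_MEM_POOL_ROUND_LINEAR_CUTOFF, default 24).

```python
# Sketch of the "Round" memory pool's size rounding (illustrative, not
# MXNet source). Sizes below 2**cutoff round up to the next power of two;
# larger sizes round up to the next multiple of 2**cutoff.
def rounded_alloc_size(requested: int, cutoff: int = 24) -> int:
    if requested <= 0:
        raise ValueError("requested size must be positive")
    linear_unit = 1 << cutoff          # 2**cutoff bytes
    if requested < linear_unit:
        size = 1                       # next power of two >= requested
        while size < requested:
            size <<= 1
        return size
    # round up to the next multiple of 2**cutoff
    return ((requested + linear_unit - 1) // linear_unit) * linear_unit

print(rounded_alloc_size(3_000_000))   # 4194304  (next power of two)
print(rounded_alloc_size(50_000_000))  # 50331648 (next multiple of 2**24)
```

Rounding trades some wasted bytes for a much higher cache-hit rate in the pool, since many slightly different request sizes map to the same bucket.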
Hello, please find the information in the following points:
But I still got the same error.
@fhieber, do you have a recommendation on how to run Sockeye on the above GPU?
Lowering the batch size should definitely allow you to train a model. You could also try lowering the size of the model. I am also not sure whether your output of
Hello, thank you for your suggestion. Actually, I started working on machine translation only a few days ago and wanted to try all the possible scenarios before replying to you. Yesterday I found one combination which 'seems' to have fixed the out-of-memory issue. Because of this, I haven't uninstalled the other versions of MXNet (as you suggested) for the time being.
Could you spare a few minutes and let me know if I missed anything?
On point 1: due to the hardware and the programming model design in CUDA, it is a good idea to always use a batch size that is a multiple of 32.
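The "multiple of 32" advice comes from the GPU warp size: NVIDIA GPUs schedule threads in warps of 32, so batch sizes that are multiples of 32 avoid partially filled warps. A tiny illustrative helper (hypothetical, not a Sockeye API) that pads a batch size up to the next warp multiple:

```python
# Pad a batch size up to the next multiple of the warp size (32 on
# NVIDIA GPUs). Illustrative helper only; not part of Sockeye.
def pad_to_warp_multiple(batch_size: int, warp: int = 32) -> int:
    return ((batch_size + warp - 1) // warp) * warp

print(pad_to_warp_multiple(100))  # 128
print(pad_to_warp_multiple(64))   # 64
```

When memory is tight, the same arithmetic can of course be used to round a batch size down instead of up.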
Point 3: sacrebleu 1.4.10 requires a newer version of Sockeye. We recently published a newer version on PyPI that is compatible with sacrebleu 1.4.10.
Hello, sorry for the late reply. I was working on your suggestions and used sacrebleu 1.4.3 to build a model successfully with Sockeye 2.1.7. I was able to run the sockeye.translate command, but the translated results are not up to the mark yet; I will keep working on it. Thank you so much for your time. I am closing this issue.
Description
When I run the sockeye.train command with MXNet 1.6.0, it reports two pieces of information in the logs:
Basically, I submit sockeye.train as a job on my server and its output is: Run time 00:06:03, FAILED, ExitCode 1
The software versions are as follows:
Error Message
To Reproduce
Sockeye 2.1.7 calls MXNet 1.6.0 (installed for CUDA).
Steps to reproduce
python3 -m sockeye.train -d training_data -vs dev.BPE.de -vt dev.BPE.en --shared-vocab -o model
What have you tried to solve it?
Environment
We recommend using our script for collecting the diagnostic information. Run the following command and paste the output below:
username@server:~/username/sockeye$ curl --retry 10 -s https://raw.githubusercontent.com/dmlc/gluon-nlp/master/tools/diagnose.py | python
----------Python Info----------
('Version :', '2.7.16')
('Compiler :', 'GCC 8.3.0')
('Build :', ('default', 'Oct 10 2019 22:02:15'))
('Arch :', ('64bit', 'ELF'))
------------Pip Info-----------
('Version :', '20.1.1')
('Directory :', '/home/username/.local/lib/python2.7/site-packages/pip')
----------MXNet Info-----------
No MXNet installed.
----------System Info----------
('Platform :', 'Linux-4.19.0-9-amd64-x86_64-with-debian-10.4')
('system :', 'Linux')
('node :', 'server')
('release :', '4.19.0-9-amd64')
('version :', '#1 SMP Debian 4.19.118-2 (2020-04-29)')
----------Hardware Info----------
('machine :', 'x86_64')
('processor :', '')
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
Stepping: 1
CPU MHz: 1200.726
CPU max MHz: 2900.0000
CPU min MHz: 1200.0000
BogoMIPS: 4400.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 30720K
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts flush_l1d
----------Network Test----------
Setting timeout: 10
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0057 sec, LOAD: 0.4408 sec.
Timing for D2L: http://d2l.ai, DNS: 0.0010 sec, LOAD: 0.0191 sec.
Timing for FashionMNIST: https://repo.mxnet.io/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0009 sec, LOAD: 0.6619 sec.
Error open Conda: https://repo.continuum.io/pkgs/free/, HTTP Error 403: Forbidden, DNS finished in 0.00109004974365 sec.
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0012 sec, LOAD: 0.7398 sec.
Timing for GluonNLP: http://gluon-nlp.mxnet.io, DNS: 0.0012 sec, LOAD: 0.3613 sec.
Timing for D2L (zh-cn): http://zh.d2l.ai, DNS: 0.0011 sec, LOAD: 0.0085 sec.
Timing for GluonNLP GitHub: https://github.com/dmlc/gluon-nlp, DNS: 0.0000 sec, LOAD: 1.2439 sec.