Missing output from AlphaFold prediction #376
It seems the execution may have failed during the model prediction - the log you shared says that the process was killed. I cannot see any more information in the log about what caused the process to be killed. It might be worth trying to run the sequence again (there is no need to download the databases again). It is also worth making sure you are using the latest AlphaFold version (https://github.com/deepmind/alphafold/releases/tag/v2.1.2, released on 28th January), as this release improved memory utilisation, which can be a problem during model prediction.
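For anyone landing here later, a minimal sketch of updating and re-running, assuming a git clone of the repo (paths are illustrative; flag names as in the v2.1.2 run_docker.py; the databases under --data_dir are reused as-is, nothing is re-downloaded):

```bash
cd ~/alphafold                                    # illustrative clone location
git fetch --tags
git checkout v2.1.2                               # release with the improved memory utilisation
docker build -f docker/Dockerfile -t alphafold .  # rebuild the image (command from the README)
python3 docker/run_docker.py \
  --fasta_paths=/path/to/target.fasta \
  --max_template_date=2020-05-14 \
  --data_dir=/path/to/existing/databases          # existing databases, no re-download
```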
Hi, thanks for the reply. The command I used and the output I got are in the attached screenshots, taken on the day I ran the Docker image (3 March). The error was reported at 18:29; a screenshot of the docker.log file is also attached. I think it might still be an OOM problem? As for the version, I cloned from GitHub on 11 Feb, so it should be v2.1.2.
Thanks @liuqs1990 for the comprehensive traces. Am I right in thinking you are running under WSL (Windows Subsystem for Linux)? We would recommend that you try running on native Linux if possible, as there seem to be some memory issues with running CUDA in WSL, and we are not able to support WSL. There are several suggestions in the thread for #197 which may help.
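Aside for other readers: a quick way to check whether a shell is running under WSL rather than native Linux, relying on the fact that WSL kernels report "microsoft" in the kernel release string:

```bash
uname -r      # e.g. 5.10.x-microsoft-standard-WSL2 under WSL2
grep -qi microsoft /proc/version && echo "running under WSL" || echo "native Linux"
```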
Hi @liuqs1990 @andycowie, same issue here.
I noticed when it came to the 4th prediction and minimization, GPU memory was almost full, and then it was killed.
[nvidia-smi output]
I also tried the method mentioned in issue #197, commenting out the suggested lines in run_docker.py (see the sketch below for locating them).
It then showed the same error @Ikajiro reported in #197, perhaps because my sequence is long (~900 residues).
The same issue also appears in #130.
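For reference, the run_docker.py lines that the #197 workaround comments out are the memory-related environment variables described in the AlphaFold README (TF_FORCE_UNIFIED_MEMORY and XLA_PYTHON_CLIENT_MEM_FRACTION). Exact line numbers vary between releases, so it is safer to locate them by name:

```bash
# Run inside the cloned repo; prints the memory-related env var lines
# together with their line numbers in your checkout.
grep -n -E 'TF_FORCE_UNIFIED_MEMORY|XLA_PYTHON_CLIENT_MEM_FRACTION' docker/run_docker.py
```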
I have the same problem. I have 4 A100 GPUs but still cannot solve it.
I had the same problem.
This was hopefully fixed in v2.3.0. Feel free to re-open if it is still a problem.
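Before re-opening, it may be worth confirming which release your clone is on (path illustrative, assuming a git clone):

```bash
git -C ~/alphafold describe --tags   # should print v2.3.0 or later
```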
Hi there,
I used Docker to run AlphaFold. First I checked the GPU:
(alphafold) qiushi@Qiushi-AWM15R3-2K9L9II:~/af2/alphafold$ nvidia-smi
[nvidia-smi output]
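For what it's worth, a compact nvidia-smi query that reports the key fields as plain text (standard nvidia-smi options), which is easier to paste into an issue than the full table:

```bash
nvidia-smi --query-gpu=name,driver_version,memory.used,memory.total --format=csv
```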
Then I ran:
python3 docker/run_docker.py \
  --fasta_paths=/mnt/h/AF_prediction/T1083.fasta \
  --max_template_date=2020-05-14 \
  --model_preset=monomer \
  --db_preset=reduced_dbs \
  --data_dir=/mnt/h/qiushi_AF2_db \
  --gpu_devices=0
The output:
[run log; it ends with the process being killed]
I think the final lines indicate something went wrong there? Maybe due to memory?
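One way to check the memory theory while the prediction runs: a "Killed" message usually means the Linux kernel OOM killer reclaimed host RAM, not GPU memory (standard tools, intervals illustrative):

```bash
# In a second terminal during the run:
watch -n 5 nvidia-smi        # GPU memory usage
watch -n 5 free -h           # host RAM and swap
# After the process dies, confirm whether the kernel OOM killer fired:
dmesg | grep -i 'killed process'
```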
The output directory only contains:
features.pkl
the msas folder
There is no predicted .pdb in the output. Any suggestions?
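For comparison, the AlphaFold README lists what a completed run writes to the output directory; a quick check, assuming run_docker.py's default --output_dir (adjust if you overrode it):

```bash
# A successful run also contains ranked_*.pdb, relaxed_model_*.pdb,
# unrelaxed_model_*.pdb, result_model_*.pkl, ranking_debug.json and timings.json.
ls /tmp/alphafold/T1083      # /tmp/alphafold is the default output directory
```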
Thanks in advance!
The same issue also happened here: #130