I am trying to run the fc_phase tutorial, but I got this error. I tried to fix the bug in the config files, but it didn't work.
I would be grateful if you could help me understand the error and how to fix it.
Thanks in advance.
falcon-phase 1.0.0
falcon-kit 1.3.0
pypeflow 2.2.0
[INFO]Setup logging from file "None".
[ERROR]job.defaults.submit is not set; pwatcher_type=fs_based; but job_type is not set. Maybe try "job_type=local" first.
[INFO]Using config=
falcon-phase 1.0.0
falcon-kit 1.3.0
pypeflow 2.2.0
[INFO]Setup logging from file "None".
[INFO]Using config=
{'General': {},
'Phase': {'cns_h_ctg_fasta': './4-polish/cns-output/cns_h_ctg.fasta',
'cns_p_ctg_fasta': './4-polish/cns-output/cns_p_ctg.fasta',
'enzyme': '"GATC"',
'iterations': '10000000',
'min_aln_len': '5000',
'output_format': 'pseudohap',
'reads_1': '/BiO/Research/Project1/KOGIC-KOREF_v2_PacBio-Genome-2018-11/Workspace/changhan/falcon_tutorial/F1_bull_test_data/F1_bull_test.HiC_R1.fastq.gz',
'reads_2': '/BiO/Research/Project1/KOGIC-KOREF_v2_PacBio-Genome-2018-11/Workspace/changhan/falcon_tutorial/F1_bull_test_data/F1_bull_test.HiC_R2.fastq.gz'},
'job.defaults': {'JOB_QUEUE': 'all.q@tiger',
'MB': '3200',
'NPROC': '2',
'job_type': 'sge',
'njobs': '100',
'pwatcher_type': 'blocking',
'submit': 'qsub -S /bin/bash -sync y -V \\\n-q ${JOB_QUEUE} \\\n-N ${JOB_NAME} \\\n-o "${JOB_STDOUT}" \\\n-e "${JOB_STDERR}" \\\n-pe smp ${NPROC} \\\n"${JOB_SCRIPT}"',
'use_tmpdir': False},
'job.step.phase.alignment': {},
'job.step.phase.placement': {}}
[INFO]PATH=/BiO/Access/changhan/anaconda3/envs/denovo_asm/bin:/BiO/Access/changhan/anaconda3/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/lib/jvm/java-8-oracle/bin:/usr/lib/jvm/java-8-oracle/db/bin:/usr/lib/jvm/java-8-oracle/jre/bin:/BiO/BioTools/root5.34.26/bin:/opt/SSM/SuperDoctor5
[INFO]$('which which')
/usr/bin/which
[INFO]$('which samtools')
/BiO/Access/changhan/anaconda3/envs/denovo_asm/bin/samtools
[INFO]$('which nucmer')
/BiO/Access/changhan/anaconda3/envs/denovo_asm/bin/nucmer
[INFO]$('which show-coords')
/BiO/Access/changhan/anaconda3/envs/denovo_asm/bin/show-coords
[INFO]$('which delta-filter')
/BiO/Access/changhan/anaconda3/envs/denovo_asm/bin/delta-filter
[INFO]$('which bwa')
/BiO/Access/changhan/anaconda3/envs/denovo_asm/bin/bwa
[INFO]$('which bedtools')
/BiO/Access/changhan/anaconda3/envs/denovo_asm/bin/bedtools
[INFO]$ nucmer --version >
[INFO]nucmer ['4', '0', '0beta2'] is >= 4.0.0
[INFO]$ bwa mem >
[INFO]$ show-coords -h >
[INFO]$ delta-filter -h >
[INFO]$('bedtools --version')
bedtools v2.28.0
[INFO]$ samtools >
[INFO]samtools ['1', '9'] is >= 1.3
[INFO]In simple_pwatcher_bridge, pwatcher_impl=<module 'pwatcher.fs_based' from '/BiO/Access/changhan/anaconda3/envs/denovo_asm/lib/python2.7/site-packages/pwatcher/fs_based.pyc'>
[INFO]job_type='local', (default)job_defaults={'pwatcher_type': 'fs_based', 'use_tmpdir': False, 'MB': '3200', 'job_type': 'local', 'NPROC': '2', 'njobs': '100'}, use_tmpdir=False, squash=False, job_name_style=0
[INFO]Setting max_jobs to 100; was None
[INFO]This is where FALCON-Phase workflow should be initialized and run.
Traceback (most recent call last):
  File "/BiO/Access/changhan/anaconda3/envs/denovo_asm/bin/fc_phase.py", line 11, in <module>
    load_entry_point('falcon-phase==1.0.0', 'console_scripts', 'fc_phase.py')()
  File "/BiO/Access/changhan/anaconda3/envs/denovo_asm/lib/python2.7/site-packages/falcon_phase/mains/start_falcon_phase.py", line 26, in main
    falcon_phase.run(**vars(args))
  File "/BiO/Access/changhan/anaconda3/envs/denovo_asm/lib/python2.7/site-packages/falcon_phase/falcon_phase.py", line 70, in run
    falcon_phase_all(config)
  File "/BiO/Access/changhan/anaconda3/envs/denovo_asm/lib/python2.7/site-packages/falcon_phase/falcon_phase.py", line 21, in falcon_phase_all
    tasks_falcon_phase.run_workflow(wf, config, rule_writer)
  File "/BiO/Access/changhan/anaconda3/envs/denovo_asm/lib/python2.7/site-packages/falcon_phase/tasks/tasks_falcon_phase.py", line 322, in run_workflow
    min_aln_len, default_nproc, config, rule_writer)
  File "/BiO/Access/changhan/anaconda3/envs/denovo_asm/lib/python2.7/site-packages/falcon_phase/tasks/tasks_falcon_phase.py", line 267, in add_placement_as_parallel_tasks
    job_dict=config['job.step.phase.placement'],
TypeError: gen_parallel_tasks() got multiple values for keyword argument 'run_dict'
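As an aside, the first [ERROR] at the top of the log ("job.defaults.submit is not set ... Maybe try \"job_type=local\" first") is separate from this traceback and was already resolved by the SGE config shown in the second run. The alternative the error message suggests, job_type=local, might look roughly like this (resource values mirrored from the pasted config; this is a sketch, not the tutorial's canonical settings):

```ini
[job.defaults]
# "Maybe try job_type=local first", per the error message:
# local execution needs no submit string or queue.
job_type = local
# resource values mirrored from the SGE config in the log
NPROC = 2
MB = 3200
njobs = 100
```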
Have you updated to the latest version (conda update package)? Yes
Have you updated the complete env by running conda update --all? Yes
Have you ensured that your channel priorities are set up according to the bioconda recommendations at https://bioconda.github.io/#set-up-channels? Yes
Not sure of the problem, but it's definitely old code. Try conda update --all first. (You might want to do that in a fresh env, to avoid overwriting whatever you have, in case it's important.)
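The suggestion above can be sketched as a shell plan. The env name falcon-fresh is illustrative, the channel ordering follows the bioconda link in the checklist, and the package name is taken from the issue header; the commands are printed rather than executed so the plan can be reviewed first.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the suggested recovery: rebuild falcon_phase in a fresh
# conda env instead of updating the existing env in place.
set -eu

ENV_NAME=falcon-fresh   # hypothetical name for the new environment

# The plan is echoed, not run; remove the printf/pipe it to bash to execute.
plan=(
  # channel priorities per https://bioconda.github.io/#set-up-channels
  "conda config --add channels defaults"
  "conda config --add channels bioconda"
  "conda config --add channels conda-forge"
  # fresh env, then bring every package in it up to date
  "conda create -n ${ENV_NAME} falcon_phase"
  "conda update --all -n ${ENV_NAME}"
)
printf '%s\n' "${plan[@]}"
```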
Operating system
Ubuntu 14.04.1
Package name
falcon_phase
Conda environment