updates for compatibility with input zstacks #48

Draft: wants to merge 18 commits into base `main`
5 changes: 2 additions & 3 deletions README.md
@@ -18,6 +18,7 @@ Takes TIF images (tiled or prestitched) and outputs a validated BIDS Microscopy
- Python >= 3.11
- Lightsheet data:
- Raw Ultramicroscope Blaze OME TIFF files (include `blaze` in the acquisition tag)
+   - can be 2D or 3D TIFF files
- Prestitched TIFF files (include `prestitched` in the acquisition tag)


@@ -63,10 +64,8 @@ or for snakemake<8.0, use:
snakemake -c all --use-singularity
```

- Note: if you run the workflow on a system with large memory, you will need to set the heap size for the stitching and fusion rules. This can be done with e.g.: `--set-resources bigstitcher_spark_stitching:mem_mb=60000 bigstitcher_spark_fusion:mem_mb=100000`
+ Note: if you run the workflow on a system with large memory, you will need to set the heap size for the stitching and fusion rules. This can be done with e.g.: `--set-resources bigstitcher_stitching:mem_mb=60000 bigstitcher_fusion:mem_mb=100000`

7. If you want to run the workflow using a batch job submission server, please see the executor plugins here: https://snakemake.github.io/snakemake-plugin-catalog/


Alternate usage of this workflow (making use of conda) is described in the [Snakemake Workflow Catalog](https://snakemake.github.io/snakemake-workflow-catalog?repo=khanlab/SPIMprep).
<!--intro-end-->
10 changes: 4 additions & 6 deletions config/config.yml
@@ -1,6 +1,5 @@
datasets: 'config/datasets.tsv'


root: 'bids' # can use a s3:// or gcs:// prefix to write output to cloud storage
work: 'work'

@@ -13,6 +12,7 @@ cores_per_rule: 32
#import wildcards: tilex, tiley, channel, zslice (and prefix - unused)
import_blaze:
raw_tif_pattern: "{prefix}_Blaze[{tilex} x {tiley}]_C{channel}_xyz-Table Z{zslice}.ome.tif"
+ raw_tif_pattern_zstack: "{prefix}_Blaze[{tilex} x {tiley}]_C{channel}.ome.tif"
intensity_rescaling: 0.5 #raw images seem to be at the upper end of uint16 (over-saturated) -- causes wrapping issues when adjusting with flatfield correction etc. this rescales the raw data as it imports it..

import_prestitched:
@@ -36,12 +36,10 @@ bigstitcher:
downsample_in_x: 4
downsample_in_y: 4
downsample_in_z: 1
- method: "phase_corr" #unused
- methods: #unused
+ methods: #unused, only for reference
phase_corr: "Phase Correlation"
optical_flow: "Lucas-Kanade"
filter_pairwise_shifts:
- enabled: 1 #unused
min_r: 0.7
max_shift_total: 50
global_optimization:
@@ -64,7 +62,7 @@
block_size_factor_z: 32

ome_zarr:
- desc: sparkstitchedflatcorr
+ desc: stitchedflatcorr
max_downsampling_layers: 5 # e.g. 4 levels: { 0: orig, 1: ds2, 2: ds4, 3: ds8, 4: ds16}
rechunk_size: #z, y, x
- 1
@@ -155,5 +153,5 @@ report:


containers:
- spimprep: 'docker://khanlab/spimprep-deps:main'
+ spimprep: 'docker://khanlab/spimprep-deps:v0.1.0'
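The two `import_blaze` patterns in the config above use Snakemake-style `{wildcard}` placeholders; the new `raw_tif_pattern_zstack` simply drops the per-slice `Z{zslice}` component, so a single 3D OME-TIFF holds the whole stack per tile/channel. A rough sketch of how such a pattern could be matched against a filename (the `pattern_to_regex` helper and the sample filename are hypothetical, for illustration only):

```python
import re

# Wildcard patterns copied from config.yml; the zstack variant has no Z{zslice}
PATTERN_2D = "{prefix}_Blaze[{tilex} x {tiley}]_C{channel}_xyz-Table Z{zslice}.ome.tif"
PATTERN_ZSTACK = "{prefix}_Blaze[{tilex} x {tiley}]_C{channel}.ome.tif"

def pattern_to_regex(pattern: str) -> re.Pattern:
    """Convert a {wildcard}-style pattern into a regex with named groups."""
    regex = ""
    pos = 0
    for m in re.finditer(r"\{(\w+)\}", pattern):
        regex += re.escape(pattern[pos:m.start()])  # escape literal text ([, ], .)
        regex += f"(?P<{m.group(1)}>.+?)"           # non-greedy named group per wildcard
        pos = m.end()
    regex += re.escape(pattern[pos:])
    return re.compile(regex + "$")

fn = "sample1_Blaze[00 x 01]_C00.ome.tif"
match = pattern_to_regex(PATTERN_ZSTACK).match(fn)
print(match.groupdict())
# {'prefix': 'sample1', 'tilex': '00', 'tiley': '01', 'channel': '00'}
```

A zstack filename like the one above does not match the per-slice 2D pattern, which is presumably how the importer can tell the two acquisition layouts apart.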

10 changes: 5 additions & 5 deletions resources/qc/ff_html_temp.html
@@ -43,13 +43,13 @@ <h2>Chunk - {{ chunk_num }} Channel - {{ loop.index0 }}</h2>
<tr>
{%- for image in channel %}
<td>
<img src="{{ image.img_corr }}"></img>
<h3>Corrected</h3>
<img src="{{ image.img_uncorr }}"></img>
<h3>Uncorrected</h3>
<p>Slice-{{ image.slice }}</p>
</td>
<td>
<img src="{{ image.img_uncorr }}"></img>
<h3>Uncorrected</h3>
<img src="{{ image.img_corr }}"></img>
<h3>Corrected</h3>
<p>Slice-{{ image.slice }}</p>
</td>
{%- endfor %}
@@ -117,4 +117,4 @@ <h3>Uncorrected</h3>
}
})
</script>
</body>
</body>
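The QC template above pairs a corrected and an uncorrected flatfield image per slice inside a Jinja2 loop. A minimal, hypothetical sketch of how such a context renders (the field names `img_corr`, `img_uncorr`, and `slice` follow the template; the filenames are made up):

```python
from jinja2 import Template

# Stripped-down version of the per-channel loop in ff_html_temp.html
tmpl = Template(
    "{% for image in channel %}"
    "Corrected: {{ image.img_corr }} | Uncorrected: {{ image.img_uncorr }}"
    " (Slice-{{ image.slice }})\n"
    "{% endfor %}"
)

# One entry per sampled slice; real paths would point at QC thumbnails
channel = [
    {"img_corr": "corr_0.png", "img_uncorr": "uncorr_0.png", "slice": 0},
    {"img_corr": "corr_5.png", "img_uncorr": "uncorr_5.png", "slice": 5},
]
print(tmpl.render(channel=channel))
```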
2 changes: 1 addition & 1 deletion workflow/Snakefile
@@ -33,7 +33,7 @@ rule all:
input:
get_all_targets(),
get_bids_toplevel_targets(),
-        get_qc_targets(), #need to skip this if using prestitched
+        # get_qc_targets(), #need to skip this if using prestitched
localrule: True


45 changes: 45 additions & 0 deletions workflow/lib/dask_image.py
@@ -0,0 +1,45 @@
from __future__ import annotations

import os

# tifffile is an optional dependency; guard the import as dask_image does
try:
    import tifffile
except (AttributeError, ImportError):
    pass

from dask.array.core import Array
from dask.base import tokenize


def add_leading_dimension(x):
    return x[None, ...]


def imread(fn, page):
    return tifffile.imread(fn, key=page)


def imread_pages(filename, preprocess=None):
    """Lazily read every page of a multi-page TIFF into a dask array."""
    tif = tifffile.TiffFile(filename)
    pages = list(range(len(tif.pages)))
    name = "imread-%s" % tokenize(filename, os.path.getmtime(filename))

    # Read one page eagerly to learn the per-chunk shape and dtype
    sample = tif.pages[0].asarray()
    if preprocess:
        sample = preprocess(sample)

    keys = [(name, i) + (0,) * len(sample.shape) for i in pages]
    if preprocess:
        values = [
            (add_leading_dimension, (preprocess, (imread, filename, i)))
            for i in pages
        ]
    else:
        values = [(add_leading_dimension, (imread, filename, i)) for i in pages]
    dsk = dict(zip(keys, values))

    # One chunk per page along the new leading axis
    chunks = ((1,) * len(pages),) + tuple((d,) for d in sample.shape)

    return Array(dsk, name, chunks, sample.dtype)
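The `imread_pages` helper builds a raw dask graph by hand: one key per TIFF page, each task wrapping a page read in a new leading axis. The same construction can be illustrated without any TIFF I/O (the `fake_page` function stands in for reading a page; everything here is illustrative, not part of the PR):

```python
import numpy as np
from dask.array.core import Array
from dask.base import tokenize

def fake_page(i):
    # Stand-in for tifffile.imread(fn, key=i): page i is a 4x5 array of i's
    return np.full((4, 5), i, dtype=np.uint16)

def add_leading_dimension(x):
    return x[None, ...]

n_pages = 3
sample = fake_page(0)
name = "fakeread-%s" % tokenize(n_pages, sample.shape)

# One graph key per page; chunk index is (page, 0, 0) for a 3D result
keys = [(name, i, 0, 0) for i in range(n_pages)]
values = [(add_leading_dimension, (fake_page, i)) for i in range(n_pages)]
dsk = dict(zip(keys, values))

# Chunks: size-1 along the page axis, full size along y and x
chunks = ((1,) * n_pages,) + tuple((d,) for d in sample.shape)
arr = Array(dsk, name, chunks, sample.dtype)

print(arr.shape)         # (3, 4, 5)
print(arr[2].compute())  # a 4x5 array of twos; only page 2's task runs
```

Nothing is read until `.compute()` (or a slice of it) is requested, which is what lets the workflow treat a large z-stack as a lazy 3D volume.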
56 changes: 0 additions & 56 deletions workflow/macros/AutostitchMacro.ijm

This file was deleted.

47 changes: 0 additions & 47 deletions workflow/macros/FuseImageMacroZarr.ijm

This file was deleted.
