Is your feature request related to a problem? Please describe.
We've found that in some cases, batching exists solely to load more image data at once. We then loop over each FOV in that batch anyway, so the batching adds no significant optimization.
This also causes issues for cohorts with varying image sizes, because loading a mixed batch into a single 1024x1024 or 2048x2048 array raises dimension errors.
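To illustrate the problem, here is a minimal sketch of the dimension error using hypothetical FOV arrays (not actual cohort data): stacking differently sized images into one batch array fails, while per-FOV processing avoids the shared-shape requirement entirely.

```python
import numpy as np

# Hypothetical cohort with mixed image sizes
fov_a = np.zeros((1024, 1024))
fov_b = np.zeros((2048, 2048))

# Batched loading requires every FOV to share one shape
try:
    batch = np.stack([fov_a, fov_b])
except ValueError as err:
    print(f"batched loading fails: {err}")

# Per-FOV processing never needs a common batch shape
for name, fov in [("fov_a", fov_a), ("fov_b", fov_b)]:
    print(name, fov.shape)
```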
Describe the solution you'd like
We've already removed batching from `generate_deepcell_input`; the following functions also need to be modified in this regard:

- `data_utils.generate_and_save_pixel_cluster_masks` (and by extension, `data_utils.generate_pixel_cluster_mask`)
- `data_utils.generate_and_save_cell_cluster_masks` (and by extension, `data_utils.label_cells_by_cluster`)
- `marker_quantification.generate_cell_table`
- `spatial_analysis.batch_channel_spatial_enrichment` and `spatial_analysis.batch_cluster_spatial_enrichment`: talked to @ackagel about this; we can condense these to a per-FOV basis with negligible speed difference
- `spatial_analysis.create_neighborhood_matrix` (and by extension, the neighborhood analysis notebook): currently, this process requires precomputing all the distance matrices prior to running the function, plus an additional per-FOV loop during neighborhood analysis. We should condense all of this down to one per-FOV loop in the neighborhood matrix function.
- `spatial_analysis_utils.calc_dist_matrix`: after the neighborhood analysis process is updated, this function will no longer receive a batch of FOVs to process, so it should be condensed to process just one FOV.
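The shared refactor across all of these functions can be sketched as follows. `load_fov` and `process_fov` are hypothetical placeholders for the real per-FOV loading and processing steps, not functions in the codebase; the point is that each FOV is loaded and handled independently, so image sizes are free to differ.

```python
import numpy as np

def load_fov(fov_name):
    # Stand-in for per-FOV image loading; shape may vary between FOVs
    size = 1024 if fov_name.endswith("small") else 2048
    return np.zeros((size, size))

def process_fov(fov_name, fov_data):
    # Stand-in for the per-FOV work (mask generation, dist matrix, etc.)
    return fov_name, fov_data.shape

# One per-FOV loop replaces "load batch array, then loop over the batch"
results = [process_fov(name, load_fov(name)) for name in ["fov0_small", "fov1"]]
print(results)
```

Since the old code looped over each FOV after batch loading anyway, this change mostly deletes the intermediate batch array rather than restructuring the computation.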