Improve cal_sum() performance, fixes #36
Using colSums instead of pmap and dropping the bad parallelisation attempt
joeroe committed Oct 17, 2024
1 parent 234fcb1 commit 14658d8
Showing 2 changed files with 3 additions and 11 deletions.
R/cal_aggregate.R (10 changes: 3 additions & 7 deletions)
@@ -16,10 +16,6 @@
 #'
 #' @return The summed probability distribution as a [cal] vector.
 #'
-#' @details
-#' This function can be evaluated in parallel using [future::plan()], which
-#' substantially speeds up the computation for large sets of dates.
-#'
 #' @family functions for aggregating calendar probability distributions
 #' @export
 #'
@@ -28,11 +24,11 @@
 #' cal_sum(shub1_cal)
 cal_sum <- function(x, range = cal_age_common(x), normalise = FALSE, ...) {
   # TODO: ensure x and range have the same era - or is this a job for cal validation?
   # TODO: normalise interpolated??
   x <- cal_interpolate(x, range)

-  pdens_sum <- furrr::future_pmap_dbl(cal_pdens(x), \(...) sum(..., na.rm = TRUE))
+  pdens_mat <- do.call(rbind, cal_pdens(x))
+  pdens_sum <- colSums(pdens_mat, na.rm = TRUE)
   if (isTRUE(normalise)) pdens_sum <- pdens_sum / sum(pdens_sum, na.rm = TRUE)

   new_cal(list(data.frame(age = cal_age(x)[[1]], pdens = pdens_sum)))
 }
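
For context, here is a minimal standalone sketch (not package code) of the difference between the two approaches: binding the per-date density vectors into a matrix and summing with colSums() replaces a pmap()-style loop that calls sum() once per calendar year. The toy data below stands in for cal_pdens(x) after cal_interpolate(); the sizes are illustrative assumptions, and purrr::pmap_dbl() is used as the sequential equivalent of the removed furrr::future_pmap_dbl() call.

# Toy stand-in for cal_pdens(x): one probability density vector per date,
# all interpolated onto the same calendar grid.
library(purrr)

n_dates <- 100
n_years <- 5000
pdens_list <- replicate(n_dates, runif(n_years), simplify = FALSE)

# Old approach (sequential form): one sum() call for every calendar year.
sum_pmap <- purrr::pmap_dbl(pdens_list, \(...) sum(..., na.rm = TRUE))

# New approach: a single vectorised pass over a matrix.
pdens_mat <- do.call(rbind, pdens_list)
sum_colsums <- colSums(pdens_mat, na.rm = TRUE)

all.equal(sum_pmap, sum_colsums)  # should be TRUE, up to floating-point tolerance

# Rough timing comparison
system.time(purrr::pmap_dbl(pdens_list, \(...) sum(..., na.rm = TRUE)))
system.time(colSums(do.call(rbind, pdens_list), na.rm = TRUE))

Summing in one vectorised pass also avoids the per-worker scheduling overhead that, per the commit message, made the earlier parallelisation attempt counter-productive for sums this cheap.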
man/cal_sum.Rd (4 changes: 0 additions & 4 deletions)

Some generated files are not rendered by default.
