
melt() is significantly outperformed by cbind() and stack() in R, 32x slower on a 25 x 100,000 dataframe #6981

Closed
Jaage opened this issue Feb 17, 2023 · 1 comment · Fixed by #7003
Assignees: ritchie46
Labels: bug (Something isn't working), rust (Related to Rust Polars)

Comments


Jaage commented Feb 17, 2023

Polars version checks

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of Polars.

Issue description

Both the eager and lazy implementations of melt() are heavily outperformed by an equivalent combination of cbind() and stack() in R. For a 25 x 1,000 dataframe, Rust took 3 ms and R took 4 ms; at 10,000 columns, Rust took 127 ms and R took 30 ms; at 100,000 columns, Rust took 16.25 s and R took 500 ms.

It seems there is a bug giving melt() significantly worse time complexity than expected; the gap becomes noticeable beyond roughly 1,000 columns, as the sketch below illustrates.
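
To make the scaling concrete, the same measurement can be repeated at a few widths. This is only a sketch: build_df(n) is a hypothetical helper that constructs the 25 x n DataFrame exactly as in the reproducible example below.

// Hedged sketch: time melt() at increasing column counts to expose the super-linear growth.
for n in [1_000usize, 10_000, 100_000] {
  let df = build_df(n); // hypothetical helper, built exactly as in the reproducible example below
  let value_cols: Vec<String> = (0..n).map(|i| i.to_string()).collect();
  let start = Instant::now();
  let _melted = df.melt(&["A", "B", "C", "D"], value_cols).unwrap();
  println!("n = {n}: {:?}", start.elapsed());
}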

Reproducible example

Here is the eager Rust implementation. It generates a 25 x n ndarray, converts it to a DataFrame, and melts it. I am currently compiling Polars like so: polars = { version = "0.27.2", features = ["concat_str", "lazy", "rank", "strings", "performant", "cse"] }

use std::time::Instant;
use polars::prelude::*;
use rayon::prelude::*;
use ndarray::Array2;
#[macro_use]
extern crate fstrings;

fn main() -> PolarsResult<()> {
  let n: usize = 100_000;
  // 25 x n matrix of zeros; the type annotation lets zeros() infer the element type.
  let arr: Array2<f64> = Array2::zeros((25, n));
  
  // Build a DataFrame with n numeric value columns named "0" .. "n-1".
  let mut df: DataFrame = DataFrame::new(
          arr.axis_iter(ndarray::Axis(1))
              .into_par_iter()
              .enumerate()
              .map(|(i, col)| {
                  Series::new(
                      &f!("{i}"),
                      col.to_vec()
                  )
              })
              .collect::<Vec<Series>>()
          ).unwrap();
  
  // Names of the n value columns to pass to melt().
  let sample_cols = (0..n).into_par_iter()
    .map(|l| format!("{}", l))
    .collect::<Vec<String>>();
      
  df.with_column(Series::new("A", &["1", "2", "3", "4", "5", "1", "2", "3", "4", "5", "1", "2", "3", "4", "5", "1", "2", "3", "4", "5", "1", "2", "3", "4", "5"]))?;
  df.with_column(Series::new("B", &["1", "1", "1", "1", "1", "2", "2", "2", "2", "2", "3", "3", "3", "3", "3", "4", "4", "4", "4", "4", "5", "5", "5", "5", "5"]))?;
  df.with_column(Series::new("C", &["1", "2", "3", "4", "5", "1", "2", "3", "4", "5", "1", "2", "2", "4", "5", "1", "2", "3", "4", "5", "1", "2", "3", "4", "5"]))?;
  df.with_column(Series::new("D", (0..df.shape().0 as i32).collect::<Vec<i32>>()))?;
  
  let start = Instant::now();
  let _df = df.melt(&["A", "B", "C", "D"], sample_cols).unwrap();
  let duration = start.elapsed();
  println!("{:?}", duration);
  Ok(())
}
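
The lazy variant that was also benchmarked is not shown above. A minimal sketch, assuming the MeltArgs-based LazyFrame::melt API of polars 0.27 (field names from memory, so they may differ slightly), would replace the timed block with:

// Hedged sketch of the lazy path; MeltArgs and LazyFrame::melt are assumed to come from the polars prelude.
let args = MeltArgs {
  id_vars: vec!["A".into(), "B".into(), "C".into(), "D".into()],
  value_vars: sample_cols.clone(),
  ..Default::default() // leave variable_name / value_name at their defaults
};

let start = Instant::now();
let _df = df.clone().lazy().melt(args).collect()?; // collect() forces execution so the timing is comparable
println!("lazy: {:?}", start.elapsed());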

Here is the R code which significantly outperforms it:

n = 100000
mx = matrix(0, 25, n)
dframe = data.frame(mx)
rownames(dframe) = 1:nrow(mx)
dframe$A = c("1", "2", "3", "4", "5", "1", "2", "3", "4", "5", "1", "2", "3", "4", "5", "1", "2", "3", "4", "5", "1", "2", "3", "4", "5")
dframe$B = c("1", "1", "1", "1", "1", "2", "2", "2", "2", "2", "3", "3", "3", "3", "3", "4", "4", "4", "4", "4", "5", "5", "5", "5", "5")
dframe$C = c("1", "2", "3", "4", "5", "1", "2", "3", "4", "5", "1", "2", "2", "4", "5", "1", "2", "3", "4", "5", "1", "2", "3", "4", "5")
dframe$D = 1:25

head(dframe, 5)
start_time = Sys.time()
melted_dframe = cbind(dframe[ncol(dframe)], dframe[ncol(dframe)-3], dframe[ncol(dframe)-2], dframe[ncol(dframe)-1], stack(dframe[1:(ncol(dframe)-4)]))
end_time = Sys.time()
print(end_time - start_time)
head(melted_dframe, 5)

Expected behavior

I was surprised to rewrite this in Rust only to find that R significantly outperforms it for large n. Ideally my n would be near 100,000, so at the moment I am considering calling Rust for some parts of the code, switching back to R for this step, then back to Rust, and finally back to R. The expected behavior is that melt() is at least competitive with the R approach at this scale.

Installed versions

polars = { version = "0.27.2", features = ["concat_str", "lazy", "rank", "strings", "performant", "cse"] }
@ritchie46 ritchie46 self-assigned this Feb 18, 2023
ritchie46 (Member) commented

Seems like accidental quadratic behavior. I will take a look.
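
"Accidental quadratic behavior" here means total work that grows as O(n^2) in the number of value columns. The sketch below is purely illustrative of the pattern and is not a claim about where the cost actually sits inside melt(); it just contrasts a per-column linear scan with a hash-based lookup.

use std::collections::HashSet;

// Illustration only (hypothetical, not the polars code path): validating each of the
// n value columns with a linear scan over the schema is O(n^2) overall ...
fn count_quadratic(schema: &[String], value_vars: &[String]) -> usize {
  value_vars
    .iter()
    .filter(|&v| schema.contains(v)) // O(n) scan for every value column
    .count()
}

// ... while building a HashSet of names once keeps the whole pass O(n).
fn count_linear(schema: &[String], value_vars: &[String]) -> usize {
  let names: HashSet<&str> = schema.iter().map(String::as_str).collect();
  value_vars
    .iter()
    .filter(|v| names.contains(v.as_str())) // O(1) expected per lookup
    .count()
}

With 100,000 value columns, the quadratic version does on the order of 10^10 comparisons instead of 10^5 lookups.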
