(empty) hashmap creation: ~ 80x slower
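For context, the benchmarks below assume objects `env` and `hm` were created beforehand; a minimal setup sketch (not shown in the original issue) would look like:

```r
# Assumed setup for the benchmarks below -- the original issue does not show it.
library(hashmap)         # the package under discussion
library(microbenchmark)  # timing harness used in the results below

env <- new.env()                        # baseline: a plain R environment
hm  <- hashmap(numeric(0), numeric(0))  # an empty hashmap object
```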
microbenchmark(new.env(), hashmap(numeric(0), numeric(0)))
# Unit: nanoseconds
# expr min lq mean median uq max neval cld
# new.env() 640 803.5 1245.52 1182.5 1310.5 14147 100 a
# hashmap(numeric(0), numeric(0)) 93833 95027.0 101540.09 95987.5 98018.5 258528 100 b
inserting one element: ~ 10x slower
microbenchmark(assign("toto", TRUE, envir = env), hm$insert("toto", TRUE))
# Unit: nanoseconds
# expr min lq mean median uq max neval cld
# assign("toto", TRUE, envir = env) 514 622.5 743.01 713.0 800.5 2406 100 a
# hm$insert("toto", TRUE) 7512 7765.0 8881.30 7860.5 8039.5 93803 100 b
I understand from the benchmarks that you are focusing on bulk operations, but given the title "The Faster Hash Map", I was still quite disappointed.