The goal of this project is to assess whether a language implementation is highly optimizing and thus able to remove the overhead of programming abstractions and frameworks. We are interested in comparing language implementations (not languages!) with each other, and in optimizing their compilers as well as the run-time representation of objects, closures, arrays, and strings.
This is in contrast to other projects such as the Computer Language Benchmarks Game, which encourages finding the smartest possible way to express a problem in a language to achieve the best performance, an equally interesting but different problem.
To allow us to compare the degree of optimization done by the implementations as well as the absolute performance achieved, we set the following basic rules:
- The benchmark is 'identical' for all languages. This is achieved by relying only on a widely available and commonly used subset of language features and data types.
- The benchmarks should use the language 'idiomatically'. This means they should be realized with idiomatic code in each language as much as possible, while relying only on the core set of abstractions.
For the detailed set of rules see the guidelines document. For a description of the set of common language abstractions see the core language document.
The initial publication describing the project is Cross-Language Compiler Benchmarking: Are We Fast Yet? and can be cited as (bib file):
Stefan Marr, Benoit Daloze, Hanspeter Mössenböck. 2016. Cross-Language Compiler Benchmarking: Are We Fast Yet? In Proceedings of the 12th Symposium on Dynamic Languages (DLS '16). ACM.
To facilitate our research, we want to be able to assess the effectiveness of compiler and runtime optimizations for a common set of abstractions between languages. As such, many other relevant aspects such as GC, standard libraries, and language-specific abstractions are not included here. However, by focusing on this one aspect, we know exactly what is compared.
Currently, we have 14 benchmarks ported to ten different languages: C++, Crystal, Java, JavaScript, Lua, Python, Ruby, SOM Smalltalk, SOMns (a Newspeak implementation), and Smalltalk (Squeak/Pharo).
The graph below shows some older results for different implementations after warmup, to ensure peak performance is reported:
A detailed overview of the results is in docs/performance.md.
The benchmarks are listed below. A detailed analysis including metrics for the benchmarks is in docs/metrics.md.
- CD is a simulation of an airplane collision detector, based on WebKit's JavaScript CDjs. Originally, CD was designed to evaluate real-time JVMs.
- DeltaBlue is a classic VM benchmark used to tune, e.g., Smalltalk, Java, and JavaScript VMs. It implements a constraint solver.
- Havlak implements a loop recognition algorithm. It has been used to compare C++, Java, Go, and Scala performance.
- Json is a JSON string parsing benchmark derived from the minimal-json Java library.
- Richards is a classic benchmark simulating an operating system kernel. The code used here is based on Wolczko's Smalltalk version.
Micro benchmarks are based on SOM Smalltalk benchmarks unless noted otherwise.
- Bounce simulates a ball bouncing within a box.
- List recursively creates and traverses lists.
- Mandelbrot calculates the classic fractal. It is derived from the Computer Language Benchmarks Game.
- NBody simulates the movement of planets in the solar system. It is derived from the Computer Language Benchmarks Game.
- Permute generates permutations of an array.
- Queens solves the eight queens problem.
- Sieve finds prime numbers based on the sieve of Eratosthenes.
- Storage creates and verifies a tree of arrays to stress the garbage collector.
- Towers solves the Towers of Hanoi game.
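To give a flavor of the micro benchmarks, here is a minimal sketch of the Sieve kernel in JavaScript. This is a simplified illustration rather than the repository's code; the actual ports integrate with each language's harness and verification logic.

```javascript
// Sketch of the Sieve micro benchmark's core idea: count the primes
// up to `size` using the sieve of Eratosthenes over a flag array.
function sieve(flags, size) {
  let primeCount = 0;
  flags.fill(true);
  for (let i = 2; i <= size; i += 1) {
    if (flags[i - 1]) {
      primeCount += 1;
      // Mark every multiple of i as non-prime.
      for (let k = i + i; k <= size; k += i) {
        flags[k - 1] = false;
      }
    }
  }
  return primeCount;
}
```

A harness runs such a kernel repeatedly and checks the result against a known value for the chosen problem size; for example, there are 669 primes up to 5000.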
Considering the large number of languages out there, we are open to contributions of benchmark ports to new languages. We would also be interested in new benchmarks that are in the range of 300 to 1000 lines of code.
When porting to a new language, please carefully consider the guidelines and description of the core language to ensure that we can compare results.
A list of languages we would definitely be interested in is on the issue tracker.
This includes languages like Dart, Scala, and Go. Other interesting ports could be for Racket, Clojure, or CLOS, but these might require more carefully thought-out porting rules. Similarly, a port to Rust needs additional care to account for the absence of a garbage collector and should be guided by our C++ port.
To obtain the code, benchmarks, and documentation, check out the git repository:

```
git clone --depth 1 https://github.com/smarr/are-we-fast-yet.git
```
The benchmarks are sorted by language in the `benchmarks` folder.
Each language has its own harness. For JavaScript and Ruby, the benchmarks are executed like this:

```
cd benchmarks/JavaScript
node harness.js Richards 5 10
cd ../Ruby
ruby harness.rb Queens 5 20
```
The harness takes three parameters: benchmark name, number of iterations, and problem size. The benchmark name corresponds to a class or file of a benchmark. The number of iterations defines how often a benchmark should be executed. The problem size can be used to influence how long a benchmark takes. Note that some benchmarks rely on magic numbers to verify their results. Those might not be included for all possible problem sizes.
The rebench.conf file specifies the supported problem sizes for each benchmark.
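For illustration, a benchmark-suite entry in ReBench's YAML configuration looks roughly like the following. This is a hypothetical, trimmed fragment: the suite name, command, and problem size are made up here, and the actual definitions are in the repository's rebench.conf.

```yaml
benchmark_suites:
  ruby-example:            # hypothetical suite name
    gauge_adapter: RebenchLog
    location: benchmarks/Ruby
    command: "harness.rb %(benchmark)s %(iterations)s "
    benchmarks:
      - Queens:
          extra_args: 20   # problem size passed to the harness
```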
Each port of the benchmarks comes with a `build.sh` file, which runs any necessary build steps; invoked as `./build.sh style`, it runs code style checks instead.
Note, however, that the repository no longer contains setup steps for the various languages. We abandoned the idea of maintaining a full setup, since it took too much work.
Benchmarks are configured and executed with the ReBench tool.
ReBench can be installed via the Python package manager pip:

```
pip install ReBench
```
The benchmarks can be executed with the following command in the root folder, assuming they have been built previously:

```
rebench -d --without-nice rebench.conf all
```
The `-d` flag gives more output during execution, and `--without-nice` means that the `nice` tool, which enforces a high process priority, is not used. We avoid it here so that root rights are not required.
Note: The rebench.conf file specifies how and which benchmarks to execute. It also defines the arguments to be passed to the benchmarks.
- Improving Native-Image Startup Performance.
  M. Basso, A. Prokopec, A. Rosà, W. Binder. Proceedings of the 2025 IEEE/ACM International Symposium on Code Generation and Optimization, CGO 2025.
- Interactive Programming for Microcontrollers by Offloading Dynamic Incremental Compilation.
  F. Mochizuki, T. Yamazaki, S. Chiba. Proceedings of the 21st ACM SIGPLAN International Conference on Managed Programming Languages and Runtimes, MPLR 2024.
- Towards Realistic Results for Instrumentation-Based Profilers for JIT-Compiled Systems.
  H. Burchell, O. Larose, S. Marr. Proceedings of the 21st ACM SIGPLAN International Conference on Managed Programming Languages and Runtimes, MPLR 2024.
- Taking a Closer Look: An Outlier-Driven Approach to Compilation-Time Optimization.
  F. Huemer, D. Leopoldseder, A. Prokopec, R. Mosaner, H. Mössenböck. Proceedings of the 38th European Conference on Object-Oriented Programming, ECOOP 2024.
- Live Objects All The Way Down: Removing the Barriers between Applications and Virtual Machines.
  J. E. Pimás, S. Marr, D. Garbervetsky. The Art, Science, and Engineering of Programming, Programming'24.
- Don't Trust Your Profiler: An Empirical Study on the Precision and Accuracy of Java Profilers.
  H. Burchell, O. Larose, S. Kaleba, S. Marr. Proceedings of the 20th ACM SIGPLAN International Conference on Managed Programming Languages and Runtimes, MPLR'23.
- AST vs. Bytecode: Interpreters in the Age of Meta-Compilation.
  O. Larose, S. Kaleba, H. Burchell, S. Marr. Proceedings of the ACM on Programming Languages, OOPSLA'23.
- Collecting Cyclic Garbage across Foreign Function Interfaces: Who Takes the Last Piece of Cake?
  T. Yamazaki, T. Nakamaru, R. Shioya, T. Ugawa, S. Chiba. Proceedings of the ACM on Programming Languages, PLDI 2023.
- Simple Object Machine Implementation in a Functional Programming Language.
  Filip Říha. Bachelor's Thesis, CTU Prague, 2023.
- Supporting Multi-Scope and Multi-Level Compilation in a Meta-Tracing Just-in-Time Compiler.
  Y. Izawa. PhD Dissertation, Tokyo Institute of Technology, 2023.
- Optimizing the Order of Bytecode Handlers in Interpreters using a Genetic Algorithm.
  W. Huang, S. Marr, T. Ugawa. The 38th ACM/SIGAPP Symposium on Applied Computing, SAC 2023.
- Who You Gonna Call: Analyzing the Run-time Call-Site Behavior of Ruby Applications.
  S. Kaleba, O. Larose, R. Jones, S. Marr. Proceedings of the 18th Symposium on Dynamic Languages, DLS 2022.
- Generating Virtual Machine Code of JavaScript Engine for Embedded Systems.
  Y. Hirasawa, H. Iwasaki, T. Ugawa, H. Onozawa. Journal of Information Processing, 2022.
- Profile Guided Offline Optimization of Hidden Class Graphs for JavaScript VMs in Embedded Systems.
  T. Ugawa, S. Marr, R. Jones. Proceedings of the 14th ACM SIGPLAN International Workshop on Virtual Machines and Intermediate Languages, VMIL 2022.
- Implementation Strategies for Mutable Value Semantics.
  D. Racordon, D. Shabalin, D. Zheng, D. Abrahams, B. Saeta. Journal of Object Technology, 2022.
- Fusuma: Double-Ended Threaded Compaction.
  H. Onozawa, T. Ugawa, H. Iwasaki. Proceedings of the 2021 ACM SIGPLAN International Symposium on Memory Management, ISMM 2021.
- A Surprisingly Simple Lua Compiler – Extended Version.
  H. M. Gualandi, R. Ierusalimschy. Journal of Computer Languages, 2021.
- Contextual Dispatch for Function Specialization.
  O. Flückiger, G. Chari, M. Yee, J. Ječmen, J. Hain, J. Vitek. Proceedings of the ACM on Programming Languages, OOPSLA 2020.
- GraalSqueak: Toward a Smalltalk-based Tooling Platform for Polyglot Programming.
  F. Niephaus, T. Felgentreff, R. Hirschfeld. Proceedings of the 16th International Conference on Managed Programming Languages & Runtimes, MPLR 2019.
- Scopes and Frames Improve Meta-Interpreter Specialization.
  V. Vergu, A. Tolmach, E. Visser. 33rd European Conference on Object-Oriented Programming, ECOOP 2019.
- Self-Contained Development Environments.
  G. Chari, J. Pimás, J. Vitek, O. Flückiger. Proceedings of the 14th ACM SIGPLAN International Symposium on Dynamic Languages, DLS 2018.
- Interflow: Interprocedural Flow-Sensitive Type Inference and Method Duplication.
  D. Shabalin, M. Odersky. Proceedings of the 9th ACM SIGPLAN International Symposium on Scala, Scala 2018.
- Specializing a Meta-Interpreter: JIT Compilation of DynSem Specifications on the Graal VM.
  V. Vergu, E. Visser. Proceedings of the 15th International Conference on Managed Languages and Runtimes, ManLang 2018.
- Newspeak and Truffle: A Platform for Grace?
  S. Marr, R. Roberts, J. Noble. Grace'18, p. 3, 2018. Presentation.
- Parallelization of Dynamic Languages: Synchronizing Built-in Collections.
  B. Daloze, A. Tal, S. Marr, H. Mössenböck, E. Petrank. Proceedings of the ACM on Programming Languages, OOPSLA 2018.
- Efficient and Deterministic Record & Replay for Actor Languages.
  D. Aumayr, S. Marr, C. Béra, E. Gonzalez Boix, H. Mössenböck. Proceedings of the 15th International Conference on Managed Languages and Runtimes, ManLang 2018.
- Fully Reflective Execution Environments: Virtual Machines for More Flexible Software.
  G. Chari, D. Garbervetsky, S. Marr, S. Ducasse. IEEE Transactions on Software Engineering, IEEE TSE, p. 1–20, 2018.
- Garbage Collection and Efficiency in Dynamic Metacircular Runtimes.
  J. Pimás, J. Burroni, J. B. Arnaud, S. Marr. Proceedings of the 13th ACM SIGPLAN International Symposium on Dynamic Languages, DLS 2017.
- Applying Optimizations for Dynamically-typed Languages to Java.
  M. Grimmer, S. Marr, M. Kahlhofer, C. Wimmer, T. Würthinger, H. Mössenböck. Proceedings of the 14th International Conference on Managed Languages and Runtimes, ManLang 2017.
- Efficient and Thread-Safe Objects for Dynamically-Typed Languages.
  B. Daloze, S. Marr, D. Bonetta, H. Mössenböck. Proceedings of the 2016 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications, OOPSLA 2016.