Jenchmark

Jenchmark is a super simple (and absolutely thread-unsafe) benchmarking library for Java based on reflection and metadata. By default the framework benchmarks the whole method, but it also supports checkpointing for measuring the performance of specific sections within a method.

I hope you will like this lil project made during a boring afternoon 🥱. Should you have any comments or suggestions, feel free to open a PR.

Jenchmark was created while experimenting with framework design and is not meant to be used in a production environment. Its design is also not particularly refined, since it was built during a continuous stream of ideas (e.g., exceptions are not handled, some parts need better abstractions, monotonic clocks should be used, and thread safety should be implemented).

Sample usage

1. Define the class to be benchmarked

  • The @Benchmark annotation specifies that we want to benchmark this class. It requires the name of the benchmark and, optionally, the generator used to automatically inject values into methods.
  • The @InjectCheckpointer annotation specifies that we want access to a Checkpointer in this class. The Checkpointer is used to record checkpoints at specific points within a method.

If you use the checkpointer, your class must have a constructor that accepts a Checkpointer implementation.

@Benchmark(name = "TestBenchmark", fallbackGenerator = BaseGenerator.class)
@InjectCheckpointer(checkpointer = BenchmarkEngine.EngineCheckpointer.class)
public class TestBenchmark {

    private final Checkpointer checkpointer;

    public TestBenchmark(Checkpointer checkpointer) {
        this.checkpointer = checkpointer;
    }
}

2. Define generator(s) (only if needed)

If you are using generated params, you can either use the BaseGenerator, which supports primitive types and Strings, or build your own by implementing the BenchmarkGenerator interface.

public class MyGenerator implements BenchmarkGenerator {

    @Override
    public boolean supports(Class<?> clazz) {
        // Report whether this generator can produce values for the given type.
        return true;
    }

    @Override
    public Object generateValueFor(Class<?> clazz) {
        // Produce the value to inject for the given type; this skeleton always fails.
        throw new RuntimeException("Couldn't find a generator for Class<" + clazz + ">");
    }
}
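
The skeleton above always throws. As a concrete illustration, here is a minimal sketch of a generator that injects a fixed value for long parameters (the class name FixedLongGenerator and the constant are illustrative, not part of the framework):

public class FixedLongGenerator implements BenchmarkGenerator {

    @Override
    public boolean supports(Class<?> clazz) {
        // Only handle long parameters (boxed or primitive).
        return clazz.equals(Long.class) || clazz.equals(long.class);
    }

    @Override
    public Object generateValueFor(Class<?> clazz) {
        // Inject a constant 500 ms delay for every long parameter.
        return 500L;
    }
}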

In addition, Jenchmark provides two other generators, FallbackGenerator and VoidGenerator. Their implementations are meaningful only to the framework internals, so their use is discouraged.

3. Define methods

You can either define a simple method with no annotations or use the @BenchmarkParam annotation to specify that you want parameters injected automatically by the generator.

public void webPlusDb() throws InterruptedException {
    Thread.sleep(1000);
    checkpointer.onCheckpoint("Web request");

    Thread.sleep(500);
    checkpointer.onCheckpoint("Database query");
}

It is important to note that the order of the annotations must match the order of the parameters. In addition, all parameters must be annotated, because for now the framework doesn't support hybrid parameter lists (mixing annotated and non-annotated parameters).

Besides the type, you can also specify a custom generator for each specific parameter, which is helpful when you want a different value for that parameter.

@BenchmarkParam(type = Long.class)
@BenchmarkParam(type = Long.class, generator = MyGenerator.class)
public void webPlusDbParams(long webDelay, long dbDelay) throws InterruptedException {
    Thread.sleep(webDelay);
    checkpointer.onCheckpoint("Web request");

    Thread.sleep(dbDelay);
    checkpointer.onCheckpoint("Database query");
}

4. Define the benchmark

You can build a Benchmark via the BenchmarkBuilder, which makes the whole process as simple as it gets.

A Benchmark is built with the following methods:

  • of: the class we want to benchmark.
  • method: the name of the method we want to benchmark, together with the types of its parameters (useful in case of method overloading).
  • withParams: pass the params to be injected into the method.
  • withGenerator: inject the params produced by the generator instead.
  • repeat: execute the benchmarked method a number of times. This is useful when testing non-deterministic functions with side effects.
  • pause: pause for a given number of milliseconds. This is useful when the execution itself impacts the system's performance.
  • build: create the benchmark.

To execute a benchmark, instantiate the BenchmarkEngine and call its benchmark method, passing the Benchmark instance. Benchmarks are evaluated lazily by design, so that they can be defined independently of their execution.

BenchmarkEngine engine = new BenchmarkEngine();

Benchmark<TestBenchmark> benchmark = new BenchmarkBuilder<TestBenchmark>()
  .of(TestBenchmark.class)
  .method("webPlusDb")
  .withParams()
  .repeat(1)
  .pause(1000)
  .build();

BenchmarkResult<TestBenchmark> result = engine.benchmark(benchmark);

Executing the benchmark returns a BenchmarkResult, which contains all the measurements and the details of the benchmark.
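
For the parameterized method defined earlier, a generator-driven benchmark might be built as sketched below; the exact shapes of the method and withGenerator calls are assumptions inferred from the builder description above, not verified against the framework:

// Sketch only: method(...) taking parameter types and a no-arg withGenerator()
// are assumed from the builder description above.
Benchmark<TestBenchmark> paramsBenchmark = new BenchmarkBuilder<TestBenchmark>()
  .of(TestBenchmark.class)
  .method("webPlusDbParams", long.class, long.class)
  .withGenerator()
  .repeat(3)
  .pause(1000)
  .build();

BenchmarkResult<TestBenchmark> paramsResult = engine.benchmark(paramsBenchmark);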
