
Performance tests part 1 #3434

Merged
merged 60 commits into master from perf-tests-2023 on Jan 23, 2024
Conversation


@kciesielski kciesielski commented Jan 2, 2024

This PR introduces a new set of performance tests with a customizable runner. Documentation will be added once the runner passes an initial review.

Short description:
Tests can be run using perfTests/Test/runMain sttp.tapir.perf.PerfTestSuiteRunner -s servers -m simulations -d duration -u users -g, where some parameters are optional. The runner uses the scopt library to handle params; here's its generated usage help:

[error] Usage: perf [options]
[error]   -s, --server <value>    Comma-separated list of short server names, or '*' for all. Available servers: http4s.TapirMulti, http4s.Tapir, http4s.VanillaMulti, http4s.Vanilla, netty.cats.TapirMulti, netty.cats.Tapir, netty.future.TapirMulti, netty.future.Tapir, pekko.TapirMulti, pekko.Tapir, pekko.VanillaMulti, pekko.Vanilla, play.TapirMulti, play.Tapir, play.VanillaMulti, play.Vanilla, vertx.TapirMulti, vertx.Tapir, vertx.VanillaMulti, vertx.Vanilla, vertx.cats.TapirMulti, vertx.cats.Tapir
[error]   -m, --sim <value>       Comma-separated list of short simulation names, or '*' for all. Available simulations: PostBytes, PostFile, PostLongBytes, PostLongString, PostString, SimpleGetMultiRoute, SimpleGet
[error]   -u, --users <value>     Number of concurrent users, default is 1
[error]   -d, --duration <value>  Single simulation duration in seconds, default is 10
[error]   -g, --gatling-reports   Generate Gatling reports for individual sims, may significantly affect total time (disabled by default)

The number of concurrent users is 1 by default.
Each simulation starts with 5 seconds of warm-up, sending GETs and POSTs.
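As a side note on the `-s` / `-m` options: each accepts either a comma-separated list of short names or `*` for all. The real runner parses options with scopt; the sketch below only shows the expansion logic, with hypothetical names (`expandShortNames`, `available`) that are not part of the PR.

```scala
// Hypothetical sketch (not the actual runner code) of expanding the -s / -m
// values: '*' means "all available", otherwise split on commas and trim.
// `available` stands for the full list of names printed in the usage help.
def expandShortNames(arg: String, available: List[String]): List[String] =
  if (arg.trim == "*") available
  else arg.split(',').toList.map(_.trim).filter(_.nonEmpty)
```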

Example HTML report fragment: (screenshot)

Some Netty fixes are still in this PR, but they have also been extracted to a separate PR: #3469


TODO:

  • Documentation

@kciesielski kciesielski added this to the Performance milestone Jan 2, 2024
@kciesielski

@adamw Can I ask for a pre-review of the test running mechanism? After a deeper investigation I noticed that we cannot handle this with sbt tasks, because a dynamic expression like

  (Compile / runMain).toTask(s" sttp.tapir.perf.${servName}Server").value

works only if servName is known at compile time. This is a limitation imposed by sbt macros, which fail with quite a cryptic error.

My proposal is to use a command, but one that runs just a single task: the Gatling perf test. This is because running a server from a command cannot be nicely coordinated with running a test (and then shutting the server down) across multiple tasks. I'm passing the server name to the Gatling simulation using system properties (ouch) and then resolving the server runner using reflection (OUCH), but ultimately this solution lets us gracefully start the server before the tests and close it afterwards.
We can then run tests with

perf http4s.Tapir OneRoute
perf netty.TapirMulti PostString

etc.
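The reflection step mentioned above can be sketched like this (the `ServerRunner` trait and `DemoServer` object are stand-ins for illustration, not the real sttp.tapir.perf classes): a Scala `object` compiles to a class with a static `MODULE$` field, which is how a runner singleton can be resolved from a name received via a system property.

```scala
// Stand-in for a server runner resolved by name at runtime.
trait ServerRunner { def port: Int }

object DemoServer extends ServerRunner { val port = 8080 }

// A Scala `object X` compiles to class `X$` whose singleton instance lives
// in the static MODULE$ field; look it up reflectively from the class name.
def resolveRunner(className: String): ServerRunner =
  Class.forName(className).getField("MODULE$").get(null).asInstanceOf[ServerRunner]
```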

I'm working now on:

  • Defining the rest of servers and scenarios
  • Adding more params to perf, like user count (possibly with additional commands like perfSingleUser or something similar)
  • Docs, etc

build.sbt Outdated
.jvmPlatform(scalaVersions = List(scala2_13))
.dependsOn(core, akkaHttpServer, http4sServer)
.dependsOn(core, akkaHttpServer, http4sServer, nettyServer)
Member

let's just use pekko & scala 3? :)

Member Author

Gatling is based on Akka and requires Scala 2.13 😞 But we could test Pekko instead of Akka anyway, that won't hurt.

build.sbt Outdated
args match {
  case Seq(servName, simName) =>
    System.setProperty("tapir.perf.serv-name", servName)
    Command.process(s"perfTests/Gatling/testOnly sttp.tapir.perf.${simName}Simulation", state)
Member

using the run task (or is it a command?) w/ arguments doesn't work?

Member

then maybe we could have (Compile / runMain).toTask(s"sttp.tapir.perf.Server").value with dynamically-determined arguments? Then Server could pattern-match on the required server impl and run it

Member Author

We still need a dynamic value, which doesn't work with tasks. Additionally, starting and stopping the server in the simulation using before and after seems to give better control over the server.
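The control gained from before/after hooks can be illustrated in plain Scala (no Gatling; `SimulatedRun` is a made-up name for this sketch): the server starts before the scenario runs and is always stopped afterwards, even if the simulation body throws.

```scala
import scala.collection.mutable.ListBuffer

// Minimal model of a simulation lifecycle with before/after hooks.
class SimulatedRun {
  val events = ListBuffer.empty[String]
  private def before(): Unit = events += "server started"
  private def after(): Unit = events += "server stopped"
  def run(body: => Unit): Unit = {
    before()
    try body
    finally after() // the server is stopped even if the body fails
  }
}
```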

Member

ok, sounds reasonable - put it in a comment for posterity ;)

.errorOut(stringBody)
.out(stringBody)
def replyWithStr[F[_]](endpoint: PublicEndpoint[_, _, String, Any])(implicit monad: MonadError[F]): ServerEndpoint[Any, F] =
endpoint.serverLogicSuccess[F](_ => monad.eval("ok"))
Member

just to stay close to what a user would write, maybe we should avoid using our monad here, and just accept a function Unit => F[String]? so that specific impls would have to provide it explicitly
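The suggested alternative - taking the reply effect explicitly instead of threading an implicit MonadError - could look roughly like this. The types here are simplified stand-ins, not the actual tapir API; only the shape of the signature matters.

```scala
// Simplified stand-ins for tapir's endpoint types.
trait Endpoint[I, E, O]
final case class ServerEndpoint[F[_], I, E, O](endpoint: Endpoint[I, E, O], logic: I => F[O])

// The caller supplies Unit => F[String] directly, as each server impl would
// in its own effect type, instead of relying on an implicit MonadError.
def replyWithStr[F[_]](e: Endpoint[Unit, String, String])(reply: Unit => F[String]): ServerEndpoint[F, Unit, String, String] =
  ServerEndpoint(e, reply)
```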


adamw commented Jan 22, 2024

One note about the help screen: great idea, I'd add defaults next to the options (what are the servers, number of concurrent users etc.)

val LargeInputSize = 5 * 1024 * 1024
val Port = 8080
val TmpDir: File = new java.io.File(System.getProperty("java.io.tmpdir")).getAbsoluteFile
def tempFilePath(): Path = TmpDir.toPath.resolve(s"tapir-${new Date().getTime}-${Random.nextLong()}")
Member

that's a path to a new temp file?

Member Author

Yes, I'll rename this function to make it clearer.
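The rename could go in this direction (`newTempFilePath` is my suggested name, not what the PR ended up using): make it explicit that the helper only builds a fresh path under the temp dir and does not create the file.

```scala
import java.nio.file.Path
import java.util.Date
import scala.util.Random

val TmpDir: java.io.File = new java.io.File(System.getProperty("java.io.tmpdir")).getAbsoluteFile

// Builds a unique path for a new temp file; nothing is created on disk.
def newTempFilePath(): Path =
  TmpDir.toPath.resolve(s"tapir-${new Date().getTime}-${Random.nextLong()}")
```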

private val httpProtocol = http.baseUrl(baseUrl)

// Scenarios
val warmUpScenario = scenario("Warm-Up Scenario")
Member

warmup info would be great to include in the help as well

Member Author

What should be shown there? It is not configurable right now.

Member

well you have the "help" screen, it could just mention that there's a 5-second 3-user warmup before the tests are run, configured as below: (list of options)

Member Author

@adamw You mean this screen?

[info] ===========================================================================================
[info] Running a suite of 1 tests, each for 1 users and 10 seconds
[info] Servers: List(pekko.Vanilla)
[info] Simulations: List(SimpleGet)
[info] Expected total duration: at least 1 minutes
[info] Generated suite report paths will be printed to stdout after all tests are finished.
[info] ===========================================================================================

Let's call it "info screen" to tell it apart from the message displayed when input params are missing/incorrect :)
So the help screen would be:

[error] Usage: perf [options]
[error]   -s, --server <value>    Comma-separated list of short server names, or '*' for all. Available servers: http4s.TapirMulti, http4s.Tapir, http4s.VanillaMulti, http4s.Vanilla, netty.cats.TapirMulti, netty.cats.Tapir, netty.future.TapirMulti, netty.future.Tapir, pekko.TapirMulti, pekko.Tapir, pekko.VanillaMulti, pekko.Vanilla, play.TapirMulti, play.Tapir, play.VanillaMulti, play.Vanilla, vertx.TapirMulti, vertx.Tapir, vertx.VanillaMulti, vertx.Vanilla, vertx.cats.TapirMulti, vertx.cats.Tapir
[error]   -m, --sim <value>       Comma-separated list of short simulation names, or '*' for all. Available simulations: PostBytes, PostFile, PostLongBytes, PostLongString, PostString, SimpleGetMultiRoute, SimpleGet
[error]   -u, --users <value>     Number of concurrent users, default is 1
[error]   -d, --duration <value>  Single simulation duration in seconds, default is 10
[error]   -g, --gatling-reports   Generate Gatling reports for individual sims, may significantly affect total time (disabled by default)
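For reference, the "Expected total duration" line in the info screen presumably comes from something like the formula below. This is my assumed reconstruction, not the actual runner code: servers x simulations x (warmup + duration), floored to whole minutes but at least 1, hence "at least".

```scala
// Assumed estimate: each (server, simulation) pair runs warmup + duration
// seconds; the warmup is the fixed 5-second phase mentioned above.
def expectedMinutes(servers: Int, sims: Int, durationSeconds: Int, warmupSeconds: Int = 5): Int =
  math.max(1, servers * sims * (warmupSeconds + durationSeconds) / 60)
```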

Member

Hm well taking inspiration from JMH, it always prints the full config in the info screen - including warmup. So maybe that's an even better place (I did mean the help screen originally).


adamw commented Jan 22, 2024

Looks good! :shipit:

As a follow-up, I think the with-interceptor test from the initial design is missing? Maybe others as well?

@kciesielski

@adamw Regarding interceptors: I'm not sure whether they should be separate servers, or configured with runner parameters - something like -i log,metrics, for example? I'll add them in another PR once we decide.
BTW the logging interceptor is enabled by default, so I'll probably need to disable it in this PR for all servers, right?


adamw commented Jan 22, 2024

@kciesielski I don't think the logging interceptor is there by default, but maybe this differs between interpreters. If that's the case, we should definitely run all with the same set of interceptors.

I think -i log,metrics (with only these 2 options available) would be great


adamw commented Jan 22, 2024

(run all with the same set of interceptors = disable the logging interceptor by default, only enable on-demand)

@kciesielski

@adamw I checked all server options for Vert.x, Netty, http4s, Pekko, and Play - they all have some variant of .serverLog(defaultServerLog) enabled by default:

object VertxCatsServerOptions {

  /** Allows customising the interceptors used by the server interpreter. */
  def customiseInterceptors[F[_]: Async](
      dispatcher: Dispatcher[F]
  ): CustomiseInterceptors[F, VertxCatsServerOptions[F]] =
    CustomiseInterceptors(
      createOptions = (ci: CustomiseInterceptors[F, VertxCatsServerOptions[F]]) =>
        VertxCatsServerOptions(
          dispatcher,
          VertxServerOptions.uploadDirectory(),
          file => Sync[F].delay(Defaults.deleteFile()(file)),
          maxQueueSizeForReadStream = 16,
          ci.interceptors
        )
    ).serverLog(defaultServerLog(LoggerFactory.getLogger("tapir-vertx")))

  def default[F[_]: Async](dispatcher: Dispatcher[F]): VertxCatsServerOptions[F] = customiseInterceptors(dispatcher).options


adamw commented Jan 22, 2024

@kciesielski ah ... well the log impls are different, so it might be best to turn it off by default, and enable it on demand.
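The agreed direction (interceptors off by default, enabled on demand via a `-i log,metrics` flag) could be sketched as below. This is hypothetical - `PerfInterceptors` and `parseInterceptors` are illustration names, not in this PR or the tapir API.

```scala
// Both interceptors default to off; -i flips on only the requested ones.
final case class PerfInterceptors(log: Boolean = false, metrics: Boolean = false)

def parseInterceptors(arg: String): PerfInterceptors =
  arg.split(',').map(_.trim).foldLeft(PerfInterceptors()) {
    case (acc, "log")     => acc.copy(log = true)
    case (acc, "metrics") => acc.copy(metrics = true)
    case (acc, _)         => acc // unknown names are ignored in this sketch
  }
```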

@kciesielski kciesielski marked this pull request as ready for review January 22, 2024 15:51
@kciesielski

@adamw I disabled all the logs and updated README.md. Also applied updates after your initial review. Should I also add the configurable interceptors to this PR?


adamw commented Jan 22, 2024

Great :) Let's merge this, smaller PRs are better PRs. Just let's not forget about the interceptors test, we might as well add it after running & profiling these tests.

@kciesielski kciesielski changed the title [wip] Performance tests Performance tests part 1 Jan 23, 2024
@kciesielski kciesielski merged commit 334f8a6 into master Jan 23, 2024
28 checks passed
@kciesielski kciesielski deleted the perf-tests-2023 branch January 23, 2024 09:35