
Refactor KVI 2 #538

Closed · wants to merge 13 commits
Conversation

@k163377 (Contributor) commented Jan 10, 2022

This PR is a continuation of #512.
It also implements the changes up to point 2 of the following comment: #439 (comment)

Changes from the existing implementation

Improve the efficiency of argument management

The following inefficiencies existed in KVI's argument management process:

  • In the initialization performed for reading arguments, the same checks were repeated on every call.
  • A Map was generated on every call to ValueCreator.callBy.
  • KFunction.callBy was used even in situations where KFunction.call could be used.

This change improves each of these points.
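
As a rough illustration of the last point (a generic sketch, not code from this PR or the module), the snippet below contrasts the two reflective call paths: KFunction.callBy takes a Map<KParameter, Any?> and resolves absent optional parameters, while KFunction.call takes positional arguments and is the cheaper path when every argument is present. The Person class is a made-up example.

```kotlin
import kotlin.reflect.KFunction

// Example class: `age` has a default value, so it may be absent from the input.
data class Person(val name: String, val age: Int = 0)

fun main() {
    val ctor: KFunction<Person> = ::Person

    // callBy: needed when an optional parameter is absent so its default applies.
    // It takes a Map keyed by KParameter and does extra work per call.
    val withDefault = ctor.callBy(mapOf(ctor.parameters[0] to "Alice"))

    // call: positional arguments only; usable whenever all arguments were read.
    val allPresent = ctor.call("Bob", 42)

    println(withDefault) // Person(name=Alice, age=0)
    println(allPresent)  // Person(name=Bob, age=42)
}
```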

About ArgumentBucket

Here is an overview of ArgumentBucket, the most significant change in this PR.

ArgumentBucket is a class that stores arguments in their parameter positions, in the manner of a bucket sort.
This makes it easy to call either KFunction.call or super.createFromObjectWith(cxt, argumentBucket.actualValues).
The difference from #439 is that ArgumentBucket implements the Map interface, so it can be passed directly as the argument map to KFunction.callBy.

Initialization for reading arguments is kept to a minimum and is performed in the BucketGenerator.
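
As a hypothetical, simplified sketch of the idea described above (names and details are mine, not the PR's actual implementation), an ArgumentBucket-like class could store each value in a slot chosen by KParameter.index, expose the slots as a positional array for KFunction.call, and implement Map<KParameter, Any?> so the bucket itself can be handed to KFunction.callBy when some optional parameters are absent:

```kotlin
import kotlin.reflect.KParameter

class ArgumentBucketSketch(private val parameters: List<KParameter>) : Map<KParameter, Any?> {
    // One slot per parameter; KParameter.index is assumed to line up with the
    // slot position (true for constructors, which have no instance parameter).
    private val slots = arrayOfNulls<Any?>(parameters.size)
    private val present = BooleanArray(parameters.size)
    private var count = 0

    /** Drop an argument into its parameter's slot, bucket-sort style. */
    operator fun set(parameter: KParameter, value: Any?) {
        val i = parameter.index
        if (!present[i]) {
            present[i] = true
            count++
        }
        slots[i] = value
    }

    /** True once every parameter has a value, so the cheaper KFunction.call can be used. */
    val isFullyInitialized: Boolean get() = count == parameters.size

    /** Positional values for KFunction.call or createFromObjectWith. */
    val actualValues: Array<Any?> get() = slots

    // Map members: only slots that were actually filled are exposed, so absent
    // optional parameters still fall back to their defaults in callBy.
    override val size: Int get() = count
    override fun isEmpty(): Boolean = count == 0
    override fun containsKey(key: KParameter): Boolean = present[key.index]
    override fun containsValue(value: Any?): Boolean =
        parameters.any { present[it.index] && slots[it.index] == value }
    override fun get(key: KParameter): Any? = slots[key.index]
    override val keys: Set<KParameter>
        get() = parameters.filter { present[it.index] }.toSet()
    override val values: Collection<Any?>
        get() = parameters.filter { present[it.index] }.map { slots[it.index] }
    override val entries: Set<Map.Entry<KParameter, Any?>>
        get() = parameters.filter { present[it.index] }.associateWith { slots[it.index] }.entries
}
```

A creator built around such a bucket would use KFunction.call (or pass actualValues to createFromObjectWith) when isFullyInitialized is true, and fall back to KFunction.callBy with the bucket itself otherwise.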

Points that have not been changed in this PR

The argument reading process in KVI itself has not been refactored or streamlined.

Since the same judgments are made for the same KParameter on every call, caching them is expected to speed things up.
However, including that change would have made the scope of this PR too large, so I decided not to modify it here.
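
As a hypothetical sketch of that caching idea (explicitly not part of this PR), judgments that depend only on the KParameter, such as optionality and nullability, could be computed once when the creator is set up, so that the per-call code reads plain fields instead of going through reflection again. The class and property names below are made up for illustration.

```kotlin
import kotlin.reflect.KFunction
import kotlin.reflect.KParameter

// Per-parameter facts computed once, reused on every deserialization call.
class CachedParameter(parameter: KParameter) {
    val index: Int = parameter.index
    val name: String? = parameter.name
    val isOptional: Boolean = parameter.isOptional
    val isMarkedNullable: Boolean = parameter.type.isMarkedNullable
}

class CachedCreator<T>(val callable: KFunction<T>) {
    // Reflection is touched once here; the per-call path only reads this list.
    val cachedParameters: List<CachedParameter> = callable.parameters.map(::CachedParameter)
}
```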

Benchmark results

The results of the benchmark comparison are as follows.
Compared to 2.13.2-SNAPSHOT, the scores themselves have not changed much, but the error has decreased, so this is probably an improvement.

2.13.1

Benchmark                                           Mode  Cnt        Score        Error  Units
c.w.constructor.Benchmarks.useDefaultKotlinMapper  thrpt    9   626907.310 ±  28895.923  ops/s
c.w.constructor.Benchmarks.useJavaMapper           thrpt    9  2647305.786 ± 363443.365  ops/s
c.w.constructor.Benchmarks.useKotlinMapper         thrpt    9  1098450.486 ± 102087.899  ops/s
c.w.factory.Benchmarks.useDefaultKotlinMapper      thrpt    9   536998.112 ±  16534.997  ops/s
c.w.factory.Benchmarks.useJavaMapper               thrpt    9  2808159.311 ±  40007.999  ops/s
c.w.factory.Benchmarks.useKotlinMapper             thrpt    9   566703.227 ±  15186.272  ops/s

99389f3 (2.13.2-SNAPSHOT)

Benchmark                                           Mode  Cnt        Score        Error  Units
c.w.constructor.Benchmarks.useDefaultKotlinMapper  thrpt    9   902379.251 ±  37142.938  ops/s
c.w.constructor.Benchmarks.useJavaMapper           thrpt    9  2760360.174 ± 114464.503  ops/s
c.w.constructor.Benchmarks.useKotlinMapper         thrpt    9  1278327.229 ±  20343.157  ops/s
c.w.factory.Benchmarks.useDefaultKotlinMapper      thrpt    9   891055.319 ±  19565.590  ops/s
c.w.factory.Benchmarks.useJavaMapper               thrpt    9  2758595.266 ±  70719.811  ops/s
c.w.factory.Benchmarks.useKotlinMapper             thrpt    9  1225093.140 ±  37411.392  ops/s

This change

Benchmark                                           Mode  Cnt        Score       Error  Units
c.w.constructor.Benchmarks.useDefaultKotlinMapper  thrpt    9   902349.504 ± 24542.867  ops/s
c.w.constructor.Benchmarks.useJavaMapper           thrpt    9  2785190.898 ± 84849.035  ops/s
c.w.constructor.Benchmarks.useKotlinMapper         thrpt    9  1242015.235 ± 33887.768  ops/s
c.w.factory.Benchmarks.useDefaultKotlinMapper      thrpt    9   893465.449 ± 11438.631  ops/s
c.w.factory.Benchmarks.useJavaMapper               thrpt    9  2756748.040 ± 70982.802  ops/s
c.w.factory.Benchmarks.useKotlinMapper             thrpt    9  1243297.794 ±  9396.874  ops/s

I don't think these results raise any major concern, at least since the scores have not decreased.
To be fair, this change alone does not reduce the amount of reflection work enough to produce a large effect.

However, I am puzzled that the change in the c.w.factory.Benchmarks.useKotlinMapper score is so small, despite avoiding the spread operator and using KFunction.call.
In my experience, I expected these changes to have a larger effect.
I haven't had time to check why this is the case, but I may add an explanation here if I find out anything.
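
For context on the spread operator point (a general Kotlin behavior, not code from this PR): spreading an existing array into a vararg parameter copies the array on every call, which is why avoiding it on a hot path was expected to help.

```kotlin
// Minimal illustration of the copy made by the spread operator.
fun acceptVarargs(vararg args: Any?): Int = args.size

fun main() {
    val arguments = arrayOf<Any?>("a", 1, null)
    // `*arguments` copies the array into a fresh vararg array before the call,
    // adding an allocation per invocation on hot paths.
    println(acceptVarargs(*arguments)) // 3
}
```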

@k163377 (Contributor, Author) commented Jan 12, 2022

@dinomite
I have created a branch that refactors the argument reading process as a follow-up to this PR.
When no default arguments are involved, its performance appears to be better than #439 (i.e., it can be made even faster).

https://github.com/k163377/jackson-module-kotlin/compare/speed_up/argument_bucket...k163377:speed_up/3?expand=1

Benchmark                                           Mode  Cnt        Score        Error  Units
c.w.constructor.Benchmarks.useDefaultKotlinMapper  thrpt    9  1259407.738 ±  16637.232  ops/s
c.w.constructor.Benchmarks.useJavaMapper           thrpt    9  2758888.351 ±  49898.808  ops/s
c.w.constructor.Benchmarks.useKotlinMapper         thrpt    9  2201595.204 ±  45559.798  ops/s
c.w.factory.Benchmarks.useDefaultKotlinMapper      thrpt    9  1144147.578 ± 266005.616  ops/s
c.w.factory.Benchmarks.useJavaMapper               thrpt    9  2808369.868 ±  45898.266  ops/s
c.w.factory.Benchmarks.useKotlinMapper             thrpt    9  2057443.723 ±  76189.304  ops/s

@k163377 k163377 changed the base branch from 2.13 to 2.15 March 22, 2023 15:22
@k163377 k163377 deleted the branch FasterXML:2.15 May 6, 2023 14:53
@k163377 k163377 closed this May 6, 2023
@k163377 k163377 deleted the speed_up/argument_bucket branch January 13, 2024 17:14