
tests: Refine test cases for BenchmarkTaxonomiesGetTerms #12612

Conversation

razonyang
Contributor

The tag terms in PR #12611 seem too small ([a,b,c,d,e,f]), so this PR creates a large number of tags to cover more situations. However, I only modified the test cases, since I don't have the knowledge to improve the implementation itself.

The benchmark results on my laptop are as follows.

```
Running tool: /usr/bin/go test -benchmem -run=^$ -bench ^BenchmarkTaxonomiesGetTerms$ github.com/gohugoio/hugo/hugolib

goos: linux
goarch: amd64
pkg: github.com/gohugoio/hugo/hugolib
cpu: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz
BenchmarkTaxonomiesGetTerms/pages_100-16             100    10308688 ns/op    6421780 B/op     73897 allocs/op
BenchmarkTaxonomiesGetTerms/pages_1000-16              9   132214460 ns/op   49648174 B/op    556238 allocs/op
BenchmarkTaxonomiesGetTerms/pages_10000-16             1  7714456586 ns/op  476423784 B/op   5419983 allocs/op
BenchmarkTaxonomiesGetTerms/pages_20000-16             1 41908415387 ns/op  951093296 B/op  10836903 allocs/op
PASS
ok    github.com/gohugoio/hugo/hugolib  55.180s
```
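
For context, here is a minimal, self-contained sketch (not the actual hugolib harness) of the table-driven shape behind the `pages_N` sub-benchmarks above; `buildSiteWithTags` is a hypothetical stand-in for the site-building step:

```go
// Sketch of a table-driven benchmark over increasing page counts.
package hugolib_test

import (
	"fmt"
	"testing"
)

// buildSiteWithTags is a hypothetical placeholder for building a test site
// whose pages draw from `terms` distinct tag terms.
func buildSiteWithTags(pages, terms int) {
	_ = pages * terms // site construction elided
}

func BenchmarkTaxonomiesGetTerms(b *testing.B) {
	for _, pages := range []int{100, 1000, 10000, 20000} {
		b.Run(fmt.Sprintf("pages_%d", pages), func(b *testing.B) {
			for i := 0; i < b.N; i++ {
				// Generate many distinct terms (here one per ten pages)
				// instead of a fixed set like [a,b,c,d,e,f].
				buildSiteWithTags(pages, pages/10)
			}
		})
	}
}
```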

Please close it if I've misunderstood.

@jmooring
Member

The number of generated tags seems unrealistic, and the existing test covers the underlying problem.

@razonyang
Contributor Author

> the existing test covers the underlying problem.

It doesn't seem to cover comparing different numbers of taxonomy terms.

@razonyang
Contributor Author

razonyang commented Jun 19, 2024

If I understand the benchmark above correctly, it gets slower per operation as the number of terms grows: going from 10,000 to 20,000 pages multiplies ns/op by roughly 5.4x for only 2x the pages, so the scaling looks superlinear. Maybe this should also be considered an improvable case.

Please close this if I'm wrong, but I've seen some users generate a lot of terms on their sites.

@bep
Member

bep commented Jun 19, 2024

Thanks for this, but the original benchmarks are good enough. The problem with the original GetTerms code was obvious, and the benchmarks confirmed the theory.

@bep bep closed this Jun 19, 2024
@bep
Member

bep commented Jun 19, 2024

As to running Go benchmarks, I use this tool:

https://github.com/bep/gobench

It lacks some documentation, but it's very useful when comparing branches etc., as it supports benchstat and pprof under the hood.
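
For anyone not using gobench, this is roughly the plain workflow it wraps (gobench's own flags aren't shown here; benchstat comes from golang.org/x/perf/cmd/benchstat):

```
go test -run '^$' -bench '^BenchmarkTaxonomiesGetTerms$' -count 10 ./hugolib > old.txt
# switch to the branch under test, then:
go test -run '^$' -bench '^BenchmarkTaxonomiesGetTerms$' -count 10 ./hugolib > new.txt
benchstat old.txt new.txt
```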
