Consolidate benchmarks and use functions rather than scenes as the base unit #22
Conversation
Hello, are there any questions or concerns with this proposed format that I should address before it can be merged?
I've been busy lately and haven't had time to review pull requests. Holiday season is also starting, so it will take a few weeks until I can get to this and give it enough time for a proper review. Rest assured, your work is very appreciated 🙂
Okay, thank you for the update. Happy holidays!
Benchmark results:
This may be due to shader compilation; I'm observing a similar
FYI, you can use
CPU time is much lower on a second run in 3D benchmarks, which indicates an issue with shader compilation times being counted in CPU time. Edit: Issue opened: #29
Thanks!
GDScripts are now used as categories rather than directories, and functions are now the basic benchmarking unit rather than scenes. This allows nearly 40 files to be consolidated into 3 GDScripts, and makes it much easier to write new benchmarks since contributors only need to add a single script.
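As a rough illustration of what a contributor-facing script could look like under this format (the function names and workloads here are invented for illustration, not taken from this PR):

```gdscript
extends Node

# Hypothetical benchmark category script; every function whose name starts
# with "benchmark_" is discovered and registered automatically by the loader.
func benchmark_sprite_spawn() -> Node:
    var root := Node2D.new()
    for i in 1000:
        root.add_child(Sprite2D.new())
    return root  # Returned node is added to the scene and benchmarked as usual.

func benchmark_math_only() -> Node:
    var acc := 0.0
    for i in 100_000:
        acc += sin(i)
    return null  # Returning null signals that there is no time limit.
```

Adding a new benchmark then amounts to adding one more `benchmark_` function to an existing script, rather than creating a new scene and directory entry.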
On startup, the loader now scans `benchmarks/` for GDScripts, opens them up, and scans those for functions that begin with `benchmark_` for automatic registration. Instead of the `time_limit` mechanism, the `benchmark_()` functions return a `Node` to be added to the scene; if it is `null`, that signals to the manager that there is no time limit. This is admittedly a bit hacky, but it was the easiest way to do this consolidation while minimizing the amount of refactoring needed.
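The registration scan described above could be sketched roughly as follows (a simplified illustration assuming Godot 4 APIs; apart from the `benchmarks/` directory and the `benchmark_` prefix, the method and helper names are assumptions, not the actual loader code):

```gdscript
# Hypothetical discovery pass: load each GDScript under benchmarks/ and
# register every method whose name starts with "benchmark_".
func _discover_benchmarks() -> void:
    var dir := DirAccess.open("res://benchmarks/")
    for file in dir.get_files():
        if not file.ends_with(".gd"):
            continue
        var script: GDScript = load("res://benchmarks/".path_join(file))
        var instance: Object = script.new()
        # get_method_list() returns one Dictionary per method, with a "name" key.
        for method in instance.get_method_list():
            if String(method.name).begins_with("benchmark_"):
                _register_benchmark(file, method.name)  # hypothetical helper
```

The returned-`Node` convention keeps the manager's run loop unchanged: it still adds a node to the scene and times it, and only the `null` case is special-cased.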