Tictactoe benchmark #492
Replies: 4 comments 8 replies
-
FWIW, this is a much wider gap than we've seen before, but we've mostly switched to VS 2019. VS 2017 misses some important optimizations that affect the interpreter, so the code ends up slower. The timings from my machine:
luajit 74ms
My recollection is that vs2017 => vs2019 was a ~10% delta on average, so I'm surprised that you're seeing an effect that significant. I don't think the benchmark is doing anything particularly unusual, though we haven't analyzed it closely. And yeah, unsure what is going on with clang here; we'd need to look closely at the benchmark.
-
Thanks for the detailed answers! I'm considering moving my indie game projects from LuaJIT to Luau, and I'm indeed using VS 2017. Also good to know that the performance difference shouldn't be this big; I need to dig deeper into what the issue is here. Maybe I've messed up some compiler settings or something...
-
Btw, I get a 5% perf improvement in this benchmark if I disable the /GS compiler option (security checks for buffer overruns). That option is enabled by default, at least in the RelWithDebInfo configuration. Edit: other benchmarks are affected too. Might be worth considering disabling that option?
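In case it helps anyone reproducing this, a minimal sketch of the two MSVC-level knobs involved: passing /GS- disables the checks build-wide, while __declspec(safebuffers) opts a single function out. The helper below is hypothetical (not from the Luau sources) and only shows where the annotation goes; this isn't a proposal for how Luau itself should build.

```cpp
// Illustrative only: opting one function out of /GS buffer security checks
// instead of disabling them for the whole build with /GS-.
#if defined(_MSC_VER)
#define NO_GS_CHECKS __declspec(safebuffers) // MSVC: don't insert /GS checks here
#else
#define NO_GS_CHECKS // other compilers: expands to nothing
#endif

// Hypothetical hot helper with a small local buffer, which is what normally
// triggers the /GS cookie; safebuffers suppresses it for this function only.
static NO_GS_CHECKS void copySmall(char* dst, const char* src)
{
    char tmp[8];
    for (int i = 0; i < 8; ++i)
        tmp[i] = src[i];
    for (int i = 0; i < 8; ++i)
        dst[i] = tmp[i];
}
```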
-
Solved my original problem -- I had forgotten to call luaL_sandbox. Oops!
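For anyone else hitting this: the sandboxing call matters for performance because the Luau interpreter can take faster paths for global and builtin accesses when the environment has been made read-only. A rough sketch of where the call sits in a typical embedding follows; the runScript helper, chunk name, and error handling are illustrative, not from the Luau sources.

```cpp
#include "lua.h"
#include "lualib.h"
#include "luacode.h"

#include <cstdlib>
#include <string>

// Rough embedding sketch: compile and run a Luau script in a sandboxed state.
static int runScript(const std::string& source)
{
    lua_State* L = luaL_newstate();

    luaL_openlibs(L); // open the standard libraries first...
    luaL_sandbox(L);  // ...then make globals/builtins read-only (the missing call)

    size_t bytecodeSize = 0;
    char* bytecode = luau_compile(source.c_str(), source.size(), nullptr, &bytecodeSize);
    int status = luau_load(L, "=benchmark", bytecode, bytecodeSize, 0);
    free(bytecode); // luau_compile allocates with malloc

    if (status == 0)
        status = lua_pcall(L, 0, 0, 0); // run the chunk

    lua_close(L);
    return status;
}
```

If each script needs its own mutable globals on top of the sandboxed state, luaL_sandboxthread on a fresh thread is the usual companion call.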
-
Hi! I'm comparing the performance of Luau and LuaJIT interpreters (with JIT off for fairness). One thing which stands out is the performance of the tictactoe benchmark that comes bundled with Luau:
luajit 153ms
luau 421ms
This is with MSVC 2017 on Windows. (For some reason the performance is even worse for this benchmark when compiled with clang, whereas usually it's the other way around.)
I would expect the perf difference to be in the same ballpark (within 40% or so) as with the other benchmarks. Is there something particularly nasty in this benchmark for Luau?