The algorithms here accomplish two main things:

- They separately compute outcome distributions for the case where both players roll the maximum number of dice and the case where at least one player rolls fewer than the maximum.
- They cache far fewer values for these distributions by ignoring probabilities so small that they would be lost to floating-point rounding anyway (see the pruning sketch below).
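As a minimal sketch of the pruning idea, assuming distributions are stored as maps from outcomes to probabilities (both the helper and the threshold value are illustrative, not the PR's actual code):

```python
# Hypothetical pruning helper; EPS is an assumed cutoff, chosen so that
# anything below it would be lost to double-precision rounding anyway.
EPS = 1e-15

def prune(dist: dict) -> dict:
    """Drop entries whose probability is too small to survive float rounding."""
    return {outcome: p for outcome, p in dist.items() if p >= EPS}
```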
When at least one player rolls fewer than the maximum number of dice, we cache outcome distributions much as the old algorithm did, with one full outcome distribution per attacker/defender pair. However, when one army is held at a small fixed size, the distribution of losses converges very quickly as the other army grows, so above a certain size of the larger army, dropping every probability below a given threshold yields literally the same outcome distribution for every army size. We therefore only cache results up to that point, and can then serve an accurate outcome distribution for every possible matchup from a fixed-size cache.
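A sketch of what that fixed-size cache might look like, assuming (per the paragraph above) that beyond some convergence point the pruned distribution stops changing as the larger army grows; `CONVERGED` and the cache layout are assumptions, not the PR's actual values:

```python
# Illustrative cache keyed by (attackers, defenders); the real convergence
# point depends on the pruning threshold, so CONVERGED here is a placeholder.
CONVERGED = 64

cache: dict[tuple[int, int], dict] = {}  # filled by the iterative computation

def cache_key(attackers: int, defenders: int) -> tuple[int, int]:
    """Clamp the larger army at CONVERGED: past that size its pruned
    outcome distribution is literally identical, so one entry serves all."""
    if attackers >= defenders:
        return (min(attackers, CONVERGED), defenders)
    return (attackers, min(defenders, CONVERGED))
```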
When both players have enough troops to roll the maximum number of dice, they will each roll the same number of dice round after round until one of them has lost enough troops that they can no longer roll the maximum. There is a fixed set of possible remaining troop combinations once these identical rolls end, and we can compute, from any possible starting troop counts, the probability of ending on each remaining troop combination. This computation becomes very efficient if we have cached distributions for every possible outcome of running k maximum dice rolls, for every value of k. We then combine this exit distribution with the cached outcome distributions for the non-max-dice regime to get the overall outcome distribution.
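To make the identical-rolls phase concrete, here is a roll-by-roll sketch of computing that exit distribution, assuming Risk-style 3-vs-2 dice (the per-round loss probabilities and the minimum-dice thresholds are stand-ins for whatever this game actually uses); the cached k-roll distributions described above are what make this fast for large armies, which the sketch does not show:

```python
from collections import defaultdict

# Standard Risk 3-vs-2 probabilities of the attacker losing 0, 1, or 2 troops
# in one round; assumed stand-ins for this game's real per-round distribution.
ROUND_LOSSES = {0: 2890 / 7776, 1: 2611 / 7776, 2: 2275 / 7776}
ATTACK_MIN, DEFEND_MIN = 3, 2  # assumed minimum troops to roll the max dice

def exit_distribution(attackers: int, defenders: int) -> dict:
    """Distribution over (attackers, defenders) states at the moment the
    identical max-dice rolls stop."""
    if attackers < ATTACK_MIN or defenders < DEFEND_MIN:
        return {(attackers, defenders): 1.0}
    active = {(attackers, defenders): 1.0}
    exits: dict = defaultdict(float)
    while active:
        nxt: dict = defaultdict(float)
        for (a, d), p in active.items():
            for att_loss, q in ROUND_LOSSES.items():
                # Two troops are lost per round, split between the two sides.
                state = (a - att_loss, d - (2 - att_loss))
                below = state[0] < ATTACK_MIN or state[1] < DEFEND_MIN
                (exits if below else nxt)[state] += p * q
        active = dict(nxt)
    return dict(exits)
```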
Storing the outcome distribution of k maximum dice rolls for every value of k is not feasible memory-wise for very large armies, and computing it iteratively is a bit too slow. However, this distribution converges to a gaussian with easily computed mean and variance, so by caching exact results for k up to 1000 we only ever need to compute distributions that are already very close to the gaussian approximation.
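A sketch of that gaussian fallback, reusing the per-round loss distribution from the previous sketch: attacker losses after k identical rolls are a sum of k i.i.d. per-round losses, so the pmf is well approximated by a continuity-corrected normal with mean k·μ and variance k·σ². The cache bound of 1000 comes from the text; `exact_cache` is a placeholder for however the PR actually stores the exact results:

```python
import math

ROUND_LOSSES = {0: 2890 / 7776, 1: 2611 / 7776, 2: 2275 / 7776}  # as above
MU = sum(loss * p for loss, p in ROUND_LOSSES.items())
VAR = sum(loss ** 2 * p for loss, p in ROUND_LOSSES.items()) - MU ** 2
K_CACHED = 1000
exact_cache: dict[int, dict[int, float]] = {}  # exact pmfs for k <= K_CACHED

def normal_cdf(z: float) -> float:
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def attacker_loss_pmf(k: int, losses: int) -> float:
    """Chance the attacker has lost exactly `losses` troops after k max rolls."""
    if k <= K_CACHED:
        return exact_cache[k].get(losses, 0.0)
    mean, std = k * MU, math.sqrt(k * VAR)
    # Continuity correction: integrate the normal over [losses - 0.5, losses + 0.5].
    return normal_cdf((losses + 0.5 - mean) / std) - normal_cdf((losses - 0.5 - mean) / std)
```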
I tested the new algorithm against every win chance up to 1000 troops as computed by the old algorithm, and they do match (they can be off by around 1 in a million due to floating-point rounding, but for no matchup does this rounding change whether the 5% win-chance cutoff applies). The full outcome distribution also matches up to 200 troops, but I haven't invested the CPU time to compute distributions up to 1000 with the old algorithm. (All of this testing is repeated 7 times to cover every modifier combination that can actually occur in the game.)
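A sketch of that consistency check, where `old_fn` and `new_fn` stand in for the two implementations; the 2e-6 tolerance reflects the roughly 1-in-a-million rounding noted above and is an assumed bound, not the exact one used:

```python
def check_match(old_fn, new_fn, limit: int = 1000) -> None:
    """Compare win chances from both implementations for every matchup."""
    for attackers in range(1, limit + 1):
        for defenders in range(1, limit + 1):
            old, new = old_fn(attackers, defenders), new_fn(attackers, defenders)
            assert abs(old - new) < 2e-6, (attackers, defenders)
            # Rounding must never flip the 5% win-chance cutoff.
            assert (old < 0.05) == (new < 0.05), (attackers, defenders)
```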