With the advent of .NET Core, running benchmarks is much more complicated than it used to be: we now have to cover a much larger surface area of runtimes, runtime versions, and platforms.
The design we implemented when .NET Core 1.0 was on the horizon is frankly pretty weak and, as it turns out, never worked properly in the first place. We should therefore use this as an opportunity to reassess our design and build something that can properly support a much larger platform / runtime surface area.
Moreover, I want to build something much more extensible than what we have today in terms of benchmark discovery, execution, and metrics collection.
I'm going to keep this issue open as a means of organizing some of the designs for accomplishing this.
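As a starting point for that discussion, here's a rough sketch of what the seams could look like if we split the runner into separately pluggable discovery, execution, and metrics stages. None of these interface or type names exist in NBench today; they're purely illustrative:

```csharp
// Hypothetical sketch only - these types are NOT part of the current NBench SDK.
// Illustrates one way to break the monolithic runner into pluggable stages.
using System.Collections.Generic;
using System.Reflection;

// Stage 1: find benchmarks. Could be attribute-based, convention-based, etc.
public interface IBenchmarkDiscovery
{
    IEnumerable<BenchmarkCase> Discover(Assembly target);
}

// Stage 2: execute a single discovered benchmark, driving the collectors.
public interface IBenchmarkExecutor
{
    BenchmarkResult Run(BenchmarkCase benchmark, IReadOnlyList<IMetricCollector> collectors);
}

// Stage 3: sample one metric (GC, throughput, counters, ...) around a run.
public interface IMetricCollector
{
    string Name { get; }
    void Start();
    MetricSample Stop();
}

// Plain data carriers flowing between the stages above.
public sealed class BenchmarkCase
{
    public string Name;
    public MethodInfo Method;
}

public sealed class MetricSample
{
    public string MetricName;
    public double Value;
}

public sealed class BenchmarkResult
{
    public BenchmarkCase Case;
    public List<MetricSample> Samples = new List<MetricSample>();
}
```

The idea being that a custom discovery strategy, a new runtime-specific executor, or an additional metric collector could each be swapped in independently, instead of all three concerns living inside one execution pipeline.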
You should be able to click through for a full-size image, but this is essentially how the execution flow works today, minus some features we haven't implemented yet, such as #220. However, the way this is expressed via the NBench SDK isn't all that clear, and we intend to break up the execution model so that each of these parts can be customized much more easily in the future.