This repository has been archived by the owner on Nov 8, 2022. It is now read-only.
Hi all. I'm using Argo for an online course on uncertainty using Monte Carlo simulation.
The uncertainty can be derived from a simple mathematical equation, and from the mean and SD resulting from the software simulation.
However, the latter change slightly on every simulation run after the file is opened (even though no input data or equations have changed).
While I think this is a consequence of the randomness of the values drawn from each input's distribution (do you agree?), I'm most interested to know whether there is a way to tell Argo to keep running until the mean stabilizes.
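For context, the general idea of "run until the mean stabilizes" is usually implemented as a stopping rule on the standard error of the mean. I don't know whether Argo exposes such an option, but here is a minimal Python sketch of the technique; the function names, the relative tolerance, and the example model (a sum of two normal inputs) are all my own illustrative assumptions, not part of Argo.

```python
import random

def simulate_until_stable(model, rel_tol=1e-3, batch=1000, max_samples=1_000_000):
    """Draw Monte Carlo samples in batches until the standard error of the
    mean is small relative to the mean (an illustrative stopping rule)."""
    n, total, total_sq = 0, 0.0, 0.0
    mean, var = 0.0, 0.0
    while n < max_samples:
        for _ in range(batch):
            x = model()
            total += x
            total_sq += x * x
        n += batch
        mean = total / n
        var = max(total_sq / n - mean * mean, 0.0)  # population variance
        sem = (var / n) ** 0.5                      # standard error of the mean
        if mean != 0 and sem / abs(mean) < rel_tol:
            break
    return mean, var ** 0.5, n

# Hypothetical example: y = a + b with a ~ N(10, 1) and b ~ N(5, 0.5)
random.seed(0)
mean, sd, n = simulate_until_stable(lambda: random.gauss(10, 1) + random.gauss(5, 0.5))
```

With this rule, run-to-run differences in the reported mean shrink to roughly the chosen tolerance, which matches the intuition in the question: the variation you see between runs is sampling noise, and more iterations reduce it at a rate of about 1/sqrt(n).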