Currently it is just a script that calculates chi-squared for some files that you specify by hand. Since each folding data point costs about 3 seconds on a virgo-machine for a 1 GB file, I computed them on the virgo-cluster at Kapteyn (more points should give more precision, although maybe not; that is a separate issue).
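For reference, a minimal sketch of what one (period, chi-squared) evaluation looks like, assuming evenly sampled intensity data; the function name, binning, and flat-profile null hypothesis are illustrative, not the actual script:

```python
import numpy as np

def fold_chi2(times, counts, period, n_bins=32):
    """Fold a time series at a trial period and return the chi-squared
    of the folded profile against a flat (no-pulsation) expectation."""
    phases = (times / period) % 1.0              # phase in [0, 1) per sample
    bin_idx = (phases * n_bins).astype(int)
    profile = np.bincount(bin_idx, weights=counts, minlength=n_bins)
    n_per_bin = np.bincount(bin_idx, minlength=n_bins)
    expected = n_per_bin * counts.mean()         # what a flat profile predicts
    mask = expected > 0                          # skip empty phase bins
    return np.sum((profile[mask] - expected[mask]) ** 2 / expected[mask])
```

Scanning this over a range of trial periods gives the chi-squared-versus-period curve that gets fitted afterwards; the chi-squared peaks at the true period.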
Then, once enough trial periods were calculated, I fitted the chi-squared peaks by hand, separately for each pulsar (different fit functions were necessary, and each fit had to be checked for convergence). I used ray for the cluster computing. How to use that needs to be explained too, I guess (it involves some setup to make it go smoothly...). And how do we want this to go into the analysis code? Make users go through a Jupyter notebook to actually check the fits? Just try every function we have and select the one that fits best?
I think it needs some explanation of how to set it up. I don't think we want the user to go through a Jupyter notebook; it is best to just try every function. Computing power is not a problem here: a fit on these points should take less than a second, so running all functions is cheap. Picking the one that fits best is a good default if the user didn't specify a function, but I would also output the results for the other functions, so the user at least has the result for every function.
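Trying every function and ranking the fits could look roughly like this. The candidate shapes, initial guesses, and ranking by residual sum of squares are illustrative assumptions, not the actual fit functions used:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative candidate peak shapes; the real set may differ.
def gaussian(x, a, mu, sigma, c):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + c

def lorentzian(x, a, mu, gamma, c):
    return a * gamma**2 / ((x - mu) ** 2 + gamma**2) + c

CANDIDATES = {"gaussian": gaussian, "lorentzian": lorentzian}

def fit_all(periods, chi2):
    """Fit every candidate to the chi-squared peak; return {name: (rss, popt)}
    sorted best-first by residual sum of squares, skipping non-converging fits."""
    mu0 = periods[np.argmax(chi2)]               # peak location as start value
    amp0 = chi2.max() - chi2.min()
    width0 = (periods[-1] - periods[0]) / 10.0
    p0 = [amp0, mu0, width0, chi2.min()]
    results = {}
    for name, func in CANDIDATES.items():
        try:
            popt, _ = curve_fit(func, periods, chi2, p0=p0, maxfev=5000)
        except RuntimeError:                     # fit did not converge
            continue
        rss = float(np.sum((chi2 - func(periods, *popt)) ** 2))
        results[name] = (rss, popt)
    return dict(sorted(results.items(), key=lambda kv: kv[1][0]))
```

The first entry would be the "best" fit reported by default, while the full dict gives the user every function's result.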
The performance issue is in calculating the chi-squared itself. For each (period, DM, chi-squared) data point, the data needs to be folded with a different period. If you want, say, 10k points (reasonable if you don't have a good initial guess and want good precision), that is 30k seconds ≈ 8 hours, so using a cluster, or at least all cores, is worthwhile.
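The scan is an embarrassingly parallel loop over the (period, DM) grid: each point is independent, which is why ray (or any multi-core backend) maps onto it directly. A serial sketch, with `chi2_fn` standing in for the real fold-and-measure step (names are illustrative):

```python
import itertools

def chi2_grid(periods, dms, chi2_fn):
    """Evaluate chi2_fn(period, dm) on every grid point.
    Serial version: 10k points at ~3 s each is ~8 h, hence the cluster.
    With ray the loop distributes roughly as:
        @ray.remote
        def point(p, d): return (p, d, chi2_fn(p, d))
        results = ray.get([point.remote(p, d) for p, d in grid])
    """
    return [(p, d, chi2_fn(p, d)) for p, d in itertools.product(periods, dms)]

def best_point(points):
    """Highest chi-squared wins: that (period, DM) is the candidate detection."""
    return max(points, key=lambda t: t[2])
```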
Another thing I am not sure about is how to estimate uncertainties from this chi-squared analysis. The fit uncertainty turns out not to be a good indicator.