
Comparison of model-fitting errors with input values using simulated images #403

louisquilley opened this issue Sep 23, 2021 · 4 comments

@louisquilley

Hello,
I have worked a bit on simulated images with sourcextractor++ and while the model-fitting usually retrieved values close to the parameters used to build the images, the difference between data and model tend to be significantly higher than the sourcextractor++ error-bars.

I used 100 simulated images of 512×512 pixels at 0.396 arcsec/pixel, with a 216 s exposure time and a gain of 5352, in one band (SDSS g), with the following apparent magnitude distribution:

[Figure: simul_mag_distrib — apparent magnitude distribution of the simulated sources]

The images were created using Stuff and SkyMaker, with a Sersic profile for the bulge and an exponential profile for the disk (both models are concentric), and with the following parameters:

  • a Sersic index n between 2.0 and 6.0
  • an aspect ratio bflat between 0.5 and 1.0
  • a single bulge size bsize = 4.0 arcsec, i.e. 4.0/0.396 ≈ 10.1 pixels
  • a disk size from 1 to 3 times bsize, i.e. from about 10 to 30 pixels

I used the following configuration for the model-fitting (a runnable sketch follows the two lists):

Bulge:

  • x_B, y_B = get_pos_parameters()
  • n_sersic_B_g = 3.0, linear range between 0.5 and 7.0
  • rad_B_g = 3.0, exponential range between 0.1 and 30
  • aspect_B_g = 0.8, exponential range between 0.4 and 1.2
  • angle_B_g = FreeParameter(lambda o: o.get_angle(), Range((-2 * np.pi, 2 * np.pi), RangeType.LINEAR))

Disk:

  • x_D, y_D = get_pos_parameters()
  • exponential model, so n = 1
  • rad_D_g = FreeParameter(lambda o: o.get_radius(), Range(lambda v, o: (0.1 * v, 10 * v), RangeType.EXPONENTIAL))
  • aspect_D_g = FreeParameter(lambda o: o.get_aspect_ratio(), Range(lambda v, o: (0.5 * v, 2.0 * v), RangeType.EXPONENTIAL))
  • angle_D_g = FreeParameter(lambda o: o.get_angle(), Range(lambda v, o: (v - 0.17, v + 0.17), RangeType.LINEAR))
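For reference, here is roughly how this looks as a complete SE++ measurement config. This is a minimal sketch, not my exact file: the image/PSF file names are placeholders, the flux split via a free bulge-to-total ratio is one common way to do it (an assumption here), and the exact model argument order should be checked against the SE++ documentation.

```python
# Minimal sketch of the bulge+disk configuration described above.
# File names are placeholders; check the SE++ docs for exact signatures.
import numpy as np
from sourcextractor.config import *

top = load_fits_images(['sim_g.fits'], psfs=['psf_g.fits'])
mesgroup = MeasurementGroup(top)

# Total flux split with a free bulge-to-total ratio (assumed setup)
flux = get_flux_parameter()
bt = FreeParameter(0.5, Range((0.0, 1.0), RangeType.LINEAR))
flux_B = DependentParameter(lambda f, r: f * r, flux, bt)
flux_D = DependentParameter(lambda f, r: f * (1.0 - r), flux, bt)

# Bulge: Sersic profile
x_B, y_B = get_pos_parameters()
n_sersic_B_g = FreeParameter(3.0, Range((0.5, 7.0), RangeType.LINEAR))
rad_B_g = FreeParameter(3.0, Range((0.1, 30.0), RangeType.EXPONENTIAL))
aspect_B_g = FreeParameter(0.8, Range((0.4, 1.2), RangeType.EXPONENTIAL))
angle_B_g = FreeParameter(lambda o: o.get_angle(),
                          Range((-2 * np.pi, 2 * np.pi), RangeType.LINEAR))
add_model(mesgroup, SersicModel(x_B, y_B, flux_B, rad_B_g, aspect_B_g,
                                angle_B_g, n_sersic_B_g))

# Disk: exponential profile (Sersic n = 1)
x_D, y_D = get_pos_parameters()
rad_D_g = FreeParameter(lambda o: o.get_radius(),
                        Range(lambda v, o: (0.1 * v, 10 * v), RangeType.EXPONENTIAL))
aspect_D_g = FreeParameter(lambda o: o.get_aspect_ratio(),
                           Range(lambda v, o: (0.5 * v, 2.0 * v), RangeType.EXPONENTIAL))
angle_D_g = FreeParameter(lambda o: o.get_angle(),
                          Range(lambda v, o: (v - 0.17, v + 0.17), RangeType.LINEAR))
add_model(mesgroup, ExponentialModel(x_D, y_D, flux_D, rad_D_g, aspect_D_g, angle_D_g))

add_output_column('bulge', [x_B, y_B, flux_B, n_sersic_B_g, rad_B_g, aspect_B_g, angle_B_g])
add_output_column('disk', [x_D, y_D, flux_D, rad_D_g, aspect_D_g, angle_D_g])
```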

Below you can see the discrepancies between the "actual error" (x axis) and the error reported by sourcextractor++ (y axis) for some parameters. My concern is that most of the galaxies have error_param_srx < |param_srx - param_input|, with up to 2 orders of magnitude between the two errors (except for the positional parameters). So I wonder whether the errors are also underestimated when I run on observed images.

Note that two kinds of images were generated (in pixel or world coordinates); the resulting points are shown in blue/red, but this makes no difference to the statistics.

[Figures: simul_*_error_srx_vs_real — SE++ reported error vs. actual error for rad_B, rad_D, nsersic_B_g, BT_ratio_g, aspect_B_g, aspect_D_g, x_B, y_B, x_D, y_D, angle_B_g, angle_D_g]
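A quick way to quantify this is the pull distribution: if the reported errors were accurate, the normalized residual z = (fit − true) / σ_reported would follow a unit Gaussian. A sketch, assuming hypothetical catalog and truth-table columns:

```python
# Pull-distribution check: the robust width of z should be ~1 if the
# reported uncertainties are accurate. Column names are hypothetical.
import numpy as np

def pull_width(fit, true, err):
    """Robust scatter of the normalized residuals (1.0 = accurate errors)."""
    z = (np.asarray(fit) - np.asarray(true)) / np.asarray(err)
    # MAD scaled to match a Gaussian sigma, insensitive to outliers
    return 1.4826 * np.median(np.abs(z - np.median(z)))

# e.g. pull_width(cat['rad_B_g'], truth['bsize_pix'], cat['rad_B_g_err'])
# widths of ~10-100 would correspond to the 1-2 orders of magnitude above
```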

Is there a way to access the covariance terms (i.e. the off-diagonal elements of the covariance matrix)? For the analysis I am doing, it would be useful to see, for example, the covariance between the Sersic index and the scale radius of a Sersic profile, which I expect to be significant.
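In the meantime, with repeated fits on simulations like these, a covariance of interest can at least be estimated empirically from the residuals. A sketch, again with hypothetical column names:

```python
# Empirical n-radius covariance from the fit residuals of many simulated
# galaxies (not a substitute for the fitter's own covariance matrix).
import numpy as np

dn = cat['n_sersic_B_g'] - truth['n_bulge']    # Sersic-index residuals
dr = cat['rad_B_g'] - truth['bsize_pix']       # scale-radius residuals
cov = np.cov(np.vstack([dn, dr]))              # 2x2 empirical covariance
corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
print(f"empirical n-radius correlation: {corr:.2f}")
```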

Thank you for your feedback.

Best regards,
Louis

@marcschefer (Member)

Sorry this was left unaddressed.

We don't currently output the full covariance matrix; this is not straightforward, as the matrix is computed in terms of internal parameters. I'll classify this as a feature request.

@vdelapparent

Thanks!
What do you think of this 1-2 order-of-magnitude underestimation of the parameter errors (relative to the true errors measured on simulated data)?

@mShuntov

Hello,

I am pinging this issue because it still seems to be unresolved and remains very relevant.

Indeed, in my experience making SE++ catalogs from JWST, HST, UVISTA, and HSC images, I persistently encounter model-fitting parameter uncertainties that are significantly underestimated.

The following figure is an example of JWST/NIRCam fluxes measured by SE++ on simulated images, compared with the true input fluxes. The yellow envelope shows the 1σ scatter of the measured − true values, while the red envelope shows the median of the reported SE++ uncertainties as a function of magnitude. If the SE++ uncertainties were accurate, one would expect the two to largely agree; instead, the large difference shows that the SE++ uncertainties are severely underestimated.

[Figure: measured vs. true JWST/NIRCam fluxes, with the 1σ scatter of measured − true (yellow) and the median reported SE++ uncertainty (red) as a function of magnitude]

These uncertainties are then a key ingredient in SED-fitting codes and in accurately measuring the S/N of sources in different bands, which is crucial for dropout selection, etc.

Has this issue been addressed in the meantime? Do you have any guidelines for working around it? A clearer description of how the uncertainties are derived would also be useful for discussing results and devising ad-hoc corrections to the uncertainties; one such correction is sketched below.
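One possible ad-hoc correction (an assumption on my part, not an official SE++ recipe) is to rescale the reported errors by the ratio of the empirical scatter to the median reported error, measured per magnitude bin on simulations:

```python
# Per-magnitude-bin error rescaling factors, estimated from simulations.
# Array names are hypothetical; a factor > 1 means underestimated errors.
import numpy as np

def error_rescaling(mag, flux_meas, flux_true, flux_err, bins):
    factors = np.ones(len(bins) - 1)
    idx = np.clip(np.digitize(mag, bins) - 1, 0, len(bins) - 2)
    for i in range(len(bins) - 1):
        sel = idx == i
        if sel.sum() < 10:        # skip poorly populated bins
            continue
        scatter = np.std(flux_meas[sel] - flux_true[sel])  # empirical 1-sigma
        factors[i] = scatter / np.median(flux_err[sel])
    return factors

# corrected_err = flux_err * factors[np.clip(np.digitize(mag, bins) - 1,
#                                            0, len(bins) - 2)]
```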

Thank you for the consideration.

@rgavazzi commented Mar 1, 2024

Hi Marko,
we found in Euclid paper XXV (Euclid Morphology Challenge papers, https://ui.adsabs.harvard.edu/abs/2023A%26A...671A.101E/abstract, Figs. 22 and 23) that uncertainties can indeed be underestimated for bright objects but are more or less OK at the faint end.
Could you distinguish in your scatter plot the compact and extended sources as well as those that are isolated and those that belong to groups?
Things that could go wrong in the formal uncertainty for bright compact sources:

  • your PSF model is not accurate enough (is it sampled on a fine enough pixel scale, and does it extend far enough?)
  • the gain (are you sure it is properly set both in the sims and in the code?)
  • detector non-linearity, flat-fielding, stacking (if you're not working on individual exposures, as is generally advised)
  • noise correlation (a quick check is sketched after this list)

Things that could go wrong for large extended objects:

  • most of the above
  • an inaccurate background
  • grouping, deblending, ...
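For the noise-correlation item, a quick empirical check is possible: on a source-free background cutout, correlated noise (e.g. from stacking/resampling) inflates the variance of any sum of pixels by roughly the sum of the normalized autocorrelation over short lags. A sketch, not part of SE++ itself:

```python
# Estimate the variance inflation due to correlated noise from an empty
# background patch; ~1 for white noise, > 1 for correlated noise.
import numpy as np

def noise_correlation_factor(patch, max_lag=5):
    p = patch - patch.mean()
    f = np.fft.fft2(p)
    acf = np.fft.ifft2(f * np.conj(f)).real   # circular autocorrelation
    acf = np.fft.fftshift(acf / acf.max())    # normalize; zero lag at center
    cy, cx = acf.shape[0] // 2, acf.shape[1] // 2
    core = acf[cy - max_lag:cy + max_lag + 1, cx - max_lag:cx + max_lag + 1]
    return core.sum()   # per-pixel errors underestimate by ~sqrt(this)
```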

Do the reported uncertainties in aperture photometry match the observed scatter any better?!
