Add ability to read in YODA files #229
Hi Graeme! I'm just in the process of preparing submissions for the reference data files in Rivet that don't have a HepData entry yet. I'm currently struggling with bins whose uncertainty breakdown does not include every component. I know this is supported in principle, e.g. by just omitting the respective components in the dictionary. However, when using the library, every component seems to end up attached to every bin.
Is there a trick?
PS - just to be clear: of course I can "make it pass" by just setting the uncertainty to zero, but then all bins will have three uncertainty components, some of them zero, which is not the same as the bin not having the component in its breakdown to begin with. I think the problem is that the check for "non-zero uncertainties" only checks if there's at least one non-zero component and then adds all of them, regardless of their value. Can we make this more flexible?
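A minimal sketch of the situation, assuming the standard hepdata_lib Variable/Uncertainty interface; the variable, labels and numbers are invented for illustration, and the zero-padding shown is the workaround described above, not a recommendation:

```python
from hepdata_lib import Variable, Uncertainty

# Dependent variable with three bins (illustrative numbers).
xsec = Variable("Cross section", is_independent=False, is_binned=False, units="pb")
xsec.values = [10.2, 7.8, 3.1]

# Statistical uncertainty: defined for every bin.
stat = Uncertainty("stat", is_symmetric=True)
stat.values = [0.4, 0.3, 0.2]
xsec.add_uncertainty(stat)

# Model uncertainty: only meaningful for the first two bins.
# Uncertainty.values needs one entry per bin, so the last bin is
# padded with 0.0 -- which then appears as an explicit zero-valued
# "model" component in that bin's breakdown, rather than the
# component simply being absent.
model = Uncertainty("model", is_symmetric=True)
model.values = [0.6, 0.5, 0.0]
xsec.add_uncertainty(model)
```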
On a different note: we have a few cases with a discrete (string) axis where a subset of the edges is technically a floating-point range. The library throws an error in these cases.
Of course I agree that a discrete axis where all bins are of the form of a numeric range probably shouldn't be a string axis in the first place. One simple example I'm just looking at is one where we have two bins, only one of which looks like such a range.
On second thought, I suspect this requirement comes from the cases where we have a differential distribution that is prepended/appended by a single bin corresponding to the average, which probably shouldn't be allowed. Maybe it's best to leave the validator as is and I will work around these cases (there are only 5 of them, so it should be manageable).
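As a rough illustration only, one way to work around the validator for the handful of affected cases would be to rewrite the range-like labels so they no longer look like floating-point ranges. This is a sketch with an invented helper, not what was actually applied to the Rivet files:

```python
import re

# Matches labels that look like a floating-point range, e.g. "2.5-3.0".
RANGE_LABEL = re.compile(r"^\s*(-?\d+(?:\.\d+)?)\s*-\s*(-?\d+(?:\.\d+)?)\s*$")

def defuse_range_label(label):
    """Rewrite 'a-b' style labels as 'a to b' so a string axis
    is no longer mistaken for a numeric range."""
    match = RANGE_LABEL.match(label)
    if match:
        return f"{match.group(1)} to {match.group(2)}"
    return label

print(defuse_range_label("2.5-3.0"))  # -> "2.5 to 3.0"
print(defuse_range_label("average"))  # unchanged
```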
This error comes from the validator.
Well, there were only 5 cases where I encountered this issue, so I've just replaced the dash with a "to" or "&", depending on the context. It's sufficiently rare that this is probably good enough for now. Good news, though: I've now managed to create submission tarballs that make the validator happy for all of the Rivet reference files that don't have a HepData entry yet. There's a total of 780 tarballs. What's the best way to submit them? I hope I don't have to upload them through the browser one by one? 😉
PS - I have a guest account for the IPPP cluster if it would be helpful for me to upload them there somewhere?
Great work! You should log into hepdata.net and click "Request Coordinator Privileges" on your Dashboard, then enter "Rivet" as the Experiment/Group. You can then click the "Submit" button to initiate a submission with an INSPIRE ID and specify an Uploader and Reviewer (maybe just yourself in both roles, unless you want a check from someone else). This will create an empty record that allows you to upload; the record can then be reviewed (there's a shortcut "Approve All Tables") and finalised from your Dashboard. In terms of automation, we haven't yet encountered a need for bulk uploads like this, so unfortunately there isn't an easy way to finalise 780 records. The upload stage could at least be done from the command line (or from Python) rather than through the browser.
I've approved your Coordinator request. I realised that we already have a module for bulk imports that was written to import records from the old HepData site, which could perhaps be reused here.
Great - thank you!! 🙏
For cases where an analyser has data already in the YODA format for use with Rivet, it would be useful if hepdata_lib could read YODA files for conversion to the HEPData YAML format. It would be preferable if YODA were an optional rather than a mandatory dependency. The question of converting YODA to HEPData YAML has been a long-standing issue (HEPData/hepdata-converter#10), but it would be better handled by hepdata_lib than by hepdata-converter.
Cc: @20DM
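A rough sketch of what an optional YODA reader on top of hepdata_lib could look like. The read_yoda_scatter helper is hypothetical, and the YODA accessors used here (yoda.read, Point2D.x()/y()/yErrs()) are assumptions about the YODA Python API that may differ between versions:

```python
from hepdata_lib import Table, Variable, Uncertainty

def read_yoda_scatter(filename, path):
    """Hypothetical helper: convert one Scatter2D from a YODA file
    into a hepdata_lib Table."""
    try:
        # Optional dependency: only imported when YODA input is requested.
        import yoda
    except ImportError as err:
        raise RuntimeError("Reading YODA files requires the 'yoda' package") from err

    scatter = yoda.read(filename)[path]  # assumed: dict of path -> analysis object

    x = Variable("x", is_independent=True, is_binned=False)
    y = Variable("y", is_independent=False, is_binned=False)
    unc = Uncertainty("total", is_symmetric=False)

    for point in scatter.points():  # assumed Scatter2D point accessors
        x.values.append(point.x())
        y.values.append(point.y())
        down, up = point.yErrs()    # assumed to return (minus, plus)
        unc.values.append((-down, up))

    y.add_uncertainty(unc)

    table = Table(path.strip("/").replace("/", "_"))
    table.add_variable(x)
    table.add_variable(y)
    return table
```

Keeping the import inside the function is one simple way of making YODA optional: users who never call the YODA reader would not need the package installed.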