Mostly polished version of AIA timelags notebook #1
Conversation
Looking briefly through this notebook I'm pretty much amazed. I'd be curious to know what performance was like (I suspect that there are areas where we could improve).
The PR from @wtbarnes adds the binder config to run this on binder.pangeo.io:
That notebook is very thorough and looks lovely and of course rigorous. From the Intake point of view, it's not immediately obvious to me what you would want to do here. The input is FITS, and I have published an intake FITS driver (in intake-astro), which produces arrays (with WCS as an attribute) or tables. However, the data structure is more like xarray's, with multiple variables and reference coordinates, except that the coordinate system is more complex than a regular grid. Furthermore, the code uses sunpy classes and functionality rather than typical (x)array stuff.

In short, you certainly could write a FITS->sunpy-classes driver that contains some of the code in the notebook, and that might be very convenient for several datasets along these lines, to be able to reference them in a public catalogue. You could also write a FITS->xarray driver that may be more generally useful outside of sun stuff, but then you would also need to keep some ingest code separate. You could also have a combination of the two: a driver for FITS->(x)array and a second driver that takes those datasets as inputs and creates the more specialised data containers from them.

As an aside, we talked about the possibility of more general coordinate systems in xarray, but I don't know where that discussion got to. Of course, there is always a trade-off between the wish to have shared code and data representation frameworks, and the flexibility of more domain-specialised classes.
Wow, this is awesome! Thanks @wtbarnes! I will definitely have something cool to demo at the heliophysics workshop. I especially appreciate the detailed comments and explanations. A couple of points in terms of the notebook readability / usability. Note that these are just intended as nerdy tech-talk, not a criticism of your excellent contribution.

cc also @SimonKrughoff
@martindurant Thanks for all of the suggestions! Xarray would really be a game changer for dealing with solar data. As I think this notebook communicates, dealing with data one FITS file at a time is VERY limiting. Perhaps there should be some functionality in SunPy (or a SunPy-affiliated package) for mapping many FITS files to an xarray dataset that would still have all of the functionality that the SunPy classes provide.
The following comment is admittedly very biased, since I am an xarray core developer, so take it with a grain of salt. What I have observed over the years is that multidimensional labelled arrays + metadata are ubiquitous in the physical sciences. They have been implemented over and over by different domain-specific packages. Refactoring those packages around xarray would allow those package developers to focus more on their domain-specific problems. Plus, xarray comes with full dask integration for free. A great example of how a more domain-specific package leverages xarray under the hood can be found in the satpy docs. Refactoring is not trivial, of course, but it can lead to big payoffs.
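As a concrete illustration of the labelled-array idea being discussed, here is a minimal sketch of wrapping an image cube in an `xarray.DataArray`. The data, dimension names, and attributes below are invented for the example; real AIA data would also carry a WCS, which is exactly the hard part discussed later in this thread.

```python
import numpy as np
import xarray as xr

# Hypothetical stack of narrowband solar images with shape (time, y, x).
# In practice each frame would come from a FITS file; random data keeps
# the example self-contained.
nt, ny, nx = 5, 8, 8
cube = np.random.rand(nt, ny, nx)

da = xr.DataArray(
    cube,
    dims=("time", "y", "x"),
    coords={"time": np.arange(nt) * 12.0},  # e.g. a 12 s cadence
    attrs={"instrument": "AIA", "wavelength": "171 Angstrom"},
)

# Labelled operations replace hand-written index bookkeeping:
mean_image = da.mean(dim="time")           # time-averaged map
light_curve = da.mean(dim=("y", "x"))      # spatially averaged light curve
```

Because the dimensions are named, reductions like these read as intent rather than axis numbers, and the same code scales to dask-backed arrays unchanged.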
@rabernat I hope this is helpful to your talk. I'm happy to clarify anything that is not clear! I suspect many of the solar people in the audience will have at least heard of this timelag technique, as it has become fairly popular in the field in the last few years. Of course, the general approach is useful for other analyses as well! To address your other points:
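For readers unfamiliar with the timelag technique mentioned above: it cross-correlates the light curves observed in two channels and takes the temporal offset that maximises the correlation. A minimal numpy sketch of that idea (the function name and test signals here are illustrative, not taken from the notebook):

```python
import numpy as np

def timelag(signal_a, signal_b, cadence=1.0):
    """Offset (in time units) of peak cross-correlation between two
    light curves; positive means signal_b lags signal_a."""
    a = (signal_a - signal_a.mean()) / signal_a.std()
    b = (signal_b - signal_b.mean()) / signal_b.std()
    cc = np.correlate(b, a, mode="full")
    lags = np.arange(-len(a) + 1, len(a)) * cadence
    return lags[np.argmax(cc)]

# Two noiseless Gaussian pulses, the second delayed by 10 samples:
t = np.arange(200)
pulse = np.exp(-0.5 * ((t - 80) / 10.0) ** 2)
delayed = np.roll(pulse, 10)
print(timelag(pulse, delayed, cadence=12.0))  # → 120.0
```

The real analysis repeats this per pixel over an image cube, which is why dask-friendly data structures matter so much here.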
@mrocklin Thanks! Dask makes everything so easy! What specifically do you mean by performance?
Add binder configs so this can be run on binder.pangeo.io
Or, perhaps even more ideal, sunpy can piggy-back on existing interfaces in xarray. How feasible that is, I don't really know, but I think it's a lofty goal.

(Very quick response to this.) The main blocker last time sunpy evaluated xarray was the need for an object to calculate the two spatial coordinates. This is primarily because utilising existing libraries for the coupled map-projection spatial dimensions is practically the only way to go. It's been a few years since I looked at this, but conversations with people at SciPy suggested that this might be possible given enough work.
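To make the "coupled map-projection spatial dimensions" point concrete, here is a toy numpy sketch (all numbers invented) of why solar images don't fit xarray's regular-grid coordinate model: under even a simple rotated WCS-like transform, each world coordinate depends on *both* pixel axes, so it must be a full 2D array rather than two independent 1D coordinates.

```python
import numpy as np

ny, nx = 4, 4
j, i = np.mgrid[0:ny, 0:nx]          # pixel indices

theta = np.deg2rad(15.0)             # rotation of image w.r.t. solar axes
scale = 0.6                          # arcsec / pixel (made up)
crpix = np.array([(nx - 1) / 2, (ny - 1) / 2])

# Linear (rotation + scale) part of a WCS-like pixel -> world transform:
solar_x = scale * (np.cos(theta) * (i - crpix[0]) - np.sin(theta) * (j - crpix[1]))
solar_y = scale * (np.sin(theta) * (i - crpix[0]) + np.cos(theta) * (j - crpix[1]))

# solar_x varies along BOTH pixel axes, so no single 1D coordinate
# can represent it -- this is the coupling discussed above:
assert not np.allclose(solar_x[0], solar_x[1])
```

Real map projections are nonlinear on top of this, which is why sunpy leans on dedicated WCS machinery rather than plain coordinate vectors.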
So is this PR good to go? If so, someone from @pangeo-data/pangeo-astro should merge.

@rabernat I'm happy to merge this, but I'm not sure how this gets pulled into the deployment.

That depends on how you have your cluster set up. I don't know the answer.

This LGTM from a quick read. I don't have the time to go over it in detail.
Right now the notebook is very long and complex for a demo, mostly because of the package management and the classes you define. Two totally optional things you could do to streamline would be the following:
@rabernat OK I'll try to make those fixes ASAP. The talk is Wednesday correct? |
Correct.
p.s. THANK YOU so much for doing this!
OK, so maybe a question for @NicWayand or @dsludwig. Apologies, but I haven't had a chance to wrap my head around how all this hangs together yet.

If a CircleCI job is set up for astro.pangeo.io (per @dsludwig's guide), then you need to create a new PR to https://github.com/pangeo-data/astro.pangeo.io-deploy (staging branch likely), and a new image will automatically be created that astro.pangeo.io will load.

Yes. That all works. What I didn't know was if these examples get pulled into that deployment.

Ah ok, sorry, I don't know that.

These examples are not set up to be pulled into that deployment. I can look into doing that, or you could copy this example into the image directory here: https://github.com/pangeo-data/astro.pangeo.io-deploy/tree/staging/deployments/astro.pangeo.io/image That's where the current examples are loaded into the astro.pangeo.io image.

OK. Thanks a lot. I guess I'd also need to diff the environments as well to see if any new packages need to be added.
Where is the best place to specify the packages to be installed when astro.pangeo.io spins up? Or is it best to just create a new environment? |
You can specify new packages here:
https://github.com/pangeo-data/astro.pangeo.io-deploy/blob/staging/deployments/astro.pangeo.io/image/binder/environment.yml
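For anyone unfamiliar with binder's conventions, the file linked above is a standard conda `environment.yml`. The fragment below shows only its general shape; the package list here is illustrative, not the actual contents of the deploy repo. The `pip:` subsection is where a git dependency like the sunpy one discussed later in this thread would go.

```yaml
# Illustrative shape of a binder environment.yml -- example packages only
name: pangeo
channels:
  - conda-forge
dependencies:
  - dask
  - distributed
  - astropy
  - pip
  - pip:
      - git+https://github.com/sunpy/sunpy
```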
In the short term, I would focus on making sure the binder works, rather than astro.pangeo.io. I will have to fork this in order to get it ready for tomorrow.

Thanks @dsludwig!
@rabernat Ok, so I should just be able to add `- git+https://github.com/sunpy/sunpy` under the pip section? EDIT: looks like that is already there.
I've now pulled out the AIA cube code from the notebook into a separate file, but running

```python
openfiles = dask.bytes.open_files('gcs://pangeo-data/SDO_AIA_Images/diffrot/171/*.fits')
futures = client.map(get_header, openfiles)
```

fails because the workers can't import the new module. I don't know whether this would be a problem when launching with binder as well.
The cwd of the dask workers is not necessarily the same as your IPython kernel; it would be best to set a PYTHONPATH or otherwise "install" the file. You should be able to find out by checking the working directory on the workers.

Hmm…

You would need `util` in both, maybe, or explicitly set the working directory?
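The advice above boils down to: a dask worker can only run functions from a module it can import. A stdlib-only illustration of the fix being suggested, putting the file's directory on the import path, which is what setting `PYTHONPATH` for the workers would achieve (file and function names here are hypothetical stand-ins for the notebook's `util` module):

```python
import importlib
import os
import sys
import tempfile

# Write a throwaway "util.py" somewhere outside the current directory,
# mimicking a module the worker process would not otherwise find.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "util.py"), "w") as f:
    f.write("def get_header(blob):\n    return 'header-for-' + blob\n")

sys.path.insert(0, workdir)            # equivalent to PYTHONPATH=workdir
util = importlib.import_module("util")

print(util.get_header("aia_171.fits"))  # → header-for-aia_171.fits
```

With a real cluster, `Client.upload_file("util.py")` is dask.distributed's built-in way to ship such a module to every worker; the sketch above just shows why the import path is the crux.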
@rabernat I'm having some difficulties with pulling out the data structures into a separate file and properly resolving the paths on the workers. In the interest of time, I'm just including all of the code inline. This could either be placed at the top or the bottom of the notebook to "hide" it during the demo. Sorry this is not ideal.

No worries. This is great as is.

I went through this as is and was able to get it working by putting the class / function definitions back into the notebook. What @martindurant was missing is the fact that the dask workers can't see your home directory. There is no shared filesystem here. So the only way to get the module to the workers is to actually install it into their environment using pip / conda, and since there is no package for your module, that won't work here.

@wtbarnes were you planning to make a new commit? I screwed up and merged this before you actually updated. Any chance you could push the corrected, working notebook?

@rabernat OK. At some point, this functionality will hopefully live in an installable package. I can push a fix now. Sorry this is coming so late! I'm at another meeting right now and on mountain time!

@rabernat, of course! Must have been late in the day...
Regarding pangeo-data/astro.pangeo.io-deploy#15
@Cadair can you take a look at this and let me know if you have any suggestions?
@rabernat @martindurant I'd also be interested to see how any of the data viz could be improved with holoviews or the data import simplified with intake.