
[request] - Add support for borehole images #284

Closed
manuelblancovalentin opened this issue Jul 26, 2019 · 11 comments

Comments

@manuelblancovalentin

Even though it is possible to use lasio to import borehole images, it requires an extra post-processing step to merge all channels with the same mnemonic together, which is undesirable.

For example, a borehole image (such as the raw amplitude from a UBI tool) might have N columns (ranging from AWBK[0] to AWBK[N-1]). If I'm not mistaken, your code currently recognizes these entries as separate logs, so they have to be merged after import.

It would be awesome if you could just detect that AWBK[0] and AWBK[i] belong to the same log.

Thanks!
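[Editor's note: the grouping being requested can be sketched without lasio at all. The helper below is hypothetical (it is not part of lasio's API); it just shows how mnemonics like AWBK[0]..AWBK[179] could be grouped by base name, ordered by their bracketed index.]

```python
import re
from collections import defaultdict

def group_image_channels(mnemonics):
    """Group mnemonics like 'AWBK[0]'..'AWBK[179]' by base name.

    Returns {base: [mnemonics sorted by bracketed index]}; plain
    (non-indexed) mnemonics are left out of the result.
    """
    pattern = re.compile(r"^(.+)\[(\d+)\]$")
    groups = defaultdict(list)
    for m in mnemonics:
        match = pattern.match(m)
        if match:
            base, idx = match.group(1), int(match.group(2))
            groups[base].append((idx, m))
    # Sort each group numerically by index so column order is preserved
    return {base: [name for _, name in sorted(pairs)]
            for base, pairs in groups.items()}

curves = ["DEPT", "GR", "AWBK[0]", "AWBK[1]", "AWBK[2]"]
print(group_image_channels(curves))  # {'AWBK': ['AWBK[0]', 'AWBK[1]', 'AWBK[2]']}
```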

@manuelblancovalentin
Author

By the way, I'm not sure how difficult this would be to implement, but I'd be more than happy to contribute a fix for this issue if you'd like.

@ThomasMGeo

It's super hard to do; look into https://github.com/equinor/dlisio

@kinverarity1
Owner

kinverarity1 commented Jul 26, 2019 via email

@dagrha
Collaborator

dagrha commented Oct 16, 2019

Would making this a method of the LASFile object make any sense (rather than doing it automatically on read)? Something like las.concatenate_channels().

It would be straightforward to do the columns-to-array operation(s) on a DataFrame (las.df()), then set again with the set_data_from_df() method.

Maybe the stacked array would get dropped on write (since 2D curves aren't part of the LAS 1.2/2.0 spec... or maybe they are?); in other words, only the AWBK[1] etc. channels would get written.
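[Editor's note: the columns-to-array step described above can be sketched without lasio or pandas. Assuming each AWBK[i] curve is a 1-D sequence of samples, zipping the columns gives one row per depth; the function name is hypothetical.]

```python
# Hypothetical stand-in for the columns-to-array step: each curve is a
# 1-D list of samples; zip them so each output row holds the values of
# AWBK[0]..AWBK[n] at one depth.
def stack_columns(columns):
    return [list(row) for row in zip(*columns)]

awbk = [
    [10, 11, 12],   # AWBK[0] at depths 0..2
    [20, 21, 22],   # AWBK[1]
    [30, 31, 32],   # AWBK[2]
]
image = stack_columns(awbk)
print(image[0])  # [10, 20, 30] -> all AWBK values at the first depth
```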

@kinverarity1
Owner

A method makes sense to me, yeah. I prefer "curve" to "channel". Also, I wonder if las.concatenate() might be confusing for anyone trying to join two LAS files together (e.g. pd.concat()). Would las.stack_channels() or just las.stack() make sense? It would return a 2D ndarray.

The LAS file object itself would stay unaffected, since I'm pretty sure in the specs curves can only be 1D.

We could do it on the dataframe, but probably also just by indexing las.data with the indices of each curve, e.g. las.data[:, np.array([3, 4, ..., 103])] if the first three columns were not part of the image.
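[Editor's note: the indexing approach above can be illustrated with a plain NumPy array standing in for las.data; the shape and column indices here are made up for the example.]

```python
import numpy as np

# Fake las.data: 5 depth samples, 3 non-image curves followed by 4 image columns.
data = np.arange(5 * 7).reshape(5, 7).astype(float)

# Indices of the image curves (hypothetical; in practice you would look
# them up from the curve mnemonics, e.g. everything named "AWBK[i]").
image_cols = np.array([3, 4, 5, 6])

# Fancy indexing pulls out just those columns as one 2D ndarray,
# one row per depth sample.
image = data[:, image_cols]
print(image.shape)  # (5, 4)
```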

@dagrha
Collaborator

dagrha commented Oct 17, 2019

Those are all good points. I'd be happy to take a stab at adding this feature.

@dagrha
Collaborator

dagrha commented Oct 21, 2019

@manuelblancovalentin, I don't frequently work with image logs, so it's not obvious to me what kind of processing should be done on the multi-channel curves. What datatype do you want from this? What specifically should it look like? What I've got working now produces a 2D ndarray, as suggested by kinverarity1, with each row simply being the values of AWBK[1] through AWBK[n].

@manuelblancovalentin
Author

When I tested your code I tried importing a borehole image, but I did not get a single 2D ndarray as output. Instead I got 180 separate vectors, one for each column of the borehole image (AWBK in this specific case, ranging from AWBK[0] to AWBK[179]). I was simply asking for a single 2D output for the whole AWBK channel.

I might be mistaken, though. It's been a long time since I opened this thread.

@ThomasMGeo

Can you share a notebook and data file? @manuelblancovalentin

@dagrha
Collaborator

dagrha commented Oct 21, 2019

Oh, sorry for the confusion! I've been working on a local branch that hasn't been merged here yet. I'm adding some tests before I submit a PR.

@kinverarity1
Owner

Thanks to @dagrha this has been implemented in master via #293 🎉
