
Towards "pandas 1.0" #10000

Closed
jorisvandenbossche opened this issue Apr 27, 2015 · 41 comments
Comments

@jorisvandenbossche
Member

Here's our roadmap document: https://docs.google.com/document/d/151ct8jcZWwh7XStptjbLsda6h2b3C0IuiH_hfZnUA58/edit#

Just because it is a nice round number :-)

Or maybe we can use it to discuss how we imagine a possible pandas 1.0 ..


Some clarification (from @shoyer): This is not the place to make new feature requests -- please continue to make separate GitHub issues for those. Almost every new feature can be added without a 1.0 release. If there is a change you think would be necessary to do in pandas 1.0, feel free to reference issues where it is described in more detail.

@shoyer
Member

shoyer commented Apr 27, 2015

My wish list for pandas 1.0:

  1. Fix []/__getitem__ (Overview of [] (__getitem__) API #9595)
  2. Make the index/column distinction less painful (ENH/API: clarify groupby by to handle columns/index names #5677, Allowing the index to be referenced by name, like a column #8162)

I also have a fantasy world where the pandas Index becomes entirely optional, but that might be too big of a break even for pandas 1.0.

@jorisvandenbossche
Member Author

I want to add:

  1. Clean up the Index vs MultiIndex API (Unify index and multindex (and possibly others) API #3268)

@jnmclarty
Contributor

What if every Panel, DataFrame, and Series had a mode that changed the slicing/__getitem__ behavior? One could set the default in the options and change it on a per-object basis when necessary. It could make the old-to-new transition smoother, and leave room to get more creative where desired.

@shoyer
Member

shoyer commented Apr 27, 2015

@jnmclarty A better option would be some sort of flag that could be set per module, similar to a future statement -- changing the way in which a specific DataFrame is queried is just begging for someone to pass it off to an incompatible function. In fact, I just asked if this is possible on StackOverflow: http://stackoverflow.com/questions/29905278/using-future-style-imports-for-module-specific-features-in-python/

@djchou

djchou commented Apr 30, 2015

It would be nice if there were an option to make boxplot x-axis labels match a line plot's x-axis labels.

@shoyer
Member

shoyer commented Apr 30, 2015

@djchou is there an existing issue for that? If not, please make one :).

@sinhrks
Member

sinhrks commented May 1, 2015

Congrats on the great package:D

My wish is:

@datnamer

datnamer commented May 1, 2015

dplyr-like macros: https://github.com/dalejung/naginpy

A guy can wish...

@TomAugspurger
Contributor

I've been working on problems recently where having groupbys run in parallel would have been great (I think). Also maps / applys.

@jorisvandenbossche jorisvandenbossche changed the title Our 10,000th issue! Towards "pandas 1.0" May 29, 2015
@lexual
Contributor

lexual commented Jun 6, 2015

ref #1907

@jorisvandenbossche jorisvandenbossche added this to the 1.0 milestone Jun 8, 2015
@toddrjen
Contributor

toddrjen commented Jun 9, 2015

These may be too small, but since this is a wishlist I would like to see some improvements in the consistency of the API. Some examples:

  • More consistent usage of singular vs. plural, for example index/indexes, column/columns, and level/levels. This includes both the names and whether they accept single values, multiple values, or both.
  • Make sure the axis argument is available wherever operations are applied along an axis.
  • Go through related functions and make sure they have the same arguments in the same order. For example, for DataFrame, cumsum has a skipna argument, while diff doesn't (see the short illustration after this list).
  • If an argument does the same thing as a method, it should have the same name as the method. So for example fill_value should be fillna.
  • Try to get the use of underscores more consistent. For example, in DataFrame we have sort_index and sortlevel, and is_copy, isin, and isnull.
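
To make the cumsum/diff point concrete, here is a minimal sketch (column name and data are made up for illustration; behavior may differ slightly across pandas versions):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan, 3.0]})

# cumsum exposes a skipna keyword to control NaN handling...
print(df.cumsum(skipna=False))

# ...but the related diff method has no equivalent keyword, so the
# following would raise TypeError: unexpected keyword argument 'skipna'
# df.diff(skipna=False)
```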

@shoyer
Member

shoyer commented Jun 9, 2015

For the record, I'm strongly -1 on @toddrjen's suggestion to rename methods to make the use of underscores more consistent. Even Python 3 didn't clean things up like that.

@bwillers
Contributor

Integer columns with missing data support :)

xref #8643
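
For context, a minimal illustration of the limitation being referenced (values are made up; this reflects the NumPy-backed dtypes of the time):

```python
import numpy as np
import pandas as pd

# Integers only: stays int64.
print(pd.Series([1, 2, 3]).dtype)        # int64

# Add a single missing value and the column is upcast to float64,
# because NumPy integer dtypes have no representation for NaN.
print(pd.Series([1, np.nan, 3]).dtype)   # float64
```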

@benjello
Contributor

Allow "statistics"l function like count, sum, mean, quantile etc to handle weighted data

@shoyer
Member

shoyer commented Jun 12, 2015

@bwillers I added an xref to an existing issue where that was discussed.

@benjello is there already a github issue for adding weights? If not, please make one :).

@bwillers @benjello The good news is that I don't think either of your suggestions require pandas 1.0. Both could be done incrementally.

@benjello
Contributor

@shoyer #2501 and #10030 are both somewhat related to weights: should I open a new one?

@jorisvandenbossche
Member Author

@benjello I think we can discuss this further at #10030. That issue is now only about the mean, but it would be good to discuss there which methods we would want to add this functionality to.

@tgarc

tgarc commented Jul 14, 2015

I wasn't entirely sure where to put this but I've written up a short gist as an IPython notebook on the current state of MultiIndexing with DataFrames.loc.

https://nbviewer.jupyter.org/gist/tgarc/6c40a65f648302b6b9d7#

What is particularly relevant to this discussion is in the last section. Specifically, pandas allows

df.loc[('foo','bar'), ('one','two'), ('three','four')] (1)

to be taken to mean

df.loc[(('foo','bar'), ('one','two'), ('three','four')), :]

But this type of indexing is ambiguous when the number of indexing tuples is 2, since

df.loc[('foo','bar'), ('one','two')]

could mean incomplete indexing as in

df.loc[(('foo','bar'), ('one','two')),:]

or row/column indexing. Currently, pandas simply interprets this as row/column indexing when there are two indexing tuples.

My feeling is that the incomplete indexing as in (1) shouldn't be allowed for MultiIndex DataFrames because of the aforementioned ambiguity. I'm not sure whether changing this would break other code and hence whether it should be considered a change that should be held off until v1.0.

This comment and gist is also a summary of some of the discussion that I had with @shoyer and @jonathanrocher at the SciPy sprints.
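
For readers following along, a small self-contained sketch of the setup (index labels invented for illustration; the exact behavior of the tuple-of-tuples form has varied across pandas versions):

```python
import numpy as np
import pandas as pd

idx = pd.MultiIndex.from_product([["foo", "bar"], ["one", "two"]])
df = pd.DataFrame(np.arange(8).reshape(4, 2), index=idx, columns=["A", "B"])

# With two arguments, .loc reads them as (row key, column key):
print(df.loc[("foo", "one"), "A"])

# Selecting several full index keys unambiguously uses a list of tuples
# plus an explicit column slice, avoiding the ambiguous tuple-of-tuples form:
print(df.loc[[("foo", "one"), ("bar", "two")], :])
```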

@toddrjen
Contributor

This may or may not be a good idea, but it may at least be worth thinking about. Considering that PanelND has always been marked as "experimental" and not all features support it, and considering the work that has been going on in xray, is PanelND something that could be deprecated or dropped for 1.0?

@jorisvandenbossche
Member Author

@tgarc Nice overview notebook! (by the way, if you would like to submit parts of that to improve the docs, very welcome!)

Part of what you describe is also discussed here (collapsing index levels or not): #10552

Regarding allowing 'incomplete' indexing on frames, there is already a warning in the docs for this: http://pandas.pydata.org/pandas-docs/stable/advanced.html#using-slicers (the red warning box). So it is explicitly "allowed, although warned about because of possible ambiguities" (so not a bug in that sense).
But the question is indeed whether this is a good idea. It is somewhat convenient that it works in the non-ambiguous cases, but it may be better not to allow this. If we want to discuss this in more detail, it is probably better to open a separate issue.

@jreback
Contributor

jreback commented Jul 14, 2015

@toddrjen

@shoyer and I have discussed this. The proposal is to rename xray -> pandas-nd. We can discuss further consolidation at some later point. I think we would then deprecate PanelND (e.g. 4D and higher) and point to pandas-nd. There are a couple of API issues if we also did this for Panel.

Mainly I think we would need some conversions, e.g. to_nd as a mainline function.
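
The conversion that eventually landed is DataFrame.to_xarray() (see the commit referenced below); a minimal sketch, assuming xarray is installed and using made-up data:

```python
import pandas as pd

idx = pd.MultiIndex.from_product([["a", "b"], [0, 1]], names=["key", "step"])
df = pd.DataFrame({"value": [1.0, 2.0, 3.0, 4.0]}, index=idx)

# Converts the two index levels into two dimensions of an xarray.Dataset,
# the direction Panel/PanelND users were eventually pointed towards.
ds = df.to_xarray()
print(ds)
```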

@jreback
Contributor

jreback commented Jul 14, 2015

@tgarc this was added quite a long time ago as a convenience / magic feature. It is specifically warned about and is a limitation of the python syntax.

There are times when it can be detected and other times when it is ambiguous. I am not sure that we can do anything about it. If people don't read the docs, what can you do?

@tgarc

tgarc commented Jul 14, 2015

@jorisvandenbossche Thanks, I'll look to see if there's an appropriate place to add documentation. Thanks for pointing me to that warning - I admit I didn't know it was there.

@jreback I realize that this is an established feature and that there is a warning about it in the docs, but since we were discussing pulling back on the complexity of indexing in the future of pandas, modifying this particular feature seemed like a good opportunity to simplify existing code and restrict the number of ways users can do multi-indexing. I'll give this some more thought and potentially open it as a new issue.

EDIT: opened as issue #10574

jreback added a commit that referenced this issue Feb 10, 2016
supersedes #11950
xref #10000

Author: Jeff Reback <jeff@reback.net>

Closes #11972 from jreback/xarray and squashes the following commits:

85de0b7 [Jeff Reback] ENH: add to_xarray conversion method
cldy pushed a commit to cldy/pandas that referenced this issue Feb 11, 2016
supersedes pandas-dev#11950
xref pandas-dev#10000

Author: Jeff Reback <jeff@reback.net>

Closes pandas-dev#11972 from jreback/xarray and squashes the following commits:

85de0b7 [Jeff Reback] ENH: add to_xarray conversion method
@jorisvandenbossche
Member Author

Pinging here on github as well, as I am not sure everybody is aware of the pandas-dev mailing list. But there is currently a thread started by Wes on a pandas 1.0 / future roadmap, and you are certainly welcome to also provide feedback or share ideas.

https://mail.python.org/pipermail/pandas-dev/2016-July/000512.html

cc @chris-b1 @gfyoung @MaximilianR @kawochen @JanSchulz

@shoyer
Member

shoyer commented Jul 29, 2016

One other major breaking change to consider:

We should consider making arithmetic between a Series and a DataFrame broadcast across the columns of the dataframe, i.e., aligning series.index with df.index, rather than the current behavior aligning series.index with df.columns.

I think this would be far more useful than the current behavior, because it's much more common to want to do arithmetic between a series and all columns of a DataFrame. This would make broadcasting in pandas inconsistent with NumPy, but I think that's OK for a library that focuses on 1D/2D rather than N-dimensional data structures.
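
To illustrate the current versus proposed behavior (toy data; the current behavior is how pandas aligns today):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
s = pd.Series([10, 20, 30])

# Current behavior: s.index is aligned with df.columns, so no labels match
# and the result is all-NaN.
print(df - s)

# Broadcasting down the rows (aligning s.index with df.index), which the
# proposal would make the default, requires an explicit axis argument today:
print(df.sub(s, axis=0))
```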

@TomAugspurger
Contributor

Some questions for the next couple releases...

  • Is the idea for 1.0 to stabilize the 0.x API, or to drop a handful of larger API-breaking changes? Or are we pushing the API-breaking changes (e.g. fixing __getitem__) till 2.0?

Actually, that's really my only question. I guess the only followup would be "what falls into that bucket of large API-breaking changes that are actually feasible?"

I think now that 1.0 is upon us, we should refocus this issue from "wishlist" to "stuff that's actually going to happen for 1.0". As we go through issues prepping for 0.19, what's our policy on pushing issues' milestones? Do we push to "1.0" or "Someday"? I'd lean towards "Someday", and only use 1.0 for stuff that's a blocker.

@jorisvandenbossche
Member Author

Is the idea for 1.0 to stabilize the 0.x API, or to drop a handful of larger API-breaking changes? Or are we pushing the API-breaking changes (e.g. fixing getitem) till 2.0?

As it is now being discussed on the pandas-dev mailing list, I think the conclusion is indeed as you state it here: 1.0 as a stabilization of the current 0.x API, and 2.0 with an internal refactor / larger API changes (e.g. getitem).

we should refocus this issue from "wishlist" to "stuff that's actually going to happen for 1.0"

I think what is discussed in this issue is actually what we are now discussing as changes for 2.0, so I would rather change the milestone and open a new issue for things we want to do before 1.0.

As we go through issues prepping for 0.19, what's our policy on pushing issues' milestones? Do we push to "1.0" or "Someday"? I'd lean towards "Someday", and only use 1.0 for stuff that's a blocker.

+1. There is also 'next major release', which has often been used in the past to push issues to that are no longer included in the current release. But indeed, I would not automatically move all 'next major release' issues to '1.0', but keep the '1.0' milestone to selectively add to issues that we regard as blockers for 1.0.

@jreback
Contributor

jreback commented Aug 10, 2016

Here's why I have the tags set this way. We have approximately 1000 issues under next major release. This is really just a placeholder for things to do that otherwise are not categorized as pie-in-the-sky Someday.

The way things have been working is to pull issues off of this to a numbered release. IOW, when someone submits a pull request I mark the issue, and when the PR is actually merged it gets set with the version number. Otherwise you get a bunch of stale PRs that have version numbers and you then have to go back and manually unassign them.

Same thing with issues. Before I switched to this way (IIRC around 0.15 or 0.16), I had to manually go through each one and reassign it (well, often in bulk, but the idea was to review open issues). The 'issue' is that we have a LOT of open issues, and they are only semi-prioritized. Prioritizing is quite difficult as resources are not generally available (IOW, there aren't people to 'assign' issues to; rather it's the reverse, people 'assign' them to themselves).

So generally I assign newish issues to the current version number; as the release approaches, I push newer issues to next major release. Then I still review open issues that have a version number and push / request help.

This keeps quick bug fixes moving while giving some visibility to 'newish' issues (IOW those filed recently).

Of course, if anyone has better suggestions on how to manage issues, speak up!

@wesm
Member

wesm commented Aug 10, 2016

pandas has basically been operating in Kanban style since its beginning. Issues are marked as "on deck" (here: "next major release" -- perhaps we could give this a better name like "approved", "on deck", "fair game" -- some issues may be either pie-in-the-sky or have not yet reached consensus about the path forward) with potentially an additional level of prioritization (e.g. blocker).

It may be a good idea to start thinning down the 1.0 TODO list to things that absolutely must get done. We also need to figure out a procedure for maintaining both a 1.x maintenance branch as well as an unstable 2.0 development branch. I believe that the 2.0 branch can be made to cleanly rebase until the first cut of the internals (libpandas + wrapper classes) stabilizes (which will likely take on the order of months) and can begin to be integrated into pandas/core. At some point a more serious divergence will have to take place, at which point "forward-porting" bug fixes may become complicated.

@dragonator4

Proper units support would be a good thing for 1.0: #10349. I think @jreback's idea of using the dtype is very organic and awesome.

IMHO, it is OK to break considerable backwards compatibility with a huge release, which in this case would be a culmination of lessons learned, feature additions, etc. There was no way all the current capabilities, and the pending feature requests, bug fixes and enhancements could have been planned for at the time of creation of pandas. Since so much has been bolted on with occasional API changes, as required, there are quite a few inconsistencies in implementation. 1.0 can be a way to organically build up all features from a single trunk. If you need my opinion, I am in favor of libpandas, because I see it as a door to independent development in Python and other languages. You all are better at figuring this out though. Users can always freeze/force older versions in environments to avoid code breakage.

@h-vetinari
Contributor

Now that there is an actual plan for a 1.0 release (i.e. v0.24 -> v0.25 -> v1.0), some of this might be too ambitious, but essentially, these are all about the consistency (or lack thereof) that I'd like to see addressed in pandas 1.0:

@h-vetinari
Contributor

h-vetinari commented Jan 30, 2019

I know that most people can't wait to finally have pandas 1.0, but IMO there are some very fundamental parts of the API that should still stabilize some more:

These three points concern some of the most fundamental aspects of the API surface, and leaving them muddy means it will be much harder to fix after 1.0, because many people will be shouting "SemVer!", whether that's the policy or not.

Going over the thread, there's also some very good points brought up that have not been addressed yet.

To be sure, there's been a lot of progress (EAs will have a huge impact for good), but even though I'm raining on the parade, I think it's a necessary discussion. At the very least, there needs to be clear communication about what the policy for breaking changes & versioning is going to be post-1.0 -- for example, numpy-style rolling deprecations, similar to the current MO?

I believe that SemVer would either lead to massive ossification, or alternatively, that the current minor releases (like 0.23 -> 0.24) would always have to be major version bumps every ~6 months (which would be a valid choice too), at least for the foreseeable future.

@simonjayhawkins
Member

would always have to be major version bumps every ~6 months (which would be a valid choice too)

If that were the expectation, then would <year>.<month>.<patch> versioning with a 6-month release cycle be more appropriate than SemVer?

Towards Pandas 20.1 FTW!

@rgommers
Contributor

Is the Google Doc linked in the description currently the best publicly available Pandas roadmap? Or https://pandas-dev.github.io/pandas2/goals.html#id1? Or is it all so out of date that it's better to state that there currently isn't any roadmap?

@TomAugspurger
Contributor

https://github.com/pandas-dev/pandas/wiki/Pandas-Sprint-(July,-2018)#towards-pandas-10 is probably the most up to date, though there are already some inaccurate items.

0.24.0 was just released in January, so 0.25.0 will be a few months from now, and 1.0 sometime in the middle of the year (perhaps at SciPy?)

@rgommers
Contributor

Thanks @TomAugspurger!

@datapythonista
Member

Probably worth referencing this PR adding a roadmap here: #27478

@TomAugspurger
Contributor

@jorisvandenbossche is there anything concrete in this issue that isn't recorded elsewhere? We'll need to re-title it soon :)

Is there anything here that's a blocker / nice-to-have for 1.0?

@TomAugspurger TomAugspurger modified the milestones: 1.0, Contributions Welcome Jan 2, 2020
@jangorecki

Shouldn't this issue already be resolved/obsolete given the recent release of pandas 1.0.0?

@mroeschke
Member

Since pandas 1.0 has already been released, I think we are safe to close this issue. We may want to continue the discussion on a new "Towards pandas 2.0" issue. Closing for now.
