This repository has been archived by the owner on May 3, 2023. It is now read-only.

RLS v0.24.2 #43

Merged (7 commits) on Mar 14, 2019

Conversation

jorisvandenbossche

No description provided.

@@ -17,5 +17,7 @@ function run_tests {
# Runs tests on installed distribution from an empty directory
export PYTHONHASHSEED=$(python -c 'import random; print(random.randint(1, 4294967295))')
python -c 'import pandas; pandas.show_versions()'
python -c 'import pandas; pandas.test(extra_args=["--skip-slow", "--skip-network", "--skip-db", "-n=2"])'
# --deselect for 0.24.x

Alternatively, we could write a patch and apply it before running the tests. I'm not sure which is preferred. I'm also not sure if the patch would affect the pandas version number picked up by versioner (IIRC that's set at this point, but I may be wrong).


I would really prefer not to change this at all. These need to be fixed in the pandas 0.24.x branch, and then this can just be re-run.


some more patches need to be backported.


Would that require a new tag?


Certainly, we should do the backports like pandas-dev/pandas#25186 anyway.


you can overwrite the tag if need be as it hasn't been released

Contributor Author

I would vote to just skip the tests here, as Tom is doing. It has already been released on GitHub. Making a new tag / release is easy, but I'm not sure it is worth it just for skipping a test.


what???? why was this put on github before this??? that is crazy

@TomAugspurger

@bashtage do you have any clue what this stata failure is about? https://travis-ci.org/MacPython/pandas-wheels/jobs/505474715#L4646

__ TestCommonIOCapabilities.test_write_fspath_all[to_stata-writer_kwargs8-os] __
[gw1] linux -- Python 3.6.6 /venv/bin/python
self = <pandas.tests.io.test_common.TestCommonIOCapabilities object at 0x7fe5153d7780>
writer_name = 'to_stata', writer_kwargs = {}, module = 'os'
    @pytest.mark.parametrize('writer_name, writer_kwargs, module', [
        ('to_csv', {}, 'os'),
        ('to_excel', {'engine': 'xlwt'}, 'xlwt'),
        ('to_feather', {}, 'feather'),
        ('to_html', {}, 'os'),
        ('to_json', {}, 'os'),
        ('to_latex', {}, 'os'),
        ('to_msgpack', {}, 'os'),
        ('to_pickle', {}, 'os'),
        ('to_stata', {}, 'os'),
    ])
    def test_write_fspath_all(self, writer_name, writer_kwargs, module):
        p1 = tm.ensure_clean('string')
        p2 = tm.ensure_clean('fspath')
        df = pd.DataFrame({"A": [1, 2]})
    
        with p1 as string, p2 as fspath:
            pytest.importorskip(module)
            mypath = CustomFSPath(fspath)
            writer = getattr(df, writer_name)
    
            writer(string, **writer_kwargs)
            with open(string, 'rb') as f:
                expected = f.read()
    
            writer(mypath, **writer_kwargs)
            with open(fspath, 'rb') as f:
                result = f.read()
    
>           assert result == expected
E           AssertionError: assert b'r\x02\x01\x...2\x00\x00\x00' == b'r\x02\x01\x0...2\x00\x00\x00'
E             At index 107 diff: 56 != 55
E             Use -v to get the full diff

I haven't found any relevant commits on 0.25 yet.

@jreback

jreback commented Mar 13, 2019

the stata issue was also fixed in master

@bashtage

Timestamp? A time stamp accurate to one second is written into the file. It probably passes most of the time even with exact binary comparison, but this is a bad test for Stata dta files.
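If one did want to keep a byte-level comparison, the time stamp field could be masked out before comparing. A minimal sketch (not the fix pandas adopted), assuming the legacy format-114 layout: a 10-byte header, an 81-byte data label, then an 18-byte time stamp starting at byte offset 91, which matches the hex dump quoted further down:

```python
# Assumed legacy Stata format-114 layout: the 18-byte time_stamp field
# starts after the 10-byte header and the 81-byte data label, offset 91.
TS_START, TS_LEN = 91, 18

def dta_equal_ignoring_timestamp(a: bytes, b: bytes) -> bool:
    """Compare two .dta byte strings, zeroing out the header time stamp."""
    if len(a) != len(b):
        return False
    mask = lambda buf: buf[:TS_START] + b"\x00" * TS_LEN + buf[TS_START + TS_LEN:]
    return mask(a) == mask(b)

# Two otherwise-identical files written one minute apart:
base = bytearray(b"\x72" + b"\x00" * 200)
f1, f2 = bytearray(base), bytearray(base)
f1[TS_START:TS_START + 17] = b"13 Mar 2019 08:43"
f2[TS_START:TS_START + 17] = b"13 Mar 2019 08:44"
assert bytes(f1) != bytes(f2)
assert dta_equal_ignoring_timestamp(bytes(f1), bytes(f2))
```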

@TomAugspurger

Oh, stata encodes that in the binary file? Yeah, we shouldn't be doing that then... I'll make a pandas PR to not pass it.

@jreback do you have a link to the stata fix on master? I don't see this specific failure in any of the recent logs. It appears to be a flaky test, but hard to say.

@TomAugspurger

I think @bashtage is correct. The error message is

E           AssertionError: assert b'r\x02\x01\x...2\x00\x00\x00' == b'r\x02\x01\x0...2\x00\x00\x00'
E             At index 107 diff: 56 != 55

I made a stata file locally. Index position 107 is on the second row here. The value is 33 (so 3 in text)

72 02 01 00 02 00 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  r

  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 31 33 20 4d 61 72 20 32 30 31 39 20 30 38 3a 34 33 00 fd fd 69 6e 64 65 78 00 00
                                                                                                  1  3  •  M  a  r  •  2  0  1  9  •  0  8  :  4  3           i  n  d  e  x

@TomAugspurger

Does anyone see an issue with my --deselects? Locally,

pytest --skip-slow --skip-network --skip-db -n=2 --deselect=pandas/tests/indexes/multi/test_analytics.py::test_numpy_ufuncs --deselect=pandas/tests/io/test_common.py::TestCommonIOCapabilities::test_write_fspath_all pandas/tests/indexes/multi/test_analytics.py

skips the numpy_ufuncs tests.
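For reference, the same flags expressed as the extra_args list that the wheel-test script hands to pandas.test() (a sketch; the deselect node IDs are the two from the command above):

```python
# Assemble the pytest arguments forwarded via pandas.test(extra_args=...),
# including the two --deselect entries for the 0.24.2 build.
deselect = [
    "pandas/tests/indexes/multi/test_analytics.py::test_numpy_ufuncs",
    "pandas/tests/io/test_common.py::TestCommonIOCapabilities::test_write_fspath_all",
]
extra_args = ["--skip-slow", "--skip-network", "--skip-db", "-n=2"]
extra_args += ["--deselect=" + test for test in deselect]

# In the wheel-test script this would then be passed as:
#   pandas.test(extra_args=extra_args)
assert extra_args[-1].startswith("--deselect=pandas/tests/io")
```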

@jreback

jreback commented Mar 13, 2019

yes this is a defect in the process. These need to be fixed in master before anything is actually released.

@TomAugspurger

TomAugspurger commented Mar 13, 2019

It's fixed in master. And it's backported to 0.24.x now.

But I don't think we need to retag for a test that's failing to be skipped.

@jreback

jreback commented Mar 13, 2019

great. then you shouldn't need to skip anything.

@TomAugspurger

It's fixed on 0.24.x, after the 0.24.2 tag: pandas-dev/pandas@68278a8

@jreback

jreback commented Mar 13, 2019

So simply retag; I am not sure why that is a problem. This tag has not been released anywhere.

@TomAugspurger

The tag / release is public on Github. This and conda-forge build from the tag.

@TomAugspurger

I think --deselect is new in pytest 4.0. We pin to pytest 3.8.2 (which is too old for master anyway). Will try removing the pin.

@jorisvandenbossche

@jreback I don't really understand the problem. Can you clarify your objection to skipping the test here?
(Ideally it would not be needed, and in the future we should try to catch this beforehand, but the situation is what it is; we have skipped tests like this before, and even after a new tag we would still be skipping tests.)

@jreback

jreback commented Mar 13, 2019

@jorisvandenbossche my problem here is that things were released w/o actually testing the release. the point is to get all of the builds passing before you actually release anything.

@TomAugspurger

TomAugspurger commented Mar 13, 2019 via email

@jorisvandenbossche

jorisvandenbossche commented Mar 13, 2019

my problem here is that things were released w/o actually testing the release.

That's not exactly the same as what you are arguing above.

There are two things: one thing is how to resolve the current situation we are having, and a second thing is how to improve our workflow to prevent this from happening again.

For the first (getting 0.24.2 released), Tom and I propose to go forward with this PR and announce 0.24.2. Can you explicitly say that you are fine (or not) with this aspect?

Second, we should indeed learn from this and improve our workflow. But you can be sure that I did test properly before releasing (just not all python versions on all platforms). And as Tom says, we need to actually release before making wheels and conda packages.
So what can we learn from this: a) we need to remember to backport CI fixes as well, and b) to make sure we do this, we should probably run the cron job here on the active release branch as well (so we catch CI problems we didn't backport), as Tom already proposed above.

@matthew-brett

Is there some reason you can't build and test the wheels on the proposed release candidate, before doing the release?

@jorisvandenbossche

We don't do a release candidate for bugfix releases. But maybe you mean just the commit / branch that we plan to use for the release? (So something like BUILD_COMMIT=0.24.x (the branch name) instead of BUILD_COMMIT=v0.24.2 (the tag name)?)
I am not very familiar with multibuild, but I assume that is possible? That would indeed not be difficult to do, but I think it would still be a good idea to run the cron job for the release branch as well, just to catch these problems earlier.

@matthew-brett

Right - yes - you can name any commit as the commit to build, by hash or branch name.
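Concretely, the dry run could point the build at the branch tip and the real build at the tag. A sketch of the environment change only (BUILD_COMMIT is the variable mentioned above; the surrounding CI config is omitted):

```shell
# Dry run: build and test wheels from the tip of the release branch,
# before any tag exists.
export BUILD_COMMIT=0.24.x     # branch name

# Once the trial build is green: tag pandas, then rebuild from the
# immutable tag for the actual release.
export BUILD_COMMIT=v0.24.2    # tag name
```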

@matthew-brett

When I'm releasing on my projects, I build the wheels first, to check for problems, make the release tag, build the wheels from that tag, upload the wheels, and only then upload the source release.
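That ordering could be sketched as a script. The script and artifact names here are illustrative assumptions, not the actual pandas-wheels tooling:

```shell
# Hypothetical release order (run_wheel_builds.sh is a stand-in for
# whatever triggers the wheel matrix):
BUILD_COMMIT=0.24.x ./run_wheel_builds.sh     # 1. trial build from the branch
git tag v0.24.2 && git push origin v0.24.2    # 2. tag once the trial is green
BUILD_COMMIT=v0.24.2 ./run_wheel_builds.sh    # 3. rebuild from the tag
twine upload wheelhouse/*.whl                 # 4. upload the wheels
twine upload dist/pandas-0.24.2.tar.gz        # 5. only then the source release
```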

@TomAugspurger

TomAugspurger commented Mar 13, 2019

It's unlikely, but I worry slightly about something being backported to the 0.24.x branch in between the time this PR is created and the merge build is started. In that case, I think we would build a binary for the wrong version.

Though if that does happen, we could submit another PR that exactly pins the commit to 0.24.2...

@TomAugspurger

Bumping pytest doesn't look especially promising.

Will keep looking.

@jorisvandenbossche

Though if that does happen, we could submit another PR that exactly pins the commit to 0.24.2...

I think the idea is to always do that, no? So first do a test run; if everything is green, tag, and then make a new PR to actually build the release wheels.

So for the wheels we could do a first check run like this, but for conda-forge I think we need to upload the sdist to github first (although you could upload it somewhere else, but I am not sure we want to do that).

@TomAugspurger

Gotcha. That sounds fine.

Agreed we don't want to build conda-forge from elsewhere.

@jorisvandenbossche

For the new failures: the only change is that you removed the pytest pin?

This reverts commit 8dce6dc.
@TomAugspurger

These pytest failures are strange. It seems like something's gone awry with pytest's fixture discovery.

I'll revert the pytest update and try an alternative method for skipping the originally problematic tests.

@TomAugspurger

I'll update a cron job for the maintenance branch after we do the master wheel build, to not take up CI time.

@jreback

jreback commented Mar 13, 2019

@jorisvandenbossche its fine, go ahead and do what you need.

@TomAugspurger

Using -k to deselect the tests seemingly didn't work either: https://travis-ci.org/MacPython/pandas-wheels/jobs/505998246#L9479

I've pushed 3423895 to ignore the output of the test build for now. I'll manually inspect each of the build outputs to ensure that the only failures are the Py2 tests that should be skipped.

@jorisvandenbossche

Checked that all builds are passing and only the py2 ones have the expected failures.

@jorisvandenbossche jorisvandenbossche merged commit 98b85e2 into MacPython:master Mar 14, 2019
@jorisvandenbossche jorisvandenbossche deleted the RLS-v0.24.2 branch March 14, 2019 07:15
@jorisvandenbossche

Using -k to deselect the tests seemingly didn't work either

I think that is because you did -k -test_numpy_ufuncs, but locally it works for me with -k" -test_numpy_ufuncs" (no space after the -k; that's how we do it for the doctests as well). Anyway, we can remove them now.
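The difference between the two spellings is in how the shell tokenizes them, which can be checked with Python's shlex (a sketch; pytest's option parser then treats a separate leading-dash token as a new flag rather than as the value of -k):

```python
import shlex

# Failing form: the shell passes "-test_numpy_ufuncs" as its own token,
# which looks like another flag to pytest's argument parser.
bad = shlex.split('pytest -k -test_numpy_ufuncs')
assert bad == ['pytest', '-k', '-test_numpy_ufuncs']

# Working form: quoting with no space after -k glues the expression
# (including its leading space) onto the -k token, so it is parsed as
# the option's value.
good = shlex.split('pytest -k" -test_numpy_ufuncs"')
assert good == ['pytest', '-k -test_numpy_ufuncs']
```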
