Python: Fix warnings from pytest #6703
Changes from 2 commits
```diff
@@ -67,7 +67,18 @@ def get_random_databases(n: int) -> Set[str]:

 @pytest.fixture(name="_bucket_initialize")
 def fixture_s3_bucket(_s3) -> None:  # type: ignore
-    _s3.create_bucket(Bucket=BUCKET_NAME)
+    bucket = _s3.create_bucket(Bucket=BUCKET_NAME)
+    yield bucket
+
+    response = _s3.list_objects_v2(
+        Bucket=BUCKET_NAME,
+    )
+    while response["KeyCount"] > 0:
+        _s3.delete_objects(Bucket=BUCKET_NAME, Delete={"Objects": [{"Key": obj["Key"]} for obj in response["Contents"]]})
```
Review comment: I haven't checked the details of this test, but `delete_objects` can delete at most 1000 keys for a given bucket. We're probably not creating close to that many here, which is why this is passing, but just so the logic is robust we probably want to handle that (if not here, we can do in a separate PR as well).

Reply: I'm also no expert on the subject, but it looks like the maximum number of keys returned is also 1000. This will iterate until `KeyCount` is 0.
```diff
+        response = _s3.list_objects_v2(
+            Bucket=BUCKET_NAME,
+        )
+    _s3.delete_bucket(Bucket=BUCKET_NAME)


 @mock_glue
```
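As the reviewer notes, `delete_objects` accepts at most 1000 keys per request, and `list_objects_v2` also returns at most 1000 keys per page. A more defensive teardown could batch explicitly; the sketch below shows the idea (the `chunked` and `delete_all_objects` helpers are hypothetical, not part of this PR):

```python
def chunked(keys, size=1000):
    """Yield successive batches of at most `size` keys.

    delete_objects accepts at most 1000 keys per call, so an arbitrary
    listing must be split into valid request-sized batches.
    """
    for i in range(0, len(keys), size):
        yield keys[i : i + size]


def delete_all_objects(s3, bucket):
    """Hypothetical teardown helper: drain a bucket, then it can be deleted.

    list_objects_v2 returns at most 1000 keys per page, so keep listing
    until the bucket reports no remaining keys.
    """
    response = s3.list_objects_v2(Bucket=bucket)
    while response["KeyCount"] > 0:
        keys = [{"Key": obj["Key"]} for obj in response["Contents"]]
        for batch in chunked(keys):
            s3.delete_objects(Bucket=bucket, Delete={"Objects": batch})
        response = s3.list_objects_v2(Bucket=bucket)
```

Since the moto-backed test bucket never holds anywhere near 1000 objects, the simpler loop in the PR is sufficient in practice; the batching only matters if the fixture is reused with larger datasets.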
```diff
@@ -422,6 +422,7 @@ def test_void_transform() -> None:

 class TestType(IcebergBaseModel):
     __root__: Transform[Any, Any]
+    __test__ = False
```
Review comment: Was this fixed in the other PR?

Reply: I think that was a PR to the 0.3.0 branch, so we can create another RC from that branch without having the latest pyarrow changes released along with it.
```diff


 def test_bucket_transform_serialize() -> None:
```
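For context on this hunk: pytest collects any class whose name matches its `Test*` pattern, and warns when such a class cannot be collected (for example because it defines `__init__`, as pydantic models do). Setting the class attribute `__test__ = False` opts the class out of collection entirely, which is what silences the warning here. A minimal sketch, with a plain stand-in for the model class:

```python
class TestType:  # name matches pytest's Test* collection pattern
    """A model class, not a test case."""

    # pytest checks this attribute during collection and skips the
    # class when it is False, so no collection warning is emitted.
    __test__ = False

    def __init__(self, value):
        self.value = value
```

Without `__test__ = False`, running pytest against a module containing this class would emit a `PytestCollectionWarning` about the `__init__` constructor.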
Review comment: Does this change the return type?

Reply: No, this is a bit of an interesting fixture. It didn't return anything before; now it just blocks at the yield until the test has finished, and then cleans it up.
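The yield-based fixture pattern described in that reply can be illustrated with a plain generator; pytest drives fixture generators the same way, advancing to the `yield` before the test runs and exhausting the generator afterwards for cleanup (the names below are illustrative, not from the PR):

```python
events = []

def sample_fixture():
    # Setup phase: everything before the yield runs first.
    events.append("setup")
    yield "resource"
    # Teardown phase: execution resumes here once the test has finished.
    events.append("teardown")

# Simulate how pytest drives a fixture generator:
gen = sample_fixture()
value = next(gen)                  # run setup, receive the fixture value
events.append(f"test uses {value}")  # the test body runs here
next(gen, None)                    # resume past the yield to run teardown

print(events)  # ['setup', 'test uses resource', 'teardown']
```

This is why the `fixture_s3_bucket` change above is not a return-type change in any meaningful sense: the test still receives a single value, and the code after the `yield` (deleting the objects and the bucket) only runs once the test is done with it.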