
[data] autodoc mishandling type annotations #45129

Closed
angelinalg opened this issue May 3, 2024 · 3 comments
Labels: data (Ray Data-related issues), docs (An issue or change related to documentation), P1 (Issue that should be fixed within a few weeks)

Comments

@angelinalg
Contributor

Description

The docstring for write_parquet doesn't seem to be rendering as intended. For example, the ~ray.data.datasource.filename_provider.FilenameProvider cross-reference:
https://docs.ray.io/en/releases-2.9.0/data/api/doc/ray.data.Dataset.write_parquet.html
The source file is here: src/ray/python/ray/data/dataset.py

Broken from version 2.9.0 through 2.20.0.

Link

https://docs.ray.io/en/releases-2.9.0/data/api/doc/ray.data.Dataset.write_parquet.html

angelinalg added the P1, docs, and data labels on May 3, 2024
@peytondmurray
Contributor

peytondmurray commented May 3, 2024

This happens because we have a lambda function as a default argument. We shouldn't be using mutable default arguments anyway (this should be caught by pre-commit, if we end up implementing it; there's a flake8-bugbear rule for this).
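
To illustrate the general fix pattern (a sketch only, not the exact change that landed in Ray; write_data and its parameter are made up for the example), the usual workaround is to default to None and resolve the callable inside the body:

from typing import Any, Callable, Dict, Optional

# Problematic: a lambda has no clean repr (its repr includes a memory
# address), so the default that autodoc renders in the signature ends up
# garbled. Mutable defaults like [] or {} have the related pitfall of
# being created once and shared across calls.
def write_data_bad(args_fn: Callable[[], Dict[str, Any]] = lambda: {}) -> Dict[str, Any]:
    return args_fn()

# Conventional fix: use a None sentinel and build the default inside the body.
def write_data(args_fn: Optional[Callable[[], Dict[str, Any]]] = None) -> Dict[str, Any]:
    return args_fn() if args_fn is not None else {}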

@peytondmurray
Contributor

peytondmurray commented Jul 9, 2024

Looked more into this; I found a bunch of instances with the following grep:

grep -rzPo --include="*.py" "(?s)def\s+\w+\s*\([^)]*?=\s*lambda[^)]*?\)" ./ | tr '\0' '\n'

Key points:

  • (?s) makes grep treat . as matching newlines
  • [^)]*? means "lazily match anything that isn't the closing parenthesis"
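
As a quick sanity check of the pattern (illustrative only; the sample source below is not from the Ray codebase), the same regex can be exercised with Python's re module:

import re

# The same pattern used in the grep above; in Python's re, [^)] also spans newlines.
PATTERN = re.compile(r"(?s)def\s+\w+\s*\([^)]*?=\s*lambda[^)]*?\)")

sample = '''
def percentile(input_values, key=lambda x: x):
    ...

def plain(values, key=None):
    ...
'''

print(PATTERN.findall(sample))
# ['def percentile(input_values, key=lambda x: x)']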

Here's what grep returned. I think these will have to be addressed on a library-by-library basis:

./dashboard/client/node_modules/npm/node_modules/node-gyp/gyp/pylib/gyp/common.py:def uniquer(seq, idfun=lambda x: x)
./python/ray/data/_internal/aggregate.py:def percentile(input_values, key=lambda x: x)
./python/ray/data/_internal/datasource/json_datasink.py:def __init__(
        self,
        path: str,
        *,
        pandas_json_args_fn: Callable[[], Dict[str, Any]] = lambda: {},
        pandas_json_args: Optional[Dict[str, Any]] = None,
        file_format: str = "json",
        **file_datasink_kwargs,
    )
./python/ray/data/_internal/datasource/csv_datasink.py:def __init__(
        self,
        path: str,
        *,
        arrow_csv_args_fn: Callable[[], Dict[str, Any]] = lambda: {},
        arrow_csv_args: Optional[Dict[str, Any]] = None,
        file_format="csv",
        **file_datasink_kwargs,
    )
./python/ray/data/_internal/datasource/parquet_datasink.py:def __init__(
        self,
        path: str,
        *,
        arrow_parquet_args_fn: Callable[[], Dict[str, Any]] = lambda: {},
        arrow_parquet_args: Optional[Dict[str, Any]] = None,
        num_rows_per_file: Optional[int] = None,
        filesystem: Optional["pyarrow.fs.FileSystem"] = None,
        try_create_dir: bool = True,
        open_stream_args: Optional[Dict[str, Any]] = None,
        filename_provider: Optional[FilenameProvider] = None,
        dataset_uuid: Optional[str] = None,
    )
./python/ray/data/aggregate/_aggregate.py:def __init__(
        self,
        init: Callable[[KeyType], AggType],
        merge: Callable[[AggType, AggType], AggType],
        name: str,
        accumulate_row: Callable[[AggType, T], AggType] = None,
        accumulate_block: Callable[[AggType, Block], AggType] = None,
        finalize: Callable[[AggType], U] = lambda a: a,
    )
./python/ray/data/tests/conftest.py:def _assert_base_partitioned_ds(
        ds,
        count=6,
        num_input_files=2,
        num_rows=6,
        schema="{one: int64, two: string}",
        sorted_values=None,
        ds_take_transform_fn=lambda taken: [[s["one"], s["two"]] for s in taken],
        sorted_values_transform_fn=lambda sorted_values: sorted_values,
    )
./python/ray/data/dataset.py:def write_parquet(
        self,
        path: str,
        *,
        filesystem: Optional["pyarrow.fs.FileSystem"] = None,
        try_create_dir: bool = True,
        arrow_open_stream_args: Optional[Dict[str, Any]] = None,
        filename_provider: Optional[FilenameProvider] = None,
        arrow_parquet_args_fn: Callable[[], Dict[str, Any]] = lambda: {},
        num_rows_per_file: Optional[int] = None,
        ray_remote_args: Dict[str, Any] = None,
        concurrency: Optional[int] = None,
        **arrow_parquet_args,
    )
./python/ray/data/dataset.py:def write_json(
        self,
        path: str,
        *,
        filesystem: Optional["pyarrow.fs.FileSystem"] = None,
        try_create_dir: bool = True,
        arrow_open_stream_args: Optional[Dict[str, Any]] = None,
        filename_provider: Optional[FilenameProvider] = None,
        pandas_json_args_fn: Callable[[], Dict[str, Any]] = lambda: {},
        num_rows_per_file: Optional[int] = None,
        ray_remote_args: Dict[str, Any] = None,
        concurrency: Optional[int] = None,
        **pandas_json_args,
    )
./python/ray/data/dataset.py:def write_csv(
        self,
        path: str,
        *,
        filesystem: Optional["pyarrow.fs.FileSystem"] = None,
        try_create_dir: bool = True,
        arrow_open_stream_args: Optional[Dict[str, Any]] = None,
        filename_provider: Optional[FilenameProvider] = None,
        arrow_csv_args_fn: Callable[[], Dict[str, Any]] = lambda: {},
        num_rows_per_file: Optional[int] = None,
        ray_remote_args: Dict[str, Any] = None,
        concurrency: Optional[int] = None,
        **arrow_csv_args,
    )
./python/ray/tune/tests/test_trial_scheduler.py:def explore_fn(
            config, mutations, resample_probability, custom_explore_fn=lambda x: x
        )
./python/ray/autoscaler/_private/load_metrics.py:def freq_of_dicts(
    dicts: List[Dict], serializer=lambda d: frozenset(d.items()
./python/ray/train/_internal/storage.py:def _list_at_fs_path(
    fs: pyarrow.fs.FileSystem,
    fs_path: str,
    file_filter: Callable[[pyarrow.fs.FileInfo], bool] = lambda x: True,
)
./python/ray/train/tests/dummy_preprocessor.py:def __init__(self, transform=lambda b: b)
./python/ray/_private/runtime_env/uri_cache.py:def __init__(
        self,
        delete_fn: Callable[[str, logging.Logger], int] = lambda uri, logger: 0,
        max_total_size_bytes: int = DEFAULT_MAX_URI_CACHE_SIZE_BYTES,
        debug_mode: bool = False,
    )
./python/ray/_private/runtime_env/agent/thirdparty_files/aiohttp/client.py:def __init__(
        self,
        base_url: Optional[StrOrURL] = None,
        *,
        connector: Optional[BaseConnector] = None,
        loop: Optional[asyncio.AbstractEventLoop] = None,
        cookies: Optional[LooseCookies] = None,
        headers: Optional[LooseHeaders] = None,
        skip_auto_headers: Optional[Iterable[str]] = None,
        auth: Optional[BasicAuth] = None,
        json_serialize: JSONEncoder = json.dumps,
        request_class: Type[ClientRequest] = ClientRequest,
        response_class: Type[ClientResponse] = ClientResponse,
        ws_response_class: Type[ClientWebSocketResponse] = ClientWebSocketResponse,
        version: HttpVersion = http.HttpVersion11,
        cookie_jar: Optional[AbstractCookieJar] = None,
        connector_owner: bool = True,
        raise_for_status: Union[
            bool, Callable[[ClientResponse], Awaitable[None]]
        ] = False,
        read_timeout: Union[float, _SENTINEL] = sentinel,
        conn_timeout: Optional[float] = None,
        timeout: Union[object, ClientTimeout] = sentinel,
        auto_decompress: bool = True,
        trust_env: bool = False,
        requote_redirect_url: bool = True,
        trace_configs: Optional[List[TraceConfig]] = None,
        read_bufsize: int = 2**16,
        max_line_size: int = 8190,
        max_field_size: int = 8190,
        fallback_charset_resolver: _CharsetResolver = lambda r, b: "utf-8",
    )
./rllib/env/wrappers/group_agents_wrapper.py:def _group_items(self, items, agg_fn=lambda gvals: list(gvals.values()

matthewdeng pushed a commit that referenced this issue Jul 15, 2024
This PR fixes instances where lambda functions are used as default
arguments. Since default arguments are evaluated at function definition
time, mutable objects can have unintuitive behavior; additionally, they
prevent our documentation from rendering correctly. This PR is part of
#45129, but has been split up to minimize codeowner impact.

Signed-off-by: pdmurray <peynmurray@gmail.com>
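
A standalone illustration of the definition-time pitfall the commit message describes (not code from the Ray repo):

def append_bad(item, bucket=[]):
    # The default list is created once, when the function is defined,
    # and silently shared by every call that relies on it.
    bucket.append(item)
    return bucket

print(append_bad(1))  # [1]
print(append_bad(2))  # [1, 2] -- the same list again


def append_ok(item, bucket=None):
    # Creating the list inside the body gives each call a fresh one.
    bucket = [] if bucket is None else bucket
    bucket.append(item)
    return bucket

print(append_ok(1))  # [1]
print(append_ok(2))  # [2]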
matthewdeng pushed a commit that referenced this issue Jul 15, 2024
This PR fixes instances where lambda functions are used as default
arguments. Since default arguments are evaluated at function definition
time, mutable objects can have unintuitive behavior; additionally, they
prevent our documentation from rendering correctly. This PR is part of
#45129, but has been split up to minimize codeowner impact.

Signed-off-by: pdmurray <peynmurray@gmail.com>
can-anyscale pushed a commit that referenced this issue Jul 18, 2024
## Why are these changes needed?

This PR fixes instances where lambda functions are used as default
arguments. Since default arguments are evaluated at function definition
time, mutable objects can have unintuitive behavior; additionally, they
prevent our documentation from rendering correctly. This PR is part of
#45129, but has been split up to minimize codeowner impact.

## Related issue number

Partially addresses #45129.

## Checks

- [x] I've signed off every commit (by using the -s flag, i.e., `git commit -s`) in this PR.
- [x] I've run `scripts/format.sh` to lint the changes in this PR.
- [x] I've included any doc changes needed for
https://docs.ray.io/en/master/.
- [ ] I've added any new APIs to the API Reference. For example, if I added a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file.
- [x] I've made sure the tests are passing. Note that there might be a
few flaky tests, see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
   - [x] Unit tests
   - [ ] Release tests
   - [ ] This PR is not tested :(

Signed-off-by: pdmurray <peynmurray@gmail.com>
@peytondmurray
Contributor

All PRs merged; closing.
