Remove redundant internal methods from create_account_group #3395
+31 −47
Conversation
HariGS-DB changed the title from Todo/3170 to Remove redundant internal methods from create_account_group on Nov 29, 2024
HariGS-DB temporarily deployed to account-admin on November 29, 2024 at 22:44 with GitHub Actions (inactive)
✅ 5/5 passed, 1m33s total (running from acceptance #7601)
JCZuurmond approved these changes on Dec 2, 2024
Changes LGTM; added a note about something that was already there.
HariGS-DB temporarily deployed to account-admin on December 4, 2024 at 13:11 with GitHub Actions (inactive)
HariGS-DB temporarily deployed to account-admin on December 4, 2024 at 13:19 with GitHub Actions (inactive)
gueniai added a commit that referenced this pull request on Dec 12, 2024
* Added handling for Databricks errors during workspace listings in the table migration status refresher ([#3378](#3378)). In this release, we have introduced changes to the workflow assessment in the table migration status refresher to improve error handling and robustness when dealing with Databricks workspaces. Error handling has been added around the Databricks SDK methods used to list catalogs, schemas, and tables, and a new method, `_iter_catalogs`, handles potential `DatabricksError` exceptions. The code now logs warnings and continues processing when encountering a non-existent schema or catalog, or a `DatabricksError` raised during schema or catalog listing. Unit tests verify these changes and cover the errors that may occur when listing catalogs, schemas, and tables. This release resolves issue [#3262](#3262) and makes the table migration status refresher more reliable when handling errors from Databricks workspaces, ensuring that processing continues for other workspaces instead of failing entirely.
* Convert READ_METADATA to UC BROWSE permission for tables, views and database ([#3403](#3403)). This commit modifies the `grants.py` file in the `databricks/labs/ucx/hive_metastore` directory to convert the `READ_METADATA` permission to UC `BROWSE` for tables, views, and databases, addressing issues [#2023](#2023) and [#340](#340).
* Migrates Pipelines crawled during the assessment phase ([#2778](#2778)). In this release, new functionality has been added to the `databricks/labs/ucx` package to facilitate the migration of DLT pipelines to Unity Catalog (UC) using the `migrate_dlt_pipelines` command in `cli.py`. This command takes parameters such as `w`, `ctx`, `run_as_collection`, and `a`, and uses a new class, `PipelinesMigrator`, to perform the migration. Additional changes include the introduction of the `PipelinesMigrator`, `JobsCrawler`, and `PipelinesCrawler` classes for pipeline migration during the assessment phase, and modifications to `conftest.py` to support unit testing of the pipeline migration functionality. Unit and integration tests have been added to ensure the new features work as intended.
* Removed `try-except` around verifying the migration progress prerequisites in the `migrate-tables` cli command ([#3439](#3439)). In this update, we've improved the `migrate-tables` CLI command in the `ucx` package by removing the try-except block surrounding the verification of progress tracking prerequisites. Users now receive the specific warning raised by the `verify_progress_tracking.verify()` method when the prerequisites are not met. The unit test `test_migrate_tables_errors_out_before_assessment` has been updated accordingly: it creates a mock `VerifyProgressTracking` object whose `verify` method raises a `RuntimeWarning`, checks that the expected warning surfaces when running the `migrate-tables` command, and verifies that `jobs.run_now` is not called when verification fails. These modifications give users more precise information about prerequisite verification and ensure appropriate test coverage.
* Removed redundant internal methods from create_account_group ([#3395](#3395)). In this release, the `create_account_group` function has been refactored to eliminate redundant internal methods and simplify its implementation. The `create_account_level_groups` function in `workspaces.py` no longer accepts a workspace ID as a parameter, instead retrieving it from `accountworkspace._workspaces()`; this makes the `_get_valid_workspaces_ids` method unnecessary, and it has been removed. Similarly, the `create_account_groups` function in `cli.py` no longer accepts a `workspace_id` parameter and now retrieves the account context workspace ID from `accountworkspace._workspaces()`. The `create_account_level_groups` method of `ctx.account_workspaces` no longer requires the `ctx.workspace_ids` parameter, and the tests in `test_account.py` and `test_workspaces.py` have been updated to match the new signature. These changes simplify the codebase, reduce redundancy, and improve maintainability while preserving behavior and edge-case handling.
* Updated sqlglot requirement from <25.33,>=25.5.0 to >=25.5.0,<25.35 ([#3413](#3413)). In this release, we have updated the `sqlglot` package requirement from `>=25.5.0,<25.34` to `>=25.5.0,<25.35`. This update includes various bug fixes and new features, such as support for generated columns in PostgreSQL, proper consumption of dashed table parts in BigQuery, and an updated mapping of the `TIMESTAMP` type to `Type.TIMESTAMPTZ`. The `NEXT` keyword is now treated as a function keyword, and `NEXT VALUE FOR` is parsed correctly in TSQL and Oracle. Because mapping `TIMESTAMP` to `Type.TIMESTAMPTZ` and treating `NEXT` as a function keyword are breaking changes, it is recommended to test the updated package thoroughly for compatibility with existing code and workflows.
* Updated sqlglot requirement from <25.35,>=25.5.0 to >=25.5.0,<26.1 ([#3433](#3433)). This pull request updates the sqlglot requirement in `pyproject.toml` from the version range `>=25.5.0,<25.35` to `>=25.5.0,<26.1`, allowing installation of the latest sqlglot. sqlglot version 26.0.0 introduced breaking changes (affecting the preservation of DATETIME/DATETIME2 roundtrips), new features (transpile support for the Snowflake `bitor`/`bit_or` function and support for inline FOREIGN KEY), and bug fixes (transpiling MySQL FORMAT to DuckDB and allowing Postgres-style escape strings in the duckdb dialect). We recommend reviewing the sqlglot CHANGELOG and testing the changes in a development environment before merging.
* changing table_migration to user_isolation ([#3389](#3389)). In this release, the `table_migration` job cluster has been renamed to `user_isolation` in various classes and methods of the `workflows.py` file, including `MigrationProgress`, `MigrationRecon`, `LegacyGroupMigration`, and `PermissionsMigrationAPI`. This is a purely cosmetic change; the functionality remains the same. The updated cluster configuration sets a data security mode of `USER_ISOLATION` and specifies a maximum number of workers for autoscaling. The integration test configuration and the `test_migration_job_ext_hms` function in `test_ext_hms.py` have been updated to reflect the name change, which improves the clarity and consistency of the codebase and resolves issue [#3172](#3172).
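The `_iter_catalogs` error handling described in [#3378](#3378) can be sketched as follows. This is a minimal illustration, not the actual ucx source: the `FakeWorkspace` client and the free-standing function names are hypothetical, and the real code works against the Databricks SDK's `WorkspaceClient` and its `DatabricksError`.

```python
import logging

logger = logging.getLogger("refresher")


class DatabricksError(Exception):
    """Stand-in for databricks.sdk.errors.DatabricksError (sketch only)."""


def iter_catalogs(ws):
    """Yield catalog names, logging a warning and continuing on listing errors."""
    try:
        yield from ws.list_catalogs()
    except DatabricksError as e:
        logger.warning("Cannot list catalogs: %s", e)


def iter_schemas(ws):
    """Yield (catalog, schema) pairs, skipping catalogs whose schemas fail to list."""
    for catalog in iter_catalogs(ws):
        try:
            for schema in ws.list_schemas(catalog):
                yield catalog, schema
        except DatabricksError as e:
            logger.warning("Cannot list schemas of %s: %s", catalog, e)


class FakeWorkspace:
    """Toy client: schema listing fails for the 'bad' catalog."""

    def list_catalogs(self):
        return ["main", "bad"]

    def list_schemas(self, catalog):
        if catalog == "bad":
            raise DatabricksError("catalog gone")
        return ["default"]
```

With this shape, `list(iter_schemas(FakeWorkspace()))` still returns the schemas of `main`; the failure on `bad` is logged and skipped instead of aborting the whole refresh.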
gueniai added a commit that referenced this pull request on Dec 12, 2024
* Added handling for Databricks errors during workspace listings in the table migration status refresher ([#3378](#3378)). In this release, we have implemented changes to enhance error handling and improve the stability of the table migration status refresher, resolving issue [#3262](#3262). The `assessment` workflow has been updated, and new unit tests ensure proper error handling. The changes import `DatabricksError` from the `databricks.sdk.errors` module and add a new `_iter_catalogs` method that lists catalogs with error handling for `DatabricksError`. The `_iter_schemas` method now calls `self._iter_catalogs()` instead of `self._ws.catalogs.list()`, also with `DatabricksError` handling. New unit tests check the logging of the `TableMigration` class when listing tables in a Databricks workspace, focusing on errors during catalog, schema, and table listings. These changes improve robustness and allow the refresher to handle errors gracefully.
* Convert READ_METADATA to UC BROWSE permission for tables, views and database ([#3403](#3403)). The `uc_grant_sql` method in `grants.py` has been modified to convert `READ_METADATA` permissions to `BROWSE` permissions for tables, views, and databases by adding new entries to the dictionary that maps permission types to their corresponding UC actions. The `grant_loader` function in the `hive_metastore` module has also been modified to change the action type of a grant from `READ_METADATA` to `EXECUTE` for a specific case. Additionally, `test_grants.py` has been updated with a new test case that verifies the conversion of `READ_METADATA` to `BROWSE` for a grant on a database and covers the conversion for a new `udf="function"` parameter. These changes resolve issue [#2023](#2023) and have been verified through manual testing and the existing unit tests; the scope of the change is limited, so no new integration tests were added.
* Migrates Pipelines crawled during the assessment phase ([#2778](#2778)). A new utility class, `PipelinesMigrator`, has been introduced to facilitate the migration of Delta Live Tables (DLT) pipelines. It is used in a new workflow that clones the DLT pipelines crawled during the assessment phase, with their configurations, to new Unity Catalog (UC) pipelines. The migration can skip specific pipelines by listing their pipeline IDs. Three test scenarios, each with different pipeline specifications, ensure the migration works under various conditions; the class and the migration process are covered by manual testing, unit tests, and integration tests, with no reliance on a staging environment. The migration takes into account the `WorkspaceClient`, `WorkspaceContext`, `AccountClient`, and a flag for running the command as a collection, and `PipelinesMigrator` uses a `PipelinesCrawler` and a `JobsCrawler` to perform the migration. The commit also introduces a new `migrate_dlt_pipelines` command in the ucx CLI. The migration process is tested using a mock installation with two jobs, `test` and `assessment`, with job IDs `123` and `456` respectively; the installation state is recorded in a `state.json` file, and a `pipeline_mapping.csv` configuration file maps each source pipeline ID to the target catalog, schema, pipeline, and workspace names.
* Removed `try-except` around verifying the migration progress prerequisites in the `migrate-tables` cli command ([#3439](#3439)). In the latest release, the `ucx` package's `migrate-tables` CLI command no longer wraps the progress tracking prerequisite check in a try-except block; the `RuntimeWarning` raised by the `verify` method is now propagated, providing a more specific and helpful error message. If the prerequisites are not met, the migration does not proceed. The tests for `migrate_tables` have been updated accordingly, including a new test case, `test_migrate_tables_errors_out_before_assessment`, that checks that the migration does not proceed when verification fails. This change affects the existing `databricks labs ucx migrate-tables` command and improves the precision and reliability of the migration process.
* Removed redundant internal methods from create_account_group ([#3395](#3395)). In this change, the `create_account_group` function's redundant internal methods have been removed, and its signature has been modified to retrieve the workspace IDs from `accountworkspace._workspaces()` instead of taking them as a parameter, resolving issue [#3170](#3170). The `AccountWorkspaces` class now accepts a list of workspace IDs upon instantiation, which improves readability and eliminates redundancy. Unit tests confirm that the function creates a group if it doesn't exist, throws an exception if the group already exists, filters system groups, and handles the case where a group already has the required number of members in a workspace. These changes simplify the codebase and improve maintainability.
* Updated sqlglot requirement from <25.33,>=25.5.0 to >=25.5.0,<25.34 ([#3407](#3407)). In this release, the sqlglot upper bound has been raised from 25.33 to 25.34, allowing the 25.33.x releases with their bug fixes and new features. v25.33.0 contains two breaking changes: the TIMESTAMP data type now maps to Type.TIMESTAMPTZ, and the NEXT keyword is now treated as a function keyword. It also introduces support for generated columns in PostgreSQL and the ability to preserve tables in the replace_table method, along with bug fixes for BigQuery, Presto, and Spark. v25.32.1 contained two BigQuery bug fixes and one Presto bug fix. v25.32.0 had three breaking changes: support for ATTACH/DETACH statements, tokenization of hints as comments, and a fix to datetime coercion in the canonicalize rule; it also added support for TO_TIMESTAMP\* variants in Snowflake, improved error messages in the Redshift transpiler, and fixed issues in SQL Server, MySQL, and PostgreSQL.
* Updated sqlglot requirement from <25.33,>=25.5.0 to >=25.5.0,<25.35 ([#3413](#3413)). In this release, the `sqlglot` dependency has been updated from a range that excludes `25.34` and above to one that excludes `25.35` and above. The newly allowed version includes one breaking change related to the alias expansion of USING STRUCT fields, two new features (an optimization for alias expansion of USING STRUCT fields and support for generated columns in PostgreSQL), and two bug fixes (proper consumption of dashed table parts and removal of parentheses from CURRENT_USER in Presto). It also makes TIMESTAMP map to Type.TIMESTAMPTZ, parses DEFAULT in a VALUES clause into a Var, and changes the BigQuery and Snowflake dialects to improve transpilation and JSONPathTokenizer leniency. The commit message references issue [#3413](https://github.com/databrickslabs/ucx/issues/3413) and links to the `sqlglot` changelog.
* Updated sqlglot requirement from <25.35,>=25.5.0 to >=25.5.0,<26.1 ([#3433](#3433)). In this release, the required version range of the `sqlglot` library now includes version 25.5.0 and excludes version 26.1. This matters because `sqlglot` v26.0.0 introduced breaking changes that have not yet been verified against this project. The commit message includes the changelog for `sqlglot` v26.0.0, highlighting the breaking changes, new features, and bug fixes, plus the list of commits merged into the `sqlglot` repository. The change keeps dependencies up to date, but thorough testing is advised to ensure the updated version does not introduce new issues, and future `sqlglot` updates should be tracked.
* changing table_migration to user_isolation ([#3389](#3389)). In this release, the job cluster used by the Hive Metastore to Unity Catalog migration workflows has been renamed from `table_migration` to `user_isolation`. The rename affects all references to the job cluster in methods including `convert_managed_table`, `migrate_external_tables_sync`, `migrate_dbfs_root_delta_tables`, `migrate_dbfs_root_non_delta_tables`, `migrate_views`, `migrate_hive_serde_in_place`, and `update_migration_status`, as well as the `job_task` decorators that specify the job cluster. This change resolves issue [#3172](#3172); it purely affects naming and does not modify the functionality of the code.
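The `try-except` removal in [#3439](#3439) can be illustrated with a small sketch. Everything here is hypothetical scaffolding mirroring the changelog's description (the real `VerifyProgressTracking` lives in ucx and raises when prerequisites are unmet); the point is that the command-level function no longer swallows the warning:

```python
class VerifyProgressTracking:
    """Stand-in for ucx's prerequisite verifier (sketch, not actual source)."""

    def __init__(self, prerequisites_met: bool):
        self._ok = prerequisites_met

    def verify(self) -> None:
        # Raises with a specific message when prerequisites are unmet.
        if not self._ok:
            raise RuntimeWarning("Assessment workflow did not complete successfully yet")


def migrate_tables(verifier: VerifyProgressTracking) -> str:
    # After #3439 there is no try-except here: a failed verification
    # propagates to the caller with its specific message instead of
    # being caught and re-reported generically.
    verifier.verify()
    return "migration started"
```

When verification fails, the `RuntimeWarning` reaches the caller directly, which is exactly what `test_migrate_tables_errors_out_before_assessment` asserts.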
Merged

github-merge-queue bot pushed a commit that referenced this pull request on Dec 12, 2024
Changes
Changed the signature of `create_account_groups`: removed the `workspace_id` parameter and instead retrieve the workspace IDs from `accountworkspace._workspaces()`, which carries them from the account context.
Resolves #3170
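A minimal sketch of the signature change, using hypothetical names (the real `AccountWorkspaces` is in ucx's `workspaces.py` and talks to the Databricks account client rather than storing plain IDs):

```python
class AccountWorkspaces:
    """Sketch of the refactored shape after #3395 (hypothetical names)."""

    def __init__(self, workspace_ids: list[int]):
        # Workspace IDs are now held by the instance, so callers no longer
        # pass them to create_account_level_groups.
        self._workspace_ids = workspace_ids

    def _workspaces(self):
        return list(self._workspace_ids)

    # Before: def create_account_level_groups(self, workspace_ids): ...
    def create_account_level_groups(self) -> list[str]:
        # Placeholder behavior: the real code derives account-level groups
        # from the workspace-level groups of each workspace.
        return [f"account-group-for-{wid}" for wid in self._workspaces()]
```

The caller-facing simplification is that `ctx.account_workspaces.create_account_level_groups()` now takes no workspace-ID argument, which is what made `_get_valid_workspaces_ids` redundant.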
Functionality
databricks labs ucx ...
Tests